
Welcome to the dora-the-explorer wiki!


Setup

Use Release

To run the explorer, you need to create a configuration file first. Download a copy of the default config and adjust it to your needs.

Afterwards, download the latest binary for your distribution from the releases page and start the explorer with the following command:

./dora-explorer -config=explorer-config.yaml

You should now be able to access the explorer via http://localhost:8080
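
For reference, a minimal sketch of the full flow (the release asset name below is an assumption; check the releases page for the exact file name for your platform):

wget -O explorer-config.yaml https://raw.githubusercontent.com/ethpandaops/dora/master/config/default.config.yml
nano explorer-config.yaml
# download & unpack the release binary (asset name is an assumption, adjust to your platform)
wget https://github.com/ethpandaops/dora/releases/latest/download/dora_linux_amd64.tar.gz
tar -xzf dora_linux_amd64.tar.gz
./dora-explorer -config=explorer-config.yaml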

Use docker image

I'm maintaining a docker image for this explorer: pk910/dora-the-explorer

There are various images available:

  • latest: The latest stable release
  • unstable: The latest master branch version (automatically built, unstable)
  • v1.x.x: Version-specific images for each release.

Follow these steps to run the docker image:

  1. Create a copy of the default config and adjust it to your needs.
    You'll especially need to configure the beaconapi, executionapi & database sections.
    wget -O explorer-config.yaml https://raw.githubusercontent.com/ethpandaops/dora/master/config/default.config.yml
    nano explorer-config.yaml
    
  2. Start the container (a docker compose alternative is sketched at the end of this section)
    docker run -d --restart unless-stopped --name=dora -v $(pwd):/config -p 8080:8080 -it pk910/dora-the-explorer:latest -config=/config/explorer-config.yaml
    

You should now be able to access the explorer via http://localhost:8080

Read logs:

docker logs dora --follow

Stop & remove the container:

docker stop dora

docker rm -f dora
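
If you prefer docker compose over plain docker run, a minimal sketch of an equivalent setup (file layout and service name are just examples) could look like this:

# docker-compose.yml (example)
services:
  dora:
    image: pk910/dora-the-explorer:latest
    restart: unless-stopped
    command: -config=/config/explorer-config.yaml
    volumes:
      - ./:/config # directory containing explorer-config.yaml
    ports:
      - "8080:8080"

Start it with docker compose up -d from the directory containing both files.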

Build from source

To build the explorer from source, you need a machine with Go 1.20 and a C compiler installed.

Run the following commands to clone the repository and compile the binaries:

git clone https://github.com/ethpandaops/dora.git
cd dora
make

This should build the binaries to ./bin.
Afterwards, you can run the binary as described in the "Use Release" section.
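
A quick sanity check of the toolchain and a test run could look like this (assuming the build places the binary at ./bin/dora-explorer, which is an assumption):

go version # should report go1.20 or newer
./bin/dora-explorer -config=explorer-config.yaml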

Dependencies

The explorer has no mandatory external dependencies. It can even run entirely in memory.
However, for best performance I recommend using a PostgreSQL database.
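
If you go with PostgreSQL, a minimal sketch for creating a database and user (names and password are placeholders) could look like this:

sudo -u postgres psql -c "CREATE USER dora WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE dora OWNER dora;"

The credentials then go into the pgsql part of the database section described below.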

Configuration

There is a default configuration that should be used as a base for your own explorer configuration.
You can find the default config here: /config/default.config.yml

The configuration sections are described in more detail below:

server

The server section configures the host and port the internal webserver listens on.

I strongly recommend running the explorer behind a proper webserver (nginx, apache, ...) that handles SSL and the like.

server:
  host: "localhost" # Address to listen on
  port: "8080" # Port to listen on

frontend

The frontend section contains all frontend related settings.

The siteName & siteSubtitle settings are used to set the site title and subtitle. The name/subtitle is shown in the header of all pages and is also part of the html title tag.

The ethExplorerLink setting is used to configure links to an EL block explorer. When configured, all EL related hashes / block numbers are linked to this EL explorer following the standardized linking schema.

The validatorNamesYaml / validatorNamesInventory settings can be used to assign names to validator indexes.
You can either supply validator names via a yaml file following this format (validatorNamesYaml),
or configure the explorer to fetch validator names from an ethPandaOps inventory api (validatorNamesInventory, see this for more details on the api). A rough sketch of a names file follows the config example below.

frontend:
  enabled: true # Enable or disable the web frontend
  debug: false # for development only, loads templates and static files from FS instead of using the embedded files
  pprof: false # enable pprof endpoint at /debug/pprof/
  minify: false # minify html templates

  # Name of the site, displayed in the title tag
  siteLogo: "" # custom logo image url
  siteName: "Dora the Explorer"
  siteSubtitle: "Testnet XY"
  siteDescription: "lightweight beaconchain explorer for testnet xy"
  
  # link to EL Explorer
  ethExplorerLink: "https://explorer.ephemery.dev/"

  # file or inventory url to load validator names from
  validatorNamesYaml: "validator_names.yaml"
  #validatorNamesInventory: "https://config.4844-devnet-7.ethpandaops.io/api/v1/nodes/validator-ranges"

  # page timeouts
  pageCallTimeout: 55s
  httpReadTimeout: 15s
  httpWriteTimeout: 15s
  httpIdleTimeout: 20s
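
As a rough illustration of a validator names file (the linked format reference is authoritative; treat the exact layout below as an assumption), the file maps validator indexes or index ranges to display names:

# validator_names.yaml (example, names are placeholders)
"0-99": "lighthouse-geth-1"
"100-199": "teku-besu-1"
"200": "my-solo-validator"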

beaconapi

The beaconapi section configures the underlying beacon nodes to fetch data from. The explorer allows configuring multiple endpoints, which are used in parallel to grab the latest data from all nodes. Dora needs at least one reachable beacon node to start up, but the more nodes are connected, the better the fork & orphaned block tracking works.

The explorer uses the default API endpoints only, so it should be compatible with any CL client type.

During synchronization, dora needs to fetch all blocks & the dependent beacon state of each epoch once, so at least one connected beacon node should run with historic block & state reconstruction enabled. For maximum performance, you may also lower the restore point interval to 32 slots (the default is much higher), however this comes with a significant increase in storage needs and may only be suitable for small testnets.
The lighthouse flag to set the restore point interval is --slots-per-restore-point 32.

The endpoint url supports basic http authentication by encoding username & password via http://<user>:<password>@127.0.0.1:5052/.

Besides the endpoints, there are also some caching related settings in this section. This cache is mostly used to cache page models (not RPC calls), so these settings might be moved to the frontend section in the future :)
There are two caching types supported:

  • Local caching: cache directly in the explorer process, limited by the localCacheSize setting (in MByte)
  • Redis caching: remote redis cache, configured via redisCacheAddr.
    Might be left out entirely if not needed.
    To prevent collisions on a shared redis cache, use the redisCachePrefix setting to configure a unique key prefix.
beaconapi:
  # CL Client RPC
  endpoints:
    - name: "local"
      url: "http://127.0.0.1:5052"
      priority: 1  # higher priority clients are preferred for calls
      archive: true # node has historic states & blocks

  # local cache for page models
  localCacheSize: 100 # 100MB

  # remote cache for page models
  redisCacheAddr: ""
  redisCachePrefix: ""
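
To illustrate the multi-node, authentication and redis options described above, a sketch with two endpoints (host names, credentials and the redis address are placeholders) could look like this:

beaconapi:
  endpoints:
    - name: "local-archive"
      url: "http://127.0.0.1:5052"
      priority: 2
      archive: true
    - name: "remote"
      url: "http://user:password@beacon-2.example.com:5052"
      priority: 1
      archive: false
  localCacheSize: 100
  redisCacheAddr: "127.0.0.1:6379"
  redisCachePrefix: "dora-testnet-xy"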

executionapi

The executionapi section configures the underlying execution nodes to fetch data from. The explorer allows configuring multiple endpoints, which are used to monitor the chain head. Dora does not require an execution client to work, but several features (deposit & beacon operation transaction tracking) will not work without one.

The explorer uses the common JSONRPC endpoints only, so it should be compatible with any EL client type.

The endpoint url supports basic http authentication by encoding username & password via http://<user>:<password>@127.0.0.1:8545/.

executionapi:
  # execution node rpc endpoints
  endpoints:
    - name: "local"
      url: "http://127.0.0.1:8545"
      priority: 1  # higher priority clients are preferred for calls
      archive: true # node has historic states & blocks

  # batch size for log event calls
  depositLogBatchSize: 1000

indexer

The indexer section contains all settings related to indexing.

The explorer includes an indexing algorithm that keeps track of blocks from the latest epochs in memory.
This indexer data is used for live aggregations and caching too, so it is a required part of the explorer.

On startup, the explorer fills up the local cache with all blocks from unfinalized epochs. The explorer needs to keep track of all non-finalized blocks to work properly. To avoid high memory consumption, the explorer moves block bodies from old unfinalized epochs into the db ("pruning"). The inMemoryEpochs setting controls how many unfinalized block bodies are kept in memory before being pruned and moved to the DB.

On finalization, the explorer does the final block & epoch processing. This includes doing vote & other aggregations and writing blocks & epoch stats to the db.

Besides the indexer (which takes care of the chain head), there is also a synchronizer which takes care of indexing already passed epochs.
Synchronizing old epochs can be disabled via the disableSynchronizer setting.

During synchronization, there is a cooldown after each epoch to avoid overwhelming the beacon node api. The cooldown can be configured via syncEpochCooldown. It can be disabled, but this will cause high load on the CL node during synchronization ;)

indexer:

  # max number of epochs to keep in memory
  inMemoryEpochs: 3
  
  # disable synchronizer (don't index historic epochs)
  disableSynchronizer: false

  # reset synchronization state to this epoch on startup - only use to resync database, comment out afterwards
  #resyncFromEpoch: 0

  # force re-synchronization of epochs that are already present in DB - only use to fix missing data after schema upgrades
  #resyncForceUpdate: true

  # number of seconds to wait between each epoch (don't overload CL client)
  syncEpochCooldown: 2
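
For example, to rebuild missing data after a schema upgrade, you could temporarily enable the resync options like this (remember to comment them out again once the resync has finished):

indexer:
  inMemoryEpochs: 3
  disableSynchronizer: false
  resyncFromEpoch: 0      # restart synchronization from epoch 0
  resyncForceUpdate: true # overwrite epochs already present in the DB
  syncEpochCooldown: 2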

database

The database section contains all settings related to the explorer database.

There are different database engines supported:

  • sqlite Local SQLite3 database
  • pgsql PgSQL database

The SQLite engine stores all data in a local file, which is specified via the file setting. You may run the explorer completely in memory by setting the filename to :memory:. However, this comes with the negative effect of having to resynchronize the chain after each restart.

The PgSQL engine allows defining a separate set of writer credentials to allow more complex replication setups.
For small testnets / low access activity, it's totally fine to use a local database and omit the pgsqlWriter section completely.
When specifying both sections, the explorer uses the normal credentials for all SELECT statements, and executes INSERT/UPDATE/DELETE queries via the writer connection.

You don't need to worry about schema initialization or upgrades; the explorer takes care of that itself.

database:
  engine: "sqlite" # sqlite / pgsql

  # sqlite settings
  sqlite:
    file: "./explorer-db.sqlite"

  # pgsql settings
  pgsql:
    host: "127.0.0.1"
    port: 5432
    user: ""
    password: ""
    name: ""
  pgsqlWriter: # optional separate writer connection (used for replication setups)
    host: "127.0.0.1"
    port: 5432
    user: ""
    password: ""
    name: ""
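
As an example of the reader/writer split described above, a PostgreSQL setup with a read replica (hosts and credentials are placeholders) could look like this:

database:
  engine: "pgsql"
  pgsql: # read endpoint (e.g. a replica), used for SELECT statements
    host: "db-replica.example.com"
    port: 5432
    user: "dora"
    password: "changeme"
    name: "dora"
  pgsqlWriter: # primary, used for INSERT / UPDATE / DELETE
    host: "db-primary.example.com"
    port: 5432
    user: "dora"
    password: "changeme"
    name: "dora"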