Subsquid for ĞDev

Following this subject: Subsquid (formerly Hydra): a Substrate blockchain indexer exposing GraphQL APIs

I think the Subsquid project has evolved a lot since then, and we might reconsider using it as a framework to develop another indexer.

# install squid command line interface
pnpm i -g @subsquid/cli@latest
# check installed version
sqd --version
# create new repo from template
sqd init duniter-gdev --template substrate

Then you can modify the template and follow the documentation: Simple Substrate indexer | Subsquid

I’ll publish a repo as soon as I get a minimal working example, but feel free to explore in parallel; we might discover complementary things.

We NEED a complete indexer ASAP to be able to inspect the blockchain easily and build proofs of concept. The duniter-indexer strategy is fine in the long term, but not relevant in the short term in my opinion.


[edit] at this point I am indexing ĞDev data locally from the remote gdev.p2p.legal endpoint.
In the Altair extension (https://altairgraphql.dev/), I can run this query by connecting to http://localhost:8888/graphql

query MyQuery {
  metadata {
      specName
  }
}

which returns

{
  "data": {
    "metadata": [
      {
        "specName": "gdev"
      }
    ]
  }
}
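Beyond Altair, the same gateway can be queried from a script. A minimal sketch in Python, assuming the local gateway at http://localhost:8888/graphql from the docker-compose setup (the helper name is mine):

```python
import json
import urllib.request

def graphql_request(url: str, query: str) -> urllib.request.Request:
    """Build a JSON POST request carrying a GraphQL query."""
    return urllib.request.Request(
        url,
        data=json.dumps({"query": query}).encode(),
        headers={"Content-Type": "application/json"},
    )

# Same metadata query as above, against the local gateway:
req = graphql_request("http://localhost:8888/graphql",
                      "query { metadata { specName } }")
# urllib.request.urlopen(req) would then return the JSON response shown above.
```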

I made some progress on this subject: I am now able to submit a request like

{
  batch(
    limit: 10
    includeAllBlocks: false
    events: [{ name: "AuthorityMembers.MemberGoOnline" }]
  ) {
    header {
      height
    }
    events
  }
}

and get an answer:

{
  "data": {
    "batch": [
      {
        "header": {
          "height": 7021
        },
        "events": [
          {
            "args": 2457,
            "callId": "0000007021-000001-72b7b",
            "extrinsicId": "0000007021-000001-72b7b",
            "id": "0000007021-000002-72b7b",
            "indexInBlock": 2,
            "name": "AuthorityMembers.MemberGoOnline",
            "phase": "ApplyExtrinsic",
            "pos": 4
          }
        ]
      },
      {
        "header": {
          "height": 38961
        },
        "events": [
          {
            "args": 7139,
            "callId": "0000038961-000001-99384",
            "extrinsicId": "0000038961-000001-99384",
            "id": "0000038961-000002-99384",
            "indexInBlock": 2,
            "name": "AuthorityMembers.MemberGoOnline",
            "phase": "ApplyExtrinsic",
            "pos": 4
          }
        ]
      },
      {
        "header": {
          "height": 290054
        },
        "events": [
          {
            "args": 7139,
            "callId": "0000290054-000001-723f3",
            "extrinsicId": "0000290054-000001-723f3",
            "id": "0000290054-000002-723f3",
            "indexInBlock": 2,
            "name": "AuthorityMembers.MemberGoOnline",
            "phase": "ApplyExtrinsic",
            "pos": 4
          }
        ]
      }
    ]
  }
}
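For scripting, the interesting parts of such a batch response can be pulled out with a few lines of Python (the function name is mine; the sample is a trimmed copy of the response above):

```python
def event_heights(batch_response: dict) -> list[int]:
    """Block heights of all entries in a `batch` GraphQL response."""
    return [b["header"]["height"] for b in batch_response["data"]["batch"]]

# Trimmed version of the response above:
sample = {"data": {"batch": [
    {"header": {"height": 7021}},
    {"header": {"height": 38961}},
    {"header": {"height": 290054}},
]}}
print(event_heights(sample))  # → [7021, 38961, 290054]
```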

It is still indexing locally. Once I’m happy with the result, I will share how to do it.
The tricky part was the capital letter at the beginning of the event’s pallet name (AuthorityMembers.MemberGoOnline), which does not appear that way in the polkadot.js app.

Right now, it’s still a simple Docker Compose setup running a Subsquid instance connected to the gdev.p2p.legal endpoint.

docker-compose.yml
services:
  db:
    image: postgres:15  # CockroachDB cluster might be a better fit for production deployment
    restart: always
    volumes:
      - /var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: squid-archive

  ingest:
    depends_on:
      - db
    restart: on-failure
    image: subsquid/substrate-ingest:firesquid
    command: [
       "-e", "wss://gdev.p2p.legal/ws",
       "-c", "5", # allow up to 20 pending requests for the above endpoint (default is 5)
       #  "--start-block", "1000000", # uncomment to specify a non-zero start block
       "--out", "postgres://postgres:postgres@db:5432/squid-archive"
    ]

  gateway:
    depends_on:
      - db
    image: subsquid/substrate-gateway:firesquid
    environment:
      RUST_LOG: "substrate_gateway=info,actix_server=info"
    command: [
       "--database-url", "postgres://postgres:postgres@db:5432/squid-archive",
       "--database-max-connections", "3", # max number of concurrent database connections
       # "--evm-support" # uncomment for chains with Frontier EVM pallet
                         # (e.g. Moonbeam/Moonriver or Astar/Shiden)
    ]
    ports:
      - "8888:8000"

  # Explorer service is optional.
  # It provides rich GraphQL API for querying archived data.
  # Many developers find it very useful for exploration and debugging.
  explorer:
    image: subsquid/substrate-explorer:firesquid
    environment:
      DB_TYPE: postgres # set to `cockroach` for Cockroach DB
      DB_HOST: db
      DB_PORT: "5432"
      DB_NAME: "squid-archive"
      DB_USER: "postgres"
      DB_PASS: "postgres"
    ports:
      - "4444:3000"

I want to see how they suggest implementing custom indexing and how it compares to the current duniter-indexer.


I set up a public generic subsquid indexer for ĞDev which can be accessed here: https://subsquid.gdev.coinduf.eu/

You can get the latest indexed block height with

{
  status {
    head
  }
}

which returns something like

{
  "data": {
    "status": {
      "head": 1276102
    }
  }
}

As a bonus, you also have the explorer at https://explorer.subsquid.gdev.coinduf.eu/graphql.
It can, for example, fetch the Balances.Transfer events where the amount is 314.

query MyQuery {
  events(limit: 3, where: {args_jsonContains: "{\"amount\":\"314\"}", name_eq: "Balances.Transfer"}) {
    name
    args
  }
}

which returns

{
  "data": {
    "events": [
      {
        "name": "Balances.Transfer",
        "args": {
          "to": "0xb2c04f16d0b7069bf237675bfbdf8a0eab4d6fa790f2a1d545ceb6c73428e6d5",
          "from": "0x0ed0734a282c8d3551694d74e12f6ec9a568770ad351a67303994908b638071b",
          "amount": "314"
        }
      }
    ]
  }
}
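The args_jsonContains filter embeds JSON inside a GraphQL string, so the inner quotes must be escaped. A small helper can build such a query safely (a sketch with my own naming; it is not part of the explorer API):

```python
import json

def events_query(name: str, contains: dict, limit: int = 3) -> str:
    """Build an explorer `events` query with a JSON containment filter.

    json.dumps is applied twice: once to serialize the filter object,
    once more to escape it as a GraphQL string literal.
    """
    filter_literal = json.dumps(json.dumps(contains))
    return (
        f'query {{ events(limit: {limit}, '
        f'where: {{args_jsonContains: {filter_literal}, name_eq: "{name}"}}) '
        f'{{ name args }} }}'
    )

print(events_query("Balances.Transfer", {"amount": "314"}))
```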

Could you be more specific? I agree with the fact that an indexer is quickly and absolutely necessary. But I don’t see why duniter-indexer could not meet this requirement.


Because, compared to Subsquid, a homemade indexer will not have the full blockchain schema and exhaustive data available; we have to implement everything we want as we go along.

Subsquid comes with everything needed to analyze our blockchain out of the box.


See also: Why we need a full featured indexer? :smiley_cat:


Another thing which is very useful when helping debug other users is this request:

{
  batch(
    limit: 10
    includeAllBlocks: false
    events: [{ name: "System.ExtrinsicFailed" }]
    fromBlock: 1430000
    toBlock: 1460000
  ) {
    header {
      height
      hash
    }    
  }
}

It lets you see the failed extrinsics within a given block range.
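To turn a wall-clock timeframe into fromBlock/toBlock bounds, you can estimate the range from the chain’s block time. A sketch assuming a nominal 6-second block time (adjust if ĞDev uses a different value; the helper name is mine):

```python
BLOCK_TIME_S = 6  # assumed nominal block time; adjust to the chain's actual value

def block_range(head: int, hours_back: float,
                block_time_s: int = BLOCK_TIME_S) -> tuple[int, int]:
    """Estimate (fromBlock, toBlock) covering the last `hours_back` hours,
    given the current head block number."""
    span = int(hours_back * 3600 / block_time_s)
    return max(0, head - span), head

# e.g. roughly the last 50 hours before head 1460000:
print(block_range(1460000, 50))  # → (1430000, 1460000)
```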


I don’t have much to share on this topic; I have already posted everything I had above:

  • a docker-compose file to run Subsquid (firesquid)
  • a public GraphQL endpoint to query it
  • example queries from my own usage

However, I can do as Pini suggests.

But right now I feel a bit overwhelmed; I will try to spell out my current priorities to clarify things.