I have made some progress on this subject; I can now submit a request like:
{
  batch(
    limit: 10
    includeAllBlocks: false
    events: [{ name: "AuthorityMembers.MemberGoOnline" }]
  ) {
    header {
      height
    }
    events
  }
}
and get an answer:
{
  "data": {
    "batch": [
      {
        "header": {
          "height": 7021
        },
        "events": [
          {
            "args": 2457,
            "callId": "0000007021-000001-72b7b",
            "extrinsicId": "0000007021-000001-72b7b",
            "id": "0000007021-000002-72b7b",
            "indexInBlock": 2,
            "name": "AuthorityMembers.MemberGoOnline",
            "phase": "ApplyExtrinsic",
            "pos": 4
          }
        ]
      },
      {
        "header": {
          "height": 38961
        },
        "events": [
          {
            "args": 7139,
            "callId": "0000038961-000001-99384",
            "extrinsicId": "0000038961-000001-99384",
            "id": "0000038961-000002-99384",
            "indexInBlock": 2,
            "name": "AuthorityMembers.MemberGoOnline",
            "phase": "ApplyExtrinsic",
            "pos": 4
          }
        ]
      },
      {
        "header": {
          "height": 290054
        },
        "events": [
          {
            "args": 7139,
            "callId": "0000290054-000001-723f3",
            "extrinsicId": "0000290054-000001-723f3",
            "id": "0000290054-000002-723f3",
            "indexInBlock": 2,
            "name": "AuthorityMembers.MemberGoOnline",
            "phase": "ApplyExtrinsic",
            "pos": 4
          }
        ]
      }
    ]
  }
}
It is still indexing locally. When I’m happy with the result, I will share how to do it.
The tricky part was the capital letter at the beginning of the pallet name in the event, which is not present in polkadotjsapp: the archive expects AuthorityMembers.MemberGoOnline, while polkadotjsapp shows the pallet in lower camel case (authorityMembers.MemberGoOnline).
Right now, it’s still a simple docker compose running a Subsquid archive instance connected to the gdev.p2p.legal endpoint.
docker-compose.yml
services:
  db:
    image: postgres:15 # CockroachDB cluster might be a better fit for production deployment
    restart: always
    volumes:
      - /var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: squid-archive

  ingest:
    depends_on:
      - db
    restart: on-failure
    image: subsquid/substrate-ingest:firesquid
    command: [
      "-e", "wss://gdev.p2p.legal/ws",
      "-c", "5", # number of pending requests allowed for the above endpoint (default is 5)
      # "--start-block", "1000000", # uncomment to specify a non-zero start block
      "--out", "postgres://postgres:postgres@db:5432/squid-archive"
    ]

  gateway:
    depends_on:
      - db
    image: subsquid/substrate-gateway:firesquid
    environment:
      RUST_LOG: "substrate_gateway=info,actix_server=info"
    command: [
      "--database-url", "postgres://postgres:postgres@db:5432/squid-archive",
      "--database-max-connections", "3", # max number of concurrent database connections
      # "--evm-support" # uncomment for chains with Frontier EVM pallet
      #   (e.g. Moonbeam/Moonriver or Astar/Shiden)
    ]
    ports:
      - "8888:8000"

  # Explorer service is optional.
  # It provides a rich GraphQL API for querying archived data.
  # Many developers find it very useful for exploration and debugging.
  explorer:
    image: subsquid/substrate-explorer:firesquid
    environment:
      DB_TYPE: postgres # set to `cockroach` for Cockroach DB
      DB_HOST: db
      DB_PORT: "5432"
      DB_NAME: "squid-archive"
      DB_USER: "postgres"
      DB_PASS: "postgres"
    ports:
      - "4444:3000"
I want to see how they suggest implementing custom indexing and how it compares to the current duniter-indexer.
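From their docs, the usual approach seems to be a separate squid processor consuming this archive. Below is only a rough sketch based on my reading of the firesquid-era @subsquid/substrate-processor API, not duniter-indexer code: it subscribes to the same AuthorityMembers.MemberGoOnline event and writes to the squid’s own Postgres database. The MemberOnlineEvent entity is hypothetical (in a real squid it would be generated from schema.graphql), and the archive/chain URLs simply reuse the ones from the compose file above.

import {SubstrateBatchProcessor} from '@subsquid/substrate-processor'
import {TypeormDatabase} from '@subsquid/typeorm-store'
// Hypothetical entity; in a real squid it is generated from schema.graphql
import {MemberOnlineEvent} from './model'

const processor = new SubstrateBatchProcessor()
  .setDataSource({
    // self-hosted archive gateway and RPC endpoint from the compose file above
    archive: 'http://localhost:8888/graphql',
    chain: 'wss://gdev.p2p.legal/ws',
  })
  .addEvent('AuthorityMembers.MemberGoOnline', {
    data: {event: {args: true}},
  } as const)

processor.run(new TypeormDatabase(), async (ctx) => {
  const rows: MemberOnlineEvent[] = []
  for (const block of ctx.blocks) {
    for (const item of block.items) {
      if (item.name === 'AuthorityMembers.MemberGoOnline') {
        rows.push(
          new MemberOnlineEvent({
            id: item.event.id,
            blockNumber: block.header.height,
            // args seems to be the member id (2457, 7139, ... in the answer above)
            memberId: item.event.args,
          })
        )
      }
    }
  }
  await ctx.store.insert(rows)
})

This is just to give an idea of the shape of the API; the actual schema and processing logic would have to be designed around what duniter-indexer currently exposes.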