Squid: DB migration issues when starting from scratch with the (PostGraphile) 0.5.1 images

@poka I'm trying once more to set up Squid on my ARM64 server; I just grabbed the docker-compose.yaml at the root of the repo and deleted all the volumes I had before.

The error on startup:

docker compose up -d
[+] Running 5/5
 ✔ Network duniter-gtest-squid-v2_default         Created                                                                                               0.0s
 ✔ Volume "duniter-gtest-squid-v2_postgres-data"  Created                                                                                               0.0s
 ✔ Container duniter-gtest-squid-db-v2            Healthy                                                                                               5.9s
 ✘ Container duniter-gtest-squid-processor-v2     Error                                                                                                 7.2s
 ✔ Container duniter-gtest-squid-server-v2        Created                                                                                               0.0s
dependency failed to start: container duniter-gtest-squid-processor-v2 is unhealthy

Looking at the logs of the duniter-gtest-squid-processor-v2 container, I see this error at the end (for one of the DB migrations, it seems):

error: error: type identity does not exist
Migration "udHistoryFunction1762714409038" failed, error: type identity does not exist
query: ROLLBACK
QueryFailedError: type identity does not exist
    at PostgresQueryRunner.query (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:216:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async udHistoryFunction1762714409038.up (/squid/db/migrations/1762714409038-udHistoryFunction.js:7:5)
    at async MigrationExecutor.executePendingMigrations (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
    at async DataSource.runMigrations (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/data-source/DataSource.js:266:35)
    at async /squid/node_modules/.pnpm/@subsquid+typeorm-migration@1.3.0_typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2_/node_modules/@subsquid/typeorm-migration/lib/apply.js:46:9 {
  query: '-- Drop existing functions/types if they exist\n' +
    'DROP FUNCTION IF EXISTS identity_ud_history_computed(identity);\n' +
    'DROP FUNCTION IF EXISTS identity_ud_history_computed(identity, ud_history_order_by);\n' +
    'DROP TYPE IF EXISTS ud_history_order_by CASCADE;\n' +
    '\n' +
    '-- Create enum for UD history ordering options\n' +
    'CREATE TYPE ud_history_order_by AS ENUM (\n' +
    "    'BLOCK_NUMBER_ASC',\n" +
    "    'BLOCK_NUMBER_DESC', \n" +
    "    'TIMESTAMP_ASC',\n" +
    "    'TIMESTAMP_DESC',\n" +
    "    'AMOUNT_ASC',\n" +
    "    'AMOUNT_DESC'\n" +
    ');\n' +
    '\n' +
    '-- UD History function for identity (recreate for Graphile) with orderBy support\n' +
    'CREATE OR REPLACE FUNCTION identity_ud_history_computed(\n' +
    '    identity_row identity,\n' +
    "    order_by ud_history_order_by DEFAULT 'BLOCK_NUMBER_DESC'\n" +
    ')\n' +
    'RETURNS SETOF ud_history\n' +
    'LANGUAGE plpgsql STABLE\n' +
    'AS $$\n' +
    'DECLARE\n' +
    '    order_clause text;\n' +
    'BEGIN\n' +
    '    -- Determine order clause based on enum\n' +
    '    CASE order_by\n' +
    "        WHEN 'BLOCK_NUMBER_ASC' THEN\n" +
    "            order_clause := 'ORDER BY ud.block_number ASC';\n" +
    "        WHEN 'BLOCK_NUMBER_DESC' THEN\n" +
    "            order_clause := 'ORDER BY ud.block_number DESC';\n" +
    "        WHEN 'TIMESTAMP_ASC' THEN\n" +
    "            order_clause := 'ORDER BY b.timestamp ASC';\n" +
    "        WHEN 'TIMESTAMP_DESC' THEN\n" +
    "            order_clause := 'ORDER BY b.timestamp DESC';\n" +
    "        WHEN 'AMOUNT_ASC' THEN\n" +
    "            order_clause := 'ORDER BY ud.amount ASC, ud.block_number DESC';\n" +
    "        WHEN 'AMOUNT_DESC' THEN\n" +
    "            order_clause := 'ORDER BY ud.amount DESC, ud.block_number DESC';\n" +
    '        ELSE\n' +
    "            order_clause := 'ORDER BY ud.block_number DESC';\n" +
    '    END CASE;\n' +
    '\n' +
    "    RETURN QUERY EXECUTE format('\n" +
    '        SELECT \n' +
    "            CONCAT(''ud-'', $1, ''-'', ud.block_number, ''-'', COALESCE(ud.event_id, ''''))::character varying AS id,\n" +
    '            ud.amount,\n' +
    '            ud.block_number,\n' +
    '            b.timestamp,\n' +
    '            $2 AS identity_id\n' +
    '        FROM universal_dividend ud\n' +
    '        JOIN block b ON ud.block_number = b.height\n' +
    '        WHERE EXISTS (\n' +
    '            SELECT 1\n' +
    '            FROM (\n' +
    '                SELECT\n' +
    '                    me1.block_number as creation_block,\n' +
    '                    COALESCE(\n' +
    '                        (\n' +
    '                            SELECT me2.block_number\n' +
    '                            FROM membership_event me2\n' +
    '                            WHERE me2.identity_id = me1.identity_id\n' +
    "                                AND me2.event_type = ''Removal''\n" +
    '                                AND me2.block_number > me1.block_number\n' +
    '                            ORDER BY me2.block_number\n' +
    '                            LIMIT 1\n' +
    '                        ),\n' +
    '                        (SELECT MAX(block_number) FROM universal_dividend)\n' +
    '                    ) as removal_block\n' +
    '                FROM membership_event me1\n' +
    '                WHERE me1.identity_id = $2\n' +
    "                    AND me1.event_type = ''Creation''\n" +
    '            ) as membership_periods\n' +
    '            WHERE ud.block_number >= membership_periods.creation_block\n' +
    '                AND ud.block_number <= membership_periods.removal_block\n' +
    '        )\n' +
    "        %s', order_clause)\n" +
    '    USING identity_row.name, identity_row.id;\n' +
    'END;\n' +
    '$$;\n' +
    '\n' +
    'COMMENT ON FUNCTION identity_ud_history_computed(identity, ud_history_order_by) IS \n' +
    "  E'@fieldName udHistory\\nGet UD history for an identity based on membership periods';",
  parameters: undefined,
  driverError: error: type identity does not exist
      at /squid/node_modules/.pnpm/pg@8.16.3/node_modules/pg/lib/client.js:545:17
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async PostgresQueryRunner.query (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:181:25)
      at async udHistoryFunction1762714409038.up (/squid/db/migrations/1762714409038-udHistoryFunction.js:7:5)
      at async MigrationExecutor.executePendingMigrations (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
      at async DataSource.runMigrations (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/data-source/DataSource.js:266:35)
      at async /squid/node_modules/.pnpm/@subsquid+typeorm-migration@1.3.0_typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2_/node_modules/@subsquid/typeorm-migration/lib/apply.js:46:9 {
    length: 112,
    severity: 'ERROR',
    code: '42704',
    detail: undefined,
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: undefined,
    table: undefined,
    column: undefined,
    dataType: undefined,
    constraint: undefined,
    file: 'functioncmds.c',
    line: '270',
    routine: 'interpret_function_parameter_list'
  },
  length: 112,
  severity: 'ERROR',
  code: '42704',
  detail: undefined,
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: undefined,
  table: undefined,
  column: undefined,
  dataType: undefined,
  constraint: undefined,
  file: 'functioncmds.c',
  line: '270',
  routine: 'interpret_function_parameter_list'
}

Edit: I just tried on my local PC (amd64) and I get the same issue.

1 Like

So it's an issue with the migrations embedded in the image.
When I have some time I'll push a Docker image with different migrations, and we'll see how that goes. The procedure isn't very clear on the squid side regarding the lifecycle of these migrations.
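For what it's worth, the error itself hints at an ordering problem: the parameter `identity_row identity` references the composite row type that Postgres creates together with the `identity` table, so on an empty DB this function migration runs before the table exists and fails with SQLSTATE 42704 (see `routine: interpret_function_parameter_list` above). A minimal sketch of a guard, assuming the migration SQL can be wrapped in a `DO` block (file name hypothetical, not the actual repo migration):

```shell
# Hypothetical guard: make the function migration a no-op while the
# `identity` table (and hence its composite row type) does not exist yet.
cat <<'SQL' > guarded-ud-history.sql
DO $guard$
BEGIN
  IF to_regclass('public.identity') IS NULL THEN
    RAISE NOTICE 'identity table not created yet, skipping udHistory function';
    RETURN;
  END IF;
  -- the original DROP/CREATE TYPE and CREATE FUNCTION statements go here
END
$guard$;
SQL
grep -n 'to_regclass' guarded-ud-history.sql  # sanity check: the guard is present
```

Applied through psql or the migration runner, such a guard leaves the empty-DB case harmless and lets the function be (re)created once the schema migrations have run.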

1 Like

FYI, I just re-tested with the previous version of the images, 0.5.0, and I get a different error (one I had already hit on my server the first time I installed Squid with PostGraphile):

Again, without any initial data (so an empty DB).

The behavior is as follows: the processor service crashes after a short while, with these logs just before the exit, then repeats this in a loop.

processor-1  | {"level":2,"time":1765045504345,"ns":"sqd:processor:mapping","msg":"Saving v1 transaction history and comments"}
processor-1  | {"level":2,"time":1765045509123,"ns":"sqd:processor:mapping","msg":"Flushing changes to storage, this can take a while..."}
processor-1  | {"level":2,"time":1765045509123,"ns":"sqd:processor:mapping","msg":"(about ~5 minutes for all g1 history and genesis data)"}
processor-1  | {"level":5,"time":1765045510755,"ns":"sqd:processor","err":{"stack":"QueryFailedError: relation \"block\" does not exist\n    at PostgresQueryRunner.query (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:216:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async InsertQueryBuilder.execute (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/query-builder/InsertQueryBuilder.js:106:33)\n    at async StoreWithCache.insert (/squid/node_modules/.pnpm/@subsquid+typeorm-store@1.5.1_@subsquid+big-decimal@1.0.0_typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2_/node_modules/@subsquid/typeorm-store/lib/store.js:88:17)\n    at async /squid/node_modules/.pnpm/@belopash+typeorm-store@1.5.0_@subsquid+typeorm-config@4.1.1_typeorm@0.3.26_ioredis@5.7_0562d5ed8c2d447272b97a9e909772f4/node_modules/@belopash/typeorm-store/lib/store.js:219:21","query":"INSERT INTO \"block\"(\"id\", \"height\", \"hash\", \"parent_hash\", \"state_root\", \"extrinsicsic_root\", \"spec_name\", \"spec_version\", \"impl_name\", \"impl_version\", \"timestamp\", \"validator\", \"extrinsics_count\", \"calls_count\", \"events_count\") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15), ($16, $17, $18, $19, $20, $21, $22, $23, $24, $25, $26, $27, $28, $29, $30), ($31, $32, $33, $34, $35, $36, $37, $38, $39, $40, $41, $42, $43, $44, $45), ($46, $47, $48, $49, $50, $51, $52, $53, $54, $55, $56, $57, $58, $59, $60), ($61, $62, $63, $64, $65, $66, $67, $68, $69, $70, $71, $72, $73, $74, $75), ($76, $77, $78, $79, $80, $81, $82, $83, $84, $85, $86, $87, $88, $89, $90), ($91, $92, $93, $94, $95, $96, $97, $98, $99, $100, $101, $102, $103, $104, $105), ($106, $107, $108, $109, $110, $111, $112, $113, $114, $115, $116, $117, $118, $119, $120), ($121, $122, $123, $124, 
$125, $126, $127, $128, $129, $130, $131, $132, $133, $134, $135), ($136, $137, $138, $139, $140, $141, $142, $143, $144, $145, $146, $147
... <very long string of $xxx arguments that continues and ends oddly - see below>
 $9105), ($9106, $9107, $9108, $9109, $9110, $9111, $9112, $9113, $9114, $9115, $9116, $9117, $9118, $9119, $9120), ($9121, $9122, $9123, $9124, $9125, $9126, $9127, $9128, $9129, $9130, $9131, $9132, $9133, $9134, $9135), ($9136, $9137, $9138, $9139, $9140, $9141, $9142, $9143, $9144, $9145, $9146, $9147, $9148, $9149, $9150), ($9151, $9152, $9153, $9154, $9155, $9156, $91
processor-1 exited with code 1

FYI, I get this issue locally (amd64) as well.

So I'm starting to wonder whether the precondition for this to work is a DB pre-populated by the old Squid (Hasura)?

Either that, or there is a manual step required beyond just configuring the compose.yaml + .env?

No, that's not it; it can be one of two things:

  • You didn't fully reset your squid DB volume: this happens to me often with squid, I don't know why. The only way I've found is to run docker system prune -a after making sure the squid containers are stopped.
    Careful: this deletes all your Docker containers, volumes and images that are not attached to a running container.
  • An issue with the DB migration scripts, which expect an existing DB in order to run.

I lean toward option 1.
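The first option above can also be done less destructively than a full `docker system prune -a` by removing only the squid Postgres volume (its name appears in the compose output at the top of the thread). A small sketch, assuming the default compose project name; the helper file name is hypothetical:

```shell
# Hypothetical helper: reset only the squid DB volume so the next start
# really begins from an empty database.
cat <<'EOF' > reset-squid-db.sh
#!/bin/sh
set -e
docker compose down                                    # stop the squid containers
docker volume rm duniter-gtest-squid-v2_postgres-data  # drop only the DB volume
docker compose up -d                                   # recreate everything from scratch
EOF
chmod +x reset-squid-db.sh
```

`docker compose down -v` achieves much the same for every volume declared in the compose file.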

I just ran docker system prune -a on my local PC after stopping the squid containers (also with docker compose down -v).

On restarting it: same behavior, and the same dump of an insert query with a huge number of parameters before the exit.

Just to make the error a bit more readable, I reformatted the very long line by hand :slight_smile:

Stack: QueryFailedError: relation "block" does not exist
    at PostgresQueryRunner.query (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:216:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async InsertQueryBuilder.execute (/squid/node_modules/.pnpm/typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2/node_modules/typeorm/query-builder/InsertQueryBuilder.js:106:33)
    at async StoreWithCache.insert (/squid/node_modules/.pnpm/@subsquid+typeorm-store@1.5.1_@subsquid+big-decimal@1.0.0_typeorm@0.3.26_ioredis@5.7.0_pg@8.16.3_reflect-metadata@0.2.2_/node_modules/@subsquid/typeorm-store/lib/store.js:88:17)
    at async /squid/node_modules/.pnpm/@belopash+typeorm-store@1.5.0_@subsquid+typeorm-config@4.1.1_typeorm@0.3.26_ioredis@5.7_0562d5ed8c2d447272b97a9e909772f4/node_modules/@belopash/typeorm-store/lib/store.js:219:21
    
query: "INSERT INTO "block"("id", "height", "hash", "parent_hash", "state_root", "extrinsicsic_root", "spec_name", "spec_version", "impl_name", "impl_version", "timestamp", "validator", "extrinsics_count", "calls_count", "events_count")
        VALUES (... <TOO MANY PARAMETERS> ...)"

OK, so it's definitely the DB's SQL migration scripts that are causing the problem.
I'll look into it this week.

2 Likes

@Nicolas80 I just completely reworked squid's SQL migration system. I made it more robust by handling both the empty-DB and populated-DB states. I also guarded the migration-generation script against running when the local DB is empty.
I pushed the 0.5.2 Docker images; let me know if that solves your problems.

This topic had been nagging me for a while on the squid side: during development, DB restarts often generated useless migrations.
From my few tests, it's much more pleasant now.


edit: wait, I'm getting errors when running this image on my production server; I think I'll have to adjust a few things.


edit2: @Nicolas80 I fixed various things in 0.5.4; it works for me on top of an already populated DB. Can you tell me whether it works for you on a fresh DB?

1 Like

@poka It seems to work :slight_smile:

However, there is still a somewhat odd log just before it starts synchronizing, ❌ Failed to apply custom SQL functions: password authentication failed for user "postgres":

processor-1  | {"level":2,"time":1765388327729,"ns":"sqd:processor:mapping","msg":"Genesis flushed"}
processor-1  | {"level":2,"time":1765388327729,"ns":"sqd:processor:mapping","msg":"====================="}
processor-1  | {"level":2,"time":1765388327729,"ns":"sqd:processor:mapping","msg":"Starting blockchain indexing with 13 smiths, 7589 members and 43919 accounts!"}
db-1         | 2025-12-10 17:38:47.768 UTC [385] FATAL:  password authentication failed for user "postgres"
db-1         | 2025-12-10 17:38:47.768 UTC [385] DETAIL:  Password does not match for user "postgres".
db-1         |  Connection matched pg_hba.conf line 100: "host all all all md5"
processor-1  | 📝 Custom SQL functions not found, applying them now...
db-1         | 2025-12-10 17:38:47.773 UTC [386] FATAL:  password authentication failed for user "postgres"
db-1         | 2025-12-10 17:38:47.773 UTC [386] DETAIL:  Password does not match for user "postgres".
db-1         |  Connection matched pg_hba.conf line 100: "host all all all md5"
processor-1  | ❌ Failed to apply custom SQL functions: password authentication failed for user "postgres"
processor-1  | {"level":2,"time":1765388327807,"ns":"sqd:processor","msg":"9 / 870426, rate: 0 blocks/sec, mapping: 0 blocks/sec, 0 items/sec, eta: 9566h 39m"}
processor-1  | {"level":2,"time":1765388332278,"ns":"sqd:processor:mapping","msg":"601 Historical.RootStored"}
processor-1  | {"level":2,"time":1765388332808,"ns":"sqd:processor","msg":"679 / 870426, rate: 132 blocks/sec, mapping: 972 blocks/sec, 2915 items/sec, eta: 1h 50m"}
...

But I can see that it seems to continue indexing correctly…

processor-1  | {"level":2,"time":1765389423183,"ns":"sqd:processor:mapping","msg":"142871 Historical.RootStored"}
processor-1  | {"level":2,"time":1765389423699,"ns":"sqd:processor","msg":"142949 / 870610, rate: 132 blocks/sec, mapping: 994 blocks/sec, 3002 items/sec, eta: 1h 32m"}
processor-1  | {"level":2,"time":1765389427575,"ns":"sqd:processor:mapping","msg":"143471 Historical.RootStored"}
processor-1  | {"level":2,"time":1765389428700,"ns":"sqd:processor","msg":"143629 / 870610, rate: 135 blocks/sec, mapping: 1061 blocks/sec, 3184 items/sec, eta: 1h 30m"}
processor-1  | {"level":2,"time":1765389431449,"ns":"sqd:processor:mapping","msg":"144000 Treasury.Spending"}
processor-1  | {"level":2,"time":1765389431449,"ns":"sqd:processor:mapping","msg":"144000 Treasury.Rollover"}
processor-1  | {"level":2,"time":1765389432141,"ns":"sqd:processor:mapping","msg":"144071 Historical.RootStored"}
processor-1  | {"level":2,"time":1765389433700,"ns":"sqd:processor","msg":"144299 / 870610, rate: 133 blocks/sec, mapping: 819 blocks/sec, 2472 items/sec, eta: 1h 31m"}
2 Likes

Well spotted!
I know where it comes from: a forgotten migration script that is no longer needed. I removed it, along with some dead code, and just pushed a 0.5.5, which changes nothing in terms of behavior except that this error should no longer show up in the logs.

3 Likes

@poka did you by any chance lower the log level in 0.5.5?

I restarted from scratch; I can see from the CPU usage that it's working, but I don't see any logs after these:

duniter-gtest-squid-server-v2     | 90.15.23.207 - - [10/Dec/2025:18:46:50 +0000] "POST /v1/graphql HTTP/1.1" 200 31 "-" "Dart/3.9 (dart:io)"
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392444127,"ns":"sqd:processor:mapping","msg":"Saving v1 transaction history and comments"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392455277,"ns":"sqd:processor:mapping","msg":"Flushing changes to storage, this can take a while..."}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392455277,"ns":"sqd:processor:mapping","msg":"(about ~5 minutes for all g1 history and genesis data)"}
duniter-gtest-squid-processor-v2  | {"level":3,"time":1765392459180,"ns":"sqd:processor:rpc","msg":"connection failure","rpcUrl":"ws://duniter-gtest-archive-v2:9944","reason":"RpcConnectionError: Socket connection terminated","rpcCall":{"id":143,"jsonrpc":"2.0","method":"chain_getFinalizedHead"}}
duniter-gtest-squid-processor-v2  | {"level":3,"time":1765392459182,"ns":"sqd:processor:rpc","msg":"will pause new requests for 10ms","rpcUrl":"ws://duniter-gtest-archive-v2:9944"}
CONTAINER ID   NAME                               CPU %     MEM USAGE / LIMIT     MEM %     NET I/O          BLOCK I/O        PIDS
8bbe10184a5c   duniter-gtest-squid-server-v2      0.00%     37.08MiB / 23.42GiB   0.15%     1.19MB / 519kB   16MB / 197kB     11
eb329b110d8b   duniter-gtest-squid-processor-v2   29.13%    5.273GiB / 23.42GiB   22.52%    1.23MB / 873MB   573MB / 0B       22
ddde62633425   duniter-gtest-squid-db-v2          77.03%    278.6MiB / 23.42GiB   1.16%     873MB / 1.93MB   6.5GB / 15.8GB   8

Or maybe it's because I also just added an extra (external) Docker network for the "server", since it must be reachable by my NGinx…

Edit: Ah, the logs just resumed; it's synchronizing :slight_smile:

1 Like

OK; no, I haven't touched the logs.

duniter-gtest-squid-processor-v2  | {"level":3,"time":1765392459180,"ns":"sqd:processor:rpc","msg":"connection failure","rpcUrl":"ws://duniter-gtest-archive-v2:9944","reason":"RpcConnectionError: Socket connection terminated","rpcCall":{"id":143,"jsonrpc":"2.0","method":"chain_getFinalizedHead"}}
duniter-gtest-squid-processor-v2  | {"level":3,"time":1765392459182,"ns":"sqd:processor:rpc","msg":"will pause new requests for 10ms","rpcUrl":"ws://duniter-gtest-archive-v2:9944"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392980041,"ns":"sqd:processor:mapping","msg":"Genesis flushed"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392980041,"ns":"sqd:processor:mapping","msg":"====================="}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392980041,"ns":"sqd:processor:mapping","msg":"Starting blockchain indexing with 13 smiths, 7589 members and 43919 accounts!"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392980619,"ns":"sqd:processor","msg":"9 / 871200, rate: 0 blocks/sec, mapping: 0 blocks/sec, 0 items/sec, eta: 20610h 32m"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392985620,"ns":"sqd:processor","msg":"319 / 871203, rate: 0 blocks/sec, mapping: 296 blocks/sec, 889 items/sec, eta: 585h 1m"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392987129,"ns":"sqd:processor:mapping","msg":"601 Historical.RootStored"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392990116,"ns":"sqd:processor:mapping","msg":"1201 Historical.RootStored"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392990625,"ns":"sqd:processor","msg":"1309 / 871203, rate: 202 blocks/sec, mapping: 308 blocks/sec, 923 items/sec, eta: 1h 12m"}
duniter-gtest-squid-processor-v2  | {"level":2,"time":1765392994374,"ns":"sqd:processor:mapping","msg":"1801 Historical.RootStored"}

:slight_smile:

1 Like

It normally takes about 40 minutes to sync.

How can I easily check that everything works once it has finished?

(if possible with a simple curl command or … so I could script a check with Uptime Kuma :slight_smile: )

Normally, this server is exposed at:
http://squid.gtest.fr.brussels.ovh/
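A minimal curl-based probe could look like the sketch below. The endpoint comes from this thread, but the GraphQL field names (`blocks`/`nodes`/`height`) are an assumption about the generated PostGraphile schema and may need adapting; the script name is hypothetical:

```shell
# Hypothetical Uptime Kuma probe, written to squid-check.sh: ask the squid
# GraphQL endpoint for its highest indexed block and fail when none comes back.
cat <<'EOF' > squid-check.sh
#!/bin/sh
ENDPOINT="${1:-http://squid.gtest.fr.brussels.ovh/v1/graphql}"
QUERY='{"query":"{ blocks(first: 1, orderBy: HEIGHT_DESC) { nodes { height } } }"}'
height=$(curl -fsS "$ENDPOINT" -H 'Content-Type: application/json' -d "$QUERY" \
  | sed -n 's/.*"height":\([0-9][0-9]*\).*/\1/p' | head -n 1)
# Exit 0 and print the height on success; exit 1 when nothing could be read.
[ -n "$height" ] && echo "indexer at block $height"
EOF
chmod +x squid-check.sh
```

It could be run from cron behind an Uptime Kuma push monitor, or replaced by Uptime Kuma's own HTTP monitor POSTing the same query with a keyword check on `"height"`.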

gcli -i http://localhost:8081/v1/graphql indexer check

By the way, the -o json option doesn't seem to work for this command :wink:

1 Like

It works :smiley:

gcli -i "https://squid.gtest.fr.brussels.ovh/v1/graphql" indexer check
╭────────────────────────┬─────────────────────────────────────────┬────────────────────────────────────────────────╮
│ Variable               ┆ Duniter                                 ┆ Indexer                                        │
╞════════════════════════╪═════════════════════════════════════════╪════════════════════════════════════════════════╡
│ URL                    ┆ wss://archive-rpc.gtest.fr.brussels.ovh ┆ https://squid.gtest.fr.brussels.ovh/v1/graphql │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ genesis hash           ┆ 0xd458…c57a                             ┆ 0xd458…c57a                                    │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ finalized block number ┆ 872048                                  ┆                                                │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ finalized block hash   ┆ 0x118a…2bfe                             ┆ 0x118a…2bfe                                    │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ latest block number    ┆ 872051                                  ┆ 872051                                         │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ latest block hash      ┆ 0xf593…bc6c                             ┆ 0xf593…bc6c                                    │
╰────────────────────────┴─────────────────────────────────────────┴────────────────────────────────────────────────╯
1 Like

3 posts were merged into an existing topic: Indexeur Squid bloqué sur le même bloc