Duniter squid doubts

I have an issue with the indexer. I can make it partially work with the big all-in-one docker-compose file, slightly adapted to keep ports internal, but not with the split one. The processor fails with this strange error:

My main doubt is about the “archive” mode. Let me explain: I start by following the documentation at
https://duniter.org/wiki/duniter-v2/
and when I continue with “Run an indexer”, it mentions that “You need an archive node running on the same network”:
https://duniter.org/wiki/duniter-v2/indexers/duniter-squid/
I assumed this meant the previously running mirror node, but when I connect the two containers, the error above is raised.

But if I try to run the same setup with the Giga version, it seems to work better (at least it does not fail with that error).

Can I run a mirror node connected to duniter-squid, or should I install a Duniter v2 node in archive mode plus duniter-squid?

Comparing the docker compose files from the two pages, I see that there is a flag:

      - DUNITER_PRUNING_PROFILE=archive

in the Giga one that does not exist in the mirror documentation.
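For reference, here is a minimal sketch of where such a flag would sit in a compose file. The service name, image tag, and volume path below are illustrative assumptions based on this thread, not copied from the official docs:

```yaml
# Hypothetical fragment -- service name, image and paths are assumptions
services:
  duniter-mirror:
    image: duniter/duniter-v2s:latest
    environment:
      # "archive" keeps all historical state, which the squid indexer needs
      - DUNITER_PRUNING_PROFILE=archive
    volumes:
      - data-archive:/var/lib/duniter

volumes:
  data-archive:
```

Note that adding the flag to an already-synced node is not enough: the pruned states are already gone, so the volume has to be deleted and the node re-synced from scratch.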

As all of this is currently running in the same VM and Docker instance, what do you recommend?

I hope my doubts are clear and help clarify the documentation for newcomers.

BR,

Vicente

You need an archive node in order for the indexer to index all the states of the blockchain, that is, the history of transfers, certifications, and so on.

Yes, but the archive node is, I think, not mentioned until this part of the documentation:

  • why not explain in more detail how to run an archive node?
  • why not include the archive node in the same docker compose as the squid?
  • what is the recommended way to run this alongside a mirror instance on the same server?

TIA

You are asking the right questions!
The documentation needs a bit of love, because it's missing bits and pieces, and few people have gone through it.

Please copy the error as text: it is easier to read and quote, and it allows a forum search to point other people with the same problem directly here!

This means that the state data is no longer available in the Duniter node. You are likely missing - DUNITER_PRUNING_PROFILE=archive on your mirror node to make it an archive node. If you add it after the initial sync, you must delete the volume and re-sync the node in order to keep all the states.

Haha, not very clear at the moment :joy:, but I guess the doc is not clear either, and this discussion will help improve it!

An archive node is a mirror node with an “archive” pruning profile, meaning it does not discard states. I agree we should add a doc page for that.

I prefer to split docker-compose files for manageability, but that's mostly a personal preference. I know that my archive node is stable, and I do not want to accidentally remove its volume or shut it down; I want to manage updates separately. Moreover, I'm likely to run multiple instances of the indexer against the same archive node, and that's not easy with a single docker compose (I would have to specify the services every time in the docker compose commands). But I totally understand the “giga docker compose” approach, it's just not mine.

It depends on who recommends it! This is the decision tree I would suggest:

  • do you have a lot of disk space on your machine?
    • no: only run a mirror node
    • yes: set your mirror as archive
  • do you still have a lot of disk space after the archive node?
    • no: consider running squid on another server on the same network as the archive node
    • yes: run squid on the same server as your archive node

You can always use an external archive node for your squid indexer, but the main thing to consider is network traffic. That's why I suggested using a Docker network on the same machine.
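To illustrate the same-machine setup with split compose files, one possible sketch is a shared external Docker network (all names and ports here are mine, not from the documentation):

```yaml
# archive/docker-compose.yml (fragment): creates the shared network
services:
  duniter-archive:
    # ... archive node configuration ...
    networks: [duniter-net]

networks:
  duniter-net:
    name: duniter-net
---
# squid/docker-compose.yml (fragment): joins the existing network, so the
# processor can reach the node by service name, e.g. ws://duniter-archive:9944
services:
  processor:
    # ... squid processor configuration ...
    networks: [duniter-net]

networks:
  duniter-net:
    external: true
```

This keeps traffic between the indexer and the archive node on the local Docker bridge instead of going through a public endpoint.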

Currently, what are the archive and indexer disk sizes?

@joss.rendall started to collect this info to include it in the documentation → Draft: Doc v2 (!19) · Merge requests · websites / duniter_website_fr_v2 · GitLab

Here are my current volume sizes:

$ docker system df -v
[...]
VOLUME NAME                            LINKS     SIZE
duniter-gdev-archive_data-archive      1         37GB
duniter-gdev-subsquid_postgres-data    1         8.08GB
datapod_kubo_data                      1         4.998GB
datapod_db_data                        1         88.04MB

So an archive node can be quite large, and so can squid. For the moment the datapod only contains Cesium+ profiles, but I expect it to grow when we include other info and when it starts being used.

Thanks indeed for your explanations.

It finally worked!

However, the Hasura GraphQL schema seems different from the one at https://gdev-indexer.p2p.legal/v1/graphql, so my test queries fail there.

I’m using image: h30x/duniter-squid-hasura:latest.

I also tried to find/test other indexer nodes, without success. Is there an updated list of them?

It would be great to have a way to find peers like in v1:
https://g1.duniter.org/network/peers
I mean with the full URL. I know there is a peers call in Polkadot, but it seems to be protected, because I always get an empty list.
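For context, the call being referred to is Substrate's system_peers JSON-RPC method. A request-body sketch (the method is real, but it is gated behind the unsafe RPC methods, which is why a public node answers with an error or nothing):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "system_peers",
  "params": []
}
```

It would be sent to the node's RPC port (9944 by default); only a node started with the unsafe RPC methods enabled (e.g. --rpc-methods unsafe) will answer it, and even then it lists p2p peers, not their public RPC URLs.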

Many doubts/requests in one message, sorry!

TIA

A post was split to a new topic: Help needed to install v2 software + suggestion to list endpoints

Probably because the p2p.legal instance is out of date. @poka can you install a newer version?

That's the one I'm also using. I added labels that can be seen on Docker Hub: https://hub.docker.com/r/h30x/duniter-squid-hasura/tags but it seems that I messed up, because all labels seem to have the same digest :thinking:. I'm definitely bad at Docker :sob:

For the moment, there is no automated mechanism to declare a new instance. This is the point of the “listing endpoints” topic (Liste des endpoints), but it isn't done yet. I started to manually list endpoints on the “about gdev” topic (About the ĞDev network) in wiki mode. Duniter Panel has a network view allowing you to check Duniter and indexer endpoints: Duniter panel, un client pour tester la v2 - #7 by HugoTrentesaux. For the moment it displays all the endpoints that you added in your settings.

The peers call is there to list p2p endpoints. It is only available on the unsafe API, so you can see it on a smith node, for instance. However, there is no such call to list RPC endpoints or GraphQL endpoints. We would have to implement one, but that's not as easy as it seems.

I updated mine a few weeks ago, but I will check tonight.

Hopefully I'll have fixed the Duniter Squid crash on out-of-range integers by then :sweat_smile:

"https://gdev-indexer.cgeek.fr/graphql",
"https://gdev-hasura.cgeek.fr/graphql",
"https://indexer-gdev.pini.fr/graphql",

These were indexers using the previous stack (Démarrez votre indexeur (tutoriel) - #28 by poka). That indexer is not maintained anymore; only squid should be used.

I'm trying to fix the bug. As you can see, my localhost indexer and coindufeu are re-synchronizing, while gyroide and yours are stuck at block 3501641, where the UD value exceeds 2^31.