I'm having an issue with the indexer. I can make it work partially with the big docker-compose file, slightly adapted to keep the ports internal, but not with the split one. The processor fails with this odd error:
My main doubt is about the “archive” mode. Let me explain: if I start to follow the documentation in: https://duniter.org/wiki/duniter-v2/
and when I continue with the “Run an indexer”, it mentions that “You need an archive node running on the same network” https://duniter.org/wiki/duniter-v2/indexers/duniter-squid/
I suppose this refers to the mirror node that is already running, but when I connect both containers, the previous error is raised.
But if I try the same with the Giga version, it seems to work better (at least it does not fail with that error).
Can I run a mirror node connected to duniter-squid, or should I install a Duniter v2 node in archive mode plus duniter-squid?
Comparing the docker compose files here and there I see that there is a flag:
- DUNITER_PRUNING_PROFILE=archive
in the Giga one that does not exist in the mirror documentation.
As all of this is currently running in the same VM with Docker, what do you recommend?
I hope my doubts are clear and that they help clarify the documentation for newcomers.
You need an archive node in order for the indexer to index all the states of the blockchain, that is, the history of transfers, certifications, and so on.
Please copy the error as text, it is easier to read and quote and it allows a search on the forum to point directly to other people with the same problem as you!
This means that the state data is no longer available in the Duniter node. You are likely missing `- DUNITER_PRUNING_PROFILE=archive` on your mirror node to make it an archive node. If you added it later, you must delete the volume and re-sync the node so that it keeps all states.
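For reference, a minimal sketch of what the mirror service could look like with the archive profile. The service name, image tag, chain name, and volume path here are illustrative, not taken from the official docs, so adapt them to your own compose file:

```yaml
services:
  duniter-mirror:
    # example image tag; use the one from the official documentation
    image: duniter/duniter-v2s:latest
    environment:
      # this is the flag that turns a mirror node into an archive node
      - DUNITER_PRUNING_PROFILE=archive
    volumes:
      # if the node was already synced without the archive profile,
      # this volume must be deleted and the node re-synced from scratch
      - duniter-data:/var/lib/duniter

volumes:
  duniter-data:
```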
Haha, not very clear at the moment, but I guess the doc also is not clear, and this discussion will help improve it!
An archive node is a mirror node with an “archive” pruning profile, meaning it does not discard states. I agree we should add a doc page for that.
I prefer to split docker-compose files for better manageability, but that’s mostly a personal preference. I know that my archive node is stable, and I do not want to accidentally remove its volume or shut it down. I want to manage updates separately. Moreover, I’m likely to run multiple instances of the indexer against the same archive node, and that’s not easy with a single docker-compose file (I would have to specify the services in every docker compose command). But I totally understand the “giga docker compose” approach, it’s just not mine.
Depends on who recommends it!! This is the decision tree I would suggest:
- Do you have a lot of disk space on your machine?
  - No: only run a mirror node.
  - Yes: set your mirror as archive.
- Do you still have a lot of disk space after the archive node?
  - No: consider running squid on another server on the same network as the archive node.
  - Yes: run squid on the same server as your archive node.
You can always use an external archive node for your squid indexer, but the main thing to consider is network traffic. That’s why I suggested using a Docker network on the same machine.
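Since both containers run on the same machine, here is a sketch of wiring squid to the archive node over a shared Docker network. All names here are illustrative (the image tag, the environment variable name, and the network name are assumptions, not the project's documented values); the point is only that containers on the same Docker network can reach each other by container name without exposing ports on the host:

```yaml
# squid side: join an external network shared with the archive node container
services:
  squid-processor:
    image: duniter/duniter-squid:latest   # example tag
    environment:
      # reach the archive node by its container name over the shared network;
      # the variable name and port are placeholders, check your squid config
      - RPC_ENDPOINT=ws://duniter-mirror:9944
    networks:
      - duniter-net

networks:
  duniter-net:
    external: true   # created once with: docker network create duniter-net
```

The archive node's compose file would declare the same external network so both stacks can be updated independently.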
So an archive node can be quite large, as can squid. For the moment datapod only contains Cesium+ profiles, but I expect it to grow when we include other info and when it starts being used.
I’m using image: h30x/duniter-squid-hasura:latest.
Also, I tried to find/test other indexer nodes without success. Is there an updated list of them?
It would be great to have a way to find peers like in v1: https://g1.duniter.org/network/peers
I mean with the full URL. I know there is a peers call in Polkadot, but it seems to be protected, because I always get an empty list.
Probably because the p2p.legal instance is out of date. @poka can you install a newer version?
That’s the one I’m also using. I added labels that can be seen on Docker Hub: https://hub.docker.com/r/h30x/duniter-squid-hasura/tags but it seems I messed up, because all labels seem to have the same digest. I’m definitely bad at Docker.
For the moment, there is no automated mechanism to declare a new instance. This is the point of the topic “listing endpoints” (Liste des endpoints), but it isn’t done yet. I started to manually list endpoints on the “about gdev” topic (About the ĞDev network) in wiki mode. Duniter Panel has a network view allowing to check duniter and indexers endpoints: Duniter panel, un client pour tester la v2 - #7 by HugoTrentesaux. For the moment it displays all endpoints that you added in your settings.
The peers call is there to list p2p endpoints. It is only available on unsafe API, so you can see it on a smith node for instance. However there is no such call to list rpc endpoints or graphql endpoints. We have to implement one but that’s not as easy as it seems.
I’m trying to fix the bug. As you can see, my localhost indexer and coindufeu are re-synchronizing, while gyroide and yours are stuck at block 3501641, where the UD value exceeds 2^31.