By the way, I just got an answer ^^
Well, we could have kept going back and forth like this for a long time…
polux.re (but the old domain is still working)
However, in the RPC `localPeerId` and `localListenAddresses` I have:
I don’t know which one is right.
The latter, as I understand it. Where does the former peer ID come from?
The former comes from the `--public-addr` command-line option (I don’t remember how I generated it). But maybe it isn’t read, or maybe that peer ID is a dummy and the real one is generated independently.
You shouldn’t set it this way. Use only the address part before `/p2p`. See: ĞDev5 smiths - #26 by Pini
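The trimming described above can be scripted; a minimal sketch, assuming you copied a full multiaddr (with its `/p2p/<peer_id>` suffix) from a node’s logs. The host name and peer ID below are illustrative only:

```shell
# Illustrative multiaddr; host and peer ID are placeholders, not real nodes.
FULL_ADDR="/dns/gdev.example.org/tcp/30333/p2p/12D3KooWExamplePeerId"

# --public-addr should contain only the part before /p2p/:
# Duniter appends its own /p2p/<peer_id> at startup.
PUBLIC_ADDR="${FULL_ADDR%%/p2p/*}"

echo "$PUBLIC_ADDR"   # prints /dns/gdev.example.org/tcp/30333
```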
I have to fix my setup. Will you update the bootnodes in ĞDev5 or in ĞDev6?
OK for me, this key is being used as gdev-rpc right now, but I can change my node to a validator anyway.
However, I’m not sure about the `tcp/30333` part. I should be in the same configuration as @Pini.
edit : confirmed, please use:
It may be good to have at least one non-validator bootnode with a big bus factor, i.e. hosted by Axiom-Team. It would make it less urgent to update the bootnode list when people change domain names or peer IDs or crash their personal servers.
Do not set the protocol and the peerId manually; they are added automatically by Duniter2. @Pini documented it in his table: Duniter | Configure your node (Docker):

> The `/p2p/<peer_id>` part of the address shouldn’t be set in this variable. It is automatically added by Duniter.
I do not see why three different servers managed by A, B, C would have a higher bus factor than a single server managed by A&B&C.
I’m not saying we should replace, but add.
Nodes hosted by individuals may fluctuate randomly, and we’ll have to update the bootnode list quite often. If we forget to update it for a few years, there is a non-negligible probability that all the bootnodes will be down (maybe they just changed their port or domain).
So I propose a fallback server with a canonical address which is guaranteed to never change and does not depend on individuals only. Like
It should also have RPC disabled to avoid becoming the de facto default node for clients.
Edit: This node may be the same used by the indexer, if the indexer is hosted by Axiom.
When all the bootnodes are down, someone starting a node can add another one with the `--bootnodes` argument. What you suggest will be relevant for the Ğ1 network, but for ĞDev and ĞTest, I don’t see any problem with sticking to individual nodes only.
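For reference, overriding the built-in list is a standard Substrate-style flag; a hedged sketch (the chain name, host, port and peer ID below are placeholders, not real values):

```
# Sketch: start a node with an explicit bootnode, bypassing the built-in list.
# The multiaddr (host, port, peer ID) is a placeholder, not a real node.
duniter \
  --chain gdev \
  --bootnodes /dns/bootnode.example.org/tcp/30333/p2p/12D3KooWExamplePeerId
```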
This won’t work imho: `gdev-smith.pini.fr` does not have its WS/RPC APIs exposed, or if it does, you must avoid using `--rpc-methods=Unsafe` (for security reasons), in which case you can’t administrate your validator node.
It does if the RPC/WS APIs are exposed with the `--rpc-methods=Unsafe` option. Otherwise, indeed, there is no particular security issue with the P2P port.
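One common pattern for administrating a validator without exposing unsafe RPC publicly (a sketch, not necessarily the posters’ exact setups) is to keep the RPC/WS interface bound to localhost, which is the default for Substrate-based nodes, and reach it through an SSH tunnel:

```
# Sketch: unsafe RPC methods stay reachable only from the host itself
# (Substrate-based nodes bind RPC/WS to 127.0.0.1 by default).
duniter --validator --rpc-methods=Unsafe

# From an admin machine, tunnel the WS port (9944 by default) when needed:
#   ssh -L 9944:127.0.0.1:9944 my-validator-host
# then point your client at ws://127.0.0.1:9944
```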
What makes you think I manually added it? I was just giving you my bootnode URL.
This is the P2P endpoint which is bound to the port 30333 of my smith instance. Nothing related to the RPC API.
What is the `wss` meant for, then?
Also, I’m curious to know how you handle this mapping.
The CI released a new Docker image, `duniter/duniter-v2s:sha-a160bedd` (it’s OK to keep the commit hash for now, so we can easily find which branch we are dealing with).
Only bootnode issue:
💔 The bootnode you want to connect to at `/dns/gdev.polux.re/tcp/30333/p2p/12D3KooWQ9dAZWSNQLLb3WG1gtNYhqhu7BUpaCXpUACvCFeoq8Ff` provided a different peer ID `12D3KooWJmjLNArKNerjUgVyEQmHuZgNTBe2mQb6vjmuGR635Vuh` than the one you expect `12D3KooWQ9dAZWSNQLLb3WG1gtNYhqhu7BUpaCXpUACvCFeoq8Ff`
PS: I updated my archive node, and because I am not good at Docker and renamed my service, I also lost my volume and changed my peer ID.
Now you know
(will be changed later)
Because the pattern `p2p/<nodeId>` appears twice in the address I quoted. I think the second one was set by Duniter2 and the first one was set manually. But maybe there is a bug that appends it twice with different peer IDs? [edit: it’s a “copy-paste” bug]
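One way to catch this kind of copy-paste error before publishing a bootnode address is a quick sanity check on the number of `/p2p/` segments; a small sketch (the addresses are illustrative, not real nodes):

```shell
#!/usr/bin/env bash
# A valid bootnode multiaddr ends with exactly one /p2p/<peer_id> segment.
check_multiaddr() {
  local count
  count=$(printf '%s' "$1" | grep -o '/p2p/' | wc -l)
  [ "$count" -eq 1 ]
}

check_multiaddr "/dns/gdev.example.org/tcp/30333/p2p/12D3KooWAaa" \
  && echo "ok"
check_multiaddr "/dns/gdev.example.org/tcp/30333/p2p/12D3KooWAaa/p2p/12D3KooWBbb" \
  || echo "duplicated /p2p/ segment"
```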
My mistake, I wrongly copy/pasted on the forum I think.
See the duniter command-line help (`duniter --help`) and the libp2p addressing documentation:

```
--public-addr <PUBLIC_ADDR>...
        The public address that other nodes will use to connect to it.
        This can be used if there's a proxy in front of this node.

--listen-addr <LISTEN_ADDR>...
        Listen on this multiaddress. By default:
        If `--validator` is passed: `/ip4/0.0.0.0/tcp/<port>` and `/ip6/[::]/tcp/<port>`.
        Otherwise: `/ip4/0.0.0.0/tcp/<port>/ws` and `/ip6/[::]/tcp/<port>/ws`.
```
As I understand it, you can force libp2p to listen on a WebSocket instead of raw TCP. This way you can set up your node behind a web reverse proxy, which is what I did.
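Under that reading of the help text, the setup would look roughly like this (a sketch: the host name is illustrative, and the exact public port depends on your reverse-proxy configuration):

```
# Sketch: listen on WebSocket so an HTTP reverse proxy can route the P2P port,
# and advertise the public wss endpoint that the proxy terminates.
duniter \
  --listen-addr /ip4/0.0.0.0/tcp/30333/ws \
  --public-addr /dns/gdev.example.org/tcp/443/wss
```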
I’m using a personal fork of nginx-proxy (*) which allows mapping multiple ports of the same Docker container. Here is the related configuration:
- VIRTUAL_HOST=gdev.pini.fr
- VIRTUAL_PORT=30333,9933:/http,9944:/ws

- VIRTUAL_HOST=gdev-smith.pini.fr
- VIRTUAL_PORT=30333,9944:/ws
For my smith node I expose the WS/RPC interface as well, but access to it is protected with a certificate. This avoids having to set up an SSH tunnel when I want to work with it.
(*) There is ongoing work on upstream nginx-proxy to implement a similar feature, so hopefully my fork won’t be needed anymore at some point in the future.
I’d be in favor of releasing `duniter/duniter-v2s:latest` as well, so that:
- current image would be easily identifiable
- the image documentation would be pushed to dockerhub by the CI
This would greatly help wannabe smiths.
I’ve just updated my mirror node with your image and I see one more bootnode issue:
2023-03-10 09:35:26 💔 The bootnode you want to connect to at `/dns/gdev.coinduf.eu/tcp/30333/p2p/12D3KooWMCrfuSXdGvokGCjbuN9KZvq9N7WWBoPat91gG1PU4w2b` provided a different peer ID `12D3KooWAVY7T3eqGxyjCPbKfMKrkT55XR6BuBxpW5sEEJYAJu3n` than the one you expect `12D3KooWMCrfuSXdGvokGCjbuN9KZvq9N7WWBoPat91gG1PU4w2b`.
2023-03-10 09:35:26 💔 The bootnode you want to connect to at `/dns/gdev.polux.re/tcp/30333/p2p/12D3KooWQ9dAZWSNQLLb3WG1gtNYhqhu7BUpaCXpUACvCFeoq8Ff` provided a different peer ID `12D3KooWJmjLNArKNerjUgVyEQmHuZgNTBe2mQb6vjmuGR635Vuh` than the one you expect `12D3KooWQ9dAZWSNQLLb3WG1gtNYhqhu7BUpaCXpUACvCFeoq8Ff`.
EDIT: These error messages keep showing in the logs. This goes in favor of having few but resilient bootnodes.
Ok, then I should use a release tag if I want to use the CI. I’m doing it.
Yes, I did see it immediately after updating my archive node:
 you can use