Nodes stuck on a block

My validator node and my RPC node are both stuck on the same block.

Validator logs look like this:

The logs show syncing messages, but sometimes errors as well:
2023-02-18 18:59:00 error sending packet on iface 10.0.0.90: Operation not permitted (os error 1)    
2023-02-18 18:59:00 error sending packet on iface 10.0.0.90: Operation not permitted (os error 1)    
2023-02-18 18:59:03 ⚙️  Syncing  0.0 bps, target=#953015 (6 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 0.4kiB/s ⬆ 0.2kiB/s    
2023-02-18 18:59:08 ⚙️  Syncing  0.0 bps, target=#953016 (6 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 1.1kiB/s ⬆ 0.4kiB/s    
2023-02-18 18:59:13 ⚙️  Syncing  0.0 bps, target=#953017 (6 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 0.8kiB/s ⬆ 0.3kiB/s    
2023-02-18 18:59:18 ⚙️  Syncing  0.0 bps, target=#953018 (6 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 0.9kiB/s ⬆ 0.4kiB/s

My RPC node is also stuck, with the same kind of logs:

2023-02-18 19:04:00 error sending packet on iface 10.0.0.88: Operation not permitted (os error 1)    
2023-02-18 19:04:03 ⚙️  Syncing  0.0 bps, target=#953065 (10 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 3.0kiB/s ⬆ 0.6kiB/s    
2023-02-18 19:04:08 ⚙️  Syncing  0.0 bps, target=#953066 (10 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 0.9kiB/s ⬆ 0.4kiB/s    
2023-02-18 19:04:13 ⚙️  Syncing  0.0 bps, target=#953067 (10 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 1.1kiB/s ⬆ 0.7kiB/s    
2023-02-18 19:04:18 ⚙️  Syncing  0.0 bps, target=#953068 (10 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 1.0kiB/s ⬆ 0.7kiB/s    
2023-02-18 19:04:23 ⚙️  Syncing  0.0 bps, target=#953068 (10 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 0.4kiB/s ⬆ 0.4kiB/s    
2023-02-18 19:04:28 ⚙️  Syncing  0.0 bps, target=#953069 (10 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 1.1kiB/s ⬆ 0.7kiB/s

I'm keeping those nodes stuck in case somebody wants to investigate… or give me tools to do so :wink:.
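For reference, this is the kind of quick check that could be run against a node's JSON-RPC endpoint (these are standard Substrate RPC methods; the HTTP RPC port 9933 is not exposed in my compose below, so the port would have to be opened or adapted):

# Overall health: peer count and whether the node believes it is syncing
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"system_health","params":[]}' \
  http://127.0.0.1:9933

# Sync state: current best block vs highest block known from peers
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"system_syncState","params":[]}' \
  http://127.0.0.1:9933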

[Edit]
A clue? I sometimes lose internet access, which may disturb the sync.
Docker compose (with Ansible variables):

  # ===== RPC =====
  duniter_rpc:
    image: "{{ duniter_v2s_rpc_image }}"
    restart: unless-stopped
    ports:
      # telemetry
      - 9615:9615
      # rpc
#      - 9933:9933
      # rpc-ws
      - 9944:9944
      # p2p
      - 30333:30333
    volumes:
      - "{{ duniter_v2s_rpc_data_path }}:/var/lib/duniter"
    environment:
      - DUNITER_CHAIN_NAME=gdev
    command:
      - "--node-key-file=/var/lib/duniter/node.key"
      - "--public-addr"
      - "/dns/{{ domain_name }}/tcp/30333/p2p/{{ duniter_v2s_rpc_peer_id }}"
#      - "--ws-external"
      - "--rpc-cors=all"
      - "--pruning=14400"
      - "--name"
      - "vit-rpc"

  # ===== VALIDATOR =====
  duniter_validator:
    image: "{{ duniter_v2s_validator_image }}"
    restart: unless-stopped
    ports:
      # telemetry
      - 9616:9615
      # rpc
      #- 9934:9933
      # rpc-ws
      - 9945:9944
      # p2p
      - 30334:30333
    volumes:
      - "{{ duniter_v2s_validator_data_path }}:/var/lib/duniter"
    environment:
      - DUNITER_CHAIN_NAME=gdev
    command:
      - "--node-key-file=/var/lib/duniter/node.key"
      - "--rpc-cors=all"
      - "--rpc-methods=Unsafe"
      - "--validator"
      - "--pruning=14400"
      - "--name"
      - "vit-validator"

Very strange error. I used to get this at startup, and had to change permissions on the volume folder. Which Docker image do you use? (Or do you use a custom build?)
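(Something along these lines, for the record; the container name and data path are placeholders, and the UID/GID depend on the user the image runs Duniter as:)

# Find out which UID/GID the duniter process uses inside the container
docker exec <duniter_container> id

# Give that UID/GID ownership of the data directory on the host
sudo chown -R <uid>:<gid> /path/to/duniter/data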

No custom build. Only the Duniter V2S docker image for gdev.

[
    {
        "Id": "sha256:ca98dc1918e806d4b62e5a58ec3cdc61e5be74625865f439b7a6faff2f5fdee6",
        "RepoTags": [
            "duniter/duniter-v2s:sha-f442e6eb"
        ],
        "RepoDigests": [
            "duniter/duniter-v2s@sha256:64508c0d650e4e1a3f0f986835c3979a58479e7bac8d85efed84b964f146e88c"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2022-12-15T17:38:10.804524013Z",
        "Container": "",

OK, can you try with sha-latest or 3189d769f70ec1f6c05282211e4d6ab4c2de6361? @Pini has done a great job in the meantime, and it's much simpler to use and less buggy.
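(Concretely that just means changing the image tag in the compose file and redeploying, roughly:)

# After updating the image reference in docker-compose.yml:
docker compose pull && docker compose up -d
# or, if the stack is deployed on a swarm:
docker stack deploy -c docker-compose.yml <stack_name>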


The image that @HugoTrentesaux and I use is pinidh/duniter-v2s:sha-latest. For now it has only been uploaded to my own account.

EDIT: by the way, it is probably time to upload it as duniter/duniter-v2s:latest to simplify things.


Good catch, I had forgotten it was image: pinidh/duniter-v2s:sha-latest. But if I remember correctly, I had created a branch named "release/*" on that commit so that GitLab would generate an image. Maybe the pipeline failed, I don't remember, I have too many things on my mind.

I have edited my first post with the docker compose. @Pini, if you see anything odd…


You posted your Ansible template, but in that case also post your playbook :smiley:
(otherwise we don't have the version number)

The new doc is here: Duniter | Run a mirror node. You will be able to greatly simplify your docker-compose.yml.

There are some. Look at the doc, it's much simpler now. In particular, there's no longer any need to retrieve the peer ID.
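To give the general idea (this is only a sketch, not a copy of the doc; the environment variable names are the ones that appear further down in this thread, and the ports, paths and node name are placeholders):

# A mirror node is now configured mostly through environment variables
docker run -d \
  -v /path/to/duniter/data:/var/lib/duniter \
  -p 9944:9944 -p 30333:30333 \
  -e DUNITER_CHAIN_NAME=gdev \
  -e DUNITER_NODE_NAME=my-mirror \
  pinidh/duniter-v2s:sha-latest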


Not possible, it's a complete Ansible role for my whole server. I'll use whatever image you want, I'm not picky. So pini sha-latest it is ;).

OK, I'll look at the doc ASAP.
[edit] Well damn, we're speaking French now :joy:


Sorry, I was the first one to start speaking French :slight_smile:

My personal strategy for choosing the language:

personal, not intended to be useful to future readers → best common language with the immediate interlocutors (e.g. specific or temporary debugging, a PR discussion)
otherwise → English

Node restarted with a docker-compose based on the new doc.

Image: pinidh/duniter-v2s:sha-latest

There is again an error on a network interface, but the sync goes well…

Node key file '/var/lib/duniter/node.key' exists.
Node peer ID is '12D3KooWDuzVbBcnnEEKh32R6MUKvQvLENyzLHHUfg4kTyUQq7hp'.
Starting duniter with parameters: --name vit-rpc --node-key-file /var/lib/duniter/node.key --rpc-cors all --chain gdev -d /var/lib/duniter --unsafe-rpc-external --unsafe-ws-external
2023-02-20 15:49:40 Duniter    
2023-02-20 15:49:40 ✌️  version 0.3.0-3189d769f70    
2023-02-20 15:49:40 ❤️  by Axiom-Team Developers <https://axiom-team.fr>, 2021-2023    
2023-02-20 15:49:40 📋 Chain specification: Ğdev    
2023-02-20 15:49:40 🏷  Node name: vit-rpc    
2023-02-20 15:49:40 👤 Role: FULL    
2023-02-20 15:49:40 💾 Database: ParityDb at /var/lib/duniter/chains/gdev/paritydb/full    
2023-02-20 15:49:40 ⛓  Native runtime: gdev-400 (duniter-gdev-1.tx1.au1)    
2023-02-20 15:53:29 🏷  Local node identity is: 12D3KooWDuzVbBcnnEEKh32R6MUKvQvLENyzLHHUfg4kTyUQq7hp    
2023-02-20 15:53:29 💻 Operating system: linux    
2023-02-20 15:53:29 💻 CPU architecture: x86_64    
2023-02-20 15:53:29 💻 Target environment: gnu    
2023-02-20 15:53:29 💻 CPU: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz    
2023-02-20 15:53:29 💻 CPU cores: 4    
2023-02-20 15:53:29 💻 Memory: 7888MB    
2023-02-20 15:53:29 💻 Kernel: 4.15.0-142-generic    
2023-02-20 15:53:29 💻 Linux distribution: Debian GNU/Linux 10 (buster)    
2023-02-20 15:53:29 💻 Virtual machine: no    
2023-02-20 15:53:29 📦 Highest known block at #945735    
2023-02-20 15:53:29 〽️ Prometheus exporter started at 127.0.0.1:9615    
2023-02-20 15:53:29 Running JSON-RPC HTTP server: addr=0.0.0.0:9933, allowed origins=None    
2023-02-20 15:53:29 Running JSON-RPC WS server: addr=0.0.0.0:9944, allowed origins=None    
2023-02-20 15:53:29 ***** Duniter has fully started *****    
2023-02-20 15:53:39 creating instance on iface 10.0.0.94    
2023-02-20 15:53:39 creating instance on iface 172.18.0.3    
2023-02-20 15:53:39 creating instance on iface 10.0.2.45    
2023-02-20 15:53:39 error sending packet on iface 10.0.0.94: Operation not permitted (os error 1)    
2023-02-20 15:53:39 error sending packet on iface 10.0.0.94: Operation not permitted (os error 1)    
2023-02-20 15:53:39 💤 Idle (0 peers), best: #945735 (0x30a3…1fe2), finalized #917920 (0x2758…6af6), ⬇ 0 ⬆ 0    
2023-02-20 15:53:40 🔍 Discovered new external address for our node: /ip4/80.67.176.219/tcp/30333/ws/p2p/12D3KooWDuzVbBcnnEEKh32R6MUKvQvLENyzLHHUfg4kTyUQq7hp    
2023-02-20 15:53:44 ⚙️  Syncing 14.1 bps, target=#979879 (9 peers), best: #945806 (0x5208…1c56), finalized #917920 (0x2758…6af6), ⬇ 159.4kiB/s ⬆ 9.6kiB/s    
2023-02-20 15:53:49 ⚙️  Syncing  8.7 bps, target=#979880 (9 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 1.6kiB/s ⬆ 0.2kiB/s    
2023-02-20 15:53:54 ⚙️  Syncing  0.0 bps, target=#979881 (9 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 2.1kiB/s ⬆ 0.9kiB/s    
2023-02-20 15:53:59 ⚙️  Syncing  0.0 bps, target=#979881 (9 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 2.4kiB/s ⬆ 0.8kiB/s    
2023-02-20 15:54:04 ⚙️  Syncing  0.0 bps, target=#979882 (9 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 1.3kiB/s ⬆ 0.7kiB/s    
2023-02-20 15:54:09 ⚙️  Syncing  0.0 bps, target=#979883 (9 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 0.6kiB/s ⬆ 46 B/s    
2023-02-20 15:54:14 ⚙️  Syncing  0.0 bps, target=#979884 (9 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 3.2kiB/s ⬆ 1.6kiB/s    
2023-02-20 15:54:16 discovered: 12D3KooWB3zoZgS9GmzDMJSkgRXd3eiGft5HQAB84GEBj9GpZY9V /ip4/10.0.2.46/tcp/30333    
2023-02-20 15:54:16 discovered: 12D3KooWB3zoZgS9GmzDMJSkgRXd3eiGft5HQAB84GEBj9GpZY9V /ip4/127.0.0.1/tcp/30333    
2023-02-20 15:54:16 discovered: 12D3KooWB3zoZgS9GmzDMJSkgRXd3eiGft5HQAB84GEBj9GpZY9V /ip4/10.0.0.95/tcp/30333    
2023-02-20 15:54:16 discovered: 12D3KooWB3zoZgS9GmzDMJSkgRXd3eiGft5HQAB84GEBj9GpZY9V /ip4/172.18.0.7/tcp/30333    
2023-02-20 15:54:16 error sending packet on iface 10.0.0.94: Operation not permitted (os error 1)    
2023-02-20 15:54:19 ⚙️  Syncing  0.0 bps, target=#979885 (10 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 1.4kiB/s ⬆ 1.3kiB/s    
2023-02-20 15:54:24 ⚙️  Syncing  0.0 bps, target=#979886 (10 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 1.3kiB/s ⬆ 0.8kiB/s    
2023-02-20 15:54:30 ⚙️  Syncing  0.0 bps, target=#979886 (10 peers), best: #945850 (0x4ba1…4470), finalized #917920 (0x2758…6af6), ⬇ 0.5kiB/s ⬆ 0.5kiB/s

Your instance seems to have 3 network interfaces:

2023-02-20 15:53:39 creating instance on iface 10.0.0.94    
2023-02-20 15:53:39 creating instance on iface 172.18.0.3    
2023-02-20 15:53:39 creating instance on iface 10.0.2.45    

Only one of them generates this error:

2023-02-20 15:53:39 error sending packet on iface 10.0.0.94: Operation not permitted (os error 1)

Very strange. I did not see 3 network interfaces on the host…

If you give me the names of tools to investigate further why the Docker container gets 3 network interfaces, I will be happy to check.

vit@z68-gen3:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether bc:5f:f4:19:ba:ce brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.105/24 brd 192.168.1.255 scope global enp5s0
       valid_lft forever preferred_lft forever
    inet6 fe80::8da7:f302:d13a:234/64 scope link 
       valid_lft forever preferred_lft forever
3: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:52:32:3e:4d brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
    inet6 fe80::42:52ff:fe32:3e4d/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:93:c3:b4:b6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
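A way to look at it from the container side could be something like this (the container name is whatever docker ps reports, and the ip tool has to be present in the image):

# Interfaces as seen from inside the container
docker exec <duniter_rpc_container> ip addr

# Networks the container is attached to
docker inspect -f '{{json .NetworkSettings.Networks}}' <duniter_rpc_container>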

[EDIT]
Mystery solved for the error message: my Docker host runs in swarm mode. Swarm adds the stack network (for the group of containers declared in the docker-compose file), as well as a network called ingress, which is the one causing the error. But this doesn't seem to bother Duniter v2s.
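This can be double-checked by listing the swarm overlay networks; the subnet of the ingress network should match the 10.0.0.x address shown in the error:

# Overlay networks created by swarm (the stack network plus ingress)
docker network ls --filter driver=overlay

# Subnet and attached containers of the ingress network
docker network inspect ingress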


After several weeks of flawless operation, my RPC node is now stuck as well, at block #981318. Nothing in the log:

2023-02-20 18:17:19 ✨ Imported #981315 (0xe20d…e5e2)                                                                                                                                                              
2023-02-20 18:17:22 💤 Idle (6 peers), best: #981315 (0xe20d…e5e2), finalized #917920 (0x2758…6af6), ⬇ 2.8kiB/s ⬆ 0.6kiB/s                                                                                         
2023-02-20 18:17:22 💤 Idle (6 peers), best: #981315 (0xe20d…e5e2), finalized #917920 (0x2758…6af6), ⬇ 2.8kiB/s ⬆ 0.6kiB/s    
2023-02-20 18:17:24 ✨ Imported #981316 (0x4cf7…58ed)    
2023-02-20 18:17:27 💤 Idle (6 peers), best: #981316 (0x4cf7…58ed), finalized #917920 (0x2758…6af6), ⬇ 2.6kiB/s ⬆ 0.4kiB/s                                                                                         
2023-02-20 18:17:30 ✨ Imported #981317 (0x8d14…9cff)                                                                                                                                                              
2023-02-20 18:17:32 💤 Idle (6 peers), best: #981317 (0x8d14…9cff), finalized #917920 (0x2758…6af6), ⬇ 2.1kiB/s ⬆ 0.4kiB/s                                                                                         
2023-02-20 18:17:37 ✨ Imported #981318 (0x63bc…bf47)                                                                                                                                                              
2023-02-20 18:17:37 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 2.0kiB/s ⬆ 0.3kiB/s                                                                                         
2023-02-20 18:17:42 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 2.7kiB/s ⬆ 2.9kiB/s    
2023-02-20 18:17:47 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 1.4kiB/s ⬆ 0.2kiB/s                                                                                         
2023-02-20 18:17:52 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 1.8kiB/s ⬆ 0.3kiB/s    
2023-02-20 18:17:57 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 2.3kiB/s ⬆ 0.2kiB/s                                                                                         
2023-02-20 18:18:02 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 2.0kiB/s ⬆ 0.2kiB/s    
2023-02-20 18:18:07 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 2.6kiB/s ⬆ 0.5kiB/s                                                                                         
2023-02-20 18:18:12 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 1.4kiB/s ⬆ 0.2kiB/s    
2023-02-20 18:18:17 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 1.7kiB/s ⬆ 0.3kiB/s                                                                                         
2023-02-20 18:18:22 💤 Idle (6 peers), best: #981318 (0x63bc…bf47), finalized #917920 (0x2758…6af6), ⬇ 1.8kiB/s ⬆ 0.2kiB/s    

Restarting it to see if its state improves…

Nope. It catches up with the other nodes then stalls:

2023-02-20 18:34:53 Running JSON-RPC HTTP server: addr=0.0.0.0:9933, allowed origins=None    
2023-02-20 18:34:53 Running JSON-RPC WS server: addr=0.0.0.0:9944, allowed origins=None    
2023-02-20 18:34:53 ***** Duniter has fully started *****    
2023-02-20 18:34:53 discovered: 12D3KooWCEVBrLK9g8unsHLj8wWzobbwArDZMMwg5LeCbiWhB3ug /ip4/172.18.0.16/tcp/30333/ws    
2023-02-20 18:34:53 🔍 Discovered new external address for our node: /ip4/5.135.160.24/tcp/30333/ws/p2p/12D3KooWH8fb9TsHvvXfD9cWLJtNuiMGGBTZ3a7gkxV4aXcsfDxG    
2023-02-20 18:34:58 💤 Idle (6 peers), best: #981329 (0xa628…9f65), finalized #917920 (0x2758…6af6), ⬇ 19.5kiB/s ⬆ 5.1kiB/s    
2023-02-20 18:35:03 💤 Idle (6 peers), best: #981352 (0xbad8…1082), finalized #917920 (0x2758…6af6), ⬇ 3.5kiB/s ⬆ 0.3kiB/s    
2023-02-20 18:35:08 💤 Idle (6 peers), best: #981361 (0xc82f…a1d8), finalized #917920 (0x2758…6af6), ⬇ 2.4kiB/s ⬆ 0.2kiB/s    
2023-02-20 18:35:12 Accepting new connection 1/100
2023-02-20 18:35:13 💤 Idle (6 peers), best: #981386 (0xfbe4…cfd9), finalized #917920 (0x2758…6af6), ⬇ 2.8kiB/s ⬆ 0.5kiB/s    
2023-02-20 18:35:18 💤 Idle (6 peers), best: #981396 (0x9edf…5267), finalized #917920 (0x2758…6af6), ⬇ 2.1kiB/s ⬆ 0.4kiB/s    
2023-02-20 18:35:23 💤 Idle (6 peers), best: #981419 (0x5de7…3872), finalized #917920 (0x2758…6af6), ⬇ 2.1kiB/s ⬆ 68 B/s    
2023-02-20 18:35:28 💤 Idle (6 peers), best: #981446 (0xa81a…fca0), finalized #917920 (0x2758…6af6), ⬇ 2.3kiB/s ⬆ 0.4kiB/s    
2023-02-20 18:35:33 💤 Idle (6 peers), best: #981468 (0x3590…f6cd), finalized #917920 (0x2758…6af6), ⬇ 2.2kiB/s ⬆ 0.2kiB/s    
2023-02-20 18:35:37 ✨ Imported #981493 (0xd5ff…8496)    
2023-02-20 18:35:38 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.0kiB/s ⬆ 0.1kiB/s    
2023-02-20 18:35:43 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.4kiB/s ⬆ 0.6kiB/s    
2023-02-20 18:35:48 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 1.3kiB/s ⬆ 0.1kiB/s    
2023-02-20 18:35:53 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 0.7kiB/s ⬆ 0    
2023-02-20 18:35:58 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 4.0kiB/s ⬆ 0.8kiB/s    
2023-02-20 18:36:03 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 1.5kiB/s ⬆ 0.2kiB/s    
2023-02-20 18:36:08 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.2kiB/s ⬆ 0.2kiB/s    
2023-02-20 18:36:13 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.0kiB/s ⬆ 0.3kiB/s    
2023-02-20 18:36:18 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.1kiB/s ⬆ 0.5kiB/s    

Any idea?

EDIT: It has suddenly caught up again, reliably this time. Maybe I should have waited a bit before restarting it:

2023-02-20 18:40:03 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.5kiB/s ⬆ 0.3kiB/s    
2023-02-20 18:40:08 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.7kiB/s ⬆ 0.1kiB/s    
2023-02-20 18:40:13 ⚙️  Syncing  0.0 bps, target=#981541 (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 1.6kiB/s ⬆ 0.4kiB/s    
2023-02-20 18:40:18 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 2.5kiB/s ⬆ 0.4kiB/s    
2023-02-20 18:40:23 💤 Idle (6 peers), best: #981493 (0xd5ff…8496), finalized #917920 (0x2758…6af6), ⬇ 1.4kiB/s ⬆ 0           
2023-02-20 18:40:27 ✨ Imported #981497 (0xa226…d5ac)    
2023-02-20 18:40:27 ✨ Imported #981498 (0xad26…61e6)                                                                                                                                                              
2023-02-20 18:40:27 ✨ Imported #981498 (0xde9d…269c)    
2023-02-20 18:40:28 💤 Idle (6 peers), best: #981499 (0x3781…0bc0), finalized #981498 (0xad26…61e6), ⬇ 2.2kiB/s ⬆ 0.7kiB/s    
2023-02-20 18:40:28 ✨ Imported #981501 (0x9479…3b81)    
2023-02-20 18:40:29 ✨ Imported #981503 (0x9404…97fe)                                                                                                                                                              
2023-02-20 18:40:30 ✨ Imported #981510 (0x934e…c951)    
...
2023-02-20 18:42:23 💤 Idle (6 peers), best: #981562 (0x93ba…2f23), finalized #981560 (0x6139…0114), ⬇ 5.0kiB/s ⬆ 6.2kiB/s    
2023-02-20 18:42:24 ✨ Imported #981563 (0x000f…3c30)    
2023-02-20 18:42:24 ✨ Imported #981563 (0x9095…819f)    
2023-02-20 18:42:28 💤 Idle (6 peers), best: #981563 (0x000f…3c30), finalized #981561 (0x838b…f35a), ⬇ 4.9kiB/s ⬆ 5.7kiB/s    
2023-02-20 18:42:30 ✨ Imported #981564 (0x85bf…b2de)    
2023-02-20 18:42:33 💤 Idle (6 peers), best: #981564 (0x85bf…b2de), finalized #981562 (0x93ba…2f23), ⬇ 3.6kiB/s ⬆ 5.9kiB/s    
2023-02-20 18:42:36 ✨ Imported #981565 (0xd09b…42df)    
2023-02-20 18:42:38 💤 Idle (6 peers), best: #981565 (0xd09b…42df), finalized #981563 (0x000f…3c30), ⬇ 3.9kiB/s ⬆ 5.3kiB/s    
2023-02-20 18:42:42 ✨ Imported #981566 (0x5cbd…de6a)    
2023-02-20 18:42:43 💤 Idle (6 peers), best: #981566 (0x5cbd…de6a), finalized #981563 (0x000f…3c30), ⬇ 4.8kiB/s ⬆ 6.1kiB/s    
2023-02-20 18:42:48 ✨ Imported #981567 (0x110c…a2cf)    
2023-02-20 18:42:48 ♻️  Reorg on #981567,0x110c…a2cf to #981567,0x6ed0…cad9, common ancestor #981566,0x5cbd…de6a    
2023-02-20 18:42:48 ✨ Imported #981567 (0x6ed0…cad9)    
2023-02-20 18:42:48 💤 Idle (6 peers), best: #981567 (0x6ed0…cad9), finalized #981564 (0x85bf…b2de), ⬇ 6.1kiB/s ⬆ 5.8kiB/s    

EDIT2: And I notice at the same time that finalization has resumed! Wooooot :tada:


As we can see on the telemetry page, only the vit-rpc node has synced well and is following the chain. The vit-validator node is stuck.
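(The same thing can be checked locally from the service logs, without the telemetry page; the service names below assume the <stack>_<service> naming of a compose file deployed as a swarm stack, as in my setup:)

# Last reported best/finalized block of each node
docker service logs --tail 3 home_duniter_validator
docker service logs --tail 3 home_duniter_rpc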


My validator node is still stuck on an old block.
It doesn't seem to connect to any peer, and the log displays this repeatedly:

home_duniter_validator.1.flto5wdr5vpz@z68-gen3    | 2023-05-14 10:36:40 💤 Idle (0 peers), best: #1985800 (0x1ef4…52d5), finalized #1985512 (0xe7dc…1bf6), ⬇ 0 ⬆ 0    
home_duniter_validator.1.flto5wdr5vpz@z68-gen3    | 2023-05-14 10:36:45 💤 Idle (0 peers), best: #1985800 (0x1ef4…52d5), finalized #1985512 (0xe7dc…1bf6), ⬇ 0 ⬆ 0    
home_duniter_validator.1.flto5wdr5vpz@z68-gen3    | 2023-05-14 10:36:50 💤 Idle (0 peers), best: #1985800 (0x1ef4…52d5), finalized #1985512 (0xe7dc…1bf6), ⬇ 0 ⬆ 0    
home_duniter_validator.1.flto5wdr5vpz@z68-gen3    | 2023-05-14 10:36:55 💤 Idle (0 peers), best: #1985800 (0x1ef4…52d5), finalized #1985512 (0xe7dc…1bf6), ⬇ 0 ⬆ 0    
home_duniter_validator.1.flto5wdr5vpz@z68-gen3    | 2023-05-14 10:37:00 💤 Idle (0 peers), best: #1985800 (0x1ef4…52d5), finalized #1985512 (0xe7dc…1bf6), ⬇ 0 ⬆ 0    
home_duniter_validator.1.flto5wdr5vpz@z68-gen3    | 2023-05-14 10:37:05 💤 Idle (0 peers), best: #1985800 (0x1ef4…52d5), finalized #1985512 (0xe7dc…1bf6), ⬇ 0 ⬆ 0
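With zero peers, a first thing to check is whether the P2P port is listening on the host and reachable from outside (the host port and the public DNS name below are the ones used in my setup):

# Is the mapped P2P port listening on the host?
sudo ss -tlnp | grep 30334

# From another machine: is the advertised public address reachable?
nc -vz vit.fdn.org 30334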

The Docker image has been updated today, but the restarted node is still stuck:

                "ContainerSpec": {
                    "Image": "pinidh/duniter-v2s:sha-latest@sha256:f6645154991c67a2dfd7665b9a4f01eecfe2d5045e5cac16acef702bf9644a9d",
                    "Labels": {
                        "com.docker.stack.namespace": "home"
                    },
                    "Env": [
                        "DUNITER_CHAIN_NAME=gdev",
                        "DUNITER_NODE_NAME=vit-validator",
                        "DUNITER_VALIDATOR=true"
                    ],

Some more investigation on my validator node with no peers:

docker run -v /home/vit/duniter_validator:/var/lib/duniter -p 9616:9615 -p 127.0.0.1:9945:9944 -p 30334:30333 -e DUNITER_CHAIN_NAME=gdev -e DUNITER_VALIDATOR=true -e DUNITER_NODE_NAME=vit-validator -e DUNITER_PUBLIC_ADDR=/dns/vit.fdn.org/tcp/30334 -e DUNITER_LISTEN_ADDR=/ip4/0.0.0.0/tcp/30333 pinidh/duniter-v2s:sha-latest --log sub-libp2p

Now the logs are more verbose:

2023-06-19 13:09:44.142 TRACE tokio-runtime-worker sub-libp2p: Addresses of PeerId("12D3KooWMYJzk1FfBZjEAuEvwUnH2Luj5Bq4ouLX1tgZBPpFegaB"): ["/dns/gdev.p2p.legal/tcp/30334", "/dns/gdev.p2p.legal/tcp/30334"]    
2023-06-19 13:09:44.143 TRACE tokio-runtime-worker sub-libp2p: Libp2p => Dialing(PeerId("12D3KooWMYJzk1FfBZjEAuEvwUnH2Luj5Bq4ouLX1tgZBPpFegaB"))    
2023-06-19 13:09:44.143 TRACE tokio-runtime-worker sub-libp2p: Addresses of PeerId("12D3KooWL1SQXYUWSHLczucVwk3nX4Hk3N8xo5ZR5w8RtuFKCYnW"): ["/dns/gdev.trentesaux.fr/tcp/30334", "/dns/gdev.trentesaux.fr/tcp/30334"]

There is a problem here: the port I use on the host and as the external public port (30334) is also used to connect to the other nodes ??!! :sweat_smile:

[EDIT]
After some tests, it seems that the only two bootstrap nodes really are configured with port 30334 somewhere, but they are unreachable…
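(For the record, the reachability of those bootstrap addresses can be tested directly; the hostnames and port are the ones printed in the libp2p log above:)

# Do the bootstrap nodes answer on the advertised port?
nc -vz gdev.p2p.legal 30334
nc -vz gdev.trentesaux.fr 30334

# Do the names resolve at all?
dig +short gdev.p2p.legal
dig +short gdev.trentesaux.fr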
