Need to fix GitLab CI for Duniter-v2s

The Duniter-v2s repo has a heavy CI pipeline that builds Duniter with an embedded chainspec and publishes an image to Docker Hub (set up by @pini). Elois stopped providing a server for this runner, so @poka moved it to the Axiom Team server. But, seemingly due to obscure tag practices, the CI is broken. @1000i100 told us how to fix it during the 11-13 November meeting in Bordeaux, but it still has to be done. @Moul, you seem to be working on it? Do you need help, from @immae for example?

The only runner with the dind label (axiom-team-ci-privileged-dind) was apparently not available.
Once I added the dind label to the redshift runner, the job started.


Can you explain to me why we are using tags? As I read in the GitLab CI documentation, tags allow you to select runners. But in our case, we only have three or four runners with similar capabilities (as far as I know), and no need to keep any of them available for an urgent task, so any job can go to any runner, no?

Right, but some of our runners can build containers and others can't, for instance. That's the purpose of the docker label.
Some runners might be dedicated to a project, with secrets in them, or you could dedicate a runner to a project for performance reasons.
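For context, this is roughly what tag-based routing looks like in a `.gitlab-ci.yml` (a sketch with illustrative job and tag names, not the actual Duniter-v2s configuration):

```yaml
# A job runs only on runners registered with ALL of its listed tags.
build_image:
  tags:
    - docker      # only runners configured to build containers
  script:
    - docker build -t duniter/duniter-v2s .

unit_tests:
  tags:
    - rust        # hypothetical tag for plain build runners
  script:
    - cargo test
```

Jobs with no `tags:` keyword go to any runner that accepts untagged jobs.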


I know nothing about GitLab runners, but in my mind they are only a kind of virtual machine, so I could not imagine that they would have different capabilities beyond the physical RAM and CPU limitations of their host machine.

There is special configuration to be done on the runner to be able to build containers.
I am not sure anymore, but it might be dind, Docker in Docker.

Yep, dind (Docker in Docker) needs more privileges (it runs in a Docker container started by root), so these runners expose a bigger attack surface if a malicious build is started on them. If I remember well, I configured it to run only on protected branches and tags. Perhaps that's what is blocking somewhere in the git flow.
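As a sketch, a dind setup typically has two pieces: on the runner side, `privileged = true` must be set in the `[runners.docker]` section of the runner's `config.toml`; on the job side, the dind service is declared in `.gitlab-ci.yml`. The fragment below is illustrative (image versions and job name are assumptions, not the actual Axiom Team configuration):

```yaml
# Job side: talk to a Docker daemon running as a sidecar service.
# Requires a runner registered with privileged = true.
build_image:
  tags:
    - dind
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"   # TLS between the job and the dind service
  script:
    - docker build -t duniter/duniter-v2s .
```

The privileged flag is exactly why restricting such runners to protected branches and tags makes sense: a privileged container can escape to the host.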


Ok, now that the CI is running, it fails because of a password issue: deploy_docker_release_sha (#76660) · Jobs · nodes / rust / Duniter v2S · GitLab
Does someone know how to fix it?

This variable (see §Variables in the CI/CD settings) is only available on protected branches or tags (§Protected branches). The branch should be named release/* for the variable to be available in the CI.


I’m not comfortable with this kind of limitation. As @poka told me, the CI should be here to make the developers’ work easier, but as elois implemented it, it actually makes it a nightmare (see Bootstraper une ĞDev and Difficultés avec l'outillage for examples of what an undocumented CI can make me feel). If nobody is here to manage this part, I’m in favor of removing all the limitations so that @poka and I can publish Docker images for smiths to use and for developers to test (like duniter-indexer, gecko…).


Without this feature, the Docker Hub password could be displayed in the CI logs and stolen by anyone with the right to push a branch to this repository. I would not take the risk of removing this security feature.

One option could be to replace the Docker Hub publication with the GitLab container registry. The latter does not require a protected CI/CD variable, since the authentication is integrated.
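For illustration, a minimal sketch of publishing to the GitLab container registry with only GitLab's predefined variables (job name and image versions are assumptions):

```yaml
publish_registry:
  image: docker:24
  services:
    - docker:24-dind
  script:
    # CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_IMAGE and CI_JOB_TOKEN
    # are predefined by GitLab, so no protected secret is needed.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
```

`CI_JOB_TOKEN` is scoped to the job and expires with it, which is what removes the need for a long-lived protected password.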

I am not the one to take this decision. Publishing on Docker Hub was done on purpose to make the images available on the main platform. This is part of the same vision as having the duniter-v2s repository on GitHub for more visibility.


Ok, thanks for the explanations. So @poka only needs to rename his branch to release/* for the pipeline to pass?


It looks like I arrived after the battle, but if you still need help I’m around.

I confirm that release/* and master are the only branches that can access the Docker Hub password, and those branches are reserved to “high privileged” (“maintainer”) users for a good reason: we don’t want to limit contributions from random people, but we don’t want them to run an “echo $PASSWORD” either.


It seems the current CI is highly inefficient: each step installs Rust, downloads all the dependencies, and rebuilds them from scratch…

I think the cache should be kept between non-concurrent pipelines. Cargo is smart enough to rebuild only what needs to be rebuilt, and Substrate already takes long enough to build.

Is this possible?
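GitLab's `cache:` keyword can persist directories between pipelines on the same runner. A sketch of how a Cargo cache could be shared (the cache key and paths are assumptions, not the repo's actual configuration):

```yaml
variables:
  # Keep downloaded crates inside the project dir so GitLab can cache them;
  # by default CARGO_HOME lives outside the build directory.
  CARGO_HOME: $CI_PROJECT_DIR/.cargo

cache:
  key:
    files:
      - Cargo.lock   # reuse the cache as long as dependencies are unchanged
  paths:
    - .cargo/registry/
    - target/
```

One caveat: with several runners and no shared cache storage (e.g. S3-compatible distributed caching), each runner keeps its own copy, so the first pipeline on each runner still builds from scratch.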

Edit: The CI is very good and very useful; what bothers me is the “to see what happens when we flip the switch on the 15th floor, let’s demolish and rebuild the whole building” side of it, with the latency, the wasted energy, and the server occupation and wear that come with it.
