On 8/31/23 01:38, wolf wrote:
On 2023-08-30 19:19:25 +0200, Jakub Skokan wrote:
Thanks for the feedback!
On 8/30/23 18:45, wolf wrote:
On 2023-08-30 17:00:07 +0200, Jakub Skokan wrote: [...] Comments regarding the "known issues" section:
guix system reconfigure requires --allow-downgrades, why? Something is fishy with the channels.
Guix after fresh install is a bit funny sometimes. If one runs `guix pull' before trying the reconfigure, the --allow-downgrades is no longer necessary.
That didn't work for me. I ran guix pull and system reconfigure from the same shell, but system reconfigure still used some older commit. No idea why.
Hm, maybe you did not run `hash guix'?
Actually I re-created the VPS on staging, and, from fresh deploy, this is sufficient to run the reconfigure without --allow-downgrades:
ssh root@37.205.14.33 -t '. /etc/profile; guix pull; hash guix; guix system reconfigure --no-bootloader /etc/config/system.scm'
I wanted to put it here, maybe it will be of use to someone.
Yep, that's it, I missed both . /etc/profile and hash guix. I didn't know what hash does and I didn't think it would be important x) KB article updated!
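For anyone else puzzled by this: bash remembers the full path of every command it has run, so after `guix pull' installs the new guix into ~/.config/guix/current/bin, the shell can keep running the old cached binary until the cache entry is refreshed. A throwaway illustration (the `demo' command and directories are made up for the demo):

```shell
#!/bin/bash
# Simulate a command being "upgraded" into a directory earlier in PATH.
tmp=$(mktemp -d)
mkdir -p "$tmp/early" "$tmp/late"
printf '#!/bin/sh\necho old\n' > "$tmp/late/demo"
chmod +x "$tmp/late/demo"
export PATH="$tmp/early:$tmp/late:$PATH"
demo                # prints "old"; bash caches $tmp/late/demo
printf '#!/bin/sh\necho new\n' > "$tmp/early/demo"
chmod +x "$tmp/early/demo"
demo                # likely still "old": bash reuses the cached path
hash demo           # drop the stale entry and search PATH again
demo                # prints "new"
```

`hash -r' would clear the whole cache instead of a single entry, which is what a fresh login shell effectively gives you.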
There is /ifcfg.del, however (@ (vpsadminos) vpsadminos-networking) does not use it as #:stop, and even if it did, I do not think #:one-shot? services invoke #:stop. I will send a patch for this in due time (turning the service into "sleep inf", so that #:stop starts to work).
I'd prefer to e.g. prevent the service from being restarted, or make the script idempotent so that it doesn't fail. It makes no sense to bring down the network just because the bash in the shebang was updated. While /ifcfg.del exists, there's never a real reason to call it.
There is #:transient?, but I do not know how well (or whether) it interacts with #:one-shot? and system reconfiguration. The script being idempotent is the correct solution, you are right.
I would wait for /run on tmpfs to be implemented:
https://issues.guix.gnu.org/64775
When that's done, we could modify /ifcfg.add as:
[ -f /run/vpsadminos-network ] && exit 0
touch /run/vpsadminos-network
# ...the rest of the script...
to prevent re-running it. It won't work now, while /run is persisted on disk. Until then, this activation error should be harmless anyway.
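Until /run lands on tmpfs, a stopgap could be to guard on live interface state instead of a marker file, so a re-run becomes a no-op. A sketch only ("venet0" is a placeholder, not necessarily the interface vpsAdminOS actually configures):

```shell
#!/bin/sh
# Hypothetical guard for /ifcfg.add: skip everything if the interface
# already carries an IPv4 address.
has_addr() {
    ip -o addr show dev "$1" 2>/dev/null | grep -q 'inet '
}
if has_addr venet0; then
    exit 0   # already configured; nothing to do
fi
echo "configuring venet0"   # stand-in for the rest of the script
```

This survives a persisted /run, though it assumes the script's only externally visible effect is the address assignment.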
cgroups v1 are not mounted. cgroups do not seem to be needed by the base system, contact us in case it's a problem for some service or submit a patch to the template.
The only high-profile package that explicitly depends on cgroups v1 is currently docker afaik, which, in a world where podman exists, is not that important. I plan to try to produce a patch moving Guix to v2, which should solve this issue.
The available cgroup version depends on the host. So far we have cgroups v1 everywhere, migration to v2 is planned:
https://kb.vpsfree.org/manuals/vps/cgroups
Guix works with cgroups v2, I use it on my dev machine.
You mean in foreign mode or in the GuixSD setup? I did not figure out easy built-in way to switch GuixSD to v2, so if there is one, I would love to know.
I dunno, I tried Guix only inside container/VPS. I saw cgroups v2 support in the source:
https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/system/file-systems.scm#...
In our case, cgroups v2 seem to be pre-mounted by LXC when the host is using v2.
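For anyone wanting to check which hierarchy their VPS actually got, the filesystem type of /sys/fs/cgroup tells you (cgroup2fs on a unified mount; this uses GNU stat's -f/%T query, so it assumes coreutils):

```shell
#!/bin/sh
# cgroup2fs => unified v2 hierarchy; anything else => legacy v1 or hybrid
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null)
if [ "$fstype" = cgroup2fs ]; then
    msg="cgroups v2 (unified)"
else
    msg="cgroups v1 or hybrid (filesystem type: ${fstype:-unknown})"
fi
echo "$msg"
```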
Jakub