
I assume that if the nix daemon is being started with each pod, then you're downloading a fresh copy of everything every time, so for large closures you give up a lot of the benefit of the cluster being able to cache layers (as in, say, a nix2container- or nixery-style image) and achieve very fast subsequent startups.

I'm a k8s novice, but would there be a way to run the nix daemon/cache in a semi-persistent pod on each node, and then "attach" it to the actual worker pods?
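To make the question concrete, the per-node idea could be sketched as a DaemonSet that runs one Nix daemon per node and exposes the store to workload pods via a hostPath mount of /nix. Everything below is a hypothetical, untested sketch of the shape of the idea, not something nix-snapshotter provides; names and the image entrypoint are assumptions:

```yaml
# Hypothetical sketch: one Nix daemon per node, sharing /nix via hostPath.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nix-daemon            # hypothetical name
spec:
  selector:
    matchLabels:
      app: nix-daemon
  template:
    metadata:
      labels:
        app: nix-daemon
    spec:
      containers:
        - name: nix-daemon
          image: nixos/nix    # assumed image; entrypoint adjusted as needed
          command: ["nix-daemon"]
          volumeMounts:
            - name: nix-store
              mountPath: /nix
      volumes:
        - name: nix-store
          hostPath:
            path: /nix        # node-level store shared by all pods on the node
            type: DirectoryOrCreate
```

Worker pods would then mount the same hostPath and set NIX_REMOTE=daemon so client tools talk to the node's daemon over the socket under /nix/var/nix/daemon-socket. Note this only shares the store between pods; it doesn't give you the image-level integration that a snapshotter does.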



> I assume that if the nix daemon is being started with each pod, then you're downloading a fresh copy of everything every time

I got the impression that it uses the node's nix daemon.

From the project README's FAQ: https://github.com/pdtpartners/nix-snapshotter#faq

> What's the difference between this and a nix-in-docker?

> If you run nix inside a container (e.g. nixos/nix or nixpkgs/nix-flake) then you are indeed fetching packages using the Nix store. However, each container will have its own Nix store instead of de-duplicating at the host level.

> nix-snapshotter is intended to live on the host system (sibling to containerd and/or kubelet) so that multiple containers running different images can share the underlying packages from the same Nix store.
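Concretely, "sibling to containerd" means registering nix-snapshotter on the host as a containerd proxy plugin, roughly along these lines (the socket path and snapshotter name here are assumptions; check the project's docs for the exact values):

```toml
# /etc/containerd/config.toml (sketch; paths and names are assumptions)
version = 2

# Register nix-snapshotter as an external (proxy) snapshotter.
[proxy_plugins.nix]
  type = "snapshot"
  address = "/run/nix-snapshotter/nix-snapshotter.sock"

# Point the CRI plugin (which the kubelet talks to) at that snapshotter.
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "nix"
```

Because the proxy plugin lives on the host, every container on the node resolves its image layers through the same Nix store, which is where the de-duplication comes from.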


Yeah, I think that's the main idea of it, but it was also mentioned that this can work on GKE/EKS if you start a new daemon each time.


The EKS/GKE integration involves modifying the host that the kubelet lives on and adding nix-snapshotter as a sibling host service.

I didn't mean running nix-snapshotter as a Kubernetes resource, because then there's a chicken-and-egg problem: Kubernetes would need the nix-snapshotter image service in order to resolve the nix-snapshotter image itself.
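In practice, "sibling host service" would look something like a systemd unit started alongside containerd. The unit name, binary path, and ordering below are assumptions, not the project's actual unit file:

```ini
# /etc/systemd/system/nix-snapshotter.service (hypothetical sketch)
[Unit]
Description=nix-snapshotter containerd proxy plugin
Before=containerd.service

[Service]
ExecStart=/usr/local/bin/nix-snapshotter   # assumed install path
Restart=always

[Install]
WantedBy=multi-user.target
```

Since the snapshotter is a plain host process rather than a pod, it's available before the kubelet pulls anything, which sidesteps the bootstrapping problem above.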



