Hacker News

Sounds like this solves a similar problem to Docker. Can you comment on what the differences are, and the relative strengths and weaknesses of each approach?


I was confused by this at first.

So Nix is ultimately a tool for making and sharing reproducible package builds. It has a binary cache, but the cache isn't necessary: like BSD Ports, packages are built from source by default.

Docker, on the other hand, is a distribution and execution mechanism. It provides an abstract way to move around a fully assembled, ready-to-go service or application running in isolation.

It's entirely reasonable to use both. You can use Nix to build and manage docker images and make extremely minimalist docker images. You can use Nix knowing that the entire process is perfectly reproducible, and the Docker containerization is only a final integration step.

With this, you sorta get the best of both worlds. You get a reproducible build (and if done right, also a reproducible dev environment via nix-shell) and with Docker you get the ability to build and run a prepped copy with a well-defined interface.
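To make the nix-shell point concrete, here's a minimal sketch of such a dev environment (the file name is conventional; the package choices are just an example):

```nix
# shell.nix -- a hypothetical minimal dev environment
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # Everything the project needs on PATH, pinned by the nixpkgs revision:
  buildInputs = [
    pkgs.python3
    pkgs.git
  ];
}
```

Running `nix-shell` in that directory drops you into an environment containing exactly those packages, and every teammate gets the same one.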

Docker really doesn't provide a way to reproduce a built image from scratch. You more or less have to trust and build on existing images, and most folks producing images in bulk rely on external tooling outside Dockerfiles to do this.


> Sounds like this solves a similar problem to Docker. Can you comment on what the differences are, and the relative strengths and weaknesses of each approach?

NixOS committer here.

Docker attempts to achieve reproducibility by capturing the entire state of a system in an image file. It also attempts to conserve space by taking a layered approach to images, so when you base your Dockerfile on some base image, your resulting image is the union of the base image's layers and your own changes.

Here's where Docker's approach falls down, and how this could be fixed (and indeed is, by Nix).

Flaw #1:

Building an image from a given Dockerfile is not guaranteed to be reproducible. You can, for example, access a resource over the network in one of your build steps; if the contents of that resource changes between two `docker build`s (e.g. a new version of whatever you're downloading is released, or an attacker substitutes the resource), you'll silently get different resulting images, and very likely will run into "well, it works on my machine" issues.
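As a hypothetical illustration of the problem (the URL is invented), a build step like this bakes in whatever "latest" happens to resolve to on the day you run it:

```dockerfile
FROM debian:stable
# No version pin, no hash: two `docker build`s of this same file
# can silently produce different images.
RUN curl -fsSL https://example.com/tool/latest.tar.gz | tar -xz -C /opt
```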

Solution:

Prohibit any step of your build process from accessing the network, unless you've supplied the expected hash (say, sha256) of any resulting artifacts.
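In Nix this takes the form of a fixed-output derivation: the build sandbox only permits network access for steps whose output hash is declared up front. A sketch (URL and hash are placeholders):

```nix
# A fixed-output fetch: if the downloaded bytes don't match the
# declared sha256, the build fails loudly instead of differing silently.
fetchurl {
  url = "https://example.com/tool-1.2.3.tar.gz";
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```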

For the projects I work on in my free time, using NixOS on my personal computers, I've never been bitten by nondeterminism.

I wish I could say the same about my work projects that use Docker. My team members and I have run into countless issues where our Dockerfiles stop working and we have to drop everything and play detective so that, e.g., new hires can get to work, or put out fires in our CI environment when the cached layers are flushed, etc. So many wasted hours.

Flaw #2:

What happens if you have two Dockerfiles that don't share the same lineage, but you install some of the same packages? You end up with multiple layers on your disk that contain the same contents. That's wasted space.

Solution:

I'll use NixOS as an example again. In NixOS, you can take any package and compute its entire dependency graph. This graph includes not just the names of the packages, but precisely which version of each was used, and it covers both build inputs and runtime dependencies.

Note: by "version" I mean not only the version as listed in the release notes, but every detail of how the package was built: which version of Python was used? And then transitively: which version of the C compiler was used to build that Python? etc, etc.
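You can inspect this graph directly from the command line; for example (the `hello` package here is just an illustration):

```shell
# Full runtime closure of a package -- every transitive dependency,
# each pinned by the hash in its /nix/store path:
nix-store --query --requisites $(nix-build '<nixpkgs>' -A hello)

# Direct references of the build recipe (.drv) itself:
nix-store --query --references $(nix-instantiate '<nixpkgs>' -A hello)
```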

NixOS exploits this by allowing you to share packages with the host machine: each NixOS container you spin up has its necessary runtime dependencies bind-mounted into the container's root file system. As a result, NixOS has better deduplication (read: zero duplication). And by the same graph traversal mechanism, it's trivial to take any environment, serialize its runtime dependency graph, and send that graph to another machine, back it up somewhere, create a bootable ISO for CD/USB -- whatever you can dream up.
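Concretely, shipping an environment to another machine is one command, because the closure *is* the complete dependency graph (the host name and store path below are placeholders):

```shell
# Copy a store path plus its entire runtime closure to another host:
nix-copy-closure --to user@build-host /nix/store/<hash>-myservice

# Or serialize the closure to a file for backup / offline transfer:
nix-store --export $(nix-store -qR /nix/store/<hash>-myservice) > closure.nar
```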

Thanks to Nix's enforced determinism, you can trivially build container environments (and all of their required packages) in parallel across a fleet of build machines. In fact, Nix's superiority at building packages is so strong that people have gone so far as to build Docker images using Nix instead of `docker` (where they can't avoid Docker entirely for whatever reason): https://github.com/NixOS/nixpkgs/blob/2a036ca1a5eafdaed11be1...

You can read more about the Docker image building support here (along with my own rationale for this in the comments): http://lethalman.blogspot.com/2016/04/cheap-docker-images-wi...
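For reference, the nixpkgs support described in those links looks roughly like this (the image name and contents are just an example):

```nix
# default.nix -- build a minimal Docker image with Nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "redis-minimal";
  # Only redis and its runtime closure end up in the image --
  # no base distro layers.
  contents = [ pkgs.redis ];
  config = {
    Cmd = [ "${pkgs.redis}/bin/redis-server" ];
  };
}
```

`nix-build` then produces a tarball you can feed to `docker load`.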

I'm keeping things simple here and trying to address the most salient "Docker vs Nix" points, though I could continue talking about other strengths of NixOS outside of the scope of Docker/container tech, if desired.

Docker's union filesystem approach is great in a world where you can't use a better package manager. For everyone else, there are package managers that obviate the need for such hacks, don't waste space, provide determinism at both runtime and build time, etc.


The main difference between the two is that Nix's ideas come from functional programming, while Docker's are imperative. The practical outcome is that, by design, it's easier to keep your system clean over time with Nix than with Docker.

Nix isn't only a package manager; it is also a functional programming language intended for system administration. This means that, while a Nix file is comparable to a Dockerfile, it has several key differences:

1. All Nix files are just functions that take a number of arguments and return a system config (like a JSON object, but with some nice functionality). A Dockerfile, by contrast, is a set of commands you run to build a system to a "starting" state. This is imperative (you're telling the computer "do this", then "do that", etc.), so once it's done, you can mutate state and deviate from what you specified in your Dockerfile. With Nix, while you can technically still do this on some systems, it provides command-line tools (e.g. nix-shell, nix-env) that help you avoid breaking things. On NixOS, further measures are taken to encourage safety.
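Point 1 can be seen in miniature: a Nix expression really is just a function from its inputs to a build result (the package name and URL below are invented for illustration):

```nix
# default.nix -- a function taking its dependencies as arguments
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "mytool";
  version = "1.0";
  src = fetchurl {
    url = "https://example.com/mytool-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```

Calling this function with different inputs (say, a different stdenv) yields a different result stored at a different path, rather than mutating anything in place.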

2. If things go wrong in Nix(OS), the idea is that you can do a fresh install of your system, copy your old Nix file to it, and with one shell command be back to where you were before things went haywire. In terms of containers, there's nothing new here; this is exactly what Docker does. However, Nix also has the concept of generations: every time you change your system state with a Nix command, whether declaratively via a Nix file or imperatively via nix-env, you can roll back to a previous generation. This is especially nice with NixOS, because it creates generations for your entire system (hardware config, drivers, kernel, etc.) and makes a separate GRUB entry for each one, so if something breaks after a system upgrade, you just choose an old GRUB entry to get back to where you were. AFAIK, Docker doesn't offer anything like this, and it's a good example of how these tools' designs impact their feature sets so dramatically.
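The rollback workflow mentioned above amounts to a couple of commands:

```shell
# Roll back your user environment to its previous generation:
nix-env --rollback

# On NixOS, roll the whole system back to the previous generation:
nixos-rebuild switch --rollback

# Inspect the system generations that GRUB entries point at:
nix-env --list-generations --profile /nix/var/nix/profiles/system
```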

3. A neat feature of Docker is composability. You can inherit from other, pre-existing Dockerfiles, and you can deploy multi-container apps with various tools. Composability at a single-container level is very straightforward with Nix. Since every config is just a function, you simply call the function exposed by a different Nix file with the correct arguments, and... voila! Once you've made your desired Nix file, you can run it either using nix-shell or nixos-container. While I'm no expert, I believe they perform better than Docker as they don't use virtualization. For multi-container deployment, there is NixOps. You write some Nix files describing the VMs you want to deploy, and run a bash command to deploy them to various back-ends (AWS, Azure, etc.). Again, the big difference here is that you can incrementally modify these VMs in a safe way using Nix. If you change your deployment config file, Nix will figure out something has changed, and modify the corresponding VMs to achieve the desired state.
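Since composition is literally function application, reusing another Nix file looks like this sketch (both file names are invented, and base.nix is assumed to be a function taking `pkgs`):

```nix
# consumer.nix -- reuse another Nix file by calling it
{ pkgs ? import <nixpkgs> {} }:

let
  # Import evaluates base.nix to a function; we apply it immediately.
  base = import ./base.nix { inherit pkgs; };
in
# Extend the resulting attribute set with our own additions.
base // {
  extraPackages = [ pkgs.htop ];
}
```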

Some may believe that Docker and Nix are very similar, and to their credit, they are in certain scenarios. The thing I like about Nix is that it's one language (and architecture) that was designed well. It's minimal, yet makes it possible to do so much in a safe way.

Nix has been around for a while, but I think the community is growing quickly as functional programming continues to take off. I'm excited to see where it goes, and am super grateful I have a tool like this to use while coding.




