
I think one of the main problems with Alpine adoption is how the official runtimes are set up on Docker Hub.

- Python-slim uses debian:jessie

- Ruby-slim uses debian:jessie

- Node-slim uses debian:jessie

Your web application is probably going to pull from one of those runtimes, which automatically sets you up on jessie.

I'd also like to see someone take a random large project and check whether its native extensions compile under Alpine without any extra dependencies, then compare the final image size of a real-world web app on Alpine vs. jessie.

Comparing base images alone is a bit of a micro-benchmark, because a project with 75 gems/packages and a couple of native extensions that need to be compiled will drastically increase the size of your image, with or without Alpine.

I absolutely do think it's worth optimizing your images, but this seems like something that may end up being quite personal to your app because it will require a bit of tinkering to get everything your app needs to work. I also wouldn't bother doing it until I was constantly pulling them down in production to auto-scale.



Actually, in this case it totally makes sense. The issue is that Alpine uses musl, and many things only compile against glibc. If you're writing in any of those languages, chances are you'll install a library that requires some C compilation (YAML parsing, database libraries, numeric processing, etc.), and that becomes an issue.

*edit: which is what you were saying all along, and this didn't sound enough like "yes I agree"


Have you had trouble with any specific libraries? We're using alpine-based images with statically-linked binaries and haven't had any issues compiling third-party libs. One area you're likely to run into trouble is RPC, but I only discovered that in messing about with something experimental.

The real problem with musl in these environments is its DNS behavior, particularly if you're running on a platform like Kubernetes that uses DNS search domains for service discovery. Not hard to work around, but the workarounds are a bit, er, inelegant. See http://www.openwall.com/lists/musl/2015/09/04/4 and https://github.com/gliderlabs/docker-alpine/issues/8


Yes, I found out Rails containers were problematic with Alpine because therubyracer would segfault when run under musl. It appeared to be a known issue at the time, though I haven't looked into whether it's fixed.


Go has official images based on both alpine and wheezy, so you can choose (https://hub.docker.com/_/golang/) in case you have issues with C extensions. Most upstream projects are happy to take patches to work with musl, and with Docker it's much easier to reproduce issues than it used to be when you had to install the whole distro.


Yep, that is true. I can't remember the last time I worked on an app that didn't require compiling 1 or more dependencies.

As for the edit, sorry about that. I edited my comment about a minute after posting it.


Yeah, totally. My rule of thumb: if it's got C extensions, Debian is fine. If it's compiled, you'll be doing that outside your container anyway (because who wants gcc in prod, eh?), so you may as well copy your 20kb binary into Alpine rather than Debian.
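The copy-the-binary-into-Alpine approach can be sketched roughly like this (image and binary names are made up; assumes the binary was statically linked or built against musl, e.g. `CGO_ENABLED=0 go build` for Go):

```dockerfile
# Sketch: binary compiled outside the container, baked into a bare Alpine base.
FROM alpine:3.4
# CA certs are a common runtime need for outbound TLS.
RUN apk add --no-cache ca-certificates
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

No compiler, headers, or source ever land in the image, which is why the result stays within a few MB of the binary itself.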


I'm pretty new to Docker, so I'm curious about "a project with 75 gems/packages and a couple of native extensions that need to be compiled"...

Is the common procedure in the Docker world to build an application image that includes all the build tools that were used to build native dependencies? That seems like it would generate a pretty large image.

I figured I'd take a three-step approach to my first node.js app in Docker:

1. Build an image to build my dependencies. This uses the same base image as step #2 will, but installs all the development tools and libraries (eg. build-essential, libpq-dev), and then outputs a .tar.gz to a shared volume containing my node_modules folder.

2. Build an image with my dependencies; imports the runtime versions of any libraries (eg. libpq5), imports & expands the .tar.gz generated by #1.

3. Build an image with my application, FROM the image in #2.
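A minimal sketch of steps 1 and 2 under assumed file names (`Dockerfile.build`, `Dockerfile.deps`, a `./out` directory mounted as the shared volume):

```dockerfile
# Dockerfile.build (step 1): same base as the runtime image, plus toolchain.
FROM node:4-slim
RUN apt-get update && apt-get install -y build-essential libpq-dev
WORKDIR /src
COPY package.json /src/
RUN npm install
# At `docker run -v $PWD/out:/out ...` time, tar the built modules
# into the mounted volume for step 2 to consume.
CMD tar czf /out/node_modules.tar.gz node_modules
```

```dockerfile
# Dockerfile.deps (step 2): runtime libs only, plus the prebuilt modules.
FROM node:4-slim
RUN apt-get update && apt-get install -y libpq5 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
# ADD auto-extracts a local tarball into the destination directory.
ADD out/node_modules.tar.gz /app/
# Step 3's Dockerfile then starts with:  FROM <this image>
```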

The process is optimized by having the automation hash the contents of the relevant Dockerfiles plus the package.json dependency list, then `docker pull` the image tagged with that hash to see if #2 has already been built. If so, the build only needs to produce #3.
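The cache check could look something like this sketch; the registry name, file names, and tag scheme are all made up, and the docker calls are guarded so the script runs even where docker isn't installed:

```shell
#!/bin/sh
# Sketch: tag the dependency image with a content hash of everything that
# affects it, and only rebuild on a cache miss.
set -eu

# Hash the relevant Dockerfiles plus the dependency list. A missing file
# is tolerated (2>/dev/null) so the sketch runs standalone.
DEPS_HASH=$(cat Dockerfile.deps package.json 2>/dev/null | sha256sum | cut -c1-12)
DEPS_IMAGE="registry.example.com/myapp-deps:${DEPS_HASH}"
echo "dependency image: ${DEPS_IMAGE}"

if command -v docker >/dev/null 2>&1; then
  if docker pull "${DEPS_IMAGE}" 2>/dev/null; then
    echo "cache hit: only the app image (step 3) needs building"
  else
    docker build -f Dockerfile.deps -t "${DEPS_IMAGE}" .
    docker push "${DEPS_IMAGE}"
  fi
fi
```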

It's a bit more complex (Hello, everything in Docker-land), but ends up being pretty powerful. But your post makes me think I've over-complexified it a bit.


My suggestion is to build a package installer for your app and use that to build the final image. For example, we use fpm (running in a container) to build .deb packages, then we push those to an apt repository (artifactory) and then build images downstream using apt-get.
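The flow described (fpm → apt repo → apt-get in the downstream image) might be scripted roughly like this; the package name, version, build directory, and repo are all hypothetical, and the fpm call is guarded so the sketch runs where fpm isn't installed:

```shell
#!/bin/sh
# Sketch of a .deb-based image pipeline. fpm turns a file tree into a
# Debian package; the repo push is infrastructure-specific, so it is
# only echoed here.
set -eu

NAME=myapp
VERSION=1.2.3

# 1. Build the .deb with fpm from the built file tree.
if command -v fpm >/dev/null 2>&1; then
  fpm -s dir -t deb -n "$NAME" -v "$VERSION" --prefix "/opt/$NAME" ./build
fi

# 2. Push to the apt repository (e.g. Artifactory), then downstream
#    Dockerfiles install a pinned version with apt-get.
PKG="${NAME}_${VERSION}_amd64.deb"
echo "push ${PKG} to the apt repo, then in the image build:"
echo "  RUN apt-get update && apt-get install -y ${NAME}=${VERSION}"
```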

Initially we did a lot of cloning from source and compiling/installing dependencies, but it's very slow, there's a lot of wasted time in rebuilding identical code, and it's hard to provide patches and upgrades to customers.


Yours isn't overly complex, it's one way to trim down an image. However, it is a lot more complicated than just defining 1 Dockerfile that at least copies in your package.json file separately to speed up future deploys that don't touch your packages.
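That single-Dockerfile pattern, as a sketch (base image and file names assumed):

```dockerfile
FROM node:4-slim
WORKDIR /app
# Copy package.json on its own first: the npm install layer below stays
# cached and only re-runs when the dependency list actually changes.
COPY package.json /app/
RUN npm install --production
# App-code changes invalidate only the layers from here down.
COPY . /app
CMD ["node", "server.js"]
```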

I guess I just don't see the time vs. effort value in optimizing most smaller projects.

For example, that 75 gem project may take 5 minutes to build once but after that it takes 10 seconds to build and push a version that updates the app code.

I'm ok with this pattern for most of my projects because you can easily get by with 1 host to serve tens of thousands of requests a month on most typical projects. It's not like I'm spinning up and destroying dozens of instances a day where the network overhead is a legit concern (if I were, then I would optimize).


I simply build all required packages externally to the container and then bake the resulting binaries into the image by adding the requisite file trees. Fairly easy to script after the first two or three attempts, really.



