Speed boost achievement unlocked on Docker Desktop 4.6 for Mac (docker.com)
206 points by ingve on March 16, 2022 | 153 comments


Oh hey, I’m the Konstantinos mentioned at the end. The excellent work done by the Asahi Linux folks in discovering and demonstrating the incredibly slow MacBook NVMe flush is what prompted me to dig into this for Docker.

Nice to see the fixes make it into a release!


I sponsor https://github.com/marcan (Asahi). I encourage anyone invested in the Mac ecosystem to do the same. This discovery is just one of the benefits of having people paid to poke around Apple's stuff.

PS: of course you can also support Docker: https://www.docker.com/pricing


Thank you for advancing Docker science. And for crediting the Asahi folks.


Nice work and thanks! Is there any more info on how the nvme flush issue led to such massive perf improvements? The blog post doesn't seem to go into details.


Sorry I missed this message - I think the best write up of it is from the issue thread: https://github.com/docker/roadmap/issues/7#issuecomment-1044...


Just upgraded my Mac to give it a go. So far, this change has made Docker Desktop for Mac go from barely usable to very useful. Good game.


I was super impressed by your detailed investigation in that thread! Thank you for your contributions!


Lots of chatter here about alternatives that all seem to miss the point that the alternatives suffer from the same filesystem performance issues that this release claims to improve. Might not be an issue for you but if you've ever had to work with a volume mount with a ton of small files (like php or node dependencies) then this could be a real life saver.


> the alternatives suffer from the same filesystem performance issues that this release claims to improve

Hehe, not if you run Docker in a Linux VM, edit your code using the SSH VSCode plugin, and port forward any services from the VM to your Mac.
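For the port-forwarding part, a rough sketch of the kind of ~/.ssh/config entry involved (the host alias, address, and port here are just examples):

  Host dev-vm
    HostName 192.168.64.2
    User dev
    # expose the VM's port 8080 on localhost:8080 of the Mac
    LocalForward 8080 localhost:8080

Then a plain `ssh dev-vm` keeps the tunnel open while you work.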

After doing this for over a year, I don't know why anyone still uses Docker for Mac, other than corporate how-to guides that still start with "1. Download Docker for Mac", or unfamiliarity with Linux.

Docker was originally created as a chroot'ed process container with abstracted networking, nothing more -- and certainly not a full VM. Why not run it as it was intended to be run?

Edit: Am I incorrect? Or is something else wrong?


What you described (running a Linux VM and forwarding ports) is exactly what Docker Desktop already does for you without having to maintain packages, configure ssh keys, deal with networking, etc.

The thing I don't like about your method is having all my code checked out on the VM. As soon as I want to start using anything other than VSCode to manage it, I'm now hopping through layers. I'm also restricted to the VM's filesystem/size limits, and individual changes to files are not backed up by Time Machine, only the entire VM disk image.

And since you mentioned corporate how-to guides: Docker Desktop requires a paid license for commercial use. I think your method is a perfectly valid way to work around needing a license, but the license comes with commercial support and some other features that some companies may find useful.


Sorry, what do you think Docker for Mac does? It’s exactly what you’ve described, with more ease of use and features like volume mounts from your Mac into the VM (which this post talks about optimizing). Your approach requires lots more fiddling with little to no benefit.


Prior to these improvements there was a huge benefit: much better filesystem performance. This blog post says they increased performance by up to 98%, which tells you just how slow it was before (slow enough that I personally chose to forgo Docker altogether).


The filesystem performance within the VM is the same; the issue only arises/arose when mounting from macOS into the VM. To each their own, of course; but there’s no real perf difference between rolling your own and using Docker for Mac.


If you're running 10+ docker containers like my company does, running all of them inside a single VM will use significantly less memory than spinning up multiple VMs like Docker Desktop does.


> the issue only arises/arose when mounting from macOS into the VM

Isn't that the usual case: you have your code on your local filesystem and then run it using an interpreter in the Docker VM?


Nope. You can certainly do that; but “the right way” is to add your sources and compile them (as necessary) during the image build; and then treat the container as an “application”, not as an “environment”. This is how any sane production setup works, and you get a lot of benefits using it during development (such as reproducible builds and only one pre-req for a dev setup: Docker). If you’re just volume-mounting your code into the container, all Docker is providing is a disposable runtime environment. Which is handy for some things, but the general idea is to end up with a fully containerized program that you can tag, push to a registry; and pull to your production environment.
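To make that concrete, a minimal sketch of the pattern (a hypothetical Node.js service; the file names are just examples):

  FROM node:16-alpine
  WORKDIR /app
  # dependencies and sources are baked in at build time
  COPY package*.json ./
  RUN npm ci --production
  COPY . .
  CMD ["node", "server.js"]

You build and tag this once, and the same image runs in dev, CI, and production.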


> This is how any sane production setup works

Well sure, but almost nobody is running production workloads on macOS. Regarding development workflows: won't rebuilding the image every time you make changes be slow? I can get ~1 second reload times for node.js services outside of Docker, and I believe inside Docker on Linux too.


It's not necessarily because people are serving their production workloads from macOS systems.

However, they develop on them - and it makes sense to make those environments (and processes) similar to your actual production setup. So if your application's code gets compiled and deployed as part of the Docker image, then ideally when you spin up a dev environment, it should follow the same process.


That isn’t how Docker Desktop works, there’s one VM.


So in that model, you have to ship your codebase into the VM and then work on it from there. If the project is git-managed, you now have to either ship your git credentials into that VM to push stuff up, or figure out how to ship the codebase back to the host when you need to.

The whole "magic" is that you work on the files on your host machine! I think VSCode is doing god's work (along with other tools) to make remote editing a nicer experience, but well... I use Docker on Linux and just edit files locally. It's nice!


Git credentials are not a huge deal if you use ssh keys. You can forward your local ssh identity to the VM / remote host by adding these options to the Host entry in your ssh config:

  # only offer the identities configured for this host
  IdentitiesOnly yes
  # automatically add used keys to the local agent
  AddKeysToAgent yes
  # make the local ssh agent available on the remote host
  ForwardAgent yes
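
Once connected, you can confirm the forwarding works by listing keys on the remote side (the host alias is hypothetical):

  ssh dev-vm 'ssh-add -l'

If your local keys show up there, git pushes over ssh from inside the VM will use your local identity.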


At this point you can just use Vagrant again and skip using Docker.


Vagrant also has filesystem performance issues. VirtualBox host directory sharing is awfully — unusably — slow, even more so than the old Docker Desktop mount.

Vagrant takes a lot more disk space and RAM. VirtualBox isn't as efficient as the macOS hypervisor. And every Vagrant project is a separate VM, instead of sharing the same VM for multiple projects.


Ease of use. Some people like the conveniences in Desktop and the stuff you described has to be set up and maintained. Lots of people out there just need to start writing application code and get it running in the dev environment they've been given. And employers will happily pay for licenses.


This is exactly how we configure our development environments after trying all sorts of different combinations.

We find it significantly faster to do macOS + Linux VM + Docker than macOS + Docker for Mac. We're using the Remote Containers plugin in VSCode.

I just tried Docker for Mac 4.6.0 and it is still slower than the above setup - though I think this might be down to bind vs. volume mounts.


Well, I use SublimeText, SublimeMerge, Finder, and Preview. All of which you can pry from my cold, dead paws. :)

Before anyone mentions sshfs, macFUSE+sshfs is literally the only thing I’ve seen that can fuck up the entire disk subsystem’s ability to mount and unmount anything until you reboot.


Docker Desktop is a VM because it allows you to run Linux containers. If it weren't a VM then you could only use it to run macOS containers (which don't exist, thanks to the lack of necessary kernel primitives) or Windows containers.


Both Kata Containers and UTM support virtio-fs, so this is not strictly true. The former can be used as a stand-in replacement for the runtime used by docker desktop[1]. With the latter, one could use a UTM-backed guest as a docker runtime in macOS[2] or run docker directly on the guest[3].

[1] https://github.com/kata-containers/documentation/blob/master...

[2] https://www.codeluge.com/post/setting-up-docker-on-macos-m1-...

[3] https://www.lifeintech.com/2021/11/03/docker-performance-on-...


Is this an issue with dynamic languages in general? Are Java/Kotlin and Golang, for example, not affected?


When using Java with, say, jib and skaffold, when a change is detected the image is rebuilt with some fairly smart caching being done to minimize the build time.

In more interesting setups, the class files aren't in the image but rather mapped in - much the same way one would with a dynamic language, followed by a hot reload - https://docs.spring.io/spring-boot/docs/1.3.8.RELEASE/refere...

> Spring Loaded goes a little further in that it can reload class definitions with changes in the method signatures. With some customization it can force an ApplicationContext to refresh itself (but there is no general mechanism to ensure that would be safe for a running application anyway, so it would only ever be a development time trick probably).

And this way, the container can remain the same with the class files being changed underneath it.

https://github.com/spring-projects/spring-loaded


It depends what you need to do. If you build a golang binary in a container and run go mod tidy, you get hit by it. If you build a java app with, say, maven, you will pull deps into a volume mount and you get hit by it. Of course, you can mount a host dir to avoid the problem to a certain extent.


I know it isn't a direct replacement, but I switched to Rancher Desktop[0] on macOS and it has been a pretty easy transition, just using nerdctl instead of the docker CLI.

[0]https://rancherdesktop.io/
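
For anyone curious, day-to-day usage maps over almost one-to-one; a rough sketch (the image and tag names are just examples):

  # roughly the same shape as docker run / build / compose
  nerdctl run -it --rm alpine sh
  nerdctl build -t myapp:dev .
  nerdctl compose up -d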


Similarly switched to https://github.com/abiosoft/colima and have been happy with it.
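
In case it helps anyone getting started, the setup is roughly this (assuming Homebrew; Colima registers its own docker context):

  brew install colima docker
  colima start
  docker context use colima

After that the regular docker CLI talks to the Colima VM.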


I’ve been trying to use Colima but I get weird networking issues - there is a GitHub issue open for it but no solution for now. I was forced to figure out something else for the time being.


I've also swapped to colima and the only thing I struggle with sometimes is just remembering the executable name to start it up :D


That was pretty straightforward to me: COntainer LInux MAcos


Multipass from Canonical has been working really well for me.

https://multipass.run/


Except that ugly firewall bug. Many of us can't use it with the macOS firewall; the network just stops working after a couple of days.


Agreed. Multipass seemed great until I had to fight against networking. I switched to Colima and haven't had a problem since.


You can also use Docker with Rancher Desktop. It’s a setting.


It's more than a setting, it's the first thing you are asked when you start it for the first time. I love that RD doesn't default to one or the other, nor make one of them harder to use.


While speed improvements for storage are indeed welcome, it would be nice if the massive resource "leak" occurring with the native Mac virtualization were fixed.

I'm aware of some document describing that it's not really a resource leak and instructing us to look at real mem instead of memory consumption, but sadly macOS is not convinced, and will happily use memory and swap space until it eventually crashes with out-of-memory. So while a memory leak might not be the cause, _something_ is claiming a lot of memory and not releasing it again.


I was curious what the situation is for Colima. It looks like fs sharing has been discussed a lot over the years, but virtiofs depends on Mac's Virtualization.framework and can't be used in qemu currently. https://github.com/lima-vm/lima/issues/20#issuecomment-10686...

Instead 9p experimental support got there a few days ago: https://github.com/lima-vm/lima/issues/20#issuecomment-10660...


Canonical now offers one-command docker environments with Multipass for those looking at alternatives: https://ubuntu.com/blog/docker-on-mac-and-windows-multipass
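
If I'm reading the post right, it boils down to something like this (instance name is just an example):

  # launch a VM from the docker blueprint, then run containers inside it
  multipass launch docker --name docker-dev
  multipass exec docker-dev -- docker run hello-world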


Running macOS on Apple silicon, and after enabling VirtioFS as documented, I see a small but not massive improvement. Launching `npx mocha test/index.js` with an empty test that only requires a module (with a lot of dependencies inside it) leads to:

- On MacOS host: around 2s
- Inside Docker container with VirtioFS enabled: around 20s
- Inside Docker container without VirtioFS enabled: around 25s


[correction] The speed measurement of 20s is for an x86 Docker image with the platform field forced to amd64. For a native Apple silicon image, the speed improvement is impressive; I am down to 4s.
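
For context, forcing the platform is just a flag on run/build; a rough sketch (the image and command are only examples):

  # emulated x86 image on Apple silicon (the slow case)
  docker run --platform linux/amd64 -it --rm node:16 node --version
  # native arm64 image (the fast case)
  docker run --platform linux/arm64 -it --rm node:16 node --version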


This is fantastic! Docker for Mac has had years of pain with filesystem speed. I remember first diving into the issue and finding some months-long Docker forum thread saying "this is basically unfixable except for micro improvements to the kernel." I use Docker for Mac a lot more now and my company is standardizing on containerizing local development. It's been really helpful to avoid poisoning engineers' machines with hard-to-build / env-corrupting software (hi Ruby). Disk-heavy operations are of course noticeably slower in these systems and I'm excited to try out this new feature.


Can docker desktop please stop endlessly pestering me to upgrade? It's impossible to skip the update unless I pay which is total BS.


I've had it uninstall itself twice and then fail to reinstall on update. The second time it did it, I just left it be. Maybe you'll get lucky too if you decide to click.


I've experienced freezes on macOS + Docker 4.6 + using Vite in a php/apache/nodejs container.

When these happened, I couldn't even do CTRL-D in the terminal. The STOP button in Docker doesn't work. Had to restart.

Anybody else?

Otherwise it does seem quite a bit faster; `vite` starts up almost instantly instead of taking a second or two (having been used days prior, so it has its cache).


I don't trust Docker and never will; it's better to just download the official binary, install the daemon into a Linux VM, and then configure it using docker context.
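
A rough sketch of that setup, assuming the VM is reachable over ssh as user@dev-vm and has the daemon running:

  docker context create linux-vm --docker "host=ssh://user@dev-vm"
  docker context use linux-vm
  docker ps   # now talks to the daemon inside the VM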


Podman and nerdctl seem to be the better option when it must be truly open source. I know it's very hard to build from source the Docker that you can download as a binary. I don't like this. The build process should be documented for everyone, for trust reasons.


Sure, and I totally agree.

But for desktop development Docker is a godsend for having a reproducible environment.


Prior to this update, I was regularly searching for alternatives, and felt that running Docker on my Mac was the worst part of my job. The performance improvement is ridiculous, and was totally worth upgrading my Mac from Big Sur for.


I don’t like how the cookie dialogue on the Docker blog says it would take a few minutes to save my cookie preferences. I won’t wait a few minutes to just be able to start reading. This was my last visit on the Docker website.


I guess it's still insanely slow for windows mounts? Or has that changed too?


RE: Windows, docker <> WSL2 Ubuntu feels just like docker <> native Ubuntu. We're able to do all our CUDA dev that way too; the only gap is OpenCL. I don't use Docker Desktop. As it's annoying to do most modern AI + HPC without nvidia GPUs, as long as you're not doing OpenCL, Windows seems to have beaten OS X for most data people.

In contrast, I largely stopped using docker on OS X because the broken fs virtualization there made it impossible to do productive JS dev. Subsecond editing vs 10s-3min is a huge step back. Unclear if the new patches get it to the native-level feeling of WSL2 (1-3% overhead) or still feel broken in practice. And of course, Apple's vertical control means no nvidia GPUs / CUDA, even eGPUs, so no way for docker on OS X to support that either.


On Windows I just have the files/repo checked out on the WSL side. At least JetBrains' products handle that very effectively (you get a performance drop going the other way, back to the Windows side, of course, but IntelliJ/PyCharm/etc. handle that with a local cache and some magic making it behave as expected).

One other issue this also solves is that volume mounts from windows don't inotify correctly, so hot reloading doesn't work properly. But with files on the WSL side that works fine.


Docker in WSL is the only way it's half usable on Windows, but as you say, performance in JetBrains products is quite slow when they communicate with WSL2.

Let's hope it will be possible to communicate more efficiently in the future.


Still think the best approach is to forgo containers locally and use tools like Jib or Bazel that build containers directly without a Docker daemon.


Did you mean this one? https://github.com/bazelbuild/rules_docker

I was very interested in this Bazel-based way of building containers but its README page says "it is on minimal life support," which does not inspire confidence. How's your experience using it?


rules_docker has been great for Go and Python 3. Can't comment on other langs.

Jib has always been absolutely excellent for Kotlin and Java, really can't fault it in any way. I specifically use it with Gradle though, unsure about the quality of the Maven plugin but I would assume great also.


Any chance this will become available to older macos versions? My old 2013 Mac doesn't run version 12.


No because it relies on some newer kernel features.


Invalid SSL on WPengine—can someone who saw it briefly summarize?


The release notes have a good overview: https://docs.docker.com/desktop/mac/release-notes/#docker-de...

> Docker Desktop 4.6.0 gives macOS users the option of enabling a new experimental file sharing technology called VirtioFS. During testing VirtioFS has been shown to drastically reduce the time taken to sync changes between the host and VM, leading to substantial performance improvements. For more information, see VirtioFS.


On Linux how is this done? Raw no virtio or anything?


On Linux you don't need Docker Desktop and can run docker natively without any VMs. The FS is managed via overlayfs, which provides near-native performance.
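
You can check which storage driver is in use with the following, which should print overlay2 on most modern installs:

  docker info --format '{{.Driver}}'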


"Docker for Mac" is actually a full Linux virtual machine. The Docker commands you run on your host mac are sent into the Linux VM, to run the technologies Docker uses ("control groups" and "namespaces" mainly). The Linux VM is hidden as an implementation detail of Docker for Mac, meaning you can't ssh into the VM, you can only interact with it through Docker commands.

On Linux itself, Docker simply uses the same core Linux technologies to create the isolated runtime "containers" using "control groups" and "namespaces." Docker on Linux is basically native speed because it doesn't have to jump through hoops like syncing the VM filesystem with the host Mac's filesystem, which is a major point of slowness and a long standing pain point with Docker for Mac.


You seem to have a good understanding of these containers.

Do you know in simple terms why macOS doesn't have such "native containers"? After all isn't it more linux-like than Windows?

I saw this website that suggests Apple themselves are not providing such a feature... do they see it as a security issue?

https://macoscontainers.org/


It really hinges on these things called "control groups" and "namespaces," which are features of the Linux kernel. The "kernel" is the core computer program that interfaces with the hardware. MacOS has its own kernel. (If you install htop for mac, run htop at a terminal, press "t" to go into tree view, and scroll to the top, you'll see kernel_task, the representation of the kernel).

A "control group" (or "cgroup") is a feature of the Linux kernel that lets you create an environment with a fixed allocation of memory, CPU, and other resources. Aka a controlled group of resources. Whatever runs in here only gets the resources defined by whoever created the cgroup.

The other feature is "namespaces", aka a "process namespace." It's another feature of the Linux kernel that lets you create an isolated namespace for processes to run in. If you're a process running inside this namespace, to you, it looks like you're the only process on the system. You can run multiple processes in a namespace (Docker containers can run more than one process!)

So on Linux, to create a container, you basically use native Linux features to build this isolated environment, and run some process(es) in it. This is also why there are different ways to run containers (BSD jails), really what we're talking about is building an isolated environment.

As far as I know, the Mac kernel doesn't have these same features to create isolated environments, which is likely why Docker for Mac went with a full Linux VM, which includes the kernel. This is approaching the limit of my knowledge. I don't know what the Mac kernel is missing or what gaps or proposals there are to create a container-like environment. I also haven't worked with BSD jails or other technologies so I don't know how portable containers are between systems, if at all.
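
You can poke at the namespace primitive directly on any Linux box; a small sketch (needs root, uses util-linux's unshare):

  # start a shell in its own PID namespace with a private /proc
  sudo unshare --fork --pid --mount-proc bash
  # inside that shell, ps only sees processes in the new namespace
  ps -ef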


Thanks for the excellent explanation!


MacOS is based on a Mach kernel with a BSD userland. It’s unixy but containers are a Linux thing based on namespaces and cgroups among other things all of which are Linux kernel primitives.

BSD has jails which are similar but different.

So the answer is macOS doesn’t offer containers like you know them because it’s not Linux, hence Docker for Mac spins up a VM.


Ok, so you mean it's something that's lacking at a much lower level of the OS?


Yes. Windows offers Linux containers via a thin Hyper-V layer, and Windows containers can be hosted in two different ways.


This is about making the same files accessible inside and outside a container. Docker does that by bind mounting files from outside of the container's file tree into it. Bind mounts are just a way to provide a different file path to access an existing directory. It has zero performance cost.

The reason Docker Desktop and similar solutions need to come up with these different complicated solutions is that they want to share files between containers running inside a Linux VM and the host system. So they need some kind of way of syncing the files.
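
As a tiny illustration of how cheap bind mounts are on Linux (the paths here are made up):

  # the same directory, visible at a second path
  sudo mount --bind /home/me/project /srv/project
  # docker's -v flag does essentially the same thing into the container's mount namespace
  docker run --rm -v /home/me/project:/app alpine ls /app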


Ahh thanks for clarifying everyone. Makes sense. I didn't know overlayfs is near native, cool! I guess I wasn't doing too much file io to notice the containers being slow or perhaps I just got used to things being slow. Either way, glad to see this improved for folks not using Docker on linux distros.


Reminder: Docker Desktop is not free software (it's not even source available) and has embedded spyware that uploads a ton of sensitive info from your system without consent when it crashes.


We do not upload any information from crash reports unless you upload the report yourself. We only collect anonymous usage data, which you can opt out of.


Did you change this recently? I personally observed Docker Desktop attempting to upload zips containing system information and pcaps without my consent on crash.

Only Little Snitch stopped it.

Note also that unless you are using Tor or something like Private Relay, your "anonymous" usage reporting isn't anonymous at all. Client IP is location data.


source?


The binary. I assume without verification that Docker Inc's "privacy" policy will corroborate.


> Reminder: Docker Desktop is not free software (it's not even source available) and has embedded spyware that uploads a ton of sensitive info from your system without consent when it crashes.

This is a straw man argument, just in case anyone cares. The "reminder" is that Docker isn't free and not "even" Open Source. I think Docker has produced Open Source code, however. As most of us get that Docker is nowadays NOT Open Source, the argument falls to "reminding us" that it "uploads a ton of sensitive info" "without consent". Whatever.

Docker has personal info on everyone who uses Docker and uses their online services. Phone numbers. Email addresses. Passwords for various Docker websites. People love this because they are Docker customers or long time users. They don't care because most of these customers/users are DOING BUSINESS. Sure, Docker runs third party analytics on their site and pixel trackers in their emails. Big deal. They are in business to make money. They aren't used by everyone (read the public), and most people that run it (assuming it didn't uninstall itself), run it on their work computer. What info is on these machines that Docker would risk their shareholder value to "steal" or "divulge"? Not much, is the answer.

To be clear, that last part of the straw man isn't a reminder, it's a claim. Further evidence of straw man shenanigans is present here in replying to "source?" by saying "the binary". Of course data would come out of a binary, but which binary are you referring to? The Docker binary, which runs containers and crashed, or the binary that handles the crash of the other binary? Or maybe it's the installer binary that crashed? Mine did. Twice.

All that said, all you people who just shout "privacy, privacy, grumble, grumble, large company, grumble" are a pain in the ass. If you care about privacy, limit what software you use, put stuff that you worry about on another machine and keep your damn data off it. Privacy advocacy isn't worrying about your own privacy; it's understanding how the public, at large, loses access to understanding of where they should share their data. Just telling people to worry about privacy isn't helping, and probably does more harm than good.


> This is a straw man argument

Argument for or against what, exactly? It's a statement of fact.


It's a vague, unsupported assertion until you can specify the conditions or the information being sent in enough detail for anyone else to verify the claim.


What evidence do you have that it sends this information before you approve it in the crash report? What kind of sensitive information are you referring to?


Shame you started charging for commercial entities. We've all uninstalled Docker Desktop (Windows) now in our team and just gone to using the software (mainly SQL server) natively on our machines instead.


Software is worth paying for if it saves you time (and therefore money) and buys you a consistent, reliable experience with a pleasant UX.

Building software is also the thing that pays a lot of our bills. We expect our customers to pay us for our engineering labor. What makes Docker any different?


It is, and that's great, but changing the license never goes over well, especially for developers in a large company. I would have loved to just pay for DD, but it's really hard to answer the question "this has been free for years, why should we pay for it now?". The bait-and-switch also feels pretty insidious to a lot of developers; refer to when Oracle did the same thing with their JVM implementation.

I've also found that a lot of large enterprises have had big difficulties hammering out licensing terms with Docker. When it's a product you actively use, that means the devs will either wait for shit to hit the fan or start looking at alternatives, and find that there are still free competitors (Rancher Desktop, Podman, Minikube depending on your use-case).

IMO, it would have been a whole lot better if the Desktop product had been paid to begin with, rather than being suddenly switched as a last minute monetization strategy.


The bait-and-switch, providing it for free then suddenly taking it away.


Did you really, truly think that a product like Docker desktop would stay free forever? This is a fundamental aspect of building software products and companies. I do not understand how in 2022 we still have people hanging out in tech forums like HN that do not understand this.

The first objective of a pre-revenue company is to grow as fast as possible and then either get acquired or start generating revenue, which almost always involves marking up their product. And because of the nature of software, it's easy to build POC's and prototypes that half-work today, with the promise of better functionality and integration in the future. But that tomorrow always comes with a cost.

Somehow, folks in this industry seem to be convinced that it's OK for them to turn a profit on their value-add software, but not other people. What are software companies supposed to do? Overcharge today for software that doesn't necessarily work right now? What's the model that companies like Docker should use instead?


>Did you really, truly think that a product like Docker desktop would stay free forever?

You're mistaken here. Nobody has issues paying for software, however people have issues with being baited into a free product to then suddenly be charged after companies get locked in, which is a shitty business practice that needs to be discouraged.

Had Docker been transparent from the start that at some point in the future it will cost money, then that would have been fair since this would have been factored into business decisions.

You can't just hand out free candy and after people swallow it, you tell them they now have to pay up or throw it up, since you can't expect such good candy to be free. That's basically a scam.

Again, it's not about the money, it's about transparency.


Hire a team and build your own fucking software then. That's the other option if building a business around someone else's free work isn't perfect enough for you.


I didn't build any business on someone's free work. Your comment is out of place.


> Had Docker been transparent from the start that at some point in the future it will cost money, then that would have been fair since this would have been factored into business decisions.

Docker never said the product would be free forever. You'd have an argument if they initially committed to keeping Docker Desktop free forever and reneged, but they didn't. It's kind of silly to expect a company to communicate in such vague terms. What company comes out with a statement saying "we're not sure when or what the details are, but someday we're gonna start charging for Docker Desktop"? Also, how does this work when a company decides to pivot? Maybe they initially intended never to charge for DD but their initial business strategy didn't work and they had to pivot. Is it wrong for a business to modify its revenue streams?

If you choose to incorporate a piece of technology developed by a private company into your commercial project that is free today, you should expect to pay for it at some point in the future.


I'm not going to challenge your point but Docker also found itself in the unusual position of being commoditised before it could figure out how to make money. It's not a novel thought, it's been discussed here plenty enough that their business model vanished under their feet, especially when kubernetes entered the fray.

I can see why people aren't happy with how things have played out. Docker does need to earn its crust and pay its people, and this is strictly for the desktop UI, team space, and nascent collaboration feature with dev environments. It's a value-add over barebones docker but, and this is what is probably frustrating, it targets Windows and Mac users exclusively because it's not as easy to run docker without their software.

Let's not also forget that they made optional upgrades a paid-for feature, while at the same time releasing software with showstopping bugs and regressions. Their response was literally: pay them money so you can decline an automatic upgrade. So my sympathy for Docker finding itself in this position is fairly limited, knowing that.


If there's one rule of adding paid features, it's "Don't make previously-free features paid". They set the anchor price to "free" and then changed their minds later. Human psychology does not appreciate this and will resist and resent it, because we see it as someone taking away what we have.

The solution is to add additional features and charge for them; they should have charged more for e.g. automatic kubernetes configurations and other more "businessey" features, but now it's too late and there's no value-adds that they haven't already added.

In the end, it's not a complaint about Docker's "value-add" software; they're complaining about an increase in the price without an increase in the value.

I'm not sure what Docker could do from this point on. Maybe I'm missing something, but they seem pretty doomed overall.


This is not a matter of understanding. It's a matter of acceptance. Phrases like "it's a bait and switch" are just excuses around an entitled mindset.


If by "suddenly" you mean "with 5 months notice."


Those 5 months are irrelevant when companies have invested years of development effort in weaving docker into their workflow. It's a bait and switch, simple as that.


If you didn’t pay for it, you can only have zero expectations, simple as that.

You can provide something for free, not make any promises that it will be free with upgrades in perpetuity, and one day decide to stop. It’s as simple as that.


> If you didn’t pay for it, you can only have zero expectations, simple as that.

Says who? There's plenty of free (as in: didn't transact dollars for it) software that I have extremely high expectations for. Visual Studio Code, Gmail, Github, etc.

When they one day decide to stop I can also choose to be disappointed, not buy and no longer use their software. It's as simple as that.


I think there's a distinction to be made between B2C and B2B there. Gmail and Github are not free for businesses (and Docker is still free for individuals).


You are betting your company's workflow/future without contracts?


I would argue it's only bait and switch if they implied that it would always be free.


Did they say from the beginning "Hey, it's free for now, but brace yourselves, we'll start charging you at some point in the future"?


Did they say from the beginning "Hey, it's free forever!"?

If they made it explicit that it is free forever, then I would agree with your stance that Docker is in the wrong. But so far I couldn't find a shred of information indicating that the Docker commercial/enterprise account would be free forever. So that doesn't put Docker in the wrong.

And be realistic about this world: you cannot expect software companies to survive on free accounts alone forever. Who is paying for the servers to keep it up? Docker. Who is providing the support? Docker. Who is paying the developers to keep Docker products updated and introduce new features? Docker. How could Docker survive forever on free accounts without that cash flow? I mean, Google, Microsoft, Chocolatey, Homebrew/Brew, etc. have commercial licenses to cover the cost of their products and keep them free for personal-use users. If you didn't see this coming, well, I don't know what to tell you, but that is the cost of doing business.


> Did they say from the beginning "Hey, it's free forever!"? If they made it explicit that it is free forever, then I would agree with your stance that Docker is in the wrong.

Well sorry, but free without any conditions attached becomes free as in beer. If I give you a shiny toy for free, no strings attached, I can't come back to your house later and ask for money for it because I didn't specify how long it would be free. If I didn't specify the conditions of the 'free', then I goofed up.

If they didn't put any conditions on the duration of the 'free' from the start and it is causing them to lose money, then either they were financially stupid from the start or, more likely, they wanted to pull the classic SV success scam where you bait customers into a free* product for years, burning endless cash to capture the market, and when customers become locked in thanks to the 'free' and you are the de-facto standard with no competition left, start rent-seeking.


I have been searching for Docker pricing information about the various licenses and I came across multiple snapshots on the Wayback Machine, and they clearly show Docker charging for support and various other things. It dates back to 2015; commercial entities were already paying for support back then. It has been there for years! To be clear, Free (as in personal-use license) accounts have been FREE to use and still are. It's just that the other licenses are not, and Docker has been charging for those licenses for years.

And people are complaining about the "bait-and-switch" that occurred a few years ago while Docker had been charging for non-free licenses way before that. Huh?


I could be wrong, but I'm pretty sure they've been explicit for quite a while about Docker Enterprise being their product for organizations.

https://web.archive.org/web/20200101044220/https://www.docke...


After many years of service, making it hard to remove, and at a non-negligible cost.


> After many years of service making it hard to remove...

Would you rather they go under as a company, due to how badly their attempts at monetizing containers have failed in the past and then have Docker Desktop not be maintained at all and have Docker as a whole take a noticeable hit?

> ...and a non negligible cost.

I'd say that perhaps using either Docker with the CLI, IDE integrations/plugins or using Rancher Desktop (https://rancherdesktop.io/) are more cost effective alternatives in that case.


So the thing that makes this egregious is they let you use it for free for _almost a decade_, while they fumbled finding an alternative monetisation strategy?


Spin it however you want; I am sympathetic toward their situation, but that doesn't make it any more palatable.


You built a company on a questionable business model, expecting that "somewhere down the line you will be making money," which never happened - as happens most of the time when you kick the can down the road.

Now I have to use your software because you used investor money to make that software industry standard (standard, which is in fact pretty low). Yet you still made no money.

Who knows, maybe if Docker didn’t exist the vacuum in the solution space would be filled with something better or worse?


Docker Desktop on macOS is none of the three (consistent, reliable, or pleasant UX). I have -- with some exceptions -- replaced it trivially with Colima for how I use DD (and I am not in the class of people who need to pay for DD, but the forced updates and space-wasting UI were unpleasant).


Docker Desktop doesn't save any time or money vs. things like minikube.

The initial setup is slightly easier, but then you're exposed to random modal dialog popups for all time moving forward.

If there was some value add, then maybe it would be worth paying for. As it is, it looks like they spun off the part of the company that had any hope of turning a profit (via support contracts), and now they're flailing around in search of a business model.

It's a shame. Dockerfiles are nice. I hope they come up with a way of getting people to pay for some (actual) value add.


Certainly true in my case - I live on the command line but currently being on a Mac, Docker is Docker Desktop. If there was a CLI only version of Docker for Mac that's what I'd be using.


multipass docker if you can live without a firewall


In a corporate environment so probably not unfortunately.

I know there are better alternatives but it seems like there are a handful of alternatives right now and no clear preferred alternative.


> Shame you started charging for commercial entities.

Is that how it works for the commercial license? Commercial licenses come with extras beyond what personal-use/free licenses have. You are whining that Docker is charging for a commercial license when the company you work for is a commercial entity. Docker provides a platform and it is not free for commercial use. Personal use is a drop in the bucket for them because personal users are not overwhelming their servers or pulling huge amounts of data daily. Commercial entities are a deluge in the bucket for Docker because they use the servers far more often than personal users do. Commercial use consumes more data than personal use, and you want Docker to accept that and eat the costs for commercial entities? If Docker did this, then Docker would not last a year without commercial pricing.

You, sir, have a strange ideology of this.


Docker Hub was part of why we declined Docker for Business and instead came up with our own WSL-based workaround. Docker Hub is an anti-feature for large enterprises; we had been blocking direct access to it for a while before all this and instead providing images via internal Artifactory once we’d had a look at them.

Another major part was SSO being merely on the roadmap at the point we would have had to begin the purchasing process with our licensing service. “Well, just slap down a credit card!” That is not how large enterprises work, and Docker appears to have fallen down on their market research when they didn’t set up a way to deal with purchase orders.


All commercial entities I have worked for wanted to locally host docker hub for cost, resiliency and security reasons.

Third parties even produced patches to do this but upstream rejected them.

They wanted a monopoly on burning server and network time on CI jobs. Now they have it, and I'm not sympathetic if it wastes their money. I'm more concerned that it wastes my time.


My take on this is that those commercial entities are not raising hell over it. Commercial entities have the buying power (not the individuals nor the single developers of the company) and they are complicit in it. The reason Docker hasn't changed its stance on the pricing is that those entities haven't attempted, and don't attempt, to use their buying power to sway Docker to change its stance. Yes, developers raised hell over it, but their power within commercial entities is not equal. Developers do not run the company; the board of directors/CEO/founder/owner does as they please. They just accept it. That is not the fault of Docker; that is on those commercial entities who have the money and continue to pay. It is too late to change the stance on this because entities have been paying for Docker support for YEARS and YEARS and YEARS, even way before last year's pricing tier changes.


Give Podman a look, if you haven't already. It's FOSS through-and-through, and kinda eats Docker's lunch for a lot of my personal uses. Not sure how it scales in larger environments, but it's pretty close to feature parity with Docker where it counts.


nerdctl[0] is also worth a look.

[0]: https://github.com/containerd/nerdctl


For those curious, Podman also has an official GUI currently only available for macOS at https://github.com/containers/podman-desktop


For running a container it works fine. But for a developer workflow with docker compose, remote interpreters etc. it's nowhere near docker's offering.


Does Podman have a VM wrapper like Docker Desktop for running on MacOS?


Last I looked, yes. AFAIK, MacOS lacks the necessary underlying features in the kernel to implement containers (at least this kind of container) at all, so you have to run a Linux VM under the hood to enable rough feature parity, which is a waste of resources in my view. Yet another reason I'm planning to ditch MacOS for Linux full time very soon.
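
For what it's worth, the flow on macOS is short; a sketch assuming a recent podman release:

  podman machine init    # creates and configures the Linux VM
  podman machine start
  podman run -it --rm alpine sh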


It's not that it doesn't implement containers, it's that most likely your container has Linux binaries in it and thus it cannot run on macOS.


Oh, sorry you misunderstood my question.

I know that I would have to run Linux in a VM. My question was whether Podman provided an installer that set up a Linux VM in the same way that Docker Desktop does.

It's completely awful, seeing Mac and Windows users spend their time trying to VM Linux, poorly. But that's what I have to work with.


What is completely awful is developers tying themselves in knots trying to get MacOS to do things that Apple don't really want to do.

The community edition of Docker is much closer to most cloud container setups anyway, so using Docker Desktop is really counter productive.

There are better alternatives for your workstation, especially if your production environment is Linux based. Most, if not all, of the major development environments run well in Linux. Obviously not XCode, but for server side stuff it doesn't matter.


You have made two assumptions there.

1. That I don't use Linux when I can. I use Fedora on all my personal equipment, including laptops. When I have a choice I do use Linux because it is simply a better development environment. The question was because this is an article about Docker Desktop on MacOS and the poster suggested using Podman.

2. That running `brew install docker` was actually that hard.

Don't get me wrong, I absolutely dislike running macOS on my work laptop, but I don't have a choice and I have not seen a job where I would have a choice. Docker works "seamlessly", except that being a VM it does use 3 GB for even the most basic containers.


Most employers give you some choice over your work machine, don't they?


No. Most employers I've encountered pick one for you. The best I've had is the choice of Windows or Mac.

They have never let me pick Linux.

[Edit] Actually, one did. That was also the first place that I used Linux containers, long before Docker Desktop was a thing, and before the hype train really got rolling. Back when Docker really made sense for process isolation, rather than using it as a lazy Linux VM for building on your laptop.


So, at the moment, if you are expected to do docker builds targeting Linux servers, the clear logical choice is Windows with WSL2 if you can't have a Linux desktop, right?


My current workplace gives developers Macs. Something about the data scientists preferring Python, which works significantly better on Macs than on Windows.

I would disagree that WSL2 is a clear choice. That would mean using Windows, which is a trainwreck of a UI, and gets less consistent with each release. Has Windows 11 fixed multi-desktops yet?

I personally find using Linux, with Gnome, so much more pleasant than either Windows or Mac. Much more consistent and without the arbitrary restrictions.


I guess docker inc must be desperate to have got rid of a bunch of freeloaders.

Keep in mind that Docker Desktop needs a license for companies with more than 250 employees or 10 million dollars in revenue - not from kids experimenting, not from hobbyists, not from students, but from companies doing 10 million dollars. If you are a company that makes 10 million dollars of revenue a year and have uninstalled software that was valuable to you just to run away from paying 5 dollars, then write the name of the company so we can also avoid it in the future.


5 dollars per user per month, I think?

If it were a 5 dollar one-time fee for a greater level of access to Docker Hub, many companies would have gone for it. At 1,000 employees you'd have to get $60,000 authorized yearly, which is going to make many companies look for the cheaper/free option.

Upon waking up some more, I found this: https://www.docker.com/pricing

"Teams" option is $7/user/month. "Enterprise" is $21/user/month.


I really hate how modern companies are always so entitled to bailouts and freebies, but what kind of sympathy are you expecting? OMG, a company with 1000 employees with MacBooks, paying several thousand dollars per year, is going to go bankrupt because someone decided that their entitlement to freebies is kaput; help me, I'm out of napkins to stop the crying.


I'm not defending the attitude. In fact I loathe the reluctance of companies to pay for software.

In a recent gossip session with some buddies, one told a story of a company that had hit the Docker rate limit during a production deploy but still refused to pay the subscription fee!

Plus so much open source software that should be funded by companies whose business literally would die without it. How many companies fund Homebrew, or whatever apps are used in their build toolchain? How many pay for desktop or server Linux?


So we're in the same boat; I really share your thoughts in that regard. It's frustrating.


Are you charging your customers money for the things your team builds for them? Why should Docker be different?


What was the point of using Docker in the first place then?


I don't know to be honest. Wasn't my decision (in-fact I campaigned against it). Probably just to say "we're using Docker" on the job spec.


Know what’s even faster? Configuring your local host server with the same values as your remote production server. It really isn’t that hard.


Cool. I need to roll back to a version from last year, with entirely different versions of daemons (PostgreSQL, etc.). Then run that side-by-side with what's currently deployed. I may end up needing to git-bisect and repeat this process a half dozen times—this afternoon. I want none of those deps or configs to hang around, at all, afterward. And I'll need all that gone next week and a whole different set of services installed because I'll be helping out on some other project entirely. But then I'll come back to this and need all those things installed again. At any moment someone might have a question on something else entirely—a whole suite of services, potentially—that, ideally, I'd be able to fire up and try out locally in, at most, a few minutes, then completely eradicate when I'm done so it's not wasting disk space.

What's your solution for that? A bunch of full-fat VMs configured with Ansible or something? Been there, done that, less convenient, more resource-intensive, and far noisier/cruftier than Docker (though the script-configured-VM approach is still far better than running that stuff directly on my machine[s], admittedly)


Sorry, no. Unless of course you plan on not doing anything else on your machine. Otherwise it can't be a 1:1 replica of your production environment, which defeats the purpose. The reason containerization is so appealing is that it lets you have the exact same setup (to the byte) everywhere. By doing that, you build predictable environments.

If your production environments don't require reproducibility that is fine. If you want to figure out what might be causing issues in production by using a setup that has the same dependencies but a different underlying base, fine. But that just won't do for lots and lots of products.


Been there, done that. 2014: 20k lines of chef code, followed with tons of ansible yaml soup.

Containers are better.


Looking at our Kubernetes/Docker yaml, my soup tastes different but it is still a soup nonetheless.


It is when your local host server is macos and your remote production server is... anything else. I'm sure the number of people running macos servers in production is quite small in proportion to linux.


Are you suggesting that containerization is worthless?



