There’s no reason we shouldn’t be replacing our containers with WASI. Containers are absolutely miserable things that should just be VMs (in the WASM sense, not in the “run Linux on a virtual x86” sense).
The tooling is just not there yet. Everyone is just stuck on supporting Docker still.
There are a thousand reasons, which is why nobody is doing it. They're orthogonal. Problems WASM/WASI doesn't solve:
- Building / moving file hierarchies around
- Compatibility with software that expects Linux APIs like /proc
- Port binding, DNS, service naming
- CLI / API tooling for service management
And about a gazillion other things. WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it. It's not meaningfully portable in any way outside of UNIX so you might as well just write a real Linux app. WASI buys you nothing.
WASM is heavily overfit to the browser use case. I think a lot of the dissipated excitement is due to people not appreciating how much that is true. The JVM is a much more general technology than WASM is, which is why it was able to move between such different use cases successfully (starting on smart TV boxes, then applets, then desktop apps, then servers + smart cards, then Android), whereas WASM never made it outside the browser in any meaningful way.
WASM seems to exist mostly because Mozilla threw up over the original NaCL proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO. Before WASM Google also had a less well known proposal to formally extend the web with JVM bytecode as a first class citizen, which would have allowed fast DOM/JS bindings (Java has had an official DOM/JS bindings API for a long time due to the applet heritage). The bytecode wouldn't have had full access to the entire Java SE API like applets did, so the security surface area would have been much smaller and it'd have run inside the renderer sandbox like V8. But Mozilla rejected that too.
So we have WASM. Ignoring the new GC extensions, it's basically just regular assembly language with masked memory access and some standardized ABI stuff, with the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense. A strange animal, not truly excellent at anything except pleasing the technical aesthetic tastes of the Mozillians. But if you don't have to care about what Mozilla think it's hard to come up with justifications for using it.
> WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it.
WASI fixed well-known flaws in the POSIX API. That's not a bad thing.
> the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense.
WASM was designed to be JIT-compiled into its final form at the speed it is downloaded by a web browser.
JS JIT compilers in modern web browsers are much more complex, often using multiple tiers of compilers so that optimisation time is spent only on the hottest functions.
Outside web browsers, I'd think there are few use-cases where WASM couldn't be AOT-compiled.
> WASM seems to exist mostly because Mozilla threw up over the original NaCL proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO.
No, Mozilla's concerns at the time were very concrete and clear:
- NaCl was not portable - it shipped native binaries for each architecture.
- PNaCl (Portable Native Client, which came later) fixed that, but it only ran out of process, making it depend on PPAPI, an entirely new set of APIs for browsers to implement.
Wasm was designed to be PNaCl - a portable bytecode designed to be efficiently compiled - but able to run in-process, calling existing Web APIs through JS.
I don't think their concerns were concrete or clear. What does "portable" mean? There are computers out there that can't support the existing feature set of HTML5, e.g. because they lack a GPU. But WebGPU and WebGL are a part of the web's feature set. There's lots of stuff like that in the web platform. It's easy to write HTML that is nearly useless on mobile devices; in fact, that's the default state. You have to do extra work to ensure a web page is portable even just with basic HTML to mobile. So we can't truly say the web is always "portable" to every imaginable device.
And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.
So the idea of portability is not and never has been a requirement for something to be "the web". There have been non-portable web pages for the entire history of the web. The sky didn't fall.
The idea that everything must target an abstract machine whether the authors want that or not is clearly key to Mozilla's idea of "webbyness", but there's no historical precedent for this, which is why NaCl didn't insist on it.
In the context of the web, portability means that you can, ideally at least, use any browser on any platform to access any website. Of course that isn't always possible, as you say. But adding a big new restriction, "these websites only run on x86" was very unpopular in the web ecosystem - we should at least aim to increase portability, not reduce it.
> And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.
Historically, yes, and Flash as well. But the web ecosystem moved away from those things for a reason. They brought not only portability issues but also security risks.
Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares. Large parts of the industry don't agree with the goal of portability or even explicitly reject it, which is one reason why so much software isn't distributed as web apps.
Security is similar. It sounds good, but is always in tension with other goals. In reality the web doesn't have a goal of ever increasing security. If it did, they'd take features out, not keep adding new stuff. WebGPU expands the attack surface dramatically despite all the work done on Dawn and other sandboxing tech. It's optional, hardly any web pages need it. Security isn't the primary goal of the web, so it gets added anyway.
This is what I mean by saying it was vague and unclear. Portability and security are abstract qualities. Demanding them means sacrificing other things, usually innovation and progress. But the sort of people who make portability a red line never discuss that side of the equation.
> Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares.
As far back as I can remember well (~20 years) it was an explicitly stated goal to keep the web open. "Open" including that no single vendor controls it, neither in terms of browser vendor nor CPU vendor nor OS vendor nor anything else.
You are right that there has been tension here: Flash was very useful, once, despite being single-vendor.
But the trend has been towards openness: Microsoft abandoned ActiveX and Silverlight, Google abandoned NaCl and PNaCl, Adobe abandoned Flash, etc.
There are shades of the old GPL vs BSD debates here.
Portability and openness are opposing goals. A truly open system allows or even encourages anyone to extend it, including vendors, and including with vendor specific extensions. Maximizing the number of devices that can run something necessarily requires a strong central authority to choose and then impose a lowest common denominator: to prevent people adding their own extensions.
That's why the modern web is the most closed it's ever been. There are no plugin APIs. Browser extension APIs are the lowest power they've ever been in the web's history. The only way to meaningfully extend browsers is to build your own and then convince everyone to use it. And Google uses various techniques to ensure that whilst you can technically fork Chromium, in practice hardly anyone does. It's open source but not designed to actually be forked. Ask anyone who has tried.
So: the modern web is portable for some undocumented definition of portable because Google acts as that central authority (albeit one willing to compromise to keep Mozilla happy). The result is that all innovation happens elsewhere on more open platforms like Android or Linux. That's why exotic devices like VR headsets or AI servers run Android or Linux, not ChromeOS or WebOS.
And a capability system and a brand new IDL, although I'm not sure who the target audience is...
> it's basically just regular assembly language
This doesn't affect your point at all, but it's much closer to a high-level language than to regular assembly language, isn't it? Nonaddressable, automatically managed stack, mandatorily structured control flow, local variables instead of registers, etc.
Some hardware in the past has had a hidden, CPU-managed stack. Modern CPUs with control-flow integrity features like Intel CET also constrain control flow in hardware. Using a stack machine instead of a register machine is indeed a key difference, but the actual CPU is a register machine, so WASM has to be converted first, hence the JIT. Stack-based assembly languages are still assembly languages.
It helps if you actually qualify statements such as "Containers are absolutely miserable things". I'm in a world where we're using containers extensively, and I don't experience any issues whatsoever about which one might think "WASI would be the solution to this".
Yeah the real answer is that all of this stuff is still a work in progress. Last I checked WASI doesn't have a concept of "current directory" for example, so porting software is not trivial.
Also WASI is a way of running a single process. If your app needs to run subprocesses you'll need to do more work.
Imo stuff like Flatpak has the right idea - provide a rich but controllable set of features, API/ABI compatibility, while providing zero overhead isolation (same as docker since it relies on the same APIs).
I also rather like the idea of deploying programs rather than virtual machines.
Docker's cardinal sin imo is that it was designed as a monetizable SaaS product, and suffers from inner platform effect, reinventing stuff (package management, lifecycle management etc) that didn't need to be reinvented.