
Imagine if everything they told you about Unix was true. Plan 9 is the operating system Bell Labs made after Unix, to fix many of the problems the designers saw.

That said, it's based around what the designers of Plan 9 thought were problems with Unix. It's a very opinionated operating system. But it has so many ideas that were ahead of their time, and in many ways are still lightyears beyond what we have now.

It's a really cool piece of computing history, and if you haven't tried it, I suggest you look into it, but keep in mind that even though it looks and sometimes feels like Unix, it very much is not. It's not terribly useful as a daily driver OS due to a lack of software, but it's very, very cool.



> and in many ways are still lightyears beyond what we have now.

I think this wildly overstates it. Many of the good innovations have been adopted in Linux. 9p exists. /proc was adopted (though that was in UNIX first).

One unifying principle of plan9 is that everything is a file. But the (POSIX) file API has a lot of limitations. Fuchsia, in contrast, had some nice ideas about different types of files (blob/object, log, etc).


> I think this wildly overstates it. Many of the good innovations have been adopted in Linux.

The main innovation can't be done by addition. With plan 9, many special cases are removed. You no longer have to wonder what happens if you try to create a Unix socket on an NFS file system, and then mmap it: There's just 9p. Everywhere.

9p is nice, but it isn't special. Making the whole universe 9p is where the improvement lies.
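To see how little machinery 9p needs: here's a sketch (my own illustration, not code from the distribution) of encoding the 9P2000 version handshake in Python, with the field layout as described in the protocol's intro man page:

```python
import struct

# 9P2000 constants from the protocol description
Tversion = 100     # message type for the version request
NOTAG = 0xFFFF     # the tag used for version negotiation

def p9s(s):
    """A 9P string: 2-byte little-endian length, then UTF-8 bytes."""
    b = s.encode("utf-8")
    return struct.pack("<H", len(b)) + b

def tversion(msize=8192, version="9P2000"):
    """Every 9P message is size[4] type[1] tag[2] body; size counts itself."""
    body = struct.pack("<BHI", Tversion, NOTAG, msize) + p9s(version)
    return struct.pack("<I", 4 + len(body)) + body

msg = tversion()
```

The whole protocol is roughly a dozen such message pairs, which is why client and server implementations fit in a few hundred lines.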


9P is how Microsoft is bridging the filesystem between Windows/Linux in WSL2

https://devblogs.microsoft.com/commandline/a-deep-dive-into-...


Qemu uses 9p too.


>You no longer have to wonder what happens if you try to create a Unix socket on an NFS file system, and then mmap it

How does that work? I don't know the details of any implementation, but 9p the protocol appears not to have any concept of mmap: https://9fans.github.io/plan9port/man/man9/intro.html

I think I see what you mean about 9p not being that special, it doesn't seem much different than if Windows decided to export every system-level API as a DCOM object, that would also get you the same kind of "the whole universe is networked" kind of deal.


> if Windows decided to export every system-level API as a DCOM object, that would also get you the same kind of "the whole universe is networked" kind of deal.

The difference is that in Plan 9, there is no 'if', and there's no other option for accessing resources. All programs interface with the OS and other programs via 9p, more or less: Notable exceptions are process creation calls like rfork() and exec().

> but 9p the protocol appears not to have any concept of mmap:

Correct. Mmap is a kernel feature -- and mmap style stuff is only really done for demand paging of binaries at the moment. You get a cache miss and a page fault? Backfill with a read. Backfilling IO on page fault is really all mmap does, conceptually.
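You can see the "backfill on fault" behavior from user space. A small Python sketch: reads through the mapping are served from the file on demand, and stores through a shared mapping land back in the file:

```python
import mmap
import os
import tempfile

# Create a small file, then map it shared: page faults on read are
# backfilled from the file, and stores through the mapping are
# written back to it.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello plan 9")

m = mmap.mmap(fd, 0)          # shared (ACCESS_WRITE) mapping by default
first = bytes(m[:5])          # read faults the page in from the file
m[:5] = b"HELLO"              # store through the mapping
m.flush()                     # write the dirty page back
m.close()

os.lseek(fd, 0, os.SEEK_SET)
after = os.read(fd, 12)       # the store is visible through plain read()
os.close(fd)
os.unlink(path)
```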


>there's no other option for accessing resources

That seems like it would create difficulties in porting software there. Please correct me if I'm wrong but the original plan9 appears to also have no support for shared memory or for poll/select.

>Backfilling IO on page fault is really all mmap does, conceptually.

For read-only resources yes, for handling writes to the mmapped region, that seems quite broken.


Plan 9 is not a posix system. That means it doesn't have to deal with legacy posix behavior. If you want unix, it's easy to get it.

> For read-only resources yes, for handling writes to the mmapped region, that seems quite broken.

No more broken than mmap of nfs. Consistency is hard.


>No more broken than mmap of nfs.

Right, I get that's what you meant, it doesn't seem to really change much versus NFS, or DCOM, or whatever. So it's unclear what benefit is being provided by 9p here.

Also upon further research I am not sure what you mean by this is the only option, plan9 seems to suggest use of channels for other types of IPC interfaces, which seem to not be the same as 9p and are not necessarily network serializable. (Or are they?)


Channels are not IPC -- they're a libthread API that stays within a shared-memory thread group.

There are a few magic kernel devices that don't act like 9p, like '#s', which implements fd passing on a single node. And the VGA drivers expose a special memory segment on PCs to enable configuring VGA devices.

But the exceptions are very few and far between, and affect very few programs.

> So it's unclear what benefit is being provided by 9p here.

A uniform and simple API for interacting with out-of-process resources that can be implemented in a few hundred lines of code.


How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel. At least that part seems similar to a Unix X11 setup where this would be done over a socket.

I guess I just don't see what is conceptually the difference here versus something like doing basic HTTP over a TCP socket, it seems like the same kind of multiplexing. Either way, you still have to deal with the same issues: can't pass pointers directly, need to implement byte swapping, need another serialization library if you want the format to be JSON/XML or if you want a schema, etc... So in cases where that stuff isn't important, channels would come in handy, but of course that is now getting closer to a local Unix IPC. Am I getting this right?


> How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel.

A thread reads them from a file descriptor and writes them to a channel. You can look at the code which gets linked into the binary:

    /sys/src/libdraw/mouse.c:61
Essentially, the loop in _ioproc is:

    while(read(fd, buf, sizeof buf) > 0){
        parse(buf, &m);
        send(mousechan, &m);
    }
And yes, once you have an open FD, read() and write() act similar to how they would elsewhere. The difference is that there are no OTHER cases. All the code works that way, not just draw events.

And getting the FD is also done via 9p, which means that it naturally respects namespaces and can be interposed. For example, sshnet just mounts itself over /net, and replaces all network calls transparently for all programs in its namespace. Because there's no special case API for opening sockets: it's all 9p.


Ok I see, that helps, thank you. That seems to be mostly similar to evdev on Linux after all, except it requires you to use coroutines instead of having an option for a poll/select type interface.

To me the problem with saying "no special cases" seems to make it quite limited on the kernel side and prevent optimization opportunities. For example if you look at the file node vtables on Linux [0] and FreeBSD [1] there are quite a lot of other functions there that don't fit in 9p. So you lose out on all that stuff if you try to fit everything into a 9p server or a FUSE filesystem or something else of that nature.

[0]: https://elixir.bootlin.com/linux/v5.11.8/source/include/linu...

[1]: https://github.com/freebsd/freebsd-src/blob/master/sys/kern/...


Yes, that's the meaning of no special cases*: it means you don't add special cases. But this is why plan 9 has 40-odd syscalls instead of 500, and tools can be interposed, redirected, and distributed between machines. I don't have to use the mouse device from the server I logged into remotely, I can grab it from the machine I'm sitting in front of and inject it into the program. VNC gets replaced with mount.

I don't have to use the network stack from my machine, I can grab it from the network gateway. NAT gets replaced with mount.

I don't have to use the debug APIs from my machine, I can grab them from the machine where the process is crashing. GDB remote stubs get replaced with mount.

You see the theme here. Resources don't have to be in front of you, and special case protocols get replaced with mount; 9p lets you interpose and redirect. Without needing your programs to know about the replacement, because there's a uniform interface.
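Concretely, each of those replacements is a one-liner. From the import(4) man page (quoting from memory, so take the exact spellings with a grain of salt):

```
% import gateway /net                       # the gateway's IP stack becomes yours
% import crashbox /proc /n/crashbox/proc    # debug remote processes locally
```

Every program in the namespace then uses the imported resource with no idea it's remote.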

You could theoretically do syscall forwarding for many parts of unix, but the interface is so fat that it's actually simpler to do it on a case by case basis. This sucks.

* In kernel devices can add some hacks and special magic, so long as they still mostly look as if they're speaking 9p. This is frowned upon, since it makes the system more complex -- but it's useful in some cases, like the '#s' device for fd passing. This is one of the abstraction breaks that I mentioned earlier.


That's what I mean though, I see the theme, but it seems to me to be about the same as trying to fit everything into an HTTP REST API, it all falls apart when something comes along that breaks the abstraction. For example if you have something that wants to pass a structure of pointers into the kernel, you can't reasonably do that with 9p, so now you've got a special case. The debug APIs can still only return a direct memory mapped pointer to the process memory as a special case, the normal case is doing copies of memory regions over the socket, no matter how large they are. If you want to add compression to your VNC thing, or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket, which is not really different from what you would be doing on a more traditional Unix. Or is there another way plan9 handles these?


These things have already been done with 9p.

> The debug APIs can still only return a direct memory mapped pointer to the process memory as a special case

Can you point to the special case here?

http://man.cat-v.org/plan_9/3/proc

Because it replaces ptrace, and seems to work perfectly fine when I mount it over 9p. It's used by acid, which needs no additional utilities: http://man.cat-v.org/plan_9/1/acid

> If you want to add compression to your VNC thing

Images may be sent compressed. More -- or at least better -- formats would be good, but this is done.

http://man.cat-v.org/plan_9/3/draw

For a full implementation of remote login using these interfaces, here's the code:

http://shithub.us/ori/plan9front/fd1db35c4d429096b9aff1763f2...

It's a bit complex because it needs to do more than just forward mouse, keyboard and drawing -- signals need to be interposed and forwarded, and there are a few other subtle things that need to happen in the namespace. And because it contains both the client and server code. Even so, it's still small compared to VNC.

And yes, shithub is hosted on plan 9.

> or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket

Here are the network APIs.

http://man.cat-v.org/plan_9/3/ip

What kind of complex routing are you talking about, and why would it be impossible to implement using those interfaces?


> I think this wildly understates it. Many of the good innovations have been poorly hammered into Linux.

FTFY.

> 9p exists.

A hacked-up version called 9P2000.u and, later on, 9P2000.L, which come laden with POSIX and Unix baggage, and of course Linux baggage in the case of .L. This is to handle things like symlinks and special device file hacks inherited from Unix.

> /proc was adopted (though that was in UNIX first).

Linux proc is a mess. Plan 9 proc is just that, the interface to running processes. There's no stupid stuff like /proc/cpuinfo. wtf is that doing in there? http://man.postnix.pw/plan_9/3/proc

> One unifying principle of plan9 is that everything is a file. But the (POSIX) file api has a lot of limitations.

Plan 9 is not posix.

> Fuchsia, in contrast, had some nice ideas about different types of files (blob/object, log, etc).

A file is an array of bytes. Why complicate that simple approach?


> Linux proc is a mess. Plan 9 proc is just that, the interface to running processes. There's no stupid stuff like /proc/cpuinfo. wtf is that doing in there? http://man.postnix.pw/plan_9/3/proc

Do you think Plan 9 /proc would have remained as "clean" over time if it were as popular as Linux?

One thing that seems to be something of an axiom is that popular interfaces become messy over time. The location of /proc/cpuinfo seems to be an individual act of vandalism rather than being due to fundamental differences in underlying philosophy/approach.


> Do you think Plan 9 /proc would have remained as "clean" over time if it were as popular as Linux?

If people are allowed to submit "functionality" patches ad-hoc with little to no scrutiny or thought, then yes, any project will become a mess.

The general approach taken by plan 9 maintainers is to question functionality/feature patches and ask "Who does this benefit?" If the answer is only the submitter or rare edge cases then the patch is rejected. If the patch benefits a large audience, then it is accepted.

But to be fair, Linux is hammered on by large corps whose only goal is to make money by vomiting webshit from Linux servers. They don't care about simplicity, technical details, correctness, or anything like that, so long as it increases their bottom line. From my point of view the Linux I came to love is long dead.


Seems like you never loved Linux in the first place, since Linux now is what Linux has always been.


>Do you think Plan 9 /proc would have remained as "clean" over time if it were as popular as Linux?

A lot of the appeal of Plan 9 is that it's not widely used, and so has remained opinionated. It's not a general use operating system. It's a research operating system.


These "nice ideas about different types of file" are not new. On the contrary, before Unix, this was common and it was one of the revolutionary approaches of Unix, that files - from an OS perspective - should be streams of bytes and nothing more.


I think before UNIX there weren't nice APIs for different types of file access. It was more like getting a raw block device and being told "fill your boots".

stream-of-bytes was the right idea then. It doesn't mean it is still right.


You think wrong, it was exactly the same approach of defining a "nice API" for each type of file.

It failed.


Before Multics...not UNIX ;)


Linux is not yet a fully distributed OS, and even the basic foundational work for that featureset is only just being undertaken now (and then mostly as a natural development of containerization/namespacing features, which you might or might not see as drawing from Plan9 itself).


By 9p you mean 9pfs? If so, it "exists" for Linux, but that's about all.


One can natively mount 9p on Linux, and there's also diod, which exports 9p. Works well for my setup of sharing local files to both Plan 9 machines and my Linux boxes. I use diod because my storage system is running a chunky ZFS pool with lots of storage I wanted to share.


Many years ago (30?), the Plan 9 shell, rc, was made available for other platforms. I was working at the Big Nerd Ranch and ran it everywhere I could (I was doing 2nd/3rd level support work that gave me unfettered privileged access almost everywhere), until the nascent in-house software release process (SRP) caught up and sudo started becoming more widespread and privileged access started getting locked down.

At that point, I needed muscle memory across all machines more than anything else, and switched back to sh (bash was still very new and not widely available, csh was born borked, and ksh was only available under certain OSes). That was sad.

rc had a beautiful, clean C-like syntax without any csh weirdness and was much more powerful than sh. Scripts were a joy to write and maintain.
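From memory (so treat the details as approximate), the flavor of rc was roughly this: every variable is a list, control flow uses C-style parentheses and braces, and `~` does pattern matching:

```rc
# rc sketch: variables are lists, with none of csh's quoting surprises
path=(/bin /usr/bin)

for(i in tic tac toe){
	if(~ $i tac)
		echo matched $i
}

fn greet {
	echo hello $1
}
greet world
```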


I was going to say, 30 years ago, the 1st edition of the actual Plan 9 wasn't yet released (not until 1992), much less a clone/port of its shell. But it seems that Byron Rakitzis wrote his Unix clone of `rc` in 1991 before Plan 9 was even out! He based it on Tom Duff's paper which described Plan 9's shell.

(Plan 9's `rc` was originally written for 10th edition Unix, and would later get ported back to Unix as part of Russ Cox's plan9port in 2003.)


Yes, I did! I used Duff's paper as a reference, and relied on the good taste of all my beta testers to guide me towards a working shell. Back when source code was distributed via shar files in an email.


Thanks for writing rc! I’d forgotten about Duff’s paper and the exact circumstances of rc’s release until I read LukeShu’s and your replies.

I had a lot of fun with rc....


It's hosted on github now and still serves as the login shell for me and presumably many others!


I use bash for the muscle memory, but these days my bash profile always includes

    PS1="$(hostname)=; "
because it's just nice.


The interesting conclusion of all this is that if everything looks like a file, then it doesn't matter what OS it runs on. A /dev/screen can be on your local Plan 9, on a remote Windows or on your Linux VPS; as long as it respects the protocol it doesn't matter. Plan 9 is the host of all this experiment but its findings can be (and have been) imported in other places.


Is that actually useful in practice?

When you're talking about things like displays, performance is extremely important. We're talking about 178MB/s to update a full HD screen at 30 fps, which requires networking pretty much no normal user has.


In my work, there's not much which requires updating a full HD screen at 30 fps... video calls, I suppose. Everything else updates small portions of the screen at lower rates.

There's a program called drawterm which implements Plan 9 graphics devices on Linux. You run drawterm locally, connecting to a Plan 9 system, and your applications on the Plan 9 system draw to your drawterm window over the network. I regularly run it at 4k and it performs quite well.


I'm guessing these applications do not have any kind of animations or smooth scrolling? That would be a simple test, make your web browser or your image viewer fullscreen in 4K and see if there is lag in the scrolling/panning/zooming.


/dev/screen was an example; in practice, as said in the sibling comments, you'd use drawterm, which fulfills roughly the same use case as ssh or RDP, so yes, the use is there. And you may not need a full HD screen at 30 fps to work.

But it doesn't stop there. Wanna play local music remotely? /dev/audio is there for that. Want to use a machine as a jump server? Just mount their /net folder into yours and any network operation will go through them.

The ideas can be used today. I have a folder of Music with only lossless songs for personal reasons, but it's obviously not perfect for playing from my phone because of how large they are. So I had a server that transcoded them to Vorbis on-the-fly and served them with FUSE, and an sshfs on top of that to serve the transcoded files to my phone. This composition of a common interface might use no line of code from Plan 9, but it definitely reuses its philosophy.


I think this looks at the benefit backwards. 9p allows resources to be where they make sense and abstract the location from the usage. Running a display over the network might not make sense but with 9p it also isn't necessary. 9p itself allows me to run my GUI locally while the data and processing live elsewhere.


You are seriously overestimating the needed throughput in practice. 60fps 1080p can be streamed with good quality over a 16 Mbps channel (2 MB/s). The real problem is the lack of good open source software that will eliminate the annoying latency of desktop protocols (Xorg...). There are things such as SPICE or X2GO or RDP which are "OK", but I suspect a much better experience is possible. The computers are extremely fast already, but our software is so bad we can't see it.


178 MB/s is a calculation, not an estimate:

1,920 x 1,080 pixels @ 24 bits/pixel = 6,220,800 bytes/frame

30 frames/s = 186,624,000 bytes/s = 177.98 MiB/s
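The same arithmetic as a quick Python check:

```python
# Uncompressed 1080p at 30 fps, 24-bit color, as computed above
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30

frame_bytes = width * height * bytes_per_pixel   # bytes per frame
rate = frame_bytes * fps                         # bytes per second
mib_per_s = rate / 2**20                         # mebibytes per second
```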

You are seriously underestimating the simplicity of plugging in a video encoder.


Images can be compressed when using devdraw. The compression formats are relatively primitive, but they're good enough in practice. Slotting in better ones seems like it should be straightforward, though video codecs don't fit cleanly.


So what is your estimate of the simplicity of using video compression there? Is it possible?


But once you introduce a piece of software into the middle to make this usable, what's the actual difference between this and just using VNC?

At that point it doesn't really matter if the screen is a file or not -- you need a compressor that can easily provide the output on a network socket, and a client that can perform the decoding.


You're right that it doesn't matter if it's a file or not per se.

What matters - and what the file interface gets you, but you can do the same thing in many other ways - is introducing the concept of a generic pluggable, chainable API.


178 MB/s is under 1.5 Gb/s. It's only because we've been stuck with slow gigabit Ethernet for 20 years that we think this is a hard problem.

10G Ethernet can do it no problem, and fractional speeds like the 2.5 and 5 gigabit standards should have little issue as well.


I concur that it sucks that Ethernet is in a rut for some reason.

But even on 10G that's no picnic. Sure, that works for a single user, but add a few more people and it's not hard to run into trouble. Such a system can't for instance just drop frames when the network is overloaded which to me makes this more of a curiosity than something anybody would actually want to use in practice.


10G switches are old hat and can do full crossbar switching at 10G; unless you're using very old tech bought off eBay, you shouldn't have issues.

Trunk lines of 100G and higher are pretty common in core networks now, if you're big enough to outgrow a single switch. The main limit was that we had trouble doing 10G over Cat-5e copper at long distances. 2.5/5 solve that problem, and 10G is possible with Cat-6. Fibre has no issue with super high rates for network backhauls to aggregate all that traffic. Most datacenters are moving to 25G for server connections.

With the exception of the copper standards, all of this has been rolled out in the datacenter for years and is pretty mature.


I've speculated it's the patents on 10G over copper holding us back. IIRC we're just about at the point where the early over-fiber modes are off patent in the US.

However, the 10G-over-copper encoding format uses a complex forward error correction scheme that is a bit energy-intensive, and it also adds some latency. A smaller silicon production node, plus use outside the SERIOUSLY EXPENSIVE tier by prosumers and medium businesses, would instigate a price drop via commodity volume.


There have been some posts about upscaling algorithms here lately; perhaps they could be used to reduce the required bandwidth?


I can see it being very useful for events, call centers, or any kind of operations center where you want a lot of screens.


> and in many ways are still lightyears beyond what we have now.

How close were they to Lisp Machines? :)


Just build a tiny Scheme compiler for Plan 9, anything with R5RS support, and call it a day.



