I disagree. True, command lines are another shadow on the cave wall, but they're a sharper shadow: they show more of what's actually happening, and they give you tools to get more information about, as Stephenson would put it, "the tangled nam-shubs beneath." This is arguably not inherent to the command line, but it's definitely inherent to how GUIs work: GUIs don't compose, are harder to script, and can't show the kind of information CLIs can if they wish to remain accessible (there's a reason there's no graphical way to get the inode number of a file). I think Stephenson expressed this quite well.
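For concreteness, the inode case really is a one-liner on the CLI. A minimal sketch (the file path is hypothetical, and `stat -c` is the GNU coreutils form; BSD/macOS `stat` uses `-f '%i'` instead):

```shell
# Create a throwaway file (hypothetical path, for illustration only).
touch /tmp/example_file

# ls -i prints the inode number before the file name.
ls -i /tmp/example_file

# GNU stat can print just the inode number.
stat -c '%i' /tmp/example_file
```

The point being: this detail was always there in the filesystem; the CLI merely doesn't hide it.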
Nothing stops you from putting the inode number in a copyable field in a "file properties" window. Now, of course there is basically nothing you can do with an inode number in a graphical environment, but then there is very little you can do with it on the command line either. Omitting this detail is not something "inherent" to graphical interfaces.
Fair enough, for that particular detail. The point is, when UX matters, you cannot add mode switches (the GUI equivalent of command-line flags), and you can't show the user all the data: their eyes would pop out. The GUI is a higher-level abstraction; loss of control is the tradeoff.
This is what I like about user interfaces in Emacs. They are richer than command-line interfaces because you have programmatic access to values (which gives you composition and scripting), yet they also hide information in their graphical representation, bringing them closer to GUIs.
(Compared to well-designed GUIs Emacs interfaces aren't very pretty, though.)
However, GUIs are inherently more complex than text streams. Sure, they can be made programmable and composable, but it's an uphill battle, and the CLI just does it so effortlessly.
The CLI only does it so effortlessly because of all the work done to make it effortless: pipes, string formatting, argument parsing. If the same amount of work had been put into a direct-manipulation GUI as was put into Unix, it would be just as effortless, if not more so.
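That accumulated work is easy to take for granted. A sketch of what the pipe convention buys you: four small tools, none of which knows about the others, composed into a "most frequent line" query (the `printf` stream is a stand-in for any command's output):

```shell
# sort groups duplicate lines together; uniq -c collapses each group and
# prefixes it with a count; sort -rn ranks by count, largest first;
# head -n 1 keeps the winner. The pipe is the only glue needed.
printf 'b\na\nb\nc\na\nb\n' | sort | uniq -c | sort -rn | head -n 1
# prints "3 b" (uniq -c pads the count with spaces; exact padding varies)
```

Every tool reads and writes the same thing, a text stream, which is exactly the design work the comment above is pointing at.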
Why are terminals still based on the concept of a teletype? It is fully one-dimensional. You have to fit all interactions through that one dimension. (Multiple terminal windows are a kind of hyperspace, I suppose.)
I have recently found myself pondering what would have happened if a microcomputer back in the BBS days could have handled access to multiple BBSs at the same time. My suspicion is that we would be looking at something very similar to what a multi-tab web browser gives us these days.
Mmm I don't know. Self looks like yet another overdesigned thing at 100k+ lines of code that doesn't compile. After 3 mins on the site, I can't figure out the main thing it does.
Sure, the adaptive compilation is nice, and message sends and type inference, but how well do the UIs work?
It's sort of cool to see Urs Hölzle and Craig Chambers in the papers section.
I mean, it is almost 30 years old, so you have to take that into consideration. A lot of stuff had to be built from scratch.
As for what it "does", what does the Smalltalk environment do? It's a combination of a programming language with graphical programming tools. It does anything it is programmed to do.
Self is the language that invented prototype-based inheritance and the Morphic UI framework. One of those (Morphic) is now used in pretty much all modern Smalltalks, and the other (prototype-based inheritance) got picked up by a little language called JavaScript, which you may have heard of. In addition, Self provides a multiuser networked UI, kind of like Google Docs does.
Granted, it's not exactly usable for modern projects, but it's worth studying.
They can't: that's the price you pay for the abstraction. Smalltalk got the closest, but if you really want to dig into a GUI, at the end of the day, you have to read the code behind it.