'The Mother of All Demos' Is 45 Years Old, Doesn't Look a Day Over 25 (theatlantic.com)
199 points by steveklabnik on Dec 9, 2013 | hide | past | favorite | 59 comments


One of the things I love about Engelbart's work is that it does not obfuscate the underlying data. You have the data structure at your finger tips and it's up to you how you display it or interact with it. If you've got a graph, you interact with that graph as a graph. If you create a list you can interact with that as a list.

I am trying to implement something in a similar vein[1].

[1] 98, 4, & 44: https://github.com/samsquire/ideas


On December 9, 1968, Douglas Engelbart did something I believe was a huge leap into the future. He didn't just invent the mouse, but a whole way of experiencing and interacting with a computer that was unheard of.

Does that sort of thing still happen today? Of course new things get invented, and new ways to interact with media and a computer, such as Leap Motion, the Myo gesture-control armband, Siri, and I'm sure there are tons of other examples out there. But they seem to be what the mouse was in Engelbart's presentation: just a part of the larger picture.


While it had a more commercial bent, I've always felt that the original iPhone introduction[1] was a spiritual successor to Engelbart's demo.

[1] Jan 9, 2007 video in this podcast series: https://itunes.apple.com/us/podcast/apple-keynotes/id2758346...


I have to agree with that. I still own an iPod Classic Video, and after two HD replacements and a new display (I wasn't very smart or careful the first time I attempted to replace the HD), it is still the best media device I've ever had.

I wonder if Apple will be able to keep pushing devices of such impact out on the market.


This presentation was on par with the likes of Gutenberg's press, whoever first smelted metal, Einstein's relativity, and a handful of other simple concepts with singularity / black-swan level impacts. Great improvements & refinements occurred to be sure, but the base concepts were utterly new and staggeringly impactful.


The Chinese had invented printing presses with movable type before (but just didn't have that much use for them, given their writing system). Lorentz and others had worked on precursors to special relativity; relativity was in the air. If Einstein hadn't been there, we might have had to wait a few more years for lesser minds to come up with the pieces of relativity, and with his contributions to quantum mechanics, but we would have gotten those achievements in any case.


Presumably he was picking the low hanging fruit. By today all the obvious or easy ideas have already been done a hundred times over. It's always harder to make progress in an older field than a newer one.


See also, "Turing's Curse", from OSCON 2013: http://www.oscon.com/oscon2013/public/schedule/detail/29917


Every time Douglas Engelbart is in the news, I reread Bret Victor's reminder of his goals. http://worrydream.com/Engelbart/


Victor points to a software design that could be implemented: two mice, even two text cursors, if designed that way.

What about today's tools isn't aligned with Engelbart's vision? Everything? That seems a bit too broad.


Well, it's partly because the article is a bit narrow-minded. That's what I feel, anyway.

As the article rightfully points out, what matters is intent, not the actual technical means of implementation, or even how it appears. We do have modern "screen-sharing" applications with multiple cursors. Just take a look at Google Docs.

The "problem" of today's tools is that computing technology went through this phase called the "PC" in which single-user interaction was all there was, and we went ahead and found efficient methods of single-user interaction, and then taught entire generations to think based on that metaphor.


This is an excellent point, and I also think that Victor's interpretation of Engelbart's priorities is really interesting: what if we made computer tools powerful enough to enable high-bandwidth creative cooperation? Would that open up new areas of hyper-productive human activity that we've only sporadically seen up to now, when small groups of like-minded people happen to be put into a facilitating environment at the same time?

What if that were the norm, because you could find such people -- right for you -- anywhere in the world, and work with them all your life, not just until this project funding ran out, or someone graduated, or got a new job, or whatever.

That's what I think Bret Victor is aiming to find out, and that's why I think a lot of people are really excited by what he's saying.


I am curious what he thought of Google Docs. I agree that it seems very similar to the kind of collaboration he envisioned.


He loses me a bit when he starts talking about intent. Engelbart indeed demonstrated precursors to all these modern technologies, despite the author claiming otherwise. So he didn't demonstrate a precursor to Skype and screen sharing, just because there's only one mouse pointer to control in Skype? OK, fine. Then he demonstrated the precursor to Screenhero, which has multiple mouse pointers and embodies Engelbart's intent exactly.


The point Bret is making is deeper: Skype and Screenhero shouldn't have to be 'applications' that dig into mucky parts of the OS to try to get screen and pointer sharing to work. And even with those applications, other applications get wonky if they are used via screen-sharing.

"Sharing" (of applications, data, peripherals, and capabilities like network storage) ought to be a first-class primitive provided by the system, and all applications ought to be designed to work with it.

Consider this: we're all connected via networks and Bluetooth. So why can't I, in a simple way, let your mouse control my cursor? Or send just part of my screen to part of your screen at the OS level? Or have a photo library on my NAS that smartly caches things I look at often on my iPhone, syncing changes back when I edit a photo? Or share a vim session such that my mouse and keyboard follow my vimrc and yours follow yours? Why do these things have to be built in hacky ways, over and over, specific to individual applications, rather than as reusable building blocks?

We've gone to so much trouble to bolt sharing on to "personal" computers, when that whole idea is flawed. Engelbart's goal was to create a fundamentally networked system where the combined abilities of everyone would exceed the mere sum of their capacity.

If Engelbart's vision were alive, we wouldn't be excited that we have Skype and Screenhero. We'd ask: now that we have these things, how can we build software that uses them to get better outcomes? A medical tool that lets two doctors interact over a patient's EMR and rotate CAT scans together in a way that increases the odds of the right diagnosis? A pair-programming method that finds more bugs and produces better designs than one person looking over another's shoulder? A spreadsheet that lets two business founders argue collaboratively about their core assumptions and business model?

Read about the early history of computing to see examples of this. One that comes to mind is the timesharing systems at the early MIT AI Lab, where there were no security systems. RMS talks about copying bits of code and configuration that he found in his professors' private folders, and the way people would snoop on each other and build complex systems out of these shared parts.


You can have all these things. A combination of Plan 9 and VPRI's DynaBook jr. is probably what you desire. :-)


I've also dreamed of linking Smalltalk images against Plan 9 namespaces. Do you know if anyone has seriously worked on it?


I've been working on a project [1] which uses a hypermedia directory protocol [2] to orchestrate Web Workers and WebRTC peers, much like Plan9 did with namespaces. It uses HTTP rather than 9P.

The stated goal is to allow Web interactions to be defined by content rather than the page environment. The approach is an SOA which can be configured by manipulating views of the links exported by Web APIs. Workers can then share the page environment through mediated access.

This particular project is at the head of about 2 years of previous work [3] [4] [5]. There have been some failed approaches, and WebRTC was a detour to get launched [6]. I feel confident that GIDE will solve some of the past problems, though I'm waiting to see if it can achieve all of its goals.

[1] https://grimwire.com/local/#docs/api/agent.md

[2] https://github.com/grimwire/ide

[3] https://www.youtube.com/watch?v=CJLiAdYTDz8

[4] https://www.youtube.com/watch?v=VK3KcoOnROA

[5] https://www.youtube.com/watch?v=PR-DvCEy1vA

[6] https://grimwire.net


My Slate programming language was intended to feature a module linking protocol that would have an ELF-like layer to link against native C code. And then the idea was that the object-slot graph would mirror or integrate with the file system. But the project has languished a bit, and I need to start over (would be glad to partner with somebody to share ideas).


So is it a question of OS integration, then? At some point these technologies get borged by the OS kernel, and we end up with these features operating as first-class citizens. It's hard work, so should we be surprised that things start as add-on applications first?

Aren't we in the process of achieving a lot of these goals in a pretty clean way? There was a lot of code to write to get to this point.


We are "in the process of achieving" a whole lot of things we had 30 or 40 years ago that never made it into the mainstream OSes of today.

E.g. my favorite pet peeve is how many things from AmigaOS "disappeared" when AmigaOS got relegated to a steadily shrinking niche with the death of Commodore. And this is just one of many "forgotten" operating systems with features that have been left behind.

Consider pervasive scripting of applications: yes, I know about AppleScript, but having spent five years in an office full of Mac users, I've yet to see anyone use it; on the Amiga, ARexx was something almost everyone used in some form or another (and no, shell scripting is not comparable).

Consider componentized systems where users regularly install new components that enhance "all" their applications: need support for a new compression algorithm like 7z? On the Amiga you'd install a new XPK library, and all programs that want to compress data would support it. Need support for an archival format? Install the right XAD library. Want your paint program to read WebP? Install a suitable DataType library, and any application that knows how to load images via DataTypes would be able to load it.

Consider whether end users can replace dialog boxes or other GUI components wholesale across almost all their applications. AmigaOS had a few different alternatives (ARP, ASL, and ReqTools) that provided things like file requesters, and because of the nature of AmigaOS, there were quickly apps that would act as shims from e.g. ASL to ReqTools, allowing users to pick and choose the file requester they wanted without relying on application developers' choices.

We took giant steps back when Windows snuffed out most of the competition and with it a huge amount of innovative ideas that current mainstream computer users have never even experienced.

Even the stuff that made it into systems like AmigaOS and many others in the '80s to early '90s was a mere shade of the visions coming out of the '60s and '70s, but the features in OSes like AmigaOS, OS/2, BeOS, and many others were there. Many of them are not hard work, but a matter of culture and support for the ideas.


OK, so let me get this straight: the application software used components available in the OS for different things, instead of each application being a walled garden with its own implementations of everything. So by upgrading or modifying the OS components, you essentially upgrade or modify the applications? Sounds cool. Can applications use each other's modules? Maybe not directly, but what if installing an application provided an alternative to an OS component that other applications could then use? Or is that just the same thing as downloading a new OS component?

All this stuff sounds pretty cool! And can't we make Linux like that? (No hope for Windows.)


It feels similar to the Android intent system. If I want to scan a barcode, my app sends a message to the system asking if anyone can do it. The app doesn't have to build that functionality, nor does it have to be provided by Android itself.

Same with sharing other types of data: newly installed apps automatically appear if they provide the functionality that another app asks for.
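To make the intent idea concrete, here's a hypothetical sketch (in Python, with invented names, not the actual Android API): apps register the actions they can handle, and callers ask the system rather than binding to a specific app.

```python
# Hypothetical intent-style dispatch: installing an app registers its
# capabilities, and any other app can request an action by name.

class IntentRegistry:
    def __init__(self):
        self.handlers = {}  # action name -> list of handler callables

    def register(self, action, handler):
        """Called when an app that provides 'action' is installed."""
        self.handlers.setdefault(action, []).append(handler)

    def dispatch(self, action, payload):
        """Ask the system for any installed app that handles 'action'."""
        candidates = self.handlers.get(action)
        if not candidates:
            raise LookupError("no handler installed for " + action)
        return candidates[0](payload)

registry = IntentRegistry()
# A barcode-scanner app registers itself on install (stub handler here).
registry.register("scan_barcode", lambda image: "0123456789012")
# A shopping app asks the system, without knowing which app answers.
print(registry.dispatch("scan_barcode", b"...image bytes..."))
```

The caller never names the scanner app, which is what makes the capability swappable: install a different scanner and the same dispatch call routes to it.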


Actually, Windows and Macintosh already do that, at least for image, audio and video files. You install a codec, and then all properly-written applications can read and write files in that format.


A note on (maybe) why AppleScript isn't used: it's hairy as hell to debug once it gets to a certain level of complexity (in my experience).

I would rather debug shell scripts all day, which I also don't love.


I thought the BeOS translation kit was a brilliant idea. I didn't know the idea went back to AmigaOS.


Agree with this so much. We need to take the UNIX idea to its logical end: everything is a file (or a socket, or something exposable). Then screen sharing, for example, is just a case of me "netcat"-ing my mouse to your window manager, and your video output to my screen. I've never played around with it, but I believe Plan 9 goes some way toward this idea.
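The "netcat your mouse" idea reduces to a generic byte-stream forwarder. A minimal sketch (the device path and host below are illustrative, not a working setup):

```python
# If input devices are just files, sharing one is just copying bytes.
import socket

def forward(src, dst, bufsize=4096):
    """Copy bytes from a readable file object to a writable one
    until the source is exhausted."""
    while True:
        chunk = src.read(bufsize)
        if not chunk:
            break
        dst.write(chunk)

# Conceptual usage on Linux: stream raw mouse events to a remote peer.
# (Requires read permission on the device; peer address is invented.)
#
# with open("/dev/input/mice", "rb") as mouse, \
#         socket.create_connection(("peer.example", 9000)) as conn:
#     forward(mouse, conn.makefile("wb"))
```

The point of the sketch is how little glue is needed once the OS exposes the device as a plain stream; the hard part today is that window managers don't accept injected input this way.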


> "Sharing" (of applications, data, peripherals, and capabilities like network storage) ought to be a first-class primitive provided by the system, and all applications ought to be designed to work with it.

The OpenCobalt/Croquet stuff sort of takes this approach, though development seems to have stalled there. http://www.opencobalt.org/


People are not ready to be in continuous collaboration mode, i.e. the mode where they intensively, in real time, generate worthy ideas and constructively receive (understand), analyze, improve upon, and bounce back the ideas generated by the collaborating partner(s). It is very intensive brain activity, requiring matching partners, and it wears you out pretty quickly. It is like tennis: possible to do for a couple of hours a few times a week. Or like musical jam sessions: you can do them for prolonged periods only when young, and usually with the help of some "energizers".

It is one of the reasons why email is such a mainstay of collaboration.


Yes, "not ready" is exactly right. And this is one of Bret's points: Engelbart wasn't actually interested in technology itself. He was interested in sociology: creating groups of people who were ready to do transformative work by collaborating in new ways. Whether or not this model would work is up for debate. But that collaboration, not any particular piece of software, was what Engelbart was after.


Maybe once we've got basic income and everybody can relax a bit more, people can start collaborating on pie-in-the-sky projects without so much worry about productivity or their next paycheck. Then collaborating for a few hours a week isn't such a big deal.


> people are not ready to be in continuous collaboration mode

What would be the first step in getting people ready?


Is there a better-quality video available anywhere? I've always thought that of the one copy we have floating around the net, the quality is garbage. I wanna read the text!



That's an awesome chord keyboard: http://www.youtube.com/watch?v=yJDv-zdhzMY#t=2039 I've always wondered why more people don't use them. It seems significantly less error-prone: instead of fat fingers or incorrect placement, you have to coordinate the timing between the fingers.

It's pretty amazing how much that video demonstrates. I wonder what the next version of that video will be. Hopefully it's not computer related.


I'd like to try one, but there appears to be basically no such thing as a Bluetooth-enabled one-handed chord keyboard that you're intended to hold in your hand (as opposed to resting on a table), and that's the use case I'm interested in: an input device for my augmented-reality glasses while I'm walking around. On that note, if they're ever going to make a... well... they can't really make a comeback if they never made an appearance at all... an appearance, that's probably the scenario that will drive them. Voice may cover casual usage, but when you really need to go to town you're going to need something more, and no current input device can meet that need.


If you don't mind getting your hands dirty, you could probably make a prototype pretty easily. A battery, some buttons, and a Bluefruit[1] should be enough. You would likely also need to write some kind of input translator (a custom Android keyboard, perhaps?) to convert straight key presses into your combinations (since you want a combination of key presses to produce just one character). 3D print a case, and you're probably all set.

I mean, it's likely easier said than done, and would take some time and effort, but it's probably also pretty doable.

[1]http://www.adafruit.com/blog/2013/09/27/new-product-bluefrui...
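The "input translator" could be as simple as a table from key combinations to characters. A sketch of the core logic (the five-key chord table below is invented for illustration; a real layout would assign all 31 combinations):

```python
# Chord-to-character translation: a chord is the set of keys held down
# together, and the whole combination maps to a single character.
CHORDS = {
    frozenset({1}): "a",
    frozenset({2}): "b",
    frozenset({1, 2}): "c",
    frozenset({1, 3}): "d",
    frozenset({1, 2, 3}): "e",
}

def decode_chord(pressed_keys):
    """Translate one chord (any iterable of key numbers, order-free)
    to its character; raise KeyError for an unmapped combination."""
    chord = frozenset(pressed_keys)
    if chord not in CHORDS:
        raise KeyError("unmapped chord: " + str(sorted(chord)))
    return CHORDS[chord]

print(decode_chord([2, 1]))  # order doesn't matter -> "c"
```

Using a frozenset is the key design choice: it makes the mapping independent of the order the fingers landed, which is exactly what distinguishes chording from sequential typing.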


Since an Arduino can be programmed to function as a keyboard[1], wouldn't that be an easier option? Then you can put the chording logic on the Arduino board and have it otherwise function like a plain USB keyboard as far as the computer is concerned[2] (that's kind of how the Makey Makey[3] works; it's also Arduino-based).

[1] http://arduino.cc/en/Reference/KeyboardWrite

[2] http://store.arduino.cc/index.php?main_page=product_info&cPa...

[3] http://www.makeymakey.com/


Yeah, you should be able to do that as well. There are a lot of different ways you could make a project like this work; I just wanted to throw out an example to show that we're empowered to do these kinds of things now. We don't have to wait for some company to come out with a portable Bluetooth chording keyboard anymore; we can build these things ourselves. I think that's really exciting!


Sadly I think neat keyboards are patent encumbered.

There have been things like the Frogpad, or the Twiddler. But they're expensive and not available anymore.

https://en.wikipedia.org/wiki/FrogPad

http://www.youtube.com/watch?v=ciQVBNHrKKA

I totally agree. I think their time has come, and it'd be great if someone could rescue the tech from all these dead or dying companies and provide a decent chording keyboard for people on the move.


Perhaps Morse code would make for a neat device, too.
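Morse is the limiting case of chording: one key, with timing standing in for finger combinations. As a sketch of how little decoding logic such a device would need (partial table, for illustration):

```python
# Decode Morse letters separated by spaces, e.g. ".... ." -> "he".
MORSE = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
    "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
}

def decode_morse(message):
    """Translate a space-separated dot/dash string to text."""
    return "".join(MORSE[symbol] for symbol in message.split())

print(decode_morse(".... ."))  # -> he
```

A real single-key device would first quantize press durations into dots and dashes; this table lookup is the easy half of the problem.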


I've been looking at this recently too. I like the basic design of chordite http://chordite.com/

7 keys across four fingers, and then perhaps a trackpoint for the thumb.

In terms of a bluetooth gpio controller, there are a few options including a few that have been kickstarted recently (e.g. bleduino and rfduino).


I've always heard that type of input device called a Twiddler; I found one when Googling "bluetooth twiddler": http://www.handykey.com/

* I'm an idiot: that was a top link because they're using the name Twiddler, but they don't seem to offer Bluetooth.


I poked around Google quite a bit a few months ago. If you drop any one of my adjectives you can find something, but the full combination does not appear to exist.


Port 8pen to the PS3 navigation controller (one-handed joystick device)?


It wouldn't be very useful without NLS. It wasn't intended for text entry but for selecting commands to execute while the mouse selected the target for the commands, i.e. you would enter DW for "delete word" with the chorded keyboard and use the mouse to select the word to delete.

It disappeared because the people at Xerox's labs decided it was too hard for normal people to learn to interact with a computer in this way, so they replaced the chorded keyboard with on-screen buttons plus a set of keys on the left of the keyboard for the most-used operations (Undo, Open, Copy, Paste, etc.). Then Apple, to make things even friendlier, removed the extra buttons from the keyboard entirely and relegated the most frequently used operations to key combinations (Cmd-C, Cmd-X, Cmd-V, etc.). You will notice that many shortcuts sit on the left side of the keyboard.


Wow, how trendy is that humanist sans-serif font and lack of capital letters?

Edit: I mean on the announcement at the bottom of the article.


Did you notice there appears to be an overline for caps? At least that's what it appeared to signify to me.


Lowercase-only was pretty popular in the interwar period, too. You'd also be hard-pressed to find a single capital in 1957's 12 Angry Men; even the names of the actors in the credits are all lowercase.


For people interested in seeing the 1968 demo, here are links to the best (to my knowledge) version available on the internet:

https://archive.org/details/XD300-23_68HighlightsAResearchCn... (first reel, follow the links for the other two)

If you are interested in how NLS was actually used I also suggest watching the 1969 demo which is less flashy but contains a better explanation of the interface.

https://archive.org/details/XD301_69ASISconfPres_Reel1


Very interesting.

It looks like an interactive document/hypertext backed by a schema-less store (much like JSON, with hierarchies). I also liked the idea of building "views" from the document(s).


Views are a very powerful idea, one that is currently underrepresented.

Today, if you want a different view of a document or its data, you typically have to convert it into a new document (in a different format). Then, as you change the original, the other becomes outdated. Views are awesome because they remain up to date.
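A rough sketch of the difference (names invented): a conversion copies the data once and drifts, while a view recomputes from the shared source on every read, so it can never go stale.

```python
# A "view" holds a reference to the source document, not a copy,
# and recomputes its presentation on every read.
class SortedView:
    def __init__(self, items, key):
        self.items = items  # shared reference to the source document
        self.key = key

    def __iter__(self):
        return iter(sorted(self.items, key=self.key))

shopping = ["oranges", "milk", "bread"]
by_length = SortedView(shopping, key=len)

print(list(by_length))   # shortest first: milk, bread, oranges
shopping.append("tea")   # edit the underlying document...
print(list(by_length))   # ...and the view is already current: tea first
```

A converted copy (`sorted(shopping, key=len)` stored once) would miss the "tea" edit; the view cannot, because it never stops reading the original.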


Or a day over 30. The Blit demo (http://www.youtube.com/watch?v=emh22gT5e9k) is a bit over 30 years old now.


It is shocking to me that it took so long for this technology to be realized by the public at large. Decades passed before this technology reached people's homes.


Here's a talk in the 80's from Douglas that covers some more details:

https://archive.org/details/XD302_86ACM_Prese_AugKnowledgeWo...

The Q&A session at the end is very interesting. People asking about gestures and using a pencil (stylus) instead of a keyboard (note this is back in 86).


Is everyone seeing a bunch of screwed up inline links, or is it just me? Stuff like this:

<a href="http://A young Stewart Brand &mdash; who would shortly launch The Whole Earth Catalog &mdash; operated one of the cameras in Menlo Park. Brand, along with others,">was fairly mind-blowing</a>


Incidentally, the way the kids were typing in the recent Ender's Game movie is interesting. Does anyone know how deeply they designed that system/interface? (i.e., was it just random keystrokes or did the chording actually matter?)


The shopping list demo really looks like org-mode to me. :)


What else would you expect when so much of our software architecture is from the '70s? :)



