
The thing is that Licklider's vision of computers as "interactive intellectual amplifiers for all humans, pervasively networked world-wide" has already come to pass, and created huge economies of scale and exponential pressures for compatibility and conformity that didn't exist before.

In the 1970s a few dozen brilliant people could create a completely new and self-contained computer system because the entire computing world was tiny and fragmented. There wasn't the imperative to be compatible with all-pervasive standards (even IBM's dominance in business was being challenged by the minis).

These days if you want to create a new computer system that people will use, you need at a minimum to provide a networking stack and a functional web browser, some emulation or compatibility system to support legacy software that people rely on, device drivers for a huge range of hardware, etc. All this not only takes a huge amount of work, it also punctures the design integrity of your system, making it into a huge mountain of compatibility hacks before you even start on your own new concepts. But the deadliest enemy of innovation is the mental inertia of masses of users with a long history of interacting with computers. They are no longer the blank slates of the 1970s, users who had never seen a computer before.

Even in the realm of art, people realized that the romantic or modernistic model of artistic revolution that Kay invokes is untenable, and retreated into postmodernism.



> All this not only takes a huge amount of work, it also punctures the design integrity of your system, making it into a huge mountain of compatibility hacks before you even start on your own new concepts.

One possible approach that doesn't require writing a full compatibility layer is virtualization. Run your new system alongside a mature system like Linux, then rig it up so windows can appear inside whatever new environment you come up with. You do still need to write the code to send events and get pixel buffers, but it seems like much less work overall.
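The bridge between host and guest really only needs two channels: input events flowing in, pixel buffers flowing out. A minimal sketch of that interface, with entirely hypothetical names (no real VM or windowing API is assumed):

```python
# Hypothetical sketch of the host-side bridge for one guest window.
# The guest (e.g. a Linux VM) would fill the pixel buffer; the new
# environment composites it and forwards user input. Illustrative only.

class GuestWindow:
    """Stands in for one window exported by the virtualized guest."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        # Framebuffer: flat RGBA array, 4 bytes per pixel.
        self.pixels = bytearray(width * height * 4)
        self.events = []

    def send_event(self, event):
        # Host -> guest: forward keyboard/pointer input.
        self.events.append(event)

    def get_pixel_buffer(self):
        # Guest -> host: fetch the latest framebuffer so the new
        # environment can draw it however it likes (flat, warped,
        # embedded in some non-window-shaped concept, etc.).
        return bytes(self.pixels)


win = GuestWindow(640, 480)
win.send_event({"type": "key", "code": "Enter"})
buf = win.get_pixel_buffer()
print(len(buf))  # 640 * 480 * 4 = 1228800 bytes
```

In practice the two channels map onto existing plumbing (a remote-framebuffer protocol, a virtual GPU device, or a SPICE/VNC-style connection), which is why this is so much less work than a native driver stack.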


Then you're constrained by the existing window and input systems, which probably makes many interesting new concepts impossible to implement.

I don't know an alternative except to do everything new within one window, which would be at least as bad.


I think this is why it is necessary to fund "searching for problems" research. We don't need to solve the 1970s vision of computing, we need to solve the 2020s vision of computing, whatever that may be.


I hate to be the one to say it, but Apple's the only company anywhere near having the capacity to make a new platform happen.



