I am right with you there. I see two ways it could go: one is that the computer adds information into your visual field about things that are not necessarily around you (calendar appointments, tweets, etc.); the other is adding info that annotates things you are near.
A hands-free headset lets you talk to people who are not close to you, but it does nothing for the people near you (there was a submission about smart hearing aids, which would be different). Video assistance has the same potential for bringing you personally relevant, even situationally relevant, data, but only as an artifact: your calendar said you were meeting 'Bob', so you have information about Bob in your eyeset, not because something 'knew' you were looking at Bob and therefore brought up information about him.
Personally, I'll be thrilled if I can read email on a decent-sized 'screen' while sitting in what passes for 'nominal seating space' in a modern jet aircraft.
> Personally, I'll be thrilled if I can read email on a decent-sized 'screen' while sitting in what passes for 'nominal seating space' in a modern jet aircraft.
It's an HMD with a 1080p display and a large exit pupil. I'm planning on hacking up a Linux distro to run on my Samsung Galaxy S2 and using that with a Bluetooth keyboard for long flights. Sure, it's nowhere near as powerful as my MacBook Air, but for a big guy like me who flies a lot, just having a keyboard on my tray table will make my life significantly nicer. We'll see how it actually works out when they ship next month.
I so hope these guys survive. It's always a 'more hope than foundation' situation when your CEO's bio has this in it:
"Currently, he is also an entrepreneur-in-residence with a venture capital firm in Boston, working on healthcare IT startups."
Not that healthcare isn't important, but it seems adjacent to the ST1080's market. To date, all of the head-mounted display companies have over-promised and under-delivered, sadly.
Yeah, I think that bringing the things we currently use a computer for into the field of vision will be a big and important change, but when it starts doing things based on context, that's when it really starts changing things. Some people are trying to do this kind of thing with smartphones (Color, with its attempts to piece together what's going on via photos, for example), but many of the more advanced attempts rely on an unnatural action like pulling out your phone and taking a picture, or checking in, or what have you. When you have an always-on, forward-facing camera and other sensors, and it's always accessible with a glance rather than by awkwardly ignoring people and pulling something out of your pocket, things become a whole lot more interesting (and potentially creepy).
One quick example: imagine something that annotates every person who gets near you and is clearly talking to you, so you never forget another name. Something like Rapportive, but for in-person encounters, providing a quick dossier to refresh you on who they are. I think it'd change things pretty radically. It'd be tough to do, but I'm sure Facebook, and maybe a couple of other companies, have a large enough tagged-face training corpus to do it.
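Nothing in the thread specifies how such a system would work, but the core lookup step could plausibly reduce to nearest-neighbor matching of face embeddings against a gallery of tagged people. A minimal sketch, with invented names and random vectors standing in for real embeddings from a tagged-face corpus:

```python
import numpy as np

# Hypothetical gallery of tagged face embeddings (names and 128-dim
# vectors are made up here; a real system would get embeddings from
# a face-recognition model trained on a tagged corpus).
rng = np.random.default_rng(0)
gallery = {
    "Alice": rng.normal(size=128),
    "Bob": rng.normal(size=128),
}

def identify(embedding, gallery, threshold=0.5):
    """Return the closest tagged name by cosine similarity, or None
    if nothing in the gallery clears the similarity threshold."""
    best_name, best_sim = None, threshold
    for name, vec in gallery.items():
        sim = np.dot(embedding, vec) / (
            np.linalg.norm(embedding) * np.linalg.norm(vec)
        )
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# A query embedding that is a lightly perturbed copy of "Alice"
# should resolve to her name; an unrelated vector should not match.
query = gallery["Alice"] + rng.normal(scale=0.05, size=128)
print(identify(query, gallery))          # → Alice
print(identify(rng.normal(size=128), gallery))  # → None (below threshold)
```

The interesting (and hard, and creepy) parts are everything around this loop: detecting that someone is near you and addressing you, and doing the embedding extraction on-device in real time.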