This seems to me a bit like saying we could never have a flat tablet computer because it's impossible to have a perfectly flat CRT that's thin enough, what with the magnetic yoke, the electron beam, the shadow mask, etc.<p>"Breakthrough" is used much too often these days. Breakthroughs are breakthroughs because you don't see them coming. It will be laugh-out-loud unexpected when it comes, seem obvious in hindsight, and be one of the few things <i>ever invented</i> that might deserve a patent monopoly to be granted for a small handful of years.
My TL;DR of this article: due to the additive blending, the "augmented reality" of the immediate future is restricted mostly to 2D HUD-like overlays and ghost-like 3D overlays.<p>True. Although, if you think about it, that's still pretty cool "sci-fi" tech, and it opens up a lot of exciting futuristic possibilities. For example, you can still have a 2D HUD giving all the context-sensitive information you need, and you can still have ghost-like 3D images overlaying the real world to help you out.<p>In fact, I have no problem waiting for non-additive AR glasses for everything except video games, because in real-world use I want to be able to distinguish reality from augmented data.
11 ways to solve what you're trying to do.<p>1. Direct optic nerve connection.
2. Optogenetics (hack the ganglion for blue/yellow on/off control)
3. Holographic displays for close focusable screens
4. Eye drops that can be stimulated to block or darken light
5. An AR that doesn't obsess over sight but uses soundscapes.
6. Invert/distort the image so the brain relearns what light vs. dark means.
7. High-res LCDs that use hemispherical lenses over groups of pixels to produce defocused light.
8. Simplify the problem by making AR windows not goggles.
9. Embrace the imperfections, delays, tears, for artistic license.
10. Constrain the environment so hard AR arcades precede portable devices.
11. Sponsor an X Prize.<p>Ultimately, feasibility and time to market are questions of money, not a lack of ideas or technology. Given a billion dollars, Abrash could have a hard AR system out in under 5 years.
Does anyone else feel that widespread use of "hard AR" isn't actually desirable?<p>I'm all for soft AR, and anything else that increases the convenience and bandwidth of human/computer interaction.<p>But hard AR - seamless with the real world - makes it possible for people to quite literally lie to themselves (or, more sinister, be lied to) about what's real around them. I have nothing against solipsism, but with that kind of technology, it seems like it'd make more sense to go full-on virtual reality, if you want that, rather than viewing the real world through rose-tinted glasses.
On drawing black: Maybe biologists will discover a way to use carefully timed pulses of light to cause rods and cones to emit a diminished signal, or maybe all incoming light will be polarized, with an inverse polarized all-wavelength laser that cancels out the incoming light. Or maybe simcop2387's holographic mask idea (<a href="https://news.ycombinator.com/item?id=4273811" rel="nofollow">https://news.ycombinator.com/item?id=4273811</a>) on a contact lens would work.
Maybe we won't be using biological eyes any more. Maybe we'll be interfacing at the retina or neural level. If it's not any time soon, it may not be using any simple solution we can think of right now.
On drawing black:<p>LCD glasses that "darkened" at specific points do have the problem that those spots would be blurry, yes.<p>But if the spots are big enough, then they will create solid black spots at their middle. And the display can then "fill in" the darkened blurry spot proportionally with the "same" pixels from the camera, altering the ones it wants.<p>It would look bizarre for people looking <i>at</i> the person wearing the glasses, with strange black dots flitting across their glasses... And the syncing would have to be perfect (very difficult).<p>But it's an engineering issue, not a fundamentally physical one.
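To put rough numbers on "big enough": a minimal geometric sketch in Python, assuming the eye is focused at infinity and using illustrative guesses for pupil diameter and lens-to-eye distance (neither number comes from the article). It estimates how wide an occluding patch on the lens must be to get a fully dark core, and how wide the blurry fringe is that the display would have to "fill in" from the camera.

```python
import math

def occlusion_angles_deg(occluder_mm, pupil_mm=3.0, dist_mm=20.0):
    """Rough 1D geometry for an opaque patch on a lens in front of the eye.

    occluder_mm -- width of the darkened patch on the glasses
    pupil_mm    -- assumed pupil diameter (2-4 mm is typical)
    dist_mm     -- assumed lens-to-pupil distance

    Returns (umbra_deg, total_deg): angular width of the fully dark core
    and of the whole affected region (core plus blurry fringe), for a
    scene at optical infinity, using a small-angle approximation.
    """
    umbra = max(occluder_mm - pupil_mm, 0.0) / dist_mm   # radians
    total = (occluder_mm + pupil_mm) / dist_mm           # radians
    return math.degrees(umbra), math.degrees(total)

# A single LCD-sized pixel: no dark core at all, just a translucent blob.
print(occlusion_angles_deg(occluder_mm=0.05))   # ~(0.0, 8.7) degrees

# An oversized patch, wider than the pupil: a dark core plus a wide fringe
# that the display would have to repaint from the camera image.
print(occlusion_angles_deg(occluder_mm=6.0))    # ~(8.6, 25.8) degrees
```

Even with these crude assumptions the takeaway matches the comment above: you only get a truly black core by making the patch wider than the pupil, and the price is a large fringe that has to be reconstructed from the camera in perfect sync with head motion.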
So, I have one issue with this: "hard AR" already has to solve all the mentioned problems of video passthrough if it wants to make virtual objects seem lifelike: dynamic range, field of view, lag, all of it. <i>If</i> you somehow solve all of them, which of course seems impossibly hard at present, then there's no point in worrying about see-through AR; just use video passthrough.
It shows how fast computer technology moves these days that he spends the whole article talking about how hard and far-off this tech is, then casually mentions that it might be available in as little as 5 years.
Couldn't you just use something like circular polarization (<a href="http://en.wikipedia.org/wiki/Circular_polarization" rel="nofollow">http://en.wikipedia.org/wiki/Circular_polarization</a>)? Each point on the glasses is a TFT that polarizes the incoming light to the opposite orientation, thus creating a good black surface.<p>His second argument about the processing time is good if the processing is done on the phone, but with cellular networks becoming better organized you could easily have a computing cluster do most of the work. Using that and some basic statistical inference (to fudge some of the processing), you can get pretty impressive response times.
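On the offloading point, here's a back-of-the-envelope motion-to-photon budget (every number is an illustrative assumption, not something from the article) showing why a cellular round trip is hard to square with the latencies tight AR registration needs:

```python
# Illustrative motion-to-photon budget when rendering is assisted by a
# remote cluster. All stage timings are assumptions picked for the example.
stages_ms = {
    "camera exposure + readout": 8.0,
    "encode + uplink":           10.0,
    "cellular round trip":       40.0,   # optimistic for today's networks
    "cluster processing":        5.0,
    "decode + compose":          3.0,
    "display refresh (60 Hz)":   16.7,
}

total = sum(stages_ms.values())
budget = 20.0  # a commonly cited comfort target; hard AR arguably needs far less

print(f"total: {total:.1f} ms vs. budget: {budget:.0f} ms")
for name, ms in sorted(stages_ms.items(), key=lambda kv: -kv[1]):
    print(f"  {ms:5.1f} ms  {name}")
```

Pose prediction (the "statistical inference" fudge) can hide some of that, but the network leg alone blows the budget, which is why the heavy lifting tends to stay on-device.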
<i>“you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. [..] a pixel at that distance would show up as a translucent blob several degrees across</i><p>The LCD solution was the first one that came to my mind too, and I'm surely missing something here, but why wouldn't the regular translucent pixels suffer from the same problem described above?
Great piece. As an entrepreneur in the CV/AR industry, it's interesting to see how a larger company examines the problems faced by technology that is about to converge. As a smaller company, we have to come up with a product that will stick in a related industry that will hopefully position us well when the time comes for true AR (hard or soft).<p>"skate where the puck's going, not where it's been"
Notes on AR adoption: I believe that AR will be introduced not through everyday-life experience but via event contexts (similar to the dorky 3D glasses we use in the cinema today). In those scenarios, the environment is much more controlled, and even processing lag can be forgivable.
For example, take any enclosed stadium sport, like the Euro soccer championship. How amazing would it be to go to an empty soccer stadium in your hometown and just watch the game through AR? I'm not saying there are no tech challenges, but it is definitely easier where the environment is controlled, and for a new kind of experience even being 3-5 minutes behind is still OK.
If I had to bet, I would say that AR glasses of the future will replace your vision entirely with a camera feed (called video-passthrough in the article). Today's glasses have a low field of view and lag, but these are only quantitative problems. They can be solved with gradual improvements. They're not physical limitations. The camera is right next to the display, so there are no fundamental speed-of-light issues. It's easy to imagine that, over the years, the lag will be reduced enough to be imperceptible.<p>The problem of drawing black with see-through glasses looks like a much harder problem in comparison.
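As a sanity check on "no fundamental speed-of-light issues" (illustrative numbers again, not from the article): light covers the camera-to-display path in a fraction of a nanosecond, so the lag is entirely frame-time bound and shrinks roughly linearly as camera and display rates go up.

```python
def passthrough_lag_ms(fps, exposure_ms=2.0, processing_ms=2.0):
    """Crude video-passthrough latency model: half the exposure window,
    plus processing, plus one frame each for sensor readout and display
    scanout. All parameters are illustrative assumptions."""
    frame_ms = 1000.0 / fps
    return exposure_ms / 2 + processing_ms + 2 * frame_ms

light_travel_ns = 0.05 / 3e8 * 1e9   # ~5 cm camera-to-display path
print(f"speed of light over 5 cm: {light_travel_ns:.2f} ns (negligible)")
for fps in (60, 120, 240, 1000):
    print(f"{fps:4d} Hz pipeline -> ~{passthrough_lag_ms(fps):.1f} ms")
```

Under these assumptions a 60 Hz pipeline sits in the tens of milliseconds, while a kHz-class sensor and display would get down to low single digits: quantitative, not physical, as the comment says.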
Aside from how obviously cool Hard AR would be (for us SF nerds particularly), and some interesting gaming/entertainment(/adult) applications, is there any real <i>need</i> for Hard AR? It seems like it would be used primarily to turn reality into fantasy, which I suppose has its merits, but is that enough of an impetus for the countless man-years of experimentation and development to get us there?
Needs better definition of 'anytime soon': 3 years? 9 years? 20 years?<p>Also, arguing that something <i>isn't</i> possible seems risky for a practitioner in the field. It invests identity in the negative proposition, potentially biasing perception against new possibilities. (When a 'Hard AR' breakthrough does occur, it will come from a researcher who thinks it's possible and imminent.)
Sensor person here. These problems are all solvable with enough sensors and compute power. But what good is it? I'm sure there are specialist applications that make sense, like doctors and mechanics who want to see schematics overlaid. I haven't seen a single consumer-level use case that makes sense. Games? Sure. Anything else?
> I’m sure that one day we’ll all be walking around with AR glasses on (or AR contacts)<p>All light sources generate heat. Is it really a good idea to have a potentially intense heat source so close to your optic nerves?
And yet soft AR, e.g. Aurasma, may revolutionize many areas:<p><a href="http://www.youtube.com/watch?v=frrZbq2LpwI" rel="nofollow">http://www.youtube.com/watch?v=frrZbq2LpwI</a><p>That is, do we actually need hard AR right now?
For the transitional period, it might even be desirable that the AR experience isn't lifelike: easier adoption, avoiding PR disasters, tuning customers in, etc.