VR sickness is primarily caused by latency, i.e. you move your head, the image takes a few milliseconds to respond, and you feel dizzy. But there are other types of VR sickness, like the inability to focus on an object. This research improves your ability to focus on objects at different depths, so your vision is less blurry. So yes, this research does help eliminate nausea in VR. To say otherwise is misleading.
There is a nice table in the paper which compares the capabilities of the different technologies trying to solve the DOF problem in HMDs.

http://i.imgur.com/8rdoeS3.png
Having a display that can support ocular accommodation (selective focus by the eye) is an important research development, though it will most likely not change the viewer's experience in a radical way.

Practical electronic 3D displays require bandwidth reduction, both data bandwidth for transmission and optical bandwidth to create practical or lower-cost optical modulators. The goal is to use bandwidth reduction techniques that produce little or no visual artifacts. Some of the techniques used are the same as in 2D (spatial discretization, time multiplexing, compression), while others are unique to 3D (view discretization, limits on view angle, elimination of coherence).

Head-mounted displays are basically descendants of stereoscopes, the first 3D displays, developed by Wheatstone in 1838. Wheatstone's amazing discovery was that you can throw away a huge amount of information about the world, provide just two images from two viewpoints, project them out to infinity in front of a viewer's two eyes using two lenses/light paths, and a vivid sense of 3D is evoked. That's an incredible amount of information reduction from real life.

In the traditional stereoscope, accommodation is thrown away, mostly because it's really hard to recreate electro-mechanically, but also because we're generally fine without it. Accommodation isn't effective for distant objects (and as we get older we lose the ability to accommodate over ever larger depth ranges), so we likely have neural circuitry to discount imperfect accommodation cues. One of the reasons we turn on bright lights when doing detailed work is to stop our eyes down and increase our depth of field, reducing the need for accommodation.

However, there have been perennial debates about the physiological impact of conflicting depth cues involving accommodation, and those debates are more interesting in VR, where objects can be (virtually) very close to the viewer and the viewer can dynamically change their physical relationship with virtual objects.

Until you have a light modulator that lets you experiment with selectively modulating accommodation within a scene, you can't provide real data on how important accommodation (even approximate accommodation) is for a particular application. Can't wait to see the studies.

We did some similar focal plane manipulation in holographic video more than a decade ago, for related reasons (see Fig 7):

https://www.researchgate.net/publication/255603167_Reconfigurable_image_projection_holograms
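To put a rough number on the depth-of-field point, here's a back-of-envelope from the standard thin-lens approximation of the eye (my numbers, not from the paper): the angular size of the retinal blur grows linearly with both pupil diameter and defocus error.

    % Angular blur \beta (radians) for pupil diameter A (meters)
    % and defocus error \Delta D (diopters, i.e. 1/m):
    \beta \approx A \, |\Delta D|
    % Example: A = 4\,\mathrm{mm},\ \Delta D = 0.25\,\mathrm{D}
    %   \beta \approx 0.004 \times 0.25 = 10^{-3}\,\mathrm{rad} \approx 3.4 arcmin
    % Stopping down to A = 2\,\mathrm{mm} halves the blur.

So stopping the pupil down from 4 mm to 2 mm doubles the defocus the eye can tolerate at the same sharpness, which is exactly why bright light reduces the need for accurate accommodation.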
This is a much better (IMO) approach by Nvidia: http://www.fudzilla.com/news/graphics/39762-nvidia-shows-off-its-light-field-vr-headset-at-vrla-2016
Would it be possible to detect the focal distance of the eye and change the entire focal depth of the display to keep it always in focus, similar to automated vision-testing devices? It could then perform blurring of out-of-focus objects as a rendering step.
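For what it's worth, here's a minimal sketch of what one frame of that loop could look like. This is purely hypothetical Python pseudocode; eye_tracker, display, renderer, and all their methods are stand-ins I made up, not any real SDK, and it infers focus depth from the gaze ray rather than measuring accommodation directly the way an autorefractor would.

    # Hypothetical per-frame varifocal loop. All device objects
    # (eye_tracker, display, renderer) are assumed interfaces,
    # not a real API.
    def varifocal_frame(eye_tracker, display, renderer, scene):
        # 1. Estimate the gaze ray, then find the depth of the first
        #    scene surface it hits. (A true "focal distance of the eye"
        #    sensor, autorefractor-style, could replace this step.)
        gaze_ray = eye_tracker.current_gaze_ray()
        focus_depth_m = scene.raycast_depth(gaze_ray)  # meters

        # 2. Move the display's single focal plane to that depth, so
        #    whatever the user fixates on is always optically sharp.
        display.set_focal_distance_m(focus_depth_m)

        # 3. Re-introduce depth of field synthetically: blur each pixel
        #    in proportion to its dioptric distance from the focus plane.
        focus_diopters = 1.0 / max(focus_depth_m, 0.1)  # clamp at 10 D
        renderer.set_depth_of_field(focus_diopters, pupil_diameter_mm=4.0)
        renderer.draw_frame(scene)

The hard parts in practice would be latency (the focal plane has to settle about as fast as the eye refocuses, a few hundred milliseconds) and the fact that gaze-based depth estimates get noisy for distant objects.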
I'm a bit skeptical of how much of a problem this really is. I have never noticed it in VR. Perhaps because:

1. Display resolution is still quite low, so really everything is blurry.

2. You will never be able to notice blurriness where you aren't focused anyway, because you aren't looking there! Everything is always blurry in your peripheral vision.

3. Surely eye focus is a feedback system, like in cameras? I mean, nobody has problems focusing on TVs, because your eyes just magically change focal length until the image is sharp.

I am stereoblind, so maybe it is a big problem for others.
Why are holograms / light field displays not technically possible now? I would think we have bright and dense enough displays, and can shape the microlenses.
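One standard back-of-envelope answer (my arithmetic, not from the article) is that a microlens light field display splits the panel's pixels between spatial and angular samples, so effective resolution collapses:

    % Each lenslet spends its pixels on view-dependent rays,
    % so per-axis effective spatial resolution is
    N_{\mathrm{eff}} = N_{\mathrm{panel}} / N_{\mathrm{views}}
    % Example: a 3840 x 2160 panel with 8 x 8 views per lenslet
    % leaves only 480 x 270 effective "spatial" pixels.

So "dense enough" means something like two orders of magnitude more pixels than today's panels before a light field display matches even current headset sharpness.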
Ctrl+F "inner ear" - no results. To me, that's going to be the keystone to a functional VR experience. Until there's some relatively non-intrusive mechanism to fool the body's systems into playing along, I'm sorry, I don't think image resolution or refresh or FPS will solve the problem. They're all very important, sure, but I think the biology of the conundrum is the most challenging short term.
I think this is overstated (though without looking through the prototype, this is only speculation).

During normal, outside-of-headset vision, we focus naturally and quickly on whatever we're looking at. We don't spend time with our eyes consciously defocused on subject matter in our foveal view. So anything that's out of focus will tend to be in our peripheral view.

So this is a peripheral technology. I think everyone's still looking for the killer additional tech that will make VR perfect -- but it's not about one magic tech bullet, it's about ecosystems slowly growing, and content getting better. (The headsets are better than people think.)
"It may even let people who wear corrective lenses comfortably use VR without their glasses."<p>If just for this, it's a move in the right direction.
Won't this also need gaze tracking to be successful? In their video they described a manually moved camera.

Is this technology compatible with foveated rendering?
Sort of strange to describe it as a "discovery" - I'm sure a team of engineers with a variety of fields of expertise spent 1-3 years solving problems that led up to this. A "discovery" would seem to describe something that existed in the aether prior to their work - it seems to diminish the innovation and effort they put in.
I guess whatever the faults of the paper, I like that they do have a product on the market and are demoing and publishing research. Magic Leap? Not so much.
Honestly, I found the paper critically lacking where they attempted to make references or comparisons to virtual retinal displays. Saying that a VRD is functionally restricted to *moderate* FOV in comparison to the 120 degree FOV of the Rift, using only the embodiment of the deformable membrane mirror as reference, is ridiculous on its face.

Even a rough version of the deformable-mirror AR VRD described by researchers at UNC Chapel Hill [1] accomplishes 100 degrees FOV with accommodation.

They went further with the Pinlight display, achieving 110 degrees in 2014 [2].

The technical limit according to our own work for VRD FOV is H: 200°, V: 140° (combined). So either they're intentionally ignoring work in the field because they don't want to do VRD, or they don't know about it. My guess would be the former.

[1] http://telepresence.web.unc.edu/research/dynamic-focus-augmented-reality-display/

[2] http://www.cs.unc.edu/%7Emaimone/media/pinlights_siggraph_2014.pdf

edit: I find this whole thing extremely frustrating. Facebook could throw 2 billion dollars at VRD tech and actually get to a working, stable, consumer-grade system if they wanted to -- everything is there for it. Why aren't they?
Article headline is misleading. Eliminating nausea is not the primary benefit of this research and isn't even mentioned in the article. It may help a little with nausea in some cases but it won't eliminate it.
Nausea in VR is already a solved problem, at least for the case where you have a stationary camera. This is about giving people an extra depth cue, but one which isn't strictly necessary.