"We offer an experience of 3D on a single 2D image using the parallax effect, i.e, the user is able move his real-time tracked face to visualize the depth effect."<p>Since there are no examples I can't be sure if this is is what I think it is, but IF it is:<p>I want this on huge monitor for any 3D game instead of clunky headgear VR or tiny smartphone AR.<p>Months ago I also tested this with a small 3D visualization and very crude head tracking.<p>The effect is damn awesome! To be able to move around in real space with the rendering adapting to it, makes it so immersive, even for my very crude tests.<p>In my opinion the resulting 3D effect is MUCH better than viewing in stereo with one picture per eye.<p>Here is an example from someone else from 2007:<p><a href="https://www.youtube.com/watch?v=Jd3-eiid-Uw" rel="nofollow">https://www.youtube.com/watch?v=Jd3-eiid-Uw</a><p>From 2012:<p><a href="https://www.youtube.com/watch?v=h9kPI7_vhAU" rel="nofollow">https://www.youtube.com/watch?v=h9kPI7_vhAU</a><p>Obviously this works for only one person viewing it, but does that really matter?
There are a LOT of use cases where only a single person uses a single monitor for viewing, especially in these times.
In fact, it is the standard.
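The core mapping is simple enough to sketch. A minimal TypeScript version, where trackFace is a placeholder for whatever webcam face tracker you plug in (a real 3D game would move the virtual camera with an off-axis projection instead of translating a DOM element):

    // Minimal sketch: the tracked face position drives the scene offset.
    // trackFace is a stub -- it should return the face center in
    // normalized [-1, 1] screen coordinates from a real face tracker.
    interface FacePosition { x: number; y: number }

    async function trackFace(video: HTMLVideoElement): Promise<FacePosition> {
      return { x: 0, y: 0 }; // stub -- replace with a real face tracker
    }

    const MAX_SHIFT_PX = 40; // scene shift at full head excursion

    async function parallaxLoop(video: HTMLVideoElement, scene: HTMLElement) {
      const { x, y } = await trackFace(video);
      // Shift the scene opposite to the head, so foreground layers appear
      // to swing past the background -- the core of the parallax illusion.
      scene.style.transform =
        `translate3d(${-x * MAX_SHIFT_PX}px, ${-y * MAX_SHIFT_PX}px, 0)`;
      requestAnimationFrame(() => void parallaxLoop(video, scene));
    }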
Not exactly the same, but I made something a while ago that takes a 2D image and tries to infer a depth map to create a 3D effect: <a href="https://awesomealbum.com/depth" rel="nofollow">https://awesomealbum.com/depth</a><p>It's based on [1] and runs entirely in the browser, although it takes a moment to create the depth map. It's more of a toy project at this point. But I was surprised when I saw that Google is doing the same thing now in Google Photos [2].<p>[1] <a href="https://github.com/FilippoAleotti/mobilePydnet" rel="nofollow">https://github.com/FilippoAleotti/mobilePydnet</a><p>[2] <a href="https://www.theverge.com/2020/12/15/22176313/google-photos-2d-3d-photos-cinematic-memories-activities-things" rel="nofollow">https://www.theverge.com/2020/12/15/22176313/google-photos-2...</a>
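For anyone wondering what "infer a depth map to create a 3D effect" means mechanically, here's a rough TypeScript sketch (not my actual code; real implementations do this in WebGL and inpaint the disocclusion holes): shift each pixel in proportion to its depth as the viewpoint moves.

    // Reproject an image using its depth map. Each output pixel samples
    // the source shifted in proportion to depth, so near pixels (depth
    // value 255) move more than far ones as the viewpoint shifts.
    // CPU nearest-pixel version, for clarity only.
    function reprojectWithDepth(
      image: ImageData,     // source photo
      depth: ImageData,     // grayscale depth map, same size, 255 = near
      viewOffsetPx: number  // horizontal viewpoint shift
    ): ImageData {
      const { width, height } = image;
      const out = new ImageData(width, height);
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const i = (y * width + x) * 4;
          // Approximation: use depth at the output pixel to pick the
          // source pixel (a gather, rather than a true forward warp).
          const d = depth.data[i] / 255;
          const sx = Math.round(x - viewOffsetPx * d);
          if (sx < 0 || sx >= width) continue; // leave holes transparent
          const j = (y * width + sx) * 4;
          out.data[i] = image.data[j];
          out.data[i + 1] = image.data[j + 1];
          out.data[i + 2] = image.data[j + 2];
          out.data[i + 3] = 255;
        }
      }
      return out;
    }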
It cracks me up that in 2021 people are still posting fundamentally visual tools without so much as a single screenshot to help you understand what they do
<a href="https://munsocket.github.io/parallax-effect/examples/deepview.html" rel="nofollow">https://munsocket.github.io/parallax-effect/examples/deepvie...</a> here is a different library with a demo
The iPhone and iPad have motion parallax tied to the device's accelerometer. Is this a similar effect, except it's based on head movement rather than device movement?
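What I mean by device-based, as a minimal sketch using the standard DeviceOrientationEvent browser API (the constants are illustrative; on iOS 13+ you also have to call DeviceOrientationEvent.requestPermission() before events fire):

    // Same parallax output, different input signal: device tilt instead
    // of head position. gamma = left/right tilt, beta = front/back tilt,
    // both in degrees.
    const TILT_RANGE_DEG = 20; // tilt producing the maximum shift
    const TILT_SHIFT_PX = 40;

    function startDeviceParallax(scene: HTMLElement) {
      window.addEventListener('deviceorientation', (e) => {
        // Normalize tilt to [-1, 1] and clamp.
        const nx = Math.max(-1, Math.min(1, (e.gamma ?? 0) / TILT_RANGE_DEG));
        const ny = Math.max(-1, Math.min(1, (e.beta ?? 0) / TILT_RANGE_DEG));
        scene.style.transform =
          `translate3d(${-nx * TILT_SHIFT_PX}px, ${-ny * TILT_SHIFT_PX}px, 0)`;
      });
    }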