From the folks at Magic Leap. It looks remarkably good to me.

The video at https://www.youtube.com/watch?v=9NOPcOGV6nU&feature=youtu.be is worth watching, especially the parts showing how the model gradually constructs and improves a labeled 3D mesh of a live room as it is fed more visual data from a camera being walked around the room.

--

On a related note, Magic Leap has been trying to find a buyer for the business for several months now:

https://www.roadtovr.com/report-magic-leap-buyer-sale/

https://www.bloomberg.com/news/articles/2020-03-11/augmented-reality-startup-magic-leap-is-said-to-explore-a-sale
On a tangential note, it's interesting to me that a company (Magic Leap) that has raised several billion dollars generates so little value compared to other companies its size that this is their most notable output in a year; I thought it was a PhD project until I looked at the project owner. Anyway, it's a very interesting project, and thanks for sharing.
Here's a challenge question for folks reading this who know the tools of the trade (my apologies in advance for somewhat hijacking the thread): consider this video of an endoscopy, https://www.youtube.com/watch?v=DUVDKoKSEkU -- say, from 3:00 to 5:00. Suppose I have a bunch of such movies (i.e., series of images!) and I want to do a 3D reconstruction from them.

It seems super, super difficult... there are free-flowing liquids, and since this is the esophagus/upper lining of the stomach, the surface changes shape quite drastically and often. How would you approach this problem?
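If it helps frame answers, here's a minimal first-pass sketch (Python with OpenCV; "endoscopy.mp4" is a hypothetical local copy of the clip, and the parameter choices are my own guesses). It extracts frames and counts tentative feature matches between consecutive ones -- the raw material any structure-from-motion pipeline would start from, and a quick way to see how badly the wet, specular, deforming surfaces break the rigid-scene assumption that classical SfM relies on:

    # Probe whether stable features survive between frames of the clip.
    # Classical SfM assumes a rigid scene; a deforming esophagus violates
    # that, so this only checks if there is anything to match at all.
    import cv2

    cap = cv2.VideoCapture("endoscopy.mp4")  # hypothetical local file
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % 10 == 0:  # keep every 10th frame to limit the workload
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        idx += 1
    cap.release()

    # ORB features + brute-force matching between consecutive kept frames.
    orb = cv2.ORB_create(nfeatures=2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    prev_des = orb.detectAndCompute(frames[0], None)[1]
    for f in frames[1:]:
        des = orb.detectAndCompute(f, None)[1]
        if des is not None and prev_des is not None:
            print(len(bf.match(prev_des, des)), "tentative matches")
        prev_des = des

If the match counts collapse between neighboring frames, that would point toward non-rigid or learned reconstruction methods rather than a classical pipeline like COLMAP.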
I wonder how long it will be before we're able to run a significant portion of YouTube videos (tourist videos, etc.) through something like this and generate a huge 3D mesh of the world. Combined with Street View data, you'd really have a ton of spaces covered.
Looks awesome. Given that it takes position data along with the images, how accurate does the position data need to be? Could it handle something like sensor drift in the position data over time?
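For concreteness, one way to probe that empirically would be to perturb whatever ground-truth poses you have with a small accumulating random walk before feeding them to the model, and watch where reconstruction quality falls off. A NumPy sketch (the function name and noise scales are illustrative guesses, not values from the paper):

    import numpy as np

    def add_drift(poses, trans_sigma=0.002, rot_sigma=0.001, seed=0):
        """poses: (N, 4, 4) camera-to-world matrices along a trajectory."""
        rng = np.random.default_rng(seed)
        drift, noisy = np.eye(4), []
        for T in poses:
            # Accumulate a tiny random rotation (first-order approximation,
            # valid for small angles) and translation at every step, so the
            # pose error grows over time like real sensor drift.
            w = rng.normal(scale=rot_sigma, size=3)
            step = np.eye(4)
            step[:3, :3] += np.array([[0, -w[2], w[1]],
                                      [w[2], 0, -w[0]],
                                      [-w[1], w[0], 0]])
            step[:3, 3] = rng.normal(scale=trans_sigma, size=3)
            drift = drift @ step
            noisy.append(drift @ T)
        return np.stack(noisy)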
For anyone with domain knowledge: how applicable is Google's NeRF work here, in comparison? Is there any overlap?

https://nerf-w.github.io/

https://news.ycombinator.com/item?id=24071787

EDIT: @bitl: Tremendous, thanks for the reply. It would be amazing to be able to build these scenes just by walking around a room with your mobile phone while it records video, then processing the frames into scenes (especially on mobile platforms with a depth sensor to enrich the collected data).
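To make the comparison concrete: NeRF regresses color and density at sampled 3D points and produces images by integrating along camera rays, whereas the Magic Leap work here predicts scene geometry (a labeled mesh) directly, so the output representations differ even though the inputs (posed images) overlap. A minimal sketch of NeRF-style rendering of a single ray, where `mlp` is a stand-in for a trained network -- everything below is illustrative, not code from either project:

    import numpy as np

    def render_ray(mlp, origin, direction, near=0.1, far=4.0, n_samples=64):
        # Sample points along the ray and query the field for color/density.
        t = np.linspace(near, far, n_samples)
        pts = origin + t[:, None] * direction  # (n_samples, 3) points
        rgb, sigma = mlp(pts)                  # (N, 3) colors, (N,) densities
        # Standard volume-rendering quadrature: opacity per segment,
        # transmittance from accumulated opacity, then alpha compositing.
        delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
        alpha = 1.0 - np.exp(-sigma * delta)
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
        weights = alpha * trans
        return (weights[:, None] * rgb).sum(axis=0)  # composited pixel color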
Ladies and gentlemen, you are looking at the pinnacle of mankind's technological achievements. The proof?

We can now make tiny virtual cars do stunts off objects in the real world: https://www.youtube.com/watch?v=9NOPcOGV6nU&feature=youtu.be