This is a project I've been working on for the past couple of months, using the Project Tango tablet to build a navigation system for people with visual disabilities.

It uses pose estimation and point cloud data to (1) build a chunk-based voxel map of the user's surroundings, (2) render a set of depth maps around the user, and (3) use those depth maps and OpenAL to generate 3D audio cues indicating where mapped obstacles are.

I don't have it at a state where folks can try it out yet, but I did do a writeup of my approach and wanted to share it.

Demonstration video (with quiet audio) here: https://youtu.be/EnNuDiJazBs
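For anyone curious what the chunked voxel step might look like, here's a minimal C++ sketch of hashing world-space depth points into fixed-size chunks of occupancy voxels. The 5 cm voxel size, the 16^3 chunk dimension, and all the names are my own assumptions for illustration, not taken from the writeup.

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>

    constexpr float kVoxelSize = 0.05f; // 5 cm voxels (assumed)
    constexpr int   kChunkDim  = 16;    // 16x16x16 voxels per chunk (assumed)

    // integer floor-division/modulo that behave correctly for negative coords
    static int floordiv(int a, int b) { return (a >= 0) ? a / b : -((-a + b - 1) / b); }
    static int floormod(int a, int b) { int m = a % b; return (m < 0) ? m + b : m; }

    struct ChunkKey {
        int x, y, z;
        bool operator==(const ChunkKey& o) const { return x == o.x && y == o.y && z == o.z; }
    };
    struct ChunkKeyHash {
        std::size_t operator()(const ChunkKey& k) const {
            // classic coordinate hash: xor of coords times large primes
            return (std::size_t(k.x) * 73856093u) ^ (std::size_t(k.y) * 19349663u)
                 ^ (std::size_t(k.z) * 83492791u);
        }
    };

    struct Chunk {
        // one occupancy bit per voxel; a real map might keep hit counts instead
        std::uint8_t bits[kChunkDim * kChunkDim * kChunkDim / 8] = {};
        void set(int vx, int vy, int vz) {
            int i = (vz * kChunkDim + vy) * kChunkDim + vx;
            bits[i >> 3] |= std::uint8_t(1u << (i & 7));
        }
    };

    class VoxelMap {
        std::unordered_map<ChunkKey, Chunk, ChunkKeyHash> chunks_;
    public:
        // insert one point already transformed into the world frame by the pose
        void insert(float x, float y, float z) {
            int gx = int(std::floor(x / kVoxelSize));
            int gy = int(std::floor(y / kVoxelSize));
            int gz = int(std::floor(z / kVoxelSize));
            ChunkKey key{ floordiv(gx, kChunkDim), floordiv(gy, kChunkDim),
                          floordiv(gz, kChunkDim) };
            chunks_[key].set(floormod(gx, kChunkDim), floormod(gy, kChunkDim),
                             floormod(gz, kChunkDim));
        }
    };

    int main() {
        VoxelMap map;
        map.insert(1.23f, 0.10f, -2.50f); // one depth point, world frame
    }

The chunk hash map is what makes this practical on a tablet: only regions the sensor has actually seen allocate memory, and a chunk lookup plus a bit set is cheap enough to run per point.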
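And the audio step, roughly: OpenAL lets you place a mono source at a 3D position relative to a listener and handles the spatialization for you. Below is a minimal sketch that plays a synthesized beep from one assumed obstacle position; the actual system presumably drives many sources from the rendered depth maps, and the beep, positions, and constants here are placeholders of mine.

    #include <AL/al.h>
    #include <AL/alc.h>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    int main() {
        ALCdevice*  dev = alcOpenDevice(nullptr);        // default output device
        ALCcontext* ctx = alcCreateContext(dev, nullptr);
        alcMakeContextCurrent(ctx);

        // synthesize a 0.1 s, 440 Hz mono beep (16-bit, 44.1 kHz)
        const double kPi = 3.14159265358979;
        std::vector<std::int16_t> pcm(4410);
        for (std::size_t i = 0; i < pcm.size(); ++i)
            pcm[i] = std::int16_t(3000 * std::sin(2.0 * kPi * 440.0 * i / 44100.0));
        ALuint buf;
        alGenBuffers(1, &buf);
        alBufferData(buf, AL_FORMAT_MONO16, pcm.data(),
                     ALsizei(pcm.size() * sizeof(std::int16_t)), 44100);

        // listener at the user's pose; orientation is {at x,y,z, up x,y,z}
        alListener3f(AL_POSITION, 0.f, 0.f, 0.f);
        const ALfloat orient[6] = { 0.f, 0.f, -1.f,   0.f, 1.f, 0.f };
        alListenerfv(AL_ORIENTATION, orient);

        // one source per obstacle; here, an assumed obstacle 2 m ahead, 1 m right
        ALuint src;
        alGenSources(1, &src);
        alSourcei(src, AL_BUFFER, ALint(buf));
        alSource3f(src, AL_POSITION, 1.f, 0.f, -2.f);   // OpenAL: -z is forward
        alSourcePlay(src);

        ALint state = AL_PLAYING;                        // block until the beep ends
        while (state == AL_PLAYING) alGetSourcei(src, AL_SOURCE_STATE, &state);

        alDeleteSources(1, &src);
        alDeleteBuffers(1, &buf);
        alcMakeContextCurrent(nullptr);
        alcDestroyContext(ctx);
        alcCloseDevice(dev);
        return 0;
    }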
This is awesome, great job. I think you've just given humans echolocation.

If someone were given a similar device at an early age, semi-permanently attached to them, would their brain possibly be able to create a map of the room?

There have been previous attempts, but the Tango device didn't exist then, so the hardware was bulky and usually required a backpack.