Apparently you can shake a Kinect (with a small vibration motor) so that its field of view can overlap with other Kinects without the IR patterns interfering. More Kinects would make this already mind-blowing demo even more mind-blowing, and allow more people in the room (and perhaps open up cross-calibration).

http://www.precisionmicrodrives.com/tech-blog/2012/08/28/using-vibration-motors-with-microsoft-kinect
I'm struck by the vignette involving the table leg, in which Oliver describes it as feeling "unnatural" to pass his Kinect-sensed leg through the virtual table leg, even though there was no obstruction in "real life." I believe this is a demonstration of fully convincing immersion, and Oliver is sure to point out its implications for the uncanny valley. This is so exciting from the perspective of social/cognitive science, too. I'm floored.
Looks good, kudos :) Although it painfully reminds me that we did something similar some years ago[1]; it was even a fight to convince the project's official contractor (aka my client, although after 8 years on-site full-time, I guess I'm more like the second-longest employee by now, lol) that we could do such a proof of concept, and furthermore to do it with the Kinect, given that it started back when only the OpenNI driver (without audio) was available and there was no official SDK from Microsoft yet.

Sadly we never got the funding to go further and do multi-camera capture, and I had to move on to other urgent things, so I'm glad to see others might get to solve it: IMHO there are many applications, even simple things like better video conferencing using 3D capture viewed in the Oculus :)

One suggestion, however: the "fat points" point-cloud rendering of potree[2] might improve the appearance of the generated model compared to meshes; could be worth a try (a rough sketch of the idea follows below).

[1] http://ivn.net/demo.html (you can skip the cheesy first minute of the video)

[2] http://potree.org
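To make the "fat points" suggestion concrete, here's a minimal sketch of the idea using plain three.js rather than potree itself (potree adds octree-based level of detail on top). The variable names and sizes are my own assumptions, not anything from the demo: it just renders an already-unprojected Kinect depth frame as large, distance-attenuated point splats instead of meshing it.

    // Minimal sketch (not potree): render a depth frame as "fat points" with three.js.
    // Assumes `positions` is a Float32Array of XYZ triples unprojected from the depth
    // image, and `colors` an RGB Float32Array of the same length.
    import * as THREE from 'three';

    function makeFatPointCloud(positions: Float32Array, colors: Float32Array): THREE.Points {
      const geometry = new THREE.BufferGeometry();
      geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
      geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));

      // Large, distance-attenuated splats hide the holes between depth samples,
      // which is roughly what potree's "fat points" mode achieves.
      const material = new THREE.PointsMaterial({
        size: 0.01,            // splat size in world units (metres, if depth is in metres)
        sizeAttenuation: true, // shrink splats with distance, so they read as a surface
        vertexColors: true,
      });

      return new THREE.Points(geometry, material);
    }

    // Usage: scene.add(makeFatPointCloud(positions, colors));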
Too bad they don't make Kinects with different IR wavelengths. It would solve a lot of the problems with colliding data and let you use more Kinects at once. I don't imagine it's simple, though: if I understand how diffraction gratings work (that's what produces the IR dot pattern), they're designed for a specific wavelength. Send a different wavelength through one and you won't get the original pattern. And since the depth-sensing algorithm is hard-coded in the hardware, you couldn't use the new pattern to detect depth anyway.
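For the grating point, the usual relation (my sketch of the physics, not something from the article) is:

    d · sin(θ_m) = m · λ    (d = grating period, m = diffraction order, λ = wavelength)

so pushing a different λ through the same element changes every dot's deflection angle. The projected pattern is scaled, and the hard-coded depth correlator would be matching against a pattern it was never designed for.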
There really is something amazing about this setup that neatly bypasses the uncanny valley.

Toward the last quarter of the video, at one point he clips through the table, and it was shocking to see; but even after seeing it, when he moved back out, it still felt like a "person" more than a "CGI ghost".
This is cool, but I have to say I'm basically holding off on getting excited about Kinect stuff until the Kinect 2 gets out to these same researchers and hackers. They are going to have a goddamn *field day*. The Kinect 2 is a straight-up future toy. It's going to make these fabulous Kinect experiments look like 64K scene demos. I can't wait.
One way of improving the original Kinect would be swapping the visible-light camera module for something that does full HD; there should be plenty of space inside the Kinect for that mod.

There is enough 3D data in the Kinect stream, but the 640x480 video is just pathetic.
So I'm thinking five Kinect 2s, a common wooden table, some hardware, and you could have a team room/meeting room in virtual 3-D with folks from all over the world?

Once you made something like this, you'd start writing apps for it -- I would imagine you'd start off with virtual "pictures" for the walls that could have a web browser, spreadsheet, etc. built in. Then you could work up to truly interactive 3-D tools, but I'm not sure users could easily grasp moving to holographic toolsets right off the bat. It's an interesting marketing question.
I've been wondering whether voxels or light fields will win the 3D video war. This is the first cheap voxel capture I've seen working well.

Voxels are nice because they are well understood by most 3D developers, and have the same spatial-resolution characteristics we're used to from 2D formats.

Light fields, on the other hand, have easier capture going for them, don't require a new transmission format (a light field can be transmitted as a 2D video or image -- see the sketch below), and don't suffer from interference problems.

I'm excited to see what happens.
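To illustrate the "fits in a 2D image" point, here's a toy sketch (all names and layouts are my own assumptions): pack the 4D samples L(u, v, s, t) as a mosaic of sub-aperture views, one (s, t) tile per (u, v) camera position, and the result is an ordinary 2D frame that any stock codec can carry.

    // Toy sketch: pack a 4D light field into one 2D frame as a tile mosaic.
    // `lf` holds single-channel samples laid out flat as [u][v][s][t].
    function packLightField(
      lf: Float32Array,
      U: number, V: number, S: number, T: number
    ): { width: number; height: number; pixels: Float32Array } {
      const width = U * S;
      const height = V * T;
      const pixels = new Float32Array(width * height);

      for (let u = 0; u < U; u++)
        for (let v = 0; v < V; v++)
          for (let s = 0; s < S; s++)
            for (let t = 0; t < T; t++) {
              const src = ((u * V + v) * S + s) * T + t;
              // tile (u, v) occupies an S x T block of the mosaic
              const dst = (v * T + t) * width + (u * S + s);
              pixels[dst] = lf[src];
            }

      return { width, height, pixels };
    }

    // The decoder just inverts the same index math to recover the 4D samples.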
A bit off-topic:

Does anyone know of a DIY lidar project, or a cheap lidar?

(Lidars are usually *very* expensive; e.g. the lidar that Google uses on its autonomous cars costs around $78k.)
Alright Google, listen up. I think it's time to attach a bunch of Kinects (or Kinect-like devices) to drones and have them 3D-map the country, or at least the major cities. Then let me attach a VR headset that isn't the Oculus Rift to my computer, take virtual tours of cities, and navigate with a game controller. Call it "World View" or "Street View++".