I also own one and have started developing a 3D painting application with it. It's magical, like playing with a touchscreen was magical.

BTW, this post reveals a bit more information than Leap Motion would like developers to share. Essentially, the OP broke the agreement.
I have had one since well before they shipped the SDK, and all I have to say is that it is quite a bit more janky than the hype makes it seem.

Fingers will disappear without notice when nothing all that crazy is happening, and the frame rate of the device (which is specced at 120+ fps) is much closer to 45-55 fps. This leads to some major problems with long-term finger acquisition that have to be handled by the developer. It is quite frustrating to do things yourself that should be handled by the SDK.

While I understand that this SDK batch is a beta/alpha test, it is much buggier than it should be. The SDK will hang the entire OS quite often, and there is simply no way to detect whether the device is actually plugged in. It will report invalid finger data rather than telling you that no device exists.

And the JavaScript API is so new that it is borderline useless. It doesn't even properly report finger width, which is kind of sad since that worked many versions ago.

Overall, a cool device with lots of hype, but it needs a lot more work to be even mildly useful for anything more than simple gestures.
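For a concrete idea of what "handling it yourself" looks like, here is a minimal sketch of the kind of finger re-acquisition smoothing I mean: match fingertips across frames by nearest neighbor and keep IDs alive through short dropouts. The frame format, thresholds, and class names are all my own illustration, not anything from the Leap SDK:

    # Hypothetical sketch: match detected fingertips to tracked fingers by
    # nearest neighbor each frame, and keep a finger's ID alive for a short
    # grace period when tracking drops out, so IDs stay stable.
    import math

    GRACE_FRAMES = 10    # frames to keep a lost finger alive (~0.2 s at 50 fps)
    MATCH_RADIUS = 30.0  # max mm a finger may move between frames and still match

    def _dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    class TrackedFinger:
        def __init__(self, finger_id, position):
            self.id = finger_id
            self.position = position
            self.frames_missing = 0

    class FingerTracker:
        def __init__(self):
            self.fingers = []
            self.next_id = 0

        def update(self, detected):
            """detected: list of (x, y, z) fingertip positions this frame."""
            unmatched = list(detected)
            for finger in self.fingers:
                best = min(unmatched, key=lambda p: _dist(p, finger.position),
                           default=None)
                if best is not None and _dist(best, finger.position) < MATCH_RADIUS:
                    finger.position = best
                    finger.frames_missing = 0
                    unmatched.remove(best)
                else:
                    finger.frames_missing += 1  # dropped this frame; keep ID alive
            # Forget fingers that have been missing too long; register new ones.
            self.fingers = [f for f in self.fingers
                            if f.frames_missing <= GRACE_FRAMES]
            for pos in unmatched:
                self.fingers.append(TrackedFinger(self.next_id, pos))
                self.next_id += 1
            return self.fingers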
My big question is really a paraphrase of Douglas Adams:

    Radio had advanced beyond touchscreen and into motion
    detection. It meant you could control the radio with
    minimal effort, but had to sit annoyingly still if you
    wanted to keep listening to the same channel.
I can see it working like the Kinect: really useful in a specific and narrow use case, but there is a reason we use pens and not paintbrushes for writing. Similarly, this does not seem like a tool that is easy to use for day-to-day tasks.

If you have an informed (hands-on) opinion to the contrary, I would be very interested.
I miss one of these every time I'm cooking or doing other things that leave my hands dirty or unavailable. I'd love to have something like this so I can answer the phone (Skype!), check out recipes/how-tos, or change playlists while elbow-deep in batter, or while still holding a hot soldering gun and a fiddly piece of kit.

So, as much as Gorilla Arm would be a problem for everyday/all-day use of no-touch gestural interfaces, they are a great solution for existing problems.
Suggestion for the OP: read more about computer vision.

Extracting gestures is indeed a problem. Most of the approaches I know depend on a state triggered by the appearance of a new input (in the video, when you add or remove a finger) and then work by doing a temporal sum of the movement to get a shape.

This of course introduces problems about how fast or how slow the person draws the shape in the air, unless you trigger on a finger being added, a finger being removed (as explained before), *or* on having just successfully detected a gesture. I don't mean identified it, but a quick deceleration of the finger followed by a short immobilization can reset the "frame of reading".

You may or may not have successfully grasped what came before that shape, but a human will usually stop and try again, so you get to join the right "frame of reading".

I've done a little work (computer vision MA thesis) on using Gestalt perceptual grouping on 3D+t (video) imaging. The goal was automating sign language interpretation, especially when shapes are drawn in the air, something very popular in French Sign Language (and therefore, I suppose, in American Sign Language, considering how close they are linguistically).

However, we were far from that in 2003, and we used webcams only. A lot of work went into separating each finger, which depends among other things on its position relative to the others; i.e., at the extremity of the row you have either the index or the pinky, and you can guess which one if you know which hand it is and which side is facing the camera.

I don't think it is, or even was, *that* innovative. I've stopped working on that, so I guess there must have been a lot of new innovative approaches since. So once again, go read more about computer vision. It's fascinating!

I'd be happy to send anyone a copy, but it's in French :-)
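To make the deceleration/immobilization trigger concrete, here's a rough sketch of that "frame of reading" reset. The sample format and every threshold are illustrative assumptions, not taken from any real tracker:

    # Illustrative sketch: cut the gesture stream where the fingertip
    # decelerates sharply and then stays roughly still for a few samples.
    import math

    STILL_SPEED = 20.0   # mm/s below which the finger counts as immobile
    STILL_SAMPLES = 8    # consecutive slow samples that count as a stop
    DECEL_RATIO = 0.4    # speed must also fall below 40% of the recent peak

    def _dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    def segment_gestures(samples):
        """samples: list of (timestamp_s, (x, y, z)) fingertip readings.
        Yields one list of samples per candidate gesture."""
        current, still_run, peak_speed = [], 0, 0.0
        for prev, cur in zip(samples, samples[1:]):
            dt = cur[0] - prev[0]
            if dt <= 0:
                continue
            speed = _dist(cur[1], prev[1]) / dt
            peak_speed = max(peak_speed, speed)
            current.append(cur)
            if speed < STILL_SPEED and speed < peak_speed * DECEL_RATIO:
                still_run += 1
                if still_run >= STILL_SAMPLES and len(current) > STILL_SAMPLES:
                    yield current[:-STILL_SAMPLES]  # drop the still tail
                    current, still_run, peak_speed = [], 0, 0.0
            else:
                still_run = 0
        if current:
            yield current  # whatever was in flight when input ended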
I had the opportunity to get some hands-on time with the Leap before Christmas. Leap Motion sponsored a hackathon at my school and brought in dev boards for everyone to borrow, although they were not the completed product pictured in this article.

I cannot tell you how incredible this product is. I'm a first-year CS student, and I've never done anything even remotely close to gesture tracking before. But at the end of the night, I was able to play rock-paper-scissors with my computer. The API is that simple to use.

Yet, as mentioned, it's incredibly accurate. One of the biggest bugs we faced was that even when we thought our hands were still, the device registered imperceptible movements, which were translated into false moves.

Overall it's a great product, especially for the price.
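To give an idea of how little code a game like that needs, here's a sketch of how simple it can be. The finger count comes from the device each frame; everything else here, including the function names, is my own stand-in rather than real SDK API:

    # Sketch: classify the hand pose by counting extended fingers, then
    # play a round against a random computer move.
    import random

    def classify(finger_count):
        if finger_count <= 1:
            return "rock"      # closed fist; a stray thumb still reads as rock
        if finger_count == 2:
            return "scissors"  # index + middle extended
        return "paper"         # open hand

    def play_round(finger_count):
        player = classify(finger_count)
        computer = random.choice(["rock", "paper", "scissors"])
        beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
        if player == computer:
            result = "draw"
        elif beats[player] == computer:
            result = "you win"
        else:
            result = "computer wins"
        return player, computer, result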
I also have one of these through the developer program. It is very neat, but it does not seem ready for prime time. When it is locked on to your fingers, it is *fast*: shockingly, amazingly fast and accurate. However, it drops fingers all the time, and it almost never gets thumbs.

Even doing the calibration, I could never get it to reach all four corners of a 24-inch screen. I suspect they will get it ironed out in the end; it does seem like an AI/software issue rather than a hardware issue.

I will say that when it's working, it feels really magical. It feels accurate, like I am truly controlling something; but beyond that, it didn't feel *nearly* robust enough for real-world use.
What people never seem to get about Minority Report is that it's not cool because he used his hands. It was cool because it had kick-ass software behind it that could do all the work.

I see this continuously with things like glass/mirrors that can be touch screens, etc. They look awesome because the (imaginary) software demoing them does awesome things.

Leap could be a cool device, but you'll need to think outside the box to see how. My fat ass is not going to wave at anything it can do with a mouse, let alone give up the speed at which we can type/shortcut/mouse compared to physical movement.

Personally, if I were a developer, I'd look at totally new things.
Here's my big question on the interfaces people are making for the Leap Motion: everyone seems to be trying to replicate iPad gestures in 3D, e.g., point your finger at something, drag something around by pointing.

How about instead we create a virtual hand that shows up on your screen, mirrors the movements of your real hand, and lets you interact with objects on the screen?

I just think it would be awesome to be able to virtually reach into your screen and move things around! And it seems like it would be quite intuitive, no?

As some examples: I'm picturing moving your hand to the top of a window, making a grabbing motion to grab the top of the window and move it around, or grabbing the corner of a window to resize it. You could even have the hand type on a virtual keyboard shown inside the screen. What do you guys think?
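To sketch what the grab interaction could look like: treat the hand as "grabbing" when all fingertips are close to the palm, then drag by the palm's displacement. The move_window callback is a stub for real window-manager integration, and all names and thresholds are hypothetical:

    # Sketch: a hand counts as "grabbing" when every fingertip is close to
    # the palm; while grabbing, drag the window by the palm's displacement.
    import math

    GRAB_RADIUS = 55.0  # mm; all fingertips this close to the palm = fist

    def _dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    def is_grabbing(palm, fingertips):
        return bool(fingertips) and all(_dist(t, palm) < GRAB_RADIUS
                                        for t in fingertips)

    class GrabDragger:
        def __init__(self, move_window):
            self.move_window = move_window  # stub: real WM integration goes here
            self.anchor = None              # palm position when the grab started

        def update(self, palm, fingertips):
            if is_grabbing(palm, fingertips):
                if self.anchor is None:
                    self.anchor = palm      # grab begins: remember where
                else:
                    dx = palm[0] - self.anchor[0]
                    dy = palm[1] - self.anchor[1]
                    self.move_window(dx, dy)  # drag by palm displacement
                    self.anchor = palm
            else:
                self.anchor = None          # hand opened: release the window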
I'm excited about the Leap because I rapidly transition my hands from the keyboard to the mouse and back again. That takes a lot of time, and the mouse is hard on my wrists. I would much rather "mouse in the air", even if I lost some precision. My productivity would soar.

Furthermore, my pinkies and thumbs take a beating pressing the command, control, and shift keys. I would much rather wave my thumb or pinky in a particular direction to get those modifiers. This may not be possible with the current Leap, but will no doubt be possible soon.
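Purely speculative, but the mapping could be as simple as holding a modifier while the thumb is deflected sideways past a threshold. hold_key/release_key are stand-ins for a real input-injection API, not anything the Leap SDK provides:

    # Speculative sketch: hold shift while the thumb is deflected sideways
    # from the palm beyond a threshold; release it when the thumb returns.
    THUMB_DEFLECTION = 40.0  # mm of sideways thumb offset that arms the modifier

    def update_modifier(palm, thumb_tip, hold_key, release_key, state):
        """hold_key/release_key are stand-ins for an input-injection API;
        state is a dict remembering whether the modifier is held."""
        deflected = (thumb_tip[0] - palm[0]) > THUMB_DEFLECTION
        if deflected and not state.get("held"):
            hold_key("shift")
            state["held"] = True
        elif not deflected and state.get("held"):
            release_key("shift")
            state["held"] = False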
Seems a fit for any sort of 3D CAD work. I'd like to pair it with unconed's MathBox (https://github.com/unconed/MathBox.js).
Super excited for this. We've got a couple coming to our office.

Can you talk a little bit about the construction of the unit? How does the craftsmanship look?
I have a developer unit and can confirm: it's legit. I don't want to go into too much detail because of the developer agreement, but I think it is best characterized as a useful, portable, affordable, easy-to-integrate Kinect. I'm having a lot of fun messing around with it, though I don't have much to share yet, as other work has taken precedence. SOON.
I spent a week or so playing with a Leap device recently, and had a great time developing a sort of Minority Report-style interface for some display screens in our office. I did end up writing all of the gesture recognition from scratch, which was a bit more difficult than I expected. :)
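For anyone attempting the same, the from-scratch part ends up looking something like this minimal swipe detector; all thresholds here are illustrative guesses rather than values from the Leap SDK:

    # Sketch: call it a swipe when net displacement over the sample window
    # is long, fast, and dominated by one axis.
    import math

    MIN_DISTANCE = 120.0  # mm of net travel to count as a swipe
    MIN_SPEED = 600.0     # mm/s average speed over the window
    AXIS_DOMINANCE = 2.0  # main axis must beat the other by this factor

    def detect_swipe(samples):
        """samples: list of (timestamp_s, (x, y, z)); returns a direction
        string or None."""
        if len(samples) < 2:
            return None
        (t0, p0), (t1, p1) = samples[0], samples[-1]
        dt = t1 - t0
        if dt <= 0:
            return None
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        distance = math.hypot(dx, dy)
        if distance < MIN_DISTANCE or distance / dt < MIN_SPEED:
            return None
        if abs(dx) > AXIS_DOMINANCE * abs(dy):
            return "right" if dx > 0 else "left"
        if abs(dy) > AXIS_DOMINANCE * abs(dx):
            return "up" if dy > 0 else "down"
        return None  # diagonal or ambiguous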
I was wondering if this is common with pre-launch product kits: I have to sign in every time to use the SDK. What am I doing wrong?

That said, I had a fun time writing stuff. Now, if only it had proper Linux support (instead of the hacky VirtualBox workarounds I have to use).