Please, for the love of Jef Raskin and Henry Dreyfuss and Don Norman and all that is human factors, no.<p>The film version of Minority Report was <i>not</i> a model for practical or usable interface design. Millions of years of evolution have built our brains and bodies for interacting with things that provide physical feedback when we touch them. Waving a pencil in the air, "manipulating" an invisible item and looking for visual feedback from a screen, these are not good experiences. Even if you discount the "gorilla arm syndrome" that StavrosK quite rightly points out here, the fatigue of trying to perform fine and accurate motion without physical stimuli for your hands and fingers to respond to is significant.<p>I'm sorry to be a negative voice in the face of innovation, but this really does feel like a technology in search of a problem. What worries me greatly is that it has a remarkably high "cool factor" that would be excellent in short demos, and could be easily pitched to companies looking for a flashy feature to get a leg up on the competition. We were saddled with some dubious decisions at the dawn of the GUI age, and we're just starting to lose them as we enter the Direct Manipulation age of interfaces. Please don't let this concept of feedback-free hand gestures become a paradigm that we're stuck with in the future.
There's too much focus here on the assumption that this will be used as a full-time computing input device. I don't think anyone is realistically advocating banishment of all keyboards/mice to the netherworld.<p>Let's be more creative than that. Think about using it as an alternate input in spaces where a physical keyboard/mouse isn't appropriate, and also for 'short-term computing'.<p>Will this replace input on your workstation? I doubt it, but what about a large map that's installed in a public place? What about some sort of restroom or medical computing device where you'd rather not touch the surface that someone else just touched? You're not going to sit there 12 hours a day. You're going to pull up the map in the hospital and zoom/pan around on it. Why do we need another surface to clean? And in 15 seconds, you're done. No gorilla arm syndrome, no pain, and no real learning curve.
Whenever I see something like this, I immediately think of gorilla arm syndrome. There's a reason I kept saying these would never become widespread while all my friends were raving about Minority Report-style interfaces (ever since Minority Report came out).
I'm currently learning Blender. It's an open source 3D-modeling program with one of the most non-intuitive GUIs ever created. It's like the Vim/Emacs of 3D-creation.
Being able to just grab a 3D object with your hands and knead it into the shape you want would be freakishly amazing.
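The "knead it with your hands" idea above boils down to a sculpt brush: pull the vertices near your grab point, with influence falling off with distance. Here's a minimal sketch of that falloff brush; the function name and the Gaussian falloff are my own illustration, not anything from Blender or Leap.

```python
import math

def sculpt_pull(vertices, grab_point, pull_vector, radius=1.0):
    """Pull mesh vertices along pull_vector, weighted by a Gaussian
    falloff around grab_point -- a crude 'knead' brush."""
    result = []
    for vx, vy, vz in vertices:
        # Distance from the grab point controls the influence.
        d = math.dist((vx, vy, vz), grab_point)
        w = math.exp(-((d / radius) ** 2))  # 1 at the point, ~0 far away
        result.append((vx + w * pull_vector[0],
                       vy + w * pull_vector[1],
                       vz + w * pull_vector[2]))
    return result

# Grabbing at the origin and pulling 'up' moves the nearest vertex by
# almost the full pull, while a far-away vertex barely moves.
verts = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
moved = sculpt_pull(verts, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

A hand tracker would just feed `grab_point` and `pull_vector` from the tracked fingertip position and its motion between frames.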
More than one comment below mentioned using this device for 3D modelling. There are certainly scenarios where an artist could use LeapMotion, like sculpting and painting, but the actual modelling part is heavily keyboard-supported.<p>I imagine you'd need both hands just to replace two mouse buttons and a scroll wheel, and to me that seems like a deal-breaker.
Seems developers can apply for a <i>free</i> Leap+SDK if Leap likes your project idea and thinks you can deliver.<p>>How can I get a free developer kit?<p>>We’re distributing thousands of kits to qualified developers, [...] register to get the SDK and a free Leap device first, and then wow us.<p>Apply here: <a href="https://live.leapmotion.com/developers.html" rel="nofollow">https://live.leapmotion.com/developers.html</a><p>I like the small size and reasonable price.
Might be cool as a 3rd input device, or for specialized terminals.
> Say goodbye to your mouse and keyboard.<p>This single line is enough to help me see through their flawed assumptions. The keyboard and mouse aren't going away anytime soon just because these guys have found a way to integrate gestures with computers. I personally hate the Apple-esque marketing promising users they'll 'Own the future'. Gesture technology has been around for a long time, and I don't see it becoming the future by replacing the mouse and the keyboard. Think about developers like us... no developer would find it useful, because we need to code <i>efficiently</i>, which isn't and never will be possible with gesture technology.<p>So, from a developer's perspective, this is something intended to be too cool, but it fails to understand the basic underlying purpose of a keyboard and a mouse. Maybe this would appeal to high-flying executives who want to flaunt to the world a new way of driving their PowerPoint slideshows, but not the common man/developer who owns an average computer (something like a C2D).<p>I was honestly expecting this to have some features like the Kinect, which developers have hacked into a motion-capture system, especially for use in creating animated movies (which is awesome because a standard decent mo-cap setup will cost you at least $5k). This gadget is unfortunately too basic and solves a very small problem that no one really cares about, IMO.
I don't think anybody, including Leap, thinks the keyboard and mouse are going anywhere. Also, this isn't Minority Report. If Leap can deliver on the sensitivity of the input, then small, precise gestures can be made without moving your hands from the keyboard. That makes it useful in cases where switching from the keyboard to a mouse isn't fast enough for my taste.<p>I can envision opening certain applications with a gesture (saving you from typing the name into Quicksilver or finding and double-clicking the icon). Tasks that you repeat over and over could be assigned to a gesture with great effect, like swiping a finger left and right to change windows.<p>3D editing could be interesting, where you move an invisible object in three dimensions with your hand. Anybody who's done 3D modeling or game development in Unity can attest that a mouse and keyboard are limited in three dimensions.
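The "assign repeated tasks to a gesture" idea above is essentially a dispatch table from recognized gestures to commands. A minimal sketch, assuming a hypothetical SDK that delivers discrete gesture events (the gesture names here are made up, not Leap's actual API):

```python
# Hypothetical gesture-to-command bindings; 'swipe_left' etc. are
# invented names standing in for whatever the SDK would report.
actions = []  # stands in for real side effects like switching windows

bindings = {
    "swipe_left":  lambda: actions.append("previous window"),
    "swipe_right": lambda: actions.append("next window"),
    "circle":      lambda: actions.append("launch app"),
}

def on_gesture(name):
    # Unbound gestures are ignored, so stray hand motion does nothing.
    handler = bindings.get(name)
    if handler:
        handler()

# Simulated event stream from the sensor: one gesture is unbound.
for g in ["swipe_right", "wave", "circle"]:
    on_gesture(g)
```

The dispatch-table shape matters here: ignoring unbound gestures by default is what keeps ordinary typing and hand movement from triggering accidental commands.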
First time I've seen this. Obviously the first time a lot of other people have seen it too, hence it making it all the way to the front page.<p>Anyhoot, I can't deny it, this is very interesting.
I think this has great potential for use in conjunction with wearable computing such as Google Glass. I'm not sure how the current interface for Glass works, but I imagine it's based on voice input and possibly some buttons on the unit itself.<p>Imagine wearing a smaller Leap controller on your wrist - you would be able to use gestures to control the Glass and most likely interact much more intuitively with your surroundings as seen through Glass.
Can anybody explain how this works, i.e., the technology behind it? The page itself doesn't disclose much more than that it uses some kind of secret algorithms to track hands and fingers, but I'd be interested in what kind of sensors and processing are used, and how such a small box can track so many degrees of freedom so accurately. How can this work so well compared to the crude tracking the Kinect does with its two cameras and IR laser projector?
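Leap hasn't disclosed its method, but one plausible answer to the question above is plain stereo vision: with two cameras a short distance apart, a feature's depth follows directly from its horizontal disparity between the two images, z = f·b/d. A sketch of that triangulation (the numbers are purely illustrative, not Leap's real optics):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo triangulation: z = f * b / d.
    A point seen at slightly different horizontal positions in two
    cameras is closer the larger that disparity is."""
    return focal_px * baseline_mm / disparity_px

# Illustrative optics only: ~400 px focal length, 40 mm camera baseline.
z_near = depth_from_disparity(400, 40.0, 80.0)  # large disparity -> near
z_far  = depth_from_disparity(400, 40.0, 8.0)   # small disparity -> far
```

The short baseline a small box allows would limit depth range but still give fine resolution close to the sensor, which matches the advertised hand-scale interaction volume.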
I wonder what the constraints on the background are? If it's not too fussy, then hang one of these around your neck and hook it up to an Oculus Rift.<p>Immersive VR + hand-tracking == ????