To be useful, hand gestures should not be tied to a specific input device, as they have been until now. The goal should be a vocabulary of gestures that works consistently across all devices. For example, you should be able to adjust the volume by turning an imaginary knob whether you're using a tablet with this wrist sensor, sitting at a desk with a webcam, or sitting on your couch in front of a Kinect sensor.

Microsoft is positioning itself to be the author of the leading OSes in all of these environments, so it should build in gesture support that uses whatever form of gesture capture is available, and possibly even pools every device present at a given moment to improve fidelity. For example, neither these wrist devices nor Kinect sensors are likely to be perfect on their own, but the two working together should improve overall fidelity.

If Microsoft does this well enough, it may one day seem strange to think of a computer that cannot "see" you. Perhaps that's a bit creepy...
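The pooling idea can be sketched as simple inverse-variance fusion: each sensor reports an estimate plus a noise level, and the combined estimate is more precise than either alone. This is a hypothetical illustration under assumed numbers, not anything any vendor actually ships; the function name and the example readings are made up.

```python
# Sketch: pooling two imperfect gesture sensors (e.g. a wrist device and a
# Kinect) that each estimate the same knob angle with different noise levels.
# Inverse-variance weighting gives a fused estimate whose variance is lower
# than that of either sensor on its own.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two noisy estimates; returns (fused_estimate, fused_variance)."""
    w_a = 1.0 / var_a          # weight = inverse of the sensor's variance
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical readings: wrist sensor sees the knob at 30 degrees (variance 4),
# the noisier Kinect sees it at 38 degrees (variance 8).
angle, var = fuse(30.0, 4.0, 38.0, 8.0)
print(round(angle, 2), round(var, 2))  # fused variance (2.67) beats both inputs
```

The fused variance here (about 2.67) is smaller than either sensor's own variance (4 and 8), which is the sense in which pooling devices "improves fidelity."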