We got started down the wrong path when Apple defined touch events along with heuristics for faking mouse events from them. As a transitional measure, a library like this is about all we can do for Kinect, but it is <i>not</i> the right long-term solution.<p>If we all have to add special code to our apps and web pages for every input type (mouse, finger, Kinect, pen, eye-tracking, voice, etc.), it will severely limit the user's ability to interact with a device full of apps that each support a different subset. We need a unified set of actions and gestures that apply to all input types. I'd love to see the W3C adopt Microsoft's MSPointer model, assuming Microsoft will let go of any patent claims.
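<p>To make the unified-model argument concrete: the idea is one code path for every input device, with the device identified by a field on the event rather than by a separate event family. Here's a minimal sketch — the event shape and <code>pointerType</code> strings are assumptions loosely modeled on the MSPointer/pointer-event idea, not a spec-accurate implementation:

```javascript
// Sketch of a unified pointer handler: one code path for every input device.
// The event object here is hypothetical; the key idea is that it carries a
// pointerType field instead of arriving as a separate mouse/touch event family.

function handlePointerDown(evt) {
  // The same gesture logic runs regardless of which device produced the event.
  switch (evt.pointerType) {
    case "touch":
      return "tap at " + evt.x + "," + evt.y;
    case "pen":
      return "pen down at " + evt.x + "," + evt.y;
    case "mouse":
    default:
      return "click at " + evt.x + "," + evt.y;
  }
}

// In a browser supporting such a model you would wire this up once, e.g.:
//   element.addEventListener("MSPointerDown", handlePointerDown);
// instead of registering separate mousedown/touchstart/Kinect handlers.
```

The payoff is that apps written against the unified event keep working when a new input device (Kinect, eye tracking) maps itself onto the same model, instead of waiting for every app to add device-specific support.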