The sad thing is, I saw Gmail Motion appear in my Gmail toolstrip, but I didn't even think of April Fools'. I just thought 'oh, <i>great</i>, another half-baked wannabe social-media experiment masquerading as a feature from Google (the email and search company)' and didn't bother clicking it.
Watching the "joke" made me realize something - there's a chance that, in the future, a lot of communication with computers will be done using sign language.<p>After all, most people (rightly) assume that pretty soon, voice recognition will work well enough that you'll be able to dictate everything to your computer, making keyboards partially obsolete (especially for the majority of users).<p>But everyone also correctly thinks that there is one big problem with voice-driven computers - they won't work well with many people in the room at the same time. Imagine a cubicle environment in which everyone needs to talk to their computers.<p>Motion-detection, coupled with sign language, is a pretty logical next step.
Clickable link to the project website:
<a href="http://projects.ict.usc.edu/mxr/faast/" rel="nofollow">http://projects.ict.usc.edu/mxr/faast/</a><p>This makes me want to get a kinect to play around with.
I knew Gmail Motion was an April Fools' gag - that's what the link that sent me there said - but...<p>It's actually a good idea - for certain users. In particular, there are many people who use sign language to communicate. For those who sign, what could be more natural than controlling Gmail via sign? And why stop at Gmail?<p>Further reading: <a href="http://cad.ca/en/issues/telecommunications.asp" rel="nofollow">http://cad.ca/en/issues/telecommunications.asp</a>
Real-time hand-sign recognition had a published solution in 2005 ( <a href="http://portal.acm.org/citation.cfm?id=1107692" rel="nofollow">http://portal.acm.org/citation.cfm?id=1107692</a> - "an automatic Australian sign language (Auslan) recognition system, which tracks multiple target objects (the face and hands) throughout an image sequence and extracts features for the recognition of sign phrases") after about 10 to 15 years of prior work (e.g. <i>Hand movement classification using an adaptive fuzzy expert system</i>, 1996 - <a href="https://www.socrates.uwa.edu.au/Pub/PubDetailView.aspx?PublicationID=429191" rel="nofollow">https://www.socrates.uwa.edu.au/Pub/PubDetailView.aspx?Publi...</a>).<p>It's interesting that this work flowed out of research and development in sheep-shearing robotics ( <a href="http://school.mech.uwa.edu.au/~jamest/shearmagic/autoshear.html" rel="nofollow">http://school.mech.uwa.edu.au/~jamest/shearmagic/autoshear.h...</a> ) from the late 1970s and early 1980s.
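To make the "track the face and hands, then extract features" step concrete, here's a minimal Python/OpenCV sketch loosely in the spirit of that pipeline - not the paper's actual method. It finds the largest skin-coloured blobs per frame and turns their centroids into position/velocity features; the skin thresholds are ad hoc assumptions, and the sign-phrase classifier the paper describes is left out entirely:

    # Sketch: track skin-coloured blobs (face + hands) and extract
    # position/velocity features per frame. Requires OpenCV 4.x.
    import cv2
    import numpy as np

    # Rough skin range in YCrCb colour space; thresholds are assumptions.
    SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)
    SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)

    def skin_blobs(frame, max_blobs=3):
        """Return centroids of the largest skin-coloured regions."""
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contours = sorted(contours, key=cv2.contourArea, reverse=True)
        centroids = []
        for c in contours[:max_blobs]:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

    cap = cv2.VideoCapture(0)  # default webcam
    prev = []
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        blobs = skin_blobs(frame)
        # Feature vector: positions plus crude frame-to-frame velocities.
        # A recognizer (e.g. an HMM over feature sequences) would consume
        # these to label sign phrases; that stage is omitted here.
        if prev and len(prev) == len(blobs):
            velocities = [(x - px, y - py)
                          for (x, y), (px, py) in zip(blobs, prev)]
        else:
            velocities = [(0.0, 0.0)] * len(blobs)
        print("positions:", blobs, "velocities:", velocities)
        prev = blobs
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()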
I remember how xenophobic and arrogant the Google organization was in the early days, when it was a hot company. Well, they are definitely still arrogant - wasting time on a video like this when the technology to actually do it is not really that difficult.
Has anyone combined the Kinect with iOS yet? That plus the HDMI out from the new iPad to a big-screen TV would equal epicness on an unprecedented scale. In fact, I'd actually pay more for that than I did for the iPad itself. Get to work, Apple.
All joking aside, for some users with disabilities this sort of gesture-based interface could be really useful.<p>My prediction is that gesture interfaces will be used for some basic things, but that most interactions with computers will remain fairly conventional. If you do a time-and-motion study, the way people currently interact via keyboards, mice, and touch screens is pretty efficient. Even with augmented reality, I expect that users will still be tapping on virtual keyboards.