I can't help but feel that most of the companies working on these technologies view them in isolation, when none of our peripherals are ever used in isolation. The mouse wasn't developed to replace the keyboard, but to supplement it. You don't generally use a mouse to pick characters off an on-screen keyboard (there are situations where you'd want this, but it's not the general use case), so why do we keep seeing the equivalent of that in new peripherals?

The two main cases I'm thinking of are eye tracking and brainwave tracking (like EMOTIV). Individually, neither looks like a compelling way to control a computer when you already have a mouse and keyboard, but *together*, I think they might really yield something interesting (and sooner!).

Instead of using brain waves to move a cursor around the screen, use eye tracking. Instead of using gaze dwell (lingering on a target) to click, use brainwave tracking. Individually they seem cumbersome and annoying to use. Together they could make a really compelling interface, IMO.
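To make the division of labor concrete, here's a minimal sketch of what that fused input loop might look like. Everything device-facing is hypothetical: read_gaze() and read_click_intent() are stand-in stubs for whatever a real eye-tracker or EEG SDK would actually expose; the only point is that one signal answers *where* and the other answers *when*.

    import random
    import time

    # Hypothetical stubs standing in for real device SDKs.
    def read_gaze():
        """Pretend eye tracker: current (x, y) gaze point in screen pixels."""
        return (random.uniform(0, 1920), random.uniform(0, 1080))

    def read_click_intent():
        """Pretend EEG headset: True when a trained 'click' pattern fires."""
        return random.random() < 0.02  # fires on ~2% of polls, just for the demo

    def move_cursor(x, y):
        print(f"cursor -> ({x:.0f}, {y:.0f})")

    def click(x, y):
        print(f"CLICK at ({x:.0f}, {y:.0f})")

    # Fused input loop: gaze drives the pointer, the EEG signal fires the click.
    def run(poll_hz=60, duration_s=2.0):
        interval = 1.0 / poll_hz
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            x, y = read_gaze()        # eye tracker owns *where*
            move_cursor(x, y)
            if read_click_intent():   # EEG owns *when*
                click(x, y)
            time.sleep(interval)

    run()

A real version would obviously need fixation filtering/smoothing on the gaze stream and a debounce on the EEG trigger, but the loop structure wouldn't change much.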