I always have a kind of negative reaction to these sorts of interfaces.

What I tend to see in this demo is how to make a doorknob unintuitive, how to make music-player functionality undiscoverable, how to accidentally lock yourself out, how to frustrate a user whose mental model of the sensor is incorrect.

I realize you have to approach new technology and interaction paradigms optimistically, but all I can think of is my own frustration when an ill-considered neato feature is foisted on me when I least want it.

Like blue LEDs and touch-sensitive buttons that lack a physical button (e.g. Atmel QTouch).
This is too cool. I cannot wait to find out how a manipulable material and holographic interface feels.

Glad to see Bret Victor's important rant [1] resonating as well.

Does anyone have any tips about getting involved with this area of work? Any good forums or open source projects?

[1] http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign
These guys did a killer demo at NYTM: turning a plant into an instrument (they played an orchid on stage). There was a good question about detecting different touches on the doorknob; this could totally replace the key some day!
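For anyone curious how "grip as key" might work in practice: since the sensor reports a whole swept-frequency capacitance profile rather than a single value, telling grips apart is essentially a small supervised-learning problem. Below is a minimal sketch under that assumption; the grip names, the feature extraction, and the idea of how one sweep arrives are all placeholders, not anything from the demo.

```python
# Hypothetical sketch: classify capacitive frequency-sweep profiles into grip classes.
# The real feature pipeline and hardware driver are assumed and not shown here.
import numpy as np
from sklearn.svm import SVC

GRIPS = ["no_touch", "pinch", "full_grasp", "two_handed"]  # made-up labels

def train_grip_classifier(profiles, labels):
    """profiles: (n_samples, n_frequencies) array of sweep magnitudes."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(np.asarray(profiles), np.asarray(labels))
    return clf

def grip_to_action(clf, sweep):
    """Map one sweep (1D array of magnitudes) to a door action."""
    grip = clf.predict(np.asarray(sweep).reshape(1, -1))[0]
    # An actual "key" would need something much stricter than a grip class,
    # e.g. per-user templates, but this shows the basic shape of the idea.
    return "unlock" if grip == "full_grasp" else "ignore"
```

The interesting (and hard) part for key replacement is whether two people's "full grasp" profiles are reliably distinguishable, which the demo's Q&A seemed to be getting at.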
I might be going off on a tangent here, but I can't help thinking there's something wrong with product names over the past few years. We're now naming things with incredibly common words, changing their very meaning into something else: Facebook, Touché, Bonjour, Spaces, Windows...

For some reason, it's creeping me out. I'm afraid to see the next big thing named Lundi, Ball, Tisch or Hora. It's like we're robbing our languages of their meaning, bit by bit, for commercial purposes.

We can do better than that, can't we?
I really liked the demo controlling the music player; it struck me as the one I'd be most likely to actually see in the near future (the door with the different signs was neat, but requires a bit more hardware).

I'm curious where the sensors were placed for that one (I couldn't see them on the guy's arms), and whether, say, getting caught in the rain or a really dry day would throw it on the fritz.
Medicine and military applications would be great!

Dentist: how the dentist grips the tool dictates how it behaves, e.g. controlling the speed of a rotary tool like a drill based on the grip.

Doctor: the patient giving the doctor feedback with their hand.

Military: for some guns, one finger on the trigger fires a single shot for precision, two fingers fire bursts for rapid fire.
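The dentist example is basically a lookup from a detected grip class to a tool setting. A rough, hypothetical sketch (the grip names and RPM values are made up, and it assumes a classifier like the one sketched upthread):

```python
# Hypothetical mapping from a detected grip to rotary-tool speed (RPM).
GRIP_TO_RPM = {
    "no_touch": 0,           # tool idle when not held
    "fingertip_hold": 5000,  # light grip -> low speed for fine work
    "full_grasp": 20000,     # firm grip -> normal operating speed
}

def update_tool_speed(detected_grip, set_rpm):
    """Call the tool's speed setter based on the current grip; unknown grips stop the tool."""
    set_rpm(GRIP_TO_RPM.get(detected_grip, 0))  # fail safe on anything unrecognized
```

For anything safety-critical (drills, triggers) you'd obviously want the fail-safe default and some debouncing, not just a raw per-frame prediction.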
"recognition rates approaching 100%" - how many times do we hear this? Basically it doesn't work, except for a set of carefully constructed circumstances.
Considering that autistic people and empaths tend to have remarkably rich gestural languages for communicating a wide array of non-verbal units of meaning, or even propositions, and further that allocentric language may present scoped indexicals (not to mention American Sign Language), we may have an opportunity to develop interestingly rich gestural/touch APIs from the mechanics of neurologically rooted gestures that describe or underpin the norms of highly complex, spontaneously emergent non-verbal communication.

For instance, would finger-flipping or self-stimulation be considered "noise" to such a system, or would the system be configurable, adaptive, or "fuzzy" enough to successfully interpret various deviant forms of model human behaviors? (I'm wondering about the intersection between these types of interfaces and the training, or, say, auto-designing, of them via neural networks.)
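On the "noise" question: one common way to keep a recognizer from over-interpreting incidental movement (stimming, fidgeting, etc.) is an explicit reject option, i.e. only act when the model is confident, and otherwise do nothing. A minimal sketch on top of any probabilistic classifier; the threshold value is arbitrary and would need tuning per person:

```python
# Hypothetical reject-option wrapper: treat low-confidence predictions as "noise".
import numpy as np

def interpret(clf, sweep, threshold=0.8):
    """clf must expose predict_proba (e.g. an SVC trained with probability=True)."""
    probs = clf.predict_proba(np.asarray(sweep).reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None          # ambiguous or idiosyncratic gesture -> take no action
    return clf.classes_[best]
```

A more adaptive system could go further and learn a per-user "background" class from observed behavior, which is closer to the fuzzy/trainable setup the parent is asking about.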
Whenever I want to feel excited about the future I just look at this video; seriously, it's really magic stuff. Changing your bathtub temperature with one gesture of your hand in the water, changing the channel with a simple gesture on the couch: the use cases are virtually unlimited. And if one day it becomes even more sensitive, it could turn any object into a fingerprint detector!