"Some people who have interacted with Glass testers feel that people sometimes seem to temporarily drop out of a conversation to process something they see on the display. How does Glass avoid being something that removes us from our physical environment?"

My feeling is that this effect might fade as people become more used to the device. Anyone who wears glasses can tell you that the first time you put them on, the frames are very distracting in your peripheral vision. You become accustomed to them rapidly, however -- within days the frames are invisible, and the adaptation persists even when you take them off for a while.

A similar phenomenon occurs when driving. Every now and then you realize that you weren't watching the road at all, just driving along automatically on peripheral vision. And the haptic compass experiments have demonstrated that people can gain an unconscious sense of location from an external stimulus.

Obviously something popping up in focus is always going to be distracting, but with a notification icon in the corner of your eye, it's entirely possible that you'd grow used to it and stop consciously noticing its appearance -- you'd simply have a somewhat unconscious "email sense".

And there's no reason that should pull you out of what you're doing; when the mail comes to my door, I don't stop whatever I'm doing and go read it. I make a mental note to check it when I get a chance, and continue the conversation.

I'm pretty excited about this technology.
At the risk of sounding silly a few years from now, I'll draw a connection to when personal computers were first coming out. It was hard to predict all the uses we have today, and spreadsheets were thought to be one of the killer apps. Similarly, photos and sharing are the obvious killer app for Glass right now, but that's only because it's very hard to see that far ahead. I can't wait to see what masses of developers will come up with once it's been out for a while.

Further in the future, the device will likely shrink to, or work with, a contact lens. At this point, that is not even a question of science, but of engineering [1].

[1] http://wireless.ee.washington.edu/papers/Lingley_JMMNov2011.pdf
Google Glass can be divided into (at least) two distinct parts: the camera and the display. The camera is what Google is pitching very heavily right now. I'd guess they're doing that because it's something people can relate to... taking pictures from your own perspective, taking them without putting some device between you and your subject. A tiny camera that shoots from eye level is a big enough draw to get early users interested. (The killer app is "real-life" DVR... why should you have to stick your camera phone in front of your face to take a picture or record a video? How many times have you thought, "Damn, if I'd just had my phone ready I could have grabbed an awesome shot"?)

The display is harder to understand. What information do you need in front of your eyes right now that actually helps you? Today you might check Yelp for a restaurant review or Google Maps for directions, but you rarely keep your phone in front of your face while you walk through New York; it's a reference, not a constant aid. It's a cool idea, but most people (even some of the geeky ones who visit Hacker News) don't see the value.

I think the big vision here is that you'll have a camera that consumes the world around you and a system that can process what the camera is seeing and give you genuinely useful information about your surroundings immediately. You walk into the office and the system tells you that you're looking at "Sarah" and that you have a meeting with her at 3pm, or that the menu item you're looking at has 615 calories and most people who order it love it, or that the product you're about to buy is $50 cheaper at an online store and can be shipped to you in 2 days. Glass doesn't offer any of this right now, but it will some day. I want one, and if I'd attended Google I/O I'd have spent the $1500 for the early prototype.
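That pipeline idea can be made a bit more concrete with a toy sketch. Everything here is hypothetical -- the frame format, the recognize/lookup_context/show_card names, and the example data are invented for illustration and are not part of any Glass API; it just shows the three stages the comment above imagines: see, recognize, annotate.

    # Toy sketch of a "contextual card" pipeline for a head-mounted display.
    # All names and data are invented for illustration; this is not a real Glass API.

    from dataclasses import dataclass

    @dataclass
    class Card:
        title: str
        detail: str

    # Stand-in for real recognition: map a camera frame to a (kind, label) pair.
    def recognize(frame):
        return frame.get("kind"), frame.get("label")

    # Stand-in for the data joins imagined above: calendar, nutrition, prices.
    CONTEXT = {
        ("person", "Sarah"): Card("Sarah", "Meeting with you at 3pm"),
        ("menu_item", "Pad Thai"): Card("Pad Thai", "615 calories, highly rated"),
        ("product", "Headphones"): Card("Headphones", "$50 cheaper online, ships in 2 days"),
    }

    def lookup_context(kind, label):
        return CONTEXT.get((kind, label))

    def show_card(card):
        # On a real device this would render in the corner of the display.
        print(f"[{card.title}] {card.detail}")

    def on_frame(frame):
        kind, label = recognize(frame)
        card = lookup_context(kind, label)
        if card:
            show_card(card)

    # Example: the wearer glances at a coworker, then at a menu item.
    on_frame({"kind": "person", "label": "Sarah"})
    on_frame({"kind": "menu_item", "label": "Pad Thai"})

The interesting engineering is in the two stand-ins: recognition that works from a wearable camera, and data joins that only surface a card when there's actually something worth saying.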
Control input will make or break this thing. I don't think a single button and a touchpad will cut it for general computing.

They're riding the current wave of hope around voice control. Maybe I'm naive, but it's going to have to improve by miles for anything beyond novelty use.

I could see this focusing on a tighter use case, like content-aware, eye-mounted cameras with social features -- and why not two of them for stereoscopic capture?
"So we decided that having the technology out of the way is much, much more compelling than immersive AR, at least at this time."<p>I'm not really convinced by this. "Out of the way" means you have to switch focus every time you want to see your information, and then switch back afterwards. To me, this seems even more distracting, albeit less prone to clutter.
I'd suggest they design the device as something that clips onto spectacles. I don't see why anyone would wear this thing if they weren't already going to wear spectacles (why have the whole stupid frame otherwise?), and as designed it's not going to work for spectacle wearers (which is a lot of us).

As for the comment about making something that doesn't come between the user and the physical world -- how about not using this device?

I think it's an intriguing concept, but right now it's a solution looking for a problem. Building software this way is relatively cheap (Google Wave...), but hardware?

It makes me think that an audio-focused UI that allowed commands via a throat mike might be a good way to go.