Though the demo was quite impressive, and definitely more advanced than any other I've seen before, the "shakiness" is still there, detaching the "augmented" and the "regular" reality.<p>What is preventing a more accurate analysis of the video frames? Too low a resolution? Changing lighting conditions? Lack of processing power? I can't see how these could not be solved with current optical and computing equipment.<p>Is it, then, a matter of not yet having developed sufficiently sophisticated image analysis (vision) algorithms?