The headline seems to misuse the word "semantic" (not to mention "understand"). Does the door-opening robot now understand how to open any hinged door with a similar mechanism, or was it just trained to imitate a sequence of changes in a 2D image from a fixed camera angle? Could the same software and robot also be taught to open windows? Boxes? The headline invokes "semantics" explicitly, so: does the system understand "open" versus "closed" across these different types of closures and portals?

I don't want to discount the value of this research. Basic proof-of-concept testing of these ideas is absolutely necessary. But the implicit claim here goes well beyond what's actually happening. The software understands nothing; the "semantics" amount to simple image-matching of labeled objects, with no deeper meaning attached to the labels, so calling that "semantics" is a major stretch.

This approach is not going to teach a robot to pick fruit, serve food, or clean floors anytime soon. Even in the best case, where this turns out to be a workable approach, research like this is just the first of millions of tiny steps along the path. Anyway, I think it's naive to assume that a good way to approach automation is to write software that lets robots learn by watching humans perform the desired task. As cool as that sounds, chances are it would ultimately be a massively inefficient way to solve the problem. It would be like trying to invent the automobile by building a steam-powered robotic horse to tow carriages. The underlying goal is being overlooked in favor of a cool-looking but totally impractical toy demo.