This is an interesting conundrum.<p>I've seen it play out in space-based autonomous systems, where fundamental light-time delays limit how much computation you can offload to Earth versus running on board the rover (<a href="http://www.jpl.nasa.gov/news/news.php?release=2010-094" rel="nofollow">http://www.jpl.nasa.gov/news/news.php?release=2010-094</a>).<p>You end up having to reason about splitting the computational burden between the remote system, which has limited resources, and the cloud. Sometimes you can train in the cloud but run on the robot (e.g., upload large training sets to the cloud, and download a trained classifier to a fast runtime on the robot).<p>Finding the right boundary for such a split system can be a hard engineering/infrastructure problem, because simple changes in bandwidth can have huge infrastructure implications.
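As a concrete sketch of that train-in-the-cloud, run-on-the-robot split (assuming scikit-learn and joblib; the function names here are mine, not anyone's actual system):

```python
from sklearn.ensemble import RandomForestClassifier
import joblib

def train_in_cloud(features, labels, model_path="classifier.joblib"):
    # Heavy lifting happens where compute and the large training set live.
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(features, labels)
    joblib.dump(clf, model_path)  # small artifact to download to the robot
    return model_path

def classify_on_robot(model_path, observation):
    # Load once at startup; inference is then local and fast,
    # with no network round trip in the control loop.
    clf = joblib.load(model_path)
    return clf.predict([observation])[0]
```

The model file is tiny compared to the training set, so the bandwidth-sensitive part of the pipeline stays in the datacenter.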
I'm having trouble believing network latency would be the bottleneck here. Just ping google.com and you'll see ~25ms latency, which is a lot less than the half-second delay described in the article.<p>Now, having the server actually process the information and return a response that must then be vocalized may take much longer, but that's a different issue than "network latency".<p>Not to mention that when people interact, they often use filler words while collecting their thoughts... "um...", "uh...", "hmmm...", "yeah...". There's your half-second delay.
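It's easy to separate the two empirically: a bare TCP handshake is roughly one network round trip, while a full request adds the server's processing time on top. A rough sketch, standard library only:

```python
import socket, time, urllib.request

def tcp_rtt(host, port=443):
    # One TCP handshake is roughly one network round trip:
    # a decent proxy for raw latency, independent of server work.
    start = time.perf_counter()
    socket.create_connection((host, port), timeout=5).close()
    return time.perf_counter() - start

def full_request_time(url):
    # Total time = network round trips + server processing + transfer.
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=10).read()
    return time.perf_counter() - start

rtt = tcp_rtt("google.com")
total = full_request_time("https://google.com")
print(f"network RTT ~{rtt * 1000:.0f} ms, full request {total * 1000:.0f} ms")
```

If the second number dwarfs the first, the bottleneck is the backend, not the network.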
Humans have latency in their comms too; we just hide it effectively with filler language. I'm surprised we couldn't mask a half second of latency with a quick "Hmm..." or "Ah..." or even a bunch of canned responses.<p>I know I've bought myself additional time with exactly those -- "That's a great question." "Interesting..."
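A toy sketch of that masking trick, assuming a ~500 ms cloud round trip and treating print() as a stand-in for text-to-speech:

```python
import asyncio

async def query_cloud(question):
    await asyncio.sleep(0.5)  # stand-in for the ~500 ms cloud round trip
    return f"(answer to: {question})"

async def speak(text):
    print(text)  # stand-in for a TTS call

async def answer_with_filler(question):
    pending = asyncio.create_task(query_cloud(question))  # start the slow call now
    await speak("Hmm... that's a great question.")        # mask the wait
    await speak(await pending)                            # real answer arrives

asyncio.run(answer_with_filler("What's the weather tomorrow?"))
```

The filler plays immediately while the network round trip happens in the background, exactly the way a human buys time.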
Not to mention it's a terrible idea from a privacy and security perspective. It doesn't matter how good your encryption is or how reliable and low-latency your network connection is if the service provider has shoddy VTech-style security. The only way to keep your data safe is to not give copies of it to third parties. ("Two can keep a secret if one of them is dead" and all that.)
Latency might be an issue in face-to-face communication because of the uncanny valley. Perhaps.<p>Latency would not be an issue when harnessing the cloud to drive robots doing chores around the house -- serving drinks, cleaning up, feeding the pets, and so on. All you'd really need is an intermediate language: you'd send commands like "walk over there" or "pick up that cup". So what if there was a 2-3 second delay?<p>Also, you could use real people over the cloud controlling the bots right away; actually hooking AI into it and having the cloud control everything is still a ways off. (And having robots in your house controlled remotely by other computers is about as freaking crazy as I can imagine.)
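The intermediate language could be as dumb as JSON intents. A sketch, with a hypothetical command vocabulary:

```python
import json

def make_command(verb, **params):
    # Hypothetical high-level vocabulary: coarse intents, not joint angles.
    return json.dumps({"verb": verb, "params": params})

# The cloud (or a human teleoperator) sends coarse intents...
commands = [
    make_command("walk_to", x=3.2, y=1.5),
    make_command("pick_up", object_id="cup_07"),
]

# ...while the robot's onboard controller handles the latency-sensitive
# parts (balance, grasping, obstacle avoidance) locally, so a 2-3 second
# delay on the command channel is tolerable.
for msg in commands:
    cmd = json.loads(msg)
    print(f"executing {cmd['verb']} with {cmd['params']}")
```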
The uncanny valley isn't real. It's just an artifact of the fact that most people only see robots on video.<p>Any latency we'll just adapt to quite naturally.<p>At worst it means Blade Runner-style robots can't work 100% off the cloud.<p>Personally, I think caching would deal with most issues. How often does anyone ever surprise you with a sentence?
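E.g., put a cache in front of the cloud call so only genuinely novel sentences pay the round trip (ask_cloud here is a hypothetical stand-in for the slow path):

```python
from functools import lru_cache

def ask_cloud(utterance):
    return f"(cloud answer to: {utterance})"  # hypothetical slow round trip

@lru_cache(maxsize=4096)
def respond(utterance):
    # Repeats are answered instantly from the local cache;
    # only novel sentences hit the network.
    return ask_cloud(utterance)

print(respond("what time is it"))  # cache miss: pays the round trip
print(respond("what time is it"))  # cache hit: effectively zero latency
```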
In GitS, the tachikomas have onboard brains, but each night they dump all their memories and experiences into a central database. All tachikomas learn from each single unit's experience, and each unit learns from the experience of all the others.
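That pattern is easy to sketch: act on local memory all day, then push the day's experience up and pull the merged result back down each night (all names here are illustrative):

```python
class CentralDatabase:
    def __init__(self):
        self.experiences = []

    def sync(self, unit_memory):
        # Merge this unit's day into the shared pool; return the whole pool.
        self.experiences.extend(unit_memory)
        return list(self.experiences)

class Tachikoma:
    def __init__(self, name):
        self.name = name
        self.memory = []  # onboard brain: fast, local, no round trips

    def act(self, event):
        self.memory.append((self.name, event))

    def nightly_sync(self, db):
        # Dump today's experience and inherit everyone else's.
        self.memory = db.sync(self.memory)

db = CentralDatabase()
t1, t2 = Tachikoma("t1"), Tachikoma("t2")
t1.act("saw a cat")
t2.act("climbed a wall")
t1.nightly_sync(db)
t2.nightly_sync(db)
print(len(t2.memory))  # 2: t2 now also knows about the cat
```

Latency only matters on the nightly sync, never in the moment-to-moment behavior.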
It's interesting that no one seems to have commented on the analog/digital distinction yet. I'm not a cog-sci scholar, but the massively parallel nature of the brain allows highly sophisticated computations to happen more or less instantaneously, whereas even an android as powerful as this one is going to have to "look something up", as it were. The Silliman lectures (I think? drunk commenting) von Neumann gave at the end of his life cover this. Also, the "intention engine" from the article sounds interesting.
I'm not sure how far away their datacenter is that they're seeing 500ms of latency. I'm pretty sure packets can circle the globe in under 500ms these days.<p>The article mentions access points; maybe they're hampered by poor WiFi?
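Back-of-envelope, assuming light in fiber travels at roughly 200,000 km/s:

```python
C_FIBER_KM_PER_S = 200_000        # light in fiber is ~2/3 of c
EARTH_CIRCUMFERENCE_KM = 40_000

# Round trip to the far side of the planet and back:
antipode_rtt = (EARTH_CIRCUMFERENCE_KM / 2) * 2 / C_FIBER_KM_PER_S
print(f"worst-case propagation RTT: ~{antipode_rtt * 1000:.0f} ms")
# ~200 ms -- well under 500 ms, so raw propagation delay alone
# can't explain it; the time must be going somewhere else.
```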