Seriously unimpressed.<p>I really, really hate it when people use very small neural networks and extrapolate the results into claims about biological plausibility.<p>Also, isn't this at approximately the level of a first AI project? Building robots seems wholly unnecessary, because there appears to be no reason why it couldn't just be simulated.<p>What was the point of the poison? It was mentioned twice and then ignored. Turning off the light near the food isn't "deception"; it's just not communicating. If the robots lured each other towards the poison, that <i>might</i> be something to write home about.<p>Come to think of it, even <i>simulating</i> this seems unnecessary, because given the way the experiment is designed, it's like listening to Captain Obvious reporting from the fortress of the blatantly apparent.<p>This really just sounds like cargo-cult science at its very worst. Perhaps they could team up with the guy training neural networks on the Bible.