Wow, this is seriously a bunch of bull. It doesn't say anything interesting about the problem at all; it just piles up speculation and improbable comparisons.

Like, "punishing a robot"...? No AI scheme I've seen has any sense of self-worth. You can give it a negative reward, but the AI doesn't care about rewards it has already received; it only takes actions to maximize its _expected future_ reward. Which means you can train it to avoid doing these bad things, but punishing it after the fact would just leave it confused about why its reward signal changed, and then it would go right back to maximizing its reward (see the sketch below).

I, for one, am all for the Three Laws of Robotics, but they probably won't work, for a much simpler reason: the robot can't identify the terms. How would an AI recognize when a human is in harm's way? Would it show the drowning person a picture of distorted letters and ask what the word is? Or would it jump in to save posters from being damaged? And that's the easy part... how would you even define harm? These questions need to be answered before anyone tries to write ethics guidelines for robots to follow.

You would seriously learn more about robot ethics from _I, Robot_ than from this poorly written article.
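
To spell out the reward point: here's a minimal sketch, assuming a toy tabular Q-learning agent (the state/action counts and constants are made up for illustration, not from the article). Action selection depends only on estimated expected future reward, so a one-off negative reward ("punishment") just perturbs the value estimates and the agent resumes maximizing the same signal:

    import random

    N_STATES, N_ACTIONS = 5, 2
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    # Q[s][a] estimates expected discounted future reward for action a in state s.
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def choose_action(state):
        # Epsilon-greedy: the decision depends only on current value estimates,
        # i.e. on expected future reward -- never on rewards already received.
        if random.random() < EPSILON:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: Q[state][a])

    def update(state, action, reward, next_state):
        # Standard Q-learning update toward reward + discounted best future value.
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

    # "Punishing" the agent is just another update with a negative reward.
    # Unless the environment keeps handing out that penalty, the estimates
    # drift back and the policy reverts to maximizing the original signal.
    update(state=0, action=1, reward=-10.0, next_state=0)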