So I'm not someone who actively writes AI software, but I am a knowledgeable supporter. I'm all for the Singularity, rights for future non-human intelligences, and so on.

So I *always* take issue with these kinds of esoteric debates about how to engineer ethics into an intelligence that can learn and become conscious.

Haven't any of these yahoos ever had kids or owned a pet dog?

You don't "engineer ethics" into your son or daughter. You teach them through examples of good behavior, punish them when they misbehave, and reward them when they succeed. Over the course of a few years, given a good environment, the end result is a new young intelligence that knows how to behave well and get along with others. That intelligence often bootstraps itself up into adulthood and eventually creates later iterations of itself. If it was raised well, then the new ones tend to get raised well too. We call them "grandkids".

So let's assume that in 10-20 years something descended from cortical simulations like Blue Brain or IBM's cat-scale cortex models leads to a system somewhere in the intellectual range between a dog and an elephant.

Most people will agree that dogs and elephants are pretty damn smart. Dogs can perceive human emotional states, understand some language, do work for people, and fit nicely into our social structure. Elephants aren't that close with people, but they are highly intelligent, have active internal emotional states, and even grieve for their dead. In some societies, people and elephants have worked together for thousands of years.

In both cases, we have thousands of years of experience working with other intelligences of varying scales. In general, if you don't mistreat them, they turn out to be socialized pretty well. It's only when you mistreat them that they learn to fear and hate you. The same is true for people.

So as @aothman said in another comment in this thread, AI researchers are just trying to get their projects to not fall over. There's no thought of "engineering ethics". This problem is going to be solved one little bit at a time. Artificial neural architectures are going to become more and more sophisticated over time. But there is a key difference between the underlying architecture and how you go about training these new minds.

If you raise them well, then most of these angels-on-a-pin discussions are just that: meaningless.