Honestly, I think these kinds of fears are misguided at a certain level. What you need to be worrying about is regulation and/or incentivization of human behavior, not AI design.

Why?

Because people typically design things to solve a problem, and those problems come with constraints. Your robot vacuum cleaner wouldn't try to kill you because it isn't equipped to do so, and to the extent that it could be dangerous, it would be treated as an extension of pre-existing problems (e.g., robotic lawn mowers can be deadly as a side effect, but so can mowing your lawn with an old-fashioned gas mower).

Underlying these fears, I think, are two classes of problems:

1. General-purpose AI. This probably won't happen except through people who are interested in replicating humans, or as some analogue to a virus or malware, where rogue developers build an AI out of amusement, curiosity, or personal gain and release it. I would argue the real question is how to regulate the developers, because that's where your problem lies: the person who would equip the vacuum cleaner with a means of killing you.

2. Decision-making dilemmas, like an autonomous car deciding how to exit an accident scenario. This is maybe trickier, but it probably boils down to ethics, logic, philosophy, economics, and psychology. Incidentally, I think those areas will become the major focus in dealing with these problems: the technical issues of implementing neural nets, deep learning architectures, etc. in hardware are enormously challenging, but once they're solved, I think making AI "safe" will be the "easy" part. The hard part will be the economics, ethics, and psychology of regulating the implementations in the first place.