I'm starting to think Aaronson is industrialising his commentaries to create his walk-away wealth.<p>I'd be surprised if there aren't rich seams in existing comp sci, robotics and applied sciences to mine for safety principles. I don't personally see problems in A.I. itself which haven't been presaged by, e.g., Norbert Wiener's decision to withdraw from bomb science, or by the Oxford institute's work on ethics in computing and networks. Or, dare I say it, the Therac-25 case, or the box girder bridge disasters Freeman Fox brought on.<p>"Because a computer said it, it must be right" is a terrible basis for decision making.<p>A more recent example for A.I. (within living memory, the last time people talked up a new wave of research) might be the fuckups in expert systems, such as the medical school admissions software which encoded functional institutional racism and sexism: it optimised to mimic the bias inherent in "manual" admissions.<p>Healthy scepticism, air gaps and checking. Don't believe anything Bender says right now. "Good news, people" or not, Professor Farnsworth.