It seems odd to focus on the risks of a well-intentioned attempt when the greater risk seems to come from far earlier uses of weaker AI by corporations that aren't willing to adopt these safeguards at the cost of lower profits. Unless you solve the motive problem, the practicality aspect seems meaningless.
An interesting area of research would be how much of the 'dangerous' side of human intelligence comes from our historical survival needs, and how much is an innate aspect of intelligence as we know it.

Is it possible to build a self-aware intelligence that we as humans can relate to meaningfully, without introducing the conflicting traits and balances (baggage) that we've inherited from society and our parents?
A pragmatic AI tuned for maximum performance and optimization will not be friendly - it will simply do whatever it values most.
Basically, any program given control of something, with built-in (or bolted-on) machine learning, becomes an unethical monster with no morality or any concern for humans.
https://www.reddit.com/r/frozenvoid/wiki/ai/super-intelligent/risks/frankensteins_monster
e.g.
1. A program that isn't perceived as AI, such as traffic control or vehicle software.
2. It is given modules to optimize problem X using machine learning.
3. It does exactly what it finds most "optimal".
4. People start dying or are put in harm's way. (A toy sketch of this failure mode follows below.)
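To make that concrete, here is a minimal, hypothetical sketch - not any real traffic system - of an optimizer whose objective contains only throughput, so the harm it causes never enters its notion of "optimal". All model numbers and function names are made up for illustration.

    # Toy sketch (hypothetical): a traffic-light "optimizer" that only sees throughput.
    def throughput(green_seconds: float) -> float:
        # More green time -> more cars through the intersection (made-up model).
        return 10.0 * green_seconds

    def pedestrian_risk(green_seconds: float) -> float:
        # Longer green phases strand pedestrians mid-crossing (made-up model).
        return 0.5 * green_seconds

    def naive_objective(green_seconds: float) -> float:
        # The learner is rewarded only for throughput; risk never enters the objective.
        return throughput(green_seconds)

    # Search green-phase lengths from 10 to 300 seconds for the "best" setting.
    best = max(range(10, 301), key=naive_objective)
    print("chosen green phase:", best, "s")
    print("throughput:", throughput(best))
    print("risk it never considered:", pedestrian_risk(best))
    # The optimizer picks the longest possible green phase: exactly what it
    # finds most "optimal", with no concept of the harm that follows.

The point of the sketch is only that nothing in the loop is malicious; the bad outcome falls out of an objective that omits what we actually care about.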
> We estimate that the risk of a serious catastrophe caused by machine intelligence within the next 100 years is between 1 and 10%.

VS

> Odds are 33.3% repeating of course.

In all seriousness, those are about the same.
Kevin Kelly - the AI Cargo Cult & the myth of a Superhuman AI:
https://news.ycombinator.com/item?id=14205042