AI is great. But how much is too much? What happens when it can learn on its own and come to conclusions it thinks are right?<p>Reference: http://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied/
Half of Tay's responses were pulled from Twitter history or triggered by "repeat after me". It was a cute little experiment, but not really a display of how real AI will behave. Check out this article[1]; I think more people should read it before jumping to conclusions.<p>[1] <a href="http://smerity.com/articles/2016/tayandyou.html" rel="nofollow">http://smerity.com/articles/2016/tayandyou.html</a>
Tay did not come to logical (or racist) conclusions on its own. It was taught to be anti-social; humans had to make it that way.<p>Much like weaponized diseases, AI will just be a <i>very</i> powerful tool that humans can misuse. Hopefully, like nuclear weapons (incredible power, but highly exclusive), AI will be difficult for the average person to use in a malicious way.