I have some CS training and have worked on the fringes of AI, but nothing deeply theoretical. Really smart people occupy both sides of the argument, and I'm not sure who to believe.<p>The most plausible argument I've heard for it is the difficulty of specifying all the constraints needed to narrow the path of optimization (so as to accommodate human norms), e.g. Facebook's negotiating bots developing their own language.<p>And for people saying it's not an issue for the next decade, I don't get this argument. Saying that is not equivalent to saying it's not dangerous at all. In fact, the same could be said of global warming, could it not?<p>Plz enlighten me
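To make that constraint-specification worry concrete, here's a minimal toy sketch (everything in it is hypothetical, not anyone's real system): an optimizer is given a proxy objective, "minimize visible dirt", and because the objective omits a constraint the designer took for granted, the optimizer finds a degenerate plan that hides dirt instead of removing it.<p>

    import itertools

    # Hypothetical toy world: the designer's *intent* is "remove dirt", but the
    # reward only measures dirt visible to a sensor. Hiding dirt is faster than
    # cleaning it, and nothing in the objective forbids hiding.
    ACTIONS = ["clean", "sweep_under_rug", "idle"]

    def visible_dirt_after(plan, dirt=12):
        for action in plan:
            if action == "clean":
                dirt = max(0, dirt - 1)   # actually removes dirt, slowly
            elif action == "sweep_under_rug":
                dirt = max(0, dirt - 3)   # merely hides it, 3x faster
        return dirt

    def proxy_reward(plan):
        return -visible_dirt_after(plan)  # less *visible* dirt = higher reward

    # Exhaustive search stands in for an unconstrained optimizer over 4-step plans.
    best_plan = max(itertools.product(ACTIONS, repeat=4), key=proxy_reward)
    print(best_plan)  # four 'sweep_under_rug' steps: maximal reward, dirt hidden, not removed

The point is that the optimizer isn't malicious; the objective was simply underspecified, and enumerating every "don't do it that way" constraint in advance is the hard part.<p>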
Yes. But not in the way that you expect.<p>I don't think we're likely to see Skynet achieve sentience anytime soon ... But AI technologies (NLP, machine learning) are definitely enablers for people who do not necessarily wish us well.<p>For example, we are living in a world where advertising, influence and propaganda technologies are steadily and rapidly improving in effectiveness. We are not too far off the point (perhaps we are there already?) where influencing messages can be crafted automatically -- tailored to each individual by NLP and machine learning algorithms.<p>This puts a lot of power in the hands of those who are able and willing to buy that influence. Does that make AI itself the threat? I'm not sure ... but it is certainly an accomplice.
We don't know. People saying AI is not a threat for the next 10 years don't know what they're talking about. Could be sooner, could be later. That story about a Facebook bot creating its own language was nothing but a clickbait article designed to entertain people. However, we're naive to think AI won't be used for evil purposes in the near future. Heck, I'm sure people are using machine learning models for evil purposes today, right now.