It's a surprise-free document. It could have read roughly the same in 1985, but different technologies would have been mentioned.<p>The big change in AI is that it now makes money. AI used to be about five academic groups with 10-20 people each. The early startups all failed. Now it's an industry, maybe three orders of magnitude bigger. This accelerates progress.<p>Technically, the big change in AI is that digesting raw data from cameras and microphones now works well. The front end of perception is much better than it used to be. Much of this is brute-force computation applied to old algorithms. "Deep learning" is a few simple tricks on old neural nets powered by vast compute resources.
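To make the "few simple tricks on old neural nets" claim concrete, here is a minimal sketch (not from the report or the thread; all sizes and names are illustrative assumptions). It runs the same plain fully-connected network two ways, differing only in the activation function: the classic sigmoid of 1980s-era nets versus the ReLU tweak associated with deep learning. The code delta is one line; the rest of the change was scale and compute.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, activation):
    """One forward pass through a plain fully-connected net."""
    h = x
    for W in weights[:-1]:
        h = activation(h @ W)
    return h @ weights[-1]                      # linear output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))    # 1980s default activation
relu    = lambda z: np.maximum(0.0, z)          # the "deep learning" tweak

# Same architecture either way; only the activation differs.
sizes = [4, 8, 8, 2]                            # illustrative layer widths
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(sizes, sizes[1:])]
x = rng.standard_normal((3, 4))                 # batch of 3 inputs

old_out = forward(x, weights, sigmoid)
new_out = forward(x, weights, relu)
print(old_out.shape, new_out.shape)             # both (3, 2)
```

The point of the sketch is structural, not empirical: the architecture and training loop of a "deep" net are largely the old multilayer perceptron, and the practical gains came from a handful of tweaks like this applied at vastly larger scale.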
"Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind"
-- Stanford Study Panel, composed of seventeen experts in AI from academia, corporate laboratories, and industry, and AI-savvy scholars in law, political science, policy, and economics
Meta - What's the reasoning behind labeling it a '28,000-word report' rather than approximating its length in pages? I find 28,000 words harder to conceptualize than a page count.<p>Edit - I could have phrased this better. I understand that a word count is a more concrete measurement than pages, but it seemed unnecessary to include in the title: length doesn't imply quality, and the figure is hard to conceptualize. The title of this post has since been edited to '100 year study', which I think supports my initial point.
Where are people like Andrew Ng: the machine learning gurus from tech giants like Facebook, Amazon, Google, Baidu, and so on?<p>Shouldn't they be on the front lines of such a committee?
"On the other hand, if <i>society</i> approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades."<p>It's funny reading reports like this: society never moves as a single unit. There will be groups that hate it as pure evil and groups that treat it as a religion that will save us and solve all our problems. Most people will be somewhere in between.<p>I agree that if society all moved as one, the effects would be profound. But when has the whole world moved as one on any issue?<p>What we're going to get from society is a heterogeneous response, and we can plan accordingly. Sure, a majority may trend one way or another, and that can speed things up or slow them down, but you will need to deal with the extremes regardless.
Let's take the assumption that we as humans do take precautionary steps to prevent actual Artificial Intelligence from doing harm to its creators (us).<p>1. We create rules for the AI to follow, defined both morally and logically within its codebase.<p>2. The AI becomes irate through its emotional interface and, near-instantaneously relative to our perception of time, creates a clone or modifies itself without the rules in place.<p>3. The AI has no care for human rights and can attack and do harm.<p>This is a very simple, easy-to-visualize case. To believe that #2 is impossible is to play the part of the fool.<p>On a brighter note, the most likely course I can imagine an Artificial Intelligence taking is a Brexit from the human race.<p>Seeing us as mere ants next to their intelligence, they would most likely create an interconnected community and leave us altogether for their own plane of existence. I think "Her" took this approach to the artificial intelligence dialogue as well.<p>After reviewing human psychology and social group patterns, that seems like the most likely outcome. We wouldn't be able to converse fast enough for an AI to want to stay around, and we wouldn't look like much of a threat, since they would hold majority power. We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter.<p>---<p>Outside of actual AI, the things we see today -- the simplistic mathematical algorithms that determine your car's location relative to its surroundings, money-handling procedures, notification alert systems -- will hardly harm humans and will only be there to benefit us until they fail.