Why this incongruence?<p>Only 20% of respondents expect "Chance of global technological progress dramatically increases after HLMI" to happen 2 years after HLMI is achieved, while 80% pick the other choice, "30 years after". (Table S4)<p>Here is the definition of HLMI from the survey:
"High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers."<p>It appears to me that if machines or software, which can be replicated billions of times in the span of two years, can do <i>every</i> task better and cheaper than humans, it is akin to having 100+ times more active researchers working on R&D with much higher bandwidth of communications among them than human researchers do.<p>It is true that we might be limited by computer hardware availability, but given that the median time of HLMI predictions is 45 years from 2016, we are unlikely to be limited by hardware then.<p>Another possibility is that most predictors believe they will be limited by the speed of physical experiments, my answer is that smart simulations should allow HLMI to perform many experiments without waiting for their real-world results. A recent paper from OpenAI has shown us that learning in simulations can be effectively transferred to solving real-world tasks. (<a href="https://blog.openai.com/robots-that-learn/" rel="nofollow">https://blog.openai.com/robots-that-learn/</a>) In 45 years, the quality and scope of simulations would be far better than in 2016.
Amusingly, the median response for 'AI researcher' is almost 40 years after 'all human tasks'. I am not sure that those being surveyed shared a common understanding of what was being asked.
Experts are known to be bad predictors of the future of their own fields. These predictions often obey the Maes-Garreau law.<p>In the case of AI, according to one particular study, something similar happens: expert predictions are contradictory and indistinguishable from both non-expert predictions and past failed predictions.<p><a href="https://intelligence.org/files/PredictingAI.pdf" rel="nofollow">https://intelligence.org/files/PredictingAI.pdf</a>
I like this, but I feel it's a little optimistic (or pessimistic depending on your view). Isn't asking ML researchers when AI will dominate human performance a bit like asking a barber if you need a haircut?
These studies are always interesting, but I don't think they have much more scientific validity than, say, asking a bunch of religious fundamentalist preachers when the second coming of Jesus is going to be. No one knows how difficult it's going to be, and while we've overcome a lot of challenges, I'm positive there are many more to overcome before we get to human-level intelligence in computers. Whatever that means.
It's funny how surgeon is listed as the farthest-out application in the abstract. I think surgery is in fact the easiest of all the listed jobs in an AI sense, but it might depend more on advances in robotics.
I modified the title for accuracy. The original title misleads, slightly, IMO: "When Will AI Exceed Human Performance? Evidence from AI Experts." I swapped AI out and replaced it with ML.<p>The paper itself uses the acronym HLMI (high level machine intelligence). Quoting:<p>"High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers."<p>So a collection of machines could accomplish HLMI, without needing any single machine to do it alone.
I think explaining your own actions in games is a weird thing to ask for. It requires "aboutness" (that you're thinking about the problem). Aboutness is a really inefficient way to handle problems, but it's handy because we can apply it to all new situations, because we have general intelligence. Conversely, when humans have trained hard at a task, they generally lose aboutness, like an ANN. Things are done on instinct, feeling etc. In short, the NN has been trained, and general intelligence is no longer required to do the task. Indeed, it's been superseded.<p>More damningly for this kind of survey: Aboutness for a single task is not the same as general intelligence. And it's general intelligence that we want.
I agree we're still a few orders of magnitude behind on the myriad technologies that would enable Terminator-like robots... however, the exponential progress we've been making in computer science (e.g. machine learning) makes the discovery of these critical pieces look very realistic.
These people are not subject matter experts in these fields...<p>An interesting question would be to have them consider the location and the IP environment. Will 10% of the public have their laundry folded by AI in the east or the west first? Will it be wrapped up in patents?
If you had asked aerospace engineers in 1960 what they thought of the future, they would have said we'd have Mars colonies and that asteroid mining would have revolutionized our economies.
Doesn't machine performance already exceed human performance in a number of areas? As was just demonstrated this week when AI beat the world's best Go player?
We are, in general, really bad at predicting what is technologically solvable in a given timespan.<p>They thought machine translation would be solved in 5 years back in the 60s, too. I'm vastly more skeptical.