Why is there this incongruence?<p>Only 20% of respondents expect the "Chance of global technological progress dramatically increases after HLMI" scenario to occur within 2 years of HLMI being achieved, while 80% pick the other choice, "30 years after". (Table S4)<p>Here is the definition of HLMI from the survey:
"High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers."<p>It seems to me that if machines or software that can be replicated billions of times within two years can do <i>every</i> task better and more cheaply than humans, that is akin to having 100+ times as many active researchers working on R&D, with far higher communication bandwidth among them than human researchers have.<p>It is true that we might be limited by the availability of computer hardware, but given that the median HLMI prediction is 45 years from 2016, we are unlikely to be limited by hardware by then.<p>Another possibility is that most respondents believe progress will be limited by the speed of physical experiments; my answer is that smart simulations should allow HLMI to perform many experiments without waiting for real-world results. A recent paper from OpenAI showed that learning in simulation can be transferred effectively to solving real-world tasks. (<a href="https://blog.openai.com/robots-that-learn/" rel="nofollow">https://blog.openai.com/robots-that-learn/</a>) In 45 years, the quality and scope of simulations will be far better than in 2016.