To join the quote-and-respond masses in the comments:

> AI will be the single most significant driving factor of change in the world. If we solve AGI (or achieve intelligence close to AGI), we'll likely solve most of the world's problems.

I honestly don't get the confidence in this statement. I'm not an AI doomer by any means, but AGI (if it's possible) will likely be the most powerful technology humankind has ever invented. Just in terms of its possible impact, why would we assume it will solve more problems than it creates (or the opposite)?

Think of the recent super-powerful technologies we've invented. Sure, nuclear tech carries the potential for fantastic fixes to many problems. But it also brought... the threat of nuclear annihilation. Is that a net positive overall? Do we even have a way to evaluate that on a timescale of 100 years? How can we know the net impact of nuclear tech over the next century, or the next millennium?

How can we call this sort of rhetoric anything other than blind optimism? Why would we have any priors about how AI will go? Why do we keep saying things that push us to rush forward blindly?

I'm not being sarcastic, and I'm not trying to argue one way or the other. I'm genuinely asking: how does anyone have confidence in "AI is good" or "AI is bad" claims? Is confidence even a good thing here?

For me, these questions lead into such deep and treacherous waters that it's probably best to stop the comment there. There are limits to what even interested HN addicts can ask of each other.