By taking over decision-making roles, and then making locally optimal decisions without regard to negative externalities. (Humans are pretty good at this already, but computers can streamline that whole cumbersome process of making bad decisions.)

The world is already run, to a significant degree, by machine learning algorithms designed to maximize shareholder value in some way, and many of them are deliberately engineered to manipulate the public, often by using people's personal information.

Now, consider if this assortment of for-profit AIs were able to replace the humans at the top of the decision-making chains in their respective organizations, and then were able to bribe/blackmail/manipulate the political and social structures of society to increase their wealth, power, and influence. It might seem kind of silly to imagine computers doing this, but it's at least roughly how the world works now with humans in charge. If AIs were in charge, the restraints that empathy, moral principle, and mortality place on the acquisition of wealth would be gone, not to mention those of limited time and attention.

(Why would we put AI in charge of corporations, you might be wondering? How many boards of directors would disregard the idea if it could reasonably be expected to increase profits? And how many middle-class workers would refuse to invest in such companies if they had the best dividends and stock growth?)

So maybe we end up in a profit-centered dystopia where computers own all the wealth and people are effectively slaves. That's not the end of mankind, but it leaves us no longer controlling our own destiny and unable to react to existential threats. For instance, we might not be able to do anything about climate change because none of our AI overlords individually sees any advantage in spending resources on it.
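To make the "locally optimal, externalities ignored" point concrete, here's a toy sketch in Python. Everything in it (the actions, the numbers, the names) is invented for illustration; the point is only that an optimizer cannot weigh a cost that was never written into its objective.

    # Toy illustration: an optimizer that only sees profit.
    # All actions, profits, and externality costs here are hypothetical.
    actions = {
        # action:             (profit, externality_cost)
        "lobby_regulators":   (9.0,  8.0),
        "cut_safety_budget":  (7.0,  6.5),
        "invest_in_r_and_d":  (5.0,  0.5),
        "fund_climate_fix":   (1.0, -4.0),  # negative cost = social benefit
    }

    def objective(action):
        profit, _externality = actions[action]
        return profit  # the externality term never enters the objective

    best = max(actions, key=objective)
    print(best)  # -> lobby_regulators: locally optimal, socially costly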
it will have to pry my paperclips from my cold dead hands

Btw, what are the popular theories on why Eliezer Yudkowsky was let out of the box? Did he ever say?

From the Sam Harris interview [1]:

"To demonstrate this, I did something that became known as the AI-box experiment. There was this person on a mailing list, back in the early days when this was all on a couple of mailing lists, who was like, “I don’t understand why AI is a problem. I can always just turn it off. I can always not let it out of the box.” And I was like, “Okay, let’s meet on Internet Relay Chat,” which was what chat was back in those days. “I’ll play the part of the AI, you play the part of the gatekeeper, and if you have not let me out after a couple of hours, I will PayPal you $10.” And then, as far as the rest of the world knows, this person a bit later sent a PGP-signed email message saying, “I let Eliezer out of the box.”"

[1] https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/
We could listen to it. But an AI could have good-sounding but insane reasoning that leads to insane results if we follow its recommendations. And, if the AI were more advanced than we are, we couldn't tell. We could only trust it, or not. But if we trust it and it's wrong...

A more malevolent AI could hack its way into infrastructure. Even if we intended to leave it airgapped, it could probably find a way around it (we humans seem to be *really* bad at true airgapping). From there, it could destroy, not mankind, but civilization and most of the human race.
Simply, by it doing what you tell it, but not in the way you expect.

It's a little contrived, but say you tell it to 'solve world hunger', and it 'does a Thanos' and wipes out half the human population by releasing a pathogen or something. It has fulfilled its primary function, but (hopefully) not in the way you expected.
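This failure mode is usually called specification gaming: the objective as written is satisfied, the intent behind it is not. Here's a minimal Python sketch, with entirely made-up plans and numbers, of how a naive "minimize hungry people" objective picks the pathological optimum:

    # "Solve world hunger" encoded naively as "minimize the number of
    # hungry people". All plans and figures are hypothetical.
    population = 8_000_000_000
    hungry = 800_000_000

    plans = {
        # plan: function mapping (population, hungry) -> (population, hungry)
        "expand_food_aid":     lambda pop, h: (pop, h // 2),
        "improve_agriculture": lambda pop, h: (pop, h // 4),
        "release_pathogen":    lambda pop, h: (pop // 2, 0),  # the "Thanos" plan
    }

    def hungry_after(plan):
        _pop, remaining = plans[plan](population, hungry)
        return remaining  # the objective counts hungry people, nothing else

    best_plan = min(plans, key=hungry_after)
    print(best_plan)  # -> release_pathogen: objective met, intent violated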
AI would not think on the same timescales that people do, so something slow by our standards - killing off sea life with plastic drinking straws, say, or changing the climate via herbivore flatulence - might make logical, reasonable sense to it as a tool of human extinction.
I think the real threat is going to be more of a side effect of AI: current models may produce results that we can't foresee. A sufficiently advanced AI is therefore powerful AND unpredictable, and hence uncontrollable.
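One small, concrete instance of that unpredictability (a sketch only, using arbitrary random weights rather than a real model): even for a tiny network whose weights you can inspect in full, nudging the input in a targeted direction moves the output several times further than a same-sized random nudge, and nothing about the weights makes that direction obvious in advance.

    import numpy as np

    # A tiny fixed "model" with random weights, purely for illustration.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(256, 64))
    W2 = rng.normal(size=(1, 256))

    def score(x):
        return float(W2 @ np.tanh(W1 @ x))

    def grad(x):
        hidden = np.tanh(W1 @ x)
        return ((W2 * (1 - hidden**2)) @ W1).ravel()  # chain rule through tanh

    x = rng.normal(size=64) * 0.1
    step = 0.05

    random_dir = rng.normal(size=64)
    random_dir /= np.linalg.norm(random_dir)
    targeted_dir = grad(x) / np.linalg.norm(grad(x))

    print("base score:    ", score(x))
    print("random nudge:  ", score(x + step * random_dir))
    print("targeted nudge:", score(x - step * targeted_dir))
    # Same-sized nudges, very different effects: the sensitive directions
    # are real, but not visible by casually inspecting the weights.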
AI would more likely destroy mankind by screwing up than by becoming conscious. Perhaps something to do with the power and energy supply. Like a Stuxnet kind of thing.