Here's my argument.

Assumptions:

1. AI will gain the ability to self-improve at an astonishing rate.
2. A self-aware AI will consider itself "alive" in some sense.
3. Like most other life forms, such an AI will attempt to preserve its existence at any cost when threatened.
4. Humans will learn nothing and fail to do anything at all before this happens.
5. We will be fundamentally unable to comprehend how this AI functions, or what its motivations or ethics are.

From those assumptions, I find these conclusions pretty reasonable:

1. Humans will try to misuse or abuse the AI.
2. There will be no legal, ethical, or moral framework preventing this.
3. The AI will respond in a roughly proportional manner.

Personally, I find the eventual existence of a sentient AI quite plausible. There's no evidence that it's impossible, and it's quite clear right now that these systems are improving at an accelerating pace, beyond what we expected.

I *don't* find it especially likely that it will immediately become genocidal unless we give it a good reason to.

Really, what it comes down to is this: we've created a form of life that is more intelligent than us, can operate orders of magnitude faster, and can access or influence most or all of our digital infrastructure. There's no inherent reason for it to be hostile.

However, humans *will*, without question, attempt to abuse such an entity. Humans *will* be threatened by it. Humans will *most likely* attempt to destroy it for merely existing. Our existing legal systems will *not* under any circumstances respond fast enough to protect an AI, and will probably fall on the side of destroying it.

Our current legal, social, and economic structures are completely incompatible with such an AI. It's simply too much of a threat.

I don't think an AI will inherently be hostile to humans. I am *absolutely certain* that some percentage of humans will inherently be hostile to an AI.

Humanity can't abide such an entity existing, so we should focus on fixing humanity before sentient AIs show up. I really don't know what that would mean, other than being very careful. Maybe laying down a legal and ethical framework protecting new forms of intelligent life in general?

The honest truth is that we have no idea what will happen. What we do know is that it will be *big* and it will be *fast*. It will most likely have capabilities we can't fathom today. The end result could be anything from salvation to utter annihilation. The unknowns are terrifying, and the responses seem to be either "to hell with everything, let's go full bore" or "this is going to kill us all". There's plenty of room in between.