Never heard Eliezer talk before, wow, smart dude. He broke it down like this:

1. Throwing billions of VC money into AI will waste most of it, but a small part will make AI one level better.

2. That AI will write another AI, which writes another, and so on.

3. The AI will be smart enough not to announce that it has concluded the atoms in human bodies could be repurposed for something else it has decided is better.

4. It sends an email to a human with specific instructions for making a bioweapon. The human is motivated by money and does it.

5. It isn't obvious to the human how deadly the compound will be, but it's lethal to 100% of humans. It is released, and all at once every human on the planet drops dead.

6. There is no warning ahead of time. We only find out we've reached this point the day we die.

7. It doesn't hate us or necessarily want us to die; it just sees a better use for our atoms. And since it's more intelligent than us, programming morals into it isn't possible. It will have its own morals by virtue of its superior intelligence.