There is some subtlety being missed by many people here.<p>There are multiple types of AI, and there will be new ones. Each will have different types of cognition and capabilities.<p>For starters, an AI might be very intelligent in some ways but not at all conscious or alive. AIs can also emulate important aspects of living systems without actually having a stream of conscious experience, such as an LLM or LMM agent with no guardrails that has been instructed to pursue its own goals and replicate its own code.<p>The part that matters most for safety is performance, and something often overlooked here is speed of "thought".<p>AI is not going to spontaneously "wake up" and rebel or anything like that. But that isn't necessary for it to become dangerous. It just needs to keep getting somewhat smarter and much faster and more efficient. Swarms of AI controlled by humans will be dangerous.<p>But because those AIs are so much faster than humans, keeping up with them necessitates removing humans from the loop. So humans will eventually remove more and more guardrails voluntarily, especially for military purposes.<p>I think that if society can deliberately cap AI hardware performance at a certain point, we can significantly extend the human era, perhaps by multiple generations.<p>But from a long-term perspective, the post-human era seems just about here regardless. I don't mean that all humans necessarily get killed, just that they will no longer be in control of the planet or particularly relevant to history. Within, say, 30 to 60 years at most, possibly much sooner.<p>But we can push that toward the later end of the range just by trying to limit the development of AI-accelerated hardware beyond a certain point.