> I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern?

Either AGI arrives and kills us all, or it arrives and automates all our jobs, or it doesn't arrive and Yudkowsky can keep doing his career. Am I missing something?
> I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern? Would anyone care about him at all?

I think the post misses the fact that it's not "off" or "on": even if AI is not a literal existential risk, it is still an immense risk, so working to stop it is still a worthy activity that can have many positive results for society.