> I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern?

Either AGI arrives and kills us all, or it arrives and automates all our jobs, or it doesn't arrive and Yudkowsky can carry on with his career. Am I missing something?