I don't disagree that most of the warnings being circulated are probably part of a publicity stunt (or driven by hype men).

However, there is a fair chance we will reach super-human autonomous actors in the next 50-100 years, and if we fail to align them very closely to human values, they would pose an extreme risk to all life.

The point is that effectively zero effort is currently going into AI safety research, despite this being an existential risk we can actually see coming. It may not happen at all (progress in compute efficiency/density could stall before then), but current trajectories suggest it eventually will.

For a more nuanced discussion of the actual risks, I recommend watching/listening to the Machine Learning Street Talk episode with Robert Miles: https://www.youtube.com/watch?v=kMLKbhY0ji0
There is a difference between disagreeing with predictions about the future and accusing the people making those predictions of lying about their own beliefs. This article does the latter.

And, ngl, it's been pretty wild watching the AI risk-skeptical narrative pivot from claiming that AI risk is a fringe issue that no experts think is plausible, to claiming that a large fraction of the relevant experts are lying about their beliefs.

No, they're not lying. The people who signed the letter did so because they believe what it says.