Mixes of:<p><pre><code> - Agenda
 - Straightforward fear of AI
 - Fear that AI might trigger social changes or upheavals that are really not so bad overall...but *are* bad for them & their friends
- Mirroring the fears of their peers
</code></pre>
Well worth noting: the "leaders" talking about AI are not magically wise, nor especially foresighted, nor widely experienced. They've mostly become leaders by being utterly obsessed with getting ahead in the human social hierarchy, and by devoting their lives to that in some narrow social niche or other. There are human-nature reasons why the leaders of nearly every major historical industry failed to become leaders in the industry that replaced it.
I think there's reason for concern. I own, but haven't read, Nick Bostrom's "Superintelligence", which lays out the risk scenario in depth. I have read Bostrom's "Global Catastrophic Risks", which treats AI in a chapter rather than a book, and I found its argument that AI is a genuine threat convincing.