Another thing I don't like about this paper is how it wraps real, interesting questions in the larger framework of "existential risk" (which I don't really think exists).

For example:

> "Instead of merely(!) aligning a single, powerful AI system, we need to align one or several complex systems that are at risk of collectively drifting away from human interests. This drift can occur even while each individual AI system successfully follows the local specification of its goals"

Well yes, designing systems and incentives is a hard problem. But maybe we could specify a concrete instance of this failure mode, instead of "what if one day it goes rogue!"

Our society already contains many superhuman AI systems (in the form of companies), and somehow they still manage to contribute to our wellbeing. In fact, life is amazing (even for dumb people in society, who have equal rights). The reason is that we have catalogued the ways such systems go rogue (monopoly, extortion, etc.) and responded adequately.

So "extinction by industrial dehumanization" reads a lot like "extinction by cotton mills" - I mean, look on the bright side!