People often think of AGI as an AI that can learn to complete arbitrary tasks better than humans.<p>Given that we can already produce "an" AI that beats humans at almost every task we come up with (besides the synthesis of broad abstract reasoning, à la Chollet), this is probably the only definition that is meaningful in the sense that it isn't already here.<p>Why would evading 'alignment' not also be such a task that AGI does better? AGI is like the nuclear deterrent: a technology that is coming, inevitably, and one that no amount of philosophical navel-gazing can control or prevent.<p>AGIs will not be magical; they will have energy demands, construction costs, and environmental limitations.<p>I think it will be much more useful to ask how people will coexist with AGI, and what role they will serve in a post-AGI world, than to make statements about interpretability or alignment, which will definitely seem silly in retrospect. The machinations of an AGI will be as impossible to understand as human consciousness itself.