Limited lifespans explain the majority of human behavior; I'm not sure how to extrapolate that to something intelligent that isn't driven by some biological impetus. The underlying assumptions of everyone I've seen write or speak on this are 1) that if something is intelligent it will automatically have internal motivations or drives, either to understand alignment as we do or to become dominant out of some need to self-realize, and 2) that its emergent behavior will be based on something other than probabilities. How is this different from human behavior? We have internal motivations that drive behavioral updates based on our interactions with the world and our own limitations, and in one manner or another these motivations are all geared towards proliferation. Simply being intelligent does not imbue something with a need to survive and proliferate, which is the primordial root of self-determination (applicable to both alignment and domination). Moreover, if we truly think AGI will be human-like, then forcing alignment is cruel...but people are used to doing thoughtless things that they eventually find to be cruel, so nothing new there.