I find this hard to internalize, to be honest. I am inclined to chase the economic and productivity gains that might await if the promise of something like AGI is fulfilled. It does look to me like progress won't be slowing down much, even if we have some slower periods transitioning between competing architectures.<p>That is wildly exciting! But if progress really is sustained, then the logical conclusions about the potential for dangerous AI must hold as well. Because if they don't hold, then either we fail to create truly capable intelligence or we manage to align it. The latter requires active effort.
And to argue against myself: one way this doesn't come true is if being smarter than every human still isn't smart enough, unless it's orders of magnitude smarter.