Fun fact: the power method is what all neutronics codes that simulate neutron distributions in nuclear reactors use. The diffusion/transport equation in a multiplying medium is an eigenvalue equation, and the dominant eigenvalue is the inverse of k, the multiplication factor.
The convergence analysis is a bit lacking, but there is a significant speed-up when you store the last power of A and keep multiplying by that instead of A. That is: A, A^2, A^4, A^8, ...

It makes the second case they give go from 60 iterations down to 7.
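A minimal sketch of that squaring trick in NumPy (my own illustration, not from the article; note that squaring the matrix costs a dense matrix-matrix product each step, so it only pays off when that is cheap relative to the iterations saved):

```python
import numpy as np

def power_iteration_squaring(A, v0, steps=7):
    # After k steps the vector has effectively been multiplied by A^(2^k),
    # since the stored matrix is squared each step: A, A^2, A^4, A^8, ...
    M = np.array(A, dtype=float)
    v = np.array(v0, dtype=float)
    v /= np.linalg.norm(v)
    for _ in range(steps):
        v = M @ v
        v /= np.linalg.norm(v)
        M = M @ M
    # Rayleigh quotient with the original A estimates the dominant eigenvalue
    return v @ (A @ v), v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration_squaring(A, [1.0, 0.0])
print(lam, np.max(np.linalg.eigvalsh(A)))  # the two values should agree closely
```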
Is there a relationship between the power method and some "standard" optimization algorithm (gradient descent, Newton's method, ...) applied to maximizing the Rayleigh quotient?
Once you know the dominant eigenvector, I recall there was some trick you could do to get the second-dominant eigenvector, by projecting the dominant one out somehow. How can you repeat power iteration to get all the eigenvectors of a matrix?
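The trick being described sounds like deflation: for a symmetric matrix, once you have the dominant eigenpair (λ₁, v₁), subtract λ₁ v₁ v₁ᵀ from A (or re-orthogonalize the iterate against v₁ each step) so the next-largest eigenvalue becomes dominant, then run power iteration again. A rough, self-contained sketch in NumPy, assuming a symmetric matrix:

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ (A @ v), v  # (eigenvalue estimate, eigenvector)

def eigenpairs_by_deflation(A, k):
    # Assumes A is symmetric: after finding (lam, v), remove that component
    # so the next-largest eigenvalue dominates the deflated matrix.
    B = np.array(A, dtype=float)
    pairs = []
    for _ in range(k):
        lam, v = power_iteration(B)
        pairs.append((lam, v))
        B = B - lam * np.outer(v, v)
    return pairs

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
for lam, v in eigenpairs_by_deflation(A, 3):
    print(lam)
print(np.linalg.eigvalsh(A))  # compare against the full spectrum
```

Caveat: errors accumulate with each deflation, and plain power iteration converges slowly when eigenvalues are close in magnitude, so for many eigenvectors (or non-symmetric matrices) you would typically reach for shifted/inverse iteration or a full QR-based solver instead.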