There's a backstory to this paper from the r/math subreddit [1]. One of the authors initially posted the question there and, after stumbling upon a similar question on MathOverflow, emailed Terry Tao. And surprisingly, he responded.<p>[1] <a href="https://www.reddit.com/r/math/comments/ci665j/linear_algebra_question_from_a_physicist/" rel="nofollow">https://www.reddit.com/r/math/comments/ci665j/linear_algebra...</a>
See also prior discussion: <a href="https://news.ycombinator.com/item?id=21528425" rel="nofollow">https://news.ycombinator.com/item?id=21528425</a>
For those curious about applications:<p>Suppose you have a learning algorithm that requires the eigenvectors/eigenvalues of a large matrix, and suppose new data is streamed in that increases the size of the matrix. Well, now you have an algorithm for updating your existing solution instead of needing to compute the entire thing from scratch (consider updating principal components over time, for example).<p>Suppose you want to accelerate the process of computing eigenvectors for a large matrix. Well, given the theorem you can now do a parallel reduction relatively trivially.<p>*edit: wording.
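To make the streaming-update idea concrete, here is a minimal numpy sketch of my own (not from the paper): when one new row/column of data arrives, the old matrix is exactly a principal minor of the new one, so the identity yields the squared last component of every new eigenvector from eigenvalues alone (distinct eigenvalues assumed).<p><pre><code>import numpy as np

rng = np.random.default_rng(0)

# Old n x n real-symmetric matrix whose eigenvalues are already known.
n = 5
X = rng.standard_normal((n, n))
A = (X + X.T) / 2
old_evals = np.linalg.eigvalsh(A)

# New data adds one row/column, giving the (n+1) x (n+1) matrix B.
# A is the principal minor of B with the last row and column deleted.
b = rng.standard_normal(n)
B = np.zeros((n + 1, n + 1))
B[:n, :n] = A
B[:n, n] = b
B[n, :n] = b
B[n, n] = rng.standard_normal()

new_evals, new_evecs = np.linalg.eigh(B)  # eigenvectors only used to verify

# Identity: |v_i[n]|^2 = prod_k (l_i(B) - l_k(A)) / prod_{k!=i} (l_i(B) - l_k(B))
for i in range(n + 1):
    num = np.prod(new_evals[i] - old_evals)
    den = np.prod(np.delete(new_evals[i] - new_evals, i))
    assert np.isclose(num / den, abs(new_evecs[n, i]) ** 2)
</code></pre>This only gives magnitudes, not signs, and you still need the eigenvalues of B itself; whether that beats recomputing eigenvectors directly depends on the setting.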
So it's been <i>checks notes</i> 9 years since I've had to deal with eigenvectors and eigenvalues. The post went way over my head, but folks on Twitter were making quite a big deal of it last night when it went up on arXiv. Could someone explain this like I have a computer engineering bachelor's degree? :)
Somewhat related to some of the questions that have been raised here:<p>- for the set of matrices that possess them ("transformation matrices that only perform stretch/contract"), eigenvectors (with their associated eigenvalues) play a role quite analogous to the role primes play among the integers. They provide a unique, identifying "spectrum" for said matrix. This is made explicit by eigendecomposition (spectral decomposition).<p>- with the extension via singular value decomposition (SVD) to <i>any</i> square matrix (e.g. "transformation matrices that might also shear or rotate"), certain operations such as exponentiation of the square matrix can be performed very quickly once the eigenvectors/eigenvalues have been obtained.
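To make the exponentiation point concrete for the Hermitian (here real-symmetric) case, a minimal numpy sketch of my own, not from the article: once A = V diag(w) V^T is known, A^k is just V diag(w^k) V^T.<p><pre><code>import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
A = (X + X.T) / 2                    # real-symmetric (Hermitian) example

w, V = np.linalg.eigh(A)             # A = V diag(w) V^T
k = 10
A_pow = V @ np.diag(w ** k) @ V.T    # A^k from the eigendecomposition alone

assert np.allclose(A_pow, np.linalg.matrix_power(A, k))
</code></pre>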
Before you excitedly jump to coding a new algorithm to produce eigenvectors from eigenvalues, it's worth noting that:<p><pre><code> . This only applies to Hermitian matrices and is therefore of limited scope.
 . The identity only produces the magnitudes of the coefficients of the eigenvectors. The signs of the coefficients are still unknown.
 . Even if knowing only the magnitudes of the coefficients is somehow useful, this is still very expensive to compute - just like Cramer's rule, which is of great theoretical interest but has very little practical use when actually computing a matrix inverse or a determinant.
</code></pre>
None of the above diminishes the importance of the result, btw.
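For anyone who wants to see the identity (and the magnitude-only caveat) numerically, a small numpy check of my own, assuming distinct eigenvalues:<p><pre><code>import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2             # random Hermitian matrix

evals, evecs = np.linalg.eigh(A)     # evecs[:, i] is the i-th eigenvector

for i in range(n):
    for j in range(n):
        M = np.delete(np.delete(A, j, 0), j, 1)   # j-th principal minor
        lhs = abs(evecs[j, i]) ** 2 * np.prod(np.delete(evals[i] - evals, i))
        rhs = np.prod(evals[i] - np.linalg.eigvalsh(M))
        assert np.isclose(lhs, rhs)
</code></pre>Note that each entry requires the eigenvalues of a different (n-1) x (n-1) minor, which is exactly why the naive cost is so high.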
Since I’ve always been terrible at abstract linear algebra, I would request that someone do a simple problem, say finding the eigenvalues and eigenvectors of a 3 x 3 Hermitian matrix with three distinct eigenvalues. I’d also be curious under what circumstances the above method leads to a computational advantage.
For a second I thought this was going to be HN's darling "Eigenvectors AND Eigenvalues" explained visually <a href="http://setosa.io/ev/eigenvectors-and-eigenvalues/" rel="nofollow">http://setosa.io/ev/eigenvectors-and-eigenvalues/</a>
I have a sneaking suspicion this finding can be used to speed up expert system matching (e.g. Rete networks), which would affect rendering and compilation times for lots of software.
The entries of a matrix are the eigenvalues of its 1x1 minors, and clearly the eigenvectors are a function of the entries.<p>That the result is a function of the eigenvalues of the (n-1)x(n-1) minors is first-order information, and stating it up front would help correct the default assumption that the result computes eigenvectors from the full Hermitian matrix's eigenvalues alone.
"The real question is whether the discovery is important or interesting. And whether it is surprising.
[...] OK, why does Wolchover claim that it is interesting or surprising? Because they gave that formula to Terence Tao, a Fields Medal winner, for him to say something and he didn't know the formula and even expressed doubts. [...] So that's how it works. If you want to promote a new chocolate candybar, send a formula – or the nutritional values from the package – to someone like Terence Tao, to a darling of the media. The popular science writers will do the rest."<p><a href="https://motls.blogspot.com/2019/11/hype-about-formula-for-eigenstates.html" rel="nofollow">https://motls.blogspot.com/2019/11/hype-about-formula-for-ei...</a>
I believe this is equivalent to the solution given by Gene Golub in 1973 for the constraint vector required to produce a specific set of stationary values in a linearly-constrained quadratic program where the constraint vector is unitary.<p><a href="https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9B05FFCB5C07BC4A1A2A00C7FF2C2C68?doi=10.1.1.454.9868&rep=rep1&type=pdf" rel="nofollow">https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9B...</a> (eq. 3.6)<p>I haven't looked at Tao's proofs to see if this is covered, nor do I have a clear formulation of the equivalence, but it certainly is striking.
This is a good example of math being presented in an unnecessarily confusing way.<p>Values lambda_i(A) are introduced, but then the formula also references values lambda_i(M) for some M which is not A.<p>Basically, we have no idea what lambda_i(M) might mean from the statement of the theorem. Is the order lambda_1(M), lambda_2(M), ... related to the order lambda_1(A), lambda_2(A), ...? How many lambda_i(M) exist? And so on.<p>Maybe this is all clear if you deal with minor matrices all the time, but when I see a statement like that I am not even interested in its proof anymore.