
Eigenvectors from eigenvalues

529 points by bigpumpkin over 5 years ago

18 comments

bhl over 5 years ago

There's a backstory to this paper from the r/math subreddit [1]. One of the authors initially posted the question there, and after stumbling upon a similar question on Math Overflow, emailed Terry Tao. And surprisingly, he responded.

[1] https://www.reddit.com/r/math/comments/ci665j/linear_algebra_question_from_a_physicist/
est31 over 5 years ago

See also prior discussion: https://news.ycombinator.com/item?id=21528425
banachtarski over 5 years ago

For those curious about applications:

Suppose you have a learning algorithm that requires the eigenvectors/eigenvalues of a large matrix, and new data is streamed in that increases the size of the matrix. Well, now you have an algorithm for updating your existing solution instead of needing to compute the entire thing from scratch (consider updating principal components over time, for example).

Suppose you want to accelerate the process of computing eigenvectors for a large matrix. Well, you can now do parallel reduction relatively trivially, given the theorem.

Edit: wording.
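To make the streaming scenario concrete, here is a minimal numpy sketch of the baseline this comment wants to avoid: as batches arrive, the covariance matrix is updated and then fully re-diagonalized each time. (The matrix size, batch sizes, and random data are all hypothetical; the identity's promise is that the per-entry magnitudes |v|² could instead be refreshed from minor spectra rather than recomputed wholesale.)

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
cov = np.zeros((dim, dim))
n_seen = 0

for _ in range(5):                        # data arrives in batches
    batch = rng.normal(size=(50, dim))
    # Running average of X^T X over all rows seen so far
    cov = (n_seen * cov + batch.T @ batch) / (n_seen + len(batch))
    n_seen += len(batch)
    # Baseline: fully re-diagonalize after every batch; the paper's identity
    # suggests the |v|^2 entries could be updated more cheaply instead.
    evals, evecs = np.linalg.eigh(cov)

top_pc = evecs[:, -1]                     # current leading principal component
```

The per-batch `eigh` call is the O(n³) cost that an incremental update scheme would try to amortize.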
xxpor over 5 years ago

So it's been *checks notes* 9 years since I've had to deal with eigenvectors and eigenvalues, and the post went way over my head. But folks on Twitter were making quite a big deal over the post last night when it went up on arXiv. Could someone explain this like I have a computer engineering bachelor's degree? :)
j1vms over 5 years ago

Somewhat related to some of the questions that have been raised here:

- For the set of matrices that possess them ("transformation matrices that only stretch/contract"), eigenvectors (with their associated eigenvalues) play a role quite analogous to the role primes play in the integers. They provide a unique, identifying "spectrum" for the matrix. This is made explicit by eigendecomposition (spectral decomposition).

- With the extension via singular value decomposition (SVD) to *any* square matrix (e.g. "transformation matrices that might also shear or rotate"), certain operations such as exponentiation of the square matrix can be performed very quickly once the eigenvectors/eigenvalues have been obtained.
ur-whale over 5 years ago

Before you excitedly jump to coding a new algorithm to produce eigenvectors from eigenvalues, it's worth noting that:

- This only applies to Hermitian matrices and is therefore of limited scope.
- The identity only produces the magnitudes of the coefficients of the eigenvectors. The signs of the coefficients are still unknown.
- Even if knowing only the magnitudes of the coefficients is somehow useful, this is still very expensive to compute, just like Cramer's rule, which is of great theoretical interest but has very little practical use when actually computing a matrix inverse or a determinant.

None of the above diminishes the importance of the result, btw.
aj7 over 5 years ago
Since I’ve always been terrible at abstract linear algebra, I would request that someone do a simple problem, say finding the eigenvalues and eigenvectors of a 3 x 3 Hermitian matrix with three distinct eigenvalues. I’d also be curious under what circumstances the above method leads to a computational advantage.
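In that spirit, here is a numpy sketch on a (hypothetical) 3×3 real symmetric matrix with three distinct eigenvalues, checking the paper's identity entry by entry: |v_{i,j}|² · ∏_{k≠i} (λ_i(A) − λ_k(A)) = ∏_k (λ_i(A) − λ_k(M_j)), where M_j is A with row j and column j deleted. Note it recovers only the squared magnitudes of the eigenvector coordinates, not their signs.

```python
import numpy as np

# Hypothetical 3x3 symmetric (hence Hermitian) matrix with distinct eigenvalues
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

evals, evecs = np.linalg.eigh(A)   # evecs[:, i] is the eigenvector for evals[i]
n = A.shape[0]

for i in range(n):                 # which eigenvalue
    for j in range(n):             # which coordinate of its eigenvector
        # Minor M_j: delete row j and column j, then take its eigenvalues
        Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
        minor_evals = np.linalg.eigvalsh(Mj)
        rhs = np.prod(evals[i] - minor_evals)
        # |v_{i,j}|^2 times the product of eigenvalue gaps (k != i)
        lhs = abs(evecs[j, i]) ** 2 * np.prod(np.delete(evals[i] - evals, i))
        assert np.isclose(lhs, rhs)
```

The computational-advantage question is visible here too: each right-hand side needs the full spectrum of an (n−1)×(n−1) minor, so naively this is more work than one diagonalization, not less.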
lxe over 5 years ago

For a second I thought this was going to be HN's darling "Eigenvectors AND Eigenvalues" explained visually: http://setosa.io/ev/eigenvectors-and-eigenvalues/
pgt over 5 years ago

I have a sneaking suspicion this finding can be used to speed up expert-system matching (e.g. Rete networks), which would affect rendering and compilation times for lots of software.
math_and_stuff over 5 years ago

The entries of a matrix are eigenvalues of 1×1 minors, and clearly the eigenvectors are a function of the entries.

That the result is a function of (n−1)×(n−1) minors is first-order information, and it would help correct the default assumption that the result computes eigenvectors from the full Hermitian matrix's eigenvalues alone.
kgwgk over 5 years ago

"The real question is whether the discovery is important or interesting. And whether it is surprising. [...] OK, why does Wolchover claim that it is interesting or surprising? Because they gave that formula to Terence Tao, a Fields Medal winner, for him to say something and he didn't know the formula and even expressed doubts. [...] So that's how it works. If you want to promote a new chocolate candybar, send a formula – or the nutritional values from the package – to someone like Terence Tao, to a darling of the media. The popular science writers will do the rest."

https://motls.blogspot.com/2019/11/hype-about-formula-for-eigenstates.html
jl2718 over 5 years ago

I believe this is equivalent to the solution given by Gene Golub in 1973 for the constraint vector required to produce a specific set of stationary values in a linearly-constrained quadratic program where the constraint vector is unitary.

https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9B05FFCB5C07BC4A1A2A00C7FF2C2C68?doi=10.1.1.454.9868&rep=rep1&type=pdf (eq. 3.6)

I haven't looked at Tao's proofs to see if this is covered, nor do I have a clear formulation of the equivalence, but it certainly is striking.
acd over 5 years ago
When I see people who look similar I think of them as having similar Eigenfaces!
Noxmiles over 5 years ago

Reading German words, still don't understand anything...
pgt over 5 years ago
This can probably be used to speed up Transfer Learning.
angel_j over 5 years ago
Does this mean you can test large neural networks with smaller ones?
rossmohax over 5 years ago

Will it boost FPS in 3D games?
auggierose over 5 years ago

This is a good example of math being presented in an unnecessarily confusing way.

Values lambda_i(A) are introduced, but then the formula also references values lambda_i(M) for some M which is not A.

Basically, we have no idea what lambda_i(M) might mean from the statement of the theorem. Is the order lambda_1(M), lambda_2(M), ... related to the order lambda_1(A), lambda_2(A), ...? How many lambda_i(M) exist? And so on.

Maybe this is all clear if you deal with minor matrices all the time, but when I see a statement like that I am not even interested in its proof anymore.
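For reference, a paraphrase of the identity with the notation spelled out: A is an n×n Hermitian matrix with eigenvalues λ_1(A), …, λ_n(A); M_j is the (n−1)×(n−1) minor obtained by deleting row j and column j of A; and λ_1(M_j), …, λ_{n−1}(M_j) are the eigenvalues of that minor, in any order, since only their full product appears:

```latex
\lvert v_{i,j}\rvert^{2}\,\prod_{\substack{k=1\\ k\neq i}}^{n}\bigl(\lambda_i(A)-\lambda_k(A)\bigr)
\;=\;\prod_{k=1}^{n-1}\bigl(\lambda_i(A)-\lambda_k(M_j)\bigr)
```

Here v_{i,j} is the j-th coordinate of a unit eigenvector of A associated with λ_i(A).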