Eigenvectors and eigenvalues explained visually

523 points by vicapow over 10 years ago

17 comments

discardorama over 10 years ago
The pretty animations are nice, and the ability to manipulate the vectors is very nice; however, I am sorry to say (and I *do not* mean this negatively) that there's not much "explanation".

The first sentence just describes the utility of the Eigens (so no explanation there). The next lays out the setting for the diagram. And the third says, "if we can do X, then v is an eigenvector and λ an eigenvalue". But... what if you *can't* do "X"? What if v, (0,0), and Av are not collinear?

The skeleton of a great explanation is there, but the meat isn't there yet. A few more sentences would go a long way in making this better.

I appreciate the OP's effort, and I hope this will come across as constructive criticism.
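To make the "can't" case concrete: if Av does not land on the line through (0,0) and v, then v is simply not an eigenvector of A. A minimal numpy sketch of that test, using a made-up example matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # arbitrary example matrix

    def is_eigenvector(A, v, tol=1e-9):
        # v is an eigenvector iff Av is collinear with v, i.e. Av = lambda * v;
        # in 2D, collinearity means the cross product of v and Av is zero.
        Av = A @ v
        return abs(v[0] * Av[1] - v[1] * Av[0]) < tol

    print(is_eigenvector(A, np.array([1.0, 1.0])))  # True:  A @ [1,1] = 3 * [1,1]
    print(is_eigenvector(A, np.array([1.0, 0.0])))  # False: A @ [1,0] = [2,1]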
bsaul over 10 years ago
Very beautiful graphs, but I don't think it's going to make people understand anything. I would start with the problem: easily computing sums of dependent values. Then show a naive computation, then use matrices, vectors, and eigenvalues to come to a solution, and only then show a graphical representation of the steps performed.

I'm surprised that this post isn't following this method, because I've come to think it's the standard way of explaining scientific things in the US.
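That progression can be sketched concretely. A minimal numpy version, assuming a made-up coupled recurrence rather than the article's example: iterate it naively, then reach the same state in one shot from the eigendecomposition.

    import numpy as np

    # A made-up recurrence with dependent values:
    # x_{n+1} = 0.9*x_n + 0.2*y_n,  y_{n+1} = 0.1*x_n + 0.8*y_n
    A = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    x0 = np.array([100.0, 50.0])
    n = 20

    # Naive computation: apply the recurrence step by step
    x = x0.copy()
    for _ in range(n):
        x = A @ x

    # Eigendecomposition: A^n = V diag(lambda)^n V^(-1), applied at once
    lam, V = np.linalg.eig(A)
    x_eig = V @ np.diag(lam**n) @ np.linalg.inv(V) @ x0

    print(x, x_eig)   # same vector both ways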
Terr_ over 10 years ago
Very cool, but as a layman I was very confused by the description of eigenspaces and the S1/S2 lines. I'm just guessing here (reasoning below) but I'd like to suggest phrasing like:

"Eigenspaces are special lines, where any starting point along them yields an eigenvalue that lands back on the same line. In these examples two exist, labeled S1 and S2."

"Eigenspaces show where there is 'stability' from repeated applications of the eigenvector. Some act like 'troughs' which attract nearby series of points (S1) while others are like hills (S2) where any point even slightly outside the stable peak yields eigenvalues further away."

______

Original post / detailed reaction:

> First, every point on the same line as an eigenvector is another eigenvector. That line is an eigenspace.

At first I thought this statement-of-fact meant that the whole tweakable quadrant of the X/Y plot (at a minimum) is an unbroken 2D eigenspace, because every point within it can be "covered" by a dashed line (a 2D "vector") if I pick the appropriate start point.

However, the last sentence also says eigenspaces are (despite the "space" in their name) lines, which throws the earlier interpretation into doubt.

> As you can see below, eigenspaces attract this sequence

S1 and S2 were displayed earlier, but not explained; now this section implies that those lines are the eigenspaces? If so, what is the difference between S1 and S2? Playing with the chart, I assume they are the "forward" and "reverse" for repeat applications of the transformation.
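The attract/repel reading can be checked numerically: under repeated application of the matrix, almost every starting direction swings toward the eigenline of the largest-magnitude eigenvalue. A small sketch with an arbitrary example matrix, not the article's S1/S2 setup:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])   # eigenvalues 3 and 1; the line y = x belongs to 3

    v = np.array([1.0, 0.0])     # start well off the dominant eigenline
    for i in range(8):
        v = A @ v
        v = v / np.linalg.norm(v)   # renormalize: we only care about direction
        print(i, np.round(v, 4))

    # The direction converges to [0.7071, 0.7071]: the eigenline of the
    # largest eigenvalue acts as the "attracting" line.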
throw7 over 10 years ago
I have no idea what eigenvectors or eigenvalues are, so this just confused me more. To be fair, I think the author does assume some basic math understanding beforehand, though.
michaf over 10 years ago
I like the visualization. But there seems to be an error: the non-diagonal elements of the Markov matrix need to be interchanged. You can see this by setting p=1 and q=0. Their formula would result in a total population of 2*California after one step, which is clearly larger than California+New York.
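The consistency test behind that observation can be stated generally: a population-flow matrix must be arranged so that total population is conserved (in the column-vector convention, each column sums to 1). A sketch with hypothetical stand-in matrices, not the article's exact formula:

    import numpy as np

    pop = np.array([38.0, 19.0])      # [California, New York], made-up numbers
    p, q = 1.0, 0.0                   # the test values from the comment

    def conserves_total(A):
        # Total population is preserved iff every column of A sums to 1
        return np.allclose(A.sum(axis=0), 1.0)

    A = np.array([[1 - p, q],
                  [p, 1 - q]])        # hypothetical correct arrangement
    B = np.array([[1 - p, p],
                  [q, 1 - q]])        # off-diagonal entries interchanged

    print(conserves_total(A), (A @ pop).sum())  # True  57.0 (totals match)
    print(conserves_total(B), (B @ pop).sum())  # False 38.0 (people vanish)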
TehCorwiz over 10 years ago
The interactive graph in the section "Complex eigenvalues" has a repeatable crash bug in Chrome 39 on Win 7. There are a number of ways to trigger it, the easiest of which is to adjust a1 and a2 such that both have positive x and y values and the resulting line from v to Av has a slope of approximately 1.
hangonhn over 10 years ago
This Wikipedia graphic gives a pretty good graphical explanation of what eigenvalues do and what eigenvectors are: http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#mediaviewer/File:Eigenvectors.gif
dlwj over 10 years ago
First time I've actually sort of understood eigenvectors. Linear algebra was actually the class that made me hate math, after years of loving it in secondary education. Not everyone has the benefit of a good teacher, and the tools that exist now don't help you to self-learn much.
noelwelsh over 10 years ago
As a counterpoint to most of the comments, let me just say: this is fantastic.

(Nothing wrong with constructive criticism, which most comments are, but it's also nice to just say thanks as well.)
debacle over 10 years ago
I have to admit I hated the term Eigenvector for two semesters of college and it nearly caused me to drop mathematics altogether. This explanation is very good and helps visualize some of the things I was missing. Apologies to the fantastic professors I had who were talking over my head for 16 weeks.
mturmon over 10 years ago
>> "It turns out that a matrix like A, whose rows add up to zero (try it!), is called a Markov matrix, ..."

Oops, you mean the rows add to one.

I hate to nitpick, but, additionally, numbers in the matrix can't be negative.

Also, it's not just that 1 is an eigenvalue, it is that 1 is the largest eigenvalue. This is significant, because it implies that all other components will die out in time.
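Both corrections are easy to check numerically with any row-stochastic matrix; this one is an arbitrary example:

    import numpy as np

    # Markov matrix: nonnegative entries, each row sums to one
    A = np.array([[0.9, 0.1],
                  [0.3, 0.7]])

    lam, _ = np.linalg.eig(A)
    print(sorted(abs(lam), reverse=True))   # [1.0, 0.6]: 1 is the largest eigenvalue

    # Since |lambda| < 1 for every other eigenvalue, repeated application
    # kills the other components and the state settles into equilibrium:
    x = np.array([1.0, 0.0])
    for _ in range(100):
        x = x @ A          # row convention: x_{n+1} = x_n A
    print(x)               # stationary distribution, [0.75, 0.25]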
xixixao over 10 years ago
Wow, almost no positive feedback here? I think the article assumes a certain audience, and for me, it brought a great insight I never got in my college courses.
hangonhn over 10 years ago
The graphics are nice but the explanation is just terrible. There is a huge gap between the first part explaining vectors and the part explaining eigenvectors.

"If you can draw a line through (0,0), v and Av, then Av is just v multiplied by a number λ; that is, Av = λv."

That makes no sense. How do you draw a line through a point to "v and Av"? What does "v and Av" even mean in that context?
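The intended reading is presumably that v and Av are treated as points in the plane, and the condition is that the three points (0,0), v, and Av all lie on one straight line. A reconstruction of the claim, with a worked instance (the matrix is an arbitrary example, not the article's):

    (0,0), v, and Av collinear
      <=>  Av is a scalar multiple of v
      <=>  Av = λv for some number λ
      <=>  v is an eigenvector of A with eigenvalue λ   (for v ≠ 0)

    Example: A = [[2, 1], [1, 2]], v = (1, 1)  =>  Av = (3, 3).
    The points (0,0), (1,1), (3,3) all lie on the line y = x,
    so Av = 3v and v is an eigenvector with eigenvalue λ = 3.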
acd over 10 years ago
When I see faces from different people that look alike, I wonder if they have similar eigenfaces?
pcvarmint over 10 years ago
Did you send this to Malcolm Gladwell? :)

Igon send it to him if you can't :)
kristopolous over 10 years ago
It takes me an enormous amount of effort to read this font. I had to squint and zoom my browser window to about 200%, then scroll horizontally, to make my way through the paragraphs.
cousin_it over 10 years ago
I just tried to figure out the simplest rigorous explanation of linear transformations. Here's one in terms of straight lines. Let's say we have a transformation of the 2D plane, i.e. a mapping from points to points. We will call that a "linear transformation" if these conditions are satisfied:

1) The point (0, 0) gets mapped to itself.

2) Straight lines get mapped to straight lines, though maybe pointing in a different direction.

3) Pairs of parallel straight lines get mapped to pairs of parallel straight lines.

Hence the name "linear transformation" :-) We can see that all straight lines going through (0, 0) get mapped to straight lines going through (0, 0). Let's consider just those straight lines going through (0, 0) that get mapped to themselves. There are four possibilities:

1) There are no such lines, e.g. if the transformation is a rotation.

2) There is one such line, e.g. if the transformation is a skew.

3) There are two such lines, e.g. if the transformation is a stretch along some axis.

4) There are more than two such lines. In this case, you can prove that in fact all straight lines going through (0, 0) are mapped to themselves, and the transformation is a scaling.

Now let's consider what happens within a single such line that gets mapped to itself. You can prove that within a single such line, the transformation becomes a scaling by some constant factor. (That factor could also be negative, which corresponds to flipping the direction of the line.) Let's call these factors the "eigenvalues", or "own values", of the transformation.

Now let's define the "eigenspaces", or "own spaces", of the transformation, corresponding to each eigenvalue. An eigenspace is the set of all points in the 2D plane for which the transformation becomes scaling by an eigenvalue. Let's see what happens in each of the cases:

1) In case 1, there are no eigenspaces and no eigenvalues.

2) In case 2, there is only one eigenspace, which is the straight line corresponding to the single eigenvalue.

3) In case 3, it pays off to be careful! First we need to check what happens if the two eigenvalues are equal. If that happens, it's easy to prove that we end up in case 4 instead. Otherwise there are two different eigenvalues, and their eigenspaces are two different straight lines.

4) In case 4, the eigenspace is the whole 2D plane.

In this way, eigenvalues and eigenspaces are unambiguously geometrically defined, and don't require coordinates or matrices.

Now, what are "eigenvectors", or "own vectors", of the transformation? Let's say that an "eigenvector" is any vector for which our transformation is a scaling. In other words, an "eigenvector" is a vector from (0, 0) to any point in an eigenspace. The disadvantage is that it involves an arbitrary choice. The advantage is that eigenvectors can be specified by coordinates, so you can find them by computational methods.

Does that make sense?
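A rough numerical companion to the four cases, with matrices chosen as arbitrary representatives (the classification logic is a sketch, not exhaustive for edge cases):

    import numpy as np

    def invariant_lines(A):
        # Classify the lines through (0, 0) that a real 2x2 map sends to themselves
        lam, _ = np.linalg.eig(A)
        if np.iscomplex(lam).any():
            return "none (complex eigenvalues, e.g. a rotation)"
        if np.isclose(lam[0], lam[1]):
            # Equal eigenvalues: either a scaling (every line invariant)
            # or a skew/shear (only one invariant line)
            if np.allclose(A, lam[0].real * np.eye(2)):
                return "every line through the origin (a scaling)"
            return "exactly one line (a skew)"
        return "exactly two lines (two distinct eigenvalues)"

    print(invariant_lines(np.array([[0., -1.], [1., 0.]])))  # rotation -> none
    print(invariant_lines(np.array([[1., 1.], [0., 1.]])))   # skew     -> one
    print(invariant_lines(np.array([[2., 0.], [0., .5]])))   # stretch  -> two
    print(invariant_lines(np.array([[3., 0.], [0., 3.]])))   # scaling  -> every

In each invariant line, the corresponding eigenvalue is exactly the per-line scaling factor described above.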