3Blue1Brown has a good series on YouTube for building intuition in linear algebra:<p><a href="https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="nofollow">https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x...</a><p>In one of the last videos in the (relatively short) series, he discusses eigen-*:<p>~'eigen-stuffs are straightforward but only make sense if you have a solid visual understanding of the prerequisites (linear transformations, determinants, linear systems of equations, change of basis, etc.). Confusion about eigen-stuffs usually has more to do with a shaky foundation than the eigen-things themselves'<p><a href="https://youtu.be/PFDu9oVAE-g" rel="nofollow">https://youtu.be/PFDu9oVAE-g</a><p>All of the videos in the series, including this later one on eigen-things, focus on animations showing what the number crunching is doing to the coordinate system.
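For anyone who wants to poke at the same idea numerically rather than visually, here is a minimal numpy sketch (the matrix and the test vector are made up purely for illustration): an eigenvector is only scaled by the transformation, so it stays on its own span, while a generic vector gets knocked off its span.

```python
import numpy as np

# A made-up 2x2 transformation, purely for illustration.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Eigenvalues and eigenvectors of A (eigenvectors are the columns of `vecs`).
vals, vecs = np.linalg.eig(A)

v = vecs[:, 0]            # an eigenvector of A
w = np.array([1.0, 1.0])  # an arbitrary vector

print(A @ v, vals[0] * v)  # equal: v is merely scaled, so it stays on its span
print(A @ w)               # not a multiple of w: its direction changes
```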
Whenever this kind of stuff comes up I feel like a bit of a fraud...<p>I’ve written a bunch of scientific data analysis code. I have a science PhD. I’ve written large image analysis pipelines that worked as well as the state of the art... been published, etc.<p>For the most part I’ve found basic math and heuristics to be good enough. Every so often I go relearn calculus. But honestly, none of this stuff ever seems to come in handy. Maybe it’s because most of what I encounter is novel datasets where there’s no established method?<p>I reasonably regularly pick up new discrete methods, but the numerical stuff never seems super useful...<p>I don’t know, just a confession I guess... it never comes up in interviews either, for what it’s worth.
Interesting to see this back on the front page after three years. Still remember us sitting in our living room drawing this on paper and arguing about the right approaches.<p>Maybe one day vicapow and I will make a triumphant return to the explorables space, but life has a way of getting in the way as you get older.
Eigen{vectors,values} seemed like this totally arbitrary concept when I first learned about them. Later it turned out that they are actually really awesome and pop up all the time.<p>Multivariable function extrema? Just look at the eigenvalues of the Hessian (see the sketch after these examples).
Jacobi method convergence? Eigenvalues of the update matrix.
RNN gradient explosion? Of course, eigenvalues.
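As a concrete illustration of the first point above, here is a small numpy sketch (the function f(x, y) = x^2 - y^2 and its critical point at the origin are chosen just for illustration): the signs of the Hessian's eigenvalues classify a critical point as a minimum, maximum, or saddle.

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2 at its critical point (0, 0).
# Chosen purely for illustration; the same test applies to any
# twice-differentiable function once you have its Hessian at a critical point.
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eigvals = np.linalg.eigvalsh(H)  # H is symmetric, so eigvalsh is appropriate

if np.all(eigvals > 0):
    print("local minimum")
elif np.all(eigvals < 0):
    print("local maximum")
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    print("saddle point")  # this branch fires here: eigenvalues are +2 and -2
else:
    print("inconclusive (some eigenvalue is zero)")
```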
I highly recommend 3Blue1Brown's Essence of Linear Algebra series[0] for building a solid grasp of linear algebra.<p>[0] <a href="https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="nofollow">https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x...</a>
Classic paper on Google's PageRank: "The $25,000,000,000 eigenvector"<p><a href="https://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf" rel="nofollow">https://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.p...</a>
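The gist of the paper: the PageRank vector is the dominant eigenvector of the "Google matrix", and power iteration converges to it. Below is a minimal numpy sketch of that idea (the tiny 4-page link graph is made up for illustration; 0.85 is the commonly cited damping factor, not a value taken from this paper).

```python
import numpy as np

# Column-stochastic link matrix for a made-up 4-page web:
# entry [i, j] is the probability of moving from page j to page i via a link.
L = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [1.0, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 1.0],
    [0.0, 0.0, 0.5, 0.0],
])

n = L.shape[0]
d = 0.85  # damping factor, the usual choice in PageRank formulations
G = d * L + (1 - d) / n * np.ones((n, n))  # the "Google matrix"

# Power iteration: repeatedly apply G; the iterate converges to the
# dominant eigenvector (eigenvalue 1), i.e. the PageRank vector.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r
    r /= r.sum()  # keep it a probability distribution despite rounding

print(r)  # ranks of the four made-up pages, summing to 1
```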
The visual explanation movement falls flat for me. It's like trying to understand Monads through blog posts. It's great for developing intuition if you already understand the concept, or for piquing your interest if you've never heard of it, but it doesn't help in the intermediate stage where you know what you want to know but don't yet understand it fully. I need to build up proofs through incremental exercises to grasp these concepts.