Seeing how each visualization adjusts as I change the original dataset is so useful. The technique reminds me of Bret Victor's amazing work.<p>Ladder of Abstraction Essay: <a href="http://worrydream.com/#!2/LadderOfAbstraction" rel="nofollow">http://worrydream.com/#!2/LadderOfAbstraction</a><p>Stop Drawing Dead Fish Video: <a href="https://vimeo.com/64895205" rel="nofollow">https://vimeo.com/64895205</a><p>This is awesome, thanks for sharing!
Very nice! I actually used the featured example from Mark Richardson's class notes on Principal Component Analysis (<a href="http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf" rel="nofollow">http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf</a>) in teaching. It was astounding how clear it was to some people and how unclear to others.<p>I did a singular value decomposition on a data set similar to the one Richardson used (except with international data). The original post here looks at the projection to country-coordinates, looking at which axes describe the primary differences between countries. My students had no problem with that -- Wales and Northern Ireland are most different, in your example, and 'give' the first principal axis. But then I continued to do it with the foods, as Richardson did (look at Figure 4 in the linked file). Students concluded in large numbers that people just don't like fresh fruit and do like fresh potatoes. Hm. They didn't conclude that people don't like Wales and do like Northern Ireland; they accurately saw it as an axis. But once we were talking about food instead of countries, students saw projection to the eigenspace as being indicative of some percentage of approval.<p>How could we visually display both parts of this principal component analysis to combat this prejudice that sometimes leads us to read left to right as worse to better?
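One common answer to the "both parts" question is a biplot: a single SVD gives country scores and food loadings in the same component space, so you can overlay countries as points and foods as arrows through the origin, which reads as directions rather than a worse-to-better scale. A minimal sketch, using made-up stand-in numbers (not the real UK dataset from the post):

```python
import numpy as np

# Hypothetical consumption table (grams/person/week); rows = countries,
# columns = foods. The values are invented stand-ins for illustration.
countries = ["England", "Wales", "Scotland", "N. Ireland"]
foods = ["Fresh fruit", "Fresh potatoes", "Alcoholic drinks"]
X = np.array([
    [1102.0,  720.0, 375.0],
    [1137.0,  874.0, 475.0],
    [ 957.0,  566.0, 458.0],
    [ 674.0, 1033.0, 135.0],
])

# Center the columns, then one SVD: U describes the countries, Vt the foods,
# so both can be drawn in the same PC1/PC2 plane.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

country_scores = U[:, :2] * s[:2]   # countries as points in the PC1/PC2 plane
food_loadings = Vt[:2].T            # foods as unit-ish directions in that plane

# In a biplot, the loadings are drawn as arrows from the origin: a food is a
# *direction* of variation, not a position on a good/bad scale.
for c, (p1, p2) in zip(countries, country_scores):
    print(f"{c:12s} PC1={p1:9.1f}  PC2={p2:9.1f}")
for f, (l1, l2) in zip(foods, food_loadings):
    print(f"{f:18s} direction=({l1:+.2f}, {l2:+.2f})")
```

Drawing the loadings as arrows (and keeping the origin visible) is one way to push readers away from interpreting left-to-right as disapproval-to-approval.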
How different is linear regression from PCA? I understand the procedures and methods are completely different, but wouldn't linear regression also give the same solution on these data sets?
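In general, no: ordinary least squares minimizes vertical (y-direction) residuals, while PCA's first component minimizes perpendicular residuals, so the two lines differ whenever the data is noisy. A small sketch on synthetic data (the dataset here is made up, not the one from the post):

```python
import numpy as np

# Synthetic 2-D data: true slope 0.5, with noise only in y.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)
xc, yc = x - x.mean(), y - y.mean()

# Ordinary least-squares slope of y on x (minimizes vertical residuals)
slope_ols = (xc @ yc) / (xc @ xc)

# First principal component of the centered data
# (minimizes perpendicular residuals)
cov = np.cov(np.column_stack([xc, yc]), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, np.argmax(eigvals)]
slope_pca = pc1[1] / pc1[0]

# With noise in y, the PCA line comes out noticeably steeper than OLS.
print(slope_ols, slope_pca)
```

The two coincide only in special cases (e.g. noiseless data lying exactly on a line); on real data sets like the food table, the PCA axes and a regression fit are generally different answers to different questions.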
This is truly fantastic.<p>Excuse me for being daft, but how do you transform back into 'what does this mean'?<p>For instance, in ex 3, we see that N. Ireland is an outlier. It wasn't obvious to me that the cause was potatoes and fruit.<p>How does PCA help you with the fundamental meaning?
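One standard way to get back to "what does this mean" is to look at the loadings: multiply the outlier's deviation from the mean, variable by variable, by the first component's weights, and the variables with the largest contributions are the ones driving the separation. A sketch with made-up numbers (the last row plays the N. Ireland outlier role):

```python
import numpy as np

# Hypothetical table: rows = countries, columns = foods.
# The numbers are invented, not the real dataset from the post.
foods = ["Fresh fruit", "Fresh potatoes", "Alcohol", "Cheese"]
X = np.array([
    [1102.0,  720.0, 375.0, 105.0],
    [1137.0,  874.0, 475.0, 103.0],
    [ 957.0,  566.0, 458.0, 103.0],
    [ 674.0, 1033.0, 135.0,  66.0],   # the outlier row
])

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                 # loadings: how much each food weighs in PC1

# Per-food contribution of the outlier row to its PC1 score:
# score = sum(contrib), so the big |contrib| entries explain the outlier.
contrib = Xc[3] * pc1
for i in np.argsort(-np.abs(contrib)):
    print(f"{foods[i]:16s} contribution {contrib[i]:+9.1f}")
```

With this made-up data the fruit and potato columns dominate, which is the analogue of "N. Ireland is an outlier *because of* potatoes and fruit": PCA itself only finds the axis, and inspecting the loadings is how you translate the axis back into the original variables.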
A webapp for doing SVD/PCA:<p><a href="http://biographserv.com/bgs/docs/svd_graph_editor/" rel="nofollow">http://biographserv.com/bgs/docs/svd_graph_editor/</a>
PCA is a pretty okay method for dimensionality reduction. Latent Dirichlet allocation is pretty good too. It depends on what you're trying to do and how the data is distributed in N-dimensional space.