Truncated SVD has been a wonderful tool for "cleaning up" pairwise cosine similarity data: text document comparisons, graph/network building (a visual representation of entities represented by documents, embedded in something like Gephi/Sigma.js/D3), and item-based recommendation systems. (A minimal sketch of that pipeline is below.)

The biggest problem I then run into is choosing a "k" (the number of dimensions kept in your truncation). I've had some thoughts about training this unsupervised method (providing labeled data for what "oughta" be the top nearest neighbors for a particular entity, and optimizing toward that), or building an ensemble on top of many SVD-truncated vector spaces -- though the combination method is unclear to me: pick kNN from a linear combination of each model's similarities? Take the intersection of each model's k nearest neighbors? (Both options are sketched below.)

To novices looking at this tutorial: NumPy is a wonderful tool for small toy examples, but at a certain scale you will depend heavily on the sparse matrix formats SciPy provides. Those, plus random projections, should curb the memory problems of most vector-space problems, short of operating at Google/Yahoo scale or chewing through TBs of logging data.
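To make the first paragraph concrete, here's a minimal sketch of the "clean up cosine similarity" pipeline using scipy.sparse.linalg.svds. The random matrix standing in for a TF-IDF document-term matrix, the rank k=100, and the neighbor count are all placeholder choices, not recommendations:

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import svds

    # Stand-in for a TF-IDF document-term matrix: 1,000 docs x 5,000 terms
    X = sparse.random(1000, 5000, density=0.01, format="csr", random_state=0)

    k = 100                        # the hard-to-pick truncation rank
    U, s, Vt = svds(X, k=k)        # svds returns singular values in ascending order
    order = np.argsort(s)[::-1]
    U, s = U[:, order], s[order]

    docs = U * s                   # documents projected into the k-dim latent space
    docs /= np.linalg.norm(docs, axis=1, keepdims=True) + 1e-12  # unit rows

    sims = docs @ docs.T           # pairwise cosine similarity, denoised by truncation
    nearest = np.argsort(-sims[0])[1:11]   # 10 nearest neighbors of document 0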
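And here's what I mean by the two combination methods for the ensemble idea, assuming you've built one similarity matrix per truncation rank as above. The names sims_by_k, knn_from_sims, and the uniform weights are all my own hypothetical choices:

    import numpy as np

    def knn_from_sims(sims, i, n=10):
        """Indices of the n items most similar to item i, excluding itself."""
        order = np.argsort(-sims[i])
        return [j for j in order if j != i][:n]

    # sims_by_k: hypothetical dict mapping a truncation rank to its similarity
    # matrix, e.g. {50: sims_50, 100: sims_100, 300: sims_300}, built as above.

    def intersect_neighbors(sims_by_k, i, n=10):
        # Option 1: keep only the neighbors every truncated space agrees on.
        sets = [set(knn_from_sims(s, i, n)) for s in sims_by_k.values()]
        return set.intersection(*sets)

    def blended_neighbors(sims_by_k, i, n=10, weights=None):
        # Option 2: kNN over a linear combination of the models' similarities.
        mats = list(sims_by_k.values())
        weights = weights or [1.0 / len(mats)] * len(mats)
        combined = sum(w * m for w, m in zip(weights, mats))
        return knn_from_sims(combined, i, n)

The intersection is conservative (it can return fewer than n neighbors, or none at all), while the blend always returns n but needs you to pick weights; that trade-off is exactly the part I haven't resolved.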
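On the memory point for novices: a dense document-term matrix at even modest scale won't fit in RAM, but a SciPy CSR matrix plus a Gaussian random projection (the Johnson-Lindenstrauss idea) keeps things tractable. A sketch with made-up sizes:

    import numpy as np
    from scipy import sparse

    n_docs, n_terms, d = 50_000, 100_000, 128

    # Dense, this would be 50,000 x 100,000 float64s (~40 GB);
    # CSR stores only the ~500k nonzeros.
    X = sparse.random(n_docs, n_terms, density=1e-4, format="csr", random_state=0)

    # Gaussian random projection: pairwise distances/angles are roughly
    # preserved in the d-dimensional image (Johnson-Lindenstrauss).
    R = np.random.default_rng(0).normal(0.0, 1.0 / np.sqrt(d), size=(n_terms, d))
    X_small = X @ R    # a dense (50_000, 128) array you can actually work with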