Posted six months ago on https://news.ycombinator.com/item?id=8525237.

> We will soon release the implementations for these algorithms described.

I would like to see them now.
If you're interested in lower bounds or tighter upper bounds, you can find the latest here:

http://link.springer.com/article/10.1007%2Fs00453-014-9891-7#page-1
http://statweb.stanford.edu/~candes/papers/RandomizedNLA.pdf
Why aren't they just using compressed sensing instead of PCA in the first place? PCA is good because it guarantees perfect recovery when the set of examples is contained in an n-dimensional subspace. Compressed sensing guarantees recovery whenever the set of examples is sparse in *some basis*, which sounds like a much better fit.

Not to mention that random projections, which are even faster, usually do well, with guarantees given by the Johnson-Lindenstrauss lemma.
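To make the random-projection point concrete, here's a minimal numpy sketch: project points lying in a low-dimensional subspace through a scaled Gaussian matrix and check that pairwise distances are roughly preserved, which is what the Johnson-Lindenstrauss lemma promises. All dimensions and sample counts here are arbitrary choices for illustration, not anything taken from the papers linked above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Data that actually lies in a low-dimensional subspace:
    # 100 points in R^1000, all inside a 10-dimensional subspace.
    d, k, n = 1000, 10, 100
    basis = rng.standard_normal((d, k))
    X = rng.standard_normal((n, k)) @ basis.T   # shape (n, d)

    # Johnson-Lindenstrauss-style Gaussian random projection to m dims.
    # Dividing by sqrt(m) makes the map norm-preserving in expectation.
    m = 64
    R = rng.standard_normal((d, m)) / np.sqrt(m)
    Y = X @ R                                    # shape (n, m)

    # Compare all pairwise distances before and after projection.
    def pdist(A):
        diffs = A[:, None, :] - A[None, :, :]
        return np.sqrt((diffs ** 2).sum(axis=-1))

    orig, proj = pdist(X), pdist(Y)
    mask = orig > 0
    ratios = proj[mask] / orig[mask]
    print("distance distortion range: %.3f .. %.3f"
          % (ratios.min(), ratios.max()))

The printed ratios should cluster around 1; shrinking m widens the spread, which is the eps-vs-dimension trade-off in the JL lemma.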