The article is so vague it seems even the author isn't particularly sure of the point.<p>Notions of factor models and hidden Markov models have been around in the statistics literature for ages. Computer science's contribution to the discussion has been framing these methods as machine learning and leading the foray into unsupervised learning. But I'm not sure unsupervised learning techniques are actually being put to use in real-world data analysis; the theoretical foundations still seem a bit opaque to me.
Additional "variables" they are introducing are in fact algorithms acting on available data. A fortunate choice of an algorithm amounts to providing a good prior on the space of algorithms, which are used to estimate the Kolmogorov complexity of the data, or, in other words, explain it. As in the example with stocks, adequate algorithms can be more complex than just copy-paste usually used when doing compression/pattern recognition...
"Generally, computer science is concerned with questions of computational complexity: Given a particular algorithm, you want to know whether a computer can execute it quickly, slowly or never."<p>Tsk tsk MIT for confusing computational complexity with analysis of algorithms. Or perhaps with sloppy use of the words "algorithm" and "execute", when "problem" and "solve" would be more accurate.
The whole notion of <i>Less is More</i> has to do with the end result, at the consumption/interaction level. Of course there's complexity behind the minimalism, and that's the true art of it.<p>This article isn't touching on anything new; I was expecting definitive contradictions.