There are a few criticisms that can be made of this paper: it tries to cover a lot of ground in a small space, its language is rather informal, and possibly more. But these can be forgiven, as it's a generally informative and entertaining piece.<p>For me, the largest omission is the lack of any reference to the theoretical limits of machine learning, that is, what can't be achieved even if you assume infinite resources and algorithmic complexity. This matters because the paper otherwise reads as a damn good stab at a comprehensive review of why machine learning projects fail, except that it misses this critical point. The idea is best explored in the book What Computers Can't Do (H. Dreyfus, 1972), recounted in What Computers Still Can't Do (H. Dreyfus, 1992), and well summarized in A History of First Step Fallacies (H. Dreyfus, 2012) [1].<p>Finally, any paper that's freely distributed, can be enjoyed over lunch, and includes the phrase "most of the volume of a high-dimensional orange is in the skin, not the pulp" is fine in my book.<p>[1] - <a href="http://link.springer.com/article/10.1007%2Fs11023-012-9276-0" rel="nofollow">http://link.springer.com/article/10.1007%2Fs11023-012-9276-0</a> [PDF]