Note: This is about unsupervised learning and mostly about RBMs/DBNs. Most of Deep Learning's recent success is about supervised learning. In the past, RBMs were used for unsupervised pretraining of the model; nowadays, everyone uses supervised pretraining.

And the famous DeepMind work (Atari games, etc.) is mostly about reinforcement learning, which is different again.
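For anyone who hasn't seen it: here is a minimal sketch of what unsupervised pretraining of a single RBM layer looks like, using one step of contrastive divergence (CD-1). It is not from the post; the layer sizes, learning rate, and toy data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b_vis, b_hid, lr=0.01):
        # One CD-1 step on a batch of binary visible vectors v0.
        # Positive phase: hidden probabilities and samples given the data.
        p_h0 = sigmoid(v0 @ W + b_hid)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: reconstruct visibles, recompute hidden probabilities.
        p_v1 = sigmoid(h0 @ W.T + b_vis)
        v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_hid)
        # Approximate log-likelihood gradient (data term minus model term).
        n = v0.shape[0]
        W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
        b_vis += lr * (v0 - v1).mean(axis=0)
        b_hid += lr * (p_h0 - p_h1).mean(axis=0)

    # Toy usage: 6 visible units, 3 hidden units, random binary "data".
    W = 0.01 * rng.standard_normal((6, 3))
    b_vis, b_hid = np.zeros(6), np.zeros(3)
    data = (rng.random((32, 6)) < 0.5).astype(float)
    for _ in range(100):
        cd1_update(data, W, b_vis, b_hid)

Weights learned this way used to initialize a deep net before supervised fine-tuning; the point above is that this initialization step has largely fallen out of use.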
Okay, I confess. I really didn't understand most of that post. It sounds really smart, but someone will have to vouch that it's legit, because the picture of Kadanoff cuddling Cookie Monster triggered my baloney detector: https://charlesmartin14.files.wordpress.com/2015/04/kadanoff.jpeg
I think this is similar to the scaling theory of the stock market, which uses scale-invariant geometric objects to represent stock market energy levels.

http://greyenlightenment.com/sornette-vs-taleb-debate/

Sornette's 2013 TED video, in which he predicts an imminent stock market crash due to some 'power law', is also wrong, because two years later the stock market has continued to rally.

You write on your blog:

"These kinds of crashes are not caused by external events or bad players–they are endemic to all markets and result from the cooperative actions of all participants."

Easier said than done. I don't think the log-periodic theory is a holy grail for making money in the market. There are too many instances where it has failed, but you cherry-picked a single example with bitcoin where it could have worked.
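(For reference, the 'power law' in question is, as far as I can tell, Sornette's log-periodic power-law (LPPL) fit to the pre-crash price. Roughly, and treating the exact parametrization as approximate:

    \ln p(t) \approx A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\big(\omega \ln(t_c - t) + \phi\big), \qquad t < t_c,

where t_c is the predicted critical (crash) time; the log-periodic oscillations riding on the power law are where the name comes from.)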
One way to think of it is this: there are connections between Deep Learning and Theoretical Physics because there are (even stronger) connections between Information Theory and Statistical Mechanics.
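One concrete version of that connection, as I understand it, is Jaynes' maximum-entropy argument: the Gibbs/Boltzmann distribution of statistical mechanics is exactly the distribution that maximizes Shannon entropy subject to a fixed expected energy,

    \max_{p}\; -\sum_x p(x)\log p(x)
    \quad\text{s.t.}\quad \sum_x p(x)\,E(x) = \langle E\rangle,\;\; \sum_x p(x) = 1
    \;\;\Longrightarrow\;\;
    p(x) = \frac{e^{-\beta E(x)}}{Z},\qquad Z = \sum_x e^{-\beta E(x)},

and an RBM is literally a distribution of this form with a learned energy, E(v,h) = -v^T W h - a^T v - b^T h.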
I don't like the assertion at all, because so many techniques are held to be "deep learning", and because even when specific techniques are built on an analogy of this sort (think Simulated Annealing and Genetic Algorithms), they do not work "because" they are "like" the physical processes that served as inspiration.

Names are useful, but only as an aid to thinking. Does this help us think about these techniques?
I think a key difference is that the renormalization constructions in physics use fairly regular or uniform weights, while deep learning learns its weights and varies them a lot. So there are going to be pretty big differences in behavior.
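To make that concrete, here is a rough sketch of the contrast: a block-spin renormalization step applies one fixed, uniform coarse-graining rule everywhere, while an RBM hidden layer applies whatever weights training happened to find. The 2x2 majority rule, layer sizes, and random weights below are illustrative assumptions, not anyone's actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    def block_spin(spins):
        # Coarse-grain an Ising configuration (+/-1) by 2x2 majority vote.
        h, w = spins.shape
        blocks = spins.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
        return np.where(blocks >= 0, 1, -1)    # fixed rule, no parameters

    def rbm_hidden(v, W, b):
        # RBM hidden-unit probabilities: W is whatever training found.
        return 1.0 / (1.0 + np.exp(-(v @ W + b)))

    spins = rng.choice([-1, 1], size=(8, 8))
    coarse = block_spin(spins)                  # same rule at every site
    W = 0.1 * rng.standard_normal((64, 16))     # learned in practice, random here
    h = rbm_hidden(spins.reshape(1, 64).astype(float), W, np.zeros(16))

In the first case there is nothing to fit; in the second, all the interesting structure lives in W.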
And here I thought the renormalization group had no application outside high-energy physics and condensed matter. Maybe I should have stuck with HEP after all.
It always depresses me when I read anything with math formulas and esoteric terms, a constant reminder of my lifelong incompetence with math and university calculus courses.