TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Why Deep Learning Works II: the Renormalization Group

120 points by miket, almost 10 years ago

10 comments

albertzeyer, almost 10 years ago
Note: This is about unsupervised learning and mostly about RBMs/DBNs. Most of the Deep Learning success is all about supervised learning. In the past, RBMs were used for unsupervised pretraining of the model; nowadays, everyone uses supervised pretraining.

And the famous DeepMind work (Atari games etc.) is mostly about reinforcement learning, which is different again.
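For readers unfamiliar with the RBMs mentioned above, here is a minimal sketch of how one is trained with one-step contrastive divergence (CD-1). The toy data, layer sizes, and hyperparameters are illustrative, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Minimal binary restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_h(self, v):
        """Hidden activation probabilities and a binary sample given visibles."""
        p = self._sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        """Visible activation probabilities and a binary sample given hiddens."""
        p = self._sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        """One contrastive-divergence update: data phase minus reconstruction phase."""
        ph0, h0 = self.sample_h(v0)
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Toy dataset: two "stripe" patterns the hidden units can learn to encode.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 8, dtype=float)
rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(500):
    rbm.cd1_step(data)
```

In the pretraining scheme the comment refers to, stacks of such RBMs were trained layer by layer to form a DBN before supervised fine-tuning.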
milesf, almost 10 years ago
Okay, I confess. I really didn't understand most of that post. It sounds really smart, but someone will have to vouch that it's legit, because the picture of Kadanoff cuddling Cookie Monster triggered my baloney detector: https://charlesmartin14.files.wordpress.com/2015/04/kadanoff.jpeg
paulpauper, almost 10 years ago
I think this is similar to the scaling theory of the stock market, which uses scale-invariant geometric objects to represent stock market energy levels.

http://greyenlightenment.com/sornette-vs-taleb-debate/

Sornette's 2013 TED video, in which he predicts an imminent stock market crash due to some 'power law', is also wrong, because two years later the stock market has continued to rally.

You write on your blog:

*These kinds of crashes are not caused by external events or bad players–they are endemic to all markets and result from the cooperative actions of all participants.*

Easier said than done. I don't think the log-periodic theory is a holy grail for making money in the market. There are too many instances where it has failed, but you cherry-picked a single example with bitcoin where it could have worked.
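The log-periodic model under discussion can be sketched concretely. Exact parametrisations vary across Sornette's papers; the form below is one common variant, and all parameter values here are made up for illustration:

```python
import numpy as np

def lppl_log_price(t, tc, A, B, C, m, omega, phi):
    """Log-periodic power law (LPPL) for the log-price ahead of a critical time tc.

    One common parametrisation (conventions differ between papers):
        ln p(t) = A + B*(tc - t)^m + C*(tc - t)^m * cos(omega*ln(tc - t) - phi)
    Valid only for t < tc; the oscillations accelerate as t approaches tc.
    """
    dt = tc - np.asarray(t, dtype=float)
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

# Illustrative trajectory approaching a hypothetical critical time tc = 10.
t = np.linspace(0.0, 9.9, 200)
y = lppl_log_price(t, tc=10.0, A=5.0, B=-0.5, C=0.05, m=0.5, omega=8.0, phi=0.0)
```

The crash-prediction debate in the comment is about whether fitting these seven parameters to noisy price data has any out-of-sample power, not about the functional form itself.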
fizixer, almost 10 years ago
One way to think of it: there are connections between Deep Learning and theoretical physics because there are (even stronger) connections between information theory and statistical mechanics.
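The connection alluded to here is concrete: the Gibbs entropy of statistical mechanics and the Shannon entropy of information theory share the same functional form, and the Boltzmann distribution is the maximum-entropy distribution at fixed mean energy. A small sketch with made-up energy levels:

```python
import numpy as np

def boltzmann(energies, beta):
    """Boltzmann distribution p_i proportional to exp(-beta * E_i)."""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    return w / w.sum()

def shannon_entropy(p):
    """Shannon entropy in nats; identical in form to the Gibbs entropy -sum p ln p."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

E = np.array([0.0, 1.0, 2.0, 3.0])       # illustrative energy levels
p_cold = boltzmann(E, beta=5.0)          # low temperature: concentrated, low entropy
p_hot = boltzmann(E, beta=0.01)          # high temperature: near-uniform, near-max entropy
```

At infinite temperature (beta = 0) the distribution is exactly uniform and the entropy reaches its maximum, ln 4 for four states.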
sgt101, almost 10 years ago
I don't like the assertion at all, because so many techniques are held to be "deep learning", and because even when specific techniques are built on an analogy of this sort (think Simulated Annealing and Genetic Algorithms), they do not work "because" they are "like" the physical processes that served as inspiration.

Names are useful, but only as an aid to thinking. Does this help us think about these techniques?
reader5000, almost 10 years ago
Is the "group" in renormalization group the same "group" as in group theory?
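A rough illustration of why the naming question is subtle: real-space coarse-graining transformations compose (closure holds), but they discard information and so are not invertible, which is why the renormalization "group" is often described as a semigroup in the strict group-theoretic sense. A toy sketch, assuming simple decimation as the coarse-graining rule:

```python
import numpy as np

def decimate(spins, factor=2):
    """Real-space 'decimation' coarse-graining: keep every factor-th spin.

    Two scale-2 decimations compose into one scale-4 decimation (closure),
    but the map has no inverse -- the discarded spins cannot be recovered --
    so these transformations form a semigroup rather than a group.
    """
    return spins[::factor]

chain = np.arange(16)              # stand-in for a 1D spin chain
twice = decimate(decimate(chain))  # two scale-2 steps
once4 = decimate(chain, factor=4)  # one scale-4 step; same result as above
```

Momentum-shell RG has the same structure: integrating out high-momentum modes composes cleanly but cannot be undone.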
jmount, almost 10 years ago
I think a key difference is that the physics renormalization structures use fairly regular or uniform weights, while deep learning plays a lot with the weights. So there are going to be pretty big differences in behavior.
noobermin, almost 10 years ago
And here I thought the renormalization group had no application outside high-energy physics and condensed matter. Maybe I should have stuck with HEP after all.
octatoan, almost 10 years ago
No MathJax. I am disappoint.
curiousjorge, almost 10 years ago
It always depresses me when I read anything with math formulas and esoteric terms; it's a constant reminder of my lifelong incompetence with math and university calculus courses.