Do wide and deep networks learn the same things?

107 points by MindGods, almost 4 years ago

4 comments

godelski, almost 4 years ago
For more context: we have the universal approximation theorem for neural nets, which basically says that if a network is wide enough it can approximate anything (with at least 2 layers). So a lot of stuff was really wide. Then VGG [0] came out and showed that deep networks were very effective (along with other papers; things happen in unison, Leibniz and Newton). Then you get ResNets [1] with skip connections, and move forward to today. Today we've started looking more at what networks are doing and where their biases lie. This is because we're running into some roadblocks with CNNs vs. Transformers: they have different inductive biases. Vision transformers still aren't beating CNNs, but they are close, and it is clear they learn different things. So we're seeing more papers doing these types of analyses. ML will likely never be fully interpretable, but we're getting better at understanding it. This is good, because a lot of the time picking your model and network architecture is more art than science (especially when choosing hyperparameters).

[0] https://arxiv.org/abs/1409.1556

[1] https://arxiv.org/abs/1512.03385
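The skip connection mentioned above is small enough to show directly. Below is a minimal, illustrative sketch of a ResNet-style residual block, assuming PyTorch; the class name and layer sizes are made up for the example rather than taken from the ResNet paper. The block learns a correction F(x) and adds the input back in, which is what made very deep networks trainable.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Illustrative ResNet-style block: output = ReLU(F(x) + x)."""
        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.relu = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.relu(self.conv1(x))
            out = self.conv2(out)
            return self.relu(out + x)  # skip connection: add the input back

    # Stacking blocks lets the network go deeper without the signal degrading.
    x = torch.randn(1, 16, 32, 32)
    deep = nn.Sequential(*[ResidualBlock(16) for _ in range(8)])
    print(deep(x).shape)  # torch.Size([1, 16, 32, 32])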
rajansaini, almost 4 years ago
Those are very interesting empirical results. This lecture explains the deep-vs-shallow tradeoff theoretically: https://www.youtube.com/watch?v=qpuLxXrHQB4. He's an amazing lecturer; I wish I didn't need subtitles!

(If you're too lazy to watch: it turns out that there exist functions that a shallow network can never approximate.)
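For a concrete flavor of that depth-separation claim, here is a standard textbook-style example (not necessarily the one used in the lecture): composing a tiny ReLU "tent map" with itself k times produces a sawtooth with 2^(k-1) teeth using only about 3 units per layer, while a one-hidden-layer ReLU network needs on the order of 2^k units to represent it exactly, since each unit can contribute only one linear "kink".

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def tent(x):
        # Exact ReLU form of the tent map on [0, 1]: rises to 1 at x = 0.5, back to 0.
        return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

    def sawtooth(x, depth):
        # "Deep network": the same 3-unit ReLU layer composed `depth` times.
        for _ in range(depth):
            x = tent(x)
        return x

    xs = np.linspace(0.0, 1.0, 1001)
    ys = sawtooth(xs, depth=5)  # 2**4 = 16 teeth, 2**5 = 32 linear pieces
    crossings = np.sum(np.diff((ys > 0.5).astype(int)) != 0)
    print(crossings)  # 32: the curve crosses 0.5 once per linear piece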
sova, almost 4 years ago
At first I thought this had something to do with the classic "breadth vs. depth" notion of learning -- if you're preparing for the MCAT it is better to have breadth that covers all the topics than depth in one or two particulars -- but this is actually about the dimensions of the neural network used to create representations. Naturally, one would expect a "sweet spot" or a series of "sweet spots."

From the paper at https://arxiv.org/pdf/2010.15327.pdf:

> As the model gets wider or deeper, we see the emergence of a distinctive block structure -- a considerable range of hidden layers that have very high representation similarity (seen as a yellow square on the heatmap). This block structure mostly appears in the later layers (the last two stages) of the network.

I wonder if we could do a similar analysis on the human brain and find "high representational similarity" for people who do the same task over and over again, such as playing chess.

Also, I don't really know what sort of data they are analyzing or looking at with these NNs; maybe someone with better scansion can let me know?
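The "representation similarity" in that quote is measured with CKA (centered kernel alignment) between pairs of layers; the heatmap is the layer-by-layer matrix of those scores. Here is a minimal sketch of linear CKA, assuming NumPy (the paper uses a minibatch estimator and also kernel variants; the toy activations below are invented purely for illustration):

    import numpy as np

    def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
        """Linear CKA between two activation matrices (rows = examples)."""
        X = X - X.mean(axis=0, keepdims=True)   # center each feature
        Y = Y - Y.mean(axis=0, keepdims=True)
        num = np.linalg.norm(Y.T @ X, "fro") ** 2
        den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
        return float(num / den)

    # Toy "layer activations" for 2048 inputs (not real network data).
    rng = np.random.default_rng(0)
    layer_a = rng.normal(size=(2048, 64))
    layer_b = layer_a @ rng.normal(size=(64, 128))   # a transform of layer_a
    layer_c = rng.normal(size=(2048, 128))           # unrelated activations
    print(linear_cka(layer_a, layer_b))  # high: the layers carry the same information
    print(linear_cka(layer_a, layer_c))  # much lower: independent random features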
joe_the_user, almost 4 years ago
So it seems like the "blocks" they're talking about are basically redundancies: duplicated logic. It makes sense to me that, since they provide the same functionality, how or where these duplicates exist doesn't matter. But I'm an amateur.