How deep is the brain? The shallow brain hypothesis

202 points | by vapemaster | over 1 year ago

16 comments

audunw · over 1 year ago
I seem to remember research stating that an individual neuron has very complex behaviour that requires several ML "neurons" / nodes to simulate. So if you do a comparison, perhaps the brain is deeper than you'd think by just looking at the graph of neurons and their synapses.

Could we construct a neural net from nodes with more complex behaviour? Probably, but in computing we've generally found that it's best to build up a system from simple building blocks. So what if it takes many ML nodes to simulate a neuron? That's probably an efficient way to do it, especially in the early phase where we're not quite sure which architecture is best. It's easier to experiment with various neural-net architectures when the building blocks are simple.
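A minimal sketch of that point in plain NumPy (a made-up toy, not anything from the paper): a single "complex" unit with a bump-shaped response can be reproduced exactly by summing a few simple ReLU nodes, which is roughly what "several ML nodes per biological neuron" means in practice.

```python
import numpy as np

def complex_neuron(x):
    # Hypothetical complicated single-cell response: a localized bump around x = 1.
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

def relu(x):
    return np.maximum(0.0, x)

def simple_units(x):
    # Three plain ReLU "ML nodes" summed together reproduce the same bump:
    # relu(x) - 2*relu(x - 1) + relu(x - 2) == max(0, 1 - |x - 1|)
    return relu(x) - 2.0 * relu(x - 1.0) + relu(x - 2.0)

xs = np.linspace(-1.0, 3.0, 9)
print(np.allclose(complex_neuron(xs), simple_units(xs)))  # True
```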
chriskanan · over 1 year ago
The brain has a lot of skip connections and is massively recurrent. In a sense, the brain can be thought of as having infinite depth due to recurrent thalamo-cortical loops. They do mention thalamo-cortical loops in the paper, so I think a more concrete definition of what is meant by "depth" would be helpful.
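A minimal sketch of why recurrence blurs the notion of depth (a toy PyTorch module, not anything from the paper): a single shallow block reused over T time steps behaves, once unrolled, like a network whose effective depth grows with T.

```python
import torch
import torch.nn as nn

class RecurrentLoopSketch(nn.Module):
    """One shallow block applied recurrently; unrolled over `steps` steps it
    acts like a network of effective depth proportional to `steps`."""

    def __init__(self, dim: int = 64, steps: int = 10):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for _ in range(self.steps):
            # Same shallow block reused each step, with a skip back to the input.
            h = self.block(h) + x
        return h

out = RecurrentLoopSketch()(torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 64])
```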
sheeshkebab · over 1 year ago
It's indeed odd that current DNNs require massive amounts of energy to retrain and lack any kind of practical continuous adaptation and learning.
beaugunderson · over 1 year ago
https://anonymfile.com/dR8a/s41583-023-00756-z.pdf
hliyan · over 1 year ago
"brain seems shallow and neural networks are deep, ergo neural networks are doing it wrong"

Please don't claim things the author didn't. What I read was "ergo (artificial) neural networks may be missing a trick".
rsrsrs86 · over 1 year ago
Beyond the mere topological metaphor of neural networks, there is almost nothing in common between brains and digital computation. This is a widespread category error.
lawrenceyan · over 1 year ago
We have skip connections and recurrent neural networks at home.
jakobson14 · over 1 year ago
If I had a nickel for every time some neurologist tried to compare brains to neural networks. It's a surefire way to tell someone is either desperate for grant money or has been smoking crack. (Previously: comparing brains and "electronic computers".)

Their entire article hinges on the complaint "brain seems shallow and neural networks are deep, ergo neural networks are doing it wrong."

Neurologists seem to have a really hard time comprehending that researchers working on neural networks aren't as clueless about computers as neurology is about the brain. They also *vastly* overestimate how much engineers working on neural networks even care about how biological brains work.

Virtually every attempt at making neural networks mimic biological neurons has been a miserable failure. Neural networks, despite their name, don't work anything like biological neurons, and their development is guided by a combination of

A) practical experimentation and refinement, and

B) real, actual understanding of how they work.

The concept of resnets didn't come from biology; it came from observations about the flow of gradients between nodes in the computational graph. The concept of CNNs didn't come from biology; it came from old knowledge of convolutional filters. The current form and function of neural networks is grounded in repeated practical experimentation, not an attempt to mimic the slabs of meat that we place on pedestals. Neural networks are deep because it turns out hierarchical feature detectors work really well, and it doesn't really matter if the brain doesn't do things that way.

And then you have the nitwits searching the brain for transformer networks. Might as well look for mercury delay line memory while you're at it. Quantum entanglement too.
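For reference, the gradient-flow argument mentioned above in a minimal residual block (a generic sketch, not any particular paper's architecture): the identity shortcut guarantees a direct path for gradients from output to input, regardless of what the stacked layers do.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = F(x) + x. The identity shortcut means
    the gradient of the output with respect to the input always contains an
    identity term, independent of any analogy to biology."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)  # shortcut around the stacked layers

x = torch.randn(1, 16, 8, 8, requires_grad=True)
ResidualBlock()(x).sum().backward()
print(x.grad is not None)  # True: gradients reach the input directly via the shortcut
```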
phlogisticfugu · over 1 year ago
Deep learning models have already been permitting "shallow signals" for a while; see "skip connections": https://theaisummer.com/skip-connections/
spacetimeuser5 · over 1 year ago
Who really cares how exactly an ANN matches a human brain? Is such an ANN smarter than ChatGPT?

It is more useful to use AI to develop more ecologically valid measurement methods for biology.
MagicMoonlight · over 1 year ago
If it were shallow, then it wouldn't take 25 years for a human brain to fully train. The fact that some parts of it need that much data means they must be way up the hierarchy.
Salgat · over 1 year ago
The brain communicates with itself, so deep layers are equivalent to sections of the brain talking to each other. The only relevance white-matter depth has is with regard to how it's trained, and since the brain doesn't use gradient descent, it's irrelevant to neural networks in that regard.
lawlessone · over 1 year ago
So does this mean DNNs are in some ways deeper than human brains?
bjornsing · over 1 year ago
> This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks.

As I understand it, the thalamus is basically a giant switchboard, though. I see no reason to believe that it never connects the output of one cortical area to the input of another, thus doubling the effective depth of the neural network. (I haven't read the paper itself, though, as it was behind a paywall.)
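A toy sketch of that switchboard idea (entirely hypothetical wiring, not from the paper): if a relay feeds one shallow module's output into another, two depth-d modules compose into an effective depth of roughly 2d.

```python
import torch
import torch.nn as nn

# Two shallow "cortical area" modules of depth d each.
cortical_area_a = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
                                nn.Linear(128, 128), nn.ReLU())
cortical_area_b = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
                                nn.Linear(128, 128), nn.ReLU())

def thalamic_route(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for the switchboard: here just an identity relay.
    return x

x = torch.randn(4, 128)
h = cortical_area_a(x)   # depth d
h = thalamic_route(h)    # relay adds no depth of its own
y = cortical_area_b(h)   # roughly depth 2d end to end
print(y.shape)  # torch.Size([4, 128])
```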
Simon_ORourke · over 1 year ago
Judging by some of the levels of driving around these parts, the brain may be very shallow indeed.
low_tech_punk · over 1 year ago
A replay of Jeff Hawkins' group's A Thousand Brains theory?