科技回声

A tech news platform built with Next.js, serving global tech news and discussion.


Inferring neural activity before plasticity for learning beyond backpropagation

145 points · by warkanlock · 6 months ago

20 comments

lukeinator42 · 6 months ago
It has been clear for a long time (e.g. Marvin Minsky's early research) that:

1. both ANNs and the brain need to solve the credit assignment problem
2. backprop works well for ANNs but probably isn't how the problem is solved in the brain

This paper is really interesting, but it is more a novel theory about how the brain solves the credit assignment problem. The HN title makes it sound like differences between the brain and ANNs were previously unknown, which is misleading IMO.
yongjik · 6 months ago
The title of the paper is: "Inferring neural activity before plasticity as a foundation for learning beyond backpropagation"

The current HN title ("Brain learning differs fundamentally from artificial intelligence systems") seems very heavily editorialized.
robwwilliams · 6 months ago
Not my area of expertise, but this paper may be important because it aligns more closely with the "enactive" paradigm of understanding brain-body-behavior and learning than a backpropagation-only paradigm.

(I like enactive models of perception such as those advocated by Alva Noë, Humberto Maturana, Francisco Varela, and others. They get us well beyond the straitjacket of Cartesian dualism.)

Rather than having error signals tweak synaptic weights after a behavior, a cognitive system generates a set of actions it predicts will accommodate its needs. This can apparently be accomplished without requiring short-term synaptic plasticity. Then, if all is good, weights are modified in a secondary phase that is more about asserting the utility of the "test" response. More selection than descent. The emphasis is more on feedforward modulation and selection. Clearly there must be error-signal feedback, so some of you may argue that the distinction will be blurry at some levels. Agreed.

Looking forward to reading more carefully to see how far off base I am.
pharrington · 6 months ago
Theories that brains predict the pattern of expected neural activity aren't new (e.g. this paper cites work toward the Free Energy Principle, but not the Embodied Predictive Interoception Coding work). I have zero neuroscience training, so I doubt I'd be able to reliably answer my own question just by reading this paper, but does anyone know how specifically their Prospective Configuration model differs from, or expands upon, the previous work? Is it a better model of how brains actually handle credit assignment than the aforementioned models?
oatmeal1 · 6 months ago
> In prospective configuration, before synaptic weights are modified, neural activity changes across the network so that output neurons better predict the target output; only then are the synaptic weights (hereafter termed 'weights') modified to consolidate this change in neural activity. By contrast, in backpropagation, the order is reversed; weight modification takes the lead, and the change in neural activity is the result that follows.

What would neural activity changes look like in an ML model?
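One way to picture it: prospective configuration is related to energy-based and predictive-coding networks, where the output is clamped to the target and the hidden activities relax first, before any weights move. Here is a minimal toy sketch of that two-phase scheme — my own illustrative construction, not code from the paper; the network shape, relaxation energy, step sizes, and iteration counts are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear network x -> h -> y with a single training example.
W1 = rng.normal(scale=0.5, size=(3, 2))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(2, 1))   # hidden-to-output weights
x = np.array([1.0, 0.5, -0.3])
target = np.array([0.7])

y_before = x @ W1 @ W2

# Phase 1: infer activity. Clamp the output at the target and relax the
# hidden activity h to reduce the sum of local prediction errors
#   E(h) = ||h - x @ W1||^2 + ||target - h @ W2||^2
h = x @ W1                          # start from the feedforward activity
for _ in range(200):
    e1 = h - x @ W1                 # hidden layer's own prediction error
    e2 = target - h @ W2            # clamped output's prediction error
    h += 0.1 * (-e1 + e2 @ W2.T)    # gradient step on E with respect to h

# Phase 2: plasticity. Only now do weights change, via purely local
# (Hebbian-like) updates that consolidate the inferred activity.
lr = 0.2
W2 += lr * np.outer(h, target - h @ W2)
W1 += lr * np.outer(x, h - x @ W1)

y_after = x @ W1 @ W2
print(y_before, y_after)            # output moves toward the target 0.7
```

The point of the ordering: in backprop the weight update is computed directly from the feedforward error, whereas here the weights only consolidate activities that were already inferred, and every update is local to one layer.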
dboreham · 6 months ago
Paper actually says that they fundamentally do learn the same way, but the fine details are different. Not too surprising.
robotresearcher · 6 months ago
The post headline is distracting people and making for a poor discussion. The paper describes a learning mechanism that has advantages over backprop and may be closer to what we see in brains.

The contribution of the paper, and its actual title, is about the proposed mechanism.

All the comments amounting to 'no shit, Sherlock' are about the mangled headline, not the paper.
eli_gottlieb · 6 months ago
Oh hey, I know one of the authors on this paper. I've been meaning to ask him at NeurIPS how this prospective configuration algorithm works for latent variable models.
yellowapple · 6 months ago
The title of this post doesn&#x27;t seem to have any connection to the title or content of the linked article.
blackeyeblitzar · 6 months ago
The comments here saying this was obvious, or something else more negative, are disappointing. Neural networks are named for the neurons in biological brains, and a lot of the inspiration in deep learning comes from biology, so the association is there. Pretending you're superior for knowing the two are still different contributes nothing. Doing so in more specific ways, or attempting to further understand the differences between deep learning and biology through research, is useful.
ilaksh · 6 months ago
Looks amazing if it pans out at scale. Would be great if someone tried this with one of those simulated robotic training tasks that always have thousands or millions of trials rather than just CIFAR-10.
nickpsecurity · 6 months ago
Some are surprised that anyone would make this point, either in the title or the research.

It might be a response to the many, many claims in articles that neural networks work like the brain, even using terms like neurons and synapses. As those claims spread, people also start building theories on top of them that make AIs more like humans. Then we won't need humans, or they'll be extinct, or something.

Many of us who are tired of that are both countering it and just using different terms for each where possible. So I'm calling the AIs models, saying model training instead of learning, and describing them as finding and acting on patterns in data. Even laypeople seem to understand these terms with less confusion about them being just like brains.
revskill · 6 months ago
It is a good thing, as I do not admire the human brain much. You learn things slowly...
CatWChainsaw · 6 months ago
"AI and humans learn differently."

Obviously. So can the scraping grifters who claim that AI 'learns just like a human' please shut up and never inflict their odious presence on the rest of humanity again? And also pay 10x damages for ruining the Internet.
nextworddev · 6 months ago
The brain learns through pain. LLMs learn through expending energy.
josefritzishere · 6 months ago
Surprise factor zero.
isaacimagine · 6 months ago
Wait, my brain doesn't do backprop over a pile of linear algebra after having the internet rammed through it? No way, that's crazy /s

tl;dr: the paper proposes a principle called 'prospective configuration' to explain how the brain does credit assignment and learns, as opposed to backprop. Backprop can lead to 'catastrophic interference', where learning new things ablates old associations, which doesn't match observed biological processes. From what I can tell, prospective configuration learns by solving for what the activations should have been to explain the error, and then updates the weights accordingly, which apparently somehow avoids ablating old associations. They then show how prospective configuration explains observed biological processes. Cool stuff; wish I could find the code. There are some supplemental notes:

https://static-content.springer.com/esm/art%3A10.1038%2Fs41593-023-01514-1/MediaObjects/41593_2023_1514_MOESM1_ESM.pdf
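The catastrophic-interference point is easy to demonstrate even without the paper's algorithm. Here is a toy illustration (entirely my own setup, not from the paper) of plain gradient-descent training ablating an old association when a new, overlapping one is learned:

```python
import numpy as np

rng = np.random.default_rng(1)

# One linear layer trained by gradient descent on squared error
# (for a single layer this is exactly the backprop update).
W = rng.normal(scale=0.1, size=(2, 2))

xA, tA = np.array([1.0, 0.0]), np.array([1.0, 0.0])   # association A
xB, tB = np.array([1.0, 1.0]), np.array([0.0, 1.0])   # association B

def train(x, t, W, steps=100, lr=0.1):
    for _ in range(steps):
        W = W + lr * np.outer(x, t - x @ W)   # delta rule / backprop
    return W

W = train(xA, tA, W)                          # learn A first
err_A_before = np.linalg.norm(xA @ W - tA)    # ~0: A is well learned

W = train(xB, tB, W)                          # then learn B
err_A_after = np.linalg.norm(xA @ W - tA)     # A has been ablated

print(err_A_before, err_A_after)
```

Because xB overlaps xA, the weight updates for B overwrite the weights that encoded A, so the error on A grows sharply. The paper's claim, as I read it, is that changing activities toward target-consistent values before touching the weights reduces exactly this kind of interference.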
tantalor · 6 months ago
No shit, really?
johnea · 6 months ago
Was a study really necessary for this?

Do "AI" fanbois really think LLMs work like a biological brain?

This only reinforces the old maxim: artificial intelligence will never be a match for natural stupidity.
FrustratedMonky · 6 months ago
"Does not learn like a human" does not mean "does not learn".

It is alien to us; that doesn't mean it is harmless.