科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global technology news and discussion.


© 2025 科技回声. All rights reserved.

PNAS Paper: fMRI shows brain is using the same algorithm as Transformers (2021)

14 points | by rodoxcasta | about 2 years ago

3 comments

rodoxcasta, about 2 years ago
They record the brain's language processing via fMRI, along with the activations of several AI models during the same language tasks, then fit a linear map between the two. They then use that map to predict how the brain will respond to a language task from the AI model's activations for the same sentence. This holds across different imaging techniques and different language tasks.

Transformers perform qualitatively better than other architectures, and GPT-2 (the most advanced public model at the time) shows near 100% accuracy. The best correlate of performance in the experiment is the model's next-word prediction accuracy; other AI performance metrics don't appear significant.

The conclusion is that this is strong evidence that the brain processes language using the same predictive algorithm as transformers, and that GPT-2 may have an architecture very similar to the language-processing areas of the brain.
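The mapping procedure described above can be sketched as a simple regression problem. The following is a minimal illustration, not the paper's actual pipeline: it uses simulated data standing in for model activations and fMRI responses, and a ridge-regularized linear map scored by held-out correlation (roughly what "brain score" measures). All names and shapes are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_units, n_voxels = 200, 64, 32

# Stand-in for per-sentence model activations (e.g. hidden states of a language model)
model_acts = rng.normal(size=(n_sentences, n_units))

# Simulated fMRI responses: a linear function of the activations plus noise
true_map = rng.normal(size=(n_units, n_voxels))
fmri = model_acts @ true_map + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Split sentences into a fitting set and a held-out set
X_tr, X_te = model_acts[:150], model_acts[150:]
Y_tr, Y_te = fmri[:150], fmri[150:]

# Fit a ridge-regularized linear map from activations to voxels:
# W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_units), X_tr.T @ Y_tr)

# Predict held-out brain responses and correlate prediction with measurement,
# voxel by voxel; the mean correlation is the "brain score" of this model
pred = X_te @ W
brain_score = np.mean([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
                       for v in range(n_voxels)])
print(f"mean held-out correlation: {brain_score:.2f}")
```

In the study, the comparison is run across many candidate models; the claim is that this held-out correlation tracks the model's next-word prediction accuracy.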
rodoxcasta, about 2 years ago
Abstract: The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species' signature cognitive skill. We find that the most powerful "transformer" models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models' neural fits ("brain score") and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
kevviiinn, about 2 years ago
fMRI is not a good way to look at what a brain is actually doing. I don't know why people keep beating this stupid horse. It's dead. Leave it alone.