
Solomonic learning: Large language models and the art of induction

5 points by 100ideas 6 months ago

3 comments

100ideas 6 months ago

I found the opening quote of this article intriguing, especially since it dates from a 1992 research lab:

“One year of research in neural networks is sufficient to believe in God.” The writing on the wall of John Hopfield’s lab at Caltech made no sense to me in 1992. Three decades later, and after years of building large language models, I see its sense if one replaces sufficiency with necessity: understanding neural networks as we teach them today requires believing in an immanent entity.
gnabgib 6 months ago

Blog title: "Solomonic learning: Large language models and the art of induction"
gtsop 6 months ago

Dark times for science when such quotes are thrown around as legitimate.

The article is extremely technical and doesn't really explain the quote, other than acknowledging that there is still much we don't understand.

And really, a person will never grasp machine learning and AI as long as they keep drawing unfounded parallels between humans and machines.