
The Matrix: A Bayesian learning model for LLMs

139 points | by stoniejohnson | about 1 year ago

5 comments

dosinga, about 1 year ago
Conclusion from the paper is:

"In this paper we present a new model to explain the behavior of Large Language Models. Our frame of reference is an abstract probability matrix, which contains the multinomial probabilities for next token prediction in each row, where the row represents a specific prompt. We then demonstrate that LLM text generation is consistent with a compact representation of this abstract matrix through a combination of embeddings and Bayesian learning. Our model explains (the emergence of) In-Context learning with scale of the LLMs, as also other phenomena like Chain of Thought reasoning and the problem with large context windows. Finally, we outline implications of our model and some directions for future exploration."

Where does the "Cannot Recursively Improve" come from?
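A minimal numerical sketch of the framing quoted above (my own illustration, not code from the paper): the abstract matrix has one row per prompt and one column per vocabulary token, and "generation" is just repeated sampling from that row's multinomial. The vocabulary, prompts, and sizes are made-up assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["the", "cat", "sat", "on", "mat", "."]   # toy vocabulary (assumed)
    prompts = ["the cat", "the cat sat on the"]       # each prompt indexes one row

    # Abstract probability matrix: rows = prompts, columns = next-token probabilities.
    P = rng.dirichlet(alpha=np.ones(len(vocab)), size=len(prompts))

    def generate(prompt_idx, n_tokens=3):
        # Generation viewed as repeated sampling from the prompt's row.
        # Simplified: the row stays fixed here; in the paper's framing the prompt,
        # and hence the row, grows with each generated token.
        return " ".join(rng.choice(vocab, p=P[prompt_idx]) for _ in range(n_tokens))

    print(generate(0))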
avi_vallarapu, about 1 year ago
Theoretically this sounds great. I would worry about scalability issues with the Bayesian learning model's practical implementation when dealing with the vast parameter space and data requirements of state-of-the-art models like GPT-3 and beyond.

Would love to see practical implementations on large-scale datasets and in varied contexts. I liked the use of Dirichlet distributions to approximate any prior over multinomial distributions.
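A toy example of the Dirichlet-multinomial conjugacy mentioned in the last sentence (my sketch, not the paper's implementation): a symmetric Dirichlet prior over next-token probabilities, updated in closed form from hypothetical observed counts.

    import numpy as np

    vocab_size = 5
    alpha_prior = np.ones(vocab_size)     # symmetric Dirichlet prior over the multinomial
    counts = np.array([3, 0, 1, 0, 2])    # hypothetical next-token counts for one prompt

    alpha_post = alpha_prior + counts     # conjugate update: the posterior is Dirichlet too
    posterior_mean = alpha_post / alpha_post.sum()

    print(posterior_mean)                 # smoothed next-token probability estimate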
programjames, about 1 year ago
I didn't read through the paper (just the abstract), but isn't the whole point of the KL divergence loss to get the best compression, which is equivalent to Bayesian learning? I don't really see how this is novel, like I'm sure people were doing this with Markov chains back in the 90s.
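For the compression point above, a quick numerical check using standard definitions (nothing specific to the paper): the expected code length under a model q is H(p) + KL(p || q), so minimizing the KL term is the same as approaching the best achievable compression.

    import numpy as np

    p = np.array([0.5, 0.25, 0.25])   # "true" next-token distribution (assumed)
    q = np.array([0.4, 0.4, 0.2])     # model distribution (assumed)

    entropy = -(p * np.log2(p)).sum()          # optimal bits per token
    cross_entropy = -(p * np.log2(q)).sum()    # bits per token when coding with q
    kl = (p * np.log2(p / q)).sum()

    print(cross_entropy, entropy + kl)         # equal: excess bits = KL(p || q)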
drbig, about 1 year ago
The title of the paper is actually `The Matrix: A Bayesian learning model for LLMs`, and the conclusion presented in the title of this post is not to be found in the abstract... Just a heads up, y'all.
toxik, about 1 year ago
Completely editorialized title. The article talks about LLMs, not transformers.