The Matrix: A Bayesian learning model for LLMs

139 points by stoniejohnson, about 1 year ago

5 comments

dosinga, about 1 year ago
Conclusion from the paper is:

"In this paper we present a new model to explain the behavior of Large Language Models. Our frame of reference is an abstract probability matrix, which contains the multinomial probabilities for next token prediction in each row, where the row represents a specific prompt. We then demonstrate that LLM text generation is consistent with a compact representation of this abstract matrix through a combination of embeddings and Bayesian learning. Our model explains (the emergence of) In-Context learning with scale of the LLMs, as also other phenomena like Chain of Thought reasoning and the problem with large context windows. Finally, we outline implications of our model and some directions for future exploration."

Where does the "Cannot Recursively Improve" come from?
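For concreteness, a minimal sketch of the framing quoted above, with a made-up four-token vocabulary and invented probabilities (none of the numbers are from the paper): each row of the matrix belongs to one prompt and holds a multinomial over next tokens, so generation is a row lookup followed by a draw.

    import numpy as np

    vocab = ["the", "cat", "sat", "mat"]
    prompts = ["the cat", "cat sat on the"]

    # One multinomial over the vocabulary per prompt; each row sums to 1.
    P = np.array([
        [0.05, 0.10, 0.80, 0.05],  # "the cat" -> most likely "sat"
        [0.10, 0.05, 0.05, 0.80],  # "cat sat on the" -> most likely "mat"
    ])

    rng = np.random.default_rng(0)

    def sample_next_token(prompt_idx):
        # Text generation as a row lookup plus a multinomial draw.
        return vocab[rng.choice(len(vocab), p=P[prompt_idx])]

    print(sample_next_token(0))  # usually "sat"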
avi_vallarapu, about 1 year ago
Theoretically this sounds great. I would worry about scalability issues with the Bayesian learning model's practical implementation when dealing with the vast parameter space and data requirements of state-of-the-art models like GPT-3 and beyond.

Would love to see practical implementations on large-scale datasets and in varied contexts. I liked the use of Dirichlet distributions to approximate any prior over multinomial distributions.
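A quick illustration of that last point: the Dirichlet is conjugate to the multinomial, so a Bayesian update on observed next-token counts just adds the counts to the concentration parameters. The prior and counts below are invented for illustration.

    import numpy as np

    alpha = np.ones(4)               # uniform Dirichlet prior over a 4-token vocabulary
    counts = np.array([1, 0, 7, 2])  # hypothetical observed next-token counts

    posterior_alpha = alpha + counts                      # conjugate update
    predictive = posterior_alpha / posterior_alpha.sum()  # posterior-mean next-token probabilities
    print(predictive)                # ~[0.143, 0.071, 0.571, 0.214]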
programjames, about 1 year ago
I didn't read through the paper (just the abstract), but isn't the whole point of the KL divergence loss to get the best compression, which is equivalent to Bayesian learning? I don't really see how this is novel; I'm sure people were doing this with Markov chains back in the 90s.
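For what it's worth, a rough sketch of that equivalence on a toy corpus (the corpus is made up): cross-entropy, which is KL divergence up to a constant, measures expected code length in bits, and an add-one-smoothed bigram Markov model is the posterior mean under a uniform Dirichlet prior.

    import math
    from collections import Counter

    corpus = "the cat sat on the mat the cat sat".split()
    vocab = sorted(set(corpus))

    bigrams = Counter(zip(corpus, corpus[1:]))
    contexts = Counter(corpus[:-1])

    def p_next(w, w_prev):
        # Add-one (Dirichlet(1)) smoothing: the Bayesian posterior mean.
        return (bigrams[(w_prev, w)] + 1) / (contexts[w_prev] + len(vocab))

    # Cross-entropy in bits per token = average code length under the model.
    bits = -sum(math.log2(p_next(w, w_prev))
                for w_prev, w in zip(corpus, corpus[1:])) / (len(corpus) - 1)
    print(f"{bits:.2f} bits/token")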
drbig, about 1 year ago
The title of the paper is actually `The Matrix: A Bayesian learning model for LLMs`, and the conclusion presented in the title of this post is not to be found in the abstract... Just a heads up, y'all.
toxik, about 1 year ago
Completely editorialized title. The article talks about LLMs, not transformers.