
Ask HN: Compression Ratio of LLMs?

4 points by FileSorter 8 months ago
Does anyone know what the lossy compression ratio of an LLM is?

((Final .safetensors [GB]) / (Total Training Data [GB])) * 100 = ?
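For concreteness, here is the question's formula as a few lines of Python. This is only a minimal sketch; the function name and parameter names are illustrative, not something from the thread.

    def compression_ratio_percent(model_size_gb: float, training_data_gb: float) -> float:
        """Return ((final model size) / (total training data)) * 100, as a percentage."""
        return (model_size_gb / training_data_gb) * 100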

3 comments

psyklic 8 months ago
I believe this wouldn't be meaningful, since any size LLM can be trained on any amount of data.

You could measure how well it memorizes via prediction accuracy on the training set, but this wouldn't indicate whether it generalizes well.
speedgoose 8 months ago
LLaMa 3.1 was pre-trained on 15 trillion tokens, plus some millions more for fine-tuning. That's about 60 terabytes.

https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md

The most heavily quantised LLaMa 3.1 8B is about 3.4 GB.

So roughly a 0.005% compression rate, if you don't mind the intelligence of a heavily quantised 8B model.
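Plugging this comment's figures into the sketch above reproduces its estimate (60 TB is treated as 60,000 GB for simplicity; both figures come from the comment, not from any official measurement):

    # ~0.0057%, i.e. roughly the 0.005% quoted in the comment
    ratio = compression_ratio_percent(model_size_gb=3.4, training_data_gb=60_000)
    print(f"{ratio:.4f}%")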
schappim 8 months ago
OpenAI's GPT-3 model (175B parameters) has an archive size of about 350 GB, with training data estimated in the hundreds of terabytes, which likewise gives a very small ratio.
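The same calculation applied to this comment's GPT-3 figures. The 350 GB archive size is from the comment; 300 TB is only an assumed placeholder for "hundreds of terabytes", not a quoted figure:

    # ~0.12% under that assumed training-data size
    gpt3_ratio = compression_ratio_percent(model_size_gb=350, training_data_gb=300_000)
    print(f"{gpt3_ratio:.2f}%")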