Lossless LLM compression for efficient GPU inference via dynamic-length float

411 points · by CharlesW · 29 days ago

20 comments

jhj · 29 days ago
This is just a consequence of the fact that bfloat16 has a very high dynamic range which is not all used. People like hyperparameters that look like 0.01, not 10^10, even though the same fractional precision is available at each exponent, and if you multiplied everything in a network (hyperparameters, initialized weights, training data, etc.) by 10^6, things would still work more or less the same, since the upper range is hardly used (with the possible exception of some small number of special functions).

The typical entropy of bfloat16 values seen in weights (and activations) is about 10-12 bits (only 65-75% or so of the value range is used in practice). Sign and mantissa bits tend to be incompressible noise.

This has been exploited several times before in the context of both classical HPC and AI, with lossless compression work from Martin Burtscher's lab (https://userweb.cs.txstate.edu/~burtscher/), fpzip from LLNL (https://computing.llnl.gov/projects/fpzip), and my library dietgpu from 2021 (https://github.com/facebookresearch/dietgpu), which we used to speed up training on a large GPU cluster by about 10% in overall wall-clock time by losslessly compressing all data prior to send and decompressing upon receive (e.g., gradients, weights from backup, etc.); since it is lossless, it still computes exactly the same thing as before.

Also, rANS is more efficient and easier to implement in SIMD-like instruction sets than Huffman coding. It would also reduce the latency/throughput penalties of DFloat11 (since we have to decompress before we do the arithmetic).
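Not the paper's method, but a quick way to see the exponent-compressibility claim for yourself: a minimal sketch (assuming PyTorch and NumPy, with a randomly initialized tensor as a stand-in for real model weights) that measures the empirical entropy of the exponent-bearing and mantissa-bearing bytes of a bfloat16 tensor.

```python
import numpy as np
import torch

# Stand-in for real model weights: Kaiming-normal init, cast to bfloat16.
w = torch.empty(4096, 4096)
torch.nn.init.kaiming_normal_(w)
w = w.to(torch.bfloat16)

# Reinterpret the 16-bit patterns. bfloat16 layout: 1 sign bit, 8 exponent
# bits, 7 mantissa bits, so the high byte is sign + 7 exponent bits and the
# low byte is 1 exponent bit + 7 mantissa bits.
bits = w.view(torch.int16).numpy().view(np.uint16).ravel()
hi = (bits >> 8).astype(np.uint8)    # exponent-dominated byte
lo = (bits & 0xFF).astype(np.uint8)  # mantissa-dominated byte

def entropy_bits(x: np.ndarray) -> float:
    """Empirical Shannon entropy of a byte stream, in bits per byte."""
    counts = np.bincount(x, minlength=256).astype(np.float64)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"exponent byte: {entropy_bits(hi):.2f} / 8 bits")
print(f"mantissa byte: {entropy_bits(lo):.2f} / 8 bits")
# Expect the exponent-bearing byte to be well below 8 bits (compressible)
# and the mantissa byte to be close to 8 bits (incompressible noise).
```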
badmonster · 29 days ago
What stands out most is the practical implication: enabling lossless inference of a 405B-parameter model on a single node with 8×80GB GPUs is wild. That’s a huge unlock for research labs and startups alike that want to run frontier models without massive infrastructure costs.
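A rough back-of-envelope for why 8×80GB is the threshold (weights only, using the ~30% size reduction cited elsewhere in this thread; KV cache and activations are ignored, so real headroom is tighter):

```python
# Weights-only sizing for the 405B-on-8x80GB claim (illustrative numbers).
params = 405e9
bf16_gb = params * 2 / 1e9        # 2 bytes per parameter
df11_gb = bf16_gb * 0.70          # ~70% of the bf16 footprint (assumed)
node_gb = 8 * 80                  # 8 x 80 GB GPUs

print(f"bf16 weights: {bf16_gb:5.0f} GB vs node capacity {node_gb} GB")  # ~810 GB: does not fit
print(f"DF11 weights: {df11_gb:5.0f} GB")                                # ~567 GB: fits
```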
loufe · 29 days ago
I'm so grateful to live through such exciting times. I can open HN every couple of days to some exciting news about ML/transformer models. I really should read more into it, but does llama.cpp use a "custom kernel" per se with cuBLAS, or is it just making good use of the cuBLAS kernels?
Animats · 29 days ago
Once this weight format war settles down, hardware can be built to support it. Presumably you want matrix multiply hardware optimized for whatever weight format turns out to be reasonably optimal.
aseligman · 29 days ago
Some additional context: many real-world agent use cases struggle to balance quality, cost, and performance. This technique can help avoid the tradeoffs that quantization techniques introduce, including unpredictable results while you try to cost-optimize an agent. In some cases the cost savings from DFloat11 can be significant as you squeeze into more affordable GPUs.

* I work with xmad.ai
yjftsjthsd-h · 29 days ago
> Compared to a potential alternative of offloading parts of an uncompressed model to the CPU to meet memory constraints, DFloat11 achieves 1.9-38.8x higher throughput in token generation. With a fixed GPU memory budget, DFloat11 enables 5.3-13.17x longer context lengths than uncompressed models.

The context length alone probably makes it worthwhile even if your models fit in memory, but I'm curious whether it improves tokens/sec even when everything is on the GPU, since (in my very amateur understanding) LLMs tend to be constrained by memory bandwidth?
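For intuition on the bandwidth question, a toy calculation (all figures are assumptions for illustration, not from the paper): if decoding is memory-bandwidth bound, the tokens/sec ceiling is roughly bandwidth divided by bytes of weights read per token, so a smaller on-GPU footprint raises the ceiling only if decompression keeps pace.

```python
# Illustrative bandwidth-only model of single-stream (batch size 1) decoding.
# All numbers are assumptions; real throughput also depends on kernels,
# KV-cache reads, and (for DFloat11) decompression cost.
HBM_BANDWIDTH_GBPS = 3350   # assumed HBM bandwidth, GB/s
PARAMS = 70e9               # assumed 70B-parameter model

def tokens_per_sec_ceiling(bytes_per_param: float) -> float:
    """Upper bound if every weight byte is read once per generated token."""
    bytes_per_token = PARAMS * bytes_per_param
    return HBM_BANDWIDTH_GBPS * 1e9 / bytes_per_token

print(f"bf16 (2.0 B/param):           {tokens_per_sec_ceiling(2.0):6.1f} tok/s")
print(f"~70% footprint (1.4 B/param): {tokens_per_sec_ceiling(1.4):6.1f} tok/s")
# Reading ~30% fewer bytes raises the bandwidth ceiling proportionally,
# but only if decompression is cheaper than the bandwidth it saves.
```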
wills_forward · 29 days ago
So this could universally decrease the memory requirements of unquantized LLMs by 30%? Seems big if true.
thund · 29 days ago
Is this different from ZipNN? https://arxiv.org/pdf/2411.05239

I see it mentioned but can't tell whether this is based on it or different/better…
gitroom · 29 days ago
Pretty cool seeing how fast all this moves; it feels like every week there's a new trick or hardware upgrade. I def get nerd-sniped by these efficiency improvements, lol.
mountainriver · 29 days ago
Is it possible to run this on new models? It seems like the code is only for inference, unless I'm misunderstanding.
jsemrau · 29 days ago
I still hold the opinion that ternary instead of binary would lead to an even higher degree of compression.
firefoxd · 29 days ago
Someone has figured out how to compress images even further with LLMs. They've been promising to publish a white paper since last year: https://getproxyai.com/blog/this-image-is-4KB

/s I'll show myself out
luotuoshangdui · 29 days ago
Does it affect speed?
aazo11 · 29 days ago
This is a huge unlock for on-device inference. The download time of larger models makes local inference unusable for non-technical users.
iamnotagenius · 29 days ago
Interesting, but not exactly practical for a local LLM user, as 4-bit is how LLMs are run locally.
marksimi · 29 days ago
Time to (dynamically) float
hchja · 29 days ago
This is pretty useless in any case that doesn’t involve BFloat16 models
anticensor · 29 days ago
This is just a VBR mode for neural networks. Not quite useful when inference is already quite slow.
Havoc · 29 days ago
I'm guessing that by lossless they mean something other than what the word usually means in a compression context?

> achieving near information-optimal compression without any loss of precision

So perhaps lossless as in it didn't lose perplexity/benchmarks?

In my mind lossless means precisely zero bits lost along the way.
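For what it's worth, the strict bit-for-bit sense is easy to test: a generic round-trip check (zlib here as a stand-in codec rather than the paper's entropy coder, and random data as stand-in weights) looks like the sketch below.

```python
import zlib
import numpy as np

# Any weight payload, reinterpreted as raw bytes (random data as a stand-in).
rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000, dtype=np.float32)
raw = weights.tobytes()

compressed = zlib.compress(raw)       # stand-in for the actual entropy coder
restored = zlib.decompress(compressed)

# Lossless in the strict sense: bit-identical bytes back, so every
# downstream computation is exactly what it was before compression.
assert restored == raw
print(f"compressed/original size: {len(compressed) / len(raw):.3f}")
```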
ein0p · 29 days ago
Note that this is _way_ slower at the small batch sizes you'd need for interactive use. At batch size 1 this seems to run at 1/3 the speed of bf16 (so about 1/6 the speed of the fp8 you'd realistically be using), if figure 5 is to be believed. That is actually a pretty impressive feat in itself if you know anything about GPU kernel programming, but it is much slower nevertheless. For this to work at "wire speed" it'd need hardware support, which takes years. Their "baseline" elsewhere in the paper is CPU offloading, which is dog slow and can't be made fast due to the PCIe bottleneck.