
Iguana: fast SIMD-optimized decompression

162 points by l2dy, almost 2 years ago

8 comments

powturbo, almost 2 years ago

As a general-purpose compressor, iguana decompresses a lot slower than advertised when tested on a typical data compression corpus.

It requires AVX512-VBMI2, which is available only on Ice Lake / Tiger Lake / AMD Zen 4.

- Benchmark from encode.su experts: https://encode.su/threads/4041-Iguana-a-fast-vectorized-compressor?p=79634&viewfull=1#post79634
- Benchmark from the iguana developers: https://github.com/SnellerInc/sneller/tree/master/cmd/iguanabench

Silesia corpus / CPU: Xeon Gold 5320

zstd -b3     ratio 3.186    943.9 MB/s
zstd -b9     ratio 3.574   1015.8 MB/s
zstd -b18    ratio 3.967    910.6 MB/s
lz4 -b1      ratio 2.101   3493.8 MB/s
lz4 -b5      ratio 2.687   3323.5 MB/s
lz4 -b9      ratio 2.721   3381.5 MB/s
iguana -t=0  ratio 2.58    4450 MB/s
iguana -t=1  ratio 3.11    2260 MB/s

As you can see, iguana with entropy coding enabled (-t 1) has a similar compression ratio to zstd -3 but decompresses more than twice as fast. With entropy coding disabled (-t 0), it has a compression ratio roughly equivalent to lz4 -5 and decompresses about 33% faster.

zX41ZdbW, almost 2 years ago

It looks similar to LZSSE. We tried it in ClickHouse but then removed it:

https://github.com/ClickHouse/ClickHouse/pull/24424

Reasons:
- the decompression speed is slightly better than lz4's, but the compression speed is low;
- the code was not perfect, and the fuzzer found issues.

The LZSSE library was abandoned five years ago, but it has great blog posts to read: https://github.com/ConorStokes/LZSSE

Iguana looks promising, but the AVX-512 requirement is too restrictive. We need something that works on both x86 and ARM. Also, integrating Go assembly into other software is not easy. And the AGPL license makes it incompatible.

shoo, almost 2 years ago

Technically this looks really impressive. It's great to see a new compression approach that supports extremely high-performance decompression, with a high-performance open source implementation.

Re: winning adoption for new compression approaches, there's an interesting podcast interview [1] with Yann Collet (of lz4 / zstd).

Some factors Yann discussed that helped lz4 and zstd gain traction: permissive licensing (BSD); implementation in C, for the widest support for inclusion in other software ecosystems; open development and paying attention to issues raised by users of the software; and the new approach being able to beat an existing popular approach in some use cases with no downside. For example, if a hypothetical new compression approach has 200% faster decompression but a 10% worse compression ratio, there's friction in introducing it into an existing system, as the new approach might first require purchasing and deploying additional storage. Whereas a new approach that is 50% faster with exactly the same or a slightly better compression ratio can be adopted with much less friction.

It looks like the Iguana code has recently been relicensed under Apache instead of AGPL (which is used for the rest of the sneller repo), which could lower the barrier for other projects to consider adopting Iguana, although there are still dependencies from the Iguana code to AGPL-licensed files elsewhere in the sneller repo.

[1] https://corecursive.com/data-compression-yann-collet/

pella, almost 2 years ago

Thank you; promising work!

Question: how was zstd built for these tests? In other words, was the possibility of a 2-stage PGO+LTO optimization taken into account? (The Alpine zstd package claims to be "+30% faster on x86_64 than the default makefile" build [1].)

[1] From the APKBUILD:

"# 2-stage pgo+lto build (non-bootstrap), standard meson usage.
# note that with clang,
# llvm-profdata merge --output=output/somefilename(?) output/*.profraw
# is needed.
# believe it or not, this is +30% faster on x86_64 than the default makefile build (same params)..
# maybe needs more testing
# shellcheck disable=2046"

https://github.com/alpinelinux/aports/blob/master/main/zstd/APKBUILD

powturbo, almost 2 years ago

Most time-series and analytical databases are already using, or switching to, integer compression [1], with which you can compress/decompress several times faster (>100 GB/s; see TurboBitByte in [2]) than with general-purpose compressors.

[1] https://github.com/powturbo/TurboPFor-Integer-Compression

[2] https://github.com/powturbo/TurboPFor-Integer-Compression/issues/96
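
To make the idea concrete, here is a minimal Go sketch of the simplest member of that family: delta encoding followed by varint packing. This illustrates the general technique, not TurboPFor's actual format; the function names are mine, and it assumes non-decreasing input such as sorted timestamps.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// deltaVarintEncode stores each value as a varint-coded delta from its
// predecessor. Assumes values are non-decreasing (e.g. sorted timestamps),
// so deltas are small and pack into one or two bytes each.
func deltaVarintEncode(vals []uint64) []byte {
	buf := make([]byte, 0, len(vals))
	tmp := make([]byte, binary.MaxVarintLen64)
	var prev uint64
	for _, v := range vals {
		n := binary.PutUvarint(tmp, v-prev)
		buf = append(buf, tmp[:n]...)
		prev = v
	}
	return buf
}

// deltaVarintDecode reverses the encoding by accumulating the deltas.
func deltaVarintDecode(buf []byte) []uint64 {
	var out []uint64
	var prev uint64
	for len(buf) > 0 {
		d, n := binary.Uvarint(buf)
		prev += d
		out = append(out, prev)
		buf = buf[n:]
	}
	return out
}

func main() {
	ts := []uint64{1685700000, 1685700010, 1685700020, 1685700021, 1685700095}
	enc := deltaVarintEncode(ts)
	fmt.Println(len(enc), "bytes instead of", 8*len(ts)) // prints "9 bytes instead of 40"
	fmt.Println(deltaVarintDecode(enc))
}
```

Libraries like TurboPFor go much further (bit-packing whole blocks, SIMD kernels, exception handling for outliers), which is what makes throughputs like those quoted above possible.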

eigenrick, almost 2 years ago

Is this affected by Microsoft's patent on various rANS coding and decoding techniques? If not, how does it avoid the (rather vague) claims?

https://patents.google.com/patent/US11234023B2/en

aydyn, almost 2 years ago

Decompression speed looks good, but in my experience, once you get past a certain point (~X000 MB/s), performance gains become pretty marginal in real-world applications. I'd like to see compression speeds, and performance on AVX when AVX-512 is not available.

Alifatisk, almost 2 years ago
What do people use these for?