Meta: This post is yet another victim of the HN verbatim title rule despite the verbatim title making little sense as one of many headlines on a news page.<p>How is "Now using Zstandard instead of xz for package compression" followed by the minuscule low-contrast grey "(archlinux.org)" better than "Arch Linux now using Zstandard instead of xz for package compression" like it was when I originally read this a few hours ago?
Zstandard is awesome!<p>Early last year I was doing some research that involved repeatedly grepping through over a terabyte of data, most of which was tiny text files that I had to un-zip/7zip/rar/tar, and it was painful (maybe I needed a better laptop).<p>With Zstd I was able to re-compress the whole thing down to a few hundred gigs and use ripgrep, which solved the problem beautifully.<p>Out of curiosity I tested compression with (single-threaded) lz4 and found that multi-threaded zstd was pretty close. It was an unscientific and maybe unfair test, but I found it amazing that I could get lz4-ish compression speeds, at the cost of more CPU, but with much better compression ratios.<p>EDIT: Btw, I use arch :) - yes, on servers too.
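For anyone who wants to try something similar, here's a rough sketch of the kind of pipeline I mean (the paths, the .gz source format and the -T0 thread setting are purely illustrative; adapt them to whatever your archives actually are, and note that rg -z needs the zstd binary on PATH):<p><pre><code> # recompress gzip'd files to zstd using all cores
 find data/ -name '*.gz' -print0 | \
   xargs -0 -n1 sh -c 'zcat "$1" | zstd -q -T0 -o "${1%.gz}.zst"' _
 # ripgrep's --search-zip (-z) decompresses .zst files via zstd while searching
 rg -z "needle" data/
</code></pre>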
Apparently this is how to use Zstd with tar if anyone else was wondering:<p><pre><code> tar -I zstd -xvf archive.tar.zst
</code></pre>
<a href="https://stackoverflow.com/questions/45355277/how-can-i-decompress-an-archive-file-having-tar-zst" rel="nofollow">https://stackoverflow.com/questions/45355277/how-can-i-decom...</a><p>Hopefully another option gets added to tar to simplify this if this compression format becomes mainstream.
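For what it's worth, reasonably recent GNU tar (1.31 and later, if I recall correctly) already has a built-in flag, and on extraction it can usually detect the compressor by itself, so something like this should work:<p><pre><code> tar --zstd -xvf archive.tar.zst            # explicit
 tar -xvf archive.tar.zst                   # GNU tar usually auto-detects on extract
 tar --zstd -cvf archive.tar.zst somedir/   # creating an archive
</code></pre>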
Fedora 31 switched RPM to use zstd.
<a href="https://fedoraproject.org/wiki/Changes/Switch_RPMs_to_zstd_compression" rel="nofollow">https://fedoraproject.org/wiki/Changes/Switch_RPMs_to_zstd_c...</a><p>Package installations are quite a bit faster, and while I don't have any numbers, I expect that ISO image compose times are faster too, since composing performs an installation from RPMs to create each of the images.<p>Hopefully in the near future the squashfs image on those ISOs will use zstd as well, not only for the client-side speed boost during boot and install, but also because it cuts the CPU hit of decompression by a lot compared to lzma (more than 50%).
<a href="https://pagure.io/releng/issue/8581" rel="nofollow">https://pagure.io/releng/issue/8581</a>
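If you want to check what a given RPM actually uses, rpm can report the payload compressor; the package file name here is just a placeholder:<p><pre><code> rpm -qp --qf '%{PAYLOADCOMPRESSOR}\n' some-package.rpm   # prints e.g. zstd or xz
</code></pre>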
BTW, Fedora recently switched to zstd compression for its packages as well, for basically the same reasons: much better overall de/compression speed while keeping the result mostly the same size.<p>One more benefit of zstd compression that is not widely noted: a zstd file compressed with multiple threads is byte-for-byte identical to the same file compressed with a single thread. So you can use multi-threaded compression and still end up with the same file checksum, which is very important for package signing.<p>xz, on the other hand, which was used before, produces a <i>binary different file</i> depending on whether it was compressed with a single thread or multiple threads. This basically precludes multi-threaded compression at package build time, as the compressed file checksums would not match if the package was rebuilt with a different number of compression threads (the unpacked payload will always be the same, but the compressed xz file <i>will</i> be binary different).
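You can check the reproducibility claim yourself with something along these lines (file name and compression level are arbitrary):<p><pre><code> zstd -19 -T1 -o one-thread.zst payload.tar
 zstd -19 -T4 -o four-threads.zst payload.tar
 sha256sum one-thread.zst four-threads.zst   # if the above holds, the digests match
</code></pre>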
Zstd has an enormous advantage in compression and, especially, decompression speed. It often doesn't compress <i>quite</i> as much, but we don't care as much as we once did. We rebuild packages more than we once did.<p>This looks like a very good move. Debian should follow suit.
> Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup.<p>Impressive. As an AUR package maintainer, I am also wondering how the compression speed compares, though.
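A quick and dirty way to get a feel for it on one of your own packages (the levels below are only illustrative, not necessarily what the repos use):<p><pre><code> time xz -6 -T1 -c mypackage.tar > /dev/null
 time zstd -19 -T0 -c mypackage.tar > /dev/null
 # or let zstd benchmark a range of levels itself
 zstd -b3 -e19 mypackage.tar
</code></pre>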
I learned about this one the hard way when I went to update a really crufty (~1 year since its last update) Arch system I use infrequently the other day. I had failed to update my libarchive version prior to the change, and the package manager could not process the new format.<p>Luckily, updating libarchive manually with an intermediate version resolved my issue and everything proceeded fine.<p>This is a good change, but it's a reminder to pay attention to the Arch Linux news feed, because every now and then something important will change. The maintainers provided ample warning about this change there (and indeed I had updated my other systems in response), so we procrastinators really had no excuse :)
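For anyone hitting the same wall: the Arch Linux Archive keeps old package builds, so you can fetch an intermediate libarchive (still .xz-compressed, so an old pacman can read it) and install it directly. The version below is a placeholder; browse <a href="https://archive.archlinux.org/packages/l/libarchive/" rel="nofollow">https://archive.archlinux.org/packages/l/libarchive/</a> for one that predates the switch:<p><pre><code> pacman -U https://archive.archlinux.org/packages/l/libarchive/libarchive-3.x.y-1-x86_64.pkg.tar.xz
</code></pre>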
I used zstd for on-the-fly compression of game data for p2p multiplayer synchronization, and got 2-5x as much data (depending on the payload type) into each TCP packet. It's sad that it still hasn't seen much adoption in the industry.
I'd love to see Zstandard accepted in other places where the current option is only the venerable zlib. E.g., git packing, ssh -C. It's got more breadth and is better (ratio / cpu) than zlib at every point in the curve where zlib even participates.
I wish zstd supported seeking and partial decompression (<a href="https://github.com/facebook/zstd/issues/395#issuecomment-535875379" rel="nofollow">https://github.com/facebook/zstd/issues/395#issuecomment-535...</a>). We could then use it for hosting disk images as it would be a lot faster than xz which we currently use.
AUR users -- the default settings in /etc/makepkg.conf (delivered by the pacman package as of 5.2.1-1) are still set to xz; you must manually edit your local config:<p><pre><code> PKGEXT='.pkg.tar.zst'
</code></pre>
The largest package I always wait on, and a perfect fit for this scenario, is `google-cloud-sdk` (the re-compression is a killer -- `zoom` is another one in the AUR that's a beast), so I used it as a test on my laptop here in "real world conditions" (browsers running, music playing, etc.). It's an old Dell m4600 (i7-2760QM, rotating disk), nothing special. What matters: with default xz, compression takes twice as long and <i>appears</i> to drive the CPU harder. With xz my fans always kick in for a bit (normal behaviour); testing zst here did not kick the fans on the same way.<p>After warming up all my caches with a few pre-builds to try and keep it fair by reducing disk I/O, here's a sampling of the results:<p><pre><code> xz defaults - Size: 33649964
real 2m23.016s
user 1m49.340s
sys 0m35.132s
----
zst defaults - Size: 47521947
real 1m5.904s
user 0m30.971s
sys 0m34.021s
----
zst mpthread - Size: 47521114
real 1m3.943s
user 0m30.905s
sys 0m33.355s
</code></pre>
I can re-run them and get pretty consistent results (so that's good, we're "fair" to a degree); there's disk activity when building this package (seds, etc.) so it's not purely compression. It's a scenario I live through every time this AUR package (google-cloud-sdk) is refreshed and we get to upgrade -- trying to stick with real world, not synthetic, benchmarks. :)<p>I did not notice any appreciable difference from adding `--threads=0` to `COMPRESSZST=` (from the Arch wiki); both consistently gave me right around what you see above. This was compression-only testing, which is where my wait time is when upgrading these packages; a huge improvement with zst seen here...
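For reference, the knob I'm talking about lives in /etc/makepkg.conf and, per the wiki, looks roughly like this (exact flags may differ between pacman versions):<p><pre><code> COMPRESSZST=(zstd -c -z -q --threads=0 -)
</code></pre>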
I’ve used LZ4 and Snappy in production for compressing cache/MQ payloads. This is on a service serving billions of clicks a day. So far I'm very happy with the results. I know zstd requires more CPU than LZ4 or Snappy on average, but has anyone used it under heavy traffic loads on web services? I am really interested in trying it out, but at the same time I'm held back by "don't fix it if it ain't broken".
For those who want a TL;DR:
The trade-off is a ~0.8% increase in package size for a ~1300% increase in decompression speed.
Those numbers come from a sample of 542 packages.
The wiki is already up to date if you build your own packages or AUR packages and want to use multiple CPU cores for compression: <a href="https://wiki.archlinux.org/index.php/Makepkg#Utilizing_multiple_cores_on_compression" rel="nofollow">https://wiki.archlinux.org/index.php/Makepkg#Utilizing_multi...</a>
> If you nevertheless haven't updated libarchive since 2018, all hope is not lost! Binary builds of pacman-static are available from Eli Schwartz' personal repository, signed with their Trusted User keys, with which you can perform the update.<p>I am a little shocked that they bothered; Arch is rolling release and explicitly does not support partial upgrades (<a href="https://wiki.archlinux.org/index.php/System_maintenance#Partial_upgrades_are_unsupported" rel="nofollow">https://wiki.archlinux.org/index.php/System_maintenance#Part...</a>). So to hit this means that you didn't update a rather important library for over a year, which officially implies that you didn't update <i>at all</i> for over a year, which... is unlikely to be sensible.
Most of the published results show very little difference in decompression speed, positive or negative, so where is all this ~1300% coming from?<p>edit: Sorry, my fault: that was decompression RAM I was thinking about, not speed, although I was influenced by my own test, in which (without measuring) both xz and zstd seemed instant.
Quick shout out to LZFSE. Similar compression ratio to zlib but much faster.<p><a href="https://github.com/lzfse/lzfse" rel="nofollow">https://github.com/lzfse/lzfse</a>
I couldn't care less about decompression speed, because the bottleneck is the network, which means that I want my packages as small as possible. Smaller packages mean faster installation; at xz's decompression rate of 54 MB/s or faster, I couldn't care less about a few milliseconds saved during decompression. For me, this decision is dumbass stupid.