Xz format inadequate for long-term archiving (2017)

235 points, by pandalicious, about 7 years ago

29 comments

moltensyntax, about 7 years ago

This article again? In my opinion, this article is biased. The subtext here is that the author is claiming that his "lzip" format is superior. But xz was not chosen "blindly" as the article claims.

To me, most of the claims are arguable.

To say 3 levels of headers is "unsafe complexity"... I don't agree. Indirection is fundamental to design.

To say padding is "useless"... I don't understand why padding and byte-alignment are given so much vitriol. Look at how much padding the tar format has. And tar is a good example of how "useless padding" was used to extend the format to support larger files. So this supposed "flaw" has been in tar for dozens of years, with no disastrous effects at all.

The xz decision was not made "blindly". There was thought behind the decision.

And it's pure FUD to say "Xz implementations may choose what subset of the format they support. They may even choose to not support integrity checking at all. Safe interoperability among xz implementations is not guaranteed". You could say this about any software - "oh no, someone might make a bad implementation!" Format fragmentation is essentially a social problem more than a technical problem.

I'll leave it at this for now, but there's more I could write.
comex, about 7 years ago

Last time this came up on HN, I did some research, and discovered that *lzip* was quite non-robust in the face of data corruption: a single bit flip in the right place in an lzip archive could cause the decompressor to silently truncate the decompressed data, *without* reporting an error. Not only that, this vulnerability was a direct consequence of one of the features used to claim superiority to XZ: namely, the ability to append arbitrary "trailing data" to an lzip archive without invalidating it.

Like some other compressed formats, an lzip file is just a series of compressed blocks concatenated together, each block starting with a magic number and containing a certain amount of compressed data. There's no overall file header, nor any marker that a particular block is the last one. This structure has the advantage that you can simply concatenate two lzip files, and the result is a valid lzip file that decompresses to the concatenation of what the inputs decompress to.

Thus, when the decompressor has finished reading a block and sees there's more input data left in the file, there are two possibilities for what that data could contain. It could be another lzip block corresponding to additional compressed data. Or it could be *any other* random binary data, if the user is taking advantage of the "trailing data" feature, in which case the rest of the file should be silently ignored.

How do you tell the difference? Simply enough, by checking if the data starts with the 4-byte lzip magic number. If the magic number itself is corrupted in any way? Then the entire rest of the file is treated as "trailing data" and ignored. I hope the user notices their data is missing before they delete the compressed original…

It might be possible to identify an lzip block that has its magic number corrupted, e.g. by checking whether the trailing CRC is valid. However, at least at the time I discovered this, lzip's decompressor made no attempt to do so. It's possible the behavior has improved in later releases; I haven't checked.

But at least at the time this article was written: pot, meet kettle.
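
The concatenation property and the magic-number check described above can be demonstrated from the shell (a sketch, assuming lzip is installed; part1 and part2 are hypothetical input files):

    lzip -k part1 part2              # produces part1.lz and part2.lz
    cat part1.lz part2.lz > both.lz  # still a valid lzip file
    lzip -dc both.lz                 # emits part1's data followed by part2's
    head -c 4 both.lz | od -c        # each member starts with the magic bytes "LZIP"

A bit flip in a later member's magic bytes would make a naive decompressor treat everything from that point on as "trailing data" - the silent-truncation case described above.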
tedunangst, about 7 years ago

Are these concerns about error recovery outdated? If I want to recover a corrupted file, I find another copy. I don't fiddle with the internal length field to fix framing issues. Certainly, if I want to detect corruption, I use a sha256 of the entire file. If that fails, I don't waste time trying to find the bad bit.

To add to that, if you need parity to recover from errors, you need to calculate how much based on your storage medium durability and projected life span. It's not the file format's concern. The xz crc should be irrelevant.
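
For reference, the whole-file check described above is a one-liner with standard tools (filenames hypothetical):

    sha256sum backup.tar.xz > backup.tar.xz.sha256   # record the hash at archive time
    sha256sum -c backup.tar.xz.sha256                # later: reports OK or FAILED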
arundelo, about 7 years ago

I upvoted this because it seems to make some good points and I think the topic is interesting and important, but I can't understand why the "Then, why some free software projects use xz?" section does not mention xz's main selling point of being better than other commonly used alternatives at *compressing things to smaller sizes*.

https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/
carussell, about 7 years ago

(2016)

Previously discussed here on HN back then:

https://news.ycombinator.com/item?id=12768425

The author has made some minor revisions since then. Here are the main differences to the page compared to when it was first discussed here:

http://web.cvs.savannah.nongnu.org/viewvc/lzip/lzip/xz_inadequate.html?r1=1.3&r2=1.4

And here's the full page history:

http://web.cvs.savannah.nongnu.org/viewvc/lzip/lzip/xz_inadequate.html
cpburns2009, about 7 years ago

It may not be a good choice for long-term data storage, but I disagree that it should not be used for data sharing or software distribution. Different use cases have different needs. If you need long-term storage, it's better to avoid lossless compression that can break after minor corruption. You should also be storing parity/ECC data (I don't recall the subtle difference). If you only need short to moderate term storage, the best compression ratio is likely optimal. Keep a spare backup just in case.
jwilliams, about 7 years ago

I send a reasonable amount of data to Cloud Storage. It varies a lot: usually ~10GB/day, but it can be up to 1TB/day regularly.

xz can be *amazing*. It can also bite you.

I've had payloads that compress to 0.16 with gzip then compress to 0.016 with xz. Hurray! Then I've had payloads where xz compression is par, or worse. However, with "best or extreme" compression, xz can peg your CPU for much longer: gzip and bzip2 will take minutes while xz -9 is taking hours at 100% CPU.

As annoying as that is, getting an order of magnitude better in *many* circumstances is hard to give up.

My compromise is "xz -1". It usually delivers pretty good results, in reasonable time, with manageable CPU/memory usage.

FYI, the datasets are largely text-ish, usually in 250MB-1GB chunks. So talking JSON data, webpages, and the like.
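
That compromise might look like this in practice (a sketch; dataset/ is a hypothetical directory of text-ish chunks like the ones described):

    tar cf - dataset/ | xz -1 -T0 > dataset.tar.xz   # fast preset, all CPU cores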
freedomben, about 7 years ago

This is purely anecdotal and could easily be PEBKAC, but I created a bunch of xz backups years ago and had to access them a couple of years later after a disc died. To my panicked surprise, when trying to unpack them, I was informed that something was wrong (sorry, at this point I don't remember what it was). I never did get it working. From that point on I went back to gzip and have not had a problem since. Yes, xz packs efficiently, but a tight archive that doesn't inflate is worse than worthless to me.
eesmith, about 7 years ago

FWIW, PNG also "fails to protect the length of variable size fields". That is, it's possible to construct PNGs such that a 1-bit corruption gives an entirely different, and still valid, image.

When I last looked into this issue, it seemed that erasure codes, as with Parchive/par/par2, were the way to go. (As others have mentioned here.) I haven't tried it out as I haven't needed that level of robustness.
davidw, about 7 years ago

FWIW, xz is also a memory hog with the default settings. I inherited an embedded system that attempts to compress and send some logs using xz, and if they're big enough, it blows up because of memory exhaustion.
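
On a box like that, xz's memory use can be capped instead of left at the defaults (a sketch; logs.tar is hypothetical):

    xz --memlimit-compress=64MiB logs.tar   # scale compression settings down to fit
    xz -2 logs.tar                          # or simply choose a lower preset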
pmoriarty, about 7 years ago

When I use xz for archival purposes I always use par2 [1] to provide redundancy and recoverability in case of errors.

When I burn data (including xz archives) onto DVD for archival storage, I use dvdisaster [2] for the same purpose.

I've tested both by damaging archives and scratching DVDs, and these tools work great for recovery. The amount of redundancy (with a tradeoff for space) is also tunable for both.

[1] https://github.com/Parchive/par2cmdline

[2] http://dvdisaster.net/
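
With par2cmdline, that workflow looks roughly like this (the redundancy percentage is tunable with -r; filenames hypothetical):

    par2 create -r10 archive.tar.xz    # write .par2 recovery files, ~10% overhead
    par2 verify archive.tar.xz.par2    # check the archive against the recovery data
    par2 repair archive.tar.xz.par2    # reconstruct damaged blocks if enough parity survives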
doubledad222, about 7 years ago

Thank you for sharing this. I am in charge of archiving the family files - pictures, video, art projects, email. I want it available through the aging of standards and protected against the bitrot of aging hard drives. I'll be converting any xz archives I get into a better format.
ryao, about 7 years ago

Requiring userland software to worry about bitrot is a great way to ensure that it is not done well. It is better to let the filesystem worry about it, by using a filesystem that can deal with it.

This article is likely more relevant to tape archives than anything most people use today.
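
On a checksumming filesystem such as ZFS, for example, that looks like (assuming a pool named tank):

    zpool scrub tank        # read every block and verify its checksum
    zpool status -v tank    # list any files with unrecoverable errors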
nurettin, about 7 years ago

Too bad for Arch: https://www.archlinux.org/news/switching-to-xz-compression-for-new-packages/
londons_explore, about 7 years ago

The purpose of a compression format is not to provide error recovery or integrity verification.

The author seems to think the xz container file format should do that.

When you remove this requirement, nearly all his arguments become moot.
leni536, about 7 years ago

I fail to see why integrity checking is the file format's responsibility. Is this historical? Like when you just dd a tar file directly onto a tape and there is no filesystem? Anyway, it seems like it should be handled by the filesystem and network layers.

I can understand the concerns about versioning and fragmented extension implementations, though.
LinuxBender, about 7 years ago

Perhaps renice your job so that others don't complain about their noisy neighbor.

    renice 19 -p $$ > /dev/null 2>&1

then ...

Use tar + xz to save extra metadata about the file(s), even if it is only 1 file.

    tar cf - ~/test_files/* | xz -9ec -T0 > ./test.tar.xz

If that (or the extra options in tar for xattrs) is not enough, then create a checksum manifest, always sorted.

    sha256sum ~/test_files/* | sort -n > ~/test_files/.sha256

Then use the above command to compress it all into a .tar file that now contains your checksum manifest.
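
The matching check after a restore would be something like:

    sha256sum -c ~/test_files/.sha256   # verify every file against the manifest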
AndyKelley, about 7 years ago

I did some compression tests of the CI build of the master branch of zig:

    34M zig-linux-x86_64-0.2.0.cc35f085.tar.gz
    33M zig-linux-x86_64-0.2.0.cc35f085.tar.zst
    30M zig-linux-x86_64-0.2.0.cc35f085.tar.bz2
    24M zig-linux-x86_64-0.2.0.cc35f085.tar.lz
    23M zig-linux-x86_64-0.2.0.cc35f085.tar.xz

With maximum compression (the -9 switch), lzip wins but takes longer than xz:

    23725264 zig-linux-x86_64-0.2.0.cc35f085.tar.xz   63.05 seconds
    23627771 zig-linux-x86_64-0.2.0.cc35f085.tar.lz   83.42 seconds
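
The -9 runs above correspond to commands of roughly this shape (the exact invocation is an assumption; -k keeps the input file in both tools):

    time xz -9 -k zig-linux-x86_64-0.2.0.cc35f085.tar
    time lzip -9 -k zig-linux-x86_64-0.2.0.cc35f085.tar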
qwerty456127, about 7 years ago

Why do people use xz anyway? As for me, I just use tar.gz when I need to back up a piece of a Linux file system into a universally compatible archive, zip when I need to send some files to a non-geek, and 7z to back up a directory of plain data files for myself. And I dream of the world switching to 7z altogether, but that hardly seems possible, as nobody seems interested in adding tar-like unix-specific metadata support to it.
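
Spelled out, those three habits look something like this (paths hypothetical):

    tar czf backup.tar.gz /etc     # unix metadata preserved, universally readable
    zip -r photos.zip photos/      # for the non-geek recipient
    7z a -mx=9 data.7z data/       # plain data files, best ratio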
orbitur, about 7 years ago

Related: where can I find a thorough step-by-step method for maintaining the integrity of family photos/videos in backups on either Windows or macOS?
ebullientocelot, about 7 years ago

The [Koopman] cited throughout is my boss, Phil! At any rate, I'm sadly not surprised, and a little appalled, that xz doesn't store the version of the tool that did the compression.
Annatar, about 7 years ago

So long as xz(1) gets insane amounts of compression and there is no compressor which compresses better, people are going to keep preferring it.
vortico, about 7 years ago

What is the probability that a given byte will be corrupted on a hard disk in one year?

What is the probability of a complete HD failure in a year?
loeg, about 7 years ago

Use par2 to generate FEC for your archives and move on with your life.
sirsuki, about 7 years ago

So what's wrong with plain and simple

    tar c foo | gzip > foo.tar.gz

or

    tar c foo | bzip2 > foo.tar.bz2

Been using these for over 20 years now. Why is it so important to change things, especially, as this article points out, for the worse?!
nailer, about 7 years ago

To read the article:

    document.body.style['max-width'] = '550px';
    document.body.style.margin = '0 auto';
Lionsion, about 7 years ago

What are better file formats for long-term archiving? Were any of them designed specifically with that use case in mind?
microcolonel, about 7 years ago

Given that there is basically one standard implementation, and virtually nobody has ever had an issue with compatibility with a given file, I don't see how it is "inadequate". Sure, if it's inadequate now, it'll be inadequate if you read it in a decade, but not in any way which would prevent you from reading it.

If your storage fails, maybe you'll have a problem, but you'd have a problem anyway.

Sometimes I feel like genuine technical concerns are buried by the authors being jerks and blowing things way out of proportion. I, for one, tend to lose interest when I hear hyperbolic mudslinging.
kazinator, about 7 years ago

> *The xz format lacks a version number field. The only reliable way of knowing if a given version of a xz decompressor can decompress a given file is by trial and error.*

Wow ... that is inexcusably idiotic. Whoever designed that shouldn't be programming. Out of professional disdain, I pledge never to use this garbage.