
ZFS won’t save you: fancy filesystem fanatics need to get a clue about bit rot

19 points by gphreak · almost 8 years ago

13 comments

kabdib · almost 8 years ago

> While it is true that keeping a hash of a chunk of data will tell you if that data is damaged or not, the filesystem CRCs are an unnecessary and redundant waste of space ...

A few years ago, when I was on a game console team, a hardware engineer came to my desk and said, "Can you find out what's wrong with this disk drive?" It had come from a customer whose complaint was that games sometimes failed to download and game saves became unreadable.

I spent a fun afternoon tracking down what turned out to be a stuck-at-zero bit in that drive's cache. Just above the drive's ECC-it-to-death block storage was this flaky bit of RAM that was going totally unchecked. The console had a Merkle-tree-based filesystem and easily detected the failure, but without that additional checking the corruption would have been very subtle, most of the time.

Okay, so that's just one system out of millions, right? What are the chances? Well, at the scale of millions, pretty much any hole in data integrity is going to be found out and affect real, live customers at *some* not-insignificant rate. You really shouldn't be amazed at the number of single-bit memory errors happening on consumer hardware (from consoles to PCs, and I assume phones). You should expect these failures and determine in advance whether they are important to you and your customers.

Just asserting "CRCs are useless" puts a lot of trust in stuff that has real-world failure modes.
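As an illustration of the kind of check kabdib describes, here is a minimal Merkle-tree verification sketch in Python. The block size, hash choice, and odd-node handling are arbitrary assumptions for the sketch, not the console's actual design.

```python
# Minimal sketch of Merkle-tree integrity checking, in the spirit of the
# filesystem described above. Block size and hash algorithm are arbitrary
# choices for illustration.
import hashlib

BLOCK_SIZE = 4096

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf_hashes(data: bytes) -> list[bytes]:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [_h(b) for b in blocks]

def merkle_root(hashes: list[bytes]) -> bytes:
    while len(hashes) > 1:
        if len(hashes) % 2:                  # duplicate last hash on odd levels
            hashes.append(hashes[-1])
        hashes = [_h(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

# Write path: store the root alongside the data.
data = bytes(5 * BLOCK_SIZE)
trusted_root = merkle_root(leaf_hashes(data))

# Read path: a single stuck bit anywhere changes the root, and comparing
# leaf hashes pinpoints which block was damaged.
corrupted = bytearray(data)
corrupted[BLOCK_SIZE + 17] ^= 0x01           # flip one bit in block 1
assert merkle_root(leaf_hashes(bytes(corrupted))) != trusted_root
bad = [i for i, (a, b) in enumerate(zip(leaf_hashes(data),
                                        leaf_hashes(bytes(corrupted)))) if a != b]
print("corrupt block(s):", bad)              # -> corrupt block(s): [1]
```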
asveikau · almost 8 years ago

A few years ago I had a drive at home that was flipping bits, randomly corrupting my files. It inspired me to build a ZFS disk server and introduce redundancy into my home setup.

A bunch of this article reads as if this scenario, which I in fact hit, won't happen, drives do it better, etc. But it happens. It happened to me. The drive did not "magically fix itself"; it got worse over time. With ZFS, if it happens again, I can be told where it happened, exactly which files are affected, etc., and that's already better than what I got with that other disk, which didn't have ZFS.

Plus the ZFS tools like snapshotting, send/receive, and scrub being able to check integrity while the system is running... Those are great features.
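A crude userspace approximation of what asveikau gets from a scrub can be sketched in a few lines of Python: record known-good hashes once, then re-walk the tree and name exactly which files no longer match. This is illustrative only; the manifest format is made up, and a real checksumming filesystem does this per block, online, and can self-heal from redundant copies.

```python
# Rough userspace approximation of what a checksumming filesystem reports:
# walk a tree, compare each file's SHA-256 against a previously saved
# manifest, and name exactly which files no longer match. (A sketch only;
# it cannot distinguish corruption from a legitimate edit.)
import hashlib, json, pathlib, sys

def file_hash(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict[str, str]:
    return {str(p): file_hash(p)
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

def scrub(root: str, manifest_file: str) -> None:
    with open(manifest_file) as f:
        old = json.load(f)
    for path, digest in build_manifest(root).items():
        if path in old and old[path] != digest:
            print("DAMAGED:", path)

if __name__ == "__main__":
    # usage: python scrub.py save|check <dir> <manifest.json>
    mode, root, mf = sys.argv[1:4]
    if mode == "save":
        with open(mf, "w") as f:
            json.dump(build_manifest(root), f)
    else:
        scrub(root, mf)
```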
Mindless2112 · almost 8 years ago

As someone who has lost some files to a silently malfunctioning hard disk in the past, I think I'll stick with ZFS. Checksumming, RAID-Z, and periodic scrubbing would have saved my files. Even having backups did not; after all, what good is a bit-for-bit copy of a corrupted file?

(On a side note, ZFS, at least OpenZFS, doesn't support any *CRC* algorithms for use as its checksum.)
rgbrenner · almost 8 years ago

For an article with that tone, you would think the author would have more experience. It's filled with flawed, uninformed, or inexperienced thinking.

From the idea that SMART reliably detects hard drive failures, to dismissing data protection for no reason other than that it sounds unlikely to the author (which in several cases I know personally to be false, because I've experienced those failures).

ZFS is a very well designed filesystem. Things weren't added haphazardly or because they sounded cool. The author would do well to try to understand why those protections were added.
DiabloD3 · almost 8 years ago

This entire article can be summarized as follows: RAID is not a replacement for backups.

Sun/Oracle, and a lot of popular third-party documentation, have said as much very openly, and commands like zfs send/recv exist to easily automate ZFS cloning (to back up from one ZFS filesystem to another, for example, if you choose to do it that way).

I suspect whoever wrote this missed the boat on why ZFS works.
notacoward · almost 8 years ago

Totally off base, on several points. Any kind of checksum on the disk only protects what gets to the disk. Filesystem-level CRCs can protect the *entire data path*. If you have a defect in your RAID card or HBA, or anywhere in the software stack below the filesystem, on-disk CRCs will happily "validate" the already-corrupted data, while filesystem-level CRCs are likely to detect the corruption. The author dismisses this as a "remotely likely scenario," but I've seen it happen for real many times. Maybe that's because I have about 3.5x as many years of experience as the author, across what's probably thousands of times as many machines or drives (I've worked on some big systems).

The same "I've never seen it, so it's not real" fallacy appears again in the discussion of RAID 5. He says that losing a second drive during a rebuild is "statistically very unlikely," but that's not so. Not only have I seen it many times, but the simple math of disk capacities and interface speeds shows that it's not really all that unlikely. I've seen *RAID 6* fail because of overlapping rebuild times, leading people to push for more powerful erasure-coding schemes. Over the lifetime of even a medium-sized system, concurrent failures on RAID 5 are likely enough to justify using something stronger.

I was one of the earliest and most outspoken critics of ZFS hype and FUD when it came out. It was and is no panacea, but that doesn't justify more FUD in the other direction to sell backup products or services.
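To make the "simple math" concrete, here is a back-of-the-envelope sketch. The drive count, capacity, and unrecoverable-read-error (URE) rate below are assumed round numbers for illustration, not figures from the comment.

```python
# Back-of-the-envelope math for a RAID 5 rebuild, with assumed round
# numbers (six 10 TB drives, a vendor-spec URE rate of 1 error per 1e14
# bits read). Rebuilding after one failure must read every surviving
# drive in full; one unrecoverable read error during that pass loses data.
drives = 6                        # assumed array size
capacity_bits = 10e12 * 8         # 10 TB per drive, in bits
ure_rate = 1e-14                  # assumed errors per bit read

bits_read = (drives - 1) * capacity_bits
p_clean = (1 - ure_rate) ** bits_read    # chance the whole rebuild reads clean
print(f"bits read during rebuild: {bits_read:.2e}")
print(f"probability of at least one URE: {1 - p_clean:.1%}")
# Under these assumptions the rebuild hits at least one URE about 98% of
# the time: hardly "statistically very unlikely."
```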
Veratyr · almost 8 years ago

While he's right that it's not as big an issue as ZFS fanatics make it out to be, it *is* a real issue, and they're not just pulling it out of their asses. There are a number of studies that actually measured the error rate, some of the bigger ones done by CERN [0], NetApp [1], and IA (I think there's meant to be a talk or something to go with this one) [2].

ZFS certainly isn't a magic wand you should wave at anything and everything, and it doesn't replace backups, but it does make the chances of something going wrong undetected much smaller, and even though the chances are small to begin with, there are times when you just can't accept them at all.

[0]: https://www.nsc.liu.se/lcsc2007/presentations/LCSC_2007-kelemen.pdf
[1]: https://www.usenix.org/legacy/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/index.html
[2]: http://storageconference.us/2006/Presentations/39rWFlagg.pdf
ATsch · almost 8 years ago

> Snapshots may help, but they depend on the damage being caught before the snapshot of the good data is removed. If you save something and come back six months later and find it's damaged, your snapshots might just contain a few months with the damaged file and the good copy was lost a long time ago.

The author seems to misunderstand the purpose of snapshots. As frequently pointed out [1], snapshots are not in fact backups and should not be used for longer-term storage.

Also, the same argument can be applied to backups: "Backups may help, but they depend on the damage being caught before the backup of the good data is removed. If you save something and come back six months later and find it's damaged, your backups might just contain a few months with the damaged file and the good copy was lost a long time ago."

[1]: http://www.cobaltiron.com/2014/01/06/blog-snapshots-are-not-backups/
OpenZFSonLinux · almost 8 years ago

This blog post was deleted hours after I posted the following comment rebutting most of what was said:

I don't know much about btrfs, so I'll stick to ZFS-related comments. ZFS does not use CRC; by default it uses the fletcher4 checksum. Fletcher's checksum is designed to approach CRC properties without the computational overhead usually associated with CRC.

Without a checksum, there is no way to tell if the data you read back is different from what you wrote down. As you said, corruption can happen for a variety of reasons, due to bugs or hardware failure anywhere in the storage stack. Just like other filesystems, ZFS won't catch every type of corruption, especially on the write-to-disk side. However, ZFS will catch bit rot and a host of other corruptions, while non-checksumming filesystems will just pass the corrupted data back to the application. Hard drives don't do it better; they have no idea if they've bit-rotted over time, and there are many other components that may and do corrupt data. It's not as rare as you think. The longer you hold data and the more data you have, the higher the chance you will see corruption at some point.

I want to do my best to avoid corrupting data and then giving it back to my users, so I would like to know if my data has been corrupted (not to mention I'd like it to self-heal as well, which is what ZFS will do if there is a good copy available). If you care about your data, use a checksumming filesystem, period. Ideally, a checksumming filesystem that doesn't keep the checksum next to the data. A typical checksum is less than 0.14 KB, while the block it's protecting is 128 KB by default. I'll take that 0.1% "waste of space" to detect corruption all day, any day. Now remember that ZFS can also do inline compression, which will easily save you 3-50% of storage space (depending on the data you're storing), and calling a checksum a "waste of space" becomes even more laughable.

I do want to say that I wholeheartedly agree with "Nothing replaces backups," no matter what filesystem you're using. Backing up between two OpenZFS pools on machines in different physical locations is super easy using ZFS snapshotting and send/receive functionality.
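For the curious, here is a minimal Python sketch of the Fletcher-4-style checksum mentioned above: the data is consumed as 32-bit words feeding four running 64-bit accumulators. This follows the algorithm's published shape; it is not the OpenZFS implementation, which is vectorized and handles per-platform endianness.

```python
# Illustrative sketch of a Fletcher-4-style checksum: little-endian
# 32-bit words feed four 64-bit accumulators that wrap on overflow.
# Just the shape of the algorithm, not the OpenZFS source.
import struct

MASK64 = (1 << 64) - 1

def fletcher4(data: bytes) -> tuple[int, int, int, int]:
    a = b = c = d = 0
    # pad to a multiple of 4 bytes for the word view (a sketch-only assumption)
    if len(data) % 4:
        data += b"\x00" * (4 - len(data) % 4)
    for (w,) in struct.iter_unpack("<I", data):
        a = (a + w) & MASK64
        b = (b + a) & MASK64
        c = (c + b) & MASK64
        d = (d + c) & MASK64
    return a, b, c, d

# A single flipped bit changes the checksum:
block = bytes(128 * 1024)                    # a 128 KB block of zeros
damaged = bytearray(block)
damaged[12345] ^= 0x40
assert fletcher4(block) != fletcher4(bytes(damaged))
# Four 64-bit words = 32 bytes of checksum protecting a 128 KB block.
```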
zlynx · almost 8 years ago

He missed all the history of ZFS too. Sun had actual customers with bit rot. Even though they were running the highest grade of server hardware Sun provided, they had invisible data errors that were only noticed when the files were used, and analysis showed ECC passing bit errors.

ZFS was created to solve actual business problems.
random_comment · almost 8 years ago

This entire article can be summarised as "guy who has never used ZFS and has no idea whatsoever about how it works writes a critique that exposes his ignorance publicly."

Here's a quote:

- *"ZFS has CRCs for data integrity.*

*A certain category of people are terrified of the techno-bogeyman named "bit rot." These people think that a movie file not playing back or a picture getting mangled is caused by data on hard drives "rotting" over time without any warning. The magical remedy they use to combat this today is the holy CRC, or "cyclic redundancy check." It's a certain family of hash algorithms that produce a magic number that will always be the same if the data used to generate it is the same every time.*

*This is, by far, the number one pain-in-the-ass statement out of the classic ZFS fanboy's mouth..."*

Meanwhile, in reality...

ZFS does not use CRCs for checksums.

It's very hard to take someone's view seriously when they are making mistakes at this level.

ZFS allows a range of checksum algorithms, including SHA-256, and you can even specify per dataset the strength of checksum you want.

- *"Hard drives already do it better"*

No, they don't, or Oracle/Sun/OpenZFS developers wouldn't have spent time and money making it.

It makes a bit of a difference when your disk says "whoops, sorry, CRC fail, that block's gone" and it was holding your whole filesystem together. Or when a power surge or bad component fries the whole drive at once.

ZFS allows optional duplication of metadata or data blocks automatically, as well as multiple levels of RAID equivalency for automatic, transparent rebuilding of data/metadata in the presence of multiple unreliable or failed devices. Hard drives... don't do that.

Even ZFS running on a single disk can automatically keep two (or more) copies on disk of whatever datasets you think are especially important; just set the flag. Regular hard drives don't offer that.

- *"What about the very unlikely scenario where several bits flip in a specific way that thwarts the hard drive's ECC? This is the only scenario where the hard drive would lose data silently, therefore it's also the only bit rot scenario that ZFS CRCs can help with."*

Well, that and entire disk failures.

And power failures leading to inconsistency on the drive.

And cable faults leading to the wrong data being sent to the drive to be written.

And drive firmware bugs.

And faulty cache memory or faulty controllers on the hard drive.

And poorly connected drives with intermittent glitches and timeouts in communication.

You get the idea.

I could also point out that ZFS allows you to back up quickly and precisely (via snapshots and incremental snapshot diffs).

It allows you to detect errors as they appear (via scrubs) rather than find out years later when your photos are filled with vomit-coloured blocks.

It also tells you every time it opens a file if it has found an error and corrected it in the background for you, thank god! This "passive warning" feature alone lets you quickly realise you have a bad disk or cable so you can do something about it. Consider the same situation with a hard drive over a period of years...

ZFS is a copy-on-write filesystem, so if something naughty happens, like a power cut during an update to a file, your original data is still there. Unlike with a hard disk (or RAID).

It's trivial to set up automatic snapshots, which, as well as allowing known-point-in-time recovery, are an exceptionally effective way to prevent viruses, user errors, etc. from wrecking your data. You can always wind back the clock.

Where is the author losing his data (that he knows of, and in his very limited experience)? *"All of my data loss tends to come from poorly typed 'rm' commands."* ...So, exactly the kind of situation that ZFS snapshots allow instant, certain, trouble-free recovery from in a matter of seconds? (Either by rolling back the filesystem, or by conveniently "dipping into" past snapshots as though they were present-day directories, as needed.)

Anyway, I do hope Mr/Ms nctritech learns to read the beginner's guide for technologies they critique in future, and maybe even tries them once or twice, before writing a critique.

What next?

*"Why even use C? Everything you can do in C, you can do in PHP anyway!"*
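To illustrate the copy-on-write point above, here is a toy sketch in Python: an update writes a new block and only then swings the live pointer, so an interrupted write leaves the old version intact. Purely illustrative of the idea; the class and its layout are invented and bear no relation to ZFS's on-disk format.

```python
# Toy copy-on-write store: an update writes a new block and only then
# swings the pointer. If the process dies before the pointer swing, the
# old block is still the live one; nothing is half-overwritten.
# Purely illustrative of the CoW idea, not ZFS's on-disk layout.

class CowStore:
    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}   # block id -> contents
        self.live: dict[str, int] = {}       # file name -> live block id
        self._next = 0

    def write(self, name: str, data: bytes) -> None:
        new_id = self._next
        self._next += 1
        self.blocks[new_id] = data           # step 1: write the new block
        # (a crash here leaves the old version untouched and still live)
        self.live[name] = new_id             # step 2: swing the pointer

    def read(self, name: str) -> bytes:
        return self.blocks[self.live[name]]

store = CowStore()
store.write("save.dat", b"version 1")
store.write("save.dat", b"version 2")        # old block 0 still exists
print(store.read("save.dat"))                # b'version 2'
print(store.blocks[0])                       # b'version 1', the basis of a snapshot
```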
Jaepa · almost 8 years ago

I think one of the universal truths in tech is that those for a technology and those annoyed by it both kind of miss the point.
X86BSD · almost 8 years ago

I think what bothers me most is that this person owns a computer-related business. He is actively endangering people's data out of willful ignorance. It's highly unethical.