The article may raise some generally interesting points, but the logic seems completely flawed.

A couple of facts/opinions as reported by the article:

- no big distribution uses btrfs as a default, although some are heading in that direction
- normal users have no exposure to btrfs, because it's not the default
- bugs are still being found in btrfs
- Oracle should not release an fsck tool, because it's not tested

This does not compute. It makes no sense whatsoever. Software in early versions has bugs; that's a universal fact. No ordinary user jumps to new software when established alternatives exist, so of course it's not a default. But how do they imagine the testing is going to happen if the only way to use the file system is to compile it yourself from some obscure branch? Who is going to be happy to test a file system whose early repair tool exists but is hidden in some obscure location? Where do the field testers and the community come from if access to the latest version is not made as easy as possible?

I can't grasp what he is really complaining about... That the option to use a not yet thoroughly tested file system (even though he starts by saying that testing won't ever find all the bugs anyway) is exposed to users? That is exactly the field testing he's so keen to see happening, and exactly what will make the file system more stable. If he doesn't want to use experimental features, he should not use experimental features!

He also seems to have some bias about what a complete file system provides: "Btrfs isn't even fully developed yet, because the developers are still working to integrate RAID-5 support, more efficient compression algorithms and various other improvements". Well, once developers are down to integrating compression as an out-of-the-box option and building in RAID support, I'd say the file system is complete. These are very interesting, but entirely optional, features.

I couldn't disagree more with this article.