The article is a great example of all the somewhat surprising peculiarities in ZFS. For example, the conversion keeps the old stripe width and block size, meaning throughput of existing data won't improve. So it's not quite a full re-balance.

Other fun things are the flexible block sizes and their relation to the size you're writing and compression ... Chris Siebenmann has written quite a bit about it (https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSLogicalVsPhysicalBlockSizes).

One thing I'm particularly interested in is whether this new patch offers a way to decrease fragmentation on existing, loaded pools (allocation behavior changes once they get too full, and this patch would for the first time let us avoid building a completely new pool).

[edit] The PR is here: https://github.com/openzfs/zfs/pull/12225

I also recommend reading the discussions in the ZFS repository - they are quite interesting and reveal a lot of the reasoning behind the filesystem. Recommended even to people who don't write filesystems for a living.
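As far as I know the expansion itself doesn't rewrite old data, so the usual workaround is to rewrite it yourself: a send/receive into a fresh dataset reallocates every block at the new stripe width. A rough sketch (dataset names are made up, and this assumes you have enough free space for a second copy):

```python
import subprocess

# Hypothetical pool/dataset names - adjust for your own setup.
src, dst = "tank/data", "tank/data-rewritten"

# Snapshot the source, then send/receive it into a new dataset.
# Every block gets reallocated, so the copy is written with the
# post-expansion stripe width.
subprocess.run(["zfs", "snapshot", f"{src}@rebalance"], check=True)
subprocess.run(f"zfs send {src}@rebalance | zfs receive {dst}",
               shell=True, check=True)
# Once the copy is verified, destroy the old dataset and
# `zfs rename` the new one into its place.
```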
I'm starting to get concerned about the ZFS issue list; there are a ton of gotchas hiding in OpenZFS that can cause data loss:

* Swap on a ZVOL (data loss)

* Hard-locking when removing the ZIL (this has caused data loss for us)
I prefer just to have mirrors, but it's cool that this is slowly coming; some people seem to really want this feature.

ZFS has been amazing to me, I have zero complaints.

I just wish it hadn't taken so long to come to root (/) on Linux. Even today you have to do a lot of work unless you want to use the new support in Ubuntu.

This license snafu is so terrible - open-source licenses excluding each other. Crazy. The world would have been a better place if Linux had incorporated ZFS long ago. (And no, we don't need yet another legal discussion; my point is just that it's sad.)
I was disappointed by the lack of RAIDZ2 resize when I built my ZFS fileserver, but it turns out that my data growth is slower than the growth in size of HDDs, so I just replace the drives every 4 or 5 years and copy the data over. Now drives are so big that I might just go with mirroring instead of RAIDZ2.
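For anyone who wants to grow in place rather than copy to a new pool: with autoexpand set, the pool picks up the extra space once every member has been swapped for a bigger drive. A rough sketch with made-up pool/device names:

```python
import subprocess

# Hypothetical pool and device names - swap in your own.
pool, old_disk, new_disk = "tank", "/dev/sda", "/dev/sdx"

# Let the pool grow automatically once all members are larger.
subprocess.run(["zpool", "set", "autoexpand=on", pool], check=True)

# Replace one drive at a time; wait for the resilver to finish
# (check `zpool status`) before swapping the next one.
subprocess.run(["zpool", "replace", pool, old_disk, new_disk], check=True)
```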
I quite like XFS + LVM. LVM now has a high-level wrapper for kernel RAID and dm-integrity.

For precious data I can't bear to go less than RAID 6 (equivalent), and I require ECC RAM. I've had several events where, after one drive failed, I discovered minor errors on a second drive...

Currently kernel RAID can't use RAID 6 to decide a majority win if a bit error is discovered. dm-integrity seems to cost a fair bit of performance (relative to ZFS). So I like either plain LVM + RAID 6, or adding the integrity option if I want to defend against bit rot.

It's simple to operate, loads of experts are available if it breaks, it's well tested, and it's easy to expand or even drastically reshape.

It lacks "send", and snapshot performance can be worse (try thin pools). You can easily add SSD caching, but the performance improvement is possibly not as high as with alternatives.

Works well enough for me...
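For anyone curious, the wrapper I mean is the --raidintegrity option on LVM RAID LVs. A rough sketch, with a made-up VG name and sizes (needs a reasonably recent LVM, and RAID 6 with 3 data stripes needs at least five PVs):

```python
import subprocess

# Hypothetical volume group, LV name and size - adjust to taste.
vg, lv, size = "vg0", "data", "1T"

# RAID 6 with 3 data stripes (5 devices total), with dm-integrity
# added per leg to detect bit rot on read.
subprocess.run(
    ["lvcreate", "--type", "raid6", "-i", "3", "--raidintegrity", "y",
     "-L", size, "-n", lv, vg],
    check=True,
)
# Then put XFS on top: mkfs.xfs /dev/vg0/data
```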
Just upgraded my home NAS; I had to swap all 8 drives, which took 7 days... Not to mention it doubled the size of the array - I would have been much happier with an incremental increase.
This might sound like a troll comment, but it's coming from someone with almost zero experience with RAID. What is the purpose of ZFS in 2021 if we have hardware RAID and Linux software RAID? Btrfs does RAID too. Why would people choose ZFS in 2021 if Oracle and the open-source community maintain two competing ZFS implementations? Are they interoperable?
I'll believe it when I see it; why anyone uses Btrfs (UnRaid or any other form of software RAID that *isn't* ZFS) is still beyond me. At least when we're not talking SSDs ;)

ZFS is incredible - curious to mess around with these new features!
> Data newly written to the ten-disk RAIDz2 has a nominal storage efficiency of 80 percent—eight of every ten sectors are data—but the old expanded data is still written in six-wide stripes, so it still has the old 67 percent storage efficiency.

This makes the feature quite 'meh'. The whole goal is capacity expansion, and you won't be able to use the new capacity unless you rewrite all existing data, as I understand it.

This feature is mostly relevant for home enthusiasts, and I don't think it really brings the behavior this user group wants and needs.

> Undergoing a live reshaping can be pretty painful, especially on nearly full arrays; it's entirely possible that such a task might require a week or more, with array performance limited to a quarter or less of normal the entire time.

Not an issue for home users, as they often don't have large workloads, so for them the process is fast and convenient. Even if it took two days.
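To put rough numbers on the quoted figures: RAIDZ2 spends two sectors of every stripe on parity, so efficiency is just (width - 2) / width per stripe width. A quick (simplified) check:

```python
# RAIDZ2 spends 2 sectors of every stripe on parity; the rest hold data.
def raidz2_efficiency(stripe_width: int) -> float:
    return (stripe_width - 2) / stripe_width

print(f"old 6-wide stripes:  {raidz2_efficiency(6):.0%}")   # ~67%
print(f"new 10-wide stripes: {raidz2_efficiency(10):.0%}")  # 80%
```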