Has there been any progress on the ZFS-on-Linux/Linus disagreement front since this article?

https://arstechnica.com/gadgets/2020/01/linus-torvalds-zfs-statements-arent-right-heres-the-straight-dope/
Zstd compression with configurable levels is really interesting: you could write every block first at a level comparable to lz4 for very fast performance, and if a block has not been rewritten for some time, recompress it at a higher level that gives a better ratio with comparable decompression performance.

So cold data (cold write, cold/hot read) would take less and less space over time while keeping the same read performance.
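The per-dataset levels already exist; the recompress-on-age part is my idea, not a built-in feature, and would have to be done by hand (e.g. by rewriting cold files into a higher-level dataset), since changing the property only affects newly written blocks. A sketch using the zstd-N property values documented for 2.0:

    # Fast compression for hot data:
    zfs set compression=zstd-1 tank/hot

    # Stronger compression for an archive dataset; anything rewritten into
    # it is recompressed at this level (existing blocks are left as-is):
    zfs set compression=zstd-19 tank/archive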
Sadly, dRAID (parity-declustered RAIDZ) just missed the cut-off for 2.0, but it looks like it will land in 2.1 (a rough sketch of the syntax follows the links):

* https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html

* https://www.youtube.com/watch?v=jdXOtEF6Fh0
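Going by the howto, pool creation looks roughly like this; the suffix order and layout here are my reading of the docs, untested, and the 2.1 syntax may still change:

    # draid[<parity>][:<data>d][:<children>c][:<spares>s]
    # 11 disks: double parity, 4 data disks per group, 1 distributed spare.
    zpool create tank draid2:4d:11c:1s /dev/sd[a-k]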
This is huge! And very exciting :D

One thing I am wondering about is this:

> Redacted zfs send/receive - Redacted streams allow users to send subsets of their data to a target system. This allows users to save space by not replicating unimportant data within a given dataset or to selectively exclude sensitive information. #7958

Let's say I have a dataset tank/music-video-project-2020-12 or something, it is around 40 GB, and I want to send a snapshot of it to a remote machine over an unreliable connection. Can I use the redacted send/recv functionality to send the dataset a chunk at a time and end up with a perfect copy that I can then send incremental snapshots to? Something like the sketch below.
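What I'm imagining, going by my reading of the zfs-redact/zfs-send man pages (the names and the flow here are my guesses, not something I've tested):

    # Clone the snapshot, delete everything outside "chunk 1" in the clone,
    # then snapshot the clone to use as a redaction snapshot:
    zfs clone tank/music-video-project-2020-12@full tank/chunk1-view
    zfs snapshot tank/chunk1-view@redact

    # Record the redaction in a bookmark and send only what remains:
    zfs redact tank/music-video-project-2020-12@full chunk1 tank/chunk1-view@redact
    zfs send --redact chunk1 tank/music-video-project-2020-12@full | ssh remote zfs recv backup/music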
I'd love to get rid of my FreeNAS VM and run ZFS directly on my Linux desktop, but having to mess with the kernel has kept me from attempting it so far. Maybe I'm worrying about nothing.

btrfs seems like the main alternative if you want native kernel support, but when I checked a couple of years ago there seemed to be a lot of concerns about its stability. Is that still the case?
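From what I've read (not first-hand experience), Ubuntu at least ships the ZFS module with its kernel packages, so there's no kernel-patching involved; it's reportedly just:

    # Installs the userland tools; the kernel module is already there:
    sudo apt install zfsutils-linux

On distributions without a prebuilt module, my understanding is that zfs-dkms builds it against the running kernel automatically.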
Finally, this means we have a way to share "real" filesystems between FreeBSD and Linux. The only other filesystems you can open without issues on both are FAT and NTFS (through NTFS-3G), both of which are less than ideal for data you care about.
Slightly off topic, but it seems like GitHub can't/won't display the user profile page for one of the OpenZFS developers:

https://github.com/behlendorf

For me, that gives a unicorn error page 100% of the time (tried repeatedly over several minutes) instead of showing the developer's profile.

Anyone else seeing that?
Congratulations! It's great to see the code unified across the two key ZFS platforms, and the continued addition of useful features, especially around at-rest encryption.

Many thanks to the various OpenZFS contributors.
How's the memory consumption of ZFS without deduplication these days? I've got a couple of 4 TB drives connected to a single-board ARM computer with 2 GB of RAM. I used to use btrfs, but switched to XFS after I accidentally filled up a drive and couldn't recover the filesystem.
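I assume that if I tried ZFS there I'd want to cap the ARC. As I understand it that's the zfs_arc_max module parameter; the 512 MiB figure below is just my guess at a sane budget for a 2 GB board:

    # Cap the ARC at 512 MiB (value in bytes):
    echo 536870912 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

    # Or persistently, via /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_arc_max=536870912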
I'm looking at setting up my first ZFS pool ("zpool"?) in a few weeks, on Linux (Ubuntu 20.04). Will I be using OpenZFS or something else? Roughly what I'm picturing is below.

(Sorry if this is noise; I'm just trying to get an idea of how relevant this 2.0 release is to me.)
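My rough plan, where the pool name and device names are placeholders and the incantation is just my guess from the docs:

    # Mirrored pool across two disks, addressed by stable IDs:
    sudo zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2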
Just built a FreeNAS system over the past couple of weeks and finished doing burn-in tests on my hard drives. I wonder if I should wait and see how to install OpenZFS 2.0.0 before I create my storage config.
Side note: they really should have, in big bold letters, "DO NOT ENABLE DEDUPLICATION UNLESS YOU HAVE A TON OF RAM!" in their README. Enabling it was a huge mistake on my part; the RAM requirements are VERY high for good performance.

I realized how bad the performance was when it took about 2 hours to delete 1000 files.
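If you've already turned it on, you can at least see how large the dedup table has grown, which, as I understand it, is what has to fit in RAM for operations like those deletes to be fast:

    # Print dedup table (DDT) statistics for the pool:
    zpool status -D tank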