OpenZFS deduplication is good now and you shouldn't use it

8 points by tjwds, 7 months ago

2 comments

bhouston, 7 months ago
Nice work!

I found setup worked well on my ZFS system. The main issue I ran into with ZFS was slow writes, even with caches, when dealing with a lot of disks. It just felt like I wasn't making the most of my hardware, even with bulk, non-modifying writes.

I guess it may be the result of needing to do random writes to update directory structures or similar?

I had an array of 10 8GB drives, and large writes would get <100MB/s even over 10GbE, and the bottleneck wasn't CPU or memory either.
nabla9, 7 months ago
Everybody talks about OpenZFS block-level dedup. The real gem is file-level deduplication in copy-on-write transactional filesystems like ZFS:

    cp --reflink=auto

The command above performs a lightweight copy (a zfs clone at the file level), where data blocks are copied only when modified.
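
A minimal sketch of how this behaves in practice, assuming an OpenZFS 2.2+ pool with block cloning enabled and a hypothetical dataset mounted at /tank/data:

    # Reflink-style copy: the new file shares the same data blocks,
    # so nothing is physically duplicated up front
    cp --reflink=auto /tank/data/vm.img /tank/data/vm-copy.img

    # Space usage stays roughly flat until one of the copies is modified
    zfs list -o name,used,refer tank/data

Writes to either file then trigger copy-on-write only for the blocks that actually change, so the space cost is proportional to the modifications rather than to the file size.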