科技回声 (Tech Echo)

A tech news platform built with Next.js, offering global tech news and discussion.



© 2025 科技回声 (Tech Echo). All rights reserved.

OpenZFS deduplication is good now and you shouldn't use it

8 points | by tjwds | 7 months ago

2 comments

bhouston | 7 months ago
Nice work!

I found setup worked well on my ZFS device. The main issue I ran into with ZFS was slow writes, even with caches, when dealing with a lot of disks. It just felt like I wasn't making the most of my hardware, even with bulk, non-modifying writes.

I guess it may be the result of needing to do random writes to update directory structures or similar?

I had an array of 10 8GB drives, and large writes would get <100MB/s even on 10GbE; the bottleneck wasn't CPU or memory either.
nabla9 | 7 months ago
Everybody talks about OpenZFS block-level dedup. The real gem is file-level deduplication in copy-on-write transactional filesystems like ZFS:

    cp --reflink=auto

The command above performs a lightweight copy (a zfs clone at the file level), where the data blocks are copied only when modified.
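A minimal sketch of the file-level clone in practice, assuming GNU coreutils `cp` on Linux; the file names are illustrative. With `--reflink=auto`, `cp` attempts a lightweight copy-on-write clone and silently falls back to a regular copy on filesystems that don't support cloning, so the command is safe to run anywhere:

```shell
# Create a test file, then make a lightweight copy of it.
# On a filesystem with copy-on-write cloning support (e.g. Btrfs,
# XFS with reflinks, or ZFS with block cloning) the clone shares
# data blocks with the original until either file is modified.
dd if=/dev/urandom of=big.dat bs=1M count=4 status=none
cp --reflink=auto big.dat clone.dat
cmp -s big.dat clone.dat && echo "contents identical"
```

Whether ZFS honors the reflink path depends on the version and its block-cloning support; on older releases the `auto` fallback still produces a correct, if full, copy.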