
On ZFS deduplication and compression support

26 points by maus80, over 11 years ago

4 comments

oakwhiz, over 11 years ago
You should not enable deduplication unless you have a lot of RAM to spare. Isn't the rule of thumb something like 6GB of RAM per 1TB of storage? The article doesn't say how much it actually needs.
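One way to sanity-check that rule of thumb before committing is to let ZFS simulate deduplication on existing data and report the would-be dedup table (DDT) size. A minimal sketch, assuming a pool named `tank` (a placeholder):

```sh
# Simulate deduplication on an existing pool and print DDT statistics.
# Read-only: it scans the pool but changes no settings.
zdb -S tank

# On a pool that already has dedup enabled, show the live DDT summary.
zpool status -D tank

# Rough RAM math (assumption: each in-core DDT entry costs on the order
# of a few hundred bytes, varying by ZFS version): with the default 128K
# recordsize, 1 TB of unique data is roughly 8 million blocks, which
# works out to a few GB of RAM just for the dedup table.
```

Multiply the reported entry count by the per-entry cost for your ZFS version to see whether the table fits comfortably in RAM.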
stock_toaster, over 11 years ago
I prefer lz4 (the article uses gzip) for ZFS compression. A nice CPU/compression tradeoff.
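For reference, switching a dataset to lz4 is a one-liner; the dataset name `tank/data` below is a placeholder. Note the property only affects blocks written after the change:

```sh
# Enable LZ4 compression on a dataset. Existing blocks keep whatever
# compression they were written with; only new writes use LZ4.
zfs set compression=lz4 tank/data

# Verify the setting and see the compression ratio achieved so far.
zfs get compression,compressratio tank/data
```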
kapsel, over 11 years ago
Use deduplication with caution, and only if your datasets actually dedup well.

It uses a lot of memory (or SSD; you can add drives as L2ARC to save some money), and it can cause serious problems when deleting many files or large datasets.

The somewhat recent addition of LZ4 compression is quite nice.
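A sketch of the setup described above: adding an SSD as L2ARC so DDT entries can spill out of RAM, and enabling dedup only on the dataset that benefits rather than pool-wide. The device and dataset names are placeholders:

```sh
# Add an SSD as an L2ARC cache device so cached data (including dedup
# table entries) can spill out of RAM onto flash.
zpool add tank cache /dev/sdc

# Enable dedup only on the dataset known to contain duplicate data
# (e.g. VM images), not on the whole pool.
zfs set dedup=on tank/vm-images
```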
XorNot, over 11 years ago
At the very least you should always enable ZLE compression to deal intelligently with sparse-like files.
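ZLE (zero-length encoding) compresses only runs of zeroes, so it costs almost nothing in CPU while still collapsing the zero-filled regions of sparse-like files. A minimal example, with `tank/scratch` as a placeholder dataset:

```sh
# ZLE only compresses runs of zeroes: nearly free on CPU, but it
# reclaims the zero-filled regions of sparse-like files.
zfs set compression=zle tank/scratch
zfs get compression tank/scratch
```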