Intel QuickAssist Technology Zstandard Plugin for Zstandard

144 points, by ot, over 1 year ago

15 comments

berbec, over 1 year ago

A massive power and CPU-load decrease for Zstandard is a big win for Intel. AMD has been racking up major pluses in the enterprise space, with RAM, PCIe, and core-count advantages. Showing that *any* Intel part is faster at such a major CPU-load task is a big deal.

That's not to detract from everything AMD has done, but hardware is only the first step. Software that properly uses the features your hardware provides is just as, if not more, important.

I love the fact that AMD is pushing Intel so much. The pre-C2D days were amazing because we had two vibrant, innovative companies pushing the edge of the possible, trying to outdo each other. Pre-Ryzen was a horrible time. Do you want to spend $500 to upgrade from a 4-core Intel 4000-series CPU to an Intel 5000-series CPU? You'll get DDR4 and 1% IPC.

Now we get massive IPC, clock speed, RAM, and PCIe improvements on a regular basis. Competition is great, especially for the consumer.
jeffbee, over 1 year ago

I love the QAT libraries and I feel their abilities are overlooked. Intel also has the igzip library, which does not even require QAT and is radically faster than zlib; that is handy in older applications where gzip is unavoidable despite its obsolescence.

The major downside, of course, is that it is quite tricky to use this stuff in practice. In the cloud, you need a bare-metal instance that exposes the QAT peripheral, and those are relatively scarce, and this whole generation of hardware is only just beginning to land in public clouds. For machines you own, you will need to scrutinize Intel's somewhat ridiculous product matrix in order to acquire a Xeon that has QAT.
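For context, igzip here refers to the deflate/gzip implementation in Intel's ISA-L library, which can stand in for zlib's compression path on an ordinary CPU with no QAT hardware. The sketch below is illustrative only, based on the ISA-L C API as I recall it (isal_deflate_init/isal_deflate and struct isal_zstream); verify field and constant names against igzip_lib.h for your installed version.

```c
/* Hedged sketch: one-shot gzip compression with ISA-L's igzip (link with -lisal).
 * Names follow the ISA-L headers as best recalled; check igzip_lib.h. */
#include <stdio.h>
#include <stdint.h>
#include <isa-l/igzip_lib.h>

int main(void)
{
    static const char input[] = "hello hello hello hello hello";
    unsigned char output[1024];

    struct isal_zstream stream;
    isal_deflate_init(&stream);          /* default level needs no level buffer */
    stream.gzip_flag = IGZIP_GZIP;       /* emit a gzip header and trailer */
    stream.end_of_stream = 1;            /* entire input supplied at once */
    stream.flush = NO_FLUSH;

    stream.next_in = (uint8_t *)input;
    stream.avail_in = sizeof(input) - 1;
    stream.next_out = output;
    stream.avail_out = sizeof(output);

    if (isal_deflate(&stream) != COMP_OK) {
        fprintf(stderr, "igzip compression failed\n");
        return 1;
    }
    printf("compressed %zu bytes to %u bytes\n",
           sizeof(input) - 1, (unsigned)(sizeof(output) - stream.avail_out));
    return 0;
}
```

The appeal is that the gzip container stays standard: anything produced this way can be read back by plain zlib or the gzip command.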
dale_glass, over 1 year ago

I think that's a very welcome improvement. With NVMe drives that run at 7 GB/s, we're now at the point where it can be hard to do anything useful with the data fast enough.

So I think good acceleration for things like compression is going to be a big help.
metta2uall, over 1 year ago

Interesting that Intel's code for this includes numerous references to LZ4, as if that's the algorithm the hardware originally aimed to accelerate. So it seems LZ4 and ZSTD are quite similar?

https://github.com/intel/QAT-ZSTD-Plugin/blob/main/src/qatseqprod.c
sanqui, over 1 year ago

Btrfs, the file system I use, uses zstd for transparent compression. That burns a lot of CPU all the time on my laptop, so more efficient compression is great news! Is this for future CPUs?
bitbckt, over 1 year ago

Finally. I'm tired of seeing only zlib in QAT reviews; it's largely irrelevant to situations where I might want to choose Intel (for QAT) over AMD.

I don't fault Intel for choosing web front-end acceleration over storage first, but this has been a long time coming.
SilverBirch, over 1 year ago

This may be a really dumb question, but: is this transparent? Like, can I compress some data using QAT to create a zstd file, email it to my friend, and have them decompress it without QAT? From the way this is described, it sounds like they're replacing the sequence producer, but presumably that doesn't matter as long as the sequences are encoded in the standard zstd format?
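To make the compatibility point concrete, here is a hedged sketch of how the plugin is wired in on the compression side, loosely following the flow described in the plugin's README. The QZSTD_* helpers and qatSequenceProducer names come from that README and may change between versions; ZSTD_registerSequenceProducer is part of zstd's experimental API, hence ZSTD_STATIC_LINKING_ONLY. The output is an ordinary zstd frame, so the receiver just calls ZSTD_decompress with no QAT involved.

```c
/* Hedged sketch, not a drop-in program: compress with the QAT sequence
 * producer, decompress with stock zstd. Helper names follow the
 * QAT-ZSTD-Plugin README and may differ across versions. */
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_registerSequenceProducer is experimental */
#include <zstd.h>
#include "qatseqprod.h"            /* from intel/QAT-ZSTD-Plugin */

size_t compress_with_qat(void *dst, size_t dstCap, const void *src, size_t srcSize)
{
    QZSTD_startQatDevice();                      /* bring up the QAT device */
    void *qatState = QZSTD_createSeqProdState(); /* per-context producer state */

    ZSTD_CCtx *cctx = ZSTD_createCCtx();
    ZSTD_registerSequenceProducer(cctx, qatState, qatSequenceProducer);
    /* Fall back to software matching if the accelerator declines a block. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableSeqProducerFallback, 1);

    size_t written = ZSTD_compress2(cctx, dst, dstCap, src, srcSize);

    ZSTD_freeCCtx(cctx);
    QZSTD_freeSeqProdState(qatState);
    QZSTD_stopQatDevice();
    return written;                              /* a standard zstd frame */
}

/* The recipient needs nothing Intel-specific: */
size_t decompress_plain(void *dst, size_t dstCap, const void *src, size_t srcSize)
{
    return ZSTD_decompress(dst, dstCap, src, srcSize);
}
```

In other words, the accelerator only proposes match/literal sequences; the frame format itself is unchanged, so any zstd decoder can read the result.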
NelsonMinar, over 1 year ago

QuickAssist Technology is new to me. What hardware supports it? A quick look suggests it's just a few Xeon processors, or else an $800 peripheral card. Is it at all related to "Quick Sync", the name for the video compression acceleration in newer Intel CPUs?
loeg, over 1 year ago

Good to see more in the compression offload space. Several years ago we ended up running a custom gzip softcore on an FPGA (I believe) co-located on a NIC, to get somewhat better gzip compression performance than software. (We were pretty short on PCIe physical capacity in that model.)

Dealing with the gzip core vendor and the FPGA vendor (both in wildly different timezones) was a little unpleasant.
pclmulqdq, over 1 year ago

QAT is awesome, and it flips the script on the notion of core-count primacy. In many servers, QAT, when properly used, will save several CPU cores, since cores spend a lot of time compressing and encrypting stuff. However, the software layer has always been Intel's weakness, and I'm not entirely sure they got this one right.
estebarb, over 1 year ago

Is there an easy way to use these accelerators from the CLI? Sometimes I have to decompress several TBs of gzip files, but I don't want to roll my own decompressor in C. I know that Graviton 2 includes a compression accelerator as well, but I have no idea how to use it (easily).
ahofmann, over 1 year ago
Why do they show different compression levels in their graphs? That seems kind of fishy to me.
scrubs, over 1 year ago

Checking this out tomorrow. A side project at work involves compressing tens of TBs of files more effectively.
truth_seeker, over 1 year ago

Hah! Nailed it.

The best way to optimize reusable software is to turn it into a single hardware CPU instruction.
baybal2, over 1 year ago

https://ark.intel.com/content/www/us/en/ark/products/125200/intel-quickassist-adapter-8970.html

At "only" 100 Gbps per adapter, with said adapter costing about as much as one Epyc, does it make sense?

Epycs can do 400 Gbps of compression in software, without much SSE or handwritten assembler.