Fundamental flaws of SIMD ISAs (2021)

153 points | by fanf2 | 17 days ago

17 comments

stephencanon 16 days ago

The basic problem with almost every "SIMD is flawed, we should have vector ISAs" article or post (including the granddaddy, "SIMD Instructions Considered Harmful") is that they invariably use SAXPY or something else trivial, where everything stays neatly in lane, as their demonstration case. Of course vector ISAs look good when you show them off using a pure vector task. This is fundamentally unserious.

There is an enormous quantity of SIMD code in the world that isn't SAXPY and doesn't stay neatly in lane. Instead it's things like "base64 encode this data", or "unpack and deinterleave this 4:2:2 pixel data, apply a colorspace conversion as a 3x3 sparse matrix and gamma adjustment in 16Q12 fixed-point format, resize and rotate by 15˚ with three shear operations represented as a linear convolution with a sinc kernel per row", or "extract these fields from this JSON data". All of which _can totally be done_ with a well-designed vector ISA, but the comparison doesn't paint nearly as rosy a picture. The reality is that you really want a mixture of ideas that come from fixed-width SIMD and ideas that come from the vector world — which is roughly what people actually shipping hardware have been steadily building over the last two decades, implementing more support for unaligned access, predication, etc., while the vector ISA crowd writes purist think pieces.
TinkersW 17 days ago

I write a lot of SIMD and I don't really agree with this.

Flaw 1: fixed width

I prefer fixed width, as it makes the code simpler to write: the size is known at compile time, so we know the size of our structures. Swizzle algorithms are also customized based on the size.

Flaw 2: pipelining

No CPU I care about is in-order, so this is mostly irrelevant, and even scalar instructions are pipelined.

Flaw 3: tail handling

I code with SIMD as the target, and have special containers that pad memory to SIMD width, so there is no need to mask or run a scalar loop. I copy the last valid value into the remaining slots so it doesn't cause any branch divergence.
sweetjuly 17 days ago

Loop unrolling isn't really done because of pipelining, but rather to amortize the cost of looping. Any modern out-of-order core will (on the happy path) schedule the operations identically whether you did one copy per loop or four. The only difference is the number of branches.
pornel 17 days ago

There are alternative universes where these wouldn't be a problem.

For example, if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM bytecode compilation), then we could adjust SIMD width for existing binaries instead of waiting decades for a new baseline or multiversioning faff.

Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.
xphos 17 days ago

Personally, I think load-and-increment-address-register in a single instruction is extremely valuable here. It's not quite the RISC model, but I think it is actually pretty significant in avoiding a von Neumann bottleneck with SIMD (the irony in this statement).

I found that a lot of the custom SIMD cores I've written for RISC-V simply cannot issue instructions fast enough. Or when they do, it's in quick bursts, and then the increments and loop controls leave the engine idling for more than you'd like.

Better dual issue helps, but when you have a separate vector queue you are sending things to, it's not that much to add increments into vloads and vstores.
pkhuong 17 days ago

There's more to SIMD than BLAS. https://branchfree.org/2024/06/09/a-draft-taxonomy-of-simd-usage/
bob1029 17 days ago

> Since the register size is fixed there is no way to scale the ISA to new levels of hardware parallelism without adding new instructions and registers.

I look at SIMD as the same idea as any other aspect of the x86 instruction set. If you are directly interacting with it, you should probably have a good reason to be.

I primarily interact with these primitives via types like Vector<T> in .NET's System.Numerics namespace. With the appropriate level of abstraction, you no longer have to worry about how wide the underlying architecture is, or if it even supports SIMD at all.

I'd prefer to let someone who is paid a very fat salary by an F100 spend their full-time job worrying about how to emit SIMD instructions for my program source.
codedokode 17 days ago

I think that packed SIMD is better in almost every aspect and vector SIMD is worse.

With vector SIMD you don't know the register size beforehand, and therefore have to maintain and increment counters, adding extra unnecessary instructions and reducing total performance. With packed SIMD you can issue several loads immediately without dependencies, and if you look at code examples, you can see that the x86 code is denser and uses a sequence of unrolled SIMD instructions without any extra instructions, which is more efficient. RISC-V, meanwhile, has 4 SIMD instructions and 4 instructions dealing with counters per loop iteration, i.e. it wastes 50% of instruction issue bandwidth, and you cannot load the next block until you increment the counter.

The article mentions that you have to recompile packed SIMD code when a new architecture comes out. Is that really a problem? Open source software is recompiled every week anyway. You should just describe your operations in a high-level language that gets compiled to assembly for all supported architectures.

So as a conclusion, it seems that vector SIMD is optimized for manually written assembly and closed-source software, while packed SIMD is made for open-source software and compilers and is more efficient. Why the RISC-V community prefers the vector architecture, I don't understand.
Someone 16 days ago

> Since the register size is fixed there is no way to scale the ISA to new levels of hardware parallelism without adding new instructions and registers.

I think there is a way: vary the register size per CPU, but also add an instruction to retrieve the register size. Code using the vector unit would then sometimes have to dynamically allocate a buffer for intermediate values, but it would allow software to run across CPUs with different vector lengths. Does anybody know whether any architecture does this?
dragontamer 17 days ago

1. Not a problem for GPUs. Nvidia and AMD are both 32-wide (1024-bit) hard-coded. AMD can swap to a 64-wide mode for backwards compatibility with GCN. 1024-bit or 2048-bit seems to be the right value: too wide and you get branch divergence issues, so it doesn't seem to make sense to go bigger.

In contrast, systems with flexible widths have never taken off. It's seemingly much harder to design a programming language for flexible-width SIMD.

2. Not a problem for GPUs. It should be noted that kernels allocate custom amounts of registers: one kernel may use 56 registers, while another might use 200. GPUs will run these two kernels simultaneously (256+ registers per CU or SM is commonly supported, so the 200- and 56-register kernels can run together).

3. Not a problem for GPUs, or really any SIMD in most cases. Tail handling is an O(1) problem in general and not a significant contributor to code length, size, or benchmarks.

Overall, utilization issues are certainly a concern, but in my experience they are most often caused by branching. (Branching on GPUs is very inefficient and forces very low utilization.)
thierry_src 14 days ago

One of my worries about the presented ideas (and this is present in the RISC-V vector ISA, if I'm not mistaken) is that register-size-independent vector instructions have variable execution times depending on hardware register width.

I remember seeing presentations of extensions to AVX (probably during a supercomputing-related event in Spain years ago?) where some complex matrix-to-matrix instructions could have data-dependent execution time, in addition to possible hardware register size dependencies.

In some contexts, and for overall security, this could be very problematic. Has this been discussed?
timewizard 17 days ago

> Another problem is that each new SIMD generation requires new instruction opcodes and encodings.

It requires new opcodes. It does not strictly require new encodings. Several new encodings are legacy-compatible and can encode previous generations' vector instructions.

> so the architecture must provide enough SIMD registers to avoid register spilling.

Or the architecture allows memory operands. The great joy of basic x86 encoding is that you don't actually need to put things in registers to operate on them.

> Usually you also need extra control logic before the loop. For instance if the array length is less than the SIMD register width, the main SIMD loop should be skipped.

What do you want: no control overhead, or the speed enabled by SIMD? This isn't a flaw. It's a necessary price for the efficiency you get in the main loop.
freeone3000 17 days ago

x86 SIMD suffers from register aliasing: xmm0 is actually the low half of ymm0, so you need to explicitly tell the processor what your input type is to properly handle overflow and signing. True vector instructions don't have this problem, but you also can't change it now.
convolvatron 17 days ago

I would certainly add the lack of reductions ("horizontal" operations) and a more generalized model of communication to the list.
dang 17 days ago

Related:

Three Fundamental Flaws of SIMD - https://news.ycombinator.com/item?id=28114934 - Aug 2021 (20 comments)
gitroom 17 days ago

Oh man, totally get the pain with compilers and SIMD tricks - the struggle's so real. Ever feel like keeping low-level control is the only way stuff actually runs as smooth as you want, or am I just too stubborn to give abstractions a real shot?
lauriewired 17 days ago

The three "flaws" this post lists are exactly what the industry has been moving away from for the last decade.

Arm's SVE and RISC-V's vector extension are both vector-length-agnostic. RISC-V's implementation is particularly nice: you only have to compile for one code path (unlike AVX, with its need for fat-binary else/if trees).