Intel needs to take a hard look at what has happened to its AVX instructions and why NVidia has taken over.

If you had just written your SIMD in CUDA 15 years ago, NVidia's compilers would have given you maximum performance across all NVidia GPUs since, rather than forcing you to write and rewrite for SSE vs. AVX vs. AVX-512.

GPU SIMD is still SIMD. Just... better at it. I think AMD and Intel GPUs can keep up, btw. But the software advantage and the long-term payoff of rewriting into CUDA are plainly apparent.

Intel ISPC is a great project, btw, if you need high-level code that targets SSE, AVX, AVX-512, and even ARM NEON, all from one codebase, compiling automatically for every architecture (a rough C++ approximation of that workflow is sketched below).

-------

Intel's AVX-512 is pretty good at the hardware level. But a software methodology for driving SIMD through GPU-like languages should be a priority.

Intrinsics are good for maximum performance, but they are too hard for mainstream programmers.
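For a flavor of that "one codebase, many ISAs" workflow without leaving C++, here's a minimal sketch using GCC/Clang function multi-versioning (not ISPC itself; assumes optimizations enabled so each clone gets auto-vectorized):

    #include <cstddef>

    // The compiler emits one clone per listed target and dispatches at
    // load time to the best one the running CPU supports.
    __attribute__((target_clones("avx512f", "avx2", "sse4.2", "default")))
    void scale(float *dst, const float *src, float k, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = k * src[i];  // plain loop; each clone vectorizes for its ISA
    }

ISPC goes further by making the SPMD model explicit, but the compile-per-ISA-and-dispatch idea is the same.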
> SIMD instructions are complex, and even Arm is starting to look more "CISCy" than x86!

Thank you for saying it out loud. x86's XLAT/XLATB is positively tame compared to, e.g., RISC-V's vrgatherei16.vv/vrgather.vv.
You can simplify the two sqrts into sqrt(a*b): fewer operations overall, so perhaps more accurate. It would also let you get rid of the funky lane swivels.

Since this would only use one lane, if you have multiple of these to normalize, you could perhaps vectorize across them.
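A minimal sketch of that simplification (hypothetical names; assumes the dot product and the two squared norms were already accumulated, all non-negative):

    #include <cmath>

    // Epilogue of a cosine-similarity kernel: dot = a.b, a2 = |a|^2, b2 = |b|^2.
    float finish_cosine(float dot, float a2, float b2) {
        // sqrt(a2) * sqrt(b2) == sqrt(a2 * b2) for non-negative inputs,
        // so one sqrt (and one fewer rounding step) suffices.
        return dot / std::sqrt(a2 * b2);
    }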
My approach to this is to write a bunch of tiny "kernels" that are obviously amenable to SIMD and then inline them everywhere, and the compiler does a pretty good job on x86 and ARM (a sketch of the pattern follows below):

https://github.com/maedoc/tvbk/blob/nb-again/src/util.h
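For those not clicking through, a minimal sketch of the pattern (not the actual contents of util.h): tiny restrict-qualified loops the compiler auto-vectorizes well, force-inlined so they fuse into their callers:

    #include <cstddef>

    #define KERNEL static inline __attribute__((always_inline))

    // y += a * x, elementwise; restrict tells the compiler the buffers
    // don't alias, which is what unlocks clean auto-vectorization.
    KERNEL void axpy(std::size_t n, float a,
                     const float *__restrict x, float *__restrict y) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    KERNEL float dot(std::size_t n,
                     const float *__restrict x, const float *__restrict y) {
        float s = 0.f;
        for (std::size_t i = 0; i < n; ++i)
            s += x[i] * y[i];
        return s;
    }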
> Let's explore these challenges and how Mojo helps address them

You've not linked to or explained what Mojo is. There's also a lot going on with the different products mentioned: Modular, Unum Cloud, and SimSIMD are not contextualised either. While I'm at it, where do the others (Ovadia, Lemire, Lattner) come in? You all worked on SimSIMD, I guess?

That said, this is a great article, thanks.

Edit: Mojo is a programming language with Python-like syntax, and is a product by Modular: https://github.com/modularml/mojo
The main problem is that there are no good abstractions in popular programming languages for taking advantage of SIMD extensions.

The feature set being all over the place (e.g., integer support is fairly recent) doesn't help either.

ISPC is a good idea, but the execution is meh: it's hard to set up and integrate.

Ideally you would want to use this easily from other popular languages, like Java, Python, or JavaScript, without having to resort to linking a library written in C/C++.

Granted, language extensions may be required to approach this in an ergonomic way, but most attempts somehow end up just mimicking what C++ does and exposing a pseudo-assembler (see the sketch below).
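For reference, C++ does have one such attempt in flight, std::experimental::simd from the Parallelism TS v2 (shipped in GCC's libstdc++). A sketch of a portable horizontal sum, which arguably still reads like the "pseudo-assembler" style described above:

    #include <cstddef>
    #include <experimental/simd>
    namespace stdx = std::experimental;

    float sum(const float *data, std::size_t n) {
        using V = stdx::native_simd<float>;  // widest SIMD type for the target
        V acc = 0.f;
        std::size_t i = 0;
        for (; i + V::size() <= n; i += V::size())
            acc += V(data + i, stdx::element_aligned);  // load one vector's worth
        float s = stdx::reduce(acc);                    // horizontal add of lanes
        for (; i < n; ++i)
            s += data[i];                               // scalar tail
        return s;
    }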
Interesting article. It mentions "...the NumPy implementation illustrates a marked improvement over the naive algorithm...", but I couldn't find a NumPy implementation anywhere in the article.
Did they write bfloat16 and bfloat32 when they meant float16 and float32? bfloat32 isn't a standard type.

On the image: https://www.modular.com/blog/understanding-simd-infinite-complexity-of-trivial-problems#:~:text=bfloat16%20compared%20to%20bfloat32
I see a lot of "just use the GPU" and you'd often be right.

SIMD on the CPU is most compelling to me for its latency characteristics: you are nanoseconds away from the control flow. If the GPU needs some updated state about the outside world, propagating that information takes significantly longer.

For most use cases, the GPU will win the trade-off. But there is a reason you don't hear much about systems like order-matching engines using GPUs.
Looks like a great use case for AI. Set up the logical specification and constraints and let the AI find the optimal sequence of SIMD operations to fulfill the requirements.