Inference is going to be interesting in 2025.<p>By then we will have a good number of MI300 hosts. AMD Strix Halo (and the Intel equivalent?) will be out for high-memory jobs locally. Intel Falcon Shores, and who knows what else, will finally be coming out, and from the looks of it the software ecosystem will be at least a little more hardware-agnostic.
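To make the hardware-agnostic point concrete at the framework level, here is a minimal device-selection sketch in PyTorch (pick_device is just an illustrative helper; the xpu branch assumes a torch build with Intel GPU support, which landed upstream around 2.4):

    import torch

    def pick_device() -> torch.device:
        # ROCm builds of PyTorch expose AMD GPUs (e.g. MI300) through
        # the CUDA API, so this branch covers NVIDIA and AMD alike.
        if torch.cuda.is_available():
            return torch.device("cuda")
        # Intel GPUs show up as "xpu" on builds compiled with that backend.
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return torch.device("xpu")
        return torch.device("cpu")

    device = pick_device()
    x = torch.randn(4096, 4096, device=device)
    y = x @ x  # the same matmul dispatches to whichever backend is present

The app-level code stays identical; only the device string changes underneath.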
Could someone ELI5 what this means for engineers working on these systems from an app / higher-level perspective?<p>(I have worked extensively with TF / PyTorch.)
<a href="https://discourse.llvm.org/t/rfc-add-xegpu-dialect-for-intel-gpus/75723" rel="nofollow noreferrer">https://discourse.llvm.org/t/rfc-add-xegpu-dialect-for-intel...</a> :<p>> <i>XeGPU dialect models a subset of Xe GPU’s unique features focusing on GEMM performance. The operations include 2d load, dpas, atomic, scattered load, 1d load, named barrier, mfence, and compile-hint. These operations provide a minimum set to support high-performance MLIR GEMM implementation for a wide range of GEMM shapes. XeGPU dialect complements Arith, Math, Vector, and Memref dialects. This allows XeGPU based MLIR GEMM implementation fused with other operations lowered through existing MLIR dialects.</i>
Not the way to do this.<p>Accelerators already have a common middle layer: the proposed LLVM/Offload project.<p><a href="https://discourse.llvm.org/t/rfc-introducing-llvm-project-offload/74302/23" rel="nofollow noreferrer">https://discourse.llvm.org/t/rfc-introducing-llvm-project-of...</a>