
Intel proposes XeGPU dialect for LLVM MLIR

90 points by artagnon · over 1 year ago

7 comments

brucethemoose2 · over 1 year ago
Inference is going to be interesting in 2025.

By that time we will have a good number of MI300 hosts. AMD Strix Halo (and the Intel equivalent?) will be out for high-memory jobs locally. Intel Falcon Shores and who knows what else will finally be coming out, and from the looks of it the software ecosystem will be at least a little more hardware agnostic.
CalChris · over 1 year ago
"XeGPU dialect provides an abstraction that closely models Xe instructions."

How is that an abstraction? It sounds more like a representation.
viksit · over 1 year ago
Could someone ELI5 what this means for engineers working on systems from an app / higher-level perspective?

(I have worked extensively with tf / pytorch.)
JonChesterfield · over 1 year ago
Weird when there's no codegen for it in LLVM. I guess the idea is to use MLIR with a toolchain built from Intel's GitHub.
westurner · over 1 year ago
https://discourse.llvm.org/t/rfc-add-xegpu-dialect-for-intel-gpus/75723 :

> The XeGPU dialect models a subset of the Xe GPU's unique features, focusing on GEMM performance. The operations include 2d load, dpas, atomic, scattered load, 1d load, named barrier, mfence, and compile-hint. These operations provide a minimal set to support high-performance MLIR GEMM implementations for a wide range of GEMM shapes. The XeGPU dialect complements the Arith, Math, Vector, and Memref dialects, which allows XeGPU-based MLIR GEMM implementations to be fused with other operations lowered through existing MLIR dialects.
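To make the quoted RFC description concrete, here is a minimal sketch of what XeGPU-dialect IR for a single GEMM tile might look like, using the "2d load" and "dpas" operations named in the RFC. The op names (xegpu.create_nd_tdesc, xegpu.load_nd, xegpu.dpas, xegpu.store_nd), the tile shapes, and the function name @gemm_tile are illustrative assumptions based on the RFC's description, not authoritative dialect syntax:

    // Hypothetical XeGPU-dialect GEMM tile (op names/shapes are assumptions):
    // load 2D tiles of A and B, multiply-accumulate via dpas, store the result.
    func.func @gemm_tile(%A: memref<128x128xf16>, %B: memref<128x128xf16>,
                         %C: memref<128x128xf32>) {
      %c0 = arith.constant 0 : index
      // "2d load": describe a tile in memory, then load it into registers.
      %tA = xegpu.create_nd_tdesc %A[%c0, %c0]
          : memref<128x128xf16> -> !xegpu.tensor_desc<8x16xf16>
      %tB = xegpu.create_nd_tdesc %B[%c0, %c0]
          : memref<128x128xf16> -> !xegpu.tensor_desc<16x16xf16>
      %a = xegpu.load_nd %tA : !xegpu.tensor_desc<8x16xf16> -> vector<8x16xf16>
      %b = xegpu.load_nd %tB : !xegpu.tensor_desc<16x16xf16> -> vector<16x16xf16>
      // "dpas": the Xe matrix multiply-accumulate instruction (acc += A * B).
      %zero = arith.constant dense<0.0> : vector<8x16xf32>
      %acc = xegpu.dpas %a, %b, %zero
          : vector<8x16xf16>, vector<16x16xf16>, vector<8x16xf32>
          -> vector<8x16xf32>
      // Store the accumulated tile back through a 2D store descriptor.
      %tC = xegpu.create_nd_tdesc %C[%c0, %c0]
          : memref<128x128xf32> -> !xegpu.tensor_desc<8x16xf32>
      xegpu.store_nd %acc, %tC : vector<8x16xf32>, !xegpu.tensor_desc<8x16xf32>
      return
    }

Note how the sketch mixes xegpu ops with ordinary arith constants and memref/vector types; that interoperation is the "complements Arith, Math, Vector, and Memref dialects" point from the quote.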
KingLancelot · over 1 year ago
Not the way to do this.

Accelerators already have a common middle layer:
https://discourse.llvm.org/t/rfc-introducing-llvm-project-offload/74302/23
gardenfelder · over 1 year ago
Direct link: https://hai.stanford.edu/news/how-well-do-large-language-models-support-clinician-information-needs