
CUDA vs. ROCm: A case study

161 points, by shihab, over 1 year ago

15 comments

cherryteastain, over 1 year ago
I wish AMD would just drop ROCm at this stage and focus on SYCL. The rocRAND/hipRAND woes in this article, if anything, show ROCm in a better light than it deserves; here it at least worked, and performed within the same ballpark as CUDA. Often it simply does not work at all, or if it works, it's behind by a lot more. At work I simply gave up on our 4x Radeon Pro W6800 workstation because launching TensorFlow with more than 1 GPU would cause a kernel panic every time, and AMD engineers never offered a fix other than "reinstall Ubuntu".

ROCm feels like such a half-assed product, one that (to me at least) seems to have been made to tick a box and look cool in corporate presentations. It's not made with the proper mindset to compete against CUDA. Lisa Su claims they're doubling down on ROCm, but to me it feels like they're falling behind Nvidia, not catching up.

Banding together with Intel to support SYCL would, in my opinion:

1. Ensure there's a lot more momentum behind a single, cross-platform, industry-standard competitor

2. Entice other industry heavyweights like MSFT, Qualcomm, ARM, etc. to take cross-platform solutions more seriously

3. Encourage heavy investment into the developer experience and tooling for the cross-platform solution

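For concreteness, here is a minimal single-source sketch of the SYCL model the parent is advocating. This is an illustration, not code from the article; it assumes a SYCL 2020 compiler (e.g. Intel DPC++ or AdaptiveCpp), and the same source can then target Intel, AMD, and NVIDIA backends.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default selector: picks whatever device is available
    {
        // Buffers borrow the host data for the duration of this scope.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor xa(ba, h, sycl::read_only);
            sycl::accessor xb(bb, h, sycl::read_only);
            sycl::accessor xc(bc, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n),
                           [=](sycl::id<1> i) { xc[i] = xa[i] + xb[i]; });
        });
    }  // buffer destructors wait for the kernel and copy results back

    std::cout << "c[0] = " << c[0] << "\n";  // prints 3
}
```
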
ekelsen, over 1 year ago
AMD's attempted responses go all the way back to 2007, when CUDA first debuted, starting with "Close to Metal" (https://en.wikipedia.org/wiki/Close_to_Metal). They've had nearly 20 years to fix the situation and have failed to do so. Maybe some third-party player like Lamini AI will do what they couldn't and get acquired for it.

frognumber, over 1 year ago
That's more-or-less my experience with AMD, only worse. A critical factor, too, is burned developers like myself.

I'm looking forward to Intel vs. NVidia. The Arc A770 is a pretty serious competitor. It's the lowest-cost way to run OPT-175B.

Given a 7-slot motherboard, $270 * 7 = $1,890 for 112 GB of VRAM in one computer. That's sweet. Compute speed would be on par with a top-of-the-line NVidia workstation GPU.

Three of those are enough to run the largest open-source LLMs at around $9,000.

We're just drivers + libraries + documentation away, and Intel is not bad at drivers + libraries + documentation.

anvuong, over 1 year ago
"Please note the library is being actively developed, and is known to be incomplet; it might also be incorrekt and there could be a few bad bugs lurking."

That gives me a good laugh.

评论 #38705310 未加载
physicsguy, over 1 year ago
This was exactly my experience with it too. It's moved on a bit since then, but when I looked at rocFFT a couple of years ago, the documentation was really poor and it was missing features.

When I switched from FFTW to cuFFT many years ago (~2015), the transition was very smooth, the documentation was great, and all features were supported. They even put in a shim "FFTW"-compatible header file so that you didn't need to rewrite your code to make it work (leaving some performance on the table).

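The shim the parent describes is cuFFTW, which ships with the CUDA toolkit. A minimal sketch of the drop-in path (my illustration, not from the comment; assumes the CUDA toolkit, compiling the file as C++ and linking with -lcufftw -lcufft instead of -lfftw3f):

```cpp
#include <cufftw.h>  // instead of <fftw3.h>; the only source change needed
#include <cstdio>

int main() {
    const int n = 8;
    fftwf_complex* in  = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * n);
    fftwf_complex* out = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * n);
    for (int i = 0; i < n; ++i) { in[i][0] = (float)i; in[i][1] = 0.0f; }

    // Identical FFTW 3 API; the transform is dispatched to the GPU underneath.
    fftwf_plan p = fftwf_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftwf_execute(p);
    std::printf("out[0] = %f + %fi\n", out[0][0], out[0][1]);

    fftwf_destroy_plan(p);
    fftwf_free(in);
    fftwf_free(out);
}
```
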
zamalek, over 1 year ago
I'm convinced that Julia is how the moat will be crossed. There are some pretty incredible GPU packages for it (I'm looking at you, KernelAbstractions.jl). The Python science community seems more than happy to carry on focusing on NVIDIA, and is a lost cause.

I somewhat don't blame them: the MI300X might be miles ahead and all, but AMD are not only oblivious to the desktop market (you know, where new ideas are prototyped) but also seemingly actively hostile[1]. NVIDIA has people doing somewhat interesting things with a 3060 (which can eventually graduate to a 4090 or even an H100), while AMD don't want to hear about it unless you have a "pro" GPU. Definitely a case of dollar-wise and penny-foolish.

[1] https://rocm.docs.amd.com/en/docs-5.5.1/release/gpu_os_support.html (FWIW you can override this with an envar, but AMD aren't exactly forthcoming with that information.)

JonChesterfield, over 1 year ago
Documentation. The Python example is a bit on the nose. I've not had good times with ROCm's documentation either.

Can anyone point to an example of good documentation for a big software system where they can also sketch how that was achieved? E.g. CUDA's docs are pretty good, but I've no idea how they came to be, or how they stay up to date. LLVM's docs are a small set of handwritten webpages which correlate with reality to some extent, the source for which lives in the same repo as the code.

I have an idea that it needs to combine programmers writing some things, some testing infra to notice things like internal links that will 404, and some non-developers writing things.

I started trying to document one of my own systems as freeform notes under Obsidian, and while it kind of works at the time, it diverges from reality pretty quickly, and that's without trying to have anyone else working on either the docs or the system.

So what's the proper, established answer to this?

quanto, over 1 year ago
AMD cards + ROCm are used in top supercomputers for (non-deep-learning) HPC. Why is this the case?

I understand that AMD GPUs offer better cost efficiency for F32 & F64 FLOPs, RAM, and wattage. However, if ROCm is such a half-baked piece of software, shouldn't that advantage be gone? What drives AMD adoption in the HPC space, then?

Zetobal, over 1 year ago
I hate ROCm so much I can't even describe how much I suffered because of this POS software. It wasn't good 3 years ago, but it was manageable; now I don't even know how they made it worse.

I really just wish my employer would give up on AMD for GPUs.

HarHarVeryFunny, over 1 year ago
At this point, rather than chasing CUDA/cuDNN compatibility, it would seem more productive for AMD to be targeting high-level language support. Forget CUDA compatibility, and instead support work such as Mojo and MLIR.

It seems that in an ideal world, PyTorch support for AMD wouldn't rely on ROCm, but rather be based on high-level code compiled to MLIR with AMD target support, with this same MLIR representation supporting Mojo and any other languages, such as Julia, that want optimized AMD support.

outside1234, over 1 year ago
How is there not an "OpenGL" of this space at this point? It seems like it is in all of the hyperscalers' interest to get this going, so why haven't they?

latchkey, over 1 year ago
Recent video of Lisa Su, good watch: https://twitter.com/TheSixFiveMedia/status/1737177221490450594

up2isomorphism, over 1 year ago
Any article that tries to make a big deal out of a 30% performance difference without comparing price and cost is just making a case for itself.

slavik81, over 1 year ago
The rocrand library did not have any real documentation at all until 2023. It's still pretty barebones, but the updates in the article regarding the Python API seem to suggest that this is a work in progress.

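For context, a minimal sketch of the rocRAND/hipRAND host API the article benchmarks (my illustration, not from the article; assumes a ROCm install with hipRAND, whose host API mirrors cuRAND almost name-for-name; header paths vary slightly across ROCm versions; error checks omitted for brevity):

```cpp
#include <hiprand/hiprand.h>
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    const size_t n = 1024;
    float* d_out = nullptr;
    hipMalloc(&d_out, n * sizeof(float));

    // curandCreateGenerator -> hiprandCreateGenerator, and so on.
    hiprandGenerator_t gen;
    hiprandCreateGenerator(&gen, HIPRAND_RNG_PSEUDO_DEFAULT);
    hiprandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
    hiprandGenerateUniform(gen, d_out, n);  // fill d_out with n uniform floats

    float h_out[4];
    hipMemcpy(h_out, d_out, sizeof(h_out), hipMemcpyDeviceToHost);
    std::printf("%f %f %f %f\n", h_out[0], h_out[1], h_out[2], h_out[3]);

    hiprandDestroyGenerator(gen);
    hipFree(d_out);
}
```
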
dpflan, over 1 year ago
Are there use cases for LLMs assisting in developing this software? Basically, I'm wondering whether LLMs for developing a GPU API exist, and how they can accelerate development such that this "moat" becomes more of a river that others can cross.
