
AMD's 7900 XTX achieves better value for Stable Diffusion than Nvidia RTX 4080

191 points · by mauricesvp · over 1 year ago

13 comments

klft · over 1 year ago

> Using Microsoft Olive and DirectML instead of the PyTorch pathway results in the AMD 7900 XTX going from a measly 1.87 iterations per second to 18.59 iterations per second!

So the headline should be Microsoft Olive vs. PyTorch and not AMD vs. Nvidia.
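For concreteness, the Olive/DirectML path the benchmark used looks roughly like this in code. This is a minimal sketch assuming the diffusers ONNX pipeline and the onnxruntime-directml package are installed; the model name and revision are illustrative, and Olive's own offline graph-optimization pass (which produced the benchmarked model) is not shown:

```python
# Requires: pip install diffusers onnxruntime-directml
from diffusers import OnnxStableDiffusionPipeline

# Load an ONNX export of SD 1.5 and run it through onnxruntime's
# DirectML execution provider -- the path used on the AMD side.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="onnx",
    provider="DmlExecutionProvider",
)

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```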
DarkmSparks · over 1 year ago

Been watching this quite closely. As far as I can summarise, the 7900 XTX is the first (and only) desktop GPU from AMD that _might_ be worth buying. (They own the console gaming space, but that's a different story.)

Not Nvidia-beating, due to the CUDA issue, but a massive leap in the right direction.

Intel is also making _some_ progress with its ARC range.

It's going to be happy days for us users if/when AMD/Intel are competitive and cut some of that monopoly margin off Nvidia's pricing, but there's a way to go yet.
brucethemoose2 · over 1 year ago

Well, the problem is that Automatic1111 is not fast...

Other diffusers-based UIs with PyTorch Triton will net you 40%+ performance.

Facebook AITemplate inference in VoltaML will be at least twice as fast as A1111 on a 3080, with support for LoRAs, ControlNet and such. This supports AMD Instinct cards too.

What I am getting at is that people don't really care about A1111 performance on a 3080 because, for the most part, it's fast enough.
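For context on the Triton point: since PyTorch 2.0, torch.compile lowers models through TorchInductor, which emits Triton kernels, and this is the standard documented way to speed up a diffusers pipeline. A minimal sketch (model name illustrative; the 40%+ figure above is the commenter's claim, not a guarantee):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline in half precision on the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# torch.compile traces the UNet through TorchInductor, which generates
# fused Triton kernels; this is where the extra throughput comes from.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```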
lelandbatey · over 1 year ago

The comments point out that AMD performing well in the table required the use of Microsoft Olive, and someone in the article comments implies that if you use Microsoft Olive with Nvidia instead of PyTorch with Nvidia, then you'll see Nvidia jump in performance as well, largely rendering AMD's supposed leap irrelevant. Is that true? Can folks chime in?
Havoc · over 1 year ago

Nearly bought one thinking AMD would sort itself out shortly, but it's hard to justify versus a second-hand 3090 with 24 GB and no CUDA hassles.
cschmid · over 1 year ago

Can I also interpret this as: 'AMD's PyTorch support is so abysmal that inference is 10x slower than it should be'?
delusional · over 1 year ago

I've been running PyTorch and ROCm (5.6 has support for gfx1100 if you compile it yourself) for at least 3 months at 18 it/s on a 7900 XTX. This has been possible for quite a while.

Could someone fill me in on what's actually new here, other than the specific technology used?
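For anyone wanting to verify a setup like this: ROCm builds of PyTorch reuse the torch.cuda API surface, so the usual device checks work unchanged on a 7900 XTX. A minimal sanity-check sketch, assuming a ROCm build of PyTorch is installed (self-compiled builds typically target the card via PYTORCH_ROCM_ARCH=gfx1100, per the commenter's note):

```python
import torch

# On a ROCm build, torch.version.hip is set (it is None on CUDA builds),
# and the torch.cuda namespace transparently drives the AMD GPU.
print(torch.version.hip)
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))

# Quick check that kernels actually run on the GPU.
x = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
y = x @ x
torch.cuda.synchronize()
print(y.shape)
```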
smoldesu · over 1 year ago

Wait, why are they comparing Microsoft Olive on AMD to PyTorch on Nvidia? Nvidia supposedly shipped support for Olive recently, so there should be no problem getting a head-to-head comparison: https://www.tomshardware.com/news/nvidia-geforce-driver-promises-doubled-stable-diffusion-performance

This is a very strange comparison.
laserbeam · over 1 year ago

I'm actually curious whether libraries like PyTorch are even trying to move away from CUDA, and whether moving away from it is worth it. I get why newer ML toolchains would do that, but do mainstream established ML frameworks plan on sticking with Nvidia exclusivity for now?
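As a concrete illustration of what "moving away from CUDA" already means at the user level: PyTorch abstracts the backend behind torch.device, and ROCm builds deliberately reuse the torch.cuda namespace so existing code runs on AMD unmodified. A minimal device-agnostic sketch:

```python
import torch

def pick_device() -> torch.device:
    # ROCm builds of PyTorch reuse the torch.cuda namespace, so AMD
    # GPUs take the same branch as Nvidia ones here.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Apple Silicon, as one example of a non-CUDA backend.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(128, 128).to(device)
x = torch.randn(32, 128, device=device)
print(model(x).shape, "on", device)
```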
shrx · over 1 year ago

Won't using Nvidia cards with Microsoft Olive also provide some boost in performance?
tamrix · over 1 year ago

How does it compare to Nvidia's Jetson Orin?
Zetobal · over 1 year ago

Stay away from AMD consumer GPUs; they are not stable enough... neither the hardware nor the software.
Der_Einzige · over 1 year ago

No, it doesn't. AMD drivers don't support all of the extensions, optimizations, and related features in things like Automatic1111. There's always stuff that breaks on AMD and works perfectly in CUDA land.