AMD announces unified UDNA GPU architecture – bringing RDNA and CDNA together

71 points by prng2021 | 8 months ago

3 comments

Jlagreen | 8 months ago
That's actually funny, because with Vega AMD was exactly there before, and it decided to split into RDNA and CDNA because separating consumer and data center made more sense.

While at the same time, Nvidia was going the other route, trying to make CUDA support all GPUs and bring every feature, like Tensor Cores, to consumer cards as well.

AMD going back to one architecture is basically admitting that the separation was a mistake and that Nvidia went the right way.

But AMD is making the next mistake by not competing at the high end in gaming. Not because gaming itself is important, but because the RTX 3090/4090 are among the most wanted AI accelerator cards. Look at it this way: the RTX 2000 series had no 2090 and no Titan, yet the RTX 2080 Ti was more of a success than most could have imagined. Thanks to its Tensor Cores, small servers were built back then and used heavily in academia and by small enterprises for ML.

Nvidia reacted to that and released the RTX 3090. The RTX 3090 was way ahead in gaming, and an RTX 3080 Ti would probably have been enough, but the 3090 offered far more memory and huge compute for ML. The same applies to the RTX 4090. I read somewhere that the RTX 4090 has ~50-70% of the ML compute of an H100 at 5-10% of the price. Yes, it has no NVLink and much less memory capacity and bandwidth, but it's still THE card used in academia for students entering the world of ML.

AMD shouldn't neglect high-end gaming, because Nvidia uses it to offer a product that is great for ML but marketed for gaming. AMD should release a $2000 gaming card that beats Nvidia if they want their software to really spread. And they should start giving academia free consumer cards, as Nvidia has been doing for almost a decade. If you want the community to use and drive your software, then give them incentives, for god's sake!
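
The price/performance gap described above can be made concrete with quick arithmetic. The figures below are only the commenter's rough estimates (~50-70% of H100 ML compute at ~5-10% of the price) plus assumed street prices, not measured benchmarks, so treat the result as an order-of-magnitude sketch:

    # Rough perf-per-dollar comparison using the figures quoted in the comment above.
    # All numbers are assumptions taken from the comment, not verified specs.
    h100_price = 30_000        # assumed H100 street price in USD
    rtx4090_price = 1_600      # assumed RTX 4090 price in USD (~5% of the H100)
    relative_compute = 0.6     # 4090 ML compute as a fraction of H100 (midpoint of ~50-70%)

    ratio = (relative_compute / rtx4090_price) / (1.0 / h100_price)
    print(f"RTX 4090: roughly {ratio:.0f}x the ML compute per dollar of an H100")
    # With these assumptions: (0.6 / 1600) vs (1 / 30000) -> about 11x better compute per dollar,
    # ignoring NVLink, memory capacity/bandwidth, and multi-GPU scaling.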
jdboyd | 8 months ago
As someone bummed by how second-class RDNA has been for ROCm, this is tremendously exciting. While there are other reasons ROCm is second class to CUDA, I think this has been a big one.
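
For context on what that second-class support looks like in practice: on ROCm builds of PyTorch, AMD GPUs are exposed through the same torch.cuda API that CUDA devices use, so a quick check like the sketch below (assuming a ROCm build of PyTorch is installed) is typically how people verify whether their consumer RDNA card is actually picked up:

    # Minimal check for ROCm/HIP support in a PyTorch install.
    # Assumes a ROCm build of PyTorch; on such builds AMD GPUs show up via torch.cuda.
    import torch

    print("HIP version:", torch.version.hip)           # None on CUDA-only builds
    print("GPU available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))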
ksec | 8 months ago
Sometimes I don't get HN's algorithm. 50 points, zero comments, a high point-to-comment ratio, and still not on the front two pages.