Is modern Radeon support on Linux still great? I've heard that NVIDIA GeForce on Linux has always been kinda bad, but that it has massively improved over the past 3-4 years. Is Radeon still the way to go for Linux?<p>EDIT: My use case is a Linux gaming PC. I was fortunate enough to score an RX 6800 during the pandemic, but moved from Windows 10 to Linux a month ago. Everything seems solid, but I'm looking to upgrade the whole PC soon-ish. (Still running a Ryzen 1800X, but Ryzen 9000 looks tempting.)
No ROCm at launch? After they delayed it for months? What a joke. That's like not having CUDA available at launch.<p><a href="https://www.phoronix.com/news/AMD-ROCm-RX-9070-Launch-Day" rel="nofollow">https://www.phoronix.com/news/AMD-ROCm-RX-9070-Launch-Day</a>
The hardware itself seems like it will be welcome in the market. But I find AMD GPU launches so frustrating: the branding is all over the place, and it feels like there is never any consistent generational series. Instead you get these seemingly random individual product launches that slot somewhere in the market, but you never know what comes next.<p>In comparison, nVidia has had pretty consistent naming since the 200 series, with every generation feeling at least somewhat complete. The only major exception was (mostly) skipping the 800 series. Not saying they are perfect by any means in this regard, but AMD just feels like a complete mess.<p>Checking Wikipedia, Radeon has recently gone through (in order):<p>* HD 7000 series<p>* HD 8000 series<p>* 200 series<p>* 300 series<p>* 400/500 series<p>* RX Vega<p>* RX 5000 series<p>* RX 6000 series<p>* RX 7000 series<p>* RX 9000 series<p>Like, what happened to the 8000 series? And isn't it confusing to <i>partially</i> change the naming scheme? Any guesses what the next generation will be called?
So, about 20% better value than Nvidia comparing MSRPs. Not terrible, but that's what they offered last generation, and they got 10% market share as a result. If the <i>market price</i> of Nvidia cards remains 20-50% above MSRP while AMD can hit their prices, then maybe we have a ballgame.
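For what it's worth, a quick sanity check on where a ~20% figure could come from, assuming the comparison is the RX 9070 XT's $599 MSRP against the RTX 5070 Ti's $749 MSRP as the nearest raster-class competitor (that pairing is my assumption, not stated above):

    # assumed MSRPs: RX 9070 XT $599 vs RTX 5070 Ti $749, similar raster tier
    print(1 - 599 / 749)   # ~0.20 -> roughly 20% less money at MSRP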
I'm pretty torn on self-hosting 70B AI models on a Ryzen AI Max with 128GB of RAM. The market seems to be evolving fast. Outside of Apple, this is the first product to really compete in the self-hosted AI category. So... I think a second generation will be significantly better than what's on offer today. Rationale below.<p>For a max-spec processor with RAM at $2,000, this seems like a decent deal given today's market. However, it might age very fast, for three reasons.<p>Reason 1: LPDDR6 may debut in the next year or two, and this could bring massive improvements to memory bandwidth and capacity for soldered-on memory.<p>LPDDR6 vs LPDDR5: data bus width - 24 bits vs 16 bits; burst length - 24 vs 16; memory bandwidth - up to 38.4 GB/s vs up to 6.7 GB/s. (Rough math on what bandwidth means for 70B models at the end of this comment.)<p>- CAMM RAM may or may not maintain signal integrity as memory bandwidth increases. Until I see it implemented for an AI use case in a cost-effective manner, I am skeptical.<p>Reason 2: It's a laptop chip with limited PCIe lanes and a reduced power envelope. Theoretically, a desktop chip could have better performance, more lanes, and be socketable (although I don't think I've seen a socketed CPU with soldered RAM).<p>Reason 3: What does this hardware look like when repurposed down the line, compared to the alternatives?<p>- Unlike desktop or server counterparts, which can have higher CPU core counts and more PCIe/IO expansion, this processor and its motherboard are limited for repurposing later as a server to self-host software other than AI. I suppose it could be turned into an overkill NAS with ZFS and an HBA controller card in a new case.<p>- Buying into the Framework desktop is pretty limited by the form factor. A next generation might be able to include a fully populated 16x slot and a 10G NIC; that seems about it if they're going to maintain the backward-compatibility philosophy given the case form factor.
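To make the bandwidth point concrete, here's a rough back-of-envelope sketch of how memory bandwidth caps single-stream token generation. All numbers are assumptions on my part: ~256 GB/s for the current 256-bit LPDDR5X configuration, an arbitrary ~500 GB/s for a hypothetical LPDDR6 successor, and a 70B model quantized to roughly 4 bits per weight.

    # Decode is roughly memory-bandwidth-bound: each generated token has to
    # stream the full set of weights from RAM once.
    def tokens_per_sec(bandwidth_gb_s, params_b, bits_per_weight):
        model_gb = params_b * bits_per_weight / 8   # weight footprint in GB
        return bandwidth_gb_s / model_gb            # upper bound; ignores overhead

    print(tokens_per_sec(256, 70, 4))   # ~7 tok/s  (assumed ~256 GB/s, today's LPDDR5X)
    print(tokens_per_sec(500, 70, 4))   # ~14 tok/s (hypothetical LPDDR6 configuration)

Real-world numbers will come in lower (KV cache, activations, compute limits), but it shows why a generational bandwidth bump matters more than anything else for this use case.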
The RX 9070 seems perfect for compact builds on a power budget, but I can't see a single two-slot, two-fan card from partners so far. They all look like massive three-slot cards.
So Framework launches with the Ryzen AI Max+ 395 with Radeon 8060S graphics, which is RDNA 3.5.<p>RDNA 4 has a 2x performance gain over RDNA 3.5 at FP16 (4x with sparsity).<p>It just makes the picking and choosing harder. Let's see what Project DIGITS brings once it launches.
It seems very silly to me to change the naming scheme just for the 9000 series when they're going to have to change it again for the next series. Well I suppose they could pull an Intel and go with RX 10070 XT next time. I guess we can be thankful that they didn't call it the AMD Radeon AI Max+ RX 9070 XT.
> AMD Radeon RX 9070 XT: 64 CUs, 16GB, 2.4 GHz game clock, up to 3.0 GHz boost, 256-bit bus, 64 MB Infinity Cache, 304W, $599<p>> AMD Radeon RX 9070: 56 CUs, 16GB, 2.1 GHz game clock, up to 2.5 GHz boost, 256-bit bus, 64 MB Infinity Cache, 220W, $549<p>Ignoring the awful naming scheme that marketing cooked up, I really do love the price point here. Very competitive with the joke offerings from NVDA (the 5070 and 5080). Looking forward to the benchmarks.<p>I've been itching to upgrade my gaming PC for quite a while now, but the issues with NVDA (12VHPWR cable issues, non-competitive pricing, paper release, missing ROPs, QC issues, ...) have encouraged me to put it off until later.
Hopefully it's finally on par with Nvidia, with hardware BVH traversal for ray tracing instead of those horrible GLSL shaders (or at least publish and provide optimized GPU machine code or similar).
Their reveal video[0] from an hour ago.<p>[0] <a href="https://www.youtube.com/watch?v=GZfFPI8LJrc" rel="nofollow">https://www.youtube.com/watch?v=GZfFPI8LJrc</a>
My hot take is that the 9070 XT at $600 will do OK as long as they can ship at MSRP and Nvidia can't, but it would have been more impressive at $500 or even $550.<p>The 9070 non-XT seems DOA at $550.<p>The most I've ever spent on a GPU is about $300, and I don't really see that changing any time soon. (And that was for a 70-class card, so...)
The problem is that AMD is nowhere close to DLSS performance/quality; AMD cards are good at rasterization, but that's not enough. The other thing is that most games don't implement FSR, because 90%+ of cards are Nvidia.