The Apple M1 Ultra SoC achieves 87% of the combined performance of an Intel 12900K and an Nvidia RTX 3090 while consuming only 34% of the power. The entire M1 Ultra chip draws 225 W at full load and 11 W at idle. Forget the Nvidia card and the other PC components: the 12900K alone can draw 250 W, and the Ryzen 5950X 140 W.

This is a seismic shift. Can Intel and the other players like AMD and Nvidia catch up to the performance per watt that Apple has on their hands?
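A quick back-of-the-envelope check of those figures, taking the quoted 87%/34% ratios at face value (they come from Apple's own comparison, not independent benchmarks):

```python
# Sanity-check the quoted figures; both ratios are assumptions from the
# comment above, not independently measured numbers.
perf_ratio = 0.87      # M1 Ultra vs. 12900K + RTX 3090 combined
power_ratio = 0.34     # M1 Ultra power vs. that combo
m1_full_load_watts = 225

print(f"perf-per-watt advantage: {perf_ratio / power_ratio:.1f}x")
print(f"implied combo power draw: {m1_full_load_watts / power_ratio:.0f} W")
# -> roughly a 2.6x perf/watt edge, and an implied ~660 W for the PC combo
```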
You're looking in the wrong place. The magic comes from TSMC, not Apple. Apple's major innovation is in using their unrivaled bank account to pay TSMC for exclusivity on their newest fabrication technology. As competitors gain access to that TSMC technology they will match or beat Apple's performance. And if Intel succeeds in reclaiming the fabrication technology lead from TSMC (we'll see) then Intel will beat Apple's performance again.
The picture you're painting is far too simple.

There are many metrics that play a role for different applications.
A modern Nvidia card is orders of magnitude faster than any CPU, and faster than the M1 GPUs. An Nvidia A100 card has HBM memory with ~2 TB/s of bandwidth.

On the other hand, there are areas where a ~5 GHz Intel CPU is competitive, or where even older server chips with AVX can compete.

So let's see what happens when Intel and AMD step down to smaller processes and switch to much faster memory (DDR5, as Apple has done). Maybe the gap is large in energy consumption, but not in total performance.
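A rough roofline-style calculation shows why that HBM bandwidth matters as much as raw compute. The A100 numbers below are approximate, assumed for illustration:

```python
# Rough roofline check: when does HBM bandwidth, not compute, set the
# ceiling?  Approximate A100 figures, assumed for illustration.
peak_flops = 19.5e12   # FP32 peak, ~19.5 TFLOP/s
bandwidth = 2.0e12     # HBM, ~2 TB/s

ridge = peak_flops / bandwidth
print(f"break-even arithmetic intensity: {ridge:.1f} FLOP/byte")
# Kernels doing fewer than ~10 FLOPs per byte moved (most elementwise
# and memory-heavy ops) are bandwidth-bound, so the HBM is doing at
# least as much of the heavy lifting as the ALUs.
```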
> achieves 87% of the performance of the Intel 12900K and Nvidia RTX 3090 combined

This sounds insane (not in a good way). A 3090 destroys the M1 Ultra in tasks like machine learning, coming in 5x-10x faster.
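For anyone who wants to check this themselves, here is a minimal sketch using PyTorch, which supports both CUDA and Apple's MPS backend. The matrix size and iteration count are arbitrary illustrative choices:

```python
import time
import torch

def bench_matmul(device, n=4096, iters=10):
    """Time n x n float32 matmuls; .cpu() forces the async backend to finish."""
    x = torch.randn(n, n, device=device)
    (x @ x).cpu()  # warm-up and sync
    t0 = time.perf_counter()
    for _ in range(iters):
        y = x @ x
    y.cpu()  # synchronize before stopping the clock
    dt = time.perf_counter() - t0
    print(f"{device}: {2 * n**3 * iters / dt / 1e12:.2f} TFLOP/s")

devices = ["cpu"]
if torch.cuda.is_available():
    devices.append("cuda")   # e.g. an RTX 3090
if torch.backends.mps.is_available():
    devices.append("mps")    # Apple-silicon GPU
for dev in devices:
    bench_matmul(dev)
```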
1. Apple literally pays billions to TSMC to build a new fab.

2. In return, Apple gets exclusive rights to those new fabs for the first two years.

3. As a result, Apple gets access to the performance benefits of the newest nodes two years before anyone else.

4. Additionally, since Apple's chips are SoCs, EVERYTHING (GPU, memory controller, memory, CPU, etc.) gets fabbed on the newest, highest-performing node. This has never been done before; even Nvidia's latest chips, due to cost, are typically on ~4-year-old nodes, along with their memory controllers, etc.

You can't overstate the massive benefit of having the *entire* SoC fabbed on a node that no other company can gain access to for 2+ years.
For many years, AMD GPUs were cheaper and equally or more performant than NVIDIA GPUs. Did that stop NVIDIA's lead in AI training? No; software support is the key component.

It's the same with Apple's M1. I don't know anyone who is using it. People need obscure x64 instructions and CUDA, and that means you buy an AMD CPU + NVIDIA GPU as the cheapest high-performance combination.

Also, I expect that the 5nm process is responsible for a large part of the power savings in the M1, so I would expect an AMD CPU on TSMC's 5nm to be pretty close in terms of performance per watt. It's just that the mainstream market isn't willing to pay the premium price for 5nm yet. So AMD produces "good enough" at a price that people are happy to pay, which means they need to hold off on going 5nm for now.
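The lock-in shows up in the first lines of most ML codebases: the canonical device-selection idiom assumes CUDA first, and Apple's backend is a recent opt-in addition. A sketch of the pattern (shapes here are just placeholders):

```python
import torch

# The idiom found in countless training scripts: CUDA first, CPU fallback.
# Apple's "mps" backend only became an option in PyTorch 1.12 (mid-2022),
# years into the CUDA ecosystem's head start.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model_input = torch.randn(8, 3, 224, 224, device=device)
```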
Yes, but the reasons have to do with business and investment as much as technology.

Apple utterly dominates the mobile phone CPU landscape, and it's hard to see that changing. The reason is that they capture a huge slice of the profits in mobile phones. Something like 90%. For a while they captured over 100% of profits, because so many of their competitors were operating at a loss. This enables them to make investments their competitors can't. Also, because their competitors use commodity chips developed by Qualcomm and Samsung, none of the Android vendors can use those chips to differentiate their products, which means that even for flagship phones the CPU isn't a unique selling point, so they can't justify buying super-premium chips. Without the guaranteed sales, the chip vendors can't justify investing in making super-premium chips. They've been stuck in this rut for almost a decade now, and still haven't found a way out. Success breeds investment, but without investment they can't achieve success. It's a vicious cycle, compounded by a prisoner's dilemma.

With desktop and server chips these factors don't apply. Apple will never dominate the desktop and server CPU market because they've positioned themselves as a niche premium brand. They don't even make servers, and the desktops they do make are ridiculously high end. They can't go down-market without eviscerating their margins. There are still huge amounts of money to be made in those sectors, so vendors like Intel and AMD can afford to put massive investment into new designs. They may be behind now, in some ways, but I don't see any barriers to them competing in the near to long term.
Intel is investing in RISC-V [0] as well as working on its own chips.

But this kind of question is not new. 10-20 years ago, nobody thought ARM could catch up to Intel. Also, is your question just about laptops and mobile devices? I don't see Apple taking on the server market, where performance per watt is perhaps less important. Finally, lots of people will be using non-Apple hardware for a long time (think of anyone who isn't in engineering at a company; they will probably be on a Windows system), which means there is guaranteed revenue for these other players to compete with and, possibly, catch up.

[0] https://www.zdnet.com/article/intel-invests-in-open-source-risc-v-processors-with-a-billion-dollars-in-new-chip-foundries/
In general, what I see as a massive issue for ARM, and a big win for x86, is standardization (or, in ARM's case, the lack of it).

- Can you run an ARM system on any ARM processor, or are you limited by core architecture, endianness (BE/LE), and RAM addressing, forcing you to recompile for a specific SoC? I honestly don't know the exact limitations.

- Can you boot any ARM processor in one specific way, like an x86 processor? No, you can't. Every ARM processor has its own booting mechanism, and you basically need to bend your system to it.

This lack of standardization is the reason why Android phones do not have a universal Lineage OS, but rather a build for phone X, a build for phone Y, and a build for tablet Z. On the other hand, this basic standardization is what will keep x86 alive for decades to come.
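The asymmetry is visible even from userspace on Linux. On x86, firmware exposes standardized SMBIOS/DMI tables that look the same on every board; on ARM you typically identify the machine through its board-specific device tree (modern SBSA servers with UEFI/ACPI are the exception). A small sketch, assuming typical Linux paths:

```python
from pathlib import Path

def machine_model():
    # x86: SMBIOS/DMI is a standard firmware interface, same on every board
    dmi = Path("/sys/class/dmi/id/product_name")
    if dmi.exists():
        return dmi.read_text().strip()
    # ARM: the model string comes from a per-board device tree blob,
    # which is exactly the per-device fragmentation described above
    dt = Path("/proc/device-tree/model")
    if dt.exists():
        return dt.read_text().rstrip("\x00")
    return "unknown"

print(machine_model())
```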
Performance isn't everything.

Even if the comparisons to the RTX 3090 were legit, all that power is still "trapped" inside a Mac.

As the main reasons to have that level of power are gaming and VR, you'd probably be better off having a bit less performance, but on a Windows machine.
Reading this thread, the delusional thinking is thick!

I'm hearing that we need to wait until Intel moves to 5nm to really compare. Which means that the whole PC market cedes the crown to Apple, who is not slowing down. Apple is on its second generation on 5nm, and the x86 crowd has yet to show any response!

That it's only because Apple is on 5nm that they're competitive. Which is crazy, since we can directly compare slightly older chips made on a 7nm process, and Apple comes out way ahead.

That competitors like Qualcomm are not optimizing for performance. The reality is much simpler: the benchmarks are this bad from Qualcomm because they're optimizing for their wallets.

The reality is simple: for a decade, Apple has had the fastest phones and the fastest tablets. Are you PC guys ready to admit Apple has the fastest computers?
<i>"Qualcomm Confirms Nuvia Arm Chips Will Be in PCs by Late 2023"</i><p><a href="https://www.tomshardware.com/news/qualcomm-confirms-nuvia-arm-chips-late-2023" rel="nofollow">https://www.tomshardware.com/news/qualcomm-confirms-nuvia-ar...</a><p><i>"Qualcomm acquired Nuvia in January 2021. The processor startup was founded by ex-Apple engineers who wanted to turn their talents to Arm-based system-on-chips (SoCs) for servers. Just a few months later Qualcomm provided an extensive update on its plans for Nuvia-technology SoCs, and it publicly pinned its hopes on addressing the always-connected PCs (ACPCs) market with a processor that could get in the ring and trade blows with the Apple M1. This could be an exciting introduction for the Windows ecosystem, if all goes to plan."</i>
The M1 is packed to the brim with top-of-the-class technology (a smaller node, a big ROB, big caches, fast buses, very complex prefetching and branch prediction...) and has the advantage that it was basically written from scratch over a decade or so, evolving from simple mobile-grade processors into a full-grade CPU. But it is not magic; competitors can certainly catch up within a few years.

A lot of Apple's best engineers left and joined the competition, and a whole lot depends on them. The second thing that needs to happen is a serious restructuring of existing processors, to the point of rewriting parts from scratch. It will take some time, but there are companies with the budget and the know-how to do it.
This isn't the first time there's been a shift in who claims the various performance thrones, and it certainly won't be the last. It might take a while before there's another king on the CPU-with-integrated-GPU throne, but someone will catch up eventually.

Back in the 90s, there were more contenders than just AMD and Intel. Apple was using PowerPC chips, and on the x86 side there were several other competitors. Sure, it's been a bit one-sided since Apple went the Intel route, VIA faded into obscurity, and Transmeta went belly-up, but something else re-appearing as a viable, competitive platform is neither unprecedented nor unexpected.
Apple Silicon:

Be a node ahead of the competition.

Solder memory on-package.

?????

Profit!

Not that Apple Silicon isn't impressive, but I think we ignore its shortcomings and take a lot of what other processors do for granted.

Apple Silicon often struggles on workloads that don't use some sort of hardware acceleration. If you never go off the beaten path, it is very compelling.
Considering that no one with an Apple M1 chip is gaming or training ML models with it, I'm not even sure what it means that these are supposedly better. What are people even using the extra performance for? Keeping thousands of tabs open simultaneously?
Given that whatever Apple has is not going into 80% of the desktop market, 60% of the mobile market, 100% of game consoles, or 100% of the server market, it matters much less than some Apple aficionados make out.
I hope so, but I'm not sure.

Technology is created by people, and I believe a big chunk of Apple's silicon team left last year.

So I hope we will see good performance from non-vertically-integrated CPUs soon, i.e., companies offering devices using a Qualcomm or similar CPU which can run Windows, Linux, etc., not just devices tied into the Apple ecosystem.
Apple will stay in the lead as long as they pay TSMC for a virtual monopoly on the smallest process nodes. How it will look when that situation changes is anyone's guess. Apple is likely to continue to draw the least power, as they have spent a lot of effort on that, but beyond that it is impossible to say.
I don't think there is a technical reason why a chip based on the x86 ISA could not be just as efficient as one based on Apple's ARM variant.

And AMD (and probably Intel soon) has access to the same process technology as Apple, so I think it is only a matter of time before you see a catch-up.
The main issue, IMHO, is that competitors are not choosing the right combination of cores and peripherals when designing SoCs. Apple is doing that much better.

For example, I am not interested in 2x A53 and 2x A72 cores. I want an octa-core with the fastest A-series cores available. Every time I tried to find that, I ended up in the server-processor world, with a price tag to match.

Stuff like 4K video decoders and TPUs is of no interest to me, yet multiple I2S ports or TDM are missing on typical mobile SoCs.

AMD has some seriously interesting CPUs these days, but unfortunately it seems hard to get access to parts if you are a small-time device manufacturer...
I think that if x86 got rid of its legacy instructions, it could shrink its cores, and the side effects would be performance-per-watt gains and maybe also raw performance (smaller cores). If you need the legacy instructions, you could just emulate them; most consumer PCs don't need them.

That's the biggest difference between x86 and ARM: ARM has made plenty of breaking changes across its versions, while x86 hasn't (I'm not sure there has been any breaking change in at least the last 20 years).
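A toy illustration of that trap-and-emulate idea: common instructions run natively, while dropped legacy ones fall back to a software handler, Rosetta-style. The "ISA" and handlers here are hypothetical, purely for illustration:

```python
# Toy trap-and-emulate loop: "native" ops run directly, dropped legacy
# ops are resolved by a software handler (hypothetical mini-ISA).
NATIVE = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

LEGACY_EMULATION = {
    # e.g. an old BCD-style instruction nobody compiles for anymore:
    # AL = AH * 10 + AL, truncated to 8 bits
    "aad": lambda hi, lo: (hi * 10 + lo) & 0xFF,
}

def execute(op, a, b):
    if op in NATIVE:             # fast path: real silicon
        return NATIVE[op](a, b)
    if op in LEGACY_EMULATION:   # slow path: software handler
        return LEGACY_EMULATION[op](a, b)
    raise ValueError(f"illegal instruction: {op}")

print(execute("add", 2, 3))  # 5, "native"
print(execute("aad", 0, 7))  # 7, "emulated"
```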
Apple's hardware is proprietary, and they don't offer (or, probably, plan to offer) server chips. I think what would make these stats more interesting is if we saw a shift in the thin-client vs. fat-client debate back to fat clients. Perhaps these changes will bring us into that era?

Amazon's ARM-based offerings are pretty interesting, as are the other non-x86 initiatives. But I don't see x86 going away in my day job any time soon.
If your views on technological progress are informed by the 80s and 90s, and you think it is an exponential curve, then it seems unlikely that today's leaders will ever be caught.

If your views on technological progress are informed by the 00s and 10s, and you think it is an S-curve, then today's leaders might never be caught in an absolute sense, but the competitors will get arbitrarily close eventually.
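The difference is easy to see numerically: model a leader and a follower who trails by a fixed lag on each kind of curve (the constants below are arbitrary, purely for illustration):

```python
import math

def logistic(t, ceiling=100.0, rate=0.8):
    """S-curve: growth slows as it approaches the ceiling."""
    return ceiling / (1 + math.exp(-rate * t))

def exponential(t, rate=0.8):
    return math.exp(rate * t)

LAG = 2.0  # follower trails the leader by two "years"
for t in (2, 5, 10, 15):
    s_gap = logistic(t) - logistic(t - LAG)
    e_gap = exponential(t) - exponential(t - LAG)
    print(f"t={t:>2}: S-curve gap {s_gap:6.2f}, exponential gap {e_gap:12.1f}")
# The S-curve gap shrinks toward zero over time; the exponential gap
# grows without bound, which is exactly the distinction drawn above.
```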
Seismic shift? Seems to be pretty much on par with current AMD and Intel offerings [1]. Let's see how Apple holds up when AMD gets on the same node.

[1] https://www.youtube.com/watch?v=FWfJq0Y4Oos
The GPU is not that fast. It is at heart a mobile architecture that excels when the application takes that into account. Otherwise, it is mediocre. It is also worth noting that performance does not scale linearly with power. You can often get something like 75% of the performance at 50% of the power.
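That non-linearity follows from how dynamic power scales with frequency. A common rule of thumb, assuming voltage scales roughly with frequency inside the DVFS range (an approximation, not a law):

```python
# Dynamic power ~ C * V^2 * f; if V tracks f, then P ~ f^3.
for f in (1.00, 0.79, 0.75, 0.50):
    print(f"{f:.0%} clock -> ~{f**3:.0%} of full power")
# ~79% of the clock already lands near 50% power, which is where the
# "75% of the performance at 50% of the power" intuition comes from.
```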
IMHO they can, as long as they agree to drop some amount of backwards compatibility.

Modern CPUs being able to run DOS software is certainly nice, but it's probably preventing Intel/AMD from performing some aggressive restyling and optimization/refactoring of their ISA and its implementation in silicon.

Apple, on the other hand, happily deprecates its own OS releases and the hardware they support. Backwards compatibility is then achieved in software (Rosetta and such).
Apple fans sure love pretending other chips are as slow as theirs.

> The Apple M1 Ultra SoC chip achieves 87% of the performance of the Intel 12900K and Nvidia RTX 3090

Benchmarks not run by Apple do not support this claim.

In reality it is not competitive with five-year-old chips.
Well, if the initial M2 machines are anything to go by, then x86 really has nothing to worry about. Apple will damage the reputation of their systems all by themselves.
The question is "Would they, and do they need it?"<p>For as long as it can't run Windows, it's not a competitor.<p>A mass market chip will never survive the yield on such extremely large dies.<p>Most of mass market chips will be many times smaller physically to make money.<p>Apple does not make money on these chips.
Most of the performance advantage of Apple silicon is that it puts the CPU and GPU millimeters from all of their memory (what would be separate VRAM and DRAM in traditional architectures). When competing purveyors start to do this with their own clever packaging, they will start to approach Apple's MIPS/watt.

Emulation on the M1 is outstanding. The spatial locality of main memory nullifies much of the emulation penalty. Apple has come close to what Transmeta tried to do.