One of my best friends was high up in the M1 project (don't want to get too specific, but he works at Apple).

I am not surprised by the M1 performance numbers at all. Granted, I have a degree in this area (silicon design) and many friends in the industry. But it's the sort of thing where, if you're paying any attention, it's just inevitable.

(1) Intel has been a mess for a while. Multiple canceled high-profile projects (not announced publicly), some high-profile process technology missteps, and frankly... they just feel kind of rudderless. A lot of my more career-minded friends have jumped ship. There just doesn't seem to be a concrete goal they're pursuing. They've had what, three CEOs in the last few years?

(2) Apple's semiconductor team is really good. REALLY good. It was a Steve Jobs-level initiative that started with the acquisition of PA Semi, and they've assembled a lot of the best people in the industry.

(3) Apple sells something like 10-100x as many phones as computers each year. This means there's much, much more scope for high-budget R&D on the phones. And in this case, they took a lot of what they learned building systems-on-chip for phones and brought it back to PCs.

(4) It's pretty obvious the architecture of PCs was overdue for a bit of a rethink. Just look at a PC mainboard. There's all kinds of shit on there: lots of clocked digital electronics, BIOS chips, memory controllers, real-time clocks, a giant energy-eating PCIe bus, etc. If you stop and think about it, we've probably reached the point where the whole thing needed to be repackaged into a single part. With the exception of overclockers and hardcore gamers, most people don't upgrade their CPU or memory, so right there you can remove a bunch of clunky edge connectors and all their bus logic. This means the whole thing can run on fewer synchronous clocks, which dramatically improves power efficiency and also (I'm not sure about this, but it stands to reason) performance. There's also a completely unified memory model (one flat pool of RAM shared by the CPU and GPU), so no schlepping everything back and forth between GPU and CPU memory. Huge performance improvement right there, just by eliminating stupid legacy bullshit we don't need anymore. (There's a short sketch of what that looks like from the software side at the end of this comment.)

As a software guy, it's like they just ripped out a bunch of legacy stuff and got rid of all the unnecessary and complex silicon.

No less important, there was a movement in academia about ten years ago toward using FPGAs (basically reprogrammable silicon) for special-purpose tasks like image processing, DSP, and graphics. Custom silicon is *way* more efficient (in both power and performance) than general-purpose CPU silicon. If you look at the M1 design, they bundled a bunch of custom blocks into a single package, so rather than just a CPU, you get a CPU, memory controller, GPU, image processor, etc. on a single piece of silicon.

(5) And finally, they did a bunch of sensible things with the ARM CPU that wouldn't surprise anyone with an undergraduate degree in computer engineering. I read something about large reorder buffers and a few other things.

I'm not in any way minimizing this achievement. It is a very big deal. But like a lot of what Apple does, it's a relatively straightforward idea, executed to a very, very high level of excellence by a very good team. And it's interesting how all these little 5% improvements here, there, and everywhere culminated in a much larger, qualitative degree of excellence.
It all feels very "Apple".
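To make the unified-memory point a bit more concrete, here's a minimal sketch using the public Metal API in Swift. This is not anything from the M1 team, just an illustration of what "no schlepping between CPU and GPU memory" looks like from the software side; the buffer size and variable names are made up for the example.

```swift
import Metal

// On Apple silicon the CPU and GPU share one pool of physical RAM, so a
// buffer created with .storageModeShared is visible to both sides with no
// staging copy. (Names and sizes here are just illustrative.)

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let count = 1_000_000
let byteLength = count * MemoryLayout<Float>.stride

// One allocation, visible to both CPU and GPU.
guard let buffer = device.makeBuffer(length: byteLength, options: .storageModeShared) else {
    fatalError("Buffer allocation failed")
}

// The CPU writes straight into the same memory the GPU will later read.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count {
    values[i] = Float(i)
}

// From here you'd hand `buffer` to a compute command encoder. There is no
// copy-to-VRAM step, which is the "no schlepping" part of the argument.
// On a machine with a discrete GPU you'd typically allocate a private
// buffer instead and blit the data across the PCIe bus first.
```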