CPU cores are pretty much a done deal IMO (unless Nuvia's hype is real; I guess we'll see about that soon). For the last 10, maybe 20 years, CPUs have simply grown bigger while doing more or less the same things.

CPUs keep pushing wider SIMD (SSE -> AVX -> AVX512), bigger out-of-order execution buffers, larger register files, deeper branch prediction, more cache. The biggest advancement seemed to be Intel's uop-cache in Sandy Bridge (a concept first tried in the Pentium 4, so not really a "new" innovation). (There's a quick sketch of what "wider SIMD" looks like in practice at the bottom of this comment.)

Even ARM chips are more of the same. Wider SIMD (see the A64FX, which powers the #1 supercomputer in the world, with 512-bit SVE), deeper out-of-order execution, better branch prediction. It's rare to see something new.

----------

In contrast, whenever I look at GPU architectures, everything is hugely different. True, GPUs all have a "SIMD focus", but there are so many exciting things about SIMD that just haven't been explored yet.

NVidia has Tensor cores and Raytracing cores. It was already crazy that NVidia implemented a full matrix multiplication in Volta's FP16 pipeline, but Ampere is now pushing sparse matrix multiplication at the chip level (!!).

AMD GCN and RDNA push GPU/SIMD architecture in a different direction. While a chunk of RDNA can be described as "more of the same", the "SubVector execution" feature substantially changes how Wave64 wavefronts are executed, saving register space and potentially giving a better memory access pattern. Innovation continues in the AMD space for sure.

Intel finally enters with their GPU / SIMD architecture, and they start off with... SIMD8 + SIMD2 (instead of the Wave32 of RDNA/NVidia or Wave64 of GCN). And Intel is minimizing hardware scheduling (saving on die area), relying on software to schedule the instructions instead.

Three very, very different architectures. Three very different ideologies for how SIMD compute / GPUs should be built. The SIMD / GPU architecture space remains more innovative than the CPU architecture space by a longshot. It's exciting to see.

--------

I'm not saying that CPU advancements are dead per se. AMD did make chiplets a hot commodity (but chiplets were already experimented with in POWER5, and they seem like a natural evolution of dual-socket or NUMA architectures... just cheap enough for a consumer now). Intel's new Tremont Atom core has this cool dual-decoder design (where one decoder works on one branch and the second decoder works on a second branch in parallel). There are certainly new ideas being tested in the CPU world. But nothing as majorly game-changing as what Intel / AMD / NVidia are doing in the GPU space. (Or Google, if we include TPUs in the discussion. A TPU isn't SIMD, but it shares some degree of similarity, especially to NVidia's Tensor cores.)

I guess big.LITTLE (and Intel's Lakefield platform) was a big change in the last 10 years. But that hasn't really seen much success in the Desktop/Server markets.
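
For concreteness, here's a minimal sketch of the "wider SIMD" progression, in plain C with x86 intrinsics. The array names a, b, c and the length n are made up for illustration (n assumed to be a multiple of 16), and you'd need a compiler/CPU with the matching ISA flags (e.g. -mavx, -mavx512f). Each generation does the exact same element-wise add, just on more lanes per instruction:

    #include <immintrin.h>
    #include <stddef.h>

    void add_sse(const float *a, const float *b, float *c, size_t n) {
        for (size_t i = 0; i < n; i += 4)     /* SSE: 4 floats per op (128-bit) */
            _mm_storeu_ps(c + i, _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    }

    void add_avx(const float *a, const float *b, float *c, size_t n) {
        for (size_t i = 0; i < n; i += 8)     /* AVX: 8 floats per op (256-bit) */
            _mm256_storeu_ps(c + i, _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i)));
    }

    void add_avx512(const float *a, const float *b, float *c, size_t n) {
        for (size_t i = 0; i < n; i += 16)    /* AVX-512: 16 floats per op (512-bit) */
            _mm512_storeu_ps(c + i, _mm512_add_ps(_mm512_loadu_ps(a + i), _mm512_loadu_ps(b + i)));
    }

Which is sort of my point: the shape of the code barely changes from SSE to AVX-512; the hardware just gets wider.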