It's the Infinity Fabric. AMD EPYC doesn't have 64MB of L3 cache. It has 8x8MB of L3 cache.

* If CCX#0 has a cache line in the "Exclusive" (E) state, then CCX#1 through CCX#7 all have to invalidate their copies of that line. There can only be one owner at a time, because the x86 architecture demands coherent memory.

* All eight L3 caches can hold a copy of the same data. In the case of code, this means your code is replicated 8x and uses 8x more L3 than you think it does (1MB of shared code hot on all 8 CCXes takes up 8x1MB of L3). Code is mostly read-only though, so it's shared extremely efficiently: all 8 L3 caches can serve it independently.

* Finally: if CCX#0 owns data in its L3 cache, then CCX#6 has to do the following to read it. CCX#6 talks to the memory controller, which notices that CCX#0 is the owner. The memory controller then has to tell CCX#0 to forward its data to CCX#6 (the forward is needed because CCX#0 may have modified it). This means that L3-to-L3 communication has higher latency than L3-to-RAM communication! (There's a small measurement sketch at the end of this comment.)

-------------

In the case of a multithreaded database, this means that a single multithreaded database will not scale very well beyond a CCX (6 threads in this case). L3-to-L3 communication over the Infinity Fabric is way slower, because of the cache-coherence protocol that multithreaded programmers rely upon to keep their data consistent.

But if you ran 8 different benchmarks on the 8 different CCXes, each using 6 threads, it would scale very well.

-------------

Overall, the problem seems to scale linearly-or-better up to 16-ish threads (8 threads is 6762.35, 16 threads is 13063.39).

Scaling beyond 6 threads technically spills off a CCX (3 cores / 6 threads per CCX on the 7401), but stays on the same die. There's internal Infinity Fabric traffic, but otherwise the two L3 caches inside a single die seem to communicate effectively. Possibly the L3 -> memory controller -> L3 round trip is very short and quick, since it's all on-die.

The next critical number for the 7401 is 12 threads, which spills off a die (3+3 cores per die). This forces "external" Infinity Fabric messages to start going back and forth between dies.

Going from 12 threads (10012.18) to 24 threads (16886.24) is all the proof I need. You've just crossed the die barrier, and you can visibly see the slowdown in scaling.

--------------

With that being said, the system looks like it scales (sub-linearly, but still technically better) all the way up to 48 threads. Going beyond that, Linux probably struggles and begins to shift threads around between CCXes. I don't know the details of Linux's scheduler, but there are 2 CCXes per NUMA node. So even if Linux kept the threads on the same NUMA node, they'd still take this L3-to-L3 hit across the Infinity Fabric if Linux inappropriately migrated a thread between, say, hardware thread #0 and hardware thread #20 on NUMA node #0.

That kind of migration would force a big L3-to-L3 bulk data transfer between two different CCXes (although on the same die). I'd guess that something like this is going on, but it's complete speculation at this point.
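If you want to see the cross-CCX penalty directly, here's a minimal sketch (mine, not the benchmark discussed above) of the classic cache-line ping-pong test: two threads pinned to specific cores bounce a single atomic variable back and forth, so every hand-off forces the cache line to migrate between whichever caches those cores sit behind. The file name, core-ID arguments, and iteration count are all arbitrary; it assumes Linux, gcc, and pthreads.

    /*
     * Cache-line ping-pong sketch.
     * Build: gcc -O2 -pthread pingpong.c -o pingpong
     * Run:   ./pingpong <coreA> <coreB>
     */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ITERS 1000000

    /* One cache line's worth of shared state: whoever's turn it is flips it. */
    static _Alignas(64) atomic_int turn = 0;

    static void pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    struct arg { int core; int me; };

    static void *worker(void *p)
    {
        struct arg *a = p;
        pin_to_core(a->core);
        for (int i = 0; i < ITERS; i++) {
            /* Spin until it's our turn: this read pulls the modified line
             * out of the other core's cache hierarchy and into ours. */
            while (atomic_load_explicit(&turn, memory_order_acquire) != a->me)
                ;
            /* Hand the line back: the write invalidates the other side's copy. */
            atomic_store_explicit(&turn, 1 - a->me, memory_order_release);
        }
        return NULL;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <coreA> <coreB>\n", argv[0]);
            return 1;
        }
        struct arg a = { atoi(argv[1]), 0 }, b = { atoi(argv[2]), 1 };

        struct timespec t0, t1;
        pthread_t ta, tb;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&ta, NULL, worker, &a);
        pthread_create(&tb, NULL, worker, &b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Rough number: includes thread startup, but that's noise at 1M iterations.
         * Each iteration is one full round trip of the cache line. */
        printf("cores %d<->%d: %.1f ns per round trip\n",
               a.core, b.core, ns / ITERS);
        return 0;
    }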
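On a 7401 you'd run it with a few different core pairs: two hardware threads on the same CCX, two on different CCXes of the same die, and two on different dies. If the picture above is right, each step outward should show a clear jump in round-trip time. Core numbering depends on the BIOS and kernel though, so check which logical CPUs actually share a CCX (e.g. with `lscpu -e` or the sysfs cache topology) before reading anything into a particular pair.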