AMD EPYC Performance Testing with systemd

120 points by josephscott, almost 7 years ago

9 comments

bhouston, almost 7 years ago
Fun fact: a lot of user benchmarks of Ryzen Threadripper, for example on PassMark and UserBenchmark, show lower averages than the standard performance. This is because many people are running TR with just 2 DIMMs rather than 4 DIMMs, which reduces performance by 20% or more. I almost think that AMD should have just not made TR work in this case, because the lower-than-accurate benchmarks hurt them.

I do not know exactly why it has this performance characteristic, but I've witnessed it first hand. It is very easy to reproduce with PassMark on Windows.

rossmohax, almost 7 years ago
I am pretty sure it has something to do with kernel.sched_autogroup_enabled = 1, which places processes from different sessions into different scheduling groups. The bash terminal is the session leader and all processes are part of its session, unless they explicitly break away with a setsid(2) call.
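
A minimal sketch of that "break away" step, assuming kernel.sched_autogroup_enabled = 1; the fork/setsid/exec pattern and the sleep payload are illustrative only, not taken from the article:

    /* Detach from the controlling session so the autogroup scheduler
     * treats this process as its own scheduling group. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid > 0) return 0;        /* parent exits; child continues */

        /* setsid(2) fails for a process group leader, hence the fork. */
        if (setsid() == (pid_t)-1) { perror("setsid"); return 1; }

        /* The child is now a session leader in an autogroup of its own. */
        execlp("sleep", "sleep", "60", (char *)NULL);
        perror("execlp");
        return 1;
    }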

viraptor, almost 7 years ago
Most likely the change is from cgroups, and not systemd itself. This could be verified by booting with and without cgroup_disable=cpu.
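
One way to sanity-check that flag (a sketch, assuming the cgroup-v1 layout of the 4.13/4.15 kernels tested): /proc/cgroups lists every controller with an enabled column, and the cpu row should read 0 after booting with cgroup_disable=cpu.

    /* Dump the kernel's controller table; with cgroup_disable=cpu
     * the "cpu" row should show enabled = 0. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/cgroups", "r");
        if (!f) { perror("fopen /proc/cgroups"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);  /* subsys_name hierarchy num_cgroups enabled */
        fclose(f);
        return 0;
    }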

et2o, almost 7 years ago
This is perhaps tangential to the article, but what is the advantage of running the MySQL server with more than 48 threads on a 24-core, 48-thread CPU anyway?

The fact that performance on the EPYC CPU keeps increasing on the older 4.13 kernel when you use 100 threads instead of 40 is surprising to me. On the EPYC (kernel 4.15, Ubuntu 18) and the Xeon CPU you can see it stall or decrease from ~48 threads upwards.

dragontamer, almost 7 years ago
It's the Infinity Fabric. AMD EPYC doesn't have 64MB of L3 cache. It has 8x 8MB of L3 cache.

* If CCX#0 has a cacheline in the E "Exclusive Owner" state, then CCX#1 through CCX#7 all invalidate that line in their L3 caches. There can only be one owner at a time, because the x86 architecture demands coherent memory.

* All 8 caches can hold a copy of the data. In the case of code, this means your code is replicated 8x and uses 8x more space than you think it does. Code is mostly read-only. With that being said, it is shared extremely efficiently, as all 8 L3 caches can work independently. (1MB of shared code on all 8 CCXes will use up 8x 1MB of L3.)

* Finally: if CCX#0 has data in its L3 cache, then CCX#6 has to do the following to read it. #1: CCX#6 talks to the RAM controller, which notices that CCX#0 is the owner. The RAM controller then has to tell CCX#0 to share its more recent data (because CCX#0 may have modified it) with CCX#6. This means that L3-to-L3 communication has higher latency than L3-to-RAM communication!

-------------

In the case of a multithreaded database, this means that a single multithreaded database will not scale very well beyond a CCX (12 threads in this case). L3-to-L3 communication over Infinity Fabric is way slower, because of the cache coherence protocols that multithreaded programmers rely upon to keep the data consistent.

But if you ran 8 different benchmarks on the 8 different CCXes, each of them 12 threads, it would scale very well.

-------------

Overall, the problem seems to scale linearly or better up to 16-ish threads. (8 threads is 6762.35, 16 threads is 13063.39.)

Scaling beyond 6 threads is technically off a CCX (3 cores per CCX on the 7401), but remains on the same die. There's internal Infinity Fabric noise, but otherwise the two L3 caches inside a single die seem to communicate effectively. Possibly, the L3 -> memory controller -> L3 message is very short and quick, as it's all internal.

The next critical number for the 7401 is 12 threads, which is off of a die (3+3 cores per die). This forces "external" Infinity Fabric messages to start going back and forth.

Going from 12 threads (10012.18) to 24 threads (16886.24) is all the proof I need. You just crossed the die barrier, and you can visibly see the slowdown in scaling.

-------------

With that being said, the system looks like it scales (sub-linearly, but still technically better) all the way up to 48 threads. Going beyond that, Linux probably struggles and begins to shift threads around the CCXes. I dunno how Linux's scheduler works, but there are 2 CCXes per NUMA node. So even if Linux kept the threads on the same NUMA node, they'd still incur L3 communication across Infinity Fabric if Linux inappropriately shifted threads between, say, Thread#0 and Thread#20 on NUMA node #0.

That kind of shift would rely upon a big L3-to-L3 bulk data transfer across two different CCXes (although on the same die). I'd guess that something like this is going on, but it's complete speculation at this point.
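
Running the throughput figures quoted above through a quick back-of-the-envelope calculation makes the die-crossing penalty visible; this sketch uses only the numbers from the comment:

    /* Speedup from doubling the thread count, using the quoted
     * transactions/sec figures. */
    #include <stdio.h>

    int main(void) {
        double t8 = 6762.35, t16 = 13063.39;   /* near-linear region */
        double t12 = 10012.18, t24 = 16886.24; /* crosses the die barrier */

        printf("8 -> 16 threads: %.2fx speedup\n", t16 / t8);   /* ~1.93x */
        printf("12 -> 24 threads: %.2fx speedup\n", t24 / t12); /* ~1.69x */
        return 0;
    }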

dijit, almost 7 years ago
I'm not an expert, I'm a lowly systems admin.

But I imagine the difference is the size of systemd itself and how MySQL does fork().

SystemD is 1.5 MB itself on my systems where I have it, but upstart (for example) is 148 KB on CentOS 6.

Since an AMD EPYC has roughly 64 MB of L3 cache, a larger binary would not have to be evicted from L3 cache as often.

One of Intel's generally powerful all-rounder CPUs (the 2687W v4) only has 30 MB of "Smart Cache" (which is fancy speak for: not that much).

A complete guess on my part, though.

citilife, almost 7 years ago
> I ran the same benchmark on my Intel box

What was the Intel box? I'm also wondering whether the EPYC system was configured properly.

gsich, almost 7 years ago
systemd, not SystemD

newnewpdro, almost 7 years ago
Implying that systemd is somehow the root cause of this performance disparity strikes me as ridiculous.

I've noticed a pattern over the years with anyone spelling systemd as SystemD: they tend to not really know what the hell they're talking about with regards to systemd, while possessing significant bias against the project, actively searching for reasons to disparage it.