
Java BitSet performance-flicker mystery

113 points by cospaia over 3 years ago

21 comments

CraigJPerry over 3 years ago
My gut reaction is that this[1] looks like a NUMA issue. The flip-flop performance profile immediately reminds me of a very gnarly prod issue I led the investigation of. I'd want to eliminate NUMA first from my investigation.

You can use "numactl --cpunodebind=0 --membind=0 <your original java benchmark command>" to pin the JVM to a particular NUMA node and immediately discount my notion as bunkum.

[1] All the reasons mentioned that make me think NUMA:

1. The machine affected has multiple NUMA domains.
2. The M1 is a single-NUMA-domain chip and does not exhibit the behaviour.
3. The real thing making me think this: the consistent performance flip-flop is close to the ~40% overhead of accessing remote memory vs. local that I've seen previously. You're seeing higher overheads, but your code is a much tighter loop of memory accesses than my previous experience, so I think that could explain the difference.
jbaiter over 3 years ago
Have you tried setting up a JMH benchmark? This should allow you to see if the JIT is the cause of your slowdowns. Also, running it under a profiler (I recommend async-profiler[1]) should give you a good idea of *where* the slowdown occurs, which might help you pin it down further.

[1] https://github.com/jvm-profiling-tools/async-profiler
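For concreteness, a minimal JMH harness for a benchmark like this could look as follows. This is only a sketch: Sieve.run(int) is a hypothetical stand-in for the project's actual sieve entry point.

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    // Sketch of a JMH harness; Sieve.run(int) is a placeholder for the
    // project's sieve method, not its real API.
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    @Warmup(iterations = 5, time = 1)
    @Measurement(iterations = 10, time = 1)
    @Fork(3) // separate JVM forks make run-to-run flip-flops visible
    @State(Scope.Thread)
    public class SieveBench {
        @Param({"1000000"})
        int limit;

        @Benchmark
        public void sieve(Blackhole bh) {
            // Consuming the result prevents the JIT from eliminating the work.
            bh.consume(Sieve.run(limit));
        }
    }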
cospaia over 3 years ago
I'm stumbling onto a strange Java performance irregularity, especially easy to hit with Docker on Linux. A super straightforward implementation of the sieve of Eratosthenes using a Java BitSet for storage sometimes drops in performance by 50%, even more with JDK 8. Any Java/JDK/JVM experts here who can shed some light on the mystery for me? I've been at it a while, but it is quite a bit outside my field of knowledge, and I am out of ideas for how to proceed. The linked blog article plus the Git repository are an attempt to summarize/minimize things.
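For readers who don't want to open the repository, the benchmark described is roughly this shape (a sketch of the technique, not the author's actual code):

    import java.util.BitSet;

    // Straightforward sieve of Eratosthenes: set bits mark composites.
    public class SieveSketch {
        static int countPrimes(int limit) {
            BitSet composite = new BitSet(limit + 1);
            for (int p = 2; (long) p * p <= limit; p++) {
                if (!composite.get(p)) {
                    for (int m = p * p; m <= limit; m += p) {
                        composite.set(m);
                    }
                }
            }
            // Numbers 2..limit, minus the composites, leaves the primes.
            return (limit - 1) - composite.cardinality();
        }

        public static void main(String[] args) {
            // Time repeated passes; the reported mystery is that some runs
            // settle at roughly half the speed of others.
            for (int pass = 0; pass < 20; pass++) {
                long t0 = System.nanoTime();
                int primes = countPrimes(1_000_000);
                long t1 = System.nanoTime();
                System.out.printf("pass %2d: %d primes in %.2f ms%n",
                        pass, primes, (t1 - t0) / 1e6);
            }
        }
    }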
nsajko over 3 years ago
First make sure there are no user or kernel tasks that may be hogging resources sometimes. Maybe even try disabling some peripherals and similar, at the hardware or kernel level.

Lots of stuff you could try then:

* disabling ASLR
* disabling Turbo Boost and similar CPU settings
* changing the CPU performance scaling governor (from "powersave" to "performance"): printf performance | tee /sys/devices/system/cpu/cpufreq/policy*/scaling_governor
* running the code under a real-time scheduling policy (like SCHED_FIFO) with the highest priority. If you do try this, you also need to enable actual real-time scheduling by writing "-1" to /proc/sys/kernel/sched_rt_runtime_us.

But modern CPUs are not predictable in their performance; that's why microcontrollers are usually used for hard real-time requirements. So I doubt you'll ever be able to get absolutely consistent performance across all benchmark runs.

I played similar benchmarking games myself before, and it turns out that, although I did most of the stuff described above, and my code was C++ (no JIT), big slowdowns do happen with some inevitable but predictable frequency.
mibsl over 3 years ago
Just a guess, but it kind of sounds like machine-code loop alignment could be the cause. Modern CPUs really like their jump targets 32-byte aligned.
kgeist over 3 years ago
My first thought was that it's a bug in the deoptimizer, i.e. the JIT compiler dynamically switches back to a deoptimized form to throw away invalid optimizations (or so it thinks) and apply new optimizations which are more relevant to the current load/usage patterns. [0]

I think I've seen a bug report once about this deoptimization/optimization process happening in an infinite loop, but why would it happen only under Docker on Linux, and not on the Mac? Perhaps the deoptimizer relies on heuristics which depend on the current environment.

[0] https://stackoverflow.com/a/20542675
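If the deoptimizer theory is worth chasing, running with -XX:+PrintCompilation shows "made not entrant" lines when compiled code is thrown away. A contrived sketch of my own (unrelated to the sieve) that typically provokes a speculative deopt:

    // Run with: java -XX:+PrintCompilation DeoptDemo
    // The JIT may speculate that f() only ever sees Strings; once an Integer
    // shows up, that compiled code can be invalidated ("made not entrant")
    // and recompiled for both types.
    public class DeoptDemo {
        static int f(Object o) {
            return o.hashCode();
        }

        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += f("warm");
            for (int i = 0; i < 1_000_000; i++) sum += f(Integer.valueOf(i));
            System.out.println(sum);
        }
    }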
cbsmith over 3 years ago
It'd be funny if this was just effects from hyperthreading...
srosenberg over 3 years ago
So far my attempts to reproduce the alleged performance degradation have not been fruitful. I've written up a fairly detailed gist [1] on how to get CPU performance metrics; the appendix also has a dump of C1- and C2-compiled methods (useful for comparison). I also ran on a 2-node NUMA machine; binding CPU and memory to different nodes didn't yield a repro either.

[1] https://gist.github.com/srosenberg/41611d5f40cfcbad51aa27eb0dba1af0
ww520 over 3 years ago
Seeing that the benchmarks are running inside Docker, could it be Docker related? Does Docker throttle CPU differently on different machines?

Check the temperature of the CPU. A modern CPU will slow down when it runs too hot. Also, does an anti-virus scan run from time to time? Do expensive backups or telemetry run during the benchmarks?

Reading the blog again and seeing that the results drop to exactly the same level, step-wise, it really looks like the CPU has been throttled or some quota limit has been triggered.
lokedhs over 3 years ago
This issue isn't directly related to BitSet. I have observed the same thing in my programming-language interpreter that runs on the JVM (well, it's written in Kotlin Multiplatform, so it runs on JS and natively as well).

I start the interpreter and measure the time it takes to add all the numbers below 1000000000.

The first time I run it after starting the interpreter, it always takes 1.4 seconds (within 0.1-second precision). The second time I measure, it takes 1.7 seconds, and for every invocation following that it takes 2 seconds.

If I stop the interpreter and try again, I get exactly the same result.

I have not been able to explain this behaviour. This is on OpenJDK 11, by the way.

If anyone wants to test this, just run the interpreter from here: https://github.com/lokedhs/array

To run the benchmark, type the following command in the UI:

    time:measureTime { +/⍳1000000000 }

My current best guess is that the optimiser decides to recompile the bytecode to improve performance, but the recompiled version actually ends up being slower.
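The measurement pattern is easy to express in plain Java, for anyone who wants a minimal stand-in (a sketch of the shape of the experiment, not of the interpreter itself):

    // Time the same hot loop several times in one JVM and watch whether the
    // per-pass timings step from fast to slow across invocations.
    public class StepDemo {
        static long sumTo(long n) {
            long s = 0;
            for (long i = 1; i <= n; i++) s += i;
            return s;
        }

        public static void main(String[] args) {
            for (int pass = 0; pass < 5; pass++) {
                long t0 = System.nanoTime();
                long s = sumTo(1_000_000_000L);
                long t1 = System.nanoTime();
                System.out.printf("pass %d: %.2f s (sum=%d)%n",
                        pass, (t1 - t0) / 1e9, s);
            }
        }
    }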
MattPalmer1086 over 3 years ago
I echo the earlier comments to use JMH for benchmarking. There are lots of subtle issues that JMH solves.

One thing I notice is that your sieve run doesn't return any values. Java can optimise out code that produces no effect in some circumstances.

EDIT: Although in that case, you'd expect to see it run insanely fast. Anyway, the point stands: there are lots of non-obvious issues in benchmarking.
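For example (a sketch; Sieve.run is a hypothetical stand-in that is assumed to return the prime count), accumulating and printing the results gives the loop an observable effect:

    public class NoDeadCode {
        public static void main(String[] args) {
            long checksum = 0;
            for (int i = 0; i < 1_000; i++) {
                checksum += Sieve.run(1_000_000); // placeholder: returns the prime count
            }
            // Publishing the accumulated value means the JIT cannot discard
            // the sieve work as dead code.
            System.out.println(checksum);
        }
    }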
archi42 over 3 years ago
A lot of your time is spent on the branching and on accessing the memory. So at the ASM level, memory access patterns, caching, and branch prediction will affect your performance.

My bet is on the branch predictor, since IIRC AMD has a novel branch predictor that's pretty different from the Intel branch predictor (not sure about the M1). In C-land you should try loop unrolling (in fact, a decent compiler will do that for you). If the JVM has a control for that, force it; else do it manually and pray the JIT does the right thing.

My first intuition was the BitSet's cache and memory behavior, which might also be the case for some ranges of `p`: internally, the BitSet is probably something like a chunk of memory with bitwise indexing. So to change a bit, the machine has to load a bunch of bytes into a register, set the bit, and then write that register back to memory. This is bad(*) if you want to set e.g. bits 0 to 31 in your BitSet, because you now have 64 memory accesses instead of two. This might affect you for small p, but with p >= 64 the access stride will be larger than 64. Thinking about it, in fact, this could be something that might throw off a naive JIT optimizer. I would have to think a little on how to improve here were your code written in C; with the JVM I have no idea how you could optimize. Maybe do two loops, one for p <= 64 and the other for p > 64. See the sketch after this comment.

Regarding cache misses: hmm, 1M bits is only 128 kByte. On any modern machine that should fit into the cache; I would still monitor for cache misses, though.

(*) Short story: I have a light case of PTSD since I once had to reimplement the storage backend for a Java SQL-engine university project because a friend was trying to be too clever and did exactly that.

Anyway, interesting question, and I wish you the best of luck with figuring out what's going on there :)

//edited a few times; sorry, I'm bad at writing a comment in one go...
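A small sketch of the word-level point (this helps only for contiguous runs of bits, not for the sieve's strided access): per-bit set() does a read-modify-write of the underlying 64-bit word for every single bit, while the bulk range overload touches each word roughly once.

    import java.util.BitSet;

    public class BitSetWords {
        public static void main(String[] args) {
            BitSet a = new BitSet(64);
            for (int i = 0; i < 32; i++) {
                a.set(i); // 32 separate word load/store cycles
            }

            BitSet b = new BitSet(64);
            b.set(0, 32); // one masked read-modify-write covers the whole run
            System.out.println(a.equals(b)); // true: same bits, different cost
        }
    }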
sk5t over 3 years ago
It could be that something else is contending for L1/L2 cache.

As others have mentioned, take JMH for a spin. Benchmark after a few hundred thousand warmup iterations.

Also, as mentioned here, thermal throttling could have a big impact. Maybe you have access to a desktop Xeon or similar?
Snoozus over 3 years ago
Would love to reproduce this; please post the full specs of the machine and OS!
stock_toaster over 3 years ago
There have been issues[1] with seccomp. Maybe try with seccomp disabled for that container?

    --security-opt seccomp:unconfined

More info here[2].

[1]: https://github.com/docker/for-linux/issues/738
[2]: http://mamememo.blogspot.com/2020/05/cpu-intensive-rubypython-code-runs.html
dnautics over 3 years ago
I ran into the same problem with Docker in Zig; Podman and on-metal had no performance regression, but Docker did. I kind of gave the fuck up on Plummer's sieve shootout at that point because, between this and other performance regressions I was finding (like CPU throttling), I felt like I was fighting the stupid rules of the contest more than I was discovering things about performance.

Anyway, for the author: try running it in Podman and see if it eliminates the perf regression.
renewiltord over 3 years ago
What happens if you invert the test condition? That is, you run it 10k times and see how long that took, rather than running for X time and seeing how many iterations you could do.

You're using System.currentTimeMillis(), which should be fast. My first thought was whether you were using Instant, where sometimes there's one call to VM.getNanoTimeAdjustment and sometimes two.

Man, this is a tough one. I'll try when I'm home.
shadowmatter over 3 years ago
Every other theory listed here is far more likely, but I would try changing your loop from using System.currentTimeMillis() to using System.nanoTime(). It's a higher-resolution time source that has no relation to wall-clock time but is more appropriate for timings. Classes like Stopwatch from Google's Guava use it.
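A sketch of the idea, with a stand-in loop in place of the sieve:

    public class Timing {
        public static void main(String[] args) {
            long start = System.nanoTime(); // monotonic, high-resolution clock
            long s = 0;
            for (int i = 0; i < 100_000_000; i++) s += i; // stand-in for a sieve pass
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("took " + elapsedMs + " ms (sum=" + s + ")");
        }
    }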
biehl over 3 years ago
How do you calculate the time?

Windows famously had low timer resolution in Java for a while. What happens if you run each round for increasingly longer periods?
anonu over 3 years ago
Garbage collection?
cplusplusfellow over 3 years ago
Got to be honest. This sort of thing is why I dropped Java and went to Golang (with all its warts).