A very clear and interesting post.

I've been trying to fit big-enough long-running stuff into JVMs for a few years, and have found that minimizing the amount of garbage is paramount. It's a bit like game programming or C programming.

Recent JVM features like 8-bit strings and no longer having a size limit on the interned pools etc. have been really helpful.

But for my workloads, the big wastes are still things like java.time.Instant and the overhead of temporary strings (which, these days, copy the underlying data; my code worked better when split strings used to just be views).

There are collection libraries for much more memory-efficient (and faster) maps and the like, and also efficient (and fast) JSON parsing. I have evaluated, benchmarked, and adopted a few of these.

Now, when I examine heap dumps and try to work out where else I can save bytes to keep GC at bay, I mostly see fragments of Instant and String, which are heavily used in my code.

If only there were a library that did date manipulation and arithmetic with longs instead of Instant :(
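For what it's worth, a minimal sketch of what I mean (the class and method names here are mine, not from any existing library): doing the arithmetic directly on epoch-millis longs so no Instant objects are allocated.

  // Hypothetical allocation-free date arithmetic on epoch-millis longs,
  // instead of allocating java.time.Instant objects for every operation.
  final class EpochMillis {
      static final long MILLIS_PER_DAY = 24L * 60 * 60 * 1000;

      // Add a number of whole days to a timestamp.
      static long plusDays(long epochMillis, long days) {
          return epochMillis + days * MILLIS_PER_DAY;
      }

      // Round a timestamp down to midnight UTC.
      static long truncateToDay(long epochMillis) {
          return Math.floorDiv(epochMillis, MILLIS_PER_DAY) * MILLIS_PER_DAY;
      }

      // Whole days between two timestamps.
      static long daysBetween(long startMillis, long endMillis) {
          return Math.floorDiv(endMillis - startMillis, MILLIS_PER_DAY);
      }
  }

Anything calendar-aware (time zones, DST, month lengths) would still need java.time, but for plain duration arithmetic longs are enough and produce zero garbage.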
I wonder how things would have stacked up with OpenJ9 - the AdoptOpenJDK project makes OpenJ9 builds available for Java 8/11/13/14 - so it should be trivial to include it in the benchmarks.

We have been experimenting with it in light of the Oracle licensing situation and it does provide an interesting set of options - AOT, various GCs (metronome, gencon, balanced), along with many other differentiators from OpenJDK, like JITServer, which offloads JIT compilation to remote nodes.

https://www.eclipse.org/openj9/docs/gc/

It doesn't get as much coverage as it should - it's production hardened - IBM has used it and still uses it for all their products - and it's fully open source.
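For anyone wanting to try it, the GC policy is selected with a single flag (see the docs link above); roughly along these lines, though check the docs for the exact options in your build - the jar name here is just a placeholder:

  # Pick a GC policy per workload (gencon is the default):
  java -Xgcpolicy:gencon    -jar app.jar   # generational, concurrent
  java -Xgcpolicy:balanced  -jar app.jar   # region-based, aimed at large heaps
  java -Xgcpolicy:metronome -jar app.jar   # soft-realtime, short pauses

  # AOT-compiled code lives in the shared classes cache:
  java -Xshareclasses:name=app -jar app.jar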
The specific workload matters a lot. I had a good experience with the Shenandoah collector on an application that generates very few intermediate objects, but once an object is created it stays in the heap for a while (a custom-made key/value store for a very specific use case). Shenandoah was the best in terms of throughput and memory utilization. Most collectors are generational, so surviving objects have to be moved from Eden to Survivor to Old. Shenandoah is not generational, and I suspect it has less work to do for objects that survive, compared to other collectors. When most objects live long enough, generational collectors hinder performance.
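If anyone wants to reproduce this, enabling Shenandoah was just a flag in my case (it was still marked experimental on the JDK builds I used; the heap size and jar name below are placeholders):

  # Enable Shenandoah on a JDK build that ships it:
  java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx8g -jar kv-store.jar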
Converting Java code to Kotlin and then compiling it with Kotlin/Native [1] is more promising from a performance point of view. Native code is always faster (assuming the compiler is good enough).

[1] https://kotlinlang.org/docs/reference/native-overview.html