This is great news, though it initially only supports Linux x86-64 and is decades late for Java desktop apps (and not having non-blocking I/O until Java 1.4 was shameful for a language explicitly targeted at a pervasively networked ecosystem).<p>In their "tiered mode", they put sampling instrumentation into the native code, and if they detect a hotspot, they regenerate fully instrumented native code from bytecode using the C1 (fast) JIT, which then allows the C2 JIT to apply its full optimizations to the code as if AOT were not involved.<p>Since the invention of tracing JITs, I've often wondered why languages don't package a compact serialized SSA form such as LLVM bitcode or SafeTSA alongside functions stored as lists of pointers to space-optimized compilations of extended basic blocks (straight-line code), similar to how some Forth compilers generate threaded code. A threaded-code dispatcher over these straight-line segments of native code would have minimal overhead, and when a simple SIGPROF-based lightweight sampler detected a hotspot, a tracing version of the dispatcher could collect a trace and then generate native code from the visited traces using the stored SSA for the basic blocks.<p>In this way, they'd have a lightweight tracing JIT for re-optimizing native code.
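<p>To make the dispatcher idea a bit more concrete, here's a rough sketch in Java; the Dispatcher class, the one-IntSupplier-per-block representation, and the HOT_THRESHOLD counter are all invented for illustration, since a real implementation would thread through native straight-line segments and sample via SIGPROF rather than counting in the dispatch loop:<p><pre><code>// Conceptual sketch only: real blocks would be native code segments,
// not Java lambdas, and hotness would come from a SIGPROF sampler.
import java.util.function.IntSupplier;

class Dispatcher {
    // Each "block" runs one extended basic block's straight-line code
    // and returns the index of its successor, or -1 on function return.
    private final IntSupplier[] blocks;
    private final int[] visitCounts;          // stand-in for lightweight sampling
    private static final int HOT_THRESHOLD = 10_000;

    Dispatcher(IntSupplier[] blocks) {
        this.blocks = blocks;
        this.visitCounts = new int[blocks.length];
    }

    void run(int entryBlock) {
        int next = entryBlock;
        while (next >= 0) {
            if (++visitCounts[next] == HOT_THRESHOLD) {
                // Hot spot detected: a tracing version of this dispatcher
                // would take over, record the chain of blocks actually
                // visited, and compile that trace.
                traceAndRecompileFrom(next);
            }
            next = blocks[next].getAsInt();   // threaded dispatch
        }
    }

    private void traceAndRecompileFrom(int blockIndex) {
        // Placeholder: this is where the stored SSA (e.g. LLVM bitcode or
        // SafeTSA) for the visited blocks would be stitched into a trace
        // and handed to an optimizing backend.
    }
}
</code></pre><p>Dispatch is just one indirect call per extended basic block, which is why the overhead stays small until the sampler decides a region is worth retracing and recompiling from the stored SSA.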
I can't see anything in the linked issue that indicates AOT compilation is coming to Java 9, or even coming at all. The issue demonstrates nothing more than an intent to bring it to OpenJDK, and it seems to be very nascent: it was only created a fortnight ago.<p>Lest the title is changed:<p><pre><code> AOT compilation is coming to Java 9 (java.net)
18 points by hittaruki 37 minutes ago</code></pre>
Slightly off topic but if you are interested in how HotSpot compiles to native code I gave a presentation at JavaOne:<p><a href="http://alblue.bandlem.com/2016/09/javaone-hotspot.html" rel="nofollow">http://alblue.bandlem.com/2016/09/javaone-hotspot.html</a><p>The presentation wasn't recorded but there is a video recorded from a DocklandsLJC event which is on InfoQ:<p><a href="https://www.infoq.com/presentations/hotspot-memory-data-structures" rel="nofollow">https://www.infoq.com/presentations/hotspot-memory-data-stru...</a>
I'm not familiar enough with compilers, but why would an ahead-of-time compiler perform worse than a just-in-time compiler for a static language? I think I'd understand if it were a dynamic language, because you can't know the types for sure until you start running the program, but are similar issues present for Java?
Why was Java ever JIT'd rather than natively compiled anyway? I hate to stick my neck out and even ask this, but I never understood why you'd want to JIT or interpret when you can just natively compile to a binary. It seems like Go has gone "back" to the future on this one, and in general their toolchain approach looked to me like the way to go.<p>I always got the sense the world is waiting for a statically typed Python that compiles to native code with Go's CPU performance. I suppose Nim might fit that bill, but it's a shame it isn't compatible with Python's libraries, or even has the breadth of libraries of a language like Go. And if possible, an imperative language that interfaces with OTP.<p>That said, I can see why Erlang/Elixir wouldn't make as much sense or even work with native AOT compilation due to its feature set (thinking of stuff like hot code reloading). But I've never grasped why Java or Python were better off with JITs or interpreters than AOT compilation. It seems like a type system such as Go's is simple enough and allows for good gains in both CPU performance and memory usage. Add in the fact that you don't need to install anything and there's less to think about when deploying, and it seems like a no-brainer. Please feel free to fill me in on this or where I went wrong.
Assuming this comes in Java 9, and compilation of code other than `java.base` is possible, will this make Java a more solid competitor to Go? I guess it partly depends on how much they optimize the compiled binary size. Go does a really good job at static compilation, so it will be tough to compete.
> Infrequently-used Java methods might never be compiled at all, potentially incurring a performance penalty due to repeated interpreted invocations.<p>That sort of makes no sense. How can you incur a real performance hit if the uncompiled method is rarely called?
How does this interact with classloading?<p>My general impression is that the design of classloaders is pretty actively hostile to making JVM startup fast.
resurrect me when it's there
<a href="https://github.com/search?p=3&q=jaotc&type=Code" rel="nofollow">https://github.com/search?p=3&q=jaotc&type=Code</a>