Title seems incorrect, or at least misleading. It is not a JIT for C++ code (i.e. not "C++ to..."), it is a JIT library for C++ that seems focused on compiling the kind of arithmetic expressions used in Bing.<p>Interestingly, it requires the functions called by the JIT'd code to be side-effect free, since it guarantees that any given function invocation will happen at least once: it evaluates both sides of all but top-level branches, so a function may run even when its branch is not taken. See "Design Notes and Warnings" in <a href="https://github.com/BitFunnel/NativeJIT/blob/master/Documentation/README.md" rel="nofollow">https://github.com/BitFunnel/NativeJIT/blob/master/Documenta...</a>
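To make that warning concrete, here is a plain C++ sketch of the evaluation semantics it describes (my own illustration, not the NativeJIT API; eager_select, f and g are made-up names): both branch functions run regardless of the condition, which is only safe if they have no side effects.<p><pre><code>  // Illustration only: a conditional whose branches are both evaluated
  // eagerly, the way the docs describe non-top-level branches behaving.
  float eager_select(bool condition,
                     float (*f)(float), float (*g)(float),
                     float x)
  {
      float ifTrue  = f(x);   // called even when condition is false
      float ifFalse = g(x);   // called even when condition is true
      return condition ? ifTrue : ifFalse;
  }
</code></pre>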
What I find amazing about NativeJIT is how small the codebase seems to be -- under 10k SLOC. Impressive!<p>Will seriously consider using this to speed up expression execution in EventQL [0] (we've shied away from LLVM so far because it's such a massive dependency).<p>[0] <a href="https://eventql.io/" rel="nofollow">https://eventql.io/</a>
This is similar to a technique used in Renjin to compile specific vector computation expressions down to straight-line JVM bytecode, which is then JITed down to machine code. It's a very powerful technique because it removes all the indirection involved in evaluating dynamic expressions and lets the processor just do its job.<p>For example, if you evaluate sum(sqrt(x^2 + y^2) * 3) in Renjin, and x or y happen to be very long vectors, then we'll jit out a JVM class for this specific expression that would look something like this in Java:<p><pre><code> class JittedComputation1E5374A3 {
   SEXP compute(SEXP[] args) {
     double[] x = args[0].toDoubleArrayUnsafe();
     double[] y = args[1].toDoubleArrayUnsafe();
     double sum = 0;
     for (int i = 0; i < x.length; ++i) {
       double xi = x[i];
       double yi = y[i];
       sum += Math.sqrt(xi*xi + yi*yi) * 3;
     }
     return DoubleVector.valueOf(sum);
   }
 }
</code></pre>
The computation is specialized to the types of x and y, so if for example x is the sequence 1:1000000 then a new class gets written that doesn't even use an array for x.<p>The speedup is so impressive that even if you don't cache the compiled expression you see dramatic improvements: <a href="http://www.renjin.org/blog/2015-06-28-renjin-at-rsummit-2015.html" rel="nofollow">http://www.renjin.org/blog/2015-06-28-renjin-at-rsummit-2015...</a>
I know it can be sandboxed, but it makes the hair on the back of my neck stand up to think that something people type into a search box on a web page gets turned into machine code and executed.. :)
Can't you use function overloading? Or function pointers? Why do you need to recompile the code? Can someone give an example of where this should be used?
Looks very similar to <a href="https://github.com/asmjit/asmjit" rel="nofollow">https://github.com/asmjit/asmjit</a>
The title is wrong. This is not a C++ to x86 JIT (which would be really cool), this is a JIT library for C++ (of which there are a number). It has its own domain-specific language for expressions. It doesn't take arbitrary C++: no standard C++ syntax, no classes, etc.<p>Ex:<p><pre><code> // NativeJIT DSL
auto & area = expression.Mul(rsquared, expression.Immediate(PI));
auto function = expression.Compile(area);
// C++
auto area = rsquared * PI;</code></pre>
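For anyone curious what a complete program looks like, here's a rough sketch along the lines of the area-of-a-circle example in the NativeJIT README -- the setup types and header paths are reproduced from that example as I remember them, so treat them as approximate and check the repo:<p><pre><code> #include "NativeJIT/CodeGen/ExecutionBuffer.h"
 #include "NativeJIT/CodeGen/FunctionBuffer.h"
 #include "NativeJIT/Function.h"
 #include "Temporary/Allocator.h"

 using namespace NativeJIT;

 int main()
 {
   // Buffers that hold the generated x64 code and the expression tree.
   ExecutionBuffer codeAllocator(8192);
   Allocator allocator(8192);
   FunctionBuffer code(codeAllocator, 8192);

   // A float function of one float parameter (r squared).
   Function<float, float> expression(allocator, code);

   const float PI = 3.14159265358979f;

   // Build the expression tree for p1 * PI.
   auto & area = expression.Mul(expression.GetP1(), expression.Immediate(PI));

   // Emit machine code and get back something callable.
   auto function = expression.Compile(area);

   float observed = function(4.0f);  // area of a circle with r^2 == 4

   return (observed > 12.5f && observed < 12.6f) ? 0 : 1;
 }
</code></pre><p>The point being: you describe the computation with the node-building API, then Compile hands you something you can call like an ordinary function in a hot loop.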