> Because PyPy is a JIT compiler its main advantages come from long run times and simple types (such as numbers).<p>It is not <i>inherent</i> to JIT compilers that they need long running times or simple types to show benefit. LuaJIT demonstrates this. Consider this simple program, which runs in under a second and operates only on strings:<p><pre><code> vals = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o"}
 for _, v in ipairs(vals) do
   for _, w in ipairs(vals) do
     for _, x in ipairs(vals) do
       for _, y in ipairs(vals) do
         for _, z in ipairs(vals) do
           if v .. w .. x .. y .. z == "abcde" then
             print(".")
           end
         end
       end
     end
   end
 end
$ lua -v
Lua 5.2.1 Copyright (C) 1994-2012 Lua.org, PUC-Rio
$ time lua ../test.lua
.
real 0m0.606s
user 0m0.599s
sys 0m0.004s
$ luajit -v
LuaJIT 2.0.2 -- Copyright (C) 2005-2013 Mike Pall. http://luajit.org/
$ time ./luajit ../test.lua
.
real 0m0.239s
user 0m0.231s
sys 0m0.003s
</code></pre>
LuaJIT is about 2.5x the speed of the (already fast) Lua interpreter here, for a program that runs in under a second.<p>People shouldn't take the heavyweight architectures of the JVM, PyPy, etc. as evidence that JITs are <i>inherently</i> heavy. It's just not true. JITs can be lightweight and fast even for short-running programs.<p>EDIT: It occurred to me that this might not be a great example, because LuaJIT isn't actually generating assembly here and is probably winning just because its platform-specific interpreter is faster. <i>However</i>, it is still the case that it is instrumenting the code's execution and paying the costs associated with attempting to find traces to compile. So even with these JIT-compiler overheads it still beats the plain interpreter, which is only interpreting.
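EDIT 2: For anyone who wants to check the original PyPy claim directly, here's a rough Python port of the same benchmark (my translation, not from any existing source) that can be timed under CPython and PyPy the same way:<p><pre><code> # Python port of the Lua benchmark above: string concatenation in
 # nested loops over 15 one-letter strings (15^5 = 759,375 iterations).
 # Run as e.g. `time python3 test.py` vs. `time pypy3 test.py`.
 vals = ["a", "b", "c", "d", "e", "f", "g", "h",
         "i", "j", "k", "l", "m", "n", "o"]

 def run():
     hits = 0
     for v in vals:
         for w in vals:
             for x in vals:
                 for y in vals:
                     for z in vals:
                         if v + w + x + y + z == "abcde":
                             hits += 1
     return hits

 if __name__ == "__main__":
     print(run())  # exactly one combination matches, so prints 1
</code></pre>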