I sometimes see people ask about translating a language like Python to Common Lisp (or another language that compiles to native code) as a kind of optimization.<p>The problem, in general, isn't that Python and languages like it lack a compiler; it's that the semantics of the language are hostile to good performance by traditional means of compilation. Doing what the programmer requests requires doing things at runtime that are hard to make fast. That's why tracing JITs are being used for languages like JavaScript.<p>The speedup you get from actually compiling Python programs comes from the CPython interpreter being pretty awful, not from compilation being a magic solution to performance problems. The IronPython guy gave a nice explanation of this at OOPSLA 2007's Dynamic Languages Symposium - maybe things have changed in CPython since then.
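To make the "hostile semantics" point concrete, here's a small sketch (mine, not from the talk) of why an ahead-of-time compiler can't specialize much: names and methods can be rebound at runtime, so almost nothing about a call site is known statically.

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())  # "hello"

# Any code, anywhere, can rebind the method at runtime, so a static
# compiler can't assume g.greet() resolves to the definition above.
Greeter.greet = lambda self: "hi there"
print(g.greet())  # "hi there"
```

A JIT handles this by compiling for the types and bindings it actually observes, with guards to bail out when they change.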
The licence (GPLv3) limits its use a bit - at least for people who prefer other licences like BSD or MIT.<p>The generated C++ source contains the following comment:<p>// This code is in part copyright Kay Hayen, license GPLv3. This has the consequence that
// you must either obtain a commercial license or also publish your original source code
// under the same license unless you don't distribute this source or its binary.
Personally I have more faith in JITs for dynamic languages such as Python. It just seems a more natural match. That said, I'm sure there are many Python programs out there that are essentially static.<p>Did anybody else notice the large number of compilers/interpreters/tools built for Python compared to many other languages? I think it might partly be the advantage of having an easy-to-parse language with well-defined semantics.
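Part of that, I suspect, is that the parser ships with the language: the stdlib `ast` module hands a tool author a well-defined tree for free, which is the same front end these compilers and analyzers build on. A minimal sketch:

```python
import ast

# Parse a snippet into an abstract syntax tree; no hand-written
# parser needed to start building a Python tool.
tree = ast.parse("x = 1 + 2")

# Walk every node and print its type name.
for node in ast.walk(tree):
    print(type(node).__name__)
```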
Here's a simple test for the curious. It's not a benchmark.<p><pre><code> import math
num_primes = 0
for i in xrange(2, 500000):
    if all(i % j for j in xrange(2, int(math.sqrt(i)) + 1)):
        num_primes += 1
print num_primes
</code></pre>
Here's the code above translated to C++ by Nuitka: <a href="http://pastebin.com/41ueyTEB" rel="nofollow">http://pastebin.com/41ueyTEB</a><p><pre><code> # CPython 2.6.6
$ time python hello.py
41538
real 0m6.377s
user 0m6.350s
sys 0m0.020s
# Nuitka & g++-4.5
$ time ./hello.exe
41538
real 0m4.573s
user 0m4.270s
sys 0m0.300s</code></pre>
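For anyone rerunning this today: the snippet above is Python 2 (`xrange`, the `print` statement). A Python 3 equivalent that should produce the same count:

```python
import math

num_primes = 0
for i in range(2, 500000):
    # Trial division up to sqrt(i); all() short-circuits on the
    # first divisor found, so composites exit early.
    if all(i % j for j in range(2, int(math.sqrt(i)) + 1)):
        num_primes += 1
print(num_primes)  # 41538
```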
In this Python-to-C++ vein, there's also Shed Skin ( <a href="http://shed-skin.blogspot.com/" rel="nofollow">http://shed-skin.blogspot.com/</a> ), which has been at it for a few years.
I was developing a compiler called unPython for a while, but I haven't released it openly yet; I plan to do so "soon". It compiles an annotated subset of Python (particularly NumPy code - the rest is either very slow or unsupported) to a C++ Python module. I'll post here once I release it.
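For those unfamiliar with the annotated-subset approach, the general idea looks something like this - note the annotation syntax here is invented purely for illustration and is not unPython's actual scheme:

```python
# Hypothetical annotated-subset style: the type comment tells the
# compiler it may emit a tight, statically typed C++ loop instead of
# generic CPython bytecode. (Syntax invented for illustration only.)
def dot(a, b):  # a: double[], b: double[] -> double
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The annotations restrict the code to a statically analyzable subset, which is what makes traditional compilation pay off.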
You should also check out Psyco: <a href="http://psyco.sourceforge.net/" rel="nofollow">http://psyco.sourceforge.net/</a><p>"Psyco is a Python extension module which can greatly speed up the execution of any Python code."
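Typical usage was just binding it to hot functions. A sketch that degrades gracefully when Psyco isn't installed (it only ever supported 32-bit CPython 2):

```python
def count_primes(limit):
    # The hot function we'd like specialized.
    import math
    count = 0
    for i in range(2, limit):
        if all(i % j for j in range(2, int(math.sqrt(i)) + 1)):
            count += 1
    return count

try:
    import psyco
    psyco.bind(count_primes)  # JIT-specialize just this function
except ImportError:
    pass  # plain CPython: same results, just slower

print(count_primes(100))  # 25
```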
A 50% speedup, or even 2x or 3x, matters to a few niches and users. But for the vast majority it's not significant enough to justify switching, accepting limitations (not 2.7/3.1), or accepting risks (is this as well tested and supported as CPython?). We'll just wait for CPython's regular speed improvements and/or for effective processing power to increase by another order of magnitude.<p>Research like this is very important. I just don't think it's wise to view it as a silver bullet for use in production.