Python is fundamentally hard to make faster because it exposes a lot of inherently slow machinery that real-world code depends on: mutable interpreter frames, the global interpreter lock, shared global state, type slots, the C ABI.<p>The only way to speed it up would be to change the language.
Ultimately, at least IMO, no attempt to speed up Python will succeed until the issue of Python's C API is addressed. This is arguably PyPy's only major barrier: if you can't run your software on it, you're not going to use it. Pyston was arguably the most serious attempt at a fast Python that maintained compatibility with the API, but Dropbox clearly didn't see the ROI they were hoping for.<p>It's looking like HPy is going to (hopefully) solve this. But finishing HPy and getting it adopted is likely to be a pretty massive undertaking.
What I really want for Python is a knob to improve startup time. I've imagined there must be a way to "statically link" dependencies so that import isn't searching the disk but just loading from a fixed location/file. There don't seem to be many resources on the net. I've found this one: <a href="https://pythondev.readthedocs.io/startup_time.html" rel="nofollow">https://pythondev.readthedocs.io/startup_time.html</a>. I tried using virtualenvs to limit my searchable import paths, and messed around with Cython in an effort to come up with a statically linked binary. But I've yet to come up with anything that really improves the startup time. Clearly I have no idea what I'm doing.
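Not a static-linking solution, but one mitigation worth knowing about is deferring module loads so startup only pays for what it actually uses. This is a sketch of the lazy-loading recipe from the stdlib `importlib` documentation (the `lazy_import` helper name is mine):

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose real import is deferred until first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")      # cheap: the module body hasn't executed yet
print(json.dumps({"a": 1}))    # first attribute access triggers the real import
```

For diagnosing where the time goes in the first place, `python -X importtime -c "pass"` (Python 3.7+) prints a per-module import cost breakdown.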
Not trying to self-promote, but this might be of interest to you. It's not a fully fleshed out implementation, but my project analyzed specific language features that affect performance: <a href="https://github.com/joncatanio/cannoli" rel="nofollow">https://github.com/joncatanio/cannoli</a>
Yuri Selivanov tweeted yesterday that Python 3.10 will be "up to 10% faster" <a href="https://twitter.com/1st1/status/1318558048265404420" rel="nofollow">https://twitter.com/1st1/status/1318558048265404420</a>
The list should probably also include mypyc: <a href="https://github.com/python/mypy/tree/master/mypyc" rel="nofollow">https://github.com/python/mypy/tree/master/mypyc</a>
Another one missing from that list is Graalpython, <a href="https://github.com/graalvm/graalpython" rel="nofollow">https://github.com/graalvm/graalpython</a>. It's in the early stages of implementation, aiming to be Python 3 on top of GraalVM.
This article appears to be a list of Python interpreters.<p>Not all of them were designed for speed. For example, Jython was also intended for Java/Python interoperability.<p>Some of the interpreters on the list haven't seen updates in a while, or don't support Python 3.x
I tend to use Python for batch jobs and things where its speed isn't that important to me. Am I alone in this?<p>When I reach for Python it's not for speed. It's because it's fairly easy to write and has some good libraries.<p>Either it's done in a few seconds, or I can wait a few hours as it runs as a background Slurm task.<p>I feel like there is a group that wants Python to be the ideal language for all things. Maybe it's because I'm not in love with the syntax, but I'm OK with having multiple languages.
I gave up trying to make Python fast since to do so you give up what makes Python good and end up writing C/Cython. On top of this, distributing Python is just... gross, at least for my use cases.<p>Eventually I found Nim and never looked back. Python is simply not built for speed but for productivity. Nim is built for both from the start. It's certainly lacking the ecosystem of Python, but for my use cases that doesn't matter.
In my opinion there is some potential there. Especially by exploiting the increasing integration of typing-oriented features (i.e. type annotations) and the interest in using those for static analysis (e.g. in mypy, but also Facebook's Pyre, Microsoft's Pyright, and many others), it might be possible to speed up execution times a bit. This is especially true if we restrict attention to a limited subset of Python, e.g. within domain-specific languages. It might not make sense to entirely reverse engineer a language that was designed to be duck-typed into a statically typed one. However, for some domain-specific applications I find performance-oriented static analysis an interesting tool.<p>To make it more concrete, here is an experimental DSL for embedded high-performance computing that uses static analysis and source-to-source (Python-to-C, actually) code transformation: <a href="https://github.com/zanellia/prometeo" rel="nofollow">https://github.com/zanellia/prometeo</a>.
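To illustrate the kind of code such tools target: a fully annotated function like the sketch below (plain Python, runs unmodified on CPython) is what typed-subset compilers such as mypyc, Cython in pure-Python mode, or prometeo-style DSLs can plausibly specialize, since the types in the hot loop are statically known and boxed arithmetic and dynamic dispatch can be avoided:

```python
def dot(a: list[float], b: list[float]) -> float:
    """Dot product written in a fully annotated, statically analyzable style."""
    total: float = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

The same source is valid input to both the stock interpreter and an ahead-of-time compiler, which is what makes the annotation-driven approach attractive: you opt in to speed without leaving the language.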
I don't know much about the other ones, but I think you'd have to say PyPy has been a success. Although to be honest, I don't know why it would be better to modify CPython vs. just using PyPy -- the JIT speedup does come with some tradeoffs (memory usage, warmup times), so it seems better just to leave that decision up to the user?
It amazes me that the stack-entwined implementation with the GIL remained the canonical version this whole time -- I would have thought the Stackless version (or something similar) would have become the default long ago. That alone would have made the 2.x-to-3.x break worth it, even if many people had to rewrite their extensions, and even if some monkey-patching were removed from the language in favor of more disciplined metaprogramming.
Whatever happened to Psyco? I remember it pretty much just working without any hassle and actually providing a noticeable speedup. All the mindshare is now on PyPy, which has received enormous amounts of engineering and still seems very rough around the edges.
The HotPy listed by OP is by Mark Shannon, the same person behind today's proposed 5x speedup.<p>Also, a relevant old post:<p><a href="https://news.ycombinator.com/item?id=17107047" rel="nofollow">https://news.ycombinator.com/item?id=17107047</a>