
How fast can we make interpreted Python?

54 points by heydenberk almost 12 years ago

9 comments

jballanc almost 12 years ago
It seems like the biggest point here is that compatibility with the extension API in Python is the anchor dragging down performance. Similarly, Ruby suffers from continued compatibility with its extension API. In the paper they compare with Lua and JavaScript, but it's worth noting that JavaScript doesn't have an extension API (well, JavaScript proper...not saying anything about node.js), and the team behind Lua are notorious for not caring about breaking API compatibility between versions.

I guess what I'm trying to say is, while this is a laudable effort, it seems what Python (and Ruby) really needs is a way to free itself from the chains of extension API compatibility.
Comment #6113380 not loaded
Comment #6113493 not loaded
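Not from the paper, but a minimal CPython-only sketch of the coupling described above: even from pure Python, code can observe the reference counts and the PyObject memory layout that C extensions bake in, which is why a compatible alternative runtime has so little room to change object representation or memory management (the ctypes trick below assumes a standard CPython build).

    # CPython-specific sketch: extensions see PyObject's layout and refcounts
    # directly, so a compatible interpreter cannot change either.
    import ctypes
    import sys

    x = object()

    # What the C API's Py_REFCNT would report (plus one temporary reference
    # held by the getrefcount call itself).
    print(sys.getrefcount(x))

    # On CPython, id() is the object's address and ob_refcnt is the first
    # field of every PyObject, so the raw count can be read from memory.
    # This is exactly the kind of layout assumption extension code relies on.
    print(ctypes.c_ssize_t.from_address(id(x)).value)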
albertzeyer almost 12 years ago
Previous discussion:

http://www.phi-node.com/2013/06/how-fast-can-we-make-interpreted-python.html

https://news.ycombinator.com/item?id=5943258
kghose almost 12 years ago
From the paper: http://github.com/rjpower/falcon/
Comment #6113167 not loaded
mlubin almost 12 years ago
If the goal is to get performance without giving up on Python's existing libraries, why not use Steven Johnson's PyCall (https://github.com/stevengj/PyCall.jl) package for Julia?
Comment #6113358 not loaded
wyuenho almost 12 years ago
While this paper is quite easy and occasionally funny to read, I'm no math whiz and need somebody to help me figure out how to read the benchmarks at the end. The author claims that converting stack-based bytecode to register-based bytecode results in an average performance improvement of 25%, but I have trouble finding where that number comes from. The charts are said to show optimized code relative to unoptimized code, so my question is: why is the unit on the y-axis a percentage, and why isn't the unoptimized code used as a baseline, with everything labeled as 100% or given an absolute average running time? The tiny gaps between the unoptimized and optimized code are confusing me.
Comment #6113314 not loaded
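One way to read charts like these, sketched with made-up numbers rather than the paper's: treat each benchmark's unoptimized time as the 100% baseline, plot the optimized run relative to it, and report the headline figure as the average speedup.

    # Hypothetical timings, not taken from the paper.
    unoptimized_s = {"richards": 1.00, "fannkuch": 0.40, "nbody": 2.50}
    optimized_s   = {"richards": 0.78, "fannkuch": 0.33, "nbody": 2.05}

    for name in unoptimized_s:
        relative = optimized_s[name] / unoptimized_s[name]   # e.g. 0.78 -> 78% of baseline
        speedup = unoptimized_s[name] / optimized_s[name]    # e.g. 1.28x faster
        print(f"{name}: {relative:.0%} of baseline, {speedup:.2f}x speedup")

    # An "average 25% improvement" would then mean speedups averaging roughly
    # 1.25x, i.e. optimized runs taking about 80% of the baseline time.
    mean_speedup = sum(unoptimized_s[n] / optimized_s[n] for n in unoptimized_s) / len(unoptimized_s)
    print(f"mean speedup: {mean_speedup:.2f}x")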
lettucecrisper almost 12 years ago
It would be good to compare Falcon with Numba: "Numba's job is to make Python + NumPy code as fast as its C and Fortran equivalents without sacrificing any of the power and flexibility of Python." Like Falcon, Numba is compatible with CPython and whatever extensions you want to use with CPython.

https://github.com/numba/numba

Intro to Numba, parts 1 and 2:

http://continuum.io/blog/numba_growcut

http://continuum.io/blog/numba_performance
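A minimal Numba sketch, using the current @njit decorator rather than anything from the linked posts; the function name and array here are made up for illustration. The decorated function is compiled to native code on first call, while the rest of the program keeps running on plain CPython with whatever extensions it already uses.

    # Minimal Numba usage sketch (hypothetical example, not from the blog posts).
    import numpy as np
    from numba import njit

    @njit
    def loop_sum(a):
        # Explicit loops like this are slow in the interpreter but compile
        # to tight native code under Numba.
        total = 0.0
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                total += a[i, j]
        return total

    a = np.random.rand(500, 500)
    print(loop_sum(a))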
Demiurge almost 12 years ago
Since it's compatible with PyObject, sounds like it can be folded into CPython? Are there any arguments against that?
Comment #6113337 not loaded
Comment #6113341 not loaded
willvarfar almost 12 years ago
I wish Psyco was ported to 2.7 64-bit :(
Comment #6113184 not loaded
bayesianhorse almost 12 years ago
The GIL must go!
Comment #6113330 not loaded