
Unladen Swallow Retrospective

175 points by jnoller, about 14 years ago

8 comments

malkia, about 14 years ago
Somewhat relevant to the topic: Mike Pall discusses (2009) the use of LLVM for Lua. He isn't ditching the approach, just pointing out the difficulties, and why he built LuaJIT the way he did:

http://lua-users.org/lists/lua-l/2009-06/msg00071.html
jnoller, about 14 years ago
See also Collin's comment on the post: http://qinsb.blogspot.com/2011/03/unladen-swallow-retrospective.html?showComment=1301186609242#c5054925394205960815
rayiner, about 14 years ago
The point about LLVM being designed for "static C-like languages" isn't totally on-point. There is an impedance mismatch between Python and LLVM, but it's less about dynamic versus static than about the nature of the stack frame.

In LLVM, most optimizations operate on SSA values (the Value class in the LLVM IR). There is some support for CSE-ing loads, etc., but its effectiveness depends heavily on your alias information. So to get good optimization out of LLVM, you've got to represent things as Values.

This is hard to do in Python. Python's lexical scoping semantics are a little bit wonky, and there are lots of scenarios in which the thread's stack frame is reified as an object. So without some heavy analysis, you end up keeping your local variables in an activation frame object on the heap, and at that point you're toast: the local variables won't be represented as SSA Values, and most of the LLVM optimizations won't do anything with them.

This is not a "dynamic language" thing per se. Lisp, which is also a dynamic language, actually maps quite cleanly to LLVM. Every binding introduced by LET is already in SSA form unless it is closed-over, assigned-to, or both.

1) If the value is just closed-over, you demote it to the function's environment and replace uses of the variable with a load from the environment vector.

2) If the value is just assigned-to, you demote it to a stack slot via ALLOCA, and LLVM's mem2reg pass will take care of re-promoting it to a Value. This latter technique is exactly what Clang does for all C local variables, so LLVM is highly tuned for handling this scenario. In C, variables that have their address taken cannot be promoted, but that cannot happen in Lisp, so every assigned-to value demoted to an ALLOCA should be promotable.

3) If a value is both assigned-to and closed over, you demote it to a heap-allocated box and replace all uses with heap references.

After this transformation, nearly every Lisp variable is an SSA Value, and the optimizers can work with them. Even if you use function calls to do generic arithmetic, etc., LLVM will happily CSE them for you as long as you mark those functions as readnone (i.e. pure).

Now, there are some things LLVM won't do for you. It can't const-propagate generic arithmetic because it doesn't know the semantics. It can't do reassociation, etc., because you're not using the ADD/SUB instructions. I don't see anything that would prevent you from doing it yourself in a custom pass, however.

In short, the criticism isn't so much that LLVM has an impedance mismatch with dynamic languages as that it only handles the bottom of the optimization stack. You still need to do the high-level, language-specific optimizations before handing things to LLVM.
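The three-way case analysis above can be sketched as a tiny dispatch. This is a hypothetical illustration only: the function name and the returned strategy labels are my own, not part of any LLVM or Lisp-compiler API.

```python
def classify_binding(assigned_to: bool, closed_over: bool) -> str:
    """Pick a lowering strategy for one Lisp LET binding,
    following the three cases described in the comment above."""
    if not assigned_to and not closed_over:
        # Already in SSA form: emit directly as an LLVM Value.
        return "ssa-value"
    if closed_over and not assigned_to:
        # Case 1: demote to the closure environment;
        # replace each use with a load from the environment vector.
        return "environment-load"
    if assigned_to and not closed_over:
        # Case 2: demote to a stack slot via ALLOCA; LLVM's mem2reg
        # pass re-promotes it to an SSA Value.
        return "alloca-mem2reg"
    # Case 3: assigned-to AND closed over: heap-allocated box,
    # with all uses replaced by heap references.
    return "heap-box"
```

A real compiler would compute the `assigned_to` / `closed_over` flags from the AST before code generation; the point here is just that the dispatch is total and only the fourth case forces a heap allocation.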
mitchellh, about 14 years ago
The blog post touched on it briefly with regard to PyPy, but I wanted to ask here, since I'm sure someone knows a lot more about this than I do. I didn't follow Unladen Swallow very closely, but it seemed like an exciting project from the get-go. It's sad to see it go, but it appears it's not in vain.

I want to ask: what lasting effects did the work that went into Unladen Swallow have on any Python implementations (or any other languages)? What "legacy" was left?
tianyicui, about 14 years ago
> Our potential customers eventually found other ways of solving their performance problems that they felt more comfortable deploying.

Just curious, what are the "other ways"?
nikcub, about 14 years ago
"Most Python code at Google isn't performance critical."

What about App Engine? The primary "cloud" platform at Google.
jbarham, about 14 years ago
"Our potential customers [within Google] eventually found other ways of solving their performance problems that they felt more comfortable deploying."

I wonder how much this includes teams within Google that have since started using Go for writing performance-sensitive applications...
18pfsmt, about 14 years ago
Are we talking about an African or European swallow?