
Have Tracing JIT Compilers Won?

76 points by nex3 about 15 years ago

7 comments

illumen about 15 years ago
Ahead-of-time compilers do work quite well for a subset of dynamic languages, just as tracing JITs work well for a subset. This has been proven by projects like shedskin, cython, and tinypyC++. Given typing information, either explicitly or through type inference, massive speed-ups are possible.

Also, tracing JITs currently do not do multi-CPU optimizations, which static compilers have been doing. They also don't do SIMD or MIMD optimizations, which compilers like gcc/icc/etc. are doing.

However, runtime assemblers, like 'Orc' and 'corepy' and older ones like 'softwire', are proving to be much faster in multimedia applications than traditional compilers or interpreters. These runtime assemblers give program authors access to the compilers/assemblers. Just as allowing people to specify typing helps, letting people construct programs at runtime can give massive speed-ups.

C has even had fast interpreters for a while: see tinycc as an example. It doesn't use a tracing compiler, but a very straightforward direct-translation compiler that happens to be faster than many interpreters.

It's quite easy to argue 'No' to this question, I think.
Comment #1180699 not loaded
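illumen's point that "letting people construct programs at runtime can give massive speed-ups" can be illustrated with a toy sketch (mine, not from the comment; the names `interpret`, `to_source`, and `specialize` are made up): instead of walking an expression tree on every evaluation, generate a plain Python function for it once.

```python
def interpret(expr, env):
    """Naively walk an expression tree: ('add', a, b) | ('mul', a, b) | var name."""
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    x, y = interpret(a, env), interpret(b, env)
    return x + y if op == 'add' else x * y

def to_source(expr):
    """Translate the same tree into Python source text."""
    if isinstance(expr, str):
        return expr
    op, a, b = expr
    sym = '+' if op == 'add' else '*'
    return f"({to_source(a)} {sym} {to_source(b)})"

def specialize(expr, names):
    """Pay the translation cost once, then evaluate at native Python speed."""
    src = f"def f({', '.join(names)}): return {to_source(expr)}"
    ns = {}
    exec(src, ns)
    return ns['f']

expr = ('add', ('mul', 'x', 'y'), 'x')   # x*y + x
f = specialize(expr, ['x', 'y'])
assert f(3, 4) == interpret(expr, {'x': 3, 'y': 4}) == 15
```

The same idea, taken down to machine code instead of Python source, is what the runtime assemblers he mentions provide.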
amix about 15 years ago
"Will a more static style JIT like Google's V8 be able to keep up?"

V8 compiles everything to assembler and does not do JIT. V8 seems to beat TraceMonkey on both CPU and memory benchmarks (http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=v8&lang2=tracemonkey), so I doubt their approach has "lost". They probably do this because most JavaScript programs are very small, and it's relatively cheap to compile everything to assembler from the start. They also do their optimizations at the assembler level, instead of at the bytecode level as in most other VMs. This is at least the information I gathered when I last looked at V8; that was a year ago, so things may have changed.
Comment #1181622 not loaded
gizmo about 15 years ago
I'd say that for languages such as Python or JavaScript, tracing JIT compilers are the only viable approach. Because the languages allow anything, you can't make any sensible predictions about types and call graphs at compile time (people tried; it didn't work well). The incremental-JIT approach (LuaJIT) has less information to work with, and it isn't aware of which parts of the program are bottlenecks, so it can't intelligently decide which parts need optimization most.

The tracing JIT is such an obvious tactic that it should be part of any dynamic language: measure which parts are slow, then optimize those parts using the parameter type information gathered at runtime, and fall back on the original code on a type mismatch. The implementation is hairy, but it's a no-brainer conceptually speaking.

I suspect that dynamic languages will evolve over the next 10 years to allow for really efficient compilers, but for the current languages such as JavaScript I doubt we're ever going to see massive performance improvements.
Comment #1180703 not loaded
Comment #1180669 not loaded
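The tactic gizmo describes, count executions, then install a fast path guarded by the argument types observed at runtime, falling back to the original code on a mismatch, can be sketched in a few lines of Python. This is my own toy illustration, not code from the thread; the threshold `HOT` and the names `specializing`/`make_fast` are invented for the example.

```python
HOT = 100  # how many calls before we bother specializing

def specializing(make_fast):
    """make_fast(types) returns a fast path valid only for those types."""
    def wrap(generic):
        count, fast, guard = [0], [None], [None]
        def run(*args):
            types = tuple(type(a) for a in args)
            if fast[0] is not None and types == guard[0]:
                return fast[0](*args)          # guarded fast path
            count[0] += 1
            if count[0] >= HOT and fast[0] is None:
                guard[0] = types               # specialize on observed types
                fast[0] = make_fast(types)
            return generic(*args)              # fallback: original code
        return run
    return wrap

@specializing(lambda types: (lambda a, b: a + b))  # stand-in for optimized code
def add(a, b):
    return a + b

for _ in range(200):
    add(1, 2)                  # loop gets hot; an int-specialized path is installed
assert add(1, 2) == 3          # taken via the fast path
assert add('x', 'y') == 'xy'   # type mismatch: guard fails, generic fallback runs
```

A real tracing JIT records and compiles machine code rather than swapping Python closures, but the control flow (profile, specialize on observed types, guard, deoptimize) is the same shape.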
stcredzero about 15 years ago
With modern IDEs, tracing JITs, and type inference, I'm surprised that someone hasn't made the same lateral-thinking sidestep that Hennessy & Patterson made for RISC: they offloaded much of the decoding work from the chip to the compiler, yielding faster chips in a smaller die size.

Perhaps the Go language is the start of such a sidestep. We should be able to offload a lot of the work of type annotation to the IDE and compiler, yielding a compiled language with the dynamic feel of an interpreted one. In fact, for languages with clean runtime models like Lisp, we should be able to allow intermediate states of partial annotation, and allow some interpreted execution for fast prototyping. (But demand full annotation before deployment.)

(EDIT: Yes, I was aware of Haskell when I posted this. Haskell. There, I said it!)

Is there a tool that uses a Bayesian technique for suggesting type annotations that cannot be inferred? This would be tremendously useful. (And very dangerous in the hands of the incompetent.)
Comment #1180938 not loaded
mikek about 15 years ago
Can somebody please explain what a tracing JIT is?
Comment #1180929 not loaded
Comment #1181637 not loaded
Comment #1181236 not loaded
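Since the replies to mikek's question didn't load, here is a toy sketch of the core idea (mine, not from the thread): a tracing JIT watches a hot loop, records the linear sequence of operations one pass actually performs (the "trace"), and turns that straight-line path into fast code, inserting a guard wherever the recorded path baked in an assumption. The names `traced_sum` and `run_trace` are invented for illustration.

```python
def traced_sum(values):
    """'Interpreter' pass that also records the operations it performs."""
    trace, total = [], 0
    for v in values:
        trace.append(('guard_type', type(v)))  # assumption baked into the trace
        trace.append(('add',))
        total += v
    return total, trace

def run_trace(trace, values):
    """Replay the recorded straight-line path; bail out if a guard fails."""
    total, v = 0, 0
    it = iter(values)
    for op in trace:
        if op[0] == 'guard_type':
            v = next(it)
            if not isinstance(v, op[1]):
                raise TypeError('guard failed: fall back to the interpreter')
        else:                                  # 'add'
            total += v
    return total

total, trace = traced_sum([1, 2, 3])   # record a trace while interpreting
assert total == 6
assert run_trace(trace, [4, 5, 6]) == 15   # replay the trace on new int data
```

In a real engine the trace is compiled to machine code rather than replayed, and a failed guard jumps back into the interpreter instead of raising, but this is the mechanism in miniature.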
KirinDave about 15 years ago
I'm confused. In the JavaScript space, it seems like V8 and SquirrelFish (or is it Nitro? I prefer SF) have soundly trounced TraceMonkey. Last time I checked, SquirrelFish isn't a tracing-JIT-centric implementation (nor is V8). I've yet to see a trace-tree JIT JavaScript engine that hasn't been soundly trounced by the more conventional approaches.

Perhaps it's different in the Lua world, but it seems to me like this post is claiming the exact opposite of what the evidence suggests?

Don't get me wrong here. I think trace trees are a very cool optimization technique and surely have a place in the future of JIT compilers. I just don't think they've "Won" at this point.
Comment #1181598 not loaded
iskander about 15 years ago
1) No. See V8.

2) Tracing JITs *might* win for loop-centric languages. They miss some serious opportunities for parallelization in languages with rich collection-oriented operators (functional and array languages).