“The 'use asm' pragma is not necessary to opt into optimizations in V8”

116 points by elisee over 11 years ago

7 comments

kevingadd over 11 years ago
Sadly V8's dev team has yet to address the other problem that "use asm" solves - falling out of JIT sweet spots due to broken heuristics. If you maintain a large machine-generated JS codebase (like I do by proxy, with my compiler), it is a regular occurrence that new releases of Chrome (and Firefox, to be fair) will knock parts of your code out of the JIT sweet spot and suddenly start optimizing other parts. Sometimes code that was fast becomes slow for no reason; other times some slow code becomes fast, and now you look at profiles and realize you need to remove caching logic, or that your code will be faster if you remove an optimization.

The arms race never ends, and keeping up with it is a full-time job. asm.js fixes this by precisely specifying the 'sweet spot' and giving you a guarantee that if you satisfy its requirements, *all* your code will be optimized, unless the VM is broken. This lets you build a compiler that outputs valid asm.js code, verify it, and leave it alone.

These days I don't even have time to keep up with the constant performance failures introduced by new releases, but JSIL is a nearly two-year-old project now and they cropped up regularly the whole time. Ignoring the performance failures isn't an option because customers don't want slow applications (and neither do I).
Comment #6556631 not loaded
Comment #6555981 not loaded
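As a concrete illustration of what kevingadd calls "valid asm.js code", here is a minimal hand-written sketch of an asm.js module. The module and function names are made up for this example; real compiler output (from Emscripten or JSIL-style toolchains) is far larger but has the same shape.

```javascript
// A minimal sketch of an asm.js module (hypothetical example, not taken from
// any project mentioned in this thread). The "use asm" prologue declares that
// the body stays inside the asm.js subset: every value is explicitly coerced
// to int (x|0) or double (+x), so an AOT-capable engine can type the whole
// module up front instead of guessing with runtime heuristics.
function AsmModule(stdlib) {
  "use asm";
  var imul = stdlib.Math.imul;   // integer multiply imported from the stdlib

  // sum of i*i for i in [0, n), entirely in int arithmetic
  function sumSquares(n) {
    n = n | 0;                   // parameter annotation: int
    var i = 0;
    var acc = 0;
    for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
      acc = (acc + imul(i, i)) | 0;
    }
    return acc | 0;              // return annotation: int
  }

  return { sumSquares: sumSquares };
}

// Linking: pass the global object as the stdlib. The same code runs as plain
// JavaScript in any engine; "use asm" only changes how it may be compiled.
var mod = AsmModule(typeof window !== "undefined" ? window : globalThis);
console.log(mod.sumSquares(4));  // 0 + 1 + 4 + 9 = 14
```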
Danieru over 11 years ago
From what I understand, writing fast javascript is hard because the engines are improving so fast that no one knows what is fast yet. Thus asm.js is a promise to developers: "Stay within the subset and your javascript will be fast".

Yet it never was "use asm" which made asm.js fast. It was the js engine. "use asm" carries semantic knowledge so a browser can warn if you break out of the subset. It also serves as a strong hint that should make the optimizer's job easier.

This is why I think asm.js is the future. Mozilla does not need buy-in from Google, or Apple, or MS. Instead developers can compile to asm.js and their code will run, and in time it will run faster.

So in effect Chrome supports asm.js, but they are not making that promise. I think it would be better for the internet if they made this promise.
Comment #6554676 not loaded
Comment #6554280 not loaded
Comment #6554534 not loaded
Comment #6554375 not loaded
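To make the "warn if you break out of the subset" point concrete, here is a hypothetical module that would fail asm.js validation; the exact diagnostic text is engine-specific and only paraphrased in the comments.

```javascript
// Sketch of code that falls out of the asm.js subset (hypothetical example).
// Engines that validate asm.js, such as Firefox's OdinMonkey, report a type
// error (roughly "asm.js type error: ...") and then silently run the module
// as ordinary JavaScript, so it still works but loses the AOT guarantee.
function BrokenModule(stdlib) {
  "use asm";

  function half(x) {
    x = +x;             // parameter annotated as double
    var y = 0.0;
    y = x / 2;          // invalid: mixes a double with the int literal 2;
                        // writing `x / 2.0` keeps the expression fully typed
                        // and lets the module validate
    return +y;
  }

  return { half: half };
}
```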
cliffbean over 11 years ago
Getting the Citadel demo to run at 60fps is impressive. It means that V8 is "fast enough" to keep up with the video card on that application.

However, the benchmarks at [0] clearly show that this is not the end of the story. The "workload0" runs measure startup time. All the other workloads show runtime performance, and V8 is still quite a ways behind.

[0] http://arewefastyet.com/#machine=11&view=breakdown&suite=asmjs-apps
Comment #6554731 not loaded
TheZenPsycho over 11 years ago
A point some people seem to be missing in this thread, which I would like to emphasize, is the value of *predictable* performance.

It is true that a JIT is, in principle, capable of all the same things as an AOT compiler. It is true that improving the speed of a JIT is valuable. However, that all glosses over the fact that a JIT is a black box. If I am working on an application *today* that has a mysterious slowdown after running for about 80 seconds, the promise that it won't happen in *next year's* browser release is of no use to me whatsoever.

In fact, I would prefer the JIT run at a constant slow speed instead of starting out fast and giving a false sense of performance. I can OPTIMISE for that. I can work with that. I can't work with a JIT that can't decide how fast it's going to run, and why, from release to release, from second to second. It's fine for most applications, but if I absolutely, positively need to generate a frame at a constant fixed rate, an unpredictable JIT is a huge liability no matter how theoretically fast it can go on benchmarks. Stability trumps raw performance.
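One way to make that concern measurable is to track worst-case frame time rather than average throughput. The harness below is a hypothetical sketch, not from any project in this thread; `renderFrame` stands in for whatever per-frame work an application does.

```javascript
// Hypothetical measurement harness: JIT re-tiering or deoptimization shows
// up as occasional spikes in per-frame time, not as a lower mean, so the
// number that matters for a fixed frame budget is the worst case.
function measureFrames(renderFrame, frameCount) {
  var times = new Float64Array(frameCount);
  for (var i = 0; i < frameCount; i++) {
    var start = performance.now();
    renderFrame(i);                       // stand-in for per-frame work
    times[i] = performance.now() - start;
  }
  var worst = 0, total = 0;
  for (var j = 0; j < frameCount; j++) {
    total += times[j];
    if (times[j] > worst) worst = times[j];
  }
  return { mean: total / frameCount, worst: worst };
}

// At 60fps the budget is about 16.7ms per frame; a run with a 4ms mean but
// an 80ms worst case still drops frames, which is exactly the
// unpredictability described above.
```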
nonchalance over 11 years ago
<a href="http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html" rel="nofollow">http:&#x2F;&#x2F;mrale.ph&#x2F;blog&#x2F;2013&#x2F;03&#x2F;28&#x2F;why-asmjs-bothers-me.html</a> suggested that this would happen:<p>&gt; When I sit down and think about performance gains that asm.js-implementation OdinMonkey-style brings to the table I don’t see anything that would not be possible to achieve within a normal JIT compilation framework and thus simultaneously make human written and compiler generated output faster.
Comment #6554793 not loaded
Comment #6554300 not loaded
Comment #6554870 not loaded
Comment #6554099 not loaded
s-macke over 11 years ago
I had the same experience with my hand-optimized asm.js code: http://s-macke.github.io/jor1k/ It runs as fast as asm.js does in Firefox; Chrome is optimizing it really well.
k_bx over 11 years ago
I think this only shows that asm.js, or its tests, are far from perfect right now.