
Speed Without Wizardry

206 points by mnemonik about 7 years ago

6 comments

austincheney about 7 years ago
The simple rule I have found for achieving superior performance in high level languages, particularly JavaScript, is to simply *do less*. It isn't that simple though.

Doing less really means less code total at the current compilation target, essentially feeding fewer total instructions to the compiler. This means no frameworks and minimal abstractions. It means having a clear appreciation for the APIs you are writing to. It means minimizing use of nested loops, which exponentially increase statement count.

Sometimes caching groups of instructions in functions can allow for cleaner code with a positive performance impact.

V8 cannot compile arithmetic assignment operators, which it calls left-side expressions, so you can see a rapid speed boost in V8 when you replace something like *a += 1* with *a = a + 1*.

The side benefit of less code is generally clearer and cleaner code to read. There isn't any wizardry or black magic. No tricks or super weapon utilities.

As an example I wrote a new diff algorithm last year that I thought was really fast: https://news.ycombinator.com/item?id=13983085 This algorithm is only fast because it does substantially less than other algorithms. I only wrote it because I could not wrap my head around the more famous Myers' O(ND) algorithm. A side benefit, in this case, of doing less is an algorithm that produces substantially more accurate results.
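As a concrete (if simplified) illustration of the nested-loop point above, here is a minimal sketch in Rust rather than the commenter's JavaScript; the function names and data are invented for illustration and are unrelated to the diff algorithm linked in the comment.

```rust
use std::collections::HashSet;

// Nested loops: for every item in `a` we scan all of `b`,
// so the statement count grows roughly as a.len() * b.len().
fn common_nested(a: &[u32], b: &[u32]) -> usize {
    a.iter().copied().filter(|x| b.contains(x)).count()
}

// "Doing less": one pass to build a set, one pass to probe it,
// so the work grows roughly as a.len() + b.len().
fn common_single_pass(a: &[u32], b: &[u32]) -> usize {
    let lookup: HashSet<u32> = b.iter().copied().collect();
    a.iter().copied().filter(|x| lookup.contains(x)).count()
}

fn main() {
    let a: Vec<u32> = (0..10_000).collect();
    let b: Vec<u32> = (5_000..15_000).collect();
    // Both approaches agree; only the amount of work done differs.
    assert_eq!(common_nested(&a, &b), common_single_pass(&a, &b));
}
```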
nickm12 about 7 years ago
I've got to admire the graciousness in this response. It's making the point that mraleph's “Maybe you don't need Rust and WASM to speed up your JS” article completely neglected code maintainability as a factor, but it does so without turning the whole thing into a pissing match. It's all been fascinating to read.
Felz about 7 years ago
Funny, I was using the source-map library under Nashorn. The performance was poor enough that I had to switch to embedding V8; I'm not sure whether that was a consequence of Nashorn itself being too slow, or the Javascript optimizations intended for V8/Firefox just completely missing their mark.

Not that the WASM version of the library would've helped, since Nashorn doesn't do WASM at all. But maybe the performance would've been decent if it had.
hobofan about 7 years ago
> But a distinction between JavaScript and Rust+WebAssembly emerges when we consider the effort required to attain inlining and monomorphization, or to avoid allocations.

I'm not sure that is true. Having worked/interacted with a lot of people working with Rust at different experience levels, most of them (that includes me) don't have a deep knowledge of which Rust concept maps to which specific concept, with which performance implications. And if they do, it's often only partial. I'd say that right now, only very few people who don't work on the Rust compiler have a broad knowledge in that area. Sure, it's much better to have the result of the optimization expressed in the code itself, but I'd say that the amount of knowledge and effort required to get to such a level of optimization is similar to optimizing Javascript.

I also found the hint to `#[inline]` suggestions a bit disingenuous. In the end they are just _suggestions_, and you are just as much at the mercy of the Rust/LLVM optimizer to accept them as you are with a Javascript JIT.

I'm a big fan of Rust, and I'm a big fan of Rust+WebAssembly (working with it is the most fun I've had programming in a long time!). Generally I think that Rust has one of the better performance optimization stories, I just don't agree with some of the sentiments in the post. There are also enough other reasons to love Rust+WebAssembly beyond just the performance!
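For readers unfamiliar with the attribute being discussed, here is a minimal sketch of what `#[inline]` looks like in Rust; as the comment notes, it is a hint, not a guarantee, and the function names below are made up purely for illustration.

```rust
// `#[inline]` marks a function as a candidate for inlining, including across
// crate boundaries, but the optimizer remains free to ignore the hint.
#[inline]
pub fn scale(x: f64, factor: f64) -> f64 {
    x * factor
}

// `#[inline(always)]` is a stronger request, yet LLVM can still decline it in
// some situations (e.g. recursion), so it is not a hard guarantee either.
#[inline(always)]
pub fn offset(x: f64, delta: f64) -> f64 {
    x + delta
}

fn main() {
    println!("{}", offset(scale(2.0, 3.0), 1.0));
}
```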
DannyBee about 7 years ago
I really don't understand this article, and the claims really rub me the wrong way.

The main point it makes is, again: "He perfectly demonstrates one of the points my “Oxidizing” article was making: with Rust and WebAssembly we have reliable performance without the wizard-level shenanigans that are required to get the same performance in JavaScript."

This doesn't make a lot of sense as a claim.

Why? Because underneath all that Rust ... is an optimizing compiler, and it happens the author has decided to stay on the happy path of that. There is also an unhappy path there. Is that happy path wider? Maybe. It's a significantly longer and more complex optimization pipeline just to WASM output, let alone the interpretation of that output. I have doubts it's as "reliable" as the author claims (among other things, WebAssembly is still an experimental target for LLVM). Adding the adjective "reliable" repeatedly does not make it so.

Let's ignore this though, because there are easier claims to pick a bone with.

It also tries to differentiate optimizations between the two in ways that don't make sense to me: "In some cases, JITs can optimize away such allocations, but (once again) that depends on unreliable heuristics, and JIT engines vary in their effectiveness at removing the allocations."

I don't see a guarantee in the Rust language spec that these allocations will be optimized away. Maybe I missed it. Pointers welcome.

Instead, I have watched plenty of patches to LLVM go by to try to improve its *heuristics* (oh god, there's that evil word they used above!) for removing allocations for Rust. They are all heuristic based; they deliberately do not guarantee attempting to remove every allocation (for a variety of reasons). In general, it can be proven this is a statically undecidable problem for a language like Rust (and most languages), so I doubt rustc has it down either (though I'm sure it does a great job in general!)

The author also writes the following: "WebAssembly is designed to perform well without relying on heuristic-based optimizations, avoiding the performance cliffs that come if code doesn't meet those heuristics. It is expected that the compiler emitting the WebAssembly (in this case rustc and LLVM) already has sophisticated optimization infrastructure,"

These two sentences literally do not make sense together. The "sophisticated optimization infrastructure" is also using heuristics to avoid expensive compilation times, pretty much all over the place, LLVM included. Even in basic analysis, where it still depends on quadratic algorithms in basic things.

If you have a block with 99 stores, and ask LLVM's memory dependence analysis about the dependency between the first and the last, you will get a real answer. If you have 100 stores, it will tell you it has no idea.

What happened to reliable?

Why does this matter? For example: every time Rust emits a memcpy (which is not infrequent), if there are more than 99 instructions in between them in the same block, it will not eliminate it, even if it could. Whoops. That's a random example. These things are endless. Because compilers make tradeoffs (and because LLVM has some infrastructure that badly needs rewriting/reworking).

These "sophisticated optimization infrastructures" are not different from JITs in their use of heuristics. They often use the same algorithms. The only difference is the time budget allocated to them and how expensive the heuristics let things get.

There may be good reasons to want to write code in Rust and good reasons to believe it will perform better, but they certainly are *not* the things mentioned above.

Maybe what the author really wants to say is "we expect the ahead-of-time compiler we use is better and more mature than most JITs and can spend more time optimizing". But they don't.

Maybe it would also surprise the author to learn that there are JITs that beat the pants off LLVM AOT for dynamic languages like Javascript (they just don't happen to be integrated into web browsers).

But instead, they make ridiculous claims about heuristics and JITs. Pretending the compiler they use doesn't also depend, all over the place, on heuristics and other things is just flat out wrong. At least to me (and I don't really give a crap about what programming language people use), it makes it come off as rampant fanboyism. (Which is sad, because I suspect, had it been written less so, it might be actually convincing.)
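A small sketch of the kind of allocation this exchange is about; whether rustc/LLVM removes the intermediate `Vec` below depends on optimizer heuristics rather than anything the language promises, so this is only an illustration of the point, not a claim about what any particular compiler version does.

```rust
// As written, this asks for a heap allocation. An optimizer *may* prove the
// Vec unnecessary and elide it, but the Rust language does not guarantee it.
fn sum_with_temp(n: u64) -> u64 {
    let temp: Vec<u64> = (0..n).collect(); // intermediate heap allocation
    temp.iter().sum()
}

// The allocation-free version does strictly less work by construction,
// instead of relying on the optimizer to clean up after the first version.
fn sum_direct(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    assert_eq!(sum_with_temp(1_000), sum_direct(1_000));
    println!("{}", sum_direct(1_000));
}
```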
IncRnd about 7 years ago
> *This is a factor of 4 improvement!*

This is a common mistake. It should read, "This is a factor of 3 improvement!"

x+x+x+x is an improvement over x of 3x, not of 4x. The improvement factor is 3.
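For concreteness, the arithmetic behind the distinction being drawn works out as follows (the quantities are illustrative, not taken from the article):

```latex
% Illustrative only: moving from a baseline x to 4x.
\[ \frac{4x}{x} = 4 \qquad \text{(the ratio: ``4 times as much'')} \]
\[ 4x - x = 3x      \qquad \text{(the increase over the baseline: ``3x more'')} \]
```

Whether "a factor of 4 improvement" names the ratio or the increase over the baseline is exactly the ambiguity this comment is pointing at.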