I wish his summary charts were less binary. He shades the "winner" algorithm's box green. But many of those wins are insignificant.<p>I don't think I'd inflict ~~(1*"12.5")
on someone for an insignificant gain when a parseInt() is obvious. (I also suspect that at least one of those implementations constant-folded it.)
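For reference, a minimal sketch of what that trick does (double bitwise NOT truncates the coerced number to a 32-bit integer):<p><pre><code> // 1*"12.5" coerces the string to the number 12.5; ~~ then truncates toward zero
 var terse = ~~(1 * "12.5");         // 12
 var obvious = parseInt("12.5", 10); // 12 -- same result, far more readable
</code></pre>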
Stupid micro-optimisations.
Not only should one never optimize prematurely (as the slides themselves note), but one should also never optimize like this, at least in a JS context. The interpreters/compilers/whatever will evolve to favor the readable cases, not obscure idiocies, and programs using these techniques today will just run slower tomorrow.
These would be useful in something like the Closure Compiler <a href="http://code.google.com/closure/compiler/" rel="nofollow">http://code.google.com/closure/compiler/</a>. Most seem like too much of a micro-optimization to use in real code, though (especially given that they may change over time).
I'd be more interested in seeing these techniques applied to JavaScript that is coded up with a framework (say jQuery, for argument's sake).<p>For example, what is the performance penalty of:<p><pre><code> var arr = [1, 2, 3];
 for (var i = 0; i < arr.length; i++) { /* stuff */ }
 // versus
 $.each(arr, function(i, val) { /* stuff */ });
</code></pre>
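A rough way to measure that yourself — assuming jQuery is loaded; absolute numbers will vary a lot by browser:<p><pre><code> // Build a large-enough array that the difference is visible
 var arr = [], i;
 for (i = 0; i < 100000; i++) { arr.push(i); }

 var t0 = new Date().getTime();
 for (i = 0; i < arr.length; i++) { /* stuff */ }
 var t1 = new Date().getTime();
 $.each(arr, function(i, val) { /* stuff */ });
 var t2 = new Date().getTime();
 alert("for: " + (t1 - t0) + "ms, $.each: " + (t2 - t1) + "ms");
</code></pre>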
Or regarding caching:<p><pre><code> $("a").each(function() {
   $(this).click(function() {
     $(this).find("img").hide();
   });
 });
 // versus
 $("a").each(function() {
   var targetImage = $(this).find("img");
   $(this).click(function() {
     targetImage.hide();
   });
 });</code></pre>
I didn't even know about the function.toString() method, but it seems to me that the non-Firefox implementations actually deserve the WTF: in Firefox, javascript:(function(){return 2*3;}).toString() yields function(){return 6;}. Can't an interpreter optimize anymore?
The one I wish they had measured was createElement compared to innerHTML replacement.<p>In my job I was always shocked to see massive HTML strings appended or replaced via an object's innerHTML, and instead wrote lots of createElements and createTextNodes. Apparently, for performance's sake (and especially in loops), it is MUCH more efficient to build a string and use replace methods to alter the content.
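For anyone who wants to compare the two approaches, a rough sketch (the items array and the "content" container id are just placeholders):<p><pre><code> var items = ["one", "two", "three"];
 var container = document.getElementById("content"); // hypothetical container

 // DOM API: one node created and one append per item
 var ul = document.createElement("ul");
 for (var i = 0; i < items.length; i++) {
   var li = document.createElement("li");
   li.appendChild(document.createTextNode(items[i]));
   ul.appendChild(li);
 }
 container.appendChild(ul);

 // versus string building: a single innerHTML assignment at the end
 var html = "<ul>";
 for (var j = 0; j < items.length; j++) {
   html += "<li>" + items[j] + "</li>";
 }
 container.innerHTML = html + "</ul>";
</code></pre>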
Author's page with PDF of slides<p><a href="http://mir.aculo.us/" rel="nofollow">http://mir.aculo.us/</a>
<a href="http://script.aculo.us/downloads/extremejs.pdf" rel="nofollow">http://script.aculo.us/downloads/extremejs.pdf</a><p>Slideshare is terribly unresponsive at times, and the registration captcha doesn't work properly.
How come JIT compilers seem to have a problem with compiling try/catch blocks? Also, why does caching the window object increase performance by significant margins in some of the browsers?
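For context, "caching the window object" just means binding it to a local variable so repeated lookups stay in the local scope rather than walking the scope chain to the global object — a minimal sketch (the loop body is purely illustrative):<p><pre><code> // Uncached: every iteration resolves "window" through the scope chain
 for (var i = 0; i < 100000; i++) {
   var t = window.status;
 }

 // Cached: the lookup becomes a cheap local-variable access
 var win = window;
 for (var j = 0; j < 100000; j++) {
   var u = win.status;
 }
</code></pre>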