The trouble with microbenchmarks like these is that modern JS engines are often clever enough to simply eliminate the code being tested, or to change its character enough that the results are no longer meaningful. Vyacheslav Egorov (a Chrome V8 engineer) has written several very good blog posts on this, e.g.<p><a href="http://mrale.ph/blog/2014/02/23/the-black-cat-of-microbenchmarks.html" rel="nofollow">http://mrale.ph/blog/2014/02/23/the-black-cat-of-microbenchm...</a><p><a href="http://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.html" rel="nofollow">http://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.h...</a><p>Checking the tests here, the "default parameters" section shows some tests running 2000x faster than others, which sounds suspicious. Here's an ES5 test case:<p><pre><code> function fn(arg, other) {
arg = arg === undefined ? 1 : arg;
other = other === undefined ? 3 : other;
return other;
}
test(function() {
fn();
fn(2);
fn(2, 4);
});
</code></pre>
Sure enough, an arbitrarily smart VM could compile that code down to `test();`. How much this and other optimizations affect each test is anyone's guess, but I think it's likely that at least some of these results are dominated by coincidental features of how the tests are written.
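One common way to make a loop like that harder to eliminate is to keep its results observable, so the engine can't prove the work is dead. A minimal sketch (the `sink` accumulator and the iteration count are my own illustration, not part of the linked benchmark harness):

```javascript
// Same ES5-style default-parameter emulation as the test case above.
function fn(arg, other) {
  arg = arg === undefined ? 1 : arg;
  other = other === undefined ? 3 : other;
  return other;
}

// Accumulate every return value into a sink that is read after the loop.
// Because the final value is observable, a VM can't simply delete the calls
// the way it could with a loop whose results are thrown away.
let sink = 0;
for (let i = 0; i < 1e6; i++) {
  sink += fn() + fn(2) + fn(2, 4); // 3 + 3 + 4 = 10 per iteration
}
console.log(sink); // prints 10000000
```

This doesn't defeat every optimization (the calls can still be inlined and the arithmetic folded), but it at least forces the measured work to produce a value the program uses.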