The first place I got the impression that it was a good idea to cache the array length and/or loop in reverse (both sketched at the end of this comment) was Zakas, "High Performance JavaScript" [1]. The book was written in 2010, and I suspect compiler advances have laid waste to huge swaths of it; some of it may already have been outdated when it was written. Even so, performance folklore is slow to disappear.<p>I've found that everything Egorov (the author of this post) has written is well worth reading.<p>If there's one overall theme of his work, it's that extrapolating micro-benchmark results to real code is generally highly unreliable. Instead, you need to measure potential optimizations in your actual code base. And you might need to measure again next year, because compilers are constantly changing and usually improving.<p>If there's a second overall theme, it's that you can and should examine the code your JS compiler actually produces, rather than relying entirely on black-box benchmarks. It would be great if browser dev tools made this easier to do directly; currently, the external tools for this are hard enough to set up and manage that doing this kind of analysis frequently isn't really economical.<p>Compare Julia, where you can call code_llvm(fn, (argtypes...)) right from the REPL to see the LLVM IR for a piece of code, or code_native(fn, (argtypes...)) to see the generated machine code for your architecture.<p>[1] <a href="http://books.google.com/books?id=ED6ph4WEIoQC&lpg=PA64&vq=there%20are%20several%20operations%20happening%20each%20time&pg=PA64#v=onepage&q&f=false" rel="nofollow">http://books.google.com/books?id=ED6ph4WEIoQC&lpg=PA64&vq=th...</a>
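<p>For anyone who hasn't run into this folklore, here's a minimal sketch of the patterns in question (function names are mine, not from the book; this is illustration, not benchmark code):<p><pre><code>  // 1. Plain loop -- reads arr.length on every iteration.
  function sumPlain(arr) {
    var total = 0;
    for (var i = 0; i < arr.length; i++) {
      total += arr[i];
    }
    return total;
  }

  // 2. Cached length -- hoist the length read out of the loop.
  function sumCached(arr) {
    var total = 0;
    for (var i = 0, len = arr.length; i < len; i++) {
      total += arr[i];
    }
    return total;
  }

  // 3. Reverse loop -- count down and compare against zero
  // instead of reading a length each time.
  function sumReverse(arr) {
    var total = 0;
    for (var i = arr.length - 1; i >= 0; i--) {
      total += arr[i];
    }
    return total;
  }
</code></pre><p>Per Egorov's point, whether 2 or 3 actually beats 1 today depends on the engine and the surrounding code: a modern JIT can often hoist the length read out of a loop like this by itself, so measure in your own code base before committing to the uglier forms.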