That's a very clear and accessible explanation of what big O complexity is all about.<p><i>But</i> (and that's a pretty big but), 'Big O' is not all there is, and once you've picked the algorithm with the best expected 'Big O' runtime, you <i>really</i> have to make sure that:<p><pre><code> - your algorithm is executed with the lowest possible frequency
- you concentrate on those pesky n's that you eliminated during analysis to
make sure that you don't end up wasting all your CPU time on some
little detail somewhere
 - you take into account the effects of your code on caches and virtual memory
- you profile your code afterwards to make sure that all your assumptions regarding
the above are correct
</code></pre>
It is very easy to pick the 'right' algorithm and still get crappy runtime if you skip those steps.
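The cache/locality point above can be sketched in a few lines. This is a minimal illustration, not a benchmark: it sums the same 2-D array row-by-row (sequential access) and column-by-column (strided access). Both loops are O(n^2), yet the access pattern and per-element overhead differ, which is exactly the kind of thing only profiling on your own machine will reveal. (In CPython the cache effect is muted compared to C, since lists hold pointers; the numbers below mostly show interpreter overhead.)

```python
# Hedged sketch: same asymptotic cost, different access patterns.
import timeit

N = 300
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    # Sequential: each inner list is scanned front to back.
    return sum(sum(row) for row in m)

def sum_cols(m):
    # Strided: hops to a different row for every single element.
    return sum(m[i][j] for j in range(N) for i in range(N))

# Same answer, same Big O -- only the constant factors differ.
assert sum_rows(matrix) == sum_cols(matrix)

row_t = timeit.timeit(lambda: sum_rows(matrix), number=5)
col_t = timeit.timeit(lambda: sum_cols(matrix), number=5)
print(f"row-major: {row_t:.4f}s  column-major: {col_t:.4f}s")
```

The exact ratio depends on your interpreter, array size, and cache hierarchy, which is the whole point: measure, don't assume.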
Interestingly, the top-rated answer was given by the gentleman who got turned down by Google.<p><a href="http://news.ycombinator.com/item?id=1520323" rel="nofollow">http://news.ycombinator.com/item?id=1520323</a>
Understand the effect of the constant factor hidden within the Big O. Often an O(n) algorithm is faster than an O(1) algorithm because n is too small. E.g. arrays vs hashmaps.<p>Make sure you know what n is in each case. For example, a graph algorithm that is O(n^2) with n being the number of nodes may actually be O(n) with n being the input size (the number of edges).
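The arrays-vs-hashmaps point is easy to demonstrate. A hedged sketch: for a tiny key set (n = 3 here, an arbitrary choice), a linear scan over a list can be competitive with or beat a dict lookup, because hashing and indirection have a constant cost that O(1) notation hides. Which one wins depends on n, the key type, and the runtime.

```python
# Hedged sketch: O(n) linear scan vs O(1) hash lookup for tiny n.
import timeit

keys = ["id", "name", "email"]            # n = 3, deliberately tiny
as_list = list(enumerate(keys))            # [(0, "id"), (1, "name"), ...]
as_dict = {k: i for i, k in enumerate(keys)}

def find_list(key):
    # O(n) scan -- but n is 3, so the "n" barely matters.
    for i, k in as_list:
        if k == key:
            return i
    return -1

def find_dict(key):
    # O(1) lookup -- but pays for hashing the key every call.
    return as_dict.get(key, -1)

# Both strategies must agree before comparing their speed.
assert find_list("email") == find_dict("email") == 2

list_t = timeit.timeit(lambda: find_list("email"), number=100_000)
dict_t = timeit.timeit(lambda: find_dict("email"), number=100_000)
print(f"list scan: {list_t:.4f}s  dict lookup: {dict_t:.4f}s")
```

Grow the key set to a few hundred entries and the dict wins decisively; the crossover point is machine- and workload-specific, so measure on your own data.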
That anime was <i>really</i> confusing. I'm not sure I could come up with a simple plain English explanation.<p>Was it a dream? A computer simulation? An alternate reality? I mean, really, wtf?