For anyone interested in this subject I'd highly recommend The Algorithm Design Manual by Steven Skiena [0]. He goes over this in pretty good detail in the first ~1/4 of the book. It was my first introduction to algorithm complexity analysis, and by the end of that section I already found myself looking at my code in a different light.

[0] http://www.amazon.com/dp/1849967202
Great article. I've understood the why and what of Big-O, but never how to do the analysis.

I have a question for the informed, however:

The article says we only consider the fastest-growing part of an algorithm's instruction count. So, counting instructions, n^2 + 2n would just be n^2. But why do we do that? Imagine an algorithm whose count is n^12 + n^2 + n^2 + n^2 + n^2, and so on. Do we really ignore every one of the n^2 terms?

[Edited]
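A minimal numerical sketch of why those terms get dropped (not from the article; f(n) = n^12 + 4·n^2 here is a hypothetical stand-in for the count in the question above): the combined n^2 terms become a vanishing fraction of the total as n grows, so the growth is determined by n^12 alone.

    # Hypothetical example: f(n) = n**12 + 4*n**2, standing in for
    # the n^12 + n^2 + n^2 + n^2 + n^2 count in the question above.
    for n in (10, 100, 1000):
        dominant = n ** 12          # fastest-growing term
        lower_order = 4 * n ** 2    # all four n^2 "sections" combined
        share = lower_order / (dominant + lower_order)
        print(f"n={n}: lower-order terms are {share:.2e} of the total")

Even at n = 10, the four n^2 terms account for well under a billionth of the total, which is why dropping them doesn't change the growth class.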
I've been doing the Coursera course on Algorithms (from Stanford, taught by Tim Roughgarden). He goes over concepts like Big-O notation as well as how to analyze algorithms. This particular article is a refreshing read, and I'd highly recommend the Coursera course as an accompaniment if someone wants to go deeper.
I wonder if the title is in jest. Clicking on this link and then being greeted with a wall of text (literally! It fills my entire screen) is hardly gentle.

I'm sure it's a good read, just not right now >_>