You can pass some interviews by blindly memorizing, but it's unnecessary. If you understand a concept, you can reason out its big-O complexity. Memorization implies a superficial understanding that may be revealed later.<p>If you don't understand something, spend a few hours and implement it.<p><pre><code> "I hear and I forget. I see and I remember. I do and I understand."
 - Confucius</code></pre>
This is why it's a terrible idea to ask for some random algorithm runtime in an interview. It says absolutely nothing about programming or reasoning skill.<p>I'd rather ask a candidate to explain their favorite data structure to me and derive its big-O complexity right then and there. Being able to regurgitate the correct answer doesn't count for anything in my book.
This is a pretty limited list of algorithms. It should definitely include linear-time sorting algorithms (e.g. bucket or radix sort), as well as graph algorithms (shortest path at least, but probably also all-pairs shortest path and minimum spanning tree).<p>There should also be a section about heaps and their operations. There are a huge number of ways to implement a heap (e.g. linked list, binary tree, or a more exotic structure like a binomial or Fibonacci heap), and there are a lot of tradeoffs in the complexity of different operations between them.
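To make the tradeoffs concrete, here's a minimal array-backed binary min-heap sketch (my own illustration, not something from the sheet): push and pop are O(log n) because each sift walks at most the height of the tree, and peeking at the minimum is O(1). Binomial and Fibonacci heaps shift these bounds around, e.g. O(1) amortized decrease-key for Fibonacci heaps.<p><pre><code>#include <cstddef>
#include <utility>
#include <vector>

// Array-backed binary min-heap: children of node i live at 2i+1 and 2i+2.
struct MinHeap {
    std::vector<int> a;

    void push(int x) {                 // O(log n): sift up at most height steps
        a.push_back(x);
        size_t i = a.size() - 1;
        while (i > 0 && a[(i - 1) / 2] > a[i]) {
            std::swap(a[(i - 1) / 2], a[i]);
            i = (i - 1) / 2;
        }
    }

    int top() const { return a[0]; }   // O(1): minimum sits at the root

    void pop() {                       // O(log n): sift down at most height steps
        a[0] = a.back();
        a.pop_back();
        size_t i = 0;
        while (true) {
            size_t l = 2 * i + 1, r = 2 * i + 2, m = i;
            if (l < a.size() && a[l] < a[m]) m = l;
            if (r < a.size() && a[r] < a[m]) m = r;
            if (m == i) break;
            std::swap(a[i], a[m]);
            i = m;
        }
    }
};</code></pre>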
I suppose the general idea for the colors is something like: green = best in category, red = worst in category, yellow = neither best nor worst?<p>In that case bubble sort and insertion sort should be green for the best-case time complexity (O(n) vs. O(n log n) for quicksort/mergesort).<p>It might also be interesting to make the plot dynamic and allow the visitor to play with different implicit constants for the individual asymptotic bounds.
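To sketch what playing with the constants could show: with invented constants (the c1 and c2 values below are pure assumptions for illustration), a quadratic algorithm can beat an O(n log n) one below some crossover n, even though the latter wins asymptotically.<p><pre><code>#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical cost models: insertion sort ~ c1*n^2, mergesort ~ c2*n*log2(n).
    // The constants are invented for illustration only.
    const double c1 = 1.0, c2 = 8.0;
    for (int n = 2; n <= 256; n *= 2) {
        double ins = c1 * n * n;
        double mrg = c2 * n * std::log2(n);
        std::printf("n=%3d  insertion=%8.0f  mergesort=%8.0f  %s\n",
                    n, ins, mrg, ins < mrg ? "insertion wins" : "mergesort wins");
    }
}</code></pre>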
I'm not totally sure what you mean by "dynamic array", but the standard vector growth strategy (which you should probably at least include, if by "dynamic array" you meant some more naive insertion scheme) makes insertion at the end O(1) amortized.
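To illustrate the doubling argument, a minimal sketch (assuming geometric growth is what's meant by "dynamic array"): over n appends the element copies total less than 2n, so each element is copied fewer than twice on average and appends are O(1) amortized, even though a single append can cost O(n).<p><pre><code>#include <cstdio>
#include <cstring>

// Doubling dynamic array: growth is geometric, so the total copy work
// over n appends is bounded by 1 + 2 + 4 + ... < 2n, i.e. O(1) amortized.
struct DynArray {
    int *data = nullptr;
    size_t size = 0, cap = 0;
    long long copies = 0;              // count element moves to show the bound

    void append(int x) {
        if (size == cap) {
            size_t newCap = cap ? cap * 2 : 1;
            int *bigger = new int[newCap];
            std::memcpy(bigger, data, size * sizeof(int));
            copies += size;
            delete[] data;
            data = bigger;
            cap = newCap;
        }
        data[size++] = x;
    }
};

int main() {
    DynArray a;
    for (int i = 0; i < 1000000; ++i) a.append(i);
    std::printf("appends=%zu, element copies=%lld (< 2n)\n", a.size, a.copies);
    delete[] a.data;
}</code></pre>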
I never understood why people look at 7-8 sorting methods and ignore radix sort, which often beats everything else: it runs in O(w·n) for w-digit keys, effectively O(n) when the key width is fixed. <a href="https://en.wikipedia.org/wiki/Radix_sort" rel="nofollow">https://en.wikipedia.org/wiki/Radix_sort</a> I mean, is the assumption that people will never actually need a useful real-world understanding of the topic?
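For what it's worth, a minimal LSD radix sort sketch for 32-bit unsigned keys (my own illustration): four stable counting-sort passes over 8-bit digits, so O(n) work per pass and O(n) overall for fixed-width integers.<p><pre><code>#include <cstdint>
#include <vector>

// LSD radix sort on 8-bit digits: 4 counting-sort passes for 32-bit keys.
// Each pass is a stable O(n) redistribution, so the whole sort is O(n)
// for fixed-width integer keys.
void radixSort(std::vector<uint32_t>& a) {
    std::vector<uint32_t> buf(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        for (uint32_t x : a) ++count[((x >> shift) & 0xFF) + 1];
        for (int i = 0; i < 256; ++i) count[i + 1] += count[i];  // prefix sums
        for (uint32_t x : a) buf[count[(x >> shift) & 0xFF]++] = x;
        a.swap(buf);
    }
}</code></pre>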
Why is O(n) red and O(n log(n)) yellow? Clearly, O(n log(n)) is slower.<p>In general, whether a specific complexity is good or bad differs greatly based on what you're doing. I don't think it's a good idea to have the colors be the same everywhere. A particularly bad instance is how every single data structure is O(n), which is red/"bad".
Very few commenters think this is a good idea. The majority of posts lament the rote learning and lack of understanding involved. Why, then, is this upvoted so much? Is it that people think the comments are so worth reading that they upvote the article in the hope that other readers will read the comments? Are the people commenting negatively upvoting the article in the hope their comments will be more widely read†? Are people afraid of flagging articles?<p>†testable hypothesis, data requested
If you only want to hire me based on answering big-O questions, I don't want to work there anyway. In 32 years of working on highly complex and performant stuff, not once did I think in terms of big-O. Optimizing is not about knowing the math but about knowing how to measure and how to interpret what you measure. Big-O might make you feel smart, but it's a tiny part of actually constructing something complex and optimal.
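In that spirit, the starting point really is this simple: a bare-bones wall-clock measurement with std::chrono (a sketch, not a rigorous benchmark; real measurement also needs warm-up, repetition, and statistics).<p><pre><code>#include <chrono>
#include <cstdio>

int main() {
    volatile long long sink = 0;       // keep the work from being optimized out
    auto t0 = std::chrono::steady_clock::now();
    for (long long i = 0; i < 100000000; ++i) sink = sink + i;
    auto t1 = std::chrono::steady_clock::now();
    std::printf("loop took %.2f ms\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count());
}</code></pre>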
Slightly academic, but this cheat sheet gives shorthand explanations for many of the methods in the Big-O document:<p><a href="http://www.scribd.com/doc/39557873/Data-Structures-Cheat-Sheet" rel="nofollow">http://www.scribd.com/doc/39557873/Data-Structures-Cheat-She...</a>
Actually walking through each problem is the only way to understand. There's not even a point in memorizing: it takes longer to memorize the values than to actually learn how to derive them.
Can you annotate it with the variant of the algorithm? For example, you list O(log n) worst-case space complexity for quicksort; that is not true of all variations, including the naive one.
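For the curious, the O(log n) stack bound comes from the variant that recurses into the smaller partition and loops on the larger one, so each recursive call covers at most half the range; the naive version can hit O(n) stack depth on already-sorted input. A sketch, using a Lomuto partition:<p><pre><code>#include <utility>
#include <vector>

// Lomuto partition: returns the final index of the pivot.
static size_t partition(std::vector<int>& a, size_t lo, size_t hi) {
    int pivot = a[hi];
    size_t i = lo;
    for (size_t j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    return i;
}

// Recurse only into the smaller side, loop over the larger: each recursive
// call covers at most half the range, so stack depth is O(log n) even when
// the partitioning is maximally unbalanced.
void quicksort(std::vector<int>& a, size_t lo, size_t hi) {
    while (lo < hi) {
        size_t p = partition(a, lo, hi);
        if (p - lo < hi - p) {                   // left side is smaller
            if (p > 0) quicksort(a, lo, p - 1);  // guard size_t underflow
            lo = p + 1;
        } else {                                 // right side is smaller
            quicksort(a, p + 1, hi);
            if (p == 0) break;
            hi = p - 1;
        }
    }
}</code></pre>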
If you want to visualise big-O runtime, draw it on a log-log scale. The gradient of the line on the log-log plot is the exponent: if it's at 45 degrees it's O(n); if it's at a gradient of 2:1 it's O(n^2).
Handy fact to work out your big O without having to do the tedious math! (See <a href="http://jcsites.juniata.edu/faculty/kruse/cs2/ch12a.htm" rel="nofollow">http://jcsites.juniata.edu/faculty/kruse/cs2/ch12a.htm</a>)
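A quick sketch of the same trick in code (my own illustration): time the routine at n and 2n, and the slope, i.e. the exponent, falls out as log2(t(2n)/t(n)).<p><pre><code>#include <chrono>
#include <cmath>
#include <cstdio>

// Estimate the exponent k in t(n) ~ c*n^k from two timings: on a log-log
// plot the gradient is k, so doubling n gives k = log2(t(2n) / t(n)).
template <typename F>
double timeSec(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    volatile long long sink = 0;
    auto quadratic = [&](long long n) {          // deliberately O(n^2) work
        for (long long i = 0; i < n; ++i)
            for (long long j = 0; j < n; ++j) sink = sink + (i ^ j);
    };
    long long n = 4000;
    double t1 = timeSec([&] { quadratic(n); });
    double t2 = timeSec([&] { quadratic(2 * n); });
    std::printf("estimated exponent: %.2f\n", std::log2(t2 / t1));  // ~2
}</code></pre>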
I don't know why people don't use a balanced BST (std::map in C++) for storing the adjacency lists of a graph. Sure, insertion would take O(log n) time, but I think the overall benefits would outweigh the costs. Correct me if I'm wrong.
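Here's the tradeoff in code, a sketch assuming an unweighted directed graph: with balanced BSTs, edge insert, delete, and lookup are all O(log n), where a plain adjacency list needs an O(degree) scan for the latter two. The cost is higher constants and worse cache behavior than a contiguous vector, which is probably why vectors stay the default.<p><pre><code>#include <map>
#include <set>

// Adjacency via balanced BSTs: addEdge, removeEdge, and hasEdge are all
// O(log n), whereas a plain adjacency list would need an O(degree) scan
// for removal and membership tests.
struct Graph {
    std::map<int, std::set<int>> adj;

    void addEdge(int u, int v)    { adj[u].insert(v); }  // O(log n)
    void removeEdge(int u, int v) { adj[u].erase(v); }   // O(log n)
    bool hasEdge(int u, int v) const {                   // O(log n)
        auto it = adj.find(u);
        return it != adj.end() && it->second.count(v) != 0;
    }
};</code></pre>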
I would like to see the same type of complexity cheat sheet for algorithms for common math problems: addition, subtraction, multiplication, division, factorization, solving linear systems, solving eigensystems, matrix inversion, etc.
Pretty cool, thanks.<p>I think the graph at the end is the most useful thing. It really helps with understanding the complexities in relative terms.<p>Adding tree stuff would be cool (especially for search, which is often implemented in either a tree or a graph version).
Question: I'm a junior software developer that did not get a CS degree. What would be the best way to learn and understand this sort of stuff? Coursera/Khan? A book?
/trolling on<p>Awesome. Next time I'm trying to pass some silicon valley interview, I'll have to look this up. Till then, I think I have more practical problems to worry about.<p>/trolling off