Well, after 6 years of professional software engineering since finishing my BS in CS, the only things on that list I've come anywhere close to using are the natural join and De Morgan's laws.<p>I think this is a pretty silly post, to be honest. CS covers so much, and every time I see a list of "things you should know", I have to resist the urge to roll my eyes and ignore it. But then I read it, and inevitably roll my eyes anyway.
Since it seems like the multicore thing is here to stay, may I suggest that if you are doing anything parallel you should know about Amdahl's law: <a href="http://en.wikipedia.org/wiki/Parallel_computing#Amdahl.27s_law_and_Gustafson.27s_law" rel="nofollow">http://en.wikipedia.org/wiki/Parallel_computing#Amdahl.27s_l...</a>
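For the curious, a minimal sketch of the formula in Python (function and variable names are mine, not from the article): the maximum speedup from parallelizing a fraction p of the work across s processors is 1 / ((1 - p) + p/s).

    def amdahl_speedup(p, s):
        # p: fraction of the work that can run in parallel (0..1)
        # s: number of processors
        return 1.0 / ((1.0 - p) + p / s)

    # Even with 1000 cores, a 5% serial portion caps the speedup near 20x:
    print(amdahl_speedup(0.95, 1000))  # ~19.6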
I would take issue with the pumping lemma for regular languages here. The formal statement of the lemma is outrageously complicated, which makes it really difficult to understand and use. The only good justification I’ve heard for including this result in a CS curriculum is that it’s a good warm-up for the pumping lemma for context-free languages, which is more useful.<p>If you actually ever find yourself needing to show that a particular language is non-regular, it’s almost always clearer to use an ad hoc argument or appeal to the Myhill-Nerode theorem. Actually the latter is much better, because Myhill-Nerode completely characterises the regular languages, whereas there are non-regular languages that pass the pumping lemma test.[1]<p>1. <a href="http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages#Lemma_not_sufficient" rel="nofollow">http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_langu...</a>
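(A concrete instance of the Myhill-Nerode argument, using the textbook example rather than anything from the article: take L = {a^n b^n}. For i != j, the prefixes a^i and a^j are distinguishable, because a^i b^i is in L while a^j b^i is not. That yields infinitely many equivalence classes, so no finite automaton accepts L and it cannot be regular.)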
I really wish my brain didn't glaze over the first time I see a math symbol. All of this stuff seems intriguing, but it's almost as if I'm hardwired to translate those symbols into mush. I'd be much more interested in seeing the equivalent code snippets that these ideas express.
I should think the Master Theorem would be included:<p><a href="http://en.wikipedia.org/wiki/Master_theorem" rel="nofollow">http://en.wikipedia.org/wiki/Master_theorem</a><p>or better,<p><a href="http://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method" rel="nofollow">http://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method</a>.
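(A quick worked instance of the Master Theorem, using the standard merge-sort recurrence: for T(n) = 2T(n/2) + cn we have a = 2 and b = 2, so n^(log_b a) = n; f(n) = cn grows at the same rate, which is the theorem's middle case, giving T(n) = Theta(n log n).)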
More fundamental than Bayes's theorem is the probabilistic counterpart of modus ponens: P(A/\B) = P(B|A) P(A). This corresponds to the logical rule of inference A, A->B |- A/\B. Note that modus ponens is usually stated in the form A, A->B |- B. But this throws away useful information, namely that proposition A is true, so it's a weaker form.<p>Bayes's theorem is a direct consequence of this axiom and the commutativity of conjunction.
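Spelled out, the derivation is just two applications of that axiom plus commutativity (assuming P(B) > 0):

    P(A/\B) = P(B|A) P(A)
    P(B/\A) = P(A|B) P(B)
    A/\B = B/\A, hence P(B|A) P(A) = P(A|B) P(B)
    =>  P(A|B) = P(B|A) P(A) / P(B)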
Although they're not really "equations", I'd have liked to see some results from computation theory in this list, because they're often deep and beautiful without necessarily being difficult to formulate or understand. They also tend to be informative in a useful way rather than feeling too abstract to be relevant.<p>For example: the halting problem. Isn't it interesting that you can't write a computer program which can decide (in general) whether another program terminates? The proof is simple and it gives you a real tool to help you avoid trying to solve impossible problems. It's good to know the limits of your craft.
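For anyone who hasn't seen it, here is the shape of that proof as a Python sketch. The halts(p, x) oracle is hypothetical (the whole point is that it cannot exist), and the names are mine:

    def paradox(program):
        # Assume a hypothetical oracle halts(p, x) that returns True
        # exactly when program p halts on input x.
        if halts(program, program):
            while True:        # oracle says "halts" -> loop forever
                pass
        else:
            return             # oracle says "loops" -> halt immediately

    # Now consider paradox(paradox): if halts(paradox, paradox) returns True,
    # paradox loops forever; if it returns False, paradox halts. Either way
    # the oracle is wrong, so no such oracle can be written.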
The Y Combinator and the pumping lemma seem a bit contrived on that list, especially the former. I would add the maximum margin separation equation, which underlies many modern machine learning methods like SVMs and MMMF, and the P=NP equality question.
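(For reference, the hard-margin form of that maximum-margin objective: minimize (1/2)||w||^2 subject to y_i (w . x_i + b) >= 1 for every training example (x_i, y_i). The soft-margin and kernelized variants used by SVMs in practice build on the same idea.)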
Shannon's information theory, eigenvectors, De Morgan's laws, etc. None of those names is meaningful or descriptive. And then there are the Greek letters and made-up symbols. Math could learn something from Computer Science:<p><a href="https://www.google.com/search?q=readable+code" rel="nofollow">https://www.google.com/search?q=readable+code</a>
I'm sorry for being somewhat off topic, but is it too hard to try and write coherently? The author's run-on, stream-of-consciousness style, together with the random use of punctuation marks, was tiring to read.
There's a small error in the formula for O(N): the way he's written it, it looks like for all n there is a k such that k*g(n) >= f(n), i.e. k may depend on n, so you can take k = f(n)/g(n) and every nonzero function trivially satisfies it. It should be: there exists a k such that for all n, k*g(n) >= f(n). Pedantic, I know, but on the other hand I wouldn't call these "beautiful equations" associated with O(N); I'd call them the definition of O(N).<p>There's also o(f(n)): g(n) is a member of o(f(n)) if the limit as n goes to infinity of g(n)/f(n) is zero. Finally, there is asymptotic equality: f(n) ~ g(n) if the limit as n goes to infinity of g(n)/f(n) is 1. O, o, and ~ are all subtly different, but if you're just trying to prove upper bounds then O(f(n)) is the one that comes up most frequently, which is probably why it's the only sort of asymptotic analysis most CS grads know.
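Collecting the standard definitions in one place, with the usual "for all sufficiently large n" threshold made explicit:

    f(n) is O(g(n))  iff  there exist k > 0 and n0 such that
                          f(n) <= k*g(n) for all n >= n0
    f(n) is o(g(n))  iff  lim (n -> infinity) f(n)/g(n) = 0
    f(n) ~ g(n)      iff  lim (n -> infinity) f(n)/g(n) = 1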
Here's a really simple and practical equation:
1 + 2 + 3 + ... + N = N(N+1)/2<p>This equation gives the number of edges in a complete graph on N+1 vertices, or equivalently the number of possible pairings of N+1 objects. It's useful when estimating the big-O complexity of algorithms that involve comparing an item to every other item in a set.
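A throwaway check of that correspondence in Python (my own snippet, nothing from the parent comment):

    from itertools import combinations

    n = 10
    items = range(n + 1)                            # N + 1 = 11 objects
    pairs = sum(1 for _ in combinations(items, 2))  # count all distinct pairs
    print(pairs, n * (n + 1) // 2)                  # 55 55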
Little's Law is probably something CS folks should know too. It is relatively simple but useful.<p><a href="https://en.wikipedia.org/wiki/Little%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Little%27s_law</a>
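(An illustrative instance with made-up numbers: a service receiving requests at a rate of lambda = 100 per second, with an average time in the system of W = 0.05 seconds, holds on average L = lambda * W = 5 requests in flight.)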
Oddly enough I never really used any of these while studying CS. Now that I'm in bioinformatics, some of the stuff is commonly used, particularly Bayes' theorem and information theory.
Bayes' theorem isn't really at the heart of the Bayesian vs. non-Bayesian statistics divide. Bayes' theorem still holds if you're working in a strictly frequentist framework.