Post any algorithms you think are cool, inventive, or useful.
PDF links or blog posts are appreciated.<p>My current favorite is SWIM.
https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf
The Fastest and Shortest Algorithm for All Well-Defined Problems: <a href="https://arxiv.org/abs/cs/0206022" rel="nofollow">https://arxiv.org/abs/cs/0206022</a><p>Abstract: An algorithm M is described that solves any well-defined problem p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5 and low-order additive terms. M optimally distributes resources between the execution of provably correct p-solving programs and an enumeration of all proofs, including relevant proofs of program correctness and of time bounds on program runtimes. M avoids Blum's speed-up theorem by ignoring programs without correctness proof. M has broader applicability and can be faster than Levin's universal search, the fastest method for inverting functions save for a large multiplicative constant. An extension of Kolmogorov complexity and two novel natural measures of function complexity are used to show that the most efficient program computing some function f is also among the shortest programs provably computing f.<p>The catch is that the constants in the big-O are enormous.
Given the head of a linked list, how do you determine if it loops? E.g., an l-shaped list is easy to check - you simply process each element until you find one without a successor. But what if it's a 9-shaped list? You'll never run out of elements, so the best you could seem to do is store a reference to each element and check every new element against all of them for a duplicate.<p>There's a way to do it in O(1) space, though.<p>If you start two runners in the list, and each "step" move one forward two nodes and the other only one, then if there's a loop they will eventually land on the same element. Simple, elegant, and nothing I'd have ever thought of. :)
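This is Floyd's tortoise-and-hare cycle detection; a minimal Python sketch (node class and names are mine):

```python
class Node:
    """Minimal singly linked list node."""
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's tortoise and hare: advance one pointer by one node and the
    other by two; they meet if and only if the list loops."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

Once the two runners meet, a second phase (restart one pointer from the head and advance both by one) even finds the node where the loop begins.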
1. Lempel-Ziv compression algorithms (both LZ77 and LZ78). They are practical: even a basic implementation gets a good compression ratio. Moreover, they opened up the whole big chapter of data compression (LZW, LZSS, and more).
2. The algorithm behind rsync. Nice one.
3. The Jarvis march (gift wrapping) algorithm for finding the convex hull.
4. Speaking of data structures, I like the XOR linked list; it's neat.<p>[1] <a href="https://en.wikipedia.org/wiki/LZ77_and_LZ78" rel="nofollow">https://en.wikipedia.org/wiki/LZ77_and_LZ78</a>
[2] <a href="https://en.wikipedia.org/wiki/Rsync#Algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Rsync#Algorithm</a>
[3] <a href="https://en.wikipedia.org/wiki/Gift_wrapping_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Gift_wrapping_algorithm</a>
[4] <a href="https://en.wikipedia.org/wiki/XOR_linked_list" rel="nofollow">https://en.wikipedia.org/wiki/XOR_linked_list</a>
I feel I have to mention the algorithm for matching with mismatches which I published in my thesis (<a href="http://www.daemonology.net/papers/thesis.pdf" rel="nofollow">http://www.daemonology.net/papers/thesis.pdf</a>), simply because it gave me a legitimate opportunity to use the phrase "Fourier Transform of the FreeBSD kernel".
As basic as it is, and completely uninventive, I've always loved Dijkstra's algorithm.<p>It was probably the algorithm that cemented my love of computer science, it's such an elegant solution to a problem and so simple once you understand it.
The Burrows-Wheeler transform is pretty interesting: <a href="https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform" rel="nofollow">https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transf...</a>. It's behind the bzip2 compression format.
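A minimal Python sketch of the forward transform, using the naive sort-all-rotations construction (real implementations build a suffix array instead of materialising every rotation):

```python
def bwt(s, sentinel="\0"):
    """Naive Burrows-Wheeler transform: append a unique sentinel, sort all
    rotations of the string, and read off the last column. Similar
    characters end up clustered together, which is what makes the
    downstream compression stages so effective."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)
```

The transform is reversible: knowing only the last column (and the sentinel position), you can reconstruct the original string.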
I have never seen anything more elegant than the disjoint-set data structure.
<a href="https://en.wikipedia.org/wiki/Disjoint-set_data_structure" rel="nofollow">https://en.wikipedia.org/wiki/Disjoint-set_data_structure</a>
Here's a rather surprising one. There's an algorithm to generate a uniform random integer in the range 1..n with a KNOWN factorization. That means the algorithm is like a regular random number generator, except it will also tell you how to factor its output! Magic? Yes! This always surprises people, and I love results like this.<p>Search for the paper by Adam Kalai on how to generate random factored integers easily. It uses log^2 n primality tests on average. The result is originally due to Eric Bach, who managed it using only log n primality tests, but with a more complex algorithm.
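A toy Python sketch of Kalai's procedure as described in the paper (the trial-division primality test is my simplification; the paper assumes a proper primality test):

```python
import random

def is_prime(m):
    """Trial division; fine for the small n in this demo."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def random_factored(n):
    """Kalai's algorithm: draw a decreasing random sequence
    n >= s1 >= s2 >= ... down to 1 (each uniform in [1, previous]),
    keep the prime entries, and accept their product r <= n with
    probability r/n. The accepted r is uniform on 1..n, and the kept
    primes are exactly its factorization."""
    while True:
        seq, s = [], n
        while s > 1:
            s = random.randint(1, s)
            seq.append(s)
        primes = [p for p in seq if is_prime(p)]
        r = 1
        for p in primes:
            r *= p
        if r <= n and random.randint(1, n) <= r:
            return r, primes
```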
Diffie-Hellman<p>One of those algorithms where you go "Shit, so obvious and also so completely genius that I could never have thought of it."<p>Also, the internet as we know it would not work without it.
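A toy Python sketch of the exchange with a deliberately tiny prime; real deployments use 2048-bit+ groups or elliptic curves, and the parameters here are illustrative only:

```python
import secrets

def dh_demo(p, g):
    """Toy Diffie-Hellman key exchange modulo a small prime p with
    generator g. Each party keeps a secret exponent, publishes only
    g^secret mod p, and both derive the same shared key g^(ab) mod p."""
    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A = pow(g, a, p)                   # Alice sends A publicly
    B = pow(g, b, p)                   # Bob sends B publicly
    return pow(B, a, p), pow(A, b, p)  # each side's derived key
```

The "obvious once you see it" part is that (g^b)^a = (g^a)^b, while an eavesdropper who sees only A and B faces the discrete logarithm problem.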
Token Bucket. Very lightweight way to implement rate limiting.<p>PHP implementation: <a href="https://github.com/bandwidth-throttle/token-bucket" rel="nofollow">https://github.com/bandwidth-throttle/token-bucket</a>
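A minimal Python sketch of the core mechanic (the linked PHP library is a full implementation; this is just the idea):

```python
import time

class TokenBucket:
    """Minimal token bucket: tokens refill continuously at `rate` per
    second up to `capacity`; each request spends one token or is
    rejected. Bursts up to `capacity` are allowed, sustained throughput
    is capped at `rate`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Lazily credit tokens for the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The lazy refill is what makes it so lightweight: no timers or background threads, just a timestamp and a float per bucket.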
Linear Hybrid Cellular Automata:
<a href="https://pdfs.semanticscholar.org/7835/161f253ab117a3666fa8e74e008782a789cd.pdf" rel="nofollow">https://pdfs.semanticscholar.org/7835/161f253ab117a3666fa8e7...</a><p>Parallelizable and efficient method for producing high quality pseudo random bits.<p>Designed to achieve maximal period (with 500 bit state the period of repetition is one less than 2^500).<p>Can be run backwards or forwards. Running it backwards undoes running it forwards and reproduces the pseudo random bits in reverse order.
Tomasulo's algorithm: <a href="https://en.wikipedia.org/wiki/Tomasulo_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Tomasulo_algorithm</a><p>A modified version of it is used for register renaming and out-of-order execution in most modern CPUs.
Great question. Without a doubt, my favourite is the Burrows-Wheeler transform (as used by bzip2)<p><a href="https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform" rel="nofollow">https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transf...</a><p><a href="https://www.youtube.com/watch?v=4n7NPk5lwbI" rel="nofollow">https://www.youtube.com/watch?v=4n7NPk5lwbI</a>
When I first read of HashLife I had the most extraordinarily vivid dreams of breaking all complexity barriers just by finding the right memoization scheme: <a href="http://www.drdobbs.com/jvm/an-algorithm-for-compressing-space-and-t/184406478" rel="nofollow">http://www.drdobbs.com/jvm/an-algorithm-for-compressing-spac...</a>
R* tree for indexing spatial data [1]. Also, if you enjoy spatial data structures, you can't go wrong with Samet's[2] _The Design and Analysis of Spatial Data Structures_ and _Applications of Spatial Data Structures_.<p>1: <a href="https://en.wikipedia.org/wiki/R*_tree" rel="nofollow">https://en.wikipedia.org/wiki/R*_tree</a><p>2: <a href="https://www.cs.umd.edu/~hjs/design.html" rel="nofollow">https://www.cs.umd.edu/~hjs/design.html</a>
Randomized algorithms, specifically Monte Carlo (probably because it was the first I explored), have fascinated me since I met them. The idea of having a very simple/efficient method to look for a solution, instead of a very complicated one, and still be able to have a near optimal solution is amazing.<p>(<a href="https://en.wikipedia.org/wiki/Monte_Carlo_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Monte_Carlo_algorithm</a>)
I really like Fleury's algorithm[0] - I used it to generate more efficient paths to draw on a canvas, for the purpose of displaying simple wireframe models. It may not be the most efficient, but I really like the simplicity.<p>[0] <a href="http://www.geeksforgeeks.org/fleurys-algorithm-for-printing-eulerian-path/" rel="nofollow">http://www.geeksforgeeks.org/fleurys-algorithm-for-printing-...</a>
There are a lot of great answers here. Let me add another one I really love: the Knuth-Morris-Pratt algorithm for string matching. There are many things about it I especially love. First, it takes a simple idea (when a mismatch occurs it's often possible to shift the pattern by a lot more than one position) and takes it to its logical conclusion (defining and computing the so-called border lengths of prefixes). Second, the analysis of the precomputation step (the border lengths) is tricky and amortised, but plainly obvious once you see it.<p>It shows the importance of making a good definition and thinking differently (for the runtime analysis I thought for a while and failed to come up with an argument for why it runs in linear time, but once you see it differently it's obvious :))<p>I only recently had the chance to understand this algorithm. Here's a little (annotated) gist I made after understanding it: <a href="https://gist.github.com/quantumelixir/3c410dd4a0afe0f2514e0f554cddab05" rel="nofollow">https://gist.github.com/quantumelixir/3c410dd4a0afe0f2514e0f...</a>
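A compact Python sketch of both phases - the border-length precomputation and the linear scan:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt string matching. border[i] is the length of the
    longest proper prefix of pattern[:i+1] that is also a suffix of it;
    on a mismatch we fall back along borders instead of re-reading text,
    so the scan never moves backwards and runs in O(len(text))."""
    # Phase 1: precompute border lengths (amortised linear).
    border = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = border[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        border[i] = k
    # Phase 2: scan the text, reusing border lengths on mismatch.
    matches, k = [], 0
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = border[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = border[k - 1]
    return matches
```

The amortised argument the comment alludes to: k only increases by at most one per character of text, so the total work done by the inner `while` loop (which only ever decreases k) is bounded by the total number of increases.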
The Number Theoretic Transform, the finite-field counterpart of the Discrete Fourier Transform.<p>The algorithms for computing it are beautiful and have enabled much of the technology we see around us today.
I really like the Boyer-Moore majority voting algorithm, which finds the element that appears more than half the time in an array. It's the simplest, most elegant algorithm I have studied.<p><a href="http://www.geeksforgeeks.org/majority-element/" rel="nofollow">http://www.geeksforgeeks.org/majority-element/</a>
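The whole thing fits in a few lines of Python (note this sketch assumes a majority element actually exists; otherwise a second verification pass over the array is needed):

```python
def majority_element(arr):
    """Boyer-Moore voting: keep one candidate and a counter. Matching
    elements vote the counter up, others vote it down; when it hits
    zero, adopt the current element as the new candidate. A true
    majority element (> n/2 occurrences) always survives."""
    candidate, count = None, 0
    for x in arr:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    return candidate
```

One pass, O(1) extra space - the elegance is that every non-majority element can cancel at most one vote for the majority element, and the majority has votes to spare.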
Ford-Fulkerson algorithm for finding the maximum flow across a graph: <a href="https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorit...</a><p>The algorithm itself is interesting enough, but when applied to bipartite graphs, you can solve some tough problems very efficiently. For instance, one of my favorites is how Santa could possibly match a million kids with a million different gifts in order to maximize happiness. It turns out you structure the problem as a bipartite graph with nodes on one side for gifts and children on the other, and Ford-Fulkerson can guarantee the best matching in polynomial time (no small feat, given how many possible combinations there are!)
I've been diving into a bunch of the newer (computational) optimization algorithms, but the old-school ones they teach in school are still extremely interesting and relevant to me. I've always been a huge fan of "Combinatorial optimization" [1]. I saw someone else mention this, but the simplex algorithm is awesome as well! [2]<p>[1] <a href="https://en.wikipedia.org/wiki/Combinatorial_optimization" rel="nofollow">https://en.wikipedia.org/wiki/Combinatorial_optimization</a>
[2] <a href="http://gizmodo.com/5934150/the-algorithm-that-controls-your-life" rel="nofollow">http://gizmodo.com/5934150/the-algorithm-that-controls-your-...</a>
Cuckoo Hashing: <a href="http://www.lkozma.net/cuckoo_hashing_visualization/" rel="nofollow">http://www.lkozma.net/cuckoo_hashing_visualization/</a><p>"An elegant method for resolving collisions in hash tables... It has the rare distinction of being easy to implement and efficient both in theory and practice."<p>It does an amazing job of packing values into N hash tables so there's very little dead space but lookups are very fast.
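A toy Python sketch for integer keys, with two tables and the kick-out insertion loop (the two hash functions are arbitrary choices of mine, and a real implementation would rehash with new functions instead of giving up):

```python
class CuckooSet:
    """Toy cuckoo hash set for ints: every key lives in exactly one of
    two possible slots (one per table), so lookups probe at most two
    locations. Inserting into an occupied slot evicts the resident key
    to its alternate slot, possibly cascading."""
    def __init__(self, size=11):
        self.size = size
        self.t = [[None] * size, [None] * size]

    def _h(self, i, key):
        # Two simple, independent-ish hash functions (assumption of this demo).
        return key % self.size if i == 0 else (key * 31 + 7) % self.size

    def contains(self, key):
        return any(self.t[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key, max_kicks=50):
        if self.contains(key):
            return
        i = 0
        for _ in range(max_kicks):
            slot = self._h(i, key)
            key, self.t[i][slot] = self.t[i][slot], key  # kick resident out
            if key is None:
                return
            i = 1 - i  # evicted key goes to its slot in the other table
        raise RuntimeError("eviction cycle: a real table would rehash here")
```

Worst-case O(1) lookup is the headline feature: two probes, always, no chains to walk.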
1. Karatsuba algorithm for fast multiplication.
2. Heap's algorithm for generating permutations.<p>Mostly because they clearly demonstrate the power of the human brain.
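A Python sketch of the first one, Karatsuba, splitting on bit length - the whole trick is doing three recursive multiplications where the schoolbook method needs four:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative ints. Split each operand
    into high and low halves; then x*y needs only three sub-products
    (hi*hi, lo*lo, and (hi+lo)*(hi+lo)) instead of four, giving
    O(n^log2(3)) ~ O(n^1.585) digit operations."""
    if x < 10 or y < 10:
        return x * y                      # base case: single digits
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2  # the clever step
    return (z2 << (2 * n)) + (z1 << n) + z0
```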
My favorite is the Hindley-Milner type system algorithm: <a href="https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_system#Towards_an_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_sy...</a><p>It is what is used in ML based languages to infer type and apparently Haskell uses it a bit too
Knuth's DLX algorithm. It uses what he calls dancing links to perform backtracking in basically constant space, via a clever trick on doubly linked lists.<p>He recently updated his implementation. It's very fun to read his notes on it; in particular, he discusses things he had gotten wrong.<p>If you like poor JavaScript implementations, I can post mine again.
Cheney's two-finger copying garbage collector with forwarding pointers. It still beats every other GC out there by a factor of 10, if you can afford 2x the memory.
Contrastive divergence for RBMs. Incredibly simple, but absolutely beautiful how it modifies the energy surface. Also, just in general a cool use of Gibbs sampling.
Convolutional neural nets and backprop. That stuff is magical.<p>I also was fascinated as shit when I learnt about minimax and alpha-beta search.<p>Game theory and neural nets have always fascinated me.
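A minimal alpha-beta sketch in Python over a game tree encoded as nested lists (the encoding is my simplification; leaves are scores from the maximizing player's point of view):

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning. alpha/beta bound the value each
    player can already guarantee; whenever alpha >= beta, the remaining
    siblings cannot affect the result and are skipped."""
    if isinstance(node, int):
        return node                      # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # beta cutoff: min player avoids this
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # alpha cutoff: max player avoids this
    return value
```

With good move ordering, pruning roughly squares the reachable search depth for the same node budget, which is what made classic game engines practical.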