Great article.<p>I went through a similar process about a year ago for <a href="https://kouio.com" rel="nofollow">https://kouio.com</a> (RSS reader). In our case I needed to coalesce closely matching RSS feeds purely for storage and performance. After trialling edit distance and various simhash implementations in Python, we ended up needing to look no further than the standard library's difflib.SequenceMatcher - I wish I had documented my findings at the time, but I recall it was the best in terms of speed and accuracy.<p>Also you might not want to rely on str.isalnum for stripping punctuation. I made the same mistake here: <a href="https://twitter.com/stephen_mcd/status/506344236531212288" rel="nofollow">https://twitter.com/stephen_mcd/status/506344236531212288</a>
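For anyone curious, a minimal sketch of that approach (the example strings are my own, and the isalnum quirk shown in the comment is one possible gotcha with that method, not necessarily the exact one from the tweet):

```python
import difflib

def strip_punct(text):
    # Keep only alphanumerics and whitespace. Caveat: str.isalnum is a
    # Unicode test, not an ASCII one - e.g. "²".isalnum() is True in
    # Python 3 - so this is not a pure letters-and-digits filter.
    return "".join(ch for ch in text if ch.isalnum() or ch.isspace())

a = strip_punct("The quick brown fox jumps over the lazy dog!")
b = strip_punct("The quick brown fox jumped over a lazy dog.")

# ratio() returns a similarity in [0.0, 1.0]; pick a threshold
# (e.g. 0.9) above which two feed items are treated as duplicates.
ratio = difflib.SequenceMatcher(None, a, b).ratio()
```

SequenceMatcher is pure stdlib and quadratic in the worst case, so it works best on short, pre-cleaned strings like titles rather than full documents.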
There's also nilsimsa hashing (there's a Python implementation at <a href="http://code.google.com/p/py-nilsimsa/" rel="nofollow">http://code.google.com/p/py-nilsimsa/</a>). Unfortunately, nilsimsa hashes can vary in their most significant bits when used on similar inputs:<p><pre><code> 773e2df0a02a319ec34a0b71d54029111da90838cbc20ecd3d2d4e18c25a3025
47182cf0802a11dec24a3b75d5042d310ca90838c9d20ecc3d610e98560a3645
</code></pre>
...so although nilsimsa is somewhat nice for calculating the difference of two documents, it's a pain in the butt for finding similar documents in a database.<p>The solution described in the writeup is neat, but I really wish there were an LSH that generated hashes with a most-to-least significant ordering in their bits.<p>Great writeup though!
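To make that concrete, the closeness of the two digests quoted above can be measured with a plain Hamming distance over their bits, stdlib only (the "128 minus distance" scoring convention here is my reading of how nilsimsa scores are usually reported, not something from the parent comment):

```python
# The two nilsimsa digests quoted above (256 bits each, hex-encoded).
a = "773e2df0a02a319ec34a0b71d54029111da90838cbc20ecd3d2d4e18c25a3025"
b = "47182cf0802a11dec24a3b75d5042d310ca90838c9d20ecc3d610e98560a3645"

# Hamming distance: the number of bit positions where the digests differ.
distance = bin(int(a, 16) ^ int(b, 16)).count("1")

# Nilsimsa similarity is conventionally reported as 128 - distance:
# identical digests score 128, unrelated ones hover around 0.
score = 128 - distance
```

Because the differing bits are scattered across the whole digest, including the most significant ones, a prefix index on the hex string won't cluster similar documents, which is exactly the database pain described above.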
As an aside: util.clean_html() has been dropped from NLTK 3.0, which has substantial API changes[1].<p>The recommendation now is to use BeautifulSoup or something similar.<p>[1] <a href="https://github.com/nltk/nltk/wiki/Porting-your-code-to-NLTK-3.0" rel="nofollow">https://github.com/nltk/nltk/wiki/Porting-your-code-to-NLTK-...</a>
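If pulling in BeautifulSoup feels like overkill, the standard library's html.parser can cover the simple cases (a rough sketch of "something similar"; it skips script/style contents but won't cope with badly broken markup the way BeautifulSoup does):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text nodes, skipping <script> and <style> contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Only keep text that isn't inside a skipped element.
        if not self._skip_depth:
            self.parts.append(data)

def strip_html(markup):
    parser = TextExtractor()
    parser.feed(markup)
    return "".join(parser.parts)

strip_html("<p>Hello <b>world</b></p>")  # "Hello world"
```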