Nice. The output is similar to that of my RSS ingest pipeline for http://jkl.io, although I've yet to plug in my custom document/topical hash, sentiment, and topical classifiers directly. It does have the article, a stemmed article, the first sentence (which will evolve into a summary), and named entities, and it resolves URL redirects (a rough sketch of that shape is at the end of this comment).

I am thinking I should clean up the code, add a few more extractors, and release it soon as a URL analysis library (I was thinking "demands" would be a good name to pair with Python's "requests"). I would like to get Wikipedia-based entity disambiguation into it first, though, as I think that is a vital feature. My funding pitch largely failed, so I will approach that somewhat more slowly, but the methodology and libraries for building reasonable entity disambiguation from topic modelling (rather than heaviest-subgraph approaches) are out there.

I recently saw an API on HN selling basically this type of extraction from URLs, but I think it's necessary (along with Common Crawl and other such efforts) for this base layer to exist for free so people can properly compete with Google. I think Google currently runs 200+ extractors and classifiers on every page, which gives it a huge advantage over startups (and non-profits, which are my area of interest), and that is an advantage Common Crawl can't remove just by providing the raw data.
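
For the curious, a minimal sketch of that per-URL output shape. This is not my actual pipeline code: it assumes requests, BeautifulSoup, and NLTK as stand-ins for the real extractors, and the analyse_url function and its output fields are just illustrative.

    # Sketch only: requests/BeautifulSoup/NLTK stand in for the real extractors.
    # NLTK models needed once via nltk.download(): punkt,
    # averaged_perceptron_tagger, maxent_ne_chunker, words.
    import requests
    import nltk
    from bs4 import BeautifulSoup
    from nltk.stem import PorterStemmer

    def analyse_url(url):
        # Fetch the page; requests follows redirects by default,
        # so response.url is the resolved, final URL.
        response = requests.get(url, timeout=10)

        # Crude article extraction: strip markup and keep the visible text.
        # A real pipeline would use a readability-style boilerplate stripper.
        article = BeautifulSoup(response.text, "html.parser").get_text(" ", strip=True)

        # First sentence as a stand-in for a summary.
        sentences = nltk.sent_tokenize(article)
        first_sentence = sentences[0] if sentences else ""

        # Stemmed version of the article text.
        stemmer = PorterStemmer()
        stemmed = " ".join(stemmer.stem(tok) for tok in nltk.word_tokenize(article))

        # Named entities via NLTK's off-the-shelf chunker.
        tagged = nltk.pos_tag(nltk.word_tokenize(article))
        entities = [
            " ".join(leaf[0] for leaf in subtree.leaves())
            for subtree in nltk.ne_chunk(tagged)
            if hasattr(subtree, "label")
        ]

        return {
            "resolved_url": response.url,
            "article": article,
            "stemmed_article": stemmed,
            "first_sentence": first_sentence,
            "named_entities": entities,
        }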