I read a lot of "serial archive"-formatted things (webcomics, online novels, etc.). I've always wanted an extension like this that would spider rel="next" and rel="previous" links/headers (or, failing that, guess which pair of links on the page represents them) to build up an archive sequence; chew that into a set of pages+sections; generate a Table of Contents for those; and then stick it all together into an ePub.

I've written scrapers that do exactly that for a few works, but they're one-offs that get their metadata (e.g. chapter titles) from explicitly provided data structures rather than from the site itself. A fully general solution to this would be amazing.
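The rel="next"-chasing step could be sketched roughly like this (a minimal stdlib-only illustration, not a real crawler: the `fetch` callback, the page contents, and all names here are hypothetical, and a real version would need HTTP fetching, URL resolution, and politeness delays):

```python
from html.parser import HTMLParser

class NextLinkFinder(HTMLParser):
    """Record the href of the first <a> or <link> element with rel="next"."""
    def __init__(self):
        super().__init__()
        self.next_url = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and a.get("rel") == "next" and self.next_url is None:
            self.next_url = a.get("href")

def crawl_sequence(start_url, fetch, limit=10000):
    """Follow rel="next" links from start_url and return the ordered list
    of page URLs. `fetch(url)` must return that page's HTML as a string.
    Stops on a missing next link, a cycle, or the page limit."""
    seen, order = set(), []
    url = start_url
    while url and url not in seen and len(order) < limit:
        seen.add(url)
        order.append(url)
        finder = NextLinkFinder()
        finder.feed(fetch(url))
        url = finder.next_url
    return order

# Hypothetical three-chapter archive standing in for real HTTP fetches.
PAGES = {
    "/ch1": '<link rel="next" href="/ch2">',
    "/ch2": '<a rel="next" href="/ch3">next chapter</a>',
    "/ch3": "<p>The end.</p>",
}

print(crawl_sequence("/ch1", PAGES.__getitem__))
# ['/ch1', '/ch2', '/ch3']
```

The ordered URL list is the input to the later stages (chunking into sections, ToC generation, ePub packaging); the "guess the next link" fallback for sites without rel attributes is the genuinely hard part and isn't attempted here.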