I'm scraping information regarding civil servants' calendars. This is all public, text-only information. I'd like to keep a copy of the raw HTML files I'm scraping for historical purposes, and also in case there's a bug and I need to re-run the scrapers.

This sounds like a great use case for a forward proxy like Squid or Apache Traffic Server. However, I couldn't find in their docs a way to both:

* keep a permanent history of the cached pages

* access old versions of the cached pages (think Wayback Machine)

Does anyone know if this is possible? I could potentially mirror the pages using wget or httrack, but a forward cache is a better solution because the caching process is driven by the scraper itself.

Thanks!
If you weren't already aware, Scrapy has strong support for this via its HttpCacheMiddleware: you can choose whether to have it actually behave like a cache, returning already-scraped content on a match, or merely act as a pass-through store: https://docs.scrapy.org/en/2.7/topics/downloader-middleware.html#writing-your-own-storage-backend

Its out-of-the-box storage does what the sibling comment says: it SHA-1s the request and then shards the output filename by the first two characters: https://github.com/scrapy/scrapy/blob/2.7.1/scrapy/extensions/httpcache.py#L332-L333
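For reference, enabling it is just a few settings. A minimal sketch (the values here are illustrative, not the only sensible ones):

    # settings.py
    HTTPCACHE_ENABLED = True
    HTTPCACHE_DIR = "httpcache"    # lives under the project's .scrapy/ directory by default
    HTTPCACHE_EXPIRATION_SECS = 0  # 0 = cached pages never expire, so the history is kept
    HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"
    HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"  # cache every response, ignore Cache-Control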
Content-addressable storage. Generate names with SHA-3 and split off bits of the name into directories like

    name[0:2]/name[0:4]/name[0:6]/name

to keep any single directory from getting too big (even if the filesystem can handle huge directories, various tools you use with it might not). Keep a list of where the files came from, plus other metadata, in a database so you can find things again.
When doing this in the past, I settled on an SQLite database with one table that stores the compressed HTML (gzip or lzma) along with other columns (id/date/url/domain/status/etc.).

That also made it easy to alert when something broke (query the table for count(*) where status = 'error') and re-run the parser for the failures.
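Roughly like this. A minimal sketch, with made-up table and column names along the lines described above:

    import gzip, sqlite3, time

    conn = sqlite3.connect("scrape_archive.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS pages (
            id      INTEGER PRIMARY KEY,
            date    TEXT,
            url     TEXT,
            domain  TEXT,
            status  TEXT,
            html    BLOB   -- gzip-compressed page body
        )
    """)

    def save_page(url: str, domain: str, status: str, html: str) -> None:
        conn.execute(
            "INSERT INTO pages (date, url, domain, status, html) VALUES (?, ?, ?, ?, ?)",
            (time.strftime("%Y-%m-%dT%H:%M:%S"), url, domain, status,
             gzip.compress(html.encode("utf-8"))),
        )
        conn.commit()

    # Alerting on failures is then just:
    #   SELECT count(*) FROM pages WHERE status = 'error';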
I'd just apply an intelligent file-naming strategy based on timestamps and URLs. Keep in mind that a folder should not contain more than about 1,000 files or subfolders, otherwise it gets slow to list.
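For instance, something along these lines (a rough sketch; the layout and helper name are just one possibility):

    import time
    from pathlib import Path
    from urllib.parse import urlsplit

    def page_path(root: Path, url: str, when: time.struct_time | None = None) -> Path:
        # <root>/<domain>/<YYYY-MM-DD>/<HHMMSS>_<path-slug>.html
        # Sharding by domain and day keeps individual folders small.
        when = when or time.gmtime()
        parts = urlsplit(url)
        slug = (parts.path.strip("/").replace("/", "_") or "index")[:80]
        return (root / parts.netloc
                     / time.strftime("%Y-%m-%d", when)
                     / f"{time.strftime('%H%M%S', when)}_{slug}.html")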