ArchiveTeam is extracting <i>all</i> the data from Google Reader and uploading it to the Internet Archive. Help out by submitting your OPML file: <a href="https://news.ycombinator.com/item?id=5958119" rel="nofollow">https://news.ycombinator.com/item?id=5958119</a>
Thanks mihaip!<p>Worked successfully in Windows CMD for me, without using the \bin shell script:<p><pre><code> cd C:\mihaip-readerisdead
set PYTHONPATH=C:\mihaip-readerisdead
C:\path-to-py27\python.exe reader_archive\reader_archive.py --output-directory C:\mystuff
</code></pre>
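(For anyone whose run dies immediately with an import error instead: double-check that the variable actually took and that the interpreter being invoked really is 2.7; C:\path-to-py27 is just a placeholder for wherever Python 2.7 lives.)<p><pre><code> echo %PYTHONPATH%
C:\path-to-py27\python.exe --version
</code></pre>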
Locked up at 251K out of 253K items for me, though. Restarting... success! Looks like it might have locked up trying to start the "Fetching comments" section on my first try.
Warning to other impatient users:<p>I didn't read the instructions too well, so the half hour I spent carefully deleting gigantic/uninteresting feeds out of my subscriptions.xml file was all for naught. Because I didn't know I needed to specify the opml_file on the command line, the script just logged into my Reader account (i.e., it walked me through the browser-based authorization process) and downloaded my subscriptions from there -- including all the gigantic/uninteresting subscriptions that I did NOT care to download.<p>So now I've gone and downloaded 2,592,159 items, consuming 13 GB of space.<p>I'm NOT complaining -- I actually think it's AWESOME that this is possible -- but if you don't want to download millions of items, be sure to read the instructions and use the opml_file directive.
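For anyone else in this spot, this is roughly the invocation I should have used; the flag spelling is my reading of the instructions' "opml_file" (check reader_archive.py's --help output for the exact name), and the paths are placeholders:<p><pre><code> C:\path-to-py27\python.exe reader_archive\reader_archive.py --opml_file C:\mystuff\subscriptions.xml --output-directory C:\mystuff
</code></pre> With a trimmed subscriptions.xml passed in, the script should stick to the feeds listed in the file instead of pulling everything attached to the account.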
This is excellent, thank you for making this! I'm using it right now to make an offline archive of my Reader stuff.<p>My only gripe is the tool's inability to resume after a partial run, but since I won't be using this more than once, that's probably OK.<p>All web services should have a handy CLI extraction tool, preferably one that can be run from a cron job. On that note, I'm very happy with gmvault as well.<p><i>Edit: getting a lot of XML parse errors, by the way.</i>
Thank you for this!
Now I can procrastinate on my own reader app for much longer :)<p>Should we be concerned with errors like this?<p><pre><code> [W 130629 03:11:54 api:254] Requested item id tag:google.com,2005:reader/item/afe90dad8acde78b (-5771066408489326709), but it was not found in the result
</code></pre>
I'm getting ~1-2 per "Fetch N/M item bodies" line.
This is an impressive bit of work. I did have an interesting thing happen, though: it apparently tries to pull every single item from Explore and from Suggested Items, to the point that I got a message saying I have 13 million items, and it was still going strong -- it pulled down about 5 or 6 GB of data.<p>Is there some way to avoid all the years of Explore and Suggested Items with reader_archive? I tried limiting the maximum number of items to 10,000, but it was still running and growing after 12 hours. It is interesting, though, what it was able to accomplish in that time.
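For what it's worth, this is roughly what I attempted; the limit flag's exact spelling is a guess on my part (the script's --help lists the real name), and it may well not apply to the Explore/Suggested streams at all:<p><pre><code> C:\path-to-py27\python.exe reader_archive\reader_archive.py --max-items-per-stream 10000 --output-directory C:\mystuff
</code></pre> The opml_file approach mentioned elsewhere in the thread might be the surer way to keep those streams out, since it pins the run to an explicit list of feeds.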
If this does what I think it does (and it seems to be doing it now on my machine), then this is truly, truly awesome.<p>Thank you. mihaip, if you are ever in Houston I will buy you a beer and/or a steak dinner.
I'm getting "ImportError: No module named site"<p>echo %pythonpath% gives c:\readerisdead<p>I copied 'base' from the readerisdead zipfile to c:\python27\lib & also copied the base folder into the same folder as reader_archive.py<p>C:\readerisdead\reader_archive\reader_archive.py --output-directory C:\googlereader gives "ImportError: No module named site"<p>What am I doing wrong? How can I get this to work?
I guess archived RSS data isn't terribly important for me, since most publishers seem to hide the rest of their content behind a "More" link to get those precious ad views.