HN links to over 6 million URLs in stories and comments. Many of those domains have expired or the content is no longer available. The Internet Archive has much of it but throttles requests. What's the fastest way to get the historical content?
HN does have a REST API which is quite easy to use.

https://github.com/HackerNews/API

I'm not sure what rate-limiting policy is in place, but in theory you can start with a request for maxitem and from that point on just GET every item down to zero until you hit some sort of blocker.
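A minimal sketch of that maxitem-and-walk-down approach in Python, assuming the Firebase endpoints documented in the linked repo (maxitem.json and item/<id>.json) and the requests library. The sleep interval is my own guess at polite pacing, not a published rate limit:

    import time
    import requests

    BASE = "https://hacker-news.firebaseio.com/v0"

    def fetch_item(item_id):
        # Each item (story, comment, job, poll) is a single JSON object.
        return requests.get(f"{BASE}/item/{item_id}.json", timeout=10).json()

    max_id = requests.get(f"{BASE}/maxitem.json", timeout=10).json()
    for item_id in range(max_id, 0, -1):
        item = fetch_item(item_id)
        if item is None:
            continue
        # Stories carry a url field; comments may embed links in their HTML text.
        if item.get("url") or "href=" in item.get("text", ""):
            print(item_id, item.get("url", ""))
        time.sleep(0.1)  # assumed pacing to stay polite; not an official limit

Walking every item one request at a time is slow at this scale, which is one reason the BigQuery dataset mentioned below tends to be the faster route.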
The best way to do it is from Google BigQuery.

There's a dataset containing everything: bigquery-public-data.hacker_news.full

You can write SQL against it and it's super fast. Sample:

    SELECT * FROM `bigquery-public-data.hacker_news.full` LIMIT 1
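A sketch of pulling the link data out of that dataset with the google-cloud-bigquery Python client, assuming you have a GCP project and credentials configured; the column names (id, type, url, text) are my reading of the public dataset's schema, so verify them in the BigQuery console before running:

    from google.cloud import bigquery

    client = bigquery.Client()  # uses your default GCP credentials/project

    # Story URLs live in the url column; links in comments are embedded in the
    # HTML of the text column and still need to be extracted afterwards.
    sql = """
        SELECT id, type, url, text
        FROM `bigquery-public-data.hacker_news.full`
        WHERE url IS NOT NULL OR REGEXP_CONTAINS(text, r'https?://')
    """

    for row in client.query(sql).result():
        print(row.id, row.type, row.url)

From there you can dedupe the URLs and batch them against the Wayback Machine (or its CDX API) instead of hammering it one request at a time.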