
Google Reader API replacement, powered by Riak

70 points by charlieok, almost 12 years ago

4 comments

ivank, almost 12 years ago
There is another ongoing backup of Google Reader's feed cache: http://www.archiveteam.org/index.php?title=Google_Reader and the data is landing at the Internet Archive.

(If anyone has a dedicated server with a high transfer cap, we could *really* use it for temporary storage and uploading to IA. Email in profile.)

For anyone else doing an independent backup, you can get more than 1000 items by using ?r=n&n=1000 and following the continuation in the JSON response with a ?c= URL parameter. And keep in mind that Google doesn't canonicalize feed URLs for the same content, so you have to grab all of them.
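The pagination pattern ivank describes (request up to 1000 items with ?n=, then keep following the continuation token with ?c= until none is returned) can be sketched as a generic helper. The `fetch_page` callable below is a stand-in for a real HTTP call to the Reader API, which is an assumption for illustration, not an actual client:

```python
def fetch_all(fetch_page, page_size=1000):
    """Collect every item from a continuation-paginated source.

    fetch_page(n, c) -> (items, continuation_or_None), mirroring
    the Reader API's ?n= (page size) and ?c= (continuation) params.
    Keeps requesting pages until no continuation token is returned.
    """
    items, continuation = [], None
    while True:
        batch, continuation = fetch_page(page_size, continuation)
        items.extend(batch)
        if continuation is None:
            return items


# Usage with a fake two-page source standing in for the real API:
pages = {None: (["item1", "item2"], "tok1"), "tok1": (["item3"], None)}
all_items = fetch_all(lambda n, c: pages[c], page_size=2)
```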
bonzoesc, almost 12 years ago
For the listing/deleting problems, have you looked at using LevelDB and secondary indexes (2i) to make range queries cheaper?

Disclosure: I work at Basho, makers of the Riak database.
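For readers unfamiliar with the 2i feature bonzoesc mentions: Riak exposes secondary-index range queries over HTTP, against indexes whose names carry a `_int` or `_bin` type suffix. A minimal URL-building sketch, where the host, port, bucket, and index name are hypothetical examples:

```python
def riak_2i_range_url(host, bucket, index, start, end, port=8098):
    """Build a Riak secondary-index (2i) range-query URL.

    Riak index names must end in _int or _bin to declare their type;
    the query returns keys whose indexed value falls in [start, end].
    """
    if not (index.endswith("_int") or index.endswith("_bin")):
        raise ValueError("Riak 2i index names need a _int or _bin suffix")
    return f"http://{host}:{port}/buckets/{bucket}/index/{index}/{start}/{end}"


# e.g. find feed keys last updated between two timestamps:
url = riak_2i_range_url("localhost", "feeds", "updated_int", 0, 1370000000)
```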
JeffJenkins, almost 12 years ago
SuperFeedr is awesome and Julien is great to work with as a user of the service. I wish they had this feature when I was working on my multi-medium client (now defunct) a year and a half ago.

The only downside, and why I stopped using it, is that the pricing model is per-item, so if you have frequently updating feeds it can get very expensive. Although I never tried it, the pricing page does say they'll match whatever it costs you to run your own feed system, since their cost should be lower than yours.
abalone, almost 12 years ago
Interesting, I would have thought that the unidirectional, read-only nature of the publisher-subscriber relationship would have made this simple for a traditional SQL database with read replicas and a very basic partitioning scheme. You assign workers to monitor feeds for updates, they update the DB, and... done.

Looks like they may have *added* some complexity with their feed parser implementation, what they refer to as "supernoders". Looks like they don't lock ownership of feeds during parsing, thus allowing concurrent supernoders to get into race conditions while parsing the same feed.

And so it turns into another NoSQL example of employing conflict resolution to fix things.

I wonder if they could just use a simple locking scheme to prevent more than one parser from parsing the same feed at the same time. This sounds simpler than conflict resolution, to me.
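The locking scheme abalone suggests could look like the sketch below. This is a toy single-process version that guards a set of claimed feed URLs with a mutex; a real multi-node deployment of "supernoders" would need a distributed equivalent (for example, a store with an atomic compare-and-set), but the claim/release shape is the same:

```python
import threading

class FeedClaims:
    """Toy in-process sketch of exclusive feed ownership.

    A parser calls try_claim() before fetching a feed; if another
    parser already holds the claim, it skips the feed instead of
    racing on it, and release() hands the feed back when done.
    """
    def __init__(self):
        self._mutex = threading.Lock()
        self._claimed = set()

    def try_claim(self, feed_url):
        """Return True if this caller now owns the feed, else False."""
        with self._mutex:
            if feed_url in self._claimed:
                return False
            self._claimed.add(feed_url)
            return True

    def release(self, feed_url):
        """Give up ownership so another parser may claim the feed."""
        with self._mutex:
            self._claimed.discard(feed_url)


# Usage: only the first of two concurrent parsers wins the claim.
claims = FeedClaims()
won = claims.try_claim("http://example.com/feed.xml")   # True
lost = claims.try_claim("http://example.com/feed.xml")  # False
claims.release("http://example.com/feed.xml")
```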