> <i>notorious for sticking with chronologically-ordered timelines, so unless you have time to look at every single post, you’ll likely miss something.</i><p>The value of RSS to me is getting everything in chronological order, so I (not some algo) can throw 99% of it away, unread.<p>(and if I do want to go back and look at something I'd skipped without reading, it's easily findable, searchable by keyword or at least still there in the "read" list, next to all the things that were temporally close)
The whole "death of RSS" idea seems like a strange perspective. RSS/Atom never went away, and nothing has replaced their use case for syndicating content.<p>To the author's point, it's just that a lot of mass-market consumption of web content for the past decade or so has been on walled-garden platforms that never offered syndication in the first place.<p>But even there, I think the article is overstating things -- it implies that Reddit, HN, Medium, and Substack have only recently begun offering RSS feeds, but these have never <i>not</i> offered RSS feeds (and HN has always had a native top-level article feed, even if the 3rd-party solution is more extensive). Even YouTube has always offered RSS feeds (albeit without enclosures, so they can't be used as podcast feeds -- but Odysee, a web frontend to LBRY that is gaining traction against YouTube, <i>does</i> offer feeds with enclosures).<p>I guess this application is a good solution for people who want to follow syndicated content via Mastodon, but it should be pointed out that the traditional model of using standalone RSS readers never went away -- when Google Reader was shut down, the void was quickly filled with a variety of solutions like TinyTinyRSS, MiniFlux, Feedly, Inoreader, etc.<p>I personally use TinyTinyRSS, with Liferea as a desktop client, as my primary interface to all of the blogs, podcasts, subreddits, and YouTube channels I read, along with aggregators like HN and Lobsters. I've been using this setup for over 10 years now; nothing has ever stopped working, and none of the sites ever pulled back from publishing feeds.
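For reference, YouTube's built-in feeds follow a stable Atom URL pattern keyed on the channel ID. A minimal sketch in Python (stdlib only; the channel ID is a placeholder, and the inline sample is a trimmed illustration of what YouTube's Atom responses look like -- note the absence of enclosures, which is why they can't serve as podcast feeds):

```python
import xml.etree.ElementTree as ET

CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # hypothetical placeholder channel ID
feed_url = f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"

# Trimmed example of the Atom document such a URL returns; there are
# plain <link> elements but no <enclosure>/attachment elements.
sample = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Channel</title>
  <entry>
    <title>Example Video</title>
    <link rel="alternate" href="https://www.youtube.com/watch?v=dQw4w9WgXcQ"/>
  </entry>
</feed>"""

# Parse entries into (title, watch-page URL) pairs, as a reader would.
ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(sample)
videos = [
    (e.findtext("atom:title", namespaces=ns),
     e.find("atom:link", ns).get("href"))
    for e in root.findall("atom:entry", ns)
]
```

Any standalone reader (TinyTinyRSS, Liferea, etc.) accepts the `feed_url` above directly; the parsing step is just what happens under the hood.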
I always thought the "correct" way of doing this was the other way around: the RSS reader would implement ActivityPub so you could "toot" from within your RSS client. It would perhaps attempt to collect other "toot"s about the same link, facilitating a discussion (and keeping everything in one place). But I think this is the next best thing, especially for those feeds you tend to share on the social networks (I don't think it's feasible to reproduce the RSS reader following with this).
Stop trying to make Mastodon be Twitter. If that's what you want, go use Twitter. I don't get this mentality of "I like X, but I don't want to use X, so imma go turn Y into X" some devs seem to have. Chronological ordering is a fair and balanced choice for media presentation. Playing into people's FOMO is what got the Internet into the rat's nest of algorithmically driven problematic feeds in the first place.
Name clash with an already popular open-source project: <a href="https://fossil-scm.org/home/doc/trunk/www/index.wiki" rel="nofollow">https://fossil-scm.org/home/doc/trunk/www/index.wiki</a>
People like RSS because it's not algorithmic. People like Mastodon because it's not algorithmic.<p>What I'm mainly curious about here is what drew this author to these two technologies in the first place? What's the hook for them?<p>---<p>Subjective aside: before I read the line on algorithms & the HN comments bemoaning the same, I was already very turned off by the awful AI header image. Until tools like Dall-E, &c. get to the point where they can reliably generate images that aren't blatantly & obviously AI-generated (they seem a surprisingly long way off still), I think this effect is worth keeping in mind: for me the aesthetic is a massive turnoff for any product page.
Initially I thought they were referring to fossil SCM:<p><a href="https://www2.fossil-scm.org/home/doc/trunk/www/index.wiki" rel="nofollow">https://www2.fossil-scm.org/home/doc/trunk/www/index.wiki</a><p>Since fossil SCM has been around a while, the author might consider a name change to avoid confusion?
I started using RSS feeds a few months ago; they're still widely supported, and there are JSON scrapers for the sites that don't offer them. The idea of "reading N accounts" to get some info, or some algorithmic soup of "popular stuff I tangentially show interest in", doesn't even compare to curated RSS feeds organized into groups. The unpopular, rare, and obscure stuff can't compete with algorithmically optimal, SEO-friendly, hype-tagged social media junk in the "common prole-feed" of giant websites.
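A "JSON scraper" for a feedless site usually amounts to fetching the site's JSON listing and re-emitting it as RSS. A minimal sketch, assuming a hypothetical JSON API whose items carry `"title"` and `"url"` fields (real sites will differ):

```python
import xml.etree.ElementTree as ET

def json_to_rss(site_title, items):
    """Build a minimal RSS 2.0 document from a list of JSON-style dicts."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    for item in items:
        el = ET.SubElement(channel, "item")
        ET.SubElement(el, "title").text = item["title"]  # assumed field name
        ET.SubElement(el, "link").text = item["url"]     # assumed field name
    return ET.tostring(rss, encoding="unicode")

# e.g. the parsed output of some site's JSON endpoint:
posts = [{"title": "Hello", "url": "https://example.com/1"}]
feed_xml = json_to_rss("Example Site", posts)
```

Serve `feed_xml` from a tiny local HTTP endpoint and any RSS reader can subscribe to the feedless site as if it had native syndication.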