Too bad that browsers are deprecating both RSS (e.g., Firefox removed feed rendering support) and HTTP (e.g., Firefox pushing HTTPS *only*). And HTTP/3 isn't even TCP anymore; it uses the mostly-Google QUIC over UDP.

For the corporate web, RSS and HTTP are dead. But as a non-corporate human person I'll be sure to keep RSS and HTTP alive on my webservers.
Anything built on domain names cannot be described as "built to last." Not HTTP, not SMTP, not Gopher, not DNS, not the Fediverse. This is because domain names are rented out and have expiration times attached to them, so obviously they can only last as long as the organization behind both the domain and the registrar keeps it there.

IRC and USENET were built to last. Names in either network weren't tied to anyone but the collective "network." Neither network gets used much today, since names aren't tied to anyone in particular. It turns out that globally writable data stores are great vehicles for spam and fraud.

Content-addressed systems like BitTorrent and I2P can theoretically maintain content availability for as long as *anybody* wants to keep it available, not just whoever originally published it. BitTorrent is also pretty secure, but it's not truly fair since it's an immutable data store, and all the spam and fraud is just *outsourced* to HTTP instead of being eliminated entirely.
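The durability comes from the identifier being a pure function of the bytes, so any mirror can serve and verify the same data long after the original publisher's domain lapses. A minimal sketch of the idea (plain SHA-256 over raw bytes; real systems like BitTorrent hash a metadata structure instead, so treat this as an illustration only):

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived from the content itself, not from who hosts it.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, address: str) -> bool:
    # Any peer that hands you bytes matching the address is a valid source.
    return content_address(data) == address

blob = b"hello, durable web"
addr = content_address(blob)
print(addr)                       # same address no matter which mirror stored it
print(verify(blob, addr))         # True
print(verify(b"tampered", addr))  # False
```

Contrast that with a URL, where the name points at a rented domain and dies with it.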
> I’d argue that the web, more specifically http, offered up the last truly new paradigm in computing. It is a single, elegant standard for distributed systems. We should all take the time every now and then to think about the beauty, power, and simplicity of this standard.<p>What is really funny about that sentence is that the word "http" links to the wikipedia article through href.li, in order to hide the referer, a built-in feature of HTTP 1.1 and the web. So much for elegance and simplicity when I am staring at a workaround using an external system for something as simple as a link.
Problems with RSS:

1. Inconsistent implementation of standards: The implemented versions of RSS and Atom out there make parsing more of an art than just throwing a library at it, as no RSS parsing library can handle all the edge cases. (The last time I tested this, a few years ago, an hour-long sample from one of the RSS firehoses turned up hundreds of custom namespaces and tag names.)

2. XML formatting: Consistently formatted, well-formed XML is never 100%, even from major news organizations. Embedded CDATA means parsing content is a quagmire of double escaping.

3. Inconsistent content: An RSS feed could have just the last few items that have been updated, with just titles or links, or it could be literally all of the content of a blog, jammed into some 20MB+ text file, double escaped and simply enlarged after every new update.

4. Inconsistent unique identifiers and HTTP header responses: Many sites will respond appropriately to requests with a 304 if there are no changes. Many will not. Many sites will give each RSS item a globally unique identifier; many will not. This forces every reader to simply request the whole doc over and over again, comparing unique items with a blend of logic and magic (see the conditional-request sketch after this list).

5. Inconsistent support: Most sites that use RSS have no business model attached to it, so it's just sort of an afterthought and may be shut down at any time, and often is.

All this leads to: massive amounts of wasted bandwidth as bots poll endlessly for updates, wasted processing time parsing unformatted or badly formatted content, wasted storage because of bad IDs and URLs, wasted effort on the user's part dealing with the inevitable errors, and wasted effort on the admin side dealing with an antiquated tech that should have gone away with MySpace.

RSS should be scrapped. Killed. Replaced. Forgotten.
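To make point 4 concrete, here is a sketch of the conditional request a well-behaved reader can send: it echoes the server's ETag / Last-Modified back, and a compliant server answers 304 with no body, while a site that ignores those headers forces the full document down on every poll. (The feed URL is hypothetical; this uses only the Python standard library.)

```python
import urllib.request
import urllib.error

FEED_URL = "https://example.com/feed.xml"  # hypothetical feed URL

def poll(feed_url, etag=None, last_modified=None):
    """Fetch a feed only if it changed since the last poll."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag              # validator from the previous response
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    req = urllib.request.Request(feed_url, headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            # 200: new content (or a server that ignores conditional requests entirely).
            return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # Not modified: no body transferred, nothing to parse.
            return None, etag, last_modified
        raise

# First poll pulls the whole document; later polls send the validators back.
body, etag, last_mod = poll(FEED_URL)
body, etag, last_mod = poll(FEED_URL, etag, last_mod)
```

When the server plays along, every quiet poll costs a few hundred bytes of headers instead of the whole feed; when it doesn't, you get exactly the wasted bandwidth described above.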