While everyone is waiting for Atproto to proto, ActivityPub is already here. This is giving me "Sumerians look on in confusion as god creates world" vibes.<p><a href="https://theonion.com/sumerians-look-on-in-confusion-as-god-creates-world-1819571221/" rel="nofollow">https://theonion.com/sumerians-look-on-in-confusion-as-god-c...</a>
I would love to have an RSS interface where I can republish articles to a number of my own feeds (selectively or automatically). Then I could follow some of my friends' republished feeds.<p>I feel like the "one feed" approach of most social platforms is not there to benefit users but to encourage doom-scrolling with FOMO. It would be a lot harder for them to capture so much of users' time and tolerance for ads if content were actually organized. But it seems to me that there might not be that much work needed to turn an RSS reader into a very productive social platform for sharing news and articles.
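The republishing step itself is nearly trivial with standard tooling. A minimal sketch in Python (stdlib only; the feed contents and the `keep` filter are illustrative assumptions, not an existing tool):

```python
import xml.etree.ElementTree as ET

def republish(feed_xml: str, keep, feed_title: str) -> str:
    """Build a new RSS 2.0 feed containing only the items the `keep` predicate selects."""
    src = ET.fromstring(feed_xml)
    out = ET.Element("rss", version="2.0")
    channel = ET.SubElement(out, "channel")
    ET.SubElement(channel, "title").text = feed_title
    for item in src.iter("item"):
        if keep(item):
            channel.append(item)  # re-use the original <item> element in the new feed
    return ET.tostring(out, encoding="unicode")

# Hypothetical incoming feed: republish only the items I want my friends to see.
feed = """<rss version="2.0"><channel><title>Inbox</title>
<item><title>Protocol design notes</title><link>http://example.com/1</link></item>
<item><title>Celebrity gossip</title><link>http://example.com/2</link></item>
</channel></rss>"""

shared = republish(feed, lambda i: "Protocol" in i.findtext("title", ""), "My tech picks")
print(shared)
```

A real reader would add persistence and per-feed subscription, but the core "select and re-emit" loop is this small.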
It's not obvious to me that what is missing here is another technical protocol rather than more effective 'social protocols'. If you haven't noticed, the major issues of today are not the scaling of message passing per se but the moderation of content and violations of the boundary between public and private. These issues are socially defined and cannot be delegated to (possibly algorithmic) protocols.<p>In other words, what is missing is rules, regulations and incentives that are adapted to the way people use the digital domain and that keep the decentralized exchange of digital information within a consensus "desired" envelope.<p>Providing capabilities in code and network design is of course a great enabler, but drifting into technosolutionism of the bitcoin type is a dead end. Society is not a static user of technical protocols. If left without matching social protocols, any technical protocol will be exploited and fail.<p>The example of abusive hyperscale social media should be a warning: they emerged as a behavior; they were not specified anywhere in the underlying web design. Facebook is just one website, after all. Tim Berners-Lee probably did not anticipate that one endpoint would successfully fake being the entire universe.<p>The deeper question is: do we want the shape of digital networks to reflect the observed concentration of real current social and economic networks, or do we want to use the leverage of this new technology to shape things in a different (hopefully better) direction?<p>The mess we are in today is not so much a failure of technology as it is one of digital illiteracy, from the casual user all the way to the most influential legal and political roles.
NOSTR has solved most of these topics in a simple way.
Anyone can generate a private/public key without emails or passwords, and anyone can send messages that you can verify as truly belonging to the person with that signature.<p>There are hundreds of servers run today by volunteers, and the cost of entry is low, since even cellphones can be used as servers (nodes) to keep your private notes or the notes from people you follow.<p>There is now a file sharing service called "Blossom" which is decentralized in the same simple manner. I don't think I've seen a way to specify custom domains there; for the moment people can only use the public key to host simple web pages without a server behind them.<p>Many of the topics on your page match what has been implemented there; it might be a good match for you to improve further.
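For a sense of how simple the verification side is: per Nostr's NIP-01, an event's ID is just the SHA-256 of a canonical JSON serialization of its fields, and the author's Schnorr signature over that ID is what anyone can check. A sketch of the ID computation in Python (the secp256k1 signing step itself is omitted, and the pubkey below is a placeholder):

```python
import hashlib, json

def nostr_event_id(pubkey: str, created_at: int, kind: int, tags, content: str) -> str:
    # NIP-01: id = sha256 over the serialized array [0, pubkey, created_at, kind, tags, content]
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

event_id = nostr_event_id(
    pubkey="a" * 64,        # placeholder for a hex-encoded secp256k1 public key
    created_at=1700000000,
    kind=1,                 # kind 1 = short text note
    tags=[],
    content="hello, nostr",
)
print(event_id)  # 64 hex characters; clients verify the Schnorr signature against this ID
```

Because the ID commits to every field, tampering with content, timestamp, or author invalidates the signature.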
1. Domain names: good.<p>2. Proof of work time IDs as timestamps: This doesn't work. It's trivial to backdate posts just by picking an earlier ID. (I don't care about this topic personally but people are concerned about backdating not forward-dating.)<p>N. Decentralized instances should be able to host partial data: This is where I got lost. If everybody is hosting their own data, why is anything else needed?
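To make the backdating point concrete: proof of work binds cost to the bytes being stamped, not to when the work was actually done, so nothing stops you from grinding a valid stamp over a timestamp far in the past. A toy sketch (an illustrative scheme, not any particular protocol):

```python
import hashlib
import itertools

def mine_stamp(payload: bytes, claimed_time: int, difficulty_bits: int = 16) -> int:
    """Find a nonce so sha256(payload | claimed_time | nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(
            payload + claimed_time.to_bytes(8, "big") + nonce.to_bytes(8, "big")
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# Nothing stops us from "proving work" over a timestamp decades in the past:
backdated = 946684800  # 2000-01-01, long before this post could have been written
nonce = mine_stamp(b"my totally old post", backdated)
```

The stamp is perfectly valid; the work only proves someone spent compute on these exact bytes, not when they did it.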
<a href="https://en.wikipedia.org/wiki/Syndie" rel="nofollow">https://en.wikipedia.org/wiki/Syndie</a> was a decent attempt at this which is, I gather, still somewhat alive.
AIUI, the "Decentralized" added to RSS here stands for:<p>- Propagation (via asynchronous notifications). Making it more like NNTP. Though perhaps that is not very different functionally from feed (RSS and Atom) aggregators: those just rely on pulling more than on pushing.<p>- A domain name per user. This can be problematic: you have to be a relatively tech-savvy person with a stable income and living in an accommodating enough country (no disconnection of financial systems, blocking of registrar websites, etc.) to reliably maintain a personal domain name.<p>- Mandatory signatures. I would prefer OpenPGP over a fixed algorithm though: otherwise it lacks cryptographic agility, and reinvents parts of it (including key distribution). And perhaps signatures should be optional.<p>- Bitcoin blockchain.<p>I do not quite see how those help with decentralization, though propagation may help with discovery, which indeed tends to be problematic in decentralized and distributed systems. But that can be achieved with NNTP or aggregators. Meanwhile the rest seems to hurt the "Simple" part of RSS.
A lot of the use cases for this would have been covered by the protocol designs suggested by Floyd, Jacobson and Zhang in <a href="https://www.icir.org/floyd/papers/adapt-web.pdf" rel="nofollow">https://www.icir.org/floyd/papers/adapt-web.pdf</a><p>But it came right at a time when the industry had more or less stopped listening to that whole group, and it was built on multicast, which was a dying horse.<p>But if we had that facility as a widely implemented open standard, things would be much different, and arguably much better, today.
That is a really great list of requirements.<p>One area that is overlooked is commercialization. I believe that a decentralized protocol needs to support some kind of paid subscription and/or micropayments.<p>WebMonetization ( <a href="https://webmonetization.org/docs/" rel="nofollow">https://webmonetization.org/docs/</a> ) is a good start, but it doesn't tackle the actual payment infrastructure setup.
The blog mentions the "discovery problem" 7 times, but this project's particular technology architecture for syndication doesn't seem to actually address it.<p>The project's main differentiating factor seems to be <i>not propagating the actual content</i> to the nodes, instead saving disk space by distributing only hashes of content.<p>However, having a "p2p" decentralized network of hashes doesn't solve the "discovery" problem. The blog lists the following bullet points of metadata, but that's not enough to facilitate "content discovery":<p><i>>However it could be possible to build a scalable and fast decentralized infrastructure if instances only kept references to hosted content.<p>>Let’s define what could be the absolute minimum structure of decentralized content unit:<p>>- Reference to your content — a URL<p>>- User ID — A way to identify who posted the content (domain name)<p>>- Signature — A way to verify that the user is the actual owner<p>>- Content hash — A way to identify if content was changed after publishing<p>>- Post time — A way to know when the post was submitted to the platform<p>>It is not unreasonable to expect that all this information could fit into roughly 100 bytes.</i><p>Those minimal 5 fields of metadata (url+userid+sig+hash+time) are not enough to facilitate content discovery.<p>Content discovery, <i>reducing the infinite internet down to a manageable subset</i>, requires a lot more metadata. That extra metadata requires <i>scanning the actual content</i> instead of the hashes. This <i>extra metadata based on actual content</i> (e.g. Google's "search index", Twitter's tweets & hashtags, etc.) is one of the factors that act as inescapable gravity pulling users towards centralization.<p>To the author: what algorithm did you have in mind for decentralized content discovery?
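As a sanity check on the size claim, here is a back-of-the-envelope packing of those five fields, with assumed (not specified) field sizes: an Ed25519-style 64-byte signature and a 32-byte SHA-256 content hash:

```python
import struct, hashlib, time

def pack_record(url: str, user_id: str, signature: bytes, content: bytes) -> bytes:
    """Pack the blog's five proposed fields; field sizes are assumptions, not the spec."""
    url_b, uid_b = url.encode(), user_id.encode()
    return struct.pack(
        f">H{len(url_b)}sH{len(uid_b)}s64s32sQ",
        len(url_b), url_b,                  # length-prefixed content URL
        len(uid_b), uid_b,                  # length-prefixed user ID (domain name)
        signature,                          # assumed 64-byte Ed25519 signature
        hashlib.sha256(content).digest(),   # 32-byte content hash
        int(time.time()),                   # 8-byte post time
    )

rec = pack_record("https://alice.example/post/1", "alice.example", bytes(64), b"post body")
print(len(rec))  # prints 149: already past the ~100-byte estimate, with zero discovery metadata
```

Even the bare authenticity record overshoots 100 bytes before a single byte of the discovery metadata (topics, terms, hashtags) that search or recommendation would need.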
IPFS has a pub/sub mechanism.<p>As far as I can tell it is stuck in some sort of inefficient prototype stage, which is unfortunate, because I think it is one of the neatest, most compelling parts of the whole project. It is very cool to be able to build protocols with no central server.<p>Here is my prototype of a video streaming service built on it. I abandoned the idea mainly because I am a poor programmer and could never muster the enthusiasm to get it past the prototype stage, but the idea of a video streaming service that was actually serverless sounded cool at the time.<p><a href="http://nl1.outband.net/fossil/ipfs_stream/file?name=ipfs_stream&ci=tip" rel="nofollow">http://nl1.outband.net/fossil/ipfs_stream/file?name=ipfs_str...</a>
I think it's pretty clear they don't want us to have such a protocol. Google's attack on RSS is probably the clearest example of this, but there are also several more foundational issues that prevent multicast and similar mechanisms from being effective.
Am I the only one concerned by this?<p>> In RSDS protocol DID public key is hosted on each domain and everyone is free to verify all the posts that were submitted to a decentralized system by that user.<p>DNS seems far too easy to hijack for me to rely on it for any kind of verification. TLS works because the server which an A(AAA) record points to has to have the private key, meaning that you have to take control of that to impersonate the server. I don’t see a similar protection here.
Perhaps this is a little naïve of me, but I really don't understand what this does. Let's say you have a website with an RSS feed; it seems to have everything listed here. I suppose pages don't have signatures, but you could easily include a signature scheme on your website. In fact, I think this is possible with existing technologies, using a link element with MIME type "application/pkcs7-signature".
I think the author here would be happy to learn that secure scuttlebutt (SSB) exists. <a href="https://github.com/ssbc/scuttlebutt-protocol-guide">https://github.com/ssbc/scuttlebutt-protocol-guide</a>
>Everybody has to host their own content<p>Yeah, this won't work. Like, at all. This idea has been tried over and over in various decentralized apps, and the problem is that as nodes go offline and online, links quickly break...<p>No offense, but this is a very half-assed post that glosses over what has been one of the basic problems in the space. It's a problem that inspired research into DHTs and various attempts at decentralized storage systems, and most recently we're getting some interesting hybrid approaches that seem like they will actually work.<p>>Domain names should be decentralized IDs (DIDs)<p>This is a hard problem by itself. All the decentralized name systems I've seen suck. People currently try to use DHTs. I'm not sure that a DHT can provide reliability, though, and since the name is the root of the entire system it needs to be 100% reliable. In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try to "decentralize" everything.<p>>Proof of work time IDs can be used as timestamps<p>Horribly inefficient for a social feed, and orphans are going to screw you even more.<p>I think you've not thought about this very hard.
> Keeping track of time and operations order is one of the most complicated challenges of a decentralized system.<p>Only in decentralized systems. In centralized ones, timestamps can be faked down to the last bit, everywhere. So, basically, time and order don't matter in centralized systems; only the operator's word does.
Url changed from <a href="https://tautvilas.medium.com/decentralized-syndication-the-missing-internet-protocol-209cb7bd6341" rel="nofollow">https://tautvilas.medium.com/decentralized-syndication-the-m...</a>, which points to this.