This struck a chord with me.<p>The key point of the article is that most social P2P systems (Mastodon, Dat) are fully public and do not hide the user's data from the system administrator or the physical network infrastructure.<p>This is at odds with how these systems are often "marketed" as tools for fighting "The Government" or "The Corporations": the open nature and lack of anonymization mechanisms in these tools leave anyone currently breaking the law (to prove a moral point) trivially detectable by law enforcement.<p>I have personally considered running a few P2P nodes for fun, but the amount of moderation and administration required, plus the risk of the service being used for illegal purposes, puts me firmly in the camp of watching from a distance.
A decentralized social network should have trust-less servers. The servers should just store encrypted data and serve it up, without being exposed to the social graph and with other metadata minimised.<p>Identity should also be independent of servers (and hence DNS).<p>At the networking layer they will also need to use something like onion routing or a mix network to fully protect the social graph.
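The "trust-less server" idea above can be sketched in a few lines: the server stores and serves only opaque, content-addressed ciphertext, while keys and identity live entirely on the client. This is a hypothetical illustration (the class names are mine, and the hash-based keystream stands in for a real authenticated cipher such as XChaCha20-Poly1305 from libsodium); note that access patterns and timing metadata would still leak without the onion-routing layer mentioned above.

```python
import hashlib
import os

class BlobServer:
    """Stores and serves opaque bytes, addressed by content hash.
    The server never sees plaintext or the social graph, though
    metadata such as access patterns can still leak."""
    def __init__(self):
        self._store = {}

    def put(self, blob: bytes) -> str:
        addr = hashlib.sha256(blob).hexdigest()  # content-addressed
        self._store[addr] = blob
        return addr

    def get(self, addr: str) -> bytes:
        return self._store[addr]

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher for illustration only -- NOT secure.
    # A real system would use an authenticated AEAD construction.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Client side: the key is held by the user; the server never sees it.
server = BlobServer()
secret_key = os.urandom(32)
post = b"hello, friends-only post"

addr = server.put(keystream_xor(secret_key, post))
assert server.get(addr) != post                             # server holds only ciphertext
assert keystream_xor(secret_key, server.get(addr)) == post  # key holder can decrypt
```

Identity independence follows the same pattern: a user would be a signing keypair rather than an account on a particular host, so the blobs above could be fetched from any server that happens to replicate them.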
This takes a while to digest. I've been hanging out on SSB (aka Scuttlebutt) for a while now and I do enjoy the community. So this hits very close to home, and I guess it's good that it does.<p>The author, on the one hand, writes this in the concluding paragraph:<p>> Without cohesive organisation, mobilisation to harden security and privacy and without a sincere commitment from protocol designers to revise their collective assumptions, the push back from incumbent power will leverage each and every socio-technical flaw in each and every network.<p>But on the other hand they write:<p>> The moment demands not another protocol, not another manifesto, not another social network, but a savvy understanding of the political dynamics of protocols and the nakedness of today’s networks.<p>I guess that's a call for doing some serious stock-taking before writing the next (iteration of an existing) protocol? I'm fairly confident that the "solution" to this, if it is to be found, will have <i>some</i> technological component to it. But Cade Diehm is right in pointing out that that will not suffice, not by a long shot.<p>If we expect every (human-made) protocol to have flaws and vectors for "incumbent retaliation" like the BitTorrent copyright suits of the 00s, then <i>one</i> way to side-step this would be to reduce the harm such tactics can do. I'm not saying that's easy, but establishing the social norm that a copyright record won't get you into trouble when trying to find a job, that would be cool. There's a whole cultural aspect to this that has largely been confined to some niche cyberpunk (and even more niche, and frankly artistically lacking) solarpunk subcultures.<p>So where do we go from here? Do we "simply" design protocols, and networks around them, that don't allow siphoning off the entire traffic?
Do we establish spaces (physical, mostly) where being "burned" by a copyright lawsuit won't matter, going full walkaway (look it up if you don't get the reference)? Or what should happen?
Wow, I've rarely seen such good writing on this topic. It's encouraging to see that these ideas are alive and well, and are being kept available until the impossible suddenly becomes possible.
AFAIK peer-to-peer is about preventing censorship, but the author seems to assume it is about preventing surveillance. So a large part of the criticism is missing the point.<p>Still, it is a problem that these systems don't prevent both, and it is also a problem if users expect the latter but only get the former. Protocols need to be specific about their threat model.
Nit: The background-color style on `.lyt-txt label` makes this very difficult to read for me: <a href="https://i.imgur.com/XYIH36q.png" rel="nofollow">https://i.imgur.com/XYIH36q.png</a>