For those really interested in the actual techniques being employed:<p>Twitter repo + relevant visibility tweaking code at:
<a href="https://github.com/twitter/the-algorithm/tree/main/visibilitylib/src/main/scala/com/twitter/visibility/rules">https://github.com/twitter/the-algorithm/tree/main/visibilit...</a><p>Still reading through it myself, but if I'm properly distilling the gist of this, it seems they've implemented an "iptables for tweet visibility": the server sends rule definitions down to the client, which then runs a rules engine against them to drop tweets or throttle engagement on specific ones.<p>So... if I'm right (and this is the real kick in the teeth from my perspective), they aren't even doing the hard work of sifting through the datastream and dropping things on their side. They're instead programming your hardware to do their gaslighting/censorship/filtering for them.<p>Dumb pipe for them, but you're left burning cycles on your phone/client/whatever to hide their material for them. The corollary being that, with a sufficiently misbehaving client, one ought to be able to reconstitute an unfiltered stream and get a more accurate representation of the awfulness of those around you, instead of only seeing what Twitter wants you to see.<p>It also means that, server-side, there may actually be nothing preventing a sufficiently misbehaving client from repurposing the Twitter backend as a Command & Control layer. In fact, one may even be able to compose several account provisioning/deprovisioning/visibility primitives to ensure no normal client would see anything, while the message nevertheless gets through. It's technically auditable, but if I put on my blacker hat, I miiiiight see a few ways to get up to some difficult-to-follow mischief if the system as posted is truly representative of what is there. May do some net traffic analysis to see if I can figure out which request would return the hypothetical ruleset to be consumed by the client. Not entirely convinced the engine is entirely client side, as that would have tipped their hand much longer ago, I'd think.
Not sure till I actually audit the full codebase.<p>Yet another reason I've never quite been brave enough to pull the trigger on hosting a system like this for anyone but those I personally know and trust. After a certain point, the probability goes to 1 that someone will find a way to repurpose something nice, no matter the level of good intention, into something horrible. I like to think of it as a more abstract form of Rule 34: if you build an information transfer system, someone, somewhere, will use it for something illegal.<p>Of course, even if I'm totally wrong, odds are that if I'm seeing the potential here, there's a smarter, less ethical version of me with a goatee who's already picked it apart and is likely actively exploiting it.
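<p>To make the hypothesized flow concrete, here's a rough sketch of what "server ships rules, client does the filtering" would look like, and why a misbehaving client trivially defeats it. To be clear: every name, endpoint, and label here is invented for illustration; none of this is taken from the actual Twitter code, which is Scala anyway.

```python
# Hypothetical sketch: server hands the client a ruleset, and the CLIENT
# is the one that drops or throttles tweets. All names/labels are invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    action: str                      # e.g. "drop" or "throttle_engagement"
    matches: Callable                # predicate over a Tweet

@dataclass
class Tweet:
    id: int
    text: str
    labels: set = field(default_factory=set)

def fetch_ruleset():
    """Stand-in for the hypothetical server endpoint returning the rules."""
    return [
        Rule("drop", lambda t: "spam_label" in t.labels),
        Rule("throttle_engagement", lambda t: "borderline_label" in t.labels),
    ]

def well_behaved_client(timeline, rules):
    """Applies the server-provided rules locally, burning your cycles."""
    visible = []
    for tweet in timeline:
        actions = {r.action for r in rules if r.matches(tweet)}
        if "drop" in actions:
            continue                 # tweet silently vanishes client-side
        if "throttle_engagement" in actions:
            tweet.labels.add("engagement_throttled")
        visible.append(tweet)
    return visible

def misbehaving_client(timeline, rules):
    """Ignores the ruleset entirely: the unfiltered stream, reconstituted."""
    return list(timeline)
```

The point of the sketch is the last function: if enforcement genuinely happens client-side, "filtering" is just a suggestion, and the unfiltered stream is whatever the raw timeline response contains.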