It's always worth thinking a little more about these things than "Facebook is blocking EFF privacy tips." Maybe:

* This account already scored poorly on spamminess, and attempting to post a bare link (with no content) on their page pushed them over the edge?

* The EFF Privacy Tips link was somehow used in a parallel spam campaign, or some characteristic of the page itself causes it to be flagged as spammy?

* Something about this specific browser session was flagged, preventing posting? (This might go along with the "No Violations" observation about the page.)

I find it much more interesting to try to reverse engineer these strange behaviors of spam detection systems than to just chalk them up to a "Facebook hates the EFF" conspiracy theory. If this happened to me, I'd be running more tests to see what's up.
It could just be me, but I find the offence/outrage culture on social media very tiring. It seems like certain parts of the internet have become a sponge for human frustration, often misdirected, often unnecessary. Is it healthy for us to keep consuming these micro-outrages all the time?
It’s not at all clear to me that the account in question isn’t just banned from posting links, and they happened to try to post this one.

And without more evidence, that seems more plausible to me than Facebook banning links to the EFF.
The editorialized title here seems deceptive at best. As near as I can tell from the tiny post, the user in question was restricted for something and then posted screenshots of trying to post links after being restricted. Unless we live in a world of time-reversed causality, there’s no evidence for that sensational headline.
Assume for the sake of argument that Facebook users make one million posts a day, and that its spam detector is 99.999% accurate at telling whether a given post is spam or not.

That's 10 false positives _every single day_. And in all likelihood their spam detector isn't that good, and there are many, many more posts than that.

Every now and then one of those false positives will hit an interesting website, or one where the block feels really obviously wrong. But that's just what statistics does: when you take enough samples from a distribution, even very low-probability events happen.

Sometimes something will feel really out of the ordinary and wrong, yet it happened entirely by chance.

If there were evidence of systematic decisions like this, it would be more of a story. But what we have here is just a big nothing-burger.
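A quick back-of-the-envelope check of that arithmetic (both figures are the assumptions above, not real Facebook numbers):

```python
# Back-of-the-envelope false-positive estimate. Both inputs are the
# hypothetical figures from the argument above, not real platform data.
posts_per_day = 1_000_000   # assumed daily post volume
accuracy = 0.99999          # assumed classifier accuracy

misclassified = posts_per_day * (1 - accuracy)
print(f"~{misclassified:.0f} misclassified posts per day")  # ~10
```

Scaling `posts_per_day` up to realistic volumes makes the daily false-positive count grow proportionally, which is exactly the point: even a superb classifier produces a steady trickle of wrong calls.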
As someone who has worked at big social media companies, I can say the conventional wisdom about content moderation being a carefully planned process is off-base.

The reality is that moderation relies heavily on imperfect machine learning models and overworked human reviewers making rushed judgments on hundreds of cases per day. There's no meticulous strategy document mapping out the pros and cons before banning accounts that upset the company.

Mistakes inevitably happen when relying on this combination of flawed automation and human reviewers who are stretched too thin. The moderation policies may seem arbitrary or politically motivated from the outside, but much of it comes down to hasty human error and buggy algorithms rather than some malicious scheme.
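A minimal sketch of the kind of pipeline being described, assuming a score-and-threshold design; every name and number here is invented for illustration and reflects no real platform's internals:

```python
# Hypothetical score-plus-human-queue moderation flow. All thresholds
# and names are made up; the point is that false positives fall out of
# the structure, not out of any deliberate policy.

AUTO_BLOCK = 0.95  # posts scoring above this are blocked automatically
AUTO_ALLOW = 0.20  # posts scoring below this are allowed automatically

def moderate(spam_score: float, seconds_per_case: float) -> str:
    if spam_score >= AUTO_BLOCK:
        # Model false positives land here with no human ever looking.
        return "blocked (automated)"
    if spam_score <= AUTO_ALLOW:
        return "allowed (automated)"
    # The ambiguous middle goes to a reviewer working a long queue.
    if seconds_per_case < 30:
        # Rushed reviewers tend to default to the "safer" action.
        return "blocked (rushed human review)"
    return "allowed (human review)"
```

An innocuous link that happens to score above the automated threshold is simply gone, and nobody at the company ever made a decision about it.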
Does Facebook still consider DistroWatch spam? It was posted here a couple of years ago, and I was skeptical then and have always wondered whether it was true or has since been fixed. https://news.ycombinator.com/item?id=29529312
Classic Facebook tactics. A purely evil company leeching off of its users’ privacy; it feels like the cliché evil corp you read about in sci-fi novels or see in movies.
You know that semi-old saying, "if you're not paying, you're the product"?

Well, it's subtly wrong, but it points in the right overall direction. The capacity to manipulate this kind of stuff, to sabotage ideologies such as the EFF's privacy ideology, IS the product.

Governments spend money for this; it's a product sold from governments to other governments.

Though I suppose this is an instance of FB dogfooding.
They did this years ago (possibly 2015) with a UCLA article about Facebook censorship that I tried to post. I posted a clip of it on YouTube (trying to find it; will edit if I do).