It seems a lot of people in tech believe that the ability of anybody to share largely unmoderated content on large social media platforms is essential to free and open online communication. At the same time, you could argue there is a desire, or a need, to combat spam, disinformation campaigns, propaganda, hate speech, AI-generated content, etc.

Much of the discussion around content moderation is about its many challenges, sometimes suggesting it is an impossible problem to solve. A lot of discussion also suggests content moderation is inherently bad because “free speech”. However, plenty of problems are technically and/or ethically challenging (e.g., self-driving cars), yet they are not discussed the same way.

A perfect solution to content moderation likely does not exist, but I am curious which companies are working solely on providing content moderation as a service and what innovative solutions are being proposed. Also, what is the state of the art in content moderation research, assuming such a thing exists?
Give users the ability to choose their preferred filtering for themselves (a sketch of what that could look like follows below). Think carefully about how to do it. Redirect your zeal there.

At the top-down platform level, focus on keeping out illegal material only. Take any further aspects of your value system, and your theories about what is or isn't "good" for people, out of it.

The current model is terribly contemptuous and anti-human. It assumes that someone or something has more legitimacy than "those people" to decide what EVERYBODY can and can't, should and shouldn't, create and consume. Whether that someone is a private company that "can do what it wants, no further examination needed", a government, "society", "the people" (really, "my people"), the majority, the elite, the experts, the intellectuals, or the "responsible adults in the room", it's all the same: tyranny over the experience of every single person by a subset of people in a position to make it so.
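To make the first point concrete, here is a minimal sketch of user-chosen filtering. Everything here is hypothetical (no platform's real API); the point is only that thresholds and mutes live with the user, not with the platform.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each user sets their own filter thresholds,
# instead of the platform imposing a single global policy.
@dataclass
class FilterPreferences:
    hide_spam: bool = True           # nearly everyone wants this
    max_toxicity: float = 1.0        # 1.0 = show everything
    muted_topics: set[str] = field(default_factory=set)

@dataclass
class Post:
    text: str
    topics: set[str]
    spam_score: float       # 0..1, from some upstream classifier
    toxicity_score: float   # 0..1, likewise

def visible(post: Post, prefs: FilterPreferences) -> bool:
    """Apply the user's own preferences; the platform only supplies scores."""
    if prefs.hide_spam and post.spam_score > 0.9:
        return False
    if post.toxicity_score > prefs.max_toxicity:
        return False
    if post.topics & prefs.muted_topics:
        return False
    return True

# One user filters aggressively; another sees everything legal.
strict = FilterPreferences(max_toxicity=0.3, muted_topics={"politics"})
open_feed = FilterPreferences(hide_spam=False)
```

Under this split, the platform's job reduces to scoring posts and removing illegal material; every judgment beyond that is a per-user setting.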
Do you believe that Twitter and/or Facebook are not applying or working on "state of the art" moderation? I ask because I know firsthand that they are. Further, I also know that when they do talk about researching state-of-the-art "AI based" moderation, the influencers who want no moderation at all come out of the woodwork to dog-pile and invent conspiracies.

At this point I see no way for any social media company to talk about moderation (even legally mandated moderation) without getting dog-piled. Note that even Musk is not advocating absolute free speech. In interviews about his attempt to purchase Twitter, his frustration seems to be more that moderation is a black box, not that moderation is unneeded or unwanted.

If you agree that AI-driven moderation is "state of the art" and that moderation must be transparent rather than a black box, how do you propose a company explain to the public how transformers and hidden layers made a given moderation decision, when even the best PhDs can't really explain it to each other?
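For context on why that is hard, here is roughly what a model-based moderation call looks like, using the open-source Hugging Face transformers library and a publicly available toxicity model as a stand-in (real platforms run proprietary models and pipelines, so treat this as an assumption-laden sketch):

```python
# Sketch of model-based moderation scoring. Assumes the Hugging Face
# `transformers` library (plus a backend like PyTorch) is installed;
# unitary/toxic-bert is one public example model, not what any
# platform actually runs.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

result = classifier("example post text")[0]
print(result["label"], result["score"])
# The score comes out of millions of learned weights. There is no
# human-readable chain of reasoning to publish, which is exactly the
# transparency problem described above.
```

You can publish the model, the training data policy, and the decision thresholds, but the individual decision is still a number out of a black box.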
It’s not a government-run system, so it’s irrelevant.

A guy who got a court-ordered Twitter-sitter should probably not be in charge of Twitter, as he doesn’t seem to understand what protected speech is.

You don’t get to say whatever you want as the CEO of a publicly traded company without consequences, especially as it relates to stock prices and shareholder information.

You don’t get to argue that a company is infringing on your right to free speech; that’s not how free speech works.

As a long-time Musk fan, I find this unhealthy obsession with Twitter to really be the final straw. The man is unhinged and may be completely undermining all the good SpaceX and Tesla are capable of.
Spam is something different from misinformation. In Russia, people who criticize the war are accused of misinformation and labeled spreaders of fake news. Sound familiar? If someone in a Western nation proposes the same because "we are the good guys", that person has misunderstood one of the most fundamental implications of the Enlightenment. Even supposedly educated people seem to fail massively here, and this is still very basic.

And yes, moderation has banned content that was correct and only later admitted the mistake. But the damage is done at that point. I don't even believe Musk doesn't have his own opinions on allowed content, but his perspective is the vastly smarter one, and if you plan to be in opposition to someone like him, better start your homework yesterday.

Because what we got until now didn't have any sensible quality. Companies are interested in compliance. If you think you can leverage them to build good moderation, you are either very naive or malicious. Musk technically has the same problem, but at least he writes the correct stuff on his banner.

Hate speech is a vehicle that enables dictators to censor content they don't like. You cannot ban people from being hateful, and contrary to pop-science beliefs, it isn't as contagious as believed.

The solution is to have an iron framework of allowed content, and terms like "hate speech" make that impossible. Call them personal attacks and ban those if you don't want them on your platform. But ban only them and let supposed misinformation stay; your crusade is more damaging than the info itself, and in some cases the info is correct and you are not. Basically the classical approach of many internet platforms.
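For what it's worth, an "iron framework" could be as simple as a short, explicit, published list of banned categories, applied mechanically. This is only an illustrative sketch; the category names and the matching logic are made up:

```python
# Illustrative sketch of an explicit, published rule set: narrow,
# enumerable categories instead of open-ended labels like "hate speech".
BANNED_CATEGORIES = {
    "personal_attack",   # directed insults at another user
    "illegal_content",   # whatever local law actually requires removing
    "spam",              # bulk unsolicited commercial posting
}
# Deliberately absent: "misinformation"; under this policy it stays up.

def should_remove(post_labels: set[str]) -> bool:
    """Remove a post only if it matches an explicitly banned category."""
    return bool(post_labels & BANNED_CATEGORIES)

assert should_remove({"personal_attack"})
assert not should_remove({"misinformation"})  # contested claims stay
```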