I think the main problem here is that social media companies tend to have a near-monopoly, thanks to network effects. Readers want to be where the writers are, and writers want to be where the readers are. This puts the companies in a position of too much power. We may not shed any tears over the banning of racists making death threats, but things have moved on from that stage. Personally, I sure wish that Sci-Hub's twitter account hadn't been banned. Furthermore, people who value their privacy are de facto banned from these platforms, since the platforms have chosen a model where people pay for the product in the form of their personal data and viewing of advertisements. The same service minus the ads and personal-info gathering could be provided for a relatively small fee, but the above-mentioned network effects ensure that no competitor with this model will be able to survive.<p>I imagine that soon the conversation will turn to regulation. One possible path is to limit what kinds of things can be banned on the largest of the platforms. I can only see that path leading to a big political mess, where no one is happy in the end, and the folks who care about privacy are still screwed. I think the more promising path is to get rid of the monopoly aspect, which means somehow getting rid of network effects. So, even if you use a Facebook competitor, you should still be able to friend people on Facebook, send them messages, read what they write, and they should be able to read what you write.<p>In other words, running a social media company should just mean defining a protocol, or implementing an existing one. The relevant regulations would say the following:<p>- The protocol must be public, with all details published online.<p>- Advance notice must be given of changes to the protocol, so that competitors have time to modify their code, and regulators have time to verify that the new version of the protocol is still legal.
Only one set of changes can be pending at a time.<p>- The protocol may only require people to transfer information that is absolutely required to make the protocol work. So Facebook can't require you to send them your entire private message history in order to talk with someone on their platform. (They can still spy on their own users as much as they want, though.)<p>- Moderation is still allowed, but companies must apply the same moderation rules to their own customers and to competitors' customers. Facebook can still censor whatever posts they don't like, but if they ban all accounts originating from their competitors, even the innocuous ones, they'll have antitrust knocking on their door.<p>- Ranking algorithms determining what people see in their feed, and any other code that mediates how users interact, should also treat all users equally. (It can treat high-karma users differently from low-karma users, of course, but it must be possible for users from competitors to become high-karma.)<p>- These rules only apply to sufficiently large social media protocols (say, those used by over 10% of the population in the country where these regulations are implemented). If you're just running a small Q&A forum for people who use your product, you can do whatever you want. (The rationale is that these rules are intended to prevent the formation of monopolies; protocols without tons of users pose no such problem.)
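<p>To make the data-minimization rule concrete, here's a toy sketch of what one message type in such a published protocol might look like. Everything here is invented for illustration (field names, addresses, the wire format); a real spec would pin these down in its public documentation. The point is that the schema itself enforces the rule: a receiving platform can't quietly demand extra data, because messages carrying fields beyond the published minimum are rejected as protocol violations.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FederatedMessage:
    """One direct message crossing platform boundaries.

    Hypothetical minimal schema: only the fields strictly needed
    to deliver the message are allowed on the wire.
    """
    sender: str     # globally unique address, e.g. "alice@competitor.example"
    recipient: str  # e.g. "bob@bigplatform.example"
    sent_at: str    # ISO 8601 timestamp
    body: str       # the message content; nothing else is required

    ALLOWED_FIELDS = frozenset({"sender", "recipient", "sent_at", "body"})

    def to_wire(self) -> str:
        """Serialize to the published on-wire JSON format."""
        return json.dumps(asdict(self))

    @classmethod
    def from_wire(cls, raw: str) -> "FederatedMessage":
        data = json.loads(raw)
        # Reject extra fields: the protocol forbids requiring (or
        # smuggling in) more data than delivery actually needs.
        extra = set(data) - cls.ALLOWED_FIELDS
        if extra:
            raise ValueError(f"protocol violation: extra fields {sorted(extra)}")
        return cls(**data)

msg = FederatedMessage("alice@competitor.example",
                       "bob@bigplatform.example",
                       "2024-01-01T12:00:00Z",
                       "hello from another platform")
assert FederatedMessage.from_wire(msg.to_wire()) == msg
```

A regulator auditing a pending protocol change would then just diff the allowed field set against the previous version and ask whether each new field is really needed for the protocol to work.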