In my mind, arguments like this are insignificant at best, and quite possibly harmful. At different times and for different reasons, both Democrats and Republicans have expressed dismay at Section 230 and a desire to get rid of it. The likelihood of Section 230 surviving in its current form 10 years from now seems awfully low.

Our best hope is to discuss ways to improve, narrow, and clarify it so that we maintain the protections we care most about. If we continue to advocate for no change whatsoever, we risk losing it all.

To me, the best way forward is to revisit the point at which you cease to be a protected service provider and instead become an unprotected publisher. The Facebooks and TikToks of the world act an awful lot like publishers. Treat them like it.
230 is mostly fine. I would add a KYC type of clause to it, though. Basically, if the platform can identify the speaker of a piece of content, then the speaker is responsible; otherwise the online platform is responsible.

Along with that, the DSA would need to be tasked with creating a strong ID system for companies to integrate with, to make it easy for a platform to identify its users if it wants to.

High level, I think it makes sense that platforms either ensure they know who is using their services or they pre-screen what they publish.
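A toy sketch of that rule as I read it. Everything here is invented for illustration (the `verified_identity` field stands in for whatever token the strong ID system would issue); it's a reading of the proposal, not a real API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author_id: str
    # Token from the strong ID system, present only if the platform
    # verified who the speaker is (the KYC part of the proposal).
    verified_identity: Optional[str]
    body: str

def liable_party(post: Post, platform: str) -> str:
    """Who bears responsibility under the proposed KYC-style clause."""
    if post.verified_identity is not None:
        # The platform can identify the speaker, so the speaker
        # answers for the content.
        return post.verified_identity
    # Anonymous speaker: liability falls back on the platform, which
    # is why it would pre-screen such content instead.
    return platform
```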
All this talk of changing S.230, and yet nobody has legally tested the current situation with regard to algorithmic editorialization. If Bob posts something that mechanically goes to all his friends, then S.230 straightforwardly applies. But if Bob posts something that is merely feedstock for Faceboot (etc.) to possibly decide to pass on to some of his friends, then the middleman has a large part in the speech (the contrast is sketched below). You can't join in with someone singing a song and then disclaim responsibility for its contents.

If Mary's friends all post a bunch of Covid denialism, and Mary gets sick and dies, it's the same situation as if they had called her up and spoken to her directly. But if a small portion of Mary's friends post Covid denialism, and Faceboot determines that Mary is susceptible to the narratives of Covid denialism and will "engage" more with the site if it repeats those messages to her, then Faceboot is directly responsible when Mary gets sick and dies.

I think that is close to the limit of what is possible to do with respect to the First Amendment, and possibly close to the current legal situation, depending on how a lawsuit with such theories fared.

The only plausible approach I've seen to breaking up the power of Big Tech is a US GDPR plus mandated open interoperability APIs. The data protection is so that individuals can choose not to be subjects of these companies, and the open APIs are so that companies have to compete for business rather than coasting on Metcalfe's law. Every other proposal I've seen is basically a way for the government to control Big Tech's power for its own ends, rather than dispersing it so that such concentrated power doesn't exist in the first place.
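To make the conduit-vs-editor distinction concrete, here's a toy sketch. No real platform works this simply, and the engagement model here is a stand-in, but it shows where the platform's own choice enters the picture:

```python
# Toy contrast between the two distribution models discussed above.
# All names are illustrative, not any real platform's internals.

def mechanical_fanout(post: str, friends: list[str]) -> list[str]:
    # Neutral conduit: every friend receives the post, unfiltered.
    # This is the case where S.230 protection seems uncontroversial.
    return friends

def algorithmic_curation(post: str, friends: list[str],
                         predicted_engagement) -> list[str]:
    # Editorial middleman: the platform itself decides who sees the
    # post, optimizing for its own engagement metric. The argument
    # above is that this selection is itself a part of the speech.
    return [f for f in friends if predicted_engagement(f, post) > 0.5]

# Example: a stubbed engagement model that boosts the post only to the
# user it predicts is susceptible to the narrative.
shown = algorithmic_curation(
    "covid denialism", ["mary", "sue"],
    lambda friend, post: 0.9 if friend == "mary" else 0.1)
# shown == ["mary"]
```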