What we're seeing is a new, profitable industry, with a tendency towards natural monopoly, refusing to act in the public interest because its incentives conflict with it. At stake is the entire corpus of 21st-century media, at risk of being drowned out in a tsunami of statistically unidentifiable spam; that flood has already begun and will only get worse, until nobody considers the post-2022 web worth reading or archiving.

The solution is very, very simple: just regulate it. Require every company training LLMs to implement some method of watermarking with a mean detection error rate below a set threshold (see the sketch at the end of this comment). OpenAI's concern is that users will switch to other vendors if it adds watermarks? Well, if no vendor is allowed to offer unwatermarked output, OpenAI keeps its lead. A portion of the market may indeed cease to exist, but in the same vein, if we had always prioritized markets over ethics, nobody would object to a booming hitman industry.

Open-weight models do exist, but they demand far more investment than many of the abusers are willing to make. Large models already need so much GPU power that they're barely competitive with underpaid human labor, and smaller ones already seem to fall into semi-predictable patterns that may not even need watermarking. Ready-made inference APIs could likewise apply some light watermarking; for general-purpose notebooks/VMs, the question may still be open.

Still, it all comes down to the effort-to-effect ratio. Sometimes inconvenience is enough to approximate practical impossibility for 80% of users.
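To be concrete about what "some method of watermarking" could mean: the best-known published scheme is the "green list" watermark of Kirchenbauer et al. (2023), where the sampler is nudged towards a pseudorandom subset of the vocabulary at each step, and a detector later checks whether that subset is statistically over-represented. Below is a toy sketch of that idea; the vocabulary size, bias strength, and function names are all illustrative assumptions, not any vendor's actual implementation:

    import hashlib
    import random

    VOCAB_SIZE = 50_000   # illustrative vocabulary size
    GREEN_FRACTION = 0.5  # share of tokens marked "green" at each step
    BIAS = 2.0            # logit boost given to green tokens while sampling

    def green_list(prev_token: int) -> set[int]:
        # Pseudorandomly partition the vocabulary, seeded by the previous
        # token, so generator and detector derive the same partition.
        digest = hashlib.sha256(str(prev_token).encode()).digest()
        rng = random.Random(int.from_bytes(digest[:8], "big"))
        return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

    def biased_logits(logits: list[float], prev_token: int) -> list[float]:
        # Generator's side: nudge sampling towards green tokens.
        greens = green_list(prev_token)
        return [x + BIAS if i in greens else x for i, x in enumerate(logits)]

    def detect(tokens: list[int]) -> float:
        # Detector's side: z-score of how far the observed green-token count
        # sits above the ~GREEN_FRACTION base rate of unwatermarked text.
        n = len(tokens) - 1
        hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
        expected = n * GREEN_FRACTION
        stddev = (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5
        return (hits - expected) / stddev

The point is that detection here is a statistical test: a regulator can fix a threshold z-score and derive exactly the kind of "mean error rate below a set value" bound argued for above. The known weakness, to be fair, is robustness: paraphrasing or translating the output degrades the signal.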