None of the stuff described here is novel. Increasingly over the past 10 years, and especially over the past five or so, the web has been flooded with low-quality, insincere, intentionally manipulative text content. Sometimes this is done for profit. Sometimes it's done to propagandize for or against something. Sometimes it's simply a hand grenade meant to disrupt or destroy effective communication on a platform or community (think of it like a social DDoS: when everyone's angry and tired from responding to all the garbage posts, they gradually stop participating, and the high-quality posters are the first to go).

But none of this requires GPT. Much simpler bots often do the trick, and when slightly more finesse is required, a whole army of human "bots" stands ready on the grey market to take your crypto and post carefully crafted bullshit wherever you like. GPT MAY end up automating out this human element a bit, but it's hardly a harbinger of doom. The doom's already been here for a while. The environment of the web in general is MUCH more hostile than it used to be, and the barriers to entry are much higher, because the net is crawling with bad actors now. I miss the days when the worst that could happen online was credit card fraud or an unexpected encounter with a pedophile.