Many people seem to have skewed expectations, but posting on X is no different from publishing a blog post. Unless X is applying this to private posts as well, it isn't too surprising. In fact, X is arguably more transparent about it than most. (Other platforms might not explicitly mention AI, but their ToS often include terms that allow similar practices.)

It wouldn't be surprising if Facebook is doing the same, provided it only applies to public posts. Ultimately, if you don't want your content scraped from the internet, the best defense is not to post it at all.
If I prepend "by reading this message, you agree not to use it for AI training purposes" to my Tweet, why is that any less legitimate than the ToS I implicitly agree to by using Twitter?
This seems like a particularly bad move, because:

- The content is, er, not what you'd call high-quality.
- Artists generally _hate_ genAI. Like, really, really, viscerally hate it. They're gonna lose whole communities over this.
I wonder what the ratio of "real human" posts vs mass-produced botspam is like in that dataset. Probably looks like the inside of a mortgage-backed security in 2006.
tl;dr for those who don't want to open CNN:

X's new terms of service, effective November 15, 2024, now allow the platform to use public posts to train its AI models. Users' content can be collected and adapted for various uses, which has raised privacy concerns.