I don't want my content to be harvested by LLMs; they strip attribution, among other things. Otherwise, I'd like to stick as close as possible to the open source licenses (say, MIT). Is there such a license out there? If not, is anyone working on one?<p>So far what we've learned is that robots.txt doesn't work; major sites are resorting to login-only access with 2FA to have any hope of keeping their content away from LLMs. I imagine the license would be one thing, but actually implementing/enforcing it might be a whole other can of worms!
The LLMs' training data is already mostly All Rights Reserved content, which is more restrictive than whatever license you could come up with, and if that doesn't stop anyone, your license sure as hell won't either.<p>Your best bet to fight back is to either try to poison your data, or to train your own models on <i>their</i> data.
If machine learning is found to be fair use, the license you choose does not matter - in the same way Google Books can scan books and make them searchable without a specific license to do so.<p>If machine learning is <i>not</i> found to be fair use, and your concern is the removal of attribution, then MIT license should be fine.<p>> So far what we have learned is that robots.txt doesn't work;<p>The companies training models I'm aware of[0][1][2] all respect robots.txt for their crawling. Can't necessarily guarantee that all of them do - but the fact that smaller players are likely to use CommonCrawl (which also follows robots.txt[3]) means it should catch the vast majority of cases and I'd recommend it if you don't want your work trained on.<p>> major sites are using login-only access with 2FA to have any hope to keep their content away from LLMs<p>I suspect it's more that users with accounts are more valuable than lurkers, and framing forced sign-up as protecting user data from LLMs is a convenient excuse.<p>[0]: <a href="https://platform.openai.com/docs/bots" rel="nofollow">https://platform.openai.com/docs/bots</a><p>[1]: <a href="https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler" rel="nofollow">https://support.anthropic.com/en/articles/8896518-does-anthr...</a><p>[2]: <a href="https://blog.google/technology/ai/an-update-on-web-publisher-controls/" rel="nofollow">https://blog.google/technology/ai/an-update-on-web-publisher...</a><p>[3]: <a href="https://commoncrawl.org/faq" rel="nofollow">https://commoncrawl.org/faq</a>
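To make the robots.txt route concrete: a minimal file that opts out of the crawlers documented at the links above might look like the sketch below. The user-agent tokens (GPTBot, ClaudeBot, Google-Extended, CCBot) are the ones those pages publish as of this writing; check the linked docs, since vendors do add and rename tokens.

```text
# Opt out of AI training crawlers (tokens per vendor docs; verify before relying on this)

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note this is purely advisory: it only works against crawlers that choose to honor robots.txt, which is the parent's point about the major players and CommonCrawl.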
You don't have a choice. Any content you put online will be harvested by LLMs regardless of your intent, or any license you post to the contrary. That's already the norm and it isn't going to change any time soon.<p>hehehheh's comment is your best option - poison your content when possible. It's still going to be consumed, but at least you can make the LLMs choke on it. The second-best option is to never post content to the free internet, but even that's just a temporary measure - all accessible data (including private data) will be assimilated eventually. Expecting a license to work in a post-LLM world is just naive.
If you care about having an OSI-approved license (or about purists arguing that it's not really "open source"), note that any restriction on who or what can use the software violates the FSF's "freedom zero": <a href="https://www.gnu.org/philosophy/free-sw.en.html#four-freedoms" rel="nofollow">https://www.gnu.org/philosophy/free-sw.en.html#four-freedoms</a>
<i>but actually implementing/enforcing them might be a whole other can of worms!</i><p>Are you assuming that out-lawyering Google, OpenAI, etc. is <i>only</i> a can of worms?<p>A license is only as good as your legal wherewithal to enforce it. Good luck.