I guess if you only want to allow crawlers that promise not to use your data for training, robots.txt is still the way to go: selectively allow those bots and disallow everyone else. However, as with ordinary web crawlers, respecting robots.txt is optional.<p>What's insidious about AI models is that it is difficult, or practically impossible, to prove that one was trained on your data.<p>It's also hard to establish a new standard like robots.txt. There was .well-known/security.txt, for example (now RFC 9116); some sites serve it, but adoption has stayed limited.
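<p>A minimal allow-list sketch of that idea (Googlebot and Bingbot here are just placeholders for whichever crawlers you actually trust):<p><pre><code># Trusted crawlers: an empty Disallow means nothing is off-limits
User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

# Everyone else is blocked from the whole site
User-agent: *
Disallow: /</code></pre><p>The allow-list approach fails closed against new, unknown bots, but as noted above it only binds crawlers that choose to honor robots.txt at all.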
Ironically, my blog is written with the help of an LLM, so AI scraper bots are being trained on their own output.<p>But if you are concerned, there's a good resource for blocking them here: <a href="https://darkvisitors.com/" rel="nofollow">https://darkvisitors.com/</a>
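<p>For the opposite, block-list approach, a sketch with a few of the better-known AI training user agents (GPTBot, ClaudeBot, CCBot, Bytespider; the real list is longer and changes often, which is what Dark Visitors tracks):<p><pre><code># Block known AI training crawlers; allow everyone else
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: *
Disallow:</code></pre>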