There it is, the AI scraping detector. The hints in the text are obvious:

"This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure."

The smoking gun is "intellectual property". In a conventional browsing session the website has no idea what the human user is going to do with copyright-protected information published on the website. It therefore assumes good intent and grants open access.

In the case of an AI scraper, assuming you detect it reliably, the opposite holds. Bad intent is assumed, because the very point of most AI scrapers is to harvest your content with zero regard for permission, copyright, or compensation.

To make this work, Google outsources the legal liability of distinguishing between a human and a bot to an "attester", which might be Cloudflare. Whatever method Cloudflare uses to make this call will of course never be transparent, but it must surely involve fingerprinting and keeping historical records of your behavior.

You won't have a choice, and nobody is liable. Clever!

Not to mention the new avenue this creates for false positives, where you randomly lose all your data and access, and nobody will explain why. Or a new authoritarian layer that can be used for political purposes to shut down a digital life entirely.

All of this coming from Google, the scraping company.

I have a much simpler solution: it should be illegal to train AI on copyrighted content without permission from the copyright holder. Training AI is not the same thing as consuming information; it's a radically new use case.