"Do Not Train" Meta Tags: The Robots.txt of AI – Will Anyone Respect Them?

5 points by alissa_v, about 1 month ago
I've been noticing more creators and platforms quietly adding things like <meta name="robots" content="noai"> to their pages - kind of like a robots.txt, but for LLMs. For those unfamiliar, robots.txt is a standard file websites use to tell search engines which pages they shouldn't crawl. These new "noai" tags serve a similar purpose, but for AI training models instead of search crawlers.

Some examples of platforms implementing these opt-out mechanisms:

- Sketchfab now offers creators an option to block AI training in their account settings
- DeviantArt pioneered these tags as part of their content protection approach
- ArtStation added both meta tags and updated their Terms of Service
- Shutterstock created a compensation model for contributors whose images are used in AI training

But here's where things get concerning - there's growing evidence these tags are being treated as optional suggestions rather than firm boundaries:

- Various creators have reported issues with these tags being ignored. For instance, a discussion on DeviantArt (https://www.deviantart.com/lumaris/journal/NoAI-meta-tag-is-NOT-honored-by-DA-941468316) documents cases where the tags weren't honored, with references to GitHub conversations showing implementation issues

- In a GitHub pull request for an image dataset tool (https://github.com/rom1504/img2dataset/pull/218), developers made respecting these tags optional rather than default, which one commenter described as having "gutted it so that we can wash our hands of responsibility without actually respecting anyone's wishes"

- Raptive Support, a company implementing these tags, admits they "are not yet an industry standard, and we cannot guarantee that any or all bots will respect them" (https://help.raptive.com/hc/en-us/articles/13764527993755-NoAI-Meta-Tag-FAQs)

- A proposal to the HTML standards body (https://github.com/whatwg/html/issues/9334) acknowledges these tags don't enforce consent and compliance "might not happen short of robust regulation"

Some creators have become so cynical that one prominent artist, David Revoy, announced they're abandoning tags like #NoAI because "the damage has already been done" and they "can't remove [their] art one by one from their database." (https://www.davidrevoy.com/article977/artificial-inteligence-why-i-ll-not-hashtag-my-art-humanart-humanmade-or-noai)

This raises several practical questions:

- Will this actually work in practice without enforcement mechanisms?
- Could it be legally enforceable down the line?
- Has anyone successfully used these tags to prevent unauthorized training?

Beyond the technical implementation, I think this points to a broader conversation about creator consent in the AI era. Is this more symbolic - a signal that people want some version of "AI consent" for the open web? Or could it evolve into an actual standard with teeth?

I'm curious if folks here have added something like this to their own websites or content. Have you implemented any technical measures to detect if your content is being used for training anyway? And for those working in AI: what's your take on respecting these kinds of opt-out signals?

Would love to hear what others think.
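For readers unfamiliar with how these directives look from the crawler's side, here is a minimal, hypothetical sketch in Python of what honoring them could involve. The function name, the example.com URL, and the exact directive values (noai, noimageai, as mentioned above) are illustrative assumptions, not any vendor's actual implementation:

```python
import urllib.request
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots" content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives.extend(v.strip().lower() for v in content.split(","))


def page_opts_out_of_ai(url):
    """Return True if the page declares a noai/noimageai directive,
    either in a robots meta tag or in an X-Robots-Tag response header."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        header = resp.headers.get("X-Robots-Tag", "")
        html = resp.read().decode("utf-8", errors="replace")

    parser = RobotsMetaParser()
    parser.feed(html)
    directives = parser.directives + [v.strip().lower() for v in header.split(",")]
    return any(d in ("noai", "noimageai") for d in directives)


if __name__ == "__main__":
    print(page_opts_out_of_ai("https://example.com/"))  # placeholder URL
```

As the post notes, nothing forces a scraper to run a check like this; the tag only matters if the crawler's operator chooses to look for it.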

5 comments

nicbou, about 1 month ago
They already started with the assumption of consent, crawled the web with disregard for resource use, and still provide no mechanism to revoke permission. This is the culture around AI. A quiet little tag that says "please don't do that" won't do much.

These companies are already behaving like jerks. Do you think they will become more polite once they control how we access information? With investors breathing down their necks?
Ukv, about 1 month ago
Of the signals used to indicate crawling is prohibited, robots.txt is probably the most effective; OpenAI, Google, Anthropic, Meta, and CommonCrawl all claim to respect it. That often provokes a response of "well they're lying", but I've yet to actually find any cases of the IPs they use for crawling accessing content prohibited by robots.txt.

Newly proposed standards will probably take a while to catch on, if they ever do.

Not a lawyer, but I believe such measures could in theory become legally enforceable in the US without any new legislation if the fair use defense fails but an implied license defense (the reason you can cache/rehost copies of webpages that don't have a <noarchive> meta tag, as in Field v. Google Inc) succeeds.
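The robots.txt part of this claim is at least mechanically checkable. A rough sketch using Python's standard urllib.robotparser, assuming the crawler user-agent tokens these vendors have published (GPTBot, ClaudeBot, CCBot, Google-Extended; verify against each vendor's current documentation before relying on them), reports which of them a given site's robots.txt disallows:

```python
from urllib import robotparser

# Assumed AI-crawler user-agent tokens; check vendor docs for the current list.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended"]


def ai_crawl_permissions(site, path="/"):
    """Report which AI crawlers a site's robots.txt allows to fetch `path`."""
    rp = robotparser.RobotFileParser()
    rp.set_url(site.rstrip("/") + "/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return {agent: rp.can_fetch(agent, path) for agent in AI_CRAWLERS}


if __name__ == "__main__":
    for agent, allowed in ai_crawl_permissions("https://example.com").items():
        print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

This only shows what a site asks for, not what crawlers actually do; verifying compliance would still require checking server logs against the crawlers' published IP ranges, as the comment above describes.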
zzo38computer, about 1 month ago
I do not want others to scrape my files from my server for the purpose of training LLMs, but if they acquire a copy of them by other means, or already have a copy for other reasons, then they already have it and can do what they want with it.

I do not care about attribution; I care more that they do not claim additional restrictions in their terms of use when they copy my stuff and use it.
abhisek, about 1 month ago
I am not sure how this is any different from open source code being embedded in commercial applications. It's really like a self-accelerating loop.

At least for OSS, usage defines value. When an OSS project is popular, enterprises notice it and begin to use it in their commercial applications.
Comment #43790250 not loaded
BobbyTables2, about 1 month ago
No
Comment #43790294 not loaded