TE
TechEcho
Home24h TopNewestBestAskShowJobs
GitHubTwitter
Home

TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

How Googlers cracked OpenAI's ChatGPT with a single word

31 pointsby theduder99over 1 year ago

6 comments

Jimmc414over 1 year ago
I reported this behavior 4 months ago on HN <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=36675729">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=36675729</a><p>[The researchers wrote in their blog post, “As far as we can tell, no one has ever noticed that ChatGPT emits training data with such high frequency until this paper. So it’s worrying that language models can have latent vulnerabilities like this.”]
评论 #38502368 未加载
mike_hearnover 1 year ago
I&#x27;m not sure how this is an attack. Is it actually vital that models don&#x27;t repeat their training data verbatim? Often that&#x27;s exactly the answer the user will want. We are all used to a similar &quot;model&quot; of the internet that does that: search engines. And it&#x27;s expected and required that they work this way.<p>OpenAI argue that they can use copyrighted content so repeating that isn&#x27;t going to change anything. The only issue would be if they had used stolen&#x2F;confidential data to train on, and it was discovered that way, but it also seems unlikely anyone could easily detect that given that there&#x27;d be nothing to intersect it with, unlike in this paper.<p>The blog post seems to slide around quite a bit, roving from &quot;it&#x27;s not surprising to us that small amounts of random text is memorized&quot; straight to &quot;it&#x27;s unsafe and surprising and nobody knew&quot;. The nobody knew idea, as Jimmc414 has nicely proven in this thread, is false alarm because their technique actually was detected and the paper authors just didn&#x27;t know that it had been. And &quot;it&#x27;s unsafe&quot; doesn&#x27;t make any sense in this context. Repeating random bits of memorized text surrounded by huge amounts of original text isn&#x27;t a safety problem. Nor is it an &quot;exploit&quot; that needs to be &quot;patched&quot;. OpenAI could ignore this problem and nobody would care except AI alignment researchers.<p>The culture of alarmism in AI research is vaguely reminiscent of the early Victorians who argued that riding trains might be dangerous, because at such high speeds the air could be sucked out of the carriages.
评论 #38503297 未加载
dangover 1 year ago
Recent and related:<p><i>Scalable extraction of training data from (production) language models</i> - <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38496715">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38496715</a> - Dec 2023 (12 comments)<p><i>Extracting training data from ChatGPT</i> - <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38458683">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38458683</a> - Nov 2023 (126 comments)
FartyMcFarterover 1 year ago
This should make companies think twice about what training data they use. Plausible deniability doesn&#x27;t work if you spit out your training data verbatim.
cedwsover 1 year ago
What&#x27;s the endgame of this &quot;AI models are trained on copyrighted data&quot; stuff? I don&#x27;t see how LLMs can work going forward if every copyright owner needs to be paid or asked for permission. Do they just want LLM development to stop?
评论 #38501921 未加载
评论 #38501988 未加载
评论 #38501492 未加载
评论 #38503863 未加载
评论 #38502149 未加载
评论 #38502320 未加载
评论 #38502133 未加载
评论 #38501709 未加载
gardenhedgeover 1 year ago
I think as part of AI regulations, all companies should have to publish their training data along side their model.