TE
TechEcho
Home24h TopNewestBestAskShowJobs
GitHubTwitter
Home

TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

Asking ChatGPT to Repeat Words 'Forever' Is Now a Terms of Service Violation

37 pointsby abhinavstartsover 1 year ago

7 comments

skilledover 1 year ago
“We can steal people’s copyrighted content but we can’t let you see it for yourself.”<p>Outside of privacy (leaking PII), the above is likely the main reason. Someone could have invested a lump of money to scrape as much as they can and then go to town in the courts.<p>The terms that prohibit it are under “2. Usage Requirements” that restrict reverse engineering the underlying model structure.
评论 #38528279 未加载
LeoPantheraover 1 year ago
Fairly clickbaity headline. Asking it to do so causes the response to terminate early with a message that says it <i>may</i> violate the TOS.<p>I don&#x27;t think the actual TOS has been changed though.
评论 #38528240 未加载
prependover 1 year ago
Publicly available PII isn’t very sensitive, I think.<p>So I feel like it’s important to distinguish between sensitive PII (my social or bank number) and non-sensitive PII (my name and phone number scraped from my public web site).<p>The former is really bad, both to train on and to divulge. The latter is not bad at all and not even remarkable, unless tied to something else making it sensitive (eg, hiv status from a medical record).
评论 #38535999 未加载
beej71over 1 year ago
It was my naïve understanding that the training data no longer existed, having been absorbed in aggregate. (Like how a simple XOR neutral net can&#x27;t reproduce its training data.) But a) I don&#x27;t know how this stuff actually works, and b) apparently it does exist.<p>Has anyone figured out why asking it to repeat words forever makes the exploit work?<p>Also, I&#x27;ve gotten it into infinite loops before without asking. I wonder if that would eventually reveal anything.
namlemover 1 year ago
Does this issue happen with llama models too? If you ask them to repeat a word they&#x27;ll eventually leak their training data?
bravetravelerover 1 year ago
Lol, what a weak defense. Fine, ban your competitors when they pay $20 per peek at your training data
评论 #38530127 未加载
评论 #38535778 未加载
karmakazeover 1 year ago
Ok, but can we still ask it to repeat a word a billion times?