TechEcho (科技回声)

A tech news platform built with Next.js, providing global tech news and discussion.

OpenAI won't watermark ChatGPT text because its users could get caught

50 points · by LaSombra · 9 months ago

16 comments

wongarsu · 9 months ago
Remember when GPT was "too dangerous" to release into the world, and unless you were Twitter-famous you had to apply and wait for months to even get access? Times sure have changed.
pona-a · 9 months ago
What we're seeing is a new profitable industry, with a tendency towards natural monopoly, refusing to act in the public good because its incentives conflict with it. At stake is the entire corpus of 21st-century media, at risk of being drowned out in a tsunami of statistically unidentifiable spam, which has already begun and will only get worse, until nobody will even acknowledge the web post-2022 as worth reading or archiving.

The solution is very, very simple: just regulate it. Force all companies training LLMs to add some method of watermarking with a mean error rate below a set value. OpenAI's concern is that users will switch to other vendors if they add watermarks? Well, if nobody can provide that service, OpenAI still has the lead. A portion of the market may indeed cease to exist, but in the same vein, if we had always prioritized markets over ethics, nobody would be opposed to having a blooming hitman industry.

Open-weights models do exist, but they require much greater investment, which many of the abusers aren't willing to make. Large models already require enough GPU power to be barely competitive with underpaid human labor, and smaller ones seem to already fall into semi-predictable patterns that may not even need watermarking. Ready-made inference APIs can also include some light watermarking, while with general-purpose notebooks/VMs, the question may still be open.

Still, it's all about the effort-to-effect ratio. Sometimes inconvenience is enough to approximate practical impossibility for 80% of users.
j-bos · 9 months ago
On a personal level, I'm happy about this. Having all personal info trackable is a tiring aspect of modern life. Remember when social media didn't zero out the EXIF on photos and anyone could easily grab a map to your family's home?

On a societal level? I'm still OK with this. Watermarks are trivially removed by the motivated and unpleasant.
kristjank · 9 months ago
I do not think adding covert metadata (which this seems to be, however well encoded) to output is a good addition to any program, regardless of whether it's a language model or something even more sinister. I am not so naive as to believe OpenAI refuses to do it out of the goodness of their heart, but it's still a welcome benefit to me.
throwaway2037 · 9 months ago
Why not post the original article that The Verge has basically summarised?

https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a
methyl · 9 months ago
I think it's just too easy to fool it. As the article says:

> But it says techniques like rewording with another model make it “trivial to circumvention by bad actors.”
dsign · 9 months ago
Today, a colleague committed ten lines of code output by ChatGPT, without testing them, and the systems broke. Any chance watermarking would help with this? Just kidding (about the watermarking).

The AI cat is already out of the bag. I'm for regulation when it comes to AI's direst threats, but students using ChatGPT to cheat is a problem that can be solved with live, supervised exams, the kind I had at school in the nineties... we couldn't even use a pocket calculator.

If anything, I'm torn that today's AI systems aren't good enough to do the really serious work. Mark my words: we will have self-aware murder robots before we have AI systems able to write quantum-simulation software.
carbondating · 9 months ago
Kind of pointless when there are many open-source models approaching ChatGPT level that people could use instead.
nope1000 · 9 months ago
If I understand the regulation correctly, they will have to add it in order to comply with the AI Act:

> In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.

https://ec.europa.eu/commission/presscorner/detail/en/ip_24_4123
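The "machine-readable format" the AI Act quote above asks for can be illustrated with a deliberately naive sketch: hiding a byte payload in zero-width Unicode characters appended to the text. This is an illustration only, not how any production LLM marking scheme works, and it is exactly the kind of mark that is trivially stripped, as other commenters point out:

```python
# Naive machine-readable text mark built from zero-width Unicode characters.
# ZWSP (U+200B) encodes a 0 bit, ZWNJ (U+200C) encodes a 1 bit; both are
# invisible when the text is rendered.
ZW0, ZW1 = "\u200b", "\u200c"

def embed(text: str, payload: bytes) -> str:
    # Append the payload as a run of invisible bit-carrying characters.
    bits = "".join(f"{byte:08b}" for byte in payload)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> bytes:
    # Collect the zero-width characters and reassemble them into bytes.
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

A copy-paste through any plain-text normalization step destroys such a mark, which is why statistical watermarks over token choices are what the article actually discusses.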
geor9e · 9 months ago
Curious about the idea of watermarking text. How would one go about embedding a watermark in plain text without altering its readability? Are there specific techniques or tools designed for this purpose? To me it seems like a challenging task. Given the simplicity of text, what methods could ensure the watermark remains intact? Perhaps there's a way to subtly adjust character frequencies or patterns? That said, I'd love some good sources to delve into.
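One published family of answers to the question above does exactly what the commenter guesses: it subtly biases word-choice patterns rather than inserting hidden characters. The sketch below is loosely modeled on "green list" token watermarking; the word-level granularity and hash scheme are my own simplifications for illustration, not OpenAI's actual method. At each step, a hash of the preceding context pseudorandomly marks half the vocabulary "green," and a watermarking generator softly prefers green words; a detector recomputes the same hashes and checks whether green words occur far more often than the ~50% chance rate:

```python
import hashlib

def is_green(prev: str, word: str) -> bool:
    # Pseudorandomly assign each (previous word, next word) pair to the
    # "green" half of the vocabulary via a hash of the context.
    digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Detector: fraction of word transitions that land in the green half.
    # Unwatermarked text scores near 0.5; a generator that prefers green
    # continuations scores well above it, which a binomial test can flag.
    words = text.lower().split()
    if len(words) < 2:
        return 0.5  # too short to score
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

Because the signal lives in which words were chosen, it survives copy-paste and reformatting, but, as noted elsewhere in the thread, paraphrasing with another model largely erases it.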
briandear · 9 months ago
Perhaps all documents should be watermarked “grammar improved by Microsoft Word” or “animation effects provided by Keynote,” or have films watermarked with “automatic color correction provided by DaVinci.” It Takes a Village by Hillary Clinton: “ghostwritten by Barbara Feinman.”

If we want to watermark GPT, fine — then let's watermark absolutely everything not directly and personally created by the claimed creator. But we're getting into interesting legal territory here — work-for-hire agreements would be in jeopardy, because authorship of something made under work for hire is owned by the company, not the contractor. There's also a First Amendment issue — requiring companies and individuals to watermark creative works or disclose uncredited authors amounts to compelled speech that doesn't serve a public interest high enough to warrant an exception to First Amendment protections. The unintended (or perhaps subversively intended) consequences of requiring watermarks could be astounding. Journalists could potentially be compelled to reveal sources, for instance, because the content they created was partially provided by someone else. It's a stretch, but then again, the gymnastics courts and prosecutors routinely employ make such scenarios plausible (albeit unlikely).
lofaszvanitt · 9 months ago
Bird of paradise courtship spectacle (dance) for the masses.
inktrail · 9 months ago
We solved this with a simple but novel solution that doesn't break with rewording/paraphrasing. The problem we've run into is that teachers don't want to have the hard conversation when they get the evidence of cheating. And school leadership doesn't want to fix the issue; they want to put a chatbot on their resume. Sadly, I don't think OpenAI releasing this would have any effect on fixing the problem. It's a problem with the people in education.
tbruckner · 9 months ago
...or maybe the education of what we deem as useful work needs to change.

Imagine if digital cameras had watermarked their photos because art classes refused to consider photography a form of art.
miga · 9 months ago
...until they are legally required to, just like producers of printers are.
latexr · 9 months ago
> On one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.

Remember that next time OpenAI and Sam Altman throw sand in your eyes with “our goal is to empower and serve humanity” or whatever they try to sell you. If they believed what they preach, the choice would have been obvious.