TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Scientists claim >99 percent identification rate of ChatGPT content

64 points, by Sindisil, almost 2 years ago

25 comments

barbegal, almost 2 years ago

The authors of the study are all from the chemistry department at the University of Kansas. Is this really the sort of paper they should be authoring? https://www.cell.com/cell-reports-physical-science/fulltext/S2666-3864(23)00200-X

The methodology is terrible. The prompting was as simple as: "Can you produce a 300- to 400-word summary on this topic: INSERT TOPIC HERE", where some example topics are:

A surprising fossil vertebrate

Stem cells remember insults

I can't see how that prompt is going to come up with anything comparable to the human text, which is based on perspectives articles in Science.

They don't report these numbers, but I can compute them from the tables:

Document false positive rate (human assigned as AI): 0%

Document false negative rate (AI assigned as human): 0%

Paragraph false positive rate (human assigned as AI): 14%

Paragraph false negative rate (AI assigned as human): 3%

In summary, though, this is a garbage-tier study, for entertainment only.
cainxinth, almost 2 years ago

> Scientists are more likely to have a richer vocabulary and write longer paragraphs containing more diverse words than machines. They also use punctuation like question marks, brackets, semicolons more frequently than ChatGPT, except for speech marks used for quotations.

> Real science papers also use more equivocal language – like "however", "but", "although" as well as "this" and "because".

You can prompt ChatGPT to write in that style. For example, I have a semi-standard prompt I often use:

"Don't just give a list of information; write in engaging and well-written paragraph form with transitions. (It's ok to use some bullet points or lists, but the entire piece should not be a list.) Don't use emojis and don't overuse any of the following words (or their conjugates or other tenses): ensure, enable, seamless, solution, crucial, critical, vital, invaluable, essential, welcome, game-changer, unleash, or streamline. You can use each once or twice at most. Vary sentence length to improve readability."
mtlmtlmtlmtl, almost 2 years ago

I have a lot of doubts about this research, both methodologically and in principle, after skimming the paper.

1. All of this is only on GPT-3.5.

2. "Accuracy" is an ambiguous term here. What we really want is sensitivity (how likely the test is to identify true positives) and specificity (same for true negatives). Maybe this is buried in the text somewhere, but I couldn't find it.

3. They only tested a very narrow niche, namely academic papers. Doing machine learning on a highly selective dataset is definitely easier, and the results may not generalise well.

4. I have a strong feeling human language will evolve towards being harder and harder to distinguish from GPT. That means these results could be highly sensitive to the time period from which the non-GPT writing was taken.
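For anyone unfamiliar with the terms in point 2, a minimal sketch of what those two metrics mean, treating "AI-generated" as the positive class. The counts below are illustrative only, chosen to match the paragraph-level rates barbegal extracted above (3% false negatives, 14% false positives), not taken from the paper:

```python
# Sensitivity and specificity from a 2x2 confusion matrix,
# with "AI-generated" as the positive class.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of AI-written texts correctly flagged as AI."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of human-written texts correctly passed as human."""
    return tn / (tn + fp)

# Hypothetical counts: 97 of 100 AI paragraphs caught,
# 86 of 100 human paragraphs correctly left alone.
print(sensitivity(tp=97, fn=3))   # 0.97
print(specificity(tn=86, fp=14))  # 0.86
```

A single "accuracy" number hides which of the two error types dominates, which is exactly the complaint here.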
leecarraher, almost 2 years ago

The title was originally 100%, but the editor felt that was too unbelievable and ratcheted it back to just "greater than 99%". That way it still covers 100%, but looks fancier; there's a math equation in the title, so it's gotta be legit!
CPLX, almost 2 years ago

There is no chance whatsoever that any tool will ever be able to reliably tell the difference between LLM and human content, and I can't understand how anyone thinks such a thing is possible.

There's no mechanism of action for such a thing. The information would have to be encoded in the text, and it isn't.

Honestly, the best way to think about it is to invoke the infinite-monkeys scenario, since believing this *requires* you to disprove the infinite monkey theorem.

Consider this thought experiment.

1) We will start with a piece of text that your detector is 100% certain was created by a GPT tool.

2) Now, prove that there is no way whatsoever for at least one human being to independently create this text.

If you can't *prove* that, then your tool is bullshit.

cite: https://en.wikipedia.org/wiki/Infinite_monkey_theorem
massysett, almost 2 years ago
AI to detect AI. I’m skeptical that AI will lead to the end of the world, but who knows. At this rate, the AIs will go to war on each other and we’ll just be casualties of the crossfire.
teeray, almost 2 years ago
I honestly hope they don’t crack this problem, since it’s created a lovely existential crisis for term papers. It’s forcing long-overdue innovation in how we assess knowledge.
redman25, almost 2 years ago

The trouble with percentages like this is that the last <1% is difficult to achieve but provides the most value. If you're a teacher looking to prevent cheating by students, you can't take the chance of falsely accusing one student out of 100.
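To put a rough number on that risk: even a small per-document false positive rate compounds quickly across a class. A quick sketch (the rates here are illustrative, not from the paper):

```python
# Probability that at least one of n innocent students is falsely
# flagged, given an independent per-document false positive rate p.

def p_any_false_accusation(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(p_any_false_accusation(0.01, 100))   # ~0.63
print(p_any_false_accusation(0.001, 100))  # ~0.095
```

With a 1% false positive rate and 100 students, a wrongful accusation is more likely than not; getting that risk down is exactly the hard last <1%.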
PrimeMcFly, almost 2 years ago

Even if this is true now, it soon won't be. Eventually there won't be *any* way to tell.
jasonlotito, almost 2 years ago

All these claims of ChatGPT content detection assume that the person using ChatGPT is just taking the raw output and not doing any editing or verification. We are treading onto Ship of Theseus grounds here, but how much has to change before it's no longer important?

For example:

> "One of the biggest problems is that it assembles text from many sources and there isn't any kind of accuracy check – it's kind of like the game Two Truths and a Lie."

That's not a ChatGPT problem. It's an accuracy problem. If the output is edited to fix the accuracy, then it's no longer an issue.

Yes, blindly using output can be an issue, but the issue is accuracy, not the method used to generate it. If ChatGPT output were 100% accurate, what would be the next problem?
sublinear, almost 2 years ago

Surely what matters more than detecting generated content is verifying that the information is true and consistent, and limiting the quantity and length of submissions per verified user of a system.

That takes care of the quality of the submission. The other part is determining attribution. Why not just ask the human submitter to defend their work in a controlled environment? If they can, it's their paper now.

I don't see a difference between unassisted people turning in crap vs. assisted people still turning in crap.
spywaregorilla, almost 2 years ago

Look at the features used in their classifier: https://www.cell.com/action/showFullTableHTML?isHtml=true&tableId=tbl1&pii=S2666-3864%2823%2900200-X

20 features, most of them checking whether a single character or word appears. It seems to be pretty dumb, tbh.

I'm also curious how they could possibly justify a single feature of:

> contains "others" or "researchers"
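To see how simple such presence/absence features are, here is a toy sketch in the spirit of that table. The feature names below are illustrative, not the paper's exact list; only the "others"/"researchers" check is quoted from the table:

```python
# Toy version of presence/absence text features like those used by
# the paper's classifier: each feature just checks whether a single
# character or word appears in the text.

def extract_features(text: str) -> dict:
    words = text.lower().split()
    return {
        "contains_question_mark": "?" in text,
        "contains_semicolon": ";" in text,
        "contains_parenthesis": "(" in text,
        "contains_others_or_researchers": any(
            w.strip('.,;:!?()"') in ("others", "researchers") for w in words
        ),
    }

print(extract_features("However, researchers disagree; results vary (somewhat)."))
```

A classifier built on 20 such binary checks can separate two narrow, homogeneous corpora, but there is no reason to expect it to generalise beyond them.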
hammock, almost 2 years ago

What if I take ChatGPT output and partially rewrite it in my own words, or add or mix in some original content? What result does this identification tool return?
ithkuil, almost 2 years ago
Plot twist: that paper has been written by ChatGPT
constantcrying, almost 2 years ago

The chance that LLMs won't be able to produce text in a certain style is extremely low.

What is true is that, by default, ChatGPT doesn't produce the same style as academic papers, which seems unsurprising. You could also compare it to a 15-year-old writing text messages and conclude that ChatGPT content is identifiable by that metric, which, again, seems unsurprising.
eulers_secret, almost 2 years ago

This is another reason to use OSS LLMs: the bar to entry is much higher, so they're not targeted for countermeasures, and they circumvent human pattern-matching.

You're less likely to be "found out", especially with there being LOTS of fine-tunes and even a couple of fundamental models.

Though I don't think this study is accurate, people are trying, and they're only targeting "Open"AI's products.
vidarh, almost 2 years ago

Me to ChatGPT:

> Here is some text. Please rewrite it as follows: 1. DO NOT change the content more than the bare minimum needed to comply with the instructions. 2. Rewrite to add at least one question mark, and one semicolon. Add a "however", "but", "although" and "because", and rewrite the sentences so it flows naturally and without changing the meaning. Here is the paragraph you should rewrite:

> They believe their classifier is effective, because it homes in on a range of stylistic differences between human and AI writing. Scientists are more likely to have a richer vocabulary and write longer paragraphs containing more diverse words than machines. They also use punctuation like question marks, brackets, semicolons more frequently than ChatGPT, except for speech marks used for quotations.

ChatGPT output (GPT-4):

> Although they have confidence in their classifier's effectiveness due to its ability to identify a variety of stylistic differences between human and AI writing, a question emerges; why is this believed to be the case? It's because scientists are generally more likely to utilize a richer vocabulary and construct longer paragraphs filled with a diverse set of words, but can this solely account for the difference?

> They also employ punctuation, such as question marks, brackets, and semicolons, with a higher frequency compared to ChatGPT; however, an exception is the use of speech marks for quotations.

Of course their detector might well handle that too, but GPT-4 is perfectly capable of writing text with widely varying levels of style and vocabulary, and if their detection relies on that, odds are people will prompt their way around it quite easily before even looking at more advanced approaches like fine-tuning on scientific papers within the right field.
Workaccount2, almost 2 years ago
They used GPT3.5 to generate articles and then used those as a test. And their prompts were as basic as could possibly be.
EGreg, almost 2 years ago

Do people really not understand that they're just building a GAN (generative adversarial network), and that the endgame of the arms race is that not even detector machines will be able to tell the difference within a margin of statistical error? Unless that is the goal, this just helps the machines in the long term.
distant_hat, almost 2 years ago

The real danger of this kind of work is that you'll have a shoddy product that misclassifies student/academic writings, and people will get tagged as cheaters when they aren't.
newman8r, almost 2 years ago

It only works for a highly specific type of article that's found in scientific journals ('perspectives' articles), and at the paragraph level it was closer to 92% correct.
actinium226, almost 2 years ago

It sounds like they didn't do any follow-up prompts. The first response from ChatGPT is usually inadequate, but with some follow-ups you can get a much better one.
userbinator, almost 2 years ago

What's the false positive rate?

It's already difficult enough to not be accused of being a bot instead of a human.

I do not want the future to be "rule by AI".
sidewndr46, almost 2 years ago

This title is useless. I can identify >99% of all AI-generated content with a function that is just "return true;".
Giorgi, almost 2 years ago
Not again