Perhaps there is more to this story. Has the student posted her paper?<p>She claims she only used it "to fix spelling and punctuation errors, not to create or edit content".<p>But how would a plagiarism detector pick up that she fixed spelling and punctuation mistakes using Grammarly? If she never made those mistakes in the first place, and didn't use Grammarly at all, shouldn't the detector still flag it as AI-generated?<p>So this story is either about an AI detector that flags human writing as AI, or about a student lying about the extent to which she used AI to write her paper. Or maybe the university is spying on what extensions are installed on student laptops, or sniffing network requests; I don't know.<p>On a final note, is it cheating if I let a friend proofread my paper and offer suggestions? Why would AI be any different? Clearly it's cheating if my friend actually writes the paper for me, just as it would be if AI wrote it.
Schools trying to prevent their students from using AI are fighting a rear-guard action. The detection tools will never be reliable, and the conflict between attempts to ban AI use in educational settings and the widespread use of AI in the real world will become increasingly apparent.<p>Just yesterday, Google added AI features to my Workspace account. Now, when I open a Google Doc, a handy menu appears offering to change the tone of the text, lengthen or shorten it, convert it to bullet points, or follow my custom prompt. When I’ve asked it to correct text that contains grammatical and spelling errors, it cleans up the mistakes while making only minor changes to the wording. It will also write a new text from scratch—no need to cut and paste from Gemini or ChatGPT.<p>It won’t be easy, but educators will just have to figure out new methods of teaching and evaluation that assume that students have powerful AI tools available at all times.
Long-time adjunct instructor here - it's pretty easy to tell when a student is writing above their level or outside their usual tone and style. A lot of the time, I can tell the work is not the student's; maybe not AI-created, but definitely created by someone else.<p>I'd like to see the actual paper. Turning a student in for plagiarism is a serious action and not taken lightly. I suspect the instructor had other clues besides machine alerts.
I have found myself nodding in agreement with this take: <a href="https://x.com/fchollet/status/1750702101523800239" rel="nofollow">https://x.com/fchollet/status/1750702101523800239</a>
Reliably detecting AI-generated text is mathematically impossible.<p><a href="https://www.newscientist.com/article/2366824-reliably-detecting-ai-generated-text-is-mathematically-impossible/" rel="nofollow">https://www.newscientist.com/article/2366824-reliably-detect...</a>
Why are you charging people thousands of dollars to train them to do things a computer can do for free? Every time an AI paper gets an A, the class should be permanently cancelled.
So many colleges aren't teaching anything useful; they're just taking money in exchange for an "I'm not from a poor family" certification for employers. And with the loan crisis, even that certification is fake.
The other thing is that passing a class isn't for the benefit of the schools; it's for the benefit of the student. If the student gets away with cheating and they fail to develop the skills for their career, then they are the ones to lose out. And if it turns out ASI does everything for us all in the future anyway then it doesn't matter.
I've used Turnitin as late as December. The MS Word plugin scans your paper and offers a Grammarly-like experience. It shows which words, quotes, and sentences have come up in other papers, along with the probability of plagiarism.<p>It tells you what it thinks of your paper before you submit it. I think the threshold can be adjusted depending on the school using it. But my expectation from using it is that it points out the discrepancies before you turn it in, and gives you a chance to fix them.<p>So I'm not sure how it's possible to submit with this big of a discrepancy.
From my experience, and from seeing others use Grammarly, Turnitin doesn't detect it at all, even if you abuse it for longer sentences. For the most part it's a magic wand for fixing minor errors, not an AI capable of writing parts of the paper for you. Potentially she struggled with paraphrasing, or used ChatGPT.
Students will need to start screen recording all their assignments now. The nice thing is that if you wait until the school fails you or kicks you out you might be able to sue and get your degree paid for by the proceeds.
I work in EdTech and here are my points:<p>While AI detection software can easily produce a false negative, it's much less likely to produce a false positive.<p>The instructor used two confirmations: Turnitin and the other tool. That makes the likelihood of a false positive even lower.<p>The Grammarly part is just a distraction from the main concern, which is whether the main portions of the paper were AI-generated or not.<p>If I were the decision maker, I would count it against the student, HOWEVER, I would give a first warning, not academic probation.<p>I would also allow students to pass their papers through the exact same software (Turnitin) to pre-screen them. Can this lead to students escaping detection just by editing the parts that Turnitin flags as generated? Yes, HOWEVER, it still makes the students work for that pass instead of them suddenly getting the axe.
These stories always make me wonder what those colleges and universities are teaching when it can allegedly be so easily replaced with AI-generated content. The times when writing fluffy, unoriginal essays would constitute a "soft skill" are definitely gone.<p>Anyway, until AI tools become standard, my advice to students and researchers is to use version control systems to document every step of the evolution of a paper. That's what I'm going to do while I'm still in academia (not that I plan to stay; after 16 years I've had enough of this nonsense).
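To make the suggestion concrete, here's a minimal sketch of what that version-control habit could look like with git (file names, commit messages, and the use of a scratch directory are all made up for illustration): commit the draft after every writing session, so the timestamped history shows the paper evolving incrementally rather than appearing fully formed.

```shell
# Work in a fresh directory for the paper (mktemp keeps this sketch self-contained).
cd "$(mktemp -d)"
git init -q
git config user.name  "Student Name"
git config user.email "student@example.edu"

# Snapshot after the first writing session.
echo "First rough draft: outline and intro." > paper.md
git add paper.md
git commit -qm "Rough draft: outline and intro"

# Snapshot again after revising.
echo "Revised draft after proofreading." > paper.md
git add paper.md
git commit -qm "Revise intro after proofreading"

# The dated, incremental edit history is then available as evidence.
git log --oneline paper.md
```

If a plagiarism accusation comes up, `git log -p` shows exactly which words changed between drafts and when, which is far stronger evidence of authorship than a single finished file.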