On the one hand, this is sorely needed: AI detection software will inevitably be mostly snake oil.<p>Academia and education desperately want this software to work! As a result, selling them something that doesn't work is going to be very profitable.<p>The most obvious problem with this class of software is how easy it would be to defeat if the students could access it themselves: generate some text, run it through the detector, then fiddle with it (by manually tweaking it or by prompting the AI to "reword this to be less perfect") until it passes.<p>Which means these tools need to not be openly available... which makes them much harder to honestly test and evaluate, making it even easier to sell something that doesn't actually work.<p>But... I don't think this site is particularly convincing right now. It has spelling mistakes (which at least help demonstrate AI probably didn't write it) and the key "How AI Detection Software Works" page has a "Coming Soon" notice.<p>The "examples" page is pretty unconvincing right now too - and that's the page I expect to get the most attention: <a href="https://itwasntai.com/examples" rel="nofollow">https://itwasntai.com/examples</a><p>It looks to me like this is still very much under development, and is not yet ready for wider distribution.
IMO the way most schools are going to end up detecting plagiarism is a custom word processor (or something similar) that tracks all of the edits made to a document. Basically, have students type the essay in a program that records every keystroke, so it can tell whether someone is pasting in whole essays or actually typing and revising the essay until it is submitted. Essays that are simply handed in as a finished file are probably going to be a thing of the past.
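To make the idea concrete, here's a toy sketch of the kind of edit log such a word processor could keep (the class, event shape, and the 200-character paste threshold are all made up for illustration):

```python
import time
from dataclasses import dataclass, field

@dataclass
class EditLog:
    """Records edit events so a grader can spot wholesale pastes."""
    events: list = field(default_factory=list)

    def record(self, kind, chars, timestamp=None):
        # kind is e.g. "type", "delete", or "paste"; chars is how many
        # characters the event added or removed.
        self.events.append({
            "kind": kind,
            "chars": chars,
            "time": timestamp if timestamp is not None else time.time(),
        })

    def suspicious_pastes(self, threshold=200):
        """Paste events bigger than `threshold` characters are worth a look."""
        return [e for e in self.events
                if e["kind"] == "paste" and e["chars"] >= threshold]

log = EditLog()
log.record("type", 1)
log.record("type", 1)
log.record("paste", 1800)  # an entire essay appears in one event
print(len(log.suspicious_pastes()))  # → 1
```

A real system would also look at timing (a 1,800-character "typing" burst in two seconds is just as suspicious as a paste), but the basic signal is the same.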
Ironically, the best plagiarism detector would be a 15-minute conversation asking the student about their research and opinions on the topics they wrote about, kind of like interviewing someone who claims Redis expertise on their resume.
Context for the website creation: <a href="https://www.reddit.com/r/ChatGPT/comments/13hi5y6/i_fed_gptzero_a_letter_from_1963_that_went_viral/" rel="nofollow">https://www.reddit.com/r/ChatGPT/comments/13hi5y6/i_fed_gptz...</a>
I've been reviewing answers to questionnaires we send out to potential software engineering candidates. Sometimes candidates seem to write 90% of the submission themselves, and then use ChatGPT for the last couple of questions (which are more general, like "Outline your thoughts on documentation in software projects"). I joked to a colleague that I'd come up with a fool-proof ChatGPT detector in one line of Python:<p><pre><code> is_chatgpt = paragraphs[-1].startswith(('In conclusion', 'Finally'))</code></pre>
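Fleshed out into something runnable (the sample answer text is invented, and of course the heuristic is a joke, not a real detector):

```python
# A hypothetical candidate answer ending with a stock ChatGPT-style closer.
answer = """Documentation matters most during onboarding and maintenance.

In conclusion, teams should treat documentation as a first-class deliverable."""

# Split the answer into paragraphs on blank lines.
paragraphs = answer.strip().split("\n\n")

# The one-line "detector": does the final paragraph open with a cliché?
is_chatgpt = paragraphs[-1].startswith(('In conclusion', 'Finally'))
print(is_chatgpt)  # → True
```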
My favorite "it wasn't AI" example is the GPT-detection software that flagged the US Constitution as almost certainly AI-generated.<p><a href="https://stealthoptional.com/news/us-constitution-flagged-as-ai-generated-content-by-chatgpt-detector/" rel="nofollow">https://stealthoptional.com/news/us-constitution-flagged-as-...</a><p>As long as that kind of egregious mistake is possible, we should look at such tools with suspicion.
This seems kind of pointless to me. Long before I entered the academic system, Turnitin had pioneered the industry of accusing students of plagiarism while simultaneously claiming an unlimited license to their work.<p>They also built a parallel industry selling services to students on how to avoid having their work flagged as plagiarism.<p>In the real world, that is known as organized crime. But in academia, it is business as usual.
With the advent of new technology, entire practices and industries must spring up to counteract the inherent harm that technology will cause and is already causing.<p>Is it a given that technological progress will often necessitate societal harm? Is such technological progress actually progress for humanity?<p>There seems to be this universal notion that "things that can be built will be built and are inevitable". It is, for example, argument #1 any time anyone suggests we should be manufacturing and selling fewer guns: that this is not possible, since guns are "inevitable". You can 3D-print them, after all! Therefore everyone must be armed and we must live in an armed society with regular mass shootings, because what can we do? It's also a ubiquitous slogan around AI - that AI is "inevitable". It's already out there, Google internal docs are betting that OSS AI will become the norm, and that's that. AI will be everywhere, used for everything, making its fairly unreliable decisions about things like who broke into your house last night, who's likely to be shoplifting, whether that's a bike in the crosswalk or nothing at all, and that's now the world we live in.<p>Are humans as a species perhaps in need of better ways to <i>not</i> build things? Right now every possible thing that is imagined and becomes possible therefore "must" be built, en masse, and humanity's occupation becomes mitigating all the harms brought about by all this "progress".<p>Anyway, that's the low-blood-sugar version. I'll likely not have much to say after lunch.
It's not just university students anymore. My 11yo kid got accused of being a robot in a physics competition where the only reward is a (paid) summer camp full of extra physics lessons. All that was needed to trigger the accusation was a slightly less fluent explanation of the solution, exactly what you would expect from a student struggling with a difficult task. People are growing unreasonably paranoid.
This is an "everything sucks all around" situation: since real stakes are tied to academic performance, you have to weed out dishonesty for fairness, but the power disparity between student and teacher and the black-box nature of the detection make it impossible to actually prove your innocence.<p>I wish more than anything that the availability of AI will at some point force schools to restructure how classes work so that cheating like this becomes a non-issue. Higher education is unbelievably bad at actually educating. I only realized that once I graduated and on a whim wanted to learn about something that requires university-level expertise. If you're not there for the credential, it's a monumental waste of time. If classes were designed for students who wanted to be there, and grades were <i>only</i> for your benefit and not used as a target for anything, you might actually have engaged learners.
I am compelled to point out that in one of the info pages, the site includes screenshots of a conversation with ChatGPT where the author claims to defeat AI detection by generating text with a lower temperature. But asking ChatGPT, through the chat interface, to lower the temperature doesn't lower the temperature; there's no mechanism for it to do so. It may have some (nevertheless real) placebo-like effect, because the LLM assigns some vague "meaning" to the word "temperature" and behaves differently, but it isn't a technical change to the model's operation.
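For anyone unfamiliar: temperature is a decoding-time parameter set by the sampling code, not something the model can adjust from inside a conversation. The logits are divided by the temperature before softmax, so low temperatures sharpen the distribution and high temperatures flatten it. A minimal sketch with toy numbers (no real model involved):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate next tokens
logits = [2.0, 1.0, 0.1]

low_t = softmax_with_temperature(logits, 0.2)   # sharper: top token dominates
high_t = softmax_with_temperature(logits, 2.0)  # flatter: closer to uniform

print(low_t[0] > high_t[0])  # lower temperature concentrates probability
```

In the real APIs this is an explicit request parameter (e.g. OpenAI's `temperature`); nothing typed into the prompt touches it.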
This entire line of reasoning (using AI to detect AI, with disastrous results) is ripe for a giant lawsuit. A sufficiently wealthy school is bound to accuse a sufficiently wealthy student at some point.
I love this AI cheating detection stuff.<p>Mostly because it really gets at the root of the issues in education.<p>Like, fine, you have made some system where cheating is impossible. Great.<p>But have your students learned anything?<p>If educators put even an iota of effort into getting to know their students, then they know who is cheating and who isn't.<p>And if they put that same effort back into teaching, then everyone wins.<p>Education is not a contest with winners and losers.<p>(Yes, ok, you went to a bad school where it was a contest for your pre-med degree. Look where that has gotten US healthcare.)
Oooh, I like that: the student can sue for copyright infringement because the teacher uploaded their work and thereby proved the upload?<p>Sounds like a simple sublicensing clause imposed on the student will fix that, but for the next few semesters a few examples can be made of the teachers and institutions.<p>That will pay off some of that tuition.
What if we just let cheaters cheat? If they don't have the knowledge, they won't last long in a job that requires it. As the saying goes, "You're only cheating yourself."
Teachers need to ask students to write things that are hard for AI to cheat on: if a bunch of humans end up writing very similar essays to the prompt - that's a prompt problem!
I like to think that when I was in college I wrote with enough flair and personality that no one could mistake me for an AI. Perhaps I'm overestimating myself.
If a professor fails you because they thought your final essay was written by an AI and it wasn't, do you have legal grounds to start a lawsuit against the school?
> Web server is down Error code 521<p>I’m seeing an error message from Cloudflare.<p>Is the website working for anyone else? Is there an archive / mirror?
Seems like the site might have crashed, here's an archive link: <a href="https://web.archive.org/web/20230515030802/https://itwasntai.com/" rel="nofollow">https://web.archive.org/web/20230515030802/https://itwasntai...</a>.<p>But tl;dr many students have been accused of using AI by teachers who think that AI detection software works, when it really doesn't. So the goal of this site is to communicate to teachers that AI detection software isn't reliable.<p>I originally discovered this in a reddit comment which you can see here: <a href="https://www.reddit.com/r/ChatGPT/comments/13hi5y6/comment/jk5ndkq/" rel="nofollow">https://www.reddit.com/r/ChatGPT/comments/13hi5y6/comment/jk...</a>