My wife is a university professor in the States, and she said it's incredibly obvious who has started using GPT for the work they turn in. There's a lot of wrestling going on among the staff in her department over what to do; some want to outright ban anything that "seems" like it was written by an LLM. As my wife pointed out, it's hard to know how much was written by GPT and how much was edited by the student: maybe they asked the LLM 50 questions and copy/pasted the answers together themselves, or maybe they used it for the outline and filled in the details on their own. The lines can get pretty blurry. One thing that seems to annoy most of the profs: they know GPT can invent convincing-sounding sources, so they're doing a lot more work verifying sources.