People are focusing on the negative stuff here (lazy researchers "cheating"), but there's also a positive side to this. Generative AI is the ultimate research tool. It has already ingested the vast majority of research that you'll never get around to reading; you could spend every waking second of your life reading and you wouldn't come close to catching up. A lot of that material may be irrelevant to your context, but the model still knows about it.

And you can ask it questions about anything it ingested. Or ask it to critique your text, find analogies to other work, or generally play the role of a really diligent peer reviewer and editor before you even submit the work. That's all highly useful, and it should lead to higher-quality work for those researchers who use these tools well. I use GPT for Google Docs and it's definitely helping me improve my text. I don't let it write whole sections or paragraphs, but I do use it to critique my drafts and suggest improvements (see the sketch at the end of this comment). I imagine a lot of students and researchers have been doing the same for the last year or so.

The same goes for reviewers. They can ask it to extract key points, analyze the argumentation, find related work the author might have missed, figure out where the authors are taking a few liberties with the facts or the literature, etc., much more easily.

I reviewed a fair number of mostly badly written papers back in the day. That's laborious and not much fun. A lot of academic life is basically reading each other's work and providing (hopefully) constructive criticism for articles that ultimately don't make the cut. I got rather good at it while I was still doing it. Any workshop, conference, or journal ends up rejecting far more articles than it accepts, especially the better publications. Some poor souls have to read all the rejected stuff; the price you pay for getting accepted is helping out with the peer reviews.

The process can be biased, political, unfair, and sometimes harsh, but it's better than not having a process. Generative AI can help with challenging unfair peer reviews, help reviewers extract key points, and zoom in on novel ideas and theories. Ultimately, what you look for in an article is: does it contribute something novel? Is the work contextualized properly relative to prior work? Is the reasoning sound? Answering such questions positively basically means it's a good article. A generative AI can save a lot of time with this. Weeding out the bad articles is not that hard, but it is a lot of work.
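For the "diligent peer reviewer" workflow mentioned above, here's a minimal sketch of what it can look like in practice. This assumes the OpenAI Python SDK; the model name, the prompt wording, and the `critique` helper are my own illustrative choices, not a specific product or the Google Docs tooling I mentioned.

```python
# Minimal sketch: asking a language model to critique a draft before submission.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

REVIEWER_PROMPT = (
    "Act as a diligent peer reviewer. For the draft below: "
    "1) summarize the key claims, "
    "2) point out gaps in the argumentation, "
    "3) note where the text takes liberties with the literature, "
    "4) suggest concrete improvements. Do not rewrite the text."
)

def critique(draft: str) -> str:
    """Return a structured critique of the draft, without rewriting it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {"role": "system", "content": REVIEWER_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("draft.txt") as f:
        print(critique(f.read()))
```

The point of the "do not rewrite" instruction is exactly the division of labor described above: the model criticizes and suggests, while the writing stays yours.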