Whenever I read articles like this (AI assistants having a negative impact on the resulting work), I wonder how much this affects "experienced" AI users. I've been interested in AI since the Cleverbot days, and I have used GitHub Copilot and ChatGPT extensively since they came out. When I ask ChatGPT something that has an objective answer, but one I can't easily verify from my own knowledge or a low-stakes experiment (e.g. does this fix my syntax error?), I make sure not to "ingest" it into my knowledge or my product before finding one or more external corroborating sources. This doesn't make ChatGPT significantly less useful to me; in my experience, verifying an answer is typically much easier than researching the question from the ground up by conventional means (Google, GitHub Code Search). Similarly, when using GitHub Copilot, I am acutely aware that I need to critically evaluate the suggested code myself, and if there is anything I am unsure about, it's again off to Google or Code Search.

The riskiest thing I do with AI is that, when I am completely stuck on something, I might accept its suggestions without much thought just to see whether they resolve whatever issue I am running into. But in my mind, those parts of the code stay "dirty" until I thoroughly review them; in the vast majority of cases, I end up refactoring them myself. When I ask AI to improve a text I wrote, I rarely take the result as-is; I typically open both versions side by side and apply the parts I like to my original text.

In my opinion, anything created by AI is inherently "unfinished". I cringe whenever people have AI do something and just roll with it (writing an essay, code, graphic design, etc.). AI is excellent for going most of the way, but in most cases the work still needs review and finishing touches from a human, at least for now.