> That’s when it dawned on me: we don’t have a vocabulary for this.

I'd like to highlight the words "counterfeiting" and "debasement" as vocabulary that could apply to the underlying cause of these interactions. To recycle an old comment [0]:

> Yeah, one of their most "effective" uses [of LLMs] is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight."

> Oh, sure, qualitatively speaking it's not new; people could have used form-letters, hired a ghostwriter, or simply sank time and effort into a good lie... but the quantitative change of "Bot, write something that appears heartfelt and clever" is huge.

> In some cases that's devastating--like trying to avert botting/sockpuppet operations online--and in others we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."

[0] https://news.ycombinator.com/item?id=41675602