The original story here was published on December 11th 2022, just days after the launch of ChatGPT (running on GPT-3.5).<p>I feel most of this document (the bit that evaluates examples of ChatGPT output) is now of more historical interest than useful as a critique of the state of LLMs in 2025.
"I cannot emphasize this enough: ChatGPT is not generating meaning. It is arranging word patterns. I could tell GPT to add in an anomaly for the 1970s - like the girl looking at Billy’s Instagram - and it would introduce it into the text without a comment about being anomalous."<p>I asked ChatGPT to introduce the girl looking at Billy's Instagram. The response:<p>"Instagram didn't exist in the 1970s. Do you want to keep the setting authentic to the '70s or update the story to a contemporary timeframe where Instagram fits naturally?"
It’s always baffling how people take a technology that wasn’t even thought feasible a decade ago and try to dismiss it as trivial and stagnant. It’s pretty clear that LLMs have improved rapidly and have become better writers than the majority of people alive. Presenting this as just random pattern matching seems like a way to assuage fears of being replaced.<p>It’s also amusing that people minimize it by calling it mere pattern matching, as if human reasoning weren’t built upon recognizing patterns in past experiences.
> Giving 10 month old children permanent crutches may harm the development of walking, but since the crutches make unassisted walking obsolete, maybe it wasn't worth it anyway?<p>LLMs may be a "language calculator," but they don't share much with their arithmetic analog. Natural language isn't a translation from input to output; it's a manifestation of thought.<p>If deaf children who aren't taught sign language suffer from non-language learning disabilities, it's not a stretch to say that failing to practice quality writing will have a similar effect. Even with direct translation, if a school "translated" Shakespeare down to a lower reading level to match the class's literacy, it may still stunt their development. And if you had ChatGPT do every exercise in a CS course, you wouldn't have learned much from reading the explanations alone.
If this argument is true, it implies an unintended but very bleak corollary about what things aren't really worth preserving, once you consider which things aren't thriving in the economy under competition from LLMs: trusted phone communication, visual art, the rule of law, academic peer review, the news, search engines, question-and-answer platforms, and so on.
> The reason the appearance of this tech is so shocking is because it forces us to confront what we value, rather than letting the status quo churn along unexamined.<p>I think this is the most valuable part of the article. What the tech forces us to confront is the writing process itself, which isn't valued in schools.
> I could tell GPT to add in an anomaly for the 1970s - like the girl looking at Billy’s Instagram - and it would introduce it into the text without a comment about being anomalous<p>Wrong. I tried it. It wrote it as "instagram". Then I asked it to explain this Instagram:<p>In Timmy’s mind, it was simple. First, he’d snap a Polaroid of some passing dog or a field of sunflowers. Next, he’d run home, sit at his rickety desk, and carefully slip the Polaroid into a school notebook he called his “feed.” He’d scribble a title at the top—something he swore was called a “caption”—and pretend he was beaming the image across some invisible network into the hands of friends he’d never actually met.<p>Awesome to me.
The author is a poor writer for someone who is supposed to be an expert on writing. Those who can't do, teach? Prolix prose, excessive commas; he takes too long to get to the point and talks about himself too much.
Maybe this is propaganda and you are being told what to think.<p>Of course AI isn’t going to be a good thing for the majority of humans. But it’s very important to manage sentiment within tech audiences.