It has nothing to do with GPT and everything to do with you.<p>Neurodivergent people are always struggling with the feeling that they can get it 90% right or 99% right or 99.9% right and still get rejected in the end.<p>GPT's top competence is eliciting "neurotypical privilege" and having people give it credit when no credit is due. Somehow it gets it 11% right or 1.1% right or 0.11% right, but people give it credit for the whole. I'm jealous. The cases where people think it failed are the cases where the spell broke and people saw it for what it really was all along.<p>An alternate way of thinking about it is that it goes back to structuralism<p><a href="https://en.wikipedia.org/wiki/Structuralism" rel="nofollow">https://en.wikipedia.org/wiki/Structuralism</a><p>which frequently used language as a model for other things and failed because linguistics is actually a pseudoscience. That is, "language is what language does," and even though language looks like it has regularities, and it does, you can't build systems that depend on those regularities because correct performance requires getting all the non-regular cases right too. Linguistics looks and acts like a science because linguists can set up problems, appear to solve them, and publish papers in their paradigm, but try to use Chomsky's generative grammar for "natural language technology" and it doesn't work. (It does work if you want to make computer languages, though!)<p>In the structuralist mode, GPT-3 thrives on regularities in the surface forms of language, but it has no "world knowledge" or any comprehension of semantics at all. 
The new image generation models are somewhat fascinating because they can use a model of images as a substitute for a world model: a real artist knows that people ordinarily have two arms and two legs and draws accordingly; DALL-E has seen many pictures of people with two arms and two legs, and when it runs its iterative refinement process it automatically rejects candidates that are structurally wrong. That puts these models a step ahead of current LLMs.
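To make the "iteratively refine and reject structurally wrong candidates" idea concrete, here's a toy sketch. This is my own illustration, not DALL-E's actual algorithm: the `prior_score` function stands in for a learned image prior, and the loop is simple accept/reject hill climbing rather than real diffusion. The point is only that a model of what training images look like can pull a bad sample toward structural plausibility without any world knowledge.

```python
import random

# Toy stand-in for a learned image prior: the "training images" cluster
# around (2, 2) -- think "two arms, two legs". A candidate scores higher
# the closer it sits to that learned regularity. (Hypothetical prior,
# chosen purely for illustration.)
def prior_score(candidate):
    arms, legs = candidate
    return -((arms - 2.0) ** 2 + (legs - 2.0) ** 2)

def refine(candidate, steps=200, noise=0.5, seed=0):
    """Iteratively perturb the sample, keeping only proposals the prior
    scores at least as well -- structurally 'wrong' moves get rejected."""
    rng = random.Random(seed)
    for _ in range(steps):
        proposal = tuple(v + rng.gauss(0.0, noise) for v in candidate)
        if prior_score(proposal) >= prior_score(candidate):
            candidate = proposal
    return candidate

start = (7.0, -3.0)       # structurally implausible starting point
result = refine(start)    # ends up near (2, 2), i.e. near the prior's mode
```

Note that the loop never "understands" anatomy; it only compares proposals against statistics of what it has seen, which is the whole substitute-for-a-world-model trick.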