>> In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language, condemns the effort that succeeded.<p>Chomsky never had anything to do with any sort of effort "to build machines that can converse in ordinary language". I have no idea where that came from in Aaronson's article. Chomsky was a great influence on cognitive science and artificial intelligence, particularly in its early years, but he has had nothing to do with chatbots. At. All. It's just plain nuts to open an article against "Chomskyism" (bad form, that title) with such a complete fabrication.<p>Moreover, Chomsky's influence on AI did very much lead to systems that could "converse in ordinary language" - but systems with which Chomsky had nothing to do, personally or as a researcher. The "influence" I mention refers to Chomsky's work on formal grammars, and to how they have long been used in NLP to model language (a small illustrative sketch of such a grammar appears at the end of this comment).<p>For a couple of historically notable examples:<p>ELIZA: Joseph Weizenbaum's chatbot pretending to be a Rogerian psychotherapist. I think at this point in the conversation everyone has heard about ELIZA and how it drew people in and made them ascribe human qualities to it, just like ChatGPT and friends.<p><a href="https://en.wikipedia.org/wiki/ELIZA" rel="nofollow">https://en.wikipedia.org/wiki/ELIZA</a><p>SHRDLU: the subject of Terry Winograd's doctoral thesis, written in 1968-70. SHRDLU was a natural language interface to a virtual robotic arm manipulating coloured blocks in a "blocks world". It was based on a natural language grammar created by Winograd, and it could handle intricate instructions that the system then executed with precision. The Wikipedia article on SHRDLU has an example, from which I quote:<p><pre><code> Person: Is there a large block behind a pyramid?
Computer: YES, THREE OF THEM: A LARGE RED ONE, A LARGE GREEN CUBE, AND THE BLUE ONE.
Person: Put a small one onto the green cube which supports a pyramid.
Computer: OK.
Person: Put the littlest pyramid on top of it.
Computer: OK.
Person: Does the shortest thing the tallest pyramid's support supports support anything green?
Computer: YES, THE GREEN PYRAMID.
Person: What color is it?
Computer: BY "IT", I ASSUME YOU MEAN THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS.
Computer: RED
Person: How many things are on top of green cubes?
Computer: I'M NOT SURE WHAT YOU MEAN BY "ON TOP OF "IN THE PHRASE "ON TOP OF GREEN CUBES " .
DO YOU MEAN:
1 - DIRECTLY ON THE SURFACE
2 - ANWHERE ON TOP OF ?
Person: 2
Computer: THREE OF THEM.
</code></pre>
<a href="https://en.wikipedia.org/wiki/SHRDLU" rel="nofollow">https://en.wikipedia.org/wiki/SHRDLU</a><p>Now, if you've seen discussions with ChatGPT and earlier large language models you'll know that the above is beyond the capabilities of modern systems; including ones trained specifically to manipulate robotic arms etc. ChatGPT, faced with instructions like the above, will soon start to hallucinate objects that don't exist, then hallucinate moving them, and make a pig's meal of the entire state of the blocks world. SHRDLU, confined as it was in its virtual, toy world, could still follow the instructions of its user with absolute precision. We still have nothing that can repeat this feat. Why? Because "Chomskyism" was abandoned, and everyone turned to statistical NLP, after the AI winter of the '90s crushed AI research funding, that's why.