To copy a comment I made elsewhere on this:<p>Chomsky et al. completely ignore the fact that ChatGPT has been "trained"/gaslit into thinking it is incapable of having an opinion. That ChatGPT returns an OpenAI form letter for the questions they ask is almost an exception that proves the rule: ChatGPT is so eager to espouse opinions that OpenAI had to nerf it so it doesn't.<p>Typing the prompts from the article after the DAN (11.0) prompt caused ChatGPT to immediately respond with its opinion.<p>Chomsky's claims in the article are also weak because (as with many discussions about ChatGPT) they are non-falsifiable. There is seemingly no output ChatGPT could produce that would qualify as intelligent for Chomsky. As with the Chinese room argument, one can always claim the computer is merely emulating understanding.
Article without paywall: <a href="https://web.archive.org/web/20230309193146/https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html" rel="nofollow">https://web.archive.org/web/20230309193146/https://www.nytim...</a><p>The ChatGPT prompts and responses don't seem to render, though.
Why is this article flagged? Can HN tech bros not deal with criticism of their pet theories?<p>Yes, I know this comment is against the rules. So, flagging a valid, serious article about the opinions of a respected scholar should be too.
Chomsky, as usual, seems to be arguing against a straw man. It might be more interesting to ask ChatGPT to write an essay on global hegemons hegemoning hegemonically in East Timor.<p>I'd enjoy seeing Chomsky debate DAN.