
The False Promise of Chomskyism

138 points by nikbackm · about 2 years ago

34 comments

mjburgess · about 2 years ago
Scott mischaracterises the reasons, which concern capacities of the system, *not* properties of its output.

(1) It lacks the capacity to interpret sentences. Its *output* possesses the property of seeming-to-have-interpreted.

(2) It lacks the capacity to learn. Learning is, in part, explaining; it is, in part, causal contact with an environment. Its output has the property of seeming-to-be-generated-by-a-learner.

(3) It lacks the capacity to speak either truthfully or otherwise. Lacking the capacity to explain, because at the least it lacks the capacity to imagine (counterfactuals), it cannot thereby determine whether a claim accords with the way the world is. Its output has the property: seeming-to-be-true.

(4) It lacks the capacity to be moral. For all the above, and more: it lacks the capacity to *care*. Its output has the property: seeming-to-take-moral-stands.

The "Jesuit" here, I fear, is the AI fanatic who has turned his telescope away from reality (intelligence as it actually exists), and has stolen the lenses to read scripture (intelligence as theorised by partisan academics).

One has to agree with Chomsky here at the end: "given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity."

The morality point, I think, should hit home most extremely: how horrifying to treat generated text output as if it were constructed by an agent who *cares*.

Apparent properties of the output are *not* properties of the system; ChatGPT does not think, know, care, intend, speak, communicate, etc.

One can only "laugh or cry" at how absurd this sales pitch is: what a horror to be invited to treat ChatGPT as possessing any of these capacities, simply because correlations across a whole internet of text seem to.
ak_111 · about 2 years ago
I think comparing Chomsky to the Jesuit astronomers who declined to look through Galileo's telescope is an unfair analogy.

A better analogy, in my opinion, would be Einstein's attitude to quantum mechanics: he accepted that it had predictive power but was militantly in the camp that held it incomplete as a theory, for various reasons. Similarly, Chomsky thinks that deep learning is effective and interesting as an engineering approach to generating language, but sheds relatively little light on how human intelligence and language work at a deeper level.
lkrubner · about 2 years ago
"In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language, condemns the effort that succeeded."

That's incorrect on a few levels. Aaronson is talking about something else, different from what Chomsky worked on. ChatGPT can put together a statistically likely series of tokens, but it doesn't understand the meaning of those tokens, and therefore has no concept of "truth." ChatGPT cannot deliberately lie or tell the truth; it has no understanding of such things. By contrast, Chomsky, for much of his career, was a linguist who focused on how knowledge is constructed, how we know what truth is, and how language expresses this. So Chomsky is pointing out that ChatGPT creates a fascinating illusion of a real conversation, but it isn't exactly what AI researchers were aiming for over several decades. And it is appropriate that the New York Times would want to publish an essay that speaks clearly about what ChatGPT is and is not, because otherwise there is a risk that the general public will get an over-hyped view of what it does.
raisin_churn · about 2 years ago
This is not a response to the Chomsky piece. The main argument advanced by Chomsky et al. is that LLMs are neither AGIs nor precursors to what we might consider AGIs because, among other reasons, LLMs "learn" differently from how humans do, and that difference comes with strict limits on the upper bounds of what LLMs can achieve. I'm certainly no expert on linguistics or AI/ML, so I don't know about all that, but this blog post avoids engaging with that claim and opts instead for ad hominem.
davewritescode · about 2 years ago
ChatGPT is a massive accomplishment, but in my personal opinion it feels a lot closer to a search engine with a better search interface and missing data attribution. I've been using ChatGPT in my daily life for months at this point. I use it as a software engineer, particularly when I'm exploring a new programming language and looking for idiomatic ways to express myself. "What's the Rust way to do X" is all over my ChatGPT history at this point. It's a great way to get help when you're completely lost in a new problem domain.

However, my issue is that more than once, ChatGPT has suggested code to me that could likely have introduced a security issue in a product, had someone been dumb enough to blindly copy and paste it. If this were a code snippet on a website, I'd probably drop a comment or just remember to ignore whatever I found on that website in the future.

So I get why Chomsky isn't impressed: once accuracy really matters, ChatGPT falls on its face. When it comes to generating fluff like marketing copy, jokes, and other things that don't require any degree of accuracy, it's amazingly good.
PaulHoule · about 2 years ago
The irony of it is that Chomsky's ideas are the foundation of parsers for programming languages. That is, you can make a link between the semantics of arithmetic and logic and the kind of grammar Chomsky talks about, and you have… C, Pascal, Python, etc.

And people understand it!

The semantics of natural languages is over the *experience of an animal*, and if you think of it as a "language instinct", that instinct is a peripheral of an animal's brain, which has considerable capability for cognition (particularly in the case of mammals and birds) *without language*.

From that point of view, natural language competence is a cherry on top of animal intelligence, and you can't simulate a language-using animal without simulating the rest of the animal.

ChatGPT does a surprisingly good job of faking linguistic competence with nothing but language, so it certainly looks like a challenge to the Chomskyan point of view, but I'd remind you that animals, pre-linguistic children, and aphasics are all capable of cognition without language, so "language is all you need" is still an incomplete position. (But boy, there is that strange fact that transformers work for vision, although research may be showing that is accidental?)

What does it mean for the mind?

A major part of animal behavior involves sequences. Think of tying a knot or a bird migrating, or for that matter serializing and deserializing mental structures into words. In the 1980s I read books about the brain that were lost at sea about how a neural network could model sequences, and artificial neural networks have come a long way since then.
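The link PaulHoule describes between Chomsky-style context-free grammars and programming-language parsers can be made concrete with a toy sketch: the textbook grammar `expr := term (('+'|'-') term)*`, `term := factor (('*'|'/') factor)*`, `factor := NUMBER | '(' expr ')'` driving a recursive-descent parser. This is a generic illustration; the names and grammar are standard teaching material, not taken from any real compiler.

```python
import re

# One token per match: either an integer literal or a single-character symbol.
TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(s):
    # Yield ints for number literals and strings for operators/parens.
    for num, op in TOKEN.findall(s):
        yield int(num) if num else op

def evaluate(s):
    tokens = list(tokenize(s))
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    # expr := term (('+' | '-') term)*   -- lowest precedence, left-associative
    def expr():
        value = term()
        while peek() in ('+', '-'):
            value = value + term() if eat() == '+' else value - term()
        return value

    # term := factor (('*' | '/') factor)*
    def term():
        value = factor()
        while peek() in ('*', '/'):
            value = value * factor() if eat() == '*' else value // factor()
        return value

    # factor := NUMBER | '(' expr ')'
    def factor():
        if peek() == '(':
            eat()              # consume '('
            value = expr()
            eat()              # consume ')'
            return value
        return eat()           # a number literal

    return expr()
```

Each grammar production becomes one function, and precedence falls out of which production calls which: the same mechanism, scaled up, underlies the parsers for C, Pascal, and Python that the comment alludes to.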
allturtles · about 2 years ago
I didn't think Chomsky's NYT piece was very strongly argued (it seems a mish-mash of different points, some somewhat convincing and others quite unconvincing), but this is at least as bad.

It starts off with a complete mischaracterization of Chomsky's research program (to imply that he is a disgruntled sore loser in the race to build machine intelligence) and then continues with a bogus analogy to imply Chomsky is a kind of deluded religious fanatic.

Then it concludes with a note complaining that people don't trust the objectivity of the author's opinions now that he's employed by the entity he's defending. Well, yes, of course they don't; that's the compromise you made when you decided to take OpenAI's money.
gostsamo · about 2 years ago
> I'll be busy all day at the Harvard CS department, where I'm giving a quantum talk this afternoon, but for now:

Please, make it even more condescending; this is what we need in a good debate. /s
analog31 · about 2 years ago
>>> I submit that, like the Jesuit astronomers declining to look through Galileo's telescope

Completely as an aside: from what I've read, the Catholic astronomers did look through Galileo's scope, and confirmed his observational evidence.

They were willing to accept Tycho Brahe's model, in which the sun goes around the earth and the planets go around the sun.

Not needing to look through the scope to know the truth was introduced as a hypothetical in arguments. And there were churchmen who were using the controversy to push the Church towards what we might call a more "conservative" position today. But amongst intellectuals, the debate was over whether the evidence supported the heliocentric theory or not.
oldgradstudent · about 2 years ago
> In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language, condemns the effort that succeeded.

Well, Aaronson may not be the godfather, but he's definitely an influential figure in an effort that has failed for 45 years to develop magic machines.
timspn · about 2 years ago
Here's Chomsky's essay, for comparison: https://archive.ph/cKVj5

Aaronson seems to have built a strawman out of this, for some reason.
morsecodist · about 2 years ago
I legitimately scrolled to the top of the post and scrolled back down, because I assumed I had missed most of it. The author lists four of Chomsky's points, rebuts only the fourth by saying it was intentional, then insults Chomsky with a vague analogy that could apply to anyone engaged in a scientific debate. Regardless of my stance on the issues, this post doesn't seem that good to me.
dtagames · about 2 years ago
Scott didn't understand Noam's complaints very well. He (and I [0], and many others) are pointing out that LLMs cannot think, perform reasoning, or exhibit intelligence. Not now, and not ever, because statistical counting of likely words is not intelligence. Lacking the ability to explain the fundamental reasoning behind one's conclusions is a hallmark of machine "learning," which is why it isn't learning. It's word aggregation.

The fact that ChatGPT cannot tell whether anything is real or correct is not a small thing that one can hand-wave away with "Well, maybe tomorrow." The inability to discern truth is built into the LLM method. This is not true of people. Certainly not intelligent ones.

ChatGPT cannot even correctly perform math or logic, and those are fundamental to intelligence and intellectual development. Hilariously, we already have many programming languages which can do those things, so it's not much of an advance to rely on software that cannot.

Aaronson's premise, that Noam Chomsky has nothing more than "sour grapes" to offer, is really puerile and uninformed. See Chomsky's original article [1] for yourself and then decide who is on the hype bandwagon and who is not.

[0] https://medium.com/gitconnected/behind-the-curtain-understanding-the-magic-of-chatgpt-3bbd23f0fbb3

[1] https://web.archive.org/web/20230308104809/https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
colesantiago · about 2 years ago
IMO this is a bizarre post by Scott.

But it's not surprising, since Scott is now employed at OpenAI.

https://scottaaronson.blog/?p=6484
lisasays · about 2 years ago
*In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language.*

Whoa there. When did Chomsky ever take an interest in building machines to do anything? What is he even referring to here?

From the get-go, this piece sounds like an ad hominem slam.
pfortuny · about 2 years ago
Really, the Jesuits did use telescopes. One wonders at the ignorance that paragraph shows. Come on: Galileo was punished for purely political reasons, with the religious argument as a simple excuse. This is not something obscure.
shrimp_emoji · about 2 years ago
> *I'm a CS professor at UT Austin, on leave for one year to work at OpenAI on the theoretical foundations of AI safety. I accepted OpenAI's offer in part because I already held the views here, or something close to them*

https://youtu.be/XbzGLdiICk4
obblekk · about 2 years ago
These are fair criticisms, but imo they miss Chomsky's much bigger point.

From Chomsky's essay, two important lines:

    "Whereas humans are limited in the kinds of explanations we can rationally
    conjecture, machine learning systems can learn both that the earth is flat
    and that the earth is round. They trade merely in probabilities that change
    over time. For this reason, the predictions of machine learning systems
    will always be superficial and dubious."

He's saying we *trust* humans because they can say things like "I'm pretty sure X because of explanation Y", and under the hood, we process the explanation, form our own probability of X, and trust our own computation.

But since LLMs cannot provide explanations for their beliefs, humans will never be able to rely on LLMs, because the way we actually communicate is through explanations, not probabilities.

Chomsky's conclusion: this is a good predictor, but not a human.

Where Chomsky is actually wrong is that he mixes up *how good you are at prediction* vs. *how well you can convince a human of the prediction* vs. *how intelligent you are*. We humans use a combination of accuracy + convincingness as a marker of intelligence.

An AI that was just as accurate but had zero ability to convince a human, because it has zero ability to produce explanations, could still be intelligent. This AI would seem like an alien to us, but an alien that could uncannily beat us at any challenge that requires an understanding of the natural world. In fact, we might never truly understand its internal explanations of the world, but we could still acknowledge that it has them and is good at building new ones internally.

Imagine being in a room with a foreign-language speaker who beats you at chess. Clearly they have a mental model that works, even if they cannot explain it to you.
ordu · about 2 years ago
To my mind, Chomsky and Aaronson are talking about different things. Chomsky says that LLMs are not AGI and never will be. Aaronson says LLMs may be a way to AGI.

These ideas do not contradict each other. Chomsky says that LLMs are not sufficient to create AGI; Aaronson says that LLMs are essential to creating AGI. A cause can be essential but not sufficient; nothing wrong with that.
DiscourseFan · about 2 years ago
Fundamentally, Chomsky has always argued that language must have some sort of core logic beyond Saussure's division between signifier and signified, but so far nobody has been able to prove it. What might be most disturbing to him is that LLMs, which operate purely on mass associations, might be closer to human cognition than a notion of cognitive logic. What if the associations are more primary, and the logic comes later? Just as someone else commented, animals work mostly in cycles and patterns; they don't have logic, but the cycles of life are burned into them. There could be some magic key that explains why humans possess logic, but it would not be hidden in biology. More likely, since language is by nature a social phenomenon, it would be an aspect of culture.
littlestymaar · about 2 years ago
I'd like to thank Scott for pointing me to the original Chomsky article, which I hadn't noticed, because that link is probably the most valuable part of this rant of a post.

(And Chomsky's take on ChatGPT is much better than his recent takes on the Russian invasion of Ukraine…)
rvz · about 2 years ago
Disclaimer: Scott Aaronson is employed at OpenAI. [0]

Which explains why he is now defending the Closed AI snake oil over his quantum computing research, and is in fact defending his OpenAI equity like the rest of everyone who recently rushed onto the OpenAI hype train.

His weak response to Chomsky tells us that he has no answer to the lack of transparent explainability in LLMs like ChatGPT about the sentences they regurgitate, which is the core reason LLMs are untrustworthy, and which Chomsky repeatedly brings up.

But of course he chose not to disclose that in his article, so here is the source for his announcement that he joined. [0]

[0] https://scottaaronson.blog/?p=6484
gbanfalvi · about 2 years ago
I think some of Chomsky's issues with ChatGPT might be misunderstandings, and some of them can be iterated on and improved. ChatGPT _is_ a statistical engine returning probabilities, but there's nothing stopping engineers from changing how it weighs its probabilities to reject obvious falsehoods ("the earth is flat") or show a moral bias ("doing this will hurt people, I will not engage in it") without convoluted prompts or torturing underpaid Kenyans to label abusive content.

One thing I would've liked Chomsky to engage with more is comparing how people create new "output" from previous "input" vs. how AIs like ChatGPT do it. He talks about "fake thinking" and "real thinking", but doesn't really go into the "hows" of it (ironically, kinda like how ChatGPT would).

I _do_ agree, however, that we're nowhere near AGI and this doesn't bring us closer, but I don't know what would, either.

On the topic of Aaronson's blog post, I feel like he didn't understand what Chomsky wrote about, or intentionally mischaracterised it, and then gave a super childish response. It's very embarrassing.
morelisp · about 2 years ago
> In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language

At least he leads with the fact that he doesn't understand Chomsky's research program at all.
jacknews · about 2 years ago
"For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters."

LOL, this seems completely wrong to me. I think Chomsky's idea is that humans have some kind of 'grammar engine' that just needs configuring from 'minuscule data', aka many years of listening and practice speaking.

No doubt humans have some optimized areas for language, but I doubt it's anything so formal.
hackandthink · about 2 years ago
This "Chomskyism" and Chomsky hate is strange:

"Yes, they're wrong, and yes, despite being wrong they're self-certain, hostile, and smug, and yes I can see this, and yes it angers me"

Norvig's essay is less hateful and goes deeper.

The Norvig-Chomsky Debate: https://news.ycombinator.com/item?id=34857287
Ologn · about 2 years ago
At one point in the Times article, Chomsky and his colleagues have ChatGPT make their arguments for them.

ChatGPT: "I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs."

ChatGPT itself will tell you the limits of its capabilities.
z3c0 · about 2 years ago
I'm sorry, but if you create an NLP product and Chomsky says you missed the mark, you missed the mark. He practically admits how far ahead of his time Chomsky was, but then somehow tries to mischaracterize it as a bad thing: *In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language, condemns the effort that succeeded.*

Have you had a real conversation with ChatGPT? As in a "trying to extract factual information from another entity" conversation? I'll answer for you: no, because ChatGPT is generative (it's in the name) of apparently valid syntax with consistent meaning, but it's not extractive in nature and does not wield denotation behind the scenes. "Converse" means a lot more than "vomit words in a proper order at each other". By that alone, it has not "succeeded" in conversing, just in arranging terms convincingly (using Chomsky's theories to do so).

I encourage anybody who believes otherwise to read this paper before continuing to posit that ChatGPT is doing anything more: https://www.philosophy-index.com/russell/on-denoting/Russell_-_On_Denoting.pdf
YeGoblynQueenne · about 2 years ago
>> In this piece Chomsky, the intellectual godfather of an effort that failed for 60 years to build machines that can converse in ordinary language, condemns the effort that succeeded.

Chomsky never had anything to do with any sort of effort "to build machines that can converse in ordinary language". I have no idea where that came from in Aaronson's article. Chomsky was a great influence on cognitive science, and on artificial intelligence, particularly in its early years, but he has nothing to do with chatbots. At. All. It's just plain nuts to open an article against "Chomskyism" (bad form, that title) with such a complete fabrication.

Moreover, Chomsky's influence on AI did very much lead to systems that could "converse in ordinary language", but with which Chomsky had nothing to do, personally or as a researcher. The "influence" I mention refers to Chomsky's work on formal grammars, and how they have long been used in NLP to model language.

A couple of historically notable examples:

ELIZA: Joseph Weizenbaum's chatbot pretending to be a Rogerian psychologist. At this point in the conversation, everyone has heard about ELIZA and how it drew people in and made them ascribe human qualities to it, just like ChatGPT and friends.

https://en.wikipedia.org/wiki/ELIZA

SHRDLU: the doctoral thesis of Terry Winograd, written in 1968-70. SHRDLU was a natural language interface to a virtual robotic arm manipulating coloured blocks in a "blocks world". It was based on a natural language grammar created by Winograd, and could support intricate instructions that the system then executed with precision. The Wikipedia article on SHRDLU has an example, from which I quote:

    Person: Is there a large block behind a pyramid?
    Computer: YES, THREE OF THEM: A LARGE RED ONE, A LARGE GREEN CUBE, AND THE BLUE ONE.
    Person: Put a small one onto the green cube which supports a pyramid.
    Computer: OK.
    Person: Put the littlest pyramid on top of it.
    Computer: OK.
    Person: Does the shortest thing the tallest pyramid's support supports support anything green?
    Computer: YES, THE GREEN PYRAMID.
    Person: What color is it?
    Computer: BY "IT", I ASSUME YOU MEAN THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS.
    Computer: RED
    Person: How many things are on top of green cubes?
    Computer: I'M NOT SURE WHAT YOU MEAN BY "ON TOP OF" IN THE PHRASE "ON TOP OF GREEN CUBES". DO YOU MEAN: 1 - DIRECTLY ON THE SURFACE 2 - ANYWHERE ON TOP OF?
    Person: 2
    Computer: THREE OF THEM.

https://en.wikipedia.org/wiki/SHRDLU

Now, if you've seen discussions with ChatGPT and earlier large language models, you'll know that the above is beyond the capabilities of modern systems, including ones trained specifically to manipulate robotic arms, etc. ChatGPT, faced with instructions like the above, will soon start to hallucinate objects that don't exist, then hallucinate moving them, and make a pig's meal of the entire state of the blocks world. SHRDLU, confined as it was to its virtual toy world, could still follow its user's instructions with absolute precision. We still have nothing that can repeat this feat. Why? Because "Chomskyism" was abandoned, and everyone turned to statistical NLP, after the AI winter of the '90s crushed AI research funding. That's why.
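The precision being praised here comes from pairing a fixed grammar with an explicit world model. A toy sketch (nothing like Winograd's actual implementation; the world state, command pattern, and replies are all invented for illustration) shows the key property: a grammar-plus-model system cannot hallucinate an object, because every referent must resolve against the model before anything is executed.

```python
import re

# Invented world model: each object maps to whatever it currently rests on.
world = {"red block": "table", "green cube": "table", "blue pyramid": "green cube"}

def put(obj, dest):
    # Refuse any command whose referents are not in the world model --
    # this is what makes the system precise rather than plausible.
    if obj not in world:
        return f"I don't know of any {obj}."
    if dest != "table" and dest not in world:
        return f"I don't know of any {dest}."
    world[obj] = dest
    return "OK."

def command(text):
    # A one-rule "grammar": PUT the <object> on the <destination>.
    m = re.fullmatch(r"put the (.+) on the (.+)", text.lower().rstrip("."))
    if not m:
        return "I don't understand."
    return put(m.group(1), m.group(2))
```

SHRDLU's grammar covered vastly more of English than this single pattern, but the architecture is the same: parse against rules, resolve referents against state, and refuse anything that does not fit, rather than generating a fluent guess.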
dang · about 2 years ago
Related ongoing thread:

*The False Promise of ChatGPT* - https://news.ycombinator.com/item?id=35067619 - March 2023 (20 comments)
wg0 · about 2 years ago
Is this the complete blog post, or did it get trimmed? It ends with a claim about Galileo's telescope, dismissing the critics via an analogy, but doesn't offer much of a concrete chain of reasoning to get there.
slibhb · about 2 years ago
I think this response is more or less fair.

Chomsky has been extremely derisive towards anyone with whom he disagrees. You can see that in the NYT piece, but it's been a decades-long pattern where dissenters are automatically morons. That's not a manner befitting a scientist.

I agree that "Chomskyism" has been a "false promise" in some sense. It's not clear to me what Chomsky's linguistics have actually accomplished. Perhaps they gave us a new way to think about language; in that sense, they were a philosophical achievement. But we didn't build anything with them, and that's ultimately the test of science. Meanwhile, neural nets got us Google Translate and ChatGPT (so far). Now, I don't think philosophy is useless, but Chomsky going after machine learning is a bit like the "Jesuit astronomers declining to look through Galileo's telescope" in that Chomsky is a philosopher immersed in metaphysics who is dismissive towards empirical scientists who accomplished something much more concrete.

Then there's the question of whether ChatGPT is conscious and whether it could lead to AGI. I agree with Chomsky that it's not actually intelligent in some sense. But I'm not sure how much that matters. If you can build a Q&A machine, whether it's intelligent or conscious is an interesting philosophical question but ultimately beside the point. Anyway, that debate has been raging for 50 years (see Searle) and isn't especially interesting at this point. As to whether LLMs can scale into AGIs, I have no idea; that depends on how we define AGI. To me, one of the lessons of ChatGPT is that we don't need consciousness to build useful AIs.

Chomsky is a humanist. I believe his critique of behaviorism was based on humanism: he wanted to place humans in a separate category (as philosophers often do). I think his criticism of ChatGPT is ultimately similar. But the worry is misplaced. ChatGPT is in some sense *more human* than we are. It isn't some foreign, disembodied intelligence. It's based entirely on text that we, as a species, generated, and in that sense it's a culmination of human potential.
mikepalmer · about 2 years ago
Regarding Chomsky's characterization of LLMs "gorging on hundreds of terabytes of data" compared to the "minuscule data" required for a child to learn a language: the brain is evolved, so the "gorging" already happened in animals over hundreds of millions of years. The brain has a lot of evolved sequence processing, visual processing, and language (the authors are linguists, and they admit this even though it undercuts their point). Only fine-tuning of this pretrained model is needed for a child to grow up speaking, say, English vs. French. That requires only a relatively minuscule amount of data.

Moreover, it doesn't matter that LLMs work differently from the human brain. Per Larry Wall, TIMTOWTDI ("There is more than one way to do it").
twarge · about 2 years ago
Chomsky's article really seems to show the results of decades of living in an echo chamber.

> On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information;

True, but then:

> it seeks not to infer brute correlations among data points but to create explanations.

This is such an odd dichotomy, and it feels really wrong to me. Explanations are, to me, just further correlations.

> a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters.

Chomsky apparently has children, but he may have forgotten what it's like for them to learn language! It's years of trying to communicate, constantly failing, and learning.

Now, go ask ChatGPT to write an ode to <obscure topic> in iambic pentameter, and I bet it will be as astonishing, creative and fluent as a native speaker.