
Chomsky on what ChatGPT is good for (2023)

281 points by mef 4 days ago

43 comments

atdt 4 days ago
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.

papaver-somnamb 4 days ago
There was an interesting debate where Chomsky took a position on intelligence being rooted in symbolic reasoning and Asimov asserted a statistical foundation (ah, that was not intentional ;).

LLM designs to date are purely statistical models. A pile, a morass of floating point numbers and their weighted relationships, along with the software and hardware that animates them and the user input and output that makes them valuable to us. An index of the data fed into them, different from a Lucene or SQL DB index made from compsci algorithms & data structure primitives. Recognizable to Asimov's definition.

And these LLMs feature no symbolic reasoning whatsoever within their computational substrate. What they do feature is a simple recursive model: given the input so far, what is the next token? And they are thus enabled after training on huge amounts of input material. No inherent reasoning capabilities, no primordial ability to apply logic, or even to infer basic axioms of logic, reasoning, thought. And therefore unrecognizable to Chomsky's definition.

So our LLMs are a mere parlor trick. A one-trick pony. But the trick they do is oh-so vastly complicated, very appealing to us, and of practical application and real value. It harkens back to the question: what is the nature of intelligence? And how to define it?

And I say this while thinking of the marked contrast in apparent intelligence between an LLM and, say, a 2-year-old child.
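
To make "given the input so far, what is the next token?" concrete, here is a minimal sketch using only the Python standard library: a toy bigram model that counts which token follows which and then samples autoregressively. The tiny corpus is invented for illustration; a real LLM replaces the count table with a learned transformer, but the outer loop is the same.

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus; real models train on vastly more data.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: for each token, how often each next token follows it.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_token(context):
    """Sample the next token in proportion to how often it followed
    the last token of the context in the corpus."""
    candidates = follows.get(context[-1])
    if not candidates:
        return "."  # dead end: stop
    tokens, counts = zip(*candidates.items())
    return random.choices(tokens, weights=counts, k=1)[0]

# The autoregressive loop: feed each output back in as new input.
context = ["the"]
for _ in range(8):
    context.append(next_token(context))
print(" ".join(context))
```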

zombot 3 days ago
The voice of reason. And, as always, the voice of reason is being vigorously ignored. Dreams of big profits and of exerting control through generated lies are irresistible. And, among other things, HN comment threads demonstrate how even people who should know better are falling for it in droves. In fact, this very thread shows how Chomsky's arguments fall on deaf ears.

whattheheckheck 4 days ago
3.35 hrs Chomsky interview on ML Street Talk: https://youtu.be/axuGfh4UR9Q

Xmd5a 3 days ago
https://magazine.caltech.edu/post/math-language-marcolli-noam-chomsky

These days, Chomsky is using Hopf algebras (originally from quantum physics) to explain language structure.

visarga 3 days ago
Brains don't have innate grammar so much as languages are selected to fit baby brains. Chomsky got it backwards: languages co-evolved with human brains to fit our capacities and needs. If a language is not useful or can't be learned by children, it does not spread, it just disappears.

It's like wondering how well your shoes fit your feet, forgetting that shoes are made and chosen to fit your feet in the first place.

calibas 4 days ago
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimension vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).
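
As a rough illustration of "meaning as a high-dimensional vector", here is a small sketch with hand-made 4-dimensional toy vectors; real models learn hundreds or thousands of dimensions from data, and these numbers are invented purely to show the geometry. Similar meanings point in similar directions, and cosine similarity measures that.

```python
import numpy as np

# Invented toy "embeddings" (not from any real model).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.2, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.0, 0.1]),
    "woman": np.array([0.5, 0.1, 0.0, 0.9]),
    "mat":   np.array([0.0, 0.1, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means same direction, near 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # higher: related meanings
print(cosine(vectors["king"], vectors["mat"]))    # much lower: unrelated

# The classic analogy test: king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
candidates = {w: v for w, v in vectors.items() if w not in {"king", "man", "woman"}}
print(max(candidates, key=lambda w: cosine(candidates[w], target)))  # "queen"
```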

asmeurer 4 days ago
It's amusing that he argues (correctly) that "there is no Great Chain of Being with humans at the top," but then claims that LLMs cannot tell us anything about language because they can learn "impossible languages" that infants cannot learn. Isn't that an anthropomorphic argument, saying that what a language is is inherently *defined* by human cognition?

teleforce 3 days ago
> Many biological organisms surpass human cognitive capacities in much deeper ways. The desert ants in my backyard have minuscule brains, but far exceed human navigational capacities, in principle, not just performance. There is no Great Chain of Being with humans at the top.

Chomsky made interesting points comparing the performance of AI and of biological organisms with that of humans, but his conclusion is not correct. We already know that a cheetah runs faster than a human and an elephant is far stronger. A bat can navigate in the dark with echolocation, and dolphins hunt in packs with high-precision coordination, to devastating effect compared to hunting alone.

Whether we like it or not, humans are at the top, contrary to Chomsky's claim. Through scientific discovery (understanding) and design (engineering) that exploit the laws of nature, humans can and have surpassed all of the cognitive capabilities of these animals, and we are mostly responsible for their inevitable demise and extinction. Humans now need to collectively and consciously reverse the extinction of these "superior" cognitive animals in order to preserve them, for better or worse. No other earthbound creature can do that to us.

lucisferre 4 days ago
"The desert ants in my backyard have minuscule brains, but far exceed human navigational capacities, in principle, not just performance. There is no Great Chain of Being with humans at the top."

This quote brought to mind the very different technological development path of the spider species in Adrian Tchaikovsky's Children of Time. They used pheromones to 'program' a race of ants to do computation.

mrmdp 3 days ago
Chomsky has the ability to say things in a way that most laypersons of average intelligence can grasp. That is an important skill for communicating one's thoughts to the general populace.

Many of the comments herein lack that quality and seem to convey that the author might be full of him- or herself.

Also, some of the comments are a bit pejorative.

bawana 3 days ago
I once heard that a roomful of monkeys with typewriters, given infinite time, could type out the works of Shakespeare. I don't think that's true, any more than the random illumination of pixels on a screen could eventually generate a picture.

OTOH, consider LLMs as a roomful of monkeys that can communicate with each other and look at words, sentences, and paragraphs on posters around the room, with a human in the room who gives them a banana when they type out a new word, sentence, or paragraph.

You may eventually get a roomful of monkeys that can respond to a new sentence you give them with what seems an intelligent reply. And since language is the creation of humans, it represents an abstraction of the world made by humans.

ashoeafoot 3 days ago
ChatGPT can write great apologia for bloodthirsty land empires and never live that down:

"To characterize a structural analysis of state violence as 'apologia' reveals more about prevailing ideological filters than about the critique itself. If one examines the historical record without selective outrage, the pattern is clear—and uncomfortable for all who prefer myths to mechanisms."

The fake academic facade, the US diabolism, the unwillingness to see complexity and responsibility in others: it's all with us forever...

ggm 4 days ago
Always a polarising figure; responses here bisect along several planes. I am sure some come armed to disagree because of his lifelong affinity for a left world view, others to defend because of his centrality to theories of language.

I happen to agree with his view, so I came armed to agree and read this with a view in mind which I felt was reinforced. People are overstating the AGI qualities and misapplying the tool, sometimes the same people.

In particular, the lack of theory and scientific method means both that we're not learning much and that we've reified the machine.

I was disappointed nothing was said of Norbert Wiener, a man who invented cybernetics and had the courage to stand up to the military-industrial complex.

skydhash 4 days ago
Quite a nice overview. For almost any specific measure, you can find something that is better than a human at it. And now LLM architectures have made it possible for computers to produce complete and internally consistent paragraphs of text, by rehashing all the digital data that can be found on the internet.

But what we're good at is using all of our capabilities to transform the world around us according to an internal model that is partially shared between individuals. And we have complete control over that internal model, diverging from reality and converging towards it on a whim.

So we can't produce and manipulate text as fast, but the end game is rarely just to produce and manipulate text. Mostly it's about sharing ideas and facts (aka internal models), and that control is ultimately what matters. It can help us, just like a calculator can help us solve an equation.

EDIT: After learning to draw, I have an internal model that I switch to whenever I want to sketch something. It's like a special mode of observation, where you no longer simply see, but pick up a lot of extra details according to all the drawing rules you internalized. There aren't many of them; they're just intrinsically connected with each other. The difficult part is hand-eye coordination and analyzing the divergences between what you see and the internal model.

I think that's why a lot of artists are disgusted with AI generators. There's no internal model. Trying to extract one from a generated picture is a futile exercise. Same with generated text: alterations from the common understanding follow no pattern.

schoen 4 days ago
(2023)

r00sty 3 days ago
I imagine his opinions might have changed by now. If we were still in 2023, I would be inclined to agree with him. Today, in 2025, however, LLMs are just another tool being used to "reduce labor costs" and extract more profit from the humans left who have money. There will be no scientific developments if things continue in this manner.

oysterville 4 days ago
A two-year-old interview should be labeled as such.

Amadiro 3 days ago
In my view, the major flaw in his argument is his distinction between pure engineering and science:

> We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it's a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding. If the topic is human intelligence, or cognitive capacities of other organisms, science seeks understanding of these biological systems.

If you take this approach, of course it follows that we should laugh at Tom Jones.

But a more differentiated approach is to recognize that science also falls into (at least) two categories: the science we do because it expands our capability into something we were previously incapable of, and the science that does not. (We typically do a lot more of the former than the latter, for obvious practical reasons.)

Of course it is interesting from a historical perspective to understand the seafaring exploits of the Polynesians, but as soon as there was a better way of navigating (i.e. by stars or by GPS), the investigation of this matter was relegated to the second type of science, more of a historical kind of investigation. Fundamentally, we investigate things in science that are interesting because we believe the understanding we can gain from them can move us forward somehow.

Could it be interesting to understand how Hamilton was thinking when he came up with imaginary numbers? Sure. Are a lot of mathematicians today concerning themselves with studying this? No, because the frontier has been moved far beyond.*

When you take this view, it's clear that his statement

> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.

is not warranted. Consider the following, in his own analogy:

> These considerations bring up a minor problem with the current GPS enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity. One is that GPS systems are designed in such a way that they cannot tell us anything about navigation, planning routes or other aspects of orientation, a matter of principle, irremediable.

* I'm making a simplifying assumption here that we can't learn anything useful for modern navigation anymore from studying Polynesians or ants; this might well be untrue, but that is also the case for learning something about language from LLMs, which according to Chomsky is apparently impossible and not even up for debate.

titzer 4 days ago
All this interview proves is that Chomsky has fallen far, far behind how AI systems work today and is retreating to scoff at all the progress machine learning has achieved. Machine learning *has* given rise to AI now. It can't explain itself from principles or its architecture. But you couldn't explain your brain from principles or its architecture either; you'd need all of neuroscience to do it. Because the machine is digital and (probably) does not reason like our brains do, it somehow falls short?

While there are some things in this I find myself nodding along to, I can't help but feel it's a really old take that is super vague and hand-wavy. The truth is that all of the progress on machine learning is *absolutely science*. We understand extremely well how to make neural networks learn efficiently; it's why the data leads anywhere at all. Backpropagation and gradient descent are extraordinarily powerful. Not to mention all the "just engineering" of making chips crunch incredible amounts of numbers.

Chomsky is extremely ungenerous to the progress and also pretty flippant about what this stuff can do.

I think we should probably stop listening to Chomsky; he hasn't said anything here that he hasn't already said a thousand times for decades.
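
For anyone who has not seen gradient descent written out, here is a minimal sketch with numpy: a one-parameter model fit by repeatedly stepping against the gradient of a squared-error loss. The data and learning rate are invented for illustration; backpropagation is the same idea, with the gradient computed through many layers via the chain rule.

```python
import numpy as np

# Invented data: y is roughly 3*x plus noise. The "model" is y_hat = w * x,
# and learning means finding the w that minimizes the squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)

w = 0.0   # start from an arbitrary guess
lr = 0.1  # learning rate (step size)

for step in range(200):
    y_hat = w * x
    # Gradient of the mean squared error with respect to w.
    grad = np.mean(2 * (y_hat - y) * x)
    # Step downhill: this loop is the whole of gradient descent.
    w -= lr * grad

print(w)
```

Running this prints a value close to 3.0, the slope used to generate the toy data.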

prpl 4 days ago
Reminds me of SUSY, the Standard Model, and beyond that, string theory, etc…

What is elegant as a model is not always what works, and working towards a clean model that explains everything from a model that merely works is fraught, hard work.

I don't think anyone alive will realize true "AGI", but it won't matter. You don't need it, the same way particle physics doesn't need elegance.

LudwigNagasena 4 days ago
That was a weird ride. He was asked whether AI will outsmart humans, went on a rant about the philosophy of science seemingly trying to defend the importance of his research, and culminated with some culture-war commentary about postmodernism.

jdkee 3 days ago
Chomsky's own words: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

retskrad 4 days ago
It's time to stop writing in this elitist jargon. If you're communicating and few people understand you, then you're a bad communicator. I read the whole thing and thought: wait, was there a new thought or interesting observation here? What did we actually learn?

submeta 4 days ago
Chomsky's notion is: LLMs can only imitate, not understand, language. But what exactly *is* understanding? What if our „understanding“ is just unlocking another level in a model? Unlocking a new form of generation?

msh 4 days ago
He should just surrender and give chatgpt whatever land it wants.

paulsutter 4 days ago
"Expert in (now-)ancient arts draws strange conclusion using questionable logic" is the most generous description I can muster.

Quoting Chomsky:

> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.

> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable... The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.

Response from o3:

LLMs do surface real linguistic structure:

• Hidden syntax: Attention heads in GPT-style models line up with dependency trees and phrase boundaries—even though no parser labels were ever provided. Researchers have used these heads to recover grammars for dozens of languages.

• Typology signals: In multilingual models, languages that share word order or morphology cluster together in embedding space, letting linguists spot family relationships and outliers automatically.

• Limits shown by contrast tests: When you feed them "impossible" languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

• Psycholinguistic fit: The probability spikes LLMs assign to next words predict human reading-time slow-downs (garden paths, agreement attraction, etc.) almost as well as classic hand-built models.

These empirical hooks are already informing syntax, acquisition, and typology research—hardly "nothing to say about language."
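
The "contrast test" bullet is easy to try informally. Below is a rough sketch (not the methodology of the research alluded to above) that compares GPT-2's perplexity on an ordinary English sentence against a mirror-ordered version of it, using the Hugging Face transformers library; the sentences are made up, and the expectation is simply that the scrambled order scores far worse.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """Exponentiated average next-token loss: lower means the model
    finds the text more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

normal = "The children quickly learned the song their teacher taught them."
# Crude "impossible language": the same words in mirror order.
mirrored = " ".join(reversed(normal.rstrip(".").split())) + "."

print(f"normal:   {perplexity(normal):.1f}")
print(f"mirrored: {perplexity(mirrored):.1f}")
```

On a typical run the mirrored sentence should come out with a much higher perplexity, which is the informal version of the contrast described above.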

netcan 4 days ago
Insect behaviour. The flight of birds. Turtle navigation. A footballer crossing the field to intercept a football.

This is what Chomsky always wanted AI to be... especially language AI. Clever solutions to complex problems. Simple once you know how they work. Elegant.

I sympathize. I'm a curious human. We like elegant, simple revelations that reveal how our complex world is really simple once you know its secrets. This aesthetic has also been productive.

And yet... maybe some things are complicated. Maybe LLMs do teach us something about language: that language is complicated.

So sure. You can certainly critique the "AI blogosphere" for exuberance and big speculative claims. That part is true. OTOH... linguistics is one of the areas where AI-based research may turn up some new insights.

Overall... what wins is what is most productive.

newAccount2025 4 days ago
[flagged]

godelski 4 days ago
I think many people are missing the core of what Chomsky is saying. It is easy to miscommunicate, and I think that is primarily what is happening here. The analogy he gives really emphasizes what he's trying to say. If you're only going to read one part, I think it is this:

> I mentioned insect navigation, which is an astonishing achievement. Insect scientists have made much progress in studying how it is achieved, though the neurophysiology, a very difficult matter, remains elusive, along with evolution of the systems. The same is true of the amazing feats of birds and sea turtles that travel thousands of miles and unerringly return to the place of origin.
> Suppose Tom Jones, a proponent of engineering AI, comes along and says: "Your work has all been refuted. The problem is solved. Commercial airline pilots achieve the same or even better results all the time."
> If even bothering to respond, we'd laugh.
> Take the case of the seafaring exploits of Polynesians, still alive among Indigenous tribes, using stars, wind, currents to land their canoes at a designated spot hundreds of miles away. This too has been the topic of much research to find out how they do it. Tom Jones has the answer: "Stop wasting your time; naval vessels do it all the time."
> Same response.

It is easy to look at metrics of performance and call things solved. But there's much more depth to these problems than our ability to solve some task. It's not just about the ability to do something; the how matters. It isn't important that we are able to navigate better than birds or insects. Our achievements say nothing about what they do.

This would be like judging an algorithm only by looking at its ability to do some task. Certainly that is an important part, and even a core reason for why we program in the first place! But its performance tells us little to nothing about its implementation, and the implementation still matters. Are we making good use of our resources? Certainly we want to be efficient, in an effort to drive down costs. Are there flaws or errors that we didn't catch in our measurements? Those come at huge costs and fundamentally limit our programs. Task performance tells us nothing about the vulnerability to hackers, nor what their exploits will cost our business.

That's what he's talking about.

Just because you can do something well doesn't mean you have a good understanding. It's natural to think the two are related, because understanding improves performance and that's primarily how we drive our education. But it is not a necessary condition, and we have a long history demonstrating that. I'm quite surprised this concept is so contentious among programmers. We've seen the follies of test-driven development; fundamentally, this is the same. There's more depth here than what we can measure, and we should not be quick to presume that good performance is the same as understanding [0][1]. We KNOW this isn't true [2].

I agree with Chomsky, it is laughable. It is laughable to think that the man in the Chinese Room [3] *must* understand Chinese. 40 years in, on a conversation hundreds of years old. Surely we know you can get a good grade on a test without actually knowing the material. Hell, there's the trivial case of just having the answer sheet.

[0] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffrey_hinton_says_in_the_old_days_ai_systems/

[1] https://www.youtube.com/watch?v=Yf1o0TQzry8&t=449s

[2] https://www.youtube.com/watch?v=hV41QEKiMlM

[3] https://en.wikipedia.org/wiki/Chinese_room

AIorNot 4 days ago
As much as I think of Chomsky, his linguistics approach is outside looking in, i.e. observational speculation, compared to the last few years of LLM-based tokenization, semantic spaces, embeddings, deep learning, and mechanistic interpretability.

Understanding linguistics before LLMs: "We think birds fly by flapping their wings."

Understanding linguistic theories after LLMs: "Understanding the physics of aerofoils and Bernoulli's principle means we can replicate what birds do."

dragochat 3 days ago
...for the lulz, try asking ChatGPT "what is Chomsky (still) good for?"

thasso 4 days ago
> The world's preeminent linguist Noam Chomsky, and one of the most esteemed public intellectuals of all time, whose intellectual stature has been compared to that of Galileo, Newton, and Descartes, tackles these nagging questions in the interview that follows.

By whom?

Orangeair 4 days ago
[2023]

0xDEAFBEAD 4 days ago
I confess my opinion of Noam Chomsky dropped a lot from reading this interview. The way he set up a "Tom Jones" strawman and kept dismissing positions with language like "we'd laugh" and "total absurdity" was really disappointing. I always assumed that academics were only like that on Reddit, and that in real life they actually made a serious effort at rigorous argument, avoiding logical fallacies and the like. Yet here is Chomsky addressing a lay audience that has no linguistics background, and instead of even attempting to summarize the arguments for his position, he simply asserts that opposing views are risible, with little supporting argument. I expected much more from a big-name scholar.

"The first principle is that you must not fool yourself, and you are the easiest person to fool."

lanfeust6 4 days ago
I&#x27;m noticing that leftists overwhelmingly toe the same line on AI skepticism, which suggests to me an ideological motivation.

A4ET8a8uTh0_v2 4 days ago
It is an unfortunate opinion, because I personally hold Chomsky in fairly high regard and give most of his thoughts that I am familiar with a reasonable amount of consideration, if only because he could, I suppose in the olden days now, articulate his points well and make you question your own thought process. This no longer seems to be the case, as I found the linked article somewhat difficult to follow. I suppose age can get to anyone.

Not that I am an LLM zealot. Frankly, the clear trajectory it puts humans on makes me question our futures in this timeline. But even as a merely amused, if bored, middle-class rube, aware of the serious issues with it (privacy, detailed personal profiling that surpasses existing systems, energy use, and the actual power of those who wield it), I can see it being implemented everywhere with a mix of glee and annoyance.

I know for a fact it will break things, and break things hard, and it will be the people who know how things actually work who will need to fix them.

I will be very honest, though. I think Chomsky is stuck in his internal model of the world and unable to shake it off. Even his arguments fall flat, because they don't fit the domain well. It seems like they should, given that he practically made his name on syntax theory (which suggests his thoughts should translate well into it), and yet... they don't.

I have a minor pet theory on this, but I am still working on putting it into coherent words.

petermcneeley 4 days ago
I recently saw a new LLM that was fooled by "20 pounds of bricks vs 20 feathers". These are not reasoning machines.

mrandish 4 days ago
[Edit to remove: it was not clear that this was someone else's intro re-posted on Chomsky's site.]

johnfn 4 days ago
> It's as if a biologist were to say: "I have a great new theory of organisms. It lists many that exist and many that can't possibly exist, and I can tell you nothing about the distinction."

> Again, we'd laugh. Or should.

Should we? This reminds me acutely of imaginary numbers. They are a great theory of numbers that lists many numbers that do 'exist' and many that can't possibly 'exist'. And we did laugh when imaginary numbers were first introduced - the name itself was intended as a derogatory term for the concept. But who's laughing now?

kevinventullo 4 days ago
Maybe I am missing context, but it seems like he's defending himself from the claim that we shouldn't bother studying language acquisition and comprehension in humans because of LLMs?

Who would make such a claim? LLMs are of course incredible, but it seems obvious that their mechanism is quite different from the human brain's.

I think the best you can say is that one could *motivate* lines of inquiry into human understanding, especially because we can essentially do brain surgery on an LLM in action in a way that we can't with humans.

next_xibalba 4 days ago
Chomsky is always saying that LLMs and such can only imitate, not understand, language. But I wonder if there is a degree of sophistication at which he would concede these machines exceed "imitation". If his point is that LLMs arrive at language in a way different from humans... great. But I'm not sure how he can argue that some kind of extremely sophisticated understanding of natural language is not embedded in these models in a way that, at this point, exceeds the average human. In all fairness, this was written in 2023, but given his longstanding stubbornness on this topic, I doubt it would make a difference.

irrational 4 days ago
I have a degree in linguistics. We were taught Chomsky's theories of linguistics, but also taught that they were not true. (I don't want to say which university it was, since this was 25 years ago and for all I know that linguistics department no longer teaches against Chomsky.) The end result is that I don't take anything Chomsky says seriously, so it is difficult for me to engage with Chomsky's ideas.