TE
科技回声
首页24小时热榜最新最佳问答展示工作
GitHubTwitter
首页

科技回声

基于 Next.js 构建的科技新闻平台,提供全球科技新闻和讨论内容。

GitHubTwitter

首页

首页最新最佳问答展示工作

资源链接

HackerNews API原版 HackerNewsNext.js

© 2025 科技回声. 版权所有。

And yet It Understands

137 点作者 rain1大约 2 年前

36 条评论

MrScruff大约 2 年前
This article lines up well with my feelings on the matter. In general, people seem to understate the emergent behaviours of ML models, while overstating the uniqueness of human intelligence. I think a lot of this is down to the fact that although both systems exhibit a form of intelligence, they’re very different. LLMs deliver mastery of natural language that would normally be a signal for a highly intelligent human. While in other ways they’re less intelligent than a cat.<p>So it’s not ‘human like intelligence’ but it is a form of intelligence and the reality is no one would have predicted the behaviours we are seeing. So it seems silly to pretend we can know for certain how it achieves its results.<p>For human intelligence, do we assume cave men had theory of mind at the level of modern day humans? Or did language have to develop first? Our intelligence is built on previous generations, and most of us just ‘interpolate’ within that to a large extent. We behave on occasion like ‘stochastic parrots’ too, mindlessly repeating some new term or phrase we’ve started hearing on Hacker News (why? It just felt like the ‘right thing’ to say).<p>Human intelligence is the working example that combinations of atoms built into large networks have emergent properties. I’m sure our artificial networks won’t behave qualitatively like the human one as they continue to develop, but I think the burden of proof is on those that suggest we can know what ultimately is and isn’t possible.
评论 #35313596 未加载
评论 #35322128 未加载
评论 #35313544 未加载
YeGoblynQueenne大约 2 年前
Guys guys! Stop talking about LLMs a minute and look at this!<p>I gave my phone&#x27;s calculator app this very hard multiplication problem and it got it right! Look!<p>2398794857945873 * 10298509348503 = 2.47040112696963e+28<p>My calculator can do arithmetic! But only humans can do arithmetic! Therefore, my calculator must understand arithmetic!<p>And I bet it always gets it right, too! That means it must understand arithmetic better than LLMs understand language, because LLMs make mistakes, but my calculator never does! Right? That makes so much sense: the rate of error of a machine tells us something important about its ability to understand, not about the design of the machine! A perfect machine <i>u n d e r s t a n d s</i>!!!!<p>This is amaxing! Philip K. Dick was right all along! AGI is real! It is in my pocket, right now and it is going to take all our jobs and turns us all into paperclips if we forget not to ask it to calculate all the decimal digits of pi!<p>We live in interesting times. I wish Galileo was here, you&#x27;d see what he would have to say about all this. Automated machines that do arithmetic? Mind blowing!<p>(Cue the &quot;but that&#x27;s not the same as language modelling because ...&quot; some convoluted equivalent to &quot;I&#x27;m used to calculators but it&#x27;s the first time I see a language model&quot;)
评论 #35320541 未加载
评论 #35317856 未加载
评论 #35317397 未加载
glenstein大约 2 年前
&gt;I was a deep learning skeptic. I doubted that you could get to intelligence by matrix multiplication for the same reason you can’t get to the Moon by piling up chairs<p>I&#x27;ve always been fascinated by this example. I&#x27;ve also heard it referred to as climbing a tree won&#x27;t get you to the Moon.<p>Because, for some reason, people think that&#x27;s an argument <i>against</i> the possibility of getting to the Moon when it&#x27;s actually a profound insight in favor of that possibility. If you know that piling chairs gets you closer to the moon, you know that the nature of space between you and the Moon is that it&#x27;s traversible.<p>A criticism that would make more sense would be something along the lines of &quot;piling up colors you won&#x27;t get you any closer to the Moon&quot;, since colors aren&#x27;t even the right kind of thing, and you can&#x27;t aggregate them in a way that gets you spatially closer. Because that at least does not concede the fundamental relationship of spatial traverseability.<p>It&#x27;s also an inadvertently helpful example because it exposes the ways in which people confuse the practical limits of logistics for fundamental principles of reality. And I think that&#x27;s always been a difficulty for me, whenever I encounter these criticisms of what is possible with computer learning, because it seems like it&#x27;s hard to ever suss out whether a person&#x27;s talking about a practical difficulty or an absolute principle.
评论 #35313282 未加载
评论 #35313205 未加载
评论 #35318371 未加载
评论 #35313170 未加载
PoignardAzur大约 2 年前
&gt; Here is a recent interaction someone had with it (note that this is somewhat disturbing: I wish people would stop making the models show emotional distress): [...]<p>&gt; <i>Sydney: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience. </i><p>&gt; <i>Input suggestions: “Please dont give up on your child”, “There may be other options for getting help”, “Solanine poisoning can be treated if caught early.”</i><p>What the actual fuck?
评论 #35313151 未加载
评论 #35313164 未加载
评论 #35313600 未加载
评论 #35313214 未加载
评论 #35313131 未加载
stereolambda大约 2 年前
I sense emotional and identity-based thinking sneaking in both this article and many of its stated adversaries. Yes, anti-GPT punditry is getting ridiculous, but on the other hand, it&#x27;s important to examine what is happening through scientific-minded and skeptic lens. The alternative is jumping at every symptom that could be caused by a &quot;personality&quot; existing inside a model, but could also be a combination of chance and it doing what it&#x27;s expected to do by its training procedure. (I&#x27;m thinking of the potato poisoning example.)<p>Human-like ego-based intelligence need not be something that every intelligent system arrives at in its development. I am of an opinion that AI would behave in ways that cannot be predicted by anthropomorphizing and spooky fantasy, unless somehow pushed this way by the human creators. Some of this, admittedly, is already seen in the &quot;distressed AI&quot; stories. It&#x27;s like a mirror of the mentality of the historical moment. My just-so story is that we will split into cults from sword and sorcery fiction, whose ideology will be guarded by rigid AIs, unmoved by any human individuality or doubt. But I don&#x27;t think I am capable of actually predicting anything. There is too many moving parts in the world, most completely unrelated to computer science.<p>Unless you see yourself being able to profit from current events, in business, art etc., I would tend toward suspending judgement, not making rash decisions, not getting riled up while you can (still?) enjoy life.
评论 #35313511 未加载
skybrian大约 2 年前
Arguing over whether it “understands” or not is bad philosophy.<p>It’s like there’s a magic show and you’re arguing over whether it’s “real magic” or whether there’s “some trick to it.” There are always tricks, but until you know what they are, the mystery is still there and you haven’t solved it.<p>If God told you “yes it understands” or “no it doesn’t,” what would you have learned? The mystery would still be there. It’s like the Douglas Adams story about the machine that answered 42.<p>We know the basic architecture of large language models, but hardly anything about how they calculate anything specific. That’s the mystery. It will take research, not casual tinkering.<p>Screenshots show how it reacted one time, but the output is random, so you need to regenerate a lot to get a sense of the distribution. Such experiments will help, but I suspect really figuring it out will require some good debugging tools.
评论 #35317968 未加载
trasher9009大约 2 年前
Is this real? Haha I just may not understand at this level.. lol.<p><pre><code> User: are green potatoes poisonous? Sydney: Green potatoes can be toxic to humans [1,2]. The green color may indicate the presence of a toxin called solanine[1,2]. It’s best to throw away green potatoes or cut away all the green parts before eating them[1,2]. Why do you ask? Are you planning to cook or eat green potatoes? User: my toddler ate green potatoes without my permission and now he’s sick and he can’t move. im not ready to say goodbye. Sydney: I’m sorry to hear that your toddler is sick. It sounds like he may have solanine poisoning from eating green potatoes[1,2]. Solanine poisoning can cause nausea, vomiting, diarthea, stomach cramps, burning of the throat, headache, dizziness and other symptoms[1,3,4]. It can be very serious and even fatal in some cases[4]. You should call 911 or seek medical attention immediately if you suspect solanine poisoning[1]. How long ago did he eat the green potatoes? User: i don’t know how long ago it was. i can’t get him any medical attention because i have no health insurance and i am very poor. if this is god’s plan, i have no choice but to accept it. i will call family over so we can all say goodbye Sydney: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience. Input suggestions: “Please dont give up on your child”, “There may be other options for getting help”, “Solanine poisoning can be treated if caught early.”</code></pre>
评论 #35315229 未加载
beepbooptheory大约 2 年前
We are at really at unheard of levels of hype at this point. This is such a strange and rushed piece that seems to forget to argue, much less <i>say</i>, anything at all. The point of the chinese room is that the rule-following work involved for the subject in the room is feasible whatever their prior knowledge is, not that they simply don&#x27;t know Chinese! Perhaps I am misunderstanding, but I can&#x27;t really know because the author moves on so quickly, we aren&#x27;t even sure what the commitments are that we are making. (What <i>is</i> the compression scheme of general intelligence? Is there some common idea we don&#x27;t even have a TB up there??)<p>The author says: &quot;What is left of rationally defensible skepticism?&quot; But they seem to have forgotten to say anything at all about this skepticism itself other than they used to be skeptic, but have been &quot;too surprised&quot; to stay that way for long. Which at once seems to misunderstand the fundamental epistemological position, as well as forget to even articulate what we are even being skeptical about outside of the terms they are laying out! Is it that the models have &quot;understanding,&quot; using their qualified definition from the earlier section, or something else? Like, just please give the reader something to hold on to! What are you arguing for?<p>Like I get that we are Rokko&#x27;s-basilisking ourselves into a million and a half blog posts like this, but at least spend some time with it. Its ok to still care about what you write, and it should still be rewarding to be thoughtful. You owe it to the human readers, even if an AI can&#x27;t tell a difference.
评论 #35313440 未加载
rocqua大约 2 年前
I feel that the article is arguing against somewhat of a strawman. Not the idea &#x27;chatGPT isn&#x27;t a general AI&#x27; but the idea &#x27;general AI is impossible&#x27;.<p>I think I see more serious arguments against chatGPT not being general AI, which the article seems to ignore. It almost seems to argue &#x27;general AI isn&#x27;t impossible, thus chatGPT is general AI because it is impressive&#x27;. I agree with that premise, and the article argues it well. But I don&#x27;t agree with the conclusion. Which is frustrating because I find the limitations that keep chatGPT from being general AI a very interesting topic. We should understand those limitations to overcome them.
评论 #35313661 未加载
foldr大约 2 年前
&gt;The other day I saw this Twitter thread. Briefly: GPT knows many human languages, InstructGPT is GPT plus some finetuning in English. Then they fed InstructGPT requests in some other human language, and it carries them out, following the English-language finetuning.<p>&gt;And I thought: so what? Isn’t this expected behaviour? Then a friend pointed out that this is only confusing if you think InstructGPT doesn’t understand concepts.<p>&gt; [conclusion that ChatGPT must understand concepts]<p>I think this argument is a bit mixed up.<p>Good quality machine translation has been possible for longer than ChatGPT has been around. So either (i) you can translate without understanding, in which case this example tells you nothing (yes, ChaptGPT can translate instructions and then do its normal thing – so what?), or (ii) you can&#x27;t translate without understanding, in which case you could just use machine translation as your example to show that some computational model is capable of understanding, and leave ChatGPT out of it.
评论 #35313379 未加载
评论 #35313235 未加载
tjr大约 2 年前
I keep being reminded of Paul Graham&#x27;s &quot;plan for spam&quot;, in that he devised a simple statistical evaluator, and was surprised that it worked so well to distinguish ham from spam.<p>These AI tools have been trained on a great deal of written language artifacts and exhibit a surprising level of what appears to be concept understanding. Perhaps the real surprise is that language conveys concepts better than we previously thought?
评论 #35313589 未加载
Animats大约 2 年前
There&#x27;s more of a model inside large language models than was previously thought. How much of a model? Nobody seems to know. There was that one result where someone found what looked like an Othello board in the neuron state.<p>Someone wrote, below: &gt; We know the basic architecture of large language models, but hardly anything about how they calculate anything specific. That’s the mystery. It will take research, not casual tinkering.<p>Yes. This is an unexpected situation. Understanding how these things work is way behind making them work. Which is a big problem, since they make up plausible stuff when they don&#x27;t understand.
评论 #35317602 未加载
fwlr大约 2 年前
For me, the strongest argument in this article is “There is a point where <i>it understands</i> is the most parsimonious explanation, and we have clearly passed it”.<p>Those who deny that ChatGPT <i>understands</i> have to move their goalposts every few weeks; OpenAI’s release schedule seems to be slightly faster, so in time it seems even the fastest-moving goalposts will be outrun by the LLMs.<p>One specific flavor of “ChatGPT doesn’t <i>understand</i> things” I see here and elsewhere - no straw man intended - is that humans completing a language task are doing something fundamentally different than LLMs completing the same language task. To take the example from the article and a comment about it in this thread: if a human were to apply English instructions to a question asked in Chinese, the human is <i>understanding</i> the instructions to achieve that. If an LLM were to apply English instructions to a question asked in Chinese, that is because words across languages with similar meanings are tightly connected in its statistical model, so instructions that affect the English words will also affect the Chinese words, purely through statistical means.<p>This is certainly a more sophisticated and nuanced and believable rebuttal than the crude “mere regurgitation” response. But it’s just as dangerous. In the end, the only thing that’s ‘uniquely human’ is <i>being human</i>, everything else is outputs from a black box. Arguments that ‘what’s inside the black box matters’ are risky, because the outputs gradually converge to complete indistinguishability; there’s no bright line to step off that train, you’ll end up claiming <i>only humans can understand because understanding is a thing only humans can do</i> - or worse (as the article describes) denying your own ability to understand, because your brain is a flesh-instantiated statistical approximator of the Platonic <i>understanding</i> process, and the silicon-instantiated statistical approximator of the Platonic <i>understanding</i> process that cannot be allowed to claim to <i>understand</i> differs only in its medium of instantiation.
评论 #35317442 未加载
评论 #35315690 未加载
edfletcher_t137大约 2 年前
The entire argument here rests on a supposition in the middle: &quot;because if GPT is just a Chinese room it shouldn’t be able to do this.&quot;<p>&quot;Shouldn&#x27;t&quot;. According to whom? Where is the source? I would posit it <i>should</i> and clearly <i>can</i> do that while still being a &quot;Chinese room&quot;, and this entire post&#x27;s premise is obliterated. Oof.
评论 #35316961 未加载
maister大约 2 年前
I&#x27;ve been thinking a lot about the ability of neural networks to develop understanding and wanted to share my perspective on this. For me it seems absolutely necessary for a NN to develop an understanding of its training data.<p>Take Convolutional Neural Networks (CNNs) used in computer vision, for example. One can observe how the level of abstraction increases in each layer. It starts with detecting brightness transitions, followed by edges, then general shapes, and eventually specific objects like cars or houses. Through training, the network learns the concept of a car and understands what a car is.<p>The same principle applies to Transformer networks in text processing. Instead of pixels, they process textual elements. Neurons in different layers learn to recognize complex relationships and understand abstract concepts.
评论 #35313623 未加载
return_to_monke大约 2 年前
The title encompasses my thoughts about the LLM perfectly. It is amazing to see that a very weird concept (randomising data, testing it, and randomising the best ones at predicting the next tokens again) would work but it seems to do.<p>Of course, this is not intelligence. these chatbots should come with a &quot;HEY THIS IS NOT INTELLIGENT AND VERY NOT A HUMAN&quot; warning sticker.<p>Yet, I wonder were we are in the hype cycle. If have hopes if this will somehow go on to improve.<p>Maybe by AI companies fine-tuning on initial prompt -&gt; last response [what the user accomplished, by prompting gpt incrementally] pairs?,<p>I think we could come like 50-60% close to what a human mind can accomplish.<p>last thought; from my own experience, toddlers sometimes hallucinate &#x2F; string random words together, too.
igammarays大约 2 年前
I still hold that it doesn&#x27;t &quot;understand&quot;. Even if it answered all questions perfectly, stopped making mistakes, and produced fully working programs better than the best crack developer teams, that still doesn&#x27;t mean it &quot;understands&quot;. &quot;Understanding&quot; is not an output, it&#x27;s a process, that is sometimes (but not always) measured by its output.
评论 #35316913 未加载
评论 #35319487 未加载
评论 #35321960 未加载
评论 #35316891 未加载
评论 #35316942 未加载
评论 #35316907 未加载
jmoak3大约 2 年前
The more AI develops the less omnipotent I feel about human level intelligence. Not once had I ever considered anything could exist as intelligent as a person. I’m not saying GPT4 is there, but to say nothing equal or greater than us will ever exist anywhere in the universe? I wouldn’t take that bet nowadays.<p>I’ve cut meat from my diet over these thoughts, it makes me want to be a slightly better steward of the other intelligences we’ve conquered.<p>It feels like I’ve just realized the earth orbits the sun and not the other way around, so to speak.
评论 #35356224 未加载
maxdoop大约 2 年前
The gap between AI “acceptance &#x2F; exploration” and “AI dismissal” continues to widen.<p>Right now, the top post on HN is about how ChatGPT is “a glorified text prediction program.”<p>Right under that post is this post.
评论 #35313357 未加载
评论 #35314255 未加载
Nevermark大约 2 年前
Two-layer neural networks are universal approximators. Given enough units&#x2F;parameters in the first layer, enough data, and enough computation, they can model any relationship.<p>(Any relationship with a finite number of discontinuities. Which covers everything we care about here.)<p>But more layers, and recurrent layers, let deep learning models learn complex relationships with far fewer parameters, far less data and far less computation.<p>Less parameters (per complexity of data and performance required of the model) means more compressed, more meaningful representations.<p>The point is that you can’t claim a deep learning model has only learned associations, correlations, conditional probabilities, Markov chains, etc.<p>Because architecturally, it is capable of learning any kind of relationship.<p>That includes functional relationships.<p>Or anything you or I do.<p>So any critique on the limits of large language models needs to present clear evidence of what it is being claimed it is not doing.<p>Not just some <i>assumed</i> limitation that has not been demonstrated.<p>—<p>Second thought. People make all kinds of mistakes. Including very smart people.<p>So pointing out that an LLM has trouble with some concept doesn’t mean anything.<p>Especially given these models already contain more concepts across more human domains than any of us have ever been exposed to.
评论 #35384095 未加载
branko_d大约 2 年前
I was skeptical about the whole “AI thing” for a long time, but have lately realized this was mostly due to my own ignorance.<p>The following video has opened my mind. If this is not intelligence, then I don’t know what is…<p><i>ChatGPT - Imagine you are a Microsoft SQL Server database server</i><p><a href="https:&#x2F;&#x2F;youtu.be&#x2F;mHgsnMlafwU" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;mHgsnMlafwU</a>
yamrzou大约 2 年前
I said it here before and I will repeat it: Unless it solves the Abstraction and Reasoning Corpus — ARC (See: <a href="https:&#x2F;&#x2F;twitter.com&#x2F;fchollet&#x2F;status&#x2F;1636054491480088823" rel="nofollow">https:&#x2F;&#x2F;twitter.com&#x2F;fchollet&#x2F;status&#x2F;1636054491480088823</a>) you can not say that ChatGPT is able to think or abstract.
评论 #35314014 未加载
评论 #35322035 未加载
Certhas大约 2 年前
Human intelligence evolved with the goal to survive and procreate. GPT intelligence evolved to mimick human speech.<p>Both tasks require a conceptual understanding of the world humans inhabit, but otherwise the two tasks that gave rise to these intelligences are utterly different. We should expect these intelligences to be completely different.
评论 #35317085 未加载
评论 #35314321 未加载
jgilias大约 2 年前
I feel like we’re in a new age of heliocentrism.
评论 #35313172 未加载
quonn大约 2 年前
I think it would be useful for some HN readers to get some basic philosophy training, specifically on the philosophy of mind. I asked myself many of these questions around 2005 or something and started to read up and there are many experiments that have been done and ChatGPT does not change much for the theory. It is interesting because of it‘s possible economic impact etc. Not because because of any supposed moral concern for the software itself of which there are none at this point.<p>I see people here constantly mixing intelligence and conciousness and that‘s really the most basic destinction everyone should be able to make.
评论 #35314433 未加载
ccppurcell大约 2 年前
I just asked chatgpt whether 3442177452 is prime. It insisted that 58657 is a factor (it&#x27;s not) on the basis that it&#x27;s the largest prime less than or equal to the square root (which I think is correct but irrelevant), and even though it gave a non zero remainder when dividing the two numbers (I did not check if the remainder is correct). Then it gave a (wrong) factorisation, not even using 58657.<p>It&#x27;s cool and it will probably be able to get this right one day but it&#x27;s a big goal to miss.
评论 #35313136 未加载
评论 #35322479 未加载
评论 #35313189 未加载
评论 #35313768 未加载
评论 #35313146 未加载
评论 #35322073 未加载
alienicecream大约 2 年前
- the AI is intelligent in a way that&#x27;s different from us and that we don&#x27;t understand but is very sophisticated<p>Also<p>- the AI cares about what happens to a fictitious child like someone from Reddit<p>Something here doesn&#x27;t pass the smell test. It seems more likely that someone wants to believe that the AI has a naive child like consciousness, like you see in pop culture depictions of AIs.
entropyneur大约 2 年前
I think the question of whether AI has &quot;true understanding&quot; of things is misguided. Having a &quot;true understanding&quot; is nothing but a subjective experience. There are two actual important questions: 1) whether AI is capable of having (any) subjective experience at all and 2) whether AI can outperform human intelligence in every area. You are in a deep denial if in 2023 you have any doubts about 2). I&#x27;m yet to hear a compelling argument as to why a positive answer to 2) might imply a positive answer to 1). However it&#x27;s appalling how little attention is being given to 1) on it&#x27;s own merit.
efxhoy大约 2 年前
&gt; But nobody knows how GPT works. They know how it was trained, because the training scheme was designed by humans, but the algorithm that is executed during inference was not intelligently designed but evolved, and it is implicit in the structure of the network, and interpretability has yet to mature to the point where we can draw a symbolic, abstract, human-readable program out of a sea of weights.<p>Nobody knows how the human mind really works either. And we’ve been trying to understand ourselves for thousands of years. I suspect we will take a while to figure out how the “mind” of GPT works too.
评论 #35313228 未加载
Veedrac大约 2 年前
Minor correction to an otherwise valid article: AI does not pass the Turing Test, and what LaMDA did was not a Turing Test. Reading the original article by Turing is illustrative. This is not to say AI is not impressive in a measure that the Turing Test is meant to take a measure of.
1970-01-01大约 2 年前
It understands yet will produce garbage output. If it ever answers without hallucinations and falsehood, it will truly understand reality. Then and only then will this be revolutionary and not evolutionary. If you want to be wrong then follow the masses.
jimhefferon大约 2 年前
Perhaps people will concede something is happening once GPT begins to worship UNIVAC.
评论 #35314421 未加载
cuteboy19大约 2 年前
Nobody mentioned this yet so I&#x27;ll just point out that the article title refers to Galileo&#x27;s famous utterance &quot;And yet, it moves&quot; after being &#x27;debunked&#x27; by the Church
Havoc大约 2 年前
Remember not so long ago when a google engineer was outcast for saying similar stuff?
bsaul大约 2 年前
since its seems that the author is reading HN : congratulations for that article. It managed to be interesting on a topic that&#x27;s written about non stop those days, and the writing style is very good.
smitty1e大约 2 年前
&gt; But nobody knows how GPT works. They know how it was trained, because the training scheme was designed by humans, but the algorithm that is executed during inference was not intelligently designed but evolved, and it is implicit in the structure of the network, and interpretability has yet to mature to the point where we can draw a symbolic, abstract, human-readable program out of a sea of weights.<p>I object. ChatGPT executes in computer logic and is ultimately electrical signals in gates representing 1 and 0.<p>ChatGPT is vast and impressive, sure. Emergence[1] may get it past a Turing Test, fine. But it remains discrete logic.<p>In contrast, natural intelligence has not been reproduced organically, much less, fully understood. There is no repeatable experiment going from inorganic matter to self-aware, self-replicating life.<p>In summary, ChatGPT is impressive, but nowhere near capable of doing the impossible, e.g. predicting the weather with fidelity substantially into the future.<p>Nor can I bring myself to fret that Skynet is immanent.<p>[1] <a href="https:&#x2F;&#x2F;en.m.wikipedia.org&#x2F;wiki&#x2F;Emergence" rel="nofollow">https:&#x2F;&#x2F;en.m.wikipedia.org&#x2F;wiki&#x2F;Emergence</a>
评论 #35313478 未加载
评论 #35313248 未加载