
What does it mean for a machine to “understand”?

59 points, by stablemap, over 5 years ago

16 comments

nutanc, over 5 years ago
This is a good, balanced article that gets a lot of things right. We should take a forgiving approach when we talk about AI systems. And, as the author points out, the problem is not that AI systems don't have understanding yet; the problem is the hype, which leads many to believe we are close to building systems that can understand us.

That said, I have a small problem with the examples presented to argue that machines already understand us :)

The article says: "For example, when I tell Siri 'Call Carol' and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request."

Let me take a shot at explaining why Siri did not "understand" the request.

Siri was waiting for a command and executed the best command that matched, which is: make a phone call.

It did not understand what you meant because it did not take the whole environment into consideration. What if Carol were just in the other room? A human might simply shout "hey Carol, Thomas is asking for you" instead of making a phone call.

If listening to a request and executing a command counts as understanding, then computers have been understanding us for a long time, even without the latest advances in AI.
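A minimal sketch of the kind of matching the comment describes, purely for illustration: an assistant scores a fixed set of intents against the utterance and executes the best match. The command names and keyword scoring below are invented; the point is that nothing in this style of matching models the environment (for instance, whether Carol is in the next room).

    # Toy intent matcher: pick whichever command's keywords best overlap the utterance.
    COMMANDS = {
        "call":    ["call", "dial", "phone"],
        "message": ["text", "message", "send"],
        "timer":   ["timer", "remind", "alarm"],
    }

    def best_intent(utterance: str) -> str:
        words = utterance.lower().split()
        # Score each intent by how many of its keywords appear in the utterance.
        scores = {intent: sum(w in words for w in kws) for intent, kws in COMMANDS.items()}
        return max(scores, key=scores.get)

    print(best_intent("Call Carol"))  # -> "call", executed regardless of context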
js8, over 5 years ago
I have a straightforward definition of "understand": to understand means to be able to give a (representative) example of an (intensionally) given set. This is harder than it seems, as it usually amounts to solving a constraint satisfaction problem.

For example, take the classic AI knowledge-base fragment, "a bird is an animal that flies." If I ask for an example of a bird and the system says "eagle", it exhibits some understanding. We can then probe further and ask for a bird which is not an eagle. If it says "bat" or "balloon", it shows that it still doesn't understand birds quite right.

In particular, if the description is nonsensical and thus impossible to understand, we cannot give any examples.

This idea was inspired by a study in which people were asked to distinguish nonsensical sentences from profound ones describing a certain situation. The profound ones are those for which you can construct a concrete instance of the situation.
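A rough sketch of that probing procedure over a toy knowledge base (the entities and predicates are made up for illustration): "understanding" is tested by asking for a member of the intensionally given set "animal that flies", and then for a second member distinct from the first.

    # Toy knowledge base; each entity is described by two predicates.
    KB = {
        "eagle":   {"animal": True,  "flies": True},
        "penguin": {"animal": True,  "flies": False},
        "bat":     {"animal": True,  "flies": True},
        "balloon": {"animal": False, "flies": True},
    }

    def example_of(constraints, exclude=()):
        # A tiny constraint-satisfaction search: return any entity that meets
        # all constraints and is not excluded, or None if nothing qualifies.
        for name, props in KB.items():
            if name not in exclude and all(props.get(k) == v for k, v in constraints.items()):
                return name
        return None

    bird_like = {"animal": True, "flies": True}
    first = example_of(bird_like)                    # "eagle"
    second = example_of(bird_like, exclude={first})  # "bat" -- the naive definition
    print(first, second)                             # of "bird" is exposed as too loose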
cjfd, over 5 years ago
On the one hand, the quote by Edsger Dijkstra comes to mind: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." We are hardwired to attribute great significance to what happens both in our own heads and in those of other people.

On the other hand, machines still perform actions one could call 'stupid'. When AlphaGo was losing the fourth match against Lee Sedol, it would play 'stupid' moves: trivial threats, for instance, that any somewhat accomplished amateur go player would recognize in an instant and answer correctly.

Humans, and also animals, have a hierarchy in their understanding of things. This maps onto brain structure too: evolution has added layers to the brain while keeping the existing structure. In this layered structure the lower parts are faster and more accurate but not as sophisticated. Stupidity arises from a lack of layeredness: when the goal of winning the game is thwarted, the top layer no longer has anything useful to do, and control falls back to the layer beneath it. For AlphaGo, pretty much the only layer behind its very strong go engine is the rules of go. So even when it is losing it will never play an illegal move, but it will do otherwise trivially stupid things. Humans have a layer between these extremes that prevents them from doing useless things. For living entities this is essential for survival: you can forget your dentist appointment, but you cannot forget to let your heart beat. It seems this problem could be mended by putting layers between the top-level algorithm and the most basic hardware level, such that stupid behaviour is preempted.
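A hedged sketch of that "layeredness" idea: a stack of policies from most sophisticated to most basic, where control falls through to the next layer whenever a layer has nothing useful to offer. The layer names and game state below are placeholders, not a description of how AlphaGo is actually built.

    def strong_engine(state):
        # Top layer: returns no move once the game is clearly lost.
        return None if state["clearly_lost"] else "best move"

    def sanity_layer(state):
        # The middle layer the comment argues for: at least avoid pointless play.
        return "resign, or play a move that is not trivially bad"

    def rules_only(state):
        # Bottom layer: any legal move, however pointless.
        return "some legal move"

    LAYERS = [strong_engine, sanity_layer, rules_only]

    def act(state):
        # Fall through the hierarchy until some layer produces an action.
        for layer in LAYERS:
            move = layer(state)
            if move is not None:
                return move

    print(act({"clearly_lost": True}))  # control falls back to the sanity layer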
modeless, over 5 years ago
> When I ask Google "Who did IBM's Deep Blue system defeat?" and it gives me an infobox with the answer "Kasparov" in big letters, it has correctly understood my question. Of course this understanding is limited. If I follow up my question to Google with "When?", it gives me the dictionary definition of "when" — it doesn't interpret my question as part of a dialogue.

Google Search doesn't, but Google Assistant does. I posed the exact queries suggested by the article, and the second query of simply the word "when" did give the correct answer (May 11, 1997).
BoppreH, over 5 years ago
I don't remember where I first saw it, but the best definition of "understanding" I've seen is "being able to encode and compress."

For example, imagine a system whose input is the picture of a human face in RAW format. If the system runs the picture through JPEG compression, say, and returns something substantially smaller, it has shown some understanding of the input (color, spatial repetition, etc.).

A more advanced system, with more understanding, may recognize it as a human face and convert it to a template like the ones used for facial recognition. It no longer cares about individual pixels or the lighting, just the general features of faces. It understands faces.

An even more advanced system may recognize the specific person and compress the whole thing down to a few bits.

I would say that an OCR scanner understands the alphabet and how text is laid out, GPT-2 understands the relationship between words and how text is written, and a physics simulator understands basic physics because it can approximately compress a sequence of object movements into initial conditions plus small corrections.

Lossy compression makes this concept non-trivial to measure, but it's still a world away from the usual philosophical arguments.
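One way to make the "understanding as compression" idea concrete with nothing but a generic byte-level compressor (zlib): highly structured input compresses far better than random noise, so the compression ratio serves as a crude proxy for how much structure has been captured. This is only a sketch of the measurement, not of any particular system.

    import os
    import zlib

    structured = b"abcd" * 10_000   # heavy repetition: lots of structure to exploit
    noise = os.urandom(40_000)      # random bytes: essentially incompressible

    for label, data in [("structured", structured), ("noise", noise)]:
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{label}: compressed to {ratio:.1%} of original size")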
stared, over 5 years ago
> Speaking as a psychologist, I'm flabbergasted by claims that the decisions of algorithms are opaque while the decisions of people are transparent. I've spent half my life at it and I still have limited success understanding human decisions. - Jean-François Bonnefon's tweet (as quoted in https://p.migdal.pl/2019/07/15/human-machine-learning-motivation.html)
Nasrudith, over 5 years ago
To be gadflyish: do humans even truly understand, or do they just claim they do because they have observations roughly encoded from what they have been taught? Teachings which themselves often include unfounded assumptions or outright superstition.

Human understanding has been wrong often enough, missing enough crucial context to be dangerously, hilariously wrong, even among the "experts" of the day who came closest.

This isn't epistemological nihilism, but a way of pointing out that understanding is incomplete for everyone, and that just because a given intelligence's understanding doesn't match our assumptions doesn't mean it is wrong, although it also isn't always right.
ilaksh, over 5 years ago
I think getting near human level in NLP understanding means being able to visualize and combine all of the dynamic systems that language represents. It's obvious that you can get pretty far just by processing a lot of text, but there is a limit. Some information about the way things work simply isn't encoded well in text the way it is in video input. So you need to be able to do a sort of physics simulation, for starters. Except it can't just be physics, because there are a lot of patterns you need to be able to call up and manipulate or combine that are not just plain physics. These patterns are not represented in text.

There are projects doing video and text understanding. I think the trick to efficient generalization is to have the representations properly factored out somehow. Maybe things like capsule networks will help, although my guess is that to get really componentized, efficient understanding, neural networks are not going to be the most effective way.
avmich, over 5 years ago
The proposal in the article is to define "understanding" and work towards testable satisfaction of the definition.

This sounds a bit like studying for a test. What if we made a definition and then worked successfully to reach the state where, according to that definition, the system "understands"? Can we expect to be satisfied with the result in general, outside of the definition?

The definition of understanding could be tricky, as history suggests. Beyond "to understand is to translate into a form which is suitable for some use", there could be many definitions. The article itself brings up examples of chess playing or truck driving that were once considered good indicators, yet failed to satisfy us in some ways.

Maybe we should just keep redefining "understanding" as well as we can today, changing it when needed, and work on trying to create a system that is "good", not necessarily one that "passes the test"?
YeGoblynQueenne, over 5 years ago
OK, wow, the old guard sure knows how to write sensibly. This is a great article.

But I have to disagree with this (because of course I do):

>> For example, when I tell Siri "Call Carol" and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request.

That is a very common-sense and down-to-earth non-definition of intelligence: how can an entity that answers a question correctly not "understand" the question?

I am going to quote Richard Feynman, who encountered an example of this "how":

"After a lot of investigation, I finally figured out that the students had memorized everything, but they didn't know what anything meant. When they heard 'light that is reflected from a medium with an index,' they didn't know that it meant a material such as water. They didn't know that the 'direction of the light' is the direction in which you see something when you're looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, 'What is Brewster's Angle?' I'm going into the computer with the right keywords. But if I say, 'Look at the water,' nothing happens – they don't have anything under 'Look at the water'!"

https://v.cx/2010/04/feynman-brazil-education

In this (in?)famous passage Feynman argues that the students of physics he met in Brazil didn't know physics, even though they had memorised physics textbooks.

Feynman doesn't talk about "understanding"; rather, he talks about "knowing" a subject. But his is also a very straightforward test of knowing: you can tell that someone doesn't really know a subject if you ask them many questions from different angles and find that they can only answer the questions asked from one single angle.

So if I follow up "Siri, call Carol" with "Siri, what is a call" and Siri answers by calling Carol, I know that Siri doesn't know what a call is, probably doesn't know what a Carol is, or what a call-Carol is, and so that Siri doesn't have any understanding from a very common-sense point of view.

Not sure if this goes beyond the Chinese room argument, though. Perhaps I'm just on a different side of it than Thomas Dietterich.
visarga, over 5 years ago
Does AlphaGo 'understand' go?

I think the key ingredient is 'being in the game': having a body, being in an environment, having a purpose. Humans are by default playing this game called 'life'; we have to understand, otherwise we perish, or our genes perish.

It's not about symbolic vs. connectionist, or qualia, or self-consciousness. It's about being in the world, acting and observing the effects of actions, and having something to win or lose as a consequence of acting. This doesn't happen when training a neural net to recognise objects in images or to do translation. That's just a static dataset, a 'dead' world.

AI has so far had a hard time simulating agents or building real robotic bodies: it's expensive, the systems learn slowly, and they are unstable. But progress happens. Until our AI agents get real hands and feet and a purpose, they can't be in the world and develop true understanding; they are more like subsystems of the brain than the whole brain. We need to close the loop with the environment for true understanding.
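A minimal sketch of what "closing the loop with the environment" can mean in code: the agent acts, the world responds, and the consequence feeds back into what the agent does next. The environment and the update rule here are toy placeholders (a two-armed bandit with a simple value update), not a claim about how true understanding arises.

    import random

    def environment(action):
        # The world has something at stake: action 1 is usually rewarded, action 0 never is.
        return 1.0 if action == 1 and random.random() < 0.8 else 0.0

    values = {0: 0.0, 1: 0.0}   # the agent's running estimate of each action's worth
    for step in range(1000):
        # Mostly exploit the current best estimate, occasionally explore.
        action = max(values, key=values.get) if random.random() > 0.1 else random.choice([0, 1])
        reward = environment(action)                        # acting has consequences...
        values[action] += 0.1 * (reward - values[action])   # ...which reshape future behaviour

    print(values)   # the agent has learned which action "matters" in this world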
boyadjian, over 5 years ago
To understand means to classify, to model.
RaiseProfits, over 5 years ago
You should direct the question to the computer if you want a meaningful answer.
basicplus2, over 5 years ago
Self-consciousness is required for understanding and intelligence.
igammarays, over 5 years ago
I'm with John Searle on the Chinese room [1], i.e. that a machine cannot be said to "understand" language even if it is able to pass the Turing Test. That is because when we say "understand", we are referring to a particular kind of human experience (qualia?) that a machine simply doesn't seem to have, but animals, for example, do.

[1] https://en.wikipedia.org/wiki/Chinese_room
friendlybus, over 5 years ago
I don't think it's possible for machines to understand. Numbers are meaningless; our human actions give them a useful function. All the meaning a computer appears to provide is the pre-assigned values of layer upon layer of programming work done by humans. Even today, AI relies on a lot of human tagging and categorization to make it useful.

The idea that a new, self-sustaining generation of meaning could arise out of the interlocking mechanisms of a computer is an interesting one. But as we watch self-driving car CEOs describe some of the most advanced systems we have, systems that must be run in controlled environments and that balk at the infinite complexity of real life, are we really building computer systems that are anything more than an incredibly sophisticated loop?