
Your Brain on Metaphors

40 points by jawon over 10 years ago

13 comments

NotAtWork over 10 years ago
> Since computers don't have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: "It kills it."

This just makes me think he understands neither brains nor AI.

I also don't get the insistence on a 'body'. If we weren't planning on having the AI totally isolated, and intended to, say, talk to it in order to see if it was an AI, then we were already proposing to give it senses right from the start.

In fact, I don't think I've seen a single proposal for an AI that didn't give it at least one external sense and many internal ones. I don't see why we would think it would have that much trouble building metaphors.

As Lera Boroditsky says:

> If you're not bound by limitations of memory, if you're not bound by limitations of physical presence, I think you could build a very different kind of intelligence system

> I don't know why we have to replicate our physical limitations in other systems.
dragonwriter over 10 years ago
Here's where things go wildly wrong:

> Since computers don't have bodies, let alone sensations

Computers are not non-physical; they definitely have bodies (the physical machines which include the circuitry for executing their software and its necessary support mechanisms). They also can, and often do, have sensory systems providing inputs about the state of the world both external (e.g., cameras, microphones) and internal (e.g., temperature sensors) to their "bodies".

It may be that the "bodies" of current computers are structurally dissimilar to human bodies in ways that are detrimental to human-style cognition -- it's certainly true that they aren't built on the same kind of biomechanical design, and it may well be that the web of biomechanical feedback loops in the body is important to human intelligence and isn't readily simulated in systems using the technologies of modern digital computers. But even if that's true, it doesn't mean we can't have AI; it just means our AI may need to be built on a different set of technologies, e.g., perhaps using biological rather than silicon substrates. And engineering biological systems is something we *can* do, with increasing facility.

The belief that AI is physically impossible -- rather than just a very hard engineering problem -- is equivalent to the belief that intelligence is not itself a phenomenon governed by the laws of the physical universe, but magic that intrudes effects into the physical universe from outside and cannot be reproduced by physical means.
fensterbrett over 10 years ago
> *If cognition is embodied, that raises problems for artificial intelligence. Since computers don't have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: "It kills it." Of Ray Kurzweil's singularity thesis, he says, "I don't believe it for a second." Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.*

Well then, let's simulate the body as well once we've got the brain right.
GrantS over 10 years ago
I almost didn't read the article due to the headline, but the fMRI studies (which are really the focus of the article) are fascinating, and there is a surprisingly deep discussion about the line between idioms and metaphors -- particularly how this affects whether the brain engages the motor cortex when processing language, and how it might vary by individual.

Note that the headline isn't actually implying machines can't be intelligent, but that their internal states won't correspond to human states unless their cognition is grounded in a similar set of sensations. But this is already the case with humans from vastly different cultural contexts. The words coming out of your mouth only mean the same thing to others to the extent that they share a similar history of experiences when interacting with the world.

So I can certainly believe that an AI whose internal concepts are based on embodied/simulated experience would seem more relatable than one raised purely on books, but that's true of humans too, so it's no big surprise and not the insurmountable barrier that one of the quoted sources in the article suggests. Non-embodied agents will speak in idioms and embodied agents will speak in metaphors.
Millennium over 10 years ago
My personal take is that while humanlike AI may be developed, it won't happen on computers as we know them today. The fundamental mechanics of computation and thought appear to be different enough that I suspect an accurate simulation of humanlike thought may very well be out somewhere in NP. This is not to say that machines capable of humanlike AI won't be invented; they just won't be recognizable as computers.

They might not even make computers obsolete. If a humanlike AI's fundamental model of thought is closer to ours than to a computer's, then the AI might turn out not to be very much better at math-oriented tasks than we are (proportionally speaking). It would therefore still need to use computers in mostly the same ways we do. Both types of machines might be incorporated into a single unit (AI on the left, computer on the right, for example) to speed up that process.
evunveot over 10 years ago
A webcomic called *Nine Planets Without Intelligent Life* had a clever take on this topic (years ago). Quoting http://www.bohemiandrive.com/comics/npwil/19.html (drag to read; the illustrations are highly entertaining):

How and why do robots eat?

The answer to the first part of this question is simple: Robots eat the same way humans eat.

As to the why, it would be helpful to think of a saying of the late-human AI programming community.

Building an artificial intelligence that appreciates Mozart is easy ... building AI that appreciates a theme restaurant is the real challenge.

In other words, base desires are so key to human behavior that if they are not simulated ... convincing artificial intelligence is impossible.
melling over 10 years ago
Never is such a long time, and the brain isn't performing magic.
bwooceli over 10 years ago
<tldr>Train the AI of the future on Amelia Bedelia books</tldr>

I disagree with how they draw a distinction between metaphor and literal constructs in language. As we are all in our own heads experiencing the world, language is an interface to pass meaning from one reality to another.

Over time, humanity has used language to arrive at a collective consensus on the meaning of words that describe shared experiences. At this point, all language is on a metaphorical scale, where the depth of one's personal knowledge determines the success of understanding the input. This is coupled to a positive/negative reinforcement mechanism that builds a history of interactions, which helps determine what language will convey the intended meaning in an appropriate context.

It does not seem that these two features, a knowledge graph and a track record, are outside the realm of possibility for computation. Given a deep enough knowledge graph and a means to query the outcomes of past experience, it seems that this feature of "seeming human" would be possible.
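The two computational features this comment names, a knowledge graph and a reinforcement-based track record, are concrete enough to sketch. Below is a minimal, hypothetical Python illustration: the SpeakerModel class and its concepts, contexts, and phrasings are all invented for this sketch, not taken from the article or the comment.

```python
import random
from collections import defaultdict

class SpeakerModel:
    """Toy model of the comment's two features: a knowledge graph
    (concept -> candidate phrasings) and a track record of how well
    each phrasing landed in a given context."""

    def __init__(self):
        # Knowledge graph: concept -> list of phrasings that can express it.
        self.graph = defaultdict(list)
        # Track record: (context, phrasing) -> cumulative reinforcement score.
        self.scores = defaultdict(float)

    def add_phrasing(self, concept, phrasing):
        self.graph[concept].append(phrasing)

    def reinforce(self, context, phrasing, understood):
        # Positive/negative reinforcement after observing whether the
        # listener grasped the intended meaning.
        self.scores[(context, phrasing)] += 1.0 if understood else -1.0

    def choose_phrasing(self, concept, context):
        # Prefer the phrasing with the best history in this context;
        # fall back to a random candidate when nothing has worked yet.
        candidates = self.graph.get(concept, [])
        if not candidates:
            return None
        best_score, best = max((self.scores[(context, p)], p) for p in candidates)
        return best if best_score > 0 else random.choice(candidates)

# Usage: the model learns that an idiom fails with a non-native listener,
# while a literal phrasing succeeds.
model = SpeakerModel()
model.add_phrasing("died", "kicked the bucket")
model.add_phrasing("died", "passed away")
model.reinforce("non-native listener", "kicked the bucket", understood=False)
model.reinforce("non-native listener", "passed away", understood=True)
print(model.choose_phrasing("died", "non-native listener"))  # -> passed away
```

The design mirrors the comment: the graph supplies candidate ways to express a meaning, and the track record of past successes and failures decides which phrasing fits the listener's context.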
Aqueous over 10 years ago
I'm not sure I buy the premise that a brain understands even a literal sentence completely by simulating it, in all cases. I think that is a *component* of understanding, and it can help, depending on both the sentence and our direct experience of the situation the sentence describes.

But I don't think that is the whole story, because I don't think it permits *partial understanding*. What explains our ability to understand (or partially understand) a sentence describing a novel situation we've never experienced, involving objects or people we've never seen? As the article points out, sometimes we are able to understand sentences that have no associated motor activity or visual experience.

A huge example of this is soon after we're born: as we start to develop, we begin to understand sentences even though we're not being formally taught a language, only exposed to it. The simulation theory seems not to explain that process of 'bootstrapping.'
tim333 over 10 years ago
> Of Ray Kurzweil's singularity thesis, he says, "I don't believe it for a second." Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Ironically, Kurzweil is big into the stuff the article goes on about, like bodily sensory input, metaphor, and using fMRI to see what is going on. From his recent book:

"Inputs from the body (estimated at hundreds of megabits per second), including that of nerves from the skin, muscles, organs, and other areas, stream into the upper spinal cord." ... "Key cells called lamina 1 neurons create a map of the body."

"A key aspect of creativity is the process of finding great metaphors - symbols that represent something else. The neocortex is a great metaphor machine, which accounts for why we are a uniquely creative species."
jbarrow over 10 years ago
Jeff Hawkins is a huge proponent of strong AI, and I strongly encourage anyone interested in the subject to read his 2004 book, On Intelligence. In it he makes several cogent points about the future of true AI, a lot of which hit on points brought up in the article.

He discusses the origin of thought and imagination as simulations, which is in line with the article. He sees this in a different light, however: not only are simulations necessary for brains to produce thought, but they are achievable given the right computational system.

He also argues that embodiment may not (and in his view, likely won't) take a humanlike form. Rather, the AI, like a human, will be able to plastically adapt to new senses (say, weather sensors) to understand the world in a way we can't even fathom.
dghf over 10 years ago
> Take the sentence "Harry picked up the glass." "If you can't imagine picking up a glass or seeing someone picking up a glass," Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, "then you can't understand that sentence."

Taken to its logical conclusion, doesn't that imply that someone blind from birth can't understand visual metaphors or idioms: e.g., "I see what you mean"?
stared over 10 years ago
I am a big fan of conceptual metaphors. But I don't get this part: "why AIs may never be humanlike".

We can simulate embodiment and simulate structures for generating analogies. And, actually, that may be simpler than a Platonic approach in which words have "an objective meaning".