
Human intelligence is overrated (2012)

59 points by togelius · about 9 years ago

22 comments

beat · about 9 years ago
Multiplying 3842 by 543 is very hard for humans. They're slow and inaccurate. Computers do it perfectly at incredible speeds.

*Inventing multiplication* is something no computer (as we currently understand them) would ever be able to do.

For another analogy, moving ten tons of rock a thousand feet is something humans can do, and have done for millennia. It's very slow and difficult. A bulldozer can do the same thing in minutes. But a bulldozer would never, ever have a *reason* to move ten tons of rock.
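For scale, the product mentioned above is a single expression in any programming language; a throwaway Python sketch, purely as illustration:

    # The multiplication a human would labor over; a computer evaluates it instantly.
    print(3842 * 543)  # 2086206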
lmm · about 9 years ago
> the argument goes both ways here as well: take an arbitrary human (such as yourself, if you happen to be human) and try placing this human in the cockpit of a landing jet plane, in a semiconductor factory, in the oval office of the White House, in the kitchen of a gourmet restaurant, on a horseback in Siberia, or equipped with only a spear in the middle of the Amazonas jungle. There are humans that have been programmed to do well in each of these situations, but it is very unlikely that the human you were thinking of (perhaps yourself) would know what to do in more than at most one of these situations.

This part seems wrong. I think most humans would make a decent go of most of those situations. Not as good as an expert by any means, but they'd be capable of doing *something*, unlike a computer program.
fossuser · about 9 years ago
All this talk of 'intelligence' gets confusing - it seems like the core distinction people are talking about most of the time with 'general' or 'strong' AI is really something more like Artificial Consciousness.

This has issues too - if consciousness is an emergent property of a neural net with the right feedback mechanisms and training material, then even with the right feedback mechanisms you could still create an artificial consciousness that's stupid.

You see this problem in humans - depending on a lifetime of 'test data' exposure (parents, peers, environment) and the underlying brain neural net 'hardware', you can get people that believe a lot of stupid things.

Maybe consciousness doesn't have to work that way and we're just dealing with a local maximum of evolution (or some reproduction/sex drive constraint), but we might end up stumbling on the ability to create a neural net that can give rise to a consciousness before we can craft the type of consciousness we'd want.

Actually understanding how the system works is harder.
prmph · about 9 years ago
> "Humans are quite stupid in many ways, compared to computers. Let's start with the most obvious: they can't count. Ask a human to raise 3425 to the power of 542 and watch them sit there for hours trying to work it out. Ridiculous"

I hope this guy is just trolling, but in case he is not, this is a tired argument that should be debunked once and for all.

The brain, in the course of seemingly mundane activities (interpreting what we see, for instance), effectively performs a stupendous amount of complex calculations per second [1].

What people confuse is conscious calculations vs effective calculations [2]. The brain does not need to output intermediate results of basic operations because that is not its computational objective.

I was actually a bit disappointed at the shallowness of the article; from the title I was expecting maybe a discussion of how complex even the very concept of intelligence is, and how speed of calculations does not necessarily equate to intelligence.

[1] http://gizmodo.com/an-83-000-processor-supercomputer-only-matched-one-perc-1045026757

[2] http://chrisfwestbury.blogspot.com/2014/06/on-processing-speed-of-human-brain.html
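For concreteness, the exponentiation the article taunts humans with is a sub-millisecond operation for arbitrary-precision integer arithmetic. A minimal Python sketch (standard library only; the digit count is approximate):

    # 3425 ** 542 is an integer with roughly 1,900 decimal digits;
    # Python's built-in bignums evaluate it essentially instantly.
    result = pow(3425, 542)
    print(len(str(result)))  # number of decimal digits in the result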
Rhapso · about 9 years ago
It is an existential-horror sci-fi novel, but Blindsight by Peter Watts is the only book I've seen talk about this idea.

Avoiding totally spoiling the book, it asks the question: "Is consciousness really a survival trait in the long term?"
rtkwe · about 9 years ago
> But the argument goes both ways here as well: take an arbitrary human (such as yourself, if you happen to be human) and try placing this human in the cockpit of a landing jet plane, in a semiconductor factory, in the oval office of the White House, in the kitchen of a gourmet restaurant, on a horseback in Siberia, or equipped with only a spear in the middle of the Amazonas jungle. There are humans that have been programmed to do well in each of these situations, but it is very unlikely that the human you were thinking of (perhaps yourself) would know what to do in more than at most one of these situations.

At least the random human will have a chance at doing something, and if the situation isn't life-or-death like the landing plane or the Amazon jungle, could with time actually learn to operate in the new environment even without interacting with other people who could teach them. That's what's missing from AI: the flexibility to operate in a chaotic environment. Until relatively recently, the slightest thing going wrong while moving through an area or performing a simple task like moving a box would completely break a robot.

> (It's hard to understand why anyone would want to be in a plane flown by a human, now that there are alternatives.)

Detecting and ignoring spurious inputs and handling extreme edge cases are among the main reasons I feel better if there's a person alert and at the controls of a plane. Extreme cases like Qantas 32, where a huge number of systems are absolutely destroyed, would be a huge challenge for modern autopilots, which are great but aren't tested or designed for emergencies. [1]

[1] http://lifehacker.com/the-power-of-mental-models-how-flight-32-avoided-disas-1765022753
ktRolster · about 9 years ago
He makes a flawed argument that computers are smarter than humans, but the flaw is obvious: a computer couldn't even make that argument. It takes a human.
mafribe · about 9 years ago
The article is informal about intelligence and then comes up with a couple of ad-hoc examples where computers beat humans. It's unclear that they have much to do with intelligence. The following definition of the term has been proposed:

    Intelligence measures an agent's ability to achieve goals in a wide range of environments.

Togelius even addresses this a little by pointing out that humans have to be trained to be a pilot or a president. But it is unclear at this point to what extent computers can be intelligent in this sense. AlphaGo's reinforcement learner, probably the most astonishing part of AlphaGo, was not (to the best of my knowledge) Go-specific; instead, it was a general-purpose reinforcement learner. I doubt it can learn much more complicated forms of interaction that lack a simple reward function (such as games have).

Nevertheless, I'm quite optimistic, but it's far from the foregone conclusion that the author implies it is.
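For readers who haven't seen one, a tabular Q-learning loop is about the simplest concrete instance of a general-purpose reinforcement learner driven by a simple reward function. The sketch below is purely illustrative; the toy "walk right along a chain" environment and all parameter values are invented for the example, and this is of course not AlphaGo's actual training setup:

    import random

    # Toy environment (invented for illustration): states 0..4 on a line.
    # Action 0 moves left, action 1 moves right; reaching state 4 pays reward 1.
    N_STATES, ACTIONS = 5, (0, 1)
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: estimated return per (state, action)

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward, nxt == N_STATES - 1   # next state, reward, episode done

    for _ in range(500):                          # training episodes
        state, done = 0, False
        while not done:
            if random.random() < EPSILON:         # explore occasionally
                action = random.choice(ACTIONS)
            else:                                 # otherwise act greedily (random tie-break)
                action = max(ACTIONS, key=lambda a: (Q[state][a], random.random()))
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge Q(s, a) toward reward + discounted best future value
            Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
            state = nxt

    print(Q)  # the learned values favor "move right" in every non-terminal state

The point relevant to the thread is that nothing in the update rule is specific to this particular game; only the environment and the reward are.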
aardshark · about 9 years ago
I feel like this article is being disingenuous in order to get discussion going.
cosmin800 · about 9 years ago
How did this article make it onto Hacker News? Someone take it down, please.
jcoffland · about 9 years ago
> It would be very easy to invent games that were so complicated that only computers could play them; computers could even invent such games automatically.

Of the many errors in this article, this is one of the most flagrant. Computers have yet to invent any novel and challenging games. That would take ingenuity, something computers have failed to demonstrate. However, I suspect the author is trolling a bit.
nitwit005 · about 9 years ago
> Now let's take another activity that humans should be good at: game-playing.

It should be just the opposite. There was no evolutionary pressure on humans to make them play Chess well. The games are interesting to us partly because they're challenging and make us think differently.

If you want a fair comparison, you need to look at how successful we've been at making machines do things animals *were* facing evolutionary pressure to do successfully.

Imagine trying to build an ant: a machine with a tiny, power-efficient brain that coordinates with its fellows to gather resources and build enormous hives. Could we make computers do that? Maybe eventually, but we're nowhere close today. Just making a machine walk with grace is at the edge of what we can do.
colourincorrect · about 9 years ago
Will a robot/AI crossing the street ever realize (without anyone explicitly telling it) that if it is struck by a car then it will be incapable of moving?

Will an image recognizer ever "truly understand" what it means for something to be in a category?

If I set up a Go board such that the pieces on the board resemble a smiley face or some other pattern, will there ever be a version of AlphaGo that is able to recognize that? Will it be able to stop playing Go and start placing pieces on the board that fit into the pattern?

Will an AI be able to make an original joke that is not based on any template?

To my knowledge we don't have an AI that can do any of the above, but any person would find those tasks easy.
mouzogu · about 9 years ago
Comments seem to have gone off on a tangent about multiplication.

What stands out for me is the irony that, ultimately, the purpose of AI is not to be especially good at multiplication but rather to replicate the tenuous, fragile and indefinable properties of human intelligence that can only come about through some process more sophisticated than logical binary determinism.
jolux · about 9 years ago
No offense to OP or Mr. Togelius, but this argument is terrible in almost every way imaginable and completely unconvincing at that. It makes its case entirely by subtly introducing straw men and double standards.

Leaving aside that its primary point is made by a redefinition of "intelligence" (a nebulous term, redefined without either definition ever being spelled out), it completely ignores the fact that computers would be completely unable to do any of these things had smart people not told them how to. You may say the same of people as well, but people can learn things independent of knowledge. Even something as simple as space and time is understood a priori by people, but most computers are oblivious to what these things actually mean.

The arguments about memory are terrible, because one might as well say a library is smarter than a person if all that matters is the accuracy of recall and the amount of data stored. The computer itself does not know these things in the same way that we do; if you ask it to find them for you, it will search for them the same way you might, but more quickly. Knowledge is contextual and intuitive, and computers currently are not great at context or intuition.

And that's just knowledge! It's so easy to demonstrate that human knowledge is more complex than computer "knowledge" that it's barely even worth discussing.

I personally think one of the strongest indicators of intelligence in people is their intuition and the ease with which they adapt to new things, i.e. how many things come easily to them. Nothing comes easily to a computer. Everything must be specified clearly and carefully to the computer by a person who is better at intuition than the computer is. The computer has no way of knowing whether something is "right" or "wrong" in the sense that those words intersect with both morality and logic. It might understand that something is "incorrect," but that does not carry the negative connotations that "wrong" does for a human being. Computers have not "computed how to play the game perfectly"; they were told how to do so through increasing levels of abstraction, just as pencils have not learned how to make marks on paper. That a computer does what you tell it to, and by definition can't do what you don't tell it to, is evidence enough of its utility as a tool and not a person.

This is, of course, in the current context of AI. I have no doubt that some day we will create software with the sort of intuition and ability to contextualize information that people have. But until then, there's no sense deluding ourselves that we're already there. If that were the case, we might as well have stopped doing computer science research with the Bombe 70 years ago.
thefox · about 9 years ago
> Ask a human to raise 3425 to the power of 542 and watch them sit there for hours trying to work it out.

You can't compare the speed of a computer to the speed of a human. Just because a human is slow doesn't mean the human is stupid, nor that computers are smarter. The electric current in an integrated circuit acts at near the speed of light. Light itself is also very *fast* but not very *intelligent*.

> [...] the world Chess champion has been a computer.

Again, this is a matter of speed. If you gave a human being an amount of time proportional to what the computer takes to calculate the same steps of a chess game, it would be a truer *comparison of intelligence*.

> Humans have almost no memory

That's because the human brain isn't trained and isn't used. We use only about 10% of our brain. So this comparison falls short as well.

> The face recognition software that Facebook uses can tell the faces of millions of people apart.

And again, this is only a matter of training and not a matter of intelligence.
drabiega · about 9 years ago
It seems that any discussion about artificial intelligence eventually devolves into arguments essentially about whether human brains are magical or deterministic.
crusso · about 9 years ago
There are tools and there are users of tools. Which is the computer? Which is the human?
PaulHoule · about 9 years ago
My suspicion is that Chomsky's "language instinct" is actually a derangement of our ability to reason about probabilities that leads us to make the same mistakes consistently, and thus to understand each other.
kazinator · about 9 years ago
"Human intelligence" is a fiction fabricated from cherry-picked examples of *individual* intelligence.
danielam · about 9 years ago
If all you have is a hammer, everything looks like a nail...

I'm not familiar with Togelius' other writings, so I don't know how fast and loose the man is with his words, but in isolation, these "arguments" are like a compilation of YouTube comments. Normally I ignore them, but some days I take the bait.

The question at issue is made out to be "who's more intelligent, computers or human beings?" when the real question is "what is intelligence in the first place?". To merely assume some definition because it suits the author's position is nothing short of question begging.

There also do exist powerful arguments against strong AI and computationalism. Searle's Chinese room argument is perhaps the best known, but by all appearances, often unappreciated or misunderstood. The essential point he makes is that computers are syntactic machines, i.e., machines that transform strings of symbols (which are intrinsically meaningless) according to syntactic rules. However, human minds contain semantics (concepts). Because computers are syntactic machines only, they necessarily do not and cannot possess semantics. They can simulate semantics when a human being formalizes semantics by producing syntactic rules for the simulation, but no amount of syntax ever results in semantics, any more than skillfully adding clay to a sculpture can ever produce a human being. Remember, a computer is anything that implements anything equivalent to the Turing machine (a formalization of effective method).

Aristotle, on the other hand, makes a much deeper argument about the nature of the intellect that can reinforce a restricted form of Searle's argument, viz., his arguments can be used to explain why computers lack semantics by showing that matter per se cannot possess "concepts" as such and apart from particular instances. This argument is difficult to appreciate without an understanding of Aristotle's broader metaphysics. However, the outline of the argument is as follows:

1. Matter is particular/concrete (e.g., *this* tree, *that* rose).

2. Concepts are abstract (e.g., *Tree* as a class, *Redness* as such).

3. The intellect, the organ that abstracts concepts from particular instances, holds concepts.

4. Therefore, the intellect is not material. QED.

Adding my own minor premise and conclusion:

5. Computers are purely material.

6. Therefore, computers cannot be intelligent.

Note that "intellect" is not a synonym for "mind". Aristotle distinguishes such things as imagination (phantasm) from the intellect, the former of which he argues is material. To better see how concepts are immaterial, consider the word "tree". You may imagine a tree, or even a number of trees, but the image is always particular; it is always an image of a particular tree, whether real or not. However, none of these is the concept "tree", which is not particular (if it were particular, then there could only be one particular tree). You can repeat the same reflection with anything: every triangle you imagine will be isosceles, scalene or right-angled, and of some particular color, and indeed something *triangular* and not a triangle as such.

The general problem here can be related to the problem of qualia (and intensionality), and thus the mind-body problem introduced by Descartes' metaphysics and haunting much of philosophical discourse since (even when the mind is dropped and the body endowed with the powers attributed to the mind). Note that Aristotle's immaterial intellect is NOT Descartes' mind.

Others who have argued against computationalist or materialist conceptions of the mind include Kripke and Popper, but there are many in-depth treatments of the subject that address many of the claims and objections raised by the computationalists. That being said, I find "AI" (arguably a misnomer) to be a very interesting field.
known · about 9 years ago
Intelligence != Knowledge