An argument for the impossibility of machine intelligence [pdf]

104 points · by imaurer · over 3 years ago

17 comments

a-dub · over 3 years ago
> But we neither know how to engineer the drive that is built into all animate complex systems, nor do we know how to mimic evolutionary pressure, which we do not understand and cannot model (outside highly artificial conditions such as a Petri dish). In fact, if we already knew how to emulate evolution, we would in any case not need to do this in order to create intelligent life, because the complexity level of intelligent life is lower than that of evolution. This means that emulating intelligence would be much easier than emulating evolution en bloc. Chalmers is, therefore, wrong. We cannot engineer the conditions for a spontaneous evolution of intelligence.

this is the thing i've always sort of loved about philosophy. they just kinda make shit up, provide their own definitions that are rooted in a bamboozling by use of flowery language, and then once they've stated all their definitions with their conclusions baked in, they hop, skip and jump down the path which now obviously leads to the conclusion they started with.

it's kind of like a form of mathematics where they define their own first principles in each argument with the express purpose of trying to build the most beautiful path to their conclusions. it really is a beautiful form of art, like architecture for ideas.
dsr_ · over 3 years ago
This appears to be a series of arguments from incredulity.

In particular, it is equally incredible that intelligent life should evolve from a single-cell organism. But we have that as a counter-argument.

It is entirely reasonable to suspect that none of the current approaches will yield success, but claiming that no machine intelligences can possibly arise is... incredible.
_aavaa_ · over 3 years ago
I'd like to point the reader's attention to [1].

[1] https://arxiv.org/abs/1703.10987
R0b0t1 · over 3 years ago
Yet we are machines...?

Speaking specifically of neural networks as they exist now, the answer is no, because there is no obvious way to learn.
doganulus · over 3 years ago
Their premises about logical systems are wrong, so their conclusion is not valid. In short, of course there are logical systems with a potentially infinite state space. For example, a Turing machine. A digital circuit is no different. Turing completeness is abundant; it is everywhere.
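The unbounded tape is what makes the point concrete. As a hypothetical sketch (mine, not from the paper or the comment): a toy Turing-machine simulator can store its tape in a dictionary, so the machine can write to arbitrarily many cells and the set of reachable configurations is not fixed in advance — the sense in which a "logical system" need not have a finite, fixed phase space.

```python
def run(rules, tape, state="start", head=0, max_steps=1000):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state).
    The tape is a dict keyed by cell index; unwritten cells read as the
    blank symbol "_", so the tape is effectively unbounded in both
    directions.
    """
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")            # blank cells default to "_"
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Example machine: unary increment -- scan right over a run of 1s and
# append one more 1 on the first blank cell.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
tape = run(rules, {0: "1", 1: "1", 2: "1"})
print("".join(tape[i] for i in sorted(tape)))   # → 1111
```

The dictionary-backed tape is the whole trick: nothing in the simulator fixes the number of cells ahead of time, so the "state space" of tape contents grows with the computation.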
jmull · over 3 years ago
The paper is full-on nonsense. I'm surprised someone wasted their time writing it, and you probably shouldn't waste your time reading it.

In the part I read, it claims we can't develop AI because we can't accurately model full reality. There's no argument about what the connection is; it's just stated.

Kind of obviously, if we assume engaging with reality is necessary to develop intelligence, an artificial intelligence could do so in a similar way we non-artificial ones do, right?
snek_case · over 3 years ago
The most obvious counter-argument is that the amount of things we can do with AI keeps expanding. People were incredulous that computer chess programs could beat humans in the 1980s. Now they can beat us at basically any board game including Go, do image classification, and we have some early prototypes of self-driving cars.

AI hasn't mastered common-sense reasoning yet. That's likely going to come last, but the amount of things AI can understand is set to only expand IMO.
erdewit · over 3 years ago
In the same vein that heavier-than-air flying machines are impossible.
visarga · over 3 years ago
What a funny a priori paper. Maybe the authors lost a bet and had to write it.
Traubenfuchs · over 3 years ago
Should we ever attain hardware, software and an understanding of the human brain good enough to emulate a human brain, we will have done it.

There is absolutely no reason why this shouldn't be possible. Actually, we could already do it if we understood the brain well enough and could model it well enough, even if the emulation might not run in real time.
go_elmo · over 3 years ago
The Turing machine was designed by imagining a human operator. Our mind also has only a finite state, and no matter whether quantum effects are involved, the information in it is always finite, describable in a finite state. Thus, Turing machines are capable of doing exactly what we do with information. This argument is incredible.
mcguire · over 3 years ago
"Though the infinitesimal definition of utility in (1) and the penalisation of complexity in the definition of Υ provide a statistically robust measure of the kind of surrogate intelligence those working in the general artificial intelligence (AGI) field have decided to focus on, the definition is too weak to describe or specify the behaviour even of an arthropod. This is not only obvious from the issues already mentioned above, but also from the fact that algorithms which realise the reward-schemes proposed in (1) and (2) (for example, neural networks optimised with reinforcement learning) fail to display the type of generalisable adaptive behaviour to natural environments that arthropods are capable of, for example when ants or termites colonise a house."

Ok, I don't like the mathematical definitions of intelligence either (although I might be convincable, and they do have some advantages over other definitions I've seen), but this refutation seems to be a prime example of proof-by-assertion.

"Brooks defines an AI agent, again, as an artefact that is able 'to move around in dynamic environments, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction'."

And this definition implies many things we know to be intelligent (i.e. people) are not. So there's that.

"There are three additional properties of logic systems of importance for our argument here: 1. Their phase space is fixed. 2. Their behaviour is ergodic with regard to their main functional properties. 3. Their behavior is to a large extent context-independent."

Aaaaand here we go...

"As we learn from the standard mathematical theory of complex systems [23], all such systems, including the systems of complex systems resulting from their interaction, 1. have a variable phase space, 2. are non-ergodic, and 3. are context-dependent."

Ok, to the extent that the first statement is true about "logic systems", it is also true about any physically realizable, material system. On the other hand, the "complex system", to that same extent, is *not* physically realizable. (Consider "a variable phase space means that the variables which define the elements of a complex system can change over time" or "a non-ergodic system produces erratic distributions of its elements. No matter how long the system is observed, no laws can be deduced from observing its elements." and question *how much information* is required for this in the authors' sense.)

And there we have the intrusion of the immortal soul into the argument that artificial intelligence is impossible.
natch · over 3 years ago
"The authors declare that they have no conflict of interest."

"Department of Philosophy"

hmm
SubiculumCode · over 3 years ago
And this is why arXiv is not the same as peer review.
mensetmanusman · over 3 years ago
If it takes longer than the heat death of the universe to understand intelligence, does that mean it’s impossible?
tehchromic · over 3 years ago
It's not likely to be a popular opinion with technologists, as AI's potential has lit the technopopular imagination, but this question has bothered me for a long time. I think strong emergent AI suffers from philosophical problems that won't go away, and to the extent that the conversation revolves around evolution and consciousness rather than logic and intelligence, we are having the right conversation.

I'll put my argument out there and let the flames come as they will.

Strong AI is about as likely to emerge from our current state-of-the-art AI machinery as it is to emerge suddenly out of moon rocks. That's to say the fear of machines becoming self-conscious and posing an existential threat to us, especially replacing us in the evolutionary sense, is completely unfounded.

This isn't to say that building machines capable of doing exactly that isn't possible - we and all living things are proof that it's possible - it's to say that achieving this level of engineering is on par with intergalactic mass transit or Dyson spheres: way out of our league for the foreseeable future. And even if we had the technology, it would be so entirely foolish to undertake that no sentient species would do it.

That said, there's a substantial argument to be made that we will augment ourselves with our own machinery so thoroughly that we will become unrecognizable and, in effect, accomplish the same task by merging with the machine. This is likely, but not at all like the popular experience of the singularity, in which all of humanity is suddenly arrested and deposed by autonomous AI.

An interesting scenario in this vein: if a few powerful individuals can wield autonomous systems, modify themselves, and simply wipe out all the competition, then in effect the rest of us wouldn't know the difference. This outcome is actually, I think, on the more likely side, albeit a good ways away in the future.

Less likely, but still totally legitimate as a concern, is the idea that AI could be very easily weaponized. This is a real problem and is, I think, behind the more substantive warnings by good thinkers on the topic. Like bioweapons, we might be wiped out by a machine that's been intentionally programmed and mechanically empowered to cause real harm. This kind of danger could also be emergent, in that a machine might be capable of deciding that it ought to take certain actions as well as have the capacity to take them, and then, voila, mass murder.

However, it seems unlikely that such a mistake would be made, or that a bad actor would be capable of committing such an intentional crime. I think this is on par with nuclear MAD: even total madmen dictators hit pause on the push-the-button instinct. And an AI MAD or similar would surely take as much resource to produce as a nuclear arsenal. In other words, the resources required to build such machinery are on the order of a nation-state, and perhaps more complicated to achieve than a nuclear arsenal, so probably more likely to be stopped or to fail in process than to succeed.

So there are dangers from AI, but I would say they are lesser than the accumulated danger of industrial society rendering the planet uninhabitable, which should of course occupy our primary concern these days.

The idea that the biological evolutionary 'machine', whose motive for existence has accumulated over billions of years of entropic adaptation, can be out-engineered or accidentally replicated by modern computational AI is silly - the two aren't in the same league and it's hubris to suppose otherwise. There's more intelligence in the toe of a ladybug than in all the computing power ever made.

In sum, the danger from emergent AI is overstated, but the concern is most welcome to the extent that it informs wisdom and care in consideration of our techno-industrial impact on the biosphere.
Borrible · over 3 years ago
I take for granted the world exists, therefore it is.

You may call it Borrible's first tautology. Or perhaps bias. Yes, Borrible's bias sounds clever. At least to me. And that is what counts, doesn't it?

I don't really know what that fucking world really is, but nonetheless it exists.

With temporarily stable local dynamics, some parts of the world began to copy themselves. Albeit with errors and quirks. The recurring processes of the surroundings built the mold for the debris that collects in the swirls.

Some of those copies developed representations of their surroundings. First in the form of simple notes sticking on themselves, being themselves. Which was an advantage when they bumped into one another. They could navigate that thing I called world. Which made their copy process stable.

With a lot of time and trial and error, some parts of those parts of parts of that thing I called world even developed some really fancy little dollhouse worlds in the part of the world that would later call itself the brain.

And the most advanced ham actors in that dollhouse put more tiny little dolls in that house, the most precious one: ego. It represented the part of the world that started the whole shebang, the body. And it equipped that tiny little dollhouse with a lot of wondrous and a lot of silly things, some animated, some not. And it took great delight in it; it even fancied itself a god and pushed the tiny little ego around doing its biddings.

But for the most part it just tried to please itself and learn about the world and itself, based on all the input it somehow got from the world. And the drama that ham actor and his Muppet friends acted out.

Exactly like all those good little boys and girls do on their playgrounds since time immemorial.

When I was young, something happened. My dolls started to become 'Little Computer People'.

And people of my generation and the one before developed fancy models about this Matryoshka Doll World, about Worlds in Worlds in Worlds in Worlds. Infinite regress, sometimes recursive, sometimes not. A kaleidoscopic mirror, sometimes dark, sometimes shiny.

Simulacron 1, 2, 3 and so on, until there is no energetic process in that thing I call world that can be harvested.

And every time those models became more complex, they gave more agency to that part of the world that is now mumbling about building a new Ghost in the Machine.

Apart from the possibly insurmountable practical problems, I see no reason in principle why it should not become more complex in the form of artificial intelligence.

As an aside, it's great to be that part of the world, but beware. It may all end the moment that ham actor in the dollhouse cuts the strings to the world he is living off.

A risk deeply embedded in this structure. Of an agent acting in a model of the world.

The agent is subject to the risk of his striving to make himself independent of the world.