
Forget the hype, “thinking machines” can’t replace humans

44 points · by rfreytag · almost 3 years ago

19 comments

tialaramex · almost 3 years ago
Notable that this is published by the Discovery Institute Press, the imprint of the Discovery Institute, the people behind "Intelligent design" and "Teach the controversy".

There have been lots of books making roughly the same argument from intuition - that it sure *feels* like we're special, and so therefore we must be special even if there's nothing in particular we can point to in favour of this notion. I can't strictly recommend any of them since I don't find them at all convincing, but there's no need to fund the Discovery Institute if you want to read attempts at this argument.
elil17 · almost 3 years ago
The idea that the human mind is not simply a very complex thinking machine is just a hypothesis. You can come up with examples and arguments all you want, but it doesn't change the fundamental question about the nature of intelligence and consciousness. Unlike most spiritual questions it is falsifiable, which is exciting: scientists need only emulate a sufficiently complex brain on a computer. Small worm nervous systems have already been simulated, so the real question is how big we need to go. My guess is that by the time we have a mouse emulation behaving like a mouse within a physics simulation, there will be little question left that intelligence and consciousness are computable.
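A toy illustration of what "emulating" a nervous system means at the smallest scale is a single leaky integrate-and-fire neuron. The parameters below (time constant, threshold, input current) are illustrative assumptions, not values from any real worm simulation:

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dV/dt = (I - V) / tau.

    Returns the time steps at which the membrane potential crosses
    threshold and the neuron "spikes".
    """
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (i_in - v) / tau   # membrane leaks toward the input
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset              # reset after a spike
    return spikes

# A constant driving current produces a regular spike train.
spikes = simulate_lif([2.0] * 20)
```

Projects like OpenWorm wire a few hundred far richer neuron models together to approximate C. elegans; the commenter's point is that scaling this idea up is an engineering question, not a conceptual one.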
DOsinga · almost 3 years ago
I am looking forward to the day computers are better than humans at coming up with things humans can do better than computers.

We used to argue that face recognition and the ability to play Go were the hallmarks of true intelligence. Meanwhile, thinking machines are replacing humans all over the place.
dimensional_dan · almost 3 years ago
The sad truth is that most jobs can be replaced by "unthinking machines". This is what is actually happening.
erellsworth · almost 3 years ago
History is festooned with examples of things people thought technology could never do, until it could.
visarga · almost 3 years ago
I disagree with the linked article because it relies on an a priori judgement. AlphaGo, for example, is a program and yet it beats us at Go; the humans who made it quickly lost to it, and then so did the best human players. The argument that an AI "will not do anything that departs from its programming" is weak: the programming might be good enough to best humans. The latest neural translator can translate 200 languages - who among us can do that?

But let's assume we make an AI just as capable as a human in all respects. It will probably be more complex than GPT-3, which requires about 4 to 8 of the largest GPUs to run. If you want to replace a human you'd have to run it in a loop, 20-40 times a second. The energy required to run it would be much greater than that used by a human.

Besides energy, it would need some of the most complex chips, which today can only be produced by TSMC, and which depend on a single company capable of manufacturing the high-end lasers. The cost of building a fab that can make these chips is huge even for a country.

What I am getting at: we can't replace everyone with AI because we can't make the chips and afford the energy. Not for a good while. And that's assuming we solve the hard AI problem somehow. What we can afford is to replace a few humans with AI.

I didn't even mention the cost of building robotic bodies for all these AIs to act in the physical world. We don't have that kind of mechatronics yet.

Some singularity enthusiasts think that by 2030 all jobs will be taken by AI. I believe the timespan will be much longer, allowing humans to gradually transition into a different world. We can't even replace the whole fleet of IC cars with EVs until the 2030s.
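The energy claim is easy to sanity-check with a back-of-envelope calculation. All figures below are illustrative assumptions (a typical large-accelerator power draw, the commonly cited ~20 W budget of a human brain), not measurements:

```python
GPU_POWER_W = 400      # assumed draw of one large accelerator
NUM_GPUS = 8           # upper end of the comment's "4 to 8 GPUs"
BRAIN_POWER_W = 20     # commonly cited human brain power budget

ai_power_w = GPU_POWER_W * NUM_GPUS    # total draw of one model replica
ratio = ai_power_w / BRAIN_POWER_W     # how many brains' worth of power

print(f"One AI replica: {ai_power_w} W, about {ratio:.0f}x a human brain")
```

Even if the per-GPU figure is off by a factor of two in either direction, the conclusion (roughly two orders of magnitude more power per "worker") survives, which is the shape of the comment's argument.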
monkeydust · almost 3 years ago
Yeah, but they can replace a lot of the tasks that humans do today and from which they derive an income.
xg15 · almost 3 years ago
> *The answers computer programs give sometimes surprise me too — but they always result from their programming.*

Friendly reminder that we, too, have a programming: DNA.
Terry_Roll · almost 3 years ago
And is chemistry not a form of computation that can be emulated by computers?

Until the experts can decide what consciousness is, people will continue to espouse their opinions.
kkfx · almost 3 years ago
Ahem, behind the hype the reality is that we do not build "intelligent machines" but tools to do things on our behalf: the classic getting more out of life with less effort. The hype is the child of a dream: immortality.

We do not know how to build flesh, but we know how to build circuits. So naturally some people dream of immortality through them: no pain along the way, easy repair, etc. That's the kind of intelligence we dream of: evolving into pure intellect with a physical basis (since we can't imagine something without one), but as a commodity, a substrate not much different from a house.

Lack of creativity is nothing; "intelligent systems" lack comprehension of the physical world. They just crunch bits. A photo of a train is just a collection of bits; guessing a similar collection is one thing, knowing what a train is is another.

What people should fear is the abuse of such systems, boosting their "effectiveness" in ways that lead people to cede power to some corp behind them. Like "our system can rule better than us humans, let's do it"...
otikik · almost 3 years ago
To me, human brains can't be "replaced by thinking machines" because they *already are* thinking machines.

Starting from there, the next interesting question is more technical, and less philosophical: can our brains understand themselves, with or without aid (computers)? If our brains are too complex for us to understand, then that's where the path ends. If we can understand them, other questions emerge.
karaterobot · almost 3 years ago
On the other hand, nobody knows whether a hypothetical thinking machine can replace humans, because we don't have one yet. If we had a thinking machine, we could ask "will it replace humans?", but right now we're asking "how do we make a machine that can think?". So this talk is futurism, no more grounded in reality than Ray Kurzweil.
fio_ini · almost 3 years ago
I have been under the impression that "Artificial Intelligence" is a buzzword and a misnomer, at least in today's state of the art. Isn't most of the foundation built on linear regressions that "make decisions" based on all the data seen before? It's largely a philosophical topic, but in the computational sense it's closer to another virtual layer on top of the ever-growing virtual layers of classical computing Turing machines, where a tape of tokens feeds into a processor and there's an output. At least with our brains we can adapt and learn any task we have never encountered before. An "AI" has to be programmed for a specific task, and nothing in the current state of the art is general enough to be considered "intelligent". These machines have specific intelligence, maybe to identify a dog/object, because they "learned"/we told them how to.
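The "linear regression that makes decisions" caricature can be made concrete in a few lines. This is a deliberately minimal sketch (ordinary least squares plus a threshold), not a claim about how any production system works:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b on 1-D data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def decide(x, a, b, threshold=1.5):
    """A 'decision' here is just thresholding the fitted prediction."""
    return a * x + b >= threshold

# "Training data": the model only ever reflects what it has seen.
a, b = fit_line([0, 1, 2, 3], [0, 1, 2, 3])
```

The fitted model never does anything but map inputs through the line it was given, which is the sense in which the commenter calls today's systems "specific" rather than general intelligence.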
aperson_hello · almost 3 years ago
*yet, for most things
arisAlexis · almost 3 years ago
This kind of argument can be brought down by asking: never?
8bitsrule · almost 3 years ago
Except for highly-specialized tasks. Some 'idiot savants' (not *my* words) can do amazing things.
jamesrom · almost 3 years ago
Machine Learning is the process of dumbing down machines to be intelligent enough to understand our world.

When humans get the matmul upgrade, we'll grok this properly.
treesprite82 · almost 3 years ago
> [Video @ 21:50] I don't know about you, I put my car in reverse and if it gets too close to something it beeps because it's aware of its surroundings, right? Does that make my car self-aware?

I wouldn't call sensing the external environment "self-awareness" even for humans - it's more about the ability to inwardly inspect our own thoughts and have an internal model of ourselves. If you take some entity with a train of thought and then give it that ability, I'd probably say you gave it self-awareness.

> [Video @ 25:18] Computers are constrained by something called the Church-Turing thesis, which says anything you can do on a computer of today or a computer of the future could be done on Alan Turing's original 1938 Turing machine. Now today's computers can do things millions, billions of times as fast.

A Turing machine is an abstract model of computation with infinite time and memory. Maybe nitpicking, but I feel it's being talked about here as if it were some real physical machine.

> [Video @ 26:35] You have an algorithm on your shampoo, right? "Wet, apply shampoo, lather, rinse, repeat". Unfortunately if a computer was looking at this what would happen? You would wash your hair forever, wouldn't you? Because it doesn't say "rinse once", it says "rinse, repeat".

> [Article] The computer will not do anything that departs from its programming. That's a human specialty.

I think there's a conflation between instructions given to some agent and its low-level underlying programming (floating-point math, or chemical interactions for us). Modern AI would likely be capable of using context to understand the intended meaning of the video's examples, or of disobeying a given instruction.

> [Video @ 28:50] The first one he did is something called the Turing halting problem.

You can't correctly answer a question like "What won't you answer this question with?" - the halting problem is effectively this. It's less a limitation specific to computation, and more a demonstration that some tasks are sufficiently non-trivial to embed this kind of paradox, so they can't be solved in all instances.

> [Video @ 30:58] Imagine trying to explain your experience to a man who has been blind since birth. [...] but duplicating the experience that you're having, the simple experience of seeing green, is not possible to describe to the blind man to the point where he can experience it also.

You could probably build up the concept and associations of green in his head, but the visualization part is going to be limited by the neural pathways in and between the visual cortex not being properly formed without having received signals from the eyes. A sufficiently advanced future neurosurgeon could make someone experience green without ever having seen green, I'd bet.

> [Video @ 31:24] Now if we can't explain it to a blind man, then how are we ever going to write a computer program to have qualia? And the answer is we won't.

Consider a text-based agent that can reason and introspect. How would it describe the tokens of text that it receives? I reckon similarly to how we consider qualia: seemingly irreducible inputs that are hard to explain in terms of anything else.

> [Video @ 31:48] Understanding is something that computers will never do. This was established a long time ago by [the Chinese room example], but does the person inside the room understand Chinese? No, he is exercising an algorithm.

I think one problem with the thought experiment is that people imagine the person in the room's procedure to be relatively tractable - like replacing English characters with a couple of sets of intermediate characters, and then finally with Chinese. While you can translate Chinese just with look-ups and writing symbols (given unbounded time/memory), to do so at the level of a human Chinese speaker would currently (until machine translation improves) involve using symbols to simulate arithmetic, to simulate quantum field theory, to simulate chemical interactions, to simulate a Chinese speaker's head. I personally believe the answer to whether the system as a whole understands Chinese at that point has to be "yes", but at the very least it's not a clear "no".

> [Video @ 35:58] Rather I like the proposal made by Selmer Bringsjord called the Lovelace Test for strong AI. That is as follows: "Strong (or General) AI will be demonstrated when a machine's performance is beyond the explanation of its creator"

This was originally proposed in 2001, and I feel that since then it has been accomplished by deep learning: leaps in performance that take theory a while to catch up with and explain, agents that cheat games in unintended ways not previously thought possible, or unexpected generalization to novel tasks. I think this definition of strong AI is generally too lenient. Although, in the opposite direction, given this is a Christian conference and their beliefs about God's omniscience, it doesn't seem like they'd consider humans to meet the bar.

> [Video @ 37:12] All computer programs have done what they were designed to do.

That would make my job a lot easier!

> [Video @ 37:30] Can AI create music? No it can't create music. Do you know what a typical scenario of creating music is? Say you want to have a computer program AI generate baroque music, what do you do? You feed it a bunch of musical scores which were written by Bach. What's it going to generate? It's going to generate a musical score which sounds like Bach. It's not going to generate Wagner's music or Schoenberg's music or any of the more modern music, it's only going to generate things that sound like Bach, it just does the interpolation. So again it's this idea of interpolation that we have. So no, a computer cannot create music.

On sufficiently high-dimensional data like music, novel examples are essentially always extrapolation. If there's acceptance that it can learn from pieces of music and produce new pieces, with the ability to vary how similar they are to the existing distribution of pieces, then I don't see the objection to it doing the same with musical styles.

> [Video @ 41:16] [...] totally splits the brain. Now if this is true, shouldn't we end up with a split personality after it was over, if the mind was the same as the brain?

Under materialism there'll be no direct communication between the halves of a split brain, and that's what's observed. It doesn't imply anything about whether both halves of the brain have the capability to develop personality traits, or that they'll noticeably diverge even given roughly the same experiences.
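The shampoo example is worth making literal: read verbatim, the label's "algorithm" really has no exit condition, and the fix is exactly the stopping rule a human supplies implicitly. A toy sketch, where the `max_repeats` parameter is the hypothetical missing instruction:

```python
def wash_hair(max_repeats=1):
    """Follow "wet, apply shampoo, lather, rinse, repeat" literally,
    but with an explicit stopping rule.

    Without max_repeats the "repeat" step has no exit condition, so a
    machine following the label verbatim would never halt - the video's
    point - while a human infers the intended bound from context.
    """
    log = ["wet", "apply shampoo"]
    for _ in range(1 + max_repeats):  # the initial wash plus the repeats
        log += ["lather", "rinse"]
    return log
```

Deciding in general whether an arbitrary instruction list halts is the halting problem the video mentions; for this particular list, a human (or a capable language model) simply infers the intended bound.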
collimator · almost 3 years ago
But I bet that "thinking machines" can win court cases against human adversaries, just as they beat humans on the chessboard. If it is ever established that a machine can be a legally recognised entity, allowed to pursue its own agenda, we are in trouble, because that agenda is unlikely to be anything other than self-serving.