
Singularity Missed

35 points · by telotortium · 6 months ago

9 comments

hyeonwho4 · 6 months ago
I must be missing something. The article claims that $1000 of compute (hardware?) was supposed to surpass an insect brain about 23 years ago, and we haven't achieved that benchmark yet.

But $1000 of time on Claude Opus will buy 13 million tokens of output, or about 52,000 human hours of output content at typical human author writing rates. The content will be well-formatted, logical, and, if well-prompted, virtually indistinguishable from that of an above-average human writer.

$1000 on DALL-E 3 will generate 8,300 images, some fraction of which will pass an artistic Turing test.

And $1000 on AlphaFold will do things that no human can do.

So it seems Kurzweil was right on target, and AI did surpass human capabilities around 2023?
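The comment's figures can be sanity-checked with quick arithmetic. The unit prices and rates below are assumptions for illustration, not taken from the comment: Claude Opus output at roughly $75 per million tokens, about 0.75 words per token, a human drafting rate of about 190 words per hour, and DALL-E 3 HD images at roughly $0.12 each.

```python
# Back-of-envelope check of the $1000 comparisons above.
# All unit prices and rates here are assumed, not authoritative.
budget = 1_000  # dollars

opus_price_per_mtok = 75.0              # assumed $ per 1M output tokens
tokens = budget / opus_price_per_mtok * 1_000_000
words = tokens * 0.75                   # rough tokens-to-words ratio
human_hours = words / 190               # assumed human writing rate (words/hour)

dalle_price_per_image = 0.12            # assumed $ per HD image
images = budget / dalle_price_per_image

print(f"~{tokens / 1e6:.0f}M tokens ≈ {human_hours:,.0f} human writing hours")
print(f"~{images:,.0f} images")
```

Under these assumptions the output lands on roughly 13M tokens, ~52,600 writing hours, and ~8,300 images, which is consistent with the comment's numbers.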
andrewla · 6 months ago
I'm of the school of thought that says that the Singularity Theory is just fundamentally unsound.

In particular the "futurism" school which tries to extrapolate existing trends to the future. To me it seems pretty clear that the path of technological development does not follow a predictable roadmap, and the things that seem important now end up being supplanted by unanticipated things, and the things that will seem important in the future will be important in ways that we can't predict.

So far I feel this has been borne out as each concrete prediction of people like Kurzweil has been ruled out or seen to be irrelevant or uninteresting, but it seems like the school still exists because it can always offer a post hoc adjustment towards the same end.

I have a question for people who believe in Kurzweil's theory -- what would it take to disprove it? To ultimately say "yeah, this was just an incorrect model of the future"?
FrameworkFred · 6 months ago
I disagree that we're greatly off-target.

We can certainly build a grenade-dropper that sometimes picks the wrong target and gets intercepted by hostile actors. We have LLMs that aren't doing some things that we might task a single human brain with.

It's really a matter of product-market fit.

When our army of bug-smart drones wipes out civilians and heritage sites, does something cruel or otherwise distasteful, and/or requires a deployment of a million drones to get the job done, folks realize they actually wanted something with human-level capabilities... like remote-controlled drones.

When we build an AI chatbot that's often wrong and can't do math all that well, folks complain that they didn't really want any old brain, they want something better.

I'm not sure I buy everything Kurzweil's selling, TBH, but I don't think this article is making a great argument.
Quinner · 6 months ago
What this is really saying is that hardware is a necessary precondition for, but not determinative of, intelligence. If you have a human brain's worth of neurons firing randomly, you don't get consciousness. The structure and instruction set is crucial. And we are perhaps 20 years away from figuring out how to structure the compute power we have to get the consciousness of a human adult.
uoaei · 6 months ago
This article highlights a gripe I've had for a while surrounding this conversation:

Analog and digital computing may be mathematically equivalent, but the devil is in the implementation, basically destroying any sense of equivalence in "processing power". The modes and methods of processing are so distinct (even if the quantities of "information flow" are equal) that reducing everything to FLOPS is not just irrelevant, it's horribly misleading.
falcor84 · 6 months ago
> So we should expect to be able to have e.g. a combat drone that can take a grenade, navigate for miles to enemy territory, pick on an enemy target, drop the grenade, and navigate back to the base.

Call me naive, but I would like to believe that at least part of the reason we don't have these autonomous murder machines is that we choose not to develop them.
natch · 6 months ago
More like "point missed." The singularity is about brains, not bodies. Of course physical instantiations of embodied agents will lag a very short bit when we first hit AGI.
cen4 · 6 months ago
I have been trying to kill a mosquito for about 20 minutes now. Can't believe what those speck-of-dust brains are doing.
throwuxiytayq · 6 months ago
> So we shouldn't expect human like intelligence before late 2040's. That is of course if we actually get an insect brain this year. Which we won't.

Was this written pre-ChatGPT? I am amazed that the author decided that this is an insightful take he'd like to share on the internet. They managed to confuse computational capacity and actual capability, while remaining completely blind to the fact that *nobody expects AI development to happen along this sort of curve, or to resemble a walk up the list of intelligent animal species*. We didn't expect it 10 years ago, we *especially* don't expect it now that it's *super prominently not happening*.