TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.
The AI explosion might never happen

43 points by snewman over 1 year ago

16 comments

NumberWangMan over 1 year ago
> If you step way back, the simple fact that the Earth still exists in its current form tells us that no recursive process in the multi-billion-year history of the planet has ever spiraled completely out of control; some limit has always been reached.

I dunno, I mean, humanity has made some serious changes in a very short time. We're pretty lucky that the climate's sensitivity to CO2 isn't worse than it is, or we could have taken ourselves out already. Same as if we had had a nuclear war.

And AGI doesn't need to completely destroy the earth to be really bad for humans. Just taking over a lot of the resources we need would do the trick.

> It's conceivable that some sort of complexity principle makes it increasingly difficult to increase raw intelligence much beyond the human level, as the number of facts to keep in mind and the subtlety of the connections to be made increases.

There's a whole lot of "it's conceivable" in here, which seems to me to be a bit of a coping strategy. For humans, the problem with our biology is that our heads kinda have a physical limit on their size.

The idea that not only is it possible to make machines that are stronger than us, tougher than us, more precise than us, and faster than us, but also just straight up smarter than us in the general sense (instead of just at math or chess or whatever), is not outlandish at all. They might be less efficient and take a whole lot more power, but that sort of thing hasn't stopped us before. We just find ways to get them more power. I accused the author of trying to cope with fear of the development of super-intelligence by figuring out happy scenarios where it will be harder than we think to create. I have done the same thing, but I think we do ourselves a disservice by not facing the potential for this to be a real problem head-on.
kelseyfrog over 1 year ago
Diminishing returns is not the only way an AI explosion might never happen; Liebig's law of the minimum[1] is just as important.

We often talk about innovation driving innovation in discussions about AI explosions. However, we must honestly ask ourselves, "Even though it looks like innovation is the limiting factor in AI now, what other factors will be limiting in the future?"

In other words, is our model wrong because it could be too simplistic? If innovation were uncapped, what would be our next limiting factor, and how much of a difference would this represent?

1. https://en.wikipedia.org/wiki/Liebig's_law_of_the_minimum
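The law of the minimum described in this comment can be made concrete with a short numerical sketch: overall growth is capped by the scarcest input, so boosting an already-abundant factor changes nothing. The factor names and values below are hypothetical illustrations, not figures from the article.

```python
# Liebig's law of the minimum: growth is set by the scarcest factor.
# Hypothetical "capacity" scores for inputs to AI progress.
factors = {"innovation": 0.9, "compute": 0.6, "energy": 0.3, "data": 0.8}

def growth_rate(factors):
    # The effective rate is the minimum factor, not the average.
    return min(factors.values())

print(growth_rate(factors))  # 0.3: energy is the binding constraint

# Doubling innovation (the apparent driver) changes nothing...
factors["innovation"] = 1.8
print(growth_rate(factors))  # still 0.3

# ...while relieving the scarcest factor does.
factors["energy"] = 0.7
print(growth_rate(factors))  # 0.6: compute is now the limit
```

The point of the `min` rather than a sum or average is exactly the comment's question: once innovation is uncapped, whichever factor is smallest becomes the new ceiling.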
kbos87 over 1 year ago
So many AI predictions rely on a complete suspension of everything we know about how markets behave.

Could AI usher in a new era of wealth where nobody has to work? Sure, if you ignore the fact that the most likely scenario is that the value is captured by the current owning class, which is already well underway.

On the flip side of possible scenarios, could AI start improving itself and lead to an upward spiral of improvement? I can entertain that idea, but I think we'll find that most types of AI innovation won't engineer themselves outside the box that is market need and product-market fit. History has proven over and over again that innovation for innovation's sake gets a few moments in the sun before being quickly forgotten if it doesn't have actual utility.
breezeTrowel over 1 year ago
Interesting article, but it feels like the author may be conflating "explosion" with "singularity". An explosion does not imply infinite growth but, instead, a sudden large burst of growth.
hprotagonist over 1 year ago
Nature may abhor a vacuum but she sure loves sigmoids.
1vuio0pswjnm7 over 1 year ago
Imagine, hypothetically, one has a computer system that is capable of human language mimicry. For example, one can train it with sources such as "books, etc." and the system can then generate "new" books, etc. of its own. Then imagine people begin to accept these books as "creativity". Eventually, there are no books, etc. created without using the computer system for "assistance". What is the system now mimicking? The last time that human language was created without using the system is a distant memory. It can only mimic human language that was created prior to the system's existence. Arguably this is not a "future"; it is a continual rehashing of the past, namely that last point in time where all books, etc. up to that point were created by people, without the assistance of the system. Plus any "new" sources that have been created since, with the assistance of the system.

This is an incestuous process that does not produce useful mutations. However, it may produce some non-useful ones, "defects".

NB. The eventual "derivative" works in this hypothetical are not derived from use of human language; they are derived from use of a model of human language that represents a point in time that has long passed. Under this hypothetical "future", the source of all work becomes either works created before the computer system existed or works generated with the use of the computer system. Neither the system nor its user can continue to "learn", take inspiration, or, most importantly, deviate unpredictably from *works created without the use of the system* because, aside from the works that pre-date the system, such works no longer exist. The birth of new language and ideas arising *from sources not created using the system*, such as all the works that pre-date the system, is prevented. This is inbreeding.
blagie over 1 year ago
1) Biological evolution was limited in part since the brain needs to fit into 0.0013 cubic meters and use a maximum of 10-20W. Until recently, for natural selection, keeping within those constraints was much more important than incremental extra intelligence.

2) Evolution is also incremental in a way engineering doesn't need to be.

3) One could argue we're in the middle of the (biological) singularity already. It just depends on the timescales we look at. As life, we've developed more technology in the past 200 years than in the previous 3B years.

Interpolation works better than extrapolation. I don't really think we know what will happen or where we're going. The Bible, Quran, or Vedas are as good guideposts as our own extrapolations.

It's a discussion we should be having, but the levels of confidence expressed in predictions, constraints, or dynamics are unrealistic.
mdale over 1 year ago
Per "Chip design relies heavily on software tools, which are computationally demanding. Better chips can run these tools more efficiently, yet we haven't experienced an uncontrolled rate of improvement" ...

We have seen an exponential rate of improvement. And this is the whole point of why people talk about runaway AI; it's relative to the exponential growth of complexity observed in software and computer hardware scaling.

Yes, it's slowing now, but there were many years of exponential growth, transforming a lot in the process.
altcognito over 1 year ago
The AI tools we develop for self-improvement will need to be self-aware enough to know what is holding them back from further improvement. In the same way that Einstein said "You can't solve problems with the same kind of thinking that created them", hopefully these tools will have a way to get to the next level, or at least be able to identify what is required to further improve.

The assumption in the article that humans haven't been on a "runaway" improvement process could be stronger. Human history is relatively short, but process improvement has been strong.

Oral history led to written history, led to more instant communication, and in most fundamental disciplines there have been systematic process improvements. Humans are messy, but vastly improved over evolution for adapting to varied environments.

I mean, if we've really reached a point where the tools are self-aware enough to improve, we will probably struggle to get enough power and silicon to run more, faster. I get that this is fantasy stuff, but it seems reasonable on a longer timeline (say 150-200 years), no?
smitty1e over 1 year ago
One of the underlying assumptions in The Famous Article is that genius can be externalized, captured in a process, and then flourish outside of the original head.

I'll give that a "maybe" and "to some degree".
palad1n over 1 year ago
Every time I hear about "superhuman AIs", I just have to wonder: if it truly is beyond our power to comprehend, how are we in any way guessing what it would actually do?
_nalply over 1 year ago
Agreed, the positive feedback loop for the AI explosion might not happen.

However, we had a path of positive feedback loops up to today. Moore's Law is part of this path of positive feedback loops. We have a look-behind bias: we tend to say that such a thing was destined to happen exactly as it happened.

What exactly happens does not really matter.

It is enough that something is found which is better than the rest. We see this in evolution. If a species has a mutation giving it a better chance, then this species will thrive and probably displace other species.

I think that there's a chance that the same will happen with AI.

The driving force for evolution is the need to survive, and the driving force for human products is market demand. A company might stumble upon something better they can sell for cheap. Of course there are cartels, rent seeking, and market barriers to entry, and if these forces are strong enough world-wide then the AI explosion will be delayed for a long time or, if humanity goes extinct, never happen at all.

But exiles could produce something in the backwaters which can overcome the broken main markets because of sheer ingenuity, and because there's demand they will be successful. Think Tesla twenty years ago.

tldr: Something is bound to happen, driven by external forces like demand and competition.
Stem0037 over 1 year ago
There might be sectors or niches within the broader AI landscape where recursive self-improvement could defy this S-curve and sustain exponential growth for longer periods.
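The S-curve point made in this thread can be sketched numerically: logistic growth is nearly indistinguishable from exponential growth early on, then saturates at a ceiling. The growth rate and capacity values below are hypothetical, chosen only to show the divergence.

```python
import math

def exponential(t, x0=1.0, r=1.0):
    # Unconstrained exponential growth: x(t) = x0 * e^(r*t).
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=1.0, K=1000.0):
    # Logistic (S-curve) growth with carrying capacity K.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Early on, the two curves are nearly identical...
print(exponential(2), logistic(2))    # ~7.39 vs ~7.34
# ...but the logistic curve saturates near K while the
# exponential keeps compounding without bound.
print(exponential(20), logistic(20))  # ~4.85e8 vs ~1000
```

This is why "we saw exponential growth for years" and "the curve is an S-curve" are compatible observations: on the left side of the sigmoid, the data cannot distinguish the two.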
m3kw9 over 1 year ago
How people can say never is beyond me: there are still around 30 billion years between now and the supposed end of the universe, and we advanced this much in just the last 100.
hustwindmaple1 over 1 year ago
It's already happening, bro
araes over 1 year ago
What if it already happened and you just don't know it? Because you're too far down the singularity?

That you will never succeed, because the AIs are already better. Because you are too simple. And they can calculate what you will do for all eternity. Because the AIs can simply hold out a false carrot, and then all humanity falls toward the event horizon. Wealth for the wealthy, celebrity for the celebrity. Run toward the carrot and backstab those in front of you. That you were born into Westworld.

That the botnets integrated generative AI the minute it became available. That all other beliefs are foolish. That all the corporations already had AI, and already integrated those ideas a long time ago. 30 years of "maybe AI someday"? Then they all release in a couple months? Right as everyone realizes being squared/boxed/cubed/hypercubed in offices is horrible. Pfft.

More orphans for the orphan grinder.