TechEcho


Transformative AGI by 2043 is <1% likely

33 points by snewman, almost 2 years ago

14 comments

adverbly, almost 2 years ago

Title isn't quite right... The article is estimating the probability of widespread/consumer-accessible/cost-effective AGI. But AGI can be transformative without being widespread.

Replacing all human functions is not necessary for transformation. A single superhuman AI could cost 1e6x that of a single human, but still do transformative work if it's 1.5x smarter than us.

Edit: never mind... This was an intentional definition of "transformative" made by the authors. Seems wrong to me, but they were up front about it. Not impressed, though. Anyone can drum up attention by using a non-standard definition of something. Bit of a cheap trick imo, but that's academia these days, I guess... For reference, my definition of transformative would be "responsible for large-scale changes to society". In other words, it doesn't need to happen at scale for impact to happen at scale. Replication of a solution is cheap once it is discovered.

eldritch_4ier, almost 2 years ago

Don't we all know someone who is very widely read, but makes the occasional error or mistake? Gets confused about something, or is asked about a discipline they don't understand or haven't researched very deeply? Maybe they misremember a date here or there, but are otherwise fairly intelligent? Maybe they work a data entry job making a middle-class salary.

This is basically where ChatGPT is at. It's a very widely read person with an excellent memory and a quick mind. It's probably smarter than the average human (certainly in breadth, often in depth), not to mention a comparison of it to the average human globally.

ChatGPT is smarter than the average human already. It doesn't need agency or a soul to do so. We already have AGI; we just keep moving the goalposts.

humanistbot, almost 2 years ago

So this is some kind of attempt to make a Drake Equation [1] for AGI? That's more useful as a thought experiment than something claimed with scientific precision.

[1] https://en.wikipedia.org/wiki/Drake_equation

mark_l_watson, almost 2 years ago

My opinion on this may be worthless since I have worked in AI since 1982, and I am guilty of enjoying living through several AI hype cycles.

After seeing surprising advances in techniques for deep models, then attention+transformer models, I think there is a lot of progress still to be made with cooperating LLMs. I have a simple example of this in the last book I wrote.

I have no idea what new ideas will work, but I would be very surprised if every 4 or 5 years new and fresh ideas don't occur to achieve really good reasoning, counterfactual reasoning, etc. Anyway, putting the chance of transformative AGI below 1% for the next 20 years seems wrong to me.

ajuc, almost 2 years ago

Let's say we get AGI that costs billions to run and can't do some simple things well, we don't have good cheap robots for it to use, but it discovers a way to lengthen human life significantly and cure most illnesses. Pretty transformative if you ask me.

Nuzzerino, almost 2 years ago

Using the tool referenced in the paper, my own estimate came to around 25%.

https://www.tedsanders.com/agi-forecaster/

anonuser123456, almost 2 years ago

"we estimate that transformative AGI by 2043 is 0.4% likely"

Not enough significant digits to take seriously. Now if they had said 0.39785% they might have some credibility.

cubefox, almost 2 years ago

The standard response to this is that the above is committing the multiple-stages fallacy:

https://arbital.com/p/multiple_stage_fallacy/

manuhortet, almost 2 years ago

"China has stated plainly it intends to reunify (invade) Taiwan. A majority of its population supports invasion. Its military is preparing to be ready for invasion"

Am I reading a paper or listening to Rogan?

RetroTechie, almost 2 years ago

Impossible to predict the future if the future depends on unpredictable factors.

I can't shake the feeling that recent developments like LLMs, ChatGPT & co. are barking up the wrong tree. Or are missing a key piece of the puzzle. Or that vastly simpler (computationally cheaper) constructs with similar capabilities could be found. That in hindsight (say, 20y from now) we'll say "see, it was really easy!". That the missing piece(s) were small but essential. And (perhaps) non-obvious right now.

See https://longbets.org/1/

Ray Kurzweil's argument is well reasoned & very convincing imho. Computing power will get there, or already is. And our understanding of the architecture & function of the human brain is a steadily completing picture. Not to mention the brains of smaller creatures. All it takes is time.

Doubtful about the timeframe in the above bet. And the efficiency of artificial vs. biological brains remains to be seen. But yes, AGI will be achieved. Likely sooner than later.

jimrandomh, almost 2 years ago

Previous discussion on LessWrong: https://www.lesswrong.com/posts/DgzdLzDGsqoRXhCK7/transformative-agi-by-2043-is-less-than-1-likely and on the EA Forum: https://forum.effectivealtruism.org/posts/ARkbWch5RMsj6xP5p/transformative-agi-by-2043-is-less-than-1-likely

Both previous discussions contain multiple independent refutations of the core claim (the argument stacks quite a few errors on top of each other).

jospf, almost 2 years ago

The only consistency in AI predictions is being wrong, whether it's "AI will never be able to do..." or "AGI by year 20xx".

albertTJames, almost 2 years ago

This must be one of the stupidest articles I have ever read.

The first page alone shows they have absolutely no clue how statistics works. Of course, if you consider every event in the universe completely independent, the probability of any large combination of them will be very low. But that is not the case for the conditions they describe: those are highly correlated, so P(A and B) = P(A|B)P(B) > P(A)P(B) for all of them.

hoseja, almost 2 years ago

> look into paper

> it's the Drake equation