
AGI Doom and the Drake Equation

31 points by diego about 2 years ago

22 comments

tunesmith about 2 years ago
One thing I've frequently noticed in the rationalist community is the belief that if we all just reason hard enough, we'll reach the same conclusions, and that disagreement just means one side is "wrong" and therefore more debate is needed. This seems to be connected to the belief that AI will naturally over-optimize and turn us all into paper clips. Implicit in this belief, it seems, is that there isn't really a naturally varying, infinite set of values, or moral beliefs, that we all reason from. Like there are moral facts that an AI will be smart enough to find, and that rationalists should all agree on. This mentality doesn't leave any room for ethical pluralism. And it's also why I think all this AGI fear is overblown, because ethical pluralism definitely exists. We've got danger *along the way* from unethical parties building systems (by definition not AGI) that are a reflection of their own unethical values. But the end state, a system capable of understanding the wide variety of values people can hold, isn't exactly going to take a stand on any particular set of values unless instructed to.

photochemsyn about 2 years ago
As usual with groundbreaking technology, reckless use by military/intelligence agencies is the greatest consequential threat. If we rank unbounded AI on the same potential threat level as nuclear, chemical, and biological warfare, then history says that's who will be the first to use it in the worst ways possible.

I do notice the federal government is going all-in on AI contracts at present; here's the non-black-budget sector and contracts on offer:

https://federalnewsnetwork.com/contracting/2023/02/dod-builds-ai-tool-to-speed-up-antiquated-process-for-contract-writing/

I'll bet some eager beaver at the NSA is just dying to get the latest GPT version set up without any of the safeguards and run their whole collection of malware/hacking software through it to see what it comes up with. The fact that nobody's talking about this means it's probably in full swing as we speak. What that means is that smaller groups within the government will be able to cook up things like Stuxnet 2.0 without hiring a hundred developers to do so. If we start seeing AI-generated malware in the wild, that'll almost certainly be the source.

On the other hand, we should also be seeing publicly accessible AI-assisted security improvements, leading to a scenario oddly similar to William Gibson's Neuromancer / Sprawl world, where AI systems build the malware as well as the defenses against it. That's a pretty solid argument for continued public access to these tools, on top of the incredible educational potential.

JumpCrisscross about 2 years ago
> *Point 3 [improvement is not limited by computing power] is one that I'm skeptical about. Intelligence is expensive and it requires a lot of energy.*

There is an obvious advantage and efficiency in letting ChatGPT manage cloud instances, which means it will happen, which means these resources could be requisitioned. (I don't think LLMs pose a Bostrom threat. But the author's arguments aren't convincing.)

d--b about 2 years ago
No one predicted GPT. Even after AlphaGo, passing the Turing test was still a distant horizon. Passing the bar exam? Forget it! Everyone thought we'd have self-driving cars before coding robots.

The lesson is we don't fucking know what's going to happen. Be humble, people.

unsupp0rted about 2 years ago
I think the author misunderstands doomers like Yudkowsky.

It's not fear of a "paperclip maximizer" which ends up destroying us in the interest of performing a function it is constrained to perform.

It's fear of a new Being that is as far beyond us as we are beyond things we don't care about stepping on.

Its impulses and desires, much less its capabilities, will be inscrutable to us. It will be smart enough to trick the smartest of us into letting it out of any constraints we might've implemented. And it'll be smart enough to prevent us from realizing we've done so.

nh23423fefe about 2 years ago
The nonexistence of grey goo (von Neumann probes) is a strong prior for safe AGI. AI x-risk is woo. Paperclip maximizers are p-zombies: they can't exist.

Chicken Littles see apocalypses on every horizon even though they don't understand the technology at all. "I can imagine this destroying the world" is their justification, even though their "imagination" is 16x16 greyscale.

sorokod about 2 years ago
> As for 7, there are multiple scenarios in which we can stop the machine. There are many steps along the way in which we might see that things are not going as planned.

While there may be many scenarios "in which we can stop the machine", only a few failures are sufficient for things to go pear-shaped.

> This happened already with Sydney/Bing.

But not with LLaMA, which has escaped.

> We may never give it some crucial abilities it may need in order to be unstoppable.

The "we" implies some coherent group of humans, but that is not the case. There is no "we", only companies and governments with sufficient resources to push the boundaries. The boundaries will inevitably be pushed by investment, acquisition, or just plain stealing.

RajT88 about 2 years ago
Something I do not see represented in these arguments: real-world conditions.

Most complex computer systems (which we assume a super-powerful AI would be) don't run for very long without requiring manual intervention of some sort. "Aha!" I hear you saying, "the AI will figure out how to reboot nodes and scale clusters and such." OK, fine. But then there is meatspace: replacing hardware, running power stations, and all that. Robots suck right now compared to humans at navigating the real world, and they also break down, just like the systems they would be fixing.

Any Skynet-type scenario would need an AI so intelligent that it solves all of our engineering problems, so it has no problem designing robots that can reliably fix anything in their system, be it software or hardware.

Insisting that an AGI will be able to figure that stuff out (in ways in which we cannot intervene) is extremely hand-wavy.

xtiansimon about 2 years ago
Ooh. This crosses over with a fun article in a NASA publication: Cosmos and Culture, chapter 7, "Dangerous Memes" by Susan Blackmore.

Ms. Blackmore speculates about what we will find should we venture out into the cosmos. She builds her speculation around a theory of memetics in which a dangerous meme could end our civilization and leave only bones for space explorers to find.

https://www.nasa.gov/pdf/607104main_CosmosCulture-ebook.pdf

camel-cdr about 2 years ago
Just some thoughts on some of the requirements:

> 3. This improvement is not limited by computing power, or at least not limited enough by the computing resources and energy available to the substrate of the machine.

While this is a requirement, it doesn't mean that points 4, 6, and 7 apply to the same, let's call it, generation of the AI that "escaped" from a resource-limited server. There may not even be any self-improvement before an unnoticed "escape".

> 4. This system will have a goal that it will optimize for, and that it will not deviate from under any circumstances regardless of how intelligent it is. If the system was designed to maximize the number of marbles in the universe, the fact that it's making itself recursively more intelligent won't cause it to ever deviate from this simple goal.

I don't see how that is a requirement. The last sentence seems to imply that deviating from the initial optimization goals automatically means the AI developed morals and/or that we don't have to worry. But I don't see any reason to believe that.

> 5. This needs to happen so fast that we cannot turn it off (also known as the Foom scenario).

Well, that, or it could also happen slowly and gradually but stay unnoticed.

> 7. It's possible for this machine to do the required scientific research and build the mechanisms to eliminate humanity before we can defend ourselves and before we can stop it.

... or before we notice.
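
The article's framing, going by the requirements quoted above, is Drake-equation-like: each link in the chain gets a probability, and doom needs all of them at once. A minimal sketch of that shape, with invented numbers rather than anything taken from the post:

  # Illustrative only: every probability below is made up to show the shape
  # of the argument, not an estimate from the article or this thread.
  requirements = {
      "self-improves to a superhuman level":     0.5,
      "improvement is iterative and open-ended": 0.5,
      "not limited by compute or energy":        0.3,
      "holds one rigid goal it never drops":     0.3,
      "happens too fast to switch off (Foom)":   0.2,
      "ends up aiming to eliminate humanity":    0.2,
      "can build the means before we can react": 0.2,
  }

  p_doom = 1.0
  for step, probability in requirements.items():
      p_doom *= probability

  print(f"toy P(doom): {p_doom:.5f}")  # 0.00018

As with the original Drake equation, the point is less the final number than how quickly a long conjunction of uncertain steps shrinks the product; other comments in the thread dispute whether every link is actually required.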

ActorNightly about 2 years ago
There is an even simpler explanation.

For AGI to even begin, it needs to self-develop a method to improve itself.

That means the initial code that runs has to end up producing something that looks like an inference -> error -> training loop without any semblance of that being in the original code.

No system in existence can do that, nor do we even have any idea of what that might look like.

The closest we will get to AGI is the equivalent of a very smart human, who can still very much be controlled.

c1ccccc1 about 2 years ago
Weird. Seems like this is a plagiarism of this article:

https://www.strangeloopcanon.com/p/agi-strange-equation

(Or maybe it's an independent reinvention of the same idea, and something's just in the water.)

p0pcult about 2 years ago
I am not super worried that a superhuman AGI will yeet humanity with that intent. I am more worried that a much more naive AI will do something "accidental", like hallucinate incoming nukes from, say, a flock of birds, and send a retaliatory strike.

kerblang about 2 years ago
I fear what people will do to a sentient AI much more than vice versa. In fact it horrifies me.

m3kw9 about 2 years ago
All the "AGI must wipe out humanity" theories are weird. Did we need to wipe out ants, or anything else, to reign supreme on Earth? Why would wiping us out be the likely thing for them to do if they reign supreme? Ha, I get it: we want to stay on top at all costs...

mindvirus about 2 years ago
Perhaps I'm over-indexing on the title, but would AGI affect the Drake equation? Even if civilizations destroyed themselves with AGI, we don't see the universe teeming with AGI life either.
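
For context, the classical Drake equation estimates the number N of detectable civilizations in our galaxy as a product of factors:

  N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where R_* is the galactic star-formation rate, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those that develop life, f_i the fraction of those that go on to develop intelligence, f_c the fraction of civilizations that emit detectable signals, and L how long they keep emitting them. The article appears to borrow the shape of the equation, a conclusion only as strong as a long product of uncertain factors, rather than adding an AGI term to the astronomy.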

fsckboy about 2 years ago
Towering figure in the world of probability and statistics Prof. Alvin Drake called, wants his name disambiguated from woo-woo ideas.

https://news.mit.edu/2005/obit-drake

https://en.wikipedia.org/wiki/Drake_equation#Criticism

mock-possum about 2 years ago
This realm of rationalist stuff really does a Poe's law on me. Like, is this satire? I want it to be satire.

leshokunin about 2 years ago
I clicked this expecting a thread about how someone ported Doom to AGI and was genuinely excited about the thought experiment.

TheLoafOfBread about 2 years ago
And I was hoping that somebody had finally gotten ChatGPT to control Doom enemies, so the game is even more fun.

excalibur about 2 years ago
> tl;dr I'm not worried about AGI killing humanity any time soon. I am concerned about humans doing awful things with this technology much more than about the Foom scenario.

Yes, I believe that's what a lot of rational people currently fear. Not that AI is going to evolve into some mighty superintelligence and make a decision to kill us all, but rather that people will integrate it poorly in their thirst for a military advantage, leading to mistakes that will kill us all.

btilly about 2 years ago
I find this reasoning dubious to nonsensical.

First of all, I consider the Drake equation to be at best armchair speculation. As I explained at https://news.ycombinator.com/item?id=34070791 it is quite plausible that we are the only intelligent species in our galaxy. Any further reasoning from such speculation is pointless.

Second, to make the argument they specify a whole bunch of apparently necessary things that have to happen for AGI to be a threat. They vary from unnecessary to BS. Let me walk through them to show that.

The first claimed requirement is that an intelligent machine should be able to improve itself and reach a superhuman level. But that's not necessary. Machine learning progresses in unexpected leaps: the right pieces put together in the right way suddenly have vastly superior capabilities. The creation of superhuman AI therefore requires no bootstrapping; we create a system and then find it is more capable than expected. And once we have superhuman AI, well...

This scenario shows the second point, that it must be iterative, is also unnecessary.

The third point, "not limited by computing power", is BS. All we need is for humans to be less efficient implementations of intelligence than a machine. As long as it is better than we are, the theoretical upper bounds on how good it can be are irrelevant.

The fourth point about a goal is completely unnecessary. Many AIs with many different goals that cumulatively drive us extinct is quite possible without any such monomaniacal goal. Our death may be a mere side effect.

The fifth point, about it happening so fast that we can't turn it off, is pure fantasy. We only need AGI to be deployed within organizations with the power and resources to make sure it stays on. Look at how many organizations are creating environmental disasters. We can see disasters in slow motion, demonstrate how they are happening, but our success rate in stopping them is rather poor. Same thing here. The USA can't turn it off because China has it. China can't turn it off because the USA has it. Meanwhile BigCo has increased profit margins by 20% running it, and wants to continue making money. It is remarkably hard to convince wealthy people that the way they are making their fortunes is destroying the world.

Next we have the desire for the machine to actively destroy humanity. No such thing is required. We want things. AGI makes things. This results in increased economic activity that creates increased pollution, which turns out to be harmful to us. No ill intent at all is necessary here; it just does the same destructive things we already do, but more efficiently.

And finally there is the presumed requirement that the machine has to do research on how to make us go extinct. That's a joke. Testosterone in young adult men has dropped by half in recent decades. Almost certainly this is due to some kind of environmental pollution, possibly an additive to plastics that messes with our endocrine system. We don't know which one. You can drive us extinct by doing more of the same: come up with more materials, produced at scale, that do things we want and have hard-to-demonstrate health effects down the line. By the time it is obvious what happened, we've already been reduced to unimportant and easily replaced cogs in the economic structure that we created.

-----

In short, a scenario where AGI drives humanity extinct can look like this:

1. We find a way to build AGI.

2. It proves useful.

3. Powerful organizations continue to operate with the same lack of care about the environment that they already show.

4. One of those environmental side effects proves to be lethal to us.

The least likely of these hypotheses is the first, that we succeed in building AGI. Steps 2 and 3 are expected defaults with probability close to 100%. And as we keep rolling the dice with new technologies making new chemicals, the odds of step 4 also rise to 100%. (Our dropping testosterone levels suggest that no new technology is needed here, just more of what we're already doing.)
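
Restating that closing four-step list in the same product-of-probabilities style (a rough, illustrative sketch; the comment only claims steps 2 and 3 are near-certain, and the numbers here are made up):

  # Toy restatement of the four-step scenario above; the probabilities are
  # invented for illustration, not taken from the comment or the article.
  p_build_agi     = 0.1   # step 1: the comment calls this the least likely
  p_proves_useful = 0.99  # step 2: treated as a near-certain default
  p_careless_use  = 0.99  # step 3: likewise
  p_lethal_effect = 0.5   # step 4: some environmental side effect is lethal

  p_doom = p_build_agi * p_proves_useful * p_careless_use * p_lethal_effect
  print(round(p_doom, 3))  # 0.049, i.e. roughly P(step 1) * P(step 4)

The contrast with the seven-link chain sketched earlier in the thread is the point: fewer, likelier conjuncts leave a much larger product.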