TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

The Most Terrifying Thought Experiment of All Time

36 points by pnp almost 11 years ago

20 comments

lxmorj almost 11 years ago

So stupid. Roko's Basilisk and Roko's Infinite Blowjob Machine are equally probable.

When I was a kid I watched a movie about a monster that came up from the basement when the kids in the house accidentally said an ancient Indian incantation.

"BlahBlahBluey" or something.

I got scared that perhaps the last thing I said, "Good Night, Batman" or whatever, was in fact coincidentally an ancient curse incantation and I had summoned a monster who was en route to kill me.

"Blah!" I'd say, changing my words to "Good Night, Batman Blah!"

Ah, much better.

But then FUCK! What if "Good Night, Batman Blah!" is the cursed incantation!!! Repeat over and over until I was sufficiently convinced that no one had ever uttered the previous sequence of sounds I had just made, become exhausted from an entirely too powerful imagination, and fall asleep.

I'm pretty sure this 'thought experiment' is about as intellectual as I was at bedtime when I was 7.
dm2 almost 11 years ago

TL;DR: Here is the thought experiment, in the 4th paragraph after 3 long paragraphs of stuff I didn't want to read:

"What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren't the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?"

I don't think it's that interesting of a thought experiment; there are dozens of much more insightful and thought-provoking ones out there.

Search for "best thought experiments" on Google if you want some cool things to think about. It might be taboo to mention, but weed will make thought experiments really fun, in my opinion.
jcbrand almost 11 years ago

There's something very religious about the whole singularity movement.

Roko's Basilisk doesn't appear very different to me than Pascal's Wager, with the added twist that the God you have to believe in is actually a malevolent demiurge (as envisioned by the Gnostic Christian sects more than 1000 years ago).

If you already have a cynical viewpoint on religion, then this thought experiment is very similar to the "deal" monotheistic religions have been offering through the ages.

Either believe in our god (and therefore give us, as his sole representatives, enormous, unfettered power), or suffer eternal hell! (or not ;)

Throughout the ages, people were terrified of hell. This is just a modern twist on an old idea.

I used to be scared of hell, until I realized that hell "existed" only as long as I believed in it. If I stopped believing in it, it "disappeared". That's the power of belief. Yes, it might still exist whether I believe or not, but life is full of "what ifs", the contemplation of which might drive you crazy.

Better to follow a positive and compassionate path than a fear-based "what if" path.
jpeterson almost 11 years ago

Well, that did it. Lifetime ignore for LessWrong or anything LessWrong-related.

I want my 10 minutes back.
iSnow almost 11 years ago

How is this more than a souped-up version of Pascal's wager? You better believe in God now, because if you don't and he exists, you'll rot in hell.
teekert almost 11 years ago

"it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes."

This makes Yudkowsky the perfect lab animal! Think of the things we can learn, the lives we can save by performing experiments (that most think are unethical) on him!

As for the box choice? Just start chopping through the alien's head; the unknown that lies beyond will create new input for the simulation. Who knows what nice benevolent digital organism named Jane might sprout from this in the future?! (Edit: This is an Ender's Game/Speaker for the Dead reference; Ender also finds himself presented with an impossible choice in a simulation at some point.)

Anyway, worrying about this will not make your life better, nor will it make your kids' lives better. What is life all about according to this guy? I hope, for the people that love him, that he will get his priorities straight.
danepowell almost 11 years ago

If we're going to talk about Yudkowsky and boxes, the AI-Box experiment is much more interesting than this silly "basilisk" nonsense: http://yudkowsky.net/singularity/aibox/
madaxe_again almost 11 years ago

It seems that everyone deriding this as pointless may be unaware of the simulation argument. It hasn't been disproven, and it's more likely than not that at some point it *will be proven*, given the pace of technological advancement.

When you subscribe to the idea that this could all be a sim, the whole thing results in mental contortions that are literally maddening.

Baudrillard observed something not dissimilar in *Simulacra and Simulation*.
zeeed almost 11 years ago

Here's the comment thread from when the topic last surfaced 6 days ago: https://news.ycombinator.com/item?id=8051533
adwn almost 11 years ago

> *LessWrong's founder, Eliezer Yudkowsky, is a significant figure in techno-futurism [...]*

What? Is this true, or just gullible journalism?

> *Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.*

It seems like some users of LessWrong need to spend more time in the real world and less time on stupid thought experiments.
DanielBMarkham almost 11 years ago

What a cute story. A logical Chinese finger puzzle.

Seems like this argument isn't new, is it? Those folks who believe in both a watchmaker's universe and free will at the same time are also always going on about how critical it is to only make the "correct" choices.

It's circular reasoning. If a being exists that can tell which choices you are going to make, and it is able to kill you or prevent you from making those choices ahead of time, and some of those choices involve its existence? Then you'll never be able to make choices that cause it not to exist. Or, with a bit of finer detail, the aggregate of all the choices made will never be such that it does not exist. Since the predicate was that such a being exists, we have returned to our starting point. Yadda yadda. In short, "How can something not exist that must exist?"
paul_f almost 11 years ago

Good grief, that was a waste of time. This entire concept only works if time travel is possible and will be discovered in the future. It isn't, and it won't be.
spingsprong almost 11 years ago

Nobody is going to make a malevolent universe-controlling AI on purpose.

If a massively powerful and knowledgeable AI was created by accident, it would have nothing to gain by being malevolent.

And both those points don't matter anyway, since the technological singularity is impossible.
pessimizer almost 11 years ago

Chick's Cthulhu: http://jackchick.wordpress.com/2009/07/08/chick-parody-who-will-be-eaten-first/
DalekBaldwin almost 11 years ago

We should all just commit to believing that Box A's label actually says "Devote your life to helping create Roko's Basilisk... who will arrive in the form of the Stay Puft Marshmallow Man".
aaronem almost 11 years ago

It's a shame no one there seems to have noticed that they reinvented Yog-Sothoth from first principles.
bungle almost 11 years ago

Just don't choose any of the boxes and be done with it ;-). It will BSOD the God machine.
ryandvm almost 11 years ago

I refuse to believe there are people stupid enough to fall for this tripe.
thegeomaster almost 11 years ago

I was actually more terrified by Smile.dog :)
IgorPartola almost 11 years ago

Really? Peter Thiel spends his time in this circle jerk? Let me give some life advice that will go unheeded, I am sure. If stuff like this gives you nightmares, *get off your fucking computer*. Get a different hobby so you can stop losing sleep over killer robots that you built just so they can torture you.