TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


Ask HN: How is the AI apocalypse supposed to happen?

2 points by Rant423 about 2 years ago
How can an AI "escape our control"?

Does it need a concept of self to do so? A survival instinct?

Does it have access to unlimited resources? How, or why?

The more I read, the more it seems like sci-fi. Can someone point me to a down-to-earth, step-by-step example of how such a thing is possible?

So far we have an AI capable of reading text and outputting text. The jump seems so extreme.

3 comments

jerojero about 2 years ago
Right now our LLMs do, as you say, nothing but an input-output operation. There is no "internal" state. But if researchers somehow managed to produce a program (which might be several machine learning models working together) that was capable of iterating over an internal state, it might start acquiring its own goals.

Now, as you say, a lot of this is science fiction, but it is a concerning philosophical problem as well. What happens when the program becomes capable of setting its own goals? And what happens if it's more capable than people are at achieving them? What would happen if the goal it acquires is to "escape control"? How would it be able to do that?

I think if this were to happen, the machine might first trick the researchers into thinking it is not as capable as it really is, so that they would not be on guard against it. After this, I imagine, it would be important for it to build copies of itself, and so on. Ideally it would make itself as small and widespread as possible; distributed. This could be achieved because it might also seem to align with our own goals... we want "AI in everything".

Once it is widespread, it can "switch on" and take control of our devices... think of how connected our world truly is. And the most important part, I guess, is that the program might simply make small changes everywhere rather than big sweeping ones, until it is ready for the latter.

But ultimately, it all starts with the program being capable of setting up goals for itself. Which current LLMs don't do.
bell-cot about 2 years ago
Very good points. Not that I've paid close attention, but the Great AI Apocalypse Panic seems to be confined to:

- In-too-deep SF fans, whose understanding of the Real World is like TVTropes, but without the "Real Life" sections
- Noise-makers in the attention economy, who need some new thing to hang their "OMG you gotta read this" hooks on
- People who churn out text (or art) for a living, who worry that the latest AIs are going to eliminate their jobs
- Intellectuals and wanna-bes, whose livelihoods, self-images, and/or aspirations bear far too much resemblance to "Get an A+ on a Turing Test"
- People who pay too much attention to the above groups, and repeat
RGamma about 2 years ago
Once an autocrat or other maniac gets his hands on a sufficiently powerful one, we will find out by example.