TechEcho
Gradual Disempowerment: How Even Incremental AI Progress Poses Existential Risks

87 points by mychaelangelo, 4 months ago

12 comments

yapyap, 4 months ago

I implore everyone to watch the movie 'Money Monster', not only because it's a great movie but also because I think it has a minor plot point that basically predicts how AI will be used.

(small spoiler)

In Money Monster, it turns out that the hedge fund manager who blames his state-of-the-art AI bot for malfunctioning and rapidly selling off a certain stock, tanking it in the process, caused the "machine code error" himself. He claims he can't explain what the error was or how or why it happened because "he didn't program the trading bot, some guy in Asia did." But as it turns out, he was behind it in some way.

I feel like using AI to abstract away blame even further when something goes wrong will be a big thing, even when it secretly wasn't the fault of the AI (ML) or of whoever trained the thing.
alephnerd, 4 months ago

While this is a well written paper, I'm not sure it's really contextualizing realistic risks that may arise from AI.

It feels like a lot of "Existential AI Risk" types are divorced from the physical aspects of maintaining software, e.g. your model needs hardware to compute, and you need cell towers and fiber optic cables to transmit.

It feels like they always anthropomorphize AI as some sort of "God".

The "AI Powered States" aspect is definitely pure sci-fi. Technocratic states have been attempted, and econometrics uses literally the same mathematical models as AI/ML (Shapley values are an econometrics tool, Optimization Theory itself got its start thanks to GosPlan and other attempts at modeling and forecasting economic activity, etc).

As we've seen with the Zizian cult, very smart people can fall into the fallacy trap of treating AI as some omnipotent being that needs to be either destroyed or catered to.
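The Shapley-value aside above is easy to make concrete: the attribution formula econometricians use for cooperative games is the same one explainability tools like SHAP apply to model features. A minimal sketch of the exact computation; the three-player game and its payoff numbers are invented purely for illustration:

```python
import math
from itertools import permutations

# Characteristic function of a 3-player cooperative game.
# v(S) is the payoff coalition S can achieve (values are made up).
v = {
    frozenset(): 0,
    frozenset({"a"}): 10,
    frozenset({"b"}): 20,
    frozenset({"c"}): 30,
    frozenset({"a", "b"}): 40,
    frozenset({"a", "c"}): 50,
    frozenset({"b", "c"}): 60,
    frozenset({"a", "b", "c"}): 90,
}

def shapley(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over every order in which the coalition can form."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

print(shapley(("a", "b", "c"), v))  # {'a': 20.0, 'b': 30.0, 'c': 40.0}
```

Note the values sum to v(full coalition) = 90, the "efficiency" property; in SHAP the players are a model's input features and v(S) is the model's output with only features in S revealed.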
randomcatuser, 4 months ago

Another thing I don't like about this paper is how it wraps real, interesting questions in the larger framework of "existential risk" (which I don't... really think exists).

For example:

> "Instead of merely(!) aligning a single, powerful AI system, we need to align one or several complex systems that are at risk of collectively drifting away from human interests. This drift can occur even while each individual AI system successfully follows the local specification of its goals"

Well yes, designing systems and incentives is a hard problem. But maybe we can specify a concrete instance of this, instead of "what if one day it goes rogue!"

In our society there are already many superhuman AI systems (in the form of companies), and somehow they successfully contribute to our wellbeing! In fact, life is amazing (even for dumb people in society, who have equal rights). And the reason is that we have categorized the ways companies go rogue (monopoly, extortion, etc.) and responded adequately.

So "extinction by industrial dehumanization" reads a lot like "extinction by cotton mills". I mean, look on the bright side!
tiborsaas, 4 months ago

If we are speculating on existential risks, then consider Satya Nadella's take on the future of software: https://www.youtube.com/watch?v=a_RjOhCkhvQ

It's quite creepy that in his view all tools, features, access, and capabilities will be accessible to an AI agent which can just do the task. This sounds fine in a narrow scope, but if it's deployed at the scale of Windows, it suddenly becomes a lot scarier. Don't just think of home users; businesses and institutions will be running these systems too.

The core problem is that we can't be sure what a new generation of AI models will be capable of after a few years of iteration. They might find it trivial to control our machines, which would give them unprecedented access to pursue an agenda. Malware exists today that does this, but it can be spotted, isolated, and analyzed. When the OS by design welcomes these attacks, there's probably nothing we can do.

But please tell me I've consumed too much sci-fi.
etaioinshrdlu, 4 months ago

Bureaucratic systems have been able to fail like this for a long time: https://news.ycombinator.com/item?id=17350645

Now we have the tools to make it happen more often.
whodidntante, 4 months ago

Once PRISM becomes R/W, you will not even know if what you read/hear/see on the internet is actually what others have written/said/created. You will interact with the world as the government wants you to, tailored to each individual.

Each time we choose to allow an AI to "improve" what we write/create, and each time we choose to allow AI to "summarize" what we read/consume, we take another step along this road. Eventually, it will be a simple "optimization" to let AI do this at the protocol level, making all of our lives "easier" and more "efficient".

Of course, I am not sure if anyone will actually see this comment, or if this entire thread is an AI hallucination keeping me managed and docile.
Jordan-117, 4 months ago

This just underscores the feeling that most of the problems people have with AI are actually problems with rampant capitalism. Negative externalities, regulatory capture, the tragedy of the commons: AI merely intensifies them.

I've heard it said that corporations are in many ways the forerunners of the singularity, able to act with superhuman effectiveness and marshal resources on a world-altering scale in pursuit of goals that don't necessarily align with societal welfare. This paper paints a disturbing picture of what it might look like if those paperclip- (profit-) maximizing systems become fully untethered from human concerns.

I was always a little skeptical of the SkyNet model of AI risk, thinking the danger lay more in giving an AI owner class unchecked power and no need to care about the wants or needs of a disempowered labor class (see Swanwick's "Radiant Doors" for an unsettling portrayal of this). But this scenario, where capitalism takes on a mind of its own and becomes autonomous and even predatory, feels even bleaker. It reminds me of Nick Land's dark visions of the posthuman technological future (which he's weirdly eager to see, for some reason).
upghost, 4 months ago

Obviously a scary AI future means we should hand full regulatory capture to a handful of wealthy corporations. You know. For our own safety.
tehjoker, 4 months ago

I know a guy who wrote about this happening under capitalism (Karl Marx): how there is this system that disempowers human decision making... "all that is solid melts into air..."
randomcatuser, 4 months ago

I find this argument a bit weak.

For example, regarding human marginalization in states, it's just rehashing basic tropes about government (tl;dr: technology exacerbates the creation of surveillance states):

> "If the creation and interpretation of laws becomes far more complex, it may become much harder for humans to even interact with legislation and the legal system directly"

Well, duh. That's why as soon as we notice these things, we pass laws against them. AI isn't posing the "existential risk"; the way we set up our systems is. There are lots of dictators, coups, and surveillance states today. And yet there are more places in which society functions decently well.

So overall, I'm of the opinion that people will adapt and make things better for themselves. All this anthropomorphization of "the state" and "AI" obscures the basic premise, which is that we created all this stuff, and we can (and have) modified the system to suit human flourishing.
marstall, 4 months ago

Or... our minds and bodies will quite rapidly adapt!
brookst, 4 months ago

Maybe this is peak AI panic?

It seems wild that someone could unironically talk about tools "disempowering" their users. Like, I get it: C disempowers programmers by shielding them from assembly language, Cuisinarts disempower chefs, and airplanes disempower travelers by not making them walk through each territory.

But... isn't that a pretty tortured interpretation of tool use? Doesn't it lead to "the Stone Age was a mistake", and right back to Douglas Adams' "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place"?

I get that AI can be scary, and hey, it might kill us all and that would be bad, but this particular framing is just embarrassing.