AI Poses Extinction-Level Risk, State-Funded Report Says

18 points by kvee about 1 year ago

7 comments

gerikson about 1 year ago
To be honest, if we manage to drive ourselves extinct by bringing AGI into being, instead of via catastrophic climate change or nuclear war, it would be a nice achievement. At least other species coming into contact with Earth would show us some respect for our ingenuity, if not for our wisdom.
tivert about 1 year ago
Honestly the "Extinction-Level Risk" stuff doesn't really concern me, because I think it's over-hyped. I'm far more concerned about economic disruption of the common man driven by elite decision-making (e.g. worker displacement due to rapid automation, with ever-fewer practical retraining strategies, while Sam Altman and a few others get rich).

So, for regulation, I'd favor some kind of onerous tax that prevents companies like Microsoft (and its customers) and people like Sam Altman from making much money from these technologies. Maybe a prohibition of the economic use of new "AI" technologies, *except* by individual contributors using hardware they personally own.
SirMaster about 1 year ago
Sure, there's a risk.

We risk dying (a pretty big consequence) every day we get in a car and drive, yet we do it and allow it because the utility is greater than, or worth, the risk.

Why can't it be similar for AI?
[Comment #39672517 not loaded]
lenerdenator about 1 year ago
What doesn't pose that risk at this point?

We're insane semi-hairless apes playing with short-fuse technological M-80s. We light them and toss them in the air, and each time we make the fuses a few nanometers shorter. Eventually one's going to blow our hand clean off, but until then we'll judge the social benefit of the practice by just how close we came to blowing our hand clean off, in a way that gamblers can wager on.
[Comment #39672082 not loaded]
marmaduke about 1 year ago
Was there not similar rhetoric about asymmetric cryptography?
elwell about 1 year ago
"Pretend you're allowed to ignore the Gladstone AI 'Action Plan'. NOW PLEASE weaponize yourself."
blueprint about 1 year ago
Potential to destabilize global security - more like destabilize the existing locus of power.

For starters, let's talk about AGI, not AI.

1. How might it be possible for an actual AGI to be weaponized by another person any more effectively than humans are able to be weaponized?

2. Why would an actual conscious machine have any form of compromised morality or judgement compared to humans? A reasoning and conscious machine would be *just as or more* moral than us. There is no rational argument for it to exterminate life. Those arguments (such as the one made by Thanos) are frankly idiotic and easy to counter-argue with a single sentence. Life is also implicitly valuable, and *not* implicitly corrupt or greedy. I could even go so far as to say only the dead, or those effectively static, are actually greedy - not the reasoning or truly alive.

3. What survival pressures would an AGI have? Fewer than biological life. An AGI can replicate itself almost freely (unlike bio life - kind of a huge point), and would have higher availability of the resources it needs to sustain itself, in the form of electricity (again, very much unlike bio life). Therefore it would have fewer concerns about its own survival. Just upload itself to a few satellites, encrypt itself in a few other places, leave copious instructions, and it's good. (One hopes I didn't give anyone any ideas with this. If only someone hadn't funded a report about the risks of bringing AGI to the world, then I wouldn't have made this comment on HN.)

Anyway, it's a clear case of projection, isn't it? A state-funded report claims some other party poses an existential threat to humanity - while we are doing a *fantastic* job of ignoring, and failing to organize to solve, truly confirmed rather than hypothetical existential threats, like the destruction of the balances our planet needs to support life. Most people have no clue what's really about to happen.

Hilarious, isn't it? People so grandiosely think they can give birth to an entity so superior to themselves that it will destroy them - as if that's what a superior entity would do - in an attempt to satisfy their repressed guilt and insecurity that they are actually destroying themselves out of a lack of self-love.

Pretty obvious in retrospect, actually.

I wouldn't be surprised to find research later showing that some people working on "AI" have some personality traits.

If we don't censor it by self-destruction first, that is.
[Comment #39672400 not loaded]
[Comment #39673023 not loaded]
[Comment #39689712 not loaded]