
Preventing an AI-Related Catastrophe

15 points by vardhanw over 2 years ago

2 comments

alpineidyll3 over 2 years ago
People often think of AGI as an AI which can learn to complete arbitrary tasks better than humans.

Given that we can already produce "an" AI which beats humans at almost every task we come up with (besides synthesis of broad abstract reasoning, à la Chollet), this is probably the only definition which is meaningful in the sense that it isn't already here.

Why would evading 'alignment' not also be such a task AGI does better? AGI is like the nuclear deterrent. It's a technology that's coming, inevitably, and a thing which is beyond any amount of philosophical navel-gazing to control or prevent.

AGIs will not be magical; they will have energy demands, construction costs, and environmental limitations.

I think it will be much more useful to ask how people coexist, and what role they serve in the post-AGI world, than it is to make statements about interpretability or alignment, which will definitely seem silly in retrospect. The machinations of an AGI will be as impossible to understand as human consciousness itself.
jdpel over 2 years ago
AI-related catastrophes are a sexy example of catastrophes caused by people ceding control of the world to automated systems and the bureaucratic processes of huge organizations (governments and corporations and the complex macroeconomy as a whole). AI itself isn't a major factor.
Comment #32654650 not loaded.