TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Stuart Russell calls for new approach for AI, a ‘civilization-ending’ technology

2 points by rdhyee about 2 years ago

2 comments

13years about 2 years ago

> The last "civilization-ending technology" – atomic energy – has been the subject of intense governance and extreme care on behalf of its engineers, Russell said. Even less charged technology fields, like aviation, are meticulously regulated. AI should be, too, he said.

It is hard to conceive that there could be successful regulation of this space. Atomic energy has a very high barrier to entry due to the physics involved.

AI is already available at the consumer level, and with the predicted increase in computing power, if we reach AGI it will likely be in the hands of everyone.

And the best we can do is a high-level conceptual plan of "alignment" that nobody can actually define and that has no real science behind it as of yet. Additionally, I would argue that if anything is almost provable, it is that alignment theory is itself a paradox, which I describe in more detail here:

https://dakara.substack.com/p/ai-singularity-the-hubris-trap
smoldesu about 2 years ago

> If we pursue [our current approach], then we will eventually lose control over the machines.

What could he possibly mean by this? How does our status quo threaten to make uncontrollable computers?

> AI should instead be designed to further human interests, to recognize it doesn't know what those interests are, and to seek evidence to identify and act upon those interests

Or just... don't give it control over things with perilous disaster cases.

> It also needs to be developed in a "well-founded" way, with a rigorous understanding of all the components and how they work together. That will allow us to predict how these systems will behave, he said.

> "I just don't know any other way to achieve enough confidence in the behavior of these systems," Russell said.

This is so unbelievably wishy-washy. You could copy those last two sentences, tell me it was a quote about UNIX error handling in the '80s, and I wouldn't bat an eye. If you want to be alarmist and call something "civilization-ending", you should at least have evidence to justify how it might do that. If this is our industry-leading rumination on the state of AI, maybe we do have reason to worry.
Comment #35531980 not loaded.