科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


International Scientific Report on the Safety of Advanced AI [pdf]

40 points, posted by jdkee 12 months ago

7 comments

consumer451, 12 months ago
I understand that this report is not really about AGI, but I would like to again raise my main concern: I am much more worried about the implications of the real threat of dumb humans using dumb "AI" in the near term than I am about the theoretical threat of AGI.

Example:

https://news.ycombinator.com/item?id=39944826

https://news.ycombinator.com/item?id=39918245
joebob42, 12 months ago
How would they know? We don't have general AI, but it's written as if they already know what it will be, how safe it will be, etc.

I think it's an important topic to discuss and consider, but this seems to be speaking with more knowledge and authority than seems reasonable to me.
ninetyninenine, 12 months ago
The biggest risk of AI can be characterized by one simple concept:

This very report could have been generated by AI. We can't fully tell.
wslh, 12 months ago
Clearly AI is unstoppable, as it is math. I don't get how politicians or intellectuals continue to argue without understanding that simple truth. I don't know whether AI will be comparable to human or organizational intelligence, but I know it is unstoppable.

Just ranting, but a potential way of defending against it is an approach similar to cryptography for quantum computers: think harder about whether we can excel at something even assuming (some) AI is there.
blackeyeblitzar, 12 months ago
I feel like a lot of the arguments listed in "4.1.2 Disinformation and manipulation of public opinion" apply in a non-AI world. In social media, sites like Hacker News are rare. In most places you'll see comments that are (purposely) sharing a subset of the whole truth in order to push a certain opinion. From my perspective, disinformation and manipulation of public opinion already happen "at scale". Most social media users belong to one political side or the other, there are lots of them, and almost all of them are either factually incorrect, lack nuance, or lack an understanding of their opponents' view.
thegrim33, 12 months ago
"At a time of unprecedented progress in AI development, this first publication restricts its focus to a type of AI that has advanced particularly rapidly in recent years: General-purpose AI, or AI that can perform a wide variety of tasks"

Is "general-purpose AI" supposed to be something different from "AGI" or from "General Artificial Intelligence"? Or is it yet another ambiguous, ill-defined term that means something different to every person? How many terms do we need?

It's funny that they claim that "general-purpose AI" has "advanced particularly rapidly" even though they didn't, and can't, define what it even is. They have a glossary of terms at the end, but don't bother to include an entry defining general-purpose AI, which the entire report is about. The closest thing they include for defining the term is "AI that can perform a wide variety of tasks".
hiAndrewQuinn, 12 months ago
Recursively self-improving AI, of the kind Nick Bostrom outlined in detail way back in his 2014 book *Superintelligence* and Dr. Omohundro outlined in brief in [1], is the only kind which poses a true existential threat. I don't get out of bed for people worrying about anything less when it comes to 'AI safety'.

On the topic: one potentially effective approach to stopping recursively self-improving AI from being developed is a private fine-based bounty system against those performing AI research in general. A simple example would be: "if you are caught doing AI research, you are to pay the people who caught you 10 times your yearly total comp in cash." Such a program would incur minimal policing costs and could easily scale to handle international threats. See [2] for a brief description.

If anyone wants to help me get into an econ PhD program or something where I could investigate this general class of bounty-driven incentives, feel free to drop me a line. I think it's really cool, but I don't know much about how PhDs work. There's actually nothing special about AI in regards to this approach; it could be applied to anything we fear might end up being a black-urn technology [3].

[1]: https://selfawaresystems.com/wp-content/uploads/2008/01/ai_drives_final.pdf

[2]: https://web.archive.org/web/20220709091749/https://virtual-instinct.xyz/

[3]: https://nickbostrom.com/papers/vulnerable.pdf