科技回声
A tech news platform built with Next.js, providing global tech news and discussion.

© 2025 科技回声. All rights reserved.

Ask HN: Security in the Age of AI

2 points | by uma08 | 9 months ago
This is an interesting subject that almost feels like a Pandora's box to me.

How does one actually go about securing AI models, be they unimodal or multi-modal systems?

Is there a specific framework of thinking you'd recommend?

1 comment

WIZARD_MACHINE | 9 months ago
Never give them full autonomy over critical infrastructure, whether mono- or multi-modal. These models are great at speeding things up, but without human checks on their actions, errors will be exacerbated over time. Add in the fact that there is little verification of their interior logic, and allowing anything to be completely controlled by an AI is asking for trouble.

While this might work in the short term, over time the models will get better and humans lazier. Really we need a cultural shift in business and automation: one that makes the people using the models assess their actions, and one that breeds complete distrust of the AIs.

If no one gives the keys of power/destruction to the models, there are little to no problems. But currently, businesses and the workers within them view human action as both more error-prone and less reliable than action controlled by computers. So inevitably the models will be given more power as tasks are offloaded to autonomous agents with no checks in place.

Even if we accomplish the cultural and practical changes very generally outlined above, people will point to the fear of an AI "waking up" and then taking the keys to power itself. While there are many steps needed to get to that point, even if we take the outlined precautions there is still room for that to happen. To tackle that problem, I personally believe we should focus on narrow, specialized AIs that are completely disconnected from one another. That way we don't centralize enough computing power to run a "Super Intelligence", and the more separation there is from a centralized AI that could "trick" people through psychological attack vectors, the safer things are.

So in conclusion: don't give them nukes, change the way people view work and automation, and keep them small and far apart from one another.
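The "human checks over their actions" idea above can be sketched as an approval gate: the agent may only *propose* actions, and nothing executes until a human reviewer explicitly approves it. This is a minimal illustrative sketch, not from the thread; all names (`ProposedAction`, `run_with_approval`) are hypothetical.

```python
# Minimal human-in-the-loop approval gate: an agent proposes actions,
# but the effect is deferred until a reviewer approves the description.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ProposedAction:
    description: str            # human-readable summary shown to the reviewer
    execute: Callable[[], str]  # the actual effect, deferred until approval


def run_with_approval(action: ProposedAction,
                      approve: Callable[[str], bool]) -> Optional[str]:
    """Execute the action only if the reviewer approves; otherwise refuse."""
    if approve(action.description):
        return action.execute()
    return None  # rejected: the agent never touches the system


# Demo: a stand-in reviewer instead of a real human prompt.
restart = ProposedAction("restart web server", lambda: "restarted")
print(run_with_approval(restart, lambda desc: False))  # reviewer says no
print(run_with_approval(restart, lambda desc: True))   # reviewer says yes
```

In a real deployment the `approve` callback would be an interactive prompt or a ticketing step; the key design choice is that the agent holds a *description* of the action and a deferred callable, never direct access to the system.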