TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Ask HN: Security in the Age of AI

2 points by uma08 9 months ago

This is an interesting subject that almost feels like a Pandora's box to me.

How does one actually go about securing AI models, be they unimodal or multi-modal systems?

Is there a specific framework of thinking you'd recommend?

1 comment

WIZARD_MACHINE 9 months ago

Never give them full autonomy over critical infrastructure, whether they are unimodal or multi-modal. These models are great at speeding things up, but without human checks on their actions, errors will compound over time. Add to that the fact that there is little verification of their internal logic, and letting anything be completely controlled by an AI is asking for trouble.

While this might work in the short term, over time the models will get better and humans lazier. What we really need is a cultural shift in how businesses approach automation: one that makes the people using these models assess their actions, and one that breeds complete distrust of the AIs.

If no one hands the keys of power/destruction to the models, there are few or no problems. But currently, businesses and their workers view human action as more error-prone and less reliable than computer-controlled action. So inevitably models will be given more power, with tasks offloaded to autonomous agents with no checks in place.

Even if we accomplish the cultural and practical changes outlined above, people will point to the fear of an AI "waking up" and taking the keys to power itself. While many steps are needed to reach that point, even with the outlined precautions there is still room for it to happen. To tackle that problem, I personally believe we should focus on narrow, specialized AIs that are completely disconnected from one another. That way we never centralize enough computing power to run a "superintelligence". The more obfuscation there is between any centralized AI and the people it could "trick" through psychological attack vectors, the safer the system becomes.

So in conclusion: don't give them nukes, change the way people view work and automation, and keep them small and far apart from one another.
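The "human checks over their actions" idea from the comment above can be made concrete as a human-in-the-loop gate: the agent may propose actions, but anything touching critical systems is blocked until a human reviewer approves it. The sketch below is illustrative only and not from the thread; `ProposedAction`, `HumanGate`, and the reviewer callback are hypothetical names invented for this example.

```python
# Illustrative sketch of a human-in-the-loop approval gate for AI-proposed
# actions. Critical actions are blocked unless a human reviewer approves them;
# every decision is recorded in an audit log.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    description: str   # what the model wants to do
    critical: bool     # does it touch critical infrastructure?

@dataclass
class HumanGate:
    """Executes non-critical actions freely; critical ones need approval."""
    approve: Callable[[ProposedAction], bool]  # human reviewer callback
    audit_log: list = field(default_factory=list)

    def execute(self, action: ProposedAction, run: Callable[[], str]) -> str:
        if action.critical and not self.approve(action):
            self.audit_log.append(("rejected", action.description))
            return "blocked: awaiting human approval"
        self.audit_log.append(("executed", action.description))
        return run()

# Usage: a reviewer that rejects everything critical by default -- the
# "complete distrust" stance the comment argues for.
gate = HumanGate(approve=lambda a: False)
result = gate.execute(
    ProposedAction("restart power-grid controller", critical=True),
    run=lambda: "restarted",
)
print(result)  # blocked: awaiting human approval
```

The design point is that the gate wraps execution itself, so there is no code path where a critical action runs without passing through the reviewer callback and the audit log.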