Ask HN: Strategies against AI voice and video scams?

1 point | by asar | 8 months ago
With the advancements in generative AI, especially for voice and video, I've been wondering for a while how to effectively protect against scams. For now I feel like I can personally still tell that a video or audio clip is generated/fake, but I'm getting increasingly worried that as these things develop it will become impossible to identify fakes.

What I'm currently thinking is to establish a code word in my family to at least protect against the scenario where a caller claims to be me (it's so easy to train a voice on recordings nowadays). I was wondering if the HN community can think of other ways to protect against this threat?

Looking at the recent realtime voice release of OpenAI and combining it with diffusion models, the opportunities for scammers are becoming endless, and I'm deeply worried that there are no real protections at this point.
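The family code word described above is essentially a shared-secret check, and one weakness of speaking the word aloud is that a scammer who records the call can replay it later. As an illustration only (the names, the HMAC approach, and the secret value are my own assumptions, not something proposed in the thread), a challenge–response variant lets the caller prove knowledge of the secret without ever saying it:

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch: the "code word" becomes a shared secret agreed in
# person. Instead of speaking it, the verifier issues a random challenge
# and the caller answers with an HMAC of it, so nothing replayable about
# the secret itself is ever transmitted.
SHARED_SECRET = b"agreed-in-person-never-spoken"  # assumed exchanged offline

def make_challenge() -> str:
    # The person receiving the suspicious call picks a fresh random value.
    return secrets.token_hex(4)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    # The caller computes a short tag proving they know the secret.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(respond(challenge, secret), response)

challenge = make_challenge()
answer = respond(challenge)
print(verify(challenge, answer))        # genuine caller
print(verify(challenge, "00000000"))    # impostor guessing
```

In practice a family would do this with a pre-agreed question-and-answer pair rather than hex digests; the sketch just shows why a fresh challenge per call defeats replay, which a static spoken code word does not.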

1 comment

gus_massa | 8 months ago
The problem is when they call at 3am in tears. The voice and story don't need to be accurate to be convincing.