With the advancements in generative AI, especially for voice and video, I've been wondering for a while how to effectively protect against scams. For now I feel like I can personally still tell when a video or audio clip is generated, but I'm getting increasingly worried that as these models improve it will become impossible to identify fakes.

What I'm currently thinking is to establish a code word in my family, to at least protect against the scenario where a caller claims to be me (it's so easy to train a voice on recordings nowadays). Can the HN community think of other ways to protect against this threat?

Looking at OpenAI's recent realtime voice release and combining it with diffusion models, the opportunities for scammers are becoming endless, and I'm deeply worried that there are no real protections at this point.
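For the more technical family members, I've been sketching a simple challenge-response on top of the shared secret, so the code word itself never has to be spoken on the call (an eavesdropper could otherwise capture and replay it). This is just a rough illustration; every name and value in it is a placeholder:

    # Challenge-response sketch: both sides hold FAMILY_SECRET, but it is
    # never said out loud. The receiver reads a random challenge, the
    # caller reads back the derived response, the receiver verifies it.
    import hashlib
    import hmac
    import secrets

    FAMILY_SECRET = b"replace-with-a-long-random-secret"  # placeholder

    def make_challenge() -> str:
        # Generated by the person receiving the call and read out loud.
        return secrets.token_hex(4)  # short enough to say over the phone

    def respond(challenge: str) -> str:
        # Computed by the caller; only the first six hex digits are read back.
        digest = hmac.new(FAMILY_SECRET, challenge.encode(), hashlib.sha256)
        return digest.hexdigest()[:6]

    def verify(challenge: str, response: str) -> bool:
        # The receiver recomputes the expected response and compares.
        return hmac.compare_digest(respond(challenge), response)

Obviously this only works with family members who'll actually run a script under pressure, which is why I'm also asking for lower-tech ideas.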
Something you have: show me your phone, your purse, your ID.

Something you know: tell me the dramatic thing that happened to you at school back in …

Something about you: shirt size/waist size, coffee/tea/soda, dislikes, etc.
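If everyone carries a phone anyway, the "something you have" factor can be a code from an authenticator app instead of a physical object you'd have to show on camera: provision the same secret on each family member's phone and have the caller read out the current six-digit code. A rough sketch of the standard TOTP check (RFC 6238), with a made-up secret and the usual 30-second time step:

    # "Something you have" as a TOTP code: the caller reads the current
    # six-digit code from their authenticator app, the receiver verifies it.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    SHARED_SECRET_B32 = "JBSWY3DPEHPK3PXP"  # example secret, replace it

    def totp(secret_b32: str, timestamp: float, step: int = 30) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(timestamp // step))
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{code % 1_000_000:06d}"

    def verify_spoken_code(spoken: str, window: int = 1) -> bool:
        # Accept adjacent time steps to allow for clock drift and the
        # time it takes to read the code out loud.
        now = time.time()
        return any(
            hmac.compare_digest(totp(SHARED_SECRET_B32, now + i * 30), spoken)
            for i in range(-window, window + 1)
        )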