As concerning as this is, I'm wondering if LLMs like ChatGPT can be more powerful fraud-detectors than fraud-makers.

Unlike scam detection and filtering methods based on pattern recognition or reputation, LLMs can analyze the content of a scam rather than its artifacts. By that I mean, you can ask the AI "does this make sense, or is there reason to believe it's a scam? If so, how can we verify that it's not a scam, knowing that deepfakes exist?" rather than "does this fit x y z pattern that we have seen before?"

I'm hoping, and I may be wrong, that most scams rely on emotional hooks, unsuspecting victims, or desperate victims. LLMs aren't vulnerable in the same way.

Maybe.
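A rough sketch of what I have in mind, using the OpenAI Python client (the model name, prompt wording, and example message are all just illustrative; this is an idea, not a vetted implementation):

    # Sketch: asking an LLM to reason about a suspicious message.
    # Assumes the `openai` package and an API key in OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    def assess_message(message: str) -> str:
        """Ask the model whether a message looks like a scam and how to verify it."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model would do
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You analyze messages for signs of fraud. Consider urgency, "
                        "emotional pressure, requests for money or secrecy, and "
                        "impersonation. Since deepfaked voices exist, suggest "
                        "out-of-band verification steps rather than trusting the medium."
                    ),
                },
                {
                    "role": "user",
                    "content": "Does this make sense, or is it likely a scam?\n\n" + message,
                },
            ],
        )
        return response.choices[0].message.content

    print(assess_message(
        "Grandma, it's me! I'm stuck abroad and need you to wire $2,000 "
        "to this account right now. Please don't tell Mom."
    ))

The key difference from pattern matching is that the model is asked to reason about plausibility and to propose verification steps, not to match the message against known scam templates.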
I wonder if some of the AI fraud will drive people to more in-person interactions.

If you meet somebody in person, you know they are not a bot, and you know what they look and sound like directly.
Some more descriptive terms might be helpful. "AI" sounds like something that has agency and makes decisions. This is about ML models that generate speech, which is a different thing altogether.
While I am aware that it is possible to use software to clone someone's voice, I am uncertain whether it is currently feasible to use the cloned voice in real time.

I am curious whether these scammers are pre-recording conversations and playing them back, or whether they are actually able to speak through the cloned voice in real time.
And with a lot of people uploading reviews or DIY videos to YouTube, it's going to be pretty easy to harvest enough of a voice print. If you have written comments on forums and other social media, such as right here on HN, they also have your writing style to work with.
The election season is going to be out of control.

Fake ads for your opponent in their voice spouting bonkers positions and nonsense. Everywhere. Online, voicemails, far as the eye can see.

It's gonna be rough.
It has already gotten people fired. Copywriters are being replaced by a manager getting an AI to write bland drivel, and the same is happening to model designers for software (games). It's only going to expand.
First, this is clickbait. Second, if I am asked to transfer money by, say, a fake relative, I would surely do some verbal grilling first. Are they saying the listening side of the AI can perfectly decipher all my questions and give proper answers? I doubt it. And even then, I would be making the bank transfer to my relative's account. How would the scammer actually get the money?

As for a bank manager transferring a hefty sum without proper verification: the problem there is not the scammers but the manager.