As concerning as this is, I'm wondering if LLMs like ChatGPT can be more powerful fraud-detectors than fraud-makers.

Unlike scam detection and filtering methods based on pattern recognition or reputation, LLMs can analyze the content of a scam rather than its artifacts. By that I mean you can ask the AI "does this make sense, or is there reason to believe it's a scam? If so, how can we verify that it's not a scam, knowing that deepfakes exist?" rather than "does this fit x y z pattern that we have seen before?"

I'm hoping, and I may be wrong, that most scams rely on emotional hooks and unsuspecting or desperate victims. LLMs aren't vulnerable in the same way.

Maybe.
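
To make the idea concrete, here's a minimal sketch of what content-level screening might look like, assuming the official OpenAI Python client. The model name, prompt wording, screen_message helper, and example message are all illustrative assumptions on my part, not a tested product:

    # Minimal sketch: ask an LLM to reason about a message's content,
    # rather than match it against known scam patterns.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a fraud-analysis assistant. Given a message, explain whether it "
        "makes sense on its own terms, list any signs it may be a scam, and "
        "suggest how the recipient could independently verify the sender, "
        "keeping in mind that voices and videos can be deepfaked."
    )

    def screen_message(message: str) -> str:
        """Ask the model whether the message makes sense and how to verify it."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": message},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        suspicious = (
            "Hi Grandma, it's me. I'm in trouble and need you to wire $2,000 "
            "right away. Please don't tell Mom, and don't call my usual number."
        )
        print(screen_message(suspicious))

The point of the prompt is that it asks the model to reason about plausibility and suggest verification steps, not to check the message against x y z patterns we've seen before.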