I made a purchase yesterday from Meta (Oculus). A few minutes after payment, I received an email asking me to click to confirm it was me.

It came from verify@verification.metamail.com, with alert@nofraud.com in cc. All red flags for phishing.

It had all of the purchase information, so unless a malicious actor had infiltrated Meta's servers, it had to be genuine. And after a bit of googling, it was. But why do they do such things? I would expect better from Meta.
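What I ended up doing by hand is roughly the check below. It's only a sketch, and the allowlist is my own guess at plausible Meta domains rather than any official list, which is exactly the problem: nothing in the email tells you which domains are supposed to be legitimate.

    # Sketch: flag senders whose domain is not (a subdomain of) a domain you already trust.
    # TRUSTED_DOMAINS is illustrative only; I do not know Meta's real sending domains.
    TRUSTED_DOMAINS = {"meta.com", "oculus.com"}

    def sender_domain(address: str) -> str:
        """Return the domain part of an email address, lowercased."""
        return address.rsplit("@", 1)[-1].lower()

    def is_trusted(address: str) -> bool:
        """True if the sender's domain is a trusted domain or a subdomain of one."""
        domain = sender_domain(address)
        return any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS)

    print(is_trusted("verify@verification.metamail.com"))  # False: metamail.com is not on the list
    print(is_trusted("security@mail.meta.com"))            # True: a subdomain of meta.com

The point is that metamail.com fails a check like this even though the email was genuine, and that's on Meta, not on the recipient.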
That email security, after thirty years, still depends on individuals having the wisdom not to click the wrong link is appalling.

The situation involves institutions happy to add opaque links to their emails as part of their workflow. What could change this? All I can imagine is state regulation, but that also seems implausible.
This lines up well with the success rates I have seen from expert phishers. When I worked at a certain well-known company with strong security, a demon called Karla would succeed at spearphishing a bit over 50% of the security team.

AI now means much less skilled people can be as good as she was. Karla as a Service. We are doomed.
They built their phishing emails using data scraped from public profiles. Fascinating.

I have to wonder if, in the near future, we're going to have a much higher perceived cost for online social media usage. Problems we're already seeing:

- AI turning clothed photos into the opposite [0]
- AI mimicking a person's voice, given enough reference material [1]
- Scammers impersonating software engineers in job interviews, after viewing their LinkedIn or GitHub profiles [2]
- Fraudsters using hacked GitHub accounts to trick other developers into downloading/cloning malicious arbitrary code [3]
- AI training on publicly available text, photo, and video, to the surprise of content creators (but arguably fair use) [4]
- AI spamming GitHub issues to try to claim bug bounties [5]

All of this probably sounds like a "well, duh" to some of the more privacy- and security-savvy here, but I still think it has created a notable shift from the tech-optimism that ran from 2012-2018 or so. These problems all existed then, too, but with less frequency. Now, it's a full-pressure firehose.

[0]: https://www.wsj.com/politics/policy/teen-deepfake-ai-nudes-bill-ted-cruz-amy-klobuchar-3106eda0
[1]: https://www.fcc.gov/consumers/guides/deep-fake-audio-and-video-links-make-robocalls-and-scam-texts-harder-spot
[2]: https://connortumbleson.com/2022/09/19/someone-is-pretending-to-be-me/
[3]: https://it.ucsf.edu/aug-2023-impersonation-attacks-target-github-developers
[4]: https://creativecommons.org/2023/02/17/fair-use-training-generative-ai/
[5]: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/
While I broadly agree with the concerns about using LLMs for "commoditized", large-scale phishing, isn't the study a bit lacking? Specifically, "click-through" is a pretty poor metric for success.

If I receive a unique / targeted phishing email, I sure will check it out to understand what's going on and what they're after. That doesn't necessarily mean I'm falling for the actual scam.
This is one of the terrifying, probably already happening threats presented by current LLMs.

Social engineering (and I include spearphishing) has always been powerful and hard to mitigate. Now it can be done automatically, at low cost.
How did they generate these? If I try with ChatGPT, it refuses, citing a possible violation of their content policy. Even when I tell it that this is for me personally and just for a test (which obviously I could be pretending, but it does know who I am), it still refuses.
If the study was done with target consent, it might be biased toward inflated click-through rates, since the targets would be expecting benign, well-targeted spear-phishing messages.

If it was done without target consent, it would certainly be unethical.
It's probably more a reflection on me than on the authors, but one thing that stood out for me in this paper is a spelling mistake in the conclusion ("spar phishing"), which immediately made it come across as poorly reviewed and got me wondering whether there are other mistakes I lack the expertise to identify.
I’ve always figured those guardrails wouldn’t really hold up, but hearing that AI-based phishing can be 50 times more cost-effective than manual attacks is a serious wake-up call. We might have to rethink everything from spam filtering to overall threat detection to step up our defenses against AI.
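As a tiny illustration of the baseline we'd be building on, here's a sketch of one existing spam-filtering signal: whether the sending domain even publishes a DMARC policy. It assumes the third-party dnspython package, and of course a determined attacker can register a lookalike domain with a perfectly valid policy.

    # Sketch: look up a domain's DMARC policy as one (weak) filtering signal.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def dmarc_policy(domain: str):
        """Return the raw DMARC TXT record for a domain, or None if it publishes none."""
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            return None
        for rdata in answers:
            txt = b"".join(rdata.strings).decode()
            if txt.lower().startswith("v=dmarc1"):
                return txt
        return None

    print(dmarc_policy("example.com"))  # prints the policy string, or None if the domain has no DMARC record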
We had an email come in from a pension-combining processor. The URL they gave for adding information about someone's pension was similar to:

employer.git.pension-details.vercell.app

Why do these companies make this stuff so hard!?
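For anyone skimming hostnames the way most recipients do, here's a rough sketch of pulling out the only part of that URL that says who controls it. The last-two-labels heuristic is a simplification; real code should use the Public Suffix List (for example via the tldextract package) so suffixes like .co.uk are handled correctly.

    # Sketch: a naive "registrable domain" extractor to show who actually controls a link.
    from urllib.parse import urlparse

    def naive_registrable_domain(url: str) -> str:
        """Last two labels of the hostname, a rough stand-in for the registrable domain."""
        host = urlparse(url).hostname or ""
        labels = host.split(".")
        return ".".join(labels[-2:]) if len(labels) >= 2 else host

    print(naive_registrable_domain("https://employer.git.pension-details.vercell.app"))
    # "vercell.app": none of the reassuring-looking subdomains prove any link to the pension processor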
I believe I was the target of employment-flavored spear phishing a few months ago. Could have been a researcher like the OP.

- 3 new email chains from different sources in a couple of weeks, all similar inquiries to see if I was interested in work (I wasn't at the time, and I receive these very rarely)
- escalating specificity, all referencing my online presence, the third of which I was thinking about a month later because it hit my interests squarely
- only the third acknowledged my polite declining
- for the third, a month after, the email and website were offline
- the inquiries were quite restrained, having no links, only asking if I was interested, and followed up tersely with an open door to my declining

I have no idea what's authentic online anymore, and I think it's dangerous to operate your online life with the belief that you can discern malicious written communications with any certainty, without very strong signals like known domains. Even realtime video content is going to be a problem eventually.

I suppose we'll continue to see VPN sponsorships prop up a disproportionate share of the creator economy.

In other news, Google routed my mom to a misleading passport renewal service. She didn't know to look for .gov. Oh well.
It's worth noting that "success" here is getting the target to click a link, and not (for example) handing over personal information or credentials.
Imagine if models were trained for this purpose using OSINT and reinforcement learning, instead of repurposing a general model with generic prompts against a somewhat safeguarded LLM.

That's where we're headed. Bad actors paying for DDoS attacks is more or less mainstream these days. Meanwhile, the success rate for phishing attacks is incredibly high and the damage is often immense.

I wonder what the price for AI-targeted phishing attacks would be. Automated voice-impersonation attempts at social engineering, smishing, emails pretending to be from customers, partners, etc. I bet it could be very lucrative. I could imagine a motivated high-schooler pulling off each of those sorts of "services" in a country with lax enough laws. Couple those with traditional and modern attack vectors and wow, it could be really interesting.
"Look, humans will adapt to the ever-increasing and accelerating nightmares we invent. They always have before. Technology isn't inherently evil, its how it is used that can be evil, its not our fault that we make it so accessible and cheap for evil people to use. No, we can't build safeguards, the efficient market hypothesis leaves no room for that."
This research actually demonstrates that AI will reduce the phishing threat long-term, not increase it. Yes, the 50x cost reduction is scary, but it also completely commoditizes the attack vector, and once phishing is mass-produced by the same handful of models, defenders can train filters against that same commodity output at scale.