ChatGPT helps cybercriminals with grammar, producing higher-quality phishing emails. Some have trained a model on malware-related data and sell access to it to aid malware development and email composition.

Summarized the bloated thing in two sentences.
Garbage site. Garbage popups. Garbage empty blog post.
I saw a Twitter thread about "WormGPT" a few days ago and was annoyed at how much engagement it seemed to get, given what an obvious nothingburger it was. The few examples of its code output were laughably bad.

Hackforums has been the place where skiddies sell overhyped shit to other skiddies for well over a decade; I can guarantee that absolutely no one there is training their own AI. Everything the article mentions, GPT-3.5 Turbo or GPT-4 can already do, and it wouldn't surprise me one bit if most of the stuff being sold at HF turned out to be just glorified frontends for GPT-3.5 Turbo or some open-source LLM.
Did the author huff glue before writing this?

> The results were unsettling.

The provided example basically says: "Hi, I have no pre-existing relationship with you, but your website makes it look like you are the person who pays the bills. Give me money, please!"
> GPT-J is the LLM, the old one from 2021

That's very interesting.

The infamous Pygmalion 6B is a GPT-J finetune, predating the LLM craze. Yet it's decent in its roleplaying niche.

But the LLaMA 13B version, with instruct finetuning, is *massively* better, even with dataset errors that allegedly hurt its performance. In fact, a chat with Metharme 13B, where it made some very introspective logical jumps, was my first real LLM "Wow!" moment.

And Airoboros-Chronos 33B is leagues ahead of that.

If someone on that forum has a 3090 and trains LLaMA 33B on that dataset plus an instruct dataset off Hugging Face... yeah, that would be terrifying.
This is basically about bad actors using LLMs to generate better emails; however, you could also automate actual conversations at scale, which is what I thought this article was going to be about.
There is a wider problem here: companies have almost no internal firewalls. Yes, it's great that the CEO of company X can email a low-level employee, but then how do we know it really is the CEO?

Secure messaging, even the much-maligned GPG (see tptacek), would simply stop this attack (#). And it would stop most "cybercrime," which appears to be mostly identity theft, which is another name for impersonation for fraudulent gain.

We can't conduct all business activity over WhatsApp or Signal or Whisper.

But can we make email (more) secure? Can we create standard business messages that can be sent and received by anyone, and signed? Will that help? Will it be viable? I am fascinated because that was kind of the dream for the past twenty years and it went nowhere, but maybe crime will provide the impetus.

(#) A non-technical friend lost thousands of pounds because their small company used non-2FA Gmail, was compromised, and then "he" sent half a dozen emails to clients asking them to pay genuine invoices, for work actually done, into their "new" business account. Some kind of public-key verification would stop that. But what kind?
It looks like an uncensored GPTQ model, which is available to pretty much everyone, whether you're a whitehat, a blackhat, making the world a better place to live, or a domestic terrorist. I don't see anything outstanding in this post.

Somebody used an uncensored model to generate emails; so what? Tomorrow criminals will use it to break into cars, and the next day terrorists will use it for a better-planned attack.

Yes, all kinds of folks will/can use AI to get better at what they already do.
Yes, truly groundbreaking output.

> Greetings, it's the CEO. Pay this invoice urgently.
> Kind regards, the CEO

It's just skiddiots scamming skiddiots, as it's always been.