TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Ask HN: What do you think of actors spreading malicious packages with ChatGPT?

5 points, by DantesKite, almost 2 years ago
Summary:

"* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves"

https://twitter.com/llm_sec/status/1667573374426701824?s=20

Description of the attack:

We have identified a new malicious-package-spreading technique we call "AI package hallucination."

The technique relies on the fact that ChatGPT, and likely other generative AI platforms, sometimes answers questions with hallucinated sources, links, blogs, and statistics. It will even generate questionable fixes for CVEs and, in this specific case, offer links to coding libraries that don't actually exist.

Using this technique, an attacker starts by formulating a question asking ChatGPT for a package that will solve a coding problem. ChatGPT then responds with multiple packages, some of which may not exist. This is where things get dangerous: when ChatGPT recommends packages that are not published in a legitimate package repository (e.g. npmjs, PyPI, etc.).

When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place. The next time a user asks a similar question, they may receive a recommendation from ChatGPT to use the now-existing malicious package. We recreated this scenario in the proof of concept below using ChatGPT 3.5.

https://vulcan.io/blog/ai-hallucinations-package-risk

- - -

From a cybersecurity perspective, it seems like a fairly interesting technique for spreading malware on a greater scale.
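The attack steps quoted above suggest one simple defensive check: before installing anything an LLM-generated snippet imports, diff its imports against the set of packages you have actually vetted. Below is a minimal sketch in Python; the `unvetted_imports` helper, the allowlist, and the `arvix_summarizer` package name are illustrative assumptions, not from the post. A real pipeline would additionally confirm each flagged name against the registry itself (for PyPI, the JSON endpoint `https://pypi.org/pypi/<name>/json` returns 404 for unpublished names).

```python
import ast


def extract_imports(source: str) -> set[str]:
    """Collect the top-level module names a code snippet imports."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                names.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            # Skip relative imports (level > 0); they can't be registry packages.
            if node.module and node.level == 0:
                names.add(node.module.split(".")[0])
    return names


def unvetted_imports(source: str, known_good: set[str]) -> set[str]:
    """Imports absent from the reviewed allowlist -- hallucination candidates."""
    return extract_imports(source) - known_good


# Hypothetical LLM output: one real package, one plausible-looking unknown.
llm_snippet = """
import requests
from arvix_summarizer import summarize
"""

print(sorted(unvetted_imports(llm_snippet, {"requests", "numpy"})))
# → ['arvix_summarizer']
```

Flagged names are exactly the ones an attacker could squat on, so they deserve a manual registry lookup before `pip install`.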

no comments
