
Ask HN: What do you think of actors spreading malicious packages with ChatGPT?

5 points · by DantesKite · almost 2 years ago
Summary:

"* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves"

https://twitter.com/llm_sec/status/1667573374426701824?s=20

Description of Attack:

We have identified a new malicious package spreading technique we call "AI package hallucination."

The technique relies on the fact that ChatGPT, and likely other generative AI platforms, sometimes answers questions with hallucinated sources, links, blogs and statistics. It will even generate questionable fixes to CVEs, and – in this specific case – offer links to coding libraries that don't actually exist.

Using this technique, an attacker starts by formulating a question asking ChatGPT for a package that will solve a coding problem. ChatGPT then responds with multiple packages, some of which may not exist. This is where things get dangerous: when ChatGPT recommends packages that are not published in a legitimate package repository (e.g. npmjs, PyPI, etc.).

When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place. The next time a user asks a similar question, they may receive a recommendation from ChatGPT to use the now-existing malicious package. We recreated this scenario in the proof of concept below using ChatGPT 3.5.

https://vulcan.io/blog/ai-hallucinations-package-risk

- - -

From a cybersecurity perspective, it seems like a fairly interesting technique for spreading malware on a greater scale.
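To make the gap the attacker exploits concrete, here is a minimal sketch (not from the post or the Vulcan write-up) of one way a consumer of LLM-generated code might vet suggested package names before installing them: it asks the public PyPI JSON API (https://pypi.org/pypi/<name>/json) whether each name is actually published and flags the ones that are not, which are exactly the names a squatter could later register. The `suggested_packages` list and the `vet_packages` helper are hypothetical illustrations.

# Minimal sketch: flag LLM-suggested package names that do not exist on PyPI.
# A name that returns 404 today is what an attacker could register tomorrow
# with a malicious payload ("AI package hallucination" squatting).

import urllib.request
import urllib.error

PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # network or rate-limit errors should not be treated as "missing"

def vet_packages(names):
    """Split suggested package names into (published, unpublished) lists."""
    published, unpublished = [], []
    for name in names:
        (published if exists_on_pypi(name) else unpublished).append(name)
    return published, unpublished

if __name__ == "__main__":
    # Hypothetical example: names an LLM might emit in generated code.
    suggested_packages = ["requests", "numpy", "totally-made-up-helper-lib"]
    ok, suspicious = vet_packages(suggested_packages)
    print("published:", ok)
    print("not on PyPI (do not pip install blindly):", suspicious)

Note that existence alone is a weak signal: once an attacker has squatted a hallucinated name, it resolves like any other project, so a check like this only catches the not-yet-registered case the post describes, not a package that has already been weaponized.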

No comments yet.