Developers do not care about poisoning attacks in LLM code assistants

13 points by grac3 about 1 year ago

5 comments

lolinder about 1 year ago
The HN guidelines discourage modifying the title because it's too easy to accidentally misrepresent the content of the article. This is the actual conclusion, which is interesting, but not accurately captured in the modified title:

> The in-lab study results showed that developers using a poisoned ChatGPT-like tool were more prone to including insecure code than those using an IntelliCode-like tool or no tool.

Looking at the actual paper, the results suggest that developers miss security holes in fully generated blocks of code more frequently than they do with code that an AI completes, and both versions of AI tooling seem to have increased error rates relative to no tool in this very small study (10 devs per category).

Those results bear almost no relation to the submitted title's claim that developers don't care about poisoning attacks.
Terr_ about 1 year ago
As a curmudgeonly refuser of the fad, I feel a little vindicated.

For implementing tasks I want help with, I would rather consume it as a testable, auditable library rather than ephemeral copy-paste delivered by a mischievous fae of air and darkness.
daft_pink about 1 year ago
From reading this, my sense is that the poisoning attack happens above our level, and as a coder I would consider it the LLM provider's job to guard against this sort of attack.

The headline made me think this sort of attack involved someone poisoning via something sent through the API, but how can I possibly concern myself with the training data of the AI tools I use?

I generally read and understand the suggestions made by the code editor, so I'm not too worried that the autosuggestions are poisoned, but I mostly feel like there's nothing I can do about it.
volleygman180 about 1 year ago
Seems like an interesting evolution of supply-chain attacks, since this is a bit more indirect. At least when a common open-source library gets poisoned, the code transparency makes it easier for someone to notice the issue and push out a patch.
indigodaddy about 1 year ago
Wouldn’t load on my old iPhone: https://archive.ph/LBHCt