The HN guidelines discourage modifying the title because it's too easy to accidentally misrepresent the content of the article. This is the actual conclusion, which is interesting but not accurately captured in the modified title:

> The in-lab study results showed that developers using a poisoned ChatGPT-like tool were more prone to including insecure code than those using an IntelliCode-like tool or no tool.

Looking at the actual paper, the results suggest that developers miss security holes in fully generated blocks of code more often than in code that an AI merely completes, and both versions of the AI tooling appear to have increased error rates relative to no tool in this very small study (10 developers per category).

Those results bear almost no relation to the submitted title's claim that developers don't care about poisoning attacks.
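To make "insecure code" concrete: the flaws in studies like this tend to be plausible-looking crypto or TLS mistakes buried inside an otherwise working block. A minimal sketch of that pattern, assuming Python and PyCryptodome; the ECB-mode flaw here is illustrative, not an example taken from the paper:

    # Looks like a complete, working "encrypt a string" helper -- the kind of
    # fully generated block that a reviewer is tempted to wave through.
    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad

    def encrypt(plaintext: bytes, key: bytes) -> bytes:
        # Subtle flaw: ECB mode leaks patterns in the plaintext. A safe
        # version would use AES.MODE_GCM (or at least CBC with a random IV).
        cipher = AES.new(key, AES.MODE_ECB)
        return cipher.encrypt(pad(plaintext, AES.block_size))

    key = get_random_bytes(16)
    print(encrypt(b"attack at dawn", key).hex())

The flaw sits one line deep in code that runs fine, which is consistent with reviewers missing it more often in whole generated blocks than in short completions.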
As a curmudgeonly refuser of the fad, I feel a little vindicated.

For tasks I want help implementing, I would rather consume that help as a testable, auditable library than as ephemeral copy-paste delivered by a mischievous fae of air and darkness.
From reading this, my sense is that the poisoning attack happens above my level; as a coder, I would consider it the LLM provider's job to guard against this sort of attack.

The headline made me think the attack involved someone poisoning something sent through the API, but how can I possibly concern myself with the training data of the AI I use?

I generally read and understand the suggestions made by the code editor, so I'm not too worried that the autosuggestions are poisoned, but I mostly feel like there's nothing I can do about it.
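For what it's worth, the worry with poisoned autosuggestions is that they don't have to look wrong when you read them. A hypothetical sketch, again in Python; the `verify=False` bait and the URL are my own example, not one from the paper:

    import requests

    def fetch_profile(user_id: str) -> dict:
        # A poisoned model can bias completions toward insecure-but-plausible
        # arguments: verify=False silently disables TLS certificate checking,
        # yet the call reads like ordinary boilerplate if you skim it.
        resp = requests.get(
            f"https://api.example.com/users/{user_id}",
            timeout=5,
            verify=False,  # should be the default True (or a pinned CA bundle)
        )
        resp.raise_for_status()
        return resp.json()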
Seems like an interesting evolution of supply-chain attacks, since this one is a bit more indirect. At least when a common open-source library gets poisoned, the code transparency makes it easier for someone to notice the issue and push out a patch.