
GitHub Copilot makes insecure code even less secure, Snyk says

15 points by luthfur about 1 year ago

3 comments

rdegges about 1 year ago
My team worked on this research at Snyk. If you think about it, it's pretty obvious behavior:

- Generative AI uses the context you provide to help generate additional tokens

- If the context you provide is bad (low-quality code, riddled with security issues), you'll similarly get low-quality code generated

- If the context you provide is good (high-quality code), you'll get better-quality code out

The thing we wanted to highlight with this research is that security is meaningfully impacted when you're generating code, particularly with low-quality codebases.
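A minimal sketch of the dynamic described above (not from the Snyk research itself): a codebase full of SQL built by string interpolation gives an assistant context that primes more of the same, while the parameterized form avoids the injection risk. Function names and the toy schema here are hypothetical, for illustration only.

```python
import sqlite3

# Insecure pattern: SQL assembled by string interpolation. A codebase
# riddled with this style is exactly the "bad context" described above.
def find_user_unsafe(conn, name):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

# Safer pattern: a parameterized query; the driver handles escaping.
def find_user_safe(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection succeeds: every row returned
print(find_user_safe(conn, payload))    # no rows: payload treated as literal text
```

Both functions "work" on benign input, which is why the insecure variant survives in real codebases and keeps reappearing in completions generated from them.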
winkelmann about 1 year ago
Whenever I read articles like this (AI assistants having a negative impact on resulting work), I wonder how much it affects "experienced" AI users. I've been interested in AI since the Cleverbot days, and have extensively used GitHub Copilot and ChatGPT since they came out. When I ask ChatGPT something that has an objective answer, but one I can't easily verify from my own knowledge or a low-stakes experiment (e.g. does this fix my syntax error?), I always make sure not to "ingest" it into my knowledge or product before finding one or more external corroborating sources. This doesn't make ChatGPT significantly less useful to me; in my experience, verifying an answer is typically much easier than researching the question from the ground up by conventional means (Google, GitHub Code Search). Similarly, when using GitHub Copilot, I am acutely aware that I need to critically evaluate the suggested code myself, and if there is something I am unsure about, it's again off to Google or Code Search.

Personally, the riskiest AI stuff I do is when I am completely stuck on something: I might accept AI suggestions without much thought just to see if they can resolve whatever issue I am running into. But in my mind, those parts of the code are always "dirty" until I thoroughly review them; in the vast majority of cases, I end up refactoring those parts myself. If I am asking AI to improve a text I wrote, I rarely take it as-is; I typically open both versions next to each other and apply the parts I like to my original text.

In my opinion, stuff created by AI is inherently "unfinished". I cringe whenever people have AI do something and just roll with it (writing an essay, code, graphic design, etc.). AI is excellent for going most of the way, but in most cases there needs to be review and finishing touches by a human, at least for now.
patrick451 about 1 year ago
So the main question is: if I rewrite it in rust, but use Copilot, should I even bother?