
GitHub Copilot makes insecure code even less secure, Snyk says

15 points by luthfur · about 1 year ago

3 comments

rdegges · about 1 year ago
My team worked on this research at Snyk. If you think about it, it's pretty obvious behavior:

- Generative AI uses the context you provide to help generate additional tokens
- If the context you provide is bad (low-quality code, riddled with security issues), you'll similarly get low-quality code generated
- If the context you provide is good (high-quality code), you'll get better-quality code out

The thing we wanted to highlight with this research is that security is meaningfully impacted when you're generating code, particularly with low-quality codebases.
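A minimal sketch of the pattern described above (not code from the Snyk study; the function names and the sqlite3 setup are illustrative). It shows the same lookup written in the two styles an assistant that mirrors its context is likely to continue: a file full of string-interpolated SQL invites more injectable completions, while a file of parameterized queries invites safe ones.

import sqlite3

def setup() -> sqlite3.Connection:
    # Throwaway in-memory database so the example is self-contained.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
    return conn

# Low-quality context: surrounding code builds SQL by string interpolation,
# so a completion written in the same style stays injectable.
def find_user_insecure(conn: sqlite3.Connection, name: str):
    query = f"SELECT * FROM users WHERE name = '{name}'"  # injectable
    return conn.execute(query).fetchall()

# High-quality context: surrounding code uses parameterized queries,
# so a completion written in the same style stays parameterized.
def find_user_secure(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    conn = setup()
    payload = "x' OR '1'='1"
    print(find_user_insecure(conn, payload))  # returns every row
    print(find_user_secure(conn, payload))    # returns no rows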
winkelmann · about 1 year ago
Whenever I read articles like this (AI assistants having a negative impact on the resulting work), I wonder how much it affects "experienced" AI users. I've been interested in AI since the Cleverbot days, and have used GitHub Copilot and ChatGPT extensively since they came out. When I ask ChatGPT something that has an objective answer, but one I can't easily verify from my own knowledge or with a low-stakes experiment (e.g. does this fix my syntax error?), I always make sure not to "ingest" it into my knowledge or product before finding one or more external corroborating sources. This doesn't make ChatGPT significantly less useful to me; in my experience, verifying an answer is typically much easier than researching the question from the ground up by conventional means (Google, GitHub Code Search). Similarly, when using GitHub Copilot, I am acutely aware that I need to critically evaluate the suggested code myself, and if there is something I am unsure about, it's again off to Google or Code Search.

Personally, the riskiest AI stuff I do is when I am completely stuck on something: I might accept AI suggestions without much thought just to see if they resolve whatever issue I am running into. But in my mind, those parts of the code are always "dirty" until I thoroughly review them; in the vast majority of cases, I end up refactoring those parts myself. If I ask AI to improve a text I wrote, I rarely take it as-is; I typically open both versions next to each other and apply the parts I like to my original text.

In my opinion, stuff created by AI is inherently "unfinished". I cringe whenever people have AI do something and just roll with it (writing an essay, code, graphic design, etc.). AI is excellent for going most of the way, but in most cases it still needs review and finishing touches from a human, at least for now.
patrick451 · about 1 year ago
So the main question is: if I rewrite it in Rust, but use Copilot, should I even bother?