
Study finds that 52% of ChatGPT answers to programming questions are wrong

61 points | by abunuwas | 12 months ago

22 comments

jghn | 12 months ago

This is looking at the wrong metric. I'm not expecting it to be 100% correct when I use it. I expect it to get me in the ballpark faster than I would have on my own. And then I can take it from there.

Sometimes that means I have a follow-on question & iterate from there. That's fine too.
happypumpkin | 12 months ago

From the paper:

"Additionally, this work has used the free version of ChatGPT (GPT-3.5)"
cubefox | 12 months ago

From the paper:

> For each of the 517 SO [Stack Overflow] questions, the first two authors manually used the SO question's title, body, and tags to form one question prompt and fed that to the free version of ChatGPT, which is based on GPT-3.5. We chose the free version of ChatGPT because it captures the majority of the target population of this work. Since the target population of this research is not only industry developers but also programmers of all levels, including students and freelancers around the world, the free version of ChatGPT has significantly more users than the paid version, which costs a monthly rate of 20 US dollars.

Note that GPT-4o is now also freely available, although with usage caps. Allegedly the limit is one fifth the turns of paid Plus users, who are said to be limited to 80 turns every three hours, which would mean 16 free GPT-4o turns per 3 hours. Though there is some indication the limits are currently somewhat lower in practice and overall in flux.

In any case, GPT-4o answers should be far more competent than those from GPT-3.5, so the study is already somewhat outdated.
jononomo | 12 months ago

I use ChatGPT for coding constantly, and the 52% error rate seems about right to me. I manually approve every single line of code that ChatGPT generates for me. If I copy-paste 120 lines of code that ChatGPT has generated directly into my app, that is because I have gone over all 120 lines with a fine-toothed comb, and probably iterated 3-4 times already. I constantly ask ChatGPT to think about the same question, but this time with an additional caveat.

I find ChatGPT most useful from a software-architecture point of view and for trivial code, and least useful at the mid-range stuff.

It can write you a great regex (make sure you double-check it) and it can explain a lot of high-level concepts in insightful ways, but it has no theory of mind -- so it never responds with "It doesn't make sense to ask me that question -- what are you really trying to achieve here?", which is the kind of thing an actually intelligent software engineer might say from time to time.
cjonas | 12 months ago

I scanned the paper and it doesn't mention which model they were using within ChatGPT. If it was 3.5 Turbo, then these results are already meaningless; GPT-4 and 4o are much more accurate.

I just used GPT-4o to refactor 50 files from React classes to React function components, and it did so almost perfectly every time. Some of these classes were as long as 500 LOC.
Foivos | 12 months ago

This is way better than I thought. A follow-up question would be: for the times that it is wrong, how wrong is it? In other words, is the wrong answer complete rubbish, or can it be a starting point towards the actual correct answer?
mrweasel | 12 months ago

ChatGPT was released a year and a half ago. It basically duct-tapes code together from a probability model; the fact that 52% of its coding answers are correct is amazing.

I'm still on the fence about LLMs for coding, but from talking to friends, they primarily use it to define a skeleton of code or to generate code that they can then study and restructure. I don't see many developers accepting the generated code without review.
jrvarela56 | 12 months ago

Similar to how programmers work, the AI needs feedback from the runtime in order to iterate towards a workable program.

My expectation isn't that the AI generate correct code. The AI will be useful as an "agent in the loop":

- Spec or test suite written as bullets
- Define tests and/or types
- Human intervenes with edits to keep it in the right direction
- LLM generates code, runs compiler/tests
- Output is part of new context
- Repeat until programmer is happy
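That iterate-on-runtime-feedback loop can be sketched in a few lines. This is a hypothetical illustration: `generate_code` is a stand-in stub, not a real LLM API, and the hard-coded drafts and single test exist only to make the loop runnable.

```python
def generate_code(spec: str, feedback: str) -> str:
    """Stub LLM: emits a buggy first draft, then a fix once the
    test failure appears in its context. A real implementation
    would call a model with the spec plus accumulated output."""
    if "FAIL" in feedback:
        return "def add(a, b):\n    return a + b"   # corrected attempt
    return "def add(a, b):\n    return a - b"       # buggy first attempt

def run_tests(code: str):
    """Compile/run step: execute the draft and the spec's test suite."""
    namespace = {}
    exec(code, namespace)
    if namespace["add"](2, 3) == 5:
        return True, ""
    return False, "FAIL: add(2, 3) != 5"

def agent_loop(spec: str, max_iters: int = 5):
    feedback = ""
    for attempt in range(1, max_iters + 1):
        code = generate_code(spec, feedback)   # LLM generates code
        ok, feedback = run_tests(code)         # run compiler/tests
        if ok:                                 # repeat until happy
            return code, attempt
    return None, max_iters

code, attempts = agent_loop("add two numbers")
print(attempts)  # the stub converges on the second attempt
```

The point of the sketch is the shape of the cycle: generation, execution, and failure output folded back into the next prompt, with the human free to edit the spec or the draft between iterations.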
tasuki | 12 months ago

Does that mean that 48% of ChatGPT answers to programming questions are correct? If so, that's amazing!
ChrisArchitect | 12 months ago

Related presentation video on the CHI 2024 conference page: https://programs.sigchi.org/chi/2024/program/content/146667
MrSkelter | 12 months ago

ChatGPT isn't the best coding LLM. Claude Opus is.

Also, since you can always tell empirically whether a coding response works, mistakes are much more easily spotted than in other forms of LLM output.

Debugging with AI is more important than prompting. It requires an understanding of the intent, which allows the human to prompt the model in a way that lets it recognize its oversights.

Most code errors from LLMs can be fixed by them. The problem is an incomplete understanding of the objective, which makes them commit to incorrect paths.

Being able to run code is a huge milestone. I hope the GPT-5 generation can do this and thus only deliver working code. That would be a quantum leap.
avg_dev | 12 months ago

That article links to the actual paper, the abstract of which is itself quite readable: https://dl.acm.org/doi/pdf/10.1145/3613904.3642596

> Q&A platforms have been crucial for the online help-seeking behavior of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT's answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers. Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers from linguistic and human aspects. Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.
nijuashi | 12 months ago

I guess I know how to ask the right programming questions, because my feeling is that it's about 80-90% correct, and the rest just gets me to correct solutions much faster than a search engine.
ph4 | 12 months ago

I view it as an e-bike for my mind. It doesn't do all the legwork, but it definitely gets me up certain hills (of my choosing) without as much effort.
drewcoo | 12 months ago

To those who constantly claim ChatGPT is "like an intern": just how low are the standards for interns?
123yawaworht456 | 12 months ago

IIRC, I saw some other study (or an experiment some random guy had run) where the original GPT-4 vastly outperformed its later incarnations at code generation.

Current OpenAI products either use much lower-parameter models under the hood than they did originally, or maybe it's a side effect of context stretching.
ggddv | 12 months ago

Can there be some sort of mechanism on HN for criticizing an unsubstantiated headline?
odyssey7 | 12 months ago

Extrapolation: odds of a correct answer within n attempts = 1 - (1/2)^n.

Nice, that's exponentially good!
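The extrapolation is easy to check numerically. A minimal sketch, with the caveat that it assumes each retry is an independent coin flip with a fixed failure rate, which repeated prompts to the same model almost certainly are not:

```python
def p_correct_within(n: int, p_wrong: float = 0.5) -> float:
    """Probability of at least one correct answer in n independent
    attempts, each failing with probability p_wrong."""
    return 1 - p_wrong ** n

# Under the 50/50 assumption, success odds climb quickly with retries.
for n in (1, 2, 5, 10):
    print(n, p_correct_within(n))
```

Plugging in the study's 52% error rate instead (`p_wrong=0.52`) changes the numbers only slightly; the real-world catch is the independence assumption, not the arithmetic.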
resource_waste | 12 months ago

Can someone email the author and explain what an LLM is?

People asking for "right" answers don't really get it. I'm sorry if that sounds abrasive, but these people give LLMs a bad name due to their own ignorance/malice.

I remember having some Amazon programmer trash LLMs for "not being 100% accurate". It was really an ID10T error. LLMs aren't used for 100% accuracy. If you are doing that, you don't understand the technology.

There is a learning curve with LLMs, and it seems a few people still don't get it.
Last5Digits | 12 months ago

Here's hoping that the average HN commenter will actually read the paper and realize that the study was performed using GPT-3.5.
f0e4c2f7 | 12 months ago

This study uses a version of ChatGPT that is either one or two versions behind, depending on the part of the study.

It cracks me up how consistent this is.

See a post criticizing LLMs. Check whether they're on the latest version (which is now free, to boot).

Nope. Seemingly never. To be fair, this is probably just an old study from before 4o came out. Even still, it's just not relevant anymore.
ObnoxiousProxy | 12 months ago

Misleading headline, and completely pointless without diving into how the benchmark was constructed and what kinds of programming questions were asked.

On the HumanEval benchmark (https://paperswithcode.com/sota/code-generation-on-humaneval), GPT-4 can generate code that works on the first pass 76.5% of the time.

Meanwhile, on SWE-bench (https://www.swebench.com/), GPT-4 with RAG can only solve about 1% of the GitHub issues used in the benchmark.