This is looking at the wrong metric. I'm not expecting it to be 100% correct when I use it. I expect it to get me in the ballpark faster than I would have on my own. And then I can take it from there.

Sometimes that means I have a follow-on question & iterate from there. That's fine too.
From the paper:

> For each of the 517 SO [Stack Overflow] questions, the first two authors manually used the SO question's title, body, and tags to form one question prompt and fed that to the free version of ChatGPT, which is based on GPT-3.5. We chose the free version of ChatGPT because it captures the majority of the target population of this work. Since the target population of this research is not only industry developers but also programmers of all levels, including students and freelancers around the world, the free version of ChatGPT has significantly more users than the paid version, which costs a monthly rate of 20 US dollars.

Note that GPT-4o is now also freely available, although with usage caps. Allegedly the limit is one fifth the turns of paid Plus users, who are said to be limited to 80 turns every three hours. That would mean 16 free GPT-4o turns per 3 hours, though there is some indication the limits are currently somewhat lower in practice and overall in flux.

In any case, GPT-4o answers should be far more competent than those by GPT-3.5, so the study is already somewhat outdated.
I use ChatGPT for coding constantly and the 52% error rate seems about right to me. I manually approve every single line of code that ChatGPT generates for me. If I copy-paste 120 lines of code that ChatGPT has generated directly into my app, that is because I have gone over all 120 lines with a fine-toothed comb, and probably iterated 3-4 times already. I constantly ask ChatGPT to think about the same question, but this time with an additional caveat.

I find ChatGPT most useful from a software architecture point of view and from a trivial code point of view, and least useful at the mid-range stuff.

It can write you a great regex (make sure you double-check it) and it can explain a lot of high-level concepts in insightful ways, but it has no theory of mind -- so it never responds with "It doesn't make sense to ask me that question -- what are you really trying to achieve here?", which is the kind of thing an actually intelligent software engineer might say from time to time.
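The double-check advice is worth taking literally: the cheapest way to vet an LLM-generated regex is a handful of positive and negative test cases. A minimal sketch (the regex and test strings here are hypothetical illustrations, not from the study):

```typescript
// Hypothetical LLM-suggested regex for ISO 8601 dates (YYYY-MM-DD).
const isoDate = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

const shouldMatch = ["2024-05-23", "1999-12-31"];
const shouldReject = ["2024-13-01", "2024-00-10", "24-05-23", "2024-05-32"];

// Assert both directions: accepted inputs match, malformed inputs don't.
for (const s of shouldMatch) {
  console.assert(isoDate.test(s), `expected match: ${s}`);
}
for (const s of shouldReject) {
  console.assert(!isoDate.test(s), `expected rejection: ${s}`);
}
```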
I scanned the paper and it doesn't mention what model they were using within ChatGPT. If it was 3.5 Turbo, then these results are already meaningless. GPT-4 and 4o are much more accurate.

I just used GPT-4o to refactor 50 files from React classes to React function components and it did so almost perfectly every time. Some of these classes were as long as 500 lines of code.
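For context, that kind of refactor is mechanical enough to suit an LLM well. A minimal sketch of the transformation on a hypothetical `Counter` component (not one of the commenter's actual files):

```tsx
import React, { useState } from "react";

// Before: class component with local state.
class Counter extends React.Component<{}, { count: number }> {
  state = { count: 0 };
  render() {
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        {this.state.count}
      </button>
    );
  }
}

// After: equivalent function component using the useState hook.
function CounterFn() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}
```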
This is way better than I thought. A follow-up question would be: for the times that it is wrong, how wrong is it? In other words, is the wrong answer complete rubbish, or can it be a starting point towards the actual correct answer?
ChatGPT was released a year and a half ago. It basically duct-tapes code together from a probability model; the fact that 52% of its coding answers are correct is amazing.

I'm still on the fence about LLMs for coding, but from talking to friends, they primarily use it to define a skeleton of code or generate code that they can then study and restructure. I don't see many developers accepting the generated code without review.
Similar to how programmers work, the AI needs feedback from the runtime in order to iterate towards a workable program.

My expectation isn't that the AI generates correct code. The AI will be useful as an 'agent in the loop' (sketched below):

- Spec or test suite written as bullets
- Define tests and/or types
- Human intervenes with edits to keep it in the right direction
- LLM generates code, runs compiler/tests
- Output is part of new context
- Repeat until programmer is happy
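A minimal sketch of that loop, with `generateCode`, `runTests`, and `humanApproves` as hypothetical stand-ins for an LLM call, a test runner, and the human review step:

```typescript
// Hypothetical helpers: an LLM call, a test runner, and a human review step.
declare function generateCode(context: string): Promise<string>;
declare function runTests(code: string): Promise<{ passed: boolean; output: string }>;
declare function humanApproves(code: string, output: string): Promise<boolean>;

async function agentLoop(spec: string, maxIterations = 5): Promise<string | null> {
  let context = spec; // starts as the bullet-point spec / test suite
  for (let i = 0; i < maxIterations; i++) {
    const code = await generateCode(context);
    const result = await runTests(code);
    // Compiler/test output becomes part of the next iteration's context.
    context = `${spec}\n\nPrevious attempt:\n${code}\n\nTest output:\n${result.output}`;
    if (result.passed && (await humanApproves(code, result.output))) {
      return code; // programmer is happy
    }
  }
  return null; // give up after maxIterations
}
```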
Related presentation video on the CHI 2024 conference page: https://programs.sigchi.org/chi/2024/program/content/146667
ChatGPT isn't the best coding LLM. Claude Opus is.

Also, since you can always tell empirically whether a coding response works, mistakes are much more easily spotted than in other forms of LLM output.

Debugging with AI is more important than prompting. It requires an understanding of the intent, which allows the human to prompt the model in a way that allows it to recognize its oversights.

Most code errors from LLMs can be fixed by them. The problem is an incomplete understanding of the objective, which makes them commit to incorrect paths.

Being able to run code is a huge milestone. I hope the GPT-5 generation can do this and thus only deliver working code. That will be a quantum leap.
That article links to the actual paper, the abstract of which is itself quite readable: https://dl.acm.org/doi/pdf/10.1145/3613904.3642596

> Q&A platforms have been crucial for the online help-seeking behavior of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT's answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers. Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers from linguistic and human aspects. Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.
I guess I know how to ask the right programming questions, because my feeling is that it's about 80-90% correct, and the rest just gets me to correct solutions much faster than a search engine.
I view it as an e-bike for my mind. It doesn't do all the legwork, but it definitely gets me up certain hills (of my choosing) without as much effort.
IIRC, I saw some other study (or an experiment some random guy had run) where the original GPT-4 vastly outperformed its later incarnations for code generation.

Current OpenAI products either use much lower parameter-count models under the hood than they did originally, or maybe it's a side effect of context stretching.
Can someone email the author and explain what an LLM is?

People asking for 'right' answers don't really get it. I'm sorry if that sounds abrasive, but these people give LLMs a bad name due to their own ignorance/malice.

I remember having some Amazon programmer trash LLMs for 'not being 100% accurate'. It was really an iD10t error. LLMs aren't used for 100% accuracy. If you are doing that, you don't understand the technology.

There is a learning curve with LLMs, and it seems a few people still don't get it.
This study uses a version of ChatGPT that is either 1 or 2 versions behind depending on the part of the study.

It cracks me up how consistent this is.

See post criticizing LLMs. Check if they're on the latest version (which is now free to boot!!).

Nope. Seemingly... never. To be fair, this is probably just an old study from before 4o came out. Even still. It's just not relevant anymore.
Misleading headline, and completely pointless without diving into how the benchmark was constructed and what kinds of programming questions were asked.

On the HumanEval benchmark (https://paperswithcode.com/sota/code-generation-on-humaneval), GPT-4 can generate code that works on the first pass 76.5% of the time.

While on SWE-bench (https://www.swebench.com/), GPT-4 with RAG can only solve about 1% of the GitHub issues used in the benchmark.
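For reference, HumanEval-style "works on first pass" numbers are pass@1 scores. The general pass@k metric, as defined in the original Codex paper, generates $n$ samples per problem, counts the number $c$ that pass the unit tests, and averages the unbiased estimator:

$$\text{pass@}k = \mathbb{E}_{\text{problems}}\!\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$$

With $k = 1$ this reduces to the fraction of samples that pass the tests, which is what the 76.5% figure reports.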