Looks like they trained their model on SWE-bench and tried to submit:
<a href="https://github.com/swe-bench/experiments/pull/45">https://github.com/swe-bench/experiments/pull/45</a>
Sorry for not discussing the product itself, but...<p>I'm just not seeing how a machine that is only "likely correct" and constantly interrupts the "operator" is that much of a win. I have seen some software influencers reflect on how much more fun coding is after dropping the LLM assistant.<p>All of these feel like offerings to the Productivity God. As a salary guy I'll never get excited that I can do more during my work day. It's already easy to hit my capacity.
Something I'm kind of curious about is the degree to which eval performance might be due to parts of the SWE-bench dataset leaking into the training data of the latest LLMs.<p>A while back, someone on Twitter seemed to confirm that Claude 3.5 was aware of the GitHub issues inside the dataset, since it would mention them, but I couldn't find the original post.<p>30% on the full SWE-bench benchmark is quite the leap, but just how "real" an achievement is this? Anecdotal reports suggest that GPT-4o is marginally better than GPT-4 Turbo at best, and yet agents built on it performed better.<p>What would happen if SWE-bench were updated, top to bottom, with completely new GitHub issues? Would all these agents just completely shit the bed?