Performance of LLMs on Advent of Code 2024

135 points by jerpint, 5 months ago

20 comments

101008, 5 months ago
This kind of mirrors my experience with LLMs. If I ask them for non-original problems (make this API, write this test, update this function that must have been written hundreds of times by developers around the world, etc.), it works very well. Some minor changes here and there, but it saves time.

When I ask them to code things that they have never heard of (I am working on an online sport game), they fail catastrophically. The LLM should know the sport, and what I ask is pretty clear for anyone who understands the game (I tested against actual people and it was obvious what to expect), but the LLM failed miserably. Even worse when I ask them to write some designs in CSS for the game. It seems if you take them outside the 3-column layout or Bootstrap or the overused landing page, LLMs fail miserably.

It works very well for the known cases, but as soon as you want them to do something original, they just can't.
Recursing, 5 months ago
The article and comments here _really_ underestimate the current state of LLMs (or overestimate how hard AoC 2024 was).

Here's a much better analysis from someone who got 45 stars using LLMs: https://www.reddit.com/r/adventofcode/comments/1hnk1c5/results_of_a_multiyear_llm_experiment/

All of the top 5 players on the final leaderboard (https://adventofcode.com/2024/leaderboard) used LLMs for most of their solutions.

LLMs can solve all days except 12, 15, 17, 21, and 24.
upghost, 5 months ago
After looking at the charts I was like "Whoa, damn, that Jerpint model seems amazing. Where do I get that??" I spent some time trying to find it on Hugging Face before I realized...
bryan0, 5 months ago
Since you did not give the models a chance to test their code and correct any mistakes, I think a more accurate comparison would be against you submitting answers without testing (or even running!) your code first.
zaptheimpaler, 5 months ago
I'm adjacent to some people who do AoC competitively, and it's clear many of the top 10, and maybe half of the top 100, this year were heavily LLM-assisted or wholly done by LLMs in a loop. They won first place on many days. It was disappointing to the community that people cheated and went against the community's wishes, but it's clear LLMs can do much better than described here.
unclad5968, 5 months ago
Half the time I ask Gemini questions about the C++ std library, it fabricates non-existent types and functions. I'm honestly impressed it was able to solve any of the AoC problems.
grumple, 5 months ago
I'm both surprised and not surprised. I'm surprised because these sorts of problems, with very clear prompts and fairly clear algorithmic requirements, are exactly what I'd expect LLMs to perform best at.

But I'm not surprised because I've seen them fail on many problems even with lots of prompt engineering and test cases.
yunwal, 5 months ago
With no prompt engineering, this seems like a weird comparison. I wouldn't expect anyone to be able to one-shot most of the AoC problems. A fair fight would at least use something like Cursor's agent in YOLO mode, which can review a command's output, add logs, etc.
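For illustration, a minimal sketch of the kind of generate-run-retry loop described above. (`llm_complete` is a stand-in for any chat-completion call, and the 300-second cap mirrors the post's runtime limit; nothing here is from the actual benchmark.)

```python
import subprocess
import tempfile

def solve_with_retries(puzzle_text, puzzle_input, llm_complete, max_turns=5):
    # Conversation so far: the puzzle, plus any failed attempts and errors.
    history = ["Solve this Advent of Code puzzle in Python. "
               "Read the input from stdin and print the answer.\n\n" + puzzle_text]
    for _ in range(max_turns):
        code = llm_complete("\n\n".join(history))
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        try:
            result = subprocess.run(
                ["python", f.name], input=puzzle_input,
                capture_output=True, text=True, timeout=300)
        except subprocess.TimeoutExpired:
            history += [code, "Your code timed out after 300 seconds. "
                              "Use a more efficient algorithm and resend the full program."]
            continue
        if result.returncode == 0 and result.stdout.strip():
            return result.stdout.strip()  # candidate answer, still unverified
        # Feed the failure back so the model can revise, as an agent would.
        history += [code, "Your code failed with:\n" + result.stderr[-2000:] +
                          "\nFix it and resend the full program."]
    return None
```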
bawolff, 5 months ago
I'm a bit of an AI skeptic, and I think I had the opposite reaction to the author's. Even though this is far from welcoming our AI overlords, I am surprised that they are this good.
jebarker, 5 months ago
I'd be interested to know how o1 compares. On many days, after I completed the AoC puzzles, I put the questions into o1 and it seemed to do really well.
moffkalast, 5 months ago
At first I was like "What is this jerpint model that's beating the competition so soundly?" Then it hit me, lol.

Anyhow, this is like night and day compared to last year, and it's impressive that Sonnet is now apparently 50% as good as a professional human at this sort of thing.
demirbey05, 5 months ago
o1 is not included; I think every benchmark should include o1 and the reasoning models. The o-series has really changed the game.
airstrike, 5 months ago
I like the idea, but I feel like the execution left a bit to be desired.

My gut tells me you can get much better results from the models with better prompting. The whole "You are solving the 2024 advent of code challenge." form of prompting is just adding noise with no real value. Based on my empirical experience, that likely hurts performance instead of helping.

The time limit feels arbitrary and adds nothing to the benchmark. I don't understand why you wouldn't include o1 in the list of models.

There's just a lot here that doesn't feel very scientific about this analysis...
Tiberium, 5 months ago
Wanted to try with o1 and o1-mini, but it looks like there's no code available, although I guess I could just ask 3.5 Sonnet/o1 to make the evaluation suite ;)
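For what it's worth, the core of such an evaluation suite is small. A hedged sketch (the directory layout and the `expected_answers` mapping are invented for illustration, not taken from the post):

```python
import subprocess
from pathlib import Path

def grade(day, model, expected_answers):
    """Run one model-generated solution and compare against the known answer."""
    script = Path("solutions") / model / f"day{day:02}.py"
    puzzle_input = Path("inputs") / f"day{day:02}.txt"
    try:
        result = subprocess.run(
            ["python", str(script)],
            input=puzzle_input.read_text(),
            capture_output=True, text=True, timeout=300)
    except subprocess.TimeoutExpired:
        return False  # mirrors the post's 300-second runtime cap
    return result.stdout.strip() == str(expected_answers[day])
```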
bongodongobob, 5 months ago
I think a major mistake was giving parts 1 and 2 all at once. I had great results having it solve part 1, then part 2. I think I got 4o to one-shot part 1 and then part 2 up to about day 12. It started to struggle a bit after that, and I got bored with it at day 18. It did way better than I expected; I don't understand why the author is disappointed. This shit is magic.
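A rough sketch of that two-turn approach, with `chat` standing in for any conversational LLM client (hypothetical, not the commenter's actual setup):

```python
def solve_sequentially(part1_text, part2_text, chat):
    """Ask for part 1 first, then continue the same conversation for part 2."""
    messages = [{"role": "user",
                 "content": "Solve part 1 of this Advent of Code puzzle "
                            "in Python:\n\n" + part1_text}]
    part1_code = chat(messages)
    # Keep the part 1 solution in context so part 2 can build on it.
    messages += [
        {"role": "assistant", "content": part1_code},
        {"role": "user",
         "content": "Part 1 was accepted. Now extend your solution "
                    "for part 2:\n\n" + part2_text},
    ]
    part2_code = chat(messages)
    return part1_code, part2_code
```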
antirez, 5 months ago
The most important thing is missing from this post: the performance of Jerpint+Claude. It's not a VS game.
guerrilla, 5 months ago
How far can LLMs get in Project Euler without messing up?
BugsJustFindMe, 5 months ago
I think this is a terrible analysis with a weak conclusion.

There's zero mention of how long it took the LLM to write the code vs. the human. You have a 300-second runtime limit, but what was your coding time limit? The machine spat out code in, what, a few seconds? And how long did your solutions take to write?

Advent of Code problems take me longer to just _read_ than it takes an LLM to have a proposed solution ready for evaluation.

> they didn't perform nearly as well as I'd expect

Is this a joke, though? A machine takes a problem description written as floridly hyperventilated as advent problems are, and, without any opportunity for automated reanalysis, it understands the exact problem domain, it understands exactly what's being asked, correctly models the solution, and spits out a correct single-shot solution on 20 of them in no time flat, often with substantially better running time than your own solutions, and that's disappointing?

> a lot of the submissions had timeout errors, which means that their solutions might work if asked more explicitly for efficient solutions. However the models should know very well what AoC solutions entail

You made up an arbitrary runtime limit and then kept that limit a secret, and you were surprised when the solutions didn't adhere to the secret limit?

> Finally, some of the submissions raised some Exceptions, which would likely be fixed with a human reviewing this code and asking for changes.

How many of your solutions got the correct answer on the first try without going back and fixing something?
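One way to surface that hidden constraint would be to state the budget in the prompt itself. A sketch (the prompt wording is invented, not from the post, which per the critique above never disclosed its limit to the models):

```python
# Hypothetical prompt template that makes the runtime budget explicit.
PROMPT_TEMPLATE = """Solve this Advent of Code puzzle in Python.
Your program must finish within {limit} seconds, so prefer an
efficient algorithm over brute force.

{puzzle}"""

def build_prompt(puzzle, limit=300):
    return PROMPT_TEMPLATE.format(limit=limit, puzzle=puzzle)
```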
johnea, 5 months ago
LLMs are writing code for the coming of the lil' baby jesus?
cheevly, 5 months ago
Genuinely terrible prompt: not only in structure, but it also contains grammatical errors. I'm confident you could at least double their score by improving your prompting significantly.