Oh, it's about people who use AI. I thought they were going after the real cheaters: the people who use a prefab computer to solve the problems rather than building their own from scratch.
My Day 6 Part 2 time was 46s, in a language fairly new to me and entirely manually (no automation at all). I just happened to have a solution that could quickly be changed to partition by 14 instead of 4.<p>I’m not sure that a fast Part 2 alone is that strong a signal. (I’m also not even sure that I think GPT is cheating any more than having pre-written frameworks, ready-made Dijkstra, A*, and minimax solvers, automated input downloaders, scripted submission functions, etc. I don’t think any of those are cheating. GPT is more of a grey area, I guess.)
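(To make the parent comment concrete: this appears to reference AoC 2022 Day 6, where Part 1 asks for the first run of 4 distinct characters and Part 2 changes that to 14. A minimal sketch of the kind of parameterized solution described, using the puzzle's published example string:)

```python
def first_marker(signal: str, window: int) -> int:
    """Return the 1-based index just past the first run of `window` distinct characters."""
    for i in range(window, len(signal) + 1):
        # A window is a marker when all of its characters are unique.
        if len(set(signal[i - window:i])) == window:
            return i
    raise ValueError("no marker found")

example = "mjqjpqmgbljsphdztnvjfqwrcgsmlb"
print(first_marker(example, 4))   # Part 1 -> 7
print(first_marker(example, 14))  # Part 2 -> 19
```

With the window size as a parameter, "solving" Part 2 is a one-character edit, which is why a sub-minute Part 2 time says little on its own.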
> for a programmer looking up something like “network request python” things like ChatGPT or Github CoPilot become much easier to use, as they have IDE integration (Why didn’t Stack Overflow make IDE plugins for this stuff years ago?)<p>No? Isn't that just like Google's "I'm feeling lucky", but without even bothering to look up who said what and why?<p>GPT strips things of context, and this makes it difficult to know how much to trust it, and how to value it in comparison to alternatives.<p>I mean, if you don't mind Quake 3 code turning up in your work, and don't mind not even knowing that's what it is or where it came from, then I guess that level of not caring matches what you are doing. Seriously, if it's just a hack I have nothing against this, but it doesn't feel right for any long-lived code.
Seemingly unpopular opinion: large language models are just a new, higher level of abstraction we can use to describe what we want the computer to do.<p>Can I claim this guy is a fraud for using Python instead of typing opcodes into a hex editor? He doesn’t even know which registers hold his data!
The “fraud detection” method used here was underwhelming. I was hoping for a statistical learning approach rather than a blunt heuristic. More justification for the heuristic would have been interesting.
> Using this API and a Python script, it is possible to mark any user who solves the second part less than a minute after the first as suspicious<p>Looks like for Day 25, this would have marked the entire leaderboard (top 100) as suspicious. (No, they didn't use ChatGPT.)<p>Part of the meta-game while solving part one is to predict what kind of parameterization part two might depend on and make a flexible solution, and the best solvers are also the best at doing that.
Maybe someone more knowledgeable can explain - is there a reason people would be concerned about cheaters in AoC? Is it just like for speedrunning, where the repercussions of cheating are largely about your perception in the community itself?<p>I guess what I’m trying to figure out is whether anyone’s going to try and get a job using their Top 10 Advent of Code medal.
Why?<p>The leaderboard fills up in anywhere from 10 to 15 minutes. I bet it would take people at least as long to come up with a suitable prompt (just copying the text on the site doesn’t work), refine the response, and implement the solution.