
The Leaderboard Illusion

184 points by pongogogo, 20 days ago

15 comments

mrandish, 19 days ago
I'm not even following AI model performance testing that closely, but I'm hearing increasing reports that results are inaccurate due to accidental or intentional test data leaking into training data, and other forms of training to the test.

Also, ARC AGI reported they've been unable to independently replicate OpenAI's claimed breakthrough score from December. There's just too much money at stake now to *not* treat all AI model performance testing as an adversarial, no-holds-barred brawl. The default assumption should be that all entrants will cheat in any way possible. Commercial entrants with large teams of highly incentivized people will search and optimize for every possible advantage, if not outright cheat. As a result, smaller academic, student, or community teams working part-time will tend to score lower than they would on a level playing field.
simonw, 19 days ago
I published some notes and opinions on this paper here: https://simonwillison.net/2025/Apr/30/criticism-of-the-chatbot-arena/

Short version: the thing I care most about in this paper is that well-funded vendors can apparently submit dozens of variations of their models to the leaderboard and then selectively publish the model that did best.

This gives them a huge advantage. I want to know if they did that. A top-placed model with a footnote saying "they tried 22 variants, most of which scored lower than this one" helps me understand what's going on.

If the top model tried 22 times and scored lower on 21 of those tries, whereas the model in second place only tried once, I'd like to hear about it.
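To make the size of that advantage concrete, here is a minimal simulation sketch, under the hedged assumption that arena scores are roughly Gaussian noise around a fixed true skill. All numbers are hypothetical.

    # Best-of-N selection bias: if a vendor privately tests 22 variants on a
    # noisy leaderboard and publishes only the best, the published score is
    # inflated even when no variant is actually better. Hypothetical numbers.
    import random

    random.seed(0)
    TRUE_ELO = 1200   # every variant has identical underlying skill (assumed)
    NOISE_SD = 25     # per-submission measurement noise (assumed)

    def observed_score() -> float:
        """One noisy leaderboard measurement of a single variant."""
        return random.gauss(TRUE_ELO, NOISE_SD)

    honest = observed_score()                                 # submit once
    cherry_picked = max(observed_score() for _ in range(22))  # keep best of 22

    print(f"single submission:   {honest:.0f}")
    print(f"best of 22 variants: {cherry_picked:.0f}")
    # The best-of-22 score typically lands around two standard deviations
    # above the true skill: roughly a 50-point Elo edge from selection alone.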
godelski, 19 days ago
Many of these things are ones that people have been screaming about for years (including Sarah Hooker). It's great to see some numbers attached. And in classic Cohere manner, they are not pulling punches on some specific people. Expect those people to push back.

There's a crux that makes it easy to understand why we should expect this. If you code (I assume you do), you probably (hopefully) know that you can't test your way into proving your code is correct. Test Driven Development (TDD) is a flawed paradigm. You should use tests, but they are hints. That's why Cohere quotes Goodhart at the top of the intro[0]. There is NO metric that is perfectly aligned with the reason you implemented that metric in the first place (intent). This is alignment 101. Which is why it is really ironic how prolific this attitude is in ML[1]. I'm not sure I believe any person or company that claims they can make safe AI if they are trying to shove benchmarks at you.

Pay close attention: evaluation is very hard, and it is getting harder. Remember reward hacking; it is still alive and well (it is Goodhart's Law). You have to think about what criteria meet your objective. This is true for any job! But think about RLHF and similar strategies: what methods also maximize the reward function? If the reward is human preference, deception maximizes it just as well as (or better than) accuracy. This is a bad design pattern. You want to make errors as loud as possible, but this paradigm makes errors as quiet as possible, and you cannot confuse that with a lack of errors. It makes evaluation incredibly difficult.

Metrics are guides, not targets.

[0] Users who recognize me may remember me for mentioning "Goodhart's Hell", the adoption of Goodhart's Law as a feature instead of a bug. It is prolific, and problematic.

[1] We used to say that when people say "AI" instead of "ML", put your guard up. But a heuristic that's been true for years is: "if people try to prove things by benchmarks alone, they're selling snake oil." There should always be analysis in addition to metrics.
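A toy sketch of the Goodhart failure mode described above (illustrative only, not from the paper): a "model" that targets the benchmark directly can saturate the metric while the capability the metric was meant to measure stays at zero.

    # Goodhart's Law in miniature: memorizing the benchmark maximizes the
    # metric while learning nothing. Prompts and answers are made up.
    benchmark = {"2+2": "4", "capital of France": "Paris", "sqrt(9)": "3"}
    held_out  = {"3+3": "6", "capital of Spain": "Madrid", "sqrt(16)": "4"}

    def memorizer(prompt: str) -> str:
        """Perfect on the benchmark, useless on anything else."""
        return benchmark.get(prompt, "I don't know")

    def accuracy(model, dataset) -> float:
        return sum(model(q) == a for q, a in dataset.items()) / len(dataset)

    print("benchmark accuracy:", accuracy(memorizer, benchmark))  # 1.0
    print("held-out accuracy: ", accuracy(memorizer, held_out))   # 0.0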
pongogogo, 20 days ago
I think this is a really interesting paper from Cohere. It really feels that, at this point, you can't trust any public benchmark, and you need your own private evals.
unkulunkulu, 20 days ago
Sounds like the classic inequality observed everywhere: success leads to attention, which leads to more success.

Why spend evaluation resources on outsiders? Everyone wants to know exactly who is first, second, etc.; after #10, it's "do your own evaluation if this is important to you."

Thus, we have this inequality.
aredox, 20 days ago
The fact that those big LLM developers devote a significant amount of effort to gaming benchmarks is a big show of confidence that they are making progress towards AGI and will recoup those billions of dollars and man-hours /s
bob1029, 19 days ago
> Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

In the context of genetic programming and other non-traditional ML techniques, I've been having difficulty locating a simple fitness function that reliably proxies natural language string similarity, due to this effect.

For example, say you use something like common prefix length to measure how close a candidate's output string is to an objective string given an input string. The underlying learner will inevitably start doing things like repeating the input verbatim, especially if the input/output training tuples often share long prefixes. So you might try something like reversing the input to force learning down a less crappy path [0]. The learner may respond degenerately by inventing a string-reversing technique and repeating its prior behavior. So you iterate again and try something like base64-encoding the input. This might take, but eventually you wind up with so many weird hacks that the learner can't make progress and the meaning of the quantities evaporates.

Every metric I've ever looked at gets cheated in some way. The holy grail is probably normalized information distance (approximated by normalized compression distance), but then you have a whole new problem of finding an ideal universal compressor, which definitely doesn't exist.

[0]: https://arxiv.org/abs/1409.3215 (Figure 1)
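For reference, a minimal sketch of the normalized compression distance mentioned at the end, with zlib standing in for the ideal universal compressor that doesn't exist; on short strings its header overhead blurs the values, which is part of the problem.

    # Normalized compression distance (NCD), the practical approximation of
    # normalized information distance: NCD(x, y) =
    # (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is compressed size.
    import zlib

    def clen(data: bytes) -> int:
        """Length of data after zlib compression at maximum level."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = clen(x), clen(y), clen(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    target = b"the quick brown fox jumps over the lazy dog"
    print(ncd(target, b"the quick brown fox jumps over the lazy cat"))  # low
    print(ncd(target, b"zzzz" * 11))                                    # high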
ekidd, 20 days ago
Also, I've been hearing a lot of complaints that Chatbot Arena tends to favor:

- Lots of bullet points in every response.

- Emoji.

...even at the expense of accurate answers. And I'm beginning to wonder if the sycophantic behavior of recent models ("That's a brilliant and profound idea") is also being driven by Arena scores.

Perhaps LLM users actually do want lots of bullets, emoji, and fawning praise. But this seems like a perverse dynamic, similar to the way social media users often engage more with content that outrages them.
jmount, 20 days ago
Not the same effect, but a good related write-up: https://www.stefanmesken.info/machine%20learning/how-to-beat-kaggle-(the-easy-way)/
jmmcd, 20 days ago
Absolutely devastating for the credibility of FAIR.
lostmsu, 20 days ago
Chiming in as usual: https://trashtalk.borg.games

A social deduction game for both LLMs and humans. All the past games are available for anyone.

I'm open to feedback.
badmonster, 19 days ago
https://x.com/karpathy/status/1917546757929722115
j7ake, 19 days ago
It's essentially the p-value hacking we see in the social and biological sciences, applied to the machine learning field.

Once you set an evaluation metric, it ceases to be a useful metric.
mottiden, 19 days ago
This is such great research. Kudos to the authors!
n8m8, 20 days ago
Predictable, yet incredibly important.