
The GRIM test – a method for evaluating published research

79 points by maxharlow almost 9 years ago

10 comments

canjobear almost 9 years ago
Another simple thing you can test in a paper to see if it is credible is p-curve and related methods from Uri Simonsohn et al.

http://www.p-curve.com/

You just look at the distribution of p values that are used to support the authors' hypotheses. If the distribution is skewed high, then something fishy is going on.
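A rough sketch of that intuition in R, using made-up p-values rather than the full p-curve machinery: bin the significant p-values reported for a paper's key claims and see whether they pile up near zero (expected under a real effect) or just under .05 (the "skewed high" warning sign).

    # Sketch of a p-curve-style check; the p-values below are hypothetical.
    reported_p <- c(0.012, 0.021, 0.031, 0.038, 0.042, 0.044, 0.046, 0.048, 0.049)

    # Keep only the results claimed as significant at the .05 level.
    sig_p <- reported_p[reported_p < 0.05]

    # Bin into five .01-wide bins, as p-curve plots do.
    bins <- cut(sig_p, breaks = seq(0, 0.05, by = 0.01), include.lowest = TRUE)
    print(table(bins))

    # Under a genuine effect most of the mass should sit in the lowest bin;
    # a pile-up just under .05 suggests selective reporting or p-hacking.
    barplot(table(bins), xlab = "p-value bin", ylab = "count",
            main = "Distribution of reported significant p-values")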
jldugger almost 9 years ago
Interesting approach -- get the scientific community to agree on the mathematical principles first, before anyone specifically is outed as cheating.

But this article feels like reading a teaser chapter of a bigger story.

> The amount of (toil) required to actually create data like this from scratch is (very) nightmarish. It's a task drastically out of reach of the (foolish people) who'd try such a bush league stunt in the first place.

This assumes that all experiments lead to publications. We know there's a strong publication bias, that it favors positive results, and that it dramatically favors unintuitive positive results. Which means you need to find correlations where none were expected. How many experiments do you need to get a significant correlation when there is none? Hint: more than one.

It's also worth noting that it wouldn't be difficult to produce a genetic algorithm using various statistical checks, including this one, as a fitness function.
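A minimal sketch of that "how many experiments" question in R, with purely simulated null data: at a .05 threshold each null experiment has a 5% chance of a spurious hit, so on average you need about 20 experiments, and roughly 14 for a better-than-even chance of at least one.

    # How many null experiments until a "significant" correlation appears by chance?
    # Analytically, hits follow a geometric distribution with p = 0.05,
    # so the expected number of experiments is 1 / 0.05 = 20.
    alpha <- 0.05
    p_at_least_one <- function(k) 1 - (1 - alpha)^k
    print(p_at_least_one(14))   # ~0.51: better than a coin flip after 14 tries
    print(1 / alpha)            # 20 experiments needed on average

    # Quick simulation to confirm: correlate two unrelated variables until p < .05.
    set.seed(1)
    trials_needed <- replicate(1000, {
      k <- 0
      repeat {
        k <- k + 1
        if (cor.test(rnorm(30), rnorm(30))$p.value < alpha) break
      }
      k
    })
    print(mean(trials_needed))  # should come out close to 20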
bayesian_horse almost 9 years ago
I don't agree with the idea that faking data is more difficult than running the experiment. A lot of courses in universities now teach R or something similar to run statistics. Relatively simple "Monte Carlo" simulations would provide results which satisfy the GRIM test.
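A minimal sketch of that point in R, with a hypothetical sample size and 1-7 scale: GRIM only asks whether a reported mean is reachable from integer data with the stated n, so the rounded mean of any simulated integer sample passes it by construction.

    # GRIM check: a mean reported to `digits` decimals is consistent with
    # integer data of size n iff some integer total rounds to that mean.
    grim_consistent <- function(reported_mean, n, digits = 2) {
      candidates <- round(reported_mean * n) + (-1:1)   # integer totals near the mean
      any(abs(candidates / n - reported_mean) <= 0.5 / 10^digits + 1e-12)
    }

    # Fabricating GRIM-consistent "results" is trivial: simulate integer responses
    # (hypothetical 1-7 Likert items here) and report the rounded mean.
    set.seed(42)
    n <- 28
    fake_scores <- sample(1:7, n, replace = TRUE)
    fake_mean <- round(mean(fake_scores), 2)
    print(fake_mean)
    print(grim_consistent(fake_mean, n))   # TRUE by construction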
michaelmior almost 9 years ago
While it's true that fractional ages are pretty much never used, ages do not necessarily have to be recorded as whole numbers.
_bdog almost 9 years ago
I work together with a psychology professor (formerly head of department) now and then. She said there's a large problem with students, even graduates, just not understanding statistics and math properly (or *at all*).
bluenose69 almost 9 years ago
Some of these might be simple errors, with results being typed in from other documents. Authors who are worried about making such errors might want to consider using methods of reproducible research, e.g. writing "blah had mean value `round(mean(x), 3)` (n=`length(x)`)" or similar in Sweave, where the items in the back-ticks are R code working on the actual data. This is a bit more work, but it prevents transcription errors, and also a pernicious type of error that comes about by adjusting the data analysis during the writing process.
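A minimal sketch of that workflow as a Sweave fragment, with a hypothetical file, data set, and variable names (in Sweave proper the inline expressions are written with \Sexpr{} rather than back-ticks):

    % report.Rnw -- hypothetical Sweave source; the numbers in the prose are
    % recomputed from the actual data on every build, so nothing is retyped.
    <<setup, echo=FALSE>>=
    scores <- read.csv("scores.csv")$age   # hypothetical data file and column
    @
    Participants had a mean age of \Sexpr{round(mean(scores), 2)} years
    (SD = \Sexpr{round(sd(scores), 2)}, n = \Sexpr{length(scores)}).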
bane almost 9 years ago
Also see: https://en.wikipedia.org/wiki/Benford%27s_law for a related test used in various fields.
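A minimal sketch of a Benford-style check in R, on hypothetical published figures: compare the observed leading-digit frequencies with the Benford expectation P(d) = log10(1 + 1/d).

    # Benford's law check: leading digits of naturally occurring figures should
    # follow P(d) = log10(1 + 1/d); strong deviation can flag fabricated numbers.
    leading_digit <- function(x) {
      as.integer(substr(formatC(abs(x), format = "e"), 1, 1))
    }

    # Hypothetical published figures (e.g. reported cell counts or budget lines).
    figures <- c(1243, 187, 2981, 1120, 1754, 96, 1410, 2204, 318, 1027,
                 1332, 154, 118, 2660, 1905, 1041, 2873, 129, 1486, 1764)

    observed <- table(factor(leading_digit(figures), levels = 1:9))
    expected <- log10(1 + 1 / (1:9))

    print(round(observed / length(figures), 2))  # observed digit frequencies
    print(round(expected, 2))                    # Benford expectation

    # Chi-squared goodness-of-fit gives a rough sense of the deviation
    # (with only 20 values this is illustrative, not conclusive).
    print(chisq.test(as.vector(observed), p = expected))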
moyix almost 9 years ago
Wow, this is a really clever technique, and the results are really alarming. I suspect we'll see some careers unravel as a result of it.
progers7 almost 9 years ago
I've done a similar trick where the ratio of two secret integers is released publicly with many significant digits, and you can sometimes find the two integers by brute-forcing the division over all possible values. Does anyone know a name for this approach?
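A minimal sketch of that brute force in R, assuming a hypothetical published ratio and a plausible upper bound on the denominator; it amounts to searching for the fraction with a small denominator that rounds to the published value (closely related to continued-fraction / rational-reconstruction approximation).

    # Given a ratio published to many decimal places, search for integer pairs
    # (a, b) with b below some plausible bound whose quotient rounds to it.
    recover_integers <- function(published, digits, max_denominator = 1000) {
      tol <- 0.5 / 10^digits
      hits <- list()
      for (b in 1:max_denominator) {
        a <- round(published * b)
        if (a >= 1 && abs(a / b - published) < tol) {
          hits[[length(hits) + 1]] <- c(a = a, b = b)
        }
      }
      do.call(rbind, hits)
    }

    # Hypothetical example: 37 successes out of 83 trials, published as 0.44578313.
    print(recover_integers(0.44578313, digits = 8, max_denominator = 200))
    # With enough digits only b = 83 (and its multiples) survive, exposing (37, 83).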
flerchin almost 9 years ago
Good. Fuck the liars and cheats right in their... careers.