Another simple thing you can check in a paper to see if it is credible is the p-curve and related methods from Uri Simonsohn et al.

http://www.p-curve.com/

You just look at the distribution of the p values used to support the authors' hypotheses. If that distribution is skewed toward high values, i.e. the significant p values pile up just below .05 rather than near zero, then something fishy is going on.
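A minimal sketch of one variant of that check, assuming you have already extracted the reported p values (the function name and the .025 split are illustrative, not the official p-curve procedure):

    # Crude sketch of the p-curve idea (not the full app at p-curve.com).
    # Genuine effects tend to produce mostly very small p values, while
    # p-hacked results pile up just under .05, so compare the two halves
    # of the significant range.
    p_curve_skew <- function(p_values) {
      sig <- p_values[p_values < 0.05]
      # "greater" asks whether suspiciously many p values sit in the upper half
      binom.test(sum(sig > 0.025), length(sig), p = 0.5, alternative = "greater")
    }

    p_curve_skew(c(0.041, 0.049, 0.038, 0.012, 0.047))   # example input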
Interesting approach -- get the scientific community to agree on the mathematical principles first, before anyone specifically is outed as cheating.

But this article feels like reading a teaser chapter of a bigger story.

> The amount of (toil) required to actually create data like this from scratch is (very) nightmarish. It’s a task drastically out of reach of the (foolish people) who’d try such a bush league stunt in the first place.

This assumes that all experiments lead to publications. We know there's a strong publication bias, that the bias favors positive results, and that it dramatically favors unintuitive positive results. Which means you need to find correlations where none were expected. How many experiments do you need to get a significant correlation when there is none? Hint: more than one.

It's also worth noting that it wouldn't be difficult to build a genetic algorithm that uses various statistical checks, including this one, as a fitness function.
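As a rough illustration of that point, here is a small simulation (sample size and seed are arbitrary) of how often a "significant" correlation turns up when the true correlation is zero:

    # How often does p < .05 appear when there is truly no effect?
    set.seed(1)
    p_values <- replicate(10000, {
      x <- rnorm(30)
      y <- rnorm(30)              # independent of x: the true correlation is zero
      cor.test(x, y)$p.value
    })
    mean(p_values < 0.05)         # roughly 0.05, i.e. about 1 null study in 20 "works"

So on average it takes on the order of twenty null experiments before one clears the .05 bar by chance.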
I don't agree with the idea that faking data is more difficult than running the experiment. A lot of university courses now teach R or something similar for running statistics, and relatively simple Monte Carlo simulations would produce results that satisfy the GRIM test.
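For reference, the consistency check itself is tiny. A minimal sketch of a GRIM-style check for means of integer-valued data (not the authors' code; the ±1 window and the helper name are my own choices):

    # GRIM idea: for integer data, reported_mean * n must round back to the
    # reported mean. The +/-1 window is a small tolerance for rounding conventions.
    grim_consistent <- function(reported_mean, n, digits = 2) {
      possible_sums  <- round(reported_mean * n) + (-1:1)
      possible_means <- round(possible_sums / n, digits)
      any(abs(possible_means - reported_mean) < 1e-9)
    }

    grim_consistent(5.19, 28)   # can a mean of 5.19 arise from 28 integer responses?

Which also illustrates the counterpoint: a fabricator who knows the test can simply generate integer data and report its true mean, and the check passes by construction.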
I work with a psychology professor (former head of department) now and then. She said there's a large problem with students, even graduates, just not understanding statistics and math properly (or *at all*).
Some of these might be simple errors, with results being typed in from other documents. Authors who are worried about making such errors might want to consider methods of reproducible research, e.g. writing "blah had mean value `round(mean(x), 3)` (n = `length(x)`)" or similar in Sweave, where the items in the back-ticks are R code working on the actual data. This is a bit more work, but it prevents transcription errors, and also a pernicious type of error that comes about when the data analysis is adjusted during the writing process but the numbers already in the text are not.
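Concretely, something like the following in a .Rnw file (the file name and variable are made up; Sweave's inline syntax is \Sexpr{} rather than back-ticks):

    % Minimal Sweave sketch: the numbers in the sentence come from the data itself.
    <<load-data, echo=FALSE>>=
    x <- read.csv("scores.csv")$score   # hypothetical data file and column
    @
    The treatment group had mean value \Sexpr{round(mean(x), 3)}
    (n = \Sexpr{length(x)}).

Re-running Sweave after any change to the analysis regenerates every reported number, so the text can never drift out of sync with the data.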
Also see https://en.wikipedia.org/wiki/Benford%27s_law for a related test, used to spot fabricated numbers in forensic accounting among other fields.
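A quick first-digit check is easy to sketch (illustrative only; the helper names are mine, and Benford's law only applies to data spanning several orders of magnitude):

    # Compare observed first-digit frequencies with Benford's expected ones.
    benford_expected <- log10(1 + 1 / (1:9))

    first_digits <- function(values) {
      as.integer(substr(formatC(abs(values), format = "e"), 1, 1))
    }

    benford_check <- function(values) {
      counts <- tabulate(first_digits(values), nbins = 9)
      chisq.test(counts, p = benford_expected)   # crude goodness-of-fit test
    }

    benford_check(rlnorm(500, meanlog = 8, sdlog = 2))   # log-normal data fits fairly well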
I've used a similar trick where the ratio of two secret integers is released publicly with many significant digits, and you can sometimes recover the two integers by brute-forcing the division over all plausible values. Does anyone know a name for this approach?
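A minimal sketch of that brute force (function name, bounds, and tolerance are placeholders; the tolerance should match the precision of the published ratio):

    # Scan plausible denominators and keep integer pairs that reproduce the ratio.
    recover_integers <- function(ratio, max_denominator = 10000, tol = 5e-8) {
      hits <- list()
      for (d in 1:max_denominator) {
        n <- round(ratio * d)                 # best integer numerator for this d
        if (abs(n / d - ratio) < tol) {
          hits[[length(hits) + 1]] <- c(numerator = n, denominator = d)
        }
      }
      do.call(rbind, hits)
    }

    recover_integers(0.7142857)   # the smallest-denominator hit is the reduced pair, 5/7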