The author's research is discussed in an excellent Atlantic article that many may find more accessible: <a href="http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/" rel="nofollow">http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/</a>
In statistics, you're supposed to specify the statistical model first, before running regressions on the data. But quite a few papers I've read (especially in finance) seem to go the other way around:<p>They run regressions on a data set, adding and subtracting independent variables until the t-values and standard errors start looking good.<p>Then they construct the linear model, assume the Gauss-Markov assumptions hold, and sometimes (though not always) try to explain the causal relationship between the variables.<p>This is obviously very wrong: once the variables have been selected by searching the same data, nobody has any clue what the distribution of the least-squares estimators for these models is, so the reported standard errors and p-values are meaningless. I've seen plenty of examples of this, and that alone is enough to void the results of the paper (even if the model they come up with is somewhat plausible).
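To make the failure mode concrete, here is a minimal sketch in Python of that kind of specification search run against pure noise. The sample sizes, thresholds, and variable names are illustrative assumptions of mine, not taken from any particular paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs, n_candidates = 100, 50

y = rng.standard_normal(n_obs)                  # outcome: pure noise
X = rng.standard_normal((n_obs, n_candidates))  # candidate regressors: also pure noise

# "Specification search": keep each candidate whose lone t-test clears 5%.
kept = [j for j in range(n_candidates)
        if sm.OLS(y, sm.add_constant(X[:, j])).fit().pvalues[1] < 0.05]

# Refit on the survivors; the summary now shows "significant" coefficients
# even though every true coefficient is zero by construction.
final = sm.OLS(y, sm.add_constant(X[:, kept])).fit()
print(f"kept {len(kept)} of {n_candidates} noise regressors: {kept}")
print(final.summary())
```

With 50 unrelated candidates you expect two or three to clear the 5% bar by chance alone, and the refitted model then reports t-values computed as if those regressors had been chosen in advance.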
Well, that's why the whole "replication" thing is important. One published result is interesting, but rarely definitive, and possibly wrong. (Or at least unusual for possibly difficult-to-determine reasons.)<p>This is another good reason to ignore the media hype for every new paper that comes out. (Besides the fact that journalists perform lossy compression on data.)<p>But this is how science is supposed to work: publish your results and see whether others confirm your findings, because <i>you might be wrong</i> even if you did everything correctly and honestly to the best of your ability.
This paper has gotten way too much press for an oversimplified model of science. Here's the thing: if results hold up to scrutiny, the authors are eager to share code and plasmids/samples. If not, they are a lot more squirrelly. Outside replication is what keeps the machine moving forward, is fairly readily proxied by citation rates, and yet is not captured by Ioannidis' simple model.
This is an interesting followup as well: <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1808082/" rel="nofollow">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1808082/</a>
Self reference much? This <i>is</i> published research...<p>Clearly these findings are false... or maybe not? Dammit.
<a href="http://en.wikipedia.org/wiki/Liar_paradox" rel="nofollow">http://en.wikipedia.org/wiki/Liar_paradox</a>
First of all, the author of this piece works in a Department of Hygiene and Epidemiology. Research is done differently across disciplines, so it's dangerous to extrapolate this to other fields. For example, some fields find alpha < 0.05 acceptable and others do not.<p>But research is very weird indeed. The more conference/journal articles you read, the less you trust them. I mean, say a field accepts results at alpha < 0.05. Then even if every study is carried out perfectly, about 5% of tests of true null hypotheses will come out "significant" by pure chance; and since the fraction of false findings also depends on statistical power and on how many of the tested hypotheses are true in the first place, the published record can end up much worse than 5% wrong.<p>Feel free to correct me if you have a better grasp of statistics and find what I say to be wrong.
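To see why alpha alone doesn't give the fraction of false findings, here is a minimal back-of-the-envelope sketch in Python; the alpha, power, and prior values are illustrative assumptions of mine, not numbers from the paper:

```python
# Fraction of "significant" findings that are false, as a function of
# alpha, statistical power, and the prior odds that a tested hypothesis
# is true. All three numbers below are illustrative assumptions.
alpha = 0.05   # false-positive rate when the null hypothesis is true
power = 0.80   # chance of detecting an effect that really exists
prior = 0.10   # assume 1 in 10 tested hypotheses is actually true

true_positives  = prior * power          # real effects correctly found
false_positives = (1 - prior) * alpha    # true nulls flagged by chance

share_false = false_positives / (true_positives + false_positives)
print(f"false share of 'significant' findings: {share_false:.0%}")  # ~36%
```

Under these assumptions roughly a third of "significant" results are false despite alpha = 0.05, and lower power or longer prior odds push the share higher, which is essentially the mechanism behind Ioannidis's title.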
So what are the implications of this information for the average individual? He is basically saying that the conventional wisdom on medical questions is most often incorrect.
For some reason I was reminded of this blog post:<p><a href="http://jsomers.net/blog/it-turns-out" rel="nofollow">http://jsomers.net/blog/it-turns-out</a>
It's because researchers slack off at their jobs just like everyone else, and they have bills to pay in the meantime. Now imagine your doctor or law enforcement, and the mess they cause when they slack and cut corners just to produce "product" and justify their jobs.
This title yelled "paradox!" at me. It's funny to see it coming from a ".gov" website.<p>For those who need clarification: if this published research and its title are true, then it is saying that research like itself is usually false, which contradicts the original assumption that it is true.<p>If this published research and its title are false, then research like itself is usually true, since what it's saying must be wrong; this contradicts the original assumption that it is false.
So if this research is not false (unlikely, according to the author), then mankind would be moving backwards, unless non-scientific reasoning compensates for the failure of science. Medical treatment would get constantly worse, people would be misdiagnosed and mistreated more than ever, and death rates after cancer and cardiac events would rise.