The vast majority of scientific papers are not single experiments with one p-value, but rather anywhere from a handful to a dozen or more experiments, only some of which can be reduced to a p-value. And in most biological research, at least two lines of evidence are required before a reviewer will accept a claim (e.g. "OK, you may have found something, now verify it with a PCR.").

So this entire setup is just kind of crap, and not representative of actual scientific research.

In addition, this simple point, which is quite interesting and necessary to keep in mind when interpreting multiple p-values, is widely acknowledged in the field, which is why False Discovery Rate methods started to be used as far back as the mid-90s.
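To make the FDR idea concrete, here is a minimal sketch of the classic procedure from that era (Benjamini and Hochberg, 1995); the p-values below are invented purely for illustration:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Indices of hypotheses rejected while controlling the FDR at level q."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k such that p_(k) <= (k/m) * q ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:
            k_max = rank
    # ... and reject the k_max smallest p-values, even if some
    # intermediate ranks failed the comparison.
    return sorted(order[:k_max])

# A dozen p-values from one hypothetical multi-experiment paper:
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060,
         0.074, 0.205, 0.212, 0.216, 0.222, 0.251]
print(benjamini_hochberg(pvals))  # [0, 1]: naively five of the twelve
                                  # clear p < 0.05, but only two survive
```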
The point itself was first published as a "The sky is falling, what are all you idiot medical researchers doing?!" type of paper by Ioannidis, which is a great way to make a name for oneself. However, even his own interpretation did not hold up well, and he has stopped pushing the point. To summarize an extensive comment on Metafilter [1]:

>Why Most Published Research Findings Are False: Problems in the Analysis

>The article published in PLoS Medicine by Ioannidis makes the dramatic claim in the title that “most published research claims are false,” and has received extensive attention as a result. The article does provide a useful reminder that the probability of hypotheses depends on much more than just the p-value, a point that has been made in the medical literature for at least four decades, and in the statistical literature for decades previous. This topic has renewed importance with the advent of the massive multiple testing often seen in genomics studies.

>Unfortunately, while we agree that there are more false claims than many would suspect—based both on poor study design, misinterpretation of p-values, and perhaps analytic manipulation—the mathematical argument in the PLoS Medicine paper underlying the “proof” of the title's claim has a degree of circularity. As we show in detail in a separately published paper, Dr. Ioannidis utilizes a mathematical model that severely diminishes the evidential value of studies—even meta-analyses—such that none can produce more than modest evidence against the null hypothesis, and most are far weaker. This is why, in the offered “proof,” the only study types that achieve a posterior probability of 50% or more (large RCTs [randomized controlled trials] and meta-analysis of RCTs) are those to which a prior probability of 50% or more is assigned. So the model employed cannot be considered a proof that most published claims are untrue, but is rather a claim that no study or combination of studies can ever provide convincing evidence.

>ASSESSING THE UNRELIABILITY OF THE MEDICAL LITERATURE: A RESPONSE TO "WHY MOST PUBLISHED RESEARCH FINDINGS ARE FALSE"
>A recent article in this journal (Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124) argued that more than half of published research findings in the medical literature are false. In this commentary, we examine the structure of that argument, and show that it has three basic components:
>1) An assumption that the prior probability of most hypotheses explored in medical research is below 50%.
>2) Dichotomization of P-values at the 0.05 level and introduction of a “bias” factor (produced by significance-seeking), the combination of which severely weakens the evidence provided by every design.
>3) Use of Bayes theorem to show that, in the face of weak evidence, hypotheses with low prior probabilities cannot have posterior probabilities over 50%.
>Thus, the claim is based on a priori assumptions that most tested hypotheses are likely to be false, and then the inferential model used makes it impossible for evidence from any study to overcome this handicap. We focus largely on step (2), explaining how the combination of dichotomization and “bias” dilutes experimental evidence, and showing how this dilution leads inevitably to the stated conclusion. We also demonstrate a fallacy in another important component of the argument: that papers in “hot” fields are more likely to produce false findings.
>We agree with the paper’s conclusions and recommendations that many medical research findings are less definitive than readers suspect, that P-values are widely misinterpreted, that bias of various forms is widespread, that multiple approaches are needed to prevent the literature from being systematically biased, and that more data are needed on the prevalence of false claims. But calculating the unreliability of the medical research literature, in whole or in part, requires more empirical evidence and different inferential models than were used. The claim that “most research findings are false for most research designs and for most fields” must be considered as yet unproven.

[1] http://www.metafilter.com/133102/There-is-no-cost-to-getting-things-wrong#5256675
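For anyone who wants to see the circularity for themselves: below is a quick back-of-the-envelope reconstruction of the post-study-probability formula from Ioannidis's Table 2, including his bias term u (assuming I've transcribed the formula correctly; the parameter values are purely illustrative, not claims about any real field):

```python
def ppv(R, beta, alpha=0.05, u=0.0):
    """Post-study probability that a 'significant' finding is true,
    per the model in Ioannidis (2005), Table 2.

    R     -- pre-study odds that the probed relationship is real
    beta  -- type II error rate (power = 1 - beta)
    alpha -- significance threshold, dichotomized at 0.05
    u     -- "bias": fraction of otherwise-negative results that get
             reported as significant anyway
    """
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Well-powered RCT, modest bias, but even (1:1) prior odds:
print(round(ppv(R=1.00, beta=0.20, u=0.10), 2))  # 0.85
# The same design quality aimed at a long-shot hypothesis:
print(round(ppv(R=0.01, beta=0.20, u=0.10), 2))  # 0.05
# Even a perfect study (full power, zero bias) cannot rescue that prior:
print(round(ppv(R=0.01, beta=0.00, u=0.00), 2))  # 0.17
```

The only scenarios that clear 50% are the ones handed roughly even prior odds to begin with, which is exactly the critics' point: once the p-value is dichotomized at 0.05 and the bias term is added, the evidence from any single study is capped, and no design can move a low prior past 50%.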