"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine.
In simple terms, the essay states that scientists use hypothesis testing to determine whether scientific discoveries are significant.
Statistical significance is formalized in terms of probability, and its p-value measure is reported in the scientific literature as a screening mechanism. Ioannidis posited assumptions about the way researchers perform and report these tests; he then constructed a statistical model which indicates that most published findings are likely false positive results. While the paper's general arguments recommending reforms in scientific research methodology were well received, Ioannidis was criticized over the validity of his model and his claim that the majority of scientific findings are false.
Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by $P(\text{True})$. When a study is conducted, the probability that a positive result is obtained is $P(+)$. Given these two quantities, the probability that a paper reporting a positive result is actually correct, known as the positive predictive value (PPV), follows from Bayes' theorem:

$$\mathrm{PPV} = P(\text{True} \mid +) = \frac{(1-\beta)\,P(\text{True})}{(1-\beta)\,P(\text{True}) + \alpha\,\bigl(1 - P(\text{True})\bigr)}$$

where $\alpha$ is the type I error rate (false positives) and $\beta$ is the type II error rate (false negatives); the statistical power is $1-\beta$. However, the simple formula for PPV derived from Bayes' theorem does not account for bias in study design or reporting. Let $u$ be the probability that an analysis was only published due to researcher bias. Then the PPV becomes:

$$\mathrm{PPV} = \frac{(1-\beta)\,P(\text{True}) + u\,\beta\,P(\text{True})}{(1-\beta)\,P(\text{True}) + u\,\beta\,P(\text{True}) + \bigl(\alpha + u\,(1-\alpha)\bigr)\,\bigl(1 - P(\text{True})\bigr)}$$

It is customary in most scientific research to set $\alpha = 0.05$ and to desire a power of 0.8 (so $\beta = 0.2$). If a field has a baseline probability of a true result of $P(\text{True}) = 0.1$, and a study is free of bias ($u = 0$), there is still a 36% probability that a paper reporting a positive result will be incorrect; if the base probability of a true result is lower, then this will push the PPV lower too.
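The arithmetic behind these figures is easy to check. The following minimal Python sketch (the function name and parameter defaults are ours, chosen to match the illustrative values above) evaluates the bias-adjusted PPV formula:

```python
def ppv(p_true, alpha=0.05, beta=0.20, u=0.0):
    """Positive predictive value under Ioannidis's model.

    p_true -- baseline probability that a tested hypothesis is true
    alpha  -- type I error rate (false positive rate)
    beta   -- type II error rate (statistical power is 1 - beta)
    u      -- probability a result is published only due to bias
    """
    true_pos = (1 - beta) * p_true + u * beta * p_true
    false_pos = (alpha + u * (1 - alpha)) * (1 - p_true)
    return true_pos / (true_pos + false_pos)

# Unbiased field with a 10% base rate of true hypotheses:
print(f"PPV, no bias:   {ppv(0.1):.2f}")            # 0.64, i.e. 36% of positives are false
# The same field with a modest amount of bias:
print(f"PPV, u = 0.2:   {ppv(0.1, u=0.2):.2f}")     # 0.28, well below one half
# Underpowered studies (power 0.4 instead of 0.8):
print(f"PPV, low power: {ppv(0.1, beta=0.6):.2f}")  # 0.47, most positives now false
```

Even this small grid of values shows how quickly bias and low power erode the reliability of a positive result.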
Furthermore, there is strong evidence that the average statistical power of a study in many scientific fields is well below the benchmark level of 0.8.
[2][3][4] Given the realities of bias, low statistical power, and the small proportion of tested hypotheses that are true, Ioannidis concludes that the majority of studies in a variety of scientific fields are likely to report results that are false.
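To make the low-power point concrete, take the illustrative values from above ($P(\text{True}) = 0.1$, $\alpha = 0.05$, no bias) but reduce the power from 0.8 to 0.4:

$$\mathrm{PPV} = \frac{0.4 \times 0.1}{0.4 \times 0.1 + 0.05 \times 0.9} = \frac{0.040}{0.085} \approx 0.47$$

Under these assumptions, more than half of reported positive results would be false even before any bias is considered.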
In addition to the main result, Ioannidis lists six corollaries for factors that can influence the reliability of published research.
Research findings in a scientific field are less likely to be true the smaller the studies conducted; the smaller the effect sizes; the greater the number and the lesser the selection of tested relationships; the greater the flexibility in designs, definitions, outcomes, and analytical modes; the greater the financial and other interests and prejudices; and the hotter the scientific field (with more scientific teams involved). Ioannidis has added to this work by contributing to a meta-epidemiological study which found that only 1 in 20 interventions tested in Cochrane Reviews have benefits that are supported by high-quality evidence.
[5] He also contributed to research suggesting that the quality of this evidence does not seem to improve over time.
[6] Despite skepticism about extreme statements made in the paper, Ioannidis's broader argument and warnings have been accepted by a large number of researchers.
[7] The growth of metascience and the recognition of a scientific replication crisis have bolstered the paper's credibility, and led to calls for methodological reforms in scientific research.
[8][9] In commentaries and technical responses, statisticians Goodman and Greenland identified several weaknesses in Ioannidis's model.
[10][11] They rejected his use of dramatic and exaggerated language, in particular the claims that he had "proved" that most research findings are false and that "most research findings are false for most research designs and for most fields", yet they agreed with his paper's conclusions and recommendations.
Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and conducted their own investigation, which estimated the false positive rate in biomedical studies at around 14%, not over 50% as Ioannidis asserted.
[12] Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians.
[13] Statistician Ulrich Schimmack reinforced the importance of an empirical basis for such models by noting that the false discovery rate reported in some scientific fields is not the actual false discovery rate, because non-significant results are rarely reported.
Ioannidis's theoretical model fails to account for this, but when a statistical method ("z-curve") that estimates the number of unpublished non-significant results is applied to two examples, the estimated false positive rate is between 8% and 17%, not greater than 50%. [14]
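Schimmack's point about the missing denominator can be illustrated with a short Monte Carlo simulation. The Python sketch below (parameter values are assumed for illustration; it is a toy model of the file-drawer effect, not an implementation of z-curve) publishes every significant result but only a small fraction of non-significant ones:

```python
import random

random.seed(1)

N = 100_000          # number of hypotheses tested in a hypothetical field
P_TRUE = 0.10        # assumed base rate of true hypotheses
ALPHA, POWER = 0.05, 0.80
PUB_NONSIG = 0.05    # assumed chance a non-significant result is published at all

sig_true = sig_false = pub_nonsig = 0
for _ in range(N):
    is_true = random.random() < P_TRUE
    significant = random.random() < (POWER if is_true else ALPHA)
    if significant:                      # significant results are always published
        if is_true:
            sig_true += 1
        else:
            sig_false += 1
    elif random.random() < PUB_NONSIG:   # most non-significant results stay in the file drawer
        pub_nonsig += 1

sig_total = sig_true + sig_false
print(f"False discovery rate among significant results: {sig_false / sig_total:.2f}")
print(f"Significant share of the published literature:  {sig_total / (sig_total + pub_nonsig):.2f}")
```

With these assumed rates, the false discovery rate among significant results comes out near 0.36, matching the PPV formula above, yet roughly three quarters of published papers report significant findings even though only about one test in eight is significant. This is why a method such as z-curve must first estimate the unpublished non-significant studies before the actual false positive rate can be inferred from the literature.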
Despite these weaknesses, there is nonetheless general agreement with the problem Ioannidis describes and with his recommendations; yet his tone has been described as "dramatic" and "alarmingly misleading", which runs the risk of making people unnecessarily skeptical or cynical about science.
[10][15] A lasting impact of this work has been awareness of the underlying drivers of the high false positive rate in clinical medicine and biomedical research, and efforts by journals and scientists to mitigate them.