Essay
Why Most Published Research Findings Are False

  • John P. A. Ioannidis
  • Published: August 30, 2005
  • DOI: 10.1371/journal.pmed.0020124

Reader Comments (31)

The Clinical Interpretation of Research

Posted by plosmedicine on 30 Mar 2009 at 23:45 GMT

Author: Stephen Pauker, MD
Position: Professor of Medicine
Institution: Tufts-New England Medical Center
E-mail: spauker@tufts-nemc.org
Submitted Date: September 11, 2005
Published Date: September 12, 2005
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.

John Ioannidis emphasizes the central role of prior probabilities [1]. His conclusion rests on the presumed low probability that a hypothesis was true before the study.

Unfortunately, his formulation relates the post-study probability that the study's conclusion is true to the pre-study odds. The results might have been clearer had he also plotted the relation of odds to probability (a curvilinear relationship), assuming the study carried no information. Further, the various graphs are right-truncated at pre-study odds, R, of 1.0 (a probability of 0.5), although his examples go as high as pre-study odds of 2.0. A positive study must, by definition, increase the likelihood that the hypothesis is true. It might have been clearer had Ioannidis chosen to relate odds to odds or probability to probability; in both cases a neutral study would produce a straight line along a 45-degree diagonal.
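The odds-probability conversion underlying this point can be sketched in a few lines of code (an illustration added here, not part of the original comment); it shows why the mapping is curvilinear rather than a 45-degree line:

```python
def prob_to_odds(p):
    """Convert a probability p to odds, p / (1 - p)."""
    return p / (1.0 - p)

def odds_to_prob(odds):
    """Convert odds back to a probability, odds / (1 + odds)."""
    return odds / (1.0 + odds)

# Equal steps in odds are not equal steps in probability - the relationship
# is curvilinear, which is why odds-vs-probability plots bend away from
# the diagonal that an odds-vs-odds plot of a neutral study would follow.
for odds in (0.5, 1.0, 2.0):
    print(f"odds {odds:>4} -> probability {odds_to_prob(odds):.3f}")
```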

The pre-study to post-study relation can more simply be expressed using the odds-likelihood form of Bayes' rule - i.e., the post-study odds equal the pre-study odds times the likelihood ratio (LR) of the study. Ioannidis's equations for positive predictive value then reduce, in odds form, to the simple product R x LR. For a single unbiased study, the LR equals (1-beta)/alpha. When incorporating study bias u, as defined by Ioannidis, the LR equals [1-beta*(1-u)]/[alpha*(1-u)+u]. For a typical study with alpha equal to 0.05 and beta equal to 0.2 (i.e., with power equal to 0.8), the LR equals 16. When the pre-study odds are less than 1:16 (a probability of 0.0588), the post-study odds will be less than 1.0 - i.e., the study's hypothesis will be more likely false than true.
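The arithmetic above can be sketched as follows (an added illustration using the symbols defined in the comment; the function names are my own):

```python
def likelihood_ratio(alpha, beta, u=0.0):
    """LR of a positive study with bias u, per the formula in the comment:
    [1 - beta*(1-u)] / [alpha*(1-u) + u].
    With u = 0 this reduces to the unbiased LR, (1 - beta) / alpha."""
    return (1.0 - beta * (1.0 - u)) / (alpha * (1.0 - u) + u)

def post_study_odds(pre_study_odds, lr):
    """Odds-likelihood form of Bayes' rule: post-study odds = R x LR."""
    return pre_study_odds * lr

def ppv(post_odds):
    """Positive predictive value (probability) from post-study odds."""
    return post_odds / (1.0 + post_odds)

lr = likelihood_ratio(alpha=0.05, beta=0.20)
print(lr)                            # 16, up to floating-point rounding
print(post_study_odds(1 / 16, lr))   # pre-study odds 1:16 -> post-study odds of about 1
print(likelihood_ratio(alpha=0.05, beta=0.20, u=0.1))  # bias u = 0.1 lowers the LR
```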

For non-Bayesians, statistical significance testing presumes an uninformative prior probability - i.e., pre-study odds (R) of 1. Then the LR would merely need to exceed 1 for the study's conclusions to be more likely true than false. At the common significance levels (alpha) of 0.05 and 0.01, the requisite study powers would merely need to exceed 0.05 and 0.01, respectively, corresponding to maximum type II error rates (beta) of 0.95 and 0.99. Such lax requirements would almost always be met by a published study. Hence, the common belief that the vast majority of studies have valid conclusions would be correct, if we could assume that the pre-study odds are truly uninformative. However, as Ioannidis suggests, this is unlikely to be the case.
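The break-even condition described above can be checked numerically (a small added illustration): at pre-study odds of 1, the conclusion is more likely true than false whenever LR > 1, i.e., whenever power (1 - beta) exceeds alpha.

```python
def likelihood_ratio(alpha, beta):
    """Unbiased LR of a positive study: (1 - beta) / alpha."""
    return (1.0 - beta) / alpha

# The lax boundary cases: power exactly equal to alpha gives the
# break-even LR of 1 (up to floating-point rounding), so any power
# above alpha tips a study with uninformative priors toward "true".
print(likelihood_ratio(alpha=0.05, beta=0.95))
print(likelihood_ratio(alpha=0.01, beta=0.99))
```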

Two more corollaries might be added. First, the higher the pre-study odds that the study's hypothesis is true, the lower the power (study size and effect size) required to make the study's findings more likely true than false. Second, when studies are published, the investigators should estimate the pre-study odds and report the LR implied by the observed effect.

From the perspective of an epidemiologist or a statistician, the relevant question is whether the study's hypothesis is true - i.e., whether the probability of H1 exceeds 0.5. For clinicians and their patients, the relevant question is whether a particular strategy should be followed in an individual patient or a subset of similar patients. That decision (or recommendation to the patient) will depend on the pre-study likelihood of benefit in that patient and, if the diagnosis in that patient is uncertain, on the relative magnitude of the benefits and risks of that strategy. For many such decisions, the "more likely true than false" criterion may not be the best decision rule. For serious diseases and treatments of only modest risk, post-study probabilities of considerably less than 0.5 may be sufficient to justify treatment [2].
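The threshold idea in reference [2] can be sketched numerically (an added illustration; the benefit and risk values below are assumed for the example, not taken from the references):

```python
def treatment_threshold(benefit, risk):
    """Treatment-threshold probability in the spirit of Pauker and
    Kassirer [2]: treat when the probability of disease exceeds
    risk / (benefit + risk). Here `benefit` is the net gain from
    treating the diseased and `risk` the net harm of treating the
    healthy; both values below are purely illustrative."""
    return risk / (benefit + risk)

# A serious disease with a modest-risk treatment: if the benefit is
# assumed to be nine times the risk, a post-study probability of only
# 0.1 - well under 0.5 - already justifies treatment.
print(treatment_threshold(benefit=9.0, risk=1.0))  # 0.1
```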

Ioannidis's provocative essay is a timely call for careful consideration of published studies. The odds-likelihood formulation suggested herein may be helpful in providing a more intuitive model. Clinicians now need to take it to the next step.

References
1. Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2(8): e124.
2. Pauker SG, Kassirer JP (1975) N Engl J Med 293: 229.

No competing interests declared.