Essay

Essays are opinion pieces on a topic of broad interest to a general medical audience.

Why Most Published Research Findings Are False

  • John P. A. Ioannidis
  • Published: August 30, 2005
  • DOI: 10.1371/journal.pmed.0020124

Reader Comments (31)


Truth, probability and frameworks

Posted by plosmedicine on 30 Mar 2009 at 23:45 GMT

Author: Jonathan Wren
Institution: Dept. of Botany & Microbiology University of Oklahoma
E-mail: jonathan.wren@ou.edu
Submitted Date: September 12, 2005
Published Date: September 12, 2005
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.

James T. Kirk: Harry lied to you, Norman. Everything Harry says is a lie. Remember that, Norman: Everything he says is a lie.

Harry Mudd: Now I want you to listen to me very carefully, Norman: I... am... lying.

-From Star Trek, the episode "I, Mudd"

Although John Ioannidis [1] brings up several good points about over-reliance upon formal, yet arbitrary, statistical cut-offs and bias against the reporting of negative results, his claim that most published research findings are false is somewhat paradoxical. Ironically, the truer his premise is, the less likely his conclusions are. He, after all, relies heavily upon other studies to support his premise, so if most (i.e., >50%) of his cited studies are themselves false (including the 8 of 37 that cite his own work), then his argument is automatically on shaky ground.

As mentioned in the editorial [2], scientific studies don't offer truth, per se. Even when studies appear in the best journals, they offer probabilistic assertions. Ioannidis' statement that "the probability that a research finding is indeed true depends on the prior probability of it being true" is really begging the question; this, after all, is the problem. We cannot know such probabilities a priori, and guessing at such probabilities and/or parameters (as he does in his SNP association example) surely could not be less biased than any statistical test of significance.

The key problem in Ioannidis' PPV formula for the post-study probability that a relationship is true, PPV = (1 - β)R/(R - βR + α), where R is the ratio of true relationships to non-relationships, is that one can postulate a near-infinite number of non-relationships. Just extending his SNP example: why assume each SNP acts independently? Rather than 99,990 SNPs not being associated with schizophrenia, we have potentially on the order of 99,990^n non-relationships, where n is the number of potentially interacting SNPs. As n grows, R becomes very small very quickly, and PPV effectively zero. Taken to the extreme, this would imply that all empirical studies are fruitless.

One of the most important factors in moving towards the truth that was not discussed is fitting discoveries into a framework.
Optimally, if a relationship is true, it should have more than one implication, permitting validation from multiple angles. For example, a SNP causally associated with schizophrenia must affect something on the molecular level, whether genomic, transcriptional, post-transcriptional, translational, or post-translational. In turn, these molecules should interact differently with each other and/or with other molecules within the cell, the tissue, or the system as a whole. If Norman, the android from Star Trek quoted at the beginning, had been equipped with the capacity to evaluate statements within a framework, he never would have short-circuited as a result of Kirk's paradox. He could have entertained the possibility that either Kirk was lying about Harry or that Harry's statement was incomplete (i.e., lying about what?). Similarly, repeatedly examining and re-examining any particular study to resolve the true/not-true paradox via statistical arguments alone can short-circuit our patience. We should simultaneously seek to identify the framework by which implications can be tested, and I would argue that the more important the finding, the more testable implications it has.
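The point about R can be made concrete with a few lines of Python (a minimal sketch only: α = 0.05, β = 0.20, and a count of 10 true relationships follow Ioannidis' SNP example, while the n-way interaction counts are this comment's illustration, not figures from the paper):

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Post-study probability that a claimed relationship is true,
    per Ioannidis' formula: PPV = (1 - beta)R / (R - beta*R + alpha).
    R is the pre-study ratio of true relationships to non-relationships."""
    return (1 - beta) * R / (R - beta * R + alpha)

# Independent SNPs: ~10 true relationships among ~99,990 candidates.
print(ppv(10 / 99_990))  # PPV ~ 0.0016

# Allowing n-way SNP interactions multiplies the non-relationships,
# so R shrinks roughly as 99,990^n and PPV collapses toward zero.
for n in (1, 2, 3):
    R = 10 / 99_990 ** n
    print(n, ppv(R))
```

Even at n = 2 the post-study probability is already on the order of 10^-8, which is the sense in which, taken to the extreme, the formula renders all empirical studies fruitless.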

References
[1] Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2(8): e124.
[2] The PLoS Medicine Editors (2005) Minimizing mistakes and embracing uncertainty. PLoS Med 2(8): e272.

No competing interests declared.