Hierarchies of science: what goes on top?
Posted by plosmedicine on 31 Mar 2009 at 00:23 GMT
Author: John Ioannidis
Position: No occupation was given
Institution: Department of Hygiene and Epidemiology, University of Ioannina School of Medicine
Submitted Date: March 07, 2008
Published Date: March 10, 2008
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.
The fight between the randomized and observational camps has long been polarized. Jan Vandenbroucke makes a valiant conciliatory effort. His thoughtful masterpiece raises many important issues.
Vandenbroucke avoided the charged words “evidence-based medicine” and “meta-analysis” in the jungle of hierarchies. Returning from a safari, he carefully mentioned no lions, elephants, or monkeys. However, perhaps in all hierarchies the top layer is not trials, observational studies, or case reports, but the totality of evidence. Single studies are pieces of a larger puzzle, and completing the puzzle may require different studies and designs. This may also be a healthier perspective on evidence-based medicine than the early polemical views in which only randomized trials mattered.
While there should be freedom to exploit all research designs imaginatively, integration of evidence requires complete transparency, exact reporting, and systematization. Part of the weakness of non-randomized designs is not intrinsic (lack of randomization, confounding, etc.) but stems from the selective availability of information and poor capacity for integration. This can, of course, also affect randomized designs. An interest simply in significant findings leads to a distorted literature filled with selectively analyzed and reported research. Scientific inquiry eventually becomes bias (that is, doing everything possible to see associations where they don't exist).1 I don't think that scientific progress would be threatened if discoverers were fully transparent about what they did and how they made the discovery. The pathway might be convoluted – one more reason to describe it accurately. One may argue that basic scientists can legitimately select which information to propel, using biological reasoning and analogy to make sense of the data. However, as the complexity of biological processes is revealed and we realize that we are dealing with millions of known and probably zillions of unknown biological factors, resorting to biological plausibility and analogy becomes almost impossible. Biological thinking should not give an alibi to selective reporting.
Then, if we have the option of comparing a randomized design vs. an uncontrolled case report for discovery or explanation, I think the randomized design is less biased, other things being equal. Even in quick-and-dirty discovery experiments, robust experimental procedures are important; these may include proper randomization, controls, blinded reading, and quality checks. Randomization is not a nuisance that some boring clinicians have to perform for a product developed by an even more boring and greedy company. Robust experimental design (including randomization, when pertinent) may be equally essential for studying neuronal cultures, C. elegans, or human beings. The application of randomization per se can be boring or innovative. Some clinical trials require extremely innovative thinking in their design; conversely, much discovery research can be dull, “run of the mill” work – and vice versa, of course. The meandering path of science is not due just to the unavoidably convoluted path of discovery; the lack of penetration of robust scientific methods into some basic and observational research keeps the fruitless meandering going.
One may argue that many or most new discoveries arise totally unplanned out of smart case observations. I am not sure where this evidence comes from.2 Even if so, are we happy with this? Science has definitely made progress. But would progress be stalled if basic scientists started from more controlled/randomized efforts in their work rather than just waiting for case reports to arise? Perhaps, in the past and current discovery path, the yield of major discoveries has been amazingly low because we were stuck on haphazard observation and biological rationale. The revolution in complex disease genetics through genome-wide association studies is one example of what systematic approaches can achieve in accelerating discovery.3
Finally, there can be a major "loss function" if discovery research propels some low-credibility beliefs and these become entrenched in the literature.4 These then lend support to other wrong beliefs, and so forth. Wrong data, theories, and interpretations then stifle and destroy the few new true findings that arise, or simply remain in the literature, confusing future scientific efforts. The resulting loss is hard to measure.
1. Ioannidis JP (2007) Molecular evidence-based medicine: evolution and integration of information in the genomic era. Eur J Clin Invest. 37:340-9.
2. Contopoulos-Ioannidis DG, Ntzani E, Ioannidis JP (2003) Translation of highly promising basic science research into clinical applications. Am J Med. 114:477-84.
3. Todd JA (2006) Statistical false positive or true disease pathway? Nat Genet. 38:731-3.
4. Tatsioni A, Bonitsis NG, Ioannidis JP (2007) Persistence of contradicted claims in the literature. JAMA. 298:2517-26.