Response to Drs Autier and Ioannidis
Posted by plosmedicine on 31 Mar 2009 at 00:24 GMT
Author: Jan Vandenbroucke
Position: Professor of Clinical Epidemiology, and Academy Professor of the Royal Netherlands Academy of Arts and Sciences
Institution: Department of Clinical Epidemiology at Leiden University Medical Centre, Leiden, The Netherlands.
Submitted Date: March 31, 2008
Published Date: April 1, 2008
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.
I thank Dr Autier for describing my point of view as idealistic. I deliberately did not discuss “non-scientific agendas” that might influence studies. The majority of the most cited randomised controlled trials of the 21st century are sponsored by the pharmaceutical industry. By design, analysis or interpretation, sponsored trials are biased toward the sponsor’s product, showing a clear non-scientific agenda effect [2-5]. The non-scientific agenda in observational etiologic or explanatory research is often a personal ambition to be the first to propose a beautiful explanation, which leads to too strong a belief and a tendency to save the explanation by overlooking non-fitting or contrary evidence. It is hard to judge which type of non-scientific agenda is most pervasive in which type of research.
The hormone replacement therapy debacle that Dr Autier describes may have been due in part to non-scientific agendas. From the point of view of methods, however, it is an example of an “unexpected beneficial effect”, i.e., a beneficial effect for which the drug was not originally intended. In the longer version of the paper, I draw the distinction between “unexpected beneficial” and “unexpected adverse” effects of treatments, and I describe how the recent history of pharmacoepidemiology includes several wrongly held beliefs about unexpected beneficial effects (Supplement S1, pages 10-11). These are due to a healthy user bias, or to confounding by contraindication, which in its extreme forms may lead to apparent protective effects. In contrast, examples where observational studies about unexpected adverse effects were equally wrong are much rarer. The reasons are probably manifold and complex. One reason might be that a message about an unexpected benefit is too readily accepted, while a message about an unexpected harm receives much more scrutiny before it gets into print, and generates many more studies questioning it after it has been published. That would be in line with Dr Autier’s argument that the type and amount of evidence needed may differ according to the sensitivity of the topic.
Most of Dr Ioannidis’s comments stem from a different level of abstraction. He is concerned with the meta-level of research integration that determines what will be accepted into the greater body of medical scientific knowledge. My paper was mainly concerned with the point of view of the individual researcher (or research group) who does primary research and is deciding which study to undertake for a particular problem.
I am in full agreement with Ioannidis that science ultimately consists of a synthesis of different types of ideas and evidence. Unfortunately, Ioannidis describes judging “biological plausibility and analogy” as almost impossible, given the numerous associations that one can dream up. In actual scientific progress, biologic arguments and numerical research depend on each other. For example, a long controversy took place over the question whether one type of oral contraceptive (containing desogestrel) caused more venous thrombosis than an older type (containing levonorgestrel). Epidemiologic arguments about potential bias and confounding went back and forth without end, until randomised cross-over trials showed that several coagulation parameters shifted to a more clotting-prone state in young women who used the desogestrel contraceptive [10, 11]. That gave biologic support to the epidemiologic findings of a greater thrombosis risk. However, the results from a cross-over trial on coagulation factors would have had little meaning were it not for the earlier epidemiologic findings of an increased venous thrombosis risk. A cross-over trial would never have been done if there had not been epidemiologic findings, as well as observational data on differences in coagulation parameters between women on different types of contraceptives. (By way of thought experiment: had a randomised cross-over trial been done without the prior observational evidence, its results would immediately have called for epidemiologic investigations into the venous thrombosis risk; only thereafter would the trial results about the coagulation parameters have become meaningful.) The joint evidence helped convince almost all workers in the field. Of course, I agree with the underlying message of Ioannidis: biological plausibility should be robust; it should be more than a mere flight of fancy.
Still, the overall integration of knowledge remains basically subjective, and this subjectivity also applies to the interpretation of results from randomised trials. Jerome Cornfield explained as early as 1954 why there are no different classes of evidence: there are only associations in the data, be they experimental or observational, which will have to be interpreted (see the long version of the paper, Supplement S1, page 17).
Ioannidis writes that discoverers should be “fully transparent about what they did” and that a “lack of penetration of robust scientific methods in some basic and observational research keeps the fruitless meandering going”. He calls for discoverers to start “from more controlled/randomised efforts... rather than just wait for case reports to arise”. Apparently, I have not been sufficiently specific. Having a new idea that leads to a study cannot be preplanned: it arises in your brain, elicited by some stimulus (like seeing a patient, a juxtaposition in the literature, or a discussion with colleagues), and it stems from your unique constellation of knowledge. Where the first spark came from is not important; what matters is what follows. After a first spark, you may want to pursue the idea. The first studies you will think about, while the idea is still tentative, will be the cheapest and easiest ones that will give some further insight. Likewise, colleagues who want to test your idea will preferably use existing data and/or material. It is impossible to start a new RCT or a new large prospective study for every new idea of every scientist, which is not to deny that at a certain point such studies may become necessary, at least if feasible.
Ioannidis adds the interesting thought of replacing individual ingenuity with systematic collaborations. Still, such collaborations thrive upon a first idea to make them work, and within these collaborations people need to generate ideas to come to action. For example, to demonstrate why large-scale collaborations for genome-wide association studies are necessary, we first needed meta-analyses by people like Ioannidis, who saw the problem of non-replication. Moreover, discoveries in genome-wide association studies only become meaningful if a suitable mechanism is found that translates the association at the DNA level into some higher level of cellular explanation, which is usually done by individual scientists.
Finally, the convoluted pathways of science can indeed lead to a major loss. This is due to the inevitable subjectivity in the interpretation of whatever study and whatever synthesis (see the longer version of the paper, Supplement S1, page 22). Ioannidis cites a paper about a continuation of belief in a nutritional hypothesis despite randomised trial evidence to the contrary. However, the issues are subtle: for example, that paper describes in detail the reasons why some people think that the randomised trials might not have addressed the right question (see Box 1 of that paper). Those reasons have face credibility, and they are not formally rejected by the trials. Thus, the judgment that this particular nutritional hypothesis has been rejected by randomised trials can only be made if one simultaneously rejects the objection that the trials did not address the right question, an argument that cannot derive from the trials themselves.
 Patsopoulos NA, Ioannidis JP, Analatos AA (2006). Origin and funding of the most frequently cited papers in medicine: database analysis. BMJ 332:1061-4.
 Lexchin J, Bero LA, Djulbegovic B, Clark O (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 326:1167-70.
 Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003). Evidence b(i)ased medicine--selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326:1171-3.
 Heres S, Davis J, Maino K, Jetzinger E, Kissling W, Leucht S (2006). Why olanzapine beats risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine: an exploratory analysis of head-to-head comparison studies of second-generation antipsychotics. Am J Psychiatry 163:185-94.
 Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358:252-60.
 Vandenbroucke JP (2008). Observational research, randomised trials, and two views of medical science. PLoS Med 5:e67.
 Thomsen RW (2006). The lesser known effects of statins: benefits on infectious outcomes may be explained by "healthy user" effect. BMJ 333:980-1.
 Vandenbroucke JP (1998). 175th anniversary lecture. Medical journals and the shaping of medical knowledge. Lancet 352:2001-6.
 Vandenbroucke JP, Rosing J, Bloemenkamp KW, Middeldorp S, Helmerhorst FM, Bouma BN, Rosendaal FR (2001). Oral contraceptives and the risk of venous thrombosis. N Engl J Med 344:1527-35.
 Kemmeren JM, Algra A, Meijers JC, Tans G, Bouma BN, Curvers J, Rosing J, Grobbee DE (2004). Effect of second- and third-generation oral contraceptives on the protein C system in the absence or presence of the factor V Leiden mutation: a randomized trial. Blood 103:927-33.
 Vandenbroucke JP, de Craen AJ (2001). Alternative medicine: a "mirror image" for scientific reasoning in conventional medicine. Ann Intern Med 135:507-13.
 Ioannidis JP, Trikalinos TA, Ntzani EE, Contopoulos-Ioannidis DG (2003). Genetic associations in large versus small studies: an empirical assessment. Lancet 361:567-71.
 Tatsioni A, Bonitsis NG, Ioannidis JP (2007). Persistence of contradicted claims in the literature. JAMA 298:2517-26.