Not for Your Eyes: Information Concealed through Publication Bias

THOMAS J. LIESEGANG, DANIEL M. ALBERT, AND ANDREW P. SCHACHAT
Peer-reviewed medical journals strive to present new information for their readers' consideration and judgment: information that will guide their medical practice and will benefit their patients. Editors make vigorous efforts to control and minimize factors that may inhibit or impair the impartiality of the data and the conclusions they warrant. To that end, authors' potential conflicts of interest are disclosed, the quality of data is examined, and control groups are required. In clinical trials, when possible, actual treatment and sham or placebo treatments are masked. These practices are carried out to eliminate or minimize the possibility of bias.

We want to address here three less obvious, but important, forms of bias: 1) submission bias, 2) methodologic bias, and 3) acceptance bias. These are key among the various forms of publication bias that can limit or distort the interpretation of journal contents for clinical care.1,2

The real world of research, funded by government grants, foundations, and private industry, and subsequently scrutinized by institutional review boards, is far different from the world of research reflected in reports in the journals. A black hole exists between the real world of clinical investigation and the world of published studies, and into that hole disappear the studies with "insignificant" results: studies with strong positive results are submitted and published preferentially, whereas those with negative or inconclusive results are often filed away unsubmitted or, when submitted, are rejected and remain hidden from public view.3–5 Both investigators and journal editors share responsibility for submission bias.
But study sponsors, frequently pharmaceutical companies, often are the culprits behind data suppression and methodologic flaws or bias.6 Methodologic bias refers to poor study design and faulty execution of studies.1 Two broad classes of methodologic bias are selection bias and observation bias, both of which are (sometimes intentional) systematic errors in studies that cause or encourage one outcome over another.1,7 Individual analyses and meta-analyses consistently have found a significant association between industry sponsorship and pro-industry conclusions.8–11 We agree with Lesser and associates that there is a natural tendency for industry preferentially to fund those studies that favor their products or that disparage their competitors' products, and to exclude products in which the company has no economic interest.12 Investigators are encouraged to formulate hypotheses, to design studies, or to analyze data in ways that are consistent with the financial interests of sponsoring companies.12 It has been demonstrated that authors' financial competing interests may bias the conclusions of clinical trials and that the bias typically favors the sponsors funding the studies.12,13 Sponsors frequently have the capability of obstructing, delaying, or completely preventing publication of trials, or can impose practical or legal obstacles to publication, at times with the acquiescence of United States medical schools, in violation of the International Committee of Medical Journal Editors guidelines.14 Consequently, pharmaceutical company sponsorship of economic analyses of new drugs has been associated with a reduced likelihood of reporting unfavorable results.15 Research funded by drug companies is less likely to be published than research funded by other sources; in fact, approximately half of all completed trials go unpublished.3,10,16 And, as the public media have reported, industry and investigators have been shown to delay or withhold publication of findings that have negative implications for the sponsor's product. Another form of methodologic bias seen in industry-sponsored studies is the selective reporting of favorable outcomes.

Accepted for publication Jul 22, 2008. From the Editorial Offices of the AMERICAN JOURNAL OF OPHTHALMOLOGY, ARCHIVES OF OPHTHALMOLOGY, and OPHTHALMOLOGY. Inquiries to Thomas J. Liesegang, MD, Mayo Clinic, 4500 San Pablo Road, Jacksonville, FL 32224; e-mail: [email protected]. © 2008 by Elsevier Inc. All rights reserved. doi:10.1016/j.ajo.2008.07.034
Moreover, the reporting of trial outcomes often is not only incomplete but may be distorted and inconsistent with protocols.17

Acceptance bias can be defined as the tendency of reviewers and editors to favor and accept manuscripts unduly on the basis of the so-called direction or strength of study results.18 Regarding direction, studies with positive results clearly are more likely to be published than those with negative or inconclusive results.18–20 Stated otherwise, studies showing high efficacy are more likely to be reported than those in which the observed efficacy is average or poor.21 Publication bias based on the strength of the findings refers to the favoring of studies with strong statistical significance; conversely, studies lacking statistical significance are published less frequently.4,22–24 It is heartening, however, that little evidence can be found that publication bias occurs when high-quality manuscripts are submitted to the best medical journals. It seems in these cases that the chances of being
published are not significantly different for trials with positive results and trials with negative results.4

The danger with submission bias, methodologic bias, or acceptance bias, of course, is that the literature becomes skewed toward studies with positive outcomes or with strong statistical results. This in turn leads to overestimation, or at least misrepresentation, of the efficacy or the absence of adverse effects of a diagnostic or therapeutic method and can result in a failure to institute proper clinical guidelines.2,7 Subsequent scientific reviews, especially by industry-selected authors, do not remedy publication bias because the search and interpretation may be selective.12 Studies suggest that readers should interpret industry-supported reviews with caution and should rely on, for example, the more transparent Cochrane reviews.25 Meta-analysis or systematic analysis of multiple small trials only amplifies the bias, because only the reported (positive) trials are included.17

Publication bias is a recognized economic and health policy issue. The Health Technology Assessment Program of the National Health Service recently published a 115-page monograph devoted entirely to exploring the implications of publication bias.26 Unreported clinical studies essentially become lost data and represent wasted resources. An ethical imperative is squandered as well, because research subjects typically are told that "they may help others by participating"; if the knowledge is not shared, that help is not imparted.

The mandatory clinical trial registration guidelines, which require registration of trial data in a centralized, searchable database on a publicly available website,6 help to address publication bias, and most reliable journals will not publish results from unregistered studies. To improve transparency, access to the protocols also should be made available, such as those published by the Cochrane Library. Further, data monitoring committees should be completely independent. Journals should request disclosure of both the authors' roles and the sponsor's role in the study and should affirm that the principal author had access to all the data and controlled the decision to publish. Deviations from trial protocols should be described in the published articles so that the potential for bias can be assessed.17 Journal editors should require that original protocols and subsequent amendments be submitted with the manuscript, or at least be available for the benefit of the peer reviewers17; admittedly, proprietary issues need to be considered.

Clinical trial registration, however, does not ensure submission of trial results. Neither does publication ensure that the research has been conducted with accuracy and objectivity and with a meaningful hypothesis. Details regarding interventions that fail in clinical trials nevertheless should be available publicly. The responsibility to submit these studies for publication lies not only with the investigators, but also with research ethics committees (institutional review boards) and funding bodies.

In summary, institutional review board-approved studies should not be buried when the results are indecisive or negative, because all resulting information is important if the study has been carried out properly.12 Editors must guard against basing the decision to publish on the significance of a study's results. Rather, they should prioritize manuscripts on the basis of the clinical question addressed, the quality of the research methods, and findings that will affect subsequent treatment. In fact, inconclusive or negative studies provide perspective and balance against the seductive power of positive data in the literature. These steps will assure both editors and readers that the aggregate information presented is accurate and reliable and will enable journals to reflect the real world of research.
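The amplification of bias that arises when meta-analysis pools only the published (positive) trials can be illustrated with a small simulation. This is a hypothetical sketch, not part of the editorial: the trial counts, standard error, and significance threshold below are assumptions chosen only for illustration. Many small trials of a treatment with no true effect are simulated; only those reaching nominal one-sided significance in the positive direction are "published"; the pooled published estimate then overstates the effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers, chosen only for illustration.
n_trials = 2000      # trials actually conducted: "the real world of research"
se = 0.2             # standard error of each small trial's effect estimate
true_effect = 0.0    # the treatment truly does nothing

# Each trial's estimated effect is the true effect plus sampling noise.
estimates = [random.gauss(true_effect, se) for _ in range(n_trials)]

# Submission/acceptance bias: only "positive, significant" trials
# (z = estimate / se > 1.96) make it into the literature.
published = [e for e in estimates if e / se > 1.96]

print(f"trials conducted: {n_trials}, trials published: {len(published)}")
print(f"mean effect, all trials:       {statistics.mean(estimates):+.3f}")
print(f"mean effect, published trials: {statistics.mean(published):+.3f}")
```

With these assumed numbers, only a few percent of the trials clear the significance filter, yet the pooled "published" effect comes out well above zero even though the true effect is exactly zero; a meta-analysis restricted to the published trials would endorse a useless treatment.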
THE AUTHORS INDICATE NO FINANCIAL SUPPORT OR FINANCIAL CONFLICT OF INTEREST. ALL OF THE AUTHORS WERE involved in design and conduct of study; data collection; analysis and interpretation of data; and preparation, review and approval of the manuscript.
REFERENCES

1. Sica GT. Bias in research studies. Radiology 2006;238:780–789.
2. DeMaria AN. Publication bias and journals as policemen. J Am Coll Cardiol 2004;44:1707–1708.
3. Veitch E. Tackling publication bias in clinical trial reporting—PLoS announces the launch of a new online journal. PLoS Med 2005;2:e367.
4. Olson CM, Rennie D, Cook D, et al. Publication bias in editorial decision making. JAMA 2002;287:2825–2828.
5. Dickersin K, Min YI, Meinert CL. Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 1992;267:374–378.
6. Abaid LN, Grimes DA, Schulz KF. Reducing publication bias through trial registration. Obstet Gynecol 2007;109:1434–1437.
7. Gluud LL. Bias in clinical intervention research. Am J Epidemiol 2006;163:493–501.
8. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA 2003;289:454–465.
9. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events? JAMA 2003;290:921–928.
10. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003;326:1167–1170.
11. Liss H. Publication bias in the pulmonary/allergy literature: effect of pharmaceutical company sponsorship. Isr Med Assoc J 2006;8:451–454.
12. Lesser LI, Ebbeling CB, Goozner M, Wypij D, Ludwig DS. Relationship between funding source and conclusion among nutrition-related scientific articles. PLoS Med 2007;4:e5.
13. Kjaergard LL, Als-Nielsen B. Association between competing interests and authors' conclusions: epidemiological study of randomized clinical trials. BMJ 2002;325:249.
14. Gøtzsche PC, Hróbjartsson A, Johansen HK, Haahr MT, Altman DG, Chan AW. Constraints on publication rights in industry-initiated clinical trials. JAMA 2006;295:1645–1646.
15. Friedberg M, Saffran B, Sinson TJ, Nelson W, Bennett C. Evaluation of conflict of interest in economic analyses of new drugs used in oncology. JAMA 1999;282:1453–1457.
16. Dickersin K. How important is publication bias? A synthesis of available data. AIDS Educ Prev 1997;9:15–21.
17. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 2004;291:2457–2465.
18. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263:1385–1389.
19. Callaham ML, Wears RL, Weber EJ, Barton C, Young G. Positive outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA 1998;280:254–257.
20. Callaham M, Wears RL, Weber EJ. Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals. JAMA 2002;287:2847–2850.
21. Begg CB. A measure to aid in the interpretation of published clinical trials. Stat Med 1985;4:1–9.
22. Sterling TD. Publication decisions and their possible effects on inferences drawn from tests of significance. Am Stat Assoc J 1959;54:30–34.
23. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867–872.
24. Harris IA, Mourad MS, Kadir A, Solomon MJ, Young JM. Publication bias in papers presented to the Australian Orthopaedic Association Annual Scientific Meeting. Aust N Z J Surg 2006;76:427–431.
25. Jørgensen AW, Hilden J, Gøtzsche PC. Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: systematic review. BMJ 2006;333:782.
26. Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication and related biases. Health Technol Assess 2000;4:1–115.

AMERICAN JOURNAL OF OPHTHALMOLOGY, VOL. 146, NO. 5, NOVEMBER 2008