Journal of Clinical Epidemiology 65 (2012) 239–246
REVIEW ARTICLES
Publication guidelines need widespread adoption

Elaine L. Larson a,b,c,*, Manuel Cortazal b,c,d

a Mailman School of Public Health, Columbia University, New York, NY 10032, USA
b School of Nursing, Columbia University, New York, NY 10032, USA
c American Journal of Infection Control
d South Florida Community College, Avon Park, FL 33825, USA

Accepted 4 July 2011; Published online 15 October 2011
Abstract

Objective: During the past two decades, teams of researchers and editors have developed a variety of publishing guidelines to improve the quality of published research reports. Journals and editorial groups have adopted many of these guidelines. Whereas some guidelines are widely used, others have yet to be generally applied, thwarting attainment of consistent reporting among published research reports. The aim of this study is to describe the development and adoption of general publication guidelines for various study designs, provide examples of guidelines adapted for specific topics, and recommend next steps.

Study Design and Setting: We reviewed generic guidelines for reporting research results and surveyed their use in PubMed and the Science Citation Index.

Results: Existing guidelines cover a broad spectrum of research designs, but there are still gaps in topics and use. Appropriate next steps include increasing use of available guidelines and their adoption among journals, educating peer reviewers on their use, and incorporating guideline use into the curriculum of medical, nursing, and public health schools.

Conclusion: Wider adoption of existing guidelines should result in research that is increasingly reported in a standardized, consistent manner. © 2012 Elsevier Inc. All rights reserved.

Keywords: Clinical trials; Guidelines; Study design; Publication guidelines; Qualitative research; Surveys
* Corresponding author. Elaine L. Larson, Columbia University, 617 W. 168th Street, Room 330, New York, NY 10032, USA. Tel.: +212-305-0723; fax: +212-305-0722. E-mail address: [email protected] (E.L. Larson).

0895-4356/$ - see front matter © 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.jclinepi.2011.07.008

1. Introduction

Well-designed clinical trials and observational studies are essential to provide the information clinicians need to adapt or change treatments. The primary mode of communication among scientists is peer-reviewed publication. Hence, it is important that the quality of such publications be maximized. Even if clinicians prefer not to modify a practice with which they are familiar, the pressure to assure evidence-based practice and minimize health care costs will necessitate that even unpopular results, if rigorously documented and confirmed, be incorporated into treatment decisions. This will require that results of clinical research be increasingly scrutinized. The limitations of any study should be carefully summarized in research publications. Variability in the format and presentation of research studies, however, can make it difficult to discern whether the limitations of a study are inherent in the design or whether the presentation of the results in their final published form simply lacks sufficient clarity or specificity. For example, there is considerable variation in published descriptions of recruiting participants, training data collectors, calculating sample size, monitoring the integrity of an intervention, and assessing outcome measures. Even for a concept as basic as "blinding," differences in interpretation have been noted [1]. In one survey of randomized clinical trials, 18 different combinations of groups were blinded, despite the fact that each study reported "double blinding" [2], and in a survey of 73 trials only 19% of "double blind" trials clearly described the blinding of participants [3]. To improve the clarity and consistency of research reports, efforts were initiated in the 1990s by researchers, editors, and methodologists to develop and recommend specific standards for the publication of research [4,5]. The initial guidelines focused on randomized clinical trials, but since 1999 there has been a burgeoning of statements and checklists. The aim of this study is to describe the development and adoption of general publication guidelines for
What is new?
- Multiple guidelines have been developed for publishing a variety of types of studies; this study summarizes and compares such guidelines for major study designs.
- Appropriate next steps would be to increase use of available guidelines and to incorporate their use into curricula for health care professionals.
various study designs, provide examples of guidelines adapted for specific topics, and recommend next steps.
2. Review methodology

For this review, we defined "general" guidelines as those developed to serve as generic guides to the publication of studies using specific study designs. These include guidelines for studies using intervention, observational, and qualitative designs; systematic reviews and meta-analyses; and Internet surveys. To assess the extent to which guidelines are being used and cited, we searched PubMed, from the first publication of each guideline through December 2010, for references to the guideline. In PubMed we used the "All Fields" option in the Advanced Search mode linked with the title and acronym of each guideline. To determine the number of published articles that had cited specific guidelines, we used the bibliographic and citation search features of the Science Citation Index (Thomson Reuters). This approach enabled us to search for articles that had cited a published version of a guideline in support of the authors' methods. The following sections and Table 1 summarize the generic guidelines for the publication of studies using various study designs, provide examples of guidelines that have been adapted for specific topics, and conclude with recommendations for next steps.
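As a concrete illustration of this search strategy, a PubMed Advanced Search query for a single guideline might be structured as follows (a hypothetical example using PubMed's standard field tags, not the exact query strings used in this review):

```
("CONSORT"[All Fields] OR "Consolidated Standards of Reporting Trials"[All Fields])
AND 1996:2010[dp]
```

Here `[All Fields]` searches every indexed field, and `[dp]` restricts results to a date-of-publication range beginning with the year the guideline first appeared.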
3. Generic guidelines by study design

3.1. Guidelines for intervention studies (CONSORT, TREND, SQUIRE)

The first statement designed to improve reporting of research was the Consolidated Standards of Reporting Trials (CONSORT). Motivated by evidence of inadequate and inconsistent reporting of randomized clinical trials, work to develop this statement began in 1993 with 30 experts, including editors, authors, publishers, and scholars in the field, who met in Canada. Their first statement was published in 1994 [4,5]. A second group, convened in California, simultaneously developed and published a similar statement recommending standards for publishing results of clinical trials [5]. These two groups subsequently met in Chicago and produced the CONSORT document, published in 1996 [6]. The most recent update was published in 2010 [7] and includes 25 items in six categories: title and abstract, introduction, methods, results, discussion, and other (funding and registration). CONSORT has served as a template for the development of subsequent guidelines that extend the recommendations beyond randomized clinical trials and is the most frequently adopted statement to date; >500 publications listed in PubMed have cited CONSORT, and >150 journals have endorsed the statement. A Web site is maintained to provide up-to-date information about the guideline (http://www.consort-statement.org/home/).

The Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement was originated by the HIV/AIDS Prevention Research Synthesis (PRS) group at the Centers for Disease Control and Prevention (CDC) to improve the reporting of nonrandomized intervention trials [8]. These guidelines are endorsed by several dozen journals and have been used by investigators, although they have been cited much less frequently than CONSORT [9].

In 1999, guidelines for Quality Improvement Reports (QIR) were first published [10]. They were designed to be pragmatic, and no formal approach (e.g., workshop, Delphi technique, systematic literature review, or expert consensus) was used in their development. QIR standards were adopted by several journals and used as a template by a number of investigators [11]. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines, published in 2008, set standards for robust reporting of quality improvement projects using input from experts and public comment in a formalized vetting process. The checklist includes 19 items and has been endorsed by about a dozen journals [11–13].

Although these three guidelines (CONSORT, TREND, and SQUIRE) vary in the specific study designs for which they are intended, each includes criteria for intervention studies. Table 2 compares these three guidelines with regard to their recommendations for six components of a published report.

3.2. Guidelines for nonintervention, observational studies (MOOSE, STROBE)

The Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guideline was developed out of a workshop funded by the CDC in 1997. A steering committee selected 27 expert participants, a systematic literature review was conducted, and 32 meta-analyses were examined. A checklist was developed and modified by experts. The resultant 35-item list, which includes six domains for presenting results of observational studies (background, search strategy, methods, results, discussion, and conclusion), was
Table 1. Summary of publication guidelines and statements in order of development and publication. For each guideline: purpose; first published/most recent update; number of items; approximate number of PubMed citations(a); sample articles applying the guideline.

Guidelines by study design:
- CONSORT (Consolidated Standards of Reporting Trials) (http://www.consort-statement.org/): reporting randomized clinical trials [6,7]; 1996/2010; 25 items; 565 citations; sample articles [44–47].
- QIR/SQUIRE (Quality Improvement Reports/Standards for Quality Improvement Reporting Excellence) (http://www.squire-statement.org/): reporting quality improvement projects [10,12,13]; 1999/2008; 19 items; 60 citations; sample article [48].
- QUOROM/PRISMA (Quality Of Reporting of Meta-analyses/Preferred Reporting Items for Systematic reviews and Meta-Analyses) (http://www.prisma-statement.org/): reporting systematic reviews and meta-analyses [20,22,49]; 1999/2009; 21/27 items; 80 citations; sample articles [50–54].
- MOOSE (Meta-analysis Of Observational Studies in Epidemiology): reporting observational studies in epidemiology [55]; 2000; 35 items; 3 citations; sample articles [14,15].
- TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) (http://www.cdc.gov/trendstatement/): reporting studies using nonrandomized designs [8]; 2004; 22 items; 5 citations; sample article [9].
- STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) (http://www.strobe-statement.org/): reporting of observational studies in epidemiology [56]; 2007; 22 items; 40 citations; sample articles [17–19].
- COREQ (Consolidated Criteria for Reporting Qualitative Research): reporting qualitative research studies (interviews and focus groups) [57]; 2007; 32 items; 2 citations; sample article [23].
- CHERRIES (Checklist for Reporting Results of Internet E-Surveys): reporting studies using Internet surveys [25]; 2004; 30 items; 5 citations; sample article [58].

Examples of additional guidelines adapted for specific topics:
- STARD/QUADAS (STAndards for the Reporting of Diagnostic Accuracy Studies/QUAlity Assessment of Studies of Diagnostic Accuracy) (http://www.stard-statement.org/): reporting studies of diagnostic accuracy [26–28]; 2000/2003; 25/14 items; 105 citations; sample articles [29,59–62].
- GNOSIS (Guidelines for Neuro-Oncology: Standards for Investigational Studies): reporting of neuro-oncology studies [33]; 2005; 2 citations; sample article [35].
- ORION (Outbreak Reporting and Intervention Studies of Nosocomial Infections) (http://www.idrn.org/orion.php): reporting of outbreaks and intervention studies of nosocomial infections [34]; 2007; 22 items; 5 citations; sample article [63].
- STREGA (STrengthening the REporting of Genetic Associations) (http://www.medicine.uottawa.ca/public-health-genomics/web/eng/strega.html): reporting studies of genetic associations [30,31]; 2009; 22 items; 12 citations; sample article [64].

(a) This number represents the number of times the guideline was cited in PubMed when we searched the "All Fields" option in the Advanced Search mode.
Table 2. Comparison of three major publication checklists used for intervention studies(b)

Title
- CONSORT [7] (for randomized clinical trials): identify as randomized trial
- TREND [8] (for nonrandomized intervention studies): identify target population and how intervention was allocated
- SQUIRE [13] (for quality improvement studies): indicate the article is quality improvement, state aim, and study method

Abstract
- CONSORT: structured
- TREND: structured
- SQUIRE: summarize all key information

Introduction
- CONSORT: scientific background, rationale, specific objectives, or hypotheses
- TREND: scientific background, rationale, theories used to design intervention
- SQUIRE: background knowledge, local problem, intended improvement, study question

Methods
- CONSORT: trial design, participants, interventions, outcomes, sample size, randomization, blinding, statistical methods
- TREND: participants, interventions, objectives, outcomes, sample size, assignment method, blinding, unit of analysis, statistical methods
- SQUIRE: ethical issues, setting, planning the intervention, planning the study of the intervention, methods of evaluation, analysis

Results
- CONSORT: participant flow (diagram), recruitment, baseline data, numbers analyzed, outcomes and estimation, ancillary analyses, harms
- TREND: participant flow, recruitment, baseline data, baseline equivalence, numbers analyzed, outcomes and estimation, ancillary analyses, adverse events
- SQUIRE: setting, course of intervention, degree of success, how and why plan evolved, changes in outcomes, benefits, harms, unexpected results, strength of association, missing data

Discussion
- CONSORT: limitations, generalizability, interpretation
- TREND: interpretation, generalizability, overall evidence
- SQUIRE: summary, relation to other evidence, limitations, interpretation, conclusions

Other information
- CONSORT: registration, location of full protocol, funding
- SQUIRE: funding

(b) The original checklists were developed for different types of studies; this table is for the purpose of illustrating common elements but should not be used for preparing specific manuscripts.
originally published in 2000 and has been used in several subsequent reviews [14,15].

Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) was initially developed after a 2-day workshop in 2004 for three observational designs: cohort, case-control, and cross-sectional studies. The latest version (version 4) of STROBE was published in several journals in 2007 [16]. There are four checklists: one that combines items for studies using cohort, case-control, and cross-sectional designs, as well as a separate checklist for each of the three designs. Eighteen of the 22 checklist items are common to all three study designs, and four are specific to each design. The STROBE group maintains an active Web site (http://www.strobe-statement.org/index.php?id=strobe-home). Examples of its use include self-collection for vaginal human papillomavirus testing [17], prevalence of complementary medicine use in pediatric cancer [18], and a review of design and reporting issues in self-reported prevalence studies of leg ulceration [19].

3.3. Guidelines for systematic reviews and meta-analyses (QUOROM/PRISMA)

For reporting results of meta-analyses, 30 epidemiologists, clinicians, statisticians, researchers, and editors participated in a consensus conference in 1996. Using a modified Delphi technique, a 21-item list of criteria defining the Quality of Reporting of Meta-analyses (QUOROM) statement was produced and then formally tested in a randomized clinical trial, which included eight medical journals, to determine whether the guidelines had an impact on the manuscript review process. The original QUOROM statement was published in 1999 [20].
In 2009, the QUOROM statement was updated and replaced by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement to include guidelines for systematic reviews. This was motivated by research demonstrating that only a minority of reports of systematic reviews described their methods or protocols [21]. PRISMA expanded the focus to include reviews of study designs other than randomized clinical trials [22]. Since its publication, PRISMA has been endorsed and adopted by >100 journals as well as the Cochrane Collaboration, the Council of Science Editors, and the World Association of Medical Editors (http://www.prisma-statement.org/index.htm).

3.4. Guidelines for qualitative studies (COREQ)

In 2007, the first comprehensive guidelines for qualitative methodologies were published: the Consolidated Criteria for Reporting Qualitative Research (COREQ). The 32-item checklist was developed by a group of Australian experts using an extensive literature search. Initially, 76 items from 22 checklists were compiled and grouped into three domains: research team and reflexivity, study design, and data analysis and reporting. Authors have started to adopt COREQ standards when reporting their qualitative research [23,24].

3.5. Guideline for Internet surveys (CHERRIES)

Increasingly, studies rely on data collected from Web sites or via e-mail. Researchers use these modes of communication mainly to collect survey responses that were previously gathered primarily through the mail. As with any survey methodology, using the Internet to aggregate survey
data suffers from selection bias, leading to questions about the validity of conclusions drawn from online surveys. Responding to these concerns, the editors of the Journal of Medical Internet Research adopted a checklist designed to guide authors in adequately describing their survey methodology. Published in 2004, the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is grouped into eight categories: design; IRB approval; development and pretesting; recruitment; survey administration; response rates; preventing multiple responses; and analysis [25].
4. Examples of additional guidelines adapted for specific topics

General guidelines are fundamental, but a growing number of guidelines adapt the generic guidance to a specific research area to allow better comparison across studies (e.g., added components standardizing terminology or outcome selection and definition). In fact, more than 140 such guidelines have been identified (http://www.equator-network.org/). Below we describe just a few examples of these specific guidelines, which have been used and cited in the literature.

4.1. Studies of diagnostic accuracy

Two guidelines, the STAndards for the Reporting of Diagnostic Accuracy Studies (STARD) and the QUAlity Assessment of Studies of Diagnostic Accuracy (QUADAS), are used extensively for reporting studies of diagnostic accuracy. The 25-item STARD checklist was initially developed by a steering committee of scientists and editors in a 2-day consensus conference in Amsterdam and published in 2000 [26,27]. In 2003, a four-round Delphi procedure was used by members of the same team to develop QUADAS. Although not specifically a reporting guideline, QUADAS is a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews, and it was shortened to include 14 items. Items in both tools include a definition of the patient spectrum covered, reference standard, disease progression bias, verification bias, review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and indeterminate results [26–28]. In 2006, the same group published an evaluation of QUADAS; the level of agreement between three reviewers who evaluated 30 studies using QUADAS was >85%. Because of difficulties in scoring items regarding uninterpretable results and withdrawals, revised guidelines for scoring these items were suggested [29]. These guidelines have been cited in PubMed in >100 published studies of diagnostic accuracy.

4.2. Studies reporting gene-disease association

STrengthening the REporting of Genetic Associations (STREGA) is an extension of the STROBE guidance that provides a checklist for reporting studies of genetic associations. It includes additional criteria in 12 of the 22 items of the STROBE checklist: population stratification, genotyping errors, modeling haplotype variation, Hardy-Weinberg equilibrium, replication, selection of participants, rationale for choice of genes and variants, treatment effects in studying quantitative traits, statistical methods, relatedness, reporting of descriptive and outcome data, and the volume of data, issues that are important to consider in genetic association studies [30,31]. STREGA was published in 2009, and by the end of 2010 we found >20 publications listed in PubMed that had cited its use.

4.3. Additional guidelines

A survey of reporting guidelines published in 2008 cited 37 guidelines [32], many of which were extensions or modifications of CONSORT. Several other topic-specific examples include the Guidelines for Neuro-Oncology: Standards for Investigational Studies (GNOSIS) for phase 1 and phase 2 studies [33] and the Outbreak Reporting and Intervention Studies of Nosocomial Infections (ORION) statement, a guideline for publishing outbreaks and nosocomial infection studies [34]. GNOSIS has two 18-item checklists for phase 1 and phase 2 studies, although the published description provides little explanation of how the checklists were actually developed. At least one author has published clinical trial results based on GNOSIS guidelines [35]. The ORION statement includes 22 items adapted from the CONSORT guideline to apply to retrospective (outbreak investigation) and quasi-experimental designs frequently used for interventions to reduce health care-associated infections. It was developed in the United Kingdom in a manner similar to other such guidelines, after systematic literature reviews and consultation with experts in the field [34]. Other guidelines have been promulgated for such diverse topics as cancer prognostic studies, economic evaluations, metabolic analyses, and acupuncture research.
5. Discussion

Evidence of the need for publishing guidelines continues, despite the fact that many journals and editorial groups have publicly endorsed such guidelines. For example, between 2000 and 2006 only minimal improvement was found among articles listed in PubMed for such design attributes as defining a primary end point (45% in 2000 and 53% in 2006), describing how participants were assigned to a comparison group (21% in 2000 and 34% in 2006), and reporting a sample size calculation (27% in 2000 and 45% in 2006) [36,37]. Similarly, among 152 articles published in the five nursing research journals with the highest impact factors between 2005 and 2007, only about one-third reported a power calculation or provided a thorough explanation of their statistical analysis [38]. Many systematic reviews, such as those in the Cochrane Database of Systematic
Table 3. General recommendations to improve reporting of research results

Journals
- Include in guidelines for authors links to relevant reporting guidelines;
- Educate reviewers regarding these guidelines;
- Include in instructions to reviewers forms and/or links that incorporate reporting guidelines.

Health professions schools and academic and research institutions
- Incorporate publication guidelines into curricula that address research methodology and publication standards.

Professional societies
- Educate members on use of these guidelines.

Researchers
- Consistently use publication guidelines when submitting manuscripts;
- Conduct future studies to assess/compare the quality of publications before and after guidelines are adopted.

Guideline developers
- Assess current use and effectiveness of published guidelines;
- Identify gaps and additional need for clarity to revise current guidelines or develop new ones.
Reviews, continue to conclude that available reports are of insufficient quality to draw firm conclusions.

The CONSORT statement for randomized clinical trials paved the way for the development of a plethora of guidelines specific to other designs and topics, but these guidelines are, in fact, closely related and generally quite consistent with one another. Many used similar iterative, systematic, and rigorous processes, and a number of the domains included in the various statements are the same or overlapping, as noted in Table 2. Nevertheless, there are still gaps in terms of generic guidelines; the publication of case series/case studies is one example. Although some guidelines, such as the CONSORT statement, are widely used, others have yet to be generally applied. We were unable, for example, to find many citations in the literature of the application of the COREQ statement designed for reporting studies using qualitative designs. Barriers to the use of the guidelines may include a lack of knowledge among authors of the existence of the guidelines, failure of journals to require their use, or simply that authors continue to prepare manuscripts as they have done previously, out of habit. The EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research, http://www.equator-network.org/) was recently established with funds from the UK National Health Service to further enhance the quality of reporting [32,39,40]. The network has published a catalog of guidelines for reporting health research [41] and an online
collection of reporting guidelines and other resources supporting responsible research reporting [39,42,43]. Despite this, Moher et al. [43], in a systematic review of 81 guidelines, found that 43% did not focus on a specific study type and only 14% included explanation documents. The authors concluded that publication guideline development should be more rigorous. Although there are many guidelines, there are also still gaps and a need for robust guidance in specific areas, such as the reporting of case studies/case series. Efforts such as the EQUATOR Network and the adoption of CONSORT and other guidelines by journal editors are essential. For example, the American Journal of Preventive Medicine, in its guidelines for authors, explicitly cites EQUATOR, CONSORT, TREND, and PRISMA (http://www.ajpm-online.net/authorinfo#gen). Interested organizations, such as the Committee on Publication Ethics (http://publicationethics.org/), can play a key role in coalescing support for broader adoption of existing guidelines.

Study results published in the biomedical literature guide treatment decisions and shape public policy. The consequence of poorly reported findings is the potential to cause real harm. Readers of the scientific literature deserve to know that editors, reviewers, and authors have adopted processes that foster clarity and replication.

This manuscript clearly has limitations. Most importantly, it is possible that in our literature review we missed some guidelines or failed to identify all the studies from PubMed or the Science Citation Index in which a guideline was used. Nevertheless, we have attempted to provide a survey of current use of general publication guidelines, as well as to make recommendations for journals, academic institutions, researchers, and guideline developers in Table 3. Simera et al. [39] also have published extensive recommendations for journals, guideline developers, authors, and funders, as well as academic and editorial organizations.
The reader is referred to that publication for additional consideration.
References

[1] Devereaux PJ, Manns BJ, Ghali WA, Quan H, Lacchetti C, Montori VM, et al. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA 2001;285:2000–3.
[2] Haahr MT, Hrobjartsson A. Who is blinded in randomized clinical trials? A study of 200 trials and a survey of authors. Clin Trials 2006;3:360–5.
[3] Hrobjartsson A, Pildal J, Chan AW, Haahr MT, Altman DG, Gotzsche PC. Reporting on blinding in trial protocols and corresponding publications was often inadequate but rarely contradictory. J Clin Epidemiol 2009;62:967–73.
[4] A proposal for structured reporting of randomized controlled trials. The Standards of Reporting Trials Group. JAMA 1994;272:1926–31.
[5] Call for comments on a proposal to improve reporting of clinical trials in the biomedical literature. Working Group on Recommendations for Reporting of Clinical Trials in the Biomedical Literature. Ann Intern Med 1994;121:894–5.
[6] Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 1996;276:637–9.
[7] Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869.
[8] Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health 2004;94:361–6.
[9] Riethmuller AM, Jones R, Okely AD. Efficacy of interventions to improve motor development in young children: a systematic review. Pediatrics 2009;124:e782–92.
[10] Moss F, Thompson R. A new structure for quality improvement reports. Qual Health Care 1999;8:76.
[11] Thomson RG, Moss FM. QIR and SQUIRE: continuum of reporting guidelines for scholarly reports in healthcare improvement. Qual Saf Health Care 2008;17:i10–2.
[12] Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney SE. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ 2009;338:a3152.
[13] Ogrinc G, Mooney SE, Estrada C, Foster T, Goldmann D, Hall LW, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care 2008;17:i13–32.
[14] Abraham NS, Byrne CM, Young JM, Solomon MJ. Meta-analysis of non-randomized comparative studies of the short-term outcomes of laparoscopic resection for colorectal cancer. ANZ J Surg 2007;77:508–16.
[15] Singh-Grewal D, Macdessi J, Craig J. Circumcision for the prevention of urinary tract infection in boys: a systematic review of randomised trials and observational studies. Arch Dis Child 2005;90:853–8.
[16] Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Epidemiology 2007;18:805–35.
[17] Huynh J, Howard M, Lytwyn A. Self-collection for vaginal human papillomavirus testing: systematic review of studies asking women their perceptions. J Low Genit Tract Dis 2010;14:356–62.
[18] Bishop FL, Prescott P, Chan YK, Saville J, von Elm E, Lewith GT. Prevalence of complementary medicine use in pediatric cancer: a systematic review. Pediatrics 2010;125:768–76.
[19] Firth J, Nelson EA, Hale C, Hill J, Helliwell P. A review of design and reporting issues in self-reported prevalence studies of leg ulceration. J Clin Epidemiol 2010;63:907–13.
[20] Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896–900.
[21] Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med 2007;4:e78.
[22] Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009;6:e1000100.
[23] Edwardraj S, Mumtaj K, Prasad JH, Kuruvilla A, Jacob KS. Perceptions about intellectual disability: a qualitative study from Vellore, South India. J Intellect Disabil Res 2010;54:736–48.
[24] Gensichen J, Jaeger C, Peitz M, Torge M, Güthlin C, Mergenthal K, et al. Health care assistants in primary care depression management: role perception, burdening factors, and disease conception. Ann Fam Med 2009;7:513–9.
[25] Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004;6:e34.
[26] Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD Initiative. Ann Intern Med 2003;138:40–4.
[27] Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med 2003;138:W1–12.
[28] Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 2003;3:25.
[29] Whiting PF, Weswood ME, Rutjes AW, Reitsma JB, Bossuyt PN, Kleijnen J. Evaluation of QUADAS, a tool for the quality assessment of diagnostic accuracy studies. BMC Med Res Methodol 2006;6:9.
[30] Little J, Higgins JP, Ioannidis JP, Moher D, Gagnon F, von Elm E, et al. STrengthening the REporting of Genetic Association studies (STREGA): an extension of the STROBE Statement. Ann Intern Med 2009;150:206–15.
[31] Little J, Higgins JP, Ioannidis JP, et al. Strengthening the reporting of genetic association studies (STREGA): an extension of the STROBE Statement. Hum Genet 2009;125:131–51.
[32] Simera I, Altman DG, Moher D, Schulz KF, Hoey J. Guidelines for reporting health research: the EQUATOR network's survey of guideline authors. PLoS Med 2008;5:e139.
[33] Chang SM, Reynolds SL, Butowski N, Lamborn KR, Buckner JC, Kaplan RS, et al. GNOSIS: guidelines for neuro-oncology: standards for investigational studies-reporting of phase 1 and phase 2 clinical trials. Neuro Oncol 2005;7:425–34.
[34] Stone SP, Cooper BS, Kibbler CC, Cookson BD, Roberts JA, Medley GF, et al. The ORION statement: guidelines for transparent reporting of outbreak reports and intervention studies of nosocomial infection. Lancet Infect Dis 2007;7:282–8.
[35] Robe PA, Martin DH, Nguyen-Khac MT, Artesi M, Deprez M, Albert A, et al. Early termination of ISRCTN45828668, a phase 1/2 prospective, randomized study of sulfasalazine for the treatment of progressing malignant gliomas in adults. BMC Cancer 2009;9:372.
[36] Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet 2005;365:1159–62.
[37] Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ 2010;340:c723.
[38] Cohn E, Jia H, Larson E. Evaluation of statistical approaches in quantitative research. Clin Nurs Res 2009;18:223–41.
[39] Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med 2010;8:24.
[40] Simera I, Moher D, Hoey J, Schulz KF, Altman DG. The EQUATOR Network and reporting guidelines: helping to achieve high standards in reporting health research studies. Maturitas 2009;63:4–6.
[41] Simera I, Moher D, Hoey J, Schulz KF, Altman DG. A catalogue of reporting guidelines for health research. Eur J Clin Invest 2010;40:35–53.
[42] Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217.
[43] Moher D, Weeks L, Ocampo M, Seely D, Sampson M, Altman DG, et al. Describing reporting guidelines for health research: a systematic review. J Clin Epidemiol 2011;64:718–42.
[44] Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol 2009;19:494–500.
[45] Zhang D, Yin P, Freemantle N, Jordan R, Zhong N, Cheng KK. An assessment of the quality of randomised controlled trials conducted in China. Trials 2008;9:22.
[46] Uetani K, Nakayama T, Ikai H, Yonemoto N, Moher D. Quality of reports on randomized controlled trials conducted in Japan: evaluation of adherence to the CONSORT statement. Intern Med 2009;48:307–13.
[47] Tiruvoipati R, Balasubramanian SP, Atturu G, Peek GJ, Elbourne D. Improving the quality of reporting randomized controlled trials in cardiothoracic surgery: the way forward. J Thorac Cardiovasc Surg 2006;132:233–40.
[48] Thakar D, Dolansky MA, Neuhauser D. Poster evaluation by SQUIRE guidelines. Qual Saf Health Care;19:158–9.
[49] Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6:e1000097.
[50] Perrio S, Holt PJ, Patterson BO, Hinchliffe RJ, Loftus IM, Thompson MM. Role of superficial femoral artery stents in the management of arterial occlusive disease: review of current evidence. Vascular 2010;18:82–92.
[51] Christiaans I, van Engelen K, van Langen IM, Birnie E, Bonsel GJ, Elliott PM, et al. Risk stratification for sudden cardiac death in hypertrophic cardiomyopathy: systematic review of clinical risk markers. Europace 2010;12:313–21.
[52] Stein DJ, Ipser JC, Baldwin DS, Bandelow B. Treatment of obsessive-compulsive disorder. CNS Spectr 2007;12:28–35.
[53] Gupta R, Wayangankar SA, Targoff IN, Hennebry TA. Clinical cardiac involvement in idiopathic inflammatory myopathies: a systematic review. Int J Cardiol 2011;148:261–70.
[54] Seidler EM, Kimball AB. Meta-analysis comparing efficacy of benzoyl peroxide, clindamycin, benzoyl peroxide with salicylic acid, and combination benzoyl peroxide/clindamycin in acne. J Am Acad Dermatol 2010;63:52–62.
[55] Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008–12.
[56] von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 2007;370:1453–7.
[57] Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007;19:349–57.
[58] Dobrow MJ, Orchard MC, Golden B, Holowaty E, Paszat L, Brown AD, et al. Response audit of an Internet survey of health care providers and administrators: implications for determination of response rates. J Med Internet Res 2008;10:e30.
[59] Coppus SF, van der Veen F, Bossuyt PM, Mol BW. Quality of reporting of test accuracy studies in reproductive medicine: impact of the Standards for Reporting of Diagnostic Accuracy (STARD) initiative. Fertil Steril 2006;86:1321–9.
[60] Smidt N, Rutjes AW, van der Windt DA, Ostelo RW, Bossuyt PM, Reitsma JB, et al. Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies. BMC Med Res Methodol 2006;6:12.
[61] Mann R, Hewitt CE, Gilbody SM. Assessing the quality of diagnostic studies using psychometric instruments: applying QUADAS. Soc Psychiatry Psychiatr Epidemiol 2009;44:300–7.
[62] van Trijffel E, Anderegg Q, Bossuyt PM, Lucas C. Inter-examiner reliability of passive assessment of intervertebral motion in the cervical and lumbar spine: a systematic review. Man Ther 2005;10:256–69.
[63] Voirin N, Barret B, Metzger MH, Vanhems P. Hospital-acquired influenza: a synthesis using the Outbreak Reports and Intervention Studies of Nosocomial Infection (ORION) statement. J Hosp Infect 2009;71:1–14.
[64] Miyaki K. Genetic polymorphisms in homocysteine metabolism and response to folate intake: a comprehensive strategy to elucidate useful genetic information. J Epidemiol 2010;20:266–70.