Contemporary Clinical Trials 38 (2014) 245–250
Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: A systematic review

Lawrence Mbuagbaw a,b,c,d,⁎, Michael Thabane e, Thuva Vanniyasingam a, Victoria Borg Debono a, Sarah Kosa a, Shiyuan Zhang a, Chenglin Ye a, Sameer Parpia f, Brittany B. Dennis a, Lehana Thabane a,b,g,h,i

a Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
b Biostatistics Unit, Father Sean O'Sullivan Research Centre, St Joseph's Healthcare—Hamilton, ON, Canada
c Centre for Development of Best Practices in Health, Yaoundé Central Hospital, Yaoundé, Cameroon
d South African Cochrane Centre, South African Medical Research Council, South Africa
e University of Waterloo, Waterloo, ON, Canada
f Ontario Clinical Oncology Group, McMaster University, Hamilton, ON, Canada
g Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada
h Centre for Evaluation of Medicine, St Joseph's Healthcare—Hamilton, ON, Canada
i Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada
Article info
Article history: Received 12 March 2014; Received in revised form 13 May 2014; Accepted 16 May 2014; Available online 23 May 2014.
Keywords: Quality of reporting; CONSORT extension for abstracts; Abstracts of RCTs
Abstract
Background: We sought to determine if the publication of the Consolidated Standards of Reporting Trials (CONSORT) extension for abstracts in 2008 had led to an improvement in reporting abstracts of randomized controlled trials (RCTs).
Methods: We searched PubMed for RCTs published in 2007 and 2012 in top-tier general medicine journals. A random selection of 100 trial abstracts was obtained for each year. Data were extracted in duplicate on adherence to the CONSORT extension for abstracts. The primary outcome was the mean number of items reported and the secondary outcome was the odds of reporting each item. We also estimated incidence rate ratios (IRRs).
Results: Significantly more checklist items were reported in 2012 than in 2007: the adjusted mean difference was 2.91 (95% confidence interval [CI] 2.35, 3.41; p < 0.001). In 2012 there were significant improvements in reporting the study as randomized in the title, and in describing the trial design, the participants, the objectives and blinding. In the Results section, trial status and numbers analyzed were also reported better. The IRRs were significantly higher for 2012 (IRR 1.32; 95% CI 1.25, 1.39; p < 0.001) and for multisite studies compared to single site studies (IRR 1.08; 95% CI 1.03, 1.15; p = 0.006).
Conclusions: There was a significant improvement in the reporting of abstracts of RCTs in 2012 compared to 2007. However, there is still room for improvement as some items remain under-reported.
© 2014 Elsevier Inc. All rights reserved.
⁎ Corresponding author at: 1280 Main Street West, HSC 2C7, Hamilton, ON L8S4K1, Canada. Tel.: +1 9055259140x22430. E-mail addresses:
[email protected] (L. Mbuagbaw),
[email protected] (M. Thabane),
[email protected] (T. Vanniyasingam),
[email protected] (V.B. Debono),
[email protected] (S. Kosa),
[email protected] (S. Zhang),
[email protected] (C. Ye),
[email protected] (S. Parpia),
[email protected] (B.B. Dennis),
[email protected] (L. Thabane).
Abbreviations: CONSORT, Consolidated Standards of Reporting Trials; RCT, randomized controlled trial; IRR, incidence rate ratio; CI, confidence interval.
http://dx.doi.org/10.1016/j.cct.2014.05.012
1. Introduction

Randomized controlled trials (RCTs) are important sources of evidence in health care [1]. Apart from the limitations that are inherent to the study design, readers must rely upon the report of the trial to establish its credibility. Unfortunately, many trials are not reported in sufficient detail to ensure that potential sources of bias have been avoided [2]. For readers to appreciate the internal and external validity of an RCT, they must be able to understand exactly how the trial was conducted. Discrepancies in the way RCTs were reported led to the development of the Consolidated Standards of Reporting Trials (CONSORT) statement in 1996 and a revision in 2001 [3,4]. The 1996 publication of the CONSORT statement led to an improvement in the reporting of RCTs [5].

However, given the large volume of scientific publications produced each year, many readers refer to the abstracts of RCTs rather than the entire report [6]. In addition, conference proceedings containing RCTs are limited to abstracts. This implies that abstracts of RCTs should contain sufficient information to be useful to readers. This can be achieved by recommending that information in abstracts be provided under specific headings and have a uniform "structure". Structured abstracts were introduced to the medical literature in the 1980s [7]. They were found to be well accepted by readers, to contain more information and to facilitate the review of conference proceedings, although they are often lengthier [7]. The acknowledgment of these issues in relation to RCTs led to the development of the CONSORT extension for abstracts [8]. The CONSORT statement for abstracts comprises 17 items in eight sections: the title, author contact details, trial design, methods (participants, interventions, objective, outcomes, randomization, blinding), results (numbers randomized, recruitment, numbers analyzed, outcome, harms), conclusions, trial registration and funding. The statement describes what information authors of RCT abstracts are expected to report [8].

A number of studies have investigated the quality of reporting after the publication of this extension for abstracts. Poor adherence to the statement was noted in general medicine journals [9], and also in specific fields of medicine such as anesthesia [10]. A recent scoping review noted that studies published in higher-impact journals, published more recently, funded by industry, conducted at multiple sites, with larger sample sizes, or evaluating pharmacological interventions were more likely to adhere to reporting guidelines, whereas studies with positive results were less likely to adhere to the CONSORT statement [11]. In addition, journal endorsement can lead to improved quality of reporting [8,11]. These studies have focused on the effect of journal endorsement [12], on specific medical specialties [10,13–16] or on specific countries [17], or did not adjust for potential confounders [9]. In order to evaluate the role of the CONSORT extension for abstracts in improving the reporting of abstracts, it is necessary to appraise changes in reporting quality over time, taking into account the factors that influence reporting quality [11]. Our objectives were to compare abstracts of RCTs published in high-impact general medicine journals before and after the
publication of the CONSORT extension for abstracts, while controlling for potential confounders.

2. Methods

We conducted a search of the United States National Institutes of Health (NIH) database PubMed (August 2013) for RCTs published in the years 2007 and 2012 in high-impact general medicine journals with wide readership: The Journal of the American Medical Association (JAMA), the New England Journal of Medicine (NEJM), the Lancet, the British Medical Journal (BMJ), Annals of Internal Medicine (AIM) and the Canadian Medical Association Journal (CMAJ). Our search strategy included terms for RCTs (randomized control*, clinical trial*), journal names (as above), exclusions for other types of articles (protocols, systematic reviews, cohort studies, case-control studies, case series, guidelines and editorials) and limits set for the specific time periods of interest (01/01/2007 to 31/12/2007 and 01/01/2012 to 31/12/2012). For each year we randomly selected 100 RCT abstracts (to keep the data manageable) for inclusion based on prespecified criteria: the abstract had to be the report of an RCT, published in English, conducted on human subjects and registered in a clinical trials registry. Abstracts were excluded if they were published only as abstracts (for example, conference proceedings), still recruiting or duplicate publications.

We extracted data on the compliance of the abstract with the CONSORT statement for abstracts [8]. Screening and data abstraction were done in duplicate by four pairs of reviewers (two for 2007 and two for 2012) and agreement was measured using the Kappa statistic [18]. Disagreements were resolved by consensus or by arbitration by a third author. The reviewers were graduate or doctoral research methodologists trained in clinical trial reporting and the CONSORT extension for abstracts. Ratings were made based on the CONSORT elaboration and explanation guidance documents [19]. Excluded articles were replaced by another random selection.
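To make the agreement calculation concrete, the following is a minimal sketch of how chance-corrected agreement between two reviewers could be computed; the ratings, variable names and the use of scikit-learn are illustrative assumptions, not the authors' actual code or data.

```python
# Minimal sketch (not the authors' code): chance-corrected agreement between
# two reviewers screening the same abstracts, using Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Hypothetical include/exclude decisions (1 = include, 0 = exclude) for ten abstracts.
reviewer_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
reviewer_b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1 = perfect agreement, 0 = chance-level agreement
```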
3. Statistical analyses

Firstly, we determined the mean number of items reported (0–17) for each year and estimated the unadjusted and adjusted differences using a two-sample t-test and generalized estimating equations (GEEs) respectively [20]. The means are reported with standard deviations (SDs). The mean differences and adjusted means are reported with 95% CIs and p-values. Secondly, we compared compliance with the 17 items of the CONSORT statement for abstracts for 2007 versus 2012 using individual Chi-squared tests. This unadjusted analysis was followed by an adjusted analysis using GEE. For this binary outcome (item reported: yes or no) we assumed a binomial distribution and an unstructured correlation matrix. We report adjusted odds ratios, 95% confidence intervals and p-values. Thirdly, we estimated the incidence rate ratios (IRRs) for reporting items in 2012 compared to 2007 using GEE, assuming a Poisson distribution and an unstructured correlation matrix. Adjusted IRRs, 95% CIs and p-values are reported.

For the GEE analyses, adjustments were made for 1) number of sites [multisite versus single site], 2) type of intervention [pharmaceutical versus all others], 3) sample size [1000 or less versus more than 1000] [21] and 4) results of trial [negative versus positive result], with journal as a grouping factor to adjust for potential clustering or similarity in articles published in the same journal. Descriptive data are presented as counts and percentages. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) Version 20.0 (SPSS, Inc., 2009, Chicago, Illinois, USA).
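As a rough illustration of the three analyses described above, the sketch below fits the same kinds of models with generalized estimating equations in Python (statsmodels) rather than SPSS. The simulated data frame, variable names and the exchangeable working correlation are assumptions for illustration only; the authors analyzed real data in SPSS and specified an unstructured working correlation.

```python
# Minimal sketch (assumptions: simulated data, illustrative variable names,
# Python/statsmodels instead of the SPSS analysis described in the paper).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "year2012":  np.repeat([0, 1], n // 2),   # 0 = 2007, 1 = 2012
    "multisite": rng.integers(0, 2, n),
    "pharma":    rng.integers(0, 2, n),
    "large_n":   rng.integers(0, 2, n),       # sample size > 1000
    "positive":  rng.integers(0, 2, n),       # positive trial result
    "journal":   rng.choice(["NEJM", "Lancet", "BMJ", "JAMA", "AIM", "CMAJ"], n),
})
# Simulated outcomes: total CONSORT items reported (0-17) and one binary item.
df["items"] = rng.binomial(17, 0.55 + 0.15 * df["year2012"])
df["item_reported"] = rng.binomial(1, 0.4 + 0.3 * df["year2012"])

# 1) Unadjusted comparison of the mean number of items reported (two-sample t-test).
t, p = stats.ttest_ind(df.loc[df.year2012 == 1, "items"],
                       df.loc[df.year2012 == 0, "items"])

# 2) Adjusted odds of reporting a single item: binomial GEE, journal as the cluster.
logit_gee = smf.gee("item_reported ~ year2012 + multisite + pharma + large_n",
                    groups="journal", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable()).fit()

# 3) Adjusted incidence rate ratios for the item count: Poisson GEE.
poisson_gee = smf.gee("items ~ year2012 + multisite + pharma + large_n + positive",
                      groups="journal", data=df,
                      family=sm.families.Poisson(),
                      cov_struct=sm.cov_struct.Exchangeable()).fit()

print(np.exp(logit_gee.params))    # adjusted odds ratios
print(np.exp(poisson_gee.params))  # adjusted incidence rate ratios
```

Exponentiating the GEE coefficients yields the adjusted odds ratios and incidence rate ratios, which is how the quantities reported in Tables 2 and 3 are interpreted.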
4. Results

Our search retrieved 445 articles, of which 199 and 230 were published in 2007 and 2012 respectively. We then randomly selected 100 for each year. Ten articles (8 from 2007 and 2 from 2012) did not meet our inclusion criteria and were replaced by another random selection. A flow diagram of our study selection process is shown in Fig. 1. Agreement was considered high for inclusion of articles (Kappa = 0.60 [95% CI 0.29, 0.92]; p < 0.001; given a maximum attainable Kappa of 0.60). Overall we compared 100 titles published in 2007 to 100 titles published in 2012, giving a total of 200 titles. Table 1 describes the included articles by year of publication and selected characteristics. Agreement between reviewers on all CONSORT items was moderate (Kappa = 0.59; 95% CI 0.56, 0.62; p < 0.001; given a maximum attainable Kappa of 1).

The mean number of items reported in 2007 (mean = 9.06; SD = 2.15) compared to 2012 (mean = 12.11; SD = 2.22) differed significantly (mean difference = 3.05; 95% CI 2.44, 3.65; p < 0.001). After adjusting for covariates (number of sites, sample size and type of intervention), the difference was 2.91 (95% CI 2.35, 3.41; p < 0.001).
There was a statistically significant improvement in reporting between 2007 and 2012 for seven items of the CONSORT for abstracts. More abstracts in 2012 identified the study as randomized in the title (aOR 5.04; 95% CI 2.24, 11.34; p < 0.001) and provided details on the trial design (aOR 2.23; 95% CI 1.21, 4.30; p = 0.011). In the Methods section, the participants (aOR 12.31; 95% CI 3.60, 42.01; p < 0.001), objectives (aOR 3.15; 95% CI 1.23, 8.14; p = 0.017), randomization (aOR not estimable; 13/100 versus 0/100) and blinding (aOR 16.57; 95% CI 7.18, 38.21; p < 0.001) were reported better in 2012 compared to 2007. In the Results section, trial status (aOR 2.79; 95% CI 1.52, 5.11; p = 0.001) and numbers analyzed (aOR 31.78; 95% CI 13.52, 74.72; p < 0.001) were also reported better. The source of funding was more often reported in 2012 than in 2007 (aOR not estimable; 74/100 versus 0/100). Reporting of the details of the intervention and of the primary outcome was worse in 2012, but the differences were not statistically significant. The odds ratios for reporting of author contact addresses, numbers randomized, adverse events and conclusions were not statistically significant, but all favored 2012. Reporting of the effect size of the primary outcome and its precision, and of trial registration, was identical in both years. These results are reported in Table 2.

In the GEE analysis, publication in 2012 (IRR 1.32; 95% CI 1.25, 1.39; p < 0.001) and multisite status (IRR 1.08; 95% CI 1.03, 1.15; p = 0.006) were significantly associated with reporting more items. Source of funding was removed from the model because no studies reported funding in 2007. See Table 3.
5. Discussion

In this study, we report differences in the quality of reporting of abstracts in a random selection of articles published in major general medicine journals in 2012 and 2007, using the CONSORT extension for abstracts [8]. For some items there was an improvement, while for others reporting quality seemed to stagnate or regress. Overall, there was an improvement in the quality of reporting of abstracts since the publication of the CONSORT extension for abstracts in 2008, with an average of three more items reported.
Fig. 1. Flow chart of study selection.
Table 1. Distribution of articles by year and selected characteristics.

Characteristic | 2007 (n = 100), n (%) | 2012 (n = 100), n (%) | Total (n = 200), n (%)
Journal
  New England Journal of Medicine | 34 (34) | 43 (43) | 77 (38.5)
  Lancet | 27 (27) | 24 (24) | 51 (25.5)
  British Medical Journal | 15 (15) | 14 (14) | 29 (14.5)
  Journal of the American Medical Association | 13 (13) | 12 (12) | 25 (12.5)
  Annals of Internal Medicine | 11 (11) | 6 (6) | 17 (8.5)
  Canadian Medical Association Journal | 0 (0) | 1 (1) | 1 (0.5)
Multisite study
  Yes | 40 (40) | 56 (56) | 96 (48.0)
  No | 60 (60) | 44 (44) | 104 (52.0)
Sample size
  1–1000 | 63 (63) | 59 (59) | 122 (61.0)
  >1000 | 37 (37) | 41 (41) | 78 (39.0)
Pharmaceutical intervention
  Yes | 63 (63) | 59 (59) | 122 (61.0)
  No | 37 (37) | 41 (41) | 78 (39.0)
Industry funding
  Yes | 0 (0) | 36 (36) | 36 (18.0)
  No | 0 (0) | 38 (38) | 38 (19.0)
  Not reported | 100 (100) | 26 (26) | 126 (63.0)
Table 2. Crude and adjusted odds ratios for adherence to 17 items of the CONSORT statement for abstracts in 2012 compared to 2007.

Item (criteria) | 2007, n (%) | 2012, n (%) | Univariate analysis b, OR (95% CI); p | Multivariable analysis, adjusted a OR (95% CI); p
Title (identified as randomized in title) | 66 (66) | 91 (91) | 5.51 (2.51, 11.32); <0.001 | 5.04 (2.24, 11.34); <0.001
Authors (both postal and email addresses provided) | 77 (77) | 83 (83) | 1.46 (0.73, 2.92); 0.289 | 1.37 (0.68, 2.77); 0.372
Trial design (details on trial design) | 59 (59) | 78 (78) | 2.46 (1.27, 4.82); 0.004 | 2.23 (1.21, 4.30); 0.011
Methods
  Participants (eligibility criteria for participants) | 71 (71) | 97 (97) | 13.21 (3.82, 69.62); <0.001 | 12.31 (3.60, 42.01); <0.001
  Interventions (details of interventions, e.g. dosing and duration) | 86 (86) | 80 (80) | 0.65 (0.28, 1.46); 0.259 | 0.53 (0.23, 1.19); 0.123
  Objective (specific objective or hypothesis reported) | 79 (79) | 93 (93) | 3.53 (1.35, 10.31); 0.004 | 3.15 (1.23, 8.14); 0.017
  Outcome (clearly defined primary outcome) | 98 (98) | 91 (91) | 0.93 (0.87, 0.99); 0.030 | 0.21 (0.04, 1.13); 0.069
  Randomization (method of randomization and allocation concealment) | 0 (0) | 13 (13) | Not estimable | Not estimable
  Blinding (details on whether there was blinding and who was blinded) | 8 (8) | 59 (59) | 16.50 (6.95, 43.14); <0.001 | 16.57 (7.18, 38.21); <0.001
Results
  Numbers randomized (numbers randomized to each group) | 53 (53) | 61 (61) | 1.39 (0.76, 2.53); 0.253 | 1.28 (0.72, 2.28); 0.399
  Recruitment (details on the trial status, e.g. completed and interim analyses) | 25 (25) | 49 (49) | 2.88 (1.52, 5.50); <0.001 | 2.79 (1.52, 5.11); 0.001
  Numbers analyzed (numbers analyzed in each group) | 26 (26) | 91 (91) | 28.78 (12.05, 72.88); <0.001 | 31.78 (13.52, 74.72); <0.001
  Outcome (effect size for primary outcome and its precision) | 83 (83) | 83 (83) | 1.00 (0.45, 2.24); >0.999 | 0.78 (0.36, 1.71); 0.547
  Harms (adverse events reported) | 53 (53) | 60 (60) | 1.33 (0.73, 2.42); 0.318 | 1.58 (0.85, 2.95); 0.148
Conclusions (conclusions reported balancing both harms and benefits) | 22 (22) | 26 (26) | 1.25 (0.62, 2.52); 0.508 | 1.41 (0.71, 2.79); 0.322
Trial registration (trial registration number and name of trial registry) | 100 (100) | 100 (100) | Not estimable | Not estimable
Funding (source of funding) | 0 (0) | 74 (74) | Not estimable | Not estimable

a Adjusted for number of sites (single site versus multiple), sample size (0–1000 versus more than 1000), and type of intervention (pharmaceutical versus all others), with journal as a grouping variable, using generalized estimating equations.
b Chi-squared tests.
Table 3. Adjusted incidence rate ratios for the total number of CONSORT extension for abstracts items reported.

Factor | Adjusted incidence rate ratio (95% CI) | p-value
Year of publication
  2007 | 1 (reference) |
  2012 | 1.32 (1.25, 1.39) | <0.001
Multisite study
  No | 1 (reference) |
  Yes | 1.08 (1.03, 1.15) | 0.006
Pharmaceutical intervention
  No | 1 (reference) |
  Yes | 1.05 (0.98, 1.11) | 0.131
Sample size
  0–1000 | 1 (reference) |
  >1000 | 1.02 (0.96, 1.08) | 0.508
Results of trial
  Negative result | 1 (reference) |
  Positive result | 0.98 (0.92, 1.05) | 0.942
A number of issues are noteworthy in this analysis. The first is that our findings represent a subset of abstracts registered in clinical trial registries and published as full text in the journals concerned. For this reason, all of the included abstracts reported a trial registration number and the name of the trial registry. All the abstracts in both years also reported contact details for the first author, although not all reported email addresses. This may reflect journal indexing policy rather than a deficiency in reporting. Moreover, author contact details are more relevant for conference abstracts [19], none of which were included in this analysis.

Our findings are in line with many other studies that have reported sub-optimal adherence to the CONSORT for abstracts across journals and fields of medicine over time [9,10,13–17]. The two most poorly reported items were the details of randomization and conclusions that balanced both benefits and harms. None of the studies in 2007 reported the methods of randomization, and only 13/100 of the studies in 2012 reported details of randomization and allocation concealment. These are critical and defining components of RCTs which must be reported adequately to demonstrate that selection bias has been avoided [19]. Likewise, only 22/100 and 26/100 abstracts reported balanced conclusions in 2007 and 2012 respectively. Even though all of the studies reported conclusions, these conclusions often omitted the harms and focused on the benefits of the intervention.

It has been shown that journal endorsement and implementation of the CONSORT statement improves reporting of abstracts [12]. All the journals included in this paper endorse the CONSORT statement (http://www.consort-statement.org/aboutconsort/consort-endorsement/consort-endorsers—journals/). In addition, higher impact factor, industry funding, multisite studies, large studies, studies of pharmacological interventions, studies with negative results and more recent studies are often better reported [11]. In this study we have demonstrated an improvement in reporting in 2012 compared to 2007, and that multisite studies seem to be reported
better. The effects of type of intervention, sample size and trial results were in the hypothesized direction, though not statistically significant. One could postulate that multisite studies possess more resources, but it is unclear why they are better reported.

Given the above findings, we encourage journal editors to implement the use of the CONSORT extension for abstracts at all stages of the editorial process. It should be highlighted in the instructions for authors; submission procedures should insist on this structure; reviewers should be asked to comment on abstract quality; and final copyediting should ensure that all seventeen items are covered. Authors of RCT abstracts should also take upon themselves the responsibility of producing scientifically adequate abstracts, irrespective of the type of intervention or the results of the study. On the other hand, despite the desirability of a checklist, authors may be uncertain about, or even unwilling to report, items of unclear utility in an abstract. For example, the harms of a new treatment for cancer were often more readily reported than those of educational interventions. It also appeared that harms were not discussed when the intervention was not found to be effective. In many instances when the primary outcomes were undesirable (e.g. mortality, stroke or myocardial infarction), harms were omitted from the abstracts. It would seem that despite wide journal endorsement of the CONSORT statement, little is done to ensure that abstracts are reported accordingly. The reasons why authors and editors still fall short of optimal reporting warrant further investigation. There may be room for more elaboration regarding the CONSORT for abstracts guidelines.

This study is not without limitations. The selection of top-tier journals cannot be considered representative of all other medical journals. Indeed, it is possible that reporting of RCT abstracts is worse in other, less popular journals. Reporting of abstracts may also improve for reasons other than the release of the CONSORT extension for abstracts: funding bodies and readers who demand high-quality reports may be playing a role in these improvements, and it is also reasonable to expect better reporting in the absence of guidelines if the studies themselves are conducted better. However, it is important to note that failure to report some items does not necessarily mean that the investigators did not carry them out [22]. Even though we adjusted for a journal effect in our GEE analyses, we are unable to determine whether this effect is due to effective implementation of the CONSORT extension for abstracts, restrictions on word counts for abstracts, or some other journal-specific factors that may influence reporting quality.

6. Conclusion

In 2012 compared with 2007, there was some improvement in the reporting of RCT abstracts in top-tier medical journals according to the CONSORT for abstracts checklist, both in the number of items reported and for specific items, but there is still room for improvement. Journal endorsement and implementation of the CONSORT statement for abstracts are warranted.
Appendix A. Search strategy for RCTs published in high impact medical journals in 2007 and 2012

1. Search ((RCT) or randomized control*) or clinical trial*
2. Search #1 limits: Publication date from 2007/01/01 to 2007/12/31
3. Search #1 limits: Publication date from 2012/01/01 to 2012/12/31
4. Search #2 or #3
5. Search animal*
6. Search #4 not #5
7. Search ((((protocol[Title]) or systematic review[Title]) or meta-analysis[Title]) or editorial*[Title]) or narrative review[Title]
8. Search #6 not #7
9. Search ((((case report[Title]) or cohort stud*[Title]) or case control[Title]) or case series[Title]) or guideline*[Title]
10. Search #8 not #9
11. Search ((((“Lancet”[Journal]) or “BMJ (Clinical research ed.)”[Journal]) or “The New England journal of medicine”[Journal]) or “Annals of internal medicine”[Journal]) or “CMAJ: Canadian Medical Association journal = journal de l'Association medicale canadienne”[Journal]
12. Search #10 and #11
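For readers who want to run a comparable search programmatically rather than through the PubMed web interface, the following is a minimal sketch using Biopython's Entrez module. The query string is a simplified paraphrase of the strategy above (it does not reproduce every step), and the email address is a placeholder; none of this reflects the authors' actual workflow.

```python
# Minimal sketch (assumption: Biopython Entrez; simplified paraphrase of the
# strategy above, not the authors' exact query). Retrieves candidate PubMed IDs
# for 2007 and randomly samples 100, mirroring the selection described in the Methods.
import random
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder; NCBI requires a contact address

query = (
    '(randomized controlled trial[pt] OR randomized[tiab] OR "clinical trial"[tiab]) '
    'AND ("Lancet"[Journal] OR "BMJ (Clinical research ed.)"[Journal] '
    'OR "The New England journal of medicine"[Journal] '
    'OR "Annals of internal medicine"[Journal]) '
    'NOT (protocol[ti] OR systematic review[ti] OR editorial[pt])'
)

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2007/01/01", maxdate="2007/12/31", retmax=1000)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]
sample_2007 = random.sample(pmids, min(100, len(pmids)))  # random selection of 100 abstracts
print(len(pmids), "hits;", len(sample_2007), "sampled")
```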
References

[1] Stolberg HO, Norman G, Trop I. Randomized controlled trials. Am J Roentgenol 2004;183:1539–44.
[2] Jadad AR. Randomised controlled trials: a user's guide. BMJ Books; 1998.
[3] Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 1996;276:637–9.
[4] Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. J Am Podiatr Med Assoc 2001;91:437–42.
[5] Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust 2006;185:263–7.
[6] Fontelo P, Gavino A, Sarmiento RF. Comparing data accuracy between structured abstracts and full-text journal articles: implications in their use for informing clinical decisions. Evid Based Med 2013;18:207–11.
[7] Hartley J. Current findings from research on structured abstracts. J Med Libr Assoc 2004;92:368–71.
[8] Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008;371:281–3.
[9] Ghimire S, Kyung E, Kang W, Kim E. Assessment of adherence to the CONSORT statement for quality of reports on randomized controlled trial abstracts from four high-impact general medical journals. Trials 2012;13:77.
[10] Can OS, Yilmaz AA, Hasdogan M, Alkaya F, Turhan SC, Can MF, et al. Has the quality of abstracts for randomised controlled trials improved since the release of Consolidated Standards of Reporting Trial guideline for abstract reporting? A survey of four high-profile anaesthesia journals. Eur J Anaesthesiol 2011;28:485–92.
[11] Samaan Z, Mbuagbaw L, Kosa D, Debono VB, Dillenburg R, Zhang S, et al. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc 2013;6:169–88.
[12] Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 2012;344:e4178.
[13] Fleming PS, Buckley N, Seehra J, Polychronopoulou A, Pandis N. Reporting quality of abstracts of randomized controlled trials published in leading orthodontic journals from 2006 to 2011. Am J Orthod Dentofacial Orthop 2012;142:451–8.
[14] Seehra J, Wright NS, Polychronopoulou A, Cobourne MT, Pandis N. Reporting quality of abstracts of randomized controlled trials published in dental specialty journals. J Evid Based Dent Pract 2013;13:1–8.
[15] Ghimire S, Kyung E, Lee H, Kim E. Oncology trial abstracts showed suboptimal improvement in reporting: a comparative before-and-after evaluation using CONSORT for abstract guidelines. J Clin Epidemiol 2014;67:658–66.
[16] Knobloch K, Vogt PM. Adherence to CONSORT abstract reporting suggestions in surgical randomized-controlled trials published in Annals of Surgery. Ann Surg 2011;254:546 [author reply 546–547].
[17] Chen Y, Li J, Ai C, Duan Y, Wang L, Zhang M, et al. Assessment of the quality of reporting in abstracts of randomized controlled trials published in five leading Chinese medical journals. PLoS One 2010;5:e11926.
[18] Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med 2005;37:360–3.
[19] Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med 2008;5:e20.
[20] Hanley JA, Negassa A, Edwardes MDd, Forrester JE. Statistical analysis of correlated data using generalized estimating equations: an orientation. Am J Epidemiol 2003;157:364–75.
[21] LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F. Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med 1997;337:536–42.
[22] Devereaux PJ, Choi PT, El-Dika S, Bhandari M, Montori VM, Schunemann HJ, et al. An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. J Clin Epidemiol 2004;57:1232–6.