SPECIAL ARTICLE
Reading between the lines: Making practical sense of research reports Ameet V. Revankar Dharwad, Karnataka, India
The consensus for evidence-based methods is growing.1 The wave of change is noticeable in the way articles are published in the AJO-DO, including structured abstracts, systematic reviews conducted under the guidelines of the Cochrane Collaboration, and randomized controlled trials (RCTs). Reporting guidelines, including CONSORT (Consolidated Standards of Reporting Trials), QUOROM (Quality of Reporting of Meta-analyses), and MOOSE (Meta-analysis of Observational Studies in Epidemiology), have been established for many kinds of articles.2,3

Is this shift toward an evidence-based approach in publishing enough to convince the reader, or are additional skills needed? Does the reader still need to master the art of reading an article: an art based on specific parameters and evaluation strategies that enables him or her to separate the wheat from the chaff? A critical review of the methodologic quality of a study is paramount and includes assessment of 3 important parameters: appraisal of the evidence, its validity, and its usefulness (or clinical applicability); together, these comprise critical appraisal. Standard checklists encompassing these parameters of critical appraisal have been developed, and simplified versions can be helpful (Figs 1-3).4

Most articles in contemporary, top-notch orthodontic journals are presented in the standard format: introduction, methods, results, and discussion. Surprisingly, a major chunk of published research "belongs to the bin."5 I agree that picking out flaws in others' research is much easier than writing a watertight article yourself; nevertheless, accuracy is the name of the game.
Assistant professor, Department of Orthodontics and Dentofacial Orthopedics, Sri Dharmasthala Manjunatheshwara College of Dental Sciences and Hospital, Sattur, Dharwad, Karnataka, India.
The author reports no commercial, proprietary, or financial interest in the products or companies described in this article.
Reprint requests to: Ameet V. Revankar, Department of Orthodontics and Dentofacial Orthopedics, SDM College of Dental Sciences and Hospital, Sattur, Dharwad, Karnataka, India 580009; e-mail, [email protected].
Submitted, June 2009; revised and accepted, August 2009.
Am J Orthod Dentofacial Orthop 2010;138:118-20
0889-5406/$36.00
Copyright © 2010 by the American Association of Orthodontists.
doi:10.1016/j.ajodo.2009.08.004
Relevance of the study
1. What clinical or research questions did the study ask?
2. What type of study was done?
3. Was the study design appropriate to the broad field of research addressed?
Fig 1. Checklist to determine the relevance of the study.
In a scientific article, the weight of the presented evidence is judged by the 6-level hierarchy of evidence suggested by Shekelle et al6 (Fig 4). However, the value of an article has more to do with its design (methods) than with anything else.7 Of utmost importance are the following questions related to the study design.

What is the aim of the study? The reason for doing the study and the questions the authors are attempting to answer are usually reported in the introduction. It is always worthwhile to clarify in your own mind the purpose of the study. The background of the research should be clear and concise. The hypothesis should be definitive and preferably based on a deductive probabilistic pedestal.8

What is the study type or design? Research articles fall into 2 basic categories: primary (first-hand) research and secondary (integrative) research, the latter having a review design.

Is the study design appropriate? However thought-provoking the hypothesis, if the wrong method is used to test it, the results will not be meaningful. The appropriateness of a particular study design is determined by the type of research undertaken (Fig 5). A study design suitable for 1 research field might not be suitable for another. For example, to determine the short-term efficacy of the Twin-block appliance for Class II correction (an evaluation of treatment efficacy), an RCT is the ideal design, whereas to study the lasting effects of Twin-block therapy (research on prognosis), a longitudinal design is essential.

Reading articles is of no value unless you read the right ones at the right time in the right light of critical assessment.
Methodology used in the study
1. Was the study original?
2. Who or what were the subjects of the study?
   • Under what guidelines were the subjects selected?
   • Who or what was included and excluded?
   • Were the subjects studied in "real life" or under simulated test conditions?
3. Was the design of the study sensible?
   • What treatment or intervention was considered?
   • What outcomes were measured, and how?
4. Was the study adequately controlled?
   • True randomization is essential in a randomized controlled trial
   • In other study designs, appropriate controls are needed to authenticate the research
   • Were all confounding factors eliminated?
   • Was the outcome assessment blinded?
5. Were the sample size and study time adequate to obtain realistic results?
Fig 2. Checklist for the methodology used in the study.

Mathematical/statistical design of the study
1. Did the investigators determine in advance whether the groups in question were comparable?
2. Were confounding variables neutralized before the interventional measures were used?
3. What sort of data were obtained, and were appropriate statistical tests used? For example, were paired tests performed on paired data?
4. Were the data analyzed in accordance with the original study protocol?
5. Were the data evaluated with both common sense and appropriate statistical adjustments?
6. Were the results expressed in terms of the likely harm or benefit to an individual patient?
Fig 3. Checklist for the mathematical or statistical design of the study.

Ia   Evidence from systematic reviews of RCTs
Ib   Evidence from at least 1 RCT
IIa  Evidence from at least 1 controlled study with randomization
IIb  Evidence from at least 1 other type of quasi-experimental study
III  Evidence from nonexperimental descriptive studies
IV   Evidence from expert committee reports or opinions
Fig 4. Hierarchy of evidence in ascending order, with opinion-based evidence and case reports at the bottom and systematic reviews based on RCTs at the top.

Type of research              Appropriate study design
Treatment efficacy            Randomized controlled trial
Diagnosis                     Cross-sectional design
Screening                     Cross-sectional design
Prognosis                     Longitudinal cohort
Etiology, etiopathogenesis    Cohort or case-control study
Fig 5. Research types and appropriate study designs.
Critics may very well sneer at the evidence-based approach, calling it "the increasingly fashionable tendency of a group of young, confident, and highly numerate medical academics to belittle the performance of experienced clinicians by using a combination of epidemiologic jargon and statistical sleight of hand."9 Nevertheless, it defies all reasoning that a haphazard empirical approach to a scientific problem thrives when a logical, systematic approach, better known as the evidence-based approach, is available to the scientific community.
REFERENCES
1. Turpin DL. Consensus builds for evidence-based methods. Am J Orthod Dentofacial Orthop 2004;125:1-2.
2. Turpin DL. CONSORT and QUOROM guidelines for reporting randomized clinical trials and systematic reviews. Am J Orthod Dentofacial Orthop 2005;128:681-6.
3. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA 2000;283:2008-12.
4. Sackett DL, Haynes B. On the need for evidence based medicine. Evid Based Med 1995;1:4-5.
5. Altman DG. The scandal of poor medical research. BMJ 1994;308:283-4.
6. Shekelle PG, Woolf SH, Eccles M, Grimshaw J. Clinical guidelines: developing guidelines. BMJ 1999;318:593-6.
7. Greenhalgh T. How to read a paper. Getting your bearings (deciding what a paper is about). BMJ 1997;315:243-6.
8. Popper K. Conjectures and refutations: the growth of scientific knowledge. New York: Routledge and Kegan Paul; 1963.
9. Greenhalgh T. Why read papers at all? In: Greenhalgh T. How to read a paper. The basics of evidence based medicine. 2nd ed. London: BMJ Books; 2001. p. 3.