The Clinical Impact of Resident-attending Discrepancies in On-call Radiology Reporting: A Retrospective Assessment


Radiology Resident Education

Sebastian R. McWilliams, MB, Christopher Smith, MD, Yaseen Oweis, MD, MBA, Kareem Mawad, MD, Constantine Raptis, MD, Vincent Mellnick, MD

Abbreviations: MCR, management change rate; CT, computed tomography; MRI, magnetic resonance imaging; ED, emergency department; CR, computed radiography

Rationale and Objectives: The purpose of this study is to quantify the clinical impact of resident-attending discrepancies at a tertiary referral academic radiology residency program by assessing rates of intervention, discrepancy confirmation, recall, and management change; furthermore, a discrepancy categorization system is assessed.

Materials and Methods: Records were reviewed retrospectively for the n = 1482 discrepancies that occurred in the 17-month study period to assess the clinical impact of discrepancies. Discrepancies were grouped according to a previously published classification system. Management changes were recorded and grouped by severity. The recall rate was estimated for discharged patients. Any confirmatory testing was reviewed to evaluate the accuracy of the discrepant report. Categorical variables were compared using the chi-square test.

Results: The 1482 discrepancies led to a management change in 661 cases (44.6%). The most common management change was follow-up imaging. Procedural interventions, including surgery, occurred in 50 cases (3.3%). The recall rate was 2.6%. Management changes were more severe for computed tomography examinations, for inpatients, and when the discrepancy was in the chest and abdomen subspecialties. Management changes also correlated with the discrepancy category assigned by the attending at the time of review.

Conclusions: Resident-attending discrepancies cause management changes in 44.6% of discrepancies (0.62% of all on-call studies); the most frequent change is follow-up imaging. The discrepancy categorization assigned by the attending correlated with the severity of management change.

Key Words: Resident; discrepancy; on-call.

© 2018 Published by Elsevier Inc. on behalf of The Association of University Radiologists.

RATIONALE AND OBJECTIVES

A fundamental concept in radiology residency training is the development of independence in a paradigm of graded responsibility. In the past, residents obtained this experience working nights for the practice in which their residency was embedded. Although this model persists in many departments, there is an increasing drive toward a 24-hour coverage model in which an attending radiologist, often even a subspecialty radiologist, provides direct resident supervision and final signing throughout the night. This model poses a challenge

Acad Radiol 2017; ■:■■–■■ From the Mallinckrodt Institute of Radiology, Washington University in St Louis, 510 S. Kingshighway Blvd. Box 8131, St Louis, MO 63110. Received October 16, 2017; accepted November 22, 2017. Address correspondence to: V.M. e-mail: [email protected] © 2018 Published by Elsevier Inc. on behalf of The Association of University Radiologists. https://doi.org/10.1016/j.acra.2017.11.016

to fostering independence in residency, particularly in the early years of training. One of the strongest arguments for 24-hour attending coverage is the need to avoid resident-attending discrepancies that lead to significant changes in management, particularly for discharged or critically ill patients. Although this justification is inherently reasonable, it is easy to overemphasize its importance when the true impact of discrepancies is not well defined.

Previous studies have determined discrepancy rates in several resident practice environments (1–3). Others have shown that discrepancy rates correlate with shift duration (4) and vary across imaging study types (5,6). Fewer studies have specifically detailed the clinical impact of discrepancies (7–9). Ruchman et al. reviewed 11,903 reports with a discrepancy rate of 24% and found no significant effect on management in 92.8% of discrepancies; it should be noted, however, that this group did not specifically detail their criteria for assessing management impact. The definition of discrepancy has been reasonably well established (5,7,9,10), yet several studies apply different criteria, which makes direct comparisons challenging (8,11). Friedman et al. identified a low discrepancy rate of 28 per 18,185 reports. They relied on chart review and emergency department (ED) physicians' assessments to judge discrepancy impact, but did not count cases in which follow-up imaging was pursued as changes in management. In a study of computed tomography (CT) angiography of the head and neck, Meyer et al. described only one change in management out of 73 discrepancies, although there is ambiguity over whether follow-up angiograms after negative CT angiography were performed as a result of the discrepancy (11). Other studies suffer from small numbers or limited patient or imaging subgroups. Bruni et al. performed a rigorous retrospective analysis of discrepancies but limited their analysis to neuroradiology studies (9). The studies by Carney et al. and Filippi et al. had low numbers overall (35 and 26 discrepancies, respectively), and little detail on management changes was provided (5,10). The study by Tieng et al. also had low numbers (20 discrepancies out of 203 studies) and defined minor discrepancies as conditions that would not impact the course of ED management, potentially excluding clinically impactful discrepancies (3).

The purpose of this study is to quantify the clinical impact of resident-attending discrepancies at a tertiary referral academic radiology residency program. Metrics estimated to address this question include rates of intervention, rates of confirmation of the discrepancy, the recall rate for discharged patients, and the management change rate (MCR). In addition, a previously proposed discrepancy classification system is evaluated against the severity of management change.

MATERIALS AND METHODS

Case Selection

The institutional ethics review board granted approval for this retrospective study. The electronic discrepancy log was queried over a 17-month range from mid-January 2013 to mid-June 2014. Discrepant reports were generated for studies read by residents during the on-call period: all nights from 5 PM to 7 AM, Saturdays from 12 PM to 5 PM, and all day on Sundays and holidays. The attending who reviewed cases at the end of the call period determined whether a discrepancy existed and supervised the resident's electronic logging of the discrepancy into the radiology information system. Each discrepancy was recorded and categorized based on a previously published severity- and location-based system (12). According to this system, only discrepancies with the potential to alter patient management were recorded, analogous to "major discrepancies" in other grading systems (4,5,9) or the "b" modifier in the RADPEER system (13). Each study's modality, the discrepant diagnosis, the times of the preliminary and addended reports, and the interpreting attending's subspecialty were recorded. At the study institution, residents do not read magnetic resonance imaging (MRI) studies on call; hence, the vast majority of discrepant reports were for radiographs and CTs. The few non-radiograph and non-CT discrepancies (n = 14), such as ultrasound and nuclear medicine studies, were excluded from analysis due to small numbers.

Discrepancy Analysis

The patients associated with each discrepancy were identified and their charts were reviewed retrospectively. The diagnosis in question was recorded (including false-positive diagnoses, in which the discrepancy was to call a study normal) and categorized by organ system. The type of error that led to the discrepancy was grouped into observation errors (eg, failure to observe a significant finding, a false negative) or interpretation errors (eg, misinterpretation of the significance of an observed finding). Some discrepancies were categorized as report clarification errors (eg, an addendum made to change a follow-up recommendation, clarify a typographic or dictation error, or document a verbal communication that had already taken place, with no change to the meaning of the preliminary report). These discrepancies were excluded from analysis because they represent errors that might be expected to occur had the attending dictated the study him- or herself, or because they were erroneously entered as discrepancies (eg, the change to the report never had the potential to alter management). No communication errors had clinical impact. If several discrepancies occurred for a given report, all were recorded; for analysis in this study, the one discrepancy per report with the greatest clinical impact was selected. Clinical notes from the ED visit or admission were reviewed, and changes in management attributed in writing to the discrepant report were identified and recorded. If no change in management occurred after the time of the discrepant report addendum and direct verbal communication, the discrepancy was considered to have caused no change in management, even in the absence of a statement confirming the lack of change. Similarly, if a follow-up test was recommended and performed, it was assumed this occurred as a result of the discrepant report. If several clinical impacts potentially occurred as a result of the discrepant report, the most severe one was selected.
For example, if a missed hip fracture on radiograph led to repeat imaging and then surgery, the clinical impact of the discrepancy was coded as therapeutic intervention (surgery). Changes in management were categorized based on a scheme (Table 1) and grouped by severity. To minimize variation, all charts were reviewed by one reviewer to standardize the assessment. The initial discrepant study and subsequent imaging studies of the same body region were reviewed, in addition to other potentially confirmatory testing (laboratory, pathology), to assess the veracity of the attending-directed discrepancy. Patients who returned to the ED as a result of the report discrepancy and subsequent telephone call were identified, and the recall rate was estimated. Times of the preliminary and discrepant reports were recorded, and the delay to discrepancy was calculated.
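The per-report selection rule described above (keep only the single most severe management change when several are linked to one discrepant report) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the category names and severity groups follow Table 1, and the function and dictionary names are hypothetical:

```python
# Hypothetical sketch of the selection rule: when several management
# changes are linked to one discrepant report, keep the most severe.
# Severity groups follow Table 1 (3 = most severe, 0 = no change).
SEVERITY = {
    "therapeutic intervention": 3,
    "medication change": 3,
    "diagnostic intervention": 3,
    "follow-up imaging": 2,
    "discharge delay/admission": 2,
    "consult or clinic visit": 2,
    "stopped workup": 1,
    "physical examination": 1,
    "laboratory examination": 1,
    "called patient or doctor": 1,
    "no change": 0,
}

def most_severe(changes):
    """Return the single most severe management change for one report."""
    return max(changes, key=SEVERITY.__getitem__) if changes else "no change"

# A missed hip fracture leading to repeat imaging and then surgery is
# coded by its most severe consequence, the therapeutic intervention:
print(most_severe(["follow-up imaging", "therapeutic intervention"]))
# -> therapeutic intervention
```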


TABLE 1. Description and Total Numbers of Clinical Management Changes, Categorized and Grouped by Severity

Severity Class   Management Change                                     Number of Cases
3 (Severe)       Therapeutic intervention                                81
                 Medication change                                      102
                 Diagnostic intervention                                 31
2 (Moderate)     Follow-up imaging                                      282
                 Discharge delay/admission/change in level of care        6
                 Consult or clinic visit                                 25
1 (Mild)         Stopped workup                                           9
                 Physical examination                                    21
                 Laboratory examination                                  17
                 Called patient or doctor                                87
                 No change in management                                821
                 Total                                                 1482

Statistical Methods

Continuous variables were tested for normality; the means of normally distributed variables and the medians of non-parametric variables were compared using the Student t test and the Mann-Whitney U test, respectively. Categorical variables were compared using the chi-square and Fisher exact tests. The criterion for statistical significance was P < .05.

RESULTS

Case Selection

There were 1541 discrepancies in the study period, out of 105,754 on-call studies performed in total during the same period, for an overall discrepancy rate of 1.45%. One hundred thirty-eight of the 1541 reports had more than one discrepancy, and the discrepancy with the greatest clinical impact was selected for inclusion in analysis. After exclusion of report clarification errors and non-computed radiography/computed tomography (CR/CT) discrepancies, 1482 cases were available for analysis.

Discrepancy Classification

One thousand thirteen of the discrepancies (68.4%) were for CT examinations across all subspecialties; the remaining 469 (31.6%) were for radiographic examinations (Table 2). The lung and brain were the most common organs with discrepancies, whereas fracture and pneumonia were the most common discrete diagnoses (Table 3). By error type, the majority were observation errors (n = 991) and the remainder were interpretation errors (n = 491).

TABLE 2. Modality and Study Types of the Discrepant Studies

Modality   Study Type                  n      Total
CT         CT abdomen pelvis          491     1013 (68.4%)
           CT chest                   211
           CT head                    146
           CT angiogram head/neck      29
           CT spine                    57
           CT face/sinuses             33
           CT MSK                       8
           Consult                     38
CR         CXR                        264      469 (31.6%)
           CR MSK                     191
           CR abdomen                  14
Total                                         1482

CR, computed radiography; CT, computed tomography; CXR, chest x-ray; MSK, musculoskeletal.

TABLE 3. (a) Ten Most Frequent Organs with Discrepancies; (b) Ten Most Frequent Discrepant Diagnoses

(a) Organ              Number of Discrepancies
Lung                   271
Brain                  120
Colon                   84
Small bowel             74
Kidney                  51
Pulmonary artery        43
Rib                     36
Cervical spine          34
Peritoneum              31
Thoracic spine          29

(b) Diagnosis          Number of Discrepancies
Fracture               130
Pneumonia              104
Pulmonary nodule        54
Compression fracture    37
Pulmonary embolism      33
Brain infarct           28
Colitis                 25
Obstruction             23
Aneurysm                18
Subdural hematoma       18
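The categorical comparisons reported in the Results (eg, severity of management change by modality in Table 4) reduce to a standard chi-square test of independence. A minimal pure-Python sketch, using the modality counts from Table 4a; note the closed-form p-value exp(-x/2) is exact only because this 3 × 2 table has 2 degrees of freedom:

```python
from math import exp

def chi_square_p(table):
    """Pearson chi-square test of independence for a small contingency table.

    Returns (statistic, p). The closed-form p-value exp(-x/2) is exact
    only for 2 degrees of freedom, e.g. a 3 x 2 table like the one below.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = sum(
        (obs - rt * ct / n) ** 2 / (rt * ct / n)
        for row, rt in zip(table, row_totals)
        for obs, ct in zip(row, col_totals)
    )
    df = (len(table) - 1) * (len(table[0]) - 1)
    assert df == 2, "closed-form p-value below is valid only for df = 2"
    return stat, exp(-stat / 2)

# Management-change severity (rows: mild, moderate, severe) by modality
# (columns: CR, CT); counts taken from Table 4a.
severity_by_modality = [
    [49, 87],    # group 1 (mild)
    [115, 196],  # group 2 (moderate)
    [54, 160],   # group 3 (severe)
]
stat, p = chi_square_p(severity_by_modality)
print(f"chi2 = {stat:.2f}, P = {p:.3f}")  # -> chi2 = 8.63, P = 0.013
```

The computed P value reproduces the P = .013 reported in Table 4 for the modality comparison.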

Management Changes

The most common outcome was no change in management (n = 821) (Table 1). The most common actual change in management was obtaining follow-up imaging (n = 282). The most common type of diagnostic intervention was a biopsy procedure. Of the 81 discrepancies that led to a therapeutic


TABLE 4. Discrepancy Frequencies Grouped by Management Change Group and Subgrouped by Modality, Type of Error, Discrepancy Class Location and Severity, Attending Subspecialty, and Overread Subspecialty

(a)
                           Modality          Type of Error
Management Change Group    CR      CT        Observation   Interpretation
1 (Mild)                    49      87            95             41
2 (Moderate)               115     196           224             87
3 (Severe)                  54     160           150             64
Total                      218     443           469            192
P-value                    .013                  .848

                           Discrepancy Class—Location         Discrepancy Class—Severity
Management Change Group    Discharged   Admitted   In ED      a (Critical)   b (Time Dependent)   c (Non-time Dependent)
1 (Mild)                       94           32       10             2               51                    83
2 (Moderate)                   60          193       58             5              175                   131
3 (Severe)                     35          139       40             5              158                    51
Total                         189          364      108            12              384                   265
P-value                       <.001                               <.001

(b)
                           Attending Subspecialty             Overread Subspecialty
Management Change Group    Abdo   Chest   MSK   Neuro         Abdo   Chest   MSK   Neuro
1 (Mild)                    58      45     12     21            55     25     30     26
2 (Moderate)               110     122      9     70            78    114     43     76
3 (Severe)                  76     101      5     32            81     74     24     35
Total                      244     268     26    123           214    213     97    137
P-value                    <.001                               .002

Abdo, abdomen; CR, computed radiography; CT, computed tomography; ED, emergency department; MSK, musculoskeletal.

intervention, 31 led to surgery, 19 to an interventional or endoscopic procedure, 22 to casting or bracing, and 9 to other procedures. No unnecessary surgeries or interventions took place as a result of a discrepant preliminary report. Overall, a discrepancy led to surgery or a procedure in 50 of 1482 discrepancies (3.3%), or 0.05% of all studies reported during the study period. When management changes were grouped by severity as laid out in Table 1, discrepancies on CT examinations occurred proportionately more often in group 3, the most severe category, than discrepancies on CR examinations (Table 4) (P = .013). The proportion of observation and interpretation errors did not vary by management change group (P = .848). When grouped by attending-assigned discrepancy class, more severe management changes occurred more frequently in the admitted and ED groups (P < .001) and in discrepancies the attending classified as critical or time dependent (P < .001). There were also proportionately more severe management changes in the chest and abdominal subspecialties (P < .001). The MCR was defined as the number of discrepant cases in which any change in management occurred (n = 661) divided by the total number of discrepancies (n = 1482), yielding a rate of 44.6%.

Review Latency

When comparing latency from preliminary to addended report, there was no difference between severity groups (group 1: 523 minutes; group 2: 527 minutes; group 3: 478 minutes; P = .225) or between modalities (CR: 527 minutes; CT: 510 minutes; P = .268).

Recall Rate

Of the patients discharged with a report discrepancy (n = 334), 40 were recalled to the ED, for a recall rate of 11.9% of discharged patients, or 2.6% of all discrepant cases. Rates of more severe clinical impact were similar between patients who were and were not recalled (P = .09).

Attending Performance

Overall, 1017 of 1482 cases with discrepancies (68.6%) had further testing, including those in which no change in management occurred. When confirmatory testing was available, the attending was correct 80.5% of the time and the resident 19.5% of the time. In these cases with confirmatory testing, when the attending was proven correct, the discrepancy was more likely to have a more significant clinical impact (P < .001). When comparing attending reads inside vs outside his or her subspecialty, there was no difference in the proportion of cases in which the attending was correct (P = .568).

DISCUSSION

In this study population, resident-attending discrepancies were uncommon, with a rate of 1.45%, similar to an established resident-attending benchmark rate of 1.7% (14). Furthermore, this rate is similar to previously published comparable work assessing the clinical impact of resident-attending discrepancies (7,9,15,16). Agreement with previously published work supports the approach to identifying discrepancies employed for this study population. In addition, this study's discrepancy rate is much lower than second-opinion subspecialist attending discrepancy rates (17,18), although this is understandable because not all imaging examinations in this study were read by subspecialists. Finally, this discrepancy rate is lower than the rate (14%) in a study that assessed the impact of discrepancies between general and subspecialist attending radiologists using surgeons to determine clinical impact (19). Both attending subspecialist radiologists and clinical specialists would be expected to be more stringent assessors of discrepancy and impact, however.

In addition to measuring the baseline discrepancy rate, we found an MCR of 44.6% of all discrepancies (661/1482), or 0.62% overall (661/105,754). However, the definition of management change is subjective, with considerable variability in the literature. Consistent aspects of the definition of a major discrepancy are impact on patient management through admission, discharge, further testing, medical treatment, or surgery (5,7,9,10). Our study attempted to maintain these objective criteria. The MCR in this study is similar to those of Maloney (14/29 with immediate change in management), Lal (12/21), and Stevens (23/37), although these studies evaluated low numbers of discrepancies (20–22). Compared to studies of comparable size, the MCR of this study is higher. The Ruchman study showed an MCR of 7.2% (7). Bruni et al. categorized discrepancies in a post hoc fashion, defining major discrepancies as only those in which management changes occurred and minor as those in which they did not; their major rate was 1.2% and their minor rate 7.2%, for a derived MCR of 1.2/(1.2 + 7.2) = 14.3% (9). The increased MCR in our study may be attributed to the discrepancy criteria utilized at our institution, which are intended to be more specific. That is, many discrepancy classification schemes include "minor" report changes or "edits" which do not impact clinical care.
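The rates quoted above are simple proportions, and the cross-study comparison can be reproduced directly. A brief sketch using the counts given in the text (illustrative arithmetic only; definitions of "management change" differ across the cited studies, and several figures in the manuscript appear truncated rather than rounded):

```python
# Headline counts reported in the Results section of this study.
total_studies = 105_754   # on-call studies in the 17-month period
logged = 1_541            # all logged discrepancies
discrepancies = 1_482     # CR/CT discrepancies analyzed
changed = 661             # discrepancies with any management change

discrepancy_rate = logged / total_studies       # ~1.46% ("1.45%" in text)
mcr = changed / discrepancies                   # management change rate
overall_change_rate = changed / total_studies   # ~0.63% ("0.62%" in text)

# MCRs for the comparison studies quoted above; Bruni et al. report
# rates rather than counts (major 1.2%, minor 7.2%), giving a derived MCR.
comparisons = {
    "Maloney (20)": 14 / 29,
    "Lal (21)": 12 / 21,
    "Stevens (22)": 23 / 37,
    "Bruni (9), derived": 1.2 / (1.2 + 7.2),
}
print(f"MCR = {mcr:.1%}")  # -> MCR = 44.6%
for name, rate in comparisons.items():
    print(f"{name}: {rate:.1%}")
```

The Maloney, Lal, and Stevens ratios evaluate to roughly 48%, 57%, and 62%, in the same range as this study's 44.6%, while the Bruni derived MCR evaluates to 14.3% as stated in the text.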
Our system intentionally de-emphasizes report changes that do not warrant a change in management. In addition to institutional variations, discrepancy rates and the MCR may be affected by baseline attending radiologist biases. The large number of attending radiologists sampled in this study limited the potential impact of outliers while also providing the benefit of sampling a wide variety of readers.

The actual changes in management that occurred are of interest. A discrepancy that leads to surgery is a high-profile, dramatic, and easily understandable example of a significant management change, yet therapeutic interventions such as surgery were a minority of discrepancies (81 of 1482, 5.5%). Other frequently seen changes in management included medications being started or stopped, patients being made non-weight-bearing or allowed to walk, patients being casted, and patients being discharged from the hospital. The most frequent change in management was recommending follow-up imaging where it was not initially recommended (282 of 1482, 19% of all discrepancies). This is a significant result, as follow-up imaging may add cost, time delays, and patient anxiety.

All management changes were sorted into groups based on severity (Table 1) to permit grouped analyses. Increased severity of management changes was observed in the admitted and ED groups (Table 4). This is in keeping with other studies showing that discrepancies occur more frequently in studies with multiple positive findings, which are more frequent in sicker patients (9) and in inpatients (4). No patient died as a result of a discrepancy in this study. Another notable observation is that increased management change severity was noted in the higher attending-assigned discrepancy groups, a result supporting a tiered discrepancy categorization system (12) rather than a system based on whether a finding would be expected to be made, such as RADPEER.

CT discrepancies outnumbered CR discrepancies by 2:1, and body studies made up the majority of those CT discrepancies, even though CR studies were more common overall. The most common organ affected by discrepant diagnoses was the lung, reflecting the most common study performed, the chest radiograph. The most common discrete discrepant diagnosis in this study was fracture. In a resident vs attending MSK radiologist series, 25 of 40 discrepancies were fractures (6); however, 20% of those discrepant fractures were diagnosed by orthopedic surgeons or emergency physicians, and 50% of the patients had already been referred to clinic or admitted, suggesting that discrepancies leading to this diagnosis generally carry lower clinical impact.

The mean latency between preliminary and discrepant interpretations was 8–9 hours, and there was no difference between severity groups in this study. This suggests that the short time scale of these discrepancies is not a factor in determining clinical impact. Also, this latency may not substantially delay treatments that would otherwise be performed during normal business hours. Conceivably, discrepant diagnoses left for longer periods (beyond 14 hours, the length of the call period) might lead to more severe clinical impact and worse clinical outcomes.
When considering coverage models, this result can be seen to support a resident coverage model in which the call period is limited to 14 hours and the attending review occurs as soon as possible after the call period. In common practice, discrepancies are treated as mistakes rather than as differences of opinion in interpreting studies that are frequently not clear-cut. The discrepant management decisions or procedural techniques employed by residents of other specialties may not receive the same high-profile persistence in the clinical record as those of radiology residents.

Previous work has shown the attending to account for most of the variation in resident-attending discrepancies (23), so an attempt was made to evaluate attending accuracy in these discrepancies. The attendings were correct 80.5% of the time when confirmatory testing was available, showing their supervision to be effective in improving interpretation accuracy. This is in keeping with the results of a body CT series, in which review of the discrepancies showed the attending to be correct in 95 of 112 discrepancies (84.8%) (24). One difficulty in assessing "correctness" is that a correctly interpreted but imperfect study (such as a cervical spine radiograph) may be refuted by a subsequent better test (cervical spine CT). When this situation arises, the methods of this study would count the attending as "incorrect," a weakness that would underestimate the attending correct rate. Despite this, the attendings were very frequently correct, a finding that supports their role in the interpretation of on-call studies.

This study has several other limitations. The modalities focused on were CR and CT; residents do not read MRI at the study institution, so the impact of this advanced imaging modality was not assessed. Small numbers of discrepancies in modalities other than CR and CT led to their exclusion. Other work has shown higher discrepancy rates with MRI (9,25). A further methodological weakness is the retrospective chart review used to assess clinical impact, which may be affected by reviewer bias, documentation bias, or incomplete records. Finally, these results may be applicable to a large academic practice and large residency program but may not reflect practice patterns at smaller institutions.

CONCLUSIONS

Resident-attending discrepancies in on-call radiology reports cause management changes at a rate of 44.6% of discrepancies; however, the overall rate of change is 0.62%, less than 1 in 100 studies reported on call. Most commonly, discrepancies result in follow-up imaging (19% of discrepancies), although more severe management changes, including therapeutic interventions, do occur (14% of discrepancies, 0.2% overall). A severity-based discrepancy categorization system successfully identifies the discrepancies with higher clinical impact, allowing management and education efforts to be focused on these cases.

REFERENCES

1. Blane CE, Desmond JS, Helvie MA, et al. Academic radiology and the emergency department: does it need changing? Acad Radiol 2007; 14:625–630.
2. Cooper VF, Goodhartz LA, Nemcek AA Jr, et al. Radiology resident interpretations of on-call imaging studies: the incidence of major discrepancies. Acad Radiol 2008; 15:1198–1204.
3. Tieng N, Grinberg D, Li SF. Discrepancies in interpretation of ED body computed tomographic scans by radiology residents. Am J Emerg Med 2007; 25:45–48.
4. Ruutiainen AT, Durand DJ, Scanlon MH, et al. Increased error rates in preliminary reports issued by radiology residents working more than 10 consecutive hours overnight. Acad Radiol 2013; 20:305–311.
5. Filippi CG, Schneider B, Burbank HN, et al. Discrepancy rates of radiology resident interpretations of on-call neuroradiology MR imaging studies. Radiology 2008; 249:972–979.
6. Kung JW, Melenevsky Y, Hochman MG, et al. On-call musculoskeletal radiographs: discrepancy rates between radiology residents and musculoskeletal radiologists. Am J Roentgenol 2013; 200:856–859.


7. Ruchman RB, Jaeger J, Wiggins EF 3rd, et al. Preliminary radiology resident interpretations versus final attending radiologist interpretations and the impact on patient care in a community hospital. Am J Roentgenol 2007; 189:523–526.
8. Friedman SM, Merman E, Chopra A. Clinical impact of diagnostic imaging discrepancy by radiology trainees in an urban teaching hospital emergency department. Int J Emerg Med 2013; 6:24.
9. Bruni SG, Bartlett E, Yu E. Factors involved in discrepant preliminary radiology resident interpretations of neuroradiological imaging studies: a retrospective analysis. Am J Roentgenol 2012; 198:1367–1374.
10. Carney E, Kempf J, DeCarvalho V, et al. Preliminary interpretations of after-hours CT and sonography by radiology residents versus final interpretations by body imaging radiologists at a level 1 trauma center. Am J Roentgenol 2003; 181:367–373.
11. Meyer RE, Nickerson JP, Burbank HN, et al. Discrepancy rates of on-call radiology residents' interpretations of CT angiography studies of the neck and circle of Willis. Am J Roentgenol 2009; 193:527–532.
12. Mellnick V, Raptis C, McWilliams S, et al. On-call radiology resident discrepancies: categorization by patient location and severity. J Am Coll Radiol 2016; 13:1233–1238.
13. Jackson VP, Cushing T, Abujudeh HH, et al. RADPEER scoring white paper. J Am Coll Radiol 2009; 6:21–25.
14. Ruutiainen AT, Scanlon MH, Itri JN. Identifying benchmarks for discrepancy rates in preliminary interpretations provided by radiology trainees at an academic institution. J Am Coll Radiol 2011; 8:644–648.
15. Branstetter BF 4th, Morgan MB, Nesbit CE, et al. Preliminary reports in the emergency department: is a subspecialist radiologist more accurate than a radiology resident? Acad Radiol 2007; 14:201–206.
16. Erly WK, Berger WG, Krupinski E, et al. Radiology resident evaluation of head CT scan orders in the emergency department. AJNR Am J Neuroradiol 2002; 23:103–107.
17. Eakins C, Ellis WD, Pruthi S, et al. Second opinion interpretations by specialty radiologists at a pediatric hospital: rate of disagreement and clinical implications. Am J Roentgenol 2012; 199:916–920.
18. Zan E, Yousem DM, Carone M, et al. Second-opinion consultations in neuroradiology. Radiology 2010; 255:135–141.
19. Lauritzen PM, Andersen JG, Stokke MV, et al. Radiologist-initiated double reading of abdominal CT: retrospective analysis of the clinical importance of changes to radiology reports. BMJ Qual Saf 2016; 25:595–603. doi:10.1136/bmjqs-2015-004536.
20. Maloney E, Lomasney LM, Schomer L. Application of the RADPEER scoring language to interpretation discrepancies between diagnostic radiology residents and faculty radiologists. J Am Coll Radiol 2012; 9:264–269.
21. Lal NR, Murray UM, Eldevik OP, et al. Clinical consequences of misinterpretations of neuroradiologic CT scans by on-call radiology residents. AJNR Am J Neuroradiol 2000; 21:124–129.
22. Stevens KJ, Griffiths KL, Rosenberg J, et al. Discordance rates between preliminary and final radiology reports on cross-sectional imaging studies at a level 1 trauma center. Acad Radiol 2008; 15:1217–1226.
23. Sistrom C, Deitte L. Factors affecting attending agreement with resident early readings of computed tomography and magnetic resonance imaging of the head, neck, and spine. Acad Radiol 2008; 15:934–941.
24. Chung JH, Strigel RM, Chew AR, et al. Overnight resident interpretation of torso CT at a level 1 trauma center: an analysis and review of the literature. Acad Radiol 2009; 16:1155–1160.
25. Ruma J, Klein KA, Chong S, et al. Cross-sectional examination interpretation discrepancies between on-call diagnostic radiology residents and subspecialty faculty radiologists: analysis by imaging modality and subspecialty. J Am Coll Radiol 2011; 8:409–414.