Chest reporting by radiographers: Findings of an accredited postgraduate programme

Radiography 20 (2014) 94–99


K. Piper a,*, S. Cox a, A. Paterson a,b, A. Thomas c, N. Thomas d, N. Jeyagopal e, N. Woznitza f,g

a AHP Dept., Canterbury Christ Church University, Canterbury CT1 1QU, UK
b Society and College of Radiographers, UK
c The Princess Royal University Hospital, Orpington, UK
d Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK
e Pennine Acute Hospitals NHS Trust, The Royal Oldham Hospital, Oldham, UK
f Homerton University Hospital, London, UK
g Canterbury Christ Church University, Canterbury, UK

Article history: Received 28 November 2013; received in revised form 15 January 2014; accepted 26 January 2014; available online 12 February 2014.

Abstract

Aim: To analyse the objective structured examination (OSE) results of the first six cohorts of radiographers (n = 40) who successfully completed an accredited postgraduate programme in clinical reporting of adult chest radiographs.
Methods: One hundred chest radiographs were used in the OSE, covering a range of abnormal cases (the prevalence of abnormal examinations approximated 50%): cardiac, pulmonary, pleural, interstitial, inflammatory, neoplastic and traumatic appearances in patients referred from a range of sources. Normal variants and incidental findings were also included. True/false positive and negative fractions were used to mark the responses, which were also scored for agreement with the expected answers previously agreed from three consultant radiologists' reports.
Results: Mean sensitivity and specificity rates, for all six cohorts (4000 reports), were 95.4% (95% CI 94.4%–96.3%) and 95.9% (95% CI 94.9%–96.7%), respectively. The mean agreement rate was 89% (95% CI 88.0%–89.0%). The most common errors related to heart size, hilar enlargement or pleural effusion (false positive), and to skeletal appearances or pneumothoraces (false negative).
Conclusions: These OSE results suggest that, in an academic setting and following an accredited postgraduate education programme, this group of radiographers can correctly identify normal chest radiographs and can report abnormal appearances to a high standard. Further work is required to confirm the clinical application of these findings.
Crown Copyright © 2014 Published by Elsevier Ltd on behalf of The College of Radiographers. All rights reserved.

Keywords: Clinical reporting; Radiographer; Chest; Interpretation; Observer

Introduction

In UK imaging departments, plain radiographic examinations account for 62% of imaging studies, and approximately 20% of these are of the chest. This equates to around 40,000 chest examinations per hospital each year,1,2 a figure that may increase.3 It is over 40 years since the possibility of training radiographers or technologists to report on chest radiographs, as a means of alleviating radiological workloads, was first considered.4,5

* Corresponding author. Tel.: +44 1227 782425. E-mail addresses: [email protected] (K. Piper), [email protected] (S. Cox), [email protected] (A. Paterson), [email protected] (A. Thomas), [email protected] (N. Thomas), [email protected] (N. Jeyagopal), [email protected] (N. Woznitza).

Subsequent developments in the UK are well documented,6–9 and in relation to the reporting of plain image investigations of the appendicular and axial skeleton there is a growing body of evidence that selectively trained radiographers can provide accurate reports on radiographic appearances, particularly for patients referred from the Accident and Emergency (A&E) department.10,11 An increasing number of radiographers now perform this task, and it is generally accepted and relatively common practice.12 Chest radiographs are recognised as among the most complex plain images to interpret, with some authors reporting significant errors by professionals outside radiology and respiratory medicine.13–16 Whilst the notion has been considered for a number of years, and there are positive indications that radiographers could develop in this way,17,18 there is limited research to date investigating the ability of appropriately trained radiographers to produce the 'clinical report'19 on plain film examinations of the chest.

1078-8174/$ – see front matter Crown Copyright © 2014 Published by Elsevier Ltd on behalf of The College of Radiographers. All rights reserved. http://dx.doi.org/10.1016/j.radi.2014.01.003


Following validation by the College of Radiographers in 2002, a postgraduate certificate (PgC) Clinical Reporting (Adult Chest) programme was developed by Canterbury Christ Church University20 and has run annually since. The part-time, work-based programme takes one year to complete and comprises three 20-credit Level 7 (postgraduate) modules which focus on reporting of the adult chest radiograph. Consultant radiologists are involved in the curriculum development, learning and teaching delivery, and assessment stages. Students regularly attend university, and assessments include a case study, a reflective essay, a record of a minimum of 750 practice/shadow reports (150 of which are checked by a supervising consultant radiologist) and an Objective Structured Examination (OSE). Estimates suggest that over 30 radiographers who have completed this particular programme are now reporting adult chest radiographs in clinical practice, from a wide range of referral sources. In a recent survey of UK departments, over 10% of managers who responded confirmed that chest reporting was being undertaken by radiographers.12

Aim

The aim of this study was to analyse the OSE results of the first six cohorts of radiographers (n = 40) who completed the PgC programme.

Method

Compliance with the University's Research Ethics and Governance procedures was confirmed and all other relevant guidance followed.21 Consistent with other postgraduate programmes in clinical reporting at this university, one of the final assessments is an OSE which, for this pathway, consists of 100 adult (age ≥ 16 years) chest radiographs. Obuchowski22 refers to the performance of an 'average reader', and accordingly the OSE was constructed using cases where there was good agreement between three independent, experienced consultant radiologists. Approximately 150 chest radiographs were randomly selected from archives at two departments in Southern England. To ensure compliance with the relevant data protection legislation, all identifying information was removed from the images, request details and the initial radiological reports, which were then coded anonymously. Subsequent reports were provided independently by two consultant radiologists (one being a cardiothoracic specialist), blinded to the original report. Although the specific agreement rate between the consultant radiologists was not calculated, the method adopted had been used previously23 and rates were similar to other studies.24 Based on the three reports, the expected answer (including diagnosis) was then agreed by consensus between the programme team (KP and AP/SC) and one of the consultant radiologist external examiners (NT/NJ) experienced in chest reporting, for every examination (n = 100) selected for the OSE. The external examiner also confirmed that an appropriate selection of discriminatory cases was included.24 A range of cases was included to adequately test the candidates' knowledge and to demonstrate competence at postgraduate level. The final ratio of abnormal (Fig. 1) to normal (including normal variant) cases approximated 1:1, and non-A&E cases approximated 75%. Due to the implementation of Computed Radiography (CR) and Digital Radiography (DR) imaging and recording systems, the proportion of hard copy images used in the OSE decreased by 25% each year between 2005/6 and 2009/10; the final proportion of soft copy images was 100%.

Figure 1. Typical range of abnormal appearances included in the OSE.

All digital images were viewed on 42 cm monitors with a native screen resolution of 1280 × 1024 (~1.3 MP),25 in Digital Imaging and Communications in Medicine (DICOM) format, using KPACS software26 to enable image manipulation. The range and type of pathology included in each abnormal soft copy case was matched as closely as possible to the original hard copy, and each replacement was confirmed by the external examiner (NT/NJ). Normal and normal variant cases were replaced similarly. Candidates were provided with the patient's details (age, gender, referral source, clinical history) and were asked to decide whether the appearances were normal (including normal variants) or abnormal, recording the decision on a pro forma. For the abnormal cases the student was expected to provide brief key details of the abnormal radiographic appearances, including suggested pathology/ies where applicable, in the form of a free-text, hand-written report. Credit was also given where candidates made appropriate recommendations for further imaging. The responses were compared with the expected answer by one of the programme team and second marked as required by university procedures (KP/AP/SC). If the examination was correctly identified as normal or abnormal, a true negative/positive (TN/TP) fraction was allocated accordingly. If the case was incorrectly called normal or abnormal, a false negative/positive (FN/FP) was recorded. Overall sensitivity and specificity rates were then calculated. One mark was allocated for each normal case and a maximum of two marks for each abnormal case, fractioned27 where necessary to reflect the different key aspects required in each report. Students were not penalised provided that any agreed expected pathology was diagnosed.
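To make the scoring scheme concrete, the following is a minimal sketch of how the decision fractions and the fractional agreement marks described above could be combined for one candidate. It is illustrative only (the programme's actual marking procedure is not described in code form in the paper), and the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MarkedCase:
    truth_abnormal: bool    # expected answer agreed against the radiologists' reports
    called_abnormal: bool   # candidate's normal/abnormal decision
    score: float            # marks awarded, fractioned against the key report aspects
    max_score: float        # 1 for a normal case, 2 for an abnormal case

def ose_metrics(cases: list[MarkedCase]) -> dict[str, float]:
    """Sensitivity, specificity and overall agreement for one candidate's 100 cases."""
    tp = sum(c.truth_abnormal and c.called_abnormal for c in cases)
    tn = sum(not c.truth_abnormal and not c.called_abnormal for c in cases)
    fn = sum(c.truth_abnormal and not c.called_abnormal for c in cases)
    fp = sum(not c.truth_abnormal and c.called_abnormal for c in cases)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        # agreement = marks awarded as a proportion of marks available
        "agreement": sum(c.score for c in cases) / sum(c.max_score for c in cases),
    }
```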


All scores were summed and the overall agreement percentage calculated. All sensitivity, specificity and agreement rates were verified by the radiologist external examiner (NT/NJ).

Results

All radiographers had a minimum of two years' post-registration experience; 38% (15/40) had previous reporting experience and held an accredited postgraduate qualification in skeletal reporting. Mean agreement, sensitivity and specificity rates for each cohort are shown in Table 1. The total numbers of FN (n = 93) and FP (n = 83) errors were similar (p = 0.49). Mean sensitivity and specificity, for all cohorts, were 95.4% (95% CI 94.4%–96.3%) and 95.9% (95% CI 94.9%–96.7%), respectively. The mean agreement rate was 89% (95% CI 88.0%–89.0%). The sensitivity, specificity and agreement rates for students with previous reporting experience were 96%, 97% and 90% respectively, compared with 95%, 96% and 88% for those without; no significant differences were found between the two groups in any of the three measures (p = 0.29, 0.33 and 0.12). Fig. 2 outlines the most common FN and FP errors, which are discussed further below.

Figure 2. Table of the most common FP and FN errors.
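The paper does not state which test produced the p value for the FN/FP comparison above. One reading consistent with the reported figure is a two-sided binomial (sign) test of whether an error is equally likely to be an FN or an FP; a sketch under that assumption:

```python
from scipy.stats import binomtest

# 93 FN and 83 FP errors overall; H0: an error is equally likely to be FN or FP.
result = binomtest(93, n=93 + 83, p=0.5, alternative="two-sided")
print(f"p = {result.pvalue:.2f}")  # ~0.5, in line with the reported p = 0.49
```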

Discussion

In a study by Robinson24 three experienced radiologists reported on 100 chest radiographs of patients referred from A&E. When the reports were analysed for agreement on the presence or absence of significant abnormality, the observers disagreed in 11–19% of cases, which approximates the agreement rates found in this study (88.0%–89.0%). It should be noted, however, that this OSE included a wider range of referral sources. Potchen,28 in a diagnostic accuracy study which included a range of abnormalities found in routine clinical practice, estimated the area under the receiver operating characteristic curve to be 0.95 for the top 20 radiologists (n = 20) and 0.86 for all board-certified radiologists (n = 111). The mean sensitivity, specificity and agreement rates demonstrated by the radiographers in this study compare well with the measures of accuracy reported by Potchen. The errors made by the radiographers in the OSE environment generally correspond, in both frequency and type, to those made by consultant radiologists as reported elsewhere in the literature.16,29–31

False negative (FN) errors

The two most common types of FN error were under-reporting of eroded, destroyed or fractured ribs, and of sclerotic or lytic bony metastases. Donald and Barnard16 also found that 14 of the 72 chest X-ray errors made by consultant radiologists involved fractures, chest wall lesions or bone metastases. Guttentag and Salwen32 similarly noted that a range of normal variants and abnormalities involving the ribs may be overlooked. The complex superimposition of multiple structures in chest radiography has been shown by other investigators to contribute to incorrect diagnoses,30,33 and other authors34,35 report that as many as 50% of rib fractures are missed on a PA chest radiograph. This type of error falls into the classical 'satisfaction of search' category, a well-recognised phenomenon in chest radiograph interpretation.36–40 This explanation for missed rib abnormalities is supported by previous work41 in which a significant improvement in the detection of subtle rib fractures was demonstrated when this was the only abnormality the radiologists were asked to identify, compared with detection when asked to recognise multiple possible pathologies. The third most common FN error was under-calling a subtle apical pneumothorax, which has also been noted in previous research. In a study comparing non-chest radiologists with chest radiologists, 11% (28/243) of the FN errors related to the misinterpretation of a pneumothorax,31 and this is corroborated by other investigators16 who reported a 7% (5/72) FN rate. Some studies31 suggest this may be related to the influence of clinical factors; for example, detection of a pneumothorax may depend on whether or not a history of attempted line placement is provided. In this OSE none of the clinical indications mentioned line placement, but in at least two of the abnormal cases where a pneumothorax was evident there was at least one other abnormality, suggesting that this may have been a 'satisfaction of search' error.

Table 1. Sensitivity, specificity and agreement rates (95% CIs calculated using the Wilson procedure).

Cohort (no. of radiographers)   TN       TP       FN     FP     Sensitivity % (95% CI)   Specificity % (95% CI)   Agreement % (95% CI)
2002/3 (n = 3)                  145.5    144.5    4.5    5.5    96.3 (91.5–98.6)         97.0 (92.4–99.0)         91.7 (87.8–94.4)
2003/4 (n = 13)                 617.5    630.0    20.5   32.0   98.0 (96.5–98.9)         95.1 (93.0–96.6)         90.8 (89.1–92.3)
2005/6 (n = 6)                  281.0    285.0    15.0   19.0   95.0 (91.7–97.0)         93.7 (90.1–96.0)         86.7 (83.7–89.3)
2007/8 (n = 5)                  242.0    232.0    18.0   8.0    92.8 (88.7–95.6)         96.8 (93.6–98.5)         87.0 (83.7–90.0)
2008/9 (n = 6)                  290.0    281.0    19.0   10.0   93.7 (90.1–96.0)         96.7 (93.7–98.3)         88.4 (85.5–90.8)
2009/10 (n = 7)                 341.0    335.0    15.0   9.0    95.7 (92.9–97.5)         97.4 (95.0–98.7)         88.5 (85.9–90.7)
All cohorts (n = 40)            1917.0   1908.0   93.0   83.0   95.4 (94.4–96.3)         95.9 (94.9–96.7)         89.0 (88.0–89.0)
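The intervals in Table 1 use the Wilson score procedure. As an illustrative check (not the authors' code), a minimal sketch of the interval for the pooled sensitivity follows; it gives approximately 94.3%–96.2%, close to the tabulated 94.4%–96.3%, with the small difference presumably due to rounding in the tabulated counts.

```python
import math

def wilson_ci(successes: float, n: float, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (per the Table 1 footnote)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Pooled sensitivity across all cohorts: TP = 1908, FN = 93
print(wilson_ci(1908, 1908 + 93))  # ~ (0.943, 0.962), cf. 94.4%-96.3% in Table 1
```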


The fourth most common FN error, under-calling heart enlargement, is discussed below, since overcalling heart size was also the most common FP error. The fifth most common FN error was failure to recognise progressive massive fibrosis (PMF); although the appearances were described as abnormal in most cases, in some instances there was a failure to determine the nature of the pathology, and this may be classified as a cognitive error. It may be argued that one of the main reasons for this was spectrum bias,42 as this pathology is becoming less prevalent and the students may have judged the probability of a diagnosis of PMF to be very low.

False positive (FP) errors

The most common FP error was falsely interpreting the heart as enlarged. Interestingly, misinterpretation of heart size was also a common (fourth) FN error. There are several normal variants that may simulate discrepancies of cardiac size and shape and may lead to misinterpretation.43,44 In a small number of cases (n = 5) it was the same students who made FN errors by not reporting cardiac enlargement, but in these cases there were other obvious chest pathologies, and these errors could have been due to 'satisfaction of search' and/or lack of knowledge.39,40 The assessment of cardiac size and outline on chest radiographs by radiologists is subject to inter-observer variation,29 and only moderate agreement (κ = 0.48) when diagnosing cardiomegaly has been reported previously.45 Herman et al.29 also found that cardiac enlargement and congestive cardiac failure (CCF) were among the most common abnormalities on which consensus could not be reached by an expert panel of five experienced radiologists. This has been reinforced by a more recent study which found only fair agreement (κ = 0.29) between an expert panel and the reports provided by consultant radiologists in clinical practice.46
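Several of the agreement figures quoted in this discussion are Cohen's kappa values, which correct raw agreement for the agreement expected by chance. A minimal illustrative sketch for two observers making binary calls (e.g. cardiomegaly present/absent); the function and variable names are assumptions for illustration:

```python
def cohens_kappa(obs_a: list[bool], obs_b: list[bool]) -> float:
    """Cohen's kappa for two observers' binary ratings of the same cases."""
    n = len(obs_a)
    p_o = sum(a == b for a, b in zip(obs_a, obs_b)) / n   # observed agreement
    pa, pb = sum(obs_a) / n, sum(obs_b) / n               # each observer's 'yes' rate
    p_e = pa * pb + (1 - pa) * (1 - pb)                   # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# On the commonly used Landis-Koch scale, kappa ~0.48 is 'moderate' agreement
# and ~0.29 is 'fair' agreement, as described in the text above.
```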


The second most frequent FP error was overcalling hilar enlargement. In a review of radiology discrepancies in an A&E department,47 it was found that of the 166 FP errors made by physicians, only 4 related to hilar lesions; this is similar to an earlier study16 in which 5 of 72 errors involved this area. According to other authors,48 difficulty in recognising hilar enlargement is compounded by the great range of size and shape variation between individuals, and by pulmonary artery ectasia due to loss of vessel wall elasticity, which can produce a unilateral hilar bulge or general hilar enlargement, particularly in the elderly.

Differentiation between pleural thickening and a pleural effusion has been regarded as difficult and unreliable,44,49 and in the OSE six students overcalled small pleural effusions, the third most common FP error in this study. Pleural effusion errors were also common in an earlier study30 in which 23/173 errors were in this category. Other authors50 have found that the sensitivity of A&E physicians in evaluating pleural effusions and pleural thickening was 25% and 20% respectively, although good agreement has been reported between chest physicians and tuberculosis specialists,51 and the consultant radiologists in that study agreed on every case (κ = 0.60 and 1.0 respectively).

Overcalling chronic obstructive pulmonary disease (COPD) was the fourth most common FP error. COPD, which must be confirmed by clinical diagnosis including lung function tests, embraces several pathological conditions and a corresponding range of radiographic appearances: hyperinflation of the lungs, flattening of the domes of the hemidiaphragms, attenuation or absence of pulmonary vasculature, loss of the regular vascular branching pattern, a widened retrosternal space, large focal lucencies indicating bullae, and bronchial wall thickening.52 In the OSE most of the students' errors were conclusions based upon misinterpretation of lung hyperexpansion. There are several normal variants that could be mistaken for, and may simulate, specific diseases in this category: for example, simulated air trapping in the right middle lobe results from the pattern of division of the major vascular trunks and can be misleading, and high insertions of the medial attachments of the diaphragm create an impression of false hemidiaphragmatic flattening.44 In an early study29 in which 100 randomly selected abnormal chest images were evaluated by five experienced radiologists, 14 of the 173 significant interpretation errors involved the evaluation of COPD, the fourth most common category. It may be argued that most of the COPD-related errors in the OSE were due to clinical review bias,42 as the clinical indications related to a smoking or asthmatic history, although one of the images was from an asthenic patient in whom seven right anterior ribs were demonstrated on full inspiration, and therefore represented a normal variant error.

Failure to identify or diagnose a malignant pulmonary nodule is an error frequently reported in the literature. The retrospective analysis by Quekel et al.30 identified 259 cases of missed lung cancer, with 22 of those cases missed on more than one previous radiograph. Missed nodules accounted for over 50% of cases (37/72) in an earlier analysis,16 which also found that, of the discordant reports, 40% of the missed nodules had an average diameter of 20 mm. This is in contrast to the radiographers in this study, who tended to overcall benign nodules as malignant, with nodules corresponding to the fifth most frequent FP error. In a study of patients with proven malignancies, 36% of the nodules reported as malignant by consultant radiologists were later found to be benign at resection.53 Interpreting the chest X-rays in the context of an academic examination may have influenced the decision threshold of the radiographers,54 with some choosing to err on the side of caution, especially given the relative consequence of a false negative cancer diagnosis. It is possible that benign lesions such as hamartomas, scars and granulomas, found on essentially normal images in the OSE, were described as abnormal lung lesions due to clinical review bias.42 There are also several normal variants, such as nipple shadows, skin lesions and superimposed vascular shadows, that may simulate lung nodules; in these cases a clinical history of "haemoptysis" or "smoking history" may have influenced the students' decisions. The influence of clinical history on chest X-ray interpretation is equivocal: several studies15,28,55,56 have suggested that it improves accuracy, while other studies57,58 found no statistically significant difference in chest X-ray interpretation undertaken with or without the relevant clinical history. It is "generally accepted that access to previous images and reports is useful" (p. 205)59 when interpreting X-rays, and this has been suggested as a method of error reduction.15,16 In the current study the radiographers did not have access to previous X-rays, which may have contributed to the higher false positive rate for pulmonary nodules; a lesion that is unchanged over a significant period is more likely to represent a benign pathology.
A further limitation of this study, and of many studies of this nature,60 was the absence of an external reference standard providing incontrovertible confirmation (by repeated follow-up or further imaging, such as CT) of normality, or of the presence and type of pathology (including the progression of any disease). In the absence of such evidence, it is acknowledged that concordant reports independently produced by three consultant radiologists may be used to assess other observers in this way, noting that the performance of an individual undertaking a reporting role should be indistinguishable from that of an 'average' practitioner or specialist.24


Conclusion

Overall, the results presented suggest that this group of 40 radiographers, at the end of an accredited postgraduate programme, can report on a broad range of adult chest pathologies with satisfactory accuracy under examination conditions. Although lessons are to be learned from these initial experiences, the types of errors made are generally likely to be similar to those made in the practical setting by consultant radiologists of varying experience. The majority of the radiographers included in this study are now reporting adult chest examinations in clinical practice from a range of referral sources. Implementation into practice is subject to routine audit and governance processes, including an agreed Scheme of Work. Attendance at further clinico-radiological and multidisciplinary team meetings, as part of continuing professional development, should be facilitated and supported to enhance their reporting accuracy as Advanced Practitioners/Consultant Radiographers. In the future it will be important to investigate the accuracy of chest reporting by radiographers more extensively, particularly during implementation into clinical practice and in comparison with consultant radiologists.

Conflict of interest statement

None declared.

Acknowledgements

Thanks to all the radiologists who reported the examinations prior to construction of the OSE, particularly Dr Richard Coulden (formerly of Papworth Hospital), who made a significant contribution to the pilot cohort. Thanks also to all the radiographers who completed the OSEs.

References

1. Healthcare Commission. An improving picture? Imaging services in acute and specialist trusts. In: Acute hospital portfolio review; 2007.
2. Hart D, Hillier MC, Wall BF. Doses to patients from medical X-ray examinations in the UK: 2000 review. Chilton: National Radiological Protection Board; 2002.
3. National Institute for Health and Clinical Excellence. The diagnosis and treatment of lung cancer (update) (Clinical guideline 121), http://guidance.nice.org.uk/CG121; 2011.
4. Department of Health. Imaging and radiodiagnostic activity; 2010–11.
5. Sheft DJ, Jones MD, Brown RF, Ross SE. Screening of chest roentgenograms by advanced roentgen technologists. Radiology 1970;94(2):427–9.
6. Swinburne K. Pattern recognition for radiographers. Lancet 1971;297(7699):589–90.
7. Berman L, de Lacey G, Twomey E, Twomey B, Welch T, Eban R. Reducing errors in the accident department: a simple method using radiographers. Br Med J 1985;290(6466):421–2.
8. Renwick IGH, Butt WP, Steele B. How well can radiographers triage X-ray films in the accident and emergency departments? Br Med J 1991;302(6776):568–9.
9. Saxton HM. Should radiologists report on every film? Clin Radiol 1992;45(1):1–3.
10. Robinson PJ. Short communication: plain film reporting by radiographers – a feasibility study. Br J Radiol 1996;69(828):1171–4.
11. Brealey S, Scally A, Hahn S, Thomas N, Godfrey C, Coomarasamy A. Accuracy of radiographer plain radiograph reporting in clinical practice: a meta-analysis. Clin Radiol 2005;60(2):232–41.
12. Society and College of Radiographers. The scope of practice. London: Society and College of Radiographers; 2012.
13. Mehrotra P, Bosemani V, Cox J. Do radiologists still need to report chest X rays? Postgrad Med J 2009;85(1005):339–41.
14. Shaw NJ, Hendry M, Eden OB. Inter-observer variation in interpretation of chest X-rays. Scott Med J 1990;35(5):140–1.

15. Goddard P, Leslie A, Jones A, Wakeley C, Kabala J. Error in radiology. Br J Radiol 2001;74(886):949–51.
16. Donald JJ, Barnard SA. Common patterns in 558 diagnostic radiology errors. J Med Imaging Radiat Oncol 2012;56(2):173–8.
17. Sonnex EP, Tasker AD, Coulden RA. The role of preliminary interpretation of chest radiographs by radiographers in the management of acute medical problems within a cardiothoracic centre. Br J Radiol 2001;74(879):230–3.
18. Brealey S, Scally A, Hahn S, Thomas N, Godfrey C, Crane S. Accuracy of radiographers' red dot or triage of accident and emergency radiographs in clinical practice: a systematic review. Clin Radiol 2006;61(7):604–15.
19. The Royal College of Radiologists and the Society and College of Radiographers. Team working in clinical imaging. London: The Royal College of Radiologists and the Society and College of Radiographers; 2012.
20. Canterbury Christ Church University. PgC clinical reporting (adult chest) validation document; 2002.
21. The Royal College of Radiologists. Standards for patient consent particular to radiology. 2nd ed. London: Royal College of Radiologists; 2012.
22. Obuchowski N, Zepp RC. Simple steps for improving multiple-reader studies in radiology. Am J Roentgenol 1996;166(3):517–21.
23. Piper KJ, Buscall KL. MRI reporting by radiographers: the construction of an objective structured examination. Radiography 2008;14(2):78–89.
24. Robinson P, Wilson D, Coral A, Murphy A, Verow P. Variation between experienced observers in the interpretation of accident and emergency radiographs. Br J Radiol 1999;72(856):323–30.
25. Royal College of Radiologists. Picture archiving and communication systems (PACS) and quality assurance. London: Royal College of Radiologists; 2012.
26. KPACS [program]. v. 1.5. Germany.
27. Piper K, Paterson A, Godfrey R. Accuracy of radiographers' reports in the interpretation of radiographic examinations of the skeletal system: a review of 6796 cases. Radiography 2005;11(1):27–34.
28. Potchen EJ, Cooper TG, Sierra AE, Aben GR, Potchen MJ, Potter MG, et al. Measuring performance in chest radiography. Radiology 2000;217(2):456–9.
29. Herman PG, Gerson DE, Hessel SJ, Mayer BS, Watnick M, Blesser B, et al. Disagreements in chest roentgen interpretation. Chest 1975;68(3):278–82.
30. Quekel LG, Kessels AG, Goei R, van Engelshoven JM. Miss rate of lung cancer on the chest radiograph in clinical practice. Chest 1999;115(3):720–4.
31. Cascade PN, Kazerooni EA, Gross BH, Quint LE, Silver TM, Bowerman RA, et al. Evaluation of competence in the interpretation of chest radiographs. Acad Radiol 2001;8(4):315–21.
32. Guttentag R, Salwen J. Keep your eyes on the ribs: the spectrum of normal variants and diseases that involve the ribs. Radiographics 1999;19:1125–42.
33. Kashani H, Varon CA, Paul NS, Gang GJ, Van Metter R, Yorkston J, et al. Diagnostic performance of a prototype dual-energy chest imaging system: ROC analysis. Acad Radiol 2010;17(3):298–308.
34. Campbell SG, Dingle MA. Rib fractures following minor trauma in older patients: a not-so-benign injury. CJEM 2000;2(1):32–4.
35. Davis S, Affatato A. Blunt chest trauma: utility of radiological evaluation and effect on treatment patterns. Am J Emerg Med 2006;24(4):482–6.
36. Berbaum K, Franken EA Jr, Caldwell RT, Schartz KM. Can a checklist reduce SOS errors in chest radiography? Acad Radiol 2006;13(3):296–304.
37. Berbaum KS, Caldwell RT, Schartz KM, Thompson BH, Franken EA Jr. Does computer-aided diagnosis for lung tumors change satisfaction of search in chest radiography? Acad Radiol 2007;14(9):1069–76.
38. Berbaum KS, Dorfman DD, Franken EA, Caldwell RT. Proper ROC analysis and joint ROC analysis of the satisfaction of search effect in chest radiology. Acad Radiol 2000;7(11):945–58.
39. Berbaum KS, Franken EA Jr, Dorfman DD, Caldwell RT, Krupinski EA. Role of faulty decision making in the satisfaction of search effect in chest radiography. Acad Radiol 2000;7(12):1098–106.
40. Berbaum KS, Franken EA Jr, Dorfman DD, Miller EM, Caldwell RT, Kuehn DM, et al. Role of faulty visual search in the satisfaction of search effect in chest radiography. Acad Radiol 1998;5(1):9–19.
41. Fuhrman CR, Britton CA, Bender T, Sumkin JH, Brown ML, Holbert JM, et al. Observer performance studies: detection of single versus multiple abnormalities of the chest. Am J Roentgenol 2002;179(6):1551–3.
42. Brealey S, Scally AJ. Bias in plain film reading performance studies. Br J Radiol 2001;74(880):307–16.
43. Sutton D, editor. Textbook of radiology and imaging. 3rd ed. China: Churchill Livingstone; 1980.
44. Keats T, Anderson M. Atlas of normal roentgen variants that may simulate disease. 7th ed. China: Mosby; 2001.
45. Butman SM, Ewy GA, Standen JR, Kern KB, Hahn E. Bedside cardiovascular examination in patients with severe chronic heart failure: importance of rest or inducible jugular venous distension. J Am Coll Cardiol 1993;22(4):968–74.
46. Feldmann EJ, Jain VR, Rakoff S, Haramati LB. Radiology residents' on-call interpretation of chest radiographs for congestive heart failure. Acad Radiol 2007;14(10):1264–70.
47. Petinaux B, Bhat R, Boniface K, Aristizabal J. Accuracy of radiographic readings in the emergency department. Am J Emerg Med 2011;29(1):18–25.
48. Lange S, Walsh G. Radiology of chest diseases. 2nd ed. Germany: Thieme; 1998.
49. Jenkins PF. Making sense of the chest x-ray. UK: Hodder Arnold; 2005.
50. Gatt ME, Paltiel O, Hilier N, Stalnikowocz R. Chest radiographs in the emergency department: is the radiologist really necessary? Postgrad Med J 2003;79(930):214–7.

51. Abubakar I, Story A, Lipman M, Bothamley G, van Hest R, Andrews N, et al. Diagnostic accuracy of digital chest radiography for pulmonary tuberculosis in a UK urban population. Eur Respir J 2010;35(3):689–92.
52. Pipavath SN, Schmidt RA, Takasugi JE, Godwin JD. Chronic obstructive pulmonary disease: radiology–pathology correlation. J Thorac Imaging 2009;24(3):171–80.
53. Ginsberg MS, Griff SK, Go BD, Yoo HH, Schwartz LH, Panicek DM. Pulmonary nodules resected at video-assisted thoracoscopic surgery: etiology in 426 patients. Radiology 1999;213(1):277–82.
54. Reed WM, Ryan JT, McEntee MF, Evanoff MG, Brennan PC. The effect of abnormality-prevalence expectation on expert observer performance and visual search. Radiology 2011;258(3):938–43.
55. Leslie A, Jones AJ, Goddard PR. The influence of clinical information on the reporting of CT by radiologists. Br J Radiol 2000;73(874):1052–5.


56. Berbaum KS, Franken EA Jr, Dorfman DD, Lueben KR. Influence of clinical history on perception of abnormalities in pediatric radiographs. Acad Radiol 1994;1(3):217–23.
57. Cooperstein LA, Good BC, Eelkema EA, Sumkin JH, Tabor EK, Sidorovich K, et al. The effect of clinical history on chest radiograph interpretations in a PACS environment. Invest Radiol 1990;25(6):670–4.
58. Good BC, Cooperstein LA, DeMarino GB, Miketic LM, Gennari RC, Rockette HE, et al. Does knowledge of the clinical history affect the accuracy of chest radiograph interpretation? Am J Roentgenol 1990;154(4):709–12.
59. Aideyan UO, Berbaum K, Smith WL. Influence of prior radiologic information on the interpretation of radiographic examinations. Acad Radiol 1995;2(3):205–8.
60. Robinson PJ. Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image. Br J Radiol 1997;70(839):1085–98.