Clinical Radiology (2005) 60, 232–241
Accuracy of radiographer plain radiograph reporting in clinical practice: a meta-analysis
S. Brealey a,*, A. Scally b, S. Hahn a, N. Thomas c, C. Godfrey a, A. Coomarasamy d
a Department of Health Sciences, University of York; b Division of Radiography, University of Bradford; c Department of X-ray, North Manchester General Hospital; d Education Resource Centre, Birmingham Women's Hospital, UK
Received 30 April 2004; received in revised form 19 July 2004; accepted 26 July 2004
KEYWORDS Radiography; Images; Interpretation; Sensitivity and specificity
AIM: To determine the accuracy of radiographer plain radiograph reporting in clinical practice.
MATERIALS AND METHODS: Studies were identified from electronic sources and by hand searching journals, personal communication and checking reference lists. Eligible studies assessed radiographers' plain radiograph reporting in clinical practice compared with a reference standard, and provided accuracy data to construct 2×2 contingency tables. Data were extracted on study eligibility and characteristics, quality and accuracy. Summary estimates of sensitivity and specificity and receiver operating characteristic curves were used to pool the accuracy data.
RESULTS: Compared with a reference standard, radiographers report plain radiographs in clinical practice with a sensitivity of 92.6% (95% CI: 92.0–93.2) and a specificity of 97.7% (95% CI: 97.5–97.9). Studies that compared selectively trained radiographers and radiologists of varying seniority against a reference standard showed no evidence of a difference between radiographer and radiologist reporting accuracy of accident and emergency plain radiographs. Selectively trained radiographers were also found to report such radiographs as accurately as those not solely from accident and emergency, although some variation in reporting accuracy was found for different body areas. Training radiographers improved their accuracy when reporting normal radiographs.
CONCLUSION: This study systematically synthesizes the literature to provide an evidence base showing that radiographers can accurately report plain radiographs in clinical practice.
© 2005 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
* Guarantor and correspondent: S. Brealey, York Trials Unit, Department of Health Sciences, Seebohm Rowntree Building, University of York, York YO10 5DD, UK. Tel.: +44-1904-321357; fax: +44-1904-321388. E-mail address: [email protected] (S. Brealey).

Introduction

The subject of non-medically qualified staff reporting radiographs has been debated and contested almost since the discovery of X-rays by Roentgen in 1895.1 Although radiographers have interpreted images in a red dot or triage role, the contentious issue of making written reports has remained the province of the radiologist.2,3 The introduction of the NHS and Community Care Act in 1990 challenged the traditional methods of delivering health care, resulting in the blurring of professional boundaries.4 Findings from 1968 to 1991 showed that radiologists' total workload increased by 322%, but the number of posts increased by only 213%; subsequently, radiologists in England reported only 60% of work within 2 working days.5 The change in government policy and a shortage of radiologists resulted in the relaxation of restrictions on radiographer reporting
as reflected in several college documents.6 A recent national survey found radiographers now report accident and emergency (A & E) plain radiographs in 37/276 NHS Trusts,7 whereas a similar survey performed just 4 years earlier identified only 4/333 NHS Trusts.8 The rationale for this change in clinical practice is that, if carefully selected radiographers develop reporting skills to the same standard as radiologists, radiologists' reporting workload should be alleviated, allowing both professions to use their time more effectively. The acceptance of skill-mix principles should also increase radiographer job satisfaction and develop radiographers' professional standing. Against this background, and in the absence of an existing review, we comprehensively and systematically synthesized the evidence concerning radiographers' plain radiograph reporting accuracy in clinical practice to answer the following questions.
† How accurately do radiographers report plain radiographs in clinical practice compared with a reference standard?
† How accurately do selectively trained radiographers and radiologists of varying seniority report plain radiographs compared with a reference standard?
† How accurately do selectively trained radiographers report plain radiographs for different types of patient and body areas?
† Do training programmes affect radiographer reporting accuracy?
Methods

Identification of studies

Eight electronic sources were searched: MEDLINE, Science Citation Index Expanded, CINAHL, EMBASE, NHS National Research Register, the Cochrane Library, PsycINFO and SIGLE. The MEDLINE search used a combination of terms derived from its thesaurus and from the terms used to index known studies in the subject area. This included the single index terms ‘diagnostic-errors’ and ‘sensitivity-and-specificity’ and exploded index terms ‘radiography’ and ‘radiology’. The text words used included ‘reporting’, ‘radiographs’, and ‘radiographers’. Similar strategies were developed for searching the other databases. When possible, searches were performed from 1971 (to coincide with Swinburne’s proposal that radiographers could distinguish normal from abnormal radiographs)9 to
October 2002. Journals and supplements were also hand searched, personal contact was made with experts, and reference lists of identified articles were checked for further studies.
Study selection

Studies were included if radiographers’ plain radiograph reporting in clinical practice was assessed against a reference standard. The exclusion criteria were: not conducted in clinical practice; not assessing accuracy; no accuracy data to construct 2×2 tables; not assessing radiographers in a reporting role; case studies of radiographer reporting; and assessment of visual search behaviour using remote eye movement detection equipment. In cases of duplicate publication, the most recent and complete versions were selected. No language or geographical restrictions were applied. Disagreement about inclusion or exclusion was resolved by consensus between reviewers.
Data extraction

One reviewer (S.B.) extracted information about study eligibility, quality, study characteristics (e.g. types of patient, body area) and accuracy data for all studies and contacted authors when necessary for further information. Two other reviewers (A.S., N.T.) also independently extracted data concerning study eligibility and quality for all studies between them. Discordance between reviewers was resolved by discussion. There was perfect agreement between reviewers about study eligibility. Agreement between S.B. and the two other reviewers for the assessment of study quality was 71% and 92%, with kappa values of 0.58 (95% CI: 0.50–0.65) and 0.88 (95% CI: 0.77–0.99), respectively.
Assessment of study validity

All eligible studies were assessed for their methodological quality. The checklist used was developed from searching the literature about the appraisal of studies of diagnostic tests and addressed issues concerning the selection of subjects (including both radiographs and observers), study design (including the choice and application of the reference standard) and independence (or blinding) in the interpretation of radiographs and reports. These quality criteria have been described in detail elsewhere.10–12 Because studies were assessed against a total of 42 quality criteria, a numerical scoring scheme was developed so that each study could be awarded a summary quality score.
Data synthesis

Summary estimates of sensitivity (true positive rate) and specificity (true negative rate) with approximate (Wilson) 95% confidence intervals (CIs) were calculated and heterogeneity assessed using chi-squared tests.13 Heterogeneity was detected, so receiver operating characteristic (ROC) curve methodology was included to explore whether this could be explained by the diagnostic threshold or cut-off point used by studies for defining a report as abnormal. This is achieved by pooling the diagnostic odds ratio (DOR) for each study, which is calculated as (sensitivity/(1 − sensitivity)) divided by ((1 − specificity)/specificity). The DOR describes the ratio of the odds of a positive report in a person with disease compared with a person without disease. When the DOR is constant regardless of the diagnostic threshold, the ROC curve is symmetrical around the “sensitivity equals specificity” line. The DerSimonian–Laird random effects model was used to calculate the pooled DOR necessary for producing the corresponding ROC curve.13 Heterogeneity between studies was again demonstrated, so the Littenberg–Moses method for fitting an asymmetrical ROC curve was used.14 This method allows for variation in DOR with diagnostic threshold using the model D = a + bS, where D is the log of the DOR and S represents the diagnostic threshold. Equal-weighted and weighted (inverse variance of D) methods were used.15 An extension of the Littenberg–Moses method was used to investigate sources of heterogeneity other than the diagnostic threshold. This required testing the hypothesis that including quality criteria (e.g. choice and application of the reference standard) and study characteristics (e.g. types of patient, body area) in turn as covariates in the regression model does not affect estimates of DOR.15 Publication bias is introduced when a study is more or less likely to be published depending on its size and the direction of its results.16 To detect publication bias, the log DOR of studies was plotted against precision (1/standard error).
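A minimal sketch of these calculations (illustrative only, not the authors' code; function names and the 0.5 continuity correction are assumptions) in Python using numpy:

import numpy as np

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a pooled proportion (e.g. summary sensitivity).
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * np.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

def study_D_S(tp, fp, fn, tn, cc=0.5):
    # Per-study log diagnostic odds ratio D, threshold measure S and var(D);
    # a 0.5 continuity correction is added to every cell of the 2x2 table.
    tp, fp, fn, tn = tp + cc, fp + cc, fn + cc, tn + cc
    sens, spec = tp / (tp + fn), tn / (fp + tn)
    D = np.log(sens / (1 - sens)) + np.log(spec / (1 - spec))  # log DOR
    S = np.log(sens / (1 - sens)) - np.log(spec / (1 - spec))  # diagnostic threshold
    var_D = 1 / tp + 1 / fp + 1 / fn + 1 / tn
    return D, S, var_D

def dersimonian_laird(D, var_D):
    # Random-effects pooled DOR with 95% CI and Cochran's Q for heterogeneity.
    w = 1 / var_D
    fixed = np.sum(w * D) / np.sum(w)
    Q = np.sum(w * (D - fixed) ** 2)
    tau2 = max(0.0, (Q - (len(D) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (var_D + tau2)
    pooled = np.sum(w_star * D) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se), Q

def littenberg_moses(D, S, var_D, weighted=False):
    # Fit D = a + b*S by (weighted) least squares; b away from zero indicates
    # that the DOR varies with diagnostic threshold (asymmetrical summary ROC).
    w = 1 / var_D if weighted else np.ones_like(D)
    X = np.column_stack([np.ones_like(S), S])
    a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * D))
    return a, b

# Example wiring; counts would be the TP, FP, FN, TN columns of Table 1:
# counts = [(1164, 41, 29, 5042), (18, 5, 1, 42), ...]
# D, S, V = map(np.array, zip(*(study_D_S(*c) for c in counts)))
# print(dersimonian_laird(D, V))
# print(littenberg_moses(D, S, V, weighted=True))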
Results

Literature identification and study validity

Fig. 1 summarizes the process of literature identification and selection. Twelve studies were eligible and are presented in Table 1, which includes
information on study characteristics and results, ordered by quality score with the lowest score being the most valid.17–28 All studies were observational in design and several studies did not appropriately select and apply the reference standard against which radiographer reporting accuracy was assessed. The consequences of this are discussed in the next section.
Accuracy of radiographer plain radiograph reporting

Figs. 2 and 3 present the summary estimates of sensitivity and specificity for the twelve studies that assessed radiographer plain radiograph reporting accuracy.17–28 The 95% CIs reflect the weight of studies and are ordered as shown in Table 1, with the pooled estimate closest to the x-axis. The summary sensitivity estimate of radiographer reporting was 92.6% (95% CI: 92.0–93.2) and specificity was 97.7% (95% CI: 97.5–97.9). The chi-squared test confirmed the statistically significant heterogeneity observed visually for the estimates of both sensitivity and specificity (p < 0.001). The DerSimonian–Laird random effects model yielded a pooled DOR of 540.9 (95% CI: 303.4–965.3) with evidence of gross heterogeneity (p < 0.00001), suggesting that the DOR is not constant. Using the Littenberg–Moses method for fitting an asymmetrical ROC curve, the regression coefficient for the diagnostic threshold was of borderline statistical significance for the equal-weighted method (p = 0.048) but was not significant for the weighted method (p = 0.269). This provides some evidence that the DOR depends on the diagnostic threshold. The R2 values for the regression of DOR on diagnostic threshold were low for both the equal-weighted (0.335) and weighted (0.121) methods, indicating limited correlation and unexplained variation between studies. Fig. 4 presents the asymmetrical summary ROC curves using equal-weighted and weighted methods and the curve assuming symmetry using the pooled DOR. Sensitivity and specificity are similar for both symmetrical and asymmetrical ROC curves in the middle range of the observed studies, but differ at higher sensitivities. The ROC curves graphically show a departure from symmetry and a threshold effect. They also rise steeply and pass close to the top left-hand corner, indicating that radiographer reporting accuracy in clinical practice is comparable to that of the reference standard.
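As a rough arithmetic check (not part of the original analysis), summing the per-study 2×2 counts in Table 1 (using the whole-skeleton row where a study reports appendicular and axial subgroups, and the radiographer rows for studies 21 and 22) reproduces the summary estimates quoted above:

# Per-study (TP, FP, FN, TN) totals as reconstructed in Table 1.
tables = {
    17: (1195, 43, 31, 5879), 18: (18, 5, 1, 42), 19: (178, 4, 6, 250),
    20: (2032, 104, 188, 4656), 21: (144, 11, 13, 393), 22: (108, 2, 8, 667),
    23: (1080, 72, 109, 3738), 24: (925, 78, 69, 1927), 25: (54, 7, 2, 136),
    26: (618, 23, 46, 417), 27: (440, 129, 61, 2941), 28: (235, 31, 26, 729),
}
tp = sum(t[0] for t in tables.values()); fp = sum(t[1] for t in tables.values())
fn = sum(t[2] for t in tables.values()); tn = sum(t[3] for t in tables.values())
print(f"sensitivity = {100 * tp / (tp + fn):.1f}%")  # 92.6%
print(f"specificity = {100 * tn / (fp + tn):.1f}%")  # 97.7%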
Figure 1 Identification and selection of studies.32,33,36–74
Figure 2 Sensitivity estimates (with 95% CI) of radiographer plain radiograph reporting.
Figure 3 Specificity estimates (with 95% CI) of radiographer plain radiograph reporting.
Table 1 Studies included in the meta-analyses

Study | Professional | Training in reporting (work experience) | Patient type | Body area | TP a | FP a | FN a | TN a | Sensitivity b (95% CI) | Specificity c (95% CI) | Quality score
[17] | Radiographer | Postgraduate (2 years minimum) | A&E | Appendicular | 1164 | 41 | 29 | 5042 | 98 (97–98) | 99 (99–99) | 33
[17] | Radiographer | Postgraduate (2 years minimum) | A&E | Axial | 31 | 2 | 2 | 837 | 94 (80–98) | 100 (99–100) | 33
[17] | Radiographer | Postgraduate (2 years minimum) | A&E | Skeleton | 1195 | 43 | 31 | 5879 | 98 (96–98) | 99 (99–99) | 33
[18] | Radiographer | None (Basic to senior I) | A&E | Appendicular | 18 | 5 | 1 | 42 | 95 (75–99) | 89 (77–95) | 34
[19] | Radiographer | Postgraduate (20 years) | A&E | Appendicular | 178 | 4 | 6 | 250 | 97 (93–99) | 98 (96–99) | 35
[20] | Radiographer | Postgraduate (2 years minimum) | A&E | Appendicular | 1112 | 67 | 82 | 1924 | 93 (92–94) | 96 (96–97) | 35
[20] | Radiographer | Postgraduate (2 years minimum) | A&E | Axial | 920 | 37 | 106 | 2732 | 90 (88–91) | 99 (98–99) | 35
[20] | Radiographer | Postgraduate (2 years minimum) | A&E | Skeleton | 2032 | 104 | 188 | 4656 | 92 (90–93) | 98 (97–98) | 35
[21] | Radiographer | Postgraduate (4 years minimum) | A&E | All body areas | 144 | 11 | 13 | 393 | 92 (86–95) | 97 (95–98) | 41
[21] | Radiologist | Consultants and registrars | A&E | All body areas | 149 | 8 | 8 | 396 | 95 (90–97) | 98 (96–99) | 41
[22] | Radiographer | Postgraduate | A&E | All body areas | 108 | 2 | 8 | 667 | 93 (87–96) | 100 (99–100) | 42
[22] | Radiologist | Registrars | A&E | All body areas | 94 | 12 | 22 | 657 | 81 (73–87) | 98 (97–99) | 42
[23] | Radiographer | Postgraduate (12–20 years) | A&E and orthopaedic | Skeleton | 1080 | 72 | 109 | 3738 | 91 (89–92) | 98 (98–99) | 42
[24] | Radiographer | Postgraduate (15–25 years) | A&E and out-patient | Appendicular | 925 | 78 | 69 | 1927 | 93 (91–95) | 96 (95–97) | 47
[25] | Radiographer | None (Basic to Superintendent) | A&E | Skeleton | 54 | 7 | 2 | 136 | 96 (88–99) | 95 (90–98) | 48
[26] | Radiographer | Postgraduate (>10 years) | A&E | Appendicular | 618 | 23 | 46 | 417 | 93 (91–95) | 95 (92–96) | 50
[27] | Radiographer | 6 month in-house (>5 years) | A&E | Skeleton | 440 | 129 | 61 | 2941 | 88 (85–90) | 96 (95–96) | 55
[28] | Radiographer | 3 day in-house (10 months–39 years) | A&E | Skeleton | 235 | 31 | 26 | 729 | 90 (86–93) | 96 (94–97) | 55

a TP, true positive; FP, false positive; FN, false negative; TN, true negative. b Sensitivity = TP/(TP + FN). c Specificity = TN/(FP + TN).
Figure 4 Summary ROC curves of radiographer plain radiograph reporting (DOR: diagnostic odds ratio).
Accuracy of selectively trained radiographers and radiologists of varying seniority

The summary estimates did not provide evidence of a difference in how accurately selectively trained radiographers and radiologists of varying seniority report A & E radiographs for all body areas compared with a reference standard.21,22 The summary estimate of sensitivity for the radiographers and radiologists, respectively, was 92.3% (95% CI: 88.5–94.9) and 89.0% (95% CI: 84.8–92.2). The summary estimate of specificity for the radiographers and radiologists was 98.8% (95% CI: 97.9–99.3) and 98.1% (95% CI: 97.1–98.8), respectively.
Accuracy of selectively trained radiographers reporting for different types of patient and body areas

When selectively trained radiographers reported radiographs not solely from A & E (some came from orthopaedics and outpatient departments), the summary estimate of sensitivity was 91.9% (95% CI: 90.6–93.0) and specificity was 97.4% (95% CI: 97.0–97.8).23,24 The summary estimate of sensitivity for A & E radiographs only was 92.9% (95% CI: 92.2–93.6) and specificity was 97.9% (95% CI: 97.6–98.1).17,19–22,26–28 The summary estimates did not provide evidence that trained radiographers report radiographs solely from A & E with different accuracy from radiographs not solely from A & E. Table 2 presents the summary sensitivity and specificity estimates for different body areas. One notable finding is the lack of overlap in CIs for summary estimates of sensitivity, suggesting that trained radiographers report abnormal radiographs of the appendicular skeleton significantly more accurately than those of the axial skeleton. In contrast, the lack of overlap in CIs for summary estimates of specificity suggests that the radiographers reported normal radiographs of the axial skeleton more accurately than those of the appendicular skeleton. The summary estimates for all body areas were similar to those for the whole skeleton.
The effect of training on radiographer reporting accuracy

The only two studies that assessed radiographer reporting accuracy without training included A & E radiographs of the skeleton.18,25 These were compared with the six studies assessing radiographers with some training reporting A & E radiographs of the skeleton.17,19,20,26–28 The summary estimates of sensitivity and specificity for radiographers without training were 96.0% (95% CI: 88.9–98.6) and 93.7% (95% CI: 89.3–96.4), respectively. For those with some training, the summary estimate of sensitivity was 92.9% (95% CI: 92.2–93.6) and specificity was 97.8% (95% CI: 97.6–98.0). Although there was some variation in summary
Table 2 Selectively trained radiographer reporting of different body areas

Body area | Sensitivity (95% CI) | Specificity (95% CI)
Appendicular skeleton17,19,20,24,26 | 94.5 (93.8–95.2) | 97.8 (97.5–98.1)
Axial skeleton17,20 | 89.8 (87.8–91.5) | 98.9 (98.5–99.2)
Whole skeleton17,19,20,23,24,26–28 | 92.6 (92.0–93.2) | 97.7 (97.5–97.8)
All body areas21,22 | 92.3 (88.5–94.9) | 98.8 (97.9–99.3)
estimates of sensitivity for A & E radiographs of the skeleton depending on whether the radiographers received training or not, the overlap in CIs suggests this difference was not significant. In contrast, the summary specificity estimate for A & E radiographs of the skeleton was higher for the radiographers who had received training, and the lack of overlap in CIs is evidence that training significantly improved radiographers’ ability to report normal radiographs accurately.
Investigating sources of heterogeneity

Heterogeneity was evident when pooling sensitivity and specificity for all analyses. To investigate sources of heterogeneity other than the diagnostic threshold, the Littenberg–Moses method was extended by adding in turn quality criteria (e.g. choice and application of the reference standard) and other study characteristics (e.g. types of patient, body area) as covariates to the regression model. Only two variables produced a significant finding at p < 0.01: conducting a sample size calculation; and the reference standard independently reporting radiographs. The analysis was repeated including both covariates. This model showed that most variation in study results was explained by whether or not the reference standard independently reported radiographs (p = 0.011). S, as the measure of diagnostic threshold, also made a statistically significant contribution (p = 0.024), but whether a sample size calculation was performed did not (p = 0.125). The R2 value for the model was 0.932, indicating that these variables explained most of the variation between studies.
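An illustrative sketch of this extension (variable names and 0/1 covariate coding are assumptions, not the authors' code), building on the littenberg_moses fragment above:

import numpy as np

def extended_littenberg_moses(D, S, var_D, covariates, weighted=True):
    # Weighted least squares for D = a + b*S + c1*x1 + ..., where the columns of
    # `covariates` code study-level characteristics (e.g. 0/1 for whether the
    # reference standard reported radiographs independently).
    X = np.column_stack([np.ones_like(S), S, covariates])
    w = 1 / var_D if weighted else np.ones_like(D)
    XtW = X.T * w                                   # X'W with W = diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ D)
    resid = D - X @ beta
    dof = len(D) - X.shape[1]
    sigma2 = np.sum(w * resid ** 2) / dof
    se = np.sqrt(np.diag(np.linalg.inv(XtW @ X)) * sigma2)
    ss_tot = np.sum(w * (D - np.average(D, weights=w)) ** 2)
    r_squared = 1 - np.sum(w * resid ** 2) / ss_tot
    # Coefficients are assessed by comparing beta/se with a t distribution on dof df.
    return beta, se, r_squared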
Investigating publication bias

Fig. 5 shows a pattern that resembles a funnel shape, with smaller studies demonstrating wider variation in DOR than the larger studies, thereby suggesting that publication bias is not present. Using the extended Littenberg–Moses method there was insufficient evidence to support the hypothesis that the covariate “grey literature” explains variation in study results for both
equal-weighted (p = 0.751) and weighted (p = 0.195) methods. Both visual and statistical evidence suggest that publication bias is unlikely in our meta-analysis.
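A minimal sketch of the plot itself (assuming matplotlib and the per-study log DOR values and variances from the earlier fragment; not the authors' code):

import numpy as np
import matplotlib.pyplot as plt

def funnel_plot(D, var_D):
    # Each study's log DOR against its precision (1/standard error); marked
    # asymmetry about the pooled value would suggest publication bias.
    precision = 1 / np.sqrt(var_D)
    plt.scatter(D, precision)
    plt.axvline(np.average(D, weights=1 / var_D), linestyle="--")  # pooled log DOR
    plt.xlabel("log diagnostic odds ratio")
    plt.ylabel("precision (1 / standard error)")
    plt.show()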
Discussion

For our study a comprehensive and systematic approach was used to underpin the synthesis of the evidence base concerning radiographer plain radiograph reporting accuracy. This was achieved by undertaking a comprehensive literature search, independent selection of eligible studies and data extraction, independent assessment of study validity, and data synthesis using meta-analytical techniques. It is unlikely that the conduct of our review introduced bias that might confound the findings, nor was publication bias evident. The finding that conducting a sample size calculation and whether or not the reference standard independently reported radiographs can affect study results should be interpreted cautiously with only twelve studies in the analysis; it might be explained by chance as a result of multiple testing. The choice and the application of the reference standard were not important as covariates for explaining variation in study results, but did affect the rigour with which the studies were conducted. This is because in some studies, when the report of a radiographer agreed with a radiologist’s report, it was judged to be correct; but when the radiographer’s report disagreed with that of the radiologist, a second radiologist was asked to report the radiograph. Whichever of the first reports was concordant with the second radiologist’s report was assumed to be correct. This process of judging the accuracy of radiographers’ reports calls into question the suitability of the radiologist as the reference standard and introduces verification, work-up and incorporation bias.10–12 The results of the meta-analyses also show no evidence of a difference in reporting accuracy of selectively trained radiographers and radiologists of varying seniority, so how can radiologist reports be used as a reference standard? It could be argued, though, that this process of choosing and applying a reference
Figure 5 Funnel plot for detection of publication bias.
standard more appropriately reflects clinical practice, and so is better than an alternative reference standard such as a triple-blind consultant radiologist report, for which there is still error and uncertainty. In summary, when drawing inferences it should be borne in mind that radiographers’ reporting accuracy was often compared against a reference standard of variable validity. When designing future studies, the assumptions made when choosing and applying the reference standard should be carefully considered. Our review found that radiographers accurately report plain radiographs in clinical practice from different sources, and no evidence of a difference in reporting accuracy between selectively trained radiographers and radiologists of varying seniority. This finding is supported by a recent study which showed no evidence of a difference between selectively trained radiographer and consultant radiologist accuracy when reporting plain radiographs for A & E and general practitioners.29 Accuracy, however, is only an intermediate outcome; introducing radiographer reporting also affects clinicians’ decision-making and patient outcome, the availability of reports and the associated costs.30,31 Studies show that radiographers commenting on plain radiographs do not adversely affect patient management32 or outcome,33 and improve the availability of reports;17 the costs range from nil to £15,000 per annum.18 These findings show that the introduction of this change does not adversely affect aspects of a reporting service other than reporting accuracy, although further research is desirable.
In conclusion, this study systematically synthesizes the literature to provide an evidence base showing that radiographers can accurately report plain radiographs in clinical practice. Viewing radiographs is useful, however, not only for developing the skills of radiographers and underpinning their profession, but also for A & E nurses, casualty officers and junior radiologists. Government policy focuses on the needs of both patients and staff in the NHS.34,35 The challenge now is to promote flexible team working between the different professions over who reports plain radiographs, to ensure that both patient and staff needs are met.
Acknowledgements

We thank I. Russell and D. Russell for comments on early drafts, S. Boynes for helping to pilot the checklist, and particularly K. Piper for being a continuous source of support. We are also most grateful to the authors of the studies in this review, whose assistance with providing data was invaluable.
References

1. Paterson AM, Price R. Current topics in radiography number 2. London: Saunders; 1996.
2. Saxton HM. Should radiologists report on every film? Clin Radiol 1992;45:1–3.
3. Fielding AJ. Improving accident and emergency radiography. Clin Radiol 1990;41:149–51.
4. College of Radiographers. Role development in radiography. London: COR; 1996.
5. Royal College of Radiologists. Staffing and standards in departments of clinical oncology and clinical radiology. London: RCR; 1993.
6. Royal College of Radiologists and College of Radiographers. Inter-professional roles and responsibilities in a radiology service. London: RCR and COR; 1998.
7. Price RC, Le Masurier SB, High L, Miller LR. Changing times: a national survey of extended roles in diagnostic radiography. Br J Radiol 1999;72:7.
8. Paterson AM. Role development—towards 2000: a survey of role developments in radiography. London: COR; 1995.
9. Swinburne. Pattern recognition for radiographers. Lancet 1971;I:589–90.
10. Brealey S, Scally AJ, Thomas NB. Methodological standards in radiographer plain film reading performance studies. Br J Radiol 2002;75:107–13.
11. Brealey S, Scally AJ. Bias in plain film reading performance studies. Br J Radiol 2001;74:307–16.
12. Brealey S, Scally AJ, Thomas NB. Presence of bias in radiographer plain film reading performance studies. Radiography 2002;8:203–10.
13. Deeks J. Systematic reviews of evaluations of diagnostic and screening tests. In: Egger M, Smith GD, Altman G, editors. Systematic reviews in health care: meta-analysis in context. London: BMJ Publishing Group; 2001.
14. Littenberg B, Moses LE. Estimating diagnostic accuracy from multiple conflicting reports: a new meta-analytic method. Med Decis Making 1993;13:313–21.
15. Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Meta-analysis of different types of data. London: Wiley; 2000.
16. Egger M, Dickersin K, Smith GD. Problems and limitations in conducting systematic reviews. In: Egger M, Smith GD, Altman G, editors. Systematic reviews in health care: meta-analysis in context. London: BMJ Publishing Group; 2001.
17. Piper KJ, Paterson AM, Ryan CM. The implementation of a radiographic reporting service for trauma examinations: final analyses of accuracy. Br J Radiol 2000;73:91.
18. Pitchers M. Radiographic interpretation by emergency nurse practitioners and radiographers in a minor injuries unit (Thesis). Hatfield: University of Hertfordshire, 2002.
19. Carter S, Manning D. Performance monitoring during postgraduate radiography training in reporting—a case study. Radiography 1999;5:71–8.
20. Manning D. Results from audit process of students on a postgraduate course. Lancaster, UK: University of Lancaster; 1999.
21. Robinson PJA. Plain film reporting by radiographers—a feasibility study. Br J Radiol 1996;69:1171–4.
22. Balcam S, Hood A. Are radiographers with suitable training competent to cold report plain films from the accident and emergency department? Hull, UK: Hull Royal Infirmary; 1998.
23. Eyres RD, Williams P. Results of a postgraduate training programme. Salford, UK: University of Salford; 1999.
24. Eyres RD, Williams P. Monitoring the effectiveness of a clinical reporting and educational programme: are radiographers getting it right? Br J Radiol 1997;70:123.
25. Wolfe G. How well do radiographers identify and describe abnormalities on axial and appendicular radiographs in A & E X-ray? (Thesis). Glasgow, UK: Glasgow Caledonian University, 2002.
26. Ford L, Crawshaw I. Results of audits. York: York District Hospital; 1999.
27. Loughran CF. Reporting of fracture radiographs by radiographers: the impact of a training programme. Br J Radiol 1994;67:945–50.
28. Snaith BA. Has radiography outgrown the red dot? Br J Radiol 2000;73(S):32.
29. Brealey S, King DG, Crowe MTI, et al. Accident and emergency and general practitioner plain radiograph reporting by radiographers and radiologists: a quasi-randomized controlled trial. Br J Radiol 2003;76:57–61.
30. Brealey S. Measuring the effects of image interpretation: an evaluative framework. Clin Radiol 2003;56:341–7.
31. Brealey S. Quality assurance in radiographic reporting: a proposed framework. Radiography 2001;7:263–70.
32. Berman L, de Lacey G, Twomey E, Twomey B, Welch T, Eban R. Reducing errors in accident department; a simple method using radiographers. BMJ 1985;290:421–2.
33. Robinson PJA, Culpan G, Wiggins M. Interpretation of selected accident and emergency radiographer examinations by radiographers: a review of 11000 cases. Br J Radiol 1999;72:546–51.
34. Department of Health. Agenda for change. London, UK: Stationery Office; 2003.
35. Department of Health. The NHS knowledge and skills framework and related development review. London, UK: Stationery Office; 2003.
36. Hughes H, Hughes K, Hamill R. A study to evaluate the introduction of a pattern recognition technique for chest radiographs by radiographers. Radiography 1996;2:263–88.
37. Callaway MP, Kay CL, Whitehouse RW, Fields JM. The reporting of A & E radiographs by radiologist trainees and radiographers. RAD Mag 1997;23:31–2.
38. Boynes S, Scally AJ, Webster AJ, Kay K. Results of a postgraduate training programme. Bradford, UK: University of Bradford; 1999.
39. McMillan P, Paterson A, Piper K. Radiographers’ abnormality detection skills in radiographs of the chest and abdomen. Br J Radiol 1995;68(S):21.
40. McConnell JR, Webster AJ. Improving radiographer highlighting of trauma films in the accident and emergency department with a short course of study—an evaluation. Br J Radiol 2000;73:608–12.
41. Henderson I. Results of a postgraduate course. London: Southbank University; 1999.
42. Piper K, Paterson A. Accuracy of radiographers’ reports in accident and emergency examinations of the skeletal system. Eur Radiol 1997;7(S):178–9.
43. Piper KJ, Godfrey RC, Paterson AM. Reporting of non-A & E radiographic examinations by radiographers: a review of 6796 cases. Br J Radiol 2002;75(S):35.
44. Wilson J, Sharkey C. A project to assess the validity of training radiographers to carry out plain film reporting. Leeds, UK: University of Leeds; 1999.
45. Seymour R, White P. A & E reporting: relative performance of radiographers and radiologists. Br J Radiol 1996;69(S):129.
46. Cassidy S, Eyres RD, Thomas NB, Whitehouse R, Williams PL. A comparative study of the reporting accuracy of clinical radiographers and radiology registrars. Br J Radiol 1999;72(S):19.
47. Quagehebeur GM, Shute M. Audit of casualty X-rays. Br J Radiol 1992;65(S):12.
48. Tam CL, Chandramohan M, Thomas N. A 2-year prospective study of radiographers for unsupervised reporting of casualty trauma radiographs. Br J Radiol 2002;75(S):34.
49. Raynor JS. Audit study: 1995–1997. Macclesfield, UK: Macclesfield District General; 1999.
50. Loughran CF, Raynor J, Quine FM. Radiographer reporting of accident and emergency radiographs: a review of 5000 cases. Br J Radiol 1996;69(S):128.
51. Webster J, Gallagher D. Radiography reporting in a community hospital. Br J Radiol 1998;70(S):42.
52. Remedios D, Ridley N, Taylor S, de Lacey G. Trauma radiology: extending the Red Dot system. Br J Radiol 1998;71(S):60.
53. Renwick IGH, Butt WP, Steele B. How well can radiographers triage X-ray films in the accident and emergency department? BMJ 1991;302:568–9.
54. Timmis AJ, Burnett SJD. Red dot–blue dot: an alternative. Br J Radiol 1995;68(S):41.
55. Giles M. Identification of abnormalities by the radiographer (Thesis). Luton, UK: Luton and Dunstable Hospital, 1989.
56. Bowman S. Introducing an abnormality detection system by radiographers into an accident and emergency department: an investigation into radiographers’ concerns about the introduction of such a system. Res Radiography 1991;1:2–20.
57. Sonnex EP, Tasker AD, Coulden RA. The role of preliminary interpretation of chest radiographs by radiographers in the management of acute medical problems within a cardiothoracic centre. Br J Radiol 2001;75(S):34.
58. Hargreaves JS, MacKay S. The accuracy of the red dot: will it improve training? Br J Radiol 2000;73(S):32.
59. Cheyne WN, Field-Boden Q, Wilson J, Hall R. The radiographer and frontline diagnosis. Radiography 1987;53:114.
60. Carr D, Gale A, Mugglestone MD. Visual search strategies of radiographers: evidence for role extension. Eur Radiol 1997;7(S):179.
61. Carr D, Gale A, Mugglestone MD. Can radiographers interpret medical images effectively? Br J Radiol 1996;69(S):283.
62. Gale AG, Vernon J, Millar K, Worthington BS. Reporting in a flash. Br J Radiol 1990;63(S):71.
63. Manning D, Leach J, Bunting S. A comparison of expert and novice performance in the detection of simulated pulmonary nodules. Radiography 2000;6:111–6.
64. Loughran CF. Reporting of accident radiographs by radiographers. RAD Mag 1996;2:34.
65. Boynes S, Scally AJ, Webster AJ, Kay K. Radiographer reporting of the axial and appendicular skeleton by radiographers and nurse practitioners. Br J Radiol 1997;70(S):123.
66. Piper K. Reporting by radiographers: findings of an accredited postgraduate skeletal reporting programme in the UK. Radiology 1997;205:360.
67. Piper KJ, Paterson AM, Ryan CM. The implementation of a radiographer reporting service for trauma examinations of the skeletal system. Br J Radiol 1998;71(S):126.
68. Renwick I, Butt WP, Steele B. An assessment of triage of casualty radiographs by radiographers. Br J Radiol 1990;63(S):7.
69. Loughran CF. Reporting of accident and emergency radiographs by radiographers: a study to determine the effectiveness of a training programme. Br J Radiol 1994;67(S):93.
70. Henderson PI, Lovegrove MJ, Hughes JW. Radiographer reporting of the appendicular skeleton: initial evaluation of a pilot programme. Br J Radiol 1996;69(S):128.
71. Piper K. Accuracy of radiographers’ reports in examinations of the skeletal system. Br J Radiol 1996;69(S):282.
72. Piper K, Paterson AM. The accuracy of radiographer’s reports in examinations of the skeletal system. Br J Radiol 1997;70(S):123.
73. Tasker AD, Sonnex E, Coulden RA. The role of preliminary interpretation of surgical chest radiographs by radiographers in a cardiothoracic centre. Br J Radiol 2000;73(S):32.
74. Berman L, de Lacey G, Twomey E, Twomey B, Welch T, Eban R. Reducing errors in the accident department: a simple method using radiographers. Radiography 1986;52:143–4.