Missed Case Feedback and Quality Assurance Conferences in Radiology Resident Education: A Survey of United States Radiology Program Directors

Anne E. Gill, MD, Philip K. Wong, MD, Mark E. Mullins, MD, PhD, FACR, Amanda S. Corey, MD, FACR, Brent P. Little, MD*
Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA

Rationale and Objectives: Diagnostic Radiology (DR) residents typically generate preliminary reports for imaging examinations, but few publications discuss feedback regarding missed or misinterpreted findings. Our goal was to determine the practices of United States DR residencies with respect to missed case feedback, including the role of Quality Assurance (QA) conferences.

Materials and Methods: A 23-item survey containing multiple-choice questions and several free text fields was created and hosted on SurveyMonkey®. An invitation to complete the survey was sent via email to all DR Program Directors (PDs) or representatives. Responses were tabulated and analyzed using SurveyMonkey® analytic tools and Microsoft Excel.

Results: 188 PDs or representatives were emailed, resulting in 45 survey responses. Common types of missed case feedback included resident QA case conferences (81%), resident self-review of cases (72%), discussion during readout at the end of shift (70%), and faculty-resident meetings (67%). A minority of programs reported using automated methods of resident feedback, such as PACS integration or automated emails. Most resident QA conferences were held monthly (64%). Typical formats of conferences included informal discussion (43%), formal presentation (30%), or case conferences (30%). The majority (78%) of respondents rated resident missed case feedback mechanisms at their institution as at least “good”.

Conclusion: DR residencies use a variety of mechanisms to provide feedback to residents regarding missed or misinterpreted cases, including QA conferences. Although several possibilities for improvement in feedback mechanisms were highlighted by survey responses, most respondents had a favorable view of their program’s feedback processes.

© 2017 Elsevier Inc. All rights reserved.

Introduction

Feedback regarding errors in Diagnostic Radiology (DR) resident preliminary interpretations of imaging studies is an important part of residency. Residents typically generate preliminary, actionable reports of imaging studies as part of on-call duties, with cases later reviewed by staff radiologists and feedback given to residents for significant misses or misinterpretations. In addition, many radiology departments hold resident “Quality Assurance” (QA) or “Morbidity and Mortality” (M and M) conferences as part of ongoing quality surveillance and improvement efforts, providing feedback intended to improve the performance of all residents. In other fields of Medicine and Surgery, review and discussion of resident errors and patient complications has been viewed as central to resident education, and several publications examine the importance of QA conferences in fields other than Radiology.1-7 For practicing Radiologists, RADPEER and departmental QA conferences have played well-recognized roles in QA efforts. However, few publications discuss DR residency practices regarding feedback about missed or misinterpreted cases,8-11 and to our knowledge no publications address the characteristics of DR resident QA conferences.

We surveyed DR residencies across the United States regarding feedback mechanisms for resident missed cases, characteristics of resident QA conferences, and satisfaction with feedback processes. We hoped that our survey would provide insight into the most common resident feedback and QA conference practices and identify areas for improvement. We expected that DR Residency Program Directors (PDs), Program Chairs, QA Officers, and other stakeholders would particularly benefit from knowledge of the feedback practices of other programs.

Note: The terms “Quality Assurance conference,” “Quality Improvement conference,” and “Morbidity and Mortality conference” are frequently used interchangeably in radiology departments to denote meetings in which missed or misinterpreted cases, complications, and other quality issues are discussed. “Quality Assurance conference” is used in this article as a general term that refers to such conferences.

Methods

Local institutional review board (IRB) approval was obtained before the study began; an IRB-approved informational form was viewed by each participant in lieu of written informed consent.

Table 1
Questionnaire items (question type in parentheses)

1. What is the size of your diagnostic radiology residency? (Multiple choice)
2. Describe the setting of your residency program. (Multiple choice)
3. Geographic setting of your residency program. (Multiple choice)
4. Does your institution have an Emergency Radiology division? (Y/N)
5. Does your institution have final signers providing coverage at all times? (Y/N)
6. Do residents in your program provide preliminary readings or reports for diagnostic imaging studies? “Preliminary” includes any actionable, published resident report with a delay between creation and staff approval. (Multiple choice)
7. For which diagnostic modalities do residents provide preliminary readings? (Multianswer)
8. What mechanisms does your program use to provide missed case or quality assurance (QA) feedback to residents? (Multianswer)
9. If a computerized review of preliminary reports or e-mails are used, are they manually generated or automated? (Multiple choice)
10. How are the “missed” or “QA” cases organized/retained in the department? (Multianswer)
11. If residents perform self-review of discrepancies between resident preliminary reading and final reading, how is this facilitated? (Multianswer)
12. How frequently is missed case feedback given to the residents? (Multiple choice)
13. If your department participates in a resident missed case or QA or M & M conference, who usually attends? (Multianswer)
14. If your institution participates in a resident missed case or QA or M & M conference, who moderates the conference? (Multiple choice)
15. How often is the resident missed case or QA or M & M conference held? (Multiple choice)
16. Is literature pertaining to particular missed cases or themes circulated among the residents prior to the conference? (Y/N)
17. If your institution participates in a resident missed case or QA or M & M conference, what is the format? (Multianswer)
18. Is the resident who missed the case identified at the QA conference? (Y/N)
19. Please rate the resident missed case feedback mechanisms in place at your institution. (5-point scale)
20. How do you think the feedback mechanisms could be improved at your institution? (Free text)
21. Does your institution have a faculty missed case or QA or M & M conference? (Y/N)
22. How often is the faculty missed case or QA or M & M conference held? (Multiple choice)
23. Please provide additional details or comments that you would like to make about missed case feedback and QA conferences. (Free text)

Multiple choice items allowed 1 answer from a list of possible responses. Multianswer questions asked the respondent to “check all that apply” from a list of possible responses. Free text questions allowed up to a paragraph of free text as a response.

A 23-question survey, designed to take 10-15 minutes to complete, was created and hosted on the commercial survey website SurveyMonkey (www.surveymonkey.com). Five question types were used. For multiple choice questions, respondents could choose 1 best answer. Multianswer questions allowed multiple responses to be checked from a list of possibilities. Yes or no questions allowed either a positive or nonaffirmative response. Five-point scale questions asked respondents to rate the quality of feedback mechanisms as “poor,” “fair,” “good,” “very good,” or “excellent.” Free-text responses allowed entry of up to 250 lines of text in response to a prompt. Survey questions and response types are listed in Table 1.

An invitation to participate in the survey was sent via e-mail to US DR PDs identified through a national database maintained by the Association of Program Directors in Radiology (APDR). E-mails were directed to the program coordinator in cases in which the PD’s address was not available. An initial invitation e-mail to participate in the survey was sent in early February 2013, and a reminder e-mail was sent 2 weeks later. Responses were collected over the course of approximately 2 months. The survey responses were anonymous, and neither respondent names, computer IP addresses, nor program names were obtained.

Results were tallied with Microsoft Excel (version 15.29.1) and the survey analysis tools in the SurveyMonkey analytic suite. The free-text responses were evaluated by one of the authors (B.P.L.) using a qualitative coding scheme described by Flick, Moustakas, and others (a modified Van Kaam method).12,13 An open coding approach was used to annotate each response, followed by second-order coding with abstraction of the key themes from the first-order coded data.

We asked the respondents to consider “missed cases” as those for which a discrepancy was noted between an actionable, published resident “preliminary reading” and the final report. To accommodate potentially diverse perceptions of what counts as a “missed case,” we avoided specifications of the severity of the miss, the effect on patient care, or whether the “miss” required notification of the referring provider.
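The tabulation step is simple to reproduce. As a minimal illustration only (the actual analysis used SurveyMonkey’s reporting tools and Microsoft Excel, not this code), the Python sketch below tallies a multianswer item and reports each option as a count and as a percentage of respondents, which is how figures such as “81% of programs reported QA conferences” are conventionally derived. The option names and sample responses are hypothetical.

from collections import Counter

# Hypothetical export of one multianswer ("check all that apply") item:
# each element is the set of options checked by a single respondent.
responses = [
    {"QA conference", "Self-review", "End-of-shift readout"},
    {"QA conference", "Faculty-resident meeting"},
    {"Self-review", "E-mail notices"},
]

def tally(responses):
    """Count how many respondents checked each option and return percentages
    relative to the number of respondents (not the number of checks)."""
    counts = Counter()
    for checked in responses:
        counts.update(checked)
    n = len(responses)
    return {option: (count, 100.0 * count / n) for option, count in counts.items()}

for option, (count, pct) in sorted(tally(responses).items(), key=lambda x: -x[1][1]):
    print(f"{option}: {count}/{len(responses)} ({pct:.1f}%)")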

Results

Characteristics of Programs

Of 188 e-mails sent to PDs or their representatives, 45 survey responses were received, a 24% response rate. Basic characteristics of these programs are listed in Table 2. Most respondents were from programs with 17-40 residents (71%) and were from university-based (62%) or hybrid practices (38%); no private practice was represented among the responses. More than half of responses were from programs in either the Northeast (31%) or Midwest (20%) regions. A minority of the responding programs have an emergency radiology division (38%). Although about a third of responding programs (36%) have “final signers” providing some degree of coverage at all times, almost all respondents reported that residents provide preliminary readings for either all or some diagnostic examination types (96%), most commonly abdomen or pelvis computed tomography (CT), chest CT, head CT, and head or neck CT. Only a small minority (4%) of programs reported that their residents do not provide preliminary reports for any modality.

Feedback Mechanisms

Most programs reported prompt distribution of feedback to residents, with a majority reporting that feedback is given at the time a case is missed (74%) or on at least a daily basis (16%) (Table 3). The most commonly reported feedback mechanisms for resident preliminary interpretations (Fig 1) included QA conferences (81%), resident self-review of cases (72%), case discussion during a readout session at the end of a shift (70%), faculty-resident meetings (67%), and verbal discussion other than at the end of a shift (65%). Less common mechanisms included e-mail notices (47%), computerized review of preliminary reports (28%), and “other” means (7%). When retained for further review, QA cases were most commonly collected by the PD (28%), by a quality officer or risk management office (23%), or by the subspecialty divisions (23%).

Overall, 33% of programs reported using manually generated e-mail or picture archiving and communication systems (PACS) discrepancy notifications, and 16% reported using automated reports (Table 3). In cases in which residents perform self-review, most programs use a nonautomated means of facilitating review, such as records manually kept by residents (58%) or lists of missed cases kept by faculty (35%); a minority of programs flag cases in PACS for optional (19%) or mandatory (9%) resident review (Table 3).
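To make the discrepancy notification mechanisms above concrete, the following sketch is purely illustrative; it is not drawn from the survey or from any respondent’s system. It shows one way a flagged preliminary-versus-final report discrepancy could be turned into an e-mail notice to the resident. The record fields, variance categories, sender address, and SMTP setup are all hypothetical assumptions.

import smtplib
from dataclasses import dataclass
from email.message import EmailMessage

@dataclass
class ReportPair:
    """Hypothetical record pairing a resident preliminary report with the final report."""
    accession: str
    resident_email: str
    preliminary_impression: str
    final_impression: str
    variance: str  # assumed categories, e.g. "none", "minor variance", "major variance"

def build_notification(pair: ReportPair) -> EmailMessage:
    """Compose a feedback e-mail showing both impressions side by side."""
    msg = EmailMessage()
    msg["To"] = pair.resident_email
    msg["From"] = "qa-feedback@example-radiology.org"  # placeholder address
    msg["Subject"] = f"Missed case feedback: accession {pair.accession} ({pair.variance})"
    msg.set_content(
        "A discrepancy was flagged between your preliminary report and the final report.\n\n"
        f"Preliminary impression:\n{pair.preliminary_impression}\n\n"
        f"Final impression:\n{pair.final_impression}\n\n"
        "Please review the images and final report; the case may be discussed at QA conference."
    )
    return msg

def notify_if_discrepant(pair: ReportPair, smtp_host: str = "localhost") -> None:
    """Send a notification only when the reviewing attending marked a variance."""
    if pair.variance == "none":
        return
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(build_notification(pair))

In practice, such notices would more likely be generated by tools integrated with the PACS or reporting system than by a standalone script, as described in the Discussion.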

Table 2
Demographics of radiology residency programs responding (n = 45); values are response percentage (response count)

Size
  <16: 17.8% (8)
  17-40: 71.1% (32)
  >40: 11.1% (5)
Type
  University: 62.2% (28)
  Hybrid or community: 37.8% (17)
Location
  Mid Atlantic: 8.9% (4)
  Midwest: 20.0% (9)
  Northeast: 31.1% (14)
  Northwest: 0.0% (0)
  Southeast: 17.8% (8)
  Southwest: 11.1% (5)
  West: 11.1% (5)
ER division
  Yes: 37.8% (17)
  No: 62.2% (28)
Final signer coverage at all times
  Yes: 35.6% (16)
  No: 64.4% (29)
Preliminary reports rendered by residents
  Yes; for all diagnostic modalities: 64.4% (29)
  Yes; some but not for all types of modalities: 31.1% (14)
  No; in-house final signers provide full coverage 24 hours a day: 4.4% (2)
  No; teleradiology services provided: 0.0% (0)

ER, emergency radiology.


Characteristics of QA Conferences

Of the programs reporting use of resident QA conferences, residents (100%), the PD (86%), and other faculty (83%) were the most frequently reported attendees, with only a small minority of programs (6%) reporting attendance of representatives from other specialties, such as Medicine, Pathology, or Surgery (Table 4). An attending Radiologist other than the PD was most frequently reported as the moderator of the QA conference (36%), with the PD (22%) or a chief resident (19%) also commonly noted; a minority of programs cited conferences led by a resident-at-large (6%). Most programs reported holding monthly resident QA conferences (64%), with quarterly conferences the second most common reply (28%). The most common formats of QA conferences included informal discussion (43%), residents taking missed cases as “unknowns” (30%), and formal presentations (30%) (Table 4). A minority of programs distributed literature related to selected topics before the QA conference (20.5%). Most programs reported that the identity of a resident missing a case was not revealed at the conference (84%); the remainder reported that the resident is identified at the conference.

Most respondents reported having a faculty QA conference (61%) (Table 4). Of the institutions holding such conferences, most held them either monthly (59%) or quarterly (35%). Seven additional free-text comments about QA conferences were submitted. Four mentioned attendees of the conferences, all noting the institution of joint faculty and resident conferences. One comment highlighted the timing of conferences, noting a desire for increased frequency. Two comments noted feedback mechanisms used in lieu of QA conferences, mentioning RADPEER for faculty and immediate conversations between residents and final signers.

Satisfaction With Missed Case Feedback Mechanisms

Overall satisfaction with feedback mechanisms for resident missed cases was rated as “good,” “very good,” or “excellent” by 78% of responding institutions, with a minority reporting “fair” (17%) or “poor” (5%) systems (Fig 2). Twenty free-text responses suggested a desire for improved feedback mechanisms. Technology was the focus of 8 of 20 (40%) responses, including “better automation,” “better IT tools integrated into PACS,” “use of electronic notification,” and “easier ways to generate e-mails to residents.” Eight responses (40%) highlighted compliance issues, such as encouraging timely, more formal, and reliable faculty feedback and higher faculty and resident attendance at QA conferences; interestingly, almost all of these comments lamented suboptimal faculty participation. An additional 4 responses (20%) focused on the structure of feedback, such as suggestions for “hotseat conferences,” more resident participation, and better missed case documentation.

Table 3
Resident missed case feedback practices (n = 43); values are response percentage (response count)

Frequency
  Each time a missed case is noted: 74.4% (32)
  Daily: 16.3% (7)
  Weekly: 0.0% (0)
  Monthly: 4.7% (2)
  Quarterly: 2.3% (1)
  Less frequently than quarterly, or no established feedback: 0.0% (0)
  Not sure: 2.3% (1)
Type of self-review (all that apply)
  Residents keep records of cases and check final reports for discrepancies: 58.1% (25)
  Faculty or staff collects lists of cases with discrepancies: 34.9% (15)
  Cases with discrepancies are flagged in PACS or other system and accessible to the resident in question for optional review: 18.6% (8)
  Cases with discrepancies are flagged in PACS or other system and presented to the resident for mandatory review: 9.3% (4)
  Other: 9.3% (4)
  Not sure: 4.7% (2)
Types of computerized review of preliminary reports
  Not applicable: 44.2% (19)
  Manually generated (customized response describing discrepancy): 32.6% (14)
  Automated (click on a provided statement, e.g., “minor variance with resident report”): 16.3% (7)
  Other (please specify): 7.0% (3)


Fig. 1. Mechanisms to provide missed case/quality assurance (QA) feedback to residents. Bar chart displaying percentages of commonly reported feedback mechanisms for resident preliminary interpretation. (Color version of the figure available online.)


Discussion

Feedback regarding missed or misinterpreted cases is an important part of DR training. Although surveys of resident feedback and QA conference practices have been performed in fields other than Radiology,14-16 information regarding national practices in Radiology training is notably absent from the literature. We hoped that our survey would provide insight into prevailing feedback and QA conference mechanisms and identify potential areas of improvement of existing practices.

The most common feedback techniques for resident preliminary reports cited by respondents included QA conferences and resident self-review of cases. The widespread prevalence of DR resident QA conferences as a feedback mechanism (81% of responding programs in our survey; Fig 1) is not surprising, as the Accreditation Council for Graduate Medical Education (ACGME) requires participation in quality improvement conferences during residency.17 Self-guided review of cases in most programs (72%) might be expected; however, more than 60% of programs report using face-to-face interaction with faculty for missed case notification (Fig 1), suggesting the perceived effectiveness of this method.

Table 4
Resident QA conference characteristics; values are response percentage (response count). Conference frequency is given for both the resident conference (n = 42) and the faculty conference (n = 29); all other items pertain to the resident conference.

Frequency
  Daily or weekly: resident 0.0% (0); faculty 3.4% (1)
  Monthly: resident 63.8% (23); faculty 58.6% (17)
  Quarterly: resident 27.7% (10); faculty 34.5% (10)
  Less frequently than quarterly: resident 8.3% (3); faculty 3.4% (1)
Attendees at QA conference (all that apply)
  Residents: 100% (36)
  Program Director: 86.1% (31)
  Faculty other than Program Director: 83.3% (30)
  Other departments (surgery, medicine, pathology, etc.): 5.6% (2)
Format of QA conference (all that apply)
  Informal discussion: 42.5% (17)
  Resident takes the case as an unknown: 30.0% (12)
  Formal presentation: 30.0% (12)
  Other: 25.0% (10)
  Didactic lecture: 17.5% (7)
  Presented by the resident who missed the case: 12.5% (5)
QA conference moderator
  Attending other than Program Director: 36.1% (13)
  Program Director: 22.2% (8)
  Chief resident: 19.4% (7)
  Other: 16.7% (6)
  Resident: 5.6% (2)
  Fellow: 0.0% (0)
Is the resident who missed the case identified?
  Yes: 16.6% (5)
  No: 84.4% (27)
Is literature pertaining to missed cases distributed?
  Yes: 20.5% (8)
  No: 79.4% (31)



Fig. 2. Satisfaction with feedback mechanisms. Bar chart demonstrating respondent satisfaction with resident missed case feedback mechanisms (n = 41). (Color version of the figure available online.)

Interestingly, although a desire for automation was a frequent response to an open-ended question about improvement of feedback mechanisms, the use of fully automated or semiautomated feedback mechanisms, including e-mail notifications or specialized software, was reported by only a minority of programs (Table 3).

Most institutions (58%) reported having faculty members moderate QA conferences (Table 4), which lends expertise and may further facilitate in-depth discussion of missed findings. However, resident-led conferences, reported by 25% of programs, may have certain advantages, including reinforcement of ACGME Milestones or competencies. By researching and presenting their own and others' missed cases, residents gain experience in practice-based learning (self-directed study), systems-based practice (quality improvement), medical knowledge (interpretation of examinations), and other competencies.18

Although no medical specialty has published a consensus on how best to deliver resident feedback, the recent literature and the results of this survey provide suggestions for “best practices.” Regularly held QA conferences are used by many specialties, including Radiology, as documented in our survey; several publications advocate a structured and interactive format for QA conferences,7,16,19 with attention to ACGME competencies and Milestones, and multidisciplinary participation.18 Several publications argue for resident leadership of QA conferences, an approach that was associated with a higher perception of educational value among attendees in one study.20 Automated resident feedback was mentioned by many respondents in our survey; PACS-integrated software can improve the efficiency of flagging cases, and software for tracking discrepancies between preliminary and final reports may increase the effectiveness of resident review.10,11

Metrics for assessment of the effectiveness of resident missed case feedback were not queried in our survey. However, possible mechanisms for ongoing evaluation of such feedback include satisfaction surveys of faculty and residents, monitoring of automated or manually recorded missed case logs, and longitudinal review of resident performance by the Program Director and other faculty. At our institution, a faculty mentor supervises and provides feedback for a resident-led monthly QA conference; faculty and residents suggest cases for inclusion in the conference, with a subset drawn from those flagged for automated review in PACS. In addition, a Clinical Competency Committee with multidivisional representation reviews resident performance, including data on missed case rates, at semiannual meetings. In the future, we hope that further automation of our missed case tracking will provide more detailed information on the educational needs of residents, helping tailor the curriculum and individual learning plans.

A few possible limitations of our study should be mentioned. Although reminder e-mails were used to maximize participation, the final response rate was lower than desired. In addition, there may have been some reporting and recall bias, typically unavoidable in a retrospective survey. Finally, data were collected from PDs and program coordinators, and not DR residents. In the future, other members of the residency leadership team or faculty and residents at large could be surveyed, which might improve the authentication of the received responses.

Conclusion

Our survey is the first to investigate the practices of DR residencies with respect to resident missed case feedback and resident QA conferences. The exact methods of feedback vary across respondents, with programs frequently employing in-person verbal feedback and QA conferences. Although our survey uncovers several potential categories for future improvement, most respondents rated their feedback mechanisms as “good” to “excellent.” Future directions for improving missed case feedback in DR programs may include wider implementation of automation, refinement of the structure of QA conferences to maximize educational value, and the involvement of other disciplines in QA proceedings.

References

1. Cifra CL, Bembea MM, Fackler JC, et al. Transforming the morbidity and mortality conference to promote safety and quality in a PICU. Pediatr Crit Care Med 2016;17(1):58–66.
2. Flynn-O'Brien KT, Mandell SP, Eaton EV, et al. Surgery and medicine residents' perspectives of morbidity and mortality conference: An interdisciplinary approach to improve ACGME core competency compliance. J Surg Educ 2015;72(6):e258–66.
3. Bowe SN. Quality improvement in otolaryngology residency: Survey of program directors. Otolaryngology 2016;154(2):349–54.
4. Jackson JR, De Cesare JZ. Multidisciplinary OBGYN morbidity and mortality conference. Arch Gynecol Obstet 2015;292(1):7–11.
5. Valle B, Gasq C, Dehours E, et al. Morbidity and mortality conferences in emergency departments: The French National Survey. Eur J Emerg Med 2013;20(5):364–6.
6. Cifra CL, Bembea MM, Fackler JC, et al. The morbidity and mortality conference in PICUs in the United States: A national survey. Crit Care Med 2014;42(10):2252–7.
7. Kim MJ, Fleming FJ, Peters JH, et al. Improvement in educational effectiveness of morbidity and mortality conferences with structured presentation and analysis of complications. J Surg Educ 2010;67(6):400–5.
8. Kalaria AD, Filice RW. Comparison-Bot: An automated preliminary-final report comparison system. J Digit Imaging 2015.
9. Itri JN, Kang HC, Krishnan S, et al. Using focused missed-case conferences to reduce discrepancies in musculoskeletal studies interpreted by residents on call. AJR Am J Roentgenol 2011;197(4):W696–705.
10. Itri JN, Redfern RO, Scanlon MH. Using a web-based application to enhance resident training and improve performance on-call. Acad Radiol 2010;17(7):917–20.
11. Gorniak RJ, Flanders AE, Sharpe RE Jr. Trainee report dashboard: Tool for enhancing feedback to radiology trainees about their reports. Radiographics 2013:135705.
12. Moustakas CE. Phenomenological Research Methods. Thousand Oaks, CA: Sage; 1994.
13. Flick U. An Introduction to Qualitative Research. 4th ed. Los Angeles: Sage Publications; 2009.
14. Orlander JD, Fincke BG. Morbidity and mortality conference: A survey of academic internal medicine departments. J Gen Intern Med 2003;18(8):656–8.
15. Seigel TA, McGillicuddy DC, Barkin AZ, et al. Morbidity and mortality conference in emergency medicine. J Emerg Med 2010;38(4):507–11.
16. Orlander JD, Barber TW, Fincke BG. The morbidity and mortality conference: The delicate nature of learning from error. Acad Med 2002;77(10):1001–6.
17. Accreditation Council for Graduate Medical Education. Common Program Requirements. 2015. Available from: 〈http://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_07012015.pdf〉. [Accessed April 2016].
18. Rosenfeld JC. Using the morbidity and mortality conference to teach and assess the ACGME General Competencies. Curr Surg 2005;62(6):664–9.
19. Prince JM, Vallabhaneni R, Zenati MS, et al. Increased interactive format for morbidity & mortality conference improves educational value and enhances confidence. J Surg Educ 2007;64(5):266–72.
20. Prince JM, Vallabhaneni R, Zenati MS, et al. Increased interactive format for morbidity & mortality conference improves educational value and enhances confidence. J Surg Educ 2007;64(5):266–72.