The Joint Commission Journal on Quality and Patient Safety 2017; 43:5–15

Processes for Identifying and Reviewing Adverse Events and Near Misses at an Academic Medical Center

William Martinez, MD, MS; Lisa Soleymani Lehmann, MD, PhD, MSc; Yue-Yung Hu, MD, MPH; Sonali Parekh Desai, MD, MPH; Jo Shapiro, MD

Background: Conferences, processes, and/or meetings in which adverse events and near misses are reviewed within clinical programs at a single academic medical center were identified.

Methods: Leaders of conferences, processes, or meetings—"process leaders"—in which adverse events and near misses were reviewed were surveyed.

Results: On the basis of responses from all 45 process leaders, processes were classified into (1) Morbidity and Mortality Conferences (MMCs), (2) Quality Assurance (QA) Meetings, and (3) Educational Conferences. Some 22% of the clinical programs used more than one of these three processes to identify and review adverse events and near misses, while 10% had no consistent participation in any of them. Explicit criteria for identifying and selecting cases to be reviewed were used by 58% of MMCs and 69% of QA Meetings. The explicit criteria used by MMCs and QA Meetings varied widely. Many MMCs (54%, 13/24), QA Meetings (54%, 7/13), and Educational Conferences (70%, 7/10) did not review all the adverse events or near misses that were identified, and several MMCs (46%, 6/13), QA Meetings (29%, 2/7), and Educational Conferences (57%, 4/7) had no other process within their clinical program by which to review these remaining cases.

Conclusions: There was wide variation regarding how clinical programs identify and review adverse events and near misses within the MMCs, QA Meetings, and Educational Conferences, and some programs had no such processes. A well-designed, coordinated process across all clinical areas that incorporates accepted approaches for event analysis may improve the quality and safety of patient care.

Since the 1999 release of the Institute of Medicine's report To Err Is Human, numerous health care organizations have called for increased reporting and analysis of adverse events and near misses as a means of improving systems of care and making health care safer.1–3 For this to happen, adverse events and near misses need to be identified, reported, and analyzed effectively, and lessons learned need to be translated into practice and systems improvements. However, establishing an organizational culture and procedures to accomplish these aims has proved challenging.4,5 To facilitate improvement in care, patient safety experts have developed tools for analyzing systems contributions to adverse events and near misses, but these tools are underutilized.6,7 In an attempt to promote transparency and improvements in care, many medical centers have implemented error disclosure policies that mandate disclosure and apology to patients3,8,9 and require reporting of specific errors,10 yet underreporting and a lack of disclosure persist.11,12

For decades, Morbidity and Mortality Conferences (MMCs) have been one of academic medicine's most time-honored forums for the discussion and analysis of adverse events.13 However, historically these conferences have not emphasized systems contributions or the discussion of disclosure and apology to patients.7,14,15 Despite the prominent place of MMCs in academic medicine, guidelines for their structure and content are lacking, particularly outside of the surgical setting where the conference has its deepest roots.14,16 In recent years, several authors have noted opportunities to improve MMCs by using them to promote a culture of safety through the discussion of errors and a focus on improving communication and systems of care.17–22 These opportunities for improvement have included broadening attendance to include all stakeholders (for example, pharmacy, nursing, environmental services, risk management, patient services), prospective standardized incident recording (for example, the American College of Surgeons National Surgical Quality Improvement Program 30-day complication proforma), structured analysis of events, administrative pathways for acting on issues identified, and communication with patients and families.17–23

In addition to MMCs, over the last decade many medical centers have established patient safety reporting systems to identify risks to patients that can be used to improve patient safety.24 Promoting, monitoring, and effectively acting on these reports remains challenging.24,25 The last decade has also seen a rise in Quality Assurance (QA) committees and quality and safety officers within medical centers that are charged with ensuring some combination of the safety, effectiveness, and patient-centeredness of the medical care delivered.26–29 More recently, the psychological consequences and emotional suffering of clinicians involved in serious medical errors and the need for supportive interventions that achieve a healthy recovery have been increasingly recognized.30,31

Ideally, all processes used to identify and review adverse events and near misses, as well as processes used to implement systems improvement and support learning and coping among involved clinicians, would be integrated and coordinated across an institution to maximize effectiveness of these processes in improving the quality and safety of patient care. In the absence of clear standards, it is unknown how different clinical programs within an academic medical center approach the identification and review of adverse events and near misses within their clinical area and how the findings are disseminated to improve the quality and safety of medical care. Thus, we identified conferences, processes, and/or meetings in which adverse events and near misses are reviewed within clinical programs at a single academic medical center to (1) describe and compare their procedures for identifying, selecting, and evaluating adverse events and near misses and disseminating findings to effect change, and (2) explore how these procedures may be modified to improve the quality and safety of patient care.

METHODS

Setting

This study was performed at a 747-bed tertiary care academic medical center affiliated with an accredited US medical school. There are approximately 52,000 inpatient admissions and 950,000 outpatient visits annually. The hospital employs more than 12,000 people, of whom approximately 3,000 are physicians. The hospital is organized into 17 clinical departments with several divisions within each department. There are 110 clinical residency and fellowship programs that encompass nearly all clinical programs. The hospital has a fully integrated electronic health record and computerized order entry system. In addition, there is a computerized patient safety reporting system accessible to all staff to document and share patient safety events. The hospital has a dedicated Department of Quality and Safety responsible for collecting, reporting, and analyzing all quality and safety data throughout the hospital and applying patient safety concepts that incorporate human factors and systems-based solutions to reducing medical errors. The hospital also offers a peer support program with trained clinician peer supporters who are available to all clinicians involved in any critical event, such as an adverse event, with the goal of providing a safe environment to share the emotional impact and foster open communication and recovery. To facilitate transparent and compassionate disclosure conversations with patients and their families, the institution deploys a disclosure coach (the senior author [J.S.]), who works in conjunction with risk management and patient relations to prepare clinicians for engaging with patients and their families after adverse events.

Sample

Our sample was constructed using a multistep process. We first contacted 97 residency and fellowship program directors representing 105 clinical programs within our institution.

Eight program directors led more than one clinical training program (for example, Anesthesiology Residency and Obstetrical Anesthesia Fellowship). We excluded clinical training programs in oncology because they are based at an affiliated institution with unique leadership and processes. Program directors were asked (1) whether their clinical program participates in an MMC and/or other conferences, processes, or meetings in which adverse events and near misses are reviewed and (2) if so, to identify all of these conferences, processes, or meetings and their leaders. For simplicity and consistency, we will refer to the conference, process, or meeting leaders simply as "process leaders." This first sampling frame was chosen because within this large academic medical center, program directors are well positioned to know about these conferences, processes, and meetings, and nearly all clinical programs within the medical center have an affiliated residency or fellowship program. In the second step, we contacted the identified process leaders and asked them to complete a detailed questionnaire about their respective conference, process, or meeting. We chose to survey process leaders, rather than participants, because process leaders are uniquely positioned to describe the procedures used to identify, select, and review adverse events and near misses and to disseminate findings.

Questionnaire

The questionnaire (Appendix 1, available in online article) was designed to assess the structure and procedures of the conferences, processes, or meetings in which adverse events and near misses are reviewed. The questionnaire was developed on the basis of a review of the literature on MMCs,7,13–21,32,33 quality assurance,26,29,34–37 and adverse events and near misses,1,22,30,38–50 as well as discussion with patient safety experts. The questionnaire was pilot tested for clarity and face validity prior to its use. The introduction to the questionnaire provided respondents with the following definitions, which were repeated at the top of each section:
• Adverse Events: Harm, whether transient or permanent, caused by medical management rather than the underlying condition of the patient. Adverse events are not necessarily preventable. Not all adverse events are due to medical errors. Example: A patient is appropriately treated with antibiotics and has an allergic reaction for the first time.
• Medical Errors: The failure of a planned action to be completed as intended (error of execution) or the use of a wrong plan to achieve an aim (error of planning) in the course of managing a patient's medical condition. Example: A patient with a known history of anaphylaxis to penicillin mistakenly receives it.
• Near Misses: Medical errors that could have caused harm but did not, either by chance or timely intervention. Example: A patient with a history of anaphylaxis to penicillin is prescribed penicillin, but prior to administering
the medication, the pharmacist notices the allergy and notifies the physician, who changes the order.
These definitions were based on the Institute of Medicine's report To Err Is Human.1 The questionnaire then asked process leaders about the following:
1. Structure of the conference, process, or meeting they lead
2. Procedures used to identify, select, and review adverse events and near misses
3. Mechanisms to disseminate the findings for the conference, process, or meeting to relevant parties (institutional or departmental leadership in a position to effect change as well as the involved clinicians, patients, and families where appropriate)
4. Discussion of event disclosure to patients and modeling of appropriate accountability within their conference, process, or meeting
5. Knowledge and attitudes regarding the existing peer support program (peer-to-peer advice on coping and responding to adverse events as well as emotional support)

Procedures

Process leaders were surveyed between August 2012 and July 2013. Questionnaires were administered via an e-mail link to an electronic questionnaire using REDCap (Research Electronic Data Capture) software.51 Nonrespondents received up to two reminder e-mails from the senior author requesting their participation.

Data Analysis

Frequencies and counts were used to describe the characteristics of the conferences, processes, and meetings. Responses to open-ended questions were qualitatively coded by one of the authors and a research assistant working independently. Responses with shared content were then iteratively organized into emerging categories. Disagreements between coders were resolved by consensus and through consultation with an expert physician (senior author). Representative responses in each category were edited slightly for grammar and spelling. Analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, North Carolina). The study was approved by the study site's Institutional Review Board.

RESULTS

Response Rate

Of the program directors contacted, 92% (89/97) responded and provided information about 96 unique clinical programs within the institution. Some program directors were affiliated with more than one clinical program. Program directors reported that 93% (89/96) of the clinical programs participated in at least one conference, process, or meeting in which adverse events and near misses were reviewed. Several clinical programs joined with others and participated in broader division- or departmentwide conferences, processes, or meetings.

Table 1. Characteristics of the 45 Process Leaders Surveyed

Characteristic                                 n (%)
Gender
  Male                                         26 (58)
  Female                                       19 (42)
Profession
  Nurse                                        1 (2)
  Physician                                    44 (98)
Department
  Anesthesiology                               1 (2)
  Dermatology                                  1 (2)
  Emergency Medicine                           1 (2)
  Medicine                                     9 (20)
  Neurology                                    3 (7)
  Obstetrics and Gynecology                    4 (9)
  Pathology                                    4 (9)
  Psychiatry                                   1 (2)
  Radiology                                    8 (18)
  Surgery                                      13 (29)
Position
  Chief Resident or Fellow                     2 (4)
  Residency/Fellowship Program Director        8 (18)
  Clinical Service/Medical Director            13 (29)
  Division/Department Leadership*              10 (22)
  Faculty Member                               6 (13)
  Quality or Safety Officer                    6 (13)

*Includes division chiefs and department vice chairs.

In total, program directors identified 47 unique conferences, processes, or meetings led by 45 unique process leaders. Some process leaders led multiple conferences, processes, or meetings. All of the 45 process leaders surveyed responded (100% response rate) and provided information about all 47 conferences, processes, and meetings. Table 1 shows the characteristics of the process leaders surveyed. Based on the titles and description of the conferences, processes, and meetings, the authors organized them into three groups: (a) MMCs (n = 24), (b) QA Meetings (n = 13), and (c) Educational Conferences (n = 10). Sidebar 1 contains illustrative examples of the titles of the conferences, processes, or meetings in each group. Program directors reported that 48% (46/96) of clinical programs participated in an MMC, 9% (9/96) participated in a QA Meeting, 10% (10/96) participated in an Educational Conference, and 22% (21/96) participated in some combination of the three types of processes in which adverse events and near misses were reviewed, while 10% (10/96) of clinical programs had no consistent participation in any processes in which adverse events and near misses were reviewed.

Structure

Table 2 contains data on the structure of the MMCs, QA Meetings, and Educational Conferences in this study.

Sidebar 1. Illustrative Examples of Conferences, Processes, and Meetings in Which Adverse Events and Near Misses Are Reviewed, Sorted by Type

MMCs
• Anesthesiology MMC
• Cardiology MMC
• General Surgery MMC
• Internal Medicine MMC
• Interventional Radiology MMC
• Urology MMC

QA Meetings
• Abdominal Imaging and Intervention QA Meeting
• Neurology QA Committee Meeting
• Psychiatry Critical Incident Review Team Meeting
• Stillbirth Review Meeting
• Thoracic Surgery QA Meeting
• Transplant Surgery QA and Performance Improvement Meeting

Educational Conferences
• Family Planning Case Conference and Lecture Series
• Musculoskeletal Radiology Education Conference
• Plastic Surgery Grand Rounds
• Radiology Case Review Conference
• Renal Grand Rounds

MMC, Morbidity and Mortality Conference; QA, Quality Assurance.

Process leaders reported nurses participated in 50% (12/24) of MMCs, 38% (5/13) of QA Meetings, and 30% (3/10) of Educational Conferences. The physicians involved in the case were "often" or "always" present in 92% (22/24) of MMCs, 62% (8/13) of QA Meetings, and 80% (8/10) of Educational Conferences.

Identification and Selection of Adverse Events and Near Misses

Table 3 shows the procedures used to identify and select adverse events and near misses for review.

Table 2. Structural Characteristics of Morbidity and Mortality Conferences (MMCs), Quality Assurance (QA) Meetings, and Educational Conferences Where Adverse Events and Near Misses Are Reviewed

Values are no. (%), listed as MMCs (n = 24) | QA Meetings (n = 13) | Educational Conferences (n = 10) | Total (N = 47).

Frequency
  Quarterly: 4 (17) | 6 (46) | 1 (10) | 11 (23)
  Monthly: 12 (50) | 2 (15) | 3 (30) | 17 (36)
  Weekly: 5 (21) | 3 (23) | 4 (40) | 12 (26)
  Other: 3 (13) | 2 (15) | 2 (20) | 7 (15)
Type of Participants
  Attending physicians: 24 (100) | 11 (85) | 10 (100) | 45 (96)
  Fellows: 21 (88) | 9 (69) | 10 (100) | 40 (85)
  Residents: 18 (75) | 5 (38) | 9 (90) | 32 (68)
  Interns: 10 (42) | 2 (15) | 2 (20) | 14 (30)
  Medical students: 12 (50) | 3 (23) | 4 (40) | 19 (40)
  Nurses: 12 (50) | 5 (38) | 3 (30) | 20 (43)
  Physician extenders: 14 (58) | 4 (31) | 3 (30) | 21 (45)
  Other(s): 4 (17) | 6 (46) | 4 (40) | 14 (30)
Physicians involved in case attend
  Never: 1 (4) | 2 (15) | 0 (0) | 3 (7)
  Sometimes: 1 (4) | 3 (23) | 2 (20) | 6 (13)
  Often: 12 (50) | 4 (31) | 4 (40) | 20 (43)
  Always: 10 (42) | 4 (31) | 4 (40) | 18 (38)
Number of cases presented*†
  1–2: 4 (17) | 5 (38) | 7 (70) | 16 (35)
  3–4: 7 (30) | 2 (15) | 1 (10) | 10 (22)
  5 or more: 12 (52) | 6 (46) | 2 (20) | 20 (43)
Case presented within two months from when it occurred†
  Never: 0 (0) | 0 (0) | 0 (0) | 0 (0)
  Seldom: 1 (4) | 0 (0) | 1 (10) | 2 (4)
  Sometimes: 2 (9) | 2 (15) | 3 (30) | 7 (15)
  Often: 12 (52) | 6 (46) | 3 (30) | 21 (46)
  Always: 8 (35) | 5 (38) | 3 (30) | 16 (35)

*Not mutually exclusive categories, may add up to > 100%.
†Missing data for one MMC (n = 23, not 24) and total (N = 46, not 47).
MMC, Morbidity and Mortality Conference; QA, Quality Assurance.

Table 3. Procedures Used to Identify Potential Cases for Presentation in Morbidity and Mortality Conferences (MMCs), Quality Assurance (QA) Meetings, and Educational Conferences

Values are no. (%), listed as MMCs (n = 24) | QA Meetings (n = 13) | Educational Conferences (n = 10) | Total (N = 47).

Procedures for identifying potential cases for review during the conference, process, or meeting
  Clinicians volunteer cases*: 9 (38) | 3 (23) | 5 (50) | 17 (36)
  Explicit criteria†: 14 (58) | 9 (69) | 3 (30) | 26 (55)
  Other/missing response: 1 (4) | 1 (8) | 2 (20) | 4 (9)
All identified adverse events or near misses reviewed as part of conference, process, or meeting
  No: 13 (54) | 7 (54) | 7 (70) | 27 (57)
  Yes: 11 (46) | 6 (46) | 3 (30) | 20 (43)
Separate process to review remaining cases
  No: 6/13 (46) | 2/7 (29) | 4/7 (57) | 12/27 (44)
  Yes: 7/13 (54) | 5/7 (71) | 3/7 (43) | 15/27 (56)

*Defined as: Clinician(s) within the clinical program identify cases by whatever means they deem appropriate or are solicited to volunteer cases they deem appropriate.
†Defined as: Process leader describes the use of predetermined, specific, and measurable standard(s) for identifying potential cases (e.g., deaths, ICU transfers, reportable events).

Process leaders reported that more than half of both MMCs (58%, 14/24) and QA Meetings (69%, 9/13) and about a third of Educational Conferences (30%, 3/10) had explicit criteria (predetermined, specific, and measurable standard[s] such as evaluating all deaths or readmissions) for identifying and selecting adverse events and near misses for review. The criteria used by MMCs and QA Meetings varied widely within and between the two groups. Sidebar 2 provides illustrative examples of the explicit criteria used by each group. These criteria varied from chart review of all cases by a dedicated staff member with subsequent review of any identified adverse events and near misses by clinical leadership (Sidebar 2, Example 2) to reviewing all deaths and readmissions, as well as adverse events or near misses submitted by concerned clinicians (Sidebar 2, Example 7). Some MMC process leaders reported partnering with QA processes within their clinical program such that the QA committee selected the cases to be discussed during the MMC as a way to educate conference participants about specific systems issues (Sidebar 2, Example 3). Many MMCs (54%, 13/24), QA Meetings (54%, 7/13), and Educational Conferences (70%, 7/10) did not review all the adverse events or near misses that were identified, and of these, several MMCs (46%, 6/13), QA Meetings (29%, 2/7), and Educational Conferences (57%, 4/7) had no other process within their clinical program by which to review these remaining adverse events and near misses.

Attributes of Adverse Events and Near Misses

Table 4 reports the importance of specific attributes of the adverse events and near misses for process leaders in selecting cases for review during the MMCs, QA Meetings, and Educational Conferences. In addition, most cases came from an inpatient setting: Process leaders reported that the cases presented "rarely" or "sometimes" occurred in an outpatient setting in 74% (17/23) of MMCs, 69% (9/13) of QA Meetings, and 80% (8/10) of Educational Conferences.

Assessment of Systems Issues and Dissemination of Findings

Table 5 reports the prevalence of assessment of systems issues and procedures for disseminating findings in the MMCs, QA Meetings, and Educational Conferences. When adverse events or near misses were reviewed, process leaders of 78% (18/23) of MMCs, 69% (9/13) of QA Meetings, and 60% (6/10) of Educational Conferences reported that systems issues were "often" or "always" assessed. However, 60% (6/10) of Educational Conference leaders and 22% (5/23) of MMC leaders reported no procedures for communicating systems issues to leadership in a position to effect change. Among conferences, processes, and meetings with such procedures in place, the procedures varied; they included sharing findings with a designated quality/safety officer(s), clinical leaders (for example, clinic directors, division chiefs, nursing leadership), and/or risk management.

Medical Error Disclosure

Process leaders reported that the cases presented "rarely" or "sometimes" involved a medical error in 78% (18/23) of MMCs, 100% (13/13) of QA Meetings, and 100% (10/10) of Educational Conferences. When medical errors were reviewed and the physician(s) involved were present, process leaders reported that the involved physician(s) "often" or "always" acknowledged their contribution to the medical error when appropriate in 70% (16/23) of MMCs, 54% (7/13) of QA Meetings, and 50% (5/10) of Educational Conferences.

Sidebar 2. Illustrative Examples of Criteria Used to Identify and Select Cases for Presentation in Morbidity and Mortality Conferences (MMCs) and Quality Assurance (QA) Meetings

MMCs

[S]election based on four determinants: (a) the severity of the outcome or, for near misses, of the possible outcome, (b) the teaching value of the event for both residents and faculty, (c) the frequency of the problem, and (d) because it is a reportable event. (Example 1)

Clinical follow up on all admitted patients is completed . . . with adverse events occurring from admission through discharge documented. . . . We have a dedicated Quality Nurse Coordinator that does chart reviews on 100% of cases to verify complications, adverse events, medical errors and near misses. The complication information is stored in an extensive database and a query is extracted each month. The data is reviewed by the Medical Director of the [clinical area], rotating [attending physician] and the Fellow presenting MMC for the month. (Example 2)

Typically, incidents and near misses are discussed at our monthly Quality Improvement Committee and then brought anonymously to the MMC. (Example 3)

Cases are selected from review of (1) patient safety reports, (2) deaths in [our clinical area], (3) deaths within 24 hours of admission, (4) readmission within 72 hours, and (5) transfer to ICU within 24 hours of admission. (Example 4)

Co-fellow and [MMC leader] compile all complications for the previous month and pick 4 for discussion during hour-long conference. (Example 5)

Cases are collected by 4 processes. There is a review of: (1) all safety reports submitted through the safety reporting system, (2) all "codes," (3) mandatory mortality reports for all deaths, (4) attending physicians are asked during and at the end of their rotation to report a list of predefined "events." Cases with most important systems issues or teaching value are highlighted. In addition, we attempt to briefly review all deaths that have autopsy results. (Example 6)

QA Meetings

We routinely review all deaths that occur on the . . . service each month. We also review cases of patients who were re-admitted to our service after being discharged within the last 14 days. Intermittently, a concerned faculty member or resident will contact [the Committee] to review a case. (Example 7)

Any case deemed to be a severe, unexpected adverse reaction to [treatment] is discussed, as are any errors leading to patient harm or near misses. Any reportable case is discussed. (Example 8)

[Cases] are selected by review of the [clinical] record which is reviewed for [all deaths]. Cases are assessed by a standard criteria form for primary and contributing cause of death; unusual cases or those without clear causes are then presented in greater detail. (Example 9)

Criteria for cases to review: (1) all complications and mortality, (2) near misses, and (3) suspected medical errors. (Example 10)
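Explicit criteria of the kind illustrated above (for example, the triggers in Example 4) are, in effect, simple screening rules that could in principle be run against a case registry rather than applied by hand. The sketch below is a minimal, hypothetical illustration of that idea only; it is not the study site's actual process or tooling, and all field, class, and function names (such as CaseRecord, hours_from_admission_to_icu_transfer, and flag_for_review) are invented for the example.

```python
# Hypothetical sketch of screening a case registry against explicit criteria
# modeled on Sidebar 2, Example 4. Field names and thresholds are invented for
# illustration; a real implementation would be defined and validated locally.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CaseRecord:
    case_id: str
    safety_report_filed: bool                              # a patient safety report exists
    died: bool                                             # death during the episode of care
    hours_from_admission_to_death: Optional[float] = None
    readmitted_within_72h: bool = False
    hours_from_admission_to_icu_transfer: Optional[float] = None


def flag_for_review(case: CaseRecord) -> List[str]:
    """Return the explicit criteria (if any) that this case triggers."""
    triggers: List[str] = []
    if case.safety_report_filed:
        triggers.append("patient safety report")
    if case.died:
        triggers.append("death")
        if (case.hours_from_admission_to_death is not None
                and case.hours_from_admission_to_death <= 24):
            triggers.append("death within 24 hours of admission")
    if case.readmitted_within_72h:
        triggers.append("readmission within 72 hours")
    if (case.hours_from_admission_to_icu_transfer is not None
            and case.hours_from_admission_to_icu_transfer <= 24):
        triggers.append("ICU transfer within 24 hours of admission")
    return triggers


if __name__ == "__main__":
    registry = [
        CaseRecord("A", safety_report_filed=False, died=True,
                   hours_from_admission_to_death=10.0),
        CaseRecord("B", safety_report_filed=True, died=False,
                   readmitted_within_72h=True),
        CaseRecord("C", safety_report_filed=False, died=False),
    ]
    for case in registry:
        hits = flag_for_review(case)
        if hits:  # only flagged cases go forward for triage to an MMC or QA Meeting
            print(f"Case {case.case_id}: {', '.join(hits)}")
```

Even under such an automated flagging step, cases would still require the kind of coordinated triage to an MMC, QA Meeting, or Educational Conference that the Discussion section describes.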

Table 4. Importance of Specific Features of the Adverse Events and Near Misses in Case Selection for MMCs, QA Meetings, and Educational Conferences

Values are no. (%) rating the feature "important" or "very important" in case selection,* listed as MMCs (n = 24) | QA Meetings (n = 13) | Educational Conferences (n = 10) | Total (N = 47).

  Availability of pathologic or autopsy data: 11 (46) | 7 (54) | 4 (40) | 22 (47)
  Clinical teaching value: 21 (88) | 9 (69) | 10 (100) | 40 (85)
  Highlights systems issues: 23 (96) | 12 (92) | 8 (80) | 43 (91)
  Involves a medical error: 18 (75) | 10 (77) | 7 (70) | 35 (74)

*Response categories for this item were on a 4-point Likert scale: 1 (not at all important), 2 (somewhat important), 3 (important), and 4 (very important).
MMC, Morbidity and Mortality Conference; QA, Quality Assurance.

Process leaders reported that the disclosure of the event to the patient/family was "often" or "always" discussed in only 30% (7/23) of MMCs, 23% (3/13) of QA Meetings, and 10% (1/10) of Educational Conferences.

Peer Support

Most process leaders of MMCs (87%, 20/23), QA Meetings (77%, 10/13), and Educational Conferences (80%, 8/10) were aware of the availability of the peer support program within the medical center; however, even when all were made aware, only 57% (13/23) of MMC leaders, 38% (5/13) of QA Meeting leaders, and 60% (6/10) of Educational Conference leaders reported they would be "very" or "completely" likely to refer physicians involved in adverse events for peer support.

DISCUSSION

This is the first study to describe the conferences, processes, and/or meetings used by clinical programs within a single academic institution to review adverse events and near misses and the procedures they use to identify, select, and evaluate cases.

Table 5. Prevalence of Assessing Systems Issues and Procedures for Disseminating Findings in MMCs, QA Meetings, and Educational Conferences Where Adverse Events and Near Misses Are Reviewed

Values are no. (%), listed as MMCs* (n = 23) | QA Meetings (n = 13) | Educational Conferences (n = 10) | Total* (N = 46).

When adverse event or near miss is reviewed during conference/process/meeting, systems issues are assessed
  Never: 0 (0) | 0 (0) | 0 (0) | 0 (0)
  Rarely: 2 (9) | 0 (0) | 3 (30) | 5 (11)
  Sometimes: 3 (13) | 4 (31) | 1 (10) | 8 (17)
  Often: 12 (52) | 5 (38) | 4 (40) | 21 (46)
  Always: 6 (26) | 4 (31) | 2 (20) | 12 (26)
Process for communicating systems issues to leadership in position to effect change
  No: 5 (22) | 1 (8) | 6 (60) | 12 (26)
  Yes: 18 (78) | 12 (92) | 4 (40) | 34 (74)
If yes, describe the process:†
  Communicated to clinical leadership‡: 9/18 (50) | 9/12 (75) | 1/4 (25) | 19/34 (56)
  Communicated to quality/safety officer(s)§: 8/18 (44) | 4/12 (33) | 2/4 (50) | 14/34 (41)
  Communicated to risk management||: 3/18 (17) | 2/12 (17) | 1/4 (25) | 6/34 (18)
  Communicated ad hoc to relevant parties#: 6/18 (33) | 1/12 (8) | 0/4 (0) | 7/34 (21)

*Missing data for one MMC (n = 23, not 24) and total (n = 46, not 47).
†Not mutually exclusive categories, may add up to greater than 100%.
‡Defined as: Process leader describes procedures for communicating systems issues to clinic leadership (e.g., directors of clinical areas, division chiefs, department chairs) or the involvement of clinical leadership directly in the process.
§Defined as: Process leader describes procedures for communicating systems issues to quality and/or patient safety leader(s) (e.g., director of patient safety, vice chair for quality and safety) within a clinic area or the involvement of quality and/or patient safety leader(s) directly in process.
||Defined as: Process leader describes procedures for communicating systems issues to risk management or the involvement of risk management directly in the process.
#Defined as: Process leader describes communicating systems issues to relevant parties (who are not prespecified) as needed or at the discretion of the process leader.
MMC, Morbidity and Mortality Conference; QA, Quality Assurance.

The majority of clinical programs utilized at least one of three types of conferences, processes, or meetings (MMCs, QA Meetings, and Educational Conferences). We found that only slightly over half of MMCs and QA Meetings used explicit criteria to identify and select cases for review, and when present, there was wide variation in the criteria for case selection within and between groups. In addition, we found that more than half of all conferences, processes, and meetings did not review all identified adverse events and near misses, and of these, nearly half of MMCs and Educational Conferences had no other process within their clinical program by which to review the remaining cases. Moreover, we found that several clinical programs did not consistently participate in any conferences, processes, or meetings to review their adverse events and near misses. These unreviewed events and the absence of review processes within some clinical programs represent missed opportunities for quality improvement. Our study also produced some tentative findings regarding process leaders' perceptions of the prevalence of specific content within their MMCs, QA Meetings, and Educational Conferences. Despite calls from governmental,1 professional,52 and private2 organizations for increased discussion and study of medical errors, the majority of process leaders reported that their MMCs, QA Meetings, or Educational Conferences did not often review a case involving a medical error.

Previous studies have outlined the importance of modeling appropriate accountability and the disclosure of errors to patients and families as a means of promoting culture change and establishing expectations and behavioral norms.15,53 However, when medical errors were presented, more than a quarter of process leaders reported that the physicians involved did not often appropriately acknowledge their contribution to the error during the discussion. In addition, more than half of the process leaders reported that their process did not often include a discussion of whether or how the event was disclosed to the patient or family. Although some clinical programs with more than one process for the review of adverse events may appropriately choose to focus on errors, modeling of appropriate accountability, and a discussion of disclosure to patients in one process (for example, MMC) and not the others, only a minority of clinical programs had more than one process, so this would not account for the absence of this content within many of the MMCs, QA Meetings, and Educational Conferences in this study. Our findings are consistent with research studies focused on MMCs.7,14,15

Variability in the MMCs has been attributed to a lack of consistency between departments, variable participation of treating clinicians, and conference modifications to achieve various educational aims.32,54 Aboumatar and colleagues found wide variation in how MMCs are conducted across departments and little conformity to known models for analyzing events and identifying systems issues.7 Pierluissi and colleagues found that adverse events and errors were not routinely discussed in internal medicine MMCs, and both surgery and internal medicine MMCs missed opportunities to model error recognition and acknowledgement of contributions to errors.15 Our study adds to existing literature by providing a comprehensive analysis of conferences, processes, and meetings where adverse events and near misses are reviewed within a diverse group of clinical programs across an entire, large academic medical center. Our findings demonstrate significant variation in procedures among not only MMCs but all conferences, processes, and meetings where adverse events and near misses were reviewed. Some variation among the MMCs may be due to differences in Accreditation Council for Graduate Medical Education (ACGME) requirements for surgery and internal medicine training programs;14,15 however, lack of a systematic approach to event identification and analysis was common across all conferences, processes, and meetings in this study. Although a few clinical programs had both an MMC and a QA Meeting, process leaders rarely described communication or collaboration between the two processes.

The ability to learn from mistakes and make changes is an essential element of safety culture. Thus, all venues where adverse events and near misses are reviewed play a central role in safety culture. Frankel and colleagues remind us that "to achieve reliability, organizations need to begin thinking about the relationship between these [quality and safety] efforts and linking them conceptually."55(p. 1690) Thoughtful integration, coordination, and structure of these conferences, processes, and meetings may help to maximize their effectiveness in improving the quality and safety of patient care. A formal approach (for example, review of all readmissions, codes, ICU transfers, extended length of stay, and so forth) may help identify adverse events that might have otherwise gone unreported or unrecognized. Coordinated triage of identified cases for presentation in the appropriate venue (MMC, QA Meeting, or Educational Conference) and a formal process to ensure that all identified events are reviewed in one venue or another may reduce missed opportunities for improvement and make the best use of each venue. Centralized reporting of all identified events across the institution may assist in identifying patterns and tracking improvement. When medical errors are presented in conferences, processes, or meetings, acknowledgment by individuals (particularly attending physicians) of their contributions to the error, explicit use of the word error, and discussion of how the error was disclosed to the patient/family may benefit participants and organizational culture by de-stigmatizing these behaviors and encouraging discussion and examination of errors.15,53

It is important that such internal discussions are held in a forum that supports personal and systems accountability without shame or blame. Although analysis of errors may not be appropriate for all conferences, processes, or meetings, clinical programs should work to incorporate it into the most appropriate venue and where it will reach the greatest number of individuals. Department leaders should ensure that all physicians and staff involved in reviewing events receive training in accepted approaches to event analysis, including (1) soliciting input from all staff, including nurses, technicians, and administrators involved in the event; (2) using a structured framework to investigate all underlying contributing factors, such as task factors and team factors, to identify underlying system issues as well as cognitive biases; and (3) developing a follow-up plan based on recommendations, including assigning responsible personnel and a time line for action.6,7,23,55–57 Clinicians should receive support and training for having compassionate and transparent conversations with patients and families after adverse events.40 In addition, modeling these behaviors may help prepare others for successful personal management of this significant and inherent challenge of medical practice.16,54 After disclosure and event analysis, follow-up may include communicating to affected patients and families the findings and improvements made, as well as ensuring that involved clinicians receive peer support for these potentially emotionally stressful events.31,32 Regrettably, we found that some process leaders were unaware of an existing peer support program within the institution, and some may be unlikely to refer clinicians involved in an adverse event to the program. This indicates that further work needs to be done to promote and demonstrate the value of these programs.

Our study has several important limitations. First, the data were collected from only one academic medical center in the United States, which may limit the generalizability of our findings. Second, some items relied on process leaders' recall of prior conferences, processes, or meetings, and thus the findings are subject to recall bias. Recall bias may be partially mitigated by the regular occurrence of the conferences, processes, and meetings. Third, given that process leaders are reporting on their own conferences or meetings and may want to present them more favorably, social desirability bias may be present. If social desirability bias was present, formal processes to identify, select, and review adverse events and near misses may be even less common and less rigorous than our data suggest. Fourth, our assessment of the content of the MMCs, QA Meetings, and Educational Conferences in this study relied on assessments from process leaders. Although process leaders are in the best position to describe the procedures that underlie the identification and selection of adverse events and near misses in their setting, which was the primary focus of our study, a more reliable assessment of the content of the conferences, processes, or meetings may come from surveying their participants. However, this was not the primary purpose of our study, and given limited resources, surveying the participants of 47 unique conferences, processes, and meetings would have been prohibitively time intensive and logistically challenging.

Thus, the content assessment of process leaders was sufficient for our purposes. Finally, although nearly all program directors responded to our initial query, it is possible that a small number of conferences, processes, or meetings where adverse events and near misses are reviewed within the institution were not identified, and some clinical programs may have processes that were unaccounted for in this study.

Properly conducted conferences, processes, and meetings in which adverse events and near misses are reviewed hold potential for improving the quality and safety of patient care, advancing medical education, strengthening transparent communication with patients and families, and encouraging peer support for clinicians affected by involvement in adverse events. There is wide variation regarding how identification and analyses of adverse events and near misses are accomplished in these venues. A well-designed, coordinated process should successfully identify adverse events and near misses, follow a structured approach for determining system defects and individual contributions, confirm appropriate disclosure and possible apology to affected patients and families, act on recommendations for improvement, and support clinicians in their learning and recovery after any significant adverse event.

Since concluding this study, we have advised and collaborated with several clinical programs that reported no process for reviewing adverse events and near misses to develop MMCs where case selection, discussion, and follow-up can have the best chance for improving safety and quality. We have developed a best-practices framework for event analysis for the institution (Table 6) that is based on known principles of safety improvement.58,59 The framework is designed to bring consistency and standardization to MMCs while providing flexibility for process leaders to integrate them into their clinical divisions and programs. The framework has been presented to two institutional leadership groups: the Graduate Medical Education Committee (GME leaders and program directors) and Directors of Quality and Safety. With leadership support from the Department of Quality and Safety, two of the study authors [S.P.D., J.S.] will disseminate the framework to all MMC leaders and pilot it within several select programs.

Table 6. Best Practices Framework for Event Analysis

Collaborative Case Review Principles

Participants
• Attendings, trainees, other health care providers
• Cross-departmental, Multi-D

Case Selection
• Rigorous, disciplined review of all significant adverse events, near misses, and occasional random case review

Culture of Patient Safety
• Feedback: Review action items from prior MMCs
• Implement: Systems, performance, practice changes
• Ensure: disclosure, apology, consistency among review teams
• Confidential peer support for providers

Collaborative Case Review Template (Template Categories, with Data Elements)

Situation: What is the event/near miss? Diagnosis, procedure

Background: Pertinent clinical information and subsequent event/near miss
• History, ambulatory or inpatient course, procedure, test results, management of adverse event

Assessment: Analysis of the risks:
• System issues (e.g., competing priorities, equipment, interface, flow issues)
• Personal performance factors (e.g., KSAs, distractions, experience)
• Behavioral choices that increased risk (procedure not followed, unprofessional behavior, etc.)

Recommendations for preventing future adverse events: EBM: review current data
• Summary of learning points
• Identify action items
• Assess reliability of proposal
• Identify feedback loop (How will you know it's been effective or completed?)

Operationalizing improvement efforts ("Knowing what we know now, what could we do differently next time?"): Ensure that all action items are communicated to the relevant parties:
• Departmental leadership—Dept Chair/Div Chief, Quality/Safety Vice Chair within Dept/Div
• Hospital Quality/Safety leadership, Risk Management
• Disclosure to patient/family
• Peer support

Multi-D, multidisciplinary; MMC, Morbidity and Mortality Conference; KSA, knowledge, skills, and abilities; EBM, evidence-based medicine; Dept, department; Div, division.

It is clear that some adverse events also need to be reviewed by one of the more centralized groups in the institution, such as the Quality and Risk Management Committee, which is charged with deciding which events require reporting to outside agencies. Adverse events involving multiple services also may benefit from analysis by a centralized group. Ideally, when systems issues are identified by a local MMC, this learning will be transferred centrally so that the institution can both help with systems changes and disseminate the learning throughout the institution. The medical center is now restructuring the centralized processes by which adverse events and near misses are analyzed and systems changes are enacted. In addition, the medical center has been involved in a major effort to disseminate the concepts of Just Culture56 throughout the institution. Clinical leaders have been educated in the precepts of Just Culture and how to integrate these precepts into event analyses. Two of the study authors [S.P.D., J.S.] now sit on the Executive Patient Safety Committee, the highest-level committee within the institution charged with overseeing all aspects of patient safety. Committee members include the top leaders of various disciplines, such as the Vice President of Quality and Safety and members of the Department of Quality and Safety, the Chief Medical Officer, Chief Nursing Officer, and other leaders, such as the head of the Quality Assurance/Risk Management Committee. In addition, in collaboration with our Center for Professionalism and Peer Support, there has been improved education about and utilization of our peer support and disclosure coaching programs. We continue to work toward improving the safety of our patient care and the culture of safety at our institution by reinforcing our commitment to be a learning organization.

Funding Support. Funding for Dr. Martinez was provided by an Institutional National Research Service Award (T32HP10251). The authors also thank Pamela Galowitz and Sara Nadelman for their assistance. The authors are grateful to Paul LeSage for his invaluable guidance and input regarding the Just Culture framework.

Acknowledgments. The authors thank CRICO/RMF for funding the project described in the article.

Conflicts of Interest. All authors report no conflicts of interest.

William Martinez, MD, MS, is Assistant Professor of Medicine, Division of General Internal Medicine and Public Health, Vanderbilt University Medical Center, Nashville, Tennessee. Lisa Soleymani Lehmann, MD, PhD, MSc, is Executive Director, National Center for Ethics in Health Care, US Department of Veterans Affairs, Washington, DC, and Associate Professor of Medicine and Medical Ethics, Harvard Medical School, Boston. Yue-Yung Hu, MD, MPH, is Pediatric Surgery Fellow, University of Connecticut School of Medicine, Farmington, Connecticut. Sonali Parekh Desai, MD, MPH, is Medical Director, Ambulatory Patient Safety, Brigham and Women’s Hospital, Boston, and Assistant Professor of Medicine, Harvard Medical School. Jo Shapiro, MD, is Director, Center for Professionalism and Peer Support, Brigham and Women’s Hospital, and Associate Professor of Otolaryngology, Harvard Medical School. Please address correspondence to Jo Shapiro, [email protected].

ONLINE-ONLY CONTENT
See the online version of this article for Appendix 1. Safety and quality review survey.

REFERENCES
1. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Conway J, et al. Respectful Management of Serious Clinical Adverse Events. 2nd ed. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement; 2011.
3. The Joint Commission. Patient Safety chapter. 2016 Comprehensive Accreditation Manual for Hospitals (E-dition). Oak Brook, IL: Joint Commission Resources; 2015.
4. Wachter RM. Patient safety at ten: unmistakable progress, troubling gaps. Health Aff (Millwood). 2010;29:165–173.
5. Landrigan CP, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010 Nov 25;363:2124–2134.
6. Vincent C. Understanding and responding to adverse events. N Engl J Med. 2003 Mar 13;348:1051–1056.
7. Aboumatar HJ, et al. A descriptive study of morbidity and mortality conferences and their conformity to medical incident analysis models: results of the morbidity and mortality conference improvement study, phase 1. Am J Med Qual. 2007;22:232–238.
8. Kachalia A, et al. Liability claims and costs before and after implementation of a medical error disclosure program. Ann Intern Med. 2010 Aug 17;153:213–221.
9. Kachalia A, Mello MM. New directions in medical liability reform. N Engl J Med. 2011 Apr 21;364:1564–1572.
10. Mastroianni AC, et al. The flaws in state "apology" and "disclosure" laws dilute their intended impact on malpractice suits. Health Aff (Millwood). 2010;29:1611–1619.
11. Kaldjian LC, et al. Reporting medical errors to improve patient safety: a survey of physicians in teaching hospitals. Arch Intern Med. 2008 Jan 14;168:40–46.
12. O'Connor E, et al. Disclosure of patient safety incidents: a comprehensive review. Int J Qual Health Care. 2010;22:371–379.
13. Campbell WB. Surgical morbidity and mortality meetings. Ann R Coll Surg Engl. 1988;70:363–365.
14. Orlander JD, Fincke BG. Morbidity and mortality conference: a survey of academic internal medicine departments. J Gen Intern Med. 2003;18:656–658.
15. Pierluissi E, et al. Discussion of medical errors in morbidity and mortality conferences. JAMA. 2003 Dec 3;290:2838–2842.
16. Gore DC. National survey of surgical morbidity and mortality conferences. Am J Surg. 2006;191:708–714.
17. Rosenfeld JC. Using the morbidity and mortality conference to teach and assess the ACGME General Competencies. Curr Surg. 2005;62:664–669.
18. Deis JN, et al. Transforming the morbidity and mortality conference into an instrument for systemwide improvement. In: Henriksen K, et al., eds. Advances in Patient Safety: New Directions and Alternative Approaches, vol. 2: Culture and Redesign. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Accessed Nov 8, 2016. https://www.ncbi.nlm.nih.gov/books/NBK43710/.
19. McVeigh TP, et al. Increasing reporting of adverse events to improve the educational value of the morbidity and mortality conference. J Am Coll Surg. 2013;216:50–56.
20. Mitchell EL, et al. Improving the quality of the surgical morbidity and mortality conference: a prospective intervention study. Acad Med. 2013;88:824–830.
21. Rabizadeh S, et al. Restructuring the morbidity and mortality conference in a department of pediatrics to serve as a vehicle for system changes. Clin Pediatr (Phila). 2012;51:1079–1086.
22. Martinez W, Lehmann LS. The "hidden curriculum" and residents' attitudes about medical error disclosure: comparison of surgical and nonsurgical residents. J Am Coll Surg. 2013;217:1145–1150.
23. Calder LA, et al. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21:314–321.
24. Pronovost PJ, et al. Toward learning from patient safety reporting systems. J Crit Care. 2006;21:305–315.
25. Pronovost PJ, et al. Improving the value of patient safety reporting systems. In: Henriksen K, et al., eds. Advances in Patient Safety: New Directions and Alternative Approaches, vol. 1: Assessment. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Accessed Nov 8, 2016. https://www.ncbi.nlm.nih.gov/books/NBK43621/.
26. Society of Interventional Radiology Standards of Practice Committee. Guidelines for establishing a quality assurance program in vascular and interventional radiology. J Vasc Interv Radiol. 2003;14(9 pt 2):S203–S207.
27. Glaiberman C. How to create a quality assurance program for radiation safety in interventional radiology. Tech Vasc Interv Radiol. 2010;13:194–199.
28. Hilsden RJ, et al. Development and implementation of a comprehensive quality assurance program at a community endoscopy facility. Can J Gastroenterol. 2011;25:547–554.
29. Naylor G, et al. Setting up a quality assurance program in endoscopy. Endoscopy. 2003;35:701–707.
30. Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000 Mar 18;320:726–727.
31. Hu YY, et al. Physicians' needs in coping with emotional stressors: the case for peer support. Arch Surg. 2012;147:212–217.
32. Orlander JD, Barber TW, Fincke BG. The morbidity and mortality conference: the delicate nature of learning from error. Acad Med. 2002;77:1001–1006.
33. Risucci DA, et al. Assessing educational validity of the morbidity and mortality conference: a pilot study. Curr Surg. 2003;60:204–209.
34. Brennan TA. Physicians' professional responsibility to improve the quality of care. Acad Med. 2002;77:973–980.
35. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
36. Institute of Medicine. Preventing Medication Errors. Washington, DC: National Academies Press; 2007.
37. Academic Medical Center Working Group of the Institute for Healthcare Improvement. The imperative for quality: a call for action to medical schools and teaching hospitals. Acad Med. 2003;78:1085–1089.
38. Bell SK, Moorman DW, Delbanco T. Improving the patient, family, and clinician experience after harmful events: the "when things go wrong" curriculum. Acad Med. 2010;85:1010–1017.
39. Gallagher TH, et al. Choosing your words carefully: how physicians would disclose harmful medical errors to patients. Arch Intern Med. 2006 Aug 14–28;166:1585–1593.
40. Gallagher TH, Levinson W. Disclosing harmful medical errors to patients: a time for professional action. Arch Intern Med. 2005 Sep 12;165:1819–1824.
41. Gallagher TH, et al. Patients' and physicians' attitudes regarding the disclosure of medical errors. JAMA. 2003 Feb 26;289:1001–1007.
42. Gallagher TH, et al. US and Canadian physicians' attitudes and experiences regarding disclosing errors to patients. Arch Intern Med. 2006 Aug 14–28;166:1605–1611.
43. Gawande AA, et al. Analysis of errors reported by surgeons at three teaching hospitals. Surgery. 2003;133:614–621.
44. Leape LL, Berwick DM. Safe health care: are we up to it? BMJ. 2000 Mar 18;320:725–726.
45. Leape LL, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991 Feb 7;324:377–384.
46. Martinez W, Lo B. Medical students' experiences with medical errors: an analysis of medical student essays. Med Educ. 2008;42:733–741.
47. Wu AW, et al. To tell the truth: ethical and practical issues in disclosing medical mistakes to patients. J Gen Intern Med. 1997;12:770–775.
48. Wu AW, et al. Do house officers learn from their mistakes? JAMA. 1991 Apr 24;265:2089–2094.
49. Lo B. Resolving Ethical Dilemmas: A Guide for Clinicians. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2005.
50. Hickson GB, et al. Balancing systems and individual accountability in a safety culture. In: The Joint Commission: From Front Office to Front Line: Essential Issues for Health Care Leaders, 2nd ed. Oak Brook, IL: Joint Commission Resources, 2012, 1–35.
51. Harris PA, et al. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
52. American College of Obstetricians and Gynecologists. ACOG committee opinion #328: patient safety in the surgical environment. Obstet Gynecol. 2006;107:429–433.
53. Martinez W, et al. Role-modeling and medical error disclosure: a national survey of trainees. Acad Med. 2014;89:482–489.
54. Bechtold M, et al. Educational quality improvement report: outcomes from a revised morbidity and mortality format that emphasised patient safety. Qual Saf Health Care. 2007;16:422–427.
55. Frankel AS, Leonard MW, Denham CR. Fair and just culture, team behavior, and leadership engagement: the tools to achieve high reliability. Health Serv Res. 2006;41:1690–1709.
56. Pronovost PJ, et al. A practical tool to learn from defects in patient care. Jt Comm J Qual Patient Saf. 2006;32:102–108.
57. Croskerry P. From mindless to mindful practice—Cognitive bias and clinical decision making. N Engl J Med. 2013 Jun 27;368:2445–2448.
58. Gallagher TH, Studdert D, Levinson W. Disclosing harmful medical errors to patients. N Engl J Med. 2007 Jun 28;356:2713–2719.
59. Shapiro J, Whittemore A, Tsen LC. Instituting a culture of professionalism: the establishment of a center for professionalism and peer support. Jt Comm J Qual Patient Saf. 2014;40:168–177.