The Joint Commission Journal on Quality and Patient Safety
January 2016, Volume 42, Number 1

A Tool for the Concise Analysis of Patient Safety Incidents

Julius Cuong Pham, MD, PhD; Carolyn Hoffman, RN, BSN, MN; Ioana Popescu, MBA; O. Mayowa Ijagbemi, MPH; Kathryn A. Carson, ScM

The prevalence of patient safety incidents, sometimes referred to as adverse events, incidents, or patient safety events,1 in health care is unacceptably high, with up to one-third of hospitalized patients experiencing one.2–4 Besides developing and implementing systems to prevent these patient safety incidents, there is a need and desire to analyze and learn from them.

The traditional approach for analyzing incidents in health care is the root cause analysis (RCA), which was adapted from the defense and manufacturing industries approximately 15 years ago.5,6 This approach is well accepted in health care and has been gradually deployed across the industrialized world.7–9 The RCA approach is systematic, examines systems, and employs rigorous methods to ensure that all relevant aspects of an incident are understood. Effective actions are developed to reduce the risk of recurrence. Key resources required include an experienced facilitator; time to interview all applicable staff, physicians, patients/family, and experts; time for at least one multidisciplinary team meeting; and time and personnel to prepare and distribute the findings, recommendations, and plan for implementation. RCA investigations of incidents have played an important role in significantly reconfiguring clinical practices and how clinicians enact their relationships.10

Yet there are challenges to the RCA process. RCAs are labor intensive, each requiring an estimated 20 to 90 person-hours to complete.11 As such, they are often reserved for the most severe incidents (that is, those that result in death or permanent harm/disability). The Joint Commission has required accredited organizations to perform an RCA for such "sentinel events" since its Sentinel Event Policy was adopted in 1996.1 Yet such events represent a very small proportion of the adverse events that occur in health care. Given the significant resource requirements of an RCA, there is a need for a more concise method that allows for more efficient analysis of a larger number of incidents.12 For example, a long term care facility implemented "mini-RCAs," an abbreviated version of the formal RCA process, when there was not enough time to conduct a full RCA on each fall.10 In 2008 the National Patient Safety Agency (United Kingdom) issued an RCA tool with guidance on three levels: concise, comprehensive, and independent.13


Article-at-a-Glance

Background: Patient safety incidents, sometimes referred to as adverse events, incidents, or patient safety events, are too common an occurrence in health care. Most methods for incident analysis are time and labor intensive. Given the significant resource requirements of a root cause analysis, for example, there is a need for a more targeted and efficient method of analyzing a larger number of incidents. Although several concise incident analysis tools are in existence, there are no published studies regarding their usability or effectiveness.

Methods: Building on previous efforts, a Concise Incident Analysis (CIA) methodology and tool were developed to facilitate analysis of no- or low-harm incidents. Staff from 11 hospitals in five countries (Australia, Canada, Hong Kong, India, and the United States) pilot tested the tool in two phases. The tool was evaluated and refined after each phase on the basis of user perceptions of usability and effectiveness.

Results: From September 2013 through January 2014, 52 patient safety incidents were analyzed. A broad variety of incident types were investigated, the most frequent being patient falls (25%). Incidents came from a variety of hospital work areas, the most frequent being the medical ward (37%). Most incidents investigated resulted in temporary harm or no harm (94%). All or most sites found the tool "understandable" (100%), "easy to use" (89%), and "effective" (89%). Some 95% of participants planned to continue to use all or some parts of the tool after the pilot. Qualitative feedback suggested that the tool allowed analysis of incidents that were not currently being analyzed because of insufficient resources. The tool was described as simple to use, easy to document, and aligned with the flow of the incident analysis.

Conclusion: A concise tool for the investigation of patient safety incidents with low or no harm was well accepted across a select group of hospitals from five countries.


Sidebar 1. Advisory Group

Ross Baker, PhD. Professor and Program Director, MSc Quality Improvement and Patient Safety, Institute of Health Policy, Management and Evaluation. University of Toronto, Toronto.

Noel Eldridge, BE, MS. Senior Advisor/Public Health Specialist. Agency for Healthcare Research and Quality (AHRQ), Rockville, United States of America.

Donna Forsyth, MSCP, CMIOSH. Associate Director, Patient Safety Investigation. National Health Service (NHS), London.

Peter Goldschmidt, MD, DrPH, DMS. President. World Development Group, Inc., Plymouth, United States of America.

Lui Siu Fai, BSc, MB ChB, MRCP, FHKCP, FRCPEd, FHKAM (Medicine), FRCPGlas, FRCP. Clinical Professional Consultant, Division of Health System, Policy and Management, Jockey Club School of Public Health. Chinese University of Hong Kong (retired from Hong Kong Hospital Authority), Sha Tin, Hong Kong.

Jean-Marie Rodrigues, MD. Professor, Department of Public Health and Medical Informatics. University of Saint-Étienne, Saint-Étienne, France.

William Runciman, BSc, MBBCh, FFARACS, FANZCA, FJFICM, FHKCA, FRCA, PhD. President, Australian Patient Safety Foundation, Adelaide; Professor in Patient Safety and Healthcare Human Factors, University of South Australia, Adelaide.

Other abbreviated methodologies include case conferences and morbidity and mortality (M&M) rounds,14 the Learning From Defects Tool,15 the concise methodology of the World Health Organization (WHO) High 5s program,16 and the Canadian Incident Analysis Framework.17

We conducted a study to develop a standard Concise Incident Analysis (CIA) tool and pilot test it in a variety of international settings. The purpose of the tool is to facilitate a more streamlined process for analyzing no- or low-harm incidents that occur in health care, including the development of effective actions for improvement.

Methods

This study was approved by the Institutional Review Board of the Johns Hopkins School of Medicine.

Advisory Group

An advisory group composed of seven members of the WHO Reporting and Learning Systems Community provided input on the tool development and pilot testing. Participants were recruited from the community through e-mail invitations from the project team. The advisory group consisted of international leaders in patient safety and adverse event investigation. These members, who were selected on the basis of their knowledge and practical experience in incident analysis, concise analysis, research, and/or health care leadership, included individuals from the University of Toronto; the Agency for Healthcare Research and Quality; the National Health Service, England; the Chinese University of Hong Kong; the University of Saint-Étienne (France); and the Australian Patient Safety Foundation (Sidebar 1).

Tool Development and Refinement

Between May 2013 and August 2013, the CIA tool and methodology18 were developed on the basis of concepts from the Canadian Incident Analysis Framework,17 the WHO High 5s program,16 and other concise analysis and improvement tools.15 The CIA tool was designed as a method of incident analysis intended to be less time and resource intensive than traditional methods such as the RCA. It was intended to be used by a person with knowledge of the incident analysis process, human factors, and effective solutions development in health care, with input gathered from staff and physicians local to the event. Because of this less intensive approach, the process is intended to be completed within hours or days.

The tool was refined through two rounds of pilot testing. Feedback from pilot sites and the advisory group was collected after each round, and revisions to the CIA tool were made after each round on the basis of this feedback.

Pilot Site Recruitment

Pilot sites were recruited in June and July 2013. Recruitment was limited to English-speaking countries to avoid challenges with translation and communication during the tool development phase. Health organizations were required to have experience conducting RCA or similar comprehensive analysis investigations. All analyses were performed in an acute care hospital setting. Pilot sites were recruited using the WHO Reporting and Learning Community listserv and direct recommendations from the CIA advisory group. Eleven hospitals were chosen on the basis of available resources: three from Australia, three from one health system in Canada, two from Hong Kong, two from India, and one from the United States (Sidebar 2).

Pilot-Testing Procedure

Representatives from pilot sites were oriented to the project during a webinar in August 2013. They received an overview of the CIA tool and were instructed to (1) analyze cases of adverse incidents involving no or low harm that occurred at their hospital, and (2) provide feedback on the CIA tool and process. A pilot-testing manual was provided. Pilot testing and feedback occurred between September 2013 and January 2014.

Pilot testing was divided into two rounds. During Round 1, pilot sites were asked to analyze three cases using the first version of the CIA tool. Each incident was analyzed by one or more facilitators, as decided by each of the pilot sites. Facilitators varied in their roles and backgrounds; examples include patient safety officer, senior clinical safety specialist, clinical nurse consultant, and physician. Sites were instructed to choose cases that were low- or no-harm incidents. In addition, sites were requested to choose one case each involving a medication incident, a fall, and a medical device incident. Because the organization from Canada participated as a health system, one case was analyzed for each of the system's three hospitals. The study team did not have access to patient records or final reports because of patient confidentiality and potential liability issues.

Following the conclusion of Round 1 data collection, preliminary results were compiled and a summary created. The summary was provided to the pilot sites and advisory group, who provided additional detail and clarified findings via an online meeting. Eleven sites participated in and contributed data for Round 1. One site withdrew from the study after Round 1 because of lack of resources, competing priorities, and time constraints associated with participation.

In Round 2, each of the remaining 10 pilot sites, using the newly revised CIA tool,18 analyzed three additional cases. Some pilot sites did not complete analysis of all six cases over the two rounds, for reasons such as the short time frame allowed for the analysis, the lack of available appropriate cases for analysis, and the limited resources to perform analyses under these conditions. In none of the cases was incomplete analysis attributed to the CIA tool.


Sidebar 2. Participating Sites

Alberta Health Services, Edmonton, Canada
Amrita Institute of Medical Sciences, Kochi, India
Caboolture Hospital, Caboolture, Australia
Gold Coast University Hospital, Southport, Australia
Indraprastha Apollo Hospitals, New Delhi, India
Johns Hopkins Hospital, Baltimore, United States
Logan Hospital, Logan City, Australia
Prince of Wales Hospital, Sha Tin, Hong Kong
Queen Mary Hospital, Pok Fu Lam, Hong Kong


Data Collection

Demographic data (country, hospital type, academic status, hospital size, comprehensive incident analysis experience) were collected from 10 of the 11 participating sites. One site (not the one that withdrew after Round 1) did not provide demographic information.

Pilot sites were asked to complete an online evaluation survey after completion of each round of testing. The research team developed the survey, which was not tested before use, with specific questions related to the study; a patient safety incident was defined as "an event or circumstance that could have resulted, or did result, in unnecessary harm to a patient."19(p. 15) After Round 1 of testing, the evaluation survey was revised to provide additional choices for some responses and some additional questions.

Action items were rated on their likely effectiveness by the pilot site facilitators using the Institute for Safe Medication Practices (ISMP) adaptation of the human factors hierarchy of effectiveness.20 Each action item was categorized as one of the following (a brief illustrative sketch follows at the end of this subsection):
• High leverage (forcing functions and constraints; automation/computerization)
• Medium leverage (simplification/standardization; reminders, checklists, double checks)
• Low leverage (rules and policies; education and information)

Feedback from participating sites was essential to understanding their experiences in testing the methodology and suggesting improvements to the tool. Qualitative data from pilot sites were drawn from three primary sources:
1. Open-ended survey questions within the evaluation survey
2. Semistructured discussions within a webinar format following each round of analysis. To accommodate different time zones, two webinars were held in Round 1, in which seven sites participated, and one webinar in Round 2, in which three sites participated.

Copyright 2016 The Joint Commission

3. One-on-one interviews with three pilot sites, self-selected on the basis of their availability, conducted after Round 2

Qualitative data also came from the advisory group via two modes. Primarily, the advisors provided their feedback during semistructured discussions held within a webinar format; nine webinars occurred monthly between July 2013 and March 2014. The advisors also provided feedback through e-mails to the study team.
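To make the leverage categories concrete, the following minimal Python sketch tags hypothetical action items by leverage level and tallies their distribution. The example items, names, and code are illustrative assumptions only; they are not part of the ISMP hierarchy, the CIA tool, or the study instrument.

    from collections import Counter
    from enum import Enum

    class Leverage(Enum):
        # Categories from the ISMP adaptation of the hierarchy of effectiveness
        HIGH = "forcing functions and constraints; automation/computerization"
        MEDIUM = "simplification/standardization; reminders, checklists, double checks"
        LOW = "rules and policies; education and information"

    # Hypothetical action items, as a facilitator might tag them during an analysis
    action_items = [
        ("Add a hard stop in the infusion pump for doses above the safe limit", Leverage.HIGH),
        ("Standardize the patient handoff checklist across wards", Leverage.MEDIUM),
        ("Remind nurses to double-check high-alert medications", Leverage.MEDIUM),
        ("Re-educate staff on the fall-prevention policy", Leverage.LOW),
    ]

    # Tally the distribution of leverage levels, as reported for the pilot in Table 3
    counts = Counter(level for _, level in action_items)
    for level in Leverage:
        share = 100 * counts[level] / len(action_items)
        print(f"{level.name.title()} leverage: {counts[level]}/{len(action_items)} ({share:.0f}%)")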

Data Analysis

Categorical measures were summarized using counts and percentages. Time required for CIA completion was summarized in terms of medians and interquartile ranges. The results are presented for the entire sample and stratified by round. Missing data/cases were excluded from the analysis. Analysis was performed using SAS version 9.3 (SAS Institute, Inc., Cary, North Carolina).

All qualitative feedback was tracked and themed. Written notes from the four study team members [J.C.P., C.H., I.P., O.M.I.] were combined and discussed at debriefing meetings after each webinar. Neither audio recordings nor transcriptions were made. Key themes, findings, and decisions regarding revisions to the CIA tool were reached by group consensus. A change log was maintained separately to ensure that each key suggestion was considered and decided on.
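As a rough illustration of the descriptive summaries used here (counts and percentages for categorical measures; median and interquartile range for completion time), this minimal Python sketch reproduces the same computations. The study itself used SAS 9.3; the records and variable names below are hypothetical, and Python's default quantile method can differ slightly from SAS's.

    from collections import Counter
    from statistics import median, quantiles

    # Hypothetical records: (incident type, hours needed to complete the analysis)
    incidents = [
        ("fall", 12), ("medication", 4), ("device", 16),
        ("fall", 8), ("medication", 10), ("fall", 20),
    ]

    # Counts and percentages for a categorical measure
    type_counts = Counter(incident_type for incident_type, _ in incidents)
    for incident_type, n in type_counts.most_common():
        print(f"{incident_type}: {n} ({100 * n / len(incidents):.0f}%)")

    # Median and interquartile range for completion time
    hours = [hours_spent for _, hours_spent in incidents]
    q1, _, q3 = quantiles(hours, n=4)  # quartile cut points
    print(f"median {median(hours)} hours (IQR {q1:g}-{q3:g})")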

Results

Quantitative

Eleven hospitals from five countries participated in pilot testing. Hospitals were mostly public (70%) and academic (90%) (Table 1). Hospitals were evenly distributed with regard to how frequently they conducted comprehensive incident analyses (such as RCAs).

A total of 52 patient safety incidents were analyzed (Table 2). A broad variety of incident types were investigated, the most frequent being patient falls (25%). The incidents came from a variety of hospital work areas, the most frequent being the medical ward (37%). Most incidents investigated resulted in temporary harm "or less" (94%). Patient outcomes were indicated as death in 3 of the incidents investigated: 2 in Round 1 and 1 in Round 2.

Overall, for the majority of the cases, participants found the tool "understandable" (100%), "easy to use" (89%), and "effective" (89%) (Table 3). Some 95% of the participants reported planning to continue to use all or some parts of the tool after the pilot.


Table 1. Characteristics of Participating Hospitals*

Hospital Characteristic                          N (%)
Country
  United States                                  1 (10)
  Canada                                         3 (30)
  Australia                                      3 (30)
  India                                          1 (10)
  Hong Kong                                      2 (20)
Hospital Type
  Public                                         7 (70)
  Private                                        3 (30)
Academic Status
  Academic                                       9 (90)
  Nonacademic                                    1 (10)
Hospital Size
  ≤ 500 beds                                     4 (40)
  501–1,000 beds                                 1 (10)
  > 1,000 beds                                   5 (50)
Comprehensive Incident Analysis Experience†
  2–3 incidents per year                         2 (20)
  4–10 incidents per year                        3 (30)
  > 10 incidents per year                        5 (50)

* One hospital did not provide demographic data.
† Such as root cause analysis.

The analyses took a median of 10 hours (interquartile range, 4–16 hours) to complete. For the vast majority of the cases, the CIA tool helped participants to understand what happened (98%), identify contributing factors (92%), prepare summary statements (78%), develop recommended action items (85%), and implement the action items (80%). To a lesser extent, the tool helped manage the implementation of action items (73%) and evaluate the effect of the action items (65%). The majority of the "lessons learned" (73%) from the incident analyses were shared at least internally within the participating hospital.

Qualitative

In general, the qualitative feedback supported the quantitative findings. A summary of the qualitative feedback from the participating sites, including selected quotes from survey responses, is provided in Appendix 1 (available in online article). A summary of the qualitative feedback from the advisory group, including selected quotes, is provided in Appendix 2 (available in online article).


Table 2. Incident Characteristics Overall and by Round: No. (%)*

Incident Characteristic               Overall (N = 52)   Round 1 (n = 32)   Round 2 (n = 20)
Country
  United States of America            6 (12)             4 (13)             2 (10)
  Canada                              5 (10)             2 (6)              3 (15)
  Australia                           13 (25)            9 (28)             4 (20)
  India                               12 (23)            9 (28)             3 (15)
  Hong Kong                           16 (31)            8 (25)             8 (40)
Incident Type
  Blood                               2 (4)              1 (3)              1 (5)
  Device                              1 (2)              1 (3)              0 (0)
  Medication                          12 (23)            8 (25)             4 (20)
  Perinatal                           3 (6)              3 (9)              0 (0)
  Fall                                13 (25)            6 (19)             7 (35)
  Surgery                             2 (4)              0 (0)              2 (10)
  Venous thromboembolism              1 (2)              0 (0)              1 (5)
  Diagnosis and treatment delay       2 (4)              2 (6)              0 (0)
  Other                               11 (21)            6 (19)             5 (25)
  Unknown                             5 (10)             5 (16)             0 (0)
Incident Location
  Emergency department                7 (13)             3 (9)              4 (20)
  Medical ICU                         1 (2)              1 (3)              0 (0)
  Medical ward                        19 (37)            10 (31)            9 (45)
  Operating room                      4 (8)              3 (9)              1 (5)
  Pediatric ward                      2 (4)              2 (6)              0 (0)
  Radiology                           1 (2)              1 (3)              0 (0)
  Surgical ICU                        1 (2)              1 (3)              0 (0)
  Surgical ward                       4 (8)              3 (9)              1 (5)
  Obstetrics/gynecology               5 (10)             3 (9)              2 (10)
  Pediatric ICU/stepdown              2 (4)              1 (3)              1 (5)
  Other                               4 (8)              2 (6)              2 (10)
  Unknown                             2 (4)              2 (6)              0 (0)
Severity of Harm
  No Harm                             11/50 (22)         5/30 (17)          6 (30)
  Emotional Distress or Inconvenience 9/50 (18)          7/30 (23)          2 (10)
  Additional Treatment                10/50 (20)         5/30 (17)          5 (25)
  Temporary Harm                      17/50 (34)         11/30 (37)         6 (30)
  Permanent Harm                      0/50 (0)           0/30 (0)           0 (0)
  Severe Permanent Harm               0/50 (0)           0/30 (0)           0 (0)
  Death                               3/50 (6)           2/30 (7)           1 (5)

* Eleven sites are represented in Round 1, and 10 in Round 2.


Discussion

In this study, we developed a tool for the concise analysis of incidents in health care and pilot tested it in a select group of 11 hospitals in five countries. Overall, the tool was well received; the vast majority (95%) of the participants stated that they would continue to use all or parts of the tool as part of their patient safety management strategy.

The current RCA approach to incident investigation has several challenges.11 As Wu et al. explain, many RCAs are performed incorrectly, with excess emphasis placed on uncovering the single "root cause." Lessons learned from RCAs are rarely shared within the same organization or between organizations. Corrective actions that result from RCAs are often ineffective and incompletely implemented. For incidents that involve medical devices or pharmaceuticals, strong interventions often require product redesign/reformulation/relabeling and collaboration with manufacturers; individual hospitals are often left with weak interventions such as reminders and reeducation of staff. Finally, there is often little time, thought, or resources devoted to measuring whether or not the risk of a recurrence has been reduced.11 Although the CIA tool cannot overcome most of these challenges, it does have some advantages.

There is an important need for a CIA tool in health care. Most incidents in health care are not investigated in a systematic manner, and/or the focus of the analysis is not on system issues. For severe safety incidents, usually involving death or permanent harm/disability, an extensive or comprehensive incident analysis is often performed, using tools such as the RCA tool developed by the US Department of Veterans Affairs5 or other systems-based methodologies. However, this process is labor and time intensive, so it is performed only for the most egregious of adverse events. Other events might be too common and/or represent incidents with known risk factors and interventions (for example, falls or mislabeled specimens) for which detailed investigation of every incident is not practical. There remains a large middle ground of incidents that health systems can learn from but that are not investigated. A number of participating hospitals in the pilot test indicated that the CIA tool gave them a method to analyze no- or low-harm incidents that were not currently being analyzed because of insufficient resources.

The CIA tool was developed with several objectives in mind. The first objective was to develop a tool that allows an efficient and organized approach to system-focused incident investigation. The intent was that the tool would enable more incidents to be investigated, with the respective analyses then integrated into organizational reporting and learning processes. Patient safety at the local level is then improved through a better understanding of specific contributing factors and targeted improvements.


Table 3. Perceptions of Concise Incident Analysis (CIA) Tool, Overall and by Round*

Perception                                                       Overall (N = 52)   Round 1 (n = 32)   Round 2 (n = 20)
Was able to understand the CIA tool                              –                  –                  19/19 (100)
Ease of using the tool
  Very easy (5 on Likert scale)                                  –                  –                  5/19 (26)
  Easy (4 on Likert scale)                                       –                  –                  12/19 (63)
  Neutral (3 on Likert scale)                                    –                  –                  2/19 (11)
Effectiveness of the tool in the analysis
  Very effective (5 on Likert scale)                             –                  –                  7/19 (37)
  Effective (4 on Likert scale)                                  –                  –                  10/19 (53)
  Neutral (3 on Likert scale)                                    –                  –                  2/19 (11)
Plan to continue to use some or all of the steps of the Concise Incident Analysis methodology after the pilot test
  All of the steps                                               –                  –                  4/19 (21)
  Some of the steps                                              –                  –                  14/19 (74)
  None of the steps                                              –                  –                  1/19 (5)
Time required for CIA completion in hours, median (IQR)          10 (4–16)          9 (4–16)           11 (4.5–21)
Criteria provided for case selection useful                      48/49 (98)         29/30 (97)         19/19 (100)
The CIA tool helped you understand what happened                 48/49 (98)         30/30 (100)        18/19 (95)
Used the guiding questions                                       48/48 (100)        29/29 (100)        19/19 (100)
The CIA tool helped you identify the contributing factors
of the incident                                                  44/48 (92)         27/29 (93)         17/19 (89)
Prepared one or more summary statements as a result of the CIA   35/48 (73)         20/29 (69)         15/19 (79)
The CIA tool assisted in preparing a summary statement           36/46 (78)         20/27 (74)         16/19 (84)
The CIA tool helped develop recommended actions for
this incident                                                    40/47 (85)         25/28 (89)         15/19 (79)
Highest level of effectiveness of the action items (Hierarchy of Effectiveness)
  High leverage                                                  11/46 (24)         9/28 (32)          2/18 (11)
  Medium leverage                                                26/46 (57)         13/28 (46)         13/18 (72)
  Low leverage                                                   9/46 (20)          6/28 (21)          3/18 (17)
Action items were implemented from this incident                 37/46 (80)         25/28 (89)         12/18 (67)
The CIA tool helped manage the implementation of the
action items from this incident                                  32/44 (73)         22/27 (81)         10/17 (59)
The CIA tool helped evaluate the effect of the action items
from this incident                                               28/43 (65)         18/26 (69)         10/17 (59)
There were "lessons learned" from this analysis                  45/46 (98)         27/27 (100)        18/19 (95)
These lessons were shared:
  Internally                                                     33/45 (73)         16/27 (59)         17/18 (94)
  Externally                                                     4/45 (9)           3/27 (11)          1/18 (6)
  Plan to share internally                                       10/45 (22)         9/27 (33)          1/18 (6)
  Plan to share externally                                       5/45 (11)          3/27 (11)          2/18 (11)

* Results reported are number (%) of positive responses, except for time required for CIA completion. Eleven sites are represented in Round 1, and 10 in Round 2. Missing cells (–) reflect items that were not included in Round 1.

Second, the tool was meant to be used for low- and no-harm incidents, including the near misses and "good catches" (a near miss that was averted by the astute/heroic efforts of a health care provider) that often go undocumented and unanalyzed21 (although 3 cases resulting in death were analyzed using the tool). Analyzing near-miss incidents provides a key improvement opportunity; instead of causing fear and shame, these incidents leverage the positive attitudes associated with knowing that the system and its people stopped the harm.

Third, there is potential for this tool to be used at the local/unit level if the applicable training and mentoring in incident analysis and human factors are provided. Such training can be readily obtained through both in-person and online courses.22,23 Local/unit-level teams may appreciate having the option of analyzing incidents, as opposed to relying on hospital or health system resources. Building capacity at the front line to analyze, learn from, and improve on incidents can strengthen the safety culture and empower all caregivers to take an active role.

Fourth, the tool can be used across countries to successfully analyze a subset of incidents and devise corrective actions. As we found, in 80% (37/46) of the cases, the action items were reported to be either medium or high leverage (Table 3).

Although there are many potential benefits to a concise method of incident analysis, there are some potential challenges. Because the CIA would likely be performed by fewer individuals, with less time and fewer resources, there is a risk of more bias in the results. Indeed, it is not unusual for different hazard identification methodologies analyzing the same process or event to arrive at different findings.24,25 This risk should be recognized in using the CIA tool, and future studies should examine this potential effect. Another challenge is that the CIA tool may be used at the hospital local/unit level, potentially by individuals with less training than those who might traditionally perform an RCA, which is typically performed at the hospital or health system level. Requiring CIA facilitators to have prerequisite training in incident investigation, as is recommended in the CIA tool, and the availability of training resources22,23 can potentially mitigate this risk. Although the CIA tool might potentially allow for analysis of more incidents, there will likely still be many incidents that go uninvestigated. The CIA tool is not intended for the analysis of all incidents; organizations that receive reports of large numbers of incidents still need to prioritize which incidents require further investigation.

Although several concise incident analysis tools are in existence, to our knowledge there are no published studies regarding their usability or effectiveness. The CIA tool combines the key features of many of these tools. Moreover, this effort represents a first step in performing validation testing to measure the usability and effectiveness of the CIA tool.

Limitations

There were several potential limitations to this study. First, the tool was pilot tested in a small number of English-speaking sites. The sites were self-nominated and likely represented organizations with an interest in patient safety. It is possible that these favorable results would not translate to different hospitals in different countries. However, this study is an important first step in developing a tool that is intended to be globally effective and efficient. Second, these results represent self-reported perceptions of the tool. It is possible that with more objective testing of the tool, we would have found different results. However, there are no currently accepted objective methods to evaluate the quality of an incident investigation.26 Moreover, this pilot test represents more evaluation than many other tools in health care have received11; many incident investigation tools are developed and deployed with limited or no testing.26 Third, in 27% of the cases, "lessons learned" were not shared, even internally within the organization. Unfortunately, we did not have clarification on why this occurred. In our experience, the vast majority of lessons from incidents are not shared, even internally. Fourth, although participants felt that the tool was efficient and provided an estimate of time required for completion, we were unable to directly quantify how many personnel-hours were required to complete a CIA. Having more accurate information might inform users of the potential time commitment of using the tool. Fifth, we have not compared the effectiveness of the CIA tool with other methods of incident investigation. It is unclear how the CIA methodology compares to other methodologies such as the RCA process, in terms of both resource utilization and effectiveness. We present these pilot test results to increase awareness of this tool and to lay a foundation for a validation study, which is consistent with the expert advisory group's endorsement of the need for further research.

Conclusion

In this study, we developed and pilot tested a concise tool for the investigation of safety incidents across a select group of hospitals from five countries. We found that the CIA tool was well accepted and relatively efficient in terms of time required for completion. The CIA tool can potentially fill a niche for investigation of low- or no-harm incidents at the local/unit level that are currently not being investigated. Further validation testing is required to fully determine this tool's effectiveness.

Julius Cuong Pham, MD, PhD, is Associate Professor, Department of Emergency Medicine and Department of Anesthesia Critical Care Medicine, Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, Baltimore. Carolyn Hoffman, RN, BSN, MN, formerly Senior Program Officer, Quality & Healthcare Improvement, Alberta Health Services, Edmonton, Canada, is Executive Director, Saskatchewan Registered Nurses' Association, Regina, Canada. Ioana Popescu, MBA, is Patient Safety Improvement Lead, Canadian Patient Safety Institute, Edmonton, Canada. O. Mayowa Ijagbemi, MPH, is Research Program Manager, Armstrong Institute for Patient Safety and Quality. Kathryn A. Carson, ScM, is Senior Research Associate, Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, Baltimore. Please address correspondence and requests for reprints to Julius Cuong Pham, [email protected].


Online Only Content
http://www.ingentaconnect.com/content/jcaho/jcjqs
See the online version of this article for
Appendix 1. Themes from Participating Site Qualitative Feedback
Appendix 2. Themes from Advisory Group Qualitative Feedback

References

1. The Joint Commission. 2016 Comprehensive Accreditation Manual for Hospitals (E-dition). Oak Brook, IL: Joint Commission Resources, 2015.
2. Brennan TA, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991 Feb 7;324:370–376.
3. Wilson RM, et al. The Quality in Australian Health Care Study. Med J Aust. 1995 Nov 6;163:458–471.
4. Baker GR, et al. The Canadian Adverse Events Study: The incidence of adverse events among hospital patients in Canada. CMAJ. 2004 May 25;170:1678–1686.
5. Bagian JP, et al. The Veterans Affairs root cause analysis system in action. Jt Comm J Qual Improv. 2002;28:531–545.
6. National Patient Safety Foundation (NPSF). RCA2: Improving Root Cause Analyses and Actions to Prevent Harm. Boston: NPSF, 2015.
7. Iedema RA, et al. Turning the medical gaze in upon itself: Root cause analysis and the investigation of clinical error. Soc Sci Med. 2006;62:1605–1615.
8. US Department of Veterans Affairs, National Center for Patient Safety. Root Cause Analysis Tools: Root Cause Analysis (RCA) Step-by-Step Guide. (Updated: Feb 26, 2015.) Accessed Dec 1, 2015. http://www.patientsafety.va.gov/docs/joe/rca_step_by_step_guide_2_15.pdf.
9. National Patient Safety Agency, National Reporting and Learning Service. Root Cause Analysis (RCA) Toolkit. Jan 1, 2004. Accessed Dec 1, 2015. http://www.nrls.npsa.nhs.uk/resources/?entryid45=59901.
10. Ruddick P, et al. Using root cause analysis to reduce falls in rural health care facilities. In Henriksen K, et al., editors: Advances in Patient Safety: New Directions and Alternative Approaches, vol. 1: Assessment. Rockville, MD: Agency for Healthcare Research and Quality, 2008. Accessed Dec 1, 2015. http://www.ncbi.nlm.nih.gov/books/NBK43629/.
11. Wu AW, Lipshutz AK, Pronovost PJ. Effectiveness and efficiency of root cause analysis in medicine. JAMA. 2008 Feb 13;299:685–687.
12. Canadian Patient Safety Institute. Event Analysis Roundtable. Presentation at Event Analysis Roundtable, Mar 3, 2010, Vancouver, BC.
13. National Patient Safety Agency, National Reporting and Learning Service. Three Levels of RCA Investigation: Guidance. Sep 2008. Accessed Dec 1, 2015. http://www.nrls.npsa.nhs.uk/EasySiteWeb/getresource.axd?AssetID=60179&type=full&servicetype=Attachment.
14. Kaufman J, et al. An interdisciplinary initiative to reduce unplanned extubations in pediatric critical care units. Pediatrics. 2012;129:e1594–e1600.
15. Berenholtz SM, Hartsell TL, Pronovost PJ. Learning from defects to enhance morbidity and mortality conferences. Am J Med Qual. 2009;24:192–195.
16. Leotsakos A, et al. Standardization in patient safety: The WHO High 5s project. Int J Qual Health Care. 2014;26:109–116.
17. Incident Analysis Collaborating Parties. Canadian Incident Analysis Framework. Edmonton, AB, Canada: Canadian Patient Safety Institute, 2012.
18. Canadian Patient Safety Institute. Concise Incident Analysis Tool: A Resource for Health Care Organization. Pham JC, et al. Jun 1, 2014. Accessed Dec 1, 2015. http://www.patientsafetyinstitute.ca/en/toolsResources/Research/commissionedResearch/IncidentAnalysisMethodPilotStudy/Documents/Concise%20Incident%20Analysis%20Tool.pdf.
19. World Health Organization (WHO). Conceptual Framework for the International Classification for Patient Safety, version 1.1. Geneva: WHO, 2009.
20. Institute for Safe Medication Practices. Medication error prevention "toolbox." ISMP Medication Safety Alert! Acute Care Edition. Jun 2, 1999.
21. Herzer KR, et al. Patient safety reporting systems: Sustained quality improvement using a multidisciplinary team and "good catch" awards. Jt Comm J Qual Patient Saf. 2012;38:339–347.
22. Canadian Patient Safety Institute. Incident Analysis Learning Program (2012). 2012. Accessed Dec 1, 2015. http://www.patientsafetyinstitute.ca/en/toolsResources/Presentations/Pages/Incident-Analysis-Learning-Program-(2012).aspx.
23. Johns Hopkins School of Public Health (JHSPH). Patient Safety and Medical Errors. JHSPH Open Courseware. Wu A, Morlock L, Pronovost P. 2010. Accessed Dec 1, 2015. http://ocw.jhsph.edu/index.cfm/go/viewCourse/course/PatientSafety/coursePage/lectureNotes/.
24. Potts HW, et al. Assessing the validity of prospective hazard analysis methods: A comparison of two techniques. BMC Health Serv Res. 2014 Jan 27;14:41.
25. Kessels-Habraken M, et al. Integration of prospective and retrospective methods for risk analysis in hospitals. Int J Qual Health Care. 2009;21:427–432.
26. Hettinger AZ, et al. An evidence-based toolkit for the development of effective and sustainable root cause analysis system safety solutions. J Healthc Risk Manag. 2013;33(2):11–20.


Appendix 1. Themes from Participating Site Qualitative Feedback*

Theme: Overall impression
Summary: The CIA tool was well received by all participating sites.
Quotes:
"The CIA tool is a great way to delve into problems where there is a minimal understanding."
"We were able to understand the whole picture of the incident that happened and were able to take the corrective and preventive actions."
"No suggestions found [for improvement]. It [is] very simple to use and easy to document and follows the flow of the incident that was analyzed."
"I understand the tool better each time I use it."

Theme: Criteria for case selection
Summary: Almost all participants found the criteria useful, and one site indicated that low- or no-harm incidents would not have received analysis without the tool. One site described the tool as very helpful, even when investigating a case with death as the outcome: short-term actions (within approximately one week of the incident) were identified, and the site was then ready to transition to a full comprehensive review at a later time. One site found the tool helpful for identifying more significant issues that are more appropriate for comprehensive review.
Quotes:
"The use of [the case selection criteria] was very useful. Often the case types that are suitable for this level of analysis are not well reviewed using our existing tools."
"This incident, though seemingly innocent, led to a revelation of deeper, much more significant system issues. This will likely go to a comprehensive review."

Theme: Duration of analysis
Summary: The time for analysis varied significantly across respondents (organizations and facilitators). Factors that influenced the duration:
• Team (longer) vs. individual approach
• Investigator experience/knowledge of human factors analysis technique (shorter)
• Patient complaint process (longer)
• Construction of the constellation diagram (longer)
• For some respondents, "time for analysis" included administrative tasks (for example, scheduling meetings)
Quote: "Actual meeting was 90 minutes long; all other time attributed to information collection, clinical review and report writing." (Total time for analysis reported as 16 hours.)

Theme: Patient and/or family participation/perspective
Summary: One site described its positive perceptions of patient and/or family participation as experienced during two different incident analyses.
Quote: "Listening to the patient's story was invaluable, as this assisted in identifying some of the contributing factors. This also provides a great opportunity to explain the analysis process and how the incident is investigated and managed. I explained to all those that I spoke with that I was utilizing a new analysis tool and that was the reason for this different approach."

Theme: Guiding questions/contributing factors
Summary: Almost all participants used the guiding questions and found them useful in determining the contributing factors. One site emphasized they were a good starting point for the analysis.
Quotes:
"The guiding questions provide a systematic way in determining the contributing factors, though it cannot cover all the possible contributing factors."
"The guiding questions brought up additional insights that the analysis team didn't consider before."
"I think the tool thoroughly covers all aspects of contributing factors."

Theme: Constellation diagram
Summary: Participant feedback varied from finding the constellation diagram development valuable to confusing or chaotic.
Quotes:
"Cluster diagram can get messy, but we followed it by using the tree to consolidate factors, and that assisted in tidying up and then transferring to the tool tables. This may improve with experience in use."
"I find the constellation diagram easy to do but a little chaotic to interpret into a report."
"I did not like the cluster diagram all that much. I feel it can get a little confusing to the team, although it does effectively show how the contributing factors overlap with each factor."

Theme: Summary statements
Summary: Many participants did not develop summary statements. Reasons varied from not having system issues identified to struggling to identify the contributing factors and prepare the summary statements. Feedback also referenced the value of other tools for this purpose.
Quote: "We can also use root cause analysis or tree diagram to help prepare the summary statement about the incident."

Theme: Developing, managing, and evaluating actions for improvement
Summary: Participants generally found the tool, including the Hierarchy of Effectiveness table, helpful in developing recommended actions. In three of the sites, implementation of the actions had not yet occurred, or the facilitator did not manage or evaluate the actions in their organization, so they were not able to comment about this aspect in detail. Some participants suggested the need for greater direction and examples regarding evaluation; however, most indicated that there was sufficient information in the tool.
Quotes:
"It did not help manage or evaluate these areas because the changes have not been implemented yet, due to needed collaboration and process."
"The only way evaluation was helpful was based on the hierarchy table that made us aware that our actions were low impact."

Theme: Sharing learning internally and/or externally
Summary: Most of the participating sites shared learning internally; however, only a few plan to share externally. Learning meant sharing of analysis information (contributing factors, findings, recommended actions).
Quotes:
"Conducted meeting with the members involved with this incident."
"There will be further discussion to decide if sharing will be external."
"I also understand the share learning's concept, but I find it difficult in practice. Finding who really would benefit from this information and even if they would benefit from this, is there a sense of urgency sufficient to make the appropriate changes there based on a low-harm event at another location? I do like the [idea of] rolling the analysis information into the organizationwide reporting and learning system, but this is a very passive approach to spreading the information."

Theme: General
Summary: Contextual aspects at many sites influenced the implementation of the tool, including organizational culture and structures for analysis, the facilitator's experience, and other current priorities. Participants emphasized the importance of having previous training and experience in incident analysis before using the tool. One exception was a site where the tool was used as a training support under the direction of an experienced incident analysis facilitator. An electronic version of the tool was requested.
Quotes:
"It is difficult to get buy-in from operational leaders from an event that caused moderate to no harm to the patient, especially in the implementation of recommendations. [In] the first trial round, I found it difficult to traverse the bureaucracy of senior leadership to get started with the investigation. This round I bypassed this step and went directly to the unit managers."
"Local safety culture not developed to the level that allowed free critique of system failure."

* Eleven sites are represented in Round 1, and 10 in Round 2.

Appendix 2. Themes from Advisory Group Qualitative Feedback

Theme: Overall impression
Summary: Advisors generally agreed that the CIA tool was well received by the participating sites.
Quote: "Results seem positive, especially the comments from the sites regarding contributing factors, summary statements, recommended actions, implementation of actions, and the parts that are useful. So people using it are generally happy."

Theme: Criteria for case selection
Summary: Advisors were supportive of the criteria.
Quotes:
"I think the criteria you are using already work. If during the course of the CIA the criteria are no longer met, then they should consider converting to a comprehensive."
"The need to convert the CIA to a comprehensive analysis should be relatively rare."

Theme: Duration of analysis
Summary: Advisors noted the variation in duration of analysis and the longer than anticipated time frames.
Quotes (after Round 1 feedback from the sites):
"What surprised me the most was 70% of the incidents took seven hours or more to analyze. This seems a little long to me."
"Seven hours is not too long if the outcome is an effective resolution of a thorny issue; or if someone is less skilled and still learning the ropes but doing it well."

Theme: Patient and/or family participation/perspective
Summary: Advisors generally agreed with the importance of the patient and/or family participation/perspective.

Theme: Guiding questions/contributing factors
Summary: Advisors noted that the guiding questions provided a helpful starting point for the analysis.
Quote: "From experience, need to be careful not to make guiding questions too specific or prescriptive. Otherwise it quickly turns into a QA/audit type tool instead of investigation."

Theme: Developing, managing, and evaluating actions for improvement
Summary: Advisors generally agreed on the following:
• It is important that one person alone does not decide which actions should be implemented for the system.
• The evaluation of the effectiveness of the actions for improvement should be out of scope for the CIA tool.
Quotes:
"Starting the recommendation development discussion with What forcing function could we apply to this situation? is great."
"I would not want to encourage anyone to implement an action and not ever evaluate it . . . it may be at best wasteful or at worst ineffective and mislead people to think the harm has been reduced or eliminated."

Theme: Sharing learning internally and/or externally
Summary: Advisors emphasized the importance of ensuring that the use of the CIA tool, and the applicable findings, fit into the context of each organization's reporting and learning system. This includes the applicable local process or prioritization of patient safety resources.
Quotes:
"It's important to ensure that the CIA findings are summarized and shared corporately to enable system understanding and decisions."
"I think it would be helpful for users to be realistic in their expectations of the findings and recommended actions developed from a CIA. From what I understand, it is intended to generate relatively quick local solutions and in aggregate would feed into system solutions."

Theme: General
Summary: One advisor was supportive of a site suggestion to add information on how to assess cost in the CIA tool. Advisors also indicated the need for systematic and rigorous research regarding the effectiveness of the now developed and pilot-tested CIA tool.
Quote: ". . . effort and cost must be weighed against benefit."