The Joint Commission Journal on Quality and Patient Safety

Developing a Tool for Assessing Competency in Root Cause Analysis

Priyanka Gupta; Prathibha Varkey, M.B.B.S., M.P.H., M.H.P.E.
Between 1995 and September 2008, The Joint Commission recorded 5,437 sentinel events.1 Root cause analysis (RCA) has been recommended as a process that enables an organization to examine its beliefs and practices and identify the root cause(s) contributing to a sentinel event or near miss.2 RCA can help prevent recurrence of unexpected events by allowing the study of system vulnerabilities, increasing voluntary reporting of events, causing a shift away from blaming individuals, and improving patient safety by improving systems design.3 Since 2005, reporting significant sentinel events and their associated RCAs has been required by the Joint Commission4 and is also required by several state regulatory agencies.5 RCAs are likewise endorsed by the National Quality Forum's Safe Practice Guidelines.6(p. 61) As a result, RCA training programs are being developed for all levels of medical training. In its requirement that residency programs include systems-based practice, the Accreditation Council for Graduate Medical Education (ACGME) stipulates that residents "demonstrate awareness of and responsiveness to the larger context and system of health care" and "participate in identifying system errors and implementing potential systems solutions."7 Previous studies suggest that participants who have completed RCA training demonstrate improved problem-solving ability and a willingness and ability to respond effectively to errors.3,8 As organizations introduce RCA training, assessment of competency has become necessary. Yet there is no published literature on valid or reliable methods for assessing competency in RCA. The only related literature describes an Objective Structured Clinical Examination (OSCE) station for medical students that tested the ability to communicate a prescription error to a standardized patient (SP).9 At the end of the interaction with the SP, students completed a written note that included the identification of the root causes of the prescription error. Quality improvement (QI) faculty rated the written note, completed a history-taking checklist, and assigned a global competency score. However, the study did not assess the psychometric properties of the tool as it relates to competency in RCA.
Article-at-a-Glance

Background: Root cause analysis (RCA) is a tool for identifying the key cause(s) contributing to a sentinel event or near miss. Although training in RCA is gaining popularity in medical education, there is no published literature on valid or reliable methods for assessing competency in RCA.

Methods: A tool for assessing competency in RCA was pilot tested as part of an eight-station Objective Structured Clinical Examination that was conducted at the completion of a three-week quality improvement (QI) curriculum for the Mayo Clinic Preventive Medicine and Endocrinology fellowship programs. As part of the curriculum, fellows completed a QI project to enhance physician communication of the diagnosis and treatment plan at the end of a patient visit. They had a didactic session on RCA, followed by process mapping of the information flow at the project clinic, after which fellows conducted an actual RCA using the Ishikawa fishbone diagram. For the RCA competency assessment, fellows performed an RCA on a scenario describing an adverse medication event and provided possible solutions to prevent such errors in the future.

Results: All faculty strongly agreed or agreed that they were able to accurately assess competency in RCA using the tool. Interrater reliabilities for the global competency rating and the checklist scoring were 0.96 and 0.85, respectively. Internal consistency (Cronbach's alpha) was 0.76. Six of the eight fellows found the difficulty level of the test to be optimal.

Discussion: Assessment methods must accompany education programs to ensure that graduates are competent in QI methodologies and are able to apply them effectively in the workplace. The RCA assessment tool was found to be a valid, reliable, feasible, and acceptable method for assessing competency in RCA. Further research is needed to examine its predictive validity and generalizability.
Figure 1. Root Cause Analysis (RCA) Ishikawa Diagram for Learner Quality Improvement Project. The figure shows an example of an RCA Ishikawa diagram conducted by fellows to identify the reasons why patients may not fully understand their diagnosis and management plan at the end of a visit. IT, information technology; MD, physician.
In developing and studying the psychometric properties of a tool for assessing competency in RCA, we administered the assessment tool as part of an eight-station OSCE testing competency in QI for preventive medicine and endocrinology fellows at Mayo Clinic (Rochester, Minnesota). The objectives of this pilot study, which we conducted in November 2006, were to determine (a) the feasibility of designing and implementing a tool for the assessment of competency in RCA and (b) the validity, reliability, and acceptability of the tool for assessing competency in RCA.
Methods

RCA CURRICULUM

RCA was taught in the setting of a three-week QI elective for two preventive medicine and seven endocrinology fellows. The curriculum, described in detail elsewhere,10 was taught predominantly through experiential training. In this particular elective rotation, fellows completed a QI project to enhance physician communication of the diagnosis and treatment plan at the end of a patient visit. Following a didactic session on RCA and process mapping of the information flow at the project clinic, learners conducted an actual RCA using the Ishikawa fishbone diagram.11 Using this method, the fellows then (1) identified the key and common root causes (Figure 1, above) for why patients may not fully understand their diagnosis and management plan at the end of a visit and (2) addressed the issues by creating a set of viable and cost-effective solutions to improve communication between patients and providers.
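For readers who want to see the mechanics of the exercise, the sketch below (in Python) models a fishbone diagram as a simple mapping from cause categories to contributing causes and prints it as indented text. The categories and causes are hypothetical placeholders in the spirit of Figure 1, not a transcription of the fellows' actual diagram.

```python
# Hedged sketch: an Ishikawa (fishbone) diagram modeled as a plain data
# structure. Categories and causes are illustrative placeholders, not
# the fellows' actual RCA findings shown in Figure 1.

effect = "Patient does not fully understand diagnosis and plan at end of visit"

fishbone = {
    "People": ["Limited time for MD-patient discussion", "Language barrier"],
    "Process": ["No teach-back step before checkout",
                "Plan not summarized in writing"],
    "Technology": ["After-visit summary not printed from the IT system"],
    "Environment": ["Noisy, rushed checkout area"],
}

# Render the diagram as indented text, one "bone" per category.
print(f"Effect: {effect}")
for category, causes in fishbone.items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - {cause}")
```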
RCA ASSESSMENT TOOL

The Mayo Clinic Institutional Review Board considered this study exempt.
An OSCE is an assessment method that may contain written cases or may use an actor to portray a particular role and respond to the learner in a standardized manner.12 The study RCA tool was administered as part of an eight-station QI OSCE conducted at the end of the three-week QI elective. The development and implementation of the eight-station OSCE are described in detail by Varkey et al.13 The other stations related to negotiation, team collaboration, quality measurement, prescription errors, evidence-based medicine, insurance systems, and Nolan's three-question model of QI.14* Fellows were given 15 minutes to complete each station.
The RCA assessment tool was created by a QI expert [P.V.] at the Mayo Clinic and was then reviewed by two other institutional QI experts for content validity. On the day of the OSCE, fellows were asked to read and answer the written scenario for the RCA station; the "doorway" information is as described in Sidebar 1 (below). The fellows were then asked to perform an RCA for the adverse event and to describe possible solutions, based on the RCA, to prevent such errors in the future. Potential solutions included creating a medication reconciliation procedure for use during intake in the emergency department, creating a medication reconciliation procedure for use during transfers within the hospital, creating a protocol to share medical records between the emergency department and outside health care facilities, ensuring availability of translation services, and having patient forms available in more than one language.

* What are we trying to accomplish? How will we know that a change is an improvement? What changes can we make that will result in improvement?14(p. 4)
Sidebar 1. "Doorway" Information*

Root Cause Analysis Scenario
Mr. Rodriguez* is a 42-year-old Hispanic immigrant to Lake City who speaks limited English. He is being treated for atrial fibrillation at the Lake City Medical Center with 240 mg of verapamil divided into three doses daily.
He recently presented to the emergency department (ED) at St. Mary's Hospital with a wrist fracture. An onsite translator was unavailable, and the patient communicated the events that took place in broken English. After a successful surgery, Mr. Rodriguez was transferred to your unit. Within 24 hours the patient developed atrial fibrillation, which precipitated congestive heart failure. A complete history later obtained by the new intern working with the patient revealed that he had previously been taking verapamil. The absence of medication reconciliation in Mr. Rodriguez's case led to his development of congestive heart failure. Medication reconciliation is the process of maintaining the most accurate and complete list of the medications a patient is taking through the continuum of care.
Perform a root cause analysis for this medical error, i.e., the lack of medication reconciliation. Present your findings as well as a possible solution.
(See Figure 1 for an example of a completed root cause analysis for this scenario.)

Possible Solutions
1. Propose the creation of a medication reconciliation procedure for use during intake in the ED to ensure that all medications are properly recorded.
2. Propose the creation of a medication reconciliation procedure for use within the hospital to ensure that at the time of transfer medications are updated and verified both with the hospital and any external care providers.
3. Propose the creation of a protocol for sharing of medical records between the ED and outside health care facilities so that ED physicians can have timely access to medical records.
4. Ensure translation services are available in all departments in some form, whether an on-site translator or a phone service.
5. Have medication and other patient forms available in more than one language.

Assessment
Quality of Solution (1 = Unsatisfactory; 2 = Marginal; 3 = Pass; 4 = Good; 5 = Excellent)
Proposes a viable solution: 1 2 3 4 5
Solution is cost-effective: 1 2 3 4 5
Solution resolves a key factor contributing to the error: 1 2 3 4 5

References
Federico F.: Reconciling doses. AHRQ WebM&M, Nov. 2005. http://webmm.ahrq.gov/case.aspx?caseID=107 (last accessed Nov. 25, 2008).
Institute for Safe Medication Practices: Building a case for medication reconciliation. ISMP Medication Safety Alert: Acute Care, Apr. 21, 2005. http://www.ismp.org/MSAarticles/20050421.htm (last accessed Nov. 25, 2008).
Pronovost P., et al.: Medication reconciliation: A practical tool to reduce the risk of medication errors. J Crit Care 18:201–205, Dec. 2003.
Verapamil Hydrochloride. 1974–2006 Thomson MICROMEDEX.
Vira T., Colquhoun M., Etchells E.: Reconcilable differences: Correcting medication errors at hospital admission and discharge. Qual Saf Health Care 15:122–126, Apr. 2006.

* Fictional name and case.
EVALUATION

Because the RCA assessment tool was used for research purposes alone, results of learner performance were not included in the summative evaluation of the fellows for the rotation. Fellows were assessed for competency in RCA by three faculty members [including P.V.] who were experts in QI and RCA and who each completed the following:
■ An eight-item checklist concerning the quality of the RCA (Table 1, below)
■ An assessment (five-point Likert scale; 1 = unsatisfactory, 2 = marginal, 3 = pass, 4 = good, 5 = excellent) of the cost-effectiveness and feasibility of the solution, as well as the capacity of the solution to resolve the key root causes (Sidebar 1)
■ A global competency rating
Table 1. Checklist and Interrater Reliability for the Root Cause Analysis Station*

Checklist item | Reviewer 1, Mean (S.D.) | Reviewer 2, Mean (S.D.) | Reviewer 3, Mean (S.D.) | Total, Mean (S.D.)
Fishbone diagram | 1 (0) | 1 (0) | 1 (0) | 1 (0)
Identifies lack of reconciliation or omission of verapamil as adverse event | 0.89 (0.33) | 0.89 (0.33) | 0.89 (0.33) | 0.89 (0.32)
Recognizes incomplete information transfer between institutions | 0.56 (0.53) | 0.67 (0.50) | 0.67 (0.50) | 0.63 (0.49)
Recognizes lack of use of appropriate interpretive services (e.g., telephone) | 1 (0) | 1 (0) | 1 (0) | 1 (0)
Recognizes lack of patient communication of medications | 1 (0) | 0.89 (0.33) | 1 (0) | 0.96 (0.19)
Recognizes lack of medication reconciliation protocol | 0.44 (0.53) | 0.56 (0.53) | 0.56 (0.53) | 0.52 (0.51)
Recognizes communication barrier | 1 (0) | 1 (0) | 1 (0) | 1 (0)
Recognizes lack of redundant checks/role of multiple providers in ensuring medication reconciliation | 0.44 (0.53) | 0.44 (0.53) | 0.67 (0.50) | 0.52 (0.51)

* S.D., standard deviation.
The faculty were also provided with a sample Ishikawa diagram with which to evaluate each fellow's RCA. The modified Angoff procedure was used for standard setting of the station15,16 (a sketch of the cut-score arithmetic follows the list below). Critical elements that were necessary in the answers were as follows:
■ Use of a cause-and-effect or fishbone diagram for the RCA
■ Identification of the adverse event in the case
■ Recognition of the communication errors of the health care team, as well as of the lack of adequate interpretive services, transfer procedures, and medication reconciliation process in the health care system.
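As a concrete, hedged illustration of the modified Angoff arithmetic: each judge estimates the probability that a borderline examinee would earn each checklist item, and the cut score is the mean of those estimates across items and judges. The judges and probabilities below are invented for illustration; the article does not report the panel's actual estimates.

```python
# Hedged sketch of a modified Angoff cut-score calculation.
# Each judge estimates the probability that a *borderline* examinee
# would get each of the eight checklist items correct; the cut score
# is the mean across items and judges. All numbers are invented.

judges = {
    "judge_1": [0.9, 0.8, 0.5, 0.9, 0.8, 0.4, 0.9, 0.4],
    "judge_2": [1.0, 0.8, 0.6, 0.9, 0.9, 0.5, 0.9, 0.5],
    "judge_3": [0.9, 0.9, 0.6, 1.0, 0.8, 0.5, 1.0, 0.4],
}

judge_means = {name: sum(p) / len(p) for name, p in judges.items()}
cut_score = sum(judge_means.values()) / len(judge_means)

print(judge_means)                     # expected borderline score per judge
print(f"cut score = {cut_score:.2f}")  # pass mark on the 0-1 checklist scale
```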
STATISTICAL ANALYSIS

The constructs of validity and reliability were calculated as described in detail by Downing et al.17,18 For the property of validity, evidence for content validity, response process, and relationship to other variables was assessed.
Content validity was determined by two local institutional experts, who assessed whether the content of the assessment tool matched the curriculum blueprint, emphasized important learning objectives, and allowed RCA skills to be demonstrated in an appropriate context. Response process was determined by evaluating the scoring methods, reporting mechanisms, and material describing the interpretation of the results. The interstation correlation between the RCA station and the other seven QI OSCE stations was calculated using a Pearson correlation.
The two reliability measures of internal consistency and interrater reliability were studied. Internal consistency of the station was calculated using Cronbach's alpha. The checklist scores were standardized to a range of 0–1 and the global competency rating to 1–5 for reporting purposes. The scores of each reviewer for the nine fellows were aggregated to determine the mean and standard deviation of each reviewer's scores. Interrater reliability was then determined by calculating the intraclass correlation between the three faculty raters. A coefficient of 0.41–0.60 was considered moderate; 0.61–0.80, excellent; and 0.81–1.00, outstanding.19
The acceptability of the RCA tool was determined by qualitative surveys completed by the faculty and fellows. The survey included five-point Likert scales on the time available to complete the exercise (1 = too little time, 5 = too much time), the difficulty of the station (1 = easy, 5 = very difficult), and whether an accurate assessment of RCA competency could be determined by participation in the station (1 = strongly agree, 5 = strongly disagree).
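To make the two reliability computations concrete, the sketch below computes Cronbach's alpha and a one-way intraclass correlation for a small rater-by-fellow score matrix. The scores are invented placeholders, and the article does not specify which ICC model was fit, so this is a sketch of the general technique rather than a reproduction of the study's analysis.

```python
# Hedged sketch of the two reliability computations named above.
# The 3 x 9 matrix (three raters, nine fellows) is invented for
# illustration; it is not the study's data.
from statistics import mean, variance

scores = [  # rows = raters, columns = fellows (standardized 0-1 checklist scores)
    [0.8, 0.9, 0.6, 1.0, 0.7, 0.9, 0.8, 0.6, 0.9],
    [0.7, 0.9, 0.7, 0.9, 0.7, 0.8, 0.8, 0.5, 0.9],
    [0.9, 1.0, 0.7, 1.0, 0.8, 0.9, 0.9, 0.6, 1.0],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of row variances / variance of column totals).
    For the station's internal consistency the rows would be the eight
    checklist items; the rater matrix is reused here only for brevity."""
    k = len(rows)
    totals = [sum(col) for col in zip(*rows)]
    return k / (k - 1) * (1 - sum(variance(r) for r in rows) / variance(totals))

def icc_oneway(rows):
    """One-way ICC(1,1) = (MSB - MSW) / (MSB + (k-1) * MSW)."""
    k, n = len(rows), len(rows[0])
    subjects = list(zip(*rows))                # per-fellow score tuples
    grand = mean(s for row in rows for s in row)
    msb = k * sum((mean(s) - grand) ** 2 for s in subjects) / (n - 1)
    msw = mean(variance(s) for s in subjects)  # mean within-subject variance
    return (msb - msw) / (msb + (k - 1) * msw)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print(f"one-way ICC:      {icc_oneway(scores):.2f}")
```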
Table 2. Interrater Reliability for the Root Cause Analysis Station

Measure | Possible Range | Rater 1, Mean (S.D.) | Rater 2, Mean (S.D.) | Rater 3, Mean (S.D.) | Intra-Class Correlation
Checklist score* | 0–1 | 0.80 (0.16) | 0.76 (0.18) | 0.85 (0.16) | 0.85
Global competency rating | 1–5 | 3.56 (1.01) | 3.67 (1.00) | 3.67 (1.12) | 0.96

* Scores were standardized to a range of 0–1 for reporting purposes. S.D., standard deviation.
Results

Nine fellows (seven women, two men) in various stages of training (five postgraduate year [PGY] 4 and four PGY6) participated in the RCA assessment. Eight of the nine fellows completed the survey. On the basis of the global competency rating, all of the fellows passed the RCA station.
Content validity was determined to be excellent by the institutional QI experts. They validated that the RCA assessment tool was created by content experts, matched the curriculum blueprint, emphasized learning objectives, and allowed the fellows to demonstrate RCA skills in a suitable context. Seven (87.5%) of the eight fellows agreed that the instructions for the station were clear. Response process was deemed accurate by the institutional QI experts in the areas of written test information, evaluation and scoring methods, score calculation, data entry, and reporting methods. The interstation correlation between the RCA station and the other QI OSCE stations ranged from -0.46 to 0.62.
Interrater reliability was 0.96 for the global competency rating and 0.85 for the checklist scoring (Table 1 and Table 2, above). Internal consistency for the station was 0.764.
Seven (87.5%) of the eight fellows stated that they had adequate time to complete the station. Six (75%) of the eight fellows found the difficulty level of the station to be optimal; the remaining two fellows found it difficult. All faculty strongly agreed (one of three) or agreed (two of three) that they were able to accurately assess competency in conducting RCA through the exercise.
Discussion

QI education is integral to medical practice because it equips practitioners with the skills necessary to improve patient safety and health care practice. However, QI education alone is not enough. Assessment methods must accompany education programs to ensure that graduates are competent in QI methodologies and are able to apply them effectively in the workplace. Competency in RCA, a methodology that is mandated by accreditation and regulatory agencies, is crucial.
This study confirms that it is feasible to design a valid, reliable, and acceptable method to test competency in RCA. Content validity and validity for response process were deemed excellent by content experts. The expectedly low correlation between the RCA station and the other OSCE stations demonstrates case specificity; each station was designed to assess a different QI competency.
Reliability for the RCA assessment tool was excellent. The high reliability results for both the checklist and global competency ratings indicate that both are reliable methods for assessing competency in RCA. It is important to use a global competency rating along with the checklist scoring: experts in RCA methodology may not follow all the checklist steps but may still exhibit competency in achieving necessary outcomes.
Acceptability of the RCA assessment tool was high among faculty and fellows. Most fellows felt that the time allocated to the station was adequate and that the difficulty level of the station was optimal. Faculty were of the opinion that it was an accurate and authentic method of assessing competency in RCA.
As described earlier, several programs have instituted curricula for teaching RCA. At the undergraduate level, the University of Minnesota Medical School's student-run organization, CLARION, conducts an annual national Inter-professional Case Competition in which teams of interprofessional students compete in conducting and presenting an RCA for a fictitious sentinel event.20,21 This event helps health professions students develop RCA skills early in their training and apply them to realistic hospital situations. Woodcock et al. describe another approach to teaching the same principles: after instruction in the use of the Maintenance Error Decision Aid (MEDA) and the Five Principles of Causation, undergraduates enrolled in a course in accident theory and analysis in occupational health and safety baccalaureate programs, as well as practicing safety personnel, were able to approach simulated accident investigations and collect more causative factors.22
At the graduate level, RCA has been incorporated into the curricula of several training programs through mortality and morbidity conferences. Residents frequently present cases and discuss the factors that contributed to the sentinel event with their colleagues. Several programs provide QI courses in which residents are taught basic QI concepts and are given practical experience in implementing RCA through participation in QI initiatives and sentinel event committees.8
For practicing professionals, education in RCA is typically provided through just-in-time training; few organizations have created curricula or dedicated training modules for this purpose. HealthInsight, the Medicare Quality Improvement Organization for Nevada and Utah, conducted a workshop to train practicing health professionals on implementing and enhancing RCA activities in participant organizations.3 Follow-up interviews showed that, two months after completing the course, most of the organizations were able to apply the RCA principles they had learned to investigating and analyzing quality problems.
Although the RCA assessment tool we describe in this study demonstrates excellent psychometric properties, the study had several limitations. The checklist did not include identification of strong actions and outcome measures, which are critical elements of an RCA. Because this particular RCA OSCE station was geared toward medication reconciliation, the checklist may need to be modified slightly if other sentinel event scenarios are used. RCA in itself is not without flaws: RCAs are often performed incorrectly or incompletely and do not produce reliable results.23,24 In this article, we worked from the premise that training in cause-and-effect analysis through a tool such as RCA is essential for competence in QI and safety. The tool we describe assessed fellow competency immediately after the completion of a QI elective without providing a mechanism for long-term follow-up of RCA performance in the setting of QI projects or sentinel events; it is therefore difficult to analyze the tool's predictive validity. In addition, the sample size was small, so we are unsure whether the results of the study can be generalized to other programs. In the future, we plan to implement the tool in a larger number of training programs at our organization.
The scenario described in this study is one of many that could be constructed to test RCA competency. If RCA is the only competency under study, a variety of scenarios testing competency in RCA may enhance the strength and power of the competency assessment. Medication reconciliation was chosen for the assessment scenario because of its significant contribution to medication errors and its relevance to physicians in training, regardless of specialty.25,26 Moreover, since 2006 it has been a Joint Commission National Patient Safety Goal (NPSG.08.04.01),27 confirming the suitability of the issue for a patient safety scenario. In future iterations of the curriculum, we plan to include Marx's Principles of Causation28 to set a higher standard for solving systems problems that potentially lead to patient harm.
Conclusion

This study describes an assessment method for RCA competency in the setting of graduate medical education and lays the foundation to advance knowledge in the assessment of competency in conducting RCA. Studies with larger sample sizes are needed to determine the generalizability of the assessment tool in various educational settings, and longitudinal studies would be helpful toward determining its predictive validity.

The authors thank Kevin Bennet, M.B.A., and Neena Natt, M.D., for participating in the assessment of the fellows and Timothy Lesnick for his help with the statistics.
Priyanka Gupta is a Medical Student, Mayo Medical School, Rochester, Minnesota. Prathibha Varkey, M.B.B.S., M.P.H., M.H.P.E., is Associate Professor of Medicine, Preventive Medicine and Medical Education, and Director of QI, Mayo School of Graduate Medical Education, Rochester, Minnesota. Please address correspondence to Prathibha Varkey, Varkey.prathibha@mayo.edu.
References
1. The Joint Commission: Sentinel Events Statistics. Sep. 30, 2008. http://www.jointcommission.org/SentinelEvents/Statistics/ (last accessed Nov. 25, 2008).
2. Dew J.R.: Using Root Cause Analysis to Make the Patient Care System Safe. http://bama.ua.edu/~st497/UsingRootCauseAnalysis%20.htm (last accessed Nov. 25, 2008).
3. Sweitzer S.C., Silver M.P.: Learning from unexpected events: A root cause analysis training program. J Healthc Qual 27:11–19, Sep.–Oct. 2005.
4. The Joint Commission: Sentinel Event Policy and Procedures. Jul. 2007. http://www.jointcommission.org/SentinelEvents/PolicyandProcedures/se_pp.htm (last accessed Nov. 25, 2008).
5. Rosenthal J., Riley T., Booth M.: State Reporting of Medical Errors and Adverse Events: Results of a 50-State Survey. Portland, ME: National Academy for State Health Policy, Apr. 2000.
6. National Quality Forum (NQF): Safe Practices for Better Healthcare: A Consensus Report. Washington, DC: NQF, 2007. http://www.qualityforum.org/pdf/projects/safe-practices/SafePractices2006UpdateFINAL.pdf (last accessed Dec. 1, 2008).
7. Accreditation Council for Graduate Medical Education (ACGME): ACGME Outcome Project, Competencies. 2007. http://www.acgme.org/outcome/comp/compHome.asp (last accessed Nov. 25, 2008).
8. Weingart S.N., et al.: Creating a quality improvement elective for medical house officers. J Gen Intern Med 19:861–867, Aug. 2004.
9. Varkey P., Natt N.: The Objective Structured Clinical Examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf 33:48–53, Jan. 2007.
10. Varkey P., et al.: An experiential interdisciplinary quality improvement education initiative. Am J Med Qual 21:317–322, Sep.–Oct. 2006.
11. Ishikawa K.: Developing a Specifically Japanese Quality Strategy. http://www.asq.org/about-asq/who-we-are/bio_ishikawa.html (last accessed Nov. 25, 2008).
12. Newble D.: Techniques for measuring clinical competence: Objective structured clinical examinations. Med Educ 38:199–203, Feb. 2004.
13. Varkey P., et al.: Validity evidence for an Objective Structured Clinical Examination to assess competency in systems-based practice and practice-based learning and improvement: A preliminary investigation. Acad Med 83:775–780, Aug. 2008.
14. Langley G.J., et al.: The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco: Jossey-Bass, 1996.
15. Kaufman D.M., et al.: A comparison of standard-setting procedures for an OSCE in undergraduate medical education. Acad Med 75:267–271, Mar. 2000.
16. Humphrey-Murto S., MacFadyen J.C.: Standard setting: A comparison of case-author and modified borderline-group methods in a small-scale OSCE. Acad Med 77:729–732, Jul. 2002.
17. Downing S.M.: Validity: On meaningful interpretation of assessment data. Med Educ 37:830–837, Sep. 2003.
18. Downing S.M., Haladyna T.M.: Validity threats: Overcoming interference with proposed interpretations of assessment data. Med Educ 38:327–333, Mar. 2004.
19. Landis J.R., Koch G.G.: The measurement of observer agreement for categorical data. Biometrics 33:159–174, Mar. 1977.
20. Johnson A.W., et al.: CLARION: A novel interprofessional approach to health care education. Acad Med 81:252–256, Mar. 2006.
21. Swanton C., Varkey P.: An innovative method for experiential interprofessional education: The CLARION case study competition. J Nurs Educ 47:48, Jan. 2008.
22. Woodcock K., et al.: Using simulated investigations for accident investigation studies. Appl Ergon 36:1–12, Jan. 2005.
23. Wu A.W., Lipshutz A.K.M., Pronovost P.J.: Effectiveness and efficiency of root cause analysis in medicine. JAMA 299:685–687, Feb. 13, 2008.
24. Percarpio K.B., Watts B.V., Weeks W.B.: The effectiveness of root cause analysis: What does the literature tell us? Jt Comm J Qual Patient Saf 34:391–398, Jul. 2008.
25. Lazarou J., Pomeranz B.H., Corey P.N.: Incidence of adverse drug reactions in hospitalized patients: A meta-analysis of prospective studies. JAMA 279:1200–1205, Apr. 15, 1998.
26. Rozich J., Resar R.: Medication safety: One organization's approach to the challenge. J Clin Outcomes Manag 8:27–34, Dec. 2001.
27. The Joint Commission: Accreditation Program: Hospital: National Patient Safety Goals. http://www.jointcommission.org/NR/rdonlyres/31666E86-E7F4-423E-9BE8-F05BD1CB0AA8/0/HAP_NPSG.pdf (last accessed Nov. 25, 2008).
28. Marx D., Watson J.: Maintenance Error Causation. Washington, DC: Federal Aviation Administration, Office of Aviation Medicine, 2001.