The Joint Commission Journal on Quality and Patient Safety Diagnostic Error
The Next Organizational Challenge: Finding and Addressing Diagnostic Error
Mark L. Graber, MD, FACP; Robert Trowbridge, MD; Jennifer S. Myers, MD; Craig A. Umscheid, MD, MS, FACP; William Strull, MD; Michael H. Kanter, MD
Now more than a decade old, the patient safety movement is emerging from its early beginnings and appears poised to significantly improve patient care. Four landmark reports from the Institute of Medicine outlined a pathway to safer care that centered on finding and addressing the inherent flaws in our health care systems.1–4 With clear pressure to show progress, health care organizations (HCOs) have tackled a wide range of patient safety concerns in this domain, and in some areas these efforts are starting to improve patient outcomes.5 Although HCOs are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues; few are focused on diagnostic error.6,7 Of the current metrics used to evaluate health care quality, none target diagnostic error.8 Achieving the highest possible levels of quality will require a more balanced approach in which the reliability of diagnosis receives equal attention. To quote from the vision statement of the recently formed Society to Improve Diagnosis in Medicine, diagnosis “needs to be accurate, timely, efficient, and above all, safe.”9

In this article, we review why HCO leaders need to be concerned about the problem of diagnostic error. We argue that a major reason that diagnostic error has remained in the shadows relates to the difficulty in finding these errors. We review the available literature relating to this problem, concluding that the existing tools that HCOs use to capture adverse medical events are insensitive for detecting diagnostic errors. Several novel approaches that offer strategies to detect diagnostic error have recently been considered and trialed, of which two are described.
Why Health Care Organizations Need to Get Involved
(For editorial, see pages 99–101.)

The case for HCOs’ paying attention to diagnostic errors is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error.
Article-at-a-Glance

Background: Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error.

Why Health Care Organizations Need to Get Involved: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error.

Finding Diagnostic Errors: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches—Maine Medical Center’s case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente’s electronic health record–based reports that detect process breakdowns in the follow-up of abnormal findings—are described in case studies.

Conclusion: By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.
1. Diagnostic Errors Are Common and Harmful. Diagnostic errors represent a critical and unsolved problem with substantial clinical and financial impact. One estimate places the annual death toll from these errors in the United States at 40,000–80,000,10 which equates to roughly 10 deaths per year in each of the approximately 5,000 hospitals in the United States. An estimated 40,500 preventable deaths from diagnostic error arise annually in the ICU alone.11 One patient in six can describe a diagnostic error that has personally affected him or her, or a member of his or her family or immediate circle of friends,12 and nearly half of pediatricians report encountering one or more diagnostic errors every month.13 At least one in every thousand primary care encounters will cause preventable harm from diagnostic error14,15 (Figure 1). In 2011 the American Medical Association (AMA) called attention to this dilemma in ambulatory care, including the need for research to identify more appropriate ways to find and count these errors.16 Diagnostic errors take on special significance in that setting because they are among the most frequent, harmful, and costly types of errors encountered there.

2. High-Quality Health Care Requires High-Quality Diagnosis. Reliability of the diagnostic process is now recognized as a prerequisite for high-quality care.8 During the next few years, this issue will receive increasing attention from patient safety organizations, oversight agencies, and patients themselves. In 2005 The Joint Commission initiated this process through its National Patient Safety Goal regarding the reliable communication of critical test results.17 This will likely be just the first of additional steps devoted to improving the reliability of the diagnostic process, aptly referred to as the “next frontier” in patient safety.18,19

3. Diagnostic Errors Are Costly. Besides detracting from quality care and patient satisfaction, diagnostic errors also generate substantial costs for HCOs. The most visible components are the direct costs of defending and resolving malpractice claims, and the largest fraction of paid claims relates to missed or delayed diagnosis.20 Less obvious but possibly of greater magnitude are costs related to diagnostic inefficiency, including over- and under-testing and the ordering of tests that are inappropriate or of low value. The Choosing Wisely® campaign recently launched by Consumer Reports and the ABIM Foundation represents a first step toward reducing these costs.21 In addition, diagnostic errors contribute to hospital readmissions, costs that could potentially be avoided if the correct diagnosis were made during the initial hospital stay. Finally, unwarranted treatments given because of wrong diagnoses are another source of unnecessary costs associated with diagnostic error.
Figure 1. The Toll of Diagnostic Error. The estimated incidence of death related to diagnostic error and the likelihood of harm in ambulatory care encounters are shown. ER, emergency room.
4. HCOs Are Well Positioned to Lead the Way in Reducing Diagnostic Error. It would be appropriate for physicians to address the problem of diagnostic error as their own professional responsibility. Unfortunately, the problem is too complex and the diagnostic process too intertwined with the particular health care system for this to be a realistic expectation. Making progress will require a commitment from all stakeholders, in this case physicians, patients, medical educators, and health care leaders, working together in a coordinated fashion. Importantly, diagnostic errors cannot be reduced without the HCO leadership recognizing the problem, giving it priority among the many other competing demands for health care resources, and taking active steps to understand diagnostic errors and support interventions to help reduce the risk of error.
Finding Diagnostic Errors: Existing Methods
A range of research approaches has been used to study the incidence of diagnostic error, including autopsies, patient and provider surveys, chart studies of particular diseases, closed claims reviews, second reviews (for example, a second pathologist reviews the impressions of the first pathologist), and sending standardized patients into practice settings. These studies, as diverse as they are, consistently find that the incidence of diagnostic error is unacceptably high, generally in the range of 10%–50%.22 What’s missing are data from actual practice to confirm these estimates from research investigations.
Table 1. The Major Tools Used by Health Care Organizations to Identify Adverse Medical Events*

Occurrence screens
How the tool works: HCOs employ a large set of standard screens (for example, readmissions to the ICU, return to the OR, death) and review these charts for quality of care.
Why the tool is insensitive to diagnostic error: Potentially useful (for example, reason for readmission), but risk management staff typically aren’t trained to identify misdiagnosis.

Incident reports
How the tool works: HCO staff are expected to file reports on patient incidents that lead to harm or might have.
Why the tool is insensitive to diagnostic error: Most are submitted by nurses and focus on treatment.

Voluntary reporting portals
How the tool works: Some HCOs provide anonymous electronic or Web-based systems that allow reporting of safety concerns.
Why the tool is insensitive to diagnostic error: Physicians are reluctant to report errors or don’t take the time; patients don’t have access to such portals or, if they do, aren’t using them often.

Autopsy
How the tool works: Pathologists review the findings at autopsy with the medical record to identify discrepancies.
Why the tool is insensitive to diagnostic error: Rarely performed in health care settings in the United States.

Peer review
How the tool works: The physician staff review the care provided by their peers in selected cases with poor outcomes.
Why the tool is insensitive to diagnostic error: Potentially useful, but very few cases are discussed, and data are typically not aggregated to trend diagnostic errors.

IHI Global Trigger Tool†
How the tool works: A set of measures designed to capture errors of commission (for example, transfusion errors, injuries from falls, medication errors, wrong-site surgery).
Why the tool is insensitive to diagnostic error: Wasn’t designed to measure errors of omission (diagnosis); none of the elements targets diagnostic error.

* HCO, health care organization; OR, operating room.
† Griffin FA, Resar RK. IHI Global Trigger Tool for Measuring Adverse Events, 2nd ed. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement, 2009. Accessed Jan 31, 2014. http://www.ihi.org/resources/Pages/IHIWhitePapers/IHIGlobalTriggerToolWhitePaper.aspx.
To the best of our knowledge, there is not a single HCO in the United States that is systematically measuring the rate of diagnostic error in its clinics, hospitals, or emergency departments. Because there are no local data, there are also no national data. The most fundamental axiom of improving health care quality and safety is that improvement requires data, and the absence of data on diagnostic errors is a critical reason that this problem persists and may be growing. The reason diagnostic errors are not being measured is simple: HCOs don’t know how to identify these errors, the first step in measurement. The major tools that HCOs use to find adverse medical events (Table 1, above) work well in identifying incidents such as medication errors, falls, and the various never events but generally fail to detect diagnostic errors.22 The shortcomings of the current tool set with regard to finding diagnostic errors are that it (1) was never designed to detect diagnostic issues (for example, the Global Trigger Tool23) and (2) relies on voluntary reporting. Physicians regularly identify diagnostic errors; they just don’t take the time to report them, or they are concerned that these reports will cause harm to themselves or colleagues.24 In a 2005–2006 survey, 86% of hospitals noted that physicians reported few or no medical errors.25 Similarly, in a 2004–2006 study that examined five different incident-detecting tools in use at a single institution, physicians contributed just 2.5% of the adverse event reports.26
Random chart reviews overcome the problem of reporting bias, but they are expensive and have a low yield for detecting diagnostic errors—probably less than 1%.14,27 Even in cases in which there is a definite diagnostic error, it may not be evident solely from review of the medical record.28,29 In summary, there is ample evidence that, separately and even in aggregate, the existing tools are ineffective in discovering diagnostic errors. Confirming this impression, Tsang and coworkers recently reviewed the utility of various current detection tools to identify diagnostic errors in ambulatory settings, concluding that none were helpful.30 Similar findings are obtained from reviews of inpatient care. A 2010 study from the Office of the Inspector General used five different tools to study adverse events involving 780 hospitalized Medicare beneficiaries. The study identified a 13.5% incidence of adverse events of various types, but none of these were categorized as related to diagnostic error.31
Finding Diagnostic Error: Case Studies of Newer Approaches
The growing interest in diagnostic errors, coinciding with the realization that the existing tools are inadequate, has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. Two innovative HCOs have responded to this challenge. Maine Medical Center has pioneered case-finding of diagnostic errors by facilitating direct reports from physicians,
and Kaiser Permanente has created sophisticated yet practical electronic health record (EHR)–based reports that detect process breakdowns in the follow-up of abnormal findings.
CASE STUDY 1. MAINE MEDICAL CENTER: FACILITATING PHYSICIAN REPORTING

Background: To become learning organizations, institutions need to seek out, analyze, and understand the causes of diagnostic errors locally. There is no better lesson than errando discimus, learning from your own mistakes.

Methods: In 2010 Maine Medical Center (Portland, Maine), a 605-bed independent academic medical center affiliated with Tufts University School of Medicine, initiated a multimodality program to increase the visibility of diagnostic errors within the institution and develop a robust response to identified errors. The program, endorsed by local physician champions and supported by leadership, consisted of three complementary components: an educational campaign, a physician-based reporting system, and a redesigned method for root cause analysis (RCA) of cases involving diagnostic error.

The six-month educational campaign emphasized the shortcomings of the diagnostic process and the significant impact of diagnostic error on patient outcomes. Didactic sessions were held for all of the major departments, including faculty, residents, and students. Departmental residency-based morbidity and mortality conferences were redesigned to reflect a specific emphasis on diagnostic error. Teaching the avoidance of diagnostic error by improving clinical reasoning skills became a regular topic in faculty development programs. By the end of the initiative, the majority of the medical staff had attended a session on diagnostic error that emphasized its prevalence and importance, as well as common causes and potential solutions.

Despite this educational campaign, Maine Medical Center struggled to identify diagnostic errors and, consequently, created a diagnostic error reporting system32 to facilitate reporting of diagnostic errors by clinicians in real time. This open and anonymous system required input of minimal information (patient medical record number, type of error, a brief description of the error, and degree of patient harm incurred) and was available on all clinical workstations, although the pilot was limited to the adult inpatient medical services. Reporting was encouraged by a hospitalist peer who championed the program. Reported incidents were reviewed by physicians with expertise in diagnostic error to verify that a diagnostic error had occurred.
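The reporting tool described above captures only a handful of data elements. The sketch below shows one way such a minimal report record could be structured; the field names, harm categories, and review flag are illustrative assumptions, not details of Maine Medical Center's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class HarmLevel(Enum):
    """Assumed harm scale; the categories used in the actual pilot may differ."""
    NONE = "none"
    MODERATE = "moderate"
    SERIOUS = "serious"


@dataclass
class DiagnosticErrorReport:
    """Minimal data elements described for the physician-facing reporting tool."""
    medical_record_number: str      # identifies the patient, not the reporter
    error_type: str                 # for example, "delayed diagnosis" or "wrong diagnosis"
    description: str                # brief free-text account of what happened
    harm_level: HarmLevel           # degree of patient harm incurred
    reported_at: datetime = field(default_factory=datetime.now)
    verified: bool = False          # set later by reviewers with diagnostic error expertise


def submit_report(review_queue: list, report: DiagnosticErrorReport) -> None:
    """Add a report to the queue that expert physician reviewers work through."""
    review_queue.append(report)
```

The point of the sketch is the small number of required fields, which matches the minimal-input, real-time design described above.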
A second and simultaneous innovation involved creating a novel process for RCA of the reported cases of diagnostic error as part of the institutional peer review program. In addition to systematically surveying the potential system-based contributions to error, the new RCA process incorporated review of possible cognitive contributions, which also served to increase physician interest and participation in the review process. The RCA fishbone was based on the diagnostic error classification system suggested by Graber et al.33 and the Diagnostic Error Evaluation and Research form designed by Schiff et al.34 and incorporated the common root causes of diagnostic error.35 RCA sessions involved clinicians, including those with expertise in diagnostic error, and department heads.

Results: During a trial and evaluation period from January through June 2011, 80%–90% of reported diagnostic errors were confirmed. The initial six-month pilot of the system revealed 36 diagnostic errors that otherwise would have been unreported. The severity of errors uncovered was high, with 51% incurring moderate harm to the patient and 22% serious harm. The diagnoses associated with diagnostic errors were concordant with those reported in previous series, with the most commonly reported diagnoses being epidural abscess/hematoma, acute coronary syndrome, and stroke.

The redesigned RCA process had both immediate and significant impact. Because the process emphasized both clinician-based and system-based factors, physicians were remarkably engaged in the review process, as previously described in a similar program.36 Whereas previously it was difficult to secure the attendance of one or two physicians at the four-hour RCA sessions, with the redesigned system, six or more physicians would participate. More importantly, the novel fishbone diagram facilitated appreciation of the complexity of these errors. These discussions led directly to the implementation of interventions designed specifically to address the causes of these errors. The interventions included an institutional consultation protocol, several departmental initiatives designed to address affective bias (letting emotions affect diagnosis), and construction of symptom-specific diagnostic pathways for several common presenting complaints.

Discussion and Lessons Learned: The educational campaign, physician-based reporting system, and redesigned RCA method promoted increased awareness of diagnostic error for clinicians, risk management staff, and hospital leadership. The identification and analysis of local errors provided a strong impetus for addressing latent system flaws identified through the new RCA process. Although successfully implemented, the program was resource intensive and required significant effort to develop and maintain, and its long-term sustainability and actual effect on diagnostic reliability are unclear. Limitations include the debatable effect of education, the significant, ongoing effort required
to encourage physician reporting, and the challenges of identifying the underlying causes of errors and developing and implementing meaningful interventions.
Sidebar 1. Key Principles in Creating Efficient Safety Nets in the Kaiser Permanente System*

■ Reliance on administrative data to screen for failure to follow up, as manual review would not be scalable for the large volumes of testing done in a region of more than 3.6 million members. Some safety nets rely on a combination of administrative data and manual review (for example, the PSA Safety Net). Others are purely automated and do not require any manual intervention (for example, sending out notifications to patients if they fail to obtain laboratory tests within 30 days of the order; see the sketch following this sidebar). Other safety nets require some physician action, with the rest automated (for example, electronically pending orders for appropriate laboratory testing for patients receiving chronic medication, which, on physician signature, generates automated letters informing patients to go to the laboratory for follow-up tests).
■ Buy-in from the clinicians involved in the care of the patient to intervene on his or her behalf when appropriate, with interventions that can be done largely by a centralized safety net team.
■ A system to track the effectiveness of each safety net created, allowing assessment of the efficacy of the program and further learning.
■ Method(s) for identifying potential types of diagnostic errors to be addressed, including malpractice claims, published literature on known hazards, suggestions from frontline physicians, certain HEDIS® metrics, various chiefs of service groups, risk management, and opinions of expert physician quality leaders.
■ Rapid implementation of new safety nets when problems are identified, thereby enhancing the credibility of centralized programs among frontline physicians and promoting a safe culture; this encourages physicians to propose new safety nets.

* PSA, prostate-specific antigen; HEDIS, Healthcare Effectiveness Data and Information Set (National Committee for Quality Assurance, Washington, DC).
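The fully automated example in the sidebar, notifying patients whose ordered laboratory tests remain incomplete after 30 days, reduces to a periodic sweep over pending orders. The sketch below illustrates that idea only; the data shapes and the notification step are assumptions, not the actual logic of KP HealthConnect.

```python
from datetime import date, timedelta
from typing import Iterable, List, NamedTuple


class LabOrder(NamedTuple):
    patient_id: str
    test_name: str
    ordered_on: date
    completed: bool


def overdue_lab_sweep(orders: Iterable[LabOrder],
                      today: date,
                      grace_period_days: int = 30) -> List[LabOrder]:
    """Return orders that remain uncompleted past the grace period.

    In a fully automated safety net, each returned order would trigger a
    reminder letter or message to the patient; here we simply collect them.
    """
    cutoff = today - timedelta(days=grace_period_days)
    return [o for o in orders if not o.completed and o.ordered_on <= cutoff]


if __name__ == "__main__":
    demo = [
        LabOrder("A001", "potassium", date(2012, 1, 2), completed=False),
        LabOrder("A002", "creatinine", date(2012, 2, 20), completed=True),
    ]
    for order in overdue_lab_sweep(demo, today=date(2012, 3, 1)):
        print(f"Notify patient {order.patient_id}: {order.test_name} still pending")
```

A production safety net would also record whether each notification eventually led to a completed test, in keeping with the sidebar's principle of tracking each net's effectiveness.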
CASE STUDY 2. KAISER PERMANENTE: LEVERAGING THE ELECTRONIC HEALTH RECORD TO FIND DIAGNOSTIC ERRORS

Background: Efforts to enhance the reliability of the diagnostic process are occurring across the eight regions of the integrated health care delivery system of Kaiser Permanente (KP). These efforts are coordinated by the Diagnostic Reliability Improvement Initiative, a multidisciplinary team of individuals who have access to the organization’s database of adverse events. This allows prioritization of improvement initiatives on the basis of the frequency and severity of the various types of harm reflected in the database. A major initiative addressing diagnostic error consists of the Safety Net programs developed by the Southern California Region of Kaiser Permanente (SCAL KP) since 2009. Each Safety Net program targets a specific diagnostic error. The key principles in creating efficient Safety Nets are listed in Sidebar 1 (above). These Safety Nets leverage the “HealthConnect” EHR implemented across KP to enable centralized monitoring, detection, and outreach regarding a wide variety of patient safety issues. Importantly, each Safety Net program is designed to catch errors before harm reaches the patient so that mitigation is still possible; it does not prevent errors or lapses in care from occurring in the first place, and it is not 100% effective in preventing harm in every patient it catches. Since their implementation in 2009, the SCAL KP outpatient Safety Net programs have identified 4,925,145 safety hazards and facilitated and completed successful interventions (alerts to primary care providers) for 2,167,064 of them. A key concept is that even a very low incidence of error, such as failure to follow up abnormal test results, can affect many patients in a system as large as SCAL KP, which has 3.6 million members. Of the 19 Safety Nets currently implemented, 10 are relevant for safe and timely diagnosis, as shown in Table 2, which details the impact realized from each of these measures.

An illustrative example is the Safety Net addressing colorectal cancer (CRC). Although KP’s rates of CRC screening (81.5% as of November 2013) are among the highest in the United States, members continue to present with advanced-stage CRC. Common causes of presentation at advanced stages include lack of screening, falsely negative screening tests, delay or failure to evaluate early symptoms of CRC,
and lack of follow-up or evaluation of abnormal test results. In 2010, for example, 61 of the 3.6 million patients presented with stage III or IV CRC, and opportunities for earlier intervention were identified for 47 of those patients, as follows:
■ 11 patients experienced rectal bleeding, which was falsely attributed to hemorrhoids for two months to several years before the diagnosis of CRC was established.
■ 5 patients had iron-deficiency anemia identified previously without a gastrointestinal workup.
■ 2 patients had positive immunological tests for fecal occult blood (iFOBT) that were not further evaluated.
■ 17 patients had not undergone CRC screening.
■ 12 patients had false-negative CRC screening (4 by iFOBT, 3 by sigmoidoscopy, and 6 by colonoscopy) before the diagnosis of advanced-stage CRC.

Methods: Two Safety Nets were developed using electronic screens to detect patients with rectal bleeding (International Classification of Diseases, Ninth Revision [ICD-9] codes 569.3x and 455.xx) and/or possible iron-deficiency anemia (based on microcytosis, normal renal function, and hemoglobin < 14 g/dL and red blood cell count [RBC] < 4.7 M/microliter [males] or hemoglobin < 12 g/dL and RBC < 4.2 M/microliter [females]) without documented subsequent follow-up.
Table 2. Safety Nets Operating to Prevent Diagnostic Errors in the Kaiser Permanente Health Care System*

Positive prostate-specific antigen tests. April 2006–December 2009: 8,076 patients fell into the safety net, of whom 3,833 received urology appointments; 2,204 patients underwent prostate biopsy, resulting in diagnosis of prostate cancer in 745 patients.

Positive fecal occult blood testing.† Continuous tracking of every positive result until colonoscopy performed or documented patient refusal or physician decision that follow-up is not indicated.

Abnormal Pap smear follow-up.† Continuous tracking of every positive result until follow-up performed or documented patient refusal or physician decision that follow-up is not indicated.

Overdue labs (labs ordered but not completed within 30 days). June 2010–December 2012: 1.2 million overdue labs identified and orders placed, yielding 244,468 patients with labs repeated.

Post-splenectomy state without proper immunizations. January 2001–April 2012: 267 patients identified through natural language processing; 696 identified through administrative data. 160 patients fully vaccinated in the first 90 days.

Medication monitoring for patients on digoxin (K+ and creatinine). January 2009–December 2011: 2,429 patients identified with missing labs and orders placed, yielding 1,391 patients with labs repeated and 180 with abnormal lab results.

Medication monitoring for patients on anticonvulsants. January 2010–December 2011: 10,210 patients identified with missing labs and orders placed, yielding 4,242 patients with labs repeated and 1,730 with abnormal lab results.

Medication monitoring for patients on ACEs/ARBs and diuretics (Na+, K+, creatinine). January 2010–December 2011: 256,000 patients identified with missing labs and orders placed, yielding 131,494 patients with labs repeated and 20,409 with abnormal lab results.

Abnormal creatinine that is not repeated. January 1997–December 2011: 5,324 patients identified with an abnormal creatinine not repeated and lab orders placed, yielding 2,565 patients with labs repeated (48%) and 1,078 new cases of CKD identified.

Colon cancer (microcytic anemia or rectal bleeding in patients who should have a colonoscopy but did not). November 2010–July 2012: 187 patients with iron deficiency were referred for colonoscopy; 48 patients had the procedure, identifying the diagnosis in 20. April 2010–May 2012: 102 patients with rectal bleeding were referred for colonoscopy; 26 had the procedure, identifying the diagnosis in 20, including 1 case of colorectal cancer.

Hepatitis C positive screening tests without a confirmatory test performed. January 2010–July 2012: 614 patients identified with a positive hepatitis C antibody test and no confirmatory test. Lab orders were placed, which yielded 482 patients with a completed confirmatory test and 416 new cases of hepatitis C identified.

* CKD, chronic kidney disease; ACE, angiotensin-converting enzyme [inhibitor]; ARB, angiotensin receptor blocker.
† Data not tracked.
Screen fallouts were reviewed by a gastroenterologist, who, if appropriate, sent a message to the primary care physician suggesting that a colonoscopy be ordered. After several weeks, a clerk checked the records to see whether there had been any follow-up. If not, the gastroenterologist sent a second note, which now went into the medical record. At either step, if the primary care physician documented that he or she had evaluated the case and that the patient did not need further evaluation, the case was closed.
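The inclusion criteria and review workflow described in this Methods section translate into a simple rule-based electronic screen. The following sketch restates those thresholds in code purely for illustration; the record layout and helper names are assumptions, not the actual KP HealthConnect screen.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PatientLabs:
    sex: str                    # "M" or "F"
    hemoglobin_g_dl: float
    rbc_million_per_ul: float   # red blood cell count, millions per microliter
    microcytosis: bool
    normal_renal_function: bool


def possible_iron_deficiency_anemia(labs: PatientLabs) -> bool:
    """Apply the anemia thresholds described in the text:
    Hgb < 14 g/dL and RBC < 4.7 M/uL for males,
    Hgb < 12 g/dL and RBC < 4.2 M/uL for females,
    plus microcytosis and normal renal function."""
    if not (labs.microcytosis and labs.normal_renal_function):
        return False
    if labs.sex == "M":
        return labs.hemoglobin_g_dl < 14 and labs.rbc_million_per_ul < 4.7
    return labs.hemoglobin_g_dl < 12 and labs.rbc_million_per_ul < 4.2


# ICD-9 code families for rectal bleeding cited in the text (569.3x, 455.xx)
RECTAL_BLEEDING_PREFIXES = ("569.3", "455.")


def is_screen_fallout(diagnosis_codes: List[str],
                      labs: Optional[PatientLabs],
                      has_followup_colonoscopy: bool) -> bool:
    """Flag the patient for gastroenterologist review when either criterion is met
    and no subsequent colonoscopy is documented."""
    if has_followup_colonoscopy:
        return False
    rectal_bleeding = any(code.startswith(RECTAL_BLEEDING_PREFIXES)
                          for code in diagnosis_codes)
    anemia = labs is not None and possible_iron_deficiency_anemia(labs)
    return rectal_bleeding or anemia
```

Patients returned by such a screen would then enter the gastroenterologist review and messaging workflow described above.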
Results: Some 168 outpatients 55–75 years of age with rectal bleeding who did not have a subsequent colonoscopy were identified between April 2, 2010, and May 31, 2012. Of the 102 patients referred for colonoscopy, 26 completed the procedure: 1 patient had an adenocarcinoma with spread to a local lymph node, 1 had a carcinoid tumor, 7 had one or more tubular adenomas, 3 had one or more hyperplastic polyps, 7 had hemorrhoids, and 1 had colitis. Of the 76 patients who did not receive a colonoscopy, 33 failed to respond to three attempts at contact, and 24 refused the procedure. For the November 11, 2010–July 31, 2012, period, we identified 366 outpatients 55–75 years of age with presumed iron-deficiency anemia and no subsequent colonoscopy, of whom 187 were referred for colonoscopy after gastroenterologist review. Of these 187 patients, 48 had a colonoscopy and 139 did not.

Discussion and Lessons Learned: The Safety Net programs establish a new approach to identify opportunities to improve
the quality of the diagnostic process and avoid diagnostic errors. Analysis of cases of late-stage presentation of cancer can help health care systems determine why such cases occur and design interventions to facilitate diagnosis at an earlier stage. Moreover, two identified diagnostic errors—failure to work up rectal bleeding and failure to rule out colon cancer in patients with suspected iron-deficiency anemia—are errors amenable to mitigation using a semi-automated algorithm for selected case reviews.

Similar detection strategies (“trigger tools”) leveraging the EHR are also being developed and tested in the US Department of Veterans Affairs system. In one such approach, data-mining algorithms were applied to outpatient records at two medical centers to identify patients lacking appropriate follow-up for abnormal prostate-specific antigen (PSA) test results or for conditions associated with colorectal cancer (positive fecal occult blood, hematochezia, or iron-deficiency anemia). Murphy et al. estimated that the approach would identify more than a thousand instances of delayed or missed follow-up of these abnormalities during a one-year period in these two systems, with identification of 47 high-grade cancers.37 Singh and colleagues have also developed algorithms that identify patients at high risk for diagnostic error, such as patients with a nonelective inpatient admission within two weeks of a primary care index visit. Diagnostic errors could be identified from the medical record in about 20% of such cases, compared with 2% in randomly selected clinic charts.27 The algorithm-triggered cases provide unique and important insights into the causes of these errors by identifying “missed opportunities” in the diagnostic process.14,27
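As a concrete illustration of this kind of e-trigger, the sketch below flags primary care visits followed by a nonelective admission within two weeks. It is a minimal rendering of the pattern described in the text, with assumed record types; it is not the published algorithm.

```python
from datetime import date, timedelta
from typing import Iterable, List, NamedTuple


class Visit(NamedTuple):
    patient_id: str
    visit_date: date


class Admission(NamedTuple):
    patient_id: str
    admit_date: date
    elective: bool


def trigger_positive_visits(index_visits: Iterable[Visit],
                            admissions: Iterable[Admission],
                            window_days: int = 14) -> List[Visit]:
    """Flag primary care index visits followed by a nonelective admission
    within the window, the high-risk pattern described by Singh and colleagues."""
    window = timedelta(days=window_days)
    nonelective = [a for a in admissions if not a.elective]
    flagged = []
    for visit in index_visits:
        for adm in nonelective:
            if (adm.patient_id == visit.patient_id
                    and visit.visit_date < adm.admit_date <= visit.visit_date + window):
                flagged.append(visit)
                break
    return flagged
```

Charts flagged this way still require manual review; as noted above, only about one in five such cases contained a confirmed diagnostic error.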
Discussion
HCOs should take an active role in addressing the problem of diagnostic error to provide high-quality care,8 and the initial challenge is to begin identifying diagnostic errors in routine practice. Facilitated physician reporting (Maine Medical Center), EHR-based triggers that identify diagnostic errors that have already occurred (Department of Veterans Affairs), and surveillance that identifies missed opportunities related to diagnosis that can still be remediated (Kaiser Permanente) all represent novel and effective approaches to accomplishing this. An additional option for identifying diagnostic errors is simply to ask the patient whether he or she has been misdiagnosed. Patients recently seen in the emergency department or discharged from the hospital would be ideal populations to target. In a recent patient survey, three in five respondents were “very” or “extremely” concerned about diagnostic errors,38 and apprehension
about diagnostic errors is the chief concern of patients seen in an emergency setting.39 Patients are able, willing, and motivated to participate in error-reporting systems. A Canadian study of families who had experienced a pediatric hospital admission identified a wide range of safety concerns, almost none of which (fewer than 3%) had been identified by the hospitals’ own safety-monitoring programs.40 Similar findings have been reported from Japan,41 Sweden,42 and the United States.43–45 In a related study, interviews of 228 inpatients during their admission and within 10 days of discharge identified 20 adverse events and 13 near misses, none of which had been identified through the traditional incident reporting pathways.46 Patients are also able to report diagnostic errors in ambulatory settings; a survey of primary care patients found that 13% had experienced a diagnostic error.47 The National Patient Safety Foundation and the American Hospital Association have endorsed and campaigned for patient participation to prevent medical errors,48 and the Agency for Healthcare Research and Quality is currently developing a national, voluntary reporting system for patients.49
BEYOND DETECTION STRATEGIES: EDUCATIONAL OPPORTUNITIES TO IMPROVE DIAGNOSIS

Detecting diagnostic errors is just the first step in reducing the likelihood of harm from diagnostic error. As outlined in Sidebar 2 (below), HCOs can take concrete steps to raise awareness of the problem and reduce the risk of diagnostic error.32,50–52 For example, the University of Pennsylvania Health System and the Perelman School of Medicine (Philadelphia) have embarked on such a program. At the organizational level, workshops jointly sponsored by the chief medical officer and general counsel included discussion of diagnostic errors aggregated over the previous five years, and strategies to address these errors were proposed. Concepts relating to diagnostic error were also added to the patient safety curriculum.53 All medical students now receive an introduction to decision making and cognitive error in their first year, and these concepts are revisited and reinforced during a differential diagnosis course in which clinical decision making is emphasized.50 In parallel, a unique longitudinal curriculum on diagnostic error was introduced into the internal medicine residency program alongside a new curriculum on high-value care.54,55 The curriculum presented methods to identify cognitive and systems errors; provided instruction on heuristics, bias, and the dual-process paradigm; and used an interactive, case-based activity built around video vignettes as well as an exercise in which residents write about and reflect on a diagnostic error with cognitive components that they have experienced.57
Sidebar 2. Interventions to Raise Awareness of Diagnostic Error and Interventions to Reduce the Risk of Diagnostic Error

Raise Awareness
■ Develop a dashboard to monitor diagnostic error: number of cases detected per year, number of root cause analyses performed, and number of improvement projects started (a minimal sketch follows this sidebar).
■ Follow up with patients seen in the emergency department to determine if the diagnosis provided was correct.
■ Institute a peer review program to monitor the quality of pathology and radiology diagnoses.
■ Monitor preanalytic, analytic, and postanalytic errors from the clinical laboratory.
■ Develop feedback pathways so that providers learn about changes in a patient’s diagnosis and diagnostic errors.

Reduce the Risk of Error
■ Make sure that all critical medical imaging studies are read in real time by a radiologist.
■ Ensure that subspecialty expertise is available when it is needed.
■ Use a certified electronic health record.
■ Use tools to promote differential diagnosis (for example, checklists; Web-based tools that suggest a differential diagnosis based on the key features of the case).
■ Facilitate patients’ obtaining a second opinion.
■ Improve communication among providers.
■ Ensure that all ordered tests, consults, and procedures are completed.
■ Ensure that patients receive the results of their diagnostic tests.
■ Ensure that patients receive appropriate cancer screenings.
■ Ensure that all patients receiving a diagnosis know when and how to come back for reevaluation if their symptoms persist or change.
■ Improve the environment for diagnosis: allow enough time, minimize distractions, reward diagnostic quality.
■ Promote a culture of safety and open dialogue. “I’m not sure,” “I need help,” and “not yet diagnosed” (NYD) should be heard more commonly. NYD should be used in documenting impressions of patients for whom the diagnosis is not highly certain.
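The dashboard suggested in the sidebar's first bullet needs only three counts per year. A minimal sketch, with assumed record and field names:

```python
from collections import Counter
from typing import Iterable, NamedTuple


class DiagnosticSafetyEvent(NamedTuple):
    year: int
    kind: str   # "error_detected", "rca_performed", or "improvement_project_started"


def dashboard_counts(events: Iterable[DiagnosticSafetyEvent], year: int) -> dict:
    """Tally the three dashboard metrics named in the sidebar for a given year."""
    counts = Counter(e.kind for e in events if e.year == year)
    return {
        "diagnostic_errors_detected": counts["error_detected"],
        "root_cause_analyses_performed": counts["rca_performed"],
        "improvement_projects_started": counts["improvement_project_started"],
    }
```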
Residents participating in the program improved their ability to identify cognitive bias when presented with written clinical cases or video vignettes in which cognitive errors were present, and they were able to easily recall, reflect on, and openly discuss diagnostic error in their practice.56
Conclusion
Leaders of HCOs are realizing that improving patient safety is a major challenge, and reducing diagnostic error is the next imperative in this quest. The substantial harm and costs associated with diagnostic error are formidable motivating factors for HCOs’ involvement. By raising awareness and implementing
targeted programs that address diagnostic error, as illustrated in this article, HCOs may begin to play an important role.

Mark L. Graber, MD, FACP, is Senior Fellow, Health Care Quality and Outcomes, RTI International, Research Triangle Park, North Carolina; President and Founder, Society to Improve Diagnosis in Medicine; and Professor Emeritus of Medicine, State University of New York at Stony Brook. Robert Trowbridge, MD, is Director, Longitudinal Integrated Clerkship and Course Director, Maine Medical Center/Tufts University School of Medicine Maine Track Orientation and Community-based Apprenticeship in Primary Care, Portland, Maine. Jennifer S. Myers, MD, is Associate Professor of Clinical Medicine and Patient Safety Officer, Hospital of the University of Pennsylvania, Philadelphia. Craig A. Umscheid, MD, MS, FACP, is Director, Penn Medicine Center for Evidence-based Practice, and Senior Associate Director, ECRI-Penn Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center, University of Pennsylvania School of Medicine. William Strull, MD, is Medical Director, Quality and Patient Safety, The Permanente Federation, Kaiser Permanente, Oakland, California. Michael H. Kanter, MD, is Regional Medical Director of Quality and Clinical Analysis, Southern California Permanente Medical Group, Pasadena. Please address correspondence to Mark L. Graber,
[email protected].
References
1. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 1999.
2. Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press, 2012.
3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press, 2001.
4. Olsen LA, Aisner D, McGinnis JM, editors; Roundtable on Evidence-Based Medicine, Institute of Medicine. The Learning Healthcare System: Workshop Summary. Washington, DC: The National Academies Press, 2007. Accessed Jan 31, 2014. http://www.nap.edu/catalog.php?record_id=11903.
5. Pronovost P, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006 Dec 28;355(26):2725–2732.
6. Wachter RM. Why diagnostic errors don’t get any respect—and what can be done about them. Health Aff (Millwood). 2010;29(9):1605–1610.
7. Graber M. Diagnostic errors in medicine: A case of neglect. Jt Comm J Qual Patient Saf. 2005;31(2):106–113.
8. Graber ML, Wachter RM, Cassel CK. Bringing diagnosis into the quality and safety equations. JAMA. 2012 Sep 26;308(12):1211–1212.
9. Society to Improve Diagnosis in Medicine. Mission & Vision. Accessed Jan 31, 2014. http://www.improvediagnosis.org/?page=MissionVision.
10. Hayward RA. Counting deaths due to medical errors. JAMA. 2002 Nov 20;288(19):2404–2405.
11. Winters B, et al. Diagnostic errors in the intensive care unit: A systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894–902.
12. Isabel Healthcare. Press Release: Misdiagnosis Is an Overlooked and Growing Patient Safety Issue and Core Mission of Isabel Healthcare: Survey Shows 1 in 6 People Has Experience with Wrong Diagnosis. Mar 20, 2006. Accessed Jan 31, 2014. http://www.isabelhealthcare.com/pdf/USsurveyrelease-Final.pdf.
13. Singh H, et al. Errors of diagnosis in pediatric practice: A multisite survey. Pediatrics. 2010;126(1):70–79.
14. Singh H, et al. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013 Mar 25;173(6):418–425.
15. Newman-Toker DE, Makary MA. Measuring diagnostic errors in primary care: The first step on a path forward. JAMA Intern Med. 2013 Mar 25;173(6):425–426. Erratum in: JAMA Intern Med. 2013 Apr 8;173(7):599.
16. Wynia MK, Classen DC. Improving ambulatory patient safety: Learning from the last decade, moving ahead in the next. JAMA. 2011 Dec 14;306(22):2504–2505.
17. The Joint Commission. 2014 Comprehensive Accreditation Manual for Hospitals: The Official Handbook. Oak Brook, IL: Joint Commission Resources, 2013.
18. Newman-Toker DE, Pronovost PJ. Diagnostic errors—The next frontier for patient safety. JAMA. 2009 Mar 11;301(10):1060–1062.
19. Singh H. Diagnostic errors: Moving beyond ‘no respect’ and getting ready for prime time. BMJ Qual Saf. 2013;22(10):789–792.
20. Saber Tehrani AS, et al. 25-year summary of US malpractice claims for diagnostic errors 1986–2010: An analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672–680.
21. Cassel CK, Guest JA. Choosing wisely: Helping physicians and patients make smart decisions about their care. JAMA. 2012 May 2;307(17):1801–1802.
22. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22 Suppl 2:ii21–27.
23. Griffin FA, Resar RK. IHI Global Trigger Tool for Measuring Adverse Events, 2nd ed. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement, 2009. Accessed Jan 31, 2014. http://www.ihi.org/resources/Pages/IHIWhitePapers/IHIGlobalTriggerToolWhitePaper.aspx.
24. Shojania KG. The frustrating case of incident reporting systems. Qual Saf Health Care. 2008;17(6):400–402.
25. Farley DO, et al. Adverse-event-reporting practices by US hospitals: Results of a national survey. Qual Saf Health Care. 2008;17:416–423.
26. Levtzion-Korach O, et al. Integrating incident data from five reporting systems to assess patient safety: Making sense of the elephant. Jt Comm J Qual Patient Saf. 2010;36(9):402–410.
27. Singh H, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93–100.
28. Schwartz A, et al. Uncharted territory: Measuring costs of diagnostic errors outside the medical record. BMJ Qual Saf. 2012;21(11):918–924.
29. Luck J, et al. How well does chart abstraction measure quality? A prospective comparison of standardized patients with the medical record. Am J Med. 2000 Jun 1;108(8):642–649.
30. Tsang C, Majeed A, Aylin P. Routinely recorded patient safety events in primary care: A literature review. Fam Pract. 2012;29(1):8–15.
31. Levinson DR. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries. Washington, DC: US Department of Health and Human Services, Office of Inspector General, Nov 2010. Accessed Jan 31, 2014. http://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf.
32. Trowbridge R, Salvador D. Addressing diagnostic errors: An institutional approach. Focus on Patient Safety. 2010;13(3):1–2, 5. Accessed Jan 31, 2014. http://www.npsf.org/wp-content/uploads/2011/09/Focus_13.3_2010.pdf.
33. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005 Jul 11;165(13):1493–1499.
34. Schiff GD, et al. Diagnostic error in medicine: Analysis of 583 physician-reported errors. Arch Intern Med. 2009 Nov 9;169(20):1881–1887.
35. Trowbridge R, et al. A restructured root cause analysis process for diagnostic error. Abstract. Presented at the 4th International Diagnostic Error in Medicine Conference, Chicago, Oct 2011.
36. Graber ML. Expanding the goals of peer review to detect both practitioner and system error. Jt Comm J Qual Improv. 1999;25(8):396–407.
37. Murphy DR, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf. 2014;23(1):8–16.
38. PR Newswire. Americans Are Concerned About Hospital-Based Medical and Surgical Errors. Jul 20, 2004. Accessed Jan 31, 2014. http://www.prnewswire.com/news-releases/americans-are-concerned-about-hospital-based-medical-and-surgical-errors-71336382.html.
39. Burroughs TE, et al. Patient concerns about medical errors in emergency departments. Acad Emerg Med. 2005;12(1):57–64.
40. Daniels J, et al. Identification by families of pediatric adverse events and near misses overlooked by health care providers. CMAJ. 2012 Jan 10;184(1):29–34.
41. Hasegawa T, et al. Patients’ identification and reporting of unsafe events at six hospitals in Japan. Jt Comm J Qual Patient Saf. 2011;37(11):502–508.
42. Ohrn A, et al. Reporting of sentinel events in Swedish hospitals: A comparison of severe adverse events reported by patients and providers. Jt Comm J Qual Patient Saf. 2011;37(11):495–501.
43. Fowler FJ Jr, et al. Adverse events during hospitalization: Results of a patient survey. Jt Comm J Qual Patient Saf. 2008;34(10):583–590.
44. Weissman JS, et al. Comparing patient-reported hospital adverse events with medical record review: Do patients know something that hospitals do not? Ann Intern Med. 2008 Jul 15;149(2):100–108.
45. Weingart SN. Patient engagement and patient safety. AHRQ WebM&M. 2013. Accessed Jan 31, 2014. http://webmm.ahrq.gov/perspective.aspx?perspectiveID=136.
46. Weingart SN, et al. What can hospitalized patients tell us about adverse events? Learning from patient-reported incidents. J Gen Intern Med. 2005;20(9):830–836.
47. Kistler CE, et al. Patient perceptions of mistakes in ambulatory care. Arch Intern Med. 2010 Sep 13;170(16):1480–1487.
48. Millman EA, et al. Patient-assisted incident reporting: Including the patient in patient safety. J Patient Saf. 2011;7(2):106–108.
49. Agency for Healthcare Research and Quality. Designing Consumer Reporting Systems for Patient Safety Events: Contract Final Report. Jul 2011. Accessed Jan 31, 2014. http://www.ahrq.gov/professionals/quality-patient-safety/patient-safety-resources/resources/consumer-experience/reporting/index.html.
50. Graber ML, et al. Cognitive interventions to reduce diagnostic error: A narrative review. BMJ Qual Saf. 2012;21(7):535–557.
51. Singh H, et al. System-related interventions to reduce diagnostic errors: A narrative review. BMJ Qual Saf. 2012;21:160–170.
52. McDonald KM, et al. Patient safety strategies targeted at diagnostic errors: A systematic review. Ann Intern Med. 2013 Mar 5;158(5 Pt 2):381–389.
53. Umscheid CA, Williams K, Brennan PJ. Hospital-based comparative effectiveness centers: Translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352–1355.
54. Umscheid CA. Snapshots of low-value medical care. JAMA Intern Med. 2013 Feb 11;173(3):186–187.
55. Smith CD; Alliance for Academic Internal Medicine–American College of Physicians High Value, Cost-Conscious Care Curriculum Development Committee. Teaching high-value, cost-conscious care to residents: The Alliance for Academic Internal Medicine–American College of Physicians Curriculum. Ann Intern Med. 2012 Aug 21;157(4):284–286.
56. Ogdie AR, et al. Seen through their eyes: Residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med. 2012;87(10):1361–1367.
March 2014 Volume 40 Number 3 Copyright 2014 The Joint Commission