ARTICLE

Diagnostic Errors: Impact of an Educational Intervention on Pediatric Primary Care
Julianne Nemes Walsh, DNP, PNP-BC, Margaret Knight, PhD, & A. James Lee, PhD

ABSTRACT
Introduction: The purpose of our study was to determine the impact of an educational program on providers' knowledge related to diagnostic errors and diagnostic reasoning strategies.
Methods: A quasi-experimental interventional study with a multimedia approach, case study discussion, and trigger-generated medical record review at two time points was conducted. Measurement tools included a test developed by the National Patient Safety Foundation (Reducing Diagnostic Errors: Strategies for Solutions Quiz), additional diagnostic reasoning questions, and a trigger-generated process to analyze medical records.
Results: Knowledge related to diagnostic errors improved significantly from pretest to posttest, with differences sustained at 60 days (p < .025). Although there was a decline in the proportion of patients returning with the same chief complaint within 14 days, the decline was not statistically significant (p < .15). When providers were confronted with an unrecognizable clinical presentation, they reported increased use of a "diagnostic timeout" (p < .038).
Discussion: Providers developed an increased awareness of the presence of diagnostic errors in the primary care setting, the contributing risk factors for a diagnostic error, and possible strategies to reduce diagnostic errors. These factors had an unexpected impact on changing the primary care practice model to enhance the continuity of patient care. J Pediatr Health Care. (2017) -, ---.

Julianne Nemes Walsh, Primary Care Pediatric Nurse Practitioner, Bridgewater Pediatrics, Bridgewater, MA. Margaret Knight, Associate Professor, School of Nursing, University of Massachusetts, Lowell, Lowell, MA. A. James Lee, Associate Professor Emeritus, Health Information Systems, University of Massachusetts, Lowell, Lowell, MA.
This study was supported by a Research and Practice Award from Sigma Theta Tau International, Eta Omega Chapter.
Correspondence: Julianne Nemes Walsh, DNP, PNP-BC, Bridgewater Pediatrics, 1029 Pleasant St., Bridgewater, MA 02324; e-mail: [email protected].
0891-5245/$36.00
Copyright © 2017 by the National Association of Pediatric Nurse Practitioners. Published by Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.pedhc.2017.07.004

www.jpedhc.org

KEY WORDS Diagnostic errors, pediatrics, primary care, patient safety

Diagnostic errors are the sixth leading cause of death in the United States, are ranked as the leading cause of paid malpractice claims in primary care, and are twice as likely to cause a patient death compared with any other type of error (Carroll & Buddenbaum, 2007; CRICO Foundation, 2014; Graber, 2013; Institute of Medicine [IOM], 2015; Singh et al., 2014). A recent report published by the IOM (2015), Improving Diagnosis in Health Care, highlighted the multifactorial causes of diagnostic errors and recognized that diagnosis is a collaborative effort between health care professionals, patients, and families. In the report, the IOM defines a diagnostic error as a failure to establish an accurate and timely explanation of the patient's health problem or communicate the explanation to the patient. In 2014, the CRICO Foundation released The Annual Benchmarking Report: Malpractice Risk in the Diagnostic Process. This report was generated to


determine when and where diagnosis-related errors occur and discusses necessary changes to prevent diagnostic errors. In the report, 23,527 malpractice cases were reviewed, and it was found that the most expensive judgment errors were related to a failure in the diagnostic process. Leading judgment factors included failure or delay in ordering a diagnostic test, misinterpretation of a diagnostic test, failure to establish a differential diagnosis, failure or delay in ordering a consultation, and failure to rule out an abnormal finding. Diagnostic errors in the ambulatory setting are more often due to lapses in clinical judgment (Carroll & Buddenbaum, 2007; CRICO, 2014; Giardina et al., 2013; Kain & Caldwell-Andrews, 2006). BACKGROUND Although the majority of studies related to diagnostic errors are within the adult population, CRICO Foundation (2014) reviewed 45 malpractice cases from January 2007 through December 2011 and reported that pediatric/neonatal care ranked first among all specialties named in malpractice claims. Pediatric diagnostic error data is predominantly extrapolated from pediatric malpractice claims, and 28% to 31.8% of all pediatric claims are thought to be related to errors in diagnosis (Carroll & Buddenbaum, 2007; Kain & Caldwell-Andrews, 2006). Singh et al. (2010) surveyed 726 pediatricians from academic, training, and community practices and found that 54% of the pediatricians admitted their involvement in a diagnostic error at least once or twice per month and that almost one half (45%) reported that a diagnostic error caused significant harm at least once or twice per year. Failure to gather available medical information ranked as the greatest contributing factor to committing a diagnostic error among the pediatricians surveyed. 
Several factors need to be considered when evaluating the diagnostic process: the element of diagnostic uncertainty, provider knowledge deficits, system time constraints, population trends, health literacy, technological tools, culture, language barriers, and the mental health of the patient and provider (Brady & Goldenhar, 2014; IOM, 2015; Sherbino, Dore, Siu, & Norman, 2011). Sarkar et al. (2012) surveyed 1,817 primary care physicians to determine the challenges of making a diagnosis in the outpatient setting. Fifty percent of the respondents (n = 1,054) reported that more than 5% of their patients' illnesses were too difficult to diagnose, supporting previous research indicating a lack of knowledge as a cause of diagnostic errors (Graber et al., 2012; IOM, 2015; Sarkar et al., 2012; Sherbino et al., 2011; Singh, Thomas, Khan, & Petersen, 2007; Singh et al., 2013). System-related stressors include a lack of time with patients, a lack of time to review patient-related documentation, and administrative tasks. A study by Sarkar et al. (2012) of system-related interventions indicated that improved access to specialists, lengthening patient visits, reducing physician panel sizes, and delegating administrative tasks to nonclinical staff may be beneficial in improving diagnostic accuracy.

Diagnostic decision-making skills play a significant role in creating or preventing a diagnostic error, and these skills are the most important cognitive skills a nurse practitioner or physician can develop and refine. Several reports suggest problems with hypothesis generation or broadening of the differential diagnosis as the cause of decision-making errors (Ely, Kaldjian, & D'Allesandro, 2012; Fischer, Fetter, Munro, & Goldman, 1997; Giardina et al., 2013; Schiff et al., 2009). Two predominant theories of clinical reasoning related to diagnostic decision making are commonly found in the literature: the Safer Diagnosis Framework (Singh & Sittig, 2015) and the Dual Process Theory (Croskerry, 2009; Durham, Fowler, & Kennedy, 2014; IOM, 2015; Tsalatsanis et al., 2015). The IOM report (2015) highlights the dual process theory of diagnostic decision making and how the theory serves as a framework to describe the cognitive activities a provider engages in when determining a diagnosis. This theoretical framework incorporates both heuristic (System 1) and analytic (System 2) reasoning skills. When a provider is exposed to a familiar case presentation, the provider's mind will often begin to recognize patterns and take mental shortcuts. These mental shortcuts, known as heuristics, are used as the method of reasoning. If the provider does not recognize the patient presentation, he or she will then engage in a more analytic approach involving slower, conscious, logical, and defensible processing (Croskerry, 2009; Ely et al., 2012; Singh & Sittig, 2015).
The dual process theory involves reiterative mental processing, a toggling back and forth from System 1 (heuristic) to System 2 (analytic) until a diagnosis is reached. Provider clinical reasoning skills in conjunction with patient risk factors may have a compounding effect on diagnostic error rates. Retrospective reviews of patient medical records using specific triggered electronic queries have been shown to be a reliable method for identifying patients at risk for a diagnostic error in the primary care setting (Kirkendall et al., 2012; Singh et al., 2007, 2011, 2013; Unbeck et al., 2014). The identification of a patient at risk is an integral step in determining the incidence of errors, types of errors, and possible causes of errors within the pediatric population. Schwappach (2012) examined data from

the Commonwealth Fund's 2010 International Survey to determine risk factors for patient-reported medical errors and determined a patient's odds ratio (OR) for a medical error. Schwappach found poorly coordinated care (OR = 3.9), hospitalization (OR = 1.6) or emergency department visits (OR = 1.7), and three or more providers involved in a patient's care (OR = 2.0) to be significant risk factors for a medical error. Singh et al. (2007) retrospectively screened 15,580 primary care visits logged into an electronic medical record (EMR) to determine the positive predictive value (PPV) of two trigger criteria for identifying diagnostic errors. Trigger 1 (n = 139) was defined as a primary care visit followed by a hospitalization in the next 10 days, and Trigger 2 (n = 175) was defined as a well or sick visit followed by one or more primary care, emergency care, or urgent care visits. Control charts were also analyzed (n = 199). Singh et al. reported the PPVs for Trigger 1 (16.1%), Trigger 2 (9.7%), and controls (4.0%). In a larger randomized controlled trial (N = 212,165 patient visits), Singh et al. (2014) repeated the 2007 study and extended the criteria for both triggers to 14 days, finding slightly higher rates for Trigger 1 (PPV = 20.9%, 95% confidence interval [CI] = [17.9, 24.0]), lower rates for Trigger 2 (PPV = 5.4%, 95% CI = [3.7, 7.1]), and lower rates for the control group (PPV = 2.1%, 95% CI = [0.1, 3.3]). Using this information, a return visit with the same chief complaint within 14 days appears to be the sweet spot for identifying patients at risk for a diagnostic error. Although analyzing malpractice claims may provide clues to identifying points in the diagnostic process on which to focus provider education strategies, these clues are inferences.
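The trigger PPVs cited above are simple proportions: confirmed diagnostic errors divided by trigger-flagged records. A minimal sketch of the arithmetic (the counts below are illustrative approximations, not the study's raw data):

```python
def ppv(confirmed_errors, flagged):
    """Positive predictive value of a trigger:
    the fraction of flagged records that contain a confirmed diagnostic error."""
    return confirmed_errors / flagged

# Illustrative counts loosely based on Singh et al. (2007):
# Trigger 1 flagged 139 visits; a 16.1% PPV corresponds to roughly 22 confirmed errors.
print(round(ppv(22, 139), 3))  # 0.158
```

The same calculation with hypothetical counts for Trigger 2 (for example, 17 of 175 flagged visits) reproduces a PPV near the reported 9.7%.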
The CRICO Foundation (2014) identified two focal points for provider training: (a) the initial process, that is, the patient presentation with a complaint through to the provider's assessment, including a differential diagnosis and test orders (58% of claims), and (b) coordination of care, which includes consultations, laboratory test follow up, and communication of the provider's assessment to the patient (46% of claims). Several researchers have explored the influence of situational awareness as a method to heighten a provider's awareness of cognitive biases to decrease error rates (Brady & Goldenhar, 2014; Sherbino et al., 2011; Singh et al., 2012b). A few convergent themes surfaced as possible strategies to reduce errors: team-based care, the availability of standardized processes, increased provider experience, and a provider's self-awareness of limitations and willingness to consult with others. A provider's perception and his or her ability to comprehend the magnitude of the problem, as well as his or her ability to forecast the appropriate action, may be influenced by diagnostic reasoning style and training related to diagnostic reasoning (Brady & Goldenhar, 2014; Pirret et al., 2015; Sherbino et al.,

2011; Singh et al., 2012b). Providers who practice analytic reasoning, include patients and parents in their decision making, ask themselves why a particular diagnosis fits or does not fit, take diagnostic timeouts, and verbally communicate diagnostic uncertainty to their patients and colleagues serve as role models for others and ultimately affect the quality of care within an organization.

The aim of this study was to determine the impact of a multimedia educational intervention on (a) providers' knowledge related to diagnostic errors and diagnostic reasoning strategies to improve diagnostic accuracy; (b) the rate of documented patient engagement (after-visit summaries or documented discussions), the existence of a documented differential diagnosis, and follow up of laboratory study results in the patient chart; and (c) the rate of unscheduled revisits within a 14-day period with a related chief complaint resulting in a change in diagnosis. Knowledge and documentation were measured before and after the intervention. No similar studies were identified in the literature.

METHODS
This study used a quasi-experimental interventional design. The purpose of the study was to determine the impact of an educational intervention on providers' knowledge related to diagnostic errors and diagnostic reasoning strategies.

Setting
The study took place in a nonprofit, multispecialty medical organization providing care to more than 650,000 adult and pediatric patients in the Northeastern United States. One of the organization's pediatric primary care practice facilities was used. This facility provides care to 9,000 pediatric patients per year, ranging in age from birth to 22 years, and operates 7 days per week (including pediatric urgent care hours).
Sample
There were equal numbers of nurse practitioners (NPs) and pediatricians (N = 12: n = 6 NPs and n = 6 pediatricians) who met the inclusion criteria to participate in the intervention portion of the study. The majority of the NPs had more than 30 years of experience (66.7%), compared with only 16.7% of the pediatricians. A combined 41.7% of the providers had more than 30 years of experience, 16.7% had 20 to 29 years, 8.33% had 16 to 19 years, and the remaining providers were split evenly between 11 to 15 years (16.7%) and 1 to 5 years (16.7%). EMRs of patients who had a primary or urgent care visit followed by a hospitalization, urgent care, or primary care visit within 24 hours to 14 days were analyzed. Only patients seen by a provider who participated in the study were included. A total of 49 EMRs, comprising 98 patient visits, were reviewed.


Twenty-five patient records from 2015 were reviewed and served as controls (pre-intervention), and 24 patient records were reviewed from 2016 (post-intervention). In the control (pre-intervention) group, 60% of patients were evaluated by NPs at the initial visit and 20% at the subsequent visit. Patients in the post-intervention cohort (2016) were evaluated by an NP 67% of the time for the first visit and 45% of the time for the subsequent visit within 14 days. Overall, 48% of the patients were evaluated and treated by an NP and 52% by a pediatrician for the combined total of 98 visits. The mean patient age was 6.64 years (standard deviation [SD] = 5.57, 95% CI = [4.64, 6.90]). Patient ages ranged from 6 months to 20 years.

Measurement
Reducing Diagnostic Errors: Strategies for Solutions Quiz, Developed by the National Patient Safety Foundation
This instrument was used for the pretest, immediate posttest, and 60-day posttest. The instrument includes 12 questions developed by the National Patient Safety Foundation (NPSF); the final eight questions were added by the researcher (see Figure). This instrument has not been validated to date and is used by the NPSF for its online course on diagnostic error prevention. Scoring is based on a 0-to-100-point system for questions 1 through 12; each question answered correctly awards 8.333 points. Validation of the current tool by the NPSF is not recommended because questions related to self-reflective practice and willingness to take a diagnostic timeout are not incorporated in the current version.

EMR Review
A trigger tool was used to extract data from each patient's EMR. The tool was developed by the researcher and was based on a review of related literature.
The trigger tool included patient age, provider category, chief complaint, diagnosis at the index and subsequent visits, a change in the diagnosis, existence of a differential diagnosis, laboratory test ordering and follow up, patient education upon discharge, documented use of a consultation, existence of a chronic illness, number of medications a patient was taking, and the number of contacts within a 14-day period.

Educational Intervention
The educational intervention included an in-person PowerPoint presentation on Day 0, a case study group discussion on Day 30, and four 4-minute online videos developed by the researcher (see Table). The educational intervention was based on the NPSF program Education Module: Addressing Diagnostic Errors (2011). On Day 30 the providers participated in a case study discussion developed from the National Patient Safety Foundation Reducing Diagnostic Errors: Strategies for Solutions educational module (2011). Each provider received the following handouts to guide the discussion: (a) "How Doctors Think," (b) "System Related Factors," (c) "Common Heuristics," and (d) "Strategies to Reduce Cognitive Errors" (NPSF, 2011). One hundred percent of the providers completed the pretest, both posttests, and the case study discussion, and all self-reported viewing the videos. One NP (8.3%) had received previous formal training.

Procedure
Phase 1
Institutional review board approval was awarded by the University of Massachusetts (Lowell, MA) and the participating organization. Once a provider completed the consent, he or she was asked to complete the pretest and attend the education session. The provider was then sent a video link once per week to review for 4 weeks (see Table).

Phase 2
After completing the video series, the providers participated in a 40-minute case study analysis of a 13-year-old who presented with bilateral thigh swelling, based on a guideline developed by the NPSF (2011). Providers were asked to discuss their analyses of the case study with their colleagues and the researcher.

Phase 3
Providers completed an immediate posttest and a 60-day posttest. Providers were given a $5.00 gift card to a local coffee shop after completing each of the posttests.

Phase 4
The information technology department at the participating organization retrieved the EMRs of patients who met the following criteria: (a) had a primary care visit followed by an unplanned hospitalization, urgent care, or emergency care visit within 24 hours to 14 days from the index visit; (b) had an urgent care visit followed by one or more unscheduled urgent care visit(s) or primary care visit(s) within 24 hours to 14 days from the index visit; (c) were seen by a provider who participated in the educational intervention at the index visit; (d) were between the ages of 1 month and 21 years; and (e) were seen in July, August, or September of 2015 or 2016.
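Criteria (a) and (b) above reduce to a date-window test between an index visit and any later unscheduled visit. A hypothetical sketch of that screening logic (the function name and data layout are invented for illustration; the actual query ran inside the organization's EMR system):

```python
from datetime import date

def trigger_flagged(index_date, revisit_dates):
    """Return True if any unscheduled revisit falls 24 hours to 14 days
    after the index visit -- the window used by the study's trigger criteria."""
    return any(1 <= (d - index_date).days <= 14 for d in revisit_dates)

# Index visit on July 1 followed by an unplanned revisit 5 days later: flagged.
print(trigger_flagged(date(2015, 7, 1), [date(2015, 7, 6)]))   # True
# A revisit 20 days out falls outside the window: not flagged.
print(trigger_flagged(date(2015, 7, 1), [date(2015, 7, 21)]))  # False
```
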
Twenty-five patients met the inclusion criteria in 2015 (pre-intervention) and 24 patients in 2016 (post-intervention).

Statistical Analysis
The JMP 12.2 statistical program was used to analyze the data, with the exception of a two-sided significance test using Statistica (JMP Software, 2015) to determine the study power at the .05 level (N = 49 EMRs, 98 visits). Distribution analyses of provider and patient demographics, provider

FIGURE. National Patient Safety Foundation quiz.

Please record the last three digits of your cell phone number here: __ __ __

REDUCING DIAGNOSTIC ERRORS: Strategies for Solutions Quiz
Please circle the correct answer.

1. What is the currently accepted definition of a diagnostic error?
A. A diagnosis that is wrong, missed, or delayed
B. A diagnosis not made correctly because the illness is rare
C. A diagnosis not made because the patient did not seek care appropriately

2. Malpractice claims related to diagnostic error outnumber all other classes of medical error.
A. True
B. False

3. What is the "dual process" theory?
A. A theory about medication errors
B. A theory that clinicians use four differing modes of diagnostic thinking
C. A theory that clinicians use two differing modes of diagnostic thinking
D. A theory about system-based causes of diagnostic errors

4. What is premature closure?
A. A failure to consider other reasonable diagnoses once the initial diagnosis is reached
B. There is no definition for this term
C. The term is also known as framing bias

5. What are heuristics?
A. Strategies to overcome biases
B. Cognitive short-cut strategies to solve clinical problems
C. Coming to a diagnosis prematurely
D. None of the above

6. What are the principles behind metacognition?
A. Reflection, self-questioning, perspective-taking, and self-assessment
B. Same as the principle of diagnostic time out
C. Same as the principles behind System 1 and System 2 thinking

7. What is a CDR (Cognitive Disposition to Respond)?
A. A strategy to overcome diagnostic errors
B. An always inappropriate response to diagnostic decision making
C. A common cognitive bias
D. This concept does not apply to diagnostic error

8. What are examples of system-related factors that contribute to diagnostic error?
A. Communication
B. Coordination of care
C. Availability of expertise
D. Culture of safety
E. All of the above

9. Intuition can be taught.
A. True
B. False

10. What is a cognitive autopsy?
A. A process to apply the autopsies conducted in the pathology laboratory
B. A stepwise guideline for thinking through why a diagnostic error occurred
C. A guideline that CANNOT be used in teaching about diagnostic errors
D. An ineffective way in which to think through diagnostic errors

11. Which of the following are common problems encountered with intuitive clinical reasoning?
A. Framing bias
B. Premature closure
C. Faulty context generation
D. All of the above

12. Which cognitive causes of diagnostic errors account for the vast majority of errors in internal medicine?
A. Faulty knowledge
B. Faulty data gathering
C. Faulty synthesis

13. Which of the following are known risk factors for incurring a diagnostic error?
A. The existence of a chronic illness
B. The absence of a differential diagnosis when diagnostic uncertainty exists
C. Three or more providers involved in the patient's care
D. Reliance on the history obtained by other clinicians
E. All of the above

14. Have you received formal training on factors that may lead to a diagnostic error prior to this educational module?
A. Yes
B. No

For the following two questions, please indicate how you feel by circling the number that best matches your opinion.

15. I engage in self-reflection related to my diagnostic reasoning skills with every patient I encounter.
Strongly Agree (5)   Agree (4)   Neutral (3)   Disagree (2)   Strongly Disagree (1)

16. When I feel any level of diagnostic uncertainty, I include a differential diagnosis in my patient's chart.
Strongly Agree (5)   Agree (4)   Neutral (3)   Disagree (2)   Strongly Disagree (1)

17. Please circle how many years you have been practicing as a primary care provider.
A. 1-5 years
B. 6-10 years
C. 11-15 years
D. 16-20 years
E. >21 years

18. How often do you see a patient who presents with an unrecognizable clinical presentation?
A. Less than once per week
B. 1-2 times per week
C. Less than once per month
D. 1-2 times per month
E. 3-5 times per month
F. Never

19. When confronted with a patient who presents with an unrecognizable clinical presentation, how often do you take a diagnostic time out to consult with a colleague?
A. Never
B. Rarely
C. Occasionally
D. Almost Always
E. Always

20. Please circle your provider category.
A. Nurse Practitioner
B. Physician
C. Physician Assistant

Questions 1-12 © National Patient Safety Foundation. Reprinted with permission of NPSF. All rights reserved.
THANK YOU!


TABLE. Diagnostic errors video content and links

1. Diagnostic Errors Introduction (4:37): https://www.powtoon.com/online-presentation/dIml6DAQKb0/video-1-diagnostic-errors/
2. Cognitive Psychology, Dual Process Theory and Cognitive Biases (4:36): https://www.powtoon.com/m/doe2sqRN6RC/1/m
3. Intuition, Metacognition, and Cognitive Biases (4:35): https://www.powtoon.com/online-presentation/fnk0A0yGj7B/video-3-diagnostiac-errors/
4. Patient Engagement and Strategies to Reduce Diagnostic Errors (4:33): https://www.powtoon.com/online-presentation/bEFpbj48CQz/video-4-diagnostic-errors/?utm_source=Transactional&utm_medium=Email&utm_campaign=Transactional-Publish-success&mode=movie

experience, previous provider training related to diagnostic errors, and the number of patient contacts within a 14-day period were completed. Matched-pairs analyses were performed to determine the differences in (a) pretest and posttest scores on the NPSF quiz, (b) pre- and post-intervention identified trigger tool criteria, (c) the providers' reports of self-reflective activities related to diagnostic reasoning, and (d) the providers' willingness to take a diagnostic timeout when confronted with an unrecognizable clinical situation. Fisher's exact test and chi-square analyses were performed to determine differences in the pre- and post-intervention EMR inclusion of a differential diagnosis, laboratory follow up, patient engagement in the pre- and post-intervention chart reviews, and the relationship between those who presented with a related chief complaint and had a change in diagnosis. A binomial difference test was performed to compare the proportion of patients who presented with a related chief complaint in the pre- and post-intervention chart reviews.

RESULTS
The NPSF test was used for the pretest, immediate posttest, and 60-day posttest to evaluate providers' knowledge related to diagnostic errors and reasoning strategies. The mean pretest score was 82.05, the immediate posttest mean score was 88.44, and the 60-day posttest mean score was 87.18 (the mean difference between the pretest and 60-day posttest scores was 6.41). Matched-pairs analysis showed statistically significant differences between the pretest and immediate posttest (p < .025, two tailed; 95% CI = [0.96, 11.8]), with sustained differences between the pretest and 60-day posttest scores (p < .035, one tailed). A 5-point Likert scale ranging from strongly agree (5 points) to strongly disagree (1 point) was used to determine each participant's agreement to engage in self-reflection when he or she was involved in diagnostic reasoning with patients (see Figure, Question 15).
A matched-pairs analysis of the providers' engagement in self-reflection remained constant, with 67% of the responses in the agree or strongly agree categories on both the pretest and immediate posttest. The mean

response was 3.66 on the pretest and 3.58 on the immediate posttest (pediatricians: −0.083, p < .58, 95% CI = [−0.41, 0.24]). The mean response for self-reflection dropped to 3.33 on the 60-day posttest, showing no significant difference (pediatricians: −0.33, p < .41, 95% CI = [−1.20, 0.53]). Providers were asked to report how frequently they would take a diagnostic timeout to consult with a colleague if they were presented with a patient who had an unrecognizable clinical presentation (see Figure, Question 19). Providers reported a statistically significant increase in willingness to consult with a colleague when confronted with a patient with an unrecognizable clinical presentation from the pretest to the immediate posttest (p < .038, 95% CI = [0.02, 0.64]), and this significant difference was sustained on the 60-day posttest (p < .038, 95% CI = [0.02, 0.64]). Providers reported seeing patients who presented with an unrecognizable clinical presentation three to five times per month (mean = 4, SD = 1.2, 95% CI = [3.23, 4.76]). Fifty percent of providers agreed on the use of a differential diagnosis when they felt any level of diagnostic uncertainty on the pretest and 60-day posttest, showing no significant difference (p < .14, 95% CI = [−1.19, 0.19]). There was a significant difference in the pretest and immediate post-intervention analysis, showing that 59% of providers reported disagreement with including a differential diagnosis when there was any level of diagnostic uncertainty (p < .042, 95% CI = [0.04, 1.79]). EMR distribution analysis indicated that the mean number of patient contacts (N = 49) within a 14-day period, including electronic and phone call contacts, was 5.77 (SD = 3.13, 95% CI = [4.87, 6.67]). There were no significant differences in the number of contacts between 2015 (mean = 5.92, SD = 2.95) and 2016 (mean = 5.62, SD = 3.37).
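The binomial difference test named in the statistical analysis compares two independent proportions. One common way to run such a comparison is a pooled two-proportion z-test; a sketch under that assumption, using the related-chief-complaint proportions (0.76 pre vs. 0.62 post) with the two group sizes as illustrative inputs:

```python
import math

def two_prop_z_test(p1, n1, p2, n2):
    """Two-sided z-test for the difference of two independent proportions
    (pooled normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

# Related chief complaint within 14 days: 0.76 (n = 25) pre vs. 0.62 (n = 24) post.
z, p = two_prop_z_test(0.76, 25, 0.62, 24)
print(round(z, 2), round(p, 2))  # 1.06 0.29
```

With these inputs the one-tailed p-value is about .14, in line with the nonsignificant result the study reports for this comparison.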
Fifty-six percent of patients were identified as having a chronic illness in the pre-intervention group (2015) and 38% in the post-intervention group (2016); there were no significant differences between the two groups (Pearson chi-square, p > .91). Distribution analysis showed a 5% increase (0.20 to 0.25) in the frequency with which a provider included a


differential diagnosis in the patient's EMR from the pre- to post-intervention patient groups (N = 98 visits); however, this was not significant (Fisher's exact test, p < .49). Patient engagement was evaluated by the existence of specific documentation related to the patient's suspected diagnosis. In the pre-intervention group (2015), 64% of the patients had evidence of documented patient engagement on the after-visit summary, and 65% had such documentation in the post-intervention group. Fisher's exact test was not significant at the 90% CI (p < .22). Laboratory testing follow up was analyzed using only those patients who had laboratory tests ordered. In the pre-intervention group, laboratory tests were ordered at 17 of 50 visits, and 88% (n = 15) had follow up documented; in the post-intervention group, laboratory tests were ordered at 20 of 48 visits, and 85% (n = 17) of the charts had documented follow up. The chi-square analysis at the 90% CI was not significant (p < .56). Binomial analyses detailed the proportion of patients with a related chief complaint who returned within 14 days. Although the proportion of patients with a related chief complaint decreased (0.76 to 0.62), the decrease was not significant. As expected, there was an overall increase in the proportion of patients who presented with an unrelated chief complaint who sustained a change in diagnosis (0.48 to 0.66) in the post-intervention group. There were no significant differences relative to related chief complaint and change in diagnosis between the pre- and post-intervention patient groups (Fisher's exact test, one tailed, p < .15).

DISCUSSION
To our knowledge, this is the first educational intervention study targeting diagnostic error education in the pediatric primary care setting. Reflective reasoning among providers has the potential to decrease diagnostic errors.
The providers who participated in the educational intervention had the opportunity to formally self-evaluate their existing diagnostic reasoning skills, engage in conversation related to diagnostic errors, and improve their knowledge related to heuristic and analytic thinking and its relationship to diagnostic accuracy. The providers were able to show an improved ability to identify patient risk factors for incurring a diagnostic error. Although self-reported and not measured in actual practice, the providers were more willing to take a diagnostic timeout and interrupt a colleague to consult when confronted with an unfamiliar clinical presentation after participation in the educational program. This may be due to an improved understanding of the need to engage in self-reflective behaviors when confronted with unrecognizable clinical situations. This finding was consistent with Mamede and Schmidt's (2017) systematic review of reflective reasoning, which indicated that reflective reasoning could be a useful strategy to reduce diagnostic errors and increase diagnostic performance. Research related to pediatric diagnostic errors is predominantly extrapolated from malpractice claim reviews and indicates that a narrowed diagnostic focus is a major cause of judgment errors (CRICO, 2014; Giardina et al., 2013; IOM, 2015; Kain & Caldwell-Andrews, 2006). Despite the lack of statistical significance, the small increase in the proportion of providers (0.20 pre-intervention to 0.25 post-intervention) who used a differential diagnosis after participating in the educational intervention may be a preliminary indicator of providers' willingness to engage in self-reflective practice to broaden their diagnostic focus. Use of heuristic and analytic thinking, and toggling between the two diagnostic reasoning strategies, was discussed by the providers during the case study analysis. In particular, providers expressed some trepidation about the difficulties that ensue when trying to decide the most appropriate colleague to consult when reflecting on more difficult diagnostic dilemmas. A review of the EMRs revealed that the proportion of patients who presented with a related chief complaint within a 14-day period declined from the pre- to post-intervention group. This would be consistent with the intent of the study to ultimately improve diagnostic accuracy, thus decreasing the number of patients returning within a 14-day period with a related chief complaint.
There was a notable difference in the number of contacts for patients who presented two or more times in a 14-day period (5.77 visits per 14 days) versus the overall patient population within the study organization (1.82 visits per year). In addition, the mean proportion of patients noted to have a chronic illness in both groups was 0.408, significantly higher than the proportion of U.S. children (0.08) with at least one chronic illness reported by the National Health Council (2014). Risk factors for a diagnostic error include having three or more providers involved in one’s care or having an identified chronic illness (Schwappach, 2012; Singh et al., 2014).

Study limitations include a small sample size, a low power of 0.26, the inability to supervise video participation, and the lack of a control group of providers. Modifications to replicate the study should include a control group of providers, a larger sample size of patients and providers, and the use of three reviewers for each patient chart to also determine diagnostic accuracy. Future studies should be directed toward the influence of open notes, EMR diagnostic support tools, or patient checklists on diagnostic accuracy and patient engagement (Sibbald, de Bruin, & van Merrienboer, 2013). Providers are often pressured to evaluate high volumes of patients in compressed timeframes; these organizational constraints on diagnostic reasoning may also be an important topic for future research related to diagnostic accuracy.

Morally incentivizing providers to engage in self-reflection regarding their diagnostic performance even when they are pressured to see higher patient volumes is an ongoing challenge. The expectation that providers see higher volumes of patients with less allotted time to evaluate and treat them has created a health care conundrum, one that highlights the struggles providers face when organizational demands conflict with professional ethics. Supporting constructive feedback to colleagues rather than avoiding differences, and holding ourselves as well as our colleagues accountable for achieving accurate, timely, and safe care, are possible measures to enhance diagnostic accuracy (Cosby, Zipperer, & Balik, 2015). Providing peer support by encouraging diagnostic timeouts when there are difficult clinical presentations and promoting a culture of safety within organizations are methods to support the development of moral courage among providers (Cosby et al., 2015). Ongoing conversations that engage patients in their diagnosis, with the provider “thinking aloud” as he or she works through a differential diagnosis, have been suggested in the literature (Croskerry, 2009; Thammasitboon, Thammasitboon, & Singhal, 2013).
Verbally explaining to a patient why a particular diagnosis does or does not fit, based on the patient’s symptoms and test results, requires maturity in diagnostic decision-making skills, yet engagement of patients in the diagnostic process has the potential to reduce errors (IOM, 2015).

The current health care system is struggling to understand, support, and improve the diagnostic process. Excellence in clinical reasoning and a sound knowledge base are at the core of diagnostic accuracy. This study raised provider awareness by spotlighting the possible causes of diagnostic errors, cognitive biases that may occur during diagnostic reasoning, and possible strategies to reduce diagnostic errors in the primary care setting. The inclusion of a decision support system platform using standardized patients with varying levels of difficulty has been shown to aid the development of a broader differential diagnosis (Porat, Delaney, & Kostopoulou, 2017; Thammasitboon et al., 2013). Integrating this type of simulated EMR platform into the educational preparation of NPs may enhance students’ clinical reasoning skills, increase their understanding of the importance of an expanded differential diagnosis, and stimulate students to reflect on why a diagnosis may or may not match a patient’s clinical presentation. Metacognition exercises that include case study analyses focusing on identifying cognitive and system biases, and strategies to avoid these biases, should be included in the educational programs of both NPs and physicians. Conversations between nurse educators and students should include the use of diagnostic timeouts and the acceptance of both heuristic and analytic thinking in the presence of situational awareness.

The IOM (2015) has recommended that practicing providers receive ongoing education related to diagnostic accuracy and strategies to prevent errors. Patients would ultimately benefit from the implementation of a similar provider-centered educational program focusing on diagnostic reasoning strategies and the use of diagnostic timeouts. Patient safety officers have a unique opportunity to implement similar programs within both large and small health care institutions. When the study was completed, the providers met and discussed the pediatric practice’s model of care and how the practice could improve continuity of care to support patient engagement and diagnostic accuracy. An additional NP was funded, and the model of care was changed to enhance the continuity of patient care by creating interdisciplinary teams.
Engaging providers from novice to expert in the continual process of reflective practice related to diagnostic accuracy, and encouraging organizations to value open, nonjudgmental discussions of diagnostic performance, may provide a better understanding of preventive strategies related to diagnostic errors.

The authors would like to acknowledge Mary Fischer, PhD, Nurse Researcher, Atrius Health; Jonathan Watson, MA, Clinical Research Project Coordinator; and Leilani Hernandez, MPH, Senior Data Analyst, Atrius Health, for their support with editorial comments, orchestration of study site requirements, and retrieval of data.

REFERENCES

Brady, P., & Goldenhar, L. (2014). A qualitative study examining the influences on situation awareness and the identification, mitigation and escalation of recognized patient risk. British Medical Journal of Quality and Safety, 23, 153-161.

Carroll, A., & Buddenbaum, J. (2007). Malpractice claims involving pediatricians: Epidemiology and etiology. Pediatrics, 120(1), 10-17.

Cosby, K., Zipperer, L., & Balik, B. (2015). Tapping into the wisdom in the room: Results from participant discussion at the 7th International Conference on Diagnostic Error in Medicine facilitated by the World Café technique. Diagnosis, 2, 189-193.

CRICO Foundation. (2014). Annual benchmarking report: Malpractice risks in the diagnostic process. Cambridge, MA: CRICO Strategies. Retrieved from https://www.rmf.harvard.edu/cbsreport

Croskerry, P. (2009). A universal model of diagnostic reasoning. Academic Medicine, 84, 1022-1028.

Durham, C., Fowler, T., & Kennedy, S. (2014). Teaching dual-process diagnostic reasoning to doctor of nursing practice students: Problem-based learning and illness script. Journal of Nursing Education, 53, 646-650.

Ely, J., Kaldjian, L., & D’Alessandro, D. (2012). Diagnostic errors in primary care: Lessons learned. Journal of the American Board of Family Medicine, 25(1), 87-97.

Fischer, G., Fetters, M., Munro, A., & Goldman, E. (1997). Adverse events in primary care identified from a risk-management database. Journal of Family Practice, 45(1), 40-46.

Giardina, T., King, B., Ignaczak, A., Pault, D., Hoeksema, L., Mills, P., ... Singh, H. (2013). Root cause analysis reports help identify common factors in delayed diagnosis and treatment of outpatients. Health Affairs, 32, 1368-1375.

Graber, M. (2013). The incidence of diagnostic error in medicine. British Medical Journal of Quality and Safety, 22(Supplement 2), 21-27.

Graber, M., Kissam, S., Payne, V., Meyer, A., Sorensen, A., Lenfestey, N., & Singh, H. (2012). Cognitive interventions to reduce diagnostic error: A narrative review. British Medical Journal of Quality and Safety, 21, 535-557.

Institute of Medicine: National Academies of Sciences, Engineering and Medicine, Committee on Diagnostic Errors in Healthcare, Board of Healthcare Services. (2015). Improving diagnosis in health care. Washington, DC: National Academies Press.

JMP Software. (2015). JMP 12.2 statistical program. Cary, NC: SAS Institute Inc.

Kain, Z., & Caldwell-Andrews, A. (2006). What pediatricians should know about child-related malpractice payments in the United States. Pediatrics, 118, 465-468.

Kirkendall, E., Kloppenborg, E., Papp, J., White, D., Frese, C., Hacker, A. S. N., ... Kotagal, U. (2012). Measuring adverse events and levels of harm in pediatric inpatients with the global trigger tool. Pediatrics, 130, 1206-1214.

Mamede, S., & Schmidt, H. (2017). Reflection in medical diagnosis: A literature review. Health Professions Education, 3, 15-25.

National Health Council. (2014). About chronic illness. Washington, DC: Author. Retrieved from www.nationalhealthcouncil.org/sites/default/files/NHC_Files/Pdf_Files/AboutChronicDisease.pdf

National Patient Safety Foundation. (2011). Reducing diagnostic errors, education module. Boston, MA: Author.

Porat, T., Delaney, B., & Kostopoulou, O. (2017). The impact of a diagnostic decision support system on the consultation: Perceptions of GPs and patients. BMC Medical Informatics and Decision Making, 17, 79.

Pirret, A. M., Neville, S., & LaGrow, S. (2015). Nurse practitioners versus doctors’ diagnostic reasoning in a complex case presentation to an acute tertiary hospital: A comparative study. International Journal of Nursing Studies, 52, 716-726.

Sarkar, U., Bonacum, D., Strull, W., Spitzmueller, C., Jin, N., Lopez, A., ... Singh, H. (2012). Challenges of making a diagnosis in the outpatient setting: A multisite survey of primary care physicians. British Medical Journal of Quality and Safety, 21, 641-648.

Schiff, G., Hasan, O., Kim, S., Abrams, R., Cosby, K., Lambert, B., ... McNutt, R. (2009). Diagnostic error in medicine: Analysis of 583 physician-reported errors. Archives of Internal Medicine, 169, 1881-1887.

Schwappach, D. (2012). Risk factors for patient-reported medical errors in 11 countries. Health Expectations, 17, 321-331.

Sherbino, J., Dore, K., Siu, E., & Norman, G. (2011). The effectiveness of cognitive forcing strategies to decrease diagnostic error: An exploratory study. Teaching and Learning in Medicine, 23, 78-84.

Sibbald, M., de Bruin, A., & van Merrienboer, J. (2013). Checklists improve experts’ diagnostic decisions. Medical Education, 47, 301-308.

Singh, H., Classen, D., & Sittig, D. (2011). Creating an oversight infrastructure for electronic health record-related patient safety hazards. Journal of Patient Safety, 7, 169-174.

Singh, H., Giardina, T., Forjuoh, S., Reis, M., Kosmach, S., Khan, M., & Thomas, E. (2012a). Electronic health record-based surveillance of diagnostic errors in primary care. British Medical Journal of Quality and Safety, 21, 93-100.

Singh, H., Giardina, T. D., Meyer, A., Forjuoh, S., Reis, M., & Thomas, E. (2013). Types and origins of diagnostic errors in primary care settings. Journal of the American Medical Association Internal Medicine, 173, 418-425.

Singh, H., Giardina, T., Peterson, L., Smith, M., Paul, L. W., Dismukes, K., ... Thomas, E. (2012b). Exploring situational awareness in diagnostic errors in primary care. British Medical Journal of Quality and Safety, 21, 30-38.

Singh, H., Meyer, A., & Thomas, E. (2014). The frequency of diagnostic errors in outpatient care: Estimations from three large observational studies involving US adult populations. British Medical Journal of Quality and Safety, 23, 727-731.

Singh, H., & Sittig, D. (2015). Advancing the science of measurement of diagnostic errors in healthcare: The Safer Dx framework. British Medical Journal of Quality and Safety, 24, 103-110.

Singh, H., Thomas, E., Khan, M., & Petersen, L. (2007). Identifying diagnostic errors in primary care using an electronic screening algorithm. Archives of Internal Medicine, 167, 302-308.

Singh, H., Thomas, E. J., Wilson, L., Kelly, A., Pietz, K., Elkeeb, D., & Singhal, G. (2010). Errors of diagnosis in pediatric practice: A multisite survey. Pediatrics, 126, 70-79.

Thammasitboon, S., Thammasitboon, S., & Singhal, G. (2013). System related factors contributing to diagnostic errors. Current Problems in Pediatric and Adolescent Health Care, 43, 242-247.

Tsalatsanis, A., Hozo, I., Kumar, A., & Djulbegovic, B. (2015). Dual processing model for medical decision-making: An extension to diagnostic testing. PLOS ONE, 10, e0134800.

Unbeck, M., Lindemalm, S., Nydert, P., Ygge, B., Nylen, U., Berglund, C., & Harenstam, K. (2014). Validation of triggers and development of a pediatric trigger tool to identify adverse events. BMC Health Services Research, 14, 655.

Journal of Pediatric Health Care