Applied Ergonomics 85 (2020) 103047
Assessing the cognitive and work load of an inpatient safety dashboard in the context of opioid management

Theresa E. Fuller a,b, Pamela M. Garabedian c, Demetri P. Lemonias a, Erin Joyce a, Jeffrey L. Schnipper b,d, Elizabeth M. Harry b,d, David W. Bates b,c,d, Anuj K. Dalal b,d, James C. Benneyan a,e,*

a Healthcare Systems Engineering Institute, Northeastern University, Boston, MA, USA
b Brigham and Women's Hospital, Boston, MA, USA
c Partners Healthcare, Incorporated, Boston, MA, USA
d Harvard Medical School, Boston, MA, USA
e College of Engineering, Northeastern University, Boston, MA, USA
Keywords: Patient safety; Healthcare; Cognitive load; NASA TLX

Abstract
For health information technology to realize its potential to improve flow, care, and patient safety, applications should be intuitive to use and burden neutral for frontline clinicians. We assessed the impact of a patient safety dashboard on clinician cognitive and work load within a simulated information-seeking task for safe inpatient opioid medication management. Compared to use of an electronic health record for the same task, the dashboard was associated with significantly reduced time on task, mouse clicks, and mouse movement (each p < 0.001), with no significant increase in cognitive load or task inaccuracy. Cognitive burden was higher for users with less experience, possibly attributable in part to usability issues identified during this study. Findings underscore the importance of usability, cognitive load, and work load analysis during the design and implementation of health information technology applications.
1. Introduction

In 2005, the National Academy of Engineering and Institute of Medicine (now the National Academy of Medicine) published a joint report calling for interdisciplinary collaboration among systems engineering, human factors, and information technology to improve healthcare quality, productivity, and safety (Reid et al., 2005). Dramatic growth has since occurred in health information technology (HIT) applications that address safety, billing, flow, and efficiency, with electronic health records (EHRs) in particular playing a large role in clinician workflow (Non-federal Acute Care Hospital Electronic Health Record Adoption). While EHRs have the potential to improve care efficiency and safety (Kern et al., 2013), achieving impact has been slow (Bates and Singh, 2018; Russo et al., 2016). Low use of human factors engineering in the design of HIT may explain some of this gap (Ratwani et al., 2016; El-Kareh et al., 2009; Alotaibi and Federico, 2017), with developed systems often unable to support high variability in clinician computer proficiency, workflows, perspectives, and roles (Carayon et al., 2018; Rosso and Saurin, 2018). Within EHRs, information often is entered by users in different manners, stored in silos, and disjointly accessible (Beasley et al., 2011; Bowman, 2013; Carayon et al., 2019; Ratwani et al., 2018). Searching through an EHR can raise a user's cognitive load past their ability to process the information, which in turn can increase errors, fatigue, inefficiency, and provider burnout (Garrett, 2010; Holden, 2011; Menachemi and Collum, 2011; Reisman, 2017; Walsh, 2004).

Electronic dashboards that aggregate and centralize EHR data have been developed to address these problems, facilitate patient co-management by care teams (Khairat et al., 2018), and mitigate patient safety issues such as medication errors and hospital-acquired conditions (Donaldson et al., 2005; Egan, 2006). In prior work, we developed a patient safety dashboard (Bersani et al., 2019; Mlaver et al., 2017) to automate data seeking and aggregation, provide information on clinical guidelines, and assist in clinical decision making for hospitalized patients. One of the dashboard safety domains, opioid management, aimed to ameliorate medication errors and inadequate monitoring that have been associated with severe inpatient adverse drug events (Herzig et al., 2014).
* Corresponding author. Healthcare Systems Engineering Institute, Northeastern University, Boston, MA 02115, USA. E-mail address: [email protected] (J.C. Benneyan).
https://doi.org/10.1016/j.apergo.2020.103047
Fig. 1. Illustration of the patient safety dashboard, aggregating data extracted from different locations in the EHR into a unified unit-level view (top) and a detailed patient-specific view (bottom).
When managing pain, clinicians ideally should consider a variety of information, including patient comorbidities, medication combinations, opioid type, dose amounts, delivery routes, frequency, pain level, respiratory rate, level of consciousness, and other vital signs (New Opioid Prescribing; RADEO). Because these numerous data typically are scattered across an EHR, locating, assimilating, and comparing them to prescribing guidelines can make safe pain management challenging and burdensome. We therefore evaluated the dashboard's impact on work and cognitive load, versus standard practice using a commercial EHR, via a standardized data gathering task focused on collecting patient opioid medication management information.
2. Methods

2.1. Study participants

We recruited a target of 24 staff clinicians from a large academic medical center in Boston, MA, who consented to participate in the study over a 12-month period following dashboard implementation. Eligible participants were physicians or physician assistants who had worked on a clinical unit where the dashboard had been implemented and who had not been involved in the dashboard's development. All participating clinicians were given a formal dashboard training session (20 min) prior to its implementation, along with an informal "in-service" training (5 min) by medical leadership during the intervention period.
Fig. 2. Example of dashboard information display for the opioid management safety domain. Respiratory risks include: advanced age (>55 years); active smoker; obesity (body mass index (BMI) > 30 kg/m2); decreased kidney function (glomerular filtration rate (GFR) < 30 mL/min/1.73 m2); impaired liver function (total bilirubin > 5 mg/dL); impaired blood coagulation (prothrombin international normalized ratio (PT-INR) > 2); depressed respiratory function (Richmond Agitation-Sedation Scale (RASS) < -2); concomitant sedatives administered (Alprazolam, Baclofen, Clonazepam, Cyclobenzaprine, Diazepam, Diphenhydramine, Gabapentin, Hydroxyzine, Lorazepam, Methocarbamol, Metoclopramide, Prochlorperazine, Promethazine, Temazepam, Trazodone, or Zolpidem).
We did not exclude potential participants based on their duration of prior dashboard or EHR use, so that the effect of experience could be examined. All study activities were approved by the health system's institutional review board.
2.2. Patient safety dashboard

The dashboard (Fig. 1) was developed using user-centered design principles to aggregate and display real-time clinical information for hospitalized patients (Bersani et al., 2019; Dalal et al., 2019; Katsulis et al., 2016; Mlaver et al., 2017). Clinical end-users, as well as nursing and medical leadership, were consulted during the iterative design and development of the display, logic, and workflow. The dashboard application launches directly from a link within the EHR. It obtains information via real-time EHR data services (including test results and data inputted by nursing, medical, and consult staff) as described further by Mlaver et al. (2017). These data are analyzed by customizable logic and displayed via a unified color-coded display with relevant clinical information and clinical decision support. The underlying software logic was based on the study hospital's treatment guidelines, clinician input, and priorities. The dashboard was intended to ensure that core safety domains are reviewed routinely during each patient's hospitalization by the entire care team, primarily during morning bedside rounds, the start of shifts, patient handoffs, and interdisciplinary care coordination huddles.

For each of fifteen safety domains identified during the design process (e.g., fall risk, opioid management, delirium), the dashboard displays each patient's current status by color (red, yellow, green, gray), indicating "action needed", "risky state", "guideline compliant", or "not applicable", respectively. The dashboard then provides clinicians with appropriate decision support and context (e.g., a red flag in the opioid management domain could display "action needed: patient on opiates, over-sedated, discontinue opioids" along with the patient's pain scores, medications, etc.). The display automatically updates as data are updated, which otherwise would need to be searched for in many locations in the EHR.

Fig. 2 illustrates how disparate information relevant to opioid management is aggregated from various EHR locations into the dashboard patient-level view. For the "Opioid Management" domain, data are extracted from the physician orders (diet order), nursing flowsheets (pain scores, sedation score), medication administration record (morphine equivalent dose, analgesics administered, concomitant sedatives), and lab results (respiratory risks). This safety domain is intended to alert clinicians to a range of pain management issues and incongruities, such as respiratory risks or medications administered intravenously to patients who are eating regularly.
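As an illustration of this status logic, the sketch below maps a domain's extracted data to the four color-coded states. The function name, inputs, and thresholds (borrowed loosely from Fig. 2) are illustrative assumptions; the production dashboard's rules were customizable and guideline-driven, not this simplified code.

```python
from enum import Enum
from typing import List, Optional

class DomainStatus(Enum):
    """The four color-coded states displayed for each safety domain."""
    ACTION_NEEDED = "red"
    RISKY_STATE = "yellow"
    GUIDELINE_COMPLIANT = "green"
    NOT_APPLICABLE = "gray"

def opioid_domain_status(on_opioids: bool,
                         rass_score: Optional[int],
                         respiratory_risks: List[str]) -> DomainStatus:
    """Hypothetical, simplified status rule for the opioid management domain."""
    if not on_opioids:
        return DomainStatus.NOT_APPLICABLE
    # Over-sedation while on opioids (e.g., RASS < -2, per Fig. 2)
    # would warrant an "action needed" (red) flag.
    if rass_score is not None and rass_score < -2:
        return DomainStatus.ACTION_NEEDED
    # Any documented respiratory risk factor places the patient
    # in a "risky state" (yellow).
    if respiratory_risks:
        return DomainStatus.RISKY_STATE
    return DomainStatus.GUIDELINE_COMPLIANT

print(opioid_domain_status(True, -3, ["active smoker"]).value)  # red
```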
2.3. Study task

Each participating clinician completed the study task on two patients, one using the EHR and the other using the dashboard, with pairing and order randomized by a spreadsheet random number generator. For each clinician, a research assistant (RA) selected two patients meeting the inclusion criteria: (1) currently admitted to an inpatient medical-surgical unit; (2) prescribed and given opioids within the last 24 h; (3) had a documented pain score; (4) had at least one documented respiratory risk; and (5) were unfamiliar to the participating staff clinician (i.e., the patient was cared for by a different medical team and in a separate unit). Each included patient was used for only one clinician. Both patients assigned to a given participating clinician were matched by dashboard status such that both were coded as "yellow" or both as "red".

The standardized study task (Fig. 3) consisted of five data gathering questions regarding the preceding 24 h of clinical care for each patient. Participants were asked to complete the following activities:
Fig. 3. Standardized sheet with five information gathering tasks (patient pain score documentation and range; type and amount of opioids administered; morphine equivalent dose administered; type and amount of any non-opioid analgesics; respiratory risk documentation).
(1) evaluate a patient's pain score; (2) assess the type and amount of opioids administered; (3) calculate the morphine equivalent dose (MED) administered; (4) assess the type and amount of any non-opioid analgesics administered; and (5) identify whether documentation of key respiratory risks was completed for the patient. Depending on the question, participants responded with clinical information, "yes/no", or no answer if they were uncertain about how to respond or unable to respond. When conducting the study task using the dashboard, clinicians could not use the EHR (and vice versa).
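For context, the MED in task (3) is a weighted sum of administered opioid doses. The sketch below uses approximate oral morphine-equivalent conversion factors for illustration only; the factor values, drug list, and function name are assumptions for this sketch, and the conversion tables used clinically vary by guideline and institution.

```python
# Illustrative oral morphine-equivalent conversion factors (assumptions for
# this sketch); clinically used factors vary by guideline and institution.
MED_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "hydrocodone": 1.0,
    "tramadol": 0.1,
}

def morphine_equivalent_dose(administrations):
    """Sum opioid administrations over 24 h as morphine milligram equivalents.

    `administrations` is a list of (drug_name, dose_mg) tuples; unknown drugs
    raise KeyError rather than silently under-counting the total.
    """
    return sum(dose_mg * MED_FACTORS[drug.lower()]
               for drug, dose_mg in administrations)

# Example: 4 mg hydromorphone + 10 mg oxycodone -> 4*4 + 10*1.5 = 31 MME.
print(morphine_equivalent_dose([("Hydromorphone", 4), ("Oxycodone", 10)]))
```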
2.4. Data collection

Each evaluation session began with an RA reviewing instructions with the participating clinician about how to complete the study task sheet and asking for basic information about their demographics, prior experience, and comfort levels with the EHR and dashboard. While they conducted the task, participants were instructed to think aloud and verbally narrate their activity and thought process. If they did not answer a study question, they were asked to articulate their reason. All participant audio comments and computer screen activity, including mouse clicks and mouse movement, were recorded using the Morae computer use tracking software (TechSmith Morae).

Immediately after completing the study task with the dashboard and with the EHR separately, participating clinicians completed the NASA Raw Task Load Index (RTLX) cognitive load questionnaire (Hart, 2006), which assesses perceived work load on six dimensions: mental demand, physical demand, temporal demand, overall performance, frustration level, and effort (Hart and Staveland, 1988). The RTLX asks users to report their workload for each dimension on an interval scale from 1 to 20, where a higher value indicates a higher demand.
Table 1
Task analysis by subtask comparing task efficiency and user accuracy between the two technologies. Responses were coded as incorrect if a question was answered incorrectly or not answered. Reported p values are for column totals (time on task, mouse clicks, mouse movement, and percent correct).

Sub-Task                            | Mean Time on Task (s) | Mean Mouse Clicks | Mouse Movement (mean pixels) | Answered Correctly (%, N = 18)
                                    | EHR      Dashboard    | EHR    Dashboard  | EHR      Dashboard           | EHR     Dashboard
Pain Score Documented               | 20.33    29.17        | 2      3          | 3517     3481                | 94      89
Pain Score Range                    | 41.82    13.18        | 6      0          | 5088     485                 | 89      89
Opioids Administered                | 39.72    24.56        | 9      1          | 4099     729                 | 100     89
Morphine Equivalent Dose (MED)      | 41.09    6.04         | 7      0          | 8536     144                 | 22      100
Non-Opioid Analgesics Administered  | 29.48    8.43         | 4      0          | 3404     190                 | 89      89
Respiratory Risk                    | 151.52   122.83       | 25     8          | 29442    12284               | 89      83
Total                               | 323.96   204.21       | 53     12         | 54084    17315               | 81      90
p value                             | <0.001                | <0.001            | <0.001                       | 0.076
Differences in accuracy of information gathered by users were not statistically significant overall (p = 0.076) nor by individual sub-task (assuming α = 0.05; results not shown), except for morphine equivalent dose administered (22% accuracy using the EHR versus 100% using the dashboard, p < 0.001). During that sub-task, two users answered inaccurately in the EHR and 12 did not respond. Examples of inaccuracies for both systems include providers listing the incorrect dosing frequency and miscategorizing medications as opioids versus non-opioids.
While these dimensions also can be given relative weights, for simplicity we used the unweighted raw version of the NASA tool, which is similarly sensitive to task demands (Carswell et al., 2010; Tsang and Vidulich, 2006) and has often been applied to health information technology (Ahmed et al., 2011; Pickering et al., 2010).
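A minimal sketch of the raw (unweighted) scoring used here: the RTLX score is simply the mean of the six dimension ratings, whereas the full TLX would first weight each dimension by pairwise comparisons. The dimension keys and example ratings below are illustrative.

```python
def raw_tlx(ratings):
    """Raw (unweighted) NASA-TLX: the plain mean of the six dimension ratings.

    `ratings` maps each dimension to its 1-20 rating as administered in this
    study; the weighted TLX would instead apply pairwise-comparison weights.
    """
    dimensions = {"mental", "physical", "temporal",
                  "performance", "frustration", "effort"}
    assert set(ratings) == dimensions, "all six dimensions are required"
    return sum(ratings.values()) / len(dimensions)

# Hypothetical ratings for one participant and one technology:
print(raw_tlx({"mental": 12, "physical": 3, "temporal": 8,
               "performance": 5, "frustration": 7, "effort": 10}))  # 7.5
```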
2.5. Analysis

Descriptive statistics were calculated on participant demographic data, technology experience (duration of prior use and comfort level), average time on task, mouse clicks, mouse movement (overall and for each of the five tasks), and RTLX ratings (overall and by domain). Regression and breakpoint analyses were conducted on participant differences in RTLX scores (EHR versus dashboard) using familiarity with the dashboard (duration of prior use) as the independent variable. The breakpoint and the value at which the regressed difference in scores equaled zero (intersection of the regression line with the horizontal axis) were used separately to estimate lengths of time until dashboard familiarization. Regression and breakpoint analyses were conducted using the R segmented package (R Core Team, 2017; Muggeo, 2008).

Differences between the dashboard and EHR task measures (time on task, mouse clicks, mouse movement, task accuracy) and their RTLX scores were tested separately for statistical significance (one-sided Wilcoxon signed rank tests). Assuming 24 study participants (18 final) yields at least 80% power to detect reductions of 23 s (26) and 13.5 points (16) or greater in the primary work (overall time on task) and cognitive (overall RTLX score) measures, respectively. A two-factor analysis of variance (ANOVA) also was conducted to assess differences in RTLX scores based on technology order (i.e., EHR-before-dashboard versus dashboard-before-EHR).

Task questionnaire data completed by participants were classified as correct, incorrect, not answered (including clinician unable or chose not to answer), or study error, based on an RA's review and comparison with data in the EHR. The RA (TF) was trained by clinical staff (JS, AD) on where the correct data elements were documented in the EHR and conducted the EHR review after the study task was completed. These results were reviewed and confirmed by a second research team member (PG).

User behaviors and user-reported usability issues were extracted from Morae audio and screen recordings. User behaviors were grouped by task domain. Issues were categorized and confirmed using a two-person consensus approach (PG and TF) and ranked according to the Dumas and Redish severity index (Dumas and Redish, 1999). This severity index ranks issues in electronic application use into four categories, ranging from needs for subtle enhancements to completely preventing task completion. Theme and severity disagreements between reviewers were resolved via discussion to consensus, with PG making a final ruling if necessary.
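The paired testing above can be sketched as follows. The per-participant times are hypothetical placeholders (not the study data, and the study's analyses were run in R); only the one-sided signed-rank test call mirrors the described method.

```python
import numpy as np
from scipy import stats

# Hypothetical paired per-participant time-on-task totals in seconds
# (18 participants); NOT the study data.
ehr_time = np.array([310, 295, 402, 288, 350, 330, 299, 410, 305,
                     320, 340, 315, 298, 360, 325, 300, 345, 312])
dashboard_time = np.array([210, 190, 260, 200, 215, 230, 195, 250, 205,
                           210, 225, 200, 198, 240, 215, 190, 220, 205])

# One-sided paired Wilcoxon signed-rank test: H1 is that time on task
# is greater with the EHR than with the dashboard.
stat, p = stats.wilcoxon(ehr_time, dashboard_time, alternative="greater")
# With three work measures tested, compare p to the Bonferroni-adjusted
# threshold of 0.05 / 3 ≈ 0.017 rather than 0.05.
print(f"W = {stat}, p = {p:.4g}")
```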
3. Results

3.1. Demographics

Of 39 clinicians we attempted to recruit, 24 (62%) consented to participate, with the remainder either not responding to email invitations or citing availability constraints. Study errors occurred with six of the assessments: one user logged into the wrong version of the EHR; two users navigated away from the dashboard to answer questions; and three Morae recordings were lost (battery failure and network issues resulting in the loss of all electronic data). The remaining 18 clinicians consisted of 10 attending physicians (56%), six residents (33%), and two physician assistants (11%). Participants were 72% male and on average 37.1 years old, in their current professional role for 6.2 years, and at this medical center for 4.1 years.

Participants had used an EHR for an average of 7.5 years, and this EHR specifically for 3.7 years (range: 1-10 years). All participants responded that they were "comfortable" or "very comfortable" with this EHR, and 83% were "experienced" or "very experienced" (the top two options on 4-point scales). Full-time clinicians used this EHR on a weekly basis, whereas part-time clinicians used it for weeklong stretches at least once every other month. Participants reported less experience with the dashboard, averaging four months (range: one day (five participants) to 1.5 years (one participant)). Thirty-three percent responded being "comfortable" or "very comfortable" with the dashboard and 17% "experienced" or "very experienced". All participants had used this EHR for at least one year longer than they had been aware of the dashboard.
3.2. Task analysis

Data for time on task (excluding dashboard load times, which averaged 1 min during initial implementation, whereas EHR load time was on the order of a few seconds), mouse clicks, and mouse movement (pixels traveled) are summarized in Table 1. For each of these three category totals, improvements when using the dashboard versus the EHR are both statistically and practically significant (each with p < 0.001; Bonferroni-adjusted individual rejection thresholds are 0.017 to achieve an overall α = 0.05), with 37% less time on task, 87% fewer mouse clicks, and 68% less mouse movement.

3.3. Cognitive burden
Fig. 4. Difference in EHR versus dashboard RTLX scores as a function of participant prior dashboard experience. Positive y-axis values indicate a participant found the task more demanding when using the dashboard. No overall statistical difference was found, with and without adjusting for dashboard experience.
The difference in overall RTLX scores between the dashboard and EHR (2.7 points) was not statistically significant by the Wilcoxon test (p = 0.54), nor via regression accounting for each participant's amount of dashboard experience (Fig. 4). Although piecewise regression identified the best possible breakpoint at 3.96 months of dashboard experience, the difference in slopes before and after that point was not statistically significant (p = 0.59). Neither regression model was statistically significant (linear regression p = 0.258; piecewise regression p = 0.217), also indicating no effect of the amount of dashboard experience on the difference in RTLX scores. ANOVA results also did not find a statistical difference in the RTLX score differences based on which technology a participant used first (p = 0.433).
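The breakpoint search was run with the R segmented package; a rough Python equivalent, with hypothetical data and a simple grid search shown purely for illustration, might look like the following.

```python
import numpy as np

def fit_breakpoint(x, y, candidates):
    """Grid-search piecewise-linear fit: for each candidate breakpoint c,
    regress y on [1, x, max(x - c, 0)] and keep the c minimizing the SSE.
    A simplified stand-in for the R `segmented` package used in the study."""
    best_c, best_sse, best_coef = None, np.inf, None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ coef) ** 2))
        if sse < best_sse:
            best_c, best_sse, best_coef = c, sse, coef
    return best_c, best_coef  # breakpoint and [intercept, slope, slope change]

# Hypothetical data: RTLX difference vs. months of prior dashboard use.
rng = np.random.default_rng(0)
months = rng.uniform(0, 18, size=18)
rtlx_diff = 5 - 1.2 * np.minimum(months, 4) + rng.normal(0, 2, size=18)
bp, coef = fit_breakpoint(months, rtlx_diff, np.linspace(1, 12, 45))
print(f"best breakpoint ≈ {bp:.2f} months")
```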
Individual RTLX domain results (Fig. 5) indicate a significant difference within the performance domain (p = 0.009), with better perceived performance in the EHR. There were no statistically significant differences among the remaining RTLX domains.

3.4. Usability findings

Examples of user behaviors and challenges identified from Morae screen and audio recordings are summarized in Table 2. For example, one clinician who had known about the dashboard for many months was not aware that a patient detail view was available containing clinical decision support and further clinical information.
Fig. 5. Difference in RTLX scores by domain and total. RTLX scores for each domain are on a 20-point scale, with higher scores being worse (e.g., "very high" demand or "failed" performance). Error bars indicate one standard error (SE) about the median.
Table 2
Examples of user behaviors when using the dashboard (DB) and electronic health record (EHR), by task domain.

Pain score documentation
- (DB) When the minimum pain score for a patient = 0 and the highest = 10, users thought the display's "Avg (0-10)" was describing the scale (i.e., "average on a scale from 0 to 10") rather than giving clinical information
- (DB) Users missed the "24h" in the display and mentioned uncertainty regarding timescale
- (EHR) Users would either consult the "Pain Report" viewer, which displayed an average pain score, or would scroll through 24 h of columns of pain score documentation and manually calculate

Opioids and non-opioid analgesics administered
- (DB) When asked about dosing frequency, users would look for the Latin "q_hours" format and found it difficult to answer when each medication was listed by date/time
- (DB) Users would read "Given" for the first drug and not for the rest, and lacked clarity regarding which medications had been administered
- (DB) Users would read aloud from the "analgesics" list and would erroneously list opioids as analgesics, and vice versa
- (EHR) One user used "Ctrl + F" to determine whether a patient was receiving opioids (e.g., searches for "hydromorphone", "oxycodone", "morphine", etc.)

Morphine equivalent dose
- (EHR) Some users went to outside websites to calculate MED; most chose not to calculate

Respiratory risks
- (DB) Users did not know which "respiratory risks" would be pulled into this area. When asked "Are they an active smoker?", upon not seeing the information here, they would not immediately know the answer
- (EHR) Clinicians remarked that they would usually never investigate the nursing flowsheet for data and mentioned never reviewing the pain score or RASS score in the health record
- (EHR) Clinicians looked for patient smoking status in at least three different places (patient history, notes, patient summary)

All
- (DB) Some users did not know they could click into the boxes or on the patient name for more detail
- (DB) Though all information relevant to the task was in the "opioid management" section of the dashboard, many users navigated to different columns and attempted to draw conclusions based on that information
- (DB) Many users accessed the "Patient Expectations" column when looking for pain score
- (EHR) Many users used the search function in the EHR for "Smoker", "RASS", etc. Many users would pass over the "correct answer" multiple times (e.g., they would search for "smoker" on the "patient summary" page, which had that information)
- (Both) Users searched for information documented in prior hospitalizations in the EHR, whereas the dashboard was designed to display only information from the current hospitalization
- (Both) In both the dashboard and EHR, providers used very conservative language when answering questions ("per the pain report", "apparently", "at least got x"), indicating awareness/concern that there could be more data not visible in the viewer
Usability issues and representative quotes are summarized in Table 3, along with their associated Dumas and Redish severity indices (lower = more severe). The most severe usability issues with the dashboard were a long average load time of almost 1 min and a lack of pertinent negatives (significant "non-findings"). In this instance, rather than "Respiratory Risks: None", inclusion of pertinent negatives could display as: "Respiratory Risks: None (Age <55, no concomitant sedatives given, not an active smoker, etc.)". For the EHR, the most severe usability issue was the lack of internal calculation of the MED.
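The pertinent-negatives suggestion amounts to a small rendering change; a hypothetical sketch (the risk names, wording, and helper are illustrative assumptions, not the dashboard's code) follows.

```python
# Hypothetical rendering helper for the suggested "pertinent negatives"
# display; risk names and wording are illustrative assumptions.
CHECKED_NEGATIVES = [
    "Age <55",
    "No concomitant sedatives given",
    "Not an active smoker",
]

def respiratory_risk_line(present_risks):
    """List found risks; otherwise enumerate what was checked and ruled out,
    so a 'None' result is visibly distinct from 'not surveyed'."""
    if present_risks:
        return "Respiratory Risks: " + ", ".join(sorted(present_risks))
    return "Respiratory Risks: None (" + ", ".join(CHECKED_NEGATIVES) + ")"

print(respiratory_risk_line([]))
# Respiratory Risks: None (Age <55, No concomitant sedatives given,
# Not an active smoker)
```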
4. Discussion

Introduction of new health information technology ideally should simplify work and not increase cognitive burden or inaccuracies. Cognitive and work load analysis can be useful to assess changes in the real and perceived work load of new HIT tools and to provide improvement insights. In our study of a new patient safety dashboard versus the existing EHR, while accuracy and cognitive load were not statistically changed, work efficiency increased, with significant reductions in task time, mouse clicks, mouse movement, and physical demand. Usability themes identified during the study also highlighted potential improvements (e.g., listing medication dosing frequency as "q_x hours" in addition to time/day information) and workarounds in the EHR (e.g., reliance on the search function).

Results demonstrate the challenges of new HIT development and the importance of usability testing, echoing prior work by Karsh et al. (2010) and others (Asan et al., 2018; Novak et al., 2019). The negative impact of increased workflow burden on clinician well-being and patient safety is increasingly recognized (Linzer, 2018). Many HIT applications, however, continue to have poor usability, create additional work, or increase cognitive load (Carayon and Hoonakker, 2019). The user distrust of data displays observed in our study is corroborated elsewhere (Flanagan et al., 2009; Middleton et al., 2013), although more transparency about the underlying algorithm might mitigate this (e.g., by listing pertinent negatives and describing both the source and time of the data used).

Our study has some limitations. Participating clinicians were a small, unbalanced convenience sample (young; unequal numbers from each role; unequal dashboard experience) recruited from only one academic medical center, possibly limiting generalizable conclusions. Work measures such as time on task could have been confounded by encouraging participants to think aloud. Usability assessments relied on review of participant recordings rather than formal surveys or physiological measures (Charles and Nixon, 2019). Patients differed between assessments and were not controlled beyond the inclusion criteria, since artificial patient records linked to the dashboard were beyond the technical scope and resources of this study. Providers reporting some familiarity with the dashboard may not have used it much in actual practice. The study task itself was a simulated, hypothetical data gathering exercise conducted outside of actual workflows in a quiet conference room; although providers completed the task while on call, minimal "real-world" interruptions occurred.

Further analyses therefore could confirm results within natural workflows rather than a simulated task, involve a larger and more broadly representative sample of users, and study usability more directly. Since some of the lack of difference found in cognitive load may be due to small sample sizes and dashboard familiarization, a larger study also may be warranted. The study design also could be modified in several ways: to balance participant experience levels across both technologies, to provide a brief dashboard training immediately before the study, and to pilot the task on clinicians outside the research group.
5. Conclusion

Assessing impact on cognitive and physical work load is important when developing new HIT applications. In the described case, introduction of a patient safety dashboard resulted in better workload measures than the EHR in a clinical information retrieval task, with no statistical impact on cognitive measures or accuracy. As illustrated, such studies also can produce insight into usability issues, the importance of human factors in HIT development, and trust issues for data viewed outside their source applications.
Table 3
User-identified usability issues categorized by technology (EHR and dashboard (DB)) with representative user quotes and severity scores based on the Dumas and Redish severity index (Dumas and Redish, 1999) (lower = more severe), where (1) prevents task completion; (2) creates significant delay and frustration; (3) problems have a minor effect on usability; (4) subtle and possible enhancements or suggestions.

Calculating morphine equivalent dose (EHR; severity 1)
- "Don't know - can't tell, I'd have to use a calculator"
- "Not easily. Am I allowed to say that? I could, but it would take me a long time"
- "I would go to google.com"

Lack of pertinent negatives (DB; severity 1)
- "I guess I would say that if it was a risk it would show up under those risk factors"
- "This is a little bit contrived, because I would assume if they were an active smoker that it would show up here so [since there is nothing here], I would assume that they are not, but I don't feel comfortable saying that … I would have to go into the chart and look"
- "It would be nice if it pulled together those prompts of all the respiratory risk factors … I still need to see that to know that the program is surveying that, because I have no way of trusting that"
- "If you don't show negatives, then I'll probably look for them"

Load time (DB; severity 1)
- "This [long loading time] is basically a non-starter for me, it's intensely frustrating"
- "This would prevent me from using it"
- "For me one of the frustrations and temporal demands is that beginning process; I was annoyed that it took so long to open"

Distrust of "summary data displays" (EHR and DB; severity 2)
- "I don't quite know how the [Dashboard] algorithms work so I don't know if I trust this yet; I'm trusting your programming and I don't know if I should"
- "I don't know how good these reports are though, I don't know if they capture everything" (regarding "Pain Report" built into the EHR)

Noisy display (DB; severity 2)
- "I always get a little bit distracted from how much there is going on here … the only colors that I pay attention to are the red ones, so I always wish that I could sort them somehow"
- "In my experience, I would just ignore the yellow"

User inability to alter display (DB; severity 2)
- "And then, the other thing that I've noticed is that sometimes it can get difficult to change the colors of these. Like maybe you have addressed it … but it'll still show as yellow even though the team has discussed it and decided on the plan"

Siloed documentation (EHR; severity 2)
- "I have no idea where I would find that information"

Clarity of display (DB; severity 3)
- [regarding pain score]: "I can't see when it was last assessed, but it looks like several times it has been assessed in the last 24 h"
- "It says 24-h pain, but I can't tell for sure when it's documented"
- "Pain is averaged here but I don't know the time that it was done, but I'm assuming that it was in the last 24 h because of the score"
- "It's hard to tell if this is 0600 today or 0600 yesterday"
- "I have to say I'm not sure if they've received it because it just says 'PRN'"

Foraging for data (EHR; severity 3)
- "This is not the best view to look at this because I have to look back in time" (scrolling 24 h into the past)
- "Oh, I missed methadone [when answering the earlier task] as an opiate for this patient"

Differing clinical definitions (EHR and DB; severity 4)
- "Do they capture Reiki [as a non-opioid analgesic]? Counseling? Alternative sorts of pain therapy?"
- "Amitriptyline could technically count [as a non-opioid analgesic]"
- "… If you count opioids as a sedative"
Funding

This work was supported by the Agency for Healthcare Research and Quality (P30-HS023535) and the Centers for Medicare & Medicaid Services (1C1CMS331050).

References

Ahmed, A., Chandra, S., Herasevich, V., Gajic, O., Pickering, B.W., Jul. 2011. The effect of two different electronic health record user interfaces on intensive care provider task load, errors of cognition, and performance. Crit. Care Med. 39 (7), 1626.
Alotaibi, Y.K., Federico, F., Dec. 2017. The impact of health information technology on patient safety. Saudi Med. J. 38 (12), 1173–1180.
Asan, O., Holden, R.J., Flynn, K.E., Murkowski, K., Scanlon, M.C., Jul. 2018. Providers' assessment of a novel interactive health information technology in a pediatric intensive care unit. J. Am. Med. Inform. Assoc. Open 1 (1), 32–41.
Bates, D.W., Singh, H., Nov. 2018. Two decades since To Err Is Human: an assessment of progress and emerging priorities in patient safety. Health Aff. 37 (11), 1736–1743.
Beasley, J.W., et al., Dec. 2011. Information chaos in primary care: implications for physician performance and patient safety. J. Am. Board Fam. Med. 24 (6), 745–751.
Bersani, K., Fuller, T.E., Garabedian, P., Espares, J., Mlaver, E., Businger, A., Chang, F., Boxer, R., Schnock, K.O., Rozenblum, R., Dykes, P.C., Dalal, A.K., Benneyan, J., Lehmann, L., Gershanik, E., Bates, D.W., Schnipper, J.L., 2019. Use, perceived usability, and barriers to implementation of a patient safety dashboard integrated within a vendor EHR. Appl. Clin. Inf. (in press).
Bowman, S., Oct. 2013. Impact of electronic health record systems on information integrity: quality and safety implications. Perspect. Health Inf. Manag. 10 (Fall).
Carayon, P., Hoonakker, P., Aug. 2019. Human factors and usability for health information technology: old and new challenges. Yearb. Med. Inf. 28 (1), 71–77.
Carayon, P., Hundt, A.S., Hoonakker, P., Jul. 2019. Technology barriers and strategies in coordinating care for chronically ill patients. Appl. Ergon. 78, 240–247.
Carayon, P., Wooldridge, A., Hose, B.-Z., Salwei, M., Benneyan, J., Nov. 2018. Challenges and opportunities for improving patient safety through human factors and systems engineering. Health Aff. 37 (11), 1862–1869.
Carswell, C.M., et al., Dec. 2010. Hands-free administration of subjective workload scales: acceptability in a surgical training environment. Appl. Ergon. 42 (1), 138–145.
Charles, R.L., Nixon, J., Jan. 2019. Measuring mental workload using physiological measures: a systematic review. Appl. Ergon. 74, 221–232.
Dalal, A.K., Fuller, T., Garabedian, P., Ergai, A., Balint, C., Bates, D.W., Benneyan, J., Jun. 2019. Systems engineering and human factors support of a system of novel EHR-integrated tools to prevent harm in the hospital. J. Am. Med. Inform. Assoc. 26 (6), 553–560.
Donaldson, N., Brown, D.S., Aydin, C.E., Bolton, M.L.B., Rutledge, D.N., Apr. 2005. Leveraging nurse-related dashboard benchmarks to expedite performance improvement and document excellence. J. Nurs. Adm. 35 (4), 163–172.
Download SHM's RADEO Guide. [Online]. Available: https://shm.hospitalmedicine.org/acton/media/25526/download-shms-radeo-guide (accessed 26 Oct. 2018).
Dumas, J.S., Redish, J.C., 1999. A Practical Guide to Usability Testing, revised ed. Intellect Ltd, Portland, OR.
Egan, M., Dec. 2006. Clinical dashboards: impact on workflow, care quality, and patient safety. Crit. Care Nurs. Q. 29 (4), 354–361.
El-Kareh, R., et al., Apr. 2009. Trends in primary care clinician perceptions of a new electronic health record. J. Gen. Intern. Med. 24 (4), 464–468.
Flanagan, M.E., Patterson, E.S., Frankel, R.M., Doebbeling, B.N., Aug. 2009. Evaluation of a physician informatics tool to improve patient handoffs. J. Am. Med. Inform. Assoc. 16 (4), 509–515.
Garrett, S.K., 2010. Provider information and resource foraging in healthcare delivery. Int. J. Collab. Enterp. 1, 3–4.
Hart, S.G., 2006. NASA-Task Load Index (NASA-TLX); 20 years later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 50 (9), 904–908.
Hart, S.G., Staveland, L.E., 1988. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock, P.A., Meshkati, N. (Eds.), Advances in Psychology, vol. 52. North-Holland, pp. 139–183.
Herzig, S.J., Rothberg, M.B., Cheung, M., Ngo, L.H., Marcantonio, E.R., Feb. 2014. Opioid utilization and opioid-related adverse events in nonsurgical patients in US hospitals. J. Hosp. Med. 9 (2), 73–81.
Holden, R.J., Mar. 2011. Cognitive performance-altering effects of electronic medical records: an application of the human factors paradigm for patient safety. Cognit. Technol. Work 13 (1), 11–29.
Karsh, B.-T., Weinger, M.B., Abbott, P.A., Wears, R.L., Nov.–Dec. 2010. Health information technology: fallacies and sober realities. J. Am. Med. Inform. Assoc. 17 (6), 617–623.
Katsulis, Z., et al., Sep. 2016. Iterative user centered design for development of a patient-centered fall prevention toolkit. Appl. Ergon. 56, 117–126.
Kern, L.M., Barrón, Y., Dhopeshwarkar, R.V., Edwards, A., Kaushal, R., Apr. 2013. Electronic health records and ambulatory quality of care. J. Gen. Intern. Med. 28 (4), 496–503.
Khairat, S.S., Dukkipati, A., Lauria, H.A., Bice, T., Travers, D., Carson, S.S., May 2018. The impact of visualization dashboards on quality of care and clinician satisfaction: integrative literature review. JMIR Hum. Factors 5 (2), e22.
Linzer, M., Oct. 2018. Clinician burnout and the quality of care. JAMA Intern. Med. 178 (10), 1331–1332.
Menachemi, N., Collum, T.H., May 2011. Benefits and drawbacks of electronic health record systems. Risk Manag. Healthc. Policy 4, 47–55.
Middleton, B., et al., Jun. 2013. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J. Am. Med. Inform. Assoc. 20 (e1), e2–e8.
Mlaver, E., Schnipper, J.L., Boxer, R.B., Breuer, D.J., Gershanik, E.F., Dykes, P.C., Massaro, A.F., Benneyan, J., Bates, D.W., Lehmann, L.S., Dec. 2017. User-centered collaborative design and development of an inpatient safety dashboard. Jt. Comm. J. Qual. Patient Saf. 43 (12), 676–685.
Muggeo, V., 2008. segmented: an R package to fit regression models with broken-line relationships. R News.
New opioid prescribing guidance targets inpatient acute pain management. Specialty Pharmacy Times. [Online]. Available: https://www.specialtypharmacytimes.com/news/new-opioid-prescribing-guidance-targets-inpatient-acute-pain-management (accessed 26 Oct. 2018).
Non-federal acute care hospital electronic health record adoption. [Online]. Available: /quickstats/pages/FIG-Hospital-EHR-Adoption.php (accessed 26 Oct. 2018).
Novak, L.L., Anders, S., Unertl, K.M., France, D.J., Weinger, M.B., Aug. 2019. Improving the effectiveness of health information technology: the case for situational analytics. Appl. Clin. Inf. 10 (4), 771–776.
Pickering, B.W., Herasevich, V., Ahmed, A., Gajic, O., Apr. 2010. Novel representation of clinical information in the ICU. Appl. Clin. Inf. 1 (2), 116–131.
R Core Team, 2017. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
Ratwani, R., et al., Nov. 2016. Mind the gap: a systematic review to identify usability and safety challenges and practices during electronic health record implementation. Appl. Clin. Inf. 7 (4), 1069–1087.
Ratwani, R.M., et al., Sep. 2018. A usability and safety analysis of electronic health records: a multi-center study. J. Am. Med. Inform. Assoc. 25 (9), 1197–1201.
Reid, P., et al., 2005. Building a Better Delivery System: A New Engineering/Health Care Partnership. National Academies Press, Washington, DC.
Reisman, M., Sep. 2017. EHRs: the challenge of making electronic data usable and interoperable. Pharm. Ther. 42 (9), 572–575.
Rosso, C.B., Saurin, T.A., Sep. 2018. The joint use of resilience engineering and lean production for work system design: a study in healthcare. Appl. Ergon. 71, 45–56.
Russo, E., Sittig, D.F., Murphy, D.R., Singh, H., Dec. 2016. Challenges in patient safety improvement research in the era of electronic health records. Healthcare 4 (4), 285–290.
TechSmith Morae [computer program]. Version 3.3.4. Okemos, MI; 2004–2015.
Tsang, P.S., Vidulich, M.A., 2006. Mental workload and situation awareness. In: Handbook of Human Factors and Ergonomics. Wiley-Blackwell, pp. 243–268.
Walsh, S.H., May 2004. The clinician's perspective on electronic health records and how they can affect patient care. BMJ 328 (7449), 1184–1187.