Research Forum Abstracts

alerts, 25.9% (CI = 16.6-35.2%) versus 2.0% (CI = 1.8-2.2%), respectively (difference = 23.9%; CI = 15.7-34.1%; p < 0.001). Conclusion: While allergy alerts more commonly resulted in a change in medication order, the overwhelming majority of medication alerts, including drug-drug interaction alerts, did not result in any change in the CPOE medication order by emergency physicians at our two study sites.
373
Scenario-Based Usability Evaluation of Emergency Department Information System by Clinical Roles and Experience Levels
Kim M, Mohrer D, Shapiro J, Aguilar V, Genes N, Baumlin K, Elkin P/Mount Sinai School of Medicine, New York, NY
Study Objectives: Electronic health record (EHR) usability and human-computer interaction issues have hindered adoption of EHR systems. In this study, we investigated the usability gaps in the Mount Sinai Emergency Department Information System (EDIS) as 2 physicians and 2 nurses, differentiated by experience level, completed two sets of 11-12 scenario-based tasks. All major clinical tasks that incorporate the EDIS were identified, and usability testing software, performance metrics, and system usability scale metrics were employed to analyze these usability gaps and recommend improvements for the development of future EDIS. Method: Quantitative and qualitative usability metrics were collected on the Picis ED PulseCheck EDIS using Morae®, a usability research software tool, in a controlled experimental setting in the Center for Biomedical Informatics at Mount Sinai School of Medicine. We employed five performance metrics as well as a system usability scale (SUS). The performance metrics were: 1) task success rate, 2) time-on-task (TOT), 3) error (free) rate, 4) efficiency, and 5) learnability. SUS is a simple, ten-item scale that provides a comprehensive assessment of subjective usability. In addition to these six metrics, individual tasks were analyzed and compared in order to identify learning curves and subtle workflow and navigation pattern variability as different clinicians completed the same task. Results: Overall, significant usability gaps between expert and novice clinicians were identified: novice clinicians completed the tasks both less efficiently and less effectively, and expressed less satisfaction with the EDIS. Novice clinicians required 40% more TOT and mouse clicks to complete a given task, on average. The SUS showed that novice clinicians rated the system usability at 55-62% (marginal), while experts rated it at 92.5-100% (excellent).
Subtask analysis highlighted navigation pattern differences between users, regardless of experience level. For example, in treating a patient with a hand laceration, two expert physicians used different templates. This subtle difference resulted in two unique physical exams; although the impact in this situation was trivial, workflow variations could potentially result in unintended variations in documentation. Subtask analysis was also instrumental in identifying a number of usability concerns, including 1) uneconomical space usage, particularly in the triage notes, where it is difficult to quickly identify important information, 2) lack of auto-population, which forces the clinician to waste time filling out redundant fields throughout the clinical tasks, 3) ambiguous names for labels (eg, Med SVC) and functions (eg, Repeat), and 4) inconsistent data entry methods. Conclusion: Using Morae® software, we successfully analyzed user experience with regard to clinical role and experience level, highlighting the learning curves and navigation pattern differences associated with this EDIS. Extended studies with more clinician evaluators are warranted. The fact that Picis ED PulseCheck has been ranked as one of the most comprehensive and well-integrated EDIS available, yet still presents usability concerns, indicates that there is room for significant improvement in the EHR industry.
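For readers unfamiliar with the SUS metric used above: it is scored with the standard published formula (ten 5-point Likert items, odd items positively worded, even items negatively worded, total rescaled to 0-100). A minimal sketch of that standard scoring, not code from the study itself:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten Likert responses
    (1 = strongly disagree, 5 = strongly agree).

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are rescaled to 0-100 by multiplying
    by 2.5, the range in which marginal/excellent bands are reported.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, agreeing strongly with every positive item and disagreeing strongly with every negative item gives `sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])` = 100.0, while neutral answers to all ten items give 50.0.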
374
Effect on Emergency Department Patient Satisfaction of a Simple In-Room Information Computer Display
Delasobera BE, Dubin J, Childs C, Gostine A, Klement M, Gatewood J/ Georgetown University School of Medicine, Washington, DC; Washington Hospital Center, Washington, DC
Study Objectives: To evaluate whether providing emergency department (ED) patients with informational screens would increase patient satisfaction. Methods: Three computer screens were installed in separate rooms in an inner-city ED. The screens each displayed the patient’s last name, the attending’s name, the nurse’s name, the date and time, and the room number. The fields on the screen were programmed to automatically populate from the physician workstation. Research assistants were periodically present in the ED during December 2009 and January 2010, providing
patients who met inclusion criteria with surveys. For each survey administered to a patient with a screen, a survey was administered to a patient without a screen who was cared for by the same attending on the same day. The surveys asked patients how informed they felt and about their overall level of satisfaction with their ED visit. Results: A total of seventy patients participated in this study (35 with screens, 35 without). Overall, 27% of patients who had screens in their room knew their attending’s name, compared to 9% of patients without screens (p = 0.1). The same trend was seen for the nurses: 42% of patients who had screens knew their nurse’s name, compared to 24% without screens (p = 0.2). A majority of patients both with (65%) and without (68%) screens felt informed while in the ED, and 85% of the patients with screens felt that the screen improved their ED experience. On a 1-10 scale (1 = worst, 10 = best), patients with screens rated their ED experience at 8.2, while those without screens rated it at 7.7 (p = 0.7). Conclusion: While the findings of this study did not reach statistical significance at the p < 0.05 level, there was a trend toward improved knowledge of providers’ names when an informational screen was available, and a majority of patients with screens felt it improved their ED experience. This study and future studies with a larger sample size will help determine whether purchasing more screens would be a worthwhile investment.
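The abstract does not state which test produced its p-values, and the exact cell counts cannot be recovered from the rounded percentages, so the sketch below is illustrative only: for small 2x2 comparisons like 27% vs 9% of 35 patients each, Fisher's exact test is the usual choice. A self-contained two-sided version via hypergeometric enumeration:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Fixing the margins, the count in the top-left cell follows a
    hypergeometric distribution; the two-sided p-value sums the
    probabilities of all tables at least as extreme (point probability
    no larger than the observed table's).
    """
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def pmf(x):  # P(top-left cell = x) under fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = pmf(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)  # feasible cell values
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs + 1e-12)
```

For example, the classic "lady tasting tea" table [[3, 1], [1, 3]] gives p = 34/70 ≈ 0.486, matching the textbook value.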
375
UObserve: A Mobile App for the Study of Emergency Department Workflow
Li Z, Robinson DJ, Zhang J/UTHealth, Houston, TX
Study Objectives: The emergency department (ED) workflow is characterized by its complexity, which is one major reason the problem of medical error is difficult to address systematically. A better understanding of the workflow is needed in order to address this complexity and design interventions. Traditional ethnographic study has been useful in examining certain aspects of the ED. However, it is designed for exploratory study of a domain without a particular focus, and the data are collected mostly as free text. Thus, a large amount of transcription and processing work is needed before analysis, not to mention the work of incorporating other data sources such as location and audio. In our study, we have predefined the sets of workflow activities to be observed, so we do not need data as extensive as in a traditional study, but we do need precise time measures for each activity, both for statistical analysis and for synchronization with other data sources. Method: In response, we developed a mobile tool for field studies. We designed the tool with a flexible data structure so that we can modify our coding scheme for ED activities with minimal effort. It also allows us to attach multiple dimensions of relevant data, such as the patient involved, location, etc, to each activity. The interface enables researchers to collect most data with a single tap, so that the researcher can keep up with the fast pace of the ED, while also allowing detailed descriptive data to be recorded in case something of research interest not covered by the coding scheme is observed. A timestamp is generated when a new activity is logged. After each data collection session, the data are analyzed using the desktop-side tool. The system provides statistics on how long the subject spent on each activity, a human-readable table of all activities, and other on-demand analyses of the data.
Another feature of the suite is that, because of the precise timestamp on each logged activity, it can synchronize the observers’ data with other data types, such as location and audio, giving us the richest possible description of the context at any specific time point. Conclusion: The tool has been used to collect over 20 hours of workflow data in a level-one trauma center. The data have been successfully used to validate data we collected simultaneously with location sensors. Our team is planning to adopt the tool at a larger scale. This tool brings new possibilities to studying workflow in the field. Compared with ethnographic methods, our approach is precise, concise, and focused on the domain. It reduces the effort needed to collect and process observational data. The limitation is that the maximum benefit can only be achieved with a well-designed coding scheme capable of covering all aspects of research interest in the domain; otherwise, the observer will need to constantly enter free-text data into the tool, which eliminates the tool’s advantage over traditional methods.
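The core mechanism described above — each tap closes the previous activity and opens a new timestamped one, with per-activity totals derived afterward — can be sketched as follows. All names here are illustrative, not the actual UObserve data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ActivityLog:
    """Minimal sketch of a single-tap activity logger (hypothetical names).

    Each logged event records a timestamp, an activity code from the
    coding scheme, and optional extra dimensions (patient, location,
    free-text notes). Durations are reconstructed from consecutive
    timestamps, mirroring the per-activity statistics described above.
    """
    events: list = field(default_factory=list)

    def log(self, activity, when=None, **dims):
        """One 'tap': start a new activity, implicitly ending the last."""
        self.events.append((when or datetime.now(), activity, dims))

    def time_per_activity(self, end):
        """Total time spent in each activity, up to the session end."""
        totals = {}
        stamps = self.events + [(end, None, {})]
        for (t0, act, _), (t1, _, _) in zip(stamps, stamps[1:]):
            totals[act] = totals.get(act, timedelta()) + (t1 - t0)
        return totals

# Hypothetical session: charting, then patient care, then charting again.
log = ActivityLog()
t = datetime(2011, 1, 1, 9, 0, 0)
log.log("charting", when=t, location="workstation 3")
log.log("patient care", when=t + timedelta(minutes=5), patient="A")
log.log("charting", when=t + timedelta(minutes=12))
totals = log.time_per_activity(end=t + timedelta(minutes=15))
# totals["charting"] is 8 minutes, totals["patient care"] is 7 minutes.
```

Because every event carries a timestamp, the same `totals`-style pass can be joined against location or audio streams keyed on time, which is the synchronization feature the abstract describes.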
376
How Computer Literate Are Emergency Department Patients Seen in an Urban County Hospital?
Arora S, Morato D, Ballinger J, Menchine M/University of Southern California, Los Angeles, CA
Study Objective: As access to establishing regular primary care becomes increasingly difficult, more and more patients with chronic diseases are turning to the
Annals of Emergency Medicine S121