Heuristic evaluation of infusion pumps: implications for patient safety in Intensive Care Units

International Journal of Medical Informatics (2004) 73, 771—779

Mark J. Graham a, Tate K. Kubose a, Desmond Jordan a,b, Jiajie Zhang c, Todd R. Johnson c, Vimla L. Patel a,∗

a Department of Biomedical Informatics, College of Physicians and Surgeons, Columbia University, USA
b Department of Anesthesiology, College of Physicians and Surgeons, Columbia University, USA
c School of Health Information Sciences, University of Texas Health Science Center at Houston, USA

Received 15 March 2004; received in revised form 3 July 2004; accepted 19 August 2004

Keywords: Patient safety; Medical device errors; Heuristic evaluation; IV pumps

Summary

Objective: The goal of this research was to use a heuristic evaluation methodology to uncover design and interface deficiencies of infusion pumps that are currently in use in Intensive Care Units (ICUs). Because these infusion systems cannot be readily replaced due to lease agreements and large-scale institutional purchasing procedures, we argue that it is essential to systematically identify the existing usability problems so that the possible causes of errors can be better understood, passed on to the end-users (e.g., critical care nurses), and used to make policy recommendations.

Design: Four raters conducted the heuristic evaluation of the three-channel infusion pump interface. Three raters had a cognitive science background as well as experience with the heuristic evaluation methodology. The fourth rater was a veteran critical care nurse who had extensive experience operating the pumps. The usability experts and the domain expert independently evaluated the user interface and physical design of the infusion pump and generated a list of heuristic violations based upon a set of 14 heuristics developed in previous research. The lists were compiled and then rated on the severity of each violation.

Results: From the 14 usability heuristics considered in this evaluation of the infusion pump, there were 231 violations. Two heuristics, ‘‘Consistency’’ and ‘‘Language’’, were found to have the most violations; the one with the fewest violations was ‘‘Document’’. While some heuristic evaluation categories had more violations than others, the most severe ones were not confined to one type. The Primary interface

* Corresponding author. Tel.: +1 212 305 5643.

E-mail address: [email protected] (V.L. Patel).

1386-5056/$ — see front matter © 2004 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.ijmedinf.2004.08.002

location (e.g., where loading the pump, changing doses, and confirming drug settings takes place) had the most occurrences of heuristic violations.

Conclusion: We believe that the heuristic evaluation methodology provides a simple and cost-effective approach to discovering medical device deficiencies that affect a patient's general well-being. While this methodology provides information for the infusion pump designs of the future, it also yields important insights concerning equipment that is currently in use in critical care environments.

© 2004 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

A variety of sources underscore the need to study, understand, and reduce medical device-use errors that occur in hospitals [1]. A large number of reported errors involving medical devices are user errors rather than technical failures [2]. Indeed, the Food and Drug Administration indicates that there are many devices currently on the market that are sub-optimal from the human factors perspective [3]. Inadequate training of users, stressful work environments, and poorly designed device interfaces have all been shown to contribute to this problem [4]. It is important to recognize, however, that errors occur as part of complex dynamic interactions among individuals, technology, and organizational and social contexts. We believe interventions that target these interacting factors are more likely to reduce medical errors.

It is crucial to understand the dynamic nature of medical errors that are potentially device-related, especially for systems that already exist in clinical settings. Even when new devices are developed that eliminate or substantially reduce device-use errors, it is usually the case that until the hospital's lease expires (or a major patient catastrophe occurs) institutions are likely to keep using the existing equipment [5]. This appears to be so even when serious errors and device malfunctions are uncovered and deemed likely to occur again.

Rather than trying to understand and improve a device's physical functions in isolation, contemporary approaches to studying how devices and humans interact (e.g., Human Factors Engineering) make concerted efforts to integrate the device characteristics and the user's characteristics on a cognitive level [6]. One step in this integration effort is to trace the evolution of the state of the device in parallel with the cognitive activities of the person using the device. This has been shown to lead to better insight into where and when errors are likely to occur [7].
Another step in this integration is to address human—computer interface design improvements. These improvements, based on principles of human factors engineering, have been shown to decrease errors. For example, new prototypes based on these principles outperformed existing models that were not specifically designed with such principles in mind [8]. In addition, recent efforts have sought to aid the integration of device characteristics and human characteristics even further by developing a comprehensive cognitive taxonomy of medical errors in order to better understand their underlying cognitive mechanisms [9]. We believe that these efforts will lead to improved categorization of medical errors along cognitive dimensions. Taken together, they will provide a useful framework to guide future studies on errors involving medical devices.

Despite these efforts, how well or how poorly designed a device interface actually is cannot be widely known until it is already in use and under daily scrutiny in critical care settings. We believe that such a situation leads to possible compromises in patient safety and deserves further attention. Thus, the present study adds to the existing literature on patient safety and infusion pumps by arguing that since pumps with known errors are presently in use in ICUs, it is imperative to (1) identify these errors (e.g., rate-dose conversions), (2) actively address the implications of these errors, and (3) effectively communicate the findings to end-users. One efficient way of doing this, we believe, is through the insights gained by conducting a heuristic evaluation.

1.1. Heuristic evaluation

Heuristic evaluation is an inspection method in which experienced evaluators examine an interface and judge it against a set of established usability principles (heuristics). The methodology, originally developed in usability engineering [10] and later adapted for medical devices [11], involves a class of techniques for uncovering usability problems. As an example, one of the heuristics for medical devices states that ‘‘the interface should provide appropriate default values’’ (see Appendix A for a complete description of all 14 heuristics). Thus, an infusion pump with an unsafe default concentration of morphine would violate this heuristic.

More elaborate empirical techniques, such as functional analysis, user and task analyses, competitive analysis, feature inspection, and user satisfaction surveys, all strive to indicate what is right with the system and to identify the most appropriate functionality [11]. A heuristic evaluation is more limited in purpose: it identifies possible usability and safety problems with an interface. It does not indicate the elements of the system that correctly follow usability guidelines, nor does it reveal major missing functionality. The benefit of a heuristic evaluation for usability problems in medical devices is that it takes far less time than a complete usability analysis, both to execute and to learn how to conduct [11]. Although it takes less time, as a modified engineering technique with 3—5 evaluators it has been reported to capture 60—70% of the usability problems, including many major ones [11]; nevertheless, one downside is that the technique also identifies many minor problems that cause little trouble to users.

1.2. Device use and the clinical environment

The chosen device and environment for study were IV pumps in an Intensive Care Unit (ICU) at a New York City area hospital. Here, patients often require multiple three-channel infusion pumps (typically between three and five machines). Three-channel infusion pumps have three ports for different types of medicine to be given to a patient, and each medicine can be assigned its own dosage and flow rate. Except for the one-channel device supported by the same manufacturer, the three-channel pump is the primary device used throughout the hospitals where this study was conducted. Such infusions can be used for patient comfort (e.g., a constant, low stream of morphine) or to deliver medications critical to sustaining the life of patients, such as total parenteral nutrition (TPN). In some instances, failure of any sort (mechanical or user error) can harm the patient or even be fatal.

In the ICU, it is the nurses who typically have multiple interactions with medical devices like the infusion pump [12]. Due to their expertise, we believe that veteran ICU nurses are also the best candidates to help with a heuristic evaluation. Based on their background and training, however, a nurse's knowledge of the principles of good interface design, from a technical standpoint, is generally low. Indeed, most critical care nurses report learning about the intricacies of their ICU's infusion pump through experience while on the job rather than through any explicit training [12]. Thus, our raters in this study came from two backgrounds: either a clinical background or a cognitive science background.

2. Methodology

2.1. Participants

Four raters conducted this heuristic evaluation of a three-channel infusion pump interface. This team included three of the authors and one member of the clinical staff. The clinician was a senior nurse in the Surgical Intensive Care Unit who had extensive experience operating this particular pump. The three authors had a cognitive science background and were familiar with the foundations of the heuristic evaluation methodology. Of these three raters, two had observed the pumps being used in various settings (e.g., ICU, OR) while one had no prior experience.

2.2. Materials and procedures

The procedures of this study adhered to those of other medical device heuristic evaluations [11]. In general, as a guideline for deciding whether a heuristic violation had occurred, each rater was instructed to consider three components: (a) the proportion of users who will experience the violation, (b) the impact the violation will have on their experience with the interface, and (c) whether the usability problem would be an issue only the first time users encounter it, or whether it would persistently bother them. Each rater reviewed each of the eight interface locations of the infusion pump (Physical Design, Startup, New Patient, Primary, Main Display, Device Settings, Piggyback, and Physical Interface) while generating a list of violations according to the usability heuristics described in Appendix A. The four lists were then compiled into a single list and redundant violations were edited out. This aggregated list was then given back to each rater, who independently assessed the severity of each violation using the following scale: 4 = Usability catastrophe (imperative to fix this problem); 3 = Major usability problem (important to fix, so it should be given high priority); 2 = Minor usability problem (fixing this should be given low priority); 1 = Cosmetic problem only (need not be fixed unless extra time is available on the project); 0 = No problem

(I don't agree that this is a usability problem at all) [10].

To offer an illustrative example, one heuristic violation uncovered was coded as catastrophic by all four raters. This violation was located in the interface area defined as Device Settings, where the Air in the Tube alarms are located. The problem identified was that when an Air in the Tube alarm sounds, it has to be fixed immediately in order to proceed; thus, in urgent situations, no user judgment is allowed. The raters each indicated that this violated the Flexibility heuristic, and it received an average severity rating of 3.75 from the four raters. A typical suggestion for improvement was to allow for an override.

Another example to highlight this coding process concerns a violation that received a less severe rating (i.e., minor). Located in the Primary interface of the infusion pump, the issue identified was that before users can program the dosage, they are required to enter a rate and volume each time they program a new dose. Since the rate and volume are typically consistent, the task was considered tedious and was coded as violating the Control usability heuristic. In this instance, the four raters rated it as a minor usability problem (average severity < 2) rather than a major usability violation.
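The compile-then-rate procedure can be sketched in a few lines of code. This is an illustrative sketch only: the violation labels and individual scores below are hypothetical stand-ins modeled on the two examples just discussed, not data from the study.

```python
from statistics import mean

# Severity scale used in this study, after Nielsen [10]
SEVERITY_LABELS = {
    4: "Usability catastrophe",
    3: "Major usability problem",
    2: "Minor usability problem",
    1: "Cosmetic problem only",
    0: "No problem",
}

def aggregate_severity(ratings_by_violation):
    """Average each violation's severity across all raters.

    ratings_by_violation maps a violation description to the list of
    independent severity scores (0-4) given by the four raters.
    """
    return {violation: mean(scores)
            for violation, scores in ratings_by_violation.items()}

# Hypothetical ratings mirroring the two violations discussed in the text
ratings = {
    "Air-in-Tube alarm blocks all action (Flexibility)": [4, 4, 4, 3],
    "Rate/volume re-entry for every new dose (Control)": [2, 2, 1, 2],
}
averaged = aggregate_severity(ratings)

# Violations averaging above 3.5 were classed as catastrophic
catastrophic = [v for v, s in averaged.items() if s > 3.5]
```

Note how averaging the four independent scores reproduces the 3.75 reported for the alarm violation and a sub-2 average for the minor one.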

3. Results

The results of this study are presented in three ways. First, all heuristic violations found for the three-channel infusion pump are summarized according to the 14 usability heuristics. Second, the average severity ratings (i.e., catastrophic, major, minor, cosmetic) from all four heuristic evaluators are presented; within this analysis, a more descriptive breakdown of the nine catastrophic violations is also given. Third, an integrated approach to the data is taken by combining, across raters, the location, frequency, and severity of all 14 heuristic violations.

3.1. Inter-rater reliability

All analyses showed that the raters' scores varied systematically with one another, were positively correlated, and were more similar than would be expected by chance. Every pairwise correlation between raters was significant at the p = .01 level based on a kappa test, although some coefficients were low. The correlations between two of the three cognitive scientists were between .52 and .62. The ICU nurse and another of the cognitive scientists were correlated at the

.60 level. Thus, the ratings from the four evaluators were averaged; these averaged ratings are reported in the analyses that follow.

Fig. 1. Frequencies of the heuristic violations by category averaged across raters.
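A pairwise agreement statistic of the kind reported in Section 3.1 can be computed as follows. This is a generic, unweighted Cohen's kappa sketch over categorical severity codes; the two rating vectors are invented for illustration and are not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical severity codes (0-4) from two raters on ten violations
a = [4, 3, 3, 2, 2, 2, 1, 3, 4, 2]
b = [4, 3, 2, 2, 2, 1, 1, 3, 3, 2]
kappa = cohens_kappa(a, b)
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement for this kind of severity coding.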

3.2. Aggregate heuristic violations of a three-channel infusion pump

Fig. 1 shows that overall there were 231 identified violations of the 14 usability heuristics. Violations that were double-coded (e.g., as both Undo and Closure) were counted under both heuristics: that is, a single usability problem identified by an evaluator can violate multiple heuristics. In this way, the number of heuristic violations is typically greater than the number of usability problems identified. Two heuristics, Consistency and Language, were found to have the most violations: 33 (14%) and 30 (12%), respectively. The next highest category was Error with 24 violations (10%), followed by Match with 23 (9%) and Visibility with 20 (8%). The heuristic with the fewest violations was Document, with two in total.
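The double-coding tally described above can be reproduced with a simple counter. The problem list below is a small hypothetical fragment constructed for illustration, not the study's full data set.

```python
from collections import Counter

# Each identified usability problem may violate several heuristics,
# so heuristic violations can outnumber usability problems.
problems = [
    {"location": "Primary", "heuristics": ["Consistency", "Language"]},
    {"location": "Primary", "heuristics": ["Undo", "Closure"]},
    {"location": "Primary", "heuristics": ["Consistency"]},
    {"location": "Options Screen", "heuristics": ["Visibility"]},
    {"location": "Physical Interface", "heuristics": ["Error"]},
]

# Tally violations per heuristic (a problem counts once per heuristic coded)
violations = Counter(h for p in problems for h in p["heuristics"])
# Tally problems per interface location
by_location = Counter(p["location"] for p in problems)

n_problems = len(problems)               # distinct usability problems
n_violations = sum(violations.values())  # heuristic violations (>= problems)
```

With this fragment, five problems yield seven heuristic violations, illustrating why the 231 violations reported exceed the number of distinct problems found.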

3.3. Average severity rating of the heuristic violations

Fig. 2 shows the average severity ratings of the violations (with the catastrophic violations displayed to the left and the cosmetic violations displayed to the right). There were nine catastrophic violations in total that rated above a 3.5, and there were 61 violations considered to be major ones that rated above a 2.5. These two sets of violations were ones that the raters coded as important to fix and that should be given high priority. In addition, there were 48 violations coded as minor, as well as 11 that were cosmetic problems only (i.e., ones generally judged to have a low priority for fixing). For each of the catastrophic violations, Appendix B lists the location, the problem encountered, the heuristic violated, and the average severity.

Fig. 2. Frequencies of the heuristic violations by severity averaged across raters.

3.4. Summary of frequency and severity averaged across raters

Fig. 3 shows the count, average severity, and places of occurrence in the interface of the heuristic violations for the three-channel infusion pump. In total there were 145 violations that could be identified by interface location in this heuristic evaluation. The areas showing the largest numbers of violations were the Primary part of the interface (62 violations, or 42%), the Options Screen (25 violations, or 17%), and the Physical Interface (18 violations, or 12%). The average severity ratings for nearly all areas ranged between 2 and 2.5. Thus, the analyses reveal that more than half of the heuristic violations found by the raters were deemed serious enough to require attention, and at least nine required immediate fixing. Implications and practical concerns for clinicians are discussed in the next section.

In general, certain categories of the 14 usability heuristics had more violations than others. That is, Consistency, Language, Error, and Match were found to be violated more, on average, than the other ten categories. The most catastrophic of the violations were not confined to any one category; rather, they were distributed across several categories, including Consistency, Flexibility, Error, Undo, and Control (among others). The average severity level remained relatively constant across interface locations. Similarly, all violation locations (except one) were distributed across the interface (e.g., Physical Interface, Opening Screen, etc.), and there were more occurrences of heuristic violations in the Primary menu than in any other part of the interface.

Fig. 3. Location frequency and severity for heuristic violations averaged for all raters.

4. Discussion

Because this model pump is being used in many ICU locations at the present time, there is considerable potential for these identified errors to occur and for patient safety to be compromised. While some heuristic evaluation categories had more violations than others, the most severe violations were not confined to one type; rather, they were spread across at least eight of the 14 usability heuristics. Thus, end-users must be informed that there are numerous aspects of the system where they need to be vigilant about the potential for making errors. In addition, since the Primary interface location had the most occurrences of heuristic violations relative to all others, this is also important for end-users to know, since this is very likely to be an area where errors can occur (e.g., loading the pump, changing doses, and confirming drug settings).

The heuristic evaluation methodology allowed for an independent assessment of the practices and protocols that are present in critical care units such as the ICU. From this information, we were able to compare and evaluate usability issues at various levels and at different strata of functioning. Three outcomes of the heuristic evaluation methodology are the following: (1) it allows for a comparison between the usability protocols of one unit in the ICU and another (e.g., Medical versus Surgical ICU); (2) it allows for categorization as well as stratification (i.e., the ability to compare units); and (3) the results allow for evaluation of one unit's frequency and severity of violations as compared to other ICUs. If the errors are reduced (as compared to the benchmark of the heuristic evaluation), then this may call for the unit's practices and protocols to be further investigated and, where appropriate, possibly extended to other units within the department and hospital.
With these outcomes we can, for example, use such knowledge to confirm the ICU nurses’ verbal reports of the severe problems that are known to exist with the device used in their daily routines. In practice, the heuristic evaluation methodology identifies differences and provides a benchmark by which clinicians can gauge units, adopt good practices, and discard poor ones.


Knowledge of these errors enables the evaluators to not only inform the future designers of next generation devices but also alert the current users, such as critical care nurses, to potential problems.

Acknowledgements

We gratefully acknowledge the support of the administrators, bioengineers, nurses, and physicians who volunteered their time and provided insights. The project/effort depicted was sponsored by grant number P01 HS11544 from the Agency for Healthcare Research and Quality and by the Department of the Army under Cooperative Agreement #DAMD17-97-27016. The content of the information does not necessarily reflect the position/policy of the government or NMTB, and no official endorsement should be inferred. We thank Leslie Wright for her assistance in the preparation of this manuscript.

Appendix A. Description and evaluation criteria for the 14 usability heuristics

For each usability heuristic below, the description is followed by its heuristic evaluation criteria.

Consistency: Heuristic violations involving Consistency and Standards are instances where users should not have to wonder whether different words, situations, or actions mean the same thing.

Examples include the following: sequences of actions (what level of skill acquisition is required of the user?); color (do the color categorizations make sense?); layout and position (is there spatial consistency?); font and capitalization (do the levels of text organization make sense?); terminology (are the operating commands clear or unclear: del = delete; rm = remove?); language (are the words and phrases appropriate?).

Visibility: How well the Visibility of System State is set up determines whether users are adequately informed as to what is going on with the system. This is done through appropriate feedback and display of information.

Heuristic violations occur when there are unanswered questions such as the following: What is the current state of the system? What can be done in the current state? Where can users go? What change is made after an action?

Match: A Match Between the System and the World means that the image of the system perceived by users should match the model the users have of the system.

Criteria for judgment include: does the user model match the system image; do the actions provided by the system match the actions performed by users; do the objects on the system match the objects of the task?

Minimalist: This criterion involves judging whether any extraneous information is a distraction and a slow-down.

Principles such as ‘‘less is more’’, ‘‘simple is not equivalent to abstract and general’’, ‘‘simple is efficient’’ and ‘‘progressive levels of detail’’ are followed.

Memory: When considering whether Memory Load is Minimized, users should not be required to memorize a lot of information to carry out tasks.

Since memory load reduces users’ capacity to carry out the main tasks, the following aspects of the system are considered: recognition versus recall (e.g., menu versus commands); externalize information through visualization; perceptual procedures; hierarchical structure; default values; concrete examples (DD/MM/YY, e.g., 10/20/02); generic rules and actions (e.g., drag objects).

Feedback: The Informative Feedback that users receive about their actions should be prompt and informative.

The following characteristics of feedback are considered: does the feedback follow the direct perception model (the seven-stage action model); can the information displayed be directly perceived and interpreted; what are the levels of feedback (novice and expert); is the information concrete and specific rather than abstract and general; and what is the response time (0.1 s for an apparently instantaneous reaction, 1.0 s for uninterrupted flow of thought, 10 s for the limit of attention)?


Flexibility: Since users are always learning and users are always different, the Flexibility and Efficiency characteristics of the heuristic evaluation assess whether users have the flexibility to create customizations or shortcuts to accelerate their performance.

Message: What makes for Good Error Messages? The messages should be informative enough that users can understand the nature of errors, learn from errors, and recover from errors.

Here, judgments are based upon the following: shortcuts for experienced users; shortcuts or macros for frequently used operations; skill acquisition through chunking (e.g., abbreviations, function keys, hot keys, command keys, macros, aliases, templates, type-ahead, bookmarks, hot links, history, default values, etc.).

Evaluation of errors in this area consisted of the following questions: was the error message phrased in clear language that avoided obscure codes such as ‘‘system crashed, error code 147’’; was the error message precise and not vague or general (such as ‘‘Cannot Open Document’’); was the error message constructive and polite (errors include such comments as ‘‘illegal user action’’, ‘‘job aborted’’, ‘‘system was crashed’’, ‘‘fatal error’’, etc.)?

Error: With regard to Preventing Errors, it is always better to design interfaces that keep errors from happening in the first place.

The criteria for evaluation included the following: interfaces that make errors impossible; interfaces that avoid modes (e.g., vi, text wrap); interfaces that use informative feedback (e.g., different sounds); execution error vs. evaluation error; various types of slips.

Closure: With regard to Clear Closure, since every task has a beginning and an end, users should be clearly notified about the completion of a task.

Aspects used to evaluate this include the following: does the task have a beginning, middle, and an end; does the task complete the seven stages of action; and does the task provide clear feedback to indicate that goals are achieved and the current stack of goals can be released (e.g., dialogues)?

Undo: Another area to evaluate is whether the system allows for Reversible actions. This means that users should be allowed to recover from errors; moreover, reversible actions encourage users to engage in exploratory learning.

Examples of this include the following: does the system allow reversal at different levels (a single action, a subtask, or a complete task); are there multiple steps to complete a task; does the system encourage exploratory learning; and does the system have controls to prevent serious errors?

Language: For the text interface, it is best if the system employs the Users’ Language. Specifically, the intended users should always have the language of the system presented in a form understandable to them.

Ways to evaluate this included the following: does the system use the standard meanings of words or does it use specialized language for a specialized group; are there user-defined aliases; does the system communicate from the users’ perspective (e.g., ‘‘we have bought four tickets for you’’ vs. ‘‘you bought four tickets’’)?

Control: Are the Users in Control? As part of the system design, one general rule is that the system should not give users the impression that they are controlled by it.

The following heuristics were used to evaluate this: the system should be designed such that users are initiators of actions, not responders to actions; and the system should avoid surprising actions, unexpected outcomes, tedious sequences of actions, etc.

Document: A final area to explore is whether the system allows for Help and Documentation. The general rule here is that the system should always provide the user with help when needed.

Ways in which this was evaluated included the following: was there context-sensitive help; was the help arranged in four ways (task-oriented, alphabetically ordered, semantically organized, and by search); and, was there help embedded within the various contexts of the system?


Appendix B. Catastrophic violations and their characteristics

1. Interface location: New patient. Problem: If a Personality is selected, that Personality cannot be changed without powering down and re-powering the system. Heuristic violated: Undo. Average severity: 3.5.

2. Interface location: Main display. Problem: If the user needs to enter a dosage in a mode not available in the Personality, it is impossible to do. Heuristics violated: Flexibility; Control. Average severity: 3.5.

3. Interface location: Primary: loading pump. Problem: Even if the Volume is too high for the mode, the pump will still start. Heuristic violated: Error. Average severity: 4.0.

4. Interface location: Options menu. Problem: In View Personality, nurses never really look at what is enabled or disabled; the values are not meaningful to nurses in their practice, and nurses cannot get information about what the values or slots mean. Heuristics violated: Language; Minimalist; Document. Average severity: 3.5.

5. Interface location: Primary: change mode. Problem: In general, the interface buttons require hard presses, so users are unsure whether input was received; there is a beep as feedback, but it is quite soft in comparison to the working environment. Heuristic violated: Feedback. Average severity: 3.8.

6. Interface location: Primary: label line. Problem: In Confirm Settings, nurses need to press this before Start, but the system only checks settings on some drugs. Heuristics violated: Consistency; Error. Average severity: 3.5.

7. Interface location: Device settings: errors. Problem: Air in the Tube alarms have to be fixed in order to proceed; no user judgment is allowed in the urgency of the situation. Heuristic violated: Flexibility. Average severity: 3.8.

8. Interface location: Device settings: errors. Problem: The system does not remind the user to turn the alarm settings back on. Heuristic violated: Undo. Average severity: 3.8.

9. Interface location: Device settings: errors. Problem: When the pump is turned off, the flow stops. This is especially bad if/when Personalities need to be changed for whatever reason (e.g., to allow medications to be programmed), since this requires that the machine be turned off and on again. Heuristics violated: Flexibility; Error; Control. Average severity: 3.8.

References

[1] Food and Drug Administration (FDA), Medical device use-safety: incorporating human factors engineering into risk management, 2000, http://www.fda.gov/cdrh/humfac/1497.pdf.
[2] E.A. McConnell, M. Cattonar, J. Manning, Australian registered nurse medical device education: a comparison of simple versus complex devices, J. Adv. Nursing 23 (2) (1996) 322—328.
[3] Food and Drug Administration (FDA), Improving patient care by reporting problems with medical devices, 1997, www.fda.gov/medwatch/articles/mdr/mdr.pdf.


[4] L. Lin, R. Isla, K. Doniz, J. Harkness, K.J. Vicente, D.J. Doyle, Applying human factors to the design of medical equipment: patient-controlled analgesia, J. Clin. Monit. Comput. 14 (1998) 253—263.
[5] A. Kesselman, V.L. Patel, J. Zhang, T.R. Johnson, Institutional decision making to select patient care devices: identifying venues to promote patient safety, J. Biomed. Inform. 36 (2003) 31—44.
[6] E. Hollnagel, D.D. Woods, Cognitive systems engineering: new wine in new bottles, Int. J. Hum. Comput. Stud. 51 (1999) 339—356.
[7] D.D. Woods, Process-tracing methods for the study of cognition outside of the experimental psychology laboratory, in: G.A. Klein, J. Orasanu, R. Calderwood, C.E. Zsambok (Eds.), Decision Making in Action: Models and Methods, Ablex Publishing Corporation, Norwood, NJ, 1993, pp. 228—251.

[8] L. Lin, K.J. Vicente, D.J. Doyle, Patient safety, potential adverse drug events, and medical device design: a human factors engineering approach, J. Biomed. Inform. (2001) 1—11.
[9] J. Zhang, V.L. Patel, T.R. Johnson, E.H. Shortliffe, A cognitive taxonomy of human errors in medicine, J. Biomed. Inform. 37 (2004) 193—204.
[10] J. Nielsen, Usability Engineering, AP Professional, Boston, 1994.
[11] J. Zhang, T.R. Johnson, V.L. Patel, D. Paige, T. Kubose, Using usability heuristics to evaluate patient safety of medical devices, J. Biomed. Inform. 36 (2003) 23—30.
[12] T.T. Kubose, V.L. Patel, D. Jordan, Dynamic adaptation to critical care medical environment: error recovery as cognitive activity, in: Proceedings of the 2002 Cognitive Science Society, 2002, p. 43.