Outcomes assessment of clinical information system implementation: A practical guide

Eun-Shim Nahm, PhD, RN; Vinay Vaydia, MD; Danny Ho, MS; Barbara Scharf, MSN/MPH, RN; Jake Seagull, PhD


Healthcare information systems (HIS) play a vital role in quality of care and in an organization's daily operations. Consequently, increasing numbers of clinicians have been involved in HIS implementation, particularly for clinical information systems (CIS). Implementation of these systems is a major organizational investment, and its outcomes must be assessed. The purpose of this article is to provide clinicians and frontline informaticians with a practical guide for assessing these outcomes, focusing on outcome variables, assessment methods, and timing of assessment. Based on in-depth literature reviews and their empirical experience, the authors identified 3 frequently used outcomes: user satisfaction, clinical outcomes, and financial impact. These outcomes have been assessed using various methods, including randomized controlled trials, pre- and post-test studies, time and motion studies, surveys, and user testing. The timing of outcomes assessment varies depending on several factors, such as learning curves or patients' conditions. In conclusion, outcomes assessment is essential to the success of healthcare information technology, and CIS implementation team members must be prepared to conduct and/or facilitate these studies.

The role of healthcare information systems (HIS) has become particularly important in current healthcare due to heightened awareness of medical errors1 and various national healthcare information technology initiatives, including the Health Insurance Portability and Accountability Act (HIPAA),2 the National Healthcare Information Infrastructure,3 the Electronic Health Record (EHR),4 and the Personal Health Record (PHR).5 Consequently, increasing numbers of clinicians have been involved in HIS implementation, particularly for clinical information systems (CIS). Implementation of HIS is a major investment for healthcare organizations, and its outcomes require justification.6,7 In general, evaluation of HIS is a complex process that occurs in different phases and is undertaken by different groups of individuals.8,9 For instance, during development, the evaluation is conducted primarily by the vendor. The system is then evaluated by the organization before it is purchased. After implementation, system outcomes are evaluated by various users. In each phase, the evaluation criteria also vary. Furthermore, a HIS is often interrelated with many other systems, which makes outcomes assessment more complicated.10 Moreover, many confounding variables (e.g., staffing or patients' conditions) can influence outcomes.10,11 Recently, many researchers have assessed the effects of CIS on medical errors;12-17 however, the body of knowledge in outcomes assessment of CIS implementation is relatively sparse compared with other areas, such as systems engineering or other healthcare sciences.8,18 The authors, who have expertise in various informatics fields (including nursing informatics, medical informatics, and human factors), have collaborated in teaching and mentoring informatics students in their research projects at the University of Maryland, Baltimore (UMB) and the University of Maryland Medical System. One such collaborative effort is the multidisciplinary UMB Informatics and Human Factors Journal Club.

Eun-Shim Nahm, PhD, RN is an Associate Professor at the University of Maryland School of Nursing, Baltimore, MD. Vinay Vaydia, MD is an Assistant Professor at the University of Maryland School of Medicine, Baltimore, MD. Danny Ho, MS is a Doctoral Student at the University of Maryland School of Medicine, Baltimore, MD. Barbara Scharf, MSN/MPH, RN is a Doctoral Student at the University of Maryland School of Nursing, Baltimore, MD. Jake Seagull, PhD is an Assistant Professor at the University of Maryland School of Medicine, Baltimore, MD.

Reprint requests: Eun-Shim Nahm, PhD, RN, 655 W. Lombard St, Suite 455C, Baltimore, MD 21201. E-mail: [email protected]

Nurs Outlook 2007;55:282-288. 0029-6554/07/$-see front matter. Copyright © 2007 Mosby, Inc. All rights reserved. doi:10.1016/j.outlook.2007.09.003


This Journal Club comprises healthcare researchers, informaticians, clinicians, and doctoral students who are interested in healthcare informatics. For the past 2 years, the main theme of the Journal Club has been CIS implementation and outcomes evaluation. Through many journal club reviews, discussions, and the authors' empirical experiences, the authors identified a critical need for translating and disseminating current informatics research on CIS outcomes assessment into practice.

Although increasing numbers of clinicians and informaticians are involved in implementing various CIS, there is often a disconnect between systems implementation and outcomes assessment, which should be part of the planning phase of the implementation process. The authors recognize that it may be difficult for frontline informaticians and clinicians to conduct rigorous randomized controlled trials or complex return on investment (ROI) analyses to assess outcomes of CIS implementation. These individuals, however, can utilize the findings of outcomes research. Furthermore, many of them are master's- and/or doctoral-prepared and capable of conducting practical research projects. To build a body of knowledge and to advance practice in outcomes assessment during CIS implementation, such research activities must permeate practice.

This article was developed to provide clinicians and frontline informaticians with a practical guide to outcomes assessment of CIS implementation. As the initial step in developing this guide, each author conducted an in-depth review of the literature in his or her own field based on the established scope of this guide (see the Scope section). The literature review used several bibliographic databases (MEDLINE, the Cumulative Index to Nursing and Allied Health Literature [CINAHL], HealthSTAR, and the ACM [Association for Computing Machinery] Digital Library) as well as Internet searches. The main search terms included "outcome assessments," "hospital information systems," "clinical information systems," "measures," and "instruments." Depending on the area of expertise, each author also used additional keywords such as "clinical outcomes" or "usability testing." Upon completion of the individual literature reviews, the authors reviewed their findings as a group and combined them using an established framework for this guide (see the Framework section).

SCOPE AND FRAMEWORK

Considering the various types of healthcare systems, outcome indicators, and types of research methods, it was important to establish the scope and a framework for this guide.

Scope

A healthcare information system is defined as "an information system used within a healthcare organization to facilitate communication, integrate information, document healthcare interventions, perform recordkeeping, or otherwise support the functions of the organization."19 Categories of HIS vary, including clinical, financial, and ancillary systems, and each category may have different outcome indicators. Outcomes evaluation of HIS is a complex process that varies depending on the type of system and occurs in different phases of system development, purchasing, and implementation. It is, therefore, important to define the scope of outcomes evaluation. First, among the different HIS, this article focuses on the outcomes of clinical information systems (CIS). A clinical information system is defined as "the component of a healthcare information system designed to support the delivery of patient care, including order communications, results reporting, care planning, and clinical documentation."19 Second, among the different evaluation phases, this article focuses on post-implementation evaluation.

Framework

Several researchers have conducted systematic reviews of the literature on HIS outcome evaluation studies over the past 3 decades.8,20-23 These reviews included different dimensions of outcomes and/or assessment methodologies.8,20-23 The purpose of our article is to translate these findings into a practical guide. The simple framework used to develop this practical guide for outcomes assessment of CIS implementation was as follows: What (outcomes of CIS implementation), How (assessment methods), and When (assessment timing).10 Identification of the specific outcomes is guided by the work of Meijden, Tange, and Hasman.24 In their article, the authors identified 6 dimensions of success factors for CIS implementation: (1) system quality, (2) information quality, (3) usage, (4) user satisfaction, (5) individual impact, and (6) organizational impact. Among these success factors, the last 3 can be used to gauge outcomes of CIS implementation: user satisfaction, individual impact, and organizational impact. In practice settings, 2 frequently used outcomes associated with individual and organizational impact are clinical and financial outcomes. This article, therefore, discusses user satisfaction, clinical outcomes, and financial outcomes.25-31

WHAT: SELECTED OUTCOMES OF CIS IMPLEMENTATION

User Satisfaction

Definitions and measures. User satisfaction has been used as a common surrogate measure for the success of information systems.32 In general, user satisfaction is defined as a measure of system performance that meets basic requirements and standards.32 This concept has been operationalized differently and measured using various instruments.23,26,33-35 Although the concepts of "usability," "user satisfaction," and "usefulness" are distinct, these terms have often been used interchangeably.23,26,33-35 In his discussion of system acceptability, Nielsen describes the concept of usefulness, which includes both usability and utility.32 Usability of a system is associated with how well users can use its functionality; utility, on the other hand, is associated with whether the functionality of the system can do what is needed. Nielsen describes user satisfaction as an attribute of usability. When assessing outcomes of CIS implementation, however, this conceptualization may need modification: users will not be satisfied if the system is not useful for their work.

Table 1 summarizes the outcome dimensions of several selected measures, their response scales, and their psychometric properties. These measures are included because (1) they are applicable to many CIS and (2) readers can find the full scales in the cited references. For instance, the Questionnaire for User Interaction Satisfaction (QUIS) is a frequently used instrument to assess user satisfaction. All of the included measures assess some aspect of human factors (i.e., usability).26,33-38 Several measures assess the utility aspect of outcomes, such as productivity, usefulness, and patient care outcomes.34,38 Some measures assess both usability and usefulness (e.g., the Physician Order Entry User Satisfaction and Usage Survey).35

Table 1. Selected Measures to Assess User Satisfaction

Questionnaire for User Interaction Satisfaction (QUIS) (V 7.0)
- Outcome dimensions: a total of 143 items comprising (1) a demographic questionnaire; (2) 6 scales that measure overall reaction ratings of the system; (3) 4 measures of specific interface factors: screen factors, terminology and system feedback, learning factors, and system capabilities; and (4) optional sections for online help, online tutorials, multimedia, Internet access, and software installation
- Scale: 4-, 5-, 6-, and 9-point Likert scales; categorical scales
- Reliability/validity: alpha 0.94-0.95; construct validity
- Types of systems/users: general computer systems / general users

Perceived Usefulness (PU) and Perceived Ease of Use (PEU) Scales
- Outcome dimensions: PU, 6 items; PEU, 6 items
- Scale: 7-point Likert scale (strongly disagree to strongly agree)
- Reliability/validity: alpha 0.98 (PU); 0.94 (PEU)
- Types of systems/users: HIS / physicians, nurses

Questionnaire on Computer Systems and Decision-making
- Outcome dimensions: a total of 40 items (many include several sub-items) covering employee morale, reductions in employees, goals being met, and overall satisfaction with the systems
- Scale: categorical; yes/no responses; frequency; percentage; 5-point Likert scales
- Reliability/validity: none reported
- Types of systems/users: HIS / directors of information systems

Physician Order Entry User Satisfaction and Usage Survey
- Outcome dimensions: items 1 to 16, general satisfaction with POE and assessment of POE reliability, speed, ease of use, adequacy of training, and impact on productivity and patient care; items 17 to 25, specific features of POE (respondents indicate whether they use each feature and, if so, rate its usefulness)
- Scale: 7-point Likert scale (never to always) for items 1 to 16; 7-point Likert scale (not useful at all to extremely useful) for items 17 to 25
- Reliability/validity: alpha 0.86
- Types of systems/users: HIS / providers (physicians, dentists, nurse practitioners, pharmacists, etc.)

End-User Computing Satisfaction
- Outcome dimensions: a total of 12 items with 5 components of end-user satisfaction: content, accuracy, format, ease of use, and timeliness
- Scale: 5-point Likert scale (non-existent to excellent)
- Reliability/validity: alpha 0.92; factor analysis; criterion validity; discriminant validity
- Types of systems/users: general computer systems / general users

Data from references 26 and 33-38.
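Because several of the instruments in Table 1 report internal consistency as Cronbach's alpha, a team reusing or adapting a satisfaction scale may want to re-estimate reliability in its own sample. The following minimal Python sketch (not from the original article) computes Cronbach's alpha for a fabricated matrix of Likert responses; all numbers and the function name are illustrative.

```python
# A minimal sketch (not from the article): estimating internal-consistency
# reliability (Cronbach's alpha) for a Likert-style satisfaction scale such
# as those summarized in Table 1. All response data below are fabricated.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of Likert ratings."""
    k = item_scores.shape[1]                          # number of items
    item_variances = item_scores.var(axis=0, ddof=1)  # per-item variance
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents rating six items on a 7-point scale (invented numbers).
responses = np.array([
    [6, 5, 6, 7, 6, 5],
    [4, 4, 5, 4, 4, 3],
    [7, 6, 7, 7, 6, 7],
    [3, 3, 4, 3, 2, 3],
    [5, 5, 5, 6, 5, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```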

Clinical Outcomes

Selection of outcome variables. For clinicians, important clinical outcomes include patient clinical status, patient safety, length of stay, and mortality rates. It is difficult, however, to assess the direct effects of CIS on these clinical outcomes because doing so would require randomized controlled trials (RCTs). In reality, RCTs with a CIS as the intervention present multiple logistical, financial, and ethical challenges. In RCTs, researchers try to control for potential confounding variables (e.g., severity of patient illness and staffing), but this is often not feasible in a study of a CIS. Researchers, therefore, use proxy outcomes (i.e., related outcome variables), including decreased medication errors,25,26 improved adherence to practice guidelines,27,28 and improved quality of documentation.39 For instance, if a system proves effective in decreasing medication errors and/or improving clinicians' adherence to practice guidelines, it is likely to contribute to better clinical outcomes.

Prior studies. In an early study, Bates et al26 examined the effects of a computerized physician order entry (CPOE) system on prevention of medication errors using a pre- and post-test design. The 2 interventions compared were CPOE alone and CPOE plus a team intervention. Overall, non-intercepted serious medication errors decreased by 55%, from 10.7 to 4.86 events per 1000 patient-days (P = .01). Preventable adverse drug events (ADEs) declined 17%, from 4.69 to 3.88 (P = .37), whereas non-intercepted potential ADEs declined 84%, from 5.99 to 0.98 per 1000 patient-days (P = .002). When the 2 interventions (CPOE only and CPOE plus team intervention) were compared, both demonstrated similar benefits. Several other studies also showed a positive impact of CPOE on medication management and safety.40,41

A few recent studies,17,42 however, showed an increase in errors related to CPOE implementation. Han et al17 investigated the impact of CPOE on mortality among children who were transported for specialized care. In this 18-month pre- and post-test trial, mortality rates showed a statistically significant increase, from 2.80% (39 of 1394) to 6.57% (36 of 548), during the study period. Koppel et al42 reported similar findings of increased medication error risks after implementation of CPOE. As discussed by those authors, however, these negative findings came from isolated studies with multiple limitations, such as a short study period and seasonal variability of illnesses. This negative aspect must be carefully monitored in further studies to ensure that these cases were indeed isolated.

Other studies have demonstrated positive effects of CIS. For instance, in a pre- and post-test trial, Toth-Pal, Nilsson, and Furhoff28 investigated the effects of computer-generated physician reminders in an electronic patient record system on physicians' practice in recommending health screening tests for older adults. During the study period, 602 patients underwent screening in 5 intervention areas, and the findings showed a significant increase in screening tests in all 5 areas. Turner, Casbard, and Murphy43 evaluated the effects of a barcode patient identification system with hand-held computers for blood transfusion and found significant improvements in the staff's adherence to the transfusion procedure during the study period.

Issues in current studies. Although several studies reported positive effects of CIS on decreasing medication errors26,40,41 and on clinicians' adherence to practice guidelines,43 systematic reviews of the literature showed inconclusive findings and a lack of rigor in many studies.44 To analyze the impact of computer-based patient record systems on medical practice, quality of care, and user satisfaction, Delpierre, Cuzin, Fillaux, Alvarez, Massip, and Lang44 reviewed 26 articles published from 2000-2003. Among those, 12 articles evaluated the impact of those systems on compliance with practice guidelines; the frequency of positive impact was similar to that of no benefit. In another study, Kaushal, Shojania, and Bates41 reviewed 12 articles that reported the effects of CPOE and clinical decision support systems on medication error rates. Several studies showed some improvement in medication error rates, but overall, most studies did not have enough power to detect statistically significant findings.

Although many measures of clinical outcomes are straightforward (e.g., assessment of time intervals or the presence/absence of certain documentation), to ensure the validity of the findings, researchers must think through the research process and attend to potential confounding variables and scientific meaningfulness. For instance, the number of errors may increase immediately after a system implementation because of the users' learning curve; as users get used to the system, the number of errors may decrease. Researchers may not be able to control for certain confounding variables, such as staffing or the skill sets of healthcare professionals. In other situations, findings that are statistically significant may not be clinically meaningful (e.g., a change in blood pressure of 0.5 mm Hg). From a logistical perspective, it is difficult to conduct replication studies in the area of CIS implementation because, within an organization, a CIS often interfaces with other systems and impacts units that are unique to that organization.
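To make the arithmetic in studies like that of Bates et al26 concrete, the sketch below (not from the article; the counts and patient-day denominators are fabricated to roughly echo the rates quoted above) compares pre- and post-implementation event rates per 1000 patient-days using a normal-approximation test of the Poisson rate ratio. An underpowered study of the kind Kaushal, Shojania, and Bates41 describe would show a large rate change paired with a non-significant p-value.

```python
# Hedged sketch: comparing pre- vs post-implementation event rates per
# 1000 patient-days. Counts and exposure are invented; the normal
# approximation to the Poisson rate-ratio test is one reasonable choice
# among several, not the method used in the cited studies.
from math import log, sqrt
from statistics import NormalDist

def rate_ratio_test(events_pre, days_pre, events_post, days_post):
    rate_pre = 1000 * events_pre / days_pre      # events per 1000 patient-days
    rate_post = 1000 * events_post / days_post
    rr = rate_post / rate_pre                    # rate ratio (post vs pre)
    se = sqrt(1 / events_pre + 1 / events_post)  # SE of log rate ratio
    z = log(rr) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return rate_pre, rate_post, rr, p

pre, post, rr, p = rate_ratio_test(events_pre=107, days_pre=10_000,
                                   events_post=49, days_post=10_000)
print(f"{pre:.1f} -> {post:.1f} per 1000 patient-days "
      f"(rate ratio {rr:.2f}, p = {p:.3f})")
```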

Financial Impact

In implementing CIS, financial outcomes are often assessed using return on investment (ROI) analysis.6,29-31,45-47 ROI analysis is a complex process with various definitions and calculation methods and is often conducted before purchasing a system to assess the expected cost-to-benefit ratio over several years.6,48 Most clinical implementation team members may not be involved in the actual calculation of ROI. For successful implementation, however, they should be aware of the factors that contribute to costs and benefits. This article, therefore, discusses the definitions of ROI and the expected costs and benefits that can be used in ROI analysis.

Definition of ROI. The traditional financial definition of ROI is simply earnings divided by investment.49 The definition of earnings and costs over time in CIS implementation, however, is not straightforward. Various tangible/intangible costs and direct/indirect benefits must be considered during analysis.6,48 Furthermore, in the current rapidly changing healthcare industry, prospective estimates of costs and earnings may not hold for a prolonged period. For instance, changes in Medicare reimbursement rates, costs related to software and hardware, or shifts in healthcare management methods can change these estimates significantly.50 Several different models and calculation equations for ROI have been developed, including benefit-to-cost ratio, net present value (NPV), break-even period, and payback analysis.6,51 In conducting ROI calculations, various financial aspects, such as inflation and depreciation rates, must be considered; detailed discussion of these calculation methods, however, is beyond the scope of this article. Furthermore, each institution is likely to have its own template for conducting ROI analysis.

Expected costs and benefits. Although most clinicians and frontline systems analysts may not conduct detailed ROI analyses, it is important that they understand the concepts of costs and benefits associated with system implementation. Both direct and indirect costs must be estimated. Direct costs are expenses associated with acquiring and implementing the system, including hardware, software, training, salary, and support fees; they are often one-time costs.6,48 Indirect costs are ongoing operational costs, such as software maintenance and support fees, salaries for support staff, and fees related to space and utilities.6,48 By implementing a system, the institution gains both tangible and intangible benefits. Tangible benefits are concrete, measurable gains derived directly from the implementation of the system, such as increased revenues and savings in staff time or supplies. Intangible benefits may be difficult to measure in monetary terms, but in the long run they are influential factors in the organization's profits or even survival; they include customer and staff satisfaction and compliance with federal and professional regulations.29,51,52

Prior findings. Many published studies have reported findings from ROI analyses; however, the breadth of analysis varies.24,29,31,51,52 Several ROI studies29,51,52 reporting positive ROI for CIS used economic models with various direct and indirect costs and benefits. For instance, Snyder-Halpern and Wagner51 conducted an ROI analysis before the purchase of a CIS for a small not-for-profit rural hospital. They projected an ROI of 12% using the following formula: ROI = (estimated lifetime benefits - estimated lifetime costs)/estimated lifetime costs. In another study, Fung and Vogel29 conducted an ROI analysis for adding a decision support system to the computerized medication order entry system used in Hong Kong hospitals. Using an economic model, they estimated that this addition would reduce adverse drug events by 4.2%-8.4%, resulting in a total net saving of $44,000-$586,000 over 5 years. Many other ROI reports, however, discussed only direct financial benefits, such as staff time saved31 or potential savings from decreased errors or improved practice, and lacked discussion of detailed financial costs, such as expenses associated with inflation or depreciation over time.24,31
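To illustrate, the short Python sketch below implements the ROI formula quoted from Snyder-Halpern and Wagner51 together with a basic net-present-value calculation of the kind mentioned above. All dollar amounts and the discount rate are fabricated placeholders, not figures from any cited study.

```python
# Illustrative sketch of the ROI formula quoted above plus a simple
# net-present-value (NPV) calculation. Every number here is an invented
# placeholder for demonstration, not a benchmark.
def simple_roi(lifetime_benefit: float, lifetime_cost: float) -> float:
    """ROI = (estimated lifetime benefits - estimated lifetime costs) / costs."""
    return (lifetime_benefit - lifetime_cost) / lifetime_cost

def npv(cash_flows: list[float], discount_rate: float) -> float:
    """Net present value of yearly net cash flows (year 0 = go-live)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Year 0: direct one-time costs (negative); years 1-5: net benefit after
# indirect operating costs (maintenance, support staff, space, utilities).
flows = [-750_000, 150_000, 220_000, 240_000, 240_000, 240_000]
print(f"ROI: {simple_roi(sum(flows[1:]), -flows[0]):.1%}")
print(f"NPV: ${npv(flows, discount_rate=0.05):,.0f}")
```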


HOW: ASSESSMENT METHODS

Outcomes of system implementation can be assessed using different methods, which can be broadly categorized as quantitative and qualitative study designs. This article briefly discusses selected designs that are often used in evaluating outcomes of CIS implementation; more detailed explanations can be found in the references provided.

Quantitative Designs

Quantitative designs involve the systematic collection of numerical data and analysis using statistical procedures.53 Quantitative research may be conducted using experimental or non-experimental designs. Experimental designs usually have 3 characteristics: randomization, control, and manipulation (i.e., a treatment or intervention). Non-experimental designs are further categorized into quasi-experimental and descriptive designs.53 Quasi-experimental designs are experimental designs without randomization. Descriptive designs are used when the purpose of a study is to observe, describe, or document a situation.

Randomized controlled trials. In this experimental design, subjects are divided into experimental and control groups. To evaluate outcomes of system implementation, a CIS may be implemented in randomly selected services, with similar services used as controls.54 Outcomes are then compared between the experimental and control groups. This is the most rigorous research design, but it may not be feasible in many clinical settings.

One-group pre- and post-test studies. This is a frequently used quasi-experimental design for evaluating outcomes of CIS: researchers compare outcomes before and after CIS implementation.55 Compared with randomized controlled trials, this design is more feasible in many settings; however, the generalizability and interpretability of the results are limited.

Time and motion studies. Time and motion studies are often conducted to assess productivity-oriented outcomes.56 Researchers in the informatics field have used this method to compare the time required to carry out certain tasks on the old versus the newly implemented information system.

Survey studies. A survey is a frequently used descriptive design in which the researcher administers a set of questions to answer research questions.57 For instance, after implementing a CIS, the researcher often surveys users to assess the usability of the system.

Qualitative Designs

Using qualitative designs, researchers collect and analyze subjective data.53 Among the various methods, user testing and interviews are frequently conducted after implementation of clinical systems.

User testing. User testing provides information about how real users use a system and identifies their exact problems with it.32 It is often conducted in a laboratory environment. Several methods can be employed during user testing, including the thinking-aloud method, observation, videotaping, and interviewing.58,59 In the thinking-aloud method, the researcher asks participants to verbalize what they are thinking while being audiotaped. During user testing, the researcher observes the user's performance and documents these observations on a worksheet.

Interviews. The interview method is used to assess the user's experience with the system, such as its usability.53,58 This method can also be helpful in investigating specific events related to the system (e.g., medication errors). Several types of interviews, including structured, semi-structured, and unstructured, can be used.53,58
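As a concrete illustration of two of the quantitative designs above, the hedged sketch below runs (1) a chi-square test on fabricated pre/post error counts, as in a one-group pre- and post-test study, and (2) a two-sample t-test on fabricated task times, as in a time and motion comparison. It assumes SciPy is available; every number is invented.

```python
# A minimal sketch of two quantitative designs, with fabricated data:
# (1) one-group pre/post comparison of error counts (chi-square test);
# (2) time-and-motion comparison of task times, old vs new system (t-test).
from scipy import stats

# (1) Errors vs error-free orders audited before and after go-live (invented).
table = [[42, 9_958],    # pre:  errors, error-free orders
         [27, 10_473]]   # post: errors, error-free orders
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"Pre/post error comparison: chi2 = {chi2:.2f}, p = {p:.3f}")

# (2) Minutes to complete a medication order, old vs new system (invented).
old = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8]
new = [3.2, 3.5, 3.0, 3.8, 3.1, 3.4, 3.6]
t, p = stats.ttest_ind(old, new)
print(f"Time-and-motion comparison: t = {t:.2f}, p = {p:.3f}")
```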

Triangulation

The triangulation approach refers to the use of multiple sources of data, observers, methods, or theories to draw conclusions.53,60 This approach can decrease the bias caused by relying on a single source. For instance, the usability of a system can be assessed using observation, the thinking-aloud method, and a survey.

WHEN: ASSESSMENT TIMING

In implementing systems, outcomes assessment plans must be initiated during the planning phase. The timing of outcome evaluation must be determined by considering various factors, such as the staff's learning curve or patients' conditions.7,39,61 Immediately after implementation, users experience a learning curve7,39,61 and may prefer the old system because they were accustomed to it. Users may need more time to complete a task using the new system; consequently, this would not be an appropriate time to conduct a time and motion study or to count errors. As users gain more exposure to the new system, they become more efficient, resulting in improved outcomes (e.g., documentation time).39 Although a few reports recommend a certain length of time for learning a new system (e.g., approximately 6 months61), certain outcomes may need to be assessed longitudinally for a better understanding of the system's effects.7 Some clinical settings experience seasonal changes in patients' conditions.17,62,63 These changes may affect outcomes of CIS implementation, such as documentation compliance or medication errors, and must therefore be considered in determining the timing of outcomes assessment.
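One pragmatic way to operationalize this timing guidance is to tag each observation by the week since go-live and exclude an initial learning-curve window before estimating steady-state outcomes. The pandas sketch below is a hypothetical illustration: the column names, dates, and 8-week burn-in window are assumptions to be tuned per site and system.

```python
# Hedged sketch: aggregate an outcome by week after go-live and drop an
# assumed learning-curve ("burn-in") window before summarizing. All field
# names, dates, and counts are fabricated for illustration.
import pandas as pd

events = pd.DataFrame({
    "date": pd.to_datetime(["2007-01-03", "2007-01-10", "2007-03-14",
                            "2007-04-02", "2007-04-20", "2007-05-05"]),
    "errors": [9, 7, 4, 3, 4, 2],
})
go_live = pd.Timestamp("2007-01-01")
events["week"] = (events["date"] - go_live).dt.days // 7

burn_in_weeks = 8   # assumed learning-curve window; tune per site and system
steady = events[events["week"] >= burn_in_weeks]
print(f"Mean weekly errors after burn-in: {steady['errors'].mean():.1f}")
```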

ISSUES AND CONSIDERATIONS IN OUTCOMES ASSESSMENT

Systems implementation is a complex process and requires a great deal of time and effort from various individuals. Additionally, the team often faces unforeseen issues or effects during and after implementation. When the implementation team develops outcomes assessment plans, there are several aspects to consider.


Confounding Variables

In planning a study that assesses outcomes of CIS implementation, several confounding variables must be carefully considered. Wyatt and Wyatt10 identified various challenges in assessing outcomes, including (1) organizational factors (e.g., strategic plans); (2) varying purposes and outcomes of each component in a system (i.e., components used by different user groups for different functions); (3) a lack of researcher control over the system (e.g., individual users' experience with computers); (4) phased implementation approaches (i.e., each unit may use different systems, and staff can float to different units); (5) the system's involvement with actual patients (i.e., limited manipulation of the system process); and (6) customization of a system for each organization (which makes it difficult to compare results across studies).

Some of these confounding variables can be controlled using various methods, such as study designs, analysis methods, or instruments appropriate for specific users. For instance, different question items may be needed to assess specific outcomes for certain user groups. Other variables, such as organizational factors or staff turnover, are more difficult to control; the researcher must identify and monitor these variables during the study and take them into consideration when interpreting the results.

Reliability and Validity of the Measures

The reliability and validity of an instrument are critical to the interpretation of study findings. If the instrument is not reliable and valid, the findings cannot be considered valid, and consequently findings across studies cannot be compared. Compared with many published scientific studies in other disciplines, the measurement area of CIS outcomes evaluation research needs further development. A review of 27 articles (published from 1976-2002) on outcomes of clinical information systems18 found that only 8 (30%) reported any information about reliability and/or validity.

CONCLUSION

Currently, healthcare information technology is a priority in most healthcare organizations. At the same time, implementing a CIS is a resource-intensive, high-cost process.6,7 Assessment of the outcomes of CIS implementation is vital not only to justify the cost within the organization but also to promote the national agenda of improving healthcare information technology. As addressed by Friedman et al,8,11,18 outcomes evaluation has not been adequately examined in the area of CIS implementation, and there are great opportunities to improve the rigor of these studies. To build the knowledge base in CIS outcomes evaluation and to facilitate successful CIS implementation, it is essential that frontline implementation team members develop the outcomes assessment plan during the planning phase and strive to conduct sound outcomes evaluation studies. This article provides these individuals with a practical guide to conducting and/or facilitating such studies, focusing on potential outcomes, assessment methods and timing, and issues and considerations.

References are available in the online version of this article at the Nursing Outlook Website: http://www.nursingoutlook.org.



REFERENCES

1. Kohn LT, Corrigan JM, Donaldson MS, editors. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
2. U.S. Department of Health & Human Services. HIPAA: Medical Privacy—National Standards to Protect the Privacy of Personal Health Information. Available at: http://www.hhs.gov/ocr/hipaa/. Accessed December 23, 2006.
3. U.S. Department of Health & Human Services. The national health information infrastructure. Available at: http://aspe.hhs.gov/sp/nhii/. Accessed December 23, 2006.
4. Office of the National Coordinator for Health Information Technology. E-Prescribing. Available at: http://www.hhs.gov/healthit/e-prescribing.html. Accessed December 23, 2006.
5. American Health Information Management Association. The role of the Personal Health Record in the EHR. Available at: http://library.ahima.org/xpedio/groups/public/documents/ahima/bok1_027539.hcsp?dDocName=bok1_027539. Accessed December 23, 2006.
6. Arlotto P, Oakes J. Return on Investment. Chicago, IL: Healthcare Information and Management Systems Society; 2003.
7. Hunt ECH, Sproat SB, Kitzmiller RR. The Nursing Informatics Implementation Guide. New York, NY: Springer; 2004.
8. Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics. New York, NY: Springer; 1997.
9. Ammenwerth E, Kaiser F, Wilhelmy I, Hofer S. Evaluation of user acceptance of information systems in health care—the value of questionnaires. Studies in Health Technology & Informatics 2003;95:643-8.
10. Wyatt JC, Wyatt SM. When and how to evaluate health information systems? Int J Med Inf 2003;69:251-9.
11. Friedman CP, Wyatt JC. Evaluation Methods in Biomedical Informatics. New York, NY: Springer; 2006.
12. Berkenstadt H, Yusim Y, Katznelson R, Ziv A, Livingstone D, Perel A. A novel point-of-care information system reduces anaesthesiologists' errors while managing case scenarios. Eur J Anaesthesiol 2006;23:239-50.
13. Giles LC, Whitehead CH, Jeffers L, McErlean B, Thompson D, Crotty M. Falls in hospitalized patients: can nursing information systems data predict falls? CIN: Computers, Informatics, Nursing 2006;24:167-72.
14. Desikan P, Koram MR, Trivedi SK, Jain A. An evaluation of the effectiveness of the laboratory information system (LIS) with special reference to the microbiology laboratory. Indian J Pathol Microbiol 2005;48:418.
15. Upperman JS, Staley P, Friend K, Neches W, Kazimer D, Benes J, et al. The impact of hospital-wide computerized physician order entry on medical errors in a pediatric hospital. J Pediatr Surg 2005;40:57-9.
16. Kilbridge PM, Welebob EM, Classen DC. Development of the Leapfrog methodology for evaluating hospital-implemented inpatient computerized physician order entry systems. Qual Saf Health Care 2006;15:81-4.
17. Han YY, Carcillo JA, Venkataraman ST, Clark RS, Watson RS, Nguyen TC, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116:1506-12.

18. Friedman CP, Abbas UL. Is medical informatics a mature science? A review of measurement practice in outcome studies of clinical systems. Int J Med Inf 2003;69:261-72.
19. Shortliffe EH, Perreault LE, Wiederhold G, Fagan LM. Medical Informatics: Computer Applications in Health Care and Biomedicine. 2nd ed. New York, NY: Springer; 2001.
20. Nelson L, Taylor F, Adams M, Parker DE. Improving pain management for hip fractured elderly. Orthop Nurs 1990;9:79-83.
21. Stoop AP, Berg M. Integrating quantitative and qualitative methods in patient care information system evaluation. Methods Inf Med 2003;4:458-62.
22. Neville D, Gates K, Tucker S, et al. Towards an evaluation framework for electronic health records initiatives: an annotated bibliography and systematic assessment of the published literature and program reports. Available at: http://www.nlchi.nf.ca/pdf/bio_feb04.pdf. Accessed December 23, 2006.
23. Van Der Meijden MJ, Tange HJ, Troost J, Hasman A. Determinants of success of inpatient clinical information systems: a literature review. J Am Med Inform Assoc 2003;10:235-43.
24. Mekhjian HS, Kumar RR, Kuehn L, Bentley TD, Teater P, Thomas A, et al. Immediate benefits realized following implementation of physician order entry at an academic medical center. J Am Med Inform Assoc 2002;9:529-39.
25. Classen D. Medication safety: moving from illusion to reality. JAMA 2003;289:1154-6.
26. Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998;280:1311-6.
27. Nuckolls JG. Process improvement approach to the care of patients with type 2 diabetes. Postgrad Med 2003;113(suppl):53-62.
28. Toth-Pal E, Nilsson GH, Furhoff AK. Clinical effect of computer generated physician reminders in health screening in primary health care—a controlled clinical trial of preventive services among the elderly. Int J Med Inf 2004;73:695-703.
29. Fung KW, Vogel LH. Will decision support in medications order entry save money? A return on investment analysis of the case of the Hong Kong hospital authority. AMIA Annual Symposium Proceedings 2003:244-54.
30. Krohn R. In search of the ROI from CPOE. J Healthc Inf Manag 2003;17:6-9.
31. Taylor R, Manzo J, Sinnett M. Quantifying value for physician order-entry systems: a balance of cost and quality. Healthc Financ Manage 2002;56:44-8.
32. Nielsen J. Usability Engineering. San Diego, CA: Morgan Kaufman; 1993.
33. Doll WJ, Torkzadeh G. The measurement of end-user computing satisfaction. MIS Quarterly 1988;12:259-74.
34. Hatcher M. Impact of information systems on acute care hospitals: results from a survey in the United States. J Med Syst 1998;22:379-87.
35. Lee F, Teich JM, Spurr CD, Bates DW. Implementation of physician order entry: user satisfaction and self-reported usage patterns. J Am Med Inform Assoc 1996;3:42-55.
36. Harper B, Slaughter L, Norman K. Questionnaire administration via the WWW: a validation and reliability study for a user satisfaction questionnaire. Paper presented at WebNet 97, Association for the Advancement of Computing in Education, Toronto, Canada. Available at: http://www.lap.umd.edu/QUIS/index.html. Accessed December 23, 2006.
37. Mazzoleni MC, Baiardi P, Giorgi I, Franchi G, Marconi R, Cortesi M. Assessing users' satisfaction through perception of usefulness and ease of use in the daily interaction with a hospital information system. Proceedings/AMIA Annual Fall Symposium 1996:752-6.
38. Hatcher M. Survey of acute care hospitals in the United States relative to technology usage and technology transfer. J Med Syst 1997;21:323-37.
39. Poissant L, Pereira J, Tamblyn R, Kawasumi Y. The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc 2005;12:505-16.
40. Galanter WL, Polikaitis A, DiDomenico RJ. A trial of automated safety alerts for inpatient digoxin use with computerized physician order entry. J Am Med Inform Assoc 2004;11:270-7.
41. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409-16.
42. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197-203.
43. Turner CL, Casbard AC, Murphy MF. Barcode technology: its role in increasing the safety of blood transfusion. Transfusion 2003;43:1200-9.
44. Delpierre C, Cuzin L, Fillaux J, Alvarez M, Massip P, Lang T. A systematic review of computer-based patient record systems and quality of care: more randomized clinical trials or a broader approach? Int J Qual Health Care 2004;16:407-16.
45. Erstad TL. Analyzing computer based patient records: a review of literature. J Healthc Inf Manag 2003;17:51-4.
46. Glaser JP, DeBor G, Stuntz L. The New England Healthcare EDI Network. J Healthc Inf Manag 2003;17:42-50.
47. Newell LM, Christensen D. Who's counting now? ROI for patient safety IT initiatives. J Healthc Inf Manag 2003;17:29-35.
48. Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-effectiveness in Health and Medicine. New York, NY: Oxford University Press; 1996.
49. Phillips JJ, Phillips PP. In Action: Measuring Return on Investment. Vol 3. Alexandria, VA: American Society for Training & Development; 2001.
50. Chaiken BP. Clinical ROI: not just costs versus benefits. J Healthc Inf Manag 2003;17:36-41.
51. Snyder-Halpern R, Wagner MC. Evaluating return-on-investment for a hospital clinical information system. Comput Nurs 2000;18:213-9.
52. Corley ST. Electronic prescribing: a review of costs and benefits. Top Health Inf Manage 2003;24:29-38.
53. Polit DF, Beck CT. Nursing Research: Principles and Methods. 7th ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2004.
54. Overhage JM, Tierney WM, Zhou XH, McDonald CJ. A randomized trial of "corollary orders" to prevent errors of omission. J Am Med Inform Assoc 1997;4:364-75.
55. Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin; 1979.
56. Barnes R. Motion and Time Study: Design and Measurement of Work. 7th ed. Hoboken, NJ: John Wiley & Sons, Inc.; 1980.
57. Aday LA. Designing and Conducting Health Surveys: A Comprehensive Guide. 2nd ed. San Francisco, CA: Jossey-Bass; 1996.
58. Dix A, Finlay J, Abowd G, Beale R. Human-Computer Interaction. 3rd ed. London: Prentice Hall Europe; 2003.
59. Preece J, Rogers Y, Sharp H. Interaction Design: Beyond Human-Computer Interaction. New York, NY: John Wiley & Sons; 2002.
60. Ammenwerth E, Iller C, Mansmann U. Can evaluation studies benefit from triangulation? A case study. Int J Med Inf 2003;70:237-48.
61. Blignaut PJ, McDonald T, Tolmie CJ. Predicting the learning and consultation time in a computerized primary healthcare clinic. Comput Nurs 2001;19:130-6.
62. Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med 2001;345:965-70.
63. Manfredini R, Boari B, Smolensky MH, et al. Seasonal variation in onset of myocardial infarction—a 7-year single-center study in Italy. Chronobiol Int 2005;22:1121-35.
