Preventive Medicine 36 (2003) 519–524
www.elsevier.com/locate/ypmed
Status of practice guidelines in the United States: CDC guidelines as an example

Elaine Larson, R.N., Ph.D., FAAN, CIC*

Columbia University School of Nursing, 630 West 168th Street, New York, NY 10032, USA

* Fax: +1-212-305-0722. E-mail address: [email protected]
Abstract

Background. Clinical practice guidelines have proliferated in the past several decades, growing from only a handful in the 1980s to more than 1000 approved through The National Guideline Clearinghouse in 2002.

Methods. The purposes of this article are to review research related to guideline adoption and impact and to make recommendations for assessing the outcomes of guidelines, using the CDC guideline process as an example.

Results. Despite the national movement toward standardization of evidence-based practice, few studies have been conducted to assess the costs of guideline development and implementation, and some practice guidelines have been implemented without concomitant assessment of their effects on patient outcomes or of the costs and benefits of changes in care.

Conclusions. An immediate mandate is to ensure that when guidelines are promulgated, they include an evaluation plan, developed by the implementer of the guideline, which takes advantage of existing qualitative and quantitative data and programs (e.g., patient-centered care, quality assurance, risk management) and is not limited to expensive and sophisticated clinical trials.

© 2003 American Health Foundation and Elsevier Science (USA). All rights reserved.

Keywords: Guidelines; Evidence-based practice; Outcomes assessment
Introduction

Over the past few decades, two movements have had an important impact on how patient care is delivered. First, as early as the 1970s and increasingly in recent decades, variations and inequities in clinical care for a variety of health conditions, such as acute myocardial infarction, cardiac surgery, use of diagnostic procedures, and tonsillectomy, have been noted. Such variations are concerning because they have been associated with factors unrelated to the need for care, such as geography, socioeconomic status, ethnicity, or gender [1–5]. Second, the mandate to moderate escalating costs of health care has resulted in increased motivation among clinicians and policy-makers to identify those practices which result in positive patient outcomes and those practices which cannot be justified because of insufficient evidence of either cost saving or patient benefit [6]. These two factors, recognition of inappropriate variations in patient care and of the need to base clinical practice on outcomes and evidence, have resulted in national efforts to set standards for care and to focus on quality and cost effectiveness.

Clinical practice guidelines offer one mechanism to improve equity and quality in patient care. In 1989, The Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality, AHRQ) was established to "enhance the quality, appropriateness and effectiveness of health care services and access to these services." One of its initial activities was to produce a series of 19 clinical practice guidelines (http://www.ahrq.gov/clinic/cpgonline.htm). The Agency also commissioned The Institute of Medicine (IOM) to examine and make recommendations for the development, dissemination, and implementation of practice guidelines. The resulting two publications have served as the blueprint for rigorous and high-quality practice guidelines [7,8]. The IOM defined practice guidelines as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [8].
In 1996, AHRQ ceased developing guidelines but established The National Guideline Clearinghouse (http://www.guideline.gov/body_home_nf.asp?view=home), which now includes more than 1000 practice guidelines that meet specific quality criteria and are submitted by qualified groups such as other governmental and professional organizations. Currently, many professional organizations in health care are engaged in developing, adapting, and/or implementing evidence-based practice guidelines. In light of this emerging focus on practice guidelines as a mechanism to improve the quality, equity, and efficiency of patient care, the purposes of this article are to briefly review research related to their adoption and impact and to make recommendations for assessing the outcomes of guidelines, using the CDC guideline process as an example.
Studies assessing the adoption and impact of guidelines

Research to date on national practice guidelines has focused primarily on two areas: the extent to which guidelines are accepted and implemented by clinicians (i.e., diffusion and adoption), and their impact on processes and outcomes of patient care. While a number of studies have demonstrated that guideline adherence and attitudes toward practice guidelines are poor, particularly among physicians [9–12], other studies have demonstrated improvements in practice [13]. For example, a 40% increase in appropriate referrals was noted after introduction of referral guidelines for dermatology [14], emergency and hospital use for asthma declined following implementation of a national guideline [15], and improvements in compliance with evidence-based guidelines have been demonstrated in cancer care [16,17].

One of the primary challenges of national practice guidelines has been overcoming barriers to their adoption and adherence. Widespread awareness of a guideline is not necessarily followed by adherence [18]. In a study conducted at Kaiser Permanente, Brown et al. evaluated, under naturalistic conditions, the effectiveness of two implementation methods, continuous quality improvement (CQI) and academic detailing, for the AHCPR Guideline for the Detection and Treatment of Depression in Primary Care. Because most of the CQI team's recommendations were not implemented and academic detailing failed to improve symptoms or measures of overall functional status, they concluded that new organizational structures may be necessary before practice guidelines can substantially change complex processes [19]. Cabana et al. [20] reviewed 76 studies describing barriers to physician adherence to clinical practice guidelines. Based on a model describing an ideal mechanism of action for guidelines, they classified possible barriers into seven general categories related to knowledge (lack of familiarity or awareness), attitudes (lack of agreement, outcome expectancy, self-efficacy, or motivation), and behavior (external barriers such as patient factors). They found that more than half of the studies examined only a single barrier and concluded that programs which fail to consider the variety of barriers to adherence are less likely to succeed. One of the identified barriers to implementation of AHCPR's guideline for unstable angina was its unknown effect on patient outcomes [21]. Pediatricians in a national survey reported that practice guidelines are likely to be followed if they are simple, flexible, rigorous, not used punitively, and motivated by a desire to improve patient care [22]. Similarly, Crain et al. [23] concluded that the impact of NIH-developed guidelines for pediatric asthma care remained unclear because of the lack of evidence that the guidelines would improve outcomes. Thus, the link between practice guidelines and patient outcomes is not only essential to effectiveness but also a prerequisite for acceptance of a guideline by clinicians; the lack of an apparent link to patient outcomes represents a major barrier to successful implementation.

Studies assessing the impact of guidelines on patient outcomes are increasing in number but remain relatively uncommon. For example, among 18 studies of almost 1900 physicians that examined the effects of various outreach interventions, only one measured patient outcomes [24]. Some studies have demonstrated positive patient outcomes, e.g., in the management of back pain and the care of hospitalized patients with congestive heart failure [25,26] and in the functional status of poststroke patients [27], but others have not [28,29]. Windsor and colleagues [30] examined the impact of the AHCPR guideline on smoking cessation and demonstrated a reduction in smoking among pregnant Medicaid recipients. A guideline produced by the Washington State Department of Labor and Industries for elective lumbar fusion was associated with a decline in rates of that procedure [31].

It has recently been noted that cost effectiveness studies are particularly relevant to guidelines for preventive measures and services [32]. Studies of the economic impact of guidelines generally confirm that there is a cost saving to standardizing practice [33–35]. Two studies have demonstrated improvements in antibiotic prescribing and reduced costs when cystitis treatment guidelines were used [36,37]. With regard to costs, it is important to distinguish between treatment cost effectiveness and policy cost effectiveness, which combines treatment cost effectiveness with the cost and the magnitude of the change in practice achieved by the guideline [38]. Unfortunately, however, cost studies are rare [24,39], and the actual costs of guideline development and implementation (e.g., education, monitoring, and feedback of staff) have not generally been included in the few economic analyses conducted.
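To make the distinction concrete, the following minimal Python sketch contrasts treatment cost effectiveness (cost per infection averted assuming full adherence) with policy cost effectiveness (which also charges implementation cost and credits only the fraction of practice actually changed). The functions, dollar figures, and simplified formulation are illustrative assumptions for this article, not the model developed in [38].

# Illustrative (hypothetical) contrast between treatment and policy cost
# effectiveness for a practice guideline; all figures are invented, and the
# formulation is a simplification rather than the model in reference [38].

def treatment_cost_effectiveness(extra_cost_per_patient: float,
                                 infections_averted_per_patient: float) -> float:
    """Incremental cost per infection averted, assuming full adherence."""
    return extra_cost_per_patient / infections_averted_per_patient

def policy_cost_effectiveness(extra_cost_per_patient: float,
                              infections_averted_per_patient: float,
                              patients: int,
                              adherence_change: float,
                              implementation_cost: float) -> float:
    """Cost per infection averted once implementation cost and the actual
    magnitude of behavior change produced by the guideline are included."""
    changed = patients * adherence_change              # patients whose care actually changes
    total_cost = changed * extra_cost_per_patient + implementation_cost
    infections_averted = changed * infections_averted_per_patient
    return total_cost / infections_averted

# Hypothetical numbers: $20 extra per patient, 0.5% absolute reduction in
# infections, 10,000 patients/year, guideline changes practice for 30% of them,
# $50,000/year to disseminate, train, and monitor.
print(round(treatment_cost_effectiveness(20.0, 0.005)))                        # 4000
print(round(policy_cost_effectiveness(20.0, 0.005, 10_000, 0.30, 50_000.0)))   # 7333

Even with identical clinical efficacy, the policy figure is considerably less favorable once dissemination, training, and monitoring costs are spread over the smaller number of patients whose care actually changes.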
CDC guidelines as an example

In 1981, the Centers for Disease Control and Prevention (CDC) published a manual, "Guidelines for the Prevention
and Control of Nosocomial Infections," containing nine clinical guidelines for infection prevention and control, including handwashing [40]. The publication and widespread dissemination of these guidelines revolutionized practice because it was the first time that a federal agency had gathered recommended, evidence-based standards of practice to prevent infections into a single document. Although CDC has been consistently careful through the years to emphasize that these guidelines are not regulatory, they are taken very seriously by essentially all hospitals and serve as the basis for accreditation and standard setting. In fact, if a health care institution does not comply with the guidelines, it may be required to provide an acceptable rationale in order to become fully accredited (i.e., the guidelines are considered the "gold standard" for practice).

Currently, CDC infection prevention and control guidelines are developed through a careful, rigorous, evidence-based process overseen by The Healthcare Infection Control Practices Advisory Committee (HICPAC), Division of Healthcare Quality Promotion of The National Center for Infectious Diseases. HICPAC was established in 1992 for the purpose of advising CDC on infection control practices and preparing clinical guidelines. HICPAC consists of 14 members, selected for their expertise and appointed by the Secretary of DHHS. The Committee's role in guideline development includes prioritizing topics, identifying expert authors, and overseeing the process. Between 1996 and 2001, eight guidelines were published on topics such as prevention of intravascular device-related infections, surgical site infections, and nosocomial pneumonia; isolation precautions; infection control in health care personnel; and environmental infection control.

Each guideline has two parts: a synthesis of the state-of-the-science research literature on the topic and a set of recommendations based on the evidence. The recommendations are categorized as IA (strongly recommended, based on well-designed studies), IB (strongly recommended, based on some studies and a strong theoretical rationale), II (suggested for implementation and supported by suggestive studies or a theoretical rationale), and No Recommendation (practices with insufficient evidence or lack of consensus). All of the guidelines are widely disseminated through the Federal Register, the websites of CDC and relevant professional organizations, and journals. The latest guideline, the Guideline for Hand Hygiene in Health-Care Settings (available at http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5116a1.htm), represents a step forward because it is the first HICPAC guideline jointly developed by CDC and three nongovernmental professional societies (the Association for Professionals in Infection Control and Epidemiology (APIC), the Society for Healthcare Epidemiology of America, and the Infectious Diseases Society of America).

Recognizing the importance of assessing the impact of guidelines, HICPAC has recently added a section to each new or revised guideline that includes examples of outcome and process measures which could be adopted or adapted by
health care facilities. HICPAC recognizes that it is not its responsibility to monitor guideline effectiveness, because institutions will take varying approaches and seek different outcomes when they implement the same guideline, but the Committee is sending a strong signal that assessment of impact is a vital component of guideline implementation.

Three studies have examined the CDC guidelines specifically, but only one assessed patient outcomes. The introduction of the CDC group B streptococcal prevention guideline into a health maintenance organization was associated with improved patient care: increased maternal intrapartum antibiotic use and decreased laboratory tests in infants [41]. Another study evaluated a clinical decision process model for appropriateness of vancomycin use using modified HICPAC guidelines [42]. A third study, funded through a cooperative agreement with CDC, assessed the diffusion and adoption of CDC guidelines in a stratified random sample of 445 U.S. hospitals [43]. Three aspects of diffusion and adoption of the seven guidelines published by 1987 were measured: whether the guidelines were available in the hospital, the extent to which the guidelines had been reviewed, and an overall index of adoption (the number of the 16 recommendations that had been adopted). There was widespread diffusion of the guidelines; 84% of respondents were familiar with them. Adoption ranged from 23 to 75%, depending upon the topic of the guideline. This study confirmed that CDC guidelines were widely disseminated and adopted. Lacking in this study, however, was any assessment of guideline impact on rates of nosocomial infections or costs.
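As an illustration of the kind of adoption index used in the diffusion study [43], the short Python sketch below scores hypothetical hospital survey responses against a 16-item recommendation checklist. The hospitals, guideline name, and response data are invented for the example and are not drawn from that study.

# Sketch of an adoption index: each hospital reports, per guideline, which of
# 16 recommendations it has adopted; the index is the count adopted. Data invented.
from statistics import mean

survey = {
    "Hospital A": {"handwashing": [True] * 12 + [False] * 4},
    "Hospital B": {"handwashing": [True] * 7 + [False] * 9},
}

def adoption_index(flags: list[bool]) -> int:
    """Number of recommendations adopted (out of len(flags), here 16)."""
    return sum(flags)

def mean_adoption_pct(guideline: str) -> float:
    """Mean percentage of recommendations adopted across responding hospitals."""
    scores = [adoption_index(g[guideline]) / len(g[guideline])
              for g in survey.values() if guideline in g]
    return 100 * mean(scores)

print(adoption_index(survey["Hospital A"]["handwashing"]))  # 12
print(round(mean_adoption_pct("handwashing"), 1))           # 59.4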
New CDC hand hygiene guideline for health care settings

The first national hand hygiene guideline was published by CDC in 1981 [40]. By the 1990s the guideline had become outdated, but CDC, citing fiscal constraints, had halted its guideline writing and updating process in the mid-1980s. Therefore APIC, the professional organization comprising about 12,000 professionals in infection prevention and control, took on the task of guideline development. Its first guideline, published in 1988 and revised in 1995, was on hand hygiene [44,45]. In 1992 CDC reactivated its guideline development process with the formation of HICPAC and, in 1999, because of new research in the field, elected to produce a new guideline for hand hygiene. The recommendations in this newest guideline, published in late October 2002, require some major departures from traditional clinical practice, including use of a waterless, alcohol-based antiseptic rather than soap or detergent and water for all clinical indications except when hands are visibly soiled (i.e., handwashing will be replaced by antiseptic hand rub in most patient care encounters), provision of lotion by health care facilities, prohibition of artificial fingernails, an institutional mandate to provide staff
education, and development of a multidisciplinary program to monitor compliance with the recommendations (http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5116a1.htm). All recommendations are based on sound evidence from clinical trials and other research. Nevertheless, these changes will be not only challenging to implement but also costly.
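As one illustration of the monitoring component, the Python sketch below computes observed hand hygiene compliance (actions performed divided by opportunities) by unit from direct-observation records. The records, unit names, and choice of metric are assumptions for the example rather than requirements spelled out in the guideline.

# Minimal sketch of compliance monitoring from direct observation: each record
# is (unit, whether hand hygiene was performed at an opportunity). Data invented.
from collections import defaultdict

observations = [
    ("ICU", True), ("ICU", True), ("ICU", False), ("ICU", True),
    ("Med-Surg", True), ("Med-Surg", False), ("Med-Surg", False),
]

def compliance_by_unit(obs):
    """Return {unit: (performed, opportunities, percent compliance)}."""
    tally = defaultdict(lambda: [0, 0])   # unit -> [performed, opportunities]
    for unit, performed in obs:
        tally[unit][1] += 1
        if performed:
            tally[unit][0] += 1
    return {u: (p, n, round(100 * p / n, 1)) for u, (p, n) in tally.items()}

print(compliance_by_unit(observations))
# {'ICU': (3, 4, 75.0), 'Med-Surg': (1, 3, 33.3)}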
Conceptual underpinnings

While the IOM, AHRQ, and other agencies have published criteria for guideline development and quality, there is less information or guidance for assessing the clinical impact of guidelines. For example, the IOM text on clinical practice guidelines [8] devotes only three pages to assessment of impact. Similarly, in 1996 CDC published a document on improving the quality of guidelines. Even though this document describes in detail the entire development process, including needs assessment, defining the scope and framework of the guideline, coordinating the review and preparation process, and updating, only 2 of 185 pages are devoted to assessment of impact [46]. Unfortunately, the recommendations of this document were never implemented, and there is no standard mechanism for evaluating CDC (or any other) guidelines.

From the research evaluating practice guidelines to date, it is clear that studies to measure the clinical outcomes of practice guidelines need to examine both the implementation process (i.e., whether the guideline is actually adopted) and the efficacy of the recommendations. Only then will it be possible to determine whether the results of studies evaluating the clinical impact of practice guidelines are attributable to problems associated with process (implementation) or with outcomes (actual effectiveness of the recommendations). Donabedian proposed a conceptual model suggesting that research on the quality of care must examine three dimensions: structure, process, and outcomes [47]. In this schema, the structure variable is a CDC practice guideline; the process variables are diffusion and adoption of the guideline's recommendations and barriers to their adoption; and the outcomes, for infection prevention and control guidelines, must be changes in both rates of nosocomial infections and costs:

Structure (CDC guideline) → Process (diffusion and adoption of the guideline's recommendations; barriers to adoption/compliance) → Outcomes (rates of nosocomial infections; costs)
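One way a facility might record this schema when drafting an evaluation plan is sketched below; the class and the example measures are purely illustrative assumptions and are not part of Donabedian's model or any CDC document.

# Illustrative encoding of the structure-process-outcome schema [47] for a
# guideline evaluation plan; the example measures are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GuidelineEvaluationPlan:
    structure: str                                               # the guideline itself
    process_measures: list[str] = field(default_factory=list)    # diffusion/adoption, barriers
    outcome_measures: list[str] = field(default_factory=list)    # infections, costs

plan = GuidelineEvaluationPlan(
    structure="CDC/HICPAC hand hygiene guideline (2002)",
    process_measures=[
        "proportion of units stocking alcohol-based hand rub",
        "observed hand hygiene compliance",
        "staff-reported barriers (surveys)",
    ],
    outcome_measures=[
        "health care-associated infection rate per 1,000 patient-days",
        "cost of products, education, and monitoring vs. infections averted",
    ],
)
print(plan.structure)
print(len(plan.process_measures), len(plan.outcome_measures))  # 3 2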
Recommendations for assessing patient outcomes and costs of guidelines

Guideline development requires significant resources; the average time to prepare a HICPAC guideline, for example, is about 2 years, and the process involves dozens of experts.
Additional resources are required for disseminating and implementing guidelines, and because guidelines become outdated within a few years [48], there is additional cost for updating them on a regular basis. Despite this national movement toward standardization of evidence-based practice, the clinical impact and patient outcomes associated with implementation of practice guidelines are not always assessed. For example, only a single impact/outcomes study has been published for any of the CDC guidelines, despite the fact that they have been widely used for more than two decades. Further, to our knowledge, no study of guideline impact has attempted to measure the costs of guideline implementation. The new Hand Hygiene Guideline proposes major changes in clinical practice and therefore offers a unique opportunity to assess impact on patient outcomes, specifically health care-associated infections, as well as on costs.

The primary challenge is to determine the most feasible methods to conduct evaluation studies, given competing priorities and limited expertise and financial resources. Clearly, the "gold standard" for outcomes assessment would be the randomized clinical trial (RCT) or another formal interventional design. Grimshaw and colleagues have summarized a number of quasi-experimental and experimental designs appropriate for assessing guidelines [49]. Unfortunately, even when an RCT suggests that a guideline is useful, evidence for external validity may be lacking, and the cost of conducting clinical trials in every facility would be prohibitive and wasteful. Littlejohns and Cluzeau [50] noted that the benefits and costs of guidelines, like those of other educational interventions, may be extremely difficult to quantify. Just as there are a variety of methods to implement guidelines (e.g., educational, marketing, behavioral, structural, economic, legal) [51,52], a variety of methods are necessary to assess the outcomes of guideline implementation. In addition to traditional experimental designs, other more feasible research methods such as user and patient surveys, pre- and post-test studies, simple economic analyses, and statistical modeling are appropriate alternatives; a minimal sketch of such a pre/post comparison appears at the end of this section. While a few commercial products are being developed to evaluate guidelines [53], many currently available data sources can be mined for the purposes of outcomes assessment. Studies to date have successfully used four major data sources: computerized/automated medical records and other databases [53–56], self-report surveys of providers [18,22,57–59], retrospective chart reviews and abstraction [27,60–62], and statistical modeling of practice changes or costs [63,64].

Granted, the challenge of measuring the impact of practice guidelines is daunting. On the other hand, the expertise, equipment, and supplies needed to effect the cultural and systems changes necessary to implement the recommendations of guidelines that require clinicians to change long-ingrained habits and traditions are also daunting. Resource utilization is important in health care (as elsewhere) because limitless demands compete with limited resources, and
hence, it is irresponsible to expend such major resources without assessing the outcomes. From this review of the state-of-the-science of guideline development, it is clear that the immediate mandate is to ensure that when guidelines such as the CDC infection prevention and control recommendations are promulgated, they include a feasible plan for evaluating their impact, outcomes, and costs. Because guidelines will serve different purposes in different settings and will have differing goals, plans to assess outcomes and costs must come from the implementer, not the developer, of the guideline; should take advantage of existing qualitative and quantitative data and programs (e.g., patient-centered care, quality assurance, risk management); and should not be limited to expensive and sophisticated clinical trials.
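The pre/post comparison mentioned above could be as simple as the following Python sketch, which compares infection counts from routine surveillance before and after guideline implementation using a two-proportion z test; the counts, denominators, and choice of test are illustrative assumptions, not part of any guideline.

# Feasible pre/post outcome assessment from routinely collected surveillance
# counts; all numbers are invented and the test choice is an assumption.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p value for the difference in two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approximation
    return z, p_value

# Hypothetical surveillance data: infections per device-days, pre vs. post.
z, p = two_proportion_z(x1=48, n1=12_000, x2=30, n2=12_500)
print(round(z, 2), round(p, 3))   # roughly 2.22 0.026 for these invented counts

In practice, such a crude comparison would need to account for secular trends and case mix, but it can be run from data most infection control programs already collect.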
References

[1] Perrin JM, Homer CJ, Berwick DM, Woolf AD, Freeman JL, Wennberg JE. Variations in rates of hospitalization of children in three urban communities. N Engl J Med 1989;320:1183–7.
[2] McPherson K, Wennberg JE, Hovind OB, Clifford P. Small-area variations in the use of common surgical procedures: an international comparison of New England, England, and Norway. N Engl J Med 1982;307:1310–4.
[3] Welch WP, Miller ME, Welch HG, Fisher ES, Wennberg JE. Geographic variation in expenditures for physicians' services in the United States. N Engl J Med 1993;328:621–7.
[4] O'Connor GT, Quinton HB, Traven ND, et al. Geographic variation in the treatment of acute myocardial infarction: the Cooperative Cardiovascular Project. JAMA 1999;281:627–33.
[5] Giugliano RP, Camargo CA Jr, Lloyd-Jones DM, et al. Elderly patients receive less aggressive medical and invasive management of unstable angina: potential impact of practice guidelines. Arch Intern Med 1998;158:1113–20.
[6] Oermann MH, Huber D. Patient outcomes. A measure of nursing's value. Am J Nurs 1999;99:40–7.
[7] Field M, Lohr K. Guidelines for clinical practice: from development to use. Washington, DC: National Academy Press, 1992.
[8] Field M, Lohr K. Clinical practice guidelines: directions for a new program. Washington, DC: National Academy Press, 1990.
[9] Di Iorio D, Henley E, Doughty A. A survey of primary care physician practice patterns and adherence to acute low back problem guidelines. Arch Fam Med 2000;9:1015–21.
[10] Szajewska H, Hoekstra JH, Sandhu B. Management of acute gastroenteritis in Europe and the impact of the new recommendations: a multicenter study. The Working Group on acute Diarrhoea of the European Society for Paediatric Gastroenterology, Hepatology, and Nutrition. J Pediatr Gastroenterol Nutr 2000;30:522–7.
[11] Dhenain M, Vanhems P, Colin C, et al. Published guidelines have a limited impact on the first prescription of antiretroviral therapy for HIV-1-infected patients in Lyon, France. AIDS 2000;14:1673–4.
[12] Gattellari M, Ward J, Solomon M. Implementing guidelines about colorectal cancer: a national survey of target groups. Aust New Zeal J Surg 2001;71:147–53.
[13] Bialas MC, Evans RJ, Hutchings AD, Alldridge G, Routledge PA. The impact of nationally distributed guidelines on the management of paracetamol poisoning in accident and emergency departments. National Poison Information Service. J Accid Emerg Med 1998;15:13–7.
[14] Hill VA, Wong E, Hart CJ. General practitioner referral guidelines for dermatology: do they improve the quality of referrals? Clin Exp Dermatol 2000;25:371–6.
[15] Nestor A, Calhoun AC, Dickson M, Kalik CA. Cross-sectional analysis of the relationship between national guideline recommended asthma drug therapy and emergency/hospital use within a managed care population. Ann Allergy Asthma Immunol 1998;81:327–30.
[16] Debrix I, Tilleul P, Milleron B, et al. The relationship between introduction of American Society of Clinical Oncology guidelines and the use of colony-stimulating factors in clinical practice in a Paris university hospital. Clin Ther 2001;23:1116–27.
[17] Smith TJ, Hillner BE. Ensuring quality cancer care by the use of clinical practice guidelines and critical pathways. J Clin Oncol 2001;19:2886–97.
[18] Finkelstein JA, Lozano P, Shulruff R, et al. Self-reported physician practices for children with asthma: are national guidelines followed? Pediatrics 2000;106:886–96.
[19] Brown JB, Shye D, McFarland BH, Nichols GA, Mullooly JP, Johnson RE. Controlled trials of CQI and academic detailing to implement a clinical practice guideline for depression. Jt Comm J Qual Improv 2000;26:39–54.
[20] Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999;282:1458–65.
[21] Katz DA. Barriers between guidelines and improved patient care: an analysis of AHCPR's Unstable Angina Clinical Practice Guideline. Agency for Health Care Policy and Research. Health Serv Res 1999;34:377–89.
[22] Flores G, Lee M, Bauchner H, Kastner B. Pediatricians' attitudes, beliefs, and practices regarding clinical practice guidelines: a national survey. Pediatrics 2000;105:496–501.
[23] Crain EF, Weiss KB, Fagan MJ. Pediatric asthma care in US emergency departments. Current practice in the context of the National Institutes of Health guidelines. Arch Pediatr Adolesc Med 1995;149:893–901.
[24] Thomson O'Brien MA, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2000:2.
[25] Costantini O, Huck K, Carlson MD, et al. Impact of a guideline-based disease management team on outcomes of hospitalized patients with congestive heart failure. Arch Intern Med 2001;161:177–82.
[26] Goldberg HI, Deyo RA, Taylor VM, et al. Can evidence change the rate of back surgery? A randomized trial of community-based education. Eff Clin Pract 2001;4:95–104.
[27] Duncan PW, Horner RD, Reker DM, et al. Adherence to postacute rehabilitation guidelines is associated with functional recovery in stroke. Stroke 2002;33:167–77.
[28] Cowie RL, Underwood MF, Mack S. The impact of asthma management guideline dissemination on the control of asthma in the community. Can Respir J 2001;8 Suppl A:41A–5A.
[29] Ramsay CR, Campbell MK, Cantarovich D, et al. Evaluation of clinical guidelines for the management of end-stage renal disease in Europe: the EU BIOMED 1 study. Nephrol Dial Transplant 2000;15:1394–8.
[30] Windsor RA, Woodby LL, Miller TM, Hardin JM, Crawford MA, DiClemente CC. Effectiveness of Agency for Health Care Policy and Research clinical practice guideline and patient education methods for pregnant smokers in Medicaid maternity care. Am J Obstet Gynecol 2000;182:68–75.
[31] Elam K, Taylor V, Ciol MA, Franklin GM, Deyo RA. Impact of a worker's compensation practice guideline on lumbar spine fusion in Washington State. Med Care 1997;35:417–24.
[32] Saha S, Hoerger TJ, Pignone MP, Teutsch SM, Helfand M, Mandelblatt JS. The art and science of incorporating cost effectiveness into evidence-based recommendations for clinical preventive services. Am J Prev Med 2001;20:36–43.
[33] Nathwani D, Rubinstein E, Barlow G, Davey P. Do guidelines for community-acquired pneumonia improve the cost-effectiveness of hospital care? Clin Infect Dis 2001;32:728–41.
[34] Mille D, Roy T, Carrere MO, et al. Economic impact of harmonizing medical practices: compliance with clinical practice guidelines in the follow-up of breast cancer in a French Comprehensive Cancer Center. J Clin Oncol 2000;18:1718–24.
[35] Berild D, Ringertz SH, Lelek M, Fosse B. Antibiotic guidelines lead to reductions in the use and cost of antibiotics in a university hospital. Scand J Infect Dis 2001;33:63–7.
[36] O'Connor PJ, Solberg LI, Christianson J, Amundson G, Mosser G. Mechanism of action and impact of a cystitis clinical practice guideline on outcomes and costs of care in an HMO. Jt Comm J Qual Improv 1996;22:673–82.
[37] Goode CJ, Tanaka DJ, Krugman M, et al. Outcomes from use of an evidence-based practice guideline. Nurs Econ 2000;18:202–7.
[38] Mason J, Freemantle N, Nazareth I, Eccles M, Haines A, Drummond M. When is it cost-effective to change the behavior of health professionals? JAMA 2001;286:2988–92.
[39] Grol R. Between evidence-based practice and total quality management: the implementation of cost-effective care. Int J Qual Health Care 2000;12:297–304.
[40] CDC. Guidelines for the prevention and control of nosocomial infections. Atlanta, GA: Centers for Disease Control, 1981.
[41] Davis RL, Hasselquist MB, Cardenas V, et al. Introduction of the new Centers for Disease Control and Prevention group B streptococcal prevention guideline at a large West Coast health maintenance organization. Am J Obstet Gynecol 2001;184:603–10.
[42] Salemi C, Becker L, Morrissey R, Warmington J. A clinical decision process model for evaluating vancomycin use with modified HICPAC guidelines. Hospital Infection Control Practice Advisory Committee. Clin Perform Qual Health Care 1998;6:12–16.
[43] Celentano DD, Morlock LL, Malitz FE. Diffusion and adoption of CDC guidelines for the prevention and control of nosocomial infections in US hospitals. Infect Control 1987;8:415–23.
[44] Larson E. Guideline for use of topical antimicrobial agents. Am J Infect Control 1988;16:253–66.
[45] Larson EL. APIC guideline for handwashing and hand antisepsis in health care settings. Am J Infect Control 1995;23:251–69.
[46] Epidemiology Program Office, Program Evaluation Activity. CDC guidelines: improving the quality. Atlanta, GA: Centers for Disease Control and Prevention, 1996.
[47] Donabedian A. The quality of care. How can it be assessed? JAMA 1988;260:1743–8.
[48] Shekelle PG, Ortiz E, Rhodes S, et al. Validity of the Agency for Healthcare Research and Quality Clinical Practice Guidelines: how quickly do guidelines become outdated? JAMA 2001;286:1461–7.
[49] Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract 2000;17(Suppl 1):S11–16.
[50] Littlejohns P, Cluzeau F. Guidelines for evaluation. Fam Pract 2000;17(Suppl 1):S3–6.
[51] Freemantle N. Implementation strategies. Fam Pract 2000;17(Suppl 1):S7–10.
[52] Grol R, Jones R. Twenty years of implementation research. Fam Pract 2000;17(Suppl 1):S32–5.
[53] Metfessel BA. An automated tool for an analysis of compliance to evidence-based clinical guidelines. Medinfo 2001;10:226–30.
[54] Chen RS, Rosenheck R. Using a computerized patient database to evaluate guideline adherence and measure patterns of care for major depression. J Behav Health Serv Res 2001;28:466–74.
[55] Chen RS, Nadkarni PM, Levin FL, Miller PL, Erdos J, Rosenheck RA. Using a computer database to monitor compliance with pharmacotherapeutic guidelines for schizophrenia. Psychiatr Serv 2000;51:791–4.
[56] Eytan TA, Goldberg HI. How effective is the computer-based clinical practice guideline? Eff Clin Pract 2001;4:24–33.
[57] Feely J. The therapeutic gap: compliance with medication and guidelines. Atherosclerosis 1999;147(Suppl 1):S31–7.
[58] Holloway RG, Gifford DR, Frankel MR, Vickrey BG. A randomized trial to implement practice recommendations: design and methods of the Dementia Care Study. Control Clin Trials 1999;20:369–85.
[59] James PA, Cowan TM, Graham RP. Patient-centered clinical decisions and their impact on physician adherence to clinical guidelines. J Fam Pract 1998;46:311–18.
[60] Frankel HL, FitzPatrick MK, Gaskell S, Hoff WS, Rotondo MF, Schwab CW. Strategies to improve compliance with evidence-based clinical management guidelines. J Am Coll Surg 1999;189:533–8.
[61] LaClair BJ, Reker DM, Duncan PW, Horner RD, Hoenig H. Stroke care: a method for measuring compliance with AHCPR guidelines. Am J Phys Med Rehabil 2001;80:235–42.
[62] Lia-Hoagberg B, Schaffer M, Strohschein S. Public health nursing practice guidelines: an evaluation of dissemination and use. Public Health Nurs 1999;16:397–404.
[63] Marshall DA, Simpson KN, Norton EC, Biddle AK, Youle M. Measuring the effect of clinical guidelines on patient outcomes. Int J Technol Assess Health Care 2000;16:1013–23.
[64] Marshall D, Simpson KN, Earle CC, Chu CW. Economic decision analysis model of screening for lung cancer. Eur J Cancer 2001;37:1759–67.