Evidence for the Effectiveness of Techniques To Change Physician Behavior*

Wally R. Smith, MD

Study objectives: To understand the theory and results of how to improve physician performance, as part of overall health-care quality improvement. In particular, to study whether and how guideline production and dissemination affects physician performance.
Design: Review of meta-analyses and structured reviews; review of behavior change theories implicit in interventions to change physician performance.
Setting: Primarily the United States.
Patients or participants: Various patients and physicians, determined by reviews.
Interventions: None.
Measurements and results: There is no unifying theory of physician behavior change tested among physicians in practice. Attempts to affect individual physicians' performance have often met with failure. Mixed results are found for almost all interventions reviewed. Multiple interventions yield better results.
Conclusions: The answer to the question of what works to improve an individual physician's clinical performance is not simple. Emerging theory and evidence suggest that applications of behavior-change methods should not be focused on which tools (don't) always work. Instead, guideline development and implementation methods should be theory driven and evidence based (supported by evidence that proves the theory correct). In particular, the framework of evidence-based quality assessment offers some insight into past failures and offers hope for organizing attempts at guideline implementation. (CHEST 2000; 118:8S–17S)
Key words: changing physician behavior; evidence-based medicine; guidelines; meta-analyses; quality improvement
Abbreviations: CME = continuing medical education; EBQA = evidence-based quality assessment

*From the Division of Quality Health Care, Department of Internal Medicine, Virginia Commonwealth University, Medical College of Virginia Campus, Richmond, VA.
Correspondence to: Wally R. Smith, MD, Associate Professor and Chairman, Division of Quality Health Care, Virginia Commonwealth University, Medical College of Virginia Campus, Box 980306, 1200 E Broad, W 10-402, Richmond, VA 23298

Background

The Rise (and Fall?) of Guideline Implementation Within Quality Improvement Programs

There has long been a clamor for an understanding of what works and what does not work to improve physician performance, as part of overall health-care quality improvement.1-4 The use of guidelines has become a popular, integral part of a reasoned approach to improving individual physician performance. A MEDLINE search lists 4,127 publications since 1966 under the publication category

practice guideline, 3,969 of which were published since 1989. However, efforts to implement guidelines using tools to affect an individual physician's performance have often met with failure. These failures have frustrated clinicians interested in improving their own practices, policy makers, administrators, leaders in managed care and quality assurance, those interested in health-care policy, researchers in this area,5 and organizations funding quality-improvement efforts.6,7 The Agency for Healthcare Research and Quality is rethinking its guideline development effort, becoming a clearinghouse rather than a developer of guidelines,8 after a multimillion-dollar effort, 16 published evidence-based guidelines, and little evidence of influence on behavior.9 A chart review concluded that the National Institutes of Health Consensus Statement program had failed to stimulate change in physician practice, despite moderate success in reaching the appropriate target audience.10 This disappointing history

raises the possibility that guideline implementation in health care could be in danger of "falling," after its dramatic rise. Alas, there are no magic bullets guaranteed to improve physicians' adherence to guidelines in their practice environments. But an emerging new approach to performance improvement offers some hope. Evidence-based quality assessment (EBQA) can be summarized in four steps, published in a series of articles analogous to the Users' Guides to the Medical Literature,11 which sprang from the evidence-based medicine movement.12 The four steps of EBQA are analogous to the four steps of the quality improvement cycle13: set priorities (plan),14 set guidelines (do),15 measure performance (check),16 and improve performance (act).17 EBQA offers important guidelines for each of these steps, ie, a guideline for implementing guidelines.18 This approach offers higher hopes, though no guarantees, of success.

Setting priorities (planning) involves asking, "How important is the problem (the problem our guideline or other intervention intends to solve)?" EBQA urges planners to concentrate on important, well-defined clinical problems where there is good enough evidence to indicate optimal practice. Otherwise, guidelines may end up addressing issues about which there is no agreement, and may therefore be difficult to implement.

Setting guidelines (doing) involves asking, "How should we manage the problem (what should be the content of the guideline)?" Guidelines usually seek to authoritatively address questions surrounding diagnosis and therapy, rather than risk or prognosis. To get precise, authoritative answers from the literature, guideline authors should first repeatedly hone questions so that they are clear and concise.19 Authors should conduct systematic searches for articles and should grade the available literature, using published standards for critical review. Authors should then use the synthetic tools of evidence-based medicine, such as meta-analysis and decision analysis, including cost-effectiveness analysis, to determine answers to their questions. This process, while exhausting, is rewarding, and may produce answers that surprise both aspirant guideline authors and users. Perhaps the most illustrative examples of the time, expense, and reward of this process are the guidelines published by the former Agency for Health Care Policy and Research.20,21

Measuring performance (checking) involves asking, "How are we (potential users of the guideline) managing the problem?" The goal of this process is to assess the extent to which physicians are adhering to optimal practice, as determined by guidelines. Ideally, performance indicators should be reported

as rates, and should assess performance of unambiguously (not) indicated tasks. The interpretation of whether the indicator has been met should be equally unambiguous. Directly observed, entered, or charted behaviors should be assessed, and behaviors should be unambiguously attributable to the individual(s) being assessed.

Improving performance (acting) involves asking, "How can we improve how we manage the problem (which the guideline intends to solve)?" To answer this question, this article reviews what strategies have been useful to improve individual physician performance, once a practice problem has been identified, the best practice has been determined, and the current practice has been compared to best practice. As a background for the review, the article briefly covers features of physician behavior, behavior change theories applicable to physician behavior, and approaches to changing physician behavior, and discusses methodologic issues affecting studies of interventions. In order to improve attempts at guideline implementation, EBQA is offered as an approach.

Special Features of Physician Behavior

A physician's background, ethics, and beliefs strongly mold his or her opinion and influence his or her practice behavior. Physicians are generally highly ethical and professional. Most have sworn to the Hippocratic Oath at some ceremony during medical school. As patients, they have expected and likely observed high standards of conduct from their own physicians. But several special features of a physician's background make practice behavior changes complex. First, physicians in practice generally have already had their behavior changed significantly and have been exposed to countless guidelines, both formal (written) and informal (verbal), as part of their medical school and residency training. In total, they have had an average of > 20 years of prior education/training, including seminal influences on their practice behavior during medical school. Exposures to both local and national opinion leaders are intended to set normative behaviors in the minds of medical students. Later during residency training, program leaders and department chiefs serve as thought leaders by design. During this time, residents may cite or hear cited position statements and/or guidelines by specialty physician societies, in order to more strongly ingrain norms of practice behavior. Also during residency training, a physician's individual mentors, supervisors, and peers seek to mold his or her practice behavior. Repetitive assessment of values, attitudes, and skills is a part of training.22,23 Trainers


use techniques such as didactics, repetition and drill, apprenticeship, and observation coupled with correction. Thus, while some physician behaviors are cognitive and not habitual, some are well-ingrained reflexes that are habitual.

Once physicians enter practice, there is an abundance of educational opportunities competing for their attention. Physicians' mailboxes are choked with fliers advertising continuing medical education (CME) courses, often combined with vacation features. In addition, written, audio, or video education courses to complete at home, by mail, or on the Internet are offered, in hopes that they capture physicians' limited time for interventions to improve their performance.

As human beings, physicians are motivated by multiple interests: the patient's interests, their own interests, society's interests, and, increasingly, the payor's interests. But first and foremost, patients' lives are at stake. Thus, physicians must balance their multiple motivations with a professional ethos that demands accountability; competence, if not perfect performance; willingness to admit mistakes that occur; maintenance of requisite knowledge and skills; and willingness to admit ignorance and ask for help.

Importantly, the target behaviors of interventions to change practicing physicians' behavior vary. Some interventions are aimed at documented circumstances in which physicians fail to use management options that clearly could improve the outcomes of patients who receive them. Other interventions are aimed at (often reducing) physicians' resource utilization, such as prescribing and test ordering. In the most benign of these situations, it is believed that a change will not affect clinical outcomes. In the most malignant situations, physicians feel forced to ration care or to use treatments that might be inferior, though less expensive.

Theories and Approaches to Physician Behavior Change

Since a variety of forces set and later influence normative patterns of practice behavior for the practicing physician, researchers have been unable to formulate a unifying theory of physician behavior change, applicable and successfully proven among physicians in practice. However, psychologists, sociologists, and educators have offered several health behavior change theories that apply to efforts to improve physician performance.24,25 Table 1, taken from Grol,26 summarizes these theories and applications, roughly classifying them into approaches that focus either on internal processes or on external influences. Another way to classify these approaches is to consider approaches

that target physicians' professionalism, namely their prior education, their scientific bent, and their code of ethics, vs those that utilize physicians' humanity by targeting their needs, desires, social systems, and environment. The approaches that have been most often tested in recent trials of improving individual physician performance borrow from each of the above categories of approaches, defined below, and the theories behind them.

CME, such as didactic lectures in a conference, mailings, or correspondence courses, often may not be offered purely for educational benefit. However, the idea for its effectiveness stems from a branch of learning theory.27 Researchers have found evidence that physicians contemplating and/or adopting behavior change go to CME conferences to validate and test the reliability of their learning and behavior, either that of new information and innovations, or that of what they are already doing in practice.28

Academic detailing, a face-to-face educational meeting with office-based physicians by specially trained representatives to discuss a particular behavior, evolved from observations of the "detailing" done by pharmaceutical representatives. Originally applied to drug prescribing and called "counter detailing," academic detailing is often done by opinion leaders, or significant peers/role models.29 It thus stems from social learning, innovation, and social influence/power theories, such as diffusion of innovation.30 Participatory guidance, where physicians are given special time together to jointly agree on norms for behavior change, jointly strategize how to habituate behavior change, and then jointly commit to "experiment" with actually changing, also stems from this theoretical framework.

Reminders, as well as audit and feedback, are approaches that seek to control physicians' performance by external stimuli. They stem from behavioral and learning theory. Behavioral/affective theories,31,32 including social cognitive theory33 and the health belief model,34,35 suggest that an individual's health behavior change is governed by his or her goals and perceptions, which are in turn manipulated by internal and external forces that may be malleable. These theories suggest that feedback of performance or norms of behavior, or of reminders to comply with guidelines, will be effective in changing physician behavior. They also suggest that administrative interventions such as rules and barriers will be successful.

Evidence-based guideline development stems from cognitive theory.36 Evidence-based guidelines are intended to change behavior by providing definitive information on best practices from authoritative sources to well-trained, interested, logical practitioners.

Table 1—Theories and Approaches to Physician Behavior Change*

Focus on internal processes

Educational
  Theories: adult learning theories
  Focus: intrinsic motivation of professionals
  Interventions/strategy: bottom-up, local consensus development; small-group interactive learning; problem-based learning

Epidemiologic
  Theories: cognitive theories
  Focus: rational information seeking and decision making
  Interventions/strategy: evidence-based guideline development; disseminating research findings through courses, mailings, journals

Marketing
  Theories: health promotion, innovation, and social marketing theories
  Focus: attractive product adapted to needs of target audience
  Interventions/strategy: needs assessment, adapting change proposal to local needs; stepwise approach; various channels for dissemination (mass media and personal)

Focus on external influences

Behavioral
  Theories: learning theory
  Focus: controlling performance by external stimuli
  Interventions/strategy: audit and feedback; reminder systems, monitoring; economic incentives, sanctions

Social interaction
  Theories: social learning and innovation theories, social influence/power theories
  Focus: social influence of significant peers/role models
  Interventions/strategy: peer review in local networks; outreach visits (academic detailing), individual instruction; opinion leaders; influencing key people in social networks; patient-mediated interventions

Organizational
  Theories: management theories, system theories
  Focus: creating structural and organizational conditions to improve care
  Interventions/strategy: reengineering the care process; total quality management/continuous quality improvement approaches; team building; enhancing leadership; changing structures, tasks

Coercive
  Theories: economic, power, and learning theories
  Focus: control and pressure, external motivation
  Interventions/strategy: regulations, laws; budgeting, contracting; licensing, accreditation; complaints/legal procedures

*Adapted from Grol.26 Used with permission.

While many subtruths of this statement still apply, research reviewed below has shown that simple provision of information, even in the form of guidelines, is insufficient. Last, economic incentives are based on health economists' theory and observations that physicians as people may behave to maximize personal gain.37,38

A little-tested theoretical model that offers insights into the complexity of physician behavior change is the transtheoretical, or readiness-to-change, model.39,40 This model integrates numerous behavioral processes and theories, and adds the important variable of time into its predictions of health behavior change. Initially conceived and validated in smoking cessation studies,41 it has now been tested and partly validated among patients in a number of other health-related settings.42 The approach that it suggests would be successful among physicians is a multifactorial, stage-appropriate intervention, tailored to each physician's readiness to change. For example, educational

strategies would be most appropriate for early-stage interventions, whereas enabling strategies such as reminders would be most appropriate for late-stage interventions.
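Purely as an illustrative sketch, and not part of the original review, the stage-matching idea above can be expressed as a simple lookup that pairs a hypothetical readiness-to-change stage with a class of intervention. The stage names follow the transtheoretical model; the specific stage-to-intervention pairings are assumptions for illustration only, not validated recommendations.

# Illustrative sketch only: hypothetical mapping of transtheoretical stages
# to the intervention classes discussed in this article. The pairings are
# assumptions for illustration, not validated guidance.
STAGE_TO_INTERVENTION = {
    "precontemplation": "education (CME, printed materials) to raise awareness",
    "contemplation": "academic detailing or opinion leaders to build agreement",
    "preparation": "participatory guidance to agree on norms and commit to change",
    "action": "reminders plus audit and feedback to enable the new behavior",
    "maintenance": "periodic feedback or incentives to sustain the behavior",
}

def suggest_intervention(stage: str) -> str:
    """Return an intervention class for a physician's readiness-to-change stage."""
    try:
        return STAGE_TO_INTERVENTION[stage.lower()]
    except KeyError as err:
        raise ValueError(f"unknown stage: {stage!r}") from err

if __name__ == "__main__":
    print(suggest_intervention("contemplation"))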

Materials and Methods

An Evidence-Based Approach to Review of Trials of Physician Behavior Change

Finding and employing methods to improve physician performance by changing physician behavior can be considered analogous to finding and employing drugs or other therapy to treat disease. The evidence-based medicine movement strongly asserts that randomized trials be accepted as the strongest evidence of the best therapy for a given disease. However, investigators evaluating interventions to improve individual physician performance have not held as rigorous a standard. Many studies used time-series or case-control designs and still claimed superiority/success of a certain approach.


Table 2—Evidence Table for the Effectiveness of Various Interventions to Improve Physician Performance*

Education
  Davis et al46 (99 RCT studies, 29 on education)
    Evidence: Formal CME without enablers or practice reinforcement had little impact.
    Conclusions: Widely used CME delivery methods such as conferences have little direct impact on improving professional practice.
  Cochrane: printed materials vs none
    Evidence: Benefit from −3 to 243.4%, provider outcomes; benefit from −16.1 to 175.6%, patient outcomes; stat sign: 2/14 prof behavior, 2/11 patient outcomes.
    Conclusions: Practical importance of effects small at best.
  Cochrane: printed materials vs printed plus other
    Evidence: Attributed benefits −11.8 to 92.7%, behavior; attributed benefits −24.4 to 74.5%, patient outcomes.
    Conclusions: Effects small at best; no additional impact of audit and feedback and conferences/workshops; educational outreach visits and opinion leaders larger additive effect; a small subset with estimates of effectiveness; no full economic analyses.

Academic detailing
  Cochrane (n = 18)
    Evidence: Several components including written materials, conferences; some augmented (reminders, audit/feedback); 13 trials prescribing practices; 3 trials preventive services; 2 trials management of common problems.
    Conclusions: Positive effects on practice observed in all trials; only one trial measured a patient outcome; few studies examined the cost effectiveness of outreach.

Reminders
  Haynes and Walker49
    Evidence: Only 14/135 articles RCT or crossover studies; stimuli = computer-aided drug dosing, patient data, costs of behavior, patient-specific suggestions; 14/14 positive effects on processes of care; 3/14 positive effects on patient outcomes.
    Conclusions: Reminders effective on processes of care; reminder study patient outcomes not measured or improved.
  Johnston et al51
    Evidence: Only 28/793 articles were RCT; computer-aided drug dosing, diagnosis, preventive care reminders, quality assurance; 15/24 positive effects on clinician performance; 3/10 positive effects on patient outcomes.
    Conclusions: Reminders effective on improving performance; reminder study patient outcomes not measured or improved.
  Shea et al52 (n = 16)
    Evidence: Improved overall prevention, OR 1.77 (95% CI 1.38–2.27); improved vaccinations, OR 3.09 (95% CI 2.39–4.00); improved breast cancer screening, OR 1.88 (95% CI 1.44–2.45); improved colorectal cancer screening, OR 2.25 (95% CI 1.74–2.91); improved cardiovascular risk reduction, OR 2.01 (95% CI 1.55–2.61); did not improve cervical cancer screening, OR 1.15 (95% CI 0.89–1.49); did not improve other preventive care, OR 1.02 (95% CI 0.79–1.32).
    Conclusions: Computer-based reminders improve prevention services in the ambulatory care setting.

Audit and feedback
  Balas et al53
    Evidence: Meta-analysis of profiling (peer feedback) on utilization (n = 12), 553 physicians; p < 0.05, sign test, 12 studies (direction of effects); p < 0.05, 8 studies (statistical comparison); OR (5 trials) = 1.091, CI 1.045–1.136.
    Conclusions: Peer feedback has a statistically significant but minimal effect on utilization.
  Cochrane part 1: alone (n = 37)
    Evidence: 31/37 studies' randomization process could not be determined; 27/37 no power calculations; variety of behaviors targeted; reduction of diagnostic test ordering, prescribing practices, preventive care, and general management; 28 reported MD performance, 1 patient outcomes, 8 both; effects from −16 to 152%; clinical importance of changes not always clear.
    Conclusions: Audit and feedback can sometimes be effective, in particular for prescribing and diagnostic test ordering; effects appear to be small to moderate; should not rely solely on this approach.
  Cochrane part 2: vs other interventions (n = 7)
    Evidence: Targeted behaviors: management of low hemoglobin, delivery of preventive care services (2 studies), management of high cholesterol, performance of cervical smears, and ordering of diagnostic tests (2 studies); 4 trials had little evidence of a measurable effect of adding a complementary intervention; 2/3 studies reminders better than audit/feedback for increasing preventive services.
    Conclusions: It is not possible to recommend a complementary intervention to enhance the effectiveness of audit and feedback.

Guidelines
  Grimshaw and Russell56 (n = 59)
    Evidence: 24 studied guidelines for specific clinical conditions; guidelines for preventive care; guidelines for prescribing or for support services; 55/59 detected significant improvements in the process of care after the introduction of guidelines; 9/11 significant outcome improvements.
    Conclusions: Explicit guidelines do improve clinical practice when introduced in the context of rigorous evaluations; the size of the improvements in performance varies considerably.
  Worrall et al57 (n = 13)
    Evidence: Assessed the evidence for the effectiveness of clinical practice guidelines in improving patient outcomes in primary care; of 91 trials of guidelines identified through the search, 13 met the criteria for inclusion in the critical appraisal; most common conditions studied were hypertension (7 studies), asthma (2 studies), and cigarette smoking (2 studies); 5 of 13 trials (38%) statistically significant.
    Conclusions: Little evidence that guidelines improve patient outcomes in primary medical care; most studies published to date have used older guidelines and methods insensitive to change; newer, evidence-based guidelines untested.

Economic incentives
  Fairbrother et al63 (n = 1)
    Evidence: Compared 3 incentives: a cash bonus for practice-wide increases, enhanced FFS, and feedback; bonus group improved 25.3% (p < 0.01); % of immunizations received outside the participating practice also increased for bonus group (p < 0.01); no significant changes occurred in other groups.
    Conclusions: Bonuses sharply and rapidly increased immunization coverage in medical records; much of the increase, however, was the result of better documentation; a bonus is a powerful incentive, but more structure or education may be necessary to achieve desired results.
  Hillman et al65 (n = 1)
    Evidence: Combined feedback and financial incentives; from 1993 to 1995, screening rates doubled overall (from 24 to 50%); no significant differences between intervention and control group sites.
    Conclusions: Financial incentives and feedback did not improve physician compliance with cancer screening guidelines for women ≥ 50 years old in a Medicaid HMO.
  Kouides et al64 (n = 1)
    Evidence: Performance-based financial incentives on influenza immunization rate in primary-care physicians' offices; mean immunization rate 68.6% (SD 16.6%) vs 62.7% (SD 18.0%) in the control practices (p = 0.22); median practice-specific improvement in immunization rate +10.3% in the incentive group vs +3.5% in controls (p = 0.03).
    Conclusions: Despite high background immunization rates, a modest financial incentive produced a 7% increase in immunization rate among the ambulatory elderly.
  Hickson et al66
    Evidence: Pediatric resident continuity clinic; compared salary vs FFS reimbursement on physician practice behavior; FFS physicians scheduled more visits/patient than did salaried physicians (3.69 vs 2.83 visits, p < 0.01); FFS physicians saw their patients more often (2.70 vs 2.21 visits, p < 0.05); FFS physicians saw more well patients than salaried physicians (1.42 vs 0.99 visits/patient, p < 0.01); FFS physicians missed fewer pediatric guideline-recommended visits and scheduled visits in excess of guidelines; FFS physicians attended a larger % of visits (86.6% vs 78.3%, p < 0.05); FFS physicians encouraged fewer emergency visits/patient (0.12 vs 0.22, p < 0.01).
    Conclusions: FFS physicians scheduled more visits/patient than did salaried physicians; FFS physicians saw their patients more often; almost all of this difference was because FFS physicians saw more well patients than salaried physicians; FFS physicians provided better continuity of care.

All interventions
  Bero44 (n = 1,139)
    Evidence: 18 reviews met the inclusion criteria; no common classification approach between reviews; few linked findings to theories of behavioral change.
    Conclusions: Passive dissemination of information is generally ineffective; education (small dose) ineffective; guideline dissemination effective but passive; multiple tools more effective; disparate results for any single tool.

*RCT = randomized controlled trial; OR = odds ratio; CI = confidence interval; MD = physician; HMO = health maintenance organization; FFS = fee for service.
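To make the odds ratios in Table 2 concrete, the short sketch below (not from the article) converts an odds ratio into the event rate it implies for an assumed baseline rate; the 30% baseline used in the example is a hypothetical figure chosen only for illustration.

# Illustrative arithmetic: translate an odds ratio from Table 2 into an
# absolute rate, given an assumed (hypothetical) baseline rate.
def apply_odds_ratio(baseline_rate: float, odds_ratio: float) -> float:
    """Return the event rate implied by applying an odds ratio to a baseline rate."""
    odds = baseline_rate / (1.0 - baseline_rate)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

if __name__ == "__main__":
    # Shea et al reported OR 1.77 for overall preventive care with reminders;
    # with a hypothetical 30% baseline rate, the implied rate is about 43%.
    print(round(apply_odds_ratio(0.30, 1.77), 2))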

Further, comparison of approaches has been hampered by a poor classification scheme of approaches and of the specificity of interventions. If education is used, are printed materials, short CME courses, or other media forms of presentation used? If feedback is used, what content is fed back to providers? Who provides the feedback? What are the (implied) consequences for failure to improve based on feedback? Are suggestions for improvement coupled with the feedback? If reminders are used, are they paper based or computer based? Are the reminders controllable by physicians (for computer-based reminders)? Are reminders in the form of information, suggestions, guidelines, or all of these? Differences in these specifics make it difficult to compare even studies using a single intervention class.

The setting in which interventions are tested varies significantly among studies. Many studies are of house officers in academic medical centers, and many are of primary care physicians rather than specialists or subspecialists. Few studies have focused on physicians in practice, or on those under managed care. Outcomes reported in many studies have included whether physician behavior changed, but often have not included patient outcomes or costs, either of the intervention or of changes in behavior. The duration of the effects often has not been reported in studies; a fatigue effect has sometimes been shown when change over time was measured.

Methods for this Review

The author conducted a Grateful Med search of published reviews of behavior change, setting the "publication type" field to meta-analyses, ie, structured reviews that combined results of clinical trials. The review focused on common approaches to performance improvement often used and often evaluated in the medical literature, by crossing physician with each of the following terms: education, academic detailing, opinion leader, reminder, audit and feedback, guideline, and (economic) incentive. The review did not include searches for individual trials in most cases. Not included were studies of: licensing and credentialing; small-group interactive learning; problem-based learning; bottom-up, local consensus development (participatory guidance); disseminating research findings through courses, mailings, journals, etc; or peer review in local networks. Also not included were studies of organizational approaches not solely aimed at individual improvement, including the following: reengineering the care process; total quality management/continuous quality improvement approaches; team building; enhancing leadership; and changing structures or tasks of caregivers.
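As a minimal sketch (not the author's actual tooling), the crossed searches described above can be generated programmatically; the Boolean syntax and the meta-analysis filter shown here are generic assumptions rather than the real Grateful Med/MEDLINE query language.

# Illustrative sketch: generate the crossed searches described in Methods.
# The query syntax and filter below are assumptions, not actual Grateful Med syntax.
INTERVENTION_TERMS = [
    "education",
    "academic detailing",
    "opinion leader",
    "reminder",
    "audit and feedback",
    "guideline",
    "economic incentive",
]

def build_queries(subject: str = "physician") -> list:
    """Cross the subject term with each intervention term, restricted to meta-analyses."""
    return [
        f'"{subject}" AND "{term}" AND publication-type:"meta-analysis"'
        for term in INTERVENTION_TERMS
    ]

if __name__ == "__main__":
    for query in build_queries():
        print(query)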


The Cochrane Collaboration has published the most complete and rigorous reviews of performance improvement trials to date. The Effective Practice and Organization of Care Review Group has published its review results on the Internet.43 Completed Effective Practice and Organization of Care Review Group Cochrane Reviews of individual behavior change strategies were reviewed.

Results and Discussions

Table 2 summarizes the findings of this review. A review of reviews by Bero44 is perhaps the most comprehensive single publication to date of the results of trials of physician performance improvement. A general review of all strategies to translate guidelines into practice by Davis and Taylor-Vaisey45 found that the likelihood of adoption of guidelines was influenced by several factors: qualities of the guidelines, characteristics of the health-care professional, characteristics of the practice setting, incentives, regulation, and patient factors.

Education

Education and training, including continuing medical education, rely on the supply of information to physicians to influence practice behavior. Continuing medical education is widely used by every US medical school, each of which has its own department largely dedicated to conducting educational conferences and distributing materials. The conferences serve practical benefits both to the schools and to attendees. They often generate income for the school. There is often a pleasure element associated with the conferences, which may be in exotic locations or feature recreational activities and/or copious free time. Conferences often invite alumni, presumably to improve giving as well as to maintain contact.

However, the passive education strategies embodied in these conferences have not been found to be effective.46 Neither have printed materials been found to be of use in changing performance.47 Passive educational strategies may be paired with reinforcement strategies to improve their effectiveness, with mixed results. The impact of paired strategies cannot be assessed reliably. Of the strategies studied, personal outreach or academic detailing may be the most effective strategy, though it is time intensive and expensive.

Academic Detailing

"Detailing" has been used for decades to promote pharmaceuticals. Thus, "counter detailing," "academic detailing," or the "outreach visit" was first employed decades ago with success in order to improve prescribing performance or decrease drug utilization. Opinion leaders or clinical pharmacists may deliver the messages. This tool, although expensive and not often evaluated, is relatively effective.48

Reminders

The goals of paper and electronic reminders are both to replace memory and to inform decisions with useful, timely, relevant information—a type of feedback. While paper reminders are cheap, electronic interventions are expensive to establish, if inexpensive to continue. Both can vary in invasiveness. The type of intervention studied so far has depended on the medium of interaction with clinicians. The specificity of the interventions has depended on the data accessible to supply a decision aid. Four reviews49–52 suggest that, of all interventions, reminders show the best evidence to date of consistent effectiveness. However, trials have not yet measured some aspects of when reminders work, including when physicians do not agree with what they are being reminded to do. And few studies evaluate how long a behavior response lasts after the reminder stimulus has been discontinued.

Audit and Feedback

Audit and feedback approaches offer great variety: variety in how audits of performance are performed and in how information is fed back to enhance performance. Auditing may be accomplished using chart review, review of electronic data in a computerized medical record system, or visual observation. Feedback may vary by level of aggregation (aggregate physician performance for all patients or data about a single patient); by the kind of data fed back (diagnosis, outcome, utility, decision, cognition); by the population of interest (all patients, patients with

specific characteristics); and by the comparison group, if benchmarks are used (average doctor, other doctors, norms, previous time periods). The reviews identified suggest that peer feedback and other types of feedback have a minimal effect on utilization in general,53 and on prescribing and diagnostic test ordering in particular.54 While reviewers suggested combining this approach with others, they did not find evidence pointing to the superiority of any complementary interventions tested.55 More study is needed on the effects of modifying important characteristics, such as the content, source, timing, recipient, and format of feedback.

Guidelines

Evidence-based guideline development relies on physicians to be rational information seekers and decision makers. Guidelines appear to be a necessary but not sufficient strategy for performance improvement. They may be impossible to apply without adaptation for local use. One of two reviews found that the size of the improvements in performance from the use of guidelines varied considerably.56 The other review found little improvement, but suggested that the guidelines whose implementation was evaluated may have been older, less evidence-based guidelines and that the methods used to evaluate them may have been insensitive to change.57

Economic Incentives

Economic incentives clearly influence physician behavior. Several incentive types have been used, and some in use may be unpalatable to physicians or cause unintended effects. Nonrandomized time series replicated in many environments have shown that physicians react to fee freezes by increasing volume.58–60 They react to surgical fee freezes by decreasing participation with Medicaid and by increasing volume.61 And even small fee reductions may produce small increases in volume and intensity of various services in large payment programs.62 However, no reviews and only a few recent trials were found of the effects of economic incentives intended to improve performance. Two trials showed that incentives have worked to improve childhood immunization documentation and rates63 and to improve influenza immunization among the elderly,64 but one showed that incentives plus feedback failed to improve breast cancer screening in women > 50 years old.65 In one trial of residents compared to salaried physicians, fee-for-service physicians scheduled more visits and delivered improved preventive and continuity care.66 Economic incentives appear to be the least studied of all interventions,


likely because of the difficulty of randomization of incentives and the controversy and ethical problems surrounding providing incentives.

Conclusion

The reviews point to a few conclusions, but raise many questions, about the effectiveness of guideline implementation methods. Education in small doses (days) is ineffective, likely because it pales in comparison with the prior 20 years of education physicians have already received. Guideline dissemination is too passive to effect behavior change without active implementation strategies. Multiple implementation tools are more effective than single ones. Reminders may have the best evidence of effectiveness demonstrated so far, but may have been tested only in narrow situations in which the targets of change agreed that behavior change was needed. Disparate results may be found for any single tool evaluated. Likely, the unmeasured situational and environmental factors surrounding trials of performance improvement are so different that comparisons of results from a single intervention type may be invalid.

EBQA offers a relevant framework for those planning to use the information contained in reviews of the effectiveness of performance improvement strategies. Emerging theory and evidence suggest that applications of behavior-change methods to improve individual physicians' clinical performance should not be focused on which tools (don't) always work. Instead, guideline development and implementation methods should be theory driven and evidence based (supported by evidence that proves the theory correct).67 In the language of clinical medicine, we must diagnose the lesion (why change is not adopted) before prescribing therapy (a change strategy). In practical implementations of physician performance improvement, multiple tools will likely be necessary and should be chosen carefully.

References

1 Eisenberg JM, Williams SV. Cost containment and changing physicians' practice behavior: can the fox learn to guard the chicken coop? JAMA 1981; 246:2195–2201
2 Eisenberg JM. Physician utilization: the state of research about physicians' practice patterns. Med Care 1985; 23:461–483
3 Haynes RB, Davis DA, McKibbon A, et al. A critical appraisal of the efficacy of continuing medical education. JAMA 1984; 251:61–64
4 Greco PJ, Eisenberg JM. Changing physicians' practices. N Engl J Med 1993; 329:1271–1273
5 Poses RM, Cebul RD, Wigton RS. You can lead a horse to water: improving physicians' knowledge of probabilities may not affect their decisions. Med Decis Making 1995; 15:65–75
6 Lomas J, Hannah WJ, Enkin MW, et al. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. N Engl J Med 1989; 321:1306–1311
7 The SUPPORT Principal Investigators. A controlled trial to improve care for seriously ill hospitalized patients: the study to understand prognoses and preferences for outcomes and risks of treatments (SUPPORT). JAMA 1995; 274:1591–1598
8 Agency for Healthcare Research and Quality. National Guideline Clearinghouse [online]. Available at: http://www.guideline.gov/index.asp
9 Katz DA. Barriers between guidelines and improved patient care: an analysis of AHCPR's Unstable Angina Clinical Practice Guideline; Agency for Health Care Policy and Research. Health Serv Res 1999; 34(1 Pt 2):377–389
10 Kosecoff J, Kanouse DE, Rogers WH, et al. Effects of the National Institutes of Health Consensus Development Program on physician practice. JAMA 1987; 258:2708–2713
11 Randolph AG, Haynes RB, Wyatt JC, et al. Users' Guides to the Medical Literature: XVIII; How to use an article evaluating the clinical impact of a computer-based clinical decision support system. JAMA 1999; 282:67–74
12 Evidence-Based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA 1992; 268:2420–2425
13 Andrew N, Cibildak A, Khera M, et al. Implementing total quality management in health care. Jt Comm J Qual Improv 1995; 21:489–492
14 Evidence-based care, 1: Setting priorities; how important is the problem? Evidence-Based Care Resource Group. Can Med Assoc J 1994; 150:1249–1254
15 Evidence-based care, 2: Setting guidelines; how should we manage this problem? Evidence-Based Care Resource Group. Can Med Assoc J 1994; 150:1417–1423
16 Evidence-based care, 3: Measuring performance; how are we managing this problem? Evidence-Based Care Resource Group. Can Med Assoc J 1994; 150:1575–1579
17 Evidence-based care, 4: Improving performance; how can we improve the way we manage this problem? Evidence-Based Care Resource Group. Can Med Assoc J 1994; 150:1793–1796
18 Woolf SH, Grol R, Hutchinson A, et al. Clinical guidelines: potential benefits, limitations, and harms of clinical guidelines. BMJ 1999; 318:527–530
19 Shekelle PG, Woolf SH, Eccles M, et al. Clinical guidelines: developing guidelines. BMJ 1999; 318:593–596
20 Sickle Cell Disease Guideline Panel. Sickle cell disease: screening, diagnosis, management, and counseling in newborns and infants; clinical practice guideline No. 6. Rockville, MD: Agency for Health Care Policy and Research, Public Health Service, US Department of Health and Human Services, April 1993; AHCPR Publication No. 93-0562
21 The Agency for Health Care Policy and Research Smoking Cessation Clinical Practice Guideline. JAMA 1996; 275:1270–1280
22 Cassel C, Blank L, Braunstein G, et al. ABIM Subcommittee on Clinical Competence in Women's Health: what internists need to know; core competencies in women's health. Am J Med 1997; 102:507–512
23 Holmboe ES, Hawkins RE. Methods for evaluating the clinical competence of residents in internal medicine: a review. Ann Intern Med 1998; 129:42–48
24 Walsh JM, McPhee SJ. A systems model of clinical preventive care: an analysis of factors influencing patient and physician. Health Educ Q 1992; 19:157–175
25 Shumaker SA, Schron EB, Ockene JK, et al, eds. The handbook of health behavior change. New York, NY: Springer Publishing, 1998; 1–113
26 Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997; 315:418–421
27 Skinner BF. Science and human behavior. New York, NY: Macmillan, 1953; 30–35
28 Putnam RW, Campbell MD. Competence. In: Fox RD, Mazmanian PE, Putnam RW, eds. Changing and learning in the lives of physicians. New York, NY: Praeger, 1989; 80–97
29 Soumerai SB, Avorn J. Principles of educational outreach ('academic detailing') to improve clinical decision making. JAMA 1990; 263:549–556
30 Rogers EM. Diffusion of innovations. 4th ed. New York, NY: Free Press, 1995; 131–203
31 Andersen R. A behavioral model of families' use of health services. Chicago, IL: University of Chicago, Center for Health Administration Studies, 1974; 14–19
32 Andersen R. Revisiting the behavioral model and access to medical care: does it matter? J Health Soc Behav 1995; 36:1–10
33 Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 1977; 84:191–215
34 Rosenstock IM. Historical origins of the health belief model. Health Educ Monogr 1974; 2:328–335
35 Maiman LA, Becker MH. The health belief model: origins and correlates in psychological theory. Health Educ Monogr 1974; 2:336–353
36 Yates JF. Judgment and Decision Making. Englewood Cliffs, NJ: Prentice-Hall, 1990; 42–343
37 Boardman AE, Dowd B, Eisenberg JM, et al. A model of physicians' practice attributes determination. J Health Econ 1983; 2:259–268
38 Rossiter LF, Wilensky GR. A reexamination of the use of physician services: the role of physician-initiated demand. Inquiry 1983; 20:162–172
39 Prochaska JO, DiClemente CC. Transtheoretical therapy: toward a more integrative model of change. Psychother Theory Res Pract 1982; 19:276–288
40 Prochaska JO. Systems of psychotherapy: a transtheoretical analysis. 2nd ed. Pacific Grove, CA: Brooks-Cole, 1984
41 Prochaska JO, DiClemente CC. Stages and processes of self-change of smoking: toward an integrative model of change. J Consult Clin Psychol 1983; 51:390–395
42 Prochaska JO, Velicer WF, Rossi JS, et al. Stages of change and decisional balance for twelve problem behaviors. Health Psychol 1994; 13:39–46
43 Thomson O'Brien MA, Oxman AD, Davis DA, et al. Educational outreach visits: effects on professional practice and health care outcomes (Cochrane Review). Available at: http://hiru.mcmaster.ca/cochrane/cochrane/revabstr/g10index.htm
44 Bero LA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ 1998; 317:465–468
45 Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. Can Med Assoc J 1997; 157:408–416
46 Davis DA, Thomson MA, Oxman AD, et al. Changing physician performance: a systematic review of the effect of continuing medical education strategies. JAMA 1995; 274:700–705
47 Freemantle N, Harvey EL, Wolf F, et al. Printed educational materials: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 3, 1999. Oxford, UK: Update Software
48 Thomson O'Brien MA, Oxman AD, Davis DA, et al. Educational outreach visits: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 3, 1999. Oxford, UK: Update Software
49 Haynes RB, Walker CJ. Computer-aided quality assurance: a critical appraisal. Arch Intern Med 1987; 147:1297–1301
50 Austin SM, Balas EA, Mitchell JA, et al. Effect of physician reminders on preventive care: meta-analysis of randomized clinical trials. Proc Annu Symp Comput Appl Med Care 1994; 121–124
51 Johnston ME, Langton KB, Haynes RB, et al. Effects of computer-based clinical decision support systems on clinician performance and patient outcome: a critical appraisal of research. Ann Intern Med 1994; 120:135–142
52 Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc 1996; 3:399–409
53 Balas EA, Boren SA, Brown GD, et al. Effect of physician profiling on utilization: meta-analysis of randomized clinical trials. J Gen Intern Med 1996; 11:584–590
54 Thomson O'Brien MA, Oxman AD, Davis DA, et al. Audit and feedback: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 3, 1999. Oxford, UK: Update Software
55 Thomson MA, Oxman AD, Davis DA, et al. Audit and feedback to improve health professional practice and health care outcomes: Part II (Cochrane Review). In: The Cochrane Library, Issue 1, 1999. Oxford, UK: Update Software
56 Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet 1993; 342:1317–1322
57 Worrall G, Freake D, Chaulk P. The effects of clinical practice guidelines on patient outcomes in primary care: a systematic review. Can Med Assoc J 1997; 156:1705–1712
58 Hadley J. Physician participation in Medicaid: evidence from California. Health Serv Res 1979; 14:266–280
59 Gabel JR, Rice TH. Reducing public expenditures for physician services: the price of paying less. J Health Polit Policy Law 1985; 9:595–609
60 Berry C, Held PJ, Kehrer B, et al. Canadian physicians' supply response to universal health insurance: the first years in Quebec (preliminary results). In: Gabel JR, Taylor J, Greenspan NT, et al, eds. Physicians and financial incentives. Washington, DC: US Government Printing Office, 1980; 57–59
61 Schwartz M, Martin SG, Cooper DD, et al. The effect of a thirty percent reduction in physician fees on Medicaid surgery rates in Massachusetts. Am J Public Health 1981; 71:370–375
62 Rice TH. The impact of changing Medicare reimbursement rates on physician-induced demand. Med Care 1983; 21:803–815
63 Fairbrother G, Hanson KL, Friedman S, et al. The impact of physician bonuses, enhanced fees, and feedback on childhood immunization coverage rates. Am J Public Health 1999; 89:171–175
64 Kouides RW, Bennett NM, Lewis B, et al. Performance-based physician reimbursement and influenza immunization rates in the elderly: Primary-Care Physicians of Monroe County. Am J Prev Med 1998; 14:89–95
65 Hillman AL, Ripley K, Goldfarb N, et al. Physician financial incentives and feedback: failure to increase cancer screening in Medicaid managed care. Am J Public Health 1998; 88:1699–1701
66 Hickson GB, Altemeier WA, Perrin JM. Physician reimbursement by salary or fee-for-service: effect on physician practice behavior in a randomized prospective study. Pediatrics 1987; 80:344–350
67 Cook DJ, Greengold NL, Ellrodt AG, et al. The relation between systematic reviews and practice guidelines. Ann Intern Med 1997; 127:210–216