NOTES FROM THE ASSOCIATION OF MEDICAL SCHOOL PEDIATRIC DEPARTMENT CHAIRS, INC.

CHALLENGES IN EVALUATING METHODS TO IMPROVE PHYSICIAN PRACTICE

MICHAEL D. CABANA, MD, MPH, AND NOREEN CLARK, PHD

The physician–patient encounter is a key opportunity for translating research into improved outcomes and quality of care. The latest therapies or technologies will not affect outcomes if physicians and patients do not apply these advances appropriately. How to enhance the physician–patient encounter through effective physician practice is a key question on the agenda of administrators, educators, healthcare purchasers, and health services researchers.1 Many interventions have attempted to change physician practice, and many reviews have detailed their strengths and weaknesses. However, there is a paucity of evidence regarding their applicability, effectiveness, and sustainability in changing physician practice and improving outcomes. As methods to improve physician practice are developed and tested, research in the area faces several challenges.

COMPETING CLINICAL PRIORITIES

Although physician participation in quality improvement efforts is essential and evaluation of such efforts is needed,1 research is not a priority for many clinical practices, especially given increasing managed care expectations.2 Studies cannot make major demands on physicians for data collection or disrupt the practice routine. Even staff who participate only peripherally in the enrollment of subjects must understand the informed consent process. As a result, researchers need to develop procedures that make minimal demands on office staff and physicians (eg, helping compile patient lists, giving study information to patients, providing a toll-free line for study questions). Researchers should also consider alternate methods to train staff regarding human subjects protection (eg, online course formats) and/or compensate participating offices for this effort.3

Michael Cabana is a member of the Child Health Evaluation and Research (CHEAR) Unit, Division of General Pediatrics, University of Michigan Health System. Noreen Clark is part of the Department of Health Behavior and Health Education, University of Michigan School of Public Health, Ann Arbor, Michigan. The opinions expressed herein by the author(s) do not necessarily reflect the official endorsement of The Association of Medical School Pediatric Department Chairs, Inc. (J Pediatr 2003;143:413-4) Funded by the Robert Wood Johnson Foundation (Princeton, NJ).

ENGENDERING PARTICIPATION AND INTEREST

The evaluation of interventions to change practice must be as rigorous as the evaluation of any new medical therapy. To engender physician participation, interventions must be clinically relevant.4 In addition, studies that focus solely on cost-reduction outcomes (eg, by reducing length of stay or referrals) without also considering the impact on the quality of patient care will likely be of limited interest to clinicians.5 Studies should also account for the time constraints and financial realities of everyday practice. For example, one of the perceived barriers to physician counseling and education is the poor reimbursement for such services.6 In assessing an intervention to improve physician asthma counseling skills, our team added a module to teach physicians how to effectively document, code, and receive reimbursement for such services.7 Directly addressing time and cost constraints improved the program, increased interest, and enabled more rigorous evaluation. In addition, when possible, eliciting input regarding the feasibility of an intervention before its evaluation may both improve it and further physician "buy-in".8 Emphasis up front on the additional benefits of participation (eg, the eventual distribution of effective intervention tools to all participants) can also improve interest.9

ASSESSING ACTUAL BEHAVIOR

It is difficult to evaluate what actually transpires during a physician–patient encounter. For example, details of counseling provided by physicians may be infrequently or incompletely documented in the medical record.10 Assessment methods that provide definitive data are often intrusive, impractical, and/or financially prohibitive. Researchers usually rely on chart audits, patient surveys, claims data, or even physician self-reports to determine if an intervention has affected physician practice behavior. Combining two or more of these methods may be needed to reconstruct practice behavior. The use of multiple methods enables cross-validation of an outcome and can reduce the burden of any one method on participating staff and clinicians. Another approach may be to use a comprehensive assessment method with a subsample of subjects and validate findings using a more practical measure with all subjects.
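To make the subsample strategy concrete, the sketch below works through a hypothetical validation, not one drawn from the studies cited here. It assumes a binary indicator of whether counseling occurred, measured by direct observation (a comprehensive method) in a 20-encounter subsample and by chart audit (a practical method) for the same encounters, and checks agreement with Cohen's kappa before the chart audit is relied on alone.

```python
# Hypothetical example: validating a practical measure (chart audit) against a
# comprehensive one (direct observation) on a subsample of encounters.
# All data below are invented for illustration.

def cohen_kappa(a, b):
    """Cohen's kappa for two binary (0/1) ratings of the same encounters."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n                   # marginal "yes" rates
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)      # chance agreement
    return (observed - expected) / (1 - expected)

# 1 = counseling observed/documented, 0 = not; 20 encounters in the subsample
observation = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1]
chart_audit = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1]

print(f"kappa = {cohen_kappa(observation, chart_audit):.2f}")  # about 0.79 here
```

A high kappa in the subsample supports using the less burdensome measure for all subjects; a low kappa would argue for retaining multiple methods.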

GENERALIZABILITY

Practice sites used to evaluate an intervention should be as representative as possible so that findings might be generalizable. However, achieving a truly representative sample may be difficult, if not impossible. At minimum, studies should fully describe practice characteristics and consider these in data analyses. Procedures used to recruit practices and physicians should also be described to account for different motives and capacities for participating in research.

UNINTENDED CONSEQUENCES

Assessment of interventions to improve physician practice should take into account possible unintended consequences. Practice resources and time are limited in most settings. If an intervention improves screening for one preventable disease, did the intervention simply shift time and resources toward the new activity at the expense of another?11 Effects on other aspects of the physician–patient encounter that might influence the long-term sustainability of the intervention, and its ultimate utility, should be anticipated and investigated.

ADEQUATE SAMPLE SIZE

Although the focus of an intervention may be to change physician behavior, the outcomes of eventual interest are changes in the patient. Many practice improvement studies have evaluated the effect on physician performance, but few have evaluated the effect of changing provider behavior on patient outcomes. Such studies are needed. Large sample sizes are usually required to detect significant changes for patients, especially in outcomes related to health care utilization.
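As a rough illustration of the magnitudes involved, the sketch below estimates a per-arm sample size for a hypothetical binary patient outcome, using the standard normal-approximation formula for comparing two independent proportions. The 30% and 20% event rates, the alpha level, and the power are invented for illustration and are not taken from any study cited here.

```python
# Hypothetical sample-size sketch: how many patients per arm are needed to
# detect a drop in a binary patient outcome (eg, the proportion of patients
# with an emergency department visit in the past year) from 30% to 20%?
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two independent proportions
    (normal-approximation formula, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)       # critical value for the test
    z_beta = z.inv_cdf(power)                # value corresponding to desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

n = n_per_group(0.30, 0.20)
print(f"about {n:.0f} patients per study arm")   # roughly 290 per arm
```

Even a moderately sized effect on a utilization outcome can require several hundred patients per arm, well beyond what a single practice typically contributes.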

CLUSTERING

The corollary of mustering a large enough sample of patients to ensure sufficient power for statistical analysis is managing the problem of "clustering." Multiple physicians at the same intervention practice site (ie, a cluster) may have similar practice styles. These practice styles are not independent of one another, and the cluster violates fundamental assumptions of standard statistical tests. Furthermore, patients within a single physician's patient panel may also comprise a cluster, because patients who seek the care of a given physician may be very similar in one or more dimensions (eg, income, background, or behavior). Analyses must account for potential clustering effects, and given this clustering, an adequate sample size is key. Omitting these considerations results in misinterpretations. For example, the influence of a practice site or a particular physician, rather than the effect of the quality improvement intervention itself, may explain variation in patient outcomes.12
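One common way to budget for clustering at the design stage is to inflate the patient-level sample size by a design effect, 1 + (m - 1) x ICC, where m is the cluster size and ICC is the intraclass correlation. The sketch below continues the hypothetical calculation above; the ICC of 0.05 and the cluster size of 25 patients per practice are assumed values for illustration only.

```python
# Hypothetical continuation of the sample-size sketch: inflating the
# patient-level sample size for clustering by practice.
import math

def design_effect(cluster_size, icc):
    """Variance inflation factor for a cluster-randomized design."""
    return 1 + (cluster_size - 1) * icc

n_unclustered = 290            # per-arm estimate ignoring clustering (see above)
m, icc = 25, 0.05              # patients per practice, intraclass correlation
deff = design_effect(m, icc)   # 1 + 24 * 0.05 = 2.2

n_clustered = n_unclustered * deff
practices_per_arm = math.ceil(n_clustered / m)
print(f"design effect = {deff:.2f}")
print(f"about {n_clustered:.0f} patients ({practices_per_arm} practices) per arm")
```

At the analysis stage, mixed-effects or generalized estimating equation models that treat practice (and, where appropriate, physician) as a grouping factor serve the same purpose as this design-stage adjustment.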

SUMMARY

As investigators attempt to assess efforts to enhance practice behavior, studies must be relevant and rigorous. Several studies have successfully addressed the realities of clinical practice and recruitment, and have overcome statistical challenges.7,13 Table I summarizes important considerations when evaluating quality improvements in primary care. Although demanding, studies that meet such criteria will move research forward, improve practice, and achieve quality care.

REFERENCES

1. Chassin MR, Galvin RJ. National Roundtable on Health Care Quality. The urgent need to improve health care quality. JAMA 1998;280:1000-5.
2. Asch S, Connor SE, Hamilton EG, Fox SA. Problems for recruiting community-based physicians for health services research. J Gen Intern Med 2000;15:591-9.
3. Wolf LE, Croughan M, Lo B. The challenges of IRB review and human subjects protections in practice-based research. Med Care 2002;40:521-9.
4. Nazarian LF, Maiman LA, Becker MH. Recruitment of a large community of pediatricians in a collaborative research project. Clin Pediatr 1989;28:210-3.
5. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Quart 1998;76:625-48.
6. Cabana MD, Ebel BE, Cooper-Patrick L, Powe NR, Rubin HR, Rand CS. Barriers that pediatricians face when using asthma practice guidelines. Arch Ped Adolesc Med 2000;154:685-93.
7. Cabana MD, Clark NM. Improving asthma outcomes: physician-directed interventions. American Thoracic Society Annual Meeting, May 2002, Atlanta, Georgia.
8. Wasserstein RC. Randomized trials and recruitment tribulations: rethinking the research enterprise. J Pediatr 2002;141:756-7.
9. Park ER, Gross NAM, Goldstein MG, DePue JD, Hecht JP, Eaton CA, et al. Physician recruitment for a community-based smoking cessation intervention. J Fam Pract 2002;51:70.
10. Senturia YD, Bauman LJ, Coyle YM, Mergan W, Rosensteich DL, Roudier MD, et al. The use of parent report to assess the quality of care in primary care visits among children with asthma. Ambul Pediatr 2001;1:194-200.
11. Ferris TG. Improving quality improvement research. Eff Clin Pract 2000;3:40-4.
12. Krumholz HM, Herrin J. Quality improvement studies: the need is there but so are the challenges. Am J Med 2000;109:501-3.
13. Clark NM, Gong M, Schork MA, Evans D, Roloff D, Hurwitz M, et al. Impact of education for physicians on patient outcomes. Pediatrics 1998;101:831-6.
