It takes a faculty

Debra A. DaRosa, PhD, Chicago, Ill

From the Department of Surgery, Northwestern University Medical School, Chicago, Ill

PROGRAM DIRECTORS ARE ADVISED to begin integrating learning objectives and evaluation tools for assessing residents’ competencies as defined by the Accreditation Council for Graduate Medical Education (ACGME). This paper describes a 4-step approach for phasing in enhancements to resident performance evaluation systems to accomplish this. Once programs ensure that their curricula include learning objectives relevant to the competencies, a systematic approach can be taken to enhance how residents are evaluated on these competencies. Step 1 involves improving the program’s current subjective rating system. Step 2 requires that objective measures be adopted or adapted to further assess resident proficiency levels in the competencies. Step 3 highlights the need to study the evaluation tools for adequate evidence of psychometric quality. Step 4 suggests strategies for success through sound educational administrative practices. These steps guide refinement of faculty subjective ratings and provide ways to efficiently access objective measures that supplement the subjective performance data. Lastly, they outline the criteria associated with the sound educational administrative planning needed to advance any evaluation system. Programs should begin planning integration of the competencies and implementation of new and improved assessment tools before July 2002, at which time they will be held accountable for the new requirements related to the ACGME-defined general competencies.

Presented as part of the 2001 Society of University Surgeons Committee on Education panel. Accepted for publication September 7, 2001.
Reprint requests: Debra A. DaRosa, PhD, Department of Surgery, Northwestern University Medical School, 251 E Huron, Galter 3-150, Chicago, IL 60611.
Surgery 2002;131:205-9. Copyright © 2002 by Mosby, Inc. doi:10.1067/msy.2002.120664

The Accreditation Council for Graduate Medical Education (ACGME) advises residency program directors to begin planning and piloting integration into their programs of those competencies formed through the ACGME Outcome Project (Table) that are not already taught and evaluated. It also advises implementing new and enhanced assessment tools to ensure that residents are adequately proficient in the 6 general competencies. During this time, the various Residency Review Committees (RRCs) have been charged with further defining these competencies relative to their specialties and including those not already reflected in the Program Requirements. Effective July 2002, residency programs will become responsible for meeting the new requirements related to the competencies.

The dimensions of competence are complex. A common distinction between “competence” and “performance” is that the former is what a physician is capable of doing under ideal circumstances, whereas the latter is what a physician actually does in his or her day-to-day practice. The ACGME has defined 6 dimensions of competence in which it expects residents to be proficient by the time they graduate from residency. It is the responsibility of the faculty at each program to define how they expect residents to master these competencies and what set of performance indicators will be used to assess resident proficiency.

The purpose of this paper is to describe a 4-step approach for advancing residency program evaluation systems to meet these forthcoming program requirements. However, to be successful the effort will take more than the commitment of each institution’s program director; it will require the combined efforts of a faculty.

Before beginning enhancements to the performance evaluation system, a preliminary step is needed. Programs must first review their goals and objectives and determine if and where the 6 general competencies are included. Where they are not, learning objectives and corresponding teaching and learning activities must be developed, and the new expectations communicated to the residents and faculty. The system for evaluating the newly integrated competencies can be planned only after faculty members have clarified the objectives and subsequent expectations of the residents.

Table. General competencies defined by the Accreditation Council for Graduate Medical Education
• Patient care
• Medical knowledge
• Practice-based learning and improvement
• Interpersonal and communication skills
• Professionalism
• Systems-based practice

In other words, it is not feasible to determine how you are going to evaluate someone’s competency until it is clear what learners are expected to know and to be able to do.

STEP 1: ENHANCE THE SUBJECTIVE RATING SYSTEM

The subjective rating system is the cornerstone of any performance evaluation system, because ultimately it is a faculty responsibility to determine who has what it takes to be a competent physician and who does not. Tests and other psychometric measures should serve as supplements to, rather than substitutes for, faculty judgment.1 Faculty ratings of residents’ “on-the-job” performance are the most widely used method of clinical performance evaluation.2 The system used to capture faculty members’ impressions and judgments about residents’ performance should be reviewed in terms of its (1) organizational context, (2) quality and frequency of direct observations, (3) quality and frequency of coaching, (4) quality of written performance appraisals, and (5) consistency of data interpretation and administrative decision-making relevant to resident status.

How the subjective rating system is organized and operated within a department affects the quality of the information and the meaningfulness of the data. Does the leadership emphasize the importance of the system? Are new, junior faculty members formally introduced to the system and familiarized with the criteria that distinguish well-written from poorly written evaluations? Are the faculty provided “frame of reference” training so their expectations and ratings are better calibrated with what constitutes “outstanding,” “average,” and “marginal” performance?3 What is the return rate by faculty? Is the information collected in a timely, systematic fashion, with protocols for what needs to be done if a “red flag” report about a resident’s performance is received? Are there consequences for faculty members who do not complete the evaluation forms, or who complete them consistently late or poorly? Whether the evaluation forms are completed through a Web-based system or by hand, policies and procedures should be written describing the “who, what, when, and so what” aspects of the system, from data collection to report interpretation.

An important consideration is how often faculty members have an opportunity to directly observe residents engaged in the activities they are asked to evaluate. The time between those observations and the time the faculty complete the evaluation forms is also important.4 If the faculty do not generally observe and interact with residents engaged in activities associated with the competencies they are asked to evaluate, efforts must be made to ensure that other evaluators (eg, nurses) complete the evaluation forms, or alternatives must be considered. Some have found the use of patient ratings feasible and useful for assessing residents.5 Standardized patients, skills laboratories, or other venues can be established in which faculty or others are positioned to observe residents perform and to make judicious assessments of various competencies. Because other health professionals (eg, nurses and physician assistants) are in situations where they may more easily observe residents interacting with patients and family members or working as team members or leaders, they can serve as additional sources of performance assessment information. A section for indicating the extent of observation (not observed, minimal, occasional, frequent) is recommended for inclusion on the evaluation form; it gives a sense of the weight each rating should be given.

Coaching of resident performance involves giving residents feedback so they can understand others’ impressions of their abilities, determine how those impressions fit with their own, and develop a plan to address any deficiencies and build on strengths. In a typical 3-month rotation, this is best accomplished through a mid-rotation and an end-of-rotation review. Although feedback is a basic principle of learning, residents rarely receive systematic feedback from faculty. Often, when it is provided, the information is not specific enough to help residents formulate a meaningful plan for improvement. By adding a system for ongoing and formal feedback, faculty can reinforce goals and standards and work with residents to produce a plan for accomplishing them.

The written narrative portion of any evaluation augments the numerical ratings and clarifies the reasoning behind the assigned ratings. Not all faculty write informative narratives, and some do not write anything at all. Performance ratings are typically clustered at the positive end of the numeric scale and rarely identify marginal or unsatisfactory performers.6 Faculty also hesitate to write candid comments in the narrative section of the forms.

Faculty development is key to explaining the importance of the system, the rules of due process, and how the system will be used. Sample narratives exemplifying both well-written comments (specific, descriptive statements) and poorly written comments (vague, meaningless statements that provide neither substantial feedback nor assistance with status decisions) should be provided to faculty to encourage comments that will be useful in judging a resident’s strengths and weaknesses.

Finally, how the performance data and information from various faculty members and others are summarized and reported is a critical consideration. Administrative decisions regarding a resident’s status should be based on meaningful information that is systematically derived and collected, that meets due process requirements, and that is presented in a form that supports consistent and fair decision-making.

STEP 2: IMPLEMENT OBJECTIVE MEASURES RELEVANT TO THE COMPETENCIES

The ACGME does not expect all aspects of each competency to be formally tested. But for those aspects of the competencies that a faculty endorses adding to an evaluation system, programs can choose to adopt or adapt available measures or to create new ones. The ACGME acknowledges that many programs will need time to effectively implement evaluation methods that are dependable and useful, and allowance is made for phasing in these improvements.

Several planning resources are available. The ACGME Web site lists a “toolbox” of available measures and indicates which of the measures is best used to evaluate each competency.7 A joint effort of the ACGME and the American Board of Medical Specialties, currently underway, will develop proposed curricula and assessment tools to further assist program directors. Other resources for evaluation tools include professional societies, such as the Association for Surgical Education and the Association of Professors of Gynecology and Obstetrics, which make available various tools and information about different evaluation methods. Other residency programs are also a good source. For example, Dr Richard Reznick and his colleagues at the University of Toronto have studied the use of the Objective Structured Assessment of Technical Skills and have found adequate evidence of reliability and validity.8 Recently, they conducted pioneering studies of verbal communication exchanges in the operating room that yielded a reliable method for analyzing operating team communications.9

This type of study is critical as we seek to better understand how communication skills and interprofessional discourse are learned by novice residents, a prerequisite to developing and refining approaches to evaluating physician communication skills and aspects of professionalism.

Collaboration among schools and programs will help conserve time, money, and resources. Professional societies can serve as catalysts for communication and cross-fertilization within and between disciplines and programs. Programs already using, or beginning to use, any of the evaluation methods in the ACGME “toolbox” should be encouraged to disseminate information on their experiences with those methods. This communication could take place at scheduled times at appropriate professional meetings, on an existing Listserv (such as the Association for Surgical Education’s Listserv), or in a publication outlining the program’s overall experience with the method(s), its perceived strengths and weaknesses, and its psychometric qualities.

A variety of evaluation methods and measures are available, but some will need to be adapted to surgical residents’ competencies and studied for appropriate psychometric qualities. The use of checklists to evaluate live or recorded performance will likely increase, as will record reviews, chart-stimulated recall oral examinations, and performance-based examinations such as the Patient Assessment and Management Examination.10

STEP 3: ASSESS EVALUATION SYSTEMS AND MEASURES

Cars require tune-ups and maintenance, or they do not run very well for very long. The same is true of evaluation systems and tools. It will be critical, at departmental and multi-institutional levels, that we evaluate our evaluation tools for evidence of reliability, feasibility, and validity to ensure a fair and reasonable evaluation process with sound findings and results. It is important to note that even evaluation tools found to be reliable and valid when studied at one site will need to be restudied when used elsewhere to ensure portability of the tool. In other words, a well-executed oral examination that shows high inter-rater reliability among faculty at one institution does not guarantee that the same examination format used elsewhere will produce the same results. Factors such as the faculty members’ training in administering and scoring oral examinations can affect the reliability of the scores.
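As one illustration of what such a local reliability check might look like, the short sketch below computes percent agreement and Cohen’s kappa for two faculty raters who score the same residents. It is a minimal example only; the rating categories and data are hypothetical and are not drawn from this paper or from any ACGME tool.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same residents.

    rater_a, rater_b: equal-length sequences of categorical ratings
    (eg, "outstanding", "average", "marginal"); hypothetical scale.
    """
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)

    # Observed agreement: proportion of residents rated identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    if p_expected == 1.0:  # both raters used a single category throughout
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two faculty members rate the same 8 residents.
faculty_1 = ["outstanding", "average", "average", "marginal",
             "outstanding", "average", "average", "outstanding"]
faculty_2 = ["outstanding", "average", "marginal", "marginal",
             "average", "average", "average", "outstanding"]
print(f"kappa = {cohens_kappa(faculty_1, faculty_2):.2f}")  # prints kappa = 0.60

A kappa well below the raw percent agreement signals that much of the apparent agreement may be due to chance, the kind of finding that “frame of reference” training for raters is intended to address.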

Grant sources must be identified and developed to support these types of research studies. The National Board of Medical Examiners and the Foundation for the Association for Surgical Education do provide monies for such projects, but additional dollars are needed to perform larger scale, multi-institutional research on the psychometric qualities of various evaluation methods. Some of this research will build on others’ work. For example, outside the field of surgery, numerous studies have been published describing and evaluating approaches to assessing communication skills, professionalism, and knowledge.11-14 Replicating these studies to determine their applicability to surgery residents will be helpful. Additionally, literature reviews with meta-analysis will be useful in selecting the most appropriate evaluation methods for our needs.

Fewer evaluation tools exist for evaluating competencies in practice-based learning and systems-based practice. Assessing these competencies will likely require the creation of new evaluation strategies or the modification of existing ones.

STEP 4: PLAN TO USE SOUND EDUCATION ADMINISTRATION PRACTICES

Because dollars and faculty time are increasingly tight in today’s academic medical centers and hospitals, faculty time and departmental budgets specifically designated for education are needed more than ever. Evaluation incurs costs, and residency program directors should monitor the amount of money and resources needed to accommodate the residency program’s evaluation needs. Consideration should be given to conducting some evaluations (eg, the Objective Structured Clinical Examination, the Patient Assessment and Management Examination, and the Objective Structured Assessment of Technical Skills) on a regional basis to share costs, resources, and faculty time.

The leadership of the department must demonstrate support for new evaluation developments, or already busy faculty will not be available to ensure success. The performance evaluation system will need to be documented, and faculty responsibilities to the system made clear. Various residency programs within a hospital or institution may want to share resources and work as a team more than has been typical in the past. Residents’ time will need to be protected so that they can get adequate rest before their examinations and be relieved of clinical responsibilities while completing them. Formal examinations are not expected to become a weekly or monthly addition to residency programs, but it is likely that new or additional examinations will be administered in each program at certain points throughout the academic year.

It will be critical that those in leadership positions also work cooperatively with the Residency Review Committees and the ACGME. Currently, residency education is conducted in a linear, lock-step manner. Additional evaluation may identify weaknesses in a resident that otherwise would have gone undetected, and program directors will need the autonomy to determine the type and length of remedial intervention needed to address a resident’s performance problem. This is difficult under current guidelines. Therefore, communication and negotiation among residency program directors, the RRCs, and those responsible for funding graduate medical education will be required to reach mutually agreeable solutions for resident education.

CONCLUSION

Few would argue that the focus on competencies and performance outcomes will require additional effort on the part of the faculty to ensure compliance with accreditation demands and residents’ proficiency in the specified competencies. Most programs have already integrated some of these competencies into their objectives, learning activities, and evaluation methods. For example, most programs already use the American Board of Surgery In-Training Examination to evaluate, in part, their residents’ basic science and clinical science knowledge. Those competencies that are not addressed, or not addressed well, will have to be appropriately included in the program. The ACGME is committed to helping programs meet the new requirements and is working with the RRCs to allow a phasing in of education and evaluation advances.

The 4-step approach described in this paper can help move programs in the right direction. It involves having programs review their current subjective rating systems; implement objective evaluation methods and measures to supplement or replace current evaluation methods; assess their evaluation systems for evidence of reliability, validity, and feasibility; and make use of sound administrative practices in planning, implementing, and evaluating the new initiatives. If we take each of these steps, and communicate the extent to which new curriculum and evaluation efforts worked, we will find ourselves using increasingly dependable methods for teaching and evaluating the stated competencies. This will take time and it will take collaboration, but to accommodate ACGME’s new emphasis on outcomes rather than process...it will take a faculty.

REFERENCES
1. Tonesk X. The evaluation of clerks: perceptions of clinical faculty. Washington: Association of American Medical Colleges; 1983.
2. Littlefield J. Developing and maintaining a resident rating system. In: Lloyd J, Langsley D, editors. How to evaluate residents. Chicago: American Board of Medical Specialties; 1986.
3. Littlefield J, Terrell C. Improving the quality of resident performance appraisals. Acad Med 1997;72:S46-8.
4. Heneman RL. The effects of time delay in rating and amount of information observed on performance rating accuracy. Acad Manage J 1983;26:677-86.
5. Tamblyn R, Benaroya S, Snell L, McLeod P, Schnarch B, Abrahamowicz M. The feasibility and value of using patient satisfaction ratings to evaluate internal medicine residents. J Gen Intern Med 1994;9:146-52.
6. Kwolek C, Donnelly M, Sloan D, Birrell SN, Strodel WE, Schwartz RW. Ward evaluations: should they be abandoned? J Surg Res 1997;69:1-6.
7. Accreditation Council for Graduate Medical Education Web site. www.acgme.org.

8. Reznick RK, Regehr G, MacRae H, Martin JA, McCullock W. Testing technical skills (OSATS) for surgical residents. Br J Surg 1997;173:226-30.
9. Lingard L, Reznick R, Espin S, DeVito I, Regehr G. Team communications in the operating room: an observational study of sites of tension. Association for Surgical Education 20th Annual Meeting; 2000:34.
10. MacRae HM, Cohen R, Regehr G, Reznick R. A new assessment tool: the patient assessment and management examination. Surgery 1997;122:335-44.
11. Ginsburg S, Regehr G, Hatala R, McNaughton N, Frohna A, Hodges B, et al. Context, conflict, and resolution: a new conceptual framework for evaluating professionalism. Acad Med 2000;75:S6-11.
12. Papadakis MA, Osborn MC, Cooke M, Healy K. A strategy for the detection and evaluation of unprofessional behavior in medical students. Acad Med 1999;74:980-90.
13. Kaufman DM, Laidlaw TA, Macleod H. Communication skills in medical school: exposure, confidence, and performance. Acad Med 2000;74:S90-2.
14. Donnelly MB, Sloan D, Plymale M, Schwartz R. Assessment of residents’ interpersonal skills by faculty proctors and standardized patients: a psychometric analysis. Acad Med 2000;75:S93-5.
