Comparing resident measurements to attending surgeon self-perceptions of surgical educators


The American Journal of Surgery 185 (2003) 323–327

Association for Surgical Education

Jeffrey A. Claridge, M.D.*, J. Forrest Calland, M.D., Vinay Chandrasekhara, B.S., Jeffrey S. Young, M.D., Hillary Sanfey, M.D., Bruce D. Schirmer, M.D.

Department of Surgery, University of Virginia, 1640 Stoney Creek Dr., Charlottesville, VA 22902, USA

Manuscript received September 3, 2002; revised manuscript November 16, 2002

* Corresponding author. Tel.: +1-434-924-0000 ext. 4725. E-mail address: [email protected]

Abstract

Objective: The purpose of this study was to evaluate the initiation and utility of a process in which resident trainees evaluate attending surgeons as educators. Additionally, we were interested in comparing resident measurements with attending self-perceptions.

Methods: A written evaluation form, using five-point ordinal scale assignments, queried respondents regarding the performance of surgical attendings in the operating room and in other clinical settings. A similar form was distributed to the faculty members, which they used to evaluate themselves. Mean scores were determined, as were comparisons between self-perceptions and resident assessments. Differences in scores with p values less than 0.05 were considered statistically significant.

Results: Thirty-six residents evaluated 23 attendings. Mean scores assigned by residents for performance in the operating room, performance in other clinical settings, and overall performance for all faculty members as a group were 4.22 ± 0.04, 4.11 ± 0.03, and 4.16 ± 0.03, respectively, with a score of 5 corresponding to the most favorable rating. When overall scores were analyzed, 10 attendings received scores that differed significantly from those of their peers, with half above and half below the 95% confidence interval. Eighteen (78%) of the attendings completed the self-evaluation forms, and of these, 11 (61%) had self-perceptions that differed significantly from the overall scores reported by the residents.

Conclusions: Our evaluation process delineated significant differences among attending faculty members and identified individual strengths and weaknesses. Many educators' self-perceptions differed significantly from resident assessments, and attendings who did not evaluate themselves scored lower than their peers. © 2003 Excerpta Medica, Inc. All rights reserved.

Keywords: Surgical education; Self-evaluation; Attending evaluations

Performance evaluation of residents is required of any institution that participates in residency training. In fact, at our institution, residents are required to meet with their advisors twice per year to review their performance evaluations. Typically, advisors verbally summarize written evaluations received within the past 6 months and discuss their meaning with the trainee, identifying strengths, weaknesses, and potential areas for improvement. The program requirements for residency education in general surgery set by the Accreditation Council for Graduate Medical Education (ACGME) also require that residents evaluate faculty members at least annually on their performance as educators and their demonstrated commitment to the educational process. Perhaps related to this, the ACGME mentions that a program must designate well-qualified surgeons to assist in the supervision of the resident staff. It also mandates that residents complete written, confidential evaluations of the entire training program. Like other competencies and mandates of the ACGME, the true prevalence of such activities in today's residency training programs remains unknown, and their effectiveness remains relatively unstudied, particularly the process of attending evaluation. Furthermore, to our knowledge, formal review of attendings as educators is not routinely performed at many institutions.

The medical literature is nearly devoid of studies on the topic of evaluating surgical attendings as educators. The most relevant investigation that we found involved evaluation of surgical attendings by third-year medical students [1], and demonstrated that the teaching abilities of surgery faculty members appeared to have a significant impact on student performance during a surgical clerkship.



That study built upon and corroborated a previous study that demonstrated similar results with students on their medicine clerkships [2].

The main purpose of our study was to evaluate the initiation and utility of a written evaluation process of surgical attendings by residents, and to examine how such performance evaluations compared with self-assessments of performance by the same surgical attendings. Self-evaluation has been used and validated as a useful tool in previous studies [3]. Our hypotheses were that differences in performance among attendings could be detected, and that attendings' self-perceptions of their performance would differ from residents' perceptions of that performance.

Methods

Surgical residents were asked to anonymously evaluate all surgical attending physicians at the University Hospital using a questionnaire designed through a consensus process by a small ad hoc committee of residents, medical students, and surgical attendings. The form began with questions collecting resident demographic information and then progressed into two sections of 10 questions each. The first section addressed the performance of surgical educators in the operating room; the second addressed educational performance on the floor, in clinic, and in conferences. These questions are listed in Table 1. For each question, a five-point ordinal scale was used to query respondents on their level of agreement with affirmative statements regarding the performance of the surgical educators, with a score of 1 corresponding to a low level of agreement, 3 being neutral, and 5 signifying a high level of agreement. In addition, residents were provided with a sixth choice, which allowed them to opt out of answering a question about a specific attending if they possessed insufficient information to answer it. Also provided was a section in which residents could comment in prose on attendings' strengths and weaknesses. A questionnaire very similar to the one completed by residents was distributed to all attendings to assess their self-perceptions, using the same five-point ordinal scale to rate their agreement with the same affirmative statements about performance as an educator.

All results are reported as means ± standard error of the mean (SEM). All comparisons were made using the F test to determine homogeneity of variance and the Student t test to evaluate for statistical difference. Microsoft Excel and SPSS were the two software programs used for statistical analysis. A p value of less than 0.05 was considered statistically significant.
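The analysis itself was run in Excel and SPSS; purely as an illustration, the same two-step procedure (an F test for homogeneity of variance followed by a Student t test) can be sketched in Python with SciPy. The function name and the sample rating arrays below are hypothetical, not data from the study.

```python
import numpy as np
from scipy import stats

def compare_to_peers(own, peers, alpha=0.05):
    """Two-step comparison as described above: an F test for equality
    of variances, then a Student t test (pooled variance if the F test
    does not reject equality, Welch's t test otherwise)."""
    own = np.asarray(own, dtype=float)
    peers = np.asarray(peers, dtype=float)

    # Two-sided F test on the ratio of sample variances.
    f_stat = np.var(own, ddof=1) / np.var(peers, ddof=1)
    df1, df2 = own.size - 1, peers.size - 1
    p_f = 2 * min(stats.f.cdf(f_stat, df1, df2),
                  stats.f.sf(f_stat, df1, df2))

    # t test; pooled variance only if the variances look homogeneous.
    t_stat, p_t = stats.ttest_ind(own, peers, equal_var=(p_f >= alpha))
    return own.mean(), stats.sem(own), p_t

# Hypothetical five-point ratings: one attending vs. pooled peers.
mean, sem, p = compare_to_peers([4, 5, 4, 4, 5, 4, 3, 5],
                                [4, 4, 3, 5, 4, 4, 4, 3, 4, 5, 4, 4])
print(f"mean = {mean:.2f} +/- {sem:.2f} (SEM), p = {p:.3f}")
```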

Table 1
Questions on the evaluation form

Section I. "In the operating room this attending . . ."
1. ". . . appropriately describes upcoming surgical procedures, including operative approach, rationale, and alternatives."
2. ". . . appropriately discusses expected patient outcomes and possible complications."
3. ". . . appropriately clarifies resident roles and responsibilities."
4. ". . . demonstrates technical skills with confidence and expertise."
5. ". . . permits resident participation in procedures according to ability."
6. ". . . demonstrates awareness of and sensitivity to resident learning needs."
7. ". . . answers questions clearly and concisely."
8. ". . . stimulates residents to think critically and problem solve."
9. ". . . provides direct and ongoing feedback regarding operative technical proficiency."
10. ". . . maintains a climate of mutual respect for all members of the health care team."

Section II. "In the wards, clinics, and conferences this attending . . ."
11. ". . . explicitly orients residents to his/her practice setting and role expectations."
12. ". . . explicitly outlines objectives and expected patient outcomes for procedures."
13. ". . . develops and sustains a positive learning atmosphere."
14. ". . . maintains a consistent presence in these environments, so as to be available for teaching opportunities."
15. ". . . shares up-to-date knowledge of developments in the field."
16. ". . . provides ample opportunity for residents to teach."
17. ". . . encourages resident questions and active participation."
18. ". . . gives positive reinforcement."
19. ". . . gives direct and ongoing feedback regarding resident progress."
20. ". . . maintains a climate of mutual respect for all members of the health care team."

Results

All 23 surgical attendings at the University Hospital were evaluated by 36 surgical residents. The average postgraduate year (PGY) level of the residents was 3.0 ± 0.3 years. The distribution of residents is shown in Table 2. The overall mean score for each question, across all attendings, is shown in Table 3. The range of mean scores was 3.53 ± 0.05 to 4.48 ± 0.04. The lowest scores were observed on questions 9 and 19, which in both sections evaluated how well attendings gave feedback to residents.

Table 2
Distribution of residents responding to the questionnaire by postgraduate year (PGY) level

PGY level    Number of residents
1            13
2            6
3            3
4            4
5            4
6            3
7            3
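As an illustration only (not part of the paper's analysis), the reported mean PGY level of 3.0 ± 0.3 (mean ± SEM) can be recovered directly from the Table 2 counts; a minimal sketch, assuming SciPy for the SEM:

```python
import numpy as np
from scipy import stats

# (PGY level, number of residents) pairs taken from Table 2.
table2 = [(1, 13), (2, 6), (3, 3), (4, 4), (5, 4), (6, 3), (7, 3)]
levels = np.repeat([pgy for pgy, n in table2], [n for pgy, n in table2])

print(levels.size)                          # 36 residents
print(round(float(levels.mean()), 1))       # 3.0
print(round(float(stats.sem(levels)), 1))   # 0.3
```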


Table 3
Results of each question (mean ± SEM)

Section 1: Operating room
Q1    4.27 ± 0.05
Q2    4.37 ± 0.04
Q3    4.11 ± 0.04
Q4    4.48 ± 0.04
Q5    4.42 ± 0.05
Q6    4.23 ± 0.05
Q7    4.43 ± 0.04
Q8    4.25 ± 0.05
Q9    3.95 ± 0.06
Q10   4.37 ± 0.05

Section 2: Wards, clinics, and conferences
Q11   4.12 ± 0.04
Q12   4.29 ± 0.04
Q13   4.18 ± 0.05
Q14   4.05 ± 0.05
Q15   4.37 ± 0.04
Q16   4.15 ± 0.05
Q17   4.30 ± 0.04
Q18   4.05 ± 0.05
Q19   3.53 ± 0.05
Q20   4.32 ± 0.04

Overall totals
Section 1   4.22 ± 0.04
Section 2   4.11 ± 0.03
Overall     4.16 ± 0.03

The mean scores for operating room performance, performance in other clinical settings, and overall performance are also listed in Table 3.

Each individual attending's scores were compared with the mean of the rest of the peer group. In section 1, which evaluated the operating room experience, 11 of 23 attendings (48%) received scores that differed significantly from the sample mean: 7 of 23 (30%) were significantly above the mean and 4 of 23 (17%) significantly below it. In section 2, which evaluated the attendings' performance in the wards, clinics, and conferences, 9 of 23 attendings (39%) received scores that differed significantly from the mean; 5 (22%) were above the mean and 4 (17%) below it. When overall scores were analyzed, 10 attendings (43%) received scores that differed significantly from those of their peers, with 5 (22%) above the mean and 5 (22%) below it. Fig. 1 illustrates these findings.
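The paper does not publish the exact computation behind these peer comparisons; the sketch below assumes a simple leave-one-out screen in which each attending's ratings are t-tested against the pooled ratings of all other attendings. The function, attending labels, and ratings are hypothetical.

```python
import numpy as np
from scipy import stats

def flag_outliers(scores, alpha=0.05):
    """Leave-one-out screen: compare each attending's ratings with the
    pooled ratings of all other attendings. `scores` maps an attending
    id to the array of ratings that attending received."""
    flagged = {}
    for name, own in scores.items():
        peers = np.concatenate([r for k, r in scores.items() if k != name])
        _, p = stats.ttest_ind(own, peers)
        if p < alpha:
            flagged[name] = "above" if own.mean() > peers.mean() else "below"
    return flagged

# Hypothetical data: three attendings' overall ratings.
example = {
    "A": np.array([5, 5, 4, 5, 5, 4]),
    "B": np.array([4, 4, 4, 3, 4, 4]),
    "C": np.array([3, 2, 3, 3, 2, 3]),
}
print(flag_outliers(example))  # e.g. {'A': 'above', 'C': 'below'}
```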

Fig. 1. Graphic representation of how attending surgeons scored according to resident evaluations compared with their peers (n = 23 attendings). Section 1 refers to the operating room experience and section 2 refers to clinics, wards, and conferences.

Although all were invited to do so, only 18 of 23 attendings (78%) completed self-evaluation forms. Self-perceptions of the attendings who completed their forms were compared with the scores given by the residents and are summarized in Table 4. Eleven of the eighteen (61%) had self-perceptions that differed significantly from the assignments made by residents. None of the attendings who scored significantly below the mean of their peers by section (or overall) identified this in their self-evaluations.

A comparison of the 5 attendings who did not evaluate themselves with the 18 attendings who did is illustrated in Fig. 2. In summary, the attendings who did not evaluate themselves scored significantly lower than those who did: their mean scores were lower than their peers' on 17 of the 20 questions, and their overall scores were significantly lower.

A final comparison was made between senior and junior residents. Residents at PGY levels 1 and 2 were considered junior; all others were considered senior. A comparison of these results is shown in Fig. 3. In general, junior residents gave attendings lower scores than senior residents did.

Comments

Surgical attending physicians have a substantial role in educating medical students and residents. There have been (and remain) a great number of remarkable educators and role models in the surgical fields. However, there does not appear to be any standardized, validated system available to provide useful feedback or opportunities for improvement to surgical educators. Furthermore, it remains unknown whether written performance evaluations of surgical educators have any effect on future performance. At a time when the public is demanding better outcomes from surgery, it seems important to provide a system to improve the quality of surgical teaching and, presumably, to produce better surgeons. At a time when surgical positions are going unfilled, and some feel that general surgery may be in a period of crisis [4–9], improved teaching may also entice more top candidates to apply to surgical residency programs.

It is clear that teaching is important to surgeons. This was demonstrated in a study by Seely et al [10], which showed that surgical residents ranked the enjoyment of teaching as the most important stimulus for a commitment to teaching. The same study also found that 25% of surgical residents stated that poor teaching role models were a deterrent to teaching [10].


Table 4
Comparison of self-perception scores versus resident evaluations

Resident scores by attending (attendings 1–23):
Set 1:   4.36, 3.33, 3.58, 4.78*, 4.36, 4.58*, 4.38*, 3.79*, 4.74*, 4.34*, 4.53*, 4.32, 4.38, 4.40*, 4.73, 3.92*, 3.58, 4.14, 3.80, 4.28*, 4.52*, 4.39, 4.89
Set 2:   4.31, 3.61, 3.51, 4.69, 4.18, 4.29, 4.09, 4.01, 4.75*, 3.97*, 4.45, 4.14, 4.33, 4.08, 4.51, 3.72, 3.68, 3.55, 3.38, 3.93, 4.36, 4.27*, 4.75*
Overall: 4.34, 3.47, 3.55, 4.74*, 4.27, 4.41, 4.21*, 3.91*, 4.75*, 4.15*, 4.49*, 4.23, 4.36, 4.21*, 4.62, 3.80*, 3.63, 3.78, 3.54, 4.06, 4.44*, 4.33*, 4.82*

Self-perception scores (attendings who completed self-evaluations):
Set 1:   4.20, 3.70, 4.00, 4.40, 3.40, 4.70, 3.90, 4.50, 5.00, 4.80, 4.63, 4.00, 3.90, 4.00, 5.00, 4.70, 4.17
Set 2:   4.30, 4.40, 3.90, 4.30, 3.10, 4.40, 4.20, 4.30, 4.60, 4.30, 4.50, 3.50, 3.40, 4.00, 4.70, 4.70, 4.14
Overall: 4.25, 4.05, 3.95, 4.35, 3.25, 4.55, 4.05, 4.40, 4.80, 4.55, 4.56, 3.65, 3.65, 4.00, 4.85, 4.70, 4.15

* Denotes p < 0.05 between comparisons of self versus resident evaluations. SD and SEM are omitted to minimize the complexity of the table.

Studies in internal medicine have demonstrated that only 20% of US internal medicine residency programs had programs to improve resident teaching skills [11]. Other authors have also suggested the need for more rigorous methodology in studying educational interventions [12]. These facts seem to demonstrate clearly the need for a system to improve teaching. Furthermore, another study demonstrated that a brief educational intervention delivered to faculty before the start of a ward rotation appeared to affect faculty behavior on written evaluations and promoted higher-quality feedback to house staff [13]. This is encouraging evidence that some forms of intervention and feedback may improve the educational process.

The results of our study demonstrated that written resident evaluations could detect differences among attendings. Other studies have raised concern about the inadequacy of the written evaluation process because of the phenomena of "central tendency" and the "halo effect" [3,14,15]. Although our study likely exhibited these tendencies and effects, differences could still be detected among attendings. Thus, it is at least theoretically possible, given our methodology, to notify outliers, high and low, of their status. This would allow attendings either to continue forward with increased enthusiasm after receiving validation, or to pause for introspection so as to better understand their own strengths and weaknesses.

Our surgical educators did not possess accurate perceptions of how residents perceived them: 61% of the attendings who filled out the self-evaluation forms scored themselves significantly differently than the residents scored them.

Fig. 2. Comparison of attending surgeons who evaluated themselves versus attending surgeons who did not evaluate themselves. *P < 0.05.

Fig. 3. Comparison of junior residents' versus senior residents' evaluations of attending surgeons. Junior represents PGY 1 and 2 level residents (n = 19); Senior represents PGY 3 to 7 level residents (n = 17). *P < 0.05.


Perhaps most importantly, the attendings who received scores significantly below those of their peers were the most likely to overrate their own ability: none of them accurately identified the weaknesses specified by the residents. Our study demonstrates that residents' feedback, gathered in an anonymous questionnaire format, can identify areas of attending strength and weakness in surgical teaching.

One very interesting finding of the study was that the 22% of attendings who did not fill out the self-evaluation form scored significantly below the attendings who did. A second finding was that junior residents evaluated attendings lower than senior residents did. The explanation for this is likely multifactorial. Junior residents may simply have higher expectations for teaching. Senior residents also have more teaching experience and may be more sympathetic to the difficulty of balancing teaching with other responsibilities. Another factor may be that senior residents are biased by the friendships and bonds they have formed with attendings, and thus evaluate them more favorably.

This study does not allow one to conclude that residents' evaluations of attendings serve any benefit. However, it lays important groundwork for further work in this field and the potential to eventually draw such conclusions. This study demonstrated that residents' feedback in this format could identify individual strengths and weaknesses, as well as differences among peers. Second, it clearly demonstrated that attendings' self-perceptions of their teaching abilities were inaccurate. The future of this project will involve evaluating the effect of this feedback over time, as well as following surgeon self-perceptions. Though preliminary in nature, we feel the results of this investigation further support the need to set up a system of encouraging, enhancing, and rewarding good teaching of surgical residents by attendings, using standardized evaluation processes.


References

[1] Blue AV, Griffith CH, Wilson J, et al. Surgical teaching quality makes a difference. Am J Surg 1999;177:86–9.
[2] Griffith CH, Wilson JF, Haist SA, Ramsbottom-Lucier M. Relationships of how well attending physicians teach to their students' performances and residency choices. Acad Med 1997;72(suppl 1):S118–20.
[3] Johnson D, Cujec B. Comparison of self, nurse, and physician assessment of residents rotating through an intensive care unit. Crit Care Med 1998;26:1811–6.
[4] Sanfey H. General surgery training crisis in America. Br J Surg 2002;89:132–3.
[5] Organ CH. The generation gap in modern surgery. Arch Surg 2002;137:250–2.
[6] Meyer AA, Weiner TM. The generation gap: perspectives of a program director. Arch Surg 2002;137:268–70.
[7] Henningsen JA. Why the numbers are dropping in general surgery: the answer no one wants to hear—lifestyle! Arch Surg 2002;137:255–6.
[8] Britt LD. "Halstedian 2" residency training: bridging the generation gap. Arch Surg 2002;137:271–3.
[9] Craven JE. The generation gap in modern surgery: a new era in general surgery. Arch Surg 2002;137:257–8.
[10] Seely AJ, Pelletier MP, Snell LS, Trudel JL. Do surgical residents rated as better teachers perform better on in-training examinations? Am J Surg 1999;177:33–7.
[11] Bing-You RG, Tooker J. Teaching skills improvement programmes in US internal medicine residencies. Med Educ 1993;27:259–65.
[12] Veet L, Shea J, Ende J. Our continuing interest in manuscripts about education. J Gen Intern Med 1997;12:583–5.
[13] Holmboe ES, Fiebach NH, Galaty LA, Huot S. Effectiveness of a focused educational intervention on resident evaluations from faculty: a randomized controlled trial. J Gen Intern Med 2001;16:427–34.
[14] Thompson WG, Lipkin M, Gilbert DA, et al. Evaluating evaluation: assessment of the American Board of Internal Medicine Resident Evaluation Form. J Gen Intern Med 1990;5:214–7.
[15] Haber RJ, Avins AL. Do ratings on the American Board of Internal Medicine Resident Evaluation Form detect differences in clinical competence? J Gen Intern Med 1994;9:140–5.