A Pleasure to Work With—An Analysis of Written Comments on Student Evaluations

Patricia S. Lye, MD, MS; Kathy A. Biernat, MS; Dawn S. Bragg, PhD; Deborah E. Simpson, PhD

Objective.—Studies assessing rating scales on student evaluations are available. However, there are no data related to the written comments on these evaluations. This study was designed to evaluate these comments.

Methods.—A content analysis was performed on the narrative section of pediatric clerks' evaluations. Final evaluations were obtained from 10 outpatient clinical sites staffed by full-time faculty over 14 months. A coding dictionary containing 12 categories (7 linked to clinical skills) was used.

Results.—One thousand seventeen comments on 227 evaluations were coded. The mean number of comments per evaluation was 4. Learner and personal characteristics were the largest categories. Normative comments, such as "good physical exam," as opposed to more specific comments, such as "complete presentation," predominated in all categories.

Conclusions.—Evaluation comments were infrequently related to basic clinical skills and were not often specific enough to lead to effective change in a student's performance. Faculty development is needed to make final evaluation comments more useful for students.

KEY WORDS: evaluation; medical students
Ambulatory Pediatrics 2001;1:128–131
Medical educators repeatedly emphasize a commitment to guiding students in their development as physicians. As part of this commitment, clerkship directors carefully select and refine the educational objectives for their clerkships and set acceptable skill levels. Students are then evaluated at the completion of their clerkships using scaled and written responses. These written comments provide documentation of student performance in cases of academic strength and difficulty and provide the basis for narratives often used for deans' letters, awards, and scholarships.1 These comments also play an important role in the development of students as professionals. Formal, written comments are a powerful feedback tool, giving students both positive comments and specific assessments of skills that may be addressed in the next clerkship. However, for the written comments to be effective as feedback, they must meet several well-documented requirements.2,3 The utility of these comments depends on their specificity and quality.4

An informal review of written comments on final pediatric clerkship evaluations revealed that the comments appeared unrelated to clerkship objectives and too vague to be a meaningful source of feedback to students. Although studies assessing reliability and validity of rating scales and other aspects of the final evaluation exist,5–8 no data related to the evaluation narrative are available. Without a formal analysis, efforts to improve the quality of these comments remain anecdotal, limiting progress. This study was designed to describe the evaluation narrative through a content analysis of final evaluation comments provided by faculty to students during their pediatric clerkship.

METHODS

Pediatrics is a 2-month, required clerkship at the Medical College of Wisconsin. The clerkship objectives are based on the Council on Medical Student Education in Pediatrics (COMSEP) general pediatric clerkship curriculum.9 The rotation is divided equally between the inpatient and outpatient settings. Because the focus of this study was on faculty evaluation of students, we studied outpatient evaluations only. Inpatient evaluations are completed by the entire ward team (including residents) using a group discussion approach. During the outpatient month, students are typically assigned to 1 or 2 clinics based on available preceptors and students' personal interests. Most students work with multiple faculty preceptors, with each preceptor responsible for completing an evaluation. Full-time faculty from 10 of the 12 sites agreed to participate in a project on feedback but were not aware that their end-of-clerkship evaluations would be reviewed. Evaluation forms were collected for 14 months. The evaluation form is used in all of the third-year clerkships and includes 21 items rated on a 7-point Likert scale (1 = major strength to 7 = major deficit) as well as space for required comments. Representative items on the form include "records a well-organized history and physical," "accurately interprets signs and symptoms," and "establishes rapport with patients."
From the Departments of Pediatrics (Dr Lye, Dr Bragg), Educational Services (Ms Biernat, Dr Bragg, Dr Simpson), and Family and Community Medicine (Dr Simpson), The Medical College of Wisconsin, Milwaukee, Wis.

This project was partially funded by a grant from the Learning Resources Subcommittee of the Curriculum and Evaluation Committee, The Medical College of Wisconsin, Milwaukee, Wis.

Address correspondence to Patricia S. Lye, MD, MS, Department of Pediatrics, Medical College of Wisconsin, 8701 Watertown Plank Rd, PO Box 26509, Milwaukee, WI 53226-0509 (e-mail: [email protected]).

Received for publication September 5, 2000; accepted February 6, 2001.

Copyright © 2001 by Ambulatory Pediatric Association.
Table 1. End-of-Clerkship Comments From Each Coding Category

History: thorough; complete/incomplete; well organized/disorganized; average, good; excellent; improved
Knowledge base: specific to pediatrics; good, average knowledge; lack of strong knowledge base; stretched knowledge inappropriately; above average, outstanding; improved/needs to improve
Physical exam: accurate; appropriate; organized; average, good; complete/incomplete; improving; excellent; skill specific
Presentation: focused; pertinent; organized/not organized; complete/incomplete; concise; improved; good; excellent
Write-up: overall (good; excellent; improved); specific (focused; organized; pertinent; complete)
Assessment: developed independently; developed with assistance; good, average; below average; excellent; improved
Management plan: independently developed; good; below average; above average, excellent
Patient/family relationships: relates well to families; rapport with patients; uncomfortable with children
Student's personal characteristics: respectful; hard working, conscientious; punctual; energetic; professional, sets high standards; pleasant, personable
Characteristics as a learner: excited, eager to learn; seeks out new patients; motivated/not motivated; asks good questions/needs to ask more questions; reads; follows up; seeks feedback
Potential as an MD: primary care; pediatrics; will make an excellent MD
Overall performance: good clinical skills; above anticipated level; at appropriate level; above average, excellent; improved
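Table 1 is, in effect, a mapping from each coding category to its characteristic descriptors. Purely as a hypothetical illustration of that structure (the study itself relied on a trained coder applying the coding book by hand, not on automated keyword matching), the sketch below shows how such a dictionary might be represented and queried in Python; the category names and descriptors come from Table 1, while the matching function and example comment are assumptions added for illustration.

# Hypothetical sketch only: the Table 1 coding dictionary as a Python mapping.
# The study used a trained human coder, not keyword matching; this simply
# illustrates the shape of the dictionary and a naive way to query it.

CODING_DICTIONARY = {
    "history": ["thorough", "well organized", "disorganized", "complete history"],
    "physical exam": ["accurate", "appropriate", "skill specific", "complete physical exam"],
    "presentation": ["focused", "pertinent", "concise", "complete presentation"],
    "patient/family relationships": ["relates well to families", "rapport with patients"],
    "personal characteristics": ["hard working", "conscientious", "punctual", "pleasant"],
    "characteristics as a learner": ["eager to learn", "seeks feedback", "asks good questions"],
    # ...the remaining Table 1 categories would be listed the same way
}

def candidate_categories(comment):
    """Return the categories whose descriptors appear in the comment (naive substring match)."""
    text = comment.lower()
    return [category for category, descriptors in CODING_DICTIONARY.items()
            if any(descriptor in text for descriptor in descriptors)]

# Example: a comment combining a personal remark with two clinical-skill descriptors.
print(candidate_categories("Pleasant to work with; thorough history and focused presentation."))
# -> ['history', 'presentation', 'personal characteristics']

In the study, of course, each comment was categorized by a person using the full coding book; the point here is only that the categories and their descriptors form a simple lookup structure.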
Content analysis using standardized qualitative techniques10,11 was used to code the faculty's narrative responses. More specifically, a copy of each faculty member's original handwritten narrative was reviewed to identify commonly used descriptions, phrases, and/or modifiers (eg, accurate, average, thorough) for specific areas of students' performance (eg, history, physical exam). This preliminary analysis was then synthesized by 2 authors (PL, DS) to create a coding dictionary linked to 7 clinical skills (history, physical exam, presentation, write-up, assessment, management plan, and patient/family relationships). A third author (KB), who was blinded to faculty, student, and clinic identities, then used the coding book to categorize the comments for each of the forms. Several additional categories emerged during the data review; these were agreed upon by all the authors and included in the final coding book (Table 1). Coding results were then keypunched, and descriptive statistics were generated using SPSS for Windows, Version 8.0.

RESULTS

Two hundred sixty-one evaluations of 157 students were available for coding during the study time frame. Thirteen evaluations were missing. The mean number of evaluations per student was 1.6, with a range of 1–3. Seventeen evaluations (7%) had no comments, and an additional 17 evaluations (7%) were omitted from the final analysis because the preceptor's only comment was "unable to judge the student's performance." The remaining 1017 comments on 227 evaluations provided the data for the analysis. The mean number of comments per student evaluation was 4, with a range of 1–14. Learner and personal characteristics were the largest categories, accounting for 26% and 25%, respectively, of all comments. These were followed by overall clinical performance (9% of the total comments), knowledge base (7%), and history (7%). Only 31% (311) of the comments were related to clinical skills (Table 2).
Table 2. Percentage of Comments by Coding Category

Clinical skills categories (number of comments, % of all comments):
History: 70 (7)
Physical exam: 64 (6)
Patient/family relationships: 58 (6)
Presentation: 52 (5)
Write-up: 34 (3)
Assessment: 21 (2)
Management plan: 12 (1)

Other categories (number of comments, % of all comments):
Characteristics as a learner: 263 (26)
Personal characteristics: 256 (25)
Overall clinical performance: 95 (9)
Knowledge base: 70 (7)
Potential as an MD: 22 (2)
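The descriptive step is simple arithmetic over the coded counts. The study ran its descriptive statistics in SPSS for Windows, Version 8.0; the short Python sketch below is offered only as an equivalent illustration, using the category counts reported in Table 2 to regenerate the published percentages and the 311 (31%) clinical skills total cited in the Results.

# Sketch: regenerate the Table 2 percentages from the reported category counts.
# The study used SPSS for its descriptive statistics; Python stands in here
# only to make the arithmetic explicit. All counts are taken from Table 2.

comment_counts = {
    "History": 70, "Physical exam": 64, "Patient/family relationships": 58,
    "Presentation": 52, "Write-up": 34, "Assessment": 21, "Management plan": 12,
    "Characteristics as a learner": 263, "Personal characteristics": 256,
    "Overall clinical performance": 95, "Knowledge base": 70, "Potential as an MD": 22,
}

clinical_skills = {"History", "Physical exam", "Patient/family relationships",
                   "Presentation", "Write-up", "Assessment", "Management plan"}

total = sum(comment_counts.values())  # 1017 coded comments

for category, n in sorted(comment_counts.items(), key=lambda item: -item[1]):
    print(f"{category}: {n} ({n / total:.0%})")

clinical_total = sum(n for c, n in comment_counts.items() if c in clinical_skills)
print(f"Clinical skills comments: {clinical_total} ({clinical_total / total:.0%})")  # 311 (31%)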
Within each of the categories, global comments using a normative reference point were dominant (Table 1). Typical examples included "good physical exam," "excellent history," "above average management plan," "well-developed H and Ps," and "plan of care was slightly less than average." Comments on fund of knowledge were similar, with normative comments, such as "knowledge level was below what I would expect" or "average knowledge, but improved," predominating. Another common theme was that a particular skill "improved over the rotation," but the current level of performance was not stated. When evaluators summarized overall skills, they typically referenced a level of training, such as "work was at an intern level." Only 134 (34%) of the comments provided specific details about the learner's clinical skills. Examples of those comments were "complete presentation," "focused write-up," and "developed assessment independently." Occasionally, a specific incident was recorded in some detail: "Dictated a comprehensive, organized letter on one of the clinic patients" or "excellent mechanical skills in suturing lacerations." Personal and learner characteristics were the largest categories. Themes in these categories included "eager to learn," "hard working and thorough," "dependable," or "solid." The most frequently used evaluation comment in this analysis was "pleasant/pleasure to work with" (8% of the total comments).

DISCUSSION

Clerkship evaluations provide students with a summative evaluation of their performance. Optimally, this evaluation should inform audiences regarding the degree to which the student has met the objectives of the rotation as well as document the progress made in accomplishing his/her own personal learning objectives. Given the many responsibilities of faculty, these evaluations are frequently not a priority. Local efforts to improve both the completion and the quality of these evaluations have been extensive and echo other institutional experiences. Despite these efforts, 14% of the evaluations had no usable comments.

The clerkship objectives were adapted from the COMSEP general pediatric clerkship curriculum9 through a collaborative process with the faculty. Preceptors know that evaluation drives learning, and yet only 31% of the comments were related to basic clinical skills.
Students' interest in learning these basic skills was shown in Lawrence's12 recent study. He demonstrated that, throughout the third year, students on a required ambulatory clerkship are concerned about attaining basic skills in history taking and physical examination. Of particular concern in pediatrics is the lack of comments specific to patient/family relations: only 6% of the more than 1000 comments focused on this critical objective. Equally alarming is that the single most common comment was "pleasant/a pleasure to work with." Although being pleasant may be a necessary attribute for future physicians, most would argue that it is not the most important personal characteristic for a successful junior medical student.

Furthermore, our data do not supply evidence that comments on the final evaluation comply with the characteristics of good feedback. Even the comments that were directed toward specific clinical skills provided an overall normative judgment rather than criterion-based comments that might effect a competency-focused change in the student's knowledge or skills. Specific comments that reinforce good performance or clarify behaviors that need to change will assist students in redirecting their learning and behavior in future rotations. Specific comments can also be used to design remedial programs for students with marginal skills. It is not enough to evaluate students' global skills.

Where then have we failed as educators and evaluators? Although many preceptors have no specific training in evaluation, they do reliably assess the competency of learners at a global level. However, specific competency-based judgments do not appear in their narrative comments. As with learners, preceptors need specific feedback that identifies both effective and ineffective evaluation behaviors. Pangaro13 proposes a possible framework for one such system, which focuses on a progression of trainee roles (reporter, interpreter, manager, and educator).

Written comments about student performance on clinical evaluations remain a cornerstone of assessment. Based on our data, faculty development, with subsequent pre/post analysis to assess impact, is needed to make end-of-clerkship comments valuable for learners and other key stakeholder audiences. Comments linked to specific clerkship objectives and based on the established criteria for good feedback are necessary to help our students become the excellent physicians we envision.

REFERENCES

1. Hunt DD. The dean's letter: improving the summary evaluation. JAMA. 1997;278:789.
2. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–781.
3. Ende J, Pomerantz A, Erickson F. Preceptors' strategies for correcting residents in an ambulatory care medicine setting: a qualitative analysis. Acad Med. 1995;70:224–229.
4. Hewson MG, Little ML. Giving feedback in medical education. J Gen Intern Med. 1998;13:111–116.
5. Speer AJ, Solomon DJ, Ainsworth MA. An innovative evaluation method in an internal medicine clerkship. Acad Med. 1996;71:S76–S78.
6. Gray JD. Global rating scales in residency education. Acad Med. 1996;71:S55–S63.
7. Greenberg LW, Getson PR. Assessing student performance on a pediatric clerkship. Arch Pediatr Adolesc Med. 1996;150:1209–1212.
8. Rosenblum ND, Wetzel M, Platt O, et al. Predicting medical student success in a clinical clerkship by rating students' nonverbal behavior. Arch Pediatr Adolesc Med. 1994;148:213–219.
9. COMSEP, Olson AL (project director). General Pediatric Clerkship Curriculum. Available at: http://www.ucihs.uci.edu/comsep/curric/comcurr/gpcurtoc.html. Accessed January 2001.
10. Miles MB, Huberman AM, eds. Qualitative Data Analysis. Thousand Oaks, Calif: Sage Publications; 1994.
11. Weinstein EA, Tamur JM. Meanings, purposes, and structural resources in social interaction. In: Manis JG, Meltzer BN, eds. Symbolic Interaction. 3rd ed. Boston: Allyn & Bacon; 1978:138–140.
12. Lawrence SL, Lindemann JC, Gottlieb M. What students value: learning outcomes in a required third-year ambulatory primary care clerkship. Acad Med. 1999;74:715–717.
13. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74:1203–1207.