2012 APDS SPRING MEETING
Does Resident Ranking During Recruitment Accurately Predict Subsequent Performance as a Surgical Resident?

Jonathan P. Fryer, MD, Noreen Corcoran, MA, Brian George, MD, Ed Wang, PhD, and Debra DaRosa, PhD

Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois

BACKGROUND: While the primary goal of ranking applicants for surgical residency training positions is to identify the candidates who will subsequently perform best as surgical residents, the effectiveness of the ranking process has not been adequately studied.

METHODS: We evaluated our general surgery resident recruitment process between 2001 and 2011 inclusive, to determine if our recruitment ranking parameters effectively predicted subsequent resident performance. We identified 3 candidate ranking parameters (United States Medical Licensing Examination [USMLE] Step 1 score, unadjusted ranking score [URS], and final adjusted ranking [FAR]) and 4 resident performance parameters (American Board of Surgery In-Training Examination [ABSITE] score, PGY1 resident evaluation grade [REG], overall REG, and independent faculty rating ranking [IFRR]), and assessed whether the former were predictive of the latter. Analyses utilized the Spearman correlation coefficient.

RESULTS: We found that the URS, which is based on objective and criterion-based parameters, was a better predictor of subsequent performance than the FAR, which is a modification of the URS based on subsequent determinations of the resident selection committee. USMLE score was a reliable predictor of ABSITE scores only. However, when we compared our worst resident performances with the performances of the other residents in this evaluation, the data did not produce convincing evidence that poor resident performances could be reliably predicted by any of the recruitment ranking parameters. Finally, stratifying candidates based on their rank range did not effectively define a ranking cutoff beyond which resident performance would drop off.

CONCLUSIONS: Based on these findings, we suggest that surgery programs may be better served by utilizing a more structured resident ranking process and that subsequent adjustments to the rank list generated by this process should be undertaken with caution. (J Surg 69:724-730. © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.)

KEY WORDS: recruitment, ranking, surgery residents, predictive, performance

COMPETENCIES: Medical Knowledge, Interpersonal and Communication Skills, Systems-Based Practice

Correspondence: Inquiries to Jonathan P. Fryer, MD, Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611; fax: 312-695-9194; e-mail: [email protected]
INTRODUCTION

A primary goal in the recruitment of surgical residents is to identify those candidates who will perform best, both as residents1,2 and, ultimately, as independent surgeons.3-6 However, the best strategy for achieving this objective has not been clearly established,6-8 and there is significant variability in the ranking and selection systems and philosophies used by individual programs. Many programs use multiple parameters to best define their most promising applicants, including United States Medical Licensing Examination (USMLE) scores, Alpha Omega Alpha (AOA) status, personal statements, letters of reference, Medical Student Performance Evaluations (MSPE), research accomplishments, and interview performance.6-12 Despite these rigorous efforts to define and rank the best candidates, the final rank order is often adjusted by the resident selection committee and/or the program leadership based on their additional insights. While it is not clear whether current ranking strategies effectively identify the candidates who will subsequently perform best as residents, it is especially unclear whether adjustments to rank lists derived from more structured ranking processes result in the selection of better candidates.

The purpose of this study was to assess the predictive validity of our current resident selection system by addressing the following questions: (1) Does it effectively provide the program with high-quality residents? (2) Do candidate recruitment rankings predict the subsequent performance of our surgical residents? and (3) Does our adjusted rank list predict subsequent
resident performance more accurately than our unadjusted rank list? We hypothesize that recruitment ranking is not entirely predictive of subsequent performance and that adjustments made to rank lists derived from more structured ranking processes do not result in selection of better performing residents.
METHODS

Full institutional review board (IRB) approval was obtained to perform this study. We reviewed the selection data for all our categorical general surgery resident recruitments at our institution from 2001 to 2011 inclusive (n = 46) to determine how our successful resident candidates were ranked during their recruitment. During this time period, our program recruited 4 (2001-2003) or 5 (2004-2011) categorical residents per year using a selection system developed by Dr. Gary Dunnington (unpublished) while program director at the University of Southern California. Residents who entered our program after the PGY1 year (n = 2), and who therefore did not participate in our PGY1 recruitment process, were excluded from further analyses, as were residents who were part of the PGY1 recruitment process but left the program before completion (n = 1).

All applications to our program are screened on the basis of application completeness, USMLE score, and other factors to determine who will be invited for an interview. The number of candidates participating in an interview varies each year and during the time period of this study ranged from 60 to 94. Only those candidates who participated in an interview were subsequently ranked. We evaluated three recruitment ranking parameters as potential predictors of subsequent resident performance.
Recruitment Ranking Parameters

1. Unadjusted ranking score (URS): A preliminary ranking of all candidates is based on their URS, with the candidates obtaining the highest URS achieving the highest preliminary rankings. The URS is the sum of three equally weighted, independently determined evaluation scores (an illustrative sketch of this computation follows this section):
a. Academic profile score: based on the following parameters: institutional ranking in U.S. News & World Report, USMLE Step 1 score, class rank, and performance grade on surgery rotations. Each candidate is assigned a numerical score for each parameter based on an established scale with well-defined anchors, with higher numbers reflecting better scores. The sum of these scores is determined.
b. Program director's review score: based on the following parameters: research experience, extracurricular activities, personal statement, letters of reference, and the Dean's letter. Each candidate is assigned a numerical score based on a 5-point Likert scale with well-defined anchors for each of these parameters. Higher numbers correlate with better scores, and the sum of the scores is determined.
c. Interview score: based on the mean of two independent faculty interviewer scores. For each candidate, each interviewer assigns a numerical score based on a 5-point Likert scale with well-defined anchors for each of 7 parameters evaluated. Higher numbers correlate with better scores, and the sum of the scores is determined.
2. Final adjusted ranking (FAR): The entire preliminary rank list generated by the URS is subsequently reviewed by the resident selection committee. Based on additional discussion of new thoughts and insights regarding specific candidates, adjustments are made to the preliminary rank order, with some candidates moved to a higher or lower rank and others removed from the list altogether if they have been associated with major concerns. Furthermore, additional modifications are made following the ranking committee meeting, at the discretion of the program leadership, before submission of the FAR to the NRMP.
3. USMLE score (percentile): We also examined USMLE Step 1 scores independently, since they have previously been shown to be predictive of subsequent performance.6,12

For both the URS and the FAR, we considered both the candidates' overall ranking among all applicants invited for interviews that year (n = 60-94) and their relative ranking within the cohort of applicants who successfully matched into the program that year (n = 4-5). Candidates' recruitment ranking parameters were then compared with subsequent resident performance.
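As a purely illustrative aid, the short Python sketch below shows how a composite score of this kind can be assembled and turned into a preliminary rank order. The candidate names, component values, and score ranges are hypothetical and are not taken from our scoring instruments; only the structure (three equally weighted component sums, an interview component averaged across two interviewers, and the highest URS ranked first) mirrors the process described above.

```python
# Illustrative sketch only: component values and candidate data are hypothetical
# and do not reproduce the program's actual scoring sheets.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    academic_profile: float      # sum of academic profile sub-scores
    pd_review: float             # sum of program director's review sub-scores
    interview_scores: tuple      # one summed score per faculty interviewer

    def interview_score(self) -> float:
        # Interview component: mean of the two independent interviewer sums.
        return sum(self.interview_scores) / len(self.interview_scores)

    def urs(self) -> float:
        # Unadjusted ranking score: sum of the three equally weighted components.
        return self.academic_profile + self.pd_review + self.interview_score()


candidates = [
    Candidate("A", academic_profile=38.0, pd_review=22.0, interview_scores=(31.0, 33.0)),
    Candidate("B", academic_profile=41.0, pd_review=19.0, interview_scores=(28.0, 30.0)),
    Candidate("C", academic_profile=35.0, pd_review=24.0, interview_scores=(34.0, 32.0)),
]

# Preliminary rank list: the highest URS receives the best (first) rank position.
preliminary = sorted(candidates, key=lambda c: c.urs(), reverse=True)
for rank, c in enumerate(preliminary, start=1):
    print(f"{rank}. Candidate {c.name}: URS = {c.urs():.1f}")
```

The FAR has no corresponding computation, since it is produced by committee review and manual adjustment of this preliminary list.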
The following parameters were used to evaluate resident performance:

Resident Performance Parameters

1. Resident evaluation grade (REG): REGs are assigned to each resident using a letter grade (A-F) at semiannual resident evaluation meetings by consensus of the resident evaluation committee. Grade assignments were based on specific criteria (Table 1) pertaining to residents' knowledge, skills, and attitudes, including examination scores (ABSITE, mock orals, patient assessment and management examination [PAME]) and professionalism, including compliance with administrative responsibilities such as case logging, duty hour logging, conference attendance, and completion of evaluations. For the purposes of these analyses, letter grade assignments (A, B, C, D, E, and F) were converted into numerical grades (5, 4, 3, 2, 1, and 0, respectively) and averaged for individual residents. Additionally, each resident's average REG from the PGY-1 year (PGY-1 REG) was evaluated independently of the overall REG (i.e., the average REG from all PGY years).

TABLE 1. REG Key
Grade   Description                                           Numerical Score
A       Advance or complete—exemplary performance             5
B       Advance or complete—fully satisfactory performance    4
C       Advance or complete with statement of deficiencies    3
D       Advance or complete with probationary status          2
E       No advancement and probationary status                1
F       Dismissal from program                                0

2. Independent faculty rating with cohort ranking (IFRR): These data were obtained via a confidential survey sent to all surgical faculty who work with categorical general surgery residents, with instructions to rate each resident using a 7-point Likert scale with specific performance anchors (Table 2). If more than 1 resident in a single recruitment cohort (n = 4 or 5) was rated in the same Likert category, faculty were prompted to rank all residents within that category to force discrimination among the cohort. Data from individual faculty rating surveys were compiled and averaged for individual residents. All data were de-identified after collection and aggregated to protect resident confidentiality. This survey is not part of our standard evaluation process and was used exclusively for this study, to compel faculty to rank residents based on their relative performances and to do so without the immediate influence of other faculty or the residents' academic files.

TABLE 2. IFRR Survey Anchors
Likert Scale Category   Descriptive Anchor
7                       One of the WORST performances I have seen in my career
6                       Unusually poor performance
5                       Below average performance
4                       Average performance
3                       Above average performance
2                       Unusually good performance
1                       One of the BEST performances I have seen in my career
0                       Unable to rank due to inadequate exposure to resident

3. ABSITE score (percentile): The most recent ABSITE percentile scores were used independently, since ABSITE scores are predictive of subsequent performance on the ABS qualifying and certifying examinations.13

Poor Resident Performance

For the purposes of this study, we defined poor resident performance by any of the following criteria:
a. REG <B and/or probation at any time during the residency.
b. REG <C and/or probation at any time during the residency.
c. IFRR score >4.00, indicating that, overall, the faculty rated the resident below average.
d. ABSITE score below the 35th percentile at any time during the residency.

Predictive Ability of Ranking Range

We evaluated whether candidates' rank range would predict subsequent performance. Specifically, we evaluated whether the subsequent performances of candidates differed if they had been ranked in (a) the top 15th percentile, (b) the 15th-30th percentile, or (c) beyond the 30th percentile of their entire recruitment cohort. We used percentiles rather than raw rankings because recruitment cohorts varied significantly in size (i.e., 60-94) during this time period.

Statistical Analysis

Data were summarized using descriptive statistics (e.g., mean and standard deviation for continuous variables; count and frequency for categorical variables). Recruitment ranking parameters for residents with poor performance and those with satisfactory performance were compared using the Student t test. The association between each recruitment ranking parameter and each resident performance parameter was analyzed using the Spearman correlation coefficient. Finally, differences in each resident performance parameter among residents in different ranking ranges (<15th percentile, 15th-30th percentile, and >30th percentile) were compared using the F test. All data were analyzed using SAS 9.2 statistical software (SAS Institute Inc., Cary, NC). A p value <0.05 was considered statistically significant. This study was approved by our institutional IRB.
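The analyses described above can be reproduced in outline with standard statistical libraries. The sketch below is written in Python rather than the SAS 9.2 actually used, and it runs on a randomly generated stand-in dataset with invented column names; it is intended only to show the shape of the three tests: a Spearman correlation between a ranking and a performance parameter, a Student t test comparing a ranking parameter between poor and satisfactory performers, and a one-way F test across the three rank-range bands.

```python
# Illustrative sketch only: the study used SAS 9.2; this Python version uses
# invented column names and random data to show the shape of the analyses.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 46

# Hypothetical analytic dataset: one row per resident.
df = pd.DataFrame({
    "usmle_step1": rng.normal(238, 15, n),
    "urs": rng.normal(110, 6, n),
    "absite_pct": rng.uniform(10, 99, n),
    "overall_reg": rng.uniform(2.5, 5.0, n),   # letter grades A-F mapped to 5-0
    "rank_pct": rng.uniform(0, 100, n),        # percentile position on the rank list
})

# 1. Spearman correlation between a ranking parameter and a performance parameter.
rho, p = stats.spearmanr(df["usmle_step1"], df["absite_pct"])
print(f"USMLE vs ABSITE: rho = {rho:.2f}, p = {p:.4f}")

# 2. Student t test of a ranking parameter in poor vs satisfactory performers
#    (poor performance illustrated here as an ABSITE score below the 35th percentile).
poor = df["absite_pct"] < 35
t, p = stats.ttest_ind(df.loc[poor, "usmle_step1"], df.loc[~poor, "usmle_step1"])
print(f"Poor vs satisfactory USMLE means: t = {t:.2f}, p = {p:.4f}")

# 3. One-way F test of a performance parameter across the three rank-range bands.
bands = pd.cut(df["rank_pct"], bins=[0, 15, 30, 100], labels=["<15", "15-30", ">30"])
groups = [df.loc[bands == b, "overall_reg"] for b in ["<15", "15-30", ">30"]]
f, p = stats.f_oneway(*groups)
print(f"Overall REG across rank ranges: F = {f:.2f}, p = {p:.4f}")
```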
RESULTS

Overall Resident Quality

During the time period of evaluation (2001-2011), our program recruited 47 PGY1 categorical general surgery residents. One resident recruited during this period dropped out of the program for personal health reasons at the end of the PGY1 year, despite excellent performance evaluations, and was excluded from the analysis. All residents recruited during this period who went on to complete the residency (n = 13) were successful in passing both their ABS qualifying and certifying examinations on their first attempts.

We evaluated whether the recruitment ranking criteria, in retrospect, were effective in predicting poor resident performances in our program. We identified only 1 resident who had ever been assigned an REG <C and/or probation at any time during the residency, 16 residents who had been assigned an REG <B at any time during the residency, 6 residents who had received IFRR scores >4.00 (no resident received an IFRR score >5), and 12 residents who had scored below the 35th percentile on the ABSITE at any time during the residency.
TABLE 3. Predictors of Poor Resident Performance

                               USMLE               URS                FAR
Poor Resident Performance      Mean      SD        Mean     SD        Mean     SD
REG < B (n = 16)               230.8     16.29     109.2    4.91      21.7     12.74
No REG < B (n = 30)            242.5     12.92     111.1    6.38      17.1     11.11
p (t test)                     0.0101              0.2990             0.2027
REG < C (n = 1)                239.00    —         111.00   —         27.96    —
No REG < C (n = 45)            238.42    15.27     110.44   6.00      18.48    11.83
p (t test)                     0.8008              0.6041             <0.0001
IFRR > 4.0 (n = 6)             232.50    17.44     111.33   5.20      13.14    9.12
No IFRR > 4.0 (n = 40)         239.33    14.76     110.33   6.08      19.52    12.00
p (t test)                     0.3073              0.7023             0.2199
ABSITE < 35% (n = 12)          228.92    17.70     110.25   5.93      19.25    13.59
No ABSITE < 35% (n = 29)*      243.48    13.05     110.32   6.37      17.95    11.95
p (t test)                     0.0057              0.9752             0.7625

REG = resident evaluation grade; IFRR = independent faculty rating/ranking score; URS = unadjusted rank score; FAR = final adjusted rank.
*Five residents did not yet have an ABSITE score.
We found that the FAR for the 1 resident with an REG <C was significantly below the mean FAR for all the other successful candidates (p < 0.0001). We also found that USMLE scores were predictive of an ABSITE score below the 35th percentile (p = 0.0057) and of an REG <B (p = 0.0101). No other recruitment ranking criteria were predictive of subsequent poor performance (Table 3).

Correlation Between Recruitment Ranking Parameters

We looked for correlation between the URS, FAR, and USMLE scores in the absolute and relative rank order of the successful candidates. We found no significant correlation in the candidate rankings between any of these parameters except between the FAR and the URS (r = 0.45, p < 0.01) (Table 4).

Correlation Between Resident Performance Parameters

Similarly, we looked for correlation between the PGY1 REG, overall REG, IFRR, and ABSITE scores in ranking resident performance. We found a significant correlation between the IFRR and the overall REG (r = 0.70, p < 0.0001). As expected, there was also a high correlation between the PGY1 REG and the overall REG (r = 0.85, p < 0.0001). There was no significant correlation between any of the other parameters (Table 5).
TABLE 4. Correlation Between Recruitment Ranking Parameters

          USMLE             URS               FAR
          r       p         r       p         r       p
USMLE     1.00              0.21    0.15      0.22    0.15
URS                         1.00              0.45    <0.01
FAR                                           1.00

URS = unadjusted ranking score; FAR = final adjusted ranking.
Predictive Ability of Recruitment Ranking Parameters for Subsequent Resident Performance

Our data reveal that USMLE scores correlated with subsequent ABSITE scores (r = 0.61, p < 0.0001) but not with PGY1 grades, overall grades, or IFRRs (Table 6). The URS correlated with both PGY1 grades (r = 0.40, p = 0.0058) and overall grades (r = 0.34, p = 0.0219) but not with ABSITE scores or IFRR scores. The FAR did not correlate with any of the resident performance parameters.

Predictive Ability of Ranking Range

Using absolute rankings based on the URS, we found no significant differences in subsequent ABSITE scores, PGY1 grades, overall grades, or IFRR based on which of the 3 rank ranges candidates were placed in during recruitment (Table 7A). However, using the FAR, significant differences were noted based on which of the 3 rank ranges candidates were placed in (Table 7B). Curiously, the rank range that predicted the worst performance was the 15th-30th percentile range, while the >30th percentile group exhibited generally better performance.
DISCUSSION

The process of resident recruitment involves independent ranking decisions on behalf of both the individual surgical residency programs and the individual surgery resident candidates. Both sides assess each other's merits and then decide how they would like to rank potential educational partners for the 5-year process of general surgery training before submitting their rank lists of preferences to the NRMP. From the perspective of the surgical residency programs, the goal of recruitment is typically to identify the very best candidates and bring them into the program.
TABLE 5. Correlation Between Resident Performance Parameters

                 ABSITE (%tile)     PGY-1 REG          Overall REG          IFRR
                 r       p          r       p          r        p           r       p
ABSITE (%tile)   1.00               0.04    0.82       0.14     0.38        0.12    0.44
PGY-1 REG                           1.00               0.85     <0.0001     0.48    <0.001
Overall REG                                            1.00                 0.70    <0.0001
IFRR                                                                        1.00

REG = resident evaluation grade; IFRR = independent faculty rating ranking.
The process of ranking the candidates from best to worst varies between institutions but is often based on a combination of various performance parameters6,9 combined with a gestalt on behalf of those in charge of resident selection. While program directors differ in how they prioritize parameters in assessing resident applicants, most consider USMLE scores, interview performance, letters of recommendation, and clerkship grades to be of high importance.14,15 While USMLE scores appear to be predictive of success on Board examinations, they are less helpful in predicting clinical performance.6,16 While the data are not entirely consistent, many program directors believe letters of recommendation16 and faculty interviews17 are of critical importance. Interestingly, although different programs vary in the details of how they assess candidates, they generally come to similar conclusions in their assessment and ranking of specific candidates.8

Existing data suggest that the correlation between resident ranking and subsequent performance is poor.1,6,7 Various interventions have been attempted to improve the selection of residents who will perform well. Based on input from an organizational management consultant, Kelz et al. were able to reduce resident attrition and increase resident performance by adopting a novel selection strategy based on a standardized screening format, a formalized interview process to assess potential weaknesses, and a disciplined approach to ordering of the rank list.3 Others have also looked to external consultants to improve their recruitment process using tools of occupational analysis and personnel selection.4

Based on the overall performance of our categorical resident recruits during the period of evaluation, our evaluation and recruitment process appears, for the most part, to have been effective in recruiting candidates who can perform well in our program and successfully complete their ABS examinations. However, since we have no performance data on the residents who did not match with our program, we cannot determine whether the system used enabled us to recruit the candidates who would ultimately be the best performers as residents. When we retrospectively reviewed the recruitment ranking parameters for the resident with the poorest performance in our analysis, this candidate had been given a final ranking that was significantly worse than the mean of all other candidates from the 10-year period evaluated. In isolation, this finding suggests the system is effective in identifying potential poor performers. However, this individual was ranked high enough to match into our program and, in fact, was ranked at the top of that year's recruitment cohort, indicating that the entire cohort of recruits from that year was ranked lower than average for our program.

The initial process of screening candidates for interview invitations may significantly influence the subsequent recruitment of high-quality residents. With stringent screening, all candidates who are subjected to subsequent ranking may be capable of high-quality performance as residents. Our data (Table 5) may therefore reflect more stringent screening during this time period, with the consequence that only a pool of candidates who are all capable of performing well in our program was subjected to the ranking process. If this is true, the subsequent ranking process may have had limited impact on the quality of our resident recruits, and random resident selection may have been just as effective.

As has been shown by data from our study and others,1,6,7,9 commonly used recruitment ranking criteria are of limited effectiveness in predicting which residents will subsequently perform poorly relative to the other residents. If reliable criteria were available to predict poor performances more accurately, there would be several potential benefits to developing an effective ranking system.
TABLE 6. Resident Recruitment Ranking vs Subsequent Resident Performance

                                  USMLE                URS                 FAR
Resident Performance Parameter    r       p            r       p           r        p
ABSITE                            0.61    <0.0001      0.06    0.6952      0.09     0.5891
PGY1 REG                          0.12    0.4087       0.40    0.0058      0.17     0.2597
Overall REG                       0.16    0.2783       0.34    0.0219      0.00     0.9867
IFRR                              0.22    0.1409       0.02    0.9020      −0.12    0.4245

URS = unadjusted ranking score; FAR = final adjusted ranking; REG = resident evaluation grade; IFRR = independent faculty rating/ranking score.
TABLE 7. Resident Ranking Ranges and Subsequent Performance

                                     ABSITE (%tile)     PGY1 Grade        Overall Grade      IFRR
Resident Rank Range (%)              Mean      SD       Mean     SD       Mean     SD        Mean    SD
(A) Unadjusted Ranking Score (URS)
<15 (n = 14)                         61.69     28.82    4.57     0.55     4.50     0.44      3.10    0.78
15-30 (n = 17)                       65.57     21.77    4.29     0.79     4.26     0.64      3.16    0.80
>30 (n = 15)                         66.43     23.73    4.00     0.76     4.07     0.56      3.22    0.68
p Value (F test)                     0.8831             0.1106            0.1338             0.9158
(B) Final Adjusted Rank (FAR)
<15 (n = 19)                         60.84     26.40    4.63     0.62     4.42     0.50      3.13    0.83
15-30 (n = 16)                       66.17     22.26    3.81     0.66     3.92     0.61      3.52    0.65
>30 (n = 11)                         70.00     28.81    4.36     0.71     4.54     0.38      2.70    0.43
p Value (F test)                     0.6475             0.0025            0.0051             0.0156
Obviously, such a system could be used to influence recruitment decisions or to direct additional attention to potential poor performers who match into the program. Additionally, while it is essential to recruit residents who are likely to meet existing standards, it is even better to identify and recruit those who will exceed these standards, as they will benefit the program in many ways, including patient care, research, education, and institutional reputation, and this may perpetuate the recruitment of high-quality residents year after year. Of course, since the applicant pool may vary significantly from year to year and, as our data indicate, the fidelity of the ranking process may be more important in some years than in others, it may be useful to know whether a threshold rank level can be identified beyond which subsequent performance drops off substantially. While we were unable to clearly define a threshold rank in our study, it may be possible to do so with larger multicenter studies utilizing standardized resident selection criteria. Finally, significant time and effort go into resident recruitment, and it is therefore important to define the effectiveness of the process being used so that effective measures can be maintained and ineffective ones discontinued.

This study has several limitations. First, the resident selection system was studied at only 1 institution, which recruits 5 categorical residents per year, so results may differ for larger programs or across multiple programs. Second, this is not a comparative study but rather a descriptive case study. Unfortunately, there is no standardized approach to resident selection, and programs utilize their own highly variable processes, thereby limiting most published studies pertaining to resident selection to descriptive studies.1,3,6,7 This question should be reevaluated on a larger scale with a focused comparison of key parameters. Another concern with this study is that the ranking parameters we have used are not entirely independent of each other. The USMLE score is a component of the URS, and the FAR is essentially a modification of the rank list generated by the URS. Similarly, among our performance parameters, the ABSITE score is a component of the REG.

Our program and others utilize a structured ranking process based primarily on both objective and criterion-based subjective parameters. Subsequent modifications to the preliminary rank lists derived from this more structured process are intended to modify individual candidate rankings based on supplementary information, good or bad, that may not have been previously available. This process is especially important in identifying serious "red flags" pertaining to specific candidates that may emerge independent of the objective scoring process. While the merits of this approach seem apparent, a more subjective committee-based ranking process can potentially generate a rank list that is significantly different from the preliminary list generated from more objective data.18 This can be problematic if the process generating the committee-derived rank list is not disciplined and is driven by personal biases and opinions that may not always reflect relevant or accurate information. Therefore, while rank list adjustments may be prudent in some circumstances, our data suggest that caution must be used when making these changes, as they may negatively influence the quality of the successful resident recruits.3 These final ranking "adjustments" should be based primarily on supplemental information that is both relevant and reliable. Conversely, adjustments based on more nebulous criteria, where the opinions and biases of a more vocal minority may dictate the final decisions of the group, should probably be resisted.
ACKNOWLEDGMENTS

The authors have no conflicts of interest to report relative to the preparation or publication of this study.
REFERENCES

1. Adusumilli S, Cohan RH, Marshall KW, et al. How well does applicant rank order predict subsequent performance during radiology residency? Acad Radiol. 2000;7:635-640.
2. Neely D, Feinglass J, Wallace WH. Developing a predictive model to assess applicants to an internal medicine residency. J Grad Med Educ. 2010;2:129-132.
3. Kelz RR, Mullen JL, Kaiser LR, et al. Prevention of surgical resident attrition by a novel selection strategy. Ann Surg. 2010;252:537-543.
4. Prager JD, Myer CM, Hayes KM, et al. Improving methods of resident selection. Laryngoscope. 2010;120:2391-2398.
5. Shellito JL, Osland JS, Helmer SD, et al. American Board of Surgery examinations: can we identify surgery resident applicants and residents who will pass the examinations on the first attempt? Am J Surg. 2010;199:216-222.
6. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach. 2006;28:103-116.
7. Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med. 2000;154:256-260.
8. Gilbart MK, Cusimano MD, Regehr G. Evaluating surgical resident selection procedures. Am J Surg. 2001;181:221-225.
9. Harfmann KL, Zirwas MJ. Can performance in medical school predict performance in residency? A compilation and review of correlative studies. J Am Acad Dermatol. 2011;65:1010-1022.
10. Brothers TE, Wetherholt S. Importance of the faculty interview during the resident application process. J Surg Educ. 2007;64:378-385.
11. Melendez MM, Xu X, Sexton TR, et al. The importance of basic science and clinical research as a selection criterion for general surgery residency programs. J Surg Educ. 2008;65:151-154.
12. Bremer ES, Brooks CM, Erdmann JB. Use of USMLE to select residents. Acad Med. 1993;68:753-759.
13. De Virgilio C, Yaghoubian A, Kaji A, et al. Predicting performance of the American Board of Surgery qualifying and certifying examinations: a multi-institutional study. Arch Surg. 2010;145:852-856.
14. Makdisi G, Takeuchi T, Rodriguez J, et al. How we select our residents—a survey of criteria in general surgery residents. J Surg Educ. 2011;68:67-72.
15. National Resident Matching Program. Results of the 2010 NRMP Program Directors Survey. Available at: http://www.nrmp.org/data/programresultsbyspecialty2010; 68v3.pdf. Accessed February 2, 2012.
16. Stohl H, Hueppchen N, Bienstock J. The utility of letters of recommendation in predicting resident success: can the ACGME competencies help? J Grad Med Educ. 2011;3:387-390.
17. Brothers TE, Wetherholt S. Importance of the faculty interview during the resident application process. J Surg Educ. 2007;64:378-385.
18. Collins M, Curtis A, Artis K, et al. Comparison of two methods for ranking applicants for residency. J Am Coll Radiol. 2010;7:961-966.