The Journal of Emergency Medicine, Vol. 49, No. 4, pp. 505–512, 2015. Copyright © 2015 Elsevier Inc. Printed in the USA. All rights reserved. 0736-4679/$ - see front matter
http://dx.doi.org/10.1016/j.jemermed.2015.05.035
Education

PREDICTORS OF A TOP PERFORMER DURING EMERGENCY MEDICINE RESIDENCY

Rahul Bhat, MD,* Katrin Takenaka, MD,† Brian Levine, MD,‡ Nikhil Goyal, MD,§ Manish Garg, MD,‖ Annette Visconti, MD,¶ Leslie Oyama, MD,** Edward Castillo, PhD,** Joshua Broder, MD,†† Rodney Omron, MD,‡‡ and Stephen Hayden, MD**

*Department of Emergency Medicine, MedStar Georgetown University Hospital/MedStar Washington Hospital Center, Washington, DC, †Department of Emergency Medicine, University of Texas, Houston, Texas, ‡Department of Emergency Medicine, Christiana Care Health System, Newark, Delaware, §Department of Emergency Medicine, Henry Ford Hospital, Detroit, Michigan, ‖Department of Emergency Medicine, Temple University Hospital, Philadelphia, Pennsylvania, ¶Department of Emergency Medicine, New York Methodist Hospital, Brooklyn, New York, **Department of Emergency Medicine, University of California at San Diego, La Jolla, California, ††Division of Emergency Medicine, Department of Surgery, Duke University Hospital, Durham, North Carolina, and ‡‡Department of Emergency Medicine, Johns Hopkins University Hospital, Baltimore, Maryland

Reprint Address: Rahul Bhat, MD, Department of Emergency Medicine, MedStar Georgetown University Hospital/MedStar Washington Hospital Center, 110 Irving Street NW, Washington, DC 20010
Abstract—Background: Emergency Medicine (EM) residency program directors and faculty spend significant time and effort creating a residency rank list. To date, however, there have been few studies to assist program directors in determining which pre-residency variables best predict performance during EM residency. Objective: To evaluate which pre-residency variables best correlated with an applicant's performance during residency. Methods: This was a retrospective multicenter sample of all residents in the three most recent graduating classes from nine participating EM residency programs. The outcome measure of top residency performance was defined as placement in the top third of a resident's graduating class based on performance on the final semi-annual evaluation. Results: A total of 277 residents from nine institutions were evaluated. Eight of the predictors analyzed had a significant correlation with the outcome of resident performance. Applicants' grades during home and away EM rotations, designation as Alpha Omega Alpha (AOA), U.S. Medical Licensing Examination (USMLE) Step 1 score, interview scores, "global rating" and "competitiveness" on the nonprogram leadership standardized letter of recommendation (SLOR), and having five or more publications or presentations showed a significant association with residency performance. Conclusion:
We identified several predictors of top performers in EM residency: an honors grade for an EM rotation, USMLE Step 1 score, AOA designation, interview score, high SLOR rankings from nonprogram leadership, and completion of five or more presentations and publications. EM program directors may consider utilizing these variables during the match process to choose applicants who have the highest chance of top performance during residency. © 2015 Elsevier Inc.

Keywords—education; NRMP; match; predictors; success
INTRODUCTION

Emergency Medicine (EM) residency program directors and faculty spend significant time and effort creating a residency rank list. The foremost goal in resident selection and ranking is to determine which credentials identify applicants who will become outstanding residents in their program. To date, however, there have been few studies to assist program directors in determining which pre-residency variables best predict performance during EM residency.
RECEIVED: 15 May 2015; ACCEPTED: 29 May 2015
Prior studies in the EM literature have attempted to better elucidate pre-residency variables that are associated with successful completion of an EM residency. Crane and Ferraro published a study using a 5-point retrospective opinion survey of EM program directors examining the most important applicant credentials for a competitive rank (1). The investigators determined that the EM rotation grade, the interview, clinical grades, and letters of recommendation had the highest reported mean value. Hayden et al. went a step further by using a consensus faculty survey to rank order EM residents at a single center (2). They then retrospectively evaluated their residents' applications to determine which factors best correlated with faculty-assigned rank. The authors concluded that the quality of medical school attended (as determined by internal faculty consensus) and distinctive talents (class officer, star athlete, and others) were the best predictors of a successful resident. More recently, Breyer et al. retrospectively evaluated EM grade, standardized letter of recommendation (SLOR), medical school class rank, and U.S. Medical Licensing Examination (USMLE) scores and their relationship to the applicant's position on a single program's rank list (3). They determined that higher EM rotation scores, medical school rank from the medical school performance evaluation (MSPE), and SLOR global assessments were positively correlated with rank-list position, but none of these correlations was considered strong. Additionally, the study was not designed to evaluate applicant performance during residency. To date, there are no multi-institutional data to guide evaluation of an applicant's credentials that can be readily used by program directors. The objective of this study was to evaluate which pre-residency variables best correlated with an applicant's performance during residency based on the end-of-residency evaluation.

METHODS

Study Design

This was a retrospective cohort study of data from the three most recent graduated classes of nine participating United States Accreditation Council for Graduate Medical Education (ACGME)-accredited EM residency programs. Program directors who attended a didactic session at the Council of Residency Directors in EM Academic Assembly in 2012 and 2013 were asked to volunteer to participate in the study, which was coordinated through the EMERG network (Emergency Medicine Education Research Group). Each participating program's institutional review board approved the study protocol.
Study Setting and Population

A sample of the three most recent graduated classes of residents from each participating EM residency program was chosen. All residents from each program for whom complete data were available were included for analysis.

Study Protocol

For each graduate, pre-specified predictor variables (Table 1) were extracted from the following sources:
- The applicant's residency application from the Electronic Residency Application Service (ERAS)
- Interview scores assigned by residency faculty, stratified by tier (top, middle, bottom third)
- Position on the residency's National Resident Matching Program (NRMP) rank list, stratified by tier (top, middle, bottom third)

Medical school rankings were based on the 2013 US News and World Report Top Medical Schools report using the research ranking, the primary care ranking, and the average of both rankings. To limit the potential for bias when reviewing resident outcomes, the end-of-residency semi-annual evaluation was chosen as the most objective measure of residency performance from each program. All participating programs indicated that they utilized a numerical scoring system for each resident across each of the six ACGME core competencies. The score for each competency at all sites was a composite of all monthly evaluations, nursing feedback, and direct observation, as well as faculty input. Each resident was categorized into the top, middle, or bottom third of their class based on the sum of the resident's total scores for each of the core competencies. Thus, the outcome measure of a top-performing resident was defined as a total score in the top third in comparison with the rest of the class. The classification of "top third" was chosen because this measure was utilized in several other studies, and program directors were comfortable with this category delineation.

Statistical Analysis

Average clerkship score was calculated as the mean score of third-year rotations in Surgery, Medicine, Pediatrics, Obstetrics, Family Medicine, and Psychiatry. For medical schools where the clerkship was graded on a "pass/fail only" basis, those data were excluded from the analysis. Home and away EM clerkship grades were categorized as honors and other. Medical school rankings for research and primary care were averaged, and each measure was reported as a continuous variable. Publications and presentations were totaled and used as a composite variable categorized as none, 1, 2, 3 or 4, or 5 or more.
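As an illustration of the outcome construction described in the Study Protocol above, the following minimal sketch (with hypothetical residents and scores, not the programs' actual evaluation data) shows how summed competency scores could be split into class tertiles:

```python
# Minimal sketch (hypothetical data): classify residents into top/middle/bottom
# thirds of their class by the sum of their six ACGME core-competency scores.
import pandas as pd

scores = pd.DataFrame({
    "resident": ["A", "B", "C", "D", "E", "F"],
    "total_score": [52, 47, 58, 41, 49, 55],  # sum of the six competency scores
})

# Rank within the class and cut into thirds; the top tertile defines the
# "top performer" outcome used in the analysis.
scores["tertile"] = pd.qcut(
    scores["total_score"].rank(method="first"),
    q=3,
    labels=["bottom", "middle", "top"],
)
scores["top_third"] = (scores["tertile"] == "top").astype(int)
print(scores)
```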
Table 1. List of All Predictor Variables Evaluated During the Study

Predictor Evaluated | Notes
Core third-year clerkship grade | 0 = Fail, 1 = Pass, 2 = High Pass, 3 = Honors
Home/away EM clerkship grade | 0 = Fail, 1 = Pass, 2 = High Pass, 3 = Honors
Ratings from MSPE | Overall rank by quintile
USMLE step 1 & step 2 clinical knowledge (CK) scores | Raw score on each examination
Standard letter of recommendation (SLOR) | Work ethic, global assessment, and competitiveness scores by program leadership (PL) and other SLORs (non-PL); ranked as good, very good, excellent, and outstanding
Interview score | Stratified into top, middle, bottom tier
NRMP rank order list rank | Stratified into top, middle, bottom tier
Alpha Omega Alpha (AOA) membership | Yes/no
Gold Humanism award | Yes/no
Other awards | Yes/no
Rank of medical school attended (research and primary care) | 1 = Unranked, 2 = Not published, 3 = Lower third, 4 = Middle third, 5 = Top third
Degree earned | DO vs. MD
International graduate | Yes/no
Extracurricular activities | Yes/no if significant extracurricular experience (examples include star athlete, Eagle Scout, military service, officer in medical school, or national society)
Prior EM experience | Yes/no if over 1 year of experience as an EMT, EM patient care technician, medical scribe in the ED, or ED nurse
Prior work experience | Yes/no if over 1 year of any work experience
Number of presentations | Including posters and oral presentations at national meetings
Number of publications | Including only full manuscripts in peer-reviewed journals, textbook chapters, and review articles
Distance from permanent home address to residency program | In-state, regional, cross country, or international
EM = emergency medicine; MSPE = medical school performance evaluation; NRMP = National Resident Matching Program; EMT = emergency medical technician; ED = emergency department.
All SLOR measures were categorized as "outstanding" and "less than outstanding." Univariate logistic regression was used to identify associations between potential predictor variables and the outcome of placement in the top third of the residency class. All variables were treated as continuous unless noted otherwise. Odds ratios (ORs), 95% confidence intervals (CIs), and associated p-values are presented. Significance was defined as p < 0.05. All statistical analyses were conducted using the IBM SPSS Statistics 19.0 software package (SPSS, Inc., Chicago, IL).
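The analyses themselves were run in SPSS; purely as an illustration of the univariate approach described above, the sketch below fits the same kind of model in Python with statsmodels, using hypothetical data and column names:

```python
# Illustrative sketch only (hypothetical data; the study used SPSS):
# one univariate logistic regression per predictor, reporting OR, 95% CI, and p.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per resident: binary outcome (1 = top third of class) plus predictors.
df = pd.DataFrame({
    "top_third":      [1, 0, 1, 1, 0, 0, 1, 0],
    "usmle_step1":    [245, 230, 221, 250, 238, 215, 242, 225],
    "em_home_honors": [1, 0, 1, 0, 1, 0, 1, 0],  # 1 = honors, 0 = other
})

def univariate_or(outcome, predictor, data):
    """Fit outcome ~ predictor and return the odds ratio, 95% CI, and p-value."""
    X = sm.add_constant(data[[predictor]])
    fit = sm.Logit(data[outcome], X).fit(disp=0)
    odds_ratio = float(np.exp(fit.params[predictor]))
    ci_low, ci_high = np.exp(fit.conf_int().loc[predictor])
    return odds_ratio, (float(ci_low), float(ci_high)), float(fit.pvalues[predictor])

for predictor in ["usmle_step1", "em_home_honors"]:
    or_, (lo, hi), p = univariate_or("top_third", predictor, df)
    print(f"{predictor}: OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}, p = {p:.3f}")
```

With the real data set, one such call per predictor in Table 1 would reproduce the OR, CI, and p-value layout reported in Table 3.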
RESULTS

Nine programs volunteered to participate in the study (Table 2). During the study period, each program received between 800 and 1200 applications each year and interviewed 80 to 200 applicants for its postgraduate year-1 class. The main teaching hospitals affiliated with each training program had an annual emergency department census of 60,000 to 115,000 patients and included community, academic, urban, and suburban settings. The entire study population consisted of 286 residents, nine of whom were excluded due to missing data. Data from the remaining 277 residents were analyzed.

Mean USMLE Step 1 score was 221 (95% CI 218–222) and mean USMLE Step 2 score was 228 (95% CI 225–230). For the EM home rotation, the most frequent grade was honors (59.7%, n = 126), followed by high pass (34.1%, n = 72) and pass (6.2%, n = 13). For the EM away rotation, the most frequent grade was honors (53.1%, n = 77), followed by high pass (41.3%, n = 60) and pass (5.5%, n = 8). Logistic regression analyses showed that resident performance had statistically significant associations with EM home rotation grade, EM away rotation grade, USMLE Step 1 score, tiered interview score, Alpha Omega Alpha (AOA) designation, nonprogram leadership SLOR global assessment, nonprogram leadership SLOR competitiveness ranking, and having five or more publications/presentations (Table 3).

DISCUSSION

In our increasingly competitive specialty, EM program directors select candidates to interview from an overwhelmingly large pool of applicants. For example, each program in our study annually reviewed 800–1200 applications and interviewed 80–200 applicants for 8–14 resident positions. Complicating this selection process is the dearth of evidence to support which pre-residency variables identify candidates who are likely to become top-performing emergency medicine residents.
Table 2. List of All Participating Sites

Program Name | Training Format | Graduates per Class | Annual Main ED Census
Christiana Care Health System, Newark, DE | PGY1-3 | 12 | 115,000
Duke University Hospital, Durham, NC | PGY1-3 | 8–9 | 70,000
Georgetown University Hospital/Washington Hospital Center, Washington, DC | PGY1-3 | 8 | 88,000
Henry Ford Hospital, Detroit, MI | PGY1-3 | 14 | 95,000
Johns Hopkins University Hospital, Baltimore, MD | PGY1-3 | 12 | 65,000
New York Methodist Hospital, Brooklyn, NY | PGY1-3 | 10 | 90,000
Temple University Hospital, Philadelphia, PA | PGY1-3 | 12 | 80,000
University of California at San Diego, San Diego, CA | PGY1-4 | 8 | 66,000
University of Texas Health Science Center, Houston, TX | PGY1-3 | 12 | 60,000
ED = emergency department; PGY = postgraduate year.
This is the first multicenter study to analyze which pre-residency predictors correlate with a favorable residency outcome. Although several studies have sought to identify predictors of success in residency, little consensus exists.
Some of the difficulty in identifying predictors of success is related to the lack of a universally accepted definition of success (4,5).
Table 3. Logistic Regression Analyses (statistically significant associations marked with *)

Predictor | OR | 95% CI | p-Value
Medical school
  Average clerkship grade | 1.70 | 0.97–3.00 | 0.066
  EM home rotation grade (honors/other) | 2.06 | 1.16–3.68 | 0.014*
  EM away rotation grade (honors/other) | 2.34 | 1.09–5.04 | 0.029*
  MSPE overall tier | 1.05 | 0.79–1.39 | 0.758
  International medical graduate (yes/no) | 1.27 | 0.42–3.88 | 0.676
  Degree type (DO/MD) | 0.72 | 0.24–2.17 | 0.558
  Average medical school ranking | 1.09 | 0.89–1.34 | 0.413
Test scores
  USMLE step 1 score | 1.02 | 1.01–1.04 | 0.004*
  USMLE step 2 CK score | 1.01 | 1.00–1.03 | 0.100
Awards/extracurricular
  Extracurricular activities (yes/no) | 1.25 | 0.77–2.03 | 0.373
  Full time work (yes/no) | 0.85 | 0.52–1.39 | 0.517
  EM work (yes/no) | 1.01 | 0.56–1.84 | 0.973
  AOA designation (yes/no) | 2.36 | 1.11–5.01 | 0.025*
  Gold Humanism award (yes/no) | 0.33 | 0.04–2.60 | 0.290
  Other medical school awards (yes/no) | 1.58 | 0.95–2.64 | 0.080
  Interview tier ranking | 1.82 | 1.20–2.76 | 0.005*
  NRMP rank list tier | 1.17 | 0.85–1.61 | 0.323
SLOR – program leadership
  Work ethic (outstanding/other) | 1.39 | 0.79–2.46 | 0.255
  Global assessment (outstanding/other) | 1.36 | 0.78–2.38 | 0.280
  Competitiveness (very competitive/other) | 1.27 | 0.74–2.20 | 0.387
SLOR – nonprogram leadership
  Work ethic (outstanding/other) | 1.94 | 0.93–4.03 | 0.076
  Global assessment (outstanding/other) | 2.02 | 1.07–3.83 | 0.030*
  Competitiveness (very competitive/other) | 2.23 | 1.17–4.27 | 0.015*
Distance from home (Ref = in state) | | | 0.546
  Out of state - same region | 0.95 | 0.52–1.71 | 0.855
  Cross country | 0.69 | 0.38–1.28 | 0.241
  Out of country | 0.57 | 0.19–1.78 | 0.336
Presentations and publications (1 or more) | | | 0.117
  1 | 0.69 | 0.31–1.54 | 0.359
  2 | 1.78 | 0.76–4.17 | 0.185
  3 to 4 | 1.23 | 0.54–2.79 | 0.617
  5 or more | 2.16 | 1.04–4.48 | 0.039*
OR = odds ratio; CI = confidence interval; EM = emergency medicine; MSPE = medical school performance evaluation; USMLE = US Medical License Examination; AOA = Alpha Omega Alpha; NRMP = National Resident Matching Program; SLOR = standardized letter of recommendation.
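As a reading note (not part of the original table): assuming the intervals and p-values are Wald-based, as is typical of SPSS logistic regression output, a predictor is flagged as significant at p < 0.05 exactly when its 95% CI excludes 1, because the interval is built from the log odds ratio as

$$\mathrm{CI}_{95\%} = \exp\bigl(\ln(\mathrm{OR}) \pm 1.96 \cdot \mathrm{SE}_{\ln \mathrm{OR}}\bigr)$$

For example, the EM home rotation grade (OR 2.06, 95% CI 1.16–3.68) excludes 1 and is reported as significant (p = 0.014), whereas all three program-leadership SLOR intervals span 1.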
Some studies have used objective measures such as in-training or board certification examination scores, whereas others have looked at clinical evaluations and global assessments of performance (4–7). Recent data from two pediatric residency programs have shown that most faculty do not emphasize knowledge or examination scores, but instead concentrate on global behaviors when determining the makeup of a successful resident (8). In the design of our study, we chose variables that were readily obtainable, were used by most programs for semi-annual evaluations, and combined attributes reflecting both behaviors and medical knowledge. Although not specifically designed to evaluate a resident's comparative performance with their peers, the final semi-annual evaluation was the most objective and generalizable outcome measure across programs.

In a study by Stohl et al., no applicant variable accurately predicted high performers among the Obstetrics and Gynecology residents at Johns Hopkins (4). Another study, by Chole and Ogden, found that most typical factors (such as USMLE Step 1 scores, medical school grades, letters of recommendation, election to AOA, and residency interviews) were not predictive of top otolaryngology graduates (6). In contrast to these studies, our study yielded evidence of several specific predictors of a resident placing in the top third of their class.

One key predictor was an honors grade in an EM rotation at the candidate's home institution. These students were twice as likely to be in the top third of their residency class. An honors performance during an away EM rotation was similarly well correlated with performance during residency. This result is consistent with a study by Baldwin et al. that found that away rotations were a strong predictor of matching in an orthopedic residency (9). Similarly, upon reviewing 29 resident files, Grewal et al. noted that clinical performance as a surgical intern correlated with performance during urologic residency (7). This makes sense: a student or intern who performs exceptionally well under conditions similar to those found during residency is likely to succeed during future postgraduate training.

The Stohl and Chole studies found that USMLE Step 1 scores were not predictive of exceptional residents (4,6). Additionally, whereas one pediatric study found that Step 1 scores were predictive of performance on national board examinations, this did not translate into overall residency performance (10). In contrast, our study indicated that raw scores on Step 1 were strongly predictive of high-performing residents. For every point increase in the Step 1 score, the applicant's odds of being in the top third of their class increased by a factor of 1.02. The Step 2 clinical knowledge (CK) raw score approached but did not reach statistical significance. One can extrapolate that scoring well on an examination requires a developed work ethic and medical knowledge base. Perhaps these attributes translate to high overall residency performance.
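To put the per-point estimate in perspective, an illustrative calculation (derived from the reported OR of 1.02, not an additional analysis) shows how the effect compounds over a larger score difference:

$$\mathrm{OR}_{\text{10-point increase}} = 1.02^{10} \approx 1.22$$

That is, under the fitted model, an applicant scoring 10 points higher on Step 1 would have roughly 22% greater odds of finishing in the top third of the class.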
Another predictor of a top-performing resident was election to the AOA Honor Society. AOA members are generally in the top academic quartile of their medical school class and are selected based on additional demonstrations of leadership, professionalism, and service to the community. Several studies have shown a positive correlation between AOA membership and first-time passage of board examinations; however, the relationship between election to AOA and faculty ratings of residents' performance is less clear (11–13). Daly et al. found that AOA membership was highly predictive of a resident placing in the top third of their otolaryngology class based on faculty consensus (14). In contrast, Borowitz et al. did not detect any difference between faculty ratings of pediatrics residents who were and those who were not AOA members (15).

Program directors in multiple specialties, such as orthopedics, pediatrics, internal medicine, family medicine, surgery, and physical medicine and rehabilitation, have described the interview rating as an important criterion in the resident selection process (2,5,16,17). The interview's value as a predictor of future residency performance, however, has been limited, as some studies have shown a positive correlation, whereas others have not (2,5,7,16,17). Some of this dichotomy may be due to disparate interview techniques at different institutions and among different specialties. Although standardized interviews may offer more interrater reliability, this reliability does not translate into better prediction of performance during residency (18,19). In our study, interview ranking was noted to be a significant predictor of a top-performing resident. Each site had its own method of interviewing as well as scoring candidates after the interview, but despite this heterogeneity, the interview score was a useful predictor.

In a recent study by Love et al., the vast majority of EM program directors agreed that the SLOR is important in the evaluation of residency candidates (20). In our study, there was no statistically significant correlation between the work ethic, global assessment, and competitiveness ratings on SLORs written by program leadership and performance during residency. Interestingly, nonprogram leadership SLOR ratings did show a stronger association with residency performance, although this is difficult to interpret because students often choose those who would write the most favorable letters. These results may be confounded by the shortcomings of the SLOR reported in other studies, including grade inflation, inexperienced letter writers, inconsistencies between grades/ratings and written comments, and selection bias. Selection bias can arise through several mechanisms, such as the candidate
choosing the letter writer and selecting which SLOR to post on ERAS, or an author declining to write an unfavorable SLOR (20,21).

In our study, presentations and publications did not correlate with performance during residency, with the exception of the subgroup of applicants who completed five or more presentations and publications. Studies in other specialties have revealed mixed results. One orthopedic study showed a correlation between student research experience and research productivity during residency (22). Others have not demonstrated any consistent correlation of prior publications and presentations with measures of residency success such as in-training examination scores and clinical performance ratings (5,16).

Factors that were not predictive of performance during residency were extracurricular activities, full-time work experience, EM-related work experience, the Gold Humanism award, graduating from an international or osteopathic medical school, other medical school awards, third-year clerkship scores, MSPE quintile, and distance from home. Perhaps most surprisingly, an applicant's NRMP rank list tier also did not correlate with performance as a resident. Applicants near the bottom of a program's rank list were as likely to be in the top third of their graduating residency class as those at the top of the rank list, underscoring the need to stratify applicants using variables that better correlate with residency performance.

Future areas of study include a prospective trial utilizing predictors with a weighted algorithm in the decision-making process. Additionally, a study to determine predictors of poor residency performance could help identify individuals who might struggle and therefore benefit from earlier identification and intervention during their residency.

Limitations

Our study had several limitations. There was significant heterogeneity in terms of institutional grading as well as applicant credentials. For example, some medical schools graded on a pass/fail basis, whereas others used variations such as honors/high pass/pass/fail. Additionally, not all candidates completed an away EM rotation, and even fewer completed multiple away rotations. Most applicants had reported USMLE Step 1 scores, but fewer had Step 2 CK scores reported. Comparisons and statistical analyses can be difficult to interpret in these cases. As this was a retrospective study evaluating only applicants who successfully matched in EM, the results may not be generalizable to all applicants or specialties. Also, as most participating programs were urban, academic, tertiary care centers, the results may not be generalizable to all EM programs. Furthermore, since the
completion of the study, the standardized letter of evaluation (SLOE) has been introduced to replace the SLOR in evaluating applicants after an EM rotation. It is unknown whether the new SLOE will correlate with the SLOR or with resident performance. Finally, an overall regression model assessing independent associations could not be developed because of the high correlation between many of the predictor variables. Although this could be addressed somewhat with data reduction strategies, we were still limited because some of the significant variables that would be included in a final model had a higher number of missing cases, which would have dramatically decreased the sample included in the model. For example, most evaluated residents had scores for USMLE Steps 1 and 2 and for third-year clerkships, but a large number did not have an EM grade or a SLOR. Additionally, there is the possibility that one or more of the univariate associations were significant by chance. However, because the identified associations are plausible and we were not testing specific hypotheses, we did not modify the significance level.

CONCLUSION

We identified several predictors of top performers in EM residency: an honors grade for an EM rotation (both home and away), USMLE Step 1 score, election to AOA, interview score, high SLOR rankings from nonprogram leadership, and completion of five or more presentations and publications. Applicants who scored better on each of these measures were significantly more likely to perform at the top of their residency class. EM program directors may consider utilizing these variables during the match process to choose applicants who have the highest chance of top performance during residency.

Acknowledgments—The authors thank Sam Luber, MD, from the University of Texas at Houston for helping with the study concept, and Chad Kessler from Duke University for helping organize the study sites.
REFERENCES

1. Crane JT, Ferraro CM. Selection criteria for emergency medicine residency applicants. Acad Emerg Med 2000;7:54–60.
2. Hayden S, Hayden M, Gamst A. What characteristics of applicants to emergency medicine residency programs predict future success as an emergency medicine resident? Acad Emerg Med 2005;12:206–12.
3. Breyer MJ, Sadosty A, Biros M. Factors affecting candidate placement on an emergency medicine residency program's rank order list. West J Emerg Med 2012;13:458–62.
4. Stohl HE, Hueppchen NA, Bienstock JL. Can medical school performance predict residency performance? J Grad Med Educ 2010;2:322–6.
5. Harfmann KL, Zirwas MJ. Can performance in medical school predict performance in residency? A compilation and review of correlative studies. J Am Acad Dermatol 2011;65:1010–22.
6. Chole RA, Ogden MA. Predictors of success in otolaryngology residency applicants. Arch Otolaryngol Head Neck Surg 2012;138:707–11.
7. Grewal SG, Yeung LS, Brandes S. Predictors of success in a urology residency program. J Surg Educ 2013;70:138–43.
8. Rosenbluth G, O'Brien B, Asher A, Cho C. The "Zing Factor" - how do faculty describe the best pediatrics residents? J Grad Med Educ 2014;6:106–11.
9. Baldwin K, Weidner Z, Ahn J, Mehta S. Are away rotations critical for a successful match in orthopaedic surgery? Clin Orthop Relat Res 2009;467:3340–5.
10. McCaskill QE, Kirk JJ, Barata DM, et al. USMLE Step 1 scores as a significant predictor of board passage in pediatrics. Ambul Pediatr 2007;7:192–5.
11. Shellito JL, Osland JS, Helmer SD, Chang FC. American Board of Surgery examinations: can we identify surgery residency applicants and residents who will pass the Boards on their first attempt. Am J Surg 2010;199:216–22.
12. Andriole DA, Jeffe DB, Hageman HL, Whelan AJ. What predicts USMLE Step 3 performance? Acad Med 2005;80(Suppl):S21–4.
13. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach 2006;28:103–16.
14. Daly KA, Levine SC, Adams GL. Predictors for resident success in otolaryngology. J Am Coll Surg 2006;202:649–54.
15. Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med 2000;154:256–60.
16. Egol KA, Collins J, Zuckerman JD. Success in orthopaedic training: resident selection and predictors of quality performance. J Am Acad Orthop Surg 2011;19:72–80.
17. Balentine J, Gaeta T, Spevack T. Evaluating applicants to emergency medicine residency programs. J Emerg Med 1999;17:131–4.
18. Bandiera G, Regehr G. Reliability of a structured interview scoring instrument for a Canadian postgraduate emergency medicine training program. Acad Emerg Med 2004;11:27–32.
19. Blouin D, Day AG, Pavlov A. Comparative reliability of structured versus unstructured interviews in the admission process of a residency program. J Grad Med Educ 2011;3:517–23.
20. Love JN, Smith J, Weizberg M, et al. Council of Emergency Medicine Residency Directors' standardized letter of recommendation: the program director's perspective. Acad Emerg Med 2014;21:680–7.
21. Grall KH, Hiller KM, Stoneking LR. Analysis of the evaluative components on the Standard Letter of Recommendation (SLOR) in emergency medicine. West J Emerg Med 2014;15:419–23.
22. Spitzer AB, Gage MJ, Looze CA, Walsh M, Zuckerman JD, Egol KA. Factors associated with successful performance in an orthopedic surgery residency. J Bone Joint Surg Am 2009;91:2750–5.
ARTICLE SUMMARY

1. Why is this topic important?
Significant time and effort are spent evaluating residency applications. There is little evidence-based guidance regarding which characteristics of an applicant predict performance as a resident.

2. What does this study attempt to show?
This study attempts to explore which applicant credentials are best correlated with success as a resident.

3. What are the key findings?
The best predictors of a successful resident are Emergency Medicine (EM) rotation grade, US Medical Licensing Examination Step 1 score, interview score, standardized letter of recommendation rating from nonprogram leadership faculty, Alpha Omega Alpha designation, and having five or more publications and presentations.

4. How is patient care impacted?
Evaluation of an EM residency applicant's credentials should include the variables that best correlate with a successful resident.