for their PED shifts affected residents' and preceptors' experiences with learning, teaching, and feedback.
METHODS: This was a qualitative study of attending physicians and residents from 14 training programs that rotate through an academic PED. Residents were asked to write a learning goal for their shift and to share it with their attending. Semi-structured interviews about their experience were conducted with a convenience sample of residents and a purposive randomized sample of attending physicians. Interviews were audio-recorded, transcribed, parallel coded, and analyzed until thematic saturation was reached.
RESULTS: During the 19-week study period, 358 unique learning goals were collected. Nineteen residents and 10 attending physicians were interviewed. Major themes included: (1) Goal-setting facilitated learning. Residents and attendings reported that learning was attending-dependent and identified multiple ways in which attendings facilitated the accomplishment of residents' goals, such as prioritizing teaching on shift, providing verbal teaching, and directing residents to patients and resources. (2) Residents' perceived weaknesses, future practice settings, and available patients informed their goals. (3) Goal identification helped determine residents' educational needs, as there was often a mismatch between resident- and attending-identified goals. (4) Ideal goals were specific and achievable. (5) There were multiple barriers and facilitators to goal-setting, accomplishment, and feedback; the most commonly reported barriers were the busyness of the ED, the patients available, and residents' difficulty creating goals.
CONCLUSIONS: Asking residents to self-identify learning goals for their shifts in the pediatric ED as an instructional strategy facilitated perceived learning, goal accomplishment, and feedback.

44. DEVELOPMENT OF A TOOL FOR FACULTY TO ASSESS RESIDENT-LED LARGE GROUP TEACHING
Ariel S. Frey-Vogel, MD, MAT, Kristina Dzara, PhD, MMSc, Massachusetts General Hospital, Boston, MA; Kimberly A. Gifford, MD, Dartmouth-Hitchcock Medical Center, Hanover, NH; Erica Y. Chung, MD, Brown University, Providence, RI
BACKGROUND: Residency programs are required to develop residents as teachers. Much of the formal teaching by residents occurs in group settings, yet existing published tools lack validity evidence for the assessment of resident-led large group teaching. We aimed to create a tool for faculty to assess resident teaching in this setting.
METHODS: Initial content for the tool came from a literature review and our experience leading resident-as-teacher curricula. Resident focus groups provided stakeholder input, informing the first round of tool revisions. A modified Delphi panel of 14 international faculty experts provided feedback on the tool's elements over 2 rounds of revisions. Anchors were designed and finalized after a third Delphi round. Study investigators piloted the tool with 10 video recordings of senior residents teaching at the 3 sites. Cronbach's alpha was calculated for internal consistency and the intraclass correlation coefficient (ICC) for interrater reliability.
RESULTS: The tool has 6 domains: learning climate, goals and objectives, content, promotion of understanding and retention, session management, and closure. The domains comprise 12 sub-elements, which are described by 37 observable behaviors. Cronbach's alpha was 0.88.
The ICC was good or excellent for 13 of the 37 observable behaviors (35%), fair or poor for 22 (59%),
and the remaining 2 had no ICC score because there was no variability in rater scores.
CONCLUSION: A tool for faculty assessment of resident-led large group teaching was developed using robust methodology. In the pilot study, the assessed behaviors had good internal consistency but low interrater reliability without rater training. In the next study phase, we will develop tool utilization standards, train faculty raters, and apply the tool to a larger video sample of resident teaching. We will collect validity evidence for the tool, including its ability to discriminate between novice and advanced teachers and its correlation with teaching milestones.

45. "INTERN CHECK-IN TOOL" TO IMPROVE EARLY IDENTIFICATION OF STRUGGLING INTERNS AND FACILITATE FEEDBACK
Alyssa Swick, MD, Duane Allen, MD, Krista Allen, MD, Stefan Malin, MD, Mitchell Goldman, MD, Zeina Nabhan, MD, Jerry Rushton, MD, Indiana University School of Medicine, Indianapolis, IN
BACKGROUND: Our current evaluation system relies on faculty and peer evaluation of intern performance against general ACGME milestones. However, a program may have insufficient data to accurately identify a struggling intern until several months into the academic year.
OBJECTIVE: To develop a brief, objective, resident-based evaluation tool to facilitate earlier identification of struggling interns in pediatric and internal medicine programs.
METHODS: The intern check-in tool (ICT) consists of 18 items covering a variety of observable key skills expected of interns (refer to attached form). It is scored on a 22-point scale of objective behaviors. Chief residents meet halfway through each rotation and review the tool with the senior residents supervising each intern.
RESULTS: We implemented the ICT at the beginning of the academic year in July 2018. Mid-year data are still being analyzed. In January 2019, after completion of the clinical competency committee (CCC) meetings, we will perform statistical analysis to measure correlations between ICT scores and overall intern performance as assessed by the CCC. We will also calculate the sensitivity and specificity for a range of ICT scores and measure correlations between ICT scores and demographic data, including medical school quartile and USMLE scores, for each intern. The ICT allowed us to identify early on a struggling intern who had multiple high scores. Following
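As a purely hypothetical illustration of the sensitivity/specificity analysis that abstract 45 plans, the sketch below sweeps candidate cut-off scores on the 22-point ICT scale against the CCC's judgment of which interns are struggling. The data, the assumption that higher ICT scores indicate greater concern, and the cut-off rule are all invented for demonstration; they are not details from the abstract.

```python
# Hypothetical sketch: sensitivity and specificity of the ICT at each
# candidate cut-off, using made-up (score, struggling-per-CCC) pairs.
interns = [(4, False), (6, False), (9, False), (11, True),
           (13, True), (7, False), (15, True), (5, False)]

for cutoff in range(23):  # candidate cut-offs on the 22-point scale
    truth = [struggling for _, struggling in interns]
    flagged = [score >= cutoff for score, _ in interns]  # assumed: high score = concern
    tp = sum(f and t for f, t in zip(flagged, truth))
    fn = sum(not f and t for f, t in zip(flagged, truth))
    tn = sum(not f and not t for f, t in zip(flagged, truth))
    fp = sum(f and not t for f, t in zip(flagged, truth))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"cutoff {cutoff:2d}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Tabulating both quantities across every possible cut-off, as above, is one simple way to choose an operating point that balances catching struggling interns early against over-flagging interns who are on track.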