Performance Assessment of Arthroscopic Rotator Cuff Repair and Labral Repair in a Dry Shoulder Simulator Tim Dwyer, M.B.B.S., Ph.D., F.R.A.C.S., F.R.C.S.C., Rachel Schachar, M.D., M.Sc., Tim Leroux, M.D., M.Ed., F.R.C.S.C., Massimo Petrera, M.D., Jeffrey Cheung, M.Sc., Rachel Greben, Patrick Henry, M.D., F.R.C.S.C., Darrell Ogilvie-Harris, M.D., M.Sc., F.R.C.S.C., John Theodoropoulos, M.D., M.Sc., F.R.C.S.C., and Jaskarndip Chahal, M.D., M.Sc., F.R.C.S.C., M.B.A.
Purpose: To evaluate the use of dry models to assess performance of arthroscopic rotator cuff repair (RCR) and labral repair (LR). Methods: Residents, fellows, and sports medicine staff performed an arthroscopic RCR and LR on a dry model. Any prior RCR and LR experience was noted. Staff surgeons assessed participants by use of task-specific checklists, the Arthroscopic Surgical Skill Evaluation Tool (ASSET), and a final overall global rating. All procedures were video recorded and were scored by a fellow blinded to the year of training of each participant. Results: A total of 51 participants and 46 participants performed arthroscopic LR and RCR, respectively, on dry models. The internal consistency or reliability (Cronbach α) using the total ASSET score for the RCR and LR was high (>0.9). One-way analysis of variance for the total ASSET score showed a difference between participants based on year of training (P < .001) for both procedures. The inter-rater reliability for the ASSET score was excellent (>0.9) for both procedures. A good correlation was seen between the ASSET score and the year of training, as well as the previous number of sports rotations. Conclusions: The results of this study show evidence of construct validity when using dry models to assess performance of arthroscopic RCR and LR by residents. Clinical Relevance: The results of this study support the use of arthroscopic simulation in the training of residents and fellows learning arthroscopic shoulder surgery.
From the University of Toronto Orthopaedic Sports Medicine (T.D., R.S., T.L., M.P., J.C., R.G., P.H., D.O-H., J.T., J.C.); Women's College Hospital (T.D., D.O-H., J.T., J.C.); and Mount Sinai Hospital (T.D., J.T.), Toronto, Ontario, Canada. The authors report that they have no conflicts of interest in the authorship and publication of this article. Received July 22, 2016; accepted January 13, 2017. Address correspondence to Tim Dwyer, M.B.B.S., Ph.D., F.R.A.C.S., F.R.C.S.C., Women's College Hospital, 76 Grenville St, Toronto, Ontario M5S 1B2, Canada. E-mail: [email protected]

Arthroscopy: The Journal of Arthroscopic and Related Surgery, Vol -, No - (Month), 2017: pp 1-9
© 2017 by the Arthroscopy Association of North America
0749-8063/16706/$36.00
http://dx.doi.org/10.1016/j.arthro.2017.01.047

Research continues to emerge regarding the use of simulation in orthopaedic training, given an increasing acceptance that some elements of surgical training should occur outside of the operating room.1 Although there are multiple options for the simulation of technical procedures, including cadaveric models and virtual reality, interest in the use of dry models continues because of their relatively low cost and their ability to simulate complex arthroscopic procedures.2-4 Rotator cuff repair (RCR) and labral repair (LR) are 2 common orthopaedic procedures that are increasingly being performed with arthroscopic techniques.5,6 It is generally accepted that these arthroscopic procedures have significant learning curves7,8 and are typically performed by surgeons with fellowship training.9 In the setting of these types of complex technical procedures, simulation allows the acquisition of skills before performance in the operating room.10,11 Staff surgeons can then use task-specific checklists and global rating scales (GRSs) such as the Arthroscopic Surgical Skill Evaluation Tool (ASSET)12 (designed specifically to assess performance of arthroscopic procedures) to ensure that trainees develop a minimal level of competency in a safe environment.

The purpose of this study was to evaluate the use of dry models to assess performance of arthroscopic RCR and LR. We hypothesized that the combination of a checklist and a previously validated GRS would show evidence of validity when assessing RCR and LR as performed by residents in a dry model.
Methods

Over 2 resident training days in November 2014, all residents in our training program (junior residents [postgraduate years 1-3] and senior residents [postgraduate years 4 and 5]) were offered the opportunity to participate in this study. The inclusion criterion was any resident in the orthopaedic program at our institution, whereas the exclusion criterion was any resident unable to participate because of clinical duties or vacation time.

Residents were asked to perform both an arthroscopic RCR and an arthroscopic LR on a dry shoulder model inside foam musculature with vinyl skin (Arthrex Custom Shoulder Model; Arthrex, Naples, FL). Standard 30° arthroscopic cameras with high-definition video systems were used for the procedures. Arthroscopic LR was performed using 2.3-mm PK suture anchors in conjunction with an ACCU-PASS suture shuttle (Smith & Nephew, Andover, MA) (Fig 1). Residents were asked to insert the glenoid anchor, pass one of the limbs of the sutures around the labrum using a suture shuttle technique, and tie an arthroscopic knot of their choosing. Arthroscopic RCR was performed using a screw-in 5.0-mm titanium anchor. Residents were asked to insert the anchor, pass both limbs in a horizontal mattress fashion through the rotator cuff using the ELITE PASS suture shuttle (Smith & Nephew), and tie an arthroscopic knot (Fig 2).

Fig 1. (A) Drilling of pilot hole for glenoid anchor into anterior aspect of a right shoulder. (B) Labral suture after passage.

Before the performance of each procedure, residents were e-mailed the lists of steps required to perform the RCR and LR, as well as an instructional video; these were also available on the day the procedure was performed. Prior to the procedure, anterosuperior and anteroinferior cannulas were inserted in the LR model, and anterior and lateral subacromial cannulas were inserted in the RCR model, by staff surgeons. All models had a prefabricated rotator cuff tear, while residents repaired the glenoid labrum in situ; one model was used for each resident.

Participants were evaluated using task-specific checklists (Tables 1 and 2), the ASSET GRS12 (Table 3), and a final overall 5-point GRS. Five staff surgeons (T.D., P.H., D.O-H., J.T., J.C.)
created the task-specific checklists, achieving consensus using a modified Delphi procedure conducted by way of multiple electronic surveys.12 The final overall 5-point GRS corresponded to the Dreyfus model of skill acquisition (novice, advanced beginner, competent, proficient, expert).13,14 Examiners were instructed to disregard the year of training and to assign each participant a rating of "competent" if able to perform at the level of a practicing orthopaedic surgeon. Descriptors other than "competent" were not provided for the GRS.

Five staff surgeons (T.D., P.H., D.O-H., J.T., J.C.), each with experience rating resident performance and fellowship training in arthroscopic shoulder surgery, acted as examiners, with a single examiner at each station. The length of each procedure was not timed. Participants were determined to be competent if they achieved a final overall GRS of competent or above, if they scored a minimum of 3 on each of the 8 ASSET domains as per the study by Koehler et al.,12 or if they achieved an ASSET score of 24 or greater.4 The examiner provided assistance throughout the case as directed by the resident, but no verbal instruction from the examiner was allowed.

Each procedure was recorded, with videotaping of hand movements as well as arthroscopic video recordings of the intra-articular procedure. These videos were scored by a single orthopaedic sports medicine fellow (M.P.), who was not a participant in this study and who was blinded to the year of training of each resident. The number of sports rotations (3-month duration) each resident had previously undertaken was recorded; residents were also asked to estimate their prior exposure to arthroscopic RCR and LR (number of cases). Approval for this study was obtained from the institutional research ethics board before commencement. No funding was used for this study.

Statistical Analysis

The 8 subdomains of the ASSET were summed to create a total ASSET score, with a maximum score of 38. For each procedure, the overall reliability (Cronbach α) was calculated by use of the total ASSET score for each participant.
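The three methods of determining a minimal level of competency described above amount to simple decision rules. The following is a minimal sketch in Python, for illustration only (the function name and field names are our own, not part of the study's materials):

```python
# Hedged sketch of the study's three competence criteria applied to one
# participant's scores. Names here are illustrative, not from the study.

def is_competent(final_grs: int, asset_domains: list[int]) -> dict:
    """Apply the three competence criteria described in the Methods.

    final_grs: overall 5-point Dreyfus rating (3 = "competent").
    asset_domains: the 8 ASSET domain scores (Autonomy is scored 1-3,
    the other 7 domains 1-5, so the total ASSET maximum is 38).
    """
    assert len(asset_domains) == 8
    total_asset = sum(asset_domains)
    return {
        "by_final_grs": final_grs >= 3,                  # competent or above
        "by_domain_minimum": all(d >= 3 for d in asset_domains),
        "by_total_asset": total_asset >= 24,             # threshold from ref. 4
    }

# Example: a borderline participant who meets two of the three criteria.
print(is_competent(3, [3, 3, 4, 3, 3, 2, 4, 3]))
```

Note that the three rules can disagree, which is why Tables 6 and 7 report each method separately.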
Fig 2. (A) Insertion of rotator cuff anchor into humeral head of a right shoulder. (B) Passage of rotator cuff sutures.

The correlation between the ASSET, checklist, and final rating scores for each of the procedures was calculated with the Pearson correlation coefficient. One-way analysis of variance was used to assess for performance differences between junior residents, senior residents, fellows, and staff surgeons (reporting degrees of freedom and the F statistic). Inter-rater reliability (using examiner ratings and the blinded assessor ratings) was calculated with the intraclass correlation coefficient [ICC(1,1)], given that each subject was assessed by a different rater and reliability was calculated from a single measurement. Finally, the Pearson correlation coefficient was used to assess the relation between performance on LR and performance on RCR for participants who performed both. SPSS software (version 22; IBM) was used to complete all analyses.
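For reference, the two reliability statistics named above can be computed directly from a participants-by-items score matrix. The sketch below (with made-up numbers, not the study's data; function names are our own) shows Cronbach's α over the 8 ASSET subdomains and a one-way random-effects, single-measure ICC(1,1):

```python
# Hedged sketch of the reliability statistics named in the Methods.
# All data below are fabricated for illustration.
from statistics import mean, pvariance

def cronbach_alpha(scores):
    """scores: one row per participant, each a list of k item (subdomain) scores."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def icc_1_1(ratings):
    """One-way random, single-measure ICC. ratings: per subject, a list of k ratings."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(x for row in ratings for x in row)
    ms_between = k * sum((mean(row) - grand) ** 2 for row in ratings) / (n - 1)
    ms_within = sum((x - mean(row)) ** 2 for row in ratings for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Illustrative data: 4 participants x 8 ASSET subdomain scores (made up).
demo_scores = [
    [1, 2, 1, 2, 1, 2, 1, 1],
    [3, 3, 2, 3, 3, 3, 2, 2],
    [4, 4, 4, 3, 4, 4, 4, 3],
    [5, 5, 5, 5, 4, 5, 5, 3],
]
print(round(cronbach_alpha(demo_scores), 2))

# Two raters' total ASSET scores for 4 subjects (made up).
print(round(icc_1_1([[11, 12], [21, 22], [30, 30], [37, 36]]), 2))
```

ICC(1,1) is the appropriate form here because, as noted above, each subject was assessed by different raters and reliability was taken from a single measurement rather than an average of raters.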
Results

Arthroscopic LR

A total of 51 of 60 potential participants performed LR (Table 4). Nine residents who did not participate were involved in clinical duties on the days of the study. The internal consistency or reliability by use of the total ASSET score was very high (0.98). The inter-rater reliability between the examiner rating and the blinded assessor was 0.98 (95% confidence interval [CI], 0.96-0.99) for the total ASSET score, 0.83 (95% CI, 0.67-0.92) for the total checklist score, and 0.97 (95% CI, 0.96-0.99) for the final GRS. One-way analysis of variance showed a significant difference between participants based on year of training for the total ASSET score (F(3,47) = 41.5, P < .001), for the total checklist score (F(3,21) = 19, P < .001), and for the overall GRS (F(3,47) = 48.6, P < .001) (Fig 3). Post hoc tests for the total ASSET score showed a significant difference in the total ASSET score between junior residents and senior residents, between senior residents and fellows, and between fellows and staff (all P < .001). The correlation between the total ASSET score and the previous number of sports rotations, as well as exposure to arthroscopic LR, is detailed in Table 5, whereas the number of participants deemed competent by each of the 3 methods is shown in Table 6.

Table 1. Task-Specific Checklist for Arthroscopic Insertion of Rotator Cuff Suture Anchor, Passage of Sutures Through Rotator Cuff Tear Using Suture Passer, and Tying of Arthroscopic Knot (each task scored yes or no)
- Uses appropriate portal, i.e., above footprint
- Places trocar on footprint
- Uses insertion angle of 45°
- Performs insertion of trocar to appropriate depth
- Is able to insert anchor into pilot hole
- Inserts anchor to correct depth
- Tests anchor security
- Ensures sutures are running
- Performs suture management through separate portals
- Loads suture correctly
- Is able to pass suture
- Achieves adequate tissue bite
- Retrieves sutures through alternative portal
- Achieves appropriate separation between the 2 sutures, i.e., 1 cm
- Is able to tie arthroscopic knot
- Uses assistant appropriately

Table 2. Task-Specific Checklist for Arthroscopic Insertion of Glenoid Anchor, Passage of Sutures Around Glenoid Labrum, and Tying of Arthroscopic Knot (each task scored yes or no)
- Uses drill guide
- Uses low anterior portal
- Uses angle 45° to articular surface
- Minimizes inferior angulation
- Performs correct drill insertion to depth stop
- Maintains drill guide while placing anchor
- Achieves anchor insertion to correct depth
- Tests anchor security
- Ensures sutures are running
- Protects articular surfaces
- Separates sutures through portals
- Uses instrument in correct direction (right vs left)
- Uses low anterior portal
- Takes tissue inferior to anchor
- Obtains good portion of capsulolabral tissue
- Able to perform suture shuttling
- Is able to tie arthroscopic knot
- Uses assistant appropriately
Table 3. ASSET Global Rating Scale12

Safety: 1 (novice), significant damage to articular cartilage or soft tissue; 3 (competent), insignificant damage to articular cartilage or soft tissue; 5 (expert), no damage to articular cartilage or soft tissue.
Field of view: 1 (novice), narrow field of view, inadequate arthroscope or light source positioning; 3 (competent), moderate field of view, adequate arthroscope and light source positioning; 5 (expert), expansive field of view, optimal arthroscope and light source positioning.
Camera dexterity: 1 (novice), awkward or graceless movements, fails to keep camera centered and correctly oriented; 3 (competent), appropriate use of camera, occasionally needs to reposition; 5 (expert), graceful and dexterous throughout procedure with camera always centered and correctly oriented.
Instrument dexterity: 1 (novice), overly tentative or awkward with instruments, unable to consistently direct instruments to targets; 3 (competent), careful, controlled use of instruments, occasionally misses targets; 5 (expert), confident and accurate use of all instruments.
Bimanual dexterity: 1 (novice), unable to use both hands or no coordination between hands; 3 (competent), uses both hands but occasionally fails to coordinate movement of camera and instruments; 5 (expert), uses both hands to coordinate camera and instrument positioning for optimal performance.
Flow of procedure: 1 (novice), frequently stops operating or persists without progress, multiple unsuccessful attempts prior to completing tasks; 3 (competent), steady progression of operative procedure with few unsuccessful attempts prior to completing tasks; 5 (expert), obviously planned course of procedure, fluid transition from one task to next with no unsuccessful attempts.
Quality of procedure: 1 (novice), inadequate or incomplete final product; 3 (competent), adequate final product with only minor flaws that do not require correction; 5 (expert), optimal final product with no flaws.
Autonomy: 1, unable to complete procedure even with intervention; 2, able to complete procedure but required intervention; 3, able to complete procedure without intervention.

NOTE. The ASSET was originally published by Koehler et al.12 in the American Journal of Sports Medicine in 2013. Reprinted with permission. ASSET, Arthroscopic Surgery Skill Evaluation Tool.
Table 4. Number of Participants in Each Year of Training for Each Arthroscopic Procedure Performed on Dry Models

Year of Training   Arthroscopic Labral Repair   Arthroscopic Rotator Cuff Repair
PGY 1              10                           7
PGY 2              6                            7
PGY 3              7                            8
PGY 4              8                            6
PGY 5              8                            7
Fellow             7                            6
Staff              5                            5
Total              51                           46

PGY, postgraduate year.
Arthroscopic RCR

A total of 46 participants (of 60) performed RCR (Table 4). The internal consistency or reliability (Cronbach α) using the total ASSET score was very high (0.98). The inter-rater reliability between the examiner rating and the blinded assessor was 0.95 (95% CI, 0.92-0.98) for the total ASSET score, 0.76 (95% CI, 0.91-0.97) for the total checklist score, and 0.95 (95% CI, 0.91-0.97) for the final GRS. One-way analysis of variance showed a significant difference between participants based on year of training for the total ASSET score (F(3,38) = 53.7, P < .001), for the total checklist score (F(3,15.8) = 15.6, P < .001), and for the final GRS (F(3,38) = 46.6, P < .001) (Fig 4). Post hoc tests for the total ASSET score showed a significant difference between junior residents and senior residents (P < .001) and between senior residents and fellows (P < .001). No difference was seen between fellows and staff. The correlation between the total ASSET score and the previous number of sports rotations, as well as exposure to arthroscopic RCR, is detailed in Table 5, whereas the number of participants deemed competent by each of the 3 methods is shown in Table 7. A high correlation (0.86) was seen between the total ASSET scores on the arthroscopic RCR and LR for the 32 residents who performed both procedures.

Fig 3. Box plots for arthroscopic labral repair. In the box plots, boxes represent the interquartile ranges, the central line represents the median, and the whiskers represent the minimum and maximum. Circles represent suspected outliers outside the interquartile range; asterisks represent extreme outliers. (A) Box plot for total checklist score. (B) Box plot for final global rating. (C) Box plot for total Arthroscopic Surgical Skill Evaluation Tool (ASSET) score.

Table 5. Correlation Between Mean ASSET Score for Each Procedure and Experience Level of Participants

Arthroscopic LR, mean ASSET score correlation: total checklist score, 0.84; final global rating, 0.96; year of training, 0.82; no. of sports rotations, 0.74; arthroscopic procedures assisted, 0.52; arthroscopic procedures performed, 0.48.
Arthroscopic RCR, mean ASSET score correlation: total checklist score, 0.71; final global rating, 0.98; year of training, 0.84; no. of sports rotations, 0.65; arthroscopic procedures assisted, 0.67; arthroscopic procedures performed, 0.58.

ASSET, Arthroscopic Surgical Skill Evaluation Tool; LR, labral repair; RCR, rotator cuff repair.
Discussion

The results of this study show that there is evidence of validity for the use of dry models to assess performance of arthroscopic RCR and LR. Evidence of validity included the high internal consistency, good inter-rater reliability, and ability to differentiate between novices and experts, as well as correlation with experience and previous exposure to arthroscopic procedures.

The purpose of surgical simulation is to provide residents and fellows with an opportunity to improve their technical skills and demonstrate a minimal level of competency at these skills while reducing risk to patients.15 Cadavers,16 virtual simulators,17-19 and dry models3,20 have all been used for this purpose, and although high-fidelity models such as cadavers clearly have some advantages over lower-fidelity models, the high cost of running cadaveric laboratories makes them impractical for regular training and assessment, at least at our institution. For this reason, dry models provide a feasible and cost-efficient alternative for arthroscopic simulation and training.

In 2013 Koehler et al.12 published the ASSET, a GRS used to assess arthroscopic skill. The ASSET was designed to be generalizable to multiple arthroscopic procedures. Evidence of its validity has been shown when used to assess the performance of diagnostic knee and shoulder arthroscopy in the operating room,21 and
the ASSET has been shown to be useful in the assessment of anterior cruciate ligament reconstruction using dry models.4 There are multiple GRSs in the literature that exist for the purpose of evaluating arthroscopic procedures, with a recent study failing to identify the superiority of one over another.22 We continue to use the ASSET GRS because of its link to the Dreyfus model of skill acquisition: surgeons seem able to understand terms such as "novice," "advanced beginner," "competent," "proficient," and "expert." In this study a final overall rating corresponding to the Dreyfus model of skill acquisition was used in addition to the ASSET, with a descriptor for the grading of competent (performing at the level of an orthopaedic surgeon) provided as guidance. It is important to note that quantifying assessment of performance is difficult and that all assessment is, by definition, subjective.23,24 The subjective nature of a GRS is one of the commonly accepted advantages of a GRS over task-specific checklists: although a resident may perform every step on a checklist, the overall performance may be deemed suboptimal by an expert rater.25

Whereas previous studies have evaluated dry models to assess the performance of diagnostic shoulder arthroscopy,26-30 this study concentrates on the assessment of performance of complex technical procedures such as RCR and LR. In 2015 Angelo et al.31 studied the performance of an arthroscopic Bankart repair in a dry model, comparing 12 arthroscopic shoulder surgeons with 7 postgraduate year 4 and 5 orthopaedic residents: The expert group made fewer errors and performed a 3-anchor repair more quickly. It is interesting to note that the study by Angelo et al. did not use GRSs but scored participants using a checklist, as well as 27 different Bankart procedure metric errors; participants were allowed no more than 4 total errors, as well as 1 sentinel error.

Table 6. Number of Participants Performing Arthroscopic Labral Repair by Year of Training

Year of Training   No.   Competent or Above by Final Overall GRS   Competent by ASSET Score With Minimum of 3 in Each Domain   Competent by ASSET Score ≥24 of 38
PGY 1              10    2 of 10 (20%)                             1 of 10 (10%)                                               1 of 10 (10%)
PGY 2              6     1 of 6 (17%)                              1 of 6 (17%)                                                1 of 6 (17%)
PGY 3              7     4 of 7 (57%)                              4 of 7 (57%)                                                4 of 7 (57%)
PGY 4              8     4 of 8 (50%)                              4 of 8 (50%)                                                4 of 8 (50%)
PGY 5              8     8 of 8 (100%)                             7 of 8 (88%)                                                7 of 8 (88%)
Fellow             7     7 of 7 (100%)                             7 of 7 (100%)                                               7 of 7 (100%)
Faculty            5     5 of 5 (100%)                             5 of 5 (100%)                                               5 of 5 (100%)
Total              51    31 of 51 (61%)                            29 of 51 (57%)                                              29 of 51 (57%)

NOTE. Three different methods of determining a minimal level of competency (i.e., graded as competent or above) are shown. ASSET, Arthroscopy Surgery Skill Evaluation Tool; GRS, global rating scale; PGY, postgraduate year of training.

Fig 4. Box plot for arthroscopic rotator cuff repair. In the box plots, boxes represent the interquartile ranges, the central line represents the median, and the whiskers represent the minimum and maximum. Circles represent suspected outliers outside the interquartile range. (A) Box plot for total checklist score. (B) Box plot for final global rating. (C) Box plot for total Arthroscopic Surgical Skill Evaluation Tool (ASSET) score.
Whether designating resident performance as competent based on the number of errors committed is superior to judgment of performance using a combination of checklists and GRSs is unknown and requires further study. The list of potential errors associated with LR and described by Angelo et al.31 is extensive and certainly supports the use of simulation to train and assess residents and fellows in these techniques. In our study, many residents were observed leaving glenoid anchors proud, failing to use drill guides, or tying arthroscopic knots inadequately and were therefore deemed to be performing less than competently. We agree with the designation of these errors as sentinel errors.

Table 7. Number of Participants Performing Arthroscopic Rotator Cuff Repair by Year of Training

Year of Training   No.   Competent or Above by Overall GRS   Competent by ASSET Score With Minimum of 3 in Each Domain   Competent by ASSET Score ≥24 of 38
PGY 1              7     3 of 7 (43%)                        2 of 7 (29%)                                                2 of 7 (29%)
PGY 2              7     4 of 7 (57%)                        2 of 7 (29%)                                                2 of 7 (29%)
PGY 3              8     6 of 8 (75%)                        5 of 8 (63%)                                                5 of 8 (63%)
PGY 4              6     5 of 6 (83%)                        5 of 6 (83%)                                                5 of 6 (83%)
PGY 5              7     7 of 7 (100%)                       7 of 7 (100%)                                               7 of 7 (100%)
Fellow             6     6 of 6 (100%)                       6 of 6 (100%)                                               6 of 6 (100%)
Faculty            5     5 of 5 (100%)                       5 of 5 (100%)                                               5 of 5 (100%)
Total              46    36 of 46 (78%)                      32 of 46 (70%)                                              32 of 46 (70%)

NOTE. Three different methods of determining a minimal level of competency are shown. ASSET, Arthroscopy Surgery Skill Evaluation Tool; GRS, global rating scale; PGY, postgraduate year of training.

A constant issue with the use of simulation to both train and assess performance is whether the use of these models leads to or correlates with an improvement in performance in the operating room.32 In 2008 Howells et al.33 randomized junior orthopaedic trainees either to no training or to training on a bench-top knee arthroscopy simulator: In the operating room, the group with training on the simulator performed significantly better. In a subsequent study, Howells et al.34 showed that although orthopaedic surgeons trained to perform lower limb surgery improved over time when performing Bankart repair on a model, these surgeons did not retain these skills 6 months later. In 2014, in another randomized trial, Cannon et al.17 reported that training on a virtual reality simulator improved performance of a diagnostic knee arthroscopy in the operating room. Although further research is required in this area to show that simulation improves operating room performance, it seems logical that practicing the steps of complex procedures is helpful prior to performance on live patients in the operating room.

In this study a higher proportion of residents were able to perform RCR at a competent level than were able to perform LR at a competent level. This may be a function of the increased complexity of LR, especially when residents have had limited exposure to the insertion of glenoid anchors or to techniques of suture shuttling around the labrum.
However, the complexity of these arthroscopic procedures increases in the operating room, where the management of fluid and soft tissues is as important as the technical components. It may also be that residents are exposed to greater numbers of RCRs than LRs. No difference in performance of an arthroscopic RCR between fellows and staff was shown; this may be because of the limited numbers in each group and a resultant lack of sufficient power to show an actual difference. This finding may also be another example of the limitations of dry models compared with actual operative assessment.

Limitations

One limitation of this study was the use of dry models, without a correlation with performance on cadaveric models or in the operating room. Another limitation was that residents were asked to estimate their experience with both assisting and performing LR and RCR. As a result, these data may be affected by recall bias, possibly explaining the lower-than-expected correlations between operative exposure and performance on the dry models. It may have been that residents interested in sports surgery as a career outperformed residents interested in other careers, but this information was not collected. No procedures were timed in this study; each resident was allowed as much time as required to complete each task. In addition, only a single fellow reviewed each video of performance. Finally, this study did not use an error model to determine competence because we believe that designating a fixed number of errors allowed, as well as the type of error allowed, is arbitrary.
Conclusions

The results of this study show evidence of construct validity when using dry models to assess performance of arthroscopic RCR and LR by residents.
References
1. Dwyer T, Wadey V, Archibald D, et al. Cognitive and psychomotor entrustable professional activities: Can simulators help assess competency in trainees? Clin Orthop Relat Res 2016;474:926-934.
2. Martin KD, Patterson D, Phisitkul P, Cameron KL, Femino J, Amendola A. Ankle arthroscopy simulation improves basic skills, anatomic recognition, and proficiency during diagnostic examination of residents in training. Foot Ankle Int 2015;36:827-835.
3. Pollard TC, Khan T, Price AJ, Gill HS, Glyn-Jones S, Rees JL. Simulated hip arthroscopy skills: Learning curves with the lateral and supine patient positions: A randomized trial. J Bone Joint Surg Am 2012;94:e68.
4. Dwyer T, Slade Shantz J, Chahal J, et al. Simulation of anterior cruciate ligament reconstruction in a dry model. Am J Sports Med 2015;43:2997-3004.
5. Wasserstein D, Dwyer T, Veillette C, et al. Predictors of dislocation and revision after shoulder stabilization in Ontario, Canada, from 2003 to 2008. Am J Sports Med 2013;41:2034-2040.
6. Colvin AC, Egorova N, Harrison AK, Moskowitz A, Flatow EL. National trends in rotator cuff repair. J Bone Joint Surg Am 2012;94:227-233.
7. Guttmann D, Graham RD, MacLennan MJ, Lubowitz JH. Arthroscopic rotator cuff repair: The learning curve. Arthroscopy 2005;21:394-400.
8. Felder JJ, Elliott MP, Mair SD. Complications associated with arthroscopic labral repair implants: A case series. Orthopedics 2015;38:439-443.
9. Horst PK, Choo K, Bharucha N, Vail TP. Graduates of orthopaedic residency training are increasingly subspecialized: A review of the American Board of Orthopaedic Surgery Part II Database. J Bone Joint Surg Am 2015;97:869-875.
10. LeBlanc J, Hutchison C, Hu Y, Donnon T. A comparison of orthopaedic resident performance on surgical fixation of an ulnar fracture using virtual reality and synthetic models. J Bone Joint Surg Am 2013;95:e60.S1-S5.
11. Schaverien MV. Development of expertise in surgical training. J Surg Educ 2010;67:37-43.
12. Koehler RJ, Amsdell S, Arendt EA, et al. The Arthroscopic Surgical Skill Evaluation Tool (ASSET). Am J Sports Med 2013;41:1229-1237.
13. Carraccio CL, Benson BJ, Nixon LJ, Derstine PL. From the educational bench to the clinical bedside: Translating the Dreyfus developmental model to the learning of clinical skills. Acad Med 2008;83:761-767.
14. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood) 2002;21:103-111.
15. Atesok K, Mabrey JD, Jazrawi LM, Egol KA. Surgical simulation in orthopaedic skills training. J Am Acad Orthop Surg 2012;20:410-422.
16. Elliott MJ, Caprise PA, Henning AE, Kurtz CA, Sekiya JK. Diagnostic knee arthroscopy: A pilot study to evaluate surgical skills. Arthroscopy 2012;28:218-224.
17. Cannon WD, Eckhoff DG, Garrett WE Jr, Hunter RE, Sweeney HJ. Report of a group developing a virtual reality simulator for arthroscopic surgery of the knee joint. Clin Orthop Relat Res 2006;442:21-29.
18. Gomoll AH, Pappas G, Forsythe B, Warner JJ. Individual skill progression on a virtual reality simulator for shoulder arthroscopy: A 3-year follow-up study. Am J Sports Med 2008;36:1139-1142.
19. Rebolledo BJ, Hammann-Scala J, Leali A, Ranawat AS. Arthroscopy skills development with a surgical simulator: A comparative study in orthopaedic surgery residents. Am J Sports Med 2015;43:1526-1529.
20. Jackson WF, Khan T, Alvand A, et al. Learning and retaining simulated arthroscopic meniscal repair skills. J Bone Joint Surg Am 2012;94:e132.
21. Koehler RJ, Goldblatt JP, Maloney MD, Voloshin I, Nicandri GT. Assessing diagnostic arthroscopy performance in the operating room using the Arthroscopic Surgery Skill Evaluation Tool (ASSET). Arthroscopy 2015;31:2314-2319.e2.
22. Middleton RM, Baldwin MJ, Akhtar K, Alvand A, Rees JL. Which global rating scale? A comparison of the ASSET, BAKSSS, and IGARS for the assessment of simulated arthroscopic skills. J Bone Joint Surg Am 2016;98:75-81.
23. Brooks MA. Medical education and the tyranny of competency. Perspect Biol Med 2009;52:90-102.
24. Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: Pitfalls in the pursuit of competency. Acad Med 2010;85:780-786.
25. Hodges B, Regehr G, McNaughton N, Tiberius R, Hanson M. OSCE checklists do not capture increasing levels of expertise. Acad Med 1999;74:1129-1134.
26. Bayona S, Akhtar K, Gupte C, Emery RJ, Dodds AL, Bello F. Assessing performance in shoulder arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS). J Bone Joint Surg Am 2014;96:e112.
27. Rahm S, Germann M, Hingsammer A, Wieser K, Gerber C. Validation of a virtual reality-based simulator for shoulder arthroscopy. Knee Surg Sports Traumatol Arthrosc 2016;24:1730-1737.
28. Martin KD, Belmont PJ, Schoenfeld AJ, Todd M, Cameron KL, Owens BD. Arthroscopic basic task performance in shoulder simulator model correlates with similar task performance in cadavers. J Bone Joint Surg Am 2011;93:e1271-e1275.
29. Martin KD, Cameron K, Belmont PJ, Schoenfeld A, Owens BD. Shoulder arthroscopy simulator performance correlates with resident and shoulder arthroscopy experience. J Bone Joint Surg Am 2012;94:e160.
30. Henn RF III, Shah N, Warner JJ, Gomoll AH. Shoulder arthroscopy simulator training improves shoulder arthroscopy performance in a cadaveric model. Arthroscopy 2013;29:982-985.
31. Angelo RL, Pedowitz RA, Ryu RK, Gallagher AG. The Bankart performance metrics combined with a shoulder model simulator create a precise and accurate training tool for measuring surgeon skill. Arthroscopy 2015;31:1639-1654.
32. Frank RM, Erickson B, Frank JM, et al. Utility of modern arthroscopic simulator training models. Arthroscopy 2014;30:121-133.
33. Howells NR, Gill HS, Carr AJ, Price AJ, Rees JL. Transferring simulated arthroscopic skills to the operating theatre: A randomised blinded study. J Bone Joint Surg Br 2008;90:494-499.
34. Howells NR, Auplish S, Hand GC, Gill HS, Carr AJ, Rees JL. Retention of arthroscopic shoulder skills learned with use of a simulator: Demonstration of a learning curve and loss of performance level after a time delay. J Bone Joint Surg Am 2009;91:1207-1213.