The American Journal of Surgery (2016) 211, 377-383
Association for Surgical Education
Adaptive simulation training using cumulative sum: a randomized prospective trial

Yinin Hu, M.D., Kendall D. Brooks, B.A., Helen Kim, B.A., Adela Mahmutovic, B.A., Joanna Choi, B.A., Ivy A. Le, B.A., Bartholomew J. Kane, M.D., Ph.D., Eugene D. McGahren, M.D., Sara K. Rasmussen, M.D., Ph.D.*

Department of Surgery, University of Virginia School of Medicine, P.O. Box 800709, Charlottesville, VA 22908-0709, USA
KEYWORDS: Surgical education; Surgical simulation; Adaptive learning; Cumulative sum; Medical student education; Resident education
Abstract
BACKGROUND: Cumulative sum (Cusum) is a novel tool that can facilitate adaptive, individualized training curricula. The purpose of this study was to use Cusum to streamline simulation-based training.
METHODS: Preclinical medical students were randomized to Cusum or control arms and practiced suturing, intubation, and central venous catheterization in simulation. Control participants practiced between 8 and 9 hours each. Cusum participants practiced until Cusum proficient in all tasks. Group comparisons of blinded post-test evaluations were performed using Wilcoxon rank sum.
RESULTS: Forty-eight participants completed the study. Average post-test composite score was 92.1% for Cusum and 93.5% for control (P = .71). Cusum participants practiced 19% fewer hours than control group participants (7.12 vs 8.75 hours, P < .001). Cusum detected proficiency relapses during practice among 7 (29%) participants for suturing and 10 (40%) for intubation.
CONCLUSIONS: In this comparison between adaptive and volume-based curricula in surgical training, Cusum promoted more efficient time utilization while maintaining excellent results.
© 2016 Elsevier Inc. All rights reserved.
Funding support is provided by National Institutes of Health (NIH) T32 CA163177 (to Y.H.) and the Academy of Distinguished Educators, University of Virginia School of Medicine (to S.K.R.). The authors declare no conflicts of interest.

* Corresponding author. Tel.: +1-434-982-2796; fax: +1-434-243-0036. E-mail address: [email protected]

Manuscript received March 23, 2015; revised manuscript August 4, 2015

Surgical resident operative preparedness in an era of work-hour restrictions and stringent outcomes scrutiny is a subject of mounting concern.1 As a result, 2 widespread movements have begun to gain momentum: the growing role of simulation and the exploration of competency-based
curricula.2,3 The residency review committee has increasingly emphasized simulation's role as a major supplement to technical training.4 Meanwhile, the surgical skills curriculum is a promising first step toward proficiency-based training both in the operating room and at the benchtop.5 Theoretically, combining simulation with proficiency-based training should efficiently prepare trainees to take full advantage of operative experiences. Cumulative sum (Cusum) is a quality-control tool suitable for real-time proficiency monitoring during training and has been used to depict learning curves for simulation techniques including airway endoscopy and robotic surgery.6,7 At the bedside, Cusum has been applied, usually in a retrospective manner, to an even broader range of invasive skills.8-11
Cusum's overall principle is that proficiency for a given procedure can be determined by tracking the temporal trend of successful and failed attempts at that procedure. By comparing Cusum records against prespecified acceptable thresholds, performance by an individual, or an institution, can be categorized as proficient or subproficient. Advantages of Cusum within a proficiency-driven curriculum include its objectivity, ease of use, and graphic depiction of learning progress. However, Cusum has not been rigorously validated in surgical education, nor has it been used to prospectively guide training in a proficiency-driven manner. To date, there have been no randomized trials comparing Cusum-based training to the traditional "time-spent" model of surgical education.
The purpose of this study was 2-fold. First, we aimed to describe a prospective, Cusum-guided simulation training curriculum to demonstrate the variability in learning rates among inexperienced trainees. Second, we sought to report the first randomized, prospective trial comparing an adaptive, proficiency-based simulation protocol to a traditional, "time-spent" system. We hypothesized that although the traditional system can effectively confer technical proficiency, the Cusum-guided protocol would attain equivalent results with less overall practice time.
Methods

Simulation modules

Three simulated invasive skills were incorporated within an elective medical student training curriculum: orotracheal intubation, basic surgical suturing, and subclavian central venous catheterization (CVC). Details regarding each skill's methods and scoring criteria have been reported in a prior publication.12 In brief, the intubation task involved single-operator bag-valve-mask ventilation, direct laryngoscopy, and orotracheal intubation. The suturing task tested 2-handed, instrument, and 1-handed tie techniques using a series of figure-of-eight stitches. The CVC task involved right subclavian central venous access without ultrasound guidance. Assessment checklists were created by expert consensus based on task-specific objective structured assessments of technical skills as previously reported.12-15 Minimum proficiency scores for each task were 32/36 for suturing, 16/18 for intubation, and 32/36 for CVC. Time limits for the 3 tasks were 5, 2.5, and 10 minutes, respectively. For each practice attempt at any task, both the checklist and time criteria had to be satisfied for the attempt to be considered successful.
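As an illustration of the pass/fail rule just described (both the checklist threshold and the time limit must be met), a minimal sketch follows; the thresholds and time limits come from the text, while the data structure and function names are hypothetical.

```python
# Per-attempt success rule as described above: an attempt succeeds only if the
# checklist threshold AND the time limit are both satisfied. Thresholds and
# limits are taken from the text; the dictionary and function are illustrative.
TASKS = {
    "suturing":   {"min_score": 32, "time_limit_min": 5.0},
    "intubation": {"min_score": 16, "time_limit_min": 2.5},
    "cvc":        {"min_score": 32, "time_limit_min": 10.0},
}

def attempt_successful(task, checklist_score, minutes_elapsed):
    spec = TASKS[task]
    return checklist_score >= spec["min_score"] and minutes_elapsed <= spec["time_limit_min"]

# Example: a suturing attempt scoring 33/36 in 4.2 minutes counts as a success.
assert attempt_successful("suturing", 33, 4.2)
```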
Cusum analysis

Cusum methodology was based on the work by Bolsin and Colson.16 In brief, Cusum is founded on the binary outcome of each attempt at a given task: success or failure.
Each success is given a numeric reward, represented by a downward deflection on a Cusum graph, whereas each failure is associated with a penalty, an upward deflection. By accumulating rewards and penalties through repetitive practice attempts, a classic learning curve is generated with a learning phase (incline) and a proficient phase (flat or decline). Because a Cusum curve is updated after every task attempt, relapses in proficiency can be detected, which trigger retraining. A relapse is defined as a period of subproficient performance following an earlier period of proficiency. On a Cusum curve, this manifests as a curve which trends upward after a downtrending or flat segment.
Cusum uses several parameters defined a priori. The acceptable and unacceptable failure rates (p1 and p0, respectively) describe the maximum acceptable level of human error (p1 - p0).8 The false-positive rate (α) defines the allowable risk of falsely labeling a proficient practitioner as subpar. These parameters determine the numeric reward (negative) or penalty (positive) associated with each successful or failed attempt, respectively. They also determine the unacceptable decision interval. A Cusum curve that trends upward and crosses a decision interval over a series of attempts is indicative of subproficient performance. For the present study, the following Cusum parameters were set a priori by consensus: p1 = .1, p0 = .2, α = .3. These values yielded a reward of -.15, a penalty of .85, and a decision interval of 1.05. A participant was considered Cusum proficient in a given task as long as no decision interval had been crossed over the 5 most recent practice attempts.
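To make the Cusum bookkeeping concrete, the sketch below applies the reported parameters (reward -.15, penalty .85, decision interval 1.05) and one plausible reading of the 5-attempt proficiency window. The constants function follows the Bolsin and Colson style of sequential testing; the class structure, the assumed β of .3, and the minimum-attempt rule are illustrative assumptions, not the authors' Web-based tool.

```python
# Cusum sketch using the parameters reported in the text (reward -.15,
# penalty .85, decision interval 1.05). The class layout, beta value, and
# minimum-attempt rule are assumptions made for illustration only.
from math import log

def cusum_constants(fail_rate_acceptable=0.1, fail_rate_unacceptable=0.2,
                    alpha=0.3, beta=0.3):
    """Sequential-test constants in the style of Bolsin and Colson.

    With the paper's inputs these evaluate to roughly -0.15 (success step),
    +0.85 (failure step), and 1.05 (decision interval). beta is an assumption;
    the paper reports only the false-positive rate alpha.
    """
    pa, pu = fail_rate_acceptable, fail_rate_unacceptable
    denom = log(pu / pa) + log((1 - pa) / (1 - pu))
    s = log((1 - pa) / (1 - pu)) / denom   # subtracted after each success
    h = log((1 - beta) / alpha) / denom    # unacceptable decision interval
    return -s, 1 - s, h

class CusumTracker:
    """Tracks one trainee's Cusum curve for a single task."""

    def __init__(self, reward=-0.15, penalty=0.85, decision_interval=1.05, window=5):
        self.reward, self.penalty = reward, penalty
        self.h, self.window = decision_interval, window
        self.curve = [0.0]                 # cumulative sum after each attempt

    def record(self, success: bool) -> None:
        step = self.reward if success else self.penalty
        self.curve.append(self.curve[-1] + step)

    def proficient(self) -> bool:
        """Proficient if the curve has not risen by >= h above its running
        minimum within the most recent `window` attempts. Requiring at least
        `window` attempts before proficiency is an assumption of this sketch."""
        if len(self.curve) <= self.window:
            return False
        recent = self.curve[-(self.window + 1):]
        running_min = recent[0]
        for value in recent[1:]:
            if value - running_min >= self.h:
                return False
            running_min = min(running_min, value)
        return True

# Example: repeated failures late in the window cross the decision interval.
tracker = CusumTracker()
for outcome in [True, True, False, False, False, False]:
    tracker.record(outcome)
print(tracker.proficient())   # False -> retraining would be triggered
```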
Participants and training

Volunteer participants were recruited from the 1st- and 2nd-year medical school classes, before enrollment in clinical clerkships. Participants first completed a 2-hour orientation session which addressed proper techniques for each task. Instructional videos demonstrating each simulation task were also provided for independent review. Following orientation, participants were randomized to 1 of the 2 experimental arms: Cusum and control. All participants then underwent weekly 1-on-1 practice sessions proctored by trained assistant instructors on a rotating schedule to minimize teaching biases.12
The control arm's practice protocol was designed to emulate a traditional surgical training paradigm based primarily on a requisite amount of time spent in practice. Participants were asked to complete a total of 7 weekly 1-on-1 practice sessions combining to roughly 8.75 hours of practice. During each session, participants could choose to practice any task, in any order. After each task attempt, the task-specific checklist was used by the assistant instructors to provide objective feedback regarding task components which were missed or improperly performed. Additional positive or negative feedback beyond the task-specific checklists was neither encouraged nor discouraged.
The Cusum arm's protocol dictated each participant's practice based on proficiency. Participants started each weekly practice session with suturing and repetitively performed this task until deemed proficient by Cusum criteria, at which point training shifted to intubation. Once proficient at intubation, the participant then progressed to CVC. Each practice session lasted up to 1.25 hours. Regardless of which task a participant ended a session with, each subsequent practice session always began with suturing to verify that proficiency was maintained over time. Cusum participants received the same checklist-based feedback protocol as that used by the control arm. Once a participant attained proficiency in all 3 tasks, training was terminated regardless of the number of practice sessions completed (Fig. 1). Cusum participants were allowed a maximum of 8 practice sessions, totaling up to 10 hours of 1-on-1 proctored practice.
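The adaptive session logic just described (each session restarts at suturing, training advances to the next task only on Cusum proficiency, and practice stops once all 3 tasks are proficient or the session cap is reached) can be summarized as follows. This is a sketch of the protocol as written, reusing the hypothetical CusumTracker from the earlier example; run_attempt is a placeholder for a live, proctored attempt, not part of the study's tooling.

```python
# Sketch of the Cusum arm's practice algorithm (Fig. 1). Session and time caps
# come from the text; CusumTracker and run_attempt are illustrative stand-ins.
TASK_ORDER = ["suturing", "intubation", "cvc"]
MAX_SESSIONS = 8          # Cusum participants were allowed at most 8 sessions
SESSION_HOURS = 1.25      # each practice session lasted up to 1.25 hours

def run_cusum_curriculum(run_attempt):
    """run_attempt(task) -> (success: bool, hours_spent: float) for one attempt."""
    trackers = {task: CusumTracker() for task in TASK_ORDER}
    for session in range(1, MAX_SESSIONS + 1):
        hours_used, task_index = 0.0, 0       # every session restarts at suturing
        while hours_used < SESSION_HOURS and task_index < len(TASK_ORDER):
            task = TASK_ORDER[task_index]
            success, hours = run_attempt(task)
            trackers[task].record(success)
            hours_used += hours
            if trackers[task].proficient():
                task_index += 1               # advance; a relapse keeps the trainee on this task
        if all(t.proficient() for t in trackers.values()):
            return session                    # training terminates once all tasks are proficient
    return None                               # still subproficient after the session cap
```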
Evaluation

After completing the designated practice protocol, participants each underwent a post-test assessment conducted by an experienced surgical faculty member blinded to group assignment. Post-tests consisted of 3 attempts at each simulation task graded using the task-specific checklists. The top 2 scores for each task were averaged and normalized to a 100-point scale. A composite score was generated by averaging the normalized scores for all 3 tasks. Because an existing survey had identified that only 1% to 3% of participants had prior exposure to the simulation tasks, faculty-administered pretests were not performed. In lieu of this, the top 2 scores from each participant's initial 3 practice attempts of each task were averaged as a measure of baseline proficiency acquired through orientation and instructional videos alone.
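The scoring arithmetic described above (best 2 of 3 post-test attempts per task, normalized to 100, then averaged across tasks) is simple enough to state in code; this is an illustrative sketch with hypothetical names, not the authors' analysis scripts.

```python
# Post-test scoring as described in the text: top 2 of 3 checklist scores per
# task, normalized to a 100-point scale, then averaged into a composite score.
MAX_SCORE = {"suturing": 36, "intubation": 18, "cvc": 36}   # checklist maxima

def task_score(attempt_scores, task):
    """Average of the top 2 of 3 attempts, normalized to 100."""
    top_two = sorted(attempt_scores, reverse=True)[:2]
    return 100.0 * sum(top_two) / (2 * MAX_SCORE[task])

def composite_score(post_test):
    """post_test maps each task to its 3 attempt scores, e.g. {"suturing": [34, 33, 30], ...}."""
    return sum(task_score(scores, task) for task, scores in post_test.items()) / len(post_test)
```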
Analysis

The 2 primary outcomes of this study were post-test composite performance and overall practice time. Secondary outcomes included task-specific post-test subscores, number of practice attempts for each task, and magnitude of improvement comparing post-tests to baseline proficiency. Power calculations were based on a pilot group of 20 participants (10 per arm). Within this group, average overall practice times were 8.75 hours for control and 6.96 hours for Cusum, with a standard deviation of 1.15 hours. A study size of 46 participants would power analyses to detect a 10% difference in overall practice time with 80% power at α equal to .05. Summative statistics are represented using median and interquartile range (IQR), and group comparisons were performed using Wilcoxon rank sum. All data were analyzed using SAS statistical software (version 9.3; SAS Institute, Inc., Cary, NC). This study was deemed exempt by the University of Virginia Institutional Review Board (IRB-SBS protocol 2013-0246-00).
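The authors analyzed their data in SAS; purely as an illustration, the same summary statistics and Wilcoxon rank sum comparison could be computed with numpy/scipy as below. The arrays shown are toy placeholders, not study data.

```python
# Illustrative median/IQR summary and Wilcoxon rank sum comparison.
# The study used SAS 9.3; this sketch substitutes numpy/scipy, and the
# arrays below are toy placeholders rather than actual study data.
import numpy as np
from scipy.stats import ranksums

def median_iqr(values):
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    return med, (q1, q3)

cusum_hours = np.array([6.2, 7.1, 7.5, 5.9, 7.7])      # placeholder values
control_hours = np.array([8.6, 8.8, 8.7, 8.5, 8.8])    # placeholder values

med, (q1, q3) = median_iqr(cusum_hours)
stat, p_value = ranksums(cusum_hours, control_hours)   # two-sided Wilcoxon rank sum
print(f"Cusum median {med:.2f} h (IQR {q1:.2f}-{q3:.2f}), P = {p_value:.3f}")
```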
Results
Figure 1 Flow diagram depicting the Cusum practice algorithm. Each practice session begins with suturing to verify skill retention.
Fifty-two participants were enrolled in this study; of these, 48 completed all practice sessions and post-testing (24 per group). Three participants withdrew from the study because of scheduling conflicts interfering with practice sessions, and 1 participant was excluded because of unintentional evaluator unblinding before post-testing. Demographics, reported specialties of interest, and baseline proficiency scores were not significantly different between groups (Table 1).
Suturing learning curves of 3 Cusum participants are shown in Fig. 2. These curves are representative of 3 patterns of learning: slow, fast, and relapsing. Within the Cusum-monitored group, relapses in proficiency were detected among 7 participants (29%) for suturing and 10 (40%) for intubation. There were no relapses detected for the CVC task. All relapsing participants regained Cusum proficiency before post-testing.
Median post-test composite score was 92.1 (IQR, 90.0 to 97.2) for the Cusum group and 93.5 (IQR, 90.3 to 94.9) for the control group (P = .710).
Table 1  Participant baseline demographics

Characteristic                               Cusum, n (%)        Control, n (%)      P value
                                             n = 24              n = 24
Male                                         17 (71)             14 (58)             .55
Baseline composite score, median (IQR)       75.2 (72.3–78.6)    75.9 (72.2–81.0)    .46
Class                                                                                1.00
  MS1                                        7 (29)              6 (25)
  MS2                                        17 (71)             18 (75)
Specialty interest                                                                   .77
  Surgery/subspecialty                       10 (42)             12 (50)
  Internal medicine                          8 (33)              7 (29)
  Other                                      6 (25)              5 (21)

IQR = interquartile range; MS = medical school class.
Median composite score improvement over baseline was +16.2 (IQR, 12.7 to 22.9) and +15.7 (IQR, 10.9 to 19.4), respectively (P = .206). Subscore analysis showed no significant differences between arms in average post-test score or average magnitude of improvement for any of the simulation tasks (Fig. 3).
Figure 3 Groupwise comparison of post-test performance (A) and magnitude of improvement over baseline (B).
Cusum group participants on average spent 19% less time practicing than control counterparts (7.12 vs 8.75 hours, P < .001). They performed significantly fewer CVC attempts (7 vs 11 attempts, P < .001) and trended toward fewer suturing attempts (25 vs 28, P = .118; Table 2). Twenty-three Cusum participants attained Cusum proficiency in all tasks (96%); 3 required more than 8.75 practice hours to achieve this goal. One participant remained subproficient in CVC after 10 hours of total practice. This participant did not advance from the suturing task until her 6th practice session, after 7.25 hours of 1-on-1 training. Sixteen of 24 Cusum participants (67%) achieved proficiency in all 3 tasks within 6 practice sessions or fewer.
Control participants' practice records were retrospectively analyzed using Cusum. Within this experimental arm, 96% (23/24), 92% (22/24), and 75% (18/24) of participants attained Cusum proficiency for suturing, intubation, and CVC, respectively; only 71% (17/24) of participants attained Cusum proficiency in all 3 skills.

Figure 2  Example suturing Cusum results. Markers denote acquisition of proficiency by Cusum criteria. Participants 1 and 2 represent fast and slow learners, respectively (A). Participant 3 learns quickly, but experiences a period of skill relapse before ultimately reacquiring proficiency (B).
Table 2  Practice volume characteristics of Cusum and control groups

Simulation task                    Cusum, median (IQR)    Control, median (IQR)    P value
Practice volume (repetitions)
  Suturing                         25 (22–27)             28 (24–34)               .118
  Intubation                       14 (9–15)              14 (11–17)               .463
  Central venous catheterization   7 (6–10)               11 (9–15)                <.001
Total practice time (h)            7.12 (5.6–7.7)         8.75 (8.5–8.8)           <.001

IQR = interquartile range.

Comments

This is the first prospective, randomized trial comparing an adaptive, proficiency-based training protocol to a
traditional "time-spent" protocol in surgical education. Our study demonstrated that an adaptive protocol founded on Cusum can effectively tailor technical training to individual learning patterns and may be substantially more time efficient than a traditional protocol. Most importantly, Cusum offered an unforeseen but vital advantage: an ability to detect proficiency relapse through real-time monitoring.
Before applying Cusum guidance to surgery residents at the bedside, a study of medical students in the simulation setting was critical to verify the safety and pragmatism of this approach. The 3 study tasks were chosen because of the presence of existing research using these models17,18 and to alleviate common deficiencies in medical student clinical exposure.19,20 Many surgery interns have had limited experience in invasive skills such as CVC insertion, yet an intense clinical schedule precludes extensive simulation training during residency.21,22 Although the ultimate goal of Cusum-based education research is to facilitate the acquisition and maintenance of proficiency in the clinical setting, an intermediary step would be to use Cusum to streamline the simulation curriculum of junior surgery residents, thereby improving preparedness for critical patient encounters.23 Because simulation training is commonly performed independently by residents, it would be useful to determine how effective a Cusum-guided protocol is in the setting of independent practice. However, a critical prerequisite would be the verification of accurate and reliable trainee self-evaluations.
Learning rate variability in surgery has been studied broadly and extensively.24 In a review of robotic surgery education, Olthof et al25 recognized differences in learning curve slope based on participants' preexisting laparoscopic experience. Hodgins et al26 demonstrated substantial variability in the rate of arthroscopic skills acquisition across 20 orthopedic residents. At the faculty level, Cusum has been used to depict the learning curves of advanced techniques such as endobronchial ultrasound.27 Our results corroborate these findings in demonstrating a wide range of learning rates. In the control arm, 29% of participants failed to attain Cusum proficiency for all skills. By comparison, all participants in the Cusum arm achieved proficiency in suturing and intubation, and only one failed to achieve proficiency in CVC. This suggests that without guidance,
participants may not optimally allocate time resources for simulation practice. Total practice time to proficiency ranged from 4.5 to 10 hours among Cusum participants. Had all Cusum participants been subjected to a fixed 8.75 hours of practice, 4 (17%) would not have attained Cusum proficiency. Conversely, 16 Cusum participants required fewer than 7 sessions to attain proficiency in all tasks. Implementing Cusum among these participants saved a combined 40.8 hours of monitored practice time. By guiding proficiency-based training in real time, Cusum embraces variable learning rates and allows trainees to efficiently allocate time resources toward those skills most in need of practice.
From a training program's perspective, the efficiency of an adaptive training paradigm may manifest through several applications. For example, an institution that is planning to incorporate monitored simulation training for medical students or residents may not have adequate resources for 1-on-1 training of all learners for a set allotment of time. Cusum can streamline this proctored curriculum by minimizing superfluous training among fast learners. Another example is case volume allocation. Residents quickly demonstrating Cusum proficiency in basic operations such as open umbilical hernia repairs and uncomplicated appendectomies may be assigned to more complex cases, allowing the more elementary cases to be allocated to slower learners or junior trainees.28 From a pragmatic standpoint, implementing Cusum is intuitive and inexpensive. After an attempt at a given task is evaluated as a success or failure, the formulas for numeric rewards and penalties are easily automated. At our institution, a Web-based tool optimized for mobile devices allows instructors to create new tasks and specify success criteria. This tool automatically tracks Cusum records and updates a participant's proficiency status after each practice attempt. Such a tool provides the flexibility and convenience necessary to incorporate Cusum into a wide variety of training tasks.
One unique advantage of prospectively implemented Cusum monitoring is the ability to detect proficiency relapses. In a landmark study, de Leval et al29 described the application of Cusum to detect proficiency relapse in an experienced surgeon performing a large series of neonatal arterial switch operations. This progressive work promoted the use of an "exponentially weighted moving average" to instill Cusum with a memory loss mechanism which facilitates feedback on recent performance. Our study adopts a reductive form of this adjustment by performing Cusum calculations within a sliding "window" of 5 practice attempts, a method previously used to monitor surgery resident learning in upper endoscopy.28 Similar to the work by de Leval et al, we found that proficiency relapses were common, affecting up to 40% of trainees for the suturing and intubation tasks. Because the CVC task was the last skill within the Cusum group's training sequence, the protocol was not optimized to detect relapses in this task. In future applications of adaptive training, it will be important to incorporate periodic assessments of skills retention after attaining Cusum proficiency in any skill.
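For readers unfamiliar with the exponentially weighted moving average concept mentioned above, a brief, generic sketch follows. It is not part of this study's methods (the study used the 5-attempt sliding window); the smoothing factor, starting value, and alarm threshold are illustrative assumptions.

```python
# Generic sketch of the "exponentially weighted moving average" idea referenced
# above (de Leval et al): recent outcomes are weighted more heavily, giving the
# monitor a memory-loss mechanism. The smoothing factor, start value, and alarm
# threshold below are arbitrary illustrations, not values from the study.
def ewma_failure_monitor(outcomes, lam=0.2, start=0.1, alarm=0.25):
    """outcomes: iterable of booleans (True = success). Yields (attempt, ewma, alarmed)."""
    z = start                              # initialized at an assumed acceptable failure rate
    for attempt, success in enumerate(outcomes, 1):
        x = 0.0 if success else 1.0        # observed failure indicator
        z = lam * x + (1 - lam) * z        # exponentially weighted update
        yield attempt, z, z > alarm        # alarm when the weighted failure rate drifts upward

# Example: a run of failures after early successes raises a relapse alarm.
for attempt, z, alarmed in ewma_failure_monitor([True] * 8 + [False] * 4):
    if alarmed:
        print(f"Possible relapse detected at attempt {attempt} (EWMA = {z:.2f})")
```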
In an era of public reporting and pay for performance, verification of competency has never been more important. It is no longer adequate to credential learners based on arbitrarily determined case volume thresholds. To broadly improve quality of care, the importance of objective performance reviews using adaptive metrics such as Cusum cannot be overemphasized. The Milestones curriculum represents a first step in proficiency-based surgical training. However, the Milestones project's periodic proficiency checks represent snapshots in time and are ill suited to detect relapses in technical ability. Nevertheless, Milestones evaluations provide a greater level of granularity than Cusum. Ultimately, a hybrid system which uses Cusum prospective monitoring to trigger Milestone evaluations when subproficient performance is detected may be the ideal balance of pragmatism and efficacy. We believe that using Cusum as a continuous monitoring tool rather than as a summative evaluation will emphasize its constructive feedback potential and avoid potential negative effects on trainee confidence.
Despite its prospective, randomized design, this study has several limitations. First, pretesting by a faculty evaluator was not performed. Before orientation, the vast majority of participants did not know the most fundamental aspects of the simulation tasks: how to hold a needle driver, the components of the CVC kit, and so forth. Pretesting in this setting would not only have been ineffectual but could also have falsely inflated score improvement data. Second, assistant instructors were trained undergraduate students rather than surgical faculty. In a prior report, we showed that evaluations by these instructors correlated closely with faculty scoring and that instructors were well regarded by most participants.12 These instructors were a pragmatic necessity to offer a combined 350 hours of 1-on-1 practice guidance. Moreover, because practice sessions were assigned on a rotating schedule, each participant had equal exposure to all instructors, preserving the validity of the randomized design. Third, although 29% of control participants did not attain Cusum proficiency on retrospective analysis, post-test evaluations across groups were not significantly different. This finding highlights the fact that this study was powered to detect a difference in practice time and may be underpowered to compare post-test performance. Finally, the advantages of a Cusum-based training protocol may not translate from the simulation setting to a clinical setting. Nevertheless, these promising early results support testing Cusum guidance at the bedside and in the operating room. At our institution, Cusum is now being prospectively applied to clinical training in colonoscopy, upper endoscopy, and thyroid ultrasound.
Conclusions

An adaptive, Cusum-guided training paradigm can produce technical proficiency in a more time-efficient manner than a traditional, "time-spent" curriculum. Prospective
Cusum monitoring can also reveal relapses in technical proficiency that may otherwise persist unrecognized. By accommodating individual learning rate variability, Cusum holds considerable appeal as an essential tool in the ongoing transition to proficiency-based surgical training.
References

1. Carlin AM, Gasevic E, Shepard AD. Effect of the 80-hour work week on resident operative experience in general surgery. Am J Surg 2007;193:326–9; discussion 329–30.
2. Stefanidis D, Sevdalis N, Paige J, et al, Association for Surgical Education Simulation Committee. Simulation in surgery: what's needed next? Ann Surg 2015;261:846–53.
3. Sonnadara RR, Mui C, McQueen S, et al. Reflections on competency-based education and training for surgical residents. J Surg Educ 2014;71:151–8.
4. Accreditation Council for Graduate Medical Education. ACGME Program Requirements for Graduate Medical Education in Surgery; 2009.
5. ACS/APDS Surgical Skills Curriculum for Residents. 2014, 2008.
6. Dalal PG, Dalal GB, Pott L, et al. Learning curves of novice anesthesiology residents performing simulated fibreoptic upper airway endoscopy. Can J Anaesth 2011;58:802–9.
7. Kang SG, Ryu BJ, Yang KS, et al. An effective repetitive training schedule to achieve skill proficiency using a novel robotic virtual reality simulator. J Surg Educ 2015;72:369–76.
8. Blackstone EH. Monitoring surgical performance. J Thorac Cardiovasc Surg 2004;128:807–10.
9. East JM, Valentine CS, Kanchev E, et al. Sentinel lymph node biopsy for breast cancer using methylene blue dye manifests a short learning curve among experienced surgeons: a prospective tabular cumulative sum (CUSUM) analysis. BMC Surg 2009;9:2.
10. Hu Y, Puri V, Crabtree TD, et al. Attaining proficiency with endobronchial ultrasound-guided transbronchial needle aspiration. J Thorac Cardiovasc Surg 2013;146:1387–1392.e1.
11. Naik VN, Devito I, Halpern SH. Cusum analysis is a useful tool to assess resident proficiency at insertion of labour epidurals. Can J Anaesth 2003;50:694–8.
12. Hu Y, Choi J, Mahmutovic A, et al. Assistant instructors facilitate simulation for medical students. J Surg Res 2015;194:334–40.
13. Ma IW, Zalunardo N, Pachev G, et al. Comparing the use of global rating scale with checklists for the assessment of central venous catheterization skills using simulation. Adv Health Sci Educ Theory Pract 2012;17:457–70.
14. Khaliq T. Reliability of results produced through objectively structured assessment of technical skills (OSATS) for endotracheal intubation (ETI). J Coll Physicians Surg Pak 2013;23:51–5.
15. Martin JA, Regehr G, Reznick R, et al. Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg 1997;84:273–8.
16. Bolsin S, Colson M. The use of the Cusum technique in the assessment of trainee competence in new procedures. Int J Qual Health Care 2000;12:433–8.
17. Low D, Healy D, Rasburn N. The use of the BERCI DCI video laryngoscope for teaching novices direct laryngoscopy and tracheal intubation. Anaesthesia 2008;63:195–201.
18. Barsuk JH, Cohen ER, Feinglass J, et al. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med 2009;169:1420–3.
19. Promes SB, Chudgar SM, Grochowski CO, et al. Gaps in procedural experience and competency in medical school graduates. Acad Emerg Med 2009;16(Suppl 2):S58–62.
20. Okuda Y, Bryson EO, DeMaria Jr S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med 2009;76:330–43.
21. Huang GC, Newman LR, Schwartzstein RM, et al. Procedural competence in internal medicine residents: validity of a central venous catheter insertion assessment instrument. Acad Med 2009;84:1127–34.
22. Dehmer JJ, Amos KD, Farrell TM, et al. Competence and confidence with basic procedural skills: the experience and opinions of fourth-year medical students at a single institution. Acad Med 2013;88:682–7.
23. Bond WF, King AE. Modeling for the decision process to implement an educational intervention: an example of a central venous catheter insertion course. J Patient Saf 2011;7:85–91.
24. Hopper AN, Jamison MH, Lewis WG. Learning curves in surgical practice. Postgrad Med J 2007;83:777–9.
25. Olthof E, Nio D, Bemelman WA. The learning curve of robot-assisted laparoscopic surgery. In: Bozovic V, ed. Medical Robotics. Vienna, Austria: I-Tech Education and Publishing; 2008. p. 526.
26. Hodgins JL, Veillette C, Biau D, et al. The knee arthroscopy learning curve: quantitative assessment of surgical skills. Arthroscopy 2014;30:613–21.
27. Kemp SV, El Batrawy SH, Harrison RN, et al. Learning curves for endobronchial ultrasound using Cusum analysis. Thorax 2010;65:534–8.
28. Hu Y, Jolissaint JS, Ramirez A, et al. Cumulative sum: a proficiency metric for basic endoscopic training. J Surg Res 2014;192:62–7.
29. de Leval MR, Francois K, Bull C, et al. Analysis of a cluster of surgical failures. Application to a series of neonatal arterial switch operations. J Thorac Cardiovasc Surg 1994;107:914–23; discussion 923–4.