ORIGINAL REPORTS
A Comparison of Teaching Modalities and Fidelity of Simulation Levels in Teaching Resuscitation Scenarios

Andrew J. Adams, MD,* Emily A. Wasson, BSc, MPH,† John R. Admire, MD,* Pedro Pablo Gomez, MD,* Raman A. Babayeuski, MD, MS,* Edward Y. Sako, MD, PhD,‡ and Ross E. Willis, PhD*

*Department of Surgery, University of Texas Health Science Center at San Antonio, San Antonio, Texas; †School of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, Texas; and ‡Department of Cardiothoracic Surgery, University of Texas Health Science Center at San Antonio, San Antonio, Texas
INTRODUCTION: The purpose of our study was to examine the ability of novices to learn selected aspects of Advanced Cardiac Life Support (ACLS) in training conditions that did not incorporate simulation compared with those that contained low- and high-fidelity simulation activities. We sought to determine at what level additional educational opportunities and simulation fidelity become superfluous with respect to learning outcomes.

METHODS: A total of 39 medical students and physician assistant students were randomly assigned to 4 training conditions: control (lecture only), video-based didactic instruction, and low- and high-fidelity simulation activities. Participants were assessed using a baseline written pretest of ACLS knowledge. Following this, all participants received a lecture outlining ACLS science and algorithm interpretation. Participants were then trained in specific aspects of ACLS according to their assigned instructional condition. After training, each participant was assessed via a Megacode performance examination and a written posttest.

RESULTS: All groups performed significantly better on the written posttest compared with the pretest (p < 0.001); however, no group outperformed any other. On the Megacode performance test, the video-based, low-fidelity, and high-fidelity groups performed significantly better than the control group (p = 0.028, p < 0.001, and p = 0.019, respectively). Equivalence testing revealed that the high-fidelity simulation condition was statistically equivalent to the video-based and low-fidelity simulation conditions.

CONCLUSION: Video-based and simulation-based training is associated with better learning outcomes than traditional didactic lectures alone. Video-based, low-fidelity, and high-fidelity simulation training yield equivalent outcomes, which may indicate that high-fidelity simulation is superfluous for the novice trainee. (J Surg ]:]]]-]]]. © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.)

KEY WORDS: simulation-based education, simulator fidelity, ACLS

COMPETENCIES: Patient Care, Medical Knowledge

Correspondence: Inquiries to Ross E. Willis, PhD, Director of Surgical Education, University of Texas Health Science Center at San Antonio, Mail Code 7737, 7703 Floyd Curl Dr., San Antonio, TX 78229-3900; fax: +(210) 567-2347; e-mail: [email protected]
INTRODUCTION

As the advent of managed care has led to shorter hospitalizations and more outpatient care, and the restructuring of residency requirements has led to duty hour restrictions, residents have less time and opportunity to achieve the amount of practice needed to master skills. In response, there has been a push toward the increasing use of simulation-based education. Various models are used to recreate the patient care environment and mimic the techniques that must be mastered by every physician. With simulation, these techniques can be learned in a safe, controlled environment.1

Simulators have varying degrees of realism, or fidelity. Fidelity refers to the degree to which a model or simulation reproduces the state and behavior of a real-world object, feature, or condition. There are 2 types of fidelity to consider: (1) engineering fidelity, which seeks to create the sense that a scenario looks real, and (2) psychological fidelity, which seeks to create the sense that a scenario feels real.2 Improved technology has led to the development of an increasing number of high-fidelity simulators that more accurately mimic the real environment. These high-fidelity
Journal of Surgical Education © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved. 1931-7204/$30.00 http://dx.doi.org/10.1016/j.jsurg.2015.04.011
simulators tend to cost much more than their low-fidelity counterparts, not only in the price of acquiring them, but also in the personnel and resources needed to use them. As reported by researchers in the field,3,4 a commonly held assumption (or myth) is that higher levels of simulation fidelity lead to increased training effectiveness. This assumption is supported by situated learning theory,5 which posits that learning is enhanced when the activity is situated in the context in which it naturally occurs. According to this view, educational activities should closely resemble the natural context in which the newly acquired knowledge and skills are to be used (i.e., high-fidelity simulation).

However, research examining the benefit of high-fidelity compared with low-fidelity simulation has yielded mixed findings. For various surgical tasks (e.g., laparoscopy, endourology, and anastomosis), many researchers have found no additional benefit of high-fidelity simulation models,6-11 whereas others have found an increased benefit of high-fidelity models.12 Similarly, for the American Heart Association's (AHA) Advanced Cardiac Life Support (ACLS) skills, no additional benefit of high-fidelity simulation was seen by some,13,14 whereas others have found additional benefits.15-17

One possible explanation for the discrepancy in these research findings is that the relationship between fidelity and learning depends on learner experience.18 According to cognitive load theory, working memory is limited with respect to the amount of information it can hold and the number of operations it can perform on that information.19,20 When a learner is engaged in learning a novel task, his or her working memory is occupied with processing task-relevant information.
Because of the increased cognitive load, unused attentional resources are scarce, which can lead to incomplete, ineffective, or inefficient learning. In effect, the cognitive processing system is overwhelmed with information. This is particularly relevant to novice learners, who are expected to have fewer spare attentional resources to devote to learning any specific task.20 The excess of “irrelevant” stimuli associated with high-fidelity simulation may even be detrimental to learning. In line with this theory, Alessi18 suggests that low-fidelity simulation may be better suited for novice learners, whereas high-fidelity simulation is more appropriate for advanced learners. This belief has received some empirical support.21
In this study, we examined the ability of novices to learn a subset of ACLS concepts in training conditions that did not incorporate simulation compared with those that contained low- and high-fidelity simulation activities. Thus, the study was designed to test whether (a) simulation vs no simulation and (b) low- vs high-fidelity simulation better prepared students for resuscitation activations. In line with cognitive load theory, we posited that at some point additional fidelity becomes superfluous, and may even be detrimental to learning, especially for the novice. However, some amount of hands-on practice situated within a low-fidelity simulation activity may be necessary to acquire ACLS skills. We hypothesized that low-fidelity simulation would result in performance superior to that of learners assigned to training conditions that did not involve simulation, and superior (or at least equivalent) to that of a condition involving high-fidelity simulation activities.
METHODS

Approval was obtained from the University of Texas Health Science Center at San Antonio Institutional Review Board. A total of 39 first- and second-year medical students (n = 24) and first-year physician assistant students (n = 15) were recruited to participate in this study. We chose to include medical and physician assistant students as participants with basic medical knowledge. Additionally, both of these groups would be required to become ACLS certified in the future; as such, the content was applicable to these students, which would increase interest and participation. Participants were excluded if they were currently ACLS certified or had been certified in the past 6 months. All participants were certified in Basic Life Support. Demographic information is presented in Table 1. Sample size was determined in part by availability and interest in the groups eligible for the study.

As shown in the study design diagram (Fig. 1), all participants completed a written baseline knowledge pretest comprising 20 multiple-choice questions covering ACLS science and algorithms at the beginning of the study. The test was based on the AHA ACLS test and covered rhythm identification, timing of medical treatment of various arrhythmias, medications and dosages, and scenario questions that assessed knowledge of ACLS algorithms. All participants then received instruction in ACLS science and
TABLE 1. Demographic Information

                                              Control     Video       Low Fidelity   High Fidelity
No. of participants                           10          10          10             9
Sex (M/F)                                     7/3         5/5         5/5            7/2
Mean age (y)                                  29.2        25.6        26.8           25.1
Medical student/physician assistant           8/2         8/2         7/3            8/1
No. with previous ACLS experience             4           3           1              2
Median rating “knowledge before training”     2 (Poor)    2 (Poor)    2 (Poor)       2 (Poor)
FIGURE 1. Study design diagram.
application of algorithms via a 1-hour didactic lecture in a large-group format. Participants were then randomly assigned to 4 training conditions while attempting to stratify medical and physician assistant students across conditions.

The control group (n = 10) did not receive additional teaching or hands-on training after the lecture. This training condition was included as a way to gauge the educational benefit of standard, presentation-format lectures in the absence of hands-on practice.

The video group (n = 10) attended the lecture and then watched selected AHA ACLS videos of professionals running a Megacode and received feedback about the effectiveness of the performance from an expert. Videos were presented to students in small groups of 4 to 5 students in 1-hour sessions, which were hosted by an ACLS-certified third-year general surgery resident, who led discussions and answered questions. Videos were posted online for additional viewing outside of the training session. This training condition was included to assess the additional benefit of video-based instruction in the absence of hands-on practice. Participants in this condition were not subjected to the theoretical deleterious effects of unnecessary input, thus reducing cognitive load.

The low-fidelity simulation group (n = 10) attended the lecture, watched the videos, and worked through simulated scenarios using a DARTsim electrocardiogram (ECG) software simulator (DARTsim Inc., San Diego, CA), which models the information normally displayed on a cardiac monitor during a cardiac resuscitation effort and mimics the sounds of common alarms that indicate changes in patient status.
DARTsim is classified as a computer-based simulator according to previously published taxonomies.22,23 The DARTsim software accurately represents an actual ECG monitor and could be classified as a high-fidelity simulator; however, we chose to classify this simulation activity as a low-fidelity one because the training environment lacked situational context3 and psychological fidelity24 owing to the
absence of a physical patient and other components typically present in real-world code management environments. Code scenarios included stable tachycardia, ventricular fibrillation, ventricular tachycardia, symptomatic bradycardia, and pulseless electrical activity. These training sessions comprised 4 to 5 students and lasted 1 hour. Each participant completed a code scenario as the “team” leader while the other participants observed. A third-year general surgery resident operated the ECG simulation software, led discussions, and answered questions. As the code team leader, the participant was asked to give directions to virtual team members (e.g., start CPR, administer 6 mg of adenosine, and charge the biphasic defibrillator to 200 J). The surgery resident then adjusted settings in the DARTsim software to coincide with the participant's actions. As the team leader, the participant was not required to perform any physical actions such as chest compressions, administering medications, or delivering defibrillator shocks. Observing participants were likewise not required to perform any physical actions and were able to devote complete attention to the events of the scenario. As such, participants in this condition were subjected to a moderate level of cognitive load owing to the amount of information presented on the ECG display. During the scenario, the surgery resident gave real-time instructive feedback; at the end of each scenario, the resident gave summative feedback about the participant's performance.

The high-fidelity group (n = 9) attended the lecture, watched the videos, and trained with a mannequin (Megacode Kelly, Laerdal, NY), the DARTsim ECG simulator software, a defibrillator, a crash cart containing simulated ACLS medications, airway adjuncts, and 3 team members.
Team member roles were generally filled by study participants; however, when scheduling difficulties prevented 4 students from attending the training sessions, surgery residents (all ACLS certified) were incorporated to create complete teams of 4 members. A maximum of 2 residents were employed to
create a complete team in any training session. Team members were assigned to the roles of airway, chest compressions, and medication administration. The training sessions were 1 hour in duration, and scenarios included stable tachycardia, ventricular fibrillation, ventricular tachycardia, symptomatic bradycardia, and pulseless electrical activity. Each participant completed a scenario in the role of team leader and was asked to give directions to the team members in the room. A surgery resident then adjusted settings in the DARTsim software to coincide with the team leader's actions. As the team leader, the participant was not required to perform any physical actions. Participants who were assigned other team roles were required to perform physical actions and, as such, could not devote full attention to the events of the scenario, thus increasing cognitive load. The surgery residents gave real-time instructive feedback during the scenario and summative feedback at the end of the scenario. Although the participant's role as team leader was essentially the same as in the low-fidelity condition (i.e., attending to the ECG monitor and requesting actions based on interpretation of the situation), we classified this simulation activity as a high-fidelity one because it contained a life-sized patient, equipment, and personnel that increased situational context and psychological fidelity.

After the training sessions, each participant completed a Megacode performance posttest within 7 to 14 days (mean = 11 d, standard deviation = 3.72) in the role of team leader. The performance posttest was conducted in a different room from that used in the high-fidelity training in an attempt to reduce equipment and situational familiarity. The Megacode posttest used a mannequin (SimMan, Laerdal, NY), an actual ECG monitor, a crash cart, a defibrillator, and 3 team members comprising surgical residents or other research assistants when 3 residents were not available.
Study confederate team members were instructed not to give real-time feedback or hints during the code scenario unless the participant egregiously failed to make progress. A total of 3 scenarios were tested during the Megacode: stable tachycardia, ventricular fibrillation, and sinus tachycardia. The Megacode sessions were video recorded for later grading using the AHA ACLS critical performance checklist. As the focus of this study was examining the effect of training on medical knowledge in a code management situation (i.e., identification and treatment of arrhythmias and correct application of the ACLS algorithms), team communication skills items were removed from the original
ACLS critical performance checklist. A point was awarded for every correct action taken by the participant, whereas 1 point was deducted for every incorrect action, action performed out of the correct sequence, or prompt given. The maximum possible score was 55 points. A total of 2 graders (both Basic Life Support certified, 1 ACLS certified) were blinded to the groups to which participants were assigned. The graders used the same scoring instrument to facilitate inter-rater reliability, which was statistically significant (intraclass correlation coefficient = 0.490, p = 0.042) and showed a moderately strong degree of agreement.25 Although this scoring system was not specifically validated, the checklist was derived from that used in ACLS.

After the Megacode performance posttest, a written knowledge posttest was administered to the participants. The questions in the pretest and posttest were isomorphic, allowing the same subject matter to be tested in both situations while eliminating recall familiarity bias.

Statistical Analysis

Repeated measures analysis of variance (ANOVA) and 1-way ANOVA tests were used to evaluate learning outcomes within and among teaching conditions, as applicable. We were interested in determining differences among teaching conditions, but we were also interested in determining whether various teaching conditions were at least statistically equivalent to others. To determine this, we used the two one-sided test procedures for assessing bioequivalence, with a limit of equivalence equal to 20% of the high-fidelity mean Megacode performance score.26,27 Statistical analyses were conducted using SPSS version 21 (IBM Corp., Armonk, NY). By convention, the significance level was set at p < 0.05.
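The two one-sided test (TOST) procedure used here can be sketched as follows. This is a minimal illustration, not the authors' SPSS analysis; the group scores are hypothetical placeholders, and only the 20% equivalence margin follows the paper.

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin):
    """Two one-sided tests (TOST) for equivalence of two independent means.

    Equivalence is concluded when the difference in means is shown to lie
    within (-margin, +margin); the larger of the two one-sided p-values
    must fall below alpha.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Standard error of the difference in means
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    df = len(a) + len(b) - 2
    diff = a.mean() - b.mean()
    # Test 1: H0: diff <= -margin  vs  H1: diff > -margin
    p_lower = stats.t.sf((diff + margin) / se, df)
    # Test 2: H0: diff >= +margin  vs  H1: diff < +margin
    p_upper = stats.t.cdf((diff - margin) / se, df)
    return max(p_lower, p_upper)

# Illustrative Megacode scores (hypothetical, NOT the study data)
high_fidelity = [16, 18, 15, 17, 14, 19, 16, 15, 18]
low_fidelity = [17, 19, 16, 18, 15, 20, 17, 16, 19, 18]

# Limit of equivalence: 20% of the high-fidelity group mean, as in the paper
margin = 0.20 * np.mean(high_fidelity)
p = tost_equivalence(high_fidelity, low_fidelity, margin)
print(f"TOST p = {p:.3f}; equivalent at alpha 0.05: {p < 0.05}")
```

Note that, unlike a conventional t test, a small p-value here supports equivalence rather than a difference between groups.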
RESULTS

Mean knowledge pretest and posttest scores and mean Megacode performance scores are presented in Table 2. Repeated measures ANOVA was used to analyze learning gain on the knowledge test for each training condition from pretest to posttest. The analysis indicated a significant main effect for test score (p < 0.001, observed power = 1.0; Fig. 2). Thus, significant knowledge acquisition occurred for all participants in all training conditions. The interaction between test score and training condition was not
TABLE 2. Knowledge Pretest and Posttest Scores, and Megacode Performance Scores

                                                      Control      Video        Low Fidelity   High Fidelity
Mean (standard deviation) pretest score (max = 20)    12.8 (2.9)   12.7 (2.6)   11.9 (2.4)     12 (2.3)
Mean (standard deviation) posttest score (max = 20)   16.3 (2.3)   17.3 (1.9)   18.3 (0.8)     17.8 (1.6)
Mean (standard deviation) Megacode score (max = 55)   8.2 (9.9)    15.7 (7.5)   21.3 (5.8)     16.4 (4.7)
FIGURE 2. Written pretest to posttest performance.
significant (p = 0.89, observed power = 0.542), and pairwise comparisons revealed that no group improved significantly more than any other.

A 1-way ANOVA was used to analyze Megacode performance scores among training conditions. The analysis showed a significant main effect of training condition (p = 0.003; observed power = 0.911). Pairwise comparisons showed that the control group performed significantly worse than the training conditions that included videos, low-fidelity simulation, and high-fidelity simulation (p = 0.028, p < 0.001, and p = 0.019, respectively; Fig. 3). No other pairwise comparisons reached significance; however, the low-fidelity condition had the highest mean performance score. Equivalence testing showed that the high-fidelity simulation-based training condition was statistically equivalent to the video-based and low-fidelity simulation-based teaching conditions (both p < 0.05).
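The between-condition comparison reported above (omnibus 1-way ANOVA followed by pairwise tests) can be reproduced in outline as follows. The scores are illustrative placeholders, not the study data, and the uncorrected pairwise t tests stand in for SPSS's pairwise comparison machinery.

```python
from itertools import combinations
from scipy import stats

# Illustrative Megacode performance scores per condition (hypothetical)
groups = {
    "control":       [2, 5, 0, 12, 8, 3, 15, 6, 20, 11],
    "video":         [14, 18, 9, 22, 16, 12, 19, 15, 21, 11],
    "low_fidelity":  [20, 24, 16, 27, 22, 18, 25, 21, 23, 17],
    "high_fidelity": [15, 19, 12, 21, 17, 14, 20, 16, 13],
}

# Omnibus 1-way ANOVA across the 4 training conditions
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Follow-up pairwise comparisons (uncorrected, for illustration only)
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(a, b)
    print(f"{name_a} vs {name_b}: p = {p:.4f}")
```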
FIGURE 3. Megacode performance.

DISCUSSION

ACLS training is typically provided to medical professionals via the AHA Provider course, which combines reading, lecture or video instruction (or both), and practical instruction in ACLS science and algorithm performance in a course of variable duration (usually 1-2 d). The course emphasizes medical treatment, diagnostic steps, and team dynamics and effective communication strategies in an environment that attempts to mimic the high-stress atmosphere of a real clinical resuscitation effort, in which wrong decisions can lead to dire consequences.

Prior research evaluating the effect of simulation in ACLS training has demonstrated that incorporating simulation activities improves learning compared with didactic lecture alone.28-30 Results of research comparing low- and high-fidelity simulation have been equivocal.13,14,17 In our study, we attempted to examine the additional benefit of video-based instruction as well as low- and high-fidelity simulation activities over traditional didactic lecture. Our results showed that the traditional lecture-based format was the least effective method for teaching our selected ACLS knowledge components. The addition of video-based instruction, alone or in combination with simulation-based activities, enhanced the learning experience.

If participants in the high-fidelity simulation training condition had significantly outperformed participants in other conditions, one might conclude that the finding is not surprising, given that these learners were trained in a condition very similar to that of the Megacode performance posttest. However, this was not observed. ANOVA and pairwise comparisons failed to reveal any differences among the video-based, low-fidelity, and high-fidelity instructional conditions. Equivalency testing showed that high-fidelity simulation-based training was statistically equivalent to video-based instruction and low-fidelity simulation instruction in terms of Megacode performance. Participants in the low-fidelity simulation
condition achieved higher, although not significantly higher, Megacode posttest performance scores. Thus, adding increasing levels of fidelity to the simulation experience does not appear to add significant benefit. This is important when considering the cost and resources required to provide such levels of fidelity, as well as access, ease, and frequency of use by individual learners.

One possible reason that the simulation activities did not further enhance learning may be that our novice learners had not yet developed the ability to filter unnecessary input and concentrate on important aspects of the situation, which is consistent with the hypotheses of cognitive load theory.20 In actual crisis situations, cognitive load can become problematic for inexperienced trainees; thus, it is important for trainees to become accustomed to receiving a deluge of information and learn to attend to critical data. This can be accomplished via simulation, following the model of the “pretrained novice.”27

As postulated by scaffolding theory,31 novice learners benefit greatly from instructional environments that begin at a very basic level with sufficient instructor guidance. As the learner masters the content, the instructor guidance (i.e., scaffold) can be removed. Following this sequence, learners progressively gain competence and confidence. In terms of simulation-based training, Alessi18 recommended that novice learners be introduced to concepts via low-fidelity simulation activities and that, as competence increases, more advanced levels of simulation be introduced to further mimic the target environment. Brydges et al.21 empirically validated this claim by demonstrating that learners trained using a progressive sequence, starting with low-fidelity simulation activities and progressing to more advanced simulation, outperformed learners who engaged in either low- or high-fidelity simulation training alone.

There were a few limitations to our study. First, our sample size was small, with only 10 participants in each of 3 groups and 9 in the fourth. This sample size was limited by the desire to have subjects with similar levels of medical knowledge who were motivated to participate in the entire study. The sample size limitation may have been further compounded by the fact that 4 individuals in the control condition (i.e., lecture only) had previous ACLS experience, compared with only 3, 1, and 2 individuals in the video-based, low-fidelity, and high-fidelity conditions, respectively. Although the median “knowledge before training” rating was similar for all study conditions (i.e., “poor”), it is possible that the individuals who had previous ACLS experience could have achieved higher scores because of prior exposure. When analyzing knowledge gain on the written knowledge test, we controlled for the amount of incoming knowledge by computing learning gain scores (i.e., posttest score minus pretest score). However, we chose not to have study participants complete a Megacode performance pretest in an attempt to reduce familiarity bias with the simulators. Thus, participants did not serve as their own controls for the performance test, and prior knowledge may have played a role in achieving higher scores.

The training exposure obtained by the different groups could not be fully standardized. For instance, some participants may have been more diligent than others and may have reviewed the training videos more often than other participants,
whereas others may have studied the lecture more than peers in their cohort or may even have used other sources to supplement the study materials. Hence, an observed effect may have been due to some participants engaging in more self-directed learning than others. However, it is unlikely that this increased diligence would have been uniform across any particular group. Further studies should examine whether knowledge obtained in simulation environments transfers to clinical settings and whether various levels of fidelity are more appropriate for differing levels of learners.
CONCLUSION

The development of increasingly life-like simulators has progressed at a rapid rate, fueled by a great appetite for technology. Our results agree with the growing volume of evidence challenging the notion that higher fidelity simulators are better teaching aids than lower fidelity simulators. More advanced trainees may benefit from higher fidelity training; however, it seems that, at least for the novice learner, less may be more. These findings are particularly important as financial constraints limit the ability of medical education programs to acquire the expensive high-fidelity simulators that are in vogue. Medical educators can be assured that their educational objectives for novice learners are not being adversely affected by opting for low-fidelity simulators that are available at a fraction of the cost.
REFERENCES

1. Teteris E, Fraser K, Wright B, McLaughlin K. Does training learners on simulators benefit real patients? Adv Health Sci Educ Theory Pract. 2012;17(1):137-144.
2. Miller RB. Psychological considerations in the design of training equipment. Report no. WADC-TR-54-563, AD 71202. Wright-Patterson Air Force Base, OH: Wright Air Development Center; 1953.
3. Norman G, Dore K, Grierson L. The minimal relationship between simulation fidelity and transfer of learning. Med Educ. 2012;46(7):636-647.
4. Beaubien JM, Baker DP. The use of simulation for training teamwork skills in health care: how low can you go? Qual Saf Health Care. 2004;13(suppl 1):i51-i56.
5. Lave J. Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life. Cambridge, UK: Cambridge University Press; 1988.
6. Munz Y, Kumar BD, Moorthy K, Bann S, Darzi A. Laparoscopic virtual reality and box trainers: is one superior to the other? Surg Endosc. 2004;18(3):485-494.
7. Diesen DL, Erhunmwunsee L, Bennett KM, et al. Effectiveness of laparoscopic computer simulator versus usage of box trainer for endoscopic surgery training of novices. J Surg Educ. 2011;68(4):282-289.
8. Grober ED, Hamstra SJ, Wanzel KR, et al. The educational impact of bench model fidelity on the acquisition of technical skill: the use of clinically relevant outcome measures. Ann Surg. 2004;240(2):374-381.
9. Matsumoto ED, Hamstra SJ, Radomski SB, Cusimano MD. The effect of bench model fidelity on endourological skills: a randomized controlled study. J Urol. 2002;167(3):1243-1247.
10. de Giovanni D, Roberts T, Norman G. Relative effectiveness of high- versus low-fidelity simulation in learning heart sounds. Med Educ. 2009;43(7):661-668.
11. Anastakis DJ, Regehr G, Reznick RK, et al. Assessment of technical skills transfer from the bench training model to the human model. Am J Surg. 1999;177(2):167-170.
12. Sidhu RS, Park J, Brydges R, MacRae HM, Dubrowski A. Laboratory-based vascular anastomosis training: a randomized controlled trial evaluating the effects of bench model fidelity and level of training on skill acquisition. J Vasc Surg. 2007;45(2):343-349.
13. Lo BM, Devine AS, Evans DP, et al. Comparison of traditional versus high-fidelity simulation in the retention of ACLS knowledge. Resuscitation. 2011;81(11):1440-1443.
14. Hoadley TA. Learning advanced cardiac life support: a comparison study of the effects of low- and high-fidelity simulation. Nurs Educ Perspect. 2009;30(2):91-95.
15. Rodgers DL, Securro S, Pauley RD. The effect of high-fidelity simulation on educational outcomes in an Advanced Cardiovascular Life Support course. Simul Healthc. 2009;4(4):200-206.
16. Ko PY, Scott JM, Mihai A, Grant WD. Comparison of a modified longitudinal simulation-based Advanced Cardiovascular Life Support to a traditional Advanced Cardiovascular Life Support curriculum in third-year medical students. Teach Learn Med. 2011;23(4):324-330.
17. Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest. 2008;133(1):56-61.
18. Alessi SM. Fidelity in the design of instructional simulations. J Comput Based Instr. 1988;15(2):40-47.
19. Van Gerven PW, Paas F, Van Merrienboer JJ, Hendriks HG, Schmidt HG. The efficiency of multimedia learning into old age. Br J Educ Psychol. 2003;73(4):489-505.
20. Sweller J. Cognitive load during problem solving: effects on learning. Cogn Sci. 1988;12(3):257-285.
21. Brydges R, Carnahan H, Rose D, Rose L, Dubrowski A. Coordinating progressive levels of simulation fidelity to maximize educational benefit. Acad Med. 2010;85(5):806-812.
22. Kneebone RL. Simulation in surgical training: educational issues and practical implications. Med Educ. 2003;37(3):267-277.
23. Seropian MA, Brown K, Gavilanes JS, Driggers B. Simulation: not just a manikin. J Nurs Educ. 2004;43(4):164-169.
24. Rehmann A, Mitman R, Reynolds M. A handbook of flight simulation fidelity requirements for human factors research. Technical report no. DOT/FAA/CT-TN95/46. Wright-Patterson AFB, OH: Crew Systems Ergonomics Information Analysis Center; 1995.
25. Green SB, Salkind NJ, Akey TM. Using SPSS for Windows: Analyzing and Understanding Data. 2nd ed. Upper Saddle River, NJ: Prentice-Hall; 2000.
26. Rogers JL, Howard KI, Vessey JT. Using significance tests to evaluate equivalence between two experimental groups. Psychol Bull. 1993;113(3):553-565.
27. Van Sickle KR, Ritter EM, Smith CD. The pretrained novice: using simulation-based training to improve learning in the operating room. Surg Innov. 2006;13(3):198-204.
28. Langdorf MI, Strom SL, Yang L, et al. High-fidelity simulation enhances ACLS training. Teach Learn Med. 2014;26(3):266-273.
29. Wayne DB, Butter J, Siddall VJ, et al. Simulation-based training of internal medicine residents in Advanced Cardiac Life Support protocols: a randomized trial. Teach Learn Med. 2005;17(3):210-216.
30. Wayne DB, Butter JB, Siddall VJ, et al. Mastery learning of Advanced Cardiac Life Support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med. 2006;21(3):251-256.
31. Wood D, Bruner JS, Ross G. The role of tutoring in problem solving. J Child Psychol Psychiatry. 1976;17(2):89-100.