Clinical Simulation in Nursing (2013) 9, e85-e93
www.elsevier.com/locate/ecsn
Featured Article
Comparison of Three Simulation-Based Teaching Methodologies for Emergency Response

Jacqueline J. Arnold, MSN, RN*, LeAnn M. Johnson, MS, RN, Sharon J. Tucker, PhD, RN, Sherry S. Chesak, MS, RN, Ross A. Dierkhising, MS

Mayo Clinic, Rochester, MN 55905, USA

KEYWORDS: confidence; critical care; emergency response; interrater reliability; performance assessment; simulation; knowledge; Emergency Response Performance Tool; computer based
Abstract

Background: The purpose of this study was to compare the effects of 3 simulation methodologies (low-fidelity, computer-based, and full-scale) on the outcomes of emergency response knowledge, confidence, satisfaction and self-confidence with learning, and performance. Additionally, interrater reliability was assessed for the Emergency Response Performance Tool (ERPT).

Method: An experimental, pretest-posttest, control-group design was used to evaluate the effects of the 3 teaching methodologies. In all, 28 participants enrolled in a Critical Care Orientation program participated in the study. Each participant was randomized to 1 of the 3 groups. Participants completed pre- and posttest written examinations and confidence questionnaires, the Student Satisfaction and Self-Confidence in Learning instrument, and baseline and posttest performance assessments.

Results: No significant differences were found among the 3 groups in emergency response knowledge, confidence, or performance. There were significant differences in participants' results on the Student Satisfaction and Self-Confidence in Learning instrument, with the full-scale simulation group rating the highest in satisfaction and self-confidence. The interrater reliability for the ERPT ranged from 0.58 to 1.0.

Conclusions: Although the statistical findings did not support the hypothesis that critical care RNs who receive full-scale simulation training will score higher in knowledge, confidence, and performance, this study advances the current knowledge base of simulation-based education and research. The ERPT can be a reliable measure for assessing performance in full-scale simulation. However, further studies with larger sample sizes are warranted.

Cite this article: Arnold, J. J., Johnson, L. M., Tucker, S. J., Chesak, S. S., & Dierkhising, R. A. (2013, March). Comparison of three simulation-based teaching methodologies for emergency response. Clinical Simulation in Nursing, 9(3), e85-e93. doi:10.1016/j.ecns.2011.09.004.

© 2013 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved.
* Corresponding author: [email protected] (J. J. Arnold).
doi:10.1016/j.ecns.2011.09.004
Introduction

Background

Key Points
• Confidence and knowledge in emergency response.
• Emergency response performance in full-scale simulation.
• Emergency Response Performance Tool.

The goal of emergency response education is to develop the learner's ability to respond appropriately during an emergency situation. Several factors influence attainment of this goal, including curriculum, learning environment, instructors, and multiple learner variables (Mancini & Kaye, 1996). Actual emergency situations are chaotic, stressful, and anxiety provoking. All of these factors can negatively influence the outcome of the patient. Registered nurses (RNs) new to critical care may lack the knowledge, confidence (Arnold et al., 2009), and technical skills, particularly defibrillation (Gordon & Buckley, 2009), for cardiopulmonary resuscitation. Additionally, RNs have very little opportunity, if any, to practice responding to an emergency situation in a realistic setting.

RNs who choose to work in critical care at the health care institution where this study was conducted are required to complete a 12-week Critical Care Orientation program, which includes didactic and clinical teaching. Emergency response is one of the courses taught in the program. Traditionally, the institution's Critical Care Orientation program combined didactic teaching with low-fidelity simulation, using case studies in a skills lab.

Didactic teaching provides the fundamental knowledge and principles that nurses need for their practice. However, the classroom environment does not resemble the real practice environment (Landers, 2000), and an understanding of nursing principles cannot ensure that they will be applied in practice (Steele, 1991). Howard (2001) found that the theoretical instruction students received in an academic setting was inadequate, and the students reported they were unprepared to perform basic clinical skills. Case studies offer more interactive education in which students can solve problems based on a clinical scenario. Skills laboratories allow the student to practice emergency response skills such as operating a defibrillator, performing chest compressions, and managing the airway. Mastering these types of skills in an isolated laboratory setting is important and is an initial step in learning. However, students in skills labs perform such skills in isolation from other tasks (Vandrey & Whitman, 2001); this limits RNs' ability to learn how to perform in the real dynamic of an emergency situation. Although scheduled time in the clinical arena offers the critical care trainee realism and hands-on experience,
many barriers can impede learning, including preceptor discontinuity, lack of direction or adequate guidance (Landers, 2000; Thomason, 2006), variability in learning experiences, anxiety, fear of harming the patient, and unit culture.

Simulation technologies offer a method whereby students can respond to realistic emergency resuscitation situations in a safe and controlled practice environment (Gordon & Buckley, 2009; Hoadley, 2009; Kardong-Edgren & Adamson, 2009). Simulation is used in many different ways, and there are different types of simulators. Fidelity is a term often used in simulation to "describe the accuracy of the system being used" (Seropian, Brown, Gavilanes, & Driggers, 2004, p. 165). Seropian et al. (2004) classify simulators as (a) low-fidelity, (b) moderate-fidelity, or (c) high-fidelity. Simulators can be used to teach psychomotor and cognitive skills or to teach knowledge (Cumin & Merry, 2007) and have been categorized in a variety of ways (Cumin & Merry, 2007; Seropian, 2003; Wong, 2004), including screen-based simulators, virtual reality-based simulators (Cumin & Merry, 2007), full-scale computer simulators (Wong, 2004), part-task trainers, computer-based simulation, and full-scale simulation (Seropian, 2003). For the purposes of this study, the categories were (a) a case study and a static simulator as low fidelity; (b) a screen-based simulator as moderate fidelity; and (c) full-scale simulation using a dynamic, full-body simulator as high fidelity.

The traditional approach to teaching emergency response in the Critical Care Orientation program in which this study was conducted was didactics with low-fidelity simulation that used a case scenario. The simulation was static and lacked the realism of clinical practice. Resusci Anne™ (a task and skill trainer) and case scenarios were used in combination with the display of arrhythmias on a defibrillator via an electronic box simulator. The students were expected to verbalize the dysrhythmias and the correct management according to advanced cardiac life support (ACLS) standards. Cognitive assessment was achieved with a written examination.

Computer-based simulation fidelity can be low, moderate, or high, depending on the program design and interactive features. Anesoft Corporation (Issaquah, WA, USA; www.anesoft.com) offers computer software, the ACLS Simulator 7 package, that incorporates modules for electrocardiogram rhythm recognition and for ACLS real-time megacode simulation. Research demonstrated that use of the ACLS Simulator software improved performance during simulated megacode management more than did review of the ACLS textbook (Schwid, Rooke, Ross, & Sivarajan, 1999).

In full-scale simulation that uses high-fidelity simulators, all the elements of a clinical situation are present in order to immerse the learners in an experience that resembles real experiences (Holzman et al., 1995; Seropian, 2003; Seropian et al., 2004). This full-scale simulation offers learners an opportunity to practice with equipment that brings "near reality" to the bedside without risk to a patient.
Learners are likely to achieve the best benefit if the simulators and the learning experience are as similar as possible to actual practice (Hotchkiss, Biddle, & Fallacaro, 2002).

Although nurses in the Critical Care Orientation program are evaluated in the cognitive and psychomotor domains during and after a low-fidelity simulation, their overall performance is not tested in a more realistic emergency response situation. The current method of instruction and assessment in the program is therefore limited: the didactic content and case scenarios are important in building a foundation of knowledge, but they do not offer the nurse new to critical care the opportunity to practice or be assessed in a more realistic environment. Full-scale simulation may increase confidence in emergency response by providing opportunities to practice in a more realistic situation. Additionally, performance assessment in video-recorded full-scale simulation offers the opportunity to objectively assess initial responses in a cardiopulmonary arrest and to assist the course director in determining the readiness of the nurse in the program (Alinier, Hunt, & Gordon, 2003; DeVita, Schaerfer, Lutz, Dongilli, & Wang, 2004).

Simulation-based education is now widespread in medical and nursing education; however, outcomes research on the effectiveness of full-scale simulation is limited, with varying degrees of methodological rigor noted (Alinier et al., 2003; DeVita et al., 2004; Holcomb et al., 2002; Issenberg, McGaghie, Petrusa, Gordon, & Scalese, 2005; Lindekaer, Jacobsen, Andersen, Laub, & Jensen, 1997; Mayo, Hackney, Mueck, Ribaudo, & Schneider, 2004; Ravert, 2004).
Purpose of the Study

The purpose of this study was to compare the effects of three simulation methodologies (low-fidelity, computer-based, and full-scale) on the outcomes of emergency response knowledge, confidence in responding to an emergency, satisfaction and self-confidence with learning, and performance in emergency response. Additionally, interrater reliability was assessed for a modified Emergency Response Performance Tool (ERPT; Arnold et al., 2009). The investigators devised the following research questions:

(a) What is the difference in knowledge among participants who participate in full-scale simulation in comparison with those who participate in low-fidelity simulation and computer-based simulation?
(b) What is the difference in confidence in emergency response among participants who participate in full-scale simulation in comparison with those who participate in low-fidelity simulation and computer-based simulation?
(c) What is the difference in participant satisfaction and self-confidence in learning among participants who participate in full-scale simulation in comparison with those who participate in low-fidelity simulation and computer-based simulation?
(d) What is the difference in performance among participants who participate in full-scale simulation in comparison with those who participate in low-fidelity simulation and computer-based simulation?
(e) What is the interrater reliability of a modified version of the ERPT?
Hypotheses

(a) RNs who receive full-scale simulation training will score higher in knowledge and self-report higher confidence in emergency response and satisfaction and self-confidence in learning than will those receiving computer-based simulation or low-fidelity simulation.
(b) RNs who receive full-scale simulation training will perform better in emergency response, as measured by the ERPT, than will those receiving computer-based simulation or low-fidelity simulation.
Method

An experimental, pretest-posttest, control-group design was used to evaluate the effects of the three teaching methodologies. The study was conducted at a large Midwestern health care institution. Following approval from the institutional review board, all 79 participants who were enrolled in a Critical Care Orientation program and met the inclusion criteria were invited to participate in the study, and written consent was obtained at the beginning of each program. The Critical Care Orientation program is offered 4 times each year; this study was conducted during 1 year. Participants finished the study after the full-scale simulation post intervention performance assessment in Week 8 or 9 of the Critical Care Orientation program. In all, 33 participants agreed to participate in the study.

Inclusion criteria were RNs with a medical-surgical background and new graduate RNs; the Critical Care Orientation program does not limit the number of years of medical-surgical nursing experience. RNs with prior ACLS training and/or prior critical care experience were excluded. Participants were asked to complete a demographic questionnaire identifying age, race, gender, years of RN and medical-surgical nursing experience, highest level of nursing education, prior experience with emergency response arrest situations, and prior electrocardiogram, ACLS, and/or simulation training.

Participants were randomized to one of three groups: (a) traditional low-fidelity simulation, (b) computer-based simulation using the Anesoft screen-based ACLS Simulator program, or (c) full-scale simulation. Table 1 depicts the study design.
Table 1  Study Design

Groups
Group 1 (n = 9): Enrolled in Critical Care Orientation; standard teaching method (low-fidelity simulation)
Group 2 (n = 9): Enrolled in Critical Care Orientation; computer-based simulation
Group 3 (n = 10): Enrolled in Critical Care Orientation; high-fidelity simulation

Before educational intervention (Week 1 or 2 of Critical Care Orientation), all groups: ERCT; preintervention knowledge test; baseline performance assessment

Intervention (Week 5), all groups: didactic (30 minutes); code cart scavenger hunt (30 minutes); defibrillator operation (30 minutes); assigned simulation methodology, low-fidelity, computer-based, or high-fidelity (45 minutes); debriefing and SSSL questionnaire

Post intervention (Week 6), all groups: ERCT; post intervention knowledge test

High-fidelity simulation performance post intervention: Week 8 or 9, before ACLS

Note. ACLS, advanced cardiac life support; ERCT, Emergency Response Confidence Tool; SSSL, Student Satisfaction and Self-Confidence in Learning.
All three groups received the standard didactic component for emergency response, hands-on practice with operating the Zoll biphasic defibrillator, and a review of the institution's standard emergency code cart. In addition to the didactic session, hands-on defibrillator practice, and code cart review, each group had a 3.5-hour simulation training session, as follows:

Group 1 received the traditional methodology of low-fidelity emergency response training, using Resusci Anne™, case scenarios, and a simulator to depict arrhythmias on a defibrillator monitor.

Group 2 received computer-based simulation training with the Anesoft ACLS 7 Simulator package. Participants were allowed to repeat the simulation as many times as they were able during the allotted 3.5-hour time frame but were not allowed further access to the training until the study was completed.

Group 3 received full-scale simulation training with high-fidelity manikins (Medical Education Technologies, Inc., Sarasota, FL, USA) in a full-scale, realistic, dynamic intensive care unit at the institution's simulation center.
Instruments

Knowledge
Knowledge was assessed with a 12-item written examination. The examination was created by three critical care content experts by drawing from standard basic life support principles, ACLS algorithm guidelines, and an established basic electrocardiogram arrhythmia course that is taught in the institution's Critical Care Orientation program. Items on the examination included seven matching items for sequencing basic life support events (e.g., open the airway, call for help), rhythm strip identification, and multiple-choice questions.

Confidence
Confidence in one's ability to perform emergency response was assessed with the Emergency Response Confidence Tool (ERCT), designed by us in an earlier study. Reliability testing was done on the ERCT in that pilot study, "Emergency Response: Development, Validation, and Evaluation of Measurement Tools" (Arnold et al., 2009), with the intention of using it in this study.
The tool consists of 19 items assessing confidence in the critical attributes of emergency response. Its Cronbach's alpha coefficient (.92) indicated high internal consistency for the entire set of confidence items. The Cronbach's alpha coefficient for each group was as follows: Group 1, .86; Group 2, .92; and Group 3, .68. The Spearman correlation coefficient (.87) was computed to evaluate test-retest reliability.

Performance
Performance was assessed in a large simulation center at the Midwestern health care institution with the revised ERPT. The original tool was developed and pilot tested in the earlier study mentioned above (Arnold et al., 2009) for the purpose of evaluating and modifying it for use in this study. The tool consists of 12 categorical and continuous items of emergency response. Interrater agreement assessed by kappa statistics was high for most categorical and continuous items on the original ERPT. Construct validity was assessed by calculating the Spearman correlation coefficient for each confidence item that was similar to each categorical and timed-task ERPT item. Correlations were moderately high for medications administered (.83), moderate for rhythm identified (.65) and airway assessed (.47), and low for the remaining items. Modifications of the ERPT were made on the basis of the statistical findings and the raters' verbal reports of difficulty with using the tool. Modifications included enhancing the behavioral anchors and replacing all of the items with the nominal scale of > 60 seconds, 30 to 60 seconds, and < 30 seconds with actual task times. The original ERPT items for defibrillation, medications, and arrhythmia were not inclusive of other ACLS scenarios because the tool was designed for a ventricular tachycardia scenario. The revised tool was adapted to include multiple ACLS arrhythmias, types of medications, and types of electrical intervention.

Student Satisfaction and Self-Confidence in Learning
Student satisfaction and self-confidence in learning (SSSL) were assessed with the SSSL instrument developed for the National League for Nursing/Laerdal simulation study (Jeffries & Rizzolo, 2006). Written permission to use the instrument was obtained through the National League for Nursing. The instrument's 13 items measure participants' satisfaction with the simulation session (5 items) and self-confidence in learning (8 items) on a 5-point Likert-type scale. Internal consistency reliability has been demonstrated: satisfaction, r = .94; self-confidence, r = .87 (Jeffries & Rizzolo, 2006).
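The internal-consistency coefficients reported in this section follow the standard Cronbach's alpha formula. As a minimal illustration only (the authors' analyses were run in SAS, and the response matrix below is synthetic rather than study data), alpha for a 19-item, 0-100 confidence scale can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]                         # number of items (19 for the ERCT)
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 0-100 confidence ratings for 28 respondents (illustration only).
rng = np.random.default_rng(0)
responses = rng.integers(0, 101, size=(28, 19)).astype(float)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```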
Preintervention Procedure

When written consent was obtained, participants' names were entered into a randomization table assigning them to one of the three groups (traditional low-fidelity simulation, computer-based simulation, or full-scale simulation).
Group assignments were concealed until after baseline data were collected. Knowledge, confidence, and performance assessments were completed by all three groups before they received the didactic emergency response educational component, hands-on defibrillator practice, and code cart review. Participants completed the written examination and the ERCT immediately after giving their written consent to participate, approximately 2 to 3 weeks prior to the educational intervention. For the baseline performance assessment in emergency response, each student was scheduled a 30-minute time frame at the simulation center. The performance assessment was completed 1 to 2 weeks prior to the educational intervention.

Once the participants arrived at the simulation center, they received an orientation to the high-fidelity simulator, emergency code cart, defibrillator, and environment. One standardized 5-minute scenario was used. The scenario started when the participant entered the simulation room. The simulation technologist was instructed to program the manikin for pulseless ventricular tachycardia once the manikin stated, "I don't feel well." Participants were given instructions on how to ask for help and were told how they could end the scenario. They were instructed that the scenario would end after 5 minutes but that they could end it at any time if they could not continue or had exhausted their interventions. We used a written script for the participant orientation in order to maintain a standardized approach. If participants asked for help, two helpers would enter the room but would provide assistance only when the participant explicitly delegated a task to them. At the end of each scenario, one of the investigators met briefly with each participant to ensure that participants had an opportunity to debrief after the event.
Educational Intervention Procedure

With the exception of the traditional didactic lecture, participants received the educational intervention in their assigned groups. The didactic lecture was conducted as traditionally scheduled, in a classroom setting with all the participants in the Critical Care Orientation program. Following the didactics, and on a different day, each study group completed both a code cart review and hands-on practice with the Zoll defibrillator. The groups were kept separate for this teaching activity because the educational interventions for the study were conducted immediately after the code cart review and Zoll defibrillator practice. Once the participants completed the review and hands-on practice, they received instruction and training in one of the three simulation methodologies: low-fidelity, computer-based, or full-scale simulation. We provided the instruction and training.
The training time for each simulation methodology was 25 minutes, in addition to a 15-minute debriefing and 5 minutes to complete the SSSL instrument. The instructors for each session had written instructions for conducting the debriefing session. The debriefing included asking the participants how the experience felt and providing them with an opportunity to ask questions about ventricular tachycardia, defibrillation, and ACLS protocol.
Post Intervention Procedure

One week following the intervention, all participants completed the post intervention knowledge test and the ERCT. The emergency response performance assessment occurred at the simulation center approximately 2 weeks after the intervention and 1 week prior to ACLS training. Each of the 28 participants was scheduled a 30-minute block of time to complete the performance assessment. Prior to the post intervention simulation, participants received the same orientation to the scenario as described above. Two helpers were available to participants during the simulation, as they had been in the preintervention assessment. Each participant was tested separately and received the same simulation scenario, and each was debriefed immediately following the scenario.
Data Collection Procedures

The written pre- and post intervention examinations were scored and entered into a database. Demographic information and ERCT and SSSL scores were also entered into the database. Emergency response baseline and post intervention performances were digitally recorded and archived for later review. The archived video recordings were independently viewed and scored by two of us.
Table 2  Demographic Characteristics of Participants (N = 28)

Characteristic                                 M (SD)
Age*                                           24.9 (4.4)
Years of RN experience                         0.85 (1.4)
Years of medical-surgical experience           0.79 (1.3)

Characteristic                                 N (%)
Female                                         22 (78.6)
Prior ECG course                               12 (42.9)
Prior ACLS training                            0 (0)
Prior simulation experience                    7 (25)
Prior experience in a resuscitation event      12 (43)

Note. ACLS = advanced cardiac life support; ECG = electrocardiogram.
* N = 27 (1 participant did not provide age).
Data Analysis

Data analysis was performed with SAS version 9 software. Of the 33 enrolled students, 28 (85%) completed the study; 9 participants were randomized to each of Groups 1 and 2, and 10 participants were randomized to Group 3. Demographic data were summarized by frequencies and percentages for categorical data and by means and standard deviations for continuous data.

An analysis of covariance (ANCOVA) model was used to compare posttest knowledge scores (percentages, 0-100 scale) between groups, adjusting for pretest knowledge scores; adjusted group knowledge score means were also computed. ANCOVA was also used to compare post intervention confidence scores (mean of all individual items, 0-100 scale) between groups, adjusting for preintervention confidence scores. SSSL overall scores (mean of all individual items on a 5-point scale in which 1 = least satisfied and 5 = most satisfied) were evaluated by analysis of variance (ANOVA).

Data analysis for the baseline performance assessment was completed on 28 participants, and for the post intervention performance assessment on 27 participants: a post intervention performance video for one participant in Group 3 was not archived because of human error, so performance data analysis included only 9 participants in Group 3. Binary performance outcomes ("success" vs. "no success") were compared across groups by means of logistic regression models, and ANOVA models were used for the continuous performance data. If the overall ANCOVA, ANOVA, or logistic model group comparison test was significant at the .05 level, pairwise tests between groups were conducted with a Bonferroni multiple comparison adjustment (the p value must be less than .017, or .05/3, for two groups to be considered statistically different). Interrater reliability for the ERPT was assessed with kappa statistics for categorical data and with concordance correlation coefficients for continuous data.
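The analyses above were run in SAS; purely as an illustration of the procedure (synthetic data, hypothetical group labels, and Python in place of SAS), the ANCOVA with Bonferroni-adjusted pairwise follow-up might be sketched as:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in for the study data: one row per participant, with a
# group label and pre/post knowledge percentages (0-100).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["low_fidelity"] * 9 + ["computer_based"] * 9 + ["full_scale"] * 10,
    "pre": rng.uniform(40, 80, 28).round(),
    "post": rng.uniform(50, 95, 28).round(),
})

# ANCOVA: posttest score modeled on group, adjusting for pretest score.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(anova_lm(model, typ=2))  # overall group effect, judged at alpha = .05

# If the overall test were significant, the three pairwise contrasts would
# each be judged against the Bonferroni-adjusted threshold .05 / 3 = .017.
pairs = [("low_fidelity", "computer_based"),
         ("low_fidelity", "full_scale"),
         ("computer_based", "full_scale")]
for a, b in pairs:
    sub = df[df["group"].isin([a, b])]
    fit = smf.ols("post ~ C(group) + pre", data=sub).fit()
    term = [t for t in fit.pvalues.index if t.startswith("C(group)")][0]
    print(f"{a} vs {b}: p = {fit.pvalues[term]:.3f}")
```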
Results
Demographic data are shown in Table 2.

In the overall group comparison, there was no significant difference in knowledge among the groups (Table 3). Each group's posttest percentage and confidence interval are reported in Table 3; the computer-based group had the highest posttest score, and the low-fidelity group had the lowest.

There was no significant difference in the overall group comparison for confidence (Table 3). The post intervention confidence scores and confidence intervals are reported in Table 3; the low-fidelity group had the highest confidence score, and the computer-based group had the lowest.

For the SSSL results, there was a significant difference in the overall group comparison (Table 3). The computer-based group had a significantly lower overall mean SSSL score than either of the other groups, but the low-fidelity and full-scale simulation groups were not significantly different.

The group comparisons for performance outcomes are reported in Table 4.
Table 3  Mean Post Intervention Knowledge, ERCT, and Satisfaction and Self-Confidence Scores

Outcome and Group                    Post Intervention Mean Score (95% CI)    Overall Group Comparison p Value
Knowledge*                                                                    .12
  Low-fidelity                       67 [58, 76]
  Computer-based                     80 [71, 89]
  Full-scale                         76 [68, 84]
ERCT Confidence*                                                              .31
  Low-fidelity                       59 [53, 65]
  Computer-based                     52 [46, 58]
  Full-scale                         54 [48, 60]
Satisfaction and Self-Confidence†                                             <.001
  Low-fidelity                       4.4 [4.2, 4.6]
  Computer-based                     3.6 [3.4, 3.8]
  Full-scale                         4.7 [4.5, 4.9]

Note. CI = confidence interval; ERCT = Emergency Response Confidence Tool.
* Adjusted for pretest score using analysis of covariance.
† Analysis of variance model; computer-based versus low-fidelity and full-scale (both p < .001); low-fidelity versus full-scale (p = .043).
Table 4  Group Comparisons for Performance Outcomes

Performance Binary (Yes/No) Items         Logistic Regression Overall Group Comparison p Value
Rhythm identified                         .53
Airway assessed                           .40
Pulses assessed                           .32
Chest compressions initiated              .44
Electrical intervention initiated         .44
Oxygen applied                            .35
Chest compressions maintained             .09

Performance Continuous (Time) Items       ANOVA Overall Group Comparison p Value
Time to identify rhythm                   .36
Time to oxygen applied                    .64
Time to electrical intervention           .22
Time to chest compressions initiated      .78

Note. ANOVA = analysis of variance.
There were no significant differences in any post intervention performance assessment items among the groups.

A contingency table was made to compare the electrical intervention type selected among the groups during post intervention performance, and a Fisher's exact test showed a significant difference (p = .014), with Group 3 (full-scale simulation) having the highest percentage not selecting defibrillation (56% vs. 0% and 11% for the low-fidelity and computer-based groups, respectively; see Discussion).

Kappa statistics showed substantial agreement between the raters for all categorical ERPT items except pulses assessed and chest compressions maintained (for the baseline assessment) and electrical intervention and type of electrical intervention (for the post intervention assessment; Table 5; see Discussion). Interrater agreement was high for all the continuous ERPT items (Table 6).
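For readers unfamiliar with the two agreement statistics reported here, the following Python sketch shows how kappa (categorical items) and Lin's concordance correlation coefficient (timed items) are computed; the paired ratings are hypothetical, not study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def lin_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient for two raters' timings."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired ratings for one categorical item (1 = observed, 0 = not).
rater1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(f"kappa = {cohen_kappa_score(rater1, rater2):.2f}")

# Hypothetical paired times (seconds) for one continuous item.
t1 = np.array([42.0, 55.0, 61.0, 38.0, 47.0])
t2 = np.array([43.0, 54.0, 62.0, 39.0, 46.0])
print(f"CCC = {lin_ccc(t1, t2):.2f}")
```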
Discussion

Full-scale simulation with high-fidelity manikins is becoming widely recognized as a preferred method for teaching nursing skills such as emergency response. Few nursing comparison studies in emergency response have examined the effect of full-scale simulation relative to other simulation methodologies. This study is distinctive in that it examined knowledge, confidence, satisfaction, and performance among three randomized groups.

Although there was no significant difference in knowledge scores among the groups, Group 2 (computer-based simulation) scored the highest and Group 1 (low-fidelity simulation) scored the lowest.
The Anesoft ACLS simulator targets the cognitive domain, which could explain the higher written test scores in Group 2. Despite scoring higher on the knowledge test, Group 2 scored lowest on the ERCT questionnaire and the SSSL instrument. Group 1 scored the highest on the ERCT questionnaire, perhaps attributable to familiarity with the learning method. Group 3 (full-scale simulation) participants rated the highest on the SSSL instrument. Given that technology is a predominant method of instruction today and the average participant age was 24.9 years, it was unexpected that Group 2 (computer-based simulation) would be the least satisfied on the SSSL.

Statistically significant differences in group performances were difficult to demonstrate with the small sample sizes in each group. Frequency statistics for the comparison of electrical intervention type selected among the groups during the post intervention performance assessment showed that many participants in Group 3 did not select the defibrillation mode. The Zoll defibrillator offers the option of selecting either analyze mode or manual mode. Although the participants were instructed to select the manual mode during their simulation, many of them selected the analyze mode. One of the limitations of simulation that involves manikins is that reality cannot always be duplicated: the Zoll defibrillator does not interface with a manikin as it does with a real patient, and in analyze mode the defibrillator cannot recognize the manikin's programmed rhythm. The participants were instructed not to select the analyze mode if they wanted to defibrillate; however, they were unable to recall this specific instruction once they were immersed in the simulation. Four of the participants in Group 3 selected the analyze mode on the defibrillator, and the defibrillator feedback mechanism advised them not to shock.

Overall, interrater agreement for the ERPT was high.
Table 5  Baseline and Post Intervention Kappa Coefficients for Categorical ERPT Items

Outcome                              Baseline Kappa    Post Intervention Kappa
Rhythm identified                    1.0               1.0
Type of rhythm                       1.0               1.0
Called for help                      No kappa*         No kappa*
Airway assessed                      0.91              1.0
Pulses assessed                      0.65              1.0
Chest compressions initiated         0.91              1.0
Pad attached                         1.0               No kappa*
Electrical intervention initiated    1.0               0.63
Type of electrical intervention      1.0               0.58
Oxygen applied                       1.0               1.0
Chest compressions maintained        0.78              1.0
Verbalized drugs and dose            0.91              0.87

Note. ERPT = Emergency Response Performance Tool.
* No kappas computed because both raters reported that all the participants attached the pads and called for help.

Table 6  Baseline and Post Intervention Concordance Correlation Coefficients (CCCs) for Continuous ERPT Items

Outcome                                 Baseline CCC    Post Intervention CCC
Time to rhythm identified               0.96            0.99
Time to chest compressions initiated    >0.99           >0.99
Time to electrical intervention         0.99            0.94
Time to oxygen applied                  0.97            0.94

Note. ERPT = Emergency Response Performance Tool.
Interrater agreement for the items pulses assessed and chest compressions maintained on the baseline assessment was lower. Lower agreement for pulses assessed may be attributable to the subjectivity that enters when adequate time is not taken to assess the pulse, making it difficult for the rater to observe whether it was assessed; another possibility is inadequately defined behavioral anchors. Disagreement between the raters on the item chest compressions maintained occurred primarily when participants ended the scenario prematurely. For post intervention interrater agreement, the lower agreement for electrical intervention initiated and type of electrical intervention is attributable to the technical glitch described above.
Strengths and Limitations

The experimental design with randomization strengthened the research. We collectively designed a script so that every participant would receive the same information prior to engaging in the full-scale simulations. In an earlier study (Arnold et al., 2009), we developed and tested a performance tool and a confidence questionnaire designed specifically for this study.

A significant limitation of this study was its small sample size of 9 to 10 participants in each group, which limited the statistical power and generalizability of the results. When viewing the videos to assess performance, raters initially did not understand when the clock should start: as soon as the participant entered the room or when the manikin went into ventricular tachycardia. It is recommended that the start time be made explicitly clear when future studies are designed. Inconsistent manikin operators and differing levels of operator experience were another limitation. The operators were given a script; however, to minimize the variation, it would have been helpful to have one consistent operator.
Although the investigators were blinded to the group type while rating the videos, the investigator-raters recognized some of the participants because the raters had conducted the participants' educational intervention. This incomplete blinding of the raters might have introduced unintended bias into the ratings. The use of trained raters who are not involved in designing and conducting the study is recommended. Last, there was a change in the Critical Care Orientation program director during the middle of the study, which may have affected recruiting and retention efforts. The previous program director had years of experience and confidence in managing the program, putting students at ease with the intense program and assisting them in navigating their schedules to minimize extraneous cognitive load.
Conclusions

Although the statistical findings did not support the hypothesis that RNs who receive full-scale simulation training will score higher in knowledge, confidence, and performance than those receiving computer-based simulation or low-fidelity simulation, this study advances the current knowledge base of simulation-based education and research. The results support what is known about realistic simulation-based education: learners are more satisfied with this approach. We demonstrated that the use of full-scale simulation to assess RNs' emergency response performance is achievable and reliable. The ERPT can be a reliable measure for the assessment of performance in full-scale simulation. However, further studies with larger sample sizes are warranted; given the small sample size here, it would be worthwhile to replicate the study with a larger, representative sample.

Systematic development and evaluation of simulation-based performance assessment tools can improve the quality of assessments, which may increase the likelihood of adopting a higher standard to assess performance more rigorously prior to completion of a Critical Care Orientation program. Rigorous research in simulation is difficult and time-consuming, yet it can yield long-term advantages for trainees and faculty.
Future research could examine the impact of this simulation-based course on patient outcomes. This study is a first step in simulation-based educational research. Next steps should include increasing the sample size, obtaining participant samples at multiple sites, and conducting longitudinal studies to assess how the training methods affect performance over time.
References

Alinier, G., Hunt, W. B., & Gordon, R. (2003). Determining the value of simulation in nurse education: Study design and initial results. Nurse Education in Practice, 4, 200-207.

Arnold, J. J., Johnson, L. M., Tucker, S. J., Malec, J. F., Henrickson, S. E., & Dunn, W. F. (2009). Evaluation tools in simulation learning: Performance and self-efficacy in emergency response. Clinical Simulation in Nursing, 5(1), e35-e43. doi:10.1016/j.ecns.2008.10.003.

Cumin, D., & Merry, A. F. (2007). Simulators for use in anaesthesia. Anaesthesia, 62, 151-162.

DeVita, M. A., Schaerfer, J., Lutz, J., Dongilli, T., & Wang, H. (2004). Improving medical crisis team performance. Critical Care Medicine, 32(Suppl. 2), S61-S65.

Gordon, C. J., & Buckley, T. (2009). The effect of high-fidelity simulation training on medical-surgical graduate nurses' perceived ability to respond to patient clinical emergencies. Journal of Continuing Education in Nursing, 40(11), 491-498.

Hoadley, T. A. (2009). Learning advanced cardiac life support: A comparison study of the effects of low- and high-fidelity simulation. Nursing Education Perspectives, 30(2), 91-95.

Holcomb, J. B., Dumire, R. D., Crommett, J. W., Stamateris, C. E., Fagert, M. A., Cleveland, J. A., et al. (2002). Evaluation of trauma team performance using an advanced patient simulator for resuscitation training. Journal of Trauma: Injury, Infection, and Critical Care, 52(6), 1078-1086.

Holzman, R. S., Cooper, J. B., Gaba, D., Philip, J. H., Small, S. D., & Feinstein, D. (1995). Anesthesia crisis resource management: Real-life simulation training in operating room crises. Journal of Clinical Anesthesia, 7, 675-687.

Hotchkiss, M. A., Biddle, C., & Fallacaro, M. (2002). Assessing the authenticity of the human simulation experience in anesthesiology. Journal of the American Association of Nurse Anesthetists, 70(6), 470-473.

Howard, D. (2001). Student nurses' experiences of Project 2000. Nursing Standard, 15(48), 33-38.
Issenberg, S. B., McGaghie, W. C., Petrusa, E. R., Gordon, D. L., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27(1), 10-28.

Jeffries, P. R., & Rizzolo, M. A. (2006). Summary report: Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national, multi-site, multi-method study. New York: National League for Nursing. Retrieved October 11, 2011, from http://www.nln.org/research/LaerdalReport.pdf

Kardong-Edgren, S., & Adamson, K. A. (2009). BSN medical-surgical student ability to perform CPR in a simulation: Recommendations and implications. Clinical Simulation in Nursing, 5, e79-e83. doi:10.1016/j.ecns.2009.01.006.

Landers, M. G. (2000). The theory-practice gap in nursing: The role of the nurse teacher. Journal of Advanced Nursing, 32(6), 1550-1556.

Lindekaer, A. L., Jacobsen, J., Andersen, G., Laub, M., & Jensen, P. F. (1997). Treatment of ventricular fibrillation during anaesthesia in an anaesthesia simulator. Acta Anaesthesiologica Scandinavica, 41(10), 1280-1284.

Mancini, M. E., & Kaye, W. (1996). Resuscitation training: A time for reassessment. Journal of Cardiovascular Nursing, 10(4), 71-84.

Mayo, P. H., Hackney, J. E., Mueck, J. T., Ribaudo, V., & Schneider, R. F. (2004). Achieving house staff competence in emergency airway management: Results of a teaching program using a computerized patient simulator. Critical Care Medicine, 32(12), 2422-2427.

Ravert, P. K. M. (2004). Use of a human patient simulator with undergraduate nursing students: A prototype evaluation of critical thinking and self-efficacy (Doctoral dissertation, College of Nursing, University of Utah, 2004). Dissertation Abstracts International-B, 65/05, 2346.

Schwid, H. A., Rooke, G. A., Ross, B. K., & Sivarajan, M. (1999). Use of a computerized advanced cardiac life support simulator improves retention of advanced cardiac life support guidelines better than a textbook review. Critical Care Medicine, 27(4), 821-824.

Seropian, M. A. (2003). General concepts in full-scale simulation: Getting started. Anesthesia & Analgesia, 97, 1695-1705.

Seropian, M. A., Brown, K., Gavilanes, J. S., & Driggers, B. (2004). Simulation: Not just a manikin. Journal of Nursing Education, 43(4), 164-169.

Steele, R. L. (1991). Attitudes about faculty practice, perception of role, and role strain. Journal of Nursing Education, 30(1), 15-22.

Thomason, T. R. (2006). ICU nursing orientation and postorientation practices: A national survey. Critical Care Nursing Quarterly, 29(3), 237-245.

Vandrey, C. I., & Whitman, K. M. (2001). Simulator training for novice critical care nurses. American Journal of Nursing, 101(9), 24GG-24LL.

Wong, A. K. (2004). Full scale computer simulators in anesthesia training and evaluation. Canadian Journal of Anesthesia, 51(5), 455-464.