Clinical Simulation in Nursing (2012) 8, e219-e225
Featured Article
Using Objective Structured Clinical Evaluation for Simulation Evaluation: Checklist Considerations for Interrater Reliability

Mary Cazzell, RN, PhD*, Carol Howe, RN, MSN
University of Texas at Arlington, Arlington, TX 76019, USA
* Corresponding author: [email protected] (M. Cazzell)

KEYWORDS: domains of learning; interrater reliability; nursing education; nursing students; objective structured clinical evaluation (OSCE); simulation evaluation
Abstract
Background: Reliability of simulation outcome measurements is infrequently reported in nursing education. The purpose of this study was to establish interrater reliability of a checklist for a pediatric medication administration objective structured clinical evaluation.
Method: Two raters scored 207 videotaped nursing student objective structured clinical evaluation performances using a 14-item checklist. Item interrater reliability statistics were calculated.
Results: Adequate interrater reliability was obtained on six items from the cognitive and psychomotor domains of learning. Unacceptable interrater reliability was obtained on four items from the affective domain.
Conclusion: Results verified the difficulty of quantitatively measuring affective domain behaviors and the need for consistency in rater roles.

Cite this article: Cazzell, M., & Howe, C. (2012, July/August). Using objective structured clinical evaluation for simulation evaluation: Checklist considerations for interrater reliability. Clinical Simulation in Nursing, 8(6), e219-e225. doi:10.1016/j.ecns.2011.10.004.

© 2012 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved.
Objective evaluation of clinical competence in nursing students is a major challenge for nurse educators (Walsh, Bailey, & Koren, 2009). With limited sites for clinical placements and varying clinical hour requirements, students may not have sufficient opportunities to integrate classroom content into clinical performance (Benner, Sutphen, Leonard, & Day, 2009; Institute of Medicine, 2011). Simulation allows students to actively participate in structured situations closely reflecting encounters in real practice while building knowledge, skills, beliefs, and attitudes in a less risky environment (Cioffi, 2001; Sinclair & Ferguson, 2009; Tomey, 2003).

The objective structured clinical evaluation (OSCE) is designed to evaluate clinical performance by assessing "students' transfer of classroom and laboratory learning experiences into clinical practice" (McWilliam & Botwinski, 2010, p. 36). For an OSCE, nurse educators develop case scenarios, skill stations, or simulated situations reflecting either course or curricular outcomes. Students can be evaluated on their basic clinical knowledge and skills as well as their communication, teaching, assessment, procedural, and problem-solving skills (Rentschler, Eaton, Cappiello, McNally, & McWilliam, 2007). To reflect
the "holism" of nursing practice, the OSCE checklist must be able to measure components from all three domains of learning: cognitive, psychomotor, and affective. "Knowledge in all three domains is essential for full professional development and thus, socialization into the profession" (Weis & Schank, 2002, p. 271). Weis and Schank (2002) assert that student evaluations usually focus on the cognitive and psychomotor domains, with less attention on affective learning. Currently, the evaluation of simulation outcomes with reliable and valid instruments is lacking in nursing education literature (Jeffries, 2007). This study describes important considerations toward establishing interrater reliability of a checklist designed to measure videotaped nursing student performances during a pediatric medication administration OSCE. The analysis of item interrater reliability is an important first step toward the development of psychometrically sound evaluation tools in nursing education.

Key Points
- To achieve interrater reliability, items on simulation evaluation checklists must measure specific quantifiable behaviors from all three domains of learning: cognitive, psychomotor, and affective.
- Rater factors such as poor rater training, lack of familiarity with the checklist or OSCE protocol, or faulty knowledge of the role description ("assess" versus "evaluate") affect interrater reliability.

Background

OSCE

The OSCE was developed in 1975 for British medical education to objectively assess clinical knowledge and skills through the use of timed rotational skill stations in a simulated setting (Harden & Gleeson, 1979). Nursing education has adapted the original format of medical OSCEs to better reflect the holistic and diverse nature of nursing practice by using fewer and longer stations, one or two complex patient scenarios, and head-to-toe assessment OSCEs (Rushforth, 2007). Walsh et al. (2009) redefined the nursing OSCE "as a psychological construct that includes aspects of cognitive, affective, [and] psychomotor skills such as critical thinking and problem-solving, and the incorporation of knowledge, values, beliefs, and attitudes" (p. 1585).

OSCEs are used as formative and/or summative methods of evaluation. Formative evaluation is used in nursing education to assess the value of learning activities such as simulations, to quantify student learning and application of relevant course content, and to provide feedback to students on their strengths and weaknesses (Jeffries & Norton, 2005). Using their end-of-program OSCE as formative evaluation, Rentschler et al. (2007) gave students written
feedback on their individual performance and used the results to set goals for the final capstone course. These researchers also used their OSCE as summative evaluation of the nursing program and its undergraduate curriculum. Summative evaluation is used to inform decisions on the effectiveness of course, curriculum, and program teaching strategies; to review and revise student learning outcomes; and to plan course revisions as appropriate (Jeffries & Norton, 2005). Although this study used the OSCE as formative evaluation, establishing psychometric data on the OSCE checklist is imperative if the OSCE is to serve as a reliable and valid summative evaluation in the future.
Psychometric Challenges with OSCEs

The challenge for establishing OSCE checklist reliability and validity is to objectively evaluate student OSCE performances in all three domains of learning, including feelings, beliefs, attitudes, and values from the affective domain (Neumann & Forsyth, 2008). Difficulties may stem from (a) affective behaviors not being immediately observable during OSCE performances, (b) a lack of objective assessment guidelines for affective behaviors, (c) evaluation objectives that focus on observable measures of knowledge and skills, and (d) the subjective nature of affective behaviors (Lund, Wayda, Woodard, & Buck, 2007; Neumann & Forsyth, 2008). "Thus, the assessment of complex and essentially subjective constructs, such as caring, empathy and other interpersonal skills are vulnerable to findings of low validity and poor interrater reliability within an OSCE" (Mitchell, Henderson, Groves, Dalton, & Nulty, 2009, p. 402). Despite obstacles, OSCE reliability, in the form of consistency or reproducibility in scoring, administration, and sample, is necessary before validity can be achieved (Gupta, Dewan, & Singh, 2010; Jones, Pegram, & Fordham-Clarke, 2010; Turner & Dankoski, 2008).

Published psychometric analyses of nursing OSCE evaluations have lagged behind the use of OSCEs in nursing education (Rushforth, 2007; Walsh et al., 2009). A systematic review of medical OSCE literature found that only 37% of authors reported evidence of checklist or rating scale reliability (Patricio et al., 2009). Data on checklist content validity have been reported for nursing (Bloomfield, Roberts, & While, 2010; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008) and medicine (White, Ross, & Gruppen, 2009). Data for interitem correlations have been reported by researchers in pharmacology (Sturpe, Huynh, & Haines, 2010). Checklist interrater reliability data have been reported by both nursing (Bloomfield et al.; Todd et al.) and pharmacy (Sturpe et al.) disciplines. No checklist reliability or validity data were reported in a nursing study that used the OSCE as both a formative assessment of senior nursing students' clinical competencies and a summative program evaluation (Rentschler et al.,
2007). One nursing study focused solely on evaluating the psychometrics of a formal OSCE evaluation tool measuring nursing core competencies (critical thinking, communication, assessment, and technical skills), reporting both construct and content validity as well as interrater reliability (Todd et al., 2008).

Our study focused on the establishment of interrater reliability for a pediatric medication administration OSCE checklist. Research questions included the following: (a) What are the interrater reliability coefficients and percentage of correct behaviors for each item on the OSCE checklist when scored by two raters? (b) Are there differences in interrater reliability among OSCE checklist items from the cognitive, psychomotor, and affective domains of learning?
Method

A quantitative descriptive research design was used to collect and analyze data from solo videotaped student OSCE performances. Establishment of OSCE checklist interrater reliability was the first step toward evaluating the effectiveness of a 7-hour pediatric medication administration simulation lab. Both the simulation lab and the OSCE are clinical requirements for all senior-level baccalaureate nursing students enrolled in Nursing of Children and Adolescents. Recruitment and data collection occurred across the fall 2010 and spring 2011 semesters. Approval by the institutional review board of a large public university in the southwestern United States was obtained. All 207 enrolled nursing students consented to include their OSCE data for this study.

The simulation lab and OSCE occurred at the college of nursing's simulation facility. The 7-hour Pediatric Medication Administration, Assessment, and Skills Simulation Lab consisted of the following components:
- Prebriefing by the lead teacher (1 hour)
- Simulated medication experiences among developmentally different simulated patients (5 hours): (a) Infant Colt with tetralogy of Fallot and otitis media, (b) Amanda Preschool with asthma, (c) Sally School-Age with cystic fibrosis, and (d) Travis Teen with postoperative appendectomy and type 1 diabetes
- A postsimulation instructor-led debriefing (1 hour)

Students performed medication calculations to assess for safety of ordered medication dosages, determined developmentally appropriate considerations for medication administration, and administered medications to their patients by various routes (oral, nasogastric, inhalation, IV, intramuscular, subcutaneous, and intradermal). Within 2 to 4 weeks of simulation completion, students participated in a solo videotaped pediatric medication administration OSCE. Each student retrieved, calculated, and
administered both oral (furosemide and multivitamin) and IV (ceftriaxone via syringe pump) medications due at 9:00 p.m. to Infant Colt, a "patient" they met during their simulation day. The OSCE room was equipped with an overhead audio-video system, and videotaping was directed by simulation staff from the control room. The students encountered SimBaby™, dressed in a t-shirt and diaper, covered with a blanket, and equipped with a heparin lock (right foot) in an overhead warmer. Students received a packet of information, including (a) instructions to calculate and administer safe dosages for all 9:00 p.m. meds, (b) a kardex, and (c) a medication administration record (MAR). Prior to the OSCE, students received training on accessing medications from the PYXIS medication retrieval system and on MAR documentation. Because syringe pump competency was not an OSCE learning outcome, the IV ceftriaxone was preloaded into the syringe pump. Students were expected to attach the IV medication to the patient's heparin lock and to verbalize the rate of the IV medication infusion.

The lead teacher and a graduate research assistant (GRA) with pediatric nursing expertise viewed all 207 OSCE videotapes and independently scored all OSCEs using a 14-item checklist (Figure 1). The OSCE checklist was developed by a team of pediatric nurse educators to reflect the step-by-step processes (knowledge, skills, and professional communication) of safe medication administration taught throughout the undergraduate nursing curriculum.
Results

Data Analysis

PASW Statistics 18 (SPSS Inc., Chicago, IL) software was used to obtain percentages of correct student behaviors between the two raters, as well as item interrater reliability statistics from the quantitative data: kappa (κ) and the intraclass correlation coefficient (ICC). The κ statistic, appropriate for nominal or ordinal data, is a chance-corrected measure of rater agreement calculated on the basis of the proportion of observed agreements to the agreements expected by chance. As explained by Portney and Watkins (2000), even if two raters had no common grading criteria, one could anticipate agreement by chance at least 34% of the time. The ICC statistic can be used on nominal, ordinal, interval, or ratio data, can assess interrater reliability among two or more raters, and can assess both rater consistency and average agreement (Portney & Watkins, 2000).
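For readers who wish to reproduce item-level agreement statistics of this kind, the sketch below shows how percent agreement and Cohen's kappa can be computed for one binary (yes/no) checklist item scored by two raters. This is a minimal illustration, not the authors' analysis code: the rater scores are invented, and in practice the ICC would be obtained from a statistical package such as SPSS/PASW, as in this study.

```python
# Minimal sketch of item-level interrater agreement for one binary
# (yes/no) checklist item scored by two raters. The scores below are
# hypothetical and are NOT data from the study.

from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of students on whom the two raters agree."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (P_o - P_e) / (1 - P_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Expected chance agreement: sum over categories of the product of
    # each rater's marginal proportions (the "34% by chance" idea cited
    # from Portney & Watkins in the text).
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores for one item (1 = behavior correct, 0 = incorrect).
lead_teacher = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
grad_assistant = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

print(f"Percent agreement: {percent_agreement(lead_teacher, grad_assistant):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(lead_teacher, grad_assistant):.2f}")
```

Note how the two numbers diverge: the raters agree on 80% of the hypothetical students, yet kappa is only about 0.52 once chance agreement is removed, which is why the study reports chance-corrected statistics rather than raw agreement.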
Item Interrater Reliability Results

The sample (N = 207) consisted of first-semester senior nursing students: 183 women (88%) and 24 men (12%). Both interrater reliability statistics (κ and ICC) were closely correlated on each item (see Table 1).
Figure 1. Checklist for pediatric medication administration objective structured clinical evaluation: Steps of Proper Medication Administration. Correct behaviors noted by evaluator (✓ for yes, X for no):
1. Checks MAR for medication that is due
2. Calculates dosages and documents if dosages are safe or unsafe
3. Professionally dressed, greets patient and parent, and introduces herself or himself
4. Explains to patient and/or parent what medications will be given and any further explanation needed
5. Washes hands prior to patient contact OR appropriate donning and removal of gloves throughout scenario
6. Demonstrates developmentally appropriate communication and actions toward infant patient
7. Checks patient's ID band and asks for second form of ID (birth date from parent)
8. Checks patient's allergy band
9. Cleans port on heparin lock for 10 seconds with alcohol pad
10. Assesses heparin lock for patency by attaching normal saline (NS) syringe, opens clamp on heparin lock, gently flushes with 2 ml NS, and clamps off
11. Attaches medication (microbore tubing) to heparin port using aseptic technique (cleans port if contaminated)
12. Verbalizes medication to be infused over 30 minutes at 10 ml/hr
13. Administers two oral medications using safe and developmentally appropriate administration techniques for infant
14. Documents all meds on MAR appropriately and clearly
MAR = medication administration record.
Item 5, handwashing or appropriate use of gloves, had the strongest item interrater reliability (κ and ICC = 0.71) among all items. Having substantial or strong item interrater reliability "allows the researcher to assume that measurements obtained by one rater are likely to be representative of the subjects' true score" (Portney & Watkins, 2000, p. 70). A substantial proportion of students (27%-33%) did not wash their hands prior to patient contact or use appropriate gloving, an educational concern. Two other items (Items 7 and 9) attained substantial interrater reliability: checking the patient's ID band (κ and ICC = 0.67) and cleaning the heparin port with alcohol for 10 seconds (κ and ICC = 0.61), psychomotor skill competencies that impact patient safety. Almost 25% of students did not check their patient's ID band before giving meds, and almost 70% of students did not clean the heparin port with alcohol for 10 seconds. Checking the allergy band, assessing the heparin lock for patency, and correctly verbalizing the infusion rate (κ and ICC = 0.49; 0.45-0.48; and 0.51, respectively) were moderately reliable checklist items that assess psychomotor skills. Almost 60% of students did not properly assess the heparin lock for patency before administering IV medication, more than 25% did not
check the patient's allergy band, and almost 50% did not correctly state that the IV infusion was to run at 10 ml/hour for 30 minutes.

Several items achieved fair to poor interrater reliability (κ and ICC < 0.4): Items 1, 3, 4, 6, 11, and 13 (see Table 1). Four of the six items were evaluations of affective domain competencies such as professional dress and introduction (κ = 0.18; ICC = 0.19), explanations of medications to parents (κ and ICC = 0.18), developmentally appropriate communication and actions (κ and ICC = 0.13), and correct oral medication administration to the infant (κ = 0.33; ICC = 0.36). The other two items, checking the MAR before giving the med (κ and ICC = 0.38) and attaching medication tubing to the IV port using aseptic technique (κ = 0.28; ICC = 0.29), were not reliable. The percentages of correct student behaviors on these unreliable items cannot be confidently reported or interpreted until the items are revised.

Two items (Items 2 and 14) were scored solely by the lead teacher, and no κ or ICC statistic was calculated. Both safe medication dosage calculations (Item 2) and MAR documentation (Item 14) were completed by students in their OSCE packet and were scored by one rater.
Table 1. Item-by-Item Percentages and Interrater Reliability Results (N = 207)

| Item | Correct Behaviors (%),* LT, GRA | κ† | ICC† |
|---|---|---|---|
| 1. Checks MAR for medication that is due | 87%, 89% | 0.38 | 0.38 |
| 2. Calculates dosages and documents if dosages are safe or unsafe‡ | 71%, NA | NA | NA |
| 3. Professionally dressed, greets patient and parent, and introduces herself or himself | 92%, 97% | 0.18 | 0.19 |
| 4. Explains to patient and/or parent what medications will be given and any further explanation needed | 98%, 92% | 0.18 | 0.18 |
| 5. Washes hands prior to patient contact OR appropriate donning and removal of gloves throughout scenario | 67%, 73% | 0.71 | 0.71 |
| 6. Demonstrates developmentally appropriate communication and actions toward infant patient | 98%, 82% | 0.12 | 0.13 |
| 7. Checks patient's ID band and asks for second form of ID (birth date from parent) | 75%, 74% | 0.67 | 0.67 |
| 8. Checks patient's allergy band | 73%, 52% | 0.45 | 0.49 |
| 9. Cleans port on heparin lock for 10 seconds with alcohol pad | 32%, 30% | 0.61 | 0.61 |
| 10. Assesses heparin lock for patency by attaching normal saline (NS) syringe, opens clamp on heparin lock, gently flushes with 2 ml NS, and clamps off | 38%, 39% | 0.48 | 0.48 |
| 11. Attaches medication (microbore tubing) to heparin port using aseptic technique (cleans port if contaminated) | 20%, 24% | 0.28 | 0.29 |
| 12. Verbalizes medication to be infused over 30 minutes at 10 ml/hour | 68%, 68% | 0.51 | 0.51 |
| 13. Administers two oral medications using safe and developmentally appropriate administration techniques for infant | 83%, 66% | 0.33 | 0.36 |
| 14. Documents all meds on MAR appropriately and clearly§ | 85%, NA | NA | NA |

Interpretation guides: κ: >0.8 excellent; 0.6-0.8 substantial; 0.4-0.6 moderate; <0.4 poor to fair. ICC: >0.75 good reliability; <0.75 poor-to-moderate reliability.
GRA = graduate research assistant; LT = lead teacher; MAR = medication administration record; NA = item reviewed by one rater only.
* Each behavior scored by both lead teacher and graduate research assistant.
† All κ and ICC statistics are statistically significant at p ≤ .003.
‡ Written assignment prior to start of objective structured clinical evaluation; scored by lead teacher only.
§ Written MAR scored by lead teacher only.
Discussion

Most checklist items relating to the cognitive and psychomotor domains attained acceptable interrater reliability; these results can be interpreted as a reliable evaluation of student behaviors for this pediatric medication OSCE. The most reliable items involved easily observable psychomotor skills: handwashing and/or gloving, checking the patient's ID and allergy bands, cleaning the heparin lock for 10 seconds, completing all steps that assess for IV patency, and correctly verbalizing the rate of the IV medication infusion. It is interesting to note that these skills are not "pediatric specific"; they indicate a need to emphasize, practice, and remediate hand hygiene and IV competencies during each clinical course.

Because of potential variability in OSCE setup (station vs. scenario), range of skills assessed, number of examiners, and method of scoring, Rushforth (2007) cautions that interrater reliability results are not transferable to other OSCE checklists. Educators must attain their own checklist
reliability statistics for their particular OSCE. Based on the interrater reliability results, the OSCE checklist needs specific item revisions before student OSCE performance scores, as a whole, can be used for summative evaluations of course content and teaching strategies or curricular and program strengths and weaknesses. Clinical instructors in this course currently offer formative evaluation through face-to-face feedback to each student on the student's OSCE performance in the categories on the checklist.

The items with poor to fair interrater reliability were checking the MAR, professional dress and introduction, explanation to the parent, developmentally appropriate communication, attaching medication to the heparin port, and oral medication administration to the infant. The items with the lowest reliability were from the affective domain. Although affective-domain behaviors are important in nursing, this study verified the difficulty of measuring these behaviors quantitatively. In a review of research literature across health and teaching disciplines, Miller (2010) found that
assessment of student behaviors from the affective domain was the most problematic and challenging concern because of unclear definitions of, and minimal assessment guidance on, affective domain behaviors. Clinical educators are most concerned with affective assessments of "communication skills, appropriate dress and mannerisms, behaviour towards others and time management" (Miller, 2010, p. 6). To assess affective-domain behaviors in nursing students, Miller (2010) stated that these standards must be spelled out in detail for both learners and checklist raters. For this study's OSCE checklist, the item on professional dress and appropriate introduction could be detailed as follows: Student (a) appears in full nursing uniform per school policy (hair pulled back, no visible tattoos, no baseball caps or hats, no jackets, white close-toed shoes, one pair of stud earrings) and (b) clearly states name, role, and school affiliation. Assessing developmentally appropriate communication to an infant could include observable behaviors such as facing the infant when speaking, quiet intonation of voice, simple statements or questions, gentle touching of the infant, and exposing only appropriate areas of the infant rather than full removal of the blanket. Without improved interrater reliability, OSCE performance data on items with poor to fair reliability cannot be interpreted or generalized beyond each individual student's performance.

Rater factors affecting interrater reliability are leniency, lack of familiarity with the checklist and/or OSCE protocol, trivialization of OSCE-related tasks, cognitive bias toward students ("halo effect"), and rater fatigue (Gupta et al., 2010; Iramaneerat & Yudkowsky, 2007). Both raters met to review the OSCE checklist items, the content of the student OSCE packet, and the OSCE protocol, and together viewed two OSCE videotapes, scoring them independently and analyzing the results for rationales behind discrepancies. No cognitive bias was evident: the GRA was not previously familiar with the students, and the lead teacher was not a clinical instructor during the two semesters of this study. Rater fatigue was addressed because both reviewers noted that, to maintain alertness, 10 to 15 videotapes were sufficient at one sitting.

Hodges and McNaughton (2009) suggest that interrater reliability can be affected if one rater acts as an assessor and the other as an evaluator. An assessor functions as a neutral observer with the main goal of assessing the presence or absence of skills. The assessor role is essential in establishing interrater reliability of an OSCE checklist. An evaluator observes performances but adds judgments about the "worthiness" (value) or "goodness" (appropriateness) of a given situation (Hodges & McNaughton, 2009). This role is beneficial when an OSCE is used as a summative form of clinical competency evaluation. In this study, the lead teacher, as rater, may have evaluated not only the students but also OSCE worthiness as a whole, while the GRA, with no involvement in the course, may have scored student performances more objectively. In the future, raters will be trained as neutral observers/assessors so that only the presence or absence of skills will be determinants of OSCE checklist interrater reliability.
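To make concrete the kind of detailing Miller (2010) recommends, the sketch below shows one hypothetical way a scoring tool could encode the professional-dress item as discrete, observable sub-behaviors that a rater marks individually. The data structure and function names are invented for illustration and are not part of the study's checklist.

```python
# Hypothetical encoding (not from the study) of an affective-domain
# checklist item decomposed into observable sub-behaviors, so raters
# mark concrete observations instead of a single global impression.
# Sub-behavior wording follows the example given in the text.

PROFESSIONAL_DRESS_AND_INTRODUCTION = {
    "item": "Professionally dressed, greets patient and parent, introduces self",
    "sub_behaviors": [
        "Appears in full nursing uniform per school policy",
        "Hair pulled back; no visible tattoos, hats, or jackets",
        "White close-toed shoes; one pair of stud earrings only",
        "Clearly states name, role, and school affiliation",
    ],
}

def score_item(rubric: dict, observed: set) -> bool:
    """Mark the checklist item 'yes' only if every sub-behavior was observed."""
    return all(b in observed for b in rubric["sub_behaviors"])

# Example: a rater records which sub-behaviors were seen on the videotape.
seen = {
    "Appears in full nursing uniform per school policy",
    "Clearly states name, role, and school affiliation",
}
print(score_item(PROFESSIONAL_DRESS_AND_INTRODUCTION, seen))  # False
```

Because each sub-behavior is either observed or not, two raters disagreeing on the overall item can trace the disagreement to a specific sub-behavior, which is the mechanism by which detailed standards are expected to improve interrater reliability.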
Limitations

While rater training consisted of scoring two student-videotaped OSCE performances together and discussing the checklist and potential root causes of rater differences, the raters differed in pediatric expertise. One had a pediatric clinical education background, and the other had pediatric research experience; this may have contributed to scoring disparities. The OSCE checklist was designed to measure all behaviors as either yes for correct or no for incorrect. One researcher cautions that binary checklists, when used to assess affective domain behaviors, can be "difficult and even invalid" because behaviors such as caring, communication skills, or rapport with the patient occur on a continuum rather than as present or not present (Norman, 2005). Each item of the OSCE checklist will be reevaluated for specific, observable, and measurable behaviors that may assist in the next psychometric analysis of interrater reliability.
Conclusion

The analysis of interrater reliability of an OSCE checklist is an important nurse educator responsibility when OSCEs will be used as summative evaluation tools. Gupta et al. (2010) describe a dark "Mr. Hyde" side to the revered "Dr. Jekyll" side of OSCEs when factors affecting reliability are not addressed. Generalizability of OSCE checklist use, or consistency of interpretation of OSCE results among different student populations, is affected by lack of item analysis, individualized rater scoring procedures, and vaguely described tasks (Gupta et al., 2010). In this study, checklist items reflecting behaviors from the cognitive and psychomotor domains (handwashing and/or gloving, checking ID and allergy bands, cleaning the heparin port, and assessing IV patency) attained acceptable interrater reliability. Results reflect the difficulty of behavioral assessment in the affective domain, but emphasis on this domain's behaviors, such as professional communication, dress, and attitude, is quite important for successful professional nursing practice. Miller (2010) revealed that many reported complaints to boards of nursing reflect professional practice problems from the affective domain: failure to communicate, breach of confidentiality, and various other unprofessional behaviors. Future nursing education research studies are essential for the development of quantitative measures with clear definitions of observable professional nursing behaviors from the cognitive, psychomotor, and especially the affective domains of learning.
References

Benner, P., Sutphen, M., Leonard, V., & Day, L. (2009). Educating nurses: A call for radical transformation. San Francisco, CA: Jossey-Bass.

Bloomfield, J., Roberts, J., & While, A. (2010). The effect of computer-assisted learning versus conventional teaching methods on the acquisition
and retention of handwashing theory and skills in pre-qualification nursing students: A randomized controlled trial. International Journal of Nursing Studies, 47, 287-294. doi:10.1016/j.ijnurstu.2009.08.003.

Cioffi, J. (2001). Clinical simulation: Development and validation. Nurse Education Today, 21, 479-486. doi:10.1054/nedt.2001.0584.

Gupta, P., Dewan, P., & Singh, T. (2010). Objective structured clinical examination (OSCE) revisited. Indian Pediatrics, 47, 911-920. doi:10.1007/s13312-010-0155-6.

Harden, R. M., & Gleeson, F. A. (1979). Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education, 13, 41-54. Retrieved from http://www.wiley.com/bw/journal.asp?ref=0308-0110.

Hodges, B. D., & McNaughton, N. (2009). Who should be an OSCE examiner? Academic Psychiatry, 33(4), 282-284. doi:10.1176/appi.ap.33.4.282.

Institute of Medicine. (2011). The future of nursing: Leading change, advancing health. Washington, DC: National Academies Press.

Iramaneerat, C., & Yudkowsky, R. (2007). Rater errors in a clinical skills assessment of medical students. Evaluation & the Health Professions, 30(3), 266-283. doi:10.1177/0163278707304040.

Jeffries, P. (Ed.). (2007). Simulation in nursing education: From conceptualization to evaluation. New York, NY: National League for Nursing Press.

Jeffries, P. R., & Norton, B. (2005). Selecting learning experiences to achieve curriculum outcomes. In D. M. Billings & J. A. Halstead (Eds.), Teaching in nursing: A guide for faculty (2nd ed., pp. 187-212). St. Louis, MO: Elsevier.

Jones, A., Pegram, A., & Fordham-Clarke, C. (2010). Developing and examining an objective structured clinical examination. Nurse Education Today, 30, 137-141. doi:10.1016/j.nedt.2009.06.014.

Lund, J., Wayda, V., Woodard, R., & Buck, M. (2007). Professional dispositions: What are we teaching prospective physical education teachers? Physical Educator, 64(1), 38-47. Retrieved from http://findarticles.com/p/articles/mi_hb4322/.

McWilliam, P., & Botwinski, C. (2010). Developing a successful nursing objective structured clinical examination. Journal of Nursing Education, 49(1), 36-41. doi:10.3928/01484834-20090915-01.

Miller, C. (2010). Improving and enhancing performance in the affective domain of nursing students: Insights from the literature for clinical educators. Contemporary Nurse, 35(1), 2-17. Retrieved from http://www.researchgate.net/journal/1037-6178_Contemporary_nurse_a_journal_for_the_Australian_nursing_profession.

Mitchell, M. L., Henderson, A., Groves, M., Dalton, M., & Nulty, D. (2009). The objective structured clinical examination (OSCE): Optimising its value in the undergraduate nursing curriculum. Nurse Education Today, 29, 398-404. doi:10.1016/j.nedt.2008.10.007.

Neumann, J. A., & Forsyth, D. (2008). Teaching in the affective domain for institutional values. Journal of Continuing Education in Nursing, 39(6), 248-254. doi:10.3928/00220124-20080601-07.
Norman, G. (2005). Checklists vs. ratings, the illusion of objectivity, the demise of skills and the debasement of evidence. Advances in Health Sciences Education: Theory and Practice, 10, 1-3. doi:10.1007/s10459-005-4723-9.

Patricio, M., Julião, M., Fereleira, F., Young, M., Norman, G., & Vaz Carneiro, A. (2009). A comprehensive checklist for reporting the use of OSCEs. Medical Teacher, 31(2), 112-124. doi:10.1080/01421590802578277.

Portney, L. G., & Watkins, M. P. (2000). Foundations of clinical research: Applications to practice (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Rentschler, D. D., Eaton, J., Cappiello, J., McNally, S. F., & McWilliam, P. (2007). Evaluation of undergraduate students using objective structured clinical evaluation. Journal of Nursing Education, 46(3), 135-139. Retrieved from http://www.slackjournals.com/jne.

Rushforth, H. E. (2007). Objective structured clinical examination (OSCE): Review of literature and implications for nursing education. Nurse Education Today, 27, 481-490. doi:10.1016/j.nedt.2006.08.009.

Sinclair, B., & Ferguson, K. (2009). Integrating simulated teaching/learning strategies in undergraduate nursing education. International Journal of Nursing Education Scholarship, 6(1), 1-11. doi:10.2202/1548-923X.1676.

Sturpe, D. A., Huynh, D., & Haines, S. T. (2010). Scoring objective structured clinical examinations using video monitors or video recordings. American Journal of Pharmaceutical Education, 74(3), 44. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2865410/.

Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E., & Hercinger, M. (2008). The development of a quantitative evaluation tool for simulations in nursing education. International Journal of Nursing Education Scholarship, 5(1), Article 41. doi:10.2202/1548-923X.1705.

Tomey, A. (2003). Learning with cases. Journal of Continuing Education in Nursing, 34, 34-37. Retrieved from http://www.slackjournals.com/jcen.

Turner, J. L., & Dankoski, M. E. (2008). Objective structured clinical exams: A critical review. Family Medicine, 40(8), 574-578. Retrieved from http://www.stfm.org/fmhub/.

Walsh, M., Bailey, P. H., & Koren, K. (2009). Objective structured clinical evaluation of clinical competence: An integrative review. Journal of Advanced Nursing, 65(8), 1584-1595. doi:10.1111/j.1365-2648.2009.05054.x.

Weis, D., & Schank, M. J. (2002). Professional values: Key to professional development. Journal of Professional Nursing, 18(5), 271-275. doi:10.1053/jpnu.2002.129224.

White, C. B., Ross, P. T., & Gruppen, L. D. (2009). Remediating students' failed OSCE performances at one school: The effects of self-assessment, reflection, and feedback. Academic Medicine, 84(5), 651-654. doi:10.1097/ACM.0b013e31819fb9de.