Resident Education in Laryngeal Stroboscopy: Part II—Evaluation of a Multimedia Training Module


*,1Joel W. Jones, *,1Maraya M. Baumanis, *Mollie Perryman, *Kevin J. Sykes, *Mark R. Villwock, †Cristina Cabrera-Muffly, ‡Jayme Dowdall, *James David Garnett, and *Shannon Kraft

*Department of Otolaryngology—Head and Neck Surgery, University of Kansas School of Medicine, Kansas City, Kansas; †Department of Otolaryngology—Head and Neck Surgery, University of Colorado School of Medicine, Aurora, Colorado; and ‡Department of Otolaryngology—Head and Neck Surgery, University of Nebraska School of Medicine, Omaha, Nebraska
1These authors contributed equally to this work.

Accepted for publication December 30, 2019. This paper was presented at the American Academy of Otolaryngology-Head & Neck Surgery annual meeting in New Orleans, Louisiana on September 17, 2019.
Address correspondence and reprint requests to Maraya M. Baumanis, Department of Otolaryngology—Head and Neck Surgery, University of Kansas Medical Center, Otolaryngology - MS 3010, 3901 Rainbow Blvd., Kansas City, KS 66160. E-mail: [email protected]

Summary: Objective. To evaluate the efficacy of a web-based training module for teaching interpretation of laryngeal stroboscopy in a cohort of otolaryngology residents. Study Design. Randomized controlled trial. Setting. Academic tertiary center. Subjects and Methods. Residents from three training programs were invited to complete an assessment consisting of a survey and five stroboscopic exams. Subsequently, participants were randomized to receive teaching materials in the form of (1) a handout (HO) or (2) a multimedia module (MM) and asked to complete a posttraining assessment. Responses were compared to responses provided by three fellowship-trained laryngologists. Results. Thirty-five of 47 invited residents (74.4%) completed both assessments. Overall mean postassessment scores were 64.3% ± 7.0, with the MM group (67.0% ± 7.6, n = 17) scoring higher (P = 0.03) than the HO (61.6% ± 5.4, n = 18) cohort. Postassessment scores did not differ by postgraduate year (P = 0.75) or institution (P = 0.17). Paired analysis demonstrated an overall mean improvement of 7.4% in the HO cohort (P = 0.03) and 10.3% in the MM cohort (P = 0.0006). Subset analysis demonstrated higher scores for the MM cohort for perceptual voice evaluation (HO = 68.8% ± 11.0; MM = 77.3% ± 10.6, P = 0.03) and stroboscopy-specific items (HO = 55.5% ± 8.2; MM = 61.9% ± 10.8, P = 0.06). On a five-point Likert scale, residents reported improved confidence in stroboscopy interpretation (P < 0.0001), irrespective of cohort (P = 0.62). Residents rated the MM (median = 5) more favorably as a teaching tool compared to the HO (median = 4, P = 0.001). Conclusion. Use of both the written HO and the MM improved scores and confidence in interpreting laryngeal stroboscopy. The MM was more effective for perceptual voice evaluation and stroboscopy-specific items. The MM was also rated more favorably by residents and may be an ideal adjunct modality for teaching stroboscopy. Key Words: Laryngology−Laryngoscopy−Otolaryngology−Resident education−Stroboscopy.

INTRODUCTION
Otolaryngology training programs must constantly balance the competing demands of resident education, service obligations, and patient care. Further constrained by work-hour requirements, limited institutional resources, and variable clinical opportunities, educators must continually innovate to ensure residents receive a consistent training experience and achieve basic competency in their chosen field. Recently, some specialties have begun to examine the effectiveness of web-based courses and multimedia modules as adjuncts to the traditional model of apprenticeship and formal didactic lectures.1 Not only have residents shown improvements in procedural proficiency and case preparation after using e-learning tools, but trainees also express high satisfaction with multimedia teaching technology.2-4

For procedures in which clinical exposure is limited, such as laryngeal videostroboscopy, web-based modules can be a valuable tool for learners to build, and become more confident in, their knowledge base. Stroboscopy requires training and practice to obtain accurate and consistent results. The American Board of Otolaryngology has included stroboscopy in the core curriculum for Otolaryngology residency training.5 As of 2018, however, approximately 25% of training programs did not have a fellowship-trained laryngologist on faculty or an affiliation with a voice center, thereby limiting opportunities to learn stroboscopy in a clinical setting.6 Web-based teaching modules can be both an efficient and effective solution to some of the challenges of providing opportunities for resident learning, and may be particularly well suited to audiovisual tasks such as voice evaluation and interpreting stroboscopy.7 Understanding that acquiring the skills to rate audiovisual exams requires specific instruction and practice, Poburka and Bless developed a computer-based program with 13 exemplars of normal and abnormal stroboscopy exams in 1998. In their study, speech language pathology students with no previous experience making video-perceptual judgments of stroboscopy were presented with a series of exams to rate. After completing the computer-aided instruction, participants showed improved accuracy of stroboscopic ratings compared to expert raters.8 Unfortunately, while this work demonstrated the potential of multimedia tools to aid in the teaching of stroboscopy, the program has not been widely adopted, likely due to technical and logistical limitations of the software at the time.

In a previous study, we identified that residents reported low confidence in interpreting stroboscopy exams. Given a series of five videostroboscopy exams to review, participants scored lower than experts when rating stroboscopy-specific items.6 With this in mind, we developed two training modules: (1) a text-based handout (HO) consisting of written descriptions of stroboscopic findings with still images and (2) a multimedia module (MM) containing the same information with the addition of embedded audio and visual demonstrations of the described findings. Because of the inherently perceptual nature of stroboscopy, we hypothesized that use of the MM would improve accuracy and participant confidence in rating stroboscopic exams compared to use of the written HO as a teaching tool.


METHODS
Study design
Three otolaryngology training programs with fellowship-trained laryngologists on faculty (University of Kansas, University of Nebraska, and University of Colorado) were identified to participate in the study over a 3-month period in 2018. The University of Kansas was selected as the sponsoring institution for the study. Secondary sites were selected based on pre-existing collaborative relationships with these training programs. Approval for this study was granted by the institutional review board at all participating sites. Resident trainees from these institutions were invited to join the study (excluding author JJ), and consent was obtained from the local site director.

In part I of this study, a cohort of residents was asked to complete a survey regarding clinical exposure and training in stroboscopy and to rate five stroboscopic exams in order to identify potential gaps in resident education on the topic. Residents were given 30 days to review and complete the preassessment materials in January 2018.6 To preserve the integrity of the testing materials, participants were not permitted to review the answer key or exams after completion of the assessment. Part II of this study focuses on the efficacy of different types of educational materials to fill that void.

Residents who completed part I of the study were invited to enroll in the second phase in March 2018. Participants were first grouped by postgraduate year (PGY) and then randomly assigned to receive teaching materials in the form of either a written HO or a MM (Figure 1). All educational materials were created by a fellowship-trained laryngologist (SK) and a senior Otolaryngology resident (JJ) at The University of Kansas. The MM was created using PowerPoint software (Microsoft, Redmond, Washington) and Panopto version 5.3 recording software (Panopto, Seattle, Washington). The HO and MM materials contained identical information regarding the background and history of stroboscopy, as well as written/verbal descriptions of the elements of perceptual voice evaluation (PVE), laryngoscopy, and stroboscopy. The HO included written descriptions of normal and abnormal findings, supplemented with representative still images.
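Purely as an illustration of the stratified assignment described above, a minimal sketch is given below; the function name, participant identifiers, and seed are hypothetical, and this is not the randomization procedure actually used in the study.

import random
from collections import defaultdict

def randomize_by_pgy(participants, seed=2018):
    """Assign residents to the handout (HO) or multimedia module (MM) arm,
    stratifying by postgraduate year (PGY). `participants` is a list of
    (resident_id, pgy) tuples; all names here are hypothetical."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for resident_id, pgy in participants:
        strata[pgy].append(resident_id)

    assignments = {}
    for pgy in sorted(strata):
        residents = strata[pgy]
        rng.shuffle(residents)
        # Alternate arms within each PGY stratum so group sizes stay balanced.
        for i, resident_id in enumerate(residents):
            assignments[resident_id] = "HO" if i % 2 == 0 else "MM"
    return assignments

# Example with five hypothetical residents across three training years.
print(randomize_by_pgy([("R01", 1), ("R02", 1), ("R03", 3), ("R04", 4), ("R05", 5)]))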

FIGURE 1. Randomization of participants to the HO or MM group. Of the 38 residents who completed the Phase I preassessment, 19 were assigned to the handout (HO) and 19 to the multimedia module (MM); 18 HO and 17 MM participants completed the Phase II postassessment.

The 24-minute MM consisted of narrated descriptions of the same voice, laryngoscopy, and stroboscopy findings found in the written HO, accompanied by audio and visual examples of the exemplars. The HO and MM included all topics that would be tested in the postassessment exam. Participants received an e-mail link to either the HO or the MM in March 2018 and were given 30 days to review the teaching materials. Subsequently, they were invited to complete a post-training assessment, consisting of five unique stroboscopic exams, and a questionnaire, both of which were designed and stored in a REDCap (Research Electronic Data Capture) database hosted by The University of Kansas.9 The exams, which were different from those included in the preassessment, included representative strobes demonstrating (1) normal vocal folds with underclosure, (2) a vocal fold polyp, (3) a vocal fold cyst, (4) papilloma, and (5) vocal fold paralysis. The questionnaire asked participants to identify PGY, estimate exposure to laryngeal stroboscopy in a clinical and/or educational setting, and self-evaluate level of confidence in interpreting stroboscopy.

Understanding of the material presented in the HO and MM was evaluated using multiple choice questions to grade voice samples and to rate the accompanying laryngoscopic and stroboscopic exams. Understanding of PVE was based on the participant's ability to rate a sample voice using the GRBAS (Grade, Roughness, Breathiness, Asthenia, and Strain) scale.10 Competency in evaluating laryngoscopic findings required participants to characterize vocal fold mobility and supraglottic postures during phonation, as well as to evaluate the true vocal folds for structural pathology. Comprehension in interpreting stroboscopy tested the participant's ability to grade six parameters—periodicity, amplitude, mucosal wave movement, phase symmetry, phase closure, and glottic closure—with answer choices reflecting characteristic descriptions specific to each parameter.
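For illustration only, the rated domains described above can be represented with a small data structure; the field names are hypothetical, and this is a sketch of the scoring schema, not the actual REDCap instrument.

from dataclasses import dataclass, field

# The GRBAS scale rates each dimension from 0 (normal) to 3 (severe).
GRBAS_DIMENSIONS = ("Grade", "Roughness", "Breathiness", "Asthenia", "Strain")

# The six stroboscopy parameters graded in the assessment.
STROBOSCOPY_PARAMETERS = ("periodicity", "amplitude", "mucosal wave movement",
                          "phase symmetry", "phase closure", "glottic closure")

@dataclass
class ExamRating:
    """One participant's ratings for a single stroboscopic exam (hypothetical schema)."""
    grbas: dict = field(default_factory=lambda: {d: None for d in GRBAS_DIMENSIONS})
    laryngoscopy: dict = field(default_factory=lambda: {
        "vocal fold mobility": None,
        "supraglottic postures": None,
        "structural pathology": None})
    stroboscopy: dict = field(default_factory=lambda: {p: None for p in STROBOSCOPY_PARAMETERS})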

Evaluation
Three fellowship-trained laryngologists independently completed the assessments, and these answers were subsequently used to generate a key for scoring the study responses. One point was given for each correct answer when the selected answer matched the established answer key. Residents were allowed to choose "don't know" as an answer choice for the laryngoscopy and stroboscopy portions of the test to decrease the contribution of guessing to the score. If there was a lack of consensus among the responses provided by the fellowship-trained laryngologists, the question was eliminated from scoring. PVE items were retained if the variance between expert graders was only one point, and credit was given if the resident's answer choice matched either answer provided by the laryngologists. A total score was generated for each assessment, as well as subset scores for the different domains within the exam. Grading was performed for both the pre- and postassessment tests.
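A minimal sketch of these scoring rules is shown below; it is an illustrative reconstruction with hypothetical function and variable names, not the grading script actually used in the study.

def build_key_and_score(expert_answers, resident_answers, pve_items):
    """Score one assessment against a key derived from three expert raters.

    expert_answers: dict item_id -> list of the three laryngologists' answers
    resident_answers: dict item_id -> the resident's selected answer
    pve_items: set of item_ids in the perceptual voice evaluation (GRBAS)
               domain, whose answers are integer ratings.
    """
    score, n_scored = 0, 0
    for item, experts in expert_answers.items():
        answer = resident_answers.get(item, "don't know")
        if item in pve_items:
            # PVE items are retained only if expert ratings differ by <= 1 point;
            # credit is given if the resident matches any expert rating.
            if max(experts) - min(experts) > 1:
                continue  # eliminated from scoring
            n_scored += 1
            if answer in experts:
                score += 1
        else:
            # Laryngoscopy/stroboscopy items require expert consensus; otherwise
            # the item is eliminated. "don't know" earns no credit.
            if len(set(experts)) != 1:
                continue
            n_scored += 1
            if answer == experts[0]:
                score += 1
    return 100.0 * score / n_scored if n_scored else 0.0

# Example with three hypothetical items: one PVE item and two stroboscopy items.
experts = {"G": [1, 2, 2],
           "amplitude": ["decreased"] * 3,
           "phase_closure": ["open phase predominates", "closed phase predominates",
                             "open phase predominates"]}
resident = {"G": 1, "amplitude": "don't know"}
print(build_key_and_score(experts, resident, pve_items={"G"}))  # -> 50.0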


Statistical analysis
Descriptive data were collected and analyzed using Prism version 7.0 (GraphPad Software, La Jolla, CA). Reported confidence in interpreting stroboscopy and satisfaction with the teaching materials were assessed on a five-point Likert scale and analyzed with nonparametric tests. Mean assessment scores were calculated, with subset analysis in PVE, laryngoscopy, and stroboscopy. Scores were compared by PGY and by institution using analysis of variance (ANOVA). Paired t tests were used to analyze pre- and postassessment scores. Inter-rater reliability was assessed using Krippendorff's alpha in SPSS (Version 25, Armonk, NY).11 Perceptual voice variables were treated as ordinal measures, while the remaining variables were treated nominally. Krippendorff's alpha is reported for each group along with 95% confidence intervals produced via bootstrapping with 1000 samples. Values of Krippendorff's alpha generally range from 0 to 1, where 1 indicates perfect agreement and 0 indicates agreement no better than chance. A negative alpha indicates less agreement than would be expected by chance and suggests raters have conflicting understandings of the measure.12 Consistent with prior research, suggested interpretations of agreement based upon an alpha score are: 0.00-0.20 = poor; 0.21-0.40 = fair; 0.41-0.60 = moderate; 0.61-0.80 = good; 0.81-1.00 = very good.13,14
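As a rough illustration of this analysis pipeline, the sketch below substitutes SciPy and the open-source krippendorff Python package for Prism and SPSS (an assumption, not the authors' tooling); the scores and ratings it operates on are synthetic placeholders, not study data.

import numpy as np
import krippendorff                      # pip install krippendorff
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic placeholder scores (percent correct), NOT study data.
pre = rng.normal(55, 8, size=35)
post = pre + rng.normal(9, 5, size=35)

# D'Agostino-Pearson normality test, paired t test of pre vs post scores,
# and a one-way ANOVA across three hypothetical institutions.
print("normality P =", stats.normaltest(post).pvalue)
print("paired t    =", stats.ttest_rel(pre, post))
print("ANOVA       =", stats.f_oneway(post[:12], post[12:24], post[24:]))

# Krippendorff's alpha: rows = raters, columns = items; np.nan marks a
# missing rating (e.g., "don't know"). Nominal level for categorical items.
ratings = np.array([[1, 2, 2, 1, np.nan, 3],
                    [1, 2, 3, 1, 2, 3],
                    [1, 3, 2, 2, 2, 3]], dtype=float)
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")

# Percentile-bootstrap 95% CI: resample items (columns) 1000 times.
boot = []
while len(boot) < 1000:
    cols = rng.integers(0, ratings.shape[1], size=ratings.shape[1])
    sample = ratings[:, cols]
    if np.unique(sample[~np.isnan(sample)]).size < 2:
        continue                         # alpha undefined for constant data
    boot.append(krippendorff.alpha(reliability_data=sample,
                                    level_of_measurement="nominal"))
print("alpha =", round(alpha, 2),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(2))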

RESULTS
Demographics and clinical exposure to stroboscopy
Thirty-five of 47 invited residents (74.4%) completed the assessments from part I and part II of this study (Table 1). Twenty-six (68.4%) respondents reported having had didactics on stroboscopy within the last year. Clinical exposure to stroboscopy was variable, with the largest percentage of residents (39.5%) reporting exposure every 3 months and 23.7% reporting exposure every 6 months to a year. One resident had previously participated in a stroboscopy course.

TABLE 1. Demographics of Participants After Randomization to the Multimedia Module (MM) and Handout (HO) for the Three Participating Residency Programs

                        Handout (HO)    Multimedia Module (MM)
Institution
  U of Colorado               3                   3
  U of Kansas                 8                   8
  U of Nebraska               7                   6
Training year
  PGY-1                       5                   3
  PGY-2                       3                   3
  PGY-3                       3                   4
  PGY-4                       4                   3
  PGY-5                       3                   4
Total                        18                  17

Randomization resulted in 17 MM and 18 HO participants (n = 35). Junior residents (PGY 1-3) were randomized to 10 MM and 11 HO; senior residents (PGY 4-5) to 7 MM and 7 HO. Abbreviation: PGY, postgraduate year.

Analysis of PVE, laryngoscopy, and stroboscopy in the post-training assessment
The post-training clinical assessment contained 126 test items (voice evaluation = 25; laryngoscopy = 43; stroboscopy = 58).

Assessment scores for the entire participant cohort, as well as for the HO and MM subgroups, passed a D'Agostino-Pearson normality test to determine whether scores fit a normal distribution (P = 0.99, 0.70, and 0.65, respectively). After completion of the assigned training module, the mean assessment score for the entire cohort was 64.3% ± 7.0, with the MM group (67.0% ± 7.6, n = 17) scoring higher (P = 0.03) than the HO cohort (61.6% ± 5.4, n = 18). Postassessment scores did not differ significantly by PGY (P = 0.75) or institution (P = 0.17). Paired comparisons demonstrated an overall mean improvement of 7.4% in the HO cohort (P = 0.03) and 10.3% in the MM cohort (P = 0.0006; Figures 2 and 3). Subset analysis demonstrated higher scores for the MM cohort for perceptual voice evaluation (HO = 68.8% ± 11.0; MM = 77.3% ± 10.6, P = 0.03) and for stroboscopy-specific items, the latter approaching statistical significance (HO = 55.5% ± 8.2; MM = 61.9% ± 10.8, P = 0.06; Figures 4 and 5). Laryngoscopy scores did not change significantly post-training (HO = 66.8% ± 6.4, P = 0.47; MM = 69.3% ± 8.9, P = 0.52) and were not statistically different between groups (P = 0.35; Figure 6).

Inter-rater agreement among novice and trained raters
Preassessment inter-rater agreement was moderate for PVE and laryngoscopy-related items in both the HO and MM groups (Table 2). Agreement on PVE remained fair in both groups after module completion. Among the MM participants, agreement improved in laryngoscopy (moderate to good), but was relatively unchanged in the HO group. Agreement on stroboscopy-related items was poor for both groups before completion of the training modules. Both groups showed improvements with training, with the MM group achieving higher agreement in the postassessment compared to the HO group (0.49, moderate, vs 0.40, fair). The expert raters participating in this study demonstrated good agreement on laryngoscopy-related items but had only moderate agreement on PVE and stroboscopy questions (Table 3). Glottal closure (0.54), mucosal wave (0.40), and phase symmetry/closure (0.57) were specific areas where expert agreement was only fair or moderate.

Participant evaluation of training materials
Residents reported improved confidence in stroboscopy interpretation (preassessment median = 2, interquartile range = 2; postassessment median = 3, interquartile range = 2; P < 0.0001), irrespective of which training materials they were provided (P = 0.62). Residents rated the MM (median = 5) more favorably as a teaching tool compared to the HO (median = 4, P = 0.001; Figure 7).

FIGURE 2. Paired analysis demonstrated an overall mean improvement of 7.4% in the HO cohort (P = 0.03). Box whisker plots shown with mean denoted as "+."

FIGURE 3. Paired analysis demonstrated an overall mean improvement of 10.3% in the MM cohort (P = 0.0006). Box whisker plots shown with mean denoted as "+."

FIGURE 4. Subset analysis demonstrated higher scores for the MM cohort over the HO cohort for PVE (HO = 68.8% ± 11.0; MM = 77.3% ± 10.6, P = 0.03).

FIGURE 5. Subset analysis demonstrated higher scores for the MM cohort in the stroboscopy subset that approached but did not reach statistical significance (HO = 55.5% ± 8.2; MM = 61.9% ± 10.8, P = 0.06).

FIGURE 6. Subset analysis did not demonstrate a statistically significant difference between the MM and HO cohorts in the laryngoscopy subset (HO = 66.8% ± 6.4, P = 0.47; MM = 69.3% ± 8.9, P = 0.52; between-group P = 0.35).

DISCUSSION
Utilization of web-based learning has been an increasing trend in medical education. A generation of Otolaryngology trainees and medical students who have come of age in the era of "all things online" not only gravitate toward technologically based educational platforms, but also demonstrate improved objective learning compared to traditional modalities.15 In a study of residents preparing for the Otolaryngology Training Exam, participants with access to content-based videos exhibited improved scores in certain subspecialty areas of the exam.15


TABLE 2. Pre- and Post-Test Inter-Rater Agreement (Krippendorff Alpha)

                                Multimedia Module (MM)                          Handout (HO)
Parameter                       Pretest Kα (95% CI)    Post-Test Kα (95% CI)    Pretest Kα (95% CI)    Post-Test Kα (95% CI)
Perceptual voice evaluation     0.46 (0.43-0.48)       0.48 (0.45-0.51)         0.41 (0.39-0.43)       0.43 (0.41-0.45)
Laryngoscopy                    0.54 (0.52-0.55)       0.63 (0.62-0.65)         0.52 (0.51-0.54)       0.56 (0.55-0.58)
  Vocal fold motion             0.68 (0.66-0.71)       0.79 (0.77-0.81)         0.70 (0.68-0.72)       0.71 (0.69-0.72)
  Vocal fold pathology          0.25 (0.22-0.29)       0.44 (0.41-0.48)         0.27 (0.24-0.30)       0.39 (0.36-0.42)
  Supraglottic postures         0.39 (0.36-0.42)       0.53 (0.50-0.56)         0.35 (0.33-0.38)       0.47 (0.44-0.49)
Stroboscopy                     0.26 (0.25-0.28)       0.49 (0.47-0.50)         0.25 (0.24-0.26)       0.40 (0.38-0.41)
  Amplitude                     0.21 (0.18-0.24)       0.43 (0.39-0.46)         0.20 (0.17-0.22)       0.35 (0.33-0.38)
  Wave                          0.16 (0.13-0.20)       0.38 (0.34-0.42)         0.03 (0.00-0.07)       0.36 (0.33-0.40)
  Glottic closure               0.44 (0.40-0.47)       0.60 (0.56-0.66)         0.42 (0.39-0.45)       0.46 (0.43-0.49)
  Periodicity                   0.00 (−0.06 to 0.06)   0.43 (0.38-0.49)         0.02 (−0.03 to 0.07)   0.31 (0.26-0.36)
  Phase closure and symmetry    0.17 (0.14-0.21)       0.41 (0.36-0.45)         0.12 (0.09-0.16)       0.26 (0.23-0.30)

For Kα, 0.00-0.20 = poor; 0.21-0.40 = fair; 0.41-0.60 = moderate; 0.61-0.80 = good; 0.81-1.00 = very good. Abbreviations: CI, confidence interval; Kα, Krippendorff alpha.

TABLE 3. Inter-Rater Agreement for Expert Raters (Kα)

Parameter                       Expert Kα (95% CI)
Perceptual voice evaluation     0.53 (0.39-0.65)
Laryngoscopy                    0.77 (0.71-0.83)
  Vocal fold motion             1.00 (1.00)
  Vocal fold pathology          0.61 (0.46-0.76)
  Supraglottic postures         0.64 (0.52-0.74)
Stroboscopy                     0.59 (0.52-0.66)
  Amplitude                     0.62 (0.50-0.75)
  Wave                          0.40 (0.24-0.57)
  Glottic closure               0.54 (0.39-0.70)
  Periodicity                   0.86 (0.65-1.00)
  Phase closure and symmetry    0.57 (0.42-0.72)

For Kα, 0.00-0.20 = poor; 0.21-0.40 = fair; 0.41-0.60 = moderate; 0.61-0.80 = good; 0.81-1.00 = very good. Abbreviations: CI, confidence interval; Kα, Krippendorff alpha.

FIGURE 7. On a five-point Likert scale, residents rated the MM (median = 5) more favorably as a teaching tool compared to the HO (median = 4, P = 0.001).

Video materials demonstrating surgical procedures have been highly rated among OTO-HNS trainees and are more likely to promote self-study.15 The audiovisual nature of stroboscopy makes it ideal subject matter for computer-based learning. In the late 1990s, Poburka and Bless first explored this concept by developing a computer-aided instructional program, which proved to be an effective tool among speech language pathology students but was limited by the logistical requirements for both hardware and software. In this study, we examined a web-based teaching and assessment module and compared it to traditional text materials on the subject of stroboscopy. Having established in a prior study that residents have a poor understanding of the subject matter, it is not surprising that both the MM and the HO cohorts demonstrated improved overall scores after reviewing the study materials.6

On further analysis, however, the MM group outperformed the HO group in PVE and stroboscopy in the postassessment. Scores in both groups remained relatively flat for laryngoscopy items, which likely reflects a greater degree of pre-existing experience and knowledge in basic laryngoscopy. In the preassessment, overall scores (P = 0.04) and stroboscopy subset scores (P = 0.01) differed significantly when comparing junior to senior residents.6 Interestingly, this difference was not detected in the postassessment scores. This suggests either that the HO and MM materials were effective interventions that elevated all participants to a similar level of understanding or, more likely, that postassessment comprehension was close enough between groups that the study was not sufficiently powered to detect smaller differences.

One of the difficulties in teaching PVE and stroboscopy rating is the inherently subjective nature of grading, which impacts an examiner's ability to achieve consistent and reliable ratings. Perhaps a better assessment of the efficacy of a teaching tool, particularly in an area with an inherent degree of subjectivity, is the ability of that tool to improve the agreement of ratings among trainees. Stroboscopy has long been valued in the evaluation and treatment of voice-related disorders, but the ability to interpret and apply the information obtained is operator specific.16,17 A recent systematic review examining the rigor and consistency of inter- and intrarater reliability of stroboscopic exams found that interrater, and even to some degree intrarater, reliability was poor.18 Given the apprenticeship model under which most laryngologists currently train, this may stem from differences in how interpretation is taught. The inter-rater agreement of our experts, each of whom trained at a different fellowship program, was, in fact, only moderate to good in most subset areas.

Looking more closely at the data from this cohort, the MM group demonstrated a greater degree of inter-rater reliability in stroboscopy-related items compared to the HO group. Intuitively, this makes sense, as the nuances of stroboscopy are best appreciated in video form. Interestingly, the subset areas where the MM cohort participants had the least amount of agreement (wave, phase closure/symmetry) overlapped with the same areas where there was lesser concordance among our experts. This further highlights the need not only for ongoing improvements in the rigor of expert rating protocols, but also for further development of our multimedia tools to teach these concepts.

Although the MM group posted significantly better scores in PVE compared to the HO participants, there was not a demonstrable improvement in rater agreement after training, as was anticipated. There are a variety of rating scales for PVE. The GRBAS scale has consistently been shown to be one of the simpler and more reliable measures for clinical use, which is why it was selected for this study.19 Learners, however, have to use different strategies compared to expert raters, and training is required to produce consistent ratings.20-22 Internal consistency improves with practice, but as each rater has their own experience and anchors, inter-rater reliability is difficult to achieve. Even our expert raters had only moderate agreement when grading voices in this study. This suggests that, while the web-based audio format has potential as a learning platform for PVE, future iterations of this module will need to be adapted to accommodate mild variations in this subjective grading between raters.

An additional limitation of this study is the small number of trainee and expert raters involved. For logistical reasons, this pilot study recruited residents from two Midwestern and one Southwestern training programs. While stroboscopy is included in the core curriculum, didactic content and clinical exposure to stroboscopy can vary across training programs and limit nationwide or regional generalization of our data (although assessment results and confidence levels did not differ between training programs). This fact, however, only serves to support the argument for the on-going development of widely accessible learning platforms.


To ensure the highest possible standards for teaching future otolaryngologists, subsequent attempts to develop a multimedia web-based stroboscopy teaching platform will require the collaborative efforts of voice specialists from a variety of training backgrounds, as well as rigorous psychometric analysis, prior to dissemination.

CONCLUSION
Use of both a written HO and a MM improved scores and confidence in interpreting laryngeal stroboscopy among residents in training. The MM was more effective in improving scores in PVE and inter-rater agreement. The MM was also rated more favorably by residents, indicating that web-based modules may be an ideal adjunct modality for teaching stroboscopy.

DECLARATION OF INTEREST
The authors declare no conflict of interest or financial support for this project.

SUPPLEMENTARY MATERIALS
Supplementary material associated with this article can be found in the online version at https://doi.org/10.1016/j.jvoice.2019.12.026.

REFERENCES
1. Jokinen E, Mikkola TS, Harkki P. Evaluation of a web course on the basics of gynecological laparoscopy in resident training. J Surg Educ. 2017;74:717–723.
2. Hearty T, Maizels M, Pring M, et al. Orthopaedic resident preparedness for closed reduction and pinning of pediatric supracondylar fractures is improved by e-learning: a multisite randomized controlled study. J Bone Joint Surg Am. 2013;95:e1261–e1267.
3. Hindle A, Cheng J, Thabane L, et al. Web-based learning for emergency airway management in anesthesia residency training. Anesthesiol Res Pract. 2015;2015:971406.
4. Mitchell JD, Mahmood F, Wong V, et al. Teaching concepts of transesophageal echocardiography via web-based modules. J Cardiothorac Vasc Anesth. 2015;29:402–409.
5. American Board of Otolaryngology. Otolaryngology Head and Neck Surgery Comprehensive Core Curriculum (October 2017). 2018. https://www.aboto.org/pub/Core%20Curriculum.pdf. Accessed July 29.
6. Jones JW, Perryman M, Judge P, et al. Resident education in laryngeal stroboscopy and perceptual voice evaluation: an assessment. J Voice. 2018. https://doi.org/10.1016/j.jvoice.2018.11.016. [Epub ahead of print].
7. Whitson BA, Hoang CD, Jie T, et al. Technology-enhanced interactive surgical education. J Surg Res. 2006;136:13–18.
8. Poburka BJ, Bless DM. A multi-media, computer-based method for stroboscopy rating training. J Voice. 1998;12:513–526.
9. Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
10. Hirano M. Clinical Examination of Voice. 1st ed. Berlin, Heidelberg, New York: Springer; 1981.
11. Hayes A, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1:77–89. https://doi.org/10.1080/19312450709336664.
12. Krippendorff K. Reliability in content analysis. Human Comm Res. 2004;30:411–433. https://doi.org/10.1111/j.1468-2958.2004.tb00738.x.


13. Krippendorff K. Content Analysis: An Introduction to Its Methodology. London: Sage Publications; 1980.
14. Altman DG. Practical Statistics for Medical Research. London: Chapman and Hall; 1991.
15. Tarpada SP, Hsueh WD, Gibber MJ. Resident and student education in otolaryngology: a 10-year update on e-learning. Laryngoscope. 2017;127:E219–E224.
16. Sataloff RT, Spiegel JR, Hawkshaw MJ. Strobovideolaryngoscopy: results and clinical value. Ann Otol Rhinol Laryngol. 1991;100(9 Pt 1):725–727.
17. Remacle M. The contribution of videostroboscopy in daily ENT practice. Acta Otorhinolaryngol Belg. 1996;50:265–281.


18. Bonilha HS, Focht KL, Martin-Harris B. Rater methodology for stroboscopy: a systematic review. J Voice. 2015;29:101–108.
19. Webb AL, Carding PN, Deary IJ, et al. The reliability of three perceptual evaluation scales for dysphonia. Eur Arch Otorhinolaryngol. 2004;261:429–434.
20. Kreiman J, Gerratt BR, Precoda K. Listener experience and perception of voice quality. J Speech Hear Res. 1990;33:103–115.
21. Bassich CJ, Ludlow CL. The use of perceptual methods by new clinicians for assessing voice quality. J Speech Hear Disord. 1986;51:125–133.
22. De Bodt MS, Wuyts FL, Van de Heyning PH, et al. Test-retest study of the GRBAS scale: influence of experience and professional background on perceptual rating of voice quality. J Voice. 1997;11:74–80.