Crowdsourcing: a valid alternative to expert evaluation of robotic surgery skills

Michael R. Polin, MD; Nazema Y. Siddiqui, MD, MHSc; Bryan A. Comstock, MS; Helai Hesham, MD; Casey Brown, DO; Thomas S. Lendvay, MD; Martin A. Martino, MD

BACKGROUND: Robotic-assisted gynecologic surgery is common, but it requires unique training. A validated assessment tool for evaluating trainees' robotic surgery skills is the Robotic-Objective Structured Assessments of Technical Skills.

OBJECTIVE: We sought to assess whether crowdsourcing can be used as an alternative to expert surgical evaluators in scoring Robotic-Objective Structured Assessments of Technical Skills.

STUDY DESIGN: The Robotic Training Network produced the Robotic-Objective Structured Assessments of Technical Skills, which evaluate trainees across 5 dry lab robotic surgical drills. Robotic-Objective Structured Assessments of Technical Skills were previously validated in a study of 105 participants, in which dry lab surgical drills were recorded, de-identified, and scored by 3 expert surgeons using the Robotic-Objective Structured Assessments of Technical Skills checklist. Our methods-comparison study uses these previously obtained recordings and expert surgeon scores. Mean scores per participant from each drill were separated into quartiles. Crowdworkers were trained and calibrated on Robotic-Objective Structured Assessments of Technical Skills scoring using representative recordings of a skilled and a novice surgeon. Following this, 3 recordings from each scoring quartile for each drill were randomly selected. Crowdworkers evaluated the randomly selected recordings using Robotic-Objective Structured Assessments of Technical Skills. Linear mixed effects models were used to derive mean crowdsourced ratings for each drill. Pearson correlation coefficients were calculated to assess the correlation between crowdsourced and expert surgeons' ratings.

RESULTS: In all, 448 crowdworkers reviewed videos from 60 dry lab drills and completed a total of 2517 Robotic-Objective Structured Assessments of Technical Skills assessments within 16 hours. Crowdsourced Robotic-Objective Structured Assessments of Technical Skills ratings were highly correlated with expert surgeon ratings across each of the 5 dry lab drills (r ranging from 0.75-0.91).

CONCLUSION: Crowdsourced assessments of recorded dry lab surgical drills using a validated assessment tool are a rapid and suitable alternative to expert surgeon evaluation.

Key words: crowdsourcing, robotic surgery, simulation, surgical training

Introduction

Robotic-assisted gynecologic surgery has become a widely used alternative to traditional laparoscopy. Robotic technology provides surgeons with opportunities for altered dexterity and flexibility within the operative field. However, robotic technology requires unique training to achieve proficiency and mastery.1,2 For teaching institutions, it is difficult to know when to incorporate trainees into robotic surgeries. In an effort to develop a standardized method to determine when a trainee can safely operate under supervision from the robotic console, the Robotic Training Network (RTN) was initiated. The RTN was developed by surgical educators to standardize robotic training for residents and fellows. The initial meetings of the network were supported by travel grants from Intuitive Surgical. However, subsequent meetings, curriculum development, and research studies were all independently funded.

The RTN produced a validated assessment checklist known as the Robotic-Objective Structured Assessments of Technical Skills (R-OSATS).3 The R-OSATS was adapted from previously described standardized assessment tools4,5 and is designed to be used specifically with 5 dry lab robotic surgical drills: (1) tower transfer; (2) roller coaster; (3) big dipper; (4) train tracks; and (5) figure-of-8. The first drill, "tower transfer," requires the trainee to transfer rubber bands between towers of varying heights. The "roller coaster" drill involves moving a rubber band around a series of continuous wire loops. "Big dipper" is a drill requiring the participant to drive a needle through a sponge in specific directions.

"Train tracks" is a drill simulating a running suture. The "figure-of-8" drill consists of throwing a needle in a figure-of-8 conformation and then tying a square surgical knot. Within R-OSATS, each drill is scored using a 5-point Likert scale based on the following metrics: (a) depth perception/accuracy of movements; (b) force/tissue handling; (c) dexterity; and (d) efficiency of movements. The maximum R-OSATS score is 20 points per drill.

Currently, R-OSATS are completed by expert surgeons directly observing trainee performance on robotic simulation drills. Although this is a reliable method of assessment, it requires at least 30 minutes of the evaluator's time per trainee. This is in addition to any setup and practice time that the trainee uses prior to the formal assessment. Given that expert evaluators are also surgeons with busy clinical practices, this time requirement creates a limitation to any objective structured training and assessment process. Thus, we sought to assess whether we could use crowdworkers in place of expert surgeons to assess dry lab surgical skill drills.

Crowdsourcing is the process of obtaining work or ideas from a large group of people. The people who compose the group are known as crowdworkers. Crowdworkers come from the general public and, in the case of this study, they do not necessarily have any prior medical experience or training. A crowdworker may be from anywhere in the world. Typically, crowdsourcing occurs through an online forum or marketplace. Amazon.com Mechanical Turk is one such Internet-based crowdsourcing marketplace where an entity can post tasks for crowdworkers to complete. Through the marketplace, crowdworkers are able to view posted tasks and choose which tasks they are interested in completing. Since crowdworkers are not required to have experience prior to beginning a task, they receive specific training for the tasks they choose to complete. This training is a part of the posted task and is created by the entity requesting the task. After completing a task, crowdworkers receive financial compensation. Essentially, crowdworkers replace traditional employees and they can be solicited by anyone. Crowdsourcing is inexpensive, fast, flexible, and scalable, although studies evaluating the utility of crowdsourcing for assessing complex technical skills are limited.

Crowdsourcing is being explored in several ways within medicine. It has been used in ophthalmology to screen retinal images for evidence of diabetic retinopathy and evaluate optic disk images for changes associated with glaucoma.6,7 Pathologists have used crowdsourcing to quantify malarial parasites on blood smears8 and assess positivity on immunohistochemistry stains.9 Crowdsourcing has been proposed as a means of evaluating surgical skill,10 although these assessments are more complex than still images and thus require validation. We hypothesized that when using a valid and reliable assessment tool, such as R-OSATS, to evaluate trainees performing dry lab drills, crowdworker and expert surgeon scores would be similar. Thus, our primary objective was to assess the degree of correlation between R-OSATS scores ascertained by crowdworkers vs expert surgeons.

Materials and Methods

This is a methods-comparison study comparing 2 methods of assessment for dry lab surgical drills. As a part of the prior R-OSATS validation study,3 105 residents, fellows, and expert robotic surgeons from obstetrics and gynecology, urology, and general surgery performed the 5 robotic dry lab drills: (1) tower transfer; (2) roller coaster; (3) big dipper; (4) train tracks; and (5) figure-of-8. These drills were recorded, de-identified, uploaded to a private Web-based location, and scored by 3 separate expert surgeons. Again, each drill was scored on: (a) depth perception/accuracy of movements; (b) force/tissue handling; (c) dexterity; and (d) efficiency of movements using a 5-point Likert scale, for a maximum of 20 points per drill. For the current methods-comparison study, we utilized these previously recorded videos with their accompanying expert surgeon evaluator scores. The expert surgeons had extensive robotic surgery backgrounds, having completed a median of 108 robotic procedures (range 50-500 per surgeon).11 Furthermore, they were active resident and/or fellow robotic surgery educators.

After obtaining institutional review board approval, we reviewed the previously obtained R-OSATS scores. Since each drill had been viewed and scored by 3 expert surgeons, we calculated the mean expert R-OSATS scores, per video, for each drill. We used the mean expert R-OSATS score to separate the recordings of each dry lab drill into quartiles.

To ensure high-quality responses from crowdworkers, we used techniques previously described by Chen et al12 to select and train crowdworkers via Amazon.com Mechanical Turk. Only crowdworkers with an acceptance rating >95% from previous assignments on Amazon.com Mechanical Turk were able to sign up to evaluate our recordings. To assess crowdworkers' discriminative ability, we used a screening test that required the crowdworkers to watch short side-by-side videos of 2 surgeons performing a robotic dry lab drill and identify which video showed the surgeon of higher skill.

Separate training videos were used for each of the 5 dry lab drills, and participants had to complete the specific training videos for each task that they chose to complete. Additionally, to ensure that the crowdworkers were actively engaged in the scoring process, an attention question was embedded within the survey that directed the crowdworker to leave a particular question unanswered. If a crowdworker failed either the screening or attention questions, that crowdworker's responses were excluded from further analyses. It should be noted that crowdworker education level, prior training, and past work experiences were not taken into consideration in the selection process. After passing the selection process, crowdworkers were shown 1 representative recording of a skilled surgeon and 1 representative recording of a novice surgeon performing the dry lab drill they were going to assess, to train them on R-OSATS scoring. This was accomplished through a virtual online training suite (C-SATS Inc, Seattle, WA).

To compare crowdworker R-OSATS scores with those of expert evaluators, we recognized that we needed to provide videos that spanned a wide range of skill levels for each drill. Three unique video recordings of dry lab surgical drills were randomly selected from each scoring quartile, for a total of 12 videos per drill (this sampling step is sketched in the code below). As a result, crowdworkers evaluated videos that covered the entire skill spectrum. This was repeated for each of the 5 dry lab drills, providing a total of 60 unique recordings available for evaluation. Each video recording was posted on Amazon.com Mechanical Turk for crowdworkers to view and assess. The posted recordings included the appropriate training videos as described above, since the crowdworkers did not necessarily have any formal medical education or prior experience assessing surgical skills. Crowdworkers then evaluated whichever of the 60 recordings they chose, using R-OSATS.
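As a concrete illustration of the quartile-stratified sampling described above, the following is a minimal Python sketch under assumed inputs: a table with one row per recording and hypothetical columns "drill", "video_id", and "mean_expert_score". The helper name and its defaults are illustrative only and are not from the original study materials.

```python
# Minimal sketch (not study code): stratify each drill's recordings into
# quartiles of mean expert R-OSATS score, then randomly draw 3 recordings
# per quartile (12 per drill; 60 across the 5 drills).
import pandas as pd


def select_videos(expert_scores: pd.DataFrame,
                  per_quartile: int = 3, seed: int = 0) -> pd.DataFrame:
    """expert_scores: one row per recording with assumed columns 'drill',
    'video_id', and 'mean_expert_score' (mean of the 3 expert ratings).
    Assumes each quartile contains at least `per_quartile` recordings."""
    selected = []
    for drill, recordings in expert_scores.groupby("drill"):
        # Quartile membership (0-3) based on the mean expert score.
        quartile = pd.qcut(recordings["mean_expert_score"], q=4, labels=False)
        for q in range(4):
            pool = recordings[quartile == q]
            # Random sample without replacement from this quartile.
            selected.append(pool.sample(n=per_quartile, random_state=seed))
    return pd.concat(selected, ignore_index=True)
```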

Expert evaluator interrater reliability was calculated using intraclass correlation coefficients for each of the 5 drills. Linear mixed effects models were used to derive average crowd ratings for each drill.13 Pearson correlation coefficients were calculated to assess the correlation between the crowdsourced and expert R-OSATS scores. Two-sided tests with alpha = 0.05 were used to declare statistical significance. Using estimates from prior crowdsourcing studies,12,14 we estimated needing at least 30 successfully trained crowdworkers to evaluate each recording to derive average R-OSATS scores per video with 95% confidence intervals of 1 point. In a secondary analysis aimed at determining the minimum number of crowdworker ratings needed to maintain high correlation with expert scores, we used bootstrapping to sample data sets with varying numbers of crowd ratings per video and reassessed the correlation with expert ratings. Bootstrapping is a technique that generates multiple random samples (with replacement) from the original data set and allows one to estimate specific statistical parameters of interest. All statistical analyses were conducted using R 3.1.1 (R Foundation for Statistical Computing, Vienna, Austria).
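The analyses described above were run in R 3.1.1. For illustration only, the sketch below re-expresses the central steps in Python under an assumed data layout (one crowd rating per row, with hypothetical columns "video_id" and "score"): per-video crowd means from a random-intercept linear mixed model, their Pearson correlation with the expert means, and the bootstrap over the number of crowd ratings per video. Function names, defaults, and data layouts are assumptions, not study artifacts.

```python
# Illustrative sketch (not the study's R code) of the statistical steps above.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(2016)


def crowd_means_mixed_model(crowd: pd.DataFrame) -> pd.Series:
    """Per-video crowd ratings from a random-intercept linear mixed model
    (overall fixed mean plus a random intercept for each video)."""
    fit = smf.mixedlm("score ~ 1", data=crowd, groups=crowd["video_id"]).fit()
    overall = fit.fe_params["Intercept"]
    # Each video's estimate = overall mean + its estimated random intercept.
    return pd.Series(
        {vid: overall + re.iloc[0] for vid, re in fit.random_effects.items()}
    ).sort_index()


def bootstrap_correlation(crowd: pd.DataFrame, expert_means: pd.Series,
                          k: int, n_boot: int = 1000):
    """Mean and 95% interval of the crowd-expert Pearson r when only k crowd
    ratings per video are resampled (with replacement) in each iteration."""
    videos = list(expert_means.index)
    scores_by_video = {v: crowd.loc[crowd["video_id"] == v, "score"].to_numpy()
                       for v in videos}
    r_values = []
    for _ in range(n_boot):
        subsample_means = [rng.choice(scores_by_video[v], size=k).mean()
                           for v in videos]
        r, _ = pearsonr(subsample_means, expert_means.loc[videos])
        r_values.append(r)
    r_values = np.asarray(r_values)
    return r_values.mean(), np.percentile(r_values, [2.5, 97.5])


# Example usage with hypothetical data frames:
# crowd = pd.DataFrame({"video_id": [...], "score": [...]})  # 1 row per rating
# expert_means = pd.Series({...})        # mean of the 3 expert scores per video
# crowd_means = crowd_means_mixed_model(crowd)
# r, p = pearsonr(crowd_means, expert_means.loc[crowd_means.index])
# for k in range(2, 33):
#     print(k, bootstrap_correlation(crowd, expert_means, k))
```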

Results

In all, 448 crowdworkers from Amazon.com Mechanical Turk evaluated the 60 videos within 16 hours of their posting on the Web-based forum. A median of 42 (41-43) crowdworkers reviewed each video. In all, 53% (n = 237) of crowdworkers reviewed only 1 recording, 25% (n = 113) reviewed 2-5 recordings, and 22% (n = 98) reviewed ≥6 recordings. A total of 2517 R-OSATS ratings were received across the 60 recordings, of which 2119 (84.2%) passed the screening process and were included in the final analysis. The crowdworkers who completed the posted tasks for this study came from the United States, Mexico, and India.

The existing expert evaluator R-OSATS scores had moderate to good15 internal consistency (intraclass correlation coefficient = 0.61, ranging from 0.55-0.69 across drills).

Figure 1 depicts these values for each of the 5 robotic dry lab drills, and how these scores compared to mean crowdworker scores per recording. Crowdworker R-OSATS ratings were highly correlated with expert ratings across each of the 5 dry lab drills (Figure 2). The correlation coefficients for each drill were: (1) tower transfer, r = 0.75, P = .005; (2) roller coaster, r = 0.91, P < .001; (3) big dipper, r = 0.86, P < .001; (4) train tracks, r = 0.76, P = .004; and (5) figure-of-8, r = 0.87, P < .001.

Given the high correlation between crowdworker R-OSATS scores and expert evaluator R-OSATS scores, we secondarily aimed to determine the minimum number of crowdworker scores that would still maintain a high correlation with expert evaluators. Thus, we bootstrapped sample data sets of varying numbers of crowd ratings, from 2-32 crowdworker ratings per video, and reassessed the correlation with expert ratings for a single dry lab drill. Figure 3 displays a plot of these correlation coefficients, between expert ratings and a bootstrapped sample size of crowdworker ratings, for the figure-of-8 drill. Based on these data, obtaining 15 crowdworker assessments per trainee is sufficient to maintain high correlation with expert scores, as that is the point where the curve plateaus.
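The threshold of 15 was read off the plateau in Figure 3 by inspection. A hypothetical rule for automating that judgment is sketched below: choose the smallest crowd size whose mean bootstrapped correlation falls within a tolerance of the correlation obtained with all available ratings. The tolerance and the example numbers are invented for illustration and are not taken from the study.

```python
# Hypothetical plateau rule (illustration only; the study identified the
# plateau by inspecting Figure 3, not with this criterion).
from typing import Dict


def minimum_crowd_size(mean_r_by_k: Dict[int, float], tol: float = 0.02) -> int:
    """mean_r_by_k maps crowd sample size k to the mean bootstrapped Pearson r;
    return the smallest k within `tol` of the full-crowd correlation."""
    full_r = mean_r_by_k[max(mean_r_by_k)]
    for k in sorted(mean_r_by_k):
        if full_r - mean_r_by_k[k] <= tol:
            return k
    return max(mean_r_by_k)


# Made-up values shaped roughly like the figure-of-8 curve in Figure 3:
# minimum_crowd_size({2: 0.70, 5: 0.80, 10: 0.84, 15: 0.86, 20: 0.865, 32: 0.87})
# returns 15
```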

Comment

Training residents in surgical procedures is a challenging and dynamic process that has been made more complex by the introduction of advanced surgical techniques such as robotic-assisted laparoscopy. Objective Structured Assessments of Technical Skills (OSATS) are valuable tools in the surgical education of trainees.4 However, the process of scoring OSATS can be time-consuming and, with the ever-increasing pressures that physicians face for clinical and research productivity, an alternative means to evaluate surgical skills using OSATS would prove helpful. Additionally, in-person expert evaluation is at risk of some degree of subjective bias unless recordings and blinded assessments are used. This study shows that crowdsourced assessments of recorded dry lab robotic surgical drills using a validated assessment tool, the R-OSATS, are a suitable alternative to faculty expert evaluation.

This study is strengthened by the methods-comparison study design, which provides the ideal means of comparing 2 methods of measurement.16 In this study, we compared 2 ways in which a standard, valid tool was scored: (1) by crowdworkers and (2) by expert evaluators. Our study is further strengthened by the previously established validity and reliability of the R-OSATS assessment tool. The R-OSATS evaluates 5 different dry lab drills, and each drill requires trainees to utilize different technical skills to achieve competence. Because we performed a methods-comparison study with all 5 drills, multiple kinds of technical skills were assessed and compared, which furthers the generalizability of our results. For instance, the tower transfer drill tests camera manipulation and instrument dexterity, while the figure-of-8 drill tests a participant's suturing and knot-tying ability. Finally, the large number of crowdworker ratings allowed for narrow confidence intervals despite any potential diversity in the crowd, which strengthens our conclusions.

A limitation of this study was that crowdworkers were able to evaluate any number of the video recordings they desired. The majority of crowdworkers, just over 50%, evaluated only 1 video, while >75% evaluated ≤5 videos. We cannot comment on whether the correlations would have been stronger or weaker if the crowdworkers had been required to evaluate a recording from each of the 5 dry lab drills. Although crowdworkers might have become better raters if they viewed more videos, their attention and ability to focus on details might also have suffered if they were required to review and rate all 5 drills. Another limitation is that this study was limited exclusively to dry lab surgical drills. Live surgery introduces a more dynamic environment. As a result, we cannot comment on the generalizability of our findings to the evaluation of live surgery. However, recent studies have investigated crowdworkers' ability to assess live surgery skills in nongynecologic surgeries.

FIGURE 1

Expert evaluator and mean crowd R-OSATS scores

Individual comparisons between expert evaluator (black dot) and mean crowd (red diamond) Robotic-Objective Structured Assessments of Technical Skills (R-OSATS) scores for each drill. ICC, intraclass correlation coefficient. Polin et al. Crowdsourcing evaluation of surgical skills. Am J Obstet Gynecol 2016.

FIGURE 2

Correlation between expert evaluator and crowdworker R-OSATS scores

Scatterplot and correlation between mean expert evaluator and crowdworker Robotic-Objective Structured Assessments of Technical Skills (R-OSATS) scores for each drill. Dashed line = 45-degree line; solid line = least squares best fit. Polin et al. Crowdsourcing evaluation of surgical skills. Am J Obstet Gynecol 2016.

FIGURE 3

Bootstrapped sample data sets of varying crowd numbers

Correlation coefficient vs bootstrapped crowd sample size, ranging from n = 3 crowd ratings per video to n = 32 total available ratings. Solid line = mean; dotted line = interquartile range; dashed line = 95% confidence interval. Polin et al. Crowdsourcing evaluation of surgical skills. Am J Obstet Gynecol 2016.

Powers et al17 evaluated crowdsourced assessments of robotic partial nephrectomies performed by trainee and attending surgeons using the Global Evaluative Assessment of Robotic Skills (GEARS). They showed that when focusing on a particular task within a procedure (renal artery dissection in the case of their study), crowdsourced scores were highly correlated with expert scores. However, their study was limited by low interrater reliability between expert scores.

Ghani et al18 also evaluated crowdsourced assessments of live surgeries. Their study focused on assessments of attending surgeons performing 4 portions of a robotic radical prostatectomy. They demonstrated similar scores on GEARS and the Robotic Anastomosis and Competency Evaluation when comparing crowdsourced scores with expert scores.

Our findings support those of prior studies evaluating crowdsourced assessments of surgical skills. Holst et al19 demonstrated that crowdworkers' global scores on the GEARS and the Global Operative Assessment of Laparoscopic Skills in a porcine lab setting were similar to scores from expert evaluators.

In a separate study evaluating crowdsourced assessment of dry lab skills using the GEARS and Global Operative Assessment of Laparoscopic Skills validated assessment tools, crowdworkers were able to assess the level of trainee surgical skill relative to a faculty panel, and there was excellent interrater reliability between the crowdworkers and the faculty panel.14 However, that study was conducted using only 1 dry lab drill and utilized only 5 videos. While ours uses a similar study design, our study markedly expands upon those findings and improves generalizability by incorporating 5 dry lab drills encompassing multiple kinds of technical skills and 60 different videos of varying skill levels. In total, there is a growing body of evidence supporting the use of crowdworkers in place of expert evaluators in performing validated assessments of surgical skills.

To our knowledge, this is the first study to determine the minimum number of crowdworker ratings needed to correlate reliably with expert performance ratings. This new information is helpful in allowing programs to estimate future costs. For our study, the costs were $0.70 per video minute, with 1 dry lab drill video lasting no more than 5 minutes. Although a program could use more crowdworkers to obtain tighter score estimates, our data suggest that a minimum of 15 crowdworkers is needed to obtain a score similar to one from expert surgeons. Thus, assuming the average video lasts approximately 3 minutes, obtaining 15 crowdworker ratings for 1 drill would cost roughly $31.50 per dry lab drill per trainee.
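For reference, a minimal helper showing the arithmetic behind this estimate (the per-minute rate and the 15-rating minimum come from the figures quoted above; the 3-minute video length is the stated assumption):

```python
# Cost arithmetic quoted above: $0.70 per video-minute, 15 crowd ratings
# per drill, and an assumed ~3-minute video.
def crowd_cost_per_drill(video_minutes: float = 3.0,
                         ratings: int = 15,
                         rate_per_minute: float = 0.70) -> float:
    """Estimated crowdsourcing cost per dry lab drill per trainee, in dollars."""
    return ratings * video_minutes * rate_per_minute


# crowd_cost_per_drill() -> 31.5, ie, roughly $31.50 per drill per trainee
```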

While our study objective was to estimate the correlation in R-OSATS scores between crowdworkers and expert evaluators, crowdworker evaluations also provide comments for the surgeon. Just as an attending surgeon may provide verbal feedback to a trainee, the crowdworkers were able to provide written feedback. Some examples of this feedback included: "well thought-out movements," "misses the target a few times," "does not use both hands to their full potential," and "never really uses them [hands] in tandem and seems to focus on one at a time." These comments, paired with a recently established minimum threshold score of 14 (out of a possible 20) per drill to establish competence,11 allow crowdworkers to provide trainees with an alternative form of meaningful, competence-based evaluation.

Expert surgeons will still be an integral part of the training of novice surgeons as they develop fundamental skills. However, crowdworkers may provide an alternative to expert surgeons when it comes to the formal assessment and evaluation of trainees' skills. For live surgeries, there are data showing that blinded video assessments of surgeons' technical performance correlate directly with patient outcomes.20 With the growing focus on quality-based outcomes in health care and the continued introduction of new technologies, validated assessments like the R-OSATS may provide an important standardized measure of surgeons' skills. However, evaluation of these skills on a large scale remains an ongoing problem. Our study further advances the possibility of using crowdsourced evaluations, which would provide a rapid and scalable alternative to expert evaluations, in such a setting. Live surgical procedures introduce numerous additional variables, however, and further studies are needed in the live surgical environment to evaluate and develop this concept.

In conclusion, crowdsourcing is a valid alternative to expert evaluation of dry lab robotic skills using a validated assessment tool. It provides a rapid and accurate method of assessing the technical skills of trainees while minimizing burdens on expert surgeon or faculty time.

References

1. Lenihan JP Jr, Kovanda C, Seshadri-Kreaden U. What is the learning curve for robotic assisted gynecologic surgery? J Minim Invasive Gynecol 2008;15:589-94.
2. Woelk JL, Casiano ER, Weaver AL, Gostout BS, Trabuco EC, Gebhart JB. The learning curve of robotic hysterectomy. Obstet Gynecol 2013;121:87-95.
3. Siddiqui NY, Galloway ML, Geller EJ, et al. Validity and reliability of the robotic objective structured assessment of technical skills. Obstet Gynecol 2014;123:1193-9.
4. Martin JA, Regehr G, Reznick R, et al. Objective structured assessment of technical skills (OSATS) for surgical residents. Br J Surg 1997;84:273-8.
5. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative "bench station" examination. Am J Surg 1997;173:226-30.
6. Wang X, Mudie L, Brady CJ. Crowdsourcing: an overview and applications to ophthalmology. Curr Opin Ophthalmol 2016;27:256-61.
7. Mitry D, Peto T, Hayat S, et al. Crowdsourcing as a screening tool to detect clinical features of glaucomatous optic neuropathy from digital photography. PLoS One 2015;10:e0117401.
8. Luengo-Oroz MA, Arranz A, Frean J. Crowdsourcing malaria parasite quantification: an online game for analyzing images of infected thick blood smears. J Med Internet Res 2012;14:e167.
9. Della Mea V, Maddalena E, Mizzaro S, Machin P, Beltrami CA. Preliminary results from a crowdsourcing experiment in immunohistochemistry. Diagn Pathol 2014;9(Suppl):S6.
10. Lendvay TS, White L, Kowalewski T. Crowdsourcing to assess surgical skill. JAMA Surg 2015;150:1086-7.
11. Siddiqui NY, Tarr ME, Geller EJ, et al. Establishing benchmarks for minimum competence with dry lab robotic surgery drills. J Minim Invasive Gynecol 2016;23:633-8.
12. Chen C, White L, Kowalewski T, et al. Crowd-sourced assessment of technical skills (C-SATS): a novel method to evaluate surgical performance. J Surg Res 2014;187:65-71.
13. Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics 1982;38:963-74.
14. Holst D, Kowalewski TM, White L, et al. Crowd-sourced assessment of technical skills (C-SATS): an adjunct to urology resident surgical simulation training. J Endourol 2015;29:604-9.
15. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess 1994;6:284-90.
16. Hanneman SK. Design, analysis, and interpretation of method-comparison studies. AACN Adv Crit Care 2008;19:223-34.
17. Powers MK, Boonjindasup A, Pinsky M, et al. Crowdsourcing assessment of surgeon dissection of renal artery and vein during robotic partial nephrectomy: a novel approach for quantitative assessment of surgical performance. J Endourol 2016;30:447-52.
18. Ghani KR, Miller DC, Linsell S, et al. Measuring to improve: peer and crowd-sourced assessments of technical skill with robot-assisted radical prostatectomy. Eur Urol 2016;69:547-50.
19. Holst D, Kowalewski TM, White LW, et al. Crowd-sourced assessment of technical skills (C-SATS): differentiating animate surgical skill through the wisdom of crowds. J Endourol 2015;29:1183-8.
20. Birkmeyer JD, Finks JF, O'Reilly A, et al. Surgical skill and complication rates after bariatric surgery. N Engl J Med 2013;369:1434-42.

Author and article information

From the Division of Urogynecology, Department of Obstetrics and Gynecology, Duke University, Durham, NC (Drs Polin and Siddiqui); Department of Biostatistics (Mr Comstock) and Division of Pediatric Urology, Department of Urology (Dr Lendvay), University of Washington, Seattle, WA; and Division of Gynecologic Oncology (Dr Martino), Department of Obstetrics and Gynecology (Drs Hesham, Brown, and Martino), Lehigh Valley Health Network, Allentown, PA.

Received April 20, 2016; revised June 15, 2016; accepted June 19, 2016.

Supported by the Lehigh Valley Health Network Research Support Fund.

Disclosure: N.Y.S. received a research grant from Medtronic as well as an honorarium and travel reimbursement from Intuitive Surgical. B.A.C. and T.S.L. are co-founders, board members, and stock owners in C-SATS Inc. M.A.M. received travel reimbursement from Intuitive Surgical. M.R.P., H.H., and C.B. report no conflicts of interest.

Presented at Pelvic Floor Disorders Week 2015, 26th annual meeting of the American Urogynecologic Society, Seattle, WA, Oct. 13-17, 2015; and the 44th Global Congress on Minimally Invasive Gynecology, American Association of Gynecologic Laparoscopists, Las Vegas, NV, Nov. 15-19, 2015.

Corresponding author: Michael R. Polin, MD. [email protected]
