Evaluation of Robotic Cardiac Surgery Simulation Training: A Randomized Controlled Trial

Matthew Valdis, MD; Michael W.A. Chu, MD; Christopher Schlachta, MD; Bob Kiaii, MD

To appear in: The Journal of Thoracic and Cardiovascular Surgery
PII: S0022-5223(16)00234-8
DOI: 10.1016/j.jtcvs.2016.02.016
Reference: YMTC 10334

Received: 15 July 2015; Revised: 17 October 2015; Accepted: 7 February 2016

Please cite this article as: Valdis M, Chu MW, Schlachta C, Kiaii B. Evaluation of Robotic Cardiac Surgery Simulation Training: A Randomized Controlled Trial. The Journal of Thoracic and Cardiovascular Surgery (2016), doi: 10.1016/j.jtcvs.2016.02.016.

Title: Evaluation of Robotic Cardiac Surgery Simulation Training: A Randomized Controlled Trial

Authors: Matthew Valdis, MD (1); Michael W.A. Chu, MD (1); Christopher Schlachta, MD (2); Bob Kiaii, MD (1)

(1) Division of Cardiac Surgery, Department of Surgery, Western University, London Health Sciences Centre, London, Ontario, Canada.
(2) Division of General Surgery, Department of Surgery, Western University, London Health Sciences Centre, London, Ontario, Canada.

Funding for this research was provided in part by a resident research grant from St. Jude Medical. There are no conflicts of interest to disclose with regard to this work.

Corresponding author:
Dr. Matthew Valdis
Department of Cardiac Surgery
B6 University Hospital, London Health Sciences Centre
339 Windermere Road
London, Ontario, Canada N6A 5A5
Phone: 519-860-2567; Fax: 519-663-8815
[email protected]

Word Count: 3494

ABSTRACT

OBJECTIVE: To compare the currently available simulation training modalities used to teach robotic surgery.

METHODS: Forty surgical trainees completed a standardized robotic 10-cm dissection of the internal thoracic artery and placed three sutures of a mitral valve annuloplasty in porcine models, and were then randomized to a wet lab, a dry lab, a virtual reality lab, or a control group that received no additional training. All groups trained to a level of proficiency determined by two expert robotic cardiac surgeons. All assessments were evaluated in a blinded fashion using the Global Evaluative Assessment of Robotic Skills.

RESULTS: Wet lab trainees showed the greatest improvement in both time-based scoring and the objective scoring tool compared with the experts (24.9 ± 1.7 vs. 24.9 ± 2.6, p = 0.704). The virtual reality lab trainees improved their scores and met the level of proficiency set by our experts for all primary outcomes (24.9 ± 1.7 vs. 22.8 ± 3.7, p = 0.103). Only the control group trainees were unable to meet the expert level of proficiency for both the time-based scores and the objective scoring tool (24.9 ± 1.7 vs. 11.0 ± 4.5, p < 0.001). The average duration of training was shortest for the dry lab and longest for the virtual reality simulation (1.6 h vs. 9.3 h, p < 0.001).

CONCLUSIONS: We have completed the first randomized controlled trial to objectively compare the different training modalities of robotic surgery. This work shows the significant benefits of wet lab and virtual reality robotic simulation training and highlights key differences in current training methods. This study will help training programs invest resources in cost-effective, high-yield simulation exercises (ClinicalTrials.gov, NCT02357056).

Abstract Word Count: 250

Key Words: Robotic cardiac surgery, Simulation training, Randomized controlled trial, Wet lab, Dry lab, Virtual reality

INTRODUCTION

Since its inception in the late 1990s, robotic cardiac surgery has increased in popularity, with large numbers of cases being performed at specialized centers1-3. This increase has been driven by patient demands for less invasive approaches to a sternotomy4-6.

Despite the demonstrated benefits and an increase in the total number of robotic surgery cases, exposure to robotic surgery is still very limited for surgical trainees5,6. At the present time, no credentialing body requires proficiency in robotic surgery for the successful completion of any residency program7-9. This, combined with high up-front costs, operating room time constraints, and administrative demands for improved outcomes, all contribute to the limited exposure of surgical trainees7.

Schachner et al. previously reported the experience of junior trainees as they progressed to senior roles in a robotic cardiac surgery program and tracked their intraoperative performances compared with senior surgeons10. The authors concluded that robotic cardiac surgery can be taught through a stepwise approach, in which portions of the operation are entrusted to the trainee with increasing responsibility as their surgical skills improve10. This method of training represents the classic model of education and skill acquisition in surgery; it is neither efficient nor does it utilize the impressive advantages of new training modalities available in surgical disciplines, such as simulation.

A 2011 systematic review of 35 simulation studies (10 wet lab, 12 dry lab, 13 virtual reality; n = 2-49) identified the need for a competency-based training system and a stepwise approach with objective assessments in robotic surgery11. Only three of the included studies involved any comparison between different training modalities, and all three of these studies had sample sizes of only two participants per group11.

Simulation offers great benefits to surgical trainees by allowing for repeated practice of a specific skill set in a controlled and safe environment12-15. This style of training is vastly different from historical surgical training and is necessitated by an ever-increasing focus on outcome-based initiatives, combined with aging and frailer patients and a push from the public for a less invasive surgical approach7. The three main areas of simulated surgical training currently in use are cadaveric and animal models (wet labs), dry labs, and virtual reality simulation16-20. Despite their ongoing use, no direct comparison of these methods exists within the current literature9. The purpose of this study was to determine the most effective method for robotic cardiac surgery training through a prospective randomized controlled trial comparing wet lab, dry lab, and virtual reality simulation with an untrained control group. For this we used a time-based scoring system adapted from the Fundamentals of Laparoscopic Surgery (FLS) program21 and the Global Evaluative Assessment of Robotic Skills (GEARS) scoring tool, a validated objective method for scoring intraoperative robotic performance (Appendix D)22. This work forms one of the largest trials of its kind and the first randomized controlled trial (RCT) comparing the currently available training modalities in robotic surgery.

Materials and Methods

This study was approved by the University Health Science Research Ethics Board at Western University and was registered in the public domain on clinicaltrials.gov (NCT02357056).

Participant Selection, Initial Assessment and Randomization

Forty surgical trainees with less than 10 hours of experience with the da Vinci (Intuitive Surgical, Sunnyvale, CA) surgical system or any robotic surgical simulator were enrolled in the study. Participants were shown five-minute videos of a robotically harvested internal thoracic artery (ITA) and a robotic-assisted mitral valve annuloplasty, highlighting basic operative techniques and relevant anatomy. Participants were then required to harvest a 10-cm length of the ITA pedicle off a porcine chest wall using robotic DeBakey forceps and monopolar spatula cautery. Next, a porcine heart model of the mitral valve was used: two 3-0 Ethibond Excel (Ethicon, Cincinnati, OH) sutures were passed to the participant by an assistant and placed through both the posteromedial and anterolateral trigones of the mitral valve. A third suture was given to the participant and placed through the annulus of the mitral valve and then through a flexible annuloplasty band (St. Jude Medical, St. Paul, MN). Both of these tasks were timed and recorded on the robot's camera using a Stryker 1288 HD Camera Control Unit (Stryker, Kalamazoo, MI) and coded for blinded assessment. After completing the initial assessment, participants were randomized to one of four robotic training streams (wet lab, dry lab, virtual reality simulation, or a control group) using concealed identical cards chosen by the participant from an opaque container (Figure 1).
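As an aside, the card draw described above is equivalent to a simple shuffled allocation. The minimal Python sketch below illustrates that equivalence only; it assumes 10 cards per training stream (which matches the final group sizes) and is not the study's procedure, which used physical cards in an opaque container.

```python
# Illustrative sketch of the card-draw allocation: 40 concealed, identical
# cards (assumed 10 per training stream), shuffled and drawn one per participant.
import random

ARMS = ["wet lab", "dry lab", "virtual reality", "control"]

def build_deck(per_arm: int = 10) -> list:
    """One card per future participant, per_arm cards per training stream."""
    deck = [arm for arm in ARMS for _ in range(per_arm)]
    random.shuffle(deck)
    return deck

deck = build_deck()
allocation = {f"participant_{i + 1:02d}": arm for i, arm in enumerate(deck)}
print(len(deck), allocation["participant_01"])  # 40, and one of the four arms
```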

Wet Lab

The wet lab consisted of the same two tasks as the initial assessment, with ongoing guidance and feedback provided by one of the study investigators. The level of proficiency for these tasks was set by the mean time of completion of two fellowship-trained, expert robotic cardiac surgeons, who performed the robotic ITA harvest and mitral annuloplasty tasks five times each (Figure 2). To ensure the achievement of proficiency was not a random occurrence, each participant was required to pass each task two consecutive times based on time-based scores determined by an equation derived from the FLS scoring system (Appendix B).

Dry Lab

The dry lab training stream consisted of three tasks addressing camera movement and clutching; transferring and EndoWrist manipulation; and needle control, needle driving, suturing, and intracorporeal knot tying. The first task used a pre-drawn template with 10 numbered boxes of varying shapes and sizes, each of which was surrounded by a dot on all four sides. Each participant was required to clutch and move the camera to focus on each box such that all four corners could be seen and all four surrounding dots were excluded (Appendix A). The second and third tasks of the dry lab used the Peg Transfer and Intracorporeal Knot Tying materials from Tasks 1 and 5 of the standard FLS skills program21. The methods for these tasks were exactly as previously described in the FLS manual skills program, with the laparoscopic instruments replaced by the da Vinci robot (Appendix B). Levels of proficiency for each exercise were set by the mean scores of our two expert robotic cardiac surgeons completing each exercise five times.

Virtual Reality

We established a virtual reality (VR) training protocol specific to robotic cardiac surgery using the da Vinci Skills Simulator (Intuitive Surgical, USA), a commercially available robotic surgical simulation platform. We surveyed our expert robotic cardiac surgeons to define the skills important for robotic cardiac surgery. From this we generated a list of useful virtual reality simulation exercises and created a nine-exercise curriculum specific to the skills required for robotic cardiac surgery (Appendix C). Levels of proficiency for each task were set by allowing our expert surgeons to complete each exercise as many times as necessary until they felt they had performed to a level indicative of their abilities. From this, a level of proficiency of 90% or greater with no critical errors was required for each task in order to match the performance of our experts.

Control

A control group was utilized to assess for improvement in skill from the initial assessment due to reasons other than the training that the other groups received. Individuals randomized to this group received no additional training on the robot following the first assessment.

Primary Outcomes and Evaluation

The primary outcomes for this study were 1) the time-based scores upon successful completion of the assessments and 2) the mean GEARS score for each trainee's completion of the two assessment tasks.

Each participant was allowed to repeat each exercise in their respective training stream up to 80 times in order to reach the level of proficiency set by our experts for that specific task. To ensure that successful completion of an exercise was not a random occurrence, each participant was required to pass each exercise two consecutive times, similar to the FLS training program.

Upon achieving the predetermined proficiency score for each task in their respective training stream, all participants were brought back and retested on the original robotic ITA harvest and mitral annuloplasty tasks. All attempts were timed and recorded. The de-identified recordings of the initial and final assessments were objectively assessed for intraoperative surgical skills using the GEARS assessment tool in a blinded fashion by a single investigator to control for inter-observer variability.
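To make the pass criterion concrete, the sketch below encodes one reading of the rule described above (a pass requires two consecutive attempts at or above the expert-derived threshold, within a cap of 80 attempts per exercise). The function name and example data are illustrative, not the study's software.

```python
# Illustrative Python sketch of the proficiency rule; hypothetical data.
from typing import Sequence

MAX_ATTEMPTS = 80        # cap on repetitions per exercise
CONSECUTIVE_PASSES = 2   # two consecutive passes required, as in the FLS program

def reached_proficiency(scores: Sequence[float], threshold: float) -> bool:
    """Return True once two consecutive attempts meet or exceed the
    expert-derived proficiency score, considering at most 80 attempts."""
    streak = 0
    for score in scores[:MAX_ATTEMPTS]:
        streak = streak + 1 if score >= threshold else 0
        if streak >= CONSECUTIVE_PASSES:
            return True
    return False

# Hypothetical attempt scores for one trainee on one exercise.
attempts = [410.0, 455.0, 602.0, 615.0, 640.0]
print(reached_proficiency(attempts, threshold=600.0))  # True (attempts 3 and 4)
```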

Statistical Analysis

Because no previous or similar study exists, we were unable to predict the standard error and significance of our primary outcomes prior to participant enrollment. Data recorded from one expert robotic surgeon and the first ten trainees to complete the initial assessments were used to calculate a minimum sample size of 8 participants in each treatment arm in order to detect a clinically significant difference with a statistical power of 0.90. Because a second expert surgeon was required to set the levels of proficiency for each task, we felt that expanding enrollment to 10 participants per arm would account for any increased variability without being too large for the unavoidable logistical and financial constraints surrounding the study design.

Data analysis was based on the original random allocation of each participant to their assigned training stream, without any crossover. All continuous variables were compared using a Kruskal-Wallis ANOVA, which accommodates our small sample sizes and does not assume normality of the data. The continuous variables from each group were then compared with those of the experts individually using a Mann-Whitney U test.
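For orientation only, the following Python sketch shows how this style of analysis could be run with SciPy. The scores below are randomly generated placeholders centered on the reported group means; they are not the study data, and this is not the authors' analysis code.

```python
# Illustrative analysis sketch; placeholder data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder final GEARS scores (n = 10 per arm) drawn around the reported means.
groups = {
    "wet lab": rng.normal(24.9, 2.6, 10),
    "dry lab": rng.normal(22.4, 3.7, 10),
    "virtual reality": rng.normal(22.8, 3.7, 10),
    "control": rng.normal(11.0, 4.5, 10),
}
experts = rng.normal(24.9, 1.7, 10)  # placeholder expert scores

# Kruskal-Wallis ANOVA across the four training streams (no normality assumption).
h_stat, p_overall = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis across training streams: H = {h_stat:.2f}, p = {p_overall:.4f}")

# Each training stream is then compared with the experts using a Mann-Whitney U test.
for name, scores in groups.items():
    u_stat, p_value = stats.mannwhitneyu(scores, experts, alternative="two-sided")
    print(f"{name} vs. experts: U = {u_stat:.1f}, p = {p_value:.4f}")
```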

RESULTS

Baseline Demographics

At baseline, participants in all four training streams were similar in regard to age, gender, year of training, and previous robotic experience. In addition, no difference was detected in each group's performance of the ITA dissection and mitral valve annuloplasty for either the time-based scoring or the GEARS assessment (Table 1). The expert surgeons scored significantly higher than the trainees on the initial assessment time-based scores for the 10-cm ITA dissection and the mitral valve annuloplasty tasks, as well as significantly better on the average GEARS score (Table 1).

Wet Lab

Trainees in the wet lab improved their 10-cm ITA dissection time-based scores from 488.8 ± 228.6 on the initial assessment to 1076.1 ± 25.8 at the final assessment. Similarly, they improved their time-based mitral valve annuloplasty scores from 381.1 ± 107.8 at the initial assessment to 602.2 ± 11.4 by the final assessment. Both of these scores were found to be significantly better than the experts' by the final assessment (p = 0.003 and 0.031, respectively) (Figure 3). The wet lab group also improved their average GEARS score from 9.3 ± 1.7 to 24.9 ± 2.6 by the final assessment, which was not significantly different from the score of the experts (p = 0.704) (Figure 4). The average total training time to reach the level of proficiency set by our experts was 116.5 ± 32.1 min for trainees in the wet lab group, with an average duration of training of 25.9 ± 13.5 days between the initial and final assessments (Figure 5).

Dry Lab

Trainees in the dry lab improved their 10-cm ITA dissection time-based scores from 388.9 ± 295.1 on the initial assessment to 859.0 ± 143.2 at the final assessment, with no statistical difference between their scores and that of the experts for this task (p = 0.191). Trainees also improved their time-based mitral valve annuloplasty scores from 304.9 ± 197.0 at the initial assessment to 523.6 ± 48.9 by the final assessment, which despite the improvement was found to be significantly lower than the experts' average score (p = 0.013) (Figure 3). The dry lab group also improved their average GEARS score from 8.6 ± 3.3 to 22.5 ± 3.7 by the final assessment, which was not significantly different from the score of the experts (p = 0.160) (Figure 4). The average total training time to reach the level of proficiency set by our experts was 98.0 ± 52.2 min for trainees in the dry lab group, with an average duration of training of 34.0 ± 32.9 days between the initial and final assessments (Figure 5).

Virtual Reality

Trainees in the virtual reality lab improved their 10-cm ITA dissection time-based scores from 457.6 ± 259.9 on the initial assessment to 957.3 ± 98.9 at the final assessment. Similarly, they improved their time-based mitral valve annuloplasty scores from 409.5 ± 106.1 at the initial assessment to 580.4 ± 14.4 by the final assessment. No significant difference was found between either of these scores and those of the experts by the final assessment (p = 0.624 and 0.967, respectively) (Figure 3). The virtual reality group also improved their average GEARS score from 10.2 ± 3.0 to 22.8 ± 2.7 by the final assessment, which was not significantly different from the score of the experts (p = 0.110) (Figure 4). The average total training time to reach the level of proficiency set by our experts was 560.5 ± 167.4 min for trainees in the virtual reality group, with an average duration of training of 46.7 ± 21.3 days between the initial and final assessments (Figure 5).

Control

Trainees in the control group showed an improvement in their 10-cm ITA dissection time-based scores from 451.0 ± 264.1 on the initial assessment to 749.1 ± 171.9 at the final assessment. A similar mild improvement was seen in their time-based mitral valve annuloplasty scores, from 402.3 ± 147.2 at the initial assessment to 463.8 ± 86.4 by the final assessment. With only these small improvements, both time-based scores were found to be significantly lower than those of the experts by the final assessment (p = 0.008 and 0.001, respectively) (Figure 3). The control group showed very limited improvement in their average GEARS score, from 8.4 ± 2.0 to 11.0 ± 4.5 by the final assessment, which was significantly different from the score of the experts (p < 0.001) (Figure 4). The average duration between the initial and final assessments was 34.6 ± 24.1 days (Figure 5).

COMMENT

The failure to detect any statistical difference between the training groups' demographics and baseline scores indicates our randomization was appropriate and no group was at an advantage at the commencement of their robotic training.

Wet Lab

The primary outcome scores for the wet lab indicate the strength of this simulation modality, as this group outperformed all others for all tasks and was even found to be significantly better than our experts. This demonstrates how an exercise that is most similar to the actual operative experience yields the most efficient method of training. This concept has been alluded to previously, and multiple examples exist where educators have attempted to increase the fidelity of simulation to create a more realistic and effective training model (e.g., infusion of pulsatile blood into animal/cadaveric tissues)12. However, this is the first study to demonstrate this principle through experimentation. Exposure to these high-fidelity models allowed trainees to become familiar with the relevant anatomy and robotic instrumentation, delineate the procedural steps, and provided the repetition necessary to develop a safe and efficient technique. However, high costs; difficult acquisition, storage, preparation, and disposal of tissues; and the need for an expert presence are major barriers to implementing this type of training program, which is consistent with the conclusions of other authors in the simulation literature11,12. Because of this, the wet lab is best suited for training individuals who have already obtained basic robotic skills through other modalities, so that these sessions can be used to focus on precise anatomical dissection and advanced, procedure-specific techniques.

Dry Lab

The dry lab group improved all scores on the final assessments but was unable to reach the level of proficiency set by our experts for the mitral annuloplasty. Although they did reach the level of proficiency for the ITA dissection, their average scores were consistently the lowest of the training streams. This indicates that exposure to only simple tasks does not translate to more complex procedures as well as the other training modalities do. Robotic training programs looking to incorporate dry lab simulation must also account for the availability of a designated training robot as well as the high costs of disposable robotic instruments.

Virtual Reality

The virtual reality group improved their scores and met the levels of proficiency set by our experts for both the time-based and GEARS scores. Although they did not reach the same scores as the wet lab, this method of training certainly allows for the acquisition of robotic skill. The merits of virtual reality are demonstrated by the fact that these individuals were never exposed to the porcine tissues or the technique involved in either of the assessments for the entire duration of their training. Improvement in their performance came from an understanding of the robot's functions as a competent technician of the system. The major advantage of this type of training is the powerful scoring tool that provides ongoing feedback for trainees to improve robotic proficiency by monitoring a variety of different metrics (e.g., distance travelled, excessive force). This gives the trainee a better idea of areas for improvement, other than simply performing the task faster, which is the only insight gained from time-based scoring systems. The multiple recorded metrics required to pass each task explain the significantly longer training times needed for subjects in the VR group.

Control

The control group showed minor improvements on their final assessments, but without any extra exposure to the robot they were not able to meet the expert level of proficiency for any of the primary outcomes. These improvements likely represent some familiarization with the surgical anatomy and robotic technique after completing the initial assessment. Because the control group failed to reach all levels of proficiency, it is reasonable to assume that the improvements seen in the three training groups were due to the experience and skill gained during the training exercises of this study.

GEARS Scoring Tool

The GEARS scoring tool proved to be a better indicator of overall robotic proficiency than the time-based scoring systems. It is not specific to any particular robotic surgical procedure, but it does account for the overall efficiency of robotic surgery, which is a reflection of time. In addition, GEARS focuses on depth perception, bimanual dexterity, force sensitivity, autonomy, and robotic control, making it a far more robust evaluation tool than time-based scoring systems. The GEARS scoring tool has been shown to objectively detect differences between novice operators of the robot and expert staff surgeons22. This is consistent with what has been demonstrated in this study based on the baseline assessment scores. The inability to detect a significant difference between the scores of the three training streams and the experts at the final assessments demonstrates their significant improvement in robotic surgical abilities.
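For readers unfamiliar with the tool, the published GEARS instrument rates six domains (depth perception, bimanual dexterity, efficiency, force sensitivity, autonomy, and robotic control) on anchored 1-5 scales and sums them, giving totals from 6 to 30 (Appendix D)22; this is consistent with the score ranges reported above. A minimal sketch of that arithmetic, with hypothetical ratings:

```python
# Minimal sketch of assembling a GEARS total from 1-5 domain ratings;
# the ratings below are hypothetical, not study data.
GEARS_DOMAINS = (
    "depth_perception",
    "bimanual_dexterity",
    "efficiency",
    "force_sensitivity",
    "autonomy",
    "robotic_control",
)

def gears_total(ratings: dict) -> int:
    """Sum the six 1-5 domain ratings; totals therefore range from 6 to 30."""
    missing = set(GEARS_DOMAINS) - ratings.keys()
    if missing:
        raise ValueError(f"missing domain ratings: {sorted(missing)}")
    if any(not 1 <= ratings[d] <= 5 for d in GEARS_DOMAINS):
        raise ValueError("each domain rating must be between 1 and 5")
    return sum(ratings[d] for d in GEARS_DOMAINS)

# Hypothetical blinded rating of one recorded assessment.
example = {d: 4 for d in GEARS_DOMAINS}
example["efficiency"] = 3
print(gears_total(example))  # 23
```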

This work is the first prospective randomized controlled trial to compare the currently available simulation modalities used in robotic surgical training, and it is one of the largest studies on robotic training to be completed. We report a 96.25% completion rate for the final assessment. All individuals completed the training and assessments except for one individual who did not complete the ITA assessment before finishing his training at our institution and another who was unable to complete the training after randomization due to clinical responsibilities.

Study Limitations

One limitation of this study is the small sample size, which is nonetheless consistent with many similar publications, some with sample sizes as low as two and with non-surgical participants enrolled because of the time constraints of surgical trainees11. However, the appropriate power calculations were carried out to ensure the statistical validity of the results. Furthermore, the cost and limited availability of porcine materials precluded the study from involving more extensive surgical skills. With respect to the ITA dissection, only a 10-cm length of the ITA was harvested to conserve materials. This proved to be an adequate compromise in which robotic proficiency could still be evaluated, but it is a simpler task than dissecting the entire ITA pedicle, which is usually 20-30 cm in length. Lastly, only one investigator was used to evaluate each robotic assessment, which may serve as a potential source of bias. Although the GEARS scoring tool has been shown to have excellent internal consistency with low variability among evaluators22, this was done purposefully to eliminate inter-evaluator variability, with all recordings de-identified and coded, blinding the investigator to the type of participant (expert/trainee) and the stage of assessment (baseline/final).

Final Conclusions

Simulation-based exercises must be incorporated into training programs to keep up with advancements in robotic technology and to allow for a higher-yield training experience during each robotic operation. Training programs must evaluate their own institutional resources in order to determine the optimal simulation training they can offer. If a center has the appropriate resources, the results of this study highly favor high-fidelity wet lab simulation under the guidance of an expert robotic surgeon for the fastest acquisition of expert-level robotic skill. However, if this is not possible, virtual reality simulation offers a reasonable alternative that allows for familiarization with the robot's instrumentation and proficiency with a variety of robotic skills.

As robotics becomes mainstream in cardiac surgery, the need for a reliable robotic training program will become paramount. This work will serve to guide training programs in investing resources in cost-effective, high-yield simulation exercises to improve the training of new robotic cardiac surgeons.

REFERENCES

1) Chitwood WR Jr. Atlas of Robotic Cardiac Surgery. Springer London Heidelberg New York Dordrecht. Chapter 1, 1-10.
2) Gao C. Robotic Cardiac Surgery. Springer London Heidelberg New York Dordrecht; 2014.
3) Pugin F, Bucher P, Morel P. History of robotic surgery: from AESOP to ZEUS to da Vinci. J Visc Surg. 2011;148:3-8.
4) Poston RS, Tran R, Collins M, Reynolds M, Connerney I, Reicher B, Zimrin D, Griffith BP, Bartlett ST. Comparison of economic and patient outcomes with minimally invasive versus traditional off-pump coronary artery bypass grafting techniques. Ann Surg. 2008;248:638-646.
5) Moss E, Murphy D, Halkos M. Robotic cardiac surgery: current status and future directions. Robotic Surgery: Research and Reviews. 2014;1:27-36.
6) Kaneko T, Chitwood W. Current readings: status of robotic cardiac surgery. Semin Thoracic Surg. 2013;25:165-170.
7) Chitwood WR, Nifong LW, Chapman WHH, et al. Robotic surgical training at an academic institution. Ann Surg. 2001;234:475-486.
8) Whitehurst SV, Lockrow EG, Lendvay TS, Propst AM, Dunlow SG, Rosemeyer CJ, Gobern JM, White LW, Skinner A, Buller JL. Comparison of two simulation systems to support robotic-assisted surgical training: a pilot study (swine model). J Minim Invasive Gynecol. 2015;22:483-488.
9) Ganpule A, Chhabra JS, Desai M. Chicken and porcine models for training in laparoscopy and robotics. Curr Opin Urol. 2015;25:158-162.
10) Schachner T, Bonaros N, Wiedemann D, Weidinger F, Feuchtner G, Friedrich G, Laufer G, Bonatti J. Training surgeons to perform robotically assisted totally endoscopic coronary surgery. Ann Thorac Surg. 2009;88:523-528.
11) Schreuder HWR, Wolswijk R, Zweemer RP, Schijven MP, Verheijen RHM. Training and learning robotic surgery, time for a more structured approach: a systematic review. BJOG. 2012;119:137-149.
12) Liss MA, McDougall EM. Robotic surgical simulation. Cancer J. 2013;19:124-129.
13) Kumar A, Smith R, Patel VR. Current status of robotic simulators in acquisition of robotic surgical skills. Curr Opin Urol. 2015;25:168-174.
14) Fisher RA, Dasgupta P, Mottrie A, Volpe A, Khan MS, Challacombe B, Ahmed K. An overview of robot assisted surgery curricula and the status of their validation. Int J Surg. 2015;13:115-123.
15) Mimic Technologies Inc. Appendix B - Experienced Surgeon Data. Overview of experienced surgeon data. 217-242.
16) Finnegan KT, Meraney AM, Staff I, Schichman SJ. da Vinci Skills Simulator construct validation study: correlation of prior robotic experience with overall score and time score simulator performance. Urology. 2012;80:330-335.
17) Kelly DC, Margules AC, Kundavaram CR, Narins H, Gomella LG, Trabulsi EJ, Lallas CD. Face, content, and construct validation of the da Vinci Skills Simulator. Urology. 2012;79:1068-1072.
18) Ben-Or S, Nifong L, Chitwood WR Jr. Robotic surgical training. Cancer J. 2013;19:120-123.
19) Liu M, Curet M. A review of training research and virtual reality simulators for the da Vinci Surgical System. Teaching and Learning in Medicine. 2014;27:12-26.
20) Rajanbabu A, Drudi L, Lau S, Press JZ, Gotlieb WH. Virtual reality surgical simulators: a prerequisite for robotic surgery. Indian J Surg Oncol. 2014;5:125-127.
21) Ritter EM, Scott DJ. Design of a proficiency-based skills training curriculum for the fundamentals of laparoscopic surgery. Surg Innov. 2007;14:107-112.
22) Goh AC, Goldfarb DW, Sander JC, Miles BJ, Dunkin BJ. Global Evaluative Assessment of Robotic Skills: validation of a clinical assessment tool to measure robotic surgical skills. J Urol. 2012;187:247-252.

Table 1: Baseline Demographic Characteristics of Study Participants

Characteristic | Wet Lab (n=10) | Dry Lab (n=10) | Virtual Reality (n=10) | Control (n=10) | p value
Mean Age, Years ± SD | 31.3 ± 4.0 | 32.3 ± 5.8 | 32.7 ± 6.1 | 29.9 ± 2.4 | 0.579
Gender, Male, n (%) | 8 (80.0) | 6 (60.0) | 8 (80.0) | 6 (60.0) | 0.619
Gender, Female, n (%) | 2 (20.0) | 4 (40.0) | 2 (20.0) | 4 (40.0) |
Year of Training, Year ± SD | 5 ± 2.5 | 5 ± 2.9 | 5 ± 3.0 | 4 ± 2.4 | 0.801
Previous Robotic Experience, Hours ± SD | 1.7 ± 3.9 | 0.3 ± 0.7 | 2.6 ± 3.2 | 0.8 ± 2.5 | 0.305
10cm ITA Dissection, Score ± SD | 488.8 ± 228.6 | 388.9 ± 295.1 | 457.6 ± 259.9 | 451.0 ± 264.1 | 0.859
ITA GEARS, Score ± SD | 10.3 ± 2.4 | 9.4 ± 3.4 | 12.5 ± 5.1 | 9.2 ± 3.0 | 0.942
Annuloplasty, Score ± SD | 381.1 ± 107.8 | 304.9 ± 197.0 | 409.5 ± 106.1 | 402.3 ± 147.2 | 0.361
Annuloplasty GEARS, Score ± SD | 8.2 ± 1.8 | 7.8 ± 1.8 | 7.8 ± 1.9 | 7.5 ± 2.4 | 0.178


Figure 1: Allocation of Treatment Arm Flow Chart

Figure 2 Legend: Wet Lab Simulation Tasks. Comparison of an intraoperative image from the surgeon's console for ITA dissection (A) with the porcine model (B), and of an intraoperative image of mitral annuloplasty (C) with the porcine model in the lab (D), shows the high degree of fidelity of wet lab simulation compared with the actual robotic operating room experience.


Figure 2: Wet Lab Simulation Tasks

Figure 3: Time-Based Scores for 10cm ITA Dissection and Mitral Annuloplasty

Measure | Wet Lab (n=10) | Dry Lab (n=10) | Virtual Reality (n=10) | Control (n=10)
Initial 10cm ITA Dissection, Score ± SD | 488.8 ± 228.6 | 388.9 ± 295.1 | 457.6 ± 259.9 | 451.0 ± 264.1
Final 10cm ITA Dissection, Score ± SD (p value vs. experts) | 1076.1 ± 25.8 (0.003) | 859.0 ± 143.2 (0.191) | 957.3 ± 98.9 (0.624) | 749.1 ± 171.9 (0.008)
Initial Mitral Annuloplasty, Score ± SD | 381.1 ± 107.8 | 304.9 ± 197.0 | 409.5 ± 106.1 | 402.3 ± 147.2
Final Mitral Annuloplasty, Score ± SD (p value vs. experts) | 602.2 ± 11.4 (0.031) | 523.6 ± 48.9 (0.013) | 580.4 ± 14.4 (0.967) | 463.8 ± 86.4 (0.001)

Figure 4: Average GEARS Scores

Measure | Wet Lab (n=10) | Dry Lab (n=10) | Virtual Reality (n=10) | Control (n=10)
Initial GEARS, Score ± SD (p value vs. experts) | 9.2 ± 1.7 (<0.001) | 8.6 ± 3.3 (<0.001) | 10.2 ± 3.0 (<0.001) | 8.4 ± 2.0 (<0.001)
Final GEARS, Score ± SD (p value vs. experts) | 24.9 ± 2.6 (0.704) | 22.4 ± 3.7 (0.160) | 22.8 ± 3.7 (0.103) | 11.0 ± 4.5 (<0.001)

Figure 5: Average Total Training Time and Duration of Training

Measure | Wet Lab (n=10) | Dry Lab (n=10) | Virtual Reality (n=10) | Control (n=10) | p value
Total Training Time, mins ± SD | 116.5 ± 32.1 | 98.0 ± 52.2 | 560.5 ± 167.4 | - | <0.001
Duration of Training, days ± SD | 25.9 ± 13.5 | 34.0 ± 32.9 | 46.7 ± 21.3 | 34.6 ± 24.1 | 0.116


Central Picture


Appendix A: Dry Lab Task#1: Camera Movement and Clutching Template


Appendix B: Wet and Dry Lab Time-Based Scoring Equations

Wet Lab Tasks

10cm ITA Dissection: Score = 1320 - Time(s)
*Any damage to tissues through cautery, grasping, or avulsion resulted in a score of 0.

Mitral Valve Annuloplasty: Score = 720 - Time(s)
*Any damage to tissues, annuloplasty band, or sutures resulted in a score of 0.

Dry Lab Tasks

Camera Movement and Clutching: Score = 480 - Time(s) - 10(# of Errors)
Errors: 1 point for each red dot visualized; 1 point for each corner not in view.

Peg Transfer: Score = 480 - Time(s) - 10(# of Errors)
Errors: 1 point for each peg dropped.

Intracorporeal Knot Tying: Score = 480 - Time(s) - 10(# of Errors)
Errors: 1 point per mm the needle passed outside of each dot; 1 point per mm between model edges (air knot).
Score of 0 if: the suture is broken, the knot is incorrect, the suture is frayed, or the model is avulsed.

This scoring system was developed from the FLS training program and was replicated as closely as possible and adapted for the robot. The FLS program set levels of proficiency for these tasks by having two fellowship-trained advanced laparoscopic surgeons, whose practices consisted mainly of minimally invasive surgery but who were not overly familiar with the FLS tasks prior to initiation of the study, complete each task five times. It was decided a priori that these values would be pooled and that any outlier more than 2 standard deviations from the mean would be excluded (there were none). The time for proficiency of these tasks was then set as the mean time to completion from this data set. This process was repeated for the tasks listed above by two expert robotic surgeons, again five times, and the times were pooled to determine the overall proficiency score. In this system the proficiency score equation is constructed as follows:

Score = Max time - Expert pooled time - Errors

Max time: the total time an individual was allowed to complete the task. This time is usually 2-3 times greater than the expert pooled time, to allow participants as much time as necessary to complete the task, and it also marks the point at which any additional time would represent such an inefficient performance that a score of 0 would be appropriate.

Expert pooled time: the mean time for completion of each expert's five attempts.

Errors: errors were defined a priori and include the defined errors of the FLS program that are still appropriate for the robotic tasks.
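To make the arithmetic above concrete, the Python sketch below implements the task score formulas listed in this appendix. The pass check reflects one reading of the proficiency rule (an attempt passes when its score meets the expert-derived proficiency score), and the example inputs are illustrative.

```python
# Sketch of the time-based scoring rules described in this appendix.
# Task maximum times come from the formulas above; example inputs are illustrative.
MAX_TIME = {
    "ita_dissection": 1320,       # wet lab task, seconds
    "mitral_annuloplasty": 720,   # wet lab task, seconds
    "camera_clutching": 480,      # dry lab tasks, seconds
    "peg_transfer": 480,
    "knot_tying": 480,
}
ERROR_PENALTY = 10                # dry lab tasks: 10 points per error

def task_score(task: str, time_s: float, errors: int = 0, critical_error: bool = False) -> float:
    """Score = max time - completion time - 10 x errors, floored at zero.

    A critical error (e.g., tissue damage, broken suture, avulsed model)
    zeroes the score, as specified above."""
    if critical_error:
        return 0.0
    return max(MAX_TIME[task] - time_s - ERROR_PENALTY * errors, 0.0)

def passes(task: str, time_s: float, errors: int, proficiency_score: float) -> bool:
    """One reading of the pass rule: the attempt score must meet or exceed
    the expert-derived proficiency score for that task."""
    return task_score(task, time_s, errors) >= proficiency_score

print(task_score("ita_dissection", time_s=600))           # 720.0
print(task_score("peg_transfer", time_s=150, errors=2))   # 310.0
```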


Appendix C: Western Protocol for Virtual Reality Training

VR Simulation Exercise - Level | Description | Primary Skill Tested

Camera Targeting - 2 | Trainees grasp small objects and transfer them through a series of platforms and baskets while zooming in and out to focus the camera on specific targets | Camera Control

Energy Switching - 2 | Trainees use the pedals to cauterize vessels and tissue with both monopolar and bipolar cautery | Energy Control

Pegboard - 2 | Trainees remove several rings from pegs on a board and transfer them between hands to place them on specific pegs on the ground | EndoWrist Manipulation

Matchboard - 2 | Trainees must pick up letters and numbers that are scattered around a box with three lids; each lid covers a spot where the correct number or letter must be placed without touching the sides | EndoWrist Manipulation

Matchboard - 3 | Trainees use the same matchboard as before, but a second sliding door covers each box, requiring a third hand for retraction to place each number or letter inside | 4th Arm Control

Ring Walk - 3 | Trainees must move a ring along a rope that is covered by obstacles, requiring transfers between both hands and a 3rd arm for retraction | 4th Arm Control

Energy Dissection - 2 | Trainees are required to use bipolar cautery and scissors to cauterize and cut six small branching arteries off of a larger artery | Energy Control

Suture Sponge - 3 | Trainees are given a needle which they must pass back and forth between instruments and suture through targets on a sponge brick, forcing them to take forehand and backhand bites with both hands | Needle Driving - Advanced

Vertical Defect Suturing | Trainees place a simple interrupted suture and place three square knots on two vertical defects | Needle Driving - Advanced


Appendix D: Global Evaluative Assessment of Robotic Skill (GEARS) Scoring Tool


List of Abbreviations

ANOVA - Analysis of variance
CSTAR - Canadian Surgical Technologies & Advanced Robotics
CABG - Coronary artery bypass grafting
dVSS - da Vinci Surgical Skills Simulator
dV-Trainer - da Vinci-Trainer
FLS - Fundamentals of Laparoscopic Surgery
GOALS - Global Operative Assessment of Laparoscopic Skills
GEARS - Global Evaluative Assessment of Robotic Skills
HSREB - Health science research ethics board
ITA - Internal thoracic artery
PGY - Post Graduate Year
RCT - Randomized controlled trial
VR - Virtual reality