Analysis of human-in-the-loop tele-operated maintenance inspection tasks using VR


Fusion Engineering and Design 88 (2013) 2164–2167


H. Boessenkool a,b,∗, D.A. Abbink c, C.J.M. Heemskerk d, M. Steinbuch b, M.R. de Baar a,b, J.G.W. Wildenbeest c,d, D. Ronden a, J.F. Koning d

a FOM Institute DIFFER (Dutch Institute for Fundamental Energy Research), Association EURATOM-FOM, Partner in the Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein, The Netherlands¹
b Eindhoven University of Technology, Department of Mechanical Engineering, Dynamics and Control Group, PO Box 513, 5600 MB Eindhoven, The Netherlands
c Delft University of Technology, Faculty of 3mE, BioMechanical Engineering Department, Mekelweg 2, 2628 CD Delft, The Netherlands
d Heemskerk Innovative Technology B.V., Jonckerweg 12, 2201 DZ Noordwijk, The Netherlands

∗ Corresponding author at: FOM Institute DIFFER (Dutch Institute for Fundamental Energy Research), Association EURATOM-FOM, Partner in the Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein, The Netherlands. E-mail addresses: [email protected], [email protected] (H. Boessenkool).
¹ www.differ.nl

Highlights

• Execution of tele-operated inspection tasks for ITER maintenance was analyzed.
• Human factors experiments using Virtual Reality proved to be a valuable approach.
• A large variation in time performance and number of collisions was found.
• Results indicate significant room for improvement for tele-operated free space tasks.
• A promising solution is haptic shared control: assisting the operator with guiding forces.

Article info

Article history: Received 14 September 2012; received in revised form 22 December 2012; accepted 12 February 2013; available online 4 May 2013.

Keywords: Remote maintenance; Tele-manipulation; ITER; Human factors experiment; Task performance

Abstract

One of the challenges in future fusion plants such as ITER is the remote maintenance of the plant. The foreseen human-in-the-loop tele-operation is characterized by limited visual and haptic feedback from the environment, which results in degraded task performance and increased operator workload. To improve tele-operated task performance, insight is required into the expected tasks and the problems encountered during maintenance at ITER. By means of an exploratory human factors experiment, this paper analyses problems and bottlenecks during the execution of foreseen tele-operated maintenance at ITER, identifying the most promising areas of improvement. The focus of this paper is on free space (sub)tasks, where contact with the environment needs to be avoided. A group of 5 subjects was asked to carry out an ITER-related free space task (visual inspection), using a six degree of freedom master device connected to a simulated hot cell environment. The results show a large variation in time performance between subjects and an increasing number of collisions for more difficult tasks, indicating room for improvement for free space (sub)tasks. The results will be used in future research on haptic guidance strategies in the ITER Remote Handling framework.

© 2013 FOM Institute DIFFER (Dutch Institute for Fundamental Energy Research). Published by Elsevier B.V. All rights reserved.

1. Introduction

During operation of ITER a significant part of the components close to the plasma will become activated by radiation and/or contaminated with radioactive and toxic materials (such as tritium and beryllium).


For this reason a significant part of the maintenance activities will need to be carried out using remote handling techniques. Since system availability is a critical success factor for ITER, the (remote) maintenance should be executed reliably and within the shortest possible timeframe. Experience at JET shows that, because of the unpredictable nature of the maintenance situations and tasks, a flexible system using a human-in-the-loop tele-manipulation approach is crucial [1]. Drawbacks of manual control by tele-manipulation are the required extensive operator training and the reduced performance caused by the limited feedback from the environment to the operator. Improving the effectiveness and safety of Remote Handling (RH) is therefore a very important focus.



Table 1. Expected maintenance tasks at ITER (derived from JET experience) [1].

Motion type | Small loads (<15 kg) | Heavy loads (15–45,000 kg)
Free space | All manipulation tasks, visual inspection, surveying and metrology | Movement of (hoisted) components
In contact | Pick up/placement of tools, peg-in-hole like tasks, TIG/MIG welding, drilling, thread tapping, mechanical/vacuum cleaning, sawing, filing, dust and flake sampling, wiring loom installation | Placement and alignment of (hoisted) components

To be able to optimize tele-operated task performance, a thorough understanding of the ITER RH tasks and of the problems encountered during their execution is needed. However, this knowledge is not yet available.

First of all, what are the expected RH tasks in ITER? Detailed sequences of ITER maintenance tasks are not available yet, since the design process of the majority of the reactor components is still ongoing. Remote Handling (RH) experience at JET, however, gives a good indication of the required maintenance tasks at ITER, as it provides fusion-relevant task sequences that have actually been executed and proven. During RH shutdowns at JET more than 300 types of tasks have been executed, with the mass of the manipulated components ranging between 1 and 300 kg. Besides planned maintenance at JET, there have been many unplanned tasks, caused by unexpected conditions inside the torus, e.g. deposits of molten debris, fractured tiles, damaged cables and conduit, electrical short damage, and damaged or dirty windows and mirrors [1]. The ITER RH system should also be flexible enough to execute unplanned tasks, as stated in the ITER Maintenance Management System [2].

There are important differences between JET and ITER RH which should be considered [1,3]. First of all, the overall maintenance approach differs: a large part of ITER RH will take place in dedicated hot cell areas instead of inside the tokamak. Secondly, the size and mass of ITER components and tools will be much larger (component sizes up to 6 m and masses up to 45,000 kg), which will have an impact on the RH strategy and the usability of RH methods (scaling). Thirdly, ITER RH is expected to be more complex (higher requirements, more diversity of tasks, less available manipulation space). In spite of these differences, JET RH is the best available reference for ITER RH and gives a good indication of the task types that can be expected during ITER RH.

ITER maintenance tasks will be quite diverse, and their execution therefore requires different information and approaches for (human) control. It is therefore important to consider different types of tasks when searching for a solution. This paper uses the following general distinction in the nature of maintenance (sub)tasks: 'free space tasks' and 'tasks in contact'. Table 1 shows the expected maintenance tasks at ITER, deduced from JET experience.

The focus of this paper is on free space (sub)tasks. What are the specific problems during the execution of these tasks in a tele-operated situation? The main objective of this research is to provide more insight into which aspects of free space tasks are difficult or time consuming. To analyze human task performance, an exploratory human factors experiment was performed. Test subjects were asked to perform a tele-operated visual inspection task in an ITER-relevant (simulated) environment. It was hypothesized that there would be a large inter-subject variability in performance. Moreover, it was expected that human operators cannot perform the task without making collisions.

2. Methods

2.1. Subjects

The task analysis experiment was performed with a group of 5 subjects. The mean age of the subjects was 24.6 years, with a standard deviation of 3.5 years.

All subjects were right handed. The subjects' tele-manipulation experience at the start of the experiment ranged from limited to none: two subjects (subjects 1 and 2) had considerable experience (<40 h), one subject (subject 3) had little experience (∼5 h), and two subjects (subjects 4 and 5) had no experience with tele-manipulation.

2.2. Task description

The subjects were asked to perform a tele-operated visual inspection task. The level of inspection difficulty increased from 'freely accessible' (P1) to 'moderately accessible' (P2) to 'poorly accessible' (P3). The following subtasks had to be executed in sequence (see Fig. 2): lift the handheld camera from the table (S), move to and inspect planes 1, 2 and 3 in turn (P1, P2 and P3, see white arrows and black planes), and move the camera back to the start position (S). Inspection of a plane was defined as the identification of the randomly placed small white characters by moving the camera over the plane (see Fig. 3). An important instruction was not to touch (and potentially damage) anything.

2.3. Experimental setup

The experiment was performed using a tele-manipulation system (see Fig. 1) available in the Remote Handling Study Center (RHSC) at FOM DIFFER. The setup consisted of a Haption Virtuose™ 6D master device (in top-down configuration), providing a 6 DOF workspace with force feedback. Virtual Reality (VR) technology was used to simulate the slave robot in an ITER-like environment. The virtual Benchmark product [4] was chosen as task environment. A rigid body simulator, based on Nvidia PhysX™ technology, was used to emulate real-time contact interaction, providing realistic feedback to the human operator [5]. A position-error control architecture was implemented between the master and the slave. The master-slave control loop and the physics simulation ran at 500 Hz. The subject was provided with visual feedback from the remote (VR) environment (Fig. 2). Besides an overview, 4 camera views were provided on the left side of the screen, showing (from top to bottom) the handheld camera view, a top view, and the views of the cameras on the left and right slave arm.

2.4. Experiment design and analysis

The experiment contained 4 repetitions per subject. Before the start of the actual experiment, all subjects had a training session until they felt comfortable with the system and the task. For detailed task execution analysis, a large number of variables was recorded during the experiment (sampled at 500 Hz). Data on mental workload was obtained using the NASA TLX questionnaire. Furthermore, video recordings of the procedure were made, giving an overview of the subjects' workspace and the provided visual feedback. Based on the recorded data the following metrics were calculated: completion time (per subtask), distance to the inspection planes, and number of collisions with the environment. The results were summarized using the mean (μ) and standard deviation (σ) and were regarded as statistically significant when p < 0.05.
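As an illustration of how such metrics could be derived from the 500 Hz logs, the sketch below computes a completion time, a minimum distance to an inspection plane, and a collision count, and summarizes a metric with mean and standard deviation. The signal names, the contact-flag representation and the helper functions are assumptions for illustration, not the analysis code used in the experiment.

```python
import numpy as np

FS = 500.0  # assumed sampling rate of the logged signals [Hz]

def subtask_metrics(t, dist_to_plane, in_contact):
    """Per-subtask metrics from logged samples (illustrative only).

    t             : (N,) time stamps of one subtask [s]
    dist_to_plane : (N,) distance 'd' between camera and inspection plane [m]
    in_contact    : (N,) boolean contact flag from the physics engine
    """
    completion_time = t[-1] - t[0]
    min_distance = float(np.min(dist_to_plane))
    # Count rising edges of the contact flag as separate collisions.
    collisions = int(np.sum(np.diff(in_contact.astype(int)) == 1))
    return completion_time, min_distance, collisions

def summarize(values):
    """Mean and standard deviation over subjects/repetitions (mu, sigma)."""
    values = np.asarray(values, dtype=float)
    return values.mean(), values.std(ddof=1)
```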


Fig. 1. Schematic representation of the tele-manipulation setup used for the experiments. The human operator controls the simulated slave robot by manipulating a 6 DOF master device (Haption Virtuose™). The human operator gets visual, haptic and auditory feedback from the remote environment.
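The position-error control architecture of Section 2.3 couples the master and the simulated slave through a virtual spring-damper acting on their position difference, running at the 500 Hz loop rate. The sketch below shows one common form of such a coupling; the gains, loop period and function interface are assumptions for illustration, not the implementation used in the RHSC setup.

```python
import numpy as np

DT = 1.0 / 500.0   # assumed control loop period [s]
KP = 300.0         # assumed coupling stiffness [N/m]
KD = 5.0           # assumed coupling damping [N s/m]

def position_error_coupling(x_master, v_master, x_slave, v_slave):
    """One step of a position-error (spring-damper) master-slave coupling.

    Returns the force commanded to the simulated slave and the force
    fed back to the master device (equal and opposite).
    """
    error = np.asarray(x_master) - np.asarray(x_slave)
    d_error = np.asarray(v_master) - np.asarray(v_slave)
    f_slave = KP * error + KD * d_error   # drives the slave toward the master
    f_master = -f_slave                   # operator feels the coupling force
    return f_slave, f_master
```

In such a scheme, each cycle the slave force is applied in the physics simulation and the master force is rendered on the haptic device, so the operator feels a resistance proportional to how far the slave lags behind.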

Fig. 2. Screenshot of the visual feedback provided to the subjects. The operator starts at 'S' and has to inspect three black planes (indicated by the white arrows), searching for small characters (see Fig. 3). The view of the handheld camera is provided in the upper left corner of the screen (marked in red). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

3. Results and discussion

Fig. 3. Schematic trajectory of a typical inspection task. The operator moves the inspection camera along the black inspection surfaces, identifying the white characters. It is important to avoid collisions by keeping the distance to the surface ('d') within a safe range.
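One simple way to quantify the requirement in Fig. 3, keeping the camera-to-surface distance 'd' within a safe range, is to evaluate the logged distance against an assumed safe band. The band limits below are illustrative assumptions, not values from the experiment.

```python
import numpy as np

D_MIN, D_MAX = 0.02, 0.15   # assumed safe band for 'd' [m] (illustrative)

def fraction_in_safe_band(d_samples):
    """Fraction of logged samples with the camera-to-surface distance 'd'
    inside the assumed safe band (too close risks a collision, too far
    makes the characters hard to identify)."""
    d = np.asarray(d_samples, dtype=float)
    return float(np.mean((d >= D_MIN) & (d <= D_MAX)))
```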

A general observation of the task execution is that relatively inexperienced operators show moderate and varying performance in terms of completion time and accuracy. The completion time per subtask is shown in Fig. 4. First of all, a large variation between subjects can be seen (total completion time: μ = 100.4 s, σ = 39.8 s). The 'inspection' subtask is in fact a position tracking task: move along the surface and control the position based on the visual feedback. The movement between planes is a goal-directed movement. The variation in completion time between subjects is present for the 'inspection' subtasks (total inspection time: μ = 55.2 s, σ = 23.5 s) as well as for the movements between the inspection planes (total 'movement' time: μ = 45.24 s, σ = 18.1 s). No statistical difference could be found between the inspection times of the 3 planes (p = 0.183, F = 1.77, using a two-way ANOVA with factors 'subjects' and 'planes').

Fig. 4. Average duration of the subtasks (4 repetitions), shown per subject. The gray boxes represent the move tasks, the colored boxes represent the inspection tasks. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 5 shows the total number of collisions between the camera and the environment during the inspection of the different planes. It is interesting to note that the number of unwanted collisions with the environment increases significantly with increasing difficulty of the inspection task (from P1 to P3), although the completion time did not increase significantly. Another explanation for the increasing number of collisions could be the different orientation of the planes. A changed orientation requires mental transformations, which degrades task performance and is mentally tiring [6]. Observations indicated that only a small part of the collisions originated from a non-ideal human strategy (high level); most collisions were caused by errors at the skill-based level (low level) [7].

Fig. 5. Total number of collisions between the camera and the environment during the 4 repetitions. A significant difference was found (p < 0.05) using a one-tailed dependent t-test.

Since the (motion control) task appeared to be quite difficult, a high mental workload was expected; however, the reported TLX scores were quite low for subjects 1–4 (around 26 on a scale of 0–100) and moderate for subject 5 (62).

This human factors experiment shows results for relatively inexperienced human operators (real-world operators are only considered experienced after a number of months of work). Although extensive training of operators would probably result in a significant increase in time performance and position accuracy, this research shows that accurate positioning (without collisions) is a fundamental difficulty of free space tasks. Future experiments should confirm these results with experienced operators.

This paper only analyzed free space (sub)task motion. Future work will look in more detail at the remaining tasks involving contact. The insight gained into the nature of free space tasks will be used to optimize tele-operated task performance. A promising direction is to assist the human operator with guiding forces, such as a virtual force field that guides the operator during an inspection motion and/or prevents collisions with the environment. Earlier research by the authors showed that such 'haptic shared control' can result in both a reduction of completion time and an improvement in position accuracy for a tele-manipulated bolt-and-spanner task [8]. Inter-subject variation was also found to be smaller in the assisted condition. Future work will focus on the applicability of this haptic shared control approach in ITER RH.
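The guiding forces mentioned above can be pictured as a virtual force field that attracts the tool toward a reference inspection trajectory and repels it from nearby structure. The sketch below is a minimal illustration of that idea, with assumed gains and interfaces; it is not the haptic shared control implementation evaluated in [8].

```python
import numpy as np

K_GUIDE = 150.0   # assumed guidance stiffness toward the reference path [N/m]
K_REPEL = 400.0   # assumed repulsion stiffness near obstacles [N/m]
D_SAFE = 0.03     # assumed distance below which repulsion is applied [m]

def shared_control_force(x_tool, x_ref, x_obstacle):
    """Illustrative guiding force for haptic shared control.

    x_tool     : (3,) current tool/camera position
    x_ref      : (3,) nearest point on the reference inspection trajectory
    x_obstacle : (3,) nearest point on the surrounding structure
    Returns a force to be added to the feedback rendered on the master device.
    """
    f_guide = K_GUIDE * (np.asarray(x_ref) - np.asarray(x_tool))  # pull toward path
    away = np.asarray(x_tool) - np.asarray(x_obstacle)
    d = np.linalg.norm(away)
    f_repel = np.zeros(3)
    if 0.0 < d < D_SAFE:
        # Repulsion grows as the tool penetrates the assumed safety margin.
        f_repel = K_REPEL * (D_SAFE - d) * (away / d)
    return f_guide + f_repel
```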

4. Conclusion

Human factors experiments using a Virtual Reality task environment proved to be a valuable approach to gain insight into tele-operated task execution. Analysis of a tele-manipulated inspection task showed a large variation in time performance between subjects. Accurate trajectory planning and execution appeared difficult for the subjects, as is apparent from the number of collisions. The presented results indicate significant room for improvement for tele-operated free space tasks. Future work will focus on the improvement of ITER RH using haptic shared control, a promising approach that assists the human operator with guiding forces.

Acknowledgments

This work was carried out under the EFDA Goal Oriented Training Programme (WP10-GOT-GOTRH) with financial support from the FOM Institute DIFFER, both of which are gratefully acknowledged. The views and opinions expressed herein do not necessarily reflect those of the European Commission.


References

[1] A.C. Rolfe, A perspective on fusion relevant remote handling techniques, Fusion Engineering and Design 82 (October (15–24)) (2007) 1917–1923.
[2] A. Rolfe, A. Tesini, ITER Maintenance Management System (IMMS), version 1.2, ITER document 2FMAJY, 2008, pp. 29–30.
[3] A. Tesini, J. Palmer, The ITER remote maintenance system, Fusion Engineering and Design 83 (December (7–9)) (2008) 810–816.
[4] C.J.M. Heemskerk, B.S.Q. Elzendoorn, A.J. Magielsen, G.Y.R. Schropp, Verifying elementary ITER maintenance actions with the MS2 benchmark product, Fusion Engineering and Design 86 (October (9–11)) (2011) 2064–2066.
[5] C.J.M. Heemskerk, M.R. de Baar, H. Boessenkool, B. Graafland, M.J. Haye, J.F. Koning, et al., Extending virtual reality simulation of ITER maintenance operations with dynamic effects, Fusion Engineering and Design 86 (October (9–11)) (2011) 2082–2086.
[6] B.P. DeJong, J.E. Colgate, M.A. Peshkin, Mental transformations in human–robot interaction, in: Mixed Reality and Human–Robot Interaction, 2011, pp. 35–51.
[7] J. Rasmussen, Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models, IEEE Transactions on Systems, Man and Cybernetics 13 (3) (1983) 257–266.
[8] H. Boessenkool, D.A. Abbink, C.J.M. Heemskerk, F.C.T. van der Helm, J.G.W. Wildenbeest, A task-specific analysis of the benefit of haptic shared control during tele-manipulation, IEEE Transactions on Haptics (May) (2012).