Development and evaluation of a virtual training environment for on-line robot programming

International Journal of Industrial Ergonomics 53 (2016) 274–283


Dimitris Nathanael*, Stergios Mosialos, George-C. Vosniakos
School of Mechanical Engineering, National Technical University of Athens, Zografou, Greece

Article history: Received 16 July 2015; Received in revised form 3 December 2015; Accepted 22 February 2016; Available online xxx

Abstract

The paper reports on the development and evaluation of a virtual reality system to support training in on-line programming of industrial robots. The system was evaluated by running training experiments with three groups of engineering students in the real, virtual and augmented virtual robot conditions. Results suggest that the group with prior training in the virtual reality system augmented with cognitive/perceptual aids clearly outperformed the group that executed the tasks on the real robot only. The group trained in the non-augmented virtual reality system did not demonstrate the same results. It is concluded that the cognitive/perceptual aids embedded in the augmented virtual reality system had a positive effect on all task performance metrics and on the consistency of results across participants on the real robot. Virtual training environments need not be designed as close as possible to the real ones. Specifically designed augmented cognitive/perceptual aids may foster skill development that can be transferred to the real task. The suggested training environment is simple and cost-effective for training novices in an entry-level task. © 2016 Elsevier B.V. All rights reserved.

1. Introduction

Virtual reality systems provide a valuable tool for training in various industries where either the cost or the possible negative consequences of exposing trainees to the real task environment are considerable. Spatial skill and procedural learning transfer from a virtual to a real environment has been reported to be positive in several of the cases examined (Regian, 1997; Waller et al., 1998; Aurich et al., 2009). Nevertheless, even when a clear transfer of training occurs, it is probable that the overall effect of training masks a mixture of more specific effects, some facilitating correct real-world performance (positive transfer) whereas others hinder it (negative transfer). There is a widespread belief that the main challenges for Virtual Reality training effectiveness and applicability have to do with the physical fidelity necessary to mimic the resolution of the physical world (Gupta et al., 2008; Slater and Wilbur, 1997). This view is, however, probably limited. In effect, some of the most successful Virtual Reality training systems, such as the MIST VR surgical simulator (Gallagher et al., 1999),

* Corresponding author. E-mail address: [email protected] (D. Nathanael). http://dx.doi.org/10.1016/j.ergon.2016.02.004 0169-8141/© 2016 Elsevier B.V. All rights reserved.

have been highly successful even though they would be judged as of very low fidelity by today's standards. In addition, simulators that do not merely act as real-world replacements have had considerable success in transfer of training by making creative use of the possibilities offered by virtual environments, e.g. by flying around a building instead of walking, or by making use of transparent walls (Lathan et al., 2002). Bardy et al. (2012) propose that the value of a training system should be judged (i) by its ability to provide relevant experience, (ii) by the provision of facilitation and guidance for the acquisition of the designated skill and (iii) by the transfer from VR training to performance in the real world. Therefore, relevance, facilitation and transferability are the key constructs and the crucial evaluation criteria for a training system. To become effective, then, VR training should be oriented towards establishing what it is that is being transferred from the virtual to the real environment (Rose et al., 2000). Industrial robots have been chiefly employed in the last three decades for material handling, e.g. tending machine tools, but also for manufacturing processing, e.g. welding, deburring etc., as well as for assembly. Their programming and re-programming is time-consuming, or even tricky in the case of complex manufacturing scenarios, but it is the most important task to be performed throughout their life. Traditionally, programming systems for industrial robots can be


divided into three categories: (a) guiding systems, where the robot is manually moved to each desired position and the joint coordinates are recorded, (b) robot-level programming systems, typically employing a relatively low-level programming language provided with the robot, and (c) task-level programming systems, functioning at a higher level, whereby the goals to be achieved rather than the moves as such are specified. In the case of more sophisticated non-industrial robots with embedded intelligence, programming may involve the use of graphical or text-based programming languages and automatic programming, including learning systems, programming by demonstration and instructive systems (Biggs and MacDonald, 2003). In industrial practice, guiding systems are employed most often (usually termed ‘on-line’, ‘teaching’ or ‘lead-through’ systems), but note that the ‘taught’ positions are used together with controller-specific language commands that are manually entered to generate a program. The complete path is visualised by executing the program on the robot as such or on a simulator. Calculations and complex logic are generally not straightforward to integrate into such programming systems. Modelling the robotic cell on a simulator and subsequently programming the robot on it (usually termed ‘off-line’ programming) offers advantages, such as continuously available visualisation, parametric definition of the path (Zlajpah, 2008) and, most notably, no need to keep the robot from executing its normal tasks while programming it. User interaction is generally required (Jara et al., 2011), but optimisation algorithms and tools may be resorted to in order to automate some of the interactive tasks (Chen et al., 2003). Simulators are based on CAD modellers (Pires et al., 2004) and may exploit relevant functionality, notably constraint-based modelling (Vosniakos and Chronopoulos, 2009).
However, even if geometrically accurate models of all equipment elements are available, the simulation concerns kinematics only and does not encompass dynamics and control models, which would allow behaviour close to that of the real robot. Thus, the robot path derived by off-line programming may need to be corrected on the real robot according to calibration and other procedures that may also be time-consuming (Angelidis and Vosniakos, 2014). Simulators based on Virtual and Augmented Reality (VR-AR) were initially simplistic (Burdea, 1999), but more recent developments in VR/AR are increasingly making an impact (Pan et al., 2012). Purpose-built VR environments in lieu of previous-generation CAD-based simulators have been reported (Gogouvitis and Vosniakos, 2014). Multimodal interfaces such as CAVE, Head Mounted Displays (HMD), 3D haptic devices and force/acceleration sensors have already been employed in off-line programming of complex manufacturing scenarios (Mogan et al., 2008; Haton and Mogan, 2008). Path planning decisions are supported in VR/AR by indicating collision-free volumes (Chong et al., 2009), by presenting alternative collision-free paths (Hein and Worn, 2009), by fitting trajectory curves to just a few points through learning algorithms (Fang et al., 2012), etc. In addition, interesting AR interfaces are emerging, concerning, for instance, effective definition of robot operations at the task level using real workpiece data and process limits (Reinhart et al., 2008), specification of end-effector orientation observing dynamic constraints of the robot (Fang et al., 2012), and facilitation of task recognition through virtual fixtures, both visual and tactile, in a programming-by-demonstration paradigm (Aleotti et al., 2004). The notable advantage of VR, but especially of AR and mixed reality (MR), approaches is that they essentially allow intermingling off-line and on-line programming.
In particular, they enable the user to manipulate a digital model of the robot, at the same time enhancing cognition either through added information via


extra models or calculations (AR) or through presenting parts of the real world (MR). There are obvious cost benefits to such approaches compared to experimenting with the real objects, but the most significant added benefit is the enhanced information content. However, most of these benefits have been reaped in a quest to replace the robot operator by novel programming systems rather than to enhance the pertinent skills of the operator by novel training systems. Training systems for robotics programming are in essence concerned with spatial skills, including motion synthesis and analysis (Verner et al., 2012). According to cognitive scientists, IT can facilitate effective training of spatial skills in different contexts (Péruch et al., 2000). In order to design effective, yet generic enough, training systems for robot programming, a mapping between skills and tasks is necessary. Training is normally focused on a single task, a family of tasks or, better still, on a taxonomy of tasks, such as the taxonomy of assembly tasks defined by Huckaby and Christensen (2012). VR/AR based training systems pertaining to technical equipment exist in abundance. However, there are very few reports of such systems in the context of robot programming. As an example, in the context of space robotics operator training, AR was used to reduce positioning errors and time to completion of manoeuvring tasks under inherently poor visibility conditions. Specific overlay symbols were designed to help in alignment within insertion tolerances, to prompt appropriate control command motions and to allow separation of necessary translation and rotation control inputs (Maida et al., 2007). In the neighbouring field of assembly skills training, Adaptive Visual Aids have been proposed, consisting of a tracking-dependent pointer object and a tracking-independent content object instead of traditional AR overlays (i.e.
detailed 3D models or animations), which suffer from tracking inaccuracies (Webel et al., 2013). In this work a VR system to support training in on-line programming of industrial robots is presented. Section 2 presents the reasoning behind the planning and implementation of training using this environment. The development of the virtual environment (VE) is outlined in Section 3. Section 4 focuses on the use and evaluation of the training environment by presenting and analysing the results of an experiment designed to this end. Section 5 summarises the conclusions of this work.

2. Specific aims of training

Lead-through programming involves manipulation of the robot by means of a teach pendant to perform movements in three complementary coordinate systems, Joint, World and Tool, with associated control modes. In the Joint control mode, individually selected joints of the robot, typically rotary, are moved about their pivot axes; in the World and Tool control modes the robot's end-effector is moved with reference to a Cartesian system which is either fixed in 3D space (World mode) or fixed on the moving end-effector (Tool mode). The three coordinate systems are complementary in the sense that the operator must combine all three modes to effectuate the desired robot movement across the robot's work volume envelope. This requires the operator to acquire the ability to anticipate control mode effects in all three coordinate systems and the consequent ability to frequently shift focus among them. A generic cognitive task analysis of programming a robot through the teach pendant has been suggested by Gray et al. (1992). This analysis involves four basic planning tasks: (i) select path from A to B, (ii) select programming mode, (iii) plan move and (iv) select controls. The first three tasks are essentially performed in a cyclical manner, by anticipating different alternatives through mental


simulation before selecting one particular move and subsequent control. To develop this skill one may proceed through progressive training in each coordinate system and subsequently through alternations between them. Gradually, this iterative process may lead both to successful performance and to the development of higher-level optimisation techniques, i.e. strategic planning. Changing between control modes requires a shift in the human operator's spatial frame of reference. Shifting between frames of reference has proved to be cognitively demanding and subject to antecedent priming (Carlson-Radvansky and Jiang, 1998). In order to help trainees in shifting between frames of reference and in anticipating robot moves, an analysis of the cognitive demands and affordances for the human operator in each coordinate system is required. This analysis, provided below, formed the basis for specifying appropriate cognitive/perceptual signals for an augmented VE. In the World control mode the frame of reference is absolute (Levinson, 1996), that is, it remains unchanged with respect to the surroundings and the human operator regardless of the robot's positioning at any moment (i.e. the X and Y axes can be set according to surrounding space orientation and the Z axis is vertical, following gravity). Controlling the robot in the World mode resembles manipulating an object (the end-effector) in a three-dimensional grid from a steady position without affecting its orientation. This characteristic makes the World control mode easier for the beginner to grasp, and gross robot movement relative to the surrounding space easier to anticipate. The World control mode, however, because of its absolute frame of reference, is unsuitable for precise positioning of the end-effector in angular orientations. In the Tool control mode the frame of reference is intrinsic (Levinson, 1996), i.e.
based on the end-effector coordinate system, the three axes and coordinates being determined by the inherent features of the end-effector. For example, a two-finger gripper may have the X axis defined by the tips of the two fingers, the Y axis defined by the finger pivot axis and the Z axis perpendicular to the previous two. In a cylindrical end-effector, by contrast, determination of the X and Y axes may be difficult, while determining the Z axis is more intuitive. Controlling the robot in the Tool mode resembles moving oneself in an ‘egocentric’ three-dimensional grid defined at each moment by one's current orientation. As noted above, anticipation and control of movement in this mode depend heavily on the specific shape of the end-effector. This characteristic makes the Tool control mode easier for fine tuning of the end-effector's position. In the Joint control mode there is no single frame of reference. Each independent joint of the robot defines a movement axis, which is rotary in the case of most robotic arms. Each axis, when chosen, becomes the reference for a single-plane rotational movement of the section of the robot posterior to the chosen joint. Controlling the robot in the Joint mode is direct in the sense that the operator actuates the respective robot motor. Once the operator identifies the joint to actuate, he/she may then easily anticipate robot movement and future position for this particular joint rotation. However, and unlike the other two modes, it may become quite challenging for the operator to anticipate the robot's final position relative to the environment if more than one joint rotation is prescribed sequentially or, even worse, simultaneously; in such cases the Joint control mode becomes an expert-oriented control mode. However, when the desired move involves just one joint, the Joint control mode is quite powerful. Novices typically use the Joint control mode for final orientating adjustments of the end-effector.
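The difference between the absolute (World) and intrinsic (Tool) frames of reference can be made concrete with a short sketch. The code below is illustrative only (it is not part of the paper's system; the single-axis yaw and the function names are assumptions): the same jog command produces different world-space displacements depending on the mode in which it is interpreted.

```python
import math

def matvec(R, v):
    """Multiply a 3x3 rotation matrix (list of rows) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def rot_z(theta):
    """Rotation matrix about the world Z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

# Suppose the end-effector is yawed 90 degrees about the world Z axis.
R = rot_z(math.pi / 2)

# In World mode, a +10 mm jog along X is simply [10, 0, 0].
# In Tool mode, the same key press is interpreted in the end-effector
# frame, so it must be rotated into world coordinates first:
jog_tool = [10.0, 0.0, 0.0]
jog_world = matvec(R, jog_tool)  # Tool +X maps to World +Y here
```

With the 90° yaw assumed above, a Tool-mode +X jog moves the tip along the world +Y axis, which is exactly the kind of frame-of-reference shift the operator must anticipate.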
Classroom training for this task helps clarify the differences between coordinate systems and the associated control modes; however hands-on training is important as it provides the only way

to apply the full decision-making process involving mental simulation and anticipation over complete end-effector paths. In effect, ease in performing accurate mental simulations of the robot's movement prior to execution may be key to fast and untroubled programming. Hands-on training for this task in a VE may be advantageous both for cost and for educational reasons. If hands-on training in a VE is nearly as effective as training on a real robot, then there are clear economic benefits, since it requires comparably less material resources and eliminates the possibility of damage and/or accidents. Moreover, training in a VE may prove promising for the educational purposes per se. This is because (i) in a VE trainees may feel more confident to experiment in a trial-and-error manner and, most importantly, (ii) in a VE it is easier to implement various cognitive/perceptual aids that arguably could accelerate the development of mental simulation of the robot's movement. The main questions addressed in the present study are:

1. Can trainees perform a robot programming task in a VE with a degree of success comparable to the real task?
2. Can the skills developed in VR (i.e. mental simulation of movement, shifts between frames of reference, evidence of strategic planning) be transferred to the real task?
3. Do cognitive aids enhance performance in the VE and, if so, to what extent?
4. Do skills developed through cognitive aids in a VE have a positive impact on the real task, or, on the contrary, hinder it?

To test the above assumptions an experiment was performed. In the next sections the setup of the VE and of this experiment are described.

3. Development of the VE

3.1. The simulation environment

A virtual robot is constructed in the Virtools™ environment with full kinematics but without any dynamics capabilities. The model constructed is augmented and functions interactively, with the aim of improving the performance of the user in on-line robot programming. Any robot can be accommodated, but a specific one was used in this work for demonstrating the principles and results

Fig. 1. Robot model in VR environment showing initial pose and the two fixtures F1 and F2.


of the approach, namely a Stäubli RX-90, see Fig. 1. Most robot manufacturers provide accurate 3D CAD models of their robots in a format that is acceptable by most CAD systems, typically STEP or IGES. The individual links of the robot need to be isolated and treated as separate objects in the CAD environment as well as in the VR environment, to which they should be exported in a suitable format, typically VRML. The links are connected to each other in the VR environment with kinematic joints that are defined exactly as in the real model, thereby forming a kinematic chain. In the Joint space, only forward kinematics of the robot model is necessary, and it is implemented in a straightforward manner by matrix multiplication. In the World and Tool spaces, it is necessary to calculate from the commanded motion of the end-effector the matching movements of the joints, and this is implemented through inverse kinematics. The equations derived for the particular robot employed in this work are presented in Appendix A. The following signals are employed in the augmented environment:

- change of the angle of observation and zooming in/out,
- highlighted active coordinate axis or active joint selected,
- viewfinder-type camera at the tip of the end-effector,
- near-collision sound of the end-effector against interfacing objects, etc.
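Forward kinematics by matrix multiplication, as used in the Joint space, amounts to chaining one homogeneous transform per joint. The sketch below illustrates the idea for a simplified two-link planar chain of revolute joints; it is a pure-Python illustration under that simplifying assumption, not the RX-90's six-axis equations, which the paper gives in its Appendix A.

```python
import math

def matmul(A, B):
    """Product of two 4x4 homogeneous transforms (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def revolute_z(theta, link_length):
    """Transform of one revolute joint: rotate theta about Z,
    then translate along the rotated link of given length."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, link_length * c],
            [s,  c, 0.0, link_length * s],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def forward_kinematics(joint_angles, link_lengths):
    """Chain the per-joint transforms; the last column of the result
    holds the end-effector tip position."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for theta, length in zip(joint_angles, link_lengths):
        T = matmul(T, revolute_z(theta, length))
    return T

# Two unit links, joint angles +90 and -90 degrees:
T = forward_kinematics([math.pi / 2, -math.pi / 2], [1.0, 1.0])
tip = (T[0][3], T[1][3])  # tip lands at (1, 1)
```

Geometrically: the first link points straight up to (0, 1), the second joint cancels the rotation so the second link points along +X, placing the tip at (1, 1).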

The robot model is manipulated via a control panel based on a stand-alone numpad whose keys were assigned functions found on the real robot's own teach pendant, namely: a toggle key for switching among World, Tool and Joint coordinate systems; three keys for selecting motion along the X, Y or Z axis when World or Tool coordinates have been preselected; two keys (+ and −) to realise motion in the positive or negative direction, either along an axis or around a joint; and six keys (1-2-3-4-5-6) to select a joint when Joint coordinates have been preselected, see Fig. 2. The active coordinate space is indicated by a red dot beside the corresponding word on the VR environment's display, by analogy to an LED lit up on the real pendant. The VE supports computer monitor displays and video projector displays. In this work a computer monitor was employed, due to the need for keeping the real robot and the VE close together in the course of the experiments, see Fig. 3.
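The numpad pendant described above can be viewed as a small state machine mapping keys to mode toggles, axis/joint selection and jog commands. The sketch below is a hypothetical reconstruction: the key names, the mode-cycling order and the handler wiring are assumptions for illustration, not the paper's actual Virtools bindings, and it ignores details such as axis keys being valid only in World/Tool modes.

```python
# Hypothetical key-to-function table; keys 1-6 select joints,
# 7-9 select axes, +/- jog, and NUM_TAB cycles control modes.
PENDANT_KEYS = {
    "NUM_TAB":   ("toggle_mode", None),
    "NUM_7":     ("select_axis", "X"),
    "NUM_8":     ("select_axis", "Y"),
    "NUM_9":     ("select_axis", "Z"),
    "NUM_PLUS":  ("jog", +1),
    "NUM_MINUS": ("jog", -1),
    **{f"NUM_{i}": ("select_joint", i) for i in range(1, 7)},
}

MODES = ["WORLD", "TOOL", "JOINT"]

class PendantState:
    def __init__(self):
        self.mode_index = 0  # assume the pendant starts in World mode

    @property
    def mode(self):
        return MODES[self.mode_index]

    def press(self, key):
        """Dispatch a key press; only mode toggling mutates state here."""
        action, arg = PENDANT_KEYS[key]
        if action == "toggle_mode":
            self.mode_index = (self.mode_index + 1) % len(MODES)
        return action, arg

p = PendantState()
p.press("NUM_TAB")  # World -> Tool
p.press("NUM_TAB")  # Tool -> Joint
```

The red-dot mode indicator in the VE display would simply render `p.mode` in this formulation.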

Fig. 3. Low cost portable version of virtual training system.

3.2. Modelling of the virtual robot

The virtual model was built using 3DVIA Virtools™. This includes an authoring environment for 3D content, a behaviour engine, a rendering engine, a web player and a software development kit. The behaviour engine operates on reusable behaviour building blocks which are embedded in objects of any type by simple graphical programming. A Time Manager module and a Sound Manager module are important in building interactive simulations. There are also a number of libraries: Physics, Artificial Intelligence, Multiuser Server and VR. Design elements in the Virtools environment belong to classes following a hierarchy, e.g. Render Objects, Level-Scene-Group-Array, Sound, and Mesh-Material-Texture. Objects are hierarchically structured, e.g. Render Objects contain 3D entities and 2D entities; the former contain 3D objects, lights, cameras, avatars and grids. The virtual world is expressed in Virtools by building blocks. These are connected with each other in a visual, flowchart-like manner, in order to activate and control the various events, including movements. A simple example concerns the auxiliary artefacts illustrated in the augmented environment: the X, Y and Z axes along which the end-effector is to move in the World space. First, the pertinent axis

Fig. 2. Robot control pendants: real (left) and used in the VE (right).


is selected. By pressing the key ‘Num +’ on the numpad, see Fig. 2, a command is passed through the building blocks “Key Event” and “Streaming Event” to the building block “Show”, in order to present the Target “Z-WORLD”, see Fig. 4. In this case “Z-WORLD” corresponds to an arrow depicting the Z axis in global coordinates fixed at the tip of the end-effector, thereby aiding the user to visualise the movement direction of the end-effector. Arrows along the X and Y axes appear in an analogous manner when prompted. A second example concerns the option to use the viewfinder-type camera (‘Extra Camera’) tied to the end-effector tip to help in approaching targeted points with alignment requirements, such as the double-ring object simulating a lathe chuck opening, see Fig. 1. Upon application start-up, the building block “Set as Active Camera” sets “Main Camera” as active, see Fig. 5. This camera is positioned at a distance from the robot and can be rotated using the mouse. At the same time, the building block “Hide” conceals the viewfinder camera. By pressing the spacebar key the building block ‘Key Event’ activates output 1 of the building block ‘Sequencer’ and, thus, the “Extra Camera”. Pressing the spacebar key again activates output 2 of the building block ‘Sequencer’ to reactivate “Main Camera”. These examples are parts of the overall structure of the virtual model, which consists of a large number of building blocks structured into three main sets. The first set (18 blocks) refers to generics of the VE, such as background colour definition and activation, camera activation and camera operation definition (in this case through a mouse), and general light source activation. The second set (162 blocks) concerns forward kinematics of the robot in Joint space and motion control through the simulated teach pendant.
The third set (199 blocks) concerns inverse kinematics, materialising the closed-form solution outlined in Appendix A, as well as motion control through the simulated teach pendant. In addition, all data concerning user performance are stored in ASCII files. These concern time duration, coordinate system changes, number of collisions and trajectory point coordinates.

4. Experimental evaluation of the virtual robot training environment

4.1. Experiment set-up

The experiment was designed with three groups of participants, all being 4th-year Mechanical Engineering students. Two independent groups of students were asked to execute a set of experimental tasks in two versions of a VR simulator, one standard (Group B, 11 participants) and one incorporating perceptual/cognitive aids (Group C, 10 participants). Subsequently the two groups executed

the same experimental tasks on the real robot. In other words, Groups B and C were both initially exposed to the VE and subsequently to the real one, the difference being that Group B did not use the augmented features of the environment, see Section 3.1, whereas Group C did use them. The third group of participants (Group A, 12 participants) acted as a control group and only received theoretical training before executing the same experimental tasks on the real robot. A combination of two lead-through programming tasks was selected for the experiment (Task 1 and Task 2 hereafter). Both tasks involved leading the robot towards inserting a bar held by the end-effector into a double-ring fixture (fixtures F1 and F2 in Fig. 1). Movements such as the above are common in various pick-and-place tasks, e.g. loading/unloading a lathe, loading/unloading a pallet or a cell of an automatic storage and retrieval system (AS/RS), etc. The double-ring fixture was especially designed to represent the chuck opening of a lathe, a hole in a pallet where a part may be accommodated, a cylindrical shelf in an AS/RS, etc. Task 1 involved leading the robot from its initial resting pose to fixture F1 and was designed so as to be accomplishable without the need to change coordinate systems. Task 2 involved leading the robot from fixture F1 to fixture F2 (see Fig. 1) and was accomplishable only through a combination of coordinate systems. All participants in all conditions performed Task 1 immediately followed by Task 2. All three groups of participants received a five-minute briefing on the robot controls, along with a theoretical reminder of the three coordinate systems and respective control modes. Participants were instructed to freely use and shift between any of the three control modes in order to accomplish the task at hand. Participants were instructed to perform the task in the most efficient manner; however, no time limit was set for successful completion.
The following data was collected in order to evaluate performance when programming the real robot:

- The completion time T1 necessary for the first task, and T2 for the second. The measurements were taken using a stopwatch from the start to the successful completion of the task.
- The number of changes between axes and coordinate systems (World-Tool-Joint), denoted as N1 and N2 for each task respectively, based on real-time observation of the participants.
- The deviation angle (misalignment) between the tool axis and the fixture axis. Successful completion of a task is defined as insertion of the tip of the bar into the fixture. The degree of success, denoted as S1 and S2 for the two tasks, is calculated as 100% minus the deviation angle as a percentage of 90°.

In the virtual system the following data was collected:

- The completion times T1 and T2, as above, but measured automatically through the inbuilt facility of the VE.
- The number of changes between axes and coordinate systems, as above, but with no need for observation, since all manipulation of the robot was recorded by exploiting the inbuilt capabilities of the VE.
- The intermediate points, in absolute coordinates, through which the path of the end-effector tool passes. Consecutive points were recorded along the path at 5 mm intervals, so the length of the total path followed was calculated with acceptable accuracy.
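The two derived measures can be written down directly. The following sketch uses hypothetical helper names (and toy numbers, not the experiment's data) to compute the degree of success from the misalignment angle and to approximate path length from the 5 mm-sampled trajectory points; `math.dist` requires Python 3.8+.

```python
import math

def degree_of_success(deviation_deg):
    """S = 100% minus the misalignment angle as a percentage of 90 deg."""
    return 100.0 * (1.0 - deviation_deg / 90.0)

def path_length(points):
    """Sum straight segments between consecutive sampled points;
    the VE records a point roughly every 5 mm of tip travel."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

s = degree_of_success(9.0)  # a 9-degree misalignment gives S = 90%
pts = [(0, 0, 0), (3, 4, 0), (3, 4, 12)]
length = path_length(pts)   # 5 + 12 = 17 (units of the point coordinates)
```

With a sufficiently fine sampling step, the piecewise-linear sum converges to the true curve length, which is why the 5 mm spacing yields "acceptable accuracy".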

Fig. 4. Building blocks for showing coordinate axis arrows.

The following comparisons were performed and the relevant results were statistically assessed:


Fig. 5. Building blocks for camera setting.

- To test whether trainees can perform a robot programming task in a VE with a degree of success comparable to the real robot programming task (see question 1, Section 2 above), a comparison of the performance of Group A on the real robot (denoted as AR) with the performance of Group B in the VE (denoted as BV) was conducted.
- To test whether the skills developed in the standard VR are, at least partly, transferable to the real task, a comparison between the performance of Group A and the performance of Group B on the real robot was conducted (AR-BR comparison). Although Group B had had additional experience on the virtual robot, superior performance in condition BR would suggest that the skills acquired in the VE (BV) have indeed had a positive effect when it comes to the real robot.
- To test whether the perceptual/cognitive aids have a positive impact on performance in the VE, a comparison between the performance of Group B and Group C in the respective VEs was conducted (BV-CV comparison).
- To test whether the skills developed in the augmented VR are, at least partly, transferable to the real task, a comparison between the performance of Group A and the performance of Group C on the real robot was conducted (AR-CR comparison). In addition, to test whether the perceptual/cognitive aids had a positive influence on skill acquisition, a comparison between the performance of Group B and Group C on the real robot was conducted (BR-CR comparison). Superior performance in condition CR compared to AR and BR would suggest that the skills acquired in the augmented VE (CV) had a positive effect on the real robot and outperformed simple VR training.

The above comparisons were made separately for Task 1 and Task 2 in order to check the influence of each training mode at two levels of task difficulty.

4.2.
Results

In the results presented next, concerning the three groups (A, B, C), the two tasks (1, 2) and the real and virtual environments (R, V), each condition is denoted by the corresponding three characters, e.g. BR1 denotes Group B working on Task 1 in the real environment.

4.2.1. Task performance calculations

The mean value and standard deviation of each performance indicator for both tasks at hand are shown in Table 1 and Table 2, corresponding to the real and the virtual environment, respectively, for all pertinent conditions.

4.2.2. Performance comparisons

The five comparisons articulated in Section 4.1 were made using

statistical tests. The F-test was used to compare variances for each variable of interest for the designated pairs of groups, see Appendix B Table 1. Similarly, the t-test was used to compare pair-wise the means with unequal or equal variance, as determined by the F-test, in order to deduce whether null hypothesis H0 is valid, see Appendix B Table 2. Null hypothesis (H0) expresses that the means of each variable for the pair of groups that are being compared are equal at 10% level of significance. Alternative hypothesis (H1) expresses that the mean corresponding to the first group is larger or smaller than its counterpart corresponding to the second group. It is noted that, due to the relatively small size of the populations compared using F- and t-test, the significance level was relaxed from 5% to 10%. This is considered acceptable for small samples, since standard error varies inversely with sample size (Labovitz, 1968; Kim, 2015). Note that cells are left blank for a particular variable, if there are no data recorded for this variable for at least one of the two groups compared, due to non-availability of such data.
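The F-test-then-t-test procedure described above can be sketched as follows using SciPy. This is illustrative only: the function name `compare_groups` and the sample data are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.10):
    """Compare two samples: two-sided F-test on variances, then a t-test on
    means with equal/unequal variance chosen according to the F-test result."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # F statistic: ratio of sample variances
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    # Two-sided p-value for the variance ratio
    p_var = 2 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))
    equal_var = bool(p_var >= alpha)
    # Student's t-test if variances are deemed equal, Welch's t-test otherwise
    t, p_mean = stats.ttest_ind(a, b, equal_var=equal_var)
    return {"F": f, "p_var": p_var, "equal_var": equal_var,
            "t": t, "p_mean": p_mean, "H1": bool(p_mean < alpha)}

# Hypothetical completion times (s) for two small groups
print(compare_groups([120, 150, 139, 160, 131, 145],
                     [230, 210, 250, 238, 222, 261]))
```

With small samples such as these, the relaxed 10% level determines both the variance decision and the H1 verdict, mirroring the procedure reported in Appendix B.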

5. Discussion

5.1. Comparison between groups

In the following, the between-group comparisons are discussed one by one, with reference to Table 3, in order to address the main questions posed in Section 2.

5.1.1. Performance comparison between groups AR and BV

In Task 1, group AR was found to have significantly lower task completion time than group BV, with significantly lower variance than in group BV. Also, the number of coordinate changes in group AR was found to be significantly lower than in group BV. In Task 2, group AR was found to have significantly higher task completion time than group BV, with no significant difference in variance between the two groups. No significant difference was found in the number of coordinate changes for Task 2 between the two groups, either in means or in variances. The higher completion time of group BV in Task 1 may partly be explained by the difference in the number of coordinate changes between the two groups, group BV performing significantly more changes than group AR. This difference may be attributed to the effect of the VE which, being inherently safer, promotes experimentation. In Task 2, no pertinent explanation could be established for the significant difference in task completion times between the two groups. Notwithstanding the above differences, the VE developed proved to be at least partly comparable to the real one, since the differences recorded between the two conditions were moderate and balanced.


D. Nathanael et al. / International Journal of Industrial Ergonomics 53 (2016) 274e283

Table 1
Performance indicators on the real robot (T = completion time, N = number of coordinate system changes, S = degree of success, M = mean, SD = standard deviation, the numerical subscript denoting task number).

| Group | T1 M (s) | T1 SD | N1 M  | N1 SD | S1 M (%) | S1 SD | T2 M (s) | T2 SD | N2 M  | N2 SD | S2 M (%) | S2 SD |
|-------|----------|-------|-------|-------|----------|-------|----------|-------|-------|-------|----------|-------|
| AR    | 139      | 94    | 8.75  | 3.17  | 100      | 0     | 289      | 113   | 19.08 | 4.80  | 66       | 22    |
| BR    | 133      | 51    | 10.36 | 6.57  | 100      | 0     | 258      | 100   | 19.82 | 6.71  | 81       | 22    |
| CR    | 118      | 40    | 8.70  | 5.33  | 100      | 0     | 195      | 39    | 13.30 | 3.38  | 86       | 20    |

Table 2
Performance indicators on the virtual robot (T = completion time, N = number of movements, L = length of path, M = mean, SD = standard deviation, the numerical subscript denoting task number).

| Group | T1 M (s) | T1 SD | N1 M  | N1 SD | L1 M (mm) | L1 SD | T2 M (s) | T2 SD | N2 M  | N2 SD | L2 M (mm) | L2 SD |
|-------|----------|-------|-------|-------|-----------|-------|----------|-------|-------|-------|-----------|-------|
| BV    | 238      | 139   | 18.18 | 9.32  | 2965      | 2107  | 218      | 86    | 22.27 | 11.07 | 2563      | 904   |
| CV    | 163      | 98    | 15.30 | 12.14 | 1903      | 917   | 211      | 113   | 31.30 | 17.22 | 2231      | 466   |

5.1.2. Performance comparison between groups AR and BR

In terms of task completion time, group BR performed slightly better than group AR; however, these differences were not found to be significant in either Task 1 or Task 2. In Task 1, group BR showed more uniform performance, with a T1 variance significantly lower than that of group AR. In Task 2, no such difference was found. In terms of the number of coordinate changes, no clear differences were found between the means of the two groups; however, group BR presented higher variance in its results. A significant difference was found in the degree of success for Task 2, with group BR outperforming group AR. These results show that the prior training of Group B in the VE had only a limited positive effect on the group's performance on the real robot.

5.1.3. Performance comparison between groups BV and CV

In Task 1, group CV was found to have significantly shorter task completion time than group BV. In terms of the number of coordinate changes, no significant difference was found between the two groups. However, the total path of the end effector (L1) was found to be significantly shorter in group CV compared to group BV. The comparable number of coordinate changes in the two groups, coupled with a significantly shorter path length for group CV, is most probably due to the cognitive/perceptual aids present in the CV condition: mental simulations could be checked through the specifically designed aids without the need to actually perform the robot move. In Task 2, no significant difference was found in task completion time (T2). Group CV demonstrated greater variance in T2 and also performed more coordinate changes than group BV. The total path of the end effector (L2) was shorter in group CV, although the difference was not significant; its variance, however, was found to be significantly lower.
These results show that the cognitive/perceptual aids had a positive effect on task performance in the VE, especially in minimizing total path length, i.e. reducing unneeded robot moves, and in the uniformity of results among participants.

5.1.4. Performance comparison between groups AR and CR

In Task 1, no significant difference was found in terms of task

completion time between the two groups. Also, no significant difference was found in the number of coordinate changes or in the degree of success. In Task 2, there was a significant difference in task completion time, with group CR outperforming group AR; completion time variance between participants was also significantly smaller for group CR. The number of coordinate changes was significantly lower for group CR, and the degree of success was significantly better.

5.1.5. Performance comparison between groups BR and CR

In Task 1, no significant difference was found in task completion time between the two groups. Also, no significant difference was found in the number of coordinate system changes or in the degree of success. In Task 2, there was a significant difference in task completion time, group CR performing better than group BR; completion time variance between participants was also significantly smaller for group CR. The number of coordinate system changes was significantly lower for group CR; the degree of success, however, was not significantly better.

5.2. Overall comments

Overall, in Task 1, groups BR and CR did not manifest significantly better performance than group AR. Task completion times as well as numbers of coordinate system changes were comparable to those of group AR. In the VE, the performance of Group B (i.e. condition BV) was significantly poorer than that of group AR, whereas the performance of Group C in the VE (i.e. condition CV) was comparable to that of group AR. One marked difference between performance in the real versus the virtual environment in Task 1 was that, in the VE, both groups (i.e. BV, CV) made significantly more coordinate changes than any group in the real environment (i.e. AR, BR, CR). This difference should most probably be attributed to the effect of the VE which, presenting no danger, promotes experimentation.
Thus, the differences in task completion time between the real and virtual environments in Task 1 may partly be attributed to the time spent on the higher number of coordinate system changes. Indeed, the comparable task completion times of groups AR and CV, although group CV executed almost double the number of coordinate system changes, suggest that the cognitive/perceptual aids promoted experimentation. Experimentation, in general, has a positive impact on skill acquisition. In the real robot, the absence of a marked difference in performance between group CR and group AR may be attributed to the low level of difficulty of Task 1, which could be accomplished even without altering coordinate systems. The same argument also holds for the performance of group BR.

Overall, in Task 2, group BR did not present significantly better performance than group AR, with the exception of the degree of success (S2), where group BR significantly outperformed group AR. This suggests that the training in the simple virtual robot did not have a significant positive effect on the real robot. Group CR, on the other hand, demonstrated significantly better performance in Task 2 than both groups AR and BR. In particular, it demonstrated significant differences in T2, N2 and S2 compared to AR and significant differences in T2 and N2 compared to BR, see Table 3.

The above suggests that the cognitive/perceptual aids in the augmented VE not only aided performance in the virtual robot itself but, more importantly, had a positive impact on the development of skills that were transferred to the real robot. It is important to note that, in the virtual robot, for Task 2, Group CV performed significantly more coordinate changes than both groups BV and AR. In contrast, in the real robot it performed significantly fewer changes than either group BR or AR, while groups BR and AR were not found to have significant differences between them. This phenomenon can be explained as follows. The augmented VE promoted more coordinate system changes than both the simple VE and the real robot. Group C, having performed significantly more coordinate system changes in the VE, had acquired more skill in anticipating robot moves and in cognitively switching between coordinate systems. Therefore, in the real robot, Group C could exhibit more strategic anticipation, which resulted both in fewer unnecessary coordinate system changes and in shorter task completion times.

Table 3
Difference between groups (T = completion time, N = number of coordinate system changes, S = degree of success, L = length of path, M = mean, SD = standard deviation, the numerical subscript denoting task number; <0.1 = significant difference at a 10% level, ns = no significant difference, – = no data for at least one group of the pair; L applies only to the BV-CV pair; for S and L1 the mean comparison is shown).

| Pair  | T1 M | T1 SD | N1 M | N1 SD | S1 M | T2 M | T2 SD | N2 M | N2 SD | S2 M | L1 M | L2 M | L2 SD |
|-------|------|-------|------|-------|------|------|-------|------|-------|------|------|------|-------|
| AR-BV | <0.1 | <0.1  | <0.1 | ns    | –    | <0.1 | ns    | ns   | ns    | –    | –    | –    | –     |
| AR-BR | ns   | <0.1  | ns   | ns    | ns   | ns   | ns    | ns   | <0.1  | <0.1 | –    | –    | –     |
| BV-CV | <0.1 | ns    | ns   | <0.1  | –    | ns   | <0.1  | <0.1 | ns    | –    | <0.1 | ns   | <0.1  |
| AR-CR | ns   | <0.1  | ns   | ns    | ns   | <0.1 | <0.1  | <0.1 | ns    | <0.1 | –    | –    | –     |
| BR-CR | ns   | ns    | ns   | ns    | ns   | <0.1 | <0.1  | <0.1 | <0.1  | ns   | –    | –    | –     |

6. Conclusions

In the present paper, a virtual reality system to support training in on-line programming of industrial robots is presented. The system was evaluated by running training experiments with 4th year Mechanical Engineering students. Three groups of subjects performed two robot programming tasks in the real, virtual and virtual augmented conditions. Results show that prior training in the simple virtual robot did not significantly alter performance on the real one, although some positive effect was observed. Thus, for the specific tasks tested, no clear transfer of skill to the real robot was observed. This result, however, may be due to the relatively small number of participants in this particular experiment and to the particular tasks performed. On the other hand, with increased task difficulty, the group with prior training in the virtual augmented robot clearly outperformed the group that executed the tasks in the real robot only. The cognitive/perceptual aids embedded in the virtual augmented robot had a positive effect on all task performance metrics and on the consistency of results across participants on the real robot. This positive effect, being present in the real robot even though the cognitive/perceptual aids were no longer available, suggests that the virtual augmented condition helped participants develop higher cognitive skills, i.e. cognitive anticipation and evaluation of moves, that were transferred to the real condition.

Virtual training environments need not be designed as close as possible to the real ones. Although a high level of fidelity is definitely positive, virtual environments, because of the flexibility they offer, can provide novel opportunities that go beyond the degree of "training-performance match" (Lathan et al., 2002). Specifically designed augmented cognitive/perceptual aids may foster skill development that can be transferred to the real task. However, the design of the augmented signals should be based on an analysis of the cognitive demands and affordances of each task in order to aid skill development. Otherwise, the augmented signals may merely act as substitutes for the subject's cognitive capability, which is subsequently not transferred to the real environment. The suggested training environment is simple and cost effective, yet proved able to support training of novices in an entry-level task in a few minutes.

Acknowledgements

The authors wish to thank the participants of the study for their voluntary help on this research and for their consent to publishing the results. No formal ethics approval was required for this research. Also, none of the authors have any conflict of interest concerning the present research.

Appendix A. Inverse kinematic solution

The kinematic model of the robot is described in a standard way by the Denavit–Hartenberg convention. Based on this convention, the transformation matrix between the local coordinate systems of link (i) and link (i+1) for any robot is as follows:

$$
A_i^{i-1} =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i & 0 & l_{i-1} \\
\sin\theta_i \cos\alpha_{i-1} & \cos\theta_i \cos\alpha_{i-1} & -\sin\alpha_{i-1} & -\sin\alpha_{i-1}\, d_i \\
\sin\theta_i \sin\alpha_{i-1} & \cos\theta_i \sin\alpha_{i-1} & \cos\alpha_{i-1} & \cos\alpha_{i-1}\, d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
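As an illustration, the transformation above can be written as a small function. This is a sketch assuming the modified Denavit–Hartenberg convention shown above and angles in radians; `dh_transform` is a name introduced here, not from the paper.

```python
import numpy as np

def dh_transform(alpha_prev, l_prev, d, theta):
    """Homogeneous transform A_i^{i-1} for modified D-H parameters
    (alpha_{i-1}, l_{i-1}, d_i, theta_i), angles in radians."""
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  l_prev],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

# Zero parameters give the identity transform (adjacent frames coincide)
print(dh_transform(0.0, 0.0, 0.0, 0.0))
```

Chaining the six link transforms, T = A_1^0 · A_2^1 · … · A_6^5, yields the end-effector pose.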


For the sample robot presented, the general Denavit–Hartenberg table is as follows:

| i | α_{i−1} (deg) | l_{i−1} (mm) | d_i (mm) | θ_i (deg) |
|---|---------------|--------------|----------|-----------|
| 1 | 0             | 0            | 0        | 0         |
| 2 | 90            | 0            | 0        | 90        |
| 3 | 0             | 450          | 0        | 90        |
| 4 | 90            | 0            | 656.2    | 0         |
| 5 | 90            | 0            | 0        | 0         |
| 6 | 90            | 0            | 74       | 0         |

The inverse kinematics solution determines the joint coordinates $q_i$ given the position $(p_x, p_y, p_z)$ and orientation $(a_x, a_y, a_z)$ of the end-effector, where $c_i = \cos q_i$ and $s_i = \sin q_i$:

$$ q_1 = \pm\tan^{-1}\left(\frac{d_6 a_y - p_y}{d_6 a_x - p_x}\right) $$

$$ A = c_1 d_6 a_x + s_1 d_6 a_y - c_1 p_x - s_1 p_y \quad \text{and} \quad B = d_6 a_z - p_z $$

$$ q_3 = \sin^{-1}\left(\frac{A^2 + B^2 - l_2^2 - d_4^2}{2\, l_2 d_4}\right) $$

$$ G = l_2 + s_3 d_4 $$

$$ q_2 = \cos^{-1}\left(\frac{2AG \pm \sqrt{(2AG)^2 - 4\left(A^2 + B^2\right)\left(G^2 - B^2\right)}}{2\left(A^2 + B^2\right)}\right) $$

$$ D = c_1 c_2 a_x - s_1 c_2 a_y + s_2 a_z, \qquad E = -c_1 s_2 a_x + s_1 s_2 a_y + c_2 a_z $$

$$ q_5 = \cos^{-1}\left(c_3 E - s_3 D\right) $$

$$ q_4 = \sin^{-1}\left(\frac{s_1 a_x + c_1 a_y}{\pm\sqrt{1 - c_5^2}}\right) $$

Joint angle $q_6$ is of no interest in this work.
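As a sanity check, the link transforms of the table above can be chained into a forward-kinematics sketch. This is illustrative only: the α and θ values are taken as listed in the table, joint angles are added to the θ offsets, and the function names are introduced here, not from the paper.

```python
import numpy as np

# D-H parameters from the table above, one row per joint:
# (alpha_{i-1} in deg, l_{i-1} in mm, d_i in mm, theta_i offset in deg)
DH_TABLE = [
    (0.0,  0.0,   0.0,   0.0),
    (90.0, 0.0,   0.0,   90.0),
    (0.0,  450.0, 0.0,   90.0),
    (90.0, 0.0,   656.2, 0.0),
    (90.0, 0.0,   0.0,   0.0),
    (90.0, 0.0,   74.0,  0.0),
]

def dh(alpha_prev_deg, l_prev, d, theta_deg):
    """Single-link homogeneous transform A_i^{i-1} (modified D-H convention)."""
    ca, sa = np.cos(np.radians(alpha_prev_deg)), np.sin(np.radians(alpha_prev_deg))
    ct, st = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[ct,      -st,      0.0,  l_prev],
                     [st * ca,  ct * ca, -sa, -sa * d],
                     [st * sa,  ct * sa,  ca,  ca * d],
                     [0.0,      0.0,      0.0,  1.0]])

def forward_kinematics(q_deg):
    """Chain A_1^0 ... A_6^5 for joint angles q_deg (degrees) into the end-effector pose."""
    T = np.eye(4)
    for (alpha, l, d, offset), q in zip(DH_TABLE, q_deg):
        T = T @ dh(alpha, l, d, offset + q)
    return T

# End-effector position (mm) at the home configuration
print(np.round(forward_kinematics([0, 0, 0, 0, 0, 0])[:3, 3], 1))
```

The resulting pose is a valid homogeneous transform (orthonormal rotation block), which is a useful consistency check for the inverse-kinematics formulas above.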

Appendix B. F and t-test results

Table B.1
Existence (Y/N) of a significant difference (10% level of significance) between the variances of each variable (F-test) (T = completion time, N = number of coordinate system changes, S = degree of success, L = length of path, the numerical subscripts denoting task number). For each pair of groups (AR-BV, AR-BR, BV-CV, AR-CR, BR-CR), the F statistic, the critical value Fcr, the p-value and the H1 verdict (Y/N) are reported for variables T1, N1, L1 (Task 1) and T2, N2, S2, L2 (Task 2); cells are left blank where no data exist for at least one group of the pair.

Table B.2
Validation of the alternative hypothesis (H1) that there is a difference in the means of the variables (t-test at 10% significance level). Notation is as in Table B.1: for each pair of groups, the t statistic, the critical value tcr, the p-value and the H1 verdict (Y/N) are reported.

References

Aleotti, J., Caselli, S., Reggiani, M., 2004. Leveraging on a VE for robot programming by demonstration. Robotics Aut. Syst. 47 (2), 153–161.
Angelidis, A., Vosniakos, G.-C., 2014. Prediction and compensation of relative position error along industrial robot end-effector paths. Int. J. Precis. Eng. Manuf. 15 (1), 66–73.
Aurich, J.C., Ostermayer, D., Wagenknecht, C.H., 2009. Improvement of manufacturing processes with virtual reality-based CIP workshops. Int. J. Prod. Res. 47 (19), 5297–5309.
Bardy, B.G., Lagarde, J., Mottet, D., 2012. Dynamics of skill acquisition in multimodal technological environments. In: Bergamasco, M., Bardy, B.G., Gopher, D. (Eds.), Skill Training in Multimodal Virtual Environments. CRC Press, pp. 31–45.
Biggs, G., MacDonald, B., 2003. A survey of robot programming systems. In: Roberts, J., Wyeth, G. (Eds.), Proceedings of the Australasian Conference on Robotics and Automation, Brisbane, 2003, pp. 1–10.
Burdea, G.C., 1999. The synergy between virtual reality and robotics. IEEE Trans. Robotics Automation 15 (3), 400–410.
Carlson-Radvansky, L.A., Jiang, Y., 1998. Inhibition accompanies reference frame selection. Psychol. Sci. 9, 386–391.
Chen, H., Xi, N., Chen, Y., Dahl, J., 2003. CAD-guided spray gun trajectory planning of free-form surfaces in manufacturing. J. Adv. Manuf. Syst. 2 (1), 47–69.
Chong, J.W.S., Ong, S.K., Nee, A.Y.C., Youcef-Toumi, K., 2009. Robot programming using augmented reality: an interactive method for planning collision-free paths. Robotics Comput.-Integr. Manuf. 25 (3), 689–701.
Fang, H.C., Ong, S.K., Nee, A.Y.C., 2012. Interactive robot trajectory planning and simulation using Augmented Reality. Robotics Comput.-Integr. Manuf. 28 (2), 227–238.
Gallagher, A.G., McClure, N., McGuigan, J., Crothers, I., Browning, J., 1999. Virtual reality training in laparoscopic surgery: a preliminary assessment of minimally invasive surgical trainer virtual reality (MIST VR). Endoscopy 4, 310–313.
Gogouvitis, X.V., Vosniakos, G.-C., 2014. Construction of a Virtual Reality environment for robotic manufacturing cells. Int. J. Comput. Appl. Technol. (in press, reference code IJCAT12097).
Gray, S.V., Wilson, J.R., Syan, C.S., 1992. Human control of robot motion: orientation, perception and compatibility. In: Human-Robot Interaction. Taylor & Francis, London.
Gupta, S.K., Anand, D.K., Brough, J., Schwartz, M., Kavetsky, R., 2008. Training in Virtual Environments: A Safe, Cost-effective, and Engaging Approach to Training. University of Maryland.
Haton, B., Mogan, G., 2008. Enhanced ergonomics and virtual reality applied to industrial robot programming. In: Proceedings of the 4th International Conference on Robotics, Brasov, Romania, 13–14 November 2008, pp. 595–604.
Hein, B., Worn, H., 2009. Intuitive and model-based on-line programming of industrial robots: new input devices. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), 10–15 October 2009, pp. 3064–3069.
Huckaby, J., Christensen, H.I., 2012. A taxonomic framework for task modeling and knowledge transfer in manufacturing robotics. Association for the Advancement of Artificial Intelligence, Technical Report WS-12-06, pp. 95–101.
Jara, C.A., Candelas, F.A., Gil, P., Torres, F., Esquembre, F., Dormido, S., 2011. An interactive tool for industrial robots simulation, computer vision and remote operation. Robotics Aut. Syst. 59 (6), 389–401.
Kim, J.H., 2015. How to Choose the Level of Significance: a Pedagogical Note. MPRA Paper, University Library of Munich, Germany. http://EconPapers.repec.org/RePEc:pra:mprapa:66373.
Labovitz, S., 1968. Criteria for selecting a significance level: a note on the sacredness of .05. Am. Sociol. 3 (3), 220–222. http://www.jstor.org/stable/27701367.
Lathan, C.E., Tracey, M.R., Sebrechts, M.M., Clawson, D.M., Higgins, G.A., 2002. Using virtual environments as training simulators: measuring transfer. In: Handbook of Virtual Environments: Design, Implementation, and Applications, pp. 403–414.
Levinson, S.C., 1996. Frames of reference and Molyneux's question: crosslinguistic evidence. In: Bloom, P., Peterson, M.A., Nadel, L., Garrett, M.F. (Eds.), Language and Space. MIT Press, Cambridge, Massachusetts.
Maida, J.C., Bowen, C.K., Pace, J., 2007. Improving robotic operator performance using Augmented Reality. In: Proceedings of the Human Factors and Ergonomics Society 51st Annual Meeting, pp. 1635–1640.
Mogan, G., Talaba, D., Girbacia, F., Butnaru, T., Sisca, S., Aron, C., 2008. A generic multimodal interface for design and manufacturing applications. In: Proceedings of the 2nd International Workshop on Virtual Manufacturing (VirMan08), part of the 5th INTUITION International Conference: Virtual Reality in Industry and Society: from Research to Application, 6–8 October 2008, Torino, Italy, p. 10.
Pan, Z., Polden, J., Larkin, N., van Duin, S., Norrish, J., 2012. Recent progress on programming methods for industrial robots. Robotics Comput.-Integr. Manuf. 28 (2), 87–94.
Péruch, P., Belingard, L., Thinus-Blanc, C., 2000. Transfer of spatial knowledge from virtual to real environments. In: Freksa, C., Brauer, W., Habel, C., Wender, K.F. (Eds.), Spatial Cognition II: Lecture Notes in Artificial Intelligence 1849. Springer-Verlag, Berlin, pp. 253–264.
Pires, J.N., Godinho, T., Ferreira, P., 2004. CAD interface for automatic robot welding programming. Industrial Robot: An Int. J. 31 (1), 71–76.
Regian, J.W., 1997. Virtual reality for training: evaluating transfer. In: Virtual Reality, Training's Future, pp. 31–40.
Reinhart, G., Munzert, U., Vogl, W., 2008. A programming system for robot-based remote-laser-welding with conventional optics. CIRP Ann. – Manuf. Technol. 57, 37–40.
Rose, F.D., Attree, E.A., Brooks, B.M., Parslow, D.M., Penn, P.R., 2000. Training in virtual environments: transfer to real world tasks and equivalence to real task training. Ergonomics 43 (4), 494–511.
Slater, M., Wilbur, S.A., 1997. A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Presence: Teleoperators Virtual Environ. 6 (6), 603–616.
Verner, I.M., Gamer, S., Shtub, A., 2012. Fostering students' spatial skills through practice in operating and programming robotic cells. In: Kim, J.-H., Matson, E.T., Myung, H., Xu, P. (Eds.), Robot Intelligence Technology and Applications 2012, Advances in Intelligent Systems and Computing, vol. 208, pp. 745–752.
Vosniakos, G.-C., Chronopoulos, A., 2009. Industrial robot path planning in a constrained-based CAD and kinematic analysis environment. Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 223 (B5), 523–534.
Waller, D., Hunt, E., Knapp, D., 1998. The transfer of spatial knowledge in virtual environment training. Presence: Teleoperators Virtual Environ. 7 (2), 129–143.
Webel, S., Bockholt, U., Engelke, T., Gavish, N., Olbrich, M., Preusche, C., 2013. An augmented reality training platform for assembly and maintenance skills. Robotics Aut. Syst. 61 (4), 398–403.
Zlajpah, L., 2008. Simulation in robotics. Math. Comput. Simul. 79, 879–897.