Advances in Engineering Software 98 (2016) 58–68
A new constraint-based virtual environment for haptic assembly training

Wei Jiang a,∗, Jin-jin Zheng a, Hong-jun Zhou b, Bing-kai Zhang a

a Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Room 206, Mechanics Building 2, Western District, Hefei 230031, Anhui Province, China
b National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230031, Anhui Province, China
Article history: Received 18 October 2015; Revised 9 March 2016; Accepted 14 March 2016

Keywords: Virtual assembly training; Haptic feedback; Constraints; Physics engine
Abstract

This paper presents a virtual training system that aims to make assembly training easy and realistic through the use of haptics and visual fidelity. The paper gives a detailed description of setting basic constraints to overcome the shortcomings that arise when a physics engine and haptic feedback are both integrated in a training system. The basic constraints not only simplify the assembly operation and shorten the assembly time, but also increase the visual realism when the physics engine is integrated in assembly training. Assembly sequence generation based on the disassembly process is also provided. Moreover, apart from the display and physics engine modules, the whole system is made up of widgets, each of which is activated only when its function is needed. An experiment on the Slider Mechanism is presented: 24 participants were distributed into three groups (a haptic-based group, a mouse-based group and a traditional training group). The results show that the haptic-based training system received good evaluations on ease of operation and assembly cues. No significant differences in training time were found between the haptic group and the traditional training group, but the training of the mouse-based group was significantly slower. Moreover, no significant differences in learning of the task were found among the three groups. © 2016 Elsevier Ltd. All rights reserved.
1. Introduction

With the development of the manufacturing industry, virtual assembly training has received much attention in the past few years because it has many advantages over traditional assembly training: (1) All assembly parts are imported into the virtual environment from CAD (Computer Aided Design) software, which saves the time of manufacturing physical parts, and assembly workers can perform the training many times without loss of parts. (2) Assembly workers can complete the training task on their own under the guidance of assembly cues, and fewer mistakes will be made. (3) The risk of danger in real-parts training, where assembly workers are unfamiliar with the assembly process, is avoided [1]. (4) Indoor virtual assembly training is not affected by bad weather or other factors.
∗ Corresponding author. Tel.: +8618963786327. E-mail address: [email protected] (W. Jiang).
http://dx.doi.org/10.1016/j.advengsoft.2016.03.004 0965-9978/© 2016 Elsevier Ltd. All rights reserved.
However, when integrating a physics engine and haptics in a training system, several problems need to be solved. Ritchie et al. [2] studied the factors affecting users' performance in haptic-based assembly. One of their results showed that when conducting the chamfered peg-in-hole assembly task, the completion time with SCD (stereo with collision detection) was almost four times longer than real-parts assembly, and with SNCD (stereo without collision detection) it was seven times longer. In the assembly training experiment conducted by Vélaz et al. [3], training with the haptic device took longer than mouse-based training: when using the haptic device in assembly tasks, participants needed to deal with the depth and collision of objects in the 3D scene, which greatly increased the complexity of insertion-type assembly. These studies suggest that some assembly tasks (precise positioning, peg-in-hole, screwing) are still time-consuming compared to real-parts assembly, even with the help of haptic feedback. Previous studies have proposed several methods to solve this problem. One is to use geometry constraints so that virtual objects move along the desired path during the assembly process; however, topological information from CAD software is then needed to set the objects' constraints.
This paper describes the development of basic constraints that overcome the shortcomings of integrating a physics engine and haptic feedback in a training system. The basic constraints make assembly training easy and realistic through the use of haptics and visual fidelity. The system does not need topological information from CAD software to set objects' constraints, as basic constraints are already supported in the system; the assembly training planner just needs to choose a part's constraint where needed. This saves much of the preparation time. Moreover, the basic constraints can be used in virtual motion simulation. The contribution is mainly in three aspects: (1) a method of setting basic constraints is proposed to solve the shortcomings that arise when a physics engine and haptic feedback are both integrated in a training system; (2) the assembly sequence is generated from the disassembly process, and each part's assembly constraint can be identified during disassembly; (3) apart from the display and physics engine modules, the whole training system is made up of widgets, which are activated only when needed. The rest of the paper is organized as follows: previous studies on virtual assembly are reviewed in Section 2; Section 3 introduces the system architecture; Section 4 gives a detailed description of the key technologies of the constraint-based training system; Section 5 presents the function of the widgets; Section 6 presents an experimental study evaluating the training system; finally, the paper ends with conclusions and future work.

2. Related works

The techniques of immersion in virtual reality call for different means corresponding to the user's senses [5,6]. The operator receives information about the digital scene (3D stereo immersion, force feedback, etc.) and can operate on the virtual environment thanks to the development of VR devices (HMD, haptic device, etc.).

2.1. Virtual assembly systems review

Contributions to virtual assembly date back almost twenty years. The virtual assembly design environment (VADE) was developed in 1999 [7]. The models were designed in CAD software, and the system supported both one-handed and two-handed assembly using glove devices; a head-mounted display was also used for stereo vision. In recent years, the most important development has been the use of haptics. Coutee et al. [8] proposed HIDRA, which used dual Phantom haptic devices; they used the GHOST SDK from SensAble Technologies to interact with the haptic device and OpenGL for visualization. The Voxmap PointShell (VPS) method developed at Boeing provided collision detection and haptic feedback with the help of Phantom devices. SHARP [9], developed by Seth et al., proposed a new approach by simulating physical constraints and provided a solution for accurate collision detection. Badillo et al. [10] developed a Haptic Assembly and Manufacturing System (HAMS). In their work, dynamic assembly constraints were used to reduce the degrees of freedom of virtual objects, so parts could only be moved along the permitted direction; in this way, the assembly time was shorter. Wang et al. [11] proposed an assembly system that integrated constraint recognition and location refinement. The system allowed users to simulate manual assembly without auxiliary CAD software information, which saved much of the preparation time. Ritchie et al. [2] studied the factors affecting users' performance in haptic assembly. One of their results showed that when conducting the chamfered peg-in-hole assembly task, the assembly time with collision detection and haptic feedback was almost half of the time without collision detection. These studies show the importance of integrating haptics in virtual assembly: it increases the realism of operating parts and shortens the assembly time.

2.2. The integration of a physics engine in assembly tasks

Integrating a physics engine to simulate objects' physical properties (gravity, friction, contact force, etc.) can increase the visual realism of virtual assembly systems. However, several problems remain, as commercial physics engines developed for games do not specifically focus on the problems of assembly tasks, especially when haptics is also integrated. Hummel et al. [4] compared five physics engines for assembly simulation in a virtual environment. In the advanced collision and friction test, they measured the position and orientation of a screw as it was screwed into a nut. The results showed that only the Newton and Bullet physics engines passed the test, but the movement was not as stable and continuous as in a real assembly task. The Havok engine did not manage to solve the task: the screw jumped out of the hole after 12% of the total length. Tching et al. [12] proposed virtual constraint guidance for insertion tasks. To apply the constraint-based guidance, they had to set mechanical linkages (constraints) between two parts (hinge, cylindrical joint, etc.) and model virtual fixtures, such as virtual walls, to limit the user's movement. The use of constraints and virtual fixtures makes the assembly task more efficient, but preparing an assembly task takes a long time; moreover, in a real assembly task this kind of force guidance can hardly be reproduced, which may affect the realism of the virtual assembly task. Xia et al. [13] proposed a physics-based modeling approach combined with haptic feedback and geometry constraints to perform a realistic assembly process. When two parts were close enough, a geometry constraint was captured; an attractive force then guided the user to assemble the part along the correct position, and a repulsive force could be generated when the handle deviated from the mating axis. Gonzalez et al. [14] developed three methods to represent concave objects in a physics engine; concave objects are among the most challenging problems for collision detection. Four experiments were carried out to test the three algorithms. The results showed that HACD (Hierarchical Approximate Convex Decomposition) performed best in assembly tasks comprising simple shape models, while GIMPACT performed best in assembly tasks with complex geometries. Another challenge of integrating a physics engine and haptics in a virtual training system is that the update rate for haptics (about 1000 Hz) is much higher than that of the physics engine (about 100 Hz). To solve this problem, many studies run the haptic rendering thread and the physics simulation asynchronously [15,16]. He et al. [15] used named pipes for the communication between the two processes; named pipes follow a client-server design, working in a way similar to sockets.

2.3. Assembly training and evaluations

Adams et al. [17] studied virtual training for manual assembly tasks in 1999. An experiment was conducted to investigate the benefits of force feedback in virtual training.
Three groups received different levels of virtual training for a plane model assembly. Only one group used haptics in the assembly task. The analysis of completion time revealed that subjects trained with force feedback performed significantly better than those who received no training. But the differences in learning between those
trained with force feedback and those trained without force feedback were not significant. Aziz et al. [18] designed a feature-based virtual mechanical assembly training system. They integrated a game engine in the assembly training to reduce the complexity and cost of developing a virtual assembly training environment, and created features to make the game engine compatible with engineering capabilities. However, they did not use an HMD or a haptic device to enable more efficient interaction between the human and the virtual environment. Oren et al. [19] compared the effectiveness and efficiency of virtual training with physical training using a puzzle assembly. The results showed that participants trained with the virtual puzzle were able to assemble the physical test puzzle three times faster than the group trained with the physical puzzle; the explanation might lie in the longer training time compared to the physical training. Vélaz et al. [3] studied the influence of different interaction technologies on the learning of assembly tasks. 60 participants were distributed into five groups, each using a different interaction technology: mouse-based, haptic-based, and two markerless motion capturing systems with 2D and 3D tracking; the last group was trained with a video tutorial as a comparison. The experiment focused on the efficiency and effectiveness of each interaction technology. The results demonstrated that the interaction technology had a small impact on learning the procedural knowledge of the assembly task. The assembly training time using the haptic device was longer than with the mouse, as participants using the haptic device needed to deal with the depth of objects in the 3D scene. Jia et al. [20] presented an object assembly training system using a haptic device to imitate real physical training scenarios. They assessed the design through large-scale user testing. The results showed that the skill-based learning was considered positive, and users had a positive perception of the utility of the virtual environment for training. From the above studies, it can be concluded that the integration of haptics and a physics engine increases the realism of the virtual environment. However, there are two shortcomings when integrating haptics and a physics engine in assembly tasks: (1) haptic-based assembly training needs much more training time than real-parts assembly because working in a 3D scene increases the complexity; (2) most physics engines can hardly complete insertion-type assembly without affecting the visual realism. In this work, a method of setting basic constraints is proposed to address these two shortcomings.

3. System design

The architecture of the training system consists of hardware and software components. The user interacts with the virtual environment using the Phantom Premium 1.5 haptic device (see Fig. 1). The haptic device [21] provides a range of motion approximating lower-arm movement pivoting at the elbow.
In addition to force feedback in the three translational degrees of freedom, the 6-DOF device simulates torque feedback in the three rotational degrees of freedom (yaw, pitch and roll) with the help of the encoder stylus gimbal.

Fig. 1. The training system interface with Phantom Premium 1.5.

Fig. 2 shows the system architecture and the data flow with the different update rates. For graphics rendering, Virtools is employed for visualization, as it is one of the most powerful environments for interactive 3D graphics [5,22–24]; moreover, extension modules can be developed through the Virtools 4.0 SDK. For haptic rendering, the Open Haptics Toolkit v3.0 from SensAble is employed; the HDAPI is used to get and set parameter values of the haptic device. In the simulation thread, the physics engine and the basic constraints have been implemented. The Havok engine is not only used for collision detection but also simulates a physics world. The basic constraints are used to calculate the parts' positions and force/torque values during insertion-type assembly. OpenCV 2.3 is used to obtain the hand position in order to rotate and zoom the scene camera in Virtools; it is designed as a widget to help trainees adjust parts to satisfy the threshold in the training mode. For WinForm development in the training system, DevExpress is adopted to provide UI controls.

4. Key technology

The training system consists of three modules: (1) object rendering, (2) the physics engine and haptic feedback, and (3) widgets. The following subsections present the key technologies of the training system.

4.1. Assembly sequence planning

The training system has two modes. The default mode is used to plan the assembly sequence [25] and set the parts' constraints and parameters; the second mode is the training mode for trainees. In the default mode, the assembly sequence is generated from the disassembly process. When the model is imported into Virtools, it is disassembled into parts. The system records the disassembly sequence, and the reverse of the disassembly sequence is the initial assembly sequence. The initial assembly sequence can then be optimized using the functions of the toolbar shown in Fig. 3. The buttons of the toolbar have different functions. The first button starts the disassembly; when it is activated, the system records the initial coordinates and orientation of all assembly parts. The second button is the play button, which plays an animation of the assembly process and helps optimize the assembly sequence; the initial assembly sequence of the animation is the reverse of the disassembly sequence, and each part's animation is based on its initial position and its current position. Right-clicking on the play button sets the total animation time, which controls the animation speed. The third button stops the animation; the next time the play button is pressed, the animation restarts. The fourth button changes the assembly sequence through the WinForm frame in Fig. 3. The fifth button saves the assembly sequence, and the sixth button returns the sequence to its initial state. The WinForm frame contains a number of combo boxes (see Fig. 3), one per part; the combo box size is adapted to the number of parts to suit the frame. Each part can be selected from the combo boxes, and the assembly sequence is determined by the order of the parts in the combo boxes.
For example, in the initial assembly sequence, part a is in the first combo box and part b is in the second, so part a is the first to be highlighted and part b the second. If the sequence needs to be changed, part b is selected from the first combo box and part a from the second.
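As a rough illustration of this planning step, the following C++ sketch records disassembly steps and reverses them to obtain the initial assembly sequence. The DisassemblyStep structure and the recording hook are hypothetical names for this illustration; the actual system implements the equivalent logic inside Virtools.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical record of one disassembly step: which part was removed,
// and its pose before removal (used later to animate the assembly).
struct DisassemblyStep {
    std::string partName;
    float position[3];
    float orientation[3];
};

class SequencePlanner {
public:
    // Called each time the planner drags a part out of the model.
    void RecordRemoval(const DisassemblyStep& step) { steps_.push_back(step); }

    // The initial assembly sequence is simply the disassembly sequence
    // in reverse order; it can then be reordered through the combo
    // boxes of the WinForm frame.
    std::vector<DisassemblyStep> InitialAssemblySequence() const {
        std::vector<DisassemblyStep> seq(steps_);
        std::reverse(seq.begin(), seq.end());
        return seq;
    }

private:
    std::vector<DisassemblyStep> steps_;
};
```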
Fig. 2. System architecture and data flow with different update rates.
Fig. 3. Screenshot of the toolbar and winform frame.
The WinForm frame (see Fig. 3) also contains information about each part, including its initial position, orientation, operating tool, vector and constraint. The vector characterizes the moving axis in insertion-type assembly tasks. The planner is supposed to set each part's constraint and operating tool during the disassembly process. The constraints can be defined in two ways: 1) right-click the mouse and select the constraint before the part is disassembled; 2) on the sequence-changing interface, select the constraint from the part's combo box. The assembly sequence is used in two ways in the training mode: 1) to highlight the current assembly part's reference part, which shows the final position that the current part must be moved to (in Fig. 4a, the base is the current assembly part, and the reference part in yellow marks the target position); 2) to highlight the next assembly part once the current part has been assembled (when the base in Fig. 4b has been moved to the right position, the next part in the assembly sequence is highlighted).
4.2. The integration of the physics engine and haptic feedback

The physics engine plugin provided by Virtools is the result of a collaboration between Virtools and Havok, and it exposes most capabilities of the Havok physics library. The integration of the physics engine not only reduces the effort of simulating a physics world, which increases the realism of the virtual environment, but also makes assembly motion simulation possible. In [26], the Havok engine performed best in average computation time, stability and friction accuracy, but in [4], where several engines were compared, the results showed that Havok was less accurate than other physics engines. Although the Havok engine helps simulate the physical environment, it was not developed for haptic rendering [27]. Moreover, in the peg-in-hole test, when the cylinder was inserted into the hole, it jumped out of the hole after moving a short distance; this is because insertion-type assembly involves a concave part, which makes collision detection complex. So the Havok engine can hardly complete insertion-type assembly without affecting the visual realism. To solve this problem, the collision group, one of the physics engine capabilities in Virtools, is integrated. Collision groups are recorded automatically during the disassembly process: if part a collides with part b while part a is being disassembled, part b is added to the collision group of part a. Each part's collision group can also be set by the assembly training planner. If two parts are in the same collision group (which is common in assembly tasks), the physics engine does not prevent them from penetrating each other; for example, if part b is in the collision group of part a and the collision group is activated, part b will not prevent part a from penetrating it. The integration of collision groups enables the Havok engine to complete insertion-type assembly, but it results in penetration, which greatly affects the visual realism. Virtual assembly in this work is therefore divided into two types: non-insertion-type assembly and insertion-type assembly. Non-insertion-type assembly means placing one part on the surface of another part; the part just needs to be moved to the accurate position using automatic merging [11]. For insertion-type assembly, the method of setting basic constraints is proposed to increase the visual and haptic realism when the physics engine is integrated in assembly training.
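A minimal sketch of the collision-group bookkeeping just described is given below, assuming a hypothetical collision callback; in the actual system this is handled by the Havok plugin inside Virtools.

```cpp
#include <map>
#include <set>
#include <string>

// partA is being disassembled; partB is whatever the physics engine
// reports it colliding with along the way.
std::map<std::string, std::set<std::string>> collisionGroups;

void OnDisassemblyCollision(const std::string& partA, const std::string& partB) {
    collisionGroups[partA].insert(partB);
}

// In the training mode: if two parts share a collision group, the
// engine is told not to resolve contacts between them, so the peg
// can enter the hole without being pushed back out.
bool ShouldIgnoreContact(const std::string& a, const std::string& b) {
    auto it = collisionGroups.find(a);
    return it != collisionGroups.end() && it->second.count(b) > 0;
}
```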
4.2.1. Setting basic constraints

Setting basic constraints has three advantages: 1) it increases the visual and haptic realism when collision groups are integrated in assembly tasks; 2) it simplifies the assembly operation and shortens the assembly time; 3) it makes the motion simulation of the mechanism more stable.
Fig. 4. Functionality of the assembly sequence.
Fig. 5. Peg-in-hole constraint illustration.
The moving axis in Fig. 5a plays an important role in setting basic constraints: when a basic constraint is activated, the part is constrained to move along this axis. Each part's moving axis is recorded automatically during its disassembly. The moving axis is determined by a point and a vector. The points, which represent the part's position and orientation in the system, are recorded at a fixed interval during the part's disassembly, and the vectors are derived from the points: vector[i] is determined by point[i] and point[i + 1]. When vector[i] − vector[i − 1] no longer satisfies the threshold, which means the part has been dragged out of the hole, the system stops recording points. The moving axis of the part being disassembled in the default mode is therefore determined by vector[i − 1] and point[i], and the same vector[i − 1] and point[i] are chosen as the features of the moving axis when the part is assembled in the training mode. Note that the axis is not restricted to the center of the hole; it is determined by the part's center point and the disassembly path, so the axis can even lie outside the hole. In Fig. 5a, when part a is dragged near point[i] (point[i] is used not only to determine the moving axis, but also as the reference point) and the difference between part a and point[i] satisfies the threshold (see Table 1), part a is adjusted onto the moving axis automatically (see Fig. 5b) and the basic constraint is activated. The threshold consists of position and orientation thresholds: the distance between the current assembly part and point[i] must be less than 10 mm, and the difference in orientation between the current assembly part and point[i] must be less than 10° (the formula in brackets in Table 1 avoids the abrupt change of the angle at 180°, since the angle of the part wraps to −180° when it exceeds 180°).
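The recording of the moving axis can be sketched as follows. The Vec3 type, the function signature and the threshold test are illustrative assumptions; only the logic (stop when the direction of motion changes, then keep vector[i − 1] and point[i]) follows the description above.

```cpp
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float Norm() const { return std::sqrt(x * x + y * y + z * z); }
};

struct MovingAxis { Vec3 point; Vec3 direction; };

// points[] are the part poses sampled at a fixed interval during
// disassembly (at least three samples assumed). Recording stops once
// the change between two consecutive displacement vectors exceeds
// 'threshold', i.e. the part has left the hole; the axis is then
// point[i] with direction vector[i - 1].
MovingAxis ExtractMovingAxis(const std::vector<Vec3>& points, float threshold) {
    std::vector<Vec3> vec;
    for (size_t i = 0; i + 1 < points.size(); ++i)
        vec.push_back(points[i + 1] - points[i]);
    size_t i = 1;
    while (i < vec.size() && (vec[i] - vec[i - 1]).Norm() <= threshold)
        ++i;
    return {points[i], vec[i - 1]};
}
```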
During this period, if the scale factor, which can be set in the changing parameters widget, is less than one, the part's rotation angle will be smaller than the stylus rotation angle, as the part's orientation is determined by its initial orientation and the changing value; the changing value is based on the stylus rotation and the scale factor. The constraints are divided into two types: force feedback constraints and torque feedback constraints. The most common force feedback constraints are the slider constraint and the peg-in-hole constraint; the difference lies mainly in the force value calculation. Parts under the slider constraint are usually the main assembly parts, while parts under the peg-in-hole constraint are mainly used to fasten or limit other parts' positions. The force feedback helps the trainees distinguish them: in real-parts assembly, the slider constraint only involves friction, which is a constant value, whereas the force of the peg-in-hole constraint increases as the peg is inserted into the hole (if the peg is used to fasten other parts). In Fig. 5b, when the part has been adjusted onto the axis, the peg-in-hole constraint is activated. During this period, the haptic device does not have to be moved exactly along the axis. In Fig. 5b, the grey ball is the display cursor and the blue ball represents the stylus position; the figure shows that the stylus does not move along the moving axis but part a does. When the constraint is activated, the part's position is determined by the projection of the blue ball onto the moving axis. The part's position along the axis is calculated using the following formulas:
x_n = k(x_i − x_0) + x_0    (1)

y_n = k(y_i − y_0) + y_0    (2)

z_n = k(z_i − z_0) + z_0    (3)

k = −[(x_0 − x_p)(x_i − x_0) + (y_0 − y_p)(y_i − y_0) + (z_0 − z_p)(z_i − z_0)] / [(x_i − x_0)² + (y_i − y_0)² + (z_i − z_0)²]    (4)
(x_n, y_n, z_n) is the position of part a, (x_i, y_i, z_i) is the position of point[i], (x_0, y_0, z_0) is the part's initial position, and (x_p, y_p, z_p) is the device position. When part a is constrained to move along the axis, its collision group is activated, because the moving axis has a coaxial error with respect to the hole's central axis; even though the error is very small, the collision group must be activated to avoid collision. Since part a is constrained to move along the axis, the penetration can hardly be seen and does not affect the visual realism. In this way, part a can easily be inserted into part b without complex collision detection. Fig. 6 shows screenshots of the cylindrical pin being inserted into the hole in the training system.
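A direct transcription of Eqs. (1)–(4) into code might look like the following sketch; the function name and signature are illustrative, not the system's actual interface.

```cpp
#include <array>

// Project the device position P onto the moving axis through the
// part's initial position X0 and point[i]; names follow the paper's
// notation. Assumes xi != x0, i.e. a non-degenerate axis.
std::array<float, 3> ProjectOnAxis(const std::array<float, 3>& x0,  // initial position
                                   const std::array<float, 3>& xi,  // point[i]
                                   const std::array<float, 3>& xp)  // device position
{
    float num = 0.0f, den = 0.0f;
    for (int j = 0; j < 3; ++j) {
        num += (x0[j] - xp[j]) * (xi[j] - x0[j]);  // (X0 - P) . (Xi - X0)
        den += (xi[j] - x0[j]) * (xi[j] - x0[j]);  // |Xi - X0|^2
    }
    float k = -num / den;                           // Eq. (4)
    return {k * (xi[0] - x0[0]) + x0[0],            // Eq. (1)
            k * (xi[1] - x0[1]) + x0[1],            // Eq. (2)
            k * (xi[2] - x0[2]) + x0[2]};           // Eq. (3)
}
```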
Table 1. Part adjustment requirement.

Position: |Pos_p − Pos_i| ≤ 10 mm

Orientation:
Δθ_x = |θ_p−x − θ_i−x| ≤ 10° (if |θ_p−x − θ_i−x| > 180°, Δθ_x = 360° − |θ_p−x − θ_i−x|)
Δθ_y = |θ_p−y − θ_i−y| ≤ 10° (if |θ_p−y − θ_i−y| > 180°, Δθ_y = 360° − |θ_p−y − θ_i−y|)
Δθ_z = |θ_p−z − θ_i−z| ≤ 10° (if |θ_p−z − θ_i−z| > 180°, Δθ_z = 360° − |θ_p−z − θ_i−z|)
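The Table 1 test, including the 360° fold-back that avoids the angle jump at ±180°, can be sketched as follows; the function and its argument layout are assumptions for illustration.

```cpp
#include <cmath>

// The part snaps onto the moving axis only when both position and
// orientation are within the thresholds of Table 1.
bool SatisfiesThreshold(const float posP[3], const float posI[3],
                        const float angP[3], const float angI[3]) {
    float d2 = 0.0f;
    for (int j = 0; j < 3; ++j) {
        float e = posP[j] - posI[j];
        d2 += e * e;
    }
    if (std::sqrt(d2) > 10.0f) return false;            // position: <= 10 mm

    for (int j = 0; j < 3; ++j) {
        float dTheta = std::fabs(angP[j] - angI[j]);
        if (dTheta > 180.0f) dTheta = 360.0f - dTheta;  // fold back past 180 deg
        if (dTheta > 10.0f) return false;               // orientation: <= 10 deg
    }
    return true;
}
```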
Fig. 6. Cylindrical pin assembly.
Moreover, with the help of haptic feedback (Section 4.2.2), it feels as if the part is still following the device, although the part's position is now calculated by the system. The force feedback constraint is also suitable for irregular parts, not only cylinder assembly; Fig. 7 presents the slider-constraint assembly of two irregular parts from the Clamping Mechanism. The most common torque feedback constraint in virtual assembly is the screw mechanism (see Fig. 8). When the screw is dragged to a position that satisfies the threshold (see Table 1), the constraint is activated; when the stylus is rotated, the screw rotates, and its translation depends on its rotation angle. The translation l of the screw is calculated using the following formula:
l = p_h · θ / (2π)    (5)
p_h is the screw's translation along the moving axis when it rotates by 360°, and θ is the screw's rotation angle.

4.2.2. Force feedback

One of the challenging problems of integrating force feedback is how the physics engine can satisfy the high update frequency (1000 Hz) required for stable haptic rendering. Since the physics simulation frequency is limited to 100 Hz in the training system, the physics simulation and the haptic rendering run asynchronously. Memory-mapped files backed by the paging file [28] are employed for the communication between Virtools and the haptic rendering process; this is a highly efficient communication method between multiple processes.
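A minimal sketch of such a paging-file-backed memory-mapped file on Windows is shown below. The SharedState layout and the mapping name are hypothetical, and synchronization between the two processes is omitted for brevity.

```cpp
#include <windows.h>

// Shared block exchanged between the Virtools process (~100 Hz physics)
// and the haptic rendering process (~1000 Hz). Layout is illustrative.
struct SharedState {
    float devicePos[3];  // written by the haptic process
    float force[3];      // written by the physics/constraint side
};

SharedState* OpenSharedState() {
    // Backed by the system paging file: pass INVALID_HANDLE_VALUE
    // instead of a real file handle. Both processes call this with
    // the same name and get views of the same memory.
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                     PAGE_READWRITE, 0, sizeof(SharedState),
                                     L"Local\\HapticTrainingState");
    if (!hMap) return nullptr;
    return static_cast<SharedState*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(SharedState)));
}
```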
There are two types of haptic feedback in this work: the first consists of gravity and collision feedback, and the second is the basic constraint. The haptic device can render forces of up to 7.5 N. The gravity value is calculated from each part's bounding box size and its density, and a control algorithm scales the calculated gravity to a proportional output force. A spring-damper system [29] is used to calculate the collision force value. When a collision happens, the cursor and the part are prevented from penetrating the surface; the force value is calculated by stretching a virtual spring-damper between the haptic device position and the display cursor using the following formula:
F = k_l · d − δ_l · v    (6)
k_l and δ_l are the linear spring stiffness and damping constants, d is the distance between the stylus position and the display cursor, and v is the relative velocity of the stylus. The constraint-type force feedback depends on the constraint. In Fig. 5b, when part a has been adjusted onto the axis, the peg-in-hole constraint is activated. The force direction is determined by the moving axis and the stylus's moving direction, and the force value is calculated using the following formula:
F = k_h · d_1 − δ_h · v    (7)
k_h and δ_h are the linear spring stiffness and damping constants, d_1 is the distance between the part and point[i], and v is the velocity of the stylus. The other force feedback constraint is the slider constraint, whose force is calculated using the following formula:
F = F_c − δ_h · v    (8)
F_c is the constant friction force, δ_h is the damping constant, and v is the velocity of the stylus.
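The three force laws of Eqs. (6)–(8) can be sketched as simple scalar functions; the gain values below are placeholders, not the tuned constants of the actual system.

```cpp
// Eq. (6): collision response via a virtual spring-damper stretched
// between the haptic device position and the display cursor.
float CollisionForce(float d, float v, float kl = 0.5f, float dl = 0.02f) {
    return kl * d - dl * v;
}

// Eq. (7): peg-in-hole constraint; d1 is the distance from the part
// to point[i], so the resistance grows as the peg goes deeper.
float PegInHoleForce(float d1, float v, float kh = 0.3f, float dh = 0.02f) {
    return kh * d1 - dh * v;
}

// Eq. (8): slider constraint; a constant friction force Fc plus damping.
float SliderForce(float v, float Fc = 1.0f, float dh = 0.02f) {
    return Fc - dh * v;
}
```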
Fig. 7. Slider constraint assembly of irregular parts.
Fig. 8. Screw constraint illustration.
The screw constraint in Fig. 8 mainly concerns torque feedback. The torque is calculated using the following formula:
T = k_r · θ − δ_r · w    (9)
k_r and δ_r are the torsional spring stiffness and damping constants, and θ and w are the screw's rotation angle and angular velocity.
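Eqs. (5) and (9) together define the screw constraint's kinematics and rendered torque; a compact sketch, with placeholder gains, might look like this:

```cpp
// Translation follows rotation (Eq. (5)) and the rendered torque is a
// torsional spring-damper (Eq. (9)).
struct ScrewState {
    float translation;  // l, displacement along the moving axis
    float torque;       // T, rendered through the stylus gimbal
};

ScrewState UpdateScrew(float theta,        // accumulated rotation angle (rad)
                       float omega,        // angular velocity (rad/s)
                       float ph,           // lead: translation per full turn
                       float kr = 0.2f,    // torsional stiffness (placeholder)
                       float dr = 0.01f)   // torsional damping (placeholder)
{
    const float kTwoPi = 6.2831853f;
    return {ph * theta / kTwoPi,           // Eq. (5): l = ph * theta / (2*pi)
            kr * theta - dr * omega};      // Eq. (9): T = kr*theta - dr*w
}
```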
4.2.3. Motion simulation

Motion simulation [24] can be used to analyse the mechanism's motion after all parts have been assembled, and also to test whether all parts have been assembled correctly. Motion simulation is widely used in CAD software. With the help of the physics engine and the designed constraints, some simple motion simulations can be performed in the virtual system with haptic feedback. Fig. 9 shows an example with the Slider Mechanism. When part c moves along the axis, the haptic device provides the force feedback F_1, and the other subordinate parts also move because of the physics engine. If the first part moves correctly along its axis, the whole motion simulation is much more likely to succeed; if part c had not been set as a slider constraint, it could hardly be moved exactly along the axis, let alone the other subordinate parts. So, to achieve motion simulation, both the physics engine and the constraints are required. Note that assembly constraints and motion simulation constraints are not exactly the same, even though the force and the parts' positions are calculated using the same formulas.

Fig. 9. Motion simulation.
5. Widgets

Widgets are small tools that assist the assembly. Almost all widgets can be turned off when not needed and activated when needed; in fact, the assembly sequence planning itself is a widget. Ten widgets have been designed, and the functionality of each is summarized in Table 2. The following paragraphs describe the widgets in detail. (1) Moving assembly model widget: when the assembly model is imported into Virtools in the default mode, the first step is to move and rotate the model for convenient viewing. The scene camera should not be used for this operation: if the camera is moved, the coordinate system changes too, and the display cursor may no longer follow the stylus because the system coordinates and the haptic device coordinates differ. This widget keeps the haptic workspace consistent with the user's view. (2) Hand position recognition widget: hand position recognition is used to rotate and zoom the scene camera to help trainees adjust parts to satisfy the threshold in the training mode. It consists of 5 steps [6]: (1) the camera captures images of the hand, (2) preprocessing, (3) extraction of the hand contour, (4) extraction of the two points Ymax and Ymin from the convex hull, and (5) comparison. The loop is limited to 15 Hz. The comparison is based on the current and the previous convex hull points.
Table 2. Functionality of each widget.

Widget | Functionality
Changing assembly mode | The default mode is for assembly planning; the other mode is for assembly training
Assembly cues | Turn on/off the assembly cues
Changing parameters | Set the scale factor and parameters for haptic rendering
Multiple views | Turn on/off the multiple views
Data recording/Analyse | Record the assembly data / analyse each assembly training process
Moving assembly model | Move and rotate the model for convenient viewing
Video recording | Record the assembly operation during the training mode
Path navigation | Draw the assembly path and show it in the training mode
Hand position recognition | Control the scene camera
Force feedback | Turn on/off the force feedback
The two points Ymax and Ymin from all the convex hull points are chosen to determine the hand's position. Further development will address gesture recognition, to allow turning some widgets on and off during the training process. (3) Analyse widget: logging data can be very helpful for optimizing procedural tasks and training strategies [31]. The data come from another widget, the data recording widget, which records the assembly sequence, each part's assembly time and the collisions during the assembly training. As shown in Fig. 10, the interface consists of two tab pages. One shows the assembly time: each part's assembly time is the interval between the moment it is picked up and the moment it reaches the right position. Fig. 10 shows this tab page; the tree list and the pie chart indicate which parts need more assembly time. The other tab page contains the assembly sequence and collision analysis. The sequence analysis compares the trainee's assembly sequence with the default assembly sequence, which reveals the trainee's learning of the training task. The record of collisions helps analyse unexpected collisions, which in turn may help optimize the assembly sequence.

Fig. 10. Screenshot of the analyse widget.
Fig. 11. Screenshot of the path navigation widget.
(4) Path navigation widget: this widget is designed for parts that have a complex moving path to their final position (see Fig. 11). An optimized path contributes greatly to reducing the assembly time and distance [32]. The assembly planner can draw a part's path in the default mode; in the training mode, if this widget is activated, the part's path (if it has one) is shown. (5) Changing parameters widget [30]: this widget serves the haptic feedback and the physics engine. The scale factor, elasticity, friction properties, gravity parameter and density can be set (see Fig. 12). The other widgets turn functions on and off, such as the assembly cues, multiple views, force feedback and video recording.

Fig. 12. Screenshot of the changing parameters widget.

6. User study

6.1. Experiment design

A preliminary experiment was carried out to test the functionality of the virtual training system. The experiment focuses on three items: 1) differences in training times, 2) differences in
learning of the training task, and 3) evaluation of the constraint-based training system. 24 participants took part in the test, none of whom had experience assembling the same Slider Mechanism. They were divided into three groups. The first and second groups were trained with the virtual training system: the first group (haptic-based group) used the haptic device, while the second group (mouse-based group) used the mouse and keyboard to interact with the virtual environment. The third group (traditional group) was trained with real parts. The test is the Slider Mechanism assembly. Before the actual test, the first group was taught basic skills for using the haptic device, including picking up, moving and rotating parts; they were then given sufficient time to become familiar with the device, so that during training they could focus on the assembly task rather than on operating the device or the training system. The second group also had enough time to become familiar with the training environment. The Slider Mechanism assembly task contains 16 parts. The first and second groups assembled the parts with the help of assembly cues; the third group was taught by the planner to complete the task. After the training, the first group was asked to score (from 1 to 7, where 7 means strongly satisfied) a few subjective questions: (1) quality of visualization, (2) realism of collision detection and haptic feedback, (3) easiness of operation, and (4) assembly cues. The questions gathered users' feedback on the constraint-based training system and suggestions for further improvement. The next day, the three groups were asked to complete the physical-parts assembly without assembly cues. Fig. 13 shows the flow of the experiment design.

Fig. 13. Flow of the experiment design.

6.2. Results

The three groups' training times were recorded, and a one-way ANOVA was used to compare the differences. The result showed that the differences in training times among the three groups were significant (F(2, 21) = 8.775, p = 0.002). A Tukey HSD post hoc test showed that the training of the mouse group (M = 110.3, SD = 9.9) was significantly slower than that of the haptic group (M = 92.5, SD = 6.6) and the traditional training group (M = 95.0, SD = 10.6). No significant difference in training time was found between the haptic group and the traditional training group. The learning of the task in this work consists of assembly sequence learning and parts placement learning. It is measured by the number of wrong steps and assembly cues in the real assembly task; wrong steps and cues cover both the assembly sequence and the parts' placement, so if a participant picks up a part out of order and places it in a wrong position, this counts as two mistakes. The learning of the task is therefore calculated using the formula:

L = 1 − (number of wrong steps and assembly cues) / (2 × number of assembly parts)    (10)
L is the learning of the training task. Fig. 14 shows the performance of each group.

Fig. 14. Performance of physical parts assembly.

To compare the differences in learning of the task between the haptic group (M = 65.6%, SD = 8.3) and the traditional training group (M = 69.2%, SD = 6.8), an independent-samples t-test was used, and no significant difference was found (p = 0.363). To find out whether haptics improves learning, another independent-samples t-test was conducted; no significant difference (p = 0.337) was found in learning of the task between the haptic group (M = 65.6%, SD = 8.3) and the mouse group (M = 70.2%, SD = 10.2). The participants' feedback is shown in Table 3. Easiness of operation received the highest score, which means users were satisfied with how easy the haptic-based training system is to operate; quality of visualization received the lowest score.

Table 3. Evaluation of the haptic-based training system.

Assessment contents | Min | Max | Average
Quality of visualization | 3 | 6 | 4.2
Realism of collision detection and haptic feedback | 4 | 6 | 5.3
Easiness of operation | 6 | 7 | 6.2
Assembly cues | 4 | 7 | 5.6

6.3. Discussion

The training time. Since this work presents a method of setting basic constraints to make assembly training more efficient, the test examined whether the basic constraints shorten the assembly time. The one-way ANOVA on the three groups' training times showed that the training of the mouse group was significantly slower than that of the haptic group and the traditional training group. The difficulty of operating parts in a 3D scene is probably the main reason: the mouse-based group had to choose the parts' moving plane and rotation axis all the time, and some participants made mistakes in choosing the right rotation axis, which wasted a lot of time. The training time of the haptic group showed no significant difference from the traditional training group, mainly for three reasons: 1) the basic constraints made insertion-type assembly easier, which shortened the assembly time; 2) before the training task, the first group had enough time to become familiar with the training system and with operating the device; 3) in the traditional training group, the assembly cues were provided by the experiment planner and were not presented as vividly as the cues in the virtual training system, so the trainees had to work out how to assemble some of the parts.

Learning of the task. Learning of the task is measured by the percentage of correct steps performed without assembly cues. The first t-test showed no significant differences in assembly sequence learning and parts placement learning between the haptic group and the traditional group; so, for learning the assembly sequence and parts placement, haptic-based virtual training can replace traditional training to some extent, although the system still needs further improvement (such as two-handed assembly). The second t-test showed no significant differences in learning between the haptic group and the mouse-based group. However,
the mouse-based group took more training time, which might itself facilitate learning of the task. Moreover, haptic-based training familiarized trainees with the assembly skills better than mouse-based training, since in haptic-based training trainees had to pick up, move, orient and assemble parts as in real-parts assembly; this cannot be reflected in the formula used to measure learning. Learning of assembly skills should be studied further for complex assembly tasks.

Feedback on the haptic training. Table 3 shows that the haptic-based training system was considered positive. Participants were satisfied with the operation of the haptic-based training system, and the assembly cues also received a good evaluation: they are very clear in the system and easy to understand. The quality of visualization needs to be improved through the use of stereo visualization. In some parts' assembly, the scene camera had to be moved to help participants adjust the part to satisfy the threshold; for a more complex assembly model, this would greatly increase the complexity of the assembly training.

7. Conclusion and future work

This paper presents a virtual training system that aims to make assembly training easy and realistic through the use of haptics and visual fidelity. The contribution is mainly in three aspects. (1) A method of setting basic constraints is proposed to solve the shortcomings that arise when a physics engine and haptic feedback are both integrated in a training system; the basic constraints make the assembly training easy and realistic through the use of haptics and visual fidelity. (2) The assembly sequence is generated from the disassembly process and can be optimized through the assembly animation; moreover, each part's assembly constraint can also be identified during the disassembly process, which saves preparation time. (3) Apart from the display and physics engine modules, the whole training system is made up of closeable widgets, which reduces computation and memory requirements. An experiment on the Slider Mechanism is presented; it tested the efficiency and the learning of the task when using the haptic-based training system. The results show no significant differences in learning the assembly sequence and parts placement among the three groups; however, the haptic-based training helped trainees become familiar with the assembly skills compared to the mouse-based training, and this cannot be reflected in the formula used to measure learning. No significant difference in training time was found between the haptic group and the traditional training group, but the training of the mouse-based group was significantly slower. Further improvement will focus on the quality of visualization and on two-handed assembly tasks.

Acknowledgment

This work is supported by the NSFC-CAS Joint Fund (No. U1332130), the 111 Project (No. B07033) and the 973 Project (No. 2014CB931804).

References
[1] Sun SH, Tsai LZ. Development of virtual training platform of injection molding machine based on VR technology. Int J Adv Manuf Technol 2012;63(5-8):609–20.
[2] Lim T, Ritchie JM, Dewar RG, et al. Factors affecting user performance in haptic assembly. Virtual Real 2007;11(4):241–52.
[3] Vélaz Y, Arce JR, Gutiérrez T, et al. The influence of interaction technology on the learning of assembly tasks using virtual reality. J Comput Inf Sci Eng 2014;14(4):041007.
[4] Hummel J, Wolff R, Stein T, et al. An evaluation of open source physics engines for use in virtual reality assembly simulations. In: Advances in visual computing. Berlin Heidelberg: Springer; 2012. p. 346–57.
[5] Xia P, Lopes A, Restivo M. Design and implementation of a haptic-based virtual assembly system. Assem Autom 2011;31(4):369–84.
[6] Fillatreau P, Fourquet JY, Le Bolloc'h R, et al. Using virtual reality and 3D industrial numerical models for immersive interactive checklists. Comput Ind 2013;64(9):1253–62.
[7] Jayaram S, Jayaram U, Wang Y, et al. VADE: a virtual assembly design environment. Comput Graph Appl, IEEE 1999;19(6):44–50.
[8] Coutee AS, Bras B. Collision detection for virtual objects in a haptic assembly and disassembly simulation environment. In: ASME 2002 international design engineering technical conferences and computers and information in engineering conference. American Society of Mechanical Engineers; 2002. p. 11–20.
[9] Seth A, Vance JM, Oliver JH. Combining geometric constraints with physics modeling for virtual assembly using SHARP. In: ASME 2007 international design engineering technical conferences and computers and information in engineering conference. American Society of Mechanical Engineers; 2007. p. 1045–55.
[10] Gonzalez-Badillo G, Medellin-Castillo H, Lim T, et al. The development of a physics and constraint-based haptic virtual assembly system. Assem Autom 2014;34(1):41–55.
[11] Wang ZB, Ong SK, Nee AYC. Augmented reality aided interactive manual assembly design. Int J Adv Manuf Technol 2013;69(5-8):1311–21.
[12] Tching L, Dumont G, Perret J. Interactive simulation of CAD models assemblies using virtual constraint guidance. Int J Interactive Des Manuf (IJIDeM) 2010;4(2):95–102.
[13] Xia PJ, Lopes AM, Restivo MT, et al. A new type haptics-based virtual environment system for assembly training of complex products. Int J Adv Manuf Technol 2012;58(1-4):379–96.
[14] Gonzalez G, Medellin HI, Lim T, et al. 3D object representation for physics simulation engines and its effect on virtual assembly tasks. In: ASME 2012 international design engineering technical conferences and computers and information in engineering conference. American Society of Mechanical Engineers; 2012. p. 1449–59.
[15] He XJ, Choi KS. Stable haptic rendering for physics engines using inter-process communication and remote virtual coupling. Int J Adv Comput 2013;4(1).
[16] Choi KS, Chan LSH, Qin J, et al. Haptic rendering in interactive applications developed with commodity physics engine. J Multimedia 2011;6(2):147–55.
[17] Adams RJ, Klowden D, Hannaford B. Virtual training for a manual assembly task. Haptics-e 2001;2(2):1–7.
[18] Aziz ELSS, Chang Y, Esche SK, et al. Virtual mechanical assembly training based on a 3D game engine. Comput-Aided Des Appl 2015;12(2):119–34.
[19] Oren M, Carlson P, Gilbert S, et al. Puzzle assembly training: real world vs. virtual environment. In: Virtual reality short papers and posters (VRW). IEEE; 2012. p. 27–30.
[20] Jia D, Bhatti A, Nahavandi S. Design and evaluation of a haptically enable virtual environment for object assembly training. In: HAVE 2009, IEEE international workshop on haptic audio visual environments and games. IEEE; 2009. p. 75–80.
[21] Munih M, Novak D, Bajd T, et al. Biocooperation in rehabilitation robotics of upper extremities. In: ICORR 2009, IEEE international conference on rehabilitation robotics. IEEE; 2009. p. 425–30.
[22] Belluco P, Bordegoni M, Polistina S. Multimodal navigation for a haptic-based virtual assembly application. In: ASME 2010 world conference on innovative virtual reality. American Society of Mechanical Engineers; 2010. p. 295–301.
[23] Poyade M, Reyes-Lecuona A, Leino SP, et al. A high-level haptic interface for enhanced interaction within Virtools™. In: Virtual and mixed reality. Springer; 2009. p. 365–74.
[24] Bruno F, Angilica A, Cosco F, et al. Reliable behaviour simulation of product interface in mixed reality. Eng Comput 2013;29(3):375–87.
[25] Wang L, Keshavarzmanesh S, Feng HY, et al. Assembly process planning and its future in collaborative manufacturing: a review. Int J Adv Manuf Technol 2009;41(1-2):132–44.
[26] Glondu L, Marchal M, Dumont G. Evaluation of physical simulation libraries for haptic rendering of contacts between rigid bodies. In: ASME 2010 world conference on innovative virtual reality. American Society of Mechanical Engineers; 2010. p. 41–9.
[27] Maciel A, Halic T, Lu Z, et al. Using the PhysX engine for physics-based virtual surgery with force feedback. Int J Med Robot Comput Assist Surg 2009;5(3):341–53.
[28] Nasarre C, Richter J. Windows via C/C++. Pearson Education; 2007.
[29] Howard BM, Vance JM. Desktop haptic virtual assembly using physically based modelling. Virtual Real 2007;11(4):207–15.
[30] Gonzalez-Badillo G, Medellin-Castillo HI, Lim T. Development of a haptic virtual reality system for assembly planning and evaluation. Procedia Technol 2013;7:265–72.
[31] Ritchie JM, Lim T, Sung RS, et al. The analysis of design and manufacturing tasks using haptic and immersive VR: some case studies. In: Product engineering. Netherlands: Springer; 2008. p. 507–22.
[32] Yoon J. Assembly simulations in virtual environments with optimized haptic path and sequence. Robot Comput-Integr Manuf 2011;27(2):306–17.