Transferring human grasping synergies to a robot




Mechatronics 21 (2011) 272–284



Tao Geng, Mark Lee, Martin Hülse

Intelligent Robotics Group, Department of Computer Science, Aberystwyth University, Wales, UK

Article history: Received 23 February 2010; Accepted 12 November 2010; Available online 24 December 2010

Keywords: Human–robot skill transfer; Synergy; Robotic grasping

Abstract

In this paper, a system for transferring human grasping skills to a robot is presented. In order to reduce the dimensionality of the grasp postures, we extracted three synergies from data on human grasping experiments and trained a neural network with the features of the objects and the coefficients of the synergies. Then, the trained neural network was employed to control robot grasping via an individually optimized mapping between the human hand and the robot hand. As force control was unavailable on our robot hand, we designed a simple strategy for the robot to grasp and hold the objects by exploiting tactile feedback at the fingers. Experimental results demonstrated that the system can generalize the transferred skills to grasp new objects.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

1.1. Transferring human skills to robots

Humans have remarkable motor skills and outperform most robots on a variety of complex motor tasks. By transferring human skills to robots, we may avoid a long and costly search process when designing the robot controller. New skill learning in the robot can also be sped up if it is based on the transferred skills. To transfer a human skill to a robot, two problems have to be solved: (1) how to extract or model the human skill; and (2) how to implement it on a robot whose configuration and sensory-motor systems are quite different from those of humans. Different methods have been proposed for transferring human skills to robots in various tasks and scenarios. Cortesao and Koeppe [1] transferred human skill in the peg-in-hole insertion task to a robot. While a human performed this task, the forces, torques and velocities of the peg were recorded as a function of its pose. A neural network trained with these data was then used to control the robot in the same task. In another study [2], human expertise in a manipulative task was modeled as an associative mapping in a neural network and implemented on a direct-drive robot. Yang and Chen [3] represented human skills in tele-operation as a parametric model using hidden Markov models. The sensory-motor data representing human skills sometimes have unknown models and are redundant. To deal with this problem, Cortesao and Koeppe [1] proposed a sensor fusion paradigm composed of two independent modules. One module performed the optimised fusion by minimizing the noise power. The other module was a Kalman filter that filters unknown variables.

Unlike the above-mentioned methods that explicitly model human skills, the human-to-robot skill transfer framework proposed by Oztop et al. [4] exploited the plasticity of the body schema in the human brain. Their system first integrated a 16 DoF robotic hand into the experimenter's body schema (i.e., the neural representation of his own body). The dexterity exhibited by the experimenter with this external limb, the robotic hand, was then used to design the controller of the robot. Another widely used mechanism for human-to-robot skill transfer is imitation, where a robot observes the execution of a task, acquires task knowledge, and then reproduces it. In [5], a robot imitated the grasping and placing actions of a human model. Skill transfer was realized when the robot learned goal-directed sequences of motor primitives during the imitation. In [6], a continuous hidden Markov model was trained with characteristic features of the perceived human movements and then used in a simulated robot to reproduce those movements. Rather than having robots observe or imitate human movements, a recent work [7] developed a communication language for transferring grasping skills from a non-technical user to a robot during human–robot interaction. Although these methods are effective in transferring skills to robots, in most cases the robots can only learn the demonstrated tasks, and it is difficult to generalize to new tasks.

1.2. The synergies in human grasping

While grasp synthesis is still a tough problem for robot hands (see [8] for a review), humans can grasp and manipulate various objects effortlessly. One challenge in robotic grasping is how to coordinate the many joints of the fingers to generate an appropriate grasp posture for a specific object.


Humans and animals face the same problem in the motor control of their large number of muscles. Selecting the appropriate muscle pattern to achieve a given goal is an extremely complex task owing to the high dimensionality of the search space [9]. Recent research in biology suggests that, to deal with this dimensionality problem, animal motor controllers employ a modular organization based on synergies [9,10]. A synergy refers to a subgroup of muscles or joints that are activated together in a stereotyped pattern [9], in contrast to the decoupled control of individual joints in many robots. d'Avella and Bizzi [9] recorded electromyographic activity from 13 muscles of the hind limb of intact and freely moving frogs, and used multidimensional factorization techniques to extract synergies, i.e., invariant amplitude and timing relationships among the muscle activations. They found that, in frogs, combinations of a small number of muscle synergies account for a large fraction of the variation in the muscle patterns observed during jumping, swimming, and walking [9]. In human hands in particular, synergies refer to the muscular and neurological coupling between the finger joints. The human hand has more than 20 degrees-of-freedom, yet two synergies that co-activate several fingers and joints have been shown to account for 84% of the variance in human hand grasping [11]. The big benefit of synergies is that the computations for motor control can be greatly simplified at the synergy level. Synergies have been found not only in the hand postures of human grasping, but also in the hand reaching movements before grasping. Sabatini [12] identified neuromuscular synergies in natural movements of the human arm by applying factor analysis to the smoothed, rectified electromyographic (EMG) signals of the arm muscles. Fod and Matarić [13] recorded human arm movements by fastening four Polhemus magnetic sensors at the upper arm, lower arm, wrist, and middle finger. The data were first filtered and segmented, and principal component analysis was then applied to the segments. The eigenvectors corresponding to a few of the highest eigenvalues provide a basis set of primitives which, through superposition and sequencing, can reconstruct the training movements in the data as well as novel ones. The controllers of some robots have also involved synergies. Gullapalli et al. [14] designed an intelligent control architecture that endowed a redundant manipulator with human-like capabilities; in this controller, motor synergies arose when the control of a subset of the available degrees-of-freedom was coupled and coordinated. Rosenstein et al. [15] used trial-and-error learning to evolve synergies in a 3-link robotic manipulator for weightlifting tasks; the robot learned to improve its weightlifting performance as individual joints became actively coupled at the level of synergies. Brown [16] mechanically hardwired postural synergies into the driving mechanism of a 17 DoF dextrous hand that was driven by only two motors.

1.3. The aim of this study: transferring human grasping synergies to a robot

The aim of this study is to transfer human grasping skills to a robot hand-arm system via the use of synergies.
The skill-transferring scheme in this study has the following characteristics: (1) Transferring synergies (i.e., the basic building blocks of sensory-motor control in human grasping) to a robot may be more adaptive and flexible than robot imitation or direct modeling of the human skills, because it allows the learned skills to generalize to grasping new objects. (2) Compared with direct skill transfer at the task level, transferring synergies is much simpler, as the number of synergies is very small compared with the large number of grasping tasks they can represent.


The transfer involves two stages. Firstly, we extract the synergies from human grasping data and train a neural network with the data. Secondly, we design a novel mapping method by optimization that can map the fingertip positions of the human hand to those of the robot hand. The rest of the paper is organized as follows. Section 2 describes the extraction of synergies from human grasping data. Section 3 addresses the issues in mapping the human hand to our robot hand. Section 4 solves the inverse kinematics problem for our robot arm using neural networks. Section 5 describes the control strategy we designed for the grasping and holding of objects with the robot hand. Section 6 presents the experimental results on the robot, and the last section concludes this paper.

2. Extracting synergies from data on human grasping

In our experiments, the subject uses two or three fingers (thumb, index, middle) to make 60 grasps of the objects shown in Fig. 1A. The positions of the hand joints in the grasp postures are recorded with a Shapehand data glove, and the position and orientation of the wrist in a fixed world frame are recorded with a Polhemus Patriot magnetic sensor. The Polhemus Patriot system is widely used for 3D motion tracking [17–19]. It includes a magnetic sensor fixed on the wrist of the subject and a source fixed at the origin of the world frame on a table. The system calculates online the 3D position and orientation of the magnetic sensor (i.e., the wrist of the subject here) by measuring the relative magnetic flux of three orthogonal coils on both the sensor and the source. The sensor output is 6 DoF: the X, Y, and Z Cartesian coordinates and the orientation (azimuth, elevation, and roll). The same world frame is also used in the robot grasping experiments. The data are recorded during the whole process of each grasping trial (from the start of reaching until the end of grasping, including reaching, pre-shaping and grasping) for various objects. At the same time, the following features of each object are measured and recorded: the position of its centre, its size, and its pose. The Shapehand data glove records the 3D positions of 19 points of the hand: the five fingertips and all 14 joints of the five fingers. Using these positions, we can calculate the angular positions of all the joints of the hand. As our aim is to map three fingers (thumb, index, middle) to the robot hand, the human subject grasps the objects using these three fingers with the data glove, and only the angular positions of the joints of these three fingers are used in this study. The total number of degrees-of-freedom of these fingers is 12, so the dimensionality of this data set is very high; by extracting synergies, this dimensionality can be greatly reduced. Principal Component Analysis (PCA) has been used in studies of the synergies of human grasp postures [11]. PCA is a common technique for finding patterns in high-dimensional data. It transforms a number of correlated variables (here, the 12 DoF of the three-finger grasp postures) into a small number of uncorrelated variables called principal components (here, the synergies). The first principal component accounts for most of the variation in the data, and each succeeding component accounts for as much of the remaining variability as possible. The data from the grasp postures and from the hand approaching movements are processed with PCA, and two groups of synergies are correspondingly extracted from the two data sets: one for the hand grasp postures, the other for the hand approach (i.e., an approaching vector at the wrist). As shown in Fig. 2, the first three principal components account for 90% (Fig. 2A) and 82% (Fig. 2B) of the variance in the grasp posture data and the hand approaching data, respectively. Therefore, we choose these first three components as the synergies.
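The sketch below illustrates this extraction step, assuming the 60 recorded postures are available as a 60 × 12 array of joint angles; the array contents and names are placeholders rather than the actual recorded data, and the same procedure is applied to the 60 × 6 wrist pose data.

```python
# Minimal sketch of the synergy extraction, assuming the 60 recorded grasp
# postures are a (60, 12) array of joint angles (12 DoF of thumb, index and
# middle finger). The data below are random placeholders.
import numpy as np

postures = np.random.rand(60, 12)          # placeholder for the recorded grasp postures

mean_posture = postures.mean(axis=0)
centered = postures - mean_posture

# PCA via singular value decomposition; rows of vt are the principal components.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)       # fraction of variance per component

synergies = vt[:3]                          # the first three synergies (3 x 12)
coefficients = centered @ synergies.T       # synergy coefficients of each grasp (60 x 3)

# A grasp posture is reconstructed from only three coefficients:
reconstructed = mean_posture + coefficients @ synergies
print(explained[:3].sum())   # variance captured by three synergies (about 0.9 for the real posture data)
```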





Fig. 2. The first five principal components of the grasp posture data (A) and the hand approaching data (B). Other principal components are not shown in the figure as the variances caused by them are below 1%.

The reason we do not use EMG (as in [12]) or upper/lower-arm data (as in [13]) for the hand-approaching synergies is that synergies extracted from muscle EMG or from the upper/lower arm cannot be transferred directly to our robot arm, whose configuration is totally different. In contrast, the synergies extracted from the wrist position and orientation data can be used directly on the robot for approach control. Although no other studies have tried to extract reaching synergies directly from wrist position and orientation data, Fig. 2B shows that such a group of synergies does exist in these data and reduces the dimensionality of the parameter space of the hand-reaching control from 6 to 3. Fig. 3 shows the fingertip movements of the three fingers during the reaching, pre-shaping, and grasping stages of one grasp trial.

A feed-forward neural network (see Fig. 4) is trained with the grasping data using a batch gradient descent algorithm: the weights and biases are updated in the direction of the negative gradient of the performance function. The batch gradient descent algorithm is widely used for offline training of neural networks and is available in Matlab. As shown in Fig. 4, the inputs of the neural network are the features of the objects used in the human grasping experiments. The outputs are the coefficients of the first three principal components (synergies) extracted from the grasp posture data and the hand approaching data, from which the reaching movements and grasp postures of the 60 recorded human grasps can be reconstructed. We use this trained neural network and the synergies to represent the correlation between the object features and the grasp synthesis (reaching and grasping) in human grasping.
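As an illustration of this training stage, the sketch below fits a small feed-forward network from four object features to six synergy coefficients with plain batch gradient descent in NumPy. The layer width, learning rate and the random placeholder data are assumptions; the authors trained their network in Matlab rather than with this code.

```python
# Minimal sketch of the synergy network: four object features in, six synergy
# coefficients (three posture + three approach) out. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((60, 4))        # object features of the 60 recorded grasps (placeholder)
Y = rng.random((60, 6))        # synergy coefficients from the PCA step (placeholder)

W1, b1 = rng.normal(size=(4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 6)) * 0.1, np.zeros(6)

def forward(x):
    h = np.tanh(x @ W1 + b1)           # hidden layer
    return h, h @ W2 + b2              # linear output layer

lr = 0.05
for epoch in range(2000):              # batch gradient descent on the mean-squared error
    h, y_hat = forward(X)
    err = y_hat - Y
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# At run time: object features in, synergy coefficients out, then the posture is
# reconstructed as mean_posture + coefficients @ synergies (see the PCA sketch).
```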

Fig. 1. (A) The objects. (B) Three-finger grasp. (C) Two-finger grasp.

3. Mapping the human hand to the robot hand

3.1. Related methods

Before applying the trained neural network on the robot hand, we have to build a map between the robot and human hands. Early work on mapping between human and robot hands was in the area of tele-operation, where a hand master or a data glove operated by a human hand controls a multi-fingered robot hand. As reviewed in [20], three kinds of mapping methods have been developed for tele-operation: (1) linear joint angle mapping [21]; (2) pose mapping [22]; and (3) fingertip position mapping [23]. For the tele-operation of a dextrous hand whose degrees-of-freedom were similar to those of the human hand, Rohling et al. [20] designed an optimized fingertip mapping that matched fingertip orientation as well as Cartesian position. In the case of the Barrett hand, which has only three fingers and four degrees-of-freedom, Peer et al. [24] simply assigned the robot fingers to three human fingers and vertically projected the workspace of the human fingers onto the trajectory of the Barrett hand.



Fig. 3. The time course of the fingertip movements during one grasp. Reaching, 0–T1; opening hand, T1–T2; pre-shaping, T2–T3; grasping, after T3.

Fig. 4. The neural network trained with the human grasping data.

Because the geometry of the human hand differs from that of the robot hand, there is only partial overlap between the fingertip workspaces of the robot and human hands. Particularly because of the large kinematic dissimilarity between the thumb and the fingers of most robot hands, the selection of a common reference frame between human and robot hands has a critical influence on the accuracy of the mapping. The fingertip mapping methods mentioned above chose the origin of the common reference frame manually or intuitively. Hong and Tan [21], Pao and Speeter [22] and Speeter [23] placed the common origin at the base of the thumb, which is appropriate for dextrous humanoid robot hands. Rohling et al. [20] placed it at the second joint of the middle finger.

3.2. Mapping the three fingers to our robot hand

The robot hand-arm system (see Fig. 5) used in this study is from SCHUNK GmbH & Co. KG (http://www.schunk.com). Both the arm and the hand have seven degrees-of-freedom. The Schunk hand has two special features: (1) the finger mapped to the thumb (finger b in Fig. 7(2)) can only move in one plane (the X_R–O–Z_R plane in Fig. 7(2) and (4)); (2) the first joints of fingers a and c (φ_0 in Fig. 7(4)) are coupled together. Therefore, it is impossible to match both the position and the orientation of the robot fingertips to those of the human hand, and we map only the fingertip positions in this study. One problem in mapping only the fingertip positions is that the robot fingers could adopt abnormal postures whose orientation is very different from that of the human fingers. For example, as shown in Fig. 6, with the same fingertip position P, the finger can have two different configurations, A and B.

Fig. 5. The Schunk robotic arm and hand system. There is a tactile sensor array at each phalanx of the fingers.

While posture A has a correct orientation, posture B is in hyperextension. However, as hyperextension postures are not allowed on our robot hand, posture B can be avoided.

Fig. 6. Hyperextension of one finger.

The movement trajectory of the human thumb is much more complex than that of finger b of the robot hand, and it is certainly not restricted to one plane across different grasp postures. The common reference frames used in the above-mentioned mapping methods are fixed for all grasp postures once they have been chosen, manually or automatically, by an optimization process. If we used similar schemes, the mapping between the thumb and finger b of the robot hand would have errors for most grasp postures because of the first special feature of the robot hand described above. Considering the specific constraints in the mechanical structure of our robot hand, we propose here a novel method for mapping our robot hand to the human hand. Rather than using a fixed frame for all postures, we optimize the orientation of the robot hand frame for each individual grasp posture to reduce the mapping error to nearly zero.

The frame at the human hand, X_H–Y_H–Z_H (see Fig. 7(1)), is chosen as the world frame. The origin (point O in Fig. 7) is at the centre of the wrist joint, Z_H is along the direction of the forearm, and X_H is perpendicular to the main axis of the wrist joint. The position of the origin and the orientation of X_H–Y_H–Z_H are sensed with the Patriot sensor as described in Section 2, and the positions of all the joints of the hand in this frame are sensed with the data glove. The origin of the robot hand frame, X_R–Y_R–Z_R, is also at point O, and this frame is located at the centre of the palm of the robot hand (see Fig. 7(2) and (4)). The task of the mapping is therefore to determine the orientation of the robot hand frame in the world frame.

Fig. 7. The frames of the human hand (1), the robot hand (2), and their relationship (3). (4) The joints of the robot fingers. The distal joints of the robot fingers can rotate in the range −90° to +90°, but we restrict them to the range 0–90°, similar to that of human fingers.

As shown in Fig. 7(4), finger b of the robot hand is constrained to the plane X_R–O–Z_R. Considering this special feature of the robot hand configuration, for each specific grasp posture of the human hand we obtain the frame X_R–Y_R–Z_R from the world frame by making the following transformations in sequence:

(1) Rotate X_H–Y_H–Z_H around Z_H by an angle α to get the frame X'–Y'–Z', where α is calculated as

\alpha = \arctan(-y_B^H / x_B^H)    (1)

where y_B^H and x_B^H are the coordinates of the tip of the thumb in the world frame (point B in Fig. 7(1) and (3)). After this rotation, the fingertip of the thumb (point B) lies in the plane X'–O–Z' of the new frame.

(2) First rotate X'–Y'–Z' around Y' by an angle β, and then rotate around the vector OB by an angle γ to get the robot frame X_R–Y_R–Z_R (see Fig. 7(3)).

Now the relationship between the frames of the human and robot hands can be described with a homogeneous transformation, R_H^R(β, γ), which is a function of β and γ. In the frame X_R–Y_R–Z_R, the fingertip positions of fingers a–c can be described as

x_R^b = 0    (2)
y_R^b = d_2 + h_1 \cos\phi_3 + h_2 \sin(\phi_3 - \phi_4)    (3)
z_R^b = h_1 \sin\phi_3 + h_2 \sin(\phi_3 - \phi_4)    (4)
x_R^a = d_1 + (h_1 \cos\phi_1 + h_2 \cos(\phi_1 + \phi_2)) \sin\phi_0    (5)
y_R^a = d_2 + (h_1 \cos\phi_1 + h_2 \cos(\phi_1 + \phi_2)) \cos\phi_0    (6)
z_R^a = h_1 \sin\phi_1 + h_2 \sin(\phi_1 + \phi_2)    (7)
x_R^c = d_1 - (h_1 \cos\phi_5 + h_2 \cos(\phi_5 + \phi_6)) \sin\phi_0    (8)
y_R^c = d_2 + (h_1 \cos\phi_5 + h_2 \cos(\phi_5 + \phi_6)) \cos\phi_0    (9)
z_R^c = h_1 \sin\phi_5 + h_2 \sin(\phi_5 + \phi_6)    (10)

where (x_R^a, y_R^a, z_R^a), (x_R^b, y_R^b, z_R^b), and (x_R^c, y_R^c, z_R^c) are the fingertip positions of the three robot fingers in the robot hand frame X_R–Y_R–Z_R, d_1, d_2, h_1, and h_2 are geometric constants of the Schunk hand (see Fig. 7(4)), and φ_i (i = 0, …, 6) are the angles of the joints of the robot hand (see Fig. 7(4)).
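A direct transcription of Eqs. (2)–(10) into code looks like the sketch below; the geometric constants used here are placeholder values, not the actual Schunk hand dimensions.

```python
# Minimal sketch of Eqs. (2)-(10): fingertip positions of the three robot
# fingers in the robot hand frame as a function of the joint angles phi0..phi6.
import numpy as np

D1, D2, H1, H2 = 0.019, 0.033, 0.0865, 0.0685   # assumed geometry [m], not real values

def fingertip_positions(phi):
    """phi: the seven hand joint angles [phi0..phi6] in radians."""
    p0, p1, p2, p3, p4, p5, p6 = phi
    # finger b (mapped to the thumb) moves only in the X_R-O-Z_R plane
    b = np.array([0.0,
                  D2 + H1*np.cos(p3) + H2*np.sin(p3 - p4),
                  H1*np.sin(p3) + H2*np.sin(p3 - p4)])
    # fingers a and c share the coupled base rotation phi0
    la = H1*np.cos(p1) + H2*np.cos(p1 + p2)
    a = np.array([D1 + la*np.sin(p0),
                  D2 + la*np.cos(p0),
                  H1*np.sin(p1) + H2*np.sin(p1 + p2)])
    lc = H1*np.cos(p5) + H2*np.cos(p5 + p6)
    c = np.array([D1 - lc*np.sin(p0),
                  D2 + lc*np.cos(p0),
                  H1*np.sin(p5) + H2*np.sin(p5 + p6)])
    return a, b, c
```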

Fig. 8. The running process of the GA.





Fig. 9. Genetic algorithm example. (A) The best (solid line) and average (dashed line) fitness value of each generation. (B) The average distance between the individuals of each generation.

Fig. 10. The forward kinematics model of the robot arm. The dashed lines are the axes of the seven joints corresponding to θ_1, θ_2, θ_3, θ_4, θ_5, θ_6, and θ_7. L_1, L_2, L_3, and L_4 are the lengths of the four links. X_R–Y_R–Z_R is the frame of the robot hand (the hand is not shown here). X–Y–Z is the world frame for the robot arm.



Meanwhile, the fingertip positions of the three human fingers (see points A–C in Fig. 7(1)) in the robot hand frame can be calculated from their positions sensed by the data glove in the world frame X_H–Y_H–Z_H:

(x_R^B, y_R^B, z_R^B, 1)^T = R_H^R (x_H^B, y_H^B, z_H^B, 1)^T    (11)
(x_R^A, y_R^A, z_R^A, 1)^T = R_H^R (x_H^A, y_H^A, z_H^A, 1)^T    (12)
(x_R^C, y_R^C, z_R^C, 1)^T = R_H^R (x_H^C, y_H^C, z_H^C, 1)^T    (13)

By mapping the following coordinates of the human fingertips to those of the robot fingertips, we get:

y_R^b = k y_R^B    (14)
z_R^b = k z_R^B    (15)
x_R^a = k x_R^A    (16)
y_R^a = k y_R^A    (17)
z_R^a = k z_R^A    (18)
x_R^c = k x_R^C    (19)
z_R^c = k z_R^C    (20)

where k is a scale factor, whose value is chosen to be 1.5 according to the ratio between the lengths of the robot fingers and the subject's fingers. We can obtain the values of φ_i (i = 0–6) accurately by combining and analytically solving Eqs. (3)–(8), (10), and (14)–(20). For example, solving Eqs. (5), (6), (16), and (17), we get:

\phi_0 = \arctan\left( \frac{x_R^A - d_1}{y_R^A - d_2} \right)    (21)

In theory, the error of this mapping exists only at one coordinate, the y coordinate of finger c (see Eq. (9)). However, as shown in Fig. 7(4), the angles of the distal joints of the robot fingers (φ_2, φ_4, φ_6 in Fig. 7(4)) are restricted to the range 0–90°. For some grasp postures, the values calculated with the equations above can fall outside this range and lead to errors in the mapping of the other fingertip coordinates. These errors are directly related to the homogeneous transformation between the world frame and the robot hand frame, which is determined by the two angles β and γ (see Fig. 7(3)). Therefore, the problem is, for a specific grasp posture of the human hand, to find an optimal homogeneous transformation that minimizes the mapping error.

Fig. 11. The structure of the MLPNN.

Fig. 12. The performance of the five NNs during the training. The black line is for N0. The coloured lines are for N1–N4. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 13. The mean square errors of the five NNs on a segment of the validation data. The grey line is for N0. The coloured lines are for N1–N4. The black dashed line is the error of the combination of N1–N4. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)


We solve this problem using a genetic algorithm (GA). In the GA we use (see Fig. 8), the population size is 20, with each individual encoded as a vector of the values of β and γ. The best 10% of individuals in each generation are retained in the next generation. The remaining 30% and 60% of the individuals in the next generation are generated by crossover and mutation of selected parents, respectively. Each selected parent is mutated by adding a random vector drawn from a Gaussian distribution. The fitness value of each individual is the total error in the mapping of the three fingertip positions, calculated as:

E = \sum_{(f,F) \in \{(a,A),(b,B),(c,C)\}} \sqrt{(x_R^f - x_R^F)^2 + (y_R^f - y_R^F)^2 + (z_R^f - z_R^F)^2}    (22)

The GA runs for 50 generations for each grasp posture. Fig. 9 shows the result of one GA run for an example grasp posture; each run takes about one second. The mapping error (the fitness value in Fig. 9A) is reduced to below 2 mm after 45 generations. A population has high diversity if the average distance between individuals is large. This distance is large at the beginning and falls below 20° in the last generation (see Fig. 9B), showing that the search starts in a large region and finally converges to a small area.
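The sketch below illustrates one possible implementation of this GA under the population settings stated above; the fitness function shown is only a placeholder standing in for the full evaluation of Eq. (22), and every numerical detail beyond those given in the text is an assumption.

```python
# Minimal sketch of the (beta, gamma) frame optimization: 20 individuals,
# 10% elitism, 30% crossover, 60% Gaussian mutation, 50 generations.
import numpy as np

rng = np.random.default_rng()

def mapping_error(beta, gamma):
    # Placeholder fitness. In the real system this would apply R_H^R(beta, gamma),
    # solve the finger joints, and sum the fingertip distances of Eq. (22).
    return (beta - 0.3) ** 2 + (gamma + 0.1) ** 2

def optimize_frame(generations=50, pop_size=20, sigma=0.2):
    pop = rng.uniform(-np.pi, np.pi, size=(pop_size, 2))        # each row is [beta, gamma]
    for _ in range(generations):
        fitness = np.array([mapping_error(b, g) for b, g in pop])
        order = np.argsort(fitness)                              # lower error is better
        elite = pop[order[: pop_size // 10 or 1]]                # best 10% kept
        n_cross = int(0.3 * pop_size)
        n_mut = pop_size - len(elite) - n_cross
        parents = pop[order[: pop_size // 2]]                    # selection pool
        cross = np.array([parents[rng.integers(len(parents))] * w +
                          parents[rng.integers(len(parents))] * (1 - w)
                          for w in rng.random(n_cross)])         # blend crossover
        mut = parents[rng.integers(len(parents), size=n_mut)] \
              + rng.normal(0.0, sigma, size=(n_mut, 2))          # Gaussian mutation
        pop = np.vstack([elite, cross, mut])
    fitness = np.array([mapping_error(b, g) for b, g in pop])
    return pop[np.argmin(fitness)]                               # best (beta, gamma)

best_beta, best_gamma = optimize_frame()
```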


4. Inverse kinematics of the robot arm

The output of the neural network in Fig. 4 gives the desired position and orientation of the wrist for the hand to grasp an object. The robot needs an inverse kinematics model to calculate the required positions of its seven arm joints so that the wrist reaches the desired position and orientation. In this section, we solve the inverse kinematics problem of the robot arm.


4.1. Forward kinematics model of the robot arm

Fig. 10 shows the kinematics model of the robot arm. The position and orientation of the robot hand frame (X_R–Y_R–Z_R) in the world frame (X–Y–Z in Fig. 5) can be described with a homogeneous transformation:

T_R = \begin{bmatrix} n_x & o_x & a_x & P_x \\ n_y & o_y & a_y & P_y \\ n_z & o_z & a_z & P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}    (23)

The elements of this matrix are given in Appendix A.
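For illustration, a wrist transform of the form of Eq. (23) can be composed as a product of elementary homogeneous transforms, as in the sketch below; the joint axes and link offsets used here are assumptions for the sake of the example, not the actual kinematic parameters of the Schunk arm (those are expanded in Appendix A).

```python
# Generic sketch of composing a 7-joint forward kinematics chain into a 4x4
# wrist transform T_R as in Eq. (23). Axes and link lengths are placeholders.
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def trans_z(d):
    T = np.eye(4)
    T[2, 3] = d
    return T

def forward_kinematics(theta, L=(0.3, 0.3, 0.3, 0.1)):
    """theta: seven joint angles; returns a 4x4 homogeneous transform."""
    L1, L2, L3, L4 = L
    T = trans_z(L1)
    axes = [rot_z, rot_y, rot_z, rot_y, rot_z, rot_y, rot_z]   # assumed roll-pitch chain
    offsets = [0, L2, 0, L3, 0, L4, 0]
    for axis, t, d in zip(axes, theta, offsets):
        T = T @ axis(t) @ trans_z(d)                           # accumulate joint by joint
    return T
```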


4.2. Inverse kinematics of the Schunk robot arm

There are two kinds of conventional methods for computing the inverse kinematics of robotic manipulators: closed-form solutions and iterative algorithms. In general, closed-form solutions can only be applied to six-degree-of-freedom manipulators with special kinematic structures [25], and these conditions are not fulfilled by our robot arm. Iterative algorithms require a good initial estimate, otherwise they do not converge, or they converge to solutions outside the limit range of the joints (the movement range of each joint of our robot arm is −2 to +2 rad). Neural networks have been used for fast online calculation of the inverse kinematics of high-DoF manipulators in robotics studies [26,27], but have not been applied in direct trajectory planning of industrial robots because of their high errors compared with conventional methods. To reduce this problem on our robot arm, we have trained four neural networks. When calculating the joint positions for a given position and orientation of the end-effector (the wrist of the robot hand), we choose the output of the neural network that has the minimal error.
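One way to realize this selection is to evaluate every candidate joint solution with the forward kinematics model and keep the one that best reproduces the target wrist pose. The sketch below assumes hypothetical predict() and forward_kinematics() helpers and is not the authors' implementation.

```python
# Minimal sketch of the network-combination idea, assuming each trained network
# exposes predict() returning seven joint angles, and forward_kinematics(theta)
# returns the 4x4 wrist transform of Eq. (23). Both names are placeholders.
import numpy as np

def select_joint_solution(target_T, networks, forward_kinematics):
    """Pick, among the candidate IK solutions of N1-N4, the one whose forward
    kinematics reproduces the desired wrist pose with the smallest error."""
    x = target_T[:3, :].reshape(-1)          # the 12 pose elements fed to the NNs
    best_theta, best_err = None, np.inf
    for net in networks:
        theta = net.predict(x)               # candidate seven joint angles
        err = np.linalg.norm(forward_kinematics(theta)[:3, :] - target_T[:3, :])
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta
```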

Fig. 14. Changes in the sensor array signals as a joint moves when a finger pad is touching an object during a grasp trial. Each small area is a tactile sensor. The positions of the joint are at: (A) 20.2°; (B) 20.6°; (C) 21°; (D) 21.4°.

There are no general guidelines for the design of neural networks to compute the inverse kinematics of robots. We explored the following options:



(1) Network types: Elman NN, MLPNN (multi-layer perceptron NN), recurrent NN. (2) The number of hidden layers and the number of neurons in each hidden layer. (3) Various training algorithms.

Finally, we selected an MLPNN with three layers (25 neurons in each of the first and second hidden layers, and seven neurons in the output layer; see Fig. 11) for each of the neural networks. The networks have 12 inputs, which are the 12 elements of the matrix in Eq. (23) that describe the position and orientation of the wrist. Their outputs are the seven joint angles of the robot arm (see Figs. 5 and 10). The hyperbolic tangent sigmoid function and the linear function are used as the activation functions of the hidden layers and the output layer, respectively. The Levenberg–Marquardt algorithm is used for training. The whole data set, DD, includes 50,000 data points calculated with the forward kinematics equations described above. DD is divided into five sub data sets of equal size: D1, D2, D3, D4, and DT. Five MLPNNs (N0–N4) are trained with the data sets D0–D4, respectively, where D0 is the union of the whole training set D1–D4. Data set DT is held out and used later for the validation of these NNs. Fig. 12 shows the performance of these NNs during training. The mean square error in Fig. 12 is calculated as

mse = \frac{1}{n} \sum_{i=1}^{n} (\hat{\theta}_i - \theta_i)^2    (24)

where n = 7 is the number of joints of the robot arm, θ_i is the actual angular position of the ith joint, and \hat{θ}_i is the corresponding angular position output by the NN. The neural network N0 is used only for comparison below, while N1–N4 are used in combination to compute the inverse kinematics. On the validation data set (DT, 10,000 data points), the averaged mean square errors of the five NNs (N0–N4) are 7.6 × 10⁻⁴, 6.8 × 10⁻⁴, 6.4 × 10⁻⁴, 5.6 × 10⁻⁴, and 8.8 × 10⁻⁴ rad, respectively. By combining N1–N4, this averaged error is reduced to 3.7 × 10⁻⁴ rad. Fig. 13 shows how this combination reduces the error on a segment (300 data points) of the validation data. For example, the large errors in the outputs of the four NNs at points A–C in Fig. 13 are avoided in the combined output (the black dashed line). By combining the four NNs (N1–N4) trained with different sub data sets, we obtain results (the black dashed line) better than those of N0 (the grey line in Fig. 13), which was trained with the whole training data set. Using several neural networks in parallel to reduce the error is, however, an empirical rather than a systematic method. We think this method can also be applied to other robotic manipulators whose DoF is similar to our robot's. For robots with a higher DoF, the following two measures would improve the performance: (1) using more neural networks in parallel; and (2) dividing the workspace of the robot into several parts and using a few neural networks for each part.

5. Grasp control strategy

We assume that the position and the four features of the object shown in Fig. 4 are known to the robot. For an object to be grasped, its four features are input to the neural network shown in Fig. 4. The desired position and orientation of the wrist and the grasp posture are first computed from the outputs of the neural network and then mapped to the robot hand with the mapping method described above. The task of the robot is then to control its arm to reach the desired position and orientation of the wrist, and to control the finger joints to form the grasp posture.

Fig. 15. A series of frames extracted from a video of a grasp experiment. See the text for more details.

However, just making a grasp posture and closing the fingers to touch the object cannot ensure a successful grasp. To grasp and hold an object stably, the contact forces between the fingers and the object must be controlled appropriately.


The distribution of the forces exerted by the fingers on an object is sensitively controlled during human grasping, and force closure is a widely used grasp quality measure in robotic grasp synthesis. As torque/force control is not available at the finger joints of our robot hand, controlling the fingers to hold the object after they touch it is a problem. To deal with it, we have designed a simple strategy that takes advantage of the following special features of our robot hand: (1) The position control of the finger joints is accurate, and the joints are quite stiff owing to a high gear ratio. (2) The sensor array of each phalanx is covered with a rubber pad (see the black pads in Fig. 7(2)). These pads are elastic and deform when the finger contacts the object, and the deformation of the pads is proportional to the contact force. (3) The contact force and its rate of change can be sensed quickly and accurately with the tactile sensor arrays, which have a high resolution of 7 × 13 at each phalanx (see Fig. 5). Fig. 14 shows how the contact pressure signals of the sensor array at one joint change when that joint is closed onto the object during a grasp trial. As can be seen in Fig. 14, the pressure in the contact area of the phalanx is sensed accurately and promptly as the joint position is slightly increased from 20.2° (Fig. 14A) to 21.4° (Fig. 14D) after touching the object. To summarize, the control strategy involves two steps: (1) According to the desired position and orientation of the wrist, the positions of the robot arm joints are computed with the inverse kinematics neural networks, and the arm joints are controlled to reach these positions. (2) After the robot hand reaches the object, each finger is first controlled to move quickly to the grasp posture computed with the synergy neural network (see Fig. 4), and then continues to close slowly until it touches the object. When the rubber pads touch the object and the contact pressure exceeds a threshold, the finger stops and is held at its current position. Owing to the high stiffness of the finger joints, they can resist the contact force caused by the deformation of the rubber pads and thus hold the object. Although this strategy is very simple, as will be shown below, it is effective in grasping, holding, and lifting objects of different weights up to 2 kg.
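A minimal sketch of the finger-closing part of this strategy is given below; the hand interface (read/set joint, maximum pad pressure) and the pressure threshold are hypothetical placeholders rather than the real Schunk and tactile-array API.

```python
# Minimal sketch of step (2) of the control strategy: fast move to the synergy
# posture, then slow closure until the tactile pads report contact.
import time

PRESSURE_THRESHOLD = 0.5   # assumed units of the tactile array reading
STEP_DEG = 0.2             # small position increment per control cycle

def close_finger(hand, finger, target_posture_deg):
    """Move a finger quickly to the synergy posture, then close slowly until
    the tactile pads report contact above the threshold."""
    hand.set_joint(finger, target_posture_deg)       # fast move to the grasp posture
    angle = target_posture_deg
    while hand.max_pressure(finger) < PRESSURE_THRESHOLD:
        angle += STEP_DEG                             # close slowly
        hand.set_joint(finger, angle)
        time.sleep(0.02)
    # position control holds the last commanded angle; the stiff joints resist
    # the pad deformation force and thereby hold the object
    return angle
```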


6. Experimental results on the robot

To further illustrate the grasp control strategy described above, we take one grasp experiment as an example. The experiment proceeds in the following five steps: (1) The location and the four features (i.e., length, width, height and pose) of the object are measured manually (see the object on the table in Fig. 15A). In the future, this information will be obtained automatically and online by a vision system. (2) These features are input to the neural network shown in Fig. 4 to get the coefficients of the six synergies. By combining these synergies, we obtain the desired position and orientation of the wrist and the proper grasp posture for reaching and grasping this object. This posture is then mapped to the robot hand to obtain the desired grasp posture of the robot hand. (3) According to the object location and the desired position and orientation of the wrist, we calculate the desired positions of the seven joints of the robot arm using the inverse kinematics neural networks described in Section 4.



Fig. 16. The success rate of grasps for the trained objects (A) and new objects (B).



Fig. 17. The robot grasping and holding objects of the training set (A) and new objects (B).

As shown in Fig. 15B, the robot arm is being controlled to move to these desired positions. In Fig. 15C, the robot arm has reached the desired position, which means that the wrist is at the desired position and orientation for grasping the object. The reaching control process is now finished, and the next task is to grasp the object by controlling the finger joints of the robot hand.

(4) The seven joints of the robot hand are controlled to make the desired grasp posture obtained in the second step described above. In Fig. 15D, the robot hand has just reached the desired grasp posture. One finger is slightly touching the object, while the other two fingers are close to, but not touching, the object. All three fingers are at proper positions relative to the object.


(5) Finally, the fingers are closed to grasp the object using the tactile-sensor-based control strategy described in the last section. As shown in Fig. 15E, the object has been grasped with a proper final posture. A video of this experiment is available at http://users.aber.ac.uk/tag/posture.AVI. In this video, in order to show the grasp posture clearly, the robot fingers pause for 5 s after they form the desired grasp posture (see Fig. 15D). In Fig. 15 and the video, we can clearly see how the synergy-based control strategy generates a proper approaching movement of the robot arm (see Fig. 15B) and a proper grasp posture (see Fig. 15D) of the robot hand. Without these, the final stable grasp of the object would have been impossible. One hundred and sixty grasp trials were made by the robot to evaluate the performance of the system on eight objects (20 trials for each object). Four of the objects (see Fig. 16A) were selected from the objects in Fig. 1A. The other four are new, unseen test objects (see Fig. 16B). In each test, the object is put in a position and pose that is reachable for the robot hand. For a grasp to be counted as successful, the robot has to grasp the object, lift and move it to another position 1 m away, and hold it for 30 s. The success rates for the two sets of objects are shown in Fig. 16, and Fig. 17 shows the robot grasping the eight objects. In the video at http://users.aber.ac.uk/tag/grasp.AVI, the robot grasps, holds, and moves a 2 kg bottle. The success rates for the new objects (Fig. 16B) are slightly lower than those for the trained objects (Fig. 16A), and they differ between objects. When grasping regular objects (the first three objects in Fig. 17B), the success rate is satisfactory. In particular, for the big cylinder (the second object in Fig. 17B) the success rate is very high, because (1) there are many cylindrical objects in the training set, and (2) the size of this object is very suitable for the robot hand, neither too large nor too small. By contrast, the success rate for the small irregular object (the fourth object in Fig. 17B) is lower.


7. Conclusion

We have designed a system that transfers human grasping skills to a robot hand. The human grasping postures are extracted from our human grasping experiments and described in the synergy space. The low dimensionality of the synergy space has facilitated the human-to-robot skill transfer. To overcome the problem caused by the large differences between the configuration space of the human hand and that of the robot hand, we have proposed a mapping method that optimizes the posture mapping individually. We have designed a control strategy for the robot that exploits tactile sensor signals to grasp and hold objects stably. The experimental results have shown that the transferred grasp synergies can be generalized on the robot to grasp new objects successfully.

Acknowledgements

We are grateful for support through the REVERB project, EPSRC Grant EP/C516303/1, and the ROSSI project, EC-FP7, ICT-216125.

Appendix A. Forward kinematics of the robot arm

The elements of the matrix in Eq. (23) can be computed as follows:

n_x = ((((s1 c2 c3 + c1 s3) c4 + s1 s2 s4) c5 + (s1 c2 s3 - c1 c3) s5) c6 + ((s1 c2 c3 - c1 s3) s4 - s1 s2 c4) s6) s7 - (((s1 c2 c3 + c1 s3) c4 - s1 s2 s4) s5 + (s1 c2 s3 - c1 c3) c5) c7

o_x = ((((s1 c2 c3 + c1 s3) c4 + s1 s2 s4) c5 + (s1 c2 s3 - c1 c3) s5) c6 + ((s1 c2 c3 - c1 s3) s4 - s1 s2 c4) s6) c7 + (((s1 c2 c3 - c1 s3) c4 - s1 s2 s4) s5 + (s1 c2 s3 - c1 c3) c5) s7

a_x = (((s1 c2 c3 + c1 s3) c4 - s1 s2 s4) c5 + (s1 c2 s3 - c1 c3) s5) s6 + ((s1 c2 c3 - c1 s3) s4 - s1 s2 c4) c6

P_x = (((s1 c2 c3 + c1 s3) c4 - s1 s2 s4) s5 + (s1 c2 s3 - c1 c3) c5) L4 - ((s1 c2 c3 - c1 s3) s4 - s1 s2 c4) L3 + s1 s2 L2

n_y = ((((c1 c2 c3 + s1 s3) c4 + c1 s2 s4) c5 + (c1 c2 s3 - s1 c3) s5) c6 + ((c1 c2 c3 - s1 s3) s4 + c1 s2 c4) s6) s7 - (((c1 c2 c3 - s1 s3) c4 + c1 s2 s4) s5 + (c1 c2 s3 - s1 c3) c5) c7

o_y = ((((c1 c2 c3 + s1 s3) c4 + c1 s2 s4) c5 + (c1 c2 s3 - s1 c3) s5) c6 + ((c1 c2 c3 - s1 s3) s4 + c1 s2 c4) s6) c7 + (((c1 c2 c3 - s1 s3) c4 + c1 s2 s4) s5 + (c1 c2 s3 - s1 c3) c5) s7

a_y = (((c1 c2 c3 - s1 s3) c4 + c1 s2 s4) c5 + (c1 c2 s3 - s1 c3) s5) s6 + ((c1 c2 c3 - s1 s3) s4 + c1 s2 c4) c6

P_y = (((c1 c2 c3 - s1 s3) c4 + c1 s2 s4) s5 + (c1 c2 s3 - s1 c3) c5) L4 - ((c1 c2 c3 - s1 s3) s4 + c1 s2 c4) L3 - c1 s2 L2

n_z = (((s2 c3 c4 + c2 s4) c5 + s2 s3 s5) c6 + (s2 c3 s4 + c2 c4) s6) s7 - ((s2 c3 c4 + c2 s4) s5 + s2 s3 c5) c7

o_z = (((s2 c3 c4 + c2 s4) c5 + s2 s3 s5) c6 + (s2 c3 s4 + c2 c4) s6) c7 + ((s2 c3 c4 + c2 s4) s5 + s2 s3 c5) s7

a_z = ((s2 c3 c4 + c2 s4) c5 + s2 s3 s5) s6 + (s2 c3 s4 + c2 c4) c6

P_z = ((s2 c3 c4 + c2 s4) s5 + s2 s3 c5) L4 - (s2 c3 s4 + c2 c4) L3 - c2 L2 - L1

where s_i and c_i represent sin θ_i and cos θ_i, respectively (see Fig. 10 for the meanings of θ_i and L_i).

References

[1] Cortesao R, Koeppe R. Sensor fusion for human–robot skill transfer systems. Adv Robot 2000;14(6):537–49.
[2] Liu S, Asada H. Transferring manipulative skills to robots: representation and acquisition of tool manipulative skills using a process dynamics model. J Dyn Syst Measure Control 1992;114:220.
[3] Yang JX, Chen Y. Hidden Markov model approach to skill learning and its application in telerobotics. In: Proceedings of IEEE international conference on robotics and automation, vol. 1; 1994. p. 396–402.
[4] Oztop E, Lin LH, Kawato M, Cheng G. Dexterous skills transfer by extending human body schema to a robotic hand. In: 6th IEEE-RAS international conference on humanoid robots; 2006. p. 82–7.
[5] Erlhagen W, Mukovskiy A, Bicho E, Panin G, Kiss C, Knoll A, et al. Goal-directed imitation for robots: a bio-inspired approach to action understanding and skill learning. Robot Auton Syst 2006;54(5):353–60.
[6] Asfour T, Azad P, Gyarfas F, Dillmann R. Imitation learning of dual-arm manipulation tasks in humanoid robots. In: Proceedings of the IEEE-RAS international conference on humanoid robots; 2006.
[7] Ralph M, Moussa MA. Toward a natural language interface for transferring grasping skills to robots. IEEE Trans Robot 2008;24(2):468–75.
[8] Bicchi A, Kumar V. Robotic grasping and contact: a review. In: Proceedings of IEEE international conference on robotics and automation, vol. 1; 2000. p. 348–53.
[9] d'Avella A, Bizzi E. Shared and specific muscle synergies in natural motor behaviors. Proc Natl Acad Sci 2005;102(8):3076–81.
[10] Daley MA, Usherwood JR, Felix G, Biewener AA. Running over rough terrain: guinea fowl maintain dynamic stability despite a large unexpected change in substrate height. J Exp Biol 2006;209(1):71.
[11] Santello M, Flanders M, Soechting JF. Postural synergies for tool use. J Neurosci 1998;18:10105–15.



[12] Sabatini AM. Identification of neuromuscular synergies in natural upper-arm movements. Biol Cybern 2002;86(4):253–62.
[13] Fod A, Matarić MJ, Jenkins OC. Automated derivation of primitives for movement classification. Auton Robot 2002;12(1):39–54.
[14] Gullapalli V, Gelfand JJ, Lane SH, Wilson WW. Synergy-based learning of hybrid position/force control for redundant manipulators. In: Proceedings of IEEE international conference on robotics and automation, vol. 4; 1996. p. 3526–31.
[15] Rosenstein MT, Barto AG, Van Emmerik REA. Learning at the level of synergies for a robot weightlifter. Robot Auton Syst 2006;54(8):706–17.
[16] Brown CY. Inter-finger coordination in robot hands via mechanical implementation of principal components analysis. PhD thesis. Massachusetts Institute of Technology; 2007.
[17] de Granville C, Southerland J, Fagg AH. Learning grasp affordances through human demonstration. In: Proceedings of the international conference on development and learning; 2006. p. 23–29 [electronically published].
[18] Chow YW. Low-cost multiple degrees-of-freedom optical tracking for 3D interaction in head-mounted display virtual reality. Int J Recent Trends Eng 2009;1(1):52–6.
[19] Park JS. AR-Room: a rapid prototyping framework for augmented reality applications. Multimedia Tools Applicat 2008;7(5):1–22.

[20] Rohling RN, Hollerbach JM, Jacobsen SC. Optimized fingertip mapping: a general algorithm for robotic hand teleoperation. Presence: Teleoperat Virt Environ 1993;2(3):203–20.
[21] Hong J, Tan X. Calibrating a VPL DataGlove for teleoperating the Utah/MIT hand. In: Proceedings of IEEE international conference on robotics and automation; 1989. p. 1752–7.
[22] Pao L, Speeter T. Transformation of human hand positions for robotic hand control. In: Proceedings of IEEE international conference on robotics and automation; 1989. p. 1758–63.
[23] Speeter TH. Transforming human hand motion for telemanipulation. Presence: Teleoperat Virt Environ 1992;1(1):63–79.
[24] Peer A, Einenkel S, Buss M. Multi-fingered telemanipulation: mapping of a human hand to a three finger gripper. In: The 17th IEEE international symposium on robot and human interactive communication; 2008. p. 465–70.
[25] Siciliano B, Khatib O. Springer handbook of robotics. New York: Springer-Verlag; 2008.
[26] Chen PCY, Mills JK, Smith KC. Performance improvement of robot continuous-path operation through iterative learning using neural networks. Mach Learn 1996;23(2):191–220.
[27] Xia Y, Wang J. A dual neural network for kinematic control of redundant robot manipulators. IEEE Trans Syst Man Cybern Part B 2001;31(1):147–54.