Grasp Identification and Synthesis Using Static Grasp Pose




Proceedings of the 18th World Congress of the International Federation of Automatic Control, Milano (Italy), August 28 - September 2, 2011

Jongwoo Park and Joono Cheong
Department of Control and Instrumentation, Korea University Sejong Campus, Jochiwon, Korea (e-mail: {jekiel,jncheong}@korea.ac.kr)

Abstract: In this paper, we propose a method for analyzing and synthesizing human grasping motion. For given joint data sets, we apply the PCA technique for dimension reduction and the Gaussian mixture model (GMM) for identifying grasp types and the associated object parameters. We also compare the proposed method with other grasp identification methods. The validity of the identification and synthesis method is verified through simulations with human grasp data captured by a data glove.

Keywords: grasp synthesis, principal component analysis, Gaussian mixture model, grasp identification, grasp analysis

★ This research was supported, in part, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0075730), by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs, and by the Center for Autonomous Manipulation under the Human Resources Development Program for Robot Specialists of the Ministry of Knowledge Economy.

1. INTRODUCTION

Realistic and natural grasping is still a challenging task for virtual simulation and robot hand control, despite the many research works on the analysis, synthesis, and application of human grasp motion. The main difficulty lies in the nature of grasp motion, which involves finger motions with high degrees of freedom (DOFs).

Among the many works related to human grasp motion, Hager-Ross and Schieber (Hager-Ross and Schieber, 2000) analyzed the kinematics and synchronization of human finger movements using a data glove. Lin et al. (Lin et al., 2000) proposed a constraint model of human hand motion. Ninomiya and Maeno (Ninomiya and Maeno, 2008) investigated the analysis and systematic classification of human hand movement for robot hand design. Amor et al. (Amor et al., 2008) and Moldenhauer et al. (Moldenhauer et al., 2005) proposed data-driven animation methods for synthesizing natural human grasping postures.

Grasp planning has been a traditional application of grasp analysis. Lozano-Perez et al. (Lozano-Perez et al., 1987) proposed grasp/regrasp planning in configuration space. Kang and Ikeuchi (Kang and Ikeuchi, 1995) studied grasping by observing human grasp motion. Ip and Chan (Ip and Chan, 1997) analyzed dynamic human finger motion. Braido and Zhang (Braido and Zhang, 2004) investigated human finger posture from a kinematic point of view. Other researchers have studied real-time grasp planning and sensor fusion; for example, Morales et al. (Morales et al., 2007, 2006) used tactile and vision sensors for the analysis of grasp motion.

Human grasp classification has also been a major research issue. Napier (Napier, 1956) classified grasps into precision and power grasps based on the grasping purpose.


Cutkosky (Cutkosky, 1989) refined Napier's classification into more detailed classes and also considered thumb motion in the classification. Iberall (Iberall, 1987, 1997) classified grasping into three categories based on the opposable thumb and analyzed human hand motion for robot hands. Kamakura (Kamakura, 1989) broadly classified hand motions for medical and rehabilitation applications.

For the analysis of human grasp motion data captured by a motion capture device, reducing the high-dimensional data set to a low-dimensional one is an important step. Santello et al. (Santello et al., 1998) reported that, for human grasp motions, the first two principal components account for more than 80% of the original data.

In this paper, we address the analysis of human grasping data collected through a data glove and the identification of the grasp taxonomy and the parameters of target objects. We first classify the grasp motion using practical taxonomies; then PCA and the Gaussian mixture model are used to mathematically model the human grasp motions. Finally, we identify the grasp type and object parameters using this model.

2. ANALYSIS OF HUMAN GRASP

2.1 Taxonomy

Human hand grasps have diverse characteristics depending on the shape of the grasped object. Among studies of the classification of hand movements, Cutkosky (Cutkosky, 1989) classified grasp motions by hand dexterity, task, and geometric characteristics. Cutkosky's taxonomy has so many characteristics and classes that it is difficult to discern collective behaviors. Aleottie and Caselli's taxonomy (Aleottie and Caselli, 2006) can be roughly divided into two types, power grasp and precision grasp. Kamakura's taxonomy (Kamakura, 1989) focuses on real-life grasp postures for objects such as scissors, pencils, chopsticks, balls, and keys; however, it does not cover grasp postures for arbitrary objects. In this research, we focus on precision grasps, which can be roughly divided into pinch and circular types. Each grasp type has its own subgrasp types according to the number of fingers used for grasping. Fig. 1 shows a schematic of our grasp taxonomy.




Fig. 1. Grasp taxonomy.

2.2 Principal Component Analysis

Hand grasp motion involves a large number of finger joints, and each grasp task is completely defined only if all the joint data are available. However, as we recognize in everyday life, grasp tasks look very similar, or at least can be classified by a few dominant poses. This observation suggests that finger motions show collective behaviors between fingers and between the joints of a finger. Moreover, by modeling the finger joints as revolute joints, we may be able to find regular patterns that can be easily identified and synthesized for any grasp task. Since these few noticeable patterns appear in a high-dimensional joint space, a dimension reduction technique such as PCA can be effective, and we therefore use PCA as the fundamental tool for grasp analysis and synthesis.

In order to analyze human grasp motion, we collect numerical finger joint data with a device that captures joint motion, such as a data glove. The data captured at each instant has the form v = [a_1, ..., a_d]^T, where d is the number of data glove sensors and each a_i corresponds to the i-th finger joint angle. By carrying out grasps with different objects, we obtain a collection of data

A = [v_1 \;\; v_2 \;\; \cdots \;\; v_n]^T \in \mathbb{R}^{n \times d}    (1)

where the matrix A denotes the hand posture data set and n is the number of hand motion data instances. We apply PCA to this data set. In order to eliminate the effect of bias, the mean of the data set is subtracted from each joint data vector:

b_i = v_i - \frac{1}{n}\sum_{j=1}^{n} v_j, \quad i = 1, 2, \ldots, n, \qquad B = [b_1 \;\; \cdots \;\; b_n]^T \in \mathbb{R}^{n \times d}.    (2)

The mean value roughly corresponds to the posture for grasping a medium-sized object. To obtain the principal components, we define

C = B^T B \in \mathbb{R}^{d \times d}    (3)

where C has rank d, and thus we obtain d PCA vectors. It should be noted that the first two (or three) principal components (PCs) account for more than 80% (respectively, 87%) of the variance in hand posture, as already reported by Santello and colleagues (Santello et al., 1998). If we project the grasp motion data onto the principal components, we obtain

\tilde{v} = \frac{1}{n}\sum_{i=1}^{n} v_i + \sum_{j=1}^{t} c_j e_j    (4)

where \tilde{v} \in \mathbb{R}^d is the grasp motion data after projection, e_j \in \mathbb{R}^d is the j-th PC vector, c_j is the coefficient of the j-th PC, and t is the number of PCs involved in the PCA projection. In our application domain, a PCA space with two PCs proved to be sufficient. Taking an arbitrary point c = (c_1, ..., c_t)^T in PCA space together with the mean grasp posture, we can synthesize the corresponding grasp without difficulty.
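To make the projection and synthesis in (1)-(4) concrete, a minimal Python/NumPy sketch is given below. The function names, the choice of two retained PCs, and the random toy data are our own illustration, not part of the paper; the actual experiments use joint angles captured by a 14-sensor data glove.

```python
import numpy as np

def fit_pca(A, t=2):
    """Fit PCA to an (n x d) matrix of joint angles; keep the first t PCs.

    Returns the mean posture and the t principal directions (rows of E_t).
    """
    mean = A.mean(axis=0)                      # mean posture, cf. eq. (2)
    B = A - mean                               # bias-removed data
    # Eigen-decomposition of C = B^T B, eq. (3); eigh returns ascending order.
    eigvals, eigvecs = np.linalg.eigh(B.T @ B)
    order = np.argsort(eigvals)[::-1]          # sort PCs by explained variance
    E_t = eigvecs[:, order[:t]].T              # (t x d) principal directions
    return mean, E_t

def project(v, mean, E_t):
    """PC coefficients c = E_t (v - mean) for one posture v."""
    return E_t @ (v - mean)

def synthesize(c, mean, E_t):
    """Reconstruct a posture from PC coefficients, cf. eq. (4)."""
    return mean + E_t.T @ c

# Toy usage with random 'joint angle' data (14 sensors, 200 samples).
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 14))
mean, E_t = fit_pca(A, t=2)
c = project(A[0], mean, E_t)
v_hat = synthesize(c, mean, E_t)
```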

2.3 Gaussian Mixture Model

Generally, a GMM estimates the probability density function of a data set by a weighted sum of Gaussian distributions. The probability density function can be written as

p(x) = \sum_{k=1}^{K} \pi_k N(x \mid \mu_k, \Sigma_k)    (5)

where \pi_k is the weight of the k-th Gaussian distribution and N(x \mid \mu_k, \Sigma_k) is the component density with mean \mu_k and covariance \Sigma_k,

N(x \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{d/2}\sqrt{\det(\Sigma_k)}} \exp\!\left(-\tfrac{1}{2}(x-\mu_k)^T \Sigma_k^{-1} (x-\mu_k)\right).    (6)

We define \theta_k \triangleq \{\mu_k, \Sigma_k, \pi_k\} as the parameter set of the k-th Gaussian model and estimate it with the expectation-maximization (EM) algorithm, which alternates two steps: an E step and an M step. In the E step, we estimate the probability that each data point belongs to each Gaussian component:

P(z_j = 1 \mid x_i) = \frac{\pi_j N(x_i \mid \mu_j, \Sigma_j)}{\sum_{k=1}^{K} \pi_k N(x_i \mid \mu_k, \Sigma_k)}    (7)

where z_j is the hidden indicator of membership in the j-th Gaussian component. After obtaining the probabilities in (7), we re-estimate the parameters \theta_j in the M step:

N_j = \sum_{i=1}^{N} P(z_j = 1 \mid x_i), \quad \mu_j = \frac{1}{N_j}\sum_{i=1}^{N} P(z_j = 1 \mid x_i)\, x_i, \quad \Sigma_j = \frac{1}{N_j}\sum_{i=1}^{N} P(z_j = 1 \mid x_i)(x_i - \mu_j)(x_i - \mu_j)^T, \quad \pi_j = \frac{N_j}{N}.    (8)

We repeat these E and M steps until the Gaussian parameters converge. Fig. 2 shows examples of the resulting Gaussian distributions in PCA space.
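The E and M steps in (7)-(8) can be sketched directly in NumPy as below. The initialization, the small regularization term added to each covariance, and the fixed iteration count are our own choices; an off-the-shelf routine such as sklearn.mixture.GaussianMixture would serve equally well.

```python
import numpy as np

def gaussian_pdf(X, mu, Sigma):
    """N(x | mu, Sigma) evaluated for each row of X, eq. (6)."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(Sigma)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1))

def em_gmm(X, K, n_iter=100, seed=0):
    """Fit a K-component GMM to X (N x d) with the EM iteration of eqs. (7)-(8)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(N, K, replace=False)]          # random initial means
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    for _ in range(n_iter):
        # E step: responsibilities P(z_j = 1 | x_i), eq. (7)
        resp = np.column_stack(
            [pi[j] * gaussian_pdf(X, mu[j], Sigma[j]) for j in range(K)])
        resp /= resp.sum(axis=1, keepdims=True)
        # M step: update N_j, mu_j, Sigma_j, pi_j, eq. (8)
        Nj = resp.sum(axis=0)
        mu = (resp.T @ X) / Nj[:, None]
        for j in range(K):
            diff = X - mu[j]
            Sigma[j] = (resp[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(d)
        pi = Nj / N
    return pi, mu, Sigma

# Toy usage: fit a 2-component GMM to 2-D PC coefficients.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
pi, mu, Sigma = em_gmm(X, K=2)
```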



Fig. 2. Examples of Gaussian distributions in PCA space: (a) Pinch, (b) Tripod, (c) Spherical.

3. GRASP IDENTIFICATION & SYNTHESIS VIA PCA & GMM

For a given grasp data set, in order to estimate the corresponding grasp taxonomy and geometric parameters, we need a GMM in joint space. We obtain it from a lower-dimensional GMM fitted to the grasp motion data in PCA space. Note that the PCA space is a reduced-dimension space that compactly characterizes the original data set; the GMM in the higher-dimensional joint space is derived from the GMM of the lower-dimensional PCA space. Finally, using these GMMs, we construct mathematical models of various grasp postures.

3.1 Mathematical model of grasp posture

We define two Gaussian distributions,

f_{joint}(x) = \frac{1}{(2\pi)^{m/2}\sqrt{\det(\Sigma_x)}} \exp\!\left(-\tfrac{1}{2} x^T \Sigma_x^{-1} x\right)    (9)

f_{PCA}(c) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Sigma)}} \exp\!\left(-\tfrac{1}{2} c^T \Sigma^{-1} c\right)    (10)

where f_{joint}(x) and f_{PCA}(c) are the Gaussian distributions in joint space and PCA space, respectively, m is the dimension of the joint space, and n is the dimension of the PCA space. \Sigma_x \in \mathbb{R}^{m \times m} denotes the covariance matrix in joint space, \Sigma \in \mathbb{R}^{n \times n} the covariance matrix in PCA space, x \in \mathbb{R}^m a joint space data point, and c \in \mathbb{R}^n a PCA space data point. A joint space data point x is approximated from the eigenvectors e_j and the coefficients c_j as

E = [e_1 \;\; \cdots \;\; e_n] \in \mathbb{R}^{m \times n}, \quad c = [c_1, \ldots, c_n]^T \in \mathbb{R}^{n}, \quad \tilde{x} = c_1 e_1 + \cdots + c_n e_n = Ec \approx x.    (11)

Using (9)-(11), we obtain the following relation between the covariance matrices:

\exp\!\left(-\tfrac{1}{2} x^T \Sigma_x^{-1} x\right) \approx \exp\!\left(-\tfrac{1}{2}(Ec)^T \Sigma_x^{-1}(Ec)\right) = \exp\!\left(-\tfrac{1}{2} c^T E^T \Sigma_x^{-1} E\, c\right).    (12)

From the above, the high-dimensional covariance \Sigma_x is determined by

E^T \Sigma_x^{-1} E \approx \Sigma^{-1}, \quad \Sigma_x^{-1} \approx E \Sigma^{-1} E^T, \quad \Sigma_x \approx (E^T)^{-1} \Sigma E^{-1}.    (13)

In this form, however, E and \Sigma are not invertible because they are non-square. We therefore extend the matrix dimensions to \mathbb{R}^{m \times m}: \tilde{E}, the extended matrix of E, is filled with the remaining eigenvectors from the training data set so that \tilde{E} \in \mathbb{R}^{m \times m}, and

\tilde{\Sigma} = \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{m \times m}    (14)

\Sigma_x^{-1} \approx \tilde{E} \tilde{\Sigma}^{-1} \tilde{E}^T, \quad \Sigma_x \approx (\tilde{E}^T)^{-1} \tilde{\Sigma} \tilde{E}^{-1}.    (15)

From (15) we obtain the covariance matrix in joint space (see Fig. 3). In this research, we construct GMMs for the six grasp taxonomies following the above procedure.
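A sketch of the covariance conversion in (14)-(15) is shown below, assuming the full set of m eigenvectors from the training data is available. Because the zero block in (14) makes the extended covariance singular, the sketch pads the discarded directions with a small variance eps; this numerical workaround is ours and is not stated in the paper.

```python
import numpy as np

def joint_space_covariance(E_full, Sigma_pca, n_pcs, eps=1e-8):
    """Map a PCA-space covariance to joint space, following eqs. (14)-(15).

    E_full   : (m x m) matrix whose columns are all m eigenvectors of the
               training data (the extended matrix E-tilde).
    Sigma_pca: (n_pcs x n_pcs) covariance of the retained PC coefficients.
    eps      : small variance assigned to the discarded directions so that
               Sigma-tilde stays invertible (our workaround; the paper pads
               with exact zeros).
    """
    m = E_full.shape[0]
    Sigma_tilde = eps * np.eye(m)                  # eq. (14), padded block
    Sigma_tilde[:n_pcs, :n_pcs] = Sigma_pca
    E_inv = np.linalg.inv(E_full)
    # eq. (15): Sigma_x ~= (E-tilde^T)^-1  Sigma-tilde  E-tilde^-1
    return E_inv.T @ Sigma_tilde @ E_inv

# Toy usage: 14 joints, 2 retained PCs.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(14, 14)))     # orthonormal eigenvector set
Sigma_pca = np.diag([4.0, 1.0])
Sigma_x = joint_space_covariance(Q, Sigma_pca, n_pcs=2)
```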

Fig. 3. Covariance conversion from PCA space to joint space.

3.2 Meaning of the PCA Coefficients

After the grasp type has been identified, we still need to identify object parameters such as the diameter of a circular object or the depth of a rectangular object. We find that these object parameters are related to the PCA coefficients. In other words, if we know the grasp type of a given unknown grasp posture, we can describe that posture as

p_{in} \approx \mu_{idx} + \sum_{j=1}^{t} c_{idx,j}\, e_{idx,j}    (16)

where the subscript idx denotes the identified grasp type, \mu_{idx} is the mean vector of that grasp type, and p_{in} is the joint data of the grasp posture. From experimentation, the PCA coefficients have a strong linear relation with the object parameters. Fig. 4 shows the relation between c_j and the object diameter for the pinch grasp; the green points represent the data set used for generating the grasp model. As shown, we can interpolate these points by (17), which relates c_j and the object diameter; the best line approximation is found with the least squares method:

d = a c_j + b    (17)

where d is the object diameter and a and b are the coefficients for object parameter identification. The blue line in Fig. 4 shows the approximated relation between the PCA coefficient and the object diameter.

Fig. 4. Object diameter versus PCA coefficient for the pinch grasp.

For our grasp models, 10 different objects covering the six taxonomies are used. All the data collected for constructing the mathematical grasp models are finger joint angles.

Table 1. Coefficients for object parameter identification

     Mini Pinch   Tri Pinch   Quad Pinch   Pinch      Tripod     Spherical
a    -0.0014       0.0010      0.00087      0.00071   -0.00095   -0.00080
b     0.0543       0.0521      0.0497       0.0519     0.0336     0.0499
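The line fit in (17) is an ordinary least-squares problem. The sketch below fits a and b from training pairs (c_j, d) and then applies them to a new coefficient. The training pairs are hypothetical values of ours, the example reuses the pinch-grasp coefficients of Table 1, and since the paper does not state the units of d, we assume meters.

```python
import numpy as np

def fit_diameter_line(c_samples, diameters):
    """Least-squares fit of d = a*c_j + b, eq. (17)."""
    a, b = np.polyfit(c_samples, diameters, deg=1)
    return a, b

def estimate_diameter(c_j, a, b):
    """Estimate the object parameter from one PC coefficient."""
    return a * c_j + b

# Fitting from hypothetical training pairs (c_j, diameter in meters).
c_train = np.array([-20.0, 0.0, 25.0, 50.0])
d_train = np.array([0.038, 0.052, 0.070, 0.087])
a_fit, b_fit = fit_diameter_line(c_train, d_train)

# Applying the pinch-grasp coefficients from Table 1 to a hypothetical c_j.
a_pinch, b_pinch = 0.00071, 0.0519
print(estimate_diameter(20.0, a_pinch, b_pinch))
```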


4. IDENTIFICATION AND SYNTHESIS: SIMULATIONS

4.1 Grasp identification

For input finger data from users, with unknown grasp taxonomy and unknown objects, we compute the probability of each taxonomy group using (7) and (9). Fig. 5 shows the target objects used for grasp and object identification, and Tables 2-7 show the results of the computation. As the numerical values show, the probabilities at the correct Gaussian models are the highest for all object sizes. For example, a 3 cm object with unknown grasp taxonomy (the real grasp posture was pinch) was identified as a pinch grasp with probability 0.6250, which is the most probable class. After the grasp taxonomy identification, we also estimate the object parameters, for example the diameter or depth of the grasped target object, as shown in Table 8. This estimation is done in PCA space by computing the PC coordinates and comparing the results with the mathematical models. That is, we find c_j from the grasp PCA information in (4), and with this we determine the parameter of the grasp target object. Table 8 lists the estimated parameters for the objects shown in Fig. 5 together with the correct values; the results show that the estimation method is very accurate. Fig. 6 shows the measure values of identification for an arbitrary grasp posture that is not in our standard taxonomy. As shown, the measure values are so low that they clearly indicate the grasp does not belong to the considered grasp types. Table 9 shows the standard deviation of the grasp identification results over the grasp posture data of 25 persons; the object used is shown in Fig. 11.

Fig. 5. Identification target objects: (a) 3cm, (b) 6cm, (c) 9cm.

Fig. 6. Identification of an arbitrary grasp posture: (a) grasp posture, (b) measure.

Table 2. Identification of grasp taxonomy (Pinch)

       Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm    0.0000      0.0000     0.0025      0.6250   0.0000   0.3725
6cm    0.0000      0.0000     0.0001      0.8165   0.0000   0.1834
9cm    0.0000      0.0000     0.0000      1.0000   0.0000   0.0000

Table 3. Identification of grasp taxonomy (Tripod)

       Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm    0.0232      0.0625     0.0363      0.0000   0.8413   0.0467
6cm    0.0159      0.0815     0.0212      0.0000   0.8643   0.0171
9cm    0.0000      0.0000     0.0000      0.0000   1.0000   0.0000

Table 4. Identification of grasp taxonomy (Spherical)

       Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm    0.0005      0.0025     0.0862      0.2844   0.0039   0.6325
6cm    0.0000      0.0000     0.0360      0.2405   0.0000   0.7235
9cm    0.0000      0.0000     0.0000      0.0000   0.0000   1.0000

Table 5. Identification of grasp taxonomy (Mini Pinch)

       Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm    0.5923      0.2410     0.0046      0.0000   0.1621   0.0000
6cm    0.7623      0.1923     0.0000      0.0000   0.0454   0.0000
9cm    1.0000      0.0000     0.0000      0.0000   0.0000   0.0000

Table 6. Identification of grasp taxonomy (Tri Pinch)

       Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm    0.1023      0.5901     0.0267      0.0026   0.2754   0.0030
6cm    0.0124      0.7621     0.0126      0.0002   0.2126   0.0001
9cm    0.0000      1.0000     0.0000      0.0000   0.0000   0.0000

Table 7. Identification of grasp taxonomy (Quad Pinch)

       Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm    0.0045      0.1038     0.7413      0.0972   0.0257   0.0269
6cm    0.0000      0.0297     0.9163      0.0244   0.0189   0.0107
9cm    0.0000      0.0000     1.0000      0.0000   0.0000   0.0000

Table 8. Estimation results for object diameters (cm)

Real Size  Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
3cm        2.8587      3.2541     3.0189      2.6562   2.6462   3.1624
6cm        6.1421      5.7451     5.8207      6.2050   5.7421   5.6813
9cm        8.7714      9.2412     8.7972      8.7945   8.6415   9.3065
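Conceptually, the identification step amounts to evaluating the joint-space Gaussian of each taxonomy model, (9), at the single static input posture and normalizing across taxonomies, in the spirit of (7). The sketch below illustrates this with hypothetical three-joint models and our own function names; the real models use the mean and covariance derived in Section 3.1 over all glove joints.

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """N(x | mu, Sigma) for a single posture x, as in eq. (9)."""
    d = x.size
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))

def identify_grasp(x, models):
    """Normalized probability of each taxonomy for a static posture x.

    `models` maps a taxonomy name to its joint-space (mu, Sigma) pair,
    obtained from the PCA-space GMM as in Section 3.1.
    """
    scores = {name: gaussian_pdf(x, mu, Sigma)
              for name, (mu, Sigma) in models.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Hypothetical usage with two toy 3-joint models (real models use all glove joints).
models = {
    "pinch":     (np.array([0.2, 0.4, 0.3]), 0.05 * np.eye(3)),
    "spherical": (np.array([0.6, 0.7, 0.8]), 0.05 * np.eye(3)),
}
posture = np.array([0.25, 0.38, 0.33])
print(identify_grasp(posture, models))   # pinch dominates for this posture
```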

Table 9. Standard deviation of grasp identification

Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
0.0409      0.0429     0.0137      0.1872   0.0682   0.1105


4.2 Grasp synthesis

In this subsection, we show grasp synthesis results for some given grasp target objects, using the mathematical grasp models introduced above. Fig. 7 demonstrates six grasps with different objects. For each grasp, the appropriate grasp type is chosen before the grasp synthesis. We find that the synthesized grasps are natural and effective.

Fig. 7. Grasp synthesis: (a) Mini pinch, (b) Tri pinch, (c) Quad pinch, (d) Pinch, (e) Tripod, (f) Spherical.

5. COMPARISON WITH OTHER GRASP IDENTIFICATION METHODS

Ju et al. (Ju et al., 2008) addressed several algorithms for grasp identification. Unlike our proposed method, Ju's algorithms require the whole continuous grasp trajectory recorded while the hand opens and closes. Thus, we collect complete trajectory data for each grasp motion using the data glove. Before recognition, their algorithms also need pre-processing: (1) smoothing the finger joint trajectories with a low-pass filter, (2) calculating the extrema of the finger joint angles, and (3) adjusting the joint angle data to a common time period (an illustrative sketch of such pre-processing appears after Table 12).

For comparison purposes, we take the time clustering and time-based multiple GMMs methods from (Ju et al., 2008). The time clustering based identification algorithm computes the deviation between the grasp motion model in the unified time period and the input grasp motion data as a Euclidean norm. Time-based multiple GMMs identification also needs grasp motion data in the unified time period; here the whole grasp motion is represented by several joined Gaussian distributions whose centers lie on the grasp motion trajectory, and the average distance between the center of each Gaussian distribution and the input data at each time yields the identification error. Fig. 11 shows the object used in this section.

Tables 10 and 11 show the identification results of these methods for grasping a coffee can approximately 5 cm in diameter. As shown, the identification results are all correct (note that a smaller value means a smaller identification error). Our method also gives all correct identification results, but using only the final static pose at the contact (Table 12). Hence, although the identification results are the same, the proposed method is much simpler and more convenient. Furthermore, our GMM models work for various geometric scales of grasp objects, while Ju's algorithms work only for grasping the specific object size for which the trained model is derived.

Fig. 8. Preprocessing in Ju's methods.

Fig. 9. Time clustering in Ju's methods.

Fig. 10. Gaussian mixture model in Ju's methods.

Fig. 11. Target object used in the identification comparison (5cm).

Table 10. Ju's time clustering based identification

Input      Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
Mini       0.0375      0.6578     0.8448      0.8968   0.8863   0.6883
Tri        0.7538      0.0243     0.6430      0.6635   0.6757   0.6787
Quad       0.6491      0.7883     0.0227      0.7983   0.7055   0.6409
Pinch      0.8761      0.7924     0.7657      0.0389   0.7815   0.6413
Tripod     0.6190      0.7228     0.8743      0.7676   0.0176   0.7175
Spherical  0.6384      0.8569     0.7759      0.7892   0.7805   0.0278

Table 11. Ju's Gaussian mixture model based identification

Input      Mini Pinch  Tri Pinch  Quad Pinch  Pinch    Tripod   Spherical
Mini       0.2834      0.7424     0.6427      0.6751   0.8761   0.8742
Tri        0.8801      0.3157     0.8882      0.7686   0.7914   0.8324
Quad       0.8202      0.7439     0.2438      0.8970   0.9101   0.8672
Pinch      0.7470      0.6034     0.7156      0.2874   0.8573   0.7642
Tripod     0.8199      0.7452     0.6248      0.7204   0.2383   0.9451
Spherical  0.6418      0.8411     0.7047      0.7266   0.8427   0.2754

Table 12. Comparison of identification results (5cm object)

           PCA & GMM (probability)  TC (error)  GMM (error)
Mini       0.7815                   0.0375      0.2834
Tri        0.7941                   0.0243      0.3157
Quad       0.8812                   0.0227      0.2438
Pinch      0.8032                   0.0389      0.2874
Tripod     0.8512                   0.0176      0.2383
Spherical  0.7721                   0.0278      0.2754
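For reference, the pre-processing that Ju's trajectory-based methods require (low-pass smoothing, extrema computation, and time normalization) can be sketched as follows. The filter order, cutoff, and sample count are arbitrary illustrative choices of ours, not the settings used in (Ju et al., 2008).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trajectory(t, joint_traj, n_samples=100, cutoff=0.1):
    """Illustrative pre-processing in the spirit of Ju's pipeline:
    low-pass smoothing, per-joint extrema, and resampling to a unified
    time base. joint_traj is (T x d) with one column per joint.
    """
    b, a = butter(2, cutoff)                        # 2nd-order low-pass filter
    smoothed = filtfilt(b, a, joint_traj, axis=0)   # smooth each joint channel
    extrema = smoothed.max(axis=0), smoothed.min(axis=0)   # per-joint extrema
    t_unified = np.linspace(t[0], t[-1], n_samples)
    resampled = np.column_stack(
        [np.interp(t_unified, t, smoothed[:, j]) for j in range(smoothed.shape[1])]
    )
    return resampled, extrema
```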



6. CONCLUSIONS AND FUTURE WORKS

6.1 Conclusions

In this paper, we proposed a systematic way to analyze and synthesize grasp postures from finger joint data. The proposed method combines the PCA technique and GMMs in order to derive a mathematical model of human grasps, which show collective behaviors due to constraints in the finger joints. The results show that the method works well in defining grasp types with only a few principal components and in identifying the grasp type of input motion data.

6.2 Future works

Our data glove has 14 sensors, which might not be enough to represent natural human hand motions. We will therefore add more sensors for measuring the hand joints, so that we can capture the collective behaviors of human grasps more accurately.

REFERENCES

Aleottie, J. and Caselli, S. (2006). Grasp recognition in virtual reality for robot pregrasp planning by demonstration. In Proceedings of the IEEE International Conference on Robotics and Automation, 2801-2806.
Amor, H.B., Heumer, G., and Jung, B. (2008). Grasp synthesis from low-dimensional probabilistic grasp models. Computer Animation and Virtual Worlds, 19(3-4), 445-454.
Braido, P. and Zhang, X. (2004). Quantitative analysis of finger motion coordination in hand manipulative and gestic acts. Human Movement Science, 22(6), 661-678.
Cutkosky, M.R. (1989). On grasp choice, grasp models, and the design of hands for manufacturing tasks. IEEE Transactions on Robotics and Automation, 5(3), 269-279.

Hager-Ross, C. and Schieber, M.H. (2000). Quantifying the independence of human finger movements: comparisons of digits, hands and movement frequencies. The Journal of Neuroscience, 20(22), 8542-8550.
Iberall, T. (1987). The nature of human prehension: Three dextrous hands in one. In Proceedings of the IEEE International Conference on Robotics and Automation, 396-401.
Iberall, T. (1997). Human prehension and dexterous robot hands. International Journal of Robotics Research, 16(3), 285-299.
Ip, H.H. and Chan, C. (1997). Dynamic simulation of human hand motion using an anatomically correct hierarchical approach. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, volume 2, 1307-1312.
Ju, Z., Liu, H., Zhu, X., and Xiong, Y. (2008). Dynamic grasp recognition using time clustering, Gaussian mixture models and hidden Markov models. In Proceedings of the International Conference on Intelligent Robotics and Applications, 669-678.
Kamakura, N. (1989). Te no katachi, te no ugoki. Ishiyaku Publishers, Inc., Japan.
Kang, S.B. and Ikeuchi, K. (1995). Toward automatic robot instruction from perception: temporal segmentation of tasks from human hand motion. IEEE Transactions on Robotics and Automation, 11(5).
Lin, J., Wu, Y., and Huang, T.S. (2000). Modeling the constraints of human hand motion. In Proceedings of the Workshop on Human Motion, 121-126.
Lozano-Perez, T., Jones, J.L., Mazer, E., O'Donnell, P.A., Grimson, W.E.L., Tournassoud, P., and Lanusse, A. (1987). Handey: A robot system that recognizes, plans, and manipulates. In Proceedings of the IEEE International Conference on Robotics and Automation, 843-849.
Moldenhauer, J., Boesnach, I., Beth, T., Wank, V., and Bos, K. (2005). Analysis of human motion for humanoid robots. In Proceedings of the IEEE International Conference on Robotics and Automation, 312-317.
Morales, A., Asfour, T., and Azad, P. (2006). Integrated grasp planning and visual object localization for a humanoid robot with five-fingered hands. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 5663-5668.
Morales, A., Prats, M., Sanz, P., and Pobil, A.P. (2007). An experiment in the use of manipulation primitives and tactile perception for reactive grasping. In Proceedings of the RSS Workshop on Robot Manipulation: Sensing and Adapting to the Real World, 1-7.
Napier, J.R. (1956). The prehensile movements of the human hand. Journal of Bone and Joint Surgery, 38(4), 902-913.
Ninomiya, T. and Maeno, T. (2008). Analysis and systematic classification of human hand movement for robot hand design. Journal of Robotics and Mechatronics, 20(3), 429-435.
Santello, M., Flanders, M., and Soechting, J.F. (1998). Postural hand synergies for tool use. The Journal of Neuroscience, 18(23), 10105-10115.
