Grasp configuration planning for a low-cost and easy-operation underactuated three-fingered robot hand




Mechanism and Machine Theory 129 (2018) 51–69



Research paper

Shuangji Yao a,∗, Marco Ceccarelli b, Giuseppe Carbone b, Zhikui Dong c

a College of Material Science and Engineering, Yanshan University, Qinhuangdao, PR China
b LARM: Laboratory of Robotics and Mechatronics, University of Cassino and South Latium, Cassino (Frosinone), Italy
c School of Mechanical Engineering, Yanshan University, Qinhuangdao, PR China

∗ Corresponding author. E-mail addresses: [email protected] (S. Yao), [email protected] (M. Ceccarelli), [email protected] (G. Carbone).

https://doi.org/10.1016/j.mechmachtheory.2018.06.019
0094-114X/© 2018 Elsevier Ltd. All rights reserved.

Article info

Article history: Received 29 November 2017; Revised 27 March 2018; Accepted 23 June 2018

Keywords: Robotic hand; Underactuation; Grasp configuration planning; Simulation; Experimental testing

Abstract

This paper proposes a method for modeling and planning the grasping configuration of a robotic hand with underactuated finger mechanisms. The proposed modeling algorithm is based on the analysis and mimicking of human grasping experience. Results of the analysis are preprocessed and stored in a database. The grasp configuration planning algorithm can be used within real-time online grasp control based on artificial neural networks. Namely, the shapes and sizes of task objects are described by taxonomy data, which are used to generate grasp configurations. Then, a robot hand grasp control system is designed based on the proposed grasp planning with closed-loop position and force feedback. Simulations and experiments are carried out to show the basic features of the proposed formulation for identifying grasp configurations while dealing with target objects of different shapes and sizes. It is expected that the well-trained underactuated robot hand can solve most grasping tasks in daily life. The research approach aims at a low-cost, easy-operation solution for feasible and practical implementation. © 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Self-adaptive envelope grasping can be conveniently obtained by means of underactuated finger mechanisms in a robotic hand. This solution provides convenient features for grasping objects of unknown shapes and sizes; however, it leaves the finger positions indeterminate during the grasp. In the last decades several researchers have investigated multi-fingered robotic hands aiming to mimic the human hand. Many studies focus on grasp planning for target objects and the related control planning. Research works approach grasp planning and grasp strategy models from different viewpoints, referring to grasping configuration planning, grasping position planning for fingers and palm, contact point selection, kinematic and torque computation for each joint, grasping operation and stability [1].

Geometric methods were an initial solution for grasp planning, based on the theory of form closure and force closure. Nguyen presented a simple algorithm in [2] to construct force-closure grasps directly from the grasped object's shape. An efficient algorithm for grasp planning synthesis based on bounded polyhedral/polygonal objects is reported in [3]. In [4], a computation method for stable grasps of three-dimensional polyhedral objects is formulated. A sufficient and necessary condition to achieve form-closure grasps with multi-fingered robot hands is demonstrated in [5].





An improved approach using a ray-shooting algorithm is reported in [6] to test force closure for 3D frictional grasps. Optimization methods have also been used to design grasp planning for multi-fingered robot hands. The optimality criteria are related to grasping capabilities such as manipulation space, grasp forces and torques, and grasp stability; when an optimality criterion reaches its maximum or minimum, the grasping parameters can be calculated as optimal results. The form-closure optimization problem for a stable grip has been formulated and solved in [7]. An optimality criterion based on decoupled wrenches is presented in [8], in which algorithms for achieving force-closure grasps of 2-D and 3-D objects are developed. In [9] two general optimality criteria are introduced and discussed by considering the total finger force and the maximum finger force. An approach for quantifying the effects of compliant grasps is reported in [10], in which the grasping stiffness is defined and used as an optimality criterion. A general algorithm with two computing phases is presented in [11] for optimum dynamic force distribution in multi-fingered grasping. These approaches are based on an objective function for grasp planning focused on optimum evaluation. A dimension reduction method for dexterous hand grasping is proposed in [12], which makes it possible to apply a control algorithm to dexterous hand models and obtain consistent results.

However, for most multi-contact grasp systems it is complex to provide a suitable formulation of the evaluation function. In addition, an optimization iteration needs massive computation to converge, and this computation process is time consuming; thus, it is difficult to apply in a real-time control system. Target objects vary in shape, size and grasping environment, and it may be complicated to describe the object parameters and translate them into a mathematical model. A hand control system therefore usually cannot be programmed for universal grasp tasks.

In this paper, grasp planning with 6 configurations is proposed for an underactuated multi-fingered robot hand. With its passive compliance and self-adaptation, the underactuated robot hand can grasp most of the objects in daily life. An artificial intelligence algorithm for grasp configuration modeling is proposed based on human knowledge. Simulations and experiments show the feasibility and practicality of the low-cost, easy-operation solution.

2. Grasp configuration planning based on artificial intelligence

The human hand can grasp different objects with several finger configurations. The rule for choosing a grasp configuration is based on daily experience and intelligent consideration. For example, a human can use two fingertips to pinch a hammer handle, but it easily slides off; the grasp is stable if all the fingers and the palm envelope the hammer together. Thus, human experience can provide the basic information for successful planning of robot hand grasp configurations, and the grasp planning can be generated by algorithms that process and analyze human knowledge. Grasp planning of multi-fingered robot hands based on human knowledge and artificial intelligence technology is a significant challenge.
There are two main issues that should be studied: i) the methodology and representation for grasp selection on known and unknown objects; ii) learning from human experience to grasp similar objects. Many research works have been developed in this field in recent years. The grasping algorithms proposed in [13,14] can figure out the grasp type, establish ideal contact points and define metrics, even for objects of uncertain shape in complex environments. A method of demonstrated grasp sequences for a five-fingered robot hand is reconstructed and proposed in [15], based on hybrid fuzzy modeling. A reinforcement learning algorithm is proposed in [16] to enable a service robot hand to grasp daily objects with many contact points; the algorithm is evaluated in simulation with a three-fingered robot hand. Comprehensive grasping knowledge is learned by referring to the geometrical and physical knowledge of grasping in [17]; simulation work with the Barrett Hand shows that suitable grasp solutions for a novel object can be found quickly. A method is presented in [18] to acquire robust motion primitives for uncertain object grasping. A template-based grasping algorithm is reported in [19] to find suitable grasp configurations for given tasks. Approaches of machine learning from demonstration are discussed in [20–22]; the robot hands can be trained to perform the desired grasp according to the task. A fast KNN-based method of object categorization by human experience is proposed in [23] for grasp planning. Neural network evolution and probabilistic inference approaches for robot hands grasping novel objects are introduced in [24] and [25]. A method for grasp planning by recognizing and estimating objects from images is developed in [26] and [27], where the acquired grasping data is used to learn the best grasping strategy.

In retrospect, a planning algorithm for grasping configuration is used in a control system to establish the relationship between target objects and grasp configurations. The grasp analysis with human knowledge can be carried out when the object characteristics such as shape and size are known. To be applied in a real-time control system, the following characteristics are necessary:



• Robot hands can perform stable grasp configurations.
• Human grasping experience and knowledge can be exploited, as an artificial intelligence, to build a model for the grasp configuration planning algorithm.
• The grasp configuration planning algorithm has a fast calculation speed.

In this paper, we propose a grasp configuration planning algorithm for a three-fingered underactuated robot hand. The possibility of realizing the grasp configuration planning algorithm according to human experience is discussed in the first and second sections. In the third section, several available grasp configurations are introduced for the proposed three-fingered underactuated robot hand. The algorithm is obtained in the fourth section based on human experience and knowledge; the artificial intelligence data analysis and modeling is processed by rough set mixed neural networks.



Fig. 1. Three kinds of finger mechanisms for underactuated robot hand.

Fig. 2. An enveloping grasping simulation with underactuated finger mechanism of Fig. 1a) in [33].

In the fifth section, the proposed grasp configuration planning method is applied: we design a simple and practical control system to examine the proposed grasp configuration planning algorithm. The simulation and experiment results indicate that it is possible to develop a low-cost, easy-operation underactuated robot hand.

3. Grasp configurations for the underactuated three-fingered robot hand

Usually, a dexterous robot hand has the same number of actuators and DOFs (degrees of freedom), so that the torque or rotation of each finger joint can be controlled individually. An underactuated finger mechanism, instead, is designed with fewer actuators than degrees of freedom. Gosselin designed and built several prototypes of underactuated hands based on linkage and tendon mechanisms; the theory and practice of underactuated grasp design, kinetostatic analysis and control are developed in [28]. Ciocarlie has studied grasp planning for tendon-driven underactuated hands in detail [29,30]. Herder developed an underactuated adaptive robotic hand for grasping a large range of products in warehouse environments [31]; the goal of that research is a cheap, robust, easy-to-control and reliable underactuated hand. Ceccarelli presented design considerations and structural schemes of finger mechanisms for underactuated grasping in [32]. Many research works on low-cost and easy-operation robot hands have been carried out at LARM (Laboratory of Robotics and Mechatronics, University of Cassino and South Latium). Some linkage underactuated finger mechanisms are proposed in [33] and [34], as shown in Fig. 1, and grasp simulations and design criteria for these mechanisms were developed in [35]. Although the object characteristics are uncertain and complicated, the grasp simulations show that the underactuated robot finger mechanisms can grasp objects of different shapes and sizes with a self-adaptive, enveloping behavior, as shown in Fig. 2. An underactuated finger mechanism can perform stable grasps by using an enveloped configuration. Thus, underactuated robot finger mechanisms are a practical approach to deal with universal unknown grasp tasks and a solution for building low-cost, easy-operation robot hands.

The palm of the proposed three-fingered underactuated hand is designed as shown in Fig. 3. The positions of Finger 1 and Finger 2 can be changed by a pair of gears that rotate around their own centers; the gear system is assembled under the palm so that the positions of Fingers 1 and 2 remain symmetric through the gear transmission. Finger 3 is fixed and located on the opposite side of the palm. The prototype of the three-fingered robot hand is built according to the proposed underactuated finger and palm mechanisms, as shown in Fig. 4. It can be observed from Fig. 4 that the hand has four possible finger positions, which allow at least 6 different grasp configurations. The four finger positions are listed below.



Fig. 3. The palm of the proposed three-fingered robot hand with the gear system for adjusting the finger positions.

Fig. 4. The proposed hand prototype with different finger positions: a) fingers 2 and 3 opposite to finger 1; b) three fingers surrounded; c) fingers 2 and 3 parallel and finger 1 free; d) three fingers parallel on one side.

Position 1. Fingers 1 and 2 are opposite to finger 3, Fig. 4a);
Position 2. the three fingers are arranged around the palm (surrounded), Fig. 4b);
Position 3. fingers 1 and 2 are parallel and finger 3 is free, Fig. 4c);
Position 4. the three fingers are parallel on one side, Fig. 4d).

The three-fingered robot hand in Fig. 4 can perform six grasp configurations. The types of grasp configuration are derived from human grasp configurations in daily life and are described in detail in Fig. 5. The grasp configurations of the proposed three-fingered robot hand are summarized in Table 1.

Configuration 1. Three-finger parallel pinch, shown in Fig. 5a). In this grasp configuration, fingers 1 and 2 are parallel and finger 3 is located on the opposite side of the target object. The fingertips are used to pinch small and long objects, for example a pen. This configuration is derived from Fig. 4a) (position 1).



Fig. 5. Three-fingered robot hand grasp configurations, which are derived from human grasp configurations: a) Three fingers parallel pinch; b) Three fingers cylindrical envelop; c) Three fingers surrounded pinch; d) Three fingers surrounded envelop; e) Two fingers pinch; f) Three fingers parallel hook grasping.

Table 1
Grasp configuration types for the proposed underactuated three-fingered robot hand.

| Configuration type | Configuration description | Shown in Fig. 5 | Derived from Fig. 4 |
| --- | --- | --- | --- |
| Configuration 1 | Three finger parallel pinch | Fig. 5a) | Fig. 4a): position 1 |
| Configuration 2 | Three fingers parallel envelop | Fig. 5b) | Fig. 4a): position 1 |
| Configuration 3 | Three fingers surrounded pinch | Fig. 5c) | Fig. 4b): position 2 |
| Configuration 4 | Three fingers surrounded envelop | Fig. 5d) | Fig. 4b): position 2 |
| Configuration 5 | Two fingers pinch | Fig. 5e) | Fig. 4c): position 3 |
| Configuration 6 | Three fingers parallel pull in one side | Fig. 5f) | Fig. 4d): position 4 |

Configuration 2. Three fingers parallel envelop, shown in Fig. 5b). In this grasp configuration, fingers 1 and 2 are parallel and finger 3 is located on the opposite side. All finger phalanxes envelop a cylindrical object and each phalanx is in contact with the object surface, for example when envelope grasping a bottle. This configuration is also derived from Fig. 4a) (position 1).

Configuration 3. Three fingers surrounded pinch, shown in Fig. 5c). In this grasp configuration, fingers 1, 2 and 3 are located centrally symmetric on the palm. The three fingertips pinch small cylindrical or spherical objects, or even small regular polyhedral objects, for example a round cover. This configuration is derived from Fig. 4b) (position 2).

Configuration 4. Three fingers surrounded envelop, shown in Fig. 5d). In this grasp configuration, fingers 1, 2 and 3 are located centrally symmetric on the palm and envelop spherical or regular polyhedral objects. All the finger phalanxes are in contact with the object, for example when envelope grasping an apple. This configuration is also derived from Fig. 4b) (position 2).

Configuration 5. Two fingers pinch, shown in Fig. 5e). In this grasp configuration, fingers 1 and 2 pinch a very small object and finger 3 is free, for example when pinching a coin. This configuration is derived from Fig. 4c) (position 3). In order to achieve a stable fingertip pinch, a joint limit is designed into the finger prototype; the finger can perform a stable pinch when the joint limit is reached for suitable objects.

Configuration 6. Three fingers parallel hook on one side, shown in Fig. 5f). In this grasp configuration, the three fingers are parallel on the same side of the object. The fingertips are used to hook ring-like objects, for example when opening a door. This configuration is derived from Fig. 4d) (position 4).

In fact, the human hand uses many more grasp configurations. We considered the general grasp taxonomy and decided to implement these 6 grasp configurations for the underactuated three-fingered robot hand. Admittedly, some objects cannot be grasped because of their impracticable location and position; in this case, the objects should be moved by other means before grasping (such as pushing or pulling with a finger). Our aim is to solve most of the tasks in daily life with the 6 grasps mentioned above. Grasping will be carried out when the position and environment requirements are satisfied.
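For implementation purposes, the mapping of Table 1 between grasp configurations and finger positions can be stored as a small lookup structure. The following Python sketch is only an illustrative encoding (the enum names and the mapping constant are ours, not part of the hand's software), assuming the numbering of Figs. 4 and 5:

```python
from enum import IntEnum

class FingerPosition(IntEnum):
    """The four palm/finger arrangements of Fig. 4."""
    OPPOSED = 1        # position 1: fingers 1 and 2 opposite to finger 3
    SURROUNDED = 2     # position 2: three fingers arranged around the palm
    TWO_PARALLEL = 3   # position 3: fingers 1 and 2 parallel, finger 3 free
    ALL_PARALLEL = 4   # position 4: three fingers parallel on one side

class GraspConfiguration(IntEnum):
    """The six grasp configurations of Table 1 (decision codes 1..6)."""
    PARALLEL_PINCH = 1
    PARALLEL_ENVELOP = 2
    SURROUNDED_PINCH = 3
    SURROUNDED_ENVELOP = 4
    TWO_FINGER_PINCH = 5
    PARALLEL_HOOK = 6

# Each grasp configuration is reached from one of the four finger positions (Table 1).
CONFIG_TO_POSITION = {
    GraspConfiguration.PARALLEL_PINCH:     FingerPosition.OPPOSED,
    GraspConfiguration.PARALLEL_ENVELOP:   FingerPosition.OPPOSED,
    GraspConfiguration.SURROUNDED_PINCH:   FingerPosition.SURROUNDED,
    GraspConfiguration.SURROUNDED_ENVELOP: FingerPosition.SURROUNDED,
    GraspConfiguration.TWO_FINGER_PINCH:   FingerPosition.TWO_PARALLEL,
    GraspConfiguration.PARALLEL_HOOK:      FingerPosition.ALL_PARALLEL,
}
```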



Fig. 6. The function of grasp configuration planning algorithm.

4. Grasp configuration planning algorithm with artificial intelligence

The purpose of the grasp configuration planning algorithm is to establish a relationship between objects and grasp configurations for the robot hand. The decision for a grasp configuration is made by the planning algorithm according to the grasp task. The characteristics of the target objects can be described in the form of discrete data, which are used as the input parameters for the grasp configuration planning model; the grasp configurations are the decisions output by the model. As shown in Fig. 6, the relationship between grasp configurations and the grasped objects' characteristics, namely shape and size, is built into the configuration planning algorithm, and the data processing and computation are carried out with an artificial intelligence algorithm. The 6 grasp configurations are generated specifically for the proposed three-fingered underactuated robot hand; thus, the grasp configuration planning method may not directly match other robot hands.

4.1. Grasp configuration planning algorithm based on human grasp knowledge

The human hand can grasp different objects stably because a human can choose a suitable grasp configuration adapted to the object's shape and size. The grasp configuration planning is generated in the human brain from daily experience and skill; grasp experience and knowledge are acquired from many grasping examples, and grasp decisions for a new task can be generated from these examples. Three algorithms for grasp configuration planning modeling have been introduced in [36]: a Gaussian mixture model, a support vector machine and artificial neural networks. The advantages and weaknesses of these modeling algorithms are also discussed in [36]. Rough set theory was first presented in the 1980s and is applied to information processing [37]. It has been proved that rough set theory is an effective approach for analyzing imprecise, confused or fragmented data, and it can be mixed with artificial neural networks or fuzzy set theory to generate new analysis algorithms [38]. In this paper, rough sets are used as a mathematical method for preprocessing human grasp knowledge data, and we propose an algorithm based on rough sets mixed with artificial neural networks to solve the grasp configuration planning problem for the underactuated robot hand.

The grasp configuration planning modeling procedure is shown in Fig. 7. It has three functions: data preprocessing, rough set mixed artificial neural networks, and motor control. In the data preprocessing function, the grasp objects are described by parameters and classified into different types; the description and classification refer to object size, shape and weight. The proposed rough set theory is applied to process the object parameters. First, many grasp samples are obtained. Second, the attribute parameters describing the sample object sizes and shapes are extracted. Third, the extracted attribute parameters are classified into types with similar attributes. Finally, each type of similar objects is matched to a grasp configuration. In this process, the fuzzy clustering method (FCM) is used to classify the continuous attribute data by their features.

In the neural network function, the classified data are trained and grasp configurations are generated by artificial neural networks. First, a rule for grasp configuration decisions is built from the sample objects by considering human experience and knowledge. Then, the grasp configuration decision scheme is generated by off-line artificial neural network training. However, the scheme may contain a mass of sample object data, which makes the neural networks complicated and inefficient. Thus, a simplifying calculation by the rough set analysis method is carried out to obtain a simplified grasp configuration scheme, which is used as the final data for off-line neural network training. In this paper, three neural networks are used to show the training effect: back propagation neural networks (BP), radial basis function neural networks (RBF) and probabilistic neural networks (PNN). The training results of the three neural networks are compared. When a grasp task is considered, the object's attribute parameters referring to size, shape and weight are extracted as the input parameters for the model.
These data are the independent-variable inputs of the on-line neural networks; a grasp configuration is then generated by the neural networks for the target object and used in the real-time control system. In the control function, the three fingers are first driven to the appropriate positions so that the planned grasp configuration is formed; then the motors actuate the finger mechanisms to grasp the task object.



Fig. 7. A flowchart for the proposed procedure for grasp configuration choice.

If the grasp task fails, the system extracts the attribute parameters of the target object again and generates a new grasp configuration decision. The specific computational algorithms are formulated in the next sections.

4.2. Objects description and data preprocessing

A data space containing many samples of grasped objects with attribute parameters is built and defined as the object space U. A data space built in [39] is used for the grasp planning test in this paper. The object space contains 251 grasp samples, and each object sample in U is defined as ui (i = 1,…, 251). These sample objects include most of the objects in human daily life. The objects are described by 5 attribute parameters, which can be obtained by a robot vision recognition system or by data input and which refer to the objects' size, shape, weight, volume, surface of revolution and surrounding space. The attribute space composed of these 5 attribute characters is defined as A, and the 5 attribute parameters are defined as ai (i = 1,…, 5) for the 251 reference samples. Thus, each ui is described by a 5-dimensional parameter vector. Each ui is first approximated as a cuboid, whose shape is characterized by three parameters: length, width and height. The 5 attribute parameters can then be described as follows:

a1: the approximate overall shape of the sample object, synthesized from its three dimensions;
a2: the weight of the sample object relative to the robot hand capability;
a3: the volume of the sample object relative to the robot hand capability;
a4: whether or not the sample object has a surface of revolution suitable for an envelope grasp;
a5: whether or not the sample object has enough surrounding space for grasping.

Attribute parameters a1, a2 and a3 are continuous data, which cannot be processed directly in the rough set mixed neural networks; a data preprocessing step classifies them into discrete types. A fuzzy clustering method (FCM) is used to classify a1 into three types by considering the three dimension variables, and a2 and a3 are likewise classified into three types. The parameters a4 and a5 are already discrete data that indicate the possibility of different grasp configurations. The classified attribute parameters are transformed into digital format and described as:

a1: [0, 1, 2] expresses a [slender, cylindrical or spherical, flat] overall object shape;
a2: [0, 1, 2] expresses a [heavy, medium, light] weight relative to the robot hand capability;
a3: [0, 1, 2] expresses a [large, medium, small] volume relative to the robot hand capability;
a4: [0, 1] expresses a [non-rotated, rotated] object surface, i.e. whether it can be envelope grasped or not;
a5: [0, 1] expresses whether the object [has not, has] enough surrounding space for grasping; for example, an object located in a corner usually has no surrounding space for grasping.
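As a concrete illustration of this encoding, the sketch below maps raw object measurements to the discrete vector A = [a1, a2, a3, a4, a5] of Table 2. The paper classifies a1–a3 with fuzzy c-means clustering; the fixed thresholds used here are only hypothetical placeholders chosen for the example.

```python
def encode_object(length_mm, width_mm, height_mm, weight_g,
                  has_surface_of_revolution, has_free_space):
    """Return the discrete attribute vector [a1, a2, a3, a4, a5] of Table 2.

    The aspect-ratio, weight and size thresholds below are illustrative only;
    in the paper these classes are obtained by fuzzy clustering (FCM).
    """
    dims = sorted([length_mm, width_mm, height_mm], reverse=True)
    # a1: 0 slender, 1 cylindrical/spherical, 2 flat
    if dims[0] > 3 * dims[1]:
        a1 = 0
    elif dims[1] > 3 * dims[2]:
        a1 = 2
    else:
        a1 = 1
    # a2: 0 heavy, 1 medium, 2 light (relative to the hand capability)
    a2 = 0 if weight_g > 800 else (1 if weight_g > 200 else 2)
    # a3: 0 large, 1 medium, 2 small (relative to the 108 mm finger length)
    a3 = 0 if dims[0] > 150 else (1 if dims[0] > 60 else 2)
    a4 = 1 if has_surface_of_revolution else 0
    a5 = 1 if has_free_space else 0
    return [a1, a2, a3, a4, a5]

# e.g. a 65 mm ball of about 58 g with free space around it
print(encode_object(65, 65, 65, 58, True, True))   # -> [1, 2, 1, 1, 1] with these thresholds
```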


Table 2
Objects description with attribution parameters and the meaning of each item data ai.

| ai | Item | Digital format | Parameter description |
| --- | --- | --- | --- |
| a1 | Approximate shape | a1 = 0 / 1 / 2 | Slender / Cylindrical or spherical / Flat |
| a2 | Relative weight | a2 = 0 / 1 / 2 | Heavy / Medium / Light |
| a3 | Relative volume | a3 = 0 / 1 / 2 | Large / Medium / Small |
| a4 | Rotated surface for envelop | a4 = 0 / 1 | Non-rotated / Rotated |
| a5 | Surrounding space for grasp | a5 = 0 / 1 | Has-not / Has |

Table 3
Grasp configuration decision space D and description.

| d | Parameter description |
| --- | --- |
| d1 = 1 | Three finger parallel pinch |
| d2 = 2 | Three fingers parallel envelop |
| d3 = 3 | Three fingers surrounded pinch |
| d4 = 4 | Three fingers surrounded envelop |
| d5 = 5 | Two fingers pinch |
| d6 = 6 | Three fingers parallel pull |

The length of each finger is 108 mm; the palm is 166 mm long and 110 mm wide; the maximum contact force is 10 N. The hand design parameters and grasp capability have been introduced in [33]. The classification of a2 and a3 is relative to the maximum grasp weight and the finger dimensions of the underactuated robot hand. The object description with attribute parameters and the meaning of each item ai are listed in Table 2.

4.3. Grasp configuration rule scheme generation and simplification

The grasp configuration decision space D describes the rule of grasp configuration choice in the grasp planning model. The proposed underactuated robot hand provides 6 grasp configurations, so the decision space contains 6 parameters defined as di (i = 1,…, 6). In order to compute with the rough set mixed neural networks, each grasp configuration is converted into a digital format from 1 to 6, as listed in Table 3.

The object space U is divided into two parts, U1 and U2. U1 contains 200 sample objects, U1 = [u1, …, u200], and is used for off-line neural network training to generate the grasp configuration decision scheme. U2 contains 51 sample objects, U2 = [u201, …, u251], and is used to examine the grasp configuration planning. For each grasped object sample ui, the relationship between the attribute parameters A = [a1, …, a5] (Table 2) and the grasp configuration decision di (Table 3) is built by considering human experience and knowledge.



Table 4
Grasp configuration rule scheme.

| ui | a1 | a2 | a3 | a4 | a5 | di |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0 | 1 | 2 | 1 | 0 | 4 |
| 2 | 0 | 0 | 0 | 0 | 0 | 1 |
| 3 | 0 | 1 | 1 | 0 | 0 | 3 |
| 4 | 1 | 1 | 0 | 0 | 0 | 2 |
| 5 | 1 | 2 | 0 | 1 | 0 | 5 |
| 6 | 0 | 2 | 0 | 1 | 0 | 4 |
| 7 | 2 | 0 | 2 | 1 | 1 | 4 |
| 8 | 1 | 0 | 2 | 0 | 1 | 1 |
| 9 | 0 | 0 | 2 | 0 | 0 | 3 |
| 10 | 0 | 2 | 1 | 0 | 0 | 4 |
| 11 | 0 | 0 | 1 | 0 | 0 | 1 |
| 12 | 2 | 1 | 0 | 0 | 0 | 4 |
| 13 | 1 | 2 | 1 | 0 | 0 | 5 |
| 14 | 0 | 0 | 2 | 0 | 1 | 3 |
| 15 | 1 | 1 | 0 | 1 | 0 | 2 |
| 16 | 1 | 0 | 1 | 0 | 0 | 1 |
| 17 | 0 | 1 | 0 | 0 | 0 | 3 |
| 18 | 0 | 1 | 2 | 0 | 1 | 4 |
| 19 | 0 | 2 | 2 | 0 | 0 | 4 |
| 20 | 1 | 2 | 2 | 1 | 0 | 5 |
| 21 | 2 | 0 | 1 | 0 | 1 | 6 |
| 22 | 1 | 2 | 0 | 1 | 1 | 5 |
| 23 | 0 | 2 | 0 | 0 | 0 | 4 |
| 24 | 2 | 2 | 1 | 0 | 0 | 4 |
| 25 | 0 | 0 | 1 | 0 | 1 | 6 |
| 26 | 0 | 1 | 1 | 1 | 0 | 3 |
| 27 | 1 | 0 | 0 | 0 | 0 | 1 |
| 28 | 0 | 1 | 0 | 0 | 0 | 3 |
| 29 | 1 | 1 | 1 | 0 | 0 | 5 |
| 30 | 2 | 0 | 1 | 0 | 0 | 3 |
| 31 | 2 | 0 | 0 | 0 | 1 | 6 |
| 32 | 1 | 0 | 1 | 0 | 1 | 1 |
| 33 | 1 | 0 | 1 | 0 | 0 | 1 |
| 34 | 2 | 1 | 2 | 1 | 0 | 4 |

Each ui thus contains 5 attribute parameters ai (i = 1,…, 5) and a decision parameter di. The relationship can be described as

f(a_1, \ldots, a_5) = d_i    (1)

The function f in Eq. (1) is the grasp configuration planning algorithm. It is obtained from human experience by using U1 = [u1, …, u200] for neural network training. If too many samples are used as input parameters, the computation of the neural networks takes a long time; a mass of input parameters is therefore not suitable for a real-time control system. In this grasp configuration planning model, samples with the same attribute parameters A are identified and considered as one type of target object, which simplifies the information in U1 = [u1, …, u200]. The simplified grasp configuration rule scheme is composed of 34 types of samples, defined as [u1, …, u34]; each ui relates to a grasp configuration decision di. The 34 types of samples cover most of the objects in human daily life. The simplified grasp configuration parameters and decision scheme are listed in Table 4.

With the grasp configuration decision scheme of Table 4, the basic grasp configuration planning rule between the attribute parameters ai (i = 1,…, 5) and the decision di can be established. As an example, consider the grasped object type u15. When the grasped object's characteristics are obtained as type u15, the attribute parameters are A15 = [1, 1, 0, 1, 0]. According to the object description in Table 2, [1, 1, 0, 1, 0] means that u15 is an object of cylindrical or spherical shape, medium weight and large volume, which has a surface of revolution but not enough surrounding space. According to human experience and knowledge, this type of object is grasped with the three fingers parallel envelop configuration shown in Fig. 5b); thus, the grasp configuration planning decision is d2 = 2.

In addition, Table 4 can be simplified further by deleting redundant attribute parameters. An algorithm is elaborated for searching and removing redundant attribute parameters ai, which can be expressed as follows: for every ui, if

f(a_1, a_2, \ldots, a_N) = d_i    (2)

and

f(a_1, a_2, \ldots, a_{i-1}, a_{i+1}, \ldots, a_N) = d_i    (3)

then

f(A) = f(\complement_A\, a_i)    (4)

in which f is the grasp configuration planning function of Eq. (1). If Eq. (4) holds, ai is a redundant attribute parameter and can be omitted. From Eqs. (2)–(4) it follows that the attribute parameter a4 is redundant information in Table 4. The simplified grasp configuration scheme is listed in Table 5; the simplified attribute space is A = [a1, a2, a3, a5], with the redundant attribute a4 omitted. Compared with Table 4, the simplified grasp configuration scheme in Table 5 has fewer attribute parameters, while the sample decisions are unchanged. Some of the redundant samples, such as [u3, u26], [u4, u15], [u6, u23], [u16, u32, u33] and [u17, u28] in Table 4, are combined into the same type in Table 5.
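The redundancy test of Eqs. (2)–(4) amounts to checking that no two rule samples which agree on all remaining attributes lead to different decisions. A minimal sketch of this check (our own illustrative code, not the authors' implementation) is given below:

```python
def is_redundant(samples, attr_index):
    """Check Eqs. (2)-(4): attribute `attr_index` is redundant if dropping it
    never makes two samples with identical remaining attributes disagree on
    the grasp decision."""
    seen = {}
    for attrs, decision in samples:
        key = tuple(a for i, a in enumerate(attrs) if i != attr_index)
        if key in seen and seen[key] != decision:
            return False  # conflict: this attribute carries decision information
        seen[key] = decision
    return True

# Three rows of Table 4 (u2, u4, u15): removing a4 (index 3) keeps them consistent.
rows = [([0, 0, 0, 0, 0], 1), ([1, 1, 0, 0, 0], 2), ([1, 1, 0, 1, 0], 2)]
print(is_redundant(rows, 3))   # -> True on this subset
```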


Table 5
Simplified grasp configuration rule scheme from Table 4.

| ui | a1 | a2 | a3 | a5 | di |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 | 1 | 2 | 0 | 4 |
| 2 | 0 | 0 | 0 | 0 | 1 |
| 3, 26 | 0 | 1 | 1 | 0 | 3 |
| 4, 15 | 1 | 1 | 0 | 0 | 2 |
| 5 | 1 | 2 | 0 | 0 | 5 |
| 6, 23 | 0 | 2 | 0 | 0 | 4 |
| 7 | 2 | 0 | 2 | 1 | 4 |
| 8 | 1 | 0 | 2 | 1 | 1 |
| 9 | 0 | 0 | 2 | 0 | 3 |
| 10 | 0 | 2 | 1 | 0 | 4 |
| 11 | 0 | 0 | 1 | 0 | 1 |
| 12 | 2 | 1 | 0 | 0 | 4 |
| 13 | 1 | 2 | 1 | 0 | 5 |
| 14 | 0 | 0 | 2 | 1 | 3 |
| 16, 32, 33 | 1 | 0 | 1 | 1 | 1 |
| 17, 28 | 0 | 1 | 0 | 0 | 3 |
| 18 | 0 | 1 | 2 | 1 | 4 |
| 19 | 0 | 2 | 2 | 0 | 4 |
| 20 | 1 | 2 | 2 | 0 | 5 |
| 21 | 2 | 0 | 1 | 1 | 6 |
| 22 | 1 | 2 | 0 | 1 | 5 |
| 24 | 2 | 2 | 1 | 0 | 4 |
| 25 | 0 | 0 | 1 | 1 | 6 |
| 27 | 1 | 0 | 0 | 0 | 1 |
| 29 | 1 | 1 | 1 | 0 | 5 |
| 30 | 2 | 0 | 1 | 0 | 3 |
| 31 | 2 | 0 | 0 | 1 | 6 |
| 34 | 2 | 1 | 2 | 0 | 4 |

Fig. 8. The structures of three types of neural networks: a) BP neural network; b) RBF neural network; c) Probabilistic neural network (PNN).

It should be noted that redundant attribute data does not always exist in a grasp configuration rule scheme; for example, if the object samples are described by more attribute parameters ai (i = 1,…, n), there may not be any redundant attribute data in the scheme.

4.4. Neural network training and model examination

The data in the obtained simplified grasp planning scheme are used for artificial neural network training. Three types of neural networks are considered: back propagation neural networks (BP), radial basis function neural networks (RBF) and probabilistic neural networks (PNN). The BP neural network is a 3-layer feed-forward network; its hidden layer is composed of 7 neurons with a hyperbolic tangent activation function, and a conjugate gradient algorithm is applied for network training [40]. The RBF neural network uses Gaussian functions, and neurons are added automatically by considering the mean-squared error of the network output; training finishes when the output error meets the requirement [41]. The probabilistic neural network has a fast calculation speed and better generalization performance. Neural networks are an appropriate method for taxonomy problems, as reported in [42]; thus, they can be used as a method for grasp planning modeling.

The structures of the BP, RBF and PNN neural networks are shown in Fig. 8. The BP and RBF neural networks have three layers: an input layer, a hidden layer and an output layer. The probabilistic neural network (PNN) has four layers: an input layer, a pattern layer, a summation layer and an output layer. In Fig. 8, ai are the input parameters in the set A, which refer to the grasped object's attribute parameters, and di is the output of the neural network, which refers to the grasp configuration decision. The basic formulations of the three neural networks can be expressed as

BP:  d_i = \sum_{m=1}^{7} k_m y_m, \qquad y_m = f_m\!\left(\sum_{i=1}^{4} k_{i,m}\, a_i\right)    (5)


RBF:  d_i = \sum_{m=1}^{M} k_m \varphi_m(A), \qquad \varphi_m(A) = \exp\!\left(-\|A-\mu_k\|/\sigma_k^2\right)    (6)

PNN:  d_i = \operatorname{class\ of}\ \max(g_1,\ldots,g_6), \qquad g_i(x) = \frac{1}{n}\sum_{j=1}^{N} h_{ij}, \qquad h_{ij} = \exp\!\left(\frac{A^{T} w_j - 1}{\sigma^2}\right)    (7)

in which km is the weighting coefficient of the neural net connection; ym and ϕm are the outputs of the middle layer for the BP and RBF neural networks; fm is the transmission function of the middle layer in the BP neural network; μk and σk are the central value and variance of the neural units in the middle layer of the RBF neural network; hij is the output of the pattern layer of the PNN; and gi is the output of the summation layer of the PNN.

The grasp configuration planning model can be trained by the three types of neural networks in Fig. 8. The input parameters for neural network training are A = [a1, a2, a3, a5] and di; the grasp configuration rule data have been listed in Tables 4 and 5. The neural network training is programmed with the Neural Network Toolbox of the commercial software package MATLAB. The grasp configuration planning function f in Eq. (1) is trained by the BP, RBF and PNN neural networks. The training of the BP neural network takes 0.82 s, the longest training time of the three; the maximum number of allowed iterations of the BP neural network is set to 3000 and the minimum training target error is set to 0.001. The three obtained grasp configuration planning functions are named fBP, fRBF and fPNN, respectively.

In order to examine the proposed grasp planning algorithm, the attribute parameters of the examination object space U2 are fed into the trained neural networks. The results computed by fBP, fRBF and fPNN are the grasp configuration planning decisions di for the samples in U2. The grasp configuration planning decisions of the BP, RBF and PNN neural networks are compared in Table 6, where the obtained neural networks are checked against the examination object space U2, which contains 51 grasped objects in total. As an example, consider object u251 in Table 6. When the grasped object's characteristics are obtained, the attribute parameters are A251 = [1, 1, 0, x, 0]. According to the object description in Table 2, this means that object u251 is of cylindrical or spherical shape, medium weight and large volume, and does not have enough surrounding space; a4 = x means that it is unknown whether the object has a surface of revolution. The grasp configuration planning results of the BP, RBF and PNN neural networks are d2 = 2; according to Table 3, the object should be grasped with the three fingers parallel envelop configuration. This result is the same as the human decision.

Compared with the human grasp decisions di, the trained BP, RBF and PNN networks produce 8, 6 and 4 erroneous results, respectively (marked with an asterisk in Table 6). The accuracy rates of the three types of neural networks are all higher than 84%; the model based on the probabilistic neural network (PNN) achieves the highest accuracy rate of 92%. Thus, it is concluded that the rough set mixed artificial neural network algorithm can be used for grasp configuration planning modeling. It should be pointed out that the proposed rough set method simplifies the grasp configuration decision scheme; this does not mean that the omitted attribute parameter is ignored in the modeling process. The omitted attribute parameter simply has no effect on the model, because it has been confirmed as redundant information in the scheme data.
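The training itself was done with the MATLAB Neural Network Toolbox. For readers who want to reproduce the idea, the sketch below trains a small feed-forward classifier on the simplified rule scheme of Table 5, using scikit-learn's MLPClassifier as an illustrative stand-in for the BP network (the library choice and hyper-parameters are our assumptions, not the authors' setup):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Simplified rule scheme of Table 5: inputs [a1, a2, a3, a5], output decision d.
X = np.array([[0,1,2,0],[0,0,0,0],[0,1,1,0],[1,1,0,0],[1,2,0,0],[0,2,0,0],
              [2,0,2,1],[1,0,2,1],[0,0,2,0],[0,2,1,0],[0,0,1,0],[2,1,0,0],
              [1,2,1,0],[0,0,2,1],[1,0,1,1],[0,1,0,0],[0,1,2,1],[0,2,2,0],
              [1,2,2,0],[2,0,1,1],[1,2,0,1],[2,2,1,0],[0,0,1,1],[1,0,0,0],
              [1,1,1,0],[2,0,1,0],[2,0,0,1],[2,1,2,0]])
y = np.array([4,1,3,2,5,4,4,1,3,4,1,4,5,3,1,3,
              4,4,5,6,5,4,6,1,5,3,6,4])

# 3-layer feed-forward network with a 7-neuron tanh hidden layer,
# roughly mirroring the BP structure described above.
net = MLPClassifier(hidden_layer_sizes=(7,), activation='tanh',
                    max_iter=3000, tol=1e-3, random_state=0).fit(X, y)

# Query the planner for a new object, e.g. A = [1, 1, 0, 0]
# (cylindrical/spherical, medium weight, large volume, no free-space issue).
print(net.predict([[1, 1, 0, 0]]))   # this pattern is trained as decision 2
```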
5. Control system and grasp experiment

The underactuated three-fingered robot hand is easy to operate and control thanks to its self-adaptive enveloping grasp of object shapes; thus, the control algorithm and strategy for the three-fingered robot hand prototype are not complicated. The control system is designed based on displacement and force feedback. In order to grasp and release objects with a suitable force, force sensors are applied on the finger surfaces, and the maximum value of the contact force is set in the control system to prevent an unreasonable grasp force from damaging the objects. An angular displacement sensor is assembled at each finger root joint to measure the rotation angle of each finger.

5.1. The control system for the underactuated robot hand

The design scheme of the robot hand control system is shown in Fig. 9. The DSP controller, based on TI's motor control chip TMS320LF2407A, mainly consists of a power circuit, an encoder-signal processing circuit, a motor drive circuit, a force sensor signal conditioning circuit and an RS232 interface circuit.


Table 6
Results of grasp configuration planning based on rough set mixed neural networks. (Values marked with * are failed computation results.)

| Object (U2) | a1 | a2 | a3 | a5 | Human decision di | BP | RBF | PNN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 201 | 2 | 0 | 2 | 1 | 4 | 3.9982 | 4 | 4 |
| 202 | 2 | 0 | 1 | 0 | 3 | 3.0025 | 3 | 3 |
| 203 | 0 | 1 | 1 | 0 | 3 | 2.9416 | 3 | 3 |
| 204 | 0 | 1 | 1 | 0 | 4 | 3.9416 | 4 | 4 |
| 205 | 2 | 1 | 0 | 0 | 4 | 3.995 | 4 | 4 |
| 206 | 1 | 0 | 2 | 1 | 1 | 1.0096 | 1 | 1 |
| 207 | 1 | 2 | 0 | 0 | 5 | 5.285 | 5 | 5 |
| 208 | 2 | 1 | 0 | 1 | 4 | 1.3385* | 2.8464* | 4 |
| 209 | 2 | 2 | 0 | 1 | 5 | 2.3588* | 5.0768 | 5 |
| 210 | 1 | 2 | 0 | 1 | 5 | 5.0096 | 5 | 5 |
| 211 | 1 | 1 | 1 | 0 | 5 | 4.3467* | 5 | 5 |
| 212 | 2 | 0 | 1 | 1 | 6 | 6.0036 | 6 | 6 |
| 213 | 1 | 0 | 1 | 1 | 1 | 0.9973 | 1 | 1 |
| 214 | 2 | 1 | 1 | 0 | 5 | 5.2134 | 4.7967 | 4* |
| 215 | 0 | 0 | 0 | 1 | 6 | 6.3847 | 5.9909 | 6 |
| 216 | 2 | 1 | 2 | 0 | 4 | 4.0014 | 4 | 4 |
| 217 | 0 | 0 | 1 | 0 | 1 | 1.3245 | 1 | 1 |
| 218 | 0 | 0 | 0 | 0 | 1 | 1.0134 | 1 | 1 |
| 219 | 1 | 0 | 0 | 0 | 1 | 1.0069 | 1 | 1 |
| 220 | 1 | 2 | 0 | 1 | 5 | 5.0096 | 5 | 5 |
| 221 | 0 | 2 | 1 | 0 | 4 | 4.0327 | 4 | 4 |
| 222 | 0 | 0 | 1 | 1 | 6 | 5.9951 | 6 | 6 |
| 223 | 2 | 2 | 1 | 0 | 4 | 4.2964 | 4 | 4 |
| 224 | 1 | 1 | 0 | 0 | 2 | 2.0021 | 2 | 2 |
| 225 | 2 | 0 | 0 | 1 | 6 | 5.9926 | 6 | 6 |
| 226 | 1 | 1 | 1 | 1 | 5 | 2.7281* | 3.192* | 5 |
| 227 | 0 | 0 | 1 | 0 | 1 | 1.3245 | 1 | 1 |
| 228 | 0 | 0 | 2 | 0 | 3 | 3.0316 | 3 | 3 |
| 229 | 0 | 1 | 2 | 0 | 4 | 3.8872 | 4 | 4 |
| 230 | 2 | 0 | 0 | 1 | 6 | 5.9926 | 6 | 6 |
| 231 | 2 | 0 | 1 | 1 | 6 | 6.0036 | 6 | 6 |
| 232 | 0 | 0 | 2 | 1 | 3 | 3.0044 | 3 | 3 |
| 233 | 0 | 1 | 2 | 1 | 4 | 3.8875 | 4 | 4 |
| 234 | 1 | 1 | 0 | 1 | 2 | 0.9496* | 2.5323* | 5* |
| 235 | 0 | 2 | 0 | 0 | 4 | 4.0307 | 4 | 4 |
| 236 | 1 | 0 | 0 | 0 | 1 | 1.0069 | 1 | 1 |
| 237 | 0 | 1 | 1 | 1 | 4 | 3.8344 | 3.0619* | 4 |
| 238 | 1 | 2 | 2 | 0 | 5 | 4.9934 | 5 | 5 |
| 239 | 2 | 0 | 1 | 0 | 3 | 3.0025 | 3 | 3 |
| 240 | 0 | 0 | 0 | 0 | 1 | 1.0134 | 1 | 1 |
| 241 | 1 | 2 | 1 | 0 | 5 | 5.3594 | 5 | 5 |
| 242 | 1 | 0 | 1 | 1 | 1 | 0.9973 | 1 | 1 |
| 243 | 2 | 1 | 0 | 0 | 4 | 3.995 | 4 | 4 |
| 244 | 0 | 1 | 0 | 0 | 3 | 2.6547 | 3 | 3 |
| 245 | 0 | 1 | 0 | 1 | 6 | 4.1483* | 2.931* | 3* |
| 246 | 0 | 2 | 2 | 0 | 4 | 3.8978 | 4 | 4 |
| 247 | 2 | 2 | 0 | 0 | 4 | 2.7543* | 4.6469* | 4 |
| 248 | 0 | 2 | 0 | 1 | 4 | 5.2734* | 3.8458 | 5* |
| 249 | 0 | 2 | 1 | 1 | 4 | 3.9017 | 3.681 | 4 |
| 250 | 0 | 0 | 1 | 0 | 1 | 1.3245 | 1 | 1 |
| 251 | 1 | 1 | 0 | 0 | 2 | 2.0021 | 2 | 2 |

The control system of the three-fingered underactuated robot hand is composed of three parts: the upper computer, the DSP control board and the robot hand, as shown in Fig. 10. The upper computer is an industrial PC; it communicates with the DSP control board through an RS232 serial data interface. The control architecture is composed of two parts: a high-level control panel and a low-level DSP controller. The high-level control panel runs on the PC and interacts with the hand prototype; its interface, shown in Fig. 11, is programmed with Microsoft Visual C++ .NET 2003. The panel has two functions, force control and position control: the contact force and rotation angle used for force control or position control can be set in the panel, and the data sent back from the force sensor and angle sensor are shown in the right part of the panel. The low-level DSP controller is based on TI's motor control chip TMS320LF2407A and is composed of two boards: the master board contains two motor driver circuits and controls the motion of two fingers, and the slave board controls the motion of one finger and the palm. The hand grasping is controlled by the motion of four motors via PWM pulses.



Fig. 9. The design scheme of robot hand control system.

Fig. 10. The hardware of the control system.

5.2. Control methods based on position feedback and force feedback

In order to improve the stability and reliability of the controller, the software is designed based on μC/OS-II. The motion control of a finger is divided into two parts: position-based control and force-based control. The position-based control uses a PID algorithm, and the force-based control uses a PD algorithm.

5.2.1. Control algorithm based on position feedback

The position-based controller is composed of a speed closed loop and a position closed loop. Each loop contains a PID correction module, chained as shown in Fig. 12. The discrete form of the PID control law is given by

u(n) = K_p e(n) + K_i \sum_{k=1}^{n} e(k) + K_d\,[e(n) - e(n-1)]    (8)

in which Kp is the proportional coefficient (unitless); Ki is the integral coefficient (s); Kd is the differential coefficient (Hz); u(n) is the output of the PID controller after the nth sampling (Hz); and e(n) is the sampled input error at the nth sampling instant (Hz). The error e(n) represents the difference between the desired value and the feedback.

5.2.2. Control algorithm based on force feedback

The statics of one-finger grasping has been analyzed in [33]. The relationship between the finger contact forces and the motor input torque can be obtained from the finger grasping statics model; however, that formulation is not convenient from a practical point of view. In order to simplify the simulation model, the contact force is obtained as

F_i = \begin{cases} 0, & \tau_M \le \tau_{f\,\mathrm{max}} \\ k_i \tau_M, & \tau_M > \tau_{f\,\mathrm{max}} \end{cases}    (9)



Fig. 11. Control panel interface of the controller.

Fig. 12. System block diagram of position-based controller.

Fig. 13. System block diagram of force-based controller.

where ki is a coefficient of proportionality that can be estimated by experiments. The contact force Fi in Eq. (9) is calibrated from an engineering point of view in the simulation environment; in engineering practice, the coefficient value ki is usually tested many times and selected through grasp experiments.
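A direct transcription of the simplified contact force model of Eq. (9), with ki and the torque threshold as experimentally identified inputs, could look like the following sketch:

```python
def contact_force(tau_m, k_i, tau_f_max):
    """Eq. (9): the phalanx contact force stays zero until the motor torque
    exceeds the threshold tau_f_max, then grows proportionally through the
    experimentally estimated coefficient k_i."""
    return 0.0 if tau_m <= tau_f_max else k_i * tau_m
```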

The force-based controller is designed based on a PD algorithm; Fig. 13 shows the system block diagram. The discrete form of the PD control law is given by

u^*(n) = K_p^* e^*(n) + K_d^*\,[e^*(n) - e^*(n-1)]    (10)

in which K*p is the proportional coefficient (unitless); K*d is the differential coefficient (Hz); u*(n) is the output of the PD controller after the nth sampling (Hz); and e*(n) is the sampled input error at the nth sampling instant (Hz). It should be clarified that appropriate values of the PID and PD control parameters are difficult to set directly. They have to be tested and searched repeatedly, using both the grasp statics model and experiments; the control parameters are selected from an engineering point of view and fixed for the prototype through an extensive trial-and-error process.
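For illustration, the discrete control laws of Eqs. (8) and (10) can be written as two small update routines; the gain values shown are placeholders only, since, as noted above, the actual parameters were fixed by trial and error on the prototype:

```python
class PositionPID:
    """Discrete PID of Eq. (8): u(n) = Kp*e(n) + Ki*sum(e) + Kd*[e(n) - e(n-1)]."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.sum_e = 0.0
        self.prev_e = 0.0

    def update(self, error):
        self.sum_e += error
        u = self.kp * error + self.ki * self.sum_e + self.kd * (error - self.prev_e)
        self.prev_e = error
        return u

class ForcePD:
    """Discrete PD of Eq. (10): u*(n) = Kp*e*(n) + Kd*[e*(n) - e*(n-1)]."""
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_e = 0.0

    def update(self, error):
        u = self.kp * error + self.kd * (error - self.prev_e)
        self.prev_e = error
        return u

# First control output for an initial 10 deg position error (placeholder gains).
pid = PositionPID(kp=2.0, ki=0.1, kd=0.05)
print(pid.update(10.0))   # -> 21.5
```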



Fig. 14. Simulation results of position control: a) Target position θd = 1◦ ; b) Target position θd = 10◦ ; c) Grasp at θd = 10◦ and then release to θd = 0◦ .

Fig. 15. Simulation results of force-based control: a) Without disturbance; b) With certain disturbance; c) With random disturbance.

5.3. Grasp simulation and experiment for control

5.3.1. Grasp simulation with the designed control system

In order to examine the designed control system, two simulations are carried out in the Matlab/Simulink environment; the results are shown in Figs. 14 and 15.

In the first simulation, the control algorithm based on position feedback is tested. The target positions are set to 1° and 10°, and the results are shown in Fig. 14. As shown in Fig. 14a) and b), the controlled finger responds quickly to the target angles of 1° and 10°, respectively; the overshoot and the relative steady-state error of the angular position are less than 1% during the control process. Fig. 14c) shows a simulation in which a finger grasps an object at θd = 10° and releases back to θd = 0°; the actuator of the first finger joint performs the desired grasp–release process with the designed control system. Thus, it can be concluded from the simulation results in Fig. 14 that the designed position control method provides feasible operation and response with the desired dynamic performance.

In the second simulation, the control algorithm based on force feedback is tested. According to the analysis in Section 5.2.2, the parameter ki in Eq. (9) is set to 100,000; this value was tested many times and finally fixed from an engineering point of view. The desired maximum grasp force is 3 N; the simulation results obtained in the Matlab/Simulink environment are shown in Fig. 15. In Fig. 15a), the contact force increases quickly as the finger touches the object and the grasp becomes stable at the 5th second. In Fig. 15b), an unexpected disturbance force suddenly acts on the second finger phalanx at the 5th second and the contact grasp force decreases rapidly; the controller then adjusts its output and the system returns to a stable state at the 8th second. In Fig. 15c), a random disturbance of 1% noise is added on the second finger phalanx; the controller resists the disturbance and maintains a relative steady state, with a contact grasp force error of less than 1%. Thus, it can be concluded from the simulation results in Fig. 15 that the designed force control method can suppress unexpected disturbances effectively, with no overshoot and a good dynamic response in simulation.



Fig. 16. Sensor layout drawing for each finger mechanism.

Table 7
Specifications of the Faulhaber motors and encoders used in the experiment.

| Item | Fingers drive | Fingers position change |
| --- | --- | --- |
| Motor type | 1319C012SR | 1319C012SR |
| Voltage | 12 V | 12 V |
| Gearhead type | 13A 1024:1 | 13A 201:1 |
| Reduction ratio | 1024:1 | 201:1 |
| Continuous/Intermittent torque | 0.22/0.18 Nm | 0.18/0.15 Nm |
| Maximum speed | 12.5 r/min | 63 r/min |
| Encoder type | IE2-400 | IE2-400 |
| Lines per revolution | 400 | 400 |
| Frequency range | ≤160 kHz | ≤160 kHz |
| Encoder channels | 2 | 2 |

Fig. 17. Experiment results for hand grasping: a) grasp experiment based on position feedback; b) grasp experiment based on force feedback.

5.3.2. Grasp experiment with the designed control system

In order to examine the designed control system and the proposed grasp configuration planning method, grasp experiments with the robot hand are carried out. The prototype of the three-fingered robot hand has been designed and manufactured based on the underactuated mechanism reported in [33]. The sensor layout for each finger mechanism is illustrated in Fig. 16: a pressure sensor is assembled at the second phalanx of each finger, and an angular displacement sensor is assembled on the rotation shaft of each finger. The motors and encoders of the system are Faulhaber products [43]; their specifications are listed in Table 7. The results of the grasp experiments are shown in Figs. 17 and 18.

The aim of the first experiment is to show the characteristics of the control system for the multi-fingered robot hand grasp tasks. Fig. 17a) shows a grasp experiment based on the position feedback control method. In this experiment, the finger root joint is driven to rotate 10 deg to approach the object in a very short time (0.5 s); the finger is then actuated to rotate 1 deg to contact the object at 1.5 s, and the robot hand finally releases the object at 3.5 s. It can be seen from the curve in Fig. 17a) that the finger actuated by the position feedback control method moves accurately to the target position with a short response time, and the process of grasping and releasing the object is stable.

Fig. 17b) illustrates the force feedback control performance. When the finger joint rotates to 19 deg the finger touches the object; the contact force is 1 N from 5.0 to 6.5 s. From 8.8 to 10 s the finger joint continues to rotate to 21 deg and the contact force reaches a new stable state of 2.5 N. When the finger rotates to 23 deg, the contact force reaches the 3.0 N threshold of the control system and does not increase after 13.3 s. A random disturbance force is added on the grasped object from 14.0 s; the contact force of the robot finger varies within a reasonable range from 3.0 to 3.5 N, while the finger joint angle does not change. The noise before 14.0 s comes from the motion clearance of the finger mechanism; the effect of position errors and tolerances has been discussed in [34]. In fact, the noise in Fig. 17b) did not change the contact points or the object position, and the grasped object remained stable in the experiment.



Fig. 18. Grasp examination experiment with different objects: a) Three fingers parallel pull a glasses case; b) Three fingers parallel envelop a water bottle; c) Three fingers surrounded envelop a tennis ball; d) Three finger parallel pinch a book; e) Two fingers pinch a candy box (with palm); f) Three fingers surrounded envelop a pear; g) Two fingers pinch a small orange.

Table 8
Grasped objects information in the grasp configuration planning experiment.

| No. | Grasped object | a1 | a2 | a3 | a4 | a5 |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Glasses case | 0 | 1 | 1 | 1 | 1 |
| 2 | Water bottle | 1 | 1 | 0 | 1 | 1 |
| 3 | Tennis ball | 1 | 1 | 2 | 1 | 1 |
| 4 | Book | 2 | 1 | 0 | 0 | 1 |
| 5 | Candy box | 2 | 2 | 0 | 0 | 1 |
| 6 | Pear | 1 | 0 | 0 | 1 | 1 |
| 7 | Small orange | 1 | 2 | 2 | 1 | 0 |

is variable in a reasonable range from 3.0 to 3.5 N, while the finger joint angle is not changed. The source of the noise before 14.0 s comes from the motion clearance of the finger mechanism. The effect of position error and tolerances has been discussed in [34]. In fact, the noise in Fig. 18b) did not change the contact points and objects position. The grasped object is stable in the experiment. 5.4. Grasp experiment with the proposed configuration planning algorithm Another grasping experiment is carried out to test the proposed grasp configuration planning approach. Seven task objects are selected to generate the grasp configurations decision by the proposed algorithm. The objects are Glasses case, Water bottle, Tennis, Book, Candy box, Pear and Small orange. The attribute parameters of each task object have been obtained and listed in Table 8. These attribute parameters can be input to the well-trained neural networks in Section 4. The grasp configuration decision is generated as list in Table 9. According the generated grasp configuration decision in Table 9, the control system will adjust the three underactuated robot fingers to the prepared grasp position and actuate the fingers to perform suitable grasp configurations. The process is programed for the grasp configuration planning and actuating as the flowchart in Fig. 7. The grasp configuration results for each task objects are shown in Fig. 18. It can be seem that the task objects can be grasped by reasonable configurations.


Table 9
Grasp configuration planning decisions generated by the proposed approach.

| No. | Grasped object | Decision di | Configuration description |
| --- | --- | --- | --- |
| 1 | Glasses case | 6 | Three fingers parallel pull |
| 2 | Water bottle | 2 | Three fingers parallel envelop |
| 3 | Tennis ball | 4 | Three fingers surrounded envelop |
| 4 | Book | 1 | Three finger parallel pinch |
| 5 | Candy box | 5 | Two fingers pinch (with palm) |
| 6 | Pear | 4 | Three fingers surrounded envelop |
| 7 | Small orange | 5 | Two fingers pinch |

The underactuated multi-fingered robot hand shows a preferable passive-compliance enveloping grasp behavior in this experiment. It is possible to use only one motor in each finger and still realize suitable grasp configurations. With simple and effective grasp planning training, the solution can satisfy most grasping tasks in daily life. However, it should be emphasized that some tasks cannot be performed by this underactuated hand, because its operation capability is limited compared with a dexterous hand.

6. Conclusions

In this paper, a grasp configuration planning method is presented for a three-fingered robot hand with underactuated mechanisms, which has been designed and built at Yanshan University in collaboration with LARM in Cassino. The proposed robot hand has the feature of self-adaptive enveloping grasp and can be used in uncertain tasks to grasp unknown objects. An efficient and practical grasp control method is presented to deal with most objects in daily life. The grasp planning algorithm mimics human decisions via mixed neural networks; three types of neural networks have been tested in the MATLAB environment to identify the best performing one. Grasp simulations and experiments are carried out to test the performance of the proposed control methods. The grasp examination experiment indicates that the three-fingered underactuated robot hand can realize suitable grasp configurations with the presented grasp configuration planning algorithm. The research and experiments do not integrate complex sensors and prove to be simple and practical.

The scientific contribution of the paper is a planning method for grasp configurations based on artificial intelligence aspects. The feasibility of the proposed planning procedure is also discussed by reporting successful results of experimental tests. The feasible and practical research method contributes to the development of a low-cost and easy-operation robotic hand.

Acknowledgment

The authors would like to thank the China Postdoctoral Science Foundation Project, grant no. 2017M611184, for supporting this research work.

References

[1] G. Carbone, Grasping in Robotics, Springer, London, 2013.
[2] V.D. Nguyen, Constructing force-closure grasps, Int. J. Robot. Res. 7 (3) (1988) 3–6.
[3] B. Mishra, J.T. Schwartz, M. Sharir, On the existence and synthesis of multi finger positive grips, Algorithmica 12 (4) (1987) 541–558.
[4] J. Ponce, S. Sullivan, On computing four finger equilibrium and force close grasps of polyhedral objects, Int. J. Robot. Res. 16 (1) (1997) 11–35.
[5] Y. Liu, D. Ding, S. Wang, Towards construction of 3D frictional form-closure grasps: a formulation, in: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 1, Detroit, 1999, pp. 279–284.
[6] Y. Zheng, H. Qian, Simplification of the ray-shooting based algorithm for 3-D force-closure test, IEEE Trans. Robot. 21 (3) (2005) 470–473.
[7] X. Markenscoff, C.H. Papadimitriou, Optimum grip of a polygon, Int. J. Robot. Res. 8 (2) (1989) 17–29.
[8] B. Mirtich, J. Canny, Easily computable optimum grasps in 2-D and 3-D, in: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), San Diego, 1994, pp. 739–747.
[9] W.D. Ferrari, J. Canny, Planning optimal grasps, in: Proceedings of the IEEE International Conference on Robotics and Automation, 1992, pp. 2290–2295.
[10] Q. Lin, J.W. Burdick, E. Rimon, A stiffness-based quality measure for compliant grasps and fixtures, IEEE Trans. Robot. Autom. 16 (6) (2000) 675–688.
[11] Y. Zheng, H. Qian, Dynamic force distribution in multifingered grasping by decomposition and positive combination, IEEE Trans. Robot. 21 (4) (2005) 718–726.
[12] M. Ciocarlie, C. Goldfeder, P. Allen, Dimensionality reduction for hand-independent dexterous robotic grasping, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 3270–3275.
[13] K. Hsiao, M. Ciocarlie, P. Brook, Bayesian grasp planning, in: Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, May 2011.
[14] M.R. Dogar, K. Hsaio, M. Ciocarlie, S.S. Srinivasa, Physics-based grasp planning through clutter, in: Robotics: Science and Systems, MIT Press, 2012, pp. 504–511.
[15] R. Palm, B. Iliev, Segmentation and recognition of human grasps for programming-by-demonstration using time-clustering and fuzzy modeling, in: Proceedings of the IEEE International Conference on Fuzzy Systems, IEEE, 2007, pp. 1–6.
[16] T. Baier-Lowenstein, J. Zhang, Learning to grasp everyday objects using reinforcement-learning with automatic value cut-off, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2007, pp. 1551–1556.
[17] N. Curtis, J. Xiao, Efficient and effective grasping of novel objects through learning and adapting a knowledge base, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2008, pp. 2252–2257.
[18] F. Stulp, E. Theodorou, J. Buchli, et al., Learning to grasp under uncertainty, in: Proceedings of the IEEE International Conference on Robotics and Automation, IEEE, 2011, pp. 5703–5708.
[19] A. Herzog, P. Pastor, M. Kalakrishnan, et al., Template-based learning of grasp selection, in: Proceedings of the IEEE International Conference on Robotics and Automation, IEEE, 2012, pp. 2379–2384.
[20] A. Gupta, C. Eppner, S. Levine, et al., Learning dexterous manipulation for a soft robotic hand from human demonstrations, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2016, pp. 3786–3793.
[21] H.Y. Wang, W.K. Ling, Robotic grasp detection using deep learning and geometry model of soft hand, in: Proceedings of the IEEE International Conference on Consumer Electronics-China, IEEE, 2017, pp. 1–6.
[22] E. Hyttinen, D. Kragic, R. Detry, Learning the tactile signatures of prototypical object parts for robust part-based grasping of novel objects, in: Proceedings of the IEEE International Conference on Robotics and Automation, IEEE, 2015, pp. 4927–4932.
[23] C. Liu, W. Li, F. Sun, et al., Grasp planning by human experience on a variety of objects with complex geometry, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2015, pp. 511–517.
[24] P.C. Huang, J. Lehman, A.K. Mok, et al., Grasping novel objects with a dexterous robotic hand through neuroevolution, in: Proceedings of the IEEE Symposium Series on Computational Intelligence, 2014, pp. 1–8.
[25] D. Song, C.H. Ek, H. Kai, et al., Task-based robot grasp planning using probabilistic inference, IEEE Trans. Robot. 31 (3) (2017) 546–561.
[26] Y.D. Shin, G.R. Jang, J.H. Park, et al., Integration of recognition and planning for robot hand grasping, in: Proceedings of the 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2013, pp. 171–174.
[27] A. Saran, D. Teney, K.M. Kitani, Hand parsing for fine-grained recognition of human grasps in monocular images, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2015, pp. 5052–5058.
[28] L. Birglen, T. Laliberté, C.M. Gosselin, Underactuated Robotic Hands, Springer, Berlin Heidelberg, 2008.
[29] M. Ciocarlie, F.M. Hicks, R. Holmberg, J. Hawke, M. Schlicht, J. Gee, S. Stanford, R. Bahadur, The Velo gripper: a versatile single-actuator design for enveloping, parallel and fingertip grasps, Int. J. Robot. Res. 33 (5) (2014) 753–767.
[30] M. Ciocarlie, P. Allen, A constrained optimization framework for compliant underactuated grasping, Mech. Sci. 2 (1) (2011) 17–26.
[31] G. Kragten, F.V.D. Helm, J. Herder, Underactuated robotic hands for grasping in warehouses, in: Automation in Warehouse Development, Springer, London, 2012, pp. 117–131.
[32] M. Ceccarelli, C. Tavolieri, Z. Lu, Design considerations for underactuated grasp with a one D.O.F. anthropomorphic finger mechanism, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, 2006, pp. 1611–1616.
[33] S. Yao, M. Ceccarelli, Q. Zhan, G. Carbone, Z. Lu, Design considerations for underactuated finger mechanism, Chin. J. Mech. Eng. 22 (4) (2009) 475–488.
[34] S. Yao, M. Ceccarelli, Q. Zhan, et al., Analysis and design of a modular underactuated mechanism for robotic fingers, Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 226 (2) (2012) 242–256.
[35] S. Yao, L. Wu, G. Carbone, M. Ceccarelli, Z. Lu, Grasping simulation of an underactuated finger mechanism for LARM Hand, Int. J. Model. Simul. 30 (1) (2010) 87–97.
[36] Y. Zhang, J. Li, J. Li, Robot Dexterous Hand—Modeling, Planning and Simulation, China Machine Press, Beijing, 2007, pp. 137–160 (in Chinese).
[37] R. Jensen, Q. Shen, Computational Intelligence and Feature Selection: Rough and Fuzzy Approaches, IEEE Press/John Wiley & Sons, 2008.
[38] R. Bello, R. Falcon, Rough sets in machine learning: a review, in: Thriving Rough Sets, Springer International Publishing, 2017.
[39] https://www.researchgate.net/publication/323847823_Space_Ver1.
[40] J. Liao, D. Zou, R. Luo, Improve Go AI based on BP-neural network, in: Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, IEEE, 2008, pp. 487–490.
[41] M. Marinaro, S. Scarpetta, On-line learning in RBF neural networks: a stochastic approach, Elsevier Science Ltd, 2000.
[42] G.A. Anastassiou, I.F. Iatan, A new approach of a possibility function based neural network, in: Intelligent Mathematics II: Applied Mathematics and Approximation Theory, Springer International Publishing, 2016.
[43] https://www.faulhaber.com/en/products/dc-motors/dc-micromotors/