Preprints of the Fourth IFAC Symposium on Robot Control, September 19-21, 1994, Capri, Italy
COMPLIANT MOTION BASED ACTIVE SENSING BY ROBOTIC FINGERS

Makoto Kaneko* and Kazunobu Honkawa**

* Hiroshima University, Faculty of Engineering, Higashi-Hiroshima, 724 Japan
** Kyushu Matsushita Co., Fukuoka, 812 Japan
Abstract. When a robot approaches an unknown object in order to grasp it stably, it is important to know the local shape of the object to be grasped. Assuming that the position of the object is roughly specified and that the object is convex, this paper discusses the issue of how to detect the local shape of the object without using any fingertip tactile sensor. A link system having one compliant joint and one position-controlled joint is capable of changing its posture while maintaining contact between the link and the object, when a small angular displacement is imparted continuously to the position-controlled joint. This motion planning is simple but still guarantees contact between the link and the object, which is essential for detecting the local shape of the object. We show how to construct the local shape of the object from the obtained link trajectories. Finally, we show some experimental results to confirm the validity of the proposed approach.

Key Words. Tactile sensing, compliant motion, robotic finger, active sensing
1. INTRODUCTION

Research in dexterous manipulation has mainly addressed issues in grasp stability analysis (Hanafusa and Asada; 1977, Nguyen; 1987), grasping force analysis (Kerr and Roth; 1986, Yoshikawa and Nagai; 1988), contact conditions (Cutkosky and Wright; 1986), fingertip sensors (Cutkosky and Howe; 1988), and manipulation of objects (Hsu, Li, and Sastry; 1988). Most work in dexterous manipulation has assumed knowledge of object shape, location, and orientation. Once exact object information is established, finger positions which ensure force-closure grasps are designated. Let us now consider a practical case where the exact object position and shape are not known a priori. Even with a vision system, it may be difficult to establish exact shape information, especially at the rear of the object to be grasped. When a robot hand approaches such an object, there are two principal phases. The first, the contact point detection phase, is to find the first point of contact between a finger link and the object. In the second phase, the grasping point detection phase, the final grasp locations are identified after some more information about the object shape has been obtained, for example by using a fingertip tactile sensor. In this paper, we deal with the second phase as well as the first phase without assuming any fingertip tactile sensor. Fearing (1986) first considered stable planar grasps, assuming that contact
point information was unavailable. He showed that the bounded slip of a part in a hand generates a twirling motion of the object. This is valuable for adjusting the fingers and object to a stable situation. Dario and Buttazzo (1987) and Brock and Chiu (1987) have developed fingertip tactile sensors and have succeeded in obtaining good reconstructed surfaces using them. In these studies, however, the fingertip sensors are supposed to be in contact with the environment. In this paper we relax the assumption that a fingertip is always in contact with the environment, and permit another part of the finger link to be in contact with the environment. In our previous works (Kaneko and Tanie; 1990, Kaneko and Tanie; 1992, Kaneko and Hayashi; 1993), we have proposed a scheme for detecting the contact point between a finger link and an object that actively uses finger joint compliance in lieu of distributed tactile sensors. This scheme can be separated into two phases, namely, the approach phase and the detection phase. In the approach phase, each finger is extended to envelop the largest volume and approaches the object to be grasped until a part of a finger link first contacts the object. The detection phase starts by commanding a small finger posture change while maintaining contact between the object and finger. A series of link motions while maintaining contact between the link and object is called a self-posture changing motion (SPCM). It has been shown that from two different link postures
during an SPCM, the robot can compute an intersecting point which leads to an approximate contact point. This approach has the significant advantage of utilizing existing hardware (such as torque and position sensors) to detect the contact point. Although we assume joint torque sensors for realizing an SPCM, we never use their outputs directly for computing a contact point. Instead, these sensors are used only for generating joint compliance together with the joint position sensors. The computation of contact points is based purely on the outputs from the joint position sensors, which are much more robust than joint torque sensors. By continuously applying SPCMs, either an inner link or the link tip always makes contact with the object's surface. This motion planning is simple. We show that the contact points during SPCMs are continuously determined from the obtained link trajectories and the shape of the finger tip. We also show some experimental results.
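As a concrete illustration of this intersection computation, the sketch below (Python, with hypothetical names and link lengths; the paper itself gives no code) estimates an approximate contact point from two slightly different postures of a planar two-link finger. Each posture defines the line carrying the distal link from the joint angles measured by the position sensors, and the contact point is approximated by the intersection of the two lines.

```python
import numpy as np

def distal_link_line(q1, q2, l1):
    """Return a point on the distal link (the position of joint 2) and its unit
    direction, for a planar 2-link finger whose base joint is at the origin."""
    p = np.array([l1 * np.cos(q1), l1 * np.sin(q1)])   # position of joint 2
    d = np.array([np.cos(q1 + q2), np.sin(q1 + q2)])   # direction of link 2
    return p, d

def intersect_lines(p1, d1, p2, d2):
    """Intersection of the 2D lines p1 + t*d1 and p2 + s*d2 (None if parallel)."""
    A = np.column_stack((d1, -d2))
    if abs(np.linalg.det(A)) < 1e-9:
        return None                                    # postures too similar / parallel
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

# Two slightly different postures recorded during an SPCM (angles in rad, length in m).
l1 = 0.08
posture_a = distal_link_line(q1=0.60, q2=0.30, l1=l1)
posture_b = distal_link_line(q1=0.62, q2=0.27, l1=l1)
print("approximate contact point:", intersect_lines(*posture_a, *posture_b))
```

The approximation improves as the two postures are taken closer together, which is why only small angular displacements are imparted at each step.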
2. REVIEW OF SPCM

2.1 General Discussion

In this section, we begin by briefly explaining the basic idea of the self-posture changing motion (SPCM) with the help of a simple example.
A simple 3D link system as shown in Fig.1 is used to explain the self-posture changing motion. The link system has three degrees of freedom, and the object is assumed to be fixed to the environment. Now, assume that we impart an arbitrary angular displacement at the position-controlled joint in the direction shown by the arrow of q3 in Fig.1(a) for the following two cases: Case 1, in which joint 1, joint 2, and joint 3 are locked, compliantly controlled, and position-controlled, respectively; and Case 2, in which joint 1, joint 2, and joint 3 are compliantly controlled, locked, and position-controlled, respectively. As a result, the link system will automatically change its posture according to the angular displacement of the position-controlled joint, as shown in Figs.1(a) and (b), while maintaining contact between link and object. These characteristics constitute self-posture changeability (SPC) (Kaneko and Tanie; 1990, 1992), and the series of motions that bring about SPC are termed self-posture changing motions (SPCMs). When the axis of the position-controlled joint is selected to be parallel to that of the compliant joint, the motion of the link is restricted to a 2D plane as shown in Fig.1(a), while that in Fig.1(b) is not. This particular joint assignment can be conveniently used when finding an approximate contact point between a finger link and an unknown object.

Assumption A (general). Before addressing SPCM in detail, the following assumptions are made without any loss of generality.
Assumption 1: A serial link system is assumed.
Assumption 2: The object does not move during an SPCM.
Assumption 3: Both object compliance and link compliance are negligibly small.
Assumption 4: The system compliance derives only from joint compliance control.
Assumption 5: The mass of each link element is negligibly small.
Fig.1 Examples of SPCMs for a 3D link system: (a) Case 1 (joint 1: locked, joint 2: compliant, joint 3: position-controlled); (b) Case 2 (joint 1: compliant, joint 2: locked, joint 3: position-controlled)
Assumption 1 can be applied to most of the existing multifingered robotic hands. Assumption 2 means that the contact force between link and object is smaller than the friction force between the object and the plane on which the object is placed, or that the object is regarded as a part of the environment. Assumptions 3 and 4 are for simplifying the mathematical treatment. With Assumption 5, we can neglect any dynamic effects, such as inertia forces or the impulsive force appearing during a collision between link and object. We observe, again, that for the general discussion of SPCM, plane and line contacts are allowed, while the contact is assumed to be rigid. We now define self-posture changeability with the following mathematical expressions.

Definition 1: Assume a 3-D link system with its h-th link in contact with an object. For the angular displacement vector dqp = [dqp1, ..., dqpn]^t imparted to the position-controlled joints, the link system is said to have self-posture changeability (SPC) if there exist vectors r* and r# satisfying
Go(r*) = 0                          (1)
G1(r*, q) = 0                       (2)
Go(r#) > 0                          (3)
G1(r#, q) = 0                       (4)

where r* is the vector whose end lies on both the link surface and the object surface, r# is a vector whose end lies outside the object but on the link surface, q = [q1, ..., qn]^t is the joint angle vector, and pi is a position-controlled joint number. The functions Go(r) and G1(r, q) are defined to possess the following characteristics:

Go(r) > 0 for a point outside the object
Go(r) = 0 for a point on the object
Go(r) < 0 for a point inside the object
G1(r, q) > 0 for a point outside the link system
G1(r, q) = 0 for a point on the link system
G1(r, q) < 0 for a point inside the link system

The series of motions that bring about an SPC is defined as a Self-Posture Changing Motion (SPCM). By the definition of G1(r, q), G1(r, q) = 0 denotes the surface of the link, and it changes with the joint angle vector. Conditions (1) and (2) mean that the point indicated by r* lies not only on the surface of the object but also on the surface of the link. Mathematically, these two conditions still allow the hypothetical situation in which a part of the link system is inside the object. The additional conditions (3) and (4) exclude this situation and fix the geometrical relationship between the link and the object: (3) means that a point on the link system is never inside the object; in other words, it is either outside the object or on the surface of the object. If both the link element and the object have smooth surfaces, as shown in Fig.2, Go(r*) and G1(r*, q) have continuous first derivatives. Then, (3) and (4) can be replaced by

no(r*) = -n1(r*)                    (5)
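To make the roles of Go and G1 concrete, the following sketch evaluates the two implicit functions for a simple geometry (an illustration only; the disc-shaped object, zero-thickness link, and all names are assumptions, not the paper's). A contact point r* satisfies both functions being zero, while every other sampled point r# on the link has Go > 0.

```python
import numpy as np

# Go(r): > 0 outside the object, = 0 on it, < 0 inside; here the object is a disc.
def G_o(r, center=np.array([0.10, 0.03]), radius=0.03):
    return np.linalg.norm(r - center) - radius

# G1(r, q) is represented here by the distance from r to the link segment, so it is
# zero on the (idealised, zero-thickness) link and positive elsewhere.
def G_1(r, p0, p1):
    d = p1 - p0
    t = np.clip(np.dot(r - p0, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(r - (p0 + t * d))

# Link lying along the x-axis, tangent to the disc at r* = (0.10, 0).
p0, p1 = np.array([0.0, 0.0]), np.array([0.15, 0.0])
r_star = np.array([0.10, 0.0])
r_hash = np.array([0.05, 0.0])        # another point on the link, away from the contact

print("Go(r*) =", G_o(r_star), " G1(r*) =", G_1(r_star, p0, p1))   # both ~0: (1),(2)
print("Go(r#) =", G_o(r_hash), " G1(r#) =", G_1(r_hash, p0, p1))   # Go > 0, G1 = 0: (3),(4)
```

With a zero-thickness link the inside/outside distinction of G1 degenerates; the sketch is only meant to show how the contact conditions (1)-(4) can be checked numerically.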
where no(r*) and n1(r*) are unit vectors representing the outer normal direction of the contact surface of the object and the outer normal direction of the surface of the link, respectively.

Fig.2 Contact between link and object

2.2 Utilization of SPCM for Detecting Contact Point

Our ultimate purpose in this study is to find the local shape of an object through SPCMs. Before addressing local shape sensing, we briefly summarize the utilization of an SPCM for detecting a contact point between an object and a link system. Hereafter, we make several additional assumptions.

Assumption B (specific).
Assumption 6: Link motion is limited to a 2D plane during detecting motions.
Assumption 7: Using the notation Vi for the workspace of the i-th link and Vo for the space occupied by the object:
(6)
Assumption 8: Each joint has a torque sensor and a joint position sensor.
Assumption 9: Controllable joint compliance is bounded with 0 < ...

An approximate contact point can be obtained through the following two phases when an inner link makes contact with an object.

Approach phase. In this phase (see Fig.3), each finger is opened wide to cover a large area for object detection. The hand approaches the object until a finger link eventually contacts it. This motion is easily realized by using a damping control law for the first joint. Thus,
(7)
where τ1, kv, ...

So far, we assumed that, subject to Assumption 11, the approach phase is executed using the Straight-lined Link Posture (SLP), where each joint angle is set to q2 = ... = qn = 0.
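Eq.(7) itself is not reproduced above; the sketch below shows one damping-control law consistent with the description (the joint is driven until its measured torque reaches a predetermined value), with kv, τd and the function name as assumed, illustrative quantities rather than the paper's.

```python
# Minimal damping-control step in the spirit of eq.(7): the commanded joint velocity
# is proportional to the torque error, so the finger closes until contact raises the
# joint torque to the preset value tau_d and the motion stops.
def damping_control_step(tau_measured, tau_d=0.05, kv=2.0):
    """Return a joint-velocity command [rad/s] from the measured joint torque [Nm]."""
    return kv * (tau_d - tau_measured)

# Example: free motion (tau = 0) gives a positive closing velocity; at tau = tau_d
# the commanded velocity is zero and the finger rests against the object.
for tau in (0.0, 0.02, 0.05):
    print(tau, "->", damping_control_step(tau))
```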
Fig.3 Approach phase

Fig.4 Detection phase

3. LOCAL SHAPE SENSING OF OBJECT USING SPCM

In this section, we show that a serial link system can detect the local shape of an object through a series of SPCMs. Assume that a part of link element j (1 ≤ j ≤ n-1), i.e., not the last link, makes contact with an object. The robot can recognize which link element is in contact with the object from the torque sensor outputs. Once the robot recognizes that the link element in contact with the object is not the last one, it can easily detect the approximate contact point through the intersection of two slightly different link postures, as explained in 2.2. Now, let us assume that a part of the last link is in contact with an object. The robot can recognize such a situation, because all torque sensor outputs are now "ON". Under this condition, there are two possible contact states as shown in Fig.5, one at the link tip and the other at the inner link surface. Both Fig.5(a) and (b) can satisfy the local geometrical constraint for a small posture change during an SPCM, if we assume objects such as those shown in Fig.5(a) and (b), respectively. Thus, when we rely on geometrical information alone, we cannot determine uniquely whether the link tip or the inner link is in contact with the object. Determining which contact state occurs is the starting point of object shape sensing utilizing SPCMs. In this chapter, we discuss how to distinguish between inner link contact and link tip contact, and how to localize the contact point during SPCMs. Additionally, we set the following two assumptions:
Assumption 12: The object is convex. This assumption avoids any contact-point jump during SPCMs; with a contact-point jump, the local shape cannot be measured continuously.

Fig.5 Two possible cases when the last link is in contact with the object: (a) inner link contact; (b) link tip contact.

3.1 How To Distinguish Between Inner Link Contact And Link Tip Contact
The main problem is to judge whether link tip contact or inner link contact occurs when a part of the last link element is in contact with an object just after the approach phase, as shown in Fig.3. Let us now consider the following new joint role assignment: the first joint and the second joint are position controlled and damping controlled, respectively. Under this joint role assignment, we impart angular displacements to the first joint step by step. Fig.6 demonstrates what link motions occur in each case, where Fig.6(a) and (b)
correspond to inner link contact and link tip contact, respectively. Note that under damping control (see eq.(7)) for the second joint, the second (or last) link always acts to rotate in the clockwise direction so that the joint torque reaches the predetermined value. At the initial phase of the SPCM, if the link tip moves to the left half-plane with respect to the line extended from the initial straight-line link posture and the second joint moves to the right half-plane, this means that a part of the second link is in contact with the object (Fig.6(a)), because otherwise the link tip would shift to the right half-plane under the prescribed joint role assignment. On the other hand, if both the link tip and the second joint move to the right half-plane (Fig.6(b)), this means that the finger tip makes contact with the object. In order to describe this condition more generally, we define the safety region, through which the link passes during the approach phase, and the complementary non-safety region. Then, we can say that at the initial phase of the SPCM, if the link tip moves into the safety region for the clockwise rotation of the first joint, the inner link is in contact with the object, and if it moves into the non-safety region, the link tip is in contact with the object. By using the idea of the safety region, we can easily judge whether the link tip or the inner link is in contact with the object at the initial phase of the SPCM.
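A minimal sketch of this test is given below (illustrative Python; the geometry, the left/right convention, and all names are assumptions, not taken from the paper). It classifies the contact by checking on which side of the initial straight-line posture the link tip has moved after the first small displacement of the position-controlled joint.

```python
import numpy as np

def side_of_line(p, a, b):
    """Signed test: > 0 if point p lies to the left of the directed line a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def classify_contact(tip_after, base, straight_dir):
    """Classify the contact state from the link-tip position after the initial
    step of an SPCM; base and straight_dir define the initial straight-line posture.

    Convention assumed here: the safety region swept during the approach phase
    lies on the left of the initial straight-line posture."""
    a, b = base, base + straight_dir
    return "inner link contact" if side_of_line(tip_after, a, b) > 0 else "link tip contact"

# Hypothetical measurement (metres) after a small step of the first joint.
base = np.array([0.0, 0.0])
straight_dir = np.array([1.0, 0.0])
print(classify_contact(np.array([0.155, 0.012]), base, straight_dir))  # tip moved left
```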
Fig.6 Two cases during an SPCM: (a) inner link contact; (b) link tip contact.

3.2 Local Shape Sensing

Now, let us consider object shape sensing over the whole series of SPCMs. For this purpose, we can utilize the following facts: (1) Either the inner link or the link tip is always in contact with the object, as long as we take the above joint role assignment. (2) The object's surface cannot exist in a region through which any part of the link has passed. (3) If there is no intersection between two slightly different link postures, the link tip is in contact with the object.

Let the assemblies of points obtained from the finger tip trajectory and from the intersections between every two slightly different link postures be A1 and A2, respectively. Let also the assembly of safety regions be B = B1 ∪ B2, where B1 and B2 are the assemblies of safety regions obtained during the approach phase and the detection phase (SPCM), respectively. A contact between the object's surface and the link system always occurs on the boundary of B. If A2 were the assembly of points lying exactly on the object's surface, the assembly representing the exact local shape S would be given by eq.(8):

S = Bb ∩ (A1 ∪ A2)                  (8)

where Bb denotes the boundary of the safety region. Note, however, that the intersection of two slightly different link postures never provides the exact contact point on the object's surface, except in the case where the link system is in contact with a sharp-edged object. Thus, applying eq.(8) would remove most of the points computed from link intersections. In order to avoid this undesirable situation, we use a newly defined boundary set Bn, into which every point whose distance from the boundary is less than D is included. By using Bn, we obtain the assembly of the approximate local shape Sa by eq.(9):

Sa = Bn ∩ (A1 ∪ A2)                 (9)

Fig.7 An example of object shape sensing: (a) approach phase; (b) detection phase.
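The filtering step of eq.(9) can be sketched as follows (illustrative Python; the point sets, the sampled boundary, and the threshold D are hypothetical data, not the paper's): candidate points in A1 ∪ A2 are kept only if they lie within distance D of the safety-region boundary.

```python
import numpy as np

def approximate_local_shape(A1, A2, boundary_pts, D):
    """Sa = Bn ∩ (A1 ∪ A2): keep candidate points lying within distance D of the
    safety-region boundary, here represented by a densely sampled point set."""
    candidates = np.vstack([A1, A2])
    kept = [p for p in candidates
            if np.min(np.linalg.norm(boundary_pts - p, axis=1)) < D]
    return np.array(kept)

# Hypothetical data: tip-trajectory points (A1), link-intersection points (A2),
# and a sampled boundary of the safety region (all coordinates in metres).
A1 = np.array([[0.100, 0.020], [0.110, 0.030]])
A2 = np.array([[0.105, 0.025], [0.200, 0.200]])     # the last point is spurious
boundary = np.array([[0.100, 0.020], [0.110, 0.030], [0.120, 0.040]])
print(approximate_local_shape(A1, A2, boundary, D=0.01))
```

The spurious intersection point far from the boundary is discarded, while points close to the swept-region boundary survive as the approximate local shape.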
3.3 Experimental Approach

We have performed several experiments to verify the ideas discussed in sections 3.1 and 3.2. We use a two-degrees-of-freedom wire-driven robot finger having a joint position sensor and a joint torque sensor in each joint. Fig.7 explains the link motion during the experiment,
where Fig.7(a) and (b) are the approach phase and the detection phase using the SPCM, respectively. The same motion is applied to the link system from the opposite direction to extend the sensing region. To avoid complicating the experiment, we use a rectangular finger tip so that finger tip contact always occurs at the edge close to the object. Fig.8 shows an experimental result, where the dotted line shows the exact object shape, and the filled circles and white circles denote points obtained from the finger tip trajectory and from the link intersections, respectively. Fig.8(a) displays all data obtained from the link tip trajectory and the link intersections. Fig.8(b) shows the local shape of the object obtained by applying the process given by eq.(9). It can be seen from Fig.8(b) that the detected shape recovers the object shape fairly well.
Fig.8 Experimental result: (a) A1 ∪ A2; (b) Bn ∩ (A1 ∪ A2)
4. CONCLUSION

We have proposed an approach for continuously detecting the local shape of an object using SPCMs. The advantages of the proposed scheme are: (1) Without any external sensor, such as a finger tip tactile sensor, the robot can detect the local shape of a convex object. (2) The proposed scheme is simple and applicable to objects of various shapes without changing the main part of the scheme. For practical application of this idea, each joint needs a position sensor with high resolution so that the intersection can be computed accurately from a small change of link posture. We also have to design the robot with small backlash so that the object shape can be reconstructed accurately.

Acknowledgment. The authors would like to express their sincere gratitude to Mr. T. Hayashi for his help with the experiments.
5. REFERENCES

Brock, D.L. and S. Chiu, Environment perception of an articulated robot hand using contact sensors, Proc. of the IEEE Int. Conf. on Robotics and Automation, Raleigh, 89-96, 1987.
Cutkosky, M.R. and P.K. Wright, Friction, stability and the design of robotic fingers, Int. Journal of Robotics Research, vol.5, no.4, 20-37, 1986.
Cutkosky, M.R. and R.D. Howe, Dynamic tactile sensing, in RoManSy '88: 7th CISM-IFToMM Symp. on the Theory and Practice of Robots and Manipulators, Udine, Italy, 1988.
Dario, P. and G. Buttazzo, An anthropomorphic robot finger for investigating artificial tactile perception, Int. Journal of Robotics Research, vol.6, no.3, 25-48, 1987.
Fearing, R.S., Simplified grasping and manipulation with dextrous robot hands, IEEE Journal of Robotics and Automation, vol.2, no.4, 188-195, 1986.
Grupen, R.A. and M. Huber, 2-D contact detection and localization using proprioceptive information, Proc. of the IEEE Int. Conf. on Robotics and Automation, 1-130, 1993.
Hanafusa, H. and H. Asada, Stable prehension by a robot hand with elastic fingers, Proc. of 7th Int. Symp. on Industrial Robots, Tokyo, p.361, 1977.
Hsu, P., Z. Li, and S. Sastry, On grasping and coordinated manipulation by a multifingered robot hand, Proc. of the IEEE Int. Conf. on Robotics and Automation, Philadelphia, 384-389, 1988.
Kaneko, M. and K. Tanie, Contact point detection for grasping an unknown object using self-posture changeability, Proc. of the IEEE Int. Conf. on Robotics and Automation, p.864, 1990.
Kaneko, M. and K. Tanie, Quasi-statically planned self-posture changeability for link system, Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, p.1889, 1992.
Kaneko, M. and T. Hayashi, Standing-up characteristic of contact force during self-posture changing motion, Proc. of the IEEE Int. Conf. on Robotics and Automation, p.202, 1993.
Kerr, J. and B. Roth, Analysis of multifingered hands, Int. Journal of Robotics Research, vol.4, no.4, 3-17, 1986.
Nguyen, V., Constructing stable grasps in 3D, Proc. of the IEEE Int. Conf. on Robotics and Automation, Raleigh, 234-239, 1987.
Yoshikawa, T. and K. Nagai, Evaluation and determination of grasping force for multi-fingered hands, Proc. of the IEEE Int. Conf. on Robotics and Automation, Philadelphia, 245-248, 1988.