An unconstrained optimization approach to the resolution of the inverse kinematic problem of redundant and nonredundant robot manipulators


Yasmina Bestaoui
Laboratoire de Robotique et d'Informatique Industrielle, 3 rue du Maréchal Joffre, 44041 Nantes, France

Yasmina Bestaoui was born in Paris, France in 1960. She received the B.S. and M.S. degrees from Tlemcen University, Tlemcen, Algeria in 1981 and 1984 respectively, both in electrical engineering, and is now preparing a Doctorat ès Sciences at Oran University, Algeria and a Doctorat at ENSM, Nantes, France. From 1981 onwards she has been a teaching assistant at the Hydraulics Institute, University of Tlemcen, Algeria. Her research interests are robotic control and optimization.

Abstract Bestaoui, Y., An unconstrained optimization approach to the resolution of the inverse kinematic problem of redundant and nonredundant robot manipulators, Robotics and Autonomous Systems, 7 (1991) 37-45. This paper presents a comparative study of some unconstrained optimization techniques applied to robotic kinematic control. We are concerned not only with the final location of the end-effector but also with the velocity at which the end-effector moves. The position and velocity kinematic problems are related since the Jacobian matrix depends on the values of the joint variables. In order to move the end-effector in a specified direction at a specified speed, it is necessary to solve, at the same time, a set of nonlinear and a set of linear equations. Some well-known optimization techniques (valid for both nonredundant and redundant robots): steepest descent, conjugate gradient and quasi-Newton methods, are applied to solve this inverse kinematic problem. All these techniques are tested on two manipulators (a nonredundant and a redundant one) and compared by means of simulation. Keywords: Kinematic problem; Redundant and nonredundant robots; Unconstrained optimization.

1. Introduction

Kinematics is the science of motion which treats motion without regard to the forces that cause it. The study of the kinematics of manipulators refers to all the geometrical and time-based properties of the motion [8]. The kinematic equations provide the functional relationship between the joint displacements and velocities and the resultant end-effector position, orientation and velocity. In most cases it is far more convenient to specify Cartesian motions than joint-level ones. Cartesian motion specifications are made with respect to some Cartesian coordinate system. We suppose that there exists no obstacle which could cause a collision. At the trajectory planning stage, a geometrical description is received as an input and then a time history of desired positions, velocities and accelerations is evaluated. The solution of the kinematic equations is the first step in the control of robot manipulators [2,17]. The problem of finding the end-effector position and orientation for a given set of joint displacements is referred to as the direct kinematic problem. This is simply a matter of evaluating the right-hand side of the kinematic equations for known joint displacements [1]. Since articulated manipulators


are composed of serially connected links, in general both the position and the orientation of a manipulator hand depend upon all of its joints. As the kinematic equations comprise nonlinear simultaneous equations with many trigonometric functions, this problem is in fact the resolution of a set of nonlinear equations. In addition to dealing with static positioning problems, we may wish to analyze manipulators in motion. In performing the velocity analysis of a mechanism, it is often convenient to define a matrix quantity called the Jacobian of the manipulator. The Jacobian specifies a mapping from velocities in joint space to velocities in Cartesian space. The nature of this mapping changes as the configuration of the manipulator varies. Joint rates are related to the velocity of the tip in a linear fashion. The problem then results in the resolution of a set of linear equations, which depends on the solution of the previous set of nonlinear equations. In this paper we are concerned not only with the final location of the end-effector, but also with the velocity at which the end-effector moves. In order to move the end-effector in a specified direction at a specified speed, it is necessary to solve, at the same time, a set of nonlinear and a set of linear equations. We discuss the problem of moving and orienting the end-effector of a completely general manipulator to a specified position, with a certain velocity. There is a need for a coherent, general-purpose inverse kinematics framework to accommodate both position and velocity transformations. In the direct kinematic problem, the end-effector location is determined uniquely for any given set of joint displacements. The inverse kinematic problem is not as simple as the forward kinematics. Because the kinematic equations are nonlinear, their solution is not always easy, or even possible, in closed form. The questions of the existence of a solution and of multiple solutions also arise [1,8]. The existence or nonexistence of a kinematic solution defines the workspace of a given manipulator. The lack of a solution means that the manipulator cannot attain the desired position and orientation because it lies outside the manipulator's workspace. For a solution to exist, the specified goal point must lie within the workspace; if the desired position of the end-effector is in the workspace, then at least one solution exists [8]. The fact that a manipulator has

multiple solutions may cause problems, because the system has to be able to choose one. The criteria upon which to base this decision vary, but a very reasonable choice would be the closest solution. There are two ways to approach the problem of inverse robot kinematics: symbolically and numerically. The majority of the methods used to solve the inverse kinematic position problem [1,8,19] for particular manipulators utilize the Denavit-Hartenberg formulation. The best known method for determining a closed-form kinematic solution is the one described in [19], which involves systematically exploring various kinematic relationships and picking out the best one to yield solutions. The form of the inverse solution depends on the geometry of the robot joints and links. The major drawback of symbolic solutions is that working out such a solution may not always be possible, although it usually is for most industrial manipulators. Particular solutions are available for a number of arm-wrist combinations; the criterion for kinematic simplicity is that three consecutive joint axes intersect at a point. Such solutions are usually obtained by either a geometric or an analytical approach. When the analytic method cannot be used, numerical techniques are employed. Numerical solutions to the inverse kinematics problem are much more general. Certain authors [3,9,10] have attempted to find general solutions based on numerical methods that can be used for manipulators of arbitrary geometry. These methods typically use the Jacobian matrix J, which expresses the rate of change of the Cartesian variables with respect to the joint variables. A single formulation may be applied to a wide variety of linkages. For the numerical resolution of the position kinematic problem, the best known method is the Newton-Raphson method. In fact, this technique has certain drawbacks that prevent its use: its convergence domain is too narrow and it involves the resolution of a set of linear equations. To avoid these drawbacks, some well-known optimization techniques (steepest descent, conjugate gradient and quasi-Newton methods) can be applied to the inverse kinematic problem. To the author's knowledge, Van Der Bosch et al. [9] were the first to make use of an optimization technique, the steepest descent method, to solve the kinematic problem.


Inverse Jacobian differential kinematics is the basis for a general method of controlling robot manipulators. While direct closed-form solutions exist only for a restricted class of manipulators, the Jacobian can be computed for every possible manipulator. Computationally efficient methods for evaluating the Jacobian and inverse Jacobian matrices have been developed [1,7,8,19]. Direct methods require too many computations to be usable in real time, so iterative techniques must be used to obtain a compromise between the number of computations and the desired precision. Singularity is a critical problem for articulated arms [6,7]. Chevallereau and Khalil [7] discuss the kinematic structures of redundant manipulators that are appropriate for avoiding the singular points internal to the workspace. The study of redundant manipulators is an important one, particularly for obstacle and singularity avoidance. The application of pseudo-inverse matrices to obtain optimal joint velocities has traditionally been a central tool for the redundant arm problem [10]. For the case of kinematically redundant robots, an iterative approach such as the one based on constrained nonlinear optimization [10] has been proposed. The method presented in [12] considers the kinematic model as a whole and determines all joint coordinates using an iterative procedure. In [13], a technique is presented where the kinematics is shown to be equivalent to a nonlinear operator mapping the coordinates of the task space into the joint space. The operator is recursively estimated using a least-squares estimator and is used to calculate the configuration-space coordinates for a desired task-space location.

This paper consists of five sections. Section 2 presents the problem formulation and Section 3 introduces the resolution methods. Simulation results are discussed in Section 4, where all the proposed techniques are tested on two manipulators (a nonredundant and a redundant one) and compared by means of simulation. Finally, some concluding remarks are given in Section 5.

2. Problem formulation

The problem is posed as follows: given a desired end-effector position, orientation and velocity, find the corresponding joint positions and velocities that cause the end-effector to move at


the desired velocity. This is a fundamental problem in the practical use of manipulators. The kinematic equations must be solved for the joint displacements and rates, given the end-effector position, orientation and velocities. Once the kinematic equations are solved, the desired end-effector motion can be achieved by moving each joint to the determined value, with the specified velocity. A manipulator arm must have at least 6 degrees of freedom in order to locate its end-effector at an arbitrary position with an arbitrary orientation. Similarly, 6 degrees of freedom are necessary to move the end-effector in an arbitrary direction with an arbitrary angular velocity. The effective position and orientation of a manipulator can be described in terms of the position and orientation of a coordinate frame embedded in the last link of the manipulator. The orientation of the end-effector is defined by the orientation matrix containing the components of the normal, slide and approach (n, s, a) vectors, and its position is defined by the three components of a vector from the reference coordinate origin to the tool point [19], as a function of the joint positions:

T_n = \begin{bmatrix} n_x & s_x & a_x & p_x \\ n_y & s_y & a_y & p_y \\ n_z & s_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}.   (2.1)
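As an illustration of how the pose matrix (2.1) is assembled, the following Python/NumPy sketch builds T_n from given normal, slide, approach and position vectors. The numerical values are arbitrary examples, not data from the paper.

```python
import numpy as np

def pose_matrix(n, s, a, p):
    """Assemble the 4x4 homogeneous transform of Eq. (2.1) from the
    normal, slide and approach vectors n, s, a and the tool-point position p."""
    T = np.eye(4)
    T[:3, 0] = n
    T[:3, 1] = s
    T[:3, 2] = a
    T[:3, 3] = p
    return T

# Example: tool frame rotated 90 degrees about z, tool point at (0.3, 0.1, 0.5)
n = np.array([0.0, 1.0, 0.0])    # normal vector
s = np.array([-1.0, 0.0, 0.0])   # slide vector
a = np.array([0.0, 0.0, 1.0])    # approach vector
p = np.array([0.3, 0.1, 0.5])    # position vector
print(pose_matrix(n, s, a, p))
```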

For any given configuration of the manipulator, joint rates are related to the velocity of the tip in a linear fashion. This is only an instantaneous relationship, since at the next instant the Jacobian will have changed slightly. Jacobians of any dimension (including nonsquare) may be defined. For any n-link manipulator, it is possible to compute the m × n Jacobian matrix [22]. The Cartesian terminal link rates \dot{x} = (v\ \omega)^T are then related to the joint rates \dot{q} by the following equation:

\dot{x} = J(q)\,\dot{q},   (2.2)

where J is the (m × n) manipulator Jacobian and, once J has been calculated from the previous value of the joint angles q, \dot{x} can be computed, with
v = (v_x, v_y, v_z) the Cartesian velocities in the coordinate frame T,
\omega = (\omega_x, \omega_y, \omega_z) the angular velocities about the x, y, z axes of frame T,
\dot{q} = (\dot{q}_1, \dot{q}_2, \ldots, \dot{q}_n)^T the n joint rates.
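The linear velocity mapping (2.2) can be made concrete with a minimal sketch. The planar two-link arm below (link lengths assumed for illustration, not taken from the paper) defines a forward kinematic map f(q) and its Jacobian, and maps a set of joint rates to the instantaneous Cartesian tool velocity.

```python
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths (illustrative, not from the paper)

def fkine(q):
    """Tool-point position of a planar 2R arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    """2x2 Jacobian of the planar arm: x_dot = J(q) q_dot, Eq. (2.2)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

q    = np.array([0.3, 0.8])      # joint positions (rad)
qdot = np.array([0.1, -0.2])     # joint rates (rad/s)
print(jacobian(q) @ qdot)        # instantaneous Cartesian tool velocity
```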


For a kinematically redundant manipulator, the dimension of the joint space is greater than the dimension of the task space:

dim(q) = n > dim(p) = m.

Consequently, the Jacobian matrix J(q) is rectangular and there exists no unique inverse velocity transformation

\dot{q} = J^{-1}(q)\,\dot{x}.   (2.3)

For nonredundant manipulators, this scheme is referred to as resolved motion rate control [19]. Redundant manipulators provide increased flexibility for the execution of complex tasks. The redundancy of such manipulators can be effectively used to avoid obstacles and singularities and to maintain a high degree of manipulability while performing the desired end-effector task. If the robot is redundant, there is generally an infinite number of configurations by which to achieve a desired end-effector position. In many cases it is satisfactory to use some default configuration choice. This is done automatically when optimization techniques are used, because the majority of these methods search for the closest local minimum.
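For the redundant case, a common concrete realization of this velocity-level resolution is the pseudo-inverse mentioned above [10], which selects the minimum-norm joint velocity among the infinitely many solutions. The sketch below uses an assumed planar three-link arm performing a two-dimensional positioning task (so n = 3 > m = 2); it is an illustrative example, not the configuration studied in this paper.

```python
import numpy as np

# Planar 3R arm: 3 joints, 2-dimensional positioning task (n = 3 > m = 2),
# so it is kinematically redundant for this task.
L = np.array([0.4, 0.3, 0.2])   # assumed link lengths

def jacobian(q):
    """2x3 positional Jacobian of the planar 3R arm."""
    c = np.cumsum(q)            # cumulative joint angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(c[i:]))
    return J

q     = np.array([0.2, 0.4, -0.3])
x_dot = np.array([0.05, 0.02])   # desired tool velocity

# Resolved motion rate control: for a square, nonsingular Jacobian the
# pseudo-inverse reduces to the ordinary inverse of Eq. (2.3); in the
# redundant case it picks the minimum-norm joint velocity.
q_dot = np.linalg.pinv(jacobian(q)) @ x_dot
print(q_dot)
```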

3. Resolution methods

Some numerical methods for assigning link coordinate systems and obtaining the end-effector position and orientation in terms of the joint coordinates are reviewed in the following. Kinematic position control can be interpreted as finding the zeroes of a set of nonlinear equations or as an optimization problem. First, we consider the best known of these methods, the Newton-Raphson technique, and then the ones directly related to the optimization field. The Newton-Raphson method can be defined for the nonlinear system as follows. From the Taylor-series expansion of f about a point q_k, we can obtain a linear approximation to f, i.e.

f(q^*) = f(q_k) + D(q_k)(q^* - q_k),   (3.1)

in which D is the Jacobian matrix of f (the matrix of its first-order derivatives) and q^* is the solution of the nonlinear equations. The Newton step d_k is an approximation to (q^* - q_k) and is defined by equating the right-hand side of (3.1) to zero. Thus the direction d_k satisfies a system of linear equations,

D(q_k)\, d_k = -f(q_k),   (3.2)

and the next iterate is given by

q_{k+1} = q_k + d_k.   (3.3)

Under standard assumptions about the nonsingularity of D(q^*) and the closeness of q_k to q^*, the sequence {q_k} converges quadratically to q^*.

The drawbacks of this method are the following:
1. the initial point must belong to a narrow neighborhood of the solution, otherwise the method diverges;
2. the method involves the resolution of a linear system, which causes problems if the Jacobian matrix D is singular or merely ill-conditioned.
In addition, this method needs too many computations. To avoid these drawbacks, more robust numerical methods are introduced: the optimization techniques.
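A minimal sketch of the Newton-Raphson iteration (3.1)-(3.3), applied to the position-only inverse kinematics of an assumed planar two-link arm (link lengths, target and initial point chosen for illustration, not taken from the paper's simulations). Each iteration solves the linear system (3.2), which is exactly the cost and fragility discussed above.

```python
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths (illustrative, not from the paper)

def fkine(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def newton_raphson_ik(x_des, q0, tol=1e-5, max_iter=50):
    """Solve f(q) = fkine(q) - x_des = 0 with the iteration (3.1)-(3.3)."""
    q = q0.copy()
    for _ in range(max_iter):
        f = fkine(q) - x_des
        if np.linalg.norm(f) < tol:
            return q
        d = np.linalg.solve(jacobian(q), -f)   # Eq. (3.2): D(q_k) d_k = -f(q_k)
        q = q + d                              # Eq. (3.3)
    raise RuntimeError("not converged")        # the divergent ('DV') case

print(newton_raphson_ik(np.array([0.5, 0.2]), q0=np.array([0.3, 0.5])))
```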

For the optimization approach, the following objective function is defined:

F(q) = \frac{1}{2} \sum_{i=1}^{l} (X_i - f_i(q))^2.   (3.4)

The kinematic controller has to determine the minimum of F(q). At this minimum it holds that F(q) = 0, and thus

X_i = f_i(q)  for i = 1, \ldots, l,   (3.5)

where q is an n-vector, n being the number of degrees of freedom; l = 12 when we must position and orient the end-effector, and l = 3 when there is only positioning. We then have a set of l nonlinear equations in n unknowns, with in general l ≠ n. Consequently, the kinematic controller can also be described as

\min_q F(q).   (3.6)

Methods differ in the demands they make on the user. Some only require him to supply a program to evaluate F(q). Others require in addition the calculation of the gradient vector

\nabla F(q) = D(q)^T f(q),   (3.7)

and a few need both first and second derivatives, so that the user also has to supply the Hessian matrix

\nabla^2 F(q) = D(q)^T D(q) + Q(q),   (3.8)

where

Q(q) = \sum_i f_i(q)\, \nabla^2 f_i(q).   (3.9)

We observe that the Hessian of a least-squares objective function consists of a special combination of first- and second-order information.
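To make the least-squares structure (3.4)-(3.9) concrete, the sketch below evaluates F, its gradient and the first-order (Gauss-Newton) part DᵀD of the Hessian for the same illustrative planar arm. Writing the residual as e(q) = X − f(q), the gradient is −D(q)ᵀe(q), which is the form (3.7) expressed in terms of the equation system being driven to zero. All numerical values are assumptions for illustration.

```python
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths (illustrative)

def fkine(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

x_des = np.array([0.5, 0.2])            # desired position X (assumed)

def F(q):
    """Least-squares objective of Eq. (3.4)."""
    e = x_des - fkine(q)
    return 0.5 * e @ e

def grad_F(q):
    """Gradient of F: with e(q) = X - f(q), grad F(q) = -D(q)^T e(q)."""
    return -jacobian(q).T @ (x_des - fkine(q))

def gauss_newton_hessian(q):
    """First-order part D^T D of the Hessian (Eq. 3.8), ignoring Q(q)."""
    D = jacobian(q)
    return D.T @ D

q = np.array([0.3, 0.5])
print(F(q), grad_F(q), gauss_newton_hessian(q), sep="\n")
```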

The same reasoning is also valid for the inverse kinematic velocity problem. The total velocity of the end-effector during coordinated motion is the superposition of all the elementary velocities that represent single-joint motions. Once the orientation and position of each frame are known, the computation of the Jacobian matrix involves, at most, vector cross-products. The objective of the inverse transformation is to compute the joint velocities required to achieve a desired end-effector velocity [13]. The elements of the Jacobian matrix are complex trigonometric functions of the joint coordinates, prohibiting symbolic inversion except for trivial manipulators. Since the inverse of the Jacobian itself is not of interest, numerical techniques can be employed to solve the linear equation system. The solution of the linear system is possible for configurations for which the Jacobian matrix is invertible. A singularity occurs at manipulator positions that correspond to physical boundaries of the robot workspace or to configurations at which two or more joint axes align. Singular positions, therefore, are manipulator-dependent and can be avoided at the planning stage [13].

The Jacobian J (Eq. (2.2)) specifies a linear mapping from velocities in joint space to velocities in Cartesian space. The inverse kinematic velocity problem then results in the resolution of a set of linear equations, which depends on the solution of the previous set of nonlinear equations. For a redundant manipulator, we have a linear system with more unknowns than equations. Such a system is underdetermined and may have more than one solution. In this case, the classical methods for the resolution of linear systems are not applicable. One alternative is to compute the right pseudo-inverse of the Jacobian matrix [10]. Another is the use of optimization methods, applicable to both redundant and nonredundant manipulators. Let us consider the linear system

A\,\dot{q} = b,   (3.10)

with A a symmetric positive definite matrix,

A = J^T J,  b = J^T \dot{x},   (3.11)

and \dot{q}^* the solution of the linear system. Let us define the residue

r = b - A\dot{q}   (3.12)

and the quadratic form

E(\dot{q}) = 0.5\,(\dot{q} - \dot{q}^*)^T A\,(\dot{q} - \dot{q}^*) = 0.5\, r^T A^{-1} r.   (3.13)

The solution \dot{q}^* corresponds to the vector which minimizes E(\dot{q}), and the gradient is given by

\nabla E(\dot{q}) = -r.   (3.14)

The resolution of the system A\dot{q} = b is thus equivalent to the minimization of the function E(\dot{q}):

\min_{\dot{q}} E(\dot{q}).   (3.15)
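A minimal sketch of this formulation for the velocity problem: it forms A = JᵀJ and b = Jᵀẋ as in (3.11) and minimizes the quadratic form by steepest descent with the exact step of Eq. (3.27) below. The 2 × 3 Jacobian is an assumed example of a redundant configuration, not data from the paper; starting from zero, the iterates stay in the row space of J, so the scheme returns the minimum-norm joint velocity.

```python
import numpy as np

def velocity_by_minimization(J, x_dot, tol=1e-8, max_iter=500):
    """Solve J q_dot = x_dot by minimizing the quadratic form of Eq. (3.13):
    steepest descent on A q_dot = b, with A = J^T J and b = J^T x_dot
    (Eq. 3.11), using the exact step of Eq. (3.27)."""
    A = J.T @ J
    b = J.T @ x_dot
    q_dot = np.zeros(J.shape[1])
    for _ in range(max_iter):
        r = b - A @ q_dot                 # residual, Eq. (3.12)
        if np.linalg.norm(r) < tol:
            break
        d = r                             # steepest-descent direction for E
        alpha = (r @ d) / (d @ A @ d)     # exact step, Eq. (3.27)
        q_dot = q_dot + alpha * d
    return q_dot

# Illustrative 2x3 Jacobian of a redundant arm (values assumed)
J = np.array([[-0.5, -0.3, 0.10],
              [ 0.2,  0.4, 0.15]])
print(velocity_by_minimization(J, np.array([0.05, 0.02])))
```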

In fact, this paper is concerned only with methods that make use of knowledge of the function and its gradient, such as the steepest descent method, the conjugate gradient method and the quasi-Newton techniques. Some other methods considered for nonlinear least-squares problems are optimization methods that make use of the special structure of the objective function F. Such methods are the Gauss-Newton method [18], the Levenberg-Marquardt method [4] and special quasi-Newton methods related to least-squares problems [11]. These methods are outside the scope of this study.

In the case of unconstrained minimization, a natural measure of progress is provided by the value of the objective function. It seems reasonable to require a decrease in F at each iteration and to impose the descent condition F_{k+1} < F_k for all k > 0. A method that imposes this requirement is termed a descent method. We shall discuss descent methods for unconstrained minimization. All the methods to be considered are of the form of the following model algorithm [11]:

Algorithm. Let q_k and \dot{q}_k be the current estimates of the solutions q^* and \dot{q}^*.
1. (test for convergence) If the conditions for convergence are satisfied, the algorithm terminates with q_k and \dot{q}_k as the solution.
2. (compute a search direction) Compute two nonzero n-vectors d_k and d'_k, the directions of search for the nonlinear and the linear system, respectively.


3. (compute a step length) Compute two positive scalars a_k and a'_k, the step lengths, for which it holds that

F(q_k + a_k d_k) < F(q_k)   (3.16)

and E(\dot{q}_k + a'_k d'_k) < E(\dot{q}_k).
4. (update the estimate of the minimum) Set q_{k+1} = q_k + a_k d_k, \dot{q}_{k+1} = \dot{q}_k + a'_k d'_k, k ← k + 1, and go back to Step 1.

We then have to determine the search direction and the stepsize. A fundamental requirement for a steplength algorithm associated with a descent method involves the change in F at each iteration. If convergence is to be assured, the steplength must produce a sufficient decrease in F. The sufficient-decrease requirement can be satisfied by several alternative sets of conditions on a_k. It is important to note that Step 3 of the model algorithm involves, in general, the solution of a univariate problem (finding the scalar a_k). This line minimum is determined by means of a number of trial points. In our case, these trial points would be used as control vectors for the robot system; consequently, this technique can cause an oscillating movement of the robot arm. Therefore, other techniques must be used to compute the steplength.
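The model algorithm can be written as a small generic loop in which the search direction and steplength rules are plugged in. The sketch below is only a skeleton: the objective used to exercise it is a stand-in quadratic, not the kinematic objective F of Eq. (3.4), and the backtracking rule simply enforces the descent condition F_{k+1} < F_k.

```python
import numpy as np

def descent_ik(F, grad_F, direction, steplength, q0, tol=1e-5, max_iter=100):
    """Skeleton of the model algorithm: test for convergence, compute a
    search direction, compute a step length, update the estimate."""
    q = q0.copy()
    for k in range(max_iter):
        g = grad_F(q)
        if np.linalg.norm(g) < tol:          # step 1: convergence test
            return q, k
        d = direction(q, g)                  # step 2: search direction
        a = steplength(q, d)                 # step 3: step length
        q = q + a * d                        # step 4: update
    return q, max_iter

# Example instantiation: steepest descent with a crude backtracking step
# on F(q) = 0.5*||q - q_target||^2 (a stand-in objective, not the paper's).
q_target = np.array([0.4, -0.2])
F      = lambda q: 0.5 * np.sum((q - q_target)**2)
grad_F = lambda q: q - q_target
direction = lambda q, g: -g

def steplength(q, d, a=1.0):
    while F(q + a*d) >= F(q):   # enforce the descent condition F(k+1) < F(k)
        a *= 0.5
    return a

print(descent_ik(F, grad_F, direction, steplength, np.array([1.0, 1.0])))
```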

3.1. Search direction calculation

The steepest descent direction is the negative gradient of the objective function. This is the direction in which the objective function locally decreases the most:

d_k = -\nabla F(q_k).   (3.17)

The steepest descent method is not recommended because of its slow rate of convergence. The pattern of points generated by the steepest descent method is of a zigzagging nature; this is typical of the behaviour of this method. The second class of algorithms for minimizing unconstrained functions of several variables using only first-derivative information is the class of conjugate-direction methods, in which the general function is minimized by separately minimizing along n directions. A popular conjugate gradient technique, the Polak-Ribière method, is briefly described next. The search direction is computed by

d_k = -\nabla F(q_k) + \beta\, d_{k-1},   (3.18)


where

\beta = \frac{\nabla F(q_k)^T (\nabla F(q_k) - \nabla F(q_{k-1}))}{\|\nabla F(q_{k-1})\|^2}.   (3.19)
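A sketch of the Polak-Ribière direction (3.18)-(3.19) inside a simple nonlinear conjugate-gradient loop. The test function, the crude backtracking line search and the descent-direction safeguard are illustrative assumptions; they are not the objective or the steplength rule used in the paper.

```python
import numpy as np

def polak_ribiere_direction(g_new, g_old, d_old):
    """Conjugate-gradient direction of Eqs. (3.18)-(3.19)."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    return -g_new + beta * d_old

# Assumed convex quadratic test function (not the paper's F)
def F(q):      return 0.5*q[0]**2 + 2.0*q[1]**2 + 0.5*q[0]*q[1]
def grad_F(q): return np.array([q[0] + 0.5*q[1], 4.0*q[1] + 0.5*q[0]])

q = np.array([1.0, 1.0])
g = grad_F(q)
d = -g                                     # first direction: steepest descent
for _ in range(100):
    if np.linalg.norm(g) < 1e-8:
        break
    if g @ d >= 0:                         # safeguard: keep d a descent direction
        d = -g
    a = 1.0                                # crude backtracking line search
    while F(q + a*d) >= F(q) and a > 1e-12:
        a *= 0.5
    q_new = q + a*d
    g_new = grad_F(q_new)
    d = polak_ribiere_direction(g_new, g, d)
    q, g = q_new, g_new
print(q)                                   # approaches the minimizer (0, 0)
```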

In fact, there exist two popular choices for β: the Fletcher-Reeves one and the Polak-Ribière one. However, simulation results suggest that the Polak-Ribière choice is preferable to the Fletcher-Reeves one [11,16]. Newton's method [5,15] is too difficult to implement, and methods which approximate the Hessian or inverse Hessian are more useful as unconstrained minimization methods. A class of very popular algorithms are the quasi-Newton methods. A quasi-Newton method is any algorithm that generates a sequence of points tending to a local minimizer of F(q) in some open set by means of the following general equation:

q_{k+1} = q_k - a_k H_k \nabla F(q_k),   (3.20)

where H_k is an approximation to the inverse Hessian at a presumed local minimizer and a_k is the step-size scalar. The step-size scalar is sometimes taken equal to 1 and more often chosen using an optimal stepsize procedure. Quasi-Newton methods are based on the idea of building up curvature information as the iterations of a descent method proceed, using the observed behaviour of F and \nabla F. The theory of quasi-Newton methods is based on the fact that an approximation to the curvature of a nonlinear function can be computed without explicitly forming the Hessian matrix. Different algorithms compute H_k in different ways. The best-known quasi-Newton method for minimizing an unconstrained function is the BFGS method [16]. The updating formula is

H_{k+1} = \left(I - \frac{o_k y_k^T}{o_k^T y_k}\right) H_k \left(I - \frac{y_k o_k^T}{o_k^T y_k}\right) + o_k o_k^T (o_k^T y_k)^{-1},   (3.21)

where

o_k = q_k - q_{k-1}  and  y_k = \nabla F(q_k) - \nabla F(q_{k-1}).   (3.22)

This updating formula, known as the BFGS update, does seem to work better than the others [5,11].
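The BFGS update (3.21)-(3.22) can be sketched as follows; the quadratic test function, the unit trial step with backtracking, and the curvature-condition safeguard are illustrative choices, not taken from the paper.

```python
import numpy as np

def bfgs_update(H, o, y):
    """Inverse-Hessian update of Eq. (3.21), with o_k = q_k - q_{k-1}
    and y_k = grad F(q_k) - grad F(q_{k-1}), Eq. (3.22)."""
    oy = o @ y
    if oy <= 1e-12:          # skip the update if the curvature condition fails
        return H
    I = np.eye(len(o))
    V = I - np.outer(o, y) / oy
    return V @ H @ V.T + np.outer(o, o) / oy

# Quasi-Newton iteration (3.20) on an assumed quadratic test function
def F(q):      return 0.5*q[0]**2 + 2.0*q[1]**2 + 0.5*q[0]*q[1]
def grad_F(q): return np.array([q[0] + 0.5*q[1], 4.0*q[1] + 0.5*q[0]])

q = np.array([1.0, 1.0])
H = np.eye(2)                               # initial inverse-Hessian estimate
g = grad_F(q)
for _ in range(100):
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                              # quasi-Newton direction, Eq. (3.20)
    a = 1.0                                 # try the unit step first,
    while F(q + a*d) >= F(q) and a > 1e-12: # backtrack to enforce descent
        a *= 0.5
    q_new = q + a*d
    g_new = grad_F(q_new)
    H = bfgs_update(H, q_new - q, g_new - g)
    q, g = q_new, g_new
print(q)                                    # approaches the minimizer (0, 0)
```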


3.2. Steplength calculation

The step a can be determined by a variety of rules [11], but always so that at least F(q_{k+1}) < F(q_k). Some early methods choose a_k as the value of a which minimizes the one-variable function:

g(a) = F(q_k + a\, d_k).   (3.23)

This is known as an exact line search [15]. However, recent work has concentrated on approximate line searches, where a choice a_k is accepted provided simply that F is reduced by a sufficient amount, for example the approach presented in [9], which uses a first-order Taylor approximation. By means of the linearization of the kinematic equation x = f(q) in the neighborhood of q_k, the value of a_k can be calculated analytically:

a_k = \frac{(D d_k)^T (X_k - f(q_k))}{\|D d_k\|^2}.   (3.24)

This general expression for a_k is valid for all the optimization techniques presented above. In practice, the usual convention is to take a_k equal to unity to achieve the best convergence with Newton-type and quasi-Newton methods. These two choices will be tested in the simulation results which follow, for the resolution of the inverse position problem. It is also possible to calculate the steplength a' analytically:

E(\dot{q}_{k+1}) = E(\dot{q}_k + a'_k d'_k),   (3.25)

thus

E(\dot{q}_{k+1}) = 0.5\,[\,b - A(\dot{q}_k + a'_k d'_k)\,]^T A^{-1} [\,b - A(\dot{q}_k + a'_k d'_k)\,],   (3.26)

and E(\dot{q}_{k+1}) will be minimum for

a'_k = \frac{r_k^T d'_k}{d'^T_k A\, d'_k}.   (3.27)

This steplength will be used in the resolution of the inverse velocity problem.
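The two analytic steplengths can be written directly as small helper functions: Eq. (3.24) for the nonlinear position problem and Eq. (3.27) for the quadratic velocity problem. The numerical values below are assumptions for illustration, not the paper's simulation data.

```python
import numpy as np

def steplength_position(D, d, x_des, f_q):
    """Step of Eq. (3.24) for the position problem, obtained by linearizing
    the kinematic map around q_k."""
    Dd = D @ d
    return Dd @ (x_des - f_q) / (Dd @ Dd)

def steplength_velocity(A, r, d):
    """Exact step of Eq. (3.27) for the quadratic velocity problem."""
    return (r @ d) / (d @ A @ d)

# Illustrative values (assumed)
D     = np.array([[-0.5, -0.3], [0.4, 0.25]])   # Jacobian at q_k
d     = np.array([0.2, -0.1])                   # current search direction
x_des = np.array([0.5, 0.2])
f_q   = np.array([0.45, 0.25])
print(steplength_position(D, d, x_des, f_q))

A = D.T @ D                                     # quadratic form of Eq. (3.11)
r = np.array([0.01, -0.02])                     # current residual b - A q_dot
print(steplength_velocity(A, r, d))
```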

4. Simulation results

This section presents the application of the described methods to the manipulators whose schematic diagrams are shown in Fig. 1. Numerous simulations were performed to compare their behaviour. Tables 1 and 2 present the number of iterations (function evaluations) when the initial point of the simulations and the Cartesian point we want to attain are changed, and when the steplength is computed with Equation (3.24) or is taken as unity. The approximation precision was ε = 10⁻⁵. All units used are SI.

Fig. 1. (a) Stanford arm, (b) redundant manipulator.

Table 1
Point   Steplength   Newton-Raphson   Quasi-Newton   Conjugate gradient   Steepest descent
1st     α(3.24)      DV                7              10                   28
1st     α = 1        DV               52             >100                 >100
2nd     α(3.24)      DV                4              11                   15
2nd     α = 1        DV                9             >100                 >100
3rd     α(3.24)      DV               17              36                   42
3rd     α = 1        DV               22             >100                 >100
4th     α(3.24)      DV               18              30                   55
4th     α = 1        DV               29             >100                 >100
DV means that the method diverged.

Table 2
Point   Steplength   Newton-Raphson   Quasi-Newton   Conjugate gradient   Steepest descent
1st     α(3.24)      >100             16              15                   16
1st     α = 1         21             >100            >100                 >100
2nd     α(3.24)       17              10              11                   10
2nd     α = 1         13             >100            >100                 >100
3rd     α(3.24)       11              15              11                   15
3rd     α = 1        >100            >100            >100                 >100
4th     α(3.24)       69              33              25                   40
4th     α = 1        >100            >100            >100                 >100

Stanford arm:
[The numerical homogeneous transformation matrices specified here for the Stanford arm are not legible in this copy.]

(v_1, ω_1) = (0.05, 0.035, -0.05, 0, 0.1, 0)
(v_2, ω_2) = (0.025, 0.025, -0.025, 0, 0, 0)
(v_3, ω_3) = (0.0125, 0, -0.0125, 0, -0.1, 0)
(v_4, ω_4) = (0, 0, 0, 0, 0, 0)
q_0 = (0, 1, 0, 0, 1, 1) and \dot{q}_0 = (0, 0, 0, 0, 0, 0).
Each solution found is used as the initial point for the next simulation. For the redundant robot, we concern ourselves only with the positioning problem. As previously, we compute, at the same time, the joint positions and velocities for four operational points:

Redundant manipulator:
X_1 = (0.2, 0.2, 0.2),  V_1 = (0.5, 0.5, 0.5)
X_2 = (0.3, 0.1, 0.1),  V_2 = (0.6, 0.4, 0.4)
X_3 = (0.4, 0, 0.1),    V_3 = (0.7, 0, 0.4)
X_4 = (0.6, 0, 0),      V_4 = (0, 0, 0)
q_0 = (0, 0.5, 1, 0) and \dot{q}_0 = (0, 0, 0, 0).

Remark. When the solutions are obtained, we must keep in mind that an angle is always defined modulo 2π.

These two manipulators differ very much: the first has five rotary and one prismatic degrees of freedom (2RP3R, nonredundant), the second three rotary and one prismatic degrees of freedom (2RPR, redundant). When the initial point is very close to the solution, the Newton-Raphson method in most cases needs fewer function evaluations than the other methods. However, at each iteration it involves the resolution of a set of linear equations; it needs more computations and is less robust than the quasi-Newton or conjugate gradient methods.

The choice a = 1 is not a good one for the steepest descent, conjugate gradient and quasi-Newton methods because, in general, the point (q_k + d_k) is not the line minimum in the search direction d_k. Although the steplength a = 1 is recommended for the Newton-Raphson method, the stepsize computed by Eq. (3.24) shows better behaviour. The simulation results show that:
1. It is preferable to use the stepsize computed by the Van Der Bosch method [9] rather than the unit stepsize.
2. All the unconstrained optimization techniques presented are more robust than the Newton-Raphson method. The convergence domains of the optimization techniques are larger than those of the Newton-Raphson method.
3. Typically, the rate of convergence of the steepest descent method is linear, while the rate of convergence of the conjugate gradient and quasi-Newton methods is superlinear. From the simulation results, the quasi-Newton method seems to behave better than the conjugate gradient or simple gradient method. Compared with quasi-Newton methods, conjugate gradient methods usually require more function evaluations.

5. Conclusions

The motion control of mechanical manipulators in Cartesian space has been the subject of intensive research in recent years. The tasks are carried out by precisely controlling the linear and angular position and velocity of the hand along a preplanned path in Cartesian space. For the transformation of a Cartesian trajectory into the corresponding joint trajectories, the so-called inverse kinematic problem must be solved. This paper has presented a comparative study of some unconstrained optimization techniques applied to the kinematic control of redundant and nonredundant manipulators. We can conclude from the simulation results that, for a general manipulator, it is preferable to use the quasi-Newton method with the stepsize computed by the Van Der Bosch method rather than any other method considered. This method has a superlinear rate of convergence (close to that of the Newton-Raphson method) and avoids the drawbacks of that latter method.

References
[1] H. Asada and J.J.E. Slotine, Robot Analysis and Control (John Wiley, New York, 1986).
[2] Y. Bestaoui, Adaptive hierarchical control for robotic manipulators, Robotics and Autonomous Systems 4 (2) (1988) 145-155.
[3] Y. Bestaoui, Comparative study of unconstrained optimization techniques applied to robotic kinematic control, in: Proc. Second Internat. Symp. on Robotics and Manufacturing, Albuquerque, USA (1988) 21-28.
[4] M. Boumahrat and A. Gourdin, Méthodes numériques appliquées (OPU, 1983).
[5] K.W. Brodlie, Unconstrained minimization, in: Proc. State of the Art in Numerical Analysis, Dundee, UK (1977) 229-268.
[6] A.K. Chassiakos and M.A. Christodoulou, Kinematic control and obstacle avoidance for redundant manipulators, in: Proc. 25th Conf. on Decision and Control, Athens, Greece (1986) 96-97.
[7] C. Chevallereau and W. Khalil, A new method for the solution of the inverse kinematics of redundant robots, in: Proc. IEEE Internat. Conf. on Robotics and Automation, Philadelphia, USA (1988) 1842-1848.
[8] J.J. Craig, Introduction to Robotics: Mechanics and Control (Addison-Wesley, Reading, MA, 1986).
[9] P.P.J. Van Der Bosch, W. Jongkind, H.R. van Nauta Lemke and D. Zoetekouw, Kinematic control by means of optimization, in: Proc. Internat. Symp. on Theory of Robots, Vienna, Austria (1986) 47-52.
[10] A. Fournier, Génération de mouvements en robotique, application des inverses généralisées et des pseudo-inverses, Doctorat ès Sciences Thesis, Montpellier, France, 1980.
[11] P.E. Gill, W. Murray and M.H. Wright, Practical Optimization (Academic Press, New York, 1981).
[12] A.A. Goldenberg, J.A. Apkarian and H.W. Smith, A new approach to kinematic control of robot manipulators, J. Dynamic Syst. Measurement Control 109 (1987) 97-103.
[13] J.U. Korein and J. Ish-Shalom, Robotics, IBM Syst. J. 26 (1) (1987) 55-95.
[14] J. Lloyd and V. Hayward, Kinematics of common industrial robots, Robotics and Autonomous Systems 4 (2) (1988) 169-191.
[15] D.G. Luenberger, Introduction to Linear and Nonlinear Programming (Addison-Wesley, Reading, MA, 1973).
[16] G.P. McCormick, Nonlinear Programming (John Wiley, New York, 1983).
[17] R. Mezencev, J. Szymanowski and Y. Bestaoui, On-line implementation of the reference trajectories for a multiaxis robot, in: Proc. 12th World Congress IMACS, Paris, France (1988).
[18] J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables (Academic Press, New York, 1970).
[19] R.P. Paul, Robot Manipulators: Mathematics, Programming, and Control (MIT Press, Cambridge, MA, 1981).
[20] R. Schmidt, Advances in Nonlinear Parameter Optimization (Springer, Berlin, 1982).
[21] R.A. Tapia, Diagonalized multiplier methods and quasi-Newton methods for constrained optimization, J. Optim. Theory Appl. 22 (2) (1977) 135-194.
[22] C.H. Wu and R.P. Paul, Resolved motion force control of robot manipulator, IEEE Trans. Systems Man Cybernet. 12 (3) (1982) 266-275.