
Mechatronics Vol. 3, No. 1, pp. 107-115, 1993. Printed in Great Britain. 0957-4158 $6.00+0.00 © 1992 Pergamon Press Ltd

COMPARISON OF PARALLEL COMPUTATION SCHEMES FOR CALCULATING ROBOT JACOBIANS

A. S. MORRIS* and A. Y. H. ZOMAYA†

*Robotics Research Group, Department of Control Engineering, University of Sheffield, Sheffield S1 3JD, U.K. and †Department of Electrical and Electronic Engineering, University of Western Australia, Perth, Western Australia 6009, Australia

(Received 9 April 1992; accepted 11 June 1992)

Abstract--The paper addresses a point of major difficulty in designing robot control algorithms; that of computing the kinematic Jacobian matrix and its inverse on-line. Various techniques for calculating the forward and inverse Jacobian have been modified into a form optimized for parallel processing. The important problem of singularities in the robot configuration has also been considered, and robust matrix inversion techniques are proposed for this case. All algorithms have been implemented on transputer networks and computation times measured. The paper emphasises the importance of including communication overheads in comparisons of the computational efficiency of alternative algorithms and processor networks. Theoretical processing times based on computer cycle times and arithmetic operation counts are shown to be a false basis for comparison. Whilst considering the specific case of computation of Jacobian matrices for a robot manipulator, the paper provides a useful example of the considerations and constraints involved in distributing any algorithm across a multi-processor network.

INTRODUCTION

Forward and backward Jacobians are kinematic matrices which provide conversion between expressions for the velocity of the end-effector of a robot manipulator with respect to the alternative coordinate systems of joint and spatial (world) coordinates. The use of Jacobian matrices in various robot applications, for tasks such as inverse kinematic computation and control, is documented in several texts [1-3]. This paper is particularly directed towards the development of fast computation schemes for Jacobian matrices for use within controllers designed for high-speed robot motion.

The target trajectory for the robot end-effector is defined in terms of a spatial trajectory between specified start and end points, with a particular velocity profile along the required spatial trajectory. While the target position and velocity are defined in spatial (x, y, z) coordinates, the control torques and forces designed to drive the end-effector along the required trajectory are applied to the joint actuators and so have to be measured in terms of joint coordinates. Conversion between spatial and joint coordinates is therefore required for both position and velocity profiles.

The robot manipulator is a complex system governed by a set of coupled, highly non-linear differential dynamic equations. Design of a controller to drive the robot along some specified path at high speed is a difficult task because a set of kinematic and dynamic computations has to be carried out at a high frequency.


Current industrial practice is to simplify the dynamic computations by ignoring coupling effects and designing conventional servo-mechanism controllers for each joint, linearized about a typical operating point. However, such a treatment is inaccurate even at low speeds and leads to gross trajectory errors at medium to high operating speeds, when large-magnitude, coupled inertia forces are excited. At low operational speeds, neglect of the dynamic forces by the controller leads only to small trajectory errors because the forces are then of low magnitude. However, for high-speed motion, the dynamic forces are of large magnitude and it is essential that the controller takes account of them. Unfortunately, a major source of difficulty arises in that the dynamic parameters which determine the coupling forces between the manipulator links are time-varying. This time-varying nature arises because the dynamic parameters are functions of the manipulator geometrical configuration, which changes as the manipulator moves. Therefore, the controller must recompute the dynamic forces at each sampling interval. The sampling rate must typically be at least 60 Hz because the resonant frequency of most manipulators is between 5 and 10 Hz [4]. In consequence, all computation of kinematic and dynamic parameters and calculation of control inputs must be completed in a time less than or equal to 16.6 ms.

Schemes for computation of the dynamic parameters using parallel architectures have been presented before [5-7]. However, little attention has previously been paid to the problem of computing the Jacobian matrices on-line, which forms the dominant task in computation of the kinematic parameters. Fast and efficient Jacobian computation is very important for high-speed controllers, and especially so where control of incremental and fine motions of the end-effector is required.
To match the minimum sampling rate specified above for the controller, the Jacobian must also be re-computed at a frequency of at least 60 Hz. This requirement, together with the computational complexity of the Jacobian matrix calculation, dictates that parallel computation and a multi-processor network be used. In the work described in this paper, the Inmos transputer and its specially designed Occam programming language have been adopted to show how recent advances in VLSI computer architectures can be utilized to improve the performance of Jacobian-based control schemes.

JACOBIAN AND INVERSE JACOBIAN FORMULATION

The Jacobian matrix J(q) relates changes in Cartesian (spatial) coordinates δx to changes in joint coordinates δq according to

δx = J(q) δq,   (1)

where x is the Cartesian coordinates vector and q is the joint coordinates vector. Similarly, the inverse Jacobian matrix J^{-1}(q) relates δq to δx according to

δq = J^{-1}(q) δx.   (2)
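The mapping of Eqn (1) can be made concrete with a minimal sketch for a two-link planar arm (an illustrative arm, not the PUMA manipulator treated later; the link lengths l1, l2 and function names are assumptions of this sketch):

```python
import math

def planar_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of a 2-link planar arm: maps (dq1, dq2) to (dx, dy)."""
    # Forward kinematics: x = l1 cos q1 + l2 cos(q1+q2),
    #                     y = l1 sin q1 + l2 sin(q1+q2).
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def end_effector_delta(J, dq):
    """dx = J(q) dq, the relation of Eqn (1)."""
    return [sum(J[r][c] * dq[c] for c in range(2)) for r in range(2)]

J = planar_jacobian(0.0, math.pi / 2)
dx = end_effector_delta(J, [0.1, 0.0])
```

A small joint motion of the first joint thus produces a Cartesian displacement given by the first column of J.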

Various methods of computing the Jacobian matrix (J) are available [8-10], and these have been reviewed by Orin and Schrader [11]. The technique adopted in this paper,


because of its simple, algorithmic nature, is the one proposed by Whitney [8] and later refined by Paul [12]. The starting point for the derivation of J is the standard homogeneous kinematic transformation matrix A_i relating the coordinate frames of successive manipulator links, using conventions first defined by Denavit and Hartenberg [13]. A_i is given by

A_i = [ cos θ_i    −cos α_i sin θ_i    sin α_i sin θ_i    a_i cos θ_i
        sin θ_i     cos α_i cos θ_i   −sin α_i cos θ_i    a_i sin θ_i
        0           sin α_i            cos α_i            d_i
        0           0                  0                  1 ],   (3)

for successive coordinate frames (x_{i−1}, y_{i−1}, z_{i−1}) and (x_i, y_i, z_i), where the z axis of each link lies along the joint axis actuating the link, θ_i measures rotation about the z_{i−1} axis of link i with respect to link i − 1, d_i is the distance between the x_i and x_{i−1} axes, a_i is the shortest distance between the z_i and z_{i−1} axes, and α_i is a rotation angle about the x_i axis which describes the twist angle between the two coordinate frames.

For a typical six degrees of freedom (6 dof) manipulator, the end-effector tip is related to the coordinate frame of the robot base by the transformation matrix T^0_6, where

T^0_6 = A1 A2 A3 A4 A5 A6.   (4)

Similar matrices relating the end-effector to intermediate joint frames can also be formulated as

T^1_6 = A2 A3 A4 A5 A6,   (5)

T^2_6 = A3 A4 A5 A6,   (6)

T^3_6 = A4 A5 A6,   (7)

T^4_6 = A5 A6,   (8)

T^5_6 = A6.   (9)
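Eqn (3) and the chain products of Eqns (4)-(9) can be transcribed directly; the sketch below assumes the (θ, α, a, d) parameter ordering defined above, and the function names are illustrative:

```python
import math

def dh_matrix(theta, alpha, a, d):
    """Homogeneous link transform A_i of Eqn (3)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -ca * st,  sa * st, a * ct],
            [st,  ca * ct, -sa * ct, a * st],
            [0.0, sa,       ca,      d],
            [0.0, 0.0,      0.0,     1.0]]

def mat_mul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def chain(As):
    """T = A_1 A_2 ... A_n, as in Eqns (4)-(9)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for A in As:
        T = mat_mul(T, A)
    return T
```

Chaining two pure-translation links (θ = α = d = 0, a = 1) produces a net translation of 2 along x, which is a quick sanity check of the composition.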

If the elements of T^i_6 are represented by

T^i_6 = [ n_x  o_x  a_x  p_x
          n_y  o_y  a_y  p_y
          n_z  o_z  a_z  p_z
          0    0    0    1  ],   (10)

then the ith column of the Jacobian matrix, J_i, can be expressed as [12]

J_i = β_i [ n_y p_x − n_x p_y
            o_y p_x − o_x p_y
            a_y p_x − a_x p_y
            n_z
            o_z
            a_z ] + (1 − β_i) [ n_z
                                o_z
                                a_z
                                0
                                0
                                0 ],   (11)

where

β_i = 1 if i is a rotational joint,
β_i = 0 if i is a translational joint.
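A column of J can then be extracted from T^i_6 as sketched below. The sign convention follows the reconstruction of Eqn (11) above and should be checked against Paul [12]; the function name is illustrative:

```python
def jacobian_column(T, rotational=True):
    """ith Jacobian column from T = T^i_6, per Eqn (11).

    T is a 4x4 nested list whose rotation columns are n, o, a and whose
    translation is p, as laid out in Eqn (10)."""
    n = [T[r][0] for r in range(3)]
    o = [T[r][1] for r in range(3)]
    a = [T[r][2] for r in range(3)]
    p = [T[r][3] for r in range(3)]
    if rotational:                       # beta_i = 1
        return [n[1] * p[0] - n[0] * p[1],
                o[1] * p[0] - o[0] * p[1],
                a[1] * p[0] - a[0] * p[1],
                n[2], o[2], a[2]]
    return [n[2], o[2], a[2], 0.0, 0.0, 0.0]   # beta_i = 0
```

For a translational joint the column reduces to the third row of the rotation part, exactly as the second term of Eqn (11) states.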

It is generally accepted that the best procedure for computing the inverse Jacobian (J^{-1}) is to compute the forward Jacobian first and then calculate its inverse.

COMPUTATION OF JACOBIAN MATRIX

In dividing the algorithm for evaluating the Jacobian matrix into sub-tasks for parallel computation on a network of processors, the aim is to achieve an optimal configuration which maximizes processor utilization and minimizes the communication overhead between processors. In part satisfaction of these requirements, all constant kinematic parameters are duplicated in the memory of each processor. Three different methods of computing the algorithm, involving alternative networks and modes of division of the algorithm over the network, are proposed below. For each, simulation is used to compare performance. The manipulator simulated is a PUMA model 560, which has six rotational joints and is therefore typical of the most computationally complicated manipulators encountered in industry at present. The kinematic parameters for this robot have been published previously by Elgazzar [14].

Method 1

In this method, a network of four transputers arranged as one master processor (P0) and three slave processors (P1, P2, P3) running in parallel, as shown in Fig. 1, was used. The tasks allocated to the processors were as follows:

P0 supervises the network, sends q_i to the slave processors and receives the columns of J.
P1 computes columns 4-6 of J using Eqns (7)-(10).
P2 computes columns 2-3 of J using Eqns (5) and (6).
P3 computes column 1 of J using Eqn (4).

[Fig. 1. Tree-structured four-processor network. BL: bidirectional link; Pi: processor (transputer).]


The operations performed by processors P0 and P1 are summarised below; P2 and P3 operate in a very similar fashion to P1. In this summary, operations preceded by 'SEQ' are sequential and operations preceded by 'PAR' are carried out in parallel.

Processor 0.

PAR
  Send q1 and q2 to P3.
  Send q3 and q4 to P2.
  Send q5 and q6 to P1.
SEQ
  Receive J1 from P3.
  Receive J2 and J3 from P2.
  Receive J4, J5 and J6 from P1.

Processor 1.

SEQ
  Form A5 and A6.
  Store A5 × A6 as T^4_6.
PAR
  Send T^4_6 to P2.
  Send T^4_6 to P3.
  Receive A4 from P2.
SEQ
  Store A4 × T^4_6 in temporary location 1.
  Compute J4 from temporary location 1.
  Compute J5 from T^4_6.
  Compute J6 from A6.
  Send J4, J5 and J6 to P0.

In the above scheme, computation is minimized by avoiding redundant operations. For example, only the first three rows of the (4 × 4) matrices are transferred between processors. Also, useless multiplications and additions are avoided by simplifying the matrix product terms symbolically before computation; for example, T^{i−1}_{i+1} is computed directly from the simplified closed-form elements of the product rather than by multiplying A_i and A_{i+1} element by element.
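The master/slave division of method 1 can be sketched with ordinary Python threads standing in for transputers. The transputer links and Occam channels are not modelled, and the slave body below is a labelled stand-in rather than the real column computation of Eqns (4)-(11):

```python
from concurrent.futures import ThreadPoolExecutor

def slave(columns, q):
    """Stand-in slave: returns the Jacobian columns it is responsible for,
    each tagged with the joint values it received from the master."""
    return {c: ("J%d" % c, tuple(q)) for c in columns}

def master(q):
    """P0: farm column groups to three slaves (P1: 4-6, P2: 2-3, P3: 1),
    then reassemble the columns of J in order."""
    allocation = [(4, 5, 6), (2, 3), (1,)]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(lambda cols: slave(cols, q), allocation))
    J = {}
    for part in results:
        J.update(part)
    return [J[c] for c in sorted(J)]

cols = master([0.1] * 6)
```

The point of the sketch is the farming pattern, not speed: in Python the communication is shared memory, whereas on the transputer network every send/receive crosses a physical link and contributes to the overheads measured later.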

Method 2

This uses the same network as method 1 (Fig. 1) but reduces inter-processor communication by redistributing the task allocations as follows:

P1 computes columns 5-6 of J using Eqns (8)-(10).
P2 computes columns 3-4 of J using Eqns (6) and (7).
P3 computes columns 1-2 of J using Eqns (4) and (5).


Method 3

This method uses a different network, shown in Fig. 2, and a different task allocation, with the aim of reducing still further the overhead of inter-processor communication. The number of processors is expanded to seven, with the slave processors arranged as two layers of three, the lower-layer processors acting as slaves to the upper-layer processors. This allows each processor to be dedicated to computing just one column of J.

Simulation results

Execution times for computation of the Jacobian matrix using each of the three task-distribution methods outlined above are compared in Table 1. Figures for the cases where the network processors are (a) all T414 transputers and (b) all T800 transputers are included to illustrate the much superior performance of the T800. Results for the same simulation carried out sequentially on a single transputer and on a SUN 3/50 workstation are also included for comparison.

[Fig. 2. Tree-structured seven-processor network. BL: bidirectional link; Pi: processor (transputer).]

Table 1. Computation times for Jacobian matrix

                            Theoretical               Actual
                            processing time (ms)      processing time (ms)
Algorithm     Transputers   T414       T800           T414     T800     SUN
Sequential    1             6.9        0.3217         10.4     2.1      15.0
Method 1      4             2.22       0.1075         3.84     0.768    -
Method 2      4             3.027      0.1413         4.67     0.973    -
Method 3      7             3.633      0.1692         1.9      0.38     -

Processing times are quoted in terms of two quantities, theoretical processing time (TPT) and actual processing time (APT). TPT is computed for parallel and sequential operation according to the formulae

T_parallel = max_{i ∈ k} (t_i),   (12)

T_sequential = Σ_{i=1}^{k} t_i,   (13)

where k is the number of processors and t_i is the theoretical processing time for each processor, calculated by counting the number of arithmetic operations performed. APT is the actual processing time measured, which is greater than TPT because it includes communication time as well as processing time.

The figures in Table 1 demonstrate the importance of measuring communication time when evaluating alternative task-allocation strategies. Considering only the theoretical processing time, method 1 is the fastest and method 3 the slowest. However, when communication overheads are considered, method 3 becomes the fastest. It is also interesting to note that the large processing-speed advantage of the T800 over the T414 transputer is greatly diminished by communication overheads.
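Eqns (12) and (13) amount to a maximum over processors versus a sum over processors. The sketch below uses purely illustrative (not measured) per-processor times to show why a balanced task split minimizes TPT, while saying nothing about communication:

```python
def tpt_parallel(times):
    """Eqn (12): theoretical parallel time is set by the slowest processor."""
    return max(times)

def tpt_sequential(times):
    """Eqn (13): theoretical sequential time is the sum over processors."""
    return sum(times)

# Hypothetical per-processor times (ms) for two task splits.
balanced = [1.0, 1.1, 0.9]
unbalanced = [2.4, 0.4, 0.2]
```

Both splits have the same sequential cost (3.0 ms), but the balanced split halves the theoretical parallel time; the measured APT figures in Table 1 show how communication can reverse such theoretical rankings.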

COMPUTATION OF INVERSE JACOBIAN MATRIX

Two classical methods of matrix inversion are the Gaussian Elimination (GE) and Gauss-Jordan (GJ) techniques. These have a structure which can be readily adapted for parallel computation, as described below (GE and GJ methods). However, singularity points cause problems with both of these techniques. In singular configurations, the manipulator loses one or more degrees of freedom, the determinant of J approaches zero and the inverse kinematic equation has either zero or infinitely many solutions. The practical effect is that, at singular points, it is impossible to move the robot hand in some directions no matter what joint velocities are demanded. Suitable modifications to cope with singularity points are also discussed (GESCP and GEMP methods).

Gaussian Elimination method (GE)

The principle of this method is to express the ith variable q_i as a function of the variables q_{i+1}, ..., q_N and thus eliminate q_i from the system. (N − 1) such steps yield an equation in the single variable q_N, which is easily solved. The algorithm was implemented on a multiprocessor transputer network arranged in four levels, as shown in Fig. 3. Levels 1, 3 and 4 had one processor each. The number of processors in level 2 was varied between 1 and 3 to observe the effect on computation speed.


[Fig. 3. Processor network for Gaussian Elimination algorithm.]

Level 1 augments J with the world coordinates vector x̂ to yield the input matrix [J : x̂]. Initially, the system is of order (6 × 6) for the PUMA robot. A check is made to avoid zero pivot elements, interchanging rows if necessary. Row 1 is normalized by dividing all elements in the row by its first element, and the normalized row is then sent to level 4. Next, rows 2 to n (n = 6 initially) are sent to the level 2 processor(s), one per processor. If the number of rows exceeds the number of level 2 processors, the excess rows are retained in the level 1 processor until a processor in level 2 becomes free.

Level 2 consists of one or more processors operating in parallel. Each operates on one row of the system matrix and makes the first element of the row zero by applying the formula

J_k = J_k − J_{ki} J_i,   k = i + 1, i + 2, ..., n,   (14)

where J_k is the processed row and J_i is the normalized first row, which is used in common by the array of processors. The processed row is then sent to level 3.


Level 3 receives the processed rows from level 2 and constructs a new system matrix, which is of order (n − 1) × 6 after the first iteration of the algorithm. This new system matrix is sent back to level 1 and the next iteration through levels 1-3 commences.

Level 4 receives rows from level 1 and constructs a 6 × 7 matrix which, after six iterations through levels 1-3, is of the form

[ 1  J12  J13  J14  J15  J16  x̂1
  0  1    J23  J24  J25  J26  x̂2
  0  0    1    J34  J35  J36  x̂3
  0  0    0    1    J45  J46  x̂4
  0  0    0    0    1    J56  x̂5
  0  0    0    0    0    1    x̂6 ].   (15)

Back substitution is then performed on this upper-triangular matrix to yield the joint values q_i.
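A serial sketch of the GE scheme follows, with the four processor levels of Fig. 3 collapsed into one loop; the row normalization and the elimination update of Eqn (14) are as described above, and the function name is illustrative:

```python
def gauss_eliminate(J, x):
    """Solve J dq = x by Gaussian Elimination plus back substitution."""
    n = len(J)
    M = [row[:] + [x[i]] for i, row in enumerate(J)]    # augment: [J : x]
    for i in range(n):
        if abs(M[i][i]) < 1e-12:                        # level 1: avoid a
            for r in range(i + 1, n):                   # zero pivot by row
                if abs(M[r][i]) > 1e-12:                # interchange
                    M[i], M[r] = M[r], M[i]
                    break
        M[i] = [v / M[i][i] for v in M[i]]              # normalize row i
        for k in range(i + 1, n):                       # level 2: Eqn (14)
            M[k] = [M[k][j] - M[k][i] * M[i][j] for j in range(n + 1)]
    dq = [0.0] * n                                      # level 4: back subst.
    for i in range(n - 1, -1, -1):
        dq[i] = M[i][n] - sum(M[i][j] * dq[j] for j in range(i + 1, n))
    return dq
```

In the transputer implementation, each pass of the inner elimination loop is what a level 2 processor performs on its assigned row.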

Gauss-Jordan method (GJ)

This method transforms the augmented input matrix to a reduced echelon form, which avoids the back-substitution step required by the GE algorithm. It requires a network of parallel processing elements arranged in three layers, as shown in Fig. 4. The level 1 processor carries out the same operations as level 1 in the GE method. Level 2 consists of one or more processors operating in parallel; each processor operates on the row sent to it by level 1 and performs an elimination process which sets the kth element in the row to zero. The processed row is then sent to level 3. Level 3 receives the processed rows from level 2 and forms a new system matrix of the form

[ 1  0  0  0  0  0  x̂1
  0  1  0  0  0  0  x̂2
  0  0  1  0  0  0  x̂3
  0  0  0  1  0  0  x̂4
  0  0  0  0  1  0  x̂5
  0  0  0  0  0  1  x̂6 ],   (16)

from which the joint variables q_i are calculated directly.

Gaussian Elimination with scaled column pivoting (GESCP)

As explained earlier, the unmodified GE and GJ algorithms fail in the presence of singularities. A row equilibration technique, as described by Forsythe [15], is one way of overcoming singularity problems. Instead of solving Eqn (2) directly, the following scaled equation is solved:

D^{-1} J δq = D^{-1} δx,   (17)

where D is a diagonal scaling matrix whose ith entry e_i is given by

e_i = max_{1≤k≤6} |J_{ik}|,   (18)

[Fig. 4. Processor network for Gauss-Jordan algorithm.]

for i = 1, ..., 6. Row interchange is performed in such a way that the pivoting element is selected to be the largest absolute value in the column at or below the diagonal, i.e.

|J_{pk}| = max_{k≤i≤6} |J_{ik}|.   (19)
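The scaled pivoting of Eqns (18) and (19) can be sketched serially as follows (the scales e_i are computed once from the initial matrix and swapped along with their rows; the function name is illustrative):

```python
def gescp_solve(J, x):
    """GE with scaled column pivoting: the pivot row is the one whose
    leading element is largest relative to its scale e_i = max_k |J_ik|."""
    n = len(J)
    M = [row[:] + [x[i]] for i, row in enumerate(J)]
    e = [max(abs(v) for v in row) for row in J]         # Eqn (18)
    for i in range(n):
        # Eqn (19): scaled pivot search in column i, at or below the diagonal
        p = max(range(i, n), key=lambda r: abs(M[r][i]) / e[r])
        M[i], M[p] = M[p], M[i]
        e[i], e[p] = e[p], e[i]
        M[i] = [v / M[i][i] for v in M[i]]              # normalize pivot row
        for k in range(i + 1, n):                       # eliminate below
            M[k] = [M[k][j] - M[k][i] * M[i][j] for j in range(n + 1)]
    dq = [0.0] * n                                      # back substitution
    for i in range(n - 1, -1, -1):
        dq[i] = M[i][n] - sum(M[i][j] * dq[j] for j in range(i + 1, n))
    return dq
```

On a badly scaled system the scaled pivot choice avoids dividing by a tiny leading element, which is the numerical failure mode near a singular configuration.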

Gaussian Elimination with maximal pivoting (GEMP)

An alternative solution to the singularities problem is to incorporate both row and column interchanges in a total pivoting approach. The solution is extracted from the system of equations

D^{-1} J B δq̂ = D^{-1} δx.   (20)

The pivoting element is selected to be

|J_{pt}| = max_{k≤i,t≤6} |J_{it}|.   (21)

The diagonal matrix B is assumed to have an ith entry f_i such that

f_i = max_{1≤k≤6} (|J_{ik}| / e_k),   for i = 1, ..., 6.   (22)


Simulation results

Execution times for calculation of the inverse Jacobian by the four methods outlined above are given in Table 2. Figures are quoted for, firstly, one processor in level 2 and, secondly, three processors in level 2. Computation times are also given for the two cases where (a) all processors are T414 transputers and (b) all processors are T800 transputers. Times for the SUN 3/50 workstation are listed for comparison as well. All times given are actual times which include communication overheads.

Table 2 shows that the Gaussian Elimination technique is faster than the Gauss-Jordan one. Also, as expected, the total computation time is significantly reduced by using three processors in level 2 instead of one. The more robust, modified Gaussian Elimination techniques with partial (GESCP) and total (GEMP) pivoting have slightly increased processing times, but real-time computation is still feasible. The most significant point about these results is the dramatic reduction in total computation time achieved by transputers for the forward and backward Jacobian matrices compared with computation on even a powerful serial computer such as the SUN workstation. Whilst the computation time on a SUN workstation for even the simplest (GE) algorithm is greater than the 16.6 ms allowed for the entire computation of the control input, the parallel transputer implementation produces times of the order of 1 ms, leaving ample time for computation of the rest of the control algorithm.

Lack of sufficient hardware prevented a greater number of processors being used in level 2. However, increasing the number in level 2 beyond three would require an additional layer of transputers, owing to the limit of four communication channels per transputer. Thus, little increase in computation speed is to be expected, because of the greater communication overhead.
This conclusion will have to be reassessed when other cheap parallel processing chips, similar to the transputer but with a greater number of communication channels, become available.

Table 2. Computation times for inverse Jacobian matrix

                            Computation time (ms)
Algorithm     Transputers   T414      T800      SUN
GE            4             10.10     1.43      22.0
GE            6             4.48      0.64      -
GJ            4             12.6      1.8       30.0
GJ            6             5.38      0.77      -
GESCP         4             13.0      1.8       60.0
GESCP         6             5.7       0.8       -
GEMP          4             18.0      2.6       80.0
GEMP          6             7.74      1.1       -

CONCLUSIONS

The paper has described suitable algorithms and feasible parallel-processing schemes for on-line calculation of forward and inverse Jacobian matrices. The ability to compute Jacobians on-line solves an important problem which currently exists in the design of robot controllers for high-speed operation.


Parallelism to reduce the computational burden has been introduced at both job and task levels. In the case of forward Jacobian computation, simulation of real-time performance has shown the superiority of the seven-processor network. The importance of considering communication overheads has been demonstrated, and false results have been shown to occur without such consideration. In the case of inverse Jacobian computation, the Gaussian Elimination technique has been shown to be faster, and the value of parallel processing has again been demonstrated. The important case of operation near singular points has been considered, and two different pivoting techniques have been proposed for this case. These have been shown to meet real-time constraints whilst still maintaining the robustness and stability of the Gaussian Elimination method.

REFERENCES

1. Schilling R. J., Fundamentals of Robotics: Analysis and Control. Prentice-Hall, Englewood Cliffs, New Jersey (1990).
2. Yoshikawa T., Foundations of Robotics: Analysis and Control. MIT Press, Cambridge, Massachusetts (1990).
3. Craig J. J., Adaptive Control of Mechanical Manipulators. Addison-Wesley, Reading, Massachusetts (1988).
4. Fu K. S. et al., Robotics: Control, Sensing, Vision and Intelligence, pp. 221-222. McGraw-Hill, New York (1987).
5. Zomaya A. Y. H. and Morris A. S., Dynamic simulation and modelling of robot manipulators using parallel architectures. Int. J. Robotics Automation 6, 129-139 (1990).
6. Zomaya A. Y. H. and Morris A. S., Modelling and simulation of robot dynamics using transputer-based architectures. Simulation 54, 269-278 (1990).
7. Zomaya A. Y. H., Efficient robot dynamics for high sampling rate motions: case studies and benchmarks. Int. J. Control 54, 793-814 (1991).
8. Whitney D. E., Trans. ASME J. Dyn. Syst. Meas. Control 94, 303-309 (1972).
9. Leahy M. B. et al., Efficient PUMA manipulator Jacobian calculation and inversion. J. Robotic Syst. 4, 185-197 (1987).
10. Wampler C. W., A new Jacobian formulation for general six-revolute manipulators. Proc. IEEE Int. Conf. on Robotics and Automation, California, Vol. 2, pp. 1046-1051 (1991).
11. Orin D. E. and Schrader W. W., Int. J. Robotics Res. 3, 66-75 (1984).
12. Paul R. P., Robot Manipulators: Mathematics, Programming and Control. MIT Press, Cambridge, Massachusetts (1981).
13. Denavit J. and Hartenberg R. S., J. Appl. Mech. 22, 215-221 (1955).
14. Elgazzar S., IEEE J. Robotics Automation 1, 142-151 (1985).
15. Forsythe G., Computer Solution of Linear Algebraic Systems. Prentice-Hall, Englewood Cliffs, New Jersey (1967).