Controller Design for High-Performance Visual Servoing




Copyright © IFAC 12th Triennial World Congress, Sydney, Australia, 1993

P.J. Corke* and M.C. Good**

*CSIRO Division of Manufacturing Technology, Locked Bag 9, Preston, Australia 3072
**Department of Mechanical and Manufacturing Engineering, University of Melbourne, Parkville, Australia 3052

Abstract. This paper describes the design of a control architecture for robot visual servoing, in particular high-performance tracking control. A 50 Hz vision system in conjunction with an end-effector mounted camera is used to close the robot's position loop. The limitations of pure feedback control are discussed, and an approach based on estimated target velocity feedforward is introduced. Experimental results are provided.

Keywords. Robots; image processing; target tracking; Kalman filters.

1. INTRODUCTION

Only recently has vision been investigated as a high-bandwidth sensor for robot control. Previously, the technological problems imposed by the huge output data rate of a video camera, and the difficulty of extracting meaning from that data, hampered this work. The term 'visual servoing' is used here to describe an approach in which the 'sensor dynamics' of the camera-processing subsystem are accounted for in the control loop, as opposed to the simpler 'look then move' approach.

In earlier work on high-performance visual servoing (Corke and Good 1992) a number of performance limiting factors were identified. The dominant dynamics of the open-loop system were found to be the time delay due to serial pixel transport, the multi-rate nature of the control system, and the pipeline delay in the servo communications. The first effect is inescapable, but the latter two are artifacts of the controller implementation and will be addressed in this paper. The limitations of pure feedback control will also be discussed.

Section 2 examines the limitations of the robot and the existing control approach, while Section 3 introduces the new control structure. Section 4 presents some experimental results in robot fixation and gaze control, for a variety of excitations including simple harmonic motion.

2. PERFORMANCE LIMITING FACTORS

The system architecture has been previously described (Corke and Good 1992). A Puma 560 robot is used, a common machine but far from state-of-the-art. Three performance limiting factors will be discussed in this section; they are due to the multi-rate nature of the controller, the limited velocity capability of the robot, and the maximum achievable feedback gain for stable control. The Unimate position controller limits performance due to its requirement for position setpoints at intervals of 7, 14, 28 or 56 ms. The machine vision subsystem operates at CCIR video field rate, a sampling interval of 20 ms. The resulting multi-rate system introduces a variable latency whose average value is significant (Corke and Good 1992).

Despite considerable literature on robot dynamics, the dominant performance limiting factors in highly geared, DC servo driven, arm-type manipulators are gravity, friction and motor back EMF (Corke 1992). The maximum velocity is ultimately limited by back EMF and viscous friction.

The previous control strategy (Corke and Good 1992) for 1 DOF sets joint velocity proportional to visually-sensed error, leading to an approximate discrete-time open-loop transfer function of

    \frac{k}{z(z - 1)(z - p_m)}    (1)

where \theta is the pan or tilt angle of the camera, {}^{i}X_t is the image plane coordinate of the target being tracked, p_m is the pole of the velocity loop and actuator, and k is a gain.


3. DESIGN

Motion so as to keep one point in the scene, the interest point, at the same location in the image plane is referred to as fixation. Knowledge of camera motion during fixation can be used to determine the 3D position of the target using active vision techniques. Such a 'low-level' unconscious mechanism operates in animal vision: the human eye typically fixates on 10 points per second, rapidly moving the high-resolution fovea of the eye over the scene so as to build up a detailed composite image. To achieve accurate gaze control the eye muscles accept feedback from the retina, as well as feedforward information from the vestibular system, giving head lateral and rotational acceleration, and position information from the neck muscles. To achieve high-performance tracking it is essential to incorporate feedforward control, derived from both the robot's joint angles and the vision system output (the retina).

In earlier work (Corke and Good 1992, Corke and Paul 1989) the robot translated the camera to keep the target object centered in the field of view. To achieve higher performance the last two axes of the robot are treated as an autonomous camera 'pan/tilt' mechanism on the end of a 3 DOF positioning system. The wrist axes have the requisite high performance for gaze control.

The camera used provides an electronic field shutter and an exposure time of 2 ms is used, so that it approximates the action of an ideal sampler, upon which the Z-transform model of the vision system is based; see Fig. 2.

3.1. Velocity control

In this work it is appropriate to consider the actuators as velocity sources, rather than position or torque sources as is common in robotics. In a non-structured environment precise positions have less significance than they would in, say, a manufacturing cell. It is also difficult to determine the positions of objects in the environment with precision. Observe that cars and ships are controlled in an unstructured environment not by a sequence of precise spatial locations, but rather by a velocity which is continually corrected on the basis of sensory inputs.

Given the difficulties with the existing position loop mentioned above, and the advantages of velocity control, axis velocity control loops have been implemented. These digital loops run at video field rate, 50 Hz, and axis velocity is estimated using a 3-point derivative of measured joint angles. Motor current, set via the Unimate interface, is proportional to velocity error. The relatively low sample rate reduces the disturbance rejection capability, and gives a relatively 'soft' position response.
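The velocity loop described above is simple enough to sketch in code. The Python fragment below gives one plausible reading of the 50 Hz loop; the 3-point backward-difference coefficients and the gain KP are assumptions made for illustration, since the paper specifies only a '3-point derivative' and a current demand proportional to velocity error.

    T = 0.02   # 50 Hz video-field sample interval (from the paper)
    KP = 1.0   # proportional gain, illustrative value only

    def velocity_estimate(theta_k, theta_k1, theta_k2):
        # 3-point backward difference of the measured joint angles;
        # this particular coefficient set is an assumption.
        return (3.0 * theta_k - 4.0 * theta_k1 + theta_k2) / (2.0 * T)

    def current_demand(vel_demand, theta_k, theta_k1, theta_k2):
        # Motor current is set proportional to the velocity error.
        return KP * (vel_demand - velocity_estimate(theta_k, theta_k1, theta_k2))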

3.2. Tracking performance

The aim of the visual tracking control loop is to keep the target centered in the camera's field of view. The target position, though, is essentially a disturbance input, and cannot be measured directly: the end-effector mounted camera measures only the difference between target and robot position. The common approaches to achieving good tracking performance are:

1. increase the loop gain so as to minimize the magnitude of any proportional error;
2. increase the 'type' of the system;
3. introduce feedforward of the signal to be tracked.

The root locus of (1) shows that the poles leave the unit circle for only moderate values of loop gain. Increasing the system type by introducing open-loop integrators results in instability for any finite loop gain. The next section addresses the issue of predictive target state estimation, which is necessary for feedforward control of target velocity.
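The claim about (1) is easy to check numerically. The Python sketch below sweeps the loop gain k and prints the largest closed-loop pole magnitude from the characteristic equation z(z - 1)(z - p_m) + k = 0; the pole location p_m and the gain values are illustrative, not taken from the paper.

    import numpy as np

    p_m = 0.5  # velocity-loop pole, illustrative value
    for k in (0.05, 0.1, 0.2, 0.5, 1.0):
        # characteristic polynomial z^3 - (1 + p_m) z^2 + p_m z + k
        poles = np.roots([1.0, -(1.0 + p_m), p_m, k])
        worst = np.abs(poles).max()
        print(f"k = {k:4.2f}  max |pole| = {worst:.3f}  "
              f"{'stable' if worst < 1.0 else 'unstable'}")

With these illustrative numbers the loop loses stability between k = 0.2 and k = 0.5, consistent with the statement that only moderate gains are usable.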

3.3. Target state estimation

The model of the robot dynamics in one DOF is shown in Fig. 1 and the details of the individual blocks are given in Table 1. The target is assumed to have second-order dynamics and a zero-mean Gaussian acceleration profile. The problem then is to estimate 5 states (2 target, 2 robot and 1 vision system) from a single output, but observability analysis shows that the complete state is not observable. To achieve observability it is also necessary to measure the joint angle, \theta_m. In simple terms, from the camera's point of view, it is impossible to determine whether the observed motion of the target is due to target or robot motion.
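The observability argument can be illustrated numerically. The sketch below assumes a hypothetical discrete-time realization with five states (target angle and rate, robot angle and rate, and one vision-delay state holding the last target-robot error); the paper gives only the conclusion, so these matrices are illustrative, not the authors' model.

    import numpy as np

    T, p = 0.02, 0.5  # sample interval; illustrative velocity-loop pole

    # Hypothetical 5-state model: [target angle, target rate,
    # robot angle, robot rate, vision delay state].
    A = np.array([
        [1.0,   T, 0.0, 0.0, 0.0],   # target angle
        [0.0, 1.0, 0.0, 0.0, 0.0],   # target rate (constant velocity)
        [0.0, 0.0, 1.0,   T, 0.0],   # robot angle
        [0.0, 0.0, 0.0,   p, 0.0],   # robot rate
        [1.0, 0.0, -1.0, 0.0, 0.0],  # delayed target-robot error
    ])

    def obsv_rank(C):
        C = np.atleast_2d(C)
        O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(5)])
        return np.linalg.matrix_rank(O)

    C_vision = np.array([[0.0, 0.0, 0.0, 0.0, 1.0]])  # camera output only
    C_with_joint = np.vstack([C_vision, [[0.0, 0.0, 1.0, 0.0, 0.0]]])

    print(obsv_rank(C_vision))      # 4 of 5: target and robot motion not separable
    print(obsv_rank(C_with_joint))  # 5 of 5: adding the joint angle restores observability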

The image plane coordinate of the target is given by

    {}^{i}X_t = \frac{f}{a}\left(\frac{\theta_m}{G} - \frac{{}^{C}x_t}{{}^{C}z_t}\right)    (2)

where a is the CCD pixel pitch, f the lens focal length, {}^{C}z_t and {}^{C}x_t the target distance and displacement, \theta_m the motor shaft angle, and G the reduction gear ratio. \theta_t = {}^{C}x_t / {}^{C}z_t is the target's angle with respect to an arbitrary axis fixed in space. An estimate of the target's angle is given by

    \hat{\theta}_t = \frac{\theta_m}{G} - \frac{a\,{}^{i}X_t}{f}    (3)

Fig. 1. Pan/tilt control structure (target velocity feedforward plus visual position feedback).

TABLE 1. Details of dynamics terms in Fig. 1.

    Block     Description
    G_r(z)    Dynamics of velocity loop
    G_s(z)    Arm structural dynamics
    G_v(z)    Vision system and lens dynamics

Equation (3) and the target states (position and velocity) are computed for each of the pan and tilt axes. Target state for each axis comprises angle and rotational velocity

    \mathbf{x} = \begin{bmatrix} \theta_t & \dot{\theta}_t \end{bmatrix}^T    (4)

In discrete-time state-space form the target dynamics are

    \mathbf{x}_{k+1} = \Phi\,\mathbf{x}_k + \boldsymbol{\xi}    (5)

    y_k = C\,\mathbf{x}_k    (6)

where \boldsymbol{\xi} represents state uncertainty and y_k is the observable output of the system, the target's pan or tilt angle. For constant-velocity motion the state-transition matrix, \Phi, is

    \Phi = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}    (7)

where T is the sampling interval, in this case 20 ms, the video field interval. The observation matrix C is

    C = \begin{bmatrix} 1 & 0 \end{bmatrix}    (8)

The target angle estimates obtained from the binary vision system, \hat{\theta}_t, are contaminated by spatial quantization noise, and simple differencing cannot be used to estimate velocity. A Kalman filter is therefore used to reconstruct the target's angle and velocity states, based on noisy observations of target position, and to predict the velocity one step ahead, thus countering the inherent delay of the vision system.

For each axis the Kalman filter (Åström and Wittenmark 1984) is given by

    K_{k+1} = \Phi P_k C^T \left( C P_k C^T + R_2 \right)^{-1}    (9)

    \hat{\mathbf{x}}_{k+1} = \Phi\,\hat{\mathbf{x}}_k + K_{k+1}\left( y_k - C\,\hat{\mathbf{x}}_k \right)    (10)

    P_{k+1} = \Phi P_k \Phi^T + R_1 I_2 - K_{k+1} C P_k \Phi^T    (11)

where K is a gain, P is the error covariance matrix and I_2 is the 2 × 2 identity matrix. R_1 and R_2 are input and output covariance estimates, and are used to tune the dynamics of the filter. This filter is predictive; that is, \hat{\mathbf{x}}_{k+1} is the predicted value for the next sample interval.

The Kalman filter equations are relatively complex and time consuming to execute in matrix form. Using the computer algebra package MAPLE the equations were reduced to an efficient scalar form, and the corresponding 'C' code automatically generated for inclusion in the real-time system. Suitable values for R_1 and R_2 are determined empirically.
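In the paper the filter was reduced to scalar form and emitted as 'C' code; as a readable stand-in, the following Python sketch implements one matrix-form predictive update of (9)-(11). The covariance values R1 and R2 are placeholders, since the paper determines them empirically.

    import numpy as np

    T = 0.02                       # video field interval
    Phi = np.array([[1.0, T],
                    [0.0, 1.0]])   # constant-velocity model, eq. (7)
    C = np.array([[1.0, 0.0]])     # observation matrix, eq. (8)
    I2 = np.eye(2)
    R1, R2 = 1e-3, 1e-2            # illustrative covariance estimates

    def kalman_step(x_hat, P, y):
        # One predictive update: returns the one-step-ahead state
        # prediction, countering the vision system's inherent delay.
        K = Phi @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R2)       # (9)
        x_next = Phi @ x_hat + K @ (y - C @ x_hat)                # (10)
        P_next = Phi @ P @ Phi.T + R1 * I2 - K @ C @ P @ Phi.T    # (11)
        return x_next, P_next

    # Example of one field: x_hat holds [angle, rate]; the predicted
    # rate x_hat[1] is the velocity feedforward term.
    x_hat, P = np.zeros(2), np.eye(2)
    x_hat, P = kalman_step(x_hat, P, y=0.01)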

3.4. Control structure

A schematic of the variable structure controller is given in Fig. 1. It operates in two modes, gazing or fixating. When there is no target in view, the system is in gaze mode and maintains the last gaze direction by closing a joint position control loop.

In fixation mode the system attempts to keep the target centered in the field of view. The velocity demand comprises the predicted target velocity feedforward, and the target image plane error feedback, so as to center the target on the image plane. When the target is within a designated region in the center of the image plane, integral action is enabled. The transition from gaze to fixation involves resetting the state estimates and covariance of the Kalman filter to allow convergence on the new target state. Fig. 2 shows details of the timing, particularly the temporal relationship between sampling of the image and joint angles.

Fig. 2. Details of system timing.
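The two-mode logic just described condenses to a short sketch. The Python class below reuses kalman_step from the earlier sketch; the center-region threshold, PI gains and reset covariance are assumptions for illustration.

    import numpy as np

    CENTER_REGION = 8   # pixels; integral-action window, assumed value
    P0 = np.eye(2)      # covariance after a gaze-to-fixation reset, assumed

    class PanTiltAxisController:
        def __init__(self, kp=0.05, ki=0.01):
            self.kp, self.ki = kp, ki      # PI gains, illustrative
            self.integral = 0.0
            self.fixating = False
            self.x_hat, self.P = np.zeros(2), P0.copy()

        def step(self, target_visible, pixel_error, target_angle):
            if not target_visible:
                # Gaze mode: the joint position loop holds the last
                # gaze direction; no visual velocity demand.
                self.fixating = False
                self.integral = 0.0
                return 0.0
            if not self.fixating:
                # Gaze -> fixation: reset estimates and covariance so
                # the filter can converge on the new target state.
                self.x_hat, self.P = np.array([target_angle, 0.0]), P0.copy()
                self.fixating = True
            self.x_hat, self.P = kalman_step(self.x_hat, self.P, target_angle)
            if abs(pixel_error) < CENTER_REGION:
                self.integral += pixel_error   # integral action near center
            # predicted target velocity feedforward + PI visual feedback
            return self.x_hat[1] + self.kp * pixel_error + self.ki * self.integral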

4. EXPERIMENTAL RESULTS

Early work measured performance with an optical step test, but a more serious challenge to tracking performance is a target with a continuous acceleration profile, such as provided by a pendulum or, in this case, an object on a turntable rotating at 0.5 rev/s. Fig. 3 shows the image plane coordinate error, and it can be seen that the target is kept within ±10 pixels of the reference. Fig. 4 shows the demanded, measured (almost identical) and predicted joint velocity for the pan axis. The feedforward signal provides around half the velocity demand, the rest being provided by the PI feedback law, due to unmodeled dynamics and joint to image plane interactions. The joint velocity of up to 2 rad/s is close to the sustained maximum of 2.3 rad/s. The previous, feedback-only, strategy results in large following errors and a lag of over 40°.

Fig. 3. Centroid tracking error of the target (in pixels on the image plane).

Fig. 4. Pan velocity demanded, measured and predicted (dotted).


5. CONCLUSION

The limitations of pure feedback control for visual servoing have been discussed, and a new control strategy, based on target velocity feedforward, has been introduced and verified experimentally.

REFERENCES

Åström, K. J. and Wittenmark, B. (1984). Computer Controlled Systems: Theory and Design, Prentice Hall.

Corke, P. (1992). Symbolic and numerical investigation of manipulator performance limiting factors, Proc. IEEE Region 10 Int. Conf., IEEE, pp. 563-567.

Corke, P. and Good, M. (1992). Dynamic effects in high-performance visual servoing, Proc. IEEE Int. Conf. Robotics and Automation, Nice, pp. 1838-1843.

Corke, P. and Paul, R. (1989). Video-rate visual servoing for robots, in V. Hayward and O. Khatib (eds), Experimental Robotics I, Vol. 139 of Lecture Notes in Control and Information Sciences, Springer-Verlag, pp. 429-451.
