
Automatica, Vol. 3, pp. 151-159. Pergamon Press, 1966. Printed in Great Britain.

PAPER III

SENSITIVITY OF AN OPTIMAL SYSTEM TO SPECIFIED ERRORS OF MEASUREMENT

J. M. C. CLARK

1. SENSITIVITY: ITS MEANING AND PURPOSE

Given a mathematical model in the form of first order vector differential equations and specified initial values of state, the determination of the optimal control schedule of a system reduces to a particular problem of numerical analysis studied by several authors, for instance [1]. In practice, however, errors occur in the fitting of the mathematical model, and, more simply, in the measurement of the initial state. In this paper only the effect of errors in the measurement of the initial state is considered.

The use of a control schedule calculated to be optimal for the measured initial values of state of an inaccurately measured system leads, in general, to a loss of performance. If we have a sensitivity relation between this loss and some parameters describing the errors in measurement, we are able to identify the values of these error parameters which give us an acceptable level of performance loss. In this paper we take as error parameters the errors of measurement themselves, and derive the sensitivity of an optimal system to specified errors of measurement. This is not the only approach; in a second paper [2] we take as error parameters some sufficient statistics of the measurement errors and consider the sensitivity of the system to variations of the statistical description of the state.

In Section 2 we express the performance loss as a function of the initial value of state. We approximate this performance loss by a Taylor's expansion about the measured value of x. In Sections 3 and 4 we give methods of determining the coefficients of this expansion. These we call the sensitivity coefficients. In Sections 5 and 6 we discuss the properties of these coefficients and in Section 7 their computation. In Section 8 we give an example of the calculation of the sensitivity coefficients in a problem describing a missile and target on collision courses.

One computationally feasible method of representing the performance loss as a function of the measurement errors is to expand the performance loss as a truncated Taylor's series in the measurement errors. This will be valid as long as the measurement errors are small. It is shown in the next section that the first derivatives of the performance loss are zero, so it is necessary to calculate at least the second order partial derivatives to get a non-trivial approximation. A method of doing this is given in the following sections.

[1] J. V. BREAKWELL: The optimization of trajectories. J. Soc. Indust. Appl. Math. 7 (1959).
[2] J. M. C. CLARK: The sensitivity of an optimal system to measurement errors. Proc. Convention on Advances in Automatic Control (to be published in The Chartered Engineer).
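The approximation used throughout can be written out explicitly as follows (the notation $\hat{x}$ for the measured initial state, $\delta x$ for the measurement error and $R_x$, $R_{xx}$ for the first and second derivatives of the performance loss anticipates Sections 2-5):

$$R(\hat{x} + \delta x) \approx R(\hat{x}) + R_x\,\delta x + \tfrac{1}{2}\,\delta x^T R_{xx}\,\delta x = \tfrac{1}{2}\,\delta x^T R_{xx}\,\delta x,$$

since $R(\hat{x}) = 0$ by definition and, as shown in Section 3, $R_x = 0$ at $\hat{x}$.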


2. DESCRIPTION OF THE SYSTEM

Let the differential equations describing the system be

$$\frac{dx}{dt} = f(x, u), \qquad (1)$$

where x is a column vector of n state variables, f is a column of n known functions, u is a column vector of m control variables and t is the independent variable (often time). Let $J[x(t_0)]$ be the minimum (maximum) value of the performance index $P[x(t_f)]$ for initial conditions $x(t_0)$, where $t_f$ is given by the stopping condition

$$S[x(t_f)] = 0. \qquad (2)$$

Let $Q[x(t_0), \hat{u}(t)]$ be the value of $P[x(t_f)]$ for initial conditions $x(t_0)$, but where the control schedule $\hat{u}(t)$ is chosen to be that which is optimal for a nominal set of initial conditions $\hat{x}(t_0)$. A fortiori,

$$Q[x(t_0), \hat{u}(t)] \geq J[x(t_0)].$$

The performance loss R we take to be $R = Q - J$.

This is illustrated in Fig. 1.

Fig. 1. Comparison between optimally and nominally controlled performance indices.

In this formulation of the problem t only occurs implicitly, but should it occur explicitly it can always be included as an extra state variable with t-derivative one. Similarly, if we are interested in the sensitivity of the problem to errors in the measurement of one of the constant parameters of the system, such as the terminal condition, we can include this as an extra state variable with t-derivative zero.

3. THE FIRST ORDER DERIVATIVES OF THE PERFORMANCE LOSS

The following equations are derived in the manner of DREYFUS [3].

[3] S. DREYFUS: Variational problems with state variable constraints. Rand publication P-2605, July (1962).

Since J and Q are the values of P at the terminal time, they are unaffected by movements of the initial conditions along their respective trajectories. This is summarised by the Proper Descent Rate equations of DREYFUS [3]:

$$H(J_x, x, u) = J_x f = 0, \qquad (3)$$

$$H(Q_x, x, \hat{u}) = Q_x f = 0,$$

where

$$J_x = \left(\frac{\partial J}{\partial x_1}, \frac{\partial J}{\partial x_2}, \ldots, \frac{\partial J}{\partial x_n}\right),$$

with a similar definition for $Q_x$. $J_x$ and $Q_x$ are the Lagrange multipliers of the constraining differential equations and H is the Hamiltonian of the system. The necessary condition of optimality of J is

$$H_u = J_x f_u = 0, \qquad (4)$$

where $f_u$ denotes the $n \times m$ matrix of partial derivatives

$$f_u = \left[\frac{\partial f_i}{\partial u_j}\right].$$

The first order partial derivatives of R are $Q_x - J_x$. Now on the nominal trajectory the terminal values of $Q_x$ and $J_x$ satisfy

$$J_x|_{t=t_f} = Q_x|_{t=t_f} = (P_x + \mu S_x)|_{t=t_f}, \qquad (5)$$

where $\mu$ is the scalar multiplier of the stopping condition (2); by (3) it takes the value $\mu = -(P_x f / S_x f)|_{t=t_f}$. Differentiating equations (3) with respect to x gives

$$f^T J_{xx} + J_x f_x = 0, \qquad (6a)$$

$$f^T Q_{xx} + Q_x f_x = 0, \qquad (6b)$$

where $J_{xx} = J_{xx}^T = (J_x^T)_x$, etc. The x-derivative of u, $u_x$, does not appear in (6a) as its coefficient is zero by virtue of (4). So on the nominal trajectory $J_x$ and $Q_x$ satisfy

$$\frac{d}{dt}(J_x^T) = J_{xx} f = -f_x^T J_x^T, \qquad (7a)$$

$$\frac{d}{dt}(Q_x^T) = Q_{xx} f = -f_x^T Q_x^T. \qquad (7b)$$

Equation (7a) is the Euler-Lagrange equation of the system. Equations (7), together with the terminal conditions (5), show that on the nominal trajectory $Q_x = J_x$. Hence

$$R_x = 0 \qquad (8)$$


along the nominal trajectory. So to get a non-trivial approximation to R it is necessary to consider $R_{xx}$.

4. THE SECOND ORDER DERIVATIVES OF THE PERFORMANCE LOSS, $R_{xx}$

To find $R_{xx}$ along the nominal trajectory it is necessary to integrate a set of equations in $J_{xx}$ and $Q_{xx}$ backwards in time from known terminal conditions. We derive the equation describing the evolution of $J_{xx}$ as follows: differentiating (6a) with respect to x, we see that $J_{xx}$ satisfies

$$H_{xx} + \frac{d}{dt}(J_{xx}) + J_{xx} f_x + f_x^T J_{xx} + (J_{xx} f_u + H_{xu})\,u_x = 0, \qquad (9)$$

where $H'(x, J_x) = J_x f = H(J_x, x, u)$ is the Hamiltonian with u eliminated by means of (4), the time derivative is taken along the nominal trajectory, and subscripts on H denote partial derivatives with $J_x$, x and u treated as independent arguments. To find the unknown $u_x$ we differentiate (4) with respect to x:

$$f_u^T J_{xx} + H_{ux} + H_{uu}\,u_x = 0. \qquad (10)$$

So from (9) and (10)

$$\frac{d}{dt}(J_{xx}) = -H_{xx} - J_{xx} f_x - f_x^T J_{xx} + (J_{xx} f_u + H_{xu})\,H_{uu}^{-1}\,(H_{ux} + f_u^T J_{xx}). \qquad (11a)$$

To derive the corresponding equation in $Q_{xx}$ we set $u_x$ to zero. The equation satisfied by $Q_{xx}$ which corresponds to (11a) is

$$\frac{d}{dt}(Q_{xx}) = -H_{xx} - Q_{xx} f_x - f_x^T Q_{xx}. \qquad (11b)$$

The terminal conditions for $J_{xx}$ and $Q_{xx}$ are found by equating the coefficients of $\delta x$ in the first differentials of equations (2) and (5) at $t = t_f$:

$$dS = S_x\,\delta x + S_x f\,dt = 0, \qquad (12)$$

$$J_{xx}\,\delta x = Q_{xx}\,\delta x = M_{xx}\,\delta x + d\mu\,S_x^T + (M_{xx} f + H_x^T)\,dt, \qquad (13)$$

where

$$M_{xx} = (P_{xx} + \mu S_{xx})|_{t=t_f}.$$

$\delta(\,)$ denotes a free differential at $t = t_f$ and $d(\,)$ denotes a differential constrained to satisfy the terminal conditions. $d\mu$ and $dt$ can be eliminated from (12) and (13) with the aid of (6). $R_{xx}$ along the nominal trajectory is simply $Q_{xx} - J_{xx}$. Unfortunately it does not appear to be possible to derive a set of equations satisfied by $R_{xx}$ but not involving either $J_{xx}$ or $Q_{xx}$, so to obtain $R_{xx}$ it is necessary to solve both (11a) and (11b).
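As an illustration of the computation just described, the following sketch integrates (11a) and (11b) backwards in time along a stored nominal trajectory and forms $R_{xx} = Q_{xx} - J_{xx}$ at the initial time. It is only a sketch: the functions supplying $f_x$, $f_u$, $H_{xx}$, $H_{xu}$, $H_{uu}$ and the stored trajectory are assumed to be provided by the user, all names are illustrative, and a crude fixed-step Euler sweep stands in for whatever integration routine is preferred.

```python
import numpy as np

def backward_sensitivity(ts, xs, us, Jx, fx, fu, Hxx, Hxu, Huu, JQ_T):
    """Integrate (11a) and (11b) backwards along a stored nominal trajectory.

    ts, xs, us : arrays giving the nominal time grid, states and controls.
    Jx         : array of costates J_x(t) along the trajectory (needed by the H-derivatives).
    fx, fu, Hxx, Hxu, Huu : user functions of (x, u, p) returning the matrices
                 f_x (n*n), f_u (n*m), H_xx (n*n), H_xu (n*m), H_uu (m*m).
    JQ_T       : common terminal value of J_xx and Q_xx obtained from (12)-(13).
    Returns J_xx, Q_xx and R_xx = Q_xx - J_xx at the initial time.
    """
    Jxx, Qxx = JQ_T.copy(), JQ_T.copy()
    # crude backward Euler sweep; a library ODE integrator could be used instead
    for k in range(len(ts) - 1, 0, -1):
        dt = ts[k] - ts[k - 1]
        x, u, p = xs[k], us[k], Jx[k]
        A, B = fx(x, u, p), fu(x, u, p)
        hxx, hxu, huu = Hxx(x, u, p), Hxu(x, u, p), Huu(x, u, p)
        # equation (11a): matrix Riccati equation for J_xx
        K = Jxx @ B + hxu
        dJxx = -hxx - Jxx @ A - A.T @ Jxx + K @ np.linalg.solve(huu, K.T)
        # equation (11b): linear equation for Q_xx (u_x set to zero)
        dQxx = -hxx - Qxx @ A - A.T @ Qxx
        Jxx = Jxx - dt * dJxx   # stepping from t_k back to t_{k-1}
        Qxx = Qxx - dt * dQxx
    return Jxx, Qxx, Qxx - Jxx
```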

Sensitivity of an optimal systemto specifiederrors of ~

t

155

5. THE PROPERTIES OF $R_{xx}$

At $\hat{x}$, the nominal value of x, R = 0. For variations $\delta x$ of x about this value the first differential of R is zero by virtue of (8). As J is optimal, a fortiori, $Q \geq J$ and so the second differential of R,

$$d^2R = \delta x^T R_{xx}\,\delta x,$$

is non-negative. Hence $R_{xx}$ is positive semi-definite.

$R_{xx}$ has a zero eigenvalue. This follows from equations (6):

$$f^T R_{xx} = f^T(Q_{xx} - J_{xx}) = -(Q_x - J_x)f_x = 0.$$

So f is an eigenvector of $R_{xx}$ with zero eigenvalue. Thus $R_{xx}$ is no more than positive semi-definite, and measurement errors in the direction of f have only third order effects on the performance loss.

6. BOUNDS ON THE PERFORMANCE LOSS

If the range of measurement errors is limited in some way, far more can be said about the performance loss. For instance, if the $\delta x$ have a Gaussian probability distribution with zero mean and covariance matrix V, then it is reasonable to consider only those $\delta x$ with a probability of occurrence greater than a certain amount; that is,

$$\delta x^T V^{-1}\,\delta x \leq c, \quad \text{a constant.} \qquad (14)$$

Now on the constraint (14), the $\delta x$ for which $\delta x^T R_{xx}\,\delta x$ is stationary are given by adjoining (14) with Lagrange multipliers and equating the $\delta x$-derivative to zero:

$$(R_{xx} - \lambda V^{-1})\,\delta x = 0. \qquad (15)$$

The $\lambda$'s and $\delta x$'s satisfying this equation are the eigenvalues and eigenvectors of the constrained system. As $R_{xx}$ and $V^{-1}$ are positive semi-definite, the $\lambda$'s are non-negative and can be numbered in order of increasing magnitude $\lambda_0, \lambda_1, \lambda_2, \ldots$, the corresponding eigenvectors being $e_0, e_1, e_2, \ldots$. It is a property of quadratic forms [4] that, if $\delta x$ is orthogonal to all $e_j$ for which $j \geq$ some i, then

$$\delta x^T R_{xx}\,\delta x \leq \lambda_i c. \qquad (16)$$

So the eigenvalue $\lambda_i$ provides an upper bound on the performance loss for a subspace of errors; the larger the value of i, the larger is the dimension of the subspace. Hence, if the value of $\lambda_i$ is insignificant, it is pointless to attempt a more accurate measurement of its subspace of errors.

If we have no probability description of the measurement errors, we can make the Bayesian assumption that all errors are equally likely. Then the $\lambda_i$ are simply the eigenvalues of $R_{xx}$. This assumption is made in the example in Section 8.

[4] R. BELLMAN: Introduction to Matrix Analysis, Chap. 7. McGraw-Hill, New York (1960).
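A sketch of this eigenvalue analysis, assuming $R_{xx}$ and the covariance matrix V are available as NumPy arrays (SciPy is used for the generalized symmetric eigenproblem (15); names are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def error_eigenstructure(Rxx, V=None):
    """Solve (15): R_xx e = lambda V^{-1} e, returning eigenvalues in increasing order.

    With V = None the 'equally likely errors' assumption of the text is made,
    so the lambdas are simply the eigenvalues of R_xx.
    """
    Rxx = 0.5 * (Rxx + Rxx.T)
    if V is None:
        lam, vecs = np.linalg.eigh(Rxx)
    else:
        Vinv = np.linalg.inv(V)
        lam, vecs = eigh(Rxx, Vinv)       # generalized problem R_xx e = lam V^{-1} e
    return lam, vecs                      # columns of vecs are e_0, e_1, ...
```

The bound (16) then says that measurement errors confined to the span of the low-eigenvalue eigenvectors can cause only a small performance loss.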


7. THE COMPUTATION OF THE SECOND ORDER DERIVATIVES

Equation (11a) is a matrix Riccati equation in $J_{xx}$, and (11b) a matrix linear equation in $Q_{xx}$. In general, if the state equations (1) are stable in the forward direction, (11a) and (11b) will be stable in the backward direction, which is the direction in which they are integrated in the computation of $J_{xx}$ and $Q_{xx}$. To integrate (11a) and (11b) it is necessary to know the values of $J_x$, x and u along the trajectory. These would have to be stored in the computer, though storage of $J_x$ and x could be avoided by generating them by means of equations (1) and (7a) along with $J_{xx}$ and $Q_{xx}$.

If the performance index of the problem contains an integral expression, this would have to be introduced as an extra state variable in the Mayer formulation given in this paper. However, the extra n + 1 second order derivatives turn out to be zero, so the dimensionality of the computing problem would be unaffected.

A technique of BREAKWELL, SPEYER and BRYSON [5] for calculating optimal trajectories adjacent to a nominal trajectory can easily be adapted to make use of the equations given in this paper. However, the adaptation is, in practice, limited to problems with only one terminal constraint, whereas the method of [5] is not. The method requires integration of the x-equations in the forward direction and the $J_x$- and $J_{xx}$-equations in the backward direction along the nominal trajectory to determine the coefficients in the expression

$$\delta u = -H_{uu}^{-1}\left(H_u^T + (H_{ux} + f_u^T J_{xx})\,\delta x\right). \qquad (17)$$

The initial conditions are changed by a small amount and the x-equations reintegrated, the control being corrected by the amount in (17); $\delta x$ is the change in state from the nominal trajectory. After a few iterations the new trajectory can be regarded as the nominal trajectory and the whole process can be repeated. nm + 2m + n terms have to be stored along a trajectory. This method has been used with success. It can also be regarded as a special case of the method given by S. K. MITTER in his accompanying paper.

8. AN EXAMPLE OF THE CALCULATION OF $R_{xx}$ AND ITS EIGENVALUES: A MISSILE AND TARGET ON COLLISION COURSES

The technique developed in the previous sections was applied to the problem of determining the performance loss sensitivity of a missile and target on collision courses. The rate of turn (u) of the missile is controllable. The performance index is the sum of the square of the miss distance and an integral cost of control action. The equations describing the system are (see Fig. 2):

Fig. 2. Nomenclature for the missile-target problem.

[5] J. V. BREAKWELL, J. L. SPEYER and A. E. BRYSON: Optimization and control of nonlinear systems using the second variation. J.S.I.A.M. Contr. A 1, No. 2 (1963).


$$\frac{dx_1}{dt} = u x_2 - V + V_T\cos x_3,$$

$$\frac{dx_2}{dt} = -u x_1 + V_T\sin x_3,$$

$$\frac{dx_3}{dt} = -u,$$

$$\frac{dx_4}{dt} = \tfrac{1}{2}u^2,$$

$$P = \tfrac{1}{2}x_2^2 + a x_4, \qquad S \equiv x_1 = 0,$$

where $x_1$ is the range projected onto the direction of the missile, $x_2$ the projected miss distance, $x_3$ the angle between the flight paths of the missile and target, and $x_4$ the accumulated control cost. V and $V_T$ are the constant speeds of the missile and target; a is the weighting coefficient of the control cost. The performance loss sensitivity was calculated for the case where the collision courses of the missile and target are straight and at right angles. The missile was assumed to have twice the speed of the target (see Fig. 3).
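For this system the optimality condition (4) takes a simple explicit form. Writing $p_i = \partial J/\partial x_i$ as in the equations below, $H_u = p_1 x_2 - p_2 x_1 - p_3 + a u = 0$, so that (this is a consequence of (4) and the dynamics above, worked out here for reference rather than quoted from the original text)

$$u = \frac{p_2 x_1 + p_3 - p_1 x_2}{a}, \qquad H_{uu} = a,$$

which is the source of the $a^{-1}$ factors in the sensitivity equations that follow.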

Fig. 3. The collision courses of the missile and target, and the directions of the error eigenvectors.

Writing $\partial J/\partial x_i$ as $p_i$ and $\partial^2 J/\partial x_i \partial x_j$ as $J_{ij}$, the equations giving $p_i$ and $J_{ij}$ are:

$$\frac{dp_1}{dt} = u p_2,$$

$$\frac{dp_2}{dt} = -u p_1,$$


$$\frac{dp_3}{dt} = V_T(p_1\sin x_3 - p_2\cos x_3),$$

$$\frac{dJ_{11}}{dt} = 2u J_{12} + \frac{1}{a}b_1^2,$$

$$\frac{dJ_{12}}{dt} = u(J_{22} - J_{11}) + \frac{1}{a}b_1 b_2,$$

$$\frac{dJ_{13}}{dt} = u J_{23} + V_T(J_{11}\sin x_3 - J_{12}\cos x_3) + \frac{1}{a}b_1 b_3,$$

$$\frac{dJ_{22}}{dt} = -2u J_{12} + \frac{1}{a}b_2^2,$$

$$\frac{dJ_{23}}{dt} = -u J_{13} + V_T(J_{12}\sin x_3 - J_{22}\cos x_3) + \frac{1}{a}b_2 b_3,$$

$$\frac{dJ_{33}}{dt} = 2V_T(J_{13}\sin x_3 - J_{23}\cos x_3) + \frac{1}{a}b_3^2 + V_T(p_1\cos x_3 + p_2\sin x_3),$$

where

$$b_1 = x_2 J_{11} - x_1 J_{12} - J_{13} - p_2,$$

$$b_2 = x_2 J_{12} - x_1 J_{22} - J_{23} + p_1,$$

$$b_3 = x_2 J_{13} - x_1 J_{23} - J_{33}.$$

The terminal conditions, at $x_1 = 0$, are given by:

$$p_1 = -\frac{1}{g}\left(x_2 V_T\sin x_3 + \tfrac{1}{2}a u^2\right), \qquad p_2 = x_2, \qquad p_3 = 0,$$

and

$$J_{11} = \frac{u p_2}{g} - \frac{V_T}{g^2}\left(u p_2\cos x_3 - 2u p_1\sin x_3 - V_T\sin^2 x_3\right),$$

$$J_{12} = -\frac{1}{g}\left(u p_1 + V_T\sin x_3\right),$$

$$J_{13} = -\frac{V_T}{g}\left(p_2\cos x_3 - p_1\sin x_3\right),$$

$$J_{22} = 1, \qquad J_{23} = J_{33} = 0,$$

where

$$g = u x_2 - V + V_T\cos x_3.$$

Trivially, $p_4 = a$ and $J_{i4} = 0$, $i = 1, 2, 3, 4$, at all times, and so these have been eliminated from the above equations. The set of equations giving the derivatives of Q are the same with $b_1$, $b_2$ and $b_3$ set to zero. For the reasons given in Section 6 we present the eigenvalues of $R_{xx}$ rather than its elements. These eigenvalues are plotted as functions of $(aV)^{-1/3}x_1$ in Fig. 4.
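A sketch of how the right-hand sides of these backward equations might be coded is given below. It is a direct transcription of the equations above; the nominal $x(t)$, $u(t)$, the terminal conditions and the backward integration loop are assumed to be supplied separately, and all names are illustrative.

```python
import numpy as np

def missile_sensitivity_rhs(x, u, p, J, a, VT, include_b=True):
    """Time derivatives of p = (p1, p2, p3) and of the symmetric matrix J = [J_ij]
    along the nominal trajectory.  With include_b=False the b-terms are dropped,
    giving the corresponding equations for the derivatives of Q."""
    x1, x2, x3 = x
    p1, p2, p3 = p
    s, c = np.sin(x3), np.cos(x3)
    if include_b:
        b1 = x2 * J[0, 0] - x1 * J[0, 1] - J[0, 2] - p2
        b2 = x2 * J[0, 1] - x1 * J[1, 1] - J[1, 2] + p1
        b3 = x2 * J[0, 2] - x1 * J[1, 2] - J[2, 2]
    else:
        b1 = b2 = b3 = 0.0
    dp = np.array([u * p2, -u * p1, VT * (p1 * s - p2 * c)])
    dJ = np.empty((3, 3))
    dJ[0, 0] = 2 * u * J[0, 1] + b1 * b1 / a
    dJ[0, 1] = u * (J[1, 1] - J[0, 0]) + b1 * b2 / a
    dJ[0, 2] = u * J[1, 2] + VT * (J[0, 0] * s - J[0, 1] * c) + b1 * b3 / a
    dJ[1, 1] = -2 * u * J[0, 1] + b2 * b2 / a
    dJ[1, 2] = -u * J[0, 2] + VT * (J[0, 1] * s - J[1, 1] * c) + b2 * b3 / a
    dJ[2, 2] = 2 * VT * (J[0, 2] * s - J[1, 2] * c) + b3 * b3 / a + VT * (p1 * c + p2 * s)
    dJ[1, 0], dJ[2, 0], dJ[2, 1] = dJ[0, 1], dJ[0, 2], dJ[1, 2]  # symmetry
    return dp, dJ
```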

Fig. 4. The eigenvalues of $R_{xx}$ as functions of the initial conditions.

The three actual positions of the target which would give rise to measurement errors in the direction of the error eigenvectors, for $(aV)^{-1/3}x_1 = 2$, are given in Fig. 3. The magnitude of the error in each case is $e_i^T e_i = 0.25$, $i = 0, 1, 2$. In Fig. 4, for $(aV)^{-1/3}x_1 < 1.2$, $\lambda_1 \approx 0.1\lambda_2$, and so for this range of $x_1$ the performance loss can be reduced by measuring the component of the state error in the direction of $e_2$; measurements of the components of error in the directions $e_0$ and $e_1$ give little or no improvement.

Acknowledgements. This work was supported by the Ministry of Aviation. The numerical computations were done on the University of London Mercury computer.