Bias, variance, and estimation error in reduced order filters


Automatica, Vol. 12, pp. 589-600. Pergamon Press, 1976. Printed in Great Britain

ROBERT B. ASHER†, KENNETH D. HERRING‡ and JESSE C. RYLES§

Reduced order filters are generally biased, and the equations necessary to evaluate the bias, variance, and mean square estimates in tradeoff analyses indicate that separation between estimation and control is not possible.

Summary: It is shown that a reduced order filter is in general biased. The equations necessary to evaluate the bias, variance, and mean square estimation error for a reduced order filter are presented. From these equations it can be observed that separation between control and estimation does not occur. The equations can be used for hardware tradeoff analysis, reduced order filter sensitivity analysis, and reduced order filter synthesis.

I. INTRODUCTION

The Kalman filter has been utilized in many applications throughout the last decade. These applications range from chemical processing plants to aerospace navigation and guidance [1-3]. In many of these applications the implemented filter did not behave as the theoretical analysis predicted. Rather than achieving a degree of optimality, the filter diverged [4], causing an erroneous state estimate to be generated. This problem is due to several causes: erroneous state models including neglected biases, incomplete or erroneous knowledge of statistical models for filter derivation, nonlinearities, and roundoff and truncation errors [5, 6].

With respect to the erroneous state model problem, many filter designs have been developed without due regard to additional states that arise because of error sources such as biases and first or higher order Markov processes driving the dynamic system and/or the measurement system. As is well known [7, 8], one may use state augmentation techniques in order to take these errors into account. However, this may lead to a significant and perhaps intolerable state dimensionality if a fully optimal Kalman filter were designed from this new model. For example, in a navigation system one may identify and model 61 state elements [9], and in a precision pointing and tracking problem one can identify and model 75 states [10]. However, this leads to a significant computational problem and, in fact, an intolerable one for a real-time flight computer. Therefore, a reduced order filter (ROF) that estimates only the required minimum number of state elements must be designed. Since all the high order state elements are coupled, one must consider the performance of the ROF against the true state model; otherwise, when implemented, the filter may at best yield poor performance and at worst may diverge. One must consider the sensitivity of the filter to the deliberate modeling errors introduced by the suboptimal filter structure.

The Kalman filter [4] gives the optimal (minimum variance, unbiased) estimate of the system state. It is shown in this paper that for a ROF one cannot in general obtain an unbiased estimator. The conditional mean of the estimation error is non-zero and, therefore, the true covariance of the estimation error is not equal to the second moment of the estimation error. This is not recognized in previous work [5, 11, 12, 23] on ROF's. The problem of bias is recognized in the work of several references [13, 20, 21]; however, the reduced order problem was not addressed there. The work of this paper considers both bias and reduced order filters. Past work, except for [22], which still does not treat the bias problem, has not recognized the bias problem and thus does not have correctly defined equations, i.e. second moment equations are defined as variance equations. Therefore, this paper differs from previous work in that the necessary statistical parameters (second moment, variance, and bias) are obtained for the ROF problem.

In order to illustrate the importance of these statistical parameters, consider Figures 1 and 2, which represent, at a given time t, the conditional probability density for the estimation error of a given

†Research Associate, Frank J. Seiler Research Laboratory, USAF Academy, CO 80840, U.S.A.
‡Assistant Section Chief, Laser Development Division, Air Force Weapons Laboratory, Kirtland AFB, NM 87117, U.S.A.
§Chief Scientist, Air Force Avionics Laboratory, Wright-Patterson AFB, OH 45433, U.S.A.
Received 21 March 1975; revised 27 October 1975; revised 5 February 1976; revised 13 May 1976. The original version of this paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by associate editor A. Sage.


FIG. 1. Estimation error probability density: filter structure 1.

FIG. 2. Estimation error probability density: filter structure 2.

state element given two different ROF structures. It is assumed that the second moments of the estimation error are equal to P. With just the information on the second moment available, a filter designer may say that each filter structure is equal. However, without any filter structure modification, the filter structure in Fig. 1 may be more desirable since its mean, i.e. the bias, is such that $\bar e_1 \ll \bar e_2$ and, consequently, the designer does not expect on the average a large mean square estimation error. If the filter structure can be modified to take into account the large and relatively predictable estimation error in Fig. 2, then this density might be shifted to decrease the second moment. This is not considered in the paper, but it is pointed out to help show the importance of knowing all of the statistical parameters, which it is the intent of this paper to provide. Since the reduced order filter is biased, the second moment is not equal to the variance, as has been previously implied. Equations for the second moment, true covariance, and bias are presented for the continuous dynamics, continuous measurements, and the continuous

dynamics, discrete measurement ROF problem. The results include the subcases of continuous and impulsive control. The derivation method, used in [22, 23], defines an estimation error that includes only the subset of states to be estimated, as opposed to the method used in [5], which defines an estimation error that includes all the model states. The derivation yields second moment and covariance equations that are computationally advantageous. Since the derivation methods are dissimilar, they cannot be compared one to one with [5]. It may again be noted at this point that no other paper has derived covariance equations.

From these equations, it can be observed that, in general, separation between control and estimation does not occur in reduced order filters. Since separation does not occur in general, the filter designer should first be aware of this factor. A particular choice of control configuration in a feedback loop might cause undesirable estimator behavior, which could cause undesirable closed loop performance. Furthermore, the designer may be able to judiciously choose the filter/control structure to obtain better closed loop performance.

The equations may be used in several ways. Hardware tradeoff studies may be accomplished. These studies may be used to determine the effect on the estimation error of an increase or decrease in accuracy of a particular piece of equipment. For example, a typical problem in aided navigation systems such as Doppler/inertial/LORAN systems might be to determine the effect of a degraded gyro with larger drift rates on the position and velocity estimation error. Generally, an increase in accuracy of a particular piece of hardware such as a sensor is associated with an increase in cost. Thus, the accuracy studies may show that the estimation error is insensitive to a decrease in accuracy of a particular sensor, and the sensor could be replaced with a decrease in cost. On the contrary, if a particular reduced order filter configuration is found to be highly sensitive to a given error source, then the filter must be redesigned. This can be accomplished by developing an error budget [5] and by plotting the r.m.s. estimation error vs the magnitude of the error source, found through repeated use of the equations and the use of the linearity of the equations for the true covariance and second moment.

The equations can be used as a synthesis tool in order to help in the state deletion process. By using the equations for different numbers of states in the reduced order filter, the percentage change in error may be monitored, and if it is slight, the smaller dimension

filter can be used. Furthermore, the equations may be used to determine the performance of reduced order filters with different methods of gain calculation, such as constant gain filters.

Previous work accomplished in the area of suboptimal filters is numerous. Aoki and Huddle [14] consider the design of an estimator for a discrete-time stochastic system with time-invariant dynamics and memory constraints. Sims [15] considers the design of an estimator based upon fixing the filter structure and optimizing certain parameters within the estimator to minimize the estimation error. Griffin and Sage [16, 17] obtain sensitivity equations for error analysis for another class of errors. Heffes [18] considers the problem for discrete-time systems. All of the above referenced work, however, considers only the problem where the state and filter are of the same dimensionality. Asher and Reeves [22] develop a computationally advantageous set of equations for the second moment; however, the bias problem is not considered. Stubberud and Wisner [19] consider the problem in which one estimates only a certain subset of the state called the primary variables; however, the structure considered is not general, nor is the problem of feedback control considered.

Section II contains the results for the continuous dynamics and measurement problem. Section III contains the results for the continuous dynamics and discrete measurement problem. Section IV contains an example.

II. CONTINUOUS DYNAMICS AND MEASUREMENT PROBLEM

The equations for the second moment, covariance, and bias of the true estimation error are developed in this section for the continuous dynamics and continuous measurement ROF problem. The conditions for obtaining an unbiased estimate using a ROF are developed. These conditions show that in general one cannot obtain an unbiased estimate. The problem of continuous control is discussed and the necessary equation modification is given. Consider the time varying system


$$\dot x_S(t) = F_S(t)x_S(t) + G_S(t)u_S(t) \tag{1}$$

where x_S ∈ R^{n_1} is the system state vector, F_S(t) is an n_1 × n_1 system matrix with continuous and bounded elements, G_S(t) is an n_1 × m_1 system matrix with continuous and bounded elements, and u_S ∈ R^{m_1} is a zero-mean, white noise vector with known variance

$$E\{u_S(t)u_S^T(\tau)\} = Q_S(t)\delta(t - \tau).$$

The system is observed via the measurement equation

$$y(t) = H_S(t)x_S(t) + v_S(t) \tag{2}$$

where y ∈ R^p is the measurement vector, H_S(t) is a p × n_1 measurement matrix with continuous and bounded elements, and v_S ∈ R^p is a zero-mean, white noise vector with known variance

$$E\{v_S(t)v_S^T(\tau)\} = R_S(t)\delta(t - \tau).$$

The system model (1) and measurement model (2) are assumed to be the best known model of the system structure. This representation contains all the physical states of interest and all error sources that are elements of the state vector due to the utilization of state augmentation techniques. In many application problems, it is computationally infeasible to design a Kalman filter that will estimate the total system state vector. Thus, the problem is one of structuring a ROF that estimates, with minimum performance degradation, a linear transformation of the state vector x_S. The filter vector to be estimated is given as

$$x_d(t) = Tx_S(t) \tag{3}$$

where x_d ∈ R^{n_2} contains the pertinent states of interest and T is an n_2 × n_1 selector matrix. It is assumed without loss of generality that the pertinent state elements are the first n_2 elements of x_S. Therefore, the selector matrix consists of an n_2 × n_2 identity matrix and an n_2 × (n_1 − n_2) null matrix, i.e.

$$T = [I : 0]. \tag{4}$$

A differential equation for x_d may be found by differentiating equation (3). This yields

$$\dot x_d(t) = T[F_S(t)x_S(t) + G_S(t)u_S(t)]. \tag{5}$$

The reduced order filter is assumed to be of the structure

$$\dot{\hat x}_d(t) = F_F(t)\hat x_d(t) + K_d(t)[y(t) - H_F(t)\hat x_d(t)] \tag{6}$$

where x̂_d ∈ R^{n_2} is the linear estimate of x_d as determined by the filter structure in equation (6), F_F is an n_2 × n_2 filter system matrix, K_d is an n_2 × p filter gain matrix, y ∈ R^p is the measurement vector, and H_F is a p × n_2 measurement matrix. The filter structure is derived by use of the optimal Kalman filter equations [5] applied to the assumed (however erroneous) state model,

x_F, for the state x_d, i.e.

$$\dot x_F(t) = F_F(t)x_F(t) + G_F(t)u_F(t) \tag{7}$$

where F_F has continuous and bounded elements, G_F is an n_2 × m_2 noise gain matrix with continuous and bounded elements, and u_F ∈ R^{m_2} is a zero-mean, white noise vector with known variance

$$E\{u_F(t)u_F^T(\tau)\} = Q_F(t)\delta(t - \tau) \tag{8}$$

and to the assumed (however erroneous) measurement model for the measurements, y, i.e.

$$y(t) = H_F(t)x_F(t) + v_F(t) \tag{9}$$

where H_F has continuous and bounded elements, and v_F is a zero-mean, white noise vector with known variance

$$E\{v_F(t)v_F^T(\tau)\} = R_F(t)\delta(t - \tau).$$

The bias properties of the estimation error are demonstrated as follows. The requirement for a conditionally unbiased estimator is that

$$E\{e_d(t) \mid Y(t)\} = 0, \quad \forall t \in [t_0, t_f] \tag{12}$$

where Y(t) is the measurement function (y(τ), τ ∈ [t_0, t_f]). Equation (12) implies that

$$E\{\dot e_d(t) \mid Y(t)\} = 0, \quad \forall t \in [t_0, t_f]. \tag{13}$$

Theorem 1. The estimator structure as given in the problem statement will be conditionally unbiased over the interval [t_0, t_f] iff

$$TF_S - K_dH_S - F_FT + K_dH_FT = 0, \quad \forall t \in [t_0, t_f] \tag{14}$$

where the time argument has been suppressed for notational simplicity, and E{e_d(t_0) | Y(t_0)} = 0.

Proof. The true estimation error for the states of interest is given as

$$e_d(t) = x_d(t) - \hat x_d(t). \tag{10}$$

A differential equation for the estimation error may be obtained by differentiating equation (10), using equations (1)-(3) and (6), and adding and subtracting the term (F_F − K_dH_F)Tx_S. This yields

$$\dot e_d = (F_F - K_dH_F)e_d + (TF_S - K_dH_S - F_FT + K_dH_FT)x_S + TG_Su_S - K_dv_S. \tag{11}$$

Taking the conditional expectation of both sides of equation (11) and assuming that the ROF is initially conditionally unbiased, i.e. that E{e_d(t_0) | Y(t_0)} = 0, equations (12), (13), and (11) imply that

$$0 = (TF_S - K_dH_S - F_FT + K_dH_FT)\bar x_S(t), \quad \forall t \in [t_0, t_f] \tag{15}$$

where x̄_S = E{x_S(t) | Y(t)}. However, in general x̄_S does not lie in the null space of the coefficient matrix throughout the interval [t_0, t_f]. Therefore, the condition

$$TF_S - K_dH_S - F_FT + K_dH_FT = 0 \tag{16}$$

must be satisfied ∀t ∈ [t_0, t_f]. The other requirement is obvious.

Theorem 2. If the estimator is conditionally unbiased, then it is unbiased.

Proof. Easily shown.

One may partition the matrix F_S as follows:

$$F_S = \begin{bmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{bmatrix} \tag{17}$$

where F_{11} is n_2 × n_2, F_{12} is n_2 × (n_1 − n_2), and the remainder of the submatrices are compatible with the dimensions of F_S. Also, the n_2 × n_1 matrix K_dH_S may be partitioned into two submatrices (KH)_1 and (KH)_2 of dimensions n_2 × n_2 and n_2 × (n_1 − n_2) respectively, i.e.

$$K_dH_S = [(KH)_1 : (KH)_2]. \tag{18}$$

The use of equations (4), (17), and (18) in equation (16) yields the following two conditions that must be satisfied for a conditionally unbiased estimator, i.e.

$$F_{11} - (KH)_1 - F_F + K_dH_F = 0 \tag{19}$$

$$F_{12} - (KH)_2 = 0. \tag{20}$$

These conditions will not be satisfied in general (see the example in Section IV); thus, the ROF will in general be biased. Therefore, the conditional expectation of the estimation error will be such that

$$\bar e_d(t) = E\{e_d(t) \mid Y(t)\} \ne 0 \tag{21}$$

in general.
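Conditions (19) and (20) are easy to probe numerically. The following sketch (dimensions and matrices are hypothetical and drawn at random; the natural design choice F_F = F_11 and H_F = (H_S)_1 is assumed) shows that (19) can be satisfied by construction while (20) generically fails, which is exactly why the ROF is biased in general:

```python
import numpy as np

# Hypothetical sizes: n1 = 3 total states, n2 = 1 retained state, p = 1 measurement.
n1, n2 = 3, 1
T = np.hstack([np.eye(n2), np.zeros((n2, n1 - n2))])    # selector matrix, eq. (4)

rng = np.random.default_rng(1)
F_S = rng.normal(size=(n1, n1))        # true system matrix
H_S = rng.normal(size=(1, n1))         # true measurement matrix
F_F = F_S[:n2, :n2]                    # filter model: the (1,1) block of F_S
H_F = H_S[:, :n2]                      # filter model: retained part of H_S
K_d = rng.normal(size=(n2, 1))         # an arbitrary gain

KH = K_d @ H_S                         # partitioned as [(KH)_1 : (KH)_2], eq. (18)
cond19 = F_S[:n2, :n2] - KH[:, :n2] - F_F + K_d @ H_F   # eq. (19): zero by construction
cond20 = F_S[:n2, n2:] - KH[:, n2:]                     # eq. (20): F_12 - (KH)_2

print("condition (19) residual:", cond19)
print("condition (20) residual:", cond20)  # nonzero for generic F_S, H_S, K_d
```

With this common model-truncation choice, (19) holds identically for any gain, so the bias is driven entirely by the coupling F_12 that condition (20) asks the gain to cancel.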

The conditional variance, V_e(t), of the estimation error is thus given as

$$V_e(t) = E\{(e_d(t) - \bar e_d(t))(e_d(t) - \bar e_d(t))^T \mid Y(t)\}. \tag{22}$$

Therefore, the variance of the estimation error is not equal to the second moment, E{e_d(t)e_d^T(t)}, as is commonly used [12] for ROF's. A differential equation for the variance of the estimation error is now derived.

Theorem 3. The variance, V_e, of the estimation error for the previously defined system is given as the solution of the equations

$$\dot V_e = (F_F - K_dH_F)V_e + V_e(F_F - K_dH_F)^T + (TF_S - K_dH_S - F_FT + K_dH_FT)V_{12}^T + V_{12}(TF_S - K_dH_S - F_FT + K_dH_FT)^T + TG_SQ_SG_S^TT^T + K_dR_SK_d^T \tag{23}$$

$$\dot V_{12} = (F_F - K_dH_F)V_{12} + (TF_S - K_dH_S - F_FT + K_dH_FT)V_S + V_{12}(F_S - K_SH_S)^T + TG_SQ_SG_S^T + K_dR_SK_S^T \tag{24}$$

$$\dot V_S = (F_S - K_SH_S)V_S + V_S(F_S - K_SH_S)^T + G_SQ_SG_S^T + K_SR_SK_S^T \tag{25}$$

where the definitions of V_{12} and V_S are given in the following proof and where

$$K_S = V_SH_S^TR_S^{-1} \tag{26}$$

and

$$V_e(t_0) = V_e^0, \quad V_{12}(t_0) = V_{12}^0, \quad V_S(t_0) = V_S^0. \tag{27}$$

Proof. One may for notational convenience define a new variable, δe_d(t), as

$$\delta e_d(t) = e_d(t) - \bar e_d(t). \tag{28}$$

The derivative of this new variable is given as

$$\delta\dot e_d(t) = \dot e_d(t) - \dot{\bar e}_d(t). \tag{29}$$

The differential equation for the conditional mean of the estimation error is

$$\dot{\bar e}_d = (F_F - K_dH_F)\bar e_d + (TF_S - K_dH_S - F_FT + K_dH_FT)\bar x_S. \tag{30}$$

The use of equations (11) and (30) in (29) yields the differential equation for δe_d as

$$\delta\dot e_d = (F_F - K_dH_F)\delta e_d + (TF_S - K_dH_S - F_FT + K_dH_FT)(x_S - \bar x_S) + TG_Su_S - K_dv_S. \tag{31}$$

The term x_S − x̄_S is recognized as the estimation error of the optimal Kalman filter. This quantity is denoted as e_S. Thus, equation (31) may be rewritten as

$$\delta\dot e_d = (F_F - K_dH_F)\delta e_d + (TF_S - K_dH_S - F_FT + K_dH_FT)e_S + TG_Su_S - K_dv_S. \tag{32}$$

The true error variance, E{δe_d δe_d^T}, will be denoted by V_e, i.e.

$$V_e = E\{\delta e_d\,\delta e_d^T\}. \tag{33}$$

In order to develop the necessary differential equation for V_e one may define the augmented state, z_1, and the augmented noise vector, ū, as

$$z_1(t) = \begin{bmatrix} \delta e_d(t) \\ e_S(t) \end{bmatrix}, \qquad \bar u(t) = \begin{bmatrix} u_S(t) \\ v_S(t) \end{bmatrix}. \tag{34}$$

A linear differential equation for z_1 may be developed by use of (32) and the well known equation for e_S. The second moment of z_1, P_{z_1}, may be clearly derived. The second moment partitions into

$$P_{z_1}(t) = \begin{bmatrix} V_e(t) & V_{12}(t) \\ V_{21}(t) & V_S(t) \end{bmatrix} \tag{35}$$

where V_S(t) is the variance associated with the optimal Kalman filter yielding x̂_S. The differential equation for P_{z_1} may be used to yield (23), (24), and (25). One may note that in the calculation of V_e one must compute the optimal Kalman filter variance, V_S. This quantity is useful as information about the performance of the optimal system filter.

Theorem 4. The second moment, P_e, of the estimation error for the previously defined system is given as the solution of the equations

$$\dot P_e = (F_F - K_dH_F)P_e + P_e(F_F - K_dH_F)^T + (TF_S - K_dH_S - F_FT + K_dH_FT)P_{12}^T + P_{12}(TF_S - K_dH_S - F_FT + K_dH_FT)^T + TG_SQ_SG_S^TT^T + K_dR_SK_d^T \tag{36}$$

$$\dot P_{12} = (F_F - K_dH_F)P_{12} + (TF_S - K_dH_S - F_FT + K_dH_FT)P_S + P_{12}F_S^T + TG_SQ_SG_S^T \tag{37}$$

$$\dot P_S = F_SP_S + P_SF_S^T + G_SQ_SG_S^T \tag{38}$$

where the definitions of P_{12} and P_S follow from the proof outline and where

$$P_e(t_0) = P_e^0, \quad P_{12}(t_0) = P_{12}^0, \quad P_S(t_0) = P_S^0.$$
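As a numerical illustration of Theorem 4, the coupled equations (36)-(38) can be propagated jointly with a simple Euler scheme. All matrices below are hypothetical stand-ins chosen only to exercise the equations, not values from the paper:

```python
import numpy as np

# Hypothetical data: n1 = 2 system states, n2 = 1 filtered state, p = 1 measurement.
F_S = np.array([[-2.0, 1.0], [0.0, -1.0]])
G_S = np.eye(2)
Q_S = np.diag([1.0, 0.3])
H_S = np.array([[2.0, 1.0]])
R_S = np.array([[0.2]])
T   = np.array([[1.0, 0.0]])     # estimate x_1 only
F_F = np.array([[-2.0]])
H_F = np.array([[2.0]])
K_d = np.array([[0.5]])          # fixed illustrative gain

P_e  = np.array([[1.0]])         # P_e(t0)
P_12 = np.array([[1.0, 0.0]])    # P_12(t0)
P_S  = np.eye(2)                 # P_S(t0)

A = F_F - K_d @ H_F                                   # filter error dynamics
B = T @ F_S - K_d @ H_S - F_F @ T + K_d @ H_F @ T     # model-mismatch term
dt = 0.001
for _ in range(2000):                                 # integrate to t = 2
    dP_e  = A @ P_e + P_e @ A.T + B @ P_12.T + P_12 @ B.T \
            + T @ G_S @ Q_S @ G_S.T @ T.T + K_d @ R_S @ K_d.T   # eq. (36)
    dP_12 = A @ P_12 + B @ P_S + P_12 @ F_S.T \
            + T @ G_S @ Q_S @ G_S.T                             # eq. (37)
    dP_S  = F_S @ P_S + P_S @ F_S.T + G_S @ Q_S @ G_S.T         # eq. (38)
    P_e, P_12, P_S = P_e + dt * dP_e, P_12 + dt * dP_12, P_S + dt * dP_S

print("true second moment P_e(2):", P_e)
```

Note that P_S is propagated with no measurement terms, consistent with its role as the second moment of x_S itself.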


Proof. An augmented state z_2^T = [e_d^T, x_S^T] may be used with the prior procedure to develop the results. One may readily note that P_S is the propagation of the second moment of x_S when no measurements are incorporated.

The bias matrix, ē_dē_d^T, may be computed from the knowledge of V_e and P_e, i.e.

$$\bar e_d\bar e_d^T = P_e - V_e. \tag{39}$$

Thus, equations (23)-(25) and (36)-(39) yield all the relevant statistical properties of the ROF.

The problem of covariance analysis of systems with continuous control is now considered. The control is accomplished by using the estimated state as a feedback control signal to control the system state. This may be modeled by including a feedback signal in equation (1). This yields the modified equation

$$\dot x_S(t) = F_S(t)x_S(t) + C_S(t)\hat x_d(t) + G_S(t)u_S(t) \tag{40}$$

where C_S(t) is the system control matrix. The corresponding filter equation (6) is modified as

$$\dot{\hat x}_d(t) = [F_F(t) + C_F(t)]\hat x_d(t) + K_d[y(t) - H_F(t)\hat x_d(t)]. \tag{41}$$

Theorem 5. The variance, V_e, of the estimation error for the system as in (40) with filter (41), and the remaining problem the same as in the original problem statement with the appropriate modifications, is given as

$$\dot V_e = (F_F - K_dH_F + C_F - TC_S)V_e + V_e(F_F - K_dH_F + C_F - TC_S)^T + (TF_S - K_dH_S - F_FT + K_dH_FT + TC_ST - C_FT)V_{12}^T + V_{12}(TF_S - K_dH_S - F_FT + K_dH_FT + TC_ST - C_FT)^T + TG_SQ_SG_S^TT^T + K_dR_SK_d^T \tag{42}$$

$$\dot V_{12} = (F_F - K_dH_F + C_F - TC_S)V_{12} + (TF_S - K_dH_S - F_FT + K_dH_FT + TC_ST - C_FT)V_S + V_{12}(F_S - K_SH_S)^T + TG_SQ_SG_S^T + K_dR_SK_S^T \tag{43}$$

$$\dot V_S = (F_S - K_SH_S)V_S + V_S(F_S - K_SH_S)^T + G_SQ_SG_S^T + K_SR_SK_S^T \tag{44}$$

with known initial conditions.

Proof. The proof follows easily from the previous procedure.

Theorem 6. The second moment, P_e, of the estimation error for the system as in (40) with filter (41), and the remaining problem the same as in the problem statement with appropriate modifications, is given as

$$\dot P_e = (F_F - K_dH_F + C_F - TC_S)P_e + P_e(F_F - K_dH_F + C_F - TC_S)^T + (TF_S - K_dH_S - F_FT + K_dH_FT + TC_ST - C_FT)P_{12}^T + P_{12}(TF_S - K_dH_S - F_FT + K_dH_FT + TC_ST - C_FT)^T + TG_SQ_SG_S^TT^T + K_dR_SK_d^T \tag{45}$$

$$\dot P_{12} = (F_F - K_dH_F + C_F - TC_S)P_{12} + (TF_S - K_dH_S - F_FT + K_dH_FT + TC_ST - C_FT)P_S + P_{12}F_S^T + TG_SQ_SG_S^T \tag{46}$$

$$\dot P_S = F_SP_S + P_SF_S^T + G_SQ_SG_S^T \tag{47}$$

with the known initial conditions.

Proof. The proof is easily constructed. The bias equation remains that of equation (39).

One may note an interesting feature of the above variance and second moment equations. Unlike the optimal variance equations, it is possible to control the variance and second moment by a judicious choice of the control matrix, since this matrix affects both the variance and second moment equations. Therefore, separation between control and estimation does not generally occur in ROF problems.

The quantities V_e, P_e, and ē_d yield the statistical information for the ROF. In order to determine the effect of an unestimated error source on the filter estimation error, one may consider the correlation matrices V_{12} and P_{12}, where

$$V_{12} \triangleq E\{\delta e_d\,e_S^T\} \tag{48}$$

$$P_{12} \triangleq E\{e_d\,x_S^T\} \tag{49}$$

and they are generated via the solutions to equations (24) and (37) respectively.

The next section expands the consideration of the ROF problem to the case of continuous dynamics with discrete measurements and continuous, impulsive, and continuous plus impulsive control.

III. CONTINUOUS DYNAMICS, DISCRETE MEASUREMENT PROBLEM

The equations giving the second moment, covariance, and bias of the true estimation error are developed herein for the continuous dynamics and discrete measurement problem. The problem of discrete dynamics may be easily derived and is not shown. For this case the

measurement update equations are contained within this section, and the propagation equations between measurements can be easily derived. The problem of continuous, impulsive, and continuous plus impulsive control is discussed and the necessary equation modifications are given.

Consider the dynamic equation given in (1). The system is observed at discrete time intervals t_k, denoted by subscript "k". The measurement equation is

$$y(k) = H_S(k)x_S(k) + v_S(k) \tag{50}$$

where all other definitions are obvious from Section II and where

$$E\{v_S(k)v_S^T(j)\} = R_S(k)\delta(k - j).$$

An estimate of the states as defined in equation (3) is required. The filter design is based upon a reduced order structure given by the measurement equation

$$y(k) = H_F(k)x_F(k) + v_F(k) \tag{51}$$

where

$$E\{v_F(k)v_F^T(j)\} = R_F(k)\delta(k - j) \tag{52}$$

and the filter equations, between measurements,

$$\dot{\hat x}_d(t) = F_F(t)\hat x_d(t)$$

and at a measurement

$$\hat x_d^+(k) = \hat x_d^-(k) + K_d(k)\{y(k) - H_F(k)\hat x_d^-(k)\} \tag{53}$$

where the gain K_d(k) is calculated by using the optimal filtering equations given in [4] for the dynamics

$$\dot x_F(t) = F_F(t)x_F(t) + G_F(t)u_F(t) \tag{54}$$

where E{u_F(t)u_F^T(τ)} = Q_F(t)δ(t − τ) and the measurement equation is as given in (51). The estimation error is defined in equation (10).

Theorem 7. The variance, V_e, of the estimation error for the defined system is given as the solution of the following equations. The variance across a measurement is

$$V_e^+ = (I - K_dH_F)V_e^-(I - K_dH_F)^T + (I - K_dH_F)V_{12}^-(K_dH_FT - K_dH_S - TK_SH_S + K_dH_SK_SH_S)^T + (K_dH_FT - K_dH_S - TK_SH_S + K_dH_SK_SH_S)V_{12}^{-T}(I - K_dH_F)^T + (K_dH_FT - K_dH_S - TK_SH_S + K_dH_SK_SH_S)V_S^-(K_dH_FT - K_dH_S - TK_SH_S + K_dH_SK_SH_S)^T + (K_dH_SK_S - TK_S - K_d)R_S(K_dH_SK_S - TK_S - K_d)^T \tag{55}$$

$$V_{12}^+ = (I - K_dH_F)V_{12}^-(I - K_SH_S)^T + (K_dH_FT - K_dH_S - TK_SH_S + K_dH_SK_SH_S)V_S^-(I - K_SH_S)^T - (K_dH_SK_S - TK_S - K_d)R_SK_S^T \tag{56}$$

$$V_S^+ = (I - K_SH_S)V_S^-(I - K_SH_S)^T + K_SR_SK_S^T. \tag{57}$$

The variance between measurements is given as

$$\dot V_e = F_FV_e + V_eF_F^T + (TF_S - F_FT)V_{12}^T + V_{12}(TF_S - F_FT)^T + TG_SQ_SG_S^TT^T \tag{58}$$

$$\dot V_{12} = F_FV_{12} + (TF_S - F_FT)V_S + V_{12}F_S^T + TG_SQ_SG_S^T \tag{59}$$

$$\dot V_S = F_SV_S + V_SF_S^T + G_SQ_SG_S^T \tag{60}$$

with the known initial conditions.

Proof. See Appendix.

Theorem 8. The second moment, P_e, of the estimation error for the defined system is given as the solution of the following equations. The second moment across a measurement is

$$P_e^+ = (I - K_dH_F)P_e^-(I - K_dH_F)^T + (I - K_dH_F)P_{12}^-(K_dH_FT - K_dH_S)^T + (K_dH_FT - K_dH_S)P_{12}^{-T}(I - K_dH_F)^T + (K_dH_FT - K_dH_S)P_S^-(K_dH_FT - K_dH_S)^T + K_dR_SK_d^T \tag{61}$$

$$P_{12}^+ = (I - K_dH_F)P_{12}^- + (K_dH_FT - K_dH_S)P_S^- \tag{62}$$

where P_S^- is the value of the second moment of x_S at the update time as generated by the equations between measurements; note that the measurements do not affect P_S. The second moment between measurements is

$$\dot P_e = F_FP_e + P_eF_F^T + (TF_S - F_FT)P_{12}^T + P_{12}(TF_S - F_FT)^T + TG_SQ_SG_S^TT^T \tag{63}$$

$$\dot P_{12} = F_FP_{12} + (TF_S - F_FT)P_S + P_{12}F_S^T + TG_SQ_SG_S^T \tag{64}$$

$$\dot P_S = F_SP_S + P_SF_S^T + G_SQ_SG_S^T \tag{65}$$

with the known conditions.

Proof. The proof is easily constructed.

The equations for continuous control between measurements may be easily derived from equations (42) through (47) by setting K_d and K_S equal to zero between measurements. This also takes care of the continuous control plus impulsive control problem between measurements, as the impulsive control only takes place at a measurement.

Theorem 9. The variance, V_e, for the continuous control problem for the defined system is given as the solution of the following equations. The variance between measurements is given by

$$\dot V_e = (F_F + C_F - TC_S)V_e + V_e(F_F + C_F - TC_S)^T + (TF_S - F_FT + TC_ST - C_FT)V_{12}^T + V_{12}(TF_S - F_FT + TC_ST - C_FT)^T + TG_SQ_SG_S^TT^T \tag{66}$$

$$\dot V_{12} = (F_F + C_F - TC_S)V_{12} + (TF_S - F_FT + TC_ST - C_FT)V_S + V_{12}F_S^T + TG_SQ_SG_S^T \tag{67}$$

$$\dot V_S = F_SV_S + V_SF_S^T + G_SQ_SG_S^T \tag{68}$$

with known initial conditions. The variance at a measurement is given by equations (55), (56), and (57).

Proof. The proof is easily constructed.

The equations for the second moment may be derived by setting K_S = K_d = 0 in equations (45), (46), and (47) between measurements and by the use of equations (61) and (62) at a measurement.

The impulsive control problem will now be solved. This problem is important, as in navigation systems when one is updating an error reset vector [11]. At a measurement the system state is modified by use of an impulsive control matrix V_S, i.e.

$$x_S^+ = x_S^- + V_S\hat x_d^+. \tag{69}$$

The filter update equation at the time of control application is given by

$$\hat x_d^{++} = (I + V_F)\hat x_d^+ \tag{70}$$

where

$$\hat x_d^+ = \hat x_d^- + K_d(y - H_F\hat x_d^-). \tag{71}$$

Theorem 10. The variance, V_e, of the estimation error for the continuous plus impulsive control subproblem of the continuous dynamics, discrete measurement problem is given as the solution of the following equations. The variance at a measurement is

$$V_e^+ = A_{11}V_e^-A_{11}^T + A_{12}V_{12}^{-T}A_{11}^T + A_{11}V_{12}^-A_{12}^T + A_{12}V_S^-A_{12}^T + B_1R_SB_1^T \tag{72}$$

$$V_{12}^+ = A_{11}V_{12}^-A_{22}^T + A_{12}V_S^-A_{22}^T + B_1R_SB_2^T \tag{73}$$

$$V_S^+ = A_{22}V_S^-A_{22}^T + B_2R_SB_2^T \tag{74}$$

where V_e is the desired true variance and V_S is the optimal filter variance. The variance between measurements propagates according to equations (66)-(68). The A_{ij}'s and B_i's are defined in the proof.

Proof. See Appendix.

Theorem 11. The second moment, P_e, of the estimation error for the continuous plus impulsive control subproblem of the continuous dynamics, discrete measurement problem is given as the solution of the following equations. The second moment at a measurement is

$$P_e^+ = m_1P_e^-m_1^T + m_2P_{12}^{-T}m_1^T + m_1P_{12}^-m_2^T + m_2P_S^-m_2^T + m_3R_Sm_3^T \tag{75}$$

$$P_{12}^+ = m_1P_{12}^-(I + V_SK_dH_S - V_SK_dH_FT + V_ST)^T + m_2P_S^-(I + V_SK_dH_S - V_SK_dH_FT + V_ST)^T + m_1P_e^-(V_SK_dH_F - V_S)^T + m_2P_{12}^{-T}(V_SK_dH_F - V_S)^T + m_3R_S(V_SK_d)^T \tag{76}$$

$$P_S^+ = (V_SK_dH_F - V_S)P_e^-(V_SK_dH_F - V_S)^T + (V_SK_dH_F - V_S)P_{12}^-(I + V_SK_dH_S - V_SK_dH_FT + V_ST)^T + (I + V_SK_dH_S - V_SK_dH_FT + V_ST)P_{12}^{-T}(V_SK_dH_F - V_S)^T + (I + V_SK_dH_S - V_SK_dH_FT + V_ST)P_S^-(I + V_SK_dH_S - V_SK_dH_FT + V_ST)^T + V_SK_dR_S(V_SK_d)^T. \tag{77}$$

The definition of the m_i matrices is contained in the proof. The second moment between measurements propagates according to equations (45), (46), and (47) by setting K_d = K_S = 0. The next section gives an example for the results.

IV. EXAMPLE

The following example is chosen in order to obtain insight into the reduced order filter problem. It is assumed that the true system model is given by the following:

$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} a & b \\ 0 & c \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \tag{78}$$

and

$$y = \begin{bmatrix} d & e \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + v_1 \tag{79}$$

where u_1, u_2, and v_1 are zero-mean white noise processes, each process uncorrelated with each other process. The variances of each process are E{u_1^2} = q_1, E{u_2^2} = q_2, and E{v_1^2} = r_1.
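Before designing the filter, it is worth checking what the bias condition (20) of Section II demands of this model. Specialized to (78)-(79) with a one-state filter, it requires b − k(t)e = 0, as derived as equation (84) below. The following small sketch, using the first parameter set quoted later in this section, compares the gain that unbiasedness would require against the steady-state Kalman-type gain of the reduced model:

```python
import numpy as np

# Example system (78)-(79); first parameter set from the text.
a, b, c, d, e = -2.0, 1.0, -1.0, 2.0, 1.0
q1, r1 = 1.0, 0.2

# Condition (20) specialized to this model: b - k*e = 0.
k_unbiased = b / e

# Steady-state believed variance P from 0 = 2aP - P^2 d^2 / r1 + q1
# (scalar Riccati of the reduced model with f = a, h = d), gain k = P d / r1.
coeffs = [-d**2 / r1, 2 * a, q1]
P_ss = max(np.roots(coeffs).real)
k_kalman = P_ss * d / r1

print(f"gain required for unbiasedness: {k_unbiased:.4f}")
print(f"steady-state Kalman-type gain:  {k_kalman:.4f}")
```

The two gains differ, so the Kalman-type gain of the truncated model cannot cancel the coupling b, and the resulting filter is biased.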

It is assumed that the filter model given in order to estimate x_1 is designed via the state models

  dx_F1/dt = f x_F1 + g u_F    (80)

  y = h x_F1 + v_1    (81)

where E{u_F^2} = q_1, E{v_1^2} = r_1, and u_F and v_1 are uncorrelated. The filter based upon the design equation (80) is given as

  dx̂_1/dt = f x̂_1 + k(t)(y - h x̂_1)    (82)

where k(t) is the optimal Kalman gain for the above erroneous model (80), (81). In order to ascertain if the filter is conditionally unbiased one may apply (19) and (20). These conditions imply that

  a - k(t)d - f + k(t)h = 0    (83)

  b - k(t)e = 0.    (84)

If f is chosen equal to a and if h is chosen equal to d, then (83) can be satisfied. However, equation (84) cannot be satisfied by the optimal filter structure; it can be satisfied only for k = b/e. This yields an unbiased estimator. However, it does not have the gain structure of the Kalman filter. Therefore, one cannot claim any optimality properties except that the filter would be conditionally unbiased. Therefore, since the filter is biased with the choice of k(t) equal to the optimal filter gain corresponding to the assumed state and measurement models (80) and (81), one must utilize equations (23)-(25) and (36)-(39) in order to obtain the statistical filter information.

It will be assumed that the filter structure is chosen such that equation (83) is satisfied by the choice of f = a and h = d. In the use of the equations for the second moment and the variance, the reduced order filter gain, k(t), is given as

  k(t) = P(t) d r_1^{-1}

where

  dP/dt = 2aP - P^2 d^2 r_1^{-1} + q_1.

The optimal Kalman gains, k_i(t), i = 1, 2, are calculated by applying the optimal Kalman results to the system model. The equations for V_e and P_e may be easily constructed and are omitted.

This example was run with two sets of parameter values. The first set of parameter values was a = -2.0, b = 1.0, c = -1.0, d = 2.0, e = 1.0, r_1 = 0.2, q_1 = 1.0, q_2 = 0.3, and initial conditions V_S11(t_0) = P_S11(t_0) = V_e(t_0) = P_e(t_0) = P(t_0) = 1.0 and V_S12(t_0) = V_S22(t_0) = P_S12(t_0) = P_S22(t_0) = P_12(t_0) = 0.0. The filter thinks it is behaving with the second moment P shown in Fig. 3. However, the actual second moment, P_e, is diverging from the filter suboptimal variance, P, i.e. from what the filter thinks it is accomplishing. Furthermore, the actual second moment is diverging from the optimal second moment, V_S11. The actual variance, V_e, tracks the optimal, V_S11, to the fourth decimal place. The filter bias, b_t, calculated as the square root of P_e - V_e, is continually growing. One may note that the actual variance and second moments are indeed different.

[Fig. 3. P_e, P vs time. Stable Mode.]
[Fig. 4. V_e, P vs time. Stable Mode.]
[Fig. 5. Bias vs time. Stable Mode.]

This same example was run with the state element x_2 assumed to be an unstable mode as opposed to being a stable mode in the previous discussion. The value for c was changed to 2.0.

[Fig. 6. P, P_e vs time. Unstable Mode.]
[Fig. 7. V_e, P vs time. Unstable Mode.]
[Fig. 8. Bias vs time. Unstable Mode.]

One may note in Fig. 6 that again the filter variance, P, is over-optimistic as compared with the actual, and indeed diverging, second moment, P_e. The actual variance, V_e, is

not close to the optimal variance, V_S11. Furthermore, one may note a marked growth in the bias, b_t.

V. CONCLUSIONS

This paper considered the derivation of the necessary equations in order to obtain the bias, variance, and second moment for a reduced order filter configuration. It is shown in the Introduction that it is extremely important to have all three of these statistical parameters, as two different ROF structures may have the same second moments but decidedly different bias characteristics, showing that one structure is more desirable than the other. Thus, it is shown that previous work does not have covariance equations, as the ROF problem is biased. The derivations are based on a method used in [22] to derive the correct second moment equations and a similar method in [23] to derive equations that are not covariance equations. This method of derivation yields a computational attractiveness, since the second moment and the covariance equations need the solution of, for each, (n_1 + n_2)(n_1 + n_2 + 1)/2 differential equations vs (2n_1)(2n_1 + 1)/2 differential equations if the derivation method of [5] were used. Since n_2 ≪ n_1 in general, this yields a large computational savings. The equations may be used to evaluate systems with continuous system equation and continuous measurements, continuous system equation with discrete measurements, and continuous plus impulsive control. The equations are useful in the evaluation of filters in which it has been necessary to reduce the state dimensionality for the filter design because of the computational problem alluded to in the Introduction, or because of poor filter design in which the dominant states are not included in the true system model. It is further shown that control and estimation with reduced order filters cannot be separated.
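The equation-count comparison in the conclusions can be checked with a few lines. In this sketch n_1 = 61 is taken from the navigation example in the Introduction and n_2 = 10 is an assumed illustrative value; the counts are per set of equations, as in the text.

```python
def eqs_per_set_this_method(n1, n2):
    # one symmetric (n1 + n2) x (n1 + n2) matrix per set (second moment or covariance)
    return (n1 + n2) * (n1 + n2 + 1) // 2

def eqs_per_set_ref5(n1):
    # method of [5]: one symmetric (2 n1) x (2 n1) matrix
    return (2 * n1) * (2 * n1 + 1) // 2

# illustrative sizes: n1 = 61 (navigation example), assumed n2 = 10
print(eqs_per_set_this_method(61, 10), eqs_per_set_ref5(61))  # 2556 7503
```

With n_2 ≪ n_1 the per-set count is roughly n_1^2/2 versus 2 n_1^2, which is the factor-of-four savings the text alludes to.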

REFERENCES

[1] B. S. CRAWFORD, J. DUNN and A. SUTHERLAND, JR.: CIRIS design evaluation. TASC Tech. Rpt. 213-1 (1970).
[2] T. K. WU: Application of Kalman filtering to the tactical aircraft navigation system. Proc. First NWC Symposium on the Application of Control Theory to Modern Weapons Systems, China Lake, CA (1973).
[3] C. T. LEONDES: Theory and applications of Kalman filtering. AGARDograph No. 139 (1970).
[4] A. H. JAZWINSKI: Stochastic Processes and Filtering Theory. Academic Press, New York (1970).
[5] A. GELB: Applied Optimal Estimation. MIT Press, MA (1973).
[6] R. J. FITZGERALD: Divergence of the Kalman filter. IEEE Trans. Aut. Control AC-16, Dec. (1971).
[7] M. AOKI: Optimization of Stochastic Systems. Academic Press, New York (1967).
[8] R. E. KALMAN: New methods in Wiener filtering theory. In Proceedings of the First Symposium on Engineering Applications of Random Function Theory and Probability. Wiley, New York (1963).
[9] R. NASH, JR., K. ROY and J. KASPER, JR.: GEANS error minimization study. TASC Tech. Rpt. 172-3 (1970).
[10] R. B. ASHER, R. MITCHELL and P. MAYBECK: Filtering for precision pointing and tracking with application for aircraft to satellite tracking. Frank J. Seiler Research Laboratory Report.
[11] J. A. D'APPOLITO: The evaluation of Kalman filter design for multisensor integrated navigation systems. TASC, AFAL-TR-70-271 (1971).
[12] E. M. DUIVEN: Suboptimal linear filtering. AIAA J. Spacecraft Rockets 11, No. 3 (1974).
[13] A. W. WARREN: Bias and covariance formula for filter error analysis and model sensitivity. Proc. 1973 Joint Automatic Control Conf., Columbus, OH.
[14] M. AOKI and J. R. HUDDLE: Estimation of the state vector of a linear stochastic system with a constrained estimator. IEEE Trans. Aut. Control, August (1967).
[15] C. S. SIMS and J. L. MELSA: Specific optimal estimation. IEEE Trans. Aut. Control, April (1967).
[16] R. E. GRIFFIN and A. P. SAGE: Large and small scale sensitivity analysis of optimum estimation algorithms. IEEE Trans. Aut. Control AC-13 (1968).
[17] R. E. GRIFFIN and A. P. SAGE: Sensitivity analysis of discrete filtering and smoothing algorithms. AIAA J. 7 (1969).
[18] H. HEFFES: The effect of erroneous models on Kalman filter response. IEEE Trans. Aut. Control, July (1966).
[19] A. R. STUBBERUD and D. A. WISMER: Suboptimal Kalman filter techniques. In Theory and Applications of Kalman Filtering, C. T. LEONDES (ed.).
[20] R. J. BROWN and A. P. SAGE: Analysis of modeling and bias errors in discrete-time state estimation. IEEE Trans. AES AES-7, No. 2 (1971).
[21] R. J. BROWN and A. P. SAGE: Error analysis of modeling and bias errors in continuous time state estimation. Automatica 7 (1971).
[22] R. B. ASHER and R. M. REEVES: Performance evaluation of suboptimal filters. IEEE Trans. AES AES-11, No. 3 (1975).
[23] R. A. NASH, JR., J. D'APPOLITO and K. J. ROY: Error analysis of hybrid aircraft inertial navigation systems. AIAA Paper 72-848.

APPENDIX A
PROOFS OF MAIN THEOREMS

Proof of theorem 7

The equation for the estimation error at a measurement is

  e_d^+ = (I - K_d H_F) e_d^- + (K_d H_F T - K_d H_S) x_S - K_d v_S.    (A.1)

The equation for the bias is derived from the conditional expectation of e_d^+ as follows

  E{e_d^+ | Y^k} = (I - K_d H_F) E{e_d^- | Y^k} + (K_d H_F T - K_d H_S) x̂_S^+.    (A.2)

It can be shown that

  E{e_d^- | Y^k} = T x̂_S^+ - x̂_F^-.    (A.3)

Thus,

  ê_d^+ = (I - K_d H_F) ê_d^- + (T K_S H_S - K_d H_S K_S H_S) e_S^- + (K_d H_F T - K_d H_S) x̂_S^- + (T K_S - K_d H_S K_S) v_S.    (A.4)

One may define the new variables

  δe_d^+ = e_d^+ - ê_d^+    (A.5)

  δe_d^- = e_d^- - ê_d^-.    (A.6)

The equation for δe_d may be written as

  δe_d^+ = (I - K_d H_F) δe_d^- + (K_d H_F T - K_d H_S - T K_S H_S + K_d H_S K_S H_S) e_S^- + (K_d H_S K_S - T K_S - K_d) v_S.    (A.7)

In order to develop the equations for the variance propagation across a measurement one may first define an augmented state z_3 as

  z_3 = [δe_d; e_S]    (A.8)

and then proceed to find a linear difference equation for z_3. The second moment, P_z3, of the linear difference equation may be easily derived from well known results. This second moment matrix may be partitioned as

  P_z3^+ = [V_e^+   V_12^+; (V_12^+)^T   V_S^+]    (A.9)

where V_e^+ is the required variance across a measurement and V_S^+ is the variance of the optimal Kalman filter. This yields the component equations as in (55), (56) and (57). The variance between measurements may be easily derived from equations (23), (24), and (25) since K_d and K_S equal zero between measurements.

Proof of theorem 10

At a measurement and after an impulsive control update

  e_d^+ = [T V_S K_d H_F - T V_S + (I + V_F) - (I + V_F) K_d H_F] e_d^-
        + [T V_S K_d (H_S - H_F T) + (I + V_F)(K_d H_F T - K_d H_S) + T V_S T - V_F T] x_S^-
        + [T V_S K_d - (I + V_F) K_d] v_S
        = m_1 e_d^- + m_2 x_S^- + m_3 v_S    (A.10)

where the definitions of the m_i's are obvious. The equation for the bias will now be derived. The conditional expectation of e_d^+ may be taken. This yields

  E{e_d^+ | Y^k} = m_1 E{e_d^- | Y^k} + m_2 x̂_S^+    (A.11)

where x̂_S^+ is the optimal estimate of the state x_S given the measurement sequence Y^k but prior to any impulsive control action. Now

  E{e_d^- | Y^k} = T x̂_S^+ - x̂_F^-    (A.12)

where

  x̂_S^+ = x̂_S^- + K_S (y - H_S x̂_S^-).    (A.13)

Equation (A.12) may be rewritten as

  E{e_d^- | Y^k} = T x̂_S^- - x̂_F^- + T K_S H_S e_S^- + T K_S v_S    (A.14)

where, by definition,

  e_S^- = x_S - x̂_S^-    (A.15)

  ê_d^- = T x̂_S^- - x̂_F^-.    (A.16)

Thus,

  E{e_d^- | Y^k} = ê_d^- + T K_S H_S e_S^- + T K_S v_S.    (A.17)

This may be used to develop

  ê_d^+ = m_1 ê_d^- + m_2 x̂_S^- + (m_1 T K_S H_S + m_2 K_S H_S) e_S^- + (m_1 T K_S + m_2 K_S) v_S.    (A.18)

Now, by definition,

  δe_d^+ = m_1 δe_d^- + (m_2 - m_1 T K_S H_S - m_2 K_S H_S) e_S^- + (m_3 - m_1 T K_S - m_2 K_S) v_S    (A.19)

where

  e_S^+ = (I - K_S H_S) e_S^- - K_S v_S.    (A.20)

One may define the augmented state

  z_5^+ = [δe_d^+; e_S^+].    (A.21)

This yields the state equation

  z_5^+ = A z_5^- + B v_S    (A.22)

where

  A_11 = m_1
  A_12 = m_2 - m_1 T K_S H_S - m_2 K_S H_S
  A_22 = I - K_S H_S
  B_1 = m_3 - m_1 T K_S - m_2 K_S
  B_2 = -K_S.

One may easily develop the equation for the second moment and the required equations follow.

Proof of theorem 11

In order to derive the equation for the second moment across a measurement one may consider the following augmented state

  z_6^+ = [e_d^+; x_S^+].

The equation for e_d^+ is given by (A.10). In order to derive the equation for x_S^+ in terms of x_S^- and e_d^- consider equations (69) and (70). They may be used as follows

  x_S^+ = x_S^- + V_S x̂_F^+
        = x_S^- + V_S [x̂_F^- + K_d (H_S x_S^- + v_S - H_F x̂_F^-)]
        = (V_S K_d H_F - V_S) e_d^- + (I + V_S K_d H_S - V_S K_d H_F T + V_S T) x_S^- + V_S K_d v_S.

Thus,

  z_6^+ = [m_1   m_2; V_S K_d H_F - V_S   I + V_S K_d H_S - V_S K_d H_F T + V_S T] z_6^- + [m_3; V_S K_d] v_S.

The second moment equations may be easily found. The bias may be calculated as before.
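As a sketch of how the (A.22) recursion is used, the following propagates the second moment of z_5 across a measurement via P_z^+ = A P_z^- A^T + B R_S B^T, assembling the block matrices from m_1, m_2, m_3 as defined after (A.10). All names are illustrative, and A_21 = 0 follows from (A.20), since e_S^+ does not depend on δe_d^-.

```python
import numpy as np

def z5_second_moment_step(Pz, m1, m2, m3, T, Ks, Hs, Rs):
    """One measurement update of the second moment of z5 = [delta e_d; e_S],
    using the block matrices defined with equation (A.22)."""
    nf = m1.shape[0]               # dimension of delta e_d
    ns = Ks.shape[0]               # dimension of e_S
    A12 = m2 - m1 @ T @ Ks @ Hs - m2 @ Ks @ Hs
    A = np.block([[m1, A12],
                  [np.zeros((ns, nf)), np.eye(ns) - Ks @ Hs]])  # A21 = 0
    B = np.vstack([m3 - m1 @ T @ Ks - m2 @ Ks, -Ks])
    return A @ Pz @ A.T + B @ Rs @ B.T
```

Since the update is a congruence transform plus a symmetric noise term, the propagated second moment stays symmetric, which gives a quick sanity check on any implementation.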