Automatica, Vol. 8, pp. 599-608. Pergamon Press, 1972. Printed in Great Britain.
A Minimax Approach to the Design of Low Sensitivity State Estimators*

JOSEPH A. D'APPOLITO† and CHARLES E. HUTCHINSON‡
Minimax criteria may be used to develop a technique which synthesizes state estimators for linear systems with large uncertainties in plant and measurement noise statistics, and which systematizes the sensitivity analysis approach to Kalman filter design.

Summary—This paper proposes and explores a new approach to the design of state estimators for systems with large, but bounded, uncertainties in plant and measurement noise covariances. A linear estimator with unspecified gain is chosen a priori. Useful sensitivity measures for this filter are its total mean square estimation error (S1), and the deviation of this error from the optimum, minimum estimation error in either an absolute (S2) or relative (S3) sense. These sensitivity measures are a function of the uncertain statistics and the unspecified filter gain. If a particular measure is first maximized over the set of uncertain covariances and then minimized with respect to the adjustable gains, a filter is obtained which yields a least upper bound on the actual measure regardless of the exact value of the statistics. Minimax filter design for plants with constant, but uncertain, plant and measurement noise covariances is fully explored. First, for the S1 measure, it is shown that min-max equals max-min. Thus the minimax problem is replaced with a simple maximization of the optimal mean square error over the uncertain parameter set, and the S1 filter is simply the Kalman filter for the maximizing noise statistics. Several properties of the required maximization for the infinite time case are then developed. Next the S2 and S3 filters are shown to be unique and optimal for at least one point in the set of uncertain parameters. Min-max does not equal max-min for the S2 and S3 measures. However, the convexity of S2 and S3 in the uncertain statistics is used to show that the maximum of these sensitivity measures is attained over a finite set of points.

Three short examples are presented to illustrate the properties of minimax filters and the utility of the minimax design approach.
1. INTRODUCTION
Implementation of the optimal Kalman filter for estimating the state of a linear system from noisy measurements requires exact knowledge of plant and measurement noise covariance matrices. A minimax approach to filter design when large uncertainties in these parameters exist is proposed and investigated. Consider the linear plant

ẋ(t) = F(t)x(t) + G(t)u(t)   (1)

with noisy measurement

z(t) = H(t)x(t) + w(t);  t0 ≤ t ≤ T.   (2)

x(t), u(t) and z(t) are column vectors of n, m ≤ n and p dimension, respectively. F, G and H are matrices of appropriate dimension. x(t0) is gaussianly distributed with zero mean and covariance P0, and u(t) and w(t) are uncorrelated zero mean Gaussian white noise processes such that

Cov[u(t)] = E{u(t)u^T(τ)} = Q(t)δ(t − τ);  Q ≥ 0   (3)

and

Cov[w(t)] = E{w(t)w^T(τ)} = R(t)δ(t − τ);  R > 0.   (4)

The uncertain matrices Q and R are assumed to lie in compact convex sets VQ and VR. For example
* Received 10 January 1972; revised 23 March 1972. The original version of this paper was presented at the 5th IFAC Congress which was held in Paris, France during June 1972. It was recommended for publication in revised form by Associate Editor A. Sage.
† The Analytic Sciences Corporation, Reading, Massachusetts 01867.
‡ University of Massachusetts, Amherst, Massachusetts 01002.
VQ = {Q(t) | Q(t) = Q^T(t), Q(t) ≥ 0, and each element qij(t) bounded between known limits}   (5)

is one such set. For convenience the set V = VQ × VR with elements v is defined. Lacking exact knowledge of Q and R, a filter identical in form to the
Kalman filter is chosen a priori to estimate x(t). Now, however, the gain will be adjusted to satisfy an appropriate sensitivity criterion. Specifically, the filter takes the form

x̂̇(t) = F(t)x̂(t) + K(t)[z(t) − H(t)x̂(t)].   (6)

The matrix K(t) in equation (6) is an independent variable to be selected by the filter designer. For our purposes K(t) may belong to any compact convex set, VK, which covers the set of all Kalman gains corresponding to V. Note that x̂(t) is an unbiased estimate of x(t). Let

M(t) = E{[x(t) − x̂(t)][x(t) − x̂(t)]^T}.   (7)

Then for a given Q and R

Ṁ(t) = (F − KH)M + M(F − KH)^T + GQG^T + KRK^T;  M(t0) = P0.   (8)

The mean square error of filter (6) is tr[M(T)]. Let the filter performance index be

JM(T) = tr[W M(T)]   (9)

where W is an arbitrary constant positive definite symmetric weighting matrix. For a given Q and R the minimum value of JM(T) is [1, 2]

Jo(T) = tr[W P(T)]   (10)

where P(t), the covariance of the optimal estimate, satisfies the matrix Riccati equation

Ṗ(t) = F(t)P(t) + P(t)F^T(t) + G(t)Q(t)G^T(t) − P(t)H^T(t)R^{−1}(t)H(t)P(t);  P(t0) = P0   (11)

and the optimal filter gain is

Ko(t) = P(t)H^T(t)R^{−1}(t).   (12)

JM is a function of the unknown matrices Q and R and the gain K. From the definition of Jo

JM(K, v, P0, t0, T) ≥ Jo(v, P0, t0, T) > 0;  ∀ v ∈ V and ∀ K ∈ VK.   (13)

JM is one performance measure for filter (6). It is also appropriate to measure the performance of filter (6) in terms of its absolute or relative departure from optimality. These performance measures take the form

SA(T) = JM(K, v, P0, t0, T) − Jo(v, P0, t0, T)   (14)

and

SR(T) = [JM(K, v, P0, t0, T) − Jo(v, P0, t0, T)] / Jo(v, P0, t0, T).   (15)

Since v is uncertain, it seems most appropriate to select K to satisfy one of the following criteria:

S1(P0, t0, T) = min_{K∈VK} max_{v∈V} JM(K, v, P0, t0, T)   (16)

S2(P0, t0, T) = min_{K∈VK} max_{v∈V} SA(K, v, P0, t0, T)   (17)

S3(P0, t0, T) = min_{K∈VK} max_{v∈V} SR(K, v, P0, t0, T).   (18)

All three criteria are minimax in nature. The S1 criterion places a least upper bound on JM in the presence of uncertain parameters and may be considered a "worst case" design. The S2 and S3 criteria seek to control filter sensitivity directly by minimizing the maximum absolute or relative deviation of JM from its optimum value over the uncertain parameter set.

2. THE S1 FILTER

The properties of the S1 filter for constant but uncertain Q and R are now developed.* First, it is shown that JM with the sets V and VK satisfies the sufficient conditions of a min-max theorem and thus

min_{K∈VK} max_{v∈V} JM(K, v, P0, t0, T) = max_{v∈V} min_{K∈VK} JM(K, v, P0, t0, T).   (19)

One theorem and a lemma are required.

Theorem 2.1 [3]. Let f(x, y) be a real-valued function of two variables x and y which are elements of X and Y, respectively, where both X and Y are compact, convex sets. If f is continuous, convex in y for each x and concave in x for each y, then

min_{y∈Y} max_{x∈X} f(x, y) = max_{x∈X} min_{y∈Y} f(x, y).

Lemma 2.1 [4]. If f(x, y, t) is a continuous scalar function of x, y and t and if f(x, y, t) is convex in x for every y and t, with second order partials with respect to x continuous in y and t, then

g(x, y) = ∫_{t0}^{T} f(x, y, t) dt

is a convex function of x for every y.

* Results given in this section are also valid for time varying Q and R; however, they are obtained most directly via variational techniques. See Ref. [4].
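The relation between the suboptimal variance equation (8) and the optimal variance equation (11) can be illustrated numerically. The sketch below, a reduction of the paper's equations to a scalar plant with F = −1, G = H = W = 1 and an assumed fixed gain k (all numeric values are illustrative assumptions, not data from the paper), propagates both equations by Euler integration and checks inequality (13), JM ≥ Jo > 0.

```python
# Scalar sketch of equations (8) and (11): the fixed-gain error variance
# M(t) from (8) always dominates the optimal Kalman variance P(t) from
# (11), illustrating inequality (13).  F = -1, G = H = W = 1 and the
# gain k = 0.5 are assumed values for illustration only.

def propagate(q, r, k, p0=1.0, T=5.0, dt=1e-3):
    F, m, p = -1.0, p0, p0
    for _ in range(int(T / dt)):
        # Equation (8), scalar form: Mdot = 2(F - k)M + q + k^2 r
        m += dt * (2.0 * (F - k) * m + q + k * k * r)
        # Equation (11), scalar form: Pdot = 2 F P + q - P^2 / r
        p += dt * (2.0 * F * p + q - p * p / r)
    return m, p  # J_M = M(T) and J_o = P(T) since W = 1

m, p = propagate(q=1.0, r=1.0, k=0.5)
assert m >= p > 0.0  # inequality (13)
```

At these parameter values both variances settle near their steady states, M → (q + k²r)/(2(1 + k)) and P → −r + (r² + qr)^{1/2}, which is the closed-form solution used again in Example 1.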
The sets VK and V are compact and convex by definition. The solution for M(t) is

M(t) = Φ(t, t0)P0Φ^T(t, t0) + ∫_{t0}^{t} Φ(t, σ)[G(σ)Q(σ)G^T(σ) + K(σ)R(σ)K^T(σ)]Φ^T(t, σ) dσ   (20)

where

Φ̇(t, t0) = [F(t) − K(t)H(t)]Φ(t, t0);  Φ(t0, t0) = I.   (21)

Continuity of JM in VK and V for all finite t is easily demonstrated by applying the properties of matrix norms to either equation (8) or (20). Since JM is linear in VQ and VR, it is concave in V. Observe that JM is continuous and concave in P0, so that any uncertainties in initial covariance can be included by appropriately redefining V. It remains to be shown that JM is convex in VK for every v ∈ V and t > t0. From equations (8) and (9) one has

J̇M = tr(WṀ) = tr[W(F − KH)M + WM(F − KH)^T + WGQG^T + WKRK^T].   (22)

The rows of K are mapped into an equivalent np × 1 column vector k, where

k^T = [k11 … k1p, …, kn1 … knp].   (23)

Then the second partial of J̇M with respect to k is

∂²J̇M/∂k² = 2(W ⊗ R)   (24)

where ⊗ denotes Bellman's Kronecker product for matrices [5]. Since W and R are positive definite, W ⊗ R is positive definite, and J̇M is strictly convex in VK. Now

JM(T) = tr(WP0) + ∫_{t0}^{T} J̇M dt.   (25)

The convexity of the integral of J̇M follows immediately from lemma 2.1. Since adding a constant to the integral does not affect its convexity, JM is strictly convex in VK. Thus, all the conditions of theorem 2.1 are met and assertion (19) is true. The right-hand side of (19) can be written as

max_{v∈V} { min_{K∈VK} JM(K, v, P0, t0, T) } = max_{v∈V} Jo(v, P0, t0, T)   (26)

and using (16) one obtains

S1(P0, t0, T) = max_{v∈V} Jo(v, P0, t0, T) = Jo(vT*, P0, t0, T)   (27)

where vT* is the maximizing v at time T. The saddle point theorem [3] or the uniqueness of the Kalman filter now implies that

K*(t) = Ko(vT*, t);  t0 ≤ t ≤ T   (28)

where K*(t) is the S1 filter gain. Thus the relatively difficult problem of minimaximizing JM can be replaced with the simpler problem of maximizing Jo over V, and the S1 filter is simply the Kalman filter for the maximizing values of Q and R.

3. THE INFINITE TIME S1 FILTER

Although the minimaximization of JM has been simplified, the problem for arbitrary T is still formidable. The dependence of Jo on P0 further complicates matters since any uncertainty in P0 must be included in the set V. Under fairly general conditions Jo is independent of P0 as T → ∞. Assuming that F, G, H, Q and R are constant, bounded in norm, and that the system defined by equations (1) and (2) is uniformly completely controllable and observable, KALMAN and BUCY [1] have shown that every solution of the variance equation (11), starting at a symmetric non-negative matrix P0, converges to a unique constant non-negative matrix P̄ as t → ∞. For constant plants P and Jo are functions of T − t0 only. For convenience let t0 = 0.

Let us examine (27) under the above assumptions in the limit as T → ∞. Since V is compact, the convergence of P(T) to P̄ and therefore of Jo(T) to J̄o is uniform in V. That is, there exists a T(ε) such that for every T > T(ε) and any v ∈ V

|Jo(v, P0, T) − J̄o(v)| < ε.   (29)

J̄o is always non-negative so (29) implies that

J̄o(v) − ε < Jo(v, P0, T) ≤ Jo(vT*, P0, T)   (30)

where the rightmost inequality follows from the definition of vT*. Since (30) is true for every v ∈ V, it is true for v̄*, the maximizer of J̄o, and one has

J̄o(v̄*) − ε < Jo(vT*, P0, T).   (31)

Again from equation (29) and the definition of v̄*

Jo(vT*, P0, T) < J̄o(vT*) + ε ≤ J̄o(v̄*) + ε   (32)

which is equivalent to

Jo(vT*, P0, T) − J̄o(v̄*) < ε.   (33)
Equations (30) and (33) together imply that

|Jo(vT*, P0, T) − J̄o(v̄*)| < ε;  T > T(ε).   (34)

Combining equation (27) with (34) now yields the desired result

lim_{T→∞} S1(P0, T) ≡ S̄1 = max_{v∈V} J̄o(v) = J̄o(v̄*).   (35)

It follows that

K̄* = K̄o(v̄*)   (36)
where the bar again denotes infinite time. Thus the steady-state S1 filter gain is a fixed gain equal to the Kalman gain for the uncertain parameters that maximize J̄o.

4. MAXIMIZATION OF J̄o(v)

Some of the important properties of the infinite time maximization are now developed. From the saddle point theorem v̄* is unique. Furthermore, J̄o(v) is continuous and concave in V. These properties are sufficient to guarantee that J̄o has only a global maximum.

Next it is shown that grad_v J̄o(v) always exists. P̄ for some nominal QN and RN is obtained by equating the left-hand side of equation (11) to zero. Thus

0 = FP̄N + P̄N F^T + GQN G^T − P̄N H^T RN^{−1} H P̄N.   (37)

Consider an uncertain element qij in Q and let

Qij = ∂Q/∂qij  and  P̄ij = ∂P̄N/∂qij.   (38)

Observe that Qij is symmetric with two forms:

Qij = [1ij],  i = j;  Qij = [1ij] + [1ji],  i ≠ j   (39)

where [1ij] denotes a matrix whose entries are all zero except for a 1 in the ijth position. Differentiating (37) with respect to qij one obtains

0 = FP̄ij + P̄ij F^T + GQij G^T − P̄ij H^T RN^{−1} H P̄N − P̄N H^T RN^{−1} H P̄ij.   (40)

Recognizing that P̄N H^T RN^{−1} = K̄N, (40) becomes

0 = (F − K̄N H)P̄ij + P̄ij(F − K̄N H)^T + GQij G^T.   (41)

Similarly, defining

Rij = ∂R/∂rij  and  P̃ij = ∂P̄N/∂rij   (42)

and differentiating (37) with respect to rij yields

0 = (F − K̄N H)P̃ij + P̃ij(F − K̄N H)^T + K̄N Rij K̄N^T.   (43)

Equations (41) and (43) are linear matrix algebraic equations for P̄ij and P̃ij which have solutions whenever the eigenvalues of (F − K̄N H) are nonzero [5]. But (F − K̄N H), the system matrix of the steady-state Kalman filter, is stable and therefore P̄ij and P̃ij always exist. It follows immediately that

∂J̄o/∂qij = tr[WP̄ij]  and  ∂J̄o/∂rij = tr[WP̃ij]   (44)

always exist. Qii and Rii are positive semi-definite and therefore so are P̄ii and P̃ii. Then

∂J̄o/∂qii = tr[WP̄ii] > 0  and  ∂J̄o/∂rii = tr[WP̃ii] > 0   (45)

and J̄o is maximized with respect to a diagonal element of Q or R by setting that element to its maximum assumed value.

The above results indicate that J̄o(v̄*) is easily found. Diagonal elements of Q and R are set to their largest value.* For off-diagonal elements steepest ascent techniques are appropriate. One need only insure that the definiteness requirements on Q and R are met at each step in the search. Such a program, together with several solved examples, is described in Ref. [4].

* If Q and R are purely diagonal this strategy will maximize Jo(T) for every T. In general, however, no constant value of Q and R will do this. See Ref. [4].

5. THE S2 AND S3 FILTERS

Certain useful properties of the S2 and S3 filters for constant uncertain Q and R are presented in this section. It is convenient, as with the S1 filter, to restrict our attention to time invariant systems at infinite time, where SA and SR are functions of K and v only. We shall denote by K̂ that value of K ∈ VK for which

max_{v∈V} S(K̂, v) = min_{K∈VK} max_{v∈V} S(K, v)   (46)

where S may be SA or SR. Since min_{K∈VK} S(K, v) = 0 for every v, it is clear that min-max does not equal max-min for the SA and SR sensitivity measures. However, the regions in V and VK in which the minimax is found can be determined.

First, the strict convexity of JM in VK implies that S(K, v) and max_{v∈V} S(K, v) are strictly convex in VK.
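The gradient computation (41)/(44) can be checked on the scalar plant, where the Lyapunov equation collapses to one line. The sketch below (my own scalar reduction; the plant and numbers are illustrative assumptions) computes ∂J̄o/∂q from the scalar form of (41) and confirms it against a finite difference of the steady-state Riccati solution.

```python
# Scalar sketch of the sensitivity equations (41) and (44) for the
# plant xdot = -x + u, z = x + w (F = -1, G = H = W = 1).  Equation
# (41) reduces to 0 = 2(F - k)Pq + 1, so dJo/dq = Pq = 1/(2(1 + k)).
# A central difference on the Riccati solution confirms the value.

def p_opt(q, r):
    return (r * r + q * r) ** 0.5 - r    # steady-state Riccati solution

def djo_dq(q, r):
    k = p_opt(q, r) / r                  # steady-state Kalman gain (12)
    return 1.0 / (2.0 * (1.0 + k))       # scalar form of (41), (44)

q, r, h = 1.0, 1.0, 1e-6
fd = (p_opt(q + h, r) - p_opt(q - h, r)) / (2.0 * h)
assert abs(djo_dq(q, r) - fd) < 1e-6
assert djo_dq(q, r) > 0.0                # diagonal gradient positive, cf. (45)
```

The positive sign agrees with (45): increasing the diagonal plant-noise intensity always increases the optimal cost, which is why diagonal elements are simply set to their upper bounds.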
Since a strictly convex function has a unique minimum, K̂, and thus the S2 and S3 filters, are unique. We now show that K̂ is optimum for some v ∈ V.

Theorem 4.1. The S2 and S3 filters for constant but uncertain Q and R are optimal for some v ∈ V, that is

K̂ ∈ 𝒦o   (47)

where 𝒦o = {K | K = K̄o(v) for some v ∈ V}.

The proof of this theorem proceeds as follows. It is shown that if K̂ ∉ 𝒦o, a direction of motion in VK, say ΔK, always exists such that

JM(K̂, v) > JM(K̂ + εΔK, v);  ∀ v ∈ V   (48)

where ε is an appropriate small positive constant. But (48) implies that

S(K̂, v) > S(K̂ + εΔK, v);  ∀ v ∈ V   (49)

and thus

max_{v∈V} S(K̂, v) > max_{v∈V} S(K̂ + εΔK, v).   (50)

Therefore, K̂ cannot be the minimax sensitivity filter gain. (Indeed the condition that no direction in VK can be found which simultaneously reduces JM(K, v) for every v ∈ V is precisely the condition that K ∈ 𝒦o.)

Since V is convex, any point v ∈ V can be written as a convex combination of the extreme points of V, denoted here by VE. That is

v = Σ_{i=1}^{r} αi vi;  vi ∈ VE,  0 ≤ αi ≤ 1,  Σ_i αi = 1.   (51)

Since JM is linear in V, (51) may be written as

JM(K, v) = JM(K, Σ_i αi vi) = Σ_{i=1}^{r} αi JM(K, vi).   (52)

The JM(K, vi), i = 1, …, r, constitute a set of basis functionals for the functional JM(K, v). Since the αi are all positive we need only show that for K ∉ 𝒦o a ΔK always exists such that

JM(K + εΔK, vi) < JM(K, vi);  ∀ i.   (53)

The formal proof of theorem 4.1 is a direct consequence of the following lemma.

Lemma 4.1. Let JM(K, vi); vi ∈ VE, i = 1, …, r, represent the set of basis functionals for the performance index JM(K, v), and let K1 be such that K1 ∉ 𝒦o. Then there exists a direction ΔK in VK such that

JM(K1 + εΔK, vi) < JM(K1, vi);  i = 1, …, r   (54)

with ε an appropriate positive constant.

An inductive proof is given.

1. Observe that when K1 ∉ 𝒦o, the gradient ∇K JM(K, v) ≠ 0 for any v ∈ V. Now by selecting ΔK = −∇K JM(K1, v1), the first basis functional can be reduced in value.

2. Assume (54) is true for i = 1, …, n and denote by ai the gradient ∇K JM(K, vi)|K=K1. Then there exists a convex polyhedral cone Cn, defined as follows:

Cn = {ΔK | ⟨ΔK, ai⟩ < 0, i = 1, …, n}   (55)

such that any small motion along a ΔK ∈ Cn reduces the first n basis functionals. It must be shown that the intersection of the set

S = {ΔK | ⟨ΔK, a_{n+1}⟩ < 0}   (56)

with Cn is non-void. Assuming the contrary, observe that Cn ∩ S = ∅ if and only if

a_{n+1} = −t Σ_{i=1}^{n} λi ai;  0 ≤ λi ≤ 1,  Σ_i λi = 1,  t > 0   (57)

i.e. a_{n+1} lies in the negative convex span of a1, …, an. When (57) is true one has from (55), for any ΔK ∈ Cn,

⟨ΔK, a_{n+1}⟩ = −t Σ_{i=1}^{n} λi ⟨ΔK, ai⟩ > 0.   (58)

Thus no small motion in Cn at K1 will reduce JM(K, v_{n+1}). Now consider a point ṽ ∈ V,

ṽ = Σ_{i=1}^{n+1} γi vi;  vi ∈ VE,  0 ≤ γi ≤ 1,  Σ_{i=1}^{n+1} γi = 1   (59)

and the performance functional

JM(K, ṽ) = Σ_{i=1}^{n+1} γi JM(K, vi).   (60)

Then

∇K JM(K, ṽ) = Σ_{i=1}^{n+1} γi ∇K JM(K, vi)   (61)

and using (57) one obtains

∇K JM(K, ṽ) = Σ_{i=1}^{n} (γi − γ_{n+1} t λi) ∇K JM(K, vi).   (62)

This gradient can be made zero at K1 by equating all coefficients to zero and invoking the constraint on the γi. This leads to a set of n + 1 linear equations in the n + 1 unknowns γi, which has the solution

γi = λi t/(1 + t) < 1,  i = 1, …, n;  γ_{n+1} = 1/(1 + t) < 1.   (63)

Thus, a set of γi satisfying (59) exists and, therefore, there exists a ṽ ∈ V such that (62) is zero. That is

∇K JM(K, ṽ)|K=K1 = 0   (64)

which implies K1 ∈ 𝒦o. Contradiction.

The fact that Jo is concave in V and JM is linear in V may be used to show that SA and SR are convex in V. Since a convex scalar function of a vector defined on a compact convex set attains its maximum on the extreme points of that set, one has immediately

min_{K∈VK} max_{v∈V} S(K, v) = min_{K∈VK} max_{v∈VE} S(K, v)   (65)

or

max_{v∈V} S(K̂, v) = max_{v∈VE} S(K̂, v).   (66)

Thus, the search for S2 and S3 may be restricted to the set of all optimal gains corresponding to V and the set of extreme points of V. An algorithm for finding the infinite time S2 and S3 filters is given in Ref. [6].

6. SOME PRACTICAL CONSIDERATIONS

The sets of uncertain noise statistics, VQ and VR, have been defined quite abstractly with the minimum number of constraints required for the proofs presented. In any practical problem much more will usually be known about Q and R than has been assumed so far. Often only a few elements in these matrices will be uncertain. The designer should choose the simplest V which adequately describes his uncertainty in Q and R.

Algorithms for finding the infinite time minimax filters based on the development in Sections 4 and 5 are described in Refs. [4] and [6]. Unfortunately, space limitations preclude a description of these algorithms here. Of the three minimax filters, the S1 filter is the simplest to compute. Its maximum departure from optimality is also easily determined since it will occur at one of the extreme points of V. It is therefore recommended that the S1 filter be evaluated first in any design effort. If closer tracking of the optimal error than that provided by the S1 filter is then deemed necessary, one may proceed with a design and evaluation of the S2 or S3 filters.

7. SOME EXAMPLES

Example 1. This example illustrates the basic geometric properties of minimax filters. Consider a first order plant with noisy measurement

ẋ(t) = −x(t) + u(t);  z = x(t) + w(t)   (67)

where cov[u(t)] = qδ(t − τ), cov[w(t)] = rδ(t − τ) and q and r are assumed to be in the ranges

0 ≤ q ≤ 1,  0 ≤ r ≤ 1.   (68)

Using the filter

x̂̇ = −x̂ + k(z − x̂);  k > −1   (69)

and setting W = 1, Jo and JM are

Jo = (r² + rq)^{1/2} − r,  JM = (k²r + q)/(2(1 + k)).   (70)

The minimax value of JM occurs at q = r = 1 with k = 0.414. The minimax value of SA is attained at the q × r extreme points (0, 1) and (1, 0) with k = 1. Since SR is infinite when r = 0 or q = 0, the S3 filter does not exist for this example. A numerical comparison of the S1 and S2 filters is contained in Table 1. Notice that the S1 filter provides a least upper bound on JM at the expense of greater maximum deviation from optimality, whereas the opposite is true of the S2 filter. The optimal, S1 and S2 filter error surfaces are shown in Fig. 1. Observe that the S1 and S2 filters are optimal for q/r ratios of 1 and 3, respectively.

TABLE 1. S1, S2 FILTER COMPARISON FOR EXAMPLE 1

Extreme point         S1                  S2
(q, r)        Jo      JM      Δ1*        JM      Δ2*
(0, 0)        0       0       0          0       0
(0, 1)        0       0.061   0.061      0.250   0.250
(1, 0)        0       0.353   0.353      0.250   0.250
(1, 1)        0.414   0.414   0          0.500   0.086
Max. value    0.414   0.414   0.353      0.500   0.250

* Δ = JM − Jo.

This observation points up an interesting property of the optimal filter, namely that the mapping from V to 𝒦o given by the Kalman algorithm is not one-to-one. Thus a given Ko can be optimal over a subset of V. If this subset contains the uncertain parameter set, the minimax filter is everywhere optimal. In general, when the number of uncertain elements in Q and R is less than the dimension of VK, the minimax filters can be made to provide nearly optimal performance over the entire range of uncertain parameters.
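The S1 design of Example 1 can be verified numerically. Since JM in (70) is linear in (q, r), its maximum over the box (68) occurs at a corner; minimizing that corner maximum over k recovers the gain k = √2 − 1 ≈ 0.414 quoted above. The grid resolution below is an arbitrary choice.

```python
# Numerical check of Example 1: for xdot = -x + u, z = x + w with
# q, r in [0, 1], equation (70) gives JM(k, q, r) = (k^2 r + q)/(2(1+k)).
# JM is linear in (q, r), so max over V is attained on the corners;
# minimizing the corner maximum over k recovers the S1 gain k = 0.414.

corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # extreme points of V

def jm(k, q, r):
    return (k * k * r + q) / (2.0 * (1.0 + k))

def worst(k):
    return max(jm(k, q, r) for q, r in corners)

ks = [i / 10000.0 for i in range(1, 20000)]   # search k over (0, 2)
k_star = min(ks, key=worst)
assert abs(k_star - (2.0 ** 0.5 - 1.0)) < 1e-3
assert abs(worst(k_star) - 0.414) < 1e-3      # minimax value of JM
```

The binding corner is (q, r) = (1, 1), in agreement with the statement that the minimax value of JM occurs at q = r = 1.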
Fig. 1. S1, S2 and optimal error surfaces for example 1. (Panels: (a) Jo (optimal) error surface; (b) S1 filter error plane; (c) S1, Jo line of contact; (d) projection of contact line on q × r; (e) S2 filter error plane; (f) S2, Jo line of contact.)
Example 2.† Although this paper has so far considered only continuous time systems, all of the properties of S1, S2 and S3 filters carry over directly to the discrete case. Consider the mixed system

ẋ(t) = Fx(t) + u(t)   (71)

zn = Hxn + wn.   (72)

The uncertain parameter set VQ(q11, q22) is taken to be 1 ≤ q11 ≤ 4, 4 ≤ q22 ≤ 9. The infinite time discrete filter is given by

x̂n = Φ(Δt)x̂n−1 + K(zn − HΦ(Δt)x̂n−1)   (73)

where

Φ(Δt) = e^{FΔt};  Δt = 0.1 sec.

Setting W = I, the values of K for each minimax filter are

K̄1* = [0.5316, 0.5840]^T,  K̄2* = [0.4718, 0.6492]^T,  K̄3* = [0.4723, 0.5526]^T.   (74)

These gains are also Kalman gains for the (q11, q22) pairs

v1* = (4, 9),  v2* = (2.526, 6.458),  v3* = (2.541, 6.432).   (75)

The optimal, S1, S2 and S3 filters for this example are compared in Table 2. Note that the total mean square error for all three filters in this example is always within 7% of the optimal error.

† Provided by P. L. Bongiovanni, Department of Electrical Engineering, University of Massachusetts.

Example 3. In many instances the position error of a terrestrial inertial navigator is adequately described by a three-degree-of-freedom oscillator with a 24 hr period [7]. The three states of this oscillator, ψx, ψy and ψz, represent the small angular misalignment of the inertial platform about the computational coordinate system. Inputs to the oscillator are the x, y and z gyro random drift rates, which are assumed to be first-order continuous Markov processes. The mean square values of the gyro drift rates are fairly well known. The x and y gyro drift rates are also known to be correlated, but the amount of correlation is uncertain. Continuous measurements of x and y position with uncertain cross-correlation are available. The performance index of interest is the total rms radial position error

σr = R0 [E{ψx² + ψy²}]^{1/2}.   (76)

A complete specification of the position error problem is given in Fig. 2. The sets VQ and VR for this problem are:

VQ = {Q | Q = [0.4×10⁻⁵, q12, 0; q12, 0.4×10⁻⁵, 0; 0, 0, 0.18×10⁻⁵], |q12| ≤ 0.3×10⁻⁵},

VR = {R | R = [0.0625, r12; r12, 0.0625], |r12| ≤ 0.05}.   (77)

A minimax radial error of 0.269 nautical miles was attained at the point

q12 = −0.3×10⁻⁶,  r12 = −0.0234.   (78)

The S2 filter for this example produced a minimax absolute radial deviation from optimum of 0.026 nautical miles. The design point for this filter is

q12 = −0.15×10⁻⁶,  r12 ≈ −0.03.   (79)
TABLE 2. MINIMAX FILTER COMPARISON FOR EXAMPLE 2

Extreme point           S1               S2               S3
(q11, q22)   Jo      JM      Δ1      JM      Δ2      JM      Δ3
(1, 4)       3.212   3.326   0.114   3.258   0.046   3.259   0.047
(1, 9)       5.133   5.493   0.360   5.466   0.333   5.473   0.340
(4, 4)       4.937   5.285   0.348   5.269   0.333   5.262   0.327
(4, 9)       7.452   7.452   0       7.478   0.026   7.477   0.025
Max. value   7.452   7.452   0.360   7.478   0.333   7.477   0.340
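The fixed-gain discrete estimator (73) is a one-line recursion. The sketch below implements it for a generic two-state system; the transition matrix, measurement row, gain and measurement sequence are placeholder values, not the (unstated) system of Example 2.

```python
# Sketch of the fixed-gain discrete estimator (73):
#   xhat_n = Phi xhat_{n-1} + K (z_n - H Phi xhat_{n-1})
# for a two-state system with scalar measurements.  Phi, H, K and the
# measurement sequence are assumed placeholder values for illustration.

def estimate(zs, Phi, H, K, xhat0):
    """Run recursion (73) over a list of scalar measurements zs."""
    xhat = list(xhat0)
    for z in zs:
        pred = [Phi[0][0] * xhat[0] + Phi[0][1] * xhat[1],
                Phi[1][0] * xhat[0] + Phi[1][1] * xhat[1]]   # Phi * xhat
        innov = z - (H[0] * pred[0] + H[1] * pred[1])        # z - H Phi xhat
        xhat = [pred[0] + K[0] * innov, pred[1] + K[1] * innov]
    return xhat

Phi = [[0.9, 0.1], [0.0, 0.9]]   # assumed transition matrix exp(F * dt)
H, K = [1.0, 0.0], [0.5, 0.2]    # assumed measurement row and fixed gain
x = estimate([1.0, 1.0, 1.0], Phi, H, K, [0.0, 0.0])
```

Because the gain K is a precomputed constant, the on-line filter requires no covariance propagation at all, which is the mechanization advantage noted in the conclusion.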
Fig. 2. Navigation problem description: navigator error state equations and measurement equations, with parameters ψx, ψy, ψz = platform misalignment (rad); εx, εy, εz = gyro drift rates (deg/hr); ΩV = 0.186 rad/hr; K1 = 0.01734 rad/deg; σx = σy = 0.0014 deg/hr; σz = 0.003 deg/hr; βx = βy = 1/hr; βz = 0.1/hr; R0 = 3437 nm.
The error performance of both filters is shown in Table 3. Notice that the S1 filter error is insensitive to the value of the off-diagonal terms, since it is designed for the point where the gradient with respect to these terms is zero.

TABLE 3. RADIAL POSITION ERROR COMPARISON FOR S1 AND S2 NAVIGATION FILTERS*

Uncertain parameter           S1               S2
q12 (×10⁻⁵), r12    Jo      JM      Δ1      JM      Δ2
 0.3,  0.05         0.258   0.269   0.011   0.284   0.026
 0.3, −0.05         0.221   0.269   0.048   0.247   0.026
−0.3,  0.05         0.262   0.269   0.007   0.283   0.021
−0.3, −0.05         0.220   0.269   0.049   0.246   0.026
 0,     0           0.265   0.269   0.004   0.265   ≈0
Max. value          0.269   0.269   0.049   0.284   0.026

* All data in nautical miles.
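The extreme-point result (65)-(66) reduces the S2 search to the corners of V. The sketch below applies it to the scalar plant of Example 1 (not the navigation problem above; all values come from equation (70)) and recovers the S2 gain k = 1 with worst-case deviation 0.250 from Table 1.

```python
# Sketch of the extreme-point search (65)-(66) for the S2 filter of
# Example 1: minimize over k the maximum over the corners of V of
# SA(k, q, r) = JM - Jo, using the closed forms in equation (70).

corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]

def sa(k, q, r):
    jm = (k * k * r + q) / (2.0 * (1.0 + k))   # suboptimal cost, (70)
    jo = (r * r + q * r) ** 0.5 - r            # optimal cost, (70)
    return jm - jo

def worst_sa(k):
    return max(sa(k, q, r) for q, r in corners)

ks = [i / 10000.0 for i in range(1, 30000)]
k2 = min(ks, key=worst_sa)
assert abs(k2 - 1.0) < 1e-3          # S2 gain of Example 1
assert abs(worst_sa(k2) - 0.250) < 1e-3
```

The worst-case corners are (0, 1) and (1, 0), matching the statement in Example 1 that the minimax value of SA is attained at those extreme points.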
8. CONCLUSION

A minimax approach to the design of linear filters for state estimation when large uncertainties in plant and measurement noise covariances are present has been given. This approach yields a unique fixed filter design which places a least upper bound on a given sensitivity measure over the assumed range of uncertain parameters. The minimax filters are identical in form to the Kalman filter and can provide nearly optimal performance over the entire range of uncertain statistics. The design approach, therefore, circumvents one limitation on the use of the Kalman filter, namely, the need to have exact knowledge of plant and measurement noise statistics. Further, in light of the fact that the filter is optimum for some point, or set of points, in the range of uncertain statistics, the minimax approach correspondingly systematizes the sensitivity analysis approach to filter design.

Although determination of the minimax filter gain generally requires search techniques, this determination can be made off-line. The resulting minimax filter is no more difficult to mechanize than the Kalman filter, and as such represents an attractive alternative to adaptive filters, which are necessarily more complex in structure.

REFERENCES

[1] R. E. KALMAN and R. S. BUCY: New results in linear filtering and prediction theory. J. Basic Engng Trans. ASME, Series D 83, 95-108 (1961).
[2] M. ATHANS and E. TSE: A direct derivation of the optimal linear filter using the maximum principle. IEEE Trans. Aut. Control AC-12, 690-698 (1967).
[3] S. KARLIN: Mathematical Methods and Theory in Games, Programming, Economics, Vol. II. Addison-Wesley, Reading, Massachusetts (1959).
[4] J. A. D'APPOLITO: Minimax Design of Low Sensitivity Filters for State Estimation. Ph.D. Thesis, Department of Electrical Engineering, University of Massachusetts (1969).
[5] R. E. BELLMAN: Introduction to Matrix Analysis. McGraw-Hill, New York (1960).
[6] P. L. BONGIOVANNI: Design of Discrete Estimators Using Minimax Techniques. Ph.D. Thesis, Department of Electrical Engineering, University of Massachusetts (1971).
[7] J. C. PINSON: Initial guidance for cruise vehicles, Chapter 4 of Guidance and Control of Aerospace Vehicles (Edited by C. T. LEONDES). McGraw-Hill, New York (1963).