INFORMATION SCIENCES 10, 243-278 (1976)
A Unifying Framework for Linear Estimation: Generalized Partitioned Algorithms

DEMETRIOS G. LAINIOTIS
Department of Electrical Engineering, Bell Hall, State University of New York at Buffalo, Amherst, New York 14260
ABSTRACT
In this paper, generalized partitioned algorithms are presented that serve as the unifying framework for linear filtering and smoothing. The fundamental and all encompassing nature of the generalized partitioned algorithms (hereon denoted GPA) is clearly demonstrated by showing that the GPA contain as special cases important generalizations of past well-known linear estimation algorithms, as well as whole families of such algorithms, of which all previously obtained major filtering and smoothing algorithms are special cases. Specifically, generalized partitioned filtering and smoothing algorithms are given in terms of integral expressions that are theoretically interesting and computationally attractive, and that provide a unification of all previous approaches to linear filtering and smoothing, and a clear delineation of their inter-relationships. In particular, the GPA for filtering are shown to contain as special cases families of filtering algorithms which constitute generalizations of the Kalman-Bucy and Chandrasekhar algorithms, as well as of the previously obtained partitioned algorithms of Lainiotis. These generalizations pertain to arbitrary initial conditions and time-varying models. Further, the GPA for smoothing are shown to contain two families of generalized backward and forward smoothing algorithms valid for arbitrary boundary conditions, of which all previous backward and forward algorithms are special cases. It is also shown that the GPA may be given in terms of an imbedded generalized Chandrasekhar algorithm, with the consequent computational advantages. Furthermore, the GPA are shown to serve as the basis of computationally effective, fast, and numerically robust algorithms for the numerical implementation of the estimation formulas. A particularly effective doubling algorithm is also given for calculating the steady-state filter. The partitioned numerical algorithms are given exactly in terms of a set of elemental solutions which are both simple and completely decoupled, and as such computable in either a parallel or serial processing mode. Moreover, the overall solution is given by a simple recursive operation on the elemental solutions.
© American Elsevier Publishing Company, Inc., 1976

I. INTRODUCTION
The linear estimation problem--both filtering and smoothing--has attracted considerable attention in the past quarter century, as can be seen in the recent surveys of the subject [1-3]. Specifically, there have been
numerous approaches [1-3] to linear filtering, resulting essentially in the fundamental Kalman-Bucy filter [4]. More recently, in an approach to linear filtering computationally different than that of Kalman-Bucy [4], Kailath and Lindquist [5-6] have obtained a variation of the Kalman-Bucy filter that is--under certain conditions--computationally attractive. This is the so-called Chandrasekhar algorithm. However, it must be noted that the latter results were restricted to time-invariant linear models only, and their possible computational advantages were restricted to a special class of initial conditions, Brammer [7].

Similarly, linear smoothing has attracted even greater attention [1-3], resulting in a plethora of algorithms. Specifically, various algorithms have been obtained by Mayne [8], Fraser [9], Meditch [10], Kailath and Frost [11], and many others [1-3], each using a different approach or considering a different aspect of the problem.

In a radically different approach to linear estimation--both filtering and smoothing--Lainiotis [12-16] obtained fundamentally new filtering and smoothing algorithms--the partitioned algorithms--in terms of explicit integral expressions in a partitioned or decomposed form. The "partitioned" algorithms were shown [12-16] to have several important properties, as well as to yield interesting new results and interpretations: fast and robust Riccati equation solutions [14-21], generalizations of the concepts of observability and controllability [15-19], effective filter initialization [14-16], etc. Furthermore, it has been shown by Lainiotis [15-16, 24] that all major previous smoothing algorithms can be readily obtained from the partitioned algorithms by simple forward or backward differentiations of the "partitioned" algorithms. Specifically, the Meditch [10] and Kailath-Frost [11] algorithms are obtained by forward differentiation of the "partitioned" algorithms, while the Mayne-Fraser [8-9] algorithm is obtained by backward differentiation.

In this paper, generalized partitioned algorithms are presented that serve as the unifying framework for linear filtering and smoothing. The fundamental and all encompassing nature of the generalized partitioned algorithms (hereon denoted GPA) is clearly demonstrated by showing that the GPA contain as special cases important generalizations of past well-known linear estimation algorithms, as well as whole families of such algorithms, of which all previously obtained major filtering and smoothing algorithms are special cases. Specifically, generalized partitioned filtering and smoothing algorithms are given in terms of integral expressions that are theoretically interesting and computationally attractive, and that provide a unification of all previous approaches to linear filtering and smoothing, and a clear delineation of their inter-relationships. In particular, the GPA for filtering are shown to contain as special cases families of filtering algorithms which constitute generalizations of the Kalman-Bucy and Chandrasekhar algorithms, as well as of the previously obtained partitioned algorithms of Lainiotis. These generalizations
pertain to arbitrary initial conditions and time-varying models. Further, the GPA for smoothing are shown to contain two families of generalized backward and forward smoothing algorithms valid for arbitrary boundary conditions, of which all previous backward and forward algorithms are special cases. It is also shown that the GPA may be given in terms of an imbedded generalized Chandrasekhar algorithm, with the consequent computational advantages.

Furthermore, the GPA are shown to serve as the basis of computationally effective, fast, and numerically robust algorithms for the numerical implementation of the estimation formulas [16-19, 22-24]. The numerical estimation algorithms have a decomposed or "partitioned" structure that results through "partitioning" of the total computation interval into subintervals and solving the estimation Riccati equation in each subinterval with an arbitrarily chosen nominal initial condition for each subinterval. Thus, effectively, the Riccati equation solution over the whole interval, or equivalently the estimation solution, has been decomposed into a set of elementary piecewise solutions which are both simple and, most importantly, completely decoupled from each other, via anchoring at the chosen nominal initial conditions at the initial point of each subinterval. As such, these elemental solutions are computable in either a parallel or serial processing mode. Further, the overall solution is given in terms of a simple recursive operation on the elemental solutions.

II. PREVIOUS APPROACHES TO LINEAR FILTERING AND SMOOTHING
The linear estimation problem considered is briefly described by the following model equations and statement of objective. Specifically, the model is given by the equations:
dx(t)/dt = F(t) x(t) + u(t),  (1)
z(t) = H(t) x(t) + v(t),  (2)
where x(t) and z(t) are the n- and m-dimensional state and measurement processes, respectively, and {u(t)} and {v(t)} are the plant and measurement noise random processes, respectively. The noises are independent, zero-mean, white, gaussian processes, with covariances Q(t) and R(t), respectively. The initial state x(τ) is gaussian with mean x̂(τ|τ) and covariance P(τ,τ). Moreover, x(τ) is independent of {u(t)} and {v(t)} for t ≥ τ. Given the measurement record λ(t,τ) = {z(σ); σ ∈ (τ,t)}, the optimal, in the mean-square-error sense, filtered estimate x̂(t|t,τ) of x(t) and the smoothed estimate x̂(τ|t,τ) of x(τ) are desired.
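To make the model concrete, the following minimal sketch (ours, not from the paper) simulates Eqs. (1)-(2) by Euler-Maruyama discretization; the matrices F, H, Q, R, the step size dt, and all numerical values are illustrative assumptions.

```python
# Hypothetical illustration of the model of Eqs. (1)-(2): Euler-Maruyama
# discretization with step dt.  All numerical values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, dt, steps = 2, 1, 0.01, 500

F = np.array([[0.0, 1.0], [-1.0, -0.5]])   # plant matrix F(t), held constant here
H = np.array([[1.0, 0.0]])                 # measurement matrix H(t)
Q = 0.1 * np.eye(n)                        # plant noise covariance Q(t)
R = 0.05 * np.eye(m)                       # measurement noise covariance R(t)

# x(tau) ~ N(x_hat(tau|tau), P(tau,tau)); zero mean, identity covariance assumed
x = rng.multivariate_normal(np.zeros(n), np.eye(n))
xs, zs = [], []
for _ in range(steps):
    # dx = F x dt + u dt, with u white of covariance Q: increment ~ N(0, Q dt)
    x = x + F @ x * dt + rng.multivariate_normal(np.zeros(n), Q * dt)
    # z = H x + v; sampling continuous white noise on a grid gives covariance R/dt
    z = H @ x + rng.multivariate_normal(np.zeros(m), R / dt)
    xs.append(x); zs.append(z)
```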
The major past approaches to filtering and the resulting algorithms are given briefly below:
KALMAN-BUCY FILTER
The filtered estimate and its covariance were given by Kalman and Bucy [4] in the following well-known equations:
∂x̂(t|t,τ)/∂t = [F(t) − K(t,τ)H(t)] x̂(t|t,τ) + K(t,τ) z(t),  (3)
with initial condition x̂(τ|τ), where
K(t,τ) = P(t,τ) H^T(t) R^{-1}(t),  (4)
and
∂P(t,τ)/∂t = F(t)P(t,τ) + P(t,τ)F^T(t) + Q(t) − P(t,τ)H^T(t)R^{-1}(t)H(t)P(t,τ),  (5)
with initial condition P(τ,τ), and for t ≥ τ.
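As a numerical illustration (an assumed Euler discretization of ours, not a scheme from the paper), the Kalman-Bucy filter of Eqs. (3)-(5) may be propagated as follows; the variable names mirror the text.

```python
# A minimal sketch (assumed Euler discretization) of the Kalman-Bucy filter,
# Eqs. (3)-(5).  zs is a sequence of measurements sampled every dt seconds.
import numpy as np

def kalman_bucy(zs, F, H, Q, R, x0, P0, dt):
    """Propagate x_hat(t|t,tau) and P(t,tau) forward from (x0, P0)."""
    x, P = x0.copy(), P0.copy()
    Rinv = np.linalg.inv(R)
    for z in zs:
        K = P @ H.T @ Rinv                          # Eq. (4): K = P H^T R^{-1}
        xdot = (F - K @ H) @ x + K @ z              # Eq. (3)
        Pdot = F @ P + P @ F.T + Q - K @ R @ K.T    # Eq. (5); K R K^T = P H^T R^{-1} H P
        x, P = x + xdot * dt, P + Pdot * dt
        yield x, P
```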
CHANDRASEKHAR ALGORITHM (KAILATH-LINDQUIST)
The so-called Chandrasekhar algorithm, as given by Kailath [5] and Lindquist [6], differs from the Kalman-Bucy filter only in the way it calculates the gain K(t,τ). As obtained by Kailath and Lindquist [5, 6], it is applicable to time-invariant models only, and it consists of the substitution of Eqs. (4-5) of the Kalman-Bucy filter by the following equations:
∂K(t−τ)/∂t = φ_K(t,τ) Ṗ(τ,τ) φ_K^T(t,τ) H^T R^{-1};  K(τ,τ) = P(τ,τ) H^T R^{-1},  (6)
∂φ_K(t,τ)/∂t = [F − K(t−τ)H] φ_K(t,τ);  φ_K(τ,τ) = I,  (7)
where
Ṗ(τ,τ) ≡ F P(τ,τ) + P(τ,τ)F^T + Q − P(τ,τ)H^T R^{-1} H P(τ,τ).  (8)
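The recursion of Eqs. (6)-(8) may be sketched as follows (an assumed Euler discretization of ours, not the paper's code). The computational point is that when Ṗ(τ,τ) has low rank, the product φ_K Ṗ(τ,τ) φ_K^T can be carried in factored form, so fewer than n(n+1)/2 covariance equations need be integrated.

```python
# A sketch of the Chandrasekhar recursion of Eqs. (6)-(8) for a time-invariant
# model.  It propagates the gain K and the filter transition phi instead of the
# full Riccati equation.
import numpy as np

def chandrasekhar_gain(F, H, Q, R, P0, dt, steps):
    n = F.shape[0]
    Rinv = np.linalg.inv(R)
    Pdot0 = F @ P0 + P0 @ F.T + Q - P0 @ H.T @ Rinv @ H @ P0  # Eq. (8)
    K = P0 @ H.T @ Rinv                                       # K(tau,tau), Eq. (6)
    phi = np.eye(n)                                           # phi_K(tau,tau) = I, Eq. (7)
    for _ in range(steps):
        Kdot = phi @ Pdot0 @ phi.T @ H.T @ Rinv               # Eq. (6)
        phidot = (F - K @ H) @ phi                            # Eq. (7)
        K, phi = K + Kdot * dt, phi + phidot * dt
        yield K
```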
We note that the possible computational advantages of the above Chandrasekhar algorithm depend integrally on the low-rank property of the actual initial condition. However, as pointed out by Brammer [7], the computational advantages of the Kailath-Lindquist algorithm exist essentially for zero initial conditions. Most recently, in connection with the partitioned algorithms of Lainiotis [12-16], a generalized Chandrasekhar algorithm applicable to time-varying models and zero initial conditions was given as a special case of the "partitioned" algorithms.

Similarly, the major previous approaches to smoothing and the resulting algorithms are given briefly below. These fall generally into two categories, namely those based essentially on a forward filter, such as the Meditch [10] and Kailath-Frost [11] algorithms, and those based on two filters, one forward and one backward, such as the Mayne [8] and Fraser [9] algorithms.
TWO-FILTER ALGORITHM (MAYNE-FRASER)
The approaches of Mayne [8] and Fraser [9], although somewhat different, result in the following two-filter smoothing algorithm:
x̂(τ|t,τ) = P(τ|t)[P_b^{-1}(τ,t) x̂_b(τ|τ,t) + P^{-1}(τ,τ) x̂(τ|τ)],  (9)
and
P(τ|t) = [P_b^{-1}(τ,t) + P^{-1}(τ,τ)]^{-1},  (10)
where x̂(τ|τ) and P(τ,τ) are the forward filtered estimate and its covariance matrix, respectively, given by a forward Kalman-Bucy filter; x̂_b(τ|τ,t) and P_b(τ,t) are the backward-time filtered estimate and its covariance, respectively. The quantities ŷ_b(τ|τ,t) ≡ P_b^{-1}(τ,t) x̂_b(τ|τ,t) and P_b^{-1}(τ,t) are given by the following backward-time differential equations:
∂ŷ_b(τ|τ,t)/∂τ = −[F(τ) − Q(τ)P_b^{-1}(τ,t)]^T ŷ_b(τ|τ,t) − H^T(τ)R^{-1}(τ) z(τ),  (11)
with final condition ŷ_b(t|t,t) = 0; or
∂x̂_b(τ|τ,t)/∂τ = [F(τ) + P_b(τ,t)H^T(τ)R^{-1}(τ)H(τ)] x̂_b(τ|τ,t) − P_b(τ,t)H^T(τ)R^{-1}(τ) z(τ),  (11a)
with final condition x̂_b(t|t,t) arbitrary; and
∂P_b^{-1}(τ,t)/∂τ = −P_b^{-1}(τ,t)F(τ) − F^T(τ)P_b^{-1}(τ,t) − H^T(τ)R^{-1}(τ)H(τ) + P_b^{-1}(τ,t)Q(τ)P_b^{-1}(τ,t),  (12)
with final condition P_b^{-1}(t,t) = 0, and for t ≥ τ.
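The combination step of Eqs. (9)-(10) is a one-line fusion of the forward prior and the backward information pair; the following sketch (ours, assuming all indicated inverses exist) makes this explicit.

```python
# A minimal sketch of the two-filter combination step, Eqs. (9)-(10): the
# smoothed estimate fuses the forward pair {x_f, P_f} at tau with the backward
# information pair {y_b, Pb_inv} produced by Eqs. (11)-(12).
import numpy as np

def two_filter_combine(x_f, P_f, y_b, Pb_inv):
    """y_b = Pb^{-1} x_b is the backward information estimate of Eq. (11)."""
    Pf_inv = np.linalg.inv(P_f)
    P_smooth = np.linalg.inv(Pb_inv + Pf_inv)      # Eq. (10)
    x_smooth = P_smooth @ (y_b + Pf_inv @ x_f)     # Eq. (9)
    return x_smooth, P_smooth
```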
MEDITCH ALGORITHM
The smoothing algorithm of Meditch [10] is based on a forward Kalman-Bucy filter only, and is given by the following forward-time differential equations:
∂x̂(τ|t,τ)/∂t = P(τ,τ) φ_K^T(t,τ) H^T(t) R^{-1}(t) z̃_K(t,τ),  (13)
with initial condition x̂(τ|τ); and
∂P(τ|t)/∂t = −P(τ,τ) φ_K^T(t,τ) H^T(t) R^{-1}(t) H(t) φ_K(t,τ) P(τ,τ),  (14)
with initial condition P(τ,τ). The innovation z̃_K(t,τ) ≡ z(t) − H(t)x̂(t|t,τ), and x̂(t|t,τ) is the filtered estimate obtained from a Kalman-Bucy filter; φ_K(t,τ) is the transition matrix of this filter, namely
∂φ_K(t,τ)/∂t = [F(t) − P(t,τ)H^T(t)R^{-1}(t)H(t)] φ_K(t,τ);  φ_K(τ,τ) = I,  (15)
for t ≥ τ.
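A sketch of the Meditch fixed-point smoother, Eqs. (13)-(15), run alongside the Kalman-Bucy filter, is given below (an assumed Euler discretization of ours; not code from the paper).

```python
# A sketch of the Meditch fixed-point smoother, Eqs. (13)-(15).  x0, P0 are
# x_hat(tau|tau) and P(tau,tau); zs is the measurement sequence.
import numpy as np

def meditch_smoother(zs, F, H, Q, R, x0, P0, dt):
    n = F.shape[0]
    Rinv = np.linalg.inv(R)
    xf, Pf = x0.copy(), P0.copy()      # Kalman-Bucy filter state, Eqs. (3)-(5)
    xs, Ps = x0.copy(), P0.copy()      # smoothed x_hat(tau|t,tau), P(tau|t)
    phi = np.eye(n)                    # phi_K(t,tau), Eq. (15)
    for z in zs:
        innov = z - H @ xf                          # z_tilde_K(t,tau)
        B = P0 @ phi.T @ H.T @ Rinv                 # gain P(tau,tau) phi_K^T H^T R^{-1}
        xs = xs + B @ innov * dt                    # Eq. (13)
        Ps = Ps - B @ H @ phi @ P0 * dt             # Eq. (14)
        K = Pf @ H.T @ Rinv
        phi = phi + (F - K @ H) @ phi * dt          # Eq. (15)
        xf = xf + ((F - K @ H) @ xf + K @ z) * dt   # forward filter, Eq. (3)
        Pf = Pf + (F @ Pf + Pf @ F.T + Q - K @ R @ K.T) * dt
    return xs, Ps
```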
INNOVATION ALGORITHM (KAILATH-FROST)
The innovation smoothing algorithm was first given by Kailath and Frost [11]. A more explicit version of the innovation algorithm was later given by Lainiotis [14-15]. The latter is more convenient for analysis and interpretation, as well as for use. It is given [14-15] by the following integral expressions:
x̂(τ|t,τ) = x̂(τ|τ) + P(τ,τ) M_K(t,τ),  (16)
P(τ|t) = P(τ,τ) − P(τ,τ) O_K(t,τ) P(τ,τ),  (17)
where
M_K(t,τ) ≡ ∫_τ^t φ_K^T(σ,τ) H^T(σ) R^{-1}(σ) z̃_K(σ,τ) dσ,  (18)
O_K(t,τ) ≡ ∫_τ^t φ_K^T(σ,τ) H^T(σ) R^{-1}(σ) H(σ) φ_K(σ,τ) dσ,  (19)
where φ_K(σ,τ) and z̃_K(σ,τ) are as defined above. It is noted that, as shown by Lainiotis [15-16], the innovation algorithm given above may be readily obtained from the Meditch algorithm by straightforward integration of Eqs. (13-14).

PARTITIONED FILTERING AND SMOOTHING ALGORITHMS (LAINIOTIS)
The "partitioned" filtering and smoothing algorithms of Lainiotis [12-16] are given by the following explicit integral expressions. Specifically, the filtered estimate and the filtering error-covariance matrix are given, respectively, by:
x̂(t|t,τ) = x̂_0(t|t,τ) + φ_0(t,τ) x̂(τ|t,τ),  (20)
P(t,τ) = P_0(t,τ) + φ_0(t,τ) P(τ|t) φ_0^T(t,τ),  (21)
where x̂(τ|t,τ) and P(τ|t) are the smoothed estimate of the initial state and the corresponding error-covariance matrix, respectively; they are given by the following explicit integral expressions:
x̂(τ|t,τ) = P(τ|t)[M_0(t,τ) + P^{-1}(τ,τ) x̂(τ|τ)],  (22)
P(τ|t) = [O_0(t,τ) + P^{-1}(τ,τ)]^{-1},  (23)
where M_0(t,τ) and the observability matrix O_0(t,τ) are defined as
M_0(t,τ) ≡ ∫_τ^t φ_0^T(σ,τ) H^T(σ) R^{-1}(σ) z̃_0(σ,τ) dσ,  (24)
O_0(t,τ) ≡ ∫_τ^t φ_0^T(σ,τ) H^T(σ) R^{-1}(σ) H(σ) φ_0(σ,τ) dσ,  (25)
where the partial or nominal innovation z̃_0(t,τ) ≡ z(t) − H(t)x̂_0(t|t,τ), and the corresponding partial or nominal state estimate x̂_0(t|t,τ) and its covariance matrix P_0(t,τ) are given by the forward Kalman-filter equations:
∂x̂_0(t|t,τ)/∂t = F(t) x̂_0(t|t,τ) + K_0(t,τ) z̃_0(t,τ),  (26)
with initial condition x̂_0(τ|τ) = 0; and
K_0(t,τ) ≡ P_0(t,τ) H^T(t) R^{-1}(t),  (27)
∂P_0(t,τ)/∂t = F(t)P_0(t,τ) + P_0(t,τ)F^T(t) + Q(t) − P_0(t,τ)H^T(t)R^{-1}(t)H(t)P_0(t,τ),  (28)
with initial condition P_0(τ,τ) = 0. The transition matrix φ_0(t,τ) of this Kalman filter is given by
∂φ_0(t,τ)/∂t = [F(t) − P_0(t,τ)H^T(t)R^{-1}(t)H(t)] φ_0(t,τ);  φ_0(τ,τ) = I.  (29)
In conclusion we note that the above partitioned algorithms are given in terms of explicit integral expressions which can be readily differentiated in either the forward or backward direction, yielding all previous forward or backward-time filtering and smoothing algorithms. This was shown previously by Lainiotis [15-16], and Lainiotis and Govindaraj [21].

III. GENERALIZED PARTITIONED ALGORITHMS
The generalized partitioned filtering and smoothing algorithms are given in the following theorems and related corollaries. Their physical interpretation, theoretical significance, and computational advantages are also briefly discussed.

THEOREM 1. The GPA for filtering and smoothing are given by the following explicit integral expressions:
x̂(t|t,τ) = x̂_n(t|t,τ) + φ_n(t,τ) x̂_r(τ|t,τ),  (30)
P(t,τ) = P_n(t,τ) + φ_n(t,τ) P_r(τ|t) φ_n^T(t,τ),  (31)
where x̂_r(τ|t,τ) and P_r(τ|t) are the partial initial-state smoothed estimate and its covariance matrix. They are given by:
x̂_r(τ|t,τ) = P_r(τ|t)[M_n(t,τ) + P_r^{-1} x̂_r],  (32)
P_r(τ|t) = [O_n(t,τ) + P_r^{-1}]^{-1},  (33)
and M_n(t,τ) and the generalized observability matrix O_n(t,τ) are given by:
M_n(t,τ) ≡ ∫_τ^t φ_n^T(σ,τ) H^T(σ) R^{-1}(σ) z̃_n(σ,τ) dσ,  (34)
and
O_n(t,τ) ≡ ∫_τ^t φ_n^T(σ,τ) H^T(σ) R^{-1}(σ) H(σ) φ_n(σ,τ) dσ,  (35)
where the nominal or partial innovation z̃_n(t,τ) is defined as z̃_n(t,τ) ≡ z(t) − H(t)x̂_n(t|t,τ), and the corresponding nominal state-vector estimate x̂_n(t|t,τ) and its covariance matrix P_n(t,τ) are given by the forward Kalman filter equations:
∂x̂_n(t|t,τ)/∂t = [F(t) − K_n(t,τ)H(t)] x̂_n(t|t,τ) + K_n(t,τ) z(t),  (36)
with initial condition x̂_n(τ|τ,τ) = x̂_n;
K_n(t,τ) ≡ P_n(t,τ) H^T(t) R^{-1}(t),  (37)
∂P_n(t,τ)/∂t = F(t)P_n(t,τ) + P_n(t,τ)F^T(t) + Q(t) − P_n(t,τ)H^T(t)R^{-1}(t)H(t)P_n(t,τ),  (38)
with initial condition P_n(τ,τ) = P_n. The transition matrix φ_n(·,·) of the above Kalman filter is given by:
∂φ_n(t,τ)/∂t = [F(t) − K_n(t,τ)H(t)] φ_n(t,τ);  φ_n(τ,τ) = I.  (39)
The nominal initial conditions x̂_n and P_n, and the remainder initial conditions x̂_r and P_r, are given by
x̂(τ|τ) = x̂_n + x̂_r,  (40)
P(τ,τ) = P_n + P_r.  (41)
Proof. The GPA are obtained in a straightforward manner through simple use of the "partitioning" approach to estimation, Lainiotis [12-13], and in particular of the "partition" theorem [12-13]. As will be seen in the following
proof, the derivation of the GPA follows exactly the derivation of the partitioned algorithms, namely Eqs. (20-29), as given by Lainiotis [12-15]. The partitioning approach as used in this derivation consists simply of partitioning the initial gaussian state-vector x(τ) into two other independent gaussian vectors, and designating one of the latter as an unknown random parameter θ. Specifically, x(τ) is expressed as
x(τ) ≡ x_n + x_r,
where x_n and x_r are statistically independent and gaussian, so that
x̂(τ|τ) = x̂_n + x̂_r,  (42)
P(τ,τ) = P_n + P_r,  (43)
where x̂_n and P_n are the mean and covariance of x_n, and x̂_r and P_r are the mean and covariance of x_r. Following the partitioning approach, x_r is viewed as an unknown parameter θ. Thus, using the "partition" theorem, Lainiotis [12, 13], we have that the filtered estimate and its covariance matrix are given by
x̂(t|t,τ) = ∫ x̂(t|t,τ; x_r) p(x_r|t,τ) dx_r,  (44)
and
P(t,τ) = ∫ { P(t,τ|x_r) + [x̂(t|t,τ) − x̂(t|t,τ; x_r)][x̂(t|t,τ) − x̂(t|t,τ; x_r)]^T } p(x_r|t,τ) dx_r,  (45)
where x̂(t|t,τ; x_r) and P(t,τ|x_r) are the conditional estimate and the corresponding conditional covariance matrix of x(t) given λ(t,τ) and x_r. Further, p(x_r|t,τ) is the a posteriori probability density of x_r given the measurements λ(t,τ). Since all pertinent random variables are jointly gaussian, the a posteriori density of x(t) given λ(t,τ) and x_r is gaussian; hence the associated conditional mean and covariance x̂(t|t,τ; x_r) and P(t,τ|x_r), respectively, are given by the Kalman-Bucy filter equations. Moreover, since x_r is assumed known, the initial conditions are
x̂(τ|τ, x_r) = x̂_n + x_r,  (46)
and
P(τ,τ|x_r) = P_n.  (47)
So, P(t, rlxr)= P.(t,'t) is independent of x,, and it is given by OP. (t,¢) at -F(t)P"(t"r)+P"(t"r)rr(t)+Q(t)
- P. (t,¢)H r (t)R -l (t)H(t)P. (t, ~');
(48)
with initial condition P.(z,z)= P.; and .~(t[t,r; xr) is given by
a~(tlt,~;xr) at =[F(t)-K.(t,.r)Hr(t)]Sc(tlt,.c;xr)+K.(t,.t)z(t),
(49)
with initial condition x̂(τ|τ,τ; x_r) = x̂_n + x_r. Equation (49) may be rewritten as
x̂(t|t,τ; x_r) = x̂_n(t|t,τ) + φ_n(t,τ) x_r,  (50)
where x̂_n(t|t,τ) and φ_n(t,τ) are given by Eqs. (36-37) and Eq. (39), respectively. Substituting Eq. (50) in Eqs. (44-45), we have
x̂(t|t,τ) = x̂_n(t|t,τ) + φ_n(t,τ) x̂_r(τ|t,τ),  (51)
and
P(t,τ) = P_n(t,τ) + φ_n(t,τ) P_r(τ|t) φ_n^T(t,τ),  (52)
where x̂_r(τ|t,τ) and P_r(τ|t) are the smoothed estimate and the corresponding error-covariance matrix of x_r given λ(t,τ). From the "partition" theorem, Lainiotis [12-13], we have that
p(x_r|t,τ) = Λ(t,τ|x_r) p(x_r) / ∫ Λ(t,τ|x_r) p(x_r) dx_r,  (53)
where the conditional likelihood ratio Λ(t,τ|x_r) is given by
Λ(t,τ|x_r) = exp{ ∫_τ^t x̂^T(σ|σ,τ; x_r) H^T(σ) R^{-1}(σ) z(σ) dσ − (1/2) ∫_τ^t ||H(σ) x̂(σ|σ,τ; x_r)||²_{R^{-1}(σ)} dσ }.  (54)
Using Eq. (50) in Eqs. (53-54), completing the square in the numerator and denominator, and cancelling from the numerator and denominator identical factors that are not functions of x_r, one readily obtains
p(x_r|t,τ) = exp{ −(1/2) ||x_r − O_n^{-1}(t,τ)M_n(t,τ)||²_{O_n(t,τ)} } p(x_r) / ∫ exp{ −(1/2) ||x_r − O_n^{-1}(t,τ)M_n(t,τ)||²_{O_n(t,τ)} } p(x_r) dx_r.  (55)
Finally, using the fact that p(x_r) is gaussian with mean x̂_r and covariance P_r, and combining exponential terms in both numerator and denominator, we obtain
p(x_r|t,τ) = N{ x̂_r(τ|t,τ); P_r(τ|t) },  (56)
where x̂_r(τ|t,τ) and P_r(τ|t) are given by Eqs. (32-35). This completes the proof.
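As a computational illustration of Theorem 1 (our sketch, with an assumed Euler discretization; the helper names are ours), a single nominal filter anchored at (x̂_n, P_n) is propagated together with φ_n, M_n and O_n, and the GPA correction of Eqs. (30)-(33) is applied at the end. The sketch assumes P_r is nonsingular.

```python
# A minimal numerical sketch of Theorem 1: run the nominal Kalman filter,
# Eqs. (36)-(39), once, then correct it for the remainder initial condition
# (x_r, P_r) via Eqs. (30)-(35).
import numpy as np

def gpa_filter(zs, F, H, Q, R, x_n, P_n, x_r, P_r, dt):
    n = F.shape[0]
    Rinv = np.linalg.inv(R)
    xn, Pn, phi = x_n.copy(), P_n.copy(), np.eye(n)
    M, O = np.zeros(n), np.zeros((n, n))        # M_n(t,tau), O_n(t,tau)
    for z in zs:
        Kn = Pn @ H.T @ Rinv                                 # Eq. (37)
        innov = z - H @ xn                                   # z_tilde_n(t,tau)
        M += phi.T @ H.T @ Rinv @ innov * dt                 # Eq. (34)
        O += phi.T @ H.T @ Rinv @ H @ phi * dt               # Eq. (35)
        xn += ((F - Kn @ H) @ xn + Kn @ z) * dt              # Eq. (36)
        phi += (F - Kn @ H) @ phi * dt                       # Eq. (39)
        Pn += (F @ Pn + Pn @ F.T + Q - Kn @ R @ Kn.T) * dt   # Eq. (38)
    Pr_s = np.linalg.inv(O + np.linalg.inv(P_r))             # Eq. (33)
    xr_s = Pr_s @ (M + np.linalg.inv(P_r) @ x_r)             # Eq. (32)
    return xn + phi @ xr_s, Pn + phi @ Pr_s @ phi.T          # Eqs. (30)-(31)
```

Note that choosing x̂_n = 0, P_n = 0 recovers the partitioned algorithm of Eqs. (20)-(29), while taking the nominal conditions equal to the actual ones makes the correction term vanish and recovers the Kalman-Bucy filter.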
REMARKS
(i) As can be seen from the above proof, the GPA and the associated initial-condition partitioning have an interesting stochastic interpretation. Specifically, the GPA result from partitioning the initial state-vector into the sum of two independent gaussian vectors x_n and x_r, and proceeding in the adaptive or partitioned framework of estimation, Lainiotis [12-13]. This adaptive formulation simply consists of viewing part of the initial state-vector, namely x_r, as an unknown and random parameter. In this framework, the GPA is decomposed into the "non-adaptive" part x̂_n(t|t,τ), and into the "adaptive" part φ_n(t,τ)x̂_r(τ|t), the latter incorporating the adaption or updating with respect to the unknown parameter. In other words, the "partitioning" approach decomposes the original estimation problem into a simpler or better conditioned estimation problem, namely the one with partially known initial condition, and a parameter estimation problem pertaining to the "unknown" part x_r of the initial state. Thus, x̂_n(t|t,τ) is the filtered estimate of x(t) with x_r assumed known, while the second part x̂_r(τ|t) is the smoothed estimate of x_r, and P_r(τ|t) the corresponding smoothing error-covariance matrix. So x̂_n(t|t,τ) constitutes a nominal solution of the original filtering problem, and φ_n(t,τ)x̂_r(τ|t) constitutes the necessary correction factor to give us the overall-optimal mse filtered estimate x̂(t|t,τ).
(ii) The GPA may also be given a transmission-line, or in general a scattering theory, interpretation, by noting that x̂_n(t|t,τ) constitutes the solution of
the problem with unmatched boundary conditions, while the term φ_n(t,τ)x̂_r(τ|t) constitutes the necessary correction factor for proper matching of the boundary conditions.
(iii) We note further that the above partitioned algorithms constitute the natural generalization of the previously obtained partitioned algorithms of Lainiotis [12-16]. The generalization consists of using arbitrary nominal initial conditions x̂_n and P_n, instead of the previously used zero nominal initial conditions, as in Eqs. (20-29). The GPA reduce readily to the partitioned algorithms, Eqs. (20-29), when x̂_n = 0, P_n = 0.
(iv) The freedom in choosing the initial conditions arbitrarily has significant computational advantages. Specifically, it is very useful in Kalman filter initialization procedures, such as those associated with reprocessed smoothing, Lainiotis [14], etc., as well as in connection with the Chandrasekhar implementation, as per Eqs. (6-8), of the imbedded Kalman filter given by Eqs. (36-39). Namely, if the actual initial condition P(τ,τ) does not have the low-rank property required for the Chandrasekhar algorithms to be computationally advantageous, one may partition the given condition as per Eqs. (42-43), thus yielding a low-rank nominal initial condition P_n.
(v) We note that the GPA yield interesting generalizations of the concepts of observability and controllability. Specifically, as given in Eq. (35), namely
O_n(t,τ) = ∫_τ^t φ_n^T(σ,τ) H^T(σ) R^{-1}(σ) H(σ) φ_n(σ,τ) dσ,  (57)
O_n(t,τ) is seen to constitute an observability matrix. Moreover, we note from Eq. (33) that
P_r(τ|t) ≤ O_n^{-1}(t,τ),  (58)
and therefore, see Lainiotis [14, 15], O_n(t,τ) constitutes also a Fisher information matrix with respect to only a part of the initial state-vector. As such, O_n(t,τ) constitutes a generalized observability or Fisher information matrix, in the sense of being a partial observability or partial Fisher information matrix. This interpretation of O_n(t,τ) becomes even more apparent in view of remark (i) and the related stochastic interpretation of initial-condition partitioning in the "adaptive" framework. In a following section it will be shown that P_n(t,τ) may also be given as
∂P_n(t,τ)/∂τ = −φ_n(t,τ) Ṗ_n(τ,τ) φ_n^T(t,τ);  P_n(t,t) = P_n,  (59)
which identifies P_n(t,τ) as a generalized controllability matrix. Specifically,
P_n(t,τ), as given in Eq. (59), constitutes a generalized controllability matrix in the sense of being a partial controllability matrix, since it pertains to controllability with respect to only a part of the state-vector.
(vi) We observe that the GPA are given in terms of certain basic quantities, namely φ_n(t,τ), O_n(t,τ), and M_n(t,τ). It is of interest from a theoretical standpoint, as well as for further use, to explore the relationship between these basic quantities of the GPA and the corresponding quantities of the Kalman filter, denoted φ_K(t,τ), O_K(t,τ), and M_K(t,τ). The desired relationships are given in the following corollary of Theorem 1:
COROLLARY 1. The relationship between φ_K(·,·), defined in Eq. (15), and φ_n(·,·), given in Eq. (39), is
φ_K(t,τ) = φ_n(t,τ)[I + P_r O_n(t,τ)]^{-1}.  (60)
Similarly, the relationship between O_K(t,τ), defined in Eq. (19), and O_n(t,τ), given in Eq. (35), is
O_K(t,τ) = O_n(t,τ)[I + P_r O_n(t,τ)]^{-1}.  (61)
Finally, the relationship between M_K(t,τ), defined in Eq. (18), and M_n(t,τ), defined in Eq. (34), is given by
M_K(t,τ) = [I + O_n(t,τ)P_r]^{-1}[M_n(t,τ) − O_n(t,τ)x̂_r].  (62)
Proof. The proof is fairly straightforward. It is given in [21]. We note that similar relationships were obtained by Lainiotis [15-16] for the zero nominal-initial-condition case.
(vii) We note, further, that the GPA are given in terms of explicit integral expressions, e.g. Eqs. (32-35), which can be readily differentiated in either the forward or backward direction. The resulting forward and backward differential versions of the GPA yield whole families of forward and backward algorithms that are generalizations of previous filtering and smoothing algorithms. As such, these differential versions of the GPA further illustrate the fundamental nature of the GPA as the unifying basis of all linear estimation algorithms.
(viii) In conclusion we note that the GPA are also the natural framework for theoretical and computational analysis of unbiasedness, estimation error bounds, filter initialization, and of observability and controllability. In fact,
as indicated previously, the GPA yield generalized concepts of observability and controllability.

IV. GENERALIZED PARTITIONED ALGORITHMS: FORWARD FORMULAS
In this section the fundamental and all encompassing nature of the GPA is demonstrated by showing that the forward differential version of the GPA contains all previously obtained forward filtering and smoothing algorithms as special cases. More importantly, the GPA contain interesting generalizations of past algorithms as well as whole families of such algorithms.

FILTERING ALGORITHMS
It can be seen from Theorem 1, e.g. Eqs. (36-39), that the GPA are given in terms of a family of Kalman-Bucy-like filters, one for each possible nominal initial condition. In particular, the well-known Kalman-Bucy filter [4] is a member of this class for nominal initial conditions equal to the actual conditions, namely for P_r = 0, x̂_r = 0. In fact, for these initial conditions the GPA reduce to the Kalman-Bucy filter. As such, the K-B filter is a special case of the GPA. Furthermore, the GPA are computationally superior to the K-B filter, since they are applicable regardless of the nature of the initial conditions. For example, if P(τ,τ) = ∞, the K-B filter cannot be used as originally given; instead the information filter form must be used. In contrast, the GPA can be readily used with these initial conditions, as well as with any other. Moreover, as noted previously, the freedom with which the nominal initial conditions may be chosen in the realization of the GPA is very advantageous in connection with the initialization of the imbedded Kalman filter, namely Eqs. (36-39). Specifically, in connection with the Chandrasekhar implementation, as per Eqs. (6-7), of the imbedded Kalman filter, if the actual initial condition P(τ,τ) does not have the low-rank property required for the Chandrasekhar algorithm to be computationally advantageous, one may partition the given initial condition as per Eqs. (40-41), thus yielding a low-rank nominal initial condition.

SMOOTHING ALGORITHM
The forward differential form of the generalized partitioned smoothing algorithm given in integral form by Eqs. (32-35) can be readily obtained by differentiating Eqs. (32-35) with respect to t. This forward smoothing algorithm is given in the following corollary of Theorem 1:
COROLLARY 2. The smoothed estimate and the corresponding error-covariance matrix are given by
∂x̂_r(τ|t,τ)/∂t = P_r(τ|t) φ_n^T(t,τ) H^T(t) R^{-1}(t) [z̃_n(t,τ) − H(t) φ_n(t,τ) x̂_r(τ|t,τ)],  (61)
with initial condition x̂_r(τ|τ,τ) = x̂_r;
∂P_r(τ|t)/∂t = −P_r(τ|t) φ_n^T(t,τ) H^T(t) R^{-1}(t) H(t) φ_n(t,τ) P_r(τ|t),  (62)
with initial condition P_r(τ|τ) = P_r; and
∂M_n(t,τ)/∂t = φ_n^T(t,τ) H^T(t) R^{-1}(t) z̃_n(t,τ);  M_n(τ,τ) = 0,  (63)
∂O_n(t,τ)/∂t = φ_n^T(t,τ) H^T(t) R^{-1}(t) H(t) φ_n(t,τ);  O_n(τ,τ) = 0,  (64)
where the remaining quantities are as given in Theorem 1.
Proof. The proof consists of simple differentiation with respect to t. As such it is omitted.
Remarks. We note that the partitioned smoothing algorithms of Corollary 2 constitute a family of forward smoothing algorithms, one for each particular set of nominal initial conditions. From this family one may readily obtain, as in Lainiotis [15], a related family, of which the Meditch differential algorithm, given in Eqs. (13-15), is a member. Specifically, substituting Eq. (60) in Eqs. (61-62) and noting that z̃_n(t,τ) − H(t)φ_n(t,τ)x̂_r(τ|t,τ) = z̃_K(t,τ), we readily obtain the new family of smoothing algorithms. It is given in the following variation of Corollary 2.
COROLLARY 3.
a~r0"lt,~) _ p. ( ¢ r (t,z)H r (t)R - ' (t)~ (t,.)), at
(65)
and
aP,(tlt) at
-P/Pr(t'r)Hr(t)R-'(t)H(t)~K(t'¢)P"
with initial conditions Xr, and P,, respectively.
(66)
The Meditch differential algorithm is the member of the family given by Corollary 3 for the special case of zero nominal initial conditions, namely for x̂_r = x̂(τ|τ) and P_r = P(τ,τ). Furthermore, by straightforward integration of Eqs. (65-66), one may readily obtain a family of integral smoothing formulas of which the so-called innovation algorithm of Kailath-Frost [11], given by Eqs. (16-19), is a member. Specifically, by integration of Eqs. (65-66) one obtains:
COROLLARY 4.
x̂_r(τ|t,τ) = x̂_r + P_r M_K(t,τ),  (67)
P_r(τ|t) = P_r − P_r O_K(t,τ) P_r.  (68)
In particular, the innovation formula is simply a special case of the above family of integral smoothing algorithms, namely it is the member corresponding to zero nominal initial conditions. As such, the innovation formula results simply through integration of the Meditch formula. This particular direct derivation of the innovation formula from the Meditch formula was first given by Lainiotis [15].

V. GENERALIZED PARTITIONED ALGORITHMS: BACKWARD FORMULAS
In this section the fundamental and all encompassing nature of the GPA is demonstrated further by showing that the backward-time differential version of the GPA contains all previously obtained backward smoothing algorithms. Moreover, as a by-product of the backward version of the GPA, the backward-time Kalman filter for arbitrary final conditions is also obtained. More importantly, the GPA will be shown to contain interesting generalizations of past backward filtering and smoothing algorithms as well as whole families of such algorithms.

FILTERING ALGORITHMS
In this subsection attention is focused on backward filtering algorithms. Specifically, we are interested in obtaining the backward filtered estimate x̂(s|t,s) of x(s) given the data λ(s,t) = {z(σ); σ ∈ (s,t)}, as well as the initial (a priori) information that p[x(τ)] = N{x̂(τ|τ); P(τ,τ)}, where τ < s < t. We note that the backward filtered estimate x̂(s|t,s) constitutes also the forward smoothed estimate given the same data and a priori information. The only difference is that in backward filtering the data is scanned from t to s, while
in forward smoothing the data is scanned in the opposite direction, namely from s to t. Thus, we may readily obtain the backward filtered estimate by differentiating in the backward-time direction the corresponding forward smoothed estimate as given by the integral partitioned formulas, namely Eqs. (22-23). The partitioned formulas for smoothing for this case are as follows:
x̂(s|t,s) = P(s|t)[M_0(t,s) + P_a^{-1}(s,τ) x̂_a(s,τ)],  (69)
and
P(s|t) = [O_0(t,s) + P_a^{-1}(s,τ)]^{-1},  (70)
where M_0(t,s) and O_0(t,s) are given by Eqs. (24-25), with τ replaced by s. Moreover, x̂_a(s,τ) and P_a(s,τ) are the filtered estimate and the corresponding covariance of x(s) given the available information from τ to s. However, as was stated above in the definition of the backward filtering problem, the only information given in the interval (τ,s) is summarized in the a priori information {x̂(τ|τ), P(τ,τ)}, with no data available in the interval (τ,s). As such, x̂_a(s,τ) and P_a(s,τ) constitute the optimal predicted estimate and the corresponding covariance matrix of x(s) given {x̂(τ|τ), P(τ,τ)}. They are given by the following well-known formulas:
∂x̂_a(s,τ)/∂s = F(s) x̂_a(s,τ);  x̂_a(τ,τ) = x̂(τ|τ),  (71)
and
∂P_a(s,τ)/∂s = F(s)P_a(s,τ) + P_a(s,τ)F^T(s) + Q(s);  P_a(τ,τ) = P(τ,τ).  (72)
By straightforward differentiation in the backward s direction of the partitioned smoothing formulas given in Eqs. (69-70), we readily obtain the desired backward filtering algorithm. It is given in the following corollary of Theorem 1:
COROLLARY 5. The backward filtered estimate is given by
∂x̂(s|t,s)/∂s = [F(s) + Q(s)P_a^{-1}(s,τ)] x̂(s|t,s) − K_b(s,t)[z(s) − H(s)x̂(s|t,s)],  (73)
with initial condition x̂(t|t,t) = 0 (for convenience); where
K_b(s,t) ≡ P(s|t) H^T(s) R^{-1}(s),  (74)
and P(s|t) is given by the Riccati equation
∂P(s|t)/∂s = [F(s) + Q(s)P_a^{-1}(s,τ)] P(s|t) + P(s|t)[F(s) + Q(s)P_a^{-1}(s,τ)]^T − Q(s) + P(s|t)H^T(s)R^{-1}(s)H(s)P(s|t),  (75)
with initial condition P(t|t) = P_a(t,τ). The initial conditions x̂_a(t,τ) and P_a(t,τ) are obtained from Eqs. (71-72).
Proof. The proof is obtained readily through simple differentiation of Eqs. (69-70) with respect to s. As such it is omitted.
REMARKS. (i) We note that Eqs. (73-75) constitute the backward Kalman filter for arbitrary boundary conditions. It is the filter matched to the backward state-model for the process given by:
∂x_b(s,t)/∂s = [F(s) + Q(s)P_a^{-1}(s,τ)] x_b(s,t) + u(s),  (76)
z(s) = H(s) x_b(s,t) + v(s),  (77)
with boundary condition p[x_b(t,t)] = N{0, P_a(t,τ)}; where {u(s)} and {v(s)} are white, gaussian, zero-mean, and independent, with covariances Q(s) and R(s), respectively. The above backward state-model of the process is equivalent to the forward state-model given by Eqs. (1-2), in the sense that when solved backwards in time it yields the same covariance function for the process as the forward model. This can be readily verified. Thus, one may readily convert from a forward to a backward state-model, or vice-versa, through the connecting Eqs. (71-72).
(ii) The backward filter given by Eqs. (73-75), being valid for arbitrary boundary conditions, constitutes a generalization of all previous backward filters obtained for specific boundary conditions. For example, it contains as a special case the backward filter of Mayne [8] and Fraser [9], given by Eqs. (11-12), which is valid for final conditions P(t|t) = ∞ and x̂(t|t,t) = 0.
This can be readily shown by noting that for P(t|t) = ∞, P(τ,τ) is also equal to infinity, and hence P_a(s,τ) = ∞, or P_a^{-1}(s,τ) = 0, for every s in the interval (τ,t). So, substituting P_a^{-1}(s,τ) = 0 in Eqs. (73) and (75), we obtain the following special case of Eqs. (73) and (75):
∂x̂(s|t,s)/∂s = [F(s) + P(s|t)H^T(s)R^{-1}(s)H(s)] x̂(s|t,s) − P(s|t)H^T(s)R^{-1}(s) z(s),  (78)
with final condition x̂(t|t,t) = 0; and
∂P(s|t)/∂s = F(s)P(s|t) + P(s|t)F^T(s) − Q(s) + P(s|t)H^T(s)R^{-1}(s)H(s)P(s|t),  (79)
with boundary condition P(t|t) = ∞; or, equivalently,
∂P^{-1}(s|t)/∂s = −P^{-1}(s|t)F(s) − F^T(s)P^{-1}(s|t) − H^T(s)R^{-1}(s)H(s) + P^{-1}(s|t)Q(s)P^{-1}(s|t),  (80)
with boundary condition P^{-1}(t|t) = 0. Moreover, we note from Eqs. (9-10) that for P(τ,τ) = ∞, we have that P(τ|t) = P_b(τ,t) and x̂(τ|t,τ) = x̂_b(τ|τ,t). Thus, comparing Eqs. (78) and (80) to Eqs. (11a) and (12), respectively, we see that they are identical. Namely, we have shown that for P(t|t) = ∞, the general backward filter formula given by Eqs. (73-75) reduces to the Mayne-Fraser formula of Eqs. (11-12); q.e.d.
(iii) The backward state model of the process may also be obtained as a special case of the backward filter Eqs. (73-75) for no observations, or equivalently for R(s) = ∞. Substituting R^{-1}(s) = 0 in Eq. (75), we obtain
∂P(s|t)/∂s = [F(s) + Q(s)P_a^{-1}(s,τ)] P(s|t) + P(s|t)[F(s) + Q(s)P_a^{-1}(s,τ)]^T − Q(s);  P(t|t) = P_a(t,τ),  (81)
which is the covariance equation for a process whose backward-time evolution
is given by the backward state model
∂x_b(s,t)/∂s = [F(s) + Q(s)P_a^{-1}(s,τ)] x_b(s,t) + u(s),  (82)
where the covariance of {u(s)} is Q(s). This is the previously obtained backward state-model given by Eq. (76).
(iv) For convenience of the presentation, the backward filter formula was given for zero initial condition, namely for x̂(τ,τ) = 0, which results in x̂_a(s,τ) = 0 for all s in the interval (τ,t). If x̂(τ,τ) is not zero, then the backward filter formula given by Eq. (73) is modified slightly. For non-zero x̂_a(s,τ), the backward filtered estimate is given by
∂x̂(s|t,s)/∂s = [F(s) + Q(s)P_a^{-1}(s,τ)] x̂(s|t,s) − K_b(s,t)[z(s) − H(s)x̂(s|t,s)] + Q(s)P_a^{-1}(s,τ) x̂_a(s,τ),  (83)
which reduces to Eq. (73) for x̂_a(s,τ) = 0, as expected.
Up to this point interest was focused on obtaining the backward-time equivalent of the ordinary smoothing estimate x̂(s|t,s), which was seen to correspond to a backward Kalman filter. In a similar manner, by differentiating in the backward-time direction the generalized or partial forward smoothed estimates x̂_r(s|t,s) and corresponding partial covariance matrices P_r(s|t), given in Eqs. (32-33), we readily obtain a family of corresponding generalized or partial backward Kalman-like filters.
As an illustration, let us consider the partial covariance for forward smoothing, namely P_r(s|t), where P_r(s|t) is the covariance of part of the state-vector x(s), namely x_r(s), given the data λ(s,t) = {z(σ); σ ∈ (s,t)} and the a priori information p[x(τ)] = N{x̂(τ,τ); P(τ,τ)}. The generalized or partial smoothed covariance P_r(s|t) is given by the GPA, in particular by Eq. (33). Specifically, in this case it is given by
P_r(s|t) = [O_n(t,s) + P_r^{-1}(s,τ)]^{-1},  (84)
where P_r(s,τ) ≡ P_a(s,τ) − P_n(s,s). Moreover, P_n(s,s) = P_n for all s, and P_a(s,τ) is still given by Eq. (72). The expression for O_n(t,s) is given by Eq. (35). Differentiating Eq. (84) in the backward s direction, we readily obtain the
desired backward filtering formula. It is given by:
∂P_r(s|t)/∂s = [F_n(s) + Ṗ_n(s,s)P_r^{-1}(s,τ)] P_r(s|t) + P_r(s|t)[F_n(s) + Ṗ_n(s,s)P_r^{-1}(s,τ)]^T − Ṗ_n(s,s) + P_r(s|t) P_r^{-1}(s,τ) P_a(s,τ) H^T(s) R^{-1}(s) H(s) P_a(s,τ) P_r^{-1}(s,τ) P_r(s|t),  (85)
with initial condition P_r(t|t) = P_r(t,τ), where P_r(s,τ) is given by
∂P_r(s,τ)/∂s = F(s)P_r(s,τ) + P_r(s,τ)F^T(s) + [Q(s) + F(s)P_n + P_nF^T(s)],  (86)
with initial condition P_r(τ,τ) = P(τ,τ) − P_n. The matrices F_n(s) and Ṗ_n(s,s) are defined as:
F_n(s) ≡ F(s) − P_n H^T(s) R^{-1}(s) H(s),  (87)
Ṗ_n(s,s) ≡ F(s)P_n + P_nF^T(s) + Q(s) − P_n H^T(s) R^{-1}(s) H(s) P_n.  (88)
In a similar manner one may readily obtain the corresponding backward equation for x̂_r(s|t,s). The details are left to the reader.
REMARKS. (i) We observe that Eq. (85) constitutes the backward Riccati equation corresponding to the generalized Kalman-like filter for the partial estimate x̂_r(s|t,s). It is the filter matched to the backward state-model for the partial state-process x_r(s,t); hence this backward partial state-model is given by:
∂x_r(s,t)/∂s = [F_n(s) + Ṗ_n(s,s)P_r^{-1}(s,τ)] x_r(s,t) + u_r(s),  (89)
z(s) = H(s) P_a(s,τ) P_r^{-1}(s,τ) x_r(s,t) + v(s),  (90)
with boundary conditions p[x_r(t,t)] = N{0; P_r(t,τ)}, where {u_r(s)} and {v(s)}
are white, gaussian, zero-mean, and independent, with covariances Ṗ_n(s,s) and R(s), respectively. The above backward state-model of the part x_r(s,t) of the process x(s,t) is equivalent to the forward partial state-model in the sense of yielding the same covariance for the partial state-process.
(ii) We note further that the above generalized family of backward formulas contains as a special case the backward formula for the total state-vector, given by Eqs. (73-75). Namely, for P_r(s,τ) = P_a(s,τ), hence for P_n = 0, we have that P_r(s|t) = P(s|t), and thus F_n(s) = F(s), Ṗ_n(s,s) = Q(s), and finally P_r^{-1}(s,τ)P_a(s,τ) = I. Substituting the above equivalences in Eq. (85) yields immediately the desired Eq. (75); q.e.d.
SMOOTHING ALGORITHMS
In this subsection attention is focused on deriving backward smoothing algorithms. Toward this objective it is useful to consider the backward-time evolution of the nominal forward filtered estimate x̂_n(t|t,s). We note the interesting nature of x̂_n(t|t,s), namely that it is an anchored estimate. That is, regardless of s, the initial value x̂_n(s|s,s) at s remains constant and equal to x̂_n. Similarly, P_n(s,s) = P_n for all s ≤ t. The backward-time differentiation of the nominal or anchored estimate x̂_n(t|t,s) and of its covariance matrix P_n(t,s) yields an interesting corollary of Theorem 1. The corollary contains several generalizations of powerful smoothing and filtering algorithms. In fact, it contains families of such algorithms. The pertinent corollary is given below.
COROLLARY 6. The backward-time evolution of the nominal estimate and its covariance matrix are given by:
∂x̂_n(t|t,s)/∂s = −φ_n(t,s) Ṗ_n(s,s) [M_n(t,s) + Ṗ_n^{-1}(s,s) ẋ_n(s,s)],  (91)
and
∂P_n(t,s)/∂s = −φ_n(t,s) Ṗ_n(s,s) φ_n^T(t,s),  (92)
where φ_n(t,s) and M_n(t,s) are given by the following backward differential equations:
∂φ_n(t,s)/∂s = −φ_n(t,s)[F_n(s) − Ṗ_n(s,s) O_n(t,s)];  φ_n(t,t) = I,  (93)
∂O_n(t,s)/∂s = −O_n(t,s)F_n(s) − F_n^T(s)O_n(t,s) − H^T(s)R^{-1}(s)H(s) + O_n(t,s)Ṗ_n(s,s)O_n(t,s),  (94)
with O_n(t,t) = 0; and
∂M_n(t,s)/∂s = −[F_n(s) − Ṗ_n(s,s)O_n(t,s)]^T M_n(t,s) + O_n(t,s) ẋ_n(s,s) − H^T(s)R^{-1}(s) z̃_n(s,s),  (95)
with M_n(t,t) = 0. The matrices F_n(s) and Ṗ_n(s,s) were defined previously, namely in Eqs. (87-88), and ẋ_n(s,s), the derivative of the nominal estimate at the anchor, is defined as
ẋ_n(s,s) ≡ F(s) x̂_n + P_n H^T(s) R^{-1}(s) z̃_n(s,s),  (96)
with z̃_n(s,s) = z(s) − H(s) x̂_n.
Proof. The proof consists simply of differentiation of Eqs. (34-39) with respect to s. The details of the proof are of some independent interest, and they may be found elsewhere [21].
REMARKS. (i) As can be seen from Eqs. (36-39), the "anchored" estimate x̂_n(t|t,s) constitutes filtering in the forward direction, while it constitutes smoothing in the backward direction, as can be seen from Eq. (91). The latter formula is different than the usual one for smoothing, since x̂_n(t|t,s) is an anchored backward smoothed estimate in the sense that x̂_n(s|s,s) = x̂_n for all s in the interval (τ,t).
(ii) We observe, further, that as given in Eq. (92), P_n(t,s) constitutes a generalized controllability matrix. Specifically, as indicated previously, P_n(t,s) is a partial controllability matrix, since it pertains to controllability with respect to only a part of the state-vector. Thus, we see that the partitioned solution, e.g. the covariance equation (31), is given completely and explicitly in terms of the generalized observability matrix O_n(t,s) and the corresponding partial controllability matrix P_n(t,s).
Most importantly, we note that the results of Corollary 6 constitute the basis of two families of generalized smoothing and filtering algorithms that are of theoretical significance and possible computational importance. Specifically, Corollary 6 provides the basis of: a) a family of generalized two-filter smoothing algorithms, of which the Mayne-Fraser algorithm [8, 9] is a special
case; and b) a family of generalized Chandrasekhar algorithms, of which the Kailath-Lindquist [5, 6] and the Lainiotis [12-16] Chandrasekhar algorithms are special cases. The generalized two-filter smoothing algorithms and the generalized Chandrasekhar algorithms are given in some detail in the following.

GENERALIZED TWO-FILTER SMOOTHING ALGORITHMS
The family of generalized two-filter smoothing algorithms pertains to smoothing the part x_r(s) of the state-vector x(s), given the data λ(t,τ) and the initial conditions {x̂(τ|τ), P(τ,τ)}, where τ < s < t and x(s) = x_n(s) + x_r(s). The desired algorithms are given by the partitioned formulas, namely Eqs. (32-33). Specifically, the partitioned formulas for the smoothed estimate and its covariance for this case are given by:
x̂_r(s|t,τ) = P_r(s|t,τ)[M_n(t,s) + P_r^{-1}(s,τ) x̂_r(s|s,τ)],  (97)
and
P_r(s|t,τ) = [O_n(t,s) + P_r^{-1}(s,τ)]^{-1},  (98)
where the filtered estimate .~(sls, ~) and its covariance matrix P,(s, ~) are given by a forward Kalman filter operating on the data A(s,,) and using the a priori information ( ~(I-1i-), P (z, ~')). Specifically, ~r(sls, .r) and P,(s,.r) are defined as
~r(sls,,)~c(sls, . ) - ~.,
(99)
Pr(s,,)=_P(s,z)-P.,
(100)
where x̂(s|s,τ) and P(s,τ) are given by the forward Kalman filter matched to the forward process model of Eqs. (1-2) and with initial conditions x̂(τ|τ), P(τ,τ) at τ. Moreover, M_n(t,s) and O_n(t,s) are obtained from the generalized backward filter given by Eqs. (95) and (94), which utilizes the data λ(t,s) only. Namely, they are given by the following backward-time differential equations of Corollary 6:
∂M_n(t,s)/∂s = −[F_n(s) − Ṗ_n(s,s)O_n(t,s)]^T M_n(t,s) + O_n(t,s) ẋ_n(s,s) − H^T(s)R^{-1}(s) z̃_n(s,s),  (101)
with M_n(t,t) = 0; and
∂O_n(t,s)/∂s = −O_n(t,s)F_n(s) − F_n^T(s)O_n(t,s) − H^T(s)R^{-1}(s)H(s) + O_n(t,s)Ṗ_n(s,s)O_n(t,s),  (102)
with O_n(t,t) = 0. Thus, Eqs. (97-102) constitute a family of two-filter (one forward and the other backward) smoothing algorithms, one for each set of nominal initial conditions {x̂_n, P_n}, or correspondingly one for each possible partitioning of the state-vector x(s). In particular, the Mayne-Fraser [8-9] two-filter smoothing algorithm, given by Eqs. (9-12), is a member of this family corresponding to the choice of zero nominal initial conditions, namely for a state-vector partitioning such that x_r(s) = x(s), the whole state-vector. This can be readily ascertained by noting that for zero x̂_n and P_n we have F_n(s) = F(s), Ṗ_n(s,s) = Q(s), z̃_n(s,s) = z(s), and ẋ_n(s,s) = 0, so that Eqs. (97-98) and (101-102) reduce, respectively, to Eqs. (9-10) and (11-12) of the Mayne-Fraser algorithm.
In conclusion, we note that the family of generalized partitioned smoothing algorithms of Eqs. (97-98) may be realized in terms of: a) a forward and a backward filter, as indicated above; or b) the same forward filter given above and the forward filter of Eqs. (36-38). We therefore have a choice in the realization of the generalized partitioned smoothing algorithms: to use either two decoupled forward filters, or a forward and a backward filter, which are also decoupled.
GENERALIZED CHANDRASEKHAR ALGORITHMS
We note that the generalized filtering and smoothing algorithms of Theorem 1 are given in terms of the nominal filtered estimate x̂_n(t|t,τ) and its covariance matrix P_n(t,τ). Furthermore, the latter filtered estimate may be realized in terms of an imbedded generalized Chandrasekhar algorithm, with the consequent possible computational advantages. Specifically, the nominal filter gain may be evaluated in terms of Eq. (39) of Theorem 1 and Eq. (92) of Corollary 6, which constitute the generalization to time-varying models and arbitrary initial conditions of the previously obtained Chandrasekhar algorithms of Lainiotis [12-16] and Kailath-Lindquist [5, 6]. As noted previously, the Kailath-Lindquist algorithm is applicable to time-invariant models only and arbitrary initial conditions, while the previous Chandrasekhar algorithm of Lainiotis [12-16] is applicable to time-varying models but zero initial conditions.
The generalized Chandrasekhar algorithm is given by the following corollary of Theorem 1 and Corollary 6:
COROLLARY 7. The nominal error-covariance matrix P_n(t,s) and the nominal filter transition matrix φ_n(t,s) are given by the following system of differential equations:
∂P_n(t,s)/∂s = −φ_n(t,s) Ṗ_n(s,s) φ_n^T(t,s),  (103)
∂φ_n(t,s)/∂t = [F(t) − P_n(t,s)H^T(t)R^{-1}(t)H(t)] φ_n(t,s).  (104)
To clearly delineate the possible computational advantages inherent in the generalized Chandrasekhar algorithm, the following version of Eqs. (103-104) is presented. Suppose Ṗ_n(s,s) has rank α, where α ≤ n. Since it is symmetric, it has real eigenvalues; thus it may be written as
Ṗ_n(s,s) = L_1(s)L_1^T(s) − L_2(s)L_2^T(s),  (105)
where the L_i have dimensions
L_1(s): n×β,  L_2(s): n×(α−β),
with β the number of positive eigenvalues of Ṗ_n(s,s). Making a lower triangular-diagonal-upper triangular (LDU) decomposition of Ṗ_n(s,s) as
Ṗ_n(s,s) = [L_1(s)  L_2(s)] S [L_1(s)  L_2(s)]^T ≡ L(s) S L^T(s),  (106)
where S is the α×α signature matrix of Ṗ_n(s,s), namely
S = diag{1, 1, ..., 1, −1, ..., −1},  (107)
with β entries equal to +1 and α−β entries equal to −1, we may write the generalized Chandrasekhar algorithm more compactly as
∂K_n(t,s)/∂s = −Y_n(t,s) S Y_n^T(t,s) H^T(t) R^{-1}(t),  (108)
with K_n(t,t) = P_n H^T(t) R^{-1}(t); and
∂Y_n(t,s)/∂t = [F(t) − K_n(t,s)H(t)] Y_n(t,s),  (109)
with Y_n(s,s) = L(s). We note that K_n(t,s) is n×p (p being the dimension of z), and Y_n(t,s) is n×α; so the total number of simultaneous equations to be solved in Eqs. (108-109) is n×(p+α).
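A sketch of Eqs. (105)-(109) is given below, specialized to a time-invariant model so that ∂/∂s = −∂/∂t and a single forward sweep in t suffices (our simplification; the factorization helper and all names are ours, and the eigendecomposition route to the signature factorization is one choice among several).

```python
# A sketch of the generalized Chandrasekhar recursion, Eqs. (105)-(109),
# for a time-invariant model.
import numpy as np

def signature_factor(Pdot, tol=1e-10):
    """Factor a symmetric matrix as L S L^T, Eq. (106), dropping zero eigenvalues."""
    w, V = np.linalg.eigh(Pdot)
    keep = np.abs(w) > tol
    L = V[:, keep] * np.sqrt(np.abs(w[keep]))
    S = np.diag(np.sign(w[keep]))                 # signature matrix, Eq. (107)
    return L, S

def gen_chandrasekhar(F, H, Q, R, P_n, dt, steps):
    Rinv = np.linalg.inv(R)
    Pdot = F @ P_n + P_n @ F.T + Q - P_n @ H.T @ Rinv @ H @ P_n  # Eq. (88)
    L, S = signature_factor(Pdot)
    K = P_n @ H.T @ Rinv                                         # K_n(t,t)
    Y = L.copy()                                                 # Y_n(s,s) = L(s)
    for _ in range(steps):
        Kdot = Y @ S @ Y.T @ H.T @ Rinv     # Eq. (108), with d/ds -> -d/dt
        Ydot = (F - K @ H) @ Y              # Eq. (109)
        K, Y = K + Kdot * dt, Y + Ydot * dt
        yield K
```

Since Y is n×α, only n(p+α) scalar equations are propagated, which is the source of the computational advantage when the chosen P_n makes α small.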
REMARKS. (i) We note that the above generalized Chandrasekhar algorithms constitute a family of Chandrasekhar algorithms, one for each possible choice of nominal initial condition P_n, or correspondingly for each possible state-vector partitioning. Thus, the GPA for estimation are given in terms of a member of a family of imbedded generalized algorithms. Moreover, the choice of which member to use is arbitrary and can be made so that it is computationally advantageous. Specifically, the particular generalized Chandrasekhar algorithm may be chosen that corresponds to a low-rank initial condition P_n, with the consequent computational advantages.
(ii) We note further that the generalized Chandrasekhar algorithms are a special case of the GPA, not only in the sense that the generalized Chandrasekhar algorithms result from a mere differentiation of the GPA, but also in the sense that the GPA reduce to the generalized Chandrasekhar algorithms for nominal initial conditions equal to the actual initial conditions, namely for x̂_r = 0 and P_r = 0.
(iii) The generalized Chandrasekhar algorithm corresponding to the actual initial conditions may be used by itself to calculate the gain of the Kalman filter given by Eqs. (3-5), without recourse to the GPA. There is, however, a significant difference between using the Chandrasekhar algorithms and the GPA. Namely, while the possible computational advantages of the Chandrasekhar algorithms depend integrally on the low-rank property of the actual initial conditions, the computational advantages of the GPA are valid for any actual initial condition. Specifically, as Brammer [7] pointed out, the computational advantages of the Chandrasekhar algorithms exist essentially for a small class of initial conditions. So, for non-low-rank actual initial conditions, the Chandrasekhar algorithms will have little computational advantage. However, in this case the GPA may be used with low-rank nominal initial conditions, taking full advantage of the nice
computational properties of the associated imbedded generalized Chandrasekhar algorithm, namely Eqs. (108-109), for the low-rank P_n, e.g. P_n = 0.
(iv) The special case of Corollary 7 for zero nominal initial conditions is of particular computational importance. As noted above, this special case was previously obtained by Lainiotis [16-19, 22, 23]. For P_n = 0 we have Ṗ_n(s,s) = Q(s), and since now Ṗ_n(s,s) can have no negative eigenvalues, Ṗ_n(s,s) = L_1(s)L_1^T(s), where L_1(s) is n×α, and the generalized Chandrasekhar algorithm reduces readily to the previously derived algorithm of Lainiotis [16-19, 22, 23], namely:
∂K_0(t,s)/∂s = −Y_0(t,s) Y_0^T(t,s) H^T(t) R^{-1}(t),  (110)
with K_0(t,t) = 0; and
∂Y_0(t,s)/∂t = [F(t) − K_0(t,s)H(t)] Y_0(t,s);  Y_0(s,s) = L_1(s),  (111)
where again K_0(t,s) is n×p, Y_0(t,s) is n×α, and the total number of simultaneous equations to be solved is n×(p+α).
(v) Further, for the even more special case of time-invariant models, the generalized Chandrasekhar algorithm reduces readily to the algorithm of Kailath-Lindquist [5, 6]. Specifically, for time-invariant models we have that P(t,s) = P(t−s), so that
∂P(t−s)/∂t = −∂P(t−s)/∂s,  (112)
and therefore the Chandrasekhar algorithm is now given by the following equations:
∂K_n(t−s)/∂t = φ_n(t,s) Ṗ_n φ_n^T(t,s) H^T R^{-1};  K_n(0) = P_n H^T R^{-1},  (113)
∂φ_n(t,s)/∂t = [F − K_n(t−s)H] φ_n(t,s);  φ_n(s,s) = I,  (114)
where
Ṗ_n ≡ F P_n + P_n F^T + Q − P_n H^T R^{-1} H P_n,  (115)
which are the same as Eqs. (6-8) of the Kailath-Lindquist [5, 6] algorithm.
Finally, we note that since we now have general backward state-process models, we may readily utilize them to obtain in the backward direction any estimate one wishes. For example, x̂(t|s,t) may be obtained readily as a backward smoothed estimate of x(t) given the data λ(s,t) scanned from t to s. It may be readily obtained, given the backward models, via the generalized partitioned formula. Moreover, as the reader may readily verify, differentiating this backward estimate with respect to t readily yields the forward version of the generalized filtering formula, since the backward smoothed estimate x̂(t|s,t) constitutes also a forward filtering estimate.
In concluding this section, several remarks of a general and historical nature are given below:
(i) We note that similar and equally general results have also been obtained for the discrete linear estimation problem [15, 16, 29-31], following again the partitioning approach. However, because these discrete results are completely analogous to the continuous-case results, and because of space limitations, the discrete results will not be presented here. They can be found in the above references.
(ii) We note that related linear estimation and associated Riccati equation results have also been obtained independently by Denman [32], Ljung et al. [33], Ljung and Kailath [34], Friedlander et al. [35], Sidhu and Desai [36], Sidhu and Bierman [37], and Desai [38], using essentially Redheffer's scattering framework. Denman, in particular, was the first to apply Redheffer's scattering framework and the associated augmented transition matrix concept to the solution of the Riccati equation [32]. In contrast to the partitioning approach, however, the scattering framework used by the above investigators did not provide a stochastic, estimation-related, physical interpretation of the change in boundary conditions, and of the related partial observability and controllability matrices. In the partitioning framework, these interpretations and physical insight yield interesting general partial-state-vector backward filtering and smoothing algorithms, as well as general partial-state-vector backward Markov models.
It is finally noted that the partitioning framework for estimation is quite general, and it applies to nonlinear estimation, as can be seen in Refs. [12, 13, 26, 27]. Indeed, the linear estimation results are a mere special case of the nonlinear or adaptive estimation partitioned results [12, 13, 26, 27].

VI. FAST AND ROBUST NUMERICAL ALGORITHMS
The GPA serve as the basis of computationally effective, fast, and numerically robust algorithms for the numerical implementation of the estimation
formulas [16-19, 22-24]. The numerical estimation algorithms have a decomposed or "partitioned" structure that results through "partitioning" of the total computation interval into subintervals and solving the estimation Riccati equation in each subinterval with an arbitrarily chosen nominal initial condition for each subinterval. Thus, effectively, the Riccati equation solution over the whole interval, or equivalently the estimation solution, has been decomposed into a set of elemental piecewise solutions which are both simple and, most importantly, completely decoupled from each other via anchoring at the chosen nominal initial conditions at the initial point of each subinterval. As such, these elemental solutions are computable in either a parallel or serial processing mode. Further, the overall solution is given exactly in terms of a simple recursive operation on the elemental solutions. The "partitioned" numerical algorithm (Lambda algorithm) is given in the following for the estimation Riccati equation (RE). Completely analogous results are also true for the nonsymmetric as well as the control RE, but are not presented because of space limitations.

PARTITIONED NUMERICAL ALGORITHM (LAMBDA ALGORITHM)

Given the partitioning of the computation interval [t_0, t_f] into the nonoverlapping subintervals (t_i, t_{i+1}), i = 0, 1, 2, ..., f − 1, the partitioned numerical algorithm is given as
\[
P(t_{k+1},t_0) = P_o(t_{k+1},t_k) + \Phi_o(t_{k+1},t_k)\bigl[O_o(t_{k+1},t_k) + P^{-1}(t_k,t_0)\bigr]^{-1}\Phi_o^T(t_{k+1},t_k), \tag{116}
\]
where P_o(t_{k+1},t_k) is the solution of Eq. (5) at time t_{k+1}, with initial condition P_o(t_k,t_k) = 0 at t_k, k = 1, 2, ...; Φ_o(t_{k+1},t_k) is the transition matrix at t_{k+1} corresponding to the dynamic matrix F(t) − P_o(t,t_k)H^T(t)R^{-1}(t)H(t); and O_o(t,t_k) is given by Eq. (25) with initial condition O_o(t_k,t_k) = 0.
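As an illustration of Eq. (116), the following is a minimal sketch (not from the paper) of the Lambda recursion, assuming the elemental triples (P_o, Φ_o, O_o) for each subinterval have already been computed, e.g., in parallel, and that the running solution P(t_k, t_0) is nonsingular:

```python
import numpy as np

def lambda_solve(P0, elementals):
    """Lambda algorithm, Eq. (116): combines the per-subinterval
    elemental solutions (Po, Phio, Oo), which are mutually decoupled,
    into the overall RE solution at each subinterval endpoint.
    Assumes the running solution P is invertible."""
    inv = np.linalg.inv
    P = P0
    history = []
    for Po, Phio, Oo in elementals:
        P = Po + Phio @ inv(Oo + inv(P)) @ Phio.T   # Eq. (116)
        history.append(P)
    return history
```

For a time-invariant RE with equal subinterval lengths, the same triple (P_o(Δ), Φ_o(Δ), O_o(Δ)) is reused at every step, which is precisely the simplification of Eq. (117) below.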
REMARKS.

(i) We note that the Lambda algorithm gives the RE solution at the discrete times t_i, i = 1, 2, .... It does so in terms of the set of elemental solutions, one for each subinterval, which are completely decoupled from each other and
as such computable in either a serial or parallel processing mode, with the consequent advantages of fast operation and reduction in storage requirements. Moreover, the PS (partitioned solution) is given by the simple recursive operation on the elemental solutions, namely Eq. (116), whose desirable properties have been recounted previously, e.g., numerical robustness in the presence of initial ill-conditioning, stiff F matrices, etc.

(ii) The numerical advantages and robustness of the Lambda algorithm may be explained in terms of the following: a) partitioning into small subintervals and anchoring the initial value at zero in each subinterval reduces the effect of the nonlinear term, which is the source of numerical errors; and b) partitioning into subintervals and decoupling the solutions over each subinterval reduces the propagation of errors from one subinterval to the next.

(iii) At this point the computational requirements of the Lambda algorithm must be discussed. We note that since the L-algorithm is based on the PS, it has all the numerical and computational advantages of the PS; e.g., it is numerically better than other means of solving the RE, such as Runge-Kutta integration. In using the L-algorithm instead of the PS over the whole interval, there is no decrease in computational requirements for a general RE. However, as noted above, in using the L-algorithm we have the choice of either serial or parallel processing. Further, we note that there is a class of time-varying RE for which an appreciable decrease in computational requirements results through use of this algorithm. Specifically, if the time variation is of a periodic nature, then by suitable choice of the subinterval length, P_o(·), Φ_o(·), and O_o(·) need only be calculated once, in the first subinterval, and remain the same thereafter. Namely, from the second subinterval on, the algorithm consists only of matrix inversions.

(iv) Moreover, for the case of a time-invariant RE the Lambda algorithm takes the following simple form, if t_{i+1} − t_i = Δ for all i:
\[
P[(k+1)\Delta] = P_o(\Delta) + \Phi_o(\Delta)\bigl[O_o(\Delta) + P^{-1}(k\Delta)\bigr]^{-1}\Phi_o^T(\Delta), \tag{117}
\]
where now P_o(Δ), Φ_o(Δ), and O_o(Δ) need only be computed for the first subinterval, and remain the same thereafter. As such, for a constant RE a considerable decrease in the computational burden accrues from the use of the Lambda algorithm. Moreover, it must be noted that even in this case we are dealing with nonstationary processes or time-varying solutions because of transients.

(v) We observe from Eqs. (116), (117) that the Lambda algorithm constitutes in essence a continued fraction expansion. This can be seen easily for the
scalar, constant RE, where Eq. (117) may be written as
\[
P[(k+1)\Delta] = P_o(\Delta) + \cfrac{\phi_o^2(\Delta)}{O_o(\Delta) + \cfrac{1}{P_o(\Delta) + \cfrac{\phi_o^2(\Delta)}{O_o(\Delta) + \cdots}}}\,. \tag{118}
\]
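A tiny sketch (illustrative only) of this scalar continued fraction: truncating Eq. (118) at a finite depth is the same as iterating Eq. (117) from the zero initial condition, for which P(Δ) = P_o(Δ).

```python
def scalar_lambda_cf(Po, phio, Oo, depth):
    """Evaluates the scalar continued fraction of Eq. (118) to a
    finite depth; each pass adds one rung of the ladder, i.e., one
    step of the scalar recursion (117) started at P(Delta) = Po."""
    P = Po
    for _ in range(depth):
        P = Po + phio**2 / (Oo + 1.0 / P)
    return P
```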
The significant numerical advantages and other interesting interpretations of the L-algorithm can be readily deduced from its continued fraction expansion structure or ladder network representation.

(vi) As noted previously, the L-algorithm results naturally from the "partitioned" solution first obtained by Lainiotis in 1970. More recently, and independently of this author, Womble and Potter [25] obtained an algorithm very similar to the Lambda algorithm. However, their derivation is very different, and perhaps not as illuminating or as simple as the one given herein.

In many estimation applications where time-invariant Riccati equations appear, the steady-state solution may be the prime item of interest. In these situations it is of essence to reduce the integration time (and hence the amount of computation) required to obtain the steady-state solution. Such a computationally effective procedure exists; it is based on "doubling" the length of the "partitioning" interval, and on straightforward use of the "partitioned" solution. The pertinent numerical algorithm is given in the following:

DOUBLING PARTITIONED ALGORITHM (DELTA ALGORITHM)
\[
P(2^{n+1}\Delta) = P_o(2^n\Delta) + \Phi_o(2^n\Delta)\bigl[O_o(2^n\Delta) + P^{-1}(2^n\Delta)\bigr]^{-1}\Phi_o^T(2^n\Delta), \tag{119}
\]
where now
\[
P_o(2^{n+1}\Delta) = P_o(2^n\Delta) + \Phi_o(2^n\Delta)\bigl[O_o(2^n\Delta) + P_o^{-1}(2^n\Delta)\bigr]^{-1}\Phi_o^T(2^n\Delta), \tag{120}
\]
\[
\Phi_o(2^{n+1}\Delta) = \Phi_o(2^n\Delta)\bigl[O_o(2^n\Delta) + P_o^{-1}(2^n\Delta)\bigr]^{-1}P_o^{-1}(2^n\Delta)\,\Phi_o(2^n\Delta), \tag{121}
\]
\[
O_o(2^{n+1}\Delta) = O_o(2^n\Delta) + \Phi_o^T(2^n\Delta)\bigl[O_o^{-1}(2^n\Delta) + P_o(2^n\Delta)\bigr]^{-1}\Phi_o(2^n\Delta). \tag{122}
\]
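A minimal sketch (not the author's code) of the doubling recursion, Eqs. (120)-(122), assuming the indicated inverses exist; for the zero initial condition P(τ,τ) = 0 used in the example below, the RE solution at 2ⁿΔ is simply P_o(2ⁿΔ) itself.

```python
import numpy as np

def delta_double(Po, Phio, Oo):
    """One doubling step: elemental quantities for an interval of
    length 2T from those for length T, Eqs. (120)-(122)."""
    inv = np.linalg.inv
    M = inv(Oo + inv(Po))                          # shared factor
    Po2 = Po + Phio @ M @ Phio.T                   # Eq. (120)
    Phio2 = Phio @ M @ inv(Po) @ Phio              # Eq. (121)
    Oo2 = Oo + Phio.T @ inv(inv(Oo) + Po) @ Phio   # Eq. (122)
    return Po2, Phio2, Oo2

def delta_steady_state(Po, Phio, Oo, n_doublings=30):
    """Delta algorithm: after n doublings the interval length is
    2**n * Delta; with P(tau, tau) = 0 the RE solution there is
    Po itself, which converges to the steady-state solution."""
    for _ in range(n_doublings):
        Po, Phio, Oo = delta_double(Po, Phio, Oo)
    return Po
```

Each step costs only a few matrix inversions and multiplications, yet doubles the time reached, which is the source of the order-of-magnitude speedups reported below.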
REMARKS.
(i) We note the possible computational advantages of the above "doubling" algorithm. Specifically, it only requires: a) integration over the first Δ interval
to obtain Φ_o(Δ), O_o(Δ), and P_o(Δ); and b) subsequently, repeated use of the simple recursive Eqs. (119)-(122). The latter require only iterative matrix inversions to obtain the RE solution at the end of a time interval which is twice as long as the interval in the previous iteration, i.e., doubling.

(ii) The partitioned algorithms have numerical and computational advantages, specifically robustness, speed, and stability. These advantages are especially pronounced in the case of ill-conditioned RE that result from either: a) ill-conditioned initial conditions; or b) stiff system matrices. The superior computational performance of the partitioned algorithms has been brought out in an extensive computer study, documented elsewhere [16-18]. In this paper, attention is focused on the computation-time requirements of the Lambda and Delta algorithms in determining the steady-state Riccati solution. Specifically, a comparison is made between the computation-time requirements of the Runge-Kutta, the Lambda, and the Delta algorithms for the same accuracy (up to the tenth decimal place). The comparison was made on a 10 × 10 and a 20 × 20 RE with coefficient matrices as follows:
\[
F = \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix}, \qquad
A_{21} = A_{22} = \begin{bmatrix} -2 & 1 & & 0\\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1\\ 0 & & 1 & -2 \end{bmatrix},
\]
\[
H = [\,1 \;\; 1 \;\cdots\; 1\,], \qquad R = 1, \qquad Q = I, \qquad P(\tau,\tau) = 0.
\]
We note that the above RE have nonstiff F matrices and a well-conditioned initial condition matrix, namely P(τ,τ) = 0.

Computation-Time to Steady-State

    Algorithm           10 × 10    20 × 20
    Runge-Kutta (RK)      89.1       1244
    Lambda (Λ)            36.9        568
    Delta (Δ)              6.64        48

Computation-Time Ratios

    Ratio        10 × 10    20 × 20
    RK to Λ        2.62       2.45
    RK to Δ       13.4       26
    Λ to Δ         5.6       11.8
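For readers who wish to reproduce the flavor of this comparison, the following sketch builds a test RE of the above tridiagonal type and computes its steady-state solution with a standard solver as a reference value; the layout of the unspecified blocks of F is a hypothetical choice (only A21 = A22 is given above), as are the dimensions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def tridiag(n):
    """Tridiagonal block with -2 on the diagonal and 1 off it."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

n = 5                                   # half the state dimension
A = tridiag(n)
F = np.block([[A, np.zeros((n, n))],    # hypothetical layout for
              [A, A]])                  # the unspecified blocks
H = np.ones((1, 2 * n))
Q = np.eye(2 * n)
R = np.array([[1.0]])

# Steady-state solution of F P + P F' + Q - P H' R^{-1} H P = 0,
# via the dual algebraic Riccati equation, as a reference value.
P_ss = solve_continuous_are(F.T, H.T, Q, R)
```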
In summary, the above examples indicate that the Delta algorithm is an order of magnitude faster than the Runge-Kutta algorithm, and several times faster than the Lambda algorithm. Moreover, the computational advantage of the Delta algorithm over either the Runge-Kutta or even the Lambda algorithm increases drastically with increasing state-vector dimensionality. For further details, the reader is referred to Ref. [28].

REFERENCES

1. D. G. Lainiotis, Estimation: A Brief Survey, Inform. Sci. 7 (3), 191-202 (Nov. 1974).
2. T. Kailath, A View of Three Decades of Linear Filtering Theory, IEEE Trans. Inform. Theory IT-20 (2) (March 1974).
3. J. S. Meditch, A Survey of Data Smoothing for Linear and Nonlinear Dynamic Systems, Automatica 9, 151-162 (1973).
4. R. E. Kalman and R. S. Bucy, New Results in Linear Filtering and Prediction Theory, Trans. ASME, J. Basic Eng., Series D 83, 95-108 (March 1961).
5. T. Kailath, Some New Algorithms for Recursive Estimation in Constant Linear Systems, IEEE Trans. Inform. Theory IT-19 (6), 750-760 (Nov. 1973).
6. A. Lindquist, Optimal Filtering of Continuous-Time Stationary Processes by Means of the Backward Innovation Process, SIAM J. Control 12 (4) (Nov. 1974).
7. R. F. Brammer, A Note on the Use of Chandrasekhar Equations for the Calculation of the Kalman Filter Gain Matrix, IEEE Trans. Inform. Theory IT-21, 334-337 (May 1975).
8. D. Q. Mayne, A Solution of the Smoothing Problem for Linear Dynamic Systems, Automatica 4, 73-92 (1966).
9. D. G. Fraser, A New Technique for the Optimal Smoothing of Data, Sc.D. Dissertation, MIT, January 1967.
10. J. S. Meditch, Optimal Fixed-Point Continuous Linear Smoothing, Proc. 1967 Joint Autom. Contr. Conf., pp. 249-257.
11. T. Kailath and P. Frost, An Innovations Approach to Least-Squares Estimation, Part II: Linear Smoothing in Additive White Noise, IEEE Trans. Autom. Control AC-13 (6), 655-660 (1968).
12. D. G. Lainiotis, Partitioned Estimation Algorithms, I: Nonlinear Estimation, Inform. Sci. 7, 203-255 (1974).
13. D. G. Lainiotis, Optimal Nonlinear Estimation, Int. J. Control 14 (6), 1137-1148 (1971).
14. D. G. Lainiotis, Optimal Linear Smoothing: Continuous Data Case, Int. J. Control 17 (5), 921-930 (1973).
15. D. G. Lainiotis, Partitioned Estimation Algorithms, II: Linear Estimation, Inform. Sci. 7 (3), 317-340 (1974).
16. D. G. Lainiotis, Partitioned Estimation Algorithms, II: Linear Estimation, Proc. IEEE Decision and Control Conf., Nov. 1974.
17. D. G. Lainiotis, Fast Riccati Equation Solutions: Partitioned Algorithms, Proc. 1975 Milwaukee Symposium on Automatic Computation and Control, April 1975.
18. D. G. Lainiotis, Generalized Chandrasekhar Algorithms: Time-Varying Models, Proc. 1975 IEEE Decision and Control Conf., Dec. 1975.
19. D. G. Lainiotis, Partitioned Riccati Algorithms, Proc. 1975 IEEE Decision and Control Conf., Dec. 1975.
20. D. G. Lainiotis and K. S. Govindaraj, A Unifying Approach to Linear Estimation via the Partitioned Algorithms, I: Continuous Models, Proc. 1975 IEEE Decision and Control Conf., Dec. 1975.
21. D. G. Lainiotis and K. S. Govindaraj, Optimal Linear Smoothing Algorithms: Comparisons and Interconnections, submitted for publication.
22. D. G. Lainiotis, Fast Riccati Equation Solutions: Partitioned Algorithms, to appear in J. Comput. Electric. Engin., Nov. 1975.
23. D. G. Lainiotis and K. S. Govindaraj, Partitioned Riccati Equation Solution Algorithms: Computer Simulation, Proc. 1975 Pittsburgh Conference on Modeling and Simulation, April 1975.
24. D. G. Lainiotis, Fast Estimation Algorithms, submitted for publication.
25. M. E. Womble and J. E. Potter, A Prefiltering Version of the Kalman Filter with New Numerical Integration Formulas for Riccati Equations, IEEE Trans. Autom. Control AC-20 (3), 378-380 (1975).
26. D. G. Lainiotis, Partitioning: A Unifying Framework for Adaptive Systems, I: Estimation, Proc. IEEE, Aug. 1976.
27. D. G. Lainiotis, Partitioning: A Unifying Framework for Adaptive Systems, II: Control, Proc. IEEE, Aug. 1976.
28. D. G. Lainiotis, Partitioned Riccati Solutions and Integration-Free Doubling Algorithms, IEEE Trans. Autom. Control (Oct. 1976).
29. D. G. Lainiotis, Partitioned Linear Estimation Algorithms: Discrete Case, IEEE Trans. Autom. Control AC-20 (3), 255-257 (June 1975).
30. D. G. Lainiotis, Discrete Riccati Equation Solutions: Partitioned Algorithms, IEEE Trans. Autom. Control AC-20 (4), 555-556 (Aug. 1975).
31. D. G. Lainiotis and K. S. Govindaraj, A Unifying Approach to Linear Estimation via the Partitioned Algorithms, II: Discrete Models, Proc. 1975 IEEE Decision and Control Conf., Dec. 1975.
32. E. D. Denman, Invariant Imbedding and Linear Systems, in Invariant Imbedding, R. E. Bellman and E. D. Denman, Eds., Springer-Verlag, New York, 1971.
33. L. Ljung, T. Kailath, and B. Friedlander, Scattering Theory and Linear Least Squares Estimation, Part I: Continuous-Time Problems, Proc. IEEE 64 (1), 131-139 (Jan. 1976).
34. L. Ljung and T. Kailath, A Unified Approach to Smoothing Problems, Automatica, to appear, 1976.
35. B. Friedlander, T. Kailath, and L. Ljung, Scattering Theory and Linear Least Squares Estimation, Part II: Discrete-Time Problems, J. Franklin Inst. (Jan. 1976).
36. G. S. Sidhu and U. B. Desai, New Smoothing Algorithms Based on Reversed-Time Lumped Models, IEEE Trans. Autom. Control, to appear, 1976.
37. G. S. Sidhu and G. J. Bierman, Integration-Free Interval Doubling Calculation for Riccati Equation Solution, submitted for publication.
38. U. B. Desai, An Orthogonal Directions Approach to Discrete-Time TPBVP's, Scattering Matrices, and Recursive Estimation Algorithms, M.S. Thesis, Dept. of Electrical Engineering, State University of New York at Buffalo (Feb. 1976).

Received November, 1975