Multi-Partitioning Linear Estimation Algorithms: Discrete Time Systems


Copyright © IFAC Identification and System Parameter Estimation, Washington D.C., USA, 1982

D. G. Lainiotis* and D. Andrisani II**

*University of Patras, Patras, Greece
**Purdue University, West Lafayette, Indiana 47907, USA

Abstract. The partitioned estimation algorithms of Lainiotis for the linear discrete time state estimation problem have been generalized in two important ways. First, the initial condition of the estimation problem can be partitioned into the sum of an arbitrary number of jointly Gaussian random variables; and second, these jointly Gaussian random variables may be statistically dependent. The resulting algorithm consists of an imbedded Kalman filter with partial initial conditions and one correction term for each additional partition or subdivision of the initial state vector. This approach, called multi-partitioning, can be used to provide added insight into the estimation problem. One significant application is the parameter identification problem, where identification algorithms can be formulated in which the inversion of the information matrix of the parameters is replaced by simple division by scalars. A second use of multi-partitioning is to show the specific effects on the filtered state estimate of off-diagonal terms in the initial-state covariance matrix. An extremely important offshoot of the multi-partitioning results is a new formula for the inversion of a symmetric matrix. The nature of the solution shows promise for the inversion of numerically ill-conditioned matrices. This result is applicable not only to estimation problems but to matrix theory in general.

1. INTRODUCTION


During the last several years the partitioned approach to state estimation of linear stochastic systems has been in a continual state of evolution and extension. Based upon the "Partition Theorem" of Lainiotis, fundamentally new linear filtering and smoothing algorithms in terms of explicit integral equations for continuous time systems (1-5), or summation equations for discrete time systems (5-10), have been obtained. Moreover, it has been shown that all previously established smoothing algorithms can be readily obtained from the partitioned algorithms (3-5, 7, 9). These earlier results were based upon the decomposition or partitioning of the initial state vector into the sum of two statistically independent Gaussian random vectors. In this paper, the initial state is partitioned into the sum of an arbitrary number of jointly Gaussian random vectors which may be statistically dependent; hence the name multi-partitioning is appropriate. This paper presents the multi-partitioning results for discrete time systems. The multi-partitioning results for the continuous time problem are very similar.

2. MULTI-PARTITIONING ALGORITHMS: DISCRETE TIME SYSTEMS


2.1 Background

The discrete time state estimation problem is given by the following mathematical model:

x(k+1) = Φ(k+1,k) x(k) + u(k)
Z(k) = H(k) x(k) + v(k)     (1)

where x(k) is an n×1 state vector and u(k) is an n×1 vector of Gaussian white zero mean process noise. The observation vector Z(k) is an m×1 vector which is corrupted by the Gaussian white zero mean noise sequence v(k), also an m×1 vector. The initial condition x(ℓ) is a Gaussian random vector which is statistically independent of u(k) and v(k). The process noise u(k) and measurement noise v(j) are themselves statistically independent for all k and j. Furthermore,

E{u(k) u^T(j)} = Q(k) δ(k−j),    E{v(k) v^T(j)} = R(k) δ(k−j)

p{x(ℓ)} = N{x̄(ℓ), P(ℓ)}     (2)

where E{·} is the expectation operator and N{x̄, P} represents the normal probability density function with mean vector x̄ and variance matrix P. The objective of the discrete time state estimation problem is to find the optimal filtered estimate x̂(k|k) of x(k), given all the discrete data from the initial time point ℓ until the time point k, Λ(k,ℓ) = {Z(ℓ+1), ..., Z(k)}. Thus we


write

x̂(k|k) = E[x(k) | Λ(k,ℓ)]

Based upon this mathematical model, the discrete time Generalized Partitioned Algorithm (GPA) has been derived (references (6)-(10)), where it is assumed that the initial condition is partitioned or decomposed into two statistically independent components. That is,

x(ℓ) = x_n + x_r

where

p(x_n, x_r) = N( [x̄_n; x̄_r], [P_n 0; 0 P_r] )

The discrete time GPA is presented below for reference purposes.

Discrete Time Generalized Partitioned Algorithm (GPA) - The optimal filtered estimate and the filtered error covariance matrix are given by

x̂(j|j) = x̂_n(j|j) + Φ_n(j,ℓ) x̂_r(ℓ|j)
P(j|j) = P_n(j|j) + Φ_n(j,ℓ) P_r(ℓ|j) Φ_n^T(j,ℓ)     (3)

where 0 ≤ ℓ < j. The partial initial state smoothed estimate and its error covariance matrix are given by

x̂_r(ℓ|j) = P_r(ℓ|j) [M_n(j,ℓ) + P_r^{-1} x̄_r]
P_r(ℓ|j) = [O_n(j,ℓ) + P_r^{-1}]^{-1}     (4)

The nominal state estimate x̂_n(j|j), its error covariance matrix P_n(j|j), the nominal Kalman gain matrix K_n(j), the residuals of the Kalman filter Z̃_n(j|j−1), the covariance of the residuals P_{z_n}(j|j−1), and the nominal transition matrix Φ_n(j,ℓ) are given in terms of a discrete time Kalman filter initialized at

x̂_n(ℓ|ℓ) = x̄_n,    P_n(ℓ|ℓ) = P_n     (5)

The nominal Kalman filter is related to the total estimation problem by the partial state observability matrix O_n(j,ℓ) and the vector M_n(j,ℓ), defined as

M_n(j,ℓ) = Σ_{k=ℓ+1}^{j} Φ_n^T(k−1,ℓ) Φ^T(k,k−1) H^T(k) P_{z_n}^{-1}(k|k−1) Z̃_n(k|k−1)

O_n(j,ℓ) = Σ_{k=ℓ+1}^{j} Φ_n^T(k−1,ℓ) Φ^T(k,k−1) H^T(k) P_{z_n}^{-1}(k|k−1) H(k) Φ(k,k−1) Φ_n(k−1,ℓ)     (6)

where

Φ_n(j,j−1) = [I − K_n(j) H(j)] Φ(j,j−1)

As in the continuous time case, the two-part discrete GPA can be extended by allowing more than two partitionings and by allowing each component of the initial state to be statistically dependent. This is done as follows. The initial condition x(ℓ) is partitioned into L statistically dependent jointly Gaussian random vectors x_i:

x(ℓ) = Σ_{i=1}^{L} x_i     (7)

where the n×1 random vectors x_i have the joint Gaussian density function

p(x_1, x_2, ..., x_L) = N( [x̄_1; x̄_2; ...; x̄_L], [P_11 P_12 ... P_1L; P_12^T P_22 ... P_2L; ...; P_1L^T P_2L^T ... P_LL] )     (8)

The quantities x̄_i and P_ij are n×1 and n×n matrices, respectively. With these assumptions, the mean and variance of x(ℓ) are

x̄(ℓ) = Σ_{i=1}^{L} x̄_i,    P(ℓ) = Σ_{i=1}^{L} Σ_{j=1}^{L} P_ij     (9),(10)

The objective of this section is to obtain the optimal mean-square-error filtered estimate x̂(k|k) of x(k) and the corresponding filtered error covariance matrix P(k|k). In the discrete time GPA, references (6)-(10), the estimate and covariance are composed of two portions, each with statistical meaning. In the multi-partitioned framework, on the other hand, the estimate and covariance are partitioned into many parts.

2.2 Independent Multi-Partitioning

A natural simplifying assumption to the model hypothesized in Section 2.1 is to assume that the random vectors x_i which together comprise the initial condition are independent Gaussian random vectors, i.e.

p(x_1, x_2, ..., x_L) = N( [x̄_1; x̄_2; ...; x̄_L], [P_11 0 ... 0; 0 P_22 ... 0; ...; 0 0 ... P_LL] )     (11)

As a result, the mean and covariance of x(ℓ) can be written as

x̄(ℓ) = Σ_{i=1}^{L} x̄_i,    P(ℓ) = Σ_{i=1}^{L} P_ii     (12)

The partitioned estimation results for this initial condition are expressed in the following theorem.

Theorem 1: Independent Multi-Partitioned Estimation

When the initial condition x(ℓ) is partitioned into the sum of L independent random vectors, the optimal filtered estimate x̂(k|k) and its error covariance matrix are given by

x̂(k|k) = x̂_1(k|k) + Σ_{i=2}^{L} Φ_i(k,ℓ) x̂_i(ℓ|k)     (13)

P(k|k) = P_1(k|k) + Σ_{i=2}^{L} Φ_i(k,ℓ) P_i(ℓ|k) Φ_i^T(k,ℓ)     (14)

where


x̂_1(k|k) = E[x(k) | Λ(k,ℓ); x_2 = 0, x_3 = 0, ..., x_L = 0]     (15)

P_1(k|k) = E{[x(k) − x̂(k|k)][x(k) − x̂(k|k)]^T | Λ(k,ℓ); x_2, x_3, ..., x_L}     (16)

Furthermore, the conditional smoothed estimate of the partial initial state vector x_i and its error covariance are given by

x̂_i(ℓ|k) = P_i(ℓ|k)[M_i(k,ℓ) + P_ii^{-1} x̄_i] = [I + P_ii O_i(k,ℓ)]^{-1} [P_ii M_i(k,ℓ) + x̄_i]     (17)

P_i(ℓ|k) = [O_i(k,ℓ) + P_ii^{-1}]^{-1} = [I + P_ii O_i(k,ℓ)]^{-1} P_ii     (18)

where

x̂_i(ℓ|k) = E[x_i | Λ(k,ℓ); x_{i+1} = 0, x_{i+2} = 0, ..., x_L = 0]     (19)

P_i(ℓ|k) = E{[x_i − x̂_i(ℓ|k)][x_i − x̂_i(ℓ|k)]^T | Λ(k,ℓ); x_{i+1}, x_{i+2}, ..., x_L}     (20)

The auxiliary equations for i ≥ 3 are evaluated using the recursive equations

M_i(k,ℓ) = [I − O_{i−1}(k,ℓ) P_{i−1}(ℓ|k)][M_{i−1}(k,ℓ) − O_{i−1}(k,ℓ) x̄_{i−1}]     (21)

O_i(k,ℓ) = O_{i−1}(k,ℓ)[I − P_{i−1}(ℓ|k) O_{i−1}(k,ℓ)]     (22)

Φ_i(k,ℓ) = Φ_{i−1}(k,ℓ)[I − P_{i−1}(ℓ|k) O_{i−1}(k,ℓ)]     (23)

For i = 2 the auxiliary equations are

M_2(k,ℓ) = Σ_{i=ℓ+1}^{k} Φ_2^T(i−1,ℓ) Φ^T(i,i−1) H^T(i) P_{z_1}^{-1}(i|i−1) {Z(i) − H(i) Φ(i,i−1) x̂_1(i−1|i−1)}     (24)

O_2(k,ℓ) = Σ_{i=ℓ+1}^{k} Φ_2^T(i−1,ℓ) Φ^T(i,i−1) H^T(i) P_{z_1}^{-1}(i|i−1) H(i) Φ(i,i−1) Φ_2(i−1,ℓ)     (25)

Φ_2(i,i−1) = [I − K_1(i) H(i)] Φ(i,i−1)     (26)

Finally, the quantities x̂_1(i|i), P_1(i|i), and K_1(i) are given by a Kalman filter initialized at x̂_1(ℓ|ℓ) = x̄_1 and P_1(ℓ|ℓ) = P_11, i.e.

x̂_1(i|i) = Φ_2(i,i−1) x̂_1(i−1|i−1) + K_1(i) Z(i)     (27)

K_1(i) = P_1(i|i−1) H^T(i) P_{z_1}^{-1}(i|i−1)     (28)

P_{z_1}(i|i−1) = H(i) P_1(i|i−1) H^T(i) + R(i)     (29)

P_1(i|i−1) = Φ(i,i−1) P_1(i−1|i−1) Φ^T(i,i−1) + Q(i−1)     (30)

P_1(i|i) = [I − K_1(i) H(i)] P_1(i|i−1)     (31)

with initial conditions

x̂_1(ℓ|ℓ) = x̄_1,    P_1(ℓ|ℓ) = P_11     (32)

Proof: The proof involves repeated use of the discrete time GPA and can be found in reference (18).

Remarks:

1. Note that the optimal filtered estimate x̂(k|k) is given in terms of a conditional filtered estimate x̂_1(k|k) and a series of conditional smoothed estimates x̂_i(ℓ|k) of the components x_i of the initial state. The conditional filtered estimate x̂_1(k|k) results from an imbedded Kalman filter initialized at x̄_1, P_11. It is apparent, therefore, that the optimal estimate is composed of a filtered portion due to x_1, i.e. x̂_1(k|k), and a portion due to updated knowledge of the x_i, i.e. x̂_i(ℓ|k).

2. From (18) it is apparent that the partial state smoothed covariance is bounded from above by O_i^{-1}(k,ℓ), i.e.

P_i(ℓ|k) ≤ O_i^{-1}(k,ℓ)

where the equality holds when P_ii = ∞. Thus O_i^{-1}(k,ℓ) is seen to be a Fisher information matrix for the estimation of x_i conditioned on knowing x_{i+1}, x_{i+2}, ..., x_L. In the absence of a priori knowledge about x_i, i.e. P_ii = ∞, the Fisher estimate (least squares estimate) of x_i is given by

x̂_i(ℓ|k) = O_i^{-1}(k,ℓ) M_i(k,ℓ),    P_i(ℓ|k) = O_i^{-1}(k,ℓ)

3. Note further that the above multi-partitioned algorithm constitutes a natural generalization of the previously obtained partitioned algorithms of Lainiotis (10)-(14). The generalization consists of decomposing the initial state into many components rather than two.

4. Partitioning of the initial condition into many components allows great flexibility in the form of the solution. For instance, suppose the problem is time invariant and that

P_1 = P_steady-state,  P_2 = P_true − P_steady-state,  P_3 = ∞
x̄_2 = 0,  x̄_3 = 0

Then x̂_1(k|k) will be the Wiener filter of x(k); in this way several distinct estimators are incorporated into a single multi-partitioned solution. The information form of the Kalman filter was not required.

5. The discrete time multi-partitioning results are remarkably similar to the continuous time results (see reference (18)). The only difference between the two algorithms is that a Kalman filter replaces a Kalman-Bucy filter and summations replace integrations.

6. Theorem 1 reduces to the discrete time GPA with L = 2.

7. A rigorous proof of this theorem is possible by direct use of the Partition Theorem of Lainiotis (reference (1)), a proof which has the advantage that it provides physical interpretation for the various partitioned quantities.

8. The optimal unconditional smoothed estimates of x_i can be obtained as follows:

x̂_i*(ℓ|k) = x̂_i(ℓ|k) − P_i(ℓ|k) O_i(k,ℓ) Σ_{j=i+1}^{L} x̂_j*(ℓ|k)     (33)

where x̂_i*(ℓ|k) = E{x_i | Λ(k,ℓ)}.

9. The total state filtered estimate can be expressed in terms of the optimal smoothed estimates x̂_i*(ℓ|k) as follows:

x̂(k|k) = x̂_1(k|k) + Φ_2(k,ℓ) Σ_{i=2}^{L} x̂_i*(ℓ|k)     (34)
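Theorem 1 is an exact decomposition, so it can be checked numerically: a single Kalman filter started from the pooled initial statistics of equation (12) must coincide with the imbedded filter plus one correction term per partition. The sketch below does this for L = 3 in NumPy; the system matrices, partition statistics, and measurement values are hypothetical illustrative choices, and the code follows eqs (13), (14), (17), (18), (21)-(26) directly.

```python
import numpy as np

# Two-state time-invariant system (illustrative, hypothetical values)
Phi = np.array([[1.0, 0.1], [0.0, 0.95]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
I2 = np.eye(2)

def kalman(x, P, zs):
    """Discrete Kalman filter, eqs (27)-(31); also records (K, innovation, P_z)."""
    hist = []
    for z in zs:
        x, P = Phi @ x, Phi @ P @ Phi.T + Q            # predict
        S = H @ P @ H.T + R                            # P_z(i|i-1), eq (29)
        K = P @ H.T @ np.linalg.inv(S)                 # eq (28)
        innov = z - H @ x                              # residual of eq (24)
        x, P = x + K @ innov, (I2 - K @ H) @ P         # eqs (27), (31)
        hist.append((K, innov, S))
    return x, P, hist

# Independent partition of the initial condition: x(0) = x1 + x2 + x3 (L = 3)
means = [np.array([[1.0], [0.0]]),
         np.array([[0.3], [-0.2]]),
         np.array([[-0.1], [0.4]])]
covs = [0.5 * np.eye(2),
        np.array([[0.2, 0.05], [0.05, 0.1]]),
        0.3 * np.eye(2)]
zs = [np.array([[0.9]]), np.array([[1.3]]), np.array([[0.7]]), np.array([[1.1]])]

# Reference: one filter on the pooled initial statistics, eq (12)
x_ref, P_ref, _ = kalman(sum(means), sum(covs), zs)

# Theorem 1: imbedded filter on (x1_bar, P11) plus one correction per partition
x_hat, P_hat, hist = kalman(means[0], covs[0], zs)
M, O, Phi_i = np.zeros((2, 1)), np.zeros((2, 2)), I2
for K, innov, S in hist:                               # eqs (24)-(26)
    W = Phi_i.T @ Phi.T @ H.T @ np.linalg.inv(S)
    M += W @ innov
    O += W @ H @ Phi @ Phi_i
    Phi_i = (I2 - K @ H) @ Phi @ Phi_i                 # accumulate Phi_2(i, 0)
for i in (1, 2):                                       # partitions x2 and x3
    A = np.linalg.inv(I2 + covs[i] @ O)
    x_sm = A @ (covs[i] @ M + means[i])                # eq (17)
    P_sm = A @ covs[i]                                 # eq (18)
    x_hat = x_hat + Phi_i @ x_sm                       # eq (13)
    P_hat = P_hat + Phi_i @ P_sm @ Phi_i.T             # eq (14)
    M = (I2 - O @ P_sm) @ (M - O @ means[i])           # eq (21)
    Phi_i = Phi_i @ (I2 - P_sm @ O)                    # eq (23)
    O = O @ (I2 - P_sm @ O)                            # eq (22)
```

Because Corollary 1 below permits indefinite P_ii, the same loop also realizes the dependent case of Theorem 2 when the partition statistics of equation (37) are substituted.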

In the previous theorem, statistically meaningful interpretations were given to the quantities in the multi-partitioned solution, for instance in equations (15), (16), (19), and (20). These interpretations resulted from the assumption that the initial condition is partitioned into the sum of random variables. However, cases exist where physical interpretation of this kind is not desired. For instance, suppose the initial state is partitioned as

x(ℓ) = x_n + x_r

where the joint Gaussian probability density function of the random vectors x_n and x_r is given by

p(x_n, x_r) = N( [x̄_n; x̄_r], [P_n P_nr; P_nr^T P_r] )     (35)

The moments of x(ℓ) are therefore

x̄(ℓ) = x̄_n + x̄_r,    P(ℓ) = P_n + P_r + P_nr + P_nr^T     (36)

It is tempting to propose that x(ℓ) is composed of three independent Gaussian random vectors with moments

x̄_1 = x̄_n,  x̄_2 = x̄_r,  x̄_3 = 0
P_11 = P_n,  P_22 = P_r,  P_33 = P_nr + P_nr^T     (37)

Such a choice would guarantee that equation (36) is satisfied. The problem, however, is that P_33, while symmetric, is an indefinite matrix; therefore P_33 is not a valid covariance matrix of a random variable. With this as motivation, a re-evaluation of Theorem 1 and the discrete time GPA is in order.

Although Theorem 1 and the discrete time GPA were derived with the assumption that P_r and P_22, P_33, ..., P_LL are covariance matrices (nonnegative definite symmetric matrices), this strict requirement is not necessary. This important result is stated in the following corollary.

Corollary 1: Independent Multi-Partitioning with Indefinite P_ii

For a given x̄(ℓ) and P(ℓ), Theorem 1, equations (13)-(20), remains valid when the x̄_i (2 ≤ i ≤ L) are arbitrary and the P_ii (2 ≤ i ≤ L) are arbitrary symmetric matrices.

Theorem 2: Dependent Discrete Time GPA via the Independent Multi-Partitioning Approach

Assume that the random initial condition x(ℓ) is partitioned into the sum of two dependent random vectors x_n and x_r with joint Gaussian probability density function given by equation (35). The optimal filtered state estimate of x(k) and its corresponding error covariance matrix are given by

x̂(k|k) = x̂_1(k|k) + Φ_2(k,ℓ) x̂_2(ℓ|k) + Φ_3(k,ℓ) x̂_3(ℓ|k)     (38)

P(k|k) = P_1(k|k) + Φ_2(k,ℓ) P_2(ℓ|k) Φ_2^T(k,ℓ) + Φ_3(k,ℓ) P_3(ℓ|k) Φ_3^T(k,ℓ)     (39)

where

x̂_3(ℓ|k) = P_3(ℓ|k) M_3(k,ℓ)     (40)

P_3(ℓ|k) = [I + (P_nr + P_nr^T) O_3(k,ℓ)]^{-1} [P_nr + P_nr^T]     (41)

x̂_2(ℓ|k) = P_2(ℓ|k) [M_2(k,ℓ) + P_r^{-1} x̄_r]     (42)

P_2(ℓ|k) = [I + P_r O_2(k,ℓ)]^{-1} P_r     (43)

The auxiliary equations are given by M3 (k, : )=[I-0 2 (k, l )P 2 (£[ k)][M 2 (k, t )-02(k, £) Xr ] (44)

Multi-Partitioning Linear Estimation Algorithms

03(k, [ )=02(k, ; )[I-P 2 ( [ ,k)02(k , ;) ]

(45) * 3(k' ~) = ; 2 ( k, ;) [I-P2(,k)02 ( k, ,) ] (46) k 112 ( k , ; ) =, I "; (j - 1 , ; ) ; T(j , j - 1) fI T(j ) PZ1 J= . +l 1 (j ' j-l) k (47) 02(k ;; )=, ~ :;(j_l, .): T(j ,j_l )HT(j)P Zl J= ; +l 1 (j , j - 1 ) fI (j ) , ~ (j , j - 1 ) t 2 (j - 1 , ; ) ( 48) :t2 (j , j - 1 ) =[I - Kl (j) H( j) ]; (j , j -1) , (49) Furthermore, the CJuantites Xl (k i k), Pl (k :k), and K, (k) are given by a Ka lman filter initi al i zed at (50) Xl( 21~ )=Xn' Pl( ~iz )=Pn Proof: The proof consists of rartitioninC] the initial condition into three parts as in equations (37) and applyinf] Theorerl 1 and Corollary 1. Remarks: 1.
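The construction can be mirrored numerically: a Kalman filter started from the pooled moments of equation (36) must coincide with the nominal filter plus the two correction terms of (38)-(39). The scalar sketch below uses illustrative values for the system and for P_n, P_r, P_nr, and follows eqs (40)-(50) literally.

```python
# Scalar model x(k+1) = x(k) + u(k), Z(k) = x(k) + v(k) (illustrative values)
phi, h, q, r = 1.0, 1.0, 0.05, 0.25
xn_bar, xr_bar = 1.0, -0.4          # dependent partition x(0) = x_n + x_r
Pn, Pr, Pnr = 0.5, 0.4, 0.15
zs = [0.9, 1.3, 0.7, 1.1]           # fixed measurement sequence

def kf(x, P):
    """Scalar Kalman filter; returns final estimate, covariance, step data."""
    hist = []
    for z in zs:
        x, P = phi * x, phi * P * phi + q
        S = h * P * h + r                       # innovation covariance
        K = P * h / S
        innov = z - h * x
        x, P = x + K * innov, (1 - K * h) * P
        hist.append((K, innov, S))
    return x, P, hist

# Full filter on the combined moments, eq (36)
x_full, P_full, _ = kf(xn_bar + xr_bar, Pn + Pr + 2 * Pnr)

# Nominal filter on (xn_bar, Pn) plus the Theorem 2 corrections
x1, P1, hist = kf(xn_bar, Pn)
M2 = O2 = 0.0
Phi2 = 1.0                                      # running Phi_2(j, 0)
for K, innov, S in hist:
    M2 += Phi2 * phi * h * innov / S            # eq (47)
    O2 += Phi2 * phi * h * h * phi * Phi2 / S   # eq (48)
    Phi2 = (1 - K * h) * phi * Phi2             # eq (49), accumulated
P2 = Pr / (1 + Pr * O2)                         # eq (43)
x2 = P2 * (M2 + xr_bar / Pr)                    # eq (42)
M3 = (1 - O2 * P2) * (M2 - O2 * xr_bar)         # eq (44)
O3 = O2 * (1 - P2 * O2)                         # eq (45)
Phi3 = Phi2 * (1 - P2 * O2)                     # eq (46)
P3 = 2 * Pnr / (1 + 2 * Pnr * O3)               # eq (41): P_nr + P_nr^T = 2*Pnr
x3 = P3 * M3                                    # eq (40)

x_mp = x1 + Phi2 * x2 + Phi3 * x3               # eq (38)
P_mp = P1 + Phi2 * P2 * Phi2 + Phi3 * P3 * Phi3 # eq (39)
```

Note that P3 plays the role of the indefinite "covariance" P_3(ℓ|k): it may be negative here without invalidating the decomposition.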

Remarks:

1. This theorem should be closely compared with the discrete time GPA. If P_n, P_r, x̄_n, x̄_r are the same in the GPA as in Theorem 2, then the following theoretical and numerical equivalences can be made:

Theorem 2        Discrete Time GPA
x̂_1(k|k)         x̂_n(k|k)
x̂_2(ℓ|k)         x̂_r(ℓ|k)
Φ_2(k,ℓ)         Φ_n(k,ℓ)
P_1(k|k)         P_n(k|k)
P_2(ℓ|k)         P_r(ℓ|k)     (51)

It follows therefore that the first two terms in equation (38), x̂_1(k|k) + Φ_2(k,ℓ) x̂_2(ℓ|k), represent the optimal filtered estimate of x(k) if x_n and x_r are assumed independent. The third term in (38), Φ_3(k,ℓ) x̂_3(ℓ|k), depends on P_nr and represents an explicit correction term that accounts for the dependence of x_n and x_r in the optimal estimate of x(k). Similarly, the first two terms in equation (39), P_1(k|k) + Φ_2(k,ℓ) P_2(ℓ|k) Φ_2^T(k,ℓ), represent the filtered error covariance matrix of x(k) when x_n and x_r are assumed independent, and the third term in (39), Φ_3(k,ℓ) P_3(ℓ|k) Φ_3^T(k,ℓ), is a correction term that accounts for the effect of P_nr on the covariance of x(k). In many practical estimation problems, off-diagonal terms in covariance matrices such as P_nr are frequently ignored. Theorem 2 gives a direct way of studying the effects of P_nr on the optimal estimate.

2. If P_nr is not important, then the third terms in equations (38) and (39) can be ignored.

3. The term x̂_3(ℓ|k) is not an optimal estimate of anything in the sense that x̂_r(ℓ|k) was an optimal estimate of x_r given the data Λ(k,ℓ). Neither is P_3(ℓ|k) a covariance matrix, since it may be indefinite.

The next subsection approaches multi-partitioning from a different direction.

2.3 Dependent Multi-Partitioning

The requirement that the individual random vectors x_i in the partition of the initial state vector (equation (7)) be statistically independent can now be removed. Results for the case where the x_i are statistically dependent can be found in references (13) and (14).

2.4 Application to Matrix Inversion

A new algorithm for matrix inversion has been derived from Theorem 1 by appropriately multi-partitioning the estimation problem into N+1 parts and comparing with the two-part partitioning solution. The result is given below as Theorem 3.

Theorem 3: Multi-Partitioning Matrix Inversion Algorithm, Symmetric Case


The matrix inverse of the n×n symmetric covariance matrix C is given by the following summation equation:

C^{-1} = Σ_{i=1}^{n} Q_i^T R_i Q_i     (52)

where the following definitions apply:

a. The n×n diagonal matrix D is arbitrarily chosen and has elements d_i on the i-th diagonal.     (53)

b. The n×n matrix F_1 is defined as F_1 = C − D^{-1}.

c. The element in the first row and first column of the matrix R_1 is defined as

R_1(1,1) = d_1 / (1 + d_1 F_1(1,1))     (54)

where F_1(1,1) is the element in the first row and first column of the matrix F_1. All other elements of R_1 are zero.

d. Q_1 = I.

For 2 ≤ i ≤ n,

R_i(i,i) = d_i / (1 + d_i F_i(i,i)),    Q_i = [I − F_{i−1} R_{i−1}] Q_{i−1}     (55),(56)

with F_i = F_{i−1}[I − R_{i−1} F_{i−1}], where the matrix R_i has only one non-zero element, R_i(i,i), defined above.
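A direct implementation of Theorem 3 is sketched below; the recursion used for F_i is the one stated above, which mirrors the O_i recursion (22) and checks out numerically on small examples. Note that the only divisions performed are by the scalars 1 + d_i F_i(i,i), as promised in the abstract.

```python
import numpy as np

def multipartition_inverse(C, d=None):
    """Invert a symmetric matrix C by the summation (52); no matrix
    inversion or matrix division is used, only one scalar division per step."""
    n = C.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, dtype=float)
    F = C - np.diag(1.0 / d)                    # F_1 = C - D^{-1}
    Qi = np.eye(n)                              # Q_1 = I
    Cinv = np.zeros((n, n))
    for i in range(n):
        r = d[i] / (1.0 + d[i] * F[i, i])       # R_i(i,i), eqs (54)-(55)
        Ri = np.zeros((n, n))
        Ri[i, i] = r                            # R_i has one non-zero element
        Cinv += Qi.T @ Ri @ Qi                  # accumulate eq (52)
        FR = F @ Ri
        Qi = (np.eye(n) - FR) @ Qi              # Q_{i+1} = [I - F_i R_i] Q_i
        F = F - FR @ F                          # F_{i+1} = F_i [I - R_i F_i]
    return Cinv
```

The choice d_i = 1 is arbitrary; any diagonal D for which the scalars 1 + d_i F_i(i,i) remain non-zero is admissible, which is where the claimed freedom for ill-conditioned problems enters.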


Remarks:

1. If C is a nonsymmetric matrix, the algorithm is very simply modified by replacing Q_i^T in equation (52) by S_i, where S_i is given by the recursion S_1 = I, S_i = S_{i−1}[I − R_{i−1} F_{i−1}] for 2 ≤ i ≤ n.
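The nonsymmetric variant can be sketched the same way, propagating a left factor S_i and a right factor Q_i on opposite sides, again with F_i = F_{i−1}[I − R_{i−1} F_{i−1}]; for symmetric C it reduces to the form of Theorem 3, since then S_i = Q_i^T.

```python
import numpy as np

def partition_inverse_general(C, d=None):
    """C^{-1} = sum_i S_i R_i Q_i for a general square C (Remark 1 sketch)."""
    n = C.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, dtype=float)
    F = C - np.diag(1.0 / d)                    # F_1 = C - D^{-1}
    S, Qi = np.eye(n), np.eye(n)                # S_1 = Q_1 = I
    Cinv = np.zeros((n, n))
    for i in range(n):
        r = d[i] / (1.0 + d[i] * F[i, i])       # still only scalar divisions
        Ri = np.zeros((n, n))
        Ri[i, i] = r
        Cinv += S @ Ri @ Qi                     # nonsymmetric form of eq (52)
        S = S @ (np.eye(n) - Ri @ F)            # S_{i+1} = S_i [I - R_i F_i]
        Qi = (np.eye(n) - F @ Ri) @ Qi          # Q_{i+1} = [I - F_i R_i] Q_i
        F = F - F @ Ri @ F                      # F_{i+1} = F_i [I - R_i F_i]
    return Cinv
```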
4. Summary and Conclusions

The partitioned estimation algorithms of Lainiotis for linear state estimation have been generalized in this paper in two important ways. First, the initial condition of the estimation problem can now be partitioned into the sum of an arbitrary number of random variables; and second, these jointly Gaussian random variables may be statistically dependent. Multi-partitioning is a natural framework for investigating the effects of the initial state vector (its estimate and covariance) on the optimal state estimate and its covariance. A new formula for the inversion of a symmetric matrix is an important offshoot of this theory.

REFERENCES

1. Lainiotis, D.G.: "Optimal Nonlinear Estimation," International Journal of Control, Vol. 14, No. 6, pp. 1137-1148, 1971.
2. Lainiotis, D.G.: "Partitioned Estimation Algorithms, II: Linear Estimation," Information Sciences, Vol. 7, pp. 317-340, 1974.
3. Lainiotis, D.G.: "A Unifying Framework for Linear Estimation: Generalized Partitioned Algorithms," Journal of Information Sciences, Vol. 10, No. 3, April 1976.

4. Lainiotis, D.G.: "Optimal Adaptive Estimation: Structure and Parameter Adaptation," IEEE Trans. on Auto. Control, Vol. AC-16, No. 2, April 1971.
5. Lainiotis, D.G., and Govindaraj, K.S.: "A Unifying Approach to Linear Estimation via the Partitioned Algorithms, I: Continuous Models," Proceedings of the 1975 IEEE Conference on Decision and Control, IEEE, New York, Dec. 1975.
6. Lainiotis, D.G.: "Partitioned Linear Estimation Algorithms: Discrete Case," IEEE Trans. Automat. Contr., Vol. AC-20, pp. 255-257, June 1975.
7. Govindaraj, K.S.: "Partitioned Algorithms for Estimation and Control," Ph.D. Dissertation, State University of New York at Buffalo, September 1976.
8. Lainiotis, D.G.: "Discrete Riccati Equation Solutions: Partitioned Algorithms," IEEE Trans. Automat. Contr., Vol. AC-20, pp. 555-556, Aug. 1975.
9. Govindaraj, K.S. and Lainiotis, D.G.: "A Unifying Framework for Discrete Linear Estimation: Generalized Partitioned Algorithms," Int. J. Control, Vol. 28, No. 5, pp. 571-588, 1978.
10. Lainiotis, D.G. and Govindaraj, K.S.: "Discrete Riccati Equation Solutions: Generalized Partitioned Algorithms," J. Inform. Sciences, Vol. 15, No. 3, pp. 169-185, November 1978.
11. Meditch, J.S.: "Stochastic Optimal Linear Estimation and Control," McGraw-Hill, New York, 1969.
12. Saridis, G.N.: "Self-Organizing Control of Stochastic Systems," Dekker, New York, 1977.
13. Andrisani, D. and Lainiotis, D.G.: "Multi-Partitioning Estimation Algorithms," Technical Report 1978-4, Systems Research Center, State Univ. of New York at Buffalo, Amherst, New York, Dec. 1978.
14. Andrisani, D.: "Multi-Partitioning Estimation Algorithms," Ph.D. Thesis, State University of New York at Buffalo, Amherst, New York, Dec. 1978.
15. Eulrich, B.J., Andrisani, D., and Lainiotis, D.G.: "New Identification Algorithms and Their Relationships to Maximum-Likelihood Methods: The Partitioned Approach," Proc. of the 1978 Joint Automatic Control Conf., ISA, Pittsburgh, Pennsylvania, Oct. 1978.
16. Eulrich, B.J., Andrisani, D., and Lainiotis, D.G.: "New Identification Algorithms and Their Relationships to Maximum-Likelihood Methods: The Partitioned Approach," submitted for publication, 1978.
17. Ljung, L.: "Asymptotic Behavior of the Extended Kalman Filter as a Parameter Estimator for Linear Systems," IEEE Trans. on Auto. Cont., Vol. AC-24, pp. 36-50, Feb. 1979.
18. Lainiotis, D.G. and Andrisani, D.: "Multipartitioning Linear Estimation Algorithms: Continuous Systems," IEEE Trans. on Auto. Con., Vol. AC-24, No. 6, pp. 937-944, Dec. 1979.