A new per-sample partitioning filter

Signal Processing 12 (1987) 27-47, North-Holland

D.G. Lainiotis and K.E. Anagnostou
Department of Computer Engineering and Informatics, School of Engineering, University of Patras, 26500 Patras, Greece

Received 12 February 1986; revised 21 August 1986

Abstract. The partitioning approach essentially constitutes an adaptive framework which provides a unifying and powerful setting for optimal linear and nonlinear estimation. In particular, for the class of linear estimation problems, model partitioning has yielded two novel and practically useful estimation algorithms [Lainiotis (1971-1979); Lainiotis and Andrisani (1979, 1983)]. The first algorithm results from partitioning the initial state vector into the sum of two independent random vectors, arbitrarily chosen by the filter designer. Similarly, the second algorithm results from partitioning both the initial state and the process-noise vectors into the sum of two random vectors. These algorithms were shown to possess several computational advantages and interesting properties. In this paper, based on the above results, a new linear estimation algorithm is obtained; it constitutes a per-sample partitioning version of the above Lainiotis-Andrisani algorithm. Moreover, a computational analysis of the new algorithm is made with respect to computer time and storage requirements. This is compared to the computational requirements of the Kalman filter, and relevant conclusions are drawn. Finally, to demonstrate the practical usefulness of the new algorithm in signal processing, it is applied to the important class of multi-sensor estimation problems encountered in geophysical data processing, multi-radar problems, etc.

Keywords. Filtering, estimation, Kalman filter, Lainiotis filter, partitioning algorithm.


1. Introduction

In the last twenty-five years, estimation theory has received considerable attention, as can be seen in [12] and in several books on estimation (e.g., [21]). This is because of the great applicability of estimation theory to a variety of engineering, scientific, and socio-economic problems. Here we restrict our attention to the linear discrete Gaussian estimation problem, specifically to finding a best (in the mean-square sense) estimate of the states of a linear dynamic state model. The problem was first solved, very elegantly, by Kalman [9]. Subsequently, many authors obtained results similar to Kalman's using different approaches or considering different aspects of the problem.

Another solution to the problem, the model partitioning approach, was proposed by Lainiotis [10, 11, 13-17], who, using his "Partitioning Theorem", obtained fundamentally new linear estimation algorithms. The partitioning approach essentially constitutes an adaptive framework which provides a unifying and powerful setting for optimal linear and nonlinear estimation. In particular, for the class of linear estimation problems, model partitioning has yielded two novel and practically useful estimation algorithms [Lainiotis [10, 11, 13-17]; Lainiotis and Andrisani [18, 19]]. The first algorithm results from partitioning the initial-state vector into the sum of two independent random vectors, chosen arbitrarily by the filter designer. Similarly, the second algorithm results from partitioning both the initial state and the process noise vectors into the sum of two random vectors. A third algorithm also results from the latter model partitioning, as pointed out by Andrisani and Gau [1]. These algorithms were shown to possess several advantages [1, 10, 11, 13-19]. Moreover, the algorithms of Lainiotis et al. [1, 10, 11, 13-19] have been used extensively in signal processing [3, 20] and for identification [2, 8]. Finally, it must be noted that research has started on the application of these partitioning algorithms to image processing and on their VLSI implementation. They are efficient replacements for the Kalman filter, which has been used extensively in signal processing applications.

In this paper, based on the above results, a new linear estimation algorithm is obtained. It constitutes a per-sample partitioning version of the above Lainiotis-Andrisani algorithm. Moreover, a computational analysis of the new algorithm is made with respect to computer time and storage requirements. This is compared to the computational requirements of the Kalman filter, and relevant conclusions are drawn. Finally, to demonstrate the practical usefulness of the new algorithm in signal processing, it is applied to the important class of multi-sensor estimation problems encountered in geophysical data processing, multi-radar problems, etc.

2. Problem formulation

The discrete linear state estimation problem is described by the following model equations and statement of objective:

$x(k+1) = \Phi(k+1,k)x(k) + \Gamma(k)u(k)$,  $x(0) = x_0$,  (1)
$z(k+1) = H(k+1)x(k+1) + v(k+1)$,  (2)

where $x(k)$ and $z(k)$ are the $n$- and $m$-dimensional state and measurement processes, respectively, and $\{u(k)\}$ and $\{v(k)\}$ are the $p$-dimensional plant and $m$-dimensional measurement noise random processes. They are independent, zero-mean, white Gaussian processes, with covariances $Q(k)$ and $R(k)$, respectively. The initial state vector $x(0)$ is Gaussian, with mean $\hat{x}(0)$ and covariance $P(0)$, and is independent of the noise processes for all $k \ge 0$.
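To fix ideas, the following minimal sketch (Python with NumPy; an illustration under the stated model, not part of the original paper) generates a state trajectory and measurement sequence from (1)-(2) for a time-invariant model. The function name and interface are assumptions.

```python
import numpy as np

def simulate_model(Phi, Gamma, H, Q, R, x0_mean, P0, steps, seed=0):
    """Generate x(k) and z(k) from eqs. (1)-(2):
    x(k+1) = Phi x(k) + Gamma u(k),   z(k+1) = H x(k+1) + v(k+1),
    with u ~ N(0, Q), v ~ N(0, R) white and independent, x(0) ~ N(x0_mean, P0)."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(x0_mean, P0)                   # Gaussian initial state
    xs, zs = [], []
    for _ in range(steps):
        u = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)   # plant noise u(k)
        x = Phi @ x + Gamma @ u                                # eq. (1)
        v = rng.multivariate_normal(np.zeros(R.shape[0]), R)   # measurement noise v(k+1)
        zs.append(H @ x + v)                                   # eq. (2)
        xs.append(x)
    return np.array(xs), np.array(zs)
```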


Given the set of measurements $\Lambda_k = \{z(1), z(2), \ldots, z(k)\}$, we desire the mean-square-error optimal filtered estimate $\hat{x}(k/k)$ of $x(k)$, which will be denoted as $\hat{x}(k,0)$ when used in the new algorithm, in order to retain the symbol first used by Lainiotis in the Lainiotis per-sample partitioning filter (LPSPF) [17]. To facilitate the presentation of the main results of this paper, the well-known Kalman filter and the Lainiotis-Andrisani partitioning filter are given in Appendix A.

3. A new per-sample partitioning algorithm: Initial condition and process noise partitioning

This paper pertains to a model partitioning consisting of the decomposition, at every sample instant $t_k$, of both the state vector $x(k)$ (serving as the initial condition of each step) and the process noise vector $u(k)$ into two independent vectors, namely

$x(k) = x_n(k) + x_r(k)$,  (3)
$u(k) = u_n(k) + u_r(k)$,  (4)

and the corresponding means and covariances are given by

$\hat{x}(k) = \hat{x}_n(k) + \hat{x}_r(k)$,  (5)
$P(k) = P_n(k) + P_r(k)$,  (6)
$Q(k) = Q_n(k) + Q_r(k)$.  (7)

Assuming the nominal conditions $\hat{x}_n(k) = 0$, $P_n(k) = 0$ at every sample instant $t_k$, the following algorithm is obtained.

Theorem.¹ The optimal estimate $\hat{x}(k+1,0)$ and its covariance $P(k+1,0)$ are given by

$\hat{x}(k+1,0) = \hat{x}_1(k+1,k) + \Phi_1(k+1,k)[P^{-1}(k,0) + N(k+1,k)]^{-1}[M(k+1,k) + P^{-1}(k,0)\hat{x}(k,0)]$,  (8)

$P(k+1,0) = P_n(k+1,k) + C_n(k+1,k)[Q_r^{-1}(k) + O_{n2}(k+1,k)]^{-1}C_n^T(k+1,k) + \Phi_1(k+1,k)[P^{-1}(k,0) + N(k+1,k)]^{-1}\Phi_1^T(k+1,k)$,  (9)

where

$\hat{x}_1(k+1,k) = K_1(k+1,k)z(k+1)$,  (10)
$\Phi_1(k+1,k) = [I - K_1(k+1,k)H(k+1)]\Phi(k+1,k)$,²  (11)
$K_1(k+1,k) = K_n(k+1,k) + C_n(k+1,k)[Q_r^{-1}(k) + O_{n2}(k+1,k)]^{-1}\Gamma^T(k)H^T(k+1)A(k+1)$,  (12)
$K_n(k+1,k) = \Gamma(k)Q_n(k)\Gamma^T(k)H^T(k+1)A(k+1)$,  (13)
$C_n(k+1,k) = [I - K_n(k+1,k)H(k+1)]\Gamma(k)$,  (14)
$P_n(k+1,k) = [I - K_n(k+1,k)H(k+1)]\Gamma(k)Q_n(k)\Gamma^T(k)$,  (15)
$L(k+1) = [Q_r^{-1}(k) + O_{n2}(k+1,k)]^{-1}O_{n3}^T(k+1,k)$,  (16)
$N(k+1,k) = O_{n1}(k+1,k) - O_{n3}(k+1,k)L(k+1)$,  (17)
$M(k+1,k) = [\Phi(k+1,k) - \Gamma(k)L(k+1)]^T H^T(k+1)A(k+1)z(k+1)$,  (18)

¹ The proof of the Theorem is given in Appendix B.
² $I$ is an identity matrix of dimensionality $n \times n$.


$O_{n1}(k+1,k) = \{H(k+1)\Phi(k+1,k)\}^T A(k+1)\{H(k+1)\Phi(k+1,k)\}$,  (19)
$O_{n2}(k+1,k) = \{H(k+1)\Gamma(k)\}^T A(k+1)\{H(k+1)\Gamma(k)\}$,  (20)
$O_{n3}(k+1,k) = \{H(k+1)\Phi(k+1,k)\}^T A(k+1)\{H(k+1)\Gamma(k)\}$,  (21)
$A(k+1) = [H(k+1)\Gamma(k)Q_n(k)\Gamma^T(k)H^T(k+1) + R(k+1)]^{-1}$,  (22)

with initial conditions $\hat{x}(0,0) = \hat{x}(0)$, $P(0,0) = P(0)$, $\Phi_1(0,0) = I$.

Remarks. (1) It is observed that the quantities $\hat{x}_1(k+1,k)$, $K_1(k+1,k)$, $C_n(k+1,k)$, $P_n(k+1,k)$, $N(k+1,k)$, $M(k+1,k)$, and $O_{n2}(k+1,k)$ are completely determined, simply and nonrecursively, by equations (11), (12), (14), (15), (17), (18), and (20), utilizing only the model quantities and the data at the current time instant $t_{k+1}$.

(2) The computation of the quantities $K_n(k+1,k)$, $C_n(k+1,k)$, $P_n(k+1,k)$, and $O_{ni}(k+1,k)$ ($i = 1, 2, 3$) is based only on the nominal component $u_n(k)$ of the process noise $u(k)$.

(3) In this algorithm, five matrix inversions are needed: one of dimensionality $m \times m$, two of dimensionality $p \times p$, and two of dimensionality $n \times n$. In the case of the Kalman filter, one matrix inversion of dimensionality $m \times m$ is needed.

(4) This algorithm is a generalization of the LPSPF [17]. This becomes obvious if we set $Q_r(k) = 0$; then we have

$Q_n(k) = Q(k)$,  (23)
$L(k+1) = 0$,  (24)
$K_1(k+1,k) = \Gamma(k)Q(k)\Gamma^T(k)H^T(k+1)A(k+1) = K_n(k+1,k)$,  (25)
$\Phi_1(k+1,k) = [I - K_n(k+1,k)H(k+1)]\Phi(k+1,k) = \Phi_n(k+1,k)$,  (26)
$\hat{x}_1(k+1,k) = K_n(k+1,k)z(k+1) = \hat{x}_n(k+1,k)$,  (27)
$N(k+1,k) = \{H(k+1)\Phi(k+1,k)\}^T A(k+1)\{H(k+1)\Phi(k+1,k)\} = O_n(k+1,k)$,  (28)
$P_n(k+1,k) = [I - K_n(k+1,k)H(k+1)]\Gamma(k)Q(k)\Gamma^T(k)$,  (29)
$M(k+1,k) = \Phi^T(k+1,k)H^T(k+1)A(k+1)z(k+1) = M_n(k+1,k)$,  (30)
$\hat{x}(k+1,0) = \hat{x}_n(k+1,k) + \Phi_n(k+1,k)[P^{-1}(k,0) + O_n(k+1,k)]^{-1}[M_n(k+1,k) + P^{-1}(k,0)\hat{x}(k,0)]$,  (31)
$P(k+1,0) = P_n(k+1,k) + \Phi_n(k+1,k)[P^{-1}(k,0) + O_n(k+1,k)]^{-1}\Phi_n^T(k+1,k)$.  (32)

So the LPSPF algorithm [17] results.

(5) The computational nature of this algorithm and its computational advantages can further be seen by considering the case of time-invariant models. For time-invariant models, the algorithm has the following form:

$\hat{x}(k+1,0) = \hat{x}_1(k+1,k) + \Phi_1[P^{-1}(k,0) + N]^{-1}[M(k+1,k) + P^{-1}(k,0)\hat{x}(k,0)]$,  (33)
$P(k+1,0) = P_n + C_n[Q_r^{-1} + O_{n2}]^{-1}C_n^T + \Phi_1[P^{-1}(k,0) + N]^{-1}\Phi_1^T$,  (34)

where now

$\hat{x}_1(k+1,k) = K_1 z(k+1)$,  (35)
$\Phi_1 = (I - K_1 H)\Phi$,  (36)
$N = O_{n1} - O_{n3}L$,  (37)
$L = (Q_r^{-1} + O_{n2})^{-1}O_{n3}^T$,  (38)
$K_1 = K_n + C_n(Q_r^{-1} + O_{n2})^{-1}\Gamma^T H^T A$,  (39)
$K_n = \Gamma Q_n \Gamma^T H^T A$,  (40)
$C_n = (I - K_n H)\Gamma$,  (41)
$P_n = (I - K_n H)\Gamma Q_n \Gamma^T$,  (42)
$O_{n1} = (H\Phi)^T A(H\Phi)$,  (43)
$O_{n2} = (H\Gamma)^T A(H\Gamma)$,  (44)
$O_{n3} = (H\Phi)^T A(H\Gamma)$,  (45)
$A = (H\Gamma Q_n \Gamma^T H^T + R)^{-1}$,  (46)
$M(k+1,k) = (\Phi - \Gamma L)^T H^T A z(k+1)$.  (47)

To fully appreciate how interesting the above version of the theorem is, it must be noticed that the Kalman filter realization of the optimal linear estimate results in filter algorithms that are time-varying even for time-invariant models. It is seen that the above per-sample partitioning realization of the optimal filter for time-invariant models is completely time-invariant, even from the first iteration. It is further noted that the quantities $K_1$, $\Phi_1$, $N$, $(\Phi - \Gamma L)^T H^T A$, $P_n + C_n(Q_r^{-1} + O_{n2})^{-1}C_n^T$, $O_{ni}$ ($i = 1, 2, 3$), $L$, $A$, $K_n$ need be computed only once and stored for subsequent use (the quantities $A$, $L$, $O_{n1}$, $O_{n2}$, $O_{n3}$, and $K_n$ are needed for the first iteration only). Two matrix inversions per iteration (of dimensionality $n \times n$) are needed, and three matrix inversions (one of dimensionality $m \times m$ and two of dimensionality $p \times p$) for the first iteration. A sketch of this precompute-once structure is given below.

(6) The new algorithm has reduced computational and storage requirements for the following system models:

(a) Periodic models: In this time-varying case, the new algorithm is again given by equations (8)-(22), except that now the filter quantities $K_1(\cdot)$, $\Phi_1(\cdot)$, $N(\cdot)$, $P_n(\cdot) + C_n(\cdot)[Q_r^{-1}(\cdot) + O_{n2}(\cdot)]^{-1}C_n^T(\cdot)$, and the coefficient multiplying $z(k+1)$ in (18), although time-varying, are periodic with the same period as that of the model. Thus, they only need to be computed for a data subinterval equal to the period, and stored for subsequent use in the remaining part of the total data interval.

(b) Slowly time-varying models: Partitioning the total data interval into subintervals over which the model is approximately time-invariant, although different in each subinterval, the new algorithm is given by the simpler equations (33)-(47) in each subinterval. Thus, the filter quantities $\Phi_1$, $K_1$, $P_n + C_n(Q_r^{-1} + O_{n2})^{-1}C_n^T$, $N$, and the coefficient in $M(\cdot)$ need only be computed once for each subinterval, and stored for use throughout the subinterval to which they pertain.

(7) It is emphasized that the choice of the covariance of $u_n(k)$ (namely, $Q_n(k)$) is completely arbitrary and up to the designer of the filter, subject to the equation $Q(k) = Q_n(k) + Q_r(k)$. It must be noted that $Q_n(k)$ may be chosen to yield computationally efficient realizations of the optimal linear filter, as in [18, 19].
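As an illustration of remark (5), the following sketch (Python with NumPy; a hedged reconstruction, not the authors' code) precomputes the constant filter quantities of (35)-(47) once and then realizes each per-sample iteration via (33)-(34) only. The function names are hypothetical, and $Q_r$ is assumed to be chosen positive definite so that $Q_r^{-1}$ exists.

```python
import numpy as np

def npspf_constants(Phi, Gamma, H, Qn, Qr, R):
    """Precompute the constant quantities of eqs. (35)-(47) (time-invariant model)."""
    n = Phi.shape[0]
    A = np.linalg.inv(H @ Gamma @ Qn @ Gamma.T @ H.T + R)        # (46)
    Kn = Gamma @ Qn @ Gamma.T @ H.T @ A                          # (40)
    Cn = (np.eye(n) - Kn @ H) @ Gamma                            # (41)
    Pn = (np.eye(n) - Kn @ H) @ Gamma @ Qn @ Gamma.T             # (42)
    On1 = (H @ Phi).T @ A @ (H @ Phi)                            # (43)
    On2 = (H @ Gamma).T @ A @ (H @ Gamma)                        # (44)
    On3 = (H @ Phi).T @ A @ (H @ Gamma)                          # (45)
    S0 = np.linalg.inv(np.linalg.inv(Qr) + On2)                  # p x p inverse in (38), (39)
    L = S0 @ On3.T                                               # (38)
    N = On1 - On3 @ L                                            # (37)
    K1 = Kn + Cn @ S0 @ Gamma.T @ H.T @ A                        # (39)
    Phi1 = (np.eye(n) - K1 @ H) @ Phi                            # (36)
    D = (Phi - Gamma @ L).T @ H.T @ A                            # coefficient of z(k+1) in (47)
    P1 = Pn + Cn @ S0 @ Cn.T                                     # constant part of (34)
    return K1, Phi1, N, D, P1

def npspf_step(z, xhat, P, K1, Phi1, N, D, P1):
    """One per-sample iteration: eqs. (33) and (34) only."""
    Pinv = np.linalg.inv(P)                                      # P^{-1}(k, 0)
    S = np.linalg.inv(Pinv + N)                                  # [P^{-1}(k,0) + N]^{-1}
    xhat_new = K1 @ z + Phi1 @ S @ (D @ z + Pinv @ xhat)         # (33), using (35), (47)
    P_new = P1 + Phi1 @ S @ Phi1.T                               # (34)
    return xhat_new, P_new
```

Note how the per-iteration work involves only the two $n \times n$ inversions of remark (5), all other quantities being fixed from the first iteration onward.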

4. Comparative computational analysis

The computational requirements of a given filter, namely the computation time per cycle (iteration) and the required storage, are a realistic measure of its practicality and usefulness: they determine the minimum sampling rates, the computational capacity required, and the computer memory size. In this section, a comparative computational analysis is performed of the new per-sample partitioning algorithm


of Section 3 and of the Kalman filter, for time-invariant systems. The computational characteristics examined are computation time (operation counts) and storage requirements per iteration. This analysis involves several complexities, such as those associated with computer word length, system software, etc.; a complete treatment of these questions, however, is clearly beyond the scope of a single paper. To facilitate the presentation, the basic assumptions on which the analysis is based are given below. It must be noted that similar assumptions were also made in the work of Cosmidis [5, 6]. These assumptions are the following:

(a) Storage requirements are divided into two types:
• permanent storage: the amount of computer memory which must be permanently assigned to the filter, constituting the memory of the filter, and
• auxiliary storage: the extra computer memory used for temporary storage of the auxiliary variables needed per iteration to 'run' the filter.
(b) Single-precision variables and arithmetic are assumed.
(c) Any quantity computed once and needed later is saved.
(d) Operations performed only once for each algorithm are ignored.
(e) Multiplications, additions, divisions, and square roots are counted.
(f) In the computation of time and storage counts, matrix symmetry has been taken into consideration wherever it appears.
(g) All symmetric positive-definite matrix inversions are assumed to be performed via the efficient Cholesky factorization [4]; the inverted matrix is stored in the same locations as those assigned to the given matrix. A sketch of such an inversion is given after this list.
(h) The appropriate model parameters are given a priori, and no computational effort is made to derive them.
(i) The dimension of the process noise vector is smaller than (or equal to) the dimension of the state vector (i.e., $p \le n$).
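As an aside on assumption (g), the following minimal sketch (Python with NumPy; a hypothetical helper, not from the paper) inverts a symmetric positive-definite matrix via its Cholesky factor. The operation counts assumed in Appendix D for such an $m \times m$ inversion are $0.5(m^3 + 3m^2 - 2m)$ multiplications, $2m - 1$ divisions, and $m$ square roots.

```python
import numpy as np

def spd_inverse(M):
    """Invert a symmetric positive-definite matrix via Cholesky factorization,
    per assumption (g): M = C C^T with C lower triangular, so
    M^{-1} = C^{-T} C^{-1}."""
    C = np.linalg.cholesky(M)                      # lower-triangular factor of M
    Cinv = np.linalg.solve(C, np.eye(M.shape[0]))  # C^{-1} by substitution
    return Cinv.T @ Cinv
```

A production implementation would use dedicated triangular solves; the generic solve above merely keeps the sketch self-contained.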
Kalman filter (denoted KF)

(1) Storage requirements:

(a) Permanent memory: Running the Kalman algorithm requires the permanent storage of the constant filter quantities $\Phi$, $H$, $R$ (i.e., $n(n+m) + 0.5m(m+1)$ memory words). It also requires the permanent storage of the constant quantity $Q_1 = \Gamma Q \Gamma^T$ (i.e., $0.5n(n+1)$ m.words) and of $P$, $\hat{x}$, $z$, requiring $0.5n(n+3) + m$ m.words. Therefore, the total per-iteration permanent memory requirement of the Kalman filter is given by the following sum:

$n^2 + nm + 0.5(m^2+m) + 0.5(n^2+n) + 0.5n^2 + 1.5n + m = n(2n+2+m) + 0.5m(m+3)$ m.words.  (48)

(b) Auxiliary storage: From Section D.1 of Appendix D results the following count of auxiliary storage requirements:

$2nm + \max\{\tfrac{1}{2}(m^2+m), n^2\}$ m.words.  (49)

Adding (48) and (49) gives the following total storage (per iteration):

$n(2n+2+3m) + 0.5m(m+3) + \max\{\tfrac{1}{2}(m^2+m), n^2\}$ m.words.  (50)


(2) Computation time: From Section D.1 results the following number of multiplications, additions, divisions, and square roots:

MULT: $1.5n^2(n+1) + 0.5m(m^2+3m-2) + 1.5nm(2+n+m)$,
ADD: $0.5n(3n^2+n-2) + 0.5m(m^2-1) + 0.5nm(2+3n+3m)$,
DIV: $2m-1$,
SQRT: $m$.  (51)

New per-sample partitioning filter (denoted NPSPF)

(1) Storage requirements:

(a) Permanent storage: NPSPF requires the permanent storage of the constant quantities $P_1$, $K_1$, $D$, $N$, $\Phi_1$, and of $\hat{x}$, $P$, $z$, i.e.,

$2.5n(n+1) + m(2n+1)$ m.words.  (52)

(b) Auxiliary storage: From Section D.2 of Appendix D results the following count of auxiliary storage requirements:

$n^2 + n$ m.words,  (53)

and the total storage (per iteration) is

$3.5n(n+1) + m(2n+1)$ m.words.  (54)

(2) Computation time: From Section D.2 results

MULT: $n(2.5n^2 + 5.5n - 2 + 2m)$,
ADD: $n(2.5(n^2-1) + 2(n+m))$,
DIV: $4n-2$,
SQRT: $2n$.  (55)

Next, the computational comparison between the two algorithms is given in Table 1, containing the total storage requirements, and in Table 2, containing the time requirements.

Table 1
Total storage requirements

Algorithm    Total storage (memory words)
Kalman       $2n^2 + 2n + 3nm + 0.5m^2 + 1.5m + \max\{\tfrac{1}{2}(m^2+m), n^2\}$
NPSPF        $3.5n^2 + 3.5n + 2nm + m$

The storage requirements of the above algorithms versus n are presented in Fig. 1, while those versus m are presented in Fig. 2. For these figures, typical parameter mixtures are used.

Remarks. (1) From Fig. 1 it becomes obvious that NPSPF is consistently less demanding in storage than the Kalman filter when $m \ge 0.5n$.


Table 2
Time requirements

Kalman:  MULT: $1.5n^3 + 1.5n^2 + 0.5m(m^2+3m-2) + 1.5nm(2+n+m)$;  ADD: $0.5n(3n^2+n-2) + 0.5m(m^2-1) + 0.5nm(2+3n+3m)$;  DIV: $2m-1$;  SQRT: $m$.

NPSPF:  MULT: $2.5n^3 + 5.5n^2 - 2n + 2nm$;  ADD: $2.5n^3 + 2n^2 - 2.5n + 2nm$;  DIV: $4n-2$;  SQRT: $2n$.

(2) Fig. 2 indicates that NPSPF is less sensitive than the Kalman filter to changes in the dimensionality of the measurement vector $z$.

The time behavior of each of the examined algorithms is shown in Figs. 3 and 4. These figures are produced using the concept of the total normalized number of operations, which is derived by replacing square roots, divisions, and multiplications by an equivalent number of additions. According to Cosmidis' work [5], the following equivalences are indicative of most computer installations with floating-point hardware capabilities:

SQRT = 25 add,  DIV = 6 add,  MULT = 4 add.
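Under these equivalences, the comparison of Figs. 3 and 4 can be reproduced numerically from the counts of Table 2, as in the following sketch (Python; an illustration, not the authors' code):

```python
def kf_equiv_ops(n, m):
    """Kalman filter: add-equivalent operations per iteration (Table 2,
    with MULT = 4, DIV = 6, SQRT = 25 additions)."""
    mult = 1.5*n**3 + 1.5*n**2 + 0.5*m*(m**2 + 3*m - 2) + 1.5*n*m*(2 + n + m)
    add  = 0.5*n*(3*n**2 + n - 2) + 0.5*m*(m**2 - 1) + 0.5*n*m*(2 + 3*n + 3*m)
    return 4*mult + add + 6*(2*m - 1) + 25*m

def npspf_equiv_ops(n, m):
    """NPSPF: add-equivalent operations per iteration (Table 2)."""
    mult = 2.5*n**3 + 5.5*n**2 - 2*n + 2*n*m
    add  = 2.5*n**3 + 2*n**2 - 2.5*n + 2*n*m
    return 4*mult + add + 6*(4*n - 2) + 25*(2*n)

# Crossover near m = 0.5n, e.g., for n = 20:
for m in (2, 10, 20):
    print(m, kf_equiv_ops(20, m), npspf_equiv_ops(20, m))
```

For $n = 20$, this gives the Kalman filter the advantage at $m = 2$ and NPSPF the advantage from $m = 10$ ($= 0.5n$) upward, consistent with remark (1) below.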

Remarks. (1) Fig. 3 shows the variations of the number of equivalent operations versus $n$ for $m = 0.1n$, $m = 0.5n$, and $m = 10n$. Fig. 3 indicates that the Kalman filter is the fastest for $m < 0.5n$. For $m \ge 0.5n$ the situation is reversed, namely the partitioning algorithm is the fastest.

Fig. 1. Storage requirements versus n.

Fig. 2. Storage requirements versus m.

Fig. 3. Number of equivalent operations versus n.

Fig. 4. Number of equivalent operations versus m.

(2) Note that NPSPF is less sensitive to changes in $m$ than the Kalman filter, as can be seen from Fig. 4.

In summary, the conclusion of this analysis is that NPSPF is the fastest, and the most economical in storage requirements, for $m \ge 0.5n$. It must be noted that, for time-varying models, NPSPF has greater computational requirements than the Kalman filter.

5. An application: The multisensor problem

The practicality and usefulness of the new partitioning filter, as well as the usefulness of the computational analysis of this paper, are demonstrated on the multisensor problem. The multisensor problem is of considerable practical importance in several applications such as geophysical data processing, radar, avionics (duo-mode systems), etc. A brief description of the multisensor problem is given by the following model:

$x(k+1) = \Phi x(k) + \Gamma u(k)$,  $x(0) = x_0$,  (56)
$z_i(k+1) = H_i x(k+1) + v_i(k+1)$,  $i = 1, 2, \ldots, r$,  (57)

where the measurement process $z_i(k+1)$ has dimensionality $m_i$,

$x(0) \sim N\{\hat{x}_0, P(0)\}$,  $u(k) \sim N\{0, Q\}$,  $v_i(k+1) \sim N\{0, R_i\}$,³

and $E\{v_i(k+1)v_l^T(k+1)\} = R_{il}$ for all $i, l$, $i \ne l$,

³ $N\{x, y\}$ denotes the Gaussian probability density function with mean $x$ and variance $y$.


or, in compact form,

$x(k+1) = \Phi x(k) + \Gamma u(k)$,  $x(0) = x_0$,  (58)
$z(k+1) = Hx(k+1) + v(k+1)$,  (59)

where

$z^T(k+1) = [z_1^T(k+1) \;\; z_2^T(k+1) \;\; \cdots \;\; z_r^T(k+1)]$

(the dimensionality of the vector $z(k+1)$ is $m = \sum_{i=1}^r m_i$),

$v^T(k+1) = [v_1^T(k+1) \;\; v_2^T(k+1) \;\; \cdots \;\; v_r^T(k+1)]$,  $v(k+1) \sim N(0, R)$,

$H^T = [H_1^T \;\; H_2^T \;\; \cdots \;\; H_r^T]$,

$R = \begin{bmatrix} R_1 & R_{12} & \cdots & R_{1r} \\ R_{12}^T & R_2 & & \vdots \\ \vdots & & \ddots & \\ R_{1r}^T & \cdots & & R_r \end{bmatrix}$.  (60)
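For concreteness, the compact model (58)-(60) can be assembled from the individual sensor models as in the following sketch (Python with NumPy; the helper name and interface are illustrative assumptions):

```python
import numpy as np

def stack_sensors(H_list, R_blocks):
    """Build the stacked H and R of eqs. (58)-(60).
    H_list[i] is the m_i x n matrix H_i; R_blocks[i][l] is the m_i x m_l
    cross-covariance R_il, with R_blocks[i][i] = R_i."""
    H = np.vstack(H_list)                                    # H^T = [H_1^T ... H_r^T]
    offs = np.cumsum([0] + [Hi.shape[0] for Hi in H_list])   # m = sum of the m_i
    R = np.zeros((offs[-1], offs[-1]))
    for i in range(len(H_list)):
        for l in range(len(H_list)):
            R[offs[i]:offs[i+1], offs[l]:offs[l+1]] = R_blocks[i][l]
    return H, R
```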

It is now obvious that the increase in the dimensionality of the measurement vector (and, of course, of the measurement noise vector too) is sufficient to bring the problem from the possible case of $m_i \le n$ to the case of $m > n$. In the following, two cases are considered.

Case 1 ($R_{il} \ne 0$ for all $i, l$, $i \ne l$). The results of the previous comparative section are still valid, but with $m = \sum_{i=1}^r m_i$.

Case 2 ($R_{il} = 0$ for all $i, l$, $i \ne l$). The storage requirements and the number of additions of the Kalman filter are changed as follows:

STORAGE: $2n^2 + 2n + 3nm + 0.5\sum_{i=1}^r m_i^2 + 1.5m + \max\{n^2, \tfrac{1}{2}(m^2+m)\}$ m.words,  (61)

ADDITIONS: $1.5n^3 + 0.5n^2 - n + 1.5n^2m + nm + 0.5m^3 + 0.5\sum_{i=1}^r m_i^2$,  (62)

while the requirements of NPSPF remain the same. Therefore, for the time requirements the change is unimportant, and the results of the previous general case are still valid. For the storage requirements, although the Kalman filter presents a remarkable improvement, the superiority of NPSPF over the Kalman filter is still maintained, but for a bigger value of the $m/n$ ratio. The latter becomes clear by examining the following arithmetic example. Suppose a system (for example, a set of radars) in which $m = \sum_{i=1}^r m_i = rM$, with $n = 10, 20$ and $M = 2$. Fig. 5 shows the variations of the storage requirements versus $r$ for the above algorithms. From Fig. 5 we observe that NPSPF, for $R_{il} = 0$ for all $i, l$, $i \ne l$, becomes more economical than the Kalman filter for values of the $m/n$ ratio greater than the one corresponding to the case of $R_{il} \ne 0$ for all $i, l$, $i \ne l$. In the worst case, the value of this ratio is about 0.6 (when $n$ is large) and decreases as $n$ decreases.

In this paper, the multisensor estimation problem is discussed via the global new per-sample partitioning algorithm (NPSPF), considering simultaneous processing of all measurements $z_1, z_2, \ldots, z_r$ by using only one processor.

Fig. 5. Storage requirements versus r for multisensor estimation problems.

It is of interest to compare the computational requirements of the NPSPF using global processing of the measurements (namely, using the augmented state model given above) to the computational requirements of the NPSPF when each $z_i$ is processed independently, e.g., by a 'local' processor, and the 'local' estimates are then combined optimally to yield the global estimate. The independent processing of each measurement $z_i$ leads to an estimate $\hat{x}_i$ of $x$ (with covariance matrix $P_i$). A combination of these independent estimates $\hat{x}_i$ leads to a best estimate $\hat{x}$ of $x$, which is given by the following equation (an extension of a result given by Deutch [7]):

$\hat{x} = P\sum_i P_i^{-1}\hat{x}_i$,  (63)

where the error covariance matrix $P$ is given by

$P = \Big(\sum_i P_i^{-1}\Big)^{-1}$.  (64)
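A direct sketch of the combination rule (63)-(64) (Python with NumPy; the helper name is hypothetical):

```python
import numpy as np

def combine_local_estimates(xhats, Ps):
    """Optimal combination of independent local estimates, eqs. (63)-(64)."""
    info = sum(np.linalg.inv(Pi) for Pi in Ps)          # sum of P_i^{-1}
    P = np.linalg.inv(info)                             # eq. (64)
    xhat = P @ sum(np.linalg.inv(Pi) @ xi
                   for xi, Pi in zip(xhats, Ps))        # eq. (63)
    return xhat, P
```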

The elemental piecewise solutions $\hat{x}_i$ (and $P_i$) may be computed via the NPSPF either in parallel or serially. In parallel processing, $r$ processors are used, and the computational requirements of the NPSPF for the computation of the global estimate $\hat{x}$ (using all the measurements $z_1, z_2, \ldots, z_r$) are given below (for simplicity, $m_i = M$ is assumed for $i = 1, 2, \ldots, r$):

Storage requirements:

$r(3.5n^2 + 3.5n + 2nM + M) + 0.5n(n+3)$ m.words.  (65)


Time requirements:

MULT: $3n^3 + 8n^2 - 3n + 2nM$,  ADD: $3n^3 + 3n^2 - 4n + 2nM + 0.5n\beta(n+3)$,
DIV: $6n - 3$,  SQRT: $3n$,  (66)

where the integer $\beta$ is selected to satisfy the condition $2^{\beta-1} < r \le 2^{\beta}$.

In serial processing, only one processor is used, and the computational requirements of the NPSPF for the computation of the global estimate $\hat{x}$ (using all the measurements $z_1, z_2, \ldots, z_r$) are given below (where $m_i = M$ for $i = 1, 2, \ldots, r$):

Storage requirements:

$r(2.5n^2 + 2.5n + 2nM + M) + 0.5n(3n+5)$ m.words.  (67)

Time requirements:

MULT: $r(2.5n^3 + 5.5n^2 - 2n + 2nM) + 0.5n^3 + 2.5n^2 - n$,
ADD: $r(2.5n^3 + 2.5n^2 - n + 2nM) + 0.5n^3 + n^2 - 1.5n$,
DIV: $(2r+1)(2n-1)$,  SQRT: $n(2r+1)$.  (68)

Table 3 gives the variations of the ratios $S_s$, $S_p$, $T_s$, $T_p$ versus $r$, for $n = 10$, $M = 2$, where $S$ and $T$ denote the ratios of the storage requirements and the time requirements, respectively, of the NPSPF with serial or parallel processing of each measurement $z_i$ (as indicated by the subscript s or p) to those of simultaneous processing of all measurements $z_1, z_2, \ldots, z_r$ using the augmented state model (when $R_{il} = 0$ for all $i \ne l$) with $m = rM$.

Table 3
Variations of the storage ratios $S_s$, $S_p$ and the time ratios $T_s$, $T_p$ versus $r$, when $n = 10$, $M = 2$

r     S_p    T_p    S_s    T_s
10    5.38   1.13   4.15    9.23
20    7.02   1.02   5.32   16.37
30    7.82   0.93   5.98   22.18
50    8.62   0.78   6.45   31.08

From Table 3 it is obvious that global per-sample processing is preferable to any other per-sample processing, because it has the least computational requirements, except in the case of parallel processing, whose time requirements drop below those of global processing for $r > 20$.
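The entries of Table 3 can be recomputed from equations (54)-(55) and (65)-(68) together with the add-equivalences of Section 4, as in the following sketch (Python; an illustration that reproduces the tabulated ratios to within the table's rounding):

```python
import math

def equiv(mult, add, div, sqrt):
    """Total add-equivalent operations (MULT = 4, DIV = 6, SQRT = 25 adds [5])."""
    return 4*mult + add + 6*div + 25*sqrt

def global_npspf(n, m):
    """Per-iteration storage (54) and equivalent time (55) of the global NPSPF."""
    stor = 3.5*n*(n + 1) + m*(2*n + 1)
    t = equiv(2.5*n**3 + 5.5*n**2 - 2*n + 2*n*m,
              2.5*n**3 + 2*n**2 - 2.5*n + 2*n*m, 4*n - 2, 2*n)
    return stor, t

def parallel_npspf(n, M, r):
    """Storage (65) and equivalent time (66); beta satisfies 2^(beta-1) < r <= 2^beta."""
    beta = math.ceil(math.log2(r))
    stor = r*(3.5*n**2 + 3.5*n + 2*n*M + M) + 0.5*n*(n + 3)
    t = equiv(3*n**3 + 8*n**2 - 3*n + 2*n*M,
              3*n**3 + 3*n**2 - 4*n + 2*n*M + 0.5*n*beta*(n + 3),
              6*n - 3, 3*n)
    return stor, t

def serial_npspf(n, M, r):
    """Storage (67) and equivalent time (68)."""
    stor = r*(2.5*n**2 + 2.5*n + 2*n*M + M) + 0.5*n*(3*n + 5)
    t = equiv(r*(2.5*n**3 + 5.5*n**2 - 2*n + 2*n*M) + 0.5*n**3 + 2.5*n**2 - n,
              r*(2.5*n**3 + 2.5*n**2 - n + 2*n*M) + 0.5*n**3 + n**2 - 1.5*n,
              (2*r + 1)*(2*n - 1), n*(2*r + 1))
    return stor, t

n, M = 10, 2
for r in (10, 20, 30, 50):
    sg, tg = global_npspf(n, r*M)
    sp, tp = parallel_npspf(n, M, r)
    ss, ts = serial_npspf(n, M, r)
    print(f"r={r}: S_p={sp/sg:.2f} T_p={tp/tg:.2f} S_s={ss/sg:.2f} T_s={ts/tg:.2f}")
```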

Appendix A

In this appendix, the well-known Kalman filter [9] and the Lainiotis-Andrisani partitioning filter [18, 19] are presented.


A.1. Kalman filter

The optimal mean-square-error estimate $\hat{x}(k+1/k+1)$ and the variance of the estimate $P(k+1/k+1)$ are given by

$\hat{x}(k+1/k+1) = \Phi(k+1,k)\hat{x}(k/k) + K(k+1)\tilde{z}(k+1/k)$,  (A.1)
$\tilde{z}(k+1/k) = z(k+1) - H(k+1)\Phi(k+1,k)\hat{x}(k/k)$,  (A.2)
$K(k+1) = P(k+1/k)H^T(k+1)P_z^{-1}(k+1/k)$,  (A.3)
$P_z(k+1/k) = H(k+1)P(k+1/k)H^T(k+1) + R(k+1)$,  (A.4)
$P(k+1/k) = \Phi(k+1,k)P(k/k)\Phi^T(k+1,k) + \Gamma(k)Q(k)\Gamma^T(k)$,  (A.5)
$P(k+1/k+1) = [I - K(k+1)H(k+1)]P(k+1/k)$,  (A.6)

with initial conditions $\hat{x}(0/0) = \hat{x}(0)$ and $P(0/0) = P(0)$.
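A minimal sketch of one iteration of (A.1)-(A.6) (Python with NumPy; the function name is an illustrative assumption):

```python
import numpy as np

def kalman_step(xhat, P, z, Phi, Gamma, H, Q, R):
    """One iteration of the Kalman filter, eqs. (A.1)-(A.6)."""
    Ppred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T       # (A.5)
    Pz = H @ Ppred @ H.T + R                            # (A.4)
    K = Ppred @ H.T @ np.linalg.inv(Pz)                 # (A.3)
    ztilde = z - H @ Phi @ xhat                         # (A.2), the innovation
    xhat_new = Phi @ xhat + K @ ztilde                  # (A.1)
    P_new = (np.eye(P.shape[0]) - K @ H) @ Ppred        # (A.6)
    return xhat_new, P_new
```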

A.2. Lainiotis-Andrisani partitioning filter [18, 19]

In the Lainiotis-Andrisani partitioning filter, model partitioning consists of partitioning the initial state vector $x(0)$ and the process noise vector $u(j)$ into two independent random vectors, namely

$x(0) = x_n(0) + x_r(0)$,  (A.7)
$u(j) = u_n(j) + u_r(j)$,  (A.8)

yielding for the means and covariances

$\hat{x}(0) = \hat{x}_n(0) + \hat{x}_r(0)$,  (A.9)
$P(0) = P_n(0) + P_r(0)$,  (A.10)
$Q(j) = Q_n(j) + Q_r(j)$.  (A.11)

The corresponding partitioning algorithm is given by the following equations:

$\hat{x}(k+1/k+1) = \hat{x}_n(k+1/k+1) + C_o(k+1)\hat{U}_o(k+1/k+1)$,  (A.12)
$P(k+1/k+1) = P_n(k+1/k+1) + C_o(k+1)P_o(k+1/k+1)C_o^T(k+1)$,  (A.13)
$\hat{U}_o(k+1/k+1) = E\{U_o(k+1)/\Lambda_{k+1}\}$,⁴  (A.14)
$P_o(k+1/k+1) = \mathrm{Var}\{U_o(k+1)/\Lambda_{k+1}\}$,  (A.15)
$U_o^T(k+1) = [x_r^T(0) \;\; u_r^T(0) \;\; \cdots \;\; u_r^T(k)]$,  (A.16)
$E\{U_o^T(k+1)\} = [\hat{x}_r^T(0) \;\; 0 \;\; \cdots \;\; 0] = \bar{U}_o^T(k+1)$,  (A.17)

where

$\mathrm{Var}\{U_o(k+1)\} = \begin{bmatrix} P_r(0) & & & 0 \\ & Q_r(0) & & \\ & & \ddots & \\ 0 & & & Q_r(k) \end{bmatrix} = P_o(k+1)$.  (A.18)

The optimal estimate of $U_o(k+1)$ and its error covariance are given by

$\hat{U}_o(k+1/k+1) = P_o(k+1/k+1)[M_o(k+1,0) + P_o^{-1}(k+1)\bar{U}_o(k+1)]$,  (A.19)
$P_o(k+1/k+1) = [O_o(k+1,0) + P_o^{-1}(k+1)]^{-1}$.  (A.20)

The auxiliary equations are given by

$M_o(k+1,0) = \begin{bmatrix} M_o(k,0) \\ 0_{p\times 1} \end{bmatrix} + D_o^T(k+1)P_{z_n}^{-1}(k+1/k)\tilde{z}_n(k+1/k)$,  $M_o(0,0) = 0_{n\times 1}$,⁵  (A.21)

$O_o(k+1,0) = \begin{bmatrix} O_o(k,0) & 0_{(n+kp)\times p} \\ 0_{p\times(n+kp)} & 0_{p\times p} \end{bmatrix} + D_o^T(k+1)P_{z_n}^{-1}(k+1/k)D_o(k+1)$,  $O_o(0,0) = 0_{n\times n}$,  (A.22)

where

$C_o(k+1) = [\Phi_n(k+1,0) \;\; C_n(k+1)]$,  (A.23)
$D_o(k+1) = [H(k+1)\Phi(k+1,k)\Phi_n(k,0) \;\; D_n(k+1)]$,  (A.24)
$C_n(1) = A(1,1) = [I - K_n(1)H(1)]\Gamma(0)$,  (A.25)
$C_n(k+1) = [A(k+1,1) \;\; A(k+1,2) \;\; \cdots \;\; A(k+1,k+1)]$,  (A.26)
$A(k+1,i) = \Phi_n(k+1,i)[I - K_n(i)H(i)]\Gamma(i-1)$,  (A.27)
$D_n(1) = H(1)\Gamma(0)$,  (A.28)
$D_n(k+1) = [H(k+1)\Phi(k+1,k)C_n(k) \;\; H(k+1)\Gamma(k)]$;  (A.29)

and finally the imbedded Kalman filter is given by

$\hat{x}_n(k+1/k+1) = \Phi(k+1,k)\hat{x}_n(k/k) + K_n(k+1,k)\tilde{z}_n(k+1/k)$,  (A.30)
$\tilde{z}_n(k+1/k) = z(k+1) - H(k+1)\Phi(k+1,k)\hat{x}_n(k/k)$,  (A.31)
$K_n(k+1,k) = P_n(k+1/k)H^T(k+1)P_{z_n}^{-1}(k+1/k)$,  (A.32)
$P_{z_n}(k+1/k) = H(k+1)P_n(k+1/k)H^T(k+1) + R(k+1)$,  (A.33)
$P_n(k+1/k) = \Phi(k+1,k)P_n(k/k)\Phi^T(k+1,k) + \Gamma(k)Q_n(k)\Gamma^T(k)$,  (A.34)
$P_n(k+1/k+1) = [I - K_n(k+1,k)H(k+1)]P_n(k+1/k)$,  (A.35)
$\Phi_n(k+1,k) = [I - K_n(k+1,k)H(k+1)]\Phi(k+1,k)$,  (A.36)
$\Phi_n(k+1,0) = \Phi_n(k+1,k)\Phi_n(k,0)$,  (A.37)

with initial conditions $\hat{x}_n(0/0) = \hat{x}_n(0)$, $P_n(0/0) = P_n(0)$, and $\Phi_n(0,0) = I$.

⁴ Here, $\Lambda_{k+1} = \{z(1), z(2), \ldots, z(k+1)\}$.
⁵ $0_{n\times p}$ is a null matrix of dimensionality $n \times p$.

Appendix B. Proof of the Theorem

The proof follows the proof of the LPSPF [17], as well as the proof in [22]. Namely, when partitioning the data interval $\{t_0, t_n\}$ (where the data interval consists of the measurements $\Lambda_k = \{z(0), z(1), \ldots, z(k)\}$, with $z(k) = z(t_k)$ and $t_0 \le t_k \le t_n$) into the nonoverlapping subintervals $\{t_i, t_j\}$, we then have that

$\hat{x}(j,0) = \hat{x}_n(j,i) + C_o(j)\hat{U}_o(j,i)$,  (B.1)
$P(j,0) = P_n(j,i) + C_o(j)P_o(j,i)C_o^T(j)$,  (B.2)

where

$\hat{x}(j,0) \equiv \hat{x}(t_j/t_j, t_0)$,  $P(j,0) \equiv P(t_j/t_j, t_0)$,  $C_o(j) \equiv C_o(t_j)$,
$\hat{x}_n(j,i) \equiv \hat{x}_n(t_j/t_j, t_i)$,  $P_n(j,i) \equiv P_n(t_j/t_j, t_i)$,
$\hat{U}_o(j,i) \equiv \hat{U}_o(t_j/t_j, t_i)$,  $P_o(j,i) \equiv P_o(t_j/t_j, t_i)$,
$P_o(i,0) \equiv P_o(t_i)$,  $\hat{U}_o(i,0) \equiv \bar{U}_o(t_i)$,

and the nominal initial conditions at $t_i$ are given by

$\hat{x}_n(i,i) \equiv \hat{x}_n(t_i)$,  $P_n(i,i) \equiv P_n(t_i)$,  $Q_n(i) \equiv Q_n(t_i)$.

The remaining initial conditions $\hat{x}_r(i,0)$, $P_r(i,0)$, and $Q_r(i)$ are given by

$\hat{x}_r(i,0) \equiv \hat{x}(i,0) - \hat{x}_n(i,i)$,  (B.3)
$P_r(i,0) \equiv P(i,0) - P_n(i,i)$,  (B.4)
$Q_r(i) \equiv Q(i) - Q_n(i)$.  (B.5)

Moreover, the quantities $\hat{x}_n(j,i)$, $C_o(j)$, $\hat{U}_o(j,i)$, and $P_o(j,i)$ are given by equations (A.30), (A.23), (A.19), and (A.20), respectively (where now $t_k \equiv t_j$, $t_0 \equiv t_i$, $t_{i+1} \equiv t_j$, $\hat{x}_r \equiv \hat{x}_r(i,0)$, and $P_r \equiv P_r(i,0)$), as follows:

$\hat{x}_n(j,i) = \Phi(j,i)\hat{x}_n(i,i) + K_n(j,i)\tilde{z}_n(j,i)$,  (B.6)
$C_o(j) = [\Phi_n(j,i) \;\; \{I - K_n(j,i)H(j)\}\Gamma(i)]$,  (B.7)
$\hat{U}_o(j,i) = P_o(j,i)[M_o(j,i) + P_o^{-1}(i,0)\bar{U}_o(i,0)]$,  (B.8)
$P_o(j,i) = [O_o(j,i) + P_o^{-1}(i,0)]^{-1}$,  (B.9)

where

$\tilde{z}_n(j,i) = z(j) - H(j)\Phi(j,i)\hat{x}_n(i,i)$,  (B.10)
$K_n(j,i) = P_n(j/i)H^T(j)P_{z_n}^{-1}(j/i)$,  (B.11)
$P_{z_n}(j/i) = H(j)P_n(j/i)H^T(j) + R(j)$,  (B.12)
$P_n(j/i) = \Phi(j,i)P_n(i,i)\Phi^T(j,i) + \Gamma(i)Q_n(i)\Gamma^T(i)$,  (B.13)
$P_n(j,i) = [I - K_n(j,i)H(j)]P_n(j/i)$,  (B.14)
$\Phi_n(j,i) = [I - K_n(j,i)H(j)]\Phi(j,i)$,  (B.15)

$\bar{U}_o(i,0) = \begin{bmatrix} \hat{x}_r(i,0) \\ 0_{p\times 1} \end{bmatrix}$,  (B.16)

$P_o(i,0) = \begin{bmatrix} P_r(i,0) & 0_{n\times p} \\ 0_{p\times n} & Q_r(i) \end{bmatrix}$,  (B.17)

$M_o(j,i) = D_o^T(j)P_{z_n}^{-1}(j/i)\tilde{z}_n(j,i)$,  (B.18)
$O_o(j,i) = D_o^T(j)P_{z_n}^{-1}(j/i)D_o(j)$,  (B.19)
$D_o(j) = [H(j)\Phi(j,i) \;\; H(j)\Gamma(i)]$.  (B.20)

The above operations are repeated for each subinterval until $t_j = t_n$. Next, if we take $\hat{x}_n(i,i) = 0$ and $P_n(i,i) = 0$ at each instant $t_i$, the following algorithm results:

$\hat{x}(k+1,0) = \hat{x}_n(k+1,k) + C_o(k+1)[P_o^{-1}(k,0) + O_o(k+1,k)]^{-1}[M_o(k+1,k) + P_o^{-1}(k,0)\bar{U}_o(k,0)]$,  (B.21)
$P(k+1,0) = P_n(k+1,k) + C_o(k+1)[P_o^{-1}(k,0) + O_o(k+1,k)]^{-1}C_o^T(k+1)$,  (B.22)

where

$\bar{U}_o(k,0) = \begin{bmatrix} \hat{x}(k,0) \\ 0_{p\times 1} \end{bmatrix}$,  $P_o(k,0) = \begin{bmatrix} P(k,0) & 0_{n\times p} \\ 0_{p\times n} & Q_r(k) \end{bmatrix}$,  (B.23)

$M_o(k+1,k) = D_o^T(k+1)A(k+1)z(k+1)$,  (B.24)
$D_o(k+1) = [H(k+1)\Phi(k+1,k) \;\; H(k+1)\Gamma(k)]$,  (B.25)
$C_o(k+1) = [\Phi_n(k+1,k) \;\; C_n(k+1,k)]$,  (B.26)
$O_o(k+1,k) = D_o^T(k+1)A(k+1)D_o(k+1)$,  (B.27)
$\hat{x}_n(k+1,k) = K_n(k+1,k)z(k+1)$,  (B.28)
$\Phi_n(k+1,k) = [I - K_n(k+1,k)H(k+1)]\Phi(k+1,k)$,  (B.29)

where $A(k+1)$, $C_n(k+1,k)$, $P_n(k+1,k)$, and $K_n(k+1,k)$ are given by equations (22), (14), (15), and (13), respectively. Equation (B.27) can be written in the following form:

$O_o(k+1,k) = \begin{bmatrix} O_{n1}(k+1,k) & O_{n3}(k+1,k) \\ O_{n3}^T(k+1,k) & O_{n2}(k+1,k) \end{bmatrix}$,  (B.30)

where the quantities $O_{n1}(k+1,k)$, $O_{n2}(k+1,k)$, and $O_{n3}(k+1,k)$ are given by equations (19)-(21), respectively. The quantity $[P_o^{-1}(k,0) + O_o(k+1,k)]^{-1}$, using the inversion lemma, can be written as follows:

$[P_o^{-1}(k,0) + O_o(k+1,k)]^{-1} = \begin{bmatrix} S_1(k+1) & S_3(k+1) \\ S_3^T(k+1) & S_2(k+1) \end{bmatrix}$,  (B.31)

where

$S_1(k+1) = [P^{-1}(k,0) + N(k+1,k)]^{-1}$,  (B.32)
$S_2(k+1) = S_0(k+1) - L(k+1)S_3(k+1)$,  (B.33)
$S_3(k+1) = -S_1(k+1)L^T(k+1)$,  (B.34)
$S_0(k+1) = [Q_r^{-1}(k) + O_{n2}(k+1,k)]^{-1}$,  (B.35)

where $L(k+1)$, $N(k+1,k)$, and $O_{n2}(k+1,k)$ are given by equations (16), (17), and (20), respectively.

Also, the quantity $[M_o(k+1,k) + P_o^{-1}(k,0)\bar{U}_o(k,0)]$ can be written as follows:

$[M_o(k+1,k) + P_o^{-1}(k,0)\bar{U}_o(k,0)] = \begin{bmatrix} M_{n1}(k+1,k) + P^{-1}(k,0)\hat{x}(k,0) \\ M_{n2}(k+1,k) \end{bmatrix}$,  (B.36)

where

$M_{n1}(k+1,k) = \{H(k+1)\Phi(k+1,k)\}^T A(k+1)z(k+1)$,  (B.37)
$M_{n2}(k+1,k) = \{H(k+1)\Gamma(k)\}^T A(k+1)z(k+1)$.  (B.38)

From equations (B.21) and (B.22), according to (B.31) and (B.36)-(B.38), there result

$\hat{x}(k+1,0) = \hat{x}_n(k+1,k) + T_1(k+1,k)[M_{n1}(k+1,k) + P^{-1}(k,0)\hat{x}(k,0)] + T_2(k+1,k)M_{n2}(k+1,k)$  (B.39)

and

$P(k+1,0) = P_n(k+1,k) + T_1(k+1,k)\Phi_n^T(k+1,k) + T_2(k+1,k)C_n^T(k+1,k)$,  (B.40)

where

$T_1(k+1,k) = \Phi_n(k+1,k)S_1(k+1) + C_n(k+1,k)S_3^T(k+1)$,  (B.41)
$T_2(k+1,k) = \Phi_n(k+1,k)S_3(k+1) + C_n(k+1,k)S_2(k+1)$.  (B.42)

Because of

$T_1(k+1,k) = [\Phi_n(k+1,k) - C_n(k+1,k)L(k+1)]S_1(k+1) = \Phi_1(k+1,k)S_1(k+1)$  (B.43)

and

$T_2(k+1,k) = C_n(k+1,k)S_0(k+1) + [\Phi_n(k+1,k) - C_n(k+1,k)L(k+1)]S_3(k+1) = C_n(k+1,k)S_0(k+1) - \Phi_1(k+1,k)S_1(k+1)L^T(k+1)$,  (B.44)

where

$\Phi_1(k+1,k) = \Phi_n(k+1,k) - C_n(k+1,k)L(k+1) = [I - K_1(k+1,k)H(k+1)]\Phi(k+1,k)$  (B.45)

and $K_1(k+1,k)$ is given by (12), equations (8), (10), and (18) finally result from equations (B.39), (B.43), (B.44), and (B.45), while equation (9) results from equations (B.40) and (B.41)-(B.45).

Appendix C. Description of algorithms for time-invariant systems

C.1. Kalman filter

Initial conditions:
(1) $\hat{x}(0/0) = \hat{x}(0)$;  (2) $P(0/0) = P(0)$.

Precomputations:
(1) $Q_1 = \Gamma Q \Gamma^T$.

Per-iteration computations:
(1) $P(k+1/k) = \Phi P(k/k)\Phi^T + Q_1$,
(2) $P_z^{-1}(k+1/k) = [HP(k+1/k)H^T + R]^{-1}$,
(3) $K(k+1) = P(k+1/k)H^T P_z^{-1}(k+1/k)$,
(4) $P(k+1/k+1) = P(k+1/k) - K(k+1)HP(k+1/k)$,
(5) $\hat{x}(k+1/k) = \Phi\hat{x}(k/k)$,
(6) $\hat{x}(k+1/k+1) = \hat{x}(k+1/k) + K(k+1)[z(k+1) - H\hat{x}(k+1/k)]$.

C.2. New per-sample partitioning filter (NPSPF)

Initial conditions:
(1) $\hat{x}(0,0) = \hat{x}(0)$,  (2) $P(0,0) = P(0)$,  (3) $Q_n$,  (4) $Q_r$, with $Q_n + Q_r = Q$.

Precomputations:
(1) $Q_1 = \Gamma Q_n \Gamma^T$,
(2) $A = [(HQ_1)H^T + R]^{-1}$,
(3) $K_n = (HQ_1)^T A$,
(4) $P_n = Q_1 - K_n(HQ_1)$,
(5) $C_n = \Gamma - K_n(H\Gamma)$,
(6) $O_1 = (H\Phi)^T A(H\Phi)$,
(7) $O_2 = (H\Gamma)^T A(H\Gamma)$,
(8) $O_3 = (H\Phi)^T A(H\Gamma)$,
(9) $S_0 = [Q_r^{-1} + O_2]^{-1}$,
(10) $K_1 = K_n + C_n S_0 (H\Gamma)^T A$,
(11) $L = S_0 O_3^T$,
(12) $N = O_1 - O_3 L$,
(13) $\Phi_1 = (I - K_1 H)\Phi$,
(14) $C_1 = C_n S_0 C_n^T$,
(15) $P_1 = P_n + C_1$.

Per-iteration computations:
(1) $M(k+1,k) = Dz(k+1)$, where $D = (\Phi - \Gamma L)^T H^T A$,
(2) $\hat{x}_1(k+1,k) = K_1 z(k+1)$,
(3) $S(k+1) = [P^{-1}(k,0) + N]^{-1}$,
(4) $\hat{x}(k+1,0) = \hat{x}_1(k+1,k) + \Phi_1 S(k+1)[M(k+1,k) + P^{-1}(k,0)\hat{x}(k,0)]$,
(5) $P(k+1,0) = P_1 + \Phi_1 S(k+1)\Phi_1^T$.
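Since the NPSPF is an alternative realization of the same optimal estimate, the two recipes above can be cross-checked numerically. The following sketch (Python with NumPy; the random model is an arbitrary illustration, not from the paper) runs C.1 and C.2 side by side on the same measurement sequence; the two estimates and covariances agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 4, 2, 2
Phi = 0.9*np.eye(n) + 0.05*rng.standard_normal((n, n))
Gamma = rng.standard_normal((n, p))
H = rng.standard_normal((m, n))
Q, R, P0 = np.eye(p), np.eye(m), np.eye(n)
Qn, Qr = 0.5*Q, 0.5*Q                      # designer's split, Qn + Qr = Q

# C.2 precomputations (1)-(15)
Q1 = Gamma @ Qn @ Gamma.T
A = np.linalg.inv(H @ Q1 @ H.T + R)
Kn = Q1 @ H.T @ A                          # (HQ1)^T A, with Q1 symmetric
Pn = Q1 - Kn @ H @ Q1
Cn = Gamma - Kn @ H @ Gamma
O1, O2 = (H@Phi).T @ A @ (H@Phi), (H@Gamma).T @ A @ (H@Gamma)
O3 = (H@Phi).T @ A @ (H@Gamma)
S0 = np.linalg.inv(np.linalg.inv(Qr) + O2)
K1 = Kn + Cn @ S0 @ (H @ Gamma).T @ A
L = S0 @ O3.T
N = O1 - O3 @ L
Phi1 = (np.eye(n) - K1 @ H) @ Phi
P1 = Pn + Cn @ S0 @ Cn.T
D = (Phi - Gamma @ L).T @ H.T @ A

Qkf = Gamma @ Q @ Gamma.T                  # C.1 precomputation
xk, Pk = np.zeros(n), P0.copy()            # Kalman filter state
xp, Pp = np.zeros(n), P0.copy()            # NPSPF state

for _ in range(25):
    z = rng.standard_normal(m)             # arbitrary measurement sequence
    # Kalman filter, C.1 steps (1)-(6)
    Ppred = Phi @ Pk @ Phi.T + Qkf
    K = Ppred @ H.T @ np.linalg.inv(H @ Ppred @ H.T + R)
    xk = Phi @ xk + K @ (z - H @ Phi @ xk)
    Pk = Ppred - K @ H @ Ppred
    # NPSPF, C.2 per-iteration steps (1)-(5)
    Pinv = np.linalg.inv(Pp)
    S = np.linalg.inv(Pinv + N)
    xp, Pp = K1 @ z + Phi1 @ S @ (D @ z + Pinv @ xp), P1 + Phi1 @ S @ Phi1.T

print(np.max(np.abs(xk - xp)), np.max(np.abs(Pk - Pp)))  # expected ~1e-12
```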


Appendix D

D.1. Kalman filter implementation

Per-iteration operations and their counts:

(1) $P_2 = \Phi P$:  MULT $n^3$;  ADD $n^3 - n^2$.
(2) $P_3 = P_2\Phi^T$:  MULT $0.5(n^3+n^2)$;  ADD $0.5(n^3-n)$;  stored in $P$.
(3) $P_1 = P_3 + Q_1$:  ADD $0.5(n^2+n)$;  stored in $P$.
(4) $H_1 = HP_1$:  MULT $n^2m$;  ADD $n^2m - nm$.
(5) $H_2 = H_1H^T$:  MULT $0.5(nm^2+nm)$;  ADD $0.5(nm^2+nm-m^2-m)$.
(6) $H_3 = H_2 + R$:  ADD $0.5(m^2+m)$.
(7) $A = H_3^{-1}$:  MULT $0.5(m^3+3m^2-2m)$;  ADD $0.5(m^3-m)$;  DIV $2m-1$;  SQRT $m$;  stored in place.
(8) $K = H_1^T A$:  MULT $nm^2$;  ADD $nm^2 - nm$.
(9) $P_4 = KH_1$:  MULT $0.5(n^2m+nm)$;  ADD $0.5(n^2m+nm-n^2-n)$.
(10) $P = P_1 - P_4$:  ADD $0.5(n^2+n)$;  stored in $P$.
(11) $\hat{x}_1 = \Phi\hat{x}$:  MULT $n^2$;  ADD $n^2-n$;  stored in $\hat{x}$.
(12) $z_2 = H\hat{x}_1$:  MULT $nm$;  ADD $nm-m$;  stored in $z$.
(13) $z_1 = z - z_2$:  ADD $m$;  stored in $z$.
(14) $\hat{x}_2 = Kz_1$:  MULT $nm$;  ADD $nm-m$.
(15) $\hat{x} = \hat{x}_1 + \hat{x}_2$:  ADD $n$;  stored in $\hat{x}$.

Auxiliary storage: $H_1$ and $K$ ($nm$ m.words each), plus one scratch array of $\max\{n^2, \tfrac{1}{2}(m^2+m)\}$ m.words shared by $P_2$, $H_2$/$H_3$, and $P_4$; summing gives (49).

D.2. NPSPF implementation

Per-iteration operations and their counts:

(1) $P_0 = P^{-1}$:  MULT $0.5(n^3+3n^2-2n)$;  ADD $0.5(n^3-n)$;  DIV $2n-1$;  SQRT $n$;  stored in $P$.
(2) $M_0 = P_0\hat{x}$:  MULT $n^2$;  ADD $n^2-n$.
(3) $M = Dz$:  MULT $nm$;  ADD $nm-n$.
(4) $M_1 = M_0 + M$:  ADD $n$.
(5) $\hat{x}_1 = K_1z$:  MULT $nm$;  ADD $nm-n$;  stored in $\hat{x}$.
(6) $S_1 = N + P_0$:  ADD $0.5(n^2+n)$;  stored in $P$.
(7) $S = S_1^{-1}$:  MULT $0.5(n^3+3n^2-2n)$;  ADD $0.5(n^3-n)$;  DIV $2n-1$;  SQRT $n$;  stored in $P$.
(8) $Y = \Phi_1 S$:  MULT $n^3$;  ADD $n^3-n^2$.
(9) $\hat{x}_3 = YM_1$:  MULT $n^2$;  ADD $n^2-n$.
(10) $\hat{x} = \hat{x}_1 + \hat{x}_3$:  ADD $n$;  stored in $\hat{x}$.
(11) $P_3 = Y\Phi_1^T$:  MULT $0.5(n^3+n^2)$;  ADD $0.5(n^3-n)$;  stored in $P$.
(12) $P = P_1 + P_3$:  ADD $0.5(n^2+n)$;  stored in $P$.

Auxiliary storage: one $n \times n$ array for $Y$ and one $n$-vector for $M_0$/$M_1$, i.e., $n^2 + n$ m.words, which is (53).

References

[1] D. Andrisani and G.F. Gau, "Multistage linear estimation using partitioning", IEEE Trans. Automat. Control, Vol. AC-30, No. 2, February 1985, pp. 182-184.
[2] D. Andrisani, G.F. Gau and D.G. Lainiotis, "Identification of human pilot model via adaptive estimation", AIIJ, 1983.
[3] J. Aquilar-Martin, M. Balssa and R. Lopez de Mantaras, "Estimation recursive d'une partition. Exemples d'apprentissage dans R^h et I", Seminaires INRIA: Classification Automatique et Perception par Ordinateur, INRIA, 1980.
[4] R.L. Burden, J.D. Faires and A.C. Reynolds, Numerical Analysis, Prindle, Weber and Schmidt, Boston, MA, 1978.
[5] E.T. Cosmidis, "Analysis of algorithms for discrete stochastic estimation", Ph.D. Thesis, State Univ. of New York at Buffalo, 1980.
[6] E.T. Cosmidis, "Comparison study of the Kalman and the Lainiotis filters", Proc. Melecon '83, IEEE Press, Athens, 1983, C4.06.
[7] R. Deutch, Estimation Theory, Prentice-Hall, Englewood Cliffs, NJ, 1965.
[8] B.J. Eurlich, D. Andrisani and D.G. Lainiotis, "Partitioning identification algorithms", IEEE Trans. Automat. Control, Vol. AC-25, No. 3, June 1980, pp. 521-528.
[9] R.E. Kalman, "A new approach to linear filtering and prediction problems", Trans. ASME, J. Basic Engng., Vol. 82, Part D, March 1960, pp. 33-45.
[10] D.G. Lainiotis, "Optimal adaptive estimation: Structure and parameter adaptation", IEEE Trans. Automat. Control, Vol. AC-16, No. 2, April 1971, pp. 160-170.
[11] D.G. Lainiotis, "Optimal nonlinear estimation", Internat. J. Control, Vol. 14, December 1971, pp. 1137-1148.
[12] D.G. Lainiotis, "Estimation: A brief survey", J. Inform. Sci., Vol. 7, 1974, pp. 191-202.
[13] D.G. Lainiotis, "Partitioned estimation algorithms, Part II: Linear estimation", J. Inform. Sci., Vol. 7, No. 3, July 1974, pp. 317-340.
[14] D.G. Lainiotis, "Partitioned linear estimation algorithms: Discrete case", IEEE Trans. Automat. Control, Vol. AC-20, No. 3, April 1975, pp. 255-257.
[15] D.G. Lainiotis, "Partitioning: A unifying framework for adaptive systems, Part I: Estimation", Proc. IEEE, Vol. 64, No. 8, August 1976, pp. 1126-1143.
[16] D.G. Lainiotis, "Partitioning: A unifying framework for adaptive systems, Part II: Control", Proc. IEEE, Vol. 64, No. 8, August 1976, pp. 1182-1198.
[17] D.G. Lainiotis, "Partitioning filters", J. Inform. Sci., Vol. 17, 1979, pp. 177-193.
[18] D.G. Lainiotis and D. Andrisani, "Multipartitioning linear estimation algorithms: Continuous systems", IEEE Trans. Automat. Control, Vol. AC-24, No. 6, December 1979, pp. 937-944.
[19] D.G. Lainiotis and D. Andrisani, "Multipartitioning linear estimation algorithms: Discrete time systems", in: G. Bekey and S. Saridis, eds., Identification and Parameter Estimation, Pergamon Press, New York, 1983.
[20] D.G. Lainiotis, S.K. Katsikas and S.D. Likothanassis, "Minimum variance deconvolution", Proc. IASTED Internat. Symp. on Identification and Pattern Recognition, Toulouse, France, 1986, pp. 163-181.
[21] J.S. Meditch, Stochastic Optimal Linear Estimation and Control, McGraw-Hill, New York, 1969.

Reference added in proof

[22] K.E. Anagnostou and D.G. Lainiotis, "Comparative computational analysis of a new per-sample partitioning linear filter and the Kalman filter", Internat. J. Systems Sci., 1987, to appear.