CONTINUOUS-DISCRETE ESTIMATION OF STOCHASTIC PROCESSES OVER MEMORY (TIME-DELAY) OBSERVATIONS WITH ANOMALOUS NOISES

Nikolai Dyomin *, Svetlana Rozhkova **

* Department of Applied Mathematics and Cybernetics, Tomsk State University, 36 Lenin ave., 634050 Tomsk, Russia, [email protected]
** Department of Natural Sciences and Mathematics, Tomsk Polytechnic University, 30 Lenin ave., 634034 Tomsk, Russia, [email protected]
Abstract: The joint synthesis of the mean square optimal (MSO) unbiased filter-interpolator-extrapolator (FIE) is performed. It is assumed that the observation processes depend not only on the current value of an unobserved process but also on an arbitrary number of its preceding values, and that anomalous noise occurs in the discrete observations. The properties of the MSO FIE are investigated.

Keywords: stochastic systems, filtering, interpolation, extrapolation, time-delay, memory
1. INTRODUCTION

In stochastic systems of the Kalman type (Kalman, 1960), which are widely applied in control theory, navigation, and data transmission, the problems of synthesizing algorithms for estimation (Basin et al., 2004; 2005a; 2005b) and control (Basin et al., 2005c; 2006; Dion et al., 1999) under uncertainty, in the form of unknown parameters or anomalous perturbations with incomplete a priori information, come to the forefront. The new aspects of the considered problem are the following (Abakumova et al., 1995; Dyomin et al., 1997; Dyomin and Rozhkova, 2000): 1) a continuous-time process is estimated on the basis of a set of realizations of continuous- and discrete-time processes, in contrast to the cases when only discrete- or only continuous-time processes are observed; 2) the observations possess a memory of arbitrary multiplicity with respect to the unobserved process; 3) estimates are obtained simultaneously for current, past, and future values of the unobserved process; 4) there is no a priori information about the mean of the anomalous noise, and its components do not act on all components of the observed process.

Notation: I and O denote the identity matrix and the zero matrix of the corresponding dimensions.

(This work was supported by Russian grant MD9509.2006.1.)
2. STATEMENT OF THE PROBLEM

The unobserved n-dimensional process x_t and the observed l-dimensional process z_t are described by the equations

dx_t = F(t)x_t dt + Φ_1(t) dw_t,  t ≥ 0,  (1)

dz_t = [H_0(t)x_t + Σ_{k=1}^{N} H_k(t)x_{τ_k}] dt + Φ_2(t) dv_t,  (2)

and the observed q-dimensional discrete-time process η(t_m) (m = 0, 1, ...; t_0 ≥ 0) has the form

η(t_m) = G_0(t_m)x_{t_m} + Σ_{k=1}^{N} G_k(t_m)x_{τ_k} + Φ_3(t_m)ξ(t_m) + Cf(t_m),  (3)

where: 0 < τ_N < ... < τ_1 < t_m ≤ t; w_t and v_t are r_1-dimensional and r_2-dimensional standard Wiener processes, respectively; ξ(t_m) is a Gaussian r_3-dimensional sequence with E{ξ(t_m)} = 0, E{ξ(t_m)ξ'(t_k)} = Iδ_{mk}, which is a regular noise; f(t_m) is a Gaussian r-dimensional sequence with E{f(t_m)} = f_0(t_m), E{[f(t_m) − f_0(t_m)][f(t_k) − f_0(t_k)]'} = Θ(t_m)δ_{mk}, which is an anomalous noise. We consider the general case where the components of the anomalous noise vector f(t_m) need not act on all components of the observation vector η(t_m), i.e., r ≤ q. Hence the (q × r) matrix C, which specifies the structure of the action of the components of f(t_m) on the components of η(t_m), has the following form. Let i_j be the numbers of the components of the vector η(t_m) on which the components of the anomalous noise vector f(t_m) act (1 ≤ i_j ≤ q; 1 ≤ j ≤ r). Then the i_j-th entry of the j-th column of the matrix C is equal to one, and the other entries are zero. It is assumed that: 1) x_0 has a normal distribution with parameters μ_0 and Γ_0; 2) the matrices Γ_0, Q(t) = Φ_1(t)Φ_1'(t), R(t) = Φ_2(t)Φ_2'(t), V(t_m) = Φ_3(t_m)Φ_3'(t_m), Θ(t_m) are positive definite; 3) x_0, w_t, v_t, ξ(t_m), f(t_m) are statistically independent; 4) f_0(t_m) is unknown.

Problem 1 (Synthesis). For a sequence of time moments s_1, s_2, ..., s_L such that t < s_1 < ... < s_L, on the basis of the set of realizations z_0^t = {z_σ : 0 ≤ σ ≤ t} and η_0^m = {η(t_0), η(t_1), ..., η(t_m); 0 ≤ t_0 < ... < t_m ≤ t}, find the MSO unbiased filtering estimate μ(t), interpolation estimates μ(τ_k, t), and extrapolation estimates μ(s_l, t) of the process x_σ at the moments σ = t, σ = τ_k, and σ = s_l for all k = 1;N and l = 1;L. Here τ_k = const, i.e., the memory is fixed, and s_l = const, i.e., the extrapolation is inverse (Dyomin et al., 1997; Dyomin and Rozhkova, 2000). The solution of this problem is given by Theorem 1 from (Dyomin and Rozhkova, 2000).
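As a reading aid, the model (1)-(3) can be sketched in a short simulation; all dimensions, coefficient values, delay points, and noise parameters below are invented for the illustration (the paper leaves them general), and the code only shows how the structure matrix C of eq. (3) is built from the component numbers i_j and how one discrete observation with memory is formed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
n, q, r = 2, 3, 1        # state dim, discrete-observation dim, anomalous-noise dim
N = 2                    # memory multiplicity: delays tau_1 > tau_2 > 0
dt, T = 1e-3, 1.0

def structure_matrix(q, idx):
    """C of eq. (3): column j has a single one in row i_j (1-based indices)."""
    C = np.zeros((q, len(idx)))
    for j, i in enumerate(idx):
        C[i - 1, j] = 1.0
    return C

C = structure_matrix(q, [2])          # f(t_m) acts only on component eta_2

# Invented constant coefficients for the sketch
F = np.array([[-1.0, 0.2], [0.0, -0.5]])
Phi1 = 0.3 * np.eye(n)
G = [rng.standard_normal((q, n)) for _ in range(N + 1)]   # G_0, G_1, G_2
Phi3 = 0.1 * np.eye(q)                # here r_3 = q for simplicity
Theta = np.array([[4.0]])             # anomalous-noise covariance
f0 = np.array([1.5])                  # mean of f(t_m), unknown to the estimator

# Euler-Maruyama simulation of eq. (1); the stored path supplies delayed values
steps = int(T / dt)
x = np.zeros((steps + 1, n))
for s in range(steps):
    x[s + 1] = x[s] + F @ x[s] * dt + Phi1 @ (np.sqrt(dt) * rng.standard_normal(n))

# One discrete observation (3) at t_m = T with memory points tau_1 = 0.8, tau_2 = 0.4
taus = [0.8, 0.4]                     # 0 < tau_2 < tau_1 < t_m, as the paper requires
idx = [steps] + [int(tau / dt) for tau in taus]
eta = sum(G[k] @ x[i] for k, i in enumerate(idx))
eta = eta + Phi3 @ rng.standard_normal(q)
eta = eta + C @ (f0 + np.linalg.cholesky(Theta) @ rng.standard_normal(r))
```

The anomalous term Cf(t_m) disturbs only the components i_j of η(t_m) selected by C, which is the structural fact exploited throughout the analysis.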
Problem 2 (Analysis). The properties of the MSO FIE are to be investigated.

3. ANALYSIS

Following (Dyomin et al., 1997), let F_t^{z,η} = {z_0^t, η_0^m} and

μ(t) = E{x_t | F_t^{z,η}},  μ(τ_k, t) = E{x_{τ_k} | F_t^{z,η}},  μ(s_l, t) = E{x_{s_l} | F_t^{z,η}},  k = 1;N, l = 1;L,  (4)

x_{N+L+1}(τ̄_N, s̄_L, t) = [x_t; x_{τ_k}; x_{s_l}],  (5)

μ_{N+L+1}(τ̄_N, s̄_L, t) = [μ(t); μ(τ_k, t); μ(s_l, t)],  (6)

Γ̃_{N+L+1}(τ̄_N, s̄_L, t) = E{[x_{N+L+1}(τ̄_N, s̄_L, t) − μ_{N+L+1}(τ̄_N, s̄_L, t)][·]'},  (7)

τ̄_N = [τ_1, τ_2, ..., τ_N],  s̄_L = [s_1, s_2, ..., s_L],
G_{N+1,L}(t_m) = [G_0(t_m) ⋮ G_1(t_m) ⋮ ... ⋮ G_N(t_m) ⋮ O ⋮ ... ⋮ O].  (8)

Suppose that the vector of discrete observations η̄(t_m) of dimension (q − r) is obtained from the vector η(t_m) by excluding the components with the numbers i_1, i_2, ..., i_r that carry the anomalous noise. Let Ḡ_0(t_m), Ḡ_k(t_m), k = 1;N, Ḡ_{N+1,L}(t_m) be the matrices of dimensions [(q−r) × n], [(q−r) × n], [(q−r) × (N+L+1)n] obtained from the matrices G_0(t_m), G_k(t_m), G_{N+1,L}(t_m) by excluding the rows with the numbers i_1, i_2, ..., i_r, and let the matrix V̄(t_m) of dimension [(q−r) × (q−r)] be obtained from the matrix V(t_m) by excluding the rows and columns with these numbers. The MSO FIE that uses the observation vector η̄(t_m) is called truncated.

Proposition 1. On the time intervals t_m ≤ t < t_{m+1} the truncated FIE μ̄(t), μ̄(τ_k, t), μ̄(s_l, t), Γ̄(t), Γ̄_{0k}(τ_k, t), Γ̄_{kk}(τ_k, t), Γ̄_{ki}(τ_i, τ_k, t), Γ̄_{ll}(s_l, t), Γ̄_{jl}(s_j, s_l, t), Γ̄^l_{0,N+1}(s_l, t), Γ̄^l_{k,N+1}(τ_k, s_l, t) is determined by the expressions of Corollary 2 from (Dyomin et al., 1997), in which η(t_m), η̃(t_m), G_0(t_m), G_k(t_m), V(t_m) are replaced by η̄(t_m), η̄̃(t_m), Ḡ_0(t_m), Ḡ_k(t_m), V̄(t_m), respectively.

Theorem 1. The FIE described by Theorem 1 from (Dyomin and Rozhkova, 2000) is equivalent to the truncated FIE.

Proof. ∆ Let t_m be the first moment at which the anomalous noise appears. This means that μ_{N+L+1}(τ̄_N, s̄_L, t_m − 0) and Γ̃_{N+L+1}(τ̄_N, s̄_L, t_m − 0) in the truncated form coincide with the corresponding values in the MSO FIE. Then from (Dyomin et al., 1997)

μ̄_{N+L+1}(·, t_m) = μ_{N+L+1}(·, t_m − 0) + K̄(t_m)η̄̃(t_m),  (9)

K̄(t_m) = Γ̃_{N+L+1}(·, t_m − 0)Ḡ'_{N+1,L}(t_m)W̄^{-1}(t_m),  (10)

η̄̃(t_m) = η̄(t_m) − Ḡ_{N+1,L}(t_m)μ_{N+L+1}(·, t_m − 0),  (11)

and from (Dyomin and Rozhkova, 2000)

μ_{N+L+1}(·, t_m) = μ_{N+L+1}(·, t_m − 0) + K̃(t_m)η̃(t_m),  (12)

K̃(t_m) = K(t_m)[I_q − CY(t_m)] = K(t_m)Ỹ(t_m),  (13)

Ỹ(t_m) = [I_q − CY(t_m)],  (14)

so the proof of the theorem reduces to the proof of the equality K̄(t_m)η̄̃(t_m) = K̃(t_m)η̃(t_m). Let us introduce the Boolean matrix E of dimension [(q−r) × q], obtained from the matrix I_q by excluding the rows with the numbers i_1, i_2, ..., i_r. Obviously, η̄(t_m) = Eη(t_m), Ḡ_{N+1,L}(t_m) = EG_{N+1,L}(t_m), η̄̃(t_m) = Eη̃(t_m). Consequently, the proof reduces to the matrix identity K̄(t_m)E = K̃(t_m), which, taking into consideration (10), (13) and K(t_m) = Γ̃_{N+L+1}(·, t_m − 0)G'_{N+1,L}(t_m)W^{-1}(t_m), has the form
Γ̃_{N+L+1}(·, t_m − 0)Ḡ'_{N+1,L}(t_m)W̄^{-1}(t_m)E = Γ̃_{N+L+1}(·, t_m − 0)G'_{N+1,L}(t_m)W^{-1}(t_m)Ỹ(t_m).  (15)

Since Ḡ_{N+1,L}(t_m) = EG_{N+1,L}(t_m) and V̄(t_m) = EV(t_m)E', we have W̄(t_m) = EW(t_m)E', and the proof of (15) reduces to the proof of the matrix equality

E'[EW(t_m)E']^{-1}E = W^{-1}(t_m)Ỹ(t_m).  (16)

Use of the matrix identity [A + BCB']^{-1} = A^{-1} − A^{-1}B[C^{-1} + B'A^{-1}B]^{-1}B'A^{-1}, taking into account (2.35) from (Dyomin and Rozhkova, 2000), yields

W^{-1}(t_m) = W̃^{-1}(t_m) − W̃^{-1}(t_m)C[Θ^{-1}(t_m) + Ñ(t_m)]^{-1}C'W̃^{-1}(t_m),  (17)

Ñ(t_m) = C'W̃^{-1}(t_m)C.  (18)

Multiplying both sides of (17) by C' from the left and by C from the right, and then simplifying the right-hand side by means of the same matrix identity, we obtain C'W^{-1}(t_m)C = [Θ(t_m) + Ñ^{-1}(t_m)]^{-1}. From this, taking into consideration

N(t_m) = C'W^{-1}(t_m)C,  (19)

it follows that Θ(t_m) = N^{-1}(t_m) − Ñ^{-1}(t_m). Multiplying both sides of the last expression by C from the left and by C' from the right and then adding W̃(t_m) to both sides gives W(t_m) = W̃(t_m) + C[N^{-1}(t_m) − Ñ^{-1}(t_m)]C', from which the matrix identity

W̃(t_m)[W̃^{-1}(t_m) − W̃^{-1}(t_m)CÑ^{-1}(t_m)C'W̃^{-1}(t_m)]W̃(t_m) = W(t_m)[W^{-1}(t_m) − W^{-1}(t_m)CN^{-1}(t_m)C'W^{-1}(t_m)]W(t_m)  (20)

follows. Let Ψ̃(t_m) denote the left-hand side of (20). Applying to the factors W̃(t_m) that stand to the left and to the right of the square bracket in Ψ̃(t_m) formula (2.35) from (Dyomin and Rozhkova, 2000) and (19) in consecutive order, we obtain Ψ̃(t_m) = W(t_m)[W^{-1}(t_m) − W^{-1}(t_m)CN^{-1}(t_m)C'W^{-1}(t_m)]W(t_m). Then from (20) follows

W^{-1}(t_m) − W^{-1}(t_m)CN^{-1}(t_m)C'W^{-1}(t_m) = W̃^{-1}(t_m) − W̃^{-1}(t_m)CÑ^{-1}(t_m)C'W̃^{-1}(t_m).  (21)

Use of (21) in (16), taking into account (2.33) from (Dyomin and Rozhkova, 2000), (14), (19), demonstrates that the proof of (15) reduces to the proof of the property

W(t_m)E'[EW(t_m)E']^{-1}E + CN^{-1}(t_m)C'W^{-1}(t_m) = I_q.  (22)

Let us introduce the notations A_1 = W(t_m)E'[EW(t_m)E']^{-1}E, A_2 = CN^{-1}(t_m)C'W^{-1}(t_m). For arbitrary matrices A and B the rank properties (Albert, 1972)

rk[AB] = rk[A^{+}AB] = rk[ABB^{+}]  (23)

hold. Consecutive use of (23) for A_1 and A_2, with D^{+} = D^{-1} for an invertible matrix D, results in

rk[A_1] = rk[E'[EW(t_m)E']^{-1}EE^{+}],  rk[A_2] = rk[C^{+}C[C'W^{-1}(t_m)C]^{-1}C'].  (24)

Since the matrix E has linearly independent rows and the matrix C has linearly independent columns, EE^{+} = I_{(q−r)} and C^{+}C = I_r (Albert, 1972). Taking into consideration (23) and these properties, from (24) we obtain

rk[A_1] = rk[E'] = q − r,  rk[A_2] = rk[C] = r.  (25)

Further, A_1^2 = A_1 and A_2^2 = A_2, that is, A_1 and A_2 are projection matrices (Albert, 1972). From the structure of the matrices C and E it follows that EC = O; hence A_1A_2 = O, A_2A_1 = O and, besides, rk[A_1] + rk[A_2] = q. Since projection matrices satisfying these conditions have the property A_1 + A_2 = I_q (Albert, 1972), this proves (22) and, consequently, (15). The arbitrariness of the time moment t_m follows by induction. ∇

Since the matrix C characterizes the structure of the impact of the anomalous noise vector f(t_m) on the components of the observation vector η(t_m), different matrices C correspond to different estimation precisions of the MSO FIE (mean square errors of the estimates). Let I_{(r)} be a Boolean vector of dimension q whose components with the numbers i_1, i_2, ..., i_r are zero and whose remaining components are equal to one. The estimation precision J_{(r)}(t_m) at the time moment t_m corresponding to the vector I_{(r)} is understood as the value
J_{(r)}(t_m) = tr[AΓ̃^{(r)}_{N+L+1}(τ̄_N, s̄_L, t_m)],  (26)

where A is an arbitrary symmetric nonnegative definite matrix and Γ̃^{(r)}_{N+L+1}(τ̄_N, s̄_L, t) is the matrix of the second moments of the MSO FIE estimation errors corresponding to the vector I_{(r)}.
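The criterion (26) is monotone with respect to the Loewner order: if Γ_2 ≥ Γ_1 and A ≥ 0, then tr[AΓ_2] ≥ tr[AΓ_1], which is the only fact needed to pass from matrix inequalities to precision inequalities below. A minimal sketch (the random matrices are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def psd(rng, n):
    """A random symmetric nonnegative definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T

A = psd(rng, n)                 # weight matrix of the precision criterion (26)
Gamma1 = psd(rng, n)
Gamma2 = Gamma1 + psd(rng, n)   # Gamma2 >= Gamma1 in the Loewner order

J1 = np.trace(A @ Gamma1)
J2 = np.trace(A @ Gamma2)
assert J2 >= J1                 # larger error covariance -> larger J, cf. (28)
```

The inequality holds because tr[A(Γ_2 − Γ_1)] is the trace of a product of two nonnegative definite matrices.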
Theorem 2. Assume that I_{(0)}, I_{(1)}, ..., I_{(q)} is a sequence of vectors in which each vector differs from the preceding one in the value of only one component. Let (r_2 = r_1 + 1)

Γ̃^{(r_2)}_{N+L+1}(τ̄_N, s̄_L, t_m − 0) ≥ Γ̃^{(r_1)}_{N+L+1}(τ̄_N, s̄_L, t_m − 0).  (27)

Then

J_{(r+1)}(t_m) ≥ J_{(r)}(t_m).  (28)

Proof. ∆ To prove the theorem it is sufficient to demonstrate that the inequality ∆J(t_m) ≥ 0, taking into consideration (27) written down for the time moment t_{m+1}, implies the inequality ∆J(t_{m+1}) ≥ 0. By (27), the matrix Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1} − 0) can be represented in the form (Gantmakher, 1988) Γ̃^{(r_2)}_{N+L+1}(·) = Γ̃^{(r_1)}_{N+L+1}(·) + Γ̃, Γ̃ ≥ 0. Then from (8) follows

W_{(r_2)}(t_{m+1}) = W_{(r_1)}(t_{m+1}) + G_{N+1,L}(t_{m+1})Γ̃G'_{N+1,L}(t_{m+1}).  (29)

From (14) we obtain

Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1}) = Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1} − 0) − Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1} − 0)G'_{N+1,L}(t_{m+1})W^{-1}_{(r_i)}(t_{m+1})[I_q − C_iY(t_{m+1})]G_{N+1,L}(t_{m+1})Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1} − 0).  (30)

Writing (21), (22) for C_i and the corresponding E_i and using these expressions in (30), taking into account (2.33) from (Dyomin and Rozhkova, 2000), (18), (19), we obtain

Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1}) = [I − Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1} − 0)G'_{N+1,L}(t_{m+1})E_i'[E_iW_{(r_i)}(t_{m+1})E_i']^{-1}E_iG_{N+1,L}(t_{m+1})]Γ̃^{(r_i)}_{N+L+1}(·, t_{m+1} − 0).  (31)

Setting r_i = r_2 in (31) and taking (29) into consideration, we obtain the final expression for Γ̃^{(r_2)}_{N+L+1}(τ̄_N, s̄_L, t_{m+1}) in the form

Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) = [I − (Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0) + Γ̃)G'_{N+1,L}(t_{m+1})E_2'[E_2W_{(r_1)}(t_{m+1})E_2' + E_2G_{N+1,L}(t_{m+1})Γ̃G'_{N+1,L}(t_{m+1})E_2']^{-1}E_2G_{N+1,L}(t_{m+1})](Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0) + Γ̃).  (32)

Let us introduce the matrix function Φ(α) = Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0) + αΓ̃ of a scalar variable α ≥ 0. Treating Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) as a function of α, from (32) we obtain

Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}; α) = [I − Φ(α)G'_{N+1,L}(t_{m+1})E_2'[E_2W_{(r_1)}(t_{m+1})E_2' + αE_2G_{N+1,L}(t_{m+1})Γ̃G'_{N+1,L}(t_{m+1})E_2']^{-1}E_2G_{N+1,L}(t_{m+1})]Φ(α).  (33)

Use of the formula for differentiating an inverse matrix, dA^{-1}(α)/dα = −A^{-1}(α)[dA(α)/dα]A^{-1}(α), in (33) yields

dΓ̃^{(r_2)}_{N+L+1}(·, t_{m+1}; α)/dα = BΓ̃B',  (34)

B = I − Φ(α)G'_{N+1,L}(t_{m+1})E_2'[E_2W_{(r_1)}(t_{m+1})E_2' + αE_2G_{N+1,L}(t_{m+1})Γ̃G'_{N+1,L}(t_{m+1})E_2']^{-1}E_2G_{N+1,L}(t_{m+1}).  (35)

Since Γ̃ ≥ 0, from (34) it follows that (Albert, 1972) dΓ̃^{(r_2)}_{N+L+1}(·, t_{m+1}; α)/dα ≥ 0, that is, Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}; α) is monotonically nondecreasing in α in the sense of matrix definiteness. Then

Γ̃^{(r_2)}_{N+L+1}(·; α)|_{α=1} ≥ Γ̃^{(r_2)}_{N+L+1}(·; α)|_{α=0}.  (36)

According to (32), (33),

Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) = Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}; α)|_{α=1}.

Then from (33), (36), (8) follows

Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) ≥ Ψ,  (37)

Ψ = [I − Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0)G'_{N+1,L}(t_{m+1})E_2'[E_2W_{(r_1)}(t_{m+1})E_2']^{-1}E_2G_{N+1,L}(t_{m+1})]Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0).  (38)

From (22), written for C_1, E_1, W_{(r_1)} at the time moment t_{m+1}, follows

E_1'[E_1W_{(r_1)}(t_{m+1})E_1']^{-1}E_1 = W^{-1}_{(r_1)}(t_{m+1})[I_q − C_1N_1^{-1}(t_{m+1})C_1'W^{-1}_{(r_1)}(t_{m+1})].  (39)

The above formula yields

E_1'[E_1W_{(r_1)}(t_{m+1})E_1']^{-1}E_1 − E_2'[E_2W_{(r_1)}(t_{m+1})E_2']^{-1}E_2 = W^{-1}_{(r_1)}(t_{m+1})L_{1,2}(t_{m+1}),  (40)

L_{1,2}(t_{m+1}) = I_q − (A_{12} + A_{21}),  (41)

A_{12} = W_{(r_1)}(t_{m+1})E_2'[E_2W_{(r_1)}(t_{m+1})E_2']^{-1}E_2,  A_{21} = C_1N_1^{-1}(t_{m+1})C_1'W^{-1}_{(r_1)}(t_{m+1}).  (42)

Analogously to (25), we obtain rk[A_{12}] = q − r_2, rk[A_{21}] = r_1. Since A_{12}^2 = A_{12} and A_{21}^2 = A_{21}, the matrices A_{12} and A_{21} are projection matrices (Gantmakher, 1988). From the structure of the matrices C_1 and E_2 we obtain E_2C_1 = O; hence A_{12}A_{21} = O, A_{21}A_{12} = O. For projection matrices satisfying this condition, the matrix Ã = A_{12} + A_{21} is also a projection matrix, and rk[Ã] = rk[A_{12}] + rk[A_{21}] = q − 1, since r_2 = r_1 + 1. The matrix L_{1,2}(t_{m+1}) = I_q − Ã is also a projection matrix, since Ã^2 = Ã and correspondingly (I_q − Ã)^2 = I_q − Ã. Since for a projection matrix the rank equals the trace, rk[L_{1,2}(t_{m+1})] = tr[L_{1,2}(t_{m+1})] = tr[I_q] − tr[Ã] = tr[I_q] − rk[Ã] = q − (q − 1) = 1. Let {λ_i(Φ)} denote the spectrum of a matrix Φ. Since each eigenvalue of a projection matrix is equal to 0 or to 1, and Σ_{i=1}^{q} λ_i(L_{1,2}(t_{m+1})) = rk[L_{1,2}(t_{m+1})] = tr[L_{1,2}(t_{m+1})] = 1, we have {λ_i(L_{1,2}(t_{m+1}))} = {0, ..., 0, 1, 0, ..., 0}, i = 1;q, that is, L_{1,2}(t_{m+1}) ≥ 0. Since W^{-1}_{(r_1)} > 0, from (40) follows

E_1'[E_1W_{(r_1)}(t_{m+1})E_1']^{-1}E_1 − E_2'[E_2W_{(r_1)}(t_{m+1})E_2']^{-1}E_2 ≥ 0.  (43)

Setting r_i = r_1, we obtain from (31), (38), (43) that (Albert, 1972)

Ψ − Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1}) = Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0)G'_{N+1,L}(t_{m+1})[E_1'[E_1W_{(r_1)}(t_{m+1})E_1']^{-1}E_1 − E_2'[E_2W_{(r_1)}(t_{m+1})E_2']^{-1}E_2]G_{N+1,L}(t_{m+1})Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1} − 0) ≥ 0.  (44)

Then from (37), (44) follows

Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) ≥ Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1}).  (45)

Since A ≥ 0, from (45) follows (Albert, 1972)

λ_j(A[Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) − Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1})]) ≥ 0,  j = 1;(N+L+1)n.  (46)

From the definition of J_{(r)}(t_{m+1}) we obtain

∆J(t_{m+1}) = tr[A[Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) − Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1})]].  (47)

Then from (46), (47) follows

∆J(t_{m+1}) = Σ_{j=1}^{(N+L+1)n} λ_j(A[Γ̃^{(r_2)}_{N+L+1}(·, t_{m+1}) − Γ̃^{(r_1)}_{N+L+1}(·, t_{m+1})]) ≥ 0,  (48)

which was to be proved. ∇
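Both proofs above rest on the projector identity (22), which in fact holds for any symmetric positive definite W once E and C have the complementary row/column structure described in Section 3. A quick numerical check under assumed values (q = 6, r = 2, anomalous components i_1 = 1, i_2 = 4, and a randomly generated W; all choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
q, r = 6, 2
anom = [1, 4]                                   # components of eta hit by anomalous noise (1-based)

# C: ones exactly in the rows i_j; E: the identity with those rows deleted
C = np.zeros((q, r))
for j, i in enumerate(anom):
    C[i - 1, j] = 1.0
E = np.eye(q)[[i for i in range(q) if (i + 1) not in anom]]

# Any symmetric positive definite W suffices for identity (22)
M = rng.standard_normal((q, q))
W = M @ M.T + q * np.eye(q)

Winv = np.linalg.inv(W)
N = C.T @ Winv @ C                              # eq. (19)
A1 = W @ E.T @ np.linalg.inv(E @ W @ E.T) @ E   # first projector
A2 = C @ np.linalg.inv(N) @ C.T @ Winv          # second projector

assert np.allclose(E @ C, 0.0)                                 # E C = O
assert np.allclose(A1 @ A1, A1) and np.allclose(A2 @ A2, A2)   # idempotent
assert np.allclose(A1 @ A2, 0.0) and np.allclose(A2 @ A1, 0.0) # mutually annihilating
assert np.allclose(A1 + A2, np.eye(q))                         # identity (22)
```

The last assertion is exactly (22) rewritten as A_1 + A_2 = I_q, with rk[A_1] = q − r and rk[A_2] = r as in (25).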
4. EXAMPLE

Assume that the unobserved process x_t is scalar and is described by equation (1) with F(t) ≡ −a, a > 0, Q(t) ≡ Q = const. The process z_t is scalar and defines continuous observations without memory of the form (2), where H_k ≡ 0, k = 1;N, H_0 ≡ H = const, R(t) ≡ R = const. Assume that the discrete observation channel is formed as a combination of two scalar channels, in each of which the signal X(t_m, τ) = G_0x_{t_m} + G_1x_τ is observed, where G_0 = const, G_1 = const, against the background of the uncorrelated regular noises ξ_1(t_m), ξ_2(t_m) of the same intensity V = const, i.e., ξ(t_m) = [ξ_1(t_m); ξ_2(t_m)], η(t_m) = [η_1(t_m); η_2(t_m)]. Let N = L = 1. Then

η(t_m) = G_0(t_m)x_{t_m} + G_1(t_m)x_τ + ξ(t_m) + Cf(t_m),

G_0(t_m) = [G_0; G_0],  G_1(t_m) = [G_1; G_1],  V(t_m) = [V, 0; 0, V],  G(t_m) = [G_0, G_1, 0; G_0, G_1, 0],

Γ̃_3(τ, s, t) =
[γ(t), γ_{01}(τ, t), γ_{02}(s, t);
 γ_{01}(τ, t), γ_{11}(τ, t), γ_{12}(τ, s, t);
 γ_{02}(s, t), γ_{12}(τ, s, t), γ_{22}(s, t)].  (49)

Note that in the matrix Γ̃_{N+L+1}(τ̄_N, s̄_L, t) (see (7)), for the considered case (N + L + 1 = 3), an obvious renumbering of the elements has been made for convenience of application, while G_{N+1,L}(t_m) = G(t_m). Let us study three cases: 1) there is no anomalous noise; 2) the anomalous noise acts on the first component η_1(t_m); 3) the anomalous noise acts on both components η_1(t_m) and η_2(t_m) of the observation vector η(t_m). We denote the mean square errors of the extrapolation estimates at the moments t_m in these three cases by γ^0_{22}(s, t_m), γ^1_{22}(s, t_m), and γ^2_{22}(s, t_m), respectively. We consider the case of sparse discrete observations, when γ, γ_{01}, γ_{11}, γ_{02}, γ_{12}, γ_{22} are determined by (3.2) in (Dyomin et al., 1997). Let ∆_{10} = γ^1_{22}(s, t_m) − γ^0_{22}(s, t_m), ∆_{21} = γ^2_{22}(s, t_m) − γ^1_{22}(s, t_m). Then from Theorem 1 in (Dyomin and Rozhkova, 2000) it follows that ∆_{10} > 0, ∆_{21} > 0, i.e.,

γ^2_{22}(s, t_m) > γ^1_{22}(s, t_m) > γ^0_{22}(s, t_m).  (50)

According to the accepted designations, γ^0_{22}(s, t_m) = J_{(0)}(t_m), γ^1_{22}(s, t_m) = J_{(1)}(t_m), γ^2_{22}(s, t_m) = J_{(2)}(t_m) for the extrapolation problem. Thus, inequalities (50) illustrate property (28) for the considered example with respect to the extrapolation estimate.
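The ordering (50) can be illustrated numerically for a single discrete update. The prior matrix (49) and the values of G_0, G_1, V below are invented for the sketch (the paper's sparse-observation formulas (3.2) from (Dyomin et al., 1997) are not reproduced here); the three cases are handled, as in Proposition 1, by excluding the anomalous rows of η(t_m):

```python
import numpy as np

# Illustrative prior second-moment matrix of the form (49): positive definite,
# with nonzero correlations between filtering/interpolation/extrapolation errors
M = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
Gamma3 = M @ M.T

G0, G1, V = 1.0, 0.8, 0.5             # invented example values
G = np.array([[G0, G1, 0.0],
              [G0, G1, 0.0]])          # G(t_m) of the example, N = L = 1
Vm = V * np.eye(2)

def posterior(rows):
    """Covariance after the discrete update using only the listed rows of eta."""
    if not rows:
        return Gamma3                  # both components anomalous: no usable update
    E = np.eye(2)[rows]
    W = E @ (G @ Gamma3 @ G.T + Vm) @ E.T
    K = Gamma3 @ G.T @ E.T @ np.linalg.inv(W)
    return Gamma3 - K @ E @ G @ Gamma3

g22_0 = posterior([0, 1])[2, 2]        # case 1: no anomalous noise
g22_1 = posterior([1])[2, 2]           # case 2: anomalous noise on eta_1, drop it
g22_2 = posterior([])[2, 2]            # case 3: anomalous noise on both components

assert g22_2 > g22_1 > g22_0           # the ordering (50)
```

Each excluded component removes usable information, so the extrapolation-error entry γ_{22} can only grow, in agreement with (28).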
Let us introduce the efficiency measure ε_{21} = ∆_{21} − ∆̃_{21} of discrete observations with memory relative to observations without memory for the extrapolation problem, where ∆̃_{21} is the value of ∆_{21} for G_1 = 0. If ε_{21} > 0, observations with memory are more efficient than observations without memory, since the presence of an ideal reserve channel with memory ensures a larger decrease of the mean square error of the extrapolation estimate than the presence of an ideal reserve channel without memory. If ε_{21} < 0, observations without memory are more efficient than observations with memory. The analysis of ε_{21} as a function of the memory depth t^* = t − τ gives the following result.

Proposition 2. Let G = {(G_0, G_1) : G_1^2 + 2G_0G_1 ≤ 0}. If (G_0, G_1) ∉ G, then ε_{21}(t^*) is a monotonically decreasing function from ε^0_{21} > 0, determined by the formula

ε^0_{21} = V γ^2 (G_1^2 + 2G_0G_1) exp{−2aT} / [(V + (G_0 + G_1)^2 γ)(V + G_0^2 γ)],  (51)

to ε^∞_{21} < 0, determined by the formula

ε^∞_{21} = − G_0^2 G_1^2 æ γ^3 exp{−2aT} / [(V + (G_0^2 + G_1^2 æ)γ)(V + G_0^2 γ)],  (52)

where T = s − t, γ = (λ + a)/2λ, λ = (a^2 + δQ)^{1/2}, δ = H_0^2/R, æ = 1 − (δ/2λ). The value t^* = t^*_{eff} (ε_{21}(t^*_{eff}) = 0) can be defined as the effective depth of memory and is determined by the formula

t^*_{eff} = (1/λ) ln{ |G_1|(V + æγG_0^2) / ( |G_0|([V^2 + æγG_1^2(V + æγG_0^2)]^{1/2} ∓ V) ) },  (53)

where the sign "−" is taken if G_0G_1 = |G_0||G_1| and the sign "+" if G_0G_1 = −|G_0||G_1|. Formula (53) is obtained as the unique positive root of the equation

G_1^2(V + æγG_0^2) exp{−2λt^*} + 2VG_0G_1 exp{−λt^*} − æγG_0^2G_1^2 = 0,  (54)

which follows from the condition ε_{21}(t^*) = 0. The monotone decrease of ε_{21}(t^*) is verified directly.

CONCLUSION

Theorem 2 means that adding anomalous components of the observation vector to those already present can only worsen the estimation precision. In the general case of two vectors I_{(r_1)} and I_{(r_2)} with r_2 > r_1, when the set of zero components of the vector I_{(r_2)} does not contain the set of zero components of the vector I_{(r_1)}, nothing definite can be said
about J(r1 ) (tm ) and J(r2 ) (tm ), and this case to need further consideration. REFERENCES Abakumova, O.L., N.S. Dyomin and T.V. Sushko (1995). Filtering of stochastic processes for continuous and discrete observations with memory. Automat. and Remote Control. 56(10), 1383–1393. Albert, A. (1972). Regression and the MoorePenrose Pseudoinverse. Boston University, Boston, Massachusets. Basin, M.V. and Martinez R. Zuniga (2004) Optimal linear filtering over observations with multiple delays. International Journal of Robust and Nonlinear Control, 14(8), 685-696. Basin, M.V. and Rodriguez Gonzalez J.G., Martinez R. Zuniga (2005a) Optimal filtering for linear state delay systems. IEEE Transactions on Automatic Control, AC-50(5), 684-690. Basin, M.V., Alcorta Garcia M.A. and Rodriguez Gonzalez J.G. (2005b) Optimal filtering for linear systems with state and observation delays. International Journal of Robust and Nonlinear Control, 15(17), 859-871. Basin, M.V. and Rodriguez Gonzalez J.G. (2005c) A closed-form optimal control for linear systems with equal state and input delays. Automatica, 41(5), 915-921. Basin, M.V. and Rodriguez Gonzalez J.G. (2006) Optimal control for linear systems with multiple time delays in control input. IEEE Transactions on Automatic Control, AC-51(1), 91-97. Dion, J.M., J.L. Dugard and M. Fliess (1999). Linear Time-Delay Systems. Pergamon, London. Dyomin, N.S., S.V. Rozhkova (2000). Continuously discrete estimation of stochastic processes for observations with memory under anomalous noise: Synthesis. J. Comput. Syst. Sci. Intern., 39(3), 335-346. Dyomin, N.S., T.V. Sushko and A.V. Yakovleva (1997). Generalized inverse extrapolation of stochastic processes by an aggregate of continuousdiscrete observations with memory. J. Comput. Syst. Sci. Intern., 36(4), 543-554. Gantmakher, F.R. (1988). Theory of Matrices, Moscow: Nauka (in Russian). Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. Trans. ASME. 
J.Basic Eng., Ser.D., 82(March), 35-45.