Aerospace Science and Technology 32 (2014) 200–211
Recursive estimation for descriptor systems with multiple packet dropouts and correlated noises

Jianxin Feng, Tingfeng Wang, Jin Guo

State Key Laboratory of Laser Interaction with Matter, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China

This work was supported by the National 973 Program of China (Grant Nos. 51334020202-2 and 51334020204-2).
Article history: Received 12 March 2013; received in revised form 26 August 2013; accepted 5 October 2013; available online 14 October 2013.

Keywords: Descriptor system; Recursive estimation; Autocorrelated; Multiple packet dropouts; Innovation approach
Abstract

In this paper, the problem of recursive estimation is studied for a class of descriptor systems with multiple packet dropouts and correlated noises. The multiple packet dropouts phenomenon is considered to be random and is described by a binary switching sequence that obeys a conditional probability distribution. The autocorrelated measurement noise is characterized by the covariances between different time instants. The descriptor system is transformed into a regular linear system with an algebraic constraint. By using an innovation analysis method and the orthogonal projection theorem, recursive estimators including filter, predictor and smoother are developed for each subsystem and for the process noise. Further, the recursive filter, predictor and smoother are obtained for the original descriptor system with the possible multiple packet dropouts phenomenon and correlated noises. Simulation results are provided to demonstrate the effectiveness of the proposed approaches.
1. Introduction

In the past decades, the state estimation problem for descriptor systems has received significant attention, since these systems arise naturally in extensive practical application areas, such as economic systems, robotic systems, electric network systems and chemical systems. Different from non-descriptor systems, the future dynamics of a descriptor system can affect the present values of the state, and this leads to more difficulties in the research. So far, a great number of state estimators have become available in the literature [1–4,9,12,19,23,28]. For example, the problem of robust Kalman filtering for uncertain descriptor systems has been investigated in [12]. In [1,19], the estimation problem for descriptor systems has been transformed into the estimation problem for non-descriptor systems. The descriptor Kalman filter based on the least-square method has been studied in [2]. In [3,28], the optimal estimation for descriptor systems has been treated based on modern time-series analysis methods. The distributed Kalman filter fusion for descriptor systems has been studied in [4,9], where an optimal fusion criterion weighted by block-diagonal matrices is used.

The problem of state estimation with correlated noises has attracted recurring interest in recent years [11,15–18]. This is due to the fact that correlated noises are commonly encountered in engineering practice. In [18], the state estimation for discrete-time
systems with cross-correlated noises has been treated based on an optimal weighted matrix sequence, where the process noises and measurement noises are cross-correlated. The optimal Kalman filtering fusion problem for systems with cross-correlated sensor noises has been dealt with in [11,16]. However, the estimators in [11,16,18] only deal with noises that are correlated at the same time instant. Song et al. [17] presented a Kalman-type recursive filter for systems with finite-step autocorrelated process noises. The filtering problem with finite-step cross-correlated process noises and measurement noises has been investigated in [8].

On the other hand, with the development of network technologies, networked control systems (NCSs) have received significant attention in many practical applications for their advantages of low cost, great mobility, and simple installation and implementation. However, the cost of these advantages is the need for communication and for handling the network-induced uncertainties. In NCSs, the limited-capacity communication networks that are generally shared by a group of systems bring new challenges to the analysis and design of state estimators subject to time-delays and/or packet dropouts (also called missing measurements). Several results have been proposed when the time-delays are deterministic, see [13] and the references therein. However, time-delays and packet dropouts in networked systems are inherently random. Recently, the binary switching sequence has been employed to describe time-delays and/or multiple packet dropouts for its simplicity and practicality [7,10,14,20–22,24–27]. In [21,22], the problem of robust filtering for uncertain systems with missing measurements has been investigated. The least-mean-square filtering problem for one-step random sampling delay has been studied in
[25,26]. Unfortunately, the filters designed in [25,26] are suboptimal, since a colored noise introduced by the augmentation has been treated as a white noise. The filtering problem for systems with random measurement delays and multiple packet dropouts has also been discussed in [14,20]. In [27], the optimal non-fragile filtering problem for dynamic systems with finite-step autocorrelated measurement noises and multiple packet dropouts has been studied. The problem of optimal filtering for uncertain systems over a sensor network with different delay rates and with autocorrelated process noises has also been investigated in [7]. It should be noted that all the literature mentioned above is concerned with non-descriptor systems, and the corresponding results for descriptor systems are relatively few. Recently, Feng and Zeng [6] presented recursive estimators for descriptor systems with different delay rates, where the delay of each sensor is restricted to at most one unit. However, in many applications, the delay of each sensor may not obey this restriction. Up to now, to the best of the authors' knowledge, the recursive estimation problem has not yet been addressed for descriptor systems with multiple packet dropouts and correlated noises, and this situation motivates our current study.

In this paper, we aim at solving the recursive estimation problem for a class of descriptor systems with multiple packet dropouts and correlated noises. The multiple packet dropouts phenomenon is described by a binary switching sequence satisfying a conditional probability distribution. The measurement noise is autocorrelated; without loss of generality, it is assumed to be one-step autocorrelated. The descriptor system is transformed into a regular linear system with an algebraic constraint. By using an innovation analysis approach and the orthogonal projection theorem (OPT), recursive estimators including filter, predictor and smoother are developed for each subsystem and for the process noise. Further, we obtain the recursive filter, predictor and smoother for the original descriptor system. The main contribution of this paper is threefold: 1) to the best of the authors' knowledge, this is the first time that the multiple packet dropouts phenomenon is considered for descriptor systems; 2) the considered autocorrelated measurement noise, which is characterized by the covariances between different time instants, is intractable; 3) recursive estimators including filter, predictor and smoother are developed for the proposed descriptor system. A numerical simulation example is used to demonstrate the effectiveness of the proposed estimation schemes.

The remainder of the paper is organized as follows. In Section 2, the recursive estimation problem is formulated for a class of descriptor systems with multiple packet dropouts and correlated noises. The recursive estimators including filter, predictor and smoother are derived in Section 3. In Section 4, a simulation example is provided to illustrate the usefulness of the theory developed in this paper. We end the paper with some concluding remarks in Section 5.

Notation. The notation used in the paper is fairly standard. The superscripts "T" and "+" stand for matrix transposition and pseudo-inverse, respectively. R^n denotes the n-dimensional Euclidean space, R^{m×n} is the set of all real matrices of dimension m × n, and I and 0 represent the identity matrix and the zero matrix, respectively. The notation P > 0 means that P is real symmetric and positive definite, and diag(...) stands for a block-diagonal matrix. δ_{k−j} is the Kronecker delta function, which is equal to unity for k = j and zero for k ≠ j. In addition, E{x} denotes the mathematical expectation of x and Prob{·} represents the occurrence probability of the event "·". Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.
2. Problem formulation

Consider the following descriptor system with multiple packet dropouts and correlated noises:

E s_{k+1} = F s_k + G ω̃_k,   (1)
ȳ_k = C s_k + v_k,   (2)
y_k = λ_k ȳ_k + (1 − λ_k) y_{k−1},   (3)

where s_k ∈ R^n is the state of the system to be estimated, ȳ_k ∈ R^m is the measured output of the sensor, y_k ∈ R^m is the measurement received by the estimators, and ω̃_k ∈ R^q is the process noise with zero mean and covariance Q̃_k > 0. E, F, G and C are known matrices with appropriate dimensions. λ_k ∈ R is a binary switching white sequence with the following statistical properties:

Prob{λ_k = 1} = E{λ_k} = β_k,   Prob{λ_k = 0} = 1 − E{λ_k} = 1 − β_k,
σ_k² = E{(λ_k − β_k)²} = (1 − β_k) β_k,   (4)

where β_k ∈ [0, 1] is a known real time-varying positive scalar and λ_k is assumed to be uncorrelated with the other noise signals. v_k ∈ R^m is the zero-mean measurement noise, correlated with ω̃_k, with the following statistical properties:

E{v_k v_t^T} = R̃_k δ_{k−t} + R̃_{k,k−1} δ_{k−t−1} + R̃_{k,k+1} δ_{k−t+1},   E{ω̃_k v_t^T} = S̃_k δ_{k−t},   (5)

where R̃_k = R̃_k^T. From (5), it is known that the measurement noise v_k is one-step autocorrelated: the measurement noise at time k is correlated with the measurement noises at times k − 1 and k + 1 with covariances R̃_{k,k−1} and R̃_{k,k+1}, respectively.

Remark 1. The measurement model (3) was first introduced in [14] and has been employed in [20] to describe the multiple packet dropouts phenomenon. In measurement (3), if λ_k = λ_{k−1} = 0 while λ_{k−2} = 1, the measurements at times k and k − 1 are lost and the measurement received at time k − 2 is used at both k − 1 and k. Thus, the measurement model (3) can be used to describe the multiple packet dropouts phenomenon.

Assumption 1. E is singular, i.e. det E = 0, with rank E = n_1, n_1 < n.

Using Assumption 1 and the singular value decomposition [5], there exist two nonsingular matrices U and V such that

U E V = [ D  0 ; 0  0 ],   D = diag(μ_1, μ_2, ..., μ_{n_1}),   μ_i > 0, i = 1, 2, ..., n_1.   (6)

By defining

U F V = [ F_11  F_12 ; F_21  F_22 ],   C V = [ C_1  C_2 ],   U G = [ G_1 ; G_2 ],   s_k = V [ s_{1,k} ; s_{2,k} ],   (7)

the system (1)–(3) can be rewritten as follows:

s_{1,k+1} = D^{−1} F_11 s_{1,k} + D^{−1} F_12 s_{2,k} + D^{−1} G_1 ω̃_k,   (8)
0 = F_21 s_{1,k} + F_22 s_{2,k} + G_2 ω̃_k,   (9)
ȳ_k = C_1 s_{1,k} + C_2 s_{2,k} + v_k,   (10)
y_k = λ_k ȳ_k + (1 − λ_k) y_{k−1},   (11)

where s_{1,k} ∈ R^{n_1} and s_{2,k} ∈ R^{n_2}, n_2 = n − n_1.
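For readers who want to reproduce this decomposition numerically, the following is a minimal sketch (in Python/NumPy, with an arbitrary rank-deficient E and randomly chosen F, G, C purely for illustration) of how U, V and the partitioned blocks of (6)–(7) can be obtained; it is only an illustration of the transformation, not part of the estimator itself.

```python
import numpy as np

# Hypothetical singular E (rank 2, n = 3) and placeholder F, G, C, for illustration only.
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
F = np.random.randn(3, 3)
G = np.random.randn(3, 1)
C = np.random.randn(1, 3)

# Singular value decomposition E = Us diag(mu) Vs^T, so Us^T E Vs = diag(D, 0) as in (6).
Us, mu, Vst = np.linalg.svd(E)
n1 = int(np.sum(mu > 1e-12))          # rank of E (Assumption 1)
U, V = Us.T, Vst.T                    # U E V = diag(D, 0)
D = np.diag(mu[:n1])

# Partition U F V, C V and U G as in (7).
UFV, CV, UG = U @ F @ V, C @ V, U @ G
F11, F12 = UFV[:n1, :n1], UFV[:n1, n1:]
F21, F22 = UFV[n1:, :n1], UFV[n1:, n1:]
C1, C2 = CV[:, :n1], CV[:, n1:]
G1, G2 = UG[:n1, :], UG[n1:, :]

# Under causality (Assumption 3 below), F22 is nonsingular and (12)-(13) give:
Gamma1 = -np.linalg.solve(F22, F21)
Gamma2 = -np.linalg.solve(F22, G2)
```

The call np.linalg.svd returns E = U_s diag(μ) V_s^T, so taking U = U_s^T and V = V_s puts E into the form (6).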
Assumption 2. The initial value s_{1,0} has mean s̄_{1,0} and covariance P̃_{1,0} and is uncorrelated with the other noise signals.

Assumption 3. The system is causal, i.e. F_22 is nonsingular [5].

According to (9) and Assumption 3, we have

s_{2,k} = Γ_1 s_{1,k} + Γ_2 ω̃_k,   (12)

where

Γ_1 = −F_22^{−1} F_21,   Γ_2 = −F_22^{−1} G_2.   (13)

Substituting (12) into (8) and (10), we have the subsystem for s_{1,k} as follows:

s_{1,k+1} = Ā s_{1,k} + B̄ ω̃_k,   (14)
ȳ_k = H̄ s_{1,k} + v̄_k,   (15)
y_k = λ_k ȳ_k + (1 − λ_k) y_{k−1},   (16)

where

Ā = D^{−1}(F_11 + F_12 Γ_1),   B̄ = D^{−1}(G_1 + F_12 Γ_2),
H̄ = C_1 + C_2 Γ_1,   v̄_k = v_k + C_2 Γ_2 ω̃_k.   (17)

Introducing the following notations:

x_k = [ s_{1,k} ; y_{k−1} ],   ω_k = [ ω̃_k ; v̄_k ],
A_k = [ Ā  0 ; λ_k H̄  (1 − λ_k) I_m ],   B_k = [ B̄  0 ; 0  λ_k I_m ],   H_k = [ λ_k H̄  (1 − λ_k) I_m ],   (18)

a compact representation of (14)–(16) can be expressed as follows:

x_{k+1} = A_k x_k + B_k ω_k,   (19)
y_k = H_k x_k + λ_k v̄_k,   (20)

where ω_k and v̄_k are the process noise and measurement noise of the system (19)–(20), respectively. It can be easily obtained from (5), (17) and (18) that ω_k and v̄_k have the following statistical properties:

E{ω_k} = E{v̄_k} = 0,
E{ω_k ω_t^T} = Q_k δ_{k−t} + Q_{k,k−1} δ_{k−t−1} + Q_{k,k+1} δ_{k−t+1},
E{v̄_k v̄_t^T} = R_k δ_{k−t} + R_{k,k−1} δ_{k−t−1} + R_{k,k+1} δ_{k−t+1},
E{ω_k v̄_t^T} = S_k δ_{k−t} + S_{k,k−1} δ_{k−t−1} + S_{k,k+1} δ_{k−t+1},   (21)

where

Q_k = [ Q̃_k  R_{v,k} ; R_{v,k}^T  R_k ],   R_{v,k} = S̃_k + Q̃_k Γ_2^T C_2^T,
R_k = R̃_k + S̃_k^T Γ_2^T C_2^T + C_2 Γ_2 S̃_k + C_2 Γ_2 Q̃_k Γ_2^T C_2^T,
R_{k,k−1} = R̃_{k,k−1},   R_{k,k+1} = R̃_{k,k+1},
Q_{k,k−1} = [ 0  0 ; 0  R_{k,k−1} ],   Q_{k,k+1} = [ 0  0 ; 0  R_{k,k+1} ],
S_k = [ R_{v,k} ; R_k ],   S_{k,k−1} = [ 0 ; R_{k,k−1} ],   S_{k,k+1} = [ 0 ; R_{k,k+1} ].   (22)

Remark 2. From (21), it is known that the process noise ω_k and the measurement noise v̄_k are one-step autocorrelated and cross-correlated. The process noise at time k is correlated with the process noises at times k − 1 and k + 1 with covariances Q_{k,k−1} and Q_{k,k+1}, respectively. The measurement noise at time k is correlated with the measurement noises at times k − 1 and k + 1 with covariances R_{k,k−1} and R_{k,k+1}, respectively. The process noise at time k is correlated with the measurement noises at times k − 1 and k + 1 with covariances S_{k,k−1} and S_{k,k+1}, respectively.

Since A_k, B_k and H_k involve the stochastic variable λ_k, the system (19)–(20) is actually a stochastic parameter system. For convenience of later development, we introduce the following notations:

J_k = λ_k − β_k,
Ā_k = E{A_k} = [ Ā  0 ; β_k H̄  (1 − β_k) I_m ],   B̄_k = E{B_k} = [ B̄  0 ; 0  β_k I_m ],   H̄_k = E{H_k} = [ β_k H̄  (1 − β_k) I_m ],
Ã_k = A_k − Ā_k = J_k A_e,   B̃_k = B_k − B̄_k = J_k B_e,   H̃_k = H_k − H̄_k = J_k H_e,
A_e = [ 0  0 ; H̄  −I_m ],   B_e = [ 0  0 ; 0  I_m ],   H_e = [ H̄  −I_m ].   (23)

Observe that Ã_k, B̃_k, H̃_k and J_k are zero-mean stochastic sequences.

Remark 3. It can be easily seen from (7), (12) and (18) that recursive estimators for s_k can be obtained from recursive estimators for x_k and ω_k. Therefore, our first step in this paper is to find recursive estimators including the filters x̂_{k|k}, ω̂_{k|k}, the predictors x̂_{k+N|k}, ω̂_{k+N|k}, N > 0, and the smoothers x̂_{k|k+N}, ω̂_{k|k+N}, N > 0, for x_k and ω_k, respectively.

3. The main results

Before proceeding further, we first introduce the following preliminary lemma.

Lemma 1. For the state covariance matrix X_{k+1} = E{x_{k+1} x_{k+1}^T}, we have the following result:

X_{k+1} = Ā_k X_k Ā_k^T + σ_k² A_e X_k A_e^T + Ā_k B̄_{k−1} Q_{k−1,k} B̄_k^T + σ_k² A_e B̄_{k−1} Q_{k−1,k} B_e^T
        + B̄_k Q_{k,k−1} B̄_{k−1}^T Ā_k^T + σ_k² B_e Q_{k,k−1} B̄_{k−1}^T A_e^T + B̄_k Q_k B̄_k^T + σ_k² B_e Q_k B_e^T,   (24)

with the initial value X_0 = diag(s̄_{1,0} s̄_{1,0}^T, 0) + diag(P̃_{1,0}, 0).

Proof. Please see Appendix A. □

3.1. Recursive filter

Theorem 1. For the system state x_k and the process noise ω_k, we have the recursive filter as follows:

x̂_{k|k−1} = Ā_{k−1} x̂_{k−1|k−1} + (Λ_{k−1} + Φ_{k−1}) Π_{k−1}^{−1} ε_{k−1} + β_{k−2} B̄_{k−1} S_{k−1,k−2} Π_{k−2}^{−1} ε_{k−2},   (25)
x̂_{k|k} = x̂_{k|k−1} + Ξ_{k,k} Π_k^{−1} ε_k,   (26)
ε_k = y_k − H̄_k x̂_{k|k−1} − β_k β_{k−1} R_{k,k−1} Π_{k−1}^{−1} ε_{k−1},   (27)
Λ_{k−1} = σ_{k−1}² A_e X_{k−1} H_e^T + σ_{k−1}² A_e B̄_{k−2} S_{k−2,k−1},   (28)
k−1 = B¯ k−1 Q kT−2,k−1 B¯ kT−2 H¯ kT−1 + σk2−1 B e Q kT−2,k−1 B¯ kT−2 H eT
3.2. Recursive predictor
+ βk−1 B¯ k−1 S k−1 + σk2−1 B e S k−1
Theorem 2. For the system state xk and process noise ωk , the N-step (N > 1) recursive predictor can be calculated as follows:
− βk−2 B¯ k−1 S k−1,k−2 Πk−−12 ΞkT−1,k−2 H¯ kT−1 − βk−1 βk2−2 B¯ k−1 S k−1,k−2 Πk−−12 R kT−1,k−2 ,
(29)
¯ k+ N −1 xˆ k+ N −1|k + δ N −2 βk B¯ k+1 S k+1,k Π −1 εk , xˆ k+ N |k = A k P k+ N |k =
Πk = H¯ k P k|k−1 H¯ kT + βk R k + βk H¯ k B¯ k−1 S k−1,k
− δ N −2 βk A¯ k+1 Ξk+1,k Πk−1 S kT+1,k B¯ kT+1
+ σk2 H e B¯ k−1 S k−1,k + βk S kT−1,k B¯ kT−1 H¯ kT
+ σk2+ N −1 A e Xk+ N −1 A eT
− βk βk−1 R kT−1,k Πk−−11 ΞkT,k−1 H¯ kT + σk2 S kT−1,k B¯ kT−1 H eT − βk2 βk2−1 R k,k−1 Πk−−11 R kT,k−1 ,
(30)
+ σk2+ N −1 A e B¯ k+ N −2 Q k+ N −2,k+ N −1 B eT
(31)
− δ N −2 βk B¯ k+1 S k+1,k Πk−1 ΞkT+1,k A¯ kT+1
+ B¯ k+ N −1 Q kT+ N −2,k+ N −1 B¯ kT+ N −2 A¯ kT+ N −1
Ξk,k = P k|k−1 H¯ kT + βk B¯ k−1 S k−1,k − βk βk−1 Ξk,k−1 Πk−−11 R kT,k−1 , Ξk,k−1 = A¯ k−1 Ξk−1,k−1 + Λk−1 + k−1 ,
(32)
P k|k = P k|k−1 − Ξk,k Πk−1 ΞkT,k ,
(33)
¯ k−1 P k−1|k−1 A¯ T + A¯ k−1 B¯ k−2 Q k−2,k−1 B¯ T P k|k−1 = A k −1 k −1 − βk−2 Ξk−1,k−2 Πk−−12 S k−2,k−1 B¯ kT−1 + σk2−1 A e Xk−1 A eT + σ 2 A e B¯ k−2 Q k−2,k−1 B eT + B¯ k−2 Q k−2,k−1 B¯ T k −1
k −1
T
− βk2−2 B¯ k−1 S k−1,k−2 Πk−−12 S kT−1,k−2 B¯ kT−1 ,
εk + βk−1 S k,k−1 Πk−−11 εk−1 ,
(34) (35)
Σk,k = Q k,k−1 B¯ kT−1 H¯ kT + βk S k − βk−1 S k,k−1 Πk−−11 ΞkT,k−1 H¯ kT
(39)
where the initial values ε_k, x̂_{k+1|k} and P_{k+1|k} are given by Theorem 1, and the expectation X_{k+1} is calculated as in Lemma 1.
2
Proof. Please see Appendix C.
ωk , the N-step
Ξk,k+ N =
(40)
Ψk+ N H¯ kT+ N
− βk+ N −1 βk+ N Ξk,k+ N −1 Πk−+1N −1 R kT+ N ,k+ N −1 ,
Ψk+ N =
(41)
Ψk+ N −1 A¯ kT+ N −1
− βk+ N −2 Ξk,k+ N −2 Πk−+1N −2 S kT+ N −1,k+ N −2 B¯ kT+ N −1 (36)
where εk is the innovation with covariance Πk , Ξk, j is the covariance matrix between xk and ε j , j = k − 1, k, and Σk,k is the covariance matrix between ωk and εk , X k = E {xk xkT } can be determined as in Lemma 1, xˆ k|k−1 and xˆ k|k are the one-step prediction and filtering for the state xk , respectively, P k|k−1 and P k|k are the one-step prediction and filtering error covariances for the state xk , respectively. The initial values are 1,0 , 0) and ε1 = y 1 − H¯ 1 xˆ 1|0 . xˆ 0|0 = [¯s1T,0 0] T , P 0|0 = diag( P Proof. Please see Appendix B.
(38)
xˆ k|k+ N = xˆ k|k+ N −1 + Ξk,k+ N Πk−+1N εk+ N ,
− (k−1 + Λk−1 )Πk−−11 (Λk−1 + k−1 )T
− βk βk2−1 S k,k−1 Πk−−11 R kT,k−1 ,
− δ N −2 βk2 B¯ k+1 S k+1,k Πk−1 S kT+1,k B¯ kT+1 ,
Theorem 3. For the system state xk and process noises (N > 0) fixed-lag recursive smoother is given by
+ σk2−1 B e Q k−1 B eT
ωˆ k|k = Σk,k Πk
+ B¯ k+ N −1 Q k+ N −1 B¯ kT+ N −1 + σk2+ N −1 B e Q k+ N −1 B eT
3.3. Recursive smoother
¯T A k −1
+ σk2−1 B e Q kT−2,k−1 B¯ kT−2 A eT + B¯ k−1 Q k−1 B¯ kT−1
−1
+ σk2+ N −1 B e Q kT+ N −2,k+ N −1 B¯ kT+ N −2 A eT
ωˆ k+N |k = 0,
− Ξk−1,k−1 Πk−−11 kT−1
− βk−2 Ξk−1,k−2 Πk−−12 S k−2,k−1 B¯ kT−1
(37)
¯ k+ N −1 P k+ N −1|k A¯ T A k + N −1
+ A¯ k+ N −1 B¯ k+ N −2 Q k+ N −2,k+ N −1 B¯ kT+ N −1
− βk βk−1 H¯ k Ξk,k−1 Πk−−11 R k−1,k + σk2 H e Xk H eT
− Ξk−1,k−1 Πk−−11 kT−1
2
Remark 4. In the standard case, innovations are calculated as ε_k = y_k − H_k x̂_{k|k−1}. However, due to the possible multiple packet dropouts and the autocorrelated measurement noises, this is not true for the problem at hand, and we need to recalculate the innovations as in (27). Furthermore, it can be seen that the last two terms on the right-hand side of (25), the last eight terms on the right-hand side of (30), the last two terms on the right-hand side of (31) and the last nine terms on the right-hand side of (34) are all caused by the possible multiple packet dropouts and the autocorrelated or cross-correlated noises. These terms constitute the main difference between our work and the classic Kalman filter. In addition, a noise filter for ω_k is also developed in Theorem 1.
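To make this difference concrete, the following is a minimal sketch of the innovation computation (27) only; it assumes that the one-step prediction x̂_{k|k−1}, the previous innovation ε_{k−1} with covariance Π_{k−1}, and the known statistics H̄_k, R_{k,k−1}, β_k, β_{k−1} are already available as NumPy arrays, and the function name is ours, not the paper's.

```python
import numpy as np

def innovation(y_k, H_bar_k, x_pred, R_k_km1, beta_k, beta_km1, Pi_km1, eps_km1):
    """Innovation of Eq. (27): the standard residual y_k - H_bar_k x_pred is
    corrected for the one-step autocorrelated measurement noise through the
    previous innovation, weighted by the dropout probabilities beta_k, beta_{k-1}."""
    correction = beta_k * beta_km1 * R_k_km1 @ np.linalg.solve(Pi_km1, eps_km1)
    return y_k - H_bar_k @ x_pred - correction
```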
− Ξk,k+ N −1 Πk−+1N −1 ΞkT+ N ,k+ N −1 + δ N −1 B¯ k−1 Q k−1,k B¯ kT , P k|k+ N =
ωˆ k|k+N =
(42)
P k|k+ N −1 − Ξk,k+ N Πk−+1N ΞkT,k+ N , k
+N
(43)
Σk,i Πi−1 εi ,
(44)
i =k−1
Σk,i = Θk,i − k,i ,
i = k + 1, k + 2, . . . , k + N , l −1
Θk,k+l = Q k,k−1 B¯ kT−1 + Q k B¯ kT
j =0
l −1
¯ T H¯ T A k+ j k+l
¯ k+ j (1 − δ j ) I + δ j A¯ k+ j A
j =0
+ Q k,k+1 B¯ kT+1 + δ1− j A¯ k+ j k,k+t =
k+ t −1
i =k−1
(45)
l −1
+ T
¯T H k+l
¯ k+ j (1 − δ j ) (1 − δ1− j ) I A
j =0
+ T
¯ T H¯ T , A k+ j k+l
Σk,i Πi−1 ΩtT,i ,
l = 1, 2, . . . , N ,
t = 1, 2, . . . , N ,
(46) (47)
Ωt ,k+ j = H¯ k+t
−1 t
¯ k+ j (1 − δ j −t +1 ) I A 1
j 1 = j +1
+ δ j −t +1 A¯ k+ j 1 −1 t
+ H¯ k+t
+
Ξk+ j +1,k+ j
¯ k+ j (1 − δ j −t +1 ) (1 − δk−t +2 ) I A 1
j 1 = j +1
+ δk−t +2 A¯ k+ j 1
+
B¯ k+ j +1 Σk+ j +1,k+ j
+ δ j −t +1 βk+t βk+t −1 R k+t ,k+t −1 , t = 1, . . . , N , j = −1, 0, . . . , t − 1,
(48)
where Πi , εi , Σk,k−1 , Σk,k , Ξk,k−1 , Ξk+1,k , xˆ k|k and Ψk = P k|k−1 are given by Theorem 1. Proof. Please see Appendix D.
□
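As an illustration of the structure of (44), the fixed-lag noise smoother is simply an innovation-weighted sum; the sketch below assumes the smoothing gains Σ_{k,i} and the innovations ε_i with covariances Π_i have already been computed from Theorems 1 and 3, and stores them in dictionaries purely as an illustrative choice.

```python
import numpy as np

def noise_smoother(Sigma, Pi, eps, k, N):
    """Fixed-lag noise smoother of Eq. (44):
    omega_hat_{k|k+N} = sum_{i=k-1}^{k+N} Sigma[(k, i)] Pi[i]^{-1} eps[i],
    where Sigma, Pi and eps are dictionaries indexed by time instants."""
    omega_hat = np.zeros_like(Sigma[(k, k)] @ np.linalg.solve(Pi[k], eps[k]))
    for i in range(k - 1, k + N + 1):
        omega_hat += Sigma[(k, i)] @ np.linalg.solve(Pi[i], eps[i])
    return omega_hat
```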
Theorem 4. For the initial descriptor system (1)–(3), we have the following recursive estimator:

ŝ_{·|·} = V [ ŝ_{1,·|·} ; ŝ_{2,·|·} ],   ŝ_{1,·|·} = [ I_{n_1}  0 ] x̂_{·|·},
ŝ_{2,·|·} = Γ_1 ŝ_{1,·|·} + Γ_2 ω̃̂_{·|·},   ω̃̂_{·|·} = [ I_q  0 ] ω̂_{·|·},   (49)

where ŝ_{·|·} is the estimate of s_k, the dots denote the corresponding time instants, x̂_{·|·} and ω̂_{·|·} are calculated by Theorems 1, 2 and 3, and V, Γ_1 and Γ_2 are given as before.

Proof. Theorem 4 follows directly from Theorems 1, 2 and 3, and (7), (12) and (18). □

Remark 5. In this section, based on an innovation analysis approach and the OPT, we have derived recursive estimators including filter, predictor and smoother for the system (19)–(20) and the process noise ω_k. Although the structures of the newly developed recursive state estimators (25)–(36), (37)–(39) and (40)–(48) look much like those of the classic Kalman estimators, they are in fact quite different. The differences are mainly caused by the multiple packet dropouts phenomenon and the correlated noises. Finally, based on Theorems 1, 2 and 3, recursive estimators including filter, predictor and smoother are obtained in Theorem 4 for the original descriptor system (1)–(3).

Fig. 1. The signal s_{1,k} and filter ŝ_{1,k|k}.
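A minimal sketch of the assembly step (49) is given below; it assumes that x̂ stacks [ŝ_1; y] and ω̂ stacks the estimates of [ω̃; v̄] as in (18), that the estimates for the chosen time pair have already been produced by Theorems 1–3, and that V, Γ_1, Γ_2, n_1 and q come from the decomposition of Section 2.

```python
import numpy as np

def assemble_descriptor_estimate(x_hat, omega_hat, V, Gamma1, Gamma2, n1, q):
    """Recover the descriptor-state estimate via Eq. (49):
    s1_hat from x_hat, the original process-noise estimate from omega_hat,
    s2_hat from the algebraic relation (12), and finally s_hat = V [s1; s2]."""
    s1_hat = x_hat[:n1]                       # s1_hat = [I_{n1} 0] x_hat
    omega_tilde_hat = omega_hat[:q]           # [I_q 0] omega_hat
    s2_hat = Gamma1 @ s1_hat + Gamma2 @ omega_tilde_hat
    return V @ np.concatenate([s1_hat, s2_hat])
```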
4. An illustrative example

Consider the following descriptor system with multiple packet dropouts and correlated noises:

[ 1  0  0 ; 0  1  0 ; 0  0  0 ] [ z_{k+1} ; s_{3,k+1} ] = [ 0.5  2  −0.85 ; 1  0  0.5 ; 0  0.5  1.5 ] [ z_k ; s_{3,k} ] + [ 1 ; 1 ; 0.2 ] ω̃_k,   (50)
ȳ_k = [ 0.5  0.9  0.6 ] [ z_k ; s_{3,k} ] + v_k,   (51)
y_k = λ_k ȳ_k + (1 − λ_k) y_{k−1},   (52)
v_k = ω̃_k + ζ_k + ζ_{k−1},   (53)

where z_k = [s_{1,k} s_{2,k}]^T and s_{i,k} ∈ R, i = 1, 2, 3. ω̃_k ∈ R and ζ_k ∈ R are zero-mean Gaussian white noises with covariances 1 and 0.5, respectively. Without loss of generality, the measurement noise v_k is chosen as defined in (53). λ_k ∈ R is a binary switching sequence taking the value 1 with Prob{λ_k = 1} = E{λ_k} = β_k = 0.8.

Our objective is to find the recursive filter ŝ_{i,k|k}, predictor ŝ_{i,k+2|k} and smoother ŝ_{i,k|k+1}, i = 1, 2, 3, and to give a comparison of their accuracies. In the simulation, the initial value z_0 has mean E(z_0) = [2 0]^T and covariance P̃_0 = diag(2, 1). Let MSE1 denote the mean square error for the estimation of s_{1,k}, i.e., (1/K) Σ_{k=1}^{K} (s_{1,k} − ŝ_{1,·|∗})², where K is the number of samples. Similarly, MSE2 is the mean square error for the estimation of s_{2,k}, i.e., (1/K) Σ_{k=1}^{K} (s_{2,k} − ŝ_{2,·|∗})², and MSE3 is the mean square error for the estimation of s_{3,k}, i.e., (1/K) Σ_{k=1}^{K} (s_{3,k} − ŝ_{3,·|∗})².

Figs. 1–12 show the simulation results.

Fig. 2. The signal s_{1,k} and predictor ŝ_{1,k+2|k}.
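For reference, the following is a minimal sketch of the data-generating part of this experiment: it simulates (50)–(53), including the algebraic constraint, the autocorrelated measurement noise and the packet-dropout model, and indicates how a mean square error of the form used for MSE1–MSE3 would be evaluated. The estimator call is only a placeholder, since the recursions of Theorems 1–3 are not reproduced here, and z_0 is simply set to its mean for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
K, beta = 100, 0.8                                    # samples, Prob{lambda_k = 1}
F = np.array([[0.5, 2.0, -0.85],
              [1.0, 0.0,  0.5],
              [0.0, 0.5,  1.5]])
G = np.array([1.0, 1.0, 0.2])
C = np.array([0.5, 0.9, 0.6])
F11, F12 = F[:2, :2], F[:2, 2]                        # E = diag(I_2, 0) is already in
F21, F22 = F[2, :2], F[2, 2]                          # the form (6), so U = V = I here
G1, G2 = G[:2], G[2]

z = np.array([2.0, 0.0])                              # z_0 set to its mean [2 0]^T
zeta_prev, y_recv = 0.0, 0.0
states, y_received = [], []
for k in range(K):
    omega = rng.standard_normal()                     # tilde{omega}_k, covariance 1
    zeta = np.sqrt(0.5) * rng.standard_normal()       # zeta_k, covariance 0.5
    s3 = -(F21 @ z + G2 * omega) / F22                # algebraic constraint (9)/(12)
    s = np.array([z[0], z[1], s3])
    v = omega + zeta + zeta_prev                      # autocorrelated noise (53)
    y_sensor = C @ s + v                              # sensor output (51)
    lam = rng.random() < beta                         # dropout indicator lambda_k
    y_recv = y_sensor if lam else y_recv              # received measurement (52)
    states.append(s); y_received.append(y_recv)
    z = F11 @ z + F12 * s3 + G1 * omega               # dynamic part of (50)
    zeta_prev = zeta

# s_hat = run_estimator(y_received)                   # placeholder for Theorems 1-3
# mse1 = np.mean([(s[0] - sh[0]) ** 2 for s, sh in zip(states, s_hat)])
```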
Fig. 3. The signal s1,k and smoother sˆ 1,k|k+1 .
Fig. 6. The signal s2,k and predictor sˆ 2,k+2|k .
Fig. 4. MSE1.
Fig. 7. The signal s2,k and smoother sˆ 2,k|k+1 .
Fig. 5. The signal s2,k and filter sˆ 2,k|k .
Fig. 8. MSE2.
Fig. 9. The signal s3,k and filter sˆ 3,k|k .
Fig. 11. The signal s3,k and smoother sˆ 3,k|k+1 .
Fig. 12. MSE3.
Fig. 10. The signal s3,k and predictor sˆ 3,k+2|k .
From Figs. 1–12, we can see that: 1) the recursive estimators including filter, predictor and smoother developed in this paper have good performance for descriptor systems, because efforts have been made to compensate for the possible multiple packet dropouts phenomenon and the correlated noises; and 2) the smoother has the best performance and the predictor the worst, because the most measurement information is used in the smoother and the least in the predictor.

5. Conclusions

In this paper, we have studied the recursive estimation problem for a class of descriptor systems with multiple packet dropouts and correlated noises. The multiple packet dropouts phenomenon is described by a binary switching sequence satisfying a conditional probability distribution. The measurement noise is characterized by the covariances between different time instants. The descriptor system is transformed into a regular linear system with an algebraic constraint. By applying an innovation analysis approach and the OPT, recursive estimators including filter, predictor and smoother have been developed for each subsystem and for the process noise. Further, the recursive filter, predictor and smoother have been obtained for the original descriptor system with multiple packet dropouts and correlated noises. An illustrative example has been exploited to show the effectiveness of the proposed recursive estimators.

Appendix A. The proof of Lemma 1

Proof. From (18), (19) and (23), we have
X k+1 = E ( A k xk + B k ωk )( A k xk + B k ωk ) T
= E ( A¯ k + A˜ k )xk + ( B¯ k + B˜ k )ωk T × ( A¯ k + A˜ k )xk + ( B¯ k + B˜ k )ωk = E ( A¯ k + J k A e )xk + ( B¯ k + J k B e )ωk T × ( A¯ k + J k A e )xk + ( B¯ k + J k B e )ωk = E ( A¯ k + J k A e )xk xkT ( A¯ k + J k A e )T + E ( A¯ k + J k A e )xk ωkT ( B¯ k + J k B e )T + E ( B¯ k + J k B e )ωk xkT ( A¯ k + J k A e )T + E ( B¯ k + J k B e )ωk ωkT ( B¯ k + J k B e )T
= E A¯ k xk xkT A¯ kT + E A¯ k xk xkT A eT E { J k } + E { J k }E A e xk xkT A¯ kT + E J k2 E A e xk xkT A eT + E A¯ k xk ωkT B¯ kT + E A¯ k xk ωkT B eT E { J k } + E { J k }E A e xk ωkT B¯ kT + E J k2 E A e xk ωkT B eT + E B¯ k ωk xkT A¯ kT + E { J k }E B e ωk xkT A¯ kT + E B¯ k ωk xkT A eT E { J k } + E J k2 E B e ωk xkT A eT + E B¯ k ωk ωkT B¯ kT + E B¯ k ωk ωkT B eT E { J k } + E { J k }E B e ωk ωkT B¯ kT + E J k2 E B e ωk ωkT B eT = A¯ k Xk A¯ kT + E J k2 A e Xk A eT + A¯ k E xk ωkT B¯ kT + E J k2 A e E xk ωkT B eT + B¯ k E ωk xkT A¯ kT + E J k2 B e E ωk xkT A eT + B¯ k Q k B¯ kT + E J k2 B e Q k B eT ,
= A¯ k−1 xˆ k−1|k−1 + (Λk−1 + k−1 )Πk−−11 εk−1 + βk−2 B¯ k−1 S k−1,k−2 Πk−−12 εk−2 ,
˜ k−1 is zero Taking into account (20), (23) and the fact that matrix A mean, we have
Λk−1 = E A˜ k−1 xk−1 εkT−1 = E A˜ k−1 xk−1 ykT−1 k −2 T
−1 T ˜ − E A k−1 xk−1 E y k −1 ε Π ε i i
(54)
2
k 2.
(55)
i =1
Using (20)–(21) and Assumption 2, the expectation E { yk εiT } can be calculated as follows:
i = k − 1.
(56)
k −1
E xk εiT Πi−1 εi + βk βk−1 R k,k−1 Πk−−11 εk−1 i =1
= H¯ k xˆ k|k−1 + βk βk−1 R k,k−1 Πk−−11 εk−1 .
(57)
Subtracting (57) from (20) yields (27). From the OPT, the one-step prediction xˆ k|k−1 has the following form:
xˆ k|k−1 =
k −1
E xk εiT Πi−1 εi i =1
k −1
= E ( A k−1 xk−1 + B k−1 ωk−1 )εiT Πi−1 εi i =1 k −1
= E ( A¯ k−1 + A˜ k−1 )xk−1 εiT Πi−1 εi i =1
+
k −1
E B k−1 ωk−1 εiT Πi−1 εi i =1
= σk2−1 A e Xk−1 H eT + σk2−1 A e B¯ k−2 S k−2,k−1 .
(59)
B¯ k−2 Q k−2,k−1 and E {ωk−1 εkT−2 } = βk−2 S k−1,k−2 , we have from (20) and the OPT that
k−1 = E B k−1 ωk−1 εkT−1 = E B k−1 ωk−1 ykT−1 k −2 T
−1 T − E B k−1 ωk−1 E y k −1 ε i Π i ε i
Substituting (56) into (55), we have
¯k yˆ k|k−1 = H
= E A˜ k−1 xk−1 ( H k−1 xk−1 + λk−1 v k−1 )T = E A˜ k−1 xk−1 xkT−1 ( H˜ k−1 + H¯ k−1 )T + E λk−1 A˜ k−1 xk−1 v kT−1 = E J k2−1 A e xk−1 xkT−1 H eT + E λk−1 J k−1 A e xk−1 v kT−1 Furthermore, considering the fact that ωk−1 is one-step autocorrelated and one-step cross-correlated with v k−1 , E {xk−1 ωkT−1 } =
Proof. According to the OPT, the one-step prediction for yk can be calculated as follows:
E yk εiT = H¯ k E xk εiT , i k − 2, E yk εkT−1 = H¯ k E xk εkT−1 + βk βk−1 R k,k−1 ,
i
i =1
Appendix B. The proof of Theorem 1
k −1
yˆ k|k−1 = E yk εiT Πi−1 εi ,
(58)
˜ k−1 xk−1 ε T } and k−1 = E { B k−1 ωk−1 ε T }. where Λk−1 = E { A k−1 k−1
where the last equality holds since ωk is one-step autocorrelated and J k is zero mean. Using the facts that E {xk ωkT } = B¯ k−1 Q k−1,k and E { J k2 } = σk2 , we obtain (24).
= A¯ k−1 xˆ k−1|k−1 + E A˜ k−1 xk−1 εkT−1 Πk−−11 εk−1 + E B k−1 ωk−1 εkT−1 Πk−−11 εk−1 + E B k−1 ωk−1 εkT−2 Πk−−12 εk−2
i =1
T k −1 y k −1
= E B k −1 ω T − E B k−1 ωk−1 εkT−2 Πk−−12 E yk−1 εkT−2 = E B k−1 ωk−1 xkT−1 H kT−1 + E B k−1 ωk−1 v kT−1 λk−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 E εk−2 xkT−1 H¯ kT−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 E λk−1 εk−2 v kT−1 = B¯ k−1 E ωk−1 xkT−1 H¯ kT−1 + E B˜ k−1 ωk−1 xkT−1 H˜ kT−1 + βk−1 B¯ k−1 E ωk−1 v kT−1 + E B˜ k−1 ωk−1 v kT−1 J k−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 ΞkT−1,k−2 H¯ kT−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 E λk−1 εk−2 v kT−1 = B¯ k−1 Q kT−2,k−1 B¯ kT−2 H¯ kT−1 + σk2−1 B e Q kT−2,k−1 B¯ kT−2 H eT + βk−1 B¯ k−1 S k−1 + σk2−1 B e S k−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 ΞkT−1,k−2 H¯ kT−1 − βk−1 βk2−2 B¯ k−1 S k−1,k−2 Πk−−12 R kT−1,k−2 ,
(60)
where the expectation Ξk−1,k−2 = E {xk−1 εkT−2 } can be determined as in (69). It follows from (19) and (58) that the one-step prediction error x˜ k|k−1 is given by
x˜ k|k−1 = xk − xˆ k|k−1
= A k−1 xk−1 + B k−1 ωk−1 − A¯ k−1 xˆ k−1|k−1 − (Λk−1 + k−1 )Πk−−11 εk−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 εk−2
= A¯ k−1 x˜ k−1|k−1 + A˜ k−1 xk−1 + B k−1 ωk−1
xˆ k|k =
− (Λk−1 + k−1 )Πk−−11 εk−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 εk−2 ,
i =1
(61)
where x˜ k−1|k−1 = xk−1 − xˆ k−1|k−1 is the filtering error at time k − 1. From (61), the one-step prediction error covariance P k|k−1 can be obtained as follows:
P k|k−1 = E {˜xk|k−1 x˜ kT|k−1 }
= A¯ k−1 P k−1|k−1 A¯ kT−1 + A¯ k−1 E x˜ k−1|k−1 ωkT−1 B kT−1 + E A˜ k−1 xk−1 xkT−1 A˜ kT−1 + E A˜ k−1 xk−1 ωkT−1 B kT−1 − E A˜ k−1 xk−1 εkT−1 Πk−−11 (Λk−1 + k−1 )T + E B k−1 ωk−1 x˜ kT−1|k−1 A¯ kT−1 + E B k−1 ωk−1 xkT−1 A˜ kT−1 + E B k−1 ωk−1 ωkT−1 B kT−1 − E B k−1 ωk−1 εkT−1 Πk−−11 (Λk−1 + k−1 )T − E B k−1 ωk−1 εkT−2 Πk−−12 S kT−1,k−2 B¯ kT−1 βk−2 + (k−1 + Λk−1 )Πk−−11 (Λk−1 + k−1 )T − (Λk−1 + k−1 )Πk−−11 E εk−1 xkT−1 A˜ kT−1 − (Λk−1 + k−1 )Πk−−11 E εk−1 ωkT−1 B kT−1 − βk−2 B¯ k−1 S k−1,k−2 Πk−−12 E εk−2 ωkT−1 B kT−1 + βk2−2 B¯ k−1 S k−1,k−2 Πk−−12 S kT−1,k−2 B¯ kT−1 ,
(62)
i =1
= B¯ k−2 Q k−2,k−1 B¯ kT−1 − E xk−1 εkT−1 Πk−−11 E εk−1 ωkT−1 B kT−1 − E xk−1 εkT−2 Πk−−12 E εk−2 ωkT−1 B kT−1 = B¯ k−2 Q k−2,k−1 B¯ kT−1 − Ξk−1,k−1 Πk−−11 kT−1 − βk−2 Ξk−1,k−2 Πk−−12 S k−2,k−1 B¯ kT−1 ,
(63)
=σ
(64)
(65)
= xˆ k|k−1 + Ξk,k Πk εk . Since
(67)
ωk and v k are one-step cross-correlated across time,
E {λk xk v kT } = βk B¯ k−1 S k−1,k and xˆ k|k−1 is orthogonal to x˜ k|k−1 , it fol-
lows from (21), (23) and (27) that
Ξk,k = E xk H¯ k x˜ k|k−1 + H˜ k xk + λk v k T − βk βk−1 R k,k−1 Πk−−11 εk−1 = P k|k−1 H¯ kT + E λk xk v kT − βk βk−1 Ξk,k−1 Πk−−11 R kT,k−1 = P k|k−1 H¯ kT + βk B¯ k−1 S k−1,k − βk βk−1 Ξk,k−1 Πk−−11 R kT,k−1 .
(68)
Taking into account (19), (21) and (23), the expectation Ξk,k−1 can be determined as follows:
Ξk,k−1 = E ( A k−1 xk−1 + B k−1 ωk−1 )εkT−1 = E ( A¯ k−1 + A˜ k−1 )xk−1 εkT−1 + E B k−1 ωk−1 εkT−1 = A¯ k−1 E xk−1 εkT−1 + E A˜ k−1 xk−1 εkT−1 + E B k−1 ωk−1 εkT−1 (69)
Using (23), (27) and the facts that x˜ k|k−1 is uncorrelated with
εk−1 and E { v k εkT−1 } = βk−1 R k,k−1 , the innovation covariance Πk can be calculated as follows:
¯ k x˜ k|k−1 + H˜ k xk + λk v k − βk βk−1 R k,k−1 Π −1 εk−1 H k −1
T × H¯ k x˜ k|k−1 + H˜ k xk + λk v k − βk βk−1 R k,k−1 Πk−−11 εk−1 = H¯ k P k|k−1 H¯ kT + βk H¯ k Υk + E H˜ k xk xkT H˜ kT + E λk H˜ k xk v kT + βk ΥkT H¯ kT + E λk v k xkT H˜ kT + E λk2 v k v kT − βk βk−1 E λk v k εkT−1 Πk−−11 R kT,k−1 − βk βk−1 R k,k−1 Πk−−11 E εk−1 v kT λkT + βk2 βk2−1 R k,k−1 Πk−−11 R kT,k−1
= H¯ k P k|k−1 H¯ kT + βk H¯ k Υk + σk2 H e Xk H eT + σk2 H e B¯ k−1 S k−1,k − βk2 βk2−1 R k,k−1 Πk−−11 R kT,k−1 ,
(70)
Υk = E xk v kT − E xˆ k|k−1 v kT k−1
− 1 = B¯ k−1 S k−1,k − E Ξk,i Π εi v T i
k
i =1
= B¯ k−1 S k−1,k − Ξk,k−1 Πk−−11 E εk−1 v kT
E B k−1 ωk−1 ωkT−1 B kT−1 = E ( B¯ k−1 + B˜ k−1 )ωk−1 ωkT−1 ( B¯ k−1 + B˜ k−1 )T = E B¯ k−1 ωk−1 ωkT−1 B¯ kT−1 + E B˜ k−1 ωk−1 ωkT−1 B˜ kT−1
= B¯ k−1 Q k−1 B¯ kT−1 + σk2−1 B e Q k−1 B eT ,
i =1
where the expectation Υk = E {˜xk|k−1 v kT } is given as follows:
E A˜ k−1 xk−1 ωkT−1 B kT−1 = E A˜ k−1 xk−1 ωkT−1 B˜ kT−1 = E J k−1 A e B k−2 ωk−2 ωkT−1 B eT J kT−1 = σk2−1 A e B¯ k−2 Q k−2,k−1 B eT ,
Ξk,i Πi−1 εi + Ξk,k Πk−1 εk
+ βk ΥkT H¯ kT + σk2 S kT−1,k B¯ kT−1 H eT + βk R k
E A˜ k−1 xk−1 xkT−1 A˜ kT−1 = E J k−1 A e xk−1 xkT−1 A eT J kT−1 2 T k −1 A e X k −1 A e ,
k −1
−1
Πk = E
E x˜ k−1|k−1 ω = E xk−1 ωkT−1 B kT−1 k−1
−1 T T T E xk−1 εi Πi εi ωk−1 B k−1 −E T T k −1 B k −1
Ξk,i Πi−1 εi =
= A¯ k−1 Ξk−1,k−1 + Λk−1 + k−1 .
where E {εk−2 ωkT−1 B kT−1 } can be obtained as in (58) and the remaining expectations can be calculated as follows:
k
= B¯ k−1 S k−1,k − βk−1 Ξk,k−1 Πk−−11 R k−1,k . (66)
substituting (63)–(66) into (62) leads to (34). Again, by using the OPT, the state estimation xˆ k|k can be obtained as follows:
(71)
Substituting (71) into (70) yields (30). According to (67), the filtering error x˜ k|k has the following form:
x˜ k|k = xk − xˆ k|k = x˜ k|k−1 − Ξk,k Πk−1 εk ,
(72)
then the filtering error covariance can be calculated as follows:
P k|k = E
x˜ k|k−1 − Ξk,k Πk−1 εk x˜ k|k−1 − Ξk,k Πk−1 εk
= P k|k−1 − E
x˜ k|k−1 kT
−1
ε Πk
T
then, the two-step prediction error covariance is calculated as follows:
ΞkT,k
− Ξk,k Πk−1 E εk x˜ kT|k−1 + Ξk,k Πk−1 ΞkT,k
P k+2|k = E x˜ k+2|k x˜ kT+2|k
= P k|k−1 − Ξk,k Πk−1 ΞkT,k ,
(73)
where E {˜xk|k−1 εkT } = Ξk,k has been applied. Again, by using the OPT, we have the following filter for
ωˆ k|k =
ωk :
k
E ωk εiT Πi−1 εi .
(74)
i =1
Denote Σi , j = E {ωi ε Tj }. For the expectation E {ωk εiT }, we have the following results:
Σk,k = E ωk ykT − E ωk yˆ kT|k−1 = E ωk xkT H kT + E ωk v kT λk k −1
T
−1 T − E ωk E y k εi Πi εi
= A¯ k+1 P k+1|k A¯ kT+1 + A¯ k+1 E x˜ k+1|k ωkT+1 B kT+1 + E A˜ k+1 xk+1 xkT+1 A˜ kT+1 + E A˜ k+1 xk+1 ωkT+1 B kT+1 + E B k+1 ωk+1 x˜ kT+1|k A¯ kT+1 + E B k+1 ωk+1 xkT+1 A˜ kT+1 + E B k+1 ωk+1 ωkT+1 B kT+1 − βk E B k+1 ωk+1 εkT Πk−1 S kT+1,k B¯ kT+1 − βk B¯ k+1 S k+1,k Πk−1 E εk ωkT+1 B kT+1 + βk2 B¯ k+1 S k+1,k Πk−1 S kT+1,k B¯ kT+1 .
(82)
¯ k+1 P k+1|k A¯ T + A¯ k+1 B¯ k Q k,k+1 B¯ T P k+2|k = A k +1 k +1
− βk A¯ k+1 Ξk+1,k Πk−1 S kT+1,k B¯ kT+1 + σk2+1 A e Xk A eT + σk2+1 A e B¯ k Q k,k+1 B eT + B¯ k+1 Q kT,k+1 B¯ kT A¯ kT+1
= Q k,k−1 B¯ kT−1 H¯ kT + βk S k − E ωk εkT−1 Πk−−11 E εk−1 ykT
− βk B¯ k+1 S k+1,k Πk−1 ΞkT+1,k A¯ kT+1 + σk2+1 B e Q kT,k+1 B¯ kT A eT
= Q k,k−1 B¯ kT−1 H¯ kT + βk S k − βk−1 S k,k−1 Πk−−11 ΞkT,k−1 H¯ kT − βk βk2−1 S k,k−1 Πk−−11 R kT,k−1 ,
(75)
Σk,k−1 = E ωk εkT−1 = βk−1 S k,k−1 , E ωk εiT = 0, i k − 2.
(76)
Furthermore, taking into account (82) and the facts that E {˜xk+1|k ωkT+1 } = B¯ k Q k,k+1 − βk Ξk+1,k Πk−1 S kT+1,k and E {ωk+1 εkT } = βk S k+1,k , we have
i =1
(77)
Substituting (75)–(77) into (74), we obtain (36) which completes the proof. 2
+ σk2+1 B e Q k+1 B eT + B¯ k+1 Q k+1 B¯ kT+1 − βk2 B¯ k+1 S k+1,k Πk−1 S kT+1,k B¯ kT+1 ,
N = 2.
Similarly, the N-step (N > 2) prediction error is written as follows:
¯ k+ N −1 xˆ k+ N −1|k x˜ k+ N |k = A k+ N −1 xk+ N −1 + B k+ N −1 ωk+ N −1 − A = A¯ k+ N −1 x˜ k+ N −1|k + A˜ k+ N −1 xk+ N −1 + B k+ N −1 ωk+ N −1 , (84)
Appendix C. The proof of Theorem 2 Proof. It follows from the OPT and (19) that the N-step prediction xˆ k+ N |k can be computed as follows:
xˆ k+ N |k =
k
E xk+ N εiT Πi−1 εi i =1
=
k
E A k+ N −1 xk+ N −1 εiT Πi−1 εi i =1
+
k
E B k+ N −1 ωk+ N −1 εiT Πi−1 εi .
(78)
i =1
Further, using (78) and the fact that with v k , we have
ωk is one-step cross-correlated
¯ k+1 xˆ k+1|k + βk B¯ k+1 S k+1,k Π −1 εk , xˆ k+2|k = A k
(79)
¯ k+ N −1 xˆ k+ N −1|k , xˆ k+ N |k = A
(80)
N > 2.
Combining (79) and (80) yields (37). According to (19) and (79), the two-step prediction error is given by
then, the N-step prediction error covariance is calculated as follows:
P k+ N |k = E x˜ k+ N |k x˜ kT+ N |k
= A¯ k+ N −1 P k+ N −1|k A¯ kT+ N −1 + A¯ k+ N −1 E x˜ k+ N −1|k ωkT+ N −1 B kT+ N −1 + E A˜ k+ N −1 xk+ N −1 xkT+ N −1 A˜ kT+ N −1 + E A˜ k+ N −1 xk+ N −1 ωkT+ N −1 B kT+ N −1 + E B k+ N −1 ωk+ N −1 x˜ kT+ N −1|k A¯ kT+ N −1 + E B k+ N −1 ωk+ N −1 xkT+ N −1 A˜ kT+ N −1 + E B k+ N −1 ωk+ N −1 ωkT+ N −1 B kT+ N −1 ,
N > 2.
(85)
Furthermore, considering the fact that E {˜xk+ N −1|k ωkT+ N −1 B kT+ N −1 }
= B¯ k+N −2 Q k+N −2,k+N −1 B¯ kT+N −1 , we have
¯ k+ N −1 P k+ N −1|k A¯ T P k+ N |k = A k + N −1
+ A¯ k+ N −1 B¯ k+ N −2 Q k+ N −2,k+ N −1 B¯ kT+ N −1 + σk2+ N −1 A e Xk+ N −1 A eT
+ σk2+ N −1 A e B¯ k+ N −2 Q k+ N −2,k+ N −1 B eT
+ B¯ k+ N −1 Q kT+ N −2,k+ N −1 B¯ kT+ N −2 A¯ kT+ N −1
¯ k+1 xˆ k+1|k x˜ k+2|k = A k+1 xk+1 + B k+1 ωk+1 − A − βk B¯ k+1 S k+1,k Πk−1 εk
+ σk2+ N −1 B e Q kT+ N −2,k+ N −1 B¯ kT+ N −2 A eT
= A¯ k+1 x˜ k+1|k + A˜ k+1 xk+1 + B k+1 ωk+1 − βk B¯ k+1 S k+1,k Πk−1 εk ,
(83)
+ B¯ k+ N −1 Q k+ N −1 B¯ kT+ N −1 (81)
+ σk2+ N −1 B e Q k+ N −1 B eT ,
N > 2.
(86)
Combining (83) and (86) yields (38). At last, (39) follows directly from the OPT and the fact that ωk is one-step cross-correlated with v k . 2 Appendix D. The proof of Theorem 3 Proof. Again, by applying the OPT, the N-step fixed-lag smoother is calculated as follows:
xˆ k|k+ N =
k+ N −1
T
−1
E xk εi Πi εi + E
i =1
xk kT+ N
ε
Πk−+1N εk+ N
= xˆ k|k+ N −1 + Ξk,k+ N Πk−+1N εk+ N , where the expectation Ξk,k+ N = follows:
E {xk kT+N }
ε
can be determined as
Ξk,k+ N = E xk H¯ k+ N x˜ k+ N |k+ N −1 + H˜ k+ N xk+ N + λk+ N v k+ N T − βk+ N −1 βk+ N R k+ N ,k+ N −1 Πk−+1N −1 εk+ N −1 = E xk H¯ k+ N x˜ k+ N |k+ N −1 T − βk+ N −1 βk+ N R k+ N ,k+ N −1 Πk−+1N −1 εk+ N −1 (88)
T − βk−1 B¯ k S k,k−1 Πk−−11 εk−1 − Ξk+1,k Πk−1 εk = E xk x˜ kT|k−1 A¯ kT + E xk ωkT B kT − βk−1 E xk εkT−1 Πk−−11 S kT,k−1 B¯ kT − E xk εkT Πk−1 ΞkT+1,k = P k|k−1 A¯ kT + B¯ k−1 Q k−1,k B¯ kT − βk−1 Ξk,k−1 Πk−−11 S kT,k−1 B¯ kT (89)
+ B k+ N −1 ωk+ N −1 − βk+ N −2 B¯ k+ N −1 S k+ N −1,k+ N −2 Πk−+1N −2 εk+ N −2 T − Ξk+ N ,k+ N −1 Πk−+1N −1 εk+ N −1 = E xk A¯ k+ N −1 x˜ k+ N −1|k+ N −2
E ωk εiT Πi−1 εi
i =1 k −2
k
+N
Σk,i Πi−1 εi +
k
+N
Σk,i Πi−1 εi
i =k−1
Σk,i Πi−1 εi ,
(93)
where Σk,k−1 and Σk,k are given in Theorem 1, and for k + 1 i k + N, the expectation Σk,i = E {ωk εiT } can be determined as follows:
Σk,i = E ωk εiT = E ωk ( y i − yˆ i |i −1 )T = Θk,i − k,i ,
(94)
ˆ iT|i −1 }. ky
where Θk,i = E {ω and k,i = E {ω Since ωk is onestep autocorrelated and cross-correlated with v k , it follows from (20)–(21) that T k yi }
Θk,k+1 = E ωk ( H k+1 xk+1 + λk+1 v k+1 )T = E ωk xkT+1 H¯ kT+1 + βk+1 S k,k+1 = E ωk xkT A¯ kT H¯ kT+1 + Q k B¯ kT H¯ kT+1 + βk+1 S k,k+1
= Q k,k−1 B¯ kT−1 A¯ kT H¯ kT+1 + Q k B¯ kT H¯ kT+1 + βk+1 S k,k+1 , Θk,k+2 = E ωk ( H k+2 xk+2 + λk+2 v k+2 )T = E ωk xkT+1 A¯ kT+1 H¯ kT+2 + Q k,k+1 B¯ kT+1 H¯ kT+2 = Q k,k−1 B¯ T A¯ T + Q k B¯ T A¯ T H¯ T
(95)
+ Θk,k+l = E ωk ( H k+l xk+l + λk+l v k+l )T = E ωk xkT+l−1 A¯ kT+l−1 H¯ kT+l
(96)
k
k +1
k +2
l −1 ¯ T H¯ T = E ωk xkT+2 A k+ j k+l j =2
− βk+ N −2 Ξk,k+ N −2 Πk−+1N −2 S kT+ N −1,k+ N −2 B¯ kT+ N −1 (90)
Combining (89) and (90) yields (42). From (87), the smoother error x˜ k|k+ N can be expressed as follows:
x˜ k|k+ N = xk − xˆ k|k+ N −1 − Ξk,k+ N Πk−+1N εk+ N
= x˜ k|k+ N −1 − Ξk,k+ N Πk−+1N εk+ N .
k
+N
.. .
= Ψk+ N −1 A¯ kT+ N −1
N > 1.
ωˆ k|k+N =
k −1 k ¯T , Q k,k+1 B¯ kT+1 H k +2
− βk+ N −2 B¯ k+ N −1 S k+ N −1,k+ N −2 Πk−+1N −2 εk+ N −2 T − Ξk+ N ,k+ N −1 Πk−+1N −1 εk+ N −1
− Ξk,k+ N −1 Πk−+1N −1 ΞkT+ N ,k+ N −1 ,
(92)
i =k−1
Ψk+1 = E xk A¯ k x˜ k|k−1 + A˜ k xk + B k ωk
Ψk+ N
x˜ k|k+ N −1
= P k|k+ N −1 − Ξk,k+ N Πk−+1N ΞkT,k+ N .
=
ωk and A˜ k+N is zero mean, we have from
− Ξk,k Πk = E xk A¯ k+ N −1 x˜ k+ N −1|k+ N −2 + A˜ k+ N −1 xk+ N −1
T
i =1
where Ψk+ N = E {xk x˜ kT+ N |k+ N −1 }. Furthermore, considering the facts
ΞkT+1,k ,
+ Ξk,k+ N Πk−+1N ΞkT,k+ N
=
= Ψk+ N H¯ kT+ N
−1
x˜ k|k+ N −1 − Ξk,k+ N Πk−+1N εk+ N
− Ξk,k+ N Πk−+1N εk+ N = P k|k+ N −1 − E x˜ k|k+ N −1 εkT+ N Πk−+1N ΞkT,k+ N − Ξk,k+ N Πk−+1N E εk+ N x˜ kT|k+ N −1
(87)
− βk+ N −1 βk+ N Ξk,k+ N −1 Πk−+1N −1 R kT+ N ,k+ N −1 ,
Based on the OPT, we have the following N-step fixed-lag smoother for ωk :
that xk is correlated with (61) and (69) that
P k|k+ N = E
(91)
Thus, taking into account the fact E {˜xk|k+ N −1 εkT+ N } = Ξk,k+ N , we have from (91) that
= E ωk xkT+1
l −1 j =1
= Q k,k−1 B¯ kT−1
¯ T H¯ T + Q k,k+1 B¯ T A k+ j k+l k +1
l −1 j =0
+ Q k,k+1 B¯ kT+1
¯ T H¯ T + Q k B¯ T A k+ j k+l k
l −1 j =2
¯ T H¯ T , A k+ j k+l
Combining (95)–(97) yields (46).
l −1 j =1
l −1 j =2
¯ T H¯ T A k+ j k+l
¯ T H¯ T A k+ j k+l
l = 3, 4, . . . , N .
(97)
Similarly, using the OPT and the fact correlated with v k , we have
k,k+t = E ωk yˆ kT+t |k+t −1 =
k+ t −1
ωk is one-step cross-
k+t −1 T
−1 T = E ωk E yk+t εi Πi εi i =1
T E ωk εiT Πi−1 E yk+t εiT
i =k−1
=
k+ t −1
Σk,i Πi−1 ΩtT,i ,
t = 1, 2, . . . , N ,
(98)
i =k−1
where the expectation Ωt ,i = E { yk+t εiT } is given as follows:
Ωt ,k+t −1 = E ( H k+t xk+t + λk+t v k+t )εkT+t −1 = H¯ k+t E xk+t εkT+t −1 + βk+t E v k+t εkT+t −1 = H¯ k+t Ξk+t ,k+t −1 + βk+t βk+t −1 R k+t ,k+t −1
t = 1, 2, . . . , N ,
(99)
Ωt ,k+t −2 = E ( H k+t xk+t + λk+t v k+t )εkT+t −2 = H¯ k+t A¯ k+t −1 E xk+t −1 εkT+t −2 + H¯ k+t B¯ k+t −1 E ωk+t −1 εkT+t −2 = H¯ k+t A¯ k+t −1 Ξk+t −1,k+t −2
+ H¯ k+t B¯ k+t −1 Σk+t −1,k+t −2 , t = 1, 2, . . . , N ,
(100)
Ωt ,k+ j = E ( H k+t xk+t + λk+t v k+t )εkT+ j = H¯ k+t A¯ k+t −1 E xk+t −1 εkT+ j .. . = H¯ k+t
−1 t j 1 = j +2
= H¯ k+t
−1 t j 1 = j +1
+ H¯ k+t
¯ k+ j E xk+ j +2 ε T A 1 k+ j ¯ k+ j E xk+ j +1 ε T A 1 k+ j
−1 t
¯ k+ j B¯ k+ j +1 E A 1
j 1 = j +2
= H¯ k+t
−1 t
ωk+ j+1 εkT+ j
¯ k+ j Ξk+ j +1,k+ j A 1
j 1 = j +1
+ H¯ k+t
−1 t
¯ k+ j B¯ k+ j +1 Σk+ j +1,k+ j , A 1
j 1 = j +2
t = 2, 3, . . . , N , j = −1, 0, . . . , t − 2, t − j > 2.
(101)
Combining (99)–(101), we have (48) which completes the proof. 2
References

[1] L. Dai, Filtering and LQG problem for discrete-time stochastic singular linear systems, IEEE Transactions on Automatic Control 34 (10) (1989) 1105–1108.
[2] M. Darouach, M. Zasadzinski, M. Mehdi, State estimation for stochastic singular linear systems, International Journal of Science Systems 24 (2) (1993) 345–354.
[3] Z. Deng, Y. Liu, Descriptor Kalman estimators, International Journal of Systems Science 30 (11) (1999) 1205–1212.
[4] Z. Deng, Y. Gao, G. Tao, Reduced-order steady-state descriptor Kalman fuser weighted by block-diagonal matrices, Information Fusion 9 (2) (2008) 300–309.
[5] M.M. Fahmy, J. O'Reilly, Observers for descriptor systems, International Journal of Control 49 (6) (1989) 2013–2028.
[6] J. Feng, M. Zeng, Descriptor recursive estimation for multiple sensors with different delay rates, International Journal of Control 84 (3) (2011) 584–596.
[7] J. Feng, Z. Wang, M. Zeng, Optimal robust non-fragile Kalman-type recursive filtering with finite-step autocorrelated noises and multiple packet dropouts, Aerospace Science and Technology 15 (6) (2011) 486–494.
[8] A. Fu, Y. Zhu, E. Song, The optimal Kalman type state estimator with multi-step correlated process and measurement noises, in: The 2008 International Conference on Embedded Software and Systems, Chengdu, China, 2008, pp. 215–220.
[9] Y. Gao, G. Tao, Z. Deng, Decoupled distributed Kalman fuser for descriptor systems, Signal Processing 88 (5) (2008) 1261–1270.
[10] H. Gao, Y. Zhao, J. Lam, K. Chen, H∞ fuzzy filtering of nonlinear systems with intermittent measurements, IEEE Transactions on Fuzzy Systems 17 (2) (2009) 291–300.
[11] H.R. Hashmipour, S. Roy, A.J. Laub, Decentralized structures for parallel Kalman filtering, IEEE Transactions on Automatic Control 33 (1) (1988) 89–93.
[12] J.Y. Ishihara, M.H. Terra, J.C.T. Campos, Robust Kalman filter for descriptor systems, IEEE Transactions on Automatic Control 51 (8) (2006) 1354–1358.
[13] X. Lu, H. Zhang, W. Wang, K. Teo, Kalman filtering for multiple time-delay systems, Automatica 41 (8) (2005) 1455–1461.
[14] M. Sahebsara, T. Chen, S.L. Shah, Optimal H2 filtering with random sensor delay, multiple packet dropout and uncertain observations, International Journal of Control 80 (2) (2007) 292–301.
[15] D. Simon, Optimal State Estimation, John Wiley and Sons, Inc., New York, 2006.
[16] E. Song, Y. Zhu, J. Zhou, Z. You, Optimal Kalman filtering fusion with cross-correlated sensor noises, Automatica 43 (8) (2007) 1450–1456.
[17] E. Song, Y. Zhu, Z. You, The Kalman type recursive state estimator with a finite-step correlated process noises, in: Proceedings of the IEEE International Conference on Automation and Logistics, Qingdao, China, 2008, pp. 196–200.
[18] S. Sun, Z. Deng, Multi-sensor optimal information fusion Kalman filter, Automatica 40 (6) (2004) 1017–1023.
[19] S. Sun, J. Ma, Optimal filtering and smoothing for discrete-time stochastic singular systems, Signal Processing 87 (1) (2007) 189–201.
[20] S. Sun, L. Xie, W. Xiao, Y.C. Soh, Optimal linear estimation for systems with multiple packet dropouts, Automatica 44 (5) (2008) 1333–1342.
[21] G. Wei, Z. Wang, X. Liu, H. Shu, Filtering for networked stochastic time-delay systems with sector nonlinearity, IEEE Transactions on Circuits and Systems II: Express Briefs 56 (2009) 71–75.
[22] G. Wei, Z. Wang, H. Shu, Robust filtering with stochastic nonlinearities and multiple missing measurements, Automatica 45 (3) (2009) 836–841.
[23] S. Xu, J. Lam, Reduced-order H∞ filtering for singular systems, Systems & Control Letters 56 (1) (2007) 48–57.
[24] R. Yang, Z. Zhang, P. Shi, Exponential stability on stochastic neural networks with discrete interval and distributed delays, IEEE Transactions on Neural Networks 21 (1) (2010) 169–175.
[25] E. Yaz, A. Ray, Linear unbiased state estimation for random models with sensor delay, in: Proceedings of IEEE Conference on Decision and Control, Kobe, Japan, 1996, pp. 47–52.
[26] E. Yaz, A. Ray, Linear unbiased state estimation under randomly varying bounded sensor delay, Applied Mathematics Letters 11 (4) (1998) 27–32.
[27] M. Zeng, J. Feng, Z. Yu, Optimal unbiased estimation for uncertain systems with different delay rates sensor network and autocorrelated process noises, in: 2011 Chinese Control and Decision Conference, Mianyang, China, 2011, pp. 3014–3018.
[28] H. Zhang, L. Xie, Y.C. Soh, Optimal recursive filtering, prediction, and smoothing for singular stochastic discrete-time systems, IEEE Transactions on Automatic Control 44 (1) (1999) 2154–2158.