Systems & Control Letters 60 (2011) 450–459
Linear estimation for random delay systems✩

Huanshui Zhang a,∗, Gang Feng b, Chunyan Han a,c

a School of Control Science and Engineering, Shandong University, Jinan 250061, Shandong, PR China
b Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong
c School of Control Science and Engineering, University of Jinan, Jinan 250022, Shandong, PR China

Article history: Received 2 June 2010; received in revised form 27 November 2010; accepted 25 March 2011; available online 10 May 2011.

Keywords: Linear estimation; Random delay; Reorganized innovation analysis; Riccati equation; Convergence; Stability
Abstract. This paper is concerned with linear estimation problems for discrete-time systems with randomly delayed observations. When the random delay is known online, i.e., time-stamped, the randomly delayed system is reconstructed as an equivalent delay-free one by using the measurement reorganization technique, and an optimal linear filter is then presented based on the Kalman filtering technique. However, the optimal filter is time-varying and stochastic, and in general does not converge to a steady state. An alternative suboptimal filter with deterministic gains is therefore developed under a new criterion. The estimator's performance in terms of its error covariance is provided, and its mean square stability is established. Finally, a numerical example is presented to illustrate the efficiency of the proposed estimators. © 2011 Elsevier B.V. All rights reserved.
1. Introduction

As a result of the rapid development of communication networks, control and state estimation over networks have attracted great attention during the past few years (see e.g. [1]). Feedback control systems wherein the control loops are closed through a real-time network are called networked control systems (NCSs) (see e.g. [2–6]). In NCSs, data typically travel through communication networks from sensors to controllers and from controllers to actuators. As a direct consequence of the finite bandwidth for data transmission over networks, time delay, either from sensors to controllers (sensor delay), from controllers to actuators (controller delay), or both, is inevitable in networked systems where a common medium is used for data transfer. This time delay, whether constant, time-varying, or random, can degrade the performance of a control system if it is not given due consideration, and in many instances can even destabilize the system. Hence time delay is one of the challenging problems faced by control practitioners in NCSs. The filtering problem for networked control systems with sensor delays, especially random sensor delays, has received
✩ This work is supported by the Taishan Scholar Construction Engineering by Shandong Government, the National Natural Science Foundation for Distinguished Young Scholars of China (No. 60825304), and the Major State Basic Research Development Program of China (973 Program) (No. 2009CB320600).
∗ Corresponding author. E-mail address: [email protected] (H. Zhang).
0167-6911/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.sysconle.2011.03.009
much attention during the past few years (see e.g. [7,8]). For filtering problems with intermittent observations, the initial work can be traced back to Nahi [9] and Hadidi [10]. Recently, this problem has been studied in [11,12], where the statistical convergence property of the estimation error covariance was investigated and the existence of a critical value for the arrival rate of the observations was shown. For the situation where the one-step sensor delay is described by a binary white noise sequence, a reduced-order linear unbiased estimator was designed via state augmentation in [13]. When the random delay is characterized by a set of Bernoulli variables, unscented filtering algorithms [14], linear and quadratic least-squares estimation methods [15], optimal H2 and Kalman filtering [16,17], and H∞ filtering [18,19] have been developed. The rationale of modeling the random delay as Bernoulli variable sequences has been justified in those papers. On the other hand, modeling the random delay as a finite-state Markov chain is also reasonable; relevant estimation results for this type of modeling can be found in [20,21] and the references therein. It can be noted that the results mentioned above mainly focus on systems with only one- or two-step delays. To the best of our knowledge, there exist few estimation results for systems with multiple random delays [22]. Furthermore, most existing results employ state augmentation to deal with random delays. In fact, the reorganized observation method, developed in our previous work [23], is an effective tool for dealing with random delays without state augmentation. In this paper we investigate optimal and suboptimal linear estimators for discrete-time systems with random observation
delays, where the delay steps may be larger than one sampling period and are known online via time-stamped data. The key technique for dealing with the random delay is the reorganized observation method, by which the randomly delayed system is transformed into a delay-free one. An optimal linear filter is presented with time-varying stochastic gains, and the solution to this estimator is discussed. Furthermore, an alternative suboptimal estimator with deterministic gains is developed under a new performance index, and its convergence and mean square stability are established. It is shown that the stability of this estimator does not depend on the observation packet delay but only on the overall observation packet loss probability. Note that both filters (optimal and suboptimal) have the same dimension as the original system.

The remainder of this paper is organized as follows. In Section 2, we state the problem formulation. In Section 3, the reorganized observations are defined, and an optimal filter is designed by the projection formula. In Section 4, an alternative suboptimal linear filter is developed under a new criterion, and its convergence and stability are discussed. Finally, a numerical example is given in Section 5, followed by some conclusions in Section 6.

Notations: Throughout this paper, R^n denotes the n-dimensional Euclidean space and R^{m×n} denotes the set of all m × n real matrices. For a real symmetric matrix, Q > 0 (≥ 0) denotes that Q is positive definite (positive semi-definite), and A > (≥) B means A − B > (≥) 0. A′ stands for the transpose of the matrix A.
Q^{−1} and Q^{1/2} represent the inverse and a square root of the positive-definite matrix Q. diag{·} denotes a block-diagonal matrix, P indicates the occurrence probability of an event, and E{·} represents the mathematical expectation operator. As usual, we define H = L2(Ω, F, P) as the Hilbert space of all square summable random variables in the probability space (Ω, F, P), equipped with the inner product ⟨x, y⟩ = E{xy′}, ⟨x, x⟩ = E{xx′} = ‖x‖², where ‖·‖ represents the standard Euclidean norm. L{y_1, . . . , y_n} denotes the linear space spanned by y_1, . . . , y_n. Proj{·} denotes the projection operator, and δ_{ts} denotes the Kronecker delta function. In addition, for a real-valued function ζ with domain X, arg min_{x∈X} ζ(x) = {x ∈ X : ζ(x) = min_{y∈X} ζ(y)}.

2. Problem formulation

Consider the following discrete-time system with randomly delayed measurements:

x(t + 1) = Φ x(t) + G w(t),  x(0),   (2.1)
y_{r(t)}(t) = H x(t − r(t)) + v(t − r(t)),   (2.2)

where x(t) ∈ R^n is the state, w(t) ∈ R^r is the input noise, y_·(t) ∈ R^p is the measurement, and v(t) ∈ R^p is the measurement noise. r(t) is the random delay, whose probability distribution is known a priori. The following assumptions are made on the system (2.1) and (2.2).

Assumption 2.1. The initial state x(0), w(t), and v(t) are zero-mean white noises with covariance matrices

E[x(0)x′(0)] = P_0,  E[w(t)w′(s)] = Q δ_{ts},  E[v(t)v′(s)] = R δ_{ts},

respectively. x(0), w(t), and v(t) are mutually independent.

Assumption 2.2. Measurements in (2.2) are time-stamped and transmitted through a digital communication network. The random delay r(t) is bounded with 0 ≤ r(t) ≤ r, where r is the length of the memory buffer, and its probability distribution is P(r(t) = i) = ρ_i, i = 0, . . . , r. Measurements transmitted to the receiver with a delay larger than r are considered completely lost; thus the property 0 ≤ Σ_{i=0}^{r} ρ_i ≤ 1 is satisfied. Also, r(t) is independent of x(0), w(t), and v(t).

Under Assumption 2.2, the possible received observations at time t, when t ≥ r, are

y(t) = [y′_0(t) · · · y′_r(t)]′,   (2.3)

where

y_i(t) = γ_{i,t} H x(t − i) + γ_{i,t} v(t − i),  i = 0, . . . , r,   (2.4)

with γ_{i,t} a binary random variable indicating the arrival of the observation packet for state x(t − i) at time t, that is,

γ_{i,t} = 1 if the observation for state x(t − i) is received at time t, and γ_{i,t} = 0 otherwise.   (2.5)

Obviously, P(γ_{i,t} = 1) = ρ_i, i = 0, . . . , r. As is well known, in real-time control systems the state x(t) can be observed at most once, and thus γ_{i,t}, i = 0, . . . , r, must satisfy the property

γ_{i,t+i} × γ_{j,t+j} = 0,  i ≠ j.   (2.6)

When t < r, the observation (2.3) is written as

y(t) = [y′_0(t) · · · y′_t(t) 0 · · · 0]′,   (2.7)

where we set y_i(t) ≡ 0 for t < i ≤ r. Then the estimation problems considered in this paper can be stated as follows.

Problem 1 (Optimal Linear Estimation). Given {y(s)}_{s=0}^{t} and {γ_{i,s} : i = 0, . . . , r; s = 0, . . . , t}, find a linear minimum mean square error (LMMSE) estimate x̂(t | t) of x(t).

Problem 2 (Suboptimal Linear Estimation). Given {y(s)}_{s=0}^{t}, find a suboptimal linear estimate x̂_e(t | t) of x(t).

3. Optimal linear estimator

In this section, an analytical solution to the LMMSE estimation problem is presented by reorganizing the observation sequences and applying the projection formula in the Hilbert space.

3.1. Measurement reorganization

In view of the definition of y(t), we define new observations as

ỹ_r(s) ≜ [y′_0(s) y′_1(s + 1) · · · y′_r(s + r)]′,  0 ≤ s ≤ t − r,   (3.1)
ỹ_{t−s}(s) ≜ [y′_0(s) y′_1(s + 1) · · · y′_{t−s}(t)]′,  t − r < s ≤ t.   (3.2)

Then ỹ_·(·) is a delay-free observation which satisfies

ỹ_r(s) = H_r(s) x(s) + v_r(s),  0 ≤ s ≤ t − r,   (3.3)
ỹ_{t−s}(s) = H_{t−s}(s) x(s) + v_{t−s}(s),  t − r < s ≤ t,   (3.4)

where

H_r(s) = col{γ_{0,s} H, . . . , γ_{r,s+r} H},  v_r(s) = col{γ_{0,s} v(s), . . . , γ_{r,s+r} v(s)},   (3.5)
H_{t−s}(s) = col{γ_{0,s} H, . . . , γ_{t−s,t} H},  v_{t−s}(s) = col{γ_{0,s} v(s), . . . , γ_{t−s,t} v(s)}.   (3.6)
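As a small illustration of the reorganization in (3.1), the sketch below stacks time-stamped packets into the delay-free observation ỹ_r(s). The function name, the packet encoding, and the zero-fill for lost packets are our own illustrative choices, not part of the paper.

```python
def reorganized_obs(packets, s, r):
    """Build the delay-free observation y~_r(s) of (3.1) from time-stamped
    packets. `packets[t]` maps a delay i to the measurement of x(t - i) that
    arrived at time t; a missing entry means the packet was lost (gamma = 0),
    and its slot is zero-filled as in (2.4)."""
    # component i of y~_r(s) is the observation of x(s) received at time s + i
    return [packets.get(s + i, {}).get(i, 0.0) for i in range(r + 1)]
```

For instance, with buffer length r = 2, a packet for x(3) received without delay at t = 3 and another received with delay 2 at t = 5 fill the first and third slots of ỹ_2(3), while the slot for the lost delay-1 packet stays zero.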
Here v_r(s) and v_{t−s}(s) are white noises with zero means and respective covariance matrices

R_r(s) = diag{γ_{0,s} R, . . . , γ_{r,s+r} R},   (3.7)
R_{t−s}(s) = diag{γ_{0,s} R, . . . , γ_{t−s,t} R}.   (3.8)

Then it is easy to know [23] that

{{ỹ_r(s)}_{s=0}^{t−r}; {ỹ_{t−s}(s)}_{s=t−r+1}^{t}}   (3.9)

contains the same information as {y(s)}_{s=0}^{t}. Thus Problem 1 can be restated as: given the observations {{ỹ_r(s)}_{s=0}^{t−r}; {ỹ_{t−s}(s)}_{s=t−r+1}^{t}} and the information {γ_{i,s} : i = 0, . . . , r; s = 0, . . . , t}, find an LMMSE estimate x̂(t | t) of x(t).

As in the Kalman filter, we first define the innovation sequences associated with the reorganized observations (3.1) and (3.2).

Definition 3.1. Consider the new observations (3.1) and (3.2).
• For 0 ≤ s ≤ t − r, define
η_r(s) ≜ ỹ_r(s) − ŷ̃_r(s),   (3.10)
where ŷ̃_r(s) is the LMMSE estimate of ỹ_r(s) given the observations
{ỹ_r(0), . . . , ỹ_r(s − 1)}.   (3.11)
• For s > t − r, define
η_{t−s}(s) ≜ ỹ_{t−s}(s) − ŷ̃_{t−s}(s),   (3.12)
where ŷ̃_{t−s}(s) is the LMMSE estimate of ỹ_{t−s}(s) given the observations
{ỹ_r(0), . . . , ỹ_r(t − r); ỹ_{r−1}(t − (r − 1)), . . . , ỹ_{t−(s−1)}(s − 1)}.   (3.13)

The sequence {η_i(s)} defined above is indeed a white noise with zero mean and covariance Q_{η_i}(s), and it spans the same linear space as (3.9) (see [23]). It is usually termed the reorganized innovation. The optimal estimates based on the new innovation sequences can then be defined as follows.

Definition 3.2. Consider the given time instant t.
• For 0 ≤ s ≤ t − r, the estimate x̂(s, r) is defined by
x̂(s + 1, r) = Φ x̂(s, r) + K_r(s)[ỹ_r(s) − H_r(s) x̂(s, r)],   (3.14)
where K_r(s) is to be determined such that
E_{w,v} ‖x(s + 1) − x̂(s + 1, r)‖²   (3.15)
is minimized. Further define
P_r(s) ≜ E_{w,v}[x̃(s, r) x̃′(s, r)],   (3.16)
where
x̃(s, r) = x(s) − x̂(s, r).   (3.17)
• For s > t − r, the estimate x̂(s, t − s) is defined by
x̂(s + 1, t − s) = Φ x̂(s, t − s + 1) + K_{t−s}(s)[ỹ_{t−s}(s) − H_{t−s}(s) x̂(s, t − s + 1)],   (3.18)
where K_{t−s}(s) is to be determined such that
E_{w,v} ‖x(s + 1) − x̂(s + 1, t − s)‖²   (3.19)
is minimized. Further define
P_{t−s+1}(s) ≜ E_{w,v}[x̃(s, t − s + 1) x̃′(s, t − s + 1)],   (3.20)
where
x̃(s, t − s + 1) = x(s) − x̂(s, t − s + 1).   (3.21)

Remark 3.1. From the above definitions, it can be observed [24] that the estimate x̂(s, r) is indeed the projection of x(s) onto the linear space
L{ỹ_r(0), . . . , ỹ_r(s − 1)} = L{η_r(0), . . . , η_r(s − 1)},   (3.22)
and x̂(s, t − s + 1) is the projection of x(s) onto the linear space
L{ỹ_r(0), . . . , ỹ_r(t − r); ỹ_{r−1}(t − (r − 1)), . . . , ỹ_{t−(s−1)}(s − 1)}
= L{η_r(0), . . . , η_r(t − r); η_{r−1}(t − (r − 1)), . . . , η_{t−(s−1)}(s − 1)}.   (3.23)
Furthermore, the innovation sequences can be rewritten as
η_r(s) = ỹ_r(s) − H_r(s) x̂(s, r),  0 ≤ s ≤ t − r,   (3.24)
η_{t−s}(s) = ỹ_{t−s}(s) − H_{t−s}(s) x̂(s, t − s + 1),  s > t − r.   (3.25)

3.2. Design of the optimal estimator

In this subsection we give the solution to the LMMSE estimation problem by applying the reorganized innovation sequences defined in the last subsection.

Theorem 3.1. Consider the system (2.1)–(2.2). The optimal estimate x̂(t | t) is given by
x̂(t | t) = [I_n − K_0(t) H_0(t)] x̂(t, 1) + K_0(t) ỹ_0(t),   (3.26)
where K_0(t) is a solution to the equation
K_0(t) Q_{η_0}(t) = P_1(t) H′_0(t),   (3.27)
with
Q_{η_0}(t) = H_0(t) P_1(t) H′_0(t) + R_0(t),
and the estimate x̂(t, 1) is computed by the following iteration.
• Step 1: Calculate x̂(s + 1, r), s = 0, . . . , t − r, by the following Kalman recursion:
x̂(s + 1, r) = Φ_r(s) x̂(s, r) + K_r(s) ỹ_r(s);  x̂(0, r) = 0,   (3.28)
where
Φ_r(s) = Φ − K_r(s) H_r(s),   (3.29)
K_r(s) Q_{η_r}(s) = Φ P_r(s) H′_r(s),   (3.30)
with
Q_{η_r}(s) = H_r(s) P_r(s) H′_r(s) + R_r(s),   (3.31)
P_r(s + 1) = Φ P_r(s) Φ′ − K_r(s) Q_{η_r}(s) K′_r(s) + G Q G′;  P_r(0) = P_0.   (3.32)
• Step 2: Calculate x̂(s + 1, t − s) for s = t − r + 1, . . . , t − 1 by the following recursion:
x̂(s + 1, t − s) = Φ_{t−s}(s) x̂(s, t − s + 1) + K_{t−s}(s) ỹ_{t−s}(s),   (3.33)
where
Φ_{t−s}(s) = Φ − K_{t−s}(s) H_{t−s}(s),   (3.34)
K_{t−s}(s) Q_{η_{t−s}}(s) = Φ P_{t−s+1}(s) H′_{t−s}(s),   (3.35)
with
Q_{η_{t−s}}(s) = H_{t−s}(s) P_{t−s+1}(s) H′_{t−s}(s) + R_{t−s}(s),   (3.36)
P_{t−s}(s + 1) = Φ P_{t−s+1}(s) Φ′ − K_{t−s}(s) Q_{η_{t−s}}(s) K′_{t−s}(s) + G Q G′.   (3.37)
• Step 3: Set s = t − 1 in (3.33); then x̂(t, 1) is obtained directly.
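To make Step 1 concrete, here is a minimal scalar-state sketch of the recursion (3.28)–(3.32). Because R_r(s) is diagonal, the stacked update can be processed one received component at a time (the standard sequential-update equivalence for uncorrelated measurement noises); the function name and the measurement-update/time-update split are our own illustrative choices, not the paper's stacked gain equation (3.30) verbatim.

```python
def optimal_filter_step(xhat, P, ytilde, gammas, Phi, G, H, Q, R):
    """One step of the reorganized-innovation recursion for a scalar state:
    propagate the predictor xhat = x^(s, r) and covariance P = P_r(s) to s + 1.
    Components of the stacked observation ytilde with gamma = 0 carry no
    information (their rows of H_r(s) and R_r(s) are zero) and are skipped,
    which matches the handling of a singular innovation covariance Q_eta."""
    for y, g in zip(ytilde, gammas):
        if g:  # scalar measurement update for one arrived packet
            K = P * H / (H * P * H + R)
            xhat += K * (y - H * xhat)
            P *= (1.0 - K * H)
    # time update: x(s + 1) = Phi x(s) + G w(s)
    return Phi * xhat, Phi * P * Phi + G * Q * G
```

For example, with Φ = 0.8, G = H = Q = R = 1, prior (x̂, P) = (0, 1) and a single arrived measurement ỹ = [1], one step gives x̂(s + 1, r) = 0.4 and P_r(s + 1) = 1.32.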
Proof. Note that x̂(t | t) is the projection of the state x(t) onto the linear space spanned by {{η_r(s)}_{s=0}^{t−r}; {η_{t−s}(s)}_{s=t−r+1}^{t}}. Since η is a white noise, the estimate x̂(t | t) is calculated by using the projection formula as
x̂(t | t) = Proj{x(t) | η_r(0), . . . , η_r(t − r); η_{r−1}(t − r + 1), . . . , η_1(t − 1)} + Proj{x(t) | η_0(t)}
= x̂(t, 1) + Proj{x(t) | η_0(t)}.   (3.38)
Now we calculate Proj{x(t) | η_0(t)}. Since Proj{x(t) | η_0(t)} is a linear combination of the innovation η_0(t), set
Proj{x(t) | η_0(t)} = K_0(t) η_0(t),   (3.39)
where K_0(t) is to be determined.
Case 1: When the covariance matrix of the innovation η_0(t), denoted by Q_{η_0}(t), is invertible, Proj{x(t) | η_0(t)} is given by
Proj{x(t) | η_0(t)} = P_1(t) H′_0(t) Q_{η_0}^{−1}(t) η_0(t),   (3.40)
where P_1(t) is as in (3.20). From (3.39) and (3.40), we obtain
K_0(t) = P_1(t) H′_0(t) Q_{η_0}^{−1}(t),   (3.41)
which satisfies (3.27).
Case 2: When the covariance matrix of the innovation η_0(t) is singular, the matrix K_0(t) is chosen to minimize ‖K_0(t) η_0(t) − x(t)‖², which yields
K_0(t) Q_{η_0}(t) = P_1(t) H′_0(t),   (3.42)
which is (3.27).
Note that η_0(t) = ỹ_0(t) − ŷ̃_0(t) = ỹ_0(t) − H_0(t) x̂(t, 1); then the covariance matrix of η_0(t) is given by
Q_{η_0}(t) = H_0(t) P_1(t) H′_0(t) + R_0(t),   (3.43)
and (3.26) follows directly from (3.38).
For the case of 0 ≤ s ≤ t − r, the optimal estimate x̂(s + 1, r) is given by
x̂(s + 1, r) = Proj{x(s + 1) | η_r(0), . . . , η_r(s)} = Φ x̂(s, r) + Proj{x(s + 1) | η_r(s)} = Φ x̂(s, r) + K_r(s) η_r(s).   (3.44)
With a similar discussion as for (3.38)–(3.43), Eqs. (3.28)–(3.31) are obtained immediately. Now we derive the expression for P_r(s + 1). In view of (2.1) and (3.28), we obtain
x̃(s + 1, r) = x(s + 1) − x̂(s + 1, r) = Φ x̃(s, r) + G w(s) − K_r(s) η_r(s).   (3.45)
Since x̃(s, r) is independent of w(s), it follows from (3.45) that
P_r(s + 1) + K_r(s) Q_{η_r}(s) K′_r(s) = Φ P_r(s) Φ′ + G Q G′,   (3.46)
which is (3.32). Finally, following a similar procedure as in the determination of K_r(s) and P_r(s), we obtain (3.33)–(3.37). This completes the derivation of the solution to Problem 1. □

Remark 3.2. It should be pointed out that the Riccati equation (3.32) is not in standard form, since the innovation covariance matrix Q_{η_r}(s) may be singular. In order to solve the Riccati equation (3.32) for the filter design, we should first solve the linear equation (3.30), which may be unsolvable, or solvable but with an infinite number of solutions. Fortunately, it can be shown that Eq. (3.30) is solvable and that any solution, if more than one exists, yields the same result for the Riccati equation (3.32). A more detailed discussion can be found in the Appendix.

Remark 3.3. Compared with the existing work in [22], the contributions of our work are as follows: a new model for systems with finite multiple random observation delays is constructed, and another efficient method, the reorganized innovation analysis method, is employed for the design of the optimal estimator with random observation delays.

Remark 3.4. The LMMSE estimator described above is optimal for any observation packet delay generating process. However, from an engineering perspective it is also desirable to further characterize the performance of the estimator. A natural performance metric in this scenario is the expected error covariance, i.e., E_γ[P_{t+1|t}], where the expectation is taken with respect to the arrival binary random variables {γ_{i,s} : i = 0, . . . , r; s = 0, . . . , t}. Unfortunately, it is not clear whether this quantity converges to a steady state. Thus, instead of trying to obtain an upper bound on the expected error covariance of the time-varying LMMSE estimator, we will in the next section focus on a suboptimal filter with deterministic gains. That is, the filter gains will not contain the observation packet arrival binary random variables γ_{·,·}; instead, the gains will be determined by the probabilities of these random variables.

4. Suboptimal linear estimator

In this section, we propose a new suboptimal estimator with deterministic gains by minimizing the mean square estimation error, where the statistics of the observation packet arrival random variables are used. We then prove the convergence and mean square stability of the new estimator under standard assumptions.

4.1. Design of the suboptimal estimator

Similar to the discussion in Section 3, we first employ the reorganized observation method to transform the randomly delayed system into one with multiplicative noises but without delays. The resulting system has the same description as (3.1)–(3.6), while the covariance matrices of v_r(s) and v_{t−s}(s) are described as
R_r = diag{ρ_0 R, . . . , ρ_r R},  0 ≤ s ≤ t − r,   (4.1)
R_{t−s} = diag{ρ_0 R, . . . , ρ_{t−s} R},  t − r < s ≤ t.   (4.2)
Also, it can be shown [23] that
{{ỹ_r(s)}_{s=0}^{t−r}; {ỹ_{t−s}(s)}_{s=t−r+1}^{t}}   (4.3)
spans the same linear space as {y(s)}_{s=0}^{t}.

Definition 4.1. Given (2.1), (3.3) and (3.4), the linear suboptimal estimate x̂_e(t | t) of x(t) with deterministic gains is defined as
x̂_e(t | t) ≜ x̂_e(t, 1) + K_0(t)[ỹ_0(t) − H_0(t) x̂_e(t, 1)],   (4.4)
where K_0(t) is to be determined such that
E_{γ,w,v} ‖x(t) − x̂_e(t | t)‖²   (4.5)
is minimized, and x̂_e(t, 1) is defined as follows.
• For 0 ≤ s ≤ t − r, define
x̂_e(s + 1, r) ≜ Φ x̂_e(s, r) + K_r(s)[ỹ_r(s) − H_r(s) x̂_e(s, r)],   (4.6)
with K_r(s) to be determined such that
E_{γ,w,v} ‖x(s + 1) − x̂_e(s + 1, r)‖²   (4.7)
is minimized.
• For t − r < s ≤ t, define
x̂_e(s + 1, t − s) ≜ Φ x̂_e(s, t − s + 1) + K_{t−s}(s)[ỹ_{t−s}(s) − H_{t−s}(s) x̂_e(s, t − s + 1)],   (4.8)
with K_{t−s}(s) to be determined such that
E_{γ,w,v} ‖x(s + 1) − x̂_e(s + 1, t − s)‖²   (4.9)
is minimized.

So Problem 2 is to find an estimator as defined in Definition 4.1.

Furthermore, we define the covariance matrices of the estimation errors
P_r(s + 1) ≜ E_{γ,w,v}[x̃_e(s + 1, r) x̃′_e(s + 1, r)],   (4.10)
P_{t−s}(s + 1) ≜ E_{γ,w,v}[x̃_e(s + 1, t − s) x̃′_e(s + 1, t − s)],   (4.11)
where
x̃_e(s + 1, r) = x(s + 1) − x̂_e(s + 1, r),   (4.12)
x̃_e(s + 1, t − s) = x(s + 1) − x̂_e(s + 1, t − s).

Remark 4.1. Note from the criteria (3.15) (or (3.19)) and (4.7) (or (4.9)) that the estimate x̂_e(s, r) or x̂_e(s, t − s + 1) defined in Definition 4.1 is different from the one defined in Definition 3.2: the expectation in (4.7) (or (4.9)) is taken over w, v and {γ_{·,·}} simultaneously.

Then one has the following result on the suboptimal estimation.

Theorem 4.1. The suboptimal estimate x̂_e(t | t) with deterministic gains is given by
x̂_e(t | t) = [I − K_0(t) H_0(t)] x̂_e(t, 1) + K_0(t) ỹ_0(t),   (4.13)
where
K_0(t) = P_1(t) H′_0 [ρ_0 H P_1(t) H′ + ρ_0 R]^{−1},   (4.14)
H_0 = E_γ[H_0(t)] = ρ_0 H,   (4.15)
and x̂_e(t, 1) is derived by the following iterations.
• For 0 ≤ s ≤ t − r, the estimate x̂_e(s + 1, r) of Definition 4.1 is calculated as
x̂_e(s + 1, r) = [Φ − K_r(s) H_r(s)] x̂_e(s, r) + K_r(s) ỹ_r(s),  x̂_e(0, r) = 0,   (4.16)
where
K_r(s) = [Φ P_r(s) H′_r][diag{ρ_0 H P_r(s) H′, . . . , ρ_r H P_r(s) H′} + R_r]^{−1},   (4.17)
P_r(s + 1) = Φ P_r(s) Φ′ − Σ_{i=0}^{r} ρ_i Φ P_r(s) H′ [H P_r(s) H′ + R]^{−1} H P_r(s) Φ′ + G Q G′,  P_r(0) = P_0,   (4.18)
H_r = E_γ[H_r(s)] = col{ρ_0 H, . . . , ρ_r H}.   (4.19)
• For t − r < s ≤ t, the estimate x̂_e(s + 1, t − s) of Definition 4.1 is calculated as
x̂_e(s + 1, t − s) = [Φ − K_{t−s}(s) H_{t−s}(s)] x̂_e(s, t − s + 1) + K_{t−s}(s) ỹ_{t−s}(s),   (4.20)
with initial value x̂_e(t − r + 1, r) calculated by (4.16), and
K_{t−s}(s) = [Φ P_{t−s+1}(s) H′_{t−s}][diag{ρ_0 H P_{t−s+1}(s) H′, . . . , ρ_{t−s} H P_{t−s+1}(s) H′} + R_{t−s}]^{−1},   (4.21)
P_{t−s}(s + 1) = Φ P_{t−s+1}(s) Φ′ − Σ_{i=0}^{t−s} ρ_i Φ P_{t−s+1}(s) H′ [H P_{t−s+1}(s) H′ + R]^{−1} H P_{t−s+1}(s) Φ′ + G Q G′,   (4.22)
H_{t−s} = E_γ[H_{t−s}(s)] = col{ρ_0 H, . . . , ρ_{t−s} H},   (4.23)
with P_r(t − r + 1) given by (4.18).

Proof. Based on the definitions (4.6) and (4.8), we introduce the notations
e_r(s) = ỹ_r(s) − H_r(s) x̂_e(s, r),  s ≤ t − r,   (4.24)
e_{t−s}(s) = ỹ_{t−s}(s) − H_{t−s}(s) x̂_e(s, t − s + 1),  s > t − r.   (4.25)
First, for s = t, we have from (4.4) that
E_{γ,w,v} ‖x(t) − x̂_e(t|t)‖² = P_1(t) + K_0(t) M(t) K′_0(t) − ⟨x̃_e(t, 1), e_0(t)⟩ K′_0(t) − K_0(t) ⟨e_0(t), x̃_e(t, 1)⟩
= P_1(t) − K*_0(t) M(t)[K*_0(t)]′ + [K_0(t) − K*_0(t)] M(t)[K_0(t) − K*_0(t)]′,   (4.26)
where M(t) = ⟨e_0(t), e_0(t)⟩ and
K*_0(t) = ⟨x̃_e(t, 1), e_0(t)⟩ M^{−1}(t).   (4.27)
It is clear that E_{γ,w,v} ‖x(t) − x̂_e(t|t)‖² is minimized if we choose K_0(t) = K*_0(t). Now we calculate K*_0(t). In view of (4.25), we obtain
⟨x̃_e(t, 1), e_0(t)⟩ = ⟨x̃_e(t, 1), H_0(t) x̃_e(t, 1) + v_0(t)⟩ = P_1(t) H′_0,   (4.28)
M(t) = ⟨H_0(t) x̃_e(t, 1) + v_0(t), H_0(t) x̃_e(t, 1) + v_0(t)⟩ = ρ_0 H P_1(t) H′ + ρ_0 R.   (4.29)
Substituting (4.28) and (4.29) into (4.27), we obtain the expression (4.14).
Next, for s ≤ t − r, it follows from (4.6) and (2.1) that
x̃_e(s + 1, r) = [Φ − K_r(s) H_r(s)] x̃_e(s, r) + G w(s) − K_r(s) v_r(s).   (4.30)
It follows from (4.30) that
P_r(s + 1) = Φ P_r(s) Φ′ − Φ P_r(s) H′_r K′_r(s) − K_r(s) H_r P_r(s) Φ′ + G Q G′ + K_r(s)[diag{ρ_0 H P_r(s) H′, . . . , ρ_r H P_r(s) H′} + R_r] K′_r(s)
= [K_r(s) − K*_r(s)][diag{ρ_0 H P_r(s) H′, . . . , ρ_r H P_r(s) H′} + R_r][K_r(s) − K*_r(s)]′ − Σ_{i=0}^{r} ρ_i Φ P_r(s) H′ (H P_r(s) H′ + R)^{−1} H P_r(s) Φ′ + Φ P_r(s) Φ′ + G Q G′,   (4.31)
where
K*_r(s) = [Φ P_r(s) H′_r][diag{ρ_0 H P_r(s) H′, . . . , ρ_r H P_r(s) H′} + R_r]^{−1}.   (4.32)
It is obvious that E‖x̃_e(s + 1, r)‖² is minimized if we choose K_r(s) = K*_r(s), with P_r(s) then satisfying
P_r(s + 1) = Φ P_r(s) Φ′ − Σ_{i=0}^{r} ρ_i Φ P_r(s) H′ (H P_r(s) H′ + R)^{−1} H P_r(s) Φ′ + G Q G′.   (4.33)
Finally, following a similar procedure as in the determination of K_r(s) and P_r(s), we can obtain (4.21)–(4.22). This completes the proof of Theorem 4.1. □

Remark 4.2. The advantage of the suboptimal filter is that it leads to a deterministic time-varying filter which is easy to implement, and all of its calculations (the gain matrices) can be done offline. That is to say, all the heavy calculations can be made before we start collecting the data, since the gain matrices of the filter are not data-dependent. Also, the deterministic gains allow us to analyze the convergence and mean square stability of the filter. Moreover, it can be shown that stochastic LQ control is dual to the proposed suboptimal estimator for multiplicative noise systems; thus the estimator can be applied to solve control problems with random input delays and packet dropping via this duality.

4.2. Convergence of the suboptimal estimator

We first make the following assumptions.

Assumption 4.1. (Φ′, H′) is stabilizable.

Assumption 4.2. (Φ′, Q^{1/2} G′) is observable.

In the following, our aim is to show that, for an arbitrary but fixed nonnegative symmetric P_r(t_0), the matrices P_r(t − r), P_{r−1}(t − r + 1), . . . , P_0(t) are convergent when t → ∞. First, we present some useful lemmas without proof, since similar derivations can be found in [11].

Lemma 4.1. Consider the operators
g_ρ(X) = Φ X Φ′ + G Q G′ − ρ Φ X H′ (H X H′ + R)^{−1} H X Φ′,   (4.34)
φ(K, X) = [Φ − ρ K H] X [Φ − ρ K H]′ + ρ(1 − ρ) K H X H′ K′ + G Q G′ + ρ K R K′,   (4.35)
where 0 ≤ ρ ≤ 1. Assume X ∈ S = {S ∈ R^{n×n} | S ≥ 0}, R > 0, Q ≥ 0, and (Φ, G Q^{1/2}) is stabilizable. Then the following facts are true:
(i) With K_X = Φ X H′ (H X H′ + R)^{−1}, g_ρ(X) = φ(K_X, X).
(ii) g_ρ(X) = min_K φ(K, X) ≤ φ(K, X), ∀K.
(iii) If X ≤ Y, then g_ρ(X) ≤ g_ρ(Y).

Lemma 4.2. Define the linear operator
ψ(X) = [Φ − ρ K H] X [Φ − ρ K H]′ + ρ(1 − ρ) K H X (K H)′.
Suppose there exists X̄ > 0 such that X̄ > ψ(X̄).
(i) For all W ≥ 0, lim_{t→∞} ψ^t(W) = 0.
(ii) Let V > 0 and consider the linear system X_{t+1} = ψ(X_t) + V initialized at X_0. Then the sequence X_t is bounded.

Lemma 4.3. Consider the operator g_ρ(X) defined in (4.34). Suppose there exists a positive-definite matrix P̄ such that P̄ > g_ρ(P̄). Then for any P_0, the sequence P(t) = g_ρ^t(P_0) is bounded, i.e., there exists M_{P_0} ≥ 0 dependent on P_0 such that P(t) ≤ M_{P_0}, ∀t.

With these lemmas, one can obtain the following results on the convergence properties of the Riccati difference equations in Theorem 4.1.

Theorem 4.2. Consider the suboptimal estimators (4.13)–(4.22). Suppose there exists a positive-definite matrix P̃ such that
P̃ > g_{ρ_0,...,ρ_r}(P̃),   (4.36)
where
g_{ρ_0,...,ρ_r}(P̃) = Φ P̃ Φ′ − Σ_{i=0}^{r} ρ_i Φ P̃ H′ (H P̃ H′ + R)^{−1} H P̃ Φ′ + G Q G′.
Then for any initial condition P_r(0), the Riccati difference equations for P_r(t − r), P_{r−1}(t − r + 1), . . . , P_0(t) converge to a unique set of algebraic equations when t → ∞:
P_r = Φ P_r Φ′ − Σ_{i=0}^{r} ρ_i Φ P_r H′ (H P_r H′ + R)^{−1} H P_r Φ′ + G Q G′,   (4.37)
P_{r−1} = Φ P_r Φ′ − Σ_{i=0}^{r−1} ρ_i Φ P_r H′ (H P_r H′ + R)^{−1} H P_r Φ′ + G Q G′,   (4.38)
⋮
P_0 = Φ P_1 Φ′ − ρ_0 Φ P_1 H′ (H P_1 H′ + R)^{−1} H P_1 Φ′ + G Q G′.   (4.39)
Moreover, the corresponding estimators x̂_e(t − r + 1, r), x̂_e(t − r + 2, r − 1), . . . , x̂_e(t|t) converge to a set of constant-gain estimators:
x̂_e(t − r + 1, r) = [Φ − K_r H_r(t − r)] x̂_e(t − r, r) + K_r ỹ_r(t − r),   (4.40)
⋮
x̂_e(t, 1) = [Φ − K_1 H_1(t − 1)] x̂_e(t − 1, 2) + K_1 ỹ_1(t − 1),   (4.41)
x̂_e(t|t) = [I − K_0 H_0(t)] x̂_e(t, 1) + K_0 ỹ_0(t),   (4.42)
where
K_r = Φ P_r H′_r [diag{ρ_0 H P_r H′, . . . , ρ_r H P_r H′} + R_r]^{−1},
⋮
K_1 = Φ P_2 H′_1 [diag{ρ_0 H P_2 H′, ρ_1 H P_2 H′} + R_1]^{−1},
K_0 = P_1 H′_0 [ρ_0 H P_1 H′ + ρ_0 R]^{−1}.

Proof. To derive the claimed results, we just need to analyze the convergence of the Riccati difference equations (4.18) and (4.22). The derivation is divided into three stages.
(i) To show the convergence of the Riccati equation (4.18). First, denote
g_{ρ_0,...,ρ_r}(X) = Φ X Φ′ + G Q G′ − Σ_{i=0}^{r} ρ_i Φ X H′ (H X H′ + R)^{−1} H X Φ′;   (4.43)
then P_r(s + 1) = g_{ρ_0,...,ρ_r}(P_r(s)) = g^{s+1}_{ρ_0,...,ρ_r}(P_r(0)). Recalling Assumption 2.2, we have 0 ≤ Σ_{i=0}^{r} ρ_i ≤ 1. Thus (4.43) satisfies the conditions of Lemma 4.1. Consider the Riccati equation (4.18) initialized at Q_r(0) = 0 and let Q_r(s) = g^{s}_{ρ_0,...,ρ_r}(0); then 0 = Q_r(0) ≤ Q_r(1). It follows from Lemma 4.1(iii) that
Q_r(1) = g_{ρ_0,...,ρ_r}(Q_r(0)) ≤ g_{ρ_0,...,ρ_r}(Q_r(1)) = Q_r(2).
A simple inductive argument establishes that
0 = Q_r(0) ≤ Q_r(1) ≤ Q_r(2) ≤ · · · ≤ M_0.
Here, we have used Lemma 4.3 to bound the sequence {Q_r(s)}. We thus have a monotone nondecreasing sequence of matrices that is bounded above, so the limit P_r = lim_{s→∞} Q_r(s) exists and satisfies
P_r = Φ P_r Φ′ + G Q G′ − Σ_{i=0}^{r} ρ_i Φ P_r H′ (H P_r H′ + R)^{−1} H P_r Φ′.   (4.44)
Next, we show that the Riccati iteration initialized at R_r(0) ≥ P_r also converges, and to the same limit P_r. Define the matrix K_{P_r} = Φ P_r H′ (H P_r H′ + R)^{−1} and consider the linear operator
ψ(X) = [Φ − Σ_{i=0}^{r} ρ_i K_{P_r} H] X [Φ − Σ_{i=0}^{r} ρ_i K_{P_r} H]′ + (Σ_{i=0}^{r} ρ_i)(1 − Σ_{i=0}^{r} ρ_i) K_{P_r} H X H′ K′_{P_r}.
Observe that
P_r = g_{ρ_0,...,ρ_r}(P_r) = ψ(P_r) + G Q G′ + Σ_{i=0}^{r} ρ_i K_{P_r} R K′_{P_r} > ψ(P_r).
Thus ψ satisfies the condition of Lemma 4.2; consequently, for all X ≥ 0, lim_{s→∞} ψ^s(X) = 0. Now suppose R_r(0) ≥ P_r. Then
R_r(1) = g_{ρ_0,...,ρ_r}(R_r(0)) ≥ g_{ρ_0,...,ρ_r}(P_r) = P_r.   (4.45)
A simple inductive argument establishes that R_r(s) ≥ P_r, ∀s. Note that
0 ≤ R_r(s + 1) − P_r = g_{ρ_0,...,ρ_r}(R_r(s)) − g_{ρ_0,...,ρ_r}(P_r) = φ(K_{R_r(s)}, R_r(s)) − φ(K_{P_r}, P_r) ≤ φ(K_{P_r}, R_r(s)) − φ(K_{P_r}, P_r) = ψ(R_r(s) − P_r) ≤ ψ²(R_r(s − 1) − P_r) ≤ · · · ≤ ψ^{s+1}(R_r(0) − P_r).
Then, in light of Lemma 4.2, taking limits on both sides of the above inequalities yields
0 ≤ lim_{s→∞} (R_r(s + 1) − P_r) ≤ 0.
Thus we have proved that the Riccati iteration initialized at R_r(0) also converges to the same limit P_r. Now we establish that the Riccati iteration converges to P_r for every initial condition P_r(0) ≥ 0. Define Q_r(0) = 0 and R_r(0) = P_r(0) + P_r, and consider the three Riccati iterations initialized at Q_r(0), P_r(0) and R_r(0). Note that
Q_r(0) ≤ P_r(0) ≤ R_r(0).
It then follows from Lemma 4.1(iii) that
Q_r(s) ≤ P_r(s) ≤ R_r(s),  ∀s.
We have already established that the sequences Q_r(s) and R_r(s) converge to P_r. As a result,
P_r = lim_{s→∞} Q_r(s) ≤ lim_{s→∞} P_r(s) ≤ lim_{s→∞} R_r(s) = P_r,
proving (4.37).
(ii) To show the convergence of the expressions in (4.22), that is, of
P_r(t − r) = Φ P_r(t − r − 1) Φ′ − Σ_{i=0}^{r} ρ_i Φ P_r(t − r − 1) H′ (H P_r(t − r − 1) H′ + R)^{−1} H P_r(t − r − 1) Φ′ + G Q G′,   (4.46)
P_{r−1}(t − r + 1) = Φ P_r(t − r) Φ′ − Σ_{i=0}^{r−1} ρ_i Φ P_r(t − r) H′ (H P_r(t − r) H′ + R)^{−1} H P_r(t − r) Φ′ + G Q G′,   (4.47)
⋮
P_0(t) = Φ P_1(t − 1) Φ′ − ρ_0 Φ P_1(t − 1) H′ (H P_1(t − 1) H′ + R)^{−1} H P_1(t − 1) Φ′ + G Q G′.   (4.49)
In stage (i) we have proven the convergence of P_r(t − r) in (4.46). Based on this fact, and noting the expression (4.47), we get that P_{r−1}(t − r + 1) converges as well. Similar reasoning shows that P_{r−2}(t − r + 2), . . . , P_0(t) are all convergent when t → ∞. Set P_{r−1} = lim_{t→∞} P_{r−1}(t − r + 1), . . . , P_0 = lim_{t→∞} P_0(t); these limits obviously satisfy the algebraic equations (4.38)–(4.39).
(iii) To show the uniqueness of the limits. To this end, we just need to establish that (4.37) has a unique positive semi-definite solution, since a unique solution to (4.37) determines a unique set of solutions to (4.38)–(4.39), respectively. Consider P̂_r = g_{ρ_0,...,ρ_r}(P̂_r) and the Riccati iteration (4.18) initialized at P_r(0) = P̂_r. This yields the constant sequence P̂_r, P̂_r, . . . . However, we have shown that, for any initial condition, (4.18) converges to P_r. Thus P̂_r = P_r. The uniqueness of the solutions is proved. □

Corollary 4.1. Assume (Φ′, H′) is stabilizable and (Φ′, Q^{1/2} G′) is observable. Then for any Σ_{i=0}^{r} ρ_i ≥ ρ̄, the matrices P_r(t − r), P_{r−1}(t − r + 1), . . . , P_0(t) converge to a unique set of constant matrices when t → ∞, where ρ̄ is determined by the following optimization problem:
ρ̄ = arg min {ρ : 0 < ρ ≤ 1, ∃ F and Y > 0 such that ζ_ρ(F, Y) > 0},
with
ζ_ρ(F, Y) =
[ Y                        YΦ − ρFH      (ρ(1 − ρ))^{1/2} FH    Y G Q^{1/2}    F(ρR)^{1/2} ]
[ (YΦ − ρFH)′              Y             0                      0              0           ]
[ (ρ(1 − ρ))^{1/2} H′F′    0             Y                      0              0           ]
[ Q^{1/2} G′ Y             0             0                      I              0           ]
[ (ρR)^{1/2} F′            0             0                      0              I           ].   (4.50)
Moreover, the suboptimal estimators become steady-state ones.

Proof. First, we prove that the following statements are equivalent:
(i) ∃ X̄ > 0 such that X̄ > g_ρ(X̄);
(ii) ∃ K̄ and X̄ > 0 such that X̄ > φ(K̄, X̄);
(iii) ∃ F̄ and 0 < Ȳ ≤ I such that ζ_ρ(F̄, Ȳ) > 0.
Letting K̄ = Φ X̄ H′ (H X̄ H′ + R)^{−1}, we obtain the equivalence between (i) and (ii) immediately. Further, by using Schur complements, we can prove that ζ_ρ(F, Y) > 0 if and only if
Y > 0,   (4.51)
Y − [YΦ − ρFH] Y^{−1} [YΦ − ρFH]′ − ρ(1 − ρ) FH Y^{−1} H′F′ − Y G Q G′ Y − ρ F R F′ > 0.   (4.52)
Setting Y = X^{−1}, F = X^{−1}K and multiplying both sides of (4.52) by X yields
X > 0,   (4.53)
X − [Φ − ρKH] X [Φ − ρKH]′ − ρ(1 − ρ) K H X H′ K′ − G Q G′ − ρ K R K′ > 0,
that is, X > φ(K, X), which gives the equivalence between (ii) and (iii).
Observe that
Qr (s) ≤ Pr (s) ≤ Rr (s),
(4.47)
P0 (t ) = Φ P1 (t − 1)Φ ′ − ρ0 Φ P1 (t − 1)H ′
i=0
r −
−1
.. .
Also, we see that Pr is a fixed point of the Riccati iteration
r −
ρi Φ Pr (t − r )H ′
× (HPr (t − r )H + R) HPr (t − r )Φ ′ + GQG′ .
lim Qr (s) = Pr .
ψ(X ) = Φ −
r −1 − i =0
′
s→∞
Pr −1 (t − r + 1) = Φ Pr (t − r )Φ ′ −
(4.46)
− GQG′ − ρ KRK ′ > 0.
(4.54)
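As a numerical sanity check of the Schur-complement step from (4.52) to (4.54), the following Python sketch (not part of the original paper; all matrices are arbitrary random choices, not the system data) verifies the identity X·[left side of (4.52)]·X = [left side of (4.54)] under the substitution Y = X^{−1}, F = X^{−1}K:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
Phi = rng.standard_normal((n, n))
H = rng.standard_normal((m, n))
G = rng.standard_normal((n, 1))
K = rng.standard_normal((n, m))
Q = np.eye(1)
R = np.eye(m)
rho = 0.7

# an arbitrary positive definite X, with Y = X^{-1} and F = X^{-1} K
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)
Y = np.linalg.inv(X)
F = Y @ K

# left-hand side of (4.52); note Y^{-1} = X
M = Y @ Phi - rho * F @ H
lhs52 = (Y - M @ X @ M.T
         - rho * (1 - rho) * F @ H @ X @ H.T @ F.T
         - Y @ G @ Q @ G.T @ Y
         - rho * F @ R @ F.T)

# left-hand side of (4.54)
N = Phi - rho * K @ H
lhs54 = (X - N @ X @ N.T
         - rho * (1 - rho) * K @ H @ X @ H.T @ K.T
         - G @ Q @ G.T
         - rho * K @ R @ K.T)

# multiplying (4.52) by X on both sides recovers (4.54)
assert np.allclose(X @ lhs52 @ X, lhs54)
```

The check passes for any invertible X, since each term of (4.52) is exactly the corresponding term of (4.54) pre- and post-multiplied by Y.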
Thus the condition (ii) is satisfied if and only if (iii) is satisfied, so (i), (ii) and (iii) are mutually equivalent.

Next, we show the convergence of the Riccati equations (4.18) and (4.22) for Pr(t − r), …, P0(t). When ∑_{i=0}^{r} ρi = 1, the Riccati equation (4.18) reduces to the standard Riccati equation

P(s + 1) = ΦP(s)Φ′ − ΦP(s)H′(HP(s)H′ + R)^{−1}HP(s)Φ′ + GQG′, ∀s ≥ 0.  (4.55)

Under the conditions that (Φ′, H′) is stabilizable and (Φ′, Q^{1/2}G′) is observable, we know that P(s) converges to a fixed point [24]; obviously, the iterations in (4.22) then converge as well. If ∑_{i=0}^{r} ρi = ρ̄, we know from (4.50) that there exist F̄ and 0 < Ȳ ≤ I such that ζρ̄(F̄, Ȳ) > 0. Then, based on the equivalence between (i) and (iii), there exists X̄ > 0 such that X̄ > gρ0,...,ρr(X̄). In view of Theorem 4.2, for any initial condition Pr(0) ≥ 0 the Riccati iterations (4.18) and (4.22) converge to finite limits, and the corresponding estimators converge to the steady-state ones. Now, consider the case ρ̄ ≤ ∑_{i=0}^{r} ρi ≤ 1. Recalling the operator gρ(X) in (4.35), we can prove that gρ(X) is monotonically decreasing in ρ. Hence, if there exist F̄ and 0 < Ȳ ≤ I such that ζρ̄(F̄, Ȳ) > 0, there must exist X̄ > 0 such that X̄ > gρ̄(X̄) > gρ0,...,ρr(X̄). Again by Theorem 4.2, Pr(s), Pr−1(s), …, P0(s) are all convergent, and the existence of these limits implies that the suboptimal estimators converge to a set of steady-state estimators. The proof is thus completed.

Remark 4.3. It is noted that (4.49) with (4.50) is a quasi-convex optimization problem in the variables (ρ, F, Y), and a solution can be obtained by iterating an LMI feasibility problem and using bisection on the variable ρ, as shown in [25].

4.3. Stability of the suboptimal estimator

In this subsection, we show the mean square stability of the constant-gain estimators (4.40)–(4.42). The following lemma will be used subsequently.

Lemma 4.4 ([26]). The system

x(t + 1) = (A + ∑_{i=1}^{m} Ai ξi(t)) x(t),

where x(t) ∈ R^n and ξi(t), i = 1, …, m, are zero-mean random processes with E[ξi(t)ξj(s)] = δij δts, is mean square stable if and only if there exists a matrix Q = Q′ ∈ R^{n×n} with Q > 0 such that

A′QA − Q + ∑_{i=1}^{m} A′i Q Ai < 0.  (4.56)

Theorem 4.3. Under Assumptions 4.1 and 4.2, if the Riccati equations (4.18) and (4.22) converge, then the corresponding constant-gain estimators (4.40)–(4.42) are mean square stable.

Proof. It is noted that if the filter x̂e(t − r + 1, r) is stable, the subsequent finite iterations x̂e(t − r + 1, r − 1), …, x̂e(t, 1) are stable as well. Thus, to show the stability of (4.40)–(4.42), one only needs to show the stability of (4.40). Note that (4.40) can be rewritten as

x̂e(t − r + 1, r) = (Φ − ∑_{i=0}^{r} ρi FPr H) x̂e(t − r, r) + ∑_{i=0}^{r} (ρi − γi,t−r+i) FPr H x̂e(t − r, r) + Kr ỹr(t − r),

where FPr = ΦPrH′(HPrH′ + R)^{−1}. It can be seen that E[∑_{i=0}^{r}(ρi − γi,t−r+i)] = 0 and cov[∑_{i=0}^{r}(ρi − γi,t−r+i)] = ∑_{i=0}^{r} ρi (1 − ∑_{i=0}^{r} ρi). Based on the theorem's hypotheses, we know that Pr(s) converges to a fixed point Pr, and the corresponding algebraic Riccati equation can be rewritten as

Pr = (Φ − ∑_{i=0}^{r} ρi FPr H) Pr (Φ − ∑_{i=0}^{r} ρi FPr H)′ + ∑_{i=0}^{r} ρi (1 − ∑_{i=0}^{r} ρi) FPr H Pr H′ F′Pr + GQG′ + ∑_{i=0}^{r} ρi FPr R F′Pr.

As shown in Theorem 4.2,

Pr > (Φ − ∑_{i=0}^{r} ρi FPr H) Pr (Φ − ∑_{i=0}^{r} ρi FPr H)′ + ∑_{i=0}^{r} ρi (1 − ∑_{i=0}^{r} ρi) FPr H Pr H′ F′Pr,

and Pr > 0 since (Φ′, Q^{1/2}G′) is observable. Then the conditions of Lemma 4.4 are satisfied, and thus (4.40) is mean square stable. The proof is thus completed.

5. An illustrative example

In this section, we present a simple numerical example to illustrate the developed theoretical results. Consider a dynamic system described by (2.1) and (2.2) with the parameters

Φ = [1.02, 0.05; 0, 0.9],  G = [1; 0.5],  H = [2, 1],

where w(t) and v(t) are white noises with zero means and covariance matrices Q = 1 and R = 1, respectively. The initial value x0 and its covariance matrix are set to

x0 = [1; 1],  P0 = [1, 0; 0, 1].

In this example, it is assumed that r(t) ∈ {0, 1, 2}. For the optimal filter, r(t) is known a priori to the estimator, and its paths can be described by γi,t (i = 0, 1, 2) with the restriction γi,t+i × γj,t+j = 0 if i ≠ j. One path of r(t) is given in Fig. 1, based on which the optimal filter can be obtained directly by using the scheme proposed in Theorem 3.1. The true state values and the optimal estimates of the state components x1(t) and x2(t) are shown in Figs. 2 and 3, respectively. For the suboptimal filter design, it is assumed that r(t) has the probabilities

P{r(t) = 0} = 0.8,  P{r(t) = 1} = 0.1,  P{r(t) = 2} = 0.1,

again with γi,t+i × γj,t+j = 0 if i ≠ j. Then, from Theorem 4.1, the suboptimal filter can be obtained. Fig. 4 plots the real state x1(t) and its suboptimal estimate, while Fig. 5 plots the real state x2(t) and its estimate. Finally, the sums of the optimal and suboptimal estimation error covariances of x1(t) and x2(t) are given in Fig. 6. It can be seen from the simulation results that the obtained linear estimators for systems with random delays track the states well and that the estimation scheme proposed in this paper produces good performance.
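Since ∑_{i=0}^{2} ρi = 1 in this example, (4.18) reduces to the standard Riccati iteration (4.55) and the stochastic term in Lemma 4.4 vanishes, so mean square stability of the constant-gain filter amounts to Φ − FPr H being Schur stable. A minimal Python sketch of this check on the example data (illustrative only, not the paper's simulation code):

```python
import numpy as np

Phi = np.array([[1.02, 0.05],
                [0.0,  0.9]])
G = np.array([[1.0], [0.5]])
H = np.array([[2.0, 1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

# iterate the standard Riccati recursion (4.55) to its fixed point
P = np.eye(2)
for _ in range(1000):
    S = H @ P @ H.T + R
    P_next = (Phi @ P @ Phi.T
              - Phi @ P @ H.T @ np.linalg.inv(S) @ H @ P @ Phi.T
              + G @ Q @ G.T)
    if np.linalg.norm(P_next - P) < 1e-12:
        P = P_next
        break
    P = P_next

# constant filter gain and closed-loop matrix
F = Phi @ P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
A = Phi - F @ H

# with sum(rho_i) = 1 the multiplicative noise term in Lemma 4.4 is zero,
# so mean square stability reduces to spectral radius of A below one
assert max(abs(np.linalg.eigvals(A))) < 1.0
```

The iteration converges here because the example system is observable and controllable, and the resulting closed-loop matrix is Schur stable, consistent with Theorem 4.3.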
Fig. 1. The trajectories of γi,t (i = 0, 1, 2), which give one path of r(t).
Fig. 2. The first state component x1(t) and its optimal filter x̂1(t | t) based on the path of r(t) shown in Fig. 1.
Fig. 3. The second state component x2(t) and its optimal filter x̂2(t | t) based on the path of r(t) shown in Fig. 1.
Fig. 4. The first state component x1(t) and its suboptimal filter x̂o1(t | t) based on the given probabilities of r(t): P{r(t) = 0} = 0.8, P{r(t) = 1} = 0.1 and P{r(t) = 2} = 0.1.
Fig. 5. The second state component x2(t) and its suboptimal filter x̂o2(t | t) based on the given probabilities of r(t): P{r(t) = 0} = 0.8, P{r(t) = 1} = 0.1 and P{r(t) = 2} = 0.1.
Fig. 6. The sum of the optimal and suboptimal estimation error covariances.
6. Conclusion

In this paper, we have studied linear estimation for random delay systems with time-stamped measurements. An optimal linear filter with time-varying, stochastic gains has been presented via the reorganized observation method and the projection formula. Also, a suboptimal estimator with deterministic gains has been proposed under a new performance index. Its convergence and stability are proved, and it is shown that the stability of this estimator does not depend on the observation packet delay but only on the overall observation packet loss probability.

Appendix. Discussions on the solutions to (3.32) and (3.37)

In this Appendix, we discuss the solution to the singular Riccati equation (3.32). We only consider the case s ≤ t − r; the case s > t − r can be addressed similarly. Observe that

Kr(s)Qηr(s) = ΦPr(s)H′r(s),  (A.1)

Qηr(s) = Hr(s)Pr(s)H′r(s) + Rr(s),  (A.2)

Pr(s + 1) = ΦPr(s)Φ′ − Kr(s)Qηr(s)K′r(s) + GQG′,  (A.3)

where

Hr(s) = [γ0,s H; …; γr,s+r H],  (A.4)

vr(s) = [γ0,s v(s); …; γr,s+r v(s)],  (A.5)

with vr(s) a white noise of zero mean and covariance matrix Rr(s) = diag{γ0,s R, …, γr,s+r R}.

When γi,i+s = 0 for i = 0, …, r, it follows from (A.2), (A.4) and (A.5) that Qηr(s) = 0 for s = 0, …, t − r, and the Riccati equation (A.3) becomes

Pr(s + 1) = ΦPr(s)Φ′ + GQG′.  (A.6)

Thus the solution to the Riccati equation exists and is unique. Note from property (2.6) that the remaining case is that there exists exactly one ℓ ∈ {0, …, r} such that γℓ,ℓ+s = 1, while the others are zero. In this case, it is easily seen that there exists an elementary matrix Ar(s) with A′r(s) = Ar^{−1}(s) such that

Ar(s)Hr(s) = [H; 0],  (A.7)

Ar(s)Rr(s)A′r(s) = [R, 0; 0, 0].  (A.8)

Thus, by using (A.7)–(A.8), it follows from (3.31) that

Ar(s)Qηr(s)A′r(s) = [HPr(s)H′ + R, 0; 0, 0].  (A.9)

On the other hand, noting that A′r(s)Ar(s) = I, (3.30) can be equivalently written as

Kr(s)A′r(s)Ar(s)Qηr(s)A′r(s) = ΦPr(s)H′r(s)A′r(s).  (A.10)

Let

Kr(s)A′r(s) = [K̄r(s)  K̃r(s)].  (A.11)

Incorporating (A.11) into (A.10) and utilizing (A.7) and (A.9) yields

[K̄r(s)(HPr(s)H′ + R)  0] = [ΦPr(s)H′  0].  (A.12)

Since the matrix Qv = R is invertible, HPr(s)H′ + R is invertible as well, and K̄r(s) is given uniquely by

K̄r(s) = ΦPr(s)H′(HPr(s)H′ + R)^{−1}.  (A.13)

Therefore, Kr(s) is given by

Kr(s) = [K̄r(s)  K̃r(s)]Ar(s),  (A.14)

where K̃r(s) can take any value. We have shown above that the linear equation (3.30) always has a solution. Next, we show that every solution to (3.30) yields the same Riccati equation (3.32). In fact, substituting (A.14) into (3.32) yields

Pr(s + 1) = ΦPr(s)Φ′ − ΦPr(s)H′[HPr(s)H′ + R]^{−1}HPr(s)Φ′ + GQG′,  (A.15)

which does not depend on the choice of the matrix K̃r(s). Based on the above discussion, we conclude that Eq. (3.30) is solvable and that any solution, if more than one exists, yields the same result for the Riccati equation (3.32).
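The invariance claimed in (A.15) is easy to confirm numerically. The sketch below is hypothetical and not from the paper: it takes r = 1 with γ0,s = 1 and γ1,s+1 = 0, so that Ar(s) = I and Hr(s), Rr(s) are already in the form (A.7)–(A.8), and checks that the update (A.3) gives the same Pr(s + 1) for any choice of K̃r(s):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
Phi = np.array([[1.02, 0.05], [0.0, 0.9]])
G = np.array([[1.0], [0.5]])
H = np.array([[2.0, 1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

P = np.eye(n)                       # current Pr(s)
Hr = np.vstack([1 * H, 0 * H])      # gamma_{0,s} = 1, gamma_{1,s+1} = 0, Ar(s) = I
Rr = np.diag([1.0, 0.0])            # diag{gamma_{0,s} R, gamma_{1,s+1} R}
Q_eta = Hr @ P @ Hr.T + Rr          # (A.2); singular here

# unique block from (A.13)
Kbar = Phi @ P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

def next_P(Ktilde):
    # Kr(s) = [Kbar  Ktilde] Ar(s), with Ar(s) = I, plugged into (A.3)
    Kr = np.hstack([Kbar, Ktilde])
    return Phi @ P @ Phi.T - Kr @ Q_eta @ Kr.T + G @ Q @ G.T

P1 = next_P(np.zeros((n, 1)))
P2 = next_P(rng.standard_normal((n, 1)))
assert np.allclose(P1, P2)          # (A.15): the update is independent of Ktilde
```

The assertion holds because the zero blocks of Q_eta annihilate every term involving K̃r(s) in Kr(s)Qηr(s)K′r(s).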
References

[1] J.P. Hespanha, P. Naghshtabrizi, Y. Xu, A survey of recent results in networked control systems, Proc. IEEE 95 (1) (2007) 138–162.
[2] J. Nilsson, B. Bernhardsson, B. Wittenmark, Stochastic analysis and control of real-time systems with random time delays, Automatica 34 (1) (1998) 57–64.
[3] L. Zhang, Y. Shi, T. Chen, A new method for stabilization of networked control systems with random delays, IEEE Trans. Automat. Control 50 (8) (2005) 1177–1181.
[4] W. Zhang, L. Yu, Modeling and control of networked control systems with both network-induced delay and packet-dropout, Automatica 44 (2008) 3206–3210.
[5] M. Liu, D.W.C. Ho, Y. Niu, Stabilization of Markovian jump linear system over networks with random communication delay, Automatica 45 (2009) 416–421.
[6] C. Lin, Z. Wang, F. Yang, Observer-based networked control for continuous-time systems with random sensor delays, Automatica 45 (2009) 578–584.
[7] C. Su, C. Lu, Interconnected network state estimation using randomly delayed measurements, IEEE Trans. Power Syst. 16 (4) (2001) 870–878.
[8] Z. Wang, D.W.C. Ho, X. Liu, Robust filtering under randomly varying sensor delay measurements, IEEE Trans. Circuits Syst. II Express Briefs 51 (2004) 320–326.
[9] N.E. Nahi, Optimal recursive estimation with uncertain observation, IEEE Trans. Inf. Theory 15 (4) (1969) 457–462.
[10] M.T. Hadidi, C.S. Schwartz, Linear recursive state estimators under uncertain observations, IEEE Trans. Automat. Control 24 (6) (1979) 944–948.
[11] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M.I. Jordan, S.S. Sastry, Kalman filtering with intermittent observations, IEEE Trans. Automat. Control 49 (9) (2004) 1453–1463.
[12] K. Plarre, F. Bullo, On Kalman filtering for detectable systems with intermittent observations, IEEE Trans. Automat. Control 54 (2) (2009) 386–390.
[13] E. Yaz, A. Ray, Linear unbiased state estimation under randomly varying bounded sensor delay, Appl. Math. Lett. 11 (1998) 27–32.
[14] A.H. Carazo, J.L. Pérez, Extended and unscented filtering algorithms using one-step randomly delayed observations, Appl. Math. Comput. 190 (2007) 1375–1393.
[15] S. Nakamori, R.C. Águila, A.H. Carazo, J.L. Pérez, Recursive estimators of signals from measurements with stochastic delays using covariance information, Appl. Math. Comput. 162 (2005) 65–79.
[16] M. Sahebsara, T. Chen, S.L. Shah, Optimal H2 filtering with random sensor delay, multiple packet dropout and uncertain observations, Internat. J. Control 80 (2) (2007) 292–301.
[17] M. Moayedi, Y.C. Soh, Y.K. Foo, Optimal Kalman filtering with random sensor delays, packet dropouts and missing measurements, in: Proceedings of the American Control Conference, USA, June 2009, pp. 3405–3410.
[18] S. Zhou, G. Feng, H∞ filtering for discrete-time systems with randomly varying sensor delays, Automatica 44 (2008) 1918–1922.
[19] B. Shen, Z. Wang, H. Shu, G. Wei, H∞ filtering for nonlinear discrete-time stochastic systems with randomly varying sensor delays, Automatica 45 (4) (2009) 1032–1037.
[20] J. Evans, V. Krishnamurthy, Hidden Markov model state estimation with randomly delayed observations, IEEE Trans. Signal Process. 47 (8) (1999) 2157–2166.
[21] H. Song, L. Yu, W. Zhang, H∞ filtering of network-based systems with random delay, Signal Process. 89 (4) (2009) 615–622.
[22] L. Schenato, Optimal estimation in networked control systems subject to random delay and packet drop, IEEE Trans. Automat. Control 53 (5) (2008) 1311–1316.
[23] H. Zhang, L. Xie, D. Zhang, Y.C. Soh, A reorganized innovation approach to linear estimation, IEEE Trans. Automat. Control 49 (10) (2004) 1810–1814.
[24] B.D.O. Anderson, J.B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ, 1979.
[25] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1997.
[26] F. Wang, V. Balakrishnan, Robust Kalman filters for linear time-varying systems with stochastic parametric uncertainties, IEEE Trans. Signal Process. 50 (4) (2002) 803–813.