Applied Mathematics and Computation 343 (2019) 230–246
Distributed state estimation for stochastic discrete-time sensor networks with redundant channels
Qian Li, Xinzhi Liu, Qingxin Zhu, Shouming Zhong, Dian Zhang
School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 610054, PR China; Department of Applied Mathematics, University of Waterloo, Waterloo, Canada; School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, PR China; School of Automation and Electrical Engineering, Qingdao University of Science and Technology, Qingdao 266061, PR China
Keywords: Distributed state estimation; Sensor network; Randomly varying nonlinearity; Redundant channel; Stochastic switching topologies
a b s t r a c t This paper investigates the distributed state estimation problem for a class of sensor networks described by discrete-time stochastic systems with stochastic switching topologies. In the sensor network, the redundant channel and the randomly varying nonlinearities are considered. The stochastic Brownian motions affect both the dynamical plant and the sensor measurement outputs. Through available output measurements from each individual sensor, the distributed state estimators are designed to approximate the states of the networked dynamic system. Sufficient conditions are established to guarantee the convergence of the estimation error systems for all admissible stochastic disturbances and randomly varying nonlinearities. Then, the distributed state estimator gain matrices are derived using the linear matrix inequality method. Moreover, a numerical example is given to verify the effectiveness of the theoretical results. © 2018 Elsevier Inc. All rights reserved.
1. Introduction
As a special class of complex networks, sensor networks have attracted increasing attention from researchers in many disciplines, such as signal processing, networking and protocols, embedded systems, information management, and distributed algorithms; see, e.g., [1,2]. Sensor networks typically consist of a large number of sensor nodes and a few control nodes. Each sensor node is equipped with a sensing component, a processing device, and a communication device. The sensors comprising a sensor network are deployed to perform collaborative estimation tasks: each individual sensor locally estimates the system state not only from its own measurement but also from its neighboring sensors' measurements according to the given topology. This gives rise to the so-called distributed state estimation problem, which has recently attracted growing research interest [4,35]. A common phenomenon in sensor networks is that the neighboring set of a specific sensor node sometimes varies frequently, due to node mobility or even node faults, limited energy resources, and limited sensing, computation, and communication capabilities in real-world deployments, which gives rise to random sensing topologies. In other words, a sensor network displays a number of modes that may switch from one to another randomly within different time intervals. In recent decades, some primary investigations have been carried out on sensor networks with switching topologies [5–8,37].
∗ Corresponding authors. E-mail addresses: [email protected] (Q. Li), [email protected] (Q. Zhu).
https://doi.org/10.1016/j.amc.2018.09.045 0096-3003/© 2018 Elsevier Inc. All rights reserved.
In sensor networks, owing to the limited energy, computational power, and communication resources of the sensor nodes, communication between network nodes is limited. Distributed estimation thus inevitably suffers from constrained communication and computation capabilities that degrade network performance. For instance, packet dropouts are unavoidable due to network traffic congestion and packet transmission failures [9–12]. To date, various issues concerning sensor networks with randomly occurring packet dropouts have been widely investigated; see, e.g., [13–26] and the references therein. A missing-measurement phenomenon also typically occurs in networked control systems, caused by a variety of factors such as the high maneuverability of the tracked target, intermittent sensor failures, or limited battery energy. Sensor measurements subject to such probabilistic information losses have attracted considerable attention during the past few years; see [28–30]. On the other hand, sensor networks are often influenced by additive nonlinear disturbances. These nonlinear disturbances may themselves experience random abrupt changes, probably due to phenomena such as random failures and changes in node interconnections; see [31,32] for more explanation. Stochastic disturbances are also unavoidable when modeling sensor networks in a noisy environment. The distributed estimation problem has been extensively studied for sensor networks with additive white noise in [3,33,34]. However, almost all existing results in the literature rest on the assumption that only one transmission channel is used. In practice, a single data-transmission channel may fail intermittently owing to low reliability, frequent traffic congestion, and the poor anti-interference ability of the communication network, which gives rise to randomly occurring packet dropouts.
In order to reduce the possibility of packet dropouts in the single-channel case, one redundant channel is considered in the communication network. So far, however, only a few studies in the literature have exploited such redundant channels for performance improvement [27]. To the best of our knowledge, few studies have jointly considered distributed estimation problems with stochastic switching topologies, a redundant channel, and randomly varying nonlinearities. Motivated by the above discussion, in this paper we investigate the distributed state estimation problem for a class of sensor networks described by discrete-time stochastic systems with stochastic switching topologies. Our approach incorporates a redundant channel in the sensor network, where stochastic Brownian motions affect both the dynamical plant and the sensor measurement outputs. Randomly varying nonlinearities and missing measurements are introduced to reflect more realistic dynamical behaviors of sensor networks caused by noisy environments as well as by probabilistic communication failures. Using the available output measurements from each individual sensor, distributed state estimators are designed to approximate the states of the networked dynamic system. Sufficient conditions are presented to guarantee the convergence of the estimation error systems for all admissible stochastic disturbances and randomly varying nonlinearities. The distributed estimator gains are then derived using the linear matrix inequality (LMI) method. Finally, a numerical example is given to verify the effectiveness of the theoretical results.
Notations: Throughout this paper, the notation is fairly standard. N stands for the set of nonnegative integers. Rn denotes the n-dimensional Euclidean space. For symmetric matrices X and Y, the notation X > Y (X ≥ Y) means that the matrix X − Y is positive definite (nonnegative definite).
(Ω, F, P) is a probability space, where Ω is the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F. E[·] denotes the expectation operator with respect to the probability measure P. The superscripts T and −1 stand for matrix transposition and matrix inversion, respectively. The notation sym(A) denotes the sum of a matrix A and its transpose, i.e., sym(A) = A + AT.
2. Preliminaries
Consider a target plant described by the following discrete-time nonlinear stochastic system defined on a probability space (Ω, F, P):
x(k + 1) = A(rk)x(k) + B(rk)x(k − τ(k)) + δ(k)C1(rk)f(k, x(k)) + (1 − δ(k))C2(rk)g(k, x(k)) + D0(rk)x(k)ω0(k),  (1)
with N sensors modeled by
yi(k) = Ei(rk)x(k) + Di(rk)x(k)ωi(k), i = 1, 2, . . . , N,  (2)
where x(k) ∈ Rn is the state vector and yi(k) ∈ Rm is the measurement output of sensor i on the target x(k); τ(k) denotes the discrete time delay satisfying τ1 ≤ τ(k) ≤ τ2, where τ1 and τ2 are positive constants. f(·, ·) : N × Rn → Rn and g(·, ·) : N × Rn → Rn are nonlinear functions. The stochastic variable δ(k) is a Bernoulli-distributed white-noise process satisfying the following distribution laws:
Pr{δ(k) = 1} = E{δ(k)} = δ̄, Pr{δ(k) = 0} = 1 − E{δ(k)} = 1 − δ̄, E{(δ(k) − δ̄)2} = δ̄(1 − δ̄),  (3)
where δ̄ ∈ [0, 1] is a known constant.
Assumption 2.1. The nonlinear functions f(·, ·) : N × Rn → Rn and g(·, ·) : N × Rn → Rn are continuous and satisfy f(k, 0) = 0, g(k, 0) = 0, and the following inequalities:
||f(k, u) − f(k, v)|| ≤ ||Φ1(u − v)||,
||g(k, u) − g(k, v)|| ≤ ||Φ2(u − v)||,
for all k ∈ N and u, v ∈ Rn, where Φ1, Φ2 are real matrices with appropriate dimensions.
Remark 2.1. Owing to random abrupt changes in the environmental circumstances, such as random failures and changes in node interconnections, nonlinear disturbances may occur in a probabilistic way. In the dynamic system (1), the random variable δ(k) models the probabilistic switching between the nonlinear functions f(k, x(k)) and g(k, x(k)) according to a given probability distribution.
The sequence {rk, k ≥ 0} is a right-continuous Markov chain taking values in a finite set L = {1, 2, . . . , Ñ} with transition probability matrix Π = {πpq} given by
πpq = Pr{rk+1 = q | rk = p},
where πpq ≥ 0 is the transition probability from mode p to mode q at time k, and Σ_{q=1}^{Ñ} πpq = 1 for all p ∈ L. For each possible rk = p ∈ L, A(rk), B(rk), C1(rk), C2(rk), Ei(rk) and Di(rk) (i = 0, 1, 2, . . . , N) are known constant matrices. ωi(k) (i = 0, 1, . . . , N) is a scalar Wiener process defined on the probability space (Ω, F, P) with
E{ωi(k)} = 0, E{ωi2(k)} = 1, E{ωi(k)ωj(k)} = 0 (i ≠ j), i, j = 0, 1, . . . , N.
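As an illustration of the mode process, {rk} can be simulated by sampling the next mode from the row of the transition matrix indexed by the current mode. The sketch below is ours (the helper name `simulate_modes` is an assumption, not part of the paper); the two-mode transition matrix used is the one appearing later in Example 4.1.

```python
import random

def simulate_modes(pi, r0, steps, seed=1):
    """Simulate the Markov mode process {r_k} with transition matrix pi = [pi_pq]."""
    rng = random.Random(seed)
    path = [r0]
    r = r0
    for _ in range(steps):
        u = rng.random()
        acc = 0.0
        nxt = len(pi[r]) - 1  # fallback guards against floating-point round-off in row sums
        for q, p_q in enumerate(pi[r]):
            acc += p_q
            if u < acc:
                nxt = q
                break
        r = nxt
        path.append(r)
    return path

# Two-mode example: Pi = [0.3 0.7; 0.6 0.4], as in Example 4.1 (modes indexed 0 and 1 here).
path = simulate_modes([[0.3, 0.7], [0.6, 0.4]], 0, 10_000)
frac_mode0 = path.count(0) / len(path)
```

The empirical mode frequencies approach the stationary distribution of Π (here 6/13 ≈ 0.46 for the first mode).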
In practice, a single data-transmission channel may fail intermittently owing to low reliability, frequent traffic congestion, and the poor anti-interference ability of the communication network, which gives rise to randomly occurring packet dropouts. In order to reduce the possibility of packet dropouts in the single-channel case, one redundant channel is considered in the communication network in this paper. Let ȳi(k) denote the actual output collected by sensor node i from the plant via the two different channels in a random way; it is given as follows:
ȳi(k) = αi(k)yi(k) + (1 − αi(k))βi(k)yi(k), i = 1, 2, . . . , N.  (4)
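Model (4) can be exercised numerically: a measurement arrives whenever the primary channel works (αi(k) = 1) or, failing that, the redundant channel works (αi(k) = 0, βi(k) = 1), so the arrival probability is ᾱ + (1 − ᾱ)β̄. The sketch below is ours; the probabilities 0.5 and 0.6 are illustrative assumptions, not values from the paper.

```python
import random

def channel_arrival_rate(alpha_bar, beta_bar, n, seed=2):
    """Monte-Carlo estimate of the probability that y_bar(k) = y(k) under model (4)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a = rng.random() < alpha_bar   # primary channel works
        b = rng.random() < beta_bar    # redundant channel works
        if a or b:                     # alpha=1, or alpha=0 and beta=1
            hits += 1
    return hits / n

p_two = channel_arrival_rate(0.5, 0.6, 200_000)
p_theory = 0.5 + (1 - 0.5) * 0.6   # = 0.8, versus 0.5 with the primary channel alone
```

The redundant channel raises the arrival probability from ᾱ to ᾱ + (1 − ᾱ)β̄, which is the mechanism behind the reduced dropout rate discussed above.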
The two stochastic variables αi(k) and βi(k), i = 1, 2, . . . , N, are Bernoulli-distributed white-noise processes specified by the following distribution laws:
Pr{αi(k) = 1} = E{αi(k)} = ᾱi, Pr{αi(k) = 0} = 1 − E{αi(k)} = 1 − ᾱi, E{(αi(k) − ᾱi)2} = ᾱi(1 − ᾱi),
Pr{βi(k) = 1} = E{βi(k)} = β̄i, Pr{βi(k) = 0} = 1 − E{βi(k)} = 1 − β̄i, E{(βi(k) − β̄i)2} = β̄i(1 − β̄i),  (5)
where ᾱi, β̄i ∈ [0, 1] are known constants.
Remark 2.2. The sensor measurement model proposed in (4) provides a unified framework for the phenomenon of randomly occurring packet dropouts when two channels coexist. If αi(k) = 1 (with βi(k) = 1 or 0), the data from sensor node i are collected via the primary channel; if αi(k) = 0 and βi(k) = 1, the data are collected via the redundant channel; if αi(k) = 0 and βi(k) = 0, a packet dropout occurs, i.e., both the primary and redundant channels are invalid. Note that the measurement model in (4) can be further extended to the multichannel case.
Assumption 2.2. The variables δ(k), αi(k), βi(k) (i = 1, 2, . . . , N) and ωj(k) (j = 0, 1, . . . , N) are mutually independent.
As stated in the introduction, the purpose of this paper is to design distributed state estimators to approximate the state x(k) of system (1) under the condition that no centralized processor is available to collect all the measurements from the sensors. By incorporating the neighboring measurements, we construct the following distributed state estimators:
x̂i(k + 1) = A(rk)x̂i(k) + δ̄C1(rk)f(k, x̂i(k)) + (1 − δ̄)C2(rk)g(k, x̂i(k)) + Σ_{j∈Ni} aij(rk)Kij(rk)[ȳj(k) − Ej(rk)x̂j(k)],  (6)
where x̂i(k) ∈ Rn is the state estimate at sensor node i, and Kij(rk) ∈ Rn×m is the distributed estimator gain matrix to be determined, which is governed by the Markov chain {rk}. Letting ei(k) = x(k) − x̂i(k), the error system for sensor i can be obtained from (1), (2) and (6) as follows:
ei(k + 1) = A(rk)ei(k) + B(rk)x(k − τ(k)) + (δ(k) − δ̄)[C1(rk)f(k, x(k)) − C2(rk)g(k, x(k))] + δ̄C1(rk)f̃(k, ei(k)) + (1 − δ̄)C2(rk)g̃(k, ei(k)) + D0(rk)x(k)ω0(k) − Σ_{j∈Ni} aij(rk)Kij(rk){(αj(k) − ᾱj)Êj(k, rk) + ᾱjÊj(k, rk) + [(βj(k) − αj(k)βj(k)) − (β̄j − ᾱjβ̄j)]Êj(k, rk) + (β̄j − ᾱjβ̄j)Êj(k, rk) − Ej(rk)x(k) + Ej(rk)ej(k)},  (7)
where f̃(k, ei(k)) = f(k, x(k)) − f(k, x̂i(k)), g̃(k, ei(k)) = g(k, x(k)) − g(k, x̂i(k)) and Êj(k, rk) = Ej(rk)x(k) + Dj(rk)x(k)ωj(k). For notational simplicity, the following notations are defined for each possible rk = p ∈ L:
Ẽp = diag{E1p, E2p, . . . , ENp}, Ēp = [E1pT, E2pT, . . . , ENpT]T,
C̄lp = [ClpT, ClpT, . . . , ClpT]T (l = 1, 2), B̄p = [BpT, BpT, . . . , BpT]T,
D̂0p = [D0pT, D0pT, . . . , D0pT]T, D̄p = [D1pT, D2pT, . . . , DNpT]T,
Λ(k) = diag{α1(k)Im, α2(k)Im, . . . , αN(k)Im}, Λ̄ = diag{ᾱ1Im, ᾱ2Im, . . . , ᾱNIm},
ϒ(k) = diag{β1(k)Im, β2(k)Im, . . . , βN(k)Im}, Ῡ = diag{β̄1Im, β̄2Im, . . . , β̄NIm},
ω̃(k) = diag{ω1(k)Im, ω2(k)Im, . . . , ωN(k)Im},
and K̄p = (aij,pKij,p)N×N is a sparse matrix satisfying K̄p ∈ Wn×m, where Wn×m is defined as
Wn×m = {Ū = [Uij] ∈ RnN×mN | Uij ∈ Rn×m, Uij = 0 if j ∉ Ni}.  (8)
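The sparsity constraint (8) simply says that the (i, j) gain block must vanish whenever sensor j is not a neighbor of sensor i. A minimal sketch of such a check, assuming the stacked gain is stored as a nested list of n × m blocks (the helper name `sparsity_ok` is ours):

```python
def sparsity_ok(K_blocks, adjacency):
    """Check K_bar in W_{n x m}: block (i, j) must be zero whenever adjacency[i][j] == 0, cf. (8)."""
    N = len(adjacency)
    for i in range(N):
        for j in range(N):
            if not adjacency[i][j]:
                # every entry of the (i, j) block must vanish
                if any(any(abs(v) > 0 for v in row) for row in K_blocks[i][j]):
                    return False
    return True

# Two-node illustration with hypothetical 3x2 blocks (n = 3, m = 2).
adj = [[1, 1], [0, 1]]
zero = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
gain = [[0.5, -0.1], [0.0, 0.2], [0.1, 0.0]]
ok = sparsity_ok([[gain, gain], [zero, gain]], adj)    # respects (8)
bad = sparsity_ok([[gain, gain], [gain, gain]], adj)   # violates (8): node 1 has no neighbor 0
```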
By utilizing the Kronecker product, the error system governed by (7) above can be rewritten in a compact form as
e(k + 1) = (IN ⊗ Ap − K̄pẼp)e(k) + K̄pĒpx(k) + B̄px(k − τ(k)) + (δ(k) − δ̄)(C̄1pf(k, x(k)) − C̄2pg(k, x(k))) + δ̄(IN ⊗ C1p)f̄(k, e(k)) + (1 − δ̄)(IN ⊗ C2p)ḡ(k, e(k)) + D̂0px(k)ω0(k) − K̄p(Λ(k) − Λ̄)Ē∗p(k) − K̄pΛ̄Ē∗p(k) − K̄pϒ∗(k)Ē∗p(k) − K̄p(Ῡ − Λ̄Ῡ)Ē∗p(k),  (9)
where f̄(k, e(k)) = [f̃T(k, e1(k)), f̃T(k, e2(k)), . . . , f̃T(k, eN(k))]T, ḡ(k, e(k)) = [g̃T(k, e1(k)), g̃T(k, e2(k)), . . . , g̃T(k, eN(k))]T, e(k) = [e1T(k), e2T(k), . . . , eNT(k)]T, Ē∗p(k) = Ēpx(k) + ω̃(k)D̄px(k), and ϒ∗(k) = ϒ(k) − ϒ(k)Λ(k) − (Ῡ − Λ̄Ῡ). By letting η(k) = [xT(k), eT(k)]T, we obtain the following augmented system:
η(k + 1) = Y(k) − Ξ̄1,pη(k) + (δ(k) − δ̄)𝒞2pF(k) + 𝒟pη(k)ω0(k) − Ξ̄2,p(k)η(k) − ϒ̃p(k)η(k) − ϒ̄p(k)η(k),  (10)
where Y(k) = 𝒜pη(k) + ℬpη(k − τ(k)) + 𝒞1pF(k), F(k) = [fT(k, x(k)), gT(k, x(k)), f̄T(k, e(k)), ḡT(k, e(k))]T, and, with block rows separated by semicolons,
𝒜p = [ Ap, 0 ; K̄pĒp, IN ⊗ Ap − K̄pẼp ],
ℬp = [ Bp, 0 ; B̄p, 0 ],
𝒞1p = [ δ̄C1p, (1 − δ̄)C2p, 0, 0 ; 0, 0, δ̄(IN ⊗ C1p), (1 − δ̄)(IN ⊗ C2p) ],
𝒞2p = [ C1p, −C2p, 0, 0 ; C̄1p, −C̄2p, 0, 0 ],
𝒟p = [ D0p, 0 ; D̂0p, 0 ],
Ξ̄1,p = [ 0, 0 ; K̄pΛ̄Ēp + K̄p(Ῡ − Λ̄Ῡ)Ēp, 0 ],
Ξ̄2,p(k) = [ 0, 0 ; K̄pΛ̄ω̃(k)D̄p + K̄p(Ῡ − Λ̄Ῡ)ω̃(k)D̄p, 0 ],
ϒ̃p(k) = [ 0, 0 ; K̄p(Λ(k) − Λ̄)Ēp + K̄p(Λ(k) − Λ̄)ω̃(k)D̄p, 0 ],
ϒ̄p(k) = [ 0, 0 ; K̄pϒ∗(k)Ēp + K̄pϒ∗(k)ω̃(k)D̄p, 0 ].
In this paper, we aim to account for one redundant channel and stochastic switching topologies in the sensor network, and to determine the gain matrices Kij,p (i = 1, 2, . . . , N; j ∈ Ni; ∀p ∈ L) such that the augmented system (10) is globally asymptotically stable in the mean square sense. System (6) is then said to be a convergent distributed state estimator of system (1) with measurement outputs (4). Before proceeding further, we introduce the following definition, which will be used in later derivations.
Definition 2.1 [36]. System (10) is globally asymptotically stable in the mean square sense if, for any initial condition φ(·), the corresponding solution {η(k); k ∈ N} satisfies
lim_{k→∞} E{||η(k)||2} = 0.
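Definition 2.1 can be illustrated by a Monte-Carlo estimate of E{||η(k)||2}. The sketch below uses a toy scalar system η(k + 1) = aη(k) + dη(k)w(k) with w(k) ∼ N(0, 1), a stand-in of our own choosing (not system (10) itself), for which E{η(k)2} contracts by a2 + d2 per step:

```python
import random

def mean_square_traj(a=0.5, d=0.3, eta0=1.0, steps=60, runs=2000, seed=3):
    """Monte-Carlo estimate of E{|eta(k)|^2} for the toy scalar stochastic system
    eta(k+1) = a*eta(k) + d*eta(k)*w(k), with w(k) ~ N(0, 1)."""
    rng = random.Random(seed)
    acc = [0.0] * (steps + 1)
    for _ in range(runs):
        eta = eta0
        for k in range(steps + 1):
            acc[k] += eta * eta
            eta = a * eta + d * eta * rng.gauss(0.0, 1.0)
    return [s / runs for s in acc]

ms = mean_square_traj()
# E{eta(k)^2} evolves exactly as (a^2 + d^2)^k = 0.34^k, so the estimate decays to 0,
# i.e., the toy system is globally asymptotically stable in the mean square sense.
```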
3. Main results
In this section, sufficient criteria are established for the augmented system (10) to be globally asymptotically stable in the mean square sense.
Theorem 3.1. Given the matrices K̄p, the augmented system (10) is globally asymptotically stable in the mean square sense if there exist matrices Pp > 0, Q > 0, M̄p and a scalar ε > 0 such that the following matrix inequalities hold for all p ∈ L:
Φ̄p + sym(M̄pTp) < 0,  (11)
where Φ̄p and Tp are defined in Appendix A.
Proof. See Appendix B.
Having established the stability analysis of Theorem 3.1 for the augmented estimation error system (10), we are now in a position to design the distributed state estimator (6) for the networked target system (1) with measurement outputs (4). Before proceeding, the following lemma will be utilized in establishing our criteria.
Lemma 3.1. Let S = diag{S11, S22, . . . , SNN}, with Sii ∈ Rn×n (i = 1, 2, . . . , N) being invertible matrices. If X = SŪ for Ū ∈ RnN×mN, then Ū ∈ Wn×m ⇔ X ∈ Wn×m.
Theorem 3.2. Under Assumptions 2.1 and 2.2, system (6) is a convergent distributed state estimator of system (1) with measurement outputs (4) if there exist matrices Pp1 > 0, Pp3 > 0, Q1 > 0, Q3 > 0, Q2, Xp, Mi1, Mi2 (i = 1, 2, 4), M3j (j = 1, 2, 3, 4) and a scalar ε > 0 such that the following matrix inequalities hold for all p ∈ L:
Q = [ Q1, Q2 ; ∗, Q3 ] > 0,  (12)
Ω̄p = [Ωi,j]13×13 < 0,  (13)
where Ω̄p is symmetric (∗ denotes the blocks induced by symmetry), Ωi,11 = 0 for i = 2, . . . , 10, Ωi,12 = 0 for i = 3, . . . , 11, Ωi,13 = 0 for i = 2, . . . , 12, and the remaining blocks Ωi,j (i, j = 1, 2, . . . , 13) are defined in Appendix C. Moreover,
K̄p = (P̃p3)−1Xp,  (14)
and, accordingly, the estimator gains Kij, p (i = 1, 2, . . . , N, j ∈ Ni , p ∈ L ) can be derived from (8). Proof. See Appendix D.
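Once (14) yields the stacked gain K̄p, the individual gains Kij,p are just its n × m sub-blocks at block position (i, j), which are nonzero only for j ∈ Ni by (8). A minimal slicing sketch (the helper name `gain_block` is ours), assuming K̄p is stored as a nested list of rows:

```python
def gain_block(K_bar, i, j, n, m):
    """Extract the n-by-m block K_ij from the stacked gain matrix K_bar, cf. (8) and (14)."""
    return [row[j * m:(j + 1) * m] for row in K_bar[i * n:(i + 1) * n]]

# Toy 2x2-block example with n = 1, m = 2: K_bar stacks blocks [[1,2],[3,4]] and [[5,6],[7,8]].
K_bar = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
K_01 = gain_block(K_bar, 0, 1, 1, 2)  # block (0, 1)
K_10 = gain_block(K_bar, 1, 0, 1, 2)  # block (1, 0)
```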
4. Illustrative example
In this section, we demonstrate the advantages and effectiveness of the proposed methods via an example.
Example 4.1. Consider system (1) with two modes and the following parameters:
Mode 1:
A1 =
0.01 0.7 0
−0.01 −0.01 , 0.06
−0.2 −0.2 0
0 −0.1 0.2
0.1 0.1 , −0.1
C21 =
−0.5 0 −0.01
B1 =
D01 =
−0.1 0.2 −0.1
0.2 0.5 0.3
0.1 0.1 −0.2 −0.1 −0.5 −0.1
0.1 0 , 0.1
0.1 0 . 0.2
C11 =
0.2 0.1 0
−0.1 −0.1 −0.2
0 0 , −0.1
Fig. 1. Stochastic switching topologies of sensor network.
Mode 2:
A2 =
−0.3 0 −0.01
0.03 0.2 0
0 −0.01 , 0.01
−0.2 −0.2 0
0 −0.1 0.2
0.1 0.1 , −0.1
C22 =
B2 =
D02 =
−0.1 0 −0.1
0.1 0.1 0.1
0.01 0.1 −0.1 0.1 −0.1 −0.1
0.1 0 , 0.1
C12 =
0.1 0.1 0
−0.1 −0.1 −0.2
0.1 0 , −0.1
0.1 0 . 0.1
The time-varying delay in (1) is taken as τ(k) = 2 + 3|sin((k/2)π)|, i.e., the delay has upper bound τ2 = 5 and lower bound τ1 = 2. The randomly occurring nonlinearities are
f(k, x(k)) = (0.1x1(k) − tanh(0.2x1(k)), 0.1x2(k) − tanh(0.1x2(k)), 0.2x3(k))T
and
g(k, x(k)) = (−0.2x1(k), 0.2x2(k) − tanh(0.1x2(k)), tanh(0.1x3(k)))T,
i.e., they satisfy Assumption 2.1 with Φ1 = diag(0.3, 0.2, 0.2) and Φ2 = diag(0.2, 0.3, 0.1). The transition probability matrix of system (1) is given by
Π = [ 0.3, 0.7 ; 0.6, 0.4 ].
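The stated delay bounds can be checked directly from the delay formula above; at integer times, sin((k/2)π) takes only the values 0 and ±1, so τ(k) alternates between 2 and 5:

```python
import math

def tau(k):
    """Time-varying delay from Example 4.1: tau(k) = 2 + 3*|sin((k/2)*pi)|."""
    return 2 + 3 * abs(math.sin(k * math.pi / 2))

values = [tau(k) for k in range(100)]
lo, hi = min(values), max(values)  # lower bound tau_1 = 2 (even k), upper bound tau_2 = 5 (odd k)
```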
The stochastic switching topologies of the sensor network are shown in Fig. 1; the corresponding adjacency matrices are
L1 = [ 1, 1, 1, 0, 0 ; 0, 1, 0, 0, 1 ; 0, 0, 1, 1, 0 ; 1, 0, 0, 1, 0 ; 0, 0, 0, 1, 1 ],
L2 = [ 1, 0, 0, 0, 1 ; 1, 1, 0, 0, 1 ; 0, 0, 1, 0, 1 ; 0, 1, 0, 1, 0 ; 0, 0, 1, 1, 1 ].
The dynamic descriptions of (2) containing five sensors are given as follows:
E11 =
E41 =
0 0.1 0 −0.1
D21 =
D51
0.1 0.3
0.2 = −0.1
E32
E21 =
0.1 0.2 0.1 0.2
0.1 , 0.3
−0.2 , 0
−0.3 0.2
0.1 0
−0.2 , 0.1
0.4 = 0
0.1 −0.1
−0.1 , 0.3
D42 =
0.2 −0.2
−0.1 0.2
D31 =
0.1 , 0
0.1 0.3
E42
−0.1 0.2
0.1 = −0.1 0.2 = 0.1
D52 =
0.2 −0.1
0.1 0.2
D11 =
D41 =
0 , 0.1
E22
−0.1 , 0 0 , −0.1
−0.3 0.2
0.4 0
E52
0.1 . 0
0.1 −0.1
0 0
0.1 = 0.3
D32
−0.2 , 0.1
0.1 −0.1
0.1 = 0.1
0.2 −0.1
0 , 0.2
0.1 0.2 0 0.1
E31 =
0.4 , 0.1
0.1 −0.1
D22
0.1 , 0 0.3 −0.2
−0.1 0
0.1 = 0.2
0 0
E12
0.2 −0.1
0.3 0
E51 =
0 , −0.2
0.1 = 0.2
D12
−0.1 0.2
−0.1 = 0
−0.1 , 0.3
−0.2 0.1
0.1 , 0 0.3 −0.2 0.1 −0.1
0 , 0.2
0.4 , 0.1
0 , 0.2
Fig. 2. Trajectories of system state x(k) and its estimate xˆi (k ) and Modes evolution.
The Bernoulli-distributed white-noise sequences δ(k), αi(k) and βi(k) (i = 1, 2, 3, 4, 5) are assumed to satisfy conditions (3) and (5) with δ̄ = 0.68, ᾱ1 = 0.15, ᾱ2 = 0.86, ᾱ3 = 0.96, ᾱ4 = 0.26, ᾱ5 = 0.76, β̄1 = 0.24, β̄2 = 0.35, β̄3 = 0.56, β̄4 = 0.16, β̄5 = 0.85. By solving the linear matrix inequalities (12) and (13) in Theorem 3.2, we can derive the distributed state estimator gain matrices in (6) to be
K111 =
−1.6619 0.2879 , −0.0974
−1.2850 0.0103 −0.0896
0.4568 −0.0039 , 0.0319
0.2199 −0.4918 0.4108
0.0251 −0.0035 , −0.0231
−0.3368 −0.2954 0.3142
0.2245 −0.1175 , 0.1916
0.3441 0.0046 −0.0821
−0.2075 −0.1271 , 0.0641
0.0755 0.0472 −0.0690
−0.0931 −0.0528 , −0.0078
−0.2039 0.1432 0.2597
0.0725 −0.0509 , −0.0923
K122 =
K134 =
K154 =
K215 =
K225 =
K242 =
1.1566 0.0356 −0.4903
K112 =
0.0725 0.0728 , −0.0029
0.0755 0.0472 −0.0690
−0.0931 −0.0528 , −0.0078
K125 =
K141 =
K155 =
−0.2806 0.2788 0.2444
−0.9331 −0.0862 , 0.0099
0.3423 0.1936 −0.4681
−0.0692 0.1169 , 0.0053
−0.8556 0.1541 0.5563
−1.2877 −0.1795 , 0.0174
−1.0776 0.1606 0.3239
1.8663 0.2284 , −0.0024
K233 =
K244 =
−0.1693 0.0080 −0.2563
K113 =
K133 =
−0.1864 0.0930 , 0.0019
0.2871 0.2513 −0.1971
K221 =
−0.2042 −0.2047 0.0082
K144 =
1.1566 0.0356 −0.4903
−1.6619 0.2879 , −0.0974
−1.2850 0.0103 −0.0896
0.4568 −0.0039 , 0.0319
0.0204 0.0751 −0.0583
−0.0965 −0.0708 , 0.0330
−0.1796 0.0585 0.2781
0.0092 0.3337 , −0.0805
K235 =
K253 =
1.8663 0.2284 , −0.0024
K222 =
−1.2877 −0.1795 , 0.0174
−1.0776 0.1606 0.3239
K211 =
−0.8556 0.1541 0.5563
−0.2600 −0.2032 , 0.0240
Fig. 3. Trajectories of the estimation error ei (k).
K254 =
−0.3368 −0.2954 0.3142
0.2245 −0.1175 , 0.1916
K255 =
−0.2806 0.2788 0.2444
−0.9331 −0.0862 . 0.0099
The initial conditions of the state and of its estimate at each sensor node are taken as x(0) = [0.2 −0.1 0.1]T and x̂i(0) = [0 0 0]T, i = 1, 2, 3, 4, 5, respectively. In addition, the packet dropouts are generated randomly according to the variations of αi(k) and βi(k). Then, given the initial mode value r0 = 1, the trajectories of the system state x(k) and its estimates x̂i(k), i = 1, 2, 3, 4, 5, are presented in Fig. 2, and the trajectories of the estimation error of each node are shown in Fig. 3, which implies that the augmented system (10) is globally asymptotically stable in the mean square sense. It can be seen from Fig. 3 that the designed distributed state estimators are effective for the underlying systems in the presence of stochastic switching topologies and randomly occurring packet dropouts.
Remark 4.1. Compared with the result in [36], where distributed state estimation is investigated for discrete-time sensor networks with randomly varying nonlinearities and missing measurements, more practical factors, such as the redundant channel and stochastic switching topologies, are considered in the model of this paper. Distributed estimators with stochastic switching topologies are designed, as reflected by the term Σ_{j∈Ni} aij(rk)Kij(rk)ȳj(k), and are more practical. From this viewpoint, the system model investigated in this paper is comprehensive, and the newly obtained results are more general.
5. Conclusion
The distributed state estimation problem has been investigated for a class of sensor networks described by discrete-time stochastic systems with stochastic switching topologies. One redundant channel has been introduced in the communication network for performance benefit. The stochastic Brownian motions affect both the dynamical plant and the sensor measurement outputs.
Through the available output measurements from each individual sensor, distributed state estimators were designed to estimate the states of the target plant in a distributed way. Sufficient conditions were presented to guarantee the convergence of the estimation error systems for all admissible stochastic disturbances, randomly varying nonlinearities, and missing measurements. Finally, a numerical example was given to verify the theoretical results. The methodologies and techniques developed in this paper are expected to be extendable to filtering problems for complex networks with topologies described by undirected graphs and with variable communication capability caused by redundant channels. This will be one of our future research topics.
Acknowledgment
The authors would like to thank the editors and the reviewers for their valuable suggestions and comments, which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants Nos. 61533006, 61703060, 11601474 and 11461082, the Opening Fund of Geomathematics Key Laboratory of Sichuan Province
(scsxdz201704), and the Research Fund for International Young Scientists of the National Natural Science Foundation of China (NSFC Grant No. 61550110248).
Appendix A
Φ̄p = [ Φ11, Φ12, Φ13, 0 ; ∗, Φ22, Φ23, 0 ; ∗, ∗, Φ33, 0 ; ∗, ∗, ∗, 0 ],
with
Φ11 = (𝒜p − Ξ̄1,p)TP̄p(𝒜p − Ξ̄1,p) + 𝒟pTP̄p𝒟p + (τ2 − τ1 + 1)Q + εΦ̃ − Pp + Σ_{j=1}^{5} Σ_{i=1}^{N} (Ejp,i)TP̄pEjp,i,
Φ12 = (𝒜p − Ξ̄1,p)TP̄pℬp, Φ13 = (𝒜p − Ξ̄1,p)TP̄p𝒞1p,
Φ22 = ℬpTP̄pℬp − Q, Φ23 = ℬpTP̄p𝒞1p,
Φ33 = 𝒞1pTP̄p𝒞1p + δ̄(1 − δ̄)𝒞2pTP̄p𝒞2p − εI,
P̄p = Σ_{q∈L} πpqPq, Φ̃ = diag{Φ1TΦ1 + Φ2TΦ2, IN ⊗ Φ1TΦ1 + IN ⊗ Φ2TΦ2},
E1p,i = [ 0, 0 ; (ᾱi + β̄i − β̄iᾱi)K̄peiDip, 0 ],
E2p,i = [ 0, 0 ; √(ᾱi(1 − ᾱi))K̄peiEip, 0 ],
E3p,i = [ 0, 0 ; √(ᾱi(1 − ᾱi))K̄peiDip, 0 ],
E4p,i = [ 0, 0 ; √((β̄i − β̄iᾱi)(1 − β̄i + β̄iᾱi))K̄peiEip, 0 ],
E5p,i = [ 0, 0 ; √((β̄i − β̄iᾱi)(1 − β̄i + β̄iᾱi))K̄peiDip, 0 ],
E6p,i = [ 0, 0 ; √(ᾱiβ̄i(1 − ᾱi))K̄peiEip, 0 ],
E7p,i = [ 0, 0 ; √(ᾱiβ̄i(1 − ᾱi))K̄peiDip, 0 ],
Tp = [−𝒜p, −ℬp, −𝒞1p, In(N+1)], ei = [0m×(i−1)m, Im, 0m×(N−i)m]T, i = 1, 2, . . . , N.
Appendix B
Proof of Theorem 3.1. Consider the following Lyapunov functional candidate for the augmented estimation error system (10):
V(k, rk) = V1(k, rk) + V2(k),  (15)
where
V1(k, rk) = ηT(k)P(rk)η(k),
V2(k) = Σ_{i=k−τ(k)}^{k−1} ηT(i)Qη(i) + Σ_{j=1−τ2}^{−τ1} Σ_{i=k+j}^{k−1} ηT(i)Qη(i).
For each rk = p ∈ L, calculating the difference of V(k, p) along the trajectory of system (10) and taking the mathematical expectation, we have
E{ΔV1(k, p)} = E{V1(k + 1, rk+1)|p − V1(k, p)}
= E{Σ_{q∈L} ηT(k + 1)Pqη(k + 1)Pr{rk+1 = q | rk = p} − ηT(k)Ppη(k)}
= E{ηT(k + 1)P̄pη(k + 1) − ηT(k)Ppη(k)}
= E{(Y(k) − Ξ̄1,pη(k))TP̄p(Y(k) − Ξ̄1,pη(k)) + 2(Y(k) − Ξ̄1,pη(k))TP̄p[(δ(k) − δ̄)𝒞2pF(k) + 𝒟pη(k)ω0(k) − Ξ̄2,p(k)η(k) − ϒ̃p(k)η(k) − ϒ̄p(k)η(k)] + (δ(k) − δ̄)2FT(k)𝒞2pTP̄p𝒞2pF(k) + 2(δ(k) − δ̄)FT(k)𝒞2pTP̄p[𝒟pη(k)ω0(k) − Ξ̄2,p(k)η(k) − ϒ̃p(k)η(k) − ϒ̄p(k)η(k)] + ω02(k)ηT(k)𝒟pTP̄p𝒟pη(k) − 2ω0(k)ηT(k)𝒟pTP̄p[Ξ̄2,p(k)η(k) + ϒ̃p(k)η(k) + ϒ̄p(k)η(k)] + ηT(k)Ξ̄2,pT(k)P̄pΞ̄2,p(k)η(k) + 2ηT(k)Ξ̄2,pT(k)P̄p[ϒ̃p(k)η(k) + ϒ̄p(k)η(k)] + ηT(k)ϒ̃pT(k)P̄pϒ̃p(k)η(k) + 2ηT(k)ϒ̃pT(k)P̄pϒ̄p(k)η(k) + ηT(k)ϒ̄pT(k)P̄pϒ̄p(k)η(k) − ηT(k)Ppη(k)}
= E{(Y(k) − Ξ̄1,pη(k))TP̄p(Y(k) − Ξ̄1,pη(k)) + δ̄(1 − δ̄)FT(k)𝒞2pTP̄p𝒞2pF(k) + ηT(k)𝒟pTP̄p𝒟pη(k) + ηT(k)Ξ̄2,pT(k)P̄pΞ̄2,p(k)η(k) + ηT(k)ϒ̃pT(k)P̄pϒ̃p(k)η(k) + 2ηT(k)ϒ̃pT(k)P̄pϒ̄p(k)η(k) + ηT(k)ϒ̄pT(k)P̄pϒ̄p(k)η(k) − ηT(k)Ppη(k)},  (16)
E{ηT(k)Ξ̄2,pT(k)P̄pΞ̄2,p(k)η(k)}
= E{xT(k)[K̄p(Λ̄ + Ῡ − Λ̄Ῡ)ω̃(k)D̄p]TP̃p3[K̄p(Λ̄ + Ῡ − Λ̄Ῡ)ω̃(k)D̄p]x(k)}
= E{xT(k) Σ_{i=1}^{N} (Σ_{j∈Ni} aij,pKij,p(ᾱj + β̄j − β̄jᾱj)ωj(k)Djp)T P̃p,i3 (Σ_{j∈Ni} aij,pKij,p(ᾱj + β̄j − β̄jᾱj)ωj(k)Djp) x(k)}
= E{xT(k) Σ_{i=1}^{N} Σ_{j∈Ni} (aij,pKij,p(ᾱj + β̄j − β̄jᾱj)Djp)T P̃p,i3 (aij,pKij,p(ᾱj + β̄j − β̄jᾱj)Djp) x(k)}  (by the mutual independence of the ωj(k))
= E{xT(k) Σ_{j∈Ni} (K̄pej(ᾱj + β̄j − β̄jᾱj)Djp)T P̃p3 (K̄pej(ᾱj + β̄j − β̄jᾱj)Djp) x(k)}
= Σ_{i=1}^{N} ηT(k)(E1p,i)TP̄pE1p,i η(k),  (17)
where P̄p = diag{P̃p1, P̃p3} and P̃p3 = diag{P̃p,13, P̃p,23, . . . , P̃p,N3}. Through similar operations, the following equalities hold:
E{ηT(k)ϒ̃pT(k)P̄pϒ̃p(k)η(k)}
= E{xT(k)[K̄p(Λ(k) − Λ̄)Ēp + K̄p(Λ(k) − Λ̄)ω̃(k)D̄p]TP̃p3[K̄p(Λ(k) − Λ̄)Ēp + K̄p(Λ(k) − Λ̄)ω̃(k)D̄p]x(k)}
= E{xT(k)[K̄p(Λ(k) − Λ̄)Ēp]TP̃p3[K̄p(Λ(k) − Λ̄)Ēp]x(k)} + E{xT(k)[K̄p(Λ(k) − Λ̄)ω̃(k)D̄p]TP̃p3[K̄p(Λ(k) − Λ̄)ω̃(k)D̄p]x(k)}
= Σ_{i=1}^{N} ηT(k)(E2p,i)TP̄pE2p,i η(k) + Σ_{i=1}^{N} ηT(k)(E3p,i)TP̄pE3p,i η(k),  (18)
E{ηT(k)ϒ̄pT(k)P̄pϒ̄p(k)η(k)}
= E{xT(k)[K̄pϒ∗(k)Ēp + K̄pϒ∗(k)ω̃(k)D̄p]TP̃p3[K̄pϒ∗(k)Ēp + K̄pϒ∗(k)ω̃(k)D̄p]x(k)}
= E{xT(k)[K̄pϒ∗(k)Ēp]TP̃p3[K̄pϒ∗(k)Ēp]x(k)} + E{xT(k)[K̄pϒ∗(k)ω̃(k)D̄p]TP̃p3[K̄pϒ∗(k)ω̃(k)D̄p]x(k)}
= Σ_{i=1}^{N} ηT(k)(E4p,i)TP̄pE4p,i η(k) + Σ_{i=1}^{N} ηT(k)(E5p,i)TP̄pE5p,i η(k),  (19)
E{2ηT(k)ϒ̃pT(k)P̄pϒ̄p(k)η(k)}
= E{2xT(k)[K̄p(Λ(k) − Λ̄)Ēp + K̄p(Λ(k) − Λ̄)ω̃(k)D̄p]TP̃p3[K̄pϒ∗(k)Ēp + K̄pϒ∗(k)ω̃(k)D̄p]x(k)}
= E{2xT(k)[K̄p(Λ(k) − Λ̄)Ēp]TP̃p3[K̄pϒ∗(k)Ēp]x(k) + 2xT(k)[K̄p(Λ(k) − Λ̄)ω̃(k)D̄p]TP̃p3[K̄pϒ∗(k)ω̃(k)D̄p]x(k)}
= −2 Σ_{i=1}^{N} ηT(k)(E6p,i)TP̄pE6p,i η(k) − 2 Σ_{i=1}^{N} ηT(k)(E7p,i)TP̄pE7p,i η(k).  (20)
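The negative sign produced by the cross expectation above comes from the Bernoulli moment E{(α − ᾱ)·(β − αβ − (β̄ − ᾱβ̄))} = −ᾱβ̄(1 − ᾱ), which follows from (5) and the independence of α and β. A Monte-Carlo check of this identity (our own sketch; ᾱ = 0.3, β̄ = 0.6 are illustrative values):

```python
import random

def cross_moment(alpha_bar, beta_bar, n=400_000, seed=4):
    """Estimate E{(alpha - alpha_bar) * Upsilon*} for a single sensor, where
    Upsilon* = beta - alpha*beta - (beta_bar - alpha_bar*beta_bar), cf. (9) and (20)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        a = 1.0 if rng.random() < alpha_bar else 0.0
        b = 1.0 if rng.random() < beta_bar else 0.0
        acc += (a - alpha_bar) * (b - a * b - (beta_bar - alpha_bar * beta_bar))
    return acc / n

est = cross_moment(0.3, 0.6)
theory = -0.3 * 0.6 * (1 - 0.3)   # -alpha_bar * beta_bar * (1 - alpha_bar) = -0.126
```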
Then, combining (16)–(20) and noting that the terms in (20) are nonpositive, we have
E{ΔV1(k, p)} ≤ (Y(k) − Ξ̄1,pη(k))TP̄p(Y(k) − Ξ̄1,pη(k)) + δ̄(1 − δ̄)FT(k)𝒞2pTP̄p𝒞2pF(k) + ηT(k)𝒟pTP̄p𝒟pη(k) + Σ_{j=1}^{5} Σ_{i=1}^{N} ηT(k)(Ejp,i)TP̄pEjp,i η(k) − ηT(k)Ppη(k).  (21)
E{ΔV_2(k)} = V_2(k + 1) − V_2(k)
= Σ_{i=k+1−τ(k+1)}^{k} η^T(i)Qη(i) − Σ_{i=k−τ(k)}^{k−1} η^T(i)Qη(i) + Σ_{j=1−τ_2}^{−τ_1} Σ_{i=k+1+j}^{k} η^T(i)Qη(i) − Σ_{j=1−τ_2}^{−τ_1} Σ_{i=k+j}^{k−1} η^T(i)Qη(i)
≤ Σ_{i=k+1−τ_2}^{k−τ_1} η^T(i)Qη(i) + η^T(k)Qη(k) − η^T(k − τ(k))Qη(k − τ(k)) + (τ_2 − τ_1)η^T(k)Qη(k) − Σ_{i=k+1−τ_2}^{k−τ_1} η^T(i)Qη(i)
= (τ_2 − τ_1 + 1)η^T(k)Qη(k) − η^T(k − τ(k))Qη(k − τ(k)).  (22)
By Assumption 2.1, for any positive scalar ε, the following inequality holds:
0 ≤ εη^T(k)Φ̃η(k) − εF^T(k)F(k).  (23)
On the other hand, for any matrix M̄_p of appropriate dimensions, we have
2ξ^T(k)M̄_p[Y(k) − A_p η(k) − B_p η(k − τ(k)) − C_{1p}F(k)] = 0,  (24)
where ξ(k) = [η^T(k), η^T(k − τ(k)), F^T(k), Y^T(k)]^T. Considering (21)–(24), we can obtain
E{ΔV(k, p)} ≤ ξ^T(k)(Θ̄_p + sym(M̄_p T_p))ξ(k).  (25)
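Inequality (25) is the standard Lyapunov certificate: if the matrix multiplying ξ(k) is negative definite, then E{ΔV(k, p)} < 0 and mean-square asymptotic stability follows. For a delay-free deterministic special case, the same certificate reduces to the discrete-time Lyapunov inequality A^T P A − P < 0, which the following sketch verifies with SciPy (the system matrix A is an illustrative example, not the paper's model):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Discrete-time Lyapunov certificate: for a Schur-stable A, the solution P of
# A^T P A - P = -Q (with Q > 0) is positive definite, and V(x) = x^T P x
# strictly decreases along x(k+1) = A x(k).
A = np.array([[0.5, 0.2],
              [-0.1, 0.6]])   # eigenvalues inside the unit circle
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^H - X + q = 0,
# so passing a = A.T yields A^T P A - P = -Q.
P = solve_discrete_lyapunov(A.T, Q)

pos_def = bool(np.all(np.linalg.eigvalsh(P) > 0))
decrease = bool(np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0))
print(pos_def and decrease)
```

The full condition (25) plays the same role, with the delayed and stochastic terms absorbed into the augmented matrix acting on ξ(k).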
The remainder of the proof is similar to that in [36] and is therefore omitted here for brevity. The proof is completed.
Appendix C
Θ_{1,1} = D^T_{0p}P̃^1_p D_{0p} + D̂^T_{0p}P̃^3_p D̂_{0p} + (τ_2 − τ_1 + 1)Q_1 + εΦ^T_1Φ_1 + εΦ^T_2Φ_2 − P^1_p − sym(M_{11}A_p),
Θ_{1,2} = (τ_2 − τ_1 + 1)Q_2 − (M_{12}A_p + X_pĒ_p)^T,
Θ_{1,3} = Π_{12,1} − M_{11}B_p − (M_{21}A_p)^T,
Θ_{1,4} = −(M_{22}A_p + X_pĒ_p)^T,
Θ_{1,5} = δ̄A^T_pP̃^1_pC_{1p} − δ̄M_{11}C_{1p} − (M_{31}A_p)^T,
Θ_{1,6} = (1 − δ̄)A^T_pP̃^1_pC_{2p} − (1 − δ̄)M_{11}C_{2p} − (M_{32}A_p)^T,
Θ_{1,7} = Π_{13,1} − (M_{33}A_p + X_pĒ_p)^T,
Θ_{1,8} = Π_{13,2} − (M_{34}A_p + X_pĒ_p)^T,
Θ_{1,9} = M_{11} − (M_{41}A_p)^T,
Θ_{1,10} = (−M_{42}A_p + X_pĒ_p)^T,
Θ_{1,11} = A^T_pP̃^1_p,
Θ_{1,12} = (X_pĒ_p − X_pĒ_p − X_p(Υ − Υ)Ē_p)^T,
Θ_{1,13} = Γ̃,
Θ_{2,2} = (τ_2 − τ_1 + 1)Q_3 + ε(I_N ⊗ Φ^T_1Φ_1) + ε(I_N ⊗ Φ^T_2Φ_2) − P^3_p + sym(−P̃^3_p(I_N ⊗ A_p) + X_pẼ_p),
Θ_{2,3} = Π_{12,2} − M_{12}B_p − P̃^3_pB̄_p,
Θ_{2,4} = (−P̃^3_p(I_N ⊗ A_p) + X_pẼ_p)^T,
Θ_{2,5} = −δ̄M_{12}C_{1p},
Θ_{2,6} = −(1 − δ̄)M_{12}C_{2p},
Θ_{2,7} = Π_{13,3} − δ̄P̃^3_p(I_N ⊗ C_{1p}) + (−P̃^3_p(I_N ⊗ A_p) + X_pẼ_p)^T,
Θ_{2,8} = Π_{13,4} − (1 − δ̄)P̃^3_p(I_N ⊗ C_{2p}) + (−P̃^3_p(I_N ⊗ A_p) + X_pẼ_p)^T,
Θ_{2,9} = M_{12},
Θ_{2,10} = P̃^3_p + (P̃^3_p(I_N ⊗ A_p) − X_pẼ_p)^T,
Θ_{2,12} = (P̃^3_p(I_N ⊗ A_p) − X_pẼ_p)^T,
Θ_{3,3} = Π_{22,1} − sym(M_{21}B_p),
Θ_{3,4} = −Q_2 − (M_{22}B_p + P̃^3_pB̄_p)^T,
Θ_{3,5} = Π_{23,1} − δ̄M_{21}C_{1p} − (M_{31}B_p)^T,
Θ_{3,6} = Π_{23,2} − (1 − δ̄)M_{21}C_{2p} − (M_{32}B_p)^T,
Θ_{3,7} = Π_{23,3} − (M_{33}B_p + P̃^3_pB̄_p)^T,
Θ_{3,8} = Π_{23,4} − (M_{34}B_p + P̃^3_pB̄_p)^T,
Θ_{3,9} = M_{21} − (M_{41}B_p)^T,
Θ_{3,10} = (−M_{42}B_p + P̃^3_pB̄_p)^T,
Θ_{4,4} = −Q_3,
Θ_{4,5} = −δ̄M_{22}C_{1p},
Θ_{4,6} = −(1 − δ̄)M_{22}C_{2p},
Θ_{4,7} = −δ̄P̃^3_p(I_N ⊗ C_{1p}),
Θ_{4,8} = −(1 − δ̄)P̃^3_p(I_N ⊗ C_{2p}),
Θ_{4,9} = M_{22},
Θ_{4,10} = P̃^3_p,
Θ_{5,5} = Π_{33,1} − δ̄ sym(M_{31}C_{1p}),
Θ_{5,6} = Π_{33,2} − (1 − δ̄)M_{31}C_{2p} − δ̄(M_{32}C_{1p})^T,
Θ_{5,7} = −δ̄(M_{33}C_{1p})^T,
Θ_{5,8} = −δ̄(M_{34}C_{1p})^T,
Θ_{5,9} = M_{31} − δ̄(M_{41}C_{1p})^T,
Θ_{5,10} = −δ̄(M_{42}C_{1p})^T,
Θ_{6,6} = Π_{33,3} − (1 − δ̄) sym(M_{32}C_{2p}),
Θ_{6,7} = −(1 − δ̄)(M_{33}C_{2p})^T,
Θ_{6,8} = −(1 − δ̄)(M_{34}C_{2p})^T,
Θ_{6,9} = M_{32} − (1 − δ̄)(M_{41}C_{2p})^T,
Θ_{6,10} = −(1 − δ̄)(M_{42}C_{2p})^T,
Θ_{7,7} = Π_{33,4} − δ̄ sym(P̃^3_p(I_N ⊗ C_{1p})),
Θ_{7,8} = Π_{33,5} − (1 − δ̄)P̃^3_p(I_N ⊗ C_{2p}) − δ̄(P̃^3_p(I_N ⊗ C_{1p}))^T,
Θ_{7,9} = M_{33},
Θ_{7,10} = P̃^3_p + δ̄(P̃^3_p(I_N ⊗ C_{1p}))^T,
Θ_{8,8} = Π_{33,6} − (1 − δ̄) sym(P̃^3_p(I_N ⊗ C_{2p})),
Θ_{8,9} = M_{34},
Θ_{8,10} = P̃^3_p + (1 − δ̄)(P̃^3_p(I_N ⊗ C_{2p}))^T,
Θ_{9,9} = sym(M_{41}),
Θ_{9,10} = M^T_{42},
Θ_{10,10} = −2P̃^3_p,
Θ_{11,11} = −P̃^1_p,
Θ_{12,12} = −P̃^3_p,
Θ_{13,13} = −(I_{5N} ⊗ P̃^3_p),
Π_{12,1} = A^T_pP̃^1_pB_p + Ē^T_pX^T_pB̄_p − Ē^T_pX^T_pB̄_p − Ē^T_p(Υ − Υ)X^T_pB̄_p,
Π_{12,2} = ((I_N ⊗ A^T_p)P̃^3_p − Ẽ^T_pX^T_p)B̄_p,
Π_{13,1} = δ̄(Ē^T_pX^T_p − Ē^T_pX^T_p − Ē^T_p(Υ − Υ)X^T_p)(I_N ⊗ C_{1p}),
Π_{13,2} = (1 − δ̄)(Ē^T_pX^T_p − Ē^T_pX^T_p − Ē^T_p(Υ − Υ)X^T_p)(I_N ⊗ C_{2p}),
Π_{13,3} = δ̄((I_N ⊗ A^T_p)P̃^3_p − Ẽ^T_pX^T_p)(I_N ⊗ C_{1p}),
Π_{13,4} = (1 − δ̄)((I_N ⊗ A^T_p)P̃^3_p − Ẽ^T_pX^T_p)(I_N ⊗ C_{2p}),
Π_{22,1} = B^T_pP̃^1_pB_p + B̄^T_pP̃^3_pB̄_p − Q_1,
Π_{23,1} = δ̄B^T_pP̃^1_pC_{1p},
Π_{23,2} = (1 − δ̄)B^T_pP̃^1_pC_{2p},
Π_{23,3} = δ̄B̄^T_pP̃^3_p(I_N ⊗ C_{1p}),
Π_{23,4} = (1 − δ̄)B̄^T_pP̃^3_p(I_N ⊗ C_{2p}),
Π_{33,1} = δ̄C^T_{1p}P̃^1_pC_{1p} + δ̄(1 − δ̄)C̄^T_{1p}P̃^3_pC̄_{1p} − εI_n,
Π_{33,2} = −δ̄(1 − δ̄)C̄^T_{1p}P̃^3_pC̄_{2p},
Π_{33,3} = (1 − δ̄)C^T_{2p}P̃^1_pC_{2p} + δ̄(1 − δ̄)C̄^T_{2p}P̃^3_pC̄_{2p} − εI_n,
Π_{33,4} = δ̄^2(I_N ⊗ C^T_{1p})P̃^3_p(I_N ⊗ C_{1p}) − εI_{nN},
Π_{33,5} = δ̄(1 − δ̄)(I_N ⊗ C^T_{1p})P̃^3_p(I_N ⊗ C_{2p}),
Π_{33,6} = (1 − δ̄)^2(I_N ⊗ C^T_{2p})P̃^3_p(I_N ⊗ C_{2p}) − εI_{nN},
Γ̃ = [Γ̄_1, Γ̄_2, Γ̄_3, Γ̄_4, Γ̄_5],
Γ̄_l = [Γ^T_{l1}, Γ^T_{l2}, . . . , Γ^T_{lN}]^T, l = 1, 2, . . . , 5,
Γ_{1i} = (ᾱ_i + β̄_i − β̄_iᾱ_i)X_pe_iD_{ip},
Γ_{2i} = ᾱ_i(1 − ᾱ_i)X_pe_iE_{ip},
Γ_{3i} = ᾱ_i(1 − ᾱ_i)X_pe_iD_{ip},
Γ_{4i} = (β̄_i − β̄_iᾱ_i)(1 − β̄_i + β̄_iᾱ_i)X_pe_iE_{ip},
Γ_{5i} = (β̄_i − β̄_iᾱ_i)(1 − β̄_i + β̄_iᾱ_i)X_pe_iD_{ip}.
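Under the usual assumption that the channel indicators α_i(k) and β_i(k) are mutually independent Bernoulli variables, the factors ᾱ_i(1 − ᾱ_i) and (β̄_i − β̄_iᾱ_i)(1 − β̄_i + β̄_iᾱ_i) appearing in Γ_{2i}–Γ_{5i} are the variances of α_i(k) and of the combined redundant-channel term β_i(k)(1 − α_i(k)), respectively. A quick Monte Carlo check (parameter values are illustrative, not from the paper's example):

```python
import random

random.seed(0)
a_bar, b_bar, n = 0.8, 0.6, 200_000

# alpha, beta: independent Bernoulli channel indicators; the redundant-channel
# indicator is gamma = alpha + beta - beta*alpha = alpha + beta*(1 - alpha).
samples = [(random.random() < a_bar, random.random() < b_bar)
           for _ in range(n)]

mean_gamma = sum(a + b * (1 - a) for a, b in samples) / n
var_alpha = sum((a - a_bar) ** 2 for a, _ in samples) / n
m = b_bar * (1 - a_bar)                      # mean of beta*(1 - alpha)
var_mix = sum((b * (1 - a) - m) ** 2 for a, b in samples) / n

print(abs(mean_gamma - (a_bar + b_bar - b_bar * a_bar)) < 5e-3,
      abs(var_alpha - a_bar * (1 - a_bar)) < 5e-3,
      abs(var_mix - m * (1 - m)) < 5e-3)
```

The first quantity matches the mean ᾱ_i + β̄_i − β̄_iᾱ_i used in Γ_{1i}; the other two match the variance factors above.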
Appendix D
Proof of Theorem 3.2. Let P̄_p = diag{P^1_p, P^3_p} and P^3_p = diag{P^3_{p,1}, P^3_{p,2}, . . . , P^3_{p,N}}. By computation, we can obtain
D^T_pP̄_pD_p = diag{D^T_{0p}P̃^1_pD_{0p} + D̂^T_{0p}P̃^3_pD̂_{0p}, 0},
Π_{12} = [Π_{12,1}  0 ; Π_{12,2}  0],
Π_{13} = [δ̄A^T_pP̃^1_pC_{1p}  (1 − δ̄)A^T_pP̃^1_pC_{2p}  Π_{13,1}  Π_{13,2} ; 0  0  Π_{13,3}  Π_{13,4}],
Π_{22} = [Π_{22,1}  −Q_2 ; ∗  −Q_3],
Π_{23} = [Π_{23,1}  Π_{23,2}  Π_{23,3}  Π_{23,4} ; 0  0  0  0],
Π_{33} = [Π_{33,1}  Π_{33,2}  0  0 ; ∗  Π_{33,3}  0  0 ; ∗  ∗  Π_{33,4}  Π_{33,5} ; ∗  ∗  ∗  Π_{33,6}].
By setting M̄_p = col(M_1, M_2, M_3, M_4) and
M_1 = [M_{11}  0 ; M_{12}  P̃^3_p],  M_2 = [M_{21}  0 ; M_{22}  P̃^3_p],
M_3 = [M_{31}  0 ; M_{32}  0 ; M_{33}  P̃^3_p ; M_{34}  P̃^3_p],  M_4 = [M_{41}  0 ; M_{42}  −P̃^3_p].
Then,
M̄_pT_p = [−M_1A_p  −M_1B_p  −M_1C_{1p}  M_1 ; −M_2A_p  −M_2B_p  −M_2C_{1p}  M_2 ; −M_3A_p  −M_3B_p  −M_3C_{1p}  M_3 ; −M_4A_p  −M_4B_p  −M_4C_{1p}  M_4],
where
−M_1A_p = [−M_{11}A_p  0 ; −M_{12}A_p − X_pĒ_p  −P̃^3_p(I_N ⊗ A_p) + X_pẼ_p],
−M_1B_p = [−M_{11}B_p  0 ; −M_{12}B_p − P̃^3_pB̄_p  0],
−M_1C_{1p} = [−δ̄M_{11}C_{1p}  −(1 − δ̄)M_{11}C_{2p}  0  0 ; −δ̄M_{12}C_{1p}  −(1 − δ̄)M_{12}C_{2p}  −δ̄P̃^3_p(I_N ⊗ C_{1p})  −(1 − δ̄)P̃^3_p(I_N ⊗ C_{2p})],
−M_2A_p = [−M_{21}A_p  0 ; −M_{22}A_p − X_pĒ_p  −P̃^3_p(I_N ⊗ A_p) + X_pẼ_p],
−M_2B_p = [−M_{21}B_p  0 ; −M_{22}B_p − P̃^3_pB̄_p  0],
−M_2C_{1p} = [−δ̄M_{21}C_{1p}  −(1 − δ̄)M_{21}C_{2p}  0  0 ; −δ̄M_{22}C_{1p}  −(1 − δ̄)M_{22}C_{2p}  −δ̄P̃^3_p(I_N ⊗ C_{1p})  −(1 − δ̄)P̃^3_p(I_N ⊗ C_{2p})],
−M_3A_p = [−M_{31}A_p  0 ; −M_{32}A_p  0 ; −M_{33}A_p − X_pĒ_p  −P̃^3_p(I_N ⊗ A_p) + X_pẼ_p ; −M_{34}A_p − X_pĒ_p  −P̃^3_p(I_N ⊗ A_p) + X_pẼ_p],
−M_3B_p = [−M_{31}B_p  0 ; −M_{32}B_p  0 ; −M_{33}B_p − P̃^3_pB̄_p  0 ; −M_{34}B_p − P̃^3_pB̄_p  0],
−M_3C_{1p} = [−δ̄M_{31}C_{1p}  −(1 − δ̄)M_{31}C_{2p}  0  0 ; −δ̄M_{32}C_{1p}  −(1 − δ̄)M_{32}C_{2p}  0  0 ; −δ̄M_{33}C_{1p}  −(1 − δ̄)M_{33}C_{2p}  −δ̄P̃^3_p(I_N ⊗ C_{1p})  −(1 − δ̄)P̃^3_p(I_N ⊗ C_{2p}) ; −δ̄M_{34}C_{1p}  −(1 − δ̄)M_{34}C_{2p}  −δ̄P̃^3_p(I_N ⊗ C_{1p})  −(1 − δ̄)P̃^3_p(I_N ⊗ C_{2p})],
−M_4A_p = [−M_{41}A_p  0 ; −M_{42}A_p + X_pĒ_p  P̃^3_p(I_N ⊗ A_p) − X_pẼ_p],
−M_4B_p = [−M_{41}B_p  0 ; −M_{42}B_p + P̃^3_pB̄_p  0],
−M_4C_{1p} = [−δ̄M_{41}C_{1p}  −(1 − δ̄)M_{41}C_{2p}  0  0 ; −δ̄M_{42}C_{1p}  −(1 − δ̄)M_{42}C_{2p}  δ̄P̃^3_p(I_N ⊗ C_{1p})  (1 − δ̄)P̃^3_p(I_N ⊗ C_{2p})].
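The final step of the proof passes from the quadratic conditions in (11) to the LMIs in (13) via the Schur complement lemma: for a symmetric block matrix, [Q S ; S^T R] < 0 if and only if R < 0 and Q − SR^{−1}S^T < 0. A small numerical illustration with random matrices (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Schur complement lemma: for symmetric M = [[Q, S], [S.T, R]],
# M < 0  iff  R < 0 and Q - S R^{-1} S^T < 0.
n, m = 3, 2
X = rng.standard_normal((n + m, n + m))
M = -(X @ X.T) - 0.1 * np.eye(n + m)   # construct a negative definite matrix
Q, S, R = M[:n, :n], M[:n, n:], M[n:, n:]

def negdef(A):
    return bool(np.all(np.linalg.eigvalsh(A) < 0))

lhs = negdef(M)
rhs = negdef(R) and negdef(Q - S @ np.linalg.inv(R) @ S.T)
print(lhs and rhs)
```

In the paper's setting the lemma is used in the opposite direction: the enlarged but linear block inequality (13) certifies the nonlinear condition (11) without inverting any decision variable.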
From the detailed matrices computed above and by applying the Schur complement, we conclude that the inequalities in (13) ensure that the conditions in (11) hold. Based on Theorem 3.1, the augmented estimation error system (10) is globally asymptotically stable in the mean-square sense. In addition, the estimator gains in (6) can then be readily derived. The proof is completed.
References
[1] H.S. Ahn, K.H. Ko, Simple pedestrian localization algorithms based on distributed wireless sensor networks, IEEE Trans. Ind. Electron. 56 (10) (2009) 4296–4302.
[2] R.R. Brooks, P. Ramanathan, A.M. Sayeed, Distributed target classification and tracking in sensor networks, Proc. IEEE 91 (8) (2003) 1163–1171.
[3] X. Li, X. Fu, Synchronization of chaotic delayed neural networks with impulsive and stochastic perturbations, Commun. Nonlinear Sci. 16 (2) (2011) 885–894.
[4] Z. Xiong, A.D. Liveris, S. Cheng, Distributed source coding for sensor networks, IEEE Signal Proc. Mag. 21 (5) (2004) 80–94.
[5] S. Kar, J.M. Moura, Sensor networks with random links: topology design for distributed consensus, IEEE Trans. Signal Proc. 56 (7) (2008) 3315–3326.
[6] V. Ugrinovskii, Distributed robust estimation over randomly switching networks using consensus, Automatica 49 (1) (2013) 160–168.
[7] J. Qin, C. Yu, H. Gao, Coordination for linear multiagent systems with dynamic interaction topology in the leader-following framework, IEEE Trans. Ind. Electron. 61 (5) (2014) 2412–2422.
[8] D. Zhang, J. Cheng, J.H. Park, J. Cao, Robust H∞ control for nonhomogeneous Markovian jump systems subject to quantized feedback and probabilistic measurements, J. Franklin Inst. 355 (15) (2018) 6992–7010.
[9] H. Li, Y. Shi, Network-based predictive control for constrained nonlinear systems with two-channel packet dropouts, IEEE Trans. Ind. Electron. 61 (3) (2014) 1574–1582.
[10] B. Jiang, Z. Mao, P. Shi, Filter design for a class of networked control systems via T-S fuzzy-model approach, IEEE Trans. Fuzzy Syst. 18 (1) (2010) 201–208.
[11] X.Z. Meng, S.N. Zhao, W.Y. Zhang, Adaptive dynamics analysis of a predator-prey model with selective disturbance, Appl. Math. Comput. 266 (2015) 946–958.
[12] J. Qiu, G. Feng, H. Gao, Nonsynchronized-state estimation of multichannel networked nonlinear systems with multiple packet dropouts via T-S fuzzy-affine dynamic models, IEEE Trans. Fuzzy Syst. 19 (1) (2011) 75–90.
[13] H. Dong, Z. Wang, H. Gao, Distributed filtering for a class of time-varying systems over sensor networks with quantization errors and successive packet dropouts, IEEE Trans. Signal Process. 60 (6) (2012) 3164–3173.
[14] W. Chen, W. Zheng, An improved stabilization method for sampled-data control systems with control packet loss, IEEE Trans. Autom. Control 57 (9) (2012) 2378–2384.
[15] Z. Wu, P. Shi, H. Su, J. Chu, Asynchronous filtering for discrete-time stochastic Markov jump systems with randomly occurred sensor nonlinearities, Automatica 50 (1) (2014) 180–186.
[16] P. Shi, Y. Yin, F. Liu, J. Zhang, Robust control on saturated Markov jump systems with missing information, Inf. Sci. 265 (2014) 123–138.
[17] P. Shi, X. Luan, F. Liu, Filtering for discrete-time systems with stochastic incomplete measurement and mixed delays, IEEE Trans. Ind. Electron. 59 (6) (2012) 2732–2739.
[18] J. Liu, J. Xia, E. Tian, S. Fei, Hybrid-driven-based H∞ filter design for neural networks subject to deception attacks, Appl. Math. Comput. 320 (2018) 158–174.
[19] J. Liu, L. Zha, X. Xie, E. Tian, Resilient observer-based control for networked nonlinear T-S fuzzy systems with hybrid-triggered scheme, Nonlinear Dyn. 91 (3) (2018) 2049–2061.
[20] L. Zha, E. Tian, X. Xie, Z. Gu, J. Cao, Decentralized event-triggered H∞ control for neural networks subject to cyber-attacks, Inf. Sci. 457–458 (2018) 141–155.
[21] M.J. Park, O.M. Kwon, J.H. Park, S.M. Lee, J.W. Son, E.J. Cha, H∞ consensus performance for discrete-time multi-agent systems with communication delay and multiple disturbances, Neurocomputing 138 (2014) 199–208.
[22] M.J. Park, O.M. Kwon, J.H. Park, S.M. Lee, E.J. Cha, H∞ state estimation for discrete-time neural networks with interval time-varying delays and probabilistic diverging disturbances, Neurocomputing 153 (2015) 255–270.
[23] G. Zhu, X. Meng, L. Chen, The dynamics of a mutual interference age structured predator-prey model with time delay and impulsive perturbations on predators, Appl. Math. Comput. 216 (2010) 308–316.
[24] J. Cheng, X. Chang, J.H. Park, H. Li, H. Wang, Fuzzy-model-based H∞ control for discrete-time switched systems with quantized feedback and unreliable links, Inf. Sci. 436–437 (2018) 181–196.
[25] J. Cheng, J.H. Park, H.R. Karimi, H. Shen, A flexible terminal approach to sampled-data exponentially synchronization of Markovian neural networks with time-varying delayed signals, IEEE Trans. Cybern. 48 (8) (2018) 2232–2244.
[26] J. Cheng, J.H. Park, J. Cao, D. Zhang, Quantized H∞ filtering for switched linear parameter-varying systems with sojourn probabilities and unreliable communication channels, Inf. Sci. 466 (2018) 289–302.
[27] T. Zhang, W. Ma, X. Meng, Periodic solution of a prey-predator model with nonlinear state feedback control, Appl. Math. Comput. 266 (2015) 95–107.
[28] J. Liang, Z. Wang, X. Liu, State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: the discrete-time case, IEEE Trans. Neural Netw. 20 (5) (2009) 781–793.
[29] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M.I. Jordan, S.S. Sastry, Kalman filtering with intermittent observations, IEEE Trans. Autom. Control 49 (9) (2004) 1453–1464.
[30] Z. Wang, D.W. Ho, Y. Liu, X. Liu, Robust H∞ control for a class of nonlinear discrete time-delay stochastic systems with missing measurements, Automatica 45 (3) (2009) 684–691.
[31] Z. Wang, Y. Wang, Y. Liu, Global synchronization for discrete-time stochastic complex networks with randomly occurred nonlinearities and mixed time delays, IEEE Trans. Neural Netw. 21 (1) (2010) 11–25.
[32] C. Lin, Z. Wang, F. Yang, Observer-based networked control for continuous-time systems with random sensor delays, Automatica 45 (2) (2009) 578–584.
[33] A. Speranzon, C. Fischione, K.H. Johansson, A. Sangiovanni-Vincentelli, A distributed minimum variance estimator for sensor networks, IEEE J. Sel. Areas Commun. 26 (4) (2008) 609–621.
[34] F.S. Cattivelli, A.H. Sayed, Diffusion LMS strategies for distributed estimation, IEEE Trans. Signal Process. 58 (3) (2010) 1035–1048.
[35] Y. Xu, R. Lu, H. Peng, K. Xie, A. Xue, Asynchronous dissipative state estimation for stochastic complex networks with quantized jumping coupling and uncertain measurements, IEEE Trans. Neural Netw. 28 (2) (2017) 268–277.
[36] J. Liang, Z. Wang, X. Liu, Distributed state estimation for discrete-time sensor networks with randomly varying nonlinearities and missing measurements, IEEE Trans. Neural Netw. 22 (3) (2011) 486–496.
[37] Y. Zhu, L. Zhang, W. Zheng, Distributed H∞ filtering for a class of discrete-time Markov jump Lur'e systems with redundant channels, IEEE Trans. Ind. Electron. 63 (3) (2016) 1876–1885.