Automatica 92 (2018) 162–172
Consistent distributed state estimation with global observability over sensor network✩

Xingkang He, Wenchao Xue*, Haitao Fang

LSC, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Article history: received 5 January 2017; received in revised form 19 October 2017; accepted 25 January 2018; available online 26 March 2018.

Keywords: wireless sensor networks; time-varying systems; state estimation; covariance intersection; distributed Kalman filter; global observability; semi-definite programming

Abstract

This paper studies the distributed state estimation problem for a class of discrete time-varying systems over sensor networks. Firstly, it is shown that the gain parameter optimization in a networked Kalman filter requires a centralized framework. Then, a sub-optimal distributed Kalman filter (DKF) is proposed by employing the covariance intersection (CI) fusion strategy. It is proven that the proposed DKF is consistent, that is, an upper bound of the error covariance matrix is provided by the filter in real time. The consistency also enables the design of adaptive CI weights for better filtering precision. Furthermore, the boundedness of the covariance matrix and the convergence of the proposed filter are proven based on the strong connectivity of the directed network topology and a global observability condition which permits the sub-system with a local sensor's measurements to be unobservable. Meanwhile, to keep the covariance of the estimation error bounded, the proposed DKF does not require the system matrix to be nonsingular at each moment, which appears to be a necessary condition in the main DKF designs under global observability. Finally, simulation results of two examples show the effectiveness of the algorithm in the considered scenarios.
✩ This work was supported in part by the NSFC 61603380, the National Key Research and Development Program of China (2016YFB0901902), and the National Basic Research Program of China under Grant No. 2014CB845301. The material in this paper was partially presented at the 14th International Conference on Control, Automation, Robotics and Vision, November 13–15, 2016, Phuket, Thailand. This paper was recommended for publication in revised form by Associate Editor Brett Ninness under the direction of Editor Torsten Söderström.
* Corresponding author at: LSC, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.
E-mail addresses: [email protected] (X. He), [email protected] (W. Xue), [email protected] (H. Fang).
https://doi.org/10.1016/j.automatica.2018.03.029

1. Introduction

Wireless sensor networks (WSNs) usually consist of intelligent sensing devices located at different geographical positions. Since multiple sensors can collaboratively carry out sensing tasks through communication over wireless channels, WSNs have been widely applied in environmental monitoring (Cao, Chen, Yan, & Sun, 2008), collaborative information processing (Kumar, 2012; Wu, Sun, Lee, & Pan, 2017), data collection (Solis & Obraczka, 2007), distributed signal estimation (Schizas, Ribeiro, & Giannakis, 2008), etc. In the past decades, state estimation problems over WSNs have drawn increasing attention from researchers. Two approaches are usually considered in existing work. The first is centralized filtering (Guo, Shi, Johansson, & Shi, 2017; Ren,
Wu, Johansson, Shi, & Shi, 2018), i.e., a data center collects the measurements from all sensors at each sampling moment. The centralized Kalman filter (CKF) can then be directly designed so that the minimum-variance state estimate is achieved for linear systems with Gaussian noises. However, the centralized framework is fragile, since it is easily affected by link failures, time delays, packet losses and so on. The second approach, on the contrary, utilizes a distributed strategy in which no central sensor exists. The implementation of this strategy depends only on information exchange between neighbors (Boem, Xu, Fischione, & Parisini, 2015; Farina & Carli, 2016; He, Hu, Xue, & Fang, 2017; Hu, Xie, & Zhang, 2012; Speranzon, Fischione, Johansson, & Sangiovanni-Vincentelli, 2008; Sun, Fu, Wang, Zhang, & Marelli, 2016; Xie & Guo, 2015; Yang, Chen, Wang, & Shi, 2014; Yang, Yang, Shi, Shi, & Chen, 2017). Compared with the centralized approach, the distributed framework is more robust and better suited to parallel processing. Information communication between sensors plays an important role in the design of distributed filtering. Generally, the communication rate between neighbors can be faster than the rate of measurement sensing. Fast information exchange between neighbors supports the consensus strategy, which can achieve agreement on the information variables of the sensors (e.g., measurements; Das & Moura, 2015). Indeed, Carli, Chiuso, Schenato, and Zampieri (2008), Cattivelli and Sayed (2010), Khan and Moura (2008) and Olfati-Saber (2007) have shown some remarkable results on the
convergence and the consensus of local filters with the consensus strategy. However, a faster communication rate may require greater computation and transmission capability to complete the consensus iterations before the filter updates. In the single-time-scale setting, neighbor communication and measurement sensing share the same rate, which not only reduces the communication burden but also keeps the computation cost linear in the number of sensors over the network (He et al., 2017; Liu, Wang, He, & Zhou, 2015; Matei & Baras, 2012; Zhou, Fang, & Hong, 2013). Additionally, a DKF algorithm with a faster communication rate can be designed by combining a single-time-scale filter with a consensus process. Hence, this paper considers distributed state estimation algorithms in the single time scale. Parameter design is one of the most essential parts of the study of distributed state estimation problems. In He, Xue, and Fang (2016), it is shown that a networked Kalman filter with optimal gain requires a centralized framework, since the calculation of the time-varying gain depends on information from non-neighbors; a modified sub-optimal distributed filter under an undirected graph is then proposed. Distributed filters with constant filtering gains are well studied in Khan and Jadbabaie (2014) and Khan, Kar, Jadbabaie, and Moura (2010), which evaluate the relationship between the instability of the system and the boundedness of the estimation error. In Das and Moura (2015), a measurement-consensus-based DKF is presented, and design methods for the consensus weights as well as the filtering gains are rigorously studied. In Cattivelli and Sayed (2010), a general diffusion DKF based on time-invariant weights is proposed and the performance of the distributed algorithm is analyzed in detail. To achieve better estimation precision, time-varying parameters are considered in Speranzon et al.
(2008), which provides a distributed minimum-variance estimator for a scalar time-varying signal. In Boem et al. (2015), a distributed prediction method for dynamic systems is proposed to minimize bias and variance; the method can effectively compute the time-varying weights of the distributed algorithm. A scalable partition-based distributed Kalman filter is investigated in Farina and Carli (2016) to deal with coupling terms and uncertainty among sub-systems, and the stability of this algorithm is guaranteed by designing proper parameters. Nevertheless, the work mentioned above has not considered the distributed filtering problem under a global observability condition, which allows the sub-system with a local sensor's measurements to be unobservable. Research on distributed filters for time-varying systems under global observability is an important but difficult problem. Since the sensors of a WSN are sparsely located at different positions, an observability condition assumed for the sub-system of one sensor is much stronger than one assumed for the overall system over the global network. However, the work mentioned above pays little attention to the boundedness of the covariance matrix and the convergence of the algorithm under global observability. For time-invariant systems, conditions on global observability are usually determined by the system matrix, the network topology and the global observation matrix, which collects the model information of all sensors (Khan & Jadbabaie, 2014; Khan et al., 2010). This means that distributed filters with constant filtering gains can be designed to guarantee the stability of the algorithm. However, most of these methods fail for time-varying systems. Battistelli and Chisci (2014) and Battistelli, Chisci, Mugnai, Farina, and Graziano (2015) give some pioneering work on building consensus DKF algorithms under global observability for time-invariant systems. Nevertheless, they require the system matrix to be nonsingular at every moment, which is a severe assumption for time-varying systems. In this paper, we aim to develop a scalable and fully distributed algorithm for a class of discrete linear time-varying systems over WSNs. The main contributions are summarized as follows.
(1) The proposed consistent distributed Kalman filter (CDKF) guarantees that the error covariance matrix is upper bounded by a parameter matrix, which is calculated in real time by each sensor using local information. This property is of great importance since it supports an effective real-time error evaluation principle.

(2) A set of adaptive weights based on CI fusion is determined through a semi-definite programming (SDP) convex optimization method. It is proven that the proposed adaptive CI weights ensure a lower error covariance bound than the constant CI weights mainly used in existing work (Battistelli & Chisci, 2014, 2016; Battistelli et al., 2015). Therefore, the adaptive CI weights improve the estimation performance.

(3) Global observability, instead of local observability, is assumed for the system over the network. This allows the sub-system with a local sensor's measurements to be unobservable. Additionally, the assumption that the system matrix is nonsingular at each moment is relaxed (Battistelli & Chisci, 2014, 2016; Battistelli et al., 2015; He et al., 2016; Wang & Ren, 2017). Since the nonsingularity of the system matrix at each moment is difficult to satisfy for time-varying systems, the proposed filter greatly enlarges the application range of distributed state estimation algorithms.

The remainder of this paper is organized as follows. Section 2 presents some necessary preliminaries and notations. Section 3 is on the problem formulation and distributed filtering algorithms. Section 4 analyzes the performance of the proposed algorithm. Section 5 is on simulation studies. The conclusion is given in Section 6.

2. Preliminaries and notations

Let G = (V, E, A) be a directed graph, which consists of the set of nodes V = {1, 2, ..., N}, the set of edges E ⊆ V × V and the weighted adjacency matrix A = [a_{i,j}]. In the weighted adjacency matrix A, all elements are nonnegative, each row sums to one (row stochastic) and all diagonal elements are positive, i.e., a_{i,i} > 0, a_{i,j} ≥ 0, Σ_{j∈V} a_{i,j} = 1. If a_{i,j} > 0, j ≠ i, then there is an edge (i, j) ∈ E, which means node i can directly receive information from node j. In this case, node j is called a neighbor of node i. All neighbors of node i, including node i itself, are collected in the set N_i ≜ {j ∈ V | (i, j) ∈ E} ∪ {i}, whose size is denoted |N_i|. G is called strongly connected if for any pair of nodes (i_1, i_l) there exists a directed path from i_1 to i_l consisting of edges (i_1, i_2), (i_2, i_3), ..., (i_{l−1}, i_l). According to Horn and Johnson (2012) and Varga (2009), the following lemma is obtained.

Lemma 1. If the directed graph G = (V, E, A) is strongly connected with V = {1, 2, ..., N}, then all elements of A^s, s ≥ N − 1, are positive.

Throughout this paper, the notation is fairly standard. The superscript "T" denotes the transpose. The notation A ≥ B (or A > B), where A and B are both symmetric matrices, means that A − B is positive semidefinite (or positive definite). I_n stands for the n × n identity matrix. E{x} denotes the mathematical expectation of the stochastic variable x, and blockcol{·} arranges block elements in a column. blockdiag{·} and diag{·} represent the diagonalization of block elements and scalar elements, respectively. tr(P) is the trace of the matrix P. The notation ⊗ stands for the Kronecker product. The integer set from a to b is denoted [a : b].
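Lemma 1 is easy to check numerically. The following sketch (our own illustration, not part of the paper) builds a row-stochastic adjacency matrix with positive diagonal for a strongly connected directed ring of N = 4 nodes and verifies that A^{N−1} has all positive entries.

```python
import numpy as np

# Sketch: check Lemma 1 on a strongly connected directed ring of
# N = 4 nodes. Each row of the weighted adjacency matrix A is
# stochastic and every diagonal element is positive.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.5              # positive self weight
    A[i, (i + 1) % N] = 0.5    # edge to one neighbor -> directed cycle

# Lemma 1: for a strongly connected graph, all entries of A^s are
# positive once s >= N - 1.
As = np.linalg.matrix_power(A, N - 1)
print((As > 0).all())  # True
```

Powers of A mix the self-loop with the cycle, so after N − 1 steps every node has a positive-weight path to every other node, which is exactly what the lemma asserts.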
164
X. He et al. / Automatica 92 (2018) 162–172
3. Problem formulation and distributed filtering algorithms

3.1. Problem formulation

Consider the following time-varying stochastic system

x_{k+1} = A_k x_k + ω_k, k = 0, 1, 2, ...,
y_{k,i} = H_{k,i} x_k + v_{k,i}, i = 1, 2, ..., N,    (1)

where x_k ∈ R^n is the state at the kth moment, A_k ∈ R^{n×n} is the known system matrix, ω_k ∈ R^n is the process noise with covariance matrix Q_k ∈ R^{n×n}, y_{k,i} ∈ R^m is the measurement vector obtained by sensor i, H_{k,i} ∈ R^{m×n} is the observation matrix of sensor i, and v_{k,i} is the observation noise with covariance matrix R_{k,i} ∈ R^{m×m}. N is the number of sensors over the network.

Definition 1. The ith sub-system of the overall system (1) is defined as the system with respect to (A_k, H_{k,i}).

In this paper, the following assumptions are needed.

Assumption 1. The sequences {ω_k}_{k=0}^∞ and {v_{k,i}}_{k=0}^∞ are zero-mean, Gaussian, white and mutually uncorrelated. Also, R_{k,i} is positive definite, ∀k ≥ 0. There exist two constant positive definite matrices Q̄_1 and Q̄_2 such that Q̄_1 ≤ Q_k ≤ Q̄_2, ∀k ≥ 0. The initial state x_0 is generated by a zero-mean white Gaussian process independent of {ω_k}_{k=0}^∞ and {v_{k,i}}_{k=0}^∞, subject to E{x_0 x_0^T} = P_0.

Assumption 2. The system (1) is uniformly completely observable, i.e., there exist a positive integer N̄ and positive constants α, β such that

0 < α I_n ≤ Σ_{j=k}^{k+N̄} Φ_{j,k}^T H_j^T R_j^{−1} H_j Φ_{j,k} ≤ β I_n,

for any k ≥ 0, where

Φ_{k,k} = I_n, Φ_{k+1,k} = A_k, Φ_{j,k} = Φ_{j,j−1} ··· Φ_{k+1,k},
H_k = blockcol{H_{k,1}, H_{k,2}, ..., H_{k,N}},
R_k = blockdiag{R_{k,1}, R_{k,2}, ..., R_{k,N}}.

Assumption 3. The topology of the network G = (V, E, A) is a fixed directed graph and it is strongly connected.

Assumption 4. There exists a positive scalar β_1 such that λ_max(A_k A_k^T) ≤ β_1, ∀k ≥ 0.

Assumption 5. There exist a sequence set K = {k_l, l ≥ 1}, an integer L ≥ N + N̄ and a scalar β_2 > 0, such that

sup_{l≥1}(k_{l+1} − k_l) < ∞,
inf_{l≥1}(k_{l+1} − k_l) > 0,
λ_min(A_{k_l+s} A_{k_l+s}^T) ≥ β_2, ∀k_l ∈ K, s = 0, ..., L − 1.

Remark 1. Assumption 2 is a basic global observability condition which does not require any sub-system with a local sensor's measurements to be observable. Assumption 3 is quite general for the directed topology graph of the network, since strong connectivity is a basic condition for the implementation of distributed algorithms, which rely on information spreading over the network. Assumption 5 does not require A_k to be nonsingular at each moment, as is assumed in Battistelli and Chisci (2016), Battistelli et al. (2015), He et al. (2016) and Wang and Ren (2017). Under this condition, the application range of distributed filters is effectively enlarged.
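The gap between global and local observability in Assumption 2 can be illustrated with a toy system of our own (not from the paper): with A_k = I_2, a sensor seeing only the first state coordinate and a sensor seeing only the second are each individually unobservable, yet the stacked observation matrix makes the global Gramian positive definite.

```python
import numpy as np

# Toy check: with A_k = I_2, sensor 1 observes only the first state
# and sensor 2 only the second. Each sub-system (A, H_i) is
# unobservable, but the overall system satisfies the global
# observability condition of Assumption 2.
A = np.eye(2)
H1 = np.array([[1.0, 0.0]])
H2 = np.array([[0.0, 1.0]])
R = np.eye(1)  # unit measurement-noise covariance per sensor

def gramian(H_list, horizon=3):
    """Observability Gramian sum_{j=0}^{horizon} Phi^T H^T R^{-1} H Phi
    for the constant system matrix A (here Phi_{j,0} = A^j)."""
    G = np.zeros((2, 2))
    Phi = np.eye(2)
    for _ in range(horizon + 1):
        for H in H_list:
            G += Phi.T @ H.T @ np.linalg.inv(R) @ H @ Phi
        Phi = A @ Phi
    return G

lam_local = np.linalg.eigvalsh(gramian([H1]))     # singular Gramian
lam_global = np.linalg.eigvalsh(gramian([H1, H2]))
print(lam_local.min(), lam_global.min())
```

The local Gramian has a zero eigenvalue (the unseen coordinate), while the global Gramian is bounded below by α I_n with α > 0, exactly the situation Assumption 2 is designed to cover.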
In this paper, we consider the following general distributed filtering structure for sensor i, which mainly consists of three parts:

x̄_{k,i} = A_{k−1} x̂_{k−1,i},
φ_{k,i} = x̄_{k,i} + K_{k,i}(y_{k,i} − H_{k,i} x̄_{k,i}),
x̂_{k,i} = Σ_{j∈N_i} W_{k,i,j} φ_{k,j},
s.t. Σ_{j∈N_i} W_{k,i,j} = I_n, W_{k,i,j} = 0 if j ∉ N_i,    (2)

where x̄_{k,i}, φ_{k,i} and x̂_{k,i} are the state prediction, state update and state estimate of sensor i at the kth moment, respectively. K_{k,i} is the filtering gain matrix and the W_{k,i,j} are the local fusion matrices. Additionally, the condition Σ_{j∈N_i} W_{k,i,j} = I_n guarantees the unbiasedness of the estimates. Note that, since the measurement update and the local fusion are implemented in two steps, this may lead to a sub-optimal design of K_{k,i} and W_{k,i,j} (Carli et al., 2008). The objective of this paper is to design the filtering gain K_{k,i} and the fusion weights W_{k,i,j} for each sensor using only local information under peer-to-peer communication, so as to estimate the state in (1). In the meanwhile, the performance of the proposed filter, including the boundedness of the estimation covariance and the convergence of the filter, is to be analyzed.

3.2. Distributed filters

Given the filter structure (2), the optimal filtering gain matrix K*_{k,i} is obtained through

K*_{k,i} = arg min_{K_{k,i}} tr(P_{k,i}),
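The three-part structure (2) can be sketched in a few lines. The snippet below is our own illustration, not the paper's code; the uniform fusion weights W = (1/|N_i|) I_n are just one admissible choice satisfying Σ_j W_{k,i,j} = I_n, and all model matrices are placeholders.

```python
import numpy as np

# Sketch of one step of structure (2) for sensor i:
# prediction, measurement update, then local fusion of neighbors'
# updated states. The weights (1/|N_i|) * I_n sum to I_n as required.
def filter_step(x_prev, A_prev, K, H, y, phi_neighbors):
    """phi_neighbors: list of the neighbors' updates phi_{k,j};
    sensor i's own update is appended before fusion."""
    n = x_prev.shape[0]
    x_bar = A_prev @ x_prev                  # prediction
    phi = x_bar + K @ (y - H @ x_bar)        # measurement update
    all_phi = phi_neighbors + [phi]
    W = np.eye(n) / len(all_phi)             # fusion weights sum to I_n
    x_hat = sum(W @ p for p in all_phi)      # local fusion
    return x_bar, phi, x_hat

# Tiny run: gain K = 0 keeps phi = x_bar, so fusion just averages.
x_bar, phi, x_hat = filter_step(
    np.array([1.0, 0.0]), np.eye(2), np.zeros((2, 1)),
    np.array([[1.0, 0.0]]), np.array([0.0]), [np.array([3.0, 0.0])])
print(x_hat)  # [2. 0.]
```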
where P_{k,i} = E{(x̂_{k,i} − x_k)(x̂_{k,i} − x_k)^T}. One then obtains the networked Kalman filter with optimal gain in Table 1, where K_{k,i} stands for K*_{k,i} (He et al., 2016). In this algorithm, the error covariance matrices take the forms P̄_{k,i,j} = E{(x̄_{k,i} − x_k)(x̄_{k,j} − x_k)^T}, P̃_{k,i,j} = E{(φ_{k,i} − x_k)(φ_{k,j} − x_k)^T} and P_{k,i,j} = E{(x̂_{k,i} − x_k)(x̂_{k,j} − x_k)^T}. However, since the calculations of (P̄_{k,i,j}, P̃_{k,i,j}, P_{k,i,j}) need global information on {K_{k,j}, H_{k,j}, j ∈ V}, the algorithm in Table 1 can hardly be conducted in a scalable manner for a large network. Since the optimal design of the W_{k,i,j} depends on covariance matrices that rely on global information (He et al., 2016), we discuss in the following a sub-optimal design of W_{k,i,j} that uses only local information. Generally, for the design of the local fusion weights W_{k,i,j}, traditional methods assume W_{k,i,j} = α_{i,j} I_n, where the α_{i,j} are positive scalars satisfying the required conditions (Cattivelli & Sayed, 2010; Li, Jia, Du, & Meng, 2015). In this paper, the W_{k,i,j} are time-varying matrix weights obtained by the CI strategy (Julier & Uhlmann, 1997). Hence, we propose a sub-optimal scalable algorithm, named the consistent distributed Kalman filter, in Table 2, which corresponds to the communication topology illustrated in Fig. 1. In the communication process, only the pair (φ_{k,j}, P̃_{k,j}) is transferred from the neighbors. Some remarks on the proposed consistent distributed Kalman filter in Table 2 are given as follows. Firstly, the matrix P_{k,i} in the algorithm need not equal the error covariance matrix of sensor i; the relationship between P_{k,i} and the error covariance matrix is shown in the subsequent parts. Secondly, the time-varying CI weights {w_{k,i,j}} generalize the constant CI weights used in Battistelli and Chisci (2014, 2016) and Battistelli et al. (2015).
This paper will show an adaptive design method for {w_{k,i,j}} through a convex optimization algorithm. Thirdly, the proposed algorithm can also be equipped with consensus (multiple rounds of local fusion) with a certain number of steps, similar to Battistelli and Chisci (2014). Fourthly, compared with the computational complexity of the centralized Kalman filter, O(m^3 N^3 + n^3), that of the CDKF in Table 2 for sensor i is O(m^3 + n^3 |N_i|) if the {w_{k,i,j}} are set constant, e.g., w_{k,i,j} = a_{i,j}. If the weights are instead obtained by a certain optimization algorithm, the computational complexity of the total algorithm should include the complexity of the specific optimization method. It is noted that the main advantages of distributed filters over centralized ones lie in greater robustness and a lower communication burden.

Table 1: Networked Kalman filter with optimal gain (He et al., 2016).
Prediction:
x̄_{k,i} = A_{k−1} x̂_{k−1,i},
P̄_{k,i} = A_{k−1} P_{k−1,i} A_{k−1}^T + Q_{k−1},
P̄_{k,i,j} = A_{k−1} P_{k−1,i,j} A_{k−1}^T + Q_{k−1}.
Measurement update:
φ_{k,i} = x̄_{k,i} + K_{k,i}(y_{k,i} − H_{k,i} x̄_{k,i}),
K_{k,i} = P̄_{k,i} H_{k,i}^T (H_{k,i} P̄_{k,i} H_{k,i}^T + R_{k,i})^{−1},
P̃_{k,i} = (I − K_{k,i} H_{k,i}) P̄_{k,i},
P̃_{k,j,s} = (I − K_{k,j} H_{k,j}) P̄_{k,j,s} (I − K_{k,s} H_{k,s})^T, j ≠ s.
Local fusion:
x̂_{k,i} = Σ_{j∈N_i} W_{k,i,j} φ_{k,j},
P_{k,i} = Σ_{j∈N_i} Σ_{s∈N_i} W_{k,i,j} P̃_{k,j,s} W_{k,i,s}^T,
P_{k,i,l} = Σ_{j∈N_i} Σ_{s∈N_l} W_{k,i,j} P̃_{k,j,s} W_{k,l,s}^T.

Table 2: Consistent distributed Kalman filter.
Prediction:
x̄_{k,i} = A_{k−1} x̂_{k−1,i},
P̄_{k,i} = A_{k−1} P_{k−1,i} A_{k−1}^T + Q_{k−1}.
Measurement update:
φ_{k,i} = x̄_{k,i} + K_{k,i}(y_{k,i} − H_{k,i} x̄_{k,i}),
K_{k,i} = P̄_{k,i} H_{k,i}^T (H_{k,i} P̄_{k,i} H_{k,i}^T + R_{k,i})^{−1},
P̃_{k,i} = (I − K_{k,i} H_{k,i}) P̄_{k,i}.
Local fusion: receive (φ_{k,j}, P̃_{k,j}) from neighbors j ∈ N_i,
x̂_{k,i} = P_{k,i} Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1} φ_{k,j},
P_{k,i} = (Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1})^{−1},
design w_{k,i,j} (≥ 0) such that Σ_{j∈N_i} w_{k,i,j} = 1.
Initialization: x̂_{0,i} = 0, P_{0,i} ≥ P_0.

Fig. 1. An illustration of the communication topology for the consistent distributed Kalman filter.

4. Performance analysis

In this section, we give the performance analysis of the proposed consistent distributed Kalman filter. The distribution of the estimation error and the consistency of the proposed filter are studied, and a design of adaptive CI weights is considered. Additionally, we prove the boundedness of the estimation covariance as well as the convergence of the filter.

4.1. Error evaluation and consistency

Firstly, the following theorem gives the probability distribution of each sensor's state estimation error.

Theorem 1. Consider the system (1) with the CDKF in Table 2. Under Assumption 1, the state estimation error of each sensor is zero-mean and Gaussian, i.e.,

x̂_{k,i} − x_k = e_{k,i} ∼ N(0, E{e_{k,i} e_{k,i}^T}), ∀i ∈ V, k ≥ 0,    (3)

where N(0, U) is the Gaussian distribution with mean 0 and covariance matrix U.

Proof. See Appendix A.

The Gaussianity and unbiasedness of the estimation error in Theorem 1 provide an effective evaluation method for the system state, provided the estimation error covariance matrix E{e_{k,i} e_{k,i}^T} can be obtained. In the Kalman filter, the error covariance is represented by the parameter P_k. However, in distributed Kalman filters (Cattivelli & Sayed, 2010; Das & Moura, 2015; Olfati-Saber, 2007), the relationship between P_{k,i} and the error covariance matrix is uncertain. For the sake of evaluating the estimation error of the CDKF, this relationship is analyzed through the notion of consistency, defined as follows.

Definition 2 (Julier & Uhlmann, 1997). Suppose x_k is a random vector. Let x̂_k and P_k be the estimate of x_k and the estimate of the corresponding error covariance matrix, respectively. Then the pair (x̂_k, P_k) is said to be consistent (or of consistency) at the kth moment if E{(x̂_k − x_k)(x̂_k − x_k)^T} ≤ P_k.

The following theorem shows the consistency of the CDKF, which directly depicts the relationship between the estimation error covariance matrix E{e_{k,i} e_{k,i}^T} and the parameter matrix P_{k,i}.

Theorem 2. Consider the system (1). Under Assumption 1, the pair (x̂_{k,i}, P_{k,i}) of the CDKF in Table 2 is consistent, i.e.,

E{(x̂_{k,i} − x_k)(x̂_{k,i} − x_k)^T} ≤ P_{k,i}, ∀i ∈ V, k ≥ 0.    (4)
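The local-fusion step of Table 2 is an information-form covariance intersection. The sketch below is our own illustration (not the paper's code) of that fusion rule for one sensor with given weights summing to one.

```python
import numpy as np

# Sketch of the CDKF local-fusion step of Table 2: sensor i combines
# neighbors' pairs (phi_j, Ptilde_j) by covariance intersection.
def ci_fuse(phis, Ptildes, weights):
    """P = (sum_j w_j Ptilde_j^{-1})^{-1},
    x_hat = P @ sum_j w_j Ptilde_j^{-1} phi_j."""
    info = sum(w * np.linalg.inv(Pt) for w, Pt in zip(weights, Ptildes))
    vec = sum(w * np.linalg.inv(Pt) @ ph
              for w, ph, Pt in zip(weights, phis, Ptildes))
    P = np.linalg.inv(info)
    return P @ vec, P

# Example: two neighbors with equal weights and equal covariances.
phis = [np.array([1.0, 0.0]), np.array([3.0, 0.0])]
Ptildes = [np.eye(2), np.eye(2)]
x_hat, P = ci_fuse(phis, Ptildes, [0.5, 0.5])
print(x_hat)  # [2. 0.]  (equal information -> simple average)
```

With unequal P̃_{k,j}, the same rule automatically weights the more informative neighbor more heavily, which is the mechanism the adaptive weights of Section 4.2 exploit.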
Proof. We prove the theorem by induction. Firstly, under the initial condition, since x_0 ∼ N(0, P_0), we have E{(x̂_{0,i} − x_0)(x̂_{0,i} − x_0)^T} ≤ P_{0,i}. Suppose that, at the (k−1)th moment, E{(x̂_{k−1,i} − x_{k−1})(x̂_{k−1,i} − x_{k−1})^T} = E{e_{k−1,i} e_{k−1,i}^T} ≤ P_{k−1,i}. Eq. (A.3) gives the prediction error at the kth moment in the form ē_{k,i} = A_{k−1} e_{k−1,i} − ω_{k−1}. Since E{e_{k−1,i} ω_{k−1}^T} = 0, it is immediate that E{ē_{k,i} ē_{k,i}^T} = A_{k−1} E{e_{k−1,i} e_{k−1,i}^T} A_{k−1}^T + Q_{k−1}. Thus, E{ē_{k,i} ē_{k,i}^T} ≤ A_{k−1} P_{k−1,i} A_{k−1}^T + Q_{k−1} = P̄_{k,i}. In the update process, according to (A.2), we have ẽ_{k,i} = (I − K_{k,i} H_{k,i}) ē_{k,i} + K_{k,i} v_{k,i}. Because E{ē_{k,i} v_{k,i}^T} = 0, we obtain E{ẽ_{k,i} ẽ_{k,i}^T} ≤ P̃_{k,i}. Notice that e_{k,i} = x̂_{k,i} − x_k = P_{k,i} Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1} ẽ_{k,j}; then we get

E{e_{k,i} e_{k,i}^T} = P_{k,i} E{Δ_{k,i}} P_{k,i},    (5)

where Δ_{k,i} = (Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1} ẽ_{k,j})(Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1} ẽ_{k,j})^T. According to the consistent estimate of covariance intersection (Niehsen, 2002), we find that

E{Δ_{k,i}} ≤ Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1} E{ẽ_{k,j} ẽ_{k,j}^T} P̃_{k,j}^{−1} ≤ P_{k,i}^{−1}.    (6)

Hence, the combination of (5) and (6) leads to (4). □

Remark 2. Theorem 2 essentially states that the estimation error covariance matrix is upper bounded by the parameter P_{k,i} provided by the filter. With this property, one can not only evaluate the estimation error in real time under the error distribution given in Theorem 1, but also judge the boundedness of the covariance matrix through P_{k,i}.
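The covariance-intersection property behind (6) can be checked numerically. The example below is our own illustration: for two consistent estimates and an arbitrary admissible cross covariance, the CI bound dominates the true covariance of the fused error.

```python
import numpy as np

# Check the CI consistency property: whatever the (unknown) cross
# covariance S12 between two estimation errors e1, e2 with reported
# covariances P1 >= cov(e1), P2 >= cov(e2), the CI bound
# P = (w1 P1^{-1} + w2 P2^{-1})^{-1} dominates the fused covariance.
P1 = np.diag([2.0, 1.0])   # true (and reported) covariance, sensor 1
P2 = np.diag([1.0, 2.0])   # sensor 2
S12 = 0.5 * np.eye(2)      # an admissible cross covariance
w1, w2 = 0.4, 0.6

P = np.linalg.inv(w1 * np.linalg.inv(P1) + w2 * np.linalg.inv(P2))
K1 = P @ (w1 * np.linalg.inv(P1))   # effective fusion gains, K1 + K2 = I
K2 = P @ (w2 * np.linalg.inv(P2))

# True covariance of the fused error K1 e1 + K2 e2.
true_cov = (K1 @ P1 @ K1.T + K1 @ S12 @ K2.T
            + K2 @ S12.T @ K1.T + K2 @ P2 @ K2.T)

gap = np.linalg.eigvalsh(P - true_cov)
print(gap.min() >= -1e-10)  # True: P - true_cov is positive semidefinite
```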
4.2. Design of adaptive CI weights

Since the matrix P_{k,i} is an upper bound of the covariance of the (unavailable) estimation error, we seek to compress P_{k,i} so as to lower the estimation error, and a proper design of the w_{k,i,j} helps achieve this compression. Since P_{k,i} = (Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1})^{−1}, we aim to obtain a smaller P_{k,i} than the one calculated with the constant weights a_{i,j}, i.e.,

Δ_{k,i} = Σ_{j∈N_i} (w_{k,i,j} − a_{i,j}) P̃_{k,j}^{−1} > 0.    (7)

Under the condition Δ_{k,i} > 0, we consider the following optimization problem:

{w_{k,i,j}, j ∈ N_i} = arg min_{w_{k,i,j}} tr(Δ_{k,i}^{−1}),    (8)
s.t. Σ_{j∈N_i} w_{k,i,j} = 1, 0 ≤ w_{k,i,j} ≤ 1, j ∈ N_i.

In this study, we choose the w_{k,i,j} to make P_{k,i}^{−1} = Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1} as large as possible compared with Σ_{j∈N_i} a_{i,j} P̃_{k,j}^{−1}, which corresponds to the constant weights w_{k,i,j} ≡ a_{i,j}; this is what the objective function in (8) expresses, while the condition Δ_{k,i} > 0 is the constraint of the optimization problem. Therefore, the solution of the optimization problem always satisfies P_{k,i} < (Σ_{j∈N_i} a_{i,j} P̃_{k,j}^{−1})^{−1}.

To solve the problem (8), it is equivalently transferred to another form given in Lemma 3, by means of the following lemma.

Lemma 2 (Schur Complement, Seber, 2007). The linear matrix inequality (LMI)

[ Q(x)    S(x) ]
[ S(x)^T  R(x) ] > 0,

where Q(x) = Q(x)^T and R(x) = R(x)^T, is equivalent to the following condition:

Q(x) > 0, R(x) − S(x)^T Q(x)^{−1} S(x) > 0.

Lemma 3. The problem (8) is equivalent to the following convex optimization problem

{w_{k,i,j}, m_{k,i_l}} = arg min_{w_{k,i,j}, m_{k,i_l}} tr(M_{k,i}),    (9)

subject to

[ Δ_{k,i}  I_n     ]
[ I_n      M_{k,i} ] > 0,    (10)

where M_{k,i} = diag{m_{k,i_1}, m_{k,i_2}, ..., m_{k,i_n}} > 0, Σ_{j∈N_i} w_{k,i,j} = 1, 0 ≤ w_{k,i,j} ≤ 1, j ∈ N_i.

Proof. See Appendix B.

Lemma 4. The problem (9) is a convex optimization problem, which can be solved through the algorithm of Table 3¹ based on the popular SDP machinery (Boyd & Vandenberghe, 2004).

Proof. See Appendix C.

Table 3: SDP algorithm for the solution of (9).
Input: a_{i,j}, P̃_{k,j}, j = 1, ..., |N_i|.
Output: w_{k,i,j}, obtained by solving the following problem.
(1) SDP optimization:
z* = arg min c^T z,
subject to A_1 z = b_1, A_2 z ≥ b_2, F_0 + z_1 F_1 + ··· + z_{|N_i|+n} F_{|N_i|+n} > 0,
where b_1 ∈ R, z, c ∈ R^{|N_i|+n}, A_1 ∈ R^{1×(|N_i|+n)}, A_2 ∈ R^{(2|N_i|+n)×(|N_i|+n)}, b_2 ∈ R^{2|N_i|+n}, F_s ∈ R^{2n×2n}, s = 0, ..., |N_i| + n, with the following forms:
z = (w_{k,i,1} − a_{i,1}, ..., w_{k,i,|N_i|} − a_{i,|N_i|}, m_{k,i_1}, ..., m_{k,i_n})^T,
c = (0, ..., 0, 1, ..., 1)^T, with the last n entries equal to 1;
A_1 = (1, ..., 1, 0, ..., 0), with the first |N_i| entries equal to 1, and b_1 = 0;
the inequality A_2 z ≥ b_2 encodes w_{k,i,j} ≥ 0, w_{k,i,j} ≤ 1 and m_{k,i_l} ≥ 0, i.e., z_j ≥ −a_{i,j} and −z_j ≥ a_{i,j} − 1 for j ≤ |N_i|, and z_{|N_i|+l} ≥ 0 for l = 1, ..., n;
F_0 = [ 0_{n×n}  I_n ; I_n  0_{n×n} ],
F_s = [ P̃_{k,s}^{−1}  0 ; 0  0_{n×n} ], s ≤ |N_i|,
F_s = diag{0, ..., 0, 1, 0, ..., 0}, |N_i| < s ≤ |N_i| + n, with the single 1 in the (n + s − |N_i|)th diagonal position.
(2) Adaptive CI weights: w_{k,i,j} = z_j* + a_{i,j}, j = 1, ..., |N_i|.

Remark 4. The SDP optimization of Table 3 can be handled with existing optimization algorithms, such as interior-point methods, first-order methods and bundle methods. The total computational complexity of the proposed algorithm consists of that of the filtering method and that of the chosen optimization method.

By utilizing the algorithm of Table 3 to solve the optimization problem (9), one can obtain a set of adaptive CI weights at each moment, which effectively compresses the error covariance bound. Regarding the comparison of the error covariance bound between the adaptive CI weights and the constant CI weights, the following theorem gives a direct conclusion.

Theorem 3. Consider the system (1) with the CDKF in Table 2. Under Assumption 1 and the same initial conditions, there holds

P_{k,i|w} ≤ P_{k,i|a}, ∀i ∈ V, ∀k ≥ 0,

where the parameter matrices P_{k,i|w} and P_{k,i|a} correspond to the weights w_{k,i,j} and a_{i,j}, respectively.

Proof. See Appendix D.
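The adaptive-weight selection can be illustrated without a full SDP solver. The toy below is our own sketch, not the paper's Table 3: for a sensor with two neighbors it grid-searches the scalar weight, minimizing the objective of (8) subject to Δ_{k,i} > 0, and falls back to the constant weights a_{i,j} when no feasible point exists, mirroring the fallback of Remark 3.

```python
import numpy as np

# Toy substitute for the SDP of Table 3: with |N_i| = 2 the weights
# are (w, 1 - w). Grid-search w to minimize tr(Delta^{-1}) subject to
# Delta = (w - a1) P1^{-1} + ((1 - w) - a2) P2^{-1} > 0, falling back
# to (a1, a2) if the constraint is infeasible.
def adaptive_ci_weights(Pt1, Pt2, a1, a2, grid=201):
    I1, I2 = np.linalg.inv(Pt1), np.linalg.inv(Pt2)
    best_w, best_obj = None, np.inf
    for w in np.linspace(0.0, 1.0, grid):
        Delta = (w - a1) * I1 + ((1.0 - w) - a2) * I2
        if np.linalg.eigvalsh(Delta).min() <= 1e-12:
            continue                      # Delta > 0 violated
        obj = np.trace(np.linalg.inv(Delta))
        if obj < best_obj:
            best_w, best_obj = w, obj
    if best_w is None:                    # infeasible: keep a_{i,j}
        return a1, a2
    return best_w, 1.0 - best_w

# Neighbor 1 is far more informative, so the weight shifts toward it.
w1, w2 = adaptive_ci_weights(np.eye(2), 10.0 * np.eye(2), 0.5, 0.5)
print(w1 > 0.5)  # True
```

A real implementation would solve the SDP of Table 3 (e.g., with an interior-point solver); the grid search here only conveys the feasibility constraint and the direction of the improvement.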
4.3. Boundedness and convergence of CDKF

Due to the consistency of the CDKF established in Theorem 2, the boundedness of P_{k,i} implies the boundedness of the covariance matrix. Thus, we draw the following boundedness conclusion on the proposed CDKF.

Theorem 4. Under Assumptions 1–5, there exists a positive definite matrix P̂ such that

P_{k,i} ≤ P̂ < ∞, ∀i ∈ V, ∀k ≥ 0,    (11)

where P_{k,i} is the parameter matrix of the CDKF in Table 2.
Remark 3. The SDP of Table 3 may be infeasible because of the LMI constraint (10). If the SDP is infeasible, {w_{k,i,j}} keeps the setting {w_{k,i,j} = a_{i,j}, j ∈ N_i}. Thus, the proposed algorithm always works, whether or not the SDP is feasible.
¹ The subscript j of the nonzero parameters (w_{k,i,j}, a_{i,j}, P̃_{k,j}) is set from 1 to |N_i| without loss of generality.

Proof. According to Theorem 3 and P_{k,i} = P_{k,i|w}, we only need to prove that P_{k,i|a} is uniformly upper bounded. Under Assumption 5, one can pick a subsequence set {k_{l_m}, m ≥ 1} from the sequence set K = {k_l, l ≥ 1} such that L ≤ k_{l_{m+1}} − k_{l_m}. Due to sup_{l≥1}(k_{l+1} − k_l) < ∞, there exists a sufficiently large integer L̄ such that L ≤ k_{l_{m+1}} − k_{l_m} ≤ L̄. Without loss of generality, we suppose the set {k_l} itself has this property, i.e., L ≤ k_{l+1} − k_l ≤ L̄, ∀l ≥ 1. To prove the boundedness of P_{k,i|a}, we divide the sequence set {k_l, l ≥ 1}
into two bounded and non-overlapping sets: {k_l + L, l ≥ 1} and ∪_{l≥1}[k_l + L + 1 : k_{l+1} + L − 1].

Step 1: k = k_l + L, l ≥ 1. At the (k_l + L)th moment, substituting w_{k,i,j} = a_{i,j} into the CDKF gives P_{k_l+L,i|a}^{−1} = Σ_{j∈N_i} a_{i,j} P̃_{k_l+L,j|a}^{−1}. According to (D.2) and Assumptions 4 and 5, we can obtain

P_{k_l+L,i|a}^{−1} = Σ_{j∈N_i} a_{i,j} (P̄_{k_l+L,j|a}^{−1} + H_{k_l+L,j}^T R_{k_l+L,j}^{−1} H_{k_l+L,j})
≥ η Σ_{j∈N_i} a_{i,j} A_{k_l+L−1}^{−T} P_{k_l+L−1,j|a}^{−1} A_{k_l+L−1}^{−1} + Σ_{j∈N_i} a_{i,j} H_{k_l+L,j}^T R_{k_l+L,j}^{−1} H_{k_l+L,j},    (12)

where 0 < η < 1, and the last inequality is derived similarly to Lemma 1 in Battistelli and Chisci (2014) by noting the lower boundedness of A_{k_l+L−1} A_{k_l+L−1}^T and the upper boundedness of Q_k. By recursively applying (12) for L times, there is

P_{k_l+L,i|a}^{−1} ≥ η^L Φ_{k_l+L,k_l}^{−T} (Σ_{j∈V} a_{i,j}^L P_{k_l,j|a}^{−1}) Φ_{k_l+L,k_l}^{−1} + Σ_{s=1}^{L} η^{s−1} Σ_{j∈V} a_{i,j}^s S_{k_l+L−s+1,j},    (13)

where Φ_{k,j} is the state transition matrix defined in Assumption 2, a_{i,j}^s denotes the (i, j)th element of A^s, and

S_{k−s+1,j} = Φ_{k,k−s+1}^{−T} S̄_{k−s+1,j} Φ_{k,k−s+1}^{−1},
S̄_{k−s+1,j} = H_{k−s+1,j}^T R_{k−s+1,j}^{−1} H_{k−s+1,j}.

According to Assumption 3 and Lemma 1, there is a_{i,j}^s > 0, s ≥ N. Since the first part on the right side of (13) is positive definite, we consider the second part, denoted as P̆_{k,i|a}^{−1}. Then

P̆_{k,i|a}^{−1} = Σ_{s=1}^{L} η^{s−1} Σ_{j∈V} a_{i,j}^s S_{k_l+L−s+1,j}
≥ Σ_{s=N}^{L} η^{s−1} Σ_{j∈V} a_{i,j}^s S_{k_l+L−s+1,j}
≥ a_min Σ_{s=N}^{L} η^{s−1} Σ_{j∈V} S_{k_l+L−s+1,j}
≥ a_min η^{L−1} Σ_{b=k_l+1}^{k_l+1+L−N} Φ_{k_l+L,b}^{−T} (Σ_{j∈V} H_{b,j}^T R_{b,j}^{−1} H_{b,j}) Φ_{k_l+L,b}^{−1}
= a_min η^{L−1} Σ_{b=k_l+1}^{k_l+1+L−N} Φ_{k_l+L,b}^{−T} H_b^T R_b^{−1} H_b Φ_{k_l+L,b}^{−1},    (14)

where a_min = min_{i,j∈V} a_{i,j}^s > 0, s ∈ [N : L], and H_k and R_k are defined in Assumption 2. Since the system (1) is uniformly completely observable, there is

Σ_{j=k_l+1}^{k_l+1+N̄} Φ_{j,k_l+1}^T H_j^T R_j^{−1} H_j Φ_{j,k_l+1} = G_{k_l+1}^T R̂_{k_l+1}^{−1} G_{k_l+1} ≥ α I_n, α > 0,    (15)

where G_k = blockcol{H_k, H_{k+1} Φ_{k+1,k}, ..., H_{k+N̄} Φ_{k+N̄,k}} and R̂_k = blockdiag{R_k, R_{k+1}, ..., R_{k+N̄}}. Due to the nonsingularity of A_k, k ∈ [k_l : k_l + L − 1], and the relationship N + N̄ ≤ L, the matrix F_{k_l+1} can be well defined as F_{k_l+1} = Φ_{k_l+1+N̄,k_l+1}^{−1}. Under Assumption 5, for k ∈ [k_l : k_l + L − 1], there exists a positive real κ such that F_{k_l+1}^T F_{k_l+1} > κ I_n. Thus, considering (15), one can obtain that

Σ_{j=k_l+1}^{k_l+1+N̄} Φ_{k_l+1+N̄,j}^{−T} H_j^T R_j^{−1} H_j Φ_{k_l+1+N̄,j}^{−1} = F_{k_l+1}^T G_{k_l+1}^T R̂_{k_l+1}^{−1} G_{k_l+1} F_{k_l+1} ≥ κ α I_n, α > 0, κ > 0.    (16)

According to (16) and L ≥ N + N̄, P̆_{k,i|a}^{−1} in (14) is lower bounded by a constant positive definite matrix. Thus, P_{k_l+L,i|a} is upper bounded by a constant positive definite matrix P, i.e., P_{k_l+L,i|a} ≤ P.

Step 2: k ∈ [k_l + L + 1 : k_{l+1} + L − 1], l ≥ 1. From Step 1, it is known that P_{k_l+L,i|a} ≤ P. Since the length of the interval [k_l + L + 1 : k_{l+1} + L − 1] is bounded by L̄, we can just consider the prediction process to study the boundedness of P_{k,i|a} on this interval. Under Assumption 4, there is A_k A_k^T ≤ β_1 I_n. Under Assumption 1, Q_k ≤ Q̄_2 < ∞. Thus, it is safe to conclude that there exists a constant matrix P_mid such that P_{k,i|a} ≤ P_mid, k ∈ [k_l + L + 1 : k_{l+1} + L − 1]. Since k_1 is finite, for k ∈ [0 : k_1 + L − 1] there exists a constant matrix P_0 such that P_{k,i|a} ≤ P_0. Given P, P_mid and P_0, according to Theorem 3 and the division of {k_l, l ≥ 1}, it is straightforward to guarantee (11). □

Under the boundedness of P_{k,i} provided in Theorem 4, we can obtain Theorem 5, which depicts the convergence of the proposed CDKF.

Theorem 5. Under Assumptions 1–3, if {A_k}_{k=0}^∞ belongs to a nonsingular compact set, then there is

lim_{k→+∞} E{x̂_{k,i} − x_k} = 0, ∀i ∈ V.    (17)

Proof. Under the conditions of this theorem, the conclusion of Theorem 4 holds. Thus, P_{k,i} is uniformly upper bounded. Then, according to Assumptions 1 and 4, P̄_{k,i} is uniformly upper bounded and lower bounded. Thus, we can define the following Lyapunov function

V_{k,i}(E{ē_{k,i}}) = E{ē_{k,i}}^T P̄_{k,i}^{−1} E{ē_{k,i}}.

From the fact (iii) of Lemma 1 in Battistelli and Chisci (2014) and the invertibility of A_k, there is

V_{k+1,i}(E{ē_{k+1,i}}) = E{ē_{k+1,i}}^T P̄_{k+1,i}^{−1} E{ē_{k+1,i}} ≤ β̌ E{ē_{k+1,i}}^T A_k^{−T} P_{k,i}^{−1} A_k^{−1} E{ē_{k+1,i}},    (18)

where 0 < β̌ < 1. Due to ē_{k+1,i} = A_k e_{k,i} − ω_k, E{ω_k} = 0 and (18), one can obtain V_{k+1,i}(E{ē_{k+1,i}}) ≤ β̌ E{e_{k,i}}^T P_{k,i}^{−1} E{e_{k,i}}. Notice that P_{k,i} = (Σ_{j∈N_i} w_{k,i,j} P̃_{k,j}^{−1})^{−1} and x̂_{k,i} − x_k = P_{k,i}
kl +1+N¯
1 HkTl +L,j R− kl +L,j Hkl +L,j )
j∈Ni
≥η
167
∑
∑
j∈Ni
wk,i,j P˜ k−,j1 xk .
(19)
wk,i,j P˜ k−,j1 )−1 , there is (20)
j∈Ni
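Two information-form identities do the heavy lifting in these derivations: the measurement-update identity (D.2), $\tilde P^{-1} = \bar P^{-1} + H^T R^{-1} H$, and the companion identity $\tilde P^{-1}(I - KH) = \bar P^{-1}$ used below. Both can be checked numerically; the following scalar sketch (with arbitrary illustrative numbers, not values from the paper) confirms them for a single Kalman update.

```python
# Scalar sanity check of the Kalman information-form identities:
#   (D.2): Ptilde^{-1} = Pbar^{-1} + H^T R^{-1} H
#          Ptilde^{-1} (I - K H) = Pbar^{-1}
# All quantities are scalars here; the values are arbitrary test data.
Pbar, H, R = 2.0, 1.0, 1.0            # prior covariance, observation map, noise variance

K = Pbar * H / (H * Pbar * H + R)      # Kalman gain K = Pbar H^T (H Pbar H^T + R)^{-1}
Ptilde = (1.0 - K * H) * Pbar          # updated covariance

lhs_d2 = 1.0 / Ptilde                  # Ptilde^{-1}
rhs_d2 = 1.0 / Pbar + H * (1.0 / R) * H
assert abs(lhs_d2 - rhs_d2) < 1e-12    # identity (D.2) holds

lhs_22 = (1.0 / Ptilde) * (1.0 - K * H)
assert abs(lhs_22 - 1.0 / Pbar) < 1e-12  # Ptilde^{-1}(I - KH) = Pbar^{-1} holds
```

With these numbers, $K = 2/3$, $\tilde P = 2/3$, and both sides of (D.2) equal $1.5$.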
Hence, according to the CDKF and (20), the estimation error $e_{k,i}$ satisfies

$$E\{e_{k,i}\} = E\{\hat x_{k,i} - x_k\} = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1} E\{\phi_{k,j} - x_k\} = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}(I - K_{k,j}H_{k,j})E\{\bar e_{k,j}\}. \quad (21)$$

Since $\tilde P_{k,i} = (I - K_{k,i}H_{k,i})\bar P_{k,i}$, one can get

$$\tilde P_{k,i}^{-1}(I - K_{k,i}H_{k,i}) = \bar P_{k,i}^{-1}. \quad (22)$$

Substituting (22) into (21), we have

$$E\{e_{k,i}\} = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\bar P_{k,j}^{-1} E\{\bar e_{k,j}\}. \quad (23)$$

Since $P_{k,i} = (\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1})^{-1}$, there is

$$P_{k,i} = \Big(\sum_{j\in N_i} w_{k,i,j}\big(\bar P_{k,j}^{-1} + H_{k,j}^T R_{k,j}^{-1} H_{k,j}\big)\Big)^{-1} \le \Big(\sum_{j\in N_i} w_{k,i,j}\bar P_{k,j}^{-1}\Big)^{-1}. \quad (24)$$

Applying (23), (24) and Lemma 2 in Battistelli and Chisci (2014) to the right-hand side of (19), one can obtain that

$$V_{k+1,i}(E\{\bar e_{k+1,i}\}) \le \check\beta\sum_{j\in N_i} w_{k,i,j} E\{\bar e_{k,j}\}^T \bar P_{k,j}^{-1} E\{\bar e_{k,j}\} = \check\beta\sum_{j\in N_i} w_{k,i,j} V_{k,j}(E\{\bar e_{k,j}\}). \quad (25)$$

Denote the weight matrix $\mathcal{A}_k = [w_{k,i,j}]$, $i,j = 1,2,\ldots,N$. Summing up (25) for $i = 1,2,\ldots,N$, there is

$$V_{k+1}(E\{\bar e_{k+1}\}) \le \check\beta\,\mathcal{A}_k V_k(E\{\bar e_k\}), \quad 0 < \check\beta < 1, \quad (26)$$

where $V_k(E\{\bar e_k\}) = \mathrm{col}\{V_{k,1}(E\{\bar e_{k,1}\}), \ldots, V_{k,N}(E\{\bar e_{k,N}\})\}$. According to Lemma 3, $\mathcal{A}_k$ is a row-stochastic matrix at each moment, thus the spectral radius of $\mathcal{A}_k$ is always 1. Due to $0 < \check\beta < 1$, $\lim_{k\to+\infty} E\{\bar e_{k+1,i}\} = 0$. Under the equation $E\{\bar e_{k+1,i}\} = A_k E\{e_{k,i}\}$ and the assumption that $\{A_k\}_{k=0}^{\infty}$ belongs to a nonsingular compact set, the conclusion of this theorem holds. □

Remark 5. The invertibility of $A_k$ is required in Theorem 5 because the Lyapunov-based convergence proof relies on $\bar P_{k,i}^{-1}$ (see (18)), whose iteration requires the invertibility of $A_k$. In contrast, the proof of the boundedness of the covariance matrix in Theorem 4 uses another method, which relaxes the invertibility of $A_k$.

5. Simulation studies

To demonstrate the aforementioned theoretical results, two simulation examples are studied in this section. The first example illustrates both the consistency and the boundedness of the covariance matrix, and compares the performance of adaptive CI weights with that of constant CI weights as well as with the algorithm in Table 1. In the second example, the proposed algorithm is compared with several other algorithms.

5.1. Performance evaluation

Consider the following second-order time-varying stochastic system with four sensors in the network:

$$x_{k+1} = \begin{pmatrix} 1.1 & 0.05 \\ 1.1 & 0.1\sin(\frac{k\pi}{6}) \end{pmatrix} x_k + \omega_k, \qquad y_{k,i} = H_{k,i}x_k + v_{k,i}, \quad i = 1,2,3,4,$$

where the observation matrices of the sensors are

$$H_{k,1} = \big(1+\sin(\tfrac{k\pi}{12}),\ 0\big), \quad H_{k,2} = (0,\ 0), \quad H_{k,3} = \big(-1,\ 1+\cos(\tfrac{k\pi}{12})\big), \quad H_{k,4} = (0,\ 0).$$

Here, it is assumed that the process noise covariance matrix is $Q_k = \mathrm{diag}\{0.5, 0.7\}$ and the whole measurement noise covariance matrix is $R_k = \mathrm{diag}\{0.5, 0.6, 0.4, 0.3\}$. The initial value of the state is generated by a Gaussian process with zero mean and covariance matrix $I_2$, and the initial estimation settings are $\hat x_{i,0} = 0$ and $P_{i,0} = I_2$, $\forall i = 1,2,3,4$. The sensor network's communication topology, assumed to be directed and strongly connected, is illustrated in Fig. 2. The weighted adjacency matrix $A = [a_{i,j}]$ is designed as $a_{i,j} = \frac{1}{|N_i|}$, $j \in N_i$, $i,j = 1,2,3,4$. We conduct the numerical simulation through a Monte Carlo experiment, in which 500 Monte Carlo trials are performed. The mean square error of the whole network is defined as

$$\mathrm{MSE}_k = \frac{1}{N}\sum_{i\in V}\frac{1}{500}\sum_{j=1}^{500}(\hat x_{k,i}^j - x_k^j)^T(\hat x_{k,i}^j - x_k^j),$$

where $\hat x_{k,i}^j$ is the state estimate of the $j$th trial of sensor $i$ at the $k$th moment.

To show the ability of the CDKF to cope with singular system matrices, the determinants of the time-varying system matrices are plotted in Fig. 3. The tracking graph for the system states is shown in Fig. 4. From Figs. 3 and 4, it can be seen that even when the system matrices are singular, the proposed CDKF effectively tracks each state element. Fig. 4 also shows the unbiasedness of the estimates, conforming with Theorem 1. The consistency of the CDKF is depicted in Fig. 5, where $\mathrm{MSE}_k$ is compared with $\mathrm{tr}(\sum_{i=1}^{4} P_{k,i})$. From this figure, it is found that the CDKF remains stable over the given period and the estimation error can be evaluated in real time. Besides, Fig. 5 also shows the comparison between the algorithm in Table 1 and the proposed CDKF in Table 2. Although the CDKF is sub-optimal, its estimation performance is very close to that of the networked Kalman filter with optimal gain parameter, which utilizes global information. To test the effectiveness of the SDP optimization algorithm for adaptive weights in the CI strategy, we compare the algorithms with adaptive CI weights and with constant CI weights. The comparison results are shown in Figs. 6 and 7. Fig. 6 shows that the results on the parameter matrix $P$ in the two algorithms conform with the aforementioned theoretical analysis in Theorem 3. Fig. 7 implies that the optimization algorithm for adaptive weights is quite effective in improving the estimation performance. The above results reveal that the proposed CDKF is an effective and flexible distributed state estimation algorithm.
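The deterministic part of the CDKF covariance recursion for this example can be sketched in plain Python. This is an illustrative sketch only: the numerical topology of Fig. 2 is not reproduced here, so a complete communication graph with uniform constant CI weights $a_{i,j} = 1/4$ is assumed, and only the covariance iteration (prediction, the information-form update (D.2), and CI fusion) is run, without simulating state trajectories or measurements.

```python
import math

# Minimal 2x2 matrix helpers (row-major nested lists), enough for this sketch.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def inv(A):  # inverse of a 2x2 matrix
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def trace(A):
    return A[0][0] + A[1][1]

# Data of the Section 5.1 example.
Q = [[0.5, 0.0], [0.0, 0.7]]                 # process noise covariance
R = [0.5, 0.6, 0.4, 0.3]                     # measurement noise variances
def A_k(k):                                  # singular whenever sin(k*pi/6) = 0.5
    return [[1.1, 0.05], [1.1, 0.1 * math.sin(k * math.pi / 6)]]
def H_k(k):                                  # sensors 2 and 4 observe nothing
    return [[1.0 + math.sin(k * math.pi / 12), 0.0], [0.0, 0.0],
            [-1.0, 1.0 + math.cos(k * math.pi / 12)], [0.0, 0.0]]

N, w = 4, 0.25                               # assumed complete graph, uniform CI weights
P = [[[1.0, 0.0], [0.0, 1.0]] for _ in range(N)]   # P_{0,i} = I_2

for k in range(200):
    A, H = A_k(k), H_k(k)
    info = []
    for i in range(N):
        # prediction, then measurement update in information form, identity (D.2)
        Pbar = add(mul(mul(A, P[i]), transpose(A)), Q)
        h, r = H[i], R[i]
        S = [[h[0] * h[0] / r, h[0] * h[1] / r],
             [h[1] * h[0] / r, h[1] * h[1] / r]]       # H^T R^{-1} H
        info.append(add(inv(Pbar), S))                  # Ptilde_{k,i}^{-1}
    # CI fusion: P_i = (sum_j w_{i,j} Ptilde_j^{-1})^{-1}
    fused = [[w * sum(info[j][a][b] for j in range(N)) for b in range(2)] for a in range(2)]
    P = [inv(fused) for _ in range(N)]

# Despite A_k being unstable and singular at some moments, P stays bounded (loose bound).
assert all(trace(Pi) < 1000.0 for Pi in P)
```

Even though the spectral radius of $A_k$ exceeds one and $A_k$ becomes singular whenever $\sin(k\pi/6) = 0.5$ (e.g., $k = 1, 5, 13, \ldots$), the fused covariances remain bounded under this assumed topology, in line with Theorem 4.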
5.2. Comparisons with other algorithms

In this subsection, numerical simulations are carried out to compare the proposed CDKF with several other algorithms, including the CKF, the Collaborative Scalar-gain Estimator (CSGF) (Khan & Jadbabaie, 2014) and Distributed State Estimation with Consensus on the Posteriors (DSEA-CP) (Battistelli & Chisci, 2014), which is an information form of the distributed Kalman filter.
Fig. 2. The topology of the sensor network.
Fig. 3. Determinants of the system matrix $A_k$: singular points.
Fig. 4. Network tracking for each state.
Fig. 5. Network estimation of the algorithm in Table 1 and the CDKF in Table 2.
Fig. 6. Minimal eigenvalues of $P_a - P_w$ for each sensor.
Fig. 7. Network MSE (NMSE) of the two kinds of weights.

The algorithm CKF is the optimal centralized algorithm. Here, we consider the simulation example studied in Khan and Jadbabaie (2014) on the CSGF. This example is based on a time-invariant system, i.e., for system (1), $A_k = A$, $Q_k = Q$, $H_{k,i} = H_i$ and $R_{k,i} = R_i$, $\forall i \in V$, $\forall k = 0,1,\ldots$. The topology of the sensor network, which consists of 20 sensors and is assumed to be undirected and connected, is illustrated in Fig. 8. The weighted adjacency matrix $A = [a_{i,j}]$ is designed as $a_{i,j} = \frac{1}{|N_i|}$, $j \in N_i$, $i,j = 1,\ldots,N$. The system matrices are assumed to be

$$Q = \mathrm{diag}\{1, 1\}, \quad R_i = 1,\ i \in V, \quad A = \begin{pmatrix} 1 & 0 \\ 0.05 & 1 \end{pmatrix}.$$

The observation matrices of these sensors are uniformly randomly selected from $\{(1,1), (0,0), (0,0), (1,0)\}$. The initial value of the state is generated by a Gaussian process with zero mean and covariance matrix $I_2$, and the initial estimation settings are $\hat x_{i,0} = 0$ and $P_{i,0} = I_2$, $\forall i \in V$. We conduct the numerical simulation through a Monte Carlo experiment, in which 500 Monte Carlo trials for the CKF, CDKF, CSGF and DSEA-CP are performed, respectively. The comparison of the estimation error dynamics of the four algorithms is shown in Fig. 9. From this figure, we see that all four algorithms are stable and that the estimation performance of the CDKF is better than that of the CSGF as well as DSEA-CP. The reason that the CKF, CDKF and DSEA-CP outperform the CSGF is the time-varying gain matrices in their measurement updates. Additionally, the estimation performance of the CDKF is the closest to that of the CKF.

Fig. 8. The topology of the sensor network with 20 sensors.
Fig. 9. The performance comparison of the filters.

6. Conclusion

This paper has investigated the distributed state estimation problem for a class of discrete time-varying systems. Since the gain parameter optimization in a networked Kalman filter requires a centralized framework, a sub-optimal distributed Kalman filter based on the CI fusion method was proposed. The consistency of the algorithm provides an effective means to evaluate the estimation error in real time. In order to improve the estimation performance at each moment, the design of adaptive CI weights was cast as an optimization problem, which can be solved with a convex SDP method. Additionally, it was proven that the adaptive CI weights give rise to a lower error covariance bound than the constant CI weights. For the proposed algorithm, the boundedness of the covariance matrix and the convergence have been analyzed based on the global observability condition and the strong connectivity of the directed network topology, which are general requirements for distributed state estimation. Additionally, the proposed algorithm has relaxed the nonsingularity requirement on the system matrix found in the main distributed filter designs under global observability. The simulation examples have shown the effectiveness of the proposed algorithm in the considered scenarios.

Appendix A. Proof of Theorem 1

According to the CDKF of Table 2, the estimation error is

$$e_{k,i} = \hat x_{k,i} - x_k = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}(\phi_{k,j} - x_k) = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}\tilde e_{k,j}, \quad (A.1)$$

where $\tilde e_{k,i}$ is the estimation error in the measurement update, which can be derived through

$$\tilde e_{k,i} = \phi_{k,i} - x_k = \bar x_{k,i} + K_{k,i}(y_{k,i} - H_{k,i}\bar x_{k,i}) - x_k = (I - K_{k,i}H_{k,i})\bar e_{k,i} + K_{k,i}v_{k,i}. \quad (A.2)$$

The prediction error $\bar e_{k,i}$ follows from

$$\bar e_{k,i} = \bar x_{k,i} - x_k = A_{k-1}e_{k-1,i} - \omega_{k-1}. \quad (A.3)$$

Thus, from (A.1)–(A.3), there is

$$e_{k,i} = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}\tilde e_{k,j} = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}\big((I - K_{k,j}H_{k,j})\bar e_{k,j} + K_{k,j}v_{k,j}\big) = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}(I - K_{k,j}H_{k,j})A_{k-1}e_{k-1,j} - P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}(I - K_{k,j}H_{k,j})\omega_{k-1} + P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}K_{k,j}v_{k,j}. \quad (A.4)$$

Thanks to the linearity of (A.4) and the Gaussianity of $e_{0,i}$, $\omega_{k-1}$ and $v_{k,i}$, $e_{k,i}$ is also Gaussian. Due to $E\{\omega_{k-1}\} = 0$ and $E\{v_{k,i}\} = 0$, from (A.4) one can obtain that

$$E\{e_{k,i}\} = P_{k,i}\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1}(I - K_{k,j}H_{k,j})A_{k-1}E\{e_{k-1,j}\}.$$

Under the initial condition of this algorithm, there is $E\{e_{0,i}\} = 0$, $\forall i \in V$. Therefore, it is straightforward to verify (3). □

Appendix B. Proof of Lemma 3

According to Lemma 2, (10) is equivalent to

$$\Delta_{k,i} > 0, \qquad M_{k,i} - \Delta_{k,i}^{-1} > 0.$$

Then we have $M_{k,i} > \Delta_{k,i}^{-1}$, and hence $\mathrm{tr}(M_{k,i}) > \mathrm{tr}(\Delta_{k,i}^{-1})$. Under this condition, the parameter set $\{w_{k,i,j}, m_{k,il}\}$ that minimizes $\mathrm{tr}(M_{k,i})$ will lead to the minimization of $\mathrm{tr}(\Delta_{k,i}^{-1})$, and vice versa. □
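The Schur-complement fact invoked here as Lemma 2 can be sanity-checked directly in the scalar case. The following is a hypothetical illustration (not the paper's code), with scalars standing in for $M_{k,i}$ and $\Delta_{k,i}$: for $D > 0$, the symmetric block matrix $\begin{pmatrix} M & 1 \\ 1 & D \end{pmatrix}$ is positive definite exactly when $M - D^{-1} > 0$.

```python
# Scalar sanity check of the Schur-complement equivalence behind Lemma 2:
# for D > 0, [[M, 1], [1, D]] is positive definite iff M - 1/D > 0.
def block_pd(M, D):
    # A symmetric 2x2 matrix [[M, 1], [1, D]] with D > 0 is positive
    # definite iff its determinant M*D - 1 is positive (then M > 0 follows).
    return D > 0 and (M * D - 1.0) > 0

D = 2.0
assert block_pd(0.6, D) == (0.6 - 1.0 / D > 0)   # both True:  0.6 > 0.5
assert block_pd(0.4, D) == (0.4 - 1.0 / D > 0)   # both False: 0.4 < 0.5
```

This is why minimizing $\mathrm{tr}(M_{k,i})$ subject to the LMI (10) also drives down $\mathrm{tr}(\Delta_{k,i}^{-1})$: the LMI enforces exactly $M_{k,i} > \Delta_{k,i}^{-1}$.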
Appendix C. Proof of Lemma 4

First, considering the objective function (9), we set a vector $z$ with the form in Table 3 to rewrite the objective function in the form of the SDP algorithm. After obtaining the optimized $z^*$, we can obtain $\{w_{k,i,j}\}$ through the equation in (2) of Table 3. Then, the LMI (10) is written as the third inequality constraint in Table 3, which consists of the three types of $F_j$ listed in the table. Finally, the first and the second inequality constraints in Table 3 correspond to the constraints $\sum_{j\in N_i} w_{k,i,j} = 1$ and $0 \le w_{k,i,j} \le 1$, $j \in N_i$, respectively. Therefore, we can obtain the general form of the SDP algorithm in Table 3. □

Appendix D. Proof of Theorem 3

Here we utilize induction to give the proof of this theorem. Firstly, at the $(k-1)$th moment, it is supposed that $P_{k-1,i|w} \le P_{k-1,i|a}$, $\forall i \in V$. Then, due to the prediction equation $\bar P_{k,i} = A_{k-1}P_{k-1,i}A_{k-1}^T + Q_{k-1}$, there is

$$\bar P_{k,i|w} \le \bar P_{k,i|a}. \quad (D.1)$$

Exploiting the matrix inversion formula on the measurement update equation of the CDKF, the following iteration holds:

$$\tilde P_{k,i}^{-1} = \bar P_{k,i}^{-1} + H_{k,i}^T R_{k,i}^{-1} H_{k,i}. \quad (D.2)$$

Then, from (D.1), we have $\tilde P_{k,i|w}^{-1} = \bar P_{k,i|w}^{-1} + H_{k,i}^T R_{k,i}^{-1}H_{k,i} \ge \bar P_{k,i|a}^{-1} + H_{k,i}^T R_{k,i}^{-1}H_{k,i} = \tilde P_{k,i|a}^{-1}$. Due to the equation in the local fusion process, $P_{k,i} = (\sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j}^{-1})^{-1}$, and the optimization condition for the adaptive CI weights in (10), there is

$$P_{k,i|w}^{-1} = \sum_{j\in N_i} w_{k,i,j}\tilde P_{k,j|w}^{-1} \ge \sum_{j\in N_i} a_{i,j}\tilde P_{k,j|w}^{-1} \ge \sum_{j\in N_i} a_{i,j}\tilde P_{k,j|a}^{-1} = P_{k,i|a}^{-1}.$$
Therefore, $P_{k,i|w} \le P_{k,i|a}$. The proof is finished with the initial conditions satisfying $P_{0,i|w} = P_{0,i|a}$. □

References

Battistelli, G., & Chisci, L. (2014). Kullback–Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability. Automatica, 50(3), 707–718.
Battistelli, G., & Chisci, L. (2016). Stability of consensus extended Kalman filter for distributed state estimation. Automatica, 68, 169–178.
Battistelli, G., Chisci, L., Mugnai, G., Farina, A., & Graziano, A. (2015). Consensus-based linear and nonlinear filtering. IEEE Transactions on Automatic Control, 60(5), 1410–1415.
Boem, F., Xu, Y., Fischione, C., & Parisini, T. (2015). A distributed pareto-optimal dynamic estimation method. In European control conference (pp. 3673–3680).
Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Cao, X., Chen, J., Yan, Z., & Sun, Y. (2008). Development of an integrated wireless sensor network micro-environmental monitoring system. ISA Transactions, 47(3), 247–255.
Carli, R., Chiuso, A., Schenato, L., & Zampieri, S. (2008). Distributed Kalman filtering based on consensus strategies. IEEE Journal on Selected Areas in Communications, 26(4), 622–633.
Cattivelli, F. S., & Sayed, A. H. (2010). Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Transactions on Automatic Control, 55(9), 2069–2084.
Das, S., & Moura, J. M. F. (2015). Distributed Kalman filtering with dynamic observations consensus. IEEE Transactions on Signal Processing, 63(17), 4458–4473.
Farina, M., & Carli, R. (2016). Partition-based distributed Kalman filter with plug and play features. IEEE Transactions on Control of Network Systems. http://dx.doi.org/10.1109/TCNS.2016.2633786.
Guo, Z., Shi, D., Johansson, K. H., & Shi, L. (2017). Optimal linear cyber-attack on remote state estimation. IEEE Transactions on Control of Network Systems, 4(1), 4–13.
He, X., Hu, C., Xue, W., & Fang, H. (2017). On event-based distributed Kalman filter with information matrix triggers. IFAC-PapersOnLine, 50(1), 14308–14313.
He, X., Xue, W., & Fang, H. (2016). Consistent distributed Kalman filter with adaptive matrix weights. In International conference on control, automation, robotics and vision.
Horn, R. A., & Johnson, C. R. (2012). Matrix analysis. Cambridge University Press.
Hu, J., Xie, L., & Zhang, C. (2012). Diffusion Kalman filtering based on covariance intersection. IEEE Transactions on Signal Processing, 60(2), 891–902.
Julier, S. J., & Uhlmann, J. K. (1997). A non-divergent estimation algorithm in the presence of unknown correlations. In Proceedings of the American control conference (pp. 2369–2373).
Khan, U. A., & Jadbabaie, A. (2014). Collaborative scalar-gain estimators for potentially unstable social dynamics with limited communication. Automatica, 50(7), 1909–1914.
Khan, U. A., Kar, S., Jadbabaie, A., & Moura, J. M. F. (2010). On connectivity, observability, and stability in distributed estimation. In IEEE conference on decision and control (pp. 6639–6644).
Khan, U. A., & Moura, J. M. F. (2008). Distributing the Kalman filter for large-scale systems. IEEE Transactions on Signal Processing, 56(10), 4919–4935.
Kumar, V. (2012). Computational and compressed sensing optimizations for information processing in sensor network. International Journal of Next-Generation Computing.
Li, W., Jia, Y., Du, J., & Meng, D. (2015). Diffusion Kalman filter for distributed estimation with intermittent observations. In American control conference (pp. 4455–4460).
Liu, Q., Wang, Z., He, X., & Zhou, D. (2015). Event-based distributed filtering with stochastic measurement fading. IEEE Transactions on Industrial Informatics, 11(6), 1643–1652.
Matei, I., & Baras, J. S. (2012). Consensus-based linear distributed filtering. Automatica, 48(8), 1776–1782.
Niehsen, W. (2002). Information fusion based on fast covariance intersection filtering. In International conference on information fusion (pp. 901–904).
Olfati-Saber, R. (2007). Distributed Kalman filtering for sensor networks. In Proceedings of the IEEE conference on decision and control (pp. 5492–5498).
Ren, X., Wu, J., Johansson, K. H., Shi, G., & Shi, L. (2018). Infinite horizon optimal transmission power control for remote state estimation over fading channels. IEEE Transactions on Automatic Control, 63(1), 85–100.
Schizas, I. D., Ribeiro, A., & Giannakis, G. B. (2008). Consensus in ad hoc WSNs with noisy links–part I: Distributed estimation of deterministic signals. IEEE Transactions on Signal Processing, 56(1), 350–364.
Seber, G. A. F. (2007). A matrix handbook for statisticians. Wiley-Interscience.
Solis, I., & Obraczka, K. (2007). In-network aggregation trade-offs for data collection in wireless sensor networks. International Journal of Sensor Networks, 1(3–4), 200–212.
Speranzon, A., Fischione, C., Johansson, K. H., & Sangiovanni-Vincentelli, A. (2008). A distributed minimum variance estimator for sensor networks. IEEE Journal on Selected Areas in Communications, 26(4), 609–621.
Sun, Y., Fu, M., Wang, B., Zhang, H., & Marelli, D. (2016). Dynamic state estimation for power networks using distributed MAP technique. Automatica, 73(11), 27–37.
Varga, R. S. (2009). Matrix iterative analysis. Springer Science & Business Media.
Wang, S., & Ren, W. (2017). On the convergence conditions of distributed dynamic state estimation using sensor networks: A unified framework. IEEE Transactions on Control Systems Technology, PP(99), 1–17.
Wu, G., Sun, L., Lee, K. Y., & Pan, L. (2017). Disturbance rejection control of a fuel cell power plant in a grid-connected system. Control Engineering Practice, 60, 183–192.
Xie, S., & Guo, L. (2015). Stability of distributed LMS under cooperative stochastic excitation. In Chinese control conference (pp. 7445–7450).
Yang, W., Chen, G., Wang, X., & Shi, L. (2014). Stochastic sensor activation for distributed state estimation over a sensor network. Automatica, 50(8), 2070–2076.
Yang, W., Yang, C., Shi, H., Shi, L., & Chen, G. (2017). Stochastic link activation for distributed filtering under sensor power constraint. Automatica, 75, 109–118.
Zhou, Z., Fang, H., & Hong, Y. (2013). Distributed estimation for moving target based on state-consensus strategy. IEEE Transactions on Automatic Control, 58(8), 2096–2101.
Xingkang He received his B.S. degree from the Hefei University of Technology. He is currently working toward the Ph.D. degree at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. He won the National Scholarship for doctoral students in 2017. He has served as a reviewer for several high-quality journals and conferences. His research interests include filtering theory, distributed Kalman filtering, event-based state estimation and multi-agent systems.
Wenchao Xue received the B.S. degree in applied mathematics from Nankai University, Tianjin, China, in 2007, and the Ph.D. degree in control theory from the Academy of Mathematics and Systems Science (AMSS), Chinese Academy of Sciences (CAS), Beijing, China, in 2012. He is currently an Assistant Professor with the LSC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences. His research interests include distributed filtering, and the control and estimation of nonlinear uncertain systems. He is a member of the IEEE Industrial Electronics Society and the IEEE Control Systems Society.
Hai-Tao Fang received his B.S. degree in probability and statistics from Peking University in 1990 and his M.S. degree in applied mathematics from Tsinghua University in 1993. He received the Ph.D. degree in applied mathematics from Peking University in 1996. From 1996 to 1998 he was a Postdoc at the Institute of Systems Science, Chinese Academy of Sciences (CAS), and he joined the Institute as an Assistant Professor in 1998. He is now with the Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, CAS, as a Full Professor. He currently serves as Deputy Editor-in-Chief of the Journal of Control Theory and Applications and Associate Editor of Acta Mathematicae Applicatae Sinica. His research interests include dynamical behavior analysis of networked systems and distributed system estimation, optimization and control.