Signal Processing 127 (2016) 12–23
Fusion estimation using measured outputs with random parameter matrices subject to random delays and packet dropouts

R. Caballero-Águila (Departamento de Estadística e I.O., Universidad de Jaén, Paraje Las Lagunillas s/n, 23071 Jaén, Spain), A. Hermoso-Carazo and J. Linares-Pérez (Departamento de Estadística e I.O., Universidad de Granada, Campus Fuentenueva s/n, 18071 Granada, Spain)
Article history: Received 20 October 2015; received in revised form 21 January 2016; accepted 22 February 2016; available online 3 March 2016.

Abstract
This paper investigates the centralized and distributed fusion estimation problems for discrete-time random signals from multi-sensor noisy measurements, perturbed by random parameter matrices, which are transmitted to local processors through different communication channel links. It is assumed that both one-step delays and packet dropouts can occur randomly during the data transmission, and different white sequences of Bernoulli random variables with known probabilities are introduced to describe the transmission delays and losses at each sensor. Using only covariance information, without requiring the evolution model of the signal process, a recursive algorithm for the centralized least-squares linear prediction and filtering estimators is derived by an innovation approach. Also, local least-squares linear estimators based on the measurements received by the processor of each sensor are obtained, and the distributed fusion method is then used to generate fusion predictors and filters as a matrix-weighted linear combination of the local estimators, using the mean squared error as optimality criterion. In order to compare the performance of the centralized and distributed fusion estimators, recursive formulas for the estimation error covariance matrices are also derived. A numerical example illustrates how some common network-induced uncertainties can be handled by the proposed observation model with random matrices.
Keywords: fusion estimation; covariance information; random parameter matrices; random delays; packet dropouts
1. Introduction

In recent years, the use of sensor networks has developed rapidly, since they usually provide more information than traditional single-sensor communication systems. Consequently, the estimation problem in sensor-network stochastic systems is becoming an important focus of research because of its countless practical applications, such as
* Corresponding author. Tel.: +34 953 21 29 26; fax: +34 953 21 20 34.
E-mail addresses: [email protected] (R. Caballero-Águila), [email protected] (A. Hermoso-Carazo), [email protected] (J. Linares-Pérez).
http://dx.doi.org/10.1016/j.sigpro.2016.02.014
0165-1684/© 2016 Elsevier B.V. All rights reserved.
localization, target tracking, fault detection, environment observation, habitat monitoring, animal tracking, and communications. Different fusion estimation algorithms have been proposed for conventional systems in which each sensor transmits its outputs to the fusion center over perfect connections (see e.g. [1–4] and references therein). However, in a networked environment, problems inevitably arise due to restrictions of the physical equipment or uncertainties in the external environment, which can dramatically degrade the quality of fusion estimators designed without considering these drawbacks. Random observation losses, multiplicative noise uncertainties, sensor gain degradation and missing measurements are
some of the random phenomena that motivate the design of new estimation algorithms. Furthermore, when the sensors send their measurements to the fusion center over imperfect communication networks, uncertainties such as random delays or packet dropouts occurring during transmission can spoil the estimation performance. Therefore, it is not surprising that the design of new fusion estimation algorithms for systems with one of the aforementioned uncertainties (see e.g. [5–10] and references therein), or even several of them simultaneously (see e.g. [11–14] and references therein), has become an active research topic of growing interest.

Systems describing situations with network-induced random phenomena are special cases of systems with random parameter measurement matrices; for example, the networked systems with stochastic sensor gain degradation considered in [7], the systems with random observation losses in [8], with missing measurements in [9], or with observation multiplicative noises in [10] can all be rewritten using random parameter measurement matrices. Also, the original system with random delays and packet dropouts in [15] is transformed into an equivalent stochastic parameterized system and, in many papers, e.g. [16] and [17], systems with two-step random delays are transformed into systems with random parameter matrices. The wide variety of real situations which can be described by systems with random parameter state transition and/or measurement matrices has encouraged growing interest in them and, as a consequence, a large number of results on the estimation problem in such systems have been obtained (see e.g. [18–23] and references therein). As can be observed from these references, many research efforts have been devoted to the centralized fusion estimation problem for systems with random parameter matrices and missing or randomly delayed measurements.
However, although the centralized algorithms provide optimal estimators based on the measurements of all the sensors, their computational cost becomes expensive as the number of sensors increases; moreover, they have other drawbacks, such as poor robustness, survivability and reliability. In contrast, the distributed fusion method has a lower computational burden and greater fault tolerance. In this paper, we address the centralized and distributed fusion estimation problems in networked systems with random parameter matrices, from measurements subject to random delays and packet dropouts during transmission. To the best of the authors' knowledge, the simultaneous consideration of both uncertainties has not yet been investigated in the framework of random parameter matrices and, therefore, it constitutes an interesting research challenge. More precisely, the current paper makes the following contributions: (1) Random measurement matrices are considered in the measurement outputs of the sensors, thus providing a unified framework to address some network-induced phenomena, such as missing measurements or sensor gain degradation. (2) Besides the above network-induced phenomena, simultaneous random one-step delays and packet dropouts with different rates are supposed to exist in the data transmissions from each sensor. Hence, the proposed observation model covers the possibility of simultaneous missing measurement outputs together with random delays and packet dropouts after transmission, a realistic assumption in networked applications where sensor uncertainties can degrade the outputs before transmission and the unreliable network can delay or lose them afterwards. (3) Our approach, based on covariance information, does not require the evolution model generating the signal process. (4) Unlike most existing papers on random parameter matrices, where only centralized fusion estimators are obtained, in this paper both the centralized and distributed estimation problems are addressed under the innovation approach, and recursive algorithms, computationally simple and suitable for online applications, are proposed. (5) The estimators are obtained without the need of augmenting the state; hence, the dimension of the designed estimators is the same as that of the original state, thus reducing the computational cost with respect to the augmentation method.

The rest of the paper is organized as follows. In Section 2, we present the measurement model to be considered and the assumptions under which the centralized and distributed estimation problems are addressed. In Section 3, the innovation approach is used to derive a recursive algorithm for the centralized least-squares linear prediction and filtering estimators. In Section 4, local least-squares linear prediction and filtering algorithms are derived, and the proposed distributed estimators are generated by a matrix-weighted linear combination of the local estimators using the mean squared error as optimality criterion. A simulation example is given in Section 5 to show the performance of the proposed estimators. Finally, some conclusions are drawn in Section 6.

Notation. The notation used throughout the paper is standard.
$\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the $n$-dimensional Euclidean space and the set of all $m\times n$ real matrices, respectively. For a matrix $A$, $A^T$ and $A^{-1}$ denote its transpose and inverse, respectively. The shorthand $\mathrm{Diag}(A_1,\ldots,A_m)$ stands for a block-diagonal matrix whose diagonal blocks are $A_1,\ldots,A_m$, and $\mathbf{1}_n=(1,\ldots,1)^T$ denotes the all-ones $n\times 1$ vector. $I$ and $0$ represent the identity and zero matrices of appropriate dimensions; if the dimensions of matrices are not explicitly stated, they are assumed to be compatible with the algebraic operations involved. The symbols $\otimes$ and $\circ$ represent the Kronecker and Hadamard products, respectively. $\delta_{k,s}$ denotes the Kronecker delta function, equal to one if $k=s$ and zero otherwise. Finally, for any function $G_{k,s}$ depending on the time instants $k$ and $s$, we write $G_k=G_{k,k}$ for simplicity; analogously, $K^{(i)}=K^{(ii)}$ is written for any function $K^{(ij)}$ depending on the sensors $i$ and $j$.
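As a quick illustration of the Kronecker and Hadamard products used throughout the paper (the matrices below are arbitrary toy values, not quantities from the paper):

```python
import numpy as np

# Toy matrices (assumed values, for illustration only).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
S = np.array([[0.5, 0.1],
              [0.1, 0.5]])

kron = np.kron(A, S)   # Kronecker product: 4x4 block matrix with blocks A[i,j]*S
had = A * S            # Hadamard product: entrywise multiplication, same shape as A

print(kron.shape)      # (4, 4)
print(had)             # [[0.5 0.2] [0.3 2. ]]
```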
2. Problem formulation

This paper aims to discuss the prediction and filtering estimation problems for discrete-time random signals from multi-sensor noisy measurements transmitted through different channels linked to local processors, using the centralized and distributed fusion estimation methods.
Each sensor is assumed to transmit its outputs to a local processor over an imperfect network, which yields random one-step delays and packet dropouts. Different sequences of Bernoulli random variables with known probabilities are introduced to describe the different delay and loss rates of the transmission from each sensor to its local processor. In the centralized fusion method, all measurement data of the local processors are transmitted to the fusion center over perfect connections, and the least-squares (LS) linear predictor and filter based on all the received measurements are obtained by recursive algorithms. In the distributed fusion method, each local processor produces LS linear estimators (predictor and filter) based on the measurements received from its own sensor; afterwards, these local estimators are transmitted to the fusion center, also over perfect connections, where the distributed fusion predictor and filter are generated as a matrix-weighted linear combination of the local LS linear estimators, using the mean squared error as optimality criterion. The centralized and distributed fusion structures are shown in Fig. 1.

Both centralized and distributed fusion estimators will be obtained under the assumption that the evolution model of the signal to be estimated is unknown and only information about its mean and covariance functions is available; this information is specified in the following assumption.

Assumption 1. The $n_x$-dimensional signal process $\{x_k;\,k\ge 1\}$ has zero mean and its autocovariance function is expressed in a separable form, $E[x_kx_s^T]=A_kB_s^T$, $s\le k$, where $A_k,B_s\in\mathbb{R}^{n_x\times M}$ are known matrices.

Remark 1. Although Assumption 1 might seem restrictive, it actually covers many practical situations. For example, when the state-space model $x_k=\Phi_{k-1}x_{k-1}+w_{k-1}$ is available, the covariance function can be expressed as $E[x_kx_s^T]=\Phi_{k,s}E[x_sx_s^T]$, $s\le k$, where $\Phi_{k,s}=\Phi_{k-1}\cdots\Phi_s$, and Assumption 1 is satisfied taking $A_k=\Phi_{k,0}$ and $B_s=E[x_sx_s^T]\big(\Phi_{s,0}^{-1}\big)^T$. Furthermore, Assumption 1 also covers situations where the system matrix in the state-space model is singular (see Section 5), so that the above factorization, based on $\Phi_{k,s}=\Phi_{k,0}\Phi_{s,0}^{-1}$, is not feasible. Also, processes with finite-dimensional, possibly time-variant, state-space models have semi-separable covariance functions, $E[x_kx_s^T]=\sum_{l=1}^{r}a_k^l b_s^{lT}$, $s\le k$ (see [27]), and this structure is a particular case of the assumed one, just taking $A_k=\big(a_k^1,a_k^2,\ldots,a_k^r\big)$ and $B_s=\big(b_s^1,b_s^2,\ldots,b_s^r\big)$. Consequently, the structural assumption on the signal autocovariance function covers both stationary and non-stationary signals. Note also that, although a state-space model can be generated from covariances, when only this kind of information is available it is preferable to address the estimation problem directly from the covariances, thus obviating the need of a previous identification of the state-space model.

2.1. Multi-sensor measurements with random parameter matrices

Consider $m$ sensors which provide measurements of the signal process according to the following model:
$$z_k^{(i)}=H_k^{(i)}x_k+v_k^{(i)},\quad k\ge 1,\quad i=1,\ldots,m, \qquad (1)$$
where $z_k^{(i)}\in\mathbb{R}^{n_z}$ is the measured output of the $i$th sensor at time $k$, which will be transmitted to a local processor over an unreliable network; $H_k^{(i)}$ is a random parameter matrix and $v_k^{(i)}$ is the measurement noise vector.

Remark 2. This observation model with random measurement matrices covers several network-induced phenomena that usually arise in sensor-network applications in which the measured outputs present uncertainties that cannot be described by the usual additive disturbances alone. For example, random observation losses [8], stochastic sensor gain degradation [7], multiplicative noises in the observation equations [10], missing measurements [9], or both multiplicative noises and missing measurements [24] are clearly network-induced phenomena that can be modeled by random measurement matrices. Consequently, the observation model (1) captures a wide variety of real situations related to multi-sensor systems.
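The separable-covariance structure of Assumption 1 is easy to verify numerically; the sketch below uses an assumed scalar state-space model (not the paper's example) to build a valid pair $(A_k, B_s)$:

```python
import numpy as np

# Sketch under an assumed scalar model x_{k+1} = phi*x_k + w_k (illustration only):
# for s <= k, E[x_k x_s] = phi^(k-s) * P_s with P_s = E[x_s^2], so Assumption 1
# holds with A_k = phi^k and B_s = P_s * phi^(-s).
phi, q = 0.8, 1.0          # assumed transition scalar and noise variance
K = 10
P = np.zeros(K + 1)        # P[k] = E[x_k^2], with Var[x_1] = 1 assumed
P[1] = 1.0
for k in range(1, K):
    P[k + 1] = phi**2 * P[k] + q

A = np.array([phi**k for k in range(1, K + 1)])            # A_k, k = 1..K
B = np.array([P[s] * phi**(-s) for s in range(1, K + 1)])  # B_s, s = 1..K

# Check E[x_k x_s] = A_k * B_s against the direct expression phi^(k-s) * P_s.
k, s = 7, 3
assert np.isclose(A[k - 1] * B[s - 1], phi**(k - s) * P[s])
```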
The following assumptions are required on the observation model (1).

Assumption 2. $\{H_k^{(i)};\,k\ge 1\}$, $i=1,\ldots,m$, are independent sequences of independent random parameter matrices with known means, $E[H_k^{(i)}]=\bar H_k^{(i)}$; the covariances $\mathrm{Cov}\big[h_{pq}^{(i)}(k),h_{p'q'}^{(i)}(k)\big]$, for $p,p'=1,\ldots,n_z$ and $q,q'=1,\ldots,n_x$, are also assumed to be known ($h_{pq}^{(i)}(k)$ denotes the $(p,q)$th entry of $H_k^{(i)}$).

Assumption 3. The measurement noises $\{v_k^{(i)};\,k\ge 1\}$, $i=1,\ldots,m$, are zero-mean white processes with known $E[v_k^{(i)}v_s^{(j)T}]=R_k^{(ij)}\delta_{k,s}$, for $i,j=1,\ldots,m$.
Fig. 1. Illustration of the centralized and distributed fusion estimation structures.
Note that the conservative hypothesis of independence between the noises of different sensors has been weakened here, since that assumption may be a limitation in many real-world problems; for example, when all the sensors operate in the same noisy environment the noises are usually correlated, and some sensors may even share the same measurement noise.
2.2. Observation model with random delays and packet dropouts

As indicated above, due to an unreliable communication medium or network congestion, random one-step delays and packet dropouts with different rates are supposed to exist in the data transmissions from the individual sensors to the local processors. Specifically, the following model is considered for $y_k^{(i)}$, the measurement determined at the $i$th local processor:
$$y_k^{(i)}=\gamma_{0,k}^{(i)}z_k^{(i)}+\gamma_{1,k}^{(i)}z_{k-1}^{(i)}+\gamma_{2,k}^{(i)}y_{k-1}^{(i)},\quad k\ge 2;\qquad y_1^{(i)}=z_1^{(i)},\quad i=1,\ldots,m, \qquad (2)$$
where $\{\gamma_{d,k}^{(i)};\,k\ge 2\}$, $d=0,1,2$, denote sequences of Bernoulli variables with $\gamma_{0,k}^{(i)}+\gamma_{1,k}^{(i)}+\gamma_{2,k}^{(i)}=1$.

From (2) it is clear that $\gamma_{0,k}^{(i)}=1$ means that $y_k^{(i)}=z_k^{(i)}$; that is, the local processor receives the data of the $i$th sensor at the time instant $k$. When $\gamma_{1,k}^{(i)}=1$, which means that $y_k^{(i)}=z_{k-1}^{(i)}$, the measurement received at time $k$ is one-step delayed. Finally, if $\gamma_{2,k}^{(i)}=1$, then $y_k^{(i)}=y_{k-1}^{(i)}$, due to the fact that the packet at time $k$ is lost and the latest packet previously received is used for estimation instead.

Remark 3. The observation model (2) has been recently proposed in [25] for single-sensor systems with missing measurements, featuring simultaneously one-step random delays and packet dropouts during data transmission. Therefore, in this paper, the results of [25] are generalized in two directions. On the one hand, random measurement matrices are considered, thus covering not only missing measurements but also, as already mentioned, other important network-induced random phenomena. On the other hand, the single-sensor case, or multiple sensors with the same delay and loss rates, in [25] is extended to the case of multiple sensors with different delay and loss rates.

Next, the hypotheses on the Bernoulli variables are presented.

Assumption 4. For $i=1,\ldots,m$ and $d=0,1,2$, the process $\{\gamma_{d,k}^{(i)};\,k\ge 2\}$ is a sequence of independent Bernoulli random variables with known probabilities $P[\gamma_{d,k}^{(i)}=1]=\bar\gamma_{d,k}^{(i)}$, $\forall k\ge 2$. Also, $\{\gamma_{d,k}^{(i)};\,k\ge 2\}$ is assumed to be independent of the sequences $\{\gamma_{d',k}^{(j)};\,k\ge 2\}$, $d'=0,1,2$, for any $j\ne i$.

From this assumption it is clear that, for $i,j=1,\ldots,m$ and $d,d'=0,1,2$, the correlation of the variables $\gamma_{d,k}^{(i)}$ and $\gamma_{d',s}^{(j)}$ is known, and it is given by:
$$E\big[\gamma_{d,k}^{(i)}\gamma_{d',s}^{(j)}\big]=\begin{cases}\bar\gamma_{d,k}^{(i)}\,\delta_{d,d'}, & i=j \text{ and } k=s,\\[2pt] \bar\gamma_{d,k}^{(i)}\,\bar\gamma_{d',s}^{(j)}, & i\ne j \text{ or } k\ne s.\end{cases} \qquad (3)$$

Finally, the following independence hypothesis is also assumed.

Assumption 5. For $i=1,\ldots,m$ and $d=0,1,2$, the signal $\{x_k;\,k\ge 1\}$ and the processes $\{H_k^{(i)};\,k\ge 1\}$, $\{v_k^{(i)};\,k\ge 1\}$ and $\{\gamma_{d,k}^{(i)};\,k\ge 2\}$ are mutually independent.

The distributed fusion estimation algorithm requires the computation of certain autocorrelation and cross-correlation matrices of the transmitted and received measurements. In the following proposition, expressions for these correlation matrices are derived.

Proposition. For $i,j=1,\ldots,m$, the correlation matrices $\Sigma_{k,s}^{z(ij)}\equiv E[z_k^{(i)}z_s^{(j)T}]$, $\Sigma_{k,s}^{zy(i)}\equiv E[z_k^{(i)}y_s^{(i)T}]$ and $\Sigma_{k,s}^{y(i)}\equiv E[y_k^{(i)}y_s^{(i)T}]$, for $s=k,\,k-1$, are given by:
$$\Sigma_{k,s}^{z(ij)}=\bar H_k^{(i)}A_kB_s^T\bar H_s^{(j)T}+E\big[\tilde H_k^{(i)}A_kB_k^T\tilde H_k^{(i)T}\big]\delta_{i,j}\delta_{k,s}+R_k^{(ij)}\delta_{k,s},\quad 1\le s\le k; \qquad (4)$$
$$\Sigma_{k,s}^{zy(i)}=\bar\gamma_{0,s}^{(i)}\Sigma_{k,s}^{z(i)}+\bar\gamma_{1,s}^{(i)}\Sigma_{k,s-1}^{z(i)}+\bar\gamma_{2,s}^{(i)}\Sigma_{k,s-1}^{zy(i)},\quad 2\le s\le k;\qquad \Sigma_{k,1}^{zy(i)}=\Sigma_{k,1}^{z(i)}; \qquad (5)$$
$$\Sigma_{k}^{y(i)}=\bar\gamma_{0,k}^{(i)}\Sigma_{k}^{z(i)}+\bar\gamma_{1,k}^{(i)}\Sigma_{k-1}^{z(i)}+\bar\gamma_{2,k}^{(i)}\Sigma_{k-1}^{y(i)},\ k\ge 2;\quad \Sigma_{1}^{y(i)}=\Sigma_{1}^{z(i)};\qquad \Sigma_{k,k-1}^{y(i)}=\bar\gamma_{0,k}^{(i)}\Sigma_{k,k-1}^{zy(i)}+\bar\gamma_{1,k}^{(i)}\Sigma_{k-1}^{zy(i)}+\bar\gamma_{2,k}^{(i)}\Sigma_{k-1}^{y(i)},\ k\ge 2. \qquad (6)$$

Proof. Based on the independence of $x_k$ and $\tilde H_k^{(i)}=H_k^{(i)}-\bar H_k^{(i)}$, the properties of the conditional expectation and Assumption 1 lead to:
$$E\big[\tilde H_k^{(i)}x_kx_k^T\tilde H_k^{(i)T}\big]=E\big[\tilde H_k^{(i)}E[x_kx_k^T]\tilde H_k^{(i)T}\big]=E\big[\tilde H_k^{(i)}A_kB_k^T\tilde H_k^{(i)T}\big],$$
where the $(p,q)$th entries of these matrices are given by:
$$\Big(E\big[\tilde H_k^{(i)}A_kB_k^T\tilde H_k^{(i)T}\big]\Big)_{pq}=\sum_{a=1}^{n_x}\sum_{b=1}^{n_x}\mathrm{Cov}\big[h_{pa}^{(i)}(k),h_{qb}^{(i)}(k)\big]\big(A_kB_k^T\big)_{ab},\quad p,q=1,\ldots,n_z.$$
So, from (1) and (2) and taking into account Assumptions 1–5, expressions (4)–(6) are easily derived. □
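A quick Monte Carlo sanity check of the recursions in the proposition is possible; the sketch below simulates model (2) for one sensor, with the simplifying (and non-essential) assumption that the outputs $z_k$ are i.i.d. scalars, so that (6) collapses to a one-line variance recursion (rates and variances are assumed toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

g0, g1, g2 = 0.6, 0.25, 0.15   # assumed Bernoulli probabilities, summing to 1
sz = 2.0                       # Var[z_k], assumed
K, N = 6, 200000               # horizon and number of Monte Carlo runs

z = rng.normal(0.0, np.sqrt(sz), size=(N, K + 1))   # i.i.d. outputs z_k, k = 1..K
y = np.zeros((N, K + 1))
y[:, 1] = z[:, 1]                                   # y_1 = z_1, as in (2)
for k in range(2, K + 1):
    u = rng.random(N)                               # multinomial choice of gamma_{d,k}
    d0, d1 = u < g0, (u >= g0) & (u < g0 + g1)
    y[:, k] = np.where(d0, z[:, k], np.where(d1, z[:, k - 1], y[:, k - 1]))

# Recursion (6) with white z: Sigma^y_k = g0*sz + g1*sz + g2*Sigma^y_{k-1}.
Sy = sz
for k in range(2, K + 1):
    Sy = g0 * sz + g1 * sz + g2 * Sy

print(Sy, y[:, K].var())   # theoretical vs empirical variance of y_K
```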
3. Centralized fusion estimators

In this section, using an innovation approach, a recursive algorithm is obtained for the LS linear centralized fusion predictor and filter of the signal $x_k$ based on the measurements $\{y_1^{(i)},\ldots,y_L^{(i)};\,i=1,\ldots,m\}$, $L\le k$.

3.1. Stacked observation model

To address the estimation problem through the centralized fusion method, the observation equations (1) and (2) are combined, yielding the following observation model:
$$Z_k=\mathcal{H}_kx_k+V_k,\quad k\ge 1;\qquad Y_k=\Gamma_{0,k}Z_k+\Gamma_{1,k}Z_{k-1}+\Gamma_{2,k}Y_{k-1},\quad k\ge 2;\qquad Y_1=Z_1, \qquad (7)$$
where $Z_k=\big(z_k^{(1)T},\ldots,z_k^{(m)T}\big)^T$, $\mathcal{H}_k=\big(H_k^{(1)T},\ldots,H_k^{(m)T}\big)^T$, $V_k=\big(v_k^{(1)T},\ldots,v_k^{(m)T}\big)^T$, $Y_k=\big(y_k^{(1)T},\ldots,y_k^{(m)T}\big)^T$ and $\Gamma_{d,k}=\mathrm{Diag}\big(\gamma_{d,k}^{(1)},\ldots,\gamma_{d,k}^{(m)}\big)\otimes I$, $d=0,1,2$.

Hence, the problem is to obtain the LS linear estimator of the signal $x_k$ based on the observations $\{Y_1,\ldots,Y_L\}$, $L\le k$.
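The stacking in (7) is purely mechanical; a minimal sketch, with assumed toy dimensions, of how the per-sensor quantities are assembled:

```python
import numpy as np

# Assumed toy dimensions: m = 2 sensors, nx = 2 signal, scalar measurements.
m, nx, nz = 2, 2, 1
rng = np.random.default_rng(1)

x = rng.normal(size=nx)
H = [rng.normal(size=(nz, nx)) for _ in range(m)]   # per-sensor H_k^(i)
v = [rng.normal(size=nz) for _ in range(m)]         # per-sensor noises v_k^(i)

Hk = np.vstack(H)                                   # stacked H_k
Vk = np.concatenate(v)                              # stacked V_k
Zk = Hk @ x + Vk                                    # Z_k = H_k x_k + V_k

gamma0 = np.array([1.0, 0.0])                       # realized gamma_{0,k}^(i), assumed
Gamma0 = np.kron(np.diag(gamma0), np.eye(nz))       # Gamma_{0,k} = Diag(...) (x) I
print(Zk.shape, Gamma0.shape)                       # (2,) (2, 2)
```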
This problem requires the statistical properties of the processes involved in the observation model (7), which are easily inferred from Assumptions 1–5:

(a) $\{\mathcal{H}_k;\,k\ge 1\}$ is a sequence of independent random parameter matrices with known means, $\bar{\mathcal{H}}_k\equiv E[\mathcal{H}_k]=\big(\bar H_k^{(1)T},\ldots,\bar H_k^{(m)T}\big)^T$, and it satisfies
$$E\big[\tilde{\mathcal{H}}_kx_kx_k^T\tilde{\mathcal{H}}_k^T\big]=\mathrm{Diag}\Big(E\big[\tilde H_k^{(1)}A_kB_k^T\tilde H_k^{(1)T}\big],\ldots,E\big[\tilde H_k^{(m)}A_kB_k^T\tilde H_k^{(m)T}\big]\Big),$$
where $\tilde{\mathcal{H}}_k=\mathcal{H}_k-\bar{\mathcal{H}}_k$.

(b) $\{V_k;\,k\ge 1\}$ is a zero-mean white process with $E[V_kV_s^T]=R_k\delta_{k,s}$, where $R_k=\big(R_k^{(ij)}\big)_{i,j=1,\ldots,m}$.

(c) $\{\Gamma_{d,k};\,k\ge 2\}$, $d=0,1,2$, are sequences of independent random matrices with known means, $\bar\Gamma_{d,k}\equiv E[\Gamma_{d,k}]=\mathrm{Diag}\big(\bar\gamma_{d,k}^{(1)},\ldots,\bar\gamma_{d,k}^{(m)}\big)\otimes I$, and, denoting $\gamma_{d,k}=\big(\gamma_{d,k}^{(1)},\ldots,\gamma_{d,k}^{(m)}\big)^T\otimes\mathbf{1}_{n_z}$, the correlations $\Sigma_{d,d',k}^{\gamma}\equiv E[\gamma_{d,k}\gamma_{d',k}^T]$, for $d,d'=0,1,2$, are also known matrices, whose entries are given by (3). Moreover, for any deterministic matrix $S$, the Hadamard product properties guarantee that $E[\Gamma_{d,k}S\Gamma_{d',k}]=\Sigma_{d,d',k}^{\gamma}\circ S$.

(d) For $d=0,1,2$, the signal $\{x_k;\,k\ge 1\}$ and the processes $\{\mathcal{H}_k;\,k\ge 1\}$, $\{V_k;\,k\ge 1\}$ and $\{\Gamma_{d,k};\,k\ge 2\}$ are mutually independent.

Remark 4. From the above properties, it is clear that $\{Z_k;\,k\ge 1\}$ and $\{Y_k;\,k\ge 1\}$ are zero-mean processes whose correlation functions, $\Sigma_{k,s}^{Z}\equiv E[Z_kZ_s^T]$ and $\Sigma_{k,s}^{ZY}\equiv E[Z_kY_s^T]$, for $s\le k$, and $\Sigma_{k,s}^{Y}\equiv E[Y_kY_s^T]$, for $s=k,\,k-1$, are obtained by the following expressions:
$$\Sigma_{k,s}^{Z}=\bar{\mathcal{H}}_kA_kB_s^T\bar{\mathcal{H}}_s^T+E\big[\tilde{\mathcal{H}}_kA_kB_k^T\tilde{\mathcal{H}}_k^T\big]\delta_{k,s}+R_k\delta_{k,s},\quad 1\le s\le k;$$
$$\Sigma_{k,s}^{ZY}=\Sigma_{k,s}^{Z}\bar\Gamma_{0,s}+\Sigma_{k,s-1}^{Z}\bar\Gamma_{1,s}+\Sigma_{k,s-1}^{ZY}\bar\Gamma_{2,s},\quad 2\le s\le k;\qquad \Sigma_{k,1}^{ZY}=\Sigma_{k,1}^{Z};$$
$$\Sigma_{k}^{Y}=\Sigma_{0,0,k}^{\gamma}\circ\Sigma_{k}^{Z}+\Sigma_{1,1,k}^{\gamma}\circ\Sigma_{k-1}^{Z}+\Sigma_{2,2,k}^{\gamma}\circ\Sigma_{k-1}^{Y}+\Sigma_{0,1,k}^{\gamma}\circ\Sigma_{k,k-1}^{Z}+\Sigma_{1,0,k}^{\gamma}\circ\Sigma_{k-1,k}^{Z}+\Sigma_{0,2,k}^{\gamma}\circ\Sigma_{k,k-1}^{ZY}+\Sigma_{2,0,k}^{\gamma}\circ\Sigma_{k-1,k}^{YZ}+\Sigma_{1,2,k}^{\gamma}\circ\Sigma_{k-1}^{ZY}+\Sigma_{2,1,k}^{\gamma}\circ\Sigma_{k-1}^{YZ},\quad k\ge 2;\qquad \Sigma_{1}^{Y}=\Sigma_{1}^{Z};$$
$$\Sigma_{k,k-1}^{Y}=\bar\Gamma_{0,k}\Sigma_{k,k-1}^{ZY}+\bar\Gamma_{1,k}\Sigma_{k-1}^{ZY}+\bar\Gamma_{2,k}\Sigma_{k-1}^{Y},\quad k\ge 2. \qquad (8)$$
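The Hadamard identity in property (c), $E[\Gamma_{d,k}S\Gamma_{d',k}]=\Sigma_{d,d',k}^{\gamma}\circ S$, is the key device behind (8); it can be checked by simulation for the case $d=d'$ (sensor rates and $S$ below are assumed toy values):

```python
import numpy as np

rng = np.random.default_rng(2)

p0 = np.array([0.7, 0.4])          # assumed P[gamma_0^(i) = 1] for m = 2 sensors
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])         # arbitrary deterministic matrix
N = 100000

# Draw N diagonal Bernoulli matrices G = Diag(g1, g2) and average G S G.
g = (rng.random((N, 2)) < p0).astype(float)
acc = np.einsum('ni,ij,nj->ij', g, S, g) / N

# For d = d': E[g_i g_j] = p_i p_j off-diagonal, and p_i on the diagonal.
Sigma = np.outer(p0, p0)
np.fill_diagonal(Sigma, p0)
print(np.max(np.abs(acc - Sigma * S)))   # small Monte Carlo error
```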
3.2. Centralized prediction and filtering recursive algorithm

The following theorem presents a recursive algorithm for the optimal LS linear centralized fusion estimators $\hat x_{k/L}^{(C)}$, $L\le k$, of the signal $x_k$ based on the observations $\{Y_1,\ldots,Y_L\}$ given by (7).

Theorem 1. The centralized predictor and filter, $\hat x_{k/L}^{(C)}$, $L\le k$, are obtained by
$$\hat x_{k/L}^{(C)}=A_kO_L,\quad L\le k, \qquad (9)$$
where the vectors $O_L$ are recursively calculated from
$$O_L=O_{L-1}+J_L\Pi_L^{-1}\mu_L,\quad L\ge 1;\qquad O_0=0. \qquad (10)$$
The matrices $J_L$ are given by
$$J_L=G_{B_L}^T-r_{L-1}G_{A_L}^T-J_{L-1}\Pi_{L-1}^{-1}\Upsilon_{L-1}^T,\quad L\ge 2;\qquad J_1=G_{B_1}^T, \qquad (11)$$
with $\Upsilon_L=\bar\Gamma_{1,L+1}R_L\bar\Gamma_{0,L}$, $L\ge 1$, and $r_L\equiv E[O_LO_L^T]$ recursively obtained from
$$r_L=r_{L-1}+J_L\Pi_L^{-1}J_L^T,\quad L\ge 1;\qquad r_0=0. \qquad (12)$$
The innovation, $\mu_L$, satisfies
$$\mu_L=Y_L-G_{A_L}O_{L-1}-\Upsilon_{L-1}\Pi_{L-1}^{-1}\mu_{L-1}-\bar\Gamma_{2,L}Y_{L-1},\quad L\ge 2;\qquad \mu_1=Y_1, \qquad (13)$$
and the innovation covariance matrix, $\Pi_L$, is given by
$$\Pi_L=\Sigma_L^{Y}-\bar\Gamma_{2,L}\Sigma_{L-1,L}^{Y}-\Sigma_{L,L-1}^{Y}\bar\Gamma_{2,L}+\bar\Gamma_{2,L}\Sigma_{L-1}^{Y}\bar\Gamma_{2,L}-\big[\big(G_{B_L}-J_L^T\big)G_{A_L}^T+\big(\Upsilon_{L-1}+G_{A_L}J_{L-1}\big)\Pi_{L-1}^{-1}\Upsilon_{L-1}^T\big],\quad L\ge 2;\qquad \Pi_1=\Sigma_1^{Y}. \qquad (14)$$
The matrices $\Sigma_L^{Y}$ and $\Sigma_{L,L-1}^{Y}$ are given in (8), and the matrices $G_{A_L}$ and $G_{B_L}$ are defined by
$$G_{\Psi_L}=\bar\Gamma_{0,L}\bar{\mathcal{H}}_L\Psi_L+\bar\Gamma_{1,L}\bar{\mathcal{H}}_{L-1}\Psi_{L-1},\quad L\ge 2;\qquad G_{\Psi_1}=\bar{\mathcal{H}}_1\Psi_1,\quad \Psi=A,B. \qquad (15)$$

Proof. See Appendix A. □

Remark 5. The performance of the LS linear estimators $\hat x_{k/L}^{(C)}$, $L\le k$, is measured by the error covariance matrices $P_{k/L}^{(C)}=E[x_kx_k^T]-E\big[\hat x_{k/L}^{(C)}\hat x_{k/L}^{(C)T}\big]$, whose computation, although not included in Theorem 1, is immediate from Assumption 1 and (9):
$$P_{k/L}^{(C)}=A_k\big(B_k-A_kr_L\big)^T,\quad L\le k.$$
Note that these matrices do not depend on the current set of observations, but only on the matrices $A_k$ and $B_k$, which are known, and on the matrices $r_L$, which are recursively calculated; hence, the prediction and filtering error covariance matrices provide a measure of the estimator performance even before any observed data are available.

4. Distributed fusion estimators

In this section we obtain prediction and filtering estimators using the distributed fusion method, which is performed in two steps. In the first step, for each $i=1,\ldots,m$, local LS linear estimators, $\hat x_{k/L}^{(i)}$, $L\le k$, of the signal $x_k$, based on the measurements $\{y_1^{(i)},\ldots,y_L^{(i)}\}$, are obtained by a recursive algorithm. In the second step, distributed fusion estimators, $\hat x_{k/L}^{(D)}$, $L\le k$, are generated as a matrix-weighted linear combination of the corresponding local estimators, $\hat x_{k/L}^{(i)}$, $i=1,\ldots,m$, using the mean squared error as optimality criterion.

4.1. Local LS linear prediction and filtering recursive algorithm

Theorem 2. For $i=1,\ldots,m$, the local LS linear prediction and filtering estimators, $\hat x_{k/L}^{(i)}$, $L\le k$, are obtained by
$$\hat x_{k/L}^{(i)}=A_kO_L^{(i)},\quad L\le k, \qquad (16)$$
where
$$O_L^{(i)}=O_{L-1}^{(i)}+J_L^{(i)}\Pi_L^{(i)-1}\mu_L^{(i)},\quad L\ge 1;\qquad O_0^{(i)}=0, \qquad (17)$$
with
$$J_L^{(i)}=G_{B_L}^{(i)T}-r_{L-1}^{(i)}G_{A_L}^{(i)T}-J_{L-1}^{(i)}\Pi_{L-1}^{(i)-1}\Upsilon_{L-1}^{(i)T},\quad L\ge 2;\qquad J_1^{(i)}=G_{B_1}^{(i)T}, \qquad (18)$$
where $\Upsilon_L^{(i)}=\bar\gamma_{1,L+1}^{(i)}\bar\gamma_{0,L}^{(i)}R_L^{(i)}$, $L\ge 1$, and $r_L^{(i)}\equiv E[O_L^{(i)}O_L^{(i)T}]$ is obtained from
$$r_L^{(i)}=r_{L-1}^{(i)}+J_L^{(i)}\Pi_L^{(i)-1}J_L^{(i)T},\quad L\ge 1;\qquad r_0^{(i)}=0. \qquad (19)$$
The innovation, $\mu_L^{(i)}$, is calculated by
$$\mu_L^{(i)}=y_L^{(i)}-G_{A_L}^{(i)}O_{L-1}^{(i)}-\Upsilon_{L-1}^{(i)}\Pi_{L-1}^{(i)-1}\mu_{L-1}^{(i)}-\bar\gamma_{2,L}^{(i)}y_{L-1}^{(i)},\quad L\ge 2;\qquad \mu_1^{(i)}=y_1^{(i)}, \qquad (20)$$
and its covariance matrix, $\Pi_L^{(i)}$, is given by
$$\Pi_L^{(i)}=\Sigma_L^{y(i)}-\bar\gamma_{2,L}^{(i)}\big(\Sigma_{L-1,L}^{y(i)}+\Sigma_{L,L-1}^{y(i)}\big)+\big(\bar\gamma_{2,L}^{(i)}\big)^2\Sigma_{L-1}^{y(i)}-\big[\big(G_{B_L}^{(i)}-J_L^{(i)T}\big)G_{A_L}^{(i)T}+\big(\Upsilon_{L-1}^{(i)}+G_{A_L}^{(i)}J_{L-1}^{(i)}\big)\Pi_{L-1}^{(i)-1}\Upsilon_{L-1}^{(i)T}\big],\quad L\ge 2;\qquad \Pi_1^{(i)}=\Sigma_1^{y(i)}. \qquad (21)$$
The matrices $\Sigma_L^{y(i)}$ and $\Sigma_{L,L-1}^{y(i)}$ are given in (6), and the matrices $G_{\Psi_L}^{(i)}$, for $\Psi=A,B$, are defined by
$$G_{\Psi_L}^{(i)}=\bar\gamma_{0,L}^{(i)}\bar H_L^{(i)}\Psi_L+\bar\gamma_{1,L}^{(i)}\bar H_{L-1}^{(i)}\Psi_{L-1},\quad L\ge 2;\qquad G_{\Psi_1}^{(i)}=\bar H_1^{(i)}\Psi_1. \qquad (22)$$
The local prediction and filtering error covariance matrices, $P_{k/L}^{(i)}$, $L\le k$, are obtained by
$$P_{k/L}^{(i)}=A_k\big(B_k-A_kr_L^{(i)}\big)^T,\quad L\le k. \qquad (23)$$

Proof. The derivation of this theorem is analogous to that of Theorem 1, so the details are omitted here. □
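The recursive structure shared by Theorems 1 and 2 is compact: each step updates $O_L$ and $r_L$ and then reads off the estimator and its error covariance. A structural sketch with assumed scalar toy inputs (the gain $J_L$, innovation $\mu_L$ and its covariance $\Pi_L$ would come from (18), (20) and (21); this is a sketch of the update order, not a validated implementation of the full algorithm):

```python
# Structural sketch of one step of the local recursion in Theorem 2,
# with assumed scalar quantities (illustration only).
def local_step(O_prev, r_prev, J, Pi, mu, A_k, B_k):
    """One iteration of (17), (19) and the read-offs (16), (23)."""
    O = O_prev + J * (mu / Pi)      # (17): O_L = O_{L-1} + J_L Pi_L^{-1} mu_L
    r = r_prev + J * (J / Pi)       # (19): r_L = r_{L-1} + J_L Pi_L^{-1} J_L^T
    x_hat = A_k * O                 # (16): local estimator
    P = A_k * (B_k - A_k * r)       # (23): local error covariance
    return O, r, x_hat, P

# Assumed toy values for J, Pi, mu, A_k, B_k:
O, r, x_hat, P = local_step(0.0, 0.0, J=0.5, Pi=2.0, mu=1.0, A_k=1.0, B_k=1.0)
print(x_hat, P)   # 0.25 0.875
```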
1 T T b b b X X E½ X Copt ¼ E x : ð24Þ k k=L k=L k=L k;L Taking into account that, from the orthogonal projection
h i bT bðmÞ bðmÞT , it is bð1Þ bð1ÞT lemma (OPL), E xk X k=L ¼ E½x k=L x k=L ; …; E x k=L x k=L
17
The following theorem provides the proposed disðDÞ tributed fusion estimators, b x k=L , and their error covariance ðDÞ matrices, P k=L . The notations in this theorem are those of Theorem 2. ð1ÞT ðmÞT T b k=L ¼ b Theorem 3. Let X x k=L ; …; b x k=L be the vector formed by the local LS estimators calculated in Theorem 2. Then, the ðDÞ distributed fusion estimators b x k=L , L r k, are given by
1 b ðDÞ b k=L ; L r k; b X ð25Þ x k=L ¼ Ξ k=L Σ Xk=L ðijÞ b with Σ Xk=L ¼ Σbxk=L
i;j ¼ 1;…;m
ð1Þ
ðmÞ and Ξ k=L ¼ Σbxk=L ; …; Σbxk=L ,
h i ðijÞ bðjÞT bðiÞ where the cross-correlation matrices Σbxk=L ¼ E x k=L x k=L , i; j ¼ 1; …; m, are given by ðijÞ
T Σbxk=L ¼ Ak r ðijÞ L Ak ;
with r LðijÞ
r LðijÞ
L r k;
ðjÞT ¼ E½OðiÞ L OL
ð26Þ
satisfying
ðijÞ ðjÞ 1 ðjÞT ¼ r ðijÞ JL L 1 þJ L 1;L Π L
ðiÞ 1 ðjiÞT þ J ðiÞ JL ; L ΠL
L Z1:
ð27Þ
ðiÞ ðjÞT The matrices J ðijÞ L ¼ E½OL μL are given by ðijÞ ðiÞ ðiÞ 1 ðijÞ J ðijÞ ΠL ; L ¼ J L 1;L þJ L Π L
L Z 1;
ð28Þ
ðiÞ ðjÞT and J ðijÞ L 1;L ¼ E½OL 1 μL are calculated by h i ðiÞ ðijÞ ðjÞT ðiÞ ðiÞ 1 ðjiÞT ðijÞ ðjÞ 1 ðjÞ J ðijÞ L 1;L ¼ r L 1 r L 1 GAL þ J L 1 Π L 1 Υ L 1 J L 1 Π L 1 Υ L 1 ;
L Z 2; J ðijÞ 0;1
¼ 0:
ð29Þ
ðjÞT The innovation cross-correlation matrices, Π LðijÞ ¼ E½μðiÞ L μL , are given by ðijÞ ðiÞ ðjÞT ðjÞ ðijÞ ðijÞ ðjÞ 1 ðjÞT ðjÞT Π ðijÞ L ¼ ΘL GAL GBL J L þ J L 1;L Υ L 1 Π L 1 J L 1 GAL ðiÞ ðiÞ 1 ðijÞ L Z 2; þ Υ ðjÞ L 1 Υ L 1 Π L 1 Π L 1;L ; z Π ðijÞ 1 ¼ Σ1 ; ðijÞ
ð30Þ
where ðiÞ ðjÞ z ðiÞ ðjÞ z ðiÞ ðjÞ z ΘðijÞ L ¼ γ 0;L γ 0;L Σ L þγ 1;L γ 1;L Σ L 1 þ γ 0;L γ 1;L Σ L;L 1 ðijÞ
ðijÞ
ðjÞ z þ γ ðiÞ 1;L γ 0;L Σ L 1;L ; ðijÞ
and
Π ðijÞ L 1;L
¼
ðjÞT E½μðiÞ L 1 μL
ðijÞ
L Z 2;
ð31Þ
are calculated by
T ðiÞ ðjiÞ ðijÞ ðjÞ 1 ðjÞ Π ðijÞ GAðjÞT þ Υ LðjiÞT L 1;L ¼ J L 1 J L 1 1 ΠL 1ΠL 1 Υ L 1; L
L Z2:
ð32Þ The matrices
Υ ðijÞ L
ðjÞ ðijÞ ¼ γ ðiÞ 1;L þ 1 γ 0;L RL ;
L Z1; and
ðijÞ Σ zs;s0 ,
0
s; s ¼ L 1;
L, are given in (4). , L r k, of the distributed The error covariance matrices, P ðDÞ k=L fusion estimators are computed by
1 b P ðDÞ ¼ Ak BTk Ξ k=L Σ Xk=L Ξ Tk=L ; L rk: ð33Þ k=L
clear that to obtain Copt , which is needed to design the disk;L bðDÞ tributed fusion estimators, x k=L , L r k, it is necessary to cal-
Proof. See Appendix B.
culate the cross-correlation matrices between any two local
Remark 6. The proposed distributed LS linear fusion estimator given by (25) requires the computation of an
ðiÞ ðjÞT estimators, E½b x k=L b x k=L , i; j ¼ 1; …; m:
18
R. Caballero-Águila et al. / Signal Processing 127 (2016) 12–23
nx m nx m inverse matrix, with nx being the dimension of the signal and m the number of sensors; consequently, this fusion method has the computational order of magnitude O½ðnx mÞ3 , which is clearly lower than that of the distributed fusion estimators based on the signal augmentation. This computational complexity reduction makes the proposed distributed fusion method be superior to that based on an augmentation approach.
Remark 7. The main difficulties caused by the new measurement model considered in this paper are associated with the following two features:

○ The random parameter matrices considered in the measured outputs complicate the derivation of the measurement correlation matrices and make it necessary to apply some statistical properties of the conditional expectation (see Proposition 1 in Section 2).
○ The consideration of simultaneous random one-step delays and packet dropouts adds some extra difficulties to obtaining simple expressions for the local innovation covariance and cross-covariance matrices, such as those given in (21) and (30), respectively.

It should be noted that the expression of the innovation covariance matrix is the fundamental difference between the local recursive algorithm in this paper and that proposed in [23]; specifically, the innovation covariance matrix in [23] depends on the matrices F_k, which are recursively obtained, while in this paper a new, simpler expression for this matrix is derived using (35) for the local innovation.

4.3. Computational procedure

For an easier understanding, the computational steps of the distributed fusion estimators proposed in Theorems 2 and 3 are now summarized as follows:

(I) Previous matrices: The matrices E[H̃_L^{(i)} A_L B_L^T H̃_L^{(i)T}], necessary to compute Σ_{z;L}^{(i)}, are obtained as indicated in Section 2.3. The correlation matrices Σ_{z;L,s}^{(ij)} and Σ_{zy;L,s}^{(i)}, 1 ≤ s ≤ L, are computed by (4) and, from them, we obtain Σ_{y;L,s}^{(i)}, for s = L, L−1, and Θ_L^{(ij)} by expressions (6) and (31), respectively. The matrices G_{A_L}^{(i)} and G_{B_L}^{(i)} are obtained by (22). Finally, we compute Υ_L^{(ij)} = γ̄_{1,L+1}^{(i)} γ̄_{0,L}^{(j)} R_L^{(ij)}, L ≥ 2. Note that all these matrices depend only on the system model information and can be obtained before the observations are available.

(II) Local LS linear prediction and filtering recursive algorithm (Theorem 2): At the sampling time L, starting with the prior knowledge of the (L−1)th iteration, including J_{L−1}^{(i)}, Π_{L−1}^{(i)}, r_{L−1}^{(i)}, μ_{L−1}^{(i)} and O_{L−1}^{(i)}, the proposed local estimation algorithm operates as follows:
Step II-1: Compute J_L^{(i)} by (18) and, from it, Π_L^{(i)} is provided by (14) and r_L^{(i)} by (19). Then, the error covariance matrices P_{k/L}^{(i)} can be obtained by (23).
Step II-2: When the new observation y_L^{(i)} is available, the innovation μ_L^{(i)} is computed by (20) and, from it, O_L^{(i)} is obtained using (17). Then the local estimators x̂_{k/L}^{(i)} are computed by (16).

(III) Distributed fusion estimators (Theorem 3): At the sampling time L, once the (L−1)th iteration is finished, starting with the prior knowledge including J_{L−1}^{(ij)}, Π_{L−1}^{(ij)} and r_{L−1}^{(ij)}, we proceed as follows:
Step III-1: Compute J_{L−1,L}^{(ij)} and Π_{L−1,L}^{(ij)} by (29) and (32), respectively; from them, Π_L^{(ij)} is obtained by (30).
Step III-2: From J_{L−1,L}^{(ij)} and Π_L^{(ij)}, J_L^{(ij)} is computed by (28).
Step III-3: From J_{L−1,L}^{(ij)} and J_L^{(ij)}, r_L^{(ij)} is computed by (27), and from it, the cross-covariance matrices Σ̂_{x;k/L}^{(ij)} between local estimators are obtained using (26).
Step III-4: From Σ̂_{x;k/L}^{(ij)}, the distributed fusion estimators x̂_{k/L}^{(D)} and their error covariance matrices P_{k/L}^{(D)} are calculated by (25) and (33), respectively.
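Step III-4 forms the distributed estimator as a matrix-weighted combination of the local estimators that minimizes the mean squared error. The sketch below illustrates this fusion principle only, not the paper's expression (25): it computes the MSE-optimal matrix weights for three unbiased local estimators from a toy joint error covariance, and checks that fusion is never worse than the best local estimator. The names `Sigma`, `E`, `W` and the dimensions are our own illustrative assumptions.

```python
import numpy as np

# MSE-optimal matrix-weighted fusion of m unbiased local estimators of an
# n-dimensional signal, given the joint covariance Sigma of the stacked
# local estimation errors (toy SPD matrix here; all names are ours).
n, m = 2, 3
rng = np.random.default_rng(1)
M = rng.standard_normal((m * n, 2 * m * n))
Sigma = M @ M.T / (2 * m * n)            # joint local-error covariance (toy)

E = np.vstack([np.eye(n)] * m)           # stacked identities: unbiasedness means W @ E = I
Si = np.linalg.inv(Sigma)
P_D = np.linalg.inv(E.T @ Si @ E)        # fused error covariance
W = P_D @ E.T @ Si                       # optimal matrix weights (one n-by-n block per sensor)

locals_mse = [np.trace(Sigma[i * n:(i + 1) * n, i * n:(i + 1) * n]) for i in range(m)]
# the fused estimator is never worse than any single local estimator
assert np.trace(P_D) <= min(locals_mse) + 1e-9
```

By the Gauss–Markov argument, any weights with W E = I (including those selecting a single local estimator) give an error covariance no smaller than P_D, which is what the final assertion checks on the traces.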
5. Numerical simulation example

Consider a zero-mean two-dimensional signal {x_k; k ≥ 1} with autocovariance function given by

E[x_k x_s^T] = [ 0.8^{k−s}   1.02·0.8^{k−s} ;  0.9·0.8^{k−s−1}   0.918·0.8^{k−s−1} ],  s < k,

which, agreeing with Assumption 1, is factorizable just by taking

A_k = [ 0.8^k   1.02·0.8^k ;  0.9·0.8^{k−1}   0.918·0.8^{k−1} ]  and  B_k = [ 0.8^{−k}   0 ;  0   0.8^{−k} ].

For the simulation, the signal is supposed to be generated by the same model as that considered in [26] (Example 2); specifically,

x_{k+1} = [ 0.8   0 ;  0.9   0 ] x_k + [ 0.6 ;  0.5 ] w_k,  k ≥ 1,

where {w_k; k ≥ 1} is a zero-mean white noise with unit variance. This example illustrates that Assumption 1 covers situations in which the system matrix in the state-space model is a singular matrix.

Consider three sensors which provide scalar measurements of the signal according to model (1),

z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)},  k ≥ 1,  i = 1, 2, 3,

with the following characteristics. The random parameter matrices {H_k^{(i)}; k ≥ 1}, i = 1, 2, 3, are defined by:
○ H_k^{(1)} = λ_k^{(1)} (0.74, 0.75), where {λ_k^{(1)}; k ≥ 1} is a sequence of independent random variables uniformly distributed over [0.3, 0.7].
○ H_k^{(2)} = λ_k^{(2)} (0.85, 0.87), where {λ_k^{(2)}; k ≥ 1} is a sequence of independent discrete random variables with P[λ_k^{(2)} = 0] = 0.1, P[λ_k^{(2)} = 0.5] = 0.5, P[λ_k^{(2)} = 1] = 0.4.
○ H_k^{(3)} = λ_k^{(3)} ((0.75, 0.70) + ϵ_k (0.95, 0.50)), where {λ_k^{(3)}; k ≥ 1} are independent Bernoulli variables with P[λ_k^{(3)} = 1] = p^{(3)}, for all k ≥ 1, and {ϵ_k; k ≥ 1} is a zero-mean Gaussian white process with unit variance.

The additive noise processes {v_k^{(i)}; k ≥ 1}, i = 1, 2, 3, are defined by v_k^{(i)} = c_i η_k, i = 1, 2, 3, with c_1 = 0.25, c_2 = 0.5 and c_3 = 0.75, and {η_k; k ≥ 1} a zero-mean Gaussian white process with unit variance. Clearly, the additive noises {v_k^{(i)}; k ≥ 1}, i = 1, 2, 3, are correlated at any time, with R_k^{(i)} = c_i² and R_k^{(ij)} = c_i c_j, which agrees with Assumption 3.
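As a quick numerical check of the factorization claimed in this example under Assumption 1, one can verify E[x_k x_s^T] = A_k B_s^T, and also cross-check the autocovariance against the generating state-space model above. The helper names are ours, and the steady-state covariance P used in the second check is derived from the model rather than stated in the example.

```python
import numpy as np

def A(k):  # factor A_k of Assumption 1 (entries as in the example)
    return np.array([[0.8**k, 1.02 * 0.8**k],
                     [0.9 * 0.8**(k - 1), 0.918 * 0.8**(k - 1)]])

def B(k):  # factor B_k
    return np.diag([0.8**(-k), 0.8**(-k)])

def cov(k, s):  # E[x_k x_s^T], s < k
    return np.array([[0.8**(k - s), 1.02 * 0.8**(k - s)],
                     [0.9 * 0.8**(k - s - 1), 0.918 * 0.8**(k - s - 1)]])

assert np.allclose(A(7) @ B(3).T, cov(7, 3))   # factorization E[x_k x_s^T] = A_k B_s^T

# consistency with the state-space model: E[x_k x_{k-1}^T] = F P, where P is
# the steady-state covariance of x_k, solving P = F P F^T + G G^T, G = (0.6, 0.5)^T
F = np.array([[0.8, 0.0], [0.9, 0.0]])
P = np.array([[1.0, 1.02], [1.02, 1.06]])
assert np.allclose(F @ P, cov(5, 4))
```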
Note that the random parameter matrices H_k^{(i)} at each sensor allow us to model different types of uncertainty. Actually, in sensors 1 and 2, as in [7], the values of the random variables λ_k^{(1)} and λ_k^{(2)} lie within the interval [0, 1], so they represent continuous and discrete stochastic sensor gain degradations, respectively. On the other hand, the Bernoulli variables λ_k^{(3)} cover the phenomenon of missing measurements in sensor 3; namely, λ_k^{(3)} = 1 means that the signal x_k is present in the kth measurement coming from the third sensor, while λ_k^{(3)} = 0 means that the signal is missing in such measurement or, equivalently, that it is only noise, z_k^{(3)} = v_k^{(3)}; moreover, as in [24], a multiplicative noise is also considered in the measurements of sensor 3.

Next, according to the theoretical observation model (Section 2.1), suppose that random one-step delays and/or packet dropouts with different rates exist in the data transmissions from the individual sensors to the local processors. Specifically, we assume that

y_k^{(1)} = α_k^{(1)} β_k^{(1)} z_k^{(1)} + α_k^{(1)} (1 − β_k^{(1)}) z_{k−1}^{(1)} + (1 − α_k^{(1)}) y_{k−1}^{(1)},  k ≥ 2;  y_1^{(1)} = z_1^{(1)},
y_k^{(2)} = α_k^{(2)} z_k^{(2)} + (1 − α_k^{(2)}) y_{k−1}^{(2)},  k ≥ 2;  y_1^{(2)} = z_1^{(2)},
y_k^{(3)} = β_k^{(3)} z_k^{(3)} + (1 − β_k^{(3)}) z_{k−1}^{(3)},  k ≥ 2;  y_1^{(3)} = z_1^{(3)},

where {α_k^{(i)}; k ≥ 2}, i = 1, 2, and {β_k^{(i)}; k ≥ 2}, i = 1, 3, are independent sequences of independent Bernoulli random variables with P[α_k^{(i)} = 1] = ᾱ^{(i)} and P[β_k^{(i)} = 1] = β̄^{(i)}.

This model considers the possibility of both delays and packet dropouts in transmissions from sensor 1, while the measurements received by the local processors of sensors 2 and 3 are only subject to random dropouts and delays, respectively. If α_k^{(i)} = 1, i = 1, 2, all the transmissions at time k are successful; if also β_k^{(i)} = 1, i = 1, 3, the current measured outputs are all received on time, i.e. y_k^{(i)} = z_k^{(i)}, i = 1, 2, 3, while if β_k^{(i)} = 0 for some i, the corresponding observation is one-step delayed, y_k^{(i)} = z_{k−1}^{(i)}. At sensors 1 and 2, α_k^{(i)} = 0 means that the kth transmission fails, no packet is received at time k, and the most recent data information, y_{k−1}^{(i)}, is used instead.
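The channel model above is straightforward to simulate. The sketch below mimics sensor 1's channel, y_k = α_k β_k z_k + α_k (1 − β_k) z_{k−1} + (1 − α_k) y_{k−1}: each received observation is the current output, the previous output, or the last received datum. The variable names and the rates 0.8 and 0.6 are illustrative choices of ours, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
z = rng.standard_normal(T + 1)          # stand-in measured outputs z_k (index 0 unused)
a = rng.random(T + 1) < 0.8             # alpha_k = 1: transmission succeeds
b = rng.random(T + 1) < 0.6             # beta_k  = 1: packet arrives on time

y = np.empty(T + 1)
y[1] = z[1]
for k in range(2, T + 1):
    if a[k]:
        y[k] = z[k] if b[k] else z[k - 1]   # on time vs one-step delayed
    else:
        y[k] = y[k - 1]                     # dropout: last received data reused
```

Setting all β_k to one recovers sensor 2's dropout-only channel, and setting all α_k to one with z_{k−1} in place of y_{k−1} recovers sensor 3's delay-only channel.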
To illustrate the feasibility and effectiveness of the estimators proposed in this paper, the algorithms were implemented in MATLAB and fifty iterations were run. Using simulated values, both the centralized and distributed fusion estimates were calculated, as well as the corresponding error covariance matrices, in order to measure the estimators' accuracy. Since the results obtained for the first and second signal components are analogous, only those corresponding to the first one are presented here.

Figs. 2–4 consider p^{(3)} = 0.7 and ᾱ^{(1)} = 0.8, ᾱ^{(2)} = 0.7, β̄^{(1)} = 0.6 and β̄^{(3)} = 0.8. Fig. 2 displays the trajectory of the first signal component along with the centralized and distributed fusion filtering estimates. Fig. 3 displays the error variances of the different filtering estimators (local, centralized and distributed), and Fig. 4 shows the error variances of the one-, two- and three-step predictors and filters obtained by the centralized and distributed fusion methods. From Fig. 2, a satisfactory and efficient tracking performance of the proposed fusion filtering estimators is observed. Fig. 3 shows that the error variances of the distributed fusion filtering estimators are significantly smaller than those of every local estimator, but slightly greater than those of the centralized ones. Nevertheless, this slight difference is compensated by the fact that the distributed fusion structure reduces the computational cost and has better robustness and fault tolerance. Fig. 4 shows the expected fact that the fusion filtering estimators outperform the predictors, and that the accuracy of the predictors improves as the number of available observations increases.

Next, in order to show the effect of the missing measurements phenomenon in sensor 3, the centralized and distributed filtering error variances are displayed in Fig. 5 for different probabilities, p^{(3)} = 0.3, 0.5, 0.7, 0.9. From this figure, we see that the performance of the filters is indeed influenced by this probability and, as expected, it is confirmed that the centralized and distributed error variances become smaller as p^{(3)} increases; hence, the performance of both filters improves when 1 − p^{(3)}, the probability of missing measurements, decreases.

Finally, the centralized and distributed filtering accuracy is analyzed as a function of the probabilities ᾱ^{(2)} and β̄^{(3)}, the mean values of the variables that model the phenomena of packet dropouts and delays in the transmissions from sensors 2 and 3 to the local processors, respectively. Specifically, the filter performance is analyzed when ᾱ^{(2)} varies from 0.1 to 0.9, for different fixed values of β̄^{(3)}. Since the behavior of the error variances is analogous in all the iterations, only the results of a specific iteration (k = 50) are shown here. In Fig. 6 the centralized and distributed filtering error variances are displayed versus ᾱ^{(2)}, for β̄^{(3)} = 0.4, 0.6 and 0.8. From this figure it is concluded that, as ᾱ^{(2)} or β̄^{(3)} increase, the centralized and distributed error variances both become smaller, which means that, as expected, the smaller the probabilities of packet dropouts and/or delays in transmission, the better the estimations obtained.
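The monotone effect of the missing-measurement probability p^{(3)} observed in Fig. 5 can be reproduced analytically in a toy static model of our own (not the paper's model): for a scalar measurement z = λx + v with λ ~ Bernoulli(p), the mean squared error of the LS linear estimator of x from z decreases as p grows.

```python
# Toy scalar illustration (ours): z = lam * x + v, lam ~ Bernoulli(p) modelling a
# missing measurement; x, v zero-mean and independent, Var(x) = sigma2, Var(v) = r.
sigma2, r = 1.0, 0.25

def ls_mse(p):
    # LS linear estimator xhat = K z with K = Cov(x,z)/Var(z) = p*sigma2/(p*sigma2 + r);
    # its mean squared error is sigma2 - Cov(x,z)^2 / Var(z)
    return sigma2 - (p * sigma2) ** 2 / (p * sigma2 + r)

mses = [ls_mse(p) for p in (0.3, 0.5, 0.7, 0.9)]
assert all(a > b for a, b in zip(mses, mses[1:]))   # error shrinks as p grows
```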
Fig. 2. Simulated first signal component and centralized and distributed fusion filtering estimates.

Fig. 3. Error variance comparison of the centralized, distributed and local filtering estimators.

Fig. 4. Centralized and distributed prediction and filtering error variances.

Fig. 5. Centralized and distributed filtering error variances, when p^{(3)} = 0.3, 0.5, 0.7 and 0.9.

Fig. 6. Centralized and distributed filtering error variances at k = 50 versus ᾱ^{(2)}, when β̄^{(3)} = 0.4, 0.6 and 0.8.

6. Conclusions

In this paper, the centralized and distributed fusion estimation problems have been investigated in networked multi-sensor systems from measurements with random parameter matrices, assuming random transmission delays and packet dropouts. The main outcomes and results can be summarized as follows:

Centralized fusion estimators: Using covariance information, a recursive centralized LS linear prediction and filtering algorithm, computationally very simple and suitable for online applications, has been designed by an innovation approach, without requiring full knowledge of the state-space model generating the signal process. To measure the accuracy of the centralized fusion estimators, recursive formulas for the error covariance matrices have also been derived.

Distributed fusion estimators: Firstly, recursive algorithms for the local LS linear predictors and filters of the signal, based on the measurement data of each processor, have also been designed by an innovation approach; the computational procedure of the local algorithms is analogous to that of the centralized one and, hence, they are very simple and suitable for online applications. Once the local estimators have been obtained, the matrix-weighted sum that minimizes the mean squared estimation error is proposed as the distributed fusion estimator. The error covariance matrices of the fusion estimators have also been derived.

A numerical simulation example has illustrated the usefulness of the proposed algorithms. Error variance comparisons have shown that both the centralized and distributed fusion filters outperform the local ones, as well as a slight superiority of the centralized fusion estimators over the distributed ones. The example has also highlighted the applicability of the proposed algorithms to a great variety of multi-sensor systems featuring network-induced stochastic uncertainties, such as sensor gain degradation, missing measurements or multiplicative observation noises, all of which can be dealt with by the observation model considered in this paper.

Future work on the estimation problem for this kind of systems with random parameter matrices and random transmission delays and packet dropouts includes, in a first stage, the extension of the proposed results to the case of systems with correlation between the measurement matrices and noises. Another challenging topic for the authors is the consideration of connected sensor networks, whose nodes are distributed according to a topology represented by a directed graph. Finally, we also highlight the interest in investigating the centralized and distributed fusion estimation problems in different classes of systems with network-induced phenomena, such as those considered in [28] (nonlinear time-varying systems subject to multiplicative noises, missing measurements and quantization effects), [29] (discrete time-varying nonlinear systems with both randomly occurring nonlinearities and fading measurements) or [30] (discrete time-varying stochastic systems involving fading measurements, randomly occurring nonlinearities and mixed multiplicative and additive noises).

Acknowledgments

This research is supported by the Ministerio de Economía y Competitividad and Fondo Europeo de Desarrollo Regional FEDER (Grant no. MTM2014-52291-P).

Appendix A
Proof of Theorem 1. Theorem 1 will be proved by an innovation approach. The innovation at time h is defined as μ_h = Y_h − Ŷ_{h/h−1}, where Ŷ_{h/h−1}, the LS one-stage linear predictor of Y_h, is the orthogonal projection of Y_h onto the linear space spanned by Y_1, …, Y_{h−1}. To simplify the notation, we write Z̄_h = Γ_{0,h} Z_h + Γ_{1,h} Z_{h−1}; then, from (7) and the OPL, the innovation process can be expressed as

μ_h = Z̄_h − Ẑ̄_{h/h−1} + (Γ_{2,h} − Γ̄_{2,h}) Y_{h−1},  h ≥ 2;  μ_1 = Y_1 = Z_1,

where Ẑ̄_{h/h−1} denotes the LS one-stage linear predictor of Z̄_h.

Now, denoting S_{k,h} = E[x_k μ_h^T], the LS linear centralized fusion estimators x̂_{k/L}^{(C)}, L ≤ k, are expressed as a linear combination of the innovations as follows (see [27]):

x̂_{k/L}^{(C)} = Σ_{h=1}^{L} S_{k,h} Π_h^{−1} μ_h,  L ≤ k,

and we start by calculating the coefficients S_{k,h} which, from the above expression of μ_h, are given by

S_{k,h} = E[x_k Z̄_h^T] − E[x_k Ẑ̄_{h/h−1}^T],  2 ≤ h ≤ k.

From Assumption 1 and properties (a)–(d), we have that

E[x_k Z̄_h^T] = A_k G_{B_h}^T,  1 ≤ h ≤ k.

Using that Ẑ̄_{h/h−1} = Σ_{l=1}^{h−1} E[Z̄_h μ_l^T] Π_l^{−1} μ_l, and also that, by properties (a)–(d),

E[Z̄_h μ_l^T] = Γ̄_{0,h} H̄_h S_{h,l} + Γ̄_{1,h} H̄_{h−1} S_{h−1,l} + Γ̄_{1,h} R_{h−1} Γ̄_{0,h−1}^T δ_{h−1,l},  l < h,

the following identity holds:

E[x_k Ẑ̄_{h/h−1}^T] = Σ_{l=1}^{h−1} S_{k,l} Π_l^{−1} (Γ̄_{0,h} H̄_h S_{h,l} + Γ̄_{1,h} H̄_{h−1} S_{h−1,l})^T + S_{k,h−1} Π_{h−1}^{−1} Υ_{h−1}^T.

Hence, S_{k,h} can be expressed as S_{k,h} = A_k J_h, for h ≤ k, where J_h is a function satisfying

J_h = G_{B_h}^T − Σ_{l=1}^{h−1} J_l Π_l^{−1} (Γ̄_{0,h} H̄_h S_{h,l} + Γ̄_{1,h} H̄_{h−1} S_{h−1,l})^T − J_{h−1} Π_{h−1}^{−1} Υ_{h−1}^T,

for 2 ≤ h ≤ k, and J_1 = G_{B_1}^T. Then, by defining

O_L = Σ_{h=1}^{L} J_h Π_h^{−1} μ_h,  L ≥ 1;  O_0 = 0,

and

r_L = Σ_{h=1}^{L} J_h Π_h^{−1} J_h^T,  L ≥ 1;  r_0 = 0,

it is easy to obtain expressions (9)–(13) of Theorem 1. Finally, expression (14) for Π_L is obtained by taking into account that, from the OPL, the innovation covariance matrix can be expressed as

Π_L = E[(Y_L − Γ̄_{2,L} Y_{L−1})(Y_L − Γ̄_{2,L} Y_{L−1})^T] − E[Ẑ̄_{L/L−1} Ẑ̄_{L/L−1}^T],

and using that Ẑ̄_{L/L−1} = G_{A_L} O_{L−1} + Υ_{L−1} Π_{L−1}^{−1} μ_{L−1}. Then the proof of Theorem 1 is complete. □

Appendix B

Proof of Theorem 3. To simplify the subsequent mathematical derivations, we introduce the following notation:

ζ_L^{(i)} = γ_{0,L}^{(i)} z_L^{(i)} + γ_{1,L}^{(i)} z_{L−1}^{(i)},  L ≥ 2,   (34)

from which the innovations can be rewritten as

μ_L^{(i)} = ζ_L^{(i)} − ζ̂_{L/L−1}^{(i)} + (γ_{2,L}^{(i)} − γ̄_{2,L}^{(i)}) y_{L−1}^{(i)},  L ≥ 2.   (35)

Also, applying the OPL, it is not difficult to see that ζ̂_{L/L−1}^{(i)} and ζ̂_{L/L−1}^{(i/j)}, the estimators of ζ_L^{(i)} based on the measurements {y_1^{(i)}, …, y_{L−1}^{(i)}} and {y_1^{(j)}, …, y_{L−1}^{(j)}}, respectively, are given by

ζ̂_{L/L−1}^{(i)} = G_{A_L}^{(i)} O_{L−1}^{(i)} + Υ_{L−1}^{(i)} Π_{L−1}^{(i)−1} μ_{L−1}^{(i)},  L ≥ 2,
ζ̂_{L/L−1}^{(i/j)} = G_{A_L}^{(i)} O_{L−1}^{(j)} + Υ_{L−1}^{(ij)} Π_{L−1}^{(j)−1} μ_{L−1}^{(j)},  L ≥ 2.   (36)

Derivation of expressions (25)–(27) and (33):
○ Expressions (25) and (33) for the distributed estimators and their error covariance matrices, respectively, are immediately derived from (24).
○ Expression (26) for the cross-correlation matrices between the local estimators, Σ̂_{x;k/L}^{(ij)}, follows immediately from (16) for such estimators, since r_L^{(ij)} = E[O_L^{(i)} O_L^{(j)T}]; moreover, using (17) and taking into account that J_{s,L}^{(ij)} = E[O_s^{(i)} μ_L^{(j)T}], for s = L, L−1, we get (27) for r_L^{(ij)}.

Derivation of expressions (28) and (29):
○ Using (17) for O_L^{(i)}, expression (28) for J_L^{(ij)} = E[O_L^{(i)} μ_L^{(j)T}] is directly obtained.
○ To derive (29) for J_{L−1,L}^{(ij)} = E[O_{L−1}^{(i)} μ_L^{(j)T}], we use (35) for μ_L^{(j)}, which leads to J_{L−1,L}^{(ij)} = E[O_{L−1}^{(i)} ζ_L^{(j)T}] − E[O_{L−1}^{(i)} ζ̂_{L/L−1}^{(j)T}]. Now, from the OPL, we have E[O_{L−1}^{(i)} ζ_L^{(j)T}] = E[O_{L−1}^{(i)} ζ̂_{L/L−1}^{(j/i)T}] and, from (36), it is easy to see that

E[O_{L−1}^{(i)} ζ̂_{L/L−1}^{(j/i)T}] = r_{L−1}^{(i)} G_{A_L}^{(j)T} + J_{L−1}^{(i)} Π_{L−1}^{(i)−1} Υ_{L−1}^{(ji)T},  L ≥ 2,
E[O_{L−1}^{(i)} ζ̂_{L/L−1}^{(j)T}] = r_{L−1}^{(ij)} G_{A_L}^{(j)T} + J_{L−1}^{(ij)} Π_{L−1}^{(j)−1} Υ_{L−1}^{(j)T},  L ≥ 2,

thus concluding (29) for J_{L−1,L}^{(ij)}.

Derivation of expressions (30)–(32):
○ If we use expression (35) in Π_L^{(ij)} = E[μ_L^{(i)} μ_L^{(j)T}], it is not difficult to see that

Π_L^{(ij)} = Θ_L^{(ij)} − E[ζ_L^{(i)} ζ̂_{L/L−1}^{(j)T}] − E[ζ̂_{L/L−1}^{(i)} μ_L^{(j)T}],  L ≥ 2,   (37)

where Θ_L^{(ij)} = E[ζ_L^{(i)} ζ_L^{(j)T}]; from (34) it is clear that Θ_L^{(ij)} is given by (31). From the OPL, we have E[ζ_L^{(i)} ζ̂_{L/L−1}^{(j)T}] = E[ζ̂_{L/L−1}^{(i/j)} ζ̂_{L/L−1}^{(j)T}] and, from (36) and (18), it follows that

E[ζ̂_{L/L−1}^{(i/j)} ζ̂_{L/L−1}^{(j)T}] = G_{A_L}^{(i)} (G_{B_L}^{(j)T} − J_L^{(j)}) + Υ_{L−1}^{(ij)} Π_{L−1}^{(j)−1} J_{L−1}^{(j)T} G_{A_L}^{(j)T} + Υ_{L−1}^{(ij)} Π_{L−1}^{(j)−1} Υ_{L−1}^{(j)T},  L ≥ 2.

Also, from (36), the following identity holds:

E[ζ̂_{L/L−1}^{(i)} μ_L^{(j)T}] = G_{A_L}^{(i)} J_{L−1,L}^{(ij)} + Υ_{L−1}^{(i)} Π_{L−1}^{(i)−1} Π_{L−1,L}^{(ij)},  L ≥ 2.

Substituting the above expectations into (37), we immediately get (30).
○ Using again expression (35) for μ_L^{(j)} in Π_{L−1,L}^{(ij)} = E[μ_{L−1}^{(i)} μ_L^{(j)T}], we have Π_{L−1,L}^{(ij)} = E[μ_{L−1}^{(i)} ζ_L^{(j)T}] − E[μ_{L−1}^{(i)} ζ̂_{L/L−1}^{(j)T}] and, applying the OPL and (36), we obtain

E[μ_{L−1}^{(i)} ζ_L^{(j)T}] = E[μ_{L−1}^{(i)} ζ̂_{L/L−1}^{(j/i)T}] = J_{L−1}^{(i)T} G_{A_L}^{(j)T} + Υ_{L−1}^{(ji)T},  L ≥ 2,
E[μ_{L−1}^{(i)} ζ̂_{L/L−1}^{(j)T}] = J_{L−1}^{(ji)T} G_{A_L}^{(j)T} + Π_{L−1}^{(ij)} Π_{L−1}^{(j)−1} Υ_{L−1}^{(j)T},  L ≥ 2.

From these expectations, we can show that expression (32) for Π_{L−1,L}^{(ij)} holds true.

The proof of Theorem 3 is then complete. □
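The innovation approach used in both proofs amounts to a Gram–Schmidt orthogonalization of the observations. As a self-contained numerical illustration of this general principle (toy scalar observations and variable names of our own, not the paper's model), the sketch below verifies that the innovation expansion x̂ = Σ_h E[x μ_h] Π_h^{−1} μ_h coincides with the direct LS orthogonal projection of x onto Y_1, …, Y_L.

```python
import numpy as np

# Toy joint covariance C of (x, Y_1, ..., Y_L); all names are ours.
rng = np.random.default_rng(2)
L = 4
M = rng.standard_normal((L + 1, 2 * (L + 1)))
C = M @ M.T                               # SPD joint covariance
SxY, SYY = C[0, 1:], C[1:, 1:]

T = np.zeros((L, L))                      # row h expresses mu_h as a combination of Y
Pi = np.zeros(L)                          # innovation variances Pi_h
for h in range(L):
    T[h, h] = 1.0
    for l in range(h):
        T[h] -= (T[l] @ SYY[h]) / Pi[l] * T[l]   # remove projection of Y_h on mu_l
    Pi[h] = T[h] @ SYY @ T[h]

# innovation-expansion coefficients of x_hat in terms of Y
coef = sum((SxY @ T[h]) / Pi[h] * T[h] for h in range(L))
assert np.allclose(coef, SxY @ np.linalg.inv(SYY))   # equals the direct LS projection
```

The equality holds because the innovations are an uncorrelated basis spanning the same linear space as the observations, which is exactly what makes the recursive algorithms of Theorems 1–3 possible.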
References

[1] E. Song, Y. Zhu, J. Zhou, Z. You, Optimal Kalman filtering fusion with cross-correlated sensor noises, Automatica 43 (2007) 1450–1456.
[2] J. Feng, M. Zeng, Optimal distributed Kalman filtering fusion for a linear dynamic system with cross-correlated noises, Int. J. Syst. Sci. 43 (2) (2012) 385–398.
[3] X. Sun, G. Yan, Distributed state fusion Kalman predictor based on WLS algorithm, Proc. Eng. 29 (2012) 216–222.
[4] L. Yan, X. Rong Li, Y. Xia, M. Fu, Optimal sequential and distributed fusion for state estimation in cross-correlated noise, Automatica 49 (2013) 3607–3612.
[5] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, Int. J. Control 83 (3) (2010) 538–551.
[6] M. Liu, X. Liu, Y. Shi, S. Wang, T–S fuzzy-model-based H2 and H∞ filtering for networked control systems with two-channel Markovian random delays, Digit. Signal Process. 27 (2014) 167–174.
[7] Y. Liu, X. He, Z. Wang, D. Zhou, Optimal filtering for networked systems with stochastic sensor gain degradation, Automatica 50 (5) (2014) 1521–1525.
[8] S. Gao, P. Chen, Suboptimal filtering of networked discrete-time systems with random observation losses, Math. Probl. Eng. 2014 (2014), Art. no. 151836.
[9] R. Caballero-Águila, I. García-Garrido, J. Linares-Pérez, Information fusion algorithms for state estimation in multi-sensor systems with correlated missing measurements, Appl. Math. Comput. 226 (2014) 548–563.
[10] F. Peng, S. Sun, Distributed fusion estimation for multisensor multirate systems with stochastic observation multiplicative noises, Math. Probl. Eng. 2014 (2014), Art. no. 373270.
[11] J. Ma, S. Sun, Centralized fusion estimators for multisensor systems with random sensor delays, multiple packet dropouts and uncertain observations, IEEE Sens. J. 13 (4) (2013) 1228–1235.
[12] N. Li, S. Sun, J. Ma, Multi-sensor distributed fusion filtering for networked systems with different delay and loss rates, Digit. Signal Process. 34 (2014) 29–38.
[13] B. Chen, W. Zhang, L. Yu, Distributed fusion estimation with missing measurements, random transmission delays and packet dropouts, IEEE Trans. Autom. Control 59 (7) (2014) 1961–1967.
[14] B. Chen, W. Zhang, G. Hu, L. Yu, Networked fusion Kalman filtering with multiple uncertainties, IEEE Trans. Aerosp. Electron. Syst. 51 (3) (2015) 2332–2349.
[15] S. Sun, J. Ma, Linear estimation for networked control systems with random transmission delays and packet dropouts, Inf. Sci. 269 (2014) 349–365.
[16] S. Wang, H. Fang, X. Tian, Recursive estimation for nonlinear stochastic systems with multi-step transmission delays, multiple packet dropouts and correlated noises, Signal Process. 115 (2015) 164–175.
[17] D. Chen, Y. Yu, L. Xu, X. Liu, Kalman filtering for discrete stochastic systems with multiplicative noises and random two-step sensor delays, Discrete Dyn. Nat. Soc. 2015 (2015), Article ID 809734, 11 pp.
[18] Y. Luo, Y. Zhu, D. Luo, J. Zhou, E. Song, D. Wang, Globally optimal multisensor distributed random parameter matrices Kalman filtering fusion with applications, Sensors 8 (12) (2008) 8086–8103.
[19] X.J. Shen, Y.T. Luo, Y.M. Zhu, E.B. Song, Globally optimal distributed Kalman filtering fusion, Sci. China Inf. Sci. 55 (3) (2012) 512–529.
[20] J. Hu, Z. Wang, H. Gao, Recursive filtering with random parameter matrices, multiple fading measurements and correlated noises, Automatica 49 (2013) 3440–3448.
[21] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Covariance-based estimation from multisensor delayed measurements with random parameter matrices and correlated noises, Math. Probl. Eng. 2014 (2014), Article ID 958474, 13 pp.
[22] J. Linares-Pérez, R. Caballero-Águila, I. García-Garrido, Optimal linear filter design for systems with correlation in the measurement matrices and noises: recursive algorithm and applications, Int. J. Syst. Sci. 45 (7) (2014) 1548–1562.
[23] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Optimal state estimation for networked systems with random parameter matrices, correlated noises and delayed measurements, Int. J. Gen. Syst. 44 (2) (2015) 142–154.
[24] J. Ma, S. Sun, Centralized fusion estimators for multi-sensor systems with multiplicative noises and missing measurements, J. Netw. 7 (10) (2012) 1538–1545.
[25] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Covariance-based estimation algorithms in networked systems with mixed uncertainties in the observations, Signal Process. 94 (2014) 163–173.
[26] T. Kailath, A.H. Sayed, B. Hassibi, Linear Estimation, Prentice Hall, Upper Saddle River, New Jersey, 2000.
[27] S. Sun, Optimal estimators for systems with finite consecutive packet dropouts, IEEE Signal Process. Lett. 16 (7) (2009) 557–560.
[28] J. Hu, Z. Wang, B. Shen, H. Gao, Quantised recursive filtering for a class of nonlinear systems with multiplicative noises and missing measurements, Int. J. Control 86 (4) (2013) 650–663.
[29] D. Ding, Z. Wang, J. Lam, B. Shen, Finite-horizon H∞ control for discrete time-varying systems with randomly occurring nonlinearities and fading measurements, IEEE Trans. Autom. Control 60 (9) (2015) 2488–2493.
[30] D. Ding, Z. Wang, B. Shen, H. Dong, Envelope-constrained H∞ filtering with fading measurements and randomly occurring nonlinearities: the finite horizon case, Automatica 55 (2015) 37–45.