Least-squares linear estimators using measurements transmitted by different sensors with packet dropouts

Digital Signal Processing 22 (2012) 1118–1125
R. Caballero-Águila a, A. Hermoso-Carazo b, J. Linares-Pérez b,∗

a Departamento de Estadística e I.O., Universidad de Jaén, Paraje Las Lagunillas s/n, 23071 Jaén, Spain
b Departamento de Estadística e I.O., Universidad de Granada, Campus Fuentenueva s/n, 18071 Granada, Spain

Article info

Article history: Available online 6 June 2012

Keywords: Least-squares estimation; Packet dropouts; Multiple sensors; Covariance information; Innovation approach

Abstract

The least-squares linear estimation problem (including prediction, filtering and fixed-point smoothing) from measurements transmitted by different sensors subject to random packet dropouts is addressed. For each sensor, a different Bernoulli sequence is used to model the packet dropout process. Under the assumption that the signal evolution model is unknown, recursive estimation algorithms are derived by an innovation approach, requiring only information about the covariances of the processes involved in the observation equation, as well as the knowledge of the dropout probabilities at each sensor. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

The problem of estimating random signals from noisy measurements coming from multiple different sensors constitutes a current topic of growing interest due to its importance in several application fields, for example in network communication systems incorporating heterogeneous measurement devices (see e.g. [1–3]). Also, over the past few years, research on systems with time delays and/or data packet dropouts has attracted considerable interest due to the wide application of networks in communication systems, networked control systems and signal processing, among others. In networked systems, time delays and/or data packet dropouts are usually unavoidable due to numerous causes such as network congestion, random failures in the transmission mechanism, accidental loss of some measurements, or data inaccessibility at certain times. Moreover, these time delays and packet dropouts often occur randomly. Systems with randomly delayed measurements and/or random packet dropouts are modeled by introducing additional random variables in the observation model, which makes these systems non-Gaussian; as in other kinds of non-Gaussian systems, the estimation problem has usually focused on the search for suboptimal (basically linear) estimators. Under the assumption that the state-space model of the signal to be estimated is known, several modifications of conventional linear estimation algorithms have been proposed to incorporate the effects of random delays and/or random packet dropouts on the measurement arrival (see e.g. [4–9] and references therein).

∗ Corresponding author. E-mail address: [email protected] (J. Linares-Pérez).

1051-2004/$ – see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.dsp.2012.06.002

On the other hand, in some practical situations the state-space model of the signal is not available, and another type of information, for example about the covariance functions of the processes involved in the observation equation, must be processed for the estimation. In this context, linear estimation algorithms from randomly delayed observations based on covariance information have been derived, for example, in [10–12] under different hypotheses on the delay process. Also, situations with bounded random observation delays, which can lead to bounded packet dropouts, have recently been considered in [13]. Most papers on signal estimation from observations featuring random delays and/or random packet dropouts assume that the observations come from a single sensor or, when multiple sensors exist, that all the sensors have the same delay/dropout probability. Nevertheless, such an assumption is not realistic in many practical situations where the information is gathered by an array of heterogeneous sensors, and the delay/dropout probability at each individual sensor can differ from the others. Recently, in [14], systems with multiple random delays and packet dropouts were considered, admitting different delay/dropout probabilities at each sensor, and the H∞ filtering problem was addressed when the signal state-space model is known. In [15] and [16] the least-squares linear estimation problem from observations coming from multiple randomly delayed sensors, with different delay characteristics, has been studied under a covariance-information framework. However, for systems with packet dropouts, estimation problems using covariance information have not been well studied yet. In this paper, as in [16], we assume that the state-space model of the signal is not available. But, differently from [16], where only measurement delays are considered, our aim in the current paper is to obtain a least-squares linear estimation algorithm from

observations transmitted by different sensors featuring multiple packet dropouts. At each sensor, the random data packet dropouts are modeled by a sequence of Bernoulli random variables, whose values (one or zero) indicate whether the current measurement is available or lost (in which case, the latest available measurement is processed instead). The rest of the paper is organized as follows. In Section 2 the observation model with multiple sensors and packet dropouts is presented. The least-squares linear estimation problem is formulated in Section 3, including a brief description of the innovation approach. The recursive one-stage prediction and filtering algorithm is derived in Section 4, and the fixed-point smoothing algorithm is presented in Section 5, which includes recursive formulas for the estimation error covariance matrices. Finally, in Section 6, a numerical simulation example is presented to show the effectiveness of the estimation algorithms proposed in the current paper, and some conclusions are drawn in Section 7.

2. Observation model

The purpose in this paper is to find the least-squares (LS) linear estimator of an n-dimensional discrete-time random signal using measurements transmitted by different sensors subject to random packet dropouts. The estimation problem is addressed under the assumption that the evolution model of the signal to be estimated is unknown and only information about its mean and covariance functions is available; this information is specified in the following assumption.

Assumption 1. The n-dimensional signal process $\{z_k;\ k \ge 1\}$ has zero mean and its autocovariance function is expressed in a separable form, $E[z_k z_j^T] = A_k B_j^T$, $j \le k$, where $A$ and $B$ are known $n \times s$ matrix functions, the symbol $E$ denotes the mathematical expectation and the superscript $T$ denotes the transpose operation.

Remark 1. Although Assumption 1 might seem restrictive, it covers many practical situations; for example, when the system matrix $\Phi$ in the state-space model of a stationary signal is available, the signal autocovariance function is $E[z_k z_j^T] = \Phi^{k-j} E[z_j z_j^T]$, $j \le k$, and Assumption 1 is clearly satisfied, taking $A_k = \Phi^k$ and $B_j = E[z_j z_j^T](\Phi^{-j})^T$. Also, processes with finite-dimensional, possibly time-variant, state-space models have semi-separable covariance functions, $E[z_k z_j^T] = \sum_{i=1}^{r} a_k^i b_j^{iT}$, $j \le k$ (see [17]), and this structure is a particular case of that assumed, just taking $A_k = (a_k^1, a_k^2, \ldots, a_k^r)$ and $B_j = (b_j^1, b_j^2, \ldots, b_j^r)$. Consequently, the structural assumption on the signal autocovariance function covers both stationary and non-stationary signals.

Next, the observation model and the assumptions required to address the LS linear estimation problem are presented. This model consists of noisy measurements coming from different sensors, which independently transmit their measurements, featuring random packet dropouts during transmission.

2.1. Measurement model with multiple sensors and packet dropouts

Let $\{z_k;\ k \ge 1\}$ be the signal process satisfying Assumption 1 and consider that, at any sampling time $k$, different scalar linear functions of the signal vector $z_k$ are observed by different systems with additive noises $v_k^i$; that is,

$$\tilde y_k^i = H_k^i z_k + v_k^i, \quad k \ge 1,\ i = 1, \ldots, m. \tag{1}$$

The outputs are measured at every sampling time and transmitted from the $m$ different sensors to a data processing center producing the signal estimation. In this paper it is assumed that, due to eventual communication failures during the transmission process, packet dropouts from each sensor may occur in a random way. The packet dropouts are treated as in [6]: if the current packet is lost, which occurs with a known probability, the latest one successfully received is processed for the estimation (it is also assumed that the packet information at the initial time is always available). Hence, the processed measurements at any time $k > 1$, which will be denoted by $y_k^i$, $i = 1, \ldots, m$, are not always equal to the real ones, $\tilde y_k^i$, $i = 1, \ldots, m$. More specifically, assume that, for each sensor $i = 1, \ldots, m$, the real and the processed measurements at the initial time are equal, $y_1^i = \tilde y_1^i$, but, at any time $k > 1$, the measurement processed, $y_k^i$, will be either the current measured output $\tilde y_k^i$ (with known probability $p_k^i$) or the latest measurement processed, $y_{k-1}^i$, if $\tilde y_k^i$ is lost during transmission (with probability $1 - p_k^i$); that is:

$$y_k^i = \begin{cases} \tilde y_k^i, & \text{with probability } p_k^i, \\ y_{k-1}^i, & \text{with probability } 1 - p_k^i, \end{cases} \quad k > 1;\ i = 1, \ldots, m.$$

Therefore, the measurements processed to estimate the signal can be modeled as follows:

$$y_k^i = \gamma_k^i \tilde y_k^i + \big(1 - \gamma_k^i\big) y_{k-1}^i, \quad k > 1; \qquad y_1^i = \tilde y_1^i, \quad i = 1, \ldots, m, \tag{2}$$

where $\gamma_k^i$ is a binary switching variable taking the values 0 or 1, representing whether the corresponding data packet is dropped or not, with probability $1 - p_k^i$ and $p_k^i$, respectively. Similarly to [6], the following assumption is required for the random packet dropout processes:

Assumption 2. For $i = 1, \ldots, m$, the processes $\{\gamma_k^i;\ k > 1\}$ are sequences of independent Bernoulli random variables with known probabilities, $P(\gamma_k^i = 1) = p_k^i$, $\forall k > 1$, where $p_k^i$ is the probability of successful packet transmission at time $k$ for the $i$th sensor measurement.

Remark 2. Model (2) can describe multiple packet dropouts in each sensor if successive values of the corresponding Bernoulli variables are equal to zero. For example, if $\gamma_k^i = \gamma_{k-1}^i = \gamma_{k-2}^i = 0$ and $\gamma_{k-3}^i = 1$, three consecutive data of sensor $i$ are lost and the measurement $\tilde y_{k-3}^i$ is processed at times $k-2$, $k-1$ and $k$. Moreover, at each sampling time, the number of packet dropouts may vary from one sensor to another.

As indicated previously, our aim is to find the LS linear estimator of the signal $z_k$ based on measurements $y_1^i, \ldots, y_L^i$, $i = 1, \ldots, m$, a problem which requires the existence of the second-order moments of the signal (Assumption 1), and also the existence of the second-order moments of the observations which, in view of (1) and (2), is guaranteed by that of the second-order moments of the additive noises; so, the following assumption on the statistical properties of these noises is imposed.

Assumption 3. For $i = 1, \ldots, m$, the scalar sensor noises, $\{v_k^i;\ k \ge 1\}$, are zero-mean white sequences with known variances.

Finally, the following independence hypothesis about the processes involved in model (1)–(2) is also assumed.

Assumption 4. The signal process, $\{z_k;\ k \ge 1\}$, and the noise processes, $\{v_k^i;\ k \ge 1\}$ and $\{\gamma_k^i;\ k > 1\}$, for $i = 1, \ldots, m$, are mutually independent.
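The packet-dropout mechanism of model (2) is easy to simulate. The sketch below uses illustrative assumptions (two sensors, stand-in white-noise outputs, constant probabilities $p^1 = 0.7$, $p^2 = 0.5$): it draws the Bernoulli variables $\gamma_k^i$, builds the processed measurements, and checks that each processed value equals the latest successfully received output.

```python
import numpy as np

# Minimal simulation of the processed-measurement model (2): a dropped packet
# (gamma = 0) holds the latest processed value, gamma = 1 lets the current
# output through. Measurement values are stand-in white noise; only the
# dropout mechanism itself is illustrated (all numbers are assumptions).
rng = np.random.default_rng(0)
m, K = 2, 200                          # two sensors, K sampling times
p = np.array([0.7, 0.5])               # per-sensor success probabilities p_k^i
ytilde = rng.normal(size=(K, m))       # real outputs ytilde_k^i (stand-ins)
gamma = (rng.random((K, m)) < p).astype(int)
gamma[0] = 1                           # the initial packet always arrives

y = np.empty_like(ytilde)
y[0] = ytilde[0]                       # y_1^i = ytilde_1^i
for k in range(1, K):
    y[k] = gamma[k] * ytilde[k] + (1 - gamma[k]) * y[k - 1]   # model (2)

# Each processed value equals the latest successfully received output:
tau = np.zeros((K, m), dtype=int)      # index of the latest received packet
for k in range(1, K):
    tau[k] = np.where(gamma[k] == 1, k, tau[k - 1])
assert np.allclose(y, ytilde[tau, np.arange(m)])
```

Runs of zeros in `gamma` reproduce the multiple consecutive dropouts described in Remark 2, with independent runs per sensor.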

2.2. Closed form of the observation model

If we represent the measurements of the system outputs by the $m \times 1$ vector $\tilde y_k = (\tilde y_k^1, \ldots, \tilde y_k^m)^T$, Eq. (1) can be rewritten in vectorial form as:

$$\tilde y_k = H_k z_k + v_k, \quad k \ge 1, \tag{3}$$

with $H_k = (H_k^{1T}, \ldots, H_k^{mT})^T$ and $v_k = (v_k^1, \ldots, v_k^m)^T$. Similarly, by defining $y_k = (y_k^1, \ldots, y_k^m)^T$ and $\gamma_k = (\gamma_k^1, \ldots, \gamma_k^m)^T$, (2) can be rewritten as follows:

$$y_k = \gamma_k \circ \tilde y_k + (1_m - \gamma_k) \circ y_{k-1}, \quad k > 1; \qquad y_1 = \tilde y_1, \tag{4}$$

where $1_m = (1, \ldots, 1)^T$ denotes the $m \times 1$ all-ones vector and $\circ$ the Hadamard product ($[C \circ D]_{ij} = C_{ij} D_{ij}$).

Next, we present the statistical properties of the processes involved in the vectorial observation model (3)–(4), from which the LS linear estimator of the signal $z_k$ based on the observations $y_1, \ldots, y_L$ will be derived. The following properties are easily inferred from the model Assumptions 2–4 established previously.

Property 1. The m-dimensional process $\{v_k;\ k \ge 1\}$ is a zero-mean white sequence with autocovariance function $\Sigma_k^v = \mathrm{Diag}(\mathrm{Var}[v_k^1], \ldots, \mathrm{Var}[v_k^m])$.

Property 2. The m-dimensional process $\{\gamma_k;\ k > 1\}$ is a white sequence with

• $E[\gamma_k] = p_k = (p_k^1, \ldots, p_k^m)^T$;
• $\mathrm{Cov}[\gamma_k] = K_k^{\gamma} = \mathrm{Diag}\big(p_k^1(1 - p_k^1), \ldots, p_k^m(1 - p_k^m)\big)$;
• $E\big[(1_m - \gamma_k)(1_m - \gamma_k)^T\big] = E_k^1 = \begin{pmatrix} 1 - p_k^1 & \cdots & (1 - p_k^1)(1 - p_k^m) \\ \vdots & \ddots & \vdots \\ (1 - p_k^m)(1 - p_k^1) & \cdots & 1 - p_k^m \end{pmatrix}$;
• $E\big[\gamma_k (1_m - \gamma_k)^T\big] = E_k^2 = \begin{pmatrix} 0 & p_k^1(1 - p_k^2) & \cdots & p_k^1(1 - p_k^m) \\ p_k^2(1 - p_k^1) & 0 & \cdots & p_k^2(1 - p_k^m) \\ \vdots & \vdots & \ddots & \vdots \\ p_k^m(1 - p_k^1) & p_k^m(1 - p_k^2) & \cdots & 0 \end{pmatrix}$.

Property 3. The signal process, $\{z_k;\ k \ge 1\}$, and the noise processes, $\{v_k;\ k \ge 1\}$ and $\{\gamma_k;\ k > 1\}$, are mutually independent.

3. Least-squares linear estimation problem of the signal

Our aim in this paper is to address the LS linear estimation problem of the signal from a set of available observations. Specifically, recursive algorithms for the LS linear filter, $\hat z_{k/k}$, and the LS linear smoother, $\hat z_{k/k+l}$, at the fixed point $k$, for any $l \ge 1$, will be derived. For this purpose, we will use an innovation approach.

3.1. Innovation approach to the LS linear estimation problem

The innovation approach [17] consists of transforming the observation process $\{y_k;\ k \ge 1\}$ into an equivalent process (innovation process) of orthogonal vectors $\{\nu_k;\ k \ge 1\}$, equivalent in the sense that each set $\{\nu_1, \ldots, \nu_L\}$ spans the same linear subspace as $\{y_1, \ldots, y_L\}$; so the estimation problem can be addressed by replacing the observation process by the innovation one. The orthogonality of the new process enables us to express the estimators in a simpler form (which also makes it simpler to derive the algorithms) than that obtained when the estimators are expressed directly as a linear combination of the observations. Specifically, if $\nu_i$ denotes the innovation at time $i$, the LS linear estimator of $z_k$ based on $\nu_1, \ldots, \nu_L$ is the orthogonal projection of $z_k$ onto the linear space generated by $\nu_1, \ldots, \nu_L$,

$$\hat z_{k/L} = \sum_{j=1}^{L} N_{k,j} \nu_j,$$

where the $n \times m$ impulse-response function $N_{k,j}$, $j = 1, \ldots, L$, is calculated from the orthogonality property,

$$E\big[(z_k - \hat z_{k/L}) \nu_j^T\big] = 0, \quad j \le L,$$

which, since $E[\nu_j \nu_s^T] = 0$ for $j \ne s$, leads to the Wiener–Hopf equation:

$$E\big[z_k \nu_j^T\big] = N_{k,j} E\big[\nu_j \nu_j^T\big], \quad j \le L;$$

consequently, by denoting $S_{k,j} = E[z_k \nu_j^T]$ and $\Pi_j = E[\nu_j \nu_j^T]$, the following general expression for the LS linear estimators is obtained:

$$\hat z_{k/L} = \sum_{j=1}^{L} S_{k,j} \Pi_j^{-1} \nu_j. \tag{5}$$

From this general expression, it is clear that the linear smoothers at the fixed point $k$, $\hat z_{k/k+l}$, for any $l \ge 1$, can be recursively calculated from

$$\hat z_{k/k+l} = \hat z_{k/k+l-1} + S_{k,k+l} \Pi_{k+l}^{-1} \nu_{k+l}, \quad l \ge 1, \tag{6}$$

starting from the linear filter, $\hat z_{k/k}$, as initial condition. Hence, the first step for obtaining the signal estimators is to find an explicit formula for the innovations $\nu_j$, from which $S_{k,j} = E[z_k \nu_j^T]$ and $\Pi_j = E[\nu_j \nu_j^T]$ will be calculated.

The innovation at time $j$ is defined as $\nu_j = y_j - \hat y_{j/j-1}$, where $\hat y_{j/j-1}$ is the one-stage linear predictor of $y_j$ which, from the orthogonal projection lemma (OPL) and Property 3, is given by

$$\hat y_{j/j-1} = p_j \circ (H_j \hat z_{j/j-1}) + (1_m - p_j) \circ y_{j-1}, \quad j > 1; \qquad \hat y_{1/0} = 0.$$

Hence the innovation at time $j$ is expressed as

$$\nu_j = y_j - p_j \circ (H_j \hat z_{j/j-1}) - (1_m - p_j) \circ y_{j-1}, \quad j > 1; \qquad \nu_1 = y_1, \tag{7}$$
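The innovation approach can be illustrated independently of the observation model: starting from an arbitrary covariance matrix of three scalar observations and a cross-covariance vector with the signal, a Gram–Schmidt pass produces the orthogonal innovations, and the estimator (5) built from them coincides with the classical normal-equations solution. All numerical values below are assumptions chosen only for illustration.

```python
import numpy as np

# Gram-Schmidt orthogonalisation of the observations into innovations,
# followed by the estimator (5). Ky and Szy are arbitrary illustrative
# second-order moments, not derived from the paper's observation model.
Ky = np.array([[2.0, 0.8, 0.3],        # Ky[i, j] = E[y_i y_j]
               [0.8, 1.5, 0.6],
               [0.3, 0.6, 1.2]])
Szy = np.array([0.9, 0.4, 0.7])        # Szy[j] = E[z y_j]

L = 3
c, Pi = [], []                          # nu_j = c[j] @ y;  Pi[j] = E[nu_j^2]
for j in range(L):
    cj = np.eye(L)[j].copy()
    for i in range(j):                  # subtract projections on past innovations
        cj -= (Ky[j] @ c[i] / Pi[i]) * c[i]
    c.append(cj)
    Pi.append(cj @ Ky @ cj)

# Estimator (5): z_hat = sum_j S_j Pi_j^{-1} nu_j, with S_j = E[z nu_j] = Szy @ c[j]
gain_innov = sum((Szy @ c[j]) / Pi[j] * c[j] for j in range(L))
gain_direct = np.linalg.solve(Ky, Szy)  # classical normal-equations solution
print(np.allclose(gain_innov, gain_direct))  # True: the same LS estimator
```

The point of the transformation is visible in the loop: because each $\nu_j$ is orthogonal to the previous ones, the coefficients decouple and no linear system has to be solved, which is what makes the recursive algorithms of the next sections possible.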

The determination of the innovation (7) requires that of the linear one-stage predictor of the signal, $\hat z_{j/j-1}$. The derivation of the linear one-stage predictor is analogous to that of the filter, so both are derived simultaneously in the following section.

4. One-stage prediction and filtering recursive algorithm

The linear one-stage predictor, $\hat z_{k/k-1}$, and the filter, $\hat z_{k/k}$, of the signal $z_k$ are given by

$$\hat z_{k/k-1} = A_k o_{k-1}, \qquad \hat z_{k/k} = A_k o_k, \quad k \ge 1, \tag{8}$$

where the $s \times 1$ vectors $o_k$ are recursively calculated from

$$o_k = o_{k-1} + J_k \Pi_k^{-1} \nu_k, \quad k \ge 1; \qquad o_0 = 0. \tag{9}$$

The $s \times m$ matrix function $J$ satisfies

$$J_k = \big(1_s p_k^T\big) \circ \big[\big(B_k^T - \Sigma_{k-1}^o A_k^T\big) H_k^T\big], \quad k > 1; \qquad J_1 = B_1^T H_1^T, \tag{10}$$

where $\Sigma_k^o$ is recursively obtained from

$$\Sigma_k^o = \Sigma_{k-1}^o + J_k \Pi_k^{-1} J_k^T, \quad k \ge 1; \qquad \Sigma_0^o = 0. \tag{11}$$

The innovation, $\nu_k$, satisfies

$$\nu_k = y_k - p_k \circ (H_k A_k o_{k-1}) - (1_m - p_k) \circ y_{k-1}, \quad k > 1; \qquad \nu_1 = y_1. \tag{12}$$

The covariance matrix of the innovation, $\Pi_k$, is given by

$$\begin{aligned} \Pi_k = {} & \Sigma_k^y - \big(p_k p_k^T\big) \circ \big(H_k A_k \Sigma_{k-1}^o A_k^T H_k^T\big) - \big((1_m - p_k)(1_m - p_k)^T\big) \circ \Sigma_{k-1}^y \\ & - \big(p_k (1_m - p_k)^T\big) \circ (H_k A_k G_{k-1}) - \big((1_m - p_k) p_k^T\big) \circ \big(G_{k-1}^T A_k^T H_k^T\big), \quad k > 1; \qquad \Pi_1 = \Sigma_1^y, \end{aligned} \tag{13}$$

where $\Sigma_k^y$ is recursively calculated from

$$\Sigma_k^y = \big(K_k^{\gamma} + p_k p_k^T\big) \circ \big(H_k A_k B_k^T H_k^T + \Sigma_k^v\big) + E_k^1 \circ \Sigma_{k-1}^y + E_k^2 \circ \Sigma_{k,k-1}^{\tilde y y} + E_k^{2T} \circ \big(\Sigma_{k,k-1}^{\tilde y y}\big)^T, \quad k > 1; \qquad \Sigma_1^y = H_1 A_1 B_1^T H_1^T + \Sigma_1^v, \tag{14}$$

with $E_k^1$ and $E_k^2$ defined in Property 2, and $\Sigma_{k,k-1}^{\tilde y y}$ obtained from the following expression

$$\Sigma_{k,j}^{\tilde y y} = \big(1_m p_j^T\big) \circ \big(H_k A_k B_j^T H_j^T\big) + \big(1_m (1_m - p_j)^T\big) \circ \Sigma_{k,j-1}^{\tilde y y}, \quad j = 2, \ldots, k-1; \qquad \Sigma_{k,1}^{\tilde y y} = H_k A_k B_1^T H_1^T, \quad k > 1. \tag{15}$$

Finally, the matrix function $G_k$ in (13) is recursively calculated from

$$G_k = J_k + \big(1_s p_k^T\big) \circ \big(\Sigma_{k-1}^o A_k^T H_k^T\big) + \big(1_s (1_m - p_k)^T\big) \circ G_{k-1}, \quad k > 1; \qquad G_1 = J_1. \tag{16}$$

Proof. From the general expression (5), the coefficients $S_{k,j} = E[z_k \nu_j^T]$, $j \le k$, must be calculated in order to determine the filter. Using expression (7) for $\nu_j$, we obtain

$$S_{k,j} = E\big[z_k y_j^T\big] - \big(1_n p_j^T\big) \circ \big(E\big[z_k \hat z_{j/j-1}^T\big] H_j^T\big) - \big(1_n (1_m - p_j)^T\big) \circ E\big[z_k y_{j-1}^T\big], \quad j > 1; \qquad S_{k,1} = E\big[z_k y_1^T\big] = A_k B_1^T H_1^T.$$

Again, taking into account (5) for the predictor, $\hat z_{j/j-1}$, and using that $E[z_k \tilde y_j^T] = A_k B_j^T H_j^T$ for $1 \le j \le k$, and $E[z_k \nu_i^T] = S_{k,i}$, we have

$$S_{k,j} = \big(1_n p_j^T\big) \circ \Big[\Big(A_k B_j^T - \sum_{i=1}^{j-1} S_{k,i} \Pi_i^{-1} S_{j,i}^T\Big) H_j^T\Big], \quad 1 < j \le k; \qquad S_{k,1} = A_k B_1^T H_1^T.$$

Now, using the Hadamard product property

$$\big(1_n (c_1, \ldots, c_m)\big) \circ M_{n \times m} = M_{n \times m} \,\mathrm{Diag}(c_1, \ldots, c_m),$$

and denoting $D_j = \mathrm{Diag}(p_j^1, \ldots, p_j^m)$, we can rewrite this expression for $S_{k,j}$ as follows:

$$S_{k,j} = \Big(A_k B_j^T - \sum_{i=1}^{j-1} S_{k,i} \Pi_i^{-1} S_{j,i}^T\Big) H_j^T D_j, \quad 1 < j \le k; \qquad S_{k,1} = A_k B_1^T H_1^T.$$

If we now introduce a function $J$ satisfying

$$J_j = \Big(B_j^T - \sum_{i=1}^{j-1} J_i \Pi_i^{-1} S_{j,i}^T\Big) H_j^T D_j, \quad 1 < j \le k; \qquad J_1 = B_1^T H_1^T, \tag{17}$$

we conclude that

$$S_{k,j} = A_k J_j, \quad 1 \le j \le k. \tag{18}$$

Hence, if we denote

$$o_k = \sum_{j=1}^{k} J_j \Pi_j^{-1} \nu_j, \quad k \ge 1; \qquad o_0 = 0, \tag{19}$$

which obviously satisfies (9), expressions (8) for the one-stage predictor and the filter are deduced from (5) for $L = k-1, k$, respectively, jointly with (18) and (19).

Next, taking into account that, from (18), $S_{k,j} = A_k J_j$ for $1 \le j \le k$, and by denoting

$$\Sigma_k^o = E\big[o_k o_k^T\big] = \sum_{j=1}^{k} J_j \Pi_j^{-1} J_j^T, \quad k \ge 1; \qquad \Sigma_0^o = 0, \tag{20}$$

expression (10) for $J_k$ is easily derived just making $j = k$ in (17). The recursive formula (11) for $\Sigma_k^o$ is immediately clear from (20). Expression (12) for $\nu_k$ is obtained by substituting $\hat z_{k/k-1} = A_k o_{k-1}$ in (7) for $j = k$.

To obtain the innovation covariance matrices $\Pi_k = E[\nu_k \nu_k^T]$, $k \ge 1$, we express $\Pi_k = E[y_k y_k^T] - E[\hat y_{k/k-1} \hat y_{k/k-1}^T]$; then, by denoting $\Sigma_k^y = E[y_k y_k^T]$ and $G_k = E[o_k y_k^T]$, it is clear that $\Pi_k$ satisfies expression (13).

Recursive relations (14) for $\Sigma_k^y$ with (15) for $\Sigma_{k,k-1}^{\tilde y y}$, and (16) for $G_k$ are derived as follows:

• Clearly, from the model hypotheses, $E[\tilde y_k \tilde y_k^T] = H_k A_k B_k^T H_k^T + \Sigma_k^v$; hence, taking into account Properties 2 and 3, and by denoting $\Sigma_{k,k-1}^{\tilde y y} = E[\tilde y_k y_{k-1}^T]$, expression (14) for $\Sigma_k^y$ is obtained.
• Again, from the model hypotheses, $\Sigma_{k,j}^{\tilde y \tilde y} = E[\tilde y_k \tilde y_j^T] = H_k A_k B_j^T H_j^T$, $j < k$, and using Properties 2 and 3, expression (15) leading to $\Sigma_{k,k-1}^{\tilde y y}$ is clear.
• Now, we write $G_k = E[o_k y_k^T] = E[o_k \nu_k^T] + E[o_k \hat y_{k/k-1}^T]$. Clearly, from (19) and since the innovations constitute a white process, we have $E[o_k \nu_k^T] = J_k$ and $E[o_k \hat y_{k/k-1}^T] = E[o_{k-1} \hat y_{k/k-1}^T]$. Now, since

$$E\big[o_{k-1} \hat y_{k/k-1}^T\big] = E\big[o_{k-1} \big(p_k \circ (H_k A_k o_{k-1}) + (1_m - p_k) \circ y_{k-1}\big)^T\big],$$

expression (16) is obtained from (20). □
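As a concreteness check of the algorithm (8)–(16), the sketch below specialises it to a scalar signal observed by a single sensor ($n = s = m = 1$), where every Hadamard product collapses to an ordinary product; for $m = 1$, $E_k^2 = 0$ and (14) reduces to the convex combination coded below. The resulting filter is compared with a brute-force LS solution obtained by enumerating all dropout patterns and averaging the exact second-order moments. All numerical values are illustrative assumptions, not the paper's simulation example.

```python
import itertools
import numpy as np

# Sketch of the recursive algorithm (8)-(16) for a scalar signal (n = s = 1)
# and a single sensor (m = 1). The separable covariance E[z_k z_j] =
# s2 * a**(k - j), j <= k (A_k = a**k, B_k = s2 * a**(-k)) is an assumption
# chosen for illustration.
K = 5                   # number of processed observations y_1, ..., y_K
p, R = 0.7, 0.4         # success probability p_k and noise variance Var[v_k]
a, s2, H = 0.95, 2.0, 1.0
A = lambda k: a ** k            # A_k
B = lambda k: s2 * a ** (-k)    # B_k

def ls_filter(y):
    """Run (8)-(16) on y[0], ..., y[K-1]; return the filter z_hat_{K/K}."""
    o = So = 0.0        # o_k (Eq. (9)) and Sigma^o_k (Eq. (11))
    Sy_prev = G = 0.0   # Sigma^y_{k-1} (Eq. (14)) and G_{k-1} (Eq. (16))
    for k in range(1, K + 1):
        Ak, Bk = A(k), B(k)
        if k == 1:
            J = Bk * H                              # J_1 = B_1 H_1
            Pi = Sy = H * Ak * Bk * H + R           # Pi_1 = Sigma^y_1
            nu = y[0]                               # nu_1 = y_1
            G_new = J                               # G_1 = J_1
        else:
            J = p * (Bk - So * Ak) * H                          # Eq. (10)
            Sy = p * (H * Ak * Bk * H + R) + (1 - p) * Sy_prev  # Eq. (14), m = 1
            Pi = (Sy - p ** 2 * H * Ak * So * Ak * H            # Eq. (13)
                  - (1 - p) ** 2 * Sy_prev
                  - 2 * p * (1 - p) * H * Ak * G)
            nu = y[k - 1] - p * H * Ak * o - (1 - p) * y[k - 2] # Eq. (12)
            G_new = J + p * So * Ak * H + (1 - p) * G           # Eq. (16)
        o += J * nu / Pi                                        # Eq. (9)
        So += J ** 2 / Pi                                       # Eq. (11)
        Sy_prev, G = Sy, G_new
    return A(K) * o                                             # Eq. (8)

def exact_gain():
    """Exact LS gain E[z_K Y^T] E[Y Y^T]^{-1}, enumerating dropout patterns."""
    Szy, Syy = np.zeros(K), np.zeros((K, K))
    for g in itertools.product((0, 1), repeat=K - 1):
        w = np.prod([p if gi else 1 - p for gi in g])  # pattern probability
        tau = [1]                                      # latest received packet
        for j, gi in enumerate(g, start=2):
            tau.append(j if gi else tau[-1])
        # Cov(ytilde_a, ytilde_b) = H A_max B_min H (+ R when a = b)
        Syy += w * np.array([[H * A(max(u, v)) * B(min(u, v)) * H
                              + (R if u == v else 0.0) for v in tau]
                             for u in tau])
        Szy += w * np.array([A(K) * B(t) * H for t in tau])  # Cov(z_K, y_j)
    return np.linalg.solve(Syy, Szy)

y = np.array([1.0, -0.5, 2.0, 0.3, -1.2])  # an arbitrary observation record
print(abs(ls_filter(y) - exact_gain() @ y) < 1e-8)  # True: the same LS estimate
```

The brute-force check is exponential in the number of samples and only viable for such tiny examples; the recursive algorithm, by contrast, has constant cost per time step, which is the practical point of the theorem.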


5. Fixed-point smoothing algorithm and error covariance matrices

From (6), to calculate the fixed-point smoothing estimators, $\hat z_{k/k+l}$, an expression for $S_{k,k+l} = E[z_k \nu_{k+l}^T] = E[z_k y_{k+l}^T] - E[z_k \hat y_{k+l/k+l-1}^T]$, $k, l \ge 1$, must be obtained. Using Assumption 1 and Properties 2 and 3, it is easy to see that

$$E\big[z_k y_{k+l}^T\big] = \big(1_n p_{k+l}^T\big) \circ \big(B_k A_{k+l}^T H_{k+l}^T\big) + \big(1_n (1_m - p_{k+l})^T\big) \circ E\big[z_k y_{k+l-1}^T\big],$$

which, together with the expression of $\hat y_{k+l/k+l-1}$, yields

$$S_{k,k+l} = \big(1_n p_{k+l}^T\big) \circ \big(B_k A_{k+l}^T H_{k+l}^T\big) - \big(1_n p_{k+l}^T\big) \circ \big(E\big[z_k o_{k+l-1}^T\big] A_{k+l}^T H_{k+l}^T\big),$$

and, defining the function $F_{k,L} = E[z_k o_L^T]$, it is clear that

$$S_{k,k+l} = \big(1_n p_{k+l}^T\big) \circ \big([B_k - F_{k,k+l-1}] A_{k+l}^T H_{k+l}^T\big), \quad k, l \ge 1.$$

Next, using the recursive expression (9) for $o_{k+l}$, the following formula for $F_{k,k+l}$ is deduced:

$$F_{k,k+l} = F_{k,k+l-1} + S_{k,k+l} \Pi_{k+l}^{-1} J_{k+l}^T, \quad k, l \ge 1.$$

Its initial condition, $F_{k,k} = A_k \Sigma_k^o$, is easily derived taking into account that, from the OPL, $F_{k,k} = E[z_k o_k^T] = E[\hat z_{k/k} o_k^T]$, and using (8) and the definition $\Sigma_k^o = E[o_k o_k^T]$. Summarizing these results, the following recursive algorithm is obtained.

5.1. Fixed-point smoothing algorithm

The fixed-point smoother, $\hat z_{k/k+l}$, $l \ge 1$, of the signal $z_k$ is calculated as

$$\hat z_{k/k+l} = \hat z_{k/k+l-1} + S_{k,k+l} \Pi_{k+l}^{-1} \nu_{k+l}, \quad k, l \ge 1,$$

with initial condition given by the filter, $\hat z_{k/k}$, and

$$S_{k,k+l} = \big(1_n p_{k+l}^T\big) \circ \big([B_k - F_{k,k+l-1}] A_{k+l}^T H_{k+l}^T\big), \quad k, l \ge 1.$$

The matrices $F_{k,k+l}$ satisfy the following recursive formula

$$F_{k,k+l} = F_{k,k+l-1} + S_{k,k+l} \Pi_{k+l}^{-1} J_{k+l}^T, \quad k, l \ge 1; \qquad F_{k,k} = A_k \Sigma_k^o, \quad k \ge 1.$$

The filter, $\hat z_{k/k}$, the matrices $J_{k+l}$, the innovations $\nu_{k+l}$ and their covariance matrices $\Pi_{k+l}$ are obtained from the filtering algorithm given in Section 4.

5.2. Error covariance matrices

The performance of the fixed-point smoothing estimators is measured by the magnitude of the estimation errors, $\tilde z_{k/k+l} = z_k - \hat z_{k/k+l}$, $l \ge 1$, and, more specifically, by their covariance matrices, $\Sigma_{k/k+l}^z = E[\tilde z_{k/k+l} \tilde z_{k/k+l}^T]$. In order to obtain a relation which provides these matrices in a recursive way, we use the recursive relation (6) and, taking into account that $\nu_{k+l}$ and $\tilde z_{k/k+l-1}$ are uncorrelated, we obtain

$$\Sigma_{k/k+l}^z = \Sigma_{k/k+l-1}^z - S_{k,k+l} \Pi_{k+l}^{-1} S_{k,k+l}^T, \quad k, l \ge 1.$$

The initial condition is given by the error covariance matrix of the filter which, taking into account that $\Sigma_k^o = E[o_k o_k^T]$, clearly satisfies:

$$\Sigma_{k/k}^z = A_k \big(B_k^T - \Sigma_k^o A_k^T\big), \quad k \ge 1.$$

6. Computer simulation results

In this section, the application of the filtering and fixed-point smoothing algorithms proposed in the current paper is illustrated by a simulation example. Consider a zero-mean two-dimensional signal $\{z_k;\ k \ge 1\}$ with autocovariance function given by

$$E\big[z_k z_s^T\big] = \begin{pmatrix} 0.8^{k-s} & 1.02 \times 0.8^{k-s} \\ 0.9 \times 0.8^{k-s-1} & 0.918 \times 0.8^{k-s-1} \end{pmatrix}, \quad s \le k,$$

which is factorizable according to Assumption 1 just taking

$$A_k = \begin{pmatrix} 0.8^k & 1.02 \times 0.8^k \\ 0.9 \times 0.8^{k-1} & 0.918 \times 0.8^{k-1} \end{pmatrix} \quad \text{and} \quad B_k = \begin{pmatrix} 0.8^{-k} & 0 \\ 0 & 0.8^{-k} \end{pmatrix}.$$

For the simulation, the signal is supposed to be generated by the same signal model as that considered in [8] (Example 2); specifically,

$$z_{k+1} = \begin{pmatrix} 0.8 & 0 \\ 0.9 & 0 \end{pmatrix} z_k + \begin{pmatrix} 0.6 \\ 0.5 \end{pmatrix} w_k, \quad k \ge 1,$$

where $\{w_k;\ k \ge 1\}$ is a zero-mean white noise with $\mathrm{Var}[w_k] = 1$, for all $k$. This example illustrates that Assumption 1 covers situations in which the system matrix $\Phi$ in the state-space model is a singular matrix.

Consider measurements coming from two sensors, $\tilde y_k^i = H_k^i z_k + v_k^i$, $k \ge 1$, $i = 1, 2$, with measurement matrices $H_k^1 = [1, 0]$ and $H_k^2 = [0, 1]$, and independent additive white noises with zero mean and constant variances, $\mathrm{Var}[v_k^1] = 0.5$ and $\mathrm{Var}[v_k^2] = 0.9$, for all $k$. Now, according to the proposed observation model, assume that, at any time $k > 1$, the measured output of the $i$th sensor, $\tilde y_k^i$, can be randomly dropped during network transmission and the measurement processed, $y_k^i$, is either the current measured output, $\tilde y_k^i$, or the latest measurement received, $y_{k-1}^i$, if $\tilde y_k^i$ is lost during transmission; that is,

$$y_k^i = \gamma_k^i \tilde y_k^i + \big(1 - \gamma_k^i\big) y_{k-1}^i, \quad k > 1; \qquad y_1^i = \tilde y_1^i, \quad i = 1, 2,$$

where $\{\gamma_k^i;\ k > 1\}$, $i = 1, 2$, are sequences of independent Bernoulli random variables with $P[\gamma_k^i = 1] = p_k^i$, $\forall k > 1$.

First, to compare the effectiveness of the proposed filtering and fixed-point smoothing estimators, fifty iterations of the respective algorithms have been performed, considering different constant values of the dropout probabilities, $1 - p_k^i$, $i = 1, 2$: on the one hand, $p_k^1 = p^1 = 0.7$, $p_k^2 = p^2 = 0.5$ and, on the other, $p_k^1 = p^1 = 0.5$, $p_k^2 = p^2 = 0.4$. For these values, the filtering and smoothing estimates, as well as the corresponding error variances, have been calculated.

Fig. 1 displays the filtering error variances, $(\Sigma_{k/k}^z)_{jj}$, and the fixed-point smoothing error variances, $(\Sigma_{k/k+1}^z)_{jj}$ and $(\Sigma_{k/k+3}^z)_{jj}$, of the first ($j = 1$) and second ($j = 2$) signal components. For each signal component, this figure shows, on the one hand, that the error variances corresponding to the smoothers are less than those of the filters and, on the other, that as the probabilities $p^i$ increase or, equivalently, as the dropout probabilities $1 - p^i$ of the sensors decrease, the error variances are smaller and, consequently, the performance of the estimators is better. From this figure, it is also deduced that the accuracy of the smoother at each fixed point $k$ is better as the number of available observations increases.
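Under the same single-sensor scalar specialisation used for the filter, the fixed-point smoothing recursions of Section 5 (smoother update, $F_{k,k+l}$ recursion and error-variance recursion) can be sketched on top of the filtering pass and validated against a brute-force LS solution; the monotone decrease of the smoothing error variances just discussed is also checked. All numerical values are illustrative assumptions, not the paper's example.

```python
import itertools
import numpy as np

# Sketch of the fixed-point smoothing recursions of Section 5 for a scalar
# signal and one sensor (m = 1), reusing the filtering recursions of Section 4.
# The covariance E[z_k z_j] = s2 * a**(k - j) (A_k = a**k, B_k = s2 * a**(-k))
# and all numbers are assumptions chosen for illustration.
K, k0 = 5, 2            # observations y_1..y_K; fixed smoothing instant k0
p, R = 0.6, 0.5
a, s2, H = 0.9, 1.5, 1.0
A = lambda k: a ** k
B = lambda k: s2 * a ** (-k)

def smoother(y):
    """Return z_hat_{k0/K} and [Sigma^z_{k0/k0}, ..., Sigma^z_{k0/K}]."""
    o = So = Sy_prev = G = 0.0
    J_, Pi_, nu_ = {}, {}, {}     # per-step filter quantities kept for smoothing
    for k in range(1, K + 1):     # filtering pass (Section 4, m = 1)
        Ak, Bk = A(k), B(k)
        if k == 1:
            J = Bk * H; Pi = Sy = H * Ak * Bk * H + R; nu = y[0]; G_new = J
        else:
            J = p * (Bk - So * Ak) * H
            Sy = p * (H * Ak * Bk * H + R) + (1 - p) * Sy_prev
            Pi = (Sy - p ** 2 * H * Ak * So * Ak * H
                  - (1 - p) ** 2 * Sy_prev - 2 * p * (1 - p) * H * Ak * G)
            nu = y[k - 1] - p * H * Ak * o - (1 - p) * y[k - 2]
            G_new = J + p * So * Ak * H + (1 - p) * G
        o += J * nu / Pi; So += J ** 2 / Pi
        Sy_prev, G = Sy, G_new
        J_[k], Pi_[k], nu_[k] = J, Pi, nu
        if k == k0:
            z = A(k0) * o                            # filter z_hat_{k0/k0}
            F = A(k0) * So                           # F_{k0,k0} = A_{k0} Sigma^o_{k0}
            var = [A(k0) * (B(k0) - So * A(k0))]     # Sigma^z_{k0/k0}
    for k in range(k0 + 1, K + 1):                   # fixed-point smoothing pass
        S = p * (B(k0) - F) * A(k) * H               # S_{k0,k}
        z += S * nu_[k] / Pi_[k]                     # smoother update, Eq. (6)
        F += S * J_[k] / Pi_[k]                      # F recursion
        var.append(var[-1] - S ** 2 / Pi_[k])        # error-variance recursion
    return z, var

def exact_estimate(y):
    """Exact LS estimate of z_{k0} from y_1..y_K, enumerating dropout patterns."""
    cov = lambda u, v: A(max(u, v)) * B(min(u, v))   # E[z_u z_v]
    Szy, Syy = np.zeros(K), np.zeros((K, K))
    for g in itertools.product((0, 1), repeat=K - 1):
        w = np.prod([p if gi else 1 - p for gi in g])
        tau = [1]
        for j, gi in enumerate(g, start=2):
            tau.append(j if gi else tau[-1])
        Syy += w * np.array([[H * cov(u, v) * H + (R if u == v else 0.0)
                              for v in tau] for u in tau])
        Szy += w * np.array([cov(k0, t) * H for t in tau])
    return np.linalg.solve(Syy, Szy) @ y

y = np.array([0.4, -1.1, 0.8, 1.5, -0.3])
z, var = smoother(y)
print(abs(z - exact_estimate(y)) < 1e-8,             # smoother is the exact LS
      all(v2 <= v1 + 1e-12 for v1, v2 in zip(var, var[1:])))  # variances decrease
```

The decreasing `var` sequence mirrors the behaviour observed in Fig. 1: each additional observation can only reduce (never increase) the smoothing error variance, since a nonnegative term $S_{k,k+l} \Pi_{k+l}^{-1} S_{k,k+l}^T$ is subtracted at every step.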

Fig. 1. Filtering and fixed-point smoothing error variances of the first and second signal components, for the values $p^1 = 0.7$, $p^2 = 0.5$ and $p^1 = 0.5$, $p^2 = 0.4$.

Fig. 2. Filtering error variances of the first and second signal components at $k = 50$ versus $p^1$ with $p^2$ varying from 0.1 to 0.9.

Fig. 3. Filtering error variances of the first and second signal components for time-variant probabilities $p_k^1$, when $p^2 = 0.5$ and $p^2 = 0.7$.

Next, we study the filtering error variances, $(\Sigma_{k/k}^z)_{jj}$, $j = 1, 2$, when the probabilities $p^1$ and $p^2$ are varied from 0.1 to 0.9. In all the cases examined, these variances present insignificant variation from the tenth iteration onwards and, consequently, only the values at a specific iteration are shown here. In Fig. 2, the filtering error variances, at $k = 50$, of the first and second signal components are displayed versus $p^1$ (for constant values of $p^2 = 0.1, \ldots, 0.9$). From these figures it is gathered that, as $p^1$ or $p^2$ increases (and, consequently, the probability of packet dropouts in the observations coming from the corresponding sensor decreases), the filtering error variances become smaller and, hence, better estimations are obtained. Note that this improvement is more significant for small values of $p^1$ or $p^2$; that is, when the dropout probability in the observations coming from one of the sensors is large.

Finally, we analyze the filter performance when the probabilities at the first sensor, $p_k^1$, are time-dependent, while at the second sensor they are constant, $p_k^2 = p^2$, for all $k$. Fig. 3 shows the time-variant values of $p_k^1$ and the corresponding filtering error variances of the first and second signal components when $p^2 = 0.7$ and $p^2 = 0.5$; from this figure it is observed that, in general, for both values of $p^2$, the error variances are smaller and, consequently, better estimations are obtained, at those sampling times $k$ at which the probability $p_k^1$ is close to one (which means that the dropout probability is close to zero). Also, agreeing with the comments made about Fig. 1, in all the iterations the error variances are smaller or, equivalently, the performance of the estimators is better, when $p^2$ is greater.
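As a small numerical check of the example setup, the factorization $E[z_k z_s^T] = A_k B_s^T$ stated above can be compared, for $k > s$, with the autocovariance $\Phi^{k-s} P$ implied by the state-space model, where $P$ is the stationary covariance solving $P = \Phi P \Phi^T + \Gamma \Gamma^T$. The fixed-point iteration used for $P$ is one standard way to solve this Lyapunov equation; the sketch is illustrative, not the authors' code.

```python
import numpy as np

# Check that the stated A_k and B_k reproduce E[z_k z_s^T] = Phi**(k-s) P,
# k > s, for the example model z_{k+1} = Phi z_k + Gamma w_k, Var[w_k] = 1.
Phi = np.array([[0.8, 0.0], [0.9, 0.0]])
Gamma = np.array([[0.6], [0.5]])

P = np.zeros((2, 2))                   # stationary covariance via iteration
for _ in range(200):                   # converges geometrically (rho(Phi) = 0.8)
    P = Phi @ P @ Phi.T + Gamma @ Gamma.T

A = lambda k: np.array([[0.8 ** k, 1.02 * 0.8 ** k],
                        [0.9 * 0.8 ** (k - 1), 0.918 * 0.8 ** (k - 1)]])
B = lambda k: 0.8 ** (-k) * np.eye(2)

for k, s in [(2, 1), (5, 3), (10, 1)]:
    assert np.allclose(A(k) @ B(s).T, np.linalg.matrix_power(Phi, k - s) @ P)
print(P)   # approximately [[1, 1.02], [1.02, 1.06]]
```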

7. Conclusions

A recursive algorithm is proposed for the LS linear filter and fixed-point smoother from observations transmitted by different sensors, featuring random packet dropouts. The observation model considered in the current paper covers situations concerning sensor data that are transmitted over communication networks where, generally, multiple sensors with different properties are involved. Also, the possibility of random packet dropouts is a realistic assumption in networked stochastic systems where, usually, transmission losses are unavoidable due to the unreliable network characteristics. We assume that, when the current measurement is not available, the latest measurement transmitted successfully is processed for the estimation. The proposed estimation algorithms do not require knowledge of the signal state-space model, but only information about the covariance functions of the signal and noise processes. Furthermore, our estimators depend only on the dropout probabilities at each sensor, and do not need to know whether a measurement is received or lost at a particular sampling time. To measure the performance of the estimators, recursive formulas for the estimation error covariance matrices are also proposed. To illustrate the theoretical results established in this paper, a simulation example is presented, showing the feasibility of the proposed algorithm.

Acknowledgments

This research is supported by Ministerio de Educación y Ciencia (grant No. MTM2011-24718) and Junta de Andalucía (grant No. P07-FQM-02701).

References

[1] R. Huang, G.V. Záruba, Incorporating data from multiple sensors for localizing nodes in mobile ad hoc networks, IEEE Trans. Mob. Comput. 6 (9) (2007) 1090–1104.
[2] S. Jeong, J.K. Tugnait, Multisensor tracking of a maneuvering target in clutter using IMMPDA filtering with simultaneous measurement update, IEEE Trans. Aerosp. Electron. Syst. 41 (3) (2005) 1122–1131.
[3] V. Malyavej, I.R. Manchester, A.V. Savkin, Precision missile guidance using radar/multiple-video sensor fusion via communication channels with bit-rate constraints, Automatica 42 (5) (2006) 763–769.
[4] C. Han, Z. Zhang, Linear optimal filtering for discrete-time systems with random jump delays, Signal Process. 89 (6) (2009) 1121–1128.
[5] M. Sahebsara, T. Chen, S.L. Shah, Optimal H2 filtering with random sensor delay, multiple packet dropout and uncertain observations, Internat. J. Control 80 (2007) 292–301.
[6] S. Sun, L. Xie, W. Xiao, Y.C. Soh, Optimal linear estimation for systems with multiple packet dropouts, Automatica 44 (5) (2008) 1333–1342.
[7] S. Sun, Linear minimum variance estimators for systems with bounded random measurement delays and packet dropouts, Signal Process. 89 (2009) 1457–1466.
[8] S. Sun, Optimal estimators for systems with finite consecutive packet dropouts, IEEE Signal Process. Lett. 16 (7) (2009) 557–560.
[9] H. Song, L. Yu, W.A. Zhang, H∞ filtering of network-based systems with random delay, Signal Process. 89 (4) (2009) 615–622.
[10] S. Nakamori, A. Hermoso-Carazo, J. Linares-Pérez, A general smoothing equation for signal estimation using randomly delayed observations in the correlated signal-noise case, Digital Signal Process. 16 (2006) 369–388.
[11] R. Caballero-Águila, A. Hermoso-Carazo, J.D. Jiménez-López, J. Linares-Pérez, S. Nakamori, Recursive estimation of discrete-time signals from nonlinear randomly delayed observations, Comput. Math. Appl. 58 (2009) 1160–1168.
[12] A. Hermoso-Carazo, J. Linares-Pérez, Linear and quadratic least-squares estimation using measurements with correlated one-step random delay, Digital Signal Process. 18 (3) (2008) 450–464.
[13] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, A new estimation algorithm from measurements with multiple-step random delays and packet dropouts, Math. Probl. Eng. 2010 (2010), Article ID 258065, 18 pp., http://dx.doi.org/10.1155/2010/258065.
[14] H. Dong, Z. Wang, H. Gao, Robust H∞ filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts, IEEE Trans. Signal Process. 58 (4) (2010) 1957–1966.
[15] J. Linares-Pérez, A. Hermoso-Carazo, R. Caballero-Águila, J.D. Jiménez-López, Least-squares linear filtering using observations coming from multiple sensors with one or two-step random delay, Signal Process. 89 (2009) 2045–2052.
[16] R. Caballero-Águila, A. Hermoso-Carazo, J.D. Jiménez-López, J. Linares-Pérez, S. Nakamori, Signal estimation with multiple delayed sensors using covariance information, Digital Signal Process. 20 (2010) 528–540.
[17] T. Kailath, A.H. Sayed, B. Hassibi, Linear Estimation, Prentice Hall, 2000.

Raquel Caballero-Águila received the M.Sc. degree in Mathematics (Statistics) from the University of Granada (Spain) in 1997 and her Ph.D. in Polynomial Filtering in Systems with Uncertain Observations in 1999. She joined the university in 1997 and is now an Associate Professor at the Department of Statistics and Operations Research, University of Jaén (Spain). Her research is focused on estimation in stochastic systems.

Aurora Hermoso-Carazo received the M.Sc. degree in Mathematics (Statistics) and her Ph.D. in Likelihood Test in Log Normal Processes, both from the University of Granada (Spain), in 1979 and 1984, respectively. She is currently a Professor at the Department of Statistics and Operations Research, University of Granada (Spain). Her current research interests include stochastic systems, filtering and prediction.

Josefa Linares-Pérez received the M.Sc. degree in Mathematics (Statistics) and her Ph.D. in Stochastic Differential Equations, both from the University of Granada (Spain), in 1980 and 1982, respectively. She is currently a Professor at the Department of Statistics and Operations Research, University of Granada (Spain). Her research interest involves the areas of stochastic calculus and estimation in stochastic systems.