Computational Statistics and Data Analysis 55 (2011) 312–323

Contents lists available at ScienceDirect

Computational Statistics and Data Analysis journal homepage: www.elsevier.com/locate/csda

Derivation of centralized and distributed filters using covariance information M.J. García-Ligero ∗ , A. Hermoso-Carazo, J. Linares-Pérez Dpto. Estadística e I.O., Universidad de Granada, Granada, Spain

Article history: Received 5 June 2009; Received in revised form 28 April 2010; Accepted 29 April 2010; Available online 5 May 2010.

Keywords: Multiplicative noise; Filter; Multiple sensors

Abstract. The problem of estimating a degraded image using observations acquired from multiple sensors is addressed when the image degradation is modelled by white multiplicative and additive noise. Assuming that the state-space model is unknown, centralized and distributed filtering algorithms are derived using the information provided by the covariance functions of the processes involved in the measurement equation. The filters obtained are applied to an image affected by multiplicative and additive noise, and the goodness of the centralized and distributed filters is compared by examining the respective filtering error variances. © 2010 Elsevier B.V. All rights reserved.

1. Introduction

The use of multiple sensors to estimate the state of a stochastic system has received considerable attention in recent years; it is extensively applied in complex systems such as industrial tasks, military command or mobile robot navigation, among many others. The fusion of the information provided by the different sensors is generally carried out by two different methods. The first of these, the centralized filter, obtains the state estimator by using all the sensor measurements, which are processed jointly at each time instant; this filter is generally obtained by standard Kalman filtering. The distributed or decentralized method uses the information provided by the local state estimators to obtain the state estimator. Although the precision of the distributed filter is generally lower than that of the centralized one, it presents considerable advantages: less memory is required at the fusion centre, and parallel structures allow the input data rates to be increased. Furthermore, the decentralized approach facilitates fault detection and isolation, and so in practice the distributed filter has been widely studied. In order to obtain the distributed fusion filter, algorithms have been developed that combine the local estimators by means of different optimization criteria. Kim (1994) proposed a multi-sensor optimal fusion estimator in the maximum likelihood sense under the assumptions of normal distributions and independence of the noise processes involved in the system. This same result was subsequently derived using criteria weighted by scalars and matrices in the linear minimum variance sense, where the restrictive assumption of normality is avoided.
Specifically, Sun (2004) obtained a new multi-sensor optimal information fusion Kalman algorithm weighted by matrices, in the linear minimum variance sense, assuming that the noise processes are independent and that the measurement noises of the different sensors are uncorrelated. This assumption is justifiable in some contexts; however, there are several situations where the measurement noises are cross-correlated. Sun and Deng (2004) assumed the measurement noises to be cross-correlated and derived a multi-sensor optimal information fusion decentralized Kalman filter with a two-layer fusion structure. In both cases, the algorithms obtained are equivalent to Kim's algorithm under the hypothesis of normal distributions.

∗ Corresponding address: Dpto. de Estadística e I.O., Facultad de Ciencias, Avda. Fuentenueva s/n, 18071 Granada, Spain. Tel.: +34958246302; fax: +34958243267. E-mail addresses: [email protected] (M.J. García-Ligero), [email protected] (A. Hermoso-Carazo), [email protected] (J. Linares-Pérez). 0167-9473/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.csda.2010.04.027


On the other hand, Bar-Shalom and Campo (1986) considered two sensors and derived an optimal linear combination of the two local estimates, with matrix weights satisfying linear algebraic equations, known as the Millman and Bar-Shalom–Campo formulae, under the hypotheses of uncorrelated and correlated estimation errors, respectively. A generalization of these formulae to the multi-sensor environment was given by Shin et al. (2006, 2007), who obtained the generalized Millman's formula, which provides the state estimator based on the overall measurements, constructed from the local estimates. In the aforementioned papers, it is assumed that the observation model is corrupted only by additive noise and that the state-space model is known; these hypotheses allow the standard Kalman filter to be applied to the estimation problem. However, in practice there are many systems where the observation equation is also affected by multiplicative noise (see Benson et al., 2000; Dragan and Morozan, 1997, and references therein). This has motivated many studies attempting to solve the filtering problem for dynamic systems with multiplicative noise. Specifically, assuming the state-space model is known, several authors have focused on obtaining suboptimal estimators, mainly linear ones (see Nahi, 1969; Rajasekaran et al., 1971; Zhang and Zhang, 2007). On the other hand, in many practical situations the equation generating the state process is unknown, and the estimation problem must then be addressed by obtaining the linear least-squares estimator from the information provided by the covariance functions of the processes involved in the measurement equation, as can be seen in Nakamori et al. (2003, 2008a,b). In a multi-sensor environment, the estimation problem for systems with uncertain observations has been examined by means of a centralized method, assuming different hypotheses regarding the noise (see Hounkpevi and Yaz, 2007; Jiménez-López et al., 2008).
In many practical situations, such as remote sensing or medical imaging, the measurements of an image are acquired from different sensors, and these observations are often distorted versions of the original image. The aim is then to estimate this image as accurately as possible from the information provided by the different sensors. The present paper thus examines the image-estimation problem, working from degraded versions of the original image in a multi-sensor environment. These degradations of the image may be due to the effects of coherent fading, which are modelled by multiplicative noise and are common in images such as synthetic aperture radar (see Chaillan et al., 2007; Coulon et al., 2006; Park et al., 1999; Subrahmanyam et al., 2008) and ultrasonic images (for example, see Argenti and Torricelli, 2003; Karaman et al., 1995; Mateo and Fernández-Caballero, 2009), and to electronic effects modelled by additive noise. To address the estimation problem for an image affected by both types of noise, it is assumed that the equation describing the image is unavailable and, therefore, the state-space approach cannot be used. Under the hypotheses that the random components affecting the image are white noise and that the measurement noises are cross-correlated at the same instant, we derive the centralized and distributed fusion filters. The centralized filter and the local linear least-squares estimators for each sensor are obtained by using the information provided by the covariance functions of the image and the noises. Based on the distributed fusion method, the distributed fusion filter is obtained as the linear combination of the local filters, using the least-squares error as the optimality criterion. Finally, the proposed filters are applied to estimate a degraded image acquired from different sensors.

2. Observation model and problem statement

2.1. Observation model

Consider a monochromatic image represented by a random field {u(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, where u(k,l) denotes the grey level at location (k,l). We assume that the image is corrupted by multiplicative and additive noises and that the observations are made by multiple sensors; specifically, the measurement equation of the ith sensor is modelled as

z^i(k,l) = w^i(k,l) u(k,l) + v_0^i(k,l),  i = 1, …, r,   (1)

where z^i(k,l) is the observed image at location (k,l) from sensor i. Our aim is to estimate the original image, {u(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, from its degraded versions given by (1). For this purpose, we assume the following hypotheses on the image field and the noise processes:

(I) The image, {u(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, has a known mean, E[u(k,l)], and its covariance function is

E[(u(k,l) − E[u(k,l)])(u(s,ξ) − E[u(s,ξ)])] = α(k,l,ξ) β^T(s,l,ξ),  s ≤ k,

where α(·,l,ξ) and β(·,l,ξ) are known 1 × n matrix functions.

(II) For i = 1, …, r, the multiplicative noise, {w^i(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, is a white scalar process with mean μ^i(k,l) and covariance function

E[(w^i(k,l) − μ^i(k,l))(w^i(s,ξ) − μ^i(s,ξ))] = σ²_{w^i}(k,l) δ_K(k−s) δ_K(l−ξ),

where δ_K is the Kronecker delta function. Furthermore, for i ≠ j, the processes {w^i(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M} and {w^j(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M} are assumed to be independent.

(III) For i = 1, …, r, the additive noise, {v_0^i(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, is a white process with zero mean and

E[v_0^i(k,l) v_0^j(s,ξ)] = σ²_{v_0,ij}(k,l) δ_K(k−s) δ_K(l−ξ),  ∀ i, j.


(IV) For all i, j = 1, …, r, the image field, {u(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, and the additive and multiplicative noises, {w^i(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M} and {v_0^j(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M}, are mutually independent.

In order to address the estimation problem, we consider the centred image ū(k,l) = u(k,l) − E[u(k,l)]. Thus, denoting z̄^i(k,l) = z^i(k,l) − E[z^i(k,l)] and v^i(k,l) = v_0^i(k,l) + (w^i(k,l) − μ^i(k,l)) E[u(k,l)], the measurement equation is

z̄^i(k,l) = w^i(k,l) ū(k,l) + v^i(k,l),  i = 1, …, r,   (2)

where the noise {v^i(k,l); 1 ≤ k ≤ N, 1 ≤ l ≤ M} has zero mean and covariance function

E[v^i(k,l) v^i(s,ξ)] = [σ²_{v_0,ii}(k,l) + σ²_{w^i}(k,l) (E[u(k,l)])²] δ_K(k−s) δ_K(l−ξ) = σ²_{v^i}(k,l) δ_K(k−s) δ_K(l−ξ),

and moreover, for i ≠ j,

E[v^i(k,l) v^j(s,ξ)] = σ²_{v_0,ij}(k,l) δ_K(k−s) δ_K(l−ξ).
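As a quick illustration of measurement model (1) and hypotheses (II)–(IV), the following sketch simulates two-sensor observations of a synthetic image. The exponential multiplicative noises anticipate the simulation example of Section 4; the image values and the cross-covariance value 40.0 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, r = 32, 32, 2          # image size and number of sensors (illustrative)
u = rng.uniform(0.0, 255.0, size=(N, M))   # stand-in for the true image u(k, l)

# Multiplicative noises w^i(k, l): white, independent across sensors;
# exponential distributions as in the simulation example of Section 4.
lam = [0.8, 0.7]
w = [rng.exponential(1.0 / lam[i], size=(N, M)) for i in range(r)]

# Additive noises v0^i(k, l): zero-mean white, cross-correlated at the same pixel.
cov_v0 = np.array([[100.0, 40.0],
                   [40.0,  94.0]])         # sigma^2_{v0,ij}; 40.0 is an assumed value
v0 = rng.multivariate_normal(np.zeros(r), cov_v0, size=(N, M))  # shape (N, M, r)

# Measurement equation (1): z^i(k, l) = w^i(k, l) u(k, l) + v0^i(k, l)
z = [w[i] * u + v0[..., i] for i in range(r)]
```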

2.2. Problem statement

Our aim is to estimate ū(k,l) based on the information provided by the measurements {z̄^i(j, l−m), …, z̄^i(j,l), …, z̄^i(j, l+m); j = 1, …, k, i = 1, …, r}. That is, at each location (j,l), we consider the information provided by the symmetric pixels adjacent to it, according to the following scheme:

(j, l−m) •  ···  −→  • (j,l)  ←−  ···  • (j, l+m)

In accordance with this scheme, for each fixed value l, the measurements available to estimate ū(k,l), 1 ≤ k ≤ N, from the ith sensor are expressed as

z̄^i_m(j,l) = W^i_m(j,l) ū_m(j,l) + v^i_m(j,l),  i = 1, …, r, j ≤ k,   (3)

where ū_m(j,l) and v^i_m(j,l) denote the (2m+1)-dimensional vectors defined by

ū_m(j,l) = Col(ū(j, l−m), …, ū(j, l+m)),
v^i_m(j,l) = Col(v^i(j, l−m), …, v^i(j, l+m)),

and W^i_m(j,l) denotes the (2m+1)-dimensional diagonal matrix defined by

W^i_m(j,l) = diag(w^i(j, l−m), …, w^i(j, l+m)).

Note that the pixels located at the boundary are treated in the same way; specifically, to estimate ū(k,l), 1 ≤ k ≤ N, for l < m or l > M − m, we choose m as the maximum number of symmetric pixels available around the pixel being estimated.

In order to set out the estimation problem of ū(k,l) based on the measurements z̄^i_m(j,l), j = 1, …, k, it is necessary to study some statistical properties of the process {ū_m(k,l); 1 ≤ k ≤ N} and the noises {W^i_m(k,l); 1 ≤ k ≤ N} and {v^i_m(k,l); 1 ≤ k ≤ N}. These are easily derived from the model hypotheses.
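The windowing in (3), including the shrinking of m at the image boundary described above, can be sketched as follows; the function names and the toy image are illustrative.

```python
import numpy as np

def window_indices(l, m, M):
    """Column indices of the symmetric window around column l (0-based),
    shrinking m near the boundary as described in Section 2.2."""
    m_eff = min(m, l, M - 1 - l)          # largest symmetric window that fits
    return list(range(l - m_eff, l + m_eff + 1))

def windowed_measurement(z_i, j, l, m):
    """Stack z^i(j, l-m), ..., z^i(j, l+m) into the vector z_m^i(j, l) of (3)."""
    M = z_i.shape[1]
    cols = window_indices(l, m, M)
    return z_i[j, cols]

z1 = np.arange(20.0).reshape(4, 5)        # toy observed image from one sensor
print(windowed_measurement(z1, 1, 2, 1))  # window around pixel (1, 2): columns 1..3
```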

Proposition 1. Under the model hypotheses (I)–(IV), for each fixed value l, the processes involved in measurement equation (3) have the following properties:

(i) {ū_m(k,l); 1 ≤ k ≤ N} has zero mean and

E[ū(k,l) ū_m^T(s,ξ)] = α_m(k,l,ξ) β_m^T(s,l,ξ),  s ≤ k,
E[ū_m(k,l) ū_m^T(s,ξ)] = α_mm(k,l,ξ) β_mm^T(s,l,ξ),  s ≤ k,

where

α_m(k,l,ξ) = (α(k,l,ξ−m), …, α(k,l,ξ+m)),
β_m(s,l,ξ) = diag(β(s,l,ξ−m), …, β(s,l,ξ+m)),
α_mm(k,l,ξ) = diag(α_m(k,l−m,ξ), …, α_m(k,l+m,ξ)),
β_mm(s,l,ξ) = (β_m(s,l−m,ξ), …, β_m(s,l+m,ξ)).


(ii) For i = 1, …, r, the multiplicative noise, {W^i_m(k,l); 1 ≤ k ≤ N}, is a white noise with mean

M^i_m(k,l) = diag(μ^i(k,l−m), …, μ^i(k,l+m))

and covariances

E[(W^i_m(k,l) − M^i_m(k,l))(W^i_m(k,l) − M^i_m(k,l))] = Q^i_m(k,l) = diag(σ²_{w^i}(k,l−m), …, σ²_{w^i}(k,l+m)).

(iii) For i = 1, …, r, the additive noise, {v^i_m(k,l); 1 ≤ k ≤ N}, has zero mean,

E[v^i_m(k,l) v^{iT}_m(k,l)] = R^{ii}_m(k,l) = diag(σ²_{v^i}(k,l−m), …, σ²_{v^i}(k,l+m))

and

E[v^i_m(k,l) v^{jT}_m(k,l)] = R^{ij}_m(k,l) = diag(σ²_{v_0,ij}(k,l−m), …, σ²_{v_0,ij}(k,l+m)).

As commented above, in order to process the information provided by all the sensors, two fusion methods are usually used to estimate the signal. In this paper, assuming that the state-space model is not fully known, the estimation problem is studied by means of both methods, using the information provided by the covariance functions of the image and of the multiplicative and additive noises. Specifically, we derive distributed and centralized fusion filtering algorithms to estimate the image using an innovation approach, which provides a straightforward means of solving the estimation problem.

3. Fusion formulae

In this section, the distributed and centralized fusion filtering algorithms are obtained. Specifically, in Section 3.1 the distributed fusion filtering algorithm is derived. The proposed algorithm is structured in two stages: in the first, we derive the local filters and the filtering error covariances and cross-covariances, while in the second, the distributed fusion filter is obtained as the linear combination of the local filters that minimizes the estimation error covariance. The centralized fusion filtering algorithm is presented in Section 3.2; in this algorithm, it is assumed that the measurements provided by the different sensors are processed jointly at each time.

3.1. Distributed fusion estimation

As observed above, the distributed fusion filter is structured in two stages. Firstly, under the hypotheses set out in Section 2, the recursive filtering algorithm is derived in order to obtain the local estimators, û^i(k,l/k), of the image, ū(k,l), based on the observations {z̄^i_m(1,l), …, z̄^i_m(k,l)} given by (3), for each i = 1, …, r.

An innovation approach is used to derive the least-squares linear estimator of the signal. This approach consists in considering the estimator as a linear combination of the innovations ν^i_m(j,l), j = 1, …, k, and it is based on the one-to-one correspondence between the set of measurements {z̄^i_m(1,l), …, z̄^i_m(k,l)} and the set of orthogonal vectors {ν^i_m(1,l), …, ν^i_m(k,l)}. Therefore, for each fixed value l, the estimator û^i(k,l/k) is given by

û^i(k,l/k) = Σ_{j=1}^{k} S^i_m(k,l,j) (Π^{ii}_m(j,l))^{−1} ν^i_m(j,l),   (4)

where S^i_m(k,l,j) = E[ū(k,l) ν^{iT}_m(j,l)] and Π^{ii}_m(j,l) = E[ν^i_m(j,l) ν^{iT}_m(j,l)] denotes the innovation covariance matrix. The following theorem provides a recursive algorithm for the linear least-squares filter, as well as a measure of its effectiveness.

Theorem 1. Under hypotheses (I)–(IV), for each fixed value l, the filters û^i(k,l/k), for i = 1, …, r, of the signal ū(k,l), for 1 ≤ k ≤ N, can be obtained by

û^i(k,l/k) = α_m(k,l,l) o^i_m(k,l),  1 ≤ k ≤ N,   (5)

where the vector o^i_m(k,l) is obtained recursively by

o^i_m(k,l) = o^i_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} ν^i_m(k,l),  o^i_m(0,l) = 0,   (6)

with

J^i_m(k,l) = [β^T_m(k,l,l) − F^i_m(k−1,l) α^T_mm(k,l,l)] M^i_m(k,l)   (7)

and

F^i_m(k,l) = F^i_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} J^{iT}_mm(k,l),  F^i_m(0,l) = 0.   (8)

The innovation ν^i_m(k,l) and its covariance matrix are given by

ν^i_m(k,l) = z̄^i_m(k,l) − M^i_m(k,l) α_mm(k,l,l) o^i_mm(k−1,l),   (9)

Π^{ii}_m(k,l) = (Q^i_m(k,l) + (M^i_m(k,l))²) ∘ (α_mm(k,l,l) β^T_mm(k,l,l)) + R^{ii}_m(k,l) − M^i_m(k,l) α_mm(k,l,l) F^i_mm(k−1,l) α^T_mm(k,l,l) M^i_m(k,l),   (10)

where ∘ denotes the Hadamard product.

The vector o^i_mm(k,l) and the matrices J^i_mm(k,l) and F^i_mm(k,l) verify

o^i_mm(k,l) = o^i_mm(k−1,l) + J^i_mm(k,l)(Π^{ii}_m(k,l))^{−1} ν^i_m(k,l),  o^i_mm(0,l) = 0,   (11)

J^i_mm(k,l) = [β^T_mm(k,l,l) − F^i_mm(k−1,l) α^T_mm(k,l,l)] M^i_m(k,l),   (12)

F^i_mm(k,l) = F^i_mm(k−1,l) + J^i_mm(k,l)(Π^{ii}_m(k,l))^{−1} J^{iT}_mm(k,l),  F^i_mm(0,l) = 0.   (13)

The filtering error covariance, P^{ii}(k,l/k), for 1 ≤ k ≤ N, satisfies

P^{ii}(k,l/k) = α(k,l,l) β^T(k,l,l) − α_m(k,l,l) D^{ii}_m(k,l) α^T_m(k,l,l),   (14)

with

D^{ii}_m(k,l) = D^{ii}_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} J^{iT}_m(k,l),  D^{ii}_m(0,l) = 0.   (15)

Proof. To determine the filter, we start by deriving an expression for the innovation ν^i_m(k,l) = z̄^i_m(k,l) − ẑ^i_m(k,l/k−1), firstly determining the one-stage predictor ẑ^i_m(k,l/k−1) which, from the Orthogonal Projection Lemma and the hypotheses of the model, is given by ẑ^i_m(k,l/k−1) = M^i_m(k,l) û^i_m(k,l/k−1), where

û^i_m(k,l/k−1) = Col(û^i(k,l−m/k−1), …, û^i(k,l+m/k−1)).

Then,

ν^i_m(k,l) = z̄^i_m(k,l) − M^i_m(k,l) û^i_m(k,l/k−1),  1 ≤ k ≤ N.   (16)

Taking into account expression (16) for the innovation, hypothesis (IV), the properties given in Proposition 1 and using (4) for û^i_m(j,l/j−1), we have

S^i_m(k,l,j) = E[ū(k,l) ν^{iT}_m(j,l)] = α_m(k,l,l) β^T_m(j,l,l) M^i_m(j,l) − Σ_{p=1}^{j−1} S^i_m(k,l,p)(Π^{ii}_m(p,l))^{−1} S^{iT}_mm(j,l,p) M^i_m(j,l),  j ≤ k,

where S^i_mm(j,l,p) = Col(S^i_m(j,l−m,p), …, S^i_m(j,l+m,p)).

Now, introducing J^i_m(j,l) such that

J^i_m(j,l) = [β^T_m(j,l,l) − Σ_{p=1}^{j−1} J^i_m(p,l)(Π^{ii}_m(p,l))^{−1} S^{iT}_mm(j,l,p)] M^i_m(j,l),   (17)

we have

S^i_m(k,l,j) = α_m(k,l,l) J^i_m(j,l),  j ≤ k.   (18)

Substituting (18) in (4) and noting o^i_m(k,l) = Σ_{j=1}^{k} J^i_m(j,l)(Π^{ii}_m(j,l))^{−1} ν^i_m(j,l), expression (5) for the filter is obtained, and the recursive relation (6) is immediate from the definition of o^i_m(k,l).

In order to derive (7), setting j = k in (17) and taking into account that, from (18), S^i_mm(k,l,j) = α_mm(k,l,l) J^i_mm(j,l), where J^i_mm(j,l) = Col(J^i_m(j,l−m), …, J^i_m(j,l+m)), we have

J^i_m(k,l) = [β^T_m(k,l,l) − Σ_{p=1}^{k−1} J^i_m(p,l)(Π^{ii}_m(p,l))^{−1} J^{iT}_mm(p,l) α^T_mm(k,l,l)] M^i_m(k,l)

and, noting F^i_m(k,l) = Σ_{p=1}^{k} J^i_m(p,l)(Π^{ii}_m(p,l))^{−1} J^{iT}_mm(p,l) = E[o^i_m(k,l) o^{iT}_mm(k,l)], where o^i_mm(k,l) = Col(o^i_m(k,l−m), …, o^i_m(k,l+m)), expression (7) is obtained. From the definition of F^i_m(k,l), the recursive relation (8) is immediate.

A similar reasoning to that used to obtain the filter û^i(k,l/k) leads to the following expression for the predictor, û^i_m(k,l/k−1) = α_mm(k,l,l) o^i_mm(k−1,l), and substituting in (16), expression (9) for the innovation is obtained.

We now obtain the covariance matrix of the innovation process. Since the estimation error is orthogonal to the estimator, Π^{ii}_m(k,l) = E[z̄^i_m(k,l) z̄^{iT}_m(k,l)] − E[ẑ^i_m(k,l/k−1) ẑ^{iT}_m(k,l/k−1)]. Using the expression of the observation predictor, ẑ^i_m(k,l/k−1) = M^i_m(k,l) α_mm(k,l,l) o^i_mm(k−1,l), and defining F^i_mm(k,l) = E[o^i_mm(k,l) o^{iT}_mm(k,l)],

Π^{ii}_m(k,l) = E[z̄^i_m(k,l) z̄^{iT}_m(k,l)] − M^i_m(k,l) α_mm(k,l,l) F^i_mm(k−1,l) α^T_mm(k,l,l) M^i_m(k,l).

Taking into account the model hypotheses,

E[z̄^i_m(k,l) z̄^{iT}_m(k,l)] = (Q^i_m(k,l) + (M^i_m(k,l))²) ∘ (α_mm(k,l,l) β^T_mm(k,l,l)) + R^{ii}_m(k,l)

and (10) is obtained. From relations (6)–(8), expressions (11)–(13) are easily derived.

Now, taking into account hypothesis (I) and using (5), it is clear that the filtering error covariance, P^{ii}(k,l/k) = E[ū(k,l) ū^T(k,l)] − E[û^i(k,l/k) û^{iT}(k,l/k)], satisfies

P^{ii}(k,l/k) = α(k,l,l) β^T(k,l,l) − α_m(k,l,l) D^{ii}_m(k,l) α^T_m(k,l,l),

where D^{ii}_m(k,l) = E[o^i_m(k,l) o^{iT}_m(k,l)], and so (14) is obtained. Finally, the recursive relation (15) for D^{ii}_m(k,l) is obtained from (6), taking into account that the innovation, ν^i_m(k,l), is uncorrelated with o^i_m(k−1,l). □
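A minimal sketch of the recursions of Theorem 1 in the scalar case m = 0, n = 1 (where the auxiliary "mm" quantities coincide with the "m" ones), for a stationary signal with covariance σ²A^{|k−s|} factored as α(k) = σ²A^k, β(s) = A^{−s}. All numerical values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary signal along k: cov(u(k), u(s)) = sigma2 * A**abs(k - s),
# factored as alpha(k) * beta(s) with alpha(k) = sigma2 * A**k, beta(s) = A**(-s).
A, sigma2 = 0.95, 1.0
mu, sw2, R = 1.0, 0.3, 0.5     # multiplicative-noise mean/variance, additive variance
Nk = 30

alpha = lambda k: sigma2 * A**k
beta = lambda k: A**(-k)

# Simulate the centred signal and the observations z(k) = w(k) u(k) + v(k).
u = np.empty(Nk + 1)
u[0] = rng.normal(0.0, np.sqrt(sigma2))
for k in range(1, Nk + 1):
    u[k] = A * u[k - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1 - A**2)))
w = mu + rng.normal(0.0, np.sqrt(sw2), Nk + 1)
z = w * u + rng.normal(0.0, np.sqrt(R), Nk + 1)

# Recursions (5)-(15) in the scalar case; for m = 0 the matrix D of (15)
# coincides with F of (8), since J_mm = J_m.
o = F = 0.0
P = []
for k in range(1, Nk + 1):
    nu = z[k] - mu * alpha(k) * o                           # innovation (9)
    Pi = (sw2 + mu**2) * alpha(k) * beta(k) + R \
         - mu**2 * alpha(k) * F * alpha(k)                  # innovation variance (10)
    J = (beta(k) - F * alpha(k)) * mu                       # gain (7)
    o = o + J * nu / Pi                                     # (6)
    F = F + J**2 / Pi                                       # (8) = (15) here
    P.append(alpha(k) * beta(k) - alpha(k) * F * alpha(k))  # error variance (14)
```

The filtering error variances P(k) stay below the prior variance σ² and settle towards a steady value as measurements accumulate.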

The filtering algorithms given in Theorem 1 provide r linear filters of the original image, û_i(k,l/k) = û^i(k,l/k) + E[u(k,l)], i = 1, …, r, based on the observations {z^i_m(1,l), …, z^i_m(k,l)}, i = 1, …, r, obtained from the r sensors. From the information provided by these local filters, we now obtain the estimator of the image as a linear combination of them; that is, the distributed fusion filter, û_F(k,l/k), is given by

û_F(k,l/k) = Σ_{i=1}^{r} a_i û_i(k,l/k),  a_i ∈ R,   (19)

such that the least-squares error is minimum. The following theorem provides the expression of the linear fusion filter and its estimation error covariance.

Theorem 2. Let û_i(k,l/k), i = 1, …, r, be the local filters of the original image; then the linear least-squares fusion filter is given by

û_F(k,l/k) = [e^T Σ^{−1}(k,l/k) e]^{−1} e^T Σ^{−1}(k,l/k) û(k,l/k),  1 ≤ k ≤ N,   (20)

where

û(k,l/k) = Col(û_1(k,l/k), …, û_r(k,l/k)),  e = Col(1, …, 1),
Σ(k,l/k) = (P^{ij}(k,l/k))_{i,j=1,…,r},

and P^{ij}(k,l/k) = E[(u(k,l) − û_i(k,l/k))(u(k,l) − û_j(k,l/k))]. The fusion filtering error variance is given by

P_F(k,l/k) = [e^T Σ^{−1}(k,l/k) e]^{−1},  1 ≤ k ≤ N.   (21)

Proof. This result follows immediately by applying the Lagrange multiplier method. □
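Expressions (20) and (21) can be sketched directly. For two local estimates with uncorrelated errors of variances 2 and 1, the weights are proportional to the inverse variances (1/3 and 2/3), and the fused error variance, 2/3, is below both local variances; the numbers are illustrative.

```python
import numpy as np

def fuse(estimates, Sigma):
    """Distributed fusion filter (20) and its error variance (21):
    combination of the local filters with weights [e^T Sigma^{-1} e]^{-1} e^T Sigma^{-1},
    where Sigma collects the local error covariances and cross-covariances."""
    e = np.ones(len(estimates))
    Sinv = np.linalg.inv(Sigma)
    denom = e @ Sinv @ e                       # e^T Sigma^{-1} e
    u_F = (e @ Sinv @ np.asarray(estimates)) / denom
    P_F = 1.0 / denom
    return u_F, P_F

# Two local filters with error variances 2 and 1 and uncorrelated errors:
u_F, P_F = fuse([10.0, 13.0], np.array([[2.0, 0.0], [0.0, 1.0]]))
```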


Since the expression of the distributed fusion estimator, û_F(k,l/k), depends on the filtering error cross-covariances, P^{ij}(k,l/k), i, j = 1, …, r, we need to calculate their explicit formulae; these are derived in the following theorem.

Theorem 3. Under hypotheses (I)–(IV), the local filtering error cross-covariance between the ith and the jth sensors has the following expression for each l and for 1 ≤ k ≤ N:

P^{ij}(k,l/k) = α(k,l,l) β^T(k,l,l) − α_m(k,l,l) [D^{ii}_m(k,l) + D^{jj}_m(k,l) − D^{ij}_m(k,l)] α^T_m(k,l,l),  i, j = 1, …, r,   (22)

where

D^{ij}_m(k,l) = D^{ij}_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} Π^{ij}_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_m(k,l)
  + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} M^i_m(k,l) α_mm(k,l,l) [F^{jT}_m(k−1,l) − F^{jiT}_m(k−1,l)]
  + [F^i_m(k−1,l) − F^{ij}_m(k−1,l)] α^T_mm(k,l,l) M^j_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_m(k,l),   (23)
D^{ij}_m(0,l) = 0,

with

F^{ij}_m(k,l) = F^{ij}_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} Π^{ij}_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_mm(k,l)
  + [F^i_m(k−1,l) − F^{ij}_m(k−1,l)] α^T_mm(k,l,l) M^j_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_mm(k,l)
  + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} M^i_m(k,l) α_mm(k,l,l) [F^j_mm(k−1,l) − F^{ij}_mm(k−1,l)],   (24)
F^{ij}_m(0,l) = 0,

F^{ij}_mm(k,l) = F^{ij}_mm(k−1,l) + J^i_mm(k,l)(Π^{ii}_m(k,l))^{−1} Π^{ij}_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_mm(k,l)
  + [F^i_mm(k−1,l) − F^{ij}_mm(k−1,l)] α^T_mm(k,l,l) M^j_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_mm(k,l)
  + J^i_mm(k,l)(Π^{ii}_m(k,l))^{−1} M^i_m(k,l) α_mm(k,l,l) [F^j_mm(k−1,l) − F^{ij}_mm(k−1,l)],   (25)
F^{ij}_mm(0,l) = 0,

Π^{ij}_m(k,l) = Q^{ij}_m(k,l) ∘ (α_mm(k,l,l) β^T_mm(k,l,l)) + R^{ij}_m(k,l) − M^i_m(k,l) α_mm(k,l,l) F^{ij}_mm(k−1,l) α^T_mm(k,l,l) M^j_m(k,l),   (26)

where Q^{ij}_m(k,l) = μ^i_m(k,l) μ^{jT}_m(k,l), with μ^i_m(k,l) = Col(μ^i(k,l−m), …, μ^i(k,l+m)), for i ≠ j.

Proof. Using the Orthogonal Projection Lemma and hypothesis (I), the filtering error cross-covariance is

P^{ij}(k,l/k) = α(k,l,l) β^T(k,l,l) − E[û^j(k,l/k) û^{jT}(k,l/k)] − E[û^i(k,l/k) û^{iT}(k,l/k)] + E[û^i(k,l/k) û^{jT}(k,l/k)].

Taking into account the expression of the estimator given by (5), the definition of D^{ii}_m(k,l) and defining D^{ij}_m(k,l) = E[o^i_m(k,l) o^{jT}_m(k,l)], expression (22) is obtained. In order to derive a recursive relation for D^{ij}_m(k,l), we use the recursive expression for o^i_m(k,l) given by (6), obtaining

D^{ij}_m(k,l) = D^{ij}_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} E[ν^i_m(k,l) o^{jT}_m(k−1,l)] + E[o^i_m(k−1,l) ν^{jT}_m(k,l)](Π^{jj}_m(k,l))^{−1} J^{jT}_m(k,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} Π^{ij}_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_m(k,l),

where Π^{ij}_m(k,l) = E[ν^i_m(k,l) ν^{jT}_m(k,l)]. Now it is necessary to derive, in the first place, E[ν^i_m(k,l) o^{jT}_m(k−1,l)]. To do so, taking into account the definition of the innovation, ν^i_m(k,l) = z̄^i_m(k,l) − ẑ^i_m(k,l/k−1), we have

E[ν^i_m(k,l) o^{jT}_m(k−1,l)] = E[z̄^i_m(k,l) o^{jT}_m(k−1,l)] − E[ẑ^i_m(k,l/k−1) o^{jT}_m(k−1,l)].

Substituting the expressions of the observation and of the observation predictor, ẑ^i_m(k,l/k−1) = M^i_m(k,l) α_mm(k,l,l) o^i_mm(k−1,l), and taking into account the hypotheses of the model,

E[ν^i_m(k,l) o^{jT}_m(k−1,l)] = M^i_m(k,l) α_mm(k,l,l) (E[o^j_mm(k−1,l) o^{jT}_m(k−1,l)] − E[o^i_mm(k−1,l) o^{jT}_m(k−1,l)]).

From the definition of F^j_m(k,l) and noting F^{ij}_m(k,l) = E[o^i_m(k,l) o^{jT}_mm(k,l)], we have

E[ν^i_m(k,l) o^{jT}_m(k−1,l)] = M^i_m(k,l) α_mm(k,l,l) [F^{jT}_m(k−1,l) − F^{jiT}_m(k−1,l)].   (27)

Clearly,

E[o^i_m(k−1,l) ν^{jT}_m(k,l)] = [F^i_m(k−1,l) − F^{ij}_m(k−1,l)] α^T_mm(k,l,l) M^j_m(k,l)

and the recursive expression (23) for D^{ij}_m(k,l) is obtained.

Next, we obtain the recursive expression for F^{ij}_m(k,l). From the recursive relations (6) and (11),

F^{ij}_m(k,l) = F^{ij}_m(k−1,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} E[ν^i_m(k,l) o^{jT}_mm(k−1,l)] + E[o^i_m(k−1,l) ν^{jT}_m(k,l)](Π^{jj}_m(k,l))^{−1} J^{jT}_mm(k,l) + J^i_m(k,l)(Π^{ii}_m(k,l))^{−1} Π^{ij}_m(k,l)(Π^{jj}_m(k,l))^{−1} J^{jT}_mm(k,l).

Then we only need to calculate E[ν^i_m(k,l) o^{jT}_mm(k−1,l)]. A reasoning analogous to that used to obtain (27) produces

E[ν^i_m(k,l) o^{jT}_mm(k−1,l)] = M^i_m(k,l) α_mm(k,l,l) [F^j_mm(k−1,l) − F^{ij}_mm(k−1,l)],

where F^{ij}_mm(k,l) = E[o^i_mm(k,l) o^{jT}_mm(k,l)], and (24) is obtained. From the expression for F^{ij}_m(k,l) it is straightforward to derive the recursive expression (25) for F^{ij}_mm(k,l).

Finally, we derive expression (26). By the Orthogonal Projection Lemma, Π^{ij}_m(k,l) = E[z̄^i_m(k,l) z̄^{jT}_m(k,l)] − E[ẑ^i_m(k,l/k−1) ẑ^{jT}_m(k,l/k−1)]. Using expression (3) and taking into account the independence hypothesis and the properties set out in Proposition 1, we have

Π^{ij}_m(k,l) = Q^{ij}_m(k,l) ∘ (α_mm(k,l,l) β^T_mm(k,l,l)) + R^{ij}_m(k,l) − E[ẑ^i_m(k,l/k−1) ẑ^{jT}_m(k,l/k−1)].

Using the expression of the one-stage observation predictor, ẑ^i_m(k,l/k−1) = M^i_m(k,l) α_mm(k,l,l) o^i_mm(k−1,l), and the definition of F^{ij}_mm(k,l), expression (26) is obtained. □

3.2. Centralized fusion estimation

For each fixed value l, assuming that the observations acquired from the different sensors, z̄^i_m(k,l), i = 1, …, r, are available at the fusion centre, the observation data can be combined to transform the r measurement equations described by (3) into the following single measurement equation:

z̄_m(k,l) = W_m(k,l) H(k,l) ū_m(k,l) + v_m(k,l),  1 ≤ k ≤ N,   (28)

where

z̄_m(k,l) = Col(z̄^1_m(k,l), …, z̄^r_m(k,l)),
W_m(k,l) = diag(W^1_m(k,l), …, W^r_m(k,l)),
H(k,l) = Col(I_{2m+1}, …, I_{2m+1}),
v_m(k,l) = Col(v^1_m(k,l), …, v^r_m(k,l)).

The following proposition provides some statistical properties of the multiplicative and additive noises involved in measurement equation (28), which are necessary to set out the centralized filtering algorithm. These are easily derived from Proposition 1.

Proposition 2. Under the model hypotheses, the noise processes of (28) have the following properties:

(i) {W_m(k,l); 1 ≤ k ≤ N} has mean M_m(k,l) = diag(M^1_m(k,l), …, M^r_m(k,l)) and

E[(W_m(k,l) − M_m(k,l))(W_m(k,l) − M_m(k,l))] = Q_m(k,l) = diag(Q^1_m(k,l), …, Q^r_m(k,l)),

where M^i_m(k,l) = E[W^i_m(k,l)] and Q^i_m(k,l) = E[(W^i_m(k,l) − M^i_m(k,l))(W^i_m(k,l) − M^i_m(k,l))] are as given in Proposition 1.

(ii) The additive noise {v_m(k,l); 1 ≤ k ≤ N} has zero mean and

E[v_m(k,l) v^T_m(k,l)] = R(k,l) =
[ R^{11}_m(k,l)  ···  R^{1r}_m(k,l)
  ⋮              ⋱    ⋮
  R^{r1}_m(k,l)  ···  R^{rr}_m(k,l) ],

where R^{ii}_m(k,l) = E[v^i_m(k,l) v^{iT}_m(k,l)] and R^{ij}_m(k,l) = E[v^i_m(k,l) v^{jT}_m(k,l)], ∀ i, j = 1, …, r, are as given in Proposition 1.

For each fixed value l, the linear least-squares estimation problem of the signal ū(k,l), 1 ≤ k ≤ N, based on the measurements {z̄_m(1,l), …, z̄_m(k,l)} is also addressed from an innovation approach; that is, the linear estimator, û(k,l/k), is expressed as the linear combination of the innovations,

û(k,l/k) = Σ_{j=1}^{k} S_m(k,l,j) Π^{−1}_m(j,l) ν_m(j,l),


T where Sm (k, l, j) = E [u(k, l)νm (j, l)] and Πm (j, l) = E [νm (j, l)νmT (j, l)] denotes the innovation covariance matrix. The following theorem provides a recursive algorithm for the centralized linear fusion filter, as well as a measure of the goodness of this filter, which is obtained by means of reasoning analogous to that used for Theorem 1 (for this reason, the proof is not repeated).

Theorem 4. Under hypotheses (I)–(IV), for each fixed value l, the filter of the signal u(k, l), 1 ≤ k ≤ N, based on the observations {z_m(1, l), . . . , z_m(k, l)} can be obtained by

û(k, l/k) = α_m(k, l, l) o_m(k, l),   1 ≤ k ≤ N,

where the vector o_m(k, l) is obtained recursively by

o_m(k, l) = o_m(k − 1, l) + J_m(k, l) Π_m^{-1}(k, l) ν_m(k, l),   1 ≤ k ≤ N,   o_m(0, l) = 0.

The matrix J_m(k, l) verifies

J_m(k, l) = [β_m(k, l, l) − F_m(k − 1, l) α_mm^T(k, l, l)] H^T(k, l) M_m^T(k, l),

and the matrix F_m(k, l) is obtained recursively by

F_m(k, l) = F_m(k − 1, l) + J_m(k, l) Π_m^{-1}(k, l) J_mm^T(k, l),   1 ≤ k ≤ N,   F_m(0, l) = 0.

The innovation, ν_m(k, l), and its covariance matrix are given by

ν_m(k, l) = z_m(k, l) − M_m(k, l) H(k, l) α_mm(k, l, l) o_mm(k − 1, l),

Π_m(k, l) = Q(k, l) ◦ (H(k, l) α_mm(k, l, l) β_mm^T(k, l, l) H^T(k, l)) + R(k, l)
            − M_m(k, l) H(k, l) α_mm(k, l, l) F_mm(k − 1, l) α_mm^T(k, l, l) H^T(k, l) M_m^T(k, l),

where

Q(k, l) = [ Q_m^{11}(k, l) + (M_m^1(k, l))^2    Q_m^{12}(k, l)                       · · ·   Q_m^{1r}(k, l)
            Q_m^{21}(k, l)                       Q_m^{22}(k, l) + (M_m^2(k, l))^2    · · ·   Q_m^{2r}(k, l)
              ·                                    ·                                  ·        ·
            Q_m^{r1}(k, l)                       Q_m^{r2}(k, l)                      · · ·   Q_m^{rr}(k, l) + (M_m^r(k, l))^2 ].

The vector o_mm(k, l) and matrices J_mm(k, l) and F_mm(k, l), for 1 ≤ k ≤ N, verify

o_mm(k, l) = o_mm(k − 1, l) + J_mm(k, l) Π_m^{-1}(k, l) ν_m(k, l),   o_mm(0, l) = 0,
J_mm(k, l) = [β_mm(k, l, l) − F_mm(k − 1, l) α_mm^T(k, l, l)] H^T(k, l) M_m^T(k, l),
F_mm(k, l) = F_mm(k − 1, l) + J_mm(k, l) Π_m^{-1}(k, l) J_mm^T(k, l),   F_mm(0, l) = 0.

The filtering error covariance, P(k, l/k), for 1 ≤ k ≤ N, satisfies

P(k, l/k) = α(k, l, l) β^T(k, l, l) − α_m(k, l, l) D_m(k, l) α_m^T(k, l, l),

with

D_m(k, l) = D_m(k − 1, l) + J_m(k, l) Π_m^{-1}(k, l) J_m^T(k, l),   D_m(0, l) = 0.
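As an illustrative sketch only, the recursions of Theorem 4 can be exercised in a hypothetical scalar setting (one sensor, m = 0, H = 1), where the factor functions collapse to scalars and F_m = F_mm = D_m. All numerical values below (A, σ², λ, R) are invented for the sketch, not taken from the paper's example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar instance of the Theorem 4 recursions (one sensor, m = 0, H = 1).
# The factorization E[u(k)u(s)] = alpha(k) * beta(s), s <= k, plays the role
# of hypothesis (I); all numbers here are illustrative.
sigma2, A = 4.0, 0.95
alpha = lambda k: sigma2 * A**k
beta = lambda k: A**(-k)

lam = 0.8                     # multiplicative noise w ~ exp(lam)
M = 1.0 / lam                 # E[w]
Q = 1.0 / lam**2              # Var[w]; note E[w^2] = Q + M^2
R = 1.0                       # additive-noise variance

# simulate a stationary AR(1) signal whose covariance matches alpha * beta
N = 200
u = np.empty(N + 1)
u[0] = rng.normal(0.0, np.sqrt(sigma2))
for k in range(1, N + 1):
    u[k] = A * u[k - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1 - A**2)))
z = rng.exponential(1.0 / lam, N + 1) * u + rng.normal(0.0, np.sqrt(R), N + 1)

# Theorem 4 recursions; in this scalar one-sensor case F_m = F_mm = D_m.
o = F = 0.0
P = np.empty(N)               # filtering error variance P(k/k)
for k in range(1, N + 1):
    a, b = alpha(k), beta(k)
    nu = z[k] - M * a * o                          # innovation nu_m(k)
    Pi = (Q + M**2) * a * b + R - (M * a)**2 * F   # innovation variance
    J = (b - F * a) * M                            # gain J_m(k)
    o += J * nu / Pi
    F += J**2 / Pi
    P[k - 1] = a * b - a**2 * F                    # error variance
u_hat = alpha(N) * o          # filter of u(N) given z(1), ..., z(N)
print(P[:3], P[-1])
```

The error variances start below the signal variance σ² and stabilize after a few steps, mirroring the behaviour reported for Fig. 5.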



4. Simulation examples

In this section, the proposed filters are applied to restore a monochromatic image, ‘‘Saturn.tif’’, with 328 × 438 pixels and 256 grey levels, which is corrupted by multiplicative and additive noises and acquired by multiple sensors. The image field, {u(k, l); 1 ≤ k ≤ 328, 1 ≤ l ≤ 438}, has mean ū = 50.4938 and its covariance function can be expressed in semidegenerate kernel form as follows,

E[(u(k, l) − ū)(u(s, ξ) − ū)] = σ² A1^{|k−s|} A2^{|l−ξ|},

where A1 and A2 are the correlations of adjacent points in the sample data in the vertical and horizontal directions, respectively. Therefore, taking into account hypothesis (I),

α(k, l, ξ) = σ² A1^k,   β(s, l, ξ) = A1^{−s} A2^{|l−ξ|}.
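A quick numerical sanity check of this factorization, using the parameter values fitted later in the example (σ² = 5318.1, A1 = 0.9984, A2 = 0.9991) and a handful of arbitrary index choices:

```python
import numpy as np

# Check that alpha(k, l, xi) * beta(s, l, xi) reproduces the covariance
# sigma2 * A1**|k - s| * A2**|l - xi| whenever s <= k (hypothesis (I)).
sigma2, A1, A2 = 5318.1, 0.9984, 0.9991

def alpha(k, l, xi):
    return sigma2 * A1**k

def beta(s, l, xi):
    return A1**(-s) * A2**abs(l - xi)

def cov(k, s, l, xi):
    return sigma2 * A1**abs(k - s) * A2**abs(l - xi)

# arbitrary index choices with s <= k
for (k, s, l, xi) in [(10, 3, 5, 5), (50, 50, 7, 2), (300, 120, 40, 38)]:
    assert np.isclose(alpha(k, l, xi) * beta(s, l, xi), cov(k, s, l, xi))
print("semidegenerate factorization holds for s <= k")
```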


Fig. 1. Observed image from sensor 1.

Fig. 2. Observed image from sensor 2.

The values of A1, A2 and σ² are determined from the original image; it is obtained that A1 = 0.9984, A2 = 0.9991 and σ² = 5318.1. In this example, the image is assumed to be affected by different noises and the corresponding observations are acquired from two sensors. The measurement equation is

z^i(k, l) = w^i(k, l) u(k, l) + v_0^i(k, l),   i = 1, 2,   (29)

for 1 ≤ k ≤ 328 and 1 ≤ l ≤ 438. The multiplicative noises satisfy the hypotheses set out in Section 2 and have exponential distributions, exp(λ_i), with λ_1 = 0.8 and λ_2 = 0.7, respectively. The measurement noises are white with normal distributions, N(0, σ²_{v_0^i}), where σ²_{v_0^1} = 100 and σ²_{v_0^2} = 94; moreover, the measurement noises are cross-correlated, with E[v_0^i(k, l) v_0^j(s, ξ)] = 0.6 δ_K(k − s) δ_K(l − ξ), i, j = 1, 2, i ≠ j. The original image is corrupted by the above noises in accordance with model (29), and the simulated images corresponding to each sensor are shown in Figs. 1 and 2, respectively.

As the application of the proposed filters requires the image field to be centred, let us consider the zero-mean signal obtained by subtracting the mean of the pixel levels, ū, from the pixel levels; the observation equations are then given by (2). The estimate of the original image is recovered by adding the mean back to the filter of the centred signal, û(k, l/k) + ū.

As mentioned above, in addressing the problem of image restoration, the observations are taken at pixels adjacent to those where the estimation is required. Specifically, in this example we consider m = 2. The stacked observations for the two sensors are then

z_2^i(k, l) = Col(z^i(k, l − 2), z^i(k, l − 1), z^i(k, l), z^i(k, l + 1), z^i(k, l + 2)),   i = 1, 2,

for 1 ≤ k ≤ 328, 2 < l < 437. In order to derive the filters of the boundary levels, u(k, 2) and u(k, 437), 1 ≤ k ≤ 328, we consider the observations corresponding to m = 1. For the pixels corresponding to the first and last columns, the observations were made only at the pixel being estimated.

In order to obtain the distributed filter, firstly, the least-squares filtering algorithm is applied to obtain the filtered images using the observations provided by each sensor. The following step is to apply the distributed fusion filtering algorithm; the estimated image, based on all measurements and constructed from the local estimates, is shown in Fig. 3. The filtered image obtained using the centralized method given in Theorem 4 is shown in Fig. 4.
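The degradation model (29) and the m = 2 observation stacking can be sketched as follows; the patch size is reduced to a toy 32 × 44 grid and the signal is replaced by a constant placeholder at the image mean, so only the noise structure and the stacking follow the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of model (29) on a small hypothetical patch: two sensors,
# exponential multiplicative noises (lambda1 = 0.8, lambda2 = 0.7) and
# cross-correlated additive Gaussian noises (variances 100 and 94,
# cross-covariance 0.6), as in the example.
K, L = 32, 44                          # toy patch (the paper uses 328 x 438)
u = np.full((K, L), 50.4938)           # placeholder signal at the image mean

w1 = rng.exponential(1 / 0.8, (K, L))  # mean 1/lambda1
w2 = rng.exponential(1 / 0.7, (K, L))  # mean 1/lambda2

# additive noises: jointly Gaussian per pixel, Cov = [[100, 0.6], [0.6, 94]]
C = np.array([[100.0, 0.6], [0.6, 94.0]])
v = rng.multivariate_normal([0.0, 0.0], C, size=(K, L))
z1 = w1 * u + v[..., 0]
z2 = w2 * u + v[..., 1]

def stack_m2(z, k, l):
    """Stacked observation z_2^i(k, l) from the m = 2 adjacent pixels."""
    return np.array([z[k, l - 2], z[k, l - 1], z[k, l], z[k, l + 1], z[k, l + 2]])

obs = stack_m2(z1, 10, 10)
print(obs.shape)
```

At l ≤ 2 or l ≥ L − 2 the full stack is unavailable, which is why the example falls back to m = 1 (and then m = 0) near the image boundary.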


Fig. 3. Distributed filtered image, m = 2.

Fig. 4. Centralized filtered image, m = 2.

Fig. 5. Filtering error variance.

Filtering error variance algorithms were also implemented to derive a measure of the goodness of the proposed estimators. Fig. 5 shows the centralized and distributed fusion filtering error variances for the different values of m. For each fixed value m, the figure shows the filtering error variances for each k and l ≥ m, which are independent of l. The variances are shown only for k ≤ 30, since beyond this point they have clearly stabilized. Examination of this figure reveals that the centralized fusion filtering variances are lower than, but very close to, the distributed fusion filtering variances. This fact leads us to conclude that the proposed distributed filter provides good


estimators. Taking into account the computational advantages of this method with regard to the centralized filter method, it would be interesting to use this approach to address the estimation problem in a multi-sensor environment. The filtering error variances in both cases decrease as the value of m increases; thus, the best results are obtained when m = 2.

5. Conclusions

In this paper, centralized and distributed fusion filters are proposed to estimate an image that has been degraded by multiplicative and additive noises and observed from multiple sensors. The proposed algorithms do not require knowledge of the state-space model or the Gaussian noise hypothesis, only the covariance functions of the image and noises. In addition, we derived formulae for the centralized and distributed filtering error variances, which allowed us to compare the goodness of the proposed filters. Specifically, analysis of the different examples showed that the centralized filtering error variances are slightly less than those of the distributed approach. This fact, together with the computational advantages provided by the distributed fusion filter method, leads us to prefer the distributed filter method to the centralized filter.

Acknowledgements

This work has been partially supported by the ‘‘Ministerio de Educación y Ciencia’’ of Spain under contract MTM2008-05567.

References

Argenti, F., Torricelli, G., 2003. Speckle suppression in ultrasonic images based on undecimated wavelets. Eurasip J. Appl. Signal Process. 5, 470–478.
Bar-Shalom, Y., Campo, L., 1986. The effect of the common process noise on the two-sensor fused-track covariance. IEEE Trans. Aerosp. Electron. Syst. 22 (11), 803–805.
Benson, O., Vicent, F., Stoica, P., Gershman, A.B., 2000. Approximate maximum likelihood estimators for array processing in multiplicative noise environments. IEEE Trans. Signal Process. 48 (9), 2506–2518.
Chaillan, F., Fraschini, C., Courmontagne, P., 2007. Speckle noise reduction in SAS imagery. Signal Process. 87, 762–787.
Coulon, M., Tourneret, J.Y., Swam, A., 2006. Detection of multiplicative noise in stationary random process using second and higher order statistics. IEEE Trans. Signal Process. 48 (9), 2566–2575.
Dragan, V., Morozan, T., 1997. Mixed input–output optimization for time-varying Ito systems with state dependent noise. Dyn. Contin. Discrete Impuls. Syst. 3, 317–333.
Hounkpevi, F.O., Yaz, E., 2007. Robust minimum variance linear state estimators for multiple sensors with different failure rates. Automatica 43, 1274–1280.
Jiménez-López, J.D., Linares-Pérez, J., Nakamori, S., Caballero-Águila, R., Hermoso-Carazo, A., 2008. Signal estimation based on covariance information from observations featuring correlated uncertainty and coming from multiple sensors. Signal Process. 88 (4), 2998–3065.
Karaman, M., Alperkutay, M., Bozdagi, G., 1995. An adaptive speckle suppression filter for medical ultrasonic imaging. IEEE Trans. Med. Imaging 14 (2), 283–292.
Kim, K.H., 1994. Development of track to track fusion algorithm. In: Proceedings of the American Control Conference, Maryland, pp. 1037–1041.
Mateo, J.L., Fernández-Caballero, A., 2009. Finding out general tendencies in speckle noise reduction in ultrasound images. Expert Syst. Appl. 36, 7786–7797.
Nahi, N.E., 1969. Optimal recursive estimation with uncertain observation. IEEE Trans. Inform. Theory 15, 457–462.
Nakamori, S., García-Ligero, M.J., Hermoso-Carazo, A., Linares-Pérez, J., 2003. Estimation from uncertain observations in distributed parameter systems using covariance information. In: Proceedings of 3rd International Symposium on Image and Signal Processing and Analysis.
Nakamori, S., García-Ligero, M.J., Hermoso-Carazo, A., Linares-Pérez, J., 2008a. Filtering of images corrupted by multiplicative and white plus coloured additive noise using covariance information. Math. Comput. Modelling 47, 298–311.
Nakamori, S., García-Ligero, M.J., Hermoso-Carazo, A., Linares-Pérez, J., 2008b. Filtering in generalized signal-dependent noise model using covariance information. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 3, 809–817.
Park, J.M., Song, W.J., Pearlman, W.A., 1999. Speckle filtering of SAR images based on adaptive windowing. Proc. IEE Vis., Image Signal Process. 146, 191–197.
Rajasekaran, P.K., Satyanarayana, N., Srinath, M.D., 1971. Optimum linear estimation of stochastic signals in the presence of multiplicative noise. IEEE Trans. Aerosp. Electron. Syst. 7, 462–468.
Shin, V., Lee, Y., Choi, T-S., 2006. Generalized Millman's formula and its application for estimation problems. Signal Process. 86, 257–266.
Shin, V., Shevlyakov, G., Kim, K., 2007. A new fusion formula and its application to continuous-time linear systems with multisensor environment. Comput. Statist. Data Anal. 52, 840–854.
Subrahmanyam, G.R.K.S., Rajagopalan, A.N., Aravind, R., 2008. A recursive filter for despeckling SAR images. IEEE Trans. Image Process. 17, 1969–1972.
Sun, S-L., 2004. Multi-sensor optimal information fusion Kalman filters with applications. Aerosol Sci. Technol. 8, 57–62.
Sun, S-L., Deng, Z-L., 2004. Multi-sensor optimal information fusion Kalman filter. Automatica 40, 1017–1023.
Zhang, L., Zhang, X-D., 2007. An optimal filtering algorithm for systems with multiplicative/additive noises. IEEE Signal Process. Lett. 14 (7), 469–472.