Multidimensional filtering based on a tensor approach

Signal Processing 85 (2005) 2338–2353
www.elsevier.com/locate/sigpro
Damien Muti, Salah Bourennane
GSM Team, Institut Fresnel UMR CNRS 6133, Université Aix-Marseille 3, EGIM Nord, DU de Saint Jérôme, 13397 Marseille Cedex 20, France

Received 13 June 2003
Available online 12 May 2005

Abstract

A new multidimensional modelling of data has recently been suggested, which can be applied in a wide range of signal processing fields. Many studies have proposed new tensorial mathematical tools in order to process multidimensional data. With a view to perfecting this multidimensional model, this paper presents a new tensor approach for multidimensional data filtering. A theoretical expression of the n-mode filters is established, based on a specific modelling of the desired information. The optimization criterion used in this tensorial filtering is the minimization of the mean square error between the estimated signal and the desired signal. This minimization leads to estimated n-mode filters which can be considered as an extension of the well-known Wiener filter to a particular mode. An alternating least squares algorithm is proposed to determine each n-mode Wiener filter. This new multimode Wiener filtering method is tested for noise reduction in multicomponent seismic data. A comparative study with classical bidimensional filtering methods based on principal component analysis is also proposed and presents encouraging results.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Multilinear algebra; Tensor; Multiway arrays; Subspace method; Tucker3 decomposition; LRTA; HOSVD; Multidimensional Wiener filtering

Corresponding author. Tel.: +33 4 9127 8202. E-mail addresses: [email protected] (D. Muti), [email protected] (S. Bourennane).

0165-1684/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.sigpro.2004.11.029

1. Introduction

Multidimensional models can be used in fields as diverse as chemometrics, psychology, data analysis or signal processing [1]. In signal processing, tensors are built on vector spaces associated with physical quantities such as length, width, height, time, color channel, etc. Each mode of the tensor is associated with one physical quantity. For instance, in image processing, color images can be modelled as three-dimensional tensors: two dimensions for the rows and columns, and one dimension for the color map. In seismics, when a linear antenna composed of multicomponent sensors is used, a three-dimensional modelling of the data can be adopted as well: one mode is associated with the spatial sensors of the antenna, one with time and one with the wave polarization components.

A tensor of order $N$ is a multidimensional array whose entries are accessed via $N$ indices. It is written $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, with elements $a_{i_1 \ldots i_N}$, where $\mathbb{R}$ is the set of real numbers. For all $n = 1, \ldots, N$, the $n$-mode, represented by index $i_n$, is associated with a physical quantity such as length, width, height, time, color channel, etc. This mode is built on a vector space $E_n$ of dimension $I_n$, where $I_n$ is the number of samples in the physical dimension associated with mode $n$.

In this study, multicomponent data are considered as inseparable whole tensors. This model naturally implies the use of processing techniques based on multilinear algebra, especially tensor decomposition and tensor approximation methods. The Tucker3 model [2] is the most frequently used tensor decomposition model. Under some conditions, this model generalizes:





- The matrix singular value decomposition (SVD): this generalization is known as the higher-order singular value decomposition (HOSVD) [3].
- The lower rank matrix approximation: this generalization is known as the lower rank-$(K_1, \ldots, K_N)$ tensor approximation (LRTA) [4].

The LRTA has recently been used to process lower rank approximations of the cumulant tensors involved in independent component analysis (ICA) and blind source separation [5,6]. Since the LRTA and the HOSVD represent a higher-order principal component analysis (PCA) [7], they have also recently been used for noise filtering of multicomponent seismic waves [8,9] and color images [10]. In this paper, we propose to improve this multimode filtering by adopting a statistical framework. The optimization criterion chosen for the filtering is the minimization of the mean square error (MSE) between the estimated signal $\widehat{\mathcal{X}}$ and the initial signal $\mathcal{X}$. The purpose of this approach is to increase the filtering quality by estimating the n-mode filters that lead to the best initial signal estimation with regard to the minimization of the MSE. In extension of the one-dimensional case, the n-mode filters found in this approach can be considered as the multidimensional generalization of Wiener filtering.

The paper is organized as follows. Section 2 briefly presents the Tucker3 lower rank-$(K_1, \ldots, K_N)$ tensor approximation mathematical tool and the studies related to its alternating least squares (ALS) implementation. In Section 3, we propose a new method for multidimensional filtering. We first give a short overview of the way the Tucker3 lower rank-$(K_1, \ldots, K_N)$ tensor approximation has been applied in signal processing, especially for higher-order PCA. Then, we propose a new approach that generalizes Wiener filtering to multidimensional data. Stating the assumption that the desired signal is a weighted combination of signal subspace basis vectors, the expression of the n-mode Wiener filters that minimize the MSE between the estimated signal and the desired signal is established. The computation details of this section can be found in Appendices A and B. Finally, in Section 4, these methods are applied to the special case of noise reduction in multicomponent seismic data and compared to classical bidimensional filtering methods based on PCA. To familiarize the reader with the paper's notations: regardless of its superscript or subscript position, index $n$ refers to the $n$th mode of a tensor, and index $k$ indicates the $k$th step of an iterative numerical process.

2. Lower rank-$(K_1, \ldots, K_N)$ tensor approximation (LRTA)

In the Tucker3 tensor decomposition model, any $N$th-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ can be expressed as

$\mathcal{A} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}$,   (1)

where $U^{(n)} \in St(I_n, K_n)$, $\mathcal{S} \in \mathbb{R}^{K_1 \times \cdots \times K_N}$ is the core tensor and $\times_n$ is the $n$-mode product, the properties of which can all be found in [3,4]. $St(I_n, K_n)$ is the $(I_n, K_n)$-Stiefel matrix manifold, i.e. the manifold of $I_n \times K_n$ column-wise orthogonal matrices. When $K_n = I_n$, the Tucker3 tensor decomposition is also called the HOSVD [3], and when $K_n < I_n$, it is the LRTA [4].
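The $n$-mode product and the associated $n$-mode unfolding are the basic operations behind Eq. (1). As an illustration, here is a minimal NumPy sketch; the function names `unfold`, `fold` and `mode_product` are ours, and the unfolding ordering below is one possible convention, not necessarily the one used in [3,4]:

```python
import numpy as np

def unfold(A, n):
    """n-mode unfolding: the n-mode vectors of A become the columns
    of an I_n x (product of the other dimensions) matrix."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of `unfold` for a tensor of the given target shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_product(A, U, n):
    """n-mode product A x_n U: multiply every n-mode vector of A by U."""
    shape = list(A.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(A, n), n, shape)

# Tucker3 synthesis, Eq. (1): A = S x_1 U1 x_2 U2 x_3 U3
S = np.random.randn(2, 3, 2)                    # core tensor
U = [np.linalg.qr(np.random.randn(I, K))[0]     # column-wise orthogonal factors
     for I, K in [(5, 2), (6, 3), (4, 2)]]
A = S
for n, Un in enumerate(U):
    A = mode_product(A, Un, n)
print(A.shape)  # (5, 6, 4)
```

With $K_n = I_n$ and square orthogonal factors, the same synthesis reproduces the HOSVD form of the decomposition.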


Let us give a brief review of the tensor rank definitions, which can be found in [4,11]. The n-rank of tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, denoted $\mathrm{Rank}_n(\mathcal{A})$, is the dimension of its n-mode vector space $E_n$, composed of the $I_n$-dimensional vectors obtained from $\mathcal{A}$ by varying index $i_n$ and keeping the other indices fixed. $\mathcal{A}$ is called a rank-$(K_1, \ldots, K_N)$ tensor if $\mathrm{Rank}_n(\mathcal{A}) = K_n$ for every $n = 1, \ldots, N$.

Given a real $N$th-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, the rank-$(K_1, \ldots, K_N)$ tensor approximation problem consists in finding the rank-$(K_1, \ldots, K_N)$ tensor $\mathcal{B}$, with $K_n < I_n$ for all $n \in [1, N]$, which minimizes the quadratic tensor Frobenius norm

$\|\mathcal{A} - \mathcal{B}\|^2$.   (2)

Tensor $\mathcal{B}$ can be expressed with the Tucker3 model as $\mathcal{B} = \mathcal{D} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}$. The least square solution implies that $\mathcal{D} = \mathcal{A} \times_1 U^{(1)T} \times_2 U^{(2)T} \cdots \times_N U^{(N)T}$ [12,4]. Thus the best lower rank-$(K_1, \ldots, K_N)$ tensor approximation of $\mathcal{A}$ is

$\mathcal{B} = \mathcal{A} \times_1 P^{(1)} \times_2 P^{(2)} \cdots \times_N P^{(N)}$,   (3)

where

$P^{(n)} = U^{(n)} U^{(n)T}$   (4)

is the projector on the $K_n$-dimensional subspace of $E_n$ which enables the minimization of (2). It is shown in [12] that minimizing (2) is equivalent to maximizing the function

$g(U^{(1)}, \ldots, U^{(N)}) = \|\mathcal{D}\|^2$   (5)

with respect to $U^{(1)}, \ldots, U^{(N)}$, which is a difficult non-linear least square problem that is generally solved thanks to an ALS algorithm referred to as the TUCKALS3 algorithm [12,4,11,13]. The purpose of this algorithm is to build matrix series $\{U^{(n)}_k\}_{k \in \mathbb{N}}$, for all $n = 1, \ldots, N$, that converge to the optimal matrices $U^{(n)}$ which maximize (5). Each iteration of the TUCKALS3 algorithm is composed of $N$ steps in which every series $\{U^{(n)}_k\}$ is computed by recurrence, by fixing the series $\{U^{(m)}_k\}$, $m \in \{1, \ldots, N\} - \{n\}$, to the value of their last iteration, and maximizing the function

$h^n_k(V^{(n)}) = \|\mathcal{A} \times_1 U^{(1)T}_{k+1} \cdots \times_{n-1} U^{(n-1)T}_{k+1} \times_n V^{(n)T} \times_{n+1} U^{(n+1)T}_k \cdots \times_N U^{(N)T}_k\|^2 = \mathrm{tr}(V^{(n)T} C^{(n)}_k V^{(n)})$,   (6)

where $V^{(n)}$ is the unknown matrix variable corresponding to the searched matrix $U^{(n)}_{k+1}$, and where

$C^{(n)}_k = A_n \left( P^{(1)}_{k+1} \otimes \cdots \otimes P^{(n-1)}_{k+1} \otimes P^{(n+1)}_k \otimes \cdots \otimes P^{(N)}_k \right) A_n^T$,   (7)

in which $P^{(n)}_k$ is the projector

$P^{(n)}_k = U^{(n)}_k U^{(n)T}_k$.   (8)

The term $U^{(n)}_{k+1}$ is the argument that maximizes the function $h^n_k(V^{(n)})$ over $St(I_n, K_n)$. Its column vectors are the $K_n$ eigenvectors associated with the $K_n$ largest eigenvalues of matrix $C^{(n)}_k$ [12]. Let us remark that, whatever the iteration $k$, the generic term of matrix $C^{(n)}_k$ is

$C^{(n),k}_{ij} = \langle \mathcal{A}_{i_n = i} \,|\, \mathcal{B}^{(n),k}_{i_n = j} \rangle$,   (9)

where $\mathcal{B}^{(n),k}$ is obtained by applying to tensor $\mathcal{A}$ the projectors $P^{(m)}_k$ on every m-mode but the n-mode:

$\mathcal{B}^{(n),k} = \mathcal{A} \times_1 P^{(1)}_{k+1} \cdots \times_{n-1} P^{(n-1)}_{k+1} \times_{n+1} P^{(n+1)}_k \cdots \times_N P^{(N)}_k$.   (10)

$C^{(n),k}_{ij}$ is the scalar product between two $(N-1)$th-order subtensors extracted from $\mathcal{A}$ and $\mathcal{B}^{(n),k}$, found by fixing the n-mode index to $i$ and $j$, respectively. From a signal processing point of view, $C^{(n)}_k$ represents the covariance matrix between the n-mode vectors of tensors $\mathcal{A}$ and $\mathcal{B}^{(n),k}$, on which the higher-order PCA is based [12,14]. Several studies on the Tucker3 decomposition and its ALS implementation, also called the higher-order power method, were recently presented [4,13]. Other studies stress the major importance of rank-one tensor approximation in the canonical decomposition of tensors known as the CANDECOMP/PARAFAC decomposition [15,16,13], and in higher-order statistics and blind source separation, where some maximization problems are


equivalent to the best rank-one approximation of a fourth-order cumulant tensor [17,11,18].
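The TUCKALS3 iteration described above can be sketched in a few lines of NumPy. This is an illustrative implementation under our own conventions (HOSVD initialization and a fixed iteration count instead of a convergence test); the eigenvectors of $C^{(n)}_k$ are obtained equivalently as the leading left singular vectors of the n-mode unfolding of the projected tensor:

```python
import numpy as np

def unfold(A, n):
    # n-mode unfolding: n-mode vectors as columns
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def multi_mode_product(A, mats):
    """Apply mats[n] in mode n for every n (None = leave that mode alone)."""
    for n, M in enumerate(mats):
        if M is not None:
            A = np.moveaxis(np.tensordot(M, A, axes=(1, n)), 0, n)
    return A

def tuckals3(A, ranks, n_iter=20):
    """ALS for the best rank-(K1,...,KN) approximation, TUCKALS3 style."""
    N = A.ndim
    # initialization: K_n leading left singular vectors of each unfolding
    U = [np.linalg.svd(unfold(A, n))[0][:, :K] for n, K in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(N):
            # compress A on every mode but n with the current factors
            mats = [None if m == n else U[m].T for m in range(N)]
            B = multi_mode_product(A, mats)
            # eigenvectors of C_k^(n) = leading left singular vectors of B_n
            U[n] = np.linalg.svd(unfold(B, n))[0][:, :ranks[n]]
    return U

A = np.random.randn(5, 6, 4)
U = tuckals3(A, (2, 3, 2))
# best lower-rank approximation, Eq. (3): project A on each P^(n) = U^(n) U^(n)T
B = multi_mode_product(A, [Un @ Un.T for Un in U])
print(B.shape)  # (5, 6, 4)
```

Since $\mathcal{B}$ is an orthogonal projection of $\mathcal{A}$ in every mode, $\|\mathcal{A} - \mathcal{B}\| \leq \|\mathcal{A}\|$ always holds.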

3. Multidimensional filtering

3.1. LRTA in signal processing

The HOSVD and LRTA mathematical tools have recently been introduced in the signal processing field for ICA and blind source separation [5,6]. The lower rank-$(K_1, K_2, K_3)$ truncation of the HOSVD [4] has also been used in seismics to elaborate a three-dimensional PCA (3D-PCA) of third-order data tensors [9]. The purpose of this HOSVD-based technique is to decompose the seismic cube into elementary cubes containing different seismic events. In this approach, the two-dimensional subspace separation methods are extended to the case of three-dimensional data. The two-dimensional data model can also be extended to a tensorial data model [9] in which the multidimensional data tensor $\mathcal{R}$ results from the record of a multidimensional signal $\mathcal{X}$ and additional noise $\mathcal{B}$:

$\mathcal{R} = \mathcal{X} + \mathcal{B}$.   (11)

This model, established for the three-dimensional case, can obviously be generalized to dimensions larger than three. Let us consider the general case, and let $E_n$ be the vector space of dimension $I_n$ associated with the n-mode of tensor $\mathcal{R}$. We assume that $E_n$ is the direct sum of two orthogonal subspaces: the signal subspace $E^1_n$ of dimension $K_n$ and the noise subspace $E^2_n$ of dimension $I_n - K_n$, such that $E_n = E^1_n \oplus E^2_n$. Under this assumption, the goal of the 3D-PCA is to estimate the signal tensor through the consecutive projection of $\mathcal{R}$ on the signal subspaces, in every mode. The estimated signal tensor is given by the best rank-$(K_1, \ldots, K_N)$ tensor approximation of $\mathcal{R}$:

$\widehat{\mathcal{X}} = \mathcal{R} \times_1 P_1 \times_2 P_2 \cdots \times_N P_N$,   (12)

where, for all $n = 1, \ldots, N$, $P_n$ is the orthogonal projector on the $K_n$-dimensional signal subspace $E^1_n$.
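A one-pass version of this subspace-projection filtering (a truncated-HOSVD variant of relation (12), rather than the fully iterative LRTA) can be sketched as follows; the function name and rank choices are illustrative:

```python
import numpy as np

def unfold(A, n):
    # n-mode unfolding: n-mode vectors as columns
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_product(A, M, n):
    # n-mode product A x_n M
    return np.moveaxis(np.tensordot(M, A, axes=(1, n)), 0, n)

def subspace_projection_filter(R, ranks):
    """Project R, in every mode, on the K_n-dimensional dominant subspace
    of its n-mode unfolding (one-pass sketch of relation (12); the full
    LRTA would refine the subspaces iteratively, as in TUCKALS3)."""
    X_hat = R
    for n, K in enumerate(ranks):
        Rn = unfold(R, n)
        eigvals, V = np.linalg.eigh(Rn @ Rn.T)     # n-mode covariance EVD
        Vs = V[:, np.argsort(eigvals)[::-1][:K]]   # signal-subspace basis
        X_hat = mode_product(X_hat, Vs @ Vs.T, n)  # projector P_n = Vs Vs^T
    return X_hat

R = np.random.randn(10, 200, 3)                    # noisy data tensor
X_hat = subspace_projection_filter(R, (1, 100, 3)) # ranks used in Section 4
print(X_hat.shape)  # (10, 200, 3)
```

The projectors only remove energy, so the filtered tensor never has a larger norm than the data tensor.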


It is possible to give a new interpretation of the 3D-PCA of noisy data tensors. Indeed, since the n-mode product $\mathcal{R} \times_n P_n$ consists of the matrix products between matrix $P_n$ and the $I_n$-dimensional vectors obtained from $\mathcal{R}$ by varying index $i_n$ and keeping the other indices fixed, from a signal processing point of view the n-mode product represents an n-mode filtering of tensor $\mathcal{R}$ by the n-mode filter $P_n$. Thus, the lower rank-$(K_1, \ldots, K_N)$ tensor approximation of $\mathcal{R}$ represents a multidimensional filtering by the n-mode filters $P_n$, for $n = 1, \ldots, N$. However, the initial goal of the Tucker3 decomposition [12] is to approximate a non-noisy tensor with a tensor of lower rank, and in the 3D-PCA approach the TUCKALS3 algorithm is applied just as it is. Thus, in a filtering framework, the LRTA does not necessarily provide the n-mode filters such that the estimated signal tensor $\widehat{\mathcal{X}}$ is as close as possible to the initial signal tensor $\mathcal{X}$. We propose, in Section 3.2, to develop a new multidimensional filtering method in order to overcome this problem.

3.2. Multidimensional Wiener filtering

In this section, multidimensional data are still considered as inseparable whole tensors. Statistical and filtering frameworks are adopted. The optimization criterion chosen for the proposed method is the classical minimization of the MSE between the estimated multidimensional signal $\widehat{\mathcal{X}}$ and the initial multidimensional signal $\mathcal{X}$. The purpose of this approach is to increase the filtering quality by estimating the n-mode filters that lead to the best desired signal estimation with regard to the minimization of the MSE. The computations related to Sections 3.2.2 and 3.2.3 can be found in Appendices A and B.

3.2.1. Formulation of the problem

Let us consider a multidimensional Nth-order data tensor $\mathcal{R} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ resulting from the record of a multidimensional signal $\mathcal{X}$ with additive white, Gaussian and independent-from-signal noise $\mathcal{B}$: $\mathcal{R} = \mathcal{X} + \mathcal{B}$. The white Gaussian noise assumption can be formulated as

$\mathrm{E}(b_{i_1 \ldots i_N} b_{j_1 \ldots j_N}) = \sigma^2\, \delta_{i_1 j_1} \cdots \delta_{i_N j_N}$,   (13)


where $i_k, j_k \in \{1, \ldots, I_k\}$, $k \in \{1, \ldots, N\}$, and where $\delta$ is the Kronecker symbol. The estimation of the signal tensor $\mathcal{X}$ can be achieved by the multidimensional filtering of tensor $\mathcal{R}$ using $N$ n-mode filters $\{H_n \in \mathbb{R}^{I_n \times I_n},\; n = 1, \ldots, N\}$. This can be formulated by the following expression:

$\widehat{\mathcal{X}} = \mathcal{R} \times_1 H_1 \cdots \times_N H_N$.   (14)

The criterion used to determine the optimal n-mode filters $\{H_n,\; n = 1, \ldots, N\}$ is the minimization of the MSE between the desired signal $\mathcal{X}$ and its estimation $\widehat{\mathcal{X}}$:

$e(H_1, \ldots, H_N) = \mathrm{E}(\|\mathcal{X} - \widehat{\mathcal{X}}\|^2) = \mathrm{E}(\|\mathcal{X} - \mathcal{R} \times_1 H_1 \cdots \times_N H_N\|^2)$.   (15)

In extension of the one-dimensional case, the filters $\{H_n,\; n = 1, \ldots, N\}$ can be called "n-mode Wiener filters".

3.2.2. Analytical expression

Developing the squared norm in (15) leads to

$e(H_1, \ldots, H_N) = \mathrm{E}(\|\mathcal{X}\|^2) - 2\,\mathrm{E}(\langle \mathcal{X} \,|\, \mathcal{R} \times_1 H_1 \cdots \times_N H_N \rangle) + \mathrm{E}(\|\mathcal{R} \times_1 H_1 \cdots \times_N H_N\|^2)$,   (16)

where $\langle \cdot | \cdot \rangle$ is the tensor scalar product, the definition and properties of which can be found in [5]. Unfolding (16) over the n-mode and using the tensor scalar product properties, we get, for all $n = 1, \ldots, N$,

$e(H_1, \ldots, H_N) = \mathrm{E}[\|X_n\|^2] - 2\,\mathrm{E}[\mathrm{tr}(g^{(n)}_{XR} H_n^T)] + \mathrm{E}[\mathrm{tr}(H_n G^{(n)}_{RR} H_n^T)]$,   (17)

where

$g^{(n)}_{XR} = X_n\, q^{(n)} R_n^T$   (18)

with

$q^{(n)} = H_1 \otimes \cdots \otimes H_{n-1} \otimes H_{n+1} \otimes \cdots \otimes H_N$   (19)

and

$G^{(n)}_{RR} = R_n\, Q^{(n)} R_n^T$   (20)

with

$Q^{(n)} = q^{(n)T} q^{(n)} = H_1^T H_1 \otimes \cdots \otimes H_{n-1}^T H_{n-1} \otimes H_{n+1}^T H_{n+1} \otimes \cdots \otimes H_N^T H_N$.   (21)

The optimal n-mode Wiener filters $\{H_1, \ldots, H_N\}$ are the arguments that minimize the MSE (17). These filters are found when

$\mathrm{grad}(e) = \left[ \frac{\partial e}{\partial H_1} \cdots \frac{\partial e}{\partial H_N} \right]^T = 0$,   (22)

i.e. when the derivatives $\partial e / \partial H_n$ are jointly null for all $n = 1, \ldots, N$. Let us study $\partial e / \partial H_n$ for a given n-mode. The n-mode filters $H_m$ are supposed fixed for all $m \in \{1, \ldots, N\} - \{n\}$; then $\partial e / \partial H_n = 0$ implies that

$\mathrm{E}\left[ \frac{\partial}{\partial H_n} \mathrm{tr}(H_n G^{(n)}_{RR} H_n^T) \right] = 2\, \mathrm{E}\left[ \frac{\partial}{\partial H_n} \mathrm{tr}(g^{(n)}_{XR} H_n^T) \right]$.   (23)

Computing the derivative of both sides of relation (23), taking into account that $g^{(n)}_{XR}$ and $G^{(n)}_{RR}$ are independent from the n-mode filter $H_n$, and extracting $H_n$ (see Appendix A), leads to the optimal filter that minimizes the mean square error criterion (15), for fixed m-mode filters $H_m$, $m \neq n$:

$H_n = \mathrm{E}[g^{(n)}_{XR}]\, \mathrm{E}[G^{(n)}_{RR}]^{-1}$.   (24)

In this last expression, let us define the $q^{(n)}$-weighted covariance matrix between signal $\mathcal{X}$ and data $\mathcal{R}$ by

$\gamma^{(n)}_{XR} = \mathrm{E}[g^{(n)}_{XR}]$   (25)

and the $Q^{(n)}$-weighted covariance matrix of the data by

$\Gamma^{(n)}_{RR} = \mathrm{E}[G^{(n)}_{RR}]$.   (26)

Then, the expression of the $H_n$ n-mode Wiener filter becomes

$H_n = \gamma^{(n)}_{XR}\, \Gamma^{(n)-1}_{RR}$.   (27)

In the following, $\gamma$ is used for the $q^{(n)}$-weighted covariances, and $\Gamma$ for the $Q^{(n)}$-weighted covariances.
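For a known desired signal (e.g. in simulations), relation (27) with the empirical weighted covariances (18)-(21) can be sketched as follows. The Kronecker ordering of $q^{(n)}$ is convention-dependent; the one below (remaining modes in increasing order) matches the row-major unfolding used in the sketch, and the small ridge term `eps` is our own safeguard against a singular $G^{(n)}_{RR}$:

```python
import numpy as np
from functools import reduce

def unfold(A, n):
    # n-mode unfolding: n-mode vectors as columns
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def n_mode_wiener(X, R, H, n, eps=1e-10):
    """One n-mode Wiener filter H_n = g_XR G_RR^{-1}, relation (27), with
    the m-mode filters H[m], m != n, held fixed."""
    others = [m for m in range(X.ndim) if m != n]
    q = reduce(np.kron, [H[m] for m in others])      # q^(n), relation (19)
    Xn, Rn = unfold(X, n), unfold(R, n)
    g_XR = Xn @ q @ Rn.T                             # relation (18)
    G_RR = Rn @ (q.T @ q) @ Rn.T                     # relations (20)-(21)
    return g_XR @ np.linalg.inv(G_RR + eps * np.eye(G_RR.shape[0]))

# sanity check: with identity filters elsewhere and a noiseless record R = X,
# the filter reduces to X_0 X_0^T (X_0 X_0^T)^{-1} = I
X = np.random.randn(4, 5, 3)
H = [np.eye(4), np.eye(5), np.eye(3)]
H0 = n_mode_wiener(X, X, H, 0)
print(np.allclose(H0, np.eye(4), atol=1e-6))  # True
```

In practice $\gamma^{(n)}_{XR}$ is of course unknown; Section 3.2.3 shows how it is replaced by quantities estimated from the data.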


3.2.3. Assumptions and related expression of n-mode Wiener filters

Let us consider a particular n-mode. In order to determine the expressions of both $\gamma^{(n)}_{XR}$ and $\Gamma^{(n)}_{RR}$ in relation (27), we propose to make some assumptions on the desired signal $\mathcal{X}$.

3.2.3.1. Assumptions and model. First, we consider that the n-mode vector space $E_n$ is the orthogonal direct sum of the signal subspace $E^1_n$ of dimension $K_n$ and the noise subspace $E^2_n$ of dimension $I_n - K_n$: $E_n = E^1_n \oplus E^2_n$. Then, we make the assumption that the n-mode unfolding matrix $X_n$ of the signal can be expressed as a weighted combination of the $K_n$ orthogonal vectors of the $E^1_n$ n-mode signal subspace basis:

$X_n = V^{(n)}_s O^{(n)}$,   (28)

with $X_n \in \mathbb{R}^{I_n \times M_n}$, where $M_n = I_1 \cdots I_{n-1} I_{n+1} \cdots I_N$, and $V^{(n)}_s \in St(I_n, K_n)$ is the matrix containing the $K_n$ $I_n$-dimensional orthogonal vectors of the $E^1_n$ n-mode signal subspace basis. $O^{(n)} \in \mathbb{R}^{K_n \times M_n}$ is a random weighting matrix whose terms are supposed to be mutually independent. As matrix $O^{(n)}$ is supposed to contain all the information on the desired signal $\mathcal{X}$, it can be called the "object" matrix. This model necessarily implies that the signal n-mode unfolding matrix $X_n$ is orthogonal to the noise n-mode unfolding matrix $B_n$, since the noise subspace and the signal subspace are supposed to be mutually orthogonal.

3.2.3.2. Noise and signal independence. The condition on noise and signal independence implies that the $q^{(n)}$- and $Q^{(n)}$-weighted $(\mathcal{X}, \mathcal{B})$-covariance matrices are null (see Appendix B):

$\gamma^{(n)}_{XB} = \gamma^{(n)}_{BX} = 0$,   (29)

$\Gamma^{(n)}_{XB} = \Gamma^{(n)}_{BX} = 0$.   (30)

So the $\gamma^{(n)}_{RR}$ and $\Gamma^{(n)}_{RR}$ weighted covariance matrices can be expressed by

$\gamma^{(n)}_{RR} = \gamma^{(n)}_{XX} + \gamma^{(n)}_{BB}$   (31)

and $\Gamma^{(n)}_{RR} = \Gamma^{(n)}_{XX} + \Gamma^{(n)}_{BB}$.

Expression of $\gamma^{(n)}_{BB}$ and $\Gamma^{(n)}_{BB}$. The white and Gaussian noise condition (13) implies that the $Q^{(n)}$-weighted covariance matrix $\Gamma^{(n)}_{BB}$ of the n-mode noise unfolding $B_n$ can be expressed by

$\Gamma^{(n)}_{BB} = \sigma^{(n)2}\, \mathrm{tr}(Q^{(n)})\, I_{I_n}$,   (32)

in which $\sigma^{(n)2}$ is the n-mode noise power. In the following, let us define the $Q^{(n)}$-weighted n-mode noise power by

$\sigma^{(n)2}_{\Gamma} = \sigma^{(n)2}\, \mathrm{tr}(Q^{(n)})$.   (33)

The $q^{(n)}$-weighted covariance matrix $\gamma^{(n)}_{BB}$ of the n-mode noise unfolding $B_n$ can also be expressed by

$\gamma^{(n)}_{BB} = \sigma^{(n)2}\, \mathrm{tr}(q^{(n)})\, I_{I_n}$.   (34)

In this last expression, we can define the $q^{(n)}$-weighted n-mode noise power by

$\sigma^{(n)2}_{\gamma} = \sigma^{(n)2}\, \mathrm{tr}(q^{(n)})$.   (35)

Expression of $\gamma^{(n)}_{XX}$ and $\Gamma^{(n)}_{XX}$. Considering the signal model given in relation (28), the expression of $\gamma^{(n)}_{XX}$ becomes

$\gamma^{(n)}_{XX} = V^{(n)}_s\, \gamma^{(n)}_{OO}\, V^{(n)T}_s$,   (36)

where $\gamma^{(n)}_{OO}$ is defined as

$\gamma^{(n)}_{OO} = \mathrm{E}(O^{(n)} q^{(n)} O^{(n)T})$.   (37)

Since all terms of the object matrix $O^{(n)}$ are supposed to be mutually independent, and since the terms of $O^{(n)}$ and $q^{(n)} O^{(n)T}$ are also mutually independent (refer to Appendix B), $\gamma^{(n)}_{OO}$ is a diagonal matrix:

$\gamma^{(n)}_{OO} = \mathrm{diag}(\beta_1, \ldots, \beta_{K_n})$.   (38)

In the same way, matrix $\Gamma^{(n)}_{XX}$ can also be written as

$\Gamma^{(n)}_{XX} = V^{(n)}_s\, \Gamma^{(n)}_{OO}\, V^{(n)T}_s$,   (39)

where $\Gamma^{(n)}_{OO}$ is a diagonal matrix:

$\Gamma^{(n)}_{OO} = \mathrm{diag}(\omega_1, \ldots, \omega_{K_n})$.   (40)

Expression of $\gamma^{(n)}_{RR}$ and $\Gamma^{(n)}_{RR}$. Inserting (32) and (39) into (31) leads to the expression of the $Q^{(n)}$-weighted covariance matrix $\Gamma^{(n)}_{RR}$ given by

$\Gamma^{(n)}_{RR} = V^{(n)}_s\, \Gamma^{(n)}_{OO}\, V^{(n)T}_s + \sigma^{(n)2}_{\Gamma}\, I_{I_n}$,   (41)

i.e.

$\Gamma^{(n)}_{RR} = [V^{(n)}_s \; V^{(n)}_b]\; \mathrm{diag}(\Lambda^{(n)}_{\Gamma s}, \Lambda^{(n)}_{\Gamma b})\; [V^{(n)}_s \; V^{(n)}_b]^T$,   (42)

where matrix $V^{(n)}_b \in St(I_n, I_n - K_n)$ contains the $(I_n - K_n)$ $I_n$-dimensional orthogonal column vectors of the n-mode noise subspace basis, and matrix $V^{(n)}_s \in St(I_n, K_n)$ contains the $K_n$ $I_n$-dimensional orthogonal column vectors of the signal subspace basis. $\Lambda^{(n)}_{\Gamma s}$ is the diagonal $(K_n \times K_n)$-matrix defined as

$\Lambda^{(n)}_{\Gamma s} = \Gamma^{(n)}_{OO} + \sigma^{(n)2}_{\Gamma}\, I_{K_n}$   (43)

and $\Lambda^{(n)}_{\Gamma b}$ is the diagonal $((I_n - K_n) \times (I_n - K_n))$-matrix defined as

$\Lambda^{(n)}_{\Gamma b} = \sigma^{(n)2}_{\Gamma}\, I_{I_n - K_n}$.   (44)

The covariance matrix $\gamma^{(n)}_{RR}$ can be expressed equivalently by

$\gamma^{(n)}_{RR} = [V^{(n)}_s \; V^{(n)}_b]\; \mathrm{diag}(\Lambda^{(n)}_{\gamma s}, \Lambda^{(n)}_{\gamma b})\; [V^{(n)}_s \; V^{(n)}_b]^T$,   (45)

where

$\Lambda^{(n)}_{\gamma s} = \gamma^{(n)}_{OO} + \sigma^{(n)2}_{\gamma}\, I_{K_n}$   (46)

and

$\Lambda^{(n)}_{\gamma b} = \sigma^{(n)2}_{\gamma}\, I_{I_n - K_n}$.   (47)

The uniqueness of the eigenvalue decomposition implies that $V^{(n)}_s$ is composed of the $K_n$ eigenvectors associated with the $K_n$ largest eigenvalues of $\gamma^{(n)}_{RR}$ and $\Gamma^{(n)}_{RR}$. Moreover, for all $i = 1, \ldots, I_n$,

$\lambda_{\gamma i} = \beta_i + \sigma^{(n)2}_{\gamma}$   (48)

and

$\lambda_{\Gamma i} = \omega_i + \sigma^{(n)2}_{\Gamma}$   (49)

are the $i$th eigenvalues of $\gamma^{(n)}_{RR}$ and $\Gamma^{(n)}_{RR}$, respectively. Likewise, $V^{(n)}_b$ is composed of the $I_n - K_n$ eigenvectors associated with the $I_n - K_n$ smallest eigenvalues of $\Gamma^{(n)}_{RR}$ and $\gamma^{(n)}_{RR}$.

n-mode Wiener filter $H_n$. The final expression of the n-mode filter $H_n$, associated with the fixed m-mode Wiener filters $H_m$, $m \neq n$, is obtained by inserting the expressions of $\gamma^{(n)}_{XR} = \gamma^{(n)}_{XX}$ and $\Gamma^{(n)}_{RR}$ (given by relations (36) and (42)) into relation (27):

$H_n = V^{(n)}_s\, \gamma^{(n)}_{OO}\, \Lambda^{(n)-1}_{\Gamma s}\, V^{(n)T}_s$,   (50)

in which $\gamma^{(n)}_{OO}$ is defined in (38) and $\Lambda^{(n)}_{\Gamma s}$ is defined in (43). Consequently, according to relation (50), the optimal n-mode Wiener filter $H_n$ is the projector on the eigenvectors associated with the largest eigenvalues of the n-mode $Q^{(n)}$-weighted covariance matrix $\Gamma^{(n)}_{RR}$. These eigenvectors are weighted by the diagonal matrix $\gamma^{(n)}_{OO} \Lambda^{(n)-1}_{\Gamma s}$, which is expressed by

$\gamma^{(n)}_{OO}\, \Lambda^{(n)-1}_{\Gamma s} = \mathrm{diag}\!\left(\frac{\beta_1}{\omega_1 + \sigma^{(n)2}_{\Gamma}}, \ldots, \frac{\beta_{K_n}}{\omega_{K_n} + \sigma^{(n)2}_{\Gamma}}\right) = \mathrm{diag}\!\left(\frac{\lambda_{\gamma 1} - \sigma^{(n)2}_{\gamma}}{\lambda_{\Gamma 1}}, \ldots, \frac{\lambda_{\gamma K_n} - \sigma^{(n)2}_{\gamma}}{\lambda_{\Gamma K_n}}\right)$,   (51)

with $\beta_i$ and $\omega_i$ defined in (38) and (40), $\lambda_{\gamma i}$ and $\lambda_{\Gamma i}$ defined in (48) and (49), and $\sigma^{(n)2}_{\gamma}$ and $\sigma^{(n)2}_{\Gamma}$ defined in (35) and (33).

From a practical point of view, the $q^{(n)}$-weighted n-mode noise power $\sigma^{(n)2}_{\gamma}$ can be estimated by taking the mean of the $I_n - K_n$ smallest eigenvalues of $\gamma^{(n)}_{RR}$:

$\widehat{\sigma}^{(n)2}_{\gamma} = \frac{1}{I_n - K_n} \sum_{k_n = K_n + 1}^{I_n} \lambda_{\gamma k_n}$.   (52)

Then, for all $k_n = 1, \ldots, K_n$, the values of $\beta_{k_n}$ can be estimated by

$\widehat{\beta}_{k_n} = \lambda_{\gamma k_n} - \widehat{\sigma}^{(n)2}_{\gamma}$.   (53)
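Relations (50)-(53) can be turned into a short routine that builds one n-mode Wiener filter from the eigenvalue decompositions of the two weighted covariance matrices. This is an illustrative sketch (function name ours):

```python
import numpy as np

def wiener_weights(gamma_RR, Gamma_RR, K):
    """n-mode Wiener filter of relation (50) from the EVDs of the two
    weighted covariance matrices: sigma_gamma^2 is the mean of the
    smallest eigenvalues of gamma_RR (52), beta_k = lambda_gamma_k -
    sigma_gamma^2 (53), and the weights are relation (51)."""
    lam_g = np.sort(np.linalg.eigvalsh(gamma_RR))[::-1]
    sigma2_g = lam_g[K:].mean()                       # relation (52)
    beta = lam_g[:K] - sigma2_g                       # relation (53)
    lam_G, V = np.linalg.eigh(Gamma_RR)
    order = np.argsort(lam_G)[::-1]
    Vs, lam_G = V[:, order[:K]], lam_G[order[:K]]     # signal subspace
    return Vs @ np.diag(beta / lam_G) @ Vs.T          # relations (50)-(51)

# toy check on synthetic covariances with I_n = 5, K_n = 2
rng = np.random.default_rng(0)
Vs = np.linalg.qr(rng.standard_normal((5, 2)))[0]
gamma_RR = Vs @ np.diag([4.0, 2.0]) @ Vs.T + 0.5 * np.eye(5)
Gamma_RR = Vs @ np.diag([6.0, 3.0]) @ Vs.T + 1.0 * np.eye(5)
H = wiener_weights(gamma_RR, Gamma_RR, 2)  # weights are 4/7 and 1/2 here
```

The routine only illustrates the spectral structure of the filter; inside the ALS loop of Section 3.2.4, the covariances are replaced by their empirical estimates built from the current m-mode filters.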

3.2.4. ALS algorithm

An ALS algorithm needs to be used in order to jointly find the n-mode Wiener filters $H_n$ that minimize the MSE $e(H_1, \ldots, H_N)$. For $n = 1, \ldots, N$, a series $\{H^k_n\}_{k \in \mathbb{N}}$ is built such that it converges to the optimal Wiener filter $H_n$. This is equivalent to building a series $\{\mathcal{X}^k\}_{k \in \mathbb{N}}$ that converges to the optimal estimated signal $\widehat{\mathcal{X}} = \mathcal{R} \times_1 H_1 \cdots \times_N H_N$. Each ALS loop is composed of $N$ steps in which $H^k_m$ is fixed to its $k$th iteration value, for all $m \in \{1, \ldots, N\} - \{n\}$, and the next iterate $H^{k+1}_n$ is found by computing formula (50). The ALS algorithm can be summarized in the following steps:

(1) Initialization $k = 0$: $\mathcal{X}^0 = \mathcal{R}$, $H^0_n = I_{I_n}$ for all $n = 1, \ldots, N$.
(2) ALS loop: while $\|\mathcal{X} - \mathcal{X}^k\|^2 > \epsilon$, with $\epsilon > 0$ to be fixed,
  (a) for $n = 1, \ldots, N$:
    (i) $\mathcal{X}^k_n = \mathcal{R} \times_1 H^k_1 \cdots \times_{n-1} H^k_{n-1} \times_{n+1} H^k_{n+1} \cdots \times_N H^k_N$,
    (ii) $H^{k+1}_n = \arg\min_{Q_n \in \mathbb{R}^{I_n \times I_n}} \|\mathcal{X} - \mathcal{X}^k_n \times_n Q_n\|^2$, given by formula (50),
  (b) $\mathcal{X}^{k+1} = \mathcal{R} \times_1 H^{k+1}_1 \cdots \times_N H^{k+1}_N$,
  (c) $k \leftarrow k + 1$.
(3) Output: $\widehat{\mathcal{X}} = \mathcal{R} \times_1 H^k_1 \cdots \times_N H^k_N$.

According to relations (50) and (51), step (2)(a)(ii) of the ALS algorithm can be decomposed into the following substeps:

(1) unfold $\mathcal{X}^k_n$ into $X^k_n = R_n (H^k_1 \otimes \cdots \otimes H^k_{n-1} \otimes H^k_{n+1} \otimes \cdots \otimes H^k_N)^T$;
(2) compute $\gamma^{(n)}_{RR} = \mathrm{E}(R_n X^{kT}_n)$;
(3) perform the EVD of $\gamma^{(n)}_{RR}$, calculate $\widehat{\sigma}^{(n)2}_{\gamma}$ using (52) and $\widehat{\beta}_{k_n}$ using (53);
(4) compute $\Gamma^{(n)}_{RR} = \mathrm{E}(X^k_n X^{kT}_n)$;
(5) perform the EVD of $\Gamma^{(n)}_{RR}$, keep in matrix $V^{(n)}_s$ the $K_n$ eigenvectors associated with the $K_n$ largest eigenvalues of $\Gamma^{(n)}_{RR}$, and keep the $K_n$ largest eigenvalues $\lambda_{\Gamma k}$ for $k = 1, \ldots, K_n$;
(6) compute the weight matrix $\gamma^{(n)}_{OO} \Lambda^{(n)-1}_{\Gamma s}$ given by (51);
(7) compute the $(k+1)$th iteration of the n-mode Wiener filter $H^{k+1}_n$ using (50).

3.3. One-dimensional Wiener filtering

Let us compare the n-mode Wiener filters with the classical one-dimensional Wiener filter. In the one-dimensional case, the data can be modelled by a vector $r \in \mathbb{R}^{M \times 1}$ resulting from the record of a one-dimensional signal $x$ with additive white Gaussian random noise $b$, independent from the signal $x$: $r = x + b$. As in the multidimensional case, the criterion used to determine the optimal filter $H$ is the minimization of the MSE between the desired signal $x$ and its estimation $\widehat{x} = Hr$:

$e(H) = \mathrm{E}(\|x - \widehat{x}\|^2) = \mathrm{E}(\|x - Hr\|^2)$.   (54)

Minimizing (54) with respect to $H$ leads to the expression of the Wiener filter:

$H = \Gamma_{xr}\, \Gamma^{-1}_{rr}$,   (55)

where $\Gamma_{xr} = \mathrm{E}(x r^T)$ is the covariance matrix between the signal and the data, and $\Gamma_{rr} = \mathrm{E}(r r^T)$ is the covariance matrix of the data. The assumption stating that signal $x$ is a weighted combination of $K$ vectors from the $K$-dimensional signal subspace can be expressed by

$x = V_s o = \sum_{m=1}^{K} o_m v_m$,   (56)

where $V_s = [v_1 \ldots v_K] \in St(M, K)$ is the matrix containing the $K$ $M$-dimensional orthogonal vectors of the signal subspace basis, and $o = [o_1 \ldots o_K]^T \in \mathbb{R}^{K \times 1}$ is a random weight vector. Vector $o$ is supposed to entirely characterize the desired signal $x$.

The white noise assumption and the independence of noise $b$ and signal $x$ lead to

$\Gamma_{rr} = V_s\, \Gamma_{oo}\, V^T_s + \sigma^2 I_M = [V_s \; V_b]\; \mathrm{diag}(\Lambda_s, \Lambda_b)\; [V_s \; V_b]^T$,   (57)

where

$\Lambda_s = \Gamma_{oo} + \sigma^2 I_K = \mathrm{diag}(\omega_1 + \sigma^2, \ldots, \omega_K + \sigma^2)$   (58)

is a diagonal matrix of $\mathbb{R}^{K \times K}$, as the $\omega_k$ are mutually independent for all $k = 1, \ldots, K$, and $\Lambda_b$ is a diagonal matrix of $\mathbb{R}^{(M-K) \times (M-K)}$:

$\Lambda_b = \sigma^2 I_{M-K}$.   (59)

Matrix $V_s$ is composed of the $K$ eigenvectors associated with the $K$ largest eigenvalues of $\Gamma_{rr}$; these vectors form the signal subspace basis. Matrix $V_b$ is composed of the $M - K$ eigenvectors associated with the $M - K$ smallest eigenvalues of $\Gamma_{rr}$; these vectors form the noise subspace basis. Finally, matrix $\Lambda_s$ contains the largest eigenvalues of $\Gamma_{rr}$ and matrix $\Lambda_b$ contains its smallest eigenvalues. Let us call $\lambda_i$ the eigenvalues of $\Gamma_{rr}$. The final expression for the Wiener filter becomes

$H = V_s\, \Gamma_{oo}\, \Lambda^{-1}_s\, V^T_s$,   (60)

where

$\Gamma_{oo}\, \Lambda^{-1}_s = \mathrm{diag}\!\left(\frac{\omega_1}{\omega_1 + \sigma^2}, \ldots, \frac{\omega_K}{\omega_K + \sigma^2}\right) = \mathrm{diag}\!\left(\frac{\lambda_1 - \sigma^2}{\lambda_1}, \ldots, \frac{\lambda_K - \sigma^2}{\lambda_K}\right)$.   (61)

Consequently, the Wiener filter results from the projection on the $K$ eigenvectors associated with the largest eigenvalues of the data covariance matrix $\Gamma_{rr}$, each vector of which is weighted by matrix $\Gamma_{oo} \Lambda^{-1}_s$.

The n-mode Wiener filter found in the multidimensional case is an extension of the one-dimensional case. Indeed, the filter expressions (27) and (60) are similar and represent a weighted projection on the largest eigenvectors of one covariance matrix. The weight matrices (51) and (61) are also similar: both are diagonal, and their generic terms are ratios between the signal energy in the direction of the corresponding eigenvector and the related eigenvalue, which is the superposition of the signal and the noise energy. The difference between the one-dimensional and the multidimensional cases lies in the weight values. In the multidimensional case, the n-mode weight values are influenced by all the other m-mode filters, for all $m \neq n$. Indeed, as shown in relation (51), the numerator $\beta_k$ involved in the weighting value $\beta_k / (\omega_k + \sigma^{(n)2}_{\Gamma})$, for $k = 1, \ldots, K_n$, represents the $q^{(n)}$-weighted n-mode filtered signal energy, whereas the denominator involves the $Q^{(n)}$-weighted n-mode filtered noise and signal energy, with $q^{(n)}$ and $Q^{(n)}$ defined in (19) and (21).

4. Performances and simulation results

In this section, filtering by lower rank-$(K_1, \ldots, K_N)$ tensor approximation and multidimensional Wiener filtering are tested and compared with a classical bidimensional filtering method. This last method basically consists of consecutive PCAs processed on each two-dimensional channel. Each PCA is achieved by performing the SVD of the two-dimensional channel. All these methods are applied to noise filtering of multicomponent seismic data.

4.1. Multidimensional seismic data

In the following simulations, we consider a plane polarized seismic wave received on an antenna of 10 multicomponent sensors. We

Fig. 1. Method comparison on a polarized seismic wave. (a)–(c) Components a, b and c of the non-noisy seismic wave. (d)–(f) Three components of the noisy seismic wave (SNR = 10 dB). (g)–(i) PCA applied on each 2D seismic component. (j)–(l) LRTA applied on the whole noisy data tensor. (m)–(o) Multidimensional Wiener filtering applied on the whole noisy data tensor.


consider the propagation direction orthogonal to the antenna plane. The three components a, b and c of the seismic wave, which can be modelled by a tensor $\mathcal{X} \in \mathbb{R}^{10 \times 200 \times 3}$ when 200 time samples are taken into account, are represented in Fig. 1(a)–(c). A white Gaussian noise is added to this multidimensional signal. The corresponding noisy signal is modelled by the data tensor $\mathcal{R} = \mathcal{X} + \mathcal{B}$, in which $\mathcal{B} = 1.5 \cdot \mathcal{G}$, where all elements of $\mathcal{G}$ are independent realizations of a centered Gaussian random variable with variance 1. The three components a, b and c of the noisy signal are represented in Fig. 1(d)–(f). A PCA is applied separately on each component of the noisy signal; the results are shown in Fig. 1(g)–(i). This process performs the SVD of the two-dimensional array represented by each component of the seismic signal and a projection on the eigenvector associated with the largest eigenvalue. A first tensor processing is achieved thanks to a higher-order PCA performed by the lower rank-(1, 100, 3) approximation of the noisy data tensor. The polarization components of the resulting estimated signal are represented in Fig. 1(j)–(l). Finally, a second tensor processing is achieved by performing the multidimensional Wiener filtering of the noisy data tensor. The parameters $(K_1, K_2, K_3)$


are fixed to (1, 100, 3). The polarization components of the resulting estimated signal are represented in Fig. 1(m)–(o). In this simulation, the best quality, in terms of noise reduction, is given by the proposed multidimensional Wiener filtering.

4.2. Noise influence

In order to verify a posteriori the efficiency of the multidimensional Wiener filtering, we propose to use the criterion of the normalized squared mean error (NSME), defined as

NSME = \frac{\|\hat{X} - X\|^2}{\|X\|^2},   (62)

where \hat{X} is the estimated signal and X is the desired signal. Fig. 2 shows the evolution of the NSME according to the signal-to-noise ratio defined as

SNR = 10 \log \frac{\|X\|^2}{\|B\|^2}.   (63)
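As an illustrative sketch (not the authors' code), the two figures of merit (62)–(63) are straightforward to compute with NumPy; the toy signal below only mimics the 10 × 200 × 3 shape of the seismic tensor and the B = 1.5 G noise model used in the simulation:

```python
import numpy as np

def nsme(x_hat, x):
    """Normalized squared mean error of Eq. (62), expressed in percent."""
    return 100.0 * np.sum((x_hat - x) ** 2) / np.sum(x ** 2)

def snr_db(x, b):
    """Signal-to-noise ratio of Eq. (63), in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(b ** 2))

# Toy data mimicking the simulation's shapes: a 10 x 200 x 3 tensor X,
# noise B = 1.5 G with G drawn from a centered unit-variance Gaussian.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 10 * 200 * 3)).reshape(10, 200, 3)
b = 1.5 * rng.standard_normal(x.shape)
r = x + b
print(f"SNR  = {snr_db(x, b):.1f} dB, NSME of raw data = {nsme(r, x):.1f} %")
```
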

This figure allows comparison of the efficiency of multidimensional Wiener filtering, lower rank-(K_1, …, K_N) tensor approximation, and consecutive n-mode PCA according to the SNR. In the proposed simulations, when the SNR varies from −25 to 20 dB, the lowest NSME is given by the multidimensional Wiener filtering method. This simulation shows the efficiency of the multidimensional Wiener filtering compared to the other filtering methods. The fact that the NSME given by the lower rank-(K_1, …, K_N) tensor approximation method is larger than the one given by multidimensional Wiener filtering can be explained. Indeed, in lower rank-(K_1, …, K_N) tensor approximation, the estimated signal \hat{X} does not minimize \|X - \hat{X}\|^2 but \|R - \hat{X}\|^2; thus \hat{X} still incorporates information related to noise.


Fig. 2. Evolution with respect to the SNR of the NSME given by the proposed multidimensional Wiener filtering, the LRTA filtering method and the consecutive PCA applied on each 2D component of the multicomponent seismic wave (Fig. 1).

5. Conclusion

In this paper, a new multidimensional Wiener filtering has been developed. Using a model in which the n-mode unfolding matrix of the desired signal is expressed as a weighted combination of orthogonal vectors from the n-mode signal subspace basis, it is possible to determine the theoretical expression of the n-mode Wiener filter. For a particular n-mode, this filter is a projector on the eigenvectors associated with the largest eigenvalues of a matrix that can be assimilated to a weighted covariance matrix. This projection is also weighted by a matrix whose coefficients are the ratios between the signal energy in the direction of the corresponding eigenvector and its related eigenvalue. Extending the one-dimensional case, the optimization criterion used to determine the n-mode Wiener filters is the minimization of the MSE between the estimated signal and the desired signal. Since the filters that minimize the MSE need to be determined simultaneously, an ALS algorithm has been developed. A simulation involving multicomponent seismic data shows the efficiency, in terms of noise reduction, of the proposed multidimensional Wiener filtering compared to (i) the higher-order PCA achieved by Tucker3 lower rank-(K_1, …, K_N) tensor approximation and (ii) a classical filtering method that consists in applying a PCA on each 2D component of the multicomponent seismic data. However, although this new multidimensional Wiener filtering gives encouraging results, the convergence of the ALS algorithm still needs to be improved. Moreover, the efficiency of the algorithm depends on the optimal n-mode rank K_n chosen in every mode. This new multidimensional Wiener filtering opens interesting vistas in every field of investigation where a multidimensional modelling of data is possible.
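The ALS scheme summarized above can be sketched as follows. This is a hypothetical outline, not the authors' implementation: `estimate_mode_filter` stands in for the per-mode Wiener estimation of Appendix A, the filters are initialized to identity matrices, and the stopping test on successive reconstruction errors is one simple choice among several:

```python
import numpy as np

def n_mode_product(t, m, mode):
    """Compute the n-mode product T x_mode M (contract mode `mode` with M's columns)."""
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

def als_multimode_wiener(r, estimate_mode_filter, n_iter=20, tol=1e-6):
    """Alternating least squares over the N n-mode filters H_1, ..., H_N.

    `estimate_mode_filter(r, filters, n)` is a hypothetical callback returning
    the n-mode Wiener filter of Appendix A with the other filters held fixed.
    """
    filters = [np.eye(dim) for dim in r.shape]      # H_n initialized to identity
    prev_err = np.inf
    for _ in range(n_iter):
        for n in range(r.ndim):                     # update each mode in turn
            filters[n] = estimate_mode_filter(r, filters, n)
        x_hat = r
        for n, h in enumerate(filters):             # X_hat = R x_1 H_1 ... x_N H_N
            x_hat = n_mode_product(x_hat, h, n)
        err = np.sum((x_hat - r) ** 2)              # surrogate stopping criterion
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return filters, x_hat

# Trivial usage: with identity "filters" the data tensor is returned unchanged.
r_demo = np.arange(24.0).reshape(2, 3, 4)
filters, x_hat = als_multimode_wiener(r_demo, lambda t, f, n: np.eye(t.shape[n]))
```
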

Acknowledgements

The authors would like to thank the anonymous reviewers for their careful reading and their helpful remarks, which have contributed to improving the clarity of the paper.


Appendix A. n-mode Wiener filter analytical expression

The following computations are related to Section 3.2.2. They rely on the definitions and properties of tensors and multilinear algebra that can be found in [3–5]. They especially use the n-mode unfolding matrix A_n of tensor A, the tensor scalar product

\langle A | B \rangle = \sum_{i_1, \ldots, i_N} a_{i_1, \ldots, i_N} b_{i_1, \ldots, i_N} = \mathrm{tr}[A_n B_n^T], \quad \forall n = 1, \ldots, N,   (A.1)

and some properties of the Kronecker product. The MSE involved in multidimensional Wiener filtering is given by relation (16):

e(H_1, \ldots, H_N) = E(\|X\|^2) - 2E(\langle X | R \times_1 H_1 \cdots \times_N H_N \rangle) + E(\|R \times_1 H_1 \cdots \times_N H_N\|^2).   (A.2)

The Frobenius norm of a tensor is the sum of all of its squared terms. Thus, the Frobenius norm of a tensor is also equal to the norm of any of its n-mode unfolding matrices. In order to determine the expression of filter H_n associated with fixed filters H_m, for all m ≠ n, the n-mode unfolding of relation (A.2) is processed.

Expression of the n-mode unfolding of \langle X | R \times_1 H_1 \cdots \times_N H_N \rangle

The n-mode unfolding expression of \langle X | R \times_1 H_1 \cdots \times_N H_N \rangle is

\langle X_n | H_n R_n (H_1^T \otimes \cdots \otimes H_{n-1}^T \otimes H_{n+1}^T \otimes \cdots \otimes H_N^T) \rangle,   (A.3)

which can be expressed with the trace operator by

\mathrm{tr}(X_n (H_1 \otimes \cdots \otimes H_{n-1} \otimes H_{n+1} \otimes \cdots \otimes H_N) R_n^T H_n^T).   (A.4)

One should note that in this last expression, the following property has been used:

(H_1^T \otimes \cdots \otimes H_{n-1}^T \otimes H_{n+1}^T \otimes \cdots \otimes H_N^T)^T = (H_1 \otimes \cdots \otimes H_{n-1} \otimes H_{n+1} \otimes \cdots \otimes H_N).   (A.5)
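Identity (A.1) is easy to check numerically. The sketch below assumes NumPy and a row-major n-mode unfolding (rows indexed by the mode-n index); the identity holds for any fixed column ordering convention, as long as both tensors use the same one:

```python
import numpy as np

def unfold(t, mode):
    """n-mode unfolding: returns the I_n x M_n matrix whose rows are
    indexed by the mode-n index of the tensor."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 5, 6))
b = rng.standard_normal((4, 5, 6))

inner = np.sum(a * b)  # <A|B>: sum of entrywise products
for n in range(3):     # (A.1): <A|B> = tr(A_n B_n^T) in every mode
    assert np.isclose(inner, np.trace(unfold(a, n) @ unfold(b, n).T))
```
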


Let us note that

g_{XR}^{(n)} = X_n q^{(n)} R_n^T   (A.6)

with

q^{(n)} = H_1 \otimes \cdots \otimes H_{n-1} \otimes H_{n+1} \otimes \cdots \otimes H_N.   (A.7)

Hence, for all n = 1, \ldots, N,

\langle X | R \times_1 H_1 \cdots \times_N H_N \rangle = \mathrm{tr}(g_{XR}^{(n)} H_n^T).   (A.8)
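Relations (A.6)–(A.8) can also be verified numerically on a small third-order example. This sketch assumes NumPy and the same row-major unfolding as above, for which q^{(n)} in (A.7) is the Kronecker product of the remaining filters taken in index order:

```python
import numpy as np

def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def n_mode_product(t, m, mode):
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(2)
dims = (3, 4, 5)
x = rng.standard_normal(dims)
r = rng.standard_normal(dims)
h = [rng.standard_normal((d, d)) for d in dims]

# Left-hand side of (A.8): <X | R x_1 H_1 x_2 H_2 x_3 H_3>.
y = r
for n in range(3):
    y = n_mode_product(y, h[n], n)
lhs = np.sum(x * y)

# Right-hand side: tr(g_XR^(n) H_n^T), with g_XR^(n) = X_n q^(n) R_n^T
# as in Eqs. (A.6)-(A.7).
for n in range(3):
    others = [h[m] for m in range(3) if m != n]
    q = np.kron(others[0], others[1])   # q^(n) for this unfolding convention
    g_xr = unfold(x, n) @ q @ unfold(r, n).T
    assert np.isclose(lhs, np.trace(g_xr @ h[n].T))
```
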

Expression of the n-mode unfolding of \|R \times_1 H_1 \cdots \times_N H_N\|^2

The n-mode unfolding expression of \|R \times_1 H_1 \cdots \times_N H_N\|^2 is

\mathrm{tr}[H_n R_n (H_1^T \otimes \cdots \otimes H_{n-1}^T \otimes H_{n+1}^T \otimes \cdots \otimes H_N^T)(H_1 \otimes \cdots \otimes H_{n-1} \otimes H_{n+1} \otimes \cdots \otimes H_N) R_n^T H_n^T],   (A.9)

which, after simplification using the Kronecker product properties, can be formulated as follows:

\mathrm{tr}[H_n R_n (H_1^T H_1 \otimes \cdots \otimes H_{n-1}^T H_{n-1} \otimes H_{n+1}^T H_{n+1} \otimes \cdots \otimes H_N^T H_N) R_n^T H_n^T].

Let us note that

G_{RR}^{(n)} = R_n Q^{(n)} R_n^T   (A.10)

with

Q^{(n)} = q^{(n)T} q^{(n)} = H_1^T H_1 \otimes \cdots \otimes H_{n-1}^T H_{n-1} \otimes H_{n+1}^T H_{n+1} \otimes \cdots \otimes H_N^T H_N.   (A.11)

Hence, for all n = 1, \ldots, N,

\|R \times_1 H_1 \cdots \times_N H_N\|^2 = \mathrm{tr}(H_n G_{RR}^{(n)} H_n^T).   (A.12)

Minimization of the mean square error e(H_1, \ldots, H_N)

According to relations (A.8) and (A.12), the expression of the n-mode unfolded MSE e(H_1, \ldots, H_N) is the following:

e(H_1, \ldots, H_N) = E[\|X_n\|^2] - 2E[\mathrm{tr}(g_{XR}^{(n)} H_n^T)] + E[\mathrm{tr}(H_n G_{RR}^{(n)} H_n^T)].   (A.13)

Assuming that the m-mode filters H_m are fixed for all m ≠ n, the MSE e(H_1, \ldots, H_N) is minimal when its derivative with respect to the n-mode filter H_n is null. As \|X_n\|^2 is independent of H_n, this condition can be written as

E\left[\frac{\partial}{\partial H_n} \mathrm{tr}(H_n G_{RR}^{(n)} H_n^T)\right] = 2 E\left[\frac{\partial}{\partial H_n} \mathrm{tr}(g_{XR}^{(n)} H_n^T)\right],   (A.14)

in which G_{RR}^{(n)} and g_{XR}^{(n)} are also independent of H_n.

Trace derivation properties

Whatever L \in R^{M \times N} and R \in R^{N \times M}, the following property holds:

\frac{\partial}{\partial R} \mathrm{tr}[LR] = \frac{\partial}{\partial R^T} \mathrm{tr}[LR^T] = \left(\frac{\partial}{\partial R} \mathrm{tr}[LR^T]\right)^T = L^T.   (A.15)

Let F(A) = \frac{1}{2} \mathrm{tr}(A C A^T), with A \in R^{n \times m} and with C a constant symmetric matrix of R^{m \times m}; then the gradient of F(A) over R^{n \times m} is

\frac{\partial}{\partial A} F(A) = A C.   (A.16)

Expression of \frac{\partial}{\partial H_n} \mathrm{tr}(g_{XR}^{(n)} H_n^T)

Since g_{XR}^{(n)} is independent of H_n, the last term of property (A.15) implies

\frac{\partial}{\partial H_n} \mathrm{tr}(g_{XR}^{(n)} H_n^T) = g_{XR}^{(n)}.   (A.17)

Expression of \frac{\partial}{\partial H_n} \mathrm{tr}(H_n G_{RR}^{(n)} H_n^T)

According to expressions (A.10) and (A.11), G_{RR}^{(n)} \in R^{I_n \times I_n} is symmetric and independent of H_n. Thus, property (A.16) implies

\frac{\partial}{\partial H_n} \mathrm{tr}(H_n G_{RR}^{(n)} H_n^T) = 2 H_n G_{RR}^{(n)}.   (A.18)

Expression of the H_n n-mode Wiener filter

Replacing relations (A.17) and (A.18) into expression (A.14) leads to the expression of the H_n n-mode Wiener filter associated with the fixed H_m m-mode filters, m ≠ n:

H_n = \gamma_{XR}^{(n)} \Gamma_{RR}^{(n)-1},   (A.19)
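The structure of (A.19) is the familiar Wiener solution: a cross-covariance times an inverse correlation. A minimal one-mode analogue (plain matrix filtering with sample covariances, not the full multimode algorithm) can be checked numerically; perturbing the solution can only increase the empirical MSE:

```python
import numpy as np

rng = np.random.default_rng(3)
# One-mode analogue of (A.19): columns of X are realizations of the signal,
# R = X + noise, and H = E[X R^T] E[R R^T]^{-1} is the matrix Wiener filter.
x = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 10000))  # rank-2 signal
r = x + 0.5 * rng.standard_normal(x.shape)

g_xr = x @ r.T / x.shape[1]         # sample estimate of gamma_XR
g_rr = r @ r.T / r.shape[1]         # sample estimate of Gamma_RR
h = g_xr @ np.linalg.inv(g_rr)      # Eq. (A.19)

def mse(h_):
    return np.mean((x - h_ @ r) ** 2)

# h is the exact minimizer of the empirical MSE: perturbations cannot improve it.
for _ in range(10):
    assert mse(h + 0.01 * rng.standard_normal(h.shape)) >= mse(h)
```
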


where

\gamma_{XR}^{(n)} = E[g_{XR}^{(n)}]   (A.20)

is the q^{(n)}-weighted covariance matrix between the signal X and the data R, and

\Gamma_{RR}^{(n)} = E[G_{RR}^{(n)}]   (A.21)

is the Q^{(n)}-weighted correlation matrix of the data.

Appendix B. Assumptions and related expression of the n-mode Wiener filter

The following computations are related to Section 3.2.3. Let us consider matrices q^{(n)} and Q^{(n)} defined in (A.7) and (A.11). Their generic (i, j)-terms are noted q_{ij}^{(n)} and Q_{ij}^{(n)}, respectively.

Weight matrix term independence

The terms of the random weight matrix O^{(n)} \in R^{K_n \times M_n} are supposed to be mutually independent:

E(o_{kl} o_{mn}) = a_{kl} \delta_{km} \delta_{ln},   (B.1)

whatever k and m \in \{1, \ldots, K_n\} and l and n \in \{1, \ldots, M_n\}, in which a_{kl} is not null.

White and Gaussian noise condition

White and Gaussian noise condition (13) applied to the n-mode unfolding B_n can also be expressed by

E(b^n_{kl} b^n_{pq}) = \sigma_n^2 \delta_{kp} \delta_{lq},   (B.2)

where (k, p) \in \{1, \ldots, K_n\}^2 and (l, q) \in \{1, \ldots, M_n\}^2, and \sigma_n^2 is the n-mode noise power.

Noise and signal independence

The condition on noise and signal independence can be expressed by

E(x^n_{kl} b^n_{pq}) = 0   (B.3)

for all (k, p) \in \{1, \ldots, K_n\}^2 and (l, q) \in \{1, \ldots, M_n\}^2. Hence, the q^{(n)}- and Q^{(n)}-weighted (X, B)-covariance matrices are null:

\gamma_{XB}^{(n)} = \gamma_{BX}^{(n)} = 0 \quad \text{and} \quad \Gamma_{XB}^{(n)} = \Gamma_{BX}^{(n)} = 0.   (B.4)

Indeed, their (i, j)-terms are

(\gamma_{XB}^{(n)})_{ij} = \sum_{k=1}^{M_n} \sum_{l=1}^{M_n} q_{kl}^{(n)} E(x_{ik} b_{jl}), \qquad (\Gamma_{XB}^{(n)})_{ij} = \sum_{k=1}^{M_n} \sum_{l=1}^{M_n} Q_{kl}^{(n)} E(x_{ik} b_{jl}).   (B.5)

Expression of weighted covariance matrices

Covariance matrix \gamma_{RR}^{(n)}: As R_n = X_n + B_n, we can write

\gamma_{RR}^{(n)} = \gamma_{XX}^{(n)} + \gamma_{XB}^{(n)} + \gamma_{BX}^{(n)} + \gamma_{BB}^{(n)}.   (B.6)

So according to relation (B.4), weighted covariance matrix \gamma_{RR}^{(n)} can be expressed as

\gamma_{RR}^{(n)} = \gamma_{XX}^{(n)} + \gamma_{BB}^{(n)}.   (B.7)

Moreover,

\gamma_{XR}^{(n)} = \gamma_{XX}^{(n)} + \gamma_{XB}^{(n)} = \gamma_{XX}^{(n)}.   (B.8)

Covariance matrix \Gamma_{RR}^{(n)}: The previous relations also hold for \Gamma_{RR}^{(n)}:

\Gamma_{RR}^{(n)} = \Gamma_{XX}^{(n)} + \Gamma_{XB}^{(n)} + \Gamma_{BX}^{(n)} + \Gamma_{BB}^{(n)}

and

\Gamma_{RR}^{(n)} = \Gamma_{XX}^{(n)} + \Gamma_{BB}^{(n)}.   (B.9)

Moreover,

\Gamma_{XR}^{(n)} = \Gamma_{XX}^{(n)} + \Gamma_{XB}^{(n)} = \Gamma_{XX}^{(n)}.   (B.10)

Expression of \Gamma_{BB}^{(n)} and \gamma_{BB}^{(n)}

According to (B.2), the (i, j)-term of \Gamma_{BB}^{(n)} is the following:

(\Gamma_{BB}^{(n)})_{ij} = \sum_{k=1}^{M_n} \sum_{l=1}^{M_n} Q_{kl}^{(n)} E(b_{ik} b_{jl}) = \sigma_\Gamma^{(n)2} \delta_{ij}   (B.11)


with

\sigma_\Gamma^{(n)2} = \mathrm{tr}(Q^{(n)}) \sigma_n^2.   (B.12)

Hence

\Gamma_{BB}^{(n)} = \sigma_\Gamma^{(n)2} I_{I_n}.   (B.13)

The (i, j)-term of \gamma_{BB}^{(n)} can also be expressed by

(\gamma_{BB}^{(n)})_{ij} = \sum_{k=1}^{M_n} \sum_{l=1}^{M_n} q_{kl}^{(n)} E(b_{ik} b_{jl}) = \sigma_\gamma^{(n)2} \delta_{ij}

with

\sigma_\gamma^{(n)2} = \sigma_n^2 \mathrm{tr}(q^{(n)}).

Hence

\gamma_{BB}^{(n)} = \sigma_\gamma^{(n)2} I_{I_n}.   (B.14)

Expression of \Gamma_{XX}^{(n)} and \gamma_{XX}^{(n)}

Considering the signal model (28),

\gamma_{XX}^{(n)} = V_s^{(n)} \gamma_{OO}^{(n)} V_s^{(n)T},   (B.15)

where

\gamma_{OO}^{(n)} = E(O^{(n)} q^{(n)} O^{(n)T}).   (B.16)

According to (B.1), the generic term of \gamma_{OO}^{(n)} is

(\gamma_{OO}^{(n)})_{ij} = \sum_{k=1}^{M_n} \sum_{l=1}^{M_n} q_{kl}^{(n)} E(o_{ik} o_{jl}) = \beta_i \delta_{ij},   (B.17)

in which, for all i = 1, \ldots, K_n,

\beta_i = \sum_{k=1}^{M_n} q_{kk}^{(n)} a_{ik},   (B.18)

and where a_{ik} is defined in (B.1). So \gamma_{OO}^{(n)} is a diagonal matrix:

\gamma_{OO}^{(n)} = \mathrm{diag}(\beta_1, \ldots, \beta_{K_n}).   (B.19)

Matrix \Gamma_{XX}^{(n)} can also be written as

\Gamma_{XX}^{(n)} = V_s^{(n)} \Gamma_{OO}^{(n)} V_s^{(n)T},   (B.20)

with \Gamma_{OO}^{(n)} a diagonal matrix:

\Gamma_{OO}^{(n)} = \mathrm{diag}(\alpha_1, \ldots, \alpha_{K_n})   (B.21)

and

\alpha_i = \sum_{k=1}^{M_n} Q_{kk}^{(n)} a_{ik},   (B.22)

where a_{ik} is defined in (B.1).

Final expression of the H_n n-mode Wiener filter

According to (B.8) and (B.15),

\gamma_{XR}^{(n)} = V_s^{(n)} \gamma_{OO}^{(n)} V_s^{(n)T}.   (B.23)

According to (B.9), (B.13) and (B.20),

\Gamma_{RR}^{(n)} = V_s^{(n)} \Gamma_{OO}^{(n)} V_s^{(n)T} + \sigma_\Gamma^{(n)2} I_{I_n},

which can be written as

\Gamma_{RR}^{(n)} = [V_s^{(n)} \; V_b^{(n)}] \begin{bmatrix} \Gamma_{OO}^{(n)} + \sigma_\Gamma^{(n)2} I_{K_n} & 0 \\ 0 & \sigma_\Gamma^{(n)2} I_{I_n - K_n} \end{bmatrix} \begin{bmatrix} V_s^{(n)T} \\ V_b^{(n)T} \end{bmatrix},   (B.24)

with V_b^{(n)} \in \mathrm{St}(I_n - K_n, I_n) the columnwise orthogonal matrix containing the noise subspace basis vectors. The assumption of noise and signal independence implies that the noise subspace and the signal subspace are orthogonal:

V_s^{(n)T} V_b^{(n)} = 0.   (B.25)

Let us call

\Lambda_s^{(n)} = \Gamma_{OO}^{(n)} + \sigma_\Gamma^{(n)2} I_{K_n}   (B.26)

and

\Lambda_b^{(n)} = \sigma_\Gamma^{(n)2} I_{I_n - K_n}.   (B.27)

Inserting the last expressions of \gamma_{XR}^{(n)} and \Gamma_{RR}^{(n)} (resp. (B.23) and (B.24)) into Wiener n-mode filter


expression (A.19) leads to

H_n = V_s^{(n)} \gamma_{OO}^{(n)} V_s^{(n)T} [V_s^{(n)} \; V_b^{(n)}] \begin{bmatrix} \Lambda_s^{(n)-1} & 0 \\ 0 & \Lambda_b^{(n)-1} \end{bmatrix} \begin{bmatrix} V_s^{(n)T} \\ V_b^{(n)T} \end{bmatrix},   (B.28)

which can be written as

H_n = [(V_s^{(n)} \gamma_{OO}^{(n)} V_s^{(n)T} V_s^{(n)}) \; (V_s^{(n)} \gamma_{OO}^{(n)} V_s^{(n)T} V_b^{(n)})] \begin{bmatrix} \Lambda_s^{(n)-1} V_s^{(n)T} \\ \Lambda_b^{(n)-1} V_b^{(n)T} \end{bmatrix}.   (B.29)

Considering noise and signal orthogonality condition (B.25) and the fact that V_s^{(n)T} V_s^{(n)} = I_{K_n}, the final Wiener n-mode filter expression becomes

H_n = V_s^{(n)} \gamma_{OO}^{(n)} \Lambda_s^{(n)-1} V_s^{(n)T}.   (B.30)
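A sketch of how (B.30) might be applied in practice, under simplifying assumptions: the signal subspace V_s^{(n)} and \Lambda_s^{(n)} are read off the eigendecomposition of an estimate of \Gamma_{RR}^{(n)} (cf. (B.24)), \sigma_\Gamma^{(n)2} is estimated as the mean of the I_n − K_n smallest eigenvalues, and \gamma_{OO}^{(n)} is approximated by \Lambda_s^{(n)} − \sigma_\Gamma^{(n)2} I (the true \gamma_{OO}^{(n)} involves the q^{(n)} weights, which are not available from \Gamma_{RR}^{(n)} alone):

```python
import numpy as np

def n_mode_wiener_filter(gamma_rr, k_n):
    """Sketch of Eq. (B.30): H_n = V_s gamma_OO Lambda_s^{-1} V_s^T."""
    w, v = np.linalg.eigh(gamma_rr)            # eigenvalues in ascending order
    w, v = w[::-1], v[:, ::-1]                 # reorder to descending
    lam_s, v_s = w[:k_n], v[:, :k_n]           # signal subspace part of (B.24)
    sigma2 = w[k_n:].mean()                    # noise power from noise subspace
    gamma_oo = np.maximum(lam_s - sigma2, 0.0) # approximated weighted signal powers
    return (v_s * (gamma_oo / lam_s)) @ v_s.T  # V_s diag(gamma_oo/lambda_s) V_s^T

# Toy usage: a rank-2 "signal" correlation plus white noise of power 0.1.
rng = np.random.default_rng(5)
a = rng.standard_normal((6, 2))
gamma = a @ a.T + 0.1 * np.eye(6)
h = n_mode_wiener_filter(gamma, 2)
```

The resulting filter is a weighted projector onto the dominant eigenvectors, with each direction shrunk by the ratio between its estimated signal power and its eigenvalue, matching the interpretation given in the conclusion.
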

References

[1] P. Comon, Tensor decompositions, state of the art and applications, in: IMA Conference on Mathematics in Signal Processing, Warwick, UK, December 18–20, 2000.
[2] L.R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (1966) 279–311.
[3] L. De Lathauwer, B. De Moor, J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21 (April 2000) 1253–1278.
[4] L. De Lathauwer, B. De Moor, J. Vandewalle, On the best rank-(r_1, …, r_N) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl. 21 (April 2000) 1324–1342.
[5] L. De Lathauwer, Signal processing based on multilinear algebra, Ph.D. Thesis, K.U. Leuven, E.E. Department (ESAT), Belgium, September 1997.
[6] L. De Lathauwer, B. De Moor, J. Vandewalle, Fetal electrocardiogram extraction by blind source subspace separation, IEEE Transactions on Biomedical Engineering 47 (May 2000) 567–572.
[7] P.M. Kroonenberg, Three-mode Principal Component Analysis, DSWO Press, Leiden, 1983.
[8] N. Le Bihan, Traitement algébrique des signaux vectoriels: Application à la séparation d'ondes sismiques, Ph.D. Thesis, INPG, Grenoble, France, October 22, 2001.
[9] N. Le Bihan, G. Ginolhac, Subspace methods on 3D array, in: Workshop on Physics in Signal and Image Processing, Marseille, France, January 2001, pp. 359–364.
[10] D. Muti, S. Bourennane, Multidimensional signal processing using lower rank tensor approximation, in: IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, April 6–10, 2003.
[11] E. Kofidis, P.A. Regalia, On the best rank-1 approximation of higher-order supersymmetric tensors, SIAM J. Matrix Anal. Appl. 23 (3) (March 2002) 863–884.
[12] P.M. Kroonenberg, J. De Leeuw, Principal component analysis of three-mode data by means of alternating least squares algorithms, Psychometrika 45 (1) (March 1980) 69–97.
[13] T. Zhang, G.H. Golub, Rank-one approximation to high order tensors, SIAM J. Matrix Anal. Appl. 23 (2) (November 2001) 534–550.
[14] D. Muti, S. Bourennane, Multidimensional estimation based on a tensor decomposition, in: IEEE Workshop on Statistical Signal Processing, St. Louis, Missouri, USA, September 28–October 1, 2003.
[15] R.A. Harshman, M.E. Lundy, Research Methods for Multimode Data Analysis, Praeger, New York, 1970, pp. 122–215.
[16] T.G. Kolda, Orthogonal tensor decompositions, SIAM J. Matrix Anal. Appl. 23 (1) (March 2001) 243–255.
[17] L. De Lathauwer, P. Comon, B. De Moor, J. Vandewalle, Higher-order power method, application in independent component analysis, in: NOLTA Conference, vol. 1, Las Vegas, December 10–14, 1995, pp. 91–96.
[18] P. McCullagh, Tensor Methods in Statistics, Monographs on Statistics and Applied Probability, Chapman and Hall, 1987.