Filtering of infinite sets of stochastic signals: An approach based on interpolation techniques


Signal Processing 91 (2011) 2556–2566


A. Torokhti*, S.J. Miklavcic

Centre for Industrial and Applied Mathematics, School of Mathematics and Statistics, University of South Australia, Mawson Lakes, SA 5095, Australia


Abstract

Article history: Received 5 November 2010; received in revised form 9 May 2011; accepted 9 May 2011; available online 19 May 2011.

We propose an approach to the filtering of infinite sets of stochastic signals, K_Y and K_X. The known Wiener-type approach cannot be applied to infinite sets of signals; even when K_Y and K_X are finite sets, the computational work associated with the Wiener approach becomes unreasonably hard. To avoid such difficulties, a new theory is studied. The problem addressed is as follows. Given two infinite sets of stochastic signals, K_Y and K_X, find a single filter F : K_Y → K_X that estimates the signals in K_X from observed signals in K_Y with a controlled associated error. Our approach is based on exploiting a signal interpolation idea. The proposed filter F is represented as a sum of p terms, F(y) = Σ_{j=1}^p T_j R_j Q_j(y). Each term is derived from three operations represented by matrices Q_i, R_i and T_i with i = 1, …, p. Each operation is a special stage of the filtering aimed at facilitating the associated numerical work. In particular, Q_1, …, Q_p are used to transform an observable signal y ∈ K_Y into p different signals. Matrices R_1, …, R_p reduce a set of related matrix equations to p independent equations; their solution requires much less computational effort than would be required with the full set of matrix equations. Matrices T_1, …, T_p are determined from interpolation conditions. We show that the proposed filter is asymptotically optimal. Moreover, the filter model is determined in terms of pseudo-inverse matrices and, therefore, it always exists.

© 2011 Elsevier B.V. All rights reserved.

Keywords: Wiener-type filtering; interpolation

1. Introduction

1.1. Motivation and basic idea

In this paper, we consider extensions of the known approaches to filtering of random signals based on the Wiener idea [21]. Wiener-type filters have a broad spectrum of applications in engineering, physics, biology and many other research areas. Some relevant references can be found, for example, in [6,8,10,15].

1.1.1. Motivation

One motivation for the proposed method is the following. Most of the literature on the subject of Wiener-type filtering^1 discusses the properties of an optimal filter for an individual finite random signal-vector.^2 This means that if one wishes to transform an infinite set of observable random signal-vectors K_Y = {y_1, y_2, …, y_N, …} so that it is close to an infinite set of reference random signal-vectors K_X = {x_1, x_2, …, x_N, …} using the standard Wiener-type approach, then one is forced to find an infinite set of corresponding Wiener filters {F_1, F_2, …, F_N, …}: one filter F_i for each representative

* Corresponding author. E-mail address: [email protected] (A. Torokhti).
doi:10.1016/j.sigpro.2011.05.008

^1 Relevant references can be found, for example, in [21].
^2 We say that a random signal-vector x is finite if x has a finite number of components.


y_i of the signal set K_Y. Therefore, if K_Y and K_X are infinite sets of random signals, such an approach cannot be applied in practice.^3 Moreover, in some situations it is appropriate, and possibly necessary, to introduce an 'identifier': a parameter whose value identifies or associates a specific filter from the set {F_1, F_2, …, F_N, …} with a given signal from K_Y.

^3 Here, K_Y and K_X are countable sets. More generally, the sets K_Y and K_X might be uncountable when K_Y and K_X depend on a continuous parameter α.

An example of infinite sets K_Y and K_X arises when the signals y ∈ K_Y and x ∈ K_X depend on a parameter vector α = (α^(1), …, α^(q))^T ∈ C^q ⊆ R^q, where C^q is a q-dimensional cube, i.e. y = y(·, α) and x = x(·, α). In particular, one coordinate of α, say α^(1), could be interpreted as time, thus allowing for a continuous stream of vectors. In this regard, see also Section 2.4.1 below.

Note that even when K_Y and K_X are finite sets, K_Y = {y_1, y_2, …, y_N} and K_X = {x_1, x_2, …, x_N}, so that K_Y and K_X can be represented as finite signals, the Wiener approach leads to the computation of large covariance matrices. Indeed, if each y_i has n components and each x_i has m components, then the Wiener approach requires the product of an mN × nN matrix with an nN × nN matrix, and the pseudo-inverse of an nN × nN matrix [21]. These computations require O(2mn²N³) and O(22n³N³) flops, respectively [7]. As a result, the computational work associated with this approach becomes unreasonably hard.

Note that a filtering methodology based on recursive least squares (RLS) algorithms (see, for example, [1,2,9,16]), like the Wiener-type filters, also concerns a filter for an individual finite signal-vector, not for the infinite signal sets considered in this paper. Additionally, in the derivation of RLS filters it is assumed that the correlation matrix is invertible.^4 In our method, the latter restriction is removed (see Remark 5). It is also known [16] that the computational cost of RLS filters is of the same order as, or higher than, that of the Wiener-type filtering techniques.

^4 Sections 9.2 and 9.3 in [9].

1.1.2. Brief description of the problem

To avoid such difficulties, we here propose and study an approach that allows one to use a single filter to estimate arbitrary signals from the set K_Y. In other words, we solve the following problem. Given two infinite sets of random signals, K_Y and K_X, find a single filter F : K_Y → K_X, based on an observable signal y ∈ K_Y, that estimates the signal x ∈ K_X with a controlled associated error. Note that in our formulation the set K_Y can be finite or infinite. In the latter case, though, the set must be compact.

1.1.3. Basic idea

Our proposed approach to determining the desired filter F : K_Y → K_X is as follows. First, choose a finite subset S_X = {x_1, x_2, …, x_p} ⊆ K_X and a corresponding subset S_Y = {y_1, y_2, …, y_p} ⊆ K_Y. Then, define p pairs (x_1, y_1), …, (x_p, y_p) taken from the set S_X × S_Y. These are used to establish p interpolation conditions. Finally, determine the filter F


that satisfies the interpolation conditions. The filter F is called the interpolation filter.
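The computational-cost comparison of Section 1.1.1 can be made concrete with a rough calculation. The sketch below is ours, with hypothetical sizes (loosely echoing the image dimensions used later in Section 3); it contrasts one stacked Wiener problem of size nN with N independent n × n problems:

```python
# Back-of-envelope version of the flop counts quoted in Section 1.1.1:
# multiplying an (mN x nN) matrix by an (nN x nN) matrix costs about
# 2*m*n^2*N^3 flops, and an (nN x nN) pseudo-inverse about 22*n^3*N^3 [7].
m, n, N = 116, 116, 100          # hypothetical sizes

stacked = 2 * m * n**2 * N**3 + 22 * n**3 * N**3    # one large Wiener problem
per_signal = N * (2 * m * n**2 + 22 * n**3)         # N separate n x n problems

print(f"stacked:    {stacked:.3e} flops")
print(f"per-signal: {per_signal:.3e} flops")
print(f"ratio: {stacked // per_signal}")            # -> 10000, i.e. N^2
```

Since both counts share the same constant 2mn² + 22n³, the ratio is exactly N²: the stacked formulation is quadratically more expensive in the number of signals.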

1.1.4. Contribution

In Section 2.3, we show that the interpolation filter F is asymptotically optimal in the sense that, under certain conditions, its associated error tends to the minimal error. The assumptions used are detailed in Section 1.2.4. Here, we mention that, in particular, certain covariance matrices formed from specifically chosen signals in K_Y and K_X are assumed to be known or estimated. Throughout, all random signals under consideration are finite.

The paper is arranged as follows. After some notation and definitions are introduced in the next subsection, a rigorous formulation of the problem is given in Section 1.2. Some approaches for finding alternative filters are discussed in Section 1.3. Preliminary results concerning an orthogonalization of random signals are then given in Section 2.1. The main results of the paper, involving the determination of the estimator and an error analysis, are presented in Section 2. There, it is shown, in Theorems 2 and 3 and Remarks 6 and 7, that the proposed filter has several free parameters that can be manipulated to improve filter performance: the number of terms p of the filter F, and the sets S_X and S_Y. Moreover, our filter is determined in terms of pseudo-inverse matrices [3] and, therefore, it always exists.

1.2. Notation. Formulation of the problem

1.2.1. General notation

For a rigorous statement of the problem, we need the following notation. Let (Ω, Σ, μ) denote a probability space, where Ω = {ω} is the set of outcomes, Σ a σ-field of measurable subsets of Ω, and μ : Σ → [0, 1] an associated probability measure on Σ. Suppose that x and y are random signals such that x ∈ L²(Ω, R^m) and y ∈ L²(Ω, R^n), where x = (x^(1), …, x^(m))^T and y = (y^(1), …, y^(n))^T with x^(i), y^(j) ∈ L²(Ω, R) for i = 1, …, m and j = 1, …, n, respectively. Let

⟨x^(i), y^(j)⟩ = ∫_Ω x^(i)(ω) y^(j)(ω) dμ(ω) < ∞   (1)

and

E_{xy} = E{xy^T} = {⟨x^(i), y^(j)⟩}_{i,j=1}^{m,n} ∈ R^{m×n}.

We introduce the norm

‖x‖²_E = ∫_Ω ‖x(ω)‖₂² dμ(ω),

where ‖x(ω)‖₂ is the Euclidean norm of x(ω). In the sequel we shall also make reference to the Frobenius norm ‖A‖ of a matrix A. If M_mn ∈ R^{m×n}, then we define an operator M_mn : L²(Ω, R^n) → L²(Ω, R^m) by

[M_mn(y)](ω) = M_mn[y(ω)].   (2)


To simplify notation, we omit the index mn on M_mn and M_mn and write (2) as

[M(y)](ω) = M[y(ω)].   (3)
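For intuition, the objects just defined translate directly into matrix arithmetic once realizations are stored as columns. The sketch below is ours (toy sizes and synthetic signals, not from the paper): it estimates E_xy = E{xy^T} by averaging outer products over sampled outcomes ω, and applies an operator of the form (3) as a single matrix product.

```python
import numpy as np

# Column s of Y holds the realization y(ω_s). Then (i) E_xy = E{x y^T} is
# approximated by the average of the outer products x_s y_s^T (zero-mean
# signals), and (ii) the operator M of (2)-(3) acts pointwise in ω, which
# for stored realizations is one matrix product.
rng = np.random.default_rng(0)
m, n, S = 3, 4, 100_000

A = rng.standard_normal((m, n))
Y = rng.standard_normal((n, S))        # zero-mean y with E_yy ~ identity
X = A @ Y                              # a dependent signal x = A y

Exy = X @ Y.T / S                      # sample estimate of E_xy = A E_yy ~ A
print(np.max(abs(Exy - A)))            # small, shrinking like 1/sqrt(S)

M = rng.standard_normal((2, n))        # M defines M : L2(Omega,R^n) -> L2(Omega,R^2)
MY = M @ Y                             # [M(y)](omega) = M[y(omega)] for every omega
assert np.allclose(MY[:, 0], M @ Y[:, 0])
```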

Henceforth, an operator defined similarly to that of (3) will be denoted by a calligraphic letter.

1.2.2. Filter structure

Some further notation is needed to describe the filter under consideration. The proposed filter F is specified below in (7) by means of operators Q_j, R_j and T_j. In particular, Q_1, …, Q_p are used to transform an observable signal y ∈ K_Y into p different signals. By means of the orthogonalization procedure provided in Section 2.1, the operators R_1, …, R_p reduce a set of related matrix equations to p independent equations (see (8) and (15) below, respectively). Their solution requires much less computational effort than would be required with the full set of equations. Therefore, the operations Q_j and R_j serve to reduce the overall numerical work needed for the implementation of the proposed filter. The operators T_1, …, T_p are determined from the interpolation conditions given below by (8), so as to adjust the accuracy of the signal estimate. The purpose of Q_j and R_j is explained in more detail in Remark 4 and Section 2.4. Here, we introduce Q_j and R_j as follows.

1.2.3. Notation related to the filter structure

For Q_j : L²(Ω, R^n) → L²(Ω, R^n), where j = 1, …, p and p is any positive integer, we denote

v_j = Q_j(y)   and   v_{jk} = Q_j(y_k),   (4)

where y_k belongs to the set S_Y introduced above in Section 1.1.3. For example, if y = y(·, t), where y(·, t) is some arbitrary stationary time series and t ∈ [a, b] ⊆ R is the time variable, then we can define Q_1, …, Q_p to be time-shifting operators such that

v_1 = y(·, t),   v_2 = y(·, t − 1),   …,   v_p = y(·, t − p).   (5)

Such a case has been considered, in particular, in [12]. To demonstrate and justify the flexibility of the filter F in (7) below with respect to the choice of Q_1, …, Q_p, we mainly study the case where Q_1, …, Q_p are arbitrary. Some other explicit choices of Q_1, …, Q_p are presented in Section 2.4.1, where we also discuss the benefits associated with particular forms of Q_1, …, Q_p.

Next, for j = 1, …, p, let R_j : L²(Ω, R^n) → L²(Ω, R^n) be operators such that, for k = 1, …, p, the signals

w_{1k} = R_1(v_{1k}),   …,   w_{pk} = R_p(v_{pk})   (6)

are mutually orthogonal in the sense of Definition 2 given in Section 2.1 below. A method for determining R_1, …, R_p (and, consequently, w_{1k}, …, w_{pk}) is also given in Section 2.1. The orthogonalization is used to reduce the computation of large matrices to the computation of a sequence of much smaller matrices. Such a procedure leads to a substantial reduction in computational load; see Remark 4 in Section 2.2 for more detail. We also use the notation

w_1 = R_1(v_1),   …,   w_p = R_p(v_p).

1.2.4. Statement of the problem

Let us assume that K_X ⊆ L²(Ω, R^m) and K_Y ⊆ L²(Ω, R^n) and, as before, S_X = {x_1, …, x_p} ⊆ K_X and S_Y = {y_1, …, y_p} ⊆ K_Y. In particular, S_X can be an ε_X-net for K_X and S_Y an ε_Y-net for K_Y; in this regard, see Definition 3 in Section 2.3 below.^5 In Section 2.4.2, a particular choice of the sets S_X and S_Y is considered.

^5 More generally, a subset S ⊆ T of a Banach space T with norm ‖·‖_T is called an ε-net for T if, for given ε > 0 and any t ∈ T, there is at least one s ∈ S such that ‖t − s‖_T ≤ ε. A finite ε-net S always exists if T is a compact set [11].

For any y ∈ K_Y, let a filter F : K_Y → K_X be presented in the form

F(y) = Σ_{j=1}^p T_j(w_j) = Σ_{j=1}^p T_j R_j Q_j(y),   (7)

where T_j is defined by a matrix T_j ∈ R^{m×n} similarly to (3). The problem is to find T_1, …, T_p so that

Σ_{j=1}^p T_j E_{w_{jk} w_{kk}} = E_{x_k w_{kk}}   for k = 1, …, p.   (8)

We note that (8) are interpolation-like conditions. The assumptions we invoke are that (a) x ∈ K_X is unobservable and unknown (i.e. a constructive analytical form of x is unknown), and (b) y ∈ K_Y is observable but also unknown in the same sense as x.

Since any random signal can be transformed to a signal with zero mean, we shall assume that all signals y and x have zero mean. By a similar argument, we also assume that the signals w_1, …, w_p have zero mean.

To emphasize the dependence of F, defined by (7), on p, we write F^(p) := F.

Definition 1. The filter F^(p) is called the interpolation filter of the pth order.

1.3. Discussion of the problem

The issues related to the statement of the problem are considered in the following remarks.

Remark 1. The sets K_X and K_Y are infinite. Therefore, finding a Wiener-like filter that minimizes ‖x − F^(p)(y)‖_E for every individual pair {x, y} ∈ K_X × K_Y means that one needs to find and use an infinite set of filters; clearly, this makes no sense in practice. For the particular case when K_X and K_Y are finite sets, i.e. S_X = K_X and S_Y = K_Y, some reasonable approaches to finding an optimal filter F : S_Y → S_X could be as follows. A first approach is to find an F^(p) that minimizes ‖g − F^(p)(ν)‖²_E, where g = [x_1, x_2, …, x_p]^T and ν = [y_1, y_2, …, y_p]^T. A second approach is to find a filter determined from p Wiener-like sub-filters F_1, …, F_p, one for each pair {x_k, y_k} ∈ K_X × K_Y with k = 1, …, p. In both approaches, F and F_k (with k = 1, …, p) can be chosen, in particular, in the form (7).


Nevertheless, even in such a simplified case (i.e. when S_X = K_X and S_Y = K_Y), the first approach would imply a significant computational effort associated with the computation of large np × np matrices. The second approach would require a 'recognizer' that selects, for a particular input signal y_k, the corresponding sub-filter F_k from the set of p sub-filters F_1, …, F_p. Unlike these two approaches, it will be shown in Section 2.2 that the filter satisfying condition (8) requires the computation of much smaller, n × n, matrices and, therefore, is more computationally effective. Second, the filter proposed below does not require any 'recognizer'. Third, our filter is applicable to infinite sets K_X and K_Y.

Remark 2. The parameterized signals considered in Section 1.1, y = y(·, α) and x = x(·, α) for each particular parameter vector α ∈ C^q ⊆ R^q, form a particular example of the infinite signal sets discussed above. The Wiener-like filtering approach applied to y(·, α) and x(·, α) leads to finding min_F ‖x(·, α) − F[y(·, α)]‖²_E, which is a function of α. This means that such a filter F would have to be found for each α, which is again of little practical use. An alternative approach is to consider y and x as functions y : Ω × C^q → R^n and x : Ω × C^q → R^m, respectively, and then to state a minimization problem in terms of the new norm ‖x‖²_{C,E} = ∫_{C^q} ∫_Ω ‖x(ω)‖² dμ(ω) dα. Such a problem is different from that stated above and we do not consider it here.

2. Main results

2.1. Orthogonalization of random signals

The results presented in this section will be used in the solution of problems (7) and (8) given in Section 2.2.

Definition 2. The random signals w_{1k}, …, w_{p−1,k}, w_{pk} are called mutually orthogonal if, for any k = 1, …, p,

E_{w_{ik} w_{jk}} = O   for i ≠ j with i, j = 1, …, p.   (9)

Here, O is the zero matrix. In this regard, see also [20–24]. The following elementary example illustrates Definition 2.

Example 1. Let Ω = [0, 1] with μ the uniform measure, and let w_{1k}(ω) = [ω, ω]^T and w_{2k}(ω) = [ω − 2/3, ω − 2/3]^T. Then each entry of E_{w_{1k} w_{2k}} equals ∫_0^1 ω(ω − 2/3) dω = 1/3 − 1/3 = 0, so that E_{w_{1k} w_{2k}} = O.

We write M† for the Moore–Penrose pseudo-inverse [3] of a matrix M, and define K_iℓ : L²(Ω, R^n) → L²(Ω, R^n) by

K_iℓ = E_{v_{iℓ} w_{ℓk}} E†_{w_{ℓk} w_{ℓk}} + M_iℓ(I − E_{w_{ℓk} w_{ℓk}} E†_{w_{ℓk} w_{ℓk}})   (10)

with M_iℓ ∈ R^{n×n} arbitrary and i, ℓ = 1, …, p. The symbol θ denotes the zero vector.

Lemma 1. Let G_{jk} : L²(Ω, R^n) → L²(Ω, R^n) be a linear continuous operator for j, k = 1, …, p. Let the random signals v_{1k}, …, v_{p−1,k}, v_{pk} be such that, for any k = 1, …, p,

G_{1k}(v_{1k}) + ⋯ + G_{p−1,k}(v_{p−1,k}) + G_{pk}(v_{pk}) = θ   (11)

if and only if G_{1k}, …, G_{p−1,k}, G_{pk} are the zero operators. Then, for k = 1, …, p, the random signals w_{1k} = R_1(v_{1k}), …, w_{pk} = R_p(v_{pk}), with R_1, …, R_p such that

w_{1k} = v_{1k}   and, for i = 2, …, p,   w_{ik} = v_{ik} − Σ_{ℓ=1}^{i−1} K_{iℓ}(w_{ℓk}),   (12)

are mutually orthogonal.

Proof. We wish (9) to be true. If K_{iℓ} has been chosen so that condition (9) holds for all w_{ℓk} with ℓ = 1, …, i − 1 and k = 1, …, p, then we have (see (1))

E{[v_{ik} − Σ_{ℓ=1}^{i−1} K_{iℓ}(w_{ℓk})] w_{jk}^T} = O   ⇒   E_{v_{iℓ} w_{ℓk}} − K_{iℓ} E_{w_{ℓk} w_{ℓk}} = O.   (13)

Eq. (13) has a solution [3] if and only if E_{v_{iℓ} w_{ℓk}} = E_{v_{iℓ} w_{ℓk}} E†_{w_{ℓk} w_{ℓk}} E_{w_{ℓk} w_{ℓk}}. The latter is true by [21, p. 168]. Then the general solution [3] is given by (10). Therefore, w_{ik} defined by (12) also satisfies condition (9). Thus, the signals w_{1k}, …, w_{p−1,k}, w_{pk} are mutually orthogonal for any k = 1, …, p. □

Remark 3. In Lemma 1, (12) implies w_{ik} = θ if v_{ik} = Σ_{ℓ=1}^{i−1} K_{iℓ}(w_{ℓk}). At the same time, we are interested in non-zero signals w_{1k}, …, w_{pk}. That is why, in Lemma 1, the restriction (11) of joint independence of the signals v_{1k}, …, v_{pk} has been imposed. We also note that (11) is 'an operator version' of the definition of linear independence of vectors.

Below, we give the exact formula (14) for the matrices T_1, …, T_p that satisfy condition (8) and present an error analysis of the associated estimator defined by (7) and (14).

2.2. Determination of T_j

Theorem 1. Let {w_{jk}}_{j,k=1}^p be a set of orthogonal random signals defined by (10) and (12). Matrices T_1, …, T_p that satisfy condition (8) are given by

T_j = E_{x_j w_{jj}} E†_{w_{jj} w_{jj}} + A_j(I − E_{w_{jj} w_{jj}} E†_{w_{jj} w_{jj}}),   (14)

where the A_j, for j = 1, …, p, are arbitrary matrices.

Proof. The proof follows directly from (8) and (9). Indeed, (9) implies that (8) reduces to

T_j E_{w_{jj} w_{jj}} = E_{x_j w_{jj}}   for j = 1, …, p.   (15)

The solution to this equation is given by (14) if and only if [3]

E_{x_j w_{jj}} E†_{w_{jj} w_{jj}} E_{w_{jj} w_{jj}} = E_{x_j w_{jj}}.

By [21, p. 168], the last identity is true. Thus, (14) is also true. □

We note that the matrix T_j is not unique, owing to the arbitrary matrices A_j. In particular, the A_j can be set identically equal to the zero matrix; such a choice is normally made in the known Wiener-type filters [5,12,13,17,18,20,21].

Remark 4. Theorem 1 and its proof motivate the use of the orthogonalizing operators R_1, …, R_p. If the random signals w_{1k}, …, w_{p−1,k}, w_{pk} were not orthogonal, then condition (8) would represent a coupled set of matrix equations for T_1, …, T_p. The orthogonalization of the signals {w_{ik}}_{i=1}^p by Lemma 1 reduces the set of matrix equations (8) to the p independent equations (15). Their solution requires much less computational effort than would be required with the full set of equations. Although the orthogonalization procedure of Lemma 1 requires additional computational work, the use of the orthogonalizing operators R_1, …, R_p leads to a substantial reduction in the overall computational load needed for the implementation of the proposed filter.

Remark 5. The proposed filter is determined in terms of pseudo-inverse matrices; therefore, the filter always exists.
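The constructions of Lemma 1 and Theorem 1 can be sketched numerically. The toy example below is ours, not the authors' implementation: a single index k is fixed and dropped, exact covariances are replaced by sample estimates, and the arbitrary matrices M_iℓ and A_j are set to zero, as in the Wiener-type case.

```python
import numpy as np

# Toy sketch of Lemma 1 and Theorem 1: covariances are sample estimates,
# np.linalg.pinv plays the role of the Moore-Penrose pseudo-inverse E†,
# and the arbitrary matrices M_il and A_j are zero.
rng = np.random.default_rng(1)
n, m, p, S = 4, 3, 3, 10_000

def cov(a, b):
    """Sample estimate of E_ab for zero-mean signals stored as columns."""
    return a @ b.T / a.shape[1]

v = [rng.standard_normal((n, S)) for _ in range(p)]   # signals v_1, ..., v_p
x = rng.standard_normal((m, S))                        # reference signal x

# Eq. (12) with K_il from Eq. (10): w_1 = v_1 and, for i >= 2,
# w_i = v_i - sum_l K_il(w_l), where K_il = E_{v_i w_l} E†_{w_l w_l}.
w = [v[0].copy()]
for i in range(1, p):
    wi = v[i].copy()
    for wl in w:
        wi -= cov(v[i], wl) @ np.linalg.pinv(cov(wl, wl)) @ wl
    w.append(wi)

# Definition 2 / Eq. (9): off-diagonal cross-covariances vanish.
print(max(float(np.max(abs(cov(w[i], w[j]))))
          for i in range(p) for j in range(p) if i != j))   # numerically zero

# Theorem 1 / Eq. (14) with A_j = 0: each T_j solves its own decoupled
# equation (15), T_j E_{w_j w_j} = E_{x w_j}, independently of the others.
T = [cov(x, wj) @ np.linalg.pinv(cov(wj, wj)) for wj in w]
for Tj, wj in zip(T, w):
    assert np.allclose(Tj @ cov(wj, wj), cov(x, wj))
```

With these T_j, the filter (7) is assembled as F(y) = Σ_j T_j w_j; the benefit of the orthogonalization is visible in the last loop, where each T_j is obtained from its own small system rather than from one coupled set of equations.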

2.3. Error analysis

Let us now estimate the error ‖x − F^(p)(y)‖²_E associated with the interpolation filter of the pth order F^(p) presented by (7) and (14).

Definition 3 (Kolmogorov and Fomin [11]). The set {x_k}_{k=1}^p is an ε_X-net for K_X if, for any ε_X ≥ 0 and x ∈ K_X, there exists at least one x_k, k ∈ {1, …, p}, such that

‖x − x_k‖²_E ≤ ε_X.

The ε_Y-net {y_j}_{j=1}^p for K_Y is defined similarly.

In Theorem 2 below, we show that the error ‖x − F^(p)(y)‖²_E associated with the proposed filter F^(p) for an infinite set of signals is asymptotically close, in the sense ε_X, ε_Y → 0, to the error associated with the optimal filter, denoted P_p, of a similar structure developed for an individual input signal only. The filter P_p has been studied in [20, Section 5.2.2]; it is optimal in the sense of minimizing the associated error for an individual input signal and generalizes known filters developed from the Wiener approach. The error ‖x − P_p(y)‖²_E associated with the filter P_p is given by [20]^6

‖x − P_p(y)‖²_E = ‖E_{xx}^{1/2}‖² − Σ_{j=1}^p ‖E_{xw_j}(E†_{w_j w_j})^{1/2}‖².   (16)

^6 Here, A^{1/2} is defined by the condition A^{1/2} A^{1/2} = A.

Definition 4. We say that a filter F^(p) is asymptotically optimal if its associated error ‖x − F^(p)(y)‖²_E tends to the right-hand side of (16) as ε_X, ε_Y → 0.

The following theorem, Theorem 2, characterizes the error ‖x − F^(p)(y)‖²_E associated with the proposed filter F^(p) in terms of the error (16) associated with the optimal filter. In this theorem and in Theorem 3 below, the operators Q_k are arbitrary and the operators R_k are defined by Lemma 1.

Theorem 2. Let K_X and K_Y be compact sets. Let S_X = {x_k}_{k=1}^p be an ε_X-net for K_X and S_Y = {y_k}_{k=1}^p an ε_Y-net for K_Y. Then

‖x − F^(p)(y)‖²_E → ‖x − P_p(y)‖²_E   as ε_X, ε_Y → 0.   (17)

That is, the interpolation filter of the pth order F^(p), given by (7), (12) and (14), is asymptotically optimal.^7 Its proof follows the proof of Theorem 3 below.

^7 Note that we have assumed that the two ε-nets possess the same number of elements, p. More general situations can be made the subject of a future investigation. Note also that, in general, p = p(ε).

To formulate and prove Theorem 3, we need the following notation. Let z_k := w_{kk}. Let E†_{z_k z_k} and Q_k(y) satisfy the Lipschitz conditions

‖E†_{z_k z_k} − E†_{w_k w_k}‖² ≤ λ_E ‖z_k − w_k‖²_E,   ‖Q_k(y_k) − Q_k(y)‖²_E ≤ λ_Q ‖y_k − y‖²_E   (18)

with Lipschitz constants λ_E and λ_Q, respectively. Let the operator R_k be bounded so that, for some R̂_k > 0,

‖R_k‖²_O ≤ R̂_k,   (19)

where ‖·‖_O is the operator norm [11]. Let us also denote

J_0 = ‖x − P_p(y)‖²_E,   E_k = ‖(E†_{w_k w_k})^{1/2}‖²,

D_k = λ_E λ_Q R̂_k [‖x‖²_E E_k + ‖E_{x_k w_k}‖²] ‖E_{w_k w_k}^{1/2}‖²

and

C = Σ_{k=1}^p ‖A_k(I − E_{w_k w_k} E†_{w_k w_k})‖² ‖E_{w_k w_k}^{1/2}‖².

As before, w_k = R_k Q_k(y).

Theorem 3. Let K_X and K_Y be compact sets. Let S_X = {x_k}_{k=1}^p be an ε_X-net for K_X and S_Y = {y_k}_{k=1}^p an ε_Y-net for K_Y. Then an estimate of the error ‖x − F^(p)(y)‖²_E associated with the interpolation filter of the pth order F^(p) : K_Y → K_X presented by (7) and (14) is given by

‖x − F^(p)(y)‖²_E ≤ ‖x − P_p(y)‖²_E + Σ_{k=1}^p (ε_X ‖w_k‖²_E E_k + ε_Y D_k) + C.   (20)

Proof. Since K_X and K_Y are compact sets, a finite ε_X-net and a finite ε_Y-net for K_X and K_Y, respectively, always exist [11]. For any T_1, …, T_p, we have

‖x − F^(p)(y)‖²_E = ‖x − Σ_{j=1}^p T_j(w_j)‖²_E = J_0 + J_1,   (21)

where J_0 is as above and

J_1 = Σ_{k=1}^p ‖T_k E_{w_k w_k}^{1/2} − E_{xw_k}(E†_{w_k w_k})^{1/2}‖².   (22)

The error representation in (21) and (22) follows from [20] under the zero-mean assumption E[x] = E[w_j] = 0, which we made in Section 1.2.


Next, for T_k given by (14) with z_k := w_{kk}, the summand of J_1 is represented as follows:

‖T_k E_{w_k w_k}^{1/2} − E_{xw_k}(E†_{w_k w_k})^{1/2}‖² = ‖E_{x_k z_k} E†_{z_k z_k} E_{w_k w_k}^{1/2} − E_{xw_k}(E†_{w_k w_k})^{1/2} + A_k(I − E_{z_k z_k} E†_{z_k z_k}) E_{w_k w_k}^{1/2}‖²,   (23)

where

E_{x_k z_k} E†_{z_k z_k} E_{w_k w_k}^{1/2} − E_{xw_k}(E†_{w_k w_k})^{1/2}
= (E_{x_k z_k} E†_{z_k z_k} − E_{xw_k} E†_{w_k w_k}) E_{w_k w_k}^{1/2}
= [E_{x_k z_k}(E†_{z_k z_k} − E†_{w_k w_k}) + (E_{x_k z_k} − E_{xz_k}) E†_{w_k w_k} + (E_{xz_k} − E_{xw_k}) E†_{w_k w_k}] E_{w_k w_k}^{1/2},   (24)

because of the relation (E†_{w_k w_k})^{1/2} = E†_{w_k w_k} E_{w_k w_k}^{1/2} [21]. In (24),

‖E†_{z_k z_k} − E†_{w_k w_k}‖² ≤ λ_E ‖z_k − w_k‖²_E = λ_E ‖R_k[Q_k(y_k) − Q_k(y)]‖²_E ≤ λ_E λ_Q R̂_k ‖y_k − y‖²_E ≤ ε_Y λ_E λ_Q R̂_k,   (25)

and, on the basis of the Cauchy–Schwarz inequality,

‖E_{x_k z_k} − E_{xz_k}‖² = Σ_{i=1}^m Σ_{j=1}^n ( ∫_Ω [x_k^(i)(ω) − x^(i)(ω)] z_k^(j)(ω) dμ(ω) )²
≤ Σ_{i=1}^m ∫_Ω [x_k^(i)(ω) − x^(i)(ω)]² dμ(ω) · Σ_{j=1}^n ∫_Ω [z_k^(j)(ω)]² dμ(ω)
= ‖x_k − x‖²_E ‖z_k‖²_E ≤ ε_X ‖z_k‖²_E

and

‖E_{xz_k} − E_{xw_k}‖² = Σ_{i=1}^m Σ_{j=1}^n ( ∫_Ω x^(i)(ω)[z_k^(j)(ω) − w_k^(j)(ω)] dμ(ω) )²
≤ Σ_{i=1}^m ∫_Ω [x^(i)(ω)]² dμ(ω) · Σ_{j=1}^n ∫_Ω [z_k^(j)(ω) − w_k^(j)(ω)]² dμ(ω)
= ‖x‖²_E ‖z_k − w_k‖²_E ≤ ε_Y λ_E λ_Q R̂_k ‖x‖²_E.

Therefore,

‖T_k E_{w_k w_k}^{1/2} − E_{xw_k}(E†_{w_k w_k})^{1/2}‖²
≤ {ε_X ‖z_k‖²_E E_k + ε_Y λ_E λ_Q R̂_k [‖x‖²_E E_k + ‖E_{x_k w_k}‖²] + ‖A_k(I − E_{z_k z_k} E†_{z_k z_k})‖²} ‖E_{w_k w_k}^{1/2}‖²,

so that

‖x − F^(p)(y)‖²_E ≤ J_0 + Σ_{k=1}^p {ε_X ‖z_k‖²_E E_k + ε_Y λ_E λ_Q R̂_k [‖x‖²_E E_k + ‖E_{x_k w_k}‖²] + ‖A_k(I − E_{w_k w_k} E†_{w_k w_k})‖²} ‖E_{w_k w_k}^{1/2}‖².

The theorem is proved. □

Proof of Theorem 2. We have I − E_{z_k z_k} E†_{z_k z_k} = I − E_{z_k z_k}^{1/2}(E†_{z_k z_k})^{1/2} [20]. Therefore,

(I − E_{z_k z_k} E†_{z_k z_k}) E_{w_k w_k}^{1/2} = E_{w_k w_k}^{1/2} − E_{z_k z_k}^{1/2}(E†_{z_k z_k})^{1/2} E_{w_k w_k}^{1/2}.

Thus, A_k(I − E_{z_k z_k} E†_{z_k z_k}) E_{w_k w_k}^{1/2} → 0 as ε_Y → 0. Then the statement of Theorem 2 follows from (20). □

Remark 6. It follows from (17) and (20) that the error decreases as p increases.

Remark 7. A direct comparison of (17) and (20) shows that a proper choice of the sets S_X and S_Y leads to a decrease in the error ‖x − F^(p)(y)‖²_E. In other words, the error decreases when ε_Y and ε_X become smaller. The latter, of course, implies an increase in computational work, because smaller ε_Y and ε_X are achieved, in particular, by increasing the number of members of S_X and S_Y. Thus, in practice, a trade-off between the accuracy of estimation and the associated computational price should be sought.

Thus, the free parameters of the proposed filter presented by (7) and (14) are its number of terms p, and the sets S_X and S_Y. In particular, it is interesting to consider the case when the sets S_X and S_Y coincide with the sets K_X and K_Y, respectively. The next corollary establishes the error estimate associated with this case. A direct comparison of the error estimates (20) and (26) below shows that the error of the filter decreases if K_X = S_X and K_Y = S_Y.

Corollary 1. Let K_X = S_X = {x_1, …, x_p} and K_Y = S_Y = {y_1, …, y_p}. Then, for any x_k ∈ K_X and y_k ∈ K_Y, an estimate of the error ‖x_k − F^(p)(y_k)‖²_E associated with the filter F^(p) : K_Y → K_X presented by (7) and (14) is given by

‖x_k − F^(p)(y_k)‖²_E = ‖x_k − P_p(y_k)‖²_E + Σ_{j=1}^p ‖[E_{x_j w_j} − E_{x_k w_j}](E†_{w_j w_j})^{1/2}‖².   (26)

Proof. For K_X = S_X and K_Y = S_Y, we have v_j = v_{jk}, w_j = w_{jk}, w_{kk} = z_k = w_k and, therefore, T_k = E_{x_k w_k} E†_{w_k w_k} + A_k(I − E_{w_k w_k} E†_{w_k w_k}). Thus, (23) is represented as follows:

‖T_j E_{w_j w_j}^{1/2} − E_{x_k w_j}(E†_{w_j w_j})^{1/2}‖² = ‖E_{x_j w_j} E†_{w_j w_j} E_{w_j w_j}^{1/2} − E_{x_k w_j}(E†_{w_j w_j})^{1/2} + A_j(I − E_{w_j w_j} E†_{w_j w_j}) E_{w_j w_j}^{1/2}‖²,   (27)

where x is replaced with x_k. Since (E†_{w_j w_j})^{1/2} = E†_{w_j w_j} E_{w_j w_j}^{1/2} and I − E_{w_j w_j} E†_{w_j w_j} = I − E_{w_j w_j}^{1/2}(E†_{w_j w_j})^{1/2} (see above), we have

(I − E_{w_j w_j} E†_{w_j w_j}) E_{w_j w_j}^{1/2} = O

and

‖T_j E_{w_j w_j}^{1/2} − E_{x_k w_j}(E†_{w_j w_j})^{1/2}‖² = ‖[E_{x_j w_j} − E_{x_k w_j}](E†_{w_j w_j})^{1/2}‖².   (28)

Here, O is the zero matrix. Then (26) follows from (21)–(23), with x = x_k, together with (27) and (28). □

2.4. Choice of operators Q_i, and sets S_X and S_Y

2.4.1. Choice of Q_i

In the filter model (7), the operators Q_1, …, Q_p are used to transform observable signals y into p signals v_1, …, v_p in accordance with (4). In particular, v_1, …, v_p may coincide with v_{1k}, …, v_{pk}; see (4). However, our model requires that the signals v_1, …, v_p be distinct. If some of them were the same, say v_i = v_j with i ≠ j for some i, j = 1, …, p, and if T_i, R_i, Q_i were fixed, then the terms in (7) associated with v_i and v_j would coincide. In such a case, the number of terms in the filter (7) would be less than p. However, we wish to keep the number of terms p fixed, because each term in (7) contributes to the filter performance. Hence, to preserve the structure (7), the signals v_1, …, v_p need to be distinct. Here, we consider some examples of choices of the Q_i.

(i) Let D^g be a subset of R^g and let {a_1, …, a_p} ⊂ D^g be a set of given vectors with a_i ≠ a_j for i ≠ j. For any α ∈ R^g and for i = 1, …, p, let us choose Q_j : L²(Ω × D^g, R^n) → L²(Ω × D^g, R^n) so that

v_1(·, α) = y(·, α),   v_2(·, α) = y(·, α − a_1),   …,   v_p(·, α) = y(·, α − a_p).   (29)

Such a choice of Q_i is motivated by a generalization of the case considered in [12].

(ii) Let D^g(a) = [0, b^(1)] × ⋯ × [0, b^(g)] ⊆ D^g with b^(i) > 0 for all i = 1, …, g. Let α = [α^(1), …, α^(g)] ∈ D^g(a). For i = 1, …, p with p ≤ g + 1, and known functions r_1(α), …, r_p(α), we set Q_j : L²(Ω × D^g(a), R^n) → L²(Ω × D^g(a), R^n) so that

v_0(·, α) = y(·, α),   v_1(·, α) = ∫_0^{α^(1)} r_1(α) y(·, α) dα^(1),   …,   (30)

v_p(·, α) = ∫_0^{α^(p)} r_p(α) y(·, α) dα^(p).   (31)

(iii) For α^(1), …, α^(p) ∈ D^g(a) and known functions r_1(α^(1)), …, r_p(α^(p)), let us choose Q_i so that

v_0(·, α) = y(·, α),   v_1(·, α) = ∫_{D^g(a)} r_1(α^(1)) y(·, α^(1)) dα^(1),   (32)

v_2(·, α) = ∫_{D^g(a)} ∫_{D^g(a)} r_1(α^(1)) r_2(α^(2)) y(·, α^(1)) y(·, α^(2)) dα^(1) dα^(2),   …,   (33)

v_p(·, α) = ∫_{D^g(a)} ⋯ ∫_{D^g(a)} r_1(α^(1)) ⋯ r_p(α^(p)) y(·, α^(1)) ⋯ y(·, α^(p)) dα^(1) ⋯ dα^(p),   (34)

where the multiple integral in (34) is taken over p copies of D^g(a).

Choosing Q_i as in (30) and (32)–(34) is motivated by the following observation. Clearly, the smaller the difference ||x − F^(p)(y)||_E^2 = ||x − Σ_{j=1}^p T_j R_j Q_j(y)||_E^2, the better the estimation of x by the filter F^(p)(y) = Σ_{j=1}^p T_j R_j Q_j(y). Here, v_j = Q_j(y). In turn, the closer v_j is to x, the smaller the value of ||x − F^(p)(y)||_E^2. For example, if x = v_j for all j = 1,…,p, then one can set p = 1, choose T_1 and R_1 equal to the identity, and then ||x − F^(p)(y)||_E^2 = 0. Therefore, it is reasonable to choose v_j as an approximation to x. The Volterra series is one possible approximation to x, and the above choices for Q_i are associated with the Volterra series [17]. In general, Q_j might be chosen so that v_j = Q_j(y), with j = 1,…,p, are approximations to x constructed from known methods; such methods are considered, in particular, in [4,7,11–13,18–24].

2.4.2. Choice of sets S_X and S_Y

By Theorems 2 and 3, S_X = {x_k}_{k=1}^p is an ε_X-net for K_X and S_Y = {y_k}_{k=1}^p is an ε_Y-net for K_Y. It follows from Theorems 2 and 3 that the error associated with the proposed filter decreases as ε_X and ε_Y decrease. In practice, the signals x ∈ K_X and y ∈ K_Y are parameterized similarly to those considered in Section 2.4.1 above, i.e. x = x(·, α) and y = y(·, α), where α ∈ D_g(α). In such a case, the sets S_X and S_Y are determined from an ε_D-net for D_g(α) as follows. Let {α^(k)}_{k=1}^q be the ε_D-net for D_g(α), i.e. for any α ∈ D_g(α) there is an α^(k) such that ||α − α^(k)||_2^2 ≤ ε_D. Let x_k = x(·, α^(k)) and let x satisfy the Lipschitz condition ||x(·, α) − x(·, α^(k))||_E^2 ≤ l_x ||α − α^(k)||_2^2 with Lipschitz constant l_x > 0. Then ||x(·, α) − x(·, α^(k))||_E^2 ≤ l_x ε_D. Let ε_D ≤ ε_X / l_x. Then ||x(·, α) − x(·, α^(k))||_E^2 ≤ ε_X with ε_X as in Definition 1. Hence, S_X = {x(·, α^(k))}_{k=1}^p is the ε_X-net for K_X = {x(·, α) | α ∈ D_g(α)}. The ε_Y-net S_Y = {y(·, α^(k))}_{k=1}^p is constructed similarly.

2.5. Discussion of the solution: associated advantages

The procedure for constructing the proposed filter (7) consists of the following steps. First, the operators Q_1, …, Q_p are chosen so that v_1, …, v_p are distinct signals; see Section 2.4.1 in this regard. Second, the operators R_1, …, R_p are defined as in Lemma 1. Third, the matrices T_1, …, T_p are determined from Theorem 1; they then satisfy condition (8).

This procedure has the following advantages. First, the proposed interpolation filter of the pth order, F^(p), processes the infinite set of signals K_Y; this advantage has been discussed in Section 1.1. Second, the computational effort associated with the filter F^(p) is reduced by the orthogonalization procedure considered in Section 2.1: each matrix T_j in the filter model F^(p) is determined separately by (14) from Eq. (15), and not from a coupled set of matrix equations as would be required for filters with a structure similar to that of (7) (see, for example, [19]). Third, the filter F^(p) has several free parameters to adjust
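The ε_D-net construction of Section 2.4.2 can be sketched numerically. The fragment below is an illustration only: it assumes a hypothetical one-dimensional parameter domain D_g(α) = [0, 1] and a made-up signal model x(·, α); the names `signal` and `eps_net_1d` are ours, not the paper's.

```python
import numpy as np

# Sketch (not the paper's code): build an eps_D-net over a 1-D parameter
# domain [0, 1] so that the induced set S_X = {x(., a_k)} is an eps_X-net
# for K_X, using the Lipschitz argument of Section 2.4.2.

def signal(a, t):
    """Hypothetical parameterized signal x(., a) sampled on a grid t."""
    return np.sin(2.0 * np.pi * (1.0 + a) * t)

def eps_net_1d(eps_D):
    """Grid points a_k with |a - a_k|^2 <= eps_D for every a in [0, 1]."""
    # On a uniform grid of spacing h, |a - a_k| <= h/2, so require
    # (h/2)^2 <= eps_D, i.e. h = 2 * sqrt(eps_D).
    h = 2.0 * np.sqrt(eps_D)
    n = int(np.ceil(1.0 / h)) + 1
    return np.linspace(0.0, 1.0, n)

# If ||x(., a) - x(., a_k)||^2 <= l_x |a - a_k|^2, then choosing
# eps_D <= eps_X / l_x makes S_X an eps_X-net for K_X.
l_x, eps_X = 40.0, 0.1
nodes = eps_net_1d(eps_X / l_x)
t = np.linspace(0.0, 1.0, 128)
S_X = [signal(a, t) for a in nodes]
```

The grid spacing follows directly from the covering requirement; any denser sampling of the parameter domain also yields a valid net.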


its performance, namely the number of filter terms, p, and the signal sets S_X and S_Y; see Remarks 6 and 7 above.

3. Example of application

Here, we consider a case in which the sets K_X and K_Y are finite but large. At the end of Section 3.1 below, it is shown that this case also illustrates the extension to infinite sets. First, in Section 3.1, we consider the interpolation filters given by (7) and (14). Then, in Section 3.2, we compare them with Wiener-type filters and RLS filters.

Let K_X = {x_1, x_2, …, x_N} and K_Y = {y_1, y_2, …, y_N}, where N = 100 and x_j, y_j ∈ L²(Ω, ℝⁿ) with n = 116 for each j = 1, …, N. Here, y_j is an observable random signal (data) and x_j is a reference random signal (data); that is, x_j is the signal that should be estimated by the filter. In this example, x_j and y_j are simulated as digital images represented by 116 × 256 matrices X_j and Y_j, respectively. The columns of X_j and Y_j represent realizations of the signals x_j and y_j. Each picture X_j in the sequence X_1, …, X_N has been taken at time t_j, with j = 1, …, N. The time intervals

Δ_1 = t_2 − t_1, …, Δ_{N−1} = t_N − t_{N−1}   (35)

were very short. The images Y_1, …, Y_N have been simulated from X_1, …, X_N in the form Y_j = X_j ∘ rand_j for each j = 1, …, N. Here, ∘ denotes the Hadamard product and


Fig. 1. Examples of reference data from set K_X to be estimated from observed data. (a) Reference data X_5. (b) Reference data X_18. (c) Reference data X_31. (d) Reference data X_44. (e) Reference data X_57. (f) Reference data X_70. (g) Reference data X_83. (h) Reference data X_96.


rand_j is a 116 × 256 matrix whose elements are uniformly distributed in the interval (0, 1). To illustrate the sets K_X and K_Y, we have chosen as their representatives X_5, X_18, X_31, X_44, X_57, X_70, X_83, X_96 and Y_5, Y_18, Y_31, Y_44, Y_57, Y_70, Y_83, Y_96, respectively, taken at times t_{5+13(k−1)} with k = 1, …, 8. They are shown in Figs. 1 and 2.

3.1. Interpolation filters presented by (7) and (14)

The sets S_X and S_Y have been chosen in the form

S_X = {X_18, X_57, X_83} and S_Y = {Y_18, Y_57, Y_83}.   (36)

According to (7) and (14), the interpolation filter of the third order, F^(3), that estimates the set K_X = {X_1, X_2, …, X_100} on the basis of the observations K_Y = {Y_1, Y_2, …, Y_100} is given, for i = 1, …, 100, by

F^(3)(Y_i) = T_1 R_1 Q_1(Y_i) + T_2 R_2 Q_2(Y_i) + T_3 R_3 Q_3(Y_i),   (37)

where Q_j, R_j and T_j, for j = 1, 2, 3, are as follows. If i = 1, 2, then

Q_j(Y_i) = Y_1^(j−1), where j = 1, 2, 3.   (38)

If i = 3, …, N, then

Q_j(Y_i) = Y_i^(3−j), where j = 1, 2, 3.
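The data model and the three-term filter evaluation can be sketched as follows. This is an illustration only: the reference images, the operators Q_j, R_j and the matrices T_j below are random placeholders standing in for the paper's trained quantities, and `apply_filter` is our name for the sum in (37).

```python
import numpy as np

# Sketch of the example's data model Y_j = X_j (Hadamard) rand_j and of
# evaluating F3(Y_i) = T1 R1 Q1(Y_i) + T2 R2 Q2(Y_i) + T3 R3 Q3(Y_i).

rng = np.random.default_rng(0)
N, rows, cols = 100, 116, 256

# Data model of Section 3: elementwise product with uniform (0, 1) noise.
X = [rng.standard_normal((rows, cols)) for _ in range(N)]
Y = [Xj * rng.uniform(0.0, 1.0, size=(rows, cols)) for Xj in X]

# Placeholder operators: identity Q_j and R_j, random T_j.
Q = [lambda A: A] * 3
R = [lambda A: A] * 3
T = [rng.standard_normal((rows, rows)) for _ in range(3)]

def apply_filter(Yi, Q, R, T):
    """Sum of p terms T_j R_j Q_j(Y_i), cf. Eq. (7)."""
    return sum(Tj @ Rj(Qj(Yi)) for Qj, Rj, Tj in zip(Q, R, T))

X_est = apply_filter(Y[0], Q, R, T)
```

With identity Q_j and R_j this reduces to (T_1 + T_2 + T_3) Y_i; the point of the sketch is only the p-term structure, not the concrete operator choices.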


Fig. 2. Illustration of set K_X estimation by filters F^(3) and F^(5). (a) Observed data Y_57. (b) Observed data Y_70. (c) Estimate X̄_57 by Wiener filter. (d) Estimate X̂_70 by RLS filter. (e) Estimate X̃_57 by filter F^(3). (f) Estimate X̃_70 by filter F^(3). (g) Estimate X̂_57 by filter F^(5). (h) Estimate X̂_70 by filter F^(5).


Such a choice is similar to that considered in (29). We denote V_{ji} = Q_j(Y_i), where j = 1, 2, 3 and i = 1, …, N. The matrices R_j, with j = 1, 2, 3, are determined by Lemma 1, so that, for i = 1, …, 100,

R_1(V_{1i}) = W_{1i}, where W_{1i} := V_{1i},

R_2(V_{2i}) = V_{2i} − K_{21} W_{1i}, where K_{21} = E_{v_{2i} w_{1i}} E^†_{w_{1i} w_{1i}} and W_{2i} := R_2(V_{2i}),

and

R_3(V_{3i}) = V_{3i} − K_{31} W_{1i} − K_{32} W_{2i}, where K_{31} = E_{v_{3i} w_{1i}} E^†_{w_{1i} w_{1i}}, K_{32} = E_{v_{3i} w_{2i}} E^†_{w_{2i} w_{2i}} and W_{3i} := R_3(V_{3i}).

Here, we set M_{21} = O and M_{31} = M_{32} = O (see (10)). The matrices T_j, with j = 1, 2, 3, are determined by Theorem 1, so that

T_j = E_{x_j w_{jj}} E^†_{w_{jj} w_{jj}},   (39)
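The orthogonalization step and the pseudo-inverse formula for T_j can be sketched as follows. This is a minimal illustration under our own assumptions: covariances E_ab = E[a b^T] are replaced by simple sample estimates from column-wise realizations, the data is synthetic, and the helper `cov` is ours.

```python
import numpy as np

# Sketch of Lemma 1's orthogonalization (with the M-terms set to zero)
# and of T_j = E_{x w_j} E^dagger_{w_j w_j}, cf. (39) with A_j = O.

rng = np.random.default_rng(2)
n, m = 6, 200  # signal dimension, number of realizations

def cov(A, B):
    """Sample estimate of E_ab = E[a b^T] from column realizations."""
    return A @ B.T / B.shape[1]

x  = rng.standard_normal((n, m))            # reference signal
V1 = x + 0.1 * rng.standard_normal((n, m))  # v_1 = Q_1(y), etc.
V2 = x + 0.2 * rng.standard_normal((n, m))
V3 = x + 0.3 * rng.standard_normal((n, m))

# Orthogonalization: each W_j is decorrelated from the previous W's.
W1 = V1
K21 = cov(V2, W1) @ np.linalg.pinv(cov(W1, W1))
W2 = V2 - K21 @ W1
K31 = cov(V3, W1) @ np.linalg.pinv(cov(W1, W1))
K32 = cov(V3, W2) @ np.linalg.pinv(cov(W2, W2))
W3 = V3 - K31 @ W1 - K32 @ W2

# T_j via the pseudo-inverse; each T_j is found independently.
T = [cov(x, W) @ np.linalg.pinv(cov(W, W)) for W in (W1, W2, W3)]
```

After this step the sample cross-covariances cov(W2, W1), cov(W3, W1) and cov(W3, W2) vanish (up to round-off), which is exactly what decouples the equations for T_1, T_2, T_3.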

where we have put A_j = O (see (14)). The covariance matrices E_{v_{jk} w_{ℓi}}, E_{w_{ℓi} w_{ℓi}} and E_{x_j w_{jj}} used above, for certain j, k, ℓ, i, can be estimated in various ways; some of them are presented, for example, in [21, Chapter 4.3]. We note that covariance matrix estimation is a specific and difficult problem which is not a subject of this paper. Here, the estimates of E_{v_{jk} w_{ℓi}}, E_{w_{ℓi} w_{ℓi}} and E_{x_j w_{jj}} are used for illustration purposes only. Therefore, we use the simplest way to obtain them, the maximum likelihood estimates [21] based on the samples X_i, V_{jk} and W_{ℓi}. Other related methods, based on incomplete observations, can be found, e.g., in [14,21].

Results of the estimation of the set K_X = {X_1, X_2, …, X_100} by the filter given by (37) are presented in Fig. 2. We show typical members of the sets S_Y and K_Y, namely Y_57 and Y_70, respectively; other members of S_Y and K_Y are similar. The estimates of X_57 and X_70 by the filter F^(3), X̃_57 and X̃_70, are also given in Fig. 2. To illustrate Remark 5, i.e. to show that the error ||x − F^(p)(y)||_E^2 decreases as p increases, an estimate of the set K_X = {X_1, X_2, …, X_100} has also been obtained by the interpolation filter of the fifth order, F^(5), i.e. with p = 5 rather than the p = 3 considered in (37). In this case, Q_j, R_j and T_j, for j = 1, …, 5, are determined in ways similar to (38)


and (39). We denote X̂_i = F^(5)(Y_i), for i = 1, …, N. The estimates X̂_57 and X̂_70 are given in Fig. 2. In Table 1, the values of the errors ||X_i − F^(p)(Y_i)||² are presented for i = 57, 70 and p = 3, 5. It follows from Table 1 and Fig. 2 that an increase in p implies a decrease in the estimation error. Note that although the sets K_X and K_Y considered here are finite, the above estimation results extend easily to the case when K_X and K_Y are infinite. Indeed, if in (35), Δ_i → 0 as i → ∞, then K_X and K_Y tend to infinite sets. In this case, the choice of the sets S_X and S_Y can be the same as in (36) above, and the filters F^(3) and F^(5) are then also the same as above. Therefore, for the case when K_X and K_Y are infinite sets, estimates of K_X by the filters F^(3) and F^(5) are similar to the results obtained above.

3.2. Comparison with the Wiener-type filter and RLS filter

In both the Wiener filtering approach and the RLS filtering methodology, a filter must be found for each pair of input–output signals {x_j, y_j}, where j = 1, …, 100. That is, in these simulations, 100 Wiener filters F_1, …, F_100 or 100 RLS filters R_1, …, R_100 are required to process signals from K_Y to K_X. In contrast, our approach requires a single interpolation filter F^(p) with p terms (above, we have chosen p = 3 and p = 5), where each term requires computational effort similar to that of a single Wiener filter F_j or a single RLS filter R_j. Clearly, the Wiener filtering approach and the RLS filtering methodology imply much more computational work [9,16] to process signals from K_Y to K_X. Examples of the accuracies associated with the filters F_57 and R_70 for estimating X_57 and X_70 (which represent typical signals under consideration) are given in Table 2, where X̄_57 = F_57(Y_57) and X̂_70 = R_70(Y_70). The estimates X̄_57 and X̂_70 are presented in Fig. 2. The accuracy associated with the proposed interpolation filter F^(p) is better than that of the filters F_j and R_j due to the properties of the filter F discussed above (in particular, in Theorems 2 and 3).
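The per-pair baseline and the error measure of Tables 1 and 2 can be sketched as follows. This is an illustration only: one linear filter of the form E_{x_j y_j} E^†_{y_j y_j} is fitted per pair from sample covariances on synthetic data, and the squared Frobenius norm plays the role of the tabulated errors; `cov` and `sq_error` are our names.

```python
import numpy as np

# Sketch of the comparison of Section 3.2: a separate linear filter per
# input-output pair, with accuracy measured by ||X - X_hat||^2 as in
# Tables 1 and 2. Data below is synthetic, purely for illustration.

rng = np.random.default_rng(3)
n, m, N = 6, 50, 4  # dimension, realizations per pair, number of pairs

def cov(A, B):
    """Sample estimate of E_ab = E[a b^T] from column realizations."""
    return A @ B.T / B.shape[1]

def sq_error(X, X_hat):
    """Squared Frobenius-norm estimation error."""
    return np.linalg.norm(X - X_hat, "fro") ** 2

errors = []
for _ in range(N):
    X = rng.standard_normal((n, m))
    Y = X * rng.uniform(0.0, 1.0, size=(n, m))   # Hadamard-noise model
    F = cov(X, Y) @ np.linalg.pinv(cov(Y, Y))    # one filter per pair
    errors.append(sq_error(X, F @ Y))
```

The point of the contrast is structural: this loop fits N independent filters, whereas the interpolation filter F^(p) of (7) is fitted once and then applied to every member of K_Y.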

Table 1
Accuracy associated with the proposed interpolation filter of pth order.

p = 3: ||X_57 − F^(3)(Y_57)||² = 9.06 × 10⁶, ||X_70 − F^(3)(Y_70)||² = 9.12 × 10⁶
p = 5: ||X_57 − F^(5)(Y_57)||² = 3.67 × 10⁶, ||X_70 − F^(5)(Y_70)||² = 3.64 × 10⁶

Table 2
Accuracy associated with the Wiener filters F_j and RLS filters R_j.

||X_57 − F_57(Y_57)||² = 2.34 × 10⁸, ||X_70 − F_70(Y_70)||² = 2.65 × 10⁸
||X_57 − R_57(Y_57)||² = 9.41 × 10⁷, ||X_70 − R_70(Y_70)||² = 9.77 × 10⁷


4. Conclusion

In this paper, we have developed a new approach to the filtering of infinite sets of stochastic signals. The approach is based on a filter representation in the form of a sum of p terms, each derived from three matrices, Q_i, R_i and T_i with i = 1, …, p. Methods for determining these matrices have been studied; in particular, the matrices T_1, …, T_p are determined from interpolation conditions. This procedure allows us to construct a filter that estimates each signal from the given infinite set with controlled accuracy. An analysis of the error associated with the filter has been provided; it shows that the filter has three free parameters with which to improve performance. It also follows from the error analysis that the proposed filter is asymptotically optimal. The filter is determined in terms of pseudo-inverse matrices and, therefore, it always exists.

Acknowledgment

The first co-author is grateful to Y. Hua for a useful discussion regarding RLS filters.

References

[1] S.T. Alexander, A.L. Ghirnikar, A method for recursive least squares filtering based upon an inverse QR decomposition, IEEE Transactions on Signal Processing 41 (1) (1993) 20–30.
[2] J.A. Apolinario, M.D. Miranda, Conventional and inverse QRD-RLS algorithms, in: J.A. Apolinario (Ed.), QRD-RLS Adaptive Filtering, Springer, 2009.
[3] T.L. Boullion, P.L. Odell, Generalized Inverse Matrices, John Wiley & Sons, New York, 1972.
[4] J.H. Ferziger, Numerical Methods for Engineering Applications, Wiley, New York, 1999.
[5] V.N. Fomin, M.V. Ruzhansky, Abstract optimal linear filtering, SIAM Journal on Control and Optimization 38 (2000) 1334–1352.
[6] Y. Gao, M.J. Brennan, P.F. Joseph, A comparison of time delay estimators for the detection of leak noise signals in plastic water distribution pipes, Journal of Sound and Vibration 292 (3–5) (2006) 552–570.
[7] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, 1996.
[8] K. Hayama, M.-K. Fujimoto, Monitoring non-stationary burst-like signals in an interferometric gravitational wave detector, Classical and Quantum Gravity (8) (2006).
[9] S. Haykin, Adaptive Filter Theory, fourth ed., Prentice-Hall, Upper Saddle River, NJ, 2002.
[10] S. Jeon, G. Cho, Y. Huh, S. Jin, J. Park, Determination of a point spread function for a flat-panel X-ray imager and its application in image restoration, Nuclear Instruments and Methods in Physics Research Section A 563 (1) (2006) 167–171. Proceedings of the Seventh International Workshop on Radiation Imaging Detectors, IWORID 2005.
[11] A.N. Kolmogorov, S.V. Fomin, Elements of the Theory of Functions and Functional Analysis, Dover, New York, 1999.
[12] J. Manton, Y. Hua, Convolutive reduced rank Wiener filtering, in: Proceedings of ICASSP'01, vol. 6, 2001, pp. 4001–4004.
[13] V.J. Mathews, G.L. Sicuranza, Polynomial Signal Processing, John Wiley & Sons, 2001.
[14] L.I. Perlovsky, T.L. Marzetta, Estimating a covariance matrix from incomplete realizations of a random vector, IEEE Transactions on Signal Processing 40 (1992) 2097–2100.
[15] G. Robert Redinbo, Decoding real block codes: activity detection, Wiener estimation, IEEE Transactions on Information Theory 46 (2) (2000) 609–623.
[16] A.H. Sayed, Adaptive Filters, Wiley-Interscience, IEEE Press, Hoboken, NJ, 2008.
[17] L.L. Scharf, Statistical Signal Processing: Detection, Estimation, and Time Series Analysis, Addison-Wesley, New York, 1990.
[18] A. Torokhti, P. Howlett, Method of recurrent best estimators of second degree for optimal filtering of random signals, Signal Processing 83 (5) (2003) 1013–1024.
[19] A.P. Torokhti, P.G. Howlett, C. Pearce, Optimal recursive estimation of raw data, Annals of Operations Research 133 (2005) 285–302.
[20] A. Torokhti, P. Howlett, Optimal transform formed by a combination of nonlinear operators: the case of data dimensionality reduction, IEEE Transactions on Signal Processing 54 (4) (2006) 1431–1444.
[21] A. Torokhti, P. Howlett, Computational Methods for Modelling of Nonlinear Systems, Elsevier, 2007.
[22] A. Torokhti, P. Howlett, Filtering and compression for infinite sets of stochastic signals, Signal Processing 89 (2009) 291–304.
[23] A. Torokhti, J. Manton, Generic weighted filtering of stochastic signals, IEEE Transactions on Signal Processing 57 (12) (2009) 4675–4685.
[24] A. Torokhti, S. Miklavcic, Data compression under constraints of causality and variable finite memory, Signal Processing 90 (10) (2010) 2822–2834.