Reducing Gaussian noise using distributed approximating functionals

Computer Physics Communications 147 (2002) 759–769
www.elsevier.com/locate/cpc
David K. Hoffman a,∗, Donald J. Kouri b, Eli Pollak c

a Department of Chemistry and Ames Laboratory, Iowa State University, Ames, IA 50011, USA
b Departments of Chemistry and Physics, University of Houston, Houston, TX 77204-5641, USA
c Chemical Physics Department, Weizmann Institute of Science, 76100 Rehovot, Israel

∗ Corresponding author. E-mail address: [email protected] (D.K. Hoffman).

Received 12 October 2001; received in revised form 20 March 2002; accepted 21 March 2002

Abstract

The denoising characteristics of the representation of experimental data in terms of Hermite Distributed Approximating Functionals (HDAFs) are analyzed with respect to signals corrupted with Gaussian noise. The HDAF performance is compared to both the ideal window and running averages representations of the same data. We find that the HDAF filter combines the best features of both. That is, the HDAF filter provides approximately the same noise reduction and bandwidth as the ideal filter while at the same time remaining limited in range in both the physical and Fourier spaces. © 2002 Elsevier Science B.V. All rights reserved.

PACS: 06.50.-x; 02.60.+y; 89.70.+c

1. Introduction

Denoising of signals of various types is a ubiquitous problem throughout science and engineering [1–4]. Noisy signals can arise from experiments or from theoretical studies. Recently, we have discussed the use of distributed approximating functionals (DAFs) for the analysis of several types of experimental data [5,6] and for theoretical applications [7–9]. DAFs were found to be very useful for “smoothing” functions whose values have been determined on a grid of points. However, no systematic analysis of the denoising efficiency of DAFs, and especially those based on the Hermite polynomials (the so-called Hermite DAFs or HDAFs), has been given. The present paper provides such an analysis. One particular application of interest to us is the use of HDAF denoising for Monte Carlo dynamical calculations [10]. However, our analysis is of general applicability to signals corrupted with Gaussian noise.

It is intuitively obvious that denoising requires that one make a sufficient number of measurements or calculations of the quantities of interest. Furthermore, one expects that for uncorrelated Gaussian random errors, the standard deviation of the results should vary inversely with the square root of the number of measurements or calculations.


If one knows something about the characteristics of the Fourier-space representation of the signal, it is also clear that “masking” or “filtering” the signal will remove unwanted Fourier components. In the field of signal processing [11–13], it is well known that the best possible filter, assuming that the true signal is band limited, is the so-called “ideal filter”, which is just a window function passing, with unit weight, all Fourier components (k) within the band. Components outside the band are eliminated. The problem with such a filter is the need for an infinite number of data samples in the physical space (x) in order to use it. The opposite limit is a window filter in the physical space, the “running averages” method. Here, though, one filters out Fourier components which are within the band, thus removing information. A useful compromise is to use other filters which mimic the ideal filter but are smooth and thus require limited sampling in the physical space. One should compare these filters to the ideal filter and the running averages method to see how closely they resemble the ideal one while simultaneously, as in the running averages method, limiting the number of data samples required in the physical space. Such a comparison is carried out in Section 2 for the HDAF filter. We find that the HDAF filter can be made arbitrarily close to the ideal filter, but avoids the necessity of infinite sampling.

2. Gaussian noise reduction using DAFs

2.1. An idealized model

As an idealized model, which may correspond to a typical Monte Carlo computation, we assume that the function f(x), whose Fourier transform exists, has been evaluated on a grid of N + 1 equidistant points x_j = x_0 + jΔx, j = 0, …, N. The noisy value of the function f_n(x_j) at the j-th grid point is represented as

$$f_n(x_j) = f(x_j) + \Delta f(x_j), \tag{2.1}$$

where f(x_j) is the (unknown) exact value of the function at the point x_j. The noise Δf(x_j) is assumed to be uncorrelated Gaussian random noise, such that

$$\langle \Delta f(x_j) \rangle = 0 \tag{2.2}$$

on all grid points and

$$\langle \Delta f(x_j)\, \Delta f(x_k) \rangle = \gamma^2\, \delta_{jk} \tag{2.3}$$

for any pair of grid points. Here δ_jk is the Kronecker ‘delta’ function. Eq. (2.3) assumes that the strength of the noise, γ, is the same for all points on the grid. The numerically determined function is given only on the set of discrete points x_j. Typically, one would want to represent the function for all values of x. This can be done by writing a ‘smoothed’ approximation to the function as a discrete sum of the form

$$f_{a,n}(x) = \sum_{j=0}^{N} \Delta x\, \delta_a(x_j - x)\, f_n(x_j), \tag{2.4}$$

where δ_a(x_j − x) is a real-valued, symmetric smoothing kernel, typical examples of which include the HDAF [5–9], a square window, or a sinc function [11–13]. Using Eq. (2.1), one can represent the smoothed function as a sum of two terms:

$$f_{a,n}(x) = f_a(x) + \Delta f_a(x), \tag{2.5}$$

where f_a(x) is the smoothed representation of the exact function,

$$f_a(x) = \sum_{j=0}^{N} \Delta x\, \delta_a(x_j - x)\, f(x_j), \tag{2.6}$$


and Δf_a(x) is the noisy contribution,

$$\Delta f_a(x) = \sum_{j=0}^{N} \Delta x\, \delta_a(x_j - x)\, \Delta f(x_j). \tag{2.7}$$

A minimal numerical sketch of Eqs. (2.1)–(2.7) is given below.
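The following Python sketch generates noisy samples and evaluates the smoothed sum; the test function, grid, noise strength, and kernel are all arbitrary illustrative choices of ours, not values taken from the paper.

```python
import numpy as np

# Equidistant grid x_j = x_0 + j*dx, j = 0..N.
N, x0, dx = 400, -10.0, 0.05
xg = x0 + dx * np.arange(N + 1)

f = np.exp(-xg**2 / 8.0)        # arbitrary smooth test signal f(x_j)
gamma = 0.1                     # noise strength, the same at every point (Eq. 2.3)

rng = np.random.default_rng(0)
fn = f + gamma * rng.standard_normal(xg.size)   # noisy samples f_n(x_j), Eq. (2.1)

# An arbitrary real, symmetric smoothing kernel delta_a (a normalized Gaussian).
sigma = 0.2
def delta_a(u):
    return np.exp(-u**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def f_an(x):
    """Smoothed approximation f_{a,n}(x) of Eq. (2.4) at an arbitrary point x."""
    return np.sum(dx * delta_a(xg - x) * fn)

print(f_an(0.5), np.exp(-0.5**2 / 8.0))   # smoothed estimate vs. exact f(0.5)
```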

We assume that the grid of points is sufficiently dense that the accuracy of the smoothed representation of the exact function f_a(x) is higher than the accuracy needed for the purpose of the computation. This implies that the source of error in the representation of the function comes from the noisy contribution Δf_a(x). The noise is reduced if the variance of the smoothed noisy contribution is smaller than the original variance γ². The probability that the noisy contribution at the point x takes the value Δf_a(x) is by definition

$$P[\Delta f_a(x)] = \int_{-\infty}^{\infty} \prod_{j=0}^{N} d(\Delta f(x_j))\, p[\Delta f(x_j)]\; \delta\!\left(\Delta f_a(x) - \sum_{j=0}^{N} \Delta x\, \delta_a(x_j - x)\, \Delta f(x_j)\right), \tag{2.8}$$

where the error probability distribution p[Δf(x_j)], in view of the assumptions given in Eqs. (2.2) and (2.3), is Gaussian:

$$p[\Delta f(x_j)] = \frac{1}{\sqrt{2\pi}\,\gamma}\, e^{-\Delta f(x_j)^2/(2\gamma^2)}. \tag{2.9}$$

Using the Fourier representation of the Dirac ‘delta’ function, δ(x) = (1/2π) ∫_{−∞}^{∞} dk e^{ikx}, all the integrations in Eq. (2.8) are Gaussian and one readily finds that the noisy contribution to the DAF-smoothed function is also Gaussian,

$$P[\Delta f_a(x)] = \frac{1}{\sqrt{2\pi}\,\Gamma_a}\, e^{-\Delta f_a(x)^2/(2\Gamma_a^2)}, \tag{2.10}$$

where the ‘smoothed variance’ Γ_a² is

$$\Gamma_a^2 = \gamma^2 \sum_{j=0}^{N} \Delta x^2\, \delta_a^2(x_j - x). \tag{2.11}$$

A numerical check of this formula is sketched below.
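Eq. (2.11) can be verified by brute force: compare the kernel sum with the empirical variance of the smoothed noise over many independent noise realizations. The sketch below does this for an arbitrary Gaussian test kernel; all parameters are illustrative.

```python
import numpy as np

N, dx, gamma = 400, 0.05, 0.2
xg = dx * np.arange(N + 1)
x = 10.0                               # fixed evaluation point near the grid center

sigma = 0.15                           # arbitrary Gaussian test kernel
delta_a = np.exp(-(xg - x)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
w = dx * delta_a                       # weights dx * delta_a(x_j - x) from Eq. (2.4)

Gamma2_pred = gamma**2 * np.sum(w**2)  # smoothed variance, Eq. (2.11)

# Monte Carlo: variance of the smoothed noise over many independent realizations.
rng = np.random.default_rng(1)
noise = gamma * rng.standard_normal((20000, xg.size))
Gamma2_mc = (noise @ w).var()

print(Gamma2_pred, Gamma2_mc)          # agree to within sampling error
```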

This result provides a basis for deriving simple formulae for the smoothed noise in terms of the original noise and the kernel used for smoothing the noisy function. One can view Γ_a², as given in this equation, as the square of the ℓ²-norm of the discrete, real-valued vector with j-th component γΔx δ_a(x_j − x). Here x is a fixed parameter. Consequently Γ_a² can also be calculated in the discrete Fourier representation. We note that Γ_a² depends on the position of x relative to the grid. To estimate the magnitude of the smoothed variance, we make the approximation

$$\Gamma_a^2 \approx \Gamma_a^2(N \to \infty) = \gamma^2 \sum_{j=-\infty}^{\infty} \Delta x^2\, \delta_a^2(x_j - x), \tag{2.12}$$

by extending the grid of equally spaced points from −∞ to ∞. In this limit Γ_a² is a function only of the offset of x from the closest grid point. This approximation can be justified in two ways. First, we can assume values for the function on the extended grid; e.g., we could take the function to be zero on all but the original N + 1 grid points. This can be justified if we have the necessary additional knowledge about the asymptotic behavior of the function. Second, the ends of the grid are typically sufficiently removed from the value of x that δ_a²(x_j − x) is effectively zero on all but the original N + 1 grid points. As previously discussed, we can calculate Γ_a² in the Fourier representation. From the theory of discrete Fourier transforms [12], we have that

$$\Gamma_a^2(N \to \infty) = \frac{\gamma^2\, \Delta x}{2\pi} \int_{-\bar{k}}^{\bar{k}} dk\, a(k)\, a^*(k), \tag{2.13}$$


where

$$\bar{k} = \frac{\pi}{\Delta x}, \tag{2.14}$$

that is, k̄ is the frequency related to Δx by the Nyquist relation, and

$$a(k) = \Delta x \sum_{j=-\infty}^{\infty} \delta_a(x_j - x)\, e^{-ik(x_j - x)} \approx \int_{-\infty}^{\infty} dx'\, \delta_a(x')\, e^{-ikx'}. \tag{2.15}$$

The approximate equality here follows from the fact that the sum can be viewed as a discretization of the continuous Fourier transform of δ_a. This approximation becomes exact if δ_a is band-limited in the range −k̄ to k̄, because then the sum is invariant to a uniform shift of the x_j-grid. In these circumstances Γ_a²(N → ∞) becomes independent of x.

2.2. Denoising and the ideal filter

We consider first a function f(x) which is band limited (i.e. one whose Fourier transform lies in the range −k* to k*) and which has been determined on an infinite grid of points (with spacing Δx), ranging from −∞ to ∞. It is well known that such a function can be reconstructed exactly for all values of x from its values on the grid of equidistant points using the Fourier series. The largest possible equidistant spacing (Δx*) between points, for which the function is represented exactly, is given by the Nyquist relation [11–13]:

$$\frac{\Delta x^*\, k^*}{\pi} = 1. \tag{2.16}$$

The function f(x) is then represented exactly for all x as

$$f(x) = \Delta x^* \sum_{j} f(x_j)\, \frac{\sin(k^*(x_j - x))}{\pi (x_j - x)}. \tag{2.17}$$
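The reconstruction formula (2.17) is easy to exercise numerically. The sketch below is our own illustration (the test signal and all parameters are arbitrary): it samples a function that is band limited below k* at the Nyquist spacing Δx* and rebuilds it at an off-grid point.

```python
import numpy as np

kstar = 2.0                          # assumed bandwidth k* of the test signal
dxs = np.pi / kstar                  # Nyquist spacing Delta-x*, Eq. (2.16)
xj = dxs * np.arange(-300, 301)      # (necessarily truncated) sampling grid

def f(x):
    # A test signal band-limited below k*: a sum of two narrow sinc pulses.
    return np.sinc(0.8 * kstar * x / np.pi) + 0.5 * np.sinc(0.5 * kstar * (x - 1.3) / np.pi)

def reconstruct(x):
    """Sinc interpolation of Eq. (2.17) from the sampled values f(x_j)."""
    d = xj - x
    return dxs * np.sum(f(xj) * np.sin(kstar * d) / (np.pi * d))

x = 0.777                            # an arbitrary off-grid point
print(f(x), reconstruct(x))          # agree up to grid-truncation error
```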

Comparing Eq. (2.4) with Eq. (2.17) leads to the (well known) identification of the “sinc” kernel as a smoothing kernel [11–13]:

$$\delta_s(x; k^*) = \frac{\sin(k^* x)}{\pi x}. \tag{2.18}$$

We also note that

$$\frac{\sin(k^* x)}{\pi x} = \frac{1}{2\pi} \int_{-k^*}^{k^*} dk\, \exp(ikx), \tag{2.19}$$

demonstrating that the Fourier transform of δ_s(x; k*) is the “ideal window”:

$$a_s(k) = \begin{cases} 1 & \text{for } -k^* \le k \le k^*, \\ 0 & \text{otherwise.} \end{cases} \tag{2.20}$$

The noise reduction is readily found by inserting Eq. (2.20) into Eq. (2.13). We find that

$$\frac{\Gamma}{\gamma} = \sqrt{\frac{\Delta x\, k^*}{\pi}} \equiv \sqrt{\frac{\Delta x}{\Delta x^*}} \equiv \sqrt{\frac{k^*}{\bar{k}}} \quad \text{for } k^* < \bar{k}, \tag{2.21}$$

$$\frac{\Gamma}{\gamma} = 1 \quad \text{for } k^* \ge \bar{k}, \tag{2.22}$$

where k* is the half-bandwidth of the selected filter, presumed (but only in unusual circumstances actually known) to give the bandwidth of the function. There is no reduction of noise when k* ≥ k̄ because in this circumstance there is no redundancy in the sampled data. A short numerical check of Eq. (2.21) follows.
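Eq. (2.21) can be checked directly by evaluating the kernel sum of Eq. (2.11) with the sinc kernel of Eq. (2.18); the grid length and bandwidths below are arbitrary illustrative choices.

```python
import numpy as np

dx = 0.05
kbar = np.pi / dx                     # Nyquist frequency k-bar, Eq. (2.14)
kstar = kbar / 8                      # filter bandwidth k* < k-bar, so noise is reduced

# Offset grid (no sample at x = 0 avoids the 0/0 limit of the sinc kernel);
# for a band-limited kernel the sum is independent of this offset.
x = dx * (np.arange(-4000, 4001) + 0.5)
delta_s = np.sin(kstar * x) / (np.pi * x)      # sinc kernel, Eq. (2.18)

ratio = np.sqrt(np.sum((dx * delta_s)**2))     # Gamma/gamma via Eq. (2.11)
print(ratio, np.sqrt(kstar / kbar))            # Eq. (2.21): both ~ sqrt(1/8) ~ 0.354
```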


Fig. 1. The HDAF window in frequency space is plotted for a bandwidth of unity. The lines correspond to the width parameter taking on the values σ = 1, 2, 4, 8 (the solid line being σ = 8). The order parameter is M = σ². Note the approach of the window to the ideal window as the width parameter is increased.

In order for there to be a net noise reduction, Δx must be smaller than the maximum spacing that would allow exact reconstruction of the underlying function f(x) (i.e. there must be “more” data points than are minimally needed for this reconstruction). If k* > k̄ then there is no noise reduction and some smoothing of the underlying function occurs (see Fig. 1). To summarize, the two parameters that determine the denoising are Δx and k*. The first is determined by the data for the problem at hand and the second by the particular underlying function of the problem. As one expects, the smoother the function and the denser the data sampling, the more the signal can be denoised [11–13].

2.3. Noise reduction using HDAFs

In the previous subsection we assumed that the grid extended from −∞ to ∞. Such a situation is never realized in practice. Even if the data were available, the sum in Eq. (2.12) converges very slowly because the sinc function dies off as 1/x. These problems can be avoided to some extent by making use of filters δ_a(x; k*) that closely resemble an ideal window in k-space except that they are “rounded on the corners”, which makes them shorter ranged in x-space. One such class of filters are the HDAFs, which are infinitely smooth in k-space, have a value of almost unity in the interior of the band, and decay rapidly (with Gaussian fall-off) to almost zero at the edges of the band, as shown in Fig. 1. To explore the capability of such filters to reduce noise, the arguments of the previous two subsections can be repeated. In the ideal window case the evaluation of the sum was exact, but now it is done only to a very high level of accuracy (depending on how closely the filter approximates the square window) and with controllable error. We will see that for all such filters the noise reduction is determined by k*, a property of the function, and Δx, a property of the data. Where these filters differ is in the number of points over which one must sum to achieve the same level of noise reduction. Optimal noise reduction then occurs when the dispersion of the filter in k about the ideal window and the range in x are as small as possible. The uncertainty principle (well known in quantum mechanics, but really a consequence of Fourier transforms and hence a feature of all systems governed by this mathematics) places certain limits on the extent to which the dispersion of a filter in x and k can simultaneously be minimized [11–13]. It is well known that a Gaussian filter (which is Gaussian in both x and k) has the smallest uncertainty product (defined to be the product of the variances in x and k). However, for our purposes a Gaussian filter is not optimal because in the k-representation it is a poor approximation to an ideal window. As a consequence, the function f_a(x) of Eq. (2.6) for such a filter provides a distorted representation of the underlying function f(x). In particular, a Gaussian filter gives too little weight to the high frequencies and hence provides too much smoothing.


It has recently been shown [14,15] that the Hermite DAF filter provides an approximation to the ideal window of controllable accuracy, such that it approaches the ideal window with a relative minimum uncertainty product and hence is especially well suited for noise reduction. In the following we shall study in more detail the noise reduction properties using the Gauss–Hermite DAF kernel [5,9]:

$$\delta_D(x; M, \sigma) = \frac{e^{-x^2/(2\sigma^2)}}{\sqrt{2\pi\sigma^2}} \sum_{k=0}^{M/2} \left(-\frac{1}{4}\right)^{k} \frac{1}{k!}\, H_{2k}\!\left(\frac{x}{\sqrt{2}\,\sigma}\right), \tag{2.23}$$

where the H_{2k}(x) are (even) Hermite polynomials, σ is the width of the Gaussian function, and M specifies the order of the truncated summation (in the limit that the order parameter M → ∞, the sum yields the Dirac ‘delta’ function). From Eq. (2.15) its Fourier transform, a_D(k), is given by

$$a_D(k; M, \sigma) \approx \int_{-\infty}^{\infty} dx\, e^{ikx}\, \delta_D(x; M, \sigma) = e^{-k^2\sigma^2/2} \sum_{n=0}^{M/2} \frac{(k^2\sigma^2/2)^n}{n!} = \frac{\Gamma\!\left(\frac{M}{2} + 1, \tfrac{1}{2}k^2\sigma^2\right)}{(M/2)!}, \tag{2.24}$$

which is just the HDAF window in Fourier space. Here Γ(M/2 + 1, ½k²σ²) is the incomplete Gamma function [16]. For our purposes, a_D(k; M, σ) provides an adequate approximation for the discrete Fourier transform of Eq. (2.15).
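Eqs. (2.23) and (2.24) are direct to transcribe. The sketch below is ours; it uses scipy's `eval_hermite` for the physicists' Hermite polynomials and `gammaincc`, the regularized upper incomplete gamma function, which is exactly the ratio Γ(M/2 + 1, k²σ²/2)/(M/2)! appearing in Eq. (2.24).

```python
import math
import numpy as np
from scipy.special import eval_hermite, gammaincc

def hdaf_kernel(x, M, sigma):
    """Hermite DAF delta_D(x; M, sigma), Eq. (2.23). M must be even."""
    z = np.asarray(x) / (np.sqrt(2.0) * sigma)
    s = sum((-0.25)**k / math.factorial(k) * eval_hermite(2 * k, z)
            for k in range(M // 2 + 1))
    return np.exp(-np.asarray(x)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2) * s

def hdaf_window(k, M, sigma):
    """HDAF window a_D(k; M, sigma) in Fourier space, Eq. (2.24)."""
    return gammaincc(M // 2 + 1, np.asarray(k)**2 * sigma**2 / 2)
```

Plotting `hdaf_window` for M = σ² with σ = 1, 2, 4, 8 reproduces the approach to the ideal window shown in Fig. 1.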

As can be seen from Fig. 1, by increasing the order and width parameters M and σ, but keeping the ratio √M/σ fixed, the HDAF window in Fourier space increasingly resembles an ideal window. The width of this window is determined by the ratio √M/σ and can be defined using its inflection point with respect to the variable ξ = k²σ²/2. Defining f_N(ξ) ≡ e^{−ξ} Σ_{k=0}^{N} ξ^k/k! and noting that df_N(ξ)/dξ = −e^{−ξ} ξ^N/N!, one readily finds that the inflection point defined by d²f_N(ξ)/dξ² = 0 is ξ* = N. For an HDAF of order M this implies that the range is

$$k_{M,\sigma} = \frac{\sqrt{M}}{\sigma}. \tag{2.25}$$

The noise reduction may now be estimated from Eqs. (2.13) and (2.24):

$$\left(\frac{\Gamma_D}{\gamma}\right)^{2} = \frac{\Gamma^2\!\left(\frac{M}{2}+1, \lambda^2\right)}{(M/2)!^2} + \frac{2}{(M/2)!^2\,\lambda} \left[ \left(\frac{M}{2}\right)!\; \Gamma\!\left(\frac{M+3}{2}\right) - \Gamma\!\left(\frac{M}{2}+1, \lambda^2\right) \Gamma\!\left(\frac{M+3}{2}, \lambda^2\right) - \int_{0}^{\lambda^2} dt\, e^{-t}\, t^{M/2}\, \Gamma\!\left(\frac{M+3}{2}, t\right) \right], \tag{2.26}$$

where λ is defined to be

$$\lambda \equiv \frac{\pi \sqrt{M/2}}{k_{M,\sigma}\, \Delta x}. \tag{2.27}$$
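Eq. (2.26) need not be evaluated in closed form to be checked: one can integrate Eq. (2.13) with the window of Eq. (2.24) by simple quadrature. The sketch below is ours (the 1/(2π) normalization matches the form of Eq. (2.13) used here, and all parameter values are arbitrary); it compares the result with the leading-order ideal-window value.

```python
import numpy as np
from scipy.special import gammaincc

def hdaf_noise_reduction(M, k_band, kbar, npts=20001):
    """Gamma_D/gamma from Eq. (2.13) with the HDAF window of Eq. (2.24).
    k_band is the HDAF bandwidth k_{M,sigma} = sqrt(M)/sigma, Eq. (2.25)."""
    sigma = np.sqrt(M) / k_band
    dx = np.pi / kbar                  # Nyquist relation, Eq. (2.14)
    k = np.linspace(-kbar, kbar, npts)
    aD2 = gammaincc(M // 2 + 1, k**2 * sigma**2 / 2) ** 2
    return np.sqrt(dx / (2 * np.pi) * np.sum(aD2) * (k[1] - k[0]))

# Compare with the ideal-window value sqrt(k_{M,sigma}/k-bar) of Eq. (2.28).
for kbar in (2.0, 4.0, 8.0):
    print(kbar, hdaf_noise_reduction(32, 1.0, kbar), np.sqrt(1.0 / kbar))
```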

An asymptotic expansion of this expression in terms of M is considered in the appendix. The leading-order term is identical to the result for the ideal filter; that is, using Eq. (2.14),

$$\frac{\Gamma_D}{\gamma} \simeq \begin{cases} \sqrt{k_{M,\sigma}/\bar{k}}, & \text{if } k_{M,\sigma} < \bar{k}, \\ 1, & \text{if } k_{M,\sigma} > \bar{k}. \end{cases} \tag{2.28}$$

In other words, if the bandwidth k_{M,σ} of the HDAF filter is smaller than the bandwidth k̄ implied by the spacing, then there is redundancy in the data and noise will be reduced. If the bandwidth of the HDAF is larger than the bandwidth of the spacing, no noise reduction occurs.


Fig. 2. A comparison of noise reduction (plotted vs. κ ≡ k̄) using the ideal window (solid line) and the HDAF (dashed line). The bandwidth is k_{M,σ} = 1 and the HDAF order is M = 32.

Comparing Eq. (2.28) with Eq. (2.21), we see that to lowest order the HDAF provides the same noise reduction as the square window for k_{M,σ} = k*. From Fig. 1 this is to be expected. An interesting question is how fast the HDAF result converges to the asymptotic ideal-window noise reduction. In a typical experiment, one should expect that the spacing Δx is chosen such that the spacing bandwidth k̄ is larger than the true bandwidth of the signal. This assures that there is no systematic distortion of the signal. As shown in the appendix, in this limit one finds:

$$\frac{\Gamma_D}{\gamma} \simeq \sqrt{\frac{k_{M,\sigma}}{\bar{k}}} \left[1 + O\!\left(\frac{1}{\sqrt{M}}\right)\right]. \tag{2.29}$$

A comparison of the noise reduction of the ideal filter and the HDAF, as obtained from Eq. (2.26), is shown in Fig. 2 as a function of k̄ for a bandwidth k_{M,σ} = 1 and HDAF order M = 32. As may be seen from the figure, for this typical case there is no practical difference between the noise reduction of the HDAF filter and that of the ideal filter.

In summary, the noise reduction of the HDAF tends to that of the ideal window in the limit M, σ → ∞ with k_{M,σ} fixed at k*. At the same time, for any finite M and σ, the range of the HDAF in the physical x space is limited by its Gaussian decay. The HDAF thus has the optimal features of both the running average (limited range in the physical space) and the sinc function (limited range in the frequency space), while delivering approximately the same noise reduction as the sinc function.

2.4. Noise reduction using running averages

The running average method used to denoise data replaces the measured value f_n(x_j) at each grid point by its average value, defined as ⟨f_n(x_j)⟩_K = Σ_{l=j−K}^{j+K} w_l f_n(x_l), where the w_l provide a weighted average. The most straightforward application is to use a simple mean, w_l = 1/(2K + 1). In the language of this paper, this means that in Eq. (2.11)

$$\delta_{ra}(x_j) = \begin{cases} \dfrac{1}{\Delta x\,(2K+1)} & \text{for the } 2K+1 \text{ grid points included in the averaging,} \\ 0 & \text{otherwise.} \end{cases} \tag{2.30}$$

This choice replaces the noisy value of the function at a grid point by a running average of the values at surrounding points. From Eq. (2.11) one then finds that

$$\frac{\Gamma_{ra}}{\gamma} = \frac{1}{\sqrt{2K+1}}, \tag{2.31}$$

as the sketch below confirms numerically.
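A two-line check of Eq. (2.31), with an arbitrary choice of K:

```python
import numpy as np

K = 10
w = np.full(2 * K + 1, 1.0 / (2 * K + 1))  # simple-mean weights w_l; these equal
                                           # dx * delta_ra at the included points, Eq. (2.30)
print(np.sqrt(np.sum(w**2)), 1.0 / np.sqrt(2 * K + 1))   # both sides of Eq. (2.31)
```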


Fig. 3. A comparison of the Fourier components of the ideal window (mesa), the HDAF and the running average (solid line), under equal denoising conditions. The bandwidth is k* = 1, the number of points used for the running average is 21, and the order of the HDAF is 40. Note the significant distortion of the bandwidth associated with the running average function.

This is an intuitively obvious result. When averaging over 2K + 1 points, the standard deviation of the noise is reduced by the square root of the number of points used in the averaging. The more points used in the running average, the greater the noise reduction; however, simply adding more points into the running average is not useful unless the bandwidth is very narrow. Averaging not only reduces the noise, it also smooths the underlying signal. More specifically, from Eq. (2.15) the Fourier component a_{ra}(k) for the running average is

$$a_{ra}(k) = \frac{1}{2K+1}\; \frac{\sin[(K+1)k\Delta x] + \sin(K k\Delta x)}{\sin(k\Delta x)}. \tag{2.32}$$

The noise reduction of the ideal window and the HDAF when Δx = π/((N + 1)k*) will be identical to the noise reduction of the running average provided that 2K = N. However, under these conditions, the Fourier component of the running average can differ significantly (depending on the bandwidth k* of the signal, which is independent of K) from that of the ideal window and the HDAF, as shown in Fig. 3 for k* = 1. Thus f_a(x) is only a smoothed approximation to f(x). This is in contrast to the cases previously discussed, where denoising is done using filters that approximate an ideal window. The running average always results in noise reduction. Likewise (unless the signal is constant) the signal is always distorted by a running average, to a degree determined by the bandwidth of the signal. In contrast, for the ideal window there is noise reduction only if the Nyquist bandwidth k̄ exceeds the true bandwidth k* of the signal, and in this circumstance there is no distortion of the signal. For the HDAF filter this statement must be qualified to virtually no distortion of the signal. More precisely, the HDAF filter can be made to approximate the ideal window filter to controllable accuracy by adjusting the HDAF parameters. The distortion of the signal is therefore controllable to arbitrary accuracy. The range of sampling of the HDAF in the physical space, though larger than that of the running average, remains finite, while the kernel of the ideal filter decays only as 1/x, as shown in Fig. 4. In this sense, the HDAF preserves the best of both worlds: it does not distort the signal significantly but is also limited in range in the physical space.
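The distortion visible in Fig. 3 can be reproduced by tabulating the three Fourier responses side by side. The sketch below is our own illustration; it uses the parameter values quoted in the caption of Fig. 3 (21 averaging points, HDAF order 40, k* = 1).

```python
import numpy as np
from scipy.special import gammaincc

kstar, K, M = 1.0, 10, 40                   # 2K + 1 = 21 points, HDAF order 40
dx = np.pi / ((2 * K + 1) * kstar)          # spacing giving equal denoising (2K = N)
sigma = np.sqrt(M) / kstar                  # HDAF bandwidth k_{M,sigma} = k*, Eq. (2.25)

k = np.linspace(1e-9, 3 * kstar, 300)       # start just off k = 0, where Eq. (2.32) is 0/0
a_ideal = (k <= kstar).astype(float)                        # ideal window, Eq. (2.20)
a_hdaf = gammaincc(M // 2 + 1, k**2 * sigma**2 / 2)         # HDAF window, Eq. (2.24)
a_ra = (np.sin((K + 1) * k * dx) + np.sin(K * k * dx)) / (
    (2 * K + 1) * np.sin(k * dx))                           # running average, Eq. (2.32)

# Plotting the three curves against k reproduces the content of Fig. 3:
# a_ra already falls well below unity inside the band (e.g. ~0.64 at k = k*).
```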

3. Conclusions

As is well known, the ideal filter delivers the best possible results whenever one wants to preserve exactly a band-limited signal while exactly deleting everything outside the signal band. It does so at the price of requiring sampling of the signal over an unlimited range in physical space, with a sampling rate which must at least equal the rate determined by the Nyquist frequency.


Fig. 4. A comparison of the range of smoothing functions, d(x), in the physical space for the ideal window (a sinc function, depicted by the solid line), the HDAF and the running average (a mesa) for a bandwidth of k* = 1. Other parameters are as in Fig. 3. Note the Gaussian decay of the HDAF function as compared to the slower 1/x decay of the sinc function.

The running average, on the other hand, requires only a finite range of samples in the physical space, but as the range of the averaging increases, ultimately only the zero (DC) Fourier component remains. Too much averaging destroys the signal. In other words, the running average in Fourier space is the sinc function, a poor approximation to the ideal filter. From a practical point of view, it is desirable to minimize the range of sampling needed, maximize the noise reduction, and yet retain as much as possible the bandwidth in Fourier space that captures the signal of interest. The ability to achieve this is limited fundamentally by the uncertainty principle. In this paper we have shown that the HDAFs, which are (relative) minimum uncertainty solutions, provide the means to achieve these three objectives. The results establish that HDAF-based filters deliver effectively ideal-window removal of Gaussian noise, while still requiring only a finite range of samples in the physical space. In this paper we considered only uncorrelated Gaussian random noise. It would be desirable to study further how the HDAFs perform with correlated noise, which is more typical, for example, of Monte Carlo computations. It is also of interest to study the effects of non-Gaussian noise.

Acknowledgement

This work has been supported by grants from the U.S.–Israel Binational Science Foundation, the Minerva Foundation, Munich/Germany, the U.S. National Science Foundation and the R.A. Welch Foundation. The Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under contract number W-7405-Eng-82.

Appendix A. Asymptotics of noise reduction for the HDAF

The purpose of this appendix is to derive the asymptotic expansion of Eq. (2.26) for the noise reduction of the HDAF in the limit of large order M, keeping the bandwidth k_{M,σ} fixed and assuming that the spacing bandwidth k̄ = π/Δx > k_{M,σ}. The integral representation of the incomplete gamma function is [16]:


$$\Gamma(n+1, t) = \int_{t}^{\infty} dy\, e^{-y}\, y^{n}. \tag{A.1}$$

Since λ² = (k̄/k_{M,σ})² M/2, one finds that the ratio Γ(M/2 + x, λ²)/(M/2)! is exponentially small for x = 1, 3/2 and may thus be ignored. The two important contributions in Eq. (2.26) come from the terms

$$\frac{2}{(M/2)!\,\lambda}\, \Gamma\!\left(\frac{M+3}{2}\right) \quad \text{and} \quad \frac{2}{(M/2)!^2\,\lambda} \int_{0}^{\lambda^2} dt\, e^{-t}\, t^{M/2}\, \Gamma\!\left(\frac{M+3}{2}, t\right).$$

For the first term, one may use the standard (Stirling) asymptotic expansion formula [16]:

$$\frac{2}{(M/2)!\,\lambda}\, \Gamma\!\left(\frac{M+3}{2}\right) \simeq \frac{2}{\lambda} \sqrt{\frac{M}{2}} \left[1 + O\!\left(\frac{1}{M}\right)\right]. \tag{A.2}$$

For the second term, since λ² > M/2 one may safely replace the upper limit of the integral by ∞; this replacement gives an exponentially small error, of the order of e^{−M/2}. Using a steepest descent expansion we note that for large N

$$e^{-t}\, t^{N} \simeq e^{-N} N^{N} e^{-(t-N)^2/(2N)}. \tag{A.3}$$

Expanding Γ((M + 3)/2, t) about t = N (with N = M/2) we find:

$$\frac{2}{(M/2)!^2\,\lambda} \int_{0}^{\infty} dt\, e^{-t}\, t^{M/2}\, \Gamma\!\left(\frac{M+3}{2}, t\right) \simeq \frac{2}{\lambda}\, \frac{\Gamma\!\left(\frac{M+3}{2}, \frac{M}{2}\right)}{(M/2)!} \left[1 + O\!\left(\frac{1}{\sqrt{M}}\right)\right]. \tag{A.4}$$

Furthermore, one may show that

$$\frac{\Gamma\!\left(\frac{M+3}{2}, \frac{M}{2}\right)}{(M/2)!} \simeq \frac{1}{2}\sqrt{\frac{M}{2}} + O(M^0). \tag{A.5}$$

Combining all terms, one obtains Eq. (2.29). To demonstrate this explicitly, we plot in Fig. 5 the function

$$\epsilon(M) \equiv \left[\frac{\bar{k}}{k_{M,\sigma}} \left(\frac{\Gamma_D}{\gamma}\right)^{2} - 1\right]^{-2} \tag{A.6}$$

vs. M. One clearly sees that for large M the slope is linear.
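The linear trend of Fig. 5 can be reproduced numerically by evaluating (Γ_D/γ)² by quadrature of Eq. (2.13) with the window (2.24), rather than from the closed form (2.26), and inserting the result into the definition (A.6); the parameter values below are arbitrary choices of ours.

```python
import numpy as np
from scipy.special import gammaincc

def gamma_D_sq(M, k_band, kbar, npts=40001):
    """(Gamma_D/gamma)^2 from Eqs. (2.13) and (2.24), by quadrature."""
    sigma = np.sqrt(M) / k_band
    k = np.linspace(-kbar, kbar, npts)
    aD2 = gammaincc(M // 2 + 1, k**2 * sigma**2 / 2) ** 2
    return (np.pi / kbar) / (2 * np.pi) * np.sum(aD2) * (k[1] - k[0])

kbar, k_band = 4.0, 1.0
for M in (8, 16, 32, 64, 128):
    eps = ((kbar / k_band) * gamma_D_sq(M, k_band, kbar) - 1.0) ** (-2)
    print(M, eps)      # eps(M) grows roughly linearly with M, as in Fig. 5
```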

Fig. 5. The asymptotic convergence of the HDAF noise reduction to that of the ideal window. The function ε(M) (see Eq. (A.6)) is plotted vs. the order parameter M. Note the linear slope when M is sufficiently large.


References

[1] Discussions of various examples may be found in: J.S. Rigden (Ed.), Macmillan Encyclopedia of Physics, Vols. 1–4, Simon and Schuster Macmillan, New York, 1996.
[2] H.C. Andrews, B.R. Hunt, Digital Image Restoration, Prentice-Hall, Englewood Cliffs, NJ, 1977.
[3] B. Farhang-Boroujeny, Adaptive Filters, Wiley, New York, 1998.
[4] F.L. Lewis, V.L. Syrmos, Optimal Control, Wiley, New York, 1995.
[5] D.K. Hoffman, G.H. Gunaratne, D.S. Zhang, D.J. Kouri, Chaos 10 (2000) 240.
[6] D.K. Hoffman, H. Zhang, Z. Shi, D.J. Kouri, S.-Y. Lee, E. Pollak, Theor. Chem. Acc. 105 (2001) 173.
[7] D.S. Zhang, G.W. Wei, D.J. Kouri, D.K. Hoffman, M. Gorman, A. Palacios, G.H. Gunaratne, Phys. Rev. E 60 (1999) 3353.
[8] D.J. Kouri, D.S. Zhang, G.W. Wei, T. Konshak, D.K. Hoffman, Phys. Rev. E 59 (1999) 1274.
[9] D.K. Hoffman, D.J. Kouri, in: G. Cohen (Ed.), Proc. 3rd International Conf. on Math. and Num. Aspects of Wave Propagation, SIAM, Philadelphia, 1995, pp. 56–83.
[10] J.-L. Liao, E. Pollak, J. Chem. Phys. 111 (1999) 7244.
[11] S.W. Smith, The Scientist's and Engineer's Guide to Digital Signal Processing, California Technical Publishing, San Diego, 1997.
[12] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, San Diego, CA, 1999.
[13] A.V. Oppenheim, R.W. Schafer, Discrete-Time Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[14] D.K. Hoffman, D.J. Kouri, Phys. Rev. Lett. 85 (2000) 5263.
[15] D.K. Hoffman, D.J. Kouri, Phys. Rev. A, in press.
[16] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions, NBS Applied Math. Series, Vol. 55.