Statistical estimation for hyper shrinkage

S. Poornachandra a,*, N. Kumaravel b

a Department of ECE, SSN College of Engineering, Anna University, India
b Department of ECE, Anna University, India

Digital Signal Processing 17 (2007) 485–494
www.elsevier.com/locate/dsp

Available online 15 November 2006

Abstract

A new shrinkage technique in the wavelet domain, called hyper shrinkage, that uses a hyperbolic function for improved denoising is presented. The methodology is statistically significant in terms of signal recovery and improvement in signal-to-noise ratio over both hard and soft shrinkage. A mathematical treatment of the proposed shrinkage function shows an improvement over existing shrinkage functions in terms of variance and bias. A simulation of the mathematical model shows the effectiveness of the proposed shrinkage function.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Hyper shrinkage; Mean estimation; Variance estimation; Wavelet transforms

1. Introduction

Recently, the wavelet transform has proven to be a useful tool for nonstationary signal analysis. Wavelets provide a flexible prototyping environment that comes with fast computational algorithms. A shrinkage method compares each empirical wavelet coefficient with a threshold and sets it to zero if its magnitude is less than the threshold value [1]. The threshold acts as an oracle that distinguishes between significant and insignificant coefficients. Shrinkage of empirical wavelet coefficients works best when the underlying set of true coefficients of f is sparse. In wavelet shrinkage, the energy of the function is concentrated in a few coefficients [5]. Therefore, a nonlinear function in the wavelet domain retains the few large coefficients that represent the function, while the coefficients below the threshold are reduced to zero.

The development of practical algorithms requires that the thresholding of wavelet coefficients be chosen empirically. The threshold parameter λ controls the bias-variance trade-off of the risk [5]. Wavelet methods allow the threshold to be chosen automatically, simply and naturally, using decision-theoretic criteria based on Stein's unbiased risk estimate (SURE). SureShrink is an optimized hybrid scale-dependent thresholding scheme based on SURE [4]. It combines the universal threshold selection scheme with a scale-dependent adaptive threshold selection scheme and provides the best estimation results in the sense of l2 risk when the true function is not known. The SureShrink algorithm runs fully automatically in order n log(n) time, where n is the dataset size, and achieves the optimal speed of estimation for the object under consideration.

The derivative of the standard soft shrinkage function is not continuous [2]. Because of the discontinuity of the hard shrinkage function, hard shrinkage estimates tend to have larger variance and can be unstable and sensitive to small changes in the data.


The soft shrinkage estimates tend to have larger bias, because large coefficients are shrunk toward zero [3]. Firm shrinkage [5] requires two thresholds, which makes the threshold selection problem much harder and computationally more expensive for adaptive threshold selection procedures. The proposed shrinkage concept is computationally more complex, but it offers excellent noise rejection capability.

One direction of our work deals with the application of the hyper shrinkage function for noise removal. To remove noise, these methods threshold the transform coefficients. For wavelet techniques, this field of application is well studied; pyramidal techniques, being less popular than their wavelet counterparts, require more investigation. We present an analytical method for deriving the probability distribution of transform coefficients, which in turn permits an appropriate choice of threshold values for denoising.

2. Hyper shrinkage model

We attempt to recover an unknown function from noisy sampled data. Using orthonormal bases of compactly supported wavelets, we develop a nonlinear method that works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. Assume the observed data vector y = [y_1, y_2, ..., y_N] ∈ R^N is observed at equally spaced points according to the model

\[
y_i = f(x_i) + \sigma n_i, \qquad i = 1, 2, \ldots, N, \tag{1}
\]

where the n_i are i.i.d. N(0, 1) noise samples and f(x_i) are samples of a deterministic function. The major goal of this paper is to estimate f with small l2 risk (MSE),

\[
R(\hat{f}, f) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{E}\bigl(\hat{f}_i - f_i\bigr)^2. \tag{2}
\]
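As a concrete illustration of the observation model (1) and the empirical version of the risk (2), the following minimal NumPy sketch generates noisy samples and evaluates the risk; the test signal, noise level, and random seed are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the observation model (1) and the empirical l2 risk (2).
# The test signal f and the noise level sigma are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

N = 1024
x = np.linspace(0.0, 1.0, N)
f = np.sin(2 * np.pi * 5 * x)             # deterministic f(x_i) (assumed example)
sigma = 0.3                                # noise scale (assumed example)

y = f + sigma * rng.standard_normal(N)     # y_i = f(x_i) + sigma * n_i, n_i ~ N(0, 1)

def l2_risk(f_hat, f_true):
    """Empirical MSE risk as in (2): (1/n) * sum_i (f_hat_i - f_i)^2."""
    return np.mean((f_hat - f_true) ** 2)

print("risk of the raw observations:", l2_risk(y, f))   # approximately sigma**2
```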

We use this Gaussian white noise (GWN) model because it presents the fundamental issues of nonparametric estimation in their simplest form, unobscured by important but complicating factors such as discrete sampling and heteroscedasticity, as in density estimation. We employ the sequence-space form because it permits the reduction of a number of minimax questions to simpler univariate and exchangeable multivariate normal decision problems. Here we adopt a hyperbolic function, which gives a nonlinear model. Unlike hard shrinkage, the proposed shrinkage model is continuously differentiable. The hyper shrinkage model [6] is expressed as

\[
\delta_{\lambda}^{\mathrm{hyp}}(x) = \tanh(\rho x)\,\bigl(|x| - \lambda\bigr)_{+}, \qquad 5 \geq \rho \geq 0, \tag{3}
\]

where ρ is the boundary contraction parameter. The condition that makes all coefficients fall within the curve is

\[
\rho = \frac{\varepsilon}{\max |x|}, \tag{4}
\]

where ε is the exponent region value as in Fig. 1. The wavelet coefficients at the coarsest scale are left intact, while the coefficients at all the other scales are thresholded with the universal threshold

\[
\lambda = \sigma \sqrt{2 \log N}, \tag{5}
\]

where σ² is the noise variance and N is the length of the signal. The basic algorithm for hyper shrinkage is:

1. Compute the wavelet transform w = Hy, where H is the wavelet transform matrix.
2. Compute an estimate σ̂ of the standard deviation of the noise in (1) from the wavelet coefficients w.
3. Apply the nonlinear shrinkage rule δ_λ^{hyp}(·) to the coefficients w_k:

\[
\hat{w}_k = \hat{\sigma}\, \delta_{\lambda}\!\left(\frac{w_k}{\hat{\sigma}}\right). \tag{6}
\]

4. Invert the wavelet transform: f̂_λ = H^{-1} ŵ.
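The following is a minimal Python sketch of steps 1-4, assuming the PyWavelets package for the transform. The wavelet ('db4'), the decomposition depth, the MAD-based noise estimate, and the application of the threshold to the normalized coefficients are illustrative choices not specified by the paper; ρ = 5 follows the value used later in the simulation.

```python
# A minimal sketch of the hyper shrinkage denoising algorithm (steps 1-4),
# using PyWavelets for the transform. Wavelet, level, and the noise-estimation
# rule are assumptions made for this example.
import numpy as np
import pywt

def hyper_shrink(w, lam, rho):
    """Hyper shrinkage rule (3): delta(w) = tanh(rho * w) * (|w| - lambda)_+ ."""
    return np.tanh(rho * w) * np.maximum(np.abs(w) - lam, 0.0)

def denoise_hyper(y, wavelet="db4", level=5, rho=5.0):
    # 1. Forward wavelet transform w = Hy.
    coeffs = pywt.wavedec(y, wavelet, level=level)

    # 2. Noise estimate from the finest-scale detail coefficients (MAD rule).
    sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745

    # Universal threshold (5): lambda = sigma_hat * sqrt(2 log N).
    lam = sigma_hat * np.sqrt(2.0 * np.log(len(y)))

    # 3. Shrink the detail coefficients as in (6); the coarsest (approximation)
    #    coefficients coeffs[0] are left intact, as stated in the text. The
    #    threshold is applied to the normalized coefficients, i.e. lam/sigma_hat.
    new_coeffs = [coeffs[0]]
    for d in coeffs[1:]:
        new_coeffs.append(sigma_hat * hyper_shrink(d / sigma_hat, lam / sigma_hat, rho))

    # 4. Inverse transform f_hat = H^{-1} w_hat.
    return pywt.waverec(new_coeffs, wavelet)[: len(y)]
```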


Fig. 1. (a) Hyper shrinkage function is compared with other shrinkage functions. (b) Variance distribution of shrinkage functions.

2.1. Mathematical treatment

Here we give the mean and variance estimation for the proposed model. Let X ~ N(θ, 1). Under the shrinkage function δ_λ^{hyp}(X) with threshold λ, define the mean, variance, and risk of the shrinkage estimator of θ by

\[
M_{\lambda}^{\mathrm{hyp}}(\theta) = \mathrm{E}\bigl\{\delta_{\lambda}^{\mathrm{hyp}}(X)\bigr\}, \tag{7}
\]
\[
V_{\lambda}^{\mathrm{hyp}}(\theta) = \operatorname{Var}\bigl\{\delta_{\lambda}^{\mathrm{hyp}}(X)\bigr\}, \tag{8}
\]
\[
R_{\lambda}^{\mathrm{hyp}}(\theta) = \mathrm{E}\bigl\{\bigl(\delta_{\lambda}^{\mathrm{hyp}}(X) - \theta\bigr)^2\bigr\} = V_{\lambda}^{\mathrm{hyp}}(\theta) + \bigl(M_{\lambda}^{\mathrm{hyp}}(\theta) - \theta\bigr)^2. \tag{9}
\]

Theorem 1. Let Φ and φ be the distribution function and the density function of the standard Gaussian random variable, respectively. Then the mean estimate is given by straightforward calculus (the proof is given in Appendix A):

\[
M_{\lambda}^{\mathrm{hyp}}(\theta) = \frac{1}{3}\Bigl[-\theta^3 + \alpha(\lambda,\theta)\,\varphi(\lambda-\theta) + \beta(\lambda,\theta)\,\varphi(\lambda+\theta) + \theta(\theta^2+6)\bigl(\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr)\Bigr], \tag{10}
\]

where

\[
\alpha(\lambda,\theta) = \lambda^2 + \theta\lambda + \theta^2 - 1, \qquad \beta(\lambda,\theta) = -\lambda^2 + \theta\lambda - \theta^2 + 1.
\]

Theorem 2. The variance estimate is given by straightforward calculus:

\[
V_{\lambda}^{\mathrm{hyp}}(\theta) = \gamma(\lambda,\theta)\,\varphi(\lambda-\theta) + \eta(\lambda,\theta)\,\varphi(\lambda+\theta) + (\theta^6 + 16\theta^4 + 52\theta^2 + 19)\bigl(\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr) + \frac{1}{9}\theta^6 + \frac{7}{3}\theta^4 + 10\theta^2 + \frac{14}{3} - \bigl(M_{\lambda}^{\mathrm{hyp}}(\theta)\bigr)^2, \tag{11}
\]

where

\[
\gamma(\lambda,\theta) = -\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+6) - \lambda^2\theta(\theta^2+10) - \lambda(\theta^4+13\theta^2+19) - 39\theta - 15\theta^3 - \theta^5,
\]
\[
\eta(\lambda,\theta) = -\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+6) + \lambda^2\theta(\theta^2+10) - \lambda(\theta^4+13\theta^2+19) + 39\theta + 15\theta^3 + \theta^5.
\]
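The closed-form expressions (9)-(11) can be evaluated numerically as stated. The sketch below simply transcribes them using SciPy's standard normal density and distribution functions; sweeping θ for a fixed λ gives bias, variance, and risk curves of the kind shown in Fig. 1.

```python
# Direct numerical evaluation of the closed-form mean (10), variance (11) and
# risk (9) of the hyper shrinkage estimator. This transcribes the stated
# formulas; it is a sketch, not a re-derivation.
import numpy as np
from scipy.stats import norm

def mean_hyper(theta, lam):
    """Theorem 1, Eq. (10)."""
    alpha = lam**2 + theta * lam + theta**2 - 1
    beta = -lam**2 + theta * lam - theta**2 + 1
    phi_m, phi_p = norm.pdf(lam - theta), norm.pdf(lam + theta)
    Phi_diff = norm.cdf(lam - theta) - norm.cdf(-lam - theta)
    return (-theta**3 + alpha * phi_m + beta * phi_p
            + theta * (theta**2 + 6) * Phi_diff) / 3.0

def var_hyper(theta, lam):
    """Theorem 2, Eq. (11)."""
    gamma = (-lam**5 - lam**4 * theta - lam**3 * (theta**2 + 6)
             - lam**2 * theta * (theta**2 + 10)
             - lam * (theta**4 + 13 * theta**2 + 19)
             - 39 * theta - 15 * theta**3 - theta**5)
    eta = (-lam**5 - lam**4 * theta - lam**3 * (theta**2 + 6)
           + lam**2 * theta * (theta**2 + 10)
           - lam * (theta**4 + 13 * theta**2 + 19)
           + 39 * theta + 15 * theta**3 + theta**5)
    phi_m, phi_p = norm.pdf(lam - theta), norm.pdf(lam + theta)
    Phi_diff = norm.cdf(lam - theta) - norm.cdf(-lam - theta)
    poly = theta**6 / 9 + 7 * theta**4 / 3 + 10 * theta**2 + 14.0 / 3
    return (gamma * phi_m + eta * phi_p
            + (theta**6 + 16 * theta**4 + 52 * theta**2 + 19) * Phi_diff
            + poly - mean_hyper(theta, lam) ** 2)

def risk_hyper(theta, lam):
    """Eq. (9): R = V + (M - theta)^2."""
    return var_hyper(theta, lam) + (mean_hyper(theta, lam) - theta) ** 2

print(mean_hyper(0.5, 2.0), var_hyper(0.5, 2.0), risk_hyper(0.5, 2.0))
```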


Fig. 2. Simulation results. (a) Original ECG (ML-II from PhysioBank). (b) Noisy ECG with noise level (50% variance of the original ECG). (c) Recovered signal when noise level is 10%. (d) Recovered signal when noise level is 50%. (e) Recovered signal when noise level is 30%. (f) Recovered signal when noise level is 40%.

Both (10) and (11) can be calculated recursively by

\[
I_r = \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^r e^{-x^2/2}\, dx = -\lambda^{r-1}\bigl[A - (-1)^{r-1}B\bigr] + \theta I_{r-1} + (r-1) I_{r-2}, \tag{12}
\]

where

\[
A = \varphi(\lambda-\theta), \qquad B = \varphi(\lambda+\theta).
\]

The mean and variance estimates given in (10) and (11) form an approximate model obtained by considering only the first two terms of the power series of the hyperbolic function. The proposed model was tested in the MATLAB simulation environment and was found suitable. For a more accurate mathematical model, more power series terms can be included.
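A small numerical sketch of the recursion (12) is given below, checked against direct quadrature. In this sketch A and B are taken as the unnormalized kernel values exp(−(λ∓θ)²/2), a convention assumed for the example; with the normalized density φ, an extra factor of √(2π) would appear in the boundary term.

```python
# Sketch of the recursion (12) for I_r = int_{-lam-theta}^{lam-theta}
# (x + theta)^r exp(-x^2/2) dx, compared with direct quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def I_recursive(r, lam, theta):
    # Unnormalized Gaussian kernel values at the interval endpoints (assumed convention).
    A = np.exp(-0.5 * (lam - theta) ** 2)
    B = np.exp(-0.5 * (lam + theta) ** 2)
    sqrt2pi = np.sqrt(2.0 * np.pi)
    I0 = sqrt2pi * (norm.cdf(lam - theta) - norm.cdf(-lam - theta))   # r = 0
    I1 = -A + B + theta * I0                                          # r = 1
    I = [I0, I1]
    for k in range(2, r + 1):
        # Recursion (12): boundary term plus theta*I_{k-1} plus (k-1)*I_{k-2}.
        I.append(-lam ** (k - 1) * (A - (-1) ** (k - 1) * B)
                 + theta * I[k - 1] + (k - 1) * I[k - 2])
    return I[r]

def I_quad(r, lam, theta):
    integrand = lambda x: (x + theta) ** r * np.exp(-0.5 * x ** 2)
    val, _ = quad(integrand, -lam - theta, lam - theta)
    return val

lam, theta = 2.0, 0.7
for r in range(7):
    print(r, I_recursive(r, lam, theta), I_quad(r, lam, theta))
```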


Fig. 3. %PRD for different noise % in ECG.


Fig. 4. SNR (dB) for different noise % in ECG.

3. Simulation result

We consider a normal ECG to which white Gaussian noise (WGN) is added at different noise levels; the added noise is a percentage of the standard deviation of the ECG. The ECG, obtained from PhysioBank, is sampled at 360 Hz and truncated to 1000 samples for the simulation. The simulation was conducted on different ECG records obtained from different limb leads, and the proposed hyper shrinkage model was found to be robust. The simulation shown here corresponds to an ECG signal taken at lead ML-II. During the simulation the boundary contraction factor was set to ρ = 5. It is evident from Fig. 1 that the variance distribution of the proposed shrinkage is equal to that of soft shrinkage, so it retains the advantageous features of soft shrinkage.

The percentage root mean square difference (%PRD) for the noisy ECG at different noise levels is plotted in Fig. 3. The simulation is compared against other shrinkage techniques, viz. soft, hard, and garrote shrinkage. The %PRD of hyper shrinkage shows a better response at higher noise levels: as the noise level increases, the %PRD increases rapidly for the other methods, but it remains consistent for the hyper shrinkage model. Figure 4 shows SNR (dB) versus noise level for all four shrinkage functions. It is evident from this figure that, as the noise level of the ECG increases, the SNR decreases, but the hyper shrinkage model remains consistent. The proposed hyper shrinkage function outperforms the other shrinkage functions.

4. Conclusion

The analytical mean and variance are derived for the hyper shrinkage model in a Gaussian noise environment. Simulation shows that the proposed model is able to reduce the noise level effectively (refer to Fig. 2). It can also be seen that the optimal solution using the new shrinkage function is better than the standard soft shrinkage function for the given example. The performance of the shrinkage function has been evaluated for data compression purposes, and the results show that the technique achieves good PRD. The shrinkage function was further tested with respect to SNR. The implementation of the new algorithm is simple. The results are visually pleasant and comparable to those of state-of-the-art algorithms. The proposed shrinkage model has potential applications in situations where denoising of the signal is the prime criterion.
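For reference, minimal implementations of the two figures of merit used above follow; the normalization of %PRD by the raw signal energy (rather than a mean-subtracted version) is an assumption of this sketch.

```python
# Sketches of the two figures of merit: percentage root mean square difference
# (%PRD) and SNR in dB, between a reference signal x and its estimate x_hat.
import numpy as np

def prd_percent(x, x_hat):
    """%PRD = 100 * sqrt( sum (x - x_hat)^2 / sum x^2 )  (normalization assumed)."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def snr_db(x, x_hat):
    """SNR = 10 * log10( sum x^2 / sum (x - x_hat)^2 )."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
```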

Appendix A

Proof of Theorem 1 (mean calculation). To determine the mean estimate for the hyper shrinkage function, consider only the first two terms of the power series as an approximation:

\[
M_{\lambda}^{\mathrm{hyp}}(\theta) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \Bigl(x - \frac{x^3}{3}\Bigr) e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} \Bigl(x - \frac{x^3}{3}\Bigr) e^{-(x-\theta)^2/2}\, dx
\]
\[
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x\, e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x\, e^{-(x-\theta)^2/2}\, dx - \frac{1}{3}\Biggl[\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^3 e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^3 e^{-(x-\theta)^2/2}\, dx\Biggr]. \tag{A.1}
\]

Consider the first two terms of (A.1):

\[
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x\, e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x\, e^{-(x-\theta)^2/2}\, dx = \theta + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x\, e^{-(x-\theta)^2/2}\, dx.
\]

On applying (A.17),

\[
= \theta - \varphi(\lambda-\theta) + \varphi(\lambda+\theta) + \theta\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr]. \tag{A.2}
\]

Consider the third and fourth terms of (A.1):

\[
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^3 e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^3 e^{-(x-\theta)^2/2}\, dx. \tag{A.3}
\]

The first term of (A.3) follows from the third-order moment of X, obtained from the moment generating function:

\[
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^3 e^{-(x-\theta)^2/2}\, dx = \frac{d^3 M_X(t)}{dt^3}\bigg|_{t=0} = \mathrm{E}\{X^3\} = \theta^3 + 3\theta. \tag{A.4}
\]

The second term of (A.3) follows from (A.19):

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^3 e^{-(x-\theta)^2/2}\, dx = A(\lambda,\theta)\,\varphi(\lambda-\theta) + B(\lambda,\theta)\,\varphi(\lambda+\theta) - \theta(\theta^2+3)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.5}
\]

where

\[
A(\lambda,\theta) = -\lambda^2 - \theta\lambda - \theta^2 - 2, \qquad B(\lambda,\theta) = \lambda^2 - \theta\lambda + \theta^2 + 2.
\]

To obtain the mean estimate of the hyper shrinkage function, combine (A.2), (A.4), and (A.5) according to (A.1):

\[
M_{\lambda}^{\mathrm{hyp}}(\theta) = \theta - \varphi(\lambda-\theta) + \varphi(\lambda+\theta) + \theta\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr] - \frac{1}{3}(\theta^3 + 3\theta)
\]
\[
\quad - \frac{1}{3}\bigl[(-\lambda^2 - \theta\lambda - \theta^2 - 2)\varphi(\lambda-\theta) + (\lambda^2 - \theta\lambda + \theta^2 + 2)\varphi(\lambda+\theta)\bigr] + \frac{1}{3}\theta(\theta^2+3)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr],
\]
\[
M_{\lambda}^{\mathrm{hyp}}(\theta) = -\frac{\theta^3}{3} + \frac{1}{3}\bigl[(\lambda^2 + \theta\lambda + \theta^2 - 1)\varphi(\lambda-\theta) + (-\lambda^2 + \theta\lambda - \theta^2 + 1)\varphi(\lambda+\theta)\bigr] + \frac{\theta}{3}(\theta^2+6)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.6}
\]
\[
M_{\lambda}^{\mathrm{hyp}}(\theta) = \frac{1}{3}\Bigl[-\theta^3 + \alpha(\lambda,\theta)\,\varphi(\lambda-\theta) + \beta(\lambda,\theta)\,\varphi(\lambda+\theta) + \theta(\theta^2+6)\bigl(\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr)\Bigr],
\]

where

\[
\alpha(\lambda,\theta) = \lambda^2 + \theta\lambda + \theta^2 - 1, \qquad \beta(\lambda,\theta) = -\lambda^2 + \theta\lambda - \theta^2 + 1.
\]

Proof of Theorem 2 (variance calculation).

\[
V_{\lambda}^{\mathrm{hyp}}(\theta) = \mathrm{E}\bigl\{\bigl[\delta_{\lambda}^{\mathrm{hyp}}(X)\bigr]^2\bigr\} - \bigl(\mathrm{E}\bigl\{\delta_{\lambda}^{\mathrm{hyp}}(X)\bigr\}\bigr)^2 = \mathrm{E}\bigl\{\bigl[\delta_{\lambda}^{\mathrm{hyp}}(X)\bigr]^2\bigr\} - \bigl(M_{\lambda}^{\mathrm{hyp}}(\theta)\bigr)^2. \tag{A.7}
\]

The second term is the square of the mean estimated in (A.6). For the first term, with the same two-term approximation,

\[
\mathrm{E}\bigl\{\bigl[\delta_{\lambda}^{\mathrm{hyp}}(X)\bigr]^2\bigr\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \Bigl(x - \frac{x^3}{3}\Bigr)^2 e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} \Bigl(x - \frac{x^3}{3}\Bigr)^2 e^{-(x-\theta)^2/2}\, dx
\]
\[
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \Bigl(x^2 + \frac{x^6}{9} - \frac{2x^4}{3}\Bigr) e^{-(x-\theta)^2/2}\, dx + \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} \Bigl(x^2 + \frac{x^6}{9} - \frac{2x^4}{3}\Bigr) e^{-(x-\theta)^2/2}\, dx. \tag{A.8}
\]

The solutions to the first, second, and third terms of (A.8) (the full-line moments) are obtained from (A.12), (A.14), and (A.16):

\[
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^2 e^{-(x-\theta)^2/2}\, dx = \theta^2 + 1,
\]
\[
\frac{2}{3}\,\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^4 e^{-(x-\theta)^2/2}\, dx = \frac{2}{3}\bigl[\theta^4 + 6\theta^2 + 3\bigr],
\]
\[
\frac{1}{9}\,\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^6 e^{-(x-\theta)^2/2}\, dx = \frac{1}{9}\bigl[\theta^6 + 15\theta^4 + 45\theta^2 + 15\bigr],
\]

so that their combined contribution is

\[
\theta^2 + 1 + \frac{2}{3}\theta^4 + 4\theta^2 + 2 + \frac{1}{9}\theta^6 + \frac{15}{9}\theta^4 + 5\theta^2 + \frac{15}{9} = \frac{1}{9}\theta^6 + \frac{7}{3}\theta^4 + 10\theta^2 + \frac{14}{3}.
\]

The solutions to the fourth, fifth, and sixth terms of (A.8) (the truncated moments) are obtained from (A.18), (A.20), and (A.22):

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^2 e^{-(x-\theta)^2/2}\, dx = -(\lambda+\theta)\varphi(\lambda-\theta) - (\lambda-\theta)\varphi(\lambda+\theta) + (\theta^2+1)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr],
\]
\[
\frac{2}{3}\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^4 e^{-(x-\theta)^2/2}\, dx = \bigl[-\lambda^3 - \lambda^2\theta - \lambda(\theta^2+3) - 5\theta - \theta^3\bigr]\varphi(\lambda-\theta) + \bigl[-\lambda^3 + \lambda^2\theta - \lambda(\theta^2+3) + 5\theta + \theta^3\bigr]\varphi(\lambda+\theta) + (\theta^4 + 6\theta^2 + 3)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.9}
\]
\[
\frac{1}{9}\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^6 e^{-(x-\theta)^2/2}\, dx = \bigl[-\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+5) - \lambda^2\theta(\theta^2+9) - \lambda(\theta^4+12\theta^2+15) - 33\theta - 14\theta^3 - \theta^5\bigr]\varphi(\lambda-\theta)
\]
\[
\quad + \bigl[-\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+5) + \lambda^2\theta(\theta^2+9) - \lambda(\theta^4+12\theta^2+15) + 33\theta + 14\theta^3 + \theta^5\bigr]\varphi(\lambda+\theta) + (\theta^6 + 15\theta^4 + 45\theta^2 + 15)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr].
\]

Collecting the truncated-moment contributions gives

\[
\bigl[-\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+6) - \lambda^2\theta(\theta^2+10) - \lambda(\theta^4+13\theta^2+19) - 39\theta - 15\theta^3 - \theta^5\bigr]\varphi(\lambda-\theta)
\]
\[
\quad + \bigl[-\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+6) + \lambda^2\theta(\theta^2+10) - \lambda(\theta^4+13\theta^2+19) + 39\theta + 15\theta^3 + \theta^5\bigr]\varphi(\lambda+\theta) + (\theta^6 + 16\theta^4 + 52\theta^2 + 19)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr],
\]

and therefore

\[
V_{\lambda}^{\mathrm{hyp}}(\theta) = \gamma(\lambda,\theta)\,\varphi(\lambda-\theta) + \eta(\lambda,\theta)\,\varphi(\lambda+\theta) + (\theta^6 + 16\theta^4 + 52\theta^2 + 19)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr] + \frac{1}{9}\theta^6 + \frac{7}{3}\theta^4 + 10\theta^2 + \frac{14}{3} - \bigl(M_{\lambda}^{\mathrm{hyp}}(\theta)\bigr)^2, \tag{A.10}
\]

where

\[
\gamma(\lambda,\theta) = -\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+6) - \lambda^2\theta(\theta^2+10) - \lambda(\theta^4+13\theta^2+19) - 39\theta - 15\theta^3 - \theta^5,
\]
\[
\eta(\lambda,\theta) = -\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+6) + \lambda^2\theta(\theta^2+10) - \lambda(\theta^4+13\theta^2+19) + 39\theta + 15\theta^3 + \theta^5.
\]

Here the moment generating function of X is

\[
M_X(t) = e^{\theta t + t^2\sigma^2/2}, \qquad \frac{d^r M_X(t)}{dt^r}\bigg|_{t=0} = \mathrm{E}\{X^r\} = \int_{-\infty}^{\infty} x^r f(x)\, dx.
\]

For X ~ N(θ, 1), M_X(t) = e^{θt + t²/2} and

\[
\frac{dM_X(t)}{dt} = e^{\theta t + t^2/2}(\theta + t), \qquad \frac{dM_X(t)}{dt}\bigg|_{t=0} = \theta, \tag{A.11}
\]
\[
\frac{d^2 M_X(t)}{dt^2} = e^{\theta t + t^2/2}(\theta + t)^2 + e^{\theta t + t^2/2}, \qquad \frac{d^2 M_X(t)}{dt^2}\bigg|_{t=0} = \theta^2 + 1, \tag{A.12}
\]
\[
\frac{d^3 M_X(t)}{dt^3} = e^{\theta t + t^2/2}(\theta + t)^3 + 3 e^{\theta t + t^2/2}(\theta + t), \qquad \frac{d^3 M_X(t)}{dt^3}\bigg|_{t=0} = \theta^3 + 3\theta, \tag{A.13}
\]
\[
\frac{d^4 M_X(t)}{dt^4} = e^{\theta t + t^2/2}(\theta + t)^4 + 6 e^{\theta t + t^2/2}(\theta + t)^2 + 3 e^{\theta t + t^2/2}, \qquad \frac{d^4 M_X(t)}{dt^4}\bigg|_{t=0} = \theta^4 + 6\theta^2 + 3, \tag{A.14}
\]
\[
\frac{d^5 M_X(t)}{dt^5} = e^{\theta t + t^2/2}(\theta + t)^5 + 10 e^{\theta t + t^2/2}(\theta + t)^3 + 15 e^{\theta t + t^2/2}(\theta + t), \qquad \frac{d^5 M_X(t)}{dt^5}\bigg|_{t=0} = \theta^5 + 10\theta^3 + 15\theta, \tag{A.15}
\]
\[
\frac{d^6 M_X(t)}{dt^6} = e^{\theta t + t^2/2}(\theta + t)^6 + 15 e^{\theta t + t^2/2}(\theta + t)^4 + 45 e^{\theta t + t^2/2}(\theta + t)^2 + 15 e^{\theta t + t^2/2}, \qquad \frac{d^6 M_X(t)}{dt^6}\bigg|_{t=0} = \theta^6 + 15\theta^4 + 45\theta^2 + 15. \tag{A.16}
\]
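As a quick sanity check, the non-central moments in (A.12)-(A.16) can be compared against Monte Carlo estimates; the sketch below does this for X ~ N(θ, 1) with an arbitrary θ chosen for illustration.

```python
# Numerical check of the closed-form moments (A.12)-(A.16) for X ~ N(theta, 1)
# against Monte Carlo estimates.
import numpy as np

rng = np.random.default_rng(1)
theta = 0.8                                  # arbitrary illustrative value
X = theta + rng.standard_normal(2_000_000)

closed_form = {
    2: theta**2 + 1,
    3: theta**3 + 3 * theta,
    4: theta**4 + 6 * theta**2 + 3,
    5: theta**5 + 10 * theta**3 + 15 * theta,
    6: theta**6 + 15 * theta**4 + 45 * theta**2 + 15,
}
for r, value in closed_form.items():
    print(r, value, np.mean(X ** r))         # the two columns should agree closely
```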

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x\, e^{-(x-\theta)^2/2}\, dx = -\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\bigg|_{-\lambda-\theta}^{\lambda-\theta} + \theta\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} e^{-x^2/2}\, dx = -\varphi(\lambda-\theta) + \varphi(\lambda+\theta) + \theta\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.17}
\]

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^2 e^{-(x-\theta)^2/2}\, dx = -\frac{1}{\sqrt{2\pi}}\, (x+\theta) e^{-x^2/2}\bigg|_{-\lambda-\theta}^{\lambda-\theta} + \frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} e^{-x^2/2}\, dx + \theta\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta) e^{-x^2/2}\, dx
\]
\[
= -(\lambda+\theta)\varphi(\lambda-\theta) - (\lambda-\theta)\varphi(\lambda+\theta) + (\theta^2+1)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.18}
\]

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^3 e^{-(x-\theta)^2/2}\, dx = -\frac{1}{\sqrt{2\pi}}\, (x+\theta)^2 e^{-x^2/2}\bigg|_{-\lambda-\theta}^{\lambda-\theta} + \frac{2}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta) e^{-x^2/2}\, dx + \theta\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^2 e^{-x^2/2}\, dx
\]
\[
= A(\lambda,\theta)\,\varphi(\lambda-\theta) + B(\lambda,\theta)\,\varphi(\lambda+\theta) - \theta(\theta^2+3)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.19}
\]

where

\[
A(\lambda,\theta) = -\lambda^2 - \theta\lambda - \theta^2 - 2, \qquad B(\lambda,\theta) = \lambda^2 - \theta\lambda + \theta^2 + 2,
\]

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^4 e^{-(x-\theta)^2/2}\, dx = -\frac{1}{\sqrt{2\pi}}\, (x+\theta)^3 e^{-x^2/2}\bigg|_{-\lambda-\theta}^{\lambda-\theta} + \frac{3}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^2 e^{-x^2/2}\, dx + \theta\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^3 e^{-x^2/2}\, dx
\]
\[
= C(\lambda,\theta)\,\varphi(\lambda-\theta) + D(\lambda,\theta)\,\varphi(\lambda+\theta) - (\theta^4 + 6\theta^2 + 3)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.20}
\]

where

\[
C(\lambda,\theta) = -\lambda^3 - \lambda^2\theta - \lambda(\theta^2+3) - 5\theta - \theta^3, \qquad D(\lambda,\theta) = -\lambda^3 + \lambda^2\theta - \lambda(\theta^2+3) + 5\theta + \theta^3,
\]

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^5 e^{-(x-\theta)^2/2}\, dx = -\frac{1}{\sqrt{2\pi}}\, (x+\theta)^4 e^{-x^2/2}\bigg|_{-\lambda-\theta}^{\lambda-\theta} + \frac{4}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^3 e^{-x^2/2}\, dx + \theta\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^4 e^{-x^2/2}\, dx
\]
\[
= E(\lambda,\theta)\,\varphi(\lambda-\theta) + F(\lambda,\theta)\,\varphi(\lambda+\theta) + (\theta^5 + 10\theta^3 + 15\theta)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.21}
\]

where

\[
E(\lambda,\theta) = -\lambda^4 - \lambda^3\theta - \lambda^2(\theta^2+4) - \lambda\theta(\theta^2+7) - 9\theta^2 - \theta^4 - 8,
\]
\[
F(\lambda,\theta) = -\lambda^4 - \lambda^3\theta + \lambda^2(\theta^2+4) - \lambda\theta(\theta^2+7) + 9\theta^2 + \theta^4 + 8,
\]

\[
\frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} x^6 e^{-(x-\theta)^2/2}\, dx = -\frac{1}{\sqrt{2\pi}}\, (x+\theta)^5 e^{-x^2/2}\bigg|_{-\lambda-\theta}^{\lambda-\theta} + \frac{5}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^4 e^{-x^2/2}\, dx + \theta\,\frac{1}{\sqrt{2\pi}} \int_{-\lambda-\theta}^{\lambda-\theta} (x+\theta)^5 e^{-x^2/2}\, dx
\]
\[
= G(\lambda,\theta)\,\varphi(\lambda-\theta) + H(\lambda,\theta)\,\varphi(\lambda+\theta) + (\theta^6 + 15\theta^4 + 45\theta^2 + 15)\bigl[\Phi(\lambda-\theta) - \Phi(-\lambda-\theta)\bigr], \tag{A.22}
\]

where

\[
G(\lambda,\theta) = -\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+5) - \lambda^2\theta(\theta^2+9) - \lambda(\theta^4+12\theta^2+15) - 33\theta - 14\theta^3 - \theta^5,
\]
\[
H(\lambda,\theta) = -\lambda^5 - \lambda^4\theta - \lambda^3(\theta^2+5) + \lambda^2\theta(\theta^2+9) - \lambda(\theta^4+12\theta^2+15) + 33\theta + 14\theta^3 + \theta^5.
\]

References

[1] D.L. Donoho, De-noising by soft thresholding, IEEE Trans. Inform. Theory 41 (3) (1995) 613–627.
[2] A.G. Bruce, H.-Y. Gao, Understanding WaveShrink: Variance and bias estimation, Biometrika 83 (4) (1996).
[3] H.-Y. Gao, Wavelet shrinkage denoising using the nonnegative garrote, J. Comput. Graph. Statist. 7 (4) (1998) 469–488.
[4] X.P. Zhang, M.D. Desai, Adaptive denoising based on SURE risk, IEEE Signal Process. Lett. 5 (10) (1998) 265–267.
[5] A.G. Bruce, H.-Y. Gao, WaveShrink with firm shrinkage, Statist. Sin. 7 (1997) 855–874.
[6] S. Poornachandra, N. Kumaravel, Hyper-trim shrinkage for denoising of ECG signal, J. Digital Signal Process., in press.

S. Poornachandra received the B.E. degree from NIE, Mysore University and the Master's degree from MIT, Mangalore University. He is currently pursuing research at Anna University in bio-signal processing. He is working as Assistant Professor, Department of ECE, SSN College of Engineering, Anna University, India. He is a life member of ISTE, MIE, and a Fellow of IETE. He has authored a few textbooks in the field of electronics and signal processing and has presented papers in national and international conferences and journals. His areas of interest are bio-signal processing using the wavelet transform and adaptive filters, and the analysis of various shrinkage methods.

N. Kumaravel received the Ph.D. degree from Anna University in bio-signal processing. He is currently working as Professor, School of ECE, Anna University, Chennai, India. He has published several papers in various national and international journals and conferences. His areas of interest are noise cancellation techniques using genetic algorithms, neural networks, wavelet-based adaptive filtering, and image processing. He is a member of technical bodies including IEEE, IETE, ISTE, and the Biomedical Society of India.