Approximated affine projection algorithm for feedback cancellation in hearing aids


Computer Methods and Programs in Biomedicine 87 (2007) 254–261

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Sangmin Lee a, In-Young Kim b, Young-Cheol Park c,∗

a School of Electrical Engineering, Inha University, Incheon, Republic of Korea
b Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
c Computer and Telecommunications Engineering Division, Yonsei University, 234 Heungup, Maeji, Wonju-city, Kwangwon 220-710, Republic of Korea
∗ Corresponding author.

Article history: Received 4 April 2006; Received in revised form 9 January 2007; Accepted 31 May 2007

Keywords: Feedback cancellation; Gauss–Seidel iteration; Affine projection; Hearing aids

Abstract

We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss–Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

© 2007 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

The LMS adaptive filter has been widely used for feedback cancellation in hearing aids thanks to its simplicity and efficiency [1–3]. However, the convergence performance of the normalized least mean square (NLMS) algorithm is often deteriorated by coloured input signals. To overcome this problem, the affine projection (AP) algorithm, which updates the weight vector based on a number of recent input vectors, can be used [4]. This allows a higher convergence speed than the LMS algorithm, especially for coloured input signals, but it is computationally complex. Many fast versions of the AP algorithm have been suggested to provide significant simplifications [5], but they require a process of matrix inversion, which is not only computationally expensive but also a source of numerical instability.

Recently, an algorithm approximating the process of matrix inversion using Gauss–Seidel (GS) iteration has been suggested [6]. GS iteration has stable convergence behaviour, especially when the input autocorrelation matrix is diagonally dominant. However, the algorithm in Ref. [6] suffers from a convergence problem when it is used with small step sizes. In this paper, we present a new approximated AP algorithm based on GS iteration. This new algorithm shows stable convergence even with small step sizes and is applied here to the problem of feedback cancellation in hearing aids.

A long-standing issue regarding feedback cancellation in hearing aids is the correlation between the input and output signals of the hearing aid, which leads the adaptive feedback cancellation system to create a bias in the estimate of the feedback path [7,8]. This issue is often resolved by applying delays in the forward or the control paths. In [9], it was shown that, if a delay is inserted in the forward path, identification of the feedback path and the desired signal model is possible.


For a wideband input, such delays enable an adaptive system to converge to an accurate estimate of the feedback path. However, for a narrowband input, even with delays, the feedback cancellation system tends to minimize the error signal by cancelling the input instead of modelling the feedback path. Adaptation with a sinusoidal input in general causes a large mismatch between the estimated and actual feedback paths. This mismatch results in signal cancellation, system instability and ringing, or colouration, of the output signal [8]. One approach to maintaining system stability is to use constrained adaptation [10]. However, this does not distinguish between a deviation caused by error in the noise model and one caused by a change in the external feedback path.

In this paper, we propose a method of controlling the learning rate of the adaptive feedback cancellation filter to minimize the system's instability and signal cancellation caused by narrowband inputs. The proposed method is combined with the approximated AP algorithm and varies the step size in relation to the prediction factor of the input signal. It provides fast convergence to changes in the feedback path and can prevent signal cancellation and colouration artefacts for narrowband inputs.

In Section 2, we provide an AP algorithm with orthogonalized input vectors, which is approximated in Section 3. In Section 4, an algorithm for controlling the step size of the adaptive feedback cancellation is presented. We present simulation results in Section 5. Section 6 concludes this paper.

2. Affine projection algorithm with orthogonalized input vectors

The affine projection algorithm updates the weight vector based on the M most recent input vectors. Similar to the well-known NLMS algorithm, a given step size is used to control the rate of convergence and the steady-state excess mean square error. Let w(n) be an estimate of an unknown weight vector at time index n; the affine projection algorithm computes w(n) as

w(n) = w(n-1) + \Delta(n),    (1)

\Delta(n) = \mu\, X(n)\big(X^{T}(n)X(n)\big)^{-1} e(n),    (2)

X(n) = [\,x(n)\;\; x(n-1)\;\; \cdots\;\; x(n-M+1)\,],    (3)

e(n) = d(n) - X^{T}(n)\,w(n-1),    (4)

where x(n) and d(n) denote the (N × 1) reference input vector and the (M × 1) primary input vector, respectively, and μ is the step size. Numerical instability arising during the matrix inversion can be overcome by using (X^T(n)X(n) + δI)^{−1} instead of (X^T(n)X(n))^{−1}, where δ is a small positive constant.

Consider a Gram–Schmidt orthogonalization of the reference input vectors x(n − i), 0 ≤ i ≤ M − 1. The output vectors of the GS orthogonalization, denoted by u_i(n), 0 ≤ i ≤ M − 1, can be written in a compact matrix form [11]

U(n) = X(n)L,    (5)

where U(n) = [u_0(n)  u_1(n)  ···  u_{M−1}(n)] and L is an (M × M) unit lower-triangular matrix. The transformation matrix L in Eq. (5) is obtained by solving M linear prediction problems of orders 0 through M − 1, described as

\min_{a_{M-1-i}} \|x(n-i) - \bar X(n-i-1)\,a_{M-1-i}\|^{2}, \qquad i = 0, 1, \ldots, M-1,    (6)

where the bar indicates that the related matrix contains only its M − 1 − i leftmost vectors, and ‖·‖ denotes the l2 norm. The output vectors of the GS orthogonalization are given as the residues of the prediction error filters

u_i(n) = x(n-i) - \bar X(n-i-1)\,a_{M-1-i}, \qquad i = 0, 1, \ldots, M-1.    (7)

Let D(n) denote the correlation matrix of the output vectors u_i(n). We can write D(n) as

D(n) = U^{T}(n)U(n) = L^{T}R(n)L = \mathrm{diag}\{\|u_0(n)\|^{2}, \|u_1(n)\|^{2}, \ldots, \|u_{M-1}(n)\|^{2}\},    (8)

where R(n) = X^T(n)X(n) denotes the autocorrelation matrix of the input signal x(n). Because of the one-to-one correspondence between x(n − i) and u_i(n), we have

X(n) = U(n)L^{-1}.    (9)

Thus, from Eqs. (2) and (9), we have

\Delta(n) = \mu\, U(n)D^{-1}(n)\,\varepsilon(n) = \mu \sum_{i=0}^{M-1} \frac{\varepsilon_i(n)}{\|u_i(n)\|^{2}}\, u_i(n),    (10)

where ε(n) = [ε_0(n), ε_1(n), ..., ε_{M−1}(n)]^T denotes the (M × 1) transformed error vector given by ε(n) = L^T e(n). On the other hand, the error vector can be written as

e(n) = \big[\, e(n)\;\; (1-\mu)\,\bar e^{T}(n-1) \,\big]^{T},    (11)

where ē(n − 1) is a vector consisting of the uppermost M − 1 elements of e(n − 1). Thus, for μ = 1, the error vector is simplified to e(n) = [e(n)  0̄]^T, where 0̄ is a 1 × (M − 1) zero vector. In this case, L^T e(n) = [e(n)  0̄]^T, and we have

\Delta(n) = \mu\, \frac{e(n)}{\|u_0(n)\|^{2}}\, u_0(n).    (12)

Eq. (12) indicates that the process of matrix inversion can be implemented using an (M − 1)th-order linear prediction filter whose coefficient vector a_{M−1} is determined by solving the least-squares (LS) problem of order M − 1, \min_{a_{M-1}} \|x(n) - \bar X(n-1)\,a_{M-1}\|^{2}. Thus, with a linear predictive pre-processor, the AP update is simplified to an NLMS-like equation. A similar approximation of the AP algorithm was presented in [12].

As shown in Eq. (12), a significant approximation is possible for the case that μ = 1. However, μ in practical cases is arbitrary, as it governs the convergence speed and the excess mean square error in the steady state. For an arbitrary μ, we generally need to solve M systems of M equations.
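To make the baseline update of Eqs. (1)–(4) concrete, the following NumPy sketch performs one regularized AP iteration; the function name, argument layout and the call to np.linalg.solve are illustrative choices rather than the paper's implementation.

```python
import numpy as np

def ap_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine projection update following Eqs. (1)-(4).

    w : (N,)   current weight estimate w(n-1)
    X : (N, M) the M most recent reference input vectors, X(n)
    d : (M,)   the M most recent primary (desired) samples, d(n)
    """
    e = d - X.T @ w                            # Eq. (4): a priori error vector
    R = X.T @ X + delta * np.eye(X.shape[1])   # diagonal loading against ill-conditioning
    delta_w = mu * X @ np.linalg.solve(R, e)   # Eq. (2)
    return w + delta_w, e                      # Eq. (1)
```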


3. Approximated AP algorithm

The AP algorithm shown in Eq. (10) requires a stack of M linear prediction filters of orders 0 through M − 1. Although this approach avoids the process of matrix inversion, it remains computationally expensive. In [6], the Gauss–Seidel (GS) iteration was used to solve the (M − 1)th-order LS linear prediction problem. To simplify the process, a single GS iteration per sample was performed to estimate the solution vector of the LS problem. This is equivalent to solving the system R(n)α_M = b_M, where α_M = α_{M,0}[1, −a^T_{M−1}]^T and b_M is an (M × 1) vector with only one nonzero element, which is unity at the top, i.e., b_M = [1, 0, 0, ..., 0]^T. The solution vector of this problem is computed as α^0_M/α^0_{M,0} = R^{−1}(n)b_M/(b^T_M R^{−1}(n)b_M), or it is iteratively estimated using the GS iteration. The GS iteration at time instance n is summarized as [6]

\hat\alpha_{M,i}(n) = \frac{1}{r_{ii}(n)} \left( b_{M,i} - \sum_{j=0}^{i-1} r_{ij}(n)\,\hat\alpha_{M,j}(n) - \sum_{j=i+1}^{M-1} r_{ij}(n)\,\hat\alpha_{M,j}(n-1) \right), \qquad 0 \le i \le M-1,    (13)

where r_{ij}(n) = x^T(n − i)x(n − j) denotes the (i, j)th element of R(n).
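A minimal sketch of the single Gauss–Seidel sweep of Eq. (13), with the solution vector updated in place so that entries with j < i use the current sweep and the remaining entries keep their values from the previous sample; the helper name gs_sweep and the dense R argument are assumptions.

```python
import numpy as np

def gs_sweep(alpha, R, b):
    """One Gauss-Seidel sweep per sample, Eq. (13).

    alpha : (M,)   estimate carried over from the previous sample (updated in place)
    R     : (M, M) current input autocorrelation matrix R(n)
    b     : (M,)   right-hand side b_M = [1, 0, ..., 0]^T
    """
    M = len(alpha)
    for i in range(M):
        s = R[i, :i] @ alpha[:i]             # entries already updated at time n
        s += R[i, i + 1:] @ alpha[i + 1:]    # entries still holding time n-1 values
        alpha[i] = (b[i] - s) / R[i, i]
    return alpha
```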

After the GS iteration at time instance n, the LP residual is computed. Using the residual vector, the AP algorithm is approximated as in [6]

\hat{\mathbf w}(n) = \hat{\mathbf w}(n-1) + \mu\, \frac{e(n)}{\|\hat{\mathbf u}(n)\|^{2} + \delta}\, \hat{\mathbf u}(n),    (14)

\hat{\mathbf u}(n) = \frac{1}{\hat\alpha_{M,0}(n)}\, X(n)\,\hat{\boldsymbol\alpha}_{M}(n).    (15)
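Using the gs_sweep helper above, the conventional GS-based update of Eqs. (14) and (15) can be sketched as follows; the step ordering and the names are assumptions, not the reference implementation of [6].

```python
import numpy as np

def conventional_aap_step(w, X, e_n, alpha, R, b, mu=0.5, delta=1e-6):
    """Conventional approximated AP step based on Eqs. (13)-(15).

    w     : (N,)   weight vector,  X : (N, M) recent input vectors
    e_n   : scalar a priori error e(n)
    alpha : (M,)   Gauss-Seidel estimate carried over from the previous sample
    R, b  : (M, M) autocorrelation matrix and right-hand side [1, 0, ..., 0]
    """
    alpha = gs_sweep(alpha, R, b)                         # Eq. (13): one sweep per sample
    u_hat = (X @ alpha) / alpha[0]                        # Eq. (15): residue of the input
    w = w + mu * e_n * u_hat / (u_hat @ u_hat + delta)    # Eq. (14): NLMS-like update
    return w, alpha
```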

This approximates the AP algorithm closely with much smaller computational complexity. It updates the weight vector based on the assumption that μ = 1. Thus, when μ is close to 1, the approximated AP algorithm in Eqs. (14) and (15) works quite well. However, in practice the weight update is performed with an arbitrary μ. Moreover, μ should be small to control the steady-state performance of the adaptive filter. When a small μ is used, the algorithm suffers from a convergence problem and can even become unstable, as will be shown later.

In this paper, we propose a new approximated AP algorithm that performs a single GS iteration per sample but has a stable convergence property even with small step sizes. The proposed algorithm is summarized as

\hat{\mathbf w}(n) = \hat{\mathbf w}(n-1) + \mu\, \frac{\varepsilon(n)}{\|\hat{\mathbf u}(n)\|^{2} + \delta}\, \hat{\mathbf u}(n), \qquad \hat{\mathbf u}(n) = \big[\, \hat u(n)\;\; \bar{\hat{\mathbf u}}^{T}(n-1) \,\big]^{T},    (16)

\hat u(n) = \frac{\hat{\boldsymbol\alpha}_{M}^{T}(n)\,\boldsymbol\varphi(n)}{\hat\alpha_{M,0}(n)}, \qquad \varepsilon(n) = \frac{\hat{\boldsymbol\alpha}_{M}^{T}(n)\,\hat{\mathbf e}(n)}{\hat\alpha_{M,0}(n)},    (17)

\hat{\mathbf e}(n) = \big[\, e(n)\;\; (1-\mu)^{\gamma}\,\bar{\hat{\mathbf e}}^{T}(n-1) \,\big]^{T},    (18)

where φ(n) = [x(n), x(n − 1), ..., x(n − M + 1)]^T, and ε(n) and û(n) are the prediction residues of the weighted error vector ê(n) and of the reference input, respectively.
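For comparison with the conventional step, here is a sketch of one iteration of the proposed update, Eqs. (16)–(18); the (1 − μ)^γ weighting of previous errors and the M-sample snapshot φ(n) follow the reconstruction above and, like the variable names, are assumptions.

```python
import numpy as np

def proposed_aap_step(w, phi, e_n, e_vec, u_vec, p_uu, alpha,
                      mu=0.2, gamma=2, delta=1e-6):
    """One step of the proposed approximated AP algorithm, Eqs. (16)-(18).

    w     : (N,) weight vector
    phi   : (M,) most recent input samples [x(n), ..., x(n-M+1)]
    e_n   : scalar a priori error e(n)
    e_vec : (M,) weighted error vector from the previous sample
    u_vec : (N,) residue input vector from the previous sample
    p_uu  : scalar running power of u_vec
    alpha : (M,) Gauss-Seidel solution estimate for the current sample
    """
    a = alpha / alpha[0]                                     # alpha'_M(n)
    # Eq. (18): weighted error vector (previous-error weighting assumed (1-mu)**gamma)
    e_vec = np.concatenate(([e_n], (1.0 - mu) ** gamma * e_vec[:-1]))
    # Eq. (17): prediction residues of the input and of the weighted error vector
    u_n = a @ phi
    eps = a @ e_vec
    # Eq. (16): shift the new residue sample into the update vector, track its power
    dropped = u_vec[-1]
    u_vec = np.concatenate(([u_n], u_vec[:-1]))
    p_uu = p_uu + u_n ** 2 - dropped ** 2
    w = w + mu * eps * u_vec / (p_uu + delta)
    return w, e_vec, u_vec, p_uu
```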

The residue input vector used in the weight update is obtained by appending the new residue sample û(n) to the beginning of the previous residue vector, instead of computing Eq. (15), which allows us to save computations. If μ = 1, the algorithm reduces to the GS iteration-based approximated AP algorithm in Eq. (14). If γ = 1 and μ = 1, ê(n) is identical to e(n) in Eq. (11), and the algorithm approximates Eq. (10) using only an (M − 1)th-order linear prediction filter implemented by the GS iteration. As γ increases, less weight is placed on the previous errors. Thus, γ is a factor that compromises between the cases μ = 1 and γ = 1 in the approximation of the AP algorithm by an (M − 1)th-order prediction filter.

The new approximated AP algorithm described by Eqs. (16)–(18) was tested and compared with the conventional algorithm of Eqs. (14) and (15) in a simple system identification problem. A 64-tap FIR system with the impulse response shown in Fig. 1(a) was identified using the algorithms. The input signal was a speech-shaped noise at an SNR of 30 dB, and the projection order (M) was 4. We measured the misalignment for the performance comparison, and the results are presented in Fig. 2. The results show that the conventional algorithm has a convergence problem with small step sizes, whereas the new algorithm shows stable convergence behaviour. The algorithm works well with parameter values γ ≥ 2. However, it was observed that a higher γ was associated with a slower convergence speed; thus, γ = 2 was the most suitable in this example.

The algorithms were also compared at different SNRs, and the results are shown in Fig. 3. The conventional algorithm starts to converge at around 25 dB SNR, but still shows much slower convergence than the proposed algorithm except for the case of SNR = 10 dB. As the SNR decreases, the autocorrelation matrix becomes more diagonally dominant and, thus, the GS iteration converges faster.
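The misalignment quoted in these comparisons is not defined explicitly in the text; the snippet below assumes the usual normalized definition in dB.

```python
import numpy as np

def misalignment_db(h_true, w_hat):
    """Normalized misalignment in dB (standard definition assumed):
    10*log10(||h - w||^2 / ||h||^2)."""
    # Zero-pad the shorter vector so systems of different lengths can be compared.
    L = max(len(h_true), len(w_hat))
    h = np.pad(h_true, (0, L - len(h_true)))
    w = np.pad(w_hat, (0, L - len(w_hat)))
    return 10.0 * np.log10(np.sum((h - w) ** 2) / np.sum(h ** 2))
```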

Fig. 1 – Impulse responses of (a) first and (b) second feedback paths.


Fig. 2 – Results of system identification with different step sizes: (a) conventional algorithm and the new algorithm for (b) γ = 1, (c) γ = 2, (d) γ = 3.

On the other hand, the prediction residues will increase as the SNR decreases. At an extremely low SNR, we will have ε(n) ≈ e(n). In this extreme case, the two algorithms will show the same convergence speed.

Table 1 compares the computational steps of the FAP [5] and approximated AP algorithms. In the table, vectors with bars indicate that their M − 1 upper elements are taken, and R̄(n − 1) is a matrix containing the (M − 1) × (M − 1) upper-left elements of R(n − 1). r̃(n) consists of the M − 1 lower elements of r(n), φ(n) = [x(n), ..., x(n − M + 1)]^T, and α̂′_M(n) = α̂_M(n)/α̂_{M,0}(n).

The FAP algorithm can be implemented with a complexity of 2N + f(M) multiply-accumulate (MAC) operations per sample. The term 2N is for steps (a2) and (a6), while the term f(M) is for the other steps, including the matrix inversion in step (a4), which is the most computationally demanding. With a direct matrix inversion, we have f(M) = O(M³), which can be reduced to 20M by using the fast recursive least-squares (RLS) technique. However, the fast RLS approach has a problem of numerical instability [5]. The GS iteration in Eq. (13) requires M² MACs plus M divisions per sample. The M divisions in the GS iteration can be reduced to a single division by exploiting the regular structure of R(n) [6], but two more divisions are needed for α̂′_M(n) in step (b5) and for the normalization process in step (b6). Thus, the total complexity of the approximated AP algorithm becomes 2N + M² + 5M + 1 MACs plus 3 divisions per sample. Since N ≫ M in practical applications, the computational difference between the two algorithms is marginal. However, the proposed algorithm is a stable version of the original FAP algorithm, because the GS iteration guarantees convergence as long as the matrix R(n) is diagonally dominant, and one GS iteration per sample is enough for optimal performance [6].

Fig. 3 – Results of system identification at different SNRs: (a) conventional algorithm and (b) proposed algorithm with μ = 0.2, M = 4 and γ = 2.


Table 1 – Complexity comparison between the FAP [5] and proposed algorithms

FAP [5]
  For n < 0:  w_a(n) = 0_{N×1},  E(n) = 0_{M×1},  R(n) = X^T(n)X(n) = δI_{M×M},  r(n) = 0_{M×1}
  For n ≥ 0 (step — MACs per sample):
  (a1)  r(n) = r(n−1) + x(n)φ(n) − x(n−N)φ(n−N);  update R(n) from r(n), r̃(n) and R̄(n−1) — 2M
  (a2)  e(n) = y(n) − x^T(n)w_a(n−1) − r̃^T(n)Ē(n−1) — N + (M−1)
  (a3)  e(n) = [e(n), (1−μ)ē^T(n−1)]^T — M − 1
  (a4)  ε(n) = (R(n) + δI_{M×M})^{−1} e(n) — 20M [5]
  (a5)  E(n) = [0, Ē^T(n−1)]^T + με(n) — M
  (a6)  w_a(n) = w_a(n−1) + x(n−M+1)E_{M−1}(n) — N
  Total: 2N + 20M + 5M − 2 MACs

Proposed
  For n < 0:  ŵ(n) = 0_{N×1},  R(n) = δI_{M×M},  r(n) = 0_{M×1}
  For n ≥ 0 (step — MACs per sample, divisions in parentheses(a)):
  (b1)  r(n) = r(n−1) + x(n)φ(n) − x(n−N)φ(n−N);  update R(n) from r(n), r̃(n) and R̄(n−1) — 2M
  (b2)  e(n) = y(n) − x^T(n)ŵ(n−1) — N
  (b3)  ê(n) = [e(n), (1−μ)^γ ē̂^T(n−1)]^T — M − 1
  (b4)  compute α̂_M(n) by the GS iteration — M² (1)
  (b5)  û(n) = α̂′^T_M(n)φ(n),  ε(n) = α̂′^T_M(n)ê(n) — 2M + 2 (1)
  (b6)  shift û(n) into the residue vector [û(n), ..., û(n−N+1)]^T;  p_uu(n) = p_uu(n−1) + |û(n)|² − |û(n−N)|²;  ŵ(n) = ŵ(n−1) + μ ε(n)[û(n), ..., û(n−N+1)]^T/(p_uu(n) + δ) — N (1)
  Total: 2N + M² + 5M + 1 MACs (3)

  (a) Numbers in parentheses indicate the number of divisions per sample.

In addition, unlike the FAP algorithm, in which an auxiliary weight vector w_a(n) is estimated, the proposed algorithm provides the weight vector w(n) itself.
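As a quick arithmetic check of the totals in Table 1, the per-sample MAC counts for both algorithms can be evaluated directly; the example values N = 30 and M = 4 are those used later in the simulations.

```python
def mac_counts(N, M):
    """Per-sample multiply-accumulate counts quoted in Table 1."""
    fap = 2 * N + 20 * M + 5 * M - 2        # FAP with the fast RLS inversion
    proposed = 2 * N + M ** 2 + 5 * M + 1   # proposed algorithm (plus 3 divisions)
    return fap, proposed

# Example: N = 30 adaptive-filter taps, projection order M = 4.
print(mac_counts(30, 4))   # -> (158, 97)
```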

4. Learning rate control of the approximated AP algorithm for feedback cancellation in hearing aids

In the adaptive feedback cancellation system, an increased step size gives faster adaptation, and a reduced step size can improve the sound quality of the system at the cost of slower adaptation. When the spectrum of the input signal varies with time, the desired adaptation speed is a compromise between the rapid adaptation required to track changes in the feedback path and the slow adaptation required to avoid bias caused by temporary pure tones [10]. In this study, we propose a learning rate control method that is conveniently combinable with the approximated AP algorithm. For an input vector x(n), we define the prediction factor as the normalized power of the prediction error, computed as

\nu = \frac{\|u_0(n)\|^{2}}{\|x(n)\|^{2}}.    (19)

Given the optimum coefficient vector of the (M − 1)th-order linear predictor, a_{M−1}, the prediction factor can be rewritten as ν = 1 − ||X̄(n − 1)a_{M−1}||²/||x(n)||². Note that ν can never be negative and the ratio ||X̄(n − 1)a_{M−1}||²/||x(n)||² is always positive; therefore 0 ≤ ν ≤ 1, i.e., the prediction factor always lies between zero and unity. Ideally, the prediction factor is unity for a white noise input and zero for a tone input, provided that M ≥ 2. Adaptation with white noise is the best condition for the feedback canceller to provide a more accurate estimate of the feedback path. On the other hand, adaptation with tone signals can cause a large mismatch between the estimated and actual feedback paths. Thus, adaptation should be inhibited when the input contains tone-like signals. In this paper, we propose a new learning rate control method for the approximated AP algorithm, summarized as

\beta(n) = \eta\,\beta(n-1) + (1-\eta)\,\min\{T_{max},\, \hat\nu(n)\},    (20a)

\hat\beta(n) = \begin{cases} s_{tone}\,\beta(n), & \beta(n) < T_{tone} \\ \beta(n), & \text{otherwise} \end{cases}    (20b)

\mu(n) = \hat\beta(n)\,\mu_{max},    (20c)

where μ_max controls the maximum magnitude of the step size, η is a smoothing constant, and ν̂(n) denotes the instantaneous estimate of the prediction factor, computed as ν̂(n) = ||û(n)||²/||x(n)||². In the algorithm, ν̂(n) is limited by T_max to disregard overestimated values, and it is averaged using a simple one-pole IIR filter to obtain β(n).
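A sketch of the control law in Eqs. (19) and (20); the symbol η, its placement in Eq. (20a) and the default threshold values are carried over from the reconstruction above and the values quoted in the simulation section, so they should be read as assumptions of this sketch.

```python
import numpy as np

def control_step_size(x_vec, u0, beta, mu_max=0.01, eta=0.001,
                      T_max=0.25, T_tone=0.05, s_tone=0.25):
    """Learning-rate control following Eqs. (19)-(20).

    x_vec : (N,) current reference input vector x(n)
    u0    : (N,) prediction-error (residue) vector for the same input
    beta  : smoothed prediction factor beta(n-1) from the previous sample
    """
    # Eq. (19): prediction factor, the normalized prediction-error power (0 <= nu <= 1)
    nu_hat = float(u0 @ u0) / (float(x_vec @ x_vec) + 1e-12)
    # Eq. (20a): limit overestimates by T_max and average with a one-pole IIR recursion
    beta = eta * beta + (1.0 - eta) * min(T_max, nu_hat)
    # Eq. (20b): extra attenuation when the smoothed factor indicates a tone-like input
    beta_mod = s_tone * beta if beta < T_tone else beta
    # Eq. (20c): instantaneous step size, bounded by mu_max
    mu_n = beta_mod * mu_max
    return mu_n, beta
```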

Because the prediction factor always lies between zero and unity, β(n) will be close to T_max for wideband inputs, and it will become small for narrowband inputs. Therefore, the algorithm will use large step sizes for wideband sections and small step sizes for narrowband sections of the input, which will provide a more accurate estimate of the feedback path than a fixed step size. When tone signals, such as dual-tone multi-frequency (DTMF) signals, or tone-like signals are added to the input, the prediction factor will decrease abruptly. The step size then decreases too, and adaptation will be inhibited. Thus, the system can avoid adapting on tone-like inputs.

To further inhibit adaptation for tone-like inputs, β(n) is compared with a threshold T_tone. The input is judged to be a tone-like signal when the smoothed prediction factor β(n) goes below the threshold T_tone, which is normally very low. For tone-like inputs, the smoothed prediction factor is multiplied by a small constant s_tone (<1).


Fig. 5 – Misalignments for a speech-shaped noise.

Fig. 4 – Block diagram of the proposed adaptive feedback cancellation system based on the approximated AP algorithm.

Fig. 4 shows a block diagram of the proposed feedback cancellation system for hearing aids. In the diagram, the GS predictor block indicates an Mth-order predictor whose coefficients are iteratively updated using GS iteration. In a howling situation the feedback canceller must use a large step size, but the prediction factor will rapidly decrease because howling is normally associated with strong tone signals at the output. However, as the power of howling tones is abnormally high, those tones can be discriminated easily from tones embedded in the input by monitoring the output power. When the feedback path is rapidly changing, the hearing aid output will rapidly increase due to a mismatch between the current estimate and the new feedback path. The mismatch will emphasize the error at frequencies with small gain margin relative to other frequencies [7]. Consequently, the output spectrum will vary dramatically. The spectral variation of the input will produce large prediction errors, i.e., large prediction factors. Therefore, the feedback cancellation filter can quickly adapt to the new feedback path with large step sizes.
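To show how the blocks in Fig. 4 connect, the skeleton below simulates the closed loop with a gain-and-delay forward path and an adaptive canceller driven by the receiver signal; the loop structure follows the setup described here and in Section 5, while all names and the omitted weight update are illustrative.

```python
import numpy as np

def simulate_feedback_loop(x_in, h_fb, n_taps=30, gain=10.0, fwd_delay=80):
    """Closed-loop hearing-aid feedback cancellation skeleton (cf. Fig. 4).

    x_in      : external input signal at the microphone
    h_fb      : true feedback-path impulse response
    n_taps    : length N of the adaptive canceller
    gain      : forward-path gain G (20 dB -> G = 10)
    fwd_delay : decorrelation delay in the forward path (samples)
    """
    w_hat = np.zeros(n_taps)
    y = np.zeros(len(x_in))                  # receiver (loudspeaker) signal
    e = np.zeros(len(x_in))                  # forward-path input after cancellation
    for n in range(len(x_in)):
        # true feedback component picked up by the microphone
        fb = sum(h_fb[k] * y[n - 1 - k] for k in range(len(h_fb)) if n - 1 - k >= 0)
        mic = x_in[n] + fb
        # adaptive estimate of the feedback, driven by past receiver samples
        y_buf = np.array([y[n - 1 - k] if n - 1 - k >= 0 else 0.0 for k in range(n_taps)])
        e[n] = mic - w_hat @ y_buf
        # forward path: delay and amplify the cancelled signal
        y[n] = gain * e[n - fwd_delay] if n >= fwd_delay else 0.0
        # placeholder: update w_hat here with the proposed approximated AP step
        # and the step size from the learning-rate control (omitted for brevity)
    return e, y
```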

5. Computer simulations

For the simulations, a 20 dB hearing aid gain (G = 10) was assumed, and an 80-sample decorrelation delay was inserted in the forward path together with a probe noise, which provides conditions for identifying the feedback path with insignificant bias [7]. The feedback path was modelled using a 128-tap FIR filter at a sampling rate of 16 kHz. The feedback path models used in the simulations are shown in Fig. 1. Adaptive filters with N = 30 and γ = 2 were used for the simulations presented in this section.

First, the proposed algorithm was tested using a speech-shaped noise with additive white noise at 30 dB SNR. For the simulations, we added DTMF tones corresponding to '3' and '5' at the time indices of 5 and 11 s, respectively. The level of the DTMF tones was 18 dB below that of the input signal. The feedback path was changed at the 8 s time index: initially, the first impulse response in Fig. 1(a) was used, and it was then switched to the second one in Fig. 1(b). The second impulse response was obtained by multiplying the first one by 1.75 and modulating it by 8 Hz.

We measured the misalignments of the proposed algorithm and compared them with those of the AP and NLMS algorithms. The affine projection order (M) was 4. Parameter values for the learning rate control were μ_max = 0.01 and η = 0.001. Step sizes for the AP and NLMS algorithms were 0.002 and 0.02, respectively. The parameter T_max, together with μ_max, determines the maximum learning rate, and the parameter s_tone sets the minimum learning rate when the input is judged to be a tone-like signal. We used T_max = 0.25 and s_tone = 0.25 in the simulations. The threshold parameter T_tone is used to detect tone-like signals in the input of the linear predictor that generates the prediction residue û(n); thus, T_tone should be determined by considering the statistical behaviour of the linear predictor. For this purpose, we first measured the prediction factors of a 3rd-order linear predictor (M = 4) using tone-like speech segments and DTMF tones at 30 dB SNR; T_tone was then set to 0.05. During the tests, it was noted that T_tone should be changed to achieve the best performance at different SNRs.

Fig. 5 shows the misalignments obtained by averaging over 20 independent trials. It is seen that the approximated AP (AAP) algorithm closely follows the convergence speed of the AP algorithm and always achieves lower misalignments than the NLMS algorithm. More importantly, when the learning rate control method was combined, the proposed algorithm was not affected by the temporary DTMF tones in the input.

Fig. 6 – (a) Input speech signal, (b) misalignments (dash-dotted line: NLMS algorithm, solid line: AP algorithm, dotted line: approximated AP algorithm, dashed line: approximated AP with the learning rate control algorithm), and (c) β(n) estimated by the proposed learning rate control method.


Another simulation was performed using a real speech signal, in which the first two sentences were recorded from a female speaker and the next two from a male speaker. White noise at 30 dB SNR was added to the signal, and DTMF tones corresponding to '3', '5', and '7' were added at the time indices of 9, 10, and 12 s, respectively. The DTMF level was set 18 dB below that of the speech input. The feedback path was changed at 12 and 20 s: at 12 s, the first impulse response in Fig. 1 was switched to the second one, and it was switched back to the first one at the 20 s point. In the simulations we used M = 4, μ_max = 0.005 and η = 0.005. Step sizes for the AP and NLMS algorithms were 0.002 and 0.005, respectively. The parameters T_max, T_tone and s_tone were set to the same values as in Fig. 5.

Fig. 6 shows the misalignments. The results in Fig. 6 indicate that the proposed algorithm (indicated by AAP) combined with the learning rate control method successfully inhibits adaptation during the period of DTMF tones. At the 9 s time index, the proposed algorithm misses the initial part of the DTMF tone and, thus, the misalignment momentarily degrades. However, soon after, the algorithm inhibits adaptation and maintains the previous level of misalignment throughout the DTMF period. The learning rate of the proposed algorithm observed after the feedback path change is also very close to that of the AP algorithm. The overall sound quality provided by the proposed algorithm was clean and stable; no ringing or colouration artefacts were heard except for a short transient period during adaptation to the new feedback path. Figs. 7 and 8 show the hearing aid output signals and their spectrograms, respectively. It was observed that, unlike the AP and NLMS algorithms, the proposed algorithm produced no audible artefacts during the DTMF periods and/or narrowband sections of the input speech.

Fig. 7 – (a) Input speech signal, (b) error signal of the NLMS algorithm, (c) AP algorithm, and (d) approximated AP algorithm with learning rate control.

Fig. 8 – Spectrograms of (a) input speech signal, (b) error signal of the NLMS algorithm, (c) AP algorithm, and (d) approximated AP algorithm with learning rate control.

6. Conclusions

In this paper, we propose a new approximated AP algorithm and a learning rate control method for feedback cancellation in hearing aids. The proposed algorithm showed stable convergence behaviour even with small step sizes. In addition, by controlling the learning rate in relation to the prediction factor of the input, system instability and colouration artefacts caused by narrowband inputs could be prevented. Simulation results verified the efficiency of the proposed algorithm.

Acknowledgements

This study was supported by a grant of the Korea Health 21 R&D Project, Ministry of Health & Welfare, Republic of Korea (02-PJ3-PG6-EV10-0001).

References

[1] J.M. Kates, Feedback cancellation in hearing aids: results from a computer simulation, IEEE Trans. Signal Process. 39 (9) (1991) 553–562.
[2] J.A. Maxwell, P.M. Zurek, Reducing acoustic feedback in hearing aids, IEEE Trans. Speech Audio Process. 3 (4) (1995) 304–313.
[3] J. Benesty, Y. Huang (Eds.), Adaptive Signal Processing: Applications to Real-World Problems, Springer-Verlag, Berlin Heidelberg, 2003.
[4] K. Ozeki, T. Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties, Electron. Commun. Jpn. A 67 (5) (1984) 19–27.
[5] S.L. Gay, S. Tavathia, The fast affine projection algorithm, in: Proc. IEEE ICASSP, 1995, pp. 3023–3026.
[6] F. Albu, H.K. Kwan, Combined echo and noise cancellation based on Gauss–Seidel pseudo affine projection algorithm, in: Proceedings of the IEEE ISCAS 2004, Vancouver, Canada, pp. 505–508.
[7] M.G. Siqueira, A. Alwan, Steady-state analysis of continuous adaptation in acoustic feedback reduction systems for hearing-aids, IEEE Trans. Speech Audio Process. 8 (4) (2000) 443–453.
[8] J. Hellgren, Analysis of feedback cancellation in hearing aids with filtered-X LMS and the direct method of closed-loop identification, IEEE Trans. Speech Audio Process. 10 (2) (2002) 119–131.
[9] A. Spriet, I. Proudler, M. Moonen, J. Wouters, Adaptive feedback cancellation in hearing aids with linear prediction of the desired signal, IEEE Trans. Signal Process. 53 (10) (2005) 3749–3763.
[10] J.M. Kates, Constrained adaptation for feedback cancellation in hearing aids, J. Acoust. Soc. Am. 106 (2) (1999) 1010–1019.
[11] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice Hall, Upper Saddle River, NJ, 2002.
[12] M. Rupp, A family of adaptive filter algorithms with decorrelating properties, IEEE Trans. Signal Process. 46 (3) (1998) 771–775.