Signal Processing 152 (2018) 141–147
Short communication
Variable step-size sign subband adaptive filter with subband filter selection

Jaegeol Cho a, Hyun Jae Baek a, Bum Yong Park b, JaeWook Shin a,∗

a Department of Medical and Mechatronics Engineering, Soonchunhyang University, Asan, Chungnam 31538, Republic of Korea
b Department of Electronic Engineering, Kumoh National Institute of Technology, 61 Daehak-ro (Yangho-dong), Gumi, Gyeongbuk 39177, Republic of Korea
∗ Corresponding author.
Article info
Article history: Received 26 December 2017; Revised 17 May 2018; Accepted 28 May 2018; Available online 29 May 2018.
Keywords: Adaptive filter; Sign subband adaptive filter; Subband filter selection; Variable step size; Impulsive noise.
https://doi.org/10.1016/j.sigpro.2018.05.027
Abstract: This letter proposes a novel sign subband adaptive filtering (SSAF) algorithm with a subset selection for subband filters, called the SS-SSAF. The proposed algorithm achieves fast convergence and reduces the computational complexity through a proposed sufficient condition. The condition associated with each subband ensures an immediate decrease of the mean square deviation (MSD) at every iteration. Furthermore, we suggest a variable step-size algorithm for the SS-SSAF to achieve both a fast convergence speed and small steady-state errors. Simulation results show that the proposed algorithm with a fixed step size outperforms the conventional SSAF and other improved SSAF algorithms in terms of the convergence rate. In addition, the performance of the proposed variable step-size algorithm is demonstrated in system identification in comparison with recent variable step-size SSAFs.
1. Introduction

Adaptive filtering algorithms have been utilized in a wide variety of applications, such as system identification, acoustic echo cancellation, active noise control, channel equalization, and noise cancellation, because of their ability to deal with unknown and changing environments. The least-mean-squares (LMS) algorithm and its normalized version (NLMS) are the most widely used among the various adaptive algorithms because of their low computational complexity and robustness [1,2]. However, they suffer from a poor convergence rate when the input signals are highly correlated, i.e., for colored input signals. To address this drawback, the normalized subband adaptive filter (NSAF) was developed [3]. The NSAF improves the convergence rate for colored input signals through a "pre-whitening" procedure. Furthermore, to reduce the computational complexity of the NSAF, a simplified selective partial-update subband adaptive filter algorithm, a dynamic-selection NSAF algorithm, and a flexible-complexity variable step-size NSAF have been proposed [4–6]. Unfortunately, because of the nature of the L1... rather, because of the nature of the L2-norm optimization, NSAF-type algorithms suffer from performance degradation in impulsive noise environments. Recently, to overcome this drawback, several algorithms based on L1-norm optimization have been proposed.
Representative algorithms include the affine projection sign algorithm (APSA) and the sign subband adaptive filter (SSAF) algorithm [7–11]. The APSA achieves not only improved performance in impulsive noise environments but also a fast convergence rate with colored input signals; however, its computational complexity grows as the projection order increases. The SSAF algorithm performs well in impulsive noise environments with lower computational complexity than the APSA, but it converges slowly. To improve the performance of the SSAF, Jeong et al. proposed a novel SSAF algorithm (IS-SSAF) that selects subbands to obtain a fast convergence rate and to save computational cost [12]. In addition, Yu and Zhao proposed an individual-weighting-factor (IWF) SSAF algorithm that uses an IWF for each subband instead of the common weighting factor of the conventional SSAF [21]. Although the IS-SSAF and IWF-SSAF have fast convergence rates, the IS-SSAF cannot improve the performance for a large step size, and the IWF-SSAF has higher computational complexity than the conventional SSAF.

This letter proposes a novel SSAF algorithm that improves the convergence rate in impulsive noise environments and saves computational cost. The proposed algorithm selects a subset of subband filters through a criterion that guarantees an immediate decrease of the MSD at each iteration. Moreover, a variable step-size algorithm for the proposed SSAF is also presented to achieve both a fast convergence rate and small steady-state errors. The simulation results show that the proposed SSAF algorithm improves the performance as compared with the algorithms available in the literature.
Fig. 1. Structure of the SSAF.
2. Sign subband adaptive filter (SSAF)
Consider a desired signal d(n) that is derived from an unknown linear system
\[
d(n) = \mathbf{u}^T(n)\mathbf{w} + v(n), \tag{1}
\]
where \(\mathbf{w}\) is an unknown m-dimensional vector to be identified with an adaptive filter, \(v(n)\) represents the measurement noise, assumed to have zero mean and variance \(\sigma_v^2\), and \(\mathbf{u}(n)\) denotes an m-dimensional input vector. Fig. 1 shows the structure of a typical SSAF, which is the same as that of the NSAF, where N is the number of subbands. The desired system output signal of the ith subband is \(d_i(n)\), and the filter output signal of the ith subband is \(y_i(n)\). Both signals are divided into N subbands by the analysis filters \(H_0(z), \ldots, H_{N-1}(z)\). Then, \(d_i(n)\) and \(y_i(n)\), for \(i \in [0, N-1]\), are critically decimated to a lower sampling rate, one that matches their reduced bandwidth. In this letter, n is used to index the original sequences and k is used to index the decimated sequences. The filter output signal of the ith subband is defined as \(y_{i,D}(k) = \hat{\mathbf{w}}^T(k)\mathbf{u}_i(k)\), where \(\mathbf{u}_i(k) = [u_i(kN)\; u_i(kN-1)\; \ldots\; u_i(kN-m+1)]^T\). The output error of the ith subband is defined as \(e_{i,D}(k) = d_{i,D}(k) - y_{i,D}(k)\), where \(d_{i,D}(k) = d_i(kN)\). Then, we define the subband input matrix, desired output signal vector, output error vector, and a posteriori output error vector as follows:
\[
\mathbf{U}(k) = [\mathbf{u}_0(k)\;\mathbf{u}_1(k)\;\ldots\;\mathbf{u}_{N-1}(k)], \tag{2}
\]
\[
\mathbf{d}_D(k) \triangleq [d_{0,D}(k)\;\ldots\;d_{N-1,D}(k)]^T = \mathbf{U}^T(k)\mathbf{w} + \mathbf{v}_D(k), \tag{3}
\]
\[
\mathbf{e}_D(k) = \mathbf{d}_D(k) - \mathbf{U}^T(k)\hat{\mathbf{w}}(k), \tag{4}
\]
\[
\mathbf{e}_p(k) = \mathbf{d}_D(k) - \mathbf{U}^T(k)\hat{\mathbf{w}}(k+1), \tag{5}
\]
where \(\mathbf{v}_D(k) = [v_{0,D}(k)\;\ldots\;v_{N-1,D}(k)]^T\), and \(v_{i,D}(k)\) denotes the measurement noise with zero mean and variance \(\sigma_{v_{i,D}}^2\). The conventional SSAF is derived by minimizing the L1-norm of \(\mathbf{e}_p(k)\) with a constraint on the weight coefficient vectors [8]. Then, the update equation of the SSAF can be expressed as
\[
\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu\,\frac{\mathbf{U}(k)\,\mathrm{sgn}(\mathbf{e}_D(k))}{\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}}, \tag{6}
\]
where \(\mathrm{sgn}(\cdot)\) denotes the sign function, \(\mathrm{sgn}(\mathbf{e}_D(k)) \triangleq [\mathrm{sgn}(e_{0,D}(k))\;\ldots\;\mathrm{sgn}(e_{N-1,D}(k))]^T\), and \(\mu\) is the step size.
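To make the notation concrete, the sketch below runs the SSAF update (6) in NumPy inside a toy system-identification loop. It is only an illustration under simplifying assumptions: random FIR filters stand in for the cosine-modulated analysis bank used later in Section 4, the sizes are deliberately small, and all names (ssaf_update, u_sub, and so on) are hypothetical rather than taken from the paper.

```python
import numpy as np

def ssaf_update(w_hat, U, e_D, mu, eps=1e-8):
    """SSAF coefficient update, Eq. (6).
    w_hat: (m,) current estimate; U: (m, N) subband regressors u_i(k) as columns;
    e_D: (N,) decimated subband errors e_{i,D}(k)."""
    norm = np.sqrt(np.sum(U * U) + eps)            # sqrt(sum_i u_i^T(k) u_i(k))
    return w_hat + mu * (U @ np.sign(e_D)) / norm

# Toy driver (hypothetical sizes; random FIR filters stand in for the analysis bank).
rng = np.random.default_rng(0)
m, N, L = 16, 4, 8
w = rng.standard_normal(m)                         # unknown system
H = rng.standard_normal((N, L))                    # stand-in analysis filters H_0..H_{N-1}
u = rng.standard_normal(4000)
d = np.convolve(u, w)[:u.size] + 1e-3 * rng.standard_normal(u.size)

# Subband signals: filter with each H_i, then critically decimate by N (index k -> sample kN).
u_sub = np.stack([np.convolve(u, H[i])[:u.size] for i in range(N)])
d_sub = np.stack([np.convolve(d, H[i])[:u.size] for i in range(N)])

w_hat, mu = np.zeros(m), 0.01
for k in range(m // N, u.size // N):
    n = k * N
    # regressor u_i(k) = [u_i(kN), ..., u_i(kN - m + 1)]^T for each subband
    U = np.stack([u_sub[i, n - m + 1:n + 1][::-1] for i in range(N)], axis=1)
    e_D = d_sub[:, n] - U.T @ w_hat                # e_{i,D}(k) = d_{i,D}(k) - u_i^T(k) w_hat(k)
    w_hat = ssaf_update(w_hat, U, e_D, mu)
```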
3. Proposed algorithm

3.1. SSAF with subband filter selection (SS-SSAF)

The coefficient update of the SSAF algorithm in (6) can be expressed in terms of the weight-error vector \(\tilde{\mathbf{w}}\), defined as \(\tilde{\mathbf{w}}(k) \triangleq \mathbf{w} - \hat{\mathbf{w}}(k)\), as follows:
\[
\tilde{\mathbf{w}}(k+1) = \tilde{\mathbf{w}}(k) - \mu\,\frac{\mathbf{U}(k)\,\mathrm{sgn}(\mathbf{e}_D(k))}{\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}}. \tag{7}
\]
To obtain the update recursion of the MSD, we take expectations after squaring both sides of (7):
\[
\begin{aligned}
E\bigl[\|\tilde{\mathbf{w}}(k+1)\|^2\bigr]
&= E\bigl[\|\tilde{\mathbf{w}}(k)\|^2\bigr]
- 2\mu\,E\!\left[\frac{\mathrm{sgn}(\mathbf{e}_D^T(k))\,\mathbf{U}^T(k)\tilde{\mathbf{w}}(k)}{\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}}\right] + \mu^2\\
&= E\bigl[\|\tilde{\mathbf{w}}(k)\|^2\bigr]
- 2\mu\,E\!\left[\frac{\sum_{i=0}^{N-1}\bigl(|e_{i,D}(k)| - \mathrm{sgn}(e_{i,D}(k))\,v_{i,D}(k)\bigr)}{\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}}\right] + \mu^2\\
&= E\bigl[\|\tilde{\mathbf{w}}(k)\|^2\bigr] + \Delta(k),
\end{aligned} \tag{8}
\]
where the second line uses \(e_{i,D}(k) = \mathbf{u}_i^T(k)\tilde{\mathbf{w}}(k) + v_{i,D}(k)\), which follows from (3) and (4), and
\[
\Delta(k) \triangleq -2\mu\,E\!\left[\frac{\sum_{i=0}^{N-1}\bigl(|e_{i,D}(k)| - \mathrm{sgn}(e_{i,D}(k))\,v_{i,D}(k)\bigr)}{\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}}\right] + \mu^2,
\]
which indicates the difference of the MSDs between successive iterations. To decrease the MSD, \(\Delta(k)\) should be negative. Therefore, we obtain the following inequality:
\[
\frac{\mu}{2} < E\!\left[\frac{\sum_{i=0}^{N-1}\bigl(|e_{i,D}(k)| - \mathrm{sgn}(e_{i,D}(k))\,v_{i,D}(k)\bigr)}{\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}}\right]. \tag{9}
\]
If m is large, the fluctuation of \(\mathbf{u}_i^T(k)\mathbf{u}_i(k)\) can be assumed to be small from iteration k to k+1. Inequality (9) can then be approximately expressed as
\[
E\!\left[\sum_{i=0}^{N-1}|e_{i,D}(k)|\right]
> \frac{\mu}{2}\,E\!\left[\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}\right]
+ E\!\left[\sum_{i=0}^{N-1}\mathrm{sgn}(e_{i,D}(k))\,v_{i,D}(k)\right]
\triangleq \delta(k). \tag{10}
\]
Because it is difficult to obtain the value of \(\delta(k)\), we find an upper bound of \(\delta(k)\) using upper bounds of \(E\bigl[\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}\bigr]\) and \(E\bigl[\sum_{i=0}^{N-1}\mathrm{sgn}(e_{i,D}(k))\,v_{i,D}(k)\bigr]\) as follows [13,14]:
\[
E\!\left[\sqrt{\sum_{i=0}^{N-1}\mathbf{u}_i^T(k)\mathbf{u}_i(k)}\right]
\approx \sqrt{m\sum_{i=0}^{N-1}\sigma_{u_i}^2}
\le \sum_{i=0}^{N-1}\sqrt{m\,\sigma_{u_i}^2}, \tag{11}
\]
\[
E\!\left[\sum_{i=0}^{N-1}\mathrm{sgn}(e_{i,D}(k))\,v_{i,D}(k)\right]
\le E\!\left[\sum_{i=0}^{N-1}|v_{i,D}(k)|\right]
= \sum_{i=0}^{N-1}\sqrt{\frac{2\sigma_v^2}{\pi N}}, \tag{12}
\]
where \(\sigma_{u_i}^2 = E[u_i^2(kN)]\) and \(\sigma_v^2 = E[\|\mathbf{v}_D(k)\|^2] = N\sigma_{v_{i,D}}^2\) [15]. Finally, \(\bar{\delta}(k)\), the upper bound of \(\delta(k)\), can be obtained as
\[
\bar{\delta}(k) = \frac{\mu}{2}\sum_{i=0}^{N-1}\sqrt{m\,\sigma_{u_i}^2} + \sum_{i=0}^{N-1}\sqrt{\frac{2\sigma_v^2}{\pi N}}. \tag{13}
\]
If the following inequality is satisfied, then \(E\bigl[\sum_{i=0}^{N-1}|e_{i,D}(k)|\bigr] > \bar{\delta}(k)\), and \(\Delta(k)\) becomes negative:
\[
E\bigl(|e_{i,D}(k)|\bigr) > \frac{\mu}{2}\sqrt{m\,\sigma_{u_i}^2} + \sqrt{\frac{2\sigma_v^2}{\pi N}}. \tag{14}
\]
Accordingly, the proposed SSAF selects the subband filters that satisfy (14) at each iteration to improve the convergence performance. To implement this algorithm, we should estimate the value of \(E(|e_{i,D}(k)|)\). Since the expectation of \(|e_{i,D}(k)|\) is difficult to obtain, we use an instantaneous value instead:
\[
E\bigl(|e_{i,D}(k)|\bigr) \approx |e_{i,D}(k)| > \frac{\mu}{2}\sqrt{m\,\hat{\sigma}_{u_i}^2(k)} + \sqrt{\frac{2\sigma_v^2}{\pi N}}, \tag{15}
\]
where \(\hat{\sigma}_{u_i}^2(k) = \lambda\hat{\sigma}_{u_i}^2(k-1) + (1-\lambda)u_i^2(kN)\) and \(\lambda\) is a smoothing factor. Let \(T_{N}(k) = \{t_1, t_2, \ldots, t_{N(k)}\}\) denote an N(k)-subset of the set \(\{0, 1, \ldots, N-1\}\), where \(t_j\) denotes the index of a selected subband filter and N(k) is the number of selected subband filters at iteration k. The proposed SSAF is
\[
\hat{\mathbf{w}}(k+1) =
\begin{cases}
\hat{\mathbf{w}}(k), & \text{if } N(k) = 0,\\[4pt]
\hat{\mathbf{w}}(k) + \mu\,\dfrac{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}(k)\,\mathrm{sgn}(e_{t_j,D}(k))}{\sqrt{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}^T(k)\mathbf{u}_{t_j}(k)}}, & \text{otherwise},
\end{cases} \tag{16}
\]
where the selected subbands satisfy \(|e_{t_j,D}(k)| > \frac{\mu}{2}\sqrt{m\,\hat{\sigma}_{u_{t_j}}^2(k)} + \sqrt{\frac{2\sigma_v^2}{\pi N}}\).
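As a rough illustration, the selection test (15) and the update (16) can be combined into a single per-iteration routine. The sketch below assumes the decimated subband errors and regressors are already available (for example, produced as in the Section 2 sketch) and that σ_v² is known; the function and argument names are illustrative only, not part of the paper.

```python
import numpy as np

def ss_ssaf_update(w_hat, U, e_D, pow_est, u_new, mu, sigma_v2, lam=0.99, eps=1e-8):
    """One SS-SSAF iteration: subband selection (15) followed by the update (16).
    U: (m, N) subband regressors as columns; e_D: (N,) subband errors e_{i,D}(k);
    pow_est: (N,) smoothed powers sigma_hat^2_{u_i}(k-1); u_new: (N,) newest samples u_i(kN)."""
    m, N = U.shape
    # Recursive power estimate: sigma^2_{u_i}(k) = lam*sigma^2_{u_i}(k-1) + (1-lam)*u_i(kN)^2
    pow_est = lam * pow_est + (1.0 - lam) * u_new ** 2
    # Selection rule (15): |e_{i,D}(k)| > (mu/2)*sqrt(m*sigma^2_{u_i}(k)) + sqrt(2*sigma_v^2/(pi*N))
    thresh = 0.5 * mu * np.sqrt(m * pow_est) + np.sqrt(2.0 * sigma_v2 / (np.pi * N))
    sel = np.abs(e_D) > thresh
    if not np.any(sel):                            # N(k) = 0: no update, first case of (16)
        return w_hat, pow_est, sel
    U_sel, e_sel = U[:, sel], e_D[sel]
    norm = np.sqrt(np.sum(U_sel * U_sel) + eps)    # sqrt(sum_j u_{t_j}^T(k) u_{t_j}(k))
    w_hat = w_hat + mu * (U_sel @ np.sign(e_sel)) / norm
    return w_hat, pow_est, sel
```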
3.2. Variable step-size algorithm for SS-SSAF

In this section, in order to achieve both a fast convergence speed and low steady-state errors for the SS-SSAF, a variable step-size algorithm is derived by minimizing the MSD at every iteration. From the update equation of the SS-SSAF in (16), the difference of the MSD between iterations k and k+1 is
\[
\Delta_{ss}(\mu(k)) \triangleq -2\mu(k)\,E\!\left[\frac{\sum_{j=1}^{N(k)}\bigl(|e_{t_j,D}(k)| - \mathrm{sgn}(e_{t_j,D}(k))\,v_{t_j,D}(k)\bigr)}{\sqrt{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}^T(k)\mathbf{u}_{t_j}(k)}}\right] + \mu^2(k). \tag{17}
\]
Because \(\mathrm{sgn}(e_{t_j,D}(k))\,v_{t_j,D}(k)\) cannot be calculated directly, the proposed algorithm uses an upper bound \(\bar{\Delta}_{ss}(\mu(k))\) instead of \(\Delta_{ss}(\mu(k))\). Using inequality (12), the upper bound is obtained as
\[
\Delta_{ss}(\mu(k)) \le -2\mu(k)\,E\!\left[\frac{\sum_{j=1}^{N(k)}\Bigl(|e_{t_j,D}(k)| - \sqrt{\frac{2\sigma_v^2}{\pi N}}\Bigr)}{\sqrt{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}^T(k)\mathbf{u}_{t_j}(k)}}\right] + \mu^2(k) \triangleq \bar{\Delta}_{ss}(\mu(k)). \tag{18}
\]
To decrease the MSD from iteration k to k+1 as rapidly as possible, the optimal step size \(\mu_{opt}(k)\) can be derived from \(\partial\bar{\Delta}_{ss}(\mu(k))/\partial\mu(k) = 0\) as
\[
\mu_{opt}(k) = E\!\left[\frac{\sum_{j=1}^{N(k)}\Bigl(|e_{t_j,D}(k)| - \sqrt{\frac{2\sigma_v^2}{\pi N}}\Bigr)}{\sqrt{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}^T(k)\mathbf{u}_{t_j}(k)}}\right]. \tag{19}
\]
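For completeness, (19) follows by differentiating the quadratic upper bound (18) with respect to μ(k) and setting the result to zero:
\[
\frac{\partial \bar{\Delta}_{ss}(\mu(k))}{\partial \mu(k)}
= -2\,E\!\left[\frac{\sum_{j=1}^{N(k)}\Bigl(|e_{t_j,D}(k)| - \sqrt{\tfrac{2\sigma_v^2}{\pi N}}\Bigr)}{\sqrt{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}^{T}(k)\mathbf{u}_{t_j}(k)}}\right] + 2\mu(k) = 0
\;\Longrightarrow\; \mu(k) = \mu_{opt}(k)\ \text{as in (19)}.
\]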
The optimal step size \(\mu_{opt}(k)\) minimizes the upper bound of the MSD difference; however, it causes performance degradation under impulsive noise environments. Therefore, in order to maintain robustness against impulsive noise, a moving-average method is adopted as follows:
\[
\mu(k) =
\begin{cases}
\mu(k-1), & \text{if } N(k) = 0,\\
\gamma\,\mu(k-1) + (1-\gamma)\min\bigl(\mu(k-1),\,\beta(k)\bigr), & \text{otherwise},
\end{cases} \tag{20}
\]
where \(\gamma\) is a smoothing factor and
\[
\beta(k) = \frac{\sum_{j=1}^{N(k)}\Bigl(|e_{t_j,D}(k)| - \sqrt{\frac{2\sigma_v^2}{\pi N}}\Bigr)}{\sqrt{\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}^T(k)\mathbf{u}_{t_j}(k)}}. \tag{21}
\]
Note that the proposed step size \(\mu(k)\) is guaranteed to be positive, because the SS-SSAF uses only the selected subbands that satisfy (15). Table 1 summarizes the proposed algorithm, where \((\cdot)^H\) denotes complex conjugate transposition. Although the proposed variable step-size algorithm is robust under impulsive noise environments, it exhibits degraded performance when the unknown system changes rapidly. To avoid this drawback, the same reset algorithm as in [16] is applied for tracking system changes.

Table 1. Summary of the VSS-SS-SSAF algorithm.
Initialization: \(\hat{\mathbf{w}}(0) = \mathbf{0}\), \(\mu(0) = \sigma_d^2/(M\sigma_u^2)\)
Parameters: \(\lambda\), \(\gamma\) predefined; \(\sigma_v^2\) known or estimated
For each iteration k:
1. Subband selection
   \(e_{i,D}(k) = d_{i,D}(k) - \mathbf{u}_i^H(k)\hat{\mathbf{w}}(k)\)
   \(\hat{\sigma}_{u_i}^2(k) = \lambda\hat{\sigma}_{u_i}^2(k-1) + (1-\lambda)|u_i(kN)|^2\)
   Select the subband filters that satisfy \(|e_{i,D}(k)| > \frac{\mu(k-1)}{2}\sqrt{m\hat{\sigma}_{u_i}^2(k)} + \sqrt{\frac{2\sigma_v^2}{\pi N}}\);
   \(t_j\): index of a selected subband filter, \(N(k)\): number of selected subband filters.
2. Step-size decision
   \(\mu(k) = \mu(k-1)\) if \(N(k) = 0\); otherwise \(\mu(k) = \gamma\mu(k-1) + (1-\gamma)\min\bigl(\mu(k-1), \beta(k)\bigr)\),
   where \(\beta(k) = \sum_{j=1}^{N(k)}\bigl(|e_{t_j,D}(k)| - \sqrt{2\sigma_v^2/(\pi N)}\bigr)\big/\sqrt{\sum_{j=1}^{N(k)}\|\mathbf{u}_{t_j}(k)\|^2}\).
3. Weight-vector update
   \(\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu(k)\sum_{j=1}^{N(k)}\mathbf{u}_{t_j}(k)\,\mathrm{sgn}(e_{t_j,D}(k))\big/\sqrt{\sum_{j=1}^{N(k)}\|\mathbf{u}_{t_j}(k)\|^2}\)
end
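Table 1 maps almost line-for-line to code. The following sketch performs one full VSS-SS-SSAF iteration (selection, step-size decision, weight update) for the real-valued case, assuming σ_v² is known and that the subband quantities are computed as in the earlier sketches; all names are illustrative rather than from the paper.

```python
import numpy as np

def vss_ss_ssaf_iteration(w_hat, mu_prev, pow_est, U, e_D, u_new,
                          sigma_v2, lam=0.95, gam=0.95, eps=1e-8):
    """One VSS-SS-SSAF iteration following Table 1 (real-valued signals)."""
    m, N = U.shape
    # 1. Subband selection, Eq. (15), evaluated with the previous step size mu(k-1)
    pow_est = lam * pow_est + (1.0 - lam) * u_new ** 2
    noise_term = np.sqrt(2.0 * sigma_v2 / (np.pi * N))
    sel = np.abs(e_D) > 0.5 * mu_prev * np.sqrt(m * pow_est) + noise_term
    if not np.any(sel):                     # N(k) = 0: keep weights and step size
        return w_hat, mu_prev, pow_est
    U_sel, e_sel = U[:, sel], e_D[sel]
    norm = np.sqrt(np.sum(U_sel * U_sel) + eps)
    # 2. Step-size decision, Eqs. (20)-(21)
    beta = np.sum(np.abs(e_sel) - noise_term) / norm
    mu = gam * mu_prev + (1.0 - gam) * min(mu_prev, beta)
    # 3. Weight-vector update over the selected subbands only
    w_hat = w_hat + mu * (U_sel @ np.sign(e_sel)) / norm
    return w_hat, mu, pow_est
```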
Fig. 2. Acoustic impulse response of a room.

Fig. 3. (a) NMSD learning curves for conventional SSAF, IWF-SSAF, IS-SSAFs, and SS-SSAF with large step size under impulsive noise environment (Pr = 0.01), SNR = 30 dB. (b) Average number of selected subbands for IS-SSAFs and SS-SSAF with large step size.
3.3. Computational complexity

Table 2 compares the computational complexity of the conventional SSAF, the BDVSS-SSAF, and the proposed algorithms in terms of the number of multiplications, square roots, and divisions per iteration. Although additional complexity is introduced by the subband filter selection and the step-size algorithm, the average computational complexity is reduced because m >> N and the proposed algorithm selects, on average, fewer than N subband filters.

Table 2. Comparison of computational complexity.
Algorithm | Multiplications | Square roots | Divisions
SSAF | 2m + m/N + 3NL | 1/N | 1/N
BDVSS-SSAF | 3m + 3NL + 3 | 1/N | 1 + 1/N
SS-SSAF | m + (1 + N(k))m/N + 3NL + 6 | 2/N | 1/N
VSS-SS-SSAF | m + (1 + N(k))m/N + 3NL + 6 + 3/N | 2/N | 2/N
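As a quick sanity check, the multiplication counts of Table 2 can be evaluated with the parameters used later in Section 4 (m = 512, N = 8, and L = 64, where L is assumed here to denote the analysis-filter length). The SSAF and BDVSS-SSAF values then coincide with the averages reported in Table 3, while the SS-SSAF and VSS-SS-SSAF counts depend on the average number of selected subbands.

```python
# Per-iteration multiplication counts from Table 2 (assumption: L = analysis-filter length).
m, N, L = 512, 8, 64

ssaf_mults  = 2 * m + m / N + 3 * N * L        # 2624.0, matches Table 3
bdvss_mults = 3 * m + 3 * N * L + 3            # 3075,   matches Table 3

def ss_ssaf_mults(avg_selected):
    """SS-SSAF count as a function of the average number of selected subbands N(k)."""
    return m + (1 + avg_selected) * m / N + 3 * N * L + 6

print(ssaf_mults, bdvss_mults, ss_ssaf_mults(2.0))   # e.g. 2624.0 3075 2246.0
```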
4. Simulation results

We simulated the performance of the proposed and existing algorithms in a system identification scenario under impulsive noise environments. The unknown system was a truncated acoustic impulse response of a room with 512 taps (m = 512), as shown in Fig. 2, and it is assumed that the adaptive filter and the unknown system have the same number of taps.
Each subband adaptive filter used a cosine-modulated filter bank with a filter length of 64 and 8 subbands (N = 8). The colored input signals were generated by filtering white Gaussian noise with the first-order system
\[
G(z) = \frac{1}{1 - 0.9z^{-1}}.
\]
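In code, such a colored input is simply white Gaussian noise passed through the one-pole recursion implied by G(z); the short sketch below (with illustrative names) is one way to generate it.

```python
import numpy as np

def colored_input(n_samples, rho=0.9, rng=None):
    """White Gaussian noise filtered by G(z) = 1 / (1 - rho * z^-1)."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n_samples)
    u = np.empty(n_samples)
    prev = 0.0
    for n, w_n in enumerate(white):
        prev = w_n + rho * prev        # u(n) = w(n) + 0.9 * u(n-1)
        u[n] = prev
    return u
```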
The signal-to-noise ratio (SNR) was set to 20 dB or 30 dB for the measurement noise added to the output signal \(y_i\), where the SNR is defined as
\[
\mathrm{SNR} \triangleq 10\log_{10}\frac{E[(\mathbf{u}^T(n)\mathbf{w})^2]}{E[v^2(n)]}. \tag{22}
\]
The impulsive noise \(\eta(n)\) was generated as \(\eta(n) = p(n)A(n)\), where \(p(n)\) is a Bernoulli process with probability of success \(P[p(n) = 1] = Pr\), and \(A(n)\) is a zero-mean Gaussian random variable with power \(\sigma_A^2 = 1000\,\sigma_y^2\). Pr was set to 0.01. The normalized mean squared deviation (NMSD) is defined as
\[
\mathrm{NMSD} \triangleq 10\log_{10}\frac{E[\tilde{\mathbf{w}}^T(k)\tilde{\mathbf{w}}(k)]}{\mathbf{w}^T\mathbf{w}}. \tag{23}
\]
We assumed that the measurement noise variance, \(\sigma_v^2\), is known, because it can be calculated during silent periods and online, as in [14,17–20]. The simulation results were obtained by ensemble averaging over 50 trials. Figs. 3(a) and 4(a) show the NMSD learning curves of the conventional SSAF, IWF-SSAF [21], IS-SSAFs [12], and SS-SSAF for the colored input signal generated by G(z) under an impulsive noise environment.
Fig. 4. (a) NMSD learning curves for conventional SSAF, IWF-SSAF, IS-SSAFs, and SS-SSAF with small step size under impulsive noise environment (Pr = 0.01), SNR = 30 dB. (b) Average number of selected subbands for IS-SSAFs and SS-SSAF with small step size.
Fig. 5. NMSD learning curves for conventional SSAFs and SS-SSAFs with N = 2, 4, and 8 under impulsive noise environment (Pr = 0.01), SNR = 30 dB. (a) Large step size; (b) small step size.
To compare the conventional SSAF and the improved SSAF algorithms in terms of the convergence rate, the step sizes were chosen such that their steady-state errors become equal. In Fig. 3(a), it is clear that the IWF-SSAF and the proposed algorithm provide a faster convergence speed than the conventional SSAF with a large step size (μ = 0.01). As can be seen in Fig. 4(a), although the improved SSAF algorithms converge more slowly before iteration \(3 \times 10^5\), they converge to the steady state more quickly than the conventional SSAF with a small step size (μ = 0.001). Figs. 3(b) and 4(b) show the average number of selected subbands for the IS-SSAFs and the SS-SSAF. The IS-SSAF and SS-SSAF have lower computational complexity than the conventional SSAF and IWF-SSAF, because they update the weight vector using a number of selected subbands that is less than or equal to N. Because the IS-SSAF selects subbands satisfying \(|e_{i,D}(k)| > \alpha\,\sigma_{v_{i,D}}\), it selects more subbands than the SS-SSAF for a large step size; therefore, the IS-SSAF suffers performance degradation with a large step size. From Figs. 3 and 4, the proposed SSAF algorithm, the SS-SSAF, achieves both a faster convergence speed and lower computational complexity than not only the conventional SSAF but also the improved SSAFs. Moreover, as shown in Fig. 5, the SS-SSAF outperforms the SSAF in terms of convergence speed for N = 2, 4, and 8.
Fig. 6. (a) NMSD learning curves for conventional SSAFs, VSS-SSAF, BDVSS-SSAF, and VSS-SS-SSAF under impulsive noise environment (Pr = 0.01), SNR = 20 dB; the unknown system changes suddenly (w → −w) at iteration \(1.0 \times 10^6\). (b) Average number of selected subbands for VSS-SS-SSAF.
Fig. 7. (a) NMSD learning curves for conventional SSAFs, VSS-SSAF, BDVSS-SSAF, and VSS-SS-SSAF under impulsive noise environment (Pr = 0.01), SNR = 30 dB; the unknown system changes suddenly (w → −w) at iteration \(1.0 \times 10^6\). (b) Average number of selected subbands for VSS-SS-SSAF.
Table 3. The average number of multiplications required for obtaining Fig. 8.
Algorithm | Multiplications
SSAF (μ = 0.01 and μ = 0.001) | 2624
BDVSS-SSAF | 3075
SS-SSAF (μ = 0.007) | 2229
SS-SSAF (μ = 0.003) | 2258
VSS-SS-SSAF | 2336
Figs. 6(a) and 7(a) show the NMSD learning curves of the conventional SSAFs, VSS-SSAF [22], BDVSS-SSAF [16], and VSS-SS-SSAF (λ = γ = 0.9479) for the colored input signal generated by G(z) under an impulsive noise environment. As can be seen, the proposed variable step-size algorithm, the VSS-SS-SSAF, shows a fast convergence rate and small steady-state errors similar to those of the BDVSS-SSAF algorithm. Moreover, from Figs. 6(b) and 7(b), the VSS-SS-SSAF achieves low computational complexity, because the number of subbands it uses is, on average, smaller than that of the other algorithms. Fig. 8 compares the SSAFs, SS-SSAFs, BDVSS-SSAF, and VSS-SS-SSAF (λ = γ = 0.9948) in an acoustic echo-cancellation application with a double-talk situation, and Table 3 shows the average number of multiplications required for obtaining Fig. 8. As can be seen, the proposed algorithms converge faster than the SSAF algorithm and reduce the computational complexity compared to the SSAF.

Fig. 8. NMSD learning curves in an acoustic echo-cancellation application with double-talk, SNR = 30 dB.

5. Conclusion

In this paper, we presented a new SSAF algorithm that selects a subset of subband filters to improve the performance in terms of convergence rate and computational complexity.
Because a condition that decreases the MSD at every iteration cannot be calculated easily, the proposed SSAF, the SS-SSAF, uses its upper bound. In addition, the variable step size for the SS-SSAF was derived by minimizing the upper bound of the MSD at each iteration. Simulation results verified that the proposed SSAF achieves a fast convergence rate with low computational complexity, and that the proposed variable step-size algorithm obtains both a fast convergence speed and small steady-state errors, as compared with the existing algorithms in impulsive noise environments.
Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP; Ministry of Science, ICT and Future Planning) (No. 2017R1C1B5017968). This work was also supported by the Soonchunhyang University Research Fund.

References

[1] S. Haykin, Adaptive Filter Theory, fourth ed., Prentice-Hall, Upper Saddle River, NJ, 2002.
[2] A.H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, 2003.
[3] K. Lee, W. Gan, Inherent decorrelating and least perturbation properties of the normalized subband adaptive filter, IEEE Trans. Signal Process. 54 (11) (2006) 4475–4480.
[4] M. Abadi, J. Husøy, Selective partial update and set-membership subband adaptive filters, Signal Process. 88 (10) (2008) 2463–2471.
[5] S. Kim, Y. Choi, M. Song, W. Song, A subband adaptive filtering algorithm employing dynamic selection of subband filters, IEEE Signal Process. Lett. 17 (3) (2010) 245–248.
[6] M. Rabiee, M.A. Attari, S. Ghaemmaghami, A low complexity NSAF algorithm, IEEE Signal Process. Lett. 19 (11) (2012) 716–719.
[7] T. Shao, Y. Zheng, J. Benesty, An affine projection sign algorithm robust against impulsive interferences, IEEE Signal Process. Lett. 17 (4) (2010) 327–330.
[8] J. Ni, F. Li, Variable regularisation parameter sign subband adaptive filter, Electron. Lett. 46 (24) (2010) 1605–1607.
[9] F. Huang, J. Zhang, S. Zhang, Combined-step-size affine projection sign algorithm for robust adaptive filtering in impulsive interference environments, IEEE Trans. Circuits Syst. II Express Briefs 63 (5) (2016) 493–497.
[10] F. Huang, J. Zhang, S. Zhang, Combined-step-size normalized subband adaptive filter with a variable-parametric step-size scaler against impulsive interferences, IEEE Trans. Circuits Syst. II Express Briefs (2017), doi:10.1109/TCSII.2017.2771430.
[11] J. Hur, I. Song, P. Park, A variable step-size normalized subband adaptive filter with a step-size scaler against impulsive measurement noise, IEEE Trans. Circuits Syst. II Express Briefs 64 (7) (2017) 842–846.
[12] J.J. Jeong, S.H. Kim, G. Koo, S.W. Kim, Sign subband adaptive filter with selection of number of subbands, in: Proceedings of the Twelfth International Conference on Informatics in Control, Automation and Robotics (ICINCO), 1, IEEE, 2015, pp. 407–411.
[13] J. Shin, J. Yoo, P. Park, Steady-state mean-square deviation analysis of the sign subband adaptive filter, Electron. Lett. 53 (12) (2017) 793–795.
[14] J. Yoo, J. Shin, P. Park, Variable step-size affine projection sign algorithm, IEEE Trans. Circuits Syst. II Express Briefs 61 (4) (2014) 274–278.
[15] W. Yin, A. Mehr, Stochastic analysis of the normalized subband adaptive filter algorithm, IEEE Trans. Circuits Syst. I Regul. Pap. 99 (2011) 1–1.
[16] J. Yoo, J. Shin, P. Park, A band-dependent variable step-size sign subband adaptive filter, Signal Process. 104 (2014) 407–411.
[17] N. Yousef, A. Sayed, A unified approach to the steady-state and tracking analyses of adaptive filters, IEEE Trans. Signal Process. 49 (2) (2001) 314–324.
[18] J. Benesty, H. Rey, L.R. Vega, S. Tressens, A nonparametric VSS NLMS algorithm, IEEE Signal Process. Lett. 13 (10) (2006) 581–584.
[19] C. Paleologu, J. Benesty, S. Ciochina, A variable step-size affine projection algorithm designed for acoustic echo cancellation, IEEE Trans. Audio Speech Lang. Process. 16 (8) (2008) 1466–1478.
[20] M. Asif Iqbal, S. Grant, Novel variable step size NLMS algorithms for echo cancellation, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008, pp. 241–244.
[21] Y. Yu, H. Zhao, Novel sign subband adaptive filter algorithms with individual weighting factors, Signal Process. 122 (2016) 14–23.
[22] J. Shin, J. Yoo, P. Park, Variable step-size sign subband adaptive filter, IEEE Signal Process. Lett. 20 (2) (2013) 173–176.