Applied Mathematics and Computation 217 (2011) 7365–7371
A nonparametric variable step-size NLMS algorithm for transversal filters

Liu Jian-chang, Yu Xia *, Li Hong-ru

Key Laboratory of Integrated Automation of Process Industry, Ministry of Education, Northeastern University, Shenyang, Liaoning Province 110004, China
Keywords: Normalized least-mean-square (NLMS); Variable step-size NLMS; Nonparametric; Transversal filters; System identification
Abstract: A nonparametric adaptive filtering approach is proposed in this paper. The algorithm is obtained by using a time-varying step size in the traditional NLMS weight update equation. The step size is adjusted according to the square of a time-averaged estimate of the autocorrelation of the a priori and a posteriori errors. As a result, the new algorithm can more effectively sense proximity to the optimum solution, independently of uncorrelated measurement noise. Moreover, the algorithm converges quickly in the early stages of adaptation and exhibits small final misadjustment in steady state. It works reliably and is easy to implement, since the update function is nonparametric. Experimental results for system identification applications are presented to illustrate the principle and efficiency of the proposed algorithm. © 2011 Elsevier Inc. All rights reserved.
1. Introduction

Adaptive filtering is frequently employed in communications, signal processing, control and many other applications as a consequence of its simplicity and robustness [1–3]. One of the most popular adaptive filters is the normalized least-mean-square (NLMS) algorithm. It is well known that the stability of this algorithm is governed by a step-size parameter, and that the choice of this parameter reflects a compromise between the dual requirements of most adaptive filtering applications: fast convergence rate and small misadjustment. To meet these conflicting requirements, researchers have continually looked for alternative means of controlling the step-size parameter, and over the last few decades many different variable step-size NLMS algorithms have been proposed [4–7]. According to the literature, there are essentially three classes of variable step-size NLMS algorithms. The first class is based on the gradient adaptive step size (GASS), exemplified by the algorithms of Mathews and Xie [4] and Benveniste et al. [5]. The condition for optimal adaptation in this sense is dE(n)/dμ(n) = 0, where E(n) is the cost function of the system and μ(n) is the step-size factor. A major disadvantage of these algorithms, however, is their sensitivity to the time correlation between input signal samples and to the value of the additional parameter that governs the gradient adaptation of the step size. This sensitivity is reduced by algorithms based on an adaptive regularization factor, such as the generalized normalized gradient descent (GNGD) algorithm proposed in [6]. While these algorithms are robust to changes in the initialization of the critical parameters, exploiting them fully is not easy, since it requires the setting of many parameters. A third option is an algorithm based on nonparametric variance estimates, such as Benesty's recently developed algorithm in [7]. This is a nonparametric variable step-size NLMS algorithm obtained by adjusting the step-size value so as to reduce the squared error at each instant. However, experimental results show that the performance of this algorithm is quite sensitive to noise disturbance, and its advantage over other algorithms is generally attained only
* Corresponding author. E-mail address: [email protected] (X. Yu).
© 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2011.02.026
in a high signal-to-noise environment. This is intuitively obvious by noting that the criteria controlling the step-size update of these algorithms are directly obtained from the instantaneous error contaminated by the disturbance noise. Since the measurement noise is a reality in any practical system, the efficiency of any adaptive algorithm is judged by its performance in the presence of this noise.
2. Algorithm formulation

In a number of published works [8–11], adaptive algorithms use variable step-size parameters, with the weight update recursion given by:
ŵ(n) = ŵ(n−1) + μ(n)e(n)x(n)    (1)
where μ(n) is the step-size factor, a variable positive scalar that controls the size of the change along the selected direction, and x(n) is the vector containing the L most recent samples of the system input signal. e(n) is the system output error, defined as:
e(n) = y(n) − ŵᵀ(n−1)x(n)    (2)
where the corresponding system output is:
y(n) = w₀ᵀx(n) + v(n)    (3)
where v(n) is the system noise that is independent of the input signal x(n) and w0 is the optimal weight vector. In this paper v(n) is assumed stationary and all signals are real-valued and zero-mean. For analysis of the algorithm formulation it is also convenient to introduce some notation and additional variables. First of all, the weight error vector of a transversal filter is defined as the difference between the optimal solution and the filter weights:
m(n) = w₀ − ŵ(n)    (4)
Then, the a priori and a posteriori error signals are defined, respectively, as:

e_a(n) = y(n) − ŵᵀ(n−1)x(n) = mᵀ(n−1)x(n) + v(n)    (5)

and

e_p(n) = y(n) − ŵᵀ(n)x(n) = mᵀ(n)x(n) + v(n)    (6)
For the classical NLMS algorithm, the step-size update expression is:
μ_NLMS(n) = [xᵀ(n)x(n)]⁻¹    (7)
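As a quick illustration of these definitions (a toy numeric sketch; the values are arbitrary), the classical NLMS step (7) drives the a posteriori error exactly to zero:

```python
import numpy as np

# Minimal numeric illustration of Eqs. (5)-(6): the a priori error uses the
# weights before the update, the a posteriori error the weights after it.
rng = np.random.default_rng(1)
x_n = rng.standard_normal(7)        # current input vector x(n)
w_old = np.zeros(7)                 # filter weights w_hat(n-1)
y_n = 1.5                           # observed system output y(n)

e_a = y_n - w_old @ x_n             # a priori error, Eq. (5)
mu = 1.0 / (x_n @ x_n)              # classical NLMS step size, Eq. (7)
w_new = w_old + mu * e_a * x_n      # weight update, Eq. (1)
e_p = y_n - w_new @ x_n             # a posteriori error, Eq. (6)

print(e_a, e_p)                     # e_p is ~0 up to rounding error
```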
One way to derive μ(n) so that the weight update recursion is stable is to cancel the a posteriori error signal, as illustrated above. Benesty et al. [7] propose another way to derive μ(n) in the presence of noise:
J_{e_p}(n) = E{e_p²(n)} = σ_v²,  ∀n    (8)
where E{·} denotes mathematical expectation and σ_v² is the power of the system noise. Using the approximation xᵀ(n)x(n) = Lσ_x² = LE{x²(n)} for L ≫ 1, where σ_x² is the power of the input signal, and knowing that μ(n) is deterministic in nature, it follows from (1), (3), (6) and (8) that:
J_{e_p}(n) = E{e_p²(n)} = [1 − μ(n)Lσ_x²]²σ_e²(n) = σ_v²    (9)
where σ_e²(n) = E{e²(n)} = J_e(n) is the power of the error signal. Developing (9), one obtains a nonparametric μ(n) as:
μ_VSS(n) = (1 / (xᵀ(n)x(n))) [1 − σ_v/σ_e(n)]    (10)
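A minimal sketch of this rule, assuming the noise standard deviation σ_v is known and a running estimate of σ_e(n) is available (cf. (16) below):

```python
import numpy as np

def mu_vss(x_n: np.ndarray, sigma_v: float, sigma_e: float) -> float:
    """Benesty-style step size of Eq. (10); assumes sigma_e >= sigma_v,
    so the step shrinks toward zero as the error power nears the noise floor."""
    return (1.0 / (x_n @ x_n)) * (1.0 - sigma_v / sigma_e)
```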
In this paper, we make the algorithm immune to independent noise disturbance by requiring that its step size be adjusted according to the square of a time-averaged estimate of the autocorrelation of e_a(n) and e_p(n). In the proposed procedure, the step-size parameter μ(n) is therefore derived from the condition:
J_ap(n) = E{e_a(n)e_p(n)} = σ_v²,  ∀n    (11)
In a similar fashion, it can be found that:

J_ap(n) = E{e_a(n)e_p(n)} = [1 − μ(n)Lσ_x²]σ_e²(n) = σ_v²    (12)

Developing (12), the obvious solution is:
μ_NVS(n) = (1 / (xᵀ(n)x(n))) [1 − σ_v²/σ_e²(n)]    (13)
where μ_NVS(n) is the nonparametric variable step-size factor. Therefore, the proposed nonparametric variable step-size NLMS (NVS–NLMS) algorithm is:
ŵ(n) = ŵ(n−1) + μ_NVS(n)e(n)x(n)    (14)
Before the algorithm converges, σ_e(n) is large compared with σ_v, so μ_NVS(n) ≈ μ_NLMS(n); once the algorithm starts to converge to the optimal solution, σ_e²(n) ≈ σ_v², and then μ_NVS(n) ≈ 0. The algorithm therefore offers both good convergence and low misadjustment and, while maintaining immunity against independent noise disturbance compared with other algorithms of the same family, it still adjusts the step size effectively.

In realistic applications, some practical considerations arise, as in the traditional nonparametric NLMS algorithm. First of all, the algorithm should be regularized in order to avoid division by small numbers. This implies adding small positive constants δ and θ to the denominators of the step-size factor μ_NVS(n). The modified μ_NVS(n) function is:

μ_NVS(n) = [δ + xᵀ(n)x(n)]⁻¹ [1 − σ_v²/(θ + σ̂_e²(n))]    (15)

It is clear that σ_e²(n) ≥ σ_v², implying μ_NVS(n) ≥ 0. In practice, the quantity σ_e²(n) is estimated as follows [7]:
σ̂_e²(n) = λσ̂_e²(n−1) + (1 − λ)e²(n)    (16)
where λ is an exponential-window (forgetting) factor. This estimate can result in a value lower than σ_v², which would make μ_NVS(n) < 0; in that case, the simplest solution is to set μ_NVS(n) = 0. Based on the above analysis, the transversal filter can be designed with this novel algorithm, and the NVS–NLMS procedure is summarized in Table 1.

Table 1
The proposed NVS–NLMS algorithm.
Inputs: λ, σ_v², δ, θ.
Initialization: ŵ(0) = 0, σ̂_e²(0) = 0.
Loop: for n = 1, 2, …
    e(n) = y(n) − ŵᵀ(n−1)x(n)
    σ̂_e²(n) = λσ̂_e²(n−1) + (1 − λ)e²(n)
    μ(n) = [δ + xᵀ(n)x(n)]⁻¹ [1 − σ_v²/(θ + σ̂_e²(n))]
    μ_NVS(n) = μ(n) if σ̂_e²(n) ≥ σ_v², otherwise 0
    ŵ(n) = ŵ(n−1) + μ_NVS(n)e(n)x(n)

Another important consideration is how to estimate the noise power σ_v², and different solutions can be used according to the particular application. In echo cancellation, silences are common, so the square of the measured output during those intervals can be averaged to obtain an estimate. In this situation the memory factor should be chosen carefully: we want it small, to react quickly when entering or leaving a double-talk situation, but large, to obtain a less biased estimate of the stationary background component. Alternatively, in system identification the noise power can be estimated by observing the output when the input is small enough, since the output component due to the noise can then be larger than the one associated with the filtered input.
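As a concrete illustration, a minimal Python sketch of the Table 1 recursion is given below. It assumes the noise power σ_v² is known; the function name and default constants (λ = 0.99, δ = θ = 10⁻⁶) are illustrative choices, not values from the paper.

```python
import numpy as np

def nvs_nlms(x, y, L, sigma_v2, lam=0.99, delta=1e-6, theta=1e-6):
    """NVS-NLMS recursion of Table 1: identify an L-tap FIR system from
    input x and noisy output y, given the noise power sigma_v2."""
    N = len(x)
    w = np.zeros(L)                # w_hat(0) = 0
    sig_e2 = 0.0                   # sigma_hat_e^2(0) = 0
    e_hist = np.zeros(N)
    for n in range(L - 1, N):
        xn = x[n - L + 1:n + 1][::-1]               # L most recent samples
        e = y[n] - w @ xn                           # output error, Eq. (2)
        sig_e2 = lam * sig_e2 + (1 - lam) * e**2    # error power, Eq. (16)
        if sig_e2 >= sigma_v2:                      # regularized step, Eq. (15)
            mu = (1.0 - sigma_v2 / (theta + sig_e2)) / (delta + xn @ xn)
        else:
            mu = 0.0                                # avoid a negative step size
        w = w + mu * e * xn                         # weight update, Eq. (14)
        e_hist[n] = e
    return w, e_hist

# Toy usage (hypothetical plant): identify a length-7 FIR system under
# white Gaussian noise with variance 0.01, as in Section 5.
rng = np.random.default_rng(0)
w0 = rng.standard_normal(7)
x = rng.standard_normal(5000)
y = np.convolve(x, w0)[:len(x)] + np.sqrt(0.01) * rng.standard_normal(len(x))
w_hat, _ = nvs_nlms(x, y, L=7, sigma_v2=0.01)      # w_hat approximates w0
```

The guard σ̂_e²(n) ≥ σ_v² implements the practical fix described above; λ, δ and θ are tuning constants whose defaults here are only placeholders.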
3. Performance analysis

Consider the following variable regularized NLMS algorithm [7]:
ŵ(n) = ŵ(n−1) + [μ / (δ(n) + xᵀ(n)x(n))] e(n)x(n)    (17)
where δ(n) is a variable regularization parameter. Following the same procedure as previously described, the regularization factor can be derived in such a way that:
J_ap(n) = E{e_a(n)e_p(n)} = [1 − μ(n)Lσ_x²]σ_e²(n) = σ_v²,  ∀n    (18)
After straightforward calculations, it can be found that:
δ_VR(n) = Lσ_x²σ_v² / (σ_e²(n) − σ_v²)    (19)
which can be compared with the optimal regularization parameter derived in [12]:

δ_OVR(n) = Lσ_x²σ_v² / (σ_e²(n) − σ_v²) = δ_VR(n)    (20)

Thus, the proposed algorithm achieves good convergence and low misadjustment at the same time. Furthermore, the equality [δ_VR(n) + xᵀ(n)x(n)]⁻¹ = μ_NVS(n) is easily verified, showing that the proposed NVS–NLMS and variable regularized NLMS algorithms are strictly equivalent.
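This equivalence can be checked numerically; the sketch below uses the approximation xᵀ(n)x(n) = Lσ_x² from the derivation, with arbitrary illustrative values:

```python
import numpy as np

# Numeric check that (delta_VR(n) + x^T x)^(-1) equals mu_NVS(n) of Eq. (13).
L, sigma_x2, sigma_v2, sigma_e2 = 7, 1.0, 0.01, 0.5
xtx = L * sigma_x2                                            # x^T(n)x(n) = L*sigma_x^2
delta_vr = L * sigma_x2 * sigma_v2 / (sigma_e2 - sigma_v2)    # Eq. (19)
mu_nvs = (1.0 - sigma_v2 / sigma_e2) / xtx                    # Eq. (13)
assert np.isclose(1.0 / (delta_vr + xtx), mu_nvs)             # strict equivalence
```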
The convergence of the misalignment is analysed under the supposition that the system is perfectly stationary. The weight error vector at time n is defined as in (4), and rewriting the NVS–NLMS algorithm in terms of the misalignment gives:

m(n) = m(n−1) − μ_NVS(n)e(n)x(n)    (21)
Taking first the ℓ₂ norm and then the mathematical expectation of both sides, and assuming that E{v(n)mᵀ(n−1)x(n)} = 0 (which is true if the noise signal is white), we obtain:
E{‖m(n)‖₂²} − E{‖m(n−1)‖₂²} = E{‖m(n−1) − μ_NVS(n)e(n)x(n)‖₂²} − E{‖m(n−1)‖₂²} = −(σ_e²(n) − σ_v²)² / (xᵀ(n)x(n)σ_e²(n)) ≤ 0    (22)
This shows that, for μ_NVS(n) defined in (13), the length of the misalignment vector is non-increasing. The convergence performance is therefore analysed as follows:
J(∞) = lim_{n→∞} E{e²(n)} = lim_{n→∞} σ_e²(n) = σ_e²(∞)    (23)
From the step-size update equation (13), it can be shown that:
σ_e²(n) = σ_v² / (1 − μ_NVS(n)Lσ_x²)    (24)
As the algorithm converges, we can assume that μ_NVS(∞) ≈ 0, which implies that σ_e²(∞) = σ_v². As a result, the cost function is:

J(∞) = lim_{n→∞} E{e²(n)} = σ_v²    (25)
Thus, the new algorithm summarized in Table 1 has better convergence performance and a simpler structure in practical applications than the algorithm in [7].

4. Computational complexity of the proposed algorithm

In this section, we study the computational complexity of the proposed nonparametric variable step-size NLMS algorithm. Since the number of additions is comparable to the number of multiplications, we consider only the number of real multiplications. It is well known that the LMS update requires 2L + 1 multiplications, L being the length of the filter. Since the proposed algorithm can be seen as an LMS with a variable step size μ(n), it needs L + 2 further multiplications to adapt this parameter (see (15)), and two more products to update σ̂_e²(n) as in (16). The computational complexity of the proposed algorithm is therefore (2L + 1) + (L + 2) + 2 = 3L + 5 multiplications per iteration, while the standard NLMS requires 3L + 2; the complexity thus grows only marginally.

5. Simulation results

In this section, the performance of the NVS–NLMS algorithm is compared with the standard NLMS algorithm and the class of variable step-size NLMS algorithms [4–7] in a system identification setting. The cases of uncorrelated and correlated data in stationary and nonstationary environments are demonstrated. Parameters for these algorithms are selected to produce a comparable level of misadjustment, and the choice of these parameters is also guided by the values recommended in the corresponding publications. All simulation plots are obtained by ensemble averaging 500 independent runs. The unknown system is assumed to be FIR with a length of 7. The system noise v(n) is a white Gaussian sequence with zero mean and known variance σ_v² = 0.01.

5.1. White Gaussian input for a stationary environment

In this case, white Gaussian noise with zero mean and unit variance is used as the input signal and the system coefficients are time invariant. To obtain comparable misadjustment, we use μ = 0.4 for the NLMS algorithm. Fig. 1 shows the MSE behaviour of all algorithms for this case.
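The ensemble-averaged learning curves described here might be produced along the following lines, reusing the nvs_nlms sketch given after Table 1 (500 runs, length-7 FIR plant, σ_v² = 0.01, as stated in the text; the iteration count and seed are arbitrary):

```python
import numpy as np

# Ensemble-averaged MSE for the stationary white-input case of Section 5.1,
# assuming the nvs_nlms() sketch given earlier.
runs, N, L, sigma_v2 = 500, 2000, 7, 0.01
rng = np.random.default_rng(42)
w0 = rng.standard_normal(L)                          # unknown length-7 FIR plant
mse = np.zeros(N)
for _ in range(runs):
    x = rng.standard_normal(N)                       # white Gaussian input
    v = np.sqrt(sigma_v2) * rng.standard_normal(N)   # measurement noise, var 0.01
    y = np.convolve(x, w0)[:N] + v                   # system output, Eq. (3)
    _, e = nvs_nlms(x, y, L, sigma_v2)
    mse += e**2
mse_db = 10 * np.log10(mse / runs + 1e-12)           # MSE learning curve in dB
```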
Fig. 1. Comparison of MSE behaviors of all algorithms for white Gaussian input and stationary environment.
In Fig. 1(a), it is observed that at the beginning of the experiment the convergence speeds of the two nonparametric algorithms are almost equivalent (the nonparametric variable step-size NLMS algorithm of [7] is denoted by VSS–NLMS, Line 2). However, during the transition toward steady state, the proposed NVS–NLMS algorithm provides the fastest convergence of the three algorithms. Fig. 1(b) compares the convergence of the NVS–NLMS algorithm with that of the GASS algorithm of [4] and the GNGD algorithm of [6]. Under the same experimental conditions as in Fig. 1(a), it is clear from Fig. 1(b) that the NVS–NLMS algorithm has better convergence performance and lower final misadjustment than GASS and GNGD.

5.2. White Gaussian input and an abrupt change in the system coefficients

Tracking is a very important issue in adaptive algorithms. In system identification applications it is essential that an adaptive filter track quickly, since impulse responses are not stationary. In this case, an abrupt change is introduced at the 500th iteration while all remaining conditions are the same as in Section 5.1. Fig. 2(a) and (b) shows the robustness of the proposed NVS–NLMS algorithm, which sustains the fastest convergence speed after the abrupt change in the plant. Note that the GNGD algorithm also quickly detects the abrupt change but provides slightly slower tracking than NVS–NLMS.
Fig. 2. Comparison of MSE behavior of all algorithms with an abrupt change in system coefficients.
Fig. 3. Comparison of MSE behavior of all algorithms with correlated input and stationary environment.
5.3. Correlated input for a stationary environment

In this simulation the input signal is correlated; it was generated by:
x(n) = 0.9x(n−1) + r(n)    (26)
where the signal r(n) is white Gaussian noise with zero mean and unit variance, which ensures independence between r(n) and the internal system noise v(n). To obtain a comparable misadjustment, we used μ = 0.04 for the NLMS algorithm. Fig. 3(a) and (b) shows the MSE behaviour of the compared algorithms. Observe that the GNGD and NVS–NLMS algorithms both have faster initial convergence than the other algorithms in a correlated input environment. However, when the abrupt change occurs, GNGD becomes slower because it is highly sensitive to this change. Meanwhile, the proposed NVS–NLMS algorithm achieves the lowest MSE of the tested algorithms.
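For reference, the correlated input of (26) can be generated with a direct sketch of the AR(1) recursion (length and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5000
r = rng.standard_normal(N)          # unit-variance white Gaussian r(n)
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + r[n]    # Eq. (26)
```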
5.4. White Gaussian input for a nonstationary environment

A time-varying system is modeled, whose coefficients vary according to the random-walk process:

w(n+1) = w(n) + c(n)    (27)
where c(n) is white Gaussian noise with zero mean and small variance 0.01. Comparable misadjustment was obtained by setting μ = 0.2 for the NLMS algorithm.
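The random-walk plant of (27) may be simulated as follows (a sketch; the variance is the value stated in the text, and the array layout is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
L, N, var_c = 7, 2000, 0.01
w = np.zeros((N, L))
w[0] = rng.standard_normal(L)                      # initial plant coefficients
for n in range(N - 1):
    c = np.sqrt(var_c) * rng.standard_normal(L)    # increment c(n)
    w[n + 1] = w[n] + c                            # Eq. (27)
```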
Fig. 4. Comparison of MSE behavior of all algorithms with white Gaussian input and nonstationary environment.
Fig. 4 shows that all of these algorithms except NLMS achieve fast initial convergence. However, only the proposed NVS–NLMS and GNGD algorithms maintain their convergence speed in the later stages of the transition. In the final stage of convergence, the NVS–NLMS and VSS–NLMS algorithms have almost the same MSE behaviour, and the minimum MSE level obtained by these algorithms is approximately −25 dB.

6. Conclusion

In this paper, a new NVS–NLMS algorithm employing a time-varying step size in the standard NLMS weight update equation was proposed. The step size of the algorithm is adjusted according to the square of a time-averaged estimate of the autocorrelation of e_a(n) and e_p(n). As a result, the algorithm can effectively sense proximity to the optimum solution independently of uncorrelated measurement noise. The performance of the algorithm was compared with that of the standard NLMS algorithm as well as other variable step-size NLMS algorithms through simulations. This paper shows that our algorithm provides a significant improvement in convergence rate over other same-class algorithms in a stationary environment for the same excess MSE, in both high and low SNR environments. Meanwhile, its performance in nonstationary cases is comparable with the standard NLMS algorithm.

Acknowledgements

This work is sponsored by the National Natural Science Foundation of China–BaoSteel Conjunct Foundation No. 50974145 and the Natural Science Foundation of Liaoning Province No. 20092012.

References

[1] M. Sugisaka, Adaptive Chandrasekhar filter for linear discrete-time stationary stochastic systems, Appl. Math. Comput. 69 (1) (1995) 137–145.
[2] H.L. Yang, W.R. Wu, Multirate adaptive filtering for low complexity DS/CDMA code acquisition, Signal Process. 89 (6) (2009) 1162–1175.
[3] S. Puthusserypady, T. Ratnarajah, Robust adaptive techniques for minimization of EOG artefacts from EEG signals, Signal Process. 86 (9) (2006) 2351–2363.
[4] V.J. Mathews, Z. Xie, A stochastic gradient adaptive filter with gradient adaptive step size, IEEE Trans. Signal Process. 41 (6) (1993) 2075–2087.
[5] A. Benveniste, M. Metivier, P. Priouret, Adaptive Algorithms and Stochastic Approximation, Springer-Verlag, New York, 1990.
[6] D.P. Mandic, A generalized normalized gradient descent algorithm, IEEE Signal Process. Lett. 11 (2) (2004) 115–118.
[7] J. Benesty, H. Rey, L.R. Vega, S. Tressens, A nonparametric VSS NLMS algorithm, IEEE Signal Process. Lett. 13 (10) (2006) 581–584.
[8] M.H. Costa, J.C.M. Bermudez, A noise resilient variable step-size LMS algorithm, Signal Process. 88 (3) (2008) 733–748.
[9] H.M. Habib, E.R. El-Zahar, Variable step size initial value algorithm for singular perturbation problems using locally exact integration, Appl. Math. Comput. 200 (1) (2008) 330–340.
[10] A.I. Sulyman, A. Zerguine, Convergence and steady-state analysis of a variable step-size NLMS algorithm, Signal Process. 83 (6) (2003) 1255–1273.
[11] G.C. Kizilkan, K. Aydin, A new variable step size algorithm for Cauchy problem, Appl. Math. Comput. 183 (2) (2006) 878–884.
[12] H. Rey, L.R. Vega, S. Tressens, J. Benesty, Optimum variable explicit regularized affine projection algorithm, in: Proc. 2006 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, Toulouse, 2006, pp. 197–200.