IFAC PapersOnLine 52-24 (2019) 103–106
Zero Attracting Maximum Total Correntropy Algorithm for Sparse System Identification

Lei Li*, Haiquan Zhao*

*The Key Laboratory of Magnetic Suspension Technology and Maglev Vehicle, Ministry of Education, and the School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 610031, China (e-mail: leili_swjtu@126.com, [email protected]). Corresponding author: Haiquan Zhao.

Abstract: Recently, a robust maximum total correntropy (MTC) adaptive filtering algorithm has been applied to the errors-in-variables (EIV) model, in which both input and output data are contaminated with noise. As an extension of the maximum correntropy criterion (MCC), the MTC algorithm shows desirable performance in non-Gaussian noise environments. However, the MTC algorithm may suffer from performance deterioration in sparse systems. To overcome this drawback, a robust and sparse adaptive filtering algorithm, called zero attracting maximum total correntropy (ZA-MTC), is derived in this brief by adding an $l_1$ norm penalty term to the maximum total correntropy cost. In addition, in the reweighted version, a log-sum function is employed to replace the $l_1$ norm penalty term. Simulation results demonstrate the advantages of the proposed algorithms under sparsity assumptions on the unknown parameter vector.

© 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Maximum total correntropy, Sparse adaptive filtering, Zero attracting, Impulsive noise suppression, Noisy input.

1. INTRODUCTION

A sparse system is one whose impulse response contains many near-zero coefficients and only a few large ones. Sparse systems have found many applications, such as acoustic and network echo cancellation and communication channel identification (Carbonelli et al. 2007, Loganathan et al. 2009). However, regular adaptive algorithms have no significant advantage in sparse system identification because they make no use of the sparse characteristic. Thus, many sparse adaptive algorithms have been derived in recent decades by exploiting a priori knowledge of the sparsity (Chen et al. 2009, Eksioglu 2011, Eksioglu and Tanc 2011, Kalouptsidis et al. 2011, Gu et al. 2013, Zhang et al. 2016, Shi and Zhao 2018, Shi et al. 2019). A sparse least mean square (LMS) algorithm was derived by adding a convex approximation of the $l_0$ norm penalty to the original cost function (Gu et al. 2013). The $l_1$ norm and log-sum terms were also used to regularize the LMS algorithm toward sparse solutions (Chen et al. 2009). In addition, a sparse recursive least squares (RLS) algorithm was developed by adding a weighted $l_1$ norm penalty to the RLS cost function (Eksioglu 2011). More recently, a robust and sparse convex regularized recursive maximum correntropy (CR-RMC) algorithm was derived by using a general convex function to regularize the maximum correntropy criterion (MCC); it shows strong robustness in non-Gaussian noise environments (Zhang et al. 2016).

The algorithms mentioned above usually perform well under the conventional assumption that the input signal is observed without noise and that noise is confined to the output signal of the system. However, their performance may deteriorate seriously in the errors-in-variables (EIV) model, in which both input and output data are contaminated with noise, especially when the signals are disturbed by impulsive noise (Davila 2002, Arablouei et al. 2014). In an attempt to deal with non-Gaussian noise in both input and output data, two robust adaptive algorithms have been proposed in recent years (Shen and Li 2015, Wang et al. 2017). As an extension of the minimum error entropy (MEE) criterion, the minimum total error entropy (MTEE) algorithm was proposed (Shen and Li 2015). Further, a new maximum total correntropy (MTC) algorithm was proposed, which has lower computational complexity and can achieve similar or even better performance than the MTEE algorithm (Wang et al. 2017). The goal of this work is to present a robust and sparse adaptive filtering algorithm, called zero attracting maximum total correntropy (ZA-MTC), which is derived by adding an $l_1$ norm penalty term to the maximum total correntropy cost. Furthermore, by utilizing a log-sum function to replace the $l_1$ norm penalty term, a reweighted zero attracting maximum total correntropy (RZA-MTC) algorithm is also proposed. The new algorithms achieve excellent performance for sparse system identification and show strong robustness when both input and output data are disturbed by noise.

The rest of this brief is organized as follows. In Section II, we briefly review the MTC algorithm. In Section III, the ZA-MTC algorithm and its reweighted version are derived in detail. Simulation results are shown in Section IV. Finally, the conclusion is given in Section V.
2. REVIEW OF MAXIMUM TOTAL CORRENTROPY ALGORITHM
Consider an unknown L-dimensional vector that satisfies the following linear model

$$d(n) = \mathbf{h}^T \mathbf{x}(n) \qquad (1)$$

where $\mathbf{h} \in \mathbb{R}^{L \times 1}$ is the unknown system vector to be estimated, $\mathbf{x}(n) \in \mathbb{R}^{L \times 1}$ is the input vector at time $n$, $d(n) \in \mathbb{R}$ is the corresponding output, and the superscript $T$ denotes vector transposition. In the EIV model, both the input and output signals are assumed to be disturbed by noise, which is described as

$$\tilde{\mathbf{x}}(n) = \mathbf{x}(n) + \mathbf{u}(n), \qquad \tilde{d}(n) = d(n) + v(n) \qquad (2)$$

where $\mathbf{u}(n) \in \mathbb{R}^{L \times 1}$ is the zero-mean input measurement noise vector with auto-covariance matrix $\sigma_i^2 \mathbf{I}$ ($\mathbf{I}$ denotes the identity matrix) and $v(n) \in \mathbb{R}$ stands for the output measurement noise with variance $\sigma_o^2$. The noises $\mathbf{u}(n)$ and $v(n)$ are assumed to be independent of the input signal.

Under the above linear model, the regular MTC cost function is defined as

$$J_{mtc}(n) = \exp\left(-\frac{\big(\tilde{d}(n) - \mathbf{w}^T(n)\tilde{\mathbf{x}}(n)\big)^2}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) = \exp\left(-\frac{e^2(n)}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) \qquad (3)$$

where $\sigma$ is the kernel width and the parameter $\lambda = \sigma_o^2 / \sigma_i^2$. In addition, $\bar{\mathbf{w}}(n) = \left[\mathbf{w}^T(n), \sqrt{\lambda}\right]^T$ is the modified augmented weight vector and $e(n) = \tilde{d}(n) - \mathbf{w}^T(n)\tilde{\mathbf{x}}(n)$ is the output error.
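To make the data model concrete, the following Python sketch draws one noisy input/output pair from the EIV model (1)-(2). It is a minimal illustration, not code from the paper; the sparse system h and the noise variances are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 16
h = np.zeros(L)
h[:4] = 1.0                      # hypothetical sparse system: 4 unit taps, 12 zeros
sigma_i2 = 0.1                   # input measurement noise variance (assumed)
sigma_o2 = 0.1                   # output measurement noise variance (assumed)

x = rng.standard_normal(L)                                # clean input vector x(n)
d = h @ x                                                 # clean output d(n) = h^T x(n), Eq. (1)
x_tilde = x + np.sqrt(sigma_i2) * rng.standard_normal(L)  # noisy input, Eq. (2)
d_tilde = d + np.sqrt(sigma_o2) * rng.standard_normal()   # noisy output, Eq. (2)
```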
3. PROPOSED ALGORITHM

3.1 The Zero-Attracting MTC Algorithm

Inspired by the $l_1$ norm penalty on the weight vector, we propose a new robust adaptive algorithm called the zero-attracting maximum total correntropy (ZA-MTC) algorithm, which is derived based on the following cost function

$$J_1(n) = \exp\left(-\frac{e^2(n)}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) - \gamma \|\mathbf{w}(n)\|_1 \qquad (4)$$

where $\gamma$ is a positive number that balances the MTC cost against the $l_1$ norm penalty term. The larger $\gamma$ is, the more influence the penalty term has on the adaptation. Applying the gradient ascent method to maximize (4), the ZA-MTC filter update is derived as

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \eta \exp\left(-\frac{e^2(n)}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) \frac{\|\bar{\mathbf{w}}(n)\|^2 e(n)\tilde{\mathbf{x}}(n) + e^2(n)\mathbf{w}(n)}{\|\bar{\mathbf{w}}(n)\|^4} - \rho\,\mathrm{sgn}\big(\mathbf{w}(n)\big) \qquad (5)$$

where $\eta = \mu / \sigma^2$ is the step-size parameter with $\mu > 0$, $\rho = \eta\gamma$, and $\mathrm{sgn}(\cdot)$ is a component-wise sign function defined as

$$\mathrm{sgn}(x) = \begin{cases} x/|x|, & x \neq 0 \\ 0, & x = 0 \end{cases} \qquad (6)$$

Compared to the standard MTC algorithm, the ZA-MTC algorithm has an additional term $-\rho\,\mathrm{sgn}(\mathbf{w}(n))$, which always attracts the tap coefficients toward zero. Intuitively, this additional term speeds up convergence when the majority of the coefficients of the unknown vector $\mathbf{h}$ are zero.
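A minimal Python sketch of one ZA-MTC iteration per (4)-(6) follows. It assumes the data model of Section 2 and user-chosen values for eta, rho, sigma, and the variance ratio lam; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def za_mtc_update(w, x_tilde, d_tilde, eta, rho, sigma, lam):
    """One ZA-MTC iteration, Eqs. (5)-(6); a sketch, not the authors' code.

    w       : current weights w(n), shape (L,)
    x_tilde : noisy input vector, shape (L,)
    d_tilde : noisy desired output, scalar
    eta     : step size (eta = mu / sigma^2)
    rho     : zero-attracting strength (rho = eta * gamma)
    sigma   : correntropy kernel width
    lam     : noise-variance ratio sigma_o^2 / sigma_i^2
    """
    e = d_tilde - w @ x_tilde                  # output error e(n)
    wbar_sq = w @ w + lam                      # ||w_bar(n)||^2 with w_bar = [w^T, sqrt(lam)]^T
    kernel = np.exp(-e**2 / (2 * sigma**2 * wbar_sq))        # correntropy factor
    grad = (wbar_sq * e * x_tilde + e**2 * w) / wbar_sq**2   # MTC gradient term of (5)
    return w + eta * kernel * grad - rho * np.sign(w)        # np.sign matches (6)
```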
3.2 The Reweighted Zero-Attracting MTC Algorithm

Since all the taps are forced toward zero uniformly, the performance of the ZA-MTC algorithm would deteriorate in non-sparse or less sparse systems. Motivated by the reweighting method (Chen et al. 2009), the reweighted zero attracting maximum total correntropy (RZA-MTC) algorithm is derived by maximizing the following cost function

$$J_2(n) = \exp\left(-\frac{e^2(n)}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) - \gamma \sum_{i=1}^{L} \log\big(1 + \varepsilon |w_i(n)|\big) \qquad (7)$$

where $\varepsilon$ is a positive constant. Thus, the weight update of the RZA-MTC algorithm, $w_i(n+1) = w_i(n) + \eta\,\partial J_2(n)/\partial w_i(n)$, can be easily derived as

$$w_i(n+1) = w_i(n) + \eta \exp\left(-\frac{e^2(n)}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) \frac{\|\bar{\mathbf{w}}(n)\|^2 e(n)\tilde{x}_i(n) + e^2(n) w_i(n)}{\|\bar{\mathbf{w}}(n)\|^4} - \rho \frac{\mathrm{sgn}\big(w_i(n)\big)}{1 + \varepsilon |w_i(n)|} \qquad (8)$$

or equivalently, in vector form,

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \eta \exp\left(-\frac{e^2(n)}{2\sigma^2 \|\bar{\mathbf{w}}(n)\|^2}\right) \frac{\|\bar{\mathbf{w}}(n)\|^2 e(n)\tilde{\mathbf{x}}(n) + e^2(n)\mathbf{w}(n)}{\|\bar{\mathbf{w}}(n)\|^4} - \rho \frac{\mathrm{sgn}\big(\mathbf{w}(n)\big)}{1 + \varepsilon |\mathbf{w}(n)|} \qquad (9)$$

where $\eta = \mu / \sigma^2$, $\rho = \eta\gamma\varepsilon$, and the absolute value and division in the last term are applied element-wise. The RZA-MTC algorithm selectively shrinks taps with small magnitudes. In addition, the log-sum penalty not only shrinks small weight coefficients toward zero but also distinguishes non-zero coefficients, because it reflects the effect of the amplitudes instead of directly taking the signs of the coefficients. Further, it can be observed that the reweighted zero attractor takes effect on those taps whose magnitudes are comparable to $1/\varepsilon$, and has little effect on the taps with $|w_i(n)| \gg 1/\varepsilon$. In this way, the RZA-MTC algorithm still has excellent convergence performance when the system is not sparse.
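The update (9) differs from (5) only in its zero attractor. A short sketch of that term follows, reusing the notation of the ZA-MTC sketch above (eps standing in for ε); again an illustration, not the authors' code.

```python
import numpy as np

def rza_attractor(w, rho, eps):
    """Reweighted zero attractor of Eq. (9): -rho * sgn(w) / (1 + eps*|w|).
    It mainly shrinks taps with |w_i| comparable to 1/eps and leaves taps
    with |w_i| >> 1/eps almost untouched. A sketch, not the authors' code."""
    return -rho * np.sign(w) / (1.0 + eps * np.abs(w))

# Replacing the term "- rho * np.sign(w)" in za_mtc_update with
# "+ rza_attractor(w, rho, eps)" yields the RZA-MTC update of Eq. (9).
```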
4. COMPUTER SIMULATIONS

In this section, computer simulations are carried out to verify the performance of the proposed algorithms for sparse system identification when both input and output signals are disturbed by noise. The input measurement noise is zero-mean Gaussian with variance $\sigma_i^2$ set to 0.1. A Gaussian mixture model (GMM) is utilized as the distribution of the non-Gaussian output noise. The probability density function (PDF) of the GMM noise $v$ is defined as (Wang et al. 2017)

$$p(v) = (1-c)\, N(0, \sigma_A^2) + c\, N(0, \sigma_B^2) \qquad (10)$$

where $N(0, \sigma_j^2)$ ($j = A, B$) denotes a zero-mean Gaussian distribution with variance $\sigma_j^2$. Here $\sigma_A^2$ stands for the ordinary noise variance, and $\sigma_B^2$ is set to a large value to represent the distribution of large outliers. The parameter $c$ controls the occurrence probability of large outliers. For the sake of fairness, we set $c = 0.05$, $\sigma_A^2 = 0.1$, and $\sigma_B^2 = 10$ in all simulations. In addition, the kernel width $\sigma$ of all algorithms is set to 0.8. The normalized mean-square deviation (NMSD), defined as $\mathrm{NMSD}(n) = 10\log_{10}\big[\|\mathbf{w}(n) - \mathbf{h}\|^2 / \|\mathbf{h}\|^2\big]$, is employed to evaluate algorithm performance. All results are the average of 200 independent Monte Carlo runs.
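For reference, the impulsive noise model (10) and the NMSD metric can be realized as below; a minimal sketch with the parameter values stated above (the function names are ours, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(1)

def gmm_noise(n, c=0.05, var_a=0.1, var_b=10.0):
    """Draw n samples from p(v) = (1-c)*N(0, var_a) + c*N(0, var_b), Eq. (10)."""
    outlier = rng.random(n) < c                 # Bernoulli(c) picks the outlier component
    std = np.where(outlier, np.sqrt(var_b), np.sqrt(var_a))
    return std * rng.standard_normal(n)

def nmsd_db(w, h):
    """NMSD(n) = 10*log10(||w(n) - h||^2 / ||h||^2), in dB."""
    return 10.0 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2))
```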
4.1 Convergence Performance Comparison

In this subsection, we compare the performance of the novel zero-attracting algorithms with the regular MTC and regular MCC algorithms. The sparse system vector $\mathbf{h}$ to be identified has a total of $L = 16$ taps. We randomly set $S = 4$ of them to 1 and set the others to 0. Two common models are used to generate the input signals: one is the zero-mean unit-variance Gaussian signal, and the other is an AR(1) process serving as a correlated signal. As can be seen from Fig.1, the proposed ZA-MTC and RZA-MTC algorithms outperform the other algorithms. The RZA-MTC algorithm provides faster convergence than the ZA-MTC algorithm owing to its selective shrinkage.

Fig.1. NMSD learning curves of the proposed algorithms with Gaussian signal input. For the ZA-MTC algorithm: ρ = 0.0078; for the RZA-MTC algorithm: ρ = 0.0062, ε = 10.

In Fig.2, the AR(1) process is generated by filtering a zero-mean Gaussian random sequence through the first-order system $G(z) = 1/(1 - 0.8z^{-1})$. As expected, the RZA-MTC algorithm achieves the best performance, and the ZA-MTC algorithm also shows lower steady-state errors than the regular MTC and MCC algorithms.

Fig.2. NMSD learning curves of the proposed algorithms with correlated signal input. For the ZA-MTC algorithm: ρ = 0.0078; for the RZA-MTC algorithm: ρ = 0.0062, ε = 10.

4.2 The Effect of Sparsity

In this subsection, another sparse system to be identified has a total of $L = 32$ taps. We compare the performance of ZA-MTC and RZA-MTC for different values of S (S = 2, 8, 16 and 32). The average learning curves in terms of NMSD are shown in Fig.3 and Fig.4. It can be seen that ZA-MTC and RZA-MTC achieve better convergence performance when the sparsity is higher.

Fig.3. NMSD learning curves of the ZA-MTC algorithm for different values of S. For the ZA-MTC algorithm: ρ = 0.0078.

Fig.4. NMSD learning curves of the RZA-MTC algorithm for different values of S. For the RZA-MTC algorithm: ρ = 0.003, ε = 10.
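The experimental setup behind Figs. 1-4 can be sketched as follows; a hedged reconstruction of the described setup (random placement of the S unit taps and the AR(1) correlated input), not the authors' script.

```python
import numpy as np

rng = np.random.default_rng(2)

def sparse_system(L=32, S=8):
    """System vector with S randomly placed unit taps and L-S zeros (Sec. 4)."""
    h = np.zeros(L)
    h[rng.choice(L, size=S, replace=False)] = 1.0
    return h

def ar1_input(n, a=0.8):
    """Correlated input: white Gaussian noise filtered by G(z) = 1/(1 - a z^-1)."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for k in range(1, n):
        x[k] = a * x[k - 1] + rng.standard_normal()
    return x
```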
5. CONCLUSIONS

Considering the effect of sparsity on algorithm performance, a robust and sparse zero attracting maximum total correntropy (ZA-MTC) algorithm is derived in this brief by adding an $l_1$ norm penalty term to the maximum total correntropy (MTC) cost. In addition, as its reweighted version, a reweighted zero attracting maximum total correntropy (RZA-MTC) algorithm is also proposed. Simulation results demonstrate the advantages of the proposed algorithms for sparse system identification when both input and output data are disturbed by noise.

ACKNOWLEDGMENTS

This work was partially supported by the National Science Foundation of P.R. China (Grants 61871461, 61571374, 61433011) and the Sichuan Science and Technology Program (Grant 19YYJC0681).

REFERENCES
Arablouei, R., Werner, S., & Dogancay, K. (2014). Analysis of the gradient-descent total least-squares adaptive filtering algorithm. IEEE Transactions on Signal Processing, 62(5), 1256-1264.
Carbonelli, C., Vedantam, S., & Mitra, U. (2007). Sparse channel estimation with zero tap detection. IEEE Transactions on Wireless Communications, 6(5), 1743-1763.
Chen, Y., Gu, Y., & Hero, A.O. (2009). Sparse LMS for system identification. IEEE International Conference on Acoustics, Speech and Signal Processing, 3125-3128.
Davila, C.E. (2002). An efficient recursive total least squares algorithm for FIR adaptive filtering. IEEE Transactions on Signal Processing, 42(2), 268-280.
Eksioglu, E.M. (2011). Sparsity regularised recursive least squares adaptive filtering. IET Signal Processing, 5(5), 480-487.
Eksioglu, E.M., & Tanc, A.K. (2011). RLS algorithm with convex regularization. IEEE Signal Processing Letters, 18(8), 470-473.
Gu, Y., Jin, J., & Mei, S. (2013). l0 norm constraint LMS algorithm for sparse system identification. IEEE Signal Processing Letters, 16(9), 774-777.
Kalouptsidis, N., Mileounis, G., Babadi, B., & Tarokh, V. (2011). Adaptive algorithms for sparse system identification. Signal Processing, 91(8), 1910-1919.
Loganathan, P., Khong, A.W.H., & Naylor, P.A. (2009). A class of sparseness-controlled algorithms for echo cancellation. IEEE Transactions on Audio, Speech and Language Processing, 17(8), 1591-1601.
Shen, P., & Li, C. (2015). Minimum total error entropy method for parameter estimation. IEEE Transactions on Signal Processing, 63(15), 4079-4090.
Shi, L., & Zhao, H. (2018). Diffusion leaky zero attracting least mean square algorithm and its performance analysis. IEEE Access, 6, 56911-56923.
Shi, L., Zhao, H., Wang, W., & Lu, L. (2019). Combined regularization parameter for normalized LMS algorithm and its performance analysis. Signal Processing, 162, 75-82.
Wang, F., He, Y., Wang, S., & Chen, B. (2017). Maximum total correntropy adaptive filtering against heavy-tailed noises. Signal Processing, 141, 84-95.
Zhang, X., Li, K., Wu, Z., Fu, Y., Zhao, H., & Chen, B. (2016). Convex regularized recursive maximum correntropy algorithm. Signal Processing, 129, 12-16.