
Adaptive Combination of Affine Projection and NLMS Algorithms Based on Variable Step-Sizes

Chunhui Ren*, Zuozhen Wang, Zhiqin Zhao, Senior Member, IEEE
University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Digital Signal Processing (2016), http://dx.doi.org/10.1016/j.dsp.2016.07.022

*Corresponding author. E-mail address: [email protected].

Abstract

Considering that filters with variable step-sizes outperform their fixed step-size versions, and that combination algorithms with properly chosen mixing parameters outperform their components, a combination algorithm consisting of the improved variable step-size affine projection (I-VSSAP) and improved variable step-size normalized least mean square (I-VSSNLMS) algorithms, of which the former is fast and the latter is slow, is proposed for stationary environments. Unlike combination algorithms whose components are updated independently, the variable step-size components here are adapted using the same input and error signals, and their step-sizes are derived via the mean-square deviation (MSD) of the overall filter. Therefore, the components reflect the working state of the combination filter more accurately than their fixed step-size versions. The mixing parameter is obtained by minimizing the MSD and gradually decreases from 1 to 0, so the proposed algorithm performs similarly to the I-VSSAP in the initial stage and to the I-VSSNLMS in the steady-state. Simulations confirm that the proposed algorithm outperforms its components and its fixed step-size version. The mixing parameter is artificially set to 0 when the difference between the MSDs of two adjacent iterations falls below a user-defined threshold; the proposed algorithm then degrades to the I-VSSNLMS and exhibits a lower computational complexity than the AP algorithm.

Keywords: Affine projection algorithm; Normalized least mean square algorithm; Variable step-size; Mixing parameter; Mean-square deviation.

1. Introduction

Adaptive filters have been widely used in communication, control, acoustic processing, and many other fields [1-3]. Due to its low computational complexity and simple implementation, the normalized least mean square (NLMS) algorithm [4-13,31] has attracted considerable attention. Compared with the NLMS algorithm, the affine projection (AP) algorithm [13-20] updates the filter coefficients on the basis of multiple inputs and improves the convergence rate more effectively, especially for colored input signals. However, the AP algorithm suffers from high computational complexity and high steady-state misalignment [1,28-30,38-40]. The number of input vectors used (known as the projection order) in the family of AP algorithms governs the trade-off between convergence rate and steady-state misalignment [1,28-30]. A larger projection order results in a faster convergence rate but a higher


misalignment in the steady-state, and vice versa. The step-size is another important parameter that affects the performance of adaptive filtering algorithms [4,5,8-10,31-37]; it has an effect similar to that of the projection order. In order to accelerate the convergence rate, reduce the misalignment, and save computation time, many modified versions of the AP algorithm have been proposed from different perspectives [21-50]. In [21-27], Gauss-Seidel pseudo AP (GS-PAP) and Gauss-Seidel fast AP (GS-FAP) algorithms, which involve no matrix inversion, were proposed, especially for adaptive echo cancellation. In comparison with the conventional AP algorithm, the GS-PAPs and GS-FAPs reduce the computational complexity effectively without performance degradation. The computation of the AP algorithm is mainly determined by the projection order, and thus many algorithms with variable projection order have been proposed [28-30]. In the evolving order AP (E-AP) algorithm [28], the projection order is obtained by an evolutionary method that compares the output error with two thresholds (an upper and a lower threshold) associated with the mean-square error (MSE). The projection order of the E-AP has a large value in the initial stage and a small value in the steady-state, which results in both fast convergence and low misalignment. However, the E-AP cannot guarantee good performance in some situations, since it introduces an approximation that holds only for long tap-lengths of the adaptive filter, which means deterioration may occur in filters with short tap-lengths. The proposed variable step-size NLMS (PVSS-NLMS) in [10] obtains its step-size via a variation method described in [11], which requires some user-defined parameters related to the step-size (such as its initial, maximum, and minimum values and a small positive constant used to control its adaptive behavior) to adjust the convergence behavior. The recursion of the PVSS-NLMS is simple and robust, and effectively eliminates the trade-off between the convergence rate and the steady-state misalignment. With an appropriate choice of those user-defined parameters, the PVSS-NLMS outperforms the fixed step-size NLMS in terms of convergence rate and steady-state misalignment. In [31], the variable step-size AP (VSS-AP) and variable step-size NLMS (VSS-NLMS) algorithms were proposed, where the choice of the variable step-sizes guarantees that the mean-square deviation (MSD) undergoes the largest decrease from the current iteration to the next. In order to reduce the computational complexity, some empirical parameters

related to the projection order and signal-to-noise ratio (SNR) were introduced to substitute for the parameters related to the input signals during the iterations. Due to the significant effect of the projection order and the step-size on adaptive filters, many authors have taken both factors into account [38-40]. In [38], an AP algorithm with both variable step-size and variable projection order (named the VSS-VP-AP-1 algorithm) was proposed. The step-size is derived in the same way as in the VSS-AP, and the projection order is updated by comparing the variable step-size with two user-defined thresholds (an upper and a lower threshold). Therefore the step-size and the projection order are related to each other at each iteration. By selecting the two thresholds properly, the VSS-VP-AP-1 obtains large values of step-size and projection order in the initial stage and small values in the steady-state, which results in both a fast convergence rate and low steady-state misalignment. Different from the VSS-VP-AP-1, the algorithm proposed in [39] (named the VSS-VP-AP-2) updates its step-size and projection order independently. Based on the derivation in [37], the variable step-size is obtained by minimizing the MSD at each iteration, where an approximate relationship used to estimate the MSD requires a sufficiently high projection order. Different from the E-AP, the VSS-VP-AP-2 updates its projection order by comparing the current MSD with the steady-state MSD of its fixed step-size version. Similar to the VSS-VP-AP-1, the VSS-VP-AP-2 achieves a fast convergence rate and low misalignment due to the large values of step-size and projection order in the initial stage and small values in the steady-state. Recently, a popular approach that has attracted considerable attention is to adaptively combine two filters with complementary characteristics [46-50], where one filter is fast and the other is slow. The most important aspect of this scheme is the selection of the weighting factor (named the mixing parameter) that combines the two filters adaptively into an overall filter. Both in the convex combination [47] (where the mixing parameter is restricted to the range [0,1]) and in the affine combination [48] (where it is not), two LMS filters with different step-sizes were combined. The two components estimate the unknown system independently with the same input signals and their own error signals, and the combination algorithms in [47] and [48] achieve performance similar to that of their best component at each iteration. Different from [47] and [48], the combination scheme studied in [50] involves an AP algorithm with a large step-size and an NLMS algorithm with a small step-size (named the AP-NLMS algorithm). Each component of the AP-NLMS is adapted using the same input and error signals. The

mixing parameter is derived by minimizing the MSD at each iteration and is restricted to the range [0,1] to guarantee the stability of the AP-NLMS. In the initial stage, the mixing parameter is large and the AP-NLMS algorithm performs similarly to its fast component (the AP algorithm with a large step-size). In the steady-state, the AP-NLMS algorithm yields results similar to its component with low steady-state misalignment (the NLMS algorithm with a small step-size) due to the small mixing parameter. The AP-NLMS algorithm thus accelerates the convergence rate and reduces the misalignment more effectively than its components. In some cases, however, variable parameters derived from MSD or MSE analysis can better reflect the working state of the filters. When the mixing parameter decreases to 0, the AP-NLMS reduces to the NLMS with a fixed step-size. For adaptive filtering algorithms, a common approach to improving performance is to introduce variable parameters (such as a variable projection order [28-30], variable step-size [31-37], variable regularization parameter [41-45], or variable mixing parameter in combination schemes [46-50]) that reflect the working state of the filters instantaneously. Considering that a combination algorithm outperforms its components when the mixing parameter is properly selected, and that the AP algorithm with a variable step-size achieves a faster convergence rate and lower steady-state misalignment than its fixed step-size version, both the mixing parameter and the variable step-sizes are introduced simultaneously, and a new combination algorithm consisting of AP and NLMS algorithms with variable step-sizes is proposed in this paper. The variable step-sizes have been derived in [9] and [37], and the mixing parameter combining the two variable step-size filters is obtained by minimizing the MSD at each iteration. In order to distinguish the algorithms in [9] and [37] from the VSS-NLMS and VSS-AP in [31], we name the algorithm in [9] the improved variable step-size NLMS (I-VSSNLMS), and the algorithm in [37] the improved variable step-size AP (I-VSSAP). Different from the algorithms in [47] and [48], the components of the proposed algorithm are adapted using the same input and error signals, and their variable step-sizes are derived from the common MSD of the overall filter. Therefore, the components reflect the working state more accurately than their fixed step-size versions. For the proposed algorithm, a large mixing parameter (close to 1) in the initial stage results in a fast convergence rate, similar to the I-VSSAP algorithm, while a small mixing parameter (close to 0) in the steady-state provides low misalignment, since the I-VSSNLMS algorithm then plays the leading role. In order to further reduce the

computational complexity of the proposed algorithm, a user-defined threshold is introduced in the update of the mixing parameter. The mixing parameter is artificially set to 0 when the difference between the MSDs of two adjacent iterations is less than the threshold, and the proposed algorithm then degrades to the I-VSSNLMS algorithm. In this regard, the proposed algorithm achieves lower misalignment than the AP-NLMS algorithm when the filters nearly reach the steady-state. Therefore, the proposed algorithm obtains dual advantages, since both a variable mixing parameter and variable step-sizes are introduced. Simulation results show that the proposed algorithm achieves better performance in terms of convergence rate and steady-state misalignment than related algorithms such as the NLMS, AP, PVSS-NLMS, VSS-NLMS, VSS-AP, E-AP, and VSS-VP-AP-1. Moreover, the proposed algorithm has a lower computational complexity than the conventional AP algorithm.

The remainder of this paper is organized as follows. In the next section, the components of the new algorithm are presented. Sec. 3 gives the mathematical model and the derivation of the proposed algorithm. Simulation results that demonstrate the effectiveness of the proposed algorithm are given in Sec. 4. Conclusions are drawn in the final section.

The notation used in this paper is given below:

$(\cdot)^T$  transpose;
$(\cdot)^*$  complex conjugate;
$(\cdot)^H$  Hermitian transpose;
$\mathrm{Tr}(\cdot)$  matrix trace;
$\mathrm{E}[\cdot]$  expectation;
$\|\cdot\|$  Euclidean norm of a vector;
$\mathbb{C}^{m \times n}$  set of complex matrices of dimension $m \times n$;
$\mathbf{I}_M$  identity matrix of dimension $M \times M$.

2. The components of the proposed algorithm

Fig.1. Adaptive filtering problem

Figure 1 shows an adaptive filter used in system identification. The system input is $x_n$ and the corresponding measured output is $d_n$, possibly contaminated with measurement noise $v_n$ ($v_n$ is zero-mean white Gaussian with variance $\sigma_v^2$). $\mathbf{h} \in \mathbb{C}^{M \times 1}$ denotes the unknown weight vector which we want to estimate. The objective of the adaptive filter is to estimate a weight vector $\mathbf{w}_n \in \mathbb{C}^{M \times 1}$ such that the output $y_n = \mathbf{w}_n^H \mathbf{x}_n$ is as close as possible to the measured output $d_n = \mathbf{h}^H \mathbf{x}_n + v_n$, where $\mathbf{x}_n = [x_n, x_{n-1}, \ldots, x_{n-M+1}]^T$ is the input vector at the $n$th iteration.

The input matrix of the AP algorithm with projection order $K$ can be written as

$$\mathbf{X}_n = [\mathbf{x}_n, \mathbf{x}_{n-1}, \ldots, \mathbf{x}_{n-K+1}] \in \mathbb{C}^{M \times K}. \tag{1}$$

The measured output vector of the adaptive filter corresponding to $\mathbf{X}_n$ is

$$\mathbf{d}_n = \mathbf{X}_n^T \mathbf{h}^* + \mathbf{v}_n \in \mathbb{C}^{K \times 1}, \tag{2}$$

where $\mathbf{d}_n = [d_n, d_{n-1}, \ldots, d_{n-K+1}]^T$ and the measurement noise vector $\mathbf{v}_n = [v_n, v_{n-1}, \ldots, v_{n-K+1}]^T$.

The output error vector at instant $n$ is

$$\mathbf{e}_n = \mathbf{d}_n - \mathbf{X}_n^T \mathbf{w}_n^*, \tag{3}$$

where $\mathbf{e}_n = [e_n, e_{n-1}, \ldots, e_{n-K+1}]^T$.

The update equation of the AP algorithm is [14]

$$\mathbf{w}_{1,n+1} = \mathbf{w}_{1,n} + \mu_{1,n} \mathbf{X}_n \left[\mathbf{X}_n^H \mathbf{X}_n\right]^{-1} \mathbf{e}_n^*, \tag{4}$$

where $\mu_{1,n}$ is the step-size.

Setting the projection order $K = 1$, we have the update equation of the NLMS algorithm

$$\mathbf{w}_{2,n+1} = \mathbf{w}_{2,n} + \mu_{2,n} \frac{\mathbf{x}_n}{\|\mathbf{x}_n\|^2} e_n^*, \tag{5}$$

where $\mu_{2,n}$ is the step-size.

Aiming to accelerate the convergence and reduce the steady-state misalignment of the AP algorithm, the VSS-AP and VSS-NLMS algorithms were proposed in [31]. In order to reduce the computational complexity, some empirical parameters related to the projection order and SNR were introduced to substitute for the parameters related to the input signals during the iterations, which means that performance deterioration may occur when inappropriate empirical parameters are selected. We therefore choose the I-VSSNLMS [9] and I-VSSAP [37], which involve no empirical parameters, as the components of the proposed algorithm.

In order to facilitate subsequent analyses, we introduce the following definitions:

$$\mathbf{A}_{1,n} = \mathbf{X}_n \left[\mathbf{X}_n^H \mathbf{X}_n\right]^{-1} \mathbf{X}_n^H, \tag{6}$$

$$\mathbf{A}_{2,n} = \mathbf{x}_n \mathbf{x}_n^H / \|\mathbf{x}_n\|^2, \tag{7}$$

$$\mathbf{B}_{1,n} = \mathbf{X}_n \left[\mathbf{X}_n^H \mathbf{X}_n\right]^{-1}, \tag{8}$$

$$\mathbf{B}_{2,n} = \mathbf{x}_n / \|\mathbf{x}_n\|^2, \tag{9}$$

$$C_{1,n} = \mathrm{Tr}\left(\left[\mathbf{X}_n^H \mathbf{X}_n\right]^{-1}\right), \tag{10}$$

$$C_{2,n} = 1 / \|\mathbf{x}_n\|^2. \tag{11}$$

The variable step-sizes and the mixing parameter are determined by MSD analysis. The weight-error vector of $\mathbf{w}_n$ is defined as

$$\tilde{\mathbf{w}}_n = \mathbf{h} - \mathbf{w}_n. \tag{12}$$

The covariance matrix of $\tilde{\mathbf{w}}_n$ is defined as

$$\mathbf{P}_n \triangleq \mathrm{E}\left[\tilde{\mathbf{w}}_n \tilde{\mathbf{w}}_n^H\right], \tag{13}$$

and the MSD at the $n$th iteration is defined as

$$\mathrm{MSD}_n \triangleq \mathrm{Tr}\left(\mathrm{E}\left[\tilde{\mathbf{w}}_n \tilde{\mathbf{w}}_n^H\right]\right) = \mathrm{Tr}(\mathbf{P}_n) = p_n. \tag{14}$$

Assumption 1. $v_n$ is independent and identically distributed (i.i.d.), and statistically independent of $\mathbf{w}_n$ [50].

According to Assumption 1 and the derivation in [35], the step-size of the I-VSSAP is given as

$$\mu_{1,n} = \frac{K p_{1,n}}{K p_{1,n} + M \sigma_v^2 C_{1,n}} \in (0,1), \tag{15}$$

where the MSD of the I-VSSAP algorithm is

$$p_{1,n+1} = \left[1 - 2\mu_{1,n}\frac{K}{M} + \mu_{1,n}^2\frac{K}{M}\right] p_{1,n} + \mu_{1,n}^2 \sigma_v^2 C_{1,n}. \tag{16}$$

The derivation in [9] shows that the MSD of the NLMS is

$$p_{2,n+1} = \left(1 - \frac{2\mu_{2,n} - \mu_{2,n}^2}{\beta M}\right) p_{2,n} + \mu_{2,n}^2 \sigma_v^2 C_{2,n}, \tag{17}$$

where $\beta \ge 1$ is a positive constant related to the characteristics of the input signal $x_n$. Since the signal $x_n$ in this paper is generated by filtering a zero-mean white Gaussian random sequence through a second-order (linear) system, it still obeys a Gaussian distribution, and $\beta = 1$ is an appropriate choice [9]. Thus the MSD of the I-VSSNLMS can be rewritten as

$$p_{2,n+1} = \left(1 - \frac{2\mu_{2,n}}{M} + \frac{\mu_{2,n}^2}{M}\right) p_{2,n} + \mu_{2,n}^2 \sigma_v^2 C_{2,n}. \tag{18}$$

By minimizing the MSD, the step-size of the I-VSSNLMS is derived as

$$\mu_{2,n} = \frac{p_{2,n}}{p_{2,n} + M \sigma_v^2 C_{2,n}} \in (0,1). \tag{19}$$
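As a concrete illustration, the recursions (15), (16), (18) and (19) can be computed directly. The following Python sketch is ours (the function names are assumptions, not from the paper), and it assumes the noise variance $\sigma_v^2$ is known:

```python
import numpy as np

def i_vssap_step_size(p, K, M, sigma_v2, C1):
    """Variable step-size of the I-VSSAP, Eq. (15)."""
    return K * p / (K * p + M * sigma_v2 * C1)

def i_vssap_msd(p, mu1, K, M, sigma_v2, C1):
    """MSD recursion of the I-VSSAP, Eq. (16)."""
    return (1 - 2 * mu1 * K / M + mu1**2 * K / M) * p + mu1**2 * sigma_v2 * C1

def i_vssnlms_step_size(p, M, sigma_v2, C2):
    """Variable step-size of the I-VSSNLMS, Eq. (19)."""
    return p / (p + M * sigma_v2 * C2)

def i_vssnlms_msd(p, mu2, M, sigma_v2, C2):
    """MSD recursion of the I-VSSNLMS, Eq. (18), with beta = 1."""
    return (1 - 2 * mu2 / M + mu2**2 / M) * p + mu2**2 * sigma_v2 * C2

def C_parameters(X):
    """C_{1,n} and C_{2,n} of Eqs. (10) and (11); X is the M x K input matrix."""
    C1 = np.trace(np.linalg.inv(X.conj().T @ X)).real
    C2 = 1.0 / np.linalg.norm(X[:, 0]) ** 2
    return C1, C2
```

Both step-sizes lie in (0,1) by construction, which is what makes the stability condition derived later in (29) easy to satisfy.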

3. The Proposed Algorithm

The mathematical model and the stability condition of the new algorithm are first studied in this section. Then we derive the MSD and the variable mixing parameter of the new algorithm. Finally, a comparison of the computational complexities of the new algorithm and other AP algorithms is given.

3.1. The mathematical model

The I-VSSAP algorithm has the advantage of fast convergence in the initial stage, but the drawback of relatively high steady-state misalignment. The I-VSSNLMS algorithm, by contrast, retains the advantage of low steady-state error, but converges slowly. In order to exploit the advantages of both algorithms, a scheme for combining the I-VSSAP and I-VSSNLMS algorithms is proposed in this paper.

Fig.2. The structure of the proposed algorithm

A combination scheme with an optimal mixing parameter outperforms its components, i.e. we have $p_n \le \min\{p_{1,n}, p_{2,n}\}$ at any instant, where $p_n$ is the MSD of the overall filter. Similar conclusions were derived in [47,48], where the components (LMS filters with different step-sizes) estimate the unknown system independently. Different from the algorithms in [47,48], the proposed algorithm is adapted based on the same error signals, which also guarantees $p_n \le \min\{p_{1,n}, p_{2,n}\}$; a brief proof is given in Appendix A. Using the same error signals, the update equations of the I-VSSAP and I-VSSNLMS can be rewritten as

$$\mathbf{w}_{1,n+1} = \mathbf{w}_n + \mu_{1,n} \mathbf{B}_{1,n} \mathbf{e}_n^*, \tag{20}$$

$$\mathbf{w}_{2,n+1} = \mathbf{w}_n + \mu_{2,n} \mathbf{B}_{2,n} e_n^*. \tag{21}$$

A variable scalar mixing parameter $\lambda_n$ is introduced to synthesize the weight vector of the new scheme, which can be expressed as

$$\mathbf{w}_{n+1} = \lambda_n \mathbf{w}_{1,n+1} + (1 - \lambda_n) \mathbf{w}_{2,n+1}. \tag{22}$$

Substituting (20) and (21) into (22), we have

$$\mathbf{w}_{n+1} = \lambda_n \mathbf{w}_{1,n+1} + (1 - \lambda_n) \mathbf{w}_{2,n+1} = \mathbf{w}_n + \left[\lambda_n \mu_{1,n} \mathbf{B}_{1,n} \mathbf{e}_n^* + (1 - \lambda_n) \mu_{2,n} \mathbf{B}_{2,n} e_n^*\right]. \tag{23}$$

The mathematical model of the proposed algorithm corresponding to (20)-(23) is illustrated in Fig. 2. The selection of the mixing parameter $\lambda_n \in [0,1]$ is critical, as it directly affects the performance of the proposed algorithm.
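To make the adaptation with shared error signals concrete, a minimal Python sketch of one iteration of (20)-(23) is given below (the helper name and argument layout are our own assumptions, not part of the paper):

```python
import numpy as np

def combined_update(w, X, d, mu1, mu2, lam):
    """One iteration of Eqs. (20)-(23). w: current weights (M,);
    X: input matrix [x_n, ..., x_{n-K+1}] (M x K); d: measured outputs
    [d_n, ..., d_{n-K+1}] (K,); lam: mixing parameter lambda_n."""
    e = d - X.T @ w.conj()                      # shared error vector, Eq. (3)
    B1 = X @ np.linalg.inv(X.conj().T @ X)      # Eq. (8)
    B2 = X[:, 0] / np.linalg.norm(X[:, 0])**2   # Eq. (9)
    w1 = w + mu1 * (B1 @ e.conj())              # I-VSSAP component, Eq. (20)
    w2 = w + mu2 * B2 * np.conj(e[0])           # I-VSSNLMS component, Eq. (21)
    return lam * w1 + (1 - lam) * w2            # convex mixture, Eq. (22)
```

Note that both components start from the same $\mathbf{w}_n$ and use the same error of the overall filter, which is exactly the coupling that distinguishes this scheme from [47,48].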

3.2. Stability analysis

To facilitate subsequent analyses, we introduce the following definitions:

$$\bar{\mathbf{X}}_n = [\mathbf{x}_{n-1}, \mathbf{x}_{n-2}, \ldots, \mathbf{x}_{n-K+1}], \tag{24}$$

$$\bar{\mathbf{e}}_n = [e_{n-1}, e_{n-2}, \ldots, e_{n-K+1}]^T, \tag{25}$$

$$\mathbf{E}_n = \bar{\mathbf{X}}_n^T \mathbf{x}_n^* / \|\mathbf{x}_n\|^2, \tag{26}$$

$$a_n = \lambda_n \mu_{1,n} \in [0,1], \tag{27}$$

$$b_n = (1 - \lambda_n) \mu_{2,n} \in [0,1]. \tag{28}$$

In order to guarantee the stability of the proposed algorithm, the condition given in (29) must be satisfied (derivations are given in Appendix B):

$$c_{1,n} - c_{2,n}\|\mathbf{E}_n\|^2 \ge 0, \tag{29}$$

where

$$c_{1,n} = \begin{cases} 1 - (1 - \mu_{2,n})^2, & \lambda_n = 0 \\ 1 - (1 - \mu_{1,n})^2, & \lambda_n = 1 \\ a_n(2 - a_n)\left\{2(a_n + b_n) - (a_n + b_n)^2\right\}, & \text{otherwise} \end{cases} \tag{30}$$

$$c_{2,n} = \begin{cases} 0, & \lambda_n = 0 \text{ or } \lambda_n = 1 \\ b_n^2, & \text{otherwise} \end{cases} \tag{31}$$

and $c_{1,n} \ge 0$, $c_{2,n} \ge 0$.

As derived in [50], the cross-correlation between input signals was neglected (i.e. $\mathbf{E}_n = \mathbf{0}$) when analyzing the stability of the AP-NLMS. The proposed algorithm is obviously stable (i.e. $c_{1,n} - c_{2,n}\|\mathbf{E}_n\|^2 = c_{1,n} \ge 0$) if we follow the strategy of the AP-NLMS.

In order to reflect the stability accurately, the cross-correlation between input signals is taken into account in the proposed algorithm. According to (29), we define the convergence parameter $\mathrm{Convergence} = c_{1,n} - c_{2,n}\|\mathbf{E}_n\|^2$ to examine the stability of the proposed algorithm. Moreover, the learning curve of Convergence reflects the details of the convergence behavior of the proposed algorithm, as will be confirmed in Sec. 4.

3.3. Derivation of the MSD

The mixing parameter is determined by MSD analysis. Rewriting (23) in terms of the weight-error vector, we have

$$\tilde{\mathbf{w}}_{n+1} = \mathbf{\Phi}_n \tilde{\mathbf{w}}_n - \mathbf{\Psi}_n, \tag{32}$$

where

$$\mathbf{\Phi}_n = \mathbf{I}_M - \left\{\lambda_n \mu_{1,n} \mathbf{A}_{1,n} + (1 - \lambda_n)\mu_{2,n}\mathbf{A}_{2,n}\right\}, \tag{33}$$

$$\mathbf{\Psi}_n = \lambda_n \mu_{1,n} \mathbf{B}_{1,n} \mathbf{v}_n^* + (1 - \lambda_n)\mu_{2,n}\mathbf{B}_{2,n} v_n^*. \tag{34}$$

According to the definition in (13) and Assumption 1, the covariance matrix of $\tilde{\mathbf{w}}_{n+1}$ can be written as

$$\mathbf{P}_{n+1} = \mathbf{\Phi}_n \mathbf{P}_n \mathbf{\Phi}_n^H + \mathbf{\Delta}_n, \tag{35}$$

where

$$\begin{aligned} \mathbf{\Delta}_n = \mathrm{E}\left[\mathbf{\Psi}_n \mathbf{\Psi}_n^H\right] ={} & \lambda_n^2 \mu_{1,n}^2 \sigma_v^2 \mathbf{B}_{1,n}\mathbf{B}_{1,n}^H + (1-\lambda_n)^2 \mu_{2,n}^2 \sigma_v^2 \mathbf{B}_{2,n}\mathbf{B}_{2,n}^H \\ & + \lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n}\,\mathrm{E}\left\{\mathbf{B}_{1,n}\mathbf{v}_n^* v_n \mathbf{B}_{2,n}^H\right\} + \lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n}\,\mathrm{E}\left\{\mathbf{B}_{2,n} v_n^* \mathbf{v}_n^T \mathbf{B}_{1,n}^H\right\}. \end{aligned} \tag{36}$$

After taking the trace on both sides of (35), the MSD can be derived as

$$p_{n+1} = \mathrm{Tr}(\mathbf{P}_{n+1}) = \mathrm{Tr}\left(\mathbf{\Phi}_n \mathbf{P}_n \mathbf{\Phi}_n^H\right) + \mathrm{Tr}(\mathbf{\Delta}_n) = \mathrm{Tr}\left(\mathbf{\Phi}_n^H \mathbf{\Phi}_n \mathbf{P}_n\right) + \mathrm{Tr}(\mathbf{\Delta}_n). \tag{37}$$

3.3.1. Consider $\mathrm{Tr}(\mathbf{\Phi}_n^H \mathbf{\Phi}_n \mathbf{P}_n)$, the first term on the RHS of (37)

Expanding the term $\mathbf{\Phi}_n^H\mathbf{\Phi}_n$, we get

$$\mathbf{\Phi}_n^H\mathbf{\Phi}_n = \mathbf{I}_M - \left(2\lambda_n\mu_{1,n} - \lambda_n^2\mu_{1,n}^2\right)\mathbf{A}_{1,n} - \left[2(1-\lambda_n)\mu_{2,n} - (1-\lambda_n)^2\mu_{2,n}^2\right]\mathbf{A}_{2,n} + \lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n}\left[\mathbf{A}_{1,n}\mathbf{A}_{2,n} + \mathbf{A}_{2,n}\mathbf{A}_{1,n}\right]. \tag{38}$$

Therefore,

$$\begin{aligned} \mathrm{Tr}\left(\mathbf{\Phi}_n^H\mathbf{\Phi}_n\mathbf{P}_n\right) ={} & \mathrm{Tr}(\mathbf{P}_n) - \left(2\lambda_n\mu_{1,n} - \lambda_n^2\mu_{1,n}^2\right)\mathrm{Tr}(\mathbf{A}_{1,n}\mathbf{P}_n) - \left[2(1-\lambda_n)\mu_{2,n} - (1-\lambda_n)^2\mu_{2,n}^2\right]\mathrm{Tr}(\mathbf{A}_{2,n}\mathbf{P}_n) \\ & + \lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n}\left[\mathrm{Tr}(\mathbf{A}_{1,n}\mathbf{A}_{2,n}\mathbf{P}_n) + \mathrm{Tr}(\mathbf{A}_{2,n}\mathbf{A}_{1,n}\mathbf{P}_n)\right]. \end{aligned} \tag{39}$$

Assume that the signal $x_n$ is generated by filtering a zero-mean white Gaussian random sequence through a second-order (linear) system. Then, according to the derivations in [9,19,37], we have the following relationships:

$$\mathrm{Tr}(\mathbf{A}_{1,n}\mathbf{P}_n) \approx \frac{K}{M}\mathrm{Tr}(\mathbf{P}_n), \qquad \mathrm{Tr}(\mathbf{A}_{2,n}\mathbf{P}_n) \approx \frac{1}{M}\mathrm{Tr}(\mathbf{P}_n). \tag{40}$$

Furthermore, according to the analysis in [50], we get

$$\mathrm{Tr}(\mathbf{A}_{1,n}\mathbf{A}_{2,n}\mathbf{P}_n) \approx \frac{K}{M^2}\mathrm{Tr}(\mathbf{P}_n), \qquad \mathrm{Tr}(\mathbf{A}_{2,n}\mathbf{A}_{1,n}\mathbf{P}_n) \approx \frac{K}{M^2}\mathrm{Tr}(\mathbf{P}_n). \tag{41}$$

Substituting (40) and (41), equation (39) can be rewritten as

$$\mathrm{Tr}\left(\mathbf{\Phi}_n^H\mathbf{\Phi}_n\mathbf{P}_n\right) = \left\{1 - \lambda_n\frac{2K}{M}\left[\mu_{1,n}\left(1 - \frac{\mu_{2,n}}{M}\right) - \frac{\mu_{2,n}(1-\mu_{2,n})}{K}\right] + \lambda_n^2\frac{K}{M}\left[\mu_{1,n}\left(\mu_{1,n} - \frac{2\mu_{2,n}}{M}\right) + \frac{\mu_{2,n}^2}{K}\right] - \frac{\mu_{2,n}(2-\mu_{2,n})}{M}\right\} p_n. \tag{42}$$

3.3.2. Consider $\mathrm{Tr}(\mathbf{\Delta}_n)$, the second term on the RHS of (37)

In order to facilitate the analysis, we define the following column vector of dimension $K \times 1$:

$$\mathbf{u}_K = [1, 0, \ldots, 0]^T. \tag{43}$$

The relationships between $\mathbf{x}_n$ and $\mathbf{X}_n$, and between $v_n$ and $\mathbf{v}_n$, can be written as

$$\mathbf{x}_n = \mathbf{X}_n \mathbf{u}_K, \tag{44}$$

$$v_n = \mathbf{u}_K^T \mathbf{v}_n. \tag{45}$$

Then we have

$$\mathrm{E}\left\{\mathbf{B}_{2,n} v_n^* \mathbf{v}_n^T \mathbf{B}_{1,n}^H\right\} = \frac{\mathbf{X}_n \mathbf{u}_K \mathbf{u}_K^T \mathrm{E}\left[\mathbf{v}_n^* \mathbf{v}_n^T\right] \mathbf{X}_n^H}{\|\mathbf{x}_n\|^4} = \frac{\mathbf{x}_n \mathbf{x}_n^H}{\|\mathbf{x}_n\|^4}\sigma_v^2. \tag{46}$$

Taking the trace of both sides of (46), we get

$$\mathrm{Tr}\left(\mathrm{E}\left\{\mathbf{B}_{2,n} v_n^* \mathbf{v}_n^T \mathbf{B}_{1,n}^H\right\}\right) = \sigma_v^2 C_{2,n}. \tag{47}$$

Substituting (47) into (36), the term $\mathrm{Tr}(\mathbf{\Delta}_n)$ can be expressed as

$$\mathrm{Tr}(\mathbf{\Delta}_n) = \sigma_v^2 \lambda_n^2 \mu_{1,n}^2 C_{1,n} + \sigma_v^2 (1-\lambda_n)^2 \mu_{2,n}^2 C_{2,n} + 2\sigma_v^2 \lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n} C_{2,n}. \tag{48}$$

Therefore, the update recursion of the MSD in (37) is derived as

$$\begin{aligned} p_{n+1} ={} & \left\{1 - \lambda_n\frac{2K}{M}\left[\mu_{1,n}\left(1 - \frac{\mu_{2,n}}{M}\right) - \frac{\mu_{2,n}(1-\mu_{2,n})}{K}\right] + \lambda_n^2\frac{K}{M}\left[\mu_{1,n}\left(\mu_{1,n} - \frac{2\mu_{2,n}}{M}\right) + \frac{\mu_{2,n}^2}{K}\right] - \frac{\mu_{2,n}(2-\mu_{2,n})}{M}\right\} p_n \\ & + \sigma_v^2 \lambda_n^2 \mu_{1,n}^2 C_{1,n} + \sigma_v^2\left((1-\lambda_n)^2\mu_{2,n}^2 + 2\lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n}\right) C_{2,n}. \end{aligned} \tag{49}$$

3.4. The mixing parameter λn

The mixing parameter is obtained by minimizing the MSD at each iteration. Taking the partial derivative of (49) with respect to $\lambda_n$, the variable mixing parameter is derived as

$$\lambda_n' = \frac{\dfrac{K}{M}\left[\mu_{1,n}\left(1 - \dfrac{\mu_{2,n}}{M}\right) - \dfrac{\mu_{2,n}(1-\mu_{2,n})}{K}\right] p_n + \sigma_v^2 \mu_{2,n}(\mu_{2,n} - \mu_{1,n}) C_{2,n}}{\dfrac{K}{M}\left[\mu_{1,n}\left(\mu_{1,n} - \dfrac{2\mu_{2,n}}{M}\right) + \dfrac{\mu_{2,n}^2}{K}\right] p_n + \sigma_v^2\left[\mu_{1,n}^2 C_{1,n} + \left(\mu_{2,n}^2 - 2\mu_{1,n}\mu_{2,n}\right) C_{2,n}\right]}. \tag{50}$$

The step-sizes are derived via the common MSD of the proposed algorithm, so we have $p_{1,n} = p_{2,n} = p_n$, where $p_{1,n}$, $p_{2,n}$ and $p_n$ are the MSDs of the I-VSSAP, the I-VSSNLMS and the proposed algorithm at the $n$th iteration, respectively. Substituting (15), (19) and $p_{1,n} = p_{2,n} = p_n$ into (50), the mixing parameter can be rewritten as

$$\lambda_n' = \frac{\left(1 - \dfrac{1}{K}\right)\mu_{1,n} + \left(\dfrac{1}{K} - \dfrac{1}{M}\right)\mu_{1,n}\mu_{2,n}}{\left(1 - \dfrac{1}{K}\right)\mu_{1,n} + 2\left(\dfrac{1}{K} - \dfrac{1}{M}\right)\mu_{1,n}\mu_{2,n} - \dfrac{1}{K}\left(\mu_{1,n} - \mu_{2,n}\right)}. \tag{51}$$

As can be seen in (51), the mixing parameter $\lambda_n'$ is determined by the two step-sizes $\mu_{1,n}$ and $\mu_{2,n}$. The relationship between $\lambda_n'$, $\mu_{1,n}$ and $\mu_{2,n}$ is determined by the requirement $\partial p_{n+1}/\partial\lambda_n = 0$, which represents an underdetermined equation in the three quantities $(\lambda_n, \mu_{1,n}, \mu_{2,n})$. The solution of $\partial p_{n+1}/\partial\lambda_n = 0$ is unique when two of the three quantities are fixed. Deriving the step-sizes $(\mu_{1,n}, \mu_{2,n})$ from the single filters under the condition $p_{1,n} = p_{2,n} = p_n$ is therefore a wise choice, for three reasons:

(i) Deriving the step-sizes from the single filters guarantees the stability of the individual filters;
(ii) Since the step-sizes in (15) and (19) are restricted to the interval (0,1), the stability condition in (29) is easily satisfied;
(iii) Any values of $(\lambda_n, \mu_{1,n}, \mu_{2,n})$ that satisfy $\partial p_{n+1}/\partial\lambda_n = 0$ and (29) result in the desired performance.

In order to further reduce the computational complexity of the proposed algorithm, the MSDs at two adjacent iterations are compared with each other to restrict the value of the mixing parameter. The mixing parameter $\lambda_n$ is therefore modified as follows:

$$\lambda_n = \begin{cases} \lambda_n', & \lambda_n' > 0 \text{ and } p_{n-1} > (1+\gamma)p_n \\ 0, & \text{otherwise} \end{cases} \tag{52}$$

where $\gamma$ is a small positive value close to 0. When $\lambda_n = 0$, the proposed algorithm degrades to the I-VSSNLMS algorithm, and the step-size in (19) guarantees low misalignment in the steady-state. As introduced in (52), the choice of $\gamma$ depends on the desired precision: a large $\gamma$ results in slow convergence and low steady-state misalignment, and vice versa (this will be verified in Sec. 4).

Table I. The proposed algorithm

Initialization: $\mathbf{w}_0$, $\gamma$ and $p_0$.
For each iteration $n$:
1. The step-sizes of the I-VSSAP and I-VSSNLMS algorithms:
$$\mu_{1,n} = \frac{K p_n}{K p_n + M\sigma_v^2 C_{1,n}}, \qquad \mu_{2,n} = \frac{p_n}{p_n + M\sigma_v^2 C_{2,n}}.$$
2. The mixing parameter:
$$\lambda_n' = \frac{\left(1-\frac{1}{K}\right)\mu_{1,n} + \left(\frac{1}{K}-\frac{1}{M}\right)\mu_{1,n}\mu_{2,n}}{\left(1-\frac{1}{K}\right)\mu_{1,n} + 2\left(\frac{1}{K}-\frac{1}{M}\right)\mu_{1,n}\mu_{2,n} - \frac{1}{K}(\mu_{1,n}-\mu_{2,n})}, \qquad \lambda_n = \begin{cases}\lambda_n', & \lambda_n' > 0 \text{ and } p_{n-1} > (1+\gamma)p_n \\ 0, & \text{otherwise}\end{cases}$$
3. The weight vector:
$$\mathbf{w}_{n+1} = \mathbf{w}_n + \left[\lambda_n\mu_{1,n}\mathbf{B}_{1,n}\mathbf{e}_n^* + (1-\lambda_n)\mu_{2,n}\mathbf{B}_{2,n}e_n^*\right].$$
4. The MSD:
$$\begin{aligned} p_{n+1} ={} & \left\{1 - \lambda_n\frac{2K}{M}\left[\mu_{1,n}\left(1-\frac{\mu_{2,n}}{M}\right)-\frac{\mu_{2,n}(1-\mu_{2,n})}{K}\right] + \lambda_n^2\frac{K}{M}\left[\mu_{1,n}\left(\mu_{1,n}-\frac{2\mu_{2,n}}{M}\right)+\frac{\mu_{2,n}^2}{K}\right] - \frac{1}{M}\mu_{2,n}(2-\mu_{2,n})\right\}p_n \\ & + \sigma_v^2\lambda_n^2\mu_{1,n}^2 C_{1,n} + \sigma_v^2\left((1-\lambda_n)^2\mu_{2,n}^2 + 2\lambda_n(1-\lambda_n)\mu_{1,n}\mu_{2,n}\right)C_{2,n}. \end{aligned}$$
end
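For concreteness, the recursion of Table I can be written out as a short single-trial implementation. The following Python sketch is ours (the function name, argument layout and indexing conventions are assumptions, and no re-initialization logic is included):

```python
import numpy as np

def proposed_algorithm(x, d, M, K, sigma_v2, gamma=0.1, p0=10.0):
    """Single-trial sketch of Table I. x: input samples, d: measured
    outputs (1-D arrays of equal length); returns the weight trajectory."""
    N = len(x)
    w = np.zeros(M, dtype=complex)
    W = np.zeros((N, M), dtype=complex)
    p_prev, p = np.inf, p0                        # p_{n-1}, p_n
    for n in range(M + K - 1, N):
        # X_n = [x_n, ..., x_{n-K+1}] (M x K), d_n, and the shared error e_n
        X = np.column_stack([x[m - M + 1:m + 1][::-1]
                             for m in range(n, n - K, -1)])
        dn = d[n - K + 1:n + 1][::-1]
        e = dn - X.T @ w.conj()                   # Eq. (3)
        C1 = np.trace(np.linalg.inv(X.conj().T @ X)).real   # Eq. (10)
        C2 = 1.0 / np.linalg.norm(X[:, 0]) ** 2             # Eq. (11)
        # Step 1: variable step-sizes, Eqs. (15) and (19)
        mu1 = K * p / (K * p + M * sigma_v2 * C1)
        mu2 = p / (p + M * sigma_v2 * C2)
        # Step 2: mixing parameter, Eqs. (51) and (52)
        num = (1 - 1 / K) * mu1 + (1 / K - 1 / M) * mu1 * mu2
        den = (1 - 1 / K) * mu1 + 2 * (1 / K - 1 / M) * mu1 * mu2 \
              - (mu1 - mu2) / K
        lam = num / den
        if not (lam > 0 and p_prev > (1 + gamma) * p):
            lam = 0.0
        # Step 3: weight vector, Eq. (23)
        B1 = X @ np.linalg.inv(X.conj().T @ X)    # Eq. (8)
        B2 = X[:, 0] / np.linalg.norm(X[:, 0]) ** 2           # Eq. (9)
        w = w + lam * mu1 * (B1 @ e.conj()) \
              + (1 - lam) * mu2 * B2 * np.conj(e[0])
        # Step 4: MSD recursion, Eq. (49)
        coeff = (1
                 - lam * (2 * K / M) * (mu1 * (1 - mu2 / M)
                                        - mu2 * (1 - mu2) / K)
                 + lam ** 2 * (K / M) * (mu1 * (mu1 - 2 * mu2 / M)
                                         + mu2 ** 2 / K)
                 - mu2 * (2 - mu2) / M)
        p_prev, p = p, (coeff * p
                        + sigma_v2 * lam ** 2 * mu1 ** 2 * C1
                        + sigma_v2 * ((1 - lam) ** 2 * mu2 ** 2
                                      + 2 * lam * (1 - lam) * mu1 * mu2) * C2)
        W[n] = w
    return W
```

With the colored input of Sec. 4 and a known $\sigma_v^2$, this loop reproduces the qualitative behavior described above: $\lambda_n$ starts near 1 and is forced to 0 once the MSD stops decreasing by a factor of $1+\gamma$.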

The proposed algorithm is summarized in Table I.

The AP algorithm is derived from an objective function based on the L2-norm [14], and thus the AP algorithms

(including the proposed algorithm) are sensitive to background and impulsive noise [8,32].

Remark 1. The mixing parameter in (52) gradually decreases during the iterations and is artificially set to 0 when the filter reaches the steady-state. Therefore the proposed algorithm gradually converts from its fast component (I-VSSAP) to its slow component (I-VSSNLMS). The proposed algorithm is sensitive to background and impulsive noise [8,32], and it is designed for stationary environments [50]. Additionally, a re-initialization mechanism [47,48], described in Table II, is introduced for the case where the unknown system changes suddenly.

Table II. Re-initialization method

$e_{th} \triangleq K\sigma_v^2$, flag = 0, $e_{avg} = e_1^2$,
$\alpha$, $t_1$, $t_2$: user defined
for each $n$ do
  if $e_n^2 < t_1 \cdot e_{th}$
    flag = 1
  else if flag = 1 and $e_n^2 > t_2 \cdot e_{avg}$
    flag = 0, $e_{avg} = e_n^2$, reset $p_n = p_0$
  end
  $e_{avg} \leftarrow \alpha e_{avg} + (1-\alpha)e_n^2$
end for
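A minimal Python sketch of this re-initialization rule, written as a stateful monitor (the closure-based interface is our choice, not the paper's), is:

```python
def make_reinit_monitor(K, sigma_v2, p0, alpha=0.99, t1=10.0, t2=30.0):
    """Sketch of the rule of Table II. Returns a function that is fed
    the squared error e_n^2 and the current MSD estimate p each
    iteration, and returns p (reset to p0 on a detected sudden change).
    alpha, t1, t2 follow the values recommended in the paper."""
    state = {"flag": 0, "e_avg": None, "e_th": K * sigma_v2}

    def monitor(e2, p):
        if state["e_avg"] is None:
            state["e_avg"] = e2                # e_avg initialized with e_1^2
        if e2 < t1 * state["e_th"]:
            state["flag"] = 1                  # filter has converged once
        elif state["flag"] == 1 and e2 > t2 * state["e_avg"]:
            state["flag"] = 0                  # sudden system change detected
            state["e_avg"] = e2
            p = p0                             # reset the MSD estimate
        state["e_avg"] = alpha * state["e_avg"] + (1 - alpha) * e2
        return p

    return monitor
```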

3.5. The complexity

Table III enumerates the computational complexities of the algorithms mentioned above. The variable step-size versions of the NLMS (i.e. PVSS-NLMS, VSS-NLMS and I-VSSNLMS) require extra calculations for their step-sizes, and they have a similar number of multiplications. Since multiple inputs are involved, the complexity of the AP algorithm is much higher than that of the NLMS algorithms. The modified versions of the AP algorithm, such as the VSS-AP, I-VSSAP and E-AP, differ slightly in complexity but share the same order of magnitude. Compared with the AP algorithm, the I-VSSAP algorithm requires an extra (M+9) multiplications and (K+4) additions for calculating the step-size and the MSD. Compared with the NLMS, the I-VSSNLMS algorithm requires an extra (M+9) multiplications and 4 additions for estimating its step-size and the MSD. The mixing parameter $\lambda_n$ used to combine the I-VSSAP and I-VSSNLMS needs 5 multiplications and 4 additions.

The projection order of the E-AP algorithm is variable during the iterations, and approximately reduces to 1 when the algorithm nearly reaches the steady-state. Thus the E-AP algorithm has a lower complexity than other AP algorithms, such as the AP, VSS-AP and I-VSSAP, as the number of iterations increases. The VSS-VP-AP-1 and VSS-VP-AP-2 update their step-sizes and projection orders simultaneously, and the complexities of the two algorithms with the same projection order stay close to each other. Similar to the E-AP, the complexities of the VSS-VP-AP-1 and VSS-VP-AP-2 are mainly controlled by the projection orders, and vary at different stages. The GS-PAP [23] involves no matrix inversion during the iterations, and thus has a complexity much lower than the other AP algorithms mentioned in this paper.

Table III. Computational complexities of different algorithms (M: tap-length, K: projection order)

Algorithm                              | Multiplications             | Additions
NLMS                                   | 2M+4                        | 2M+5
PVSS-NLMS                              | 3M+8                        | 3M+5
VSS-NLMS                               | 3M+5                        | -
I-VSSNLMS in Eqs. (5), (18) and (19)   | 3M+13                       | 2M+9
AP algorithm                           | (K^2+2K)M+K^3+K^2           | (K^2+2K)M+K^3-K
VSS-AP                                 | (K^2+2K)M+K^3+K^2+3         | (K^2+2K)M+K^3-K+3
I-VSSAP in Eqs. (4), (15) and (16)     | (K^2+2K+1)M+K^3+K^2+9       | (K^2+2K)M+K^3+4
E-AP (variable K)                      | (K^2+2K)M+K^3+K^2+1         | -
VSS-VP-AP-1 (variable K)               | (K^2+2K)M+K^3+K^2+3         | -
VSS-VP-AP-2 (variable K)               | (K^2+2K)M+K^3+K^2+9         | -
GS-PAP                                 | 2M+K^2+3K+5                 | -
AP-NLMS, λn = 1                        | (K^2+2K)M+K^3+K^2           | (K^2+2K)M+K^3-K
AP-NLMS, λn = 0                        | 2M+4                        | 2M+5
AP-NLMS, 0 < λn < 1                    | (K^2+2K+4)M+K^3+K^2+22      | (K^2+2K+2)M+K^3-K+18
Proposed, λn = 1                       | (K^2+2K+1)M+K^3+K^2+9       | (K^2+2K)M+K^3+4
Proposed, λn = 0                       | 3M+13                       | 2M+9
Proposed, 0 < λn < 1                   | (K^2+2K+6)M+K^3+K^2+35      | (K^2+2K+1)M+K^3+19

As listed in Table III, the computational costs of the combination algorithms (i.e. the AP-NLMS and the proposed algorithm) are mainly determined by their mixing parameters $\lambda_n$. When $\lambda_n = 0$ or $\lambda_n = 1$, the combination algorithms reduce to one of their components, and fewer calculations are required; the opposite occurs when $0 < \lambda_n < 1$. In the initial period, the proposed algorithm has a computational complexity similar to that of the I-VSSAP algorithm, due to the large mixing parameter. The mixing parameter then quickly decreases to 0, and the proposed algorithm reduces to the I-VSSNLMS. Therefore, the computational complexity of the proposed algorithm is less than that of the AP, VSS-AP, and I-VSSAP. For the same unknown system $\mathbf{h}$, the proposed algorithm reduces to its slow component sooner than the AP-NLMS, and therefore needs fewer computations than the AP-NLMS, as will be verified in Sec. 4.

4. Experimental results

In this section, several experiments were conducted to verify the performance of the proposed algorithm in a system identification setting. The adaptive filter and the unknown system were assumed to have the same tap-length (M). The input signal $x_n$ was generated by filtering a zero-mean white Gaussian random sequence through the second-order system [50]

$$G(z) = \frac{1 + 0.6z^{-1}}{1 + 1.0z^{-1} + 0.21z^{-2}}. \tag{53}$$

In the initial state, we set the weight vector $\mathbf{w}_0 = \mathbf{0}$ and the MSD $p_0 = 10$. The SNR used in the simulations is defined as $10\log_{10}\left(\mathrm{E}\left[|\mathbf{h}^H\mathbf{x}_n|^2\right]/\sigma_v^2\right)$, where the noise variance $\sigma_v^2$ can be easily estimated during silence and is assumed known a priori [28,50]. The normalized MSD (NMSD) is defined as $\mathrm{E}\left(\|\mathbf{h}-\mathbf{w}_n\|^2/\|\mathbf{h}\|^2\right)$, where $\mathbf{h}$ has unit length (i.e. $\|\mathbf{h}\| = 1$).
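For reproducibility, the input of (53) and the NMSD measure can be generated as follows (a sketch; the function names and the use of scipy.signal.lfilter are our own choices):

```python
import numpy as np
from scipy.signal import lfilter

def colored_input(N, seed=0):
    """Colored input of Eq. (53): white Gaussian noise filtered by
    G(z) = (1 + 0.6 z^-1) / (1 + 1.0 z^-1 + 0.21 z^-2)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(N)
    return lfilter([1.0, 0.6], [1.0, 1.0, 0.21], white)

def nmsd_db(h, w):
    """Normalized MSD in dB for a single trial: ||h - w||^2 / ||h||^2."""
    return 10 * np.log10(np.linalg.norm(h - w)**2 / np.linalg.norm(h)**2)
```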

When setting the tap-length (M) of the filter and the projection order (K), the following aspects should be considered:

(i) In order to improve computational efficiency, M and K should be set to powers of 2;
(ii) In AP algorithms, the projection order K should be less than or equal to the tap-length M [28];
(iii) In AP algorithms, a larger projection order K results in a faster convergence rate [15-19,28-30].

In the following simulations, we set the tap-length of the filter to M = 16 and the projection order to K = 8. In order to verify the performance of the proposed algorithm under different parameters and to change the ratio between M and K, we also investigated the case M = 64 and K = 16. For the case where the unknown system changes suddenly, the parameters of the re-initialization method are set to the values recommended in [50,51]: $\alpha = 0.99$, $t_1 = 10$, and $t_2 = 30$. 1000 independent Monte Carlo trials were conducted in the experiments.

4.1. The stability

The convergence parameter (i.e. Convergence) always stays positive, which demonstrates the stability of the algorithm under the conditions described in the caption of Fig. 3.

The parameter Convergence directly reflects the term $\|\mathbf{e}_n\|^2 - \|\mathbf{r}_n\|^2$. As shown in Fig. 3, in the initial stage the term $\|\mathbf{e}_n\|^2 - \|\mathbf{r}_n\|^2$, and hence the parameter Convergence, has a large value, indicating that the proposed algorithm is in a state of fast convergence. In the steady-state, the positive parameter Convergence gradually decreases and tends to 0, confirming that $\|\mathbf{e}_n\|^2 \approx \|\mathbf{r}_n\|^2$ and that the proposed algorithm has nearly completed its iteration (i.e. $\|\mathbf{w}_{n+1}-\mathbf{w}_n\|^2 \approx 0$).

Fig.3. The Convergence of the proposed algorithm [input signal generated by G(z), SNR = 10dB, M = 16, K = 8, the mixing parameter λn was calculated according to Eq.(51)].

4.2. The computational complexity

Corresponding to the computational complexities listed in Table III, an experiment was conducted to give a more intuitive comparison of the different algorithms; the results are shown in Fig. 4. The proposed algorithm possesses the largest computational complexity among the algorithms mentioned in Table III when $0 < \lambda_n < 1$. When the proposed algorithm nearly reaches its steady-state, the modification described in (52) guarantees that it reduces to the I-VSSNLMS algorithm. Thus the computational complexity of the proposed algorithm becomes much lower than those of the I-VSSAP and AP algorithms, and similar to those of the I-VSSNLMS and NLMS algorithms.

Fig.4. Cumulative running time of different algorithms [running environment: Windows 7; software: MATLAB R2012a] [input signal generated by G(z), SNR = 10 dB, M = 64, K = 16, $\mu_1 = 1.0$, $\mu_2 = 0.01$]. (a) Conventional NLMS algorithm ($\mu_1$). (b) PVSS-NLMS algorithm ($\mu_0 = 0.04$, $\mu_{\max} = 1.99$, $\mu_{\min} = 10^{-10}$, $\rho = 0.0008$). (c) VSS-NLMS algorithm ($\alpha = 0.99$, C = 0.001). (d) The I-VSSNLMS in Eq. (5). (e) Conventional AP algorithm ($\mu_2$). (f) The VSS-AP algorithm ($\alpha = 0.99$, C = 0.15). (g) The I-VSSAP in Eq. (4). (h) The E-AP algorithm ($K_{\max} = 16$, $\mu_2$). (i) The VSS-VP-AP-1 algorithm ($K_{\max} = 16$, $\mu_{\max} = 2$, $\alpha = 0.5$, $\mu_{up} = 1/3$, $\mu_{down} = 1/4$). (j) The VSS-VP-AP-2 algorithm ($K_{\max} = 16$, $\mu_{\max} = \mu_1$, $\gamma = 1.02$). (k) The GS-PAP ($\mu_1$, $\gamma = 100$). (l) The AP-NLMS algorithm ($\mu_1$, $\mu_2$, $\gamma = 0.1$). (m) Proposed algorithm ($\gamma = 0.1$). The unknown system is suddenly changed from h to −h.

4.3. Comparison between the combination algorithm and its components

Fig. 5 shows the NMSD learning curves of the I-VSSAP and I-VSSNLMS; the former achieves faster convergence and the latter a lower steady-state misalignment. The proposed algorithm, consisting of these two filters with complementary characteristics, retains both fast convergence and low steady-state misalignment, confirming that combination schemes based on properly selected mixing parameters outperform the best of their components (as proved in Appendix A).

Fig.5. NMSD learning curves of the proposed algorithm and its components [input signal generated by G(z), SNR = 30dB, M = 16, K = 8, the mixing parameter λn was calculated according to Eq.(51)]. (a) The I-VSSNLMS in Eq.(5), (18), (19). (b) The I-VSSAP in Eq.(4), (15), (16). (c) The proposed algorithm.

4.4. Selection of the threshold γ in (52)

Fig.6. The Convergence of the proposed algorithm based on various $\gamma$ [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8]. (a) – (d) The proposed algorithm based on various $\gamma$. (e) The proposed algorithm without $\gamma$ (the mixing parameter $\lambda_n$ was calculated according to Eq. (51)).

As mentioned above, the selection of the threshold $\gamma$ in (52) governs the performance of the proposed algorithm in terms of convergence rate and steady-state misalignment. According to (52), the larger the value of $\gamma$, the sooner the mixing parameter $\lambda_n$ is artificially set to 0, as confirmed in Fig. 7. Since the parameter Convergence accurately reflects the convergence behavior of the proposed algorithm, a significant jitter occurs in Convergence when the mixing parameter is set to 0, but we still have Convergence > 0 (as shown in Fig. 6), which means that the proposed algorithm with the modification of (52) is stable under the conditions described in the caption of Fig. 6.

Fig.7. Mixing parameters of the AP-NLMS algorithm and the proposed algorithm based on various $\gamma$ [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8]. (a) The AP-NLMS algorithm ($\mu_1 = 1.0$, $\mu_2 = 0.01$, $\gamma = 0.1$). (b) – (e) The proposed algorithm based on various $\gamma$. (f) The proposed algorithm without $\gamma$ (the mixing parameter $\lambda_n$ was calculated according to Eq. (51)).

As shown in Fig. 8, a smaller $\gamma$ leads to faster convergence but higher steady-state misalignment, and vice versa. Compared with the proposed algorithm without a threshold, the version with a threshold deteriorates in either the convergence rate or the steady-state misalignment, while the computational complexity is greatly reduced, since the version with a threshold reduces to the I-VSSNLMS in the steady-state. The AP-NLMS algorithm, consisting of two algorithms with fixed step-sizes, has a fast convergence rate but high steady-state misalignment in comparison with the proposed algorithm. In order to reduce the computational complexity of the proposed algorithm without significant performance deterioration, the threshold is set to $\gamma = 0.1$ in the following simulations.

Fig.8. NMSD learning curves of the AP-NLMS algorithm and the proposed algorithm based on various $\gamma$ [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8]. (a) The AP-NLMS algorithm ($\mu_1 = 1.0$, $\mu_2 = 0.01$, $\gamma = 0.1$). (b) – (e) The proposed algorithm based on various $\gamma$. (f) The proposed algorithm without $\gamma$ (the mixing parameter $\lambda_n$ was calculated according to Eq. (51)).

Fig.9. NMSD learning curves of the conventional NLMS and AP algorithms and the proposed algorithm with fixed and variable parameters [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8, $\mu_1 = 1.0$, $\mu_2 = 0.1$, $\mu_3 = 0.01$, $\lambda_1 = 0.1$, $\lambda_2 = 0.01$]. (a) Conventional AP algorithm ($\mu_1$). (b) Conventional NLMS algorithm ($\mu_2$). (c) – (f) Proposed algorithm with fixed parameters. (g) Proposed algorithm with variable parameters.


4.5. The proposed algorithm based on fixed mixing parameter λn and step-sizes μ1,n, μ2,n

It can easily be verified from Fig. 9 that the mixing parameter $\lambda_n$ and the step-sizes $\mu_1$ and $\mu_2$ have a similar effect on the proposed algorithm: the smaller the values of these parameters, the lower the steady-state misalignment and the slower the convergence rate the algorithm achieves. The proposed algorithm with variable parameters obtains significant advantages in convergence rate and steady-state misalignment over its fixed-parameter versions.

4.6. The proposed algorithm based on variable mixing parameter λn and fixed step-sizes μ1,n, μ2,n

Fig.10. NMSD learning curves of the conventional NLMS and AP algorithms and the proposed algorithm with fixed and variable step-sizes $\mu_{1,n}$, $\mu_{2,n}$ [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8, $\mu_1 = 1.0$, $\mu_2 = 0.01$]. (a) Conventional AP algorithm ($\mu_1$). (b) Conventional NLMS algorithm ($\mu_2$). (c) – (h) Proposed algorithm with variable mixing parameter $\lambda_n$ and fixed step-sizes $\mu_{1,n}$, $\mu_{2,n}$. (i) Proposed algorithm with variable parameters.

The components of a combination scheme should have complementary characteristics: one is fast, and the other is slow but has low steady-state misalignment. For the combination algorithm whose components are the AP ($\mu_1$) and NLMS ($\mu_2$), the convergence rate is mainly influenced by the AP algorithm, and the steady-state misalignment is mainly affected by the NLMS algorithm, as long as we set $\mu_1 > \mu_2$.

As shown in Fig. 10, the AP algorithm ($\mu_1$) achieves fast convergence and the NLMS algorithm ($\mu_2$) has the advantage of low steady-state misalignment. Therefore the proposed algorithm ($\mu_1$, $\mu_2$) retains both fast convergence and low steady-state misalignment (see learning curves (a), (d) and (f) in Fig. 10). If we set $\mu_1 \le \mu_2$, the proposed algorithm with fixed step-sizes performs similarly to its slow component (i.e. the NLMS algorithm), as shown by learning curves (c), (e) and (g), and (d) and (h), in Fig. 10. The proposed algorithm with variable parameters exhibits faster convergence and lower steady-state misalignment than the fixed step-size versions, and the simulation results confirm that introducing variable step-sizes into the combination scheme is wise.

4.7. The convergence performance under a sudden system change

Fig. 11 illustrates the learning curves of Convergence and the mixing parameter. Both parameters reflect the transient behavior accurately, even when the unknown system is suddenly changed. Detailed explanations of the convergence parameter were given for Fig. 3.

Fig.11. The Convergence (upper panel) and the mixing parameter $\lambda_n$ (lower panel) of the proposed algorithm [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8]. The unknown system is suddenly changed from h to −h.

Figs. 12, 13 and 14 show the NMSD learning curves of different algorithms under different parameters (i.e. the tap-length of the filter M, the projection order K, and the SNR).


Fig.12. NMSD learning curves of different algorithms [input signal generated by G(z), SNR = 10 dB, M = 16, K = 8, $\mu_1 = 1.0$, $\mu_2 = 0.01$]. (a) – (b) Conventional NLMS algorithm based on different step-sizes. (c) – (d) Conventional AP algorithm based on different step-sizes. (e) PVSS-NLMS algorithm ($\mu_0 = 0.1$, $\mu_{\max} = 1.99$, $\mu_{\min} = 10^{-10}$, $\rho = 0.0008$). (f) The VSS-NLMS algorithm ($\alpha = 0.99$, C = 0.001). (g) The VSS-AP algorithm ($\alpha = 0.99$, C = 0.15). (h) The E-AP algorithm ($K_{\max} = 8$, $\mu_2$). (i) The VSS-VP-AP-1 algorithm ($K_{\max} = 8$, $\mu_{\max} = 2$, $\alpha = 0.5$, $\mu_{up} = 1/3$, $\mu_{down} = 1/4$). (j) The VSS-VP-AP-2 algorithm ($K_{\max} = 8$, $\mu_{\max} = \mu_1$, $\gamma = 0.1$). (k) The GS-PAP algorithm ($\mu_1$, $\gamma = 100$). (l) The AP-NLMS algorithm ($\mu_1$, $\mu_2$, $\gamma = 0.1$). (m) Proposed algorithm. The unknown system is suddenly changed from h to −h.

The conventional AP (NLMS) algorithm with a small step-size achieves low steady-state misalignment but a slow convergence rate, and vice versa. The PVSS-NLMS calculates its step-size via the method studied in [11], and achieves a faster convergence rate and lower steady-state misalignment than the conventional NLMS algorithm. The related parameters ($\mu_0$, $\mu_{\max}$, $\mu_{\min}$, $\rho$) govern the performance of the PVSS-NLMS, and proper selection is necessary for real-world applications. Compared with the conventional AP (NLMS), the VSS-AP (VSS-NLMS) algorithm updates its weight vector via a variable step-size that better reflects the convergence state of the filter, and thus obtains a faster convergence rate and lower steady-state misalignment. Moreover, the VSS-NLMS achieves better performance than the PVSS-NLMS under the conditions described in the captions of Figs. 12, 13 and 14. The E-AP algorithm with a variable projection order has a fast convergence rate in the initial stage (not as fast as that of the AP with the same step-size) and low misalignment in the steady-state (not as low as that of the NLMS with the same step-size). The performance of the VSS-VP-AP-1 algorithm is strongly influenced by the parameters ($\mu_{\max}$, $\alpha$, $\mu_{up}$, $\mu_{down}$); the VSS-VP-AP-1 with (2, 0.5, 1/3, 1/4) achieves better performance when M = 16, K = 8 than in the case M = 64, K = 16. Similar to the proposed algorithm, the VSS-VP-AP-2 has large values of projection order and step-size in the initial stage and small values of both parameters in the steady-state, which results in performance very close to that of the proposed algorithm. The GS-PAP has lower steady-state misalignment than the AP and NLMS algorithms based on the same step-sizes. The AP-NLMS algorithm performs similarly to its fast component (i.e. the AP algorithm) in the initial stage and yields results similar to its slow component (i.e. the NLMS algorithm) in the steady-state. Therefore the AP-NLMS algorithm retains both the advantage of a fast convergence rate and that of low steady-state misalignment. However, its fixed step-sizes limit its performance, since variable parameters can better reflect the working state of the filters. The proposed algorithm overcomes this drawback and achieves better performance than the AP-NLMS algorithm.

Fig.13. NMSD learning curves of different algorithms [input signal generated by G(z), SNR = 30 dB, M = 16, K = 8, $\mu_1 = 1.0$, $\mu_2 = 0.01$]. (a) – (b) Conventional NLMS algorithm based on different step-sizes. (c) – (d) Conventional AP algorithm based on different step-sizes. (e) PVSS-NLMS algorithm ($\mu_0 = 0.1$, $\mu_{\max} = 1.99$, $\mu_{\min} = 10^{-10}$, $\rho = 0.0008$). (f) The VSS-NLMS algorithm ($\alpha = 0.99$, C = 0.0001). (g) The VSS-AP algorithm ($\alpha = 0.99$, C = 0.15). (h) The E-AP algorithm ($K_{\max} = 8$, $\mu_2$). (i) The VSS-VP-AP-1 algorithm ($K_{\max} = 8$, $\mu_{\max} = 2$, $\alpha = 0.5$, $\mu_{up} = 1/3$, $\mu_{down} = 1/4$). (j) The VSS-VP-AP-2 algorithm ($K_{\max} = 8$, $\mu_{\max} = \mu_1$, $\gamma = 0.1$). (k) The GS-PAP algorithm ($\mu_1$, $\gamma = 100$). (l) The AP-NLMS algorithm ($\mu_1$, $\mu_2$, $\gamma = 0.1$). (m) Proposed algorithm. The unknown system is suddenly changed from h to −h.

When the unknown system is suddenly changed from h to −h, the proposed algorithm maintains its advantages and achieves a faster convergence rate and lower steady-state misalignment than the other AP algorithms mentioned above (except the VSS-VP-AP-2, which obtains performance similar to that of the proposed algorithm).

Fig.14. NMSD learning curves of different algorithms [input signal generated by G(z), SNR = 10 dB, M = 64, K = 16, $\mu_1 = 1.0$, $\mu_2 = 0.01$]. (a) – (b) Conventional NLMS algorithm based on different step-sizes. (c) – (d) Conventional AP algorithm based on different step-sizes. (e) PVSS-NLMS algorithm ($\mu_0 = 0.1$, $\mu_{\max} = 1.99$, $\mu_{\min} = 10^{-10}$, $\rho = 0.0008$). (f) The VSS-NLMS algorithm ($\alpha = 0.99$, C = 0.001). (g) The VSS-AP algorithm ($\alpha = 0.99$, C = 0.15). (h) The E-AP algorithm ($K_{\max} = 16$, $\mu_2$). (i) The VSS-VP-AP-1 algorithm ($K_{\max} = 16$, $\mu_{\max} = 2$, $\alpha = 0.5$, $\mu_{up} = 1/3$, $\mu_{down} = 1/4$). (j) The VSS-VP-AP-2 algorithm ($K_{\max} = 16$, $\mu_{\max} = \mu_1$, $\gamma = 0.1$). (k) The GS-PAP algorithm ($\mu_1$, $\gamma = 100$). (l) The AP-NLMS algorithm ($\mu_1$, $\mu_2$, $\gamma = 0.1$). (m) Proposed algorithm. The unknown system is suddenly changed from h to −h.

4.8. The advantage in steady-state misalignment

In order to investigate the effect of the SNR, a simulation based on various SNRs was conducted. Fig. 15 shows that the steady-state misalignment decreases gradually for all algorithms as the SNR is increased.

The AP reduces to the NLMS when we set the projection order K = 1, and a smaller K results in lower steady-state misalignment, and vice versa. Therefore the NLMS obtains lower steady-state misalignment than the AP algorithm with the same step-size (see learning curves (a) and (b) in Fig. 15). The PVSS-NLMS achieves a steady-state misalignment close to those of the E-AP and GS-PAP under the conditions described in the caption of Fig. 15. The empirical parameter C used in the VSS-AP (VSS-NLMS) is related to the SNR and the projection order; a higher SNR results in a relatively more accurate parameter C, and therefore the learning curve of the VSS-AP (VSS-NLMS) decreases more rapidly than those of the other algorithms as the SNR is increased (see learning curves (d) and (e) in Fig. 15). The average projection order of the E-AP is $K_{ave}$ ($1 < K_{ave} < K_{\max}$) during the iterations, so the E-AP has a steady-state misalignment between those of the AP and the NLMS based on the same step-size (see learning curves (a), (b) and (f) in Fig. 15). The steady-state misalignment of the VSS-VP-AP-1 is deeply affected by the SNR, since the parameters ($\mu_{\max}$, $\alpha$, $\mu_{up}$, $\mu_{down}$) are related to the SNR (see learning curve (g) in Fig. 15). The VSS-VP-AP-2 performs similarly to the proposed algorithm, as can also be seen in Figs. 12, 13 and 14. The GS-PAP obtains a steady-state misalignment lower than that of the AP algorithm and higher than that of the E-AP. The AP-NLMS algorithm ($\mu_1 = 1.0$, $\mu_2 = 0.01$) has already reduced to the NLMS algorithm after 2×10^4 iterations, so these two algorithms have coincident learning curves (see learning curves (a) and (j) in Fig. 15). The proposed algorithm degrades to the I-VSSNLMS algorithm in the steady-state, and always achieves lower misalignment than the other AP algorithms mentioned above.

Fig.15. The NMSD after 2×10^4 iterations of different algorithms based on various SNRs [input signal generated by G(z), M = 16, K = 8, $\mu_1 = 1.0$, $\mu_2 = 0.01$]. (a) Conventional NLMS ($\mu_2$). (b) Conventional AP algorithm ($\mu_2$). (c) PVSS-NLMS algorithm ($\mu_0 = 0.04$, $\mu_{\max} = 1.99$, $\mu_{\min} = 10^{-10}$, $\rho = 0.0008$). (d) The VSS-NLMS algorithm ($\alpha = 0.99$, C = 0.0001). (e) The VSS-AP algorithm ($\alpha = 0.99$, C = 0.15). (f) The E-AP algorithm ($K_{\max} = 8$, $\mu_2$). (g) The VSS-VP-AP-1 algorithm ($K_{\max} = 8$, $\mu_{\max} = 2$, $\alpha = 0.5$, $\mu_{up} = 1/3$, $\mu_{down} = 1/4$). (h) The VSS-VP-AP-2 algorithm ($K_{\max} = 8$, $\mu_{\max} = \mu_1$, $\gamma = 0.1$). (i) The GS-PAP algorithm ($\mu_1$, $\gamma = 100$). (j) The AP-NLMS algorithm ($\mu_1$, $\mu_2$, $\gamma = 0.1$). (k) Proposed algorithm.

4.9. The performance under different SNRs

The SNR is an important parameter that influences the convergence performance of adaptive filtering algorithms. An algorithm operating at a high SNR accelerates the convergence rate and reduces the steady-state misalignment simultaneously, which is quite different from an algorithm based on a large step-size or projection order. As shown in Fig. 16, the higher the SNR, the faster the convergence rate and the lower the steady-state misalignment the proposed algorithm achieves.

Fig.16. NMSD learning curves of the proposed algorithm under different SNRs [input signal generated by G(z), M = 16, K = 8]. (a) SNR = −5 dB. (b) SNR = 0 dB. (c) SNR = 5 dB. (d) SNR = 10 dB. (e) SNR = 20 dB.

5. Conclusion

This paper proposes a new filter scheme in which the I-VSSAP and I-VSSNLMS algorithms are combined through a properly chosen mixing parameter. The I-VSSAP features fast convergence but relatively high steady-state misalignment, while the I-VSSNLMS features slow convergence but relatively low steady-state misalignment; the overall filter therefore combines the advantages of both algorithms. Furthermore, the I-VSSAP and I-VSSNLMS algorithms use variable step-sizes when updating their weight vectors.

Different from the combination algorithms in [47,48], whose components are updated via the same input signals but the error signal of each component, the components of the proposed algorithm are adapted using the same input signals and the error signals of the overall filter. Unlike the AP-NLMS algorithm, variable step-sizes based on MSD analysis are introduced into the components of the proposed algorithm. The step-sizes are derived using the common error signals and the MSD of the overall filter. Therefore, the components reflect the working state of the filters more accurately than in the AP-NLMS algorithm.

The mixing parameter of the proposed algorithm is derived by minimizing the MSD at each iteration. As the parameter decreases from 1 to 0 with increasing iterations, the proposed algorithm gradually converts from its fast component to its slow component. The mixing parameter is artificially set to 0 when the difference between the MSDs of two adjacent iterations falls below a threshold; the proposed algorithm then degrades to its slow component (i.e. the I-VSSNLMS) and exhibits a lower computational complexity than the conventional AP algorithm. Simulation results show that the proposed algorithm outperforms its components, its fixed step-size version, and many other AP algorithms, such as the AP, NLMS, VSS-AP, VSS-NLMS, E-AP, VSS-VP-AP-1, VSS-VP-AP-2 and GS-PAP. The proposed algorithm is designed for stationary environments; dealing with impulsive noise is a promising direction for future study.

Appendix A. Proof that the combination algorithm outperforms its components

Assume that the estimates of the combination algorithm and its components for the same unknown system $\mathbf{h}$ are $\mathbf{w}_n$, $\mathbf{w}_{1,n}$ and $\mathbf{w}_{2,n}$, respectively. The relation between these three estimates is

$$\mathbf{w}_n = \lambda_{n-1}\mathbf{w}_{1,n} + (1-\lambda_{n-1})\mathbf{w}_{2,n}, \tag{A.1}$$

where $\lambda_{n-1}$ is the mixing parameter. Rewriting (A.1) in terms of the weight-error vector, we have

$$\tilde{\mathbf{w}}_n = \lambda_{n-1}\tilde{\mathbf{w}}_{1,n} + (1-\lambda_{n-1})\tilde{\mathbf{w}}_{2,n}. \tag{A.2}$$

According to the definition in (14), the MSD of the overall filter is

$$p_n = \lambda_{n-1}^2 p_{1,n} + (1-\lambda_{n-1})^2 p_{2,n} + 2\lambda_{n-1}(1-\lambda_{n-1})p_{1\&2,n}, \tag{A.3}$$

where $p_n = \mathrm{Tr}\left(\mathrm{E}\left[\tilde{\mathbf{w}}_n\tilde{\mathbf{w}}_n^H\right]\right)$, $p_{1\&2,n} = \frac{1}{2}\left(\mathrm{Tr}\left(\mathrm{E}\left[\tilde{\mathbf{w}}_{1,n}\tilde{\mathbf{w}}_{2,n}^H\right]\right) + \mathrm{Tr}\left(\mathrm{E}\left[\tilde{\mathbf{w}}_{2,n}\tilde{\mathbf{w}}_{1,n}^H\right]\right)\right)$, $p_{1,n} = \mathrm{Tr}\left(\mathrm{E}\left[\tilde{\mathbf{w}}_{1,n}\tilde{\mathbf{w}}_{1,n}^H\right]\right)$ and $p_{2,n} = \mathrm{Tr}\left(\mathrm{E}\left[\tilde{\mathbf{w}}_{2,n}\tilde{\mathbf{w}}_{2,n}^H\right]\right)$.

By minimizing the MSD in (A.3), the optimal mixing parameter is derived as

$$\lambda_{n-1} = \frac{p_{2,n} - p_{1\&2,n}}{p_{1,n} + p_{2,n} - 2p_{1\&2,n}}. \tag{A.4}$$

Substituting (A.4) into (A.3), we have

$$p_n = \frac{p_{1,n}p_{2,n} - p_{1\&2,n}^2}{p_{1,n} + p_{2,n} - 2p_{1\&2,n}}. \tag{A.5}$$

Since $p_{1,n} + p_{2,n} - 2p_{1\&2,n} = \mathrm{E}\left[\|\tilde{\mathbf{w}}_{1,n} - \tilde{\mathbf{w}}_{2,n}\|^2\right] = \mathrm{E}\left[\|\mathbf{w}_{1,n} - \mathbf{w}_{2,n}\|^2\right] > 0$, it follows that

$$p_n - p_{1,n} = -\frac{(p_{1,n} - p_{1\&2,n})^2}{p_{1,n} + p_{2,n} - 2p_{1\&2,n}} \le 0, \tag{A.6}$$

$$p_n - p_{2,n} = -\frac{(p_{2,n} - p_{1\&2,n})^2}{p_{1,n} + p_{2,n} - 2p_{1\&2,n}} \le 0. \tag{A.7}$$

Therefore $p_n \le \min\{p_{1,n}, p_{2,n}\}$, and the combination algorithm based on the optimal mixing parameter performs at least as well as the best of its components in the MSD sense.
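As a quick numerical illustration of (A.3)-(A.7), the following sketch uses hypothetical example values for $p_{1,n}$, $p_{2,n}$ and $p_{1\&2,n}$ and verifies that the optimally combined MSD never exceeds that of either component:

```python
# Numeric sanity check of (A.3)-(A.7) with hypothetical MSD values.
p1, p2 = 0.5, 0.8    # component MSDs (example values)
p12 = 0.3            # cross term p_{1&2,n}; must satisfy p1 + p2 - 2*p12 > 0

lam = (p2 - p12) / (p1 + p2 - 2 * p12)        # optimal mixing, Eq. (A.4)
p = (lam**2 * p1 + (1 - lam)**2 * p2
     + 2 * lam * (1 - lam) * p12)             # overall MSD, Eq. (A.3)

assert p <= min(p1, p2) + 1e-12               # Eqs. (A.6)-(A.7)
print(lam, p)  # lam ~ 0.714, p ~ 0.443 <= 0.5
```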

Appendix B. The Stability Analysis of The Proposed Algorithm The stability condition of the proposed algorithm is obtained based on the a posteriori error analysis. B.1. First, consider the case that λn = 0 The proposed algorithm degrades to the I-VSSNLMS algorithm. The a posteriori estimate error is defined as rn = d n − w Hn +1x n .

(B.1)

rn = (1 − μ 2, n ) en .

(B.2)

2 2 2 2 en − rn = ª«1 − (1 − μ 2, n ) º» en . ¬ ¼

(B.3)

Substituting (23) into (B.1), and we have

Then,

{ } ≤ E{ e } (with equality only

In order to guarantee the stability of the proposed algorithm, the relation E rn when en = 0 ) must be satisfied. Therefore, the condition needs to be met is 31

2

2

n

c1, n = 1 − (1 − μ2, n ) ≥ 0 . 2

(B.4)

Condition (B.4) is always satisfied since the step-size μ2,n is restricted in the interval (0,1) (see (19)).

B.2. Second, consider the case $\lambda_n = 1$

The proposed algorithm degrades to the I-VSSAP algorithm. The a posteriori estimation error vector is defined as
$$\mathbf{r}_n = \mathbf{d}_n - \mathbf{X}_n^T\mathbf{w}_{n+1}^*.\tag{B.5}$$
Substituting (23) into (B.5), the a posteriori estimation error vector becomes
$$\mathbf{r}_n = (1-\mu_{1,n})\mathbf{e}_n.\tag{B.6}$$
Then,
$$\|\mathbf{e}_n\|^2 - \|\mathbf{r}_n\|^2 = \left[1-(1-\mu_{1,n})^2\right]\|\mathbf{e}_n\|^2.\tag{B.7}$$
To guarantee the stability of the proposed algorithm, the relation $E\{\|\mathbf{r}_n\|^2\} \le E\{\|\mathbf{e}_n\|^2\}$ (with equality only when $\mathbf{e}_n = \mathbf{0}$) must be satisfied. The stability condition is
$$c_{1,n} = 1-(1-\mu_{1,n})^2 \ge 0.\tag{B.8}$$
Condition (B.8) is always satisfied since the step-size $0 < \mu_{1,n} < 1$ (see (15)).

B.3. Finally, consider the case $0 < \lambda_n < 1$

The definition in (B.5) can be rewritten as
$$\begin{aligned}
\mathbf{r}_n &= \mathbf{e}_n - \lambda_n\mu_{1,n}\mathbf{e}_n - (1-\lambda_n)\mu_{2,n}\mathbf{X}_n^T\mathbf{x}_n^*\frac{e_n}{\|\mathbf{x}_n\|^2}\\
&= \mathbf{e}_n - \lambda_n\mu_{1,n}\mathbf{e}_n - (1-\lambda_n)\mu_{2,n}\mathbf{X}_n^T\mathbf{X}_n^*\begin{bmatrix}1/\|\mathbf{x}_n\|^2 & \mathbf{0}\\ \mathbf{0} & \mathbf{0}\end{bmatrix}\begin{bmatrix}e_n\\ \bar{\mathbf{e}}_n\end{bmatrix}\\
&= \left\{(1-\lambda_n\mu_{1,n})\mathbf{I}_K - (1-\lambda_n)\mu_{2,n}\mathbf{S}_n\right\}\mathbf{e}_n,
\end{aligned}\tag{B.9}$$
where $e_n$ and $\bar{\mathbf{e}}_n$ denote the first entry and the remaining $K-1$ entries of $\mathbf{e}_n$, and the matrix $\mathbf{S}_n \in \mathbb{C}^{K\times K}$ is given by
$$\mathbf{S}_n = \mathbf{X}_n^T\mathbf{X}_n^*\begin{bmatrix}1/\|\mathbf{x}_n\|^2 & \mathbf{0}\\ \mathbf{0} & \mathbf{0}\end{bmatrix} = \begin{bmatrix}1 & \mathbf{0}\\ \overline{\mathbf{X}}_n^T\mathbf{x}_n^*/\|\mathbf{x}_n\|^2 & \mathbf{0}\end{bmatrix} = \begin{bmatrix}1 & \mathbf{0}\\ \mathbf{E}_n & \mathbf{0}\end{bmatrix},\tag{B.10}$$
where $\overline{\mathbf{X}}_n$ collects the last $K-1$ columns of $\mathbf{X}_n$.

Therefore, we have
$$\|\mathbf{e}_n\|^2 - \|\mathbf{r}_n\|^2 = \mathbf{e}_n^H\left\{\mathbf{I}_K - \mathbf{D}_n^H\mathbf{D}_n\right\}\mathbf{e}_n,\tag{B.11}$$
where the matrix $\mathbf{D}_n \in \mathbb{C}^{K\times K}$ is given by
$$\mathbf{D}_n = (1-\lambda_n\mu_{1,n})\mathbf{I}_K - (1-\lambda_n)\mu_{2,n}\mathbf{S}_n,\tag{B.12}$$
and
$$\mathbf{D}_n^H\mathbf{D}_n = (1-\lambda_n\mu_{1,n})^2\mathbf{I}_K + (1-\lambda_n)^2\mu_{2,n}^2\mathbf{S}_n^H\mathbf{S}_n - (1-\lambda_n)(1-\lambda_n\mu_{1,n})\mu_{2,n}\left[\mathbf{S}_n^H + \mathbf{S}_n\right].\tag{B.13}$$
Then we have
$$\mathbf{I}_K - \mathbf{D}_n^H\mathbf{D}_n = \begin{bmatrix}M_{1,n} & \mathbf{M}_{2,n}^H\\ \mathbf{M}_{2,n} & \delta_n\mathbf{I}_{K-1}\end{bmatrix},\tag{B.14}$$
where, writing $a_n = \lambda_n\mu_{1,n}$ and $b_n = (1-\lambda_n)\mu_{2,n}$,
$$M_{1,n} = (2-a_n-b_n)(a_n+b_n) - b_n^2\|\mathbf{E}_n\|^2,\tag{B.15}$$
$$\mathbf{M}_{2,n} = (1-a_n)b_n\mathbf{E}_n,\tag{B.16}$$
$$\delta_n = a_n(2-a_n).\tag{B.17}$$

To guarantee the stability of the proposed algorithm, the relation $E\{\|\mathbf{r}_n\|^2\} \le E\{\|\mathbf{e}_n\|^2\}$ (with equality only when $\mathbf{e}_n = \mathbf{0}$) must be satisfied, which requires $E\{\mathbf{I}_K - \mathbf{D}_n^H\mathbf{D}_n\}$ to be a non-negative definite matrix. Define the nonsingular matrix
$$\mathbf{P} = \begin{bmatrix}1 & \mathbf{0}\\ -\mathbf{M}_{2,n}/\delta_n & \mathbf{I}_{K-1}\end{bmatrix} \in \mathbb{C}^{K\times K}.\tag{B.18}$$
Then we have
$$\mathbf{P}^H\left\{\mathbf{I}_K - \mathbf{D}_n^H\mathbf{D}_n\right\}\mathbf{P} = \begin{bmatrix}M_{1,n} - \mathbf{M}_{2,n}^H\mathbf{M}_{2,n}/\delta_n & \mathbf{0}\\ \mathbf{0} & \delta_n\mathbf{I}_{K-1}\end{bmatrix}.\tag{B.19}$$

It can be seen from (B.19) that the positive definiteness of $E\{\mathbf{I}_K - \mathbf{D}_n^H\mathbf{D}_n\}$ is equivalent to the positive definiteness of $E[\delta_n\mathbf{I}_{K-1}]$ and $E\left[M_{1,n} - \mathbf{M}_{2,n}^H\mathbf{M}_{2,n}/\delta_n\right]$.

B.3.1. Consider the matrix $E[\delta_n\mathbf{I}_{K-1}]$

The condition to be met is
$$\delta_n \ge 0 \;\Rightarrow\; a_n(2-a_n) \ge 0 \;\Rightarrow\; 0 \le \lambda_n\mu_{1,n} \le 2.\tag{B.20}$$
Since we restrict $0 < \lambda_n < 1$ and $0 < \mu_{1,n} < 1$, $E[\delta_n\mathbf{I}_{K-1}]$ is always a positive-definite matrix.

B.3.2. Consider the scalar $E\left[M_{1,n} - \mathbf{M}_{2,n}^H\mathbf{M}_{2,n}/\delta_n\right]$

The condition to be met is
$$M_{1,n} - \frac{1}{\delta_n}\mathbf{M}_{2,n}^H\mathbf{M}_{2,n} \ge 0 \;\Rightarrow\; c_{1,n} - c_{2,n}\|\mathbf{E}_n\|^2 \ge 0,\tag{B.21}$$
where
$$c_{1,n} = a_n(2-a_n)\left\{2(a_n+b_n) - (a_n+b_n)^2\right\},\tag{B.22}$$
$$c_{2,n} = b_n^2.\tag{B.23}$$

For the proposed algorithm, equation (29) is the mathematical description of its stability in the a posteriori error sense.
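A quick numerical cross-check of the B.3 analysis (illustrative only, with arbitrary admissible parameter values): the snippet builds $\mathbf{S}_n$ per (B.10) from a random complex input matrix, forms $\mathbf{D}_n$ per (B.12), and confirms that the eigenvalue test on $\mathbf{I}_K - \mathbf{D}_n^H\mathbf{D}_n$ agrees with the scalar condition (B.21)-(B.23):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 16, 4
lam, mu1, mu2 = 0.6, 0.8, 0.9                    # arbitrary values in (0, 1)
X = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
xn = X[:, 0]
col = X.T @ xn.conj() / np.linalg.norm(xn) ** 2  # first entry is 1 by construction
S = np.zeros((K, K), dtype=complex)
S[:, 0] = col                                    # S_n = [1 0; E_n 0], cf. (B.10)
E_n = col[1:]
a, b = lam * mu1, (1 - lam) * mu2
D = (1 - a) * np.eye(K) - b * S                  # D_n per (B.12)
G = np.eye(K) - D.conj().T @ D                   # I_K - D_n^H D_n, cf. (B.11)
c1 = a * (2 - a) * (2 * (a + b) - (a + b) ** 2)  # (B.22)
c2 = b ** 2                                      # (B.23)
psd = np.min(np.linalg.eigvalsh(G)) >= -1e-12    # non-negative definiteness test
print(psd, c1 - c2 * np.linalg.norm(E_n) ** 2 >= 0)   # the two tests agree
```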

References
[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[2] J. Wang, Z. Lu, Y. Li, A new CDMA encoding/decoding method for on-chip communication network, IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 24 (4) (2015) 1607-1611.
[3] M. S. E. Abadi, F. Moradiani, A unified approach to tracking performance analysis of the selective partial update adaptive filter algorithms in nonstationary environment, Digital Signal Processing. 23 (3) (2013) 817-830.
[4] M. Hamidia, A. Amrouche, Improved variable step-size NLMS adaptive filtering algorithm for acoustic echo cancellation, Digital Signal Processing. 49 (C) (2015) 44-45.
[5] K. Mayyas, H. A. Abuseba, A new variable length NLMS adaptive algorithm, Digital Signal Processing. 34 (1) (2014) 82-91.
[6] E. V. Kuhn, J. E. Kolodziej, R. Seara, Stochastic modeling of the NLMS algorithm for complex Gaussian input data and nonstationary environment, Digital Signal Processing. 30 (1) (2014) 55-66.
[7] D. T. M. Slock, On the convergence behavior of the LMS and the normalized LMS algorithms, IEEE Trans. Signal Processing. 41 (9) (1993) 2811-2825.
[8] L. R. Vega, H. Rey, J. Benesty, and S. Tressens, A new robust variable step-size NLMS algorithm, IEEE Trans. Signal Process. 56 (5) (2008) 1878-1893.
[9] P. Park, M. Jang, N. Kong, Scheduled-stepsize NLMS algorithm, IEEE Signal Process. Lett. 16 (12) (2009) 1055-1058.
[10] A. I. Sulyman and A. Zerguine, Convergence and steady-state analysis of a variable step-size NLMS algorithm, Signal Processing. 83 (6) (2003) 1255-1273.
[11] Y. K. Shin, J. G. Lee, A study on the fast convergence algorithm for the LMS adaptive filter design, Proc. KIEE, 1985.
[12] Y. S. Choi, H. C. Shin, and W. J. Song, Robust regularization for normalized LMS algorithms, IEEE Trans. Circuit Systems II: Exp. Briefs. 53 (8) (2006) 627-631.
[13] S. G. Sankaran and A. A. L. Beex, Tracking analysis results for NLMS and APA, in Proc. ICASSP-02, Orlando, USA, vol. 2, 2002, pp. 1105-1108.
[14] K. Ozeki and T. Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties, Electron. Commun. Jpn. 67 (5) (1984) 19-27.
[15] S. G. Sankaran and A. A. (Louis) Beex, Convergence behavior of affine projection algorithms, IEEE Trans. Signal Processing. 48 (4) (2000) 1086-1096.

[16] H. C. Shin and A. H. Sayed, Mean-square performance of a family of affine projection algorithms, IEEE Trans. Signal Processing. 52 (1) (2004) 90-102.
[17] S. Werner, J. A. Apolinário, M.-L. R. Campos, and P. S. R. Diniz, Low-complexity constrained affine-projection algorithms, IEEE Trans. Signal Process. 53 (12) (2005) 4545-4555.
[18] T. Paul and T. Ogunfunmi, On the convergence behavior of the affine projection algorithm for adaptive filters, IEEE Trans. Circuits Syst. I. 58 (8) (2011) 1813-1826.
[19] P. Park, C. H. Lee, J. W. Ko, Mean-square deviation analysis of affine projection algorithm, IEEE Trans. Signal Process. 59 (12) (2011) 5789-5799.
[20] M. V. S. Lima and P. S. R. Diniz, Steady-state MSE performance of the set-membership affine projection algorithm, Circuits, Systems, and Signal Processing. 32 (4) (2013) 1811-1837.
[21] F. Albu, J. Kadlec, N. Coleman, and A. Fagan, The Gauss-Seidel fast affine projection algorithm, Proc. SIPS, 2002, pp. 109-114.
[22] M. Bouchard and F. Albu, The multichannel Gauss-Seidel fast affine projection algorithm for active noise control, Proc. 7th Int. Symp. Signal Process. Applicat. (ISSPA), vol. 2, 2003, pp. 579-582.
[23] F. Albu and A. Fagan, The Gauss-Seidel pseudo affine projection algorithm and its application for echo cancellation, in Proc. 37th Asilomar Conf. Signals, Syst. Comput., Asilomar, CA, vol. 2, 2003, pp. 1303-1306.
[24] F. Albu and M. Bouchard, A low-cost and fast convergence Gauss-Seidel pseudo-affine projection algorithm for multichannel active noise control, in Proc. ICASSP, Montreal, QC, Canada, vol. 4, 2004, pp. 121-124.
[25] F. Albu and H. K. Kwan, Combined echo and noise cancellation based on Gauss-Seidel pseudo affine projection algorithm, in Proc. IEEE Int. Symp. on Circuits and Systems (ISCAS), Vancouver, BC, Canada, vol. 3, 2004, pp. 505-508.
[26] F. Albu and C. Kotropoulos, Modified Gauss-Seidel affine projection algorithm for acoustic echo cancellation, Proc. 2005 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), vol. 3, 2005, pp. 121-124.
[27] F. Albu and C. Paleologu, The variable step-size Gauss-Seidel pseudo affine projection algorithm, Proceedings of World Academy of Science, Engineering & Technology, 2009.
[28] S. E. Kim, S. J. Kong, W. J. Song, An affine projection algorithm with evolving order, IEEE Signal Process. Lett. 16 (11) (2009) 937-940.
[29] R. Arablouei, K. Dogançay, Affine projection algorithm with selective projections, Signal Process. 92 (9) (2012) 2253-2263.
[30] R. Arablouei, K. Dogançay, Affine projection algorithm with variable projection order, IEEE International Conference on Communications, 2012, pp. 3681-3685.
[31] H. C. Shin, A. H. Sayed, and W. J. Song, Variable step-size NLMS and affine projection algorithms, IEEE Signal Process. Lett. 11 (2) (2004) 132-135.
[32] L. R. Vega, H. Rey, J. Benesty, A robust variable step-size affine projection algorithm, Signal Process. 90 (9) (2010) 2806-2810.
[33] F. Albu, C. Paleologu, S. Ciochina, New variable step size affine projection algorithms, in Proc. of IEEE COMM 2012, Bucharest, Romania, 2012, pp. 63-66.
[34] C. H. Lee, P. G. Park, Optimal step-size affine projection algorithm, IEEE Signal Process. Lett. 19 (7) (2012) 431-434.
[35] C. H. Lee, P. G. Park, Scheduled-step-size affine projection algorithm, IEEE Trans. Circuits Syst. I Reg. Pap. 59 (9) (2012) 2034-2043.
[36] S. H. Kim, J. J. Jeong, G. Koo, and W. K. Sang, Variable step-size affine projection algorithm for a non-stationary system, in Proc. 19th Int. Conf. on Digital Signal Processing (DSP), 2014, pp. 179-183.
[37] I. Song, P. Park, A variable step-size affine projection algorithm with a step-size scaler against impulsive measurement noise, Signal Processing. 96 (5) (2014) 321-324.
[38] A. Gonzalez, M. Ferrer, M. de Diego and G. Piero, An affine projection algorithm with variable step size and projection order, Digit. Signal Process. 22 (4) (2012) 586-592.
[39] N. W. Kong, J. W. Shin, and P. G. Park, Affine projection algorithm with decremental projection order and optimally-designed step size, Electronics Letters. 48 (9) (2012) 496-498.
[40] W. Y. Jin, J. W. Shin, H. T. Choi, et al., An affine projection algorithm with evolving order using variable step-size, International Journal of Computer & Electrical Engineering. 5 (1) (2013) 5-8.
[41] G. Rombouts and M. Moonen, Avoiding explicit regularisation in affine projection algorithms for acoustic echo cancellation, Proceedings of ProRISC99, 1999, pp. 395-398.
[42] V. Myllylä and G. Schmidt, Pseudo-optimal regularization for affine projection algorithms, in Proc. ICASSP-02, Orlando, FL, vol. 2, 2002, pp. 1917-1920.
[43] H. G. Rey, L. R. Vega, S. Tressens, and B. C. Frias, Analysis of explicit regularization in affine projection algorithms: robustness and optimal choice, in Proc. EUSIPCO-04, Vienna, Austria, 2004, pp. 1809-1812.
[44] H. G. Rey, L. R. Vega, S. Tressens, and J. Benesty, Optimum variable explicit regularized affine projection algorithm, in Proc. ICASSP-06, Toulouse, France, vol. 3, 2006.
[45] H. Rey, L. R. Vega, S. Tressens and J. Benesty, Variable explicit regularization in affine projection algorithm: robustness issues and optimal choice, IEEE Trans. Signal Process. 55 (5) (2007) 2096-2109.
[46] J. Arenas-Garcia, M. Martinez-Ramon, A. Navia-Vazquez, A. R. Figueiras-Vidal, Plant identification via adaptive combination of transversal filters, Signal Process. 86 (9) (2006) 2430-2438.
[47] J. Arenas-Garcia, A. R. Figueiras-Vidal and A. H. Sayed, Mean-square performance of a convex combination of two adaptive filters, IEEE Trans. Signal Process. 54 (3) (2006) 1078-1090.
[48] N. J. Bershad, J. C. M. Bermudez and J.-Y. Tourneret, An affine combination of two LMS adaptive filters - transient mean-square analysis, IEEE Trans. Signal Process. 56 (5) (2008) 1853-1864.

[49] M. T. M. Silva and V. H. Nascimento, Improving the tracking capability of adaptive filters via convex combination, IEEE Trans. Signal Process. 56 (7) (2008) 3137-3149.
[50] J. H. Choi, S. H. Kim, and W. K. Sang, Adaptive combination of affine projection and NLMS algorithms, Signal Processing. 100 (7) (2014) 64-70.
[51] N. W. Kong, J. S. Shin, and P. G. Park, A two-stage affine projection algorithm with mean-square-error-matching step-sizes, Signal Process. 91 (11) (2011) 2639-2646.


Chunhui Ren received her master's degree in communication and information systems from the University of Electronic Science and Technology of China in April 1998, and the Ph.D. degree in communication and information systems from the same university in June 2006. She now works at the same university, focusing on signal processing.

Zuozhen Wang received the B.S. degree in electronic engineering from the University of Electronic Science and Technology of China (UESTC). He is now pursuing an M.S. degree at UESTC, focusing on signal and information processing.

Zhiqin Zhao received the B.S. and M.S. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, and the Ph.D. degree in electrical engineering from Oklahoma State University, Stillwater, OK, USA, in 1990, 1993, and 2002, respectively. His current research interests include computational electromagnetics and signal processing.