Robust adaptive filter with lncosh cost


Chang Liu, Ming Jiang

PII: S0165-1684(19)30401-3
DOI: https://doi.org/10.1016/j.sigpro.2019.107348
Reference: SIGPRO 107348

To appear in: Signal Processing

Received date: 6 May 2019
Revised date: 14 October 2019
Accepted date: 17 October 2019

Please cite this article as: Chang Liu , Ming Jiang , Robust adaptive filter with lncosh cost, Signal Processing (2019), doi: https://doi.org/10.1016/j.sigpro.2019.107348

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 Elsevier B.V. All rights reserved.

Highlights

- A least lncosh (Llncosh) algorithm based on the lncosh cost function is proposed. Moreover, its variable-parameter-λ variant is introduced.
- Mean stability, mean-square stability and steady-state performance of the proposed algorithm are analyzed.
- The proportionate-type variant of the Llncosh algorithm for the acoustic echo cancellation (AEC) application is derived.
- Computer simulations validate the effectiveness of the proposed algorithm.


Robust adaptive filter with lncosh cost

Chang Liu1*, Ming Jiang1

1School of Electronic Engineering, Dongguan University of Technology, Dongguan, 523808, People's Republic of China

*Corresponding author. Tel: +86-13631742215. E-mail address: [email protected]

Abstract: In this paper, a least lncosh (Llncosh) algorithm is derived by utilizing the lncosh cost function. The lncosh cost is characterized by the natural logarithm of the hyperbolic cosine function, and behaves like a hybrid of the mean square error (MSE) and mean absolute error (MAE) criteria depending on a positive parameter λ. Hence, the Llncosh algorithm performs like the least mean square (LMS) algorithm for small errors and like the sign-error LMS (SLMS) algorithm for large errors. It provides performance comparable to the LMS algorithm in Gaussian noise, while compared with several existing robust approaches it attains superior steady-state performance and stronger robustness in impulsive noise. The mean behavior, mean-square behavior and steady-state performance analyses of the proposed algorithm are also provided. In addition, to achieve a compromise between a fast initial convergence rate and satisfactory steady-state performance, we introduce a variable-λ Llncosh (VLlncosh) scheme. Lastly, to handle the sparsity of the acoustic echo path, an improved proportionate least lncosh (PLlncosh) algorithm is presented. The good performance against impulsive noise and the theoretical results of the proposed algorithm are validated by simulations.

Index Terms: Adaptive filtering, lncosh cost function, hyperbolic cosine function, LMS algorithm, sign algorithm.

1 Introduction

The mean square error (MSE) criterion, which captures the second-order moments of the data, is most attractive when the signals or noises are assumed to be Gaussian distributed. Many well-known adaptive filtering algorithms, such as the least mean square (LMS) algorithm and the normalized LMS (NLMS) algorithm, are derived from this criterion [1], [2]. Unfortunately, in many real-world applications signals frequently follow non-Gaussian distributions, and in this case the MSE-based adaptive filtering algorithms encounter severe performance degradation, especially


when heavy-tailed impulsive noises or large outliers occur in the desired signal. To address this problem, a variety of robust optimization criteria and cost functions have been utilized in adaptive filtering. Among these studies, the sign-error LMS (SLMS) algorithm based on the mean absolute error (MAE) (L1 error norm) criterion was presented in [3]. It updates the tap-weight vector by employing only the directional information of the error signal, resulting in effective suppression of large outliers and high robustness to impulsive noises. However, it suffers from inferior performance in the absence of impulsive noises. A robust mixed-norm (RMN) adaptive filtering algorithm [4], which switches between the MSE and MAE criteria, offers improved robustness; it is derived under the assumption that the desired signal has a zero-mean Gaussian distribution. Recently, by minimizing the cost function of fractional lower-order moments of the error signal, the least mean p-power (LMP) algorithm [5-9] has been proposed for alpha-stable (α-S) distributed noise environments. The range of the fractional lower order p is 1 ≤ p < α, where α is the characteristic exponent in the characteristic function of the α-S random process. By assuming that the parameter α is known and selecting a suitable value of p, the LMP algorithm achieves stronger robustness and superior performance compared with the LMS and SLMS (a special case of LMP with p = 1) algorithms in α-S distributed noises. In [10-13], several robust M-estimate algorithms based on the modified Huber two-part or Hampel three-part redescending cost functions are proposed, including the least mean M-estimate (LMM) algorithm [10], the transform least mean M-estimate (TLMM) algorithm [10] and the recursive least M-estimate (RLM) algorithm [11], among others.

The M-estimate algorithms compare the amplitude of the error signal with pre-calculated threshold parameters, and then either ignore the larger errors or extract only their sign information for the filter update. The tracking and steady-state performances of the M-estimate algorithms depend on the proper selection of the threshold parameters, which are computed by assuming that the distribution of the impulse-free error signal is Gaussian. As shown in [14], the RLM algorithm exhibits degraded tracking performance and robustness in the case of a sudden system change. In [15], the maximum correntropy criterion (MCC), which maximizes the correntropy between the desired signal and the filter output, has been successfully applied in adaptive filtering. Significantly, the MCC-based adaptive algorithm achieves strong robustness to heavy-tailed impulsive noises [15], [16]. Recently, a new convex cost function called the lncosh cost was presented in [17]; it is characterized by the natural logarithm of the hyperbolic cosine function and has been used in support


vector regression (SVR) successfully. Unlike the MSE criterion, the lncosh cost does not rely on assumptions about the noise distribution. As a consequence, the SVR framework based on the lncosh cost is insensitive to large outliers and yields a globally optimal solution in the maximum likelihood sense. The lncosh cost behaves like a hybrid of the MSE and MAE criteria, tuned by a positive parameter λ: as λ goes to zero, the lncosh cost approaches the MSE cost function, while as λ approaches infinity, the lncosh cost is equivalent to the MAE cost function. Hence the lncosh cost allows switching between MSE and MAE, and exhibits a tradeoff between them depending on the selection of λ.

The purpose of this paper is to provide a robust adaptive filtering algorithm based on the lncosh cost function. The main contributions of this work are as follows.

1) Based on the lncosh cost function presented in [17], we derive a robust adaptive filtering algorithm, namely the least lncosh (Llncosh) algorithm, which enjoys the advantages of both the LMS and SLMS algorithms. It not only offers a reduced steady-state mean square deviation (MSD) compared with several other popular robust adaptive schemes [3-16] in impulsive noise, but also achieves performance similar to that of the conventional LMS algorithm in Gaussian noise.

2) By utilizing Bussgang's theorem [18] and the energy conservation relation [19-21], we analyze the mean convergence and mean-square convergence of the proposed algorithm. To predict the steady-state performance, the theoretical excess mean square error (EMSE) and MSD are also provided based on the framework of Taylor series linearization [6, 15, 16, 22].

3) We present a new variable-λ Llncosh (VLlncosh) scheme based on the relation between a variable parameter λ_n and the output error signal e_n, obtaining a compromise between fast initial convergence rate and low steady-state MSD.

4) We extend the lncosh cost to the proportionate-type algorithm [23], and develop an improved proportionate least lncosh (PLlncosh) algorithm for acoustic echo cancellation (AEC).

The rest of the paper is organized as follows. Several properties of the lncosh cost are introduced in Section 2. In Section 3, we develop the Llncosh and VLlncosh algorithms, and the mean convergence, mean-square convergence and steady-state performance analyses of the Llncosh algorithm are provided. The PLlncosh algorithm for AEC is derived in Section 4.


Simulation results are illustrated in Section 5, and Section 6 concludes the paper.

2 lncosh cost function

As shown in Fig. 1, we consider a system identification problem in which the desired signal is generated by $d_n = \mathbf{w}_0^T\mathbf{x}_n + v_n$, where $\mathbf{w}_0 \in \mathbb{R}^{M\times 1}$ is the unknown M-dimensional system parameter vector, $d_n$ is the desired signal, $\mathbf{x}_n = [x_n, x_{n-1}, \dots, x_{n-M+1}]^T$ is the input signal vector of length M at time instant n, $v_n$ denotes the ambient noise, and the superscript $(\cdot)^T$ denotes vector transpose. The system error signal can be expressed as

$$e_n = d_n - y_n = d_n - \mathbf{w}_{n-1}^T\mathbf{x}_n, \qquad (1)$$

where $y_n = \mathbf{w}_{n-1}^T\mathbf{x}_n$ represents the output of the adaptive filter and $\mathbf{w}_{n-1} = [w_{n-1,0}, w_{n-1,1}, \dots, w_{n-1,M-1}]^T$ denotes the weight vector of an adaptive filter of length M.

Based on the error signal $e_n$, various error optimization criteria have been developed in search of an optimal solution [3-16]. In this work, we define a new lncosh cost function as [17]

$$J_{\mathrm{lncosh}} = E\left[(1/\lambda)\ln(\cosh(\lambda e_n))\right], \qquad (2)$$

where $\ln(\cdot)$ is the natural logarithm, $E[\cdot]$ denotes the mathematical expectation, $\lambda$ is a positive parameter, i.e., $\lambda\in(0,\infty)$, and $\cosh(\lambda e_n)$ is the hyperbolic cosine function, which is expressed as

$$\cosh(\lambda e_n) = \frac{\exp(\lambda e_n) + \exp(-\lambda e_n)}{2}. \qquad (3)$$

The score function can then be given by

$$\varphi(e_n) = \frac{\partial[(1/\lambda)\ln(\cosh(\lambda e_n))]}{\partial e_n} = \tanh(\lambda e_n), \qquad (4)$$

where $\tanh(\lambda e_n)$ is the hyperbolic tangent function, i.e.,

$$\tanh(\lambda e_n) = \frac{\exp(\lambda e_n) - \exp(-\lambda e_n)}{\exp(\lambda e_n) + \exp(-\lambda e_n)}. \qquad (5)$$
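As a quick sanity check (a NumPy illustration added here, not part of the original derivation), the score function (4) can be compared against a numerical derivative of the instantaneous cost; the parameter value λ = 2 is an arbitrary choice:

```python
import numpy as np

# Instantaneous lncosh cost (1/lam)*ln(cosh(lam*e)) and its score tanh(lam*e),
# as in Eqs. (2) and (4); lam = 2 is an arbitrary example value.
lam = 2.0
e = np.linspace(-2.0, 2.0, 401)
cost = np.log(np.cosh(lam * e)) / lam
score = np.tanh(lam * e)

# Differentiate the cost numerically and compare with the analytic score
num_grad = np.gradient(cost, e)
max_err = np.max(np.abs(num_grad[1:-1] - score[1:-1]))
assert max_err < 1e-3
```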

According to the definition of the lncosh cost, several properties are presented as follows.

Property 1: The lncosh cost is strictly convex on $e_n\in(-\infty,\infty)$.

Proof: The second derivative of $J_{\mathrm{lncosh}}$ with respect to $e_n$ is

$$J''_{\mathrm{lncosh}} = E\{[(1/\lambda)\ln(\cosh(\lambda e_n))]''\} = E\{[\tanh(\lambda e_n)]'\} = E\left[\frac{\lambda}{\cosh^2(\lambda e_n)}\right] > 0. \qquad (6)$$

Note that for the hyperbolic cosine function $\cosh(\lambda e_n)$, we have

$$\cosh(\lambda e_n) = \frac{\exp(\lambda e_n) + \exp(-\lambda e_n)}{2} \ge 1. \qquad (7)$$

Property 2: As $\lambda\to 0$, the function $J_{\mathrm{lncosh}}$ approaches the cost function $E[\lambda e_n^2/2]$.

Proof: Since the functions $J_{\mathrm{lncosh}}$ and $E[\lambda e_n^2/2]$ are twice continuously differentiable, we obtain by using L'Hospital's rule

$$\lim_{\lambda\to 0}\frac{J_{\mathrm{lncosh}}}{E[\lambda e_n^2/2]} = \lim_{\lambda\to 0}\frac{E[\tanh(\lambda e_n)e_n]}{\lambda E[e_n^2]} = \lim_{\lambda\to 0}\frac{(E[\tanh(\lambda e_n)e_n])'}{(\lambda E[e_n^2])'} = \lim_{\lambda\to 0}E\left[\frac{e_n^2}{\cosh^2(\lambda e_n)}\right]\frac{1}{E(e_n^2)} = 1, \qquad (8)$$

where $(\cdot)'$ denotes differentiation with respect to $\lambda$. Note that as $\lambda\to 0$, $\cosh^2(\lambda e_n)\to 1$.

Property 3: As $\lambda\to\infty$, the function $J_{\mathrm{lncosh}}$ is approximately equivalent to the MAE cost function $E[|e_n|]$.

Proof: When the error signal $e_n > 0$, we have

$$\lim_{\lambda\to\infty}\frac{J_{\mathrm{lncosh}}}{E[|e_n|]} = \lim_{\lambda\to\infty}\frac{E[\tanh(\lambda e_n)e_n]}{E[e_n]} = 1. \qquad (9)$$

Note that for $\lambda\to\infty$ and $e_n > 0$, $\tanh(\lambda e_n)\to 1$. When the error signal $e_n < 0$, we have

$$\lim_{\lambda\to\infty}\frac{J_{\mathrm{lncosh}}}{E[|e_n|]} = \lim_{\lambda\to\infty}\frac{E[\tanh(\lambda e_n)e_n]}{-E[e_n]} = 1. \qquad (10)$$

Note that for $\lambda\to\infty$ and $e_n < 0$, $\tanh(\lambda e_n)\to -1$. When the error signal $e_n = 0$, $J_{\mathrm{lncosh}} = E[|e_n|] = 0$.

Fig. 2 and Fig. 3 show the behavior of the lncosh cost $J_{\mathrm{lncosh}}$ and of its score function for different values of the parameter $\lambda$, respectively. We can clearly see that the cost function $J_{\mathrm{lncosh}}$ gradually approximates the MAE cost function as $\lambda$ increases.
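Properties 2 and 3 can also be illustrated numerically. The following sketch (illustrative values, not from the paper; `np.logaddexp` is used only as a numerically stable way to compute log-cosh) checks both limiting behaviors of the cost:

```python
import numpy as np

def lncosh_cost(e, lam):
    """(1/lam) * ln(cosh(lam*e)), computed stably via logaddexp."""
    x = lam * np.asarray(e, dtype=float)
    # log(cosh(x)) = logaddexp(x, -x) - log(2), avoids overflow for large |x|
    return (np.logaddexp(x, -x) - np.log(2.0)) / lam

e = np.linspace(-3.0, 3.0, 7)

# Property 2: for small lam the cost approaches the MSE-like term lam * e^2 / 2
assert np.allclose(lncosh_cost(e, 1e-4), 1e-4 * e**2 / 2, rtol=1e-3)

# Property 3: for large lam the cost approaches the MAE term |e|
assert np.allclose(lncosh_cost(e, 1e3), np.abs(e), atol=1e-2)
```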

3 Least lncosh algorithm

For tractability, using the instantaneous error instead of the expectation in (2) as usual, the lncosh cost function can be approximated as

$$J_{\mathrm{lncosh}} \approx (1/\lambda)\ln(\cosh(\lambda e_n)). \qquad (11)$$

We derive the proposed algorithm by utilizing the stochastic gradient method to minimize the cost function (11). The derivative of the lncosh cost function with respect to the weight vector $\mathbf{w}_{n-1}$ can be expressed as

$$\nabla_{\mathbf{w}_{n-1}}J_{\mathrm{lncosh}} = \frac{\partial J_{\mathrm{lncosh}}}{\partial\mathbf{w}_{n-1}} = (1/\lambda)\frac{\partial\ln(\cosh(\lambda e_n))}{\partial e_n}\frac{\partial e_n}{\partial\mathbf{w}_{n-1}} = (1/\lambda)\frac{\sinh(\lambda e_n)}{\cosh(\lambda e_n)}(-\lambda\mathbf{x}_n) = -\tanh(\lambda e_n)\mathbf{x}_n. \qquad (12)$$

The weight updating equation of the Llncosh algorithm follows from the gradient descent rule as

$$\mathbf{w}_n = \mathbf{w}_{n-1} - \mu\nabla_{\mathbf{w}_{n-1}}J_{\mathrm{lncosh}} = \mathbf{w}_{n-1} + \mu\tanh(\lambda e_n)\mathbf{x}_n, \qquad (13)$$

where $\mu$ denotes a positive step size.

From (13), it is clear how the Llncosh algorithm operates in impulsive noise environments. At time instant n, a large impulsive noise sample dominates the error signal $e_n$ and leads to a large value of $\lambda e_n$; the larger the scaled error $\lambda e_n$ is, the closer the hyperbolic tangent $\tanh(\lambda e_n)$ approaches $\mathrm{sign}(e_n)$. On the contrary, when the impulsive noise is absent at time instant n and the scaled error $\lambda e_n$ is very small, the Llncosh algorithm performs like the conventional LMS algorithm. However, if the parameter $\lambda$ is too small, the convergence rate of the proposed algorithm is reduced.
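A minimal sketch of the resulting filter may help. The code below implements the update (13) directly in NumPy for a toy system identification run; the system length, step size, λ value and the impulse pattern are illustrative assumptions, not values from the paper:

```python
import numpy as np

def llncosh(x, d, M, mu, lam):
    """Llncosh system identification via Eq. (13):
    w_n = w_{n-1} + mu * tanh(lam * e_n) * x_n."""
    w = np.zeros(M)
    for n in range(len(x)):
        xn = np.zeros(M)                      # x_n = [x_n, ..., x_{n-M+1}]^T
        k = min(M, n + 1)
        xn[:k] = x[n::-1][:k]
        e = d[n] - w @ xn
        w += mu * np.tanh(lam * e) * xn       # bounded update: robust to impulses
    return w

rng = np.random.default_rng(0)
M = 8
w0 = rng.standard_normal(M)
w0 /= np.linalg.norm(w0)                      # unit-energy unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, w0)[:len(x)] + 0.01 * rng.standard_normal(len(x))
d[rng.choice(len(x) - 500, 25, replace=False)] += 50.0  # sparse large impulses

w_hat = llncosh(x, d, M, mu=0.05, lam=2.0)
msd = np.sum((w0 - w_hat) ** 2)
assert msd < 0.05   # the tanh saturation keeps the impulses from wrecking the fit
```

Because tanh saturates at ±1, each impulse perturbs the weights by at most μ‖x_n‖, which is the mechanism described in the paragraph above.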

3.1 Mean Convergence

In this subsection, we present the mean convergence of the Llncosh algorithm in the Gaussian noise environment. The mean convergence analysis involves the application of Bussgang's theorem [18, 24]. In order to facilitate the analysis, several assumptions are given below.

A1: The desired signal $d_n$ and input vector $\mathbf{x}_n$ are zero-mean and jointly Gaussian distributed.

A2: The weight error vector, defined as $\tilde{\mathbf{w}}_n = \mathbf{w}_0 - \mathbf{w}_n$, is uncorrelated with $d_n$ and $\mathbf{x}_n$.

A3: The ambient noise $v_n$ is an independent, identically distributed (i.i.d.) random sequence with zero mean and variance $\sigma_v^2$, and is independent of the input sequence $\mathbf{x}_n$.

Using the definition of the weight error vector in A2, the updating equation of the Llncosh algorithm can be rewritten as

$$\tilde{\mathbf{w}}_n = \tilde{\mathbf{w}}_{n-1} - \mu\tanh(\lambda e_n)\mathbf{x}_n. \qquad (14)$$

Taking the mathematical expectation on both sides of (14), we have

$$E[\tilde{\mathbf{w}}_n] = E[\tilde{\mathbf{w}}_{n-1}] - \mu E[\tanh(\lambda e_n)\mathbf{x}_n]. \qquad (15)$$

Lemma 1 [24]: Let $X$ and $Y$ be jointly Gaussian distributed, zero-mean random vectors with finite second moments. Then for any Borel function $G$ with $E[G^T(Y)G(Y)] < \infty$, we have

$$E[XG^T(Y)] = E[XY^T]E^{-1}[YY^T]E[YG^T(Y)]. \qquad (16)$$

Using assumption A1 and Lemma 1, the error signal $e_n$ and $\mathbf{x}_n$ can also be considered jointly Gaussian and zero-mean [3]. Thus, we obtain

$$E[\tanh(\lambda e_n)\mathbf{x}_n] = \frac{E[e_n\tanh(\lambda e_n)]}{E[e_n^2]}E[\mathbf{x}_n e_n] = C(\lambda)E[\mathbf{x}_n e_n], \qquad (17)$$

where

$$C(\lambda) = \frac{E[e_n\tanh(\lambda e_n)]}{E[e_n^2]}. \qquad (18)$$

Property 4: If $e_n$ is a zero-mean Gaussian random variable and the parameter $\lambda\in(0,\infty)$, the scale parameter $C(\lambda)$ is bounded by

$$0 < C(\lambda) \le \sqrt[3]{\frac{4\lambda}{3E(e_n^2)}}.$$

Proof: See Appendix A.

Substituting (17) into (15), we get

$$E[\tilde{\mathbf{w}}_n] = E[\tilde{\mathbf{w}}_{n-1}] - \mu C(\lambda)E[\mathbf{x}_n e_n]. \qquad (19)$$

Recalling that $d_n = \mathbf{w}_0^T\mathbf{x}_n + v_n$ and using assumptions A1 and A3, $E[\mathbf{x}_n e_n]$ can be expressed as

$$E[\mathbf{x}_n e_n] = E[\mathbf{x}_n(\mathbf{w}_0^T\mathbf{x}_n - \mathbf{w}_{n-1}^T\mathbf{x}_n + v_n)] = \mathbf{R}_X E[\tilde{\mathbf{w}}_{n-1}], \qquad (20)$$

where $\mathbf{R}_X = E[\mathbf{x}_n\mathbf{x}_n^T]$ is the auto-correlation matrix of the input vector $\mathbf{x}_n$. Substituting (20) into (19), we obtain

$$E[\tilde{\mathbf{w}}_n] = [\mathbf{I}_M - \mu C(\lambda)\mathbf{R}_X]E[\tilde{\mathbf{w}}_{n-1}], \qquad (21)$$

where $\mathbf{I}_M$ is the $M\times M$ identity matrix. From (21), it is easy to see that a sufficient condition for convergence of the Llncosh algorithm in the mean is that the step size $\mu$ satisfies

$$0 < \mu < \frac{2}{C(\lambda)\lambda_i(\mathbf{R}_X)}, \quad i = 1, 2, \dots, M, \qquad (22)$$

where $\lambda_i(\mathbf{R}_X)$ are the eigenvalues of the matrix $\mathbf{R}_X$. In the following, another, more restrictive condition for convergence of the Llncosh algorithm can be given by

$$0 < \mu < \frac{2}{C(\lambda)\mathrm{tr}(\mathbf{R}_X)}, \qquad (23)$$

where tr() is the trace operator of a matrix. Property 5: As   0 , C ( )   , the Llncosh algorithm performs like the conventional LMS algorithm with the step size     and the sufficient condition for mean convergence is 0   

Proof:

2 . tr(R X )

As   0 , we know that E[en tanh(en )]

(24)

E[en2 ] from (8), where “ ” denotes the

equivalence relation. Hence, C ( ) 

E[en tanh( en )]  0   C ( )   . E[en2 ]

(25)

Substituting C ( )   into (19), the updating equation (19) is equivalent to that of the LMS algorithm and the step size can be viewed as  . Property 6: As    , the Llncosh algorithm is equivalent to the SLMS algorithm and the

9

sufficient condition for mean convergence is [3] 0 

Proof:

2 E[en2 ] tr(R X )

(26)

.

See Appendix B.
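The limiting behavior of C(λ) in Properties 5 and 6 can be checked by Monte Carlo simulation. The sketch below (an illustration assuming a unit-variance Gaussian error, not from the paper) estimates (18) directly:

```python
import numpy as np

rng = np.random.default_rng(1)
e = rng.standard_normal(1_000_000)      # zero-mean Gaussian error, unit variance

def C(lam):
    """Scale parameter of Eq. (18): C(lam) = E[e*tanh(lam*e)] / E[e^2]."""
    return np.mean(e * np.tanh(lam * e)) / np.mean(e**2)

# Property 5 regime: C(lam) -> lam as lam -> 0 (LMS-like behavior)
assert abs(C(0.01) - 0.01) < 1e-3
# Property 6 regime: C(lam) -> E[|e|]/E[e^2] = sqrt(2/pi) as lam -> inf (SLMS-like)
assert abs(C(100.0) - np.sqrt(2.0 / np.pi)) < 5e-3
```

The large-λ value sqrt(2/π) is exactly what turns the generic bound (23) into the SLMS condition (26) for a unit-variance Gaussian error.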

3.2 Mean Square Convergence

In this subsection, we investigate the mean-square stability of the Llncosh algorithm by resorting to the energy conservation relation [19-21]. Squaring both sides of (14) and then taking the mathematical expectation, we derive

$$E[\|\tilde{\mathbf{w}}_n\|^2] = E[\|\tilde{\mathbf{w}}_{n-1}\|^2] - 2\mu E[e_{a,n}\tanh(\lambda e_n)] + \mu^2 E[\|\mathbf{x}_n\|^2\tanh^2(\lambda e_n)], \qquad (27)$$

where $\|\cdot\|$ denotes the Euclidean norm of a vector and $e_{a,n}$ is the noise-free a priori error, defined as

$$e_{a,n} = \tilde{\mathbf{w}}_{n-1}^T\mathbf{x}_n. \qquad (28)$$

Equation (27) presents the averaged energy conservation relation of the weight error vector at two adjacent instants. As usual, this relation plays a founding role in our mean-square stability analysis, and the main task is to evaluate the two expectations involving $e_{a,n}$ and $e_n$ on the right-hand side of (27). Meanwhile, the energy conservation relation does not require the distributions of the input or noise sequences to be Gaussian. Nevertheless, it requires the adaptive filter to be long enough [19], which makes the following assumptions realistic.

A4: The noise-free a priori error $e_{a,n}$ is a zero-mean Gaussian random variable and is independent of the ambient noise $v_n$.

A5: For a sufficiently long adaptive filter, $\|\mathbf{x}_n\|^2$ and $\tanh^2(\lambda e_n)$ are asymptotically uncorrelated, that is [19-21],

$$E[\|\mathbf{x}_n\|^2\tanh^2(\lambda e_n)] = E[\|\mathbf{x}_n\|^2]E[\tanh^2(\lambda e_n)]. \qquad (29)$$

To ensure the mean-square stability of the Llncosh algorithm, the averaged energy of the weight error vector at two adjacent instants should satisfy

$$E[\|\tilde{\mathbf{w}}_n\|^2] \le E[\|\tilde{\mathbf{w}}_{n-1}\|^2]. \qquad (30)$$

Substituting (30) into (27), the stability upper bound on the step size $\mu$ is derived as

$$0 < \mu < \frac{2E[e_{a,n}\tanh(\lambda e_n)]}{E[\|\mathbf{x}_n\|^2\tanh^2(\lambda e_n)]} = \frac{2E[e_{a,n}\tanh(\lambda e_n)]}{E[\|\mathbf{x}_n\|^2]E[\tanh^2(\lambda e_n)]}. \qquad (31)$$

Using the method in [19-21] and assumption A4, the upper bound of the step size in (31) can be expressed as a function of the second-order moment $E[e_{a,n}^2]$:

$$0 < \mu < \frac{2}{E[\|\mathbf{x}_n\|^2]}\inf_{E[e_{a,n}^2]\in\Omega}\left\{\frac{h_G[E(e_{a,n}^2)]}{h_U[E(e_{a,n}^2)]}\right\}, \qquad (32)$$

where

$$h_G[E(e_{a,n}^2)] = \frac{E[e_{a,n}\tanh(\lambda e_n)]}{E[e_{a,n}^2]}, \qquad (33)$$

$$h_U[E(e_{a,n}^2)] = E[\tanh^2(\lambda e_n)], \qquad (34)$$

$$\Omega = \left\{E[e_{a,n}^2] : \eta \le E[e_{a,n}^2] \le \frac{1}{4}E[\|\mathbf{x}_n\|^2]E[\|\tilde{\mathbf{w}}_n\|^2]\right\}, \qquad (35)$$

and $\eta$ is the Cramer-Rao bound of the estimation process of using $\mathbf{w}_n^T\mathbf{x}_n$ to estimate the random quantity $\mathbf{w}_0^T\mathbf{x}_n$. Besides, by utilizing the Cauchy-Schwarz inequality under assumptions A1 and A2, the upper bound of $E[e_{a,n}^2]$ in (35) can be obtained as

$$E[e_{a,n}^2] \le \frac{1}{4}\left\{E[\|\tilde{\mathbf{w}}_n\|^2]^{1/2}E[\|\mathbf{x}_n\|^2]^{1/2}\right\}^2 = \frac{1}{4}E[\|\mathbf{x}_n\|^2]E[\|\tilde{\mathbf{w}}_n\|^2] = \frac{1}{4}\mathrm{tr}(\mathbf{R}_X)E[\|\tilde{\mathbf{w}}_n\|^2]. \qquad (36)$$

3.3 Steady-State analysis

In this subsection, we provide the steady-state analysis of the Llncosh algorithm by using Taylor series linearization [6, 15, 16, 22]. This framework of linearization analysis has been proved to be reasonable when the adaptive filter converges nearly to the optimum weight vector [22]. Furthermore, it has been successfully applied to the steady-state analyses of several adaptive algorithms with error nonlinearities [6, 15, 16]. In the steady state, we are commonly interested in the steady-state excess mean square error (EMSE) and MSD, which can be defined respectively as

$$\zeta = \lim_{n\to\infty}E[e_{a,n}^2], \qquad (37)$$

$$\xi = \lim_{n\to\infty}E[\|\tilde{\mathbf{w}}_n\|^2]. \qquad (38)$$


If the input signal $\mathbf{x}_n$ is i.i.d. with zero mean, by using (37) and (38) the steady-state MSD can be expressed as

$$\xi = \frac{M\zeta}{\mathrm{tr}(\mathbf{R}_X)}. \qquad (39)$$

The averaged energy of the weight error vector in the steady state satisfies

$$\lim_{n\to\infty}E[\|\tilde{\mathbf{w}}_n\|^2] = \lim_{n\to\infty}E[\|\tilde{\mathbf{w}}_{n-1}\|^2]. \qquad (40)$$

Combining (27) and (40) under assumption A5, we get

$$2\lim_{n\to\infty}E[e_{a,n}\tanh(\lambda e_n)] = \mu\,\mathrm{tr}(\mathbf{R}_X)\lim_{n\to\infty}E[\tanh^2(\lambda e_n)]. \qquad (41)$$

For ease of the following derivation, we define the nonlinear function $f(\lambda e_n) = \tanh(\lambda e_n)$ and expand $f(\lambda e_n)$ in a Taylor series with respect to $\lambda e_{a,n}$ around $\lambda v_n$, which yields

$$f(\lambda e_n) = f(\lambda e_{a,n} + \lambda v_n) = f(\lambda v_n) + f'(\lambda v_n)(\lambda e_{a,n}) + (1/2)f''(\lambda v_n)(\lambda e_{a,n})^2 + O[(\lambda e_{a,n})^2], \qquad (42)$$

where $O[(\lambda e_{a,n})^2]$ represents the third- and higher-order error power terms, and $f'(\lambda v_n)$ and $f''(\lambda v_n)$ denote the first and second derivatives of the function $f$, respectively, i.e.,

$$f'(\lambda v_n) = \mathrm{sech}^2(\lambda v_n), \qquad (43)$$

$$f''(\lambda v_n) = -2\,\mathrm{sech}^2(\lambda v_n)\tanh(\lambda v_n). \qquad (44)$$

Then, we insert (42) into the left-hand side of (41) and obtain

$$2\lim_{n\to\infty}E[e_{a,n}\tanh(\lambda e_n)] = 2\lim_{n\to\infty}E\{f(\lambda v_n)e_{a,n} + \lambda f'(\lambda v_n)e_{a,n}^2 + O[(\lambda e_{a,n})^2]\}, \qquad (45)$$

where

$$O[(\lambda e_{a,n})^2] = (1/2)\lambda^2 f''(\lambda v_n)e_{a,n}^3 + \cdots, \qquad (46)$$

with the ellipsis collecting the fourth- and higher-order error power terms.

At the steady state, the noise-free a priori error $e_{a,n}$ is very small for a sufficiently small step size $\mu$; hence we can neglect the high-order power terms $E\{O[(\lambda e_{a,n})^2]\}$ [23]. Besides $e_{a,n}$, it can also be seen from (46) that the high-order error power term $O[(\lambda e_{a,n})^2]$ depends on the parameter $\lambda$. In order to evaluate this dependence, we define an evaluation factor (EF) $\chi$ as the ratio between the amplitude of $O[(\lambda e_{a,n})^2]$ and the absolute value of $\tanh(\lambda e_n)$:

$$\chi = E\left[\big|O[(\lambda e_{a,n})^2]\big| \,/\, \big|\tanh(\lambda e_n)\big|\right]. \qquad (47)$$

The EF $\chi$ defined in (47) expresses how small the high-order error power term $O[(\lambda e_{a,n})^2]$ is compared with $\tanh(\lambda e_n)$ at the steady state. Table 1 shows the simulated values of the EF $\chi$ for different choices of the parameter $\lambda$ in the Gaussian noise case. From Table 1, we can see that the magnitude of $O[(\lambda e_{a,n})^2]$ is less than one tenth of the absolute value of $\tanh(\lambda e_n)$ for $1\le\lambda\le 10$. When the parameter $\lambda$ becomes large, the values of the EF increase, which indicates that the steady-state EMSE and MSD derived by neglecting $E\{O[(\lambda e_{a,n})^2]\}$ may not evaluate the steady-state performance accurately enough. Gaussian noise is used in Table 1; however, similar simulated results can be obtained for uniform and Laplace noises. Therefore, for proper values of $\lambda$ ($1\le\lambda\le 10$), we can neglect the high-order power term $E\{O[(\lambda e_{a,n})^2]\}$ in (45), and using assumptions A3 and A4, we get

$$2\lim_{n\to\infty}E[e_{a,n}\tanh(\lambda e_n)] = 2\lambda\zeta\lim_{n\to\infty}E[f'(\lambda v_n)]. \qquad (48)$$

In a similar way, inserting (42) into the right-hand side of (41), we obtain

$$\lim_{n\to\infty}E[\tanh^2(\lambda e_n)] = \lim_{n\to\infty}E[f^2(\lambda v_n)] + \lambda^2\zeta\lim_{n\to\infty}E[f'^2(\lambda v_n) + f(\lambda v_n)f''(\lambda v_n)]. \qquad (49)$$

Then, combining (41), (48) and (49), the steady-state EMSE can be derived as

$$\zeta = \frac{\mu\,\mathrm{tr}(\mathbf{R}_X)\lim_{n\to\infty}E[f^2(\lambda v_n)]}{2\lambda\lim_{n\to\infty}E[f'(\lambda v_n)] - \mu\lambda^2\,\mathrm{tr}(\mathbf{R}_X)\lim_{n\to\infty}E[f'^2(\lambda v_n) + f(\lambda v_n)f''(\lambda v_n)]}. \qquad (50)$$

Substituting (50) into (39), the steady-state MSD is expressed as

$$\xi = \frac{\mu M\lim_{n\to\infty}E[f^2(\lambda v_n)]}{2\lambda\lim_{n\to\infty}E[f'(\lambda v_n)] - \mu\lambda^2\,\mathrm{tr}(\mathbf{R}_X)\lim_{n\to\infty}E[f'^2(\lambda v_n) + f(\lambda v_n)f''(\lambda v_n)]}. \qquad (51)$$
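The steady-state prediction can be evaluated numerically. The sketch below is an illustration with assumed parameter values; it encodes our reading of the EMSE formula (50), with f = tanh and Gaussian noise, and verifies that for small λ the prediction collapses to the familiar LMS EMSE with effective step size μλ, as Property 5 suggests:

```python
import numpy as np

def llncosh_emse(mu, lam, sigma_v, tr_Rx, n_mc=1_000_000, seed=2):
    """Monte-Carlo evaluation of the steady-state EMSE formula (50) with
    f(x) = tanh(x), f'(x) = sech^2(x), f''(x) = -2*sech^2(x)*tanh(x)."""
    rng = np.random.default_rng(seed)
    lv = lam * sigma_v * rng.standard_normal(n_mc)   # samples of lam * v_n
    f = np.tanh(lv)
    f1 = 1.0 / np.cosh(lv) ** 2
    f2 = -2.0 * f1 * f
    num = mu * tr_Rx * np.mean(f**2)
    den = 2.0 * lam * np.mean(f1) - mu * lam**2 * tr_Rx * np.mean(f1**2 + f * f2)
    return num / den

mu, lam, sigma_v, tr_Rx = 0.01, 0.05, 0.1, 8.0
zeta = llncosh_emse(mu, lam, sigma_v, tr_Rx)

# For lam -> 0 the Llncosh EMSE should match the LMS EMSE with step mu*lam
mu_eff = mu * lam
zeta_lms = mu_eff * tr_Rx * sigma_v**2 / (2.0 - mu_eff * tr_Rx)
assert abs(zeta - zeta_lms) / zeta_lms < 0.05
```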

3.4 Adaptation of the parameter λ

From Property 5, we know that the Llncosh algorithm is tantamount to the conventional LMS algorithm in Gaussian noise. Hence the aim of adapting the parameter $\lambda$ is to reach a compromise between a fast convergence rate and a low steady-state MSD. To be specific, the variable parameter $\lambda_n$ should be as large as possible during the early adaptation process to ensure a fast convergence rate; as the algorithm gradually approaches its steady state, $\lambda_n$ decreases adaptively, leading to a low steady-state MSD. In this study, the initial value $\lambda_0$ is set between 5 and 10, which performs well in both Gaussian and impulsive noises. In addition, in order to achieve sufficient suppression of large error signals in impulsive noise, the variable parameter $\lambda_n$ should stay very close to this initial value when large errors occur. Thus, the variable parameter $\lambda_n$, related to the output error signal, is presented as

$$\lambda_n = \lambda_0\left[\frac{1 - \exp(-A_0|e_n|)}{1 + h\exp(-A_0|e_n|)}\right], \qquad (52)$$

where the constant $\lambda_0$ determines the range of the variable parameter $\lambda_n$, and the factors $A_0$ and $h$ control the shape and the characteristic of the bottom surface of function (52), respectively, as shown in Fig. 4. Fig. 5 shows the MSD curves of the variable-$\lambda_n$ Llncosh (VLlncosh) algorithm for different values of the factors $\lambda_0$, $A_0$ and $h$. The input is zero-mean white Gaussian noise with unit variance. From Fig. 4 and Fig. 5, we can observe the following. 1) The variable parameter $\lambda_n$ decreases from $\lambda_0$ to a very small value as the error signal decreases. 2) Large values of the factors $\lambda_0$ and $A_0$ increase the convergence rate of the proposed algorithm but degrade the steady-state performance in terms of MSD. 3) A large value of the factor $h$ improves the steady-state performance at the expense of convergence rate. 4) In the presence of impulsive noises, the large error signals push the variable parameter $\lambda_n$ very close to the large constant $\lambda_0$, guaranteeing good robustness against the impulses. In general, the settings of the factors $\lambda_0$, $A_0$ and $h$ depend on the practical application. In our simulations, the values of the three factors are chosen as $5\le\lambda_0\le 10$, $A_0 = 10$ and $h = 1$.
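The schedule (52) is straightforward to implement. The sketch below (an illustration; the default factor values are picked from the ranges used above) checks its two operating regimes:

```python
import numpy as np

def lambda_n(e, lam0=8.0, A0=10.0, h=1.0):
    """Variable parameter of Eq. (52):
    lam0 * (1 - exp(-A0*|e|)) / (1 + h*exp(-A0*|e|))."""
    a = np.exp(-A0 * np.abs(e))
    return lam0 * (1.0 - a) / (1.0 + h * a)

# a large (impulse-dominated) error keeps lambda_n near lam0 -> sign-like update
assert abs(lambda_n(5.0) - 8.0) < 1e-6
# a vanishing (steady-state) error drives lambda_n toward zero -> LMS-like update
assert lambda_n(1e-4) < 0.01
```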

4 Application in acoustic echo cancellation (AEC)

In AEC, a variety of proportionate-type algorithms have been intensively studied for tracking the sparse acoustic echo channel [23-28]. The main idea of the proportionate scheme is to update each adaptive filter coefficient with an individual step size that is proportionate to the absolute value of the current estimated filter coefficient [23]. In this way, the coefficient adaptation mainly occurs in the active region (i.e., large coefficients) of the sparse echo channel, while the increments of the small and zero coefficients are very small. Therefore, the proportionate-type algorithms achieve an obvious improvement in convergence performance over the conventional adaptive algorithms [23-28].


In the following, we consider a proportionate-type variant of the Llncosh algorithm. Define the a posteriori error as

$$\varepsilon_n = d_n - \mathbf{w}_n^T\mathbf{x}_n. \qquad (52)$$

Then, in order to derive the proportionate-type Llncosh (PLlncosh) algorithm, we consider the minimization of the following normalized error cost function:

$$J[\mathbf{w}_n] = \left\|\mathbf{G}^{-1/2}(n-1)[\mathbf{w}_n - \mathbf{w}_{n-1}]\right\|^2 + 2\mu_0\frac{(1/\lambda)\ln[\cosh(\lambda\varepsilon_n)]}{\mathbf{x}_n^T\mathbf{G}(n-1)\mathbf{x}_n}, \qquad (53)$$

where $\mathbf{G}(n-1) = \mathrm{diag}[g_0(n-1),\, g_1(n-1),\, \dots,\, g_{M-1}(n-1)]$ is an $M\times M$ diagonal matrix whose diagonal elements $g_i(n-1)$ ($i = 0, 1, \dots, M-1$) depend on the filter weight vector at time instant $n-1$ (i.e., $\mathbf{w}_{n-1}$), and $\mu_0 > 0$ is a small positive parameter which here acts as the step size of the PLlncosh algorithm. Setting the derivative of (53) with respect to $\mathbf{w}_n$ to zero, we obtain

$$\mathbf{w}_n = \mathbf{w}_{n-1} + \frac{\mu_0\mathbf{G}(n-1)\mathbf{x}_n\tanh(\lambda\varepsilon_n)}{\mathbf{x}_n^T\mathbf{G}(n-1)\mathbf{x}_n + \delta}, \qquad (54)$$

where $\delta$ is a small positive constant applied to avoid division by zero. Note that the a posteriori error $\varepsilon_n$ in (52) is unknown in advance; thus the reasonable approximation $\varepsilon_n \approx e_n$ can be used for a small step size. We then obtain the following recursion:

$$\mathbf{w}_n = \mathbf{w}_{n-1} + \frac{\mu_0\mathbf{G}(n-1)\mathbf{x}_n\tanh(\lambda e_n)}{\mathbf{x}_n^T\mathbf{G}(n-1)\mathbf{x}_n + \delta}. \qquad (55)$$

As shown in (55), different choices of the proportionate gains (the diagonal elements of $\mathbf{G}(n-1)$) lead to different types of PLlncosh algorithm. Here, we adopt a well-known proportionate gain for each adaptive filter coefficient because of its flexible mechanism for tracking echo channels with different degrees of sparsity [25]:

$$g_i(n-1) = \frac{1-\alpha}{2M} + (1+\alpha)\frac{|w_i(n-1)|}{2\sum_{m=0}^{M-1}|w_m(n-1)| + \delta_0}, \quad i = 0, 1, \dots, M-1, \qquad (56)$$

where the parameter $\alpha$ is a constant between $-1$ and $1$, and $\delta_0$ is another small positive constant used to avoid division by zero.

5 Simulation results

Simulations are carried out in system identification and AEC setups. All the simulation results are obtained by averaging over 200 independent Monte Carlo trials, except for the AEC setup. We assess the performance of the Llncosh algorithm in terms of the MSD, defined as $10\log_{10}E[\|\tilde{\mathbf{w}}_n\|^2]$. The energies of the unknown system in the system identification setup and of the acoustic echo path in the AEC setup are normalized to unity, i.e., $\mathbf{w}_0^T\mathbf{w}_0 = 1$. The adaptive filter has the same number of taps as the unknown system in Subsections 5.1 and 5.2. In addition, it is initialized to a zero vector.

5.1 System identification

The unknown system $\mathbf{w}_0$ to be identified is randomly generated with length M = 16. The input signal $x_n$ is obtained by filtering a zero-mean white Gaussian random sequence with unit variance through the AR(1) system $G(z) = 1/(1 - 0.6z^{-1})$. The step sizes of the algorithms under comparison are selected so that all achieve the same initial convergence speed. In accordance with the reference papers [4-13], the simulation parameters of the considered algorithms are shown in Table 2. White Gaussian noise with zero mean, $v_n$, is added to the output of the unknown system. The signal-to-noise ratio (SNR) at the system output is defined by

$$\mathrm{SNR} = 10\log_{10}(\sigma_y^2/\sigma_v^2), \qquad (57)$$

where $\sigma_y^2$ and $\sigma_v^2$ denote the variance of the uncorrupted output and the variance of the Gaussian noise, respectively. In addition, two different kinds of impulsive interference are considered:

(1) Contaminated Gaussian (CG) noise

The CG noise can be expressed as $v_{c,n} = v_n + \eta_n = v_n + z_n b_n$, where $v_n$ is the above-mentioned zero-mean white Gaussian noise, $\eta_n$ is a Bernoulli-Gaussian (BG) impulse, $z_n$ is a zero-mean white Gaussian process with variance $\sigma_z^2 = t\sigma_v^2$ ($t \gg 1$), and $b_n$ is a Bernoulli sequence with probability mass function $P(b) = 1 - p_r$ for $b = 0$ and $P(b) = p_r$ for $b = 1$, where $p_r$ is the probability of occurrence of an impulse. Thus, the variance of $v_{c,n}$ can be expressed as $\sigma_{vc}^2 = \sigma_v^2 + p_r\sigma_z^2 = (1 + p_r t)\sigma_v^2$.

(2) Symmetric α-Stable (SαS) noise

There is no closed-form probability density function of the SαS random process except for the particular cases α = 1 (Cauchy distribution) and α = 2 (Gaussian distribution). Its characteristic function can be given by [29]


 (u)  exp( jau   | u | ),

(58)

where j  1 ,   a   ,   0 and 0    2 . The parameter α is the characteristic exponent which measures the heaviness of the tails for the SαS distribution. The smaller a parameter α is, the larger the number of impulsive components. The parameter a is the location parameter, and γ is the dispersion of the distribution. For SαS noises, we define the factional-order signal to noise ratio (FSNR) as  E (| yn | p0 )  FSNR  10log10  , p0   E (| v , n | ) 

(59)
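The SαS samples can be drawn with the Chambers-Mallows-Stuck (CMS) transform; this is a standard sampling method, not taken from the paper, and the helper names are ours:

```python
import numpy as np

def sas_noise(n_samples, alpha=1.3, gamma=1.0, rng=None):
    """Symmetric alpha-stable noise (location a = 0, dispersion gamma)
    via the Chambers-Mallows-Stuck method: V uniform on (-pi/2, pi/2),
    W unit exponential."""
    rng = rng or np.random.default_rng(0)
    V = rng.uniform(-np.pi / 2.0, np.pi / 2.0, n_samples)
    W = rng.exponential(1.0, n_samples)
    X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * X

def fsnr_db(y, v, p0=1.2):
    """Fractional-order SNR of Eq. (59)."""
    return 10.0 * np.log10(np.mean(np.abs(y) ** p0)
                           / np.mean(np.abs(v) ** p0))
```

A useful check: for α = 2 the CMS transform yields a Gaussian with variance 2γ, while for α < 2 the second moment diverges, which is why the FSNR of Eq. (59) uses a fractional moment p0 < α.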

where v_{α,n} denotes the SαS noise and p0 is the fractional order with 0 < p0 < α. In the simulations, the parameters of the SαS noise are set as follows: α = 1.3, p0 = 1.2, a = 0 and γ = 1.

Fig. 6 compares the MSD curves of the Llncosh algorithm with those of the conventional LMS and SLMS algorithms in white Gaussian noise at SNR = 30dB, without impulsive noise (pr = 0). The impulse response of the unknown system is suddenly shifted to the right by 3 samples at time instant n = 10000. In general, the steady-state performance of the Llncosh algorithm gradually degrades as the parameter λ increases. For large values of λ (λ = 100), the Llncosh and SLMS algorithms have almost the same performance, owing to the behavior of the lncosh cost function. For small values of λ (λ = 0.1 and λ = 1), however, the Llncosh algorithm with μ = 0.01 achieves nearly the same convergence performance as the LMS with step size μ_LMS = 0.01, which confirms Property 5.

Fig. 7, Fig. 8 and Fig. 9 show the MSD learning curves of the conventional LMS, LMM, LMP, RMN and Llncosh algorithms in the SαS noise environment at different FSNRs. The impulse response of the unknown system is changed by shifting it 3 samples to the right at time instant n = 5000. As shown in Fig. 7, at 13dB FSNR all the algorithms except the LMS are

robust against SαS impulsive noise, and the Llncosh algorithm with λ = 2 attains the lowest steady-state MSD. At the lower FSNRs (0dB and −5dB), the LMS and LMM are not robust to SαS impulsive noise, as illustrated in the insets at the top-right corners of Fig. 8 and Fig. 9. The Llncosh algorithm, however, still outperforms the other cited algorithms in terms of steady-state MSD. Fig. 10 and Fig. 11 compare the MSD learning curves of the Llncosh algorithm with those of the other algorithms for different probabilities of impulse occurrence in the CG impulsive noise environment. It can clearly be seen


that the Llncosh algorithm achieves better steady-state performance than the comparative algorithms, especially for λ = 2.

In Fig. 12-Fig. 15, we compare the performance of the adaptive generalized maximum correntropy criterion (GMCC) and Llncosh algorithms. The update equation of the GMCC is given as [15]

w_n = w_{n−1} + μ exp(−a0|e_n|^{α0}) |e_n|^{α0−1} sign(e_n) x_n,    (60)

where μ is a positive step size, α0 > 0 is the shape parameter and a0 > 0 is the kernel parameter. Fig. 12 and Fig. 13 show the MSD curves of the GMCC and Llncosh algorithms for different probabilities of impulse occurrence in CG noise. Fig. 14 and Fig. 15 compare the MSDs of the GMCC and Llncosh algorithms at different FSNRs in the SαS noise environment. As can be seen, for a similar initial convergence rate the Llncosh algorithm yields a lower steady-state MSD than the GMCC. Fig. 16 and Fig. 17 compare the Llncosh and variable-λ Llncosh (VLlncosh) algorithms in CG noise for different probabilities of impulse occurrence. The VLlncosh reaches a compromise between convergence rate and steady-state MSD, achieving both a small steady-state MSD and a fast initial convergence rate.

5.2 Verification of steady-state analysis

In this subsection, we verify the steady-state theoretical analysis of the Llncosh algorithm for different noise distributions (Gaussian, uniform and Laplace). The system identification scenario shown in Fig. 1 is considered. The input signal is a zero-mean white Gaussian sequence with unit variance. The length of the unknown system is set to 20, equal to that of the adaptive filter. To ensure sufficient convergence of the proposed algorithm, 38000 iterations are used. Noise is added at the output with SNR = 30dB. All the simulated steady-state MSD values are computed by averaging over 200 independent trials and then over the last 3000 iterations. The theoretical steady-state MSD values are calculated using (51). Simulated and theoretical steady-state MSDs versus step size are shown in Fig. 18. The steady-state MSD increases with the step size, and the simulated MSD matches the theoretical one very well. Fig. 19 illustrates the simulated and theoretical steady-state MSDs versus the parameter λ. The steady-state MSD increases with λ, and the simulated results are in good agreement with the theoretical values when λ is small. However, a growing difference between the simulated and theoretical values appears as λ becomes large, which validates the theoretical prediction in subsection 3.3.
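The empirical steady-state MSD described above (average over independent runs, then over the last iterations) can be computed with a small helper (our function names):

```python
import numpy as np

def msd(w0, w):
    """Mean-square deviation ||w0 - w||^2 at one iteration of one run."""
    return float(np.sum((np.asarray(w0) - np.asarray(w)) ** 2))

def steady_state_msd(msd_runs, tail=3000):
    """Average the per-iteration MSD over runs (rows), then over the
    last `tail` iterations, as done in Section 5.2."""
    mean_curve = np.asarray(msd_runs).mean(axis=0)
    return float(mean_curve[-tail:].mean())
```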

5.3 Acoustic echo cancellation

The PLlncosh algorithm is evaluated in an AEC scenario. The measured impulse response of the acoustic echo path, which has 512 tap weight coefficients, is shown in Fig. 20. The length of the adaptive filter is set to 128. The far-end and near-end speech signals, sampled at 8kHz and of equal power, are shown in Fig. 21(a) and Fig. 21(b), respectively. Zero-mean white Gaussian noise with SNR = 30dB is added at the output. As shown in Fig. 21(b), double-talk occurs at time instant n = 15000. A classical Geigel double-talk detector (DTD) [30, 31] is used to freeze adaptation for the current time step when simultaneous far-end and near-end speech is detected. In the Geigel DTD, double-talk is declared if

max{|x_n|, |x_{n−1}|, …, |x_{n−M+1}|} / |d_n| < T0,    (61)

where x_n denotes a sample of the far-end speech and d_n a sample of the far-end speech filtered by the impulse response of the sparse echo path, possibly contaminated by ambient noise and near-end speech. The detection threshold T0 is set to 1.25 in the simulations. In Fig. 22, the performance of the PLlncosh algorithm is compared with those of the NLMS algorithm, the improved proportionate NLMS (IPNLMS) algorithm [25] and the real-coefficient improved proportionate affine projection sign algorithm (RIP-APSA) [28]. For a fair comparison, the projection order of the RIP-APSA is set to 1. The step sizes of all the algorithms are selected to obtain a similar initial convergence speed. The MSD curves are obtained by ensemble averaging the results of 50 independent trials. The other parameters are set as follows: α = 0, δ = 0.001, δ0 = δ/M, and λ = 20. As can be seen from Fig. 22, for a similar initial convergence rate the PLlncosh algorithm achieves a lower steady-state MSD than the other cited algorithms, even in the double-talk case.
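A minimal sketch of the Geigel detector as we read Eq. (61) (the function name is ours): adaptation is frozen whenever the recent far-end peak is small relative to the current microphone sample.

```python
import numpy as np

def geigel_double_talk(x_window, d_n, T0=1.25):
    """Declare double-talk when max(|x_n|,...,|x_{n-M+1}|)/|d_n| < T0,
    i.e. when the microphone sample |d_n| dominates the recent
    far-end peak, indicating active near-end speech."""
    peak = float(np.max(np.abs(x_window)))
    return peak < T0 * abs(d_n)
```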

6 Conclusion

Based on the lncosh cost framework, a least lncosh (Llncosh) algorithm has been presented that provides the desired steady-state performance in both Gaussian-type noise and non-Gaussian impulsive environments. In addition, a variable-parameter λn scheme for the Llncosh algorithm has been introduced. The mean behavior, mean-square behavior and steady-state performance of the Llncosh algorithm have been analyzed. Finally, a proportionate variant of the Llncosh algorithm has been derived for the acoustic echo cancellation (AEC) application. Simulations in system identification and AEC confirm the effectiveness of the presented algorithm and its theoretical analyses.

Appendix A

Bussgang's theorem [18]: If x(t) is a zero-mean Gaussian random process and y = g(x) is a memoryless system, the cross-correlation of x(t) with the system output y(t) = g[x(t)] can be expressed as

E{x(t + τ) g[x(t)]} = K E[x(t + τ) x(t)] = K R_xx(τ),    (A.1)

where R_xx(τ) = E[x(t + τ) x(t)] is the auto-correlation of x(t), K = E{g′[x(t)]} and g′[x(t)] denotes the first derivative of g[x(t)] with respect to x.

Using Bussgang's theorem, the expectation E[e_n tanh(λe_n)] can be derived as

E[e_n tanh(λe_n)] = K0 E[e_n²],    (A.2)

where

K0 = E{∂tanh(λe_n)/∂e_n} = E{λ / cosh²(λe_n)}.

Therefore, substituting (A.2) into (17), we get

C(λ) = K0 = λ E{1 / cosh²(λe_n)}.    (A.3)

Since e_n is assumed Gaussian and zero-mean, we can show that

C(λ) = [λ / √(2πE(e_n²))] ∫_{−∞}^{∞} [1 / cosh²(λe_n)] exp[−e_n² / (2E(e_n²))] de_n
     = [2λ / √(2πE(e_n²))] ∫_{0}^{∞} [1 / cosh²(λe_n)] exp[−e_n² / (2E(e_n²))] de_n.    (A.4)

For λ ∈ (0, ∞), we have [1/cosh²(λe_n)] exp[−e_n²/(2E(e_n²))] > 0, and thus the scale parameter C(λ) > 0. Appealing to the Cauchy-Schwarz inequality to obtain an upper bound on C(λ), we have

C(λ)² ≤ [2λ / √(2πE(e_n²))]² ∫_{0}^{∞} [1 / cosh⁴(λe_n)] de_n ∫_{0}^{∞} exp[−e_n² / E(e_n²)] de_n.    (A.5)

Using

∫_{0}^{∞} [1 / cosh⁴(λe_n)] de_n = ∫_{0}^{∞} {[cosh²(λe_n) − sinh²(λe_n)] / cosh⁴(λe_n)} de_n
 = ∫_{0}^{∞} [1 / cosh²(λe_n)] de_n − ∫_{0}^{∞} [tanh²(λe_n) / cosh²(λe_n)] de_n = 2/(3λ),    (A.6)

and taking into account that ∫_{0}^{∞} exp(−x²) dx = √π/2, so that

∫_{0}^{∞} exp[−e_n² / E(e_n²)] de_n = √(πE(e_n²)) / 2,    (A.7)

we now obtain

C(λ) ≤ √(4λ/3) [4πE(e_n²)]^{−1/4}.    (A.8)
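The positivity of C(λ) and the bound (A.8) can be checked numerically (a Monte-Carlo sketch under the zero-mean Gaussian assumption on e_n; the helper names are ours):

```python
import numpy as np

def c_lambda(lam, var_e, n=400000, rng=None):
    """Monte-Carlo estimate of C(lam) = lam * E[1/cosh^2(lam*e_n)]
    for zero-mean Gaussian e_n with variance var_e, cf. (A.3)."""
    rng = rng or np.random.default_rng(0)
    e = np.sqrt(var_e) * rng.standard_normal(n)
    return lam * float(np.mean(1.0 / np.cosh(lam * e) ** 2))

def c_lambda_bound(lam, var_e):
    """Upper bound of (A.8): sqrt(4*lam/3) * (4*pi*var_e)**(-1/4)."""
    return np.sqrt(4.0 * lam / 3.0) * (4.0 * np.pi * var_e) ** (-0.25)
```

The closed-form integral (A.6) can likewise be verified by direct numerical quadrature of 1/cosh⁴(λe) over (0, ∞).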

Appendix B

As λ → ∞, note that tanh(λe_n) → sign(e_n) from (9) and (10); thus we have

C(λ) = E[e_n tanh(λe_n)] / E[e_n²] ≈ E[e_n sign(e_n)] / E[e_n²] = E[|e_n|] / E[e_n²].    (B.1)

Using Bussgang's theorem and assuming e_n is zero-mean Gaussian, we get

E[e_n sign(e_n)] = √(2E(e_n²)/π).    (B.2)

Then, C(λ) can be written as

C(λ) = √(2 / (πE(e_n²))).    (B.3)

So we obtain Property 6 by inserting (B.3) into (22).
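Both Gaussian identities used above can be confirmed numerically (a quick Monte-Carlo sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
var_e = 0.5
e = np.sqrt(var_e) * rng.standard_normal(400000)

# (B.2): E[|e_n|] = sqrt(2*E[e_n^2]/pi) for zero-mean Gaussian e_n
mean_abs = float(np.mean(np.abs(e)))
rhs_b2 = np.sqrt(2.0 * var_e / np.pi)

# (B.3): C(lam) = lam*E[1/cosh^2(lam*e_n)] approaches
# sqrt(2/(pi*E[e_n^2])) for large lam
lam = 60.0
c_large_lam = lam * float(np.mean(1.0 / np.cosh(lam * e) ** 2))
limit_b3 = np.sqrt(2.0 / (np.pi * var_e))
```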

Declaration of interests


☐ The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments This research was supported by the National Natural Science Foundation of China (61501119).

References

[1] S. Haykin, Adaptive Filter Theory, 3rd ed., Prentice Hall, 1996.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, NY, USA, 2003.
[3] V. J. Mathews and S. H. Cho, Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm, IEEE Trans. Acoust. Speech Signal Process., 35(4) (1987) 450-454.
[4] J. Chambers and A. Avlonitis, A robust mixed-norm adaptive filter algorithm, IEEE Signal Process. Lett., 4(2) (1997) 46-48.
[5] S. C. Pei and C. C. Tseng, Least mean p-power error criterion for adaptive FIR filter, IEEE J. Select. Areas Commun., 12 (1994) 1540-1547.
[6] B. Lin, R. He, X. Wang and B. Wang, The steady-state mean-square error analysis for least mean p-order algorithm, IEEE Signal Process. Lett., 16(3) (2009) 176-179.
[7] W. Gao and J. Chen, Kernel least mean p-power algorithm, IEEE Signal Process. Lett., 24(7) (2017) 996-1000.
[8] B. Chen, L. Xing, Z. Wu, J. Liang and J. C. Principe, Smoothed least mean p-power error criterion for adaptive algorithm, Digital Signal Process., 40 (2015) 154-163.
[9] L. Lu, H. Zhao, W. Wang and Y. Yu, Performance analysis of the robust diffusion normalized least mean p-power algorithm, IEEE Trans. Circuits Syst. II, Exp. Briefs, 65(12) (2018) 2047-2051.
[10] Y. Zou, S. C. Chan and T. S. Ng, Least mean M-estimate algorithms for robust adaptive filtering in impulsive noise, IEEE Trans. Circuits Syst. II, 47(12) (2000) 1564-1569.
[11] S. C. Chan and Y. Zou, A recursive least M-estimate algorithm for robust adaptive filtering in impulsive noise: fast algorithm and convergence performance analysis, IEEE Trans. Signal Process., 52(4) (2004) 975-991.
[12] S. C. Chan and Y. Zhou, On the performance analysis of the least mean M-estimate and normalized least mean M-estimate algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises, J. Signal Process. Syst., 60 (2010) 81-103.
[13] Z. Zheng and H. Zhao, Affine projection M-estimate subband adaptive filters for robust adaptive filtering in impulsive noise, Signal Process., 120 (2016) 64-70.
[14] M. Z. A. Bhotto and A. Antoniou, Robust recursive least-squares adaptive-filtering algorithm for impulsive-noise environments, IEEE Signal Process. Lett., 18(3) (2011) 185-188.
[15] B. Chen, L. Xing, H. Zhao, N. Zheng and J. C. Principe, Generalized correntropy for robust adaptive filtering, IEEE Trans. Signal Process., 64(13) (2016) 3376-3386.
[16] B. Chen, L. Xing, J. Liang, N. Zheng and J. C. Principe, Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion, IEEE Signal Process. Lett., 21(7) (2014) 880-884.
[17] O. Karal, Maximum likelihood optimal and robust support vector regression with lncosh loss function, Neural Networks, 94 (2017) 1-12.
[18] A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, 4th ed., McGraw-Hill, Boston, 2002.
[19] N. R. Yousef and A. H. Sayed, A unified approach to the steady-state and tracking analyses of adaptive filters, IEEE Trans. Signal Process., 49(2) (2001) 314-324.
[20] T. Y. Al-Naffouri and A. H. Sayed, Transient analysis of adaptive filters with error nonlinearities, IEEE Trans. Signal Process., 51(3) (2003) 653-663.
[21] T. Y. Al-Naffouri and A. H. Sayed, Adaptive filters with error nonlinearities: mean-square analysis and optimum design, EURASIP J. Appl. Signal Process., 2001(4) (2001) 192-205.
[22] S. C. Douglas and T. H.-Y. Meng, Stochastic gradient adaptation under general error criteria, IEEE Trans. Signal Process., 42(6) (1994) 1335-1351.
[23] D. L. Duttweiler, Proportionate normalized least-mean-squares adaptation in echo cancelers, IEEE Trans. Speech Audio Process., 8(5) (2000) 508-518.
[24] T. Koh and E. J. Powers, Efficient methods to estimate correlation functions of Gaussian processes and their performance analysis, IEEE Trans. Acoust. Speech Signal Process., 33(4) (1985) 1032-1035.
[25] J. Benesty and S. L. Gay, An improved PNLMS algorithm, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP'02), 2 (2002) 1881-1884.
[26] P. Loganathan, A. Khong and P. Naylor, A class of sparseness-controlled algorithms for echo cancellation, IEEE Trans. Audio, Speech, Lang. Process., 17(8) (2009) 1591-1601.
[27] C. Paleologu, S. Ciochina and J. Benesty, An efficient proportionate affine projection algorithm for echo cancellation, IEEE Signal Process. Lett., 17(2) (2010) 165-168.
[28] Z. Yang, Y. R. Zheng and S. L. Grant, Proportionate affine projection sign algorithms for network echo cancellation, IEEE Trans. Audio, Speech, Lang. Process., 19(8) (2011) 2273-2284.
[29] C. L. Nikias and M. Shao, Signal Processing with Alpha-Stable Distributions and Applications, Wiley, New York, NY, USA, 1995.
[30] D. L. Duttweiler, A twelve-channel digital echo canceler, IEEE Trans. Commun., COM-26(5) (1978) 647-653.
[31] T. Gansler, S. Gay, M. Sondhi and J. Benesty, Double-talk robust fast converging algorithms for network echo cancellation, IEEE Trans. Speech Audio Process., 8(6) (2000) 656-663.


Fig.1. Adaptive system identification model

Fig.2. lncosh cost function for different λ


Fig.3. Score function for different λ

(a)

(b)

(c)
Fig.4. The relation between the variable parameter λn and the output error signal e. (a) For different initial values λ0. (b) For different factors A0. (c) For different factors h.


(a)

(b)

(c)
Fig.5. MSD curves of the variable-λn Llncosh (VLlncosh) algorithm for different λ0, A0 and h with Gaussian input (SNR=30dB). (a) For different λ0. (b) For different factors A0. (c) For different factors h.

Fig.6. MSD curves for AR(1) input in the absence of impulsive noise. (SNR=30dB, pr=0)


Fig.7. MSD curves for AR(1) input in SαS noise. (FSNR=13dB)

Fig.8. MSD curves for AR(1) input in SαS noise. (FSNR=−5dB)


Fig.9. MSD curves for AR(1) input in SαS noise. (FSNR=0dB)

Fig.10. MSD curves for AR(1) input in CG impulsive noise. (SNR=30dB, t=100, pr=0.1)


Fig.11. MSD curves for AR(1) input in CG impulsive noise. (SNR=30dB, t=178, pr=0.0005)

Fig.12. Comparison of the MSD curves for the GMCC and Llncosh algorithms in CG impulsive noise. (SNR=30dB, t=317, pr=0.01)


Fig.13. Comparison of the MSD curves for the GMCC and Llncosh algorithms in CG impulsive noise. (SNR=30dB, t=100, pr=0.1)

Fig.14. Comparison of the MSD curves for the GMCC and Llncosh algorithms in SαS noise. (FSNR=13dB)


Fig.15. Comparison of the MSD curves for the GMCC and Llncosh algorithms in SαS noise. (FSNR=0dB)

Fig.16. Comparison of the MSD curves for the variable-λ Llncosh (VLlncosh) and Llncosh algorithms in CG impulsive noise. (SNR=30dB, t=100, pr=0.01, λ0=8, A0=10, h=1)


Fig.17. Comparison of the MSD curves for the variable-λ Llncosh (VLlncosh) and Llncosh algorithms in CG impulsive noise. (SNR=30dB, t=178, pr=0.001, λ0=6, A0=10, h=1)

(a)

(b)


(c)
Fig. 18. Simulated and theoretical MSDs versus step size for the Llncosh algorithm (λ=2, SNR=30dB). (a) For Gaussian noise. (b) For uniform noise. (c) For Laplace noise.

(a)

(b)


(c)
Fig. 19. Simulated and theoretical MSDs versus λ for the Llncosh algorithm (μ=0.003, SNR=30dB). (a) For Gaussian noise. (b) For uniform noise. (c) For Laplace noise.

Fig. 20. Impulse response of acoustic echo path.


Fig. 21. Speech signals. (a) Far-end speech. (b) Near-end speech.

Fig. 22. MSD curves for speech signal in Gaussian noise. (SNR=30dB)


Table 1. Evaluation factor (EF) β for different λ

λ     SNR=20dB    SNR=30dB    SNR=40dB    SNR=50dB
1     3.46×10-4   2.80×10-5   2.86×10-6   2.47×10-7
2     0.005       3.39×10-4   3.58×10-5   3.09×10-6
3     0.0118      0.004       1.02×10-4   1.29×10-5
4     0.0241      0.0034      0.0011      2.88×10-5
5     0.0405      0.0083      5.92×10-4   7.99×10-5
6     0.0974      0.0112      0.0014      1.44×10-4
7     0.0834      0.0231      0.0040      2.56×10-4
8     0.0738      0.0326      0.0043      0.0011
9     0.0901      0.0436      0.0077      5.68×10-4
10    0.1249      0.0526      0.0065      6.86×10-4
11    0.1956      0.0677      0.0137      0.0015
12    0.2054      0.0786      0.0174      0.0011
13    0.1914      0.1111      0.0188      0.0029
14    0.4457      0.1242      0.0209      0.0027
15    0.2702      0.1698      0.0274      0.0023
16    0.3226      0.1536      0.0413      0.0032

Table 2. Simulation parameters of the corresponding algorithms

Algorithm    Parameters                    Values
LMM [10]     Data window length Nw         11
             Correction factor C1          1.483(1 + 5/(Nw − 1)) = 2.2245
             Gain factor k                 2.567
RMN [4]      Data window length Nw         9
LMP [5]      Fractional order p            1.1, 1.2