Normalized fractional adaptive methods for nonlinear control autoregressive systems

Accepted Manuscript

Naveed Ishtiaq Chaudhary, Zeshan Aslam Khan, Syed Zubair, Muhammad Asif Zahoor Raja, Nebojsa Dedovic

PII: S0307-904X(18)30470-0
DOI: https://doi.org/10.1016/j.apm.2018.09.028
Reference: APM 12480

To appear in: Applied Mathematical Modelling

Received date: 25 May 2018
Revised date: 10 August 2018
Accepted date: 26 September 2018

Please cite this article as: Naveed Ishtiaq Chaudhary, Zeshan Aslam Khan, Syed Zubair, Muhammad Asif Zahoor Raja, Nebojsa Dedovic, Normalized Fractional Adaptive Methods for Nonlinear Control Autoregressive Systems, Applied Mathematical Modelling (2018), doi: https://doi.org/10.1016/j.apm.2018.09.028

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Research highlights

 Novel normalized fractional methods for automatic adjustment of the learning rate.
 Better adaptation performance than standard fractional order gradient methods.
 Reliability is proven through MSE, NSE and VAF evaluation metrics.
 Application of the proposed methods to nonlinear system identification.
 Validation through accurate estimation of an electrically stimulated muscle model.


Normalized Fractional Adaptive Methods for Nonlinear Control Autoregressive Systems

1a Naveed Ishtiaq Chaudhary, 1b Zeshan Aslam Khan, 1c Syed Zubair, 2 Muhammad Asif Zahoor Raja, 3,* Nebojsa Dedovic

1 Department of Electrical Engineering, International Islamic University, Islamabad, Pakistan
1a [email protected], [email protected]
1b [email protected]
1c [email protected]
2 Department of Electrical Engineering, COMSATS Institute of Information Technology, Attock Campus, Attock, Pakistan; [email protected], [email protected]
3 Department of Agricultural Engineering, Faculty of Agriculture, University of Novi Sad, Novi Sad, Serbia; [email protected], *corresponding author

Abstract: The trend of applying the mathematical foundations of fractional calculus to solve problems arising in nonlinear sciences is an emerging area of research with growing interest, especially in communication, signal analysis and control. In the present study, normalized fractional adaptive strategies are exploited for automatic tuning of the step size parameter in nonlinear system identification based on the Hammerstein model. The effectiveness of the methodology is verified by means of viable estimation of the electrically stimulated muscle model used in rehabilitation of paralyzed muscles. The dominance of the schemes is established by comparing the results with their standard counterparts for different noise levels and fractional order variations. The results of statistical analyses over sufficient independent runs, in terms of Nash-Sutcliffe efficiency, variance account for and mean square error metrics, validate the consistent accuracy and reliability of the proposed methods. The proposed exploitation of fractional calculus concepts forms a firm branch of nonlinear investigation in arbitrary order gradient-based optimization schemes.


Keywords: Fractional calculus; Signal processing; Nonlinear systems; Hammerstein model; Nonlinear adaptive strategies.


1 Introduction

Fractional Calculus (FC) is the branch of mathematics that deals with derivatives and integrals of real (non-integer) order. It started about three centuries ago, following L'Hopital's and Leibniz's first exchange on the subject in 1695, but until the last few decades its utilization outside mathematics remained less prominent [1-3]. Recently, considerable exploration and exploitation has been made in applying FC to different fields of engineering, science and technology [4-7], for example, thermal processes [8], nanotechnology [9], image processing [10-11], hydro-turbine systems [12], diffusion theory [13], control systems design [14-16], signal processing [17-18], circuit theory [19-20], robotics [21], oscillators [22], heat conduction models [23], differential equations [24-25], the Heisenberg uncertainty principle [26], biomedical science [27], epidemic models [28], fractional order systems [29-31] and the triple pendulum model [32].


The concepts and theories of FC have been successfully applied to design novel fractional adaptive algorithms. Raja et al. introduced the idea of incorporating a fractional order gradient into the least mean square (LMS) method to develop the fractional LMS (FLMS) and presented a two-stage FLMS algorithm for faster convergence [33-34]. Qureshi et al. presented a modified FLMS (MFLMS1) using the concept of a forgetting factor for better performance [35]. Chaudhary et al. proposed another modification of FLMS (MFLMS2) that uses only the fractional order derivative to reduce the computational cost of the standard FLMS [36-37]. Aslam et al. introduced a sliding-window based FLMS adaptive strategy to improve the convergence of the standard FLMS [38]. Zubair et al. proposed the momentum FLMS adaptive algorithm to enhance the convergence speed of the standard FLMS [39]. Osgouei et al. presented a convex combination of FLMS to improve adaptive filtering performance [40]. Cheng et al. proposed an innovative fractional order LMS algorithm that reduces the non-locality of FC, as well as a universal fractional order LMS method that generalizes the LMS to the 0 < α < 2 case, where α is the gradient order [41-42]. Chambers et al. developed fractional constant modulus blind algorithms [43]. Chaudhary et al. introduced the fractional Volterra LMS algorithm [44-45]. Tan et al. presented a new fractional order LMS obtained by replacing the first order difference of the weight update relation with a fractional order difference [46]. Pu et al. generalized the steepest descent method by identifying the fractional extreme points of the quadratic energy norm [47]. Although many fractional order variants have been proposed, as per the authors' literature survey the most widely used variant for modeling and optimizing the well-known nonlinear Hammerstein system identification problem is the standard FLMS, since it exploits the strength of both first and fractional order derivatives in its adaptation mechanism.


The Hammerstein structure consists of a static nonlinear block followed by a linear dynamical sub-system, and its identification has been explored with different methods, techniques and frameworks [48-54]. The true strength of the standard FLMS has been exploited by optimizing stiff nonlinear structures based on Hammerstein nonlinear control autoregressive (HN-CAR), HN-CAR moving average (HN-CARMA) and HN-Box-Jenkins (HN-BJ) systems [55-57]. These illustrative applications show that FLMS outperforms the standard adaptive methods of Volterra LMS and kernel LMS, provided that the step size is adjusted properly.


The performance of fractional adaptive algorithms depends on the step size parameter, and it is somewhat challenging to adjust this parameter. In a recent study, normalized variants of FLMS were proposed to adaptively tune the step size parameter for linear system identification [58]. The true strength and behavior of the normalized version of FLMS can only be assessed on a challenging nonlinear problem, so it looks promising to investigate normalized versions of the FLMS and modified FLMS methods for nonlinear system identification based on the Hammerstein structure. Moreover, fractional adaptive algorithms have not yet been exploited to study the dynamics of the electrically stimulated muscle model represented through the HN-CAR structure [59-60], nor has the performance of fractional methods been tested in non-Gaussian noise scenarios [61-63]. Therefore, in this study, for the first time, the potential of the proposed normalized fractional adaptive algorithms is exploited for nonlinear system identification based on the HN-CAR model. Further, the superior performance of the proposed normalized methods is verified and validated through parameter estimation of the electrically stimulated muscle model, which has not been done before. Additionally, the robust performance of the normalized fractional methods is proven by taking Gaussian, as well as non-Gaussian, noise in the system model. The salient features, in terms of novel contributions of the proposed study, are listed below:

 Novel application of normalized fractional adaptive methods to the well-known nonlinear system identification problem based on the Hammerstein control autoregressive structure.
 The normalized FLMS (N-FLMS), normalized modified FLMS-1 (N-MFLMS1) and normalized modified FLMS-2 (N-MFLMS2) adaptive algorithms automatically tune the step size parameter to provide better performance than their standard counterparts.
 The strength of fractional adaptive strategies is exploited for the first time for a biomedical signal processing problem based on parameter estimation of the electrically stimulated muscle model, required for rehabilitation of paralyzed muscles.
 The robustness of the normalized fractional adaptive techniques is established through optimization under different noise conditions, taking Gaussian as well as non-Gaussian density functions and different signal to noise ratios.
 The superiority of the normalized fractional algorithms is demonstrated through comparison with their standard counterparts for different performance indices.
 The legacy of fractional calculus applications, a simple recursive structure, smooth implementation, an inherent capability to grasp complex nonlinear systems, extendibility and applicability are further attractions of the proposed scheme.

The rest of the paper is organized as follows. Section 2 briefly describes the HN-CAR model. Section 3 presents the derivation of the fractional calculus based normalized adaptive algorithms for HN-CAR system identification. Section 4 provides detailed simulation studies through statistics based on different performance measures. Concluding remarks are given in the last section, along with potential future research directions for efficiently applying FC concepts to practical problems arising in engineering, science and technology.

2 System Model: Hammerstein nonlinear system

The mathematical description of the Hammerstein nonlinear control autoregressive (HN-CAR) system is given as follows, while its block diagram is presented in Fig. 1 [64-65]:

$$L(z)\,y(t) = M(z)\,\bar{s}(t) + n(t), \qquad (1)$$

here, y(t) is the output of the system, n(t) represents the disturbance noise, and s̄(t) is a nonlinear function with p known basis functions (f1, f2, ..., fp) of the system input s(t), represented as:

$$\bar{s}(t) = f\big(s(t)\big) = x_{1}f_{1}\big(s(t)\big) + x_{2}f_{2}\big(s(t)\big) + \cdots + x_{p}f_{p}\big(s(t)\big), \qquad (2)$$

where x = [x1, x2, ..., xp]^T ∈ R^p represents the vector of constants, and L(z) and M(z) are known polynomials given as:

$$L(z) = 1 + l_{1}z^{-1} + l_{2}z^{-2} + \cdots + l_{n}z^{-n}, \qquad (3)$$

$$M(z) = m_{1}z^{-1} + m_{2}z^{-2} + \cdots + m_{n}z^{-n}, \qquad (4)$$

where l = [l1, l2, ..., ln]^T ∈ R^n and m = [m1, m2, ..., mn]^T ∈ R^n represent the coefficient vectors of the polynomials. Equation (1) can be rearranged as:

$$y(t) = \big[1 - L(z)\big]\,y(t) + M(z)\,\bar{s}(t) + n(t). \qquad (5)$$

By using (2) to (4) in (5), with z^{-i} y(t) = y(t - i), one obtains:

$$y(t) = -\sum_{i=1}^{n} l_{i}\,y(t-i) + \sum_{i=1}^{n}\sum_{j=1}^{p} m_{i}\,x_{j}\,f_{j}\big(s(t-i)\big) + n(t). \qquad (6)$$

Equation (6), in terms of the HN-CAR identification model, is written as:

$$y(t) = \boldsymbol{\psi}^{T}(t)\,\boldsymbol{\Theta} + n(t), \qquad (7)$$

where the parameter vector Θ and the information vector ψ(t) in (7) are given as:

$$\boldsymbol{\Theta} = \big[\mathbf{l}^{T},\, m_{1}\mathbf{x}^{T},\, m_{2}\mathbf{x}^{T}, \ldots, m_{n}\mathbf{x}^{T}\big]^{T} \in \mathbb{R}^{n_{0}}, \quad n_{0} = n + np,$$

$$\boldsymbol{\psi}(t) = \big[\boldsymbol{\psi}_{0}^{T}(t),\, \boldsymbol{\psi}_{1}^{T}(t), \ldots, \boldsymbol{\psi}_{p}^{T}(t)\big]^{T} \in \mathbb{R}^{n_{0}}, \quad n_{0} = n + np,$$

$$\boldsymbol{\psi}_{0}(t) = \big[-y(t-1),\, -y(t-2), \ldots, -y(t-n)\big]^{T},$$

$$\boldsymbol{\psi}_{j}(t) = \big[f_{j}\big(s(t-1)\big),\, f_{j}\big(s(t-2)\big), \ldots, f_{j}\big(s(t-n)\big)\big]^{T} \quad \text{for } j = 1, 2, \ldots, p.$$
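To make the regressor construction concrete, here is a small Python sketch (my own illustration, not from the paper; names such as `build_information_vector` and `simulate_hncar` are hypothetical). It builds ψ(t) ordered by delay so that it pairs directly with Θ = [l^T, m1 x^T, ..., mn x^T]^T, which carries the same information as the ψ_j grouping above up to a permutation, and generates data according to (6)-(7).

```python
import numpy as np

def build_information_vector(y, s, t, n, basis):
    """psi(t): [-y(t-1),...,-y(t-n)] followed, delay by delay, by [f_1(s(t-i)),...,f_p(s(t-i))]."""
    past_y = [-y[t - i] for i in range(1, n + 1)]
    past_f = [f(s[t - i]) for i in range(1, n + 1) for f in basis]
    return np.array(past_y + past_f)

def simulate_hncar(theta, basis, n, T, noise_std=0.5, seed=0):
    """Generate (psi(t), y(t)) pairs from y(t) = psi(t)^T Theta + n(t), eq. (7)."""
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(T)              # zero-mean, unit-variance input sequence
    y = np.zeros(T)
    Psi, Y = [], []
    for t in range(n, T):
        psi = build_information_vector(y, s, t, n, basis)
        y[t] = psi @ theta + noise_std * rng.standard_normal()
        Psi.append(psi)
        Y.append(y[t])
    return np.array(Psi), np.array(Y)
```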


3 Proposed Methodology: Normalized fractional adaptive methods


The normalized fractional LMS (N-FLMS), normalized MFLMS-1 (N-MFLMS1) and normalized MFLMS-2 (N-MFLMS2) are given here for the parameter estimation problem of the HN-CAR system. The graphical summary of the proposed study is presented in Fig. 2.

3.1 Normalized FLMS

The objective/cost function for HN-CAR system identification is defined as:

$$C(t) = E\big[e^{2}(t)\big], \qquad (8)$$

where E(·) represents the statistical expectation operator, while e(t) is the error signal, i.e., the difference between the original response y(t) and the estimated response ŷ(t) of the HN-CAR model:

$$e(t) = y(t) - \hat{y}(t). \qquad (9)$$

The response ŷ(t) of the system is given by

$$\hat{y}(t) = \boldsymbol{\psi}^{T}(t)\,\hat{\boldsymbol{\Theta}}(t),$$

where Θ̂ represents the estimated parameter vector.

Minimizing the objective function (8) by taking the first order derivative with respect to Θ̂ gives

$$\frac{\partial}{\partial\hat{\boldsymbol{\Theta}}}C(t) = 2\,e(t)\,\frac{\partial}{\partial\hat{\boldsymbol{\Theta}}}\Big[y(t) - \hat{\boldsymbol{\Theta}}^{T}(t)\,\boldsymbol{\psi}(t)\Big]; \qquad (10)$$

simplifying (10) yields [66]:

$$\frac{\partial}{\partial\hat{\boldsymbol{\Theta}}}C(t) = -2\,e(t)\,\boldsymbol{\psi}(t). \qquad (11)$$

Now, taking the fractional gradient of (10) following the well-established procedure of fractional calculus, we have

$$\frac{\partial^{\nu}}{\partial\hat{\boldsymbol{\Theta}}^{\nu}}C(t) = 2\,e(t)\,\frac{\partial^{\nu}}{\partial\hat{\boldsymbol{\Theta}}^{\nu}}\Big[y(t) - \hat{\boldsymbol{\Theta}}^{T}(t)\,\boldsymbol{\psi}(t)\Big], \qquad (12)$$

where ν denotes the order of the fractional derivative. Assuming that a constant has a zero fractional order derivative, equation (12) reduces to

$$\frac{\partial^{\nu}}{\partial\hat{\boldsymbol{\Theta}}^{\nu}}C(t) = -2\,e(t)\,\boldsymbol{\psi}(t)\odot\frac{\partial^{\nu}\hat{\boldsymbol{\Theta}}(t)}{\partial\hat{\boldsymbol{\Theta}}^{\nu}}. \qquad (13)$$

Fractional derivatives are defined in a variety of manners, but the commonly adopted definitions include Grünwald-Letnikov, Caputo and Riemann-Liouville [1]. The ν-order derivative of a polynomial term b(t) = t^n is written as:

$$D^{\nu}t^{n} = \frac{\Gamma(n+1)}{\Gamma(n-\nu+1)}\,t^{n-\nu}, \qquad (14)$$

where D^ν denotes the ν-order fractional derivative and Γ denotes the gamma function, given as:

$$\Gamma(t) = \int_{0}^{\infty}x^{t-1}e^{-x}\,dx. \qquad (15)$$

By using (14) and (15) in (13), one gets [33]:

$$\frac{\partial^{\nu}}{\partial\hat{\boldsymbol{\Theta}}^{\nu}}C(t) = -2\,e(t)\,\boldsymbol{\psi}(t)\odot\frac{1}{\Gamma(2-\nu)}\,\hat{\boldsymbol{\Theta}}^{\,1-\nu}(t). \qquad (16)$$

The iterative mechanism of FLMS for the parameter update is written as [33-35]:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) - \mu_{I}\,\frac{\partial C(t)}{\partial\hat{\boldsymbol{\Theta}}(t)} - \frac{1}{2}\,\mu_{F}\,\frac{\partial^{\nu}C(t)}{\partial\hat{\boldsymbol{\Theta}}^{\nu}(t)}, \qquad (17)$$

where μI and μF are the learning rates. Putting (11) and (16) in (17), the iterative relation of FLMS is given as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \mu_{I}\,e(t)\,\boldsymbol{\psi}(t) + \mu_{F}\,e(t)\,\boldsymbol{\psi}(t)\odot\frac{1}{\Gamma(2-\nu)}\,\hat{\boldsymbol{\Theta}}^{\,1-\nu}(t). \qquad (18)$$

When μI = μF = μ, equation (18) can be written as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \mu\,e(t)\,\boldsymbol{\psi}(t)\odot\left(1 + \frac{1}{\Gamma(2-\nu)}\,\big|\hat{\boldsymbol{\Theta}}(t)\big|^{\,1-\nu}\right),$$

where the symbol ⊙ denotes element-by-element multiplication and the absolute value of the parameter vector is used to avoid complex entries. By dividing the information vector by its norm, the parameter update rule of the normalized FLMS (N-FLMS) for HN-CAR system identification is obtained as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \frac{\mu}{\mathrm{norm}\big(\boldsymbol{\psi}(t)\big)}\,e(t)\,\boldsymbol{\psi}(t)\odot\left(1 + \frac{1}{\Gamma(2-\nu)}\,\big|\hat{\boldsymbol{\Theta}}(t)\big|^{\,1-\nu}\right).$$
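As a minimal sketch of one N-FLMS iteration (illustrative Python under my own naming; the small constant `eps` guarding the norm against division by zero is an implementation assumption not discussed in the paper):

```python
import numpy as np
from math import gamma

def nflms_step(theta_hat, psi, y, mu=1e-2, nu=0.5, eps=1e-12):
    """One normalized fractional LMS update (the N-FLMS rule above)."""
    e = y - psi @ theta_hat                                    # error signal, eq. (9)
    frac = np.abs(theta_hat) ** (1.0 - nu) / gamma(2.0 - nu)   # |Theta_hat|^(1-nu) / Gamma(2-nu)
    step = (mu / (np.linalg.norm(psi) + eps)) * e * psi * (1.0 + frac)
    return theta_hat + step
```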

3.2 Normalized MFLMS-1 (N-MFLMS1)

The idea of a gain parameter δ is used in the modified FLMS-1 (MFLMS1) adaptive method, and the iterative weight update expression of MFLMS1 is given as [35-36]:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \delta\,\mu_{I}\,e(t)\,\boldsymbol{\psi}(t) + (1-\delta)\,\mu_{F}\,e(t)\,\boldsymbol{\psi}(t)\odot\frac{1}{\Gamma(2-\nu)}\,\hat{\boldsymbol{\Theta}}^{\,1-\nu}(t), \qquad (19)$$

where δ lies between 0 and 1. When μI = μF = μ, equation (19) becomes:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \mu\,e(t)\,\boldsymbol{\psi}(t)\odot\left(\delta + (1-\delta)\,\frac{1}{\Gamma(2-\nu)}\,\big|\hat{\boldsymbol{\Theta}}(t)\big|^{\,1-\nu}\right). \qquad (20)$$

If δ > 0.5 in (20), the term corresponding to the first order derivative is dominant, while for δ < 0.5 the term related to the fractional order derivative prevails. The information vector in (20) is divided by its norm to derive the N-MFLMS1 relation for HN-CAR identification as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \frac{\mu}{\mathrm{norm}\big(\boldsymbol{\psi}(t)\big)}\,e(t)\,\boldsymbol{\psi}(t)\odot\left(\delta + (1-\delta)\,\frac{1}{\Gamma(2-\nu)}\,\big|\hat{\boldsymbol{\Theta}}(t)\big|^{\,1-\nu}\right). \qquad (21)$$

Letting δ = 1 in equation (21), the N-MFLMS1 method reduces to the normalized LMS as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \frac{\mu}{\mathrm{norm}\big(\boldsymbol{\psi}(t)\big)}\,e(t)\,\boldsymbol{\psi}(t). \qquad (22)$$

3.3 Normalized MFLMS-2 (N-MFLMS2)

In the modified FLMS-2 (MFLMS2), only the strength of the fractional gradient is exploited instead of both the first and fractional gradients, and the iterative update rule is given as [36-37]:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) - \frac{\mu}{2}\,\frac{\partial^{\nu}C(t)}{\partial\hat{\boldsymbol{\Theta}}^{\nu}(t)}. \qquad (23)$$

By using equation (16) in (23), the parameter update rule for MFLMS2 is written as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \mu\,e(t)\,\boldsymbol{\psi}(t)\odot\frac{1}{\Gamma(2-\nu)}\,\big|\hat{\boldsymbol{\Theta}}(t)\big|^{\,1-\nu}. \qquad (24)$$

Finally, the information vector in (24) is divided by its norm to obtain the update rule for the N-MFLMS2 method as:

$$\hat{\boldsymbol{\Theta}}(t+1) = \hat{\boldsymbol{\Theta}}(t) + \frac{\mu}{\mathrm{norm}\big(\boldsymbol{\psi}(t)\big)}\,e(t)\,\boldsymbol{\psi}(t)\odot\frac{1}{\Gamma(2-\nu)}\,\big|\hat{\boldsymbol{\Theta}}(t)\big|^{\,1-\nu}. \qquad (25)$$

Letting ν = 1 in equation (25), the N-MFLMS2 update relation reduces to the standard NLMS approach (22), and setting δ = 0 in the N-MFLMS1 iterative expression (21) yields a parameter update rule identical to the N-MFLMS2 approach (25).
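The three normalized rules can therefore be collected into a single routine through the gain δ, as sketched below (again my own illustrative Python, not the authors' implementation); setting δ = 1 recovers the NLMS rule (22) and δ = 0 recovers N-MFLMS2 (25).

```python
import numpy as np
from math import gamma

def normalized_mflms1_step(theta_hat, psi, y, mu=1e-2, nu=0.5, delta=0.25, eps=1e-12):
    """N-MFLMS1 update, eq. (21); delta = 1 -> NLMS (22), delta = 0 -> N-MFLMS2 (25)."""
    e = y - psi @ theta_hat
    frac = np.abs(theta_hat) ** (1.0 - nu) / gamma(2.0 - nu)
    mix = delta + (1.0 - delta) * frac        # blend of first- and fractional-order terms
    return theta_hat + (mu / (np.linalg.norm(psi) + eps)) * e * psi * mix
```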

4 Simulation Studies with Discussion


This section presents the simulation outcomes for two examples of the HN-CAR system. The results of statistical analyses based on different performance measures are also given to validate the superior performance of the design methodology. All simulations are performed in MATLAB 2012 under the Windows 10 operating system, running on an HP ProBook 4530s laptop with a Core-i3 2.0 GHz processor and 4 GB RAM.

4.1 Example 1


The HN-CAR system for Example 1 is taken as follows [64-65]:

$$L(z)\,y(t) = M(z)\,\bar{s}(t) + n(t),$$

$$L(z) = 1 + l_{1}z^{-1} + l_{2}z^{-2} = 1 - 1.35\,z^{-1} + 0.75\,z^{-2},$$

$$M(z) = m_{1}z^{-1} + m_{2}z^{-2} = z^{-1} + 1.68\,z^{-2},$$

$$\bar{s}(t) = f\big(s(t)\big) = x_{1}f_{1}\big(s(t)\big) + x_{2}f_{2}\big(s(t)\big) + x_{3}f_{3}\big(s(t)\big) = s(t) + 0.50\,s^{2}(t) + 0.20\,s^{3}(t),$$

$$\boldsymbol{\Theta} = \big[\theta_{1}, \theta_{2}, \ldots, \theta_{8}\big]^{T} = \big[l_{1}, l_{2}, x_{1}, x_{2}, x_{3}, m_{2}x_{1}, m_{2}x_{2}, m_{2}x_{3}\big]^{T} = \big[-1.35,\, 0.75,\, 1.00,\, 0.50,\, 0.20,\, 1.68,\, 0.84,\, 0.336\big]^{T}. \qquad (26)$$
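For concreteness, the Example 1 system can be encoded as follows (an illustrative fragment reusing the hypothetical helpers sketched earlier; only the numerical values come from (26)).

```python
import numpy as np

# Basis functions of Example 1: f1(s) = s, f2(s) = s^2, f3(s) = s^3
basis = [lambda s: s, lambda s: s ** 2, lambda s: s ** 3]

# True parameter vector of eq. (26): [l1, l2, x1, x2, x3, m2*x1, m2*x2, m2*x3]
theta_true = np.array([-1.35, 0.75, 1.00, 0.50, 0.20, 1.68, 0.84, 0.336])
n = 2  # order of L(z) and M(z)

# Data for the identification experiment could then be generated with, e.g.:
# Psi, Y = simulate_hncar(theta_true, basis, n, T=20000, noise_std=0.5)
```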

ˆ Θ Θ Θ

AN US

Fit 

CR IP T

The input signal s(t) is a zero-mean unit variance sequence, n(t) is a zero-mean constant variance noise signal. In order to investigate the design methodology, four performance indices are used, i.e., fitness function (Fit); mean square error (MSE); Variance account for (VAF); and Nash Sutcliffe efficiency (NSE). These measures are respectively defined as:

2

ˆ , MSE  mean Θ  Θ 



M

ˆ  var Θ  Θ  VAF  1   var  Θ  



  100,  



ED

2  ˆ mean Θ  Θ NSE  1   2  mean Θ  mean(Θ) 

 ,   

(27)

(28)

(29)

(30)

where Θ is the vector of desired parameters and Θ̂ represents the vector of estimated parameters. The error functions related to the VAF and NSE metrics are EVAF = 100 − VAF and ENSE = 1 − NSE, respectively.
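The four indices (27)-(30) and the associated error forms translate directly into NumPy, as in the following sketch (`evaluate_metrics` is my own shorthand, not a routine from the paper).

```python
import numpy as np

def evaluate_metrics(theta, theta_hat):
    """Fit, MSE, VAF and NSE of eqs. (27)-(30), plus the error forms EVAF and ENSE."""
    err = theta - theta_hat
    fit = np.linalg.norm(err) / np.linalg.norm(theta)
    mse = np.mean(err ** 2)
    vaf = (1.0 - np.var(err) / np.var(theta)) * 100.0
    nse = 1.0 - np.mean(err ** 2) / np.mean((theta - np.mean(theta)) ** 2)
    return {"Fit": fit, "MSE": mse, "VAF": vaf, "NSE": nse,
            "EVAF": 100.0 - vaf, "ENSE": 1.0 - nse}
```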


The proposed N-FLMS, N-MFLMS1 and N-MFLMS2 methods are applied to identify the parameters of the HN-CAR model (26) for a sufficiently large number of iterations, i.e., t = 20000. The termination criterion is based on either completion of the iterations or achievement of a sufficient fitness value, i.e., 1E-09. The normalized adaptive variants are tested for three noise standard deviations, i.e., σ = 0.2, 0.5 and 0.8, and the N-MFLMS1 is examined for two gain parameters, i.e., δ = 0.25 and 0.75. The learning rate or step size parameter plays a vital role in the convergence and stability of adaptive methods. To choose an appropriate value of the learning rate, the standard and normalized variants are tested for different learning rate values, i.e., 1E-02, 1E-03, 1E-04 and 1E-05. Ten independent executions of the algorithms are conducted, and results based on the mean value and its standard deviation are presented in Table 1 for fractional order 0.5 and noise standard deviation 0.5. The results of Table 1 indicate that 1E-02 and 1E-03 are the appropriate learning rates, with the normalized techniques performing best at 1E-02 and the standard fractional techniques at 1E-03. The same value is used for both step size parameters, μI and μF.
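A sketch of such an adaptation loop with this termination rule is given below (hypothetical driver code that combines the earlier fragments; the 1E-09 fitness threshold and the 20000-iteration cap are the values quoted above).

```python
import numpy as np

def run_identification(update_step, Psi, Y, theta_true, max_iter=20000, fit_tol=1e-9):
    """Iterate an update rule over the data until the fitness (27) meets the threshold."""
    theta_hat = np.zeros_like(theta_true)
    fit = np.linalg.norm(theta_true - theta_hat) / np.linalg.norm(theta_true)
    for t in range(min(max_iter, len(Y))):
        theta_hat = update_step(theta_hat, Psi[t], Y[t])
        fit = np.linalg.norm(theta_true - theta_hat) / np.linalg.norm(theta_true)
        if fit <= fit_tol:
            break
    return theta_hat, fit

# Example usage with the N-FLMS step sketched earlier:
# theta_hat, fit = run_identification(
#     lambda th, psi, y: nflms_step(th, psi, y, mu=1e-2, nu=0.5), Psi, Y, theta_true)
```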


In fractional order adaptive methodologies, one important issue is the selection of a suitable arbitrary order of the derivative. The proposed techniques are examined for different fractional orders, i.e., [0.1, 0.2, ..., 0.9]. The results for the nine fractional orders, obtained through 10 independent executions of the methods in terms of MSE values, are presented in Table 2. It is evident that the normalized fractional order variants are reasonably convergent for all nine fractional orders, whereas the conventional FLMS gives some divergent results for the fractional order ν = 0.1. Moreover, only a very small difference is seen among the different fractional orders for HN-CAR system identification, so it is reasonable to take ν = 0.5 in the rest of the study. In measurements there may be outliers, which is why the noise n(t) in the HN-CAR model (7) is sometimes not exactly Gaussian. Hence, the probability density function (pdf) of the noise belongs to the approximately normal distribution class [61-63]:

$$p(n) = (1-\varepsilon)\,p_{1}(n) + \varepsilon\,p_{2}(n), \qquad (31)$$

where

$$p_{1}(n) = N(0,\sigma_{1}), \quad p_{2}(n) = N(0,\sigma_{2}), \quad \sigma_{2} \gg \sigma_{1}. \qquad (32)$$

The pdf p(n) is a mixture of normal distributions, where σ1 and σ2 are the noise standard deviations, while the parameter 0 < ε < 1 is called the degree of contamination. The non-Gaussian noise n(t) used in the simulations is given by [61-63]:

$$p(n) = (1 - 0.1)\,N(0, 0.1) + 0.1\,N(0, 3). \qquad (33)$$
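Sampling from the contaminated Gaussian density (33) can be sketched as follows (illustrative code; it treats the second argument of N(·,·) as a standard deviation, consistent with the σ notation of (32) — the paper does not state this explicitly).

```python
import numpy as np

def mixture_noise(T, eps=0.1, sigma1=0.1, sigma2=3.0, seed=0):
    """Draw n(t) from p(n) = (1 - eps) N(0, sigma1) + eps N(0, sigma2), eq. (33)."""
    rng = np.random.default_rng(seed)
    outlier = rng.random(T) < eps                  # samples drawn from the wide component
    scale = np.where(outlier, sigma2, sigma1)
    return scale * rng.standard_normal(T)
```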


The iterative adaptation of the fitness function (27) using the standard and normalized fractional algorithms is given in Fig. 3 for noise standard deviations σ = 0.2 and 0.5 and for the noise pdf given in (33). It is observed that the fractional order adaptive techniques are correct and convergent for all noise scenarios, but the convergence speed of the proposed normalized methods is faster than that of their standard counterparts. The comparison of the N-FLMS, N-MFLMS1 (δ = 0.25), N-MFLMS1 (δ = 0.75) and N-MFLMS2 algorithms based on fitness is presented in Fig. 4. It is observed that N-FLMS provides better results than the N-MFLMS1 and N-MFLMS2 algorithms in terms of accuracy and convergence.


The performance of the proposed methods is examined through the MSE (28), and the results are tabulated in Table 3 for all noise standard deviations. The normalized schemes provide better results for the smaller standard deviation, i.e., σ = 0.2, while somewhat degraded performance is seen for the higher noise deviation, i.e., σ = 0.8. Nevertheless, the N-FLMS, N-MFLMS1 and N-MFLMS2 algorithms are more accurate than the standard FLMS, MFLMS1 and MFLMS2 strategies, with only a negligible difference among the proposed normalized methods. A single good run of a scheme is not adequate to judge its performance; therefore, statistical analyses are conducted through 100 independent executions of the schemes. The results, in terms of the ascending order of MSE against independent runs, are given in Fig. 5. The results of FLMS, MFLMS1 (δ = 0.25), MFLMS1 (δ = 0.75) and MFLMS2 are plotted in Figs. 5a, 5c, 5e and 5g, respectively, while the respective results of the proposed N-FLMS, N-MFLMS1 (δ = 0.25), N-MFLMS1 (δ = 0.75) and N-MFLMS2 algorithms are presented in Figs. 5b, 5d, 5f and 5h, for all noise levels. It is observed that all the methods are correct, but a few divergent runs occur for the standard FLMS algorithm, while this behavior is not observed for the proposed normalized variant N-FLMS. It is also observed that the number of divergent runs increases with the noise level, i.e., one, two and twelve divergent executions for σ = 0.2, 0.5 and 0.8, respectively.


The normalized fractional variants are further evaluated through statistics based on the MSE, NSE and VAF performance measures (28)-(30). The results based on the minimum (MIN), mean and standard deviation (SD) statistical parameters are listed in Table 4 for all performance measures. One can evidently perceive that the normalized fractional order variants are reasonably accurate as well as convergent, while the standard FLMS algorithm gives a few divergent executions. Additionally, the proposed normalized strategies give more precise results than the standard adaptive methods, which validates their performance on the different metrics.


The complexity of the normalized methods is studied through the mean execution time (MET) and its SD, and the results are presented in Table 5 for all variations of the noise standard deviation and fractional order. It is observed that there is no noticeable difference between the proposed normalized fractional adaptive methods and the standard fractional algorithms. The standard fractional strategies provide relatively smaller MET values, since the normalized versions are always a little heavier than the standard methods; however, this small computational cost is negligible compared to the gain in convergence performance of the normalized fractional algorithms.

4.2 Example 2


The application of the HN-CAR system to model an electrically stimulated muscle is considered in Example 2. An accurate mathematical model of the muscle is required for rehabilitation of paralyzed muscles. The stimulated muscle is modeled through the HN-CAR system with the following parameters [59-60]:

$$L(z)\,y(t) = M(z)\,\bar{s}(t) + n(t),$$

$$L(z) = 1 + l_{1}z^{-1} + l_{2}z^{-2} = 1 - z^{-1} + 0.8\,z^{-2},$$

$$M(z) = m_{1}z^{-1} + m_{2}z^{-2} = z^{-1} + 0.6\,z^{-2},$$

$$\bar{s}(t) = f\big(s(t)\big) = x_{1}f_{1}\big(s(t)\big) + x_{2}f_{2}\big(s(t)\big) + x_{3}f_{3}\big(s(t)\big) = 2.8\,s(t) - 4.8\,s^{2}(t) + 5.7\,s^{3}(t),$$

$$\boldsymbol{\Theta} = \big[\theta_{1}, \theta_{2}, \ldots, \theta_{8}\big]^{T} = \big[l_{1}, l_{2}, m_{1}x_{1}, m_{1}x_{2}, m_{1}x_{3}, m_{2}x_{1}, m_{2}x_{2}, m_{2}x_{3}\big]^{T} = \big[-1.00,\, 0.80,\, 2.80,\, -4.80,\, 5.70,\, 1.68,\, -2.88,\, 3.42\big]^{T}. \qquad (34)$$


For parameter estimation of the electrically stimulated muscle model (34), the input is taken as a zero-mean, unit-variance sequence and n(t) is a white noise sequence with zero mean and constant standard deviation. The same variations of the noise level σ and the adjustable gain parameter δ are used as in Example 1. The step size parameter in Example 2 is taken as 1E-05 for the standard fractional adaptive algorithms and 0.007 for the normalized fractional methods; these values were selected by performing a set of trials to obtain the best MSE value after convergence.


The fitness curves of the normalized fractional algorithms are presented in Fig. 6 for all noise levels in the case of muscle modeling. It is observed that the normalized variants are correct and convergent for estimation of the electrically stimulated muscle model, and their convergence speed is faster than that of their standard counterparts.


The proposed methods are evaluated through the MSE (28), and the results are given in Table 6 for Example 2 for each noise variance. It is observed that all schemes attain good results for the smaller standard deviation, i.e., σ = 0.2, while the performance of the methods is relatively degraded for the high noise scenario, i.e., σ = 0.8. It is further observed that the N-FLMS, N-MFLMS1 and N-MFLMS2 algorithms are better than their standard counterparts, with only a negligible difference among the proposed normalized algorithms.

5 Conclusions


The present contribution can be considered an advancement in exploiting the fractional calculus heritage to design an alternative, accurate, robust and stable fractional order computing paradigm for solving nonlinear optimization problems. The normalized FLMS (N-FLMS), normalized MFLMS-1 (N-MFLMS1) and normalized MFLMS-2 (N-MFLMS2) adaptive methods are exploited to adaptively tune the learning rate parameters for Hammerstein nonlinear control autoregressive systems. The correctness of the normalized fractional adaptive methods is established by effectively optimizing the parameters of the electrically stimulated muscle model required in rehabilitation of paralyzed muscles. The design scheme is evaluated for nine different fractional orders; the proposed normalized methods are consistently convergent for all fractional orders, while the standard FLMS produces a few divergent trials for lower fractional orders. The optimization capability of the proposed normalized variants is better than that of their standard counterparts for all noise (Gaussian and non-Gaussian) and fractional order variations, which establishes their robustness. The accuracy of all the methods decreases with increasing noise standard deviation, but the proposed normalized variants still achieve better results than their standard counterparts. The consistency and reliability of the methods is proven through multiple independent executions for different evaluation measures, including the mean square error, variance account for and Nash-Sutcliffe efficiency.

The fractional normalized adaptive methods can be exploited to solve challenging parameter identification problems based on HN-CARMA, HN-BJ, output-error (OE), OE autoregressive, OE moving average, Wiener, Hammerstein-Wiener, Wiener-Hammerstein, multivariable and multiple input-multiple output models. Moreover, it looks promising to develop multi-innovation and auxiliary model based fractional schemes to enhance the accuracy and precision of the proposed methods. One may also investigate extending the present study to design new fractional adaptive methods in the fields of communication, signal processing and control; a few potential examples include fractional kernel, recursive least squares, Kalman, momentum, leaky, sign, block and p-power LMS algorithms, as well as their normalized versions.


Acknowledgements


The contribution of the last author was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia under the project "Improvement of the quality of tractors and mobile systems with the aim of increasing competitiveness and preserving soil and environment", No. TR-31046.


References

[1] I. Podlubny, Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications (Vol. 198), Academic Press, San Diego, California, 1998.
[2] J.T. Machado, A.M. Galhano, J.J. Trujillo, On development of fractional calculus during the last fifty years, Scientometrics 98(1) (2014) 577-582.
[3] J.T. Machado, V. Kiryakova, F. Mainardi, Recent history of fractional calculus, Communications in Nonlinear Science and Numerical Simulation 16(3) (2011) 1140-1153.
[4] Y. Zhou, C. Ionescu, J.T. Machado, Fractional dynamics and its applications, Nonlinear Dyn. 80(4) (2015) 1661-1664.
[5] Y. Zhou, Fractional Evolution Equations and Inclusions: Analysis and Control, Academic Press, Boston, 2016.
[6] D. Baleanu, K. Diethelm, E. Scalas, J.J. Trujillo, Fractional Calculus: Models and Numerical Methods (Vol. 5), World Scientific, London, 2016.
[7] C. Ionescu, Y. Zhou, J.T. Machado, Special issue: Advances in fractional dynamics and control, J. Vib. Control 22(8) (2016) 1969-1971.
[8] B.S.T. Alkahtani, A. Atangana, Analysis of non-homogeneous heat model with new trend of derivative with fractional order, Chaos Solitons Fractals 89 (2016) 566-571.
[9] M. Pan, L. Zheng, F. Liu, C. Liu, X. Chen, A spatial-fractional thermal transport model for nanofluid in porous media, Appl. Math. Model. 53 (2018) 622-634.
[10] G.C. Wu, D. Baleanu, Z.X. Lin, Image encryption technique based on fractional chaotic time series, J. Vib. Control 22(8) (2016) 2092-2099.
[11] Y.F. Pu, N. Zhang, Y. Zhang, J.L. Zhou, A texture image denoising approach based on fractional developmental mathematics, Pattern Anal. Appl. 19(2) (2016) 427-445.
[12] Y. Long, B. Xu, D. Chen, W. Ye, Dynamic characteristics for a hydro-turbine governing system with viscoelastic materials described by fractional calculus, Appl. Math. Model. 58 (2018) 128-139.
[13] X.J. Yang, D. Baleanu, H.M. Srivastava, Local fractional similarity solution for the diffusion equation defined on Cantor sets, Appl. Math. Lett. 47 (2015) 54-60.
[14] Y. Chen, D. Xue, A. Visioli, Guest editorial for special issue on fractional order systems and controls, IEEE/CAA J. Automatica Sinica 3(3) (2016) 255-256.
[15] C. Yin, X. Huang, Y. Chen, S. Dadras, S.M. Zhong, Y. Cheng, Fractional-order exponential switching technique to enhance sliding mode control, Appl. Math. Model. 44 (2017) 705-726.
[16] H. Malek, S. Dadras, Y. Chen, Performance analysis of fractional order extremum seeking control, ISA Trans. 63 (2016) 281-287.
[17] M.D. Ortigueira, J.T. Machado, M. Rivero, J.J. Trujillo, Integer/fractional decomposition of the impulse response of fractional linear systems, Signal Process. 114 (2015) 85-88.
[18] M.D. Ortigueira, C.M. Ionescu, J.T. Machado, J.J. Trujillo, Fractional signal processing and applications, Signal Process. 107 (2015) 197.
[19] X.J. Yang, J.T. Machado, C. Cattani, F. Gao, On a fractal LC-electric circuit modeled by local fractional calculus, Commun. Nonlinear Sci. 47 (2017) 200-206.
[20] L. Chen, W. Pan, R. Wu, K. Wang, Y. He, Generation and circuit implementation of fractional-order multiscroll attractors, Chaos Solitons Fractals 85 (2016) 22-31.
[21] J.T. Machado, A.M. Lopes, A fractional perspective on the trajectory control of redundant and hyper-redundant robot manipulators, Appl. Math. Model. 46 (2017) 716-726.
[22] M. Rostami, M. Haeri, Undamped oscillations in fractional-order Duffing oscillator, Signal Process. 107 (2015) 361-367.
[23] S. Chen, F. Liu, I. Turner, X. Hu, Numerical inversion of the fractional derivative index and surface thermal flux for an anomalous heat conduction model in a multi-layer medium, Appl. Math. Model. 59 (2018) 514-526.
[24] A. Dabiri, E.A. Butcher, Numerical solution of multi-order fractional differential equations with multiple delays via spectral collocation methods, Appl. Math. Model. 56 (2018) 424-448.
[25] A.A. Freitas, D.G.A. Vigo, M.G. Teixeira, C.A. de Vasconcellos, Horizontal water flow in unsaturated porous media using a fractional integral method with an adaptive time step, Appl. Math. Model. 48 (2017) 584-592.
[26] X.J. Yang, D. Baleanu, J.A.T. Machado, Mathematical aspects of the Heisenberg uncertainty principle within local fractional Fourier analysis, Bound. Value Probl. 2013(1) (2013) 131.
[27] M.M. Meerschaert, R.L. Magin, Q.Y. Allen, Anisotropic fractional diffusion tensor imaging, J. Vib. Control 22(9) (2016) 2211-2221.
[28] I. Ameen, P. Novati, The solution of fractional order epidemic model by implicit Adams methods, Appl. Math. Model. 43 (2017) 78-84.
[29] B. Safarinejadian, M. Asad, M.S. Sadeghi, Simultaneous state estimation and parameter identification in linear fractional order systems using coloured measurement noise, Int. J. Control 89(11) (2016) 1-20.
[30] R. Cui, Y. Wei, Y. Chen, S. Cheng, Y. Wang, An innovative parameter estimation for fractional-order systems in the presence of outliers, Nonlinear Dyn. 89(1) (2017) 453-463.
[31] M.J. Moghaddam, H. Mojallali, M. Teshnehlab, Recursive identification of multiple-input single-output fractional-order Hammerstein model with time delay, Applied Soft Computing 70 (2018) 486-500.
[32] A. Coronel-Escamilla, J.F. Gómez-Aguilar, M.G. López-López, V.M. Alvarado-Martínez, G.V. Guerrero-Ramírez, Triple pendulum model involving fractional derivatives with different kernels, Chaos Solitons Fractals 91 (2016) 248-261.
[33] R.M.A. Zahoor, I.M. Qureshi, A modified least mean square algorithm using fractional derivative and its application to system identification, Eur. J. Sci. Res. 35(1) (2009) 14-21.
[34] M.A.Z. Raja, N.I. Chaudhary, Two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems, Signal Process. 107 (2015) 327-339.
[35] B. Shoaib, I.M. Qureshi, A modified fractional least mean square algorithm for chaotic and nonstationary time series prediction, Chinese Physics B 23(3) (2014) 030502. DOI: 10.1088/1674-1056/23/3/030502
[36] N.I. Chaudhary, M.A.Z. Raja, A.U.R. Khan, Design of modified fractional adaptive strategies for Hammerstein nonlinear control autoregressive systems, Nonlinear Dyn. 82(4) (2015) 1811-1830.
[37] N.I. Chaudhary, S. Zubair, M.A.Z. Raja, A new computing approach for power signal modeling using fractional adaptive algorithms, ISA Trans. 68 (2017) 189-202.
[38] M.S. Aslam, N.I. Chaudhary, M.A.Z. Raja, A sliding-window approximation-based fractional adaptive strategy for Hammerstein nonlinear ARMAX systems, Nonlinear Dyn. 87(1) (2017) 519-533.
[39] S. Zubair, N.I. Chaudhary, Z.A. Khan, W. Wang, Momentum fractional LMS for power signal parameter estimation, Signal Process. 142 (2018) 441-449.
[40] M. Geravanchizadeh, S. Ghalami Osgouei, Speech enhancement by modified convex combination of fractional adaptive filtering, Iranian J. Electr. Electron. Eng. 10(4) (2014) 256-266.
[41] S. Cheng, Y. Wei, Y. Chen, Y. Li, Y. Wang, An innovative fractional order LMS based on variable initial value and gradient order, Signal Process. 133 (2017) 260-269.
[42] S. Cheng, Y. Wei, Y. Chen, S. Liang, Y. Wang, A universal modified LMS algorithm with iteration order hybrid switching, ISA Trans. 67 (2017) 67-75.
[43] S.M. Shah, R. Samar, M.A.Z. Raja, J.A. Chambers, Fractional normalized filtered-error least mean squares algorithm for application in active noise control systems, Electron. Lett. 50(14) (2014) 973-975.
[44] N.I. Chaudhary, M.A.Z. Raja, M.S. Aslam, N. Ahmed, Novel generalization of Volterra LMS algorithm to fractional order with application to system identification, Neural Comput. Appl. 29(6) (2018) 41-58.
[45] N.I. Chaudhary, M.S. Aslam, M.A.Z. Raja, Modified Volterra LMS algorithm to fractional order for identification of Hammerstein non-linear system, IET Signal Process. 11(8) (2017) 975-985.
[46] Y. Tan, Z. He, B. Tian, Generalization of modified LMS algorithm to fractional order, IEEE Signal Process. Lett. 22(9) (2015) 1244-1248.
[47] Y.F. Pu, J.L. Zhou, Y. Zhang, N. Zhang, G. Huang, P. Siarry, Fractional extreme value adaptive training method: fractional steepest descent approach, IEEE Trans. Neural Netw. Learn. Syst. 26(4) (2015) 653-662.
[48] C.M. Holcomb, R.A. de Callafon, R.R. Bitmead, Closed-loop identification of Hammerstein systems with application to gas turbines, IFAC Proc. Vol. 47(3) (2014) 493-498.
[49] H. Salhi, S. Kamoun, A recursive parametric estimation algorithm of multivariable nonlinear systems described by Hammerstein mathematical models, Appl. Math. Model. 39(16) (2015) 4951-4962.
[50] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Appl. Math. Model. 37(4) (2013) 1694-1704.
[51] D. Wang, L. Li, Y. Ji, Y. Yan, Model recovery for Hammerstein systems using the auxiliary model based orthogonal matching pursuit method, Appl. Math. Model. 54 (2018) 537-550.
[52] L. Ma, X. Liu, Recursive maximum likelihood method for the identification of Hammerstein ARMAX system, Appl. Math. Model. 40(13-14) (2016) 6523-6535.
[53] D. Wang, F. Ding, Parameter estimation algorithms for multivariable Hammerstein CARMA systems, Inf. Sci. 355 (2016) 237-248.
[54] Y. Mao, F. Ding, A novel parameter separation based identification algorithm for Hammerstein systems, Appl. Math. Lett. 60 (2016) 21-27.
[55] N.I. Chaudhary, M.A.Z. Raja, J.A. Khan, M.S. Aslam, Identification of input nonlinear control autoregressive systems using fractional signal processing approach, Sci. World J. 2013 (2013) 1-13. DOI: 10.1155/2013/467276
[56] N.I. Chaudhary, M.A.Z. Raja, Identification of Hammerstein nonlinear ARMAX systems using nonlinear adaptive algorithms, Nonlinear Dyn. 79 (2015) 1385-1397.
[57] N.I. Chaudhary, M.A.Z. Raja, Design of fractional adaptive strategy for input nonlinear Box-Jenkins systems, Signal Process. 116 (2015) 141-151.
[58] N.I. Chaudhary, M. Ahmed, Z.A. Khan, S. Zubair, M.A.Z. Raja, N. Dedovic, Design of normalized fractional adaptive algorithms for parameter estimation of control autoregressive autoregressive systems, Appl. Math. Model. 55 (2018) 698-715.
[59] F. Le, I. Markovsky, C.T. Freeman, E. Rogers, Recursive identification of Hammerstein systems with application to electrically stimulated muscle, Control Eng. Pract. 20(4) (2012) 386-396.
[60] F. Le, I. Markovsky, C.T. Freeman, E. Rogers, Identification of electrically stimulated muscle models of stroke patients, Control Eng. Pract. 18(4) (2010) 396-407.
[61] V. Stojanovic, N. Nedic, D. Prsic, L. Dubonjic, Optimal experiment design for identification of ARX models with constrained output in non-Gaussian noise, Appl. Math. Model. 40(13) (2016) 6676-6689.
[62] V. Stojanovic, N. Nedic, Identification of time-varying OE models in presence of non-Gaussian noise: application to pneumatic servo drives, Int. J. Robust Nonlinear Control 26(18) (2016) 3974-3995.
[63] V. Stojanovic, V. Filipovic, Adaptive input design for identification of output error model with constrained output, Circuits Syst. Signal Process. 33(1) (2014) 97-113.
[64] M.A.Z. Raja, A.A. Shah, A. Mehmood, N.I. Chaudhary, M.S. Aslam, Bio-inspired computational heuristics for parameter estimation of nonlinear Hammerstein controlled autoregressive system, Neural Comput. Appl. 29(12) (2018) 1455-1474.
[65] N.I. Chaudhary, S. Zubair, M.A.Z. Raja, Design of momentum LMS adaptive strategy for parameter estimation of Hammerstein controlled autoregressive systems, Neural Comput. Appl. 30(4) (2018) 1133-1143.
[66] S. Haykin, Adaptive Filter Theory, Pearson Education India, 2008. ISBN 978-81-317-0869-9.


Fig. 1: Block diagram of HN-CAR model

[Flow diagram residue: HN-CAR identification model y(t) = ψᵀ(t)Θ + n(t) → normalized fractional adaptive algorithms (N-FLMS, N-MFLMS1, N-MFLMS2) → initialization of s(t), n(t), step size μ, fractional order ν and gain δ → fitness evaluation ‖Θ − Θ̂‖/‖Θ‖ and parameter adjustment until the termination criterion (fitness achieved or iterations completed) → adaptation repeated for 100 runs → statistical analyses using the mean square error, variance account for and Nash-Sutcliffe efficiency indices → performance comparison.]

Fig. 2: Overall flow diagram of the proposed study

[Fitness versus iterations curves for Example 1; panels: (a)-(c) FLMS at σ = 0.2, σ = 0.5 and non-Gaussian noise, (d)-(f) MFLMS1(δ = 0.25), (g)-(i) MFLMS1(δ = 0.75), (j)-(l) MFLMS2, each under the same three noise settings.]

Fig. 3: Iterative adaptation of fitness function using standard and normalized algorithms for Example 1

[Fitness comparison curves for Example 1; panels: (a), (c), (e) standard fractional algorithms at σ = 0.2, 0.5 and 0.8; (b), (d), (f) normalized fractional algorithms at σ = 0.2, 0.5 and 0.8.]

Fig. 4: Comparison of proposed normalized adaptive algorithms based on fitness for Example 1

[MSE (ascending order) versus number of independent runs for Example 1; panels: (a) FLMS, (b) N-FLMS, (c) MFLMS1(δ = 0.25), (d) MFLMS1(δ = 0.75), (e) N-MFLMS1(δ = 0.25), (f) N-MFLMS1(δ = 0.75), (g) MFLMS2, (h) N-MFLMS2.]

Fig. 5: Statistical analysis (ascending order) of proposed adaptive algorithms through MSE for Example 1

[Fitness versus iterations curves for Example 2; panels: (a)-(c) FLMS at σ = 0.2, 0.5 and 0.8, (d)-(f) MFLMS1(δ = 0.25), (g)-(i) MFLMS1(δ = 0.75), (j)-(l) MFLMS2 at the same noise levels.]

Fig. 6: Iterative adaptation of fitness function using standard and normalized algorithms for Example 2


Table 1: Comparison of proposed adaptive algorithms based on the step size parameter for Example 1 (mean value and SD over 10 runs, ν = 0.5, σ = 0.5)

METHOD | µ = 1E-02 MEAN | µ = 1E-02 SD | µ = 1E-03 MEAN | µ = 1E-03 SD | µ = 1E-04 MEAN | µ = 1E-04 SD | µ = 1E-05 MEAN | µ = 1E-05 SD
FLMS | NaN | NaN | 8.60E-04 | 8.32E-04 | 7.39E-02 | NaN | 1.95E-01 | 7.49E-02
N-FLMS | 4.04E-04 | 1.54E-04 | 5.76E-02 | 2.62E-02 | 1.89E-01 | 7.38E-02 | 3.66E-01 | 8.79E-02
MFLMS1(δ = 0.25) | NaN | NaN | 1.47E-03 | 9.18E-04 | 1.23E-01 | 5.52E-02 | 2.19E-01 | 8.12E-02
MFLMS1(δ = 0.75) | NaN | NaN | 1.44E-03 | 6.96E-04 | 1.21E-01 | 4.81E-02 | 2.17E-01 | 7.67E-02
N-MFLMS1(δ = 0.25) | 5.76E-04 | 3.06E-04 | 1.09E-01 | 5.08E-02 | 2.16E-01 | 8.05E-02 | 4.87E-01 | 9.96E-02
N-MFLMS1(δ = 0.75) | 5.41E-04 | 2.45E-04 | 1.07E-01 | 4.35E-02 | 2.13E-01 | 7.59E-02 | 4.36E-01 | 9.75E-02
MFLMS2 | NaN | NaN | 1.69E-03 | 1.23E-03 | 1.24E-01 | 6.01E-02 | 2.21E-01 | 8.50E-02
N-MFLMS2 | 6.70E-04 | 4.08E-04 | 1.10E-01 | 5.58E-02 | 2.20E-01 | 8.43E-02 | 5.47E-01 | 8.69E-02

Table 2: Comparison of proposed adaptive algorithms based on different fractional orders for Example 1 (MSE over 10 independent runs)

ν | FLMS | N-FLMS | MFLMS1(δ = 0.25) | MFLMS1(δ = 0.75) | N-MFLMS1(δ = 0.25) | N-MFLMS1(δ = 0.75) | MFLMS2 | N-MFLMS2
0.1 | NaN | 3.71E-04 | 3.27E-03 | 1.80E-03 | 1.25E-03 | 6.33E-04 | 6.79E-03 | 3.01E-03
0.2 | 7.48E-04 | 3.75E-04 | 2.68E-03 | 1.73E-03 | 9.99E-04 | 6.10E-04 | 4.67E-03 | 1.91E-03
0.3 | 8.05E-04 | 3.79E-04 | 2.21E-03 | 1.67E-03 | 8.16E-04 | 5.90E-04 | 3.29E-03 | 1.28E-03
0.4 | 8.47E-04 | 3.82E-04 | 1.87E-03 | 1.63E-03 | 6.86E-04 | 5.75E-04 | 2.42E-03 | 9.10E-04
0.5 | 8.67E-04 | 3.84E-04 | 1.64E-03 | 1.59E-03 | 5.98E-04 | 5.65E-04 | 1.87E-03 | 6.96E-04
0.6 | 8.56E-04 | 3.84E-04 | 1.49E-03 | 1.58E-03 | 5.44E-04 | 5.60E-04 | 1.55E-03 | 5.76E-04
0.7 | 8.14E-04 | 3.83E-04 | 1.42E-03 | 1.58E-03 | 5.17E-04 | 5.61E-04 | 1.40E-03 | 5.16E-04
0.8 | 7.43E-04 | 3.81E-04 | 1.43E-03 | 1.61E-03 | 5.16E-04 | 5.69E-04 | 1.37E-03 | 5.01E-04
0.9 | 6.54E-04 | 3.77E-04 | 1.53E-03 | 1.65E-03 | 5.44E-04 | 5.84E-04 | 1.47E-03 | 5.27E-04

Table 3: Performance comparison based on MSE values for Example 1 (design parameters a1, a2, c1, c2, c3, b2c1, b2c2, b2c3)

σ | METHOD | a1 | a2 | c1 | c2 | c3 | b2c1 | b2c2 | b2c3 | MSE
0.2 | FLMS | -1.346 | 0.749 | 1.005 | 0.494 | 0.208 | 1.685 | 0.837 | 0.337 | 2.02E-05
0.2 | N-FLMS | -1.346 | 0.750 | 1.000 | 0.495 | 0.196 | 1.682 | 0.833 | 0.334 | 1.60E-05
0.2 | MFLMS1(δ = 0.25) | -1.352 | 0.751 | 1.003 | 0.493 | 0.195 | 1.663 | 0.843 | 0.348 | 6.89E-05
0.2 | MFLMS1(δ = 0.75) | -1.352 | 0.751 | 0.997 | 0.492 | 0.197 | 1.648 | 0.843 | 0.355 | 1.85E-04
0.2 | N-MFLMS1(δ = 0.25) | -1.345 | 0.745 | 0.990 | 0.502 | 0.205 | 1.675 | 0.841 | 0.341 | 2.86E-05
0.2 | N-MFLMS1(δ = 0.75) | -1.345 | 0.745 | 0.993 | 0.502 | 0.203 | 1.670 | 0.842 | 0.344 | 3.49E-05
0.2 | MFLMS2 | -1.352 | 0.751 | 1.016 | 0.493 | 0.189 | 1.673 | 0.843 | 0.344 | 7.03E-05
0.2 | N-MFLMS2 | -1.351 | 0.746 | 1.003 | 0.493 | 0.198 | 1.671 | 0.846 | 0.344 | 3.39E-05
0.5 | FLMS | -1.377 | 0.746 | 0.982 | 0.501 | 0.189 | 1.680 | 0.849 | 0.341 | 1.60E-04
0.5 | N-FLMS | -1.354 | 0.748 | 1.005 | 0.513 | 0.217 | 1.677 | 0.834 | 0.334 | 6.74E-05
0.5 | MFLMS1(δ = 0.25) | -1.356 | 0.750 | 1.000 | 0.482 | 0.190 | 1.672 | 0.847 | 0.350 | 1.00E-04
0.5 | MFLMS1(δ = 0.75) | -1.357 | 0.750 | 0.993 | 0.480 | 0.191 | 1.656 | 0.847 | 0.358 | 2.08E-04
0.5 | N-MFLMS1(δ = 0.25) | -1.336 | 0.737 | 1.001 | 0.504 | 0.199 | 1.686 | 0.842 | 0.341 | 5.50E-05
0.5 | N-MFLMS1(δ = 0.75) | -1.337 | 0.737 | 1.004 | 0.504 | 0.197 | 1.679 | 0.843 | 0.344 | 5.36E-05
0.5 | MFLMS2 | -1.356 | 0.750 | 1.013 | 0.483 | 0.184 | 1.681 | 0.847 | 0.345 | 1.14E-04
0.5 | N-MFLMS2 | -1.336 | 0.737 | 0.998 | 0.504 | 0.202 | 1.689 | 0.842 | 0.340 | 6.08E-05
0.8 | FLMS | -1.374 | 0.765 | 1.034 | 0.489 | 0.166 | 1.664 | 0.829 | 0.331 | 4.57E-04
0.8 | N-FLMS | -1.354 | 0.745 | 1.015 | 0.523 | 0.224 | 1.686 | 0.830 | 0.329 | 1.93E-04
0.8 | MFLMS1(δ = 0.25) | -1.360 | 0.747 | 0.999 | 0.471 | 0.183 | 1.681 | 0.850 | 0.351 | 1.99E-04
0.8 | MFLMS1(δ = 0.75) | -1.361 | 0.747 | 0.990 | 0.469 | 0.185 | 1.664 | 0.851 | 0.360 | 3.02E-04
0.8 | N-MFLMS1(δ = 0.25) | -1.350 | 0.767 | 0.987 | 0.506 | 0.200 | 1.677 | 0.831 | 0.349 | 9.36E-05
0.8 | N-MFLMS1(δ = 0.75) | -1.328 | 0.740 | 1.006 | 0.505 | 0.192 | 1.684 | 0.843 | 0.346 | 1.03E-04
0.8 | MFLMS2 | -1.360 | 0.747 | 1.012 | 0.472 | 0.178 | 1.691 | 0.850 | 0.346 | 2.32E-04
0.8 | N-MFLMS2 | -1.346 | 0.760 | 0.989 | 0.496 | 0.206 | 1.663 | 0.834 | 0.344 | 8.39E-05
True values | | -1.350 | 0.750 | 1.000 | 0.500 | 0.200 | 1.680 | 0.840 | 0.336 | 0

Table 4: Statistical analyses based on different performance measures for Example 1 (MIN, MEAN and SD over 100 runs)

Metric | METHOD | σ = 0.2 MIN | MEAN | SD | σ = 0.5 MIN | MEAN | SD | σ = 0.8 MIN | MEAN | SD
MSE | FLMS | 2.02E-05 | NaN | NaN | 1.60E-04 | NaN | NaN | 4.57E-04 | NaN | NaN
MSE | N-FLMS | 1.60E-05 | 5.68E-05 | 2.71E-05 | 6.74E-05 | 3.52E-04 | 1.67E-04 | 1.93E-04 | 9.12E-04 | 4.28E-04
MSE | MFLMS1(δ = 0.25) | 6.89E-05 | 1.62E-03 | 1.05E-03 | 1.00E-04 | 1.67E-03 | 1.12E-03 | 1.99E-04 | 1.80E-03 | 1.22E-03
MSE | MFLMS1(δ = 0.75) | 1.85E-04 | 1.60E-03 | 7.90E-04 | 2.08E-04 | 1.67E-03 | 8.53E-04 | 3.02E-04 | 1.83E-03 | 9.69E-04
MSE | N-MFLMS1(δ = 0.25) | 2.86E-05 | 4.07E-04 | 2.57E-04 | 5.50E-05 | 5.35E-04 | 3.23E-04 | 9.36E-05 | 7.85E-04 | 4.33E-04
MSE | N-MFLMS1(δ = 0.75) | 3.49E-05 | 4.12E-04 | 1.97E-04 | 5.36E-05 | 5.53E-04 | 2.73E-04 | 1.03E-04 | 8.25E-04 | 3.93E-04
MSE | MFLMS2 | 7.03E-05 | 1.87E-03 | 1.49E-03 | 1.14E-04 | 1.92E-03 | 1.57E-03 | 2.32E-04 | 2.02E-03 | 1.67E-03
MSE | N-MFLMS2 | 3.39E-05 | 4.87E-04 | 3.86E-04 | 6.08E-05 | 6.10E-04 | 4.46E-04 | 8.39E-05 | 8.52E-04 | 5.51E-04
ENSE | FLMS | 3.02E-05 | NaN | NaN | 2.39E-04 | NaN | NaN | 6.82E-04 | NaN | NaN
ENSE | N-FLMS | 2.38E-05 | 8.47E-05 | 4.04E-05 | 1.01E-04 | 5.25E-04 | 2.49E-04 | 2.87E-04 | 1.36E-03 | 6.40E-04
ENSE | MFLMS1(δ = 0.25) | 1.03E-04 | 2.41E-03 | 1.56E-03 | 1.50E-04 | 2.50E-03 | 1.66E-03 | 2.97E-04 | 2.68E-03 | 1.82E-03
ENSE | MFLMS1(δ = 0.75) | 2.76E-04 | 2.38E-03 | 1.18E-03 | 3.10E-04 | 2.49E-03 | 1.27E-03 | 4.50E-04 | 2.73E-03 | 1.45E-03
ENSE | N-MFLMS1(δ = 0.25) | 4.28E-05 | 6.07E-04 | 3.83E-04 | 8.20E-05 | 7.99E-04 | 4.81E-04 | 1.40E-04 | 1.17E-03 | 6.46E-04
ENSE | N-MFLMS1(δ = 0.75) | 5.22E-05 | 6.15E-04 | 2.94E-04 | 8.00E-05 | 8.26E-04 | 4.08E-04 | 1.54E-04 | 1.23E-03 | 5.86E-04
ENSE | MFLMS2 | 1.05E-04 | 2.79E-03 | 2.22E-03 | 1.70E-04 | 2.86E-03 | 2.34E-03 | 3.46E-04 | 3.02E-03 | 2.49E-03
ENSE | N-MFLMS2 | 5.06E-05 | 7.27E-04 | 5.77E-04 | 9.07E-05 | 9.11E-04 | 6.65E-04 | 1.25E-04 | 1.27E-03 | 8.22E-04
EVAF | FLMS | 2.69E-03 | NaN | NaN | 1.70E-02 | NaN | NaN | 4.81E-02 | NaN | NaN
EVAF | N-FLMS | 1.60E-03 | 7.46E-03 | 3.75E-03 | 9.29E-03 | 4.64E-02 | 2.32E-02 | 2.49E-02 | 1.21E-01 | 5.89E-02
EVAF | MFLMS1(δ = 0.25) | 9.92E-03 | 2.29E-01 | 1.47E-01 | 1.39E-02 | 2.34E-01 | 1.55E-01 | 2.70E-02 | 2.48E-01 | 1.68E-01
EVAF | MFLMS1(δ = 0.75) | 2.62E-02 | 2.26E-01 | 1.11E-01 | 2.80E-02 | 2.34E-01 | 1.19E-01 | 3.88E-02 | 2.53E-01 | 1.34E-01
EVAF | N-MFLMS1(δ = 0.25) | 4.27E-03 | 5.78E-02 | 3.61E-02 | 7.40E-03 | 7.47E-02 | 4.47E-02 | 1.37E-02 | 1.08E-01 | 6.01E-02
EVAF | N-MFLMS1(δ = 0.75) | 5.19E-03 | 5.85E-02 | 2.78E-02 | 7.52E-03 | 7.73E-02 | 3.84E-02 | 1.30E-02 | 1.14E-01 | 5.54E-02
EVAF | MFLMS2 | 1.05E-02 | 2.65E-01 | 2.10E-01 | 1.68E-02 | 2.68E-01 | 2.18E-01 | 3.37E-02 | 2.80E-01 | 2.30E-01
EVAF | N-MFLMS2 | 4.98E-03 | 6.93E-02 | 5.47E-02 | 8.16E-03 | 8.54E-02 | 6.19E-02 | 1.24E-02 | 1.18E-01 | 7.62E-02

Table 5: Computational analyses of proposed adaptive algorithms for Example 1 (mean execution time, MET, and its SD)

METHOD | σ = 0.2 MET | SD | σ = 0.5 MET | SD | σ = 0.8 MET | SD
FLMS | 0.527 | 0.014 | 0.532 | 0.036 | 0.528 | 0.032
N-FLMS | 0.563 | 0.016 | 0.562 | 0.028 | 0.556 | 0.022
MFLMS1(δ = 0.25) | 0.545 | 0.022 | 0.548 | 0.040 | 0.546 | 0.032
MFLMS1(δ = 0.75) | 0.543 | 0.019 | 0.542 | 0.012 | 0.543 | 0.014
N-MFLMS1(δ = 0.25) | 0.576 | 0.023 | 0.575 | 0.035 | 0.572 | 0.021
N-MFLMS1(δ = 0.75) | 0.576 | 0.031 | 0.572 | 0.014 | 0.571 | 0.024
MFLMS2 | 0.525 | 0.027 | 0.520 | 0.035 | 0.522 | 0.014
N-MFLMS2 | 0.537 | 0.027 | 0.537 | 0.019 | 0.534 | 0.012

Table 6: Performance comparison based on MSE values for Example 2 (design parameters a1, a2, c1, c2, c3, b2c1, b2c2, b2c3)

σ | METHOD | a1 | a2 | c1 | c2 | c3 | b2c1 | b2c2 | b2c3 | MSE
0.2 | FLMS | -1.000 | 0.800 | 2.786 | -4.798 | 5.707 | 1.682 | -2.881 | 3.421 | 3.20E-05
0.2 | N-FLMS | -0.999 | 0.801 | 2.800 | -4.798 | 5.696 | 1.678 | -2.881 | 3.422 | 3.68E-06
0.2 | MFLMS1(δ = 0.25) | -0.998 | 0.798 | 2.800 | -4.797 | 5.698 | 1.651 | -2.881 | 3.432 | 1.28E-04
0.2 | MFLMS1(δ = 0.75) | -1.000 | 0.801 | 2.758 | -4.789 | 5.720 | 1.690 | -2.893 | 3.416 | 3.22E-04
0.2 | N-MFLMS1(δ = 0.25) | -1.000 | 0.799 | 2.805 | -4.799 | 5.699 | 1.679 | -2.880 | 3.416 | 5.59E-06
0.2 | N-MFLMS1(δ = 0.75) | -1.002 | 0.799 | 2.798 | -4.802 | 5.700 | 1.679 | -2.882 | 3.420 | 2.94E-06
0.2 | MFLMS2 | -0.998 | 0.798 | 2.794 | -4.801 | 5.701 | 1.619 | -2.877 | 3.446 | 5.60E-04
0.2 | N-MFLMS2 | -0.999 | 0.802 | 2.800 | -4.801 | 5.699 | 1.681 | -2.884 | 3.416 | 4.50E-06
0.5 | FLMS | -1.002 | 0.799 | 2.775 | -4.754 | 5.711 | 1.696 | -2.924 | 3.414 | 6.33E-04
0.5 | N-FLMS | -0.997 | 0.801 | 2.801 | -4.797 | 5.690 | 1.677 | -2.883 | 3.424 | 2.01E-05
0.5 | MFLMS1(δ = 0.25) | -1.001 | 0.796 | 2.805 | -4.614 | 5.691 | 1.632 | -2.879 | 3.413 | 4.63E-03
0.5 | MFLMS1(δ = 0.75) | -1.004 | 0.800 | 2.751 | -4.431 | 5.711 | 1.673 | -2.827 | 3.351 | 1.83E-02
0.5 | N-MFLMS1(δ = 0.25) | -0.999 | 0.798 | 2.794 | -4.794 | 5.699 | 1.684 | -2.883 | 3.415 | 1.63E-05
0.5 | N-MFLMS1(δ = 0.75) | -1.001 | 0.802 | 2.811 | -4.797 | 5.704 | 1.678 | -2.879 | 3.421 | 1.84E-05
0.5 | MFLMS2 | -1.001 | 0.797 | 2.797 | -4.703 | 5.696 | 1.586 | -2.789 | 3.434 | 3.35E-03
0.5 | N-MFLMS2 | -0.999 | 0.798 | 2.795 | -4.794 | 5.699 | 1.683 | -2.883 | 3.416 | 1.20E-05
0.8 | FLMS | -1.003 | 0.799 | 2.775 | -4.753 | 5.710 | 1.695 | -2.923 | 3.416 | 6.27E-04
0.8 | N-FLMS | -0.995 | 0.802 | 2.800 | -4.796 | 5.686 | 1.679 | -2.884 | 3.425 | 3.79E-05
0.8 | MFLMS1(δ = 0.25) | -1.000 | 0.795 | 2.804 | -4.613 | 5.689 | 1.632 | -2.878 | 3.411 | 4.67E-03
0.8 | MFLMS1(δ = 0.75) | -1.004 | 0.800 | 2.751 | -4.432 | 5.712 | 1.673 | -2.828 | 3.351 | 1.82E-02
0.8 | N-MFLMS1(δ = 0.25) | -1.006 | 0.795 | 2.788 | -4.804 | 5.700 | 1.679 | -2.877 | 3.412 | 3.65E-05
0.8 | N-MFLMS1(δ = 0.75) | -1.001 | 0.789 | 2.808 | -4.802 | 5.702 | 1.682 | -2.874 | 3.415 | 3.22E-05
0.8 | MFLMS2 | -1.001 | 0.795 | 2.796 | -4.703 | 5.694 | 1.586 | -2.788 | 3.432 | 3.38E-03
0.8 | N-MFLMS2 | -0.998 | 0.796 | 2.790 | -4.791 | 5.699 | 1.684 | -2.884 | 3.415 | 3.16E-05
True values | | -1.000 | 0.800 | 2.800 | -4.800 | 5.700 | 1.680 | -2.880 | 3.420 | 0