Applied Mathematical Modelling 38 (2014) 1–11
Recursive computational formulas of the least squares criterion functions for scalar system identification

Junxia Ma (a), Rui Ding (a,b,*)

(a) Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China
(b) School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, PR China
Article history: Received 13 April 2012; Received in revised form 17 March 2013; Accepted 31 May 2013; Available online 24 June 2013.

Keywords: Numerical algorithm; Least squares; System modeling; Criterion function; Recursive algorithm; Parameter estimation
Abstract

This paper discusses recursive computation of the criterion functions of several least squares type parameter estimation methods for linear regression models, including the well-known recursive least squares (RLS) algorithm, the weighted RLS algorithm, the forgetting factor RLS algorithm and the finite-data-window RLS algorithm with or without a forgetting factor. The recursive computation formulas of the criterion functions are derived from the recursive parameter estimation equations. The proposed formulas can be extended to the estimation algorithms of the pseudo-linear regression models for equation error systems and output error systems. Finally, a simulation example is provided.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction

Parameter estimation is of great importance in system modeling and identification [1–5], adaptive control [6–10] and signal processing [11,12]. Typical parameter estimation methods include iterative algorithms [13–17] and recursive algorithms [18–20]. Iterative algorithms can be used to solve matrix equations [21–28], such as the famous Jacobi and Gauss–Seidel iterations [29,30]. Recursive estimation algorithms can identify the parameters of systems online and update the parameter estimates in real time at each step [31–33].

In the field of linear algebra, Xie et al. studied gradient based and least squares based iterative algorithms for linear matrix equations [34]; Dehghan and Hajarian presented two iterative algorithms for solving a pair of matrix equations AYB = E and CYD = F and the generalized coupled Sylvester matrix equations [35,36]; Ding et al. derived the iterative solutions to matrix equations of the form A_iXB_i = F_i [37]. In the field of system identification, Wang et al. proposed a filtering based recursive least squares (RLS) identification algorithm for CARARMA systems [38] and an auxiliary model based recursive generalized least squares parameter estimation algorithm for Hammerstein OEAR systems [39]; Ding and Chen studied the performance bounds of the forgetting factor least squares algorithm for time-varying systems with finite measurement data [40]; Ding and Xiao developed the
This work was supported by the National Natural Science Foundation of China (No. 61273194), the 111 Project (B12018) and the PAPD of Jiangsu Higher Education Institutions.

* Corresponding author at: Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China. E-mail addresses: [email protected] (J. Ma), [email protected] (R. Ding).
http://dx.doi.org/10.1016/j.apm.2013.05.059
finite-data-window recursive least squares algorithm with a forgetting factor for dynamical modeling (the FDW-FF-RLS algorithm for short) [41].

In general, a parameter estimation algorithm can be obtained by minimizing a quadratic cost function, namely the sum of the squared differences between the system outputs and the model outputs [42,43]. For online identification, the parameter estimation algorithm is implemented in a recursive form. A natural question is therefore how to compute the cost function itself in a recursive form, since its values measure the parameter estimation accuracy [44]. This is the subject of this paper. Recently, Ma et al. studied the recursive computational formulas of the criterion functions for the well-known weighted recursive least squares algorithm and the finite-data-window recursive least squares algorithm for linear regression models [45], and the recursive computational relations of the cost functions for the least-squares-type algorithms for multivariable (or multivariate) linear regression models [46].

On the basis of the work in [45,46], this paper derives the recursive computational formulas of the quadratic criterion functions for recursive least squares type parameter estimation algorithms: the RLS algorithm in Section 2, the weighted RLS algorithm in Section 3, the forgetting factor RLS algorithm in Section 4, the finite-data-window RLS algorithm in Section 5 and the FDW-FF-RLS algorithm in Section 6. Section 7 briefly discusses the recursive computational formulas of the criterion functions for equation error models. Section 8 provides a numerical example to illustrate the proposed methods. Finally, concluding remarks are given in Section 9.

2. The recursive least squares algorithm

Let us introduce some notation first. "A =: X" or "X := A" stands for "A is defined as X"; the symbol I (I_n) stands for an identity matrix of appropriate size (n × n); the superscript T denotes the matrix transpose; the norm of a matrix X is defined by ||X||^2 := tr[XX^T]; and θ̂(t) denotes the estimate of θ at time t.

Ma and Ding studied the recursive relations of the criterion functions of the least squares parameter estimation algorithms for multivariable systems, including the multivariate RLS (MRLS) algorithm for multivariate linear regression models, the forgetting factor MRLS algorithm, the finite-data-window MRLS algorithm with a forgetting factor (the FDW-FF-MRLS algorithm for short) and the FDW-FF-RLS algorithm for multivariable controlled autoregressive models [46]. On that basis, this paper discusses the recursive computational formulas of the least squares criterion functions for scalar systems described by the following linear regression model,
y(t) = \varphi^T(t)\theta + v(t),   (1)
where y(t) is the system output, v(t) is a white noise with zero mean, θ ∈ R^n is the parameter vector to be identified, and φ(t) ∈ R^n is the regression information vector consisting of the system inputs and outputs. Consider the data set {y(i), φ(i): 1 ≤ i ≤ t} and define the quadratic criterion function,
J_1(\theta) := \sum_{j=1}^{t}[y(j) - \varphi^T(j)\theta]^2.
Define the innovation e(t) := y(t) - φ^T(t)θ̂(t-1) ∈ R [47–49]. Letting the derivative of J_1(θ) with respect to θ be zero at θ = θ̂(t):

\frac{\partial J_1(\theta)}{\partial\theta}\Big|_{\theta=\hat\theta(t)} = -2\sum_{j=1}^{t}\varphi(j)[y(j) - \varphi^T(j)\hat\theta(t)] = 0,   (2)

we can obtain the following recursive least squares (RLS) algorithm for estimating θ [1,47,50]:
\hat\theta(t) = \hat\theta(t-1) + L(t)e(t),   (3)

e(t) = y(t) - \varphi^T(t)\hat\theta(t-1),   (4)

L(t) = P(t)\varphi(t) = \frac{P(t-1)\varphi(t)}{1 + \varphi^T(t)P(t-1)\varphi(t)},   (5)

P(t) = [I - L(t)\varphi^T(t)]P(t-1), \quad P(0) = p_0 I,   (6)
where P(t) ∈ R^{n×n} denotes the covariance matrix, and p_0 is a large positive number. The criterion function J_1(θ) with θ = θ̂(t) is given by
J_1[\hat\theta(t)] = \sum_{j=1}^{t}[y(j) - \varphi^T(j)\hat\theta(t)]^2.   (7)
From (6), we have the relation

\sum_{j=1}^{t}\varphi(j)\varphi^T(j) + P^{-1}(0) = P^{-1}(t).
Using (3)-(5) and the above relation, Eq. (7) can be expressed as

J_1(t) = \sum_{j=1}^{t-1}[y(j) - \varphi^T(j)\hat\theta(t)]^2 + [y(t) - \varphi^T(t)\hat\theta(t)]^2
  = \sum_{j=1}^{t-1}[y(j) - \varphi^T(j)\hat\theta(t-1) - \varphi^T(j)L(t)e(t)]^2 + [y(t) - \varphi^T(t)\hat\theta(t-1) - \varphi^T(t)L(t)e(t)]^2
  = \sum_{j=1}^{t-1}[y(j) - \varphi^T(j)\hat\theta(t-1)]^2 - 2\sum_{j=1}^{t-1}[y(j) - \varphi^T(j)\hat\theta(t-1)]\varphi^T(j)L(t)e(t) + \sum_{j=1}^{t-1}[\varphi^T(j)L(t)e(t)]^2 + e^2(t) - 2\varphi^T(t)L(t)e^2(t) + [\varphi^T(t)L(t)e(t)]^2
  = J_1(t-1) - 0 + L^T(t)\sum_{j=1}^{t}\varphi(j)\varphi^T(j)L(t)e^2(t) + e^2(t) - 2\varphi^T(t)L(t)e^2(t)
  = J_1(t-1) + \varphi^T(t)P(t)[P^{-1}(t) - P^{-1}(0)]P(t)\varphi(t)e^2(t) + e^2(t) - 2L^T(t)\varphi(t)e^2(t)
  = J_1(t-1) + e^2(t) - L^T(t)\varphi(t)e^2(t) - \varphi^T(t)P(t)P^{-1}(0)P(t)\varphi(t)e^2(t)
  = J_1(t-1) + \Big[1 - \frac{\varphi^T(t)P(t-1)\varphi(t)}{1 + \varphi^T(t)P(t-1)\varphi(t)}\Big]e^2(t) - \varphi^T(t)P^2(t)\varphi(t)e^2(t)/p_0
  = J_1(t-1) + \frac{e^2(t)}{1 + \varphi^T(t)P(t-1)\varphi(t)} - \varphi^T(t)P^2(t)\varphi(t)e^2(t)/p_0,   (8)

where the cross term in the third line vanishes because θ̂(t-1) satisfies the normal equation (2) at time t-1.
Since p_0 is taken to be a large positive number and P(t) decreases as time t increases, the last term on the right-hand side of (8) can be neglected, and we have the approximate relation [46],

J_1(t) = J_1(t-1) + \frac{e^2(t)}{1 + \varphi^T(t)P(t-1)\varphi(t)}.   (9)
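As an illustration, the following minimal Python sketch implements (3)-(6) together with the criterion recursion (9); the function name and the array layout are our own choices, not part of the paper.

import numpy as np

def rls_with_criterion(y, Phi, p0=1e6):
    """RLS estimation (3)-(6) with the recursive criterion update (9).

    y   : (T,) array of outputs
    Phi : (T, n) array whose rows are the information vectors phi(t)
    Returns the estimates theta_hat(t) and the criterion values J1(t).
    """
    T, n = Phi.shape
    theta = np.zeros(n)          # theta_hat(0)
    P = p0 * np.eye(n)           # P(0) = p0 * I
    J = 0.0                      # J1(0)
    thetas, Js = [], []
    for t in range(T):
        phi = Phi[t]
        e = y[t] - phi @ theta                   # innovation, Eq. (4)
        denom = 1.0 + phi @ P @ phi
        L = (P @ phi) / denom                    # gain, Eq. (5)
        theta = theta + L * e                    # Eq. (3)
        P = P - np.outer(L, phi) @ P             # Eq. (6)
        J = J + e**2 / denom                     # criterion recursion, Eq. (9)
        thetas.append(theta.copy()); Js.append(J)
    return np.array(thetas), np.array(Js)

At each step the criterion is updated in O(1) from the innovation, instead of re-summing all t squared residuals of (7).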
3. The weighted recursive least squares algorithm

Introduce the weighting factors w_j (w_j ≥ 0) and define the weighted criterion function,

J_2(\theta) := \sum_{j=1}^{t} w_j[y(j) - \varphi^T(j)\theta]^2.
Minimizing the criterion function J_2(θ) gives

\sum_{j=1}^{t} w_j\varphi(j)[y(j) - \varphi^T(j)\hat\theta(t)] = 0,   (10)

\hat\theta(t) = \Big[\sum_{j=1}^{t} w_j\varphi(j)\varphi^T(j)\Big]^{-1}\sum_{j=1}^{t} w_j\varphi(j)y(j),   (11)

J_2(t) := J_2[\hat\theta(t)] = \sum_{j=1}^{t} w_j[y(j) - \varphi^T(j)\hat\theta(t)]^2.
From (11), we can obtain the weighted recursive least squares (W-RLS) estimation algorithm:
\hat\theta(t) = \hat\theta(t-1) + L(t)e(t),   (12)

e(t) = y(t) - \varphi^T(t)\hat\theta(t-1),   (13)

L(t) = P(t)\varphi(t)w_t = \frac{P(t-1)\varphi(t)}{1/w_t + \varphi^T(t)P(t-1)\varphi(t)},   (14)

P(t) = [I - L(t)\varphi^T(t)]P(t-1), \quad w_t \geq 0, \quad P(0) = p_0 I.   (15)

If w_j = 1 for all j, the W-RLS algorithm reduces to the RLS algorithm. From (15), we have the relation

\sum_{j=1}^{t} w_j\varphi(j)\varphi^T(j) + P^{-1}(0) = P^{-1}(t).

Using (12)-(14) and the above relation, the criterion function J_2(t) can be expressed as
J_2(t) = \sum_{j=1}^{t-1} w_j[y(j) - \varphi^T(j)\hat\theta(t)]^2 + w_t[y(t) - \varphi^T(t)\hat\theta(t)]^2
  = \sum_{j=1}^{t-1} w_j\{y(j) - \varphi^T(j)[\hat\theta(t-1) + L(t)e(t)]\}^2 + w_t\{y(t) - \varphi^T(t)[\hat\theta(t-1) + L(t)e(t)]\}^2
  = \sum_{j=1}^{t-1} w_j[y(j) - \varphi^T(j)\hat\theta(t-1)]^2 - 2\sum_{j=1}^{t-1} w_j[y(j) - \varphi^T(j)\hat\theta(t-1)]\varphi^T(j)L(t)e(t) + \sum_{j=1}^{t-1} w_j[\varphi^T(j)L(t)e(t)]^2 + w_t e^2(t) - 2w_t\varphi^T(t)L(t)e^2(t) + w_t[\varphi^T(t)L(t)e(t)]^2
  = J_2(t-1) - 0 + L^T(t)\sum_{j=1}^{t} w_j\varphi(j)\varphi^T(j)L(t)e^2(t) + w_t e^2(t) - 2w_t\varphi^T(t)L(t)e^2(t)
  = J_2(t-1) + w_t\varphi^T(t)P(t)[P^{-1}(t) - P^{-1}(0)]P(t)\varphi(t)w_t e^2(t) + w_t e^2(t) - 2w_t\varphi^T(t)L(t)e^2(t)
  = J_2(t-1) + w_t e^2(t) - w_t\varphi^T(t)L(t)e^2(t) - w_t^2\varphi^T(t)P(t)P^{-1}(0)P(t)\varphi(t)e^2(t)
  = J_2(t-1) + \frac{e^2(t)}{\varphi^T(t)P(t-1)\varphi(t) + 1/w_t} - w_t^2\varphi^T(t)P^2(t)\varphi(t)e^2(t)/p_0,

where the cross term again vanishes by the normal equation (10) at time t-1.
For large p_0, we have the approximate relation,

J_2(t) = J_2(t-1) + \frac{e^2(t)}{\varphi^T(t)P(t-1)\varphi(t) + 1/w_t}.   (16)
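A single W-RLS step in the same style as the RLS sketch of Section 2 (again, the helper name and signature are illustrative assumptions):

import numpy as np

def wrls_step(theta, P, J, phi, y_t, w_t):
    # One W-RLS update (12)-(15) with the criterion recursion (16).
    e = y_t - phi @ theta                    # Eq. (13)
    denom = 1.0 / w_t + phi @ P @ phi
    L = (P @ phi) / denom                    # Eq. (14)
    theta = theta + L * e                    # Eq. (12)
    P = P - np.outer(L, phi) @ P             # Eq. (15)
    J = J + e**2 / denom                     # Eq. (16)
    return theta, P, J

Note that L(t) = P(t)φ(t)w_t in (14); the closed form used above avoids computing P(t) before the gain.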
4. The forgetting factor recursive least squares algorithm

Introduce a forgetting factor λ (0 < λ < 1) and define the weighted criterion function,

J_3(\theta) := \sum_{j=1}^{t}\lambda^{t-j}[y(j) - \varphi^T(j)\theta]^2.
Letting the derivative of J_3(θ) with respect to θ be zero at θ = θ̂(t):

\frac{\partial J_3(\theta)}{\partial\theta}\Big|_{\theta=\hat\theta(t)} = -2\sum_{j=1}^{t}\lambda^{t-j}\varphi(j)[y(j) - \varphi^T(j)\hat\theta(t)] = 0,   (17)
we can obtain the forgetting factor least squares estimate:

\hat\theta(t) = \Big[\sum_{j=1}^{t}\lambda^{t-j}\varphi(j)\varphi^T(j)\Big]^{-1}\sum_{j=1}^{t}\lambda^{t-j}\varphi(j)y(j).   (18)
The criterion function J_3(θ) with θ = θ̂(t) is given by

J_3[\hat\theta(t)] = \sum_{j=1}^{t}\lambda^{t-j}[y(j) - \varphi^T(j)\hat\theta(t)]^2.   (19)
The forgetting factor least squares estimate θ̂(t) in (18) can be computed by the following forgetting factor recursive least squares (FF-RLS) algorithm [40,47]:
\hat\theta(t) = \hat\theta(t-1) + L(t)e(t),   (20)

e(t) = y(t) - \varphi^T(t)\hat\theta(t-1),   (21)

L(t) = P(t)\varphi(t) = \frac{P(t-1)\varphi(t)}{\lambda + \varphi^T(t)P(t-1)\varphi(t)},   (22)

P(t) = \frac{1}{\lambda}[I - L(t)\varphi^T(t)]P(t-1), \quad 0 < \lambda < 1, \quad P(0) = p_0 I.   (23)

When λ = 1, the FF-RLS algorithm reduces to the RLS algorithm. From (23), we have the relation

\sum_{j=1}^{t}\lambda^{t-j}\varphi(j)\varphi^T(j) + \lambda^t P^{-1}(0) = P^{-1}(t).
Using (17) and (20)-(23), from (19), we have

J_3(t) := J_3[\hat\theta(t)] = \sum_{j=1}^{t}\lambda^{t-j}[y(j) - \varphi^T(j)\hat\theta(t)]^2
  = \sum_{j=1}^{t-1}\lambda^{t-j}[y(j) - \varphi^T(j)\hat\theta(t)]^2 + [y(t) - \varphi^T(t)\hat\theta(t)]^2
  = \lambda\sum_{j=1}^{t-1}\lambda^{t-1-j}\{y(j) - \varphi^T(j)[\hat\theta(t-1) + L(t)e(t)]\}^2 + \{y(t) - \varphi^T(t)[\hat\theta(t-1) + L(t)e(t)]\}^2
  = \lambda\sum_{j=1}^{t-1}\lambda^{t-1-j}[y(j) - \varphi^T(j)\hat\theta(t-1)]^2 - 2\sum_{j=1}^{t-1}\lambda^{t-j}[y(j) - \varphi^T(j)\hat\theta(t-1)]\varphi^T(j)L(t)e(t) + \sum_{j=1}^{t-1}\lambda^{t-j}[\varphi^T(j)L(t)e(t)]^2 + e^2(t) - 2\varphi^T(t)L(t)e^2(t) + [\varphi^T(t)L(t)e(t)]^2
  = \lambda J_3(t-1) - 0 + L^T(t)\sum_{j=1}^{t}\lambda^{t-j}\varphi(j)\varphi^T(j)L(t)e^2(t) + e^2(t) - 2\varphi^T(t)L(t)e^2(t)
  = \lambda J_3(t-1) + L^T(t)[P^{-1}(t) - \lambda^t P^{-1}(0)]P(t)\varphi(t)e^2(t) + e^2(t) - 2\varphi^T(t)L(t)e^2(t)
  = \lambda J_3(t-1) + e^2(t) - \varphi^T(t)L(t)e^2(t) - \varphi^T(t)P(t)\lambda^t P^{-1}(0)P(t)\varphi(t)e^2(t)
  = \lambda\Big[J_3(t-1) + \frac{e^2(t)}{\lambda + \varphi^T(t)P(t-1)\varphi(t)}\Big] - \lambda^t\varphi^T(t)P^2(t)\varphi(t)e^2(t)/p_0,

where the cross term vanishes as before by (17) at time t-1. For large p_0, we have the approximate relation [46],
J_3(t) = \lambda\Big[J_3(t-1) + \frac{e^2(t)}{\lambda + \varphi^T(t)P(t-1)\varphi(t)}\Big].   (24)
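A single FF-RLS step, again as an illustrative sketch with our own helper name:

import numpy as np

def ffrls_step(theta, P, J, phi, y_t, lam):
    # One FF-RLS update (20)-(23) with the criterion recursion (24);
    # lam is the forgetting factor (0 < lam < 1).
    e = y_t - phi @ theta                        # Eq. (21)
    denom = lam + phi @ P @ phi
    L = (P @ phi) / denom                        # Eq. (22)
    theta = theta + L * e                        # Eq. (20)
    P = (P - np.outer(L, phi) @ P) / lam         # Eq. (23)
    J = lam * (J + e**2 / denom)                 # Eq. (24)
    return theta, P, J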
5. The finite-data-window recursive least squares algorithm

Since the finite-data-window recursive least squares algorithm uses only the newest data set {y(j), φ(j): t-p+1 ≤ j ≤ t} with window length p, it can track time-varying parameters. Define the quadratic criterion function,

J_4(\theta) := \sum_{j=t-p+1}^{t}[y(j) - \varphi^T(j)\theta]^2.
Assume that y(t) = 0 and φ(t) = 0 for t ≤ 0. Define the covariance matrices P(t) and P_2(t-1) as

P(t) := \Big[\sum_{j=t-p+1}^{t}\varphi(j)\varphi^T(j)\Big]^{-1}, \quad
P_2(t-1) := \Big[\sum_{j=t-p+1}^{t-1}\varphi(j)\varphi^T(j)\Big]^{-1}.

It follows that

P^{-1}(t) = P^{-1}(0) + \sum_{j=t-p+1}^{t}\varphi(j)\varphi^T(j),   (25)

P_2^{-1}(t-1) = P_2^{-1}(0) + \sum_{j=t-p+1}^{t-1}\varphi(j)\varphi^T(j).   (26)
Letting the derivative of J_4(θ) with respect to θ be zero at θ = θ̂(t), we have

\frac{\partial J_4(\theta)}{\partial\theta}\Big|_{\theta=\hat\theta(t)} = -2\sum_{j=t-p+1}^{t}\varphi(j)[y(j) - \varphi^T(j)\hat\theta(t)] = 0.

Thus, the least squares estimate of θ is given by

\hat\theta(t) = \Big[\sum_{j=t-p+1}^{t}\varphi(j)\varphi^T(j)\Big]^{-1}\sum_{j=t-p+1}^{t}\varphi(j)y(j).   (27)
The criterion function J_4(θ) with θ = θ̂(t) is given by

J_4[\hat\theta(t)] = \sum_{j=t-p+1}^{t}[y(j) - \varphi^T(j)\hat\theta(t)]^2.   (28)
Referring to [41], we can obtain the following finite-data-window recursive least squares (FDW-RLS) algorithm for estimating θ:

\hat\theta(t) = \hat\vartheta(t-1) + P(t)\varphi(t)[y(t) - \varphi^T(t)\hat\vartheta(t-1)],   (29)

P^{-1}(t) = P_2^{-1}(t-1) + \varphi(t)\varphi^T(t),   (30)

\hat\vartheta(t-1) = \hat\theta(t-1) - P_2(t-1)\varphi(t-p)[y(t-p) - \varphi^T(t-p)\hat\theta(t-1)],   (31)

P_2^{-1}(t-1) = P^{-1}(t-1) - \varphi(t-p)\varphi^T(t-p), \quad P(0) = p_0 I,   (32)
where \hat\vartheta(t) denotes the estimate of θ based on the data window of length p-1. Defining

e_1(t) := y(t) - \varphi^T(t)\hat\vartheta(t-1), \quad
e_2(t) := y(t-p) - \varphi^T(t-p)\hat\theta(t-1), \quad
L(t) := P(t)\varphi(t),

and applying the matrix inversion formula

(A + BC)^{-1} = A^{-1} - A^{-1}B(I + CA^{-1}B)^{-1}CA^{-1}   (33)
to (30) and (32), the FDW-RLS estimation algorithm can equivalently be expressed as

\hat\theta(t) = \hat\vartheta(t-1) + L(t)e_1(t),   (34)

P(t) = P_2(t-1) - \frac{P_2(t-1)\varphi(t)\varphi^T(t)P_2(t-1)}{1 + \varphi^T(t)P_2(t-1)\varphi(t)},   (35)

\hat\vartheta(t-1) = \hat\theta(t-1) - P_2(t-1)\varphi(t-p)e_2(t),   (36)

P_2(t-1) = P(t-1) + \frac{P(t-1)\varphi(t-p)\varphi^T(t-p)P(t-1)}{1 - \varphi^T(t-p)P(t-1)\varphi(t-p)}, \quad P(0) = p_0 I.   (37)
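To see where (35) comes from, apply (33) to (30) with A = P_2^{-1}(t-1), B = φ(t) and C = φ^T(t); the factor I + CA^{-1}B = 1 + φ^T(t)P_2(t-1)φ(t) is a scalar, so no additional matrix inversion is required:

P(t) = [P_2^{-1}(t-1) + \varphi(t)\varphi^T(t)]^{-1} = P_2(t-1) - \frac{P_2(t-1)\varphi(t)\varphi^T(t)P_2(t-1)}{1 + \varphi^T(t)P_2(t-1)\varphi(t)}.

Equation (37) follows in the same way from (32) by taking B = -φ(t-p) and C = φ^T(t-p), which flips the signs in the denominator and in front of the correction term.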
The criterion function J_4(θ) with θ = θ̂(t) is given by

J_4(t) := J_4[\hat\theta(t)] = \sum_{j=t-p+1}^{t}[y(j) - \varphi^T(j)\hat\theta(t)]^2.   (38)
Using (25)-(27), from (38), we have

J_4(t) := J_4[\hat\theta(t)] = \sum_{j=t-p+1}^{t}[y(j) - \varphi^T(j)\hat\theta(t)]^2 = \sum_{j=t-p+1}^{t-1}[y(j) - \varphi^T(j)\hat\theta(t)]^2 + [y(t) - \varphi^T(t)\hat\theta(t)]^2
  = \sum_{j=t-p+1}^{t-1}\{y(j) - \varphi^T(j)[\hat\vartheta(t-1) + L(t)e_1(t)]\}^2 + \{y(t) - \varphi^T(t)[\hat\vartheta(t-1) + L(t)e_1(t)]\}^2
  = \sum_{j=t-p+1}^{t-1}[y(j) - \varphi^T(j)\hat\vartheta(t-1)]^2 - 2\sum_{j=t-p+1}^{t-1}[y(j) - \varphi^T(j)\hat\vartheta(t-1)]\varphi^T(j)L(t)e_1(t) + \sum_{j=t-p+1}^{t}[\varphi^T(j)L(t)e_1(t)]^2 + e_1^2(t) - 2\varphi^T(t)L(t)e_1^2(t).

The cross term vanishes by the normal equation satisfied by \hat\vartheta(t-1) over the window t-p+1 ≤ j ≤ t-1. Substituting \hat\vartheta(t-1) = \hat\theta(t-1) - P_2(t-1)\varphi(t-p)e_2(t) from (36), using that normal equation once more and noting that \sum_{j=t-p}^{t-1}[y(j) - \varphi^T(j)\hat\theta(t-1)]^2 = J_4(t-1) gives

J_4(t) = J_4(t-1) - e_2^2(t) - \sum_{j=t-p+1}^{t-1}[\varphi^T(j)P_2(t-1)\varphi(t-p)e_2(t)]^2 + \sum_{j=t-p+1}^{t}[\varphi^T(j)L(t)e_1(t)]^2 + e_1^2(t) - 2\varphi^T(t)L(t)e_1^2(t)
  = J_4(t-1) - e_2^2(t) - \varphi^T(t-p)P_2(t-1)\Big[\sum_{j=t-p+1}^{t-1}\varphi(j)\varphi^T(j)\Big]P_2(t-1)\varphi(t-p)e_2^2(t) + e_1^2(t) + L^T(t)\Big[\sum_{j=t-p+1}^{t}\varphi(j)\varphi^T(j)\Big]L(t)e_1^2(t) - 2\varphi^T(t)L(t)e_1^2(t).   (39)
Using (25), (26), (35) and (37), we have

J_4(t) = J_4(t-1) - e_2^2(t) - \varphi^T(t-p)P_2(t-1)[P_2^{-1}(t-1) - P_2^{-1}(0)]P_2(t-1)\varphi(t-p)e_2^2(t) + e_1^2(t) + L^T(t)[P^{-1}(t) - P^{-1}(0)]L(t)e_1^2(t) - 2\varphi^T(t)L(t)e_1^2(t)
  = J_4(t-1) - e_2^2(t) - \varphi^T(t-p)P_2(t-1)\varphi(t-p)e_2^2(t) + e_1^2(t) - \varphi^T(t)L(t)e_1^2(t) + \varphi^T(t-p)P_2^2(t-1)\varphi(t-p)e_2^2(t)/p_0 - L^T(t)L(t)e_1^2(t)/p_0
  = J_4(t-1) + \frac{e_1^2(t)}{1 + \varphi^T(t)P_2(t-1)\varphi(t)} - \frac{e_2^2(t)}{1 - \varphi^T(t-p)P(t-1)\varphi(t-p)} + \varphi^T(t-p)P_2^2(t-1)\varphi(t-p)e_2^2(t)/p_0 - L^T(t)L(t)e_1^2(t)/p_0.   (40)
For large p_0, we have the approximate recursive relation [46],

J_4(t) = J_4(t-1) + \frac{e_1^2(t)}{1 + \varphi^T(t)P_2(t-1)\varphi(t)} - \frac{e_2^2(t)}{1 - \varphi^T(t-p)P(t-1)\varphi(t-p)}.   (41)
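A single FDW-RLS step as an illustrative sketch (helper name and argument order are our assumptions; for t ≤ p the convention y(t) = 0, φ(t) = 0 applies, so the down-date is inert):

import numpy as np

def fdwrls_step(theta, P, J, phi_new, y_new, phi_old, y_old):
    # One FDW-RLS step (34)-(37) with the criterion recursion (41);
    # phi_new = phi(t), phi_old = phi(t-p).
    d_old = 1.0 - phi_old @ P @ phi_old
    P2 = P + np.outer(P @ phi_old, phi_old @ P) / d_old   # Eq. (37)
    e2 = y_old - phi_old @ theta                          # residual of removed sample
    vartheta = theta - (P2 @ phi_old) * e2                # Eq. (36)
    d_new = 1.0 + phi_new @ P2 @ phi_new
    P = P2 - np.outer(P2 @ phi_new, phi_new @ P2) / d_new # Eq. (35)
    e1 = y_new - phi_new @ vartheta                       # innovation e1(t)
    theta = vartheta + (P @ phi_new) * e1                 # Eq. (34)
    J = J + e1**2 / d_new - e2**2 / d_old                 # Eq. (41)
    return theta, P, J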
6. The finite-data-window recursive least squares algorithm with a forgetting factor

We introduce a forgetting factor λ (0 < λ < 1) into J_4(θ) and define the weighted criterion function,

J_5(\theta) := \sum_{j=t-p+1}^{t}\lambda^{t-j}[y(j) - \varphi^T(j)\theta]^2.
Referring to [41], minimizing J_5(θ) yields the finite-data-window recursive least squares algorithm with a forgetting factor (the FDW-FF-RLS algorithm for short):

\hat\theta(t) = \hat\vartheta(t-1) + L(t)e_3(t),   (42)

e_3(t) = y(t) - \varphi^T(t)\hat\vartheta(t-1),   (43)

L(t) = P(t)\varphi(t),   (44)

P(t) = \frac{1}{\lambda}\Big[P_2(t-1) - \frac{P_2(t-1)\varphi(t)\varphi^T(t)P_2(t-1)}{\lambda + \varphi^T(t)P_2(t-1)\varphi(t)}\Big],   (45)

\hat\vartheta(t-1) = \hat\theta(t-1) - \lambda^{p-1}P_2(t-1)\varphi(t-p)e_4(t),   (46)

e_4(t) = y(t-p) - \varphi^T(t-p)\hat\theta(t-1),   (47)

P_2(t-1) = P(t-1) + \frac{P(t-1)\varphi(t-p)\varphi^T(t-p)P(t-1)}{\lambda^{1-p} - \varphi^T(t-p)P(t-1)\varphi(t-p)}, \quad p \geq 2, \quad P(0) = p_0 I.   (48)
The criterion function J_5(θ) with θ = θ̂(t) is given by

J_5(t) := J_5[\hat\theta(t)] = \sum_{j=t-p+1}^{t}\lambda^{t-j}[y(j) - \varphi^T(j)\hat\theta(t)]^2.   (49)
By a derivation similar to that of J_4(t), we can obtain

J_5(t) = \lambda\Big[J_5(t-1) + \frac{e_3^2(t)}{\lambda + \varphi^T(t)P_2(t-1)\varphi(t)} - \frac{e_4^2(t)}{\lambda^{1-p} - \varphi^T(t-p)P(t-1)\varphi(t-p)}\Big].   (50)
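A single FDW-FF-RLS step in the same illustrative style (the helper name is our assumption, and the same zero-padding convention applies for t ≤ p):

import numpy as np

def fdwffrls_step(theta, P, J, phi_new, y_new, phi_old, y_old, lam, p):
    # One FDW-FF-RLS step (42)-(48) with the criterion recursion (50);
    # lam is the forgetting factor and p the window length.
    d_old = lam**(1 - p) - phi_old @ P @ phi_old
    P2 = P + np.outer(P @ phi_old, phi_old @ P) / d_old          # Eq. (48)
    e4 = y_old - phi_old @ theta                                 # Eq. (47)
    vartheta = theta - lam**(p - 1) * (P2 @ phi_old) * e4        # Eq. (46)
    d_new = lam + phi_new @ P2 @ phi_new
    P = (P2 - np.outer(P2 @ phi_new, phi_new @ P2) / d_new) / lam  # Eq. (45)
    e3 = y_new - phi_new @ vartheta                              # Eq. (43)
    theta = vartheta + (P @ phi_new) * e3                        # Eqs. (42), (44)
    J = lam * (J + e3**2 / d_new - e4**2 / d_old)                # Eq. (50)
    return theta, P, J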
7. The controlled autoregressive autoregressive moving average models

The above recursive computation formulas of the criterion functions can be extended to pseudo-linear regression models. For simplicity, we focus only on the RLS algorithm; the W-RLS, FF-RLS, FDW-RLS and FDW-FF-RLS algorithms can be treated analogously. Consider the controlled autoregressive autoregressive moving average (CARARMA) model [1,38,51],

A(z)y(t) = B(z)u(t) + \frac{D(z)}{C(z)}v(t),   (51)
where A(z), B(z), C(z) and D(z) are polynomials in the unit backward shift operator z^{-1} [z^{-1}y(t) = y(t-1)], and

A(z) := 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a},
B(z) := b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{n_b} z^{-n_b},
C(z) := 1 + c_1 z^{-1} + c_2 z^{-2} + \cdots + c_{n_c} z^{-n_c},
D(z) := 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{n_d} z^{-n_d}.
Let

w(t) := \frac{D(z)}{C(z)}v(t).
Define the parameter vector θ and the information vector φ(t) as

\theta := \begin{bmatrix}\theta_s \\ \theta_n\end{bmatrix} \in R^{n_a+n_b+n_c+n_d},
\theta_s := [a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b}]^T \in R^{n_a+n_b},
\theta_n := [c_1, c_2, \ldots, c_{n_c}, d_1, d_2, \ldots, d_{n_d}]^T \in R^{n_c+n_d},
\varphi(t) := \begin{bmatrix}\varphi_s(t) \\ \varphi_n(t)\end{bmatrix} \in R^{n_a+n_b+n_c+n_d},
\varphi_s(t) := [-y(t-1), -y(t-2), \ldots, -y(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T \in R^{n_a+n_b},
\varphi_n(t) := [-w(t-1), -w(t-2), \ldots, -w(t-n_c), v(t-1), v(t-2), \ldots, v(t-n_d)]^T \in R^{n_c+n_d},

where the subscripts s and n stand for the system model and the noise model, respectively. Eq. (51) can be written as
y(t) = \varphi_s^T(t)\theta_s + w(t) = \varphi_s^T(t)\theta_s + \varphi_n^T(t)\theta_n + v(t) = \varphi^T(t)\theta + v(t).
The recursive generalized extended least squares (RGELS) algorithm for estimating θ is given by [1,38]

\hat\theta(t) = \hat\theta(t-1) + L(t)e(t),   (52)

e(t) = y(t) - \hat\varphi^T(t)\hat\theta(t-1),   (53)

L(t) = P(t)\hat\varphi(t) = \frac{P(t-1)\hat\varphi(t)}{1 + \hat\varphi^T(t)P(t-1)\hat\varphi(t)},   (54)

P(t) = [I - L(t)\hat\varphi^T(t)]P(t-1), \quad P(0) = p_0 I,   (55)

\hat\varphi(t) = \begin{bmatrix}\varphi_s(t) \\ \hat\varphi_n(t)\end{bmatrix}, \quad \hat\theta(t) = \begin{bmatrix}\hat\theta_s(t) \\ \hat\theta_n(t)\end{bmatrix},   (56)

\varphi_s(t) = [-y(t-1), -y(t-2), \ldots, -y(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T,   (57)

\hat\varphi_n(t) = [-\hat w(t-1), -\hat w(t-2), \ldots, -\hat w(t-n_c), \hat v(t-1), \hat v(t-2), \ldots, \hat v(t-n_d)]^T,   (58)

\hat w(t) = y(t) - \varphi_s^T(t)\hat\theta_s(t),   (59)

\hat v(t) = y(t) - \hat\varphi^T(t)\hat\theta(t).   (60)
Similarly, for the criterion function

J_6(\theta) := \sum_{j=1}^{t}[y(j) - \hat\varphi^T(j)\theta]^2,
we can obtain its recursive computation formula,

J_6(t) := J_6[\hat\theta(t)] = J_6(t-1) + \frac{e^2(t)}{1 + \hat\varphi^T(t)P(t-1)\hat\varphi(t)}.
This recursive computation formula of the criterion function for the CARARMA model includes some special cases, i.e., n_c = n_d = 0 (the CAR model), n_c = 0 (the CARMA model) and n_d = 0 (the CARAR model).
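As a sketch of how the RGELS recursion (52)-(60) and the J_6 update fit together, consider the following Python outline; the function name, data layout and zero-padding of past values are our own assumptions:

import numpy as np

def rgels(u, y, na, nb, nc, nd, p0=1e6):
    # RGELS (52)-(60) for a CARARMA model, with the J6 recursion.
    n = na + nb + nc + nd
    theta = np.zeros(n); P = p0 * np.eye(n); J = 0.0
    w_hat = np.zeros_like(y); v_hat = np.zeros_like(y)
    past = lambda x, t, k: x[t - k] if t - k >= 0 else 0.0
    for t in range(len(y)):
        phi_s = np.r_[[-past(y, t, i) for i in range(1, na + 1)],
                      [ past(u, t, i) for i in range(1, nb + 1)]]
        phi_n = np.r_[[-past(w_hat, t, i) for i in range(1, nc + 1)],
                      [ past(v_hat, t, i) for i in range(1, nd + 1)]]
        phi = np.r_[phi_s, phi_n]                    # Eqs. (56)-(58)
        e = y[t] - phi @ theta                       # Eq. (53)
        denom = 1.0 + phi @ P @ phi
        L = (P @ phi) / denom                        # Eq. (54)
        theta = theta + L * e                        # Eq. (52)
        P = P - np.outer(L, phi) @ P                 # Eq. (55)
        w_hat[t] = y[t] - phi_s @ theta[:na + nb]    # Eq. (59)
        v_hat[t] = y[t] - phi @ theta                # Eq. (60)
        J = J + e**2 / denom                         # J6 recursion
    return theta, J

The unmeasurable terms w(t-i) and v(t-i) in φ_n(t) are replaced by their estimates, which is exactly why the criterion must be computed with the estimated information vector φ̂(t).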
8. Example

Consider the following model:

A(z)y(t) = B(z)u(t) + v(t),
A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 - 1.25z^{-1} + 0.50z^{-2},
B(z) = b_1 z^{-1} + b_2 z^{-2} = 1.68z^{-1} + 2.32z^{-2},
\theta = [a_1, a_2, b_1, b_2]^T = [-1.25, 0.50, 1.68, 2.32]^T.
In the simulation, {u(t)} is taken as a persistent excitation signal sequence with zero mean and unit variance, and {v(t)} as a white noise sequence with zero mean and variance σ² = 1.00²; the corresponding noise-to-signal ratio is δ_ns = 76.49%. Applying the FF-RLS algorithm in (20)-(23) with the forgetting factor λ = 0.995 to estimate the parameters of this system, the parameter estimates, their estimation errors δ := ||θ̂(t) - θ||/||θ|| and the criterion function J_3(t) in (24) are shown in Table 1.

Table 1
The FF-RLS estimates and the criterion functions J_3(t) in (24).

t            a_1        a_2       b_1       b_2       δ (%)     J_3(t)
100          -1.32820   0.55362   1.42695   2.30163   8.55790   58.27381
200          -1.27103   0.53777   1.59352   2.24572   3.85222   137.70162
500          -1.30202   0.54186   1.66012   2.40200   3.39974   173.72272
1000         -1.21762   0.47990   1.68212   2.31319   1.22517   175.61616
2000         -1.27340   0.50900   1.70966   2.33267   1.29056   195.21175
3000         -1.23769   0.47176   1.65597   2.30061   1.37812   188.69930
True values  -1.25000   0.50000   1.68000   2.32000
Applying Eqs. (18) and (19) to compute the least squares estimates θ̂(t) and the criterion function J_3[θ̂(t)] directly, the parameter estimates, their estimation errors δ and the criterion function J_3[θ̂(t)] are shown in Table 2.

Table 2
The FF-RLS estimates and the criterion functions J_3[θ̂(t)] in (19).

t            a_1        a_2       b_1       b_2       δ (%)     J_3[θ̂(t)]
100          -1.32820   0.55362   1.42695   2.30163   8.55790   70.56116
200          -1.27103   0.53777   1.59352   2.24572   3.85222   151.13372
500          -1.30202   0.54186   1.66012   2.40200   3.39974   184.01511
1000         -1.21762   0.47990   1.68212   2.31319   1.22517   182.78669
2000         -1.27340   0.50900   1.70966   2.33267   1.29056   202.92660
3000         -1.23769   0.47176   1.65597   2.30061   1.37812   196.68693
True values  -1.25000   0.50000   1.68000   2.32000
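To reproduce this comparison numerically, a minimal driver along the following lines can be used; it reuses the ffrls_step sketch from Section 4, and the random-number seed and helper conventions are ours rather than the paper's:

import numpy as np

# Generate data from the example model and run FF-RLS, accumulating
# J3(t) by the recursion (24), then evaluate (19) directly at theta_hat(T).
rng = np.random.default_rng(0)
a1, a2, b1, b2 = -1.25, 0.50, 1.68, 2.32
T, lam = 3000, 0.995
u = rng.standard_normal(T); v = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T):
    y[t] = (-a1 * (y[t-1] if t >= 1 else 0) - a2 * (y[t-2] if t >= 2 else 0)
            + b1 * (u[t-1] if t >= 1 else 0) + b2 * (u[t-2] if t >= 2 else 0)
            + v[t])
theta = np.zeros(4); P = 1e6 * np.eye(4); J = 0.0
Phi = np.zeros((T, 4))
for t in range(T):
    phi = np.array([-(y[t-1] if t >= 1 else 0), -(y[t-2] if t >= 2 else 0),
                    u[t-1] if t >= 1 else 0, u[t-2] if t >= 2 else 0])
    Phi[t] = phi
    theta, P, J = ffrls_step(theta, P, J, phi, y[t], lam)
w = lam ** np.arange(T - 1, -1, -1)       # weights lambda^(t-j) in (19)
J_direct = np.sum(w * (y - Phi @ theta) ** 2)
print(J, J_direct)

The printed pair corresponds to J_3(t) from (24) and J_3[θ̂(t)] from (19) at t = 3000; as in the tables, the two values should be close but not identical, since (24) neglects the p_0-dependent term.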
Comparing the parameter estimates and the criterion functions in Tables 1 and 2, we can see that the values of the recursive criterion function J_3(t) and of the directly computed criterion function J_3[θ̂(t)] are very close.

9. Conclusions

The criterion functions of the recursive parameter estimation algorithms have been studied for linear regression models and pseudo-linear regression models, including the equation error models and the output error models, and their computation has been implemented through recursive formulas. The recursive computation of the criterion functions can be extended to other identification algorithms, e.g., the stochastic gradient algorithms [52–55], the multi-innovation identification algorithms [56–61], the hierarchical identification algorithms [62–65] and the maximum likelihood methods [66–68], for linear or pseudo-linear multivariable systems or nonlinear systems [69–76].

References

[1] F. Ding, System Identification – New Theory and Methods, Science Press, Beijing, 2013.
[2] L.F. Zhuang, F. Pan, et al., Parameter and state estimation algorithm for single-input single-output linear systems using the canonical state space models, Appl. Math. Model. 36 (8) (2012) 3454–3463.
[3] F. Ding, Y. Gu, Performance analysis of the auxiliary model based least squares identification algorithm for one-step state delay systems, Int. J. Comput. Math. 89 (15) (2012) 2019–2028.
[4] Y. Zhang, Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods, Math. Comput. Model. 53 (9–10) (2011) 1810–1819.
[5] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, Int. J. Control 83 (3) (2010) 538–551.
[6] Y. Shi, B. Yu, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Trans. Autom. Control 54 (7) (2009) 1668–1674.
[7] F. Ding, T. Chen, Least squares based self-tuning control of dual-rate systems, Int. J. Adapt. Control Signal Process. 18 (8) (2004) 697–714.
[8] F. Ding, T. Chen, A gradient based adaptive control algorithm for dual-rate systems, Asian J. Control 8 (4) (2006) 314–323.
[9] F. Ding, T. Chen, Z. Iwai, Adaptive digital control of Hammerstein nonlinear systems with limited output sampling, SIAM J. Control Optim. 45 (6) (2007) 2257–2276.
[10] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Syst. Control Lett. 58 (1) (2009) 69–75.
[11] B. Yu, Y. Shi, H. Huang, l-2 and l-infinity filtering for multirate systems using lifted models, Circuits Syst. Signal Process. 27 (5) (2008) 699–711.
[12] F. Ding, T. Chen, Hierarchical identification of lifted state-space models for general dual-rate systems, IEEE Trans. Circuits Syst. I: Regular Papers 52 (6) (2005) 1179–1187.
[13] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Appl. Math. Model. 37 (7) (2013) 4798–4808.
[14] D.Q. Wang, G.W. Yang, R.F. Ding, Gradient-based iterative parameter estimation for Box–Jenkins systems, Comput. Math. Appl. 60 (5) (2010) 1200–1208.
[15] J.H. Li, R.F. Ding, Y. Yang, Iterative parameter identification methods for nonlinear functions, Appl. Math. Model. 36 (6) (2012) 2739–2750.
[16] F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, Proc. Inst. Mech. Eng. Part I: J. Syst. Control Eng. 226 (1) (2012) 43–55.
[17] D.Q. Wang, R. Ding, X.Z. Dong, Iterative parameter estimation for a class of multivariable systems based on the hierarchical identification principle and the gradient search, Circuits Syst. Signal Process. 31 (6) (2012) 2167–2177.
[18] H.H. Duan, J. Jia, R.F. Ding, Two-stage recursive least squares parameter estimation algorithm for output error models, Math. Comput. Model. 55 (3–4) (2012) 1151–1159.
[19] L. Xie, H.Z. Yang, et al., Recursive least squares parameter estimation for non-uniformly sampled systems based on the data filtering, Math. Comput. Model. 54 (1–2) (2011) 315–324.
[20] F. Ding, Y. Shi, T. Chen, Performance analysis of estimation algorithms of non-stationary ARMA processes, IEEE Trans. Signal Process. 54 (3) (2006) 1041–1053.
[21] S.K. Li, T.Z. Huang, LSQR iterative method for generalized coupled Sylvester matrix equations, Appl. Math. Model. 36 (8) (2012) 3545–3554.
[22] M. Dehghan, M. Hajarian, An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation, Appl. Math. Comput. 202 (2) (2008) 571–588.
[23] M. Dehghan, M. Hajarian, Matrix equations over (R, S)-symmetric and (R, S)-skew symmetric matrices, Comput. Math. Appl. 59 (11) (2010) 3583–3594.
[24] A.G. Wu, B. Li, Y. Yang, G.R. Duan, Finite iterative solutions to coupled Sylvester-conjugate matrix equations, Appl. Math. Model. 35 (3) (2011) 1065–1080.
[25] F. Ding, T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Autom. Control 50 (8) (2005) 1216–1221.
[26] F. Ding, T. Chen, Iterative least squares solutions of coupled Sylvester matrix equations, Syst. Control Lett. 54 (2) (2005) 95–107.
[27] L. Xie, J. Ding, et al., Gradient based iterative solutions for general linear matrix equations, Comput. Math. Appl. 58 (7) (2009) 1441–1448.
[28] F. Ding, Transformations between some special matrices, Comput. Math. Appl. 59 (8) (2010) 2676–2695.
[29] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM J. Control Optim. 44 (6) (2006) 2269–2284.
[30] F. Ding, X.P. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl. Math. Comput. 197 (1) (2008) 41–50.
[31] Y.J. Liu, F. Ding, Y. Shi, Least squares estimation for a class of non-uniformly sampled systems based on the hierarchical identification principle, Circuits Syst. Signal Process. 31 (6) (2012) 1985–2000.
[32] L.L. Han, J. Sheng, et al., Auxiliary models based recursive least squares identification for multirate multi-input systems, Math. Comput. Model. 50 (7–8) (2009) 1100–1106.
[33] Y. Zhang, G.M. Cui, Bias compensation methods for stochastic systems with colored noise, Appl. Math. Model. 35 (4) (2011) 1709–1716.
[34] L. Xie, Y.J. Liu, H.Z. Yang, Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F, Appl. Math. Comput. 217 (5) (2010) 2191–2199.
[35] M. Dehghan, M. Hajarian, An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices, Appl. Math. Model. 34 (3) (2010) 639–654.
[36] M. Dehghan, M. Hajarian, Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations, Appl. Math. Model. 35 (7) (2011) 3285–3300.
[37] J. Ding, Y.J. Liu, et al., Iterative solutions to matrix equations of the form A_iXB_i = F_i, Comput. Math. Appl. 59 (11) (2010) 3500–3507.
[38] D.Q. Wang, F. Ding, Input-output data filtering based recursive least squares identification for CARARMA systems, Digital Signal Process. 20 (4) (2010) 991–999.
[39] D.Q. Wang, Y.Y. Chu, et al., Auxiliary model-based recursive generalized least squares parameter estimation for Hammerstein OEAR systems, Math. Comput. Model. 52 (1–2) (2010) 309–317.
[40] F. Ding, T. Chen, Performance bounds of the forgetting factor least squares algorithm for time-varying systems with finite measurement data, IEEE Trans. Circuits Syst. I: Regular Papers 52 (3) (2005) 555–566.
[41] F. Ding, Y.S. Xiao, A finite-data-window least squares algorithm with a forgetting factor for dynamical modeling, Appl. Math. Comput. 186 (1) (2007) 184–192.
[42] D.Q. Wang, F. Ding, Hierarchical least squares estimation algorithm for Hammerstein–Wiener systems, IEEE Signal Process. Lett. 19 (12) (2012) 825–828.
[43] Y.S. Xiao, Y. Zhang, J. Ding, J.Y. Dai, The residual based interactive least squares algorithms and simulation studies, Comput. Math. Appl. 58 (6) (2009) 1190–1197.
[44] F. Ding, Several multi-innovation identification methods, Digital Signal Process. 20 (4) (2010) 1027–1039.
[45] J.X. Ma, W.L. Xiong, R. Ding, Recursive relations of the criterion functions for the recursive least squares algorithms, in: Proceedings of the 24th Chinese Control and Decision Conference (2012 CCDC), May 23–25, 2012, Taiyuan, China, pp. 2069–2074.
[46] J.X. Ma, F. Ding, Recursive relations of the cost functions for the least squares algorithms for multivariable systems, Circuits Syst. Signal Process. 32 (1) (2013) 83–101.
[47] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[48] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[49] F. Ding, X.P. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises, Signal Process. 89 (10) (2009) 1883–1890.
[50] L. Ljung, System Identification: Theory for the User, second ed., Prentice-Hall, Englewood Cliffs, NJ, 1999.
[51] J.H. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration, Appl. Math. Lett. 26 (1) (2013) 91–96.
[52] F. Ding, Y. Gu, Performance analysis of the auxiliary model-based stochastic gradient parameter estimation algorithm for state space systems with one-step state delay, Circuits Syst. Signal Process. 32 (2) (2013) 585–599.
[53] J. Ding, Y. Shi, et al., A modified stochastic gradient based parameter estimation algorithm for dual-rate sampled-data systems, Digital Signal Process. 20 (4) (2010) 1238–1249.
[54] Y.J. Liu, J. Sheng, R.F. Ding, Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems, Comput. Math. Appl. 59 (8) (2010) 2615–2627.
[55] F. Ding, G. Liu, X.P. Liu, Partially coupled stochastic gradient identification methods for non-uniformly sampled systems, IEEE Trans. Autom. Control 55 (8) (2010) 1976–1981.
[56] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Appl. Math. Model. 37 (4) (2013) 1694–1704.
[57] J. Chen, Y. Zhang, R.F. Ding, Auxiliary model based multi-innovation algorithms for multivariable nonlinear systems, Math. Comput. Model. 52 (9–10) (2010) 1428–1434.
[58] Y.J. Liu, L. Yu, et al., Multi-innovation extended stochastic gradient algorithm and its performance analysis, Circuits Syst. Signal Process. 29 (4) (2010) 649–667.
[59] F. Ding, X.P. Liu, G. Liu, Multi-innovation least squares identification for linear and pseudo-linear regression models, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 40 (3) (2010) 767–778.
[60] F. Ding, G. Liu, X.P. Liu, Parameter estimation with scarce measurements, Automatica 47 (8) (2011) 1646–1655.
[61] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Appl. Math. Comput. 215 (4) (2009) 1477–1483.
[62] F. Ding, L. Qiu, T. Chen, Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems, Automatica 45 (2) (2009) 324–332.
[63] J. Ding, F. Ding, X.P. Liu, G. Liu, Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data, IEEE Trans. Autom. Control 56 (11) (2011) 2677–2683.
[64] H.Q. Han, L. Xie, et al., Hierarchical least squares based iterative identification for multivariable systems with moving average noises, Math. Comput. Model. 51 (9–10) (2010) 1213–1220.
[65] Z.N. Zhang, F. Ding, X.G. Liu, Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average systems, Comput. Math. Appl. 61 (3) (2011) 672–682.
[66] J.H. Li, F. Ding, G.W. Yang, Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average systems, Math. Comput. Model. 55 (3–4) (2012) 442–450.
[67] W. Wang, F. Ding, J.Y. Dai, Maximum likelihood least squares identification for systems with autoregressive moving average noise, Appl. Math. Model. 36 (5) (2012) 1842–1853.
[68] J.H. Li, F. Ding, Maximum likelihood stochastic gradient estimation for Hammerstein systems with colored noise based on the key term separation technique, Comput. Math. Appl. 62 (11) (2011) 4170–4177.
[69] F. Ding, X.P. Liu, G. Liu, Gradient based and least-squares based iterative identification methods for OE and OEMA systems, Digital Signal Process. 20 (3) (2010) 664–677.
[70] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digital Signal Process. 21 (2) (2011) 215–238.
[71] J. Ding, F. Ding, Bias compensation based parameter estimation for output error moving average systems, Int. J. Adapt. Control Signal Process. 25 (12) (2011) 1100–1111.
[72] F. Ding, Decomposition based fast least squares algorithm for output error systems, Signal Process. 93 (5) (2013) 1235–1242.
[73] F. Ding, Coupled-least-squares identification for multivariable systems, IET Control Theory Appl. 7 (1) (2013) 68–79.
[74] F. Ding, X.G. Liu, J. Chu, Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle, IET Control Theory Appl. 7 (2) (2013) 176–184.
[75] D.Q. Wang, F. Ding, Least squares based and gradient based iterative identification for Wiener nonlinear systems, Signal Process. 91 (5) (2011) 1182–1189.
[76] D.Q. Wang, Least squares-based recursive and iterative estimation for output error moving average systems using data filtering, IET Control Theory Appl. 5 (14) (2011) 1648–1657.