Auxiliary model based multi-innovation algorithms for multivariable nonlinear systems


Mathematical and Computer Modelling 52 (2010) 1428–1434


Jing Chen (a,d), Yan Zhang (b), Ruifeng Ding (c,d,∗)

a Wuxi Professional College of Science and Technology, Wuxi 214028, PR China
b Wuxi Institute of Technology, Wuxi 214121, PR China
c Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China
d School of Communication and Control Engineering, Jiangnan University, Wuxi 214122, PR China

Article history:
Received 8 February 2010
Received in revised form 22 May 2010
Accepted 25 May 2010

Keywords:
Parameter estimation
Stochastic gradient
Auxiliary model identification
Multi-innovation identification
Multi-input multi-output systems

Abstract: This paper considers the identification problem for multi-input multi-output nonlinear systems. The difficulty in identifying the parameters of such systems is that the information vector in the identification model contains unknown variables; this difficulty is overcome by using the auxiliary model identification idea. An auxiliary model based multi-innovation extended stochastic gradient algorithm is presented by expanding the innovation vector to an innovation matrix. The proposed algorithm uses not only the current innovation but also past innovations at each recursion, and thus the parameter estimation accuracy can be improved. A numerical example shows that the proposed algorithm is effective. © 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Parameter estimation has many applications in areas such as system modelling and signal processing, e.g., [1–4]. There are two important classes of parameter estimation approaches: least squares (LS) and stochastic gradient (SG) methods [5–8]. The SG algorithm requires less computational effort but has a slower convergence rate than the recursive least squares (RLS) algorithm [8–10]. In order to improve the convergence rate of identification algorithms, Ding et al. presented a multi-innovation identification theory for parameter estimation [11–22]. Other identification methods include the data filtering based algorithms [23], the gradient based algorithms [24–27], and the iterative algorithms [24,26,28,29].

A typical class of nonlinear systems is the class of Hammerstein nonlinear systems, which are common in industry. Ding and Chen presented a least squares based iterative algorithm and a recursive extended least squares algorithm for Hammerstein nonlinear ARMAX systems [24,29,30]. Wang presented extended stochastic gradient identification algorithms for Hammerstein–Wiener ARMAX systems [31].

This paper uses the auxiliary model identification idea [32–36] and the multi-innovation identification theory [11,12] to study the identification problem of multi-input multi-output (i.e., multivariable) nonlinear systems, and presents an auxiliary model based multi-innovation extended stochastic gradient (AM-MI-ESG) algorithm for multivariable output error moving average nonlinear systems by expanding the innovation vector to an innovation matrix. The proposed algorithm uses not only the current innovation but also past innovations at each recursion, and thus the parameter estimation accuracy can be improved.

This work was supported by the National Natural Science Foundation of China.



∗ Corresponding author at: School of Communication and Control Engineering, Jiangnan University, Wuxi 214122, PR China.
E-mail addresses: [email protected] (J. Chen), [email protected] (Y. Zhang), [email protected] (R. Ding).

doi:10.1016/j.mcm.2010.05.026


This paper is organized as follows. Section 2 introduces the identification model related to multivariable nonlinear systems. Section 3 develops the SG algorithm for the nonlinear MIMO systems and Section 4 derives an AM-MI-ESG algorithm for these systems. Section 5 provides an illustrative example. Finally, concluding remarks are given in Section 6.

2. The system description and identification model

Let us introduce some notation first. The symbol $I$ stands for an identity matrix of appropriate size; the norm of a matrix $X$ is defined by $\|X\|^2 := \mathrm{tr}[XX^T] = \mathrm{tr}[X^TX]$; the superscript $T$ denotes the matrix transpose.

Consider a multivariable output error moving average (OEMA) nonlinear system:
$$y(t) = A^{-1}(z)B(z)f(u(t)) + D(z)v(t), \qquad (1)$$
where $u(t) = [u_1(t), u_2(t), \ldots, u_r(t)]^T \in \mathbb{R}^r$ is the system input vector, $y(t) \in \mathbb{R}^m$ is the system output vector, $v(t) \in \mathbb{R}^m$ is a stochastic white noise vector with zero mean, and $A(z)$, $B(z)$ and $D(z)$ are polynomial matrices in the unit backward shift operator $z^{-1}$ [$z^{-1}y(t) = y(t-1)$]:

$$A(z) = I + A_1 z^{-1} + A_2 z^{-2} + \cdots + A_{n_a} z^{-n_a}, \qquad A_i \in \mathbb{R}^{m \times m},$$
$$B(z) = B_1 z^{-1} + B_2 z^{-2} + \cdots + B_{n_b} z^{-n_b}, \qquad B_i \in \mathbb{R}^{m \times r},$$
$$D(z) = I + D_1 z^{-1} + D_2 z^{-2} + \cdots + D_{n_d} z^{-n_d}, \qquad D_i \in \mathbb{R}^{m \times m}.$$

The nonlinear function $f(u(t)) \in \mathbb{R}^r$ is a vector function:
$$f(u(t)) = \begin{bmatrix} f_1(u_1(t)) \\ f_2(u_2(t)) \\ \vdots \\ f_r(u_r(t)) \end{bmatrix} \in \mathbb{R}^r,$$
where each $f_i(u_i(t))$ is a nonlinear function expressed in a known basis $(\gamma_1, \gamma_2, \ldots, \gamma_l)$:
$$f_i(u_i(t)) = c_1\gamma_1(u_i(t)) + c_2\gamma_2(u_i(t)) + \cdots + c_l\gamma_l(u_i(t)),$$
with unknown parameters $c_i$. Since any pair $(\alpha f(u(t)), \alpha^{-1}B(z))$ for some nonzero constant $\alpha$ produces identical input–output measurements, due to $\alpha f(u(t)) \times \alpha^{-1}B(z) = f(u(t)) \times B(z)$, no identification scheme can distinguish $(f(u(t)), B(z))$ from $(\alpha f(u(t)), \alpha^{-1}B(z))$. Therefore, to obtain a unique parameterization, one of the gains of $f(u(t))$ and $B(z)$ has to be fixed, without loss of generality; there are several ways to normalize the gains. Here, we adopt the assumption used in [24,29,34] that the first coefficient of the function $f(\cdot)$ equals 1, i.e., $c_1 = 1$.

Define the inner (middle) vector:
$$x(t) := A^{-1}(z)B(z)f(u(t)). \qquad (2)$$
From (1) and (2), we have
$$y(t) = x(t) + D(z)v(t). \qquad (3)$$

Define the parameter matrix $\theta$ and the information vector $\varphi(t)$ as
$$\theta^T := [\theta_s^T, \theta_n^T] \in \mathbb{R}^{m \times n}, \qquad n := mn_a + lrn_b + mn_d,$$
$$\theta_s^T := [A_1, A_2, \ldots, A_{n_a}, B_1c_1, B_2c_1, \ldots, B_{n_b}c_1, B_1c_2, B_2c_2, \ldots, B_{n_b}c_2, \ldots, B_1c_l, B_2c_l, \ldots, B_{n_b}c_l] \in \mathbb{R}^{m \times (mn_a + lrn_b)},$$
$$\theta_n^T := [D_1, D_2, \ldots, D_{n_d}] \in \mathbb{R}^{m \times (mn_d)},$$
$$\varphi(t) := \begin{bmatrix} \varphi_s(t) \\ \varphi_n(t) \end{bmatrix} \in \mathbb{R}^n,$$
$$\varphi_s(t) := [-x^T(t-1), -x^T(t-2), \ldots, -x^T(t-n_a), \gamma_1^T(u(t-1)), \gamma_1^T(u(t-2)), \ldots, \gamma_1^T(u(t-n_b)), \gamma_2^T(u(t-1)), \gamma_2^T(u(t-2)), \ldots, \gamma_2^T(u(t-n_b)), \ldots, \gamma_l^T(u(t-1)), \gamma_l^T(u(t-2)), \ldots, \gamma_l^T(u(t-n_b))]^T \in \mathbb{R}^{mn_a + lrn_b},$$
$$\varphi_n(t) := [v^T(t-1), v^T(t-2), \ldots, v^T(t-n_d)]^T \in \mathbb{R}^{mn_d},$$
$$\gamma_i(u(t)) := [\gamma_i(u_1(t)), \gamma_i(u_2(t)), \ldots, \gamma_i(u_r(t))]^T \in \mathbb{R}^r,$$
where the (roman) subscripts s and n represent the first letters of the words "system" and "noise", respectively. From (2) and (3), we have
$$x(t) = \theta_s^T \varphi_s(t), \qquad (4)$$
$$y(t) = \theta^T \varphi(t) + v(t). \qquad (5)$$
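To make the structure of the identification model (4)–(5) concrete, the following Python sketch assembles $\varphi_s(t)$ and $\varphi_n(t)$ from stored past data. It is only an illustration under stated assumptions: the function name, the history layout, and the use of NumPy are choices of this sketch, not part of the original derivation.

```python
import numpy as np

def build_phi(x_hist, u_hist, v_hist, gammas, na, nb, nd):
    """Assemble phi(t) = [phi_s(t); phi_n(t)] from past data (illustrative helper).

    x_hist[i] holds x(t-i-1) as a NumPy array in R^m,   i = 0, ..., na-1
    u_hist[j] holds u(t-j-1) as a NumPy array in R^r,   j = 0, ..., nb-1
    v_hist[k] holds v(t-k-1) as a NumPy array in R^m,   k = 0, ..., nd-1
    gammas    is the list of scalar basis functions (gamma_1, ..., gamma_l).
    """
    phi_s = [-x_hist[i] for i in range(na)]            # -x(t-1), ..., -x(t-na)
    for g in gammas:                                   # gamma_1 block first, then gamma_2, ...
        phi_s += [np.array([g(ui) for ui in u_hist[j]]) for j in range(nb)]
    phi_n = [v_hist[k] for k in range(nd)]             # v(t-1), ..., v(t-nd)
    return np.concatenate(phi_s), np.concatenate(phi_n)

# example usage with gamma_1(u) = u, gamma_2(u) = u**2 and na = nb = nd = 1 (hypothetical values)
phi_s, phi_n = build_phi([np.zeros(2)], [np.zeros(2)], [np.zeros(2)],
                         [lambda u: u, lambda u: u**2], 1, 1, 1)
```

The resulting vectors have dimensions $mn_a + lrn_b$ and $mn_d$, matching the definitions above.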


3. The stochastic gradient algorithm

Let $\hat{\theta}(t)$ be the estimate of $\theta$. Defining and minimizing the cost function
$$J_1(\theta) := \mathrm{E}[\|y(t) - \theta^T\varphi(t)\|^2]$$
gives the following stochastic gradient (SG) algorithm for estimating the parameter matrix $\theta$:
$$\hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\varphi(t)}{r(t)}e^T(t), \qquad (6)$$
$$e(t) = y(t) - \hat{\theta}^T(t-1)\varphi(t), \qquad (7)$$
$$r(t) = r(t-1) + \|\varphi(t)\|^2, \qquad r(0) = 1. \qquad (8)$$

Since the information vector $\varphi(t)$ on the right-hand side of (6) contains the unknown inner variables $x(t-i)$ and the unmeasurable noise terms $v(t-j)$, the algorithm in (6)–(8) is impossible to implement. The solution is to use the auxiliary model identification idea [32]: the unknowns $x(t-i)$ are replaced by the outputs $x_a(t-i)$ of the auxiliary model (or reference model)
$$x_a(t) = \frac{B_a(z)}{A_a(z)}f(u(t)) \quad \text{or} \quad x_a(t) = \theta_{as}^T(t)\varphi_{as}(t).$$
Here we use the estimate of $A^{-1}(z)B(z)$ as the auxiliary model $A_a^{-1}(z)B_a(z)$; namely, we take $\theta_{as}(t)$ to be the estimate $\hat{\theta}_s(t)$ of $\theta_s$, take $\varphi_{as}(t)$ to be the regression vector formed from $x_a(t-i)$ and $\gamma_i(u(t-j))$, and use $\varphi_{as}(t)$ in place of $\varphi_s(t)$. Identification algorithms based on this idea are called auxiliary model identification methods. There are other ways to choose the auxiliary model, e.g., using a finite impulse response model [32,36].

According to the auxiliary model identification idea, the unknown variables $x(t-i)$ in $\varphi_s(t)$ are replaced with the outputs $x_a(t-i)$ of the auxiliary model, and $v(t-i)$ are replaced with the estimated residuals $\hat{v}(t-i)$; we then obtain an auxiliary model based extended stochastic gradient (AM-ESG) algorithm:
$$\hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\hat{\varphi}(t)}{r(t)}e^T(t), \qquad (9)$$
$$e(t) = y(t) - \hat{\theta}^T(t-1)\hat{\varphi}(t), \qquad (10)$$
$$r(t) = r(t-1) + \|\hat{\varphi}(t)\|^2, \qquad r(0) = 1, \qquad (11)$$
$$\hat{\varphi}(t) = \begin{bmatrix} \hat{\varphi}_s(t) \\ \hat{\varphi}_n(t) \end{bmatrix}, \qquad (12)$$
$$\hat{\varphi}_s(t) = [-x_a^T(t-1), -x_a^T(t-2), \ldots, -x_a^T(t-n_a), \gamma_1^T(u(t-1)), \gamma_1^T(u(t-2)), \ldots, \gamma_1^T(u(t-n_b)), \gamma_2^T(u(t-1)), \gamma_2^T(u(t-2)), \ldots, \gamma_2^T(u(t-n_b)), \ldots, \gamma_l^T(u(t-1)), \gamma_l^T(u(t-2)), \ldots, \gamma_l^T(u(t-n_b))]^T, \qquad (13)$$
$$\hat{\varphi}_n(t) = [\hat{v}^T(t-1), \hat{v}^T(t-2), \ldots, \hat{v}^T(t-n_d)]^T, \qquad (14)$$
$$x_a(t) = \hat{\theta}_s^T(t)\hat{\varphi}_s(t), \qquad (15)$$
$$\hat{v}(t) = y(t) - \hat{\theta}^T(t)\hat{\varphi}(t). \qquad (16)$$
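A minimal sketch of one AM-ESG recursion (9)–(11) is given below; the variable names and array shapes are illustrative assumptions of this sketch, not notation from the paper.

```python
import numpy as np

def am_esg_step(theta, r, y_t, phi_hat):
    """One AM-ESG recursion, following equations (9)-(11).

    theta   : previous estimate theta(t-1), shape (n, m)
    r       : previous scalar r(t-1), with r(0) = 1
    y_t     : output y(t), shape (m,)
    phi_hat : estimated information vector phi_hat(t), shape (n,)
    """
    e_t = y_t - theta.T @ phi_hat                # e(t) = y(t) - theta(t-1)^T phi_hat(t)            (10)
    r = r + phi_hat @ phi_hat                    # r(t) = r(t-1) + ||phi_hat(t)||^2                 (11)
    theta = theta + np.outer(phi_hat, e_t) / r   # theta(t) = theta(t-1) + phi_hat(t) e^T(t)/r(t)   (9)
    return theta, r, e_t
```

After each step, $x_a(t)$ and $\hat{v}(t)$ would be computed from (15)–(16) and pushed into the histories used to build $\hat{\varphi}(t+1)$.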

4. The multi-innovation gradient algorithm

In order to enhance the convergence rate of the AM-ESG algorithm, the objective of this paper is to extend the AM-ESG algorithm such that the parameter estimation accuracy can be improved. Such an algorithm is derived from the multi-innovation identification theory.

At time $t$, the AM-ESG algorithm uses only the current data $y(t)$ and $\hat{\varphi}(t)$ and thus has a slow convergence rate. Next, we derive a new algorithm by expanding the single innovation vector $e(t) \in \mathbb{R}^m$ to an innovation matrix [11]
$$E(p,t) = [y(t) - \hat{\theta}^T(t-1)\hat{\varphi}(t),\; y(t-1) - \hat{\theta}^T(t-1)\hat{\varphi}(t-1),\; \ldots,\; y(t-p+1) - \hat{\theta}^T(t-1)\hat{\varphi}(t-p+1)] \in \mathbb{R}^{m \times p},$$
which uses the past data $\{y(t-i), \hat{\varphi}(t-i): i = 1, 2, \ldots, p-1\}$, where $p$ represents the innovation length.

Define the information matrix $\hat{\Phi}(p,t)$ and the stacked output matrix $Y(p,t)$ as
$$\hat{\Phi}(p,t) := [\hat{\varphi}(t), \hat{\varphi}(t-1), \ldots, \hat{\varphi}(t-p+1)] \in \mathbb{R}^{n \times p},$$
$$Y(p,t) := [y(t), y(t-1), \ldots, y(t-p+1)] \in \mathbb{R}^{m \times p}.$$


The innovation matrix $E(p,t)$ can be expressed as
$$E(p,t) = Y(p,t) - \hat{\theta}^T(t-1)\hat{\Phi}(p,t).$$
Referring to the multi-innovation stochastic gradient method for linear regression models, we obtain the following auxiliary model based multi-innovation extended stochastic gradient algorithm for multivariable nonlinear OEMA systems with innovation length $p$ (the AM-MI-ESG algorithm for short):
$$\hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\hat{\Phi}(p,t)}{r(t)}E^T(p,t), \qquad (17)$$
$$E(p,t) = Y(p,t) - \hat{\theta}^T(t-1)\hat{\Phi}(p,t), \qquad (18)$$
$$r(t) = r(t-1) + \|\hat{\varphi}_s(t)\|^2 + \|\hat{\varphi}_n(t)\|^2, \qquad r(0) = 1, \qquad (19)$$
$$Y(p,t) := [y(t), y(t-1), \ldots, y(t-p+1)], \qquad (20)$$
$$\hat{\Phi}(p,t) = \begin{bmatrix} \hat{\Phi}_s(p,t) \\ \hat{\Phi}_n(p,t) \end{bmatrix}, \qquad (21)$$
$$\hat{\Phi}_s(p,t) = [\hat{\varphi}_s(t), \hat{\varphi}_s(t-1), \ldots, \hat{\varphi}_s(t-p+1)], \qquad (22)$$
$$\hat{\Phi}_n(p,t) = [\hat{\varphi}_n(t), \hat{\varphi}_n(t-1), \ldots, \hat{\varphi}_n(t-p+1)], \qquad (23)$$
$$\hat{\varphi}_s(t) = [-x_a^T(t-1), -x_a^T(t-2), \ldots, -x_a^T(t-n_a), \gamma_1^T(u(t-1)), \gamma_1^T(u(t-2)), \ldots, \gamma_1^T(u(t-n_b)), \gamma_2^T(u(t-1)), \gamma_2^T(u(t-2)), \ldots, \gamma_2^T(u(t-n_b)), \ldots, \gamma_l^T(u(t-1)), \gamma_l^T(u(t-2)), \ldots, \gamma_l^T(u(t-n_b))]^T, \qquad (24)$$
$$\hat{\varphi}_n(t) = [\hat{v}^T(t-1), \hat{v}^T(t-2), \ldots, \hat{v}^T(t-n_d)]^T, \qquad (25)$$
$$x_a(t) = \hat{\theta}_s^T(t)\hat{\varphi}_s(t), \qquad (26)$$
$$\hat{v}(t) = y(t) - \hat{\theta}^T(t)\hat{\varphi}(t). \qquad (27)$$
Because $E(p,t) \in \mathbb{R}^{m \times p}$ is an innovation matrix (i.e., a multi-innovation), the algorithm in (17)–(27) is called a multi-innovation identification algorithm. For $p = 1$, the AM-MI-ESG algorithm reduces to the AM-ESG algorithm in (9)–(16).
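The stacked update (17)–(19) can be sketched in Python as follows; this is an illustrative fragment (the function name, argument layout and shapes are assumptions), showing only how the innovation matrix $E(p,t)$ enters the recursion.

```python
import numpy as np

def am_mi_esg_step(theta, r, Y, Phi_hat):
    """One AM-MI-ESG recursion, following equations (17)-(19).

    theta   : previous estimate theta(t-1), shape (n, m)
    r       : previous scalar r(t-1), with r(0) = 1
    Y       : stacked outputs Y(p, t) = [y(t), ..., y(t-p+1)], shape (m, p)
    Phi_hat : stacked regressors Phi_hat(p, t), shape (n, p); column 0 is phi_hat(t)
    """
    E = Y - theta.T @ Phi_hat          # innovation matrix E(p, t)                                   (18)
    phi_t = Phi_hat[:, 0]              # current regressor phi_hat(t)
    r = r + phi_t @ phi_t              # r(t) = r(t-1) + ||phi_hat_s(t)||^2 + ||phi_hat_n(t)||^2     (19)
    theta = theta + Phi_hat @ E.T / r  # theta(t) = theta(t-1) + Phi_hat(p,t) E^T(p,t) / r(t)        (17)
    return theta, r
```

With $p = 1$, `Y` and `Phi_hat` have a single column and the update reduces to the AM-ESG step sketched in Section 3.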

The steps of computing the parameter estimation matrix $\hat{\theta}(t)$ by the AM-MI-ESG algorithm are listed as follows.

1. To initialize, let $t = 1$, $\hat{\theta}(0) = I/p_0$, $x_a(i) = \mathbf{1}_m/p_0$, $\hat{v}(i) = \mathbf{1}_m/p_0$ for $i \leq 0$, $p_0 = 10^6$, and set the innovation length $p$.
2. Collect the input–output data $u(t)$ and $y(t)$, and compute $\gamma_i(u(t))$.
3. Form $\hat{\varphi}_s(t)$ by (24), $\hat{\varphi}_n(t)$ by (25), $\hat{\Phi}_s(p,t)$ by (22) and $\hat{\Phi}_n(p,t)$ by (23).
4. Form $\hat{\Phi}(p,t)$ by (21) and $Y(p,t)$ by (20).
5. Compute $r(t)$ by (19) and $E(p,t)$ by (18).
6. Update $\hat{\theta}(t)$ by (17).
7. Compute $x_a(t)$ by (26) and $\hat{v}(t)$ by (27).
8. Increase $t$ by 1 and go to step 2.

5. Example

Consider the following 2-input 2-output nonlinear system:
$$\begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} + \begin{bmatrix} 0.025 & 0.10 \\ -0.19 & 0.05 \end{bmatrix}\begin{bmatrix} y_1(t-1) \\ y_2(t-1) \end{bmatrix} = \begin{bmatrix} 1.73 & -0.58 \\ -0.17 & 0.28 \end{bmatrix}\begin{bmatrix} u_1(t-1) + 0.5u_1^2(t-1) \\ u_2(t-1) + 0.5u_2^2(t-1) \end{bmatrix} + \begin{bmatrix} v_1(t) \\ v_2(t) \end{bmatrix} + \begin{bmatrix} -0.012 & -0.05 \\ 0.20 & -0.01 \end{bmatrix}\begin{bmatrix} v_1(t-1) \\ v_2(t-1) \end{bmatrix},$$
$$\theta^T = \begin{bmatrix} 0.025 & 0.10 & 1.73 & -0.58 & 0.865 & -0.29 & -0.012 & -0.05 \\ -0.19 & 0.05 & -0.17 & 0.28 & -0.085 & 0.14 & 0.20 & -0.01 \end{bmatrix}.$$

The inputs $u_1(t)$ and $u_2(t)$ are taken as two uncorrelated persistent excitation signal sequences with zero mean and unit variance, and $v_1(t)$ and $v_2(t)$ as two white noise sequences with zero mean and variances $\sigma_1^2 = 0.60^2$ for $v_1(t)$ and $\sigma_2^2 = 0.50^2$ for $v_2(t)$. Applying the AM-ESG algorithm and the AM-MI-ESG algorithm to estimate the parameters of this system, the parameter estimates and their errors with different innovation lengths are shown in Tables 1–3, and the parameter estimation errors $\delta := \|\hat{\theta} - \theta\|/\|\theta\|$ versus $t$ are shown in Fig. 1 for $p = 1$, 5 and 10.
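For readers who want to reproduce the setting, the following sketch generates data according to the example system above; the random seed, the data length, and the use of i.i.d. Gaussian sequences as the persistent excitation are assumptions of this sketch and were not specified by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)   # assumed seed
N = 3000                         # assumed data length

A1 = np.array([[0.025, 0.10], [-0.19, 0.05]])
B1 = np.array([[1.73, -0.58], [-0.17, 0.28]])
D1 = np.array([[-0.012, -0.05], [0.20, -0.01]])

u = rng.standard_normal((N, 2))                            # zero-mean, unit-variance inputs
v = rng.standard_normal((N, 2)) * np.array([0.60, 0.50])   # white noise, sigma1 = 0.60, sigma2 = 0.50
f_u = u + 0.5 * u**2                                       # nonlinearity f_i(u_i) = u_i + 0.5 u_i^2

y = np.zeros((N, 2))
for t in range(1, N):
    # example equation: y(t) + A1 y(t-1) = B1 f(u(t-1)) + v(t) + D1 v(t-1)
    y[t] = -A1 @ y[t-1] + B1 @ f_u[t-1] + v[t] + D1 @ v[t-1]
```

The simulated $\{u(t), y(t)\}$ would then be fed to the recursions sketched in Sections 3 and 4 with $n_a = n_b = n_d = 1$, $l = r = m = 2$, $\gamma_1(u) = u$ and $\gamma_2(u) = u^2$.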


Table 1. The AM-ESG estimates and errors.

|       | t = 100  | t = 200  | t = 500  | t = 1000 | t = 2000 | t = 3000 | True values |
|-------|----------|----------|----------|----------|----------|----------|-------------|
| a11   | 0.21958  | 0.17441  | 0.15892  | 0.14792  | 0.12970  | 0.12133  | 0.02500     |
| a12   | 0.02221  | 0.02415  | 0.02650  | 0.02612  | 0.02937  | 0.03037  | 0.10000     |
| b11   | 1.05701  | 1.10995  | 1.17185  | 1.21075  | 1.24972  | 1.27075  | 1.73000     |
| b12   | -0.36864 | -0.38743 | -0.40523 | -0.42500 | -0.44000 | -0.44802 | -0.58000    |
| c11   | 0.52019  | 0.54175  | 0.57068  | 0.59353  | 0.61015  | 0.61955  | 0.86500     |
| c12   | 0.09273  | 0.06679  | 0.03533  | 0.01885  | -0.00239 | -0.01082 | -0.29000    |
| d11   | 0.08771  | 0.10521  | 0.11070  | 0.11129  | 0.11474  | 0.11344  | -0.01200    |
| d12   | -0.06164 | -0.06426 | -0.06713 | -0.06898 | -0.06955 | -0.07009 | -0.05000    |
| a21   | -0.09425 | -0.10605 | -0.13006 | -0.14376 | -0.15561 | -0.16213 | -0.19000    |
| a22   | 0.00182  | 0.00052  | 0.00416  | 0.00620  | 0.00724  | 0.00835  | 0.05000     |
| b21   | -0.06157 | -0.06768 | -0.07816 | -0.08752 | -0.09408 | -0.09735 | -0.17000    |
| b22   | 0.11311  | 0.12359  | 0.13603  | 0.14824  | 0.16025  | 0.16398  | 0.28000     |
| c21   | 0.03912  | 0.04168  | 0.02714  | 0.02036  | 0.01301  | 0.00980  | -0.08500    |
| c22   | 0.05286  | 0.06089  | 0.05959  | 0.06273  | 0.06718  | 0.06911  | 0.14000     |
| d21   | 0.04812  | 0.06319  | 0.08719  | 0.09773  | 0.10676  | 0.11106  | 0.20000     |
| d22   | -0.00656 | -0.00756 | -0.01043 | -0.01328 | -0.01619 | -0.01658 | -0.01000    |
| δ (%) | 45.64416 | 42.18468 | 38.18736 | 35.55305 | 32.97492 | 31.66685 |             |

Table 2. The AM-MI-ESG estimates and errors with p = 5.

|       | t = 100  | t = 200  | t = 500  | t = 1000 | t = 2000 | t = 3000 | True values |
|-------|----------|----------|----------|----------|----------|----------|-------------|
| a11   | 0.13527  | 0.08307  | 0.05323  | 0.04654  | 0.03352  | 0.02902  | 0.02500     |
| a12   | 0.08236  | 0.08171  | 0.08467  | 0.08173  | 0.08321  | 0.08314  | 0.10000     |
| b11   | 1.56103  | 1.59209  | 1.62594  | 1.63921  | 1.65582  | 1.66015  | 1.73000     |
| b12   | -0.45328 | -0.47621 | -0.49831 | -0.52093 | -0.53816 | -0.54550 | -0.58000    |
| c11   | 0.66504  | 0.71167  | 0.75621  | 0.77532  | 0.78659  | 0.78677  | 0.86500     |
| c12   | -0.10225 | -0.12674 | -0.16382 | -0.18457 | -0.20232 | -0.21189 | -0.29000    |
| d11   | -0.12411 | -0.03901 | -0.03340 | -0.03686 | -0.02569 | -0.02888 | -0.01200    |
| d12   | -0.13740 | -0.13860 | -0.12995 | -0.12450 | -0.12066 | -0.11688 | -0.05000    |
| a21   | -0.16019 | -0.15799 | -0.16976 | -0.17949 | -0.18343 | -0.18452 | -0.19000    |
| a22   | 0.03686  | 0.03102  | 0.03608  | 0.03980  | 0.04032  | 0.04134  | 0.05000     |
| b21   | -0.13769 | -0.13979 | -0.14956 | -0.15877 | -0.16331 | -0.16262 | -0.17000    |
| b22   | 0.24570  | 0.25146  | 0.24948  | 0.25679  | 0.26473  | 0.26245  | 0.28000     |
| c21   | 0.00307  | 0.00198  | -0.02653 | -0.03362 | -0.04418 | -0.04629 | -0.08500    |
| c22   | 0.10929  | 0.11620  | 0.10393  | 0.10427  | 0.10778  | 0.11001  | 0.14000     |
| d21   | 0.16286  | 0.16708  | 0.18809  | 0.18597  | 0.19096  | 0.19374  | 0.20000     |
| d22   | 0.01438  | 0.01670  | 0.01379  | 0.01023  | 0.00652  | 0.00499  | -0.01000    |
| δ (%) | 19.47860 | 15.49035 | 11.74420 | 9.92920  | 8.28236  | 7.76830  |             |

The auxiliary model used in the simulation is
$$x_a(t) = \begin{bmatrix} x_{1a}(t) \\ x_{2a}(t) \end{bmatrix} = \hat{\theta}_s^T(t)\hat{\varphi}_s(t),$$
$$\hat{\varphi}_s(t) = [-x_{1a}(t-1), -x_{2a}(t-1), u_1(t-1), u_2(t-1), u_1^2(t-1), u_2^2(t-1)]^T,$$
$$\hat{\theta}_s^T(t) = [\hat{A}_1(t), \hat{B}_1(t), \widehat{B_1c_2}(t)], \qquad \hat{\varphi}_n(t) = [\hat{v}_1(t-1), \hat{v}_2(t-1)]^T.$$
From Tables 1–3 and Fig. 1, we can draw the following conclusions: (1) the AM-MI-ESG algorithm with $p = 5$ and $p = 10$ has a higher estimation accuracy than the AM-ESG algorithm; (2) the parameter estimation errors given by the AM-MI-ESG algorithm become smaller and smaller and approach zero as the data length $t$ increases.

6. Conclusions

The AM-MI-ESG algorithm is developed for multi-input multi-output nonlinear systems by using the multi-innovation identification theory and the auxiliary model identification idea. The proposed algorithm can improve the parameter estimation accuracy. The simulation results verify the effectiveness of the proposed algorithm.


Table 3. The AM-MI-ESG estimates and errors with p = 10.

|       | t = 100  | t = 200  | t = 500  | t = 1000 | t = 2000 | t = 3000 | True values |
|-------|----------|----------|----------|----------|----------|----------|-------------|
| a11   | 0.09824  | 0.02109  | 0.01146  | 0.02446  | 0.01664  | 0.01846  | 0.02500     |
| a12   | 0.11963  | 0.11492  | 0.11245  | 0.10450  | 0.10425  | 0.10300  | 0.10000     |
| b11   | 1.67677  | 1.68827  | 1.71250  | 1.71017  | 1.71906  | 1.71473  | 1.73000     |
| b12   | -0.51386 | -0.53505 | -0.54842 | -0.56983 | -0.58511 | -0.58941 | -0.58000    |
| c11   | 0.76475  | 0.81481  | 0.85652  | 0.86054  | 0.86008  | 0.84948  | 0.86500     |
| c12   | -0.22167 | -0.23722 | -0.26261 | -0.27852 | -0.28166 | -0.28775 | -0.29000    |
| d11   | -0.09381 | -0.00887 | -0.01583 | -0.02113 | -0.00486 | -0.01247 | -0.01200    |
| d12   | -0.12424 | -0.10841 | -0.07782 | -0.06246 | -0.05721 | -0.05187 | -0.05000    |
| a21   | -0.15142 | -0.14978 | -0.17224 | -0.17582 | -0.18220 | -0.18661 | -0.19000    |
| a22   | 0.04126  | 0.03256  | 0.04396  | 0.04734  | 0.04748  | 0.04938  | 0.05000     |
| b21   | -0.15935 | -0.15057 | -0.16197 | -0.17346 | -0.17692 | -0.17242 | -0.17000    |
| b22   | 0.31111  | 0.30222  | 0.27721  | 0.28025  | 0.28714  | 0.27909  | 0.28000     |
| c21   | -0.00707 | -0.01068 | -0.05065 | -0.05215 | -0.06383 | -0.06421 | -0.08500    |
| c22   | 0.14322  | 0.14008  | 0.11753  | 0.11855  | 0.12314  | 0.12536  | 0.14000     |
| d21   | 0.14681  | 0.17194  | 0.19612  | 0.19813  | 0.19774  | 0.19704  | 0.20000     |
| d22   | 0.05715  | 0.04992  | 0.03300  | 0.01537  | 0.00350  | 0.00030  | -0.01000    |
| δ (%) | 11.29218 | 7.61151  | 4.05827  | 2.74822  | 1.85895  | 1.78829  |             |

Fig. 1. The parameter estimation errors δ versus t.

References

[1] M. Kohandel, S. Sivaloganathan, G. Tenti, Estimation of the quasi-linear viscoelastic parameters using a genetic algorithm, Mathematical and Computer Modelling 47 (3–4) (2008) 266–270.
[2] J.L. Figueroa, S.I. Biagiola, O.E. Agamennoni, An approach for identification of uncertain Wiener systems, Mathematical and Computer Modelling 48 (1–2) (2008) 305–315.
[3] J. Ding, L.L. Han, X.M. Chen, Time series AR modeling with missing observations based on the polynomial transformation, Mathematical and Computer Modelling 51 (5–6) (2010) 527–536.
[4] L.L. Han, J. Sheng, F. Ding, Y. Shi, Auxiliary model identification method for multirate multi-input systems based on least squares, Mathematical and Computer Modelling 50 (7–8) (2009) 1100–1106.
[5] J. Fang, A.R. Leyman, Y.H. Chew, H.P. Duan, Some further results on blind identification of MIMO FIR channels via second-order statistics, Signal Processing 87 (6) (2007) 1434–1447.
[6] A. Mahmoudi, M. Karimi, Estimation of the parameters of multi-channel autoregressive signals from noisy observations, Signal Processing 88 (11) (2008) 2777–2783.
[7] S.K. Zhao, Z.H. Man, S.Y. Khoo, H.R. Wu, Variable step-size LMS algorithm with a quotient form, Signal Processing 89 (1) (2009) 67–76.
[8] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[9] F. Ding, H.Z. Yang, F. Liu, Performance analysis of stochastic gradient algorithms under weak conditions, Science in China Series F-Information Sciences 51 (9) (2008) 1269–1280.
[10] F. Ding, P.X. Liu, H.Z. Yang, Parameter identification and intersample output estimation for dual-rate systems, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 38 (4) (2008) 966–975.
[11] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[12] F. Ding, P.X. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises, Signal Processing 89 (10) (2009) 1883–1890.
[13] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Systems & Control Letters 58 (1) (2009) 69–75.
[14] L.L. Han, F. Ding, Multi-innovation stochastic gradient algorithms for multi-input multi-output systems, Digital Signal Processing 19 (4) (2009) 545–554.
[15] L.L. Han, F. Ding, Identification for multirate multi-input systems using the multi-innovation identification theory, Computers & Mathematics with Applications 57 (9) (2009) 1438–1449.


[16] D.Q. Wang, F. Ding, Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems, Digital Signal Processing 20 (3) (2010) 750–762.
[17] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Applied Mathematics and Computation 215 (4) (2009) 1477–1483.
[18] F. Ding, H.B. Chen, M. Li, Multi-innovation least squares identification methods based on the auxiliary model for MISO systems, Applied Mathematics and Computation 187 (2) (2007) 658–668.
[19] L. Xie, H.Z. Yang, F. Ding, Modeling and identification for non-uniformly periodically sampled-data systems, IET Control Theory & Applications 4 (5) (2010) 784–794.
[20] F. Ding, P.X. Liu, G. Liu, Multi-innovation least squares identification for linear and pseudo-linear regression models, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 40 (3) (2010) 767–778.
[21] F. Ding, Several multi-innovation identification methods, Digital Signal Processing 20 (4) (2010) 1027–1039.
[22] D.Q. Wang, Y.Y. Chu, F. Ding, Auxiliary model-based RELS and MI-ELS algorithms for Hammerstein OEMA systems, Computers & Mathematics with Applications 59 (9) (2010) 3092–3098.
[23] D.Q. Wang, F. Ding, Input–output data filtering based recursive least squares parameter estimation for CARARMA systems, Digital Signal Processing 20 (4) (2010) 991–999.
[24] F. Ding, Y. Shi, T. Chen, Gradient-based identification methods for Hammerstein nonlinear ARMAX models, Nonlinear Dynamics 45 (1–2) (2006) 31–43.
[25] J. Ding, Y. Shi, H.G. Wang, F. Ding, A modified stochastic gradient based parameter estimation algorithm for dual-rate sampled-data systems, Digital Signal Processing 20 (4) (2010) 1238–1249.
[26] F. Ding, P.X. Liu, G. Liu, Gradient based and least-squares based iterative identification methods for OE and OEMA systems, Digital Signal Processing 20 (3) (2010) 664–677.
[27] Y.J. Liu, D.Q. Wang, F. Ding, Least-squares based iterative algorithms for identifying Box–Jenkins models with finite measurement data, Digital Signal Processing 20 (5) (2010) 1458–1467.
[28] H.Q. Han, L. Xie, F. Ding, X.G. Liu, Hierarchical least squares based iterative identification for multivariable systems with moving average noises, Mathematical and Computer Modelling 51 (9–10) (2010) 1213–1220.
[29] F. Ding, T. Chen, Identification of Hammerstein nonlinear ARMAX systems, Automatica 41 (9) (2005) 1479–1489.
[30] D.Q. Wang, Y.Y. Chu, G.W. Yang, F. Ding, Auxiliary model-based recursive generalized least squares parameter estimation for Hammerstein OEAR systems, Mathematical and Computer Modelling 52 (1–2) (2010) 309–317.
[31] D.Q. Wang, F. Ding, Extended stochastic gradient identification algorithms for Hammerstein–Wiener ARMAX systems, Computers & Mathematics with Applications 56 (12) (2008) 3157–3164.
[32] F. Ding, T. Chen, Combined parameter and output estimation of dual-rate systems using an auxiliary model, Automatica 40 (10) (2004) 1739–1748.
[33] F. Ding, T. Chen, Parameter estimation of dual-rate stochastic systems by using an output error method, IEEE Transactions on Automatic Control 50 (9) (2005) 1436–1441.
[34] F. Ding, Y. Shi, T. Chen, Auxiliary model based least-squares identification methods for Hammerstein output-error systems, Systems & Control Letters 56 (5) (2007) 373–380.
[35] Y.J. Liu, L. Xie, F. Ding, An auxiliary model based recursive least squares parameter estimation algorithm for non-uniformly sampled multirate systems, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 223 (4) (2009) 445–454.
[36] F. Ding, T. Chen, Identification of dual-rate systems based on finite impulse response models, International Journal of Adaptive Control and Signal Processing 18 (7) (2004) 589–598.