Hierarchical least squares algorithms for single-input multiple-output systems based on the auxiliary model


Mathematical and Computer Modelling 52 (2010) 918–924

journal homepage: www.elsevier.com/locate/mcm

Lili Xiang a, Linbo Xie a, Yuwu Liao b, Ruifeng Ding a,∗

a School of Communication and Control Engineering, Jiangnan University, Wuxi 214122, PR China
b Department of Physics and Electronics Information Technology, Xiangfan University, Xiangfan 441053, PR China

Article history:
Received 5 February 2010
Received in revised form 21 May 2010
Accepted 25 May 2010

Keywords:
Least squares
Parameter estimation
Auxiliary model identification
Hierarchical identification

Abstract

This paper presents an auxiliary model based hierarchical least squares algorithm for estimating the parameters of single-input multiple-output system models by combining the auxiliary model identification idea and the hierarchical identification principle. A numerical example is given to show the performance of the proposed algorithm.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Parameter estimation is very important in system modelling and identification, signal processing, and adaptive control, e.g., [1–12]. Two typical estimation methods for system identification are the least squares methods [13–18] and the stochastic gradient methods [19–22]. Other methods include the multi-innovation stochastic gradient type identification algorithms [23–30] and the multi-innovation least squares type algorithms [30–33], the hierarchical stochastic gradient algorithm [34], the hierarchical least squares algorithms [35,36], and the auxiliary model based algorithms [24,37–42]. Recently, Sun and Wu studied the consistency of the regularized least-square regression in a general reproducing kernel Hilbert space [43]; Han et al. proposed an auxiliary model identification method for multirate multi-input systems based on least squares [41]; Ding, Han and Chen presented an estimation algorithm for time series AR modelling with missing observations based on the polynomial transformation [44].

The auxiliary model identification method is effective for solving identification problems with unknown variables in the information vector, and the hierarchical identification principle is based on decomposition and can deal with parameter estimation for multivariable systems. In the literature, Ding and Chen proposed an auxiliary model based recursive least squares algorithm for dual-rate sampled-data systems [37]; Wang proposed an auxiliary model based extended least squares algorithm for the output error moving average model [45]; Ding and Chen developed a hierarchical stochastic gradient algorithm and a hierarchical least squares algorithm for multi-input multi-output systems [34,35]. By combining the auxiliary model identification idea [37,38,45,46] and the hierarchical identification principle [34,35], this paper studies and presents an auxiliary model based hierarchical identification method for single-input multiple-output systems.

✩ This work was supported by the National Natural Science Foundation of China (No. 60804013).



∗ Corresponding author.
E-mail addresses: [email protected] (L. Xiang), [email protected] (L. Xie), [email protected] (Y. Liao), [email protected], [email protected] (R. Ding).
doi:10.1016/j.mcm.2010.05.025


The paper is organized as follows. Section 2 describes the system formulation for single-input multiple-output (SIMO) systems. Section 3 derives an auxiliary model based hierarchical identification method for SIMO systems. Section 4 provides an illustrative example for the results in this paper. Finally, concluding remarks are given in Section 5.

2. Problem formulation

Consider a discrete-time single-input multiple-output (SIMO) system described by the following state space model [16,31]:



x(t + 1) = Ax(t) + bu(t),
y(t) = Cx(t) + v(t),   (1)

where x(t) ∈ R^n is the state vector, u(t) ∈ R is the input variable, y(t) = [y1(t), y2(t), ..., ym(t)]^T ∈ R^m is the output vector, v(t) = [v1(t), v2(t), ..., vm(t)]^T ∈ R^m is a white noise vector with zero mean, and A ∈ R^{n×n}, b ∈ R^n and C ∈ R^{m×n} are unknown constant matrices or vectors. Let I be an identity matrix of appropriate size and z^{-1} represent the unit backward shift operator: z^{-1}x(t) = x(t − 1) and zx(t) = x(t + 1). The SIMO system in (1) has the following input–output relationship:

y(t) = C(zI − A)^{-1}b u(t) + v(t)
     = [z^{-n} C adj[zI − A]b / z^{-n} det[zI − A]] u(t) + v(t)
     =: [β(z)/α(z)] u(t) + v(t),   (2)

where

α(z) := z^{-n} det[zI − A] = 1 + α1 z^{-1} + α2 z^{-2} + · · · + αn z^{-n} ∈ R,
β(z) := z^{-n} C adj[zI − A]b = β1 z^{-1} + β2 z^{-2} + · · · + βn z^{-n} ∈ R^m.

Assume that the order n is known and that u(t) = 0 and y(t) = 0 for t ≤ 0. The objective of this paper is to present a new identification method to estimate the parameters (αi, βi) of the SIMO system in (2) by combining the auxiliary model identification idea [37] and the hierarchical identification principle [34,35], and to evaluate the estimation accuracy under different noise variances.

3. The auxiliary model based hierarchical least squares algorithm

The idea of auxiliary model identification is to construct an auxiliary model using the measured data and to replace the unmeasurable variables in the information matrix with the outputs of the auxiliary model. The following discusses the auxiliary model based hierarchical least squares algorithm. Define an intermediate variable (i.e., the true or noise-free output of the system):

s(t) := [β(z)/α(z)] u(t).   (3)

Then (2) can be written as

y(t) = s(t) + v(t).   (4)

Expanding (3) gives

s(t) = −α1 s(t − 1) − α2 s(t − 2) − · · · − αn s(t − n) + β1 u(t − 1) + β2 u(t − 2) + · · · + βn u(t − n).   (5)
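For concreteness, the coefficients of α(z) and β(z) can be computed from a given (A, b, C): α comes from the characteristic polynomial of A, and βi = Σ_{j=0}^{i−1} αj h_{i−j} with the Markov parameters h_k = C A^{k−1} b. The sketch below (Python with NumPy; the numeric matrices are hypothetical, chosen so that α(z) matches the example in Section 4) checks that the recursion (5) reproduces the noise-free output of the state-space model (1):

```python
import numpy as np

# Hypothetical 2nd-order example (n = 2, m = 2); any (A, b, C) works.
A = np.array([[0.5, -0.6],
              [1.0,  0.0]])
b = np.array([1.0, 0.0])
C = np.array([[2.0, 1.0],
              [1.0, 0.5]])
n, m = A.shape[0], C.shape[0]

# alpha(z) = z^{-n} det(zI - A): np.poly gives [1, alpha_1, ..., alpha_n]
alpha = np.poly(A)

# beta_i = sum_{j=0}^{i-1} alpha_j h_{i-j}, Markov parameters h_k = C A^{k-1} b
h = [C @ np.linalg.matrix_power(A, k - 1) @ b for k in range(1, n + 1)]
beta = [sum(alpha[j] * h[i - j - 1] for j in range(i)) for i in range(1, n + 1)]

# Noise-free output from the state-space model (1) ...
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
x = np.zeros(n)
s_ss = []
for t in range(len(u)):
    s_ss.append(C @ x)            # s(t) = C x(t)
    x = A @ x + b * u[t]

# ... and from the difference-equation recursion (5), zero initial values
s_rec = [np.zeros(m) for _ in range(len(u))]
for t in range(len(u)):
    s_rec[t] = (sum(-alpha[i] * (s_rec[t - i] if t - i >= 0 else np.zeros(m))
                    for i in range(1, n + 1))
                + sum(beta[i - 1] * (u[t - i] if t - i >= 0 else 0.0)
                      for i in range(1, n + 1)))
```

Both signals coincide, confirming that (5) is an exact rewriting of (1) under zero initial conditions.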

Define the parameter vector α, the parameter matrix θ, the input information vector ϕ(t) and the information matrix ψ(t) as

α := [α1, α2, ..., αn]^T ∈ R^n,
θ^T := [β1, β2, ..., βn] ∈ R^{m×n},
ϕ(t) := [u(t − 1), u(t − 2), ..., u(t − n)]^T ∈ R^n,
ψ(t) := [−s(t − 1), −s(t − 2), ..., −s(t − n)] ∈ R^{m×n}.

Using these definitions, (5) and (4) can be written, respectively, as

s(t) = ψ(t)α + θ^T ϕ(t),   (6)
y(t) = ψ(t)α + θ^T ϕ(t) + v(t).   (7)

Eq. (7) is the identification model for the SIMO system in (2), which contains the parameter vector α and the parameter matrix θ. Here we use the hierarchical identification principle to derive a hierarchical algorithm to estimate them.
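To make the regression quantities concrete, ϕ(t) and ψ(t) in (7) can be built from data arrays as in the following sketch (Python with NumPy; the function name and the zero-padding convention for non-positive times are ours):

```python
import numpy as np

def build_regressors(u, s, t, n):
    """Form phi(t) = [u(t-1), ..., u(t-n)]^T in R^n and
    psi(t) = [-s(t-1), ..., -s(t-n)] in R^{m x n}.
    u: (N,) input samples; s: (N, m) noise-free outputs;
    samples at non-positive times are taken as zero (u(t) = 0, s(t) = 0 for t <= 0)."""
    m = s.shape[1]
    phi = np.array([u[t - i] if t - i >= 0 else 0.0 for i in range(1, n + 1)])
    psi = np.column_stack([-s[t - i] if t - i >= 0 else np.zeros(m)
                           for i in range(1, n + 1)])
    return phi, psi
```

With these quantities, the identification model (7) reads `y_t = psi @ alpha + theta.T @ phi + v_t`.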


According to the hierarchical identification principle, we identify the parameter vector α ∈ R^n and the parameter matrix θ ∈ R^{n×m}, respectively, and thus define two quadratic cost functions:

J1(α) := Σ_{j=1}^{t} ‖y(j) − ψ(j)α − θ^T ϕ(j)‖²,
J2(θ) := Σ_{j=1}^{t} ‖y(j) − ψ(j)α − θ^T ϕ(j)‖²,

where ‖X‖² := tr[XX^T].

Let α̂(t) be the estimate of α at time t, and θ̂(t) be the estimate of θ at time t. Minimizing J1(α) and J2(θ) yields the following recursive algorithm [36,47]:

α̂(t) = α̂(t − 1) + L1(t)[y(t) − θ^T ϕ(t) − ψ(t)α̂(t − 1)],   (8)
L1(t) = P1(t)ψ^T(t) = P1(t − 1)ψ^T(t)[I + ψ(t)P1(t − 1)ψ^T(t)]^{-1},   (9)
P1(t) = [I − L1(t)ψ(t)]P1(t − 1),   (10)
θ̂(t) = θ̂(t − 1) + L2(t)[y(t) − ψ(t)α − θ̂^T(t − 1)ϕ(t)]^T,   (11)
L2(t) = P2(t)ϕ(t) = P2(t − 1)ϕ(t) / [1 + ϕ^T(t)P2(t − 1)ϕ(t)],   (12)
P2(t) = [I − L2(t)ϕ^T(t)]P2(t − 1).   (13)

A difficulty is that Eqs. (8) and (11) contain the unknown θ and α, respectively. The solution is to use the idea of the Jacobi iteration for Ax = b [48] and to replace the unknown θ in (8) and α in (11) with their preceding estimates θ̂(t − 1) and α̂(t − 1), respectively. Then Eqs. (8)–(13) become

α̂(t) = α̂(t − 1) + L1(t)[y(t) − ψ(t)α̂(t − 1) − θ̂^T(t − 1)ϕ(t)],   (14)
L1(t) = P1(t)ψ^T(t) = P1(t − 1)ψ^T(t)[I + ψ(t)P1(t − 1)ψ^T(t)]^{-1},   (15)
P1(t) = [I − L1(t)ψ(t)]P1(t − 1),   (16)
θ̂(t) = θ̂(t − 1) + L2(t)[y(t) − ψ(t)α̂(t − 1) − θ̂^T(t − 1)ϕ(t)]^T,   (17)
L2(t) = P2(t)ϕ(t) = P2(t − 1)ϕ(t) / [1 + ϕ^T(t)P2(t − 1)ϕ(t)],   (18)
P2(t) = [I − L2(t)ϕ^T(t)]P2(t − 1).   (19)

However, since the information matrix ψ(t) contains the unknown variables s(t − i), the algorithm in (14)–(19) cannot be implemented. The solution here is based on the auxiliary model identification idea [37,38]: the unknown s(t − i) in ψ(t) are replaced with the outputs ŝ(t − i) of an auxiliary model. Define the estimate of ψ(t) as

ψ̂(t) := [−ŝ(t − 1), −ŝ(t − 2), ..., −ŝ(t − n)] ∈ R^{m×n}.

Replacing α, θ and ψ(t) in (6) with the estimates α̂(t), θ̂(t) and ψ̂(t), the output ŝ(t) of the auxiliary model can be computed by

ŝ(t) = ψ̂(t)α̂(t) + θ̂^T(t)ϕ(t).   (20)

Replacing ψ(t) in (14)–(18) with ψ̂(t), we obtain the auxiliary model based hierarchical least squares (AM-HLS) algorithm for estimating the parameter vector α and the parameter matrix θ as follows:

α̂(t) = α̂(t − 1) + L1(t)[y(t) − θ̂^T(t − 1)ϕ(t) − ψ̂(t)α̂(t − 1)],   (21)
L1(t) = P1(t)ψ̂^T(t) = P1(t − 1)ψ̂^T(t)[I + ψ̂(t)P1(t − 1)ψ̂^T(t)]^{-1},   P1(0) = p0 I,   (22)
P1(t) = [I − L1(t)ψ̂(t)]P1(t − 1),   (23)


Fig. 1. The flowchart of computing the AM-HLS estimates α̂(t) and θ̂(t).

θ̂(t) = θ̂(t − 1) + L2(t)[y(t) − θ̂^T(t − 1)ϕ(t) − ψ̂(t)α̂(t − 1)]^T,   (24)
L2(t) = P2(t)ϕ(t) = P2(t − 1)ϕ(t) / [1 + ϕ^T(t)P2(t − 1)ϕ(t)],   (25)
P2(t) = [I − L2(t)ϕ^T(t)]P2(t − 1),   P2(0) = p0 I,   (26)
ϕ(t) = [u(t − 1), u(t − 2), ..., u(t − n)]^T,   (27)
ψ̂(t) = [−ŝ(t − 1), −ŝ(t − 2), ..., −ŝ(t − n)],   (28)
ŝ(t) = ψ̂(t)α̂(t) + θ̂^T(t)ϕ(t).   (29)

Here L1(t) ∈ R^{n×m} is a gain matrix, L2(t) ∈ R^n is a gain vector, and P1(t) ∈ R^{n×n} and P2(t) ∈ R^{n×n} are two covariance matrices. The steps of computing the estimates α̂(t) and θ̂(t) in the AM-HLS algorithm are listed below.

1. Let t = 1, set the initial values ŝ(j) = 1_m/p0 (j ≤ 0), α̂(0) = 1_n/p0, θ̂(0) = 1_{n×m}/p0, where 1_i represents an i-dimensional column vector whose elements are all 1 and 1_{n×m} stands for an n × m matrix whose elements are all 1; P1(0) = p0 I, P2(0) = p0 I, with p0 a large constant, e.g., p0 = 10^6.
2. Collect the input–output data u(t) and y(t), and construct the information vectors ϕ(t) by (27) and ψ̂(t) by (28).
3. Compute L1(t) by (22), P1(t) by (23), L2(t) by (25) and P2(t) by (26).
4. Update the parameter estimates α̂(t) by (21) and θ̂(t) by (24).
5. Compute ŝ(t) by (29).
6. Increase t by 1 and go to Step 2.

The flowchart of computing the parameter estimates α̂(t) and θ̂(t) for the AM-HLS algorithm in (21)–(29) is shown in Fig. 1.

4. Simulation example

Consider the following 1-input 2-output system:

y(t) = [β(z)/α(z)] u(t) + v(t),
α(z) = 1 + α1 z^{-1} + α2 z^{-2} = 1 − 0.50z^{-1} + 0.60z^{-2},
β(z) = β1 z^{-1} = [2.00, 1.00]^T z^{-1},


Table 1
The parameter estimates and errors.

σ²       t      α1         α2        β11       β12       δ (%)
0.50²    100    −0.51153   0.62574   1.88913   1.01356   4.86363
0.50²    200    −0.49515   0.61473   1.97069   1.05485   2.70607
0.50²    500    −0.49139   0.60135   2.00549   1.04255   1.84846
0.50²    1000   −0.49531   0.59140   2.01978   1.00397   0.94691
0.50²    2000   −0.49850   0.59938   2.01415   0.99818   0.60634
0.50²    3000   −0.49660   0.59979   1.99915   0.99962   0.14912
0.50²    4000   −0.49512   0.59877   2.00660   1.00313   0.37438
1.00²    100    −0.51995   0.61860   1.88313   1.08236   6.14512
1.00²    200    −0.48446   0.60880   1.98880   1.13467   5.75521
1.00²    500    −0.48104   0.59456   2.03214   1.09620   4.36244
1.00²    1000   −0.48953   0.57851   2.05042   1.01360   2.42483
1.00²    2000   −0.49597   0.59602   2.03397   0.99932   1.45440
1.00²    3000   −0.49248   0.59768   2.00206   1.00120   0.34709
1.00²    4000   −0.48969   0.59610   2.01594   1.00770   0.88064
2.00²    100    −0.54175   0.57464   1.94439   1.26277   11.52598
2.00²    200    −0.46072   0.58221   2.05684   1.31163   13.49741
2.00²    500    −0.45992   0.57624   2.10015   1.21168   10.08082
2.00²    1000   −0.47710   0.55002   2.11911   1.03697   5.75429
2.00²    2000   −0.48985   0.58662   2.07753   1.00375   3.35291
2.00²    3000   −0.48354   0.59151   2.01046   1.00577   0.93067
2.00²    4000   −0.47838   0.58941   2.03653   1.01788   1.99531
True values     −0.50000   0.60000   2.00000   1.00000

Fig. 2. The parameter estimation errors versus t with different noise variances.

α = [α1, α2]^T = [−0.50, 0.60]^T,
θ^T = β1 = [2.00, 1.00]^T.

The input {u(t)} is taken as an uncorrelated persistent excitation signal sequence with zero mean and unit variance, and v(t) = [v1(t), v2(t)]^T as a white noise vector sequence with zero mean and variances σ1² for v1(t) and σ2² for v2(t). Changing the values of σ1² and σ2², one can adjust the noise-to-signal ratios δns(1) and δns(2) of the two output channels. Set the initial values α̂(0) = 1_{2×1}/p0, θ̂(0) = 1_{1×2}/p0, p0 = 10^6.

Applying the AM-HLS algorithm to estimate the parameters of this system, the parameter estimates and their errors with different noise variances and data lengths are shown in Table 1, and the estimation errors

δ := [(‖α̂(t) − α‖² + ‖θ̂(t) − θ‖²) / (‖α‖² + ‖θ‖²)]^{1/2}

versus t are shown in Fig. 2. When the noise variances σ1² = σ2² = σ² = 0.50², the corresponding noise-to-signal ratios are δns(1) = 35.00% and δns(2) = 70.00%; when σ² = 1.00², the noise-to-signal ratios are δns(1) = 70.00% and δns(2) = 140.00%; when σ² = 2.00², the noise-to-signal ratios are δns(1) = 140.00% and δns(2) = 280.00%.

From Table 1 and Fig. 2, we can draw the following conclusions: the parameter estimation errors (generally) become larger as the noise variances increase, and the parameter estimation errors (generally) become smaller as the data length increases.
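The error measure δ can be evaluated as below (Python with NumPy; the function name is ours; `np.linalg.norm` of a matrix is the Frobenius norm, which matches ‖X‖² = tr[XX^T]):

```python
import numpy as np

def estimation_error(alpha_hat, theta_hat, alpha, theta):
    """delta = sqrt((||alpha_hat - alpha||^2 + ||theta_hat - theta||^2)
                    / (||alpha||^2 + ||theta||^2)), in percent."""
    num = np.linalg.norm(alpha_hat - alpha) ** 2 + np.linalg.norm(theta_hat - theta) ** 2
    den = np.linalg.norm(alpha) ** 2 + np.linalg.norm(theta) ** 2
    return 100.0 * np.sqrt(num / den)
```

For instance, for the true parameters above, perturbing only α1 from −0.50 to −0.40 gives δ = 100·√(0.01/5.61) ≈ 4.22%.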


5. Conclusions

The AM-HLS algorithm is presented for estimating the parameters of single-input multiple-output systems by combining the auxiliary model identification idea and the hierarchical identification principle. The AM-HLS approach can be combined with other methods (e.g., iterative algorithms) to derive new identification algorithms [49–53]. The simulation results show that the proposed algorithm is effective.

References

[1] M. Kohandel, S. Sivaloganathan, G. Tenti, Estimation of the quasi-linear viscoelastic parameters using a genetic algorithm, Mathematical and Computer Modelling 47 (3–4) (2008) 266–270.
[2] C. Mocenni, E. Sparacino, A. Vicino, J.P. Zubelli, Mathematical modelling and parameter estimation of the Serra da Mesa basin, Mathematical and Computer Modelling 47 (7–8) (2008) 765–780.
[3] J.L. Figueroa, S.I. Biagiola, O.E. Agamennoni, An approach for identification of uncertain Wiener systems, Mathematical and Computer Modelling 48 (1–2) (2008) 305–315.
[4] X.G. Liu, J. Lu, Least squares based iterative identification for a class of multirate systems, Automatica 46 (3) (2010) 549–554.
[5] Y.S. Xiao, Y. Zhang, J. Ding, J.Y. Dai, The residual based interactive least squares algorithms and simulation studies, Computers & Mathematics with Applications 58 (6) (2009) 1190–1197.
[6] L.Y. Wang, L. Xie, X.F. Wang, The residual based interactive stochastic gradient algorithms for controlled moving average models, Applied Mathematics and Computation 211 (2) (2009) 442–449.
[7] Y. Shi, F. Ding, T. Chen, Multirate crosstalk identification in xDSL systems, IEEE Transactions on Communications 54 (10) (2006) 1878–1886.
[8] Y. Shi, F. Ding, T. Chen, 2-Norm based recursive design of transmultiplexers with designable filter length, Circuits, Systems and Signal Processing 25 (4) (2006) 447–462.
[9] F. Ding, T. Chen, Least squares based self-tuning control of dual-rate systems, International Journal of Adaptive Control and Signal Processing 18 (8) (2004) 697–714.
[10] F. Ding, T. Chen, A gradient based adaptive control algorithm for dual-rate systems, Asian Journal of Control 8 (4) (2006) 314–323.
[11] F. Ding, T. Chen, Z. Iwai, Adaptive digital control of Hammerstein nonlinear systems with limited output sampling, SIAM Journal on Control and Optimization 45 (6) (2006) 2257–2276.
[12] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Systems & Control Letters 58 (1) (2009) 69–75.
[13] F. Ding, T. Chen, Identification of Hammerstein nonlinear ARMAX systems, Automatica 41 (9) (2005) 1479–1489.
[14] F. Ding, Y. Shi, T. Chen, Performance analysis of estimation algorithms of non-stationary ARMA processes, IEEE Transactions on Signal Processing 54 (3) (2006) 1041–1053.
[15] F. Ding, X.P. Liu, Y. Shi, Convergence analysis of estimation algorithms of dual-rate stochastic systems, Applied Mathematics and Computation 176 (1) (2006) 245–261.
[16] F. Ding, T. Chen, L. Qiu, Bias compensation based recursive least squares identification algorithm for MISO systems, IEEE Transactions on Circuits and Systems II: Express Briefs 53 (5) (2006) 349–353.
[17] F. Ding, Y. Shi, T. Chen, Amendments to "Performance analysis of estimation algorithms of non-stationary ARMA processes", IEEE Transactions on Signal Processing 56 (10) (2008) Part I: 4983–4984.
[18] F. Ding, Y.S. Xiao, A finite-data-window least squares algorithm with a forgetting factor for dynamical modeling, Applied Mathematics and Computation 186 (1) (2007) 184–192.
[19] F. Ding, Y. Shi, T. Chen, Gradient-based identification methods for Hammerstein nonlinear ARMAX models, Nonlinear Dynamics 45 (1–2) (2006) 31–43.
[20] F. Ding, H.Z. Yang, F. Liu, Performance analysis of stochastic gradient algorithms under weak conditions, Science in China Series F: Information Sciences 51 (9) (2008) 1269–1280.
[21] D.Q. Wang, F. Ding, Extended stochastic gradient identification algorithms for Hammerstein–Wiener ARMAX systems, Computers & Mathematics with Applications 56 (12) (2008) 3157–3164.
[22] F. Ding, P.X. Liu, H.Z. Yang, Parameter identification and intersample output estimation for dual-rate systems, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 38 (4) (2008) 966–975.
[23] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[24] F. Ding, P.X. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises, Signal Processing 89 (10) (2009) 1883–1890.
[25] L.L. Han, F. Ding, Multi-innovation stochastic gradient algorithms for multi-input multi-output systems, Digital Signal Processing 19 (4) (2009) 545–554.
[26] L.L. Han, F. Ding, Identification for multirate multi-input systems using the multi-innovation identification theory, Computers & Mathematics with Applications 57 (9) (2009) 1438–1449.
[27] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Applied Mathematics and Computation 215 (4) (2009) 1477–1483.
[28] L. Xie, H.Z. Yang, F. Ding, Modeling and identification for non-uniformly periodically sampled-data systems, IET Control Theory & Applications 4 (5) (2010) 784–794.
[29] D.Q. Wang, F. Ding, Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems, Digital Signal Processing 20 (3) (2010) 750–762.
[30] F. Ding, Several multi-innovation identification methods, Digital Signal Processing 20 (4) (2010) 1027–1039.
[31] F. Ding, H.B. Chen, M. Li, Multi-innovation least squares identification methods based on the auxiliary model for MISO systems, Applied Mathematics and Computation 187 (2) (2007) 658–668.
[32] F. Ding, P.X. Liu, G. Liu, Multi-innovation least squares identification for linear and pseudo-linear regression models, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 40 (3) (2010) 767–778.
[33] D.Q. Wang, Y.Y. Chu, F. Ding, Auxiliary model-based RELS and MI-ELS algorithms for Hammerstein OEMA systems, Computers & Mathematics with Applications 59 (9) (2010) 3092–3098.
[34] F. Ding, T. Chen, Hierarchical gradient-based identification of multivariable discrete-time systems, Automatica 41 (2) (2005) 315–325.
[35] F. Ding, T. Chen, Hierarchical least squares identification methods for multivariable systems, IEEE Transactions on Automatic Control 50 (3) (2005) 397–402.
[36] L.Y. Wang, F. Ding, P.X. Liu, Consistency of HLS estimation algorithms for MIMO ARX-like systems, Applied Mathematics and Computation 190 (2) (2007) 1081–1093.
[37] F. Ding, T. Chen, Combined parameter and output estimation of dual-rate systems using an auxiliary model, Automatica 40 (10) (2004) 1739–1748.
[38] F. Ding, T. Chen, Parameter estimation of dual-rate stochastic systems by using an output error method, IEEE Transactions on Automatic Control 50 (9) (2005) 1436–1441.


[39] F. Ding, Y. Shi, T. Chen, Auxiliary model based least-squares identification methods for Hammerstein output-error systems, Systems & Control Letters 56 (5) (2007) 373–380.
[40] Y.J. Liu, L. Xie, F. Ding, An auxiliary model based recursive least squares parameter estimation algorithm for non-uniformly sampled multirate systems, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 223 (4) (2009) 445–454.
[41] L.L. Han, J. Sheng, F. Ding, Y. Shi, Auxiliary model identification method for multirate multi-input systems based on least squares, Mathematical and Computer Modelling 50 (7–8) (2009) 1100–1106.
[42] F. Ding, T. Chen, Identification of dual-rate systems based on finite impulse response models, International Journal of Adaptive Control and Signal Processing 18 (7) (2004) 589–598.
[43] H.W. Sun, Q. Wu, Application of integral operator for regularized least-square regression, Mathematical and Computer Modelling 49 (1–2) (2009) 276–285.
[44] J. Ding, L.L. Han, X.M. Chen, Time series AR modeling with missing observations based on the polynomial transformation, Mathematical and Computer Modelling 51 (5–6) (2010) 527–536.
[45] D.Q. Wang, Recursive extended least squares identification method based on auxiliary models, Control Theory and Applications 26 (1) (2009) 51–56 (in Chinese).
[46] D.Q. Wang, Y.Y. Chu, G.W. Yang, F. Ding, Auxiliary model-based recursive generalized least squares parameter estimation for Hammerstein OEAR systems, Mathematical and Computer Modelling 52 (1–2) (2010) 309–317.
[47] F. Ding, System Identification Theory and Methods + Matlab Simulation, Power Press, Beijing, 2010 (in Chinese).
[48] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins Univ. Press, Baltimore, MD, 1996.
[49] D.Q. Wang, F. Ding, Input–output data filtering based recursive least squares parameter estimation for CARARMA systems, Digital Signal Processing 20 (4) (2010) 991–999.
[50] J. Ding, Y. Shi, H.G. Wang, F. Ding, A modified stochastic gradient based parameter estimation algorithm for dual-rate sampled-data systems, Digital Signal Processing 20 (4) (2010) 1238–1249.
[51] F. Ding, P.X. Liu, G. Liu, Gradient based and least-squares based iterative identification methods for OE and OEMA systems, Digital Signal Processing 20 (3) (2010) 664–677.
[52] H.Q. Han, L. Xie, F. Ding, X.G. Liu, Hierarchical least squares based iterative identification for multivariable systems with moving average noises, Mathematical and Computer Modelling 51 (9–10) (2010) 1213–1220.
[53] Y.J. Liu, D.Q. Wang, F. Ding, Least-squares based iterative algorithms for identifying Box–Jenkins models with finite measurement data, Digital Signal Processing 20 (5) (2010) 1458–1467.