Signal Processing 93 (2013) 1235–1242
Decomposition based fast least squares algorithm for output error systems

Feng Ding
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China
Control Science and Engineering Research Center, Jiangnan University, Wuxi 214122, PR China
Article history: Received 19 June 2012; received in revised form 22 October 2012; accepted 18 December 2012; available online 27 December 2012.

Abstract

Parameter estimation methods have wide applications in signal processing, communication and system identification. This paper derives an iterative least squares algorithm to estimate the parameters of output error systems and uses the partitioned matrix inversion lemma to implement the proposed algorithm in order to enhance computational efficiency. The simulation results show that the proposed algorithm works well.

Keywords: Signal processing; Filtering; Parameter estimation; Fast least squares; Iterative algorithm
1. Introduction

Iterative algorithms are important for finding the zeros of nonlinear functions and the solutions of linear or nonlinear matrix equations [1,2], e.g., the Newton iteration methods [3], the optimization and control algorithms [4–7], the Jacobi and Gauss–Seidel iterations for solving the matrix equation $Ax = b$ [8,9], and the least squares based iterative methods [10] and the hierarchical gradient based iterative methods [11] for solving the coupled Sylvester matrix equations $AX + XB = C$ and $DX + XE = F$ and general coupled matrix equations. Recently, Li et al. considered the fitting problems of nonlinear functions or nonlinear system modeling and presented a gradient based iterative algorithm and a Newton iterative algorithm to estimate the
This work was supported by the National Natural Science Foundation of China (No. 61273194), the Natural Science Foundation of Jiangsu Province (China, BK2012549) and the 111 Project (B12018).
Corresponding author at: Control Science and Engineering Research Center, Jiangnan University, Wuxi 214122, PR China. E-mail address: [email protected]
http://dx.doi.org/10.1016/j.sigpro.2012.12.013
parameters of a nonlinear function from noisy data according to the negative gradient search and the Newton iteration. Furthermore, two model transformation based iterative algorithms have been developed for improving computational efficiency [12], and a two-stage least squares based iterative estimation algorithm has been presented for CARARMA system modeling [13]. Recursive algorithms are closely related to iterative algorithms [15–17]. In general, recursive algorithms can be used for on-line identification, and the basic idea is to update the parameters of the system by using real-time measurement information [14]. Liu et al. discussed the auxiliary model based multi-innovation estimation algorithm for multiple-input single-output systems [18] and studied the convergence properties of the stochastic gradient algorithm for multivariable systems [19]; Ding et al. explored time series autoregressive modeling in the presence of missing observations by using the polynomial transformation technique [20]. Xiao et al. presented a residual based interactive least squares algorithm for a controlled autoregressive moving average (CARMA) model [21]; Wang et al. proposed the residual
based interactive stochastic gradient algorithm for controlled moving average models [22]. System identification and parameter estimation methods can obtain the parameters of the systems under consideration and are basic for state estimation and filtering [23–26] and adaptive control [27–29]. The iterative algorithms can be used not only for solving matrix equations but also for computing system parameters. In the area of system identification, Ding et al. derived a least squares based and a gradient based iterative estimation method for output error moving average systems [30,31]; similar iterative methods have been developed for Box–Jenkins systems [32]; Zhang et al. derived a hierarchical gradient based iterative algorithm for multivariable output error moving average systems [33]; Wang studied recursive and iterative algorithms for output error moving average systems [34]. Recently, Hu et al. studied an iterative least squares estimation algorithm for controlled moving average systems based on matrix decomposition [35], and a decomposition based iterative estimation algorithm for autoregressive moving average models [36]. On the basis of the work in [35,36], this paper derives an iterative least squares identification algorithm for output error systems using the information matrix decomposition and the partitioned matrix inversion lemma.

The rest of this paper is organized as follows. Section 2 gives the iterative least squares estimates for output error systems. Section 3 derives an iterative least squares algorithm using the partitioned matrix inversion lemma. Section 4 provides simulation examples to show the effectiveness of the proposed algorithm. Finally, Section 5 offers some concluding remarks.

2. Basic algorithms

Consider the following output error system [30]:

$$y(t) = x(t) + v(t), \quad (1)$$

$$x(t) = -\sum_{i=1}^{n_a} a_i x(t-i) + \sum_{i=1}^{n_b} b_i u(t-i), \quad (2)$$

where $\{u(t)\}$ and $\{y(t)\}$ are the input and output sequences of the system and $\{v(t)\}$ is a white noise sequence with zero mean. Assume that the orders $n_a$ and $n_b$ are known, $n := n_a + n_b$, and $y(t) = 0$, $u(t) = 0$ and $v(t) = 0$ for $t \le 0$. The objective is to derive an iterative parameter estimation algorithm, based on the partitioned matrix inversion lemma, to estimate the unknown parameters $(a_i, b_i)$ from the available input-output measurement data $\{u(t), y(t): t = 0, 1, 2, \ldots, L\}$, where $L$ denotes the data length ($L \gg n$).

Define the parameter vector $\theta$ and the information vector $\varphi(t)$ as

$$\theta := \begin{bmatrix} a \\ b \end{bmatrix} \in \mathbb{R}^n, \quad \varphi(t) := \begin{bmatrix} \phi(t) \\ \psi(t) \end{bmatrix} \in \mathbb{R}^n,$$

where

$$a := [a_1, a_2, \ldots, a_{n_a}]^T \in \mathbb{R}^{n_a}, \quad b := [b_1, b_2, \ldots, b_{n_b}]^T \in \mathbb{R}^{n_b},$$

$$\phi(t) := [-x(t-1), -x(t-2), \ldots, -x(t-n_a)]^T \in \mathbb{R}^{n_a},$$

$$\psi(t) := [u(t-1), u(t-2), \ldots, u(t-n_b)]^T \in \mathbb{R}^{n_b}. \quad (3)$$

Then (2) and (1) can be written as

$$x(t) = \phi^T(t) a + \psi^T(t) b, \quad (4)$$

$$y(t) = \phi^T(t) a + \psi^T(t) b + v(t). \quad (5)$$
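To make the model concrete, (1)-(2) can be simulated directly. The sketch below is our own illustration (not from the paper); it uses the second-order coefficients that appear later in Example 1 and the zero initial conditions stated above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 1 coefficients: A(z) = 1 - 0.428 z^{-1} + 0.509 z^{-2},
#                         B(z) = 0.2904 z^{-1} + 0.7403 z^{-2}
a = np.array([-0.428, 0.509])    # [a_1, a_2]
b = np.array([0.2904, 0.7403])   # [b_1, b_2]
L = 3000

u = rng.standard_normal(L + 1)                   # input {u(t)}, t = 0..L
v = np.sqrt(0.50) * rng.standard_normal(L + 1)   # white noise, sigma^2 = 0.50

x = np.zeros(L + 1)   # noise-free inner output x(t), taken as zero for t <= 0
y = np.zeros(L + 1)
for t in range(1, L + 1):
    # (2): x(t) = -a_1 x(t-1) - a_2 x(t-2) + b_1 u(t-1) + b_2 u(t-2)
    x[t] = -a[0] * x[t - 1] + b[0] * u[t - 1]
    if t >= 2:
        x[t] += -a[1] * x[t - 2] + b[1] * u[t - 2]
    y[t] = x[t] + v[t]   # (1): y(t) = x(t) + v(t)
```

Note that only $\{u(t), y(t)\}$ would be available to the identification algorithm; the inner variable $x(t)$ is unmeasurable.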
Define the stacked output vector $Y$ and the stacked information matrices $\Phi$ and $\Psi$ as

$$Y := \begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(L) \end{bmatrix} \in \mathbb{R}^{L}, \quad \Phi := \begin{bmatrix} \phi^T(1) \\ \phi^T(2) \\ \vdots \\ \phi^T(L) \end{bmatrix} \in \mathbb{R}^{L \times n_a}, \quad \Psi := \begin{bmatrix} \psi^T(1) \\ \psi^T(2) \\ \vdots \\ \psi^T(L) \end{bmatrix} \in \mathbb{R}^{L \times n_b}.$$

Note that the matrix $\Psi$ and the vector $Y$ contain all the measured data $\{u(t), y(t): t = 0, 1, 2, \ldots, L\}$, whereas the matrix $\Phi$ is unknown because the true output terms (i.e., the noise-free output terms) in $\Phi$ are unknown inner variables.

According to (5), define the cost function

$$J(\theta) := \sum_{t=1}^{L} [y(t) - \phi^T(t) a - \psi^T(t) b]^2 = \| Y - \Phi a - \Psi b \|^2,$$

where the norm of a vector $x$ is defined by $\|x\|^2 := x^T x$. Minimizing $J(\theta)$ and setting the partial derivative of $J(\theta)$ with respect to $\theta$ to zero give

$$\frac{\partial J(\theta)}{\partial \theta} = -2 \begin{bmatrix} \Phi^T \\ \Psi^T \end{bmatrix} \left( Y - [\Phi, \Psi] \begin{bmatrix} a \\ b \end{bmatrix} \right) = 0.$$

Provided that the involved matrix is invertible, we obtain

$$\theta = \begin{bmatrix} a \\ b \end{bmatrix} = \left( \begin{bmatrix} \Phi^T \\ \Psi^T \end{bmatrix} [\Phi, \Psi] \right)^{-1} \begin{bmatrix} \Phi^T \\ \Psi^T \end{bmatrix} Y = \begin{bmatrix} \Phi^T \Phi & \Phi^T \Psi \\ \Psi^T \Phi & \Psi^T \Psi \end{bmatrix}^{-1} \begin{bmatrix} \Phi^T Y \\ \Psi^T Y \end{bmatrix}. \quad (6)$$

However, since $\Phi$ is unknown, it is impossible to compute the parameter estimation vector $\theta$ from the above equation directly. The solution here is based on the hierarchical identification principle [37,38]. Let $k = 1, 2, 3, \ldots$ be an iteration variable, let $\hat{\theta}_k := [\hat{a}_k^T, \hat{b}_k^T]^T$ be the iterative estimate of $\theta$, and let $\hat{x}_k(t-i)$ be the estimate of $x(t-i)$ at iteration $k$. Define the estimates

$$\hat{\phi}_k(t) := [-\hat{x}_{k-1}(t-1), -\hat{x}_{k-1}(t-2), \ldots, -\hat{x}_{k-1}(t-n_a)]^T \in \mathbb{R}^{n_a},$$

$$\hat{\Phi}_k := \begin{bmatrix} \hat{\phi}_k^T(1) \\ \hat{\phi}_k^T(2) \\ \vdots \\ \hat{\phi}_k^T(L) \end{bmatrix} \in \mathbb{R}^{L \times n_a}. \quad (7)$$
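These definitions translate directly into code. As an illustrative helper (our own sketch, not the author's code, assuming the zero initial conditions stated above), the stacked quantities can be formed from the data and the previous iterate $\hat{x}_{k-1}(t)$ as:

```python
import numpy as np

def build_regressors(y, u, x_hat, na, nb):
    """Form Y, Phi_hat_k and Psi from data {u(t), y(t): t = 0..L} and the
    previous iterate x_hat_{k-1}(t); signals with argument t <= 0 are zero."""
    L = len(y) - 1
    Y = y[1:L + 1]                 # Y = [y(1), ..., y(L)]^T
    Phi = np.zeros((L, na))        # rows are phi_hat_k(t)^T
    Psi = np.zeros((L, nb))        # rows are psi(t)^T
    for t in range(1, L + 1):
        for i in range(1, na + 1):
            # phi_hat_k(t) = [-x_hat_{k-1}(t-1), ..., -x_hat_{k-1}(t-na)]^T
            Phi[t - 1, i - 1] = -x_hat[t - i] if t - i >= 0 else 0.0
        for i in range(1, nb + 1):
            Psi[t - 1, i - 1] = u[t - i] if t - i >= 0 else 0.0
    return Y, Phi, Psi
```

Solving the least squares problem with the combined regressor matrix $[\hat{\Phi}_k, \Psi]$ then yields the iterative estimate derived next.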
Replacing $\phi(t)$, $a$ and $b$ in (4) with their estimates $\hat{\phi}_k(t)$, $\hat{a}_k$ and $\hat{b}_k$, respectively, the estimate $\hat{x}_k(t)$ of $x(t)$ can be computed by

$$\hat{x}_k(t) = \hat{\phi}_k^T(t) \hat{a}_k + \psi^T(t) \hat{b}_k. \quad (8)$$

Replacing $\Phi$ in (6) with its estimate $\hat{\Phi}_k$ gives the following iterative least squares estimate of $\theta$:

$$\begin{bmatrix} \hat{a}_k \\ \hat{b}_k \end{bmatrix} = \begin{bmatrix} \hat{\Phi}_k^T \hat{\Phi}_k & \hat{\Phi}_k^T \Psi \\ \Psi^T \hat{\Phi}_k & \Psi^T \Psi \end{bmatrix}^{-1} \begin{bmatrix} \hat{\Phi}_k^T Y \\ \Psi^T Y \end{bmatrix} =: S_k^{-1} \begin{bmatrix} \hat{\Phi}_k^T Y \\ \Psi^T Y \end{bmatrix}, \quad (9)$$

where

$$S_k := \begin{bmatrix} \hat{\Phi}_k^T \hat{\Phi}_k & \hat{\Phi}_k^T \Psi \\ \Psi^T \hat{\Phi}_k & \Psi^T \Psi \end{bmatrix} \in \mathbb{R}^{(n_a+n_b) \times (n_a+n_b)}. \quad (10)$$

Eq. (9) shows that, at each iteration, we can calculate $\hat{\theta}_k$ by solving a linear system with the coefficient matrix $S_k$. Note that the $(2,2)$ block $\Psi^T \Psi$ of $S_k$ is unchanged during all of the iteration steps. We will calculate $\hat{\theta}_k$ by applying the partitioned matrix inversion lemma in order to achieve a high computational efficiency.

3. The iterative algorithm using the partitioned matrix inversion lemma

In this section, we derive an iterative algorithm based on the partitioned matrix inversion lemma for computing the parameter estimation vectors. Firstly, let us give the partitioned matrix inversion lemma.

Lemma 1 (The partitioned matrix inversion lemma). Suppose that $A_1 \in \mathbb{R}^{m \times m}$, $A_{12} \in \mathbb{R}^{m \times n}$, $A_{21} \in \mathbb{R}^{n \times m}$ and $A_2 \in \mathbb{R}^{n \times n}$, and that the two matrices $A_2$ and $Q := A_1 - A_{12} A_2^{-1} A_{21} \in \mathbb{R}^{m \times m}$ are nonsingular. Then the following block matrix inversion relation holds:

$$\begin{bmatrix} A_1 & A_{12} \\ A_{21} & A_2 \end{bmatrix}^{-1} = \begin{bmatrix} Q^{-1} & -Q^{-1} A_{12} A_2^{-1} \\ -A_2^{-1} A_{21} Q^{-1} & A_2^{-1} + A_2^{-1} A_{21} Q^{-1} A_{12} A_2^{-1} \end{bmatrix} \in \mathbb{R}^{(m+n) \times (m+n)}.$$

Proof. Multiplying the block matrix by the claimed inverse gives

$$\begin{bmatrix} A_1 & A_{12} \\ A_{21} & A_2 \end{bmatrix} \begin{bmatrix} Q^{-1} & -Q^{-1} A_{12} A_2^{-1} \\ -A_2^{-1} A_{21} Q^{-1} & A_2^{-1} + A_2^{-1} A_{21} Q^{-1} A_{12} A_2^{-1} \end{bmatrix}$$

$$= \begin{bmatrix} (A_1 - A_{12} A_2^{-1} A_{21}) Q^{-1} & -(A_1 - A_{12} A_2^{-1} A_{21}) Q^{-1} A_{12} A_2^{-1} + A_{12} A_2^{-1} \\ A_{21} Q^{-1} - A_{21} Q^{-1} & -A_{21} Q^{-1} A_{12} A_2^{-1} + I + A_{21} Q^{-1} A_{12} A_2^{-1} \end{bmatrix}$$

$$= \begin{bmatrix} Q Q^{-1} & -A_{12} A_2^{-1} + A_{12} A_2^{-1} \\ 0 & I \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix},$$

where $I$ represents an identity matrix of appropriate size. This completes the proof. $\square$

Let $S := \Psi^T \Psi \in \mathbb{R}^{n_b \times n_b}$. Applying Lemma 1 to (10) gives

$$S_k^{-1} = \begin{bmatrix} \hat{\Phi}_k^T \hat{\Phi}_k & \hat{\Phi}_k^T \Psi \\ \Psi^T \hat{\Phi}_k & S \end{bmatrix}^{-1} = \begin{bmatrix} Q_k^{-1} & -Q_k^{-1} \hat{\Phi}_k^T \Psi S^{-1} \\ -S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} & S^{-1} + S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} \hat{\Phi}_k^T \Psi S^{-1} \end{bmatrix}, \quad (11)$$

where

$$Q_k := \hat{\Phi}_k^T \hat{\Phi}_k - \hat{\Phi}_k^T \Psi S^{-1} \Psi^T \hat{\Phi}_k \in \mathbb{R}^{n_a \times n_a}.$$

Let $\alpha := S^{-1} \Psi^T Y \in \mathbb{R}^{n_b}$. Inserting (11) into (9) gives

$$\begin{bmatrix} \hat{a}_k \\ \hat{b}_k \end{bmatrix} = S_k^{-1} \begin{bmatrix} \hat{\Phi}_k^T Y \\ \Psi^T Y \end{bmatrix} = \begin{bmatrix} Q_k^{-1} \hat{\Phi}_k^T Y - Q_k^{-1} \hat{\Phi}_k^T \Psi S^{-1} \Psi^T Y \\ -S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} \hat{\Phi}_k^T Y + S^{-1} \Psi^T Y + S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} \hat{\Phi}_k^T \Psi S^{-1} \Psi^T Y \end{bmatrix}$$

$$= \begin{bmatrix} Q_k^{-1} \hat{\Phi}_k^T Y - Q_k^{-1} \hat{\Phi}_k^T \Psi \alpha \\ -S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} \hat{\Phi}_k^T Y + \alpha + S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} \hat{\Phi}_k^T \Psi \alpha \end{bmatrix} = \begin{bmatrix} Q_k^{-1} \hat{\Phi}_k^T (Y - \Psi \alpha) \\ \alpha - S^{-1} \Psi^T \hat{\Phi}_k Q_k^{-1} \hat{\Phi}_k^T (Y - \Psi \alpha) \end{bmatrix}. \quad (12)$$

From the above, we can summarize the decomposition based fast iterative least squares (D-ILS) algorithm as

$$\hat{a}_k = Q_k^{-1} \hat{\Phi}_k^T (Y - \Psi \alpha), \quad k = 1, 2, 3, \ldots, \quad (13)$$

$$\hat{b}_k = \alpha - S^{-1} \Psi^T \hat{\Phi}_k \hat{a}_k, \quad (14)$$

$$\hat{\Phi}_k = [\hat{\phi}_k(1), \hat{\phi}_k(2), \ldots, \hat{\phi}_k(L)]^T, \quad (15)$$

$$\hat{\phi}_k(t) := [-\hat{x}_{k-1}(t-1), -\hat{x}_{k-1}(t-2), \ldots, -\hat{x}_{k-1}(t-n_a)]^T, \quad t = 1, 2, \ldots, L, \quad (16)$$

$$\psi(t) = [u(t-1), u(t-2), \ldots, u(t-n_b)]^T, \quad (17)$$

$$\hat{x}_k(t) = \hat{\phi}_k^T(t) \hat{a}_k + \psi^T(t) \hat{b}_k, \quad (18)$$

$$Y := [y(1), y(2), \ldots, y(L)]^T, \quad (19)$$

$$\Psi = [\psi(1), \psi(2), \ldots, \psi(L)]^T, \quad (20)$$

$$S = \Psi^T \Psi, \quad (21)$$

$$\alpha = S^{-1} \Psi^T Y, \quad (22)$$

$$M = I - \Psi S^{-1} \Psi^T, \quad (23)$$

$$Q_k = \hat{\Phi}_k^T M \hat{\Phi}_k. \quad (24)$$

The steps of implementing the D-ILS algorithm are as follows:

1. Collect the input-output data $\{u(t), y(t): t = 1, 2, 3, \ldots, L\}$.
2. To initialize: let $k = 1$, let $\hat{x}_0(t)$ be a random number, and give a small number $\varepsilon > 0$.
3. Form $Y$ by (19), $\psi(t)$ by (17) and $\Psi$ by (20).
4. Compute $S$, $\alpha$ and $M$ by (21)-(23).
5. Form $\hat{\phi}_k(t)$ by (16) and $\hat{\Phi}_k$ by (15), and compute $Q_k$ by (24).
6. Update the parameter estimates $\hat{a}_k$ by (13) and $\hat{b}_k$ by (14), and compute $\hat{x}_k(t)$ by (18).
7. If $\|\hat{\theta}_k - \hat{\theta}_{k-1}\| \le \varepsilon$, terminate the procedure and obtain the parameter estimate $\hat{\theta}_k = [\hat{a}_k^T, \hat{b}_k^T]^T$; otherwise, increase $k$ by 1 and go to step 5.

The flowchart of computing the estimates $\hat{a}_k$ and $\hat{b}_k$ is shown in Fig. 1.

Fig. 1. The flowchart of computing the parameter estimates $\hat{a}_k$ and $\hat{b}_k$.
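As a concreteness check, steps 1-7 can be sketched in Python. This is our own illustrative NumPy implementation of (13)-(24), not the author's code; the function name, the convergence settings and the random initialization of $\hat{x}_0(t)$ are assumptions consistent with the description above:

```python
import numpy as np

def delayed(s, i, L):
    """Column of s(t - i) for t = 1..L, with s(t) = 0 for t < 0."""
    col = np.zeros(L)
    for t in range(1, L + 1):
        if t - i >= 0:
            col[t - 1] = s[t - i]
    return col

def d_ils(y, u, na, nb, max_iter=30, eps=1e-8, rng=None):
    """Decomposition based iterative least squares (D-ILS), steps 1-7."""
    rng = np.random.default_rng(0) if rng is None else rng
    L = len(y) - 1                                   # data for t = 0..L
    Y = y[1:L + 1]                                   # (19)
    Psi = np.column_stack([delayed(u, i, L) for i in range(1, nb + 1)])  # (17), (20)
    S_inv = np.linalg.inv(Psi.T @ Psi)               # (21)
    alpha = S_inv @ (Psi.T @ Y)                      # (22)
    M = np.eye(L) - Psi @ S_inv @ Psi.T              # (23)
    x_hat = rng.standard_normal(L + 1)               # step 2: random x_hat_0(t)
    x_hat[0] = 0.0
    theta_prev = np.zeros(na + nb)
    for k in range(1, max_iter + 1):
        Phi = np.column_stack([-delayed(x_hat, i, L) for i in range(1, na + 1)])  # (15), (16)
        Qk = Phi.T @ M @ Phi                         # (24)
        a_hat = np.linalg.solve(Qk, Phi.T @ (Y - Psi @ alpha))       # (13)
        b_hat = alpha - S_inv @ (Psi.T @ (Phi @ a_hat))              # (14)
        x_hat = np.concatenate(([0.0], Phi @ a_hat + Psi @ b_hat))   # (18)
        theta = np.concatenate((a_hat, b_hat))
        if np.linalg.norm(theta - theta_prev) <= eps:                # step 7
            break
        theta_prev = theta
    return theta
```

Forming $M$ explicitly costs $O(L^2)$ memory; since $Q_k = \hat{\Phi}_k^T \hat{\Phi}_k - (\hat{\Phi}_k^T \Psi) S^{-1} (\Psi^T \hat{\Phi}_k)$, the product in (24) can be computed without storing $M$ when $L$ is large.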
4. Examples

In this section, we present numerical experiments to illustrate the effectiveness of the derived iterative algorithm for output error systems. For comparison, we also present numerical results of the auxiliary model based recursive least squares (AM-RLS) algorithm [30,39] for estimating the parameter vector $\theta$:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + P(t) \hat{\varphi}(t) [y(t) - \hat{\varphi}^T(t) \hat{\theta}(t-1)], \quad (25)$$

$$P(t) = P(t-1) - \frac{P(t-1) \hat{\varphi}(t) \hat{\varphi}^T(t) P(t-1)}{1 + \hat{\varphi}^T(t) P(t-1) \hat{\varphi}(t)}, \quad (26)$$

$$\hat{\varphi}(t) = [-\hat{x}(t-1), -\hat{x}(t-2), \ldots, -\hat{x}(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T, \quad (27)$$

$$\hat{x}(t) = \hat{\varphi}^T(t) \hat{\theta}(t), \quad (28)$$

where $\hat{\theta}(t)$ is the estimate of $\theta$ at time $t$.

Example 1. Consider the following second-order output error system:

$$y(t) = \frac{B(z)}{A(z)} u(t) + v(t),$$

$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 - 0.428 z^{-1} + 0.509 z^{-2},$$

$$B(z) = b_1 z^{-1} + b_2 z^{-2} = 0.2904 z^{-1} + 0.7403 z^{-2},$$

$$\theta = [a_1, a_2, b_1, b_2]^T = [-0.428, 0.509, 0.2904, 0.7403]^T,$$

where $z^{-1}$ is the unit backward shift operator: $z^{-1} y(t) = y(t-1)$. In the simulation, we generate a persistently exciting sequence with zero mean and unit variance as the input $\{u(t)\}$ and take $\{v(t)\}$ to be an uncorrelated noise sequence with zero mean and variance $\sigma^2 = 0.50$; the noise-to-signal ratio of the system is $\delta_{ns} = 47.52\%$. Taking the data length $t = L = 3000$ and using the D-ILS algorithm to identify this output error system, the parameter estimates and their estimation errors are shown in Table 1, and the estimation error $\delta := \|\hat{\theta}_k - \theta\| / \|\theta\|$ versus $k$ is shown in Fig. 2. Applying the AM-RLS algorithm to estimate the parameters of this example system, the parameter estimates and their estimation errors are shown in Table 2, and the estimation error $\delta := \|\hat{\theta}(t) - \theta\| / \|\theta\|$ versus $t$ is shown in Fig. 3.
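For comparison in code, the AM-RLS recursion (25)-(28) can likewise be sketched (our own illustrative implementation; the covariance initialization $P(0) = p_0 I$ with a large $p_0$ is a conventional choice not specified above):

```python
import numpy as np

def am_rls(y, u, na, nb, p0=1e6):
    """Auxiliary model based recursive least squares, (25)-(28)."""
    n = na + nb
    theta = np.zeros(n)              # theta_hat(0)
    P = p0 * np.eye(n)               # P(0) = p0 * I (assumed initialization)
    x_hat = np.zeros(len(y))         # auxiliary model output, zero for t <= 0
    for t in range(1, len(y)):
        # (27): phi_hat(t) = [-x_hat(t-1),...,-x_hat(t-na), u(t-1),...,u(t-nb)]^T
        phi = np.array([-x_hat[t - i] if t - i >= 0 else 0.0
                        for i in range(1, na + 1)]
                       + [u[t - i] if t - i >= 0 else 0.0
                          for i in range(1, nb + 1)])
        denom = 1.0 + phi @ P @ phi
        K = (P @ phi) / denom                     # gain vector
        theta = theta + K * (y[t] - phi @ theta)  # (25)
        P = P - np.outer(K, phi @ P)              # (26)
        x_hat[t] = phi @ theta                    # (28)
    return theta
```

Unlike D-ILS, which sweeps the whole data batch at each iteration, this recursion updates the estimate once per time step.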
Example 2. Consider the following third-order output error system:

$$y(t) = \frac{B(z)}{A(z)} u(t) + v(t),$$

$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3} = 1 - 0.40 z^{-1} + 0.52 z^{-2} + 0.06 z^{-3},$$

$$B(z) = b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} = 0.31 z^{-1} + 0.65 z^{-2} + 0.76 z^{-3},$$

$$\theta = [a_1, a_2, a_3, b_1, b_2, b_3]^T = [-0.40, 0.52, 0.06, 0.31, 0.65, 0.76]^T.$$

The simulation conditions are the same as in Example 1; the noise-to-signal ratio of the system is $\delta_{ns} = 35.41\%$.
Table 1
The D-ILS parameter estimates and errors of Example 1.

k             a1          a2         b1         b2         δ (%)
1             -0.00106    0.00738    0.28808    0.86567    64.68292
2             -0.38348    0.40604    0.28935    0.75150    10.87516
3             -0.45228    0.54816    0.29000    0.73188    4.51842
4             -0.40910    0.50202    0.28956    0.74601    2.02185
5             -0.42903    0.51081    0.29006    0.73975    0.21086
10            -0.42440    0.51198    0.28987    0.74108    0.45960
15            -0.42433    0.51199    0.28987    0.74110    0.46657
20            -0.42432    0.51199    0.28987    0.74110    0.46669
True values   -0.42800    0.50900    0.29040    0.74030
Table 2
The AM-RLS parameter estimates and errors of Example 1.

t             a1          a2         b1         b2         δ (%)
100           -0.54203    0.49611    0.31394    0.66565    13.39986
200           -0.52049    0.52238    0.29622    0.71230    9.42771
500           -0.43608    0.52145    0.28781    0.72548    2.03871
1000          -0.40845    0.51050    0.28048    0.73661    2.14912
2000          -0.42615    0.50852    0.28904    0.74497    0.50389
3000          -0.42513    0.51850    0.29009    0.74063    0.95876
True values   -0.42800    0.50900    0.29040    0.74030
Fig. 2. The D-ILS estimation error δ versus k of Example 1.
Fig. 3. The AM-RLS estimation error δ versus t of Example 1.
Table 3
The D-ILS parameter estimates and errors of Example 2.

k             a1          a2         a3          b1         b2         b3         δ (%)
1             -0.01666    0.00079    -0.01049    0.31624    0.77414    0.89850    54.59718
2             -0.35140    0.38602    0.26207     0.31493    0.66058    0.74122    20.06943
3             -0.42930    0.60162    -0.05193    0.30841    0.63380    0.75617    11.52584
4             -0.34798    0.44201    0.13802     0.31037    0.66775    0.76750    9.98201
5             -0.41338    0.54582    0.04616     0.30909    0.64368    0.74655    2.86830
10            -0.40280    0.52404    0.06259     0.30904    0.64886    0.74883    1.01537
15            -0.40266    0.52375    0.06320     0.30906    0.64892    0.74883    1.01688
20            -0.40267    0.52377    0.06319     0.30906    0.64892    0.74883    1.01757
True values   -0.40000    0.52000    0.06000     0.31000    0.65000    0.76000
Table 4
The AM-RLS parameter estimates and errors of Example 2.

t             a1          a2         a3         b1         b2         b3         δ (%)
100           -0.35854    0.47683    0.05860    0.34745    0.63794    0.92753    14.72923
200           -0.35494    0.47793    0.05722    0.31555    0.66084    0.91742    13.70383
500           -0.32296    0.46302    0.08287    0.31290    0.66235    0.83751    10.18524
1000          -0.34168    0.47196    0.08122    0.30246    0.65773    0.79153    6.89309
2000          -0.37628    0.49759    0.07825    0.31051    0.65958    0.77388    3.31565
3000          -0.36872    0.49646    0.08920    0.31068    0.65740    0.77048    4.08271
True values   -0.40000    0.52000    0.06000    0.31000    0.65000    0.76000
Fig. 4. The D-ILS estimation error δ versus k of Example 2.
The simulation results are shown in Tables 3 and 4 and Figs. 4 and 5. From Tables 1-4 and Figs. 2-5, we can draw the following conclusions:

1. The estimation errors given by the D-ILS algorithm become smaller as the iteration number k increases; see the estimation errors in the last columns of Tables 1 and 3 and the estimation error curves in Figs. 2 and 4. The parameter estimates given by the proposed D-ILS algorithm are very close to their true values after only a few iterations (about six). This shows that the proposed iterative least squares algorithm works well for estimating the parameters of output error systems.

2. In Example 1, when the data length t = L = 3000, the relative estimation errors given by the AM-RLS algorithm and by the D-ILS algorithm with k = 20 iterations are δ = 0.95876% (Table 2) and δ = 0.46669% (Table 1), respectively. In Example 2, the relative estimation errors are δ = 4.08271% (Table 4, AM-RLS) and δ = 1.01757% (Table 3, D-ILS with k = 20 iterations). This shows that the D-ILS algorithm can generate more accurate parameter estimates than the AM-RLS algorithm.
Fig. 5. The AM-RLS estimation error δ versus t of Example 2.
5. Conclusions

This paper derives an iterative least squares (ILS) estimation algorithm for output error systems, uses the partitioned matrix inversion lemma to implement the ILS algorithm, and thereby gives a decomposition based iterative least squares algorithm. The proposed decomposition based ILS algorithm generates more accurate parameter estimates than the auxiliary model based recursive least squares algorithm and works well for estimating the parameters of output error systems. The proposed method can be extended to other linear or nonlinear systems [40,41].

References

[1] M. Dehghan, M. Hajarian, Two algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of Sylvester matrix equations, Applied Mathematics Letters 24 (4) (2011) 444–449.
[2] M. Dehghan, M. Hajarian, An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices, Applied Mathematical Modelling 34 (3) (2010) 639–654.
[3] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digital Signal Processing 21 (2) (2011) 215–238.
[4] C.Y. Zhao, X.G. Liu, et al., Melt index prediction based on adaptive particle swarm optimization algorithm-optimized radial basis function neural networks, Chemical Engineering & Technology 33 (11) (2010) 1909–1916.
[5] X.G. Liu, C.Y. Wang, L. Cong, Adaptive robust generic model control of high-purity internal thermally coupled distillation column, Chemical Engineering & Technology 34 (1) (2011) 111–118.
[6] X.G. Liu, Y.X. Zhou, et al., High-purity control of internal thermally coupled distillation columns based on the nonlinear wave model, Journal of Process Control 21 (6) (2011) 920–926.
[7] X.G. Liu, C.Y. Wang, et al., Adaptive generalised predictive control of high purity internal thermally coupled distillation column, Canadian Journal of Chemical Engineering 90 (2) (2012) 420–428.
[8] L. Xie, Y.J. Liu, H.Z. Yang, Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F, Applied Mathematics and Computation 217 (5) (2010) 2191–2199.
[9] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, MD, 1996.
[10] F. Ding, T. Chen, Iterative least squares solutions of coupled Sylvester matrix equations, Systems & Control Letters 54 (2) (2005) 95–107.
[11] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM Journal on Control and Optimization 44 (6) (2006) 2269–2284.
[12] J.H. Li, R.F. Ding, Y. Yang, Iterative parameter identification methods for nonlinear functions, Applied Mathematical Modelling 36 (6) (2012) 2739–2750.
[13] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Applied Mathematical Modelling 37 (x) (2013), http://dx.doi.org/10.1016/j.apm.2012.10.014.
[14] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Applied Mathematical Modelling 37 (4) (2013) 1694–1704.
[15] I.J. Umoh, T. Ogunfunmi, An affine projection-based algorithm for identification of nonlinear Hammerstein systems, Signal Processing 90 (6) (2010) 2020–2030.
[16] F. Ding, X.P. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises, Signal Processing 89 (10) (2009) 1883–1890.
[17] F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 226 (1) (2012) 43–55.
[18] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Applied Mathematics and Computation 215 (4) (2009) 1477–1483.
[19] Y.J. Liu, J. Sheng, R.F. Ding, Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems, Computers & Mathematics with Applications 59 (8) (2010) 2615–2627.
[20] J. Ding, L.L. Han, X.M. Chen, Time series AR modeling with missing observations based on the polynomial transformation, Mathematical and Computer Modelling 51 (5–6) (2010) 527–536.
[21] Y.S. Xiao, Y. Zhang, J. Ding, J.Y. Dai, The residual based interactive least squares algorithms and simulation studies, Computers & Mathematics with Applications 58 (6) (2009) 1190–1197.
[22] L.Y. Wang, L. Xie, X.F. Wang, The residual based interactive stochastic gradient algorithms for controlled moving average models, Applied Mathematics and Computation 211 (2) (2009) 442–449.
[23] F. Ding, T. Chen, Hierarchical identification of lifted state-space models for general dual-rate systems, IEEE Transactions on Circuits and Systems I: Regular Papers 52 (6) (2005) 1179–1187.
[24] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, International Journal of Control 83 (3) (2010) 538–551.
[25] B. Yu, Y. Shi, H. Huang, l-2 and l-infinity filtering for multirate systems using lifted models, Circuits, Systems, and Signal Processing 27 (5) (2008) 699–711.
[26] L.F. Zhuang, F. Pan, et al., Parameter and state estimation algorithm for single-input single-output linear systems using the canonical state space models, Applied Mathematical Modelling 36 (8) (2012) 3454–3463.
[27] Y. Shi, B. Yu, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Transactions on Automatic Control 54 (7) (2009) 1668–1674.
[28] M. Yan, Y. Shi, Robust discrete-time sliding mode control for uncertain systems with time-varying state delay, IET Control Theory & Applications 2 (8) (2008) 662–674.
[29] H. Fang, J. Wu, Y. Shi, Genetic adaptive state estimation with missing input/output data, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 224 (5) (2010) 611–617.
[30] F. Ding, X.P. Liu, G. Liu, Gradient based and least-squares based iterative identification methods for OE and OEMA systems, Digital Signal Processing 20 (3) (2010) 664–677.
[31] Y.J. Liu, D.Q. Wang, et al., Least-squares based iterative algorithms for identifying Box–Jenkins models with finite measurement data, Digital Signal Processing 20 (5) (2010) 1458–1467.
[32] D.Q. Wang, G.W. Yang, R.F. Ding, Gradient-based iterative parameter estimation for Box–Jenkins systems, Computers & Mathematics with Applications 60 (5) (2010) 1200–1208.
[33] Z.N. Zhang, F. Ding, X.G. Liu, Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average systems, Computers & Mathematics with Applications 61 (3) (2011) 672–682.
[34] D.Q. Wang, Least squares-based recursive and iterative estimation for output error moving average systems using data filtering, IET Control Theory and Applications 5 (14) (2011) 1648–1657.
[35] H.Y. Hu, F. Ding, An iterative least squares estimation algorithm for controlled moving average systems based on matrix decomposition, Applied Mathematics Letters 25 (12) (2012) 2332–2338.
[36] H.Y. Hu, R.F. Ding, Decomposition based iterative estimation algorithm for autoregressive moving average models, in: The 31st Chinese Control Conference (2012 CCC), July 25–27, 2012, Hefei, China, pp. 1932–1937.
[37] F. Ding, T. Chen, Hierarchical least squares identification methods for multivariable systems, IEEE Transactions on Automatic Control 50 (3) (2005) 397–402.
[38] J. Ding, F. Ding, X.P. Liu, G. Liu, Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data, IEEE Transactions on Automatic Control 56 (11) (2011) 2677–2683.
[39] Y.J. Liu, L. Xie, et al., An auxiliary model based recursive least squares parameter estimation algorithm for non-uniformly sampled multirate systems, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 223 (4) (2009) 445–454.
[40] W. Wang, F. Ding, J.Y. Dai, Maximum likelihood least squares identification for systems with autoregressive moving average noise, Applied Mathematical Modelling 36 (5) (2012) 1842–1853.
[41] J.H. Li, F. Ding, G.W. Yang, Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average systems, Mathematical and Computer Modelling 55 (3–4) (2012) 442–450.