A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input–output data filtering


Accepted Manuscript

Feng Ding, Yanjiao Wang, Jiyang Dai, Qishen Li, Qijia Chen

PII: S0016-0032(17)30382-4
DOI: 10.1016/j.jfranklin.2017.08.009
Reference: FI 3092

To appear in: Journal of the Franklin Institute

Received date: 18 June 2016
Revised date: 6 March 2017
Accepted date: 3 August 2017

Please cite this article as: Feng Ding, Yanjiao Wang, Jiyang Dai, Qishen Li, Qijia Chen, A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input–output data filtering, Journal of the Franklin Institute (2017), doi: 10.1016/j.jfranklin.2017.08.009

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Feng Ding a,b,d,∗, Yanjiao Wang a, Jiyang Dai c, Qishen Li c, Qijia Chen a

a School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, PR China
b College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266042, PR China
c School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, PR China
d Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia

Abstract


Nonlinear systems exist widely in industrial processes. This paper studies parameter estimation methods for establishing mathematical models of a class of output nonlinear systems, whose output is nonlinear in the past outputs and linear in the inputs. We use an estimated noise transfer function to filter the input-output data and obtain two identification models, one containing the parameters of the system model and the other containing the parameters of the noise model. Based on the data filtering technique, a filtering based recursive least squares algorithm is proposed. The simulation results show that the proposed algorithm generates more accurate parameter estimates than the recursive generalized least squares algorithm.

Keywords: Mathematical modeling, Nonlinear system, Least squares, Parameter estimation

1. Introduction


Parameter identification and establishing models of dynamical systems are the basis of control system analysis [1, 2, 3] and controller design [4, 5]. Linear system identification methods have been well developed [6, 7], e.g., the generalized projection algorithm [8] for time-varying systems and the auxiliary model based recursive least squares algorithm for linear-in-parameter systems [9]. Nonlinear system identification has also received much research attention [10]. Xu et al. studied the parameter estimation and controller design for dynamic systems from step responses based on the Newton iteration [11], applied the Newton iteration algorithm to the parameter estimation of dynamical systems [12], and presented a damping iterative parameter identification method for dynamical systems based on the sine signal measurement [13]. Raja and Chaudhary studied a two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems [14]. Li discussed parameter estimation for Hammerstein CARARMA systems based on the Newton iteration [15]. Xie et al. studied the finite impulse response model identification of multirate processes with random delays using the expectation maximization algorithm [16]. Hammerstein models [17, 18], Wiener models and their combinations [19] are the most common model structures in the nonlinear system literature. Wang and Ding presented an interval-varying generalized extended stochastic gradient (GESG) algorithm and an interval-varying recursive generalized extended least squares (RGELS) algorithm for Hammerstein-Wiener systems [20], and a filtering based GESG algorithm and a filtering based RGELS algorithm for Hammerstein-Wiener systems with ARMA noise [21]. Wang and Zhang used the Taylor expansion and investigated an improved least squares algorithm for identifying the parameters of multivariable Hammerstein output-error moving average systems [22]. Wang et al.
discussed the recasted models based hierarchical extended stochastic gradient method for MIMO nonlinear systems [23].

Information filtering has wide applications in many areas [24, 25], e.g., signal processing [26, 27], particle filtering [28] and state estimation [29]. Some filtering based methods have been used in different fields during the past decade [30, 31], e.g., signal processing [32] and parameter identification [33]. A filtering based iterative algorithm [34], a filtering based auxiliary model hierarchical gradient algorithm [35] and a filtering based auxiliary model hierarchical least squares algorithm [36] have been proposed for multivariable systems. A decomposition based least squares iterative identification algorithm has been proposed for multivariate pseudo-linear ARMA systems using the data filtering [37]. Regarding decomposition based identification, Xu et al. proposed parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle [38].

On the basis of the work in [39], this paper presents a filtering based recursive least squares algorithm for output nonlinear autoregressive systems by using the hierarchical identification principle. The proposed filtering based parameter identification algorithm requires a lower computational cost and gives higher estimation accuracy; it recursively estimates the system model parameters, the noise model parameters and the internal variables.

Briefly, the rest of this paper is organized as follows. Section 2 gives the representation of output nonlinear systems. Section 3 derives a recursive least squares algorithm for output nonlinear systems. Two numerical examples are provided to show the effectiveness of the proposed algorithms in Section 4. Finally, some concluding remarks are offered in Section 5.

This work was supported by the National Natural Science Foundation of China (No. 61273194, 61663032).
∗ Corresponding author at: School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, PR China. Email address: [email protected] (Feng Ding)

Preprint submitted to Journal of the Franklin Institute. Submitted: June 18, 2016; Revised: March 6, 2017

2. The recursive generalized least squares algorithm

Let us define some symbols. "A =: X" or "X := A" stands for "A is defined as X"; θ̂(k) denotes the estimate of θ at time k; z denotes the unit forward shift operator with zy(k) = y(k + 1) and z^{-1}y(k) = y(k − 1); 1_n denotes an n-dimensional column vector whose elements are all 1; I_n represents an n × n identity matrix. Consider the following output nonlinear system with colored noise:

y(k) = A(z)h(y(k)) + B(z)u(k) + w(k),   (1)
v(k) = D(z)w(k),   (2)
h(y(k)) = c1 h1(y(k)) + c2 h2(y(k)) + ... + cnc hnc(y(k)) = h(y(k))c,   (3)

where u(k) and y(k) are the input and output of the system, respectively, w(k) is the colored noise and v(k) is a stochastic white noise with zero mean and variance σ², the nonlinear part is a linear combination of a known basis h = (h1, h2, ..., hnc) with coefficients c = (c1, c2, ..., cnc), and A(z), B(z) and D(z) are polynomials in the unit backward shift operator z^{-1}:

A(z) := a1 z^{-1} + a2 z^{-2} + ... + ana z^{-na},
B(z) := b1 z^{-1} + b2 z^{-2} + ... + bnb z^{-nb},
D(z) := 1 + d1 z^{-1} + d2 z^{-2} + ... + dnd z^{-nd}.

Since the disturbance w(k) = v(k)/D(z) in the system (1) is generated by an infinite impulse response (IIR) filter driven by the white noise v(k), D(z) should be stable so that the output variable y(k) is bounded and the estimation accuracy can be improved. The stability of D(z) improves convergence and accuracy because the output yf(k) of the filtered system in (31) involves only the white noise v(k).

Define the parameter vectors a, b and d as

a := [a1, a2, ..., ana]^T ∈ R^{na},
b := [b1, b2, ..., bnb]^T ∈ R^{nb},
d := [d1, d2, ..., dnd]^T ∈ R^{nd},
θ1 := [a^T, b^T]^T ∈ R^{na+nb},
θ2 := [c^T, d^T]^T ∈ R^{nc+nd},

and the information matrix H(k), the input information vector ψb(k) and the noise information vector ψd(k) as

H(k) := [h1(k), h2(k), ..., hnc(k)] ∈ R^{na×nc},
hi(k) := [hi(y(k−1)), hi(y(k−2)), ..., hi(y(k−na))]^T ∈ R^{na},
ψb(k) := [u(k−1), u(k−2), ..., u(k−nb)]^T ∈ R^{nb},
ψd(k) := [−w(k−1), −w(k−2), ..., −w(k−nd)]^T ∈ R^{nd}.

Following the above definitions, Equations (1)–(3) can be written as

y(k) = A(z)h(y(k)) + B(z)u(k) + [1 − D(z)]w(k) + v(k)
     = a^T H(k)c + ψb^T(k)b + ψd^T(k)d + v(k),   (4)
w(k) = ψd^T(k)d + v(k).   (5)

The objective of this paper is to use the measurement data {u(k), y(k)} to derive novel methods for estimating the unknown parameter vector c of the nonlinear part and the unknown parameter vectors a, b and d of the linear subsystems.
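To make the model concrete, the following Python sketch simulates input-output data from the system (1)–(3). It assumes the two-term basis h(y) = c1 y + c2 cos(y) used in the examples of Section 4; the function name and all numerical values are illustrative, not part of the derivation.

```python
import math
import random

def simulate(N, a, b, c, d, sigma, seed=0):
    """Simulate y(k) = A(z)h(y(k)) + B(z)u(k) + w(k) with D(z)w(k) = v(k),
    where h(y) = c[0]*y + c[1]*cos(y) (an assumed two-term basis)."""
    rng = random.Random(seed)
    u = [rng.gauss(0.0, 1.0) for _ in range(N)]  # persistently exciting input
    y = [0.0] * N
    w = [0.0] * N
    h = lambda x: c[0] * x + c[1] * math.cos(x)
    for k in range(N):
        v = rng.gauss(0.0, sigma)
        # w(k) = v(k) - d1*w(k-1) - ... - dnd*w(k-nd), from D(z)w(k) = v(k)
        w[k] = v - sum(d[i] * w[k - 1 - i]
                       for i in range(len(d)) if k - 1 - i >= 0)
        # y(k) = sum_i a_i h(y(k-i)) + sum_j b_j u(k-j) + w(k)
        y[k] = (sum(a[i] * h(y[k - 1 - i])
                    for i in range(len(a)) if k - 1 - i >= 0)
                + sum(b[j] * u[k - 1 - j]
                      for j in range(len(b)) if k - 1 - j >= 0)
                + w[k])
    return u, y
```

For instance, `simulate(3000, [0.40], [0.60, 0.40], [0.80, -0.60], [0.10], 0.30)` reproduces the structure of Example 1.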


For model (4), referring to [39], any pair γa and c/γ for any nonzero constant γ gives the same input-output mapping. For identifiability, we make the assumption that ‖c‖ = 1 and that the first entry of the vector c is positive, i.e., c1 > 0.

Let θ̂1(k) := [â^T(k), b̂^T(k)]^T ∈ R^{na+nb} and θ̂2(k) := [ĉ^T(k), d̂^T(k)]^T ∈ R^{nc+nd} be the estimates of θ1 and θ2 at time k, and define

φ1(k) := [(H(k)c)^T, ψb^T(k)]^T ∈ R^{na+nb},  φ2(k) := [(H^T(k)a)^T, ψd^T(k)]^T ∈ R^{nc+nd}.

For the identification model in (4), define two quadratic cost functions:

J1(θ1) := Σ_{j=1}^{k} [y(j) − ψd^T(j)d − φ1^T(j)θ1]²,
J2(θ2) := Σ_{j=1}^{k} [y(j) − ψb^T(j)b − φ2^T(j)θ2]².

According to the least squares principle, minimizing J1 (θ 1 ) and J2 (θ 2 ), we can obtain the following recursive generalized least squares relations:


θ̂1(k) = θ̂1(k−1) + L1(k)[y(k) − ψd^T(k)d̂(k−1) − φ1^T(k)θ̂1(k−1)],   (6)
L1(k) = P1(k−1)φ1(k)/[1 + φ1^T(k)P1(k−1)φ1(k)],   (7)
P1(k) = P1(k−1) − L1(k)L1^T(k)[1 + φ1^T(k)P1(k−1)φ1(k)],   (8)
θ̂2(k) = θ̂2(k−1) + L2(k)[y(k) − ψb^T(k)b̂(k) − φ2^T(k)θ̂2(k−1)],   (9)
L2(k) = P2(k−1)φ2(k)/[1 + φ2^T(k)P2(k−1)φ2(k)],   (10)
P2(k) = P2(k−1) − L2(k)L2^T(k)[1 + φ2^T(k)P2(k−1)φ2(k)].   (11)

Replacing the unknown variables w(k), ψd(k), φ1(k) and φ2(k) in (6)–(11) with their estimates ŵ(k), ψ̂d(k), φ̂1(k) and φ̂2(k), we obtain the recursive generalized least squares (RGLS) algorithm for generating the estimates θ̂1(k) and θ̂2(k) of θ1 and θ2:

θ̂1(k) = θ̂1(k−1) + L1(k)[y(k) − ψ̂d^T(k)d̂(k−1) − φ̂1^T(k)θ̂1(k−1)],   (12)
L1(k) = P1(k−1)φ̂1(k)[1 + φ̂1^T(k)P1(k−1)φ̂1(k)]^{-1},   (13)
P1(k) = P1(k−1) − L1(k)L1^T(k)[1 + φ̂1^T(k)P1(k−1)φ̂1(k)],   (14)
φ̂1(k) = [(H(k)ĉ(k−1))^T, ψb^T(k)]^T,   (15)
ψb(k) = [u(k−1), u(k−2), ..., u(k−nb)]^T,   (16)
θ̂2(k) = θ̂2(k−1) + L2(k)[y(k) − ψb^T(k)b̂(k) − φ̂2^T(k)θ̂2(k−1)],   (17)
L2(k) = P2(k−1)φ̂2(k)[1 + φ̂2^T(k)P2(k−1)φ̂2(k)]^{-1},   (18)
P2(k) = P2(k−1) − L2(k)L2^T(k)[1 + φ̂2^T(k)P2(k−1)φ̂2(k)],   (19)
φ̂2(k) = [(H^T(k)â(k))^T, ψ̂d^T(k)]^T,   (20)
ψ̂d(k) = [−ŵ(k−1), −ŵ(k−2), ..., −ŵ(k−nd)]^T,   (21)
ŵ(k) = y(k) − â^T(k)H(k)ĉ(k) − ψb^T(k)b̂(k),   (22)
H(k) = [h1(k), h2(k), ..., hnc(k)],   (23)
hi(k) = [hi(y(k−1)), hi(y(k−2)), ..., hi(y(k−na))]^T,   (24)
Θ̂(k) = [θ̂1^T(k), θ̂2^T(k)]^T,   (25)

with the initial values P1(0) = p0 I_{na+nb} and P2(0) = p0 I_{nc+nd}, where p0 is a large positive number (e.g., p0 = 10^6).
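The pairs of updates above share the same generic least squares gain and covariance recursion. The following Python sketch isolates one such step; it is a simplified, dependency-free illustration with hypothetical names, not the full RGLS algorithm (which also interleaves the estimation of the noise model).

```python
def rls_update(theta, P, phi, y):
    """One generic recursive least squares step:
    gain L, estimate theta, covariance P, as in (6)-(8)."""
    n = len(phi)
    # P(k-1) * phi
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    # denominator 1 + phi^T P(k-1) phi
    denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
    L = [p / denom for p in Pphi]                     # gain vector
    err = y - sum(phi[i] * theta[i] for i in range(n))  # innovation
    theta = [theta[i] + L[i] * err for i in range(n)]
    # P(k) = P(k-1) - L L^T * denom  (rank-one downdate)
    P = [[P[i][j] - L[i] * L[j] * denom for j in range(n)] for i in range(n)]
    return theta, P
```

Feeding consistent, noise-free regression data into repeated calls drives the estimate to the true parameter vector, which matches the convergence behaviour analyzed below.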

Theorem 1. For the identification model in (4) and the algorithm in (12)–(25), suppose that {v(k)} is a white noise sequence with zero mean and variance σ², i.e., E[v(k)] = 0, E[v²(k)] = σ², E[v(k)v(i)] = 0 (i ≠ k), and that there exist positive constants αi and βi such that for large k the following persistent excitation conditions hold:

(C1)  α1 I_{na+nb} ≤ (1/k) Σ_{j=1}^{k} φ̂1(j)φ̂1^T(j) ≤ β1 I_{na+nb}, a.s.,

(C2)  α2 I_{nc+nd} ≤ (1/k) Σ_{j=1}^{k} φ̂2(j)φ̂2^T(j) ≤ β2 I_{nc+nd}, a.s.

Then the parameter estimation error given by the RGLS algorithm in (12)–(25) converges to zero.

Proof. Let e(k) := y(k) − â^T(k−1)H(k)ĉ(k−1) − ψb^T(k)b̂(k−1) − ψ̂d^T(k)d̂(k−1). Then we have

θ̃1(k) = θ̃1(k−1) + P1(k)φ̂1(k)e(k),   (26)
θ̃2(k) = θ̃2(k−1) + P2(k)φ̂2(k)e(k).   (27)

Define the nonnegative functions

T1(k) := θ̃1^T(k)P1^{-1}(k)θ̃1(k),
T2(k) := θ̃2^T(k)P2^{-1}(k)θ̃2(k),
T(k) := T1(k) + T2(k).

Using (26) and (27), we have

T1(k) = [θ̃1(k−1) + P1(k)φ̂1(k)e(k)]^T P1^{-1}(k) [θ̃1(k−1) + P1(k)φ̂1(k)e(k)]
      = T1(k−1) + [θ̃1^T(k−1)φ̂1(k)]² + 2θ̃1^T(k−1)φ̂1(k)e(k) + φ̂1^T(k)P1(k)φ̂1(k)e²(k).   (28)

From the definition of e(k), we have

e(k) = a^T H(k)c + ψb^T(k)b + ψd^T(k)d + v(k) − â^T(k−1)H(k)ĉ(k−1) − ψb^T(k)b̂(k−1) − ψ̂d^T(k)d̂(k−1)
     = −ỹ1(k) + Δ1(k) + v(k),

where

ỹ1(k) := φ̂1^T(k)θ̃1(k) + ψ̂d^T(k)d̃(k),
Δ1(k) := [φ1(k) − φ̂1(k)]^T θ1 + [ψd(k) − ψ̂d(k)]^T d.

Thus, Equation (28) can be written as

T1(k) = T1(k−1) + [θ̃1^T(k−1)φ̂1(k)]² + 2θ̃1^T(k−1)φ̂1(k)e(k) + φ̂1^T(k)P1(k)φ̂1(k)e²(k).   (29)

Here, we have used the inequality 1 − φ̂^T(k)P(k)φ̂(k) = [1 + φ̂^T(k)P(k−1)φ̂(k)]^{-1} > 0.

Define the non-negative definite function V(k) := E[θ̃^T(k)P^{-1}(k)θ̃(k)]. Note that {v(k)} is white noise independent of the input signal {u(k)}. Suppose that {Δ(k)} is bounded with Δ²(k) ≤ ε < ∞, a.s. Since θ̃^T(k−1)P^{-1}(k−1)θ̃(k−1), ỹ(k), φ̂^T(k)P(k)φ̂(k) and Δ(k) are uncorrelated with v(k), taking expectations on both sides of (29) and referring to Equations (18)–(19) in [40], we have

V(k) ≤ V(k−1) + E{φ̂^T(k)P(k)φ̂(k)[v²(k) + Δ²(k)]}
     ≤ V(k−1) + E[φ̂^T(k)P(k)φ̂(k)](σ² + ε)
     ≤ V(0) + E[Σ_{j=1}^{k} φ̂^T(j)P(j)φ̂(j)](σ² + ε)
     ≤ V(0) + [ln |P^{-1}(k)| + n ln p0](σ² + ε).   (30)

Using Assumption (C1), we have

P^{-1}(k) ≤ βk I_n + I_n/p0 = (βk + 1/p0) I_n,
P^{-1}(k) ≥ αk I_n + I_n/p0 = (αk + 1/p0) I_n,
(αk + 1/p0)^n ≤ |P^{-1}(k)| ≤ (βk + 1/p0)^n.

According to the definition of V(k), we have

(αk + 1/p0) E[‖θ̃(k)‖²] ≤ V(k) ≤ n/p0² + [n ln(βk + 1/p0) + n ln p0](σ² + ε).

Taking the limit gives

lim_{k→∞} E[‖θ̃(k)‖²] ≤ lim_{k→∞} {n/p0² + [n ln(βk + 1/p0) + n ln p0](σ² + ε)}/(αk + 1/p0) = 0.

This shows that the parameter estimation error ‖θ̂(k) − θ‖ converges to zero as k increases, and Theorem 1 is proved. □

The colored noise implies that the noise model w(k) = v(k)/D(z) contains the parameters di in θ2 to be identified. For white noise, w(k) = v(k) and no noise model parameters need to be identified. This paper deals with the colored noise by estimating the parameters of the noise model.


3. The filtering based RGLS parameter estimation algorithm

Define the filtered input uf(k) and the filtered output yf(k) as

uf(k) := D(z)u(k),  yf(k) := D(z)y(k).

Multiplying both sides of (1) by D(z), we have

yf(k) = A(z)D(z)h(y(k)) + B(z)uf(k) + v(k).   (31)

Define the filtered information vector ψf(k) and the filtered information matrix F(k) as

ψf(k) := [uf(k−1), uf(k−2), ..., uf(k−nb)]^T ∈ R^{nb},
F(k) := [f1(k), f2(k), ..., fnc(k)] ∈ R^{na×nc},
fi(k) := [fi(y(k−1)), fi(y(k−2)), ..., fi(y(k−na))]^T ∈ R^{na}, i = 1, 2, ..., nc,
fj(k) := D(z)hj(y(k)), j = 1, 2, ..., nc.

Thus, the filtered model in (31) can be rewritten as

yf(k) = A(z)(c1 f1(k) + c2 f2(k) + ... + cnc fnc(k)) + B(z)uf(k) + v(k)
      = a^T F(k)c + ψf^T(k)b + v(k).   (32)

Let ξ̂(k) := ĉ(k) be the estimate of ξ := c at time k, and define φf1(k) := [(F(k)ξ)^T, ψf^T(k)]^T ∈ R^{na+nb} and φf2(k) := F^T(k)a ∈ R^{nc}. For the filtered model, defining two criterion functions

J3(θ1) := Σ_{j=1}^{k} [yf(j) − φf1^T(j)θ1]²,
J4(ξ) := Σ_{j=1}^{k} [yf(j) − ψf^T(j)b − φf2^T(j)ξ]²,

and minimizing J3(θ1) and J4(ξ), we can derive the following recursive least squares relations:

θ̂1(k) = θ̂1(k−1) + L3(k)[yf(k) − φf1^T(k)θ̂1(k−1)],   (33)
L3(k) = P3(k−1)φf1(k)/[1 + φf1^T(k)P3(k−1)φf1(k)],   (34)
P3(k) = P3(k−1) − L3(k)L3^T(k)[1 + φf1^T(k)P3(k−1)φf1(k)],   (35)
ξ̂(k) = ξ̂(k−1) + L4(k)[yf(k) − ψf^T(k)b̂(k) − φf2^T(k)ξ̂(k−1)],   (36)
L4(k) = P4(k−1)φf2(k)/[1 + φf2^T(k)P4(k−1)φf2(k)],   (37)
P4(k) = P4(k−1) − L4(k)L4^T(k)[1 + φf2^T(k)P4(k−1)φf2(k)].   (38)

Note that the polynomial D(z) is unknown, and so are the filtered variables uf(k) and yf(k), the filtered information vectors ψf(k), φf1(k), φf2(k) and the filtered information matrix F(k). Thus the estimates θ̂(k) and ξ̂(k) cannot be computed directly. Here, we adopt the idea of replacing the unknown variables with their estimates to implement the recursive computation.

For the noise model in (5), defining the criterion function

J5(d) := Σ_{j=1}^{k} [ŵ(j) − ψd^T(j)d]²,

and minimizing this function, we can derive the following recursive least squares relations:

d̂(k) = d̂(k−1) + L5(k)[ŵ(k) − ψd^T(k)d̂(k−1)],   (39)
L5(k) = P5(k−1)ψd(k)/[1 + ψd^T(k)P5(k−1)ψd(k)],   (40)
P5(k) = P5(k−1) − L5(k)L5^T(k)[1 + ψd^T(k)P5(k−1)ψd(k)].   (41)


Replacing the parameter vectors a, b and c with their estimates â(k−1), b̂(k−1) and ĉ(k−1), respectively, the estimate ŵ(k) can be computed by

ŵ(k) = y(k) − â^T(k−1)H(k)ĉ(k−1) − ψb^T(k)b̂(k−1).

Thus, we use ŵ(k) to construct the estimate of ψd(k):

ψ̂d(k) := [−ŵ(k−1), −ŵ(k−2), ..., −ŵ(k−nd)]^T ∈ R^{nd}.

Using the parameter estimate d̂(k) = [d̂1(k), d̂2(k), ..., d̂nd(k)]^T, we construct the estimate of D(z):

D̂(k, z) = 1 + d̂1(k)z^{-1} + d̂2(k)z^{-2} + ... + d̂nd(k)z^{-nd}.

Filtering u(k), y(k) and hj(y(k)) with D̂(k, z), we can get the estimates ûf(k), ŷf(k) and f̂j(k):

ûf(k) = u(k) + Σ_{i=1}^{nd} d̂i(k)u(k−i),
ŷf(k) = y(k) + Σ_{i=1}^{nd} d̂i(k)y(k−i),
f̂j(k) = hj(y(k)) + Σ_{i=1}^{nd} d̂i(k)hj(y(k−i)),  j = 1, 2, ..., nc.

Using ûf(k) and f̂j(k), we construct the estimates of ψf(k), fi(k) and F(k):

ψ̂f(k) := [ûf(k−1), ûf(k−2), ..., ûf(k−nb)]^T ∈ R^{nb},
f̂i(k) := [f̂i(y(k−1)), f̂i(y(k−2)), ..., f̂i(y(k−na))]^T ∈ R^{na}, i = 1, 2, ..., nc,
F̂(k) := [f̂1(k), f̂2(k), ..., f̂nc(k)] ∈ R^{na×nc}.

Replacing the unknown variables in (33)–(41) with their estimates, we obtain the filtering based recursive generalized least squares (F-RGLS) algorithm for the output nonlinear systems:

θ̂1(k) = θ̂1(k−1) + L3(k)[ŷf(k) − φ̂f1^T(k)θ̂1(k−1)],   (42)
L3(k) = P3(k−1)φ̂f1(k)[1 + φ̂f1^T(k)P3(k−1)φ̂f1(k)]^{-1},   (43)
P3(k) = P3(k−1) − L3(k)L3^T(k)[1 + φ̂f1^T(k)P3(k−1)φ̂f1(k)],   (44)
ξ̂(k) = ξ̂(k−1) + L4(k)[ŷf(k) − ψ̂f^T(k)b̂(k) − φ̂f2^T(k)ξ̂(k−1)],   (45)
L4(k) = P4(k−1)φ̂f2(k)[1 + φ̂f2^T(k)P4(k−1)φ̂f2(k)]^{-1},   (46)
P4(k) = P4(k−1) − L4(k)L4^T(k)[1 + φ̂f2^T(k)P4(k−1)φ̂f2(k)],   (47)
ûf(k) = u(k) + Σ_{i=1}^{nd} d̂i(k)u(k−i),   (48)
ŷf(k) = y(k) + Σ_{i=1}^{nd} d̂i(k)y(k−i),   (49)
f̂j(k) = hj(y(k)) + Σ_{i=1}^{nd} d̂i(k)hj(y(k−i)),   (50)
F̂(k) = [f̂1(k), f̂2(k), ..., f̂nc(k)],   (51)
f̂i(k) = [f̂i(y(k−1)), f̂i(y(k−2)), ..., f̂i(y(k−na))]^T,   (52)
ψ̂f(k) = [ûf(k−1), ûf(k−2), ..., ûf(k−nb)]^T,   (53)
φ̂f1(k) = [(F̂(k)ξ̂(k−1))^T, ψ̂f^T(k)]^T,   (54)
φ̂f2(k) = F̂^T(k)â(k),   (55)
d̂(k) = d̂(k−1) + L5(k)[ŵ(k) − ψ̂d^T(k)d̂(k−1)],   (56)
L5(k) = P5(k−1)ψ̂d(k)[1 + ψ̂d^T(k)P5(k−1)ψ̂d(k)]^{-1},   (57)
P5(k) = P5(k−1) − L5(k)L5^T(k)[1 + ψ̂d^T(k)P5(k−1)ψ̂d(k)],   (58)
ψ̂d(k) = [−ŵ(k−1), −ŵ(k−2), ..., −ŵ(k−nd)]^T,   (59)
ŵ(k) = y(k) − â^T(k−1)H(k)ĉ(k−1) − ψb^T(k)b̂(k−1),   (60)
ψb(k) = [u(k−1), u(k−2), ..., u(k−nb)]^T,   (61)
θ̂1(k) = [â^T(k), b̂^T(k)]^T,  Θ̂(k) = [â^T(k), ξ̂^T(k), d̂^T(k)]^T.   (62)

The procedure for computing the parameter vectors θ1, c and d is listed in the following.


1. To initialize: let k = 1, and set the initial values P3(0) = p0 I_{na+nb}, P4(0) = p0 I_{nc}, P5(0) = p0 I_{nd}, θ̂1(0) = 1_{na+nb}/p0, ‖ĉ(0)‖ = 1, d̂(0) = 1_{nd}/p0, ŵ(k − i) = 1/p0, i = 0, 1, ..., nd, p0 = 10^6.
2. Collect the input-output data u(k) and y(k), and construct ψb(k) using (61).
3. Compute ŵ(k) by (60), construct ψ̂d(k) by (59), and compute L5(k) using (57) and P5(k) using (58).
4. Update the parameter estimate d̂(k) by (56).
5. Compute ûf(k) by (48) and ŷf(k) by (49), and construct ψ̂f(k) using (53).
6. Construct f̂j(k) by (50), and form f̂i(k) using (52) and F̂(k) using (51).
7. Compute φ̂f1(k) in (54), and form L3(k) using (43) and P3(k) using (44).
8. Update the parameter estimate θ̂1(k) using (42), and read â(k) and b̂(k) from θ̂1(k) in (62).
9. Form φ̂f2(k) in (55), and compute L4(k) using (46) and P4(k) using (47).
10. Update the parameter estimate ξ̂(k) by (45), and normalize it using
    ĉ(k) = sgn[ξ̂(1)] ξ̂(k)/‖ξ̂(k)‖,  ξ̂(k) := ĉ(k),
    where sgn[ξ̂(1)] represents the sign of the first element of ξ̂(k).
11. Increase k by 1 and go to Step 2 to continue the recursive calculation.
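Steps 5 and 6 filter the raw data with the current noise-model estimate D̂(k, z). A minimal Python sketch of the filtering operation in (48)–(50), assuming the coefficient estimates d̂i are given, is:

```python
def filter_with_D(x, d_hat):
    """Compute xf(k) = x(k) + sum_{i=1}^{nd} d_hat[i-1] * x(k-i),
    i.e., apply the estimated polynomial D_hat(z) to the sequence x."""
    return [x[k] + sum(d_hat[i] * x[k - 1 - i]
                       for i in range(len(d_hat)) if k - 1 - i >= 0)
            for k in range(len(x))]
```

In the actual algorithm the coefficients d̂i(k) are re-estimated at every step, so the filter is time-varying; this sketch holds them fixed only for clarity.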

The following theorem establishes the convergence of the presented F-RGLS algorithm.

Theorem 2. For the identification models in (32) and (5) and the F-RGLS algorithm in (42)–(62), suppose that {v(k)} is a white noise sequence with zero mean and variance σ², i.e., E[v(k)] = 0, E[v²(k)] = σ², E[v(k)v(i)] = 0 (i ≠ k), and that there exist positive constants αi and βi such that for large k the following persistent excitation conditions hold:

(C3)  α3 I_{na+nb} ≤ (1/k) Σ_{j=1}^{k} φ̂f1(j)φ̂f1^T(j) ≤ β3 I_{na+nb}, a.s.,

(C4)  α4 I_{nc} ≤ (1/k) Σ_{j=1}^{k} φ̂f2(j)φ̂f2^T(j) ≤ β4 I_{nc}, a.s.,

(C5)  α5 I_{nd} ≤ (1/k) Σ_{j=1}^{k} ψ̂d(j)ψ̂d^T(j) ≤ β5 I_{nd}, a.s.

Then the parameter estimation error given by the F-RGLS algorithm converges to zero. The proof can be carried out in a similar way to that of Theorem 1.

4. Examples

Example 1: Consider the following output nonlinear system:

y(k) = A(z)f(y(k)) + B(z)u(k) + v(k)/D(z),
A(z) = a1 z^{-1} = 0.40z^{-1},
B(z) = b1 z^{-1} + b2 z^{-2} = 0.60z^{-1} + 0.40z^{-2},
D(z) = 1 + d1 z^{-1} = 1 + 0.10z^{-1},
f(y(k)) = c1 y(k) + c2 cos(y(k)) = 0.80y(k) − 0.60 cos(y(k)),
θ = [a1, b1, b2]^T = [0.40, 0.60, 0.40]^T,
c = [c1, c2]^T = [0.80, −0.60]^T,


d = d1 = 0.10.

In simulation, the input u(k) is taken as a persistent excitation signal sequence with zero mean and unit variance, and v(k) as a white noise sequence with zero mean and variance σ² = 0.30². Applying the RGLS algorithm in (12)–(25) and the F-RGLS algorithm in (42)–(62) to estimate the parameters of this system, the obtained parameter estimates and errors are shown in Table 1, and the estimation errors δ := ‖Θ̂(k) − Θ‖/‖Θ‖ versus k are shown in Figure 1.
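The error measure δ = ‖Θ̂(k) − Θ‖/‖Θ‖ used in the tables can be computed directly; a trivial sketch with plain list inputs:

```python
def estimation_error(theta_hat, theta):
    """Relative parameter estimation error delta = ||theta_hat - theta|| / ||theta||."""
    num = sum((e - t) ** 2 for e, t in zip(theta_hat, theta)) ** 0.5
    den = sum(t ** 2 for t in theta) ** 0.5
    return num / den
```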

Table 1: The parameter estimates and errors for Example 1

Algorithm    k      a1        b1        b2        c1        c2         d1        δ (%)
F-RGLS       100    0.34225   0.62489   0.34522   0.66954   -0.52134   0.12056   13.45274
F-RGLS       200    0.37989   0.63566   0.35931   0.69518   -0.49798   0.09595   12.09980
F-RGLS       500    0.40210   0.61309   0.37926   0.75608   -0.53833   0.14209    6.92699
F-RGLS       1000   0.40187   0.60538   0.39039   0.77671   -0.57777   0.11438    2.84559
F-RGLS       2000   0.39042   0.60576   0.39521   0.77227   -0.61384   0.11773    2.90154
F-RGLS       3000   0.39490   0.60152   0.39812   0.78590   -0.61422   0.10227    1.60978
RGLS         100    0.31336   0.62584   0.35369   0.63561   -0.57009   0.18977   16.55084
RGLS         200    0.32448   0.63730   0.38668   0.68104   -0.56361   0.16757   12.71258
RGLS         500    0.35475   0.61110   0.40439   0.74175   -0.55918   0.18080    9.02899
RGLS         1000   0.37850   0.60582   0.40310   0.76649   -0.58519   0.13219    4.13110
RGLS         2000   0.38621   0.61160   0.39041   0.76584   -0.63371   0.12903    4.59149
RGLS         3000   0.39476   0.60586   0.39491   0.77967   -0.62632   0.10631    2.70182
True values         0.40000   0.60000   0.40000   0.80000   -0.60000   0.10000

Figure 1: The estimation errors δ versus k for Example 1

From Table 1 and Figure 1, we can draw the following conclusions.

• The parameter estimation errors of the RGLS and F-RGLS algorithms generally become smaller as the data length k increases.
• For the same data length, the F-RGLS algorithm converges faster than the RGLS algorithm.

Example 2: For further validation of the proposed algorithms, we consider the following output nonlinear system:

y(k) = A(z)f(y(k)) + B(z)u(k) + v(k)/D(z),
A(z) = a1 z^{-1} + a2 z^{-2} = 0.86z^{-1} − 0.30z^{-2},
B(z) = b1 z^{-1} + b2 z^{-2} = 0.31z^{-1} + 0.20z^{-2},
D(z) = 1 + d1 z^{-1} + d2 z^{-2} = 1 − 0.23z^{-1} + 0.28z^{-2},
f(y(k)) = c1 y(k) + c2 cos(y(k)) = 0.80y(k) + 0.60 cos(y(k)),
θ = [a1, a2, b1, b2]^T = [0.86, −0.30, 0.31, 0.20]^T,
c = [c1, c2]^T = [0.80, 0.60]^T,

ACCEPTED MANUSCRIPT

d = [d1, d2]^T = [−0.23, 0.28]^T.

The simulation conditions are the same as those of Example 1, except that the noise variance is σ² = 0.10² and σ² = 0.20², respectively. By applying the F-RGLS algorithm to estimate the parameters of this example system, the parameter estimates and errors are shown in Table 2 and Figure 2. For 50 sets of noise realizations, the Monte Carlo simulation results are shown in Tables 3–4.

Table 2: The F-RGLS parameter estimates and errors for Example 2

σ²      k      a1        a2         b1        b2        c1        c2        d1         d2        δ (%)
0.10²   100    0.66399   -0.12559   0.31304   0.24572   0.78243   0.62274   -0.10852   0.13494   22.64821
0.10²   200    0.77658   -0.23101   0.31389   0.22477   0.78290   0.62215   -0.16219   0.17436   11.73725
0.10²   500    0.81477   -0.26081   0.31240   0.21194   0.78487   0.61966   -0.26659   0.24755    5.66814
0.10²   1000   0.83178   -0.27941   0.31414   0.20409   0.78621   0.61796   -0.25076   0.25160    3.78511
0.10²   2000   0.83604   -0.28167   0.31305   0.20671   0.79087   0.61198   -0.22768   0.26129    2.71605
0.10²   3000   0.85439   -0.29931   0.31072   0.20511   0.79308   0.60912   -0.22497   0.27730    1.02949
0.20²   100    0.69222   -0.16422   0.31292   0.24001   0.79947   0.60071   -0.25558   0.13295   18.33204
0.20²   200    0.76054   -0.22331   0.31439   0.22286   0.79075   0.61214   -0.24471   0.16794   11.82485
0.20²   500    0.80490   -0.24977   0.31210   0.20760   0.78927   0.61405   -0.31095   0.25917    7.84956
0.20²   1000   0.80262   -0.25325   0.31590   0.19980   0.78872   0.61476   -0.27704   0.26419    6.29916
0.20²   2000   0.80765   -0.25495   0.31471   0.20770   0.79105   0.61175   -0.24677   0.27212    5.08065
0.20²   3000   0.83418   -0.28112   0.31029   0.20693   0.79391   0.60804   -0.24135   0.28703    2.53814
True values    0.86000   -0.30000   0.31000   0.20000   0.80000   0.60000   -0.23000   0.28000

Figure 2: The F-RGLS estimation errors δ versus k for Example 2

Table 3: The parameter estimates based on 50 Monte Carlo runs for Example 2 (σ² = 0.10²)

k      a1               a2                b1               b2               c1               c2               d1                d2
100    0.71823±0.18723  -0.17072±0.18062  0.31085±0.01502  0.22860±0.03617  0.78522±0.04548  0.61838±0.05452  -0.13963±0.13006  0.15619±0.22755
200    0.79803±0.09727  -0.24697±0.09266  0.31379±0.01142  0.21595±0.02206  0.78897±0.03329  0.61394±0.04482  -0.19134±0.14571  0.20933±0.18367
500    0.82939±0.05495  -0.27410±0.05229  0.31178±0.00515  0.20703±0.01404  0.79232±0.02414  0.60987±0.03247  -0.23767±0.10325  0.25107±0.07384
1000   0.84307±0.04452  -0.28724±0.04473  0.31242±0.00537  0.20278±0.01093  0.79389±0.01528  0.60794±0.02036  -0.23383±0.05894  0.26051±0.03952
2000   0.84833±0.02541  -0.29104±0.02231  0.31191±0.00408  0.20320±0.00768  0.79602±0.01262  0.60520±0.01689  -0.22983±0.03123  0.27070±0.02694
3000   0.85403±0.00882  -0.29637±0.00964  0.31096±0.00275  0.20294±0.00519  0.79674±0.00688  0.60428±0.00914  -0.22851±0.01547  0.27956±0.01556
True   0.86000          -0.30000          0.31000          0.20000          0.80000          0.60000          -0.23000          0.28000

Table 4: The parameter estimates based on 50 Monte Carlo runs for Example 2 (σ² = 0.20²)

k      a1               a2                b1               b2               c1               c2               d1                d2
100    0.74716±0.28447  -0.20904±0.23424  0.31852±0.04798  0.23446±0.04713  0.78795±0.06232  0.61402±0.07407  -0.18825±0.18357  0.14481±0.30660
200    0.80763±0.23371  -0.26064±0.21308  0.31462±0.02502  0.21798±0.04089  0.78700±0.04280  0.61634±0.05428  -0.21660±0.10786  0.19301±0.12053
500    0.84540±0.09493  -0.28863±0.09354  0.31081±0.01604  0.20530±0.01808  0.79099±0.03508  0.61108±0.04752  -0.26782±0.09682  0.25483±0.08238
1000   0.84435±0.08603  -0.29041±0.08690  0.31320±0.01042  0.19984±0.02298  0.79001±0.02249  0.61281±0.02985  -0.25293±0.07857  0.26388±0.05035
2000   0.83941±0.04839  -0.28383±0.05039  0.31372±0.00568  0.20510±0.01146  0.79328±0.01905  0.60869±0.02472  -0.23734±0.04202  0.27263±0.03263
3000   0.84884±0.03756  -0.29282±0.03215  0.31153±0.00395  0.20476±0.01268  0.79453±0.01579  0.60712±0.02113  -0.23564±0.03671  0.28090±0.02916
True   0.86000          -0.30000          0.31000          0.20000          0.80000          0.60000          -0.23000          0.28000


From Tables 3–4 and Figure 2, we can see that the estimation accuracy of the proposed algorithms improves as the noise-to-signal ratio decreases, and that the estimation errors generally become smaller as the data length k increases. The Monte Carlo simulation results show that the proposed algorithms are effective.

5. Conclusions


In this paper, we present a filtering based recursive generalized least squares algorithm for output nonlinear systems using the data filtering technique. The main contribution is that the nonlinear system is identified by a recursive least squares method applied to the filtered input and output data. The presented algorithm requires a lower computational burden and gives more accurate parameter estimates than the generalized extended least squares algorithm. The proposed algorithm can be extended to the identification of dual-rate sampled-data systems [41, 42] and other nonlinear systems with colored noise [43], and can be applied to other fields [44, 45, 46].

References

[1] H. Li, Y. Shi, W. Yan, On neighbor information utilization in distributed receding horizon control for consensus-seeking, IEEE Transactions on Cybernetics 46 (9) (2016) 2019-2027.


[2] H. Li, Y. Shi, W. Yan, Distributed receding horizon control of constrained nonlinear vehicle formations with guaranteed γ-gain stability, Automatica 68 (2016) 148-154.

[3] H. Li, W.S. Yan, Y. Shi, Continuous-time model predictive control of under-actuated spacecraft with bounded control torques, Automatica 75 (2016) 144-153.

[4] L. Xu, A proportional differential control method for a time-delay system using the Taylor expansion approximation, Applied Mathematics and Computation 236 (2014) 391-399.

[5] X.L. Luan, C.J. Zhou, Z.T. Ding, F. Liu, Stochastic consensus control with finite frequency specification for Markov jump networks, International Journal of Robust and Nonlinear Control 26 (13) (2016) 2961-2974.

[6] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice Hall, Englewood Cliffs, New Jersey, 1984.


[7] L. Ljung, System Identification: Theory for the User, 2nd ed., Prentice Hall, Englewood Cliffs, New Jersey, 1999.
[8] F. Ding, L. Xu, Q.M. Zhu, Performance analysis of the generalised projection identification for time-varying systems, IET Control Theory and Applications 10 (18) (2016) 2506-2514.


[9] F. Ding, F.F. Wang, L. Xu, T. Hayat, A. Alsaedi, Parameter estimation for pseudo-linear systems using the auxiliary model and the decomposition technique, IET Control Theory and Applications 11 (3) (2017) 390-400.
[10] J.H. Li, W.X. Zheng, J.P. Gu, L. Hua, Parameter estimation algorithms for Hammerstein output error systems using Levenberg-Marquardt optimization method with varying interval measurements, Journal of the Franklin Institute 354 (1) (2017) 316-331.


[11] L. Xu, L. Chen, W.L. Xiong, Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration, Nonlinear Dynamics 79 (3) (2015) 2155-2163.
[12] L. Xu, Application of the Newton iteration algorithm to the parameter estimation for dynamical systems, Journal of Computational and Applied Mathematics 288 (2015) 33-43.


[13] L. Xu, The damping iterative parameter identification method for dynamical systems based on the sine signal measurement, Signal Processing 120 (2016) 660-667.
[14] M.A.Z. Raja, N.I. Chaudhary, Two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems, Signal Processing 107 (2015) 327-339.


[15] J.H. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration, Applied Mathematics Letters 26 (1) (2013) 91-96.
[16] L. Xie, H.Z. Yang, B. Huang, FIR model identification of multirate processes with random delays using EM algorithm, AIChE Journal 59 (11) (2013) 4124-4132.
[17] D.Q. Wang, F. Ding, Parameter estimation algorithms for multivariable Hammerstein CARMA systems, Information Sciences 355 (2016) 237-248.
[18] D.Q. Wang, Hierarchical parameter estimation for a class of MIMO Hammerstein systems based on the reframed models, Applied Mathematics Letters 57 (2016) 13-19.
[19] A. Wills, T. Schön, L. Ljung, B. Ninness, Identification of Hammerstein-Wiener models, Automatica 49 (1) (2013) 70-81.
[20] Y.J. Wang, F. Ding, Recursive parameter estimation algorithms and convergence for a class of nonlinear systems with colored noise, Circuits, Systems and Signal Processing 35 (10) (2016) 3461-3481.
[21] Y.J. Wang, F. Ding, Recursive least squares algorithm and gradient algorithm for Hammerstein-Wiener systems using the data filtering, Nonlinear Dynamics 84 (2) (2016) 1045-1053.
[22] D.Q. Wang, W. Zhang, Improved least squares identification algorithm for multivariable Hammerstein systems, Journal of the Franklin Institute 352 (11) (2015) 5292-5307.
[23] D.Q. Wang, L. Mao, F. Ding, Recasted models based hierarchical extended stochastic gradient method for MIMO nonlinear systems, IET Control Theory and Applications 11 (4) (2017) 476-485.

[24] Y. Ji, X.M. Liu, Unified synchronization criteria for hybrid switching-impulsive dynamical networks, Circuits, Systems and Signal Processing 34 (5) (2015) 1499-1517.
[25] C.L. Fan, H.J. Li, X. Ren, The order recurrence quantification analysis of the characteristics of two-phase flow pattern based on multi-scale decomposition, Transactions of the Institute of Measurement and Control 37 (6) (2015) 793-804.
[26] S. Zhao, Y.S. Shmaliy, F. Liu, Fast Kalman-like optimal unbiased FIR filtering with applications, IEEE Transactions on Signal Processing 64 (9) (2016) 2284-2297.
[27] S. Zhao, B. Huang, F. Liu, Linear optimal unbiased filter for time-variant systems without a priori information on initial condition, IEEE Transactions on Automatic Control 62 (2) (2017) 882-887.
[28] S. Yin, X. Zhu, Intelligent particle filter and its application to fault detection of nonlinear system, IEEE Transactions on Industrial Electronics 62 (6) (2015) 3852-3861.


[29] S. Yin, X. Zhu, J. Qu, H. Gao, State estimation in nonlinear system using sequential evolutionary filter, IEEE Transactions on Industrial Electronics 63 (6) (2016) 3786-3794.
[30] J. Pan, X.H. Yang, H.F. Cai, B.X. Mu, Image noise smoothing using a modified Kalman filter, Neurocomputing 173 (2016) 1625-1629.
[31] X.K. Wan, Y. Li, C. Xia, M.H. Wu, J. Liang, N. Wang, A T-wave alternans assessment method based on least squares curve fitting technique, Measurement 86 (2016) 93-100.
[32] L. Xu, F. Ding, Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling, Circuits, Systems and Signal Processing 36 (4) (2017) 1735-1753.
[33] C. Wang, J. Xun, Novel recursive least squares identification for a class of nonlinear multiple-input single-output systems using the filtering technique, Advances in Mechanical Engineering 8 (11) (2016) 1-8.


[34] Y.J. Wang, F. Ding, The filtering based iterative identification for multivariable systems, IET Control Theory and Applications 10 (8) (2016) 894-902.
[35] Y.J. Wang, F. Ding, The auxiliary model based hierarchical gradient algorithms and convergence analysis using the filtering technique, Signal Processing 128 (2016) 212-221.
[36] Y.J. Wang, F. Ding, Novel data filtering based parameter identification for multiple-input multiple-output systems using the auxiliary model, Automatica 71 (2016) 308-313.
[37] F. Ding, F.F. Wang, L. Xu, M.H. Wu, Decomposition based least squares iterative identification algorithm for multivariate pseudo-linear ARMA systems using the data filtering, Journal of the Franklin Institute 354 (3) (2017) 1321-1339.


[38] L. Xu, F. Ding, The parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle, IET Signal Processing 11 (2017). doi: 10.1049/iet-spr.2016.0220
[39] Q.J. Chen, Y. Gu, F. Ding, Data filtering based recursive least squares estimation algorithm for a class of Wiener nonlinear systems, in: Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA 2014), June 29-July 4, 2014, Shenyang, China, pp. 4132-4136.


[40] Y.B. Hu, B.L. Liu, Q. Zhou, A multi-innovation generalized extended stochastic gradient algorithm for output nonlinear autoregressive moving average systems, Applied Mathematics and Computation 247 (2014) 218-224.
[41] J. Chen, Several gradient parameter estimation algorithms for dual-rate sampled systems, Journal of the Franklin Institute 351 (1) (2014) 543-554.


[42] J. Chen, Y.J. Liu, X.H. Wang, Recursive least squares algorithm for nonlinear dual-rate systems using missing-output estimation model, Circuits, Systems and Signal Processing 37 (4) (2017) 1406-1425.
[43] D.Q. Wang, Y.P. Gao, Recursive maximum likelihood identification method for a multivariable controlled autoregressive moving average system, IMA Journal of Mathematical Control and Information 33 (4) (2016) 1015-1031.


[44] T.Z. Wang, J. Qi, H. Xu, et al., Fault diagnosis method based on FFT-RPCA-SVM for cascaded-multilevel inverter, ISA Transactions 60 (2016) 156-163.
[45] T.Z. Wang, H. Wu, M.Q. Ni, et al., An adaptive confidence limit for periodic non-steady conditions fault detection, Mechanical Systems and Signal Processing 72-73 (2016) 328-345.


[46] L. Feng, M.H. Wu, Q.X. Li, et al., Array factor forming for image reconstruction of one-dimensional nonuniform aperture synthesis radiometers, IEEE Geoscience and Remote Sensing Letters 13 (2) (2016) 237-241.
