Adaptive neural control for a class of stochastic nonlinear systems with unknown parameters, unknown nonlinear functions and stochastic disturbances


Chao-Yang Chen a,b,⁎, Wei-Hua Gui b, Zhi-Hong Guan c, Ru-Liang Wang d, Shao-Wu Zhou a

a School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan 411201, PR China
b School of Information Science and Engineering, Central South University, Changsha 410012, PR China
c College of Automation, Huazhong University of Science and Technology, Wuhan 430074, PR China
d Computer and Information Engineering College, Guangxi Teachers Education University, Nanning 530001, PR China

Communicated by Mou Chen

Abstract

In this paper, adaptive neural control (ANC) is investigated for a class of strict-feedback nonlinear stochastic systems with unknown parameters, unknown nonlinear functions and stochastic disturbances. A new adaptive neural network controller with state feedback is presented by using the universal approximation of radial basis function neural networks and backstepping, and it is designed by constructing a suitable Lyapunov function. An adaptive bounding design technique is used to deal with the unknown nonlinear functions and unknown parameters. It is shown that global asymptotic stability in probability can be achieved for the closed-loop system. Simulation results are presented to demonstrate the effectiveness of the proposed control strategy in the presence of unknown parameters, unknown nonlinear functions and stochastic disturbances.

Keywords: Unknown parameters; Stochastic disturbances; Unknown nonlinear functions; Stochastic nonlinear systems; Adaptive neural control

⁎ Corresponding author at: School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan 411201, PR China. E-mail address: [email protected] (C.-Y. Chen).

http://dx.doi.org/10.1016/j.neucom.2016.11.042
Received 25 April 2016; received in revised form 28 October 2016; accepted 21 November 2016; available online 23 November 2016.
0925-2312/ © 2016 Elsevier B.V. All rights reserved.

1. Introduction

In recent years, the design of robust controllers for nonlinear systems has attracted extensive attention. Many significant developments have been reported [1–12], and results on adaptive nonlinear control continue to accumulate. Adaptive backstepping is a powerful method that has been widely applied to the synthesis of controllers for lower-triangular nonlinear systems. The backstepping design technique constructs intermediate control laws and Lyapunov functions step by step, and it has found a large number of successful applications in nonlinear control, such as [13–15]. Adaptive control is an important branch of robust control. Notably, because neural network control and fuzzy logic control possess good approximation ability over a compact domain, they are well suited to handling highly uncertain nonlinear systems; they have become an important part of adaptive control, and a large body of research has been produced, such as [16–18,20]. In the early development of neural network control schemes, parameter adaptive laws were usually derived in off-line environments [21]; such schemes can perform well in some simple cases, but few systematic analytical methods were available for the stability, robustness and performance of the resulting closed-loop systems.



To avoid the above problems, adaptive neural control schemes based on Lyapunov stability theory were developed in [19]. Stochastic nonlinear modelling has come to play an important role in many areas of industry, science and technology. After the success of systematic control design for deterministic nonlinear systems, extending these techniques to stochastic nonlinear systems has become an active research area [22–30]. The stability analysis and control design of stochastic nonlinear systems are therefore challenging and meaningful issues, and they have attracted more and more attention in recent years. The main technical obstacle in the Lyapunov function design for stochastic nonlinear systems is that the gradient and the higher-order Hessian term are involved in the Itô stochastic differentiation. In [22,23], strict-feedback stochastic systems were studied for the first time using a backstepping design. Liu and Zhang [24] extended the results of [22] to a class of stochastic nonlinear systems with time delay. Using quadratic Lyapunov functions, [25,26] studied the stabilization problem of stochastic nonlinear systems. In [25], a class of stochastic nonholonomic systems was investigated and adaptive stabilization by state feedback was resolved. For linear time-invariant SISO plants, [27] proposed a modified adaptive backstepping control.



In [24], a class of stochastic nonlinear systems was investigated via output feedback, where linearly bounded unmeasurable states appear in the nonlinear functions. Using an adaptive neural control (ANC) scheme, [38] studied a class of non-affine pure-feedback stochastic nonlinear systems and showed that all the signals involved are semi-globally uniformly ultimately bounded under the action of the developed controller. A simplified adaptive backstepping neural control (ABNC) strategy was proposed for a class of uncertain strict-feedback nonlinear systems in [42]. In [44], a class of non-strict-feedback stochastic nonlinear systems was studied; however, the system functions must be monotonously increasing and bounded, which is a strict assumption because it is difficult to confirm that an unknown function is bounded and monotonously increasing. In [45], the constraint that the nonlinear functions be bounded and monotonously increasing is relaxed, but the approach is unavailable when the stochastic system contains unknown parameters and unknown disturbances. Subsequently, a series of studies on stochastic systems was carried out [28–30]. At the same time, the ANC method has found many successful applications to unknown nonlinear systems, such as adaptive output-feedback control [31–33] and pure-feedback systems [34–37]. In ANC, the neural network is often used to approximate unknown nonlinearities online owing to its inherent approximation capability.

Motivated by the above observations, this work focuses on strict-feedback nonlinear stochastic systems with unknown parameters, unknown nonlinear functions and stochastic disturbances, using the adaptive neural control method. An adaptive neural controller is designed by the backstepping method, and the proposed controller guarantees that all the signals in the closed loop are bounded. Partial results of this paper were presented in [39]. The main contributions of this paper are as follows:

(i) A new adaptive neural network controller with state feedback is presented by using the universal approximation of radial basis function neural networks and backstepping for a class of stochastic nonlinear systems. In [46,47] the gain function for the virtual control signal $x_{i+1}$ is the constant 1; in this work it is extended to a function of the partial state variables. Therefore, the systems in [46,47] are special cases of the system considered in this work.

(ii) The corresponding model is more general than those in [38,42,45]. Unknown disturbances and unknown nonlinear functions are considered. An adaptive bounding design technique is used to deal with the unknown parameters and unknown nonlinear functions, and the upper bound of the disturbances does not need to be known in the design of the adaptive controller. When the derivatives of the state are available for feedback, the designed adaptive controller guarantees that all the signals of the closed-loop system are bounded.

(iii) A universal-type adaptive feedback controller is designed, which globally regulates all the states of the uncertain system while keeping all the states bounded, so that global asymptotic stability in probability is guaranteed. The main advantage of the proposed control scheme is that all the closed-loop signals are guaranteed to be globally bounded, whereas many existing adaptive backstepping neural control approaches (e.g., [16–19,27,32,33,36–38,40,41]) can only guarantee semi-global boundedness of the closed-loop signals.

2. Problem formulation and preliminaries

2.1. Plant dynamics

In this section, we recall the basic background knowledge concerning the stability properties of stochastic systems. Consider the following stochastic nonlinear system:

$dx_i = (g_i(\bar{x}_i)x_{i+1} + \theta_i^{*T}\psi_i^*(\bar{x}_i) + f_i(\bar{x}_i) + \Delta_i(x,t))dt + \phi_i^T(\bar{x}_i)d\omega, \quad 1 \le i \le n-1,$
$dx_n = (g_n(\bar{x}_n)u + \theta_n^{*T}\psi_n^*(\bar{x}_n) + f_n(\bar{x}_n) + \Delta_n(x,t))dt + \phi_n^T(\bar{x}_n)d\omega,$
$y(t) = x_1(t),$   (1)

where $x = [x_1, x_2, \ldots, x_n]^T \in R^n$ is the state vector with initial value $x(0)$; $\bar{x}_i = [x_1, x_2, \ldots, x_i]$; $u \in R$ is the control input and $y \in R$ is the system output; $f_i(\cdot) \in R$ and $\psi_i^*(\cdot) \in R^{q_i}$ are unknown smooth functions with $f_i(0)=0$ and $\psi_i^*(0)=0$; $g_i(\cdot) \in R$ with $g_i(\cdot) \ne 0$ and $\phi_i(\cdot) \in R^r$ are known smooth functions; $\theta_i^* \in R^{q_i}$, $i=1,\ldots,n$, are unknown bounded parameters; $\Delta_i(x,t)$ are unknown disturbances; and $\omega$ is an independent $r$-dimensional standard Wiener process. The disturbances $\Delta_i(x,t)$ satisfy the following assumption.

Assumption 1 ([40]). For the unknown disturbances $\Delta_i(x,t)$ and all $(t,x) \in R_+ \times R^n$,

$|\Delta_i(x,t)| \le p_i^*\,\Phi_i^*(\bar{x}_i),$   (2)

where $p_i^* \ge 0$ are unknown parameters and $\Phi_i^*(\bar{x}_i)$ are known smooth functions with $\Phi_i^*(0)=0$; $p_i^*$ is defined as the smallest nonnegative constant such that (2) is satisfied.

To simplify the notation, let $\theta = [\theta_1^{*T}, \ldots, \theta_n^{*T}]^T \in R^q$, where $q := \sum_i q_i$. System (1) can then be rewritten in the following form:

$dx_i = (g_i(\bar{x}_i)x_{i+1} + \theta^T\Psi_i(\bar{x}_i) + f_i(\bar{x}_i) + \Delta_i(x,t))dt + \phi_i^T(\bar{x}_i)d\omega, \quad 1 \le i \le n-1,$
$dx_n = (g_n(\bar{x}_n)u + \theta^T\Psi_n(\bar{x}_n) + f_n(\bar{x}_n) + \Delta_n(x,t))dt + \phi_n^T(\bar{x}_n)d\omega,$
$y(t) = x_1(t),$   (3)

where each $\Psi_i : R^i \mapsto R^q$ is given by $\Psi_i = [\Psi_{i1}, \Psi_{i2}, \ldots, \Psi_{in}]^T = [0_1^T, \ldots, 0_{i-1}^T, \psi_i^{*T}, 0_{i+1}^T, \ldots, 0_n^T]^T$ with $0_j := [0, \ldots, 0]^T \in R^{q_j}$.

Assumption 2. For the unknown functions $\Psi_{ij}(\bar{x}_i)$ and all $x \in R^n$,

$|\Psi_{ij}(\bar{x}_i)| \le b_{ij}^*\,\varphi_{ij}^*(\bar{x}_i),$   (4)

where $b_{ij}^* \ge 0$ are unknown parameters and $\varphi_{ij}^*(\bar{x}_i)$ are known smooth functions, with $\varphi_i^* = [\varphi_{i,1}^*, \varphi_{i,2}^*, \ldots, \varphi_{i,q}^*]^T$ and $\varphi_i^*(0)=0$; $b_{ij}^*$ is defined as the smallest nonnegative constant such that (4) is satisfied.

Remark 1. System (3) need not satisfy the linear parametrization condition. Consider, for example, the system

$dx_1 = ((x_1^2+1)x_1x_2 + \theta_1 x_1^2 + \theta_3\sin(t\theta_4 x_2))dt + x_1 d\omega,$
$dx_2 = (e^{-x_2}u + \theta_1 x_2^2 + \theta_2 e^{x_1} + (\theta_4+\theta_3\sin x_1)x_2^2 + e^{-x_2})dt,$
$y(t) = x_1(t).$

It can be put in the form (3) by letting

$g_1 = x_1^2+1, \quad g_2 = e^{-x_2}, \quad \theta = [\theta_1\ \theta_2]^T, \quad f_1 = 0, \quad f_2 = e^{-x_2},$
$\phi_1 = x_1, \quad \phi_2 = 0, \quad \Psi_1 = [x_1^2\ \ 0]^T, \quad \Psi_2 = [x_2^2\ \ e^{x_1}]^T,$
$\Delta_1 = \theta_3\sin(t\theta_4 x_2), \quad \Delta_2 = (\theta_4+\theta_3\sin x_1)x_2^2.$

The bounds $|\Delta_1| \le p_1^*$ and $|\Delta_2| \le p_2^* x_2^2$ then hold with $p_1^* := |\theta_3|$ and $p_2^* := |\theta_3| + |\theta_4|$.

Definition 1 ([48,49]). The solution $\{x(t)=0\}$ of system (3) is said to be asymptotically stable in the large if for any $\varepsilon > 0$

$\lim_{x(0)\to 0} P\{\sup_{t\ge 0}\|x(t)\| \ge \varepsilon\} = 0,$

and, for any initial condition $x(0)$, $P\{\lim_{t\to\infty}x(t)=0\} = 1$.

Definition 2 ([49]). The solution process $\{x(t), t\ge 0\}$ of system (3) is said to be bounded in probability if

$\lim_{\varepsilon\to\infty}\sup_{t\ge 0} P\{\|x(t)\| \ge \varepsilon\} = 0.$


For any $V(x) \in C^2(R^n; R)$, denote by $L$ the infinitesimal generator of the solution of the stochastic system (3), defined by

$LV(x) = \dfrac{\partial V}{\partial x}f + \dfrac{1}{2}\mathrm{Tr}\left\{g^T\dfrac{\partial^2 V}{\partial x^2}g\right\}.$   (5)

Lemma 1 ([41]). For system (3), if there exist positive definite, radially unbounded Lyapunov functions $V_1(x) \in C^2(R^n; R)$ and $V_2(\theta) \in C^2(R^m; R)$ such that $V(x,\theta) = V_1(x) + V_2(\theta)$ satisfies

$LV(x) \le -\lambda V(x) + K,$   (6)

where the constants $\lambda > 0$ and $K \ge 0$, then the stochastic system (3) is globally bounded stable in probability.

Lemma 2 ([43]). For any $\epsilon > 0$ and $u \in R$, the following inequality holds:

$0 \le |u| - u\tanh(u/\epsilon) \le \delta\epsilon,$

where $\delta = 0.2785$.
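As a worked illustration of how the generator (5) is used in the sequel (added here for clarity; it is not part of the original derivation), consider a scalar diffusion $dx = f(x)dt + g(x)d\omega$ together with the quartic Lyapunov function $V(x)=x^4/4$ employed throughout the design:

```latex
% Illustrative computation (not from the paper): generator (5) for
% dx = f(x) dt + g(x) d\omega with V(x) = x^4/4.
\[
  LV(x)
  = \frac{\partial V}{\partial x}\, f
    + \frac{1}{2}\,\mathrm{Tr}\!\left\{ g^{T}\frac{\partial^{2}V}{\partial x^{2}}\, g \right\}
  = x^{3} f(x) + \frac{3}{2}\, x^{2} g^{2}(x).
\]
```

The $x^3 f$ term is the source of the $z_1^3(\cdot)$ factors in (13) below, and the Itô correction $\tfrac{3}{2}x^2g^2$ is what produces the $\tfrac{3}{2}z_1^2\phi_1^T\phi_1$ term there.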

2.2. RBFNN approximation

In this work, the RBFNN is used to approximate a nonlinear continuous function. A continuous function $Q(Z): R^q \to R$ is approximated by

$Q_{nn}(Z, W) = W^T S(Z),$   (7)

where $Z \in \Omega_Z \subset R^q$ and $W = [w_1, w_2, \ldots, w_l]^T \in R^l$ are the input vector and the weight vector, respectively; $l > 1$ denotes the number of nodes in the neural network; and

$S(Z) = [s_1(Z), \ldots, s_l(Z)]^T,$

with $s_i(Z)$ chosen as Gaussian functions of the form

$s_i(Z) = \exp\left[\dfrac{-(Z-\mu_i)^T(Z-\mu_i)}{\eta_i^2}\right], \quad i = 1, 2, \ldots, l,$

where $\mu_i = [\mu_{i1}, \mu_{i2}, \ldots, \mu_{iq}]^T$ and $\eta_i$ denote the center of the receptive field and the width of the Gaussian function, respectively. It is well known that the neural network (7) can approximate any continuous function defined on a compact set $\Omega_Z \subset R^q$; more precisely, for all $Z \in \Omega_Z$,

$Q(Z) = Q_{nn}(Z, W^*) + \varepsilon(Z),$   (8)

where $W^*$ is the ideal constant weight vector and $\varepsilon(Z)$ is the neural network approximation error, which satisfies $|\varepsilon(Z)| \le \varepsilon^*$ for some constant $\varepsilon^* > 0$. Moreover, $W^*$ is bounded by $\|W^*\| \le w_m$ on the compact set $\Omega_Z$, where $w_m$ is a positive constant. $W^*$ is usually unknown and needs to be estimated in the function approximation; it is defined as

$W^* = \arg\min_{W}\left[\sup_{Z\in\Omega_Z}|Q_{nn}(Z,W) - Q(Z)|\right],$

which is unknown and needs to be estimated in the control design. Let $\hat{W}$ be the estimate of $W^*$, and let $\tilde{W} = \hat{W} - W^*$ be the weight estimation error.
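To make the approximator (7) and (8) concrete, the following minimal Python sketch (added for illustration; it is not part of the original paper) evaluates $\hat{W}^T S(Z)$ for a scalar input. The 27 nodes, the center range $[-1.5, 1.5]$ and the width 0.8 are taken from the simulation settings of Section 4, while the zero initial weights are an assumption.

```python
import numpy as np

def rbf_basis(Z, centers, widths):
    """Gaussian basis vector S(Z): s_i(Z) = exp(-||Z - mu_i||^2 / eta_i^2)."""
    Z = np.atleast_1d(Z)
    diff = centers - Z                       # (l, q) array; sign irrelevant after squaring
    return np.exp(-np.sum(diff**2, axis=1) / widths**2)

def rbfnn_output(Z, W_hat, centers, widths):
    """Approximator output W_hat^T S(Z), cf. Q_nn(Z, W) = W^T S(Z) in (7)."""
    return float(W_hat @ rbf_basis(Z, centers, widths))

# Configuration borrowed from Section 4 for the first network: l = 27 nodes,
# centers evenly spaced in [-1.5, 1.5], widths 0.8; zero weights are a placeholder.
centers = np.linspace(-1.5, 1.5, 27).reshape(-1, 1)   # scalar input Z
widths = 0.8 * np.ones(27)
W_hat = np.zeros(27)                                   # weights would be tuned online by (19)/(35)

print(rbfnn_output(0.3, W_hat, centers, widths))       # 0.0 before any adaptation
```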

3. Adaptive control design and stability analysis

In this section, the adaptive control design is given by backstepping, and the design procedure contains $n$ steps. At each step, an intermediate control function $\alpha_i(t)$ is developed using an appropriate Lyapunov function $V_i(t)$. The design of both the control laws and the adaptive laws is based on the following change of coordinates:

$z_1 = x_1, \qquad z_i = x_i - \alpha_{i-1}, \quad i = 2, \ldots, n,$   (9)

where $\alpha_i(t)$ is an intermediate control, and the actual controller $u(t)$ is designed in the last step to stabilize the entire closed-loop system.

Step 1: The Lyapunov function candidate is chosen as

$V_1 = \dfrac{z_1^4}{4} + \dfrac{\tilde{\vartheta}_1^T\Gamma_{\vartheta 1}^{-1}\tilde{\vartheta}_1}{2} + \dfrac{\tilde{W}_1^T\Gamma_{w1}^{-1}\tilde{W}_1}{2} + \dfrac{\tilde{p}_1^T\Gamma_{p1}^{-1}\tilde{p}_1}{2} + \dfrac{\tilde{\varepsilon}_1^2}{2\gamma_{\varepsilon 1}},$   (10)

where $\Gamma_{\vartheta 1}$, $\Gamma_{w1}$ and $\Gamma_{p1}$ are symmetric positive definite matrices and $\gamma_{\varepsilon 1} > 0$. Let $\hat{\vartheta}_i$ denote the estimate used in the $i$th step; for clarity, we write $\vartheta_i = \vartheta$ at the $i$th step. To simplify the analysis, we also write $g_i(\bar{x}_i(t)) = g_i$, and similarly for the other functions. Then, we have

$dz_1 = (g_1(z_2+\alpha_1) + \theta_1^T\Psi_1 + f_1 + \Delta_1(x,t))dt + \phi_1^T d\omega.$   (11)

Let $\Lambda_1 = \Delta_1$, $p_1 = p_1^*$, $\Phi_1 = \Phi_1^*$, $\varphi_1 = \varphi_1^*$. Using (9) and (11), we have

$dz_1 = (g_1(z_2+\alpha_1) + \theta_1^T\Psi_1 + f_1 + \Lambda_1(x,t))dt + \phi_1^T d\omega.$   (12)

From (5), (10) and (12), we can obtain

$LV_1 = z_1^3(g_1(z_2+\alpha_1) + \theta_1^T\Psi_1 + f_1 + \Lambda_1(x,t)) - \tilde{\vartheta}_1^T\Gamma_{\vartheta 1}^{-1}\dot{\hat{\vartheta}}_1 - \tilde{p}_1^T\Gamma_{p1}^{-1}\dot{\hat{p}}_1 - \gamma_{\varepsilon 1}^{-1}\tilde{\varepsilon}_1\dot{\hat{\varepsilon}}_1 - \tilde{W}_1^T\Gamma_{w1}^{-1}\dot{\hat{W}}_1 + \dfrac{3}{2}z_1^2\phi_1^T\phi_1.$   (13)

Let $Q_1(Z_1) = f_1$. Applying Young's inequality, we have

$\dfrac{3}{2}z_1^2\phi_1^T\phi_1 \le \dfrac{3\epsilon_{11}}{4} + \dfrac{3z_1^4}{4\epsilon_{11}}\|\phi_1\|^4,$   (14)

$g_1 z_1^3 z_2 \le \dfrac{3}{4}g_1^{4/3}z_1^4 + \dfrac{1}{4}z_2^4,$   (15)

and applying Assumption 2, we have

$\theta_1^T\Psi_1 \le \sum_{k=1}^{q}|\theta_{1k}||\Psi_{1k}| \le \sum_{k=1}^{q}|\theta_{1k}|b_{1k}^*\varphi_{1k}^* \le \vartheta_1^T\varphi_1^* = \vartheta_1^T\varphi_1,$   (16)

where

$\vartheta_1 = [|\theta_{11}|b_{11}^*, \ldots, |\theta_{1q}|b_{1q}^*]^T, \qquad \varphi_1^* = [\varphi_{11}^*, \ldots, \varphi_{1q}^*]^T.$

From (13)–(16) and applying Assumption 1, we have

$LV_1 \le \dfrac{3}{4}g_1^{4/3}z_1^4 + \dfrac{1}{4}z_2^4 + z_1^3 g_1\alpha_1 + |z_1|^3\vartheta_1^T\varphi_1 + z_1^3 W_1^{*T}S(Z_1) + |z_1|^3\varepsilon_1^* + |z_1|^3 p_1^T\Phi_1(x_1) - \tilde{\vartheta}_1^T\Gamma_{\vartheta 1}^{-1}\dot{\hat{\vartheta}}_1 - \tilde{p}_1^T\Gamma_{p1}^{-1}\dot{\hat{p}}_1 - \gamma_{\varepsilon 1}^{-1}\tilde{\varepsilon}_1\dot{\hat{\varepsilon}}_1 - \tilde{W}_1^T\Gamma_{w1}^{-1}\dot{\hat{W}}_1 + \dfrac{3\epsilon_{11}}{4} + \dfrac{3z_1^4}{4\epsilon_{11}}\|\phi_1\|^4.$   (17)

Let the intermediate control law be

$\alpha_1 = g_1^{-1}\left(-c_1 z_1 - \dfrac{3}{4}g_1^{4/3}z_1 - \beta_{10} - \beta_{11} - \beta_{12} - \hat{W}_1^T S(Z_1) - \dfrac{3z_1}{4\epsilon_{11}}\|\phi_1\|^4\right),$
$\beta_{10} = \hat{\varepsilon}_1\varpi_{10}, \quad \varpi_{10} := \tanh\left[\dfrac{z_1^3}{\varepsilon_{10}}\right], \quad \beta_{11} = \hat{p}_1^T\varpi_{11}, \quad \varpi_{11} := \Phi_1(x_1)\tanh\left[\dfrac{z_1^3\Phi_1(x_1)}{\varepsilon_{11}}\right], \quad \beta_{12} = \hat{\vartheta}_1^T\varpi_{12}, \quad \varpi_{12} := \varphi_1(x_1)\tanh\left[\dfrac{z_1^3\varphi_1(x_1)}{\varepsilon_{12}}\right].$

Then, we have

$LV_1 \le -c_1 z_1^4 + \dfrac{1}{4}z_2^4 - \tilde{\vartheta}_1^T(\Gamma_{\vartheta 1}^{-1}\dot{\hat{\vartheta}}_1 - z_1^3\varphi_1) - \tilde{\varepsilon}_1(\gamma_{\varepsilon 1}^{-1}\dot{\hat{\varepsilon}}_1 + z_1^3\varpi_{10}) - \tilde{p}_1^T(\Gamma_{p1}^{-1}\dot{\hat{p}}_1 + z_1^3\varpi_{11}) - \tilde{W}_1^T(\Gamma_{w1}^{-1}\dot{\hat{W}}_1 + z_1^3 S(Z_1)) + |z_1|^3\varepsilon_1^* - z_1^3\varepsilon_1^*\varpi_{10} + |z_1|^3 p_1^T\Phi_1(x_1) - z_1^3 p_1^T\varpi_{11} + |z_1|^3\vartheta_1^T\varphi_1(x_1) - z_1^3\vartheta_1^T\varpi_{12} + \dfrac{3}{4}\epsilon_{11}.$   (18)

The adaptive laws are chosen as

$\dot{\hat{\vartheta}}_1 = -\Gamma_{\vartheta 1}(z_1^3\varphi_1 - \sigma_{\vartheta 1}\hat{\vartheta}_1), \ (\sigma_{\vartheta 1} > 0), \qquad \dot{\hat{\varepsilon}}_1 = -\gamma_{\varepsilon 1}(z_1^3\varpi_{10} - \sigma_{\varepsilon 1}\hat{\varepsilon}_1), \ (\sigma_{\varepsilon 1} > 0),$
$\dot{\hat{W}}_1 = -\Gamma_{w1}(z_1^3 S(Z_1) - \sigma_{w1}\hat{W}_1), \ (\sigma_{w1} > 0), \qquad \dot{\hat{p}}_1 = -\Gamma_{p1}(z_1^3\varpi_{11} - \sigma_{p1}\hat{p}_1), \ (\sigma_{p1} > 0).$   (19)

From (18) and Lemma 2, we have

$LV_1 \le -c_1 z_1^4 + \dfrac{1}{4}z_2^4 + \sigma_{\vartheta 1}\tilde{\vartheta}_1^T\hat{\vartheta}_1 + \sigma_{\varepsilon 1}\tilde{\varepsilon}_1\hat{\varepsilon}_1 + \sigma_{p1}\tilde{p}_1^T\hat{p}_1 + \sigma_{w1}\tilde{W}_1^T\hat{W}_1 + 0.2785(\varepsilon_{10}+\varepsilon_{11}+\varepsilon_{12}) + \dfrac{3}{4}\epsilon_{11}.$   (20)

Applying the following inequalities

$\sigma_{\vartheta 1}\tilde{\vartheta}_1^T\hat{\vartheta}_1 \le -\dfrac{1}{2}\sigma_{\vartheta 1}\|\tilde{\vartheta}_1\|^2 + \dfrac{1}{2}\sigma_{\vartheta 1}\|\vartheta_1\|^2, \qquad \sigma_{\varepsilon 1}\tilde{\varepsilon}_1\hat{\varepsilon}_1 \le -\dfrac{1}{2}\sigma_{\varepsilon 1}\|\tilde{\varepsilon}_1\|^2 + \dfrac{1}{2}\sigma_{\varepsilon 1}\|\varepsilon_1^*\|^2,$
$\sigma_{p1}\tilde{p}_1^T\hat{p}_1 \le -\dfrac{1}{2}\sigma_{p1}\|\tilde{p}_1\|^2 + \dfrac{1}{2}\sigma_{p1}\|p_1\|^2, \qquad \sigma_{w1}\tilde{W}_1^T\hat{W}_1 \le -\dfrac{1}{2}\sigma_{w1}\|\tilde{W}_1\|^2 + \dfrac{1}{2}\sigma_{w1}\|W_1^*\|^2,$   (21)

we can obtain

$LV_1 \le -c_1 z_1^4 + \dfrac{1}{4}z_2^4 - \dfrac{1}{2}\sigma_{\vartheta 1}\|\tilde{\vartheta}_1\|^2 - \dfrac{1}{2}\sigma_{\varepsilon 1}\|\tilde{\varepsilon}_1\|^2 - \dfrac{1}{2}\sigma_{p1}\|\tilde{p}_1\|^2 - \dfrac{1}{2}\sigma_{w1}\|\tilde{W}_1\|^2 + \dfrac{1}{2}\sigma_{p1}\|p_1\|^2 + \dfrac{1}{2}\sigma_{\vartheta 1}\|\vartheta_1\|^2 + \dfrac{1}{2}\sigma_{w1}\|W_1^*\|^2 + \dfrac{3}{4}\epsilon_{11} + 0.2785(\varepsilon_{10}+\varepsilon_{11}+\varepsilon_{12}) \le -\lambda_1 V_1 + K_1 + \dfrac{1}{4}z_2^4,$   (22)

where

$\lambda_1 = \min\left\{4c_1,\ \gamma_{\varepsilon 1}\sigma_{\varepsilon 1},\ \dfrac{\sigma_{\vartheta 1}}{\lambda_{\max}(\Gamma_{\vartheta 1}^{-1})},\ \dfrac{\sigma_{p1}}{\lambda_{\max}(\Gamma_{p1}^{-1})},\ \dfrac{\sigma_{w1}}{\lambda_{\max}(\Gamma_{w1}^{-1})}\right\},$
$K_1 = \dfrac{1}{2}\sigma_{p1}\|p_1\|^2 + \dfrac{1}{2}\sigma_{\vartheta 1}\|\vartheta_1\|^2 + \dfrac{1}{2}\sigma_{w1}\|W_1^*\|^2 + \dfrac{3}{4}\epsilon_{11} + 0.2785(\varepsilon_{10}+\varepsilon_{11}+\varepsilon_{12}).$
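For readers who wish to implement the update laws, the following Python sketch (added here for illustration; it is not the authors' code) performs one forward-Euler step of a σ-modified adaptive law of the form (19). The step size, the regressor values and the use of a scalar gain in place of the gain matrix are placeholder assumptions.

```python
import numpy as np

def sigma_mod_step(est, gain, regressor, z, sigma, dt):
    """One forward-Euler step of an update of the form (19):
       d(est)/dt = -gain * (z**3 * regressor - sigma * est)."""
    return est - dt * gain * (z**3 * regressor - sigma * est)

# Placeholder values, purely for illustration
dt = 1e-3
z1 = 0.2
varphi1 = np.array([0.5, 0.1])        # regressor varphi_1(x_1) (assumed values)
theta_hat1 = np.array([0.0, 0.1])     # current estimate of vartheta_1
Gamma_theta1 = 0.3                     # adaptation gain (scalar stand-in for diag(0.3, 0.3))
sigma_theta1 = 0.3                     # sigma-modification constant from Section 4

theta_hat1 = sigma_mod_step(theta_hat1, Gamma_theta1, varphi1, z1, sigma_theta1, dt)
print(theta_hat1)
```

The $-\sigma\,\hat{(\cdot)}$ term is the leakage that keeps the estimates bounded even when the regressor is not persistently exciting, which is what allows the $K_i$ constants in (22) and (37) to stay finite.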

Step i $(2 \le i < n)$: Similarly, we can obtain

$dz_i = [g_i(z_{i+1}+\alpha_i) + \theta^T\Psi_i + f_i + \Delta_i(x,t)]dt + \phi_i^T d\omega - \Big[\sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}(g_j x_{j+1} + \theta^T\Psi_j + f_j + \Delta_j) + \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial\hat{\vartheta}_j}\dot{\hat{\vartheta}}_j + \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial\hat{W}_j}\dot{\hat{W}}_j + \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial\hat{\varepsilon}_j}\dot{\hat{\varepsilon}}_j + \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial\hat{p}_j}\dot{\hat{p}}_j + \dfrac{1}{2}\sum_{k,j=1}^{i-1}\dfrac{\partial^2\alpha_{i-1}}{\partial x_k\partial x_j}\phi_k^T\phi_j\Big]dt - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\phi_j^T d\omega.$   (23)

Applying Young's inequality, we have

$\dfrac{3z_i^2}{2}\Big(\phi_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\phi_j\Big)\Big(\phi_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\phi_j\Big)^T \le \dfrac{3\epsilon_{i1}}{4} + \dfrac{3z_i^4}{4\epsilon_{i1}}\Big\|\phi_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\phi_j\Big\|^4,$   (24)

$g_i z_i^3 z_{i+1} \le \dfrac{3}{4}g_i^{4/3}z_i^4 + \dfrac{1}{4}z_{i+1}^4.$   (25)

Let

$Q_i(Z_i) = f_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}f_j(\bar{x}_j),$   (26)

$\Lambda_i = \Delta_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\Delta_j, \qquad \Upsilon_i = \theta^T\Psi_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\theta^T\Psi_j,$   (27)

with $2 \le i \le n$, and choose the Lyapunov function

$V_i = V_{i-1} + \dfrac{z_i^4}{4} + \dfrac{\tilde{\vartheta}_i^T\Gamma_{\vartheta i}^{-1}\tilde{\vartheta}_i}{2} + \dfrac{\tilde{p}_i^T\Gamma_{pi}^{-1}\tilde{p}_i}{2} + \dfrac{\tilde{W}_i^T\Gamma_{wi}^{-1}\tilde{W}_i}{2} + \dfrac{\tilde{\varepsilon}_i^2}{2\gamma_{\varepsilon i}}, \quad 2 \le i \le n.$   (28)

Define the operator $|\theta_i|_1 = [|\theta_{i1}|, |\theta_{i2}|, \ldots, |\theta_{iq}|]^T$. Then, using Assumptions 1 and 2, we have

$\Lambda_i \le p_i^T\Phi_i, \qquad \Upsilon_i \le \vartheta_i^T\varphi_i,$   (29)

where $p_i = [p_1^*, p_2^*, \ldots, p_i^*]^T$ and $\vartheta_i = [(|\theta_1|_1\bullet b_1^*)^T, (|\theta_2|_1\bullet b_2^*)^T, \ldots, (|\theta_i|_1\bullet b_i^*)^T]^T$. Let

$\Phi_i = \Big[\dfrac{\partial\alpha_{i-1}}{\partial x_1}\Phi_1^*, \dfrac{\partial\alpha_{i-1}}{\partial x_2}\Phi_2^*, \ldots, \dfrac{\partial\alpha_{i-1}}{\partial x_{i-1}}\Phi_{i-1}^*, \Phi_i^*\Big]^T, \qquad \varphi_i = \Big[\dfrac{\partial\alpha_{i-1}}{\partial x_1}\varphi_1^{*T}, \dfrac{\partial\alpha_{i-1}}{\partial x_2}\varphi_2^{*T}, \ldots, \dfrac{\partial\alpha_{i-1}}{\partial x_{i-1}}\varphi_{i-1}^{*T}, \varphi_i^{*T}\Big]^T,$   (30)

with $2 \le i \le n$. By using (23)–(30), we have

$LV_i \le LV_{i-1} + \dfrac{3}{4}g_i^{4/3}z_i^4 + \dfrac{1}{4}z_{i+1}^4 + z_i^3 g_i\alpha_i + |z_i|^3\vartheta_i^T\varphi_i + z_i^3 W_i^{*T}S(Z_i) + |z_i|^3\varepsilon_i^* + |z_i|^3 p_i^T\Phi_i - z_i^3\sum_{j=1}^{i-1}\Big(\dfrac{\partial\alpha_{i-1}}{\partial x_j}x_{j+1} + \dfrac{\partial\alpha_{i-1}}{\partial\hat{\vartheta}_j}\dot{\hat{\vartheta}}_j + \dfrac{\partial\alpha_{i-1}}{\partial\hat{W}_j}\dot{\hat{W}}_j + \dfrac{\partial\alpha_{i-1}}{\partial\hat{\varepsilon}_j}\dot{\hat{\varepsilon}}_j + \dfrac{\partial\alpha_{i-1}}{\partial\hat{p}_j}\dot{\hat{p}}_j\Big) - \dfrac{z_i^3}{2}\sum_{k,j=1}^{i-1}\dfrac{\partial^2\alpha_{i-1}}{\partial x_k\partial x_j}\phi_k^T\phi_j + \dfrac{3\epsilon_{i1}}{4} + \dfrac{3z_i^4}{4\epsilon_{i1}}\Big\|\phi_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\phi_j\Big\|^4 - \tilde{\vartheta}_i^T\Gamma_{\vartheta i}^{-1}\dot{\hat{\vartheta}}_i - \tilde{W}_i^T\Gamma_{wi}^{-1}\dot{\hat{W}}_i - \tilde{p}_i^T\Gamma_{pi}^{-1}\dot{\hat{p}}_i - \gamma_{\varepsilon i}^{-1}\tilde{\varepsilon}_i\dot{\hat{\varepsilon}}_i.$   (31)

We select the intermediate control law $\alpha_i$, $2 \le i \le n-1$, as

$\alpha_i = g_i^{-1}\Big[-c_i z_i - \dfrac{1}{4}z_i - \dfrac{3}{4}g_i^{4/3}z_i - \beta_{i0} - \beta_{i1} - \beta_{i2} - \hat{W}_i^T S(Z_i) + \sum_{j=1}^{i-1}\Big(\dfrac{\partial\alpha_{i-1}}{\partial x_j}x_{j+1} + \dfrac{\partial\alpha_{i-1}}{\partial\hat{\vartheta}_j}\dot{\hat{\vartheta}}_j + \dfrac{\partial\alpha_{i-1}}{\partial\hat{W}_j}\dot{\hat{W}}_j + \dfrac{\partial\alpha_{i-1}}{\partial\hat{\varepsilon}_j}\dot{\hat{\varepsilon}}_j + \dfrac{\partial\alpha_{i-1}}{\partial\hat{p}_j}\dot{\hat{p}}_j\Big) + \dfrac{1}{2}\sum_{k,j=1}^{i-1}\dfrac{\partial^2\alpha_{i-1}}{\partial x_k\partial x_j}\phi_k^T\phi_j - \dfrac{3z_i}{4\epsilon_{i1}}\Big\|\phi_i - \sum_{j=1}^{i-1}\dfrac{\partial\alpha_{i-1}}{\partial x_j}\phi_j\Big\|^4\Big],$   (32)

where

$\beta_{i0} = \hat{\varepsilon}_i\varpi_{i0}, \quad \varpi_{i0} := \tanh\Big[\dfrac{z_i^3}{\varepsilon_{i0}}\Big], \quad \beta_{i1} = \hat{p}_i^T\varpi_{i1}, \quad \varpi_{i1} := \Phi_i\odot\tanh\Big[\dfrac{z_i^3\Phi_i}{\varepsilon_{i1}}\Big], \quad \beta_{i2} = \hat{\vartheta}_i^T\varpi_{i2}, \quad \varpi_{i2} := \varphi_i\odot\tanh\Big[\dfrac{z_i^3\varphi_i}{\varepsilon_{i2}}\Big],$   (33)

with the componentwise products

$\Phi_i\odot\tanh\Big[\dfrac{z_i^3\Phi_i}{\varepsilon_{i1}}\Big] = \Big[\dfrac{\partial\alpha_{i-1}}{\partial x_1}\Phi_1^*\tanh\Big(\dfrac{z_i^3\frac{\partial\alpha_{i-1}}{\partial x_1}\Phi_1^*}{\varepsilon_{i1}}\Big), \ldots, \dfrac{\partial\alpha_{i-1}}{\partial x_{i-1}}\Phi_{i-1}^*\tanh\Big(\dfrac{z_i^3\frac{\partial\alpha_{i-1}}{\partial x_{i-1}}\Phi_{i-1}^*}{\varepsilon_{i1}}\Big), \Phi_i^*\tanh\Big(\dfrac{z_i^3\Phi_i^*}{\varepsilon_{i1}}\Big)\Big]^T,$

$\varphi_i\odot\tanh\Big[\dfrac{z_i^3\varphi_i}{\varepsilon_{i2}}\Big] = \Big[\dfrac{\partial\alpha_{i-1}}{\partial x_1}\varphi_1^*\tanh\Big(\dfrac{z_i^3\frac{\partial\alpha_{i-1}}{\partial x_1}\varphi_1^*}{\varepsilon_{i2}}\Big), \ldots, \dfrac{\partial\alpha_{i-1}}{\partial x_{i-1}}\varphi_{i-1}^*\tanh\Big(\dfrac{z_i^3\frac{\partial\alpha_{i-1}}{\partial x_{i-1}}\varphi_{i-1}^*}{\varepsilon_{i2}}\Big), \varphi_i^*\tanh\Big(\dfrac{z_i^3\varphi_i^*}{\varepsilon_{i2}}\Big)\Big]^T.$

Substituting (32) and (33) into (31) and using Lemma 2, we can obtain

$LV_i \le LV_{i-1} - \dfrac{1}{4}z_i^4 - c_i z_i^4 - \tilde{W}_i^T\big(\Gamma_{wi}^{-1}\dot{\hat{W}}_i + z_i^3 S(Z_i)\big) - \tilde{\varepsilon}_i\big(\gamma_{\varepsilon i}^{-1}\dot{\hat{\varepsilon}}_i + z_i^3\varpi_{i0}\big) - \tilde{p}_i^T\big(\Gamma_{pi}^{-1}\dot{\hat{p}}_i + z_i^3\varpi_{i1}\big) - \tilde{\vartheta}_i^T\big(\Gamma_{\vartheta i}^{-1}\dot{\hat{\vartheta}}_i + z_i^3\varpi_{i2}\big) + 0.2785(\varepsilon_{i0}+\varepsilon_{i1}+\varepsilon_{i2}) + \dfrac{3\epsilon_{i1}}{4} + \dfrac{1}{4}z_{i+1}^4.$   (34)

The adaptive laws are chosen as

$\dot{\hat{\varepsilon}}_i = -\gamma_{\varepsilon i}(z_i^3\varpi_{i0} - \sigma_{\varepsilon i}\hat{\varepsilon}_i), \ (\sigma_{\varepsilon i} > 0), \qquad \dot{\hat{p}}_i = -\Gamma_{pi}(z_i^3\varpi_{i1} - \sigma_{pi}\hat{p}_i), \ (\sigma_{pi} > 0),$
$\dot{\hat{\vartheta}}_i = -\Gamma_{\vartheta i}(z_i^3\varpi_{i2} - \sigma_{\vartheta i}\hat{\vartheta}_i), \ (\sigma_{\vartheta i} > 0), \qquad \dot{\hat{W}}_i = -\Gamma_{wi}(z_i^3 S(Z_i) - \sigma_{wi}\hat{W}_i), \ (\sigma_{wi} > 0).$   (35)

Noting (21), (34) and (35) can be combined to give

$LV_i \le LV_{i-1} - \dfrac{1}{4}z_i^4 - c_i z_i^4 + \sigma_{\vartheta i}\tilde{\vartheta}_i^T\hat{\vartheta}_i + \sigma_{\varepsilon i}\tilde{\varepsilon}_i\hat{\varepsilon}_i + \sigma_{pi}\tilde{p}_i^T\hat{p}_i + \sigma_{wi}\tilde{W}_i^T\hat{W}_i + 0.2785(\varepsilon_{i0}+\varepsilon_{i1}+\varepsilon_{i2}) + \dfrac{3}{4}\epsilon_{i1} + \dfrac{1}{4}z_{i+1}^4,$   (36)

and hence

$LV_i \le LV_{i-1} - c_i z_i^4 - \dfrac{1}{4}z_i^4 - \dfrac{1}{2}\sigma_{\vartheta i}\|\tilde{\vartheta}_i\|^2 - \dfrac{1}{2}\sigma_{\varepsilon i}\|\tilde{\varepsilon}_i\|^2 - \dfrac{1}{2}\sigma_{pi}\|\tilde{p}_i\|^2 - \dfrac{1}{2}\sigma_{wi}\|\tilde{W}_i\|^2 + \dfrac{1}{2}\sigma_{pi}\|p_i\|^2 + \dfrac{1}{2}\sigma_{\vartheta i}\|\vartheta_i\|^2 + \dfrac{1}{2}\sigma_{wi}\|W_i^*\|^2 + 0.2785(\varepsilon_{i0}+\varepsilon_{i1}+\varepsilon_{i2}) + \dfrac{3}{4}\epsilon_{i1} + \dfrac{1}{4}z_{i+1}^4 \le \Big(LV_{i-1} - \dfrac{1}{4}z_i^4\Big) - \lambda_i V_i + K_i + \dfrac{1}{4}z_{i+1}^4 \le -\sum_{j=1}^{i}\lambda_j V_j + \sum_{j=1}^{i}K_j + \dfrac{1}{4}z_{i+1}^4,$   (37)

where

$\lambda_i = \min\left\{4c_i,\ \gamma_{\varepsilon i}\sigma_{\varepsilon i},\ \dfrac{\sigma_{\vartheta i}}{\lambda_{\max}(\Gamma_{\vartheta i}^{-1})},\ \dfrac{\sigma_{pi}}{\lambda_{\max}(\Gamma_{pi}^{-1})},\ \dfrac{\sigma_{wi}}{\lambda_{\max}(\Gamma_{wi}^{-1})}\right\},$
$K_i = \dfrac{1}{2}\sigma_{pi}\|p_i\|^2 + \dfrac{1}{2}\sigma_{\vartheta i}\|\vartheta_i\|^2 + \dfrac{1}{2}\sigma_{wi}\|W_i^*\|^2 + \dfrac{3}{4}\epsilon_{i1} + 0.2785(\varepsilon_{i0}+\varepsilon_{i1}+\varepsilon_{i2}),$

with $1 \le i \le n$.

Step n: We select the control law

$u = g_n^{-1}\Big[-c_n z_n - \dfrac{1}{4}z_n - \beta_{n0} - \beta_{n1} - \beta_{n2} - \hat{W}_n^T S(Z_n) + \dfrac{1}{2}\sum_{k,j=1}^{n-1}\dfrac{\partial^2\alpha_{n-1}}{\partial x_k\partial x_j}\phi_k^T\phi_j + \sum_{j=1}^{n-1}\Big(\dfrac{\partial\alpha_{n-1}}{\partial x_j}x_{j+1} + \dfrac{\partial\alpha_{n-1}}{\partial\hat{\vartheta}_j}\dot{\hat{\vartheta}}_j + \dfrac{\partial\alpha_{n-1}}{\partial\hat{W}_j}\dot{\hat{W}}_j + \dfrac{\partial\alpha_{n-1}}{\partial\hat{\varepsilon}_j}\dot{\hat{\varepsilon}}_j + \dfrac{\partial\alpha_{n-1}}{\partial\hat{p}_j}\dot{\hat{p}}_j\Big) - \dfrac{3z_n}{4\epsilon_{n1}}\Big\|\phi_n - \sum_{j=1}^{n-1}\dfrac{\partial\alpha_{n-1}}{\partial x_j}\phi_j\Big\|^4\Big],$   (38)

with the adaptive laws chosen as in (35) for $i = n$. A similar procedure is employed; by using (24)–(30), (33) and (35), we have

$LV_n \le -\sum_{i=1}^{n}\lambda_i V_i + \sum_{i=1}^{n}K_i.$

At this stage, the ANC design has been completed based on the backstepping technique. The main result can be summarized by the following theorem.

Theorem 1. Consider the stochastic nonlinear system (1) under Assumptions 1 and 2, with the state-feedback adaptive controller (38) and the adaptation laws (19) and (35) designed by the backstepping approach. For bounded initial conditions, the following properties hold.

(I) The equilibrium at the origin of the closed-loop system is asymptotically stable in probability.

(II) The closed-loop system has an almost surely unique solution on $[0,\infty)$ for each $x(0)$, $\hat{\vartheta}(0)$, $\hat{W}(0)$, $\hat{\varepsilon}(0)$, $\hat{p}(0)$; the system state $x(t)$ and the parameter estimates $\hat{\vartheta}(t)$, $\hat{W}(t)$, $\hat{\varepsilon}(t)$, $\hat{p}(t)$ satisfy

$P\Big\{\lim_{t\to\infty}x(t) = 0\Big\} = 1; \qquad P\Big\{\lim_{t\to\infty}\hat{\vartheta}(t),\ \lim_{t\to\infty}\hat{W}(t),\ \lim_{t\to\infty}\hat{\varepsilon}(t)\ \text{and}\ \lim_{t\to\infty}\hat{p}(t)\ \text{exist and are finite}\Big\} = 1.$

4. Simulation studies

The following example is given to show the effectiveness of the proposed adaptive neural control algorithm. Consider the following second-order stochastic nonlinear system:

$dx_1 = (x_2 + f_1(x_1) + \Delta_1(x,t))dt + x_1\cos x_1\, d\omega,$
$dx_2 = ((1+0.5\sin x_1)u + \theta_2^*\psi_2^* + f_2(x))dt + \sin x_2\, d\omega,$
$y = x_1,$   (39)

where $g_1 = 1$, $\theta_1^*\psi_1^* = 0$, $f_1 = x_1\sin x_1$, $\Delta_1 = 0.5x_1\sin(x_2 t)$, $\phi_1 = x_1\cos x_1$, $g_2 = 1+0.5\sin x_1$, $\theta_2^*\psi_2^* = 0.02x_2$, $f_2 = x_2\cos x_2$, $\Delta_2 = 0$, $\phi_2 = \sin x_2$.
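Before giving the controller details, the following Python sketch (added for illustration; it is not the authors' simulation code) shows how the plant (39) can be integrated with an Euler-Maruyama scheme. The initial state, the reading of $\Delta_1$ as $0.5x_1\sin(x_2 t)$, the use of a single Wiener increment shared by both equations, and the placeholder linear feedback used in place of the adaptive controller (38) are all assumptions.

```python
import numpy as np

def simulate_plant_39(u_fn, x0=(0.5, -0.5), dt=1e-3, T=10.0, seed=0):
    """Euler-Maruyama discretization of the example plant (39).
    u_fn(t, x) supplies the control input; the adaptive controller of
    Section 3 is not reproduced here."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = np.empty((steps + 1, 2))
    traj[0] = x
    for k in range(steps):
        t = k * dt
        x1, x2 = x
        u = u_fn(t, x)
        drift = np.array([
            x2 + x1 * np.sin(x1) + 0.5 * x1 * np.sin(x2 * t),          # dx1 drift
            (1 + 0.5 * np.sin(x1)) * u + 0.02 * x2 + x2 * np.cos(x2),  # dx2 drift
        ])
        diffusion = np.array([x1 * np.cos(x1), np.sin(x2)])            # d(omega) coefficients
        dw = rng.normal(0.0, np.sqrt(dt))                              # Wiener increment
        x = x + drift * dt + diffusion * dw
        traj[k + 1] = x
    return traj

# Placeholder stabilizing feedback, only to exercise the simulator:
traj = simulate_plant_39(lambda t, x: -5.0 * x[0] - 5.0 * x[1])
print(traj[-1])
```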

The adaptive laws can be designed as follows:

$\dot{\hat{\varepsilon}}_1 = \gamma_{\varepsilon 1}(z_1^3\varpi_{10} - \sigma_{\varepsilon 1}\hat{\varepsilon}_1), \quad \dot{\hat{p}}_1 = \Gamma_{p1}(z_1^3\varpi_{11} - \sigma_{p1}\hat{p}_1), \quad \dot{\hat{W}}_1 = \Gamma_{w1}(z_1^3 S(Z_1) - \sigma_{w1}\hat{W}_1), \quad \dot{\hat{\vartheta}}_1 = \Gamma_{\vartheta 1}(z_1^3\varphi_1 - \sigma_{\vartheta 1}\hat{\vartheta}_1),$
$\dot{\hat{\varepsilon}}_2 = \gamma_{\varepsilon 2}(z_2^3\varpi_{20} - \sigma_{\varepsilon 2}\hat{\varepsilon}_2), \quad \dot{\hat{p}}_2 = \Gamma_{p2}(z_2^3\varpi_{21} - \sigma_{p2}\hat{p}_2), \quad \dot{\hat{W}}_2 = \Gamma_{w2}(z_2^3 S(Z_2) - \sigma_{w2}\hat{W}_2), \quad \dot{\hat{\vartheta}}_2 = \Gamma_{\vartheta 2}(z_2^3\varpi_{22} - \sigma_{\vartheta 2}\hat{\vartheta}_2).$

Finally, the system control law can be designed as

$u = g_2^{-1}\Big[-c_2 z_2 - \dfrac{1}{4}z_2 - \beta_{20} - \beta_{21} - \hat{\vartheta}_2^T\Big(\varphi_2 - \dfrac{\partial\alpha_1}{\partial x_1}\varphi_1\Big) - \hat{W}_2^T S(Z_2) + \Big(\dfrac{\partial\alpha_1}{\partial x_1}x_2 + \dfrac{\partial\alpha_1}{\partial\hat{\vartheta}_1}\dot{\hat{\vartheta}}_1 + \dfrac{\partial\alpha_1}{\partial\hat{W}_1}\dot{\hat{W}}_1 + \dfrac{\partial\alpha_1}{\partial\hat{\varepsilon}_1}\dot{\hat{\varepsilon}}_1 + \dfrac{\partial\alpha_1}{\partial\hat{p}_1}\dot{\hat{p}}_1\Big) + \dfrac{1}{2}\dfrac{\partial^2\alpha_1}{\partial x_1^2}\phi_1^2 - \dfrac{3z_2}{4\epsilon_{21}}\Big(\phi_2 - \dfrac{\partial\alpha_1}{\partial x_1}\phi_1\Big)^4\Big],$

where

$\varpi_{20} = \tanh\big[z_2^3/\varepsilon_{20}\big], \qquad \varpi_{21} = \Big[\dfrac{\partial\alpha_1}{\partial x_1}\Phi_1^*\tanh\Big(z_2^3\dfrac{\partial\alpha_1}{\partial x_1}\Phi_1^*/\varepsilon_{21}\Big),\ 0\Big]^T.$

The initial conditions and design parameters are given as follows: $\hat{\theta}_1(0) = [0\ \ 0.1]^T$, $\hat{\theta}_2(0) = [0\ \ 0.8]^T$, $\hat{b}_1 = [0\ \ 1]^T$, $\hat{b}_2 = [0\ \ 0.1]^T$, $\hat{\varepsilon}_1(0) = 0.1\times 10^{-3}$, $\hat{\varepsilon}_2(0) = 0$, $\hat{p}_1(0) = 0.1$, $\hat{p}_2(0) = [0\ \ 0.15]^T$, $\hat{W}_1(0) = \hat{W}_2(0) = 0$, $\Gamma_{\vartheta 1} = \mathrm{diag}(0.3,\ 0.3)$, $\Gamma_{\vartheta 2} = \mathrm{diag}(0.25,\ 0.25)$, $\gamma_{\varepsilon 1} = 0.3$, $\gamma_{\varepsilon 2} = 0.4$, $\Gamma_{p1} = 0.3$, $\Gamma_{p2} = [0.4\ \ 0]$, $\sigma_{\vartheta 1} = 0.3$, $\sigma_{\vartheta 2} = 0.25$, $\sigma_{\varepsilon 1} = 0.4$, $\sigma_{\varepsilon 2} = 0.3$, $\sigma_{p1} = 0.3$, $\sigma_{p2} = 0.4$, $\sigma_{w1} = 1.5$, $\sigma_{w2} = 0.3$, $\varepsilon_{10} = \varepsilon_{20} = \varepsilon_{11} = \varepsilon_{21} = 0.3$, $c_1 = c_2 = 0.3$.

Specifically, the neural network $\hat{W}_1^T S(Z_1)$ contains 27 nodes (i.e., $l_1 = 27$) with centers $\mu_l$ $(l = 1, \ldots, l_1)$ evenly spaced in $[-1.5, 1.5]$ and widths $\eta_l = 0.8$ $(l = 1, \ldots, l_1)$. The neural network $\hat{W}_2^T S(Z_2)$ contains 64 nodes (i.e., $l_2 = 64$) with centers $\mu_l$ $(l = 1, \ldots, l_2)$ evenly spaced in $[-1.5, 1.5]\times[-1.5, 1.5]\times[-1.5, 1.5]\times[-1.5, 1.5]$ and widths $\eta_l = 1.5$ $(l = 1, \ldots, l_2)$.

The simulation results are shown in Figs. 1–6, from which we can see that the controller renders the resulting closed-loop system asymptotically stable and that the limits of the estimated parameters exist and are finite. Fig. 1 shows that $x_1$ and $x_2$ converge to zero rapidly. Fig. 2 displays the control input signal $u$. Fig. 3 shows the boundedness of the weights $\|\hat{W}_1\|$ and $\|\hat{W}_2\|$, Fig. 4 displays the boundedness of the approximation error estimates $\hat{\varepsilon}_1$ and $\hat{\varepsilon}_2$, and Figs. 5 and 6 show the response curves of the adaptive parameters $\|\hat{p}_1\|$, $\|\hat{p}_2\|$, $\|\hat{\vartheta}_1\|$ and $\|\hat{\vartheta}_2\|$.

Fig. 1. System states $x_1$ (solid line) and $x_2$ (dashed line).
Fig. 2. Control input $u$.
Fig. 3. Boundedness of weights $\|\hat{W}_1\|$ (solid line) and $\|\hat{W}_2\|$ (dashed line).
Fig. 4. Boundedness of approximation error estimates $\hat{\varepsilon}_1$ (solid line) and $\hat{\varepsilon}_2$ (dashed line).
Fig. 5. Boundedness of parameters $\|\hat{p}_1\|$ (solid line) and $\|\hat{p}_2\|$ (dashed line).
Fig. 6. Boundedness of parameters $\|\hat{\vartheta}_1\|$ (solid line) and $\|\hat{\vartheta}_2\|$ (dashed line).

5. Conclusion

In this paper, the adaptive neural control method for nonlinear systems has been extended to a class of stochastic nonlinear systems with unknown disturbances, unknown parameters and unknown nonlinear functions. The unknown nonlinearities are approximated by RBFNNs, and all neural network weights are tuned online with no prior training needed. An adaptive bounding design technique is used to deal with the unknown parameters and unknown functions. A universal-type adaptive feedback controller has been designed which keeps all the states bounded and guarantees global asymptotic stability in probability. Simulations have been conducted to show the performance of the proposed approach.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (61503133, 51374107, 51577057, 61304162), by the Major State Basic Research Development Program (973) sub-project (61325309), by the Postdoctoral Science Foundation of China under Grant 2016M592449, by the Natural Science Foundation of Hunan Province (2016JJ6043, 14JJ3110), and by the Research Foundation of the Education Bureau of Hunan Province (15C0548).

References

[1] D. Bertsimas, E. Litvinov, X.A. Sun, et al., Adaptive robust optimization for the security constrained unit commitment problem, IEEE Trans. Power Syst. 28 (1) (2013) 52–63.
[2] W. Sun, Z. Zhao, H. Gao, Saturated adaptive robust control for active suspension systems, IEEE Trans. Ind. Electron. 60 (9) (2013) 3889–3896.
[3] A. Lorca, X.A. Sun, Adaptive robust optimization with dynamic uncertainty sets for multi-period economic dispatch under significant wind, IEEE Trans. Power Syst. 30 (4) (2015) 1702–1713.
[4] Z.H. Guan, G.S. Han, J. Li, et al., Impulsive multiconsensus of second-order multiagent networks using sampled position data, IEEE Trans. Neural Netw. Learn. Syst. 26 (11) (2015) 2678–2688.
[5] M. Chen, S.S. Ge, Adaptive neural output feedback control of uncertain nonlinear systems with unknown hysteresis using disturbance observer, IEEE Trans. Ind. Electron. 62 (12) (2015) 7706–7716.
[6] Z. Zuo, J. Zhang, Y. Wang, Adaptive fault-tolerant tracking control for linear and Lipschitz nonlinear multi-agent systems, IEEE Trans. Ind. Electron. 62 (6) (2015) 3923–3931.
[7] Z.H. Guan, B. Hu, M. Chi, et al., Guaranteed performance consensus in second-order multi-agent systems with hybrid impulsive control, Automatica 50 (9) (2014) 2415–2418.
[8] H.C. Yan, F.F. Qian, H. Zhang, et al., H∞ fault detection for networked mechanical spring-mass systems with incomplete information, IEEE Trans. Ind. Electron. 63 (9) (2016) 5622–5631. http://dx.doi.org/10.1109/TIE.2016.2559454.
[9] H.C. Yan, F.F. Qian, F.W. Yang, H.B. Shi, H∞ filtering for nonlinear networked systems with randomly occurring distributed delays, missing measurements and sensor saturation, Inf. Sci. 370–371 (2016) 772–782.
[10] H. Zhang, Q.Q. Hong, H.C. Yan, et al., Event-based distributed H∞ filtering networks of 2DOF quarter-car suspension systems, IEEE Trans. Ind. Inform. (2016). http://dx.doi.org/10.1109/TII.2016.2569566.
[11] Z.W. Liu, X. Yu, Z.H. Guan, et al., Pulse-modulated intermittent control in consensus of multiagent systems, IEEE Trans. Syst. Man Cybern.: Syst. (2016). http://dx.doi.org/10.1109/TSMC.2016.2524063.
[12] M.F. Ge, Z.H. Guan, B. Hu, et al., Distributed controller-estimator for target tracking of networked robotic systems under sampled interaction, Automatica 69 (2016) 410–417.
[13] J. Zhou, C. Wen, G. Yang, Adaptive backstepping stabilization of nonlinear uncertain systems with quantized input signal, IEEE Trans. Autom. Control 59 (2) (2014) 460–464.
[14] J. Zhai, W. Zha, Global adaptive output feedback control for a class of nonlinear time-delay systems, ISA Trans. 53 (1) (2014) 2–9.
[15] J. Wu, W. Chen, D. Zhao, et al., Globally stable direct adaptive backstepping NN control for uncertain nonlinear strict-feedback systems, Neurocomputing 122 (2013) 134–147.
[16] S. Tong, Y. Li, Adaptive fuzzy output feedback tracking backstepping control of strict-feedback nonlinear systems with unknown dead zones, IEEE Trans. Fuzzy Syst. 20 (1) (2012) 168–180.
[17] Q. Zhou, P. Shi, S. Xu, et al., Observer-based adaptive neural network control for nonlinear stochastic systems with time delay, IEEE Trans. Neural Netw. Learn. Syst. 24 (1) (2013) 71–80.
[18] J. Wu, J. Li, Adaptive fuzzy control for perturbed strict-feedback nonlinear systems with predefined tracking accuracy, Nonlinear Dyn. 83 (3) (2016) 1185–1197.
[19] M. Chen, S.S. Ge, B.V.E. How, Robust adaptive neural network control for a class of uncertain MIMO nonlinear systems with input nonlinearities, IEEE Trans. Neural Netw. 21 (5) (2010) 796–812.
[20] Z. Peng, D. Wang, H. Zhang, et al., Distributed neural network control for adaptive synchronization of uncertain dynamical multiagent systems, IEEE Trans. Neural Netw. Learn. Syst. 25 (8) (2014) 1508–1519.
[21] N. Bonavita, T. Matsko, Neural network technology applied to refinery inferential analyzer problems, Hydrocarb. Eng. (1999) 33–38.
[22] H. Deng, M. Krstić, Stochastic nonlinear stabilization—I: a backstepping design, Syst. Control Lett. 32 (3) (1997) 143–150.
[23] H. Deng, M. Krstić, Stochastic nonlinear stabilization—II: inverse optimality, Syst. Control Lett. 32 (3) (1997) 151–159.
[24] S.J. Liu, J.F. Zhang, Output-feedback control of a class of stochastic nonlinear systems with linearly bounded unmeasurable states, Int. J. Robust Nonlinear Control 18 (6) (2008) 665–687.
[25] F. Gao, F. Yuan, Adaptive stabilization of stochastic nonholonomic systems with nonlinear parameterization, Appl. Math. Comput. 219 (16) (2013) 8676–8686.
[26] H.J. Fan, L.X. Han, C.Y. Wen, L. Xu, Decentralized adaptive output-feedback controller design for stochastic nonlinear interconnected systems, Automatica 48 (11) (2012) 2866–2873.
[27] M. Song, Y. Lin, Modified adaptive backstepping design method for linear systems, IET Control Theory Appl. 6 (8) (2012) 1137–1144.
[28] S. Khoo, J. Yin, Z. Man, et al., Finite-time stabilization of stochastic nonlinear systems in strict-feedback form, Automatica 49 (5) (2013) 1403–1410.
[29] Z. Wu, M. Cui, P. Shi, et al., Stability of stochastic nonlinear systems with state-dependent switching, IEEE Trans. Autom. Control 58 (8) (2013) 1904–1918.
[30] C.R. Zhao, X.J. Xie, Output feedback stabilization using small-gain method and reduced-order observer for stochastic nonlinear systems, IEEE Trans. Autom. Control 58 (2) (2013) 523–529.
[31] S. Tong, T. Wang, Y. Li, et al., Adaptive neural network output feedback control for stochastic nonlinear systems with unknown dead-zone and unmodeled dynamics, IEEE Trans. Cybern. 44 (6) (2014) 910–921.
[32] Q. Zhou, P. Shi, H. Liu, et al., Neural-network-based decentralized adaptive output-feedback control for large-scale stochastic nonlinear systems, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 42 (6) (2012) 1608–1619.
[33] H. Wang, K. Liu, X. Liu, et al., Neural-based adaptive output-feedback control for a class of nonstrict-feedback stochastic nonlinear systems, IEEE Trans. Cybern. 45 (9) (2015) 1977–1987.
[34] Y. Gao, S. Tong, Y. Li, Adaptive fuzzy backstepping output feedback control for a class of uncertain stochastic nonlinear system in pure-feedback form, Neurocomputing 122 (2013) 126–133.
[35] B. Jiang, Q. Shen, P. Shi, Neural-networked adaptive tracking control for switched nonlinear pure-feedback systems under arbitrary switching, Automatica 61 (2015) 119–125.
[36] C.L.P. Chen, Y.J. Liu, G.X. Wen, Fuzzy neural network-based adaptive control for a class of uncertain nonlinear stochastic systems, IEEE Trans. Cybern. 44 (5) (2014) 583–593.
[37] B. Niu, T. Qin, X. Fan, Adaptive neural network tracking control for a class of switched stochastic pure-feedback nonlinear systems with backlash-like hysteresis, Int. J. Syst. Sci. 47 (14) (2016) 3378–3393.
[38] H. Wang, X. Liu, K. Liu, et al., Adaptive neural control for a general class of pure-feedback stochastic nonlinear systems, Neurocomputing 135 (2014) 348–356.
[39] R. Wang, C. Chen, Robust adaptive neural control for a class of stochastic nonlinear systems, in: Proceedings of the 2010 International Conference on Computational Intelligence and Security, 2010, pp. 511–514.
[40] M. Chen, S.S. Ge, B. Ren, Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints, Automatica 47 (3) (2011) 452–465.
[41] H.E. Psillakis, A.T. Alexandridis, NN-based adaptive tracking control of uncertain nonlinear systems disturbed by unknown covariance noise, IEEE Trans. Neural Netw. 18 (6) (2007) 1830–1835.
[42] Y. Pan, Y. Liu, H. Yu, Simplified adaptive neural control of strict-feedback nonlinear systems, Neurocomputing 159 (2015) 251–256.
[43] M.M. Polycarpou, P.A. Ioannou, A robust adaptive nonlinear control design, Automatica 31 (3) (1995) 423–427.
[44] B. Chen, X.P. Liu, S.S. Ge, et al., Adaptive fuzzy control of a class of nonlinear systems by fuzzy approximation approach, IEEE Trans. Fuzzy Syst. 20 (6) (2012) 1012–1021.
[45] Y. Su, B. Chen, C. Lin, et al., Adaptive neural control for a class of stochastic nonlinear systems by backstepping approach, Inf. Sci. (2016).
[46] H. Wang, B. Chen, C. Lin, et al., Observer-based adaptive neural control for a class of nonlinear pure-feedback systems, Neurocomputing 171 (2016) 1517–1523.
[47] S. Sui, Y. Li, S. Tong, Adaptive fuzzy control design and applications of uncertain stochastic nonlinear systems with input saturation, Neurocomputing 156 (2015) 42–51.
[48] Y. Li, Y. Li, S. Tong, Adaptive fuzzy decentralized output feedback control for stochastic nonlinear large-scale systems, Neurocomputing 83 (2012) 38–46.
[49] R. Khasminskii, Stochastic Stability of Differential Equations, Springer Science & Business Media, 2011.

Chao-Yang Chen received the Ph.D. degree from the Huazhong University of Science and Technology, Wuhan, China, in 2010. He became a Lecturer in the School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, China, in 2014, and he is currently a Postdoctoral Researcher in the School of Information Science and Engineering, Central South University, Changsha. His research interests include adaptive control, nonlinear time-delay systems, networked control systems, and complex dynamical networks.

Wei-Hua Gui received the B.Eng. degree in Electrical Engineering and the M.Eng. degree in Automatic Control Engineering from Central South University, China, in 1976 and 1981, respectively. From 1986 to 1988 he was a visiting scholar at Universität-GH-Duisburg, Germany. He has been a Full Professor at Central South University since 1991. His main research interests are in modeling and optimal control of complex industrial processes, distributed robust control, and fault diagnosis.

Zhi-Hong Guan received the Ph.D. degree in automatic control theory and applications from the South China University of Technology, Guangzhou, China, in 1994. He is currently a Huazhong Leading Professor with the College of Automation, Huazhong University of Science and Technology (HUST), Wuhan, China. He was a Full Professor with the Jianghan Petroleum Institute, Jingzhou, China, in 1994. Since 1997, he has been a Full Professor with the Department of Control Science and Engineering, HUST. Since 1999, he has held visiting positions at Harvard University, Cambridge, MA, USA, Central Queensland University, Rockhampton, QLD, Australia, Loughborough University, Loughborough, UK, the National University of Singapore, Singapore, the University of Hong Kong, Hong Kong, and the City University of Hong Kong, Hong Kong. His research interests include complex systems and complex networks, impulsive and hybrid control systems, networked control systems, multiagent systems and intelligent robot control, biological systems, and smart grids.

Ru-Liang Wang received the B.A. degree in Mathematics from Guangxi Normal University, China, the M.A. degree in Mathematics from Guangxi Normal University, China, and the Ph.D. degree in Automation Science and Engineering from the South China University of Technology, China, in 1984, 1991 and 2001, respectively. Currently, he is a Full Professor with the Computer and Information Engineering College, Guangxi Teachers Education University, Nanning, China. His research interests include nonlinear control systems, adaptive control, and fuzzy control theory.

Shao-Wu Zhou received the M.Sc. degree from Central South University, Changsha, China, in 1990, and the Ph.D. degree in electrical engineering from Hunan University, Changsha, China. He is currently a Full Professor with the School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, China. His research interests are in intelligent control and robot control.