

Event-triggered adaptive neural network controller for uncertain nonlinear system

Hui Gao^a, Yongduan Song^b,∗, Changyun Wen^c

^a School of Electrical and Control Engineering, Shaanxi University of Science and Technology, China
^b School of Automation, Chongqing University, Chongqing, China
^c Nanyang Technological University, Singapore

Article info

Article history: Received 7 January 2019; Revised 2 August 2019; Accepted 4 August 2019; Available online 5 August 2019

Keywords: Event-triggered; Adaptive neural networks; Nonlinear system

Abstract

In this paper, an event-triggered adaptive controller, consisting of a basic adaptive neural network controller and an event-triggered mechanism, is developed for a class of single-input single-output high-order nonlinear systems with neural network approximation. Both static and dynamic event-triggered mechanisms are proposed in our design, without the input-to-state stability (ISS) assumption that is needed in most existing results. It is shown that the proposed methods ensure that the closed-loop system is globally stable. The minimal inter-event time interval is lower bounded by a positive number so that no Zeno behavior occurs. Finally, numerical simulations are presented to illustrate our theory. © 2019 Elsevier Inc. All rights reserved.

1. Introduction

Event-triggered control has gained considerable attention in the control community, especially in the networked control field. Traditional control methods are based on a continuous control input to guarantee system stability, which requires substantial communication resources to transfer the control signal. An event-triggered strategy can be seen as supervised control that decides whether the control signal needs to be sent to the plant based on some predefined performance condition. In recent years, many results have been reported with different event-triggered conditions; see for example [1,6,11–13,19–21].

Most of the existing event-triggered results are developed for known and linear systems, e.g. [12,20,24,27] and the references therein. In [20], a very simple but useful event-triggered scheduling strategy is proposed to ensure system stability under a condition on the measurement error caused by the event-triggered mechanism. Furthermore, the event-triggered mechanism has been studied for robust control of linear uncertain systems [23]. Nonlinear systems, which are more practical, have also attracted attention from many researchers [1,11,19,21]. Event-triggered schemes are developed in [15], based on hybrid systems tools, for the stabilization of nonlinear systems under an ISS condition. Based on the decreasing property of the Lyapunov function, results with static event-triggered conditions are established in [7,11,20]. A dynamic event-triggered condition is considered in [3] to reduce the communication burden; it allows the Lyapunov function to increase at some points while decreasing over the whole execution times. As a more realistic scenario, event-triggered design for unknown nonlinear systems is more meaningful. However, because of the complicated system structure and unknown parameters, only limited results on event-triggered control of unknown nonlinear systems are available. Three different mechanisms are proposed

∗ Corresponding author. E-mail addresses: [email protected] (H. Gao), [email protected] (Y. Song), [email protected] (C. Wen).

https://doi.org/10.1016/j.ins.2019.08.015 0020-0255/© 2019 Elsevier Inc. All rights reserved.

H. Gao, Y. Song and C. Wen / Information Sciences 506 (2020) 148–160


in [29] to address the adaptive event-triggered problem of nonlinear systems. On the other hand, neural networks have been studied for decades, and the universal approximation property of neural networks has been used in adaptive control; see for example [4,14,25,26,30]. On-line neural networks used for control and learning must respond rapidly, yet they require a large amount of calculation and learning, which costs too much time. It is therefore necessary to find a practical method to reduce the calculation burden of neural network applications. Considering the event-triggered mechanism, adaptive neural controllers with event-triggered conditions have been proposed in [17] and [16] to reduce the computational and communication burden of the closed-loop system in the continuous-time and discrete-time domains, respectively.

The objective of this paper is to design event-triggered adaptive neural controllers for unknown nonlinear systems. We begin by designing a basic adaptive controller based on the backstepping technique [9] and a neural network with low-frequency learning [31]. Then we propose a static mechanism and a dynamic mechanism for the obtained adaptive neural controller. In the static mechanism, we design an event-triggered condition based on a given fixed bound on the measurement error, which is the error between the control signals generated by the event-triggered controller and the designed controller; to ensure the stability of the system, the basic adaptive neural network controller is modified by introducing an extra term. In the dynamic mechanism, a new variable, which can be considered as a filtered value of the signal in the static mechanism, is designed, and the stability of the closed-loop system is then established. Both event-triggered mechanisms guarantee that the lower bound of the inter-event time is larger than zero, i.e., Zeno behavior is avoided.
Compared with the event-triggered mechanisms in existing neural controllers, such as those in [17] and [16], our event-triggered conditions are designed without the input-to-state stability condition and with a simpler threshold. Furthermore, the established results solve nonlinear tracking problems, which are more practical than the stabilization problem in [17]. It is noted that the ISS condition is also required in [17], where continuous-time systems are considered.

The remainder of the paper is organized as follows. Section 2 formulates the control problem. Static and dynamic event-triggered controllers are given in Section 3, followed by simulation examples illustrating our theory in Section 4. The paper is concluded in Section 5.

2. System description and problem formulation

In this section, we formulate the design problem and introduce the RBF neural network that will be used in the next section.

2.1. Problem formulation

Consider a class of nonlinear single-input single-output unknown systems modelled as

$$\begin{cases} \dot{x}_1 = F_1(\bar{x}_1) + G_1 x_2 \\ \dot{x}_2 = F_2(\bar{x}_2) + G_2 x_3 \\ \quad\vdots \\ \dot{x}_r = F_r(\bar{x}_r) + G_r u(t) \\ y(t) = x_1, \end{cases} \tag{1}$$

where $x_i \in \mathbb{R}$, $i \in \{1, 2, \ldots, r\}$, denote the states of the plant, $\bar{x}_i = [x_1, x_2, \ldots, x_i]^T$, and $u(t)$ and $y(t)$ are the control signal and the output, respectively. $F_i$, $i \in \{1, 2, \ldots, r\}$, are unknown system functions and $G_i$, $i \in \{1, 2, \ldots, r\}$, are known parameters. The desired trajectory is $y_d$.

Assumption 1. The desired trajectory $y_d$ and its derivatives up to order $r$ are piecewise continuous, known and bounded.

The objective of this paper is to design an event-triggered controller which guarantees that $y(t)$ tracks the desired trajectory $y_d$ with bounded tracking errors and that no Zeno behavior occurs in the execution times, i.e., the event is not triggered infinitely many times in a finite time interval.

2.2. Function approximation

To solve this problem, we approximate the unknown functions with an RBF neural network, which has been proved to have the universal approximation property [2]. In other words, any continuous function $f_i$ on a compact set can be written as

$$f_i(\bar{x}_i) = W_i^T \Phi_i + \epsilon_i(\bar{x}_i) \tag{2}$$

where $W_i \in \mathbb{R}^n$, $\Phi_i = [\phi_{i,1}, \phi_{i,2}, \ldots, \phi_{i,n}]^T \in \mathbb{R}^n$ and $\epsilon_i(\bar{x}_i) \in \mathbb{R}$ are the ideal neural network weights, the radial basis functions and the reconstruction error, respectively, and $n$ is the number of neural nodes. Each $\phi_{i,j}$ is a Gaussian function given by

$$\phi_{i,j} = \exp\left(-\frac{\|\bar{x}_i - b_{i,j}\|^2}{2c_{i,j}^2}\right),$$

where $b_{i,j}$ is the center of the Gaussian function peak and $c_{i,j}$ is the standard deviation. The ideal network weights can be defined as $W_i = \arg\min_{w_i \in \mathbb{R}^n}\left[\sup |w_i^T \Phi_i - f_i(\bar{x}_i)|\right]$. In this paper, we consider linear-in-the-weights neural networks to approximate the unknown functions, i.e., the parameters of the radial basis functions are kept constant and the network weights are updated through on-line adaptive learning. The assumption on the reconstruction errors is given as follows:
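As an illustration of the linear-in-the-weights structure above (a minimal sketch with our own function names, not part of the paper's design), the RBF approximator can be evaluated as:

```python
import numpy as np

def rbf_phi(x_bar, centers, widths):
    """Gaussian radial basis vector: phi_j = exp(-||x - b_j||^2 / (2 c_j^2)).
    centers: (n, d) array of centers b_j; widths: (n,) array of c_j."""
    d2 = np.sum((x_bar - centers) ** 2, axis=1)   # squared distances to each b_j
    return np.exp(-d2 / (2.0 * widths ** 2))

def rbf_output(x_bar, W, centers, widths):
    """Linear-in-the-weights approximation W^T Phi(x) of Eq. (2)."""
    return W @ rbf_phi(x_bar, centers, widths)
```

Because the basis parameters $b_{i,j}$, $c_{i,j}$ are fixed, only the weight vector enters linearly, which is what keeps the adaptive update laws of Section 3 linear in the weight estimate.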


Assumption 2 [28]. The neural network reconstruction error $\epsilon_i(\bar{x}_i)$, $i = 1, 2, \ldots, r$, satisfies

$$|\epsilon_i(\bar{x}_i)| \le \psi_{i,1} + \psi_{i,2}\,\theta_i(\bar{x}_i) \tag{3}$$

where $\psi_{i,j}$, $j = 1, 2$, are unknown non-negative constants, and $\theta_i(\bar{x}_i)$ is a known non-negative function.

Remark 1. In most existing adaptive neural control approaches, it is assumed that the neural network errors are bounded by unknown constants, which is only true when all input signals of the neural network lie in a compact set; its validity cannot be verified before the stability of the controlled system is established. In this paper, Assumption 2 bounds the neural network errors by a state-dependent bound while modelling the unknown system function. Therefore, Assumption 2 is more reasonable and practical than the existing assumption.

3. Event-triggered neural network with adaptive backstepping

In this section, we first present a basic adaptive backstepping controller with a low-frequency learning neural network. Then a static event-triggered mechanism and a dynamic event-triggered mechanism are introduced based on the proposed adaptive neural network control method.

3.1. Adaptive backstepping control with low-frequency learning neural network

To facilitate the backstepping control, we define the new variables $z_i$ as

$$z_1 = x_1 - y_d, \quad z_2 = x_2 - \alpha_1, \quad \ldots, \quad z_r = x_r - \alpha_{r-1} \tag{4}$$

where $\alpha_i$ is the virtual control designed in the backstepping procedure. Then we have

$$\begin{cases} \dot{z}_1 = F_1(\bar{x}_1) + G_1(z_2 + \alpha_1) - \dot{y}_d \\ \dot{z}_2 = F_2(\bar{x}_2) + G_2(z_3 + \alpha_2) - \dot{\alpha}_1 \\ \quad\vdots \\ \dot{z}_r = F_r(\bar{x}_r) + G_r u(t) - \dot{\alpha}_{r-1} \end{cases} \tag{5}$$
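The change of coordinates (4) can be sketched in code as follows (an illustration with our own names, not the paper's code):

```python
def tracking_errors(x, alphas, yd):
    """Backstepping error coordinates of Eq. (4):
    z_1 = x_1 - y_d and z_i = x_i - alpha_{i-1} for i = 2..r.
    alphas holds the virtual controls [alpha_1, ..., alpha_{r-1}]."""
    z = [x[0] - yd]
    for i in range(1, len(x)):
        z.append(x[i] - alphas[i - 1])
    return z
```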

The design of the basic adaptive backstepping controller with the low-frequency learning neural network follows the standard recursive procedure, summarised as follows.

Step 1. Let $F_1 = w_1^T\phi_1 + \epsilon_1(\bar{x}_1)$ based on the RBF neural network approximation (2), where $w_1 \in \mathbb{R}^n$, $\phi_1 \in \mathbb{R}^n$ and $\epsilon_1 \in \mathbb{R}$ are the ideal weight vector, the radial basis function vector and the network reconstruction error, respectively, and $|\epsilon_1| \le \psi_{1,1} + \psi_{1,2}\theta_1$, where $\psi_{1,1}$ and $\psi_{1,2}$ are unknown positive constants and $\theta_1$ is a known function as stated in Assumption 2. Then from (4) we have

$$\dot{z}_1 = \dot{x}_1 - \dot{y}_d = w_1^T\phi_1 + \epsilon_1(\bar{x}_1) + G_1 x_2 - \dot{y}_d \tag{6}$$

The virtual control $\alpha_1 = x_2 - z_2$ is given by

$$\alpha_1 = -G_1^{-1}\left[ z_1 + \hat{w}_1^T\phi_1 - \dot{y}_d + \frac{z_1}{|z_1| + \delta_1\upsilon(t)}\left(\hat{\psi}_{1,1} + \hat{\psi}_{1,2}\theta_1(\bar{x}_1)\right) \right] \tag{7}$$

where $\upsilon(t) \in L_1 \cap L_2[0, \infty)$ is a design function [18], i.e.,

$$0 < \int_0^t \upsilon^2\, dt \le \bar{\upsilon}_1 < +\infty, \quad \forall t \in [0, +\infty) \tag{8}$$

$$0 < \int_0^t \upsilon\, dt \le \bar{\upsilon}_2 < +\infty, \quad \forall t \in [0, +\infty) \tag{9}$$

$\hat{w}_1$, $\hat{\psi}_{1,1}$ and $\hat{\psi}_{1,2}$ are the estimates of $w_1$, $\psi_{1,1}$ and $\psi_{1,2}$, respectively, and $\bar{\upsilon}_1$ and $\bar{\upsilon}_2$ are positive bounded constants. $\delta_1$ is given by

$$\delta_1 = \varpi/(1 + \theta_1(\bar{x}_1)) \tag{10}$$

with ϖ being a small positive constant. Now we design

$$\begin{aligned} \dot{\hat{w}}_1 &= \Gamma_{w_1}\left[\phi_1 z_1 - \mu(\hat{w}_1 - \hat{w}_{1,f}) - k_1\hat{w}_1\right] \\ \dot{\hat{w}}_{1,f} &= \Gamma_{w_{1,f}}\left[\hat{w}_1 - \hat{w}_{1,f} - k_{1,f}\hat{w}_{1,f}\right] \\ \dot{\hat{\psi}}_{1,1} &= \frac{1}{\gamma_{1,1}}\left[\frac{z_1^2}{|z_1| + \delta_1\upsilon(t)} - k_{1,1}\hat{\psi}_{1,1}\right] \\ \dot{\hat{\psi}}_{1,2} &= \frac{1}{\gamma_{1,2}}\left[\frac{z_1^2}{|z_1| + \delta_1\upsilon(t)}\theta_1(\bar{x}_1) - k_{1,2}\hat{\psi}_{1,2}\right] \end{aligned} \tag{11}$$

where $\mu$, $\gamma_{1,1}$, $\gamma_{1,2}$, $k_1$, $k_{1,f}$, $k_{1,1}$ and $k_{1,2}$ are positive constants, and $\Gamma_{w_1} \in \mathbb{R}^{n\times n}$, $\Gamma_{w_{1,f}} \in \mathbb{R}^{n\times n}$ are symmetric positive definite matrices. $\hat{w}_{1,f}$ is a filtered value of $\hat{w}_1$ obtained through a low-pass filter; using the filtered value $\hat{w}_{1,f}$ reduces the learning rate of the update law of $\hat{w}_1$.

Remark 2. A high-gain learning rate in adaptive control leads to high-frequency oscillations of the control output. The low-pass filtered parameter is introduced to reduce the oscillations of the system response, as in [31].

Define a Lyapunov function as

$$V_1 = \frac{1}{2}z_1^2 + \frac{1}{2}\tilde{w}_1^T\Gamma_{w_1}^{-1}\tilde{w}_1 + \frac{1}{2}\tilde{w}_{1,f}^T\Gamma_{w_{1,f}}^{-1}\tilde{w}_{1,f} + \frac{1}{2}\gamma_{1,1}\tilde{\psi}_{1,1}^2 + \frac{1}{2}\gamma_{1,2}\tilde{\psi}_{1,2}^2 \tag{12}$$

where $\tilde{w}_1 = w_1 - \hat{w}_1$, $\tilde{w}_{1,f} = w_1 - \hat{w}_{1,f}$, $\tilde{\psi}_{1,1} = \psi_{1,1} - \hat{\psi}_{1,1}$ and $\tilde{\psi}_{1,2} = \psi_{1,2} - \hat{\psi}_{1,2}$. The derivative of $V_1$ along the trajectory (6) is given by

$$\begin{aligned} \dot{V}_1 &= z_1\dot{z}_1 + \tilde{w}_1^T\Gamma_{w_1}^{-1}\dot{\tilde{w}}_1 + \tilde{w}_{1,f}^T\Gamma_{w_{1,f}}^{-1}\dot{\tilde{w}}_{1,f} + \gamma_{1,1}\tilde{\psi}_{1,1}\dot{\tilde{\psi}}_{1,1} + \gamma_{1,2}\tilde{\psi}_{1,2}\dot{\tilde{\psi}}_{1,2} \\ &= z_1\left(w_1^T\phi_1 + \epsilon_1(\bar{x}_1) + G_1(z_2 + \alpha_1) - \dot{y}_d\right) - \tilde{w}_1^T\left[\phi_1 z_1 - \mu(\hat{w}_1 - \hat{w}_{1,f}) - k_1\hat{w}_1\right] \\ &\quad - \tilde{w}_{1,f}^T\left[\hat{w}_1 - \hat{w}_{1,f} - k_{1,f}\hat{w}_{1,f}\right] - \tilde{\psi}_{1,1}\left[\frac{z_1^2}{|z_1| + \delta_1\upsilon(t)} - k_{1,1}\hat{\psi}_{1,1}\right] - \tilde{\psi}_{1,2}\left[\frac{z_1^2}{|z_1| + \delta_1\upsilon(t)}\theta_1(\bar{x}_1) - k_{1,2}\hat{\psi}_{1,2}\right] \\ &\le -z_1^2 - \mu(\hat{w}_1 - \hat{w}_{1,f})^T(\hat{w}_1 - \hat{w}_{1,f}) + k_1\tilde{w}_1^T\hat{w}_1 + k_{1,f}\tilde{w}_{1,f}^T\hat{w}_{1,f} \\ &\quad + k_{1,1}\tilde{\psi}_{1,1}\hat{\psi}_{1,1} + k_{1,2}\tilde{\psi}_{1,2}\hat{\psi}_{1,2} + (\psi_{1,1} + \psi_{1,2})\varpi\upsilon(t) + z_1 G_1 z_2 \\ &\le -z_1^2 - k_1\tilde{w}_1^T\tilde{w}_1 - k_{1,f}\tilde{w}_{1,f}^T\tilde{w}_{1,f} - k_{1,1}\tilde{\psi}_{1,1}^2 - k_{1,2}\tilde{\psi}_{1,2}^2 + (k_1 + k_{1,f})w_1^T w_1 \\ &\quad + k_{1,1}\psi_{1,1}^2 + k_{1,2}\psi_{1,2}^2 + (\psi_{1,1} + \psi_{1,2})\varpi\upsilon(t) + z_1 G_1 z_2 \\ &\le -\beta_1 V_1 + \eta_1 + z_1 G_1 z_2 \end{aligned} \tag{13}$$

where $\beta_1$ is a positive constant and $\eta_1 = (k_1 + k_{1,f})w_1^T w_1 + k_{1,1}\psi_{1,1}^2 + k_{1,2}\psi_{1,2}^2 + (\psi_{1,1} + \psi_{1,2})\varpi\upsilon(t)$.

Step j ($2 \le j < r$). First, we define

$$\varphi_{j-1}(\bar{z}_{j-1}, \hat{\bar{w}}_{j-1}, \bar{y}_d^{(j)}) = \dot{\alpha}_{j-1} \tag{14}$$

$$f_j(\bar{x}_j) - \varphi_{j-1}(\bar{z}_{j-1}, \hat{\bar{w}}_{j-1}, \bar{y}_d^{(j)}) = w_j^T\phi_j + \epsilon_j \tag{15}$$

where $\bar{z}_j = [z_1, \ldots, z_j]^T$, $\hat{\bar{w}}_j = [\hat{w}_1^T, \ldots, \hat{w}_j^T]^T$ and $\bar{y}_d^{(j)} = [y_d^{(1)}, \ldots, y_d^{(j)}]^T$. $w_j \in \mathbb{R}^n$, $\phi_j \in \mathbb{R}^n$ and $\epsilon_j \in \mathbb{R}$ are the ideal weight vector, the radial basis function vector and the network reconstruction error, respectively, and $|\epsilon_j| \le \psi_{j,1} + \psi_{j,2}\theta_j$ based on Assumption 2. Then

$$\dot{z}_j = \dot{x}_j - \dot{\alpha}_{j-1} = w_j^T\phi_j + \epsilon_j(\bar{x}_j) + G_j x_{j+1} \tag{16}$$

In the jth step, we define the following Lyapunov function

$$V_j = V_{j-1} + \frac{1}{2}z_j^2 + \frac{1}{2}\tilde{w}_j^T\Gamma_{w_j}^{-1}\tilde{w}_j + \frac{1}{2}\tilde{w}_{j,f}^T\Gamma_{w_{j,f}}^{-1}\tilde{w}_{j,f} + \frac{1}{2}\gamma_{j,1}\tilde{\psi}_{j,1}^2 + \frac{1}{2}\gamma_{j,2}\tilde{\psi}_{j,2}^2 \tag{17}$$

where $\tilde{w}_j = w_j - \hat{w}_j$, $\tilde{w}_{j,f} = w_j - \hat{w}_{j,f}$, $\tilde{\psi}_{j,1} = \psi_{j,1} - \hat{\psi}_{j,1}$ and $\tilde{\psi}_{j,2} = \psi_{j,2} - \hat{\psi}_{j,2}$. $\Gamma_{w_j} \in \mathbb{R}^{n\times n}$ and $\Gamma_{w_{j,f}} \in \mathbb{R}^{n\times n}$ are symmetric positive definite matrices, and $\gamma_{j,1}$ and $\gamma_{j,2}$ are positive constants. The adaptive laws are given as

$$\begin{aligned} \dot{\hat{w}}_j &= \Gamma_{w_j}\left[\phi_j z_j - \mu(\hat{w}_j - \hat{w}_{j,f}) - k_j\hat{w}_j\right] \\ \dot{\hat{w}}_{j,f} &= \Gamma_{w_{j,f}}\left[\hat{w}_j - \hat{w}_{j,f} - k_{j,f}\hat{w}_{j,f}\right] \\ \dot{\hat{\psi}}_{j,1} &= \frac{1}{\gamma_{j,1}}\left[\frac{z_j^2}{|z_j| + \delta_j\upsilon(t)} - k_{j,1}\hat{\psi}_{j,1}\right] \\ \dot{\hat{\psi}}_{j,2} &= \frac{1}{\gamma_{j,2}}\left[\frac{z_j^2}{|z_j| + \delta_j\upsilon(t)}\theta_j(\bar{x}_j) - k_{j,2}\hat{\psi}_{j,2}\right] \end{aligned} \tag{18}$$

where $k_j$, $k_{j,f}$, $k_{j,1}$ and $k_{j,2}$ are positive constants. Following an analysis similar to Step 1, we obtain

$$\dot{V}_j \le -\beta_j V_j + \sum_{i=1}^{j}\eta_i + z_j G_j z_{j+1} \tag{19}$$


by choosing the following virtual control law

$$\alpha_j = -G_j^{-1}\left[ G_{j-1}z_{j-1} + z_j + \hat{w}_j^T\phi_j + \frac{z_j}{|z_j| + \delta_j\upsilon(t)}\left(\hat{\psi}_{j,1} + \hat{\psi}_{j,2}\theta_j(\bar{x}_j)\right) \right] \tag{20}$$

with $\delta_j = \varpi/(1 + \theta_j(\bar{x}_j))$ and $\eta_i = (k_i + k_{i,f})w_i^T w_i + k_{i,1}\psi_{i,1}^2 + k_{i,2}\psi_{i,2}^2 + (\psi_{i,1} + \psi_{i,2})\varpi\upsilon(t)$.

Step r. In the last step, we choose the Lyapunov function as

$$V_r = V_{r-1} + \frac{1}{2}z_r^2 + \frac{1}{2}\tilde{w}_r^T\Gamma_{w_r}^{-1}\tilde{w}_r + \frac{1}{2}\tilde{w}_{r,f}^T\Gamma_{w_{r,f}}^{-1}\tilde{w}_{r,f} + \frac{1}{2}\gamma_{r,1}\tilde{\psi}_{r,1}^2 + \frac{1}{2}\gamma_{r,2}\tilde{\psi}_{r,2}^2 \tag{21}$$

with

$$\varphi_{r-1}(\bar{z}_{r-1}, \hat{\bar{w}}_{r-1}, \bar{y}_d^{(r)}) = \dot{\alpha}_{r-1} \tag{22}$$

$$f_r(\bar{x}_r) - \varphi_{r-1}(\bar{z}_{r-1}, \hat{\bar{w}}_{r-1}, \bar{y}_d^{(r)}) = w_r^T\phi_r + \epsilon_r \tag{23}$$

$$\dot{z}_r = w_r^T\phi_r + \epsilon_r(\bar{x}_r) + G_r u(t) \tag{24}$$

$$\delta_r = \varpi/(1 + \theta_r(\bar{x}_r)) \tag{25}$$

where $|\epsilon_r| \le \psi_{r,1} + \psi_{r,2}\theta_r$ as in Assumption 2, $\tilde{w}_r = w_r - \hat{w}_r$, $\tilde{w}_{r,f} = w_r - \hat{w}_{r,f}$, $\tilde{\psi}_{r,1} = \psi_{r,1} - \hat{\psi}_{r,1}$ and $\tilde{\psi}_{r,2} = \psi_{r,2} - \hat{\psi}_{r,2}$. $\Gamma_{w_r} \in \mathbb{R}^{n\times n}$ and $\Gamma_{w_{r,f}} \in \mathbb{R}^{n\times n}$ are symmetric positive definite matrices, and $\gamma_{r,1}$ and $\gamma_{r,2}$ are positive constants. The adaptive laws are

$$\begin{aligned} \dot{\hat{w}}_r &= \Gamma_{w_r}\left[\phi_r z_r - \mu(\hat{w}_r - \hat{w}_{r,f}) - k_r\hat{w}_r\right] \\ \dot{\hat{w}}_{r,f} &= \Gamma_{w_{r,f}}\left[\hat{w}_r - \hat{w}_{r,f} - k_{r,f}\hat{w}_{r,f}\right] \\ \dot{\hat{\psi}}_{r,1} &= \frac{1}{\gamma_{r,1}}\left[\frac{z_r^2}{|z_r| + \delta_r\upsilon(t)} - k_{r,1}\hat{\psi}_{r,1}\right] \\ \dot{\hat{\psi}}_{r,2} &= \frac{1}{\gamma_{r,2}}\left[\frac{z_r^2}{|z_r| + \delta_r\upsilon(t)}\theta_r(\bar{x}_r) - k_{r,2}\hat{\psi}_{r,2}\right] \end{aligned} \tag{26}$$

where $k_r$, $k_{r,f}$, $k_{r,1}$ and $k_{r,2}$ are positive constants. The basic adaptive control $u_b$ is designed as

$$u_b(t) = \alpha_r = -G_r^{-1}\left[ G_{r-1}z_{r-1} + z_r + \hat{w}_r^T\phi_r + \frac{z_r}{|z_r| + \delta_r\upsilon(t)}\left(\hat{\psi}_{r,1} + \hat{\psi}_{r,2}\theta_r(\bar{x}_r)\right) \right] \tag{27}$$
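A minimal Euler-discretization sketch of the final-step update laws (26) and the basic control (27) follows; all function and gain names are our own illustrative choices, and scalar gains stand in for the matrices $\Gamma_{w_r}$ and $\Gamma_{w_{r,f}}$:

```python
import numpy as np

def step_r(z_r, z_rm1, phi_r, G_r, G_rm1, w_hat, w_hat_f,
           psi1_hat, psi2_hat, theta_r, delta_r, upsilon,
           mu=1.0, k=1.0, kf=1.0, k1=1.0, k2=1.0,
           g1=1.0, g2=1.0, gw=1.0, gwf=1.0, dt=1e-3):
    """One Euler step of the adaptive laws (26), then the basic control (27)."""
    denom = abs(z_r) + delta_r * upsilon
    # weight update with low-frequency learning (filtered copy w_hat_f)
    w_hat = w_hat + dt * gw * (phi_r * z_r - mu * (w_hat - w_hat_f) - k * w_hat)
    w_hat_f = w_hat_f + dt * gwf * (w_hat - w_hat_f - kf * w_hat_f)
    # estimates of the reconstruction-error bound parameters
    psi1_hat = psi1_hat + dt * (z_r ** 2 / denom - k1 * psi1_hat) / g1
    psi2_hat = psi2_hat + dt * (z_r ** 2 * theta_r / denom - k2 * psi2_hat) / g2
    # basic adaptive control (27)
    u_b = -(1.0 / G_r) * (G_rm1 * z_rm1 + z_r + w_hat @ phi_r
                          + z_r * (psi1_hat + psi2_hat * theta_r) / denom)
    return w_hat, w_hat_f, psi1_hat, psi2_hat, u_b
```

This is only a sanity-check discretization; the paper's analysis is in continuous time.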

If $u(t) = u_b(t)$, this together with (21) and (26) yields

$$\dot{V}_r \le -\beta_r V_r + \sum_{i=1}^{r}\eta_i \le -\beta_r V_r + \Delta \tag{28}$$

where $\eta_i = (k_i + k_{i,f})w_i^T w_i + k_{i,1}\psi_{i,1}^2 + k_{i,2}\psi_{i,2}^2 + (\psi_{i,1} + \psi_{i,2})\varpi\upsilon(t)$, $\Delta = \sum_{i=1}^{r}\eta_i$ and $\beta_r$ is a positive constant. This shows that the basic adaptive controller ensures that all the signals in the closed-loop system are bounded.

Remark 3. Global stability means that the domain of attraction is the entire space $\mathbb{R}^n$, that is, there is no constraint on the initial conditions. For an arbitrary initial system state $\bar{x}_i(0)$ and a bounded reference trajectory $y_d$, we have $\|\bar{z}_i\| \le \varrho$ as $t \to \infty$, where $\bar{z}_i$ is the system tracking error and $\varrho$ is a small positive constant. In our controller design, the compact set condition is relaxed, so the whole system is globally stable.

3.2. Event-triggered adaptive neural controller with static threshold

To obtain a suitable event-triggered controller, the basic adaptive neural network control $u_b$ in (27) is modified to $u_s$ as follows

$$u_s = \alpha_r - G_r^{-1}\bar{B}\tanh\left(\frac{z_r\bar{B}}{\xi}\right) \tag{29}$$

where $\bar{B}$ and $\xi$ are positive parameters, with $\bar{B}$ called the static threshold. The proposed event-triggered controller on $t \in [t_k, t_{k+1})$ is then given as

$$u(t) = u_s(t_k) \tag{30}$$


where the event-triggered condition is designed as

$$t_k = \min\{t \ge t_{k-1} : \|u_s(t) - u(t)\| \ge \bar{B}\}, \quad t_0 = 0 \tag{31}$$

where $\|u_s(t) - u(t)\|$ denotes the measurement error. As shown in (29), the term $-G_r^{-1}\bar{B}\tanh(z_r\bar{B}/\xi)$ is introduced to compensate for the measurement error during the inter-event times.
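A sketch of how the static mechanism (29)–(31) could be realized in a sampled simulation loop (an illustration with our own function names, not the paper's code):

```python
import math

def u_static(alpha_r, z_r, G_r, B_bar, xi):
    """Compensated control signal u_s of Eq. (29)."""
    return alpha_r - (B_bar / G_r) * math.tanh(z_r * B_bar / xi)

def static_trigger(u_s_now, u_held, B_bar):
    """Static condition (31): transmit when |u_s(t) - u(t)| >= B_bar.
    Returns the (possibly updated) held control and an event flag."""
    if abs(u_s_now - u_held) >= B_bar:
        return u_s_now, True   # event: plant receives the new value
    return u_held, False       # no event: zero-order hold
```

Between events the plant input is a zero-order hold of the last transmitted value, which is exactly what makes the compensation term in (29) necessary.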

Theorem 1. Consider the high-order uncertain system (1). The designed event-triggered neural network controller (27), (29) and (30), with the event-triggered mechanism (31) and the adaptive laws (11), (18) and (26), guarantees that all the signals are bounded and the whole system is globally stable. In addition, no Zeno behavior occurs in the execution times.

Proof. Consider $t \in [t_k, t_{k+1})$. From (31), we have $\|u_s(t) - u(t)\| < \bar{B}$. So

$$u(t) = u_s(t) - \lambda(t)\bar{B} \tag{32}$$

where $\lambda(t)$ is time-varying with $\lambda(t_k) = 0$, $\lambda(t_{k+1}) = 1$ and $|\lambda(t)| \le 1$. Now, considering the Lyapunov function (21), we obtain

$$\dot{V}_r \le \dot{V}_{r-1} + z_r\dot{z}_r - \tilde{w}_r^T\Gamma_{w_r}^{-1}\dot{\hat{w}}_r - \tilde{w}_{r,f}^T\Gamma_{w_{r,f}}^{-1}\dot{\hat{w}}_{r,f} - \gamma_{r,1}\tilde{\psi}_{r,1}\dot{\hat{\psi}}_{r,1} - \gamma_{r,2}\tilde{\psi}_{r,2}\dot{\hat{\psi}}_{r,2} \tag{33}$$

Based on (19) and (24),

$$\begin{aligned} \dot{V}_r \le& -\beta_{r-1}V_{r-1} + \sum_{i=1}^{r-1}\eta_i + z_{r-1}G_{r-1}z_r + z_r\left(G_r u(t) + w_r^T\phi_r + \epsilon_r\right) - \tilde{w}_r^T\Gamma_{w_r}^{-1}\dot{\hat{w}}_r \\ &- \tilde{w}_{r,f}^T\Gamma_{w_{r,f}}^{-1}\dot{\hat{w}}_{r,f} - \gamma_{r,1}\tilde{\psi}_{r,1}\dot{\hat{\psi}}_{r,1} - \gamma_{r,2}\tilde{\psi}_{r,2}\dot{\hat{\psi}}_{r,2} \end{aligned} \tag{34}$$

From (27), (29) and (32), one has

$$\begin{aligned} \dot{V}_r \le& -\beta_{r-1}V_{r-1} + \sum_{i=1}^{r-1}\eta_i + z_{r-1}G_{r-1}z_r + z_r\left(G_r u_s(t) - \lambda(t)\bar{B} + w_r^T\phi_r + \epsilon_r\right) \\ &- \tilde{w}_r^T\Gamma_{w_r}^{-1}\dot{\hat{w}}_r - \tilde{w}_{r,f}^T\Gamma_{w_{r,f}}^{-1}\dot{\hat{w}}_{r,f} - \gamma_{r,1}\tilde{\psi}_{r,1}\dot{\hat{\psi}}_{r,1} - \gamma_{r,2}\tilde{\psi}_{r,2}\dot{\hat{\psi}}_{r,2} \\ \le& -\beta_{r-1}V_{r-1} + \sum_{i=1}^{r-1}\eta_i + z_{r-1}G_{r-1}z_r - \tilde{w}_r^T\Gamma_{w_r}^{-1}\dot{\hat{w}}_r - \tilde{w}_{r,f}^T\Gamma_{w_{r,f}}^{-1}\dot{\hat{w}}_{r,f} \\ &- \gamma_{r,1}\tilde{\psi}_{r,1}\dot{\hat{\psi}}_{r,1} - \gamma_{r,2}\tilde{\psi}_{r,2}\dot{\hat{\psi}}_{r,2} + z_r\left(w_r^T\phi_r + \epsilon_r\right) \\ &+ z_r\left(-G_{r-1}z_{r-1} - z_r - \hat{w}_r^T\phi_r - \frac{z_r}{|z_r| + \delta_r\upsilon(t)}\left(\hat{\psi}_{r,1} + \hat{\psi}_{r,2}\theta_r(\bar{x}_r)\right) - \bar{B}\tanh\left(\frac{z_r\bar{B}}{\xi}\right) - \lambda(t)\bar{B}\right) \\ \le& -\beta_r V_r + \sum_{i=1}^{r}\eta_i + z_r\left(-\bar{B}\tanh\left(\frac{z_r\bar{B}}{\xi}\right) - \lambda(t)\bar{B}\right) \end{aligned} \tag{35}$$

From the property of the hyperbolic tangent function, we know that $0 \le |z_r|\bar{B} - z_r\bar{B}\tanh(z_r\bar{B}/\xi) \le 0.2785\xi$ [14]. Then

$$\dot{V}_r \le -\beta_r V_r + \sum_{i=1}^{r}\eta_i + 0.2785\xi = -\beta_r V_r + \Delta_s \tag{36}$$

where $\Delta_s = \sum_{i=1}^{r}\eta_i + 0.2785\xi$. The event-triggered controller we designed is obviously a discrete-event system. Based on [4] and [5], we define

$$m = \begin{bmatrix} z \\ u \\ \alpha \end{bmatrix}, \quad z \in \mathbb{R}^r, \; u \in \mathbb{R}, \quad \alpha = \begin{cases} 0, & t = t_k \\ 1, & t \in (t_{k-1}, t_k) \end{cases}$$

Flow occurs when $t \in (t_{k-1}, t_k)$; during flow, the control input $u(t)$ is constant. The flow set $C$ and the flow map $\tilde{f}$ can be taken to be

$$C = \mathbb{R}^r \times \mathbb{R} \times (t_{k-1}, t_k), \quad \tilde{f}(m) = \begin{bmatrix} f(z, u) \\ 0 \\ 1 \end{bmatrix} \tag{37}$$

Jumps occur when the event-triggered condition is satisfied at $t = t_k$; at jumps, the control input $u(t)$ is updated according to (29) and (30). Even though the input $u(t)$ to system (1) is discontinuous with step changes, no impulse is involved and thus all the states of the system remain unchanged. The jump set $D$ and the jump map $g(m)$ can be taken to be

$$D = \mathbb{R}^r \times \mathbb{R} \times \{t_k\}, \quad g(m) = \begin{bmatrix} z \\ u_s(t_k) \\ 0 \end{bmatrix} \tag{38}$$


As shown in (35) and (36), the function $V_r$ converges to a set during flows. Now we analyze $V_r$ at jumps. Note that at $t = t_k$,

$$\Delta V_r(t_k) = V_r(t_k^+, z(t_k^+)) - V_r(t_k, z(t_k)) \tag{39}$$

As mentioned above, based on (29) and (30), all the variables in (21) remain unchanged at jumps, so

$$\Delta V_r(t_k) = 0 \tag{40}$$

i.e., the Lyapunov function is non-increasing at jumps. Thus all the signals are bounded for all time, and the tracking error converges to a set depending on $\Delta_s$ in (36). From (36), for every initial state $\bar{x}_i$ we have $\|\bar{z}_i\| \le \varrho$ as $t \to \infty$, where $\varrho$ is a small positive constant; i.e., the compact-set condition on the initial states is relaxed, so we conclude that the whole system is globally stable.

Now let the minimal inter-execution time be $\tau$. Based on the above analysis, we have

$$\|u_s(t_{k+1}) - u(t)\| \ge \bar{B} \tag{41}$$

Because

$$\dot{u}_s = \dot{u}_b - G_r^{-1}\frac{\bar{B}^2\dot{z}_r}{\xi\cosh^2(z_r\bar{B}/\xi)},$$

we have $\|\dot{u}_s\| \le B$ based on the boundedness of all signals of the system, where $B$ is a positive number. So the event-triggered sequence $t_k$ satisfies $t_{k+1} - t_k \ge \tau = \bar{B}/B > 0$, which means that the event-triggered system cannot make an infinite number of jumps in a finite amount of time, i.e., Zeno behavior is avoided in our design.

Remark 4. The event-triggered condition (31) is designed based on the difference between the current $u_s(t)$ and $u(t)$, where $\bar{B}$ is a positive constant. When $\bar{B}$ is smaller, the event-triggered condition is satisfied in a shorter period of time, which means that the event-triggered system approximates a continuous system and the tracking bound can be reduced. From (36), a bigger $\beta_r$ and a smaller $\Delta_s$ also reduce the tracking bound. Note that $u(t)$ is constant for all $t \in [t_k, t_{k+1})$, so the event-triggered condition (31) is mainly influenced by $u_s(t)$. From (27) and (29), $u_s(t)$ is determined by the system variables $z_i$, the neural network approximation $\hat{w}_r^T\phi_r$ and the estimated value $\hat{\psi}_{r,1} + \hat{\psi}_{r,2}\theta_r(\bar{x}_r)$ of the bound on $\epsilon_i(\bar{x}_i)$. A larger reconstruction error therefore makes $u_s(t)$ change more, which implies that the event-triggered condition is satisfied in a shorter period of time.

Remark 5. Compared with traditional real-time control, such as the direct application of the basic control (27), the event-triggered controller (29)–(31) saves communication resources.

Remark 6. ISS with respect to the measurement error is normally required in most existing literature, including [17]. Such a condition is difficult to check or may not hold [13,21]. This condition is no longer needed with our proposed scheme.

Remark 7. In [17], an event-triggered neural network controller is designed based on a nonlinear impulsive dynamical model. It requires Lipschitz continuity of the activation function and a dead-zone operator, which are not needed in our event-triggered condition (31). Thus, our design is simpler.
Furthermore, our event-triggered mechanism is designed for solving the tracking problem instead of only the stabilization problem considered in [17].

3.3. Event-triggered adaptive neural controller with dynamic event-triggered condition

First, $u_s$ in (29) is further revised as $u_d$:

$$u_d = \alpha_r - G_r^{-1}(\bar{B} + \rho_r)\tanh\left(\frac{z_r(\bar{B} + \rho_r)}{\xi}\right) \tag{42}$$

where ρ r is a dynamic variable deﬁned as

$$\dot{\rho}_r = -\eta_r\rho_r + \left(\bar{B} - \|u_d(t) - u(t)\|\right) \tag{43}$$

with $\eta_r$ being a positive constant. $\rho_r$ can be regarded as a filtered value of $\bar{B} - \|u_d(t) - u(t)\|$. Defining $t_0 = 0$, the dynamic event-triggered mechanism is designed as

$$t_k = \min\left\{t \ge t_{k-1} : \rho_r + \left(\bar{B} - \|u_d(t) - u(t)\|\right) \le 0\right\} \tag{44}$$

and the proposed event-triggered control is now given by

$$u(t) = u_d(t_k), \quad t \in [t_k, t_{k+1}) \tag{45}$$
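The dynamic mechanism (42)–(45) can be sketched as a per-sample update with an Euler-discretized filter (an illustration under our own naming, not the paper's code):

```python
def dynamic_trigger_step(u_d_now, u_held, rho, B_bar, eta_r, dt):
    """One sample of the dynamic mechanism: an event fires when
    rho + (B_bar - |u_d - u|) <= 0 (Eq. (44)); rho then evolves by
    rho_dot = -eta_r*rho + (B_bar - |u_d - u|) (Eq. (43))."""
    err = B_bar - abs(u_d_now - u_held)
    fired = rho + err <= 0.0
    if fired:
        u_held = u_d_now          # transmit the new control value
        err = B_bar               # after the update, u_d - u = 0
    rho = rho + dt * (-eta_r * rho + err)
    return u_held, rho, fired
```

Compared with the static rule, the extra slack $\rho_r$ lets the threshold grow while the measurement error stays small, postponing events when the control signal is quiet.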

Remark 8. In [3], a dynamic mechanism is proposed for a known linear system. We extend the idea to unknown nonlinear systems, in which the resulting controller contains the estimates of the unknown parameters. On the other hand, the dynamic event-triggered mechanism is an extension of the static mechanism with the dynamically filtered variable $\rho_r$.


Lemma 1. For the system (43) with $\rho_r(0) = 0$ and the event-triggered condition (44), we have $\rho_r + (\bar{B} - \|u_d(t_{k+1}) - u(t)\|) \ge 0$ and $0 \le \rho_r \le \bar{B}/\eta_r$ for all $t \in [0, t_\infty)$.

Proof. From (44), we know that if $\rho_r + (\bar{B} - \|u_d(t^-) - u(t)\|) \le 0$ at time $t$, then $t_{k+1} = t$ and $u(t) = u_d(t_{k+1})$, so $\rho_r + (\bar{B} - \|u_d(t) - u(t)\|) = \rho_r + \bar{B} \ge 0$. If $\rho_r + (\bar{B} - \|u_d(t^-) - u(t)\|) > 0$, we set $u(t) = u(t^-)$, and then $\rho_r + (\bar{B} - \|u_d(t) - u(t)\|) \ge 0$. So $(\bar{B} - \|u_d(t) - u(t)\|) \ge -\rho_r$. Thus

$$\dot{\rho}_r \ge -\eta_r\rho_r - \rho_r \tag{46}$$

By the Comparison Lemma [8], we must have $\rho_r \ge 0$.

Now we prove $\rho_r \le \bar{B}/\eta_r$ by contradiction. Assume $\rho_r > \bar{B}/\eta_r$; then (43) gives $\dot{\rho}_r = -\eta_r\rho_r + (\bar{B} - \|u_d(t) - u(t)\|) < 0$, that is, the derivative of $\rho_r$ is negative whenever $\rho_r$ exceeds $\bar{B}/\eta_r$. Since $\rho_r(0) = 0$, this implies $\rho_r \le \bar{B}/\eta_r$, which contradicts the assumption $\rho_r > \bar{B}/\eta_r$. Thus $\rho_r$ is bounded by $0 \le \rho_r \le \bar{B}/\eta_r$.
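Lemma 1's bound can be checked numerically with a simple Euler simulation of (43), treating the input $s(t) = \bar{B} - \|u_d - u\|$ as a signal kept in $[0, \bar{B}]$ by the trigger rule (a sanity-check sketch, not part of the paper):

```python
def simulate_rho(eta_r, B_bar, s_fn, dt=1e-3, T=5.0):
    """Euler integration of rho_dot = -eta_r*rho + s(t) with rho(0) = 0."""
    rho, t, history = 0.0, 0.0, []
    while t < T:
        rho += dt * (-eta_r * rho + s_fn(t))
        history.append(rho)
        t += dt
    return history

# worst case s(t) = B_bar drives rho toward its supremum B_bar / eta_r
hist = simulate_rho(eta_r=0.1, B_bar=10.0, s_fn=lambda t: 10.0)
```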

Theorem 2. If the event-triggered neural network controller (29) and (30) is replaced by (42) and (45) with the event-triggered condition (44), the results in Theorem 1 still hold.

Proof. From (44) and Lemma 1, we have $\|u_d(t) - u(t)\| \le \rho_r + \bar{B}$. Then

$$u(t) = u_d(t) - \lambda(t)(\bar{B} + \rho_r) \tag{47}$$

Now we consider the Lyapunov function (21). From (35) and (43),

$$\begin{aligned} \dot{V}_r(t) \le& -\beta_{r-1}V_{r-1} + \sum_{i=1}^{r-1}\eta_i + z_{r-1}G_{r-1}z_r + z_r\left(G_r u_d - \lambda(\bar{B} + \rho_r) + w_r^T\phi_r + \epsilon_r\right) \\ &- \tilde{w}_r^T\Gamma_{w_r}^{-1}\dot{\hat{w}}_r - \tilde{w}_{r,f}^T\Gamma_{w_{r,f}}^{-1}\dot{\hat{w}}_{r,f} - \gamma_{r,1}\tilde{\psi}_{r,1}\dot{\hat{\psi}}_{r,1} - \gamma_{r,2}\tilde{\psi}_{r,2}\dot{\hat{\psi}}_{r,2} \\ \le& -\beta_r V_r + \sum_{i=1}^{r}\eta_i + z_r\left(-(\bar{B} + \rho_r)\tanh\left(\frac{z_r(\bar{B} + \rho_r)}{\xi}\right) - \lambda(\bar{B} + \rho_r)\right) \end{aligned} \tag{48}$$

Because $z_r\left(-(\bar{B} + \rho_r)\tanh\left(z_r(\bar{B} + \rho_r)/\xi\right) - \lambda(\bar{B} + \rho_r)\right) \le 0.2785\xi$ [14], we have

$$\dot{V}_r(t) \le -\beta_r V_r + \sum_{i=1}^{r}\eta_i + 0.2785\xi = -\beta_r V_r + \Delta_s \tag{49}$$

Thus all the signals are bounded. The minimal inter-execution time $\tau$ of the dynamic event-triggered mechanism is now analyzed as follows. First, $\|u_d(t_{k+1}) - u(t)\| \ge \bar{B} + \rho_r$ and $\rho_r \ge 0$ from Lemma 1, i.e., the control change between every two subsequent events is larger than zero. From (42) and (43), $\dot{u}_d$ is continuous and bounded with $\|\dot{u}_d\| \le B^*$, where $B^*$ is a positive constant. Then we have $t_{k+1} - t_k \ge \tau = (\bar{B} + \rho_r)/B^* > 0$, and no Zeno behavior occurs.

Remark 9. Again, compared with the dynamic event-triggered mechanism in [3], the ISS condition is not needed in our method, which makes the event-triggered condition more convenient to apply.

Remark 10. From (36) and (49), the static and dynamic mechanisms have similar tracking performance. However, because of the introduction of the variable $\rho_r$, the dynamic event-triggered mechanism increases the threshold value $\rho_r + \bar{B}$ as $\bar{B} - \|u_d(t) - u(t)\|$ approaches $\bar{B}$, which corresponds to the signal $u_d$ changing rapidly in the tracking problem. In other words, the dynamic event-triggered mechanism is more flexible for a system requiring a rapidly changing control signal when tracking a changing desired trajectory, whereas the static mechanism is mainly advantageous in the stabilization problem.

4. Simulation example

In this section, we consider the following system for a rigid robot with actuator dynamics [10,22] to illustrate the established theoretical results. Define $F_2 = D^{-1}x_3 - D^{-1}N\sin x_1$ and $F_3 = -M^{-1}K_m x_2 - M^{-1}Hx_3$, which are unknown functions to the designers. The initial states of the system are chosen as $x(0) = [-0.4, 10, 0]$. The parameters of the system are $D = 1$, $N = 10$, $B = 1$, $M = 0.05$, $H = 0.5$, $K_m = 10$ for simulation purposes. We choose $\gamma_{2,1} = 10$, $\gamma_{2,2} = 10$, $\gamma_{3,1} = 1$ and $\gamma_{3,2} = 100$. The event-triggered parameters are chosen as $\bar{B} = 10$ and $\xi = 1$. The sampling time chosen for the simulation is 0.001 s. The


Fig. 1. (a) Tracking trajectory and (b) tracking error with static condition (31).

Fig. 2. (a) Control input and (b) triggered event with static condition (31).

initial neural network activation function $\phi_i$ is initialized with random parameters as in [17]. The initial neural network weights $\hat{w}_i(0)$ are zero and $\theta_i = \|\bar{x}_i\| + \|\bar{x}_i\|^2$. Let $\eta_r = 0.1$ and $\rho_r(0) = 0$ based on Lemma 1.

$$\begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = D^{-1}x_3 - D^{-1}N\sin x_1 - D^{-1}B x_2 \\ \dot{x}_3 = -M^{-1}K_m x_2 - M^{-1}Hx_3 + M^{-1}u \end{cases} \tag{50}$$
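The right-hand side of (50) with the parameter values above can be written as follows (a sketch; the function name is ours):

```python
import math

def robot_dynamics(x, u, D=1.0, N=10.0, B=1.0, M=0.05, H=0.5, Km=10.0):
    """Rigid robot with actuator dynamics, Eq. (50)."""
    x1, x2, x3 = x
    dx1 = x2
    dx2 = (x3 - N * math.sin(x1) - B * x2) / D
    dx3 = (-Km * x2 - H * x3 + u) / M
    return [dx1, dx2, dx3]
```

Integrating this with the 0.001 s sampling step and the desired trajectory $y_d = \sin(2\pi t)$ reproduces the setting used for the figures.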

The desired trajectory is yd = sin(2π t ). Figs. 1 and 5 show the tracking trajectories and errors with the static condition (31) and the dynamic condition (44), respectively. It can be seen that both schemes ensure system stability in the sense that all the signals are bounded, even though the control signals are not continuous as shown in Figs. 2(a) and 6(a). The


Fig. 3. (a) Event-triggered error $E = \bar{B} - \|u_s(t) - u(t)\|$ ($\bar{B} = 10$) and (b) inter-event time $\Delta t_k = t_k - t_{k-1}$ with static condition (31).

Fig. 4. (a) Event-triggered error $E = \bar{B} - \|u_s(t) - u(t)\|$ ($\bar{B} = 6$) and (b) inter-event time $\Delta t_k = t_k - t_{k-1}$ with static condition (31).

points in Figs. 2(b) and 6(b) represent the times of the triggered events. The numbers of events triggered within 3 seconds for the static and dynamic mechanisms are 136 and 113, respectively, much less than 3000 = 3/0.001, the total number of transmissions under the basic control strategy (27). In Fig. 4, $\bar{B} = 6$ and the number of triggered events increases to 181, which means that a smaller $\bar{B}$ leads to more triggered events but smaller event-triggered errors. In Figs. 3 and 7, the event-triggered errors and inter-event times are plotted. It is observed that the minimum inter-event times in Fig. 3(b) and Fig. 7(b) are bounded below, which means that no Zeno behavior occurs. The dynamic variable $\rho_r$ is bounded over the execution time in Fig. 7, as proved in Lemma 1. These simulation results illustrate and verify the theoretical findings stated in Theorems 1 and 2.


Fig. 5. (a) Tracking trajectory and (b) tracking error with dynamic condition (44).

Fig. 6. (a) Control input and (b) triggered event with dynamic condition (44).

5. Conclusions

In this paper, we consider event-triggered adaptive control for unknown nonlinear systems. Two different event-triggered mechanisms are proposed to ensure the stability of the closed-loop system with a low communication burden. The static event-triggered method is designed based on the event-triggered error between the designed control signal and the event-triggered control signal. In the dynamic mechanism, a filtered variable of the event-triggered condition in the static mechanism is introduced to build a new dynamic event-triggered condition, which is more flexible for systems requiring a rapidly changing controller. It is shown in the simulation section that the proposed methods guarantee system stability and reduce the communication resources required.


Fig. 7. (a) Threshold with $E = \bar{B} - \|u_d(t) - u(t)\|$ and (b) inter-event time $\Delta t_k = t_k - t_{k-1}$ with dynamic condition (44).

Declaration of Competing Interest

We declare that we have no financial and personal relationships with other people or organizations that can inappropriately influence our work; there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript.

References

[1] A. Anta, P. Tabuada, To sample or not to sample: self-triggered control for nonlinear systems, IEEE Trans. Autom. Control 55 (9) (2010) 2030–2042.
[2] C.M. Bishop, Neural networks for pattern recognition, Agric. Eng. Int. 12 (5) (1995) 1235–1242.
[3] A. Girard, Dynamic triggering mechanisms for event-triggered control, IEEE Trans. Autom. Control 60 (7) (2014) 1992–1997.
[4] R. Goebel, R.G. Sanfelice, A.R. Teel, Hybrid Dynamical Systems: Modeling, Stability, and Robustness, Princeton University Press, 2012.
[5] W.M. Haddad, V.S. Chellaboina, S.G. Nersesov, Impulsive and Hybrid Dynamical Systems: Stability, Dissipativity, and Control, Princeton University Press, 2014.
[6] W.P.M.H. Heemels, M.C.F. Donkers, A.R. Teel, Periodic event-triggered control for linear systems, IEEE Trans. Autom. Control 58 (4) (2013) 847–861.
[7] W.P.M.H. Heemels, K.H. Johansson, P. Tabuada, An introduction to event-triggered and self-triggered control, in: Decision and Control, 2012, pp. 3270–3285.
[8] H.K. Khalil, Nonlinear Systems, third ed., Prentice-Hall, Inc., Upper Saddle River, NJ, 2002.
[9] M. Krstic, I. Kanellakopoulos, P.V. Kokotovic, Nonlinear and adaptive control, Lect. Notes Control Inf. Sci. 5 (2) (1995) 4475–4480.
[10] F.L. Lewis, D.M. Dawson, C.T. Abdallah, Control of Robot Manipulators, Macmillan, 1993.
[11] T. Liu, Z.P. Jiang, A small-gain approach to robust event-triggered control of nonlinear systems, IEEE Trans. Autom. Control 60 (8) (2015) 2072–2085.
[12] J. Manuel Mazo, A. Anta, P. Tabuada, Brief paper: an ISS self-triggered implementation of linear controllers, Automatica 46 (8) (2009) 1310–1314.
[13] M. Mazo, P. Tabuada, Input-to-state stability of self-triggered control systems, in: Proceedings of the IEEE Conference on Decision and Control, held jointly with the 2009 Chinese Control Conference (CDC/CCC 2009), 2009, pp. 928–933.
[14] M.M. Polycarpou, Stable adaptive neural control scheme for nonlinear systems, IEEE Trans. Autom. Control 41 (3) (2002) 447–451.
[15] R. Postoyan, P. Tabuada, D. Nešić, A. Anta, A framework for the event-triggered stabilization of nonlinear systems, IEEE Trans. Autom. Control 60 (4) (2015) 982–996.
[16] A. Sahoo, H. Xu, S. Jagannathan, Adaptive neural network-based event-triggered control of single-input single-output nonlinear discrete-time systems, IEEE Trans. Neural Netw. Learn. Syst. 27 (1) (2015) 151–164.
[17] A. Sahoo, H. Xu, S. Jagannathan, Neural network-based event-triggered state feedback control of nonlinear continuous-time systems, IEEE Trans. Neural Netw. Learn. Syst. 27 (3) (2015) 497–509.
[18] Y.D. Song, Guaranteed performance control of nonlinear systems with application to flexible space structure, J. Guid. Control Dyn. 18 (1) (2012) 143–150.
[19] C. Stocker, J. Lunze, Event-based control of nonlinear systems: an input-output linearization approach, in: IEEE Conference on Decision and Control and European Control Conference, 2011, pp. 2541–2546.
[20] P. Tabuada, Event-triggered real-time scheduling of stabilizing control tasks, IEEE Trans. Autom. Control 52 (9) (2007) 1680–1685.
[21] P. Tallapragada, N. Chopra, On event triggered tracking for nonlinear systems, IEEE Trans. Autom. Control 58 (9) (2013) 2343–2348.
[22] T.J. Tarn, A.K. Bejczy, X. Yun, Z. Li, Effect of motor dynamics on nonlinear feedback robot arm control, IEEE Trans. Rob. Autom. 7 (1) (1991) 114–122.
[23] N.S. Tripathy, I.N. Kar, K. Paul, Robust dynamic event-triggered control for linear uncertain system, IFAC-PapersOnLine 49 (1) (2016) 207–212.
[24] M. Velasco, P. Marti, J.M. Fuertes, The self triggered task model for real-time control systems, in: 24th IEEE Real-Time Systems Symposium (work in progress), 2003, pp. 67–70.
[25] D. Wang, J. Huang, Adaptive neural network control for a class of uncertain nonlinear systems in pure-feedback form, Pergamon Press, Inc., 2002.
H. Gao, Y. Song and C. Wen / Information Sciences 506 (2020) 148–160

[26] D. Wang, J. Huang, Neural Network-Based Adaptive Dynamic Surface Control for a Class of Uncertain Nonlinear Systems in Strict-Feedback Form, IEEE Press, 2005. [27] X. Wang, M.D. Lemmon, Self-triggered feedback control systems with ﬁnite-gain stability, IEEE Trans. Autom. Control 54 (3) (2009) 452–467. [28] Z. Xiang, C. Wen, Stable robust control of nonlinear systems with neural networks, in: American Control Conference IEEE, 5, 20 0 0, pp. 3439–3443. [29] L. Xing, C. Wen, Z. Liu, H. Su, J. Cai, Event-triggered adaptive control for a class of uncertain nonlinear systems, IEEE Trans. Autom. Control 62 (4) (2017) 2071–2076. [30] C. Yang, Y. Jiang, Z. Li, H. Wei, C.Y. Su, Neural control of bimanual robots with guaranteed global stability and motion precision, IEEE Trans. Ind. Inform. PP (99) (2016) 1. [31] T. Yucelen, W.M. Haddad, Low-frequency learning and fast adaptation in model reference adaptive control, IEEE Trans. Autom. Control 58 (4) (2013) 1080–1085.