Neural-network-based distributed adaptive asymptotically consensus tracking control for nonlinear multiagent systems with input quantization and actuator faults


Neurocomputing 349 (2019) 64–76


Yu Li a, Chaoli Wang a,∗, Xuan Cai a, Lin Li a, Gang Wang b

a Department of Control Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
b Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557, USA

Article history: Received 9 November 2018; Revised 25 February 2019; Accepted 9 April 2019; Available online 13 April 2019. Communicated by Dr. Lifeng Ma.

Keywords: Distributed consensus control; Neural networks; Input quantization; Actuator fault; Consensus asymptotic convergence

Abstract: This paper investigates the consensus asymptotic convergence problem for a class of nth-order strict-feedback multiagent systems subject to input quantization, actuator faults, unknown nonlinear functions and a directed communication topology. Because the upper bounds of the time-varying stuck faults and of the external disturbance are commonly difficult to determine accurately, these upper bounds are also assumed to be unknown in this paper. First, a group of first-order filters is designed to estimate the bounds of the reference signal for each agent. Second, smooth functions are introduced to compensate for the effects of quantization and bounded stuck faults. Meanwhile, a new back-stepping method is used to propose an intermediate control law and an adaptive design procedure, and the final distributed control protocols are established. All closed-loop signals are uniformly bounded, and the tracking errors asymptotically converge to zero. Finally, a practical simulation example is provided to demonstrate the effectiveness of the proposed scheme.

1. Introduction

In recent years, because of its theoretical prospects and practical applications, the consensus control of multiagent systems has attracted wide attention. In particular, the consensus control of distributed multiagent systems has been widely studied in various fields such as sensor networks, stochastic multiagent systems, multi-robot systems, and robot formations [1,2]. Multiagent distributed consensus control is commonly applied over a fixed or time-varying communication topology [3,4], in which each agent can only obtain information from adjacent neighbors. Distributed control schemes drive the multiagent systems to consensus in their states or outputs. At present, consensus control can be further divided into two categories: leaderless consensus and leader-following consensus (i.e., distributed tracking) [5]. There are some results on linear multiagent systems [6,7], but most systems in engineering practice are nonlinear [8] and have more complex dynamics and nonlinear parts that cannot be modeled, such as nonholonomic mobile robots [9] or robotic

✩ This work was supported in part by the National Natural Science Foundation of China (61374040 and 61673277). ∗ Corresponding author. E-mail address: [email protected] (C. Wang).

https://doi.org/10.1016/j.neucom.2019.04.018 0925-2312/© 2019 Elsevier B.V. All rights reserved.


manipulators [10]. Therefore, compared with linear multiagent systems, nonlinear multiagent systems offer more research value, complexity and challenges. Some scholars have studied more general nonlinear systems [11,12]. Chen et al. [9] extended the result of [11] to first-order nonlinear systems under an undirected communication topology. In [12], a class of high-order nonlinear systems was considered, and an adaptive neural network approximation approach was used for multiagent systems with unknown nonlinear functions. Wang et al. [13] also used the same method to handle high-order nonlinear systems with unknown parameters under an undirected graph. Furthermore, based on the studies of [14–16], the consensus tracking control of nonlinear multiagent systems under a directed graph was investigated, and the nonsmooth controller proposed in the former work was improved to a smooth controller by Huang et al. [17]. In practical industrial applications, when the designed control signal is fed into the actual system by the actuator, two problems arise: signal quantization and actuator failures or faults. In reality, quantized signals inevitably appear in various systems such as discrete nonlinear stochastic systems, hybrid systems, or networked systems. Signal quantization can be regarded as the mapping of continuous analog signals to piecewise continuous signals [18–20]. Unlike digital discrete signals, which are discrete in both time and amplitude, the quantized signal takes values in a finite set only


in amplitude but remains continuous on the time axis. Many research results on signal quantization have been obtained in recent years. Zhou et al. [21] studied the quantized input signal for a strict-feedback system and proposed a hysteretic quantizer model. However, in [21], the nonlinearities in the system are required to satisfy global Lipschitz conditions with known Lipschitz constants. On this basis, the quantization problem of a single system was further studied in [22] using a robust adaptive technique, which avoids the nonlinearity assumption of [21]. Wang et al. [13,23] extended this result to linear multiagent systems and high-order nonlinear multiagent systems, respectively, and an event-triggered control strategy was investigated in [23]. Despite these efforts, the tracking errors in these papers do not converge asymptotically. Another problem is actuator failures and faults. Actuator failures indicate actuator efficiency degradation, i.e., the actuator loses part of its efficiency, whereas actuator faults represent the complete loss of power of the actuator, i.e., the actuators are blocked by unknown nonlinear functions, and the outputs are no longer affected by the control inputs [24,25]. Actuator failures and faults can destabilize the system; thus, it is necessary to develop an efficient fault-tolerant control (FTC) scheme to ensure stable and desired control performance. [26] and [27] presented FTC methods for a single linear system. Tang and co-workers [28,29] extended FTC to strict-feedback systems. In [29], the tracking error asymptotically converges to zero despite the disturbance and quantized input. Subsequently, FTC schemes for multiagent systems were considered in [30,31]. Although there are some research results on FTC in the field of multiagent systems, these studies on consensus tracking for multiagent systems did not consider input quantization and actuator failures or faults simultaneously.
Furthermore, asymptotic consensus tracking has great application value [17,32]. It is essential to consider the asymptotic tracking problem for nonlinear multiagent systems with input signal quantization and actuator faults. Li and Yang [29] first proposed the tracking problem for a strict-feedback system with input signal quantization and actuator faults, and the tracking error asymptotically converged to zero, but the result applied only to a single system. Wang et al. [13] included input signal quantization and actuator failures or faults and extended this result to multiagent systems, but did not obtain asymptotic convergence of the tracking error: the scheme only guaranteed that the tracking error remained within an adjustable range and only considered the case of an undirected graph. In summary, the studies on multiagent systems with input quantization and actuator failures or faults remain inadequate, which is also the motivation of this paper. Compared with the aforementioned literature, this paper investigates an asymptotic consensus tracking problem for a multiagent system that consists of nth-order strict-feedback systems with input quantization and actuator failures or faults. Based on a directed communication topology and the leader-following model, a class of first-order filters, standard backstepping methods and robust adaptive techniques are used to compensate for the effect of the input quantization and actuator failures or faults. Compared with the previous literature, the following points should be emphasized:


1. Compared with [14–16], a smooth function is used to compensate for the effect of the input quantization and actuator failures or faults, and a smooth distributed controller consisting of several intermediate virtual controllers is designed for each agent under the directed graph. As a result, the tracking error asymptotically converges to zero.
2. Compared with [29], the result is extended to multiagent systems, and a more general system model, which contains an unmodeled nonlinear function, is proposed. Meanwhile, the neural network approximation method is used to handle the nonlinear function, and the approximation error is compensated by the smooth function.
3. Compared with [13], the multiagent systems under the directed graph are considered, and the tracking error can asymptotically converge to zero.

The remainder of this paper is organized as follows. The next part describes the problem and introduces the concepts and some required lemmas. The detailed design process of the distributed controller and the stability analysis of the closed-loop system are shown in the third part. In the fourth part, a practical application example is provided to demonstrate the effectiveness of the designed controller.

Notations: Let ‖ · ‖ denote the Euclidean norm of a vector; | · | is the absolute value of a real number. Matrix A > 0 means that A is positive definite.

2. Problem statement and preliminaries

2.1. Graph theory

Suppose that the communication topology among N agents is described by a directed graph G = {V, E} without self-loops. Here, V is a finite nonempty node set, where each node represents an agent, and E ⊆ V × V is a set of edges between nodes. An edge (i, j) ∈ E denotes that node j is a neighbor of node i; then, agent j can receive information from node i. N_i indicates the set of all neighbors of agent i. A directed path from agent i_1 to agent i_s is a sequence of ordered adjacent edges of the form (i_1, i_2), (i_2, i_3), ..., (i_{s−1}, i_s). A graph includes a spanning tree if there is a root node that has a path to every other node. The weighted adjacency matrix A = [a_{ij}] ∈ R^{N×N} is defined by a_{ij} > 0 if (j, i) ∈ E and a_{ij} = 0 otherwise. The Laplacian matrix L = [l_{ij}] ∈ R^{N×N} of a directed graph G is defined by l_{ii} = Σ_{j∈N_i} a_{ij} and l_{ij} = −a_{ij}, i ≠ j. We define h_i ≥ 0 to describe whether the ith agent is connected with the leader agent. The diagonal matrix H = diag{h_1, h_2, ..., h_N} is the leader adjacency matrix. If agent i is connected with the leader agent, then h_i > 0; otherwise, h_i = 0.
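The Laplacian and leader-adjacency constructions of Section 2.1 can be sketched numerically. The 3-agent weighted graph below is a hypothetical example chosen for illustration, not one taken from the paper; only agent 1 is assumed to hear the leader.

```python
import numpy as np

# Hypothetical weighted adjacency for 3 agents: A[i, k] > 0 means
# agent i receives information from agent k.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0]])

# Laplacian per Section 2.1: l_ii = sum_k a_ik, l_ik = -a_ik (i != k).
L = np.diag(A.sum(axis=1)) - A

# Leader adjacency H: h_i > 0 only for agents connected to the leader.
H = np.diag([1.0, 0.0, 0.0])

print(L)        # rows of a directed-graph Laplacian sum to zero
print(L + H)    # (L + H) is the matrix used throughout Section 3
```

By construction every row of L sums to zero, and adding H makes the first diagonal entry strictly dominant, which is what later makes (L + H) a nonsingular M-matrix (Lemma 2).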
2.2. RBF neural networks

In this paper, radial basis function (RBF) neural networks are used to estimate the unknown nonlinear functions. The output of the RBF neural network is defined as:

O(Z) = Ŵ^T φ(Z)   (1)

where Z = [Z_1, Z_2, ..., Z_n]^T ∈ R^n and O ∈ R are the input and output of the RBF neural network, respectively; Ŵ = [W_1, W_2, ..., W_l]^T ∈ R^l denotes the weight vector; and l is the node number of the neural network. φ(Z) = [φ_1(Z), φ_2(Z), ..., φ_l(Z)]^T ∈ R^l is a known vector of functions called the activation functions, which are selected as Gaussian functions [13]:



φ_i(Z) = exp( −(Z − μ̄_i)^T (Z − μ̄_i) / σ_i² ),  i = 1, 2, ..., l   (2)

where μ̄_i = [μ_{i1}, μ_{i2}, ..., μ_{in}]^T is the center of the receptive field, and σ_i is the width parameter of the Gaussian function. It has been proven that an unknown smooth function f(Z), defined on a compact set Ω_Z, can be approximated through an ideal weight vector W and the Gaussian functions such that

f(Z) = W^T φ(Z) + ε(Z)   (3)

where ε is the approximation error, which can be made arbitrarily small by selecting a suitable number of nodes. The ideal weight vector is defined as:

W = argmin_{Ŵ ∈ Ω_W} { sup_{Z ∈ Ω_Z} | f(Z) − Ŵ^T φ(Z) | }.   (4)
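The approximation machinery of (1)–(4) can be sketched numerically: fit weights for a bank of Gaussian bases to a smooth target by least squares, a practical stand-in for the argmin in (4). The target function, centers, and width below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of the RBF approximation (1)-(4): fit weights W for Gaussian
# bases phi_i(Z) = exp(-(Z - mu_i)^2 / sigma^2) by least squares.
f = np.tanh                       # example smooth nonlinearity (assumed)
centers = np.linspace(-3, 3, 13)  # receptive-field centers mu_i
sigma = 1.0                       # Gaussian width

Z = np.linspace(-3, 3, 201)
Phi = np.exp(-(Z[:, None] - centers[None, :]) ** 2 / sigma ** 2)  # 201 x 13
W, *_ = np.linalg.lstsq(Phi, f(Z), rcond=None)

approx_err = np.max(np.abs(Phi @ W - f(Z)))  # sup-norm error over the grid
print(approx_err)  # small; shrinks further as the node number l grows
```

Increasing the node number l (more centers) drives the residual ε toward zero, which is the sense in which (3) holds with arbitrarily small ε on the compact set.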


The ideal weight vector W minimizes the error between the output of the RBF neural network and f(Z); then ε, described in (3), satisfies |ε| ≤ ε_m and ‖W‖ ≤ W_m, where ε_m and W_m are unknown positive constants.

2.3. Problem formulation

Consider a class of nth-order strict-feedback nonlinear agents with input quantization, described in the following form:

ẋ_{i,q} = x_{i,q+1} + φ_{i,q}^T(x̄_{i,q}) θ_i + f_{i,q}(x̄_{i,q}),  q = 1, 2, ..., n − 1
ẋ_{i,n} = Σ_{j=1}^m b_{i,j} Q(u_{i,j}) + φ_{i,n}^T(x̄_{i,n}) θ_i + f_{i,n}(x̄_{i,n}) + d_i(t)
y_i = x_{i,1}   (5)

where x̄_{i,q} = [x_{i,1}, x_{i,2}, ..., x_{i,q}]^T ∈ R^q, q = 1, 2, ..., n − 1, and x̄_{i,n} = [x_{i,1}, x_{i,2}, ..., x_{i,n}]^T ∈ R^n are the system states; y_i ∈ R and u_{i,j} ∈ R are the output and the jth control input of the ith agent, respectively; b_{i,j} ∈ R and θ_i ∈ R are unknown parameters. The sign of b_{i,j}, which determines the jth control direction of the ith agent, is known; φ(·) are known nonlinear smooth functions; f(·) are unknown nonlinear functions; d_i(t) ∈ R is an unknown bounded time-varying disturbance whose bound is also unknown; and Q(u_{i,j}) ∈ R is the quantized control input signal.
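For intuition about the strict-feedback structure of (5), the sketch below integrates a toy single-agent instance with n = 2 and no quantization or faults. All the nonlinearities, the parameter value, and the placeholder feedback law are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Toy single-agent instance of (5) with n = 2:
#   x1' = x2 + phi_1(x1)*theta + f_1(x1)
#   x2' = u + phi_2(x)*theta + f_2(x) + d(t)
theta = 0.5
phi1, f1 = (lambda x1: np.sin(x1)), (lambda x1: 0.1 * x1 ** 2)
phi2, f2 = (lambda x: x[0] * x[1]), (lambda x: 0.1 * np.cos(x[1]))
d = lambda t: 0.05 * np.sin(t)
u = lambda x, t: -2.0 * x[0] - 2.0 * x[1]   # placeholder linear feedback

x, dt = np.array([0.5, 0.0]), 1e-3
for k in range(5000):                        # 5 s of explicit Euler steps
    t = k * dt
    dx1 = x[1] + phi1(x[0]) * theta + f1(x[0])
    dx2 = u(x, t) + phi2(x) * theta + f2(x) + d(t)
    x = x + dt * np.array([dx1, dx2])
print(x)  # bounded final state for this toy controller
```

Even this crude feedback keeps the toy state bounded, but it cannot drive the tracking error to zero in the presence of d(t) and the unknown terms; that gap is what the adaptive design of Section 3 addresses.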

The hysteresis quantizer input model used in this paper is identical to that in [21]:

Q(u_{i,j}) =
  u_{i,j,k} sgn(u_{i,j}),  if u_{i,j,k}/(1 + δ_{i,j}) ≤ |u_{i,j}| ≤ u_{i,j,k} and u̇_{i,j} < 0, or u_{i,j,k} < |u_{i,j}| ≤ u_{i,j,k}/(1 − δ_{i,j}) and u̇_{i,j} > 0;
  u_{i,j,k}(1 + δ_{i,j}) sgn(u_{i,j}),  if u_{i,j,k} < |u_{i,j}| ≤ u_{i,j,k}/(1 − δ_{i,j}) and u̇_{i,j} < 0, or u_{i,j,k}/(1 − δ_{i,j}) < |u_{i,j}| ≤ u_{i,j,k}(1 + δ_{i,j})/(1 − δ_{i,j}) and u̇_{i,j} > 0;
  0,  if 0 ≤ |u_{i,j}| < u_{i,j,min}/(1 + δ_{i,j}) and u̇_{i,j} < 0, or u_{i,j,min}/(1 + δ_{i,j}) ≤ |u_{i,j}| < u_{i,j,min} and u̇_{i,j} > 0;
  Q(u_{i,j}(t^−)),  if u̇_{i,j} = 0   (6)

where u_{i,j,k} = ρ_{i,j}^{1−k} u_{i,j,min}, k = 1, 2, .... At every sampling time, before the input signal is quantized, the unique value of k can be determined by the quantizer. u_{i,j,min} > 0 denotes the dead zone of the quantizer: the quantized signal is zero when the input magnitude is below the dead-zone value, regardless of whether the input signal increases or decreases. 0 < ρ_{i,j} < 1 and δ_{i,j} = (1 − ρ_{i,j})/(1 + ρ_{i,j}), where ρ_{i,j} is a measure of quantization density, i.e., a larger δ_{i,j} corresponds to a coarser quantized signal.

Remark 1. The quantized input differs from a smooth input because of its nonlinear feature. In addition, most quantized signals have both linear and nonlinear features. Normally, a quantized input signal turns a closed-loop system into a hybrid system, and a control method designed for a nonlinear system with a smooth input cannot be directly applied to systems with quantized signals. The control signal in the closed-loop system should be redesigned to compensate for the nonlinear feature caused by the quantized input signal.

Lemma 1. The quantized input signal can be decomposed into a linear part and an unknown nonlinear part; thus, the hysteresis quantizer can be formulated as follows [21]:

Q(u_{i,j}) = u_{i,j} + Δ_{i,j}   (7)

where Δ_{i,j} satisfies the following inequalities:

Δ_{i,j}² ≤ δ_{i,j}² u_{i,j}²,  ∀|u_{i,j}| ≥ u_{i,j,min};
Δ_{i,j}² ≤ u_{i,j,min}²,  ∀|u_{i,j}| ≤ u_{i,j,min}.   (8)
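The sector bound in Lemma 1 can be spot-checked numerically. The sketch below implements only the rising-input branch of (6) as a static map, a deliberate simplification: the full hysteretic model also switches on the sign of u̇ and holds the previous output when u̇ = 0. The values of u_min and ρ are example choices.

```python
import math

def quantize(u, u_min=0.1, rho=0.5):
    """Rising-input branch of the hysteretic quantizer (6) as a static map."""
    delta = (1 - rho) / (1 + rho)
    a = abs(u)
    if a < u_min:                    # dead zone (rising branch)
        return 0.0
    # largest k with u_k = rho**(1-k) * u_min <= |u|
    k = math.floor(1 + math.log(a / u_min) / math.log(1 / rho))
    uk = rho ** (1 - k) * u_min
    # within [u_k, u_{k+1}) the output is u_k or u_k*(1+delta)
    q = uk if a <= uk / (1 - delta) else uk * (1 + delta)
    return math.copysign(q, u)

# Decomposition (7)-(8): |Q(u) - u| <= delta*|u| whenever |u| >= u_min
delta = (1 - 0.5) / (1 + 0.5)
for u in [0.1, 0.15, 0.2, 0.3, 0.55, 1.0, 3.3, 7.7]:
    assert abs(quantize(u) - u) <= delta * abs(u) + 1e-9
print("sector bound holds on the sampled inputs")
```

The bound is tight: at the top of each quantization interval the error equals δ|u| exactly, which is why Δ_{i,j}² ≤ δ_{i,j}² u_{i,j}² in (8) cannot be improved.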

Although the input signal is quantized, many practical systems face further problems such as actuator faults. In practice, actuator faults can be described as:

ω_{i,j}(t) = k_{i,j,h} Q(u_{i,j}(t)) + u^s_{i,j,h}(t) = k_{i,j,h}(u_{i,j}(t) + Δ_{i,j}(t)) + u^s_{i,j,h}(t),  t ∈ [t^s_{i,j,h}, t^e_{i,j,h}],
k_{i,j,h} u^s_{i,j,h} = 0,  j = 1, 2, ..., m,  h = 1, 2, 3, ...   (9)

where k_{i,j,h} ∈ [0, 1] is the efficiency coefficient. Actuator faults may occur many times, each during a period of time [t^s_{i,j,h}, t^e_{i,j,h}], where t^s_{i,j,h} and t^e_{i,j,h} are unknown constants satisfying 0 ≤ t^s_{i,j,1} ≤ t^e_{i,j,1} ≤ t^s_{i,j,2} ≤ t^e_{i,j,2} ≤ ... ≤ t^s_{i,j,h} ≤ t^e_{i,j,h} ≤ .... u^s_{i,j,h} is the jth actuator stuck function of the ith agent; it is an unknown nonlinear or linear signal that is piecewise continuous and bounded. Eq. (9) formalizes the fact that the jth actuator fails during the period from t^s_{i,j,h} to t^e_{i,j,h}; h indexes the occurrence of the fault, i.e., the time span [t^s_{i,j,1}, t^e_{i,j,1}] denotes the first fault occurrence of the jth actuator. The fault model includes the following four cases:

1. k_{i,j,h} = 1 and u^s_{i,j,h} = 0: the actuator is in the fault-free state.
2. k_{i,j,h} ≠ 0 and u^s_{i,j,h} = 0, with 0 < k̲_{i,j,h} ≤ k_{i,j,h} ≤ k̄_{i,j,h} < 1: the actuator loses part of its efficiency.
3. k_{i,j,h} = 0 and u^s_{i,j,h} ≠ 0: the actuator becomes stuck.
4. k_{i,j,h} = 0 and u^s_{i,j,h} = 0: the actuator totally loses its efficiency.

In cases 3 and 4, the input signal u_{i,j} cannot affect the actuator; in case 3, the closed-loop system is influenced by u^s_{i,j,h} instead of u_{i,j}, i.e., ω_{i,j} is stuck at an unknown bounded signal u^s_{i,j,h}. According to the above information, the system model with the quantizer and actuator faults can be rewritten as:

ẋ_{i,q} = x_{i,q+1} + φ_{i,q}^T(x̄_{i,q}) θ_i + f_{i,q}(x̄_{i,q}),  q = 1, 2, ..., n − 1
ẋ_{i,n} = Σ_{j=1}^m b_{i,j}( k_{i,j,h} u_{i,j} + k_{i,j,h} Δ_{i,j} + u^s_{i,j,h} ) + φ_{i,n}^T(x̄_{i,n}) θ_i + f_{i,n}(x̄_{i,n}) + d_i(t)
y_i = x_{i,1}.   (10)
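The four fault cases can be exercised with a minimal sketch of the actuator model (9). Quantization is dropped here (Q(u) = u) to isolate the fault structure; the numeric arguments are arbitrary illustrative values.

```python
# Sketch of the actuator fault model (9) with Q(u) = u.  The constraint
# k * u_s = 0 encodes that an actuator is never simultaneously scaled
# and stuck.
def actuator_output(u, k, u_stuck):
    assert k * u_stuck == 0, "model (9) requires k_{i,j,h} * u^s_{i,j,h} = 0"
    return k * u + u_stuck

print(actuator_output(2.0, 1.0, 0.0))   # case 1: fault-free      -> 2.0
print(actuator_output(2.0, 0.4, 0.0))   # case 2: partial loss    -> 0.8
print(actuator_output(2.0, 0.0, 1.5))   # case 3: stuck at u^s    -> 1.5
print(actuator_output(2.0, 0.0, 0.0))   # case 4: total outage    -> 0.0
```

In cases 3 and 4 the commanded input u no longer reaches the output, which is why Assumption 6 (not all actuators fail simultaneously) is needed for controllability.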

In the multiagent system, the reference signal is transferred to agent i only when h_i > 0; otherwise, h_i = 0. In this paper, the following notations are used. For two vectors a ∈ R^n and b ∈ R^n, we define the elementwise product a .∗ b = [a(1)b(1), ..., a(n)b(n)]^T. λ_min(M) denotes the minimum eigenvalue of a positive definite matrix M. The main problem studied in this paper is stated below.

Control objective: For the multiagent system (10) with input quantization, actuator faults and unknown bounded time-varying disturbances, it is assumed that only finitely many agents are directly connected with the leader, and the other agents communicate only with their neighbors. The goal is to design a smooth controller u_{i,j} for every agent described in (10) such that (1) all signals in the closed-loop system remain globally uniformly bounded, and (2) the closed-loop system has asymptotically convergent performance, i.e., the error between the output of each agent and the reference signal asymptotically converges to zero.

Assumption 1. The directed graph G has a spanning tree, and the leader is the root agent.

Assumption 2 [17]. The first n derivatives of the reference signal y_r(t) are piecewise bounded and continuous. F_j, j = 1, 2, ..., n, denotes the bound of the jth-order derivative of y_r(t), and F_j is only known to an agent when that agent is directly connected to the leader.

Lemma 2 [1]. For a nonsingular M-matrix N, there always exists a positive diagonal matrix G that satisfies GN + N^T G > 0. Moreover, G = diag(g_1, ..., g_N), where g = [g_1, ..., g_N]^T = (N^T)^{−1} 1.

Based on Assumption 1, the matrix (L + H) is a nonsingular M-matrix, where H = diag{h_1, ..., h_N}. Define

P = [p_1, ..., p_N]^T = (L + H)^{−T} 1,  P = diag{p_1, ..., p_N},  M = P(L + H) + (L + H)^T P,   (11)

then P > 0 and M > 0.

Lemma 3 [29]. For any positive bounded and uniformly continuous function Γ(t) satisfying Γ(t) > 0 and any variable z, the following inequalities hold:

0 ≤ |z| − z² / sqrt(z² + Γ²(t)) < Γ(t),
|z| − z² / (|z| + Γ(t)) = Γ(t)|z| / (|z| + Γ(t)) < Γ(t)   (12)

with Γ(t) satisfying

lim_{t→∞} ∫₀^t Γ(s) ds ≤ Γ̄   (13)

where Γ̄ is any positive constant. For convenience, we write Γ instead of Γ(t).

Assumption 3. The external disturbance is bounded, and the bound of the disturbance is an unknown positive constant such that |d_i(t)| ≤ d̄.

Assumption 4. The function u^s_{i,j,h} is bounded, and the bound is an unknown positive constant such that |u^s_{i,j,h}(t)| ≤ ū^s_{i,j,h}.

Assumption 5. The signs of b_{i,j} are known for i = 1, ..., N, j = 1, ..., m.

Assumption 6. All actuators do not fail simultaneously. If the number of faulty actuators is less than m, the closed-loop system still works well with the remaining actuators.

Remark 2. Assumptions 3–5 are basic requirements for adaptive back-stepping control and fault-tolerant control schemes mentioned in several studies [28,29,32,33]. In this paper, these assumptions are required to design our control scheme. As described in [29], Assumption 6 is a standard condition ensuring that there is at least one active control signal in the closed-loop system, so that controllability is preserved.

3. Adaptive distributed controller design

In this section, we use the backstepping method to design a smooth controller u_{i,j}, and estimations of the unknown parameters will be provided. The actuator faults and input quantization in the class of strict-feedback systems (5) can thereby be compensated. First, a filter that can estimate the jth-order derivatives of the reference signal y_r is designed for each agent.

3.1. Filter design

Design a filter for agent i, i = 1, 2, ..., N:

q̇_{i,1} = q_{i,2}
q̇_{i,2} = q_{i,3}
...
q̇_{i,n} = η_i.   (14)

Introduce the filter synchronization error as follows:

z_{i,j} = Σ_{k=1}^N a_{i,k}( q_{i,j} − q_{k,j} ) + h_i( q_{i,j} − y_r^{(j−1)} ),  j = 1, 2, ..., n   (15)

where y_r^{(j−1)} = d^{j−1} y_r / dt^{j−1}. Define F̂_{i,j} as the estimation of the bound of y_r^{(j)} = d^j y_r / dt^j, i.e., of F_j. For agent i, further define F̂_{i,j} through

F̂̇_{i,j} = Σ_{k=1}^N a_{i,k}( F̂_{k,j} − F̂_{i,j} ) + h_i( F_j − F̂_{i,j} ).   (16)

Let z_j = [z_{1,j}, ..., z_{N,j}]^T, q_j = [q_{1,j}, ..., q_{N,j}]^T and ȳ_r^{(j)} = [y_r^{(j)}, ..., y_r^{(j)}]^T. We have

z_j = (L + H)( q_j − ȳ_r^{(j−1)} ).   (17)

Set

z = ( λ + d/dt )^{n−1} z_1 = C_{n−1}^0 λ^{n−1} z_1 + C_{n−1}^1 λ^{n−2} z_2 + ... + C_{n−1}^{n−1} λ^0 z_n.   (18)

Let z = [z̄_1, z̄_2, ..., z̄_N]^T; in detail,

z = (L + H)[ C_{n−1}^0 λ^{n−1}( q_1 − ȳ_r ) + C_{n−1}^1 λ^{n−2}( q_2 − ȳ̇_r ) + ... + C_{n−1}^{n−2} λ( q_{n−1} − ȳ_r^{(n−2)} ) + C_{n−1}^{n−1} λ^0 ( q_n − ȳ_r^{(n−1)} ) ].   (19)

From the definition of z, the components of the vector z are

[ z̄_1 ; z̄_2 ; ... ; z̄_N ] = (L + H) [ C_{n−1}^0 λ^{n−1}( q_{1,1} − y_r ) + ... + C_{n−1}^{n−1} λ^0 ( q_{1,n} − y_r^{(n−1)} ) ; C_{n−1}^0 λ^{n−1}( q_{2,1} − y_r ) + ... + C_{n−1}^{n−1} λ^0 ( q_{2,n} − y_r^{(n−1)} ) ; ... ; C_{n−1}^0 λ^{n−1}( q_{N,1} − y_r ) + ... + C_{n−1}^{n−1} λ^0 ( q_{N,n} − y_r^{(n−1)} ) ]   (20)

and the derivative of z is

[ z̄̇_1 ; z̄̇_2 ; ... ; z̄̇_N ] = (L + H) [ C_{n−1}^0 λ^{n−1}( q_{1,2} − ẏ_r ) + ... + C_{n−1}^{n−1} λ^0 ( η_1 − y_r^{(n)} ) ; C_{n−1}^0 λ^{n−1}( q_{2,2} − ẏ_r ) + ... + C_{n−1}^{n−1} λ^0 ( η_2 − y_r^{(n)} ) ; ... ; C_{n−1}^0 λ^{n−1}( q_{N,2} − ẏ_r ) + ... + C_{n−1}^{n−1} λ^0 ( η_N − y_r^{(n)} ) ].   (21)

ż can be rewritten as

ż = (L + H)[ C_{n−1}^0 λ^{n−1}( q_2 − ȳ̇_r ) + C_{n−1}^1 λ^{n−2}( q_3 − ȳ_r^{(2)} ) + ... + C_{n−1}^{n−2} λ( q_n − ȳ_r^{(n−1)} ) + C_{n−1}^{n−1} λ^0 ( η − ȳ_r^{(n)} ) ]
 = (L + H)[ C_{n−1}^0 λ^{n−1} q_2 + C_{n−1}^1 λ^{n−2} q_3 + ... + C_{n−1}^{n−2} λ q_n + C_{n−1}^{n−1} λ^0 η − Σ_{j=1}^n δ_j ȳ_r^{(j)} ]   (22)

where δ_j = C_{n−1}^{j−1} λ^{n−j} and η = [η_1, η_2, ..., η_N]^T.
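Before proceeding to the controller, the equivalence between the elementwise synchronization error (15) and its vector form (17) can be checked numerically; it relies on the Laplacian's zero row sums. The graph below is a hypothetical 3-agent example with the Section 2.1 conventions.

```python
import numpy as np

# Check (15) against its vector form (17): z_j = (L + H)(q_j - y_r * 1).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A
H = np.diag([1.0, 0.0, 0.0])          # only agent 1 hears the leader

q = np.array([0.3, -0.2, 0.5])        # filter states q_{i,j} for one fixed j
yr = 0.1                              # y_r^{(j-1)} at this instant

# Elementwise form (15)
z_elem = np.array([sum(A[i, k] * (q[i] - q[k]) for k in range(3))
                   + H[i, i] * (q[i] - yr) for i in range(3)])
# Vector form (17); equality uses (L + H) @ ones = H @ ones
z_vec = (L + H) @ (q - yr * np.ones(3))
print(np.allclose(z_elem, z_vec))  # True
```

The identity holds because L annihilates the constant vector, so the leader term y_r enters only through H, exactly as in (15).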

Then, the first distributed intermediate controller η_i can be designed as

η_i = −c z̄_i − C_{n−1}^0 λ^{n−1} q_{i,2} − C_{n−1}^1 λ^{n−2} q_{i,3} − C_{n−1}^2 λ^{n−3} q_{i,4} − ... − C_{n−1}^{n−2} λ q_{i,n} + Σ_{j=1}^n δ_j ( |z̄_i| z̄_i / ( z̄_i² + Γ² ) ) F̂_{i,j}.   (23)

Define F̂_j = [F̂_{1,j}, ..., F̂_{N,j}]^T as the estimation of F_j, and F̄_j = [F_j, ..., F_j]^T. The adaptive laws (16) imply the error dynamics

F̃̇_j = −(L + H) F̃_j   (24)

where F̃_j is the error between F̄_j and F̂_j, i.e., F̃_j = F̄_j − F̂_j = [F̃_{1,j}, ..., F̃_{N,j}]^T. Substituting (23) into (22) shows

ż = (L + H)[ −cz + Σ_{j=1}^n δ_j Z .∗ F̂_j − Σ_{j=1}^n δ_j ȳ_r^{(j)} ]   (25)

where Z .∗ F̂_j = [ ( |z̄_1| z̄_1 / ( z̄_1² + Γ² ) ) F̂_{1,j}, ( |z̄_2| z̄_2 / ( z̄_2² + Γ² ) ) F̂_{2,j}, ..., ( |z̄_N| z̄_N / ( z̄_N² + Γ² ) ) F̂_{N,j} ]^T.

Remark 3. In this part, a class of first-order filters is designed to estimate the bounds of the first n derivatives of the reference signal y_r(t). A smooth term consisting of the synchronization errors of the filters is proposed to handle the unknown bound of y_r^{(j)} for any agent that is not directly connected to the leader. This smooth term ensures the desired trajectory performance while using a smooth control protocol. In the leader-following tracking problem, the input quantization and actuator faults can then be compensated by this smooth control protocol.

Remark 4. Based on [8], the origin x = 0 may not be an equilibrium point of the system (5) because of the bounded external disturbance. We can no longer study the stability of the origin as an equilibrium point, nor should we expect the solution of the perturbed system to approach the origin as t → ∞. The best we can hope for is that all closed-loop signals are ultimately bounded by a small bound if the disturbance term d_i(t) is bounded in some sense, and this is one of our control objectives. It should be pointed out that what we discuss is a tracking control problem of multiagent systems: a distributed control protocol is designed in this paper to make the tracking errors converge to zero. The Lyapunov function candidate for the tracking error is designed to verify the validity of the control protocol, while the other signals in the closed-loop system remain bounded.

3.2. Adaptive fault-tolerant and quantization control design

A distributed adaptive control scheme will be designed using the standard back-stepping method. First, as usual, two errors are introduced:

e_{i,1} = x_{i,1} − q_{i,1}   (26)
e_{i,j} = x_{i,j} − α_{i,j−1} − q_{i,1}^{(j−1)},  i = 1, 2, ..., N,  j = 2, 3, ..., n   (27)

where α_{i,j} is the virtual control law designed in each step of the back-stepping procedure, and q_{i,1} is the output of each filter, i.e., the estimation of the reference signal y_r. The controller design procedure is elaborated as follows.

Step 1. The derivative of e_{i,1} is

ė_{i,1} = ẋ_{i,1} − q̇_{i,1} = x_{i,2} + φ_{i,1}^T(x_{i,1}) θ_i + f_{i,1}(x_{i,1}) − q̇_{i,1} = e_{i,2} + α_{i,1} + φ_{i,1}^T(x_{i,1}) θ_i + f_{i,1}(x_{i,1}).

Since f_{i,1}(x_{i,1}) is an unknown smooth function, the RBF neural networks are used to approximate it. The following equality can be established:

f_{i,1}(x_{i,1}) = W_{i,1}^T ϕ_{i,1}(x_{i,1}) + ε_{ϕ_{i,1}}(x_{i,1})   (28)

We define μ_i = max{ ‖W_{i,j}‖ : j = 1, 2, ..., n }, and ε_{ϕ_{i,1}}(x_{i,1}) is the RBF neural network error with upper bound ε_{i,1}. Based on Lemma 3, the following inequality can be obtained:

e_{i,1} f_{i,1}(x_{i,1}) = e_{i,1} W_{i,1}^T ϕ_{i,1}(x_{i,1}) + e_{i,1} ε_{ϕ_{i,1}}(x_{i,1}) ≤ e_{i,1} μ_i ϕ̄_{i,1}(x_{i,1}) + μ_i ε_{μ_i} + (1/2) e_{i,1}² + (1/2) ε_{i,1}²   (29)

where ε_{μ_i} > 0 is a positive constant, and ϕ̄_{i,1}(x_{i,1}) is

ϕ̄_{i,1}(x_{i,1}) = e_{i,1} ϕ_{i,1}^T(x_{i,1}) ϕ_{i,1}(x_{i,1}) / sqrt( e_{i,1}² ϕ_{i,1}^T(x_{i,1}) ϕ_{i,1}(x_{i,1}) + ε_{μ_i}² ).   (30)

The first virtual control law α_{i,1} can be designed as

α_{i,1} = −k_{i,1} e_{i,1} − φ_{i,1}^T(x_{i,1}) θ̂_i − μ̂_i ϕ̄_{i,1}(x_{i,1}) − (1/2) e_{i,1}   (31)

where θ̂_i is the estimation of θ_i, μ̂_i is the estimation of μ_i, and k_{i,1} > 0 is any positive design constant. The Lyapunov function candidate can be selected as

V_{i,1} = (1/2) e_{i,1}² + (1/2) μ̃_i^T μ̃_i + (1/2) θ̃_i^T θ̃_i   (32)

where θ̃_i = θ_i − θ̂_i and μ̃_i = μ_i − μ̂_i. According to (26)–(31), the derivative of V_{i,1} is

V̇_{i,1} = e_{i,1} ė_{i,1} + μ̃_i^T μ̃̇_i + θ̃_i^T θ̃̇_i ≤ e_{i,1} e_{i,2} − k_{i,1} e_{i,1}² + θ̃_i^T ( e_{i,1} φ_{i,1}(x_{i,1}) − θ̂̇_i ) + μ̃_i^T ( e_{i,1} ϕ̄_{i,1}(x_{i,1}) − μ̂̇_i ) + μ_i ε_{μ_i} + (1/2) ε_{i,1}².   (33)

Step j (2 ≤ j ≤ n − 1). According to (10) and (27), the virtual controllers α_{i,j−1} are functions of x_{i,1}, x_{i,2}, ..., x_{i,j−1}, θ̂_i, μ̂_i, q_{i,1}, q̇_{i,1}, ..., q_{i,1}^{(j−1)}. Thus, we obtain

ė_{i,j} = ẋ_{i,j} − α̇_{i,j−1} − q_{i,1}^{(j)}
 = x_{i,j+1} + φ_{i,j}^T(x̄_{i,j}) θ_i + f_{i,j}(x̄_{i,j}) − α̇_{i,j−1} − q_{i,1}^{(j)}
 = e_{i,j+1} + α_{i,j} + ( φ_{i,j}(x̄_{i,j}) − Σ_{k=1}^{j−1} (∂α_{i,j−1}/∂x_{i,k}) φ_{i,k}(x̄_{i,k}) )^T θ_i + ( f_{i,j}(x̄_{i,j}) − Σ_{k=1}^{j−1} (∂α_{i,j−1}/∂x_{i,k}) f_{i,k}(x̄_{i,k}) ) − Σ_{k=1}^{j−1} (∂α_{i,j−1}/∂x_{i,k}) x_{i,k+1} − Σ_{k=1}^{j−1} (∂α_{i,j−1}/∂q_{i,1}^{(k−1)}) q_{i,1}^{(k)} − (∂α_{i,j−1}/∂θ̂_i) θ̂̇_i − (∂α_{i,j−1}/∂μ̂_i) μ̂̇_i.   (34)

Similar to step 1, the following inequality can be proven:

e_{i,j} ( f_{i,j}(x̄_{i,j}) − Σ_{k=1}^{j−1} (∂α_{i,j−1}/∂x_{i,k}) f_{i,k}(x̄_{i,k}) ) = e_{i,j} W^T ϕ_{i,j}(x_{i,1}, ..., x_{i,j}) + e_{i,j} ε_{ϕ_{i,j}}(x_{i,1}, ..., x_{i,j}) ≤ e_{i,j} μ_i ϕ̄_{i,j}(x_{i,1}, ..., x_{i,j}) + μ_i ε_{μ_i} + (1/2) e_{i,j}² + (1/2) ε_{i,j}².   (35)
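The Lemma 3 bounds invoked at every backstepping step above can be spot-checked numerically; the sample distribution and the constant Γ below are arbitrary illustrative choices.

```python
import numpy as np

# Numeric spot-check of the Lemma 3 inequalities (12):
#   0 <= |z| - z^2 / sqrt(z^2 + G^2) < G
#   0 <= |z| - z^2 / (|z| + G)      < G
rng = np.random.default_rng(0)
z = rng.normal(scale=5.0, size=1000)
gamma = 0.3                           # a positive constant Gamma

lhs1 = np.abs(z) - z ** 2 / np.sqrt(z ** 2 + gamma ** 2)
lhs2 = np.abs(z) - z ** 2 / (np.abs(z) + gamma)
print(lhs1.min() >= 0.0, lhs1.max() < gamma)
print(lhs2.min() >= 0.0, lhs2.max() < gamma)
```

Both residuals stay in [0, Γ), which is what lets the design trade the nonsmooth term |z| for a smooth one at the cost of an integrable Γ(t) contribution, as in (13).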

In this inequality, we use the fact that

ϕ̄_{i,j}(x_{i,1}, ..., x_{i,j}) = e_{i,j} ϕ_{i,j}^T(x_{i,1}, ..., x_{i,j}) ϕ_{i,j}(x_{i,1}, ..., x_{i,j}) / sqrt( e_{i,j}² ϕ_{i,j}^T(x_{i,1}, ..., x_{i,j}) ϕ_{i,j}(x_{i,1}, ..., x_{i,j}) + ε_{μ_i}² )   (36)

and ε_{i,j} is the upper bound of ε_{ϕ_{i,j}}. The jth virtual controller can be designed as

α_{i,j} = −( k_{i,j} + 1/2 ) e_{i,j} − e_{i,j−1} − ( φ_{i,j}(x̄_{i,j}) − Σ_{k=1}^{j−1} (∂α_{i,j−1}/∂x_{i,k}) φ_{i,k}(x̄_{i,k}) )^T θ̂_i − μ̂_i ϕ̄_{i,j}(x_{i,1}, x_{i,2}, ..., x_{i,j}) + Σ_{k=1}^{j−1} ( (∂α_{i,j−1}/∂x_{i,k}) x_{i,k+1} + (∂α_{i,j−1}/∂q_{i,1}^{(k−1)}) q_{i,1}^{(k)} ) + (∂α_{i,j−1}/∂θ̂_i) [ Σ_{k=1}^{j} e_{i,k} ( φ_{i,k}(x̄_{i,k}) − Σ_{l=1}^{k−1} (∂α_{i,k−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) ) ] + (∂α_{i,j−1}/∂μ̂_i) [ Σ_{k=1}^{j} e_{i,k} ϕ̄_{i,k}(x_{i,1}, ..., x_{i,k}) ].   (37)

The jth Lyapunov function candidate is considered:

V_{i,j} = V_{i,j−1} + (1/2) e_{i,j}².   (38)

Considering (34) to (37), the derivative of V_{i,j} satisfies the following inequality:

V̇_{i,j} ≤ − Σ_{l=1}^{j} k_{i,l} e_{i,l}² + e_{i,j} e_{i,j+1} + θ̃_i^T [ Σ_{k=1}^{j} e_{i,k} ( φ_{i,k}(x̄_{i,k}) − Σ_{l=1}^{k−1} (∂α_{i,k−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) ) − θ̂̇_i ] + μ̃_i^T [ Σ_{k=1}^{j} e_{i,k} ϕ̄_{i,k}(x_{i,1}, ..., x_{i,k}) − μ̂̇_i ] + Σ_{k=1}^{j} ( μ_i ε_{μ_i} + (1/2) ε_{i,k}² ).   (39)

Step n. In this step, the final controller for each agent will be designed. Considering (27) and the system model (10), we have

ė_{i,n} = ẋ_{i,n} − α̇_{i,n−1} − q_{i,1}^{(n)}
 = Σ_{j=1}^m b_{i,j}( k_{i,j,h} u_{i,j} + k_{i,j,h} Δ_{i,j} + u^s_{i,j,h} ) + φ_{i,n}^T(x̄_{i,n}) θ_i + f_{i,n}(x̄_{i,n}) + d_i(t) − α̇_{i,n−1} − q_{i,1}^{(n)}
 = Σ_{j=1}^m b_{i,j} k_{i,j,h}( u_{i,j} + Δ_{i,j} ) + Σ_{j=1}^m b_{i,j} u^s_{i,j,h} + ( φ_{i,n}(x̄_{i,n}) − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) )^T θ_i + f_{i,n}(x̄_{i,n}) − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂x_{i,l}) f_{i,l}(x̄_{i,l}) − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂x_{i,l}) x_{i,l+1} − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂q_{i,1}^{(l−1)}) q_{i,1}^{(l)} − (∂α_{i,n−1}/∂θ̂_i) θ̂̇_i − (∂α_{i,n−1}/∂μ̂_i) μ̂̇_i − q_{i,1}^{(n)} + d_i(t).   (40)

Similar to the previous steps,

e_{i,n} ( f_{i,n}(x̄_{i,n}) − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂x_{i,l}) f_{i,l}(x̄_{i,l}) ) = e_{i,n} W^T ϕ_{i,n}(x_{i,1}, ..., x_{i,n}) + e_{i,n} ε_{ϕ_{i,n}}(x_{i,1}, ..., x_{i,n}) ≤ e_{i,n} μ_i ϕ̄_{i,n}(x_{i,1}, ..., x_{i,n}) + μ_i ε_{μ_i} + (1/2) e_{i,n}² + (1/2) ε_{i,n}²   (41)

with

ϕ̄_{i,n}(x_{i,1}, ..., x_{i,n}) = e_{i,n} ϕ_{i,n}^T(x_{i,1}, ..., x_{i,n}) ϕ_{i,n}(x_{i,1}, ..., x_{i,n}) / sqrt( e_{i,n}² ϕ_{i,n}^T(x_{i,1}, ..., x_{i,n}) ϕ_{i,n}(x_{i,1}, ..., x_{i,n}) + ε_{μ_i}² )   (42)

and ε_{i,n} is the upper bound of ε_{ϕ_{i,n}}. The final Lyapunov function is designed as

V_{i,n} = V_{i,n−1} + (1/2) e_{i,n}² + (1/2) ρ̃_i² + (r_i/2) β̃_i²   (43)

with ρ̃_i = ρ_i − ρ̂_i and β̃_i = β_i − β̂_i, where ρ̂_i and β̂_i are the estimations of ρ_i and β_i, respectively. From Assumptions 3–6, we can infer that Σ_{j=1}^m b_{i,j} k_{i,j,h} ≤ Σ_{j=1}^m |b_{i,j}| k_{i,j,h} and

inf{ Σ_{j=1}^m |b_{i,j}| k_{i,j,h} } ≥ min{ |b_{i,1}| k̲_{i,1,h}, |b_{i,2}| k̲_{i,2,h}, ..., |b_{i,m}| k̲_{i,m,h} }   (44)

with Σ_{j=1}^m |b_{i,j}| k_{i,j,h} > 0, where k̲_{i,1,h} is the lower bound of k_{i,1,h}, k̲_{i,2,h} is the lower bound of k_{i,2,h}, etc. Thus, the inequality

Σ_{j=1}^m |b_{i,j}| k_{i,j,h} ≥ min{ |b_{i,1}| k̲_{i,1,h}, |b_{i,2}| k̲_{i,2,h}, ..., |b_{i,m}| k̲_{i,m,h} } > 0   (45)

can be satisfied. Furthermore, we define

r_i = inf{ Σ_{j=1}^m |b_{i,j}| k_{i,j,h} },  β_i = 1/r_i,  ρ_i = sup| Σ_{j=1}^m b_{i,j} u^s_{i,j,h} | + d̄ + Σ_{j=1}^m |b_{i,j}| k_{i,j,h} u_{i,j,min}.   (46)

In this paper, the various parameters described in (46) can be estimated. Considering (40)–(46), the derivative of V_{i,n} is

V̇_{i,n} ≤ − Σ_{l=1}^{n−1} k_{i,l} e_{i,l}² + e_{i,n−1} e_{i,n} + θ̃_i^T [ Σ_{k=1}^{n−1} e_{i,k} ( φ_{i,k}(x̄_{i,k}) − Σ_{l=1}^{k−1} (∂α_{i,k−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) ) − θ̂̇_i ] + μ̃_i^T [ Σ_{l=1}^{n−1} e_{i,l} ϕ̄_{i,l}(x_{i,1}, ..., x_{i,l}) − μ̂̇_i ] + e_{i,n} Σ_{j=1}^m b_{i,j} k_{i,j,h}( u_{i,j} + Δ_{i,j} ) + e_{i,n} Σ_{j=1}^m b_{i,j} u^s_{i,j,h} + e_{i,n} d_i(t) + e_{i,n} ( φ_{i,n}(x̄_{i,n}) − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) )^T θ_i + e_{i,n} μ_i ϕ̄_{i,n}(x_{i,1}, ..., x_{i,n}) − e_{i,n} Σ_{l=1}^{n−1} ( (∂α_{i,n−1}/∂x_{i,l}) x_{i,l+1} + (∂α_{i,n−1}/∂q_{i,1}^{(l−1)}) q_{i,1}^{(l)} ) − e_{i,n} (∂α_{i,n−1}/∂θ̂_i) θ̂̇_i − e_{i,n} (∂α_{i,n−1}/∂μ̂_i) μ̂̇_i − e_{i,n} q_{i,1}^{(n)} + Σ_{l=1}^{n} ( μ_i ε_{μ_i} + (1/2) ε_{i,l}² ) + (1/2) e_{i,n}² + ρ̃_i ρ̃̇_i + r_i β̃_i β̃̇_i.   (47)

Introduce the second distributed intermediate controller for each agent as follows to simplify V̇_{i,n}:

v_i = e_{i,n−1} + ( k_{i,n} + 1/2 ) e_{i,n} + ( φ_{i,n}(x̄_{i,n}) − Σ_{l=1}^{n−1} (∂α_{i,n−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) )^T θ̂_i + ϕ̄_{i,n}(x_{i,1}, ..., x_{i,n}) μ̂_i − Σ_{l=1}^{n−1} ( (∂α_{i,n−1}/∂x_{i,l}) x_{i,l+1} + (∂α_{i,n−1}/∂q_{i,1}^{(l−1)}) q_{i,1}^{(l)} ) − (∂α_{i,n−1}/∂θ̂_i) [ Σ_{k=1}^{n} e_{i,k} ( φ_{i,k}(x̄_{i,k}) − Σ_{l=1}^{k−1} (∂α_{i,k−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) ) ] − (∂α_{i,n−1}/∂μ̂_i) [ Σ_{l=1}^{n} e_{i,l} ϕ̄_{i,l}(x_{i,1}, ..., x_{i,l}) ] + ρ̂_i² e_{i,n} / ( ρ̂_i |e_{i,n}| + Γ ) − η_i.

Then the derivative of V_{i,n} satisfies

V̇_{i,n} ≤ − Σ_{l=1}^{n} k_{i,l} e_{i,l}² + θ̃_i^T [ Σ_{k=1}^{n} e_{i,k} ( φ_{i,k}(x̄_{i,k}) − Σ_{l=1}^{k−1} (∂α_{i,k−1}/∂x_{i,l}) φ_{i,l}(x̄_{i,l}) ) − θ̂̇_i ] + μ̃_i^T [ Σ_{k=1}^{n} e_{i,k} ϕ̄_{i,k}(x_{i,1}, ..., x_{i,k}) − μ̂̇_i ] + ...

(48)

Theorem 1. Considering the multiagent systems (5) that satisfy Assumptions 1 and 2, all signals in the closed-loop system with the first distributed controller are uniformly bounded. The reference signal yr (t) can be asymptotically tracked by the output of all first-order filters, i.e., limt→∞ (qi,1 − yr ) = 0. Proof. The Lyapunov function is constructed as follows:

V =



V˙ = z P (L + H ) −cz + T







˙ − θˆi



+ ei,n

bi, j ki, j,h (ui, j + i, j ) + ei,n

j=1

m 

n

αi,n = −  ui, j =

l=1 (μi εμi

e2i,n βˆi2 v2i + 2

sgn(bi, j ) αi,n . 1 − δi, j





ei,n βˆi2 ηi2

n 



( j)

δ j yr

j=1

F˜jT P (L + H )F˜j

n 

j=1

zT P (L + H )δ j Z. ∗ F˜j −

j=1

n 

zT P (L + H )δ j yr( j )

j=1

n n  1 1 T ≤ − czT Mz − F˜j MF˜j + δ j Z P (L + H )F˜j . 2 2 j=1

(52)

j=1

It is easy to check that Z < 1. In light of Lemma 2, we have n n  1 1 T V˙ ≤ − czT Mz − F˜j MF˜j + 2λmin (M ) δ j Z 2 2 2 j=1

+

ρ

n 

2

4λmin (M )

≤ −cz2 − γ

j=1

δ j F˜j 2

j=1 n 

F˜j 2

(53)

j=1

e2i,n βˆi2 ηi2 + 2

where ρ = P (L + H ), c = λmin (M )( 12 c − 2

(50)

Remark 5. In this part, the final controller ui,j is designed using the back-stepping method. It is necessary to mention that the nonlinear term



(49)

2 ). The control laws are selected as + 12 εi,l

ei,n βˆi2 v2i

δ j Z. ∗ Fˆj −

n 

j=1

bi, j ui,s j,h + ei,n d (t ) + ei,n vi

    n k−1  ∂αi,k−1 ∂αi,n−1  ˙ ˆ + ei,n ei,k φi,k (xi,k ) − φ ( x ) − θi ∂ xi,l i,l i,l ∂ θˆi k=1 l=1   n ∂αi,n−1  ˙ + ei,n ei,l ϕ i,l (xi,1 , . . . , xi,l ) − μ ˆi ∂ μˆ i l=1

with Di =

n 

j=1

j=1

ρˆi2 e2i,n + Di − 2 + ei,n ηi + ρ˜i ρ˜˙ i + r β˜i β˜˙ i ρˆi |ei,n | +

(51)

n n  1 1 T = − czT Mz − F˜j MF˜j + zT P (L + H )δ j Z. ∗ Fj 2 2

l=1 m 

n 1 T 1 T z Pz + F˜j P F˜j 2 2

j=1

l=1

ei,l ϕ i,l (xi,1 , . . . , xi,l ) − μ ˆ˙ i

Now, the stability analysis is described in the following content; two theorems are provided and proven in this part.

where z and F˜j are defined in (19) and (24), respectively; then, substituting (24) and (25) into the derivative of V shows





3.3. Stability analysis

j=1

l=1

Integrating (47) and (48), the V˙ i,n can be rewritten as

estimation of the bound of the d ( jy)r . Thus, the coupling relationdt ship between each agent and the reference signal yr (t) can be decoupled.

ρˆi2 ei,n in (50) is used to compensate the unρˆi2 |ei,n |+ 2

n j=1

δ j ), and γ =

1 2



ρ2 . According to the definitions of F˜j in (24) and z in (18), it 4λmin (M )

can be ensured that these two signals are bounded. It is also easy to prove that z˜ and F˜j are bounded. Integrating both sides of (53), we have



V (t ) ≤ V (0 ) −

0



cz ds +





2

0

γ

n 



F˜j  ds 2

(54)

j=1

∞ ∞  which implies that 0 cz2 and 0 γ nj=1 F˜j 2 are bounded. Using Barbalat’s lemma, we have

known parameter ρ i , i.e., the sum of the upper bounds of the actuator fault, external disturbance, unknown bounded parameters and dead-zone of quantization. In the next section, the analysis of the function of this compensation term will be provided.

t→∞

Remark 6. The filter designed in (14) for each agent is used for estimating the reference signal yr (t). The states of the filter denote

Since z1  = (L + H )q1 − yr  and L + H is nonsingular, which z  indicates q1 − yr  ≤ λ (L1 +H ) , we have

lim z = 0.

(55)

min

Y. Li, C. Wang and X. Cai et al. / Neurocomputing 349 (2019) 64–76

71

Substituting (58) and the adaptive laws (57)–(49), V˙ i,n can be rewritten as

V˙ i,n ≤ −

n 

m 

ki,l e2i,l + ei,n

(1 − δi, j )bi, j ki, j, jh ui, j

j=1

l=1

+ |bi, j |ki, j, jh ui, jmin |ei,n | + |ei,n ||

m 

bi, j ui,s j,h + d (t )| + ei,n vi

j=1

+ ei,n ηi + Di −

Fig. 1. Communication topology.

lim (qi,1 − yr ) = 0.

 Theorem 2. Considering the multiagent systems (5) with signal quantizer (6) and actuator fault model (9), under Assumptions 2–6, the final control (50) and adaptive laws can be described as follows:

 θˆ˙ i = ei,k n

φi,k (xi,k ) −

k=1

μˆ˙ i =

n 

(59) 

(56)

t→∞



ρˆi2 e2i,n + ρ˜i ρ˜˙ i + r β˜i β˜˙ i . ρˆi2 |ei,n | + 2

k−1  l=1

∂αi,k−1 φ (x ) ∂ xi,l i,l i,l

Based on Lemma 3 and Assumption 3, the following inequality always holds:

|bi, j |ki, j, jh ui, jmin |ei,n | + |ei,n ||

bi, j ui,s j,h + d (t )|

j=1

≤ |ei,n |(sup|



m 

m 

bi, j ui,s j,h | + d + |bi, j |ki, j, jh ui, jmin ) = |ei,n |ρi .

j=1

(60) Substituting (50), (60) and (57) into (59), we have

ei,l ϕ i,l (xi,1 , . . . , xi,l ) V˙ i,n ≤ −

l=1

ρˆ˙ i = |ei,n | − ρˆ

n 

ki,l e2i,l −

(57)



All signals in the closed-loop system are bounded for any given δ i,j . The reference signal yr (t) can be asymptotically tracked by the output of the system, i.e., limt→∞ (xi,1 − qi,1 ) = 0. Proof. Based on Lemma 1 and (50), noticing that ei,n bi,j ki,j,jh ui,j ≤ 0, the following inequality is provided:

m 

|bi, j |ki, j, jh 

j=1

− ≤−

≤ −δi, j bi, j ki, j, jh ei,n ui, j + |bi, j |ki, j, jh ui, jmin |ei,n |. (58)



e2i,n βˆi2 ηi2 +

n 

ki,l e2i,l −



re2i,n βˆi2 v2i

+ |ei,n |ρi + ei,n vi + ei,n ηi + Di

e2i,n βˆi2 v2i +





re2i,n βˆi2 ηi2 e2i,n βˆi2 ηi2 +

+ |ei,n |ρi

ρˆi2 e2i,n − |ei,n |ρ˜i + ρ˜i ρˆi − r β˜i ei,n vi − r β˜i ei,n ηi ρˆi |ei,n | +

0.4

0.2

0

-0.2

-0.4

-0.6 5

e2i,n βˆi2 v2i +

e2i,n βˆi2 ηi2

0.6

0

e2i,n βˆi2 v2i

ρˆi2 e2i,n − |ei,n |ρ˜i + ρ˜i ρˆi − r β˜i ei,n vi − r β˜i ei,n ηi ρˆi |ei,n | + l=1

ei,n bi, j ki, j, jh i, j ≤ δi, j |ei,n bi, j ki, j, jh ui, j | + ui, jmin |ei,n bi, j ki, j, jh |

|bi, j |ki, j, jh 

j=1

l=1

βˆ˙ i = ei,n (vi + ηi ).

m 

10

Time(sec) Fig. 2. Trajectory of yi , i = 1, 2, 3, 4.

15

72

Y. Li, C. Wang and X. Cai et al. / Neurocomputing 349 (2019) 64–76

0.6

0.4

0.2

0

-0.2

-0.4

-0.6 0

5

10

15

Time(sec) Fig. 3. Tracking error of yi , i = 1, 2, 3, 4.

50

0

-50

-100 0

5

10

15

Time(sec) Fig. 4. Controller of ith agent, i = 1, 2, 3, 4.

+ ei,n vi + ei,n ηi + Di ≤−

n 

ρˆi2 e2i,n − |ei,n |ρ˜i ρˆi |ei,n | + ρˆi ei,n = − rei,n βˆi vi ρˆi |ei,n | +

|ei,n |ρi −

ki,l e2i,l + r − rei,n βˆi vi − r β˜i ei,n vi + r − rei,n βˆi ηi

l=1

− r β˜i ei,n ηi + ei,n vi + ei,n ηi + ≤−

n  l=1

ki,l e2i,l + (2r + 1 +

ρ

2 i

4

ρˆi ei,n + ρ˜i ρˆi + Di ρˆi |ei,n | + ) + Di .

− r β˜i ei,n vi + ei,n vi − rei,n βˆi ηi − r β˜i ei,n ηi + ei,n ηi = 0. (61)

Note that the following inequalities and Lemma 3 are used in (61):

(62)

Furthermore, there always exists a positive constant Di that satisfies Di ≤ Di . Let τ = 2r + 1 + ten as

V˙ i,n ≤ −

n  l=1

ki,l e2i,l + τ .

ρi2 4

+ Di ; the (61) can be further writ-

(63)

Y. Li, C. Wang and X. Cai et al. / Neurocomputing 349 (2019) 64–76

73
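The two smooth bounds used in (61) and (62), namely the Lemma-3-type inequality x^2/\sqrt{x^2+\epsilon^2} \ge |x|-\epsilon and the compensation bound \hat{\rho}|e| - \hat{\rho}^2e^2/(\hat{\rho}|e|+\epsilon) \le \epsilon, can be spot-checked numerically. The following sketch (plain Python; the sampling ranges are arbitrary) is only a sanity check, not part of the proof:

```python
import random

def lemma3_gap(x, eps):
    # gap of the Lemma-3-type bound: x^2/sqrt(x^2+eps^2) - (|x| - eps) >= 0
    return x * x / (x * x + eps * eps) ** 0.5 - (abs(x) - eps)

def compensation_gap(rho_hat, e, eps):
    # gap of the compensation bound:
    # eps - (rho_hat*|e| - rho_hat^2*e^2/(rho_hat*|e| + eps)) >= 0
    return eps - (rho_hat * abs(e) - rho_hat**2 * e**2 / (rho_hat * abs(e) + eps))

random.seed(0)
for _ in range(10000):
    x = random.uniform(-50.0, 50.0)
    e = random.uniform(-50.0, 50.0)
    rho_hat = random.uniform(0.0, 20.0)
    eps = random.uniform(1e-3, 1.0)
    assert lemma3_gap(x, eps) >= 0.0
    assert compensation_gap(rho_hat, e, eps) >= -1e-12
print("both smooth bounds hold on all samples")
```

The second bound becomes tight as \hat{\rho}|e| grows, which is consistent with the compensation term absorbing |e|\rho_i in (61) up to an \epsilon-sized residue.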

Fig. 5. Quantized controller of ith agent, i = 1, 2, 3, 4.

Fig. 6. Estimation of \beta_i, i = 1, 2, 3, 4.

Integrating both sides of (63), we have

V_{i,n}(\infty) + \sum_{l=1}^{n}\int_{0}^{\infty}k_{i,l}e_{i,l}^{2}\,ds \le V_{i,n}(0) + \int_{0}^{\infty}\tau\,ds \le V_{i,n}(0) + \bar{\tau} \quad (64)

for some positive constant \bar{\tau}, which indicates that the signals \tilde{\rho}_i, \tilde{\beta}_i and e_{i,n} are bounded. From (26), e_{i,1} and q_{i,1} are bounded, which implies that x_{i,1} is bounded. From the calculation process of the virtual controllers and the definition of e_{i,j}, we obtain that \alpha_{i,j} is bounded. According to (50), the controller u_{i,j} is bounded. From (57), it is concluded that the adaptive laws are bounded; then, from (64), it is concluded that

\sum_{l=1}^{n}\int_{0}^{\infty}k_{i,l}e_{i,l}^{2}\,ds \quad (65)

is bounded. Thus, all signals in the closed-loop system are bounded. Similar to the proof of Theorem 1, applying Barbalat's lemma to (65), it is easy to deduce that \lim_{t\to\infty}e_{i,1}=0, i.e.,

\lim_{t\to\infty}\big(x_{i,1}(t)-q_{i,1}(t)\big) = 0, \quad (66)

which indicates that the asymptotic convergence is ensured. This completes the proof.

Remark 7. Consider the multiagent systems (10), which consist of N nonlinear systems and satisfy Assumptions 1–6, under the smooth controller (50) and the adaptive laws (57). All signals are uniformly bounded, and the asymptotic consensus tracking of the output of the closed-loop system to the reference signal is guaranteed.
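The scalar adaptive laws in (57) are simple enough to prototype in isolation. The sketch below integrates \hat{\rho}_i and \hat{\beta}_i with forward Euler under a synthetic, decaying error signal; e, v_i and \eta_i here are hypothetical stand-ins for the closed-loop signals, while the initial estimates and \epsilon_i(t) follow the choices made in Section 4:

```python
import math

# Euler prototype of the scalar adaptive laws (57) for one agent, driven by a
# synthetic square-integrable error signal e(t); v and eta are hypothetical
# stand-ins for v_i and eta_i, and eps(t) = 0.1*exp(-0.1*t) as in Section 4.
dt, T = 1e-3, 15.0
rho_hat, beta_hat = 3.0, 4.0                # initial estimates (Section 4)
t = 0.0
while t < T:
    e = 0.5 * math.exp(-0.5 * t) * math.sin(5.0 * t)   # synthetic e_{i,n}(t)
    v, eta = -2.0 * e, 0.1                              # hypothetical v_i, eta_i
    eps = 0.1 * math.exp(-0.1 * t)
    rho_hat += dt * (abs(e) - eps * rho_hat)            # rho_hat' = |e| - eps*rho_hat
    beta_hat += dt * (e * (v + eta))                    # beta_hat' = e*(v + eta)
    t += dt
print(round(rho_hat, 3), round(beta_hat, 3))
```

Because the driving error is square-integrable and the leakage term is weighted by the integrable \epsilon(t), both estimates remain bounded, mirroring the boundedness argument in the proof of Theorem 2.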

Fig. 7. Estimation of \theta_i, i = 1, 2, 3, 4.

Fig. 8. Estimation of \rho_i, i = 1, 2, 3, 4.

4. Illustrative example

In this section, we introduce a practical simulation example with a group of single-link manipulators to verify the feasibility of the proposed method. This practical system can be described as

M_i\ddot{y}_i + N_i\sin(y_i) + f_i(y_i,\dot{y}_i) = \tau_1 + \tau_2 \quad (67)

where y_i, \dot{y}_i and \ddot{y}_i denote the angular position, velocity and acceleration of the ith single link, respectively; M_i is the inertia; N_i is the product of the mass of the single link and the gravity constant; and \tau_1 and \tau_2 are the control torques. Define x_{i,1} = y_i, x_{i,2} = \dot{y}_i, \tau_1 = u_{i,1} and \tau_2 = u_{i,2}, and consider the input signal quantization and the external disturbance; then, we can rewrite (67) as follows:

\dot{x}_{i,1} = x_{i,2},\qquad \dot{x}_{i,2} = b\big(q(u_{i,1})+q(u_{i,2})\big) + f(x_{i,1}) + f(x_{i,1},x_{i,2}) + d(t) \quad (68)

where f(x_{i,1}) = N_i\sin(x_{i,1}) and f(x_{i,1},x_{i,2}) = 0.05\sin(x_{i,1})e^{-x_{i,2}}. Let M_i = 1 and N_i = 1. The parameters of the quantization are selected as \delta_1 = 0.2, \delta_2 = 0.2, u_{1,j\min} = 0.02 and u_{2,j\min} = 0.02. The design parameters and initial values for the simulation are selected as k_{i,1} = 4, k_{i,2} = 0.1, \epsilon_i(t) = 0.1e^{-0.1t}, x_{i,1}(0) = 0.57, 0.44, -0.14, -0.57 and x_{i,2}(0) = 0.01, 0, -0.34, 0.2 for i = 1, 2, 3, 4, and the external disturbance is 0.1\sin(t). The control objective is to ensure that the system follows the reference signal y_d = 0.1\sin(t).
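To make the setup concrete, one agent of (68) can be simulated directly. In the sketch below the uniform quantizer, the equal torque split, and the PD feedback are illustrative assumptions only; the paper's quantizer (6) and the distributed adaptive protocol (50) are what the reported results actually use. The periodic stuck-fault pattern of (69), given next, is also included:

```python
import math

# Minimal simulation sketch of one agent of (68): a single-link manipulator with
# M_i = N_i = 1, two quantized input channels, disturbance d(t) = 0.1 sin t, and
# the periodic stuck-actuator pattern of (69). The quantizer form, the equal
# torque split, and the PD feedback are hypothetical stand-ins.

def quantize(u, step=0.02):
    """Uniform quantizer stand-in (assumed form, not the paper's quantizer (6))."""
    return step * round(u / step)

def actuator(q_u, t):
    """Fault pattern (69): healthy on [2k, 2k+1), stuck at 0.1 sin t otherwise."""
    return q_u if int(t) % 2 == 0 else 0.1 * math.sin(t)

dt, T = 1e-3, 15.0
x1, x2 = 0.57, 0.01                       # initial state of agent 1 (Section 4)
t = 0.0
while t < T:
    yd = 0.1 * math.sin(t)                # reference signal y_d
    u = -24.0 * (x1 - yd) - 10.0 * x2     # hypothetical stabilizing PD torque
    w = actuator(quantize(0.5 * u), t)    # per-channel quantization plus fault
    d = 0.1 * math.sin(t)
    dx2 = 2.0 * w + math.sin(x1) + 0.05 * math.sin(x1) * math.exp(-x2) + d
    x1, x2 = x1 + dt * x2, x2 + dt * dx2  # forward Euler step
    t += dt
print(round(x1, 3), round(x2, 3))
```

Even with this crude stand-in controller, the state stays bounded under quantization and the periodic stuck fault; reproducing the asymptotic tracking reported in Figs. 2 and 3 requires the full adaptive protocol.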


Fig. 9. Estimation of \mu_i, i = 1, 2, 3, 4.

The fault model is considered as

\omega_{i,j} = \begin{cases} q(u_{i,j}), & t \in [2k,\,2k+1) \\ k_{i,j,h}\,q(u_{i,j}) + u_{i,j,h}^{s}, & t \in [2k+1,\,2k+2) \end{cases} \quad (69)

where k_{i,j,h} = 0 and u_{i,j,h}^{s} = 0.1\sin(t) for i = 1, 2, 3, 4, j = 1, 2 and h = 0, 1, 2, \ldots. Thus, during every time interval [2k, 2k+1), the actuators work normally, and during the time interval [2k+1, 2k+2), all actuators are stuck at 0.1\sin(t). In this simulation, all basis functions are Gaussian functions with five nodes, whose centers are evenly spaced in the range [-4, 4]. The initial conditions are q_i = 0, F_i = 0, \hat{\mu}_i = 0, \hat{\theta}_i = 2, \hat{\rho}_i = 3 and \hat{\beta}_i = 4 for i = 1, 2, 3, 4. The communication topology consisting of the N agents is described by a directed graph G, which is shown in Fig. 1. The simulation results are shown in Figs. 2–9: Fig. 2 shows the tracking performance of the agents, Fig. 3 shows the tracking errors, Figs. 4 and 5 show the smooth and quantized control input signals, and Figs. 6–9 show the adaptive estimates. The tracking errors clearly asymptotically converge to zero under the control protocol proposed in this paper. Moreover, from Figs. 3–9, the boundedness of all signals in the closed-loop systems is guaranteed. Therefore, consensus asymptotic convergence is achieved in the presence of input signal quantization and actuator faults.

5. Conclusion

In this paper, a smooth distributed adaptive controller is designed for a class of strict-feedback multiagent systems with input quantization, actuator faults and external disturbance. We use a filter to estimate the bounds of the first n derivatives of the reference signal and a smooth function to compensate for the nonlinearity caused by the input quantization and the actuator faults. It is clearly revealed that closed-loop stability and asymptotic consensus can be ensured by the proposed distributed control scheme. Meanwhile, all signals in the closed-loop system are bounded.


Chaoli Wang received the B.S. and M.Sc. degrees from the Mathematics Department, Lanzhou University, Lanzhou, China, in 1986 and 1992, respectively, and the Ph.D. degree in control theory and engineering from the Beijing University of Aeronautics and Astronautics, Beijing, China, in 1999. He is a Professor with the School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China. From 1999 to 2000, he was a Post-Doctoral Research Fellow with the Robotics Laboratory of the Chinese Academy of Sciences, Shenyang, China. From 2001 to 2002, he was a Research Associate with the Department of Automation and Computer-Aided Engineering, the Chinese University of Hong Kong, Hong Kong. Since 2003, he has been with the Department of Electrical Engineering, University of Shanghai for Science and Technology, Shanghai, China. His current research interests include nonlinear control, robust control, robot dynamics and control, visual servoing feedback control, and pattern identification.

Yu Li received the M.E. degree from University of Shanghai for Science and Technology, Shanghai, China in 2017 and is currently pursuing the Ph.D. degree in control science and engineering at University of Shanghai for Science and Technology, Shanghai, China. His current research interests include nonlinear control theory, distributed control of multiagent systems, quantized and adaptive control.

Gang Wang was born in Chifeng, China, in 1990. He received the B.Sc. degree in Information and Computing Science and the Ph.D. degree in Systems Analysis and Integration from University of Shanghai for Science and Technology, Shanghai, China, in 2012 and 2017, respectively. He is currently a Research Associate in the Department of Electrical and Biomedical Engineering, University of Nevada, Reno. His research interests include distributed control of nonlinear systems, adaptive control, and robotics.

Xuan Cai received the M.E. degree from University of Shanghai for Science and Technology, Shanghai, China in 2015 and is currently pursuing the Ph.D. degree in control science and engineering at University of Shanghai for Science and Technology, Shanghai, China. His current research interests include nonlinear control theory, distributed control of nonlinear systems and adaptive control.

Lin Li received B.E. degree in automation from Qufu Normal University, Qufu, China, in 2004, and the Ph.D. degree in control theory and control engineering from Beihang University, Beijing, China, in 2010. She is currently with University of Shanghai for Science and Technology as an Associate Professor. Her current research interests include robust control and filtering, adaptive control and the cooperative control of multi-agent systems.