Accepted Manuscript

Event-triggered consensus of nonlinear multi-agent systems with stochastic switching topology

Lei Liu, Jinjun Shan

PII: S0016-0032(17)30276-4
DOI: 10.1016/j.jfranklin.2017.05.041
Reference: FI 3015

To appear in: Journal of the Franklin Institute

Received date: 19 November 2015
Revised date: 25 May 2017
Accepted date: 28 May 2017

Please cite this article as: Lei Liu, Jinjun Shan, Event-triggered consensus of nonlinear multi-agent systems with stochastic switching topology, Journal of the Franklin Institute (2017), doi: 10.1016/j.jfranklin.2017.05.041

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights

• Further explanation is included after Eq. (9) to clarify the imperfect communication network.
• Some typos have been found and corrected.
Event-triggered consensus of nonlinear multi-agent systems with stochastic switching topology

Lei Liu*, Jinjun Shan†

Department of Earth and Space Science and Engineering, York University
4700 Keele St., Toronto, Canada, M3J 1P3
Abstract
This paper is concerned with a leader-follower consensus problem for networked Lipschitz nonlinear multi-agent systems. An event-triggered consensus controller is developed with the consideration of discontinuous state feedback. To further enhance the robustness of the proposed controller, modeling uncertainty and switching topology are also considered in the stability analysis. Meanwhile, a time-delay equivalent approach is adopted to deal with the discrete-time control problem. Particularly, a sufficient condition for the stochastic stabilization of the networked multi-agent systems is proposed based on the Lyapunov functional method. Furthermore, an optimization algorithm is developed to derive the parameters of the controller. Finally, numerical simulation is conducted to demonstrate the effectiveness of the proposed control algorithm.
Keywords: Event-triggered, sampled-data, leader-follower, consensus, Markovian switching topology
1 Introduction

Multi-agent systems have attracted great interest due to their potential applications in a variety of areas. In multi-agent systems, the consensus seeking algorithm can be considered as one of the most crucial issues [1–4]. Consensus has been investigated extensively in computer science [5], physics [6] and management science [7]. In the control area, consensus seeking algorithms have been studied for multiple missions, e.g., formation control [8], rendezvous [9] and flocking [10].

* Currently a Post-doctoral Fellow at Ryerson University, Email: [email protected]
† Corresponding author, Professor, Tel: +1-416-7362100 ext. 33854, Email: [email protected]
In a classical single-agent system, the system output is expected to converge to a desired trajectory in the tracking problem. Unlike in a single-agent system, consensus is the ultimate goal that multi-agent systems are supposed to achieve. In multi-agent systems, multiple agents are coupled together through a wireless network. To accomplish a common mission, their motion should be synchronized so that the common goal is reached cooperatively. Once their motion is synchronized and the networked agents achieve their common mission cooperatively, consensus is achieved by the multi-agent systems. Therefore, compared to conventional single-agent stability, consensus can be considered as a specific type of stability for multi-agent systems. In order to achieve the group mission, consensus seeking algorithms are applied to each agent in the multi-agent systems.
Among different types of consensus seeking algorithms, the leader-follower consensus algorithm is particularly interesting and has received broad attention. In previous research on leader-follower consensus, it is usually assumed that agents exchange information continuously through the coupling network [2]. However, it is most likely in practice that information sharing can only take place at discrete instants since the bandwidth of the coupling network is limited. With the appearance of sampled-data information exchange, the leader-follower consensus problem was investigated in [11]. In their work, M-matrix theory is applied to derive sufficient conditions for system stability, while the velocity and acceleration of the leader are unavailable to the controller. Furthermore, the stable sampling period can be indicated based on their results. Although the periodic sampling strategy can effectively reduce the network consumption, the control output still has to be updated periodically. Namely, each agent is still computing the output value periodically even though it might be unnecessary. To further reduce the computational burden, an event-triggered control strategy was proposed in [12] for the first-order consensus problem. Event-triggered conditions were proposed for both centralized and distributed situations in their work. Moreover, a self-triggered multi-agent control protocol was proposed to relax the trigger condition. The event-triggered control algorithm was extended to second-order multi-agent systems in [13]. Particularly, Lipschitz nonlinearity was considered in their work because nonlinear dynamics are almost unavoidable in practice. The leader-follower consensus problem for Lipschitz nonlinear multi-agent systems was also considered in [14], where a jointly connected topology was assumed for the coupling relationship. The leader-follower consensus problem for second-order nonlinear multi-agent systems was investigated in [15] with a specific type of nonlinearity. In their work, the stability analysis was conducted on the basis of LaSalle's invariance principle. Further, by taking advantage of the M-matrix method and the property of nonnegative matrices, second-order nonlinear multi-agent systems were also investigated in [16], and it was proven that the leader-follower consensus can be reached more easily with higher pinning feedback gains. The leader-follower consensus of uncertain Euler-Lagrange systems was studied in [17]. In the presence of switching communication interaction, the convergence of the error systems was guaranteed by their distributed adaptive controller. Moreover, the communication topology in their work is not necessarily connected all the time. Specifically, the time-varying topology in consensus problems was also widely investigated [4, 18] because it is fairly more generic. Ref. [19] conducted research on Markovian switching topology for second-order multi-agent systems, and a necessary and sufficient condition for consensus achievement was presented in their work. Markovian switching topology was also considered in [20], where the leader-follower consensus problem was investigated with the consideration of nonlinear dynamics and communication delay. Furthermore, Ref. [21] discussed leader-follower consensus with switching topology for general linear agents, and the convergence of the closed-loop control system was proven via a Riccati-inequality-based approach. With the consideration of switching topology, leader-follower consensus control was investigated on the basis of discrete-time multi-agent systems in [22]. Both fixed and switched topologies were considered in [23] with a globally reachable leader approach. In their work, the finite-time convergent leader-follower consensus problem was studied and second-order consensus was successfully reached. Ref. [24] further extended the leader-follower consensus algorithm to second-order nonlinear multi-agent systems with both fixed and time-varying communication topologies. A large class of nonlinear dynamics was dealt with in their work, and the leader-follower consensus was achieved with intermittent information exchange.

In this work, the sampled-data communication is considered in the stability analysis because
the networked agents can only share information intermittently, not continuously. Therefore, the entire multi-agent system is essentially a discrete-time dynamical system. To effectively resolve the discrete-time control problem, the time-delay equivalent method [25] is adopted in this work to convert the discrete-time control problem into a continuous-time one. Apparently, the sampled-data communication can only reduce the network burden. To further reduce the computational load of each agent, an event-triggered control strategy is developed and the event-triggered condition is proposed in inequality form. The control input signal is generated only if the event-triggered condition is violated, thus the agent actuator does not have to be updated periodically. Namely, the computational load of each agent is effectively reduced because the on-board processor is available for other computational work whenever the event-triggered condition is satisfied. Furthermore, the stochastic switching communication topology is considered in this paper, and a finite Markov jump process is recruited to describe the interaction switching of the multi-agent systems. In order to enhance the robustness of the proposed controller, modeling uncertainty is also considered in this work. Since the nonlinear term in the dynamics of a single agent might not be precisely replicated in the stability analysis, the modeling uncertainty is included in the error dynamics of the multi-agent systems. In the stability analysis, the modeling uncertainty is systematically investigated, and it is proven that the stability of the networked system can be guaranteed even with the appearance of bounded system uncertainty.
The remainder of this paper is organized as follows. In Section 2, the nonlinear dynamics of the multi-agent systems and the error dynamics are formulated. Meanwhile, the mathematical description of the interaction relationship between agents is explained using graph theory and the Markov jump process. Moreover, an event-triggered condition is proposed to reduce the computational burden of the multi-agent systems. To further clarify the stability of the error dynamics, the stochastic stability is formally defined as well. In Section 3, three assumptions are proposed to clearly state the communication structure. Based on the three assumptions, the controller design and stability analysis are systematically presented with the assistance of the Lyapunov functional method. Subsequently, the sufficient condition for the convergence of the error dynamics is derived on the basis of the results of the stability analysis. Moreover, an iterative convex optimization algorithm is developed to derive the controller gain. In Section 4, four networked Chua's circuits are used in the numerical simulations. The leader-follower consensus is finally achieved by the four Chua's circuits in the presence of stochastic switching interaction. It is shown in the simulation that all the tracking errors converge to zero with the appearance of system uncertainties, which further demonstrates the effectiveness of the proposed controller. Section 5 concludes this paper.
Notation: The notations adopted in this work are fairly standard. R^n denotes the n-dimensional Euclidean space. The identity matrix and zero matrix with appropriate dimensions are represented by I and 0, respectively. The superscript "T" indicates the matrix transpose, and M > 0 represents a positive definite matrix. In a symmetric matrix, "∗" indicates the entry implied by symmetry.

2 Problem formulation
A distributed leader-follower consensus seeking problem is investigated in this work, and k ∈ N nonlinear agents are included in the multi-agent systems. The dynamics of each nonlinear agent can be described as follows

\dot{x}_i(t) = A x_i(t) + B f(x_i(t)) + u_i(t_u)    (1)

where x_i(t) ∈ R^n is the state vector of agent i, u_i(t_u) ∈ R^n represents the control input of agent i, A ∈ R^{n×n}, B ∈ R^{n×n} are system matrices, and the nonlinear term f(x_i(t)) ∈ R^n satisfies the Lipschitz condition, namely, the following inequality is true for any vectors a ∈ R^n and b ∈ R^n

[f(a) - f(b)]^T [f(a) - f(b)] \le \alpha^2 (a - b)^T (a - b)    (2)

where α > 0 is the Lipschitz constant. The desired trajectory is generated by a self-driven nonlinear agent with the following dynamics

\dot{x}_0(t) = A x_0(t) + B f(x_0(t))    (3)

where x_0(t) ∈ R^n is the state vector of the desired trajectory.
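To make the Lipschitz condition (2) concrete, the following minimal Python sketch checks the inequality numerically for a candidate nonlinearity; the saturation-type function, the constant α = 1, the dimension n = 3 and the random sampling are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

# Minimal sketch (illustrative only): verify inequality (2) numerically for a
# candidate Lipschitz nonlinearity f and an assumed Lipschitz constant alpha.
alpha = 1.0                                               # assumed constant
f = lambda x: 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # saturation-type example

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.normal(size=3), rng.normal(size=3)         # random vectors in R^3
    lhs = (f(a) - f(b)) @ (f(a) - f(b))                   # [f(a)-f(b)]^T [f(a)-f(b)]
    rhs = alpha ** 2 * (a - b) @ (a - b)                  # alpha^2 (a-b)^T (a-b)
    assert lhs <= rhs + 1e-12                             # inequality (2) holds
```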
Remark 1. Linear dynamics are usually studied in previous works [13, 18, 22, 26–28]. In this work, Lipschitz nonlinearity is considered in the single-agent dynamics, because a large class of nonlinear behaviors in mechanical/electrical systems can be described using Lipschitz nonlinearity. Hence, once the consensus seeking problem for networked Lipschitz nonlinear systems is solved, a large class of consensus seeking problems for networked mechanical/electrical systems is resolved fundamentally.

Remark 2. The initial condition of Eq. (1) can be freely selected in practice. It can be observed in the stability analysis (in Section 3) that the Lyapunov functional method is adopted and the initial condition of Eq. (1) does not appear explicitly. Therefore, any practically relevant initial condition will be within the stability domain.

Remark 3. The control algorithm proposed in this work is characterized by a distributed structure. Namely, all the nonlinear agents are coupled locally through a wireless network, and there is no central controller in the multi-agent systems. The decentralized structure inherently guarantees a low communication burden for each agent. Therefore, neither the communication nor the computation burden will increase significantly with the growth of the agent number. In this situation, the agent number k can be arbitrarily selected without an upper bound.
Since the multiple agents are coupled through a digital network, it is essential to describe the network structure properly. In this work, the communication relationship is mathematically depicted using algebraic graph theory. Consequently, each agent is represented by a vertex v_i ∈ X, where X ∈ X(R^n) is the set of vertices. The graph corresponding to a group of agents can be denoted by G(X, E), where E(X) ⊆ X × X is the edge set. Information can be shared between two agents if there is a communication channel connecting them, and the information sharing direction is indicated by the edge between them. The vertices connected to v_i are considered as the neighbors of v_i, and the neighbor set can be defined by N_G(v_i) = {v_j : (v_i, v_j) ∈ E(X ∪ {v_i})} [29]. The leader set is defined as L_0, namely, if a vertex v_i has access to the desired trajectory, then v_i ∈ L_0. The graph Laplacian associated with the graph G is defined as

L = H - A    (4)

where H is the degree matrix and A is the adjacency matrix.
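As a quick illustration of Eq. (4), the sketch below builds L = H − A in Python from a directed adjacency matrix. The encoding a_ij = 1 when agent i receives information from agent j is an assumption of this sketch; the entries are chosen to match Topology 1 of Section 4, so the result reproduces the matrix L^1 listed there.

```python
import numpy as np

# Sketch of Eq. (4): graph Laplacian from a directed adjacency matrix.
# Assumed convention: a_ij = 1 means agent i receives information from agent j.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],      # agent 2 listens to agent 1
              [1, 1, 0, 0],      # agent 3 listens to agents 1 and 2
              [0, 1, 0, 0]])     # agent 4 listens to agent 2
H = np.diag(A.sum(axis=1))       # degree matrix (in-degrees on the diagonal)
L = H - A                        # graph Laplacian, Eq. (4)
print(L)                         # matches L^1 used in the simulation section
```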
The following event-triggered control algorithm is considered

u_i(t_u) = K_i^{m(t)} \sum_{v_j \in N_G(v_i)} [x_i(t_u) - x_j(t_u)] + K_i^{m(t)} p_i^{m(t)} [x_i(t_u) - x_0(t_u)]    (5)

where K_i^{m(t)} ∈ R^{n×n}, t_u represents the update instant, i.e., u_i(t_u) only updates its value at discrete-time instants t_u, and m(t) is a finite Markov jump process. The value of m(t) is assigned from a finite set.
∈ Rn×n , tu represents the update instant, i.e. ui (tu ) only updates its value at discrete-
P r {m(t + ) = j|m(t) = i} =
pij + o() i 6= j 1 + pii + o() i = j
(6)
144
the consideration of controller in Eq. (5), error dynamics of the closed-loop control system can be
145
described by the following equation
ED
Subtracting the leader’s dynamics in Eq. (3) from the dynamics of agent i in Eq. (1) with
PT
142
M
143
where is a small positive parameter and o() is a term that is decreasing faster than , i.e. P o()/ → 0. The transition rate pii and pij ≥ 0 satisfy j=1,j6=i pij = −pii .
141
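To see how the rates p_ij in (6) generate a switching signal, the short sketch below samples m(t) on a uniform grid; the two-mode generator matrix, the step ε and the horizon are assumed values, not parameters from the paper.

```python
import numpy as np

# Sketch: sample a finite Markov jump process m(t) from assumed transition
# rates, using Pr{m(t+eps) = j | m(t) = i} = p_ij*eps + o(eps) for j != i and
# 1 + p_ii*eps + o(eps) for j = i, as in Eq. (6).
P = np.array([[-0.6,  0.6],          # generator: each row sums to zero
              [ 0.4, -0.4]])
eps, horizon = 1e-3, 15.0            # grid step and simulation length (assumed)
rng = np.random.default_rng(1)

m, modes = 0, []
for _ in range(int(horizon / eps)):
    probs = P[m] * eps               # jump probabilities over one step
    probs[m] = 1.0 + P[m, m] * eps   # probability of staying in the current mode
    m = rng.choice(len(probs), p=probs)
    modes.append(m)
```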
Subtracting the leader's dynamics in Eq. (3) from the dynamics of agent i in Eq. (1) with the consideration of the controller in Eq. (5), the error dynamics of the closed-loop control system can be described by the following equation

\dot{e}_i(t) = A e_i(t) + B f(x_i(t), x_0(t)) + u_i(t_u)    (7)

where e_i(t) = x_i(t) - x_0(t) and f(x_i(t), x_0(t)) = f(x_i(t)) - f(x_0(t)). Furthermore, the lumped form of the error dynamics of all the agents can be expressed as follows

\begin{aligned}
\begin{bmatrix} \dot{e}_1(t) \\ \dot{e}_2(t) \\ \vdots \\ \dot{e}_k(t) \end{bmatrix}
&= \begin{bmatrix} A e_1(t) + B f(x_1(t), x_0(t)) + u_1(t_u) \\ A e_2(t) + B f(x_2(t), x_0(t)) + u_2(t_u) \\ \vdots \\ A e_k(t) + B f(x_k(t), x_0(t)) + u_k(t_u) \end{bmatrix} \\
&= (I_k \otimes A) \begin{bmatrix} e_1(t) \\ \vdots \\ e_k(t) \end{bmatrix}
 + (I_k \otimes B) \begin{bmatrix} f(x_1(t), x_0(t)) \\ \vdots \\ f(x_k(t), x_0(t)) \end{bmatrix}
 + \begin{bmatrix} u_1(t_u) \\ \vdots \\ u_k(t_u) \end{bmatrix} \\
&= (I_k \otimes A) \begin{bmatrix} e_1(t) \\ \vdots \\ e_k(t) \end{bmatrix}
 + (I_k \otimes B) \begin{bmatrix} f(x_1(t), x_0(t)) \\ \vdots \\ f(x_k(t), x_0(t)) \end{bmatrix}
 + \begin{bmatrix} K_1^{m(t)} \sum_{v_j \in N_G(v_1)} [x_1(t_u) - x_j(t_u)] + K_1^{m(t)} p_1^{m(t)} [x_1(t_u) - x_0(t_u)] \\ \vdots \\ K_k^{m(t)} \sum_{v_j \in N_G(v_k)} [x_k(t_u) - x_j(t_u)] + K_k^{m(t)} p_k^{m(t)} [x_k(t_u) - x_0(t_u)] \end{bmatrix} \\
&= (I_k \otimes A) e(t) + (I_k \otimes B) \bar{f}(x(t), x_0(t)) + K^{m(t)} (L \otimes I_n) e(t_u) + K^{m(t)} (D^{m(t)} \otimes I_n) e(t_u)
\end{aligned}    (8)

where D^{m(t)} = diag{p_1^{m(t)}, p_2^{m(t)}, ..., p_k^{m(t)}} and K^{m(t)} = diag{K_1^{m(t)}, K_2^{m(t)}, ..., K_k^{m(t)}}. Further details about the derivation with the Kronecker product and Laplacian matrix involved can be found in the Appendix.

Ideally, Eq. (8) is the error dynamics of the multi-agent systems with respect to the desired trajectory. However, an imperfect communication network is always unavoidable due to time-varying disturbances and other uncertainties. Moreover, the nonlinear terms f(x_i(t)) and f(x_0(t)) may be inaccurate because of perturbations and unmodeled effects. Therefore, the compact form of the error dynamics is reformulated in Eq. (9) with the appearance of the stochastic switching communication topology and modeling uncertainty.

\dot{e}(t) = (I_k \otimes A) e(t) + (I_k \otimes B)(I + \Delta) \bar{f}(x(t), x_0(t)) + K^{m(t)} \left[ \left( L^{m(t)} + D^{m(t)} \right) \otimes I_n \right] e(t_u)    (9)

where e(t) = [e_1^T(t), e_2^T(t), ..., e_k^T(t)]^T, \bar{f}(x(t), x_0(t)) = [f^T(x_1(t), x_0(t)), f^T(x_2(t), x_0(t)), ..., f^T(x_k(t), x_0(t))]^T, L^{m(t)} represents a group of Laplacian matrices, and they are switched according to the finite Markov jump process m(t). Δ represents the modeling uncertainty, which satisfies ‖Δ‖ < δ, and δ is an arbitrary positive constant.
Remark 4. The consensus seeking problem has been widely investigated in previous works [13, 14, 18–20, 26–28]. However, it is usually assumed in those works that the dynamics of the agents are precisely derived, thus system uncertainties are usually ignored. In contrast, system uncertainties are explicitly considered in this work. Uncertainty always exists in practice because of external/internal perturbations, modeling errors or other unmodeled effects. In order to enhance the robustness of the proposed control algorithm, system uncertainty is incorporated in the event-triggered consensus algorithm design and the robustness against bounded system uncertainty is demonstrated in the stability analysis. This can be considered as one of the main contributions of this paper.
Remark 5. As mentioned above, the modeling uncertainty is assumed to be bounded by a positive constant, i.e., ‖Δ‖ < δ. Due to the diversity of mechanical/electrical systems, it is impossible to propose a unified method to estimate the boundary of all uncertainties. In practice, this boundary can be evaluated based on the characteristics of the specific control system. In particular, a trial method or a frequency-domain approach can be utilized in some practical applications [30–33].
It is commonly assumed in previous works [1, 2] that all the agents exchange information continuously. However, it is most likely in practice that agents can only receive data packets discontinuously through the digital network. Therefore, periodically sampled communication is taken into account in this work. Meanwhile, to further reduce the computational load, an event-triggered mechanism is investigated for the multi-agent systems. In an event-triggered control algorithm, the control signal is generated only if the specific event-triggered condition is violated. Obviously, the computational burden is dramatically reduced by the event-triggered controller because the control signal does not have to be generated in each sampling period. Since the communication is still conducted periodically, the event-triggered condition will be verified periodically but the control signal will be calculated only if it is necessary. Motivated by [28], the event-triggered condition is designed as follows

\sigma_1 e_i^T(t_s) P_i^{m(t)} e_i(t_s) > r_i^T(t_s) P_i^{m(t)} r_i(t_s)    (10)

where t_s is the periodically sampled time instant, r_i(t_s) = e_i(t_s) - e_i(t_u), σ_1 < 1 is a positive constant, and P_i^{m(t)} is the weight matrix.
189
Remark 6. It has been observed in Eq. (5) that the control input ui (tu ) is updated only at instants
190
tu . Here, tu is determined by the event-triggered condition in (10). If the event-triggered condition
191
is satisfied by ri (ts ) and ei (ts ), the control input ui (tu ) will remain to be the same value; otherwise,
192
the the control input ui (tu ) will be updated according to the algorithm in Eq. (5). One of the most
193
important advantages of event-triggered algorithm is that it will effectively reduce the burden of the
194
on-board processor. For example, in the conventional consensus problem, each agent has to update
195
the control input continuously to achieve the consensus with its neighbors, i.e. the computational
196
work is being conducted since the mission begins. In contrast, each agent only updates the control
197
input when it is necessary in the event-triggered consensus problem. If the control input does not
198
need to be calculated, then the on-board processor will be available for other works.
199
Remark 7. The consensus seeking problem for networked multi-agent systems has been studied
200
in several previous works [11, 19, 20, 27, 34] with the consideration of sampled-data information
201
exchange. As explained above, each agent has to update the control signal periodically even when
203
CE
PT
ED
M
AN US
is the weight matrix.
AC
202
m(t)
it is not necessary in the sampled-data information exchange strategy. In contrast, the control signal in this work will be updated only if the event-triggered condition is violated. Namely, in the
204
control strategy proposed in this paper, the computational burden is greatly reduced compared to the
205
controllers in the previous works [11, 19, 20, 27, 34]. This is one of the major advantages of the
206
controller proposed in this work. 11
ACCEPTED MANUSCRIPT
207
In the consensus seeking mission, the desired trajectory, which is generated by a central work-
208
station, is transmitted to the agents in L0 directly. As for any agent vi ∈ X \L0 , it has no direct
209
connection with the workstation and they can only exchange information with vj ∈ NG (vi ). Unlike the continuous-time dynamical system, the control system in this paper is a stochastic
211
switching system. Hence, the definition of the stability for Markovian jump system in Eq. (9) is
212
presented as follows
213
Definition 1. [35] Markovian jump system in Eq. (9) is stochastically stable if the following con-
214
dition is satisfied lim E
t→∞
Z
0
t
e (t)e(t)dt < ∞ T
CR IP T
210
(11)
AN US
Based on the definition of the stability of Markovian jump system in Eq. (9), the consensus of
215
the networked control system in Eq. (1) can be defined as
217
Definition 2. The consensus of the networked control system in Eq. (1) is considered to be achieved
218
by the control algorithm in Eq. (5) if Markovian jump system in Eq. (9) is ensured to be stochastically
219
stable for any initial condition.
M
216
The main objective of this paper is to develop a consensus seeking algorithm for the networked
221
nonlinear multi-agent systems in Eq. (1). Basically, the control algorithm is expected to be in the
222
form of Eq. (5), and an iterative algorithm will be proposed to numerically derive the feedback gain
223
Km(t) .
224
3
225
Assumption 1. The communication interaction can be represented by a digraph containing a span-
226
ning tree, and each leader is located at the root of the spanning tree.
PT
CE
Stability analysis
AC
227
ED
220
Assumption 2. The agent information is shared intermittently, and the control signals are updated
228
when the event-triggered condition in Eq. (10) is violated.
229
Assumption 3. The communication topology is stochastically switched among q structures, where q
230
is a finite number. The switching can be mathematically described by a finite Markov jump process. 12
ACCEPTED MANUSCRIPT
Lemma 1. [36] Let Y be a symmetric matrix and A, B be matrices with compatible dimensions
232
and F satisfying FT F ≤ I. Then, Y + AFB + BT FT AT < 0 holds if and only if there exists a
233
scalar ε > 0 such that Y + εAAT + ε−1 BT B < 0.
234
Theorem 1. Suppose that the communication topology of the nonlinear multi-agent systems in
235
Eq. (1) and the information sharing satisfy Assumptions 1 - 3, then the leader-follower consensus
236
of the networked multi-agent systems in Eq. (1) can be achieved by the control algorithm presented
237
238
in Eq. (5) if there exist symmetric matrices Qr > 0, r = 1, . . . , q, Ri > 0, i = 1, 2, Pm(t) = n o m(t) m(t) m(t) diag P1 , P2 , . . . Pk , positive scalars εj , j = 1, . . . , 5 and matrix W such that
and
Φ 2 hW Ψ 4 ∗ −hR1 Ψ 5 < 0 ∗ ∗ Ψ6
(12)
(13)
CE
PT
ED
M
where h is the step size of the intermittent communication, σ1 , σ2 are arbitrary positive constants,
AC
240
Φ1 hNT Ψ1 ∗ −hR−1 Ψ 2 < 0 1 ∗ ∗ Ψ3
AN US
239
CR IP T
231
13
ACCEPTED MANUSCRIPT
241
and Φ 1 =MT1
q X i=1
pri Qi M1 + MT1 Qr N + NT Qr M1 − MT1 R2 M1 − MT2 R2 M2 + MT1 R2 M2
+ MT2 R2 M1 + hMT1 R2 N + hNT R2 M1 − hMT2 R2 N − hNT R2 M2 + WM1 + MT1 WT − WM2 − MT2 WT + α2 σ2 MT1 M1 − σ2 MT3 M3 + σ1 MT2 Pm(t) M2 − MT4 Pm(t) M4 q X i=1
pri Qi M1 + MT1 Qr N + NT Qr M1 − MT1 R2 M1 − MT2 R2 M2 + MT1 R2 M2
CR IP T
Φ 2 =MT1
+ MT2 R2 M1 + WM1 + MT1 WT − WM2 − MT2 WT + α2 σ2 MT1 M1 − σ2 MT3 M3 + σ1 MT2 Pm(t) M2 − MT4 Pm(t) M4
Ψ2 =
δMT1 Qr NMT3
δε1 MT3
δhMT1 R2 NMT3
δε2 MT3
δhMT2 R2 NMT3 δε3 MT3
0 0 0 0 0 0 δhNMT3
0
AN US
Ψ1 =
0 δε4 MT3
Ψ 3 =diag {−δε1 I, −δε1 I, −δε2 I, −δε2 I, −δε3 I, −δε3 I, −δε4 I, −δε4 I}
Ψ5 =
δMT1 Qr NMT3 0 0
δε5 MT3
M3 = M4 = N=
243
244
0 I 0 0
0 0 I 0
0 0 0 I
Ik ⊗ A Km(t)
L m(t) + D m(t) ⊗ In Ik ⊗ B −Km(t) L m(t) + D m(t) ⊗ In
Remark 8. The time-delay equivalent method is adopted in this work. This method was originally
AC
242
I 0 0 0
PT
M2 =
CE
M1 =
ED
Ψ6 =diag {−δε5 I, −δε5 I}
M
Ψ4 =
developed in [25]. Based on their work, a sufficient condition for sampled-data stabilization of linear systems was proposed in linear matrix inequalities (LMIs) form in [37] along the descriptor approach.
245
To further enhance the theoretical foundation, a discontinuous Lyapunov functional method was
246
presented in [38], based on which the exponential convergence of the sampled-data control system was
247
further investigated in [39] using the discontinuous Lyapunov functional method. The essential part 14
ACCEPTED MANUSCRIPT
248
of this method is to recruit an artificial time-delay d(t) so that the sampling time ts is equivalently
249
converted to ts = t − d(t) in each sampling period, which implies that the original discontinuous
250
control problem is transformed to a continuous control problem with a time-varying delay.
251
Proof. Defining the Lyapunov functional T
V (m(t), r) =e (t)Qr e(t) +
Z
t
t−d(t)
˙ )dτ [h − d(t)] e˙ T (τ )R1 e(τ
(14)
T
The weak infinitesimal operator F of the stochastic process {m(t)} is defined as FV (m(t)) = lim
→0+
253
254
E {V (m(t + ))} − V (m(t))
To better clarify the derivation, the Lyapunov functional will be separated into three parts and expressed as follows
AN US
252
CR IP T
+ [h − d(t)] [e(t) − e(ts )] R2 [e(t) − e(ts )]
V1 (m(t), r) =eT (t)Qr e(t) Z t ˙ )dτ V2 (m(t), r) = [h − d(t)] e˙ T (τ )R1 e(τ t−d(t)
Consequently,
FV1 (m(t), r) =eT (t)
q X
M
255
V3 (m(t), r) = [h − d(t)] [e(t) − e(ts )]T R2 [e(t) − e(ts )] ˙ pri Qi e(t) + 2eT (t)Qr e(t)
i=1
T
ED
˙ − FV2 (m(t), r) = [h − d(t)] e˙ (t)R1 e(t)
t
˙ )dτ e˙ T (τ )R1 e(τ
t−d(t)
˙ FV3 (m(t), r) = − [e(t) − e(ts )] R2 [e(t) − e(ts )] + 2 [h − d(t)] [e(t) − e(ts )]T R2 e(t)
Therefore,
CE
FV (m(t), r) = FV1 (m(t), r) + FV2 (m(t), r) + FV3 (m(t), r) = eT (t)
AC
256
PT
T
Z
−
Z
q X i=1
t
t−d(t)
˙ ˙ + [h − d(t)] e˙ T (t)R1 e(t) pri Qi e(t) + 2eT (t)Qr e(t)
˙ )dτ − [e(t) − e(ts )]T R2 [e(t) − e(ts )] e˙ T (τ )R1 e(τ
˙ +2 [h − d(t)] [e(t) − e(ts )]T R2 e(t) T
= e (t) −
Z
q X i=1
t
t−d(t)
˙ + [h − d(t)] e˙ T (t)R1 e(t) ˙ pri Qi e(t) + 2eT (t)Qr e(t)
˙ )dτ − eT (t)R2 e(t) − eT (ts )R2 e(ts ) e˙ T (τ )R1 e(τ
˙ − 2 [h − d(t)] eT (ts )R2 e(t) ˙ +2eT (t)R2 e(ts ) + 2 [h − d(t)] eT (t)R2 e(t) (15) 15
ACCEPTED MANUSCRIPT
257
258
On the basis of the Newton-Leibniz formula, the following equation is obtained with a free weight matrix W ∈ R4kn×kn 2ξξ T We(t) − 2ξξ T We(ts ) − 2ξξ T W
eT (t) eT (ts ) ¯f T (x(t), x0 (t)) rT (ts )
T
Z
t
˙ )dτ = 0 e(τ
(16)
ts
where ξ =
260
Remark 9. Reducing the conservativeness is always an important expectation in the investigation
261
on stability condition. In this work, sufficient conditions are derived based on Lyapunov theory.
262
Therefore, the more conservativeness can be reduced, the more generic results can be derived fun-
263
damentally. In order to reduce the conservativeness in the stability analysis, more freedom will be
264
included in the inequality. Motivated by the previous work [34, 37, 38], the free weight matrix is
265
incorporated to increase the freedom of the final results. Consequently, the free weight matrix W
266
will appear in the final sufficient condition to render more flexibilities.
AN US
Eq. (15) can be further manipulated by considering Eq. (16) as follows FV (m(t), r) = e (t) t
i=1
t−d(t)
˙ + [h − d(t)] e˙ T (t)R1 e(t) ˙ pri Qi e(t) + 2eT (t)Qr e(t)
˙ )dτ − eT (t)R2 e(t) − eT (ts )R2 e(ts ) e˙ T (τ )R1 e(τ
ED
−
Z
q X
M
T
˙ − 2 [h − d(t)] eT (ts )R2 e(t) ˙ +2eT (t)R2 e(ts ) + 2 [h − d(t)] eT (t)R2 e(t) Z t ˙ )dτ e(τ +2ξξ T We(t) − 2ξξ T We(ts ) − 2ξξ T W ts
PT = eT (t)
q X i=1
˙ + [h − d(t)] e˙ T (t)R1 e(t) ˙ pri Qi e(t) + 2eT (t)Qr e(t)
CE
−eT (t)R2 e(t) − eT (ts )R2 e(ts ) + 2ξξ T We(t) − 2ξξ T We(ts )
AC
267
.
CR IP T
259
˙ − 2 [h − d(t)] eT (ts )R2 e(t) ˙ +2eT (t)R2 e(ts ) + 2 [h − d(t)] eT (t)R2 e(t) Z t T T T T ˙ ) R−1 ˙ ) dτ (17) +d(t)ξξ T WR−1 W ξ + R1 e(τ W ξ + R1 e(τ 1 W ξ − 1 ts
16
ACCEPTED MANUSCRIPT
268
Subsequently, the following inequality is equivalent to FV (m(t), r) < 0 ξ T (t)MT1
q X
pri Qi M1ξ (t) + ξ T (t)MT1 Qr Nξξ (t) + ξ T (t)NT Qr M1ξ (t)
i=1
+ξξ T (t)MT1 Qr NMT3 ∆ M3ξ (t) + ξ T (t)MT3 ∆ T M3 NT Qr M1ξ (t) + [h − d(t)] ξ T (t) N + NMT3 ∆ M3
T
R1 N + NMT3 ∆ M3 ξ (t)
CR IP T
−ξξ T (t)MT1 R2 M1ξ − ξ T (t)MT2 R2 M2ξ + ξ T (t)MT1 R2 M2ξ (t) + ξ T (t)MT2 R2 M1ξ (t)
+2 [h − d(t)] ξ T (t)MT1 R2 N + NMT3 ∆ M3 ξ (t) + ξ T (t)WM1ξ (t) + ξ T (t)MT1 WT ξ (t) −2 [h − d(t)] ξ T (t)MT2 R2 N + NMT3 ∆ M3 ξ (t) − ξ T WM2ξ (t) − ξ T MT2 WT ξ (t) T +d(t)ξξ T WR−1 1 W ξ <0
270
Further taking advantage of inequalities (2) and (10), the following equivalent condition can be
AN US
269
(18)
obtained ξ
T
(t)MT1
pri Qi M1ξ (t) + ξ T (t)MT1 Qr Nξξ (t) + ξ T (t)NT Qr M1ξ (t)
i=1 T (t)M1 Qr NMT3 ∆ M3ξ (t)
+ ξ T (t)MT3 ∆ T M3 NT Qr M1ξ (t) T
R1 N + NMT3 ∆ M3 ξ (t)
M
+ξξ
T
q X
+ [h − d(t)] ξ T (t) N + NMT3 ∆ M3
ED
−ξξ T (t)MT1 R2 M1ξ − ξ T (t)MT2 R2 M2ξ + ξ T (t)MT1 R2 M2ξ (t) + ξ T (t)MT2 R2 M1ξ (t)
+2 [h − d(t)] ξ T (t)MT1 R2 N + NMT3 ∆ M3 ξ (t) + ξ T (t)WM1ξ (t) + ξ T (t)MT1 WT ξ (t)
PT
−2 [h − d(t)] ξ T (t)MT2 R2 N + NMT3 ∆ M3 ξ (t) − ξ T WM2ξ (t) − ξ T MT2 WT ξ (t) +α2 σ2ξ T (t)MT1 M1ξ (t) − σ2ξ T (t)MT3 M3ξ (t) + σ1ξ T (t)MT2 Pm(t) M2ξ (t)
272
(19)
where σ2 is an arbitrary positive constant.
AC
271
CE
T −ξξ T (t)MT4 Pm(t) M4ξ (t) + d(t)ξξ T (t)WR−1 1 W ξ (t) < 0
Since the left hand side of Eq. (19) is a linear polynomial of d(t), the following inequalities can
17
ACCEPTED MANUSCRIPT
273
be derived by setting d(t) = 0 and d(t) = h, respectively. MT1
q X
pri Qi M1 + MT1 Qr N + NT Qr M1 + MT1 Qr NMT3 ∆ M3 + MT3 ∆ T M3 NT Qr M1
i=1
+h N + NMT3 ∆ M3
T
R1 N + NMT3 ∆ M3 − MT1 R2 M1 − MT2 R2 M2
T +MT1 R2 M2 + MT2 R2 M1 + hMT1 R2 N + NMT3 ∆ M3 + h N + NMT3 ∆ M3 R2 M1
CR IP T
T −hMT2 R2 N + NMT3 ∆ M3 − h N + NMT3 ∆ M3 R2 M2 + WM1 + MT1 WT
−WM2 − MT2 WT + α2 σ2 MT1 M1 − σ2 MT3 M3 + σ1 MT2 Pm(t) M2 − MT4 Pm(t) M4 < 0 (20) 274
and MT1
q X
pri Qi M1 + MT1 Qr N + NT Qr M1 + MT1 Qr NMT3 ∆ M3 + MT3 ∆ T M3 NT Qr M1
AN US
i=1
T T T −MT1 R2 M1 − MT2 R2 M2 + MT1 R2 M2 + MT2 R2 M1 + hWR−1 1 W + WM1 + M1 W
−WM2 − MT2 WT + α2 σ2 MT1 M1 − σ2 MT3 M3 + σ1 MT2 Pm(t) M2 − MT4 Pm(t) M4 < 0 (21) Based on the Schur complement lemma, it is obtained from inequality (20) that where
+
Φ∆ hMT3 ∆ T M3 NT 1 ∗ 0
<0
(22)
ED
276
Φ1 hN ∗ −hR−1 1
M
275
T T T T T T T Φ∆ 1 =M1 Qr NM3 ∆ M3 + M3 ∆ M3 N Qr M1 + hM1 R2 NM3 ∆ M3
ity (13) can be derived from inequality (21).
CE
278
Inequality (12) can be derived from inequality (22) on the basis of Lemma 1. Similarly, inequal-
AC
277
PT
+ hMT3 ∆ T M3 NT R2 M1 − hMT2 R2 NMT3 ∆ M3 − hMT3 ∆ T M3 NT R2 M2
18
ACCEPTED MANUSCRIPT
279
Define ˜ 1 = MT M 1
q X
pri Qi M1 + MT1 Qr N + NT Qr M1 + MT1 Qr NMT3 ∆M3 + MT3 ∆T M3 NT Qr M1
i=1
+h N + NMT3 ∆ M3
T
R1 N + NMT3 ∆ M3 − MT1 R2 M1 − MT2 R2 M2
T +MT1 R2 M2 + MT2 R2 M1 + hMT1 R2 N + NMT3 ∆ M3 + h N + NMT3 ∆ M3 R2 M1
CR IP T
T −hMT2 R2 N + NMT3 ∆ M3 − h N + NMT3 ∆ M3 R2 M2 + WM1 + MT1 WT
−WM2 − MT2 WT + α2 σ2 MT1 M1 − σ2 MT3 M3 + σ1 MT2 Pm(t) M2 − MT4 Pm(t) M4 ˜ 2 = MT M 1
q X
pri Qi M1 + MT1 Qr N + NT Qr M1 + MT1 Qr NMT3 ∆ M3 + MT3 ∆ T M3 NT Qr M1
i=1 T −M1 R2 M1
T T T − MT2 R2 M2 + MT1 R2 M2 + MT2 R2 M1 + hWR−1 1 W + WM1 + M1 W
280
AN US
−WM2 − MT2 WT + α2 σ2 MT1 M1 − σ2 MT3 M3 + σ1 MT2 Pm(t) M2 − MT4 Pm(t) M4 n o ˜ 1 , λmin M ˜ 2 . According to Eq. (14), it is obtained that and λ1 = min λmin M FV (m(t), r) ≤ −λ1 eT (t)e(t)
On the basis of Dynkin’s formula, it is also obtained that Z t T E [V (m(t), r) − V (m(t0 ), r)] ≤ −λ1 E e (τ )e(τ )dτ
M
281
and it is further derived that
ED
282
λ1 E
Z
t
T
e (τ )e(τ )dτ
285
≤ V (m(t0 ), r)
CE
Moreover, the following relationship is derived based on Eq. (14)
where λ2 = λmin {Qr }.
E {V (m(t), r)} ≥ λ2 E eT (t)e(t)
Consequently, following [35], the stochastically stable inequality can be derived as shown below Z t λ2 T lim E e (t)e(t)dt ≤ 2 < ∞ t→∞ λ1 0
AC
284
PT
0
283
t0
286
According to Definition 1, it is proven that the Markovian jump system in Eq. (9) is stochastically
287
stable, which in turn implies that the leader-follower consensus is achieved by the proposed leader-
288
follower consensus algorithm based on Definition 2. 19
ACCEPTED MANUSCRIPT
Remark 10. In this work, the sampled-data vector is equivalently converted to a time-delay vector,
290
thus, the stability analysis can be smoothly conducted without theoretical difficulties. It is also pos-
291
sible to directly deal with the problems with time-varying delays through the similar approach. For
292
˙ = τ < 1, the derivative example, if the artificial time-delay d(t) is a time-varying function with d(t)
293
of the term t − d(t) will be 1 − τ which is always positive. Then, this positive term can be used in
294
the stability analysis. Similarly, other quadratic terms can be included in the Lyapunov functional
295
for the stability proof according to the specific problem.
296
Theorem 2. Suppose that the communication topology of the nonlinear multi-agent systems in
297
Eq. (1) and the information sharing satisfy Assumptions 1 - 3, then the leader-follower consensus
298
problem of the networked multi-agent systems in Eq. (1) is solvable if there exist symmetric matrices
AN US
˜ i > 0, i = 1, 2, 3, such that inequality (13) and the following LMIs are feasible R3 > 0, R ˜ T Ψ1 Φ 1 hN ¯2 < 0 ∗ −hR3 Ψ ∗ ∗ Ψ3 ˜1 R ˜2 −R < 0 ˜3 ∗ −R ˜ R1 0 0 R1 0 0 ˜ 2 0 ∗ R2 0 = I ∗ R ˜3 ∗ ∗ R3 ∗ ∗ R
300
where
¯2 = Ψ
302
˜ m(t) R2 (Ik ⊗ A) K
0
(23) (24) (25)
L m(t) + D m(t) ⊗ In R2 (Ik ⊗ B) h ii ˜ m(t) Lm(t) + Dm(t) ⊗ In −K
Proof. By pre- and post-multiplying both side of inequality (12) by diag {I4kn , R2 , I8kn }, the
AC
301
0 0 0 0 0 0 δhR2 NMT3
CE
˜ = N
PT
˜ m(t) Km(t) =R−1 2 K
ED
M
299
CR IP T
289
following inequality can be obtained Φ1 hNT I4kn 0 0 0 R2 0 ∗ −hR−1 1 0 0 I8kn ∗ ∗
Ψ1 I4kn 0 R2 Ψ2 0 0 0 Ψ3 ˜T Φ1 hN ∗ −hR2 R−1 R2 1 ∗ ∗
20
0 0 I8kn
<0
Ψ1 ¯2 < 0 Ψ Ψ3
(26)
ACCEPTED MANUSCRIPT
If
303
R3 ≤ R2 R−1 1 R2
then the following inequalities are equivalent to inequality (26) based on Schur complement lemma ˜ T Ψ1 Φ 1 hN ¯2 < 0 ∗ −hR3 Ψ ∗ ∗ Ψ3 −1 −1 −R1 R2 < 0 ∗ −R−1 3 Consequently, the inequalities (23), (24) and Eq. (25) can be derived on the basis of Theorem
305
306
CR IP T
304
(27)
1.
Apparently, the inequalities presented in Theorem 2 cannot be solved linearly due to the inclusion
308
of matrix equality (25). Therefore, the cone complementarity linearization method [40] is employed
309
to derive the feedback gain of the proposed controller.
310
Corollary 1. Suppose that the communication topology of the nonlinear multi-agent systems in
311
Eq. (1) and the information sharing satisfy Assumptions 1 - 3, then the feedback gain Ki
312
Eq. (5) and the matrix parameters in the inequalities (13), (23) and (24) can be derived by solving
313
the following optimization problem
ED min trace
3 X
˜ w Rw R
w=1
!
s.t. LMIs in inequalities (13), (23) and (24) ˜w R I ≥ 0 w = 1, 2, 3 ∗ Rw
PT CE
m(t)
in
M
AN US
307
(28)
(29)
Remark 11. In classical LMI theory, the feasible values of the matrix variables in LMIs can be
315
derived using well-developed numerical methods [41]. However, Eq. (25) in Theorem 2 is a matrix
316
317
AC
314
equality, not LMI. Thus, the conventional numerical method cannot be applied directly to solve the matrix equality problem. In order to convert the matrix equality problem into a solvable problem,
318
the cone complementarity linearization method is adopted, and a feasible optimization problem is
319
proposed in Corollary 1.
21
ACCEPTED MANUSCRIPT
Based on the optimization proposed in Corollary 1, an iterative algorithm is developed to num(t)
321
merically obtain the feedback gain Ki
322
Algorithm 1:
323
324
325
as follows
Step 1 Initialize the maximum number of the iterations imax and the set o ˜ 0 that satisfies inequalities (13), (23), (24) and (29). Q0i , N
Step 2 Solve the following optimization problem: X ˜ 0 Rw + R ˜ w R0 min trace R w w
n ˜ 0 , R0 , W0 , P0 , R w w
CR IP T
320
s.t. LM Is in inequalities (13), (23), (24) and (29)
326
Step 3 Substitute the feasible solution derived from Step 2 into inequality (12), if it is satisfied,
327
then output the feasible value of the demanded matrices and EXIT.
329
330
Step 4 If i > imax , then EXIT. Otherwise, set i = i + 1. o n o n n ˜ f , where R ˜j = R ˜ fw , ˜ fw , Rfw , Wf , Pf , Qf , N ˜ jw , Rjw , Wj , Pj , Qj , N Step 5 Update R i i o ˜ f is the feasible set derived from Step 2. Rfw , Wf , Pf , Qfi , N
AN US
328
Step 6 Go to Step 2.
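For readers who want to prototype Algorithm 1, the sketch below shows the cone complementarity linearization loop in CVXPY on a deliberately tiny stand-in problem: a single coupling constraint R̃ ≈ R⁻¹ enforced through its Schur-complement block. The paper's actual constraints are the much larger LMIs (13), (23), (24) and (29); the matrix sizes, initial iterates and tolerance here are assumptions made only to keep the loop structure visible.

```python
import cvxpy as cp
import numpy as np

# Structural sketch of the cone complementarity linearization (CCL) loop in
# Algorithm 1, shown on a tiny stand-in problem: drive R_tilde towards R^{-1}
# via the Schur-complement block [[R_tilde, I], [I, R]] >= 0.  All sizes and
# numbers below are assumptions; the paper's real constraints are far larger.
n, max_iter = 3, 30
M = cp.Variable((2 * n, 2 * n), PSD=True)        # plays [[R_tilde, I], [I, R]]
Rt, R = M[:n, :n], M[n:, n:]                     # blocks of interest
constraints = [M[:n, n:] == np.eye(n)]           # off-diagonal block fixed to I

Rk, Rtk = np.eye(n), np.eye(n)                   # Step 1: feasible initial point
for _ in range(max_iter):
    # Step 2: minimize the linearized complementarity measure trace(R_tilde R)
    objective = cp.Minimize(cp.trace(Rtk @ R) + cp.trace(Rk @ Rt))
    cp.Problem(objective, constraints).solve()
    Rk, Rtk = R.value, Rt.value                  # Step 5: update the iterates
    if np.allclose(Rtk @ Rk, np.eye(n), atol=1e-3):
        break                                    # Step 3: R_tilde ~= R^{-1} reached
```

Steps 1, 2, 3 and 5 of Algorithm 1 map onto the initialization, the linearized trace objective, the convergence test and the iterate update, respectively; in the full problem the test in Step 3 is replaced by checking inequality (12).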
332
Remark 12. In Algorithm 1, it has been indicated that the optimization problem in Step 2 is
333
constrained by the inequalities (13), (23), (24) and (29), which implies that the more complicated
334
these inequalities are, the slower the numerical computation will be. Specifically, the dimension of
335
the matrix variables is increased with the growing number of agents. Hence, the more agents the
336
multi-agent systems contain, the slower the feedback gain will be derived. But it should be noticed
337
that the derivation of the feedback gain is conducted before the consensus mission. Once the feedback
338
gain is derived prior to the consensus mission, the consensus will be accomplished successfully using
339
the proposed control algorithm in Eq. (5).
341
ED
PT
CE
AC
340
M
331
Remark 13. It is pointed out in Remark 12 that the growing number of agents will not greatly influence the achievement of consensus. However, it is worth mentioning that the communication
342
topology will have a strong impact on the achievement of consensus. Unsuitable selection of the
343
communication topology may lead to the failure of the consensus mission. It can be observed from
344
the parameter N in inequality (12) that the feedback gain is closely related to the communication 22
ACCEPTED MANUSCRIPT
347
topology. In matrix N, the distribution of the feedback gain Km(t) is largely determined by the matrix L m(t) + D m(t) ⊗ In , and the structure of the matrix L m(t) + D m(t) ⊗ In is completely determined
348
it may result in an ineffective distribution of the feedback gain, which in turn fails the consensus
349
mission.
350
Remark 14. In this work, the matrix computation has to be conducted to derive the values of
351
eTi (ts )Pi
352
with other event-triggered conditions for the purpose of simplification in the future work. For ex-
353
ample, motivated by the work in [42], the event-triggered condition (10) can hopefully be simplified
354
by just comparing the control output values in different sampling instants. Once the norm of the
355
error vector of the control output is greater than a specific positive scalar, the control output will be
356
updated.
357
Remark 15. It is possible to extend the proposed results to the event-triggered consensus problem
358
with packet dropouts. As motivated by [43], an information indicator can be incorporated in the
359
stability analysis. If the information indicator equals to zero, the information packet will be ignored;
360
otherwise, the information packet is received by the specific agent. It is also possible to further
361
generalize the event-triggered consensus problem with packet dropouts by adopting more generic
362
stochastic model to describe the switching of the information indicator.
363
4
364
Four Chua’s circuits are utilized in the numerical simulation. In the simulated leader-follower
365
mission, a self-driven Chua’s circuit will generate a desired trajectory. At the same time, the desired
366
367
m(t)
ei (ts ) and rTi (ts )Pi
ri (ts ). It is possible to replace the event-triggered condition (10)
PT
ED
M
AN US
m(t)
CR IP T
by the communication topology. Therefore, if the communication topology is inappropriately selected,
Simulation
CE
346
AC
345
trajectory is stochastically broadcast to agent 1 and 2, i.e. they are considered as the leaders of the group. Since the desired trajectory is not available to agent 3 and 4, they can only exchange the
368
local information with their neighbors according to the communication topology, namely, they are
369
the followers in the leader-follower mission.
23
ACCEPTED MANUSCRIPT
370
The dynamics of Chua's circuit can be described as follows

\dot{x}_i(t) = A x_i(t) + B (I_n + \Delta_i(t)) f(x_i(t)) + u_i(t)    (30)

where Δ_i(t) represents the norm-bounded uncertainty in agent i,

A = \begin{bmatrix} -a m_1 & a & 0 \\ 1 & -1 & 1 \\ 0 & -b & 0 \end{bmatrix}, \quad
B = \begin{bmatrix} -a(m_0 - m_1) & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad
x_i(t) = \begin{bmatrix} x_i^1(t) \\ x_i^2(t) \\ x_i^3(t) \end{bmatrix}, \quad
f(x_i^1(t)) = \frac{1}{2}\left( \left| x_i^1(t) + c \right| - \left| x_i^1(t) - c \right| \right)

and i = 1, 2, 3, 4, a = 9, b = 14.28, c = 1, m_0 = 1/7, m_1 = 2/7 [34].
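To reproduce the leader trajectory of Figure 1, the self-driven Chua's circuit (the uncontrolled, uncertainty-free case of Eq. (30)) can be integrated as sketched below. The solver choice, step limit and 15 s horizon are assumptions; the parameters and the leader's initial state (0.1, 0.5, 0.9) are taken from this section, and only the first column of B matters here because f acts on x_i^1 alone.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the self-driven leader circuit (Eq. (3) specialized to Eq. (30)
# with u = 0 and Delta = 0).  Solver settings are assumptions; a, b, c, m0,
# m1 and the initial state come from Section 4.
a, b, c, m0, m1 = 9.0, 14.28, 1.0, 1.0 / 7.0, 2.0 / 7.0
A = np.array([[-a * m1, a, 0.0],
              [1.0, -1.0, 1.0],
              [0.0, -b, 0.0]])
B = np.zeros((3, 3))
B[0, 0] = -a * (m0 - m1)                  # only entry that multiplies f(x^1)

def f(x):
    # Chua piecewise-linear nonlinearity acting on the first state component
    return np.array([0.5 * (abs(x[0] + c) - abs(x[0] - c)), 0.0, 0.0])

def leader(t, x):
    return A @ x + B @ f(x)

sol = solve_ivp(leader, (0.0, 15.0), [0.1, 0.5, 0.9], max_step=1e-2)
x1, x2, x3 = sol.y                        # leader states over time
```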
Remark 16. System uncertainty is considered in each agent, as shown in Eq. (30). In numerical
374
simulations, bounded random signal is generated to simulate system uncertainty, i.e. ∆ i (t) is a
375
random signal in numerical simulations, and the boundary of the uncertainty in each agent can be
376
observed in Table 1.
M
373
Since the communication relationship is dynamically changing, two communication topologies
378
are considered in the simulation and they are stochastically switched with the evolve of the simula-
379
tion. Figures 2(a) and 2(b) depict these two communication topologies. As shown in Figure 2(a),
380
both agent 1 and 2 have access to the desired trajectory, and agent 2 can also get information from
381
agent 1. Agent 3 has access to both agent 1 and 2, while agent 4 will only receive information from
382
agent 2. In Figure 2(b), agent 1 loses its connection with the self-driven Chua’s circuit, and only
PT
CE
has access to agent 2. The corresponding Lm(t) and Dm(t) are 1 0 0 0 0 1 0 0 D1 = 0 0 0 0 0 0 0 0 0 0 0 0 −1 1 0 0 L1 = −1 −1 2 0 0 −1 0 1
AC
383
ED
377
24
ACCEPTED MANUSCRIPT
and
D2
L2
0 0 0 0 1 0 = 0 0 0 0 0 0 1 −1 0 0 = −1 −1 0 −1
0 0 0 0 0 0 2 0
0 0 0 1
On the basis of the proposed Algorithm 1, the feedback gains are derived as follows −39.9851 −10.9067 6.5041 6.6656 K11 = −4.8488 −9.9629 5.7949 10.8603 −35.3108 −15.2637 −4.6030 1.4749 3.8932 K12 = −2.2798 −4.8693 1.4865 4.9797 −15.3802 −10.4316 −2.7374 1.5970 2.0403 K13 = −1.5946 −3.8823 1.2688 3.2532 −11.2940 −20.3105 −5.3863 3.7223 4.5170 K14 = −3.4134 −6.1899 3.1269 5.5011 −22.6398 and
−10.7246 −2.1144 2.1870 = −1.2557 −1.9206 1.6094 1.8228 2.2246 −7.5080 −23.8219 −5.7827 4.6505 6.0614 = −4.1711 −4.8507 4.4891 6.5738 −19.1308 −5.2199 −0.8650 0.8340 = −0.7885 −1.5873 0.8814 0.7755 0.8893 −4.9528 −9.8336 −1.8675 1.8207 2.1271 = −1.6449 −2.5436 1.6511 2.0199 −10.0260
CE
PT
K21
AC
386
ED
M
AN US
385
CR IP T
384
K22
K23
K24
25
ACCEPTED MANUSCRIPT
CR IP T
and P21 =
P22 =
11.6449 0.6119 0.6119 11.3196 −0.4361 −0.8545 20.0943 3.0129 3.0129 13.2385 −2.8212 −4.1552 10.9565 −0.0534 −0.0534 11.0480 0.2480 0.0130 10.6029 −0.0930 −0.0930 10.8000 0.2353 0.0607
ED
P23 =
AN US
388
The weight matrices in event-triggered condition (10) are derived as follows 16.1566 2.4607 −0.5934 P11 = 2.4607 16.6611 −2.6803 −0.5934 −2.6803 20.0023 15.5444 1.9593 −0.0673 P12 = 1.9593 15.7926 −1.9338 −0.0673 −1.9338 19.2448 11.1575 0.3695 0.6046 P13 = 0.3695 12.8525 −0.6089 0.6046 −0.6089 12.7390 10.8161 0.3009 0.4923 P14 = 0.3009 11.8288 −0.1179 0.4923 −0.1179 12.4691
M
387
The initial values of the self-driven Chua’s circuit and the four agents are 0.1 −0.5 1 −1.5 1.5 x0desired = 0.5 x01 = 2 x02 = −3 x03 = −2 x04 = −3 0.9 1.2 1.5 2 −1
390
391
AC
CE
389
PT
P24 =
−0.4361 −0.8545 11.9011 −2.8212 −4.1552 21.1905 0.2480 0.0130 11.2533 0.2353 0.0607 11.0836
The values of other parameters are shown in Table 1
The desired trajectory of the multi-agent systems is shown in Figure 1, and it is generated by the
392
self-driven Chua’s circuit. Applying the controller in Eq. (5), the tracking errors of the four agents,
393
defined as xdesired (t) − xi (t), are exhibited in Figures 3 and 4. It is clearly observed that all the
394
tracking errors along three directions converge to zero with the appearance of system uncertainties in 26
ACCEPTED MANUSCRIPT
Table 1: Parameters

Parameter                               Value
Sampled period, h                       0.01
Lipschitz constant, α                   1
σ1                                      0.1
σ2                                      3
Boundary of uncertainty in agent 1      0.1
Boundary of uncertainty in agent 2      0.2
Boundary of uncertainty in agent 3      0.5
Boundary of uncertainty in agent 4      0.3
every agent, which firmly demonstrates the effectiveness and robustness of the proposed controller.
396
Figure 5 shows the control input signals of agent 1. The solid lines represent the periodically
397
sampled signal, while the event-triggered control input signals are accordingly displayed using those
398
lines other than solid line. It is further shown in the zoom-in window that the update frequency
399
of the event-triggered signal is much lower than that of the periodically sampled signal. Namely,
400
the computational burden of each agent is greatly reduced in the proposed event-triggered strategy.
401
The switching signal is presented in Figure 6, and the value “1” and “-1” indicates the Topology 1
402
and Topology 2, respectively. The topologies are switched stochastically according to the Markov
403
jump process.
404
Remark 17. Compared with the previous work [14, 16, 18, 19, 26–28], the main characteristics
405
can be exhibited from the simulation results. It is displayed in Figure 5 that the update frequency
406
of the event-triggered control signal is obviously lower than that of the continuous signal or period-
407
ically sampled signal. Moreover, as shown in Figures 3 and 4, the tracking errors converge to zero
408
eventually even with the appearance of system uncertainties.
409
5
M
ED
PT
CE
Conclusion
AC
410
AN US
395
A leader-follower consensus algorithm for networked Lipschitz multi-agent systems is systematically
411
investigated in this work. A feedback consensus controller is successfully developed with the event-
412
triggered condition. In the multi-agent systems, Markov jump process is adopted to describe the
413
stochastic switching topologies. Since the information is locally shared through a digital network, a 27
ACCEPTED MANUSCRIPT
Figure 1: Desired trajectory

Figure 2: Communication topologies ((a) Communication topology 1; (b) Communication topology 2)
Figure 3: Tracking errors of agent 1 and 2

Figure 4: Tracking errors of agent 3 and 4

Figure 5: Control input of agent 1
Figure 6: Stochastic switching of the two topologies
time-delay equivalent approach is utilized to solve the discrete-time control problem caused by the
415
discontinuous state feedback. By taking advantage of the Lyapunov functional method, the sufficient
416
condition for system stability is derived with the consideration of system uncertainties. Moreover,
417
the feedback gain of the proposed controller can be derived by the presented optimization algorithm.
418
Furthermore, the effectiveness of the proposed control algorithm is demonstrated by the numerical
419
simulation. In the future work, the information sharing and algorithm simplification are expected
420
to be improved essentially. It is widely assumed that the information exchange among neighboring
421
agents are updated at the same time. However, asynchronous information update is more generic
422
and flexible in practice. Hence, the consensus algorithm with asynchronous information update can
423
be systematically investigated in the future work. In addition, the consensus algorithm proposed in
424
this paper is expected to be simplified to further reduce the computational burden of each agent.
425
Appendix
426
In order to further clarify the expression in Eqs. (7, 8), Kronecker product is briefly introduced
428
M
ED
PT
CE
AC
427
AN US
414
with two examples in this section, and details can be found in [44]. Definition 3. Suppose A ∈ Rm×n , B ∈ Rp×q , then the Kronecker product of matrices A and B is
30
ACCEPTED MANUSCRIPT
430
431
defined as
a11 B . . . a1n B .. .. mp×nq .. A⊗B= ∈R . . . am1 B . . . amn B
Therefore, based on Definition 3, the following expression can be derived as an example to show the derivation with Kronecker product. Ae1 (t) Ae2 (t) = .. . Aek (t)
A 0 .. .
0 A .. .
0 0 .. .
0
... ... e1 (t) e2 (t) = Ik ⊗ A . ..
ek (t)
(A.2)
Similarly, another example including both Kronecker product and Laplacian matrix is provided as follows
=
AC
=
− xj (tu )]
− xj (tu )] .. . m(t) P Kk [x (t ) − x (t )] u j u k vj ∈NG (vk ) P m(t) [x (t ) − x (t )] K1 0 0 0 1 u j u v ∈N (v ) P j G 1 m(t) 0 K2 0 0 vj ∈NG (v2 ) [x2 (tu ) − xj (tu )] .. .. .. .. .. . . . . . P m(t) [x (t ) − x (t )] u j u k 0 ... . . . Kk vj ∈NG (vk ) m(t) m(t) m(t) m(t) K1 0 0 0 L11 In L12 In . . . L1k In m(t) m(t) m(t) m(t) 0 K2 0 0 L21 In L22 In . . . L2k In .. .. .. .. .. .. .. .. . . . . . . . .
CE
=
m(t) P vj ∈NG (v1 ) [x1 (tu ) m(t) P K2 vj ∈NG (v2 ) [x2 (tu )
K1
M
ED
434
A
e1 (t) e2 (t) .. .
PT
433
where ei ∈ Rn .
0 0 .. .
AN US
ek (t)
432
(A.1)
CR IP T
429
0
m(t)
K1 0 .. .
0
m(t)
...
. . . Kk
0
0 0 .. .
m(t) K2
.. . ...
0 0 .. .
m(t)
. . . Kk
m(t)
m(t)
Lk1 In Lk2 In e1 (t) e2 (t) m(t) L ⊗ In . ..
ek (t)
m(t)
. . . Lkk In
e1 (t) e2 (t) .. . ek (t)
(A.3)
435
In Eq. (A.3), Laplacian matrix is involved to simplify the expression with matrix multiplication.
436
Details about Laplacian matrix and its application in multi-agent systems can be found in [2, 45, 46] 31
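As a quick numerical companion to Eqs. (A.2) and (A.3), the sketch below verifies both identities with NumPy; the dimensions, random data and choice of agent are illustrative assumptions, while the Laplacian used is the matrix L^1 from Section 4.

```python
import numpy as np

k, n = 4, 3
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n))
e = [rng.normal(size=n) for _ in range(k)]           # stand-ins for e_i(t)

# Eq. (A.2): stacking A e_i equals (I_k ⊗ A) applied to the stacked vector.
lhs = np.concatenate([A @ ei for ei in e])
rhs = np.kron(np.eye(k), A) @ np.concatenate(e)
assert np.allclose(lhs, rhs)

# Eq. (A.3): block row i of (L ⊗ I_n) e equals the sum over neighbors of (e_i - e_j).
L = np.array([[0, 0, 0, 0],
              [-1, 1, 0, 0],
              [-1, -1, 2, 0],
              [0, -1, 0, 1]], dtype=float)           # L^1 from Section 4
i, neighbors = 2, (0, 1)                             # agent 3 and its neighbors
direct = sum(e[i] - e[j] for j in neighbors)
via_kron = (np.kron(L, np.eye(n)) @ np.concatenate(e))[i * n:(i + 1) * n]
assert np.allclose(direct, via_kron)
```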
ACCEPTED MANUSCRIPT
438
439
440
441
442
443
References [1] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multiagent systems,” Proc. IEEE, pp. 215 – 233, 2007. [2] W. Ren, “Information consensus in multivehicle cooperative control,” IEEE Control Syst. Mag., vol. 27, no. 2, pp. 71 – 82, 2007.
CR IP T
437
[3] J. Fax and R. Murray, “Information flow and cooperative control of vehicle formations,” IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1465 – 1476, 2004.
[4] R. Olfati-Saber and R. Murray, “Consensus problems in networks of agents with switching
445
topology and time-delays,” IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1520 – 1533, 2004.
AN US
444
446
[5] N. A. Lynch, Distributed Algorithms, 1st ed. Morgan Kaufmann, 1996.
447
[6] T. A. Witten and L. M. Sander, “Diffusion-limited aggregation, a kinetic critical phenomenon,”
452
453
454
455
456
M
ED
451
1974.
[8] L. Liu and J. Shan, “Distributed formation control of networked “lagrange systems with fault diagnosis,” J. Franklin Inst., vol. 352, no. 3, pp. 952 – 973, 2015.
PT
450
[7] M. H. DeGroot, “Reaching a consensus,” J. Am. Stat. Assoc., vol. 69, no. 345, pp. 118 – 121,
[9] H. Su, X. Wang, and G. Chen, “Rendezvous of multiple mobile agents with preserved network
CE
449
Phys. Rev. Lett., vol. 47, no. 19, pp. 1400 – 1403, 1981.
connectivity,” Syst. Control Lett., vol. 59, no. 5, pp. 313 – 322, 2010. [10] R. Olfati-Saber, “Flocking for multi-agent dynamic systems: Algorithms and theory,” IEEE
AC
448
Trans. Autom. Control, vol. 51, no. 3, pp. 401 – 420, 2006.
457
[11] Z.-J. Tang, T.-Z. Huang, J.-L. Shao, and J.-P. Hu, “Leader-following consensus for multi-agent
458
systems via sampled-data control,” IET Control Theory Appl., vol. 5, no. 14, pp. 1658 – 1665,
459
2011. 32
ACCEPTED MANUSCRIPT
[12] D. V. Dimarogonas, E. Frazzoli, and K. H. Johansson, “Distributed event-triggered control for
461
multi-agent systems,” IEEE Trans. Autom. Control, vol. 57, no. 5, pp. 1291 – 1297, 2012.
462
[13] H. Li, X. Liao, T. Huang, and W. Zhu, “Event-triggering sampling based leader-following
463
consensus in second-order multi-agent systems,” IEEE Trans. Autom. Control, vol. 60, no. 7,
464
pp. 1998 – 2003, 2015.
CR IP T
460
465
[14] X. Mu, X. Xiao, K. Liu, and J. Zhang, “Leader-following consensus of multi-agent systems
466
with jointly connected topology using distributed adaptive protocols,” J. Franklin Inst., vol.
467
351, no. 12, pp. 5399 – 5410, 2014.
469
[15] Q. Song, J. Cao, and W. Yu, “Second-order leader-following consensus of nonlinear multi-agent systems via pinning control,” Syst. Control Lett., vol. 59, pp. 553 – 562, 2010.
AN US
468
[16] Q. Song, F. Liu, J. Cao, and W. Yu, “M-matrix strategies for pinning-controlled leader-following
471
consensus in multiagent systems with nonlinear dynamics,” IEEE Trans. Cybern., vol. 43, no. 6,
472
pp. 1688 – 1697, 2013.
474
[17] H. Cai and J. Huang, “Leader-following consensus of multiple uncertain euler-lagrange systems under switching network topology,” Int. J. Gen. Syst., vol. 43, no. 3-4, pp. 294 – 304, 2014.
ED
473
M
470
[18] J. Xi, X. Yang, Z. Yu, and G. Liu, “Leader-follower guaranteed-cost consensualization for high-
476
order linear swarm systems with switching topologies,” J. Franklin Inst., vol. 352, no. 4, pp.
477
1343 – 1363, 2015.
CE
PT
475
[19] G.-H. Xu, Z.-H. Guan, D.-X. He, M. Chi, and Y.-H. Wu, “Distributed tracking control of
479
second-order multi-agent systems with sampled data,” J. Franklin Inst., vol. 351, no. 10, pp.
480
481
AC
478
4786 – 4801, 2014.
[20] L. Ding and G. Guo, “Sampled-data leader-following consensus for nonlinear multi-agent sys-
482
tems with markovian switching topologies and communication delay,” J. Franklin Inst., vol.
483
352, no. 1, pp. 369 – 383, 2015.
33
ACCEPTED MANUSCRIPT
484
485
[21] W. Ni and D. Cheng, “Leader-following consensus of multi-agent systems under fixed and switching topologies,” Syst. Control Lett., vol. 59, pp. 209 – 217, 2010. [22] M. S. Mahmoud and G. D. Khan, “Leader-following discrete consensus control of multi-agent
487
systems with fixed and switching topologies,” J. Franklin Inst., vol. 352, no. 6, pp. 2504 – 2525,
488
2015.
CR IP T
486
489
[23] Z.-H. Guan, F.-L. Sun, Y.-W. Wang, and T. Li, “Finite-time consensus for leader-following
490
second-order multi-agent networks,” IEEE Trans. Circuits Syst. I Regul. Pap., vol. 59, no. 11,
491
pp. 2646 – 2654, 2012.
[24] N. Huang, Z. Duan, and Y. Zhao, “Leader-following consensus of second-order non-linear multi-
493
agent systems with directed intermittent communication,” IET Control Theory Appl., vol. 8,
494
no. 10, pp. 782 – 795, 2014.
496
[25] Y. Mikheev, V. Sobolev, and E. Fridman, “Asymptotic analysis of digital control systems,” Autom. Remote Control, vol. 49, pp. 1175 – 1180, 1988.
M
495
AN US
492
[26] R. Rakkiyappan, B.Kaviarasan, and J. H.Park, “Leader-following consensus for networked
498
multi-teleoperator systems via stochastic sampled-data control,” Neurocomputing, vol. 164, pp.
499
272 – 280, 2015.
ED
497
[27] R. Rakkiyappan, B. Kaviarasan, and J. Cao, “Leader-following consensus of multi-agent systems
501
via sampled-data control with randomly missing data,” Neurocomputing, vol. 161, pp. 132 –
502
147, 2015.
504
505
CE
[28] G. Guo, L. Ding, and Q. Han, “A distributed event-triggered transmission strategy for sampled-
AC
503
PT
500
data consensus of multi-agent systems,” Automatica, vol. 50, no. 5, pp. 1489 – 1496, 2014.
[29] J. Cortes, S. Martinez, and F. Bullo, “Spatially-distributed coverage optimization and control
506
with limited-range interactions,” ESAIM: Control, Optimisation and Calculus of Variations,
507
vol. 11, pp. 691 – 719, 2005.
34
ACCEPTED MANUSCRIPT
508
[30] L. Wang, S. Mo, H. Qu, D. Zhou, and F. Gao, “H∞ design of 2d controller for batch processes
509
with uncertainties and interval time-varying delays,” Control Eng. Pract., vol. 21, no. 10, pp.
510
1321 – 1333, 2013. [31] W. Paszke, E. Rogers, K. GaÅ‚kowski, and Z. Cai, “Robust finite frequency range iterative
512
learning control design and experimental verification,” Control Eng. Pract., vol. 21, no. 10, pp.
513
1310 – 1320, 2013.
CR IP T
511
514
[32] G. A. Ingram, M. A. Franchek, V. Balakrishnan, and G. Surnilla, “Robust siso H∞ controller
515
design for nonlinear systems,” Control Eng. Pract., vol. 13, no. 11, pp. 1413 – 1423, 2005. [33] K. Zhou, Essentials of Robust Control, 1st ed. Prentice Hall, 1998.
517
[34] X. Xiao, L. Zhou, and Z. Zhang, “Synchronization of chaotic lur’e systems with quantized
518
sampled-data controller,” Commun. Nonlinear Sci. Numer. Simul., vol. 19, no. 6, pp. 2039 –
519
2047, 2014.
523
524
525
M
ED
522
delay,” Automatica, vol. 45, no. 10, pp. 2300 – 2306, 2009. [36] L. Xie, “Output feedback H∞ control of systems with parameter uncertainty,” Int. J. Control, vol. 63, no. 4, pp. 741 – 750, 1996.
PT
521
[35] Z. Fei, H. Gao, and P. Shi, “New results on stabilization of markovian jump systems with time
[37] E. Fridman, A. Seuret, and J.-P. Richard, “Robust sampled-data stabilization of linear systems: an input delay approach,” Automatica, vol. 40, no. 8, pp. 1441 – 1446, 2004.
CE
520
AN US
516
[38] P. Naghshtabrizi, J. P. Hespanha, and A. R. Teel, “Exponential stability of impulsive systems
527
with application to uncertain sampled-data systems,” Syst. Control Lett., vol. 57, no. 5, pp.
528
529
530
AC
526
378 – 385, 2008.
[39] E. Fridman, “A refined input delay approach to sampled-data control,” Automatica, vol. 46, no. 2, pp. 421 – 427, 2010.
35
ACCEPTED MANUSCRIPT
531
[40] L. E. Ghaoui, F. Oustry, and M. AitRami, “A cone complementarity linearization algorithm
532
for static output-feedback and related problems,” IEEE Trans. Autom. Control, vol. 42, no. 8,
533
pp. 1171 – 1176, 1997.
535
[41] P. Gahinet, A. Nemirovski, A. Laub, and M. Chilali, LMI Control Toolbox User’s Guide, 1st ed. The MathWorks, Inc, 1995.
CR IP T
534
536
[42] D. Ding, Z. Wang, B. Shen, and G. Wei, “Event-triggered consensus control for discrete-time
537
stochastic multi-agent systems: The input-to-state stability in probability,” Automatica, vol. 62,
538
pp. 284 – 291, 2015.
[43] D. Ding, Z. Wang, B. Shen, and H. Dong, “Event-triggered distributed H∞ state estimation
540
with packet dropouts through sensor networks,” IET Control Theory Appl., vol. 9, no. 13, pp.
541
1948 – 1955, 2015.
546
547
M
ing interaction topologies,” IEEE Trans. Autom. Control, vol. 50, no. 5, pp. 655 – 661, 2005.
ED
545
[45] W. Ren and R. W. Beard, “Consensus seeking in multiagent systems under dynamically chang-
[46] W. Ren and R. Beard, Distributed Consensus in Multi-vehicle Cooperative Control, 1st ed. Springer-Verlag, 2008.
PT
544
and Applied Mathematics, 2004.
CE
543
[44] A. J. Laub, Matrix Analysis for Scientists and Engineers, 1st ed. SIAM: Society for Industrial
AC
542
AN US
539
36