Automatica 104 (2019) 189–195
Brief paper
Reset control for synchronization of multi-agent systems

Xiangyu Meng (a), Lihua Xie (b), Yeng Chai Soh (b)

(a) Division of Electrical & Computer Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
(b) EXQUISITUS, Centre for System Intelligence and Efficiency, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
Article history: Received 18 July 2017; Received in revised form 12 October 2018; Accepted 5 January 2019.

Keywords: Reset control; Synchronization; Multi-agent systems

Abstract: Standard approaches to synchronization problems in multi-agent systems apply local smooth control signals to achieve ultimate coherence. Most existing algorithms emphasize asymptotic behavior rather than transient performance. An alternative design framework based on reset control is presented to guarantee not only asymptotic convergence but also a desired transient response. Specifically, the control protocol consists of a proportional integral (PI) compensator and a Clegg integrator (CI), where the integral term is reset to zero whenever the proportional term crosses zero. Numerical examples are included to illustrate the main result, and to compare the proposed reset control approach with the traditional static/dynamic state feedback control and the quasi-reset control methods.

© 2019 Elsevier Ltd. All rights reserved.
✩ The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Antonis Papachristodoulou under the direction of Editor Christos G. Cassandras.
∗ Corresponding author. E-mail addresses: [email protected] (X. Meng), [email protected] (L. Xie), [email protected] (Y.C. Soh).
https://doi.org/10.1016/j.automatica.2019.02.042

1. Introduction

Advances in communication and computation technologies have made distributed control a more viable option, leading to a new research area: cooperation and coordination of multi-agent systems. Theoretical research on multi-agent systems has since flourished, as is evident from several monographs (Mesbahi & Egerstedt, 2010; Ren & Beard, 2008) and numerous articles in different settings, such as consensus (Liu, Lam, Yu, & Chen, 2016; Qin, Zheng, & Gao, 2011; Yang, Meng, Dimarogonas, & Johansson, 2014), event-triggered control (Hu, Liu, & Feng, 2016; Meng & Chen, 2013; Meng, Xie, & Soh, 2017, 2018; Wu, Meng, Xie, Lu, Su, & Wu, 2017; Xiao, Meng, & Chen, 2015), tracking control (Zhao, Duan, Wen, & Zhang, 2013), containment control (Meng, Ren, & You, 2010), output regulation (Meng et al., 2018; Su & Huang, 2012) and pulse-width modulation (Meng, Meng, Chen, Dimarogonas, & Johansson, 2016). Classical control algorithms have been tailored to particular application needs, ranging from search/rescue to patrol/surveillance (Hu, Xu, & Xie, 2013; Meng, He, Teo, Su, & Xie, 2015). However, most of the aforementioned results focus on the asymptotic behavior of a closed-loop system rather than on transient performance. From a pragmatic point of view, transient response is equally as important as asymptotic behavior. There has long been interest in control theory in overcoming the so-called fundamental performance limitations of linear control system design. Reset control is one way to deal with trade-offs between competing design objectives (Beker, Hollot, & Chait, 2001; Guo, Gui, Yang, & Xie, 2012; Zhao, Yin, & Shen, 2016). As the name suggests, a reset controller is simply a standard controller endowed with a reset mechanism (Baños & Barreiro, 2011; Guo, Xie, & Wang, 2015). More specifically, the controller state is reset to zero at the zero crossings of some defined error signals. Motivated by the success of reset control for single-agent systems with linear dynamics, we make an attempt to apply reset control to the synchronization problem of multi-agent systems to improve transient performance. Transient performance, such as settling time, is also important in the consensus problem. For example, the transient stability in smart grids reflects the resilience of synchronous power generators to large disturbances, and the power system's transients should decay within a short period of time. Also, in formation control, it is desirable for multiple agents to reach a specific formation within the shortest possible time. However, reset control is much more challenging in multi-agent systems due to the discontinuity of the control signal and the interactions between neighboring agents, which make the standard Lyapunov stability method inapplicable. The authors in Yucelen and Haddad (2014) use a quasi-impulsive, uniformly continuous switching to approximate standard impulsive resetting, and a backward Euler method to implement the integral behavior. Ultimate consensus is shown, and a worst-case transient performance bound is provided by utilizing Lyapunov-Krasovskii theory. Another work which
draws our attention is from Bragagnolo, Morărescu, Daafouz, and Riedinger (2016). The authors apply reset mechanisms to the leaders of strongly connected clusters in a centralized way. It is worth mentioning that the reset mechanism is used distributively in our work, and we use non-smooth analysis to overcome the discontinuities of the control signal (Ai, Song, & You, 2016). Some preliminary work on reset control of multi-agent systems has also been done in the leader-following framework for single and double integrator dynamics in Meng, Xie, and Soh (2016a,b), respectively. In this paper, we address the synchronization problem for multi-agent systems by using the standard impulsive reset control. We consider a multi-agent network with single integrator dynamics, and construct a control law comprising a proportional integral compensator, and a Clegg integrator. The integral term is reset to zero whenever an agent is in the centroid of its neighbors. Under this framework, we study the regularity and stability of the reset control system, and compare the performance with existing results. For the regularity problem, we show that at least one agent will cross the centroid of its neighbors if the dominant nonzero eigenvalues of the closed-loop matrix are complex. We also show that the whole network will reach consensus if the control gains are chosen appropriately. In addition, we compare the proposed reset control strategy with the existing static feedback and dynamic state feedback methods, and demonstrate that the proposed reset control method has a faster convergence rate when compared with the static feedback control, and a smaller peak value when compared with the dynamic feedback control. We show explicitly how the time regularization technique can be applied to Eqs. (20) and (21) in Yucelen and Haddad (2014) to guarantee the average consensus as well as Zeno free behavior. 
Through a simulation example, we show that when compared with the traditional static/dynamic state feedback control laws and the quasi-reset control (Yucelen & Haddad, 2014), the proposed reset control method provides a better trade-off between fast response and short settling time. The results developed here rely on the graph being fixed and undirected. However, it is possible to extend the results to directed graphs, such as graphs with a directed spanning tree.

2. Algebraic graph theory

Here we collect basic definitions about graphs and their algebraic properties. Further details can be found in Diestel (2010). A multi-agent system consists of n agents, labeled 1, . . . , n. The information flow among agents is described by an undirected graph G. An undirected graph G consists of a vertex set V(G) and an arc set E(G) = {(i, j) | i, j ∈ V(G)}. Vertex i ∈ V(G) represents agent i, and (j, i) ∈ E(G) means that agent j and agent i are neighbors, that is, agent i and agent j can exchange information. Let Ni(G) = {j | (j, i) ∈ E(G)} denote the neighborhood set of agent i. A path of length r from i_0 to i_r in an undirected graph is a sequence i_0, i_1, . . . , i_r of r + 1 distinct vertices starting with i_0 and ending with i_r such that for k = 0, 1, . . . , r − 1, the arc (i_k, i_{k+1}) ∈ E(G). An undirected graph is called connected if any two vertices are linked by a path. The adjacency matrix for the communication graph G is A(G) = [a_ij] ∈ R^{n×n}, where a_ij > 0 if (j, i) ∈ E(G) and a_ij = 0 otherwise. Also assume a_ii = 0 for all i ∈ V(G). The Laplacian matrix for G is defined as L(G) = D(G) − A(G), where D(G) is the degree matrix for G defined as D(G) = diag{Σ_{j=1}^n a_1j, Σ_{j=1}^n a_2j, . . . , Σ_{j=1}^n a_nj}. The dependency on the graph may be omitted if it is clear from the context. For connected graphs, the eigenvalues of L are ordered as 0 = λ1 < λ2 ≤ · · · ≤ λn.
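As a quick illustration of these definitions, the Laplacian of a small undirected graph can be assembled and its spectrum inspected numerically. This is a minimal sketch; the unweighted edges (a_ij = 1), the path-graph example, and the function name are illustrative assumptions, not part of the paper.

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian L = D - A of an undirected graph on vertices 0..n-1.
    Unweighted edges (a_ij = 1) are assumed for illustration."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0   # undirected: symmetric adjacency
    D = np.diag(A.sum(axis=1))    # degree matrix
    return D - A

# Path graph 0-1-2-3 is connected, so 0 = lambda_1 < lambda_2.
L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
lam = np.sort(np.linalg.eigvalsh(L))
assert abs(lam[0]) < 1e-9 and lam[1] > 1e-9
```

Since L = D − A has zero row sums, λ1 = 0 always; connectivity of the graph is equivalent to λ2 > 0.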
3. Main results

The network is modeled by a fixed and connected graph. Consider information states with single integrator dynamics given by

ẋi(t) = ui(t), i = 1, . . . , n,   (1)

where xi(t) ∈ R is the information state and ui(t) is the information control input of agent i. For convenience, define the state difference of agent i with its neighbors as

ξi(t) = Σ_{j∈Ni} (xi(t) − xj(t)),   (2)

and therefore ξ = col{ξ1, ξ2, . . . , ξn} satisfies Σ_{i=1}^n ξi(t) = 0.
The state difference ξi(t) is fed into a Clegg integrator described by

v̇i(t) = ξi(t),   when ξi(t) ≠ 0,
vi(t⁺) = 0,      when ξi(t) = 0,   (3)

with the initial condition vi(0) = 0, and the reset consensus algorithm for agent i is given by

ui(t) = −kξi(t) − hvi(t).   (4)
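To make the protocol concrete, the sketch below integrates (1) under the reset law (3)-(4) with forward Euler, detecting the zero crossing of ξi by a sign change between consecutive steps. The step size, horizon, gains, and the path graph are illustrative assumptions, and the sign-change test only approximates the exact reset condition.

```python
import numpy as np

def simulate_reset_consensus(L, x0, k, h, dt=1e-3, T=30.0):
    """Forward-Euler sketch of the reset protocol (3)-(4): v_i integrates
    xi_i and is reset to zero when xi_i crosses zero (approximated here
    by a sign change between consecutive steps)."""
    x, v = x0.astype(float).copy(), np.zeros(len(x0))
    xi = L @ x                          # xi_i = sum_j (x_i - x_j)
    for _ in range(int(T / dt)):
        x = x + dt * (-k * xi - h * v)  # control law (4)
        v = v + dt * xi                 # Clegg integrator state
        xi_new = L @ x
        v[xi * xi_new <= 0.0] = 0.0     # reset at (approximate) zero crossing
        xi = xi_new
    return x

# Path graph on 4 agents; k > 0 and h > k^2 * lambda_n / 4 as in Theorem 3.
L = np.array([[1., -1., 0., 0.], [-1., 2., -1., 0.],
              [0., -1., 2., -1.], [0., 0., -1., 1.]])
x = simulate_reset_consensus(L, np.array([0., 1., 2., 5.]), k=1.0, h=1.0)
assert np.ptp(x) < 1e-2   # states close together: consensus reached
```

The final common value stays inside [min xi(0), max xi(0)], consistent with (5) below, but is not the initial average in general, which motivates Section 4.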
Remark 1. As is well known, the steady-state error of a unity feedback system for a unit step input is zero if the loop transfer function possesses one or more integrators. To obtain zero steady-state error for a ramp input, two or more integrators are needed. At the same time, each linear integrator introduces a −90° phase angle, whereas the Clegg integrator introduces only a −38.1° phase angle, resulting in a larger phase margin than the linear integrator (Clegg, 1958). Note that the phase margin quantifies relative stability and measures the system's tolerance to time delay. Our purpose here is to show that such a reset algorithm not only ensures consensus of the network, that is, xi(t) → c
(5)
as t → ∞ for any i ∈ V, but also achieves a better transient performance, such as settling time and overshoot, than static and dynamic state feedback consensus algorithms through an appropriate design of the parameters k and h. Here, c is a scalar belonging to [min_{i∈V}{xi(0)}, max_{i∈V}{xi(0)}].

3.1. Regularity

With the definitions
ξ = col{ξ1, . . . , ξn}, v = col{v1, . . . , vn}, χ = col{ξ, v},   (6)
the closed-loop system of (1), (2), (3) and (4) between resets can be written in the form

[ξ̇(t); v̇(t)] = Φ [ξ(t); v(t)],  with  Φ = [−kL  −hL; I  0],   (7)
where the matrix Φ has a zero eigenvalue with algebraic multiplicity of two and geometric multiplicity of one. In addition, the real parts of all other eigenvalues are negative if and only if k > 0 and h > 0 (Ren & Beard, 2008).
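These spectral claims can be checked numerically by assembling Φ from (7) for a small graph. The 3-agent path graph and the gains k = h = 1 are illustrative assumptions; they satisfy k, h > 0 and also 4h > λn k² (λn = 3 here), so the condition (9) introduced below holds and all nonzero eigenvalues come out complex.

```python
import numpy as np

# Phi from (7) for a path graph on 3 agents; k, h > 0 and 4h > lambda_n k^2.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
k, h = 1.0, 1.0
n = L.shape[0]
Phi = np.block([[-k * L, -h * L], [np.eye(n), np.zeros((n, n))]])
mu = np.linalg.eigvals(Phi)
zero = np.abs(mu) < 1e-6                      # loose tolerance: defective zero pair
assert zero.sum() == 2                        # zero eigenvalue, algebraic multiplicity two
assert np.all(mu[~zero].real < 0)             # k, h > 0: the rest lie in the open left half-plane
assert np.all(np.abs(mu[~zero].imag) > 1e-9)  # 4h > lambda_n k^2: all nonzero eigenvalues complex
```

The loose tolerance on the zero pair is deliberate: the zero eigenvalue sits in a Jordan block (geometric multiplicity one), so its numerically computed copies split on the order of the square root of machine precision.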
We denote the set of reset instants for agent i as T_i = {t_l^i}_{l=1}^∞. A reset control system is referred to as regular if there exists at least one agent i ∈ V such that T_i ≠ ∅ and t_1^i < ∞. The regularity problem of a reset control system is analogous to the continuous Skolem-Pisot problem (Bell, Delvenne, Jungers, & Blondel, 2010), which is still open. Here we give a sufficient condition to guarantee that the reset control system (1), (3) and (4) is regular.

Theorem 2. The reset control system (1), (2), (3) and (4) is regular if the dominant nonzero eigenvalues of Φ in (7) are complex.

Proof. First, let us show that zero crossings of ξ(t) and v(t) are not influenced by the zero eigenvalues of Φ under the initial condition v(0) = 0n before the first reset. The block diagonalization of Φ is borrowed from Ren and Beard (2008). There exists an invertible matrix T such that Φ = T diag{J, J′} T⁻¹, where

J = [0  1; 0  0],

and J′ = diag{J1, J2, . . . , Jk}. Each block Ji is a Jordan block for either a real eigenvalue λi or a pair of complex eigenvalues αi ± jβi. Without loss of generality, we can choose T = [w1, w2, . . . , w2n] with w1 = col{0n, 1n} and w2 = col{1n, 0n}, and T⁻¹ = col{v1ᵀ, v2ᵀ, . . . , v2nᵀ} with v1 = col{0n, p} and v2 = col{p, 0n}. Furthermore, we have

exp(Φt) χ(0) = T diag{exp(Jt), exp(J′t)} T⁻¹ χ(0),

where χ(0) = col{ξ(0), 0n} and

exp(Jt) = [1  t; 0  1].

A direct calculation shows that

diag{exp(Jt), exp(J′t)} T⁻¹ χ(0) = col{t pᵀξ(0), pᵀξ(0), exp(J′t) V χ(0)},

where V = col{v3ᵀ, . . . , v2nᵀ}; here t pᵀξ(0) = pᵀξ(0) = 0 since pᵀL = 0nᵀ. Therefore, the zero eigenvalues do not affect the zero crossings of ξ(t) before the first reset.

Next, there exists at least one ei such that fi(t) = eiᵀ exp(Φt) χ(0) is not uniformly zero. The following analysis is based on Bell et al. (2010). Denote the dominant nonzero eigenvalues of Φ as {α + jβi | i = 1, . . . , η}, with corresponding algebraic multiplicities μΦ(α + jβi), and let q = max_{i=1,...,η} μΦ(α + jβi). Then we have

fi(t) = t^{q−1} Σ_{i=1}^η gi(t) + Σ_{i=1}^η O(t^{q−2}) li(t) + h(t),

where gi(t) = exp(αt)[ai cos βi t + bi sin βi t] and li(t) = exp(αt)[ci cos βi t + di sin βi t], with some coefficients ai, bi, ci and di, and h(t) is the sum of the remaining terms corresponding to eigenvalues with real parts less than α. Note that h(t) tends to zero the fastest, and li(t) tends to zero faster than gi(t). For arbitrarily large times, gi(t) takes both positive and negative values while li(t) and h(t) tend to zero, which implies a zero crossing of ξi(t) or vi(t). Lastly, if the index i in fi(t) is greater than n, we still have a zero crossing of ξi(t). This is because a sign change of vi(t) indicates a sign change of ξi(t), since vi(t) = ∫ ξi(τ) dτ. ■

The eigenvalues of Φ associated with λi are

μ_{i±} = (−kλi ± √(k²λi² − 4hλi)) / 2.   (8)

Therefore, the easiest way to make the dominant nonzero eigenvalues of Φ complex is to let

4h > λn k².   (9)

Then, all nonzero eigenvalues of Φ are complex. According to Theorem 2, condition (9) ensures at least one crossing of the reset surface. Without this condition, the proposed reset control algorithm may degenerate into the dynamic state feedback control algorithm (13) to be shown later.

3.2. Consensus analysis

For ease of analysis, we rewrite the closed-loop base system in the alternative compact form

ẋ(t) = u(t), v̇(t) = ξ(t), u(t) = −kξ(t) − hv(t),   (10)

where x = col{x1, . . . , xn} and u = col{u1, . . . , un}. In the following, a formal proof using nonsmooth analysis shows that consensus is guaranteed in the sense of (5).

Theorem 3. Assume that the communication graph is undirected and connected. Given the dynamics of each agent (1), and the control law (3) and (4) with k > 0 and h > (1/4)k²λn, all agents achieve consensus with at least one reset.

Proof. Consider the differential equation

ẋ(t) = X(x(t), v(t)),   (11)

where X(x(t), v(t)) = −kLx(t) − hv(t), and v(t) is regarded as a discontinuous input function of x(t). The solution to (11) is considered here in the Filippov sense. Consider the candidate Lyapunov function

V(x(t)) = max_{i∈V}{xi(t)} − min_{i∈V}{xi(t)}.

It is easy to verify that V(x(t)) ≥ 0, and V(x(t)) = 0 if and only if x1(t) = x2(t) = · · · = xn(t) for a connected graph. Let j, p ∈ V be such that xj(t) = min_{i∈V}{xi(t)} and xp(t) = max_{i∈V}{xi(t)}. Then L_X V(x(t)) = ∂V(x(t))ᵀ X(x(t), v(t)). The generalized gradient of V(x(t)) is ∂V(x(t)) = ζ(t) − ς(t), where ζ(t) = col{ζ1(t), . . . , ζn(t), 0, . . . , 0}, ς(t) = col{ς1(t), . . . , ςn(t), 0, . . . , 0}, ζi(t) ≥ 0, ςi(t) ≥ 0, Σ_{i=1}^n ζi(t) = 1, Σ_{i=1}^n ςi(t) = 1, ζi(t) = 0 if i does not belong to the set of indices at which the maximum is attained, and ςi(t) = 0 if i does not belong to the set of indices at which the minimum is attained (Clarke, 1990). Therefore,

L_X V(x(t)) = Σ_{j=1}^n ςj(t) (kξj(t) + hvj(t)) − Σ_{p=1}^n ζp(t) (kξp(t) + hvp(t)).
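Formula (8) can be verified numerically: the spectrum of Φ is exactly {μ_{i±}} taken over the eigenvalues λi of L, with λi = 0 contributing the double zero. The small path graph and the gains below are illustrative assumptions.

```python
import numpy as np

# Verify (8): every eigenvalue of Phi arises from an eigenvalue lam of L
# via mu = (-k*lam ± sqrt(k^2 lam^2 - 4 h lam)) / 2.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
k, h = 1.0, 1.0
n = L.shape[0]
Phi = np.block([[-k * L, -h * L], [np.eye(n), np.zeros((n, n))]])
mu_phi = np.sort_complex(np.linalg.eigvals(Phi))

mu_formula = []
for lam in np.linalg.eigvalsh(L):
    disc = np.sqrt(complex(k**2 * lam**2 - 4.0 * h * lam))
    mu_formula += [(-k * lam - disc) / 2.0, (-k * lam + disc) / 2.0]
mu_phi_ok = np.allclose(mu_phi, np.sort_complex(np.array(mu_formula)), atol=1e-6)
assert mu_phi_ok
```

Sorting both spectra by real then imaginary part lets them be compared elementwise; the tolerance absorbs the numerical splitting of the defective zero pair.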
If xj(t) = xp(t), then ∂V(x(t)) = Rⁿ. In this case, we have ξ(t) = 0n and v(t) = 0n, and therefore L_X V(x(t)) = 0. If xj(t) ≠ xp(t), there must exist j and p such that ξj(t) < 0 and ξp(t) > 0. In addition, together with the fact that vp(t) = ∫_{t_l^p}^t ξp(τ) dτ and ξp(τ) > 0 for τ ∈ (t_l^p, t], we have kξp(t) + hvp(t) > 0, and similarly kξj(t) + hvj(t) < 0. Therefore, L_X V(x(t)) < 0. By applying the LaSalle Invariance Principle (Cortés, 2006), the states of all agents x(t) converge to the largest invariant set in {x(t) | L_X V(x(t)) = 0}, that is, {x(t) | x1(t) = x2(t) = · · · = xn(t)}, as t → ∞. ■

3.3. Performance analysis

In this subsection, we present some properties and advantages of the reset control (3), (4) compared with the static feedback control

ui(t) = −kξi(t),   (12)

and the dynamic feedback control

ui(t) = −kξi(t) − hvi(t),   (13)
where v̇i(t) = ξi(t) and vi(0) = 0. The closed-loop systems of (1) governed by (12) and (13) are given by ẋ(t) = Y(x(t)), with Y(x(t)) = −kLx(t), and ẋ(t) = Z(x(t)), with Z(x(t)) = −kLx(t) − h∫_0^t Lx(τ) dτ, respectively. In this subsection, we use the decay rate of the Lyapunov function

V(x(t)) = (1/2) xᵀ(t) L x(t)

to characterize the convergence rates of the different algorithms.

Theorem 4. For the same initial condition x(0) and the same feedback gain k, the upper bound of the convergence rate of the reset control (1), (2), (3), (4) is smaller than that of the static feedback (12) for any h > 0.

Proof. Taking the derivative of the Lyapunov function along the trajectories of (1) governed by the static feedback (12) and the reset control (3), (4), we have

L_Y V(x(t)) = −k xᵀ(t) L² x(t)

and

L_X V(x(t)) = −k xᵀ(t) L² x(t) − h xᵀ(t) L v(t),

respectively. By using the inequality

xᵀ(t) L² x(t) ≥ λ2 xᵀ(t) L x(t) = 2λ2 V(x(t)),

we have L_Y V(x(t)) ≤ −2kλ2 V(x(t)). Therefore, the upper bound of V(x(t)) governed by the static feedback (12) is determined by V̄_Y(x(t)) such that

V̄̇_Y(x(t)) = −2kλ2 V̄_Y(x(t))

with V̄_Y(x(0)) = V(x(0)). Since sign(Lx(t)) = sign(v(t)), v(t) can be written as

v(t) = diag{β1(t), . . . , βn(t)} L x(t),

where all βi(t) ≥ 0, i = 1, . . . , n. Then, we obtain

L_X V(x(t)) ≤ −2kλ2 V(x(t)) − 2β(t) hλ2 V(x(t)),   (14)

where β(t) = min_{i∈V}{βi(t)} ≥ 0. Therefore, the upper bound of V(x(t)) governed by the reset control (3), (4) is determined by V̄_X(x(t)) such that

V̄̇_X(x(t)) = −2kλ2 V̄_X(x(t)) − 2β(t) hλ2 V̄_X(x(t))

with V̄_X(x(0)) = V(x(0)). By using the Comparison Lemma (Khalil, 2002), we know that V̄_X(x(t)) ≤ V̄_Y(x(t)) due to the positiveness of the term 2β(t) hλ2 V̄_X(x(t)). Therefore, the upper bound of the Lyapunov function V(x(t)) along the trajectory of the proposed reset control is smaller than that along the trajectory of the static state feedback control (12). ■

Remark 5. As is well known, the exponential convergence rates of the static state feedback control (12) and the dynamic state feedback control (13) are related to λ2 k if k and h satisfy the relationship (9). One can increase the rate of convergence by using a large gain k. However, no matter how large k is, the rate of convergence remains within the exponential family. In addition, an overly large gain is not desirable in practical control systems since it may cause overshoot. We can see from (14) that a large h will also increase the convergence rate of the reset control algorithm.

Remark 6. Even though the comparison is performed only between the upper bounds V̄_X(t) and V̄_Y(t), we emphasize that the upper bound on the convergence rate of the static state feedback method, V̄_Y(t), is very tight (Olfati-Saber & Murray, 2004). In addition, it is difficult to characterize the actual convergence rate of the reset control method due to the time-varying nature of β(t).

Next, we show that the reset control algorithm (3), (4) yields a smaller peak value of the Lyapunov function than the dynamic feedback control (13); utilizing Lyapunov theory to establish worst-case transient performance guarantees has been done in Yucelen and Haddad (2014).

Theorem 7. For the same initial condition x(0) and the same feedback gains k and h, the reset control (3) and (4) has a contraction property, which may not hold for the dynamic feedback control (13), that is,

∥V_X(x(t))∥_∞ ≤ ∥V_Z(x(t))∥_∞.

Proof. Define the set Ψ = {x(t) | V(x(t)) ≤ V(x(0))}. As seen in the previous proof, the set Ψ is compact and strongly invariant under the reset control (3) and (4). On the other hand, the control law (13) can be written as

ui(t) = −kξi(t) − h ∫_0^t ξi(τ) dτ.   (15)

For the control law (13), the set Ψ is not invariant since the second term in (15) may have a different sign from the first term. The conclusion can thus be made. ■

Remark 8. Let us compare the dynamic control law (13) with the reset control law (3) and (4). Recall that the control input of the reset control of agent i can be written as

ui(t) = −kξi(t) − h ∫_{t_l^i}^t ξi(τ) dτ,

where t_l^i is the last reset instant of agent i. The difference lies in the lower limit of the integral. For reset control, the sign of the integral term always aligns with that of the first term kξi(t). However, this is not the case for the dynamic feedback control law (13).
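The comparison in Theorem 4 can be illustrated with a small experiment: integrate (1) under the static law (12) and under the reset law (3)-(4) from the same initial state and track V(x) = ½ xᵀLx. The graph, gains, discretization, and the function name are illustrative assumptions; the sign-change test only approximates the exact reset condition, and the final comparison is an empirical observation for this example, since Remark 6 notes that only the upper bounds are ordered in general.

```python
import numpy as np

def lyap_trajectory(L, x0, k, h, reset, dt=1e-3, T=5.0):
    """Integrate (1) under u = -k*xi - h*v (h = 0 recovers the static
    law (12)); if `reset`, v_i is zeroed when xi_i changes sign.
    Returns samples of V(x) = 0.5 x^T L x."""
    x, v = x0.copy(), np.zeros(len(x0))
    xi = L @ x
    V = []
    for _ in range(int(T / dt)):
        x = x + dt * (-k * xi - h * v)
        v = v + dt * xi
        xi_new = L @ x
        if reset:
            v[xi * xi_new <= 0.0] = 0.0   # Clegg reset at zero crossing
        xi = xi_new
        V.append(0.5 * x @ L @ x)
    return np.array(V)

L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
x0 = np.array([0., 1., 4.])
V_static = lyap_trajectory(L, x0, k=1.0, h=0.0, reset=False)
V_reset = lyap_trajectory(L, x0, k=1.0, h=1.0, reset=True)
assert V_reset[-1] <= V_static[-1]   # faster decay under reset, in this example
```

Between resets the integral state v stays sign-aligned with ξ, so the h-term adds damping; this is the mechanism behind the bound (14).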
4. Further discussions

The proposed reset control algorithm (2), (3) and (4) does not guarantee average consensus, that is, xi(t) ↛ α(x(0)) as t → ∞ in general, where α(x) = (1/n) Σ_{i=1}^n xi. In addition, the above results hold for control without time regularization, which means that Zeno behavior may be present for the proposed reset control algorithm.

In order to preserve average consensus and to rule out Zeno behavior, we propose a modified reset control algorithm:

v̇i(t) = ξi(t),   when (ξi(t) ≠ 0) ∨ (∆(t) ≤ δ),
v(t⁺) = 0,       when (ξi(t) = 0) ∧ (∆(t) > δ),   (16)

where ∆(t) = t − t_l is the elapsed time since the most recent reset instant t_l of the entire network, and δ > 0 is some given constant. First, it is clear that the time regularization based on ∆(t) excludes Zeno behavior. Next, we show that average consensus is guaranteed as well. Taking the derivative of α(x(t)) with respect to t, it follows that

α̇(x(t)) = α(ẋ(t)) = −kα(Lx(t)) − hα(v(t))   (17)

between resets (t_l, t_{l+1}]. By solving the second differential equation in (10), we have v(t) = ∫_{t_l⁺}^t ξ(τ) dτ + v(t_l⁺) = ∫_{t_l⁺}^t Lx(τ) dτ + v(t_l⁺).

Fig. 1. Communication graphs.
Fig. 2. State trajectories by using different consensus methods.
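The modified rule (16) can be sketched numerically: the whole vector v is reset when some ξi crosses zero and at least δ has elapsed since the last network-wide reset, and the state average is then preserved. The graph, gains, δ, and discretization below are illustrative assumptions, with the crossing detected by a sign change between steps.

```python
import numpy as np

def simulate_modified(L, x0, k, h, delta, dt=1e-3, T=20.0):
    """Sketch of the modified rule (16): the WHOLE vector v is reset
    when some xi_i crosses zero AND at least delta has elapsed since
    the previous network-wide reset (time regularization)."""
    x, v = x0.astype(float).copy(), np.zeros(len(x0))
    xi = L @ x
    t, t_last = 0.0, 0.0
    for _ in range(int(T / dt)):
        x = x + dt * (-k * xi - h * v)
        v = v + dt * xi
        t += dt
        xi_new = L @ x
        if np.any(xi * xi_new <= 0.0) and (t - t_last) > delta:
            v[:] = 0.0          # centralized whole-vector reset
            t_last = t
        xi = xi_new
    return x

L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
x0 = np.array([1., 2., 6.])
x = simulate_modified(L, x0, k=1.0, h=1.0, delta=0.05)
assert abs(x.mean() - x0.mean()) < 1e-6   # average preserved
assert np.ptp(x) < 1e-2                   # consensus reached
```

The average is preserved because 1ᵀξ = 1ᵀLx = 0 at every step, so the whole-vector reset keeps 1ᵀv = 0, which is exactly the argument around (17)-(18).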
Table 1
Settling time of the state trajectories by using different algorithms.

       A1       A2       A3       A4       A5       A6
M1     3.0159   3.0651   1.9891   1.7509   2.7012   3.0268
M2     3.8972   3.9492   3.6836   3.9781   3.4815   3.9781
M3     5.665    5.8685   4.6669   3.3249   5.0453   5.9231
M4     2.4822   2.4484   1.4252   0.6715   2.1641   2.4849
M5     1.6428   1.2969   0.897    0.6174   1.353    1.5962
Then, we know that the average of v(t) is

α(v(t)) = (1/n) 1ᵀ ∫_{t_l⁺}^t Lx(τ) dτ + (1/n) 1ᵀ v(t_l⁺).   (18)

Due to the zero column sum of the Laplacian matrix L, the first terms in (17) and (18) are both zero. Also note that the whole vector v(t), rather than just vi(t), is reset to zero whenever ξi(t) = 0; this, however, requires centralized coordination of the resets, which is a drawback of this approach. Then the average of v(t) is zero, and so is α̇(x(t)). This implies that α(x(t)) = α(x(0)) for all t ≥ 0, that is, average consensus is preserved under the modified reset control algorithm (16). The states at the next reset instant are given by

χ(t_{l+1}) = A_R exp[Φ∆(t_{l+1})] χ(t_l),   (19)

where A_R = diag{In, 0n}. Recalling the proof of Theorem 2, we have

exp[Φ∆(t_{l+1})] χ(t_l) = W exp[J′∆(t_{l+1})] V χ(t_l),   (20)

where W = [w3, . . . , w2n] and V = col{v3, . . . , v2n}. By taking the norm on both sides of (19) and considering (20), we have

∥χ(t_{l+1})∥ ≤ ∥A_R∥ ∥W∥ ∥exp[J′∆(t_{l+1})]∥ ∥V∥ ∥χ(t_l)∥,

where ∥A_R∥ = 1. Since J′ is Hurwitz, there always exists a δ such that ∥exp(J′∆)∥ ≤ ε for all ∆ > δ, where ε∥W∥∥V∥ < 1. Therefore, average consensus is preserved by the fact that the state average is time-invariant.

5. Numerical examples

We consider the topology shown in Fig. 1 with 6 agents. Note that the topology formed by all agents is connected. The eigenvalues of L in ascending order are λ1 = 0, λ2 = 1, λ3 = 1.5858, λ4 = 3, λ5 = 4, λ6 = 4.4142. We choose k = 1 and h = 0.5. Then, the eigenvalues of Φ are −3.8393, −3.4142, −2.3660, −0.7929 ± j0.4052, −0.634, −0.5858, −0.5749, −0.5 ± j0.5, 0, 0, where the dominant nonzero eigenvalues are a pair of complex eigenvalues. In the following, we numerically compare the consensus performance of different methods, namely, the traditional static state feedback method (M1) (Olfati-Saber & Murray, 2004), the finite-time consensus method (M2) (Cortés, 2006), the dynamic state feedback method (M3), the quasi-reset control method (M4) (Yucelen & Haddad, 2014), and the proposed reset control method (M5). For a fair comparison, the static feedback method (M1) is given as ui(t) = −k_s ξi(t), where the gain k_s is chosen as 0.8542 so that the control effort 103.4966 of the static state feedback control is slightly more than the control effort 103.4942 of the reset control. Here we use ∫_0^{t_f} Σ_{i=1}^n uiᵀ(t) ui(t) dt to characterize the control effort of an algorithm, where t_f is the terminal time. The finite-time consensus method (M2) uses the standard form ui(t) = −sgn(ξi(t)); the dynamic feedback method (M3) and the reset control method (M5) share the same feedback gains. For the quasi-reset control method, Eqs. (13) and (14) in Yucelen and Haddad (2014) are used with parameters k1 = 1.001, k2 = 0.001, τ = 0.0001, ξ = 1, and θ = 1.
Fig. 3. Control signal with resets.
The simulations of the different methods use the same initial condition, randomly generated from the uniform distribution on [0, 10]. The state trajectories under the different control algorithms are visualized in Fig. 2. As seen from the figures, the dynamic state feedback algorithm and the reset control method achieve a faster response than the static state feedback algorithm. However, the dynamic feedback algorithm may cause the closed-loop system to oscillate; this is overcome by the reset control algorithm. The finite-time consensus algorithm would perform well with a large gain. Fig. 3 shows the control inputs of the reset control algorithm, where the reset actions are clearly visible. Quantitative performance results comparing the different methods are shown in Table 1, where Ai denotes agent i. Here we mainly consider the settling time, that is, the time required for all agents to reach and stay within a certain percentage (2% is used here) of the consensus value. The reset control method reduces the settling time by 36.84% and 53.33% on average compared with M4 and M1, respectively; the reductions are as high as 67.77% and 75.72% relative to M2 and M3, respectively. At the same time, the proposed reset control uses 61.08% less energy than the quasi-reset control. It should be pointed out that the control gain parameters for all methods were chosen without fine tuning and without favoring any particular method; the performance of all methods could be improved if the control parameters were well tuned. The appeal of the reset mechanism is that it provides a simple and easy way to improve transient performance without fine tuning of the parameters.
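The 2% settling-time criterion used in Table 1 can be computed from sampled trajectories as sketched below. The function name, the band interpretation (2% of the consensus value), and the synthetic exponential trajectories are illustrative assumptions, not the paper's exact post-processing.

```python
import numpy as np

def settling_times(t, X, tol=0.02):
    """First time after which each agent stays within tol*|c| of the
    consensus value c (2% criterion); X has shape (len(t), n_agents)."""
    c = X[-1].mean()              # consensus value from the final sample
    band = tol * abs(c)
    ts = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        outside = np.where(np.abs(X[:, i] - c) > band)[0]
        if outside.size:
            ts[i] = t[min(outside[-1] + 1, len(t) - 1)]
    return ts

# Synthetic exponential trajectories converging to c = 5.
t = np.linspace(0.0, 10.0, 1001)
X = 5.0 + np.outer(np.exp(-t), np.array([-5.0, 5.0]))
ts = settling_times(t, X)
assert np.all((ts > 3.8) & (ts < 4.0))   # |5 e^-t| <= 0.1 once t >= ln(50)
```

The network settling time reported per column of Table 1 would then be the per-agent value, with the network-wide figure given by its maximum over agents.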
The proposed reset control strategies improved the transient performance in multi-agent systems compared with traditional methods that focus primarily on asymptotic behavior. Numerical examples confirmed the efficacy of the proposed reset control method. The design of a fully distributed and Zeno-free reset control strategy for the consensus problem of multi-agent systems remains open and deserves further study.
Acknowledgments

This work was partially supported by the Republic of Singapore National Research Foundation (NRF) through a grant to the Berkeley Education Alliance for Research in Singapore (BEARS) for the Singapore-Berkeley Building Efficiency and Sustainability in the Tropics (SinBerBEST) Program. BEARS has been established by the University of California, Berkeley as a center for intellectual excellence in research and education in Singapore. This work was also partially supported by the National Research Foundation of Singapore under grant NRF-CRP8-2011-03, the Projects of Major International (Regional) Joint Research Program of the NSFC (Grant no. 61720106011), and the National Natural Science Foundation of China under grant NSFC 61633014.

References

Ai, X., Song, S., & You, K. (2016). Second-order consensus of multi-agent systems under limited interaction ranges. Automatica, 68, 329–333.
Baños, A., & Barreiro, A. (2011). Reset control systems. Springer.
Beker, O., Hollot, C., & Chait, Y. (2001). Plant with integrator: an example of reset control overcoming limitations of linear feedback. IEEE Transactions on Automatic Control, 46(11), 1797–1799.
Bell, P., Delvenne, J., Jungers, R., & Blondel, V. (2010). The continuous Skolem-Pisot problem. Theoretical Computer Science, 411(40), 3625–3634.
Bragagnolo, M. C., Morărescu, I.-C., Daafouz, J., & Riedinger, P. (2016). Reset strategy for consensus in networks of clusters. Automatica, 65, 53–63.
Clarke, F. (1990). Optimization and nonsmooth analysis. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics.
Clegg, J. C. (1958). A nonlinear integrator for servomechanisms. Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry, 77(1), 41–42.
Cortés, J. (2006). Finite-time convergent gradient flows with applications to network consensus. Automatica, 42(11), 1993–2000.
Diestel, R. (2010). Graph theory (4th ed.). Springer.
Guo, Y., Gui, W., Yang, C., & Xie, L. (2012). Stability analysis and design of reset control systems with discrete-time triggering conditions. Automatica, 48(3), 528–535.
Guo, Y., Xie, L., & Wang, Y. (2015). Analysis and design of reset control systems. IET.
Hu, W., Liu, L., & Feng, G. (2016). Consensus of linear multi-agent systems by distributed event-triggered strategy. IEEE Transactions on Cybernetics, 46(1), 148–157.
Hu, J., Xu, J., & Xie, L. (2013). Cooperative search and exploration in robotic networks. Unmanned Systems, 1(1), 121–142.
Khalil, H. K. (2002). Nonlinear systems (3rd ed.). Prentice Hall.
Liu, X., Lam, J., Yu, W., & Chen, G. (2016). Finite-time consensus of multiagent systems with a switching protocol. IEEE Transactions on Neural Networks and Learning Systems, 27(4), 853–862.
Meng, X., & Chen, T. (2013). Event based agreement protocols for multi-agent networks. Automatica, 49(7), 2123–2132.
Meng, W., He, Z., Teo, R., Su, R., & Xie, L. (2015). Integrated multi-agent system framework: decentralised search, tasking and tracking. IET Control Theory & Applications, 9(3), 493–502.
Meng, X., Meng, Z., Chen, T., Dimarogonas, D. V., & Johansson, K. H. (2016). Pulse width modulation for multi-agent systems. Automatica, 70, 173–178.
Meng, Z., Ren, W., & You, Z. (2010). Distributed finite-time attitude containment control for multiple rigid bodies. Automatica, 46(12), 2092–2099.
Meng, X., Xie, L., & Soh, Y. C. (2016). Reset control for multi-agent systems. In Proc. American Control Conference (ACC) (pp. 3746–3751).
Meng, X., Xie, L., & Soh, Y. C. (2016). Reset control of multi-agent systems with double integrator dynamics. In Proc. 35th Chinese Control Conference (CCC) (pp. 8049–8054).
Meng, X., Xie, L., & Soh, Y. C. (2017). Asynchronous periodic event-triggered consensus for multi-agent systems. Automatica, 84, 214–220.
Meng, X., Xie, L., & Soh, Y. C. (2018). Event-triggered output regulation of heterogeneous multiagent networks. IEEE Transactions on Automatic Control, 63(12), 4429–4434.
Mesbahi, M., & Egerstedt, M. (2010). Graph theoretic methods in multiagent networks. Princeton University Press.
Olfati-Saber, R., & Murray, R. (2004). Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9), 1520–1533.
Qin, J., Zheng, W., & Gao, H. (2011). Consensus of multiple second-order vehicles with a time-varying reference signal under directed topology. Automatica, 47(9), 1983–1991.
Ren, W., & Beard, R. (2008). Distributed consensus in multi-vehicle cooperative control: theory and applications. Springer.
Su, Y., & Huang, J. (2012). Cooperative output regulation of linear multi-agent systems. IEEE Transactions on Automatic Control, 57(4), 1062–1066.
Wu, Y., Meng, X., Xie, L., Lu, R., Su, H., & Wu, Z. (2017). An input-based triggering approach to leader-following problems. Automatica, 75, 221–228.
Xiao, F., Meng, X., & Chen, T. (2015). Sampled-data consensus in switching networks of integrators based on edge events. International Journal of Control, 88(2), 391–402.
Yang, T., Meng, Z., Dimarogonas, D. V., & Johansson, K. H. (2014). Global consensus for discrete-time multi-agent systems with input saturation constraints. Automatica, 50(2), 499–506.
Yucelen, T., & Haddad, W. M. (2014). Consensus protocols for networked multiagent systems with a uniformly continuous quasi-resetting architecture. International Journal of Control, 87(8), 1716–1727.
Zhao, Y., Duan, Z., Wen, G., & Zhang, Y. (2013). Distributed finite-time tracking control for multi-agent systems: an observer-based approach. Systems & Control Letters, 62(1), 22–28.
Zhao, X., Yin, Y., & Shen, J. (2016). Reset stabilisation of positive linear systems. International Journal of Systems Science, 47(12), 2773–2782.
Xiangyu Meng is an Assistant Professor with the Division of Electrical & Computer Engineering at Louisiana State University, the United States. He received his Ph.D. degree in Control Systems from the University of Alberta, Canada, in 2014. He was a Research Associate in the Department of Mechanical Engineering at the University of Hong Kong between June 2007 and July 2007, and between November 2007 and January 2008. He was a Research Award Recipient in the Department of Electrical and Computer Engineering at the University of Alberta between February 2009 and August 2010. Between December 2014 and December 2016, he was with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, as a Research Fellow. He worked as a Postdoctoral Associate in the Division of Systems Engineering at Boston University, the United States, between January 2017 and December 2018. His research interests include multi-agent systems, reset control, and cyber–physical systems.

Lihua Xie received the B.E. and M.E. degrees in electrical engineering from Nanjing University of Science and Technology in 1983 and 1986, respectively, and the Ph.D. degree in electrical engineering from the University of Newcastle, Australia, in 1992. Since 1992, he has been with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, where he is currently a professor and Director of the Delta-NTU Corporate Laboratory for Cyber–Physical Systems. He served as the Head of the Division of Control and Instrumentation from July 2011 to June 2014. He held teaching appointments in the Department of Automatic Control, Nanjing University of Science and Technology from 1986 to 1989. His research interests include robust control and estimation, networked control systems, multi-agent networks, localization, and unmanned systems. He is an Editor-in-Chief for Unmanned Systems and an Associate Editor for IEEE Transactions on Control of Network Systems.
He has served as an editor of the IET Book Series in Control and an Associate Editor of a number of journals, including IEEE Transactions on Automatic Control, Automatica, IEEE Transactions on Control Systems Technology, and IEEE Transactions on Circuits and Systems-II. He is an elected member of the Board of Governors, IEEE Control Systems Society (Jan 2016–Dec 2018). He is a Fellow of IEEE and a Fellow of IFAC.

Yeng Chai Soh received the B.Eng. (Hons. I) degree in electrical and electronic engineering from the University of Canterbury, New Zealand, and the Ph.D. degree in electrical engineering from the University of Newcastle, Australia. He joined Nanyang Technological University, Singapore, after his Ph.D. study and is currently a professor in the School of Electrical and Electronic Engineering. He has served as the Head of the Control and Instrumentation Division, the Associate Dean (Research and Graduate Studies), and the Associate Dean (Research) at the College of Engineering. He has served as a panel member of several national grants and scholarships evaluation and awards committees. His research interests have been in robust control and applications, robust estimation and filtering, optical signal processing, and energy efficient systems. He has published more than 270 refereed journal papers in these areas. His most recent research projects and activities are in sensor networks, sensor fusion, distributed control & optimization, and control & optimization of ACMV systems.