Global optimal consensus for higher-order multi-agent systems with bounded controls




Automatica 99 (2019) 301–307




Yijing Xie a,b, Zongli Lin b,∗

a Department of Automation, and Key Laboratory of System Control and Information Processing of Ministry of Education, Shanghai Jiao Tong University, Shanghai 200240, China
b Charles L. Brown Department of Electrical and Computer Engineering, University of Virginia, P.O. Box 400743, Charlottesville, VA 22904-4743, USA

Article history: Received 29 December 2017; Received in revised form 4 June 2018; Accepted 27 September 2018; Available online 5 December 2018.

Keywords: Multi-agent systems; Actuator saturation; Distributed optimization; Optimal consensus

Abstract

This paper revisits the problem of global optimal consensus by bounded controls for a multi-agent system, where each agent is described by the dynamics of chains of integrators and has its own objective function known only to itself. For each agent, a bounded local control law is constructed that uses the information accessible to the agent through the communication topology underlying the multi-agent system and the information of its own objective function. Under the assumption that the communication topology is strongly connected and detailed balanced, these control laws together achieve global optimal consensus for the multi-agent system, that is, all agents reach consensus at a state that minimizes the sum of the objective functions of all agents. A numerical example illustrates the theoretical results.

✩ Work supported in part by US Army Research Office under Grant No. W911NF-17-1-0535 and in part by the National Natural Science Foundation of China under Grant No. 61733018. The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Julien M. Hendrickx under the direction of Editor Christos G. Cassandras.
∗ Corresponding author. E-mail addresses: [email protected] (Y. Xie), [email protected] (Z. Lin). https://doi.org/10.1016/j.automatica.2018.10.048

1. Introduction

The past few decades have witnessed great research interest in distributed cooperative control of multi-agent systems due to its many applications in various fields such as engineering, nature and social sciences. In distributed cooperative control of a multi-agent system, each agent utilizes its local information to accomplish a global task cooperatively. As a foundational problem in distributed cooperative control of multi-agent systems, consensus has been intensively studied and many results have been obtained (see, for example, Liu & Huang, 2017; Ni & Cheng, 2010; Olfati-Saber & Murray, 2004; Su & Lin, 2016; Zhang, Liu, & Feng, 2015). Consensus requires the states of all agents to reach a common value, and finds many applications (see, for example, Alighanbari & How, 2005; Cook & Hu, 2010; Cortés & Bullo, 2005; Klein, Bettale, Triplett, & Morgansen, 2008).

As one of the most commonly encountered constraints in the physical world, actuator saturation has motivated the study of both the problems of stabilization and consensus with bounded controls. It is known that a linear system subject to actuator saturation can be globally asymptotically stabilized only when it is asymptotically null controllable with bounded controls (ANCBC), that is, when it is stabilizable with all poles in the closed left-half plane. Accordingly, global consensus can only be achieved for agents that are ANCBC. For agents that are described by single-integrator dynamics, global leaderless consensus was achieved in Li, Xiang, and Wei (2011) by linear feedback when the communication topology among agents contains a directed spanning tree. For agents that are represented by double-integrator dynamics or neutrally stable linear systems, global leader-following consensus was achieved by linear feedback in Meng, Zhao, and Lin (2013) under both a fixed topology and a switching topology. Yang, Meng, Dimarogonas, and Johansson (2014) discussed the discrete-time counterpart of Meng et al. (2013) under a fixed undirected topology. In general, global consensus of multi-agent systems entails nonlinear feedback. Zhao and Lin (2016) solved the problem of global leader-following consensus for general ANCBC agents by nonlinear feedback.

When all agents reach a consensus state that optimizes the sum of the objective functions of all agents, the consensus problem is referred to as optimal consensus. Optimal consensus has many applications such as motion coordination of UAVs. For example, a team of UAVs needs to rendezvous at a location which is optimal for the team. However, control laws designed in the absence of actuator saturation may fail when actuator saturation is present. Only a few results are available on optimal consensus for multi-agent systems in the presence of actuator saturation. In Xie and Lin (2017), global optimal consensus was achieved for single-integrator agents and double-integrator agents in the presence of actuator saturation when the communication topology is strongly connected and detailed balanced. Yang, Wang, Wang, and Lin (2018) extended the results of Xie and Lin (2017) for


the continuous-time single-integrator agents to the discrete-time setting.

In contrast with the limited number of results on optimal consensus for multi-agent systems with bounded controls, many results are available on optimal consensus for multi-agent systems in the absence of actuator saturation. Most of these results pertain to single-integrator agents (see, for example, Chen & Ren, 2017; Kia, Cortés, & Martínez, 2015; Lu & Tang, 2012; Shi, Johansson, & Hong, 2011). The optimal consensus problem for second-order integrator agents was considered in Rahili and Ren (2017), Wang and Liu (2015) and Zhang and Hong (2014). For agents described by higher-order integrators, optimal consensus was achieved in Zhang and Hong (2015). In Zhao, Liu, Wen, and Chen (2017), the optimal consensus problem for agents modeled by general linear systems was considered. Recently, optimal consensus has also been solved for agents with time-varying cost functions in Rahili and Ren (2017) and Sun, Ye, and Hu (2017).

In this paper, we consider global optimal consensus for higher-order agents in the presence of actuator saturation. The dynamics of each agent are represented by multiple chains of integrators, sometimes referred to as higher-order integrators in the literature. Each agent has its own objective function known only to itself. For each agent, a bounded optimal consensus law is constructed that only uses the information of other agents obtained through the communication network and the information of its own objective function. We will first prove that the equilibrium point of the closed-loop multi-agent system is the optimal solution of the global objective function. Next, we will prove that the higher-order agents reach consensus at the equilibrium point.

The main contributions of this paper are three-fold. First, the agents we consider are described by higher-order integrators, while existing works on the optimal consensus problem in the presence of actuator saturation deal with single-/double-integrator agents. Second, optimal consensus can be achieved from arbitrary initial states of the agents in the presence of actuator saturation. Third, the results in this paper also apply in the absence of actuator saturation, with more appealing features in comparison with Zhang and Hong (2015). In particular, the use of a different Lyapunov function not only significantly simplifies the proof but also leads to more freedom in the choice of controller parameters.

The difficulties associated with the global optimal consensus problem in the presence of actuator saturation are two-fold. Compared with the global leader-following consensus problem in the presence of actuator saturation (Zhao & Lin, 2016), the consensus state is unknown and needs to be determined by the optimal consensus algorithm. Compared with the optimal consensus problem in the absence of actuator saturation (Chen & Ren, 2017; Kia et al., 2015; Lu & Tang, 2012; Shi et al., 2011), agents with actuator saturation are nonlinear systems, which significantly complicates the analysis.

We will use standard notation. For a vector a = [a_1 a_2 ⋯ a_m]^T ∈ R^m, ∥a∥ denotes the Euclidean norm of a, defined by ∥a∥ = √(a^T a) = √(∑_{i=1}^m |a_i|²), and ∥a∥_∞ denotes the maximum norm, defined by ∥a∥_∞ = max_i |a_i|. Also, 1_N = [1 1 ⋯ 1]^T ∈ R^N, I_m denotes the m × m identity matrix, and ⊗ denotes the Kronecker product of matrices. For a symmetric matrix A, by A > 0 (≥ 0) we mean that A is positive definite (positive semi-definite), and λ̄(A) and λ(A) represent its maximum and minimum eigenvalues, respectively. For two integers p_1 and p_2, I[p_1, p_2] = {p_1, p_1 + 1, ..., p_2} if p_2 ≥ p_1, and I[p_1, p_2] = {p_1, p_1 − 1, ..., p_2} if p_1 > p_2. For any column vectors a_i, i ∈ I[1, n], denote col[a_1, a_2, ..., a_n] = [a_1^T a_2^T ⋯ a_n^T]^T. For a function f : R^m → R, ∇f and ∇²f are respectively the gradient and the Hessian matrix of f.

2. Preliminaries

A directed graph G_N consists of a finite, nonempty set of nodes V = {ν_1, ν_2, ..., ν_N}, each of them representing an agent, and a set of ordered pairs of nodes E ⊆ V × V, representing the edges of the graph. In a directed graph, an edge (ν_i, ν_j) indicates that agent j has access to the information from agent i, and a directed path is a sequence of edges of the form (ν_{i_1}, ν_{i_2}), (ν_{i_2}, ν_{i_3}), .... A directed graph is strongly connected if there exists a directed path between any pair of distinct nodes. Let A = [a_{ij}] ∈ R^{N×N} be the adjacency matrix associated with G_N, where a_{ij} > 0 if (ν_j, ν_i) ∈ E and a_{ij} = 0 otherwise. Here we assume that a_{ii} = 0 for all i ∈ I[1, N]. Let L = [l_{ij}] ∈ R^{N×N} be the Laplacian matrix associated with A, where l_{ii} = ∑_{j=1}^N a_{ij} and l_{ij} = −a_{ij} when i ≠ j. A graph is said to be detailed balanced if there exist some real numbers ω_i > 0, i ∈ I[1, N], such that the coupling weights of the graph satisfy ω_i a_{ij} = ω_j a_{ji} for all i, j ∈ I[1, N] (Jiang & Wang, 2009).

The communication topology is described by a directed graph that satisfies the following assumption.

Assumption 1. The directed graph G_N is strongly connected and detailed balanced.

Define M = diag{ω}L, where diag{ω} = diag{ω_1, ω_2, ..., ω_N}, with ω_i > 0, i ∈ I[1, N]. By the definition of a detailed balanced graph, we have diag{ω}L = L^T diag{ω}, namely, M = M^T. Since L1_N = 0, M1_N = diag{ω}L1_N = 0. Matrix M has the following property.

Lemma 2 (Godsil & Royle, 2001). Under Assumption 1, M ≥ 0, that is, all the eigenvalues of M are real and nonnegative. Moreover, 0 is a simple eigenvalue of M.

We next review some basic knowledge of convex analysis. A function f : R^m → R is convex if, for any x, y ∈ R^m,

f(\kappa x + (1-\kappa)y) \le \kappa f(x) + (1-\kappa)f(y), \quad \kappa \in [0, 1].   (1)

A function is strictly convex if strict inequality holds in (1) whenever x ≠ y and 0 < κ < 1. Assume that f is twice differentiable, i.e., ∇²f exists. If ∇²f(x) > 0 for all x ∈ R^m, then f is strictly convex.

3. Problem statement

Consider a group of N agents, each described by higher-order integrators as

x_i^{(n)} = u_i, \quad i \in I[1, N],   (2)

where x_i ∈ R^m, x_i^{(n)} ∈ R^m represents the nth derivative of x_i, and u_i ∈ R^m is the bounded control input of agent i, with ∥u_i∥_∞ ≤ u_max for some positive scalar u_max. Each agent has its own objective function f_i(x_i) : R^m → R to minimize. We make the following assumption on these objective functions of the agents.

Assumption 3. The objective functions f_i : R^m → R, i ∈ I[1, N], are twice differentiable and ∇²f_i(x) > 0 for all x ∈ R^m. Moreover, there exists a positive scalar θ such that ∥∇²f_i(x)∥_∞ ≤ θ, ∀x ∈ R^m, i ∈ I[1, N].

The problem we are to solve is the global optimal consensus problem. Consider the multi-agent system whose agents are described by dynamics (2). For each agent i, construct a bounded optimal consensus control law u_i, ∥u_i∥_∞ ≤ u_max, which uses the information obtained through the communication network and the information of its own objective function, such that the closed-loop multi-agent system achieves consensus at a state x* that minimizes the objective function f(x) = ∑_{i=1}^N f_i(x), where the convex function f_i(x) : R^m → R is known only to agent i; that is, x* is the solution to the optimization problem min_{x∈R^m} f(x), with lim_{t→∞} x_i(t) = x* and lim_{t→∞} x_i^{(s)}(t) = 0, i ∈ I[1, N], s ∈ I[1, n−1]. Under Assumption 3, f(x) is strictly convex. Thus, the optimal consensus state x* is reached if the following optimality condition is satisfied,

\nabla f(x^*) = \sum_{i=1}^{N} \nabla f_i(x^*) = 0.   (3)
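To make the optimality condition (3) concrete, the following minimal Python sketch (ours, not part of the original paper; variable names are assumptions) checks it for quadratic objective functions of the form used later in Section 5, where the minimizer of the sum is simply the average of the individual target points.

```python
import numpy as np

# Quadratic objectives f_i(x) = ||x - d_i||^2, so grad f_i(x) = 2(x - d_i).
# The target points d_i below are the ones used in the Section 5 example.
d = np.array([[-110.0, 300.0],
              [-314.0, 200.0],
              [-500.0, 100.0]])

x_star = d.mean(axis=0)                              # minimizer of f(x) = sum_i f_i(x)
grad_sum = sum(2.0 * (x_star - di) for di in d)      # left-hand side of condition (3)

print(x_star)      # [-308.  200.]
print(grad_sum)    # [0. 0.], i.e., condition (3) holds at x*
```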

Each nth order-integrator system (2) can be written as

\dot{x}_i = A x_i + B u_i, \quad i \in I[1, N],   (4)

where x_i = col[x_i, x_i^{(1)}, ..., x_i^{(n−2)}, x_i^{(n−1)}], and

A = \begin{bmatrix} 0 & I_m & 0 & \cdots & 0\\ 0 & 0 & I_m & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & I_m\\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \in R^{nm\times nm}, \qquad B = \begin{bmatrix} 0\\ 0\\ \vdots\\ 0\\ I_m \end{bmatrix} \in R^{nm\times m}.

According to Sussmann and Sontag (1994), there exists a nonsingular matrix Θ such that the state transformation x̄_i = Θx_i, i ∈ I[1, N], transforms system (4) into the following form

\dot{\bar{x}}_i = \bar{A}\bar{x}_i + \bar{B}u_i, \quad i \in I[1, N],   (5)

where x̄_i = col[x̄_i, x̄_i^{(1)}, ..., x̄_i^{(n−2)}, x̄_i^{(n−1)}], and

\bar{A} = \begin{bmatrix} 0 & I_m & I_m & \cdots & I_m\\ 0 & 0 & I_m & \cdots & I_m\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & I_m\\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \in R^{nm\times nm}, \qquad \bar{B} = \begin{bmatrix} I_m\\ I_m\\ \vdots\\ I_m\\ I_m \end{bmatrix} \in R^{nm\times m}.

Such a transformation Θ is explicitly expressed as

\Theta = \begin{bmatrix} C_{n-1}^{0} & C_{n-1}^{1} & \cdots & C_{n-1}^{n-1}\\ 0 & C_{n-2}^{0} & \cdots & C_{n-2}^{n-2}\\ \vdots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & C_{0}^{0} \end{bmatrix} \otimes I_m,

where, for two integers p ≥ q ≥ 0, C_p^q = p!/(q!(p−q)!).

Based on the transformed state equation (5), we construct a bounded optimal consensus control law for each agent i, i ∈ I[1, N], as follows,

\begin{cases} \dot{v}_i = \beta\omega_i \sum_{j=1}^{N} a_{ij}(\bar{x}_i - \bar{x}_j), \quad v_i(0) = 0,\\[4pt] u_i = -\sum_{l=1}^{n-1}\sigma_{\Delta_l}(\bar{x}_i^{(l)}) - \sigma_{\Delta_n}\Big(\beta\omega_i\sum_{j=1}^{N} a_{ij}(\bar{x}_i - \bar{x}_j)\Big) - \sigma_{\Delta_{n+1}}\big(v_i + \alpha\nabla f_i(\bar{x}_i - \bar{x}_i^{(1)}) + \bar{x}_i^{(1)}\big) - \sigma_{\Delta_0}\big(\alpha\nabla^2 f_i(\bar{x}_i - \bar{x}_i^{(1)})\,\bar{x}_i^{(1)}\big), \end{cases}   (6)

where β is any positive scalar and α is a positive scalar whose value is to be determined, and, for a given positive scalar ∆_i > 0, i ∈ I[0, n+1], σ_{∆_i} : R^m → R^m is a saturation function with saturation level ∆_i, that is, for s = [s_1 s_2 ⋯ s_m]^T ∈ R^m, σ_{∆_i}(s) = [σ_{∆_i}(s_1) σ_{∆_i}(s_2) ⋯ σ_{∆_i}(s_m)]^T, σ_{∆_i}(s_j) = sgn(s_j) min{|s_j|, ∆_i}, j ∈ I[1, m], where we have abused the notation by using σ_{∆_i} to denote both a scalar and a vector valued saturation function. It is noted that the control input is bounded by ∥u_i∥_∞ ≤ ∑_{i=0}^{n+1} ∆_i. We will set the values of ∆_i, i ∈ I[0, n+1], such that ∥u_i∥_∞ ≤ u_max. For notational brevity, we denote σ_{∆_i} as σ_i in the remainder of the paper.

Remark 4. As will be clear in the proofs of Theorems 5 and 6, in each u_i, ∑_{l=1}^{n−1} σ_l(x̄_i^{(l)}) and σ_0(α∇²f_i(x̄_i − x̄_i^{(1)})x̄_i^{(1)}) are terms that drive the derivatives x̄_i^{(l)}, l ∈ I[1, n−1], to 0; βω_i ∑_{j=1}^{N} a_{ij}(x̄_i − x̄_j) is the consensus term such that agents reach an agreement; α∇f_i(x̄_i − x̄_i^{(1)}) is the gradient term that helps each agent to optimize its local objective function; and v_i is a term to achieve consensus at the global optimal point.
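As an illustration only, the following Python sketch shows one way the saturation function σ_∆ and the per-agent law (6) could be coded; the array layout, function names, and arguments are our own assumptions and are not part of the paper.

```python
import numpy as np

def sat(s, level):
    # Component-wise saturation: sigma_Delta(s) = sign(s) * min(|s|, Delta)
    return np.clip(s, -level, level)

def control_law_6(i, xbar, v_i, a, omega_i, alpha, beta, Delta, grad_fi, hess_fi):
    """Sketch of the bounded optimal consensus law (6) for agent i (hypothetical interface).

    xbar     : array of shape (N, n, m); xbar[j, l] holds x_bar_j^(l) (l = 0 is x_bar_j itself)
    v_i      : internal state v_i of agent i, shape (m,)
    a        : adjacency matrix [a_ij]; only row i is actually used by agent i
    Delta    : saturation levels [Delta_0, ..., Delta_{n+1}]
    grad_fi, hess_fi : gradient and Hessian of agent i's own objective f_i
    """
    N, n, m = xbar.shape
    cons = beta * omega_i * sum(a[i, j] * (xbar[i, 0] - xbar[j, 0]) for j in range(N))
    y = xbar[i, 0] - xbar[i, 1]                                    # x_bar_i - x_bar_i^(1)
    u = -sum(sat(xbar[i, l], Delta[l]) for l in range(1, n))       # drives the derivatives to 0
    u -= sat(cons, Delta[n])                                       # consensus term
    u -= sat(v_i + alpha * grad_fi(y) + xbar[i, 1], Delta[n + 1])  # gradient/integral term
    u -= sat(alpha * hess_fi(y) @ xbar[i, 1], Delta[0])            # Hessian correction term
    return u, cons                                                 # cons is also dot(v_i) in (6)
```

With n = 2 this reduces to the double-integrator law simulated in Section 5; only locally available quantities (agent i's own states, its neighbors' transformed states, and its own objective) appear in the computation.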

4. Main results

Next, we will present two theorems. Theorem 5 establishes that the equilibrium point of the closed-loop multi-agent system minimizes the objective function. Theorem 6 establishes that the agents reach consensus at the equilibrium point, which implies that the consensus control laws (6) solve the global optimal consensus problem for the higher-order multi-agent system (2).

Theorem 5. Consider the multi-agent system (2). Let Assumptions 1 and 3 hold. Then, under the bounded optimal consensus control laws (6) with any positive scalar α, the equilibrium point of the closed-loop multi-agent system minimizes the objective function f(x) = ∑_{i=1}^{N} f_i(x).

Proof. Under the control laws (6), the dynamics of agent i, i ∈ I[1, N], are given by

\begin{cases} \dot{v}_i = \beta\omega_i\sum_{j=1}^N a_{ij}(\bar x_i - \bar x_j), \quad v_i(0)=0,\\ \dot{x}_i = x_i^{(1)},\\ \dot{x}_i^{(s)} = x_i^{(s+1)}, \quad s \in I[1, n-2],\\ \dot{x}_i^{(n-1)} = -\sum_{l=1}^{n-1}\sigma_l(\bar x_i^{(l)}) - \sigma_n\Big(\beta\omega_i\sum_{j=1}^N a_{ij}(\bar x_i - \bar x_j)\Big) - \sigma_{n+1}\big(v_i + \alpha\nabla f_i(\bar x_i - \bar x_i^{(1)}) + \bar x_i^{(1)}\big) - \sigma_0\big(\alpha\nabla^2 f_i(\bar x_i - \bar x_i^{(1)})\,\bar x_i^{(1)}\big). \end{cases}   (7)

Let x = col[x_1, x_2, ..., x_N], x̄ = col[x̄_1, x̄_2, ..., x̄_N], v = col[v_1, v_2, ..., v_N], u = col[u_1, u_2, ..., u_N], x^{(s)} = col[x_1^{(s)}, x_2^{(s)}, ..., x_N^{(s)}], x̄^{(s)} = col[x̄_1^{(s)}, x̄_2^{(s)}, ..., x̄_N^{(s)}], s ∈ I[1, n−1], F(x̄ − x̄^{(1)}) = ∑_{i=1}^{N} f_i(x̄_i − x̄_i^{(1)}) : R^{Nm} → R, and ℳ = M ⊗ I_m. Then, we can write the dynamics of the entire multi-agent system (7) in the following compact form,

\begin{cases} \dot{v} = \beta\mathcal{M}\bar x, \quad v(0)=0,\\ \dot{x} = x^{(1)},\\ \dot{x}^{(s)} = x^{(s+1)}, \quad s\in I[1,n-2],\\ \dot{x}^{(n-1)} = -\sum_{l=1}^{n-1}\sigma_l(\bar x^{(l)}) - \sigma_n(\beta\mathcal{M}\bar x) - \sigma_{n+1}\big(v + \alpha\nabla F(\bar x - \bar x^{(1)}) + \bar x^{(1)}\big) - \sigma_0\big(\alpha\nabla^2 F(\bar x - \bar x^{(1)})\,\bar x^{(1)}\big), \end{cases}   (8)

where ∇F(x̄ − x̄^{(1)}) = col[∇f_1, ∇f_2, ..., ∇f_N] with ∇f_i = ∇f_i(x̄_i − x̄_i^{(1)}), i ∈ I[1, N], and ∇²F(x̄ − x̄^{(1)}) = blkdiag{∇²f_1, ∇²f_2, ..., ∇²f_N} is a block diagonal matrix with ∇²f_i = ∇²f_i(x̄_i − x̄_i^{(1)}), i ∈ I[1, N].

Since x̄_i = Θx_i, i ∈ I[1, N], we have

\begin{cases} \bar x = x + \sum_{j=1}^{n-1} C_{n-1}^{j}\, x^{(j)},\\ \bar x^{(s)} = \sum_{j=0}^{n-1-s} C_{n-1-s}^{j}\, x^{(s+j)}, \quad s \in I[1, n-1]. \end{cases}

Thus, (8) can be written as

\begin{cases} \dot v = \beta\mathcal{M}\Big(x + \sum_{j=1}^{n-1} C_{n-1}^{j} x^{(j)}\Big), \quad v(0)=0,\\ \dot x = x^{(1)},\\ \dot x^{(s)} = x^{(s+1)}, \quad s\in I[1,n-2],\\ \dot x^{(n-1)} = -\sum_{l=1}^{n-1}\sigma_l\Big(\sum_{j=0}^{n-1-l} C_{n-1-l}^{j} x^{(l+j)}\Big) - \sigma_n\Big(\beta\mathcal{M}\Big(x + \sum_{j=1}^{n-1} C_{n-1}^{j} x^{(j)}\Big)\Big)\\ \qquad\qquad -\, \sigma_{n+1}\Big(v + \alpha\nabla F\Big(x + \sum_{j=1}^{n-2} C_{n-2}^{j} x^{(j)}\Big) + \sum_{j=0}^{n-2} C_{n-2}^{j} x^{(j+1)}\Big)\\ \qquad\qquad -\, \sigma_0\Big(\alpha\nabla^2 F\Big(x + \sum_{j=1}^{n-2} C_{n-2}^{j} x^{(j)}\Big) \times \sum_{j=0}^{n-2} C_{n-2}^{j} x^{(j+1)}\Big). \end{cases}   (9)

Let the equilibrium of the closed-loop multi-agent system (9) be denoted as [v^0, x^0, (x^{(1)})^0, ..., (x^{(n−1)})^0], where v^0 = col[v_1^0, v_2^0, ..., v_N^0], x^0 = col[x_1^0, x_2^0, ..., x_N^0], and (x^{(s)})^0 = col[(x_1^{(s)})^0, (x_2^{(s)})^0, ..., (x_N^{(s)})^0], s ∈ I[1, n−1]. Then, v^0, x^0 and (x^{(s)})^0 must satisfy

(x^{(s)})^0 = 0, \quad s \in I[1, n-1],   (10)
(M \otimes I_m)\, x^0 = 0,   (11)
v^0 + \alpha\nabla F(x^0) = 0.   (12)

It follows from (11) that x^0 belongs to the null space of M ⊗ I_m. For a strongly connected and detailed balanced digraph, the null space of M is spanned by 1_N. Thus, x^0 = 1_N ⊗ x_0, x_0 ∈ R^m. For a detailed balanced digraph, 1_N^T M = 0. Multiplying the first equation of (8) from the left by 1_N^T ⊗ I_m gives (1_N^T ⊗ I_m)\dot v = β(1_N^T M ⊗ I_m)x̄ = 0, from which we have ∑_{i=1}^N \dot v_i = 0, which in turn implies that ∑_{i=1}^N v_i(t) = ∑_{i=1}^N v_i(0) = 0. Multiplying (12) from the left by 1_N^T ⊗ I_m gives ∑_{i=1}^N v_i^0 + α∑_{i=1}^N ∇f_i(x_0) = 0, which, together with ∑_{i=1}^N v_i^0 = 0, implies that ∑_{i=1}^N ∇f_i(x_0) = 0. That is, x_i^0 = x_0, i ∈ I[1, N], and the optimality condition (3) is satisfied with x* = x_0. This completes the proof. □
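Before stating the consensus result, the following small numerical sketch (ours, not the authors') illustrates Lemma 2 and the null-space argument just used, for the detailed-balanced topology of Section 5, and also produces an orthogonal matrix of the kind introduced next.

```python
import numpy as np

# M = diag{omega} L for the detailed-balanced digraph used in Section 5
L = np.array([[ 1.0, -0.5, -0.5],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
M = np.diag([2.0, 1.0, 1.0]) @ L

print(np.allclose(M, M.T))             # True: detailed balance makes M symmetric
eigvals, eigvecs = np.linalg.eigh(M)
print(np.round(eigvals, 6))            # [0. 3. 3.]: nonnegative, with a simple zero eigenvalue
print(np.round(eigvecs[:, 0], 4))      # proportional to 1_N, so the null space is span{1_N}

T = eigvecs.T                          # orthogonal, with T M T^T = diag{0, M_1}
print(np.round(T @ M @ T.T, 6))
```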

We next note that, under Assumption 1, there is an orthogonal matrix T ∈ R^{mN×mN} such that TℳT^T = diag{0, M_1}, where M_1 ∈ R^{m(N−1)×m(N−1)} is a diagonal matrix with all positive eigenvalues, and T = [T_0 T_1]^T, with T_0 ∈ R^{mN×m} and T_1 ∈ R^{mN×m(N−1)}. Then, we have the following result showing that the agents reach consensus at the equilibrium point.

Theorem 6. Consider the multi-agent system (2). Let Assumptions 1 and 3 hold. Then, under the control laws (6), with parameters satisfying the following conditions for some positive scalars ε and δ,

\begin{cases} \sum_{i=0}^{n+1}\Delta_i \le u_{\max}, & \text{(a)}\\ \sum_{l=0}^{i-1}\Delta_l + \Delta_n + \Delta_{n+1} + \varepsilon \le \Delta_i/\sqrt{mN}, \quad i\in I[1,n-1], & \text{(b)}\\ mN(\Delta_0 + \Delta_{n+1} + \varepsilon) \le \delta, & \text{(c)}\\ \dfrac{\bar\lambda(M_1)}{\lambda(M_1)}\big(\sqrt{mN}\,\delta + \Delta_{n+1} + \Delta_0\big) \le \Delta_n, & \text{(d)}\\ \alpha\theta\Delta_1 \le \Delta_0, & \text{(e)} \end{cases}   (13)

the multi-agent system (2) reaches global optimal consensus at the equilibrium point.

Remark 7. There always exist controller parameters that satisfy conditions (13)(a)–(e). To see this, let ∆ be any positive scalar and ρ* ∈ (0, 1) be a positive scalar satisfying ρ* ≤ min{λ(M_1)/(6m²N²λ̄(M_1)), u_max/(3∆)}. Choose ∆_0 = ∆_{n+1} = ρ^{n+1}∆, ∆_n = ρ^n∆, ∆_i = ρ^{n−i}∆, i ∈ I[1, n−1], δ = λ(M_1)ρ^n∆/(2λ̄(M_1)√(mN)), ε = ρ^{n+2}∆, and α* = ρ²/θ. Then, (13)(a)–(e) will be satisfied as long as ρ ∈ (0, ρ*] and α ∈ (0, α*]. These parameters are designed offline, while in the cooperative control process, each agent only needs its own information and the information of its neighbors.
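The following sketch (ours; the function and variable names are assumptions) checks conditions (13)(a)–(e) numerically for a given parameter set, using the values later chosen in Section 5 (n = 2, m = 2, N = 3, θ = 2, and λ(M_1) = λ̄(M_1) = 3 for that topology).

```python
import numpy as np

def check_13(Delta, delta, eps, alpha, theta, u_max, m, N, lam_min, lam_max, tol=1e-9):
    """Return the truth values of conditions (13)(a)-(e) of Theorem 6.

    Delta            : saturation levels [Delta_0, Delta_1, ..., Delta_{n+1}]
    lam_min, lam_max : smallest and largest eigenvalues of M_1
    """
    n = len(Delta) - 2
    mN = m * N
    leq = lambda lhs, rhs: lhs <= rhs + tol          # tolerate round-off on tight conditions
    a = leq(Delta.sum(), u_max)
    b = all(leq(Delta[:i].sum() + Delta[n] + Delta[n + 1] + eps, Delta[i] / np.sqrt(mN))
            for i in range(1, n))
    c = leq(mN * (Delta[0] + Delta[n + 1] + eps), delta)
    d = leq((lam_max / lam_min) * (np.sqrt(mN) * delta + Delta[n + 1] + Delta[0]), Delta[n])
    e = leq(alpha * theta * Delta[1], Delta[0])
    return a, b, c, d, e

# Parameters used in the Section 5 example
Delta = np.array([0.056, 28.0, 1.78, 0.056])
print(check_13(Delta, delta=0.68, eps=0.001, alpha=0.001, theta=2.0,
               u_max=30.0, m=2, N=3, lam_min=3.0, lam_max=3.0))
# (True, True, True, True, True)
```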

Proof of Theorem 6. The dynamics of the closed-loop multi-agent system can be written as

\begin{cases} \dot v = \beta\mathcal{M}\bar x, \quad v(0)=0,\\ \dot{\bar x} = \sum_{l=1}^{n-1}\bar x^{(l)} + u,\\ \dot{\bar x}^{(s)} = \sum_{l=s+1}^{n-1}\bar x^{(l)} + u, \quad s\in I[1,n-2],\\ \dot{\bar x}^{(n-1)} = u,\\ u = -\sum_{l=1}^{n-1}\sigma_l(\bar x^{(l)}) - \sigma_n(\beta\mathcal{M}\bar x) - \sigma_{n+1}\big(v+\alpha\nabla F(\bar x - \bar x^{(1)}) + \bar x^{(1)}\big) - \sigma_0\big(\alpha\nabla^2 F(\bar x - \bar x^{(1)})\,\bar x^{(1)}\big). \end{cases}   (14)

Let

\bar u_s = -\sum_{l=1}^{s-1}\sigma_l(\bar x^{(l)}) - \sigma_n(\beta\mathcal{M}\bar x) - \sigma_{n+1}\big(v+\alpha\nabla F(\bar x-\bar x^{(1)})+\bar x^{(1)}\big) - \sigma_0\big(\alpha\nabla^2 F(\bar x-\bar x^{(1)})\,\bar x^{(1)}\big), \quad s\in I[1,n-1],
\bar u_n = -\sigma_{n+1}\big(v+\alpha\nabla F(\bar x-\bar x^{(1)})+\bar x^{(1)}\big) - \sigma_0\big(\alpha\nabla^2 F(\bar x-\bar x^{(1)})\,\bar x^{(1)}\big),

where the summation ∑_{l=1}^{0} is taken to be zero. Then, ∥ū_s∥_∞ ≤ ∑_{l=0}^{s−1}∆_l + ∆_n + ∆_{n+1}, s ∈ I[1, n−1], and ∥ū_n∥_∞ ≤ ∆_0 + ∆_{n+1}.

We first consider the evolution of x̄^{(n−1)},

\dot{\bar x}^{(n-1)} = u = -\sigma_{n-1}(\bar x^{(n-1)}) + \bar u_{n-1}.   (15)

Choose a Lyapunov function candidate V_{n−1}(x̄^{(n−1)}) = (1/2)(x̄^{(n−1)})^T(x̄^{(n−1)}).

The derivative of V_{n−1} along the trajectory of (15) can be evaluated as follows,

\dot V_{n-1} = -(\bar x^{(n-1)})^T\sigma_{n-1}(\bar x^{(n-1)}) + (\bar x^{(n-1)})^T\bar u_{n-1}.

Let c_{n−1} = ∆_{n−1}²/2, L_{V_{n−1}}(c_{n−1}) = {x̄^{(n−1)} ∈ R^{mN} : V_{n−1}(x̄^{(n−1)}) ≤ c_{n−1}} and Q_{n−1} = {x̄^{(n−1)} : ∥x̄^{(n−1)}∥_∞ ≤ ∆_{n−1}}. For any x̄^{(n−1)} ∉ L_{V_{n−1}}(c_{n−1}), we have V_{n−1}(x̄^{(n−1)}) > c_{n−1}, which indicates that ∥x̄^{(n−1)}∥ > ∆_{n−1} and ∥x̄^{(n−1)}∥_∞ > ∆_{n−1}/√(mN). Then, we have, by (13)(b),

\dot V_{n-1} \le -\frac{\Delta_{n-1}^2}{mN} + \frac{\Delta_{n-1}}{\sqrt{mN}}\Big(\sum_{l=0}^{n-2}\Delta_l + \Delta_n + \Delta_{n+1}\Big) \le -\frac{\varepsilon\,\Delta_{n-1}}{\sqrt{mN}} < 0.

Thus, L_{V_{n−1}}(c_{n−1}) is an invariant set and there exists a finite time t_{n−1} such that x̄^{(n−1)} will enter L_{V_{n−1}}(c_{n−1}) at t_{n−1} and remain in it. For any x̄^{(n−1)} ∈ L_{V_{n−1}}(c_{n−1}), we have V_{n−1}(x̄^{(n−1)}) ≤ c_{n−1}. It follows that ∥x̄^{(n−1)}∥_∞ ≤ ∥x̄^{(n−1)}∥ ≤ ∆_{n−1}, which indicates that x̄^{(n−1)} will stay in the unsaturated region of the saturation function σ_{n−1}, namely Q_{n−1}, for t ≥ t_{n−1}.

For each s ∈ I[n−2, 1], following the same analysis as that of the evolution of x̄^{(n−1)}, we can recursively show that x̄^{(s)} will enter and remain in Q_s = {x̄^{(s)} : ∥x̄^{(s)}∥_∞ ≤ ∆_s} in a finite time t_s. By Assumption 3, we have ∥∇²F(x̄ − x̄^{(1)})∥_∞ ≤ θ. By (13)(e), we can get ∥α∇²F(x̄ − x̄^{(1)})x̄^{(1)}∥_∞ ≤ αθ∆_1 ≤ ∆_0. Thus, α∇²F(x̄ − x̄^{(1)})x̄^{(1)} will stay in the unsaturated region of the saturation function σ_0 for t ≥ t_1, where the evolution of x̄ is governed by \dot{\bar x} = -\sigma_n(\beta\mathcal{M}\bar x) + \bar u_n. Since ℳ ≥ 0, there exists a matrix N ∈ R^{mN×mN} such that ℳ = N^T N. Let w = Nx̄. We have

\dot w = -N\sigma_n(\beta N^T w) + N\bar u_n.   (16)

Let Q_n = {w : ∥βN^T w∥_∞ ≤ δ}. We next prove that, for any w ∉ Q_n, βN^T w will enter the unsaturated region of σ_{∆_n} in a finite time and remain in it. Choose a Lyapunov function candidate V_n(w) = (β/2)w^T w. The derivative of V_n along the trajectory of (16) can be evaluated as follows,

\dot V_n = -(\beta N^T w)^T\sigma_n(\beta N^T w) + (\beta N^T w)^T\bar u_n.

For w ∉ Q_n, we have, by (13)(c),

\dot V_n \le -\delta^2 + \delta\, mN(\Delta_{n+1} + \Delta_0) \le -mN\varepsilon\delta,

which means that there exists a finite time t_n ≥ t_1 at which w will arrive at the boundary of Q_n. Then, we have ∥βN^T w(t_n)∥_∞ = ∥βℳx̄(t_n)∥_∞ = δ < ∆_n. Next, we will show that βN^T w will remain in the unsaturated region of σ_{∆_n} for all t ≥ t_n. For w ∈ Q_n, the dynamics of x̄ can be written as \dot{\bar x} = -\beta\mathcal{M}\bar x + \bar u_n. Let z = Tx̄ = [z_0^T z_1^T]^T and Ψ = [ψ_0^T ψ_1^T]^T = Tū_n, with z_0 ∈ R^m, z_1 ∈ R^{m(N−1)}, ψ_0 ∈ R^m, and ψ_1 ∈ R^{m(N−1)}. We have,

\begin{bmatrix}\dot z_0\\ \dot z_1\end{bmatrix} = -\beta T\mathcal{M}T^T z + \Psi = -\beta\begin{bmatrix}0 & 0\\ 0 & M_1\end{bmatrix}\begin{bmatrix}z_0\\ z_1\end{bmatrix} + \begin{bmatrix}\psi_0\\ \psi_1\end{bmatrix}.   (17)

Solving Eq. (17) for t ≥ t_n, we have

z_1(t) = e^{-\beta M_1(t-t_n)} z_1(t_n) + \int_{t_n}^{t} e^{-\beta M_1(t-\tau)}\psi_1(\tau)\,d\tau,

and

\|z_1(t)\| \le \|z_1(t_n)\| + \int_{t_n}^{t} \|e^{-\beta M_1(t-\tau)}\|\,\|\psi_1(\tau)\|\,d\tau.

Since M_1 > 0, we have ∥e^{−βM_1 t}∥ ≤ e^{−βλ(M_1)t}. Then, ∥z_1(t)∥ ≤ ∥z_1(t_n)∥ + (∆_{n+1} + ∆_0)/(βλ(M_1)). Since βN^T w = βℳx̄ = βT^T(TℳT^T)z = βT_1 M_1 z_1, and ∥βN^T w(t_n)∥_∞ = δ, we have ∥z_1(t_n)∥ ≤ √(mN)δ/(βλ(M_1)). Then, for t ≥ t_n, we have ∥z_1(t)∥ ≤ √(mN)δ/(βλ(M_1)) + (∆_{n+1} + ∆_0)/(βλ(M_1)), and, by (13)(d),

\|\beta N^T w\|_\infty \le \|\beta N^T w\| \le \beta\bar\lambda(M_1)\|z_1\| \le \frac{\bar\lambda(M_1)}{\lambda(M_1)}\big(\sqrt{mN}\,\delta + \Delta_{n+1} + \Delta_0\big) \le \Delta_n,

where we have used the fact that T_1^T T_1 = I_{m(N−1)}. Thus, we prove that, after a finite time t_n, βN^T w, namely βℳx̄, remains in the unsaturated region of σ_n for t ≥ t_n.

From the above analysis, we conclude that, in the closed-loop multi-agent system (14), x̄^{(s)}, s ∈ I[1, n−1], βℳx̄ and α∇²F(x̄ − x̄^{(1)})x̄^{(1)} will all enter and remain in the unsaturated regions of the respective saturation functions in a finite time, from which the closed-loop multi-agent system (14) can be written as

\begin{cases} \dot v = \beta\mathcal{M}\bar x, \quad v(0)=0,\\ \dot{\bar x} = \bar x^{(1)} + u_0,\\ \dot{\bar x}^{(1)} = u_0,\\ \dot{\bar x}^{(s)} = -\sum_{l=2}^{s}\bar x^{(l)} + u_0, \quad s\in I[2, n-1],\\ u_0 = -\bar x^{(1)} - \beta\mathcal{M}\bar x - \alpha\nabla^2 F(\bar x - \bar x^{(1)})\,\bar x^{(1)} - \sigma_{n+1}\big(v + \alpha\nabla F(\bar x - \bar x^{(1)}) + \bar x^{(1)}\big). \end{cases}   (18)

Let χ_1 = col[v, x̄, x̄^{(1)}] and χ_2 = col[x̄^{(2)}, x̄^{(3)}, ..., x̄^{(n−1)}]. Then, system (18) can be written in the following cascade form,

\begin{cases}\dot\chi_1 = g_1(\chi_1),\\ \dot\chi_2 = A_2\chi_2 + B_2 g_2(\chi_1),\end{cases}   (19)

where g_1(χ_1) is a Lipschitz function with g_1(0) = 0 and is defined in an obvious way, g_2(χ_1) = u_0, and

A_2 = \begin{bmatrix} -I_m & 0 & \cdots & 0\\ -I_m & -I_m & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ -I_m & -I_m & \cdots & -I_m \end{bmatrix}, \qquad B_2 = \begin{bmatrix} I_m\\ I_m\\ \vdots\\ I_m \end{bmatrix}.

We next show that system (19) is globally asymptotically stable at its equilibrium point χ_1^0 = col[v^0, x^0, (x^{(1)})^0], χ_2^0 = col[(x^{(2)})^0, (x^{(3)})^0, ..., (x^{(n−1)})^0], where we recall that (x^{(s)})^0 = 0, s ∈ I[1, n−1]. To this end, let φ = x̄^{(1)} and ϕ = x̄ − x̄^{(1)}. We can transform subsystem χ_1 into

\begin{cases} \dot v = \beta\mathcal{M}(\varphi + \phi), \quad v(0)=0,\\ \dot\phi = \varphi,\\ \dot\varphi = -\varphi - \beta\mathcal{M}(\phi + \varphi) - \alpha\nabla^2 F(\phi)\varphi - \sigma_{n+1}\big(v + \alpha\nabla F(\phi) + \varphi\big). \end{cases}   (20)

Choose the following Lyapunov function

V(v, \phi, \varphi) = \frac{1}{2}\varphi^T\varphi + \frac{\beta}{2}\phi^T\mathcal{M}\phi + \Phi_{n+1}\big(v + \alpha\nabla F(\phi) + \varphi\big),

where, for an s = [s_1 s_2 ⋯ s_m]^T ∈ R^m, Φ_{n+1}(s) is defined as Φ_{n+1}(s) = ∑_{i=1}^{m} ∫_0^{s_i} σ_{∆_{n+1}}(t) dt. Let the equilibrium point of (20) be [v^0, ϕ^0, φ^0], where ϕ^0 = x^0 and φ^0 = (x^{(1)})^0. Clearly, V ≥ 0, and V = 0 if and only if φ = 0, ℳϕ = 0 and v + α∇F(ϕ) = 0, namely,

v = v^0, ϕ = ϕ^0, φ = φ^0. Note that V is radially unbounded in the subspace {(v, ϕ, φ) : ∑_{i=1}^{N} v_i(t) = 0}. The derivative of V along the trajectories of (20) can be evaluated as follows,

\dot V = -\varphi^T\varphi - \beta\varphi^T\mathcal{M}\varphi - 2\varphi^T\sigma_{n+1}\big(v+\alpha\nabla F(\phi)+\varphi\big) - \sigma_{n+1}^T\big(v+\alpha\nabla F(\phi)+\varphi\big)\,\sigma_{n+1}\big(v+\alpha\nabla F(\phi)+\varphi\big) - \alpha\varphi^T\nabla^2 F(\phi)\varphi
\quad = -\big\|\varphi + \sigma_{n+1}\big(v+\alpha\nabla F(\phi)+\varphi\big)\big\|^2 - \beta\varphi^T\mathcal{M}\varphi - \alpha\varphi^T\nabla^2 F(\phi)\varphi
\quad \le -\big\|\varphi + \sigma_{n+1}\big(v+\alpha\nabla F(\phi)+\varphi\big)\big\|^2 - \alpha\varphi^T\nabla^2 F(\phi)\varphi.

By Assumption 3, ∇²F(ϕ) > 0. Thus, we get V̇ ≤ 0, and V̇ = 0 if and only if φ = 0 and v + α∇F(ϕ) = 0. Let S = {(v, ϕ, φ) : V̇ = 0}, and let (v(t), ϕ(t), φ(t)) be a trajectory that belongs identically to S, that is, V̇ ≡ 0. Then, v + α∇F(ϕ) = 0, φ(t) ≡ 0, and φ̇(t) ≡ 0. By (20), we get ℳϕ = 0, which implies that v(t) ≡ v^0, ϕ(t) ≡ ϕ^0 and φ(t) ≡ φ^0. By LaSalle's Invariance Principle, system (20) is asymptotically stable at its equilibrium point (v^0, ϕ^0, φ^0), at which g_2(χ_1) = 0. Since A_2 is Hurwitz, we can conclude that system (19) is globally asymptotically stable at its equilibrium point [v^0, x^0, (x^{(1)})^0, ..., (x^{(n−1)})^0], which indicates that the multi-agent system (2) reaches consensus at x = x* under the consensus control laws (6). □

Remark 8. In the absence of actuator saturation, the boundedness assumption on ∇²f_i, i ∈ I[1, N], in Assumption 3 is not needed, and the bounded optimal control law u_i for each agent i, i ∈ I[1, N], can be simplified to

\begin{cases} \dot v_i = \beta\omega_i\sum_{j=1}^{N} a_{ij}(\bar x_i - \bar x_j), \quad v_i(0) = 0,\\ u_i = -\sum_{l=1}^{n-1}\bar x_i^{(l)} - \beta\omega_i\sum_{j=1}^{N} a_{ij}(\bar x_i - \bar x_j) - \big(v_i + \alpha\nabla f_i(\bar x_i - \bar x_i^{(1)}) + \bar x_i^{(1)}\big) - \alpha\nabla^2 f_i(\bar x_i - \bar x_i^{(1)})\,\bar x_i^{(1)}, \end{cases}   (21)

where the two controller parameters α and β can be any positive scalars. Under these control laws, the closed-loop multi-agent system from the initial time can be written in the cascade form (19), but with

g_2(\chi_1) = -\bar x^{(1)} - \beta\mathcal{M}\bar x - \big(v + \alpha\nabla F(\bar x - \bar x^{(1)}) + \bar x^{(1)}\big) - \alpha\nabla^2 F(\bar x - \bar x^{(1)})\,\bar x^{(1)}.

Using the same analysis as that of (19), but with the slightly different Lyapunov function

\bar V = \frac{1}{2}\big\|v + \alpha\nabla F(\phi) + \varphi\big\|^2 + \frac{1}{2}\varphi^T\varphi + \frac{\beta}{2}\phi^T\mathcal{M}\phi,

asymptotic stability of the closed-loop multi-agent system at the equilibrium point, and hence optimal consensus, can be established. In comparison with Zhang and Hong (2015), the construction of the control law (21) is much simpler in the sense that its only two design parameters can be chosen arbitrarily, while 2N + 1 parameters are needed in Zhang and Hong (2015).

5. Simulation results

Consider three vehicles moving on a 2D plane. Each vehicle is modeled as a double-integrator agent i = 1, 2, 3,

\ddot x_i = u_i,

where x_i ∈ R² is the position of agent i. We assume that, for each i, the acceleration u_i ∈ R², namely, the control input of agent i, has a bound u_max = 30. The locations [d_11 d_12]^T = [−110 300]^T, [d_21 d_22]^T = [−314 200]^T, and [d_31 d_32]^T = [−500 100]^T are the most suitable locations known to agent 1, agent 2 and agent 3, respectively. The three agents need to rendezvous at a location from which the total squared distance to the three locations is minimum. Thus, the three agents are endowed with the cost functions f_1(x_1) = (x_11 − d_11)² + (x_12 − d_12)², f_2(x_2) = (x_21 − d_21)² + (x_22 − d_22)², and f_3(x_3) = (x_31 − d_31)² + (x_32 − d_32)², respectively.

[Fig. 1. The communication topology represented by a directed graph.]

Let the communication topology among the agents be given as shown in Fig. 1. The adjacency matrix A and the Laplacian matrix L of this topology are given by

A = \begin{bmatrix} 0 & 0.5 & 0.5\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{bmatrix}, \qquad L = \begin{bmatrix} 1 & -0.5 & -0.5\\ -1 & 2 & -1\\ -1 & -1 & 2 \end{bmatrix}.

This communication topology satisfies Assumption 1, with ω_1 = 2, ω_2 = 1, and ω_3 = 1. Then, we have

M = \begin{bmatrix} 2 & -1 & -1\\ -1 & 2 & -1\\ -1 & -1 & 2 \end{bmatrix}, \qquad T = \begin{bmatrix} 0.5774 & 0.5774 & 0.5774\\ 0.7634 & -0.6325 & -0.1310\\ 0.2895 & 0.5164 & -0.8059 \end{bmatrix}.

Choose α = 0.001, β = 0.1, ∆_0 = 0.056, ∆_1 = 28, ∆_2 = 1.78, ∆_3 = 0.056, δ = 0.68 and ε = 0.001, which satisfy (13)(a)–(e). Then, the bounded control law for each agent can be constructed according to (6). Under these control laws, we simulate the closed-loop system with

[x_1(0)\ \ x_2(0)\ \ x_3(0)] = \begin{bmatrix} -150 & 210 & -500\\ -240 & 300 & 100 \end{bmatrix}, \qquad [x_1^{(1)}(0)\ \ x_2^{(1)}(0)\ \ x_3^{(1)}(0)] = \begin{bmatrix} 6 & -1 & 0\\ 2 & 10 & -5 \end{bmatrix}.
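For reproducibility, here is a minimal simulation sketch of this example (ours, not the authors' code). It uses simple Euler integration; for n = 2 the transformation Θ = [[1, 1], [0, 1]] ⊗ I_2 gives x̄_i = x_i + x_i^(1) and x̄_i^(1) = x_i^(1). The time step and horizon are illustrative assumptions; with these small gains convergence to x* is slow.

```python
import numpy as np

# Data from this section
A = np.array([[0.0, 0.5, 0.5], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
omega = np.array([2.0, 1.0, 1.0])
d = np.array([[-110.0, 300.0], [-314.0, 200.0], [-500.0, 100.0]])
alpha, beta = 0.001, 0.1
Delta = np.array([0.056, 28.0, 1.78, 0.056])        # Delta_0, Delta_1, Delta_2, Delta_3 (n = 2)

sat = lambda s, lev: np.clip(s, -lev, lev)

x = np.array([[-150.0, -240.0], [210.0, 300.0], [-500.0, 100.0]])   # positions x_i(0)
xd = np.array([[6.0, 2.0], [-1.0, 10.0], [0.0, -5.0]])              # velocities x_i^(1)(0)
v = np.zeros((3, 2))

dt, steps = 0.01, 200_000
for _ in range(steps):
    xbar = x + xd                                   # x_bar_i = x_i + x_i^(1) for n = 2
    u = np.zeros((3, 2))
    vdot = np.zeros((3, 2))
    for i in range(3):
        cons = beta * omega[i] * sum(A[i, j] * (xbar[i] - xbar[j]) for j in range(3))
        grad = 2.0 * (x[i] - d[i])                  # grad f_i at x_bar_i - x_bar_i^(1) = x_i
        u[i] = (-sat(xd[i], Delta[1]) - sat(cons, Delta[2])
                - sat(v[i] + alpha * grad + xd[i], Delta[3])
                - sat(alpha * 2.0 * xd[i], Delta[0]))   # alpha * Hess f_i * x_bar_i^(1)
        vdot[i] = cons
    x, xd, v = x + dt * xd, xd + dt * u, v + dt * vdot

print(np.round(x, 1))   # each row approaches x* = [-308, 200] as the horizon grows
```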

Shown in Fig. 2 are the simulation results under the control laws (6). They show that the states of all agents converge to x* = [−308 200]^T, which minimizes the overall objective function. Additional simulation results also show that the control laws designed without taking actuator saturation into consideration in Zhang and Hong (2015) fail to achieve global optimal consensus in the presence of actuator saturation.

6. Conclusions

In this paper we have studied the global optimal consensus problem with bounded controls, where each agent is represented by higher-order integrator dynamics and is endowed with its own cost function that is known only to itself. By utilizing the information obtained through the communication network and the information of its own cost function, an optimal control law was constructed for each agent to achieve global optimal consensus when the communication topology among the agents is strongly connected and detailed balanced. Further study on optimal consensus subject to input time-delay is currently underway.


Fig. 2. Simulation results under the control laws (6).

References

Alighanbari, M., & How, J. P. (2005). Decentralized task assignment for unmanned aerial vehicles. In Proceedings of the 44th IEEE conference on decision and control and European control conference (pp. 5668–5673).
Chen, W., & Ren, W. (2017). Event-triggered zero-gradient-sum distributed consensus optimization over directed networks. Automatica, 65, 90–97.
Cook, J., & Hu, G. (2010). Experimental validation and algorithm of a multi-robot cooperative control method. In Proceedings of the IEEE/ASME international conference on advanced intelligent mechatronics (pp. 109–114).
Cortés, J., & Bullo, F. (2005). Coordination and geometric optimization via distributed dynamical systems. SIAM Journal on Control and Optimization, 44(5), 1543–1551.
Godsil, C., & Royle, G. (2001). Algebraic graph theory. New York: Springer-Verlag.
Jiang, F., & Wang, L. (2009). Finite-time information consensus for multi-agent systems with fixed and switching topologies. Physica D, 238(16), 1550–1560.
Kia, S. S., Cortés, J., & Martínez, S. (2015). Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica, 55, 254–264.
Klein, D. J., Bettale, P. K., Triplett, B. I., & Morgansen, K. A. (2008). Autonomous underwater multivehicle control with limited communication: theory and experiment. IFAC Proceedings Volumes, 41(1), 113–118.
Li, Y., Xiang, J., & Wei, W. (2011). Consensus problems for linear time-invariant multi-agent systems with saturation constraints. IET Control Theory & Applications, 5(6), 823–829.
Liu, W., & Huang, J. (2017). Adaptive leader-following consensus for a class of higher-order nonlinear multi-agent systems with directed switching networks. Automatica, 79, 84–92.
Lu, J., & Tang, C. Y. (2012). Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case. IEEE Transactions on Automatic Control, 57(9), 2348–2354.
Meng, Z., Zhao, Z., & Lin, Z. (2013). On global leader-following consensus of identical linear dynamic systems subject to actuator saturation. Systems & Control Letters, 62, 132–142.
Ni, W., & Cheng, D. (2010). Leader-following consensus of multi-agent systems under fixed and switching topologies. Systems & Control Letters, 59(3), 209–217.
Olfati-Saber, R., & Murray, R. M. (2004). Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9), 1520–1533.
Rahili, S., & Ren, W. (2017). Distributed continuous-time convex optimization with time-varying cost functions. IEEE Transactions on Automatic Control, 62(4), 1590–1605.


Shi, G., Johansson, K. H., & Hong, Y. (2011). Multi-agent systems reaching optimal consensus with time-varying communication graphs. IFAC Proceedings Volumes, 44(1), 5693–5699.
Su, S., & Lin, Z. (2016). Distributed consensus control of multi-agent systems with higher order agent dynamics and dynamically changing directed interaction topologies. IEEE Transactions on Automatic Control, 61(2), 515–519.
Sun, C., Ye, M., & Hu, G. (2017). Distributed time-varying quadratic optimization for multiple agents under undirected graphs. IEEE Transactions on Automatic Control, 62(7), 3687–3694.
Sussmann, H. J., & Sontag, E. D. (1994). A general result on the stabilization of linear systems using bounded controls. IEEE Transactions on Automatic Control, 39(12), 2411–2425.
Wang, J., & Liu, Q. (2015). A second-order multi-agent network for bound-constrained distributed optimization. IEEE Transactions on Automatic Control, 60(12), 3310–3315.
Xie, Y., & Lin, Z. (2017). Global optimal consensus for multi-agent systems with bounded controls. Systems & Control Letters, 102, 104–111.
Yang, T., Meng, Z., Dimarogonas, D. V., & Johansson, K. H. (2014). Global consensus for discrete-time multi-agent systems with input saturation constraints. Automatica, 50(2), 499–506.
Yang, T., Wang, Y., Wang, H., & Lin, Z. (2018). Global optimal consensus for discrete-time multi-agent systems with bounded controls. Automatica, 97, 182–185.
Zhang, Y., & Hong, Y. (2014). Distributed optimization design for second-order multi-agent systems. In Proceedings of the 33rd Chinese control conference (pp. 1755–1760).
Zhang, Y., & Hong, Y. (2015). Distributed optimization design for high-order multi-agent systems. In Proceedings of the 34th Chinese control conference (pp. 7251–7256).
Zhang, X., Liu, L., & Feng, G. (2015). Leader–follower consensus of time-varying nonlinear multi-agent systems. Automatica, 52, 8–14.
Zhao, Z., & Lin, Z. (2016). Global leader-following consensus of a group of general linear systems using bounded controls. Automatica, 68, 294–304.
Zhao, Y., Liu, Y., Wen, G., & Chen, G. (2017). Distributed optimization of linear multi-agent systems: edge- and node-based adaptive designs. IEEE Transactions on Automatic Control, 62(7), 3602–3609.

Yijing Xie was born in Luoyang, Henan, China, in 1993. She received her Bachelor of Engineering degree in Automation from Wuhan University, Wuhan, Hubei, China, in 2014. She is currently a Ph.D. candidate in the Department of Automation at Shanghai Jiao Tong University, Shanghai, China. Since 2017, she has been a visiting Ph.D. student at the University of Virginia. Her main research interests lie in coordinated control of multi-agent systems, optimization, and event-triggered control.

Zongli Lin is the Ferman W. Perry Professor in the School of Engineering and Applied Science at the University of Virginia. He received his B.S. degree in Mathematics and Computer Science from Xiamen University, Xiamen, China, in 1983, his Master of Engineering degree in Automatic Control from the Chinese Academy of Space Technology, Beijing, China, in 1989, and his Ph.D. degree in Electrical and Computer Engineering from Washington State University, Pullman, Washington, in 1994. His current research interests include nonlinear control, robust control, and control applications. He was an Associate Editor of the IEEE Transactions on Automatic Control (2001–2003), IEEE/ASME Transactions on Mechatronics (2006–2009) and IEEE Control Systems Magazine (2005–2012). He was elected a member of the Board of Governors of the IEEE Control Systems Society (2008–2010, 2019–2021) and chaired the IEEE Control Systems Society Technical Committee on Nonlinear Systems and Control (2013–2015). He has served on the operating committees of several conferences and was the program chair of the 2018 American Control Conference. He currently serves on the editorial boards of several journals and book series, including Automatica, Systems & Control Letters, Science China Information Sciences, and the Springer/Birkhäuser book series Control Engineering. He is a Fellow of the IEEE, a Fellow of the IFAC, and a Fellow of AAAS, the American Association for the Advancement of Science.