Consensus condition for linear multi-agent systems over randomly switching topologies

Automatica 49 (2013) 3125–3132

Contents lists available at ScienceDirect

Automatica journal homepage: www.elsevier.com/locate/automatica

Brief paper

Consensus condition for linear multi-agent systems over randomly switching topologies✩

Keyou You a, Zhongkui Li b,c, Lihua Xie c,1

a Department of Automation, Tsinghua University, Beijing, 100084, China
b State Key Laboratory for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing, 100871, China
c EXQUISITUS, Center for E-City, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore

Article info

Article history:
Received 7 March 2012
Received in revised form 11 May 2013
Accepted 5 July 2013
Available online 12 August 2013

Keywords:
Network topology
Consensus
Riccati inequality
Consensus gain
Markov process
Link failure

Abstract

This paper studies both continuous- and discrete-time consensus problems for multi-agent systems with linear time-invariant agent dynamics over randomly switching topologies. The switching is governed by a time-homogeneous Markov process, whose state corresponds to a possible interaction topology among agents. Necessary and sufficient conditions for achieving consensus under a common control protocol are derived for the two cases, respectively. It is shown that the effect of switching topologies on consensus is determined by the union of topologies associated with the positive recurrent states of the Markov process. Moreover, the effect of random link failures on discrete-time consensus is investigated. The implications of, and relationships with, existing results are discussed. Finally, the theoretical results are validated via simulations.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Recent years have witnessed increasing attention to distributed coordination of multiple agents, due to its broad applications in many areas, including formation control, distributed sensor networks, flocking, distributed computation and synchronization of coupled chaotic oscillators. A fundamental requirement in this topic is that all agents reach an agreement using local information, which is determined by the underlying network topology. Toward this objective, a key step is to design a network-based control protocol such that, as time goes on, all agents asymptotically agree on some variable of interest in an appropriate sense. Most of the existing work focuses on agents with single- or double-integrator dynamics and assumes that neighbors' information is perfectly obtained; see Olfati-Saber and Murray (2004), Ren

✩ This work was partially supported by National Natural Science Foundation of China under grants NSFC 61104153 and 61120106011. The material in this paper was partially presented at the 9th IEEE Conference on Control and Automation, December 19–21, 2011, Santiago, Chile. This paper was recommended for publication in revised form by Associate Editor Hideaki Ishii under the direction of Editor Ian R. Petersen. E-mail addresses: [email protected] (K. You), [email protected] (Z. Li), [email protected] (L. Xie). 1 Tel.: +65 67904524; fax: +65 67920415.

0005-1098/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.automatica.2013.07.024

(2008) and references therein. For those special agent dynamics, necessary and sufficient conditions on the interaction topology for achieving consensus have been established under various settings (Jadbabaie, Lin, & Morse, 2003; Olfati-Saber & Murray, 2004; Ren & Beard, 2005). However, for agent dynamics described by linear time-invariant systems, it is challenging to derive necessary and sufficient conditions for reaching consensus (Ma & Zhang, 2010; You & Xie, 2011). In the discrete-time case, the connectivity of the graph must be strong enough to dominate the instability of the agent dynamics (You & Xie, 2011). Since systems often operate in uncertain environments, stochastic consensus with single-integrator agent dynamics has been well studied (Matei & Baras, 2009; Tahbaz-Salehi & Jadbabaie, 2010). In Tahbaz-Salehi and Jadbabaie (2010), a necessary and sufficient condition for achieving consensus with single-integrator agent dynamics is that the mean topology with respect to a stationary distribution of the switching process be connected. This result is extended to Markovian switching topologies in Matei and Baras (2009) for both discrete- and continuous-time consensus, where the condition for average consensus becomes that the union of topologies corresponding to the positive recurrent states of the Markov process is strongly connected. A similar conclusion is available for discrete-time multi-agent systems with double-integrator dynamics in Miao, Xu, and Zou (2013).

This work investigates both continuous- and discrete-time consensus problems of linear multi-agent systems over randomly switching topologies, and is devoted to revealing the effect of Markovian switching topologies and random link failures on consensus. Specifically, we consider a group of identical agents, each described by a linear time-invariant system. An individual agent updates its state using neighbors' information so that all agents asymptotically reach an agreement in some sense. The topology switching is governed by a time-homogeneous Markov process whose state space corresponds to all the possible topologies. Compared with single/double-integrator agent dynamics, the main difference in the current problem is that the open-loop matrix of the agent dynamics may contain strictly unstable eigenvalues, which renders the design of a consensus protocol more challenging and may require stronger connectivity. Intuitively, the more unstable the agent dynamics, the stronger the consensus condition needed. Under a fixed topology, connectedness alone is no longer able to ensure discrete-time consensus of agent dynamics with unstable open-loop poles (You & Xie, 2011). In fact, the unstable poles make agents deviate from each other exponentially, while the network topology provides the local communications that drive the agents toward a common goal; these two competing factors have to be carefully balanced when designing a consensus protocol. For single/double-integrator dynamics no such problem exists, which significantly simplifies the design. In addition, it is well recognized that fast switching of topologies may have a destabilizing effect (Chatterjee & Liberzon, 2011). From this perspective, switching may further accelerate the deviation among agents and thus require a stronger consensus condition than the case without switching. This paper therefore derives necessary and/or sufficient conditions for achieving consensus of linear agent dynamics under randomly switching topologies.
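The trade-off between open-loop instability and communication can be made concrete in a toy two-agent scalar example of our own (not from the paper, and anticipating the link-failure model of Section 4): with x_i(t+1) = a x_i(t) + u_i(t), a single coupling link active with probability p, and gain K, the disagreement d = x_1 − x_2 obeys d(t+1) = (a − 2Kγ(t))d(t), so it contracts in mean square iff (1 − p)a² + p(a − 2K)² < 1 for some K, which is possible iff p > 1 − 1/a²:

```python
def ms_factor(a, p, K):
    """Mean-square contraction factor of the two-agent disagreement
    d(t+1) = (a - 2*K*gamma(t)) d(t), with gamma(t) ~ Bernoulli(p)."""
    return (1 - p) * a * a + p * (a - 2 * K) ** 2

a = 1.2
p_crit = 1 - 1 / (a * a)      # the link must be active often enough
# The factor is minimized at K = a/2, where it equals (1 - p) * a^2.
print(ms_factor(a, p_crit + 0.05, a / 2) < 1)   # True: consensus possible
print(ms_factor(a, p_crit - 0.05, a / 2) < 1)   # False: instability wins
```

The threshold p_crit grows with |a|: the more unstable the agent, the more reliable the communication must be, which is the competition described above.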
Although Su and Huang (2012) have derived consensus conditions for a class of linear agent dynamics under deterministic switching topologies, their results are mainly restricted to neutrally stable agent dynamics. This paper shows that the effect of switching topologies on consensus is determined by the union of topologies corresponding to the positive recurrent states of the Markov process. For balanced graphs, the necessary and sufficient condition for continuous-time consensus is that the agent dynamics is stabilizable and the union of topologies contains a spanning tree. Moreover, if the agent dynamics is controllable, the convergence speed to consensus can be made arbitrarily fast. For discrete-time consensus, the condition is stronger than in the continuous-time case: the connectivity of the union of topologies should be strong enough to dominate the instability of the agent dynamics. The effect of random link failures of undirected topologies on discrete-time consensus is studied as well. If the recovery probability (rate) of a failure-prone link is strictly greater than an explicitly derived lower bound, which depends on the unstable open-loop poles of the agent and the eigenratio of the underlying undirected topology (You & Xie, 2011), consensus can be achieved. The joint effect of random link failures, the network topology and the agent dynamics on consensus is quantified through a simple strict inequality. A preliminary version of this result was reported in You, Li, and Xie (2011). We also note that discrete-time consensus for linear agent dynamics over a weighted random lossy network is independently reported in Zhang and Tian (2012), where the recovery probability is expressed in terms of linear matrix inequalities (LMIs). Our results contain the existing results in the literature as special cases.

The rest of the paper is organized as follows. The problem formulation is delineated in Section 2.
In Section 3, necessary and sufficient conditions are derived for achieving continuous-time consensus; in contrast, for the discrete-time case, necessary conditions and sufficient conditions are established separately. In Section 4, we study the discrete-time consensus problem subject to random link failures. All the

consensus gains are designed using modified Riccati inequalities. Simulation results are presented in Section 5. Concluding remarks are drawn in Section 6.

Notation: for a symmetric matrix M, M ≥ 0 (M > 0) means that the matrix is positive semi-definite (positive definite), and M1 ≥ M2 for symmetric matrices means that M1 − M2 ≥ 0. The sets of nonnegative integers, real numbers and nonnegative real numbers are denoted by N, R and R≥0, respectively. Let 0 be the zero vector of appropriate dimension, which is clear from the context; similar notation is adopted for 1. The null space of a matrix A ∈ R^{N×N} is denoted by N(A) ≜ {x ∈ R^N | Ax = 0}. Let I_n ∈ R^{n×n} be the identity matrix. Given a set A, let 1_A(w) be the indicator function of A, i.e., 1_A(w) = 1 if w ∈ A and 1_A(w) = 0 if w ∉ A. The Kronecker product (Horn & Johnson, 1985), denoted by ⊗, facilitates the manipulation of matrices via the properties (1) (A ⊗ B)(C ⊗ D) = AC ⊗ BD and (2) (A ⊗ B)^T = A^T ⊗ B^T.

2. Problem formulation

2.1. Communication topologies

Let V = {1, . . . , N} be the set of N agents, with i representing the i-th agent. The topology (graph) G = {V, E} models the interactions among agents, where E ⊆ V × V is the edge set of paired agents. The ordered pair (i, j) ∈ E if there exists an edge from agent i to agent j; in this case, agent i can send information to agent j. G = {V, E} is a null graph if its edge set is empty. A sequence of edges (i1, i2), (i2, i3), . . . , (i(k−1), ik) with (i(j−1), ij) ∈ E, ∀j ∈ {2, . . . , k}, is called a directed path from agent i1 to agent ik. G is said to contain a spanning tree if there is a root agent from which there exists a directed path to every other agent. The structure of G can also be described by a weighted adjacency matrix A = [a_ij] ∈ R^{N×N}, i.e., a_ij > 0 if (j, i) ∈ E and a_ij = 0 otherwise. G is undirected if A = A^T.
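The spanning-tree condition can be checked directly from the edge set by testing reachability from each candidate root; a minimal stdlib-only sketch of our own (not from the paper), with edges stored as (sender, receiver) pairs:

```python
from collections import deque

def has_spanning_tree(edges, N):
    """Return True if the directed graph on nodes 0..N-1 contains a
    spanning tree, i.e., some root can reach every other node."""
    adj = {i: [] for i in range(N)}
    for (i, j) in edges:            # edge (i, j): agent i sends to agent j
        adj[i].append(j)
    for root in range(N):
        seen = {root}
        queue = deque([root])
        while queue:                # breadth-first search from this root
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if len(seen) == N:
            return True
    return False

# A directed path 0 -> 1 -> 2 contains a spanning tree (root 0);
# the pair of edges 0 -> 1, 2 -> 1 does not.
print(has_spanning_tree([(0, 1), (1, 2)], 3))  # True
print(has_spanning_tree([(0, 1), (2, 1)], 3))  # False
```

This brute-force check over all roots is O(N·(N + |E|)), which is adequate for the small graphs used in consensus simulations.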
The in-degree and out-degree of agent i are deg_in(i) = Σ_{j=1}^N a_ij and deg_out(i) = Σ_{j=1}^N a_ji, respectively. G is balanced if deg_in(i) = deg_out(i) for all i ∈ V. The Laplacian matrix of G is defined as L = D − A, where D ≜ diag(deg_in(1), . . . , deg_in(N)). For a positive integer s ∈ N, the union of s graphs G_1 = {V, E_1}, . . . , G_s = {V, E_s} is denoted by ∪_{i=1}^s G_i = {V, ∪_{i=1}^s E_i}, and its adjacency matrix is defined as the sum of the adjacency matrices of the G_i. The mirror of G = {V, E} is defined as Ĝ = {V, E ∪ Ê}, where Ê is the set of reverse edges obtained by reversing the direction of each edge of E; its adjacency matrix is Â ≜ (A + A^T)/2. For a balanced graph G, the Laplacian matrix of the mirror of G is L̂ ≜ (L + L^T)/2, which is positive semi-definite, and its eigenvalues in ascending order are 0 = λ_1(L̂) ≤ λ_2(L̂) ≤ · · · ≤ λ_N(L̂). By Ren and Beard (2005), λ_2(L̂) > 0 if G contains a spanning tree, which is also equivalent to Ĝ being connected; λ_2(L̂) is called the algebraic connectivity of Ĝ. Let G(t) = {V, E(t)} be the interaction topology of the agents at time t, where the edge set E(t) and the adjacency matrix A(t) are time-varying. If G(t) is stochastically time-varying, i.e., driven by a random process, the topologies are called randomly switching.
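These graph-theoretic quantities can be computed directly from the adjacency matrix; a minimal sketch of our own (not from the paper), representing weighted adjacency matrices as nested lists:

```python
def laplacian(A):
    """Graph Laplacian L = D - A, with D = diag of in-degrees (row sums)."""
    N = len(A)
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(N)]
            for i in range(N)]

def is_balanced(A):
    """Balanced graph: in-degree equals out-degree at every node."""
    N = len(A)
    return all(abs(sum(A[i]) - sum(A[j][i] for j in range(N))) < 1e-12
               for i in range(N))

def mirror_laplacian(A):
    """Laplacian of the mirror graph: (L + L^T) / 2."""
    L = laplacian(A)
    N = len(A)
    return [[(L[i][j] + L[j][i]) / 2.0 for j in range(N)] for i in range(N)]

A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # directed 3-cycle: balanced
Lm = mirror_laplacian(A)
print(is_balanced(A))                    # True
print([round(v, 3) for v in Lm[0]])      # [1.0, -0.5, -0.5]
```

Every row of L̂ sums to zero, and for a balanced graph L̂ is symmetric, so its eigenvalues (and in particular λ_2) are real.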

Assumption 1. As in Matei and Baras (2009), all network topologies are assumed to be balanced throughout this paper.

2.2. Consensus over random topologies

The dynamics of agent i in continuous time takes the following form:

  ẋ_i(t) = A_c x_i(t) + B_c u_i(t),  t ∈ R≥0,   (1a)

and its discrete counterpart is given by

  x_i(t + 1) = A_d x_i(t) + B_d u_i(t),  t ∈ N,   (1b)

where x_i(t) ∈ R^n and u_i(t) ∈ R^m represent the state and control input of agent i, respectively.
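As a toy illustration of our own (not from the paper), the discrete-time dynamics (1b) can be iterated under a diffusive neighbor-averaging input of the kind formalized below; here the agents are scalar with A_d = B_d = 1, the graph is a fixed complete graph on three agents, and the gain 0.3 is an arbitrary choice inside the stable range:

```python
def step(x, A, k):
    """One step of x_i(t+1) = x_i(t) + k * sum_j a_ij (x_j(t) - x_i(t)),
    i.e., (1b) with scalar A_d = B_d = 1 and a common gain k."""
    N = len(x)
    return [x[i] + k * sum(A[i][j] * (x[j] - x[i]) for j in range(N))
            for i in range(N)]

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # complete graph on 3 agents
x = [1.0, -2.0, 4.0]
for _ in range(50):
    x = step(x, A, 0.3)
print(round(sum(x) / 3, 6), round(max(x) - min(x), 9))   # → 1.0 0.0
```

Because the graph is balanced, the average state is invariant under the update, so the agents converge to the initial average 1.0 while the spread contracts by a fixed factor each step.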


Under a fixed topology G, the consensus protocol

  u_i(t) = K Σ_{j∈V} a_ij (x_j(t) − x_i(t))   (2)

is adopted in Ma and Zhang (2010) and You and Xie (2011). Here K ∈ R^{m×n} is a common consensus gain, independent of the agent index i. Necessary and sufficient conditions for the existence of a K achieving continuous-time consensus of multi-agent systems (1a) are given in Ma and Zhang (2010), while You and Xie (2011) provides sufficient conditions for discrete-time consensus of multi-agent systems (1b) under protocol (2), which are also necessary for certain classes of systems. In those papers, the consensus problem under protocol (2) is converted into a simultaneous stabilization problem. However, if G(t) is time-varying, this approach is no longer applicable. In this work, we address the consensus problem of multi-agent systems (1) over randomly switching topologies G(t), and the consensus protocol is modified as follows:

  u_i(t) = K Σ_{j∈V} a_ij(t) (x_j(t) − x_i(t)).   (3a)

G(t) randomly switches among s distinct topologies G(t) ∈ {G_1, . . . , G_s}, and G(t) = G_i if and only if the random variable σ(t) = i ∈ S ≜ {1, . . . , s}. The switching process {σ(t), t ≥ 0} is governed by a time-homogeneous Markov process whose state space corresponds to all the possible topologies. The consensus gain K is fixed, with robustness to the Markovian switching topologies. The common probability space for all random variables in this paper is denoted by (Ω, F, P), where Ω is the space of elementary events, F is the underlying σ-field on Ω, and P is a probability measure on F. Let the infinitesimal generator (Meyn, Tweedie, & Hibey, 1996) of the continuous-time Markov process {σ(t), t ∈ R≥0} be Q = (q_ij), which is given by

  P{σ(t + h) = j | σ(t) = i} = q_ij h + o(h) if j ≠ i, and 1 + q_ii h + o(h) if j = i.

Then q_ij is the transition rate from state i to state j, with q_ij ≥ 0 if i ≠ j, q_ii = −Σ_{j≠i} q_ij, and o(h) denotes an infinitesimal of higher order than h, i.e., lim_{h→0} o(h)/h = 0. Note that Q is a transition rate matrix: every row sums to zero and all off-diagonal elements are nonnegative. For the discrete-time case, denote the transition probability matrix of the Markov process {σ(t), t ∈ N} by Λ = (Λ_ij), where

  Λ_ij = P{σ(t + 1) = j | σ(t) = i},  t ∈ N.

It is also of interest to consider each link of G being subject to random failures under the protocol

  u_i(t) = K Σ_{j∈V} γ_ij(t) a_ij (x_j(t) − x_i(t)),  t ∈ N,   (3b)

where γ_ij(t) is a binary random variable. The γ_ij(t) are both spatially and temporally independent, and identically distributed for each t and (i, j) ∈ E, with P{γ_ij(t) = 1} = p = 1 − P{γ_ij(t) = 0}. In the sequel, p is referred to as the recovery rate. Obviously, consensus protocol (3b) is a special case of (3a).

Definition 1. The multi-agent systems (1) are said to reach consensus² under protocol (3) if there exists a fixed consensus gain K such that, for any finite x_i(0) and any initial distribution of σ(0) and γ_ij(0),

  lim_{t→∞} E[∥x_i(t) − x_j(t)∥²] = 0,  ∀i, j ∈ V,   (4)

where the expectation E[·] is taken under the measure P. The continuous-time consensus is achieved with speed α > 0 if there exist positive numbers C and t_0 such that

  E[∥x_i(t) − x_j(t)∥²] ≤ C exp(−α(t − t_0)),  ∀t > t_0.   (5)

For the discrete-time case, the consensus speed β ∈ (0, 1) is defined by E[∥x_i(t) − x_j(t)∥²] ≤ C β^{t−t_0}, ∀t ∈ (t_0, ∞) ∩ N.

A_c is called Hurwitz if all its eigenvalues lie in the open left half-plane, while A_d is called Schur if all its eigenvalues lie in the open unit disk. (A_c, B_c) is stabilizable if there exists a K_c ∈ R^{m×n} such that A_c + B_c K_c is Hurwitz; similarly, (A_d, B_d) is stabilizable if there exists a K_d ∈ R^{m×n} such that A_d + B_d K_d is Schur. Obviously, if A_c is Hurwitz or A_d is Schur, consensus can be achieved with a zero consensus gain. To avoid this trivial case, it is sensible to make the following assumption.

Assumption 2. A_c is not Hurwitz and A_d is not Schur.

Our objective is to reveal the effect of Markovian switching topologies and random link failures on consensus of multi-agent systems (1) under protocol (3).

3. Consensus with Markovian switching topologies

This section derives necessary and/or sufficient conditions for achieving both continuous- and discrete-time consensus over Markovian switching topologies.

3.1. Continuous-time consensus

In this subsection, the consensus of multi-agent systems (1a) under protocol (3a) is studied. It is shown that the stationary distribution and the positive recurrent states of the Markov process {σ(t), t ∈ R≥0} play important roles in the design of the consensus gain.

Assumption 3. The continuous-time Markov process with transition rate matrix Q is ergodic.

This assumption is only for ease of presentation; see Remark 2(c) below on how to revise the main result if it is violated. Under Assumption 3, the Markov process admits a unique invariant distribution π^c = [π_1^c, . . . , π_s^c]^T, and each state of the Markov process is reachable from any other state in the state space; hence π_i^c > 0, ∀i ∈ S. Additionally, there is no loss of generality in focusing on the consensus problem with the Markov process starting from the invariant distribution π^c, which implies that the distribution of σ(t) is given by π^c for all t ∈ R. For ease of notation, the union of the topologies G_i, i ∈ S, is defined by G_un = ∪_{i=1}^s G_i; similar notation is used for L_un, Ĝ_un and L̂_un. If (A_c, B_c) is stabilizable, there exists a P_c > 0 such that P_c A_c + A_c^T P_c − 2 P_c B_c B_c^T P_c < 0, and we can then find an α > 0 such that

  P_c A_c + A_c^T P_c − 2 P_c B_c B_c^T P_c + α P_c < 0.   (6)

Note that if (A_c, B_c) is controllable, there always exists a P_c > 0 solving (6) for any positive α (Li, Duan, & Chen, 2011).

² Here consensus is defined in the mean square sense. In view of Corollary 3.46 of do Valle Costa, Fragoso, and Marques (2005), this implies that consensus is also achieved in the almost sure sense.
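As a numerical illustration of our own (not from the paper) of the design based on (6), consider scalar agents A_c = a = 1, B_c = 1, for which (6) reduces to 2aP − 2P² + αP < 0, i.e., any P > a + α/2 works. We then take K = τ·B_c^T P_c with τ = 1/λ_2(L) for a fixed complete graph on three agents (a degenerate single-topology case, so π^c = 1 and λ_2(L) = 3) and simulate (1a) under (3a) by Euler discretization; all numeric choices are arbitrary:

```python
# Scalar version of (6): with a = alpha = 1, any P > a + alpha/2 = 1.5 works.
a, alpha = 1.0, 1.0
P = a + alpha / 2 + 0.1
assert 2 * a * P - 2 * P * P + alpha * P < 0      # (6) holds for this P

# Gain K = tau * P with tau = 1 / lambda_2(L) = 1/3 for the complete graph.
K = P / 3.0
Adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
x, h = [1.0, -2.0, 4.0], 0.01
for _ in range(2000):                             # Euler steps up to t = 20
    u = [K * sum(Adj[i][j] * (x[j] - x[i]) for j in range(3))
         for i in range(3)]
    x = [x[i] + h * (a * x[i] + u[i]) for i in range(3)]
print(max(x) - min(x) < 1e-2)   # True: deviations decay despite a = 1 > 0
```

The individual states diverge (the open-loop pole a = 1 is unstable), yet the pairwise deviations contract at rate a − 3K < 0, which is the separation between agent instability and graph coupling described in Remark 2(a).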

Theorem 1. Consider the Markovian switching topologies with generator Q. Under Assumptions 1–3, a necessary and sufficient condition for achieving consensus of multi-agent systems (1a) under protocol (3a) is that (a) G_un contains a spanning tree; and (b) (A_c, B_c) is stabilizable. Moreover, if these conditions are satisfied, let K = τ · B_c^T P_c, where the coefficient τ ≥ (π^c λ_2(L̂_un))^{−1} with π^c = min_{i∈S} {π_i^c} and P_c given in (6); then consensus is achieved with speed α.

Proof. Denote the Laplacian matrix at time t by L(t) and the average state by x̄(t) ≜ (1/N) Σ_{i=1}^N x_i(t) = (1/N)(1^T ⊗ I_n) x(t). The deviation from the average is δ_i(t) ≜ x_i(t) − x̄(t). Inserting control protocol (3a) into (1a), we obtain

  δ̇_i(t) = A_c δ_i(t) − B_c K Σ_{j=1}^N L_ij(t) δ_j(t),   (7)

where L_ij(t) is the (i, j)-th element of L(t). Let δ(t) = [δ_1^T(t), . . . , δ_N^T(t)]^T. It follows that

  δ̇(t) = (I_N ⊗ A_c − L(t) ⊗ B_c K) δ(t).   (8)

Necessity: assume that G_un does not contain a spanning tree. It follows from Ren and Beard (2005) that there exists a vector v ∈ R^N, v ∉ span{1}, satisfying L_un v = 0. Since L_i 1 = 0, −L_i is a transition rate matrix. By Lemma 3.2 in Matei and Baras (2009), this implies that

  N(−L_un) = ∩_{i=1}^s N(−L_i).   (9)

Then v ∈ ∩_{i=1}^s N(−L_i), i.e., L_i v = 0 for all i ∈ S. Define Φ = [1, v, Φ_1], where Φ_1 is selected to make Φ nonsingular. There exists a matrix L̄_i ∈ R^{(N−2)×(N−2)} such that Φ^{−1} L_i Φ = diag(0, 0, L̄_i), ∀i ∈ S. Let δ̃(t) = (Φ^{−1} ⊗ I_n) δ(t) and partition δ̃(t) in conformity with δ(t). It follows that δ̃_1(t) = 0 ∈ R^n and d δ̃_2(t)/dt = A_c δ̃_2(t) ∈ R^n, ∀t ≥ 0. For δ̃_2(0) ∉ N(A_c), it follows that lim_{t→∞} ∥δ̃_2(t)∥ ≠ 0 under Assumption 2, which contradicts the definition of consensus. If (A_c, B_c) is not stabilizable, there exists an unstable and uncontrollable mode λ of A_c. By the PBH controllability test (Chen, 1984), there exists v ≠ 0 such that v* A_c = λ v* and v* B_c = 0, where * denotes the conjugate transpose. Let z(t) = (I ⊗ v)* δ(t). It follows from (8) that ż(t) = λ z(t). Since λ is unstable, lim inf_{t→∞} |z(t)| > 0 if z(0) ≠ 0, again a contradiction.

Sufficiency: define the stochastic Lyapunov function candidates

  V(t) = E[δ(t)^T (I_N ⊗ P_c) δ(t)],   (10)
  V_i(t) = E[δ(t)^T (I_N ⊗ P_c) δ(t) 1_{σ(t)=i}],  ∀i ∈ S.   (11)

By Lemma 4.2 in Costa and Fragoso (2005), it follows that

  dV_i(t) = E[(dδ(t))^T (I_N ⊗ P_c) δ(t) 1_{σ(t)=i} + δ(t)^T (I_N ⊗ P_c) dδ(t) 1_{σ(t)=i}] + Σ_{j=1}^s q_ji V_j(t) dt + o(dt).   (12)

Since σ(t) starts at the invariant distribution π^c, it follows from π_i^c ≥ π^c that

  dV(t)/dt ≤ E[δ(t)^T (I_N ⊗ (P_c A_c + A_c^T P_c) − (L + L^T)/λ_2(L̂_un) ⊗ P_c B_c B_c^T P_c) δ(t)].   (13)

Take a unitary matrix Φ = [1/√N, φ_2, . . . , φ_N], where φ_i ∈ R^N is an orthonormal eigenvector of L̂_un associated with the eigenvalue λ_i(L̂_un), i.e., φ_i^T L̂_un = λ_i(L̂_un) φ_i^T for all i ∈ {2, . . . , N}. Similarly, define δ̃(t) = (Φ ⊗ I_n)^T δ(t). It is clear that δ̃_1(t) = 0 for all t ≥ 0. Thus, it follows that

  δ(t)^T (I_N ⊗ (P_c A_c + A_c^T P_c) − (L + L^T)/λ_2(L̂_un) ⊗ P_c B_c B_c^T P_c) δ(t)
   = Σ_{j=2}^N δ̃_j(t)^T (A_c^T P_c + P_c A_c − (2λ_j(L̂_un)/λ_2(L̂_un)) P_c B_c B_c^T P_c) δ̃_j(t)
   ≤ −α Σ_{j=1}^N δ̃_j(t)^T P_c δ̃_j(t) = −α δ(t)^T (I_N ⊗ P_c) δ(t).   (14)

Combining the above, we obtain dV(t)/dt ≤ −α V(t). It follows from the comparison lemma (Khalil & Grizzle, 2002) that V(t) ≤ exp(−αt) V(0), i.e., V(t) exponentially converges to zero with speed α. Consensus of multi-agent systems (1a) is thus achieved under protocol (3a) with consensus speed α. ∎

Remark 2. (a) The method for designing a consensus gain in Theorem 1 is important since it separates the design problem from the graphs. Specifically, each agent constructs a feedback gain B_c^T P_c using only (6), and scales it by a coefficient τ that accounts for the graph. In this view, the effects of the agent dynamics and of the network topologies on consensus are decoupled. (b) For single-integrator agent dynamics, i.e., A_c = 0 and B_c = 1, (6) is obviously solvable, and the necessary and sufficient condition reduces to G_un containing a spanning tree, which is consistent with Matei and Baras (2009). In this case, (6) is solvable for any α > 0, so consensus can be achieved with arbitrary speed; our result thus strengthens Matei and Baras (2009) by appropriately designing a consensus gain. (c) If Assumption 3 is violated, the state space of the Markov process can be uniquely decomposed as S = {D, S_1, . . . , S_f}, where D is a set of transient and null recurrent states and each S_i is a closed set of positive recurrent states. The graph condition is then modified to: ∪_{i∈S_j} G_i contains a spanning tree for all j ∈ {1, . . . , f}. A similar condition has been established in Miao et al. (2013) for double-integrator agent dynamics. (d) If (A_c, B_c) is controllable, (6) is solvable for any positive α, so consensus can be achieved with any speed, as in the fixed topology case (Li et al., 2011). (e) The balancedness assumption is only used in the sufficiency proof, due to the use of the Lyapunov approach. For single-integrator agent dynamics, an additional variable called surplus is introduced to study consensus over general digraphs in Cai and Ishii (2012). However, this approach may not be directly applicable to the present consensus problem since the agent dynamics contains strictly unstable open-loop poles. Nonetheless, the study of unbalanced graphs is an interesting topic for future research.

3.2. Discrete-time consensus

For the discrete-time case, the connectivity of the union of topologies should be strong enough to dominate the instability of A_d in order to achieve consensus. Similarly, we make the following assumption.

Assumption 4. The discrete-time Markov process with transition probability matrix Λ is ergodic.

Lemma 3 (Hengster-Movric, You, Lewis, & Xie, 2013). Consider the discrete-time modified Riccati inequality

  P_d > A_d^T P_d A_d − γ A_d^T P_d B_d (B_d^T P_d B_d)^{−1} B_d^T P_d A_d.   (15)

K. You et al. / Automatica 49 (2013) 3125–3132

Assume that (Ad , Bd ) is stabilizable and Ad is not Schur, there is a critical γc ∈ [0, 1) such that ∀γ > γc , there always exists a positive definite matrix Pd solving (15). Again, the discrete Markov process is assumed to start at its invariant distribution π d = [π1d , . . . , πsd ]T . Theorem 4. Consider the Markovian switching topologies with the transition probability matrix Λ. Under Assumptions 1, 2 and 4, a necessary condition for achieving consensus of multi-agent systems (1b) under protocol (3a) is that (a) Gun contains a spanning tree; (b) (Ad , Bd ) is stabilizable.



un ) > γc ρ/π d , consensus can be achieved by Moreover, if λ2 (L T −1 T K = τ (Bd Pd Bd ) Bd Pd Ad , where π d = mini∈S {πid }, τ ∈ {τ ∈ un ) − τ 2 ρ 2 }, γc is given in Lemma 3, ρ 2 is R|γc < γ (τ ) = 2π d τ λ2 (L s T the maximum eigenvalue of i=1 Li Li , and Pd is a positive definite matrix solving (15) under γ = γ (τ ). Proof. Only the sufficiency is to be elaborated as the proof of necessity is similar to the continuous-time case. By Lemma 3, it follows that there is a Pd > 0 solving (15) under γ = γ (τ ). It is possible to find a sufficiently small positive ζ < 1 such that

4. Consensus with random link failures In this section, we discuss the discrete consensus problem subject to random link failures. Consider the random link failures on an undirected topology, we derive a sufficient condition to achieve consensus for multi-agent systems (1b) under protocol (3b). This condition is given in terms of agent dynamics, the eigenratio of the undirected topology (You & Xie, 2011) and the recovery rate, which is also necessary for some special cases. λ (L)−λ (L) Given an undirected graph G, denote φ = λN (L)+λ2 (L) . In You N 2 and Xie (2011), it is demonstrated that without link failures, the effect of undirected topologies on reaching consensus under protocol (2) is completely determined by φ . In what follows, we will continue to use φ to characterize the effect of graphs. Theorem 6. Given an undirected G, assume that the random link failure of G is governed by γij (t ). A sufficient condition for achieving consensus of discrete-time multi-agent systems (1b) under protocol (3b) is that (a) (Ad , Bd ) is stabilizable; (b) p(1 − φ 2 ) > γc , where γc is given in Lemma 3. Moreover, if the above conditions are satisfied, select a γ such that γc < γ ≤ p(1 − φ 2 ) and let Pd be the positive definite solution to (15). Then, a feasible consensus gain can be given by

ATd Pd Ad − Pd − γ ATd Pd Bd (BTd Pd Bd )−1 BTd Pd Ad < −ζ Pd . Consider a Lyapunov function candidate as follows V (t ) = E[δ(t )T (IN ⊗ Pd )δ(t )].

3129

(16)

Let the consensus gain be K = τ ( ) . As in the sufficiency proof of Theorem 1, it is not difficult to obtain that V (t + 1) ≤ (1 − ζ )V (t ). Then, the rest of proof is straightforward. BTd Pd Bd −1 BTd Pd Ad

Remark 5. (a) For a continuous-time system (1a) under a sufficiently small sampling period, the unstable eigenvalues of the discretized system (1b) can be made arbitrarily close to one. This implies that γc will be arbitrarily close to zero as well. un ) > √γc ρ/π d will be eventuHence, the inequality λ2 (L ally satisfied if the union of topologies contains a spanning tree. This is consistent with Theorem 1. In fact, for a continuous-time system, information can be transmitted arbitrarily fast so that it requires a weaker connectivity than the discrete case. (b) As in the continuous-time case, Assumption 4 is only for the purpose of simplifying presentation, which can be easily extended to a general discrete-time Markov process. (c) For single/double-integrator agent dynamics, it follows from Lemma 3 that γc = 0. Then, a necessary and sufficient for achieving consensus under protocol (3a) is that the union of topologies contains a spanning tree. This is consistent with the results in Matei and Baras (2009) and Miao et al. (2013). un ) characterizes the goodness of con(d) Roughly speaking, λ2 (L nectivity of  Gun while γc quantifies the instability of Ad . In particular, if Ad is more unstable, it follows from Lemma 3 that γc is larger. Then, a better connectivity is required to achieve consensus. This is consistent with our intuition as well.

K =

2

λ2 (L) + λN (L)

(BTd Pd Bd )−1 BTd Pd Ad .

Proof. Under random link failures, G(t ) becomes stochastically time-varying. Let F (t ) ⊂ F be an increasing sequence of σ -fields generated by the random variables {γij (s), s < t , (i, j) ∈ E }. Since γ > γc , there exists a positive definite matrix Pd solving (15). Consider the Lyapunov function candidate as follows: V2 ( t ) =

N 

δi (t )T Pd δi (t ).

(17)

i=1

Letting δ(t ) = [δ1T (t ), . . . , δNT (t )]T , it is trivial that

λm ∥δ(t )∥2 ≤ V2 (t ) ≤ λM ∥δ(t )∥2 , where λm and λM are the smallest and the largest eigenvalues of Pd . By (15), there is a positive β < 1 such that Pd − ATd Pd Ad + γ ATd Pd Bd (BTd Pd Bd )−1 BTd Pd Ad > βλM I . In addition, it trivially holds that

E[Lij (t )] = pLij , N

E[L2ij (t )] ≤



(18)

Liu Luj (E[γiu2 (t )]E[γuj2 (t )])1/2

u=1

= pL2ij ,

3.3. Relation to deterministic switching topologies

E[Liv (t )Liu (t )] ≤ pLiv Liu .

Under the Markovian switching topologies, it follows from As t +T sumption 3 and the Birkhoff’s ergodic theorem that limT →∞ T1 t L(τ )dτ is well-defined with probability one for any t ≥ 0 (Meyn et al., 1996). Since the Markov process starts from its stationary distribution, the convergence of limit is uniform with respect to t. By Theorem 1, our result is consistent with Kim, Shim, Back, and Seo (2013). If the switching is governed by an ergodic Markov process, the condition is also necessary, which is not studied in Kim et al. (2013). While for the discrete case, the consensus problem over time-varying graphs is much more involved under Assumption 2.

Inserting control protocol (3b) into (1b), it follows that

δi(t + 1) = Ad δi(t) − Bd K Σ_{j=1}^N Lij(t) δj(t),   (19)

which can be written in the compact form

δ(t + 1) = (IN ⊗ Ad − L(t) ⊗ Bd K) δ(t).   (20)

When it is clear from the context, we shall drop the dependence on time t. Let ϕ = 2/(λ2(L) + λN(L)).
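To build intuition for the closed loop (20) and the role of ϕ, consider a minimal sketch (assuming the simplest agents Ad = Bd = 1, a fixed undirected path graph, and K = ϕ, none of which is the paper's general setting): the disagreement vector then evolves as δ(t + 1) = (I − ϕL)δ(t) and contracts at rate max{|1 − ϕλ2|, |1 − ϕλN|}.

```python
# Scalar-agent special case of (20): delta(t+1) = (I - phi*L) delta(t)
# on an illustrative undirected path graph with three nodes.
import numpy as np

L = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])          # path graph Laplacian
lam = np.sort(np.linalg.eigvalsh(L))      # eigenvalues [0, 1, 3]
phi = 2.0 / (lam[1] + lam[2])             # step size, here 0.5

delta = np.array([1.0, 0.0, -1.0])        # disagreement: entries sum to zero
for _ in range(60):
    delta = (np.eye(3) - phi * L) @ delta
```

On the disagreement subspace the map I − ϕL has spectrum {1 − ϕλ2, ..., 1 − ϕλN} ⊂ (−1, 1), so δ(t) decays geometrically to zero.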


Combining the above, we obtain that

E[V2(t + 1)|F(t)] − V2(t) = Σ_{i=1}^N ( E[δi(t + 1)^T Pd δi(t + 1)|F(t)] − δi(t)^T Pd δi(t) )
≤ δ^T ( IN ⊗ (Ad^T Pd Ad − Pd) − 2pL ⊗ Ad^T Pd Bd K ) δ + p Σ_{i=1}^N Σ_{u,v=1}^N Liu Liv δu^T K^T Bd^T Pd Bd K δv
= δ^T ( IN ⊗ (Ad^T Pd Ad − Pd) + p((ϕL)^2 − 2ϕL) ⊗ Ad^T Pd Bd (Bd^T Pd Bd)^{-1} Bd^T Pd Ad ) δ.

With an abuse of notation, take a unitary matrix Φ = [1N/√N, φ2, ..., φN], where φi^T L = λi(L) φi^T, to transform L into the diagonal form diag(0, λ2(L), ..., λN(L)) = Φ^T L Φ. Denote δ̃(t) = (Φ ⊗ In)^T δ(t) and partition δ̃(t) ∈ R^{nN} into two parts, i.e., δ̃(t) = [δ̃1^T(t), δ̃2^T(t)]^T, where δ̃1(t) ∈ R^n consists of the first n elements of δ̃(t). Then δ̃1(t) = (1/√N) Σ_{i=1}^N δi(t) = 0. Denoting λ̄i = ϕλi(L), it follows that

Fig. 1. Network topologies: (1) G1; (2) G2; (3) Gun = G1 ∪ G2.

min_{i∈{2,...,N}} {2λ̄i − λ̄i^2} = min{2λ̄2 − λ̄2^2, 2λ̄N − λ̄N^2} = 1 − φ^2.

Thus, for all i ∈ {2, ..., N}, it follows that Ad^T Pd Ad − Pd − p(2λ̄i − λ̄i^2) Ad^T Pd Bd (Bd^T Pd Bd)^{-1} Bd^T Pd Ad ≤ −βλM I. Then, we obtain that

E[V2(t + 1)|F(t)] − V2(t) ≤ −βλM Σ_{i=2}^N δ̃i^T δ̃i = −βλM ∥δ̃∥^2 = −βλM ∥δ∥^2 ≤ −β V2(t),   (21)

which implies that for any t ∈ N, E[V2(t + 1)] ≤ (1 − β) E[V2(t)] ≤ (1 − β)^{t+1} V2(0). Since 0 < β < 1, it follows that lim_{t→∞} E[∥δ(t)∥^2] = 0, i.e., consensus is achieved for the multi-agent systems (1).

Remark 7. (a) Condition (b) of Theorem 6 implies that φ < 1. Then λ2(L) > 0, which means that G contains a spanning tree; this is consistent with Olfati-Saber and Murray (2004). If there are no link failures, i.e., p = 1, assume that all eigenvalues of Ad lie on or outside the unit circle and that rank(Bd) = 1. Then the second condition reduces to ∏_j |λj^u(Ad)| < φ^{-1}. Jointly with You and Xie (2011), this implies that the sufficient condition in Theorem 6 is necessary as well. (b) For the average consensus of multi-agent systems with single-integrator dynamics, i.e., Ad = Bd = 1, it is clear that γc = 0. Then p > 0 is sufficient to achieve consensus, which is also necessary and consistent with Tahbaz-Salehi and Jadbabaie (2010).
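The p = 1 condition in Remark 7(a) can be evaluated directly from the Laplacian spectrum and the eigenvalues of Ad. The sketch below assumes an undirected graph (so that the eigenvalues of L are real and φ = (λN − λ2)/(λN + λ2)) and uses illustrative matrices, not the paper's examples.

```python
# Evaluate prod_j |lambda_j^u(Ad)| < 1/phi for an illustrative graph and Ad.
import numpy as np

L = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])                 # path graph, eigenvalues {0, 1, 3}
lam = np.sort(np.linalg.eigvalsh(L))
phi = (lam[-1] - lam[1]) / (lam[-1] + lam[1])    # eigenratio, here 0.5

Ad = np.array([[1.2, 1.0],
               [0.0, 1.1]])                      # all eigenvalues outside unit circle
prod_unstable = np.prod(np.abs(np.linalg.eigvals(Ad)))  # 1.2 * 1.1 = 1.32

condition_holds = prod_unstable < 1.0 / phi      # 1.32 < 2, so True here
```

Note that a better-connected graph (smaller φ) tolerates a more unstable Ad, matching Remark 5(d).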

5. Simulations

In this section, the adjacency matrix is selected as a binary matrix, whose elements are either 1 or 0.

5.1. Continuous-time consensus

Let the continuous-time agent dynamics be specified as

Ac = [0 0 1; 0 0 0; 0 1 0],  Bc = [−1; 1; 0].

Consider four networked agents and let the interaction topologies randomly switch between G1 and G2 of Fig. 1. Note that neither G1 nor G2 admits a spanning tree, while Gun contains a spanning tree. The generator is chosen as

Q = [−1 1; 2 −2],

and the initial distribution of the continuous-time Markov process is given by its invariant distribution πc = [2/3, 1/3]. The initial state of each agent is assumed to be a standard Gaussian random vector. Suppose that we want to achieve a consensus speed α = 0.1. In view of Theorem 1, the consensus gain is solved as K = [2.4273, 3.3773, 2.3198]. Fig. 2 shows a sample path of the consensus-seeking process, where only the first element of the vector δi(t) is displayed on the vertical axis; similar results can be observed for the other elements of δi(t). It illustrates that consensus is achieved with speed α, which is consistent with Theorem 1.

Fig. 2. Continuous-time consensus.

5.2. Discrete-time consensus

Consider the discrete-time agent dynamics:

Ad = [1 0.5 0; −1 0 1; 1.5 0 0],  Bd = [1; −0.5; 0.4].   (22)

Let the interaction topologies among agents randomly switch between G1 and G2 of Fig. 1. The transition probability matrix is chosen as

Λ = [0.4 0.6; 0.7 0.3].
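The stationary distribution of the transition matrix Λ solves π = πΛ with entries summing to one; for the values reconstructed here this gives π = [7/13, 6/13], which is the πd used as the chain's initial distribution below. A quick check:

```python
# Compute the stationary distribution of the row-stochastic matrix Lambda.
import numpy as np

Lam = np.array([[0.4, 0.6],
                [0.7, 0.3]])

# Stack pi (Lam - I) = 0 with the normalization sum(pi) = 1 and solve the
# consistent overdetermined system in the least-squares sense.
A = np.vstack([Lam.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # -> [7/13, 6/13]
```

Since the system is consistent, the least-squares solution is exact.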



The Markov process starts at its stationary distribution πd = [7/13, 6/13]. By Theorem 4, λ2(Lun) = 2, ρ = 2.4495, τ = 0.4095 and γ = 0.25. A consensus gain is computed as K = [0.4878, 0.2178, 0.1713]. Fig. 3 shows a realization of the randomly switching topologies; consensus is achieved as expected from Theorem 4.

Fig. 3. Discrete-time consensus with Markovian topology switching: (1) Markov process; (2) consensus seeking.

5.3. Consensus with random link failures

Let the agent dynamics be given by (22). The network topology G is specified in Fig. 4 with φ = 1/3. By Lemma 3, γc = 0.2293. Selecting γ = 0.23 in (15), we obtain a consensus gain K = [3.6410, 5.0659, 3.4798]. The recovery rate in Theorem 6 is required to satisfy p > 0.2580. Fig. 5 shows two sample paths under different recovery rates. It illustrates that consensus is reached when p = 0.3, while the agents have much less chance of reaching consensus when p = 0.25. Thus, Theorem 6 is supported by this simulation example.

Fig. 4. Network topology G.

Fig. 5. Discrete-time consensus with link failures: (1) p = 0.3; (2) p = 0.25.

6. Conclusion

Motivated by the uncertainties in real communication networks, we have addressed both the continuous- and discrete-time consensus problems of linear multi-agent systems under randomly switching topologies. Necessary and sufficient conditions have been derived under different network environments, which highlight the importance of the union of the graphs corresponding to the states of the Markov process. We have also included simulation examples to demonstrate the theoretical results. All the results recover the related results in the existing literature as special cases.

References

Cai, K., & Ishii, H. (2012). Average consensus on general strongly connected digraphs. Automatica, 48(11), 2750–2761.
Chatterjee, D., & Liberzon, D. (2011). Stabilizing randomly switched systems. SIAM Journal on Control and Optimization, 49(5), 2008–2031.
Chen, C. (1984). Linear system theory and design. Philadelphia, PA, USA: Saunders College Publishing.
Costa, O., & Fragoso, M. (2005). A unified approach for stochastic and mean square stability of continuous-time linear systems with Markovian jumping parameters and additive disturbances. SIAM Journal on Control and Optimization, 44(4), 1165–1191.
do Valle Costa, O. L., Fragoso, M. D., & Marques, R. P. (2005). Discrete-time Markov jump linear systems. Springer.
Hengster-Movric, K., You, K., Lewis, F. L., & Xie, L. (2013). Synchronization of discrete-time multi-agent systems on graphs using Riccati design. Automatica, 49(2), 414–423.
Horn, R., & Johnson, C. (1985). Matrix analysis. Cambridge University Press.
Jadbabaie, A., Lin, J., & Morse, A. (2003). Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6), 988–1001.
Khalil, H. K., & Grizzle, J. (2002). Nonlinear systems. Upper Saddle River: Prentice Hall.
Kim, H., Shim, H., Back, J., & Seo, J. H. (2013). Consensus of output-coupled linear multi-agent systems under fast switching network: averaging approach. Automatica, 49(1), 267–272.
Li, Z., Duan, Z., & Chen, G. (2011). Dynamic consensus of linear multi-agent systems. IET Control Theory and Applications, 5(1), 19–28.
Ma, C., & Zhang, J. (2010). Necessary and sufficient conditions for consensusability of linear multi-agent systems. IEEE Transactions on Automatic Control, 55(5), 1263–1268.
Matei, I., & Baras, J. S. (2009). Convergence results for the linear consensus problem under Markovian random graphs. ISR Technical Report 2009-18, University of Maryland.
Meyn, S., Tweedie, R., & Hibey, J. (1996). Markov chains and stochastic stability. London: Springer-Verlag.
Miao, G., Xu, S., & Zou, Y. (2013). Necessary and sufficient conditions for mean square consensus under Markov switching topologies. International Journal of Systems Science, 44(1), 178–186.
Olfati-Saber, R., & Murray, R. (2004). Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9), 1520–1533.
Ren, W. (2008). On consensus algorithms for double-integrator dynamics. IEEE Transactions on Automatic Control, 53(6), 1503–1509.
Ren, W., & Beard, R. (2005). Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Transactions on Automatic Control, 50(5), 655–661.
Su, Y., & Huang, J. (2012). Two consensus problems for discrete-time multi-agent systems with switching network topology. Automatica, 48(9), 1988–1997.
Tahbaz-Salehi, A., & Jadbabaie, A. (2010). Consensus over ergodic stationary graph processes. IEEE Transactions on Automatic Control, 55(1), 225–230.
You, K., Li, Z., & Xie, L. (2011). Consensus for general multi-agent systems over random graphs. In Proc. 9th IEEE International Conference on Control and Automation, Santiago, Chile.
You, K., & Xie, L. (2011). Network topology and communication data rate for consensusability of discrete-time multi-agent systems. IEEE Transactions on Automatic Control, 56(10), 2262–2275.
Zhang, Y., & Tian, Y. (2012). Maximum allowable loss probability for consensus of multi-agent systems over random weighted lossy networks. IEEE Transactions on Automatic Control, 57(8), 2127–2132.

Keyou You was born in Jiangxi Province, China, in 1985. He received the B.S. degree in statistical science from Sun Yat-sen University, Guangzhou, China, in 2007 and the Ph.D. degree in electrical and electronic engineering from Nanyang Technological University, Singapore, in 2012. He was with the ARC Center for Complex Dynamic Systems and Control, the University of Newcastle, Australia, as a visiting scholar from May 2010 to July 2010, and with the Sensor Network Laboratory at Nanyang Technological University as a Research Fellow from June 2011 to June 2012. Since July 2012, he has been with the Department of Automation, Tsinghua University, China, as a Lecturer. From May 2013 to July 2013, he held a visiting position at The Hong Kong University of Science and Technology, Hong Kong. His current research interests include control and estimation of networked systems, distributed control and estimation, and sensor networks.


Dr. You won the Guan Zhaozhi best paper award at the 29th Chinese Control Conference, Beijing, China, in 2010.

Zhongkui Li received the B.S. degree in space engineering from the National University of Defense Technology, China, in 2005, and his Ph.D. degree in Dynamics and Control from Peking University, China, in 2010. From November 2008 to February 2009, he was a Research Assistant at City University of Hong Kong. From February 2011 to August 2011, he was a Research Fellow at Nanyang Technological University. From February 2012 to April 2012, he was a Postdoctoral Fellow with City University of Hong Kong. From August 2010 to April 2013, he was a Postdoctoral Research Associate with the School of Automation, Beijing Institute of Technology. Since April 2013, he has been with the State Key Laboratory for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, where he is currently an Assistant Professor.

Lihua Xie received the B.E. and M.E. degrees in electrical engineering from Nanjing University of Science and Technology in 1983 and 1986, respectively, and the Ph.D. degree in electrical engineering from the University of Newcastle, Australia, in 1992. He held teaching appointments in the Department of Automatic Control, Nanjing University of Science and Technology from 1986 to 1989. Since 1992, he has been with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, where he is currently a professor and Head of Division of Control and Instrumentation and Director, Centre for E-City. He was a Changjiang visiting professor with South China University of Technology from 2006 to 2010. Dr. Xie’s research interests include robust control and estimation, networked control systems, sensor networks, multi-agent systems, time delay systems and control of hard disk drive systems. He is an editor of IET Book Series on Control and has served as an Associate Editor of several journals including IEEE Transactions on Automatic Control, Automatica, IEEE Transactions on Control Systems Technology, IEEE Transactions on Circuits and Systems-II, and IET Proceedings on Control Theory and Applications. Dr. Xie is a Fellow of IEEE and a Fellow of IFAC.