Applied Mathematics and Computation 273 (2016) 1234–1245
Consensus seeking over Markovian switching networks with time-varying delays and uncertain topologies

Yilun Shang, Department of Mathematics, Tongji University, Shanghai 200092, China. E-mail addresses: [email protected], [email protected]. http://dx.doi.org/10.1016/j.amc.2015.08.115
MSC: 93E03; 93D05

Keywords: Consensus; Multi-agent system; Markov process; Time delay; Uncertainty
Abstract. This paper deals with stochastic consensus problems for linear time-invariant multi-agent systems over Markovian switching networks with time-varying delays and topology uncertainties. By using the linear matrix inequality method and the stability theory of Markovian jump linear systems, we show that consensus can be achieved for appropriate time delays and topology uncertainties that are not caused by the Markov process, provided the union of topologies associated with the positive recurrent states of the Markov process admits a spanning tree and the agent dynamics is stabilizable. Feasible linear matrix inequalities are established to determine the maximal allowable upper bound of the time-varying delays. Numerical examples are given to illustrate the theoretical results. © 2015 Elsevier Inc. All rights reserved.
1. Introduction

Cooperative control of networked multi-agent systems has received increasing attention during the past few years, mainly due to wide applications of multi-agent systems in areas such as flocking/swarming, formation control, attitude alignment, parallel computing, and distributed sensor fusion. In multi-agent cooperative control, an important topic is consensus (synchronization or agreement), which refers to steering a specific variable of the group members to a common value by using only local information determined by the underlying network topology; see the surveys in [1,2]. Thus, the design of appropriate control protocols and algorithms for a group of dynamic agents seeking to reach consensus on certain quantities of interest is a critical problem. The theoretical foundation of consensus control for agents with simple single/double-integrator dynamics on deterministic (fixed or switching) topologies is by now well understood through the application of graph-theoretical tools (see e.g. [3–5]). In many practical systems, the communication link between agents may only be available at random times due to link/node failure, signal losses, or packet drops; this motivates the recent investigation of consensus over random graphs [6–15]. For example, Hatano and Mesbahi [7] considered the asymptotic agreement of a continuous-time single-integrator agent dynamics over classical undirected random graphs, where each information channel between a pair of agents exists independently at random. The results were further extended by Porfiri and Stilwell [8] to solve the mean square consensus problem on directed and weighted random information networks. Shang [9] addressed multi-agent coordination in directed moving neighborhood random networks generated by random walkers.
Tahbaz-Salehi and Jadbabaie [10] provided necessary and sufficient conditions for almost sure consensus of a group of single-integrator agents, where the communication graph was derived from a strictly stationary ergodic graph process. Matei et al. [11] discussed the consensus problem of both discrete-time and
continuous-time multi-agent systems with single-integrator dynamics over Markovian switching topologies; they showed that the system achieves average consensus almost surely if and only if the union of topologies corresponding to the states of the Markov process is strongly connected. Similar results were obtained for double-integrator agent dynamics by Miao et al. [12], and more realistic aspects including measurement noises as well as quantization errors were factored in by Huang et al. [13]. Group consensus of discrete-time and continuous-time multi-agent systems with Markovian switching topologies was recently discussed by Zhao and Park [14] and Shang [15], respectively. Besides, mean square consensus problems over fixed topology with communication noises were tackled for discrete-time and continuous-time linear time-invariant systems in [16,17], respectively. It is noted that for agent dynamics described by more general linear time-invariant systems, it is challenging to derive conditions for achieving consensus due to the possible existence of strictly unstable poles in the open-loop matrix; see e.g. [18,19]. Recently, You et al. [20] extended the results of [11] to linear time-invariant systems; they established almost sure convergence by utilizing the linear dynamics governing the evolution of the mean square consensus error. In [21], network-induced delay and random noise effects were tackled in depth by drawing on the stability analysis of differential delay equations. Graphic conditions for group consensus were discussed by Shang [22] for linear time-invariant systems under Markovian switching topologies. The stochastic consensus of linear multi-input and multi-output systems with communication noises and Markovian switching topologies was studied in [23]. However, the systems studied in [20,22,23] are without time delay. It is well recognized that unmodelled time delay may degrade the performance and cause instability of a system [24,25].
In multi-agent systems, time-varying delays arise naturally due to the asymmetry of interactions, the congestion of communication channels, and the finite transmission speed. Moreover, system uncertainties exist in many situations; see e.g. [26–28]. Thus, this paper is devoted to deriving consensus conditions for linear multi-agent systems on Markovian switching topologies in the presence of both time-varying delays and topology uncertainties that are not related to the Markov process. Although delay robustness has been addressed in [12] for Markovian jump linear systems, the methods there are mainly restricted to discrete-time systems with fixed communication delays. This paper deals with the consensus problem of a group of agents with continuous-time linear dynamics, whose communication topology is a randomly switching network driven by a time-homogeneous Markov process. Each communication pattern (i.e., directed graph) corresponds to a state of the Markov process. Due to the introduction of time-varying delays and uncertainties, the existing approaches in [11–13,20,22,23] do not directly apply. Here, we show how the linear matrix inequality (LMI) method, together with results inspired by the stability analysis of Markovian jump linear systems, can be used to prove stochastic consensus results. It is shown that the multi-agent system can reach mean square and almost sure consensus for appropriate time-varying delays and topology uncertainties if the union of the topologies corresponding to the positive recurrent states of the Markov process has a spanning tree and the agent dynamics is stabilizable. Our results are presented in terms of feasible linear matrix inequalities, from which the maximal allowable upper bound of the time-varying delays and uncertainties can be easily obtained by using Matlab's LMI Toolbox. The consensus gain is designed via a Riccati inequality, and the speed of consensus is also estimated.
Finally, we work out some numerical examples to illustrate the applicability of our theoretical results. We mention that the LMI method was used in [29] for a system with random delays governed by a Markov chain, where the communication topology is, nevertheless, fixed. Another topic closely related to consensus is the synchronization of complex networks, where the synchronization stability of a network of oscillators is usually studied by the master stability function method. The difference between synchronization and consensus is that the former analyzes the case in which the uncoupled systems have identical nonlinear node dynamics, which constitutes the ultimate synchronous trajectory, while the latter focuses on reaching an agreement on some variable of interest through local interactions. Synchronization can thus be viewed as a generalization of consensus that encompasses nonlinear dynamics. Recent works along this line include [30] and [31], where finite-time synchronization for complex networks with Markov jump topology is studied. The LMI method was also used in [30] to determine sufficient synchronization conditions, but the system therein is without time delay. The rest of the paper is organized as follows. Section 2 contains the problem formulation. Section 3 presents the main results. Section 4 gives simulation results and Section 5 concludes the paper. Throughout this paper, the wildcard ∗ represents the elements below the main diagonal of a symmetric matrix. 1_n and 0_n denote the n-dimensional column vectors of all ones and all zeros, respectively. I_n is the n × n identity matrix; we often suppress the subscript n when the dimension is clear from the context. We say A > B (A ≥ B) if A − B is positive definite (semi-definite), where A and B are symmetric matrices of the same dimensions. A^T denotes the transpose of the matrix A. For a vector x, ‖x‖ refers to its Euclidean norm. The set of real numbers is denoted by R.
Let 1_E signify the indicator function of an event E. By A ⊗ B we denote the Kronecker product of two matrices A and B, which admits the following useful properties: (A ⊗ B)(C ⊗ D) = AC ⊗ BD, (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, and (A ⊗ B)^T = A^T ⊗ B^T.

2. Problem formulation

Let G = (V, E, A) represent a weighted directed graph of order N, where V = {v_1, v_2, …, v_N} is the set of nodes, i.e., agents, and E ⊆ V × V is the set of directed edges. A directed edge from node v_i to node v_j is denoted as an ordered pair (v_i, v_j), indicating that information can be sent from agent v_i to agent v_j. The weighted adjacency matrix A = (a_ij) ∈ R^{N×N} is defined by a_ij > 0 if (v_j, v_i) ∈ E, and a_ij = 0 otherwise. d_i^in = Σ_{j=1}^{N} a_ij and d_i^out = Σ_{j=1}^{N} a_ji are called the in-degree and out-degree of agent v_i, respectively. G is said to be balanced if d_i^in = d_i^out for all i = 1, …, N [3]. The graph Laplacian matrix associated with the graph G
satisfies L = (l_ij) = D − A, where D = diag(d_1^in, …, d_N^in) is the diagonal in-degree matrix. It is known that L always has a zero eigenvalue with the right eigenvector 1. A directed path from agent v_{i_1} to agent v_{i_k} consists of a sequence of nodes v_{i_1}, v_{i_2}, …, v_{i_k} such that (v_{i_{j−1}}, v_{i_j}) ∈ E for j = 2, …, k. We say that G contains a spanning tree if there is an agent (called the root) such that every other agent can be connected via a directed path originating from the root. The disoriented version of G, denoted by Ĝ = (V, Ê, Â), is the graph obtained by replacing the directed edges of G with undirected ones and setting Â = (A + A^T)/2. When G is balanced [3], L̂ = (L + L^T)/2 is the associated Laplacian matrix for Ĝ, which is positive semi-definite; hence its real eigenvalues can be ordered as λ_1(L̂) ≤ λ_2(L̂) ≤ ⋯ ≤ λ_N(L̂), with λ_1(L̂) = 0. It follows from [4, Lemma 3.3] that λ_2(L̂) > 0 if G is balanced and contains a spanning tree. For an integer s, the union of s graphs G_1 = (V, E_1, A_1), …, G_s = (V, E_s, A_s) is defined as ∪_{i=1}^{s} G_i = (V, ∪_{i=1}^{s} E_i, Σ_{i=1}^{s} A_i). For agent v_i, denote its state at time t by x_i(t) ∈ R^n, where t ≥ 0. We consider the following continuous-time linear dynamics
ẋ_i(t) = A x_i(t − τ(t)) + B u_i(t),   (1)
where τ(t) ≥ 0 is the time-varying self-communication delay, u_i(t) ∈ R^m represents the control input of agent v_i, and A and B are constant matrices with compatible dimensions. In this work, the communication topology among the agents is described by a randomly switching graph G(t), which is governed by a time-homogeneous Markov process θ(t) taking values in a finite set S = {1, 2, …, s}. More precisely, G(t) ∈ {G_1, …, G_s}, and G(t) = G_i if and only if θ(t) = i. In the multi-agent system (1), the consensus protocol u_i(t) takes the following form
u_i(t) = K Σ_{v_j ∈ V} (a_ij(t) + Δa_ij(t)) (x_j(t − τ(t)) − x_i(t − τ(t))),   (2)
where K ∈ R^{m×n} is a common consensus gain matrix to be determined, and Δa_ij(t) ∈ R are topology uncertainties. Here, Δa_ij(t) characterizes a temporary perturbation of the link quality caused by, for instance, the appearance/disappearance of an obstacle between the transmitting and receiving agents. Since negative weights rarely find applications in real settings, we assume that a_ij(t) + Δa_ij(t) ≥ 0 for all t ≥ 0. Sufficient conditions for continuous-time consensus with topology uncertainties are given in [27,28,32] by using Lyapunov functions and graph theory; the switching topologies considered there, nevertheless, are deterministic, and those methods are no longer applicable in the present scenario. Denote by (Ω, F, P) the underlying probability space for the Markov process θ(t). Its generator Γ = (γ_ij) ∈ R^{s×s} is formally given by
P(θ(t + Δt) = j | θ(t) = i) = { γ_ij Δt + o(Δt),      if i ≠ j,
                               { 1 + γ_ii Δt + o(Δt),  if i = j,
where Δt > 0. Here, γ_ij is the transition rate from i to j if i ≠ j, while γ_ii = −Σ_{j≠i} γ_ij. If θ(t) is ergodic, each state of the process is reachable from any other state, and there exists a unique invariant distribution π = (π_1, …, π_s)^T such that π_i > 0 for every i ∈ S [33]. We mention that the uncertainties Δa_ij(t) in (2) are arbitrary functions; hence, they need not be related to the Markov process θ(t) governing the switches. This feature gives needed flexibility in applications.

Definition 1. The system (1) under the control protocol (2) is said to achieve consensus if there exists a consensus gain K such that for any x_i(0) ∈ R^n and any initial distribution of θ(0),
lim_{t→∞} E(‖x_i(t) − x_j(t)‖²) = 0   (3)
for any i, j ∈ {1, 2, …, N}, where the expectation E is taken under the measure P. The consensus in Definition 1 is defined in the sense of mean square convergence; this implies that consensus can also be achieved in the almost sure sense [11,20,34]. We say the matrix A in (1) is Hurwitz (or stable) if every eigenvalue of A has strictly negative real part. The pair (A, B) is said to be stabilizable if there exists a K ∈ R^{m×n} such that A + BK is Hurwitz [35]. If A is Hurwitz, by definition, consensus can be reached trivially by setting K = 0. We make the following assumptions.

Assumption 1. (a) The communication graphs G_1, G_2, …, G_s are balanced. (b) The Markov process θ(t) is ergodic. (c) 0 ≤ τ(t) ≤ h and τ̇(t) ≤ κ < 1 for t ≥ 0, where h, κ ≥ 0.

Item (b) is for ease of presentation; we will show in Remark 1 how to modify the main result when it is violated. Item (c) implies that the delay τ(t) is differentiable. The situation with a non-differentiable delay will be treated in Corollary 1. The following lemmas are useful in the later derivation.

Lemma 1 (Schur complement, [36]). Let X, Y, Z be given matrices such that Z > 0. Then
[ X     Y
  Y^T  −Z ] < 0
if and only if X + Y Z^{-1} Y^T < 0.

Lemma 2 ([37]). For any real differentiable vector function x(t) ∈ R^n and any n × n constant matrix W = W^T > 0, we have the following inequality

(x(t) − x(t − τ(t)))^T W (x(t) − x(t − τ(t))) ≤ h ∫_{t−τ(t)}^{t} ẋ^T(s) W ẋ(s) ds,
for t ≥ 0 and 0 ≤ τ(t) ≤ h.

Lemma 3 ([38]). Let X, Y, Z be real matrices of appropriate dimensions with Z^T Z ≤ I. Then for any μ > 0, we have

XZY + Y^T Z^T X^T ≤ μ^{-1} X X^T + μ Y^T Y.

Lemma 4 ([39, p. 1175]). Suppose that f(t) is F-measurable and that E(f(t) 1_{θ(t)=i}) exists. Then for i ∈ S,

E(f(t) d(1_{θ(t)=i})) = Σ_{j=1}^{s} γ_ji E(f(t) 1_{θ(t)=j}) dt + o(dt).
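Lemma 1 can be checked numerically on small instances. The following sketch (an illustration, not part of the paper) confirms on two hand-picked cases that negative definiteness of the block matrix coincides with the Schur complement condition X + Y Z^{-1} Y^T < 0; the matrix sizes, values, and seed are arbitrary.

```python
import numpy as np

def is_neg_def(M):
    # M < 0 iff all eigenvalues of the symmetric part are negative.
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))

def schur_block(X, Y, Z):
    # Assemble [[X, Y], [Y^T, -Z]] as in Lemma 1.
    return np.block([[X, Y], [Y.T, -Z]])

rng = np.random.default_rng(0)
n, m = 3, 2
Z = np.eye(m)                                 # Z > 0
Y = 0.1 * rng.standard_normal((n, m))         # a small off-diagonal block

# Case 1: X sufficiently negative -> both tests agree on "negative definite".
X1 = -2.0 * np.eye(n)
lhs1 = is_neg_def(schur_block(X1, Y, Z))
rhs1 = is_neg_def(X1 + Y @ np.linalg.inv(Z) @ Y.T)

# Case 2: X positive definite -> both tests agree on "not negative definite".
X2 = np.eye(n)
lhs2 = is_neg_def(schur_block(X2, Y, Z))
rhs2 = is_neg_def(X2 + Y @ np.linalg.inv(Z) @ Y.T)

print(lhs1 == rhs1, lhs2 == rhs2)  # prints: True True
```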
3. Main results

In this section, we explore consensus conditions for the multi-agent system (1) under protocol (2). It is shown that the system can achieve consensus for suitable time-varying delays and topology uncertainties if the union of the topologies corresponding to the positive recurrent states of the Markov process has a spanning tree and the agent dynamics is stabilizable.

Let G_S be the union of graphs in the set S, i.e., G_S = ∪_{i=1}^{s} G_i. For i ∈ S, the Laplacian matrices of G_S and G_i will be denoted by L_S and L_i, respectively; hence L_S = Σ_{i=1}^{s} L_i by definition. Let L(t) = (l_ij(t)) be the Laplacian matrix of the switching topology G(t) at time t. Analogously, let ΔL(t) = (Δl_ij(t)) with

Δl_ij(t) = { −Δa_ij(t),           i ≠ j,
           { Σ_{k=1}^{N} Δa_ik(t),  i = j,
be the uncertain Laplacian matrix. For i = 1, …, N, define the disagreement error for agent v_i as ε_i(t) = x_i(t) − x̄(t), where x̄(t) = (1/N) Σ_{i=1}^{N} x_i(t) is the average state vector. Rearranging Eq. (1) with (2) yields

ε̇_i(t) = A ε_i(t − τ(t)) − BK Σ_{j=1}^{N} (l_ij(t) + Δl_ij(t)) ε_j(t − τ(t)),   (4)
employing the balancedness condition. Set ε(t) = (ε_1^T(t), …, ε_N^T(t))^T ∈ R^{nN}. The system (4) can be recast as

ε̇(t) = (I_N ⊗ A − (L(t) + ΔL(t)) ⊗ BK) ε(t − τ(t)).   (5)

If (A, B) is stabilizable, it follows from a matrix Riccati equation [40] that there exists a P > 0 such that PA + A^T P − 2PBB^T P < 0. We can choose a β > 0 satisfying

PA + A^T P − 2PBB^T P + βP < 0.   (6)
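Inequality (6) can be met constructively: a standard continuous-time algebraic Riccati solver applied with weight R = I/2 returns P satisfying A^T P + PA − 2PBB^T P = −Q with any chosen Q > 0, and every β < λ_min(Q)/λ_max(P) then satisfies (6). The sketch below is an illustration only; it uses the (A, B) of Example 2 in Section 4, and the choices Q = I and the 0.9 safety factor are ours.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Agent dynamics from Example 2 (Section 4).
A = np.array([[-1.0, 0.5], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])

Q = np.eye(2)          # any Q > 0
R = 0.5 * np.eye(1)    # R = I/2 makes P B R^{-1} B^T P = 2 P B B^T P
P = solve_continuous_are(A, B, Q, R)

# By the Riccati equation, PA + A^T P - 2 P B B^T P equals -Q.
lhs = P @ A + A.T @ P - 2 * P @ B @ B.T @ P
beta = 0.9 * np.min(np.linalg.eigvalsh(Q)) / np.max(np.linalg.eigvalsh(P))

# Check inequality (6): PA + A^T P - 2 P B B^T P + beta * P < 0.
print(np.linalg.eigvalsh(lhs + beta * P))   # all negative
```

The consensus gain is then taken as K = α B^T P with α chosen as in Theorem 1 below.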
Theorem 1. Suppose that Assumption 1 holds. Assume that G_S contains a spanning tree and that (A, B) is stabilizable. Choose the consensus gain K = α B^T P with P given by (6) and α ≥ (λ_2(L̂_S) · min_{i∈S} π_i)^{-1}. If

(ΔL^T(t) ΔL(t)) ⊗ (PBB^T P PBB^T P) ≤ (δ/α)² I_nN,   t ≥ 0,   (7)

then, for any 0 ≤ κ < 1 and appropriate δ > 0, there exists h > 0 such that the system (1) with protocol (2) achieves consensus, where h can be obtained from the following feasible LMI:
Ξ = [ Ξ11 + Θ   Ξ12 − Θ   Ξ13              C^T
      ∗         Ξ22 + Θ   Ξ23              0
      ∗         ∗         −h^{-1} I_N ⊗ R   I_N ⊗ R
      ∗         ∗         ∗                −μ I_nN ] < 0,   (8)

where Ξ11 = C^T(κ I_N ⊗ Q − β I_N ⊗ P)C, Ξ12 = C^T(−I_N ⊗ PA + λ_2^{-1}(L̂_S) L_S ⊗ PBB^T P + (1 − κ) I_N ⊗ Q)C, Ξ13 = −C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S) L_S^T ⊗ PBB^T)(I_N ⊗ R), Ξ22 = C^T((κ − 1) I_N ⊗ Q − h^{-1} I_N ⊗ R)C, Ξ23 = −Ξ13, Θ = δ²μ C^T C, C = [I_{N−1}  −1_{N−1}]^T, β is given by (6), Q ≥ 0 and R > 0 are n × n matrices to be determined, and μ > 0 is a constant.
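Matlab's LMI Toolbox treats Q, R and the scalars as decision variables when solving (8). As a rough numpy stand-in, one can instead fix Q = 0 and R = I_n (as in the feasibility argument of the proof below) together with heuristic δ and μ, so that checking (8) for a given h reduces to an eigenvalue test; since h enters only through the −h^{-1} I_N ⊗ R blocks, feasibility is monotone in h and the largest allowable h can be found by bisection. In the sketch below the graph data (two oppositely oriented 4-cycles whose union gives L_S), the scalar values, and the use of C ⊗ I_n as the compression matrix are our own assumptions for illustration, not data from the paper.

```python
import numpy as np

# ---- hypothetical graph data (NOT from the paper): two opposite 4-cycles ----
N, n = 4, 2
A1 = np.roll(np.eye(N), 1, axis=1)           # directed cycle 1->2->3->4->1
A2 = A1.T                                    # the reversed cycle
lap = lambda Adj: np.diag(Adj.sum(axis=1)) - Adj
LS = lap(A1 + A2)                            # Laplacian of the union graph G_S
lam2 = np.sort(np.linalg.eigvalsh((LS + LS.T) / 2))[1]   # lambda_2(hat L_S)

# ---- agent data of Example 2 (Section 4) and heuristic fixed scalars ----
A = np.array([[-1.0, 0.5], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
P = np.array([[2.0, 1.0], [1.0, 3.0]])       # satisfies (6) with beta = 0.5
beta, kappa = 0.5, 0.0
delta, mu = 0.01, 50.0                       # heuristic, not optimized
Q, R = np.zeros((n, n)), np.eye(n)           # Q = 0, R = I_n as in the proof

C = np.vstack([np.eye(N - 1), -np.ones((1, N - 1))])   # C = [I_{N-1}  -1]^T
Cn = np.kron(C, np.eye(n))
I_N, I_nN = np.eye(N), np.eye(n * N)
PBBtP = P @ B @ B.T @ P

def Xi(h):
    """Assemble the LMI matrix of (8) for fixed Q, R, mu, delta."""
    X11 = Cn.T @ (kappa * np.kron(I_N, Q) - beta * np.kron(I_N, P)) @ Cn
    X12 = Cn.T @ (-np.kron(I_N, P @ A) + np.kron(LS, PBBtP) / lam2
                  + (1 - kappa) * np.kron(I_N, Q)) @ Cn
    X13 = -Cn.T @ (np.kron(I_N, A.T)
                   - np.kron(LS.T, P @ B @ B.T) / lam2) @ np.kron(I_N, R)
    X22 = Cn.T @ ((kappa - 1) * np.kron(I_N, Q) - np.kron(I_N, R) / h) @ Cn
    Th = delta**2 * mu * Cn.T @ Cn
    Z = np.zeros((n * (N - 1), n * N))
    M = np.block([
        [X11 + Th, X12 - Th, X13, Cn.T],
        [(X12 - Th).T, X22 + Th, -X13, Z],
        [X13.T, -X13.T, -np.kron(I_N, R) / h, np.kron(I_N, R)],
        [Cn, Z.T, np.kron(I_N, R), -mu * I_nN]])
    return (M + M.T) / 2

feasible = lambda h: np.max(np.linalg.eigvalsh(Xi(h))) < 0

lo, hi = 1e-5, 1.0            # feasible at lo, infeasible at hi
for _ in range(40):           # bisect the largest allowable delay bound
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
h_max = lo
print("maximal allowable h for these (hypothetical) data:", h_max)
```

A proper treatment would let an SDP solver search over Q, R, μ as well, which can only enlarge the certified h.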
Proof. To begin with, we show that the LMI (8) is always feasible for any 0 ≤ κ < 1 and appropriate δ > 0 under the assumptions of Theorem 1. It suffices to show that there exist matrices Q ≥ 0, R > 0 and constants μ > 0, δ > 0, h > 0 such that (8) holds for any 0 ≤ κ < 1. Taking Q = 0 and R = I_n, it follows from Lemma 1 that (8) is equivalent to

[ C^T(−β I_N ⊗ P + δ²μ I_nN)C   C^T(−I_N ⊗ PA + λ_2^{-1}(L̂_S)L_S ⊗ PBB^T P − δ²μ I_nN)C   −C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S)L_S^T ⊗ PBB^T)
  ∗                             (δ²μ − h^{-1})C^T C                                        C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S)L_S^T ⊗ PBB^T)
  ∗                             ∗                                                          −h^{-1} I_nN ]
+ μ^{-1} [ C^T
           0
           I_nN ] [ C   0   I_nN ] < 0.   (9)
Recall that C^T C > 0, P > 0 and β > 0. For any 0 ≤ κ < 1 and a large enough μ, in order to show (9) it suffices to show

[ C^T(−β I_N ⊗ P + δ²μ I_nN)C   C^T(−I_N ⊗ PA + λ_2^{-1}(L̂_S)L_S ⊗ PBB^T P − δ²μ I_nN)C
  ∗                             (δ²μ − h^{-1})C^T C ]
+ h [ −C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S)L_S^T ⊗ PBB^T)
       C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S)L_S^T ⊗ PBB^T) ] · [ −C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S)L_S^T ⊗ PBB^T)
                                                           C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S)L_S^T ⊗ PBB^T) ]^T < 0
by invoking Lemma 1. If we choose h small enough, then it suffices to show

C^T(−β I_N ⊗ P + δ²μ I_nN)C + (h^{-1} − δ²μ)^{-1} C^T(−I_N ⊗ PA + λ_2^{-1}(L̂_S)L_S ⊗ PBB^T P − δ²μ I_nN)(−I_N ⊗ PA + λ_2^{-1}(L̂_S)L_S ⊗ PBB^T P − δ²μ I_nN)^T C < 0   (10)
by applying Lemma 1 again. Now it is clear that the inequality (10) holds if we take δ small enough. Therefore, (8) is always feasible for any 0 ≤ κ < 1 and appropriate δ > 0 under the assumption of Theorem 1. Next, we show that, for any 0 ≤ κ < 1 and suitable δ > 0, the multi-agent system (1) achieves consensus for τ (t) ≤ h if (8) holds. It suffices to show that the system (5) is asymptotically stable if (8) holds. Define stochastic Lyapunov function candidates by
V(t) = E[ ε^T(t)(I_N ⊗ P)ε(t) + ∫_{t−τ(t)}^{t} ε^T(s)(I_N ⊗ Q)ε(s) ds + ∫_{t−h}^{t} (s − t + h) ε̇^T(s)(I_N ⊗ R)ε̇(s) ds ]   (11)

and

V_i(t) = E[ ε^T(t)(I_N ⊗ P)ε(t) 1_{θ(t)=i} + ∫_{t−τ(t)}^{t} ε^T(s)(I_N ⊗ Q)ε(s) ds 1_{θ(t)=i} + ∫_{t−h}^{t} (s − t + h) ε̇^T(s)(I_N ⊗ R)ε̇(s) ds 1_{θ(t)=i} ]   (12)
for i ∈ S. It is worth mentioning that the above mode-independent Lyapunov functions are useful since we will later draw on Lemma 4, which provides an important relation concerning the expectation only. Rewrite the system (5) as

ε̇(t) = (I_N ⊗ A − (L(t) + ΔL(t)) ⊗ BK)ε(t) − (I_N ⊗ A − (L(t) + ΔL(t)) ⊗ BK)η(t),   (13)

where η(t) = ε(t) − ε(t − τ(t)). By Lemma 4, we obtain

dV_i(t) = E[ dε^T(t)(I_N ⊗ P)ε(t) 1_{θ(t)=i} + ε^T(t)(I_N ⊗ P)dε(t) 1_{θ(t)=i} + d(∫_{t−τ(t)}^{t} ε^T(s)(I_N ⊗ Q)ε(s) ds) 1_{θ(t)=i} + d(∫_{t−h}^{t} (s − t + h) ε̇^T(s)(I_N ⊗ R)ε̇(s) ds) 1_{θ(t)=i} ] + Σ_{j=1}^{s} γ_ji V_j(t) dt + o(dt).   (14)

Using (14) and following the trajectory of the solution of (13), we have

V̇_i(t) ≤ E[ ε^T(t)(κ I_N ⊗ Q + I_N ⊗ (PA + A^T P) − (L(t) + ΔL(t)) ⊗ PBK − (L^T(t) + ΔL^T(t)) ⊗ K^T B^T P)ε(t) 1_{θ(t)=i}
  + (κ − 1) η^T(t)(I_N ⊗ Q)η(t) 1_{θ(t)=i}
  − 2ε^T(t)(I_N ⊗ PA − (L(t) + ΔL(t)) ⊗ PBK + (κ − 1) I_N ⊗ Q)η(t) 1_{θ(t)=i}
  + h y^T(t)[I_N ⊗ A − (L(t) + ΔL(t)) ⊗ BK,  −I_N ⊗ A + (L(t) + ΔL(t)) ⊗ BK]^T (I_N ⊗ R)[I_N ⊗ A − (L(t) + ΔL(t)) ⊗ BK,  −I_N ⊗ A + (L(t) + ΔL(t)) ⊗ BK] y(t) 1_{θ(t)=i}
  − h^{-1} η^T(t)(I_N ⊗ R)η(t) ] + Σ_{j=1}^{s} γ_ji V_j(t),   (15)
where y(t) = (ε^T(t), η^T(t))^T ∈ R^{2nN}, and we have exploited the inequality

−∫_{t−τ(t)}^{t} ε̇^T(s)(I_N ⊗ R)ε̇(s) ds ≤ −h^{-1} η^T(t)(I_N ⊗ R)η(t),

thanks to Lemma 2. Under Assumption 1(b), we can assume without loss of generality that the Markov process starts from the invariant distribution π. Noting that V(t) = Σ_{i=1}^{s} V_i(t), Σ_{i=1}^{s} γ_ji = 0, and K = αB^T P with α ≥ (λ_2(L̂_S) · min_{i∈S} π_i)^{-1} ≥ (λ_2(L̂_S) π_i)^{-1} for every i ∈ S, by (15) we have
V̇(t) ≤ E[y^T(t) Φ(t) y(t)],   (16)

where

Φ(t) = [ Φ11(t)   −I_N ⊗ PA + (λ_2^{-1}(L̂_S)L_S + αΔL(t)) ⊗ PBB^T P + (1 − κ) I_N ⊗ Q
         ∗        (κ − 1) I_N ⊗ Q − h^{-1} I_N ⊗ R ] + h [Φ0(t), −Φ0(t)]^T (I_N ⊗ R)[Φ0(t), −Φ0(t)],

Φ11(t) = κ(I_N ⊗ Q) + I_N ⊗ (PA + A^T P) − (λ_2^{-1}(L̂_S)L_S + αΔL(t)) ⊗ PBB^T P − (λ_2^{-1}(L̂_S)L_S^T + αΔL^T(t)) ⊗ PBB^T P, and Φ0(t) = I_N ⊗ A − (λ_2^{-1}(L̂_S)L_S + αΔL(t)) ⊗ BB^T P.

Let {u_i}_{i=2}^{N} be a group of orthonormal eigenvectors of L̂_S satisfying u_i^T L̂_S = λ_i(L̂_S) u_i^T for i = 2, …, N. Then the matrix U = [1/√N · 1_N, u_2, …, u_N] ∈ R^{N×N} is a unitary matrix. Define ε̄(t) = (U^T ⊗ I_n)ε(t) and partition ε̄(t) in conformity with ε(t). We have ε̄_1(t) = 0 for all t ≥ 0, and
ε^T(t)(I_N ⊗ (PA + A^T P) − λ_2^{-1}(L̂_S)(L_S + L_S^T) ⊗ PBB^T P)ε(t)
  = Σ_{j=2}^{N} ε̄_j^T(t)(A^T P + PA − 2λ_j(L̂_S) λ_2^{-1}(L̂_S) PBB^T P)ε̄_j(t)
  ≤ −β Σ_{j=2}^{N} ε̄_j^T(t) P ε̄_j(t) = −β ε^T(t)(I_N ⊗ P)ε(t),   (17)

where the inequality is derived by using (6) and the fact that λ_j(L̂_S) ≥ λ_2(L̂_S) for j ≥ 2. Since Σ_{i=1}^{N} ε_i(t) = 0, we set ε(t) = Cε̃(t) with the cut-off version ε̃(t) = (ε_1^T(t), …, ε_{N−1}^T(t))^T. Inserting (17) into (16), we obtain
V̇(t) ≤ E[ỹ^T(t) D^T Φ(t) D ỹ(t)],   (18)

where ỹ(t) = (ε̃^T(t), η̃^T(t))^T, η̃(t) = ε̃(t) − ε̃(t − τ(t)), D = diag(C, C) ∈ R^{2nN×2n(N−1)}, and Φ(t) is as in (16) except that Φ11(t) is replaced by

Φ̄11(t) = κ(I_N ⊗ Q) − β I_N ⊗ P − α(ΔL(t) + ΔL^T(t)) ⊗ PBB^T P.

Since

D^T Φ(t) D = [ Ξ11 − αC^T((ΔL(t) + ΔL^T(t)) ⊗ PBB^T P)C   Ξ12 + αC^T(ΔL(t) ⊗ PBB^T P)C
               ∗                                           Ξ22 ]
  + [ Ξ13 + αC^T(ΔL^T(t) ⊗ PBB^T)(I_N ⊗ R)
      Ξ23 − αC^T(ΔL^T(t) ⊗ PBB^T)(I_N ⊗ R) ] · h(I_N ⊗ R)^{-1} · [ Ξ13 + αC^T(ΔL^T(t) ⊗ PBB^T)(I_N ⊗ R)
                                                                    Ξ23 − αC^T(ΔL^T(t) ⊗ PBB^T)(I_N ⊗ R) ]^T,

by Lemma 1, we have D^T Φ(t) D < 0 if and only if

Ξ̃ + δ [ −C^T
         C^T
         0 ] J(t) [ C   0   I_N ⊗ R ] + δ [ C^T
                                            0
                                            I_N ⊗ R ] J^T(t) [ −C   C   0 ] < 0,   (19)
where J(t) = αδ^{-1} ΔL(t) ⊗ PBB^T P satisfies J^T(t)J(t) ≤ I_nN for t ≥ 0 by (7), and

Ξ̃ = [ Ξ11   Ξ12   Ξ13
       ∗     Ξ22   Ξ23
       ∗     ∗     −h^{-1} I_N ⊗ R ].

It follows from Lemma 3 that the sum of the last two terms on the left-hand side of (19) is upper bounded by

δ²μ [ −C^T
       C^T
       0 ] [ −C   C   0 ] + μ^{-1} [ C^T
                                     0
                                     I_N ⊗ R ] [ C   0   I_N ⊗ R ].
Hence, using Lemma 1 we see that condition (8) implies (19). Thus, V̇(t) < 0 for t ≥ 0. Moreover, there exists a constant ρ > 0 such that V̇(t) ≤ −ρ‖ε̃(t)‖².

Iterating the basic inequality 2(‖a_1‖² + ‖a_2‖²) ≥ ‖a_1 + a_2‖² yields 2^{N−1}(‖a_1‖² + ⋯ + ‖a_N‖²) ≥ ‖a_1 + ⋯ + a_N‖², where a_i ∈ R^n, i = 1, …, N. Therefore, we have

‖ε̃(t)‖² = ‖ε_1(t)‖² + ⋯ + ‖ε_{N−1}(t)‖² ≥ (2^{N−2} + 1)^{-1}(‖ε_1(t)‖² + ⋯ + ‖ε_{N−1}(t)‖² + ‖ε_1(t) + ⋯ + ε_{N−1}(t)‖²) = (2^{N−2} + 1)^{-1}‖ε(t)‖².

Thus, V̇(t) ≤ −ρ(2^{N−2} + 1)^{-1}‖ε(t)‖². This means the system (5) is asymptotically stable [36], which concludes our proof.

Before proceeding, we provide some remarks.

Remark 1. (a) The design of the consensus gain matrix K in Theorem 1 favorably decouples the effects of the agent dynamics and the network topologies (see also [19,20]). Specifically, each agent builds a feedback gain B^T P by using (6) and a multiplicative factor α taking the network topologies into account. (b) If (A, B) is controllable, then for any β > 0 there always exists a P > 0 satisfying (6) [19]. (c) When the Markov process θ(t) is not ergodic, the state space S = {1, 2, …, s} can be decomposed uniquely into the form S = J ∪ S_1 ∪ ⋯ ∪ S_r, where each S_j (j = 1, …, r) is a closed set of positive recurrent states and J is a set of transient states [33]. The topology condition in Theorem 1 then needs to be modified so that ∪_{i∈S_j} G_i contains a spanning tree for all j = 1, …, r. See [12,20,22] for similar treatments. (d) When there is no time delay or topology uncertainty, i.e., τ(t) ≡ 0 and Δa_ij(t) ≡ 0 for all i and j, the system (1) reduces to the one studied in [20], and further to that of [11] if we choose A = 0 and B = I. In this case, the sufficient conditions that G_S contains a spanning tree and that (A, B) is stabilizable are also necessary.

Theorem 1 guarantees consensus of the multi-agent system (1) under appropriate time-varying delays and uncertainties. The speed of convergence to consensus can also be estimated as follows. Assume that there exist matrices Q ≥ 0, R > 0 and constants μ > 0, δ > 0, h > 0 such that (8) holds for a given 0 ≤ κ < 1. Consider the new Lyapunov functions
V(t) = E[ ε^T(t)(I_N ⊗ P)ε(t) + ∫_{t−τ(t)}^{t} e^{ν(s−t)} ε^T(s)(I_N ⊗ Q)ε(s) ds + ∫_{t−h}^{t} e^{ν(s−t)} (s − t + h) ε̇^T(s)(I_N ⊗ R)ε̇(s) ds ],   (20)

and analogously defined V_i(t), i ∈ S, where ν > 0 is a constant to be determined. Using the same argument as in the proof of Theorem 1 and the inequality e^{−ντ(t)} ≥ 1 − νh for t ≥ 0, we derive that

V̇(t) + νV(t) ≤ y^T(t)(Φ(t) + Λ)y(t),

where Φ(t) and y(t) are defined as above, and

Λ = (1 − κ)νh [ I_N ⊗ Q    −I_N ⊗ Q
                −I_N ⊗ Q    I_N ⊗ Q ] + ν [ 0   0
                                            0   I_N ⊗ R ].
It follows from the strict inequality (8) that there exists a sufficiently small constant ν > 0 such that

Φ(t) + Λ < 0,   t ≥ 0.

Following the same reasoning as in the proof of Theorem 1 yields V̇(t) + νV(t) ≤ −c‖ε(t)‖², t ≥ 0, for some appropriate constant c > 0. Hence, d(e^{νt}V(t)) ≤ −c‖ε(t)‖² e^{νt} dt, and by the definition of V(t) we obtain

e^{νt/2} ‖ε(t)‖ ≤ e^{νt/2} √V(t) = √(e^{νt} V(t)) → 0

as t → ∞. Thus, consensus of the multi-agent system (1) can be achieved under protocol (2) with speed ν/2 [36, p. 66].

Theorem 1 requires the time-delay function τ(t) to be differentiable. The following corollary deals with the case where τ(t) may not be differentiable or its derivative is unknown.
Fig. 1. Communication networks G1 , G2 , and their union GS = G1 ∪ G2 .
Corollary 1. Suppose that Assumption 1(a) and (b) hold, and 0 ≤ τ(t) ≤ h. Assume that G_S contains a spanning tree and that (A, B) is stabilizable. Choose the consensus gain K = α B^T P with P given by (6) and α ≥ (λ_2(L̂_S) · min_{i∈S} π_i)^{-1}. If

(ΔL^T(t) ΔL(t)) ⊗ (PBB^T P PBB^T P) ≤ (δ/α)² I_nN,   t ≥ 0,

then, for appropriate δ > 0, there exists h > 0 such that the system (1) with protocol (2) achieves consensus, where h can be obtained from the following feasible LMI:

Ξ = [ Ξ11 + Θ   Ξ12 − Θ   Ξ13              C^T
      ∗         Ξ22 + Θ   Ξ23              0
      ∗         ∗         −h^{-1} I_N ⊗ R   I_N ⊗ R
      ∗         ∗         ∗                −μ I_nN ] < 0,

where Ξ11 = C^T(−β I_N ⊗ P)C, Ξ12 = C^T(−I_N ⊗ PA + λ_2^{-1}(L̂_S) L_S ⊗ PBB^T P)C, Ξ13 = −C^T(I_N ⊗ A^T − λ_2^{-1}(L̂_S) L_S^T ⊗ PBB^T)(I_N ⊗ R), Ξ22 = C^T(−h^{-1} I_N ⊗ R)C, Ξ23 = −Ξ13, Θ = δ²μ C^T C, C = [I_{N−1}  −1_{N−1}]^T, β is given by (6), R > 0 is an n × n matrix to be determined, and μ > 0 is a constant. This result can be proved in the same way as Theorem 1 by using the following Lyapunov functions:
V(t) = E[ ε^T(t)(I_N ⊗ P)ε(t) + ∫_{t−h}^{t} (s − t + h) ε̇^T(s)(I_N ⊗ R)ε̇(s) ds ]

and

V_i(t) = E[ ε^T(t)(I_N ⊗ P)ε(t) 1_{θ(t)=i} + ∫_{t−h}^{t} (s − t + h) ε̇^T(s)(I_N ⊗ R)ε̇(s) ds 1_{θ(t)=i} ]
for i ∈ S. We omit the details of the proof.

4. Simulations

In this section, we present a couple of examples to illustrate the feasibility of the obtained results. The adjacency matrices of the interaction topologies {G_i}_{i=1}^{s} are taken as binary 0–1 matrices in the following.

Example 1. Consider the multi-agent system (1) with N = 6 agents. The communication topologies randomly switch between G_1 and G_2 (see Fig. 1) following a time-homogeneous Markov process θ(t) with generator

Γ = [ −2   2
       1  −1 ]

and state space S = {1, 2}. The initial distribution of θ(t) is given by its invariant distribution π = (1/3, 2/3)^T. Note that neither G_1 nor G_2 contains a spanning tree while G_S does. Take n = 3, m = 1, and let the agent dynamics be specified as

A = [ −2  0  0
       0  1  1
       0  0  0 ]   and   B = [ 0
                               1
                               0 ].

It is direct to check that (A, B) is stabilizable and that (6) is satisfied by choosing β = 0.2 and

P = [ 1  0  0
      0  2  1
      0  1  4 ].

Since λ_2(L̂_S) = 0.793, we take α = 3.8 and the consensus gain K = αB^T P = (0, 7.6, 3.8) in view of Theorem 1. We choose δ = 0.1 and ΔL(t) = 0.05δα^{-1}
1242
Y. Shang / Applied Mathematics and Computation 273 (2016) 1234–1245 Table 1 The allowable values of h for different κ .
h
κ =0
κ = 0.2
κ = 0.5
κ = 0.8
κ unknown
0.237
0.220
0.186
0.162
0.158
Fig. 2. Consensus errors with time-delays on Markovian switching topologies given in Fig. 1: (a)–(c) the first, second, and third components of ε_i(t) (i = 1, …, 6) for τ(t) = 0.237; (d) the first component of ε_i(t) for τ(t) = 0.35.
sin(t) · L(t) such that (7) holds. For different values of κ, we solve (8) by using Matlab's LMI Toolbox to derive the allowable values of h in Table 1. Let the initial state of each agent be taken randomly from [−1, 1]³, and let τ(t) be a constant delay (i.e., κ = 0). Fig. 2 shows that the system achieves consensus for τ(t) = 0.237 (the three components of the vector ε_i(t) are displayed), while it diverges for τ(t) = 0.35 (only the first component of ε_i(t) is displayed). The results are consistent with Theorem 1.

Example 2. Next, consider the multi-agent system (1) with N = 4 agents. The communication topologies randomly switch among a set of s = 5 graphs {G_i}_{i=1}^{5} (see Fig. 3) following a time-homogeneous Markov process θ(t) with generator
⎡
=
⎤
−1 1 0 0 0 0 −2 1 0 1 ⎣ 0 0 −1 1 0 ⎦ 0 0 1 −2 1 2 0 0 0 −2
and state space S = {1, . . . , 5}. The initial distribution of θ (t) is taken as the invariant distribution
π = (2/7, 1/7, 2/7, 1/7, 1/7)^T. Clearly, none of the graphs G_1, . . . , G_5 contains a spanning tree, but G_S does. Set n = 2, m = 1, and let the agent dynamics be specified as

    A = ⎡ −1  0.5 ⎤   and   B = ⎡ 0 ⎤
        ⎣  1   0  ⎦             ⎣ 1 ⎦ .
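A brief numerical check (a sketch added here for illustration, not from the paper) confirms that the matrix above is a valid Markov generator with the stated invariant distribution, and that (A, B) is stabilizable; the stronger controllability test rank [B, AB] = n is used, which is sufficient for stabilizability.

```python
import numpy as np

# Generator of the Markov process θ(t) in Example 2.
G = np.array([[-1.,  1.,  0.,  0.,  0.],
              [ 0., -2.,  1.,  0.,  1.],
              [ 0.,  0., -1.,  1.,  0.],
              [ 0.,  0.,  1., -2.,  1.],
              [ 2.,  0.,  0.,  0., -2.]])

# Every row of a generator sums to zero.
assert np.allclose(G.sum(axis=1), 0)

# The invariant distribution pi satisfies pi^T G = 0.
pi = np.array([2, 1, 2, 1, 1]) / 7
assert np.allclose(pi @ G, 0)

# Agent dynamics of Example 2.
A = np.array([[-1., 0.5],
              [ 1., 0. ]])
B = np.array([[0.],
              [1.]])

# Controllability matrix [B, AB]; full rank (= 2 = n) implies
# (A, B) is controllable, hence stabilizable.
C = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(C))  # 2
```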
Fig. 3. Communication networks G1 , G2 , . . . , G5 , and their union GS = G1 ∪ G2 ∪ · · · ∪ G5 .
Fig. 4. Consensus errors with time-delays on Markovian switching topologies given in Fig. 3.
Notice that (A, B) is stabilizable and that (6) is satisfied by choosing β = 0.5 and

    P = ⎡ 2  1 ⎤
        ⎣ 1  3 ⎦ .

Since λ2(L̂S) = 2.614, we take α = 2.7 and the consensus gain K = (2.7, 8.1) in view of Theorem 1. Similarly, we choose δ = 0.1 and ΔL(t) = 0.01δα⁻¹ cos(t) · L(t) such that (7) holds. For κ = 0.5, we solve (8) using Matlab's LMI Toolbox to derive the allowable upper bound h = 0.214. Let the initial state of each agent be taken randomly from [−1, 1]², and let τ(t) = 0.1(1 + sin(t)). As expected, Fig. 4(a) and (b) show the convergence of the system, in line with the prediction of Theorem 1.
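The maximal allowable bound h is found by testing the feasibility of the LMI (8) over candidate values of h; the search itself is a simple bisection. The sketch below assumes a hypothetical feasibility oracle `lmi_feasible(h)` (in the paper this role is played by a call to Matlab's LMI Toolbox); the stand-in oracle used here is purely illustrative.

```python
def max_allowable_h(lmi_feasible, h_max=1.0, tol=1e-3):
    """Bisection for the largest h with lmi_feasible(h) True,
    assuming feasibility is monotone in h (feasible for small h,
    infeasible beyond some threshold)."""
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid):
            lo = mid   # feasible: the threshold lies above mid
        else:
            hi = mid   # infeasible: the threshold lies below mid
    return lo

# Illustrative stand-in oracle: pretend the LMI is feasible exactly
# for h <= 0.214, the bound reported above for kappa = 0.5.
h = max_allowable_h(lambda h: h <= 0.214)
print(round(h, 2))  # 0.21
```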
Next, we choose the time delay τ(t) = 0.1(1 + sin(7t)) and keep the other parameters unchanged. In this case we still have 0 ≤ τ(t) ≤ h, but τ̇(t) ≤ κ does not hold for all t ≥ 0; that is, Assumption 1(c) fails. The error vector is then divergent, as shown in Fig. 4(c) and (d), which highlights the tightness of our sufficient conditions.

5. Conclusion

This paper has studied stochastic consensus problems in directed networks of dynamic agents under Markovian switching topologies, factoring in realistic aspects such as time-varying delays and uncertain topologies. Based on the design of mode-independent Lyapunov functions and the LMI approach, we proved that all the agents reach mean square and almost sure consensus for appropriate topology uncertainties and an upper bound on the communication delays, provided the union network topology admits a spanning tree and the agent dynamics is stabilizable. Feasible LMIs have also been established to derive the maximal allowable upper bound of the time-varying delays. Two simulation examples illustrate the effectiveness and applicability of the theoretical results. It is noteworthy that the transition rates of the Markov chain are assumed to be known and time-homogeneous, whereas in practice they are often time-dependent and difficult to obtain. Extending the current results to imperfect transition rates and inhomogeneous processes is an interesting open problem. Furthermore, it would be desirable to consider heterogeneous time delays τ_ij(t) between agents i and j in future work.

Acknowledgments

The author is supported by the Program for Young Excellent Talents in Tongji University (2014KJ036), the Shanghai Pujiang Program (15PJ1408300), and the National Natural Science Foundation of China (11505127). The author acknowledges the valuable comments of the three anonymous referees and the editor, which greatly improved the presentation of the manuscript.