Consensus stabilization in stochastic multi-agent systems with Markovian switching topology, noises and delay


PII: S0925-2312(16)00327-1
DOI: http://dx.doi.org/10.1016/j.neucom.2015.10.128
Reference: NEUCOM16831
To appear in: Neurocomputing
Received date: 27 May 2015; Revised date: 23 September 2015; Accepted date: 4 October 2015

Cite this article as: Pingsong Ming, Jianchang Liu, Shubin Tan, Gang Wang, Liangliang Shang and Chunying Jia, Consensus Stabilization in Stochastic Multi-agent Systems with Markovian Switching Topology, Noises and Delay, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2015.10.128

Neurocomputing 00 (2016) 1–16

Consensus Stabilization in Stochastic Multi-agent Systems with Markovian Switching Topology, Noises and Delay

Pingsong Ming^{a,1,*}, Jianchang Liu^a, Shubin Tan^a, Gang Wang^b, Liangliang Shang^a, Chunying Jia^a

^a Information Science and Engineering, and the State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, Liaoning, 110819, PR China.
^b Shenyang Institute of Engineering, No. 18, Puchang Road, Shenbei New District, Shenyang, Liaoning, 110136, PR China.

Abstract

This paper investigates the consensus stabilization problem of stochastic multi-agent systems with noises, Markovian switching topology, and communication delays. To solve the consensus problem for the group of interacting agents, a stochastic approximation type algorithm with step sizes decreasing to zero is adopted. For stochastic approximation based consensus algorithms with switching topologies, the existing convergence analysis techniques rely heavily on quadratic Lyapunov functions, whose existence may be difficult to guarantee for switching digraphs. To overcome this inherent limitation, we develop a new ergodicity approach for backward products of degenerating stochastic matrices, based on a discrete-time dynamical system viewpoint, to analyze the consensus stabilization problem of stochastic multi-agent systems with noises, Markovian switching topology, and communication delays; and we obtain a necessary and sufficient condition for consensus stabilization. Our approach does not require the double stochasticity condition typically assumed for the existence of a quadratic Lyapunov function, and provides an effective tool for analyzing consensus problems via the existing theory of stochastic matrices and ergodicity of backward products. Finally, a numerical simulation is presented to demonstrate the theoretical results.

Keywords: consensus, stabilization, stochastic multi-agent systems, Markovian topologies, delay, stochastic communication noises.

1. Introduction

Recently, an enormous amount of research effort has been devoted to the distributed consensus of multi-agent systems, owing to its fundamental role in coordination applications including traffic control [1], sensor networks [2], mobile robots [3], social networks [4], and so on. Consensus algorithms with imperfect information exchange have attracted considerable attention from the control community, addressing additive noises [5], [6], [7], switching topology [8], [9], [10], and time delay [11], [12], [13].

* Corresponding author.
Email addresses: [email protected] (Pingsong Ming), [email protected] (Jianchang Liu), [email protected] (Shubin Tan), [email protected] (Gang Wang), [email protected] (Liangliang Shang), [email protected] (Chunying Jia)
1 This work was supported by the National Natural Science Foundation of China (50974145).


Most studies have assumed an ideal communication environment between agents, i.e., that every agent measures the states of its neighbors accurately. In real conditions, however, measurements and information exchange are often subject to noises and/or perturbations, such as source noise, channel noise, and quantization noise [14]. The consensus problem for multi-agent systems with stochastic communication noises is a common problem in distributed systems [15] and has attracted the interest of several researchers ([16], [17], and [18]). [16] and [17] studied stochastic consensus of first-order stochastic multi-agent systems, and both necessary and sufficient conditions were obtained for systems whose digraphs are balanced and contain a spanning tree. [19] then investigated double-integrator stochastic multi-agent systems, where the necessary and sufficient conditions were a strongly connected balanced graph and a stochastic-approximation type consensus gain. Moreover, [18] dealt with the stochastic consensus problem for high-order stochastic multi-agent systems, where the stochastic consensus condition was that the leader is a globally reachable node. The stochastic consensus problem for nonlinear stochastic multi-agent systems was investigated in [20], where the consensus condition was that the coupling strength is larger than a threshold value. In this paper we handle the consensus problem for more general high-order stochastic multi-agent systems. Generally speaking, the distributed consensus control of multi-agent systems with switching topologies can be divided into the following cases: arbitrary switching topologies [21], random switching topologies [22], and Markovian switching topologies [23]. In the case of independent and identically distributed random switching, the result of Tahbaz-Salehi and Jadbabaie in [24] contains the results of [25] and [26] as special cases.
The necessary and sufficient condition was then extended to strictly stationary ergodic random graphs generated by an ergodic and stationary random process in [27]. Matei and Baras in [8] investigated the consensus problem for first-order multi-agent systems with Markovian switching topologies. A similar conclusion is available for discrete-time multi-agent systems with double-integrator dynamics in [18]. Next, You, Li and Xie in [9] considered general high-order multi-agent systems with Markovian switching topologies whose open-loop matrix contains strictly unstable eigenvalues; however, they assumed that the multi-agent system works in an ideal communication environment. Cheng, Hou and Tan [10] studied the mean square consensus of a linear multi-agent system with communication noise, but they did not take into account switching topology or open-loop unstable agents. To the best of our knowledge, no result has been reported yet on the consensus stabilization of a general linear multi-agent system with Markovian topologies, communication noises and delays; it is therefore of great importance to investigate this problem. In addition, it is very important to study the effect of delays on the convergence of consensus protocols. Generally speaking, there are three kinds of time delays: discrete delay, neutral delay and distributed delay. For the analysis of time-delay systems, LMI-based conditions are widely appreciated, since such conditions are more convenient for the synthesis of controllers and filters. By combining Lyapunov theory and the synchronization manifold method, sufficient conditions for nonlinear multi-agent consensus under discrete and distributed delays are provided in [28].
On the other hand, from the viewpoint of the actual physical environment, there are two kinds of delays in multi-agent systems. One is related to communication from one agent to another, which we call communication delay in this paper. The other is related to the processing and connecting time for the packets arriving at each agent, which we call input delay. Input delays also occur when actuators and controllers are connected by networks. Conditions for event-based leader-following consensus of multi-agent systems subject to input time delay between controller and actuator are presented, together with an event-based control strategy, in [29]. By the Lyapunov first method and Hopf bifurcation theory, [30] studied the group consensus problem of second-order multi-agent systems (MASs) with time coupling delays. A necessary and sufficient condition was derived for the average consensus of single-integrator multi-agent systems with asymmetric, non-uniform, and heterogeneous time delays, including self-delay and communication delay (input delay), in [31]. Using the truncated predictor feedback method, algebraic graph theory and a Riccati equation, [32] investigated cooperative output feedback tracking control for multi-agent consensus with time-varying input delay, communication delay and arbitrary switching topology. The exponential leader-following consensus problem was investigated for nonlinear stochastic networked multi-agent systems with partially mixed impulses and unknown time-varying but bounded internal delays in [33]. Based on Lyapunov stability theory and frequency-domain input-output analysis, [34] studied the consensus of nonlinear multi-agent systems with self and communication time delays in a unified framework on directed communication topologies.
However, the existing convergence analysis with dynamic topologies relies heavily on quadratic Lyapunov functions, whose existence may be difficult to ensure for switching directed graphs. Based on the stability theory of stochastic differential delay equations and algebraic graph theory, Shang [45] presented sufficient conditions for achieving group consensus in the presence of random noises and communication delays: when the time delay is sufficiently small and the system matrix is stable, group consensus can be achieved for a suitable network partition. Based on graph theory and the linear matrix inequality approach, Yu and Wang [46] extended these results to switching topologies and treated communication delays by invoking double-tree-form transformations. In fact, searching for Lyapunov functions may be a challenging or even impossible task, especially given the nonexistence of quadratic Lyapunov functions with general weight matrices shown in [35]. To overcome the inherent limitations of the existing methods, we develop a new approach to analyze stochastic approximation for consensus. In fact, there are few papers on the consensus problem of multi-agent systems using the stochastic approximation approach. [36] proposed a stochastic approximation algorithm to achieve asymptotic consensus for first-order multi-agent systems with time delay. [37] extended this to stochastic approximation algorithms for first-order multi-agent systems with time delays and randomly switching dynamics. [38] analyzed distributed stochastic approximation type consensus protocols in mean square for uncertain first-order integrator multi-agent systems with stochastic measurement noises and symmetric or asymmetric time-varying delays. [7] considered synchronous and asynchronous stochastic approximation for consensus seeking with delayed measurements for first-order integrator multi-agent systems in noisy environments.
However, the above works consider only first-order integrator multi-agent systems and do not consider high-order stochastic multi-agent systems whose uncertainties include measurement and communication noises, Markovian switching topology, and communication delays. Actually, uncertainty exists in all kinds of networks: real-world networks operate in uncertain environments, and various types of uncertainty are frequently encountered in practice. We therefore deal with the consensus problem of more general stochastic multi-agent systems with Markovian switching topology, noises and communication delays using a stochastic approximation ergodicity approach. Our approach does not require the double stochasticity condition typically assumed for the existence of quadratic Lyapunov functions. To the best of our knowledge, this paper is the first to systematically establish the consensus problem of such general stochastic multi-agent systems using the stochastic approximation ergodicity approach. The main contributions of this paper are: (i) We allow the open-loop matrix of the high-order agent dynamics to contain strictly unstable eigenvalues and stabilize the dynamics. (ii) This paper is the first to investigate the convergence analysis of the consensus stabilization problem for general stochastic multi-agent systems over unreliable communication networks, where the uncertainties include measurement and communication noises, Markovian switching topology, and communication delays. (iii) We develop a new ergodicity approach for backward products of degenerating stochastic matrices via a discrete-time dynamical system approach to analyze the consensus stabilization problem, and we obtain a necessary and sufficient condition for consensus stabilization.
The stochastic approximation ergodicity approach avoids the limitation that the existing convergence analysis relies heavily on quadratic Lyapunov functions, whose existence may be difficult to guarantee for switching digraphs. Our approach does not require the double stochasticity condition typically assumed for the existence of a quadratic Lyapunov function, and provides an effective tool for analyzing consensus problems via the existing ergodic theory of stochastic matrices and ergodicity of backward products. The rest of the paper is organized as follows. Preliminaries are delineated in Section 2. In Section 3, the condition for achieving consensus is derived. Simulation results are provided to demonstrate the theoretical results in Section 4, and Section 5 concludes the paper. The following notation will be used throughout this paper. All random elements are defined on a generic probability space (Ω, F, P), where Ω is the space of elementary events, F is the underlying σ-field on Ω, and P is a probability measure on F. We denote by E(X) the mathematical expectation of a given random variable X. For a given collection of sets C, σ(C) denotes the σ-algebra generated by C. For a family of R^n-valued random variables {η_λ, λ ∈ Λ}, σ(η_λ, λ ∈ Λ) denotes the σ-algebra σ({η_λ ∈ B}, B ∈ B^n, λ ∈ Λ), where B^n is the n-dimensional Borel σ-algebra. For a vector or matrix A, |A| = [Tr(A^T A)]^{1/2} denotes the Frobenius norm. ⊗ denotes the Kronecker product. I_N denotes the N-dimensional identity matrix. 1_N is the N-dimensional vector whose elements are all ones.
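The norm and product conventions above can be checked numerically. The following NumPy sketch (the 2 × 2 matrix is an arbitrary example, not the system matrix of Section 2) verifies the Frobenius norm identity |A| = [Tr(A^T A)]^{1/2} and the block action of the Kronecker product:

```python
import numpy as np

# Frobenius norm |A| = [Tr(A^T A)]^(1/2), as defined above.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
frob = np.sqrt(np.trace(A.T @ A))
assert np.isclose(frob, np.linalg.norm(A, "fro"))

# Kronecker product: (I_N ⊗ A) applies A to each of N stacked agent states.
N = 3
X = np.random.default_rng(0).standard_normal((N, 2))  # N agents, state dim 2
stacked = np.kron(np.eye(N), A) @ X.reshape(-1)       # block-diagonal action
blockwise = (X @ A.T).reshape(-1)                     # A x_i for each agent i
assert np.allclose(stacked, blockwise)
```

This is also why Kronecker products appear below: the network-level dynamics act blockwise on the stacked agent states.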


2. Problem Description

In the literature, a multi-agent system is usually modeled by a graph. Let G = (V_G, E_G, A_G) be a weighted digraph of order N, where V_G = (v_1, v_2, ..., v_N) is the set of nodes, and node i ∈ V_G represents agent i, which might represent physical quantities including attitude, position, temperature, voltage, and so on. E_G ⊆ V_G × V_G is the set of edges, and A_G = [a_ij] ∈ R^{N×N} is the weighted adjacency matrix of G with a_ij ≥ 0, where a_ij > 0 if and only if (j, i) ∈ E_G. A directed edge (j, i) ∈ E_G exists if and only if the j-th agent can send information directly to the i-th agent. The set of neighbors of the i-th agent is denoted by N_i = {j ∈ V_G | (j, i) ∈ E_G}. The i-th node is called a source if it has no neighbors but is a neighbor of another node in V_G. A node is called an isolated node if it has no neighbor and is not a neighbor of any other node. The Laplacian matrix of G is defined as L_G = D_G − A_G, where D_G = diag(deg_in(1), ..., deg_in(N)), deg_in(i) = Σ_{j=1}^N a_ij is the in-degree of the i-th node, and deg_out(i) = Σ_{j=1}^N a_ji is its out-degree. If deg_in(i) = deg_out(i) for i = 1, ..., N, G is called a balanced digraph. The digraph G is strongly connected if there exists a directed path from every node to any other node. A directed tree is a digraph in which each node, except the root node, has exactly one parent node. A balanced digraph is strongly connected if and only if it has a spanning tree ([39]).

Let {r(k), k ∈ N} be a Markov chain taking values in a given finite set S = {1, 2, ..., s}, with transition probability from mode i at time k to mode j at time k + 1,

p_ij = Pr{r(k + 1) = j | r(k) = i},   (1)

with p_ij ≥ 0 for i, j ∈ S and Σ_{j=1}^s p_ij = 1. Consider the general stochastic high-order LTI discrete-time systems described as follows:

x_i(k + 1) = A x_i(k) + B u_i(k) + D_{v_i} v_i(k),   (2)

where i = 1, 2, ..., N, and x_i(k) ∈ R^n and u_i(k) ∈ R^m represent the state and control input of agent i, respectively. v_i(k) ∈ R^{m_2} is an external disturbance. A is Schur if all the eigenvalues of A lie in the open unit disk. (A, B) is stabilizable if there exists a K ∈ R^{m×n} such that A + BK is Schur. Obviously, if A is Schur, consensus can be achieved under a zero consensus gain [9]. In this paper, we assume that A is not Schur and (A, B) is stabilizable, and we prove that system (2) can reach consensus, which is called consensus stabilization [40].

Each pair of adjacent agents exchanges information in the following way: at the sender side of the communication channel (j, i) ∈ E_G, agent j communicates x_j(k) to agent i. Due to the uncertainties of communication channels, at the receiver side of the channel (j, i), agent i receives the information of x_j(k), denoted by

y_ij(k) = { x_j(k − d_ij(k)) + ω_ij(k),  if (j, i) ∈ E_G(k),
          { 0,                           if (j, i) ∈ E_G \ E_G(k),   (3)

where ω_ij(k) is the stochastic additive communication noise, which can be used to model thermal noise and channel fading. {ω_ij(k), 1 ≤ i, j ≤ N} are independent standard Gaussian white noises, defined on the generic probability space (Ω, F, P). The Markovian link failure model is reflected by the random edge set sequence E_G(k). d_ij(k) is an integer-valued random delay satisfying 0 ≤ d_ij(k) ≤ d^* with probability 1 for a fixed integer d^*. Since the system starts at k = k_0 ≠ 0, the implicit requirement for the neighbor set is that j ∈ N_i(k) implies

k − d_ij(k) ≥ 0,   (4)

where d_ij(k) has the fixed upper bound d^* and is defined for all (i, j) ∈ E_max, where E_max := {(i, j) | Pr((i, j) ∈ E_G(k)) > 0 for some k ≥ k_0} is the maximal set of communication links. Each node uses its own state and its noisy measurements to form a weighted average. For agent i (i = 1, ..., N), we propose the consensus protocol:

u_i(k) = K Σ_{j∈N_i(k)} a_ij(k) (y_ij(k) − x_i(k)),   (5)


where N_i(k) denotes the neighborhood of agent i at time k; y_ij(k) is the information of x_j(k) received by agent i, defined by (3); and a_ij(k) is determined by the weighted adjacency matrix A_G(k) of the communication topology graph. When N_i(k) is empty, Σ_{j∈N_i(k)}(·) is defined as 0. {a_ij(k)} represents the weight sequence of the unreliable link (j, i) driven by a Markov chain, i.e., a_ij(k) = a_ij^{(r(k))}. By some decompositions, the consensusability of the interconnected identical systems under a common control protocol is shown to be equivalent to a discrete-time simultaneous stabilization problem. First, we give the definitions of consensus and consensus stabilization as follows.

Definition 1. [41] The general stochastic high-order discrete-time multi-agent system (2) is said to reach consensus under the protocol (5) if there exists a fixed consensus gain K such that for any finite x_i(0),

lim_{k→∞} E[|x_i(k) − x_j(k)|] = 0,  ∀ i, j ∈ V_G,   (6)

where the expectation E[·] is taken under the measure P.

Definition 2. [41] The general stochastic high-order discrete-time multi-agent system (2) is said to reach mean square consensus stabilization under the protocol (5) if there exists a fixed consensus gain K such that system (2) reaches mean square consensus for any finite x_i(0), i.e., lim_{k→∞} E|x_i(k) − x_j(k)| = 0. In this case we say that K stabilizes (A, B).

Now, inserting the control protocol (5) into (2), we obtain

x_i(k + 1) = A x_i(k) + BK Σ_{j∈N_i(k)} a_ij(k) (y_ij(k) − x_i(k)) + D_{v_i} v_i(k)
           = A x_i(k) + BK Σ_{j∈N_i(k)} a_ij(k) (x_j(k − d_ij(k)) + ω_ij(k) − x_i(k)) + D_{v_i} v_i(k)
           = A x_i(k) − BK Σ_{j∈N_i(k)} a_ij(k) x_i(k) + BK Σ_{j∈N_i(k)} a_ij(k) x_j(k − d_ij(k))
             + BK Σ_{j∈N_i(k)} a_ij(k) ω_ij(k) + D_{v_i} v_i(k),  k ∈ N,  i = 1, 2, ..., N.   (7)
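To make the closed-loop update (7) concrete, the following sketch simulates it for three scalar agents (n = m = 1) on a fixed ring digraph with unit communication delay. All numerical values, the decreasing gain sequence, and the fixed (non-switching) topology are illustrative assumptions; the paper's setting is Markovian switching with general high-order dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d_max, T = 3, 1, 400
a_sys, b_sys = 1.0, 1.0         # scalar agent dynamics (an integrator; illustrative)
A_G = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [1., 0., 0.]])  # strongly connected ring digraph

# Delay buffer: row 0 holds x(k), row d holds x(k - d).
x = np.tile(rng.standard_normal(N), (d_max + 1, 1))
for k in range(T):
    gain = 0.5 / (k + 1) ** 0.75                 # decreasing step size
    noise = 0.05 * rng.standard_normal((N, N))   # communication noises omega_ij
    x_new = np.empty(N)
    for i in range(N):
        # Update (7): x_i <- A x_i + BK * sum_j a_ij (x_j(k - d) + omega_ij - x_i)
        coupling = sum(A_G[i, j] * (x[d_max, j] + noise[i, j] - x[0, i])
                       for j in range(N))
        x_new[i] = a_sys * x[0, i] + b_sys * gain * coupling
    x = np.vstack([x_new, x[:-1]])               # shift the delay buffer

spread = x[0].max() - x[0].min()                 # residual disagreement
```

With the decreasing gain playing the role of the vanishing step size, the disagreement x_max − x_min typically shrinks, while the accumulated noise contribution stays bounded because the squared gains are summable.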

To facilitate the convergence analysis of (7), we rewrite it in the following compact form:

X(k + 1) = (A ⊗ I_N − BKL_G^{(r(k))} ⊗ I_N) X(k) + (BKL_G^{(r(k))} ⊗ I_N) X(k − d(k)) + (BKL_G^{(r(k))} ⊗ I_N) ω(k) + D_v v(k),   (8)

where X(k) = [x_1^T(k), ..., x_N^T(k)]^T with the initial state satisfying E[|X(0)|^2] < ∞, k ≥ 0; d(k) ∈ {0, 1, ..., d^*}, where d^* is the maximum delay; and X(k) ≡ 0 for −d^* ≤ k < 0. L_G^{(r(k))} denotes the Laplacian matrix of the communication topology graph G(k), D_v = diag{D_{v_1}, ..., D_{v_N}}, v(k) = [v_1^T(k), ..., v_N^T(k)]^T, and ω(k) = [ω_1^T(k), ..., ω_N^T(k)]^T. Denoting X(k) = [X(k); X(k − 1); ···; X(k − d^*)], we have the state space representation

X(k + 1) = (U − B^{(r(k))}(k)) X(k) + D(k) W(k),   (9)

where the nNd_1^* × nNd_1^* matrix U is defined as

U = ⎡ A ⊗ I_N   0    0   ···   0    0 ⎤
    ⎢ I_N       0    0   ···   0    0 ⎥
    ⎢ 0        I_N   0   ···   0    0 ⎥
    ⎢ ⋮         ⋮    ⋱          ⋮    ⋮ ⎥
    ⎣ 0         0    0   ···  I_N   0 ⎦,   (10)

where d_1^* = d^* + 1, each identity matrix I_N is nN × nN, and

B^{(r(k))}(k) = ⎡ Ξ  −Ξ  ···  −Ξ ⎤
                ⎣ 0_{nNd^* × nNd_1^*} ⎦,   (11)

D(k) = [ D(k)  0_{N × N(d^* − 1)} ],   (12)

W(k) = ⎡ W(k) ⎤
       ⎣ 0_{N(d^* − 1) × N} ⎦,   (13)

(the left-hand sides of (12) and (13) denote the augmented, delay-stacked quantities appearing in (9)), where Ξ = BKL_G^{(r(k))} ⊗ I_N, D(k) := [BKL_G^{(r(k))} ⊗ I_N, D_v], and W(k) := [ω(k), v(k)]^T is a sequence of independent random vectors of zero mean, independent of {A_G^{(r(k))}, {d_ij(k) | (j, i) ∈ E_max}, k ≥ 0}, where 0 ≤ d_ij(k) ≤ d^* with probability 1 for a fixed integer d^*, and sup_{k≥0} E|W(k)|^2 < ∞.

Let {A(k) := U − B^{(r(k))}(k), k ≥ 0} be nNd_1^* × nNd_1^* deterministic nonnegative matrices, where d_1^* = d^* + 1. Each A(k) is a stochastic matrix of the form

A(k) = ⎡ A_{00}(k)   ···  A_{0d^*}(k)  ⎤     ⎡ A_{00}(k), ···, A_{0(d^*−1)}(k)   A_{0d^*}(k) ⎤
       ⎢     ⋮        ⋱        ⋮        ⎥  =  ⎣ I                                 0_{nNd^*×nN} ⎦,   (14)
       ⎣ A_{d^*0}(k) ···  A_{d^*d^*}(k) ⎦

where each block A_{ij}(k) is nN × nN and the identity matrix is nNd^* × nNd^*. Let A_{ij}(k) denote the (i, j)-th element of A(k). When k is large, each A(k) is almost a 0-1 matrix, but the asymptotic behavior of the backward products is still affected by these approximately zero elements. The matrices A(k) are associated with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and their union, the Markovian communication topology set G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s}, whose quite strong transition structure is characterized by A(k). Moreover, {W(k), k ≥ k_0} is independent of {(A(k), D(k)), k ≥ k_0}. Now we write (9) in the equivalent form

X(k + 1) = A(k) X(k) + D(k) W(k).   (15)
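The role of ergodicity of backward products can be illustrated numerically: a backward product of row-stochastic matrices with uniformly positive entries converges to a rank-one matrix, i.e., all of its rows agree, which is the mechanism by which (15) forgets its initial condition. A sketch (the randomly generated dense stochastic matrices are illustrative; they are not the degenerating δ(k)-compatible matrices of Definition 3):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_stochastic(n, eps=0.05):
    """Row-stochastic matrix with entries bounded below by roughly eps/n."""
    M = rng.random((n, n)) + eps
    return M / M.sum(axis=1, keepdims=True)

n, steps = 4, 200
Phi = np.eye(n)
for _ in range(steps):
    Phi = random_stochastic(n) @ Phi   # backward product: new factor on the left

# Rows still sum to one, and the product is nearly rank one (all rows agree),
# which is exactly the ergodicity property used in the consensus analysis.
assert np.allclose(Phi.sum(axis=1), 1.0)
row_spread = Phi.max(axis=0) - Phi.min(axis=0)
assert row_spread.max() < 1e-9
```

The contraction per factor is governed by how positive the entries are; the subtlety addressed in this paper is that the off-diagonal entries here decay like δ(k), so ergodicity must be re-established for such degenerating products.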

First, we define the δ(k)-compatibility property of {A(k), k ≥ 0} and {M(k), k ≥ 0}.

Definition 3. Let {δ(k), k ≥ k_0} be a sequence of nonnegative numbers with lim_{k→∞} δ(k) = 0, and let G^{(r(k))}, 1 ≤ r(k) ≤ s, and their union, the Markovian communication topology set G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s}, be sequences of digraphs of N nodes.

a) The sequence of nNd_1^* × nNd_1^* stochastic matrices {A(k), k ≥ 0} of the form (14) is said to be δ(k)-compatible with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union G_un, if there exist constants k_c and 0 < c ≤ c̄ such that for all k ≥ k_c,

A_{ij}(k) ≤ c̄ δ(k),  ∀ 1 ≤ i ≤ nN, 1 ≤ j ≤ nNd_1^*, i ≠ j,   (16)
max_{0≤d≤d_1^*} A_{i, j+dnN}(k) ≥ c δ(k),  ∀ (j, i) ∈ E_{G^{(r(k))}}.   (17)

b) The sequence of nN × nN stochastic matrices {M(k), k ≥ 0} is said to be δ(k)-compatible with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union G_un, if there exist k_c and 0 < c ≤ c̄ such that

M_{ij}(k) ≤ c̄ δ(k),  ∀ 1 ≤ i, j ≤ nN, i ≠ j,   (18)
M_{ij}(k) ≥ c δ(k),  ∀ (j, i) ∈ E_{G^{(r(k))}},   (19)

where M_{ij}(k) is the (j, i)-th element of M(k).

Remark 1. First, using a Lyapunov-like matrix equation depending on the network structure, [47] considered a combination of local stochastic approximation algorithms and a global consensus strategy. Then, under both fixed


and time-varying topologies, [48] recast the consensus problem of multi-agent networks with communication noises as a corresponding stochastic approximation algorithm, in which the δ(k)-compatibility properties of stochastic matrices played an important part. In [49], using stochastic approximation theory, the consensus problem for multi-agent systems with measurement noises was transformed into a root-finding problem, and the consensus convergence rate was in accordance with the exponent of the step size of the stochastic approximation algorithm. However, the above works consider only first-order integrator multi-agent systems and do not consider the uncertainty of communication delays.

Remark 2. Based on [49], the consensus problem with time delay was treated as a root-finding problem via a stochastic approximation algorithm in [36], and the convergence rate depended on the step size and was independent of the delay. [37] extended this to stochastic approximation algorithms for first-order multi-agent systems with time delay and randomly switching dynamics; using weak convergence methods, the stochastic approximation algorithm, updated with an adaptive step size, converged to the consensus solution in probability. However, the time delay was a constant. [38] then analyzed stochastic approximation consensus protocols for multi-agent systems with stochastic measurement noises and symmetric or asymmetric time-varying delays, but this method is based on a reduced-order transformation and a new Lyapunov function in terms of linear matrix inequalities. To overcome the inherent limitations of the existing methods, [7] considered synchronous and asynchronous stochastic approximation for consensus seeking with delayed measurements for first-order integrator multi-agent systems in noisy environments.

Remark 3.
Although [7] also considered synchronous and asynchronous stochastic approximation for consensus seeking with delayed measurements for first-order integrator multi-agent systems in noisy environments, our method is quite different from [7]: 1) Unlike [7], which considered only first-order integrator multi-agent systems, we investigate high-order stochastic multi-agent systems, which are more general stochastic multi-agent systems over unreliable communication networks. [7] considered B(k) to be a matrix-valued random process describing two cases, synchronous and asynchronous updates, whereas we take account of a more general B(k). 2) Unlike [7], which did not consider an open-loop matrix of high-order agent dynamics containing strictly unstable eigenvalues, we allow the open-loop matrix to contain strictly unstable eigenvalues and stabilize the dynamics. 3) Unlike [7], which did not consider the multi-noise problem including both measurement and communication noises, we allow for more general stochastic multi-agent systems over unreliable communication networks, whose uncertainties include measurement and communication noises, Markovian switching topology, and communication delays. 4) Unlike [7], which did not consider the switching topology problem, we take account of the distributed consensus control of multi-agent systems with Markovian switching topology.
5) [7] first analyzed stochastic approximation consensus algorithms via ergodic theorems for backward products of degenerating stochastic matrices without noise, and then introduced a key notion of compatible nonnegative matrices and developed a dynamical system approach to prove ergodicity with noise; in contrast, we directly develop an ergodicity approach for backward products of degenerating stochastic matrices via a discrete-time dynamical system approach to analyze the consensus stabilization problem of stochastic multi-agent systems with noises, Markovian switching topology, and communication delays, and we obtain a necessary and sufficient condition for consensus stabilization.

3. Necessary and Sufficient Condition for Consensus Stabilization

In this section, we state our main results on the consensus stabilization problem of general stochastic multi-agent systems with Markovian switching topology, noises and communication delays, using the stochastic approximation ergodicity approach.

Assumption 1. The digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union of the Markovian communication topology set G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s} are strongly connected.

The following lemmas are useful for obtaining the main results of this paper.


Lemma 1. [42] Consider the discrete-time modified Riccati inequality:

P > A^T P A − γ A^T P B (B^T P B)^{−1} B^T P A.   (20)

Assume that (A, B) is stabilizable and A is not Schur. Then there is a critical γ_d ∈ [0, 1) such that for every γ > γ_d there exists a positive definite matrix P solving (20).

Theorem 1. Under Assumption 1, let K = τ(B^T P B)^{−1} B^T P A. If {A(k), k ≥ 0} is δ(k)-compatible with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union of the Markovian communication topology set G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s}, then the general stochastic high-order discrete-time multi-agent system (2) under the protocol (5) reaches consensus stabilization with synchronous step size, i.e.,

lim_{k→∞} E|x_i(k) − x_j(k)| = 0,

where π_d = min_{i∈S}{π_{di}}, τ ∈ {τ ∈ R | γ_d < γ(τ) = 2π_d τ λ_2(Ḡ_un) − τ^2 ρ^2}, Ḡ_un = (G_un + G_un^T)/2, γ_d is given in Lemma 1, ρ^2 is the maximum eigenvalue of Σ_{i=1}^s L_{G^{(i)}}^T L_{G^{(i)}}, and P is a positive definite matrix solving (20) with γ = γ(τ).
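A P satisfying (20) for a given γ > γ_d can be found numerically by iterating the corresponding modified Riccati recursion with a positive definite offset. The sketch below (the plant (A, B), γ = 0.9 and offset Q = I are illustrative assumptions, not values from the paper) also forms the gain K = τ(B^T P B)^{−1} B^T P A with τ = 1:

```python
import numpy as np

A = np.array([[1.1, 1.0],
              [0.0, 0.9]])      # A is not Schur (eigenvalue 1.1 outside unit disk)
B = np.array([[0.0],
              [1.0]])           # (A, B) is stabilizable
gamma, Q = 0.9, np.eye(2)

# Iterate P <- A'PA - gamma * A'PB (B'PB)^(-1) B'PA + Q; at a fixed point,
# P exceeds the right-hand side of (20) by exactly Q > 0, hence solves (20).
P = np.eye(2)
for _ in range(500):
    G = A.T @ P @ B @ np.linalg.inv(B.T @ P @ B) @ B.T @ P @ A
    P = A.T @ P @ A - gamma * G + Q

# Check the strict inequality (20), then form K (tau = 1 here, illustrative).
rhs = A.T @ P @ A - gamma * A.T @ P @ B @ np.linalg.inv(B.T @ P @ B) @ B.T @ P @ A
assert np.all(np.linalg.eigvalsh(P - rhs) > 0)
K = np.linalg.inv(B.T @ P @ B) @ B.T @ P @ A
```

The iteration remains bounded only when γ exceeds the critical value γ_d of Lemma 1; pushing γ below that threshold makes P blow up, mirroring the role of the constraint γ(τ) > γ_d in Theorem 1.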

Proof: Given {A(k)} in (14), define

M^A(k) = Σ_{d=0}^{d^*} A_{0d}(k),   (21)

which is a stochastic matrix. Obviously, if {A(k), k ≥ 0} is δ(k)-compatible with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union of the Markovian communication topology set G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s}, then so is {M^A(k), k ≥ k_0}.

Sufficiency. The proof of the sufficiency of Theorem 1 is divided into two parts. Assume that {A(k), k ≥ 0} is δ(k)-compatible with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s}, and that Σ_{k=0}^∞ δ^2(k) < ∞. Let {M^A(k), k ≥ 0} be defined by (21). For k ≥ k_0 and the initial pair (k_0, X(k_0)), we write (15) in the equivalent form

X(k + 1) = Φ(k + 1, k_0) X(k_0) + Σ_{ℓ=k_0}^{k} Φ(k + 1, ℓ + 1) D(ℓ) W(ℓ)
         = A_{00}(k) X(k) + Σ_{d=1}^{d^*} A_{0d}(k) X^{(−d)}(k) + Σ_{ℓ=k_0}^{k} Φ(k + 1, ℓ + 1) D(ℓ) W(ℓ)
         = M^A(k) X(k) + Σ_{d=1}^{d^*} A_{0d}(k) (X^{(−d)}(k) − X(k)) + Σ_{ℓ=k_0}^{k} Φ(k + 1, ℓ + 1) D(ℓ) W(ℓ).   (22)

Part I: Denote ξ(k) = Σ_{d=1}^{d^*} A_{0d}(k) (X^{(−d)}(k) − X(k)), Φ(k + j, k + 1) = {φ_ij(k + j, k + 1)} = Ã(k + j − 1) ··· Ã(k + 2) Ã(k + 1), and Φ(k + 1, k_0 + 1) = {φ_ij(k + 1, k_0 + 1)} = Ã(k) ··· Ã(k_0 + 2) Ã(k_0 + 1) for k > k_0, with Φ(k_0, k_0) = I. Denote ζ_{T,j} = Σ_{ℓ=T}^{T+j−1} Φ(T + j, ℓ + 1) D(ℓ) W(ℓ) for T > 0, j ≥ 1, so that ζ_{k,1} = Φ(k + 1, k + 1) D(k) W(k). Since {A(k), k ≥ k_0} is δ(k)-compatible with the digraphs G^{(r(k))}, 1 ≤ r(k) ≤ s, and the union G_un = {G^{(r(k))}; 1 ≤ r(k) ≤ s}, (16) and (17) hold for k ≥ k_c. For all samples Δ and k ≥ k_0, Φ := Φ_Δ(k, k_0) is a stochastic matrix, ensuring |Φ|^2 = Tr(Φ^T Φ) = Tr(Φ Φ^T) ≤ nNd_1^*. We have

E|ζ_{T,j}|^2 = E[ Σ_{T≤i,k≤T+j−1} W^T(i) D^T(i) Φ^T(T + j, i + 1) Φ(T + j, k + 1) D(k) W(k) ].


For each pair (i, k),

E[W^T(i)D^T(i)Φ^T(T+j, i+1)Φ(T+j, k+1)D(k)W(k)]
 = Tr( E[D^T(i)Φ^T(T+j, i+1)Φ(T+j, k+1)D(k)] E[W(k)W^T(i)] )
 ≤ 2n²N²d1* φ^{|k−i|} (E|D(k)|² E|D(i)|²)^{1/2} (E|W(k)|² E|W(i)|²)^{1/2}
 ≤ n²N²d1* φ^{|k−i|} ( E|D(i)|² E|W(i)|² + E|D(k)|² E|W(k)|² ),   (23)

where X(k) ∈ R^{nNd1*} denotes the states of the N agents, W(k) ∈ R^{2nN} is the noise vector, and the initial condition is X0. Hence, by (23),

E| Σ_{T ≤ i,ℓ ≤ T+j−1} W^T(i)D^T(i)Φ^T(T+j, i+1)Φ(T+j, ℓ+1)D(ℓ)W(ℓ) |
 ≤ n²N²d1* Σ_{l=−j+1}^{j−1} φ^{|l|} Σ_{ℓ=T}^{T+j−1} 2 E|D(ℓ)|² E|W(ℓ)|²
 ≤ 4(nN)² d1* ( Σ_{l=0}^{∞} φ^{l} ) Σ_{ℓ=T}^{T+j−1} E|D(ℓ)|² E|W(ℓ)|².

We may show that sup_{k≥k0} E|X(k)|² < ∞. Recall ζ_{T,j} = Σ_{ℓ=T}^{T+j−1} Φ(T+j, ℓ+1)D(ℓ)W(ℓ) for T > k0 and j ≥ 1, and ζ_{k,1} = Σ_{ℓ=k0}^{k} Φ(k+1, ℓ+1)D(ℓ)W(ℓ). For any given ε > 0, we may select k1 ≥ k0 such that

sup_{T≥k1} sup_{j≥1} E|ζ_{T,j}|² ≤ ε/2,   sup_{k≥k0} E|ζ_{k,1}|² ≤ ε/2,   sup_{k≥k0} E|ζ_{k,1}| ≤ √(ε/2).   (24)

Part II: For k ≥ max{k0, kc} + d* and any 1 ≤ d ≤ d*, we have X^{(−d)}(k) = X(k−d) and |A00(k−1) − I| ≤ Cδ(k−1); here C may take different values at different occurrences. Hence,

|X(k) − X(k−1)| = | A00(k−1)X(k−1) + Σ_{d=1}^{d*} A_{0d}(k−1)X^{(−d)}(k−1) − X(k−1) + ζ_{k,1} − ζ_{k−1,1} |
 ≤ | A00(k−1)X(k−1) + Σ_{d=1}^{d*} A_{0d}(k−1)X^{(−d)}(k−1) − X(k−1) | + |ζ_{k,1} − ζ_{k−1,1}|
 ≤ | A00(k−1)X(k−1) + Σ_{d=1}^{d*} A_{0d}(k−1)X^{(−d)}(k−1) − X(k−1) | + 2√(ε/2)
 ≤ Cδ(k−1) + 2√(ε/2).

Similarly,

|X(k−l+1) − X(k−l)| ≤ Cδ(k−l) + 2√(ε/2),   1 ≤ l ≤ d*.

Hence, for any d ≤ d*,

|X(k) − X^{(−d)}(k)| = |X(k) − X(k−d)| ≤ C(δ(k−1) + ··· + δ(k−d)) + 2d√(ε/2) ≤ Cδ*(k) + 2d*√(ε/2).

If d* ≥ 1, we have

max_{1≤d≤d*} |X(k) − X^{(−d)}(k)| ≤ Cδ*(k) + 2d*√(ε/2),   (25)

for k ≥ max{k0, kc} + d*, where δ*(k) = max_{k−d* ≤ ℓ ≤ k} δ(ℓ). Consequently,

|ξ(k)| ≤ Cδ(k)δ*(k) + 2√(ε/2) ≤ C(δ*(k))² + 2√(ε/2)

for k ≥ max{k0, kc} + d*. So Σ_{ℓ=k0}^{∞} |ξ(ℓ)| < ∞.

Since Φ(k, k0) = Ã(k−1)···Ã(k0) for k > k0 and Φ(k0, k0) := I, for k > k0 ≥ 0,

X(k+1) = Φ(k, k0)X(k0) + Σ_{ℓ=k0}^{k−1} Φ(k, ℓ+1)ξ(ℓ) + Σ_{ℓ=k0}^{k} Φ(k+1, ℓ+1)D(ℓ)W(ℓ).   (26)

Since |Φ(k, k0)| ≤ √(nNd1*) for k ≥ k0, there exists a fixed C such that sup_{k≥k0} |X(k)| ≤ C. Given any ε > 0, we may find a sufficiently large k0′ ≥ k0 such that

sup_{k > k0′} | Σ_{ℓ=k0′}^{k−1} Φ(k, ℓ+1)ξ(ℓ) | ≤ ε.   (27)

Denote Φ(k, k0)X(k0) = [η(k, 1), ···, η(k, nNd1*)]^T and X(k) = [X(k); X(k−1); ···; X(k−d*)], X(k) = [x1^T(k), ···, xN^T(k)]^T. For a sufficiently large k1 > k0, by ergodicity we have max_{1≤i,j≤nNd1*} |η_i(k) − η_j(k)| ≤ 3ε for all k ≥ k1. Since ε is arbitrary,

lim_{k→∞} max_{1≤i,j≤N} |x_i(k) − x_j(k)| = 0.   (28)

That is to say, the general stochastic high-order discrete-time multi-agent system (2) under the protocol (5) reaches consensus stabilization.

Necessity. Assume consensus is reached. Then, by consensus,

Φ(k, 1) = {φ_{ij}(k, 1)} = Ã(k−1)···Ã(2)Ã(1) = Ã(k−1)···Ã(2)Ã(1)(e1, ···, e_{nN(d*+1)}) → (c1 1_{nN(d*+1)}, ···, c_{nN(d*+1)} 1_{nN(d*+1)}) as k → ∞,

for some constants c1, ···, c_{nN(d*+1)}, where {e1, ···, e_{nN(d*+1)}} denotes the canonical basis of R^{nN(d*+1)}. So {A(k), k ≥ k0} has ergodic backward products.

Given any ε > 0 and k0 ≥ 0, select k1 ≥ k0 such that (24) holds. Let e_i ∈ R^{nN(d*+1)} be the unit column vector with i-th element equal to 1. Take the initial pair (k1, X(k1)) for (15) with X(k1) = e_i. We have

X(k+1) = Φ(k+1, k1)e_i + Σ_{ℓ=k1}^{k} Φ(k+1, ℓ+1)D(ℓ)W(ℓ)

for k ≥ k1. By consensus there exists a random variable η_i such that

lim_{k→∞} E|X(k+1) − η_i 1_{nN(d*+1)}|² = 0.   (29)

We have

E|Φ(k+1, k1)e_i − η_i 1_{nN(d*+1)}|² ≤ 2E|X(k+1) − η_i 1_{nN(d*+1)}|² + 2E|ζ_{k1, k−k1+1}|² ≤ 2E|X(k+1) − η_i 1_{nN(d*+1)}|² + ε.

This combined with (29) implies that there exists k2 ≥ k1 such that for k ≥ k2 and each e_i, 1 ≤ i ≤ nN(d*+1), E|Φ(k+1, k1)e_i − η_i 1_{nN(d*+1)}|² ≤ 2ε. Since Φ(k+1, k1)e_i is the i-th column of Φ(k+1, k1), it follows that

E|Φ(k+1, k1) − 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})|² ≤ 2nN(d*+1)ε,   k ≥ k2.


Here the asymptotic property of Φ(k+1, k1) will not be analyzed directly, since k1 changes with ε; in fact (η1, ···, η_{nN(d*+1)}) also changes with k1 and ε. For the fixed k0, we have

|Φ(k+1, k0) − 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})Φ(k1, k0)|²
 = |Φ(k+1, k1)Φ(k1, k0) − 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})Φ(k1, k0)|²
 ≤ |Φ(k+1, k1) − 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})|² |Φ(k1, k0)|²
 ≤ nN(d*+1) |Φ(k+1, k1) − 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})|².

Hence,

E|Φ(k+1, k0) − 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})Φ(k1, k0)|² ≤ 2(nN(d*+1))² ε   (30)

for k ≥ k2. Since ε > 0 is arbitrary, (30) implies that {Φ(k+1, k0), k ≥ k0} is a Cauchy sequence in the L² norm, so {Φ(k+1, k0), k ≥ k0} converges in mean square to some Π(k0). Clearly, for each sample ω, Π(k0)(ω) is a stochastic matrix. Since 1_{nN(d*+1)}(η1, ···, η_{nN(d*+1)})Φ(k1, k0) in (30) is a matrix with identical rows, the mean square difference between any two rows of Φ(k+1, k0) is at most 4(nN(d*+1))² ε for all k ≥ k2, which implies that Π(k0) has identical rows since ε > 0 is arbitrary.

The mean square convergence of Φ(k+1, k0) implies convergence with probability 1 along a subsequence. Since lim_{k→∞} E|Φ(k+1, k0) − Π(k0)|² = 0, there exists a sequence of integers k0 < τ1 < τ2 < ··· such that {Φ(τ_k, k0), k ≥ 1} converges to Π(k0) for all ω ∈ Ω\Null, where Null is a null set. For any ω ∈ Ω\Null and ε > 0, there exists k0′ depending on ω such that |Φ(τ_k, k0) − Π(k0)| ≤ ε for all k ≥ k0′. For any k ≥ τ_{k0′}, since Φ(k, τ_{k0′}) is a stochastic matrix and Π(k0) has identical rows,

|Φ(k, k0) − Π(k0)| = |Φ(k, τ_{k0′})Φ(τ_{k0′}, k0) − Π(k0)| = |Φ(k, τ_{k0′})Φ(τ_{k0′}, k0) − Φ(k, τ_{k0′})Π(k0)| ≤ |Φ(k, τ_{k0′})| |Φ(τ_{k0′}, k0) − Π(k0)|.

Hence, for each ω ∈ Ω\Null and k ≥ τ_{k0′}, |Φ(k, k0)(ω) − Π(k0)(ω)| ≤ Cε for a fixed constant C, which implies that Φ(k, k0) converges to Π(k0) with probability 1. This completes the proof of necessity.

We have that {W(k), k ≥ 0} is independent of {(A(k), D(k)), k ≥ 0}. Take a sufficiently large k0 such that A(k) is, with probability 1, a stochastic matrix for k ≥ k0. In addition,

Σ_{k=0}^{∞} E|D(k)|² E|W(k)|² < ∞.

Suppose that {A(k), k ≥ 0} has ergodic backward products. By (A2), {A(k), k ≥ 0} satisfies the δ(k)-compatibility condition with some constants kc, c, c̄. Consider X(k+1) = M^A(k)X(k) with the initial pair (k0, X_{k0}). Since (18) holds for M^A(k), |X(k+1) − X(k)| ≤ Cδ(k) for k ≥ max{k0, kc} and some C. Define X^{(−1)}(k+1) = X(k), X^{(−d)}(k+1) = X^{(−(d−1))}(k) for d = 2, ···, d*. Fix any initial condition (X(k0), X^{(−1)}(k0), ···, X^{(−d*)}(k0)). We may show that

max_{1≤d≤d*} |X(k) − X^{(−d)}(k)| ≤ Cδ*(k),   k ≥ max{k0, kc} + d*.

Then

X(k+1) = A00(k)X(k) + Σ_{d=1}^{d*} A_{0d}(k)X^{(−d)}(k) + Σ_{d=1}^{d*} A_{0d}(k)[X(k) − X^{(−d)}(k)] + Σ_{ℓ=k0}^{k} Φ(k+1, ℓ+1)D(ℓ)W(ℓ).

Letting X(k) = [X(k); X^{(−1)}(k); ···; X^{(−d*)}(k)], we may write

X(k+1) = A(k)X(k) + ξ(k) + ζ_{k,1},   k ≥ 0,

where ξ(k) = [ Σ_{d=1}^{d*} A_{0d}(k)(X(k) − X^{(−d)}(k)); 0_{nNd*×1} ]. We have Σ_{k=k0}^{∞} |ξ(k)| < ∞. Since {A(k), k ≥ 0} has ergodic backward products, there exists x(∞) such that lim_{k→∞} X(k) = x(∞)1_{nN(d*+1)}, which implies lim_{k→∞} X(k) = x(∞)1_{nN}. Since (k0, X_{k0}) is arbitrary, this implies that {M^A(k), k ≥ 0} has ergodic backward products (see the Appendix). Then {A(k), k ≥ 0} has ergodic backward products if and only if {M^A(k), k ≥ 0} has ergodic backward products.


4. Simulations

In this section, we provide an example to illustrate the effectiveness of the obtained results, which considers the consensus stabilization of general stochastic high-order multi-agent systems in continuous time with Markovian switching topology, noises and communication delays. Let each agent's dynamics be specified as

Ac = [ 0 1 2        Bc = [ 0 1
       0 1 0               1 0
       1 1 1 ],            0 1 ].

These agents are unstable, since the eigenvalues of each agent are λ1 = −1, λ2 = 1, λ3 = 2. To stabilize the stochastic multi-agent system, we can choose a matrix K such that A + BK is Schur; that is, we obtain the feedback gain matrix K, for example by using Ackermann's formula [43], such that all the eigenvalues of the matrix A + BK lie in the open unit disk. We compute the state feedback gain

K = [ 0 4 0        for P = [ 0 1 1
      0 1 4 ]                1 0 0
                             0 1 0 ].

Consider four networked agents whose interaction topologies randomly switch between G(1) and G(2) of Figure 1. We can see that the digraphs G(1) and G(2) are balanced, and the union of the communication topology set Gun = G(1) ∪ G(2) contains a spanning tree.
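The instability and stabilizability claims above can be verified with a short numerical check (a sketch assuming NumPy; the PBH rank test used here is a standard criterion, not the paper's procedure). The stable mode λ = −1 need not be controllable for stabilizability, so only the eigenvalues with nonnegative real part are tested:

```python
import numpy as np

# Agent dynamics from Section 4
Ac = np.array([[0., 1., 2.],
               [0., 1., 0.],
               [1., 1., 1.]])
Bc = np.array([[0., 1.],
               [1., 0.],
               [0., 1.]])

# Open-loop eigenvalues: lambda = -1, 1, 2, so each agent is unstable
eigs = np.sort(np.linalg.eigvals(Ac).real)
print(eigs)  # -> approximately [-1.  1.  2.]

# PBH test: (Ac, Bc) is stabilizable iff rank [lambda*I - Ac, Bc] = 3
# for every eigenvalue with Re(lambda) >= 0
for lam in (1.0, 2.0):
    pbh = np.hstack([lam * np.eye(3) - Ac, Bc])
    assert np.linalg.matrix_rank(pbh) == 3
print("stabilizable")
```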

























Figure 1. Network topologies: (1) G(1) ; (2) G(2) ; (3) Gun = G(1) ∪ G(2) .

The three-order multi-agent system is composed of four agents, each with the unstable dynamics (A, B) specified above (see the topologies in Figure 1). The switching topology of the four agents is determined by the Markov chain {r(k), k ≥ 0} with state space S = {1, 2}. The related topology graph is G(k) = (V_G(k), E_G(k), A_G(k)); the adjacency matrices are:

A_G^(1) = [0 0 1 0; 1 0 0 0; 0 1 0 0; 0 0 0 0],
A_G^(2) = [0 0 0 0; 0 0 1 0; 0 0 0 1; 0 1 0 0],
A_Gun  = [0 0 1 0; 1 0 1 0; 0 1 0 1; 0 1 0 0].

The related Laplacian matrices are:

L_G^(1) = [1 0 -1 0; -1 1 0 0; 0 -1 1 0; 0 0 0 0],
L_G^(2) = [0 0 0 0; 0 1 -1 0; 0 0 1 -1; 0 -1 0 1],
L_Gun  = [1 0 -1 0; -1 2 -1 0; 0 -1 2 -1; 0 -1 0 1].
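These matrices can be rebuilt and cross-checked programmatically. The sketch below (assuming NumPy) forms each Laplacian as L = D − A and uses the standard rank criterion for directed graphs (rank(L) = n − 1 exactly when the digraph contains a spanning tree) to check which of the graphs contains a spanning tree:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A, with D the diagonal matrix of row sums."""
    return np.diag(A.sum(axis=1)) - A

A1 = np.array([[0,0,1,0],[1,0,0,0],[0,1,0,0],[0,0,0,0]], dtype=float)
A2 = np.array([[0,0,0,0],[0,0,1,0],[0,0,0,1],[0,1,0,0]], dtype=float)
A_un = np.maximum(A1, A2)   # union graph: edge present in G(1) or G(2)

L1, L2, L_un = laplacian(A1), laplacian(A2), laplacian(A_un)

# Every Laplacian has zero row sums
assert np.allclose(L_un.sum(axis=1), 0)

# rank(L) = n - 1 iff the digraph contains a spanning tree:
print(np.linalg.matrix_rank(L1))    # 2  (node 4 is isolated in G(1))
print(np.linalg.matrix_rank(L2))    # 2  (node 1 is isolated in G(2))
print(np.linalg.matrix_rank(L_un))  # 3  (the union contains a spanning tree)
```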

Note that neither G(1) nor G(2) admits a spanning tree, while Gun contains a spanning tree. The generator of the Markov switching process is chosen as

Q = [ -1  1
       2 -2 ]

and the initial distribution of the continuous-time Markov process is given by its invariant distribution π = [2/3, 1/3]. The initial state of each agent is assumed to be a standard Gaussian random vector on [-0.3, 0.3]; the initial state used in the simulation is X(0) = [0.9, −3.5, −3.1, −5.7, −2.4, 1.0, −1.6, −1.6, −1.8, 4.8, 2.0, −3.6]^T. The communication noises are independent white noises with uniform distribution.
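The invariant distribution quoted above solves πQ = 0 together with the normalization Σ_i π_i = 1; a quick check (a sketch assuming NumPy):

```python
import numpy as np

# Generator of the two-state Markov switching process (Section 4)
Q = np.array([[-1.,  1.],
              [ 2., -2.]])

# Invariant distribution: solve pi Q = 0 subject to pi summing to 1.
# Stack the normalization constraint under Q^T and solve by least squares.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0., 0., 1.])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> approximately [0.6667, 0.3333], i.e. [2/3, 1/3]
```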


Figure 2. The trajectories of the state vectors x1(k)−x12(k) without delays.

Fig. 2 shows a sample path of the consensus-seeking process for the topologies Gun = G(1) ∪ G(2) with zero delays, where every element of the vector X(k) is displayed on the vertical axis. The sampling time of the network is T = 0.05 sec. The upper bound of the delays is d* = 30T = 1.5 sec. Assume that the communication delays d_ij(k) satisfy d12(k) = d23(k) = d31(k) = 10T = 0.5 sec and d24(k) = d43(k) = 20T = 1 sec, while d32(k) = 30T = 1.5 sec, for any k ∈ Z+. In Fig. 3, we see that, with delays, all the agents in the group still reach consensus, due to the use of delay-dependent adjacency matrices. This example illustrates that consensus is achieved, which is consistent with Theorem 1.

5. Conclusion

In this paper, we have investigated consensus problems of linear multi-agent systems with Markovian switching topologies, stochastic communication noises, and delay. Using the stochastic approximation ergodicity approach, probability limit theory, stochastic differential equation theory, algebraic graph theory and Markov chain theory, we prove consensus stabilization in the mean square sense under mild conditions on the measurement or communication noises. Necessary and sufficient conditions have been derived under an unreliable network environment. In future work, the consensus stabilization of stochastic multi-agent systems with Markovian switching topologies will be studied under noisy and delayed measurements of neighbors' states due to environmental uncertainties and communication delays, as well as the case in which each agent minimizes an individual discounted cost function through continuous state feedback; these topics require further investigation.

6. Appendix

6.1. Ergodicity of Backward Products

Let {Ã(k), k ≥ 0} be a sequence of deterministic nonnegative matrices, where each Ã(k) is a stochastic matrix. Define the so-called backward product

Φ(k, k0) = {φ_{ij}(k, k0)} = Ã(k−1)···Ã(k0+1)Ã(k0)


Figure 3. The trajectories of the state vectors x1(k)−x12(k) with delays.

for k > k0 ≥ 0, and Φ(k0, k0) := I. Obviously, the product Φ(k, k0) is still a stochastic matrix. Let φ_{ij}(k, k0) denote its (i, j)-th element. For any two vectors x = (x1, ···, xn)^T > 0 and y = (y1, ···, yn)^T > 0, define

d(x^T, y^T) = ln( max_i (x_i/y_i) / min_i (x_i/y_i) ) = max_{i,j} ln( (x_i/y_i) / (x_j/y_j) ),

and denote

τ(Φ(k, k0)) = sup_{x,y>0, x ≠ λy} d(x^T Φ(k, k0), y^T Φ(k, k0)) / d(x^T, y^T).

Definition 4 [44]. We say that weak ergodicity holds for backward products of the sequence of stochastic matrices {Ã(k), k ≥ 0} if

lim_{k→∞} |φ_{i1 j}(k, k0) − φ_{i2 j}(k, k0)| = 0

for any given k0 ≥ 0 and i1, i2, j, i.e., the difference between any two rows of Φ(k, k0) converges to zero as k → ∞. If, in addition to weak ergodicity, φ_{ij}(k, k0) converges as k → ∞ for any k0, i, j, we say that strong ergodicity holds. We may thus call the combination of Lemma 2 with Lemma 3 the "Weak Ergodicity Theorem".

Lemma 2 [44]. If, for the sequence of non-negative allowable matrices Ã(k) = {Ã_{ij}(k)}, k ≥ 1,
(i) Φ(k0 + h, k0) > 0 for all k0 ≥ 0, where h ≥ 1 is some fixed integer independent of k0; and
(ii) min⁺_{i,j} Ã_{ij}(k) / max_{i,j} Ã_{ij}(k) ≥ γ > 0,   (31)
where min⁺ refers to the minimum over the positive elements and γ is independent of k, then weak ergodicity (at a geometric rate) obtains.

Lemma 3 [44]. If Φ(k, k0) = {φ_{ij}(k, k0)} = Ã(k−1)···Ã(k0+1)Ã(k0) is the backward product in the previous notation and all Ã(k) are allowable, then τ(Φ(k, k0)) → 0 as k → ∞ for each k0 ≥ 0 if and only if the following two conditions hold: (a) Φ(k, k0) > 0 for all sufficiently large k; (b) φ_{ik}(k, k0)/φ_{jk}(k, k0) → W_{ij} > 0 for all i, j, where the limit is independent of the column index (i.e., the rows of Φ(k, k0) tend to proportionality as k → ∞) and W_{ij} > 0 is a positive constant independent of k and k0.

Lemma 4 [44]. For backward products Φ(k, k0), k0 ≥ 0, k ≥ 1, weak and strong ergodicity are equivalent.
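The weak ergodicity notion of Definition 4 is easy to observe numerically. The sketch below (assuming NumPy; the random matrix sequence is hypothetical and merely satisfies the positivity condition of Lemma 2) forms a backward product of row-stochastic matrices whose entries are bounded away from zero and checks that the rows of the product coincide up to numerical precision:

```python
import numpy as np

rng = np.random.default_rng(0)

def backward_product(mats, k, k0):
    """Phi(k, k0) = A(k-1) ... A(k0+1) A(k0) for a list of stochastic matrices."""
    P = np.eye(mats[0].shape[0])
    for j in range(k0, k):
        P = mats[j] @ P          # multiply on the left: backward product
    return P

# A sequence of random row-stochastic matrices with entries bounded away from 0
n, K = 4, 60
mats = []
for _ in range(K):
    M = rng.uniform(0.5, 1.0, size=(n, n))
    mats.append(M / M.sum(axis=1, keepdims=True))

Phi = backward_product(mats, K, 0)
# Weak ergodicity: any two rows of Phi(k, k0) coincide in the limit
row_gap = np.max(np.abs(Phi - Phi[0]))
print(row_gap < 1e-8)  # -> True
```

Each factor here has normalized entries of at least 0.5/3.5, so its Dobrushin coefficient is at most 1 − 4·(0.5/3.5) ≈ 0.43, and the row gap contracts geometrically, as Lemma 2 predicts.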


References

[1] B.Y. Kim and H.S. Ahn, Distributed coordination and control for a freeway traffic network using consensus algorithms, IEEE Systems Journal, DOI: 10.1109/JSYST.2014.2318054 (2014).
[2] S. Kar, J. Moura, Distributed consensus algorithms in sensor networks with imperfect communication: link failures and channel noises, IEEE Trans. on Signal Processing 57(1) (2009) 355-369.
[3] R. Aragues, J. Cortes, C. Sagues, Distributed consensus on robot networks for dynamically merging feature-based maps, IEEE Trans. on Robotics 28(4) (2012) 840-854.
[4] I. Palomares, L. Martinez, F. Herrera, A consensus model to detect and manage non-cooperative behaviors in large scale group decision making, IEEE Trans. on Fuzzy Systems 22(3) (2014) 516-530.
[5] T. Li, J. Zhang, Consensus conditions of multi-agent systems with time-varying topologies and stochastic communication noises, IEEE Trans. on Automatic Control 55(9) (2010) 2043-2057.
[6] Q. Zhang, J. Zhang, Distributed parameter estimation over unreliable networks with Markovian switching topologies, IEEE Trans. on Automatic Control 57(10) (2012) 2545-2560.
[7] M. Huang, Stochastic approximation for consensus: a new approach via ergodic backward products, IEEE Trans. on Automatic Control 57(12) (2012) 2994-3008.
[8] I. Matei, J.S. Baras, Convergence results for the linear consensus problem under Markovian random graphs, ISR Technical Report 2009-18, University of Maryland (2009).
[9] K. You, Z. Li, L. Xie, Consensus condition for linear multi-agent systems over randomly switching topologies, Automatica 49(10) (2013) 3125-3132.
[10] L. Cheng, Z. Hou, M. Tan, A mean square consensus protocol for linear multi-agent systems with communication noises and fixed topologies, IEEE Trans. on Automatic Control 59(1) (2014) 261-267.
[11] R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays, IEEE Trans. on Automatic Control 49(9) (2004) 1520-1533.
[12] Y. Sun, D. Zhao, J. Ruan, Consensus in noisy environments with switching topology and time-varying delays, Physica A: Statistical Mechanics and its Applications 389(19) (2010) 4149-4161.
[13] N.O. Amelina, A.L. Fradkov, Approximate consensus in the dynamic stochastic network with incomplete information and measurement delays, Automation and Remote Control 73(11) (2012) 1765-1783.
[14] D. Thanou, E. Kokiopoulou, Y. Pu, Distributed average consensus with quantization refinement, IEEE Trans. on Signal Processing 61(1) (2013) 194-205.
[15] W. Ren, R.W. Beard, E.M. Atkins, A survey of consensus problems in multi-agent coordination, in Proc. Amer. Control Conf., Portland, OR, Jun. 2005, 1859-1864.
[16] T. Li, J. Zhang, Mean square average-consensus under measurement noises and fixed topologies: necessary and sufficient conditions, Automatica 45(8) (2009) 1929-1936.
[17] M. Huang, J.H. Manton, Stochastic consensus seeking with noisy and directed inter-agent communication: fixed and randomly varying topologies, IEEE Trans. on Automatic Control 55(1) (2010) 235-241.
[18] G. Miao, S. Xu, Y. Zou, Consentability for high-order stochastic multi-agent system under noise environment and time delays, Journal of the Franklin Institute 350(2) (2013) 244-257.
[19] L. Cheng, Z. Hou, M. Tan, X. Wang, Necessary and sufficient conditions for consensus of double-integrator multi-agent systems with measurement noises, IEEE Trans. on Automatic Control 56 (2010) 235-241.
[20] G. Wen, Z. Duan, Z. Li, G. Chen, Stochastic consensus in directed networks of agents with non-linear dynamics and repairable actuator failures, IET Control Theory & Applications 6(11) (2012) 1583-1593.
[21] W. Zhu, D. Cheng, Leader-following consensus of second-order agents with multiple time-varying delays, Automatica 46(12) (2010) 1994-1999.
[22] H. Li, X. Liao, X. Lei, T. Huang, W. Zhu, Second-order consensus seeking in multi-agent systems with nonlinear dynamics over random switching directed networks, IEEE Trans. Circuits Syst. I: Regul. Pap. 60(6) (2013) 1595-1607.
[23] J. Liu, H. Zhang, X. Liu, W. Xie, Distributed stochastic consensus of multi-agent systems with noisy and delayed measurements, IET Control Theory and Appl. 7(10) (2013) 1359-1369.
[24] A. Tahbaz-Salehi, A. Jadbabaie, A necessary and sufficient condition for consensus over random networks, IEEE Trans. Autom. Control 53(3) (2008) 791-795.
[25] Y. Hatano, M. Mesbahi, Agreement over random networks, IEEE Trans. Autom. Control 50(11) (2005) 1867-1872.
[26] C. Wu, Synchronization and convergence of linear dynamics in random directed networks, IEEE Trans. Autom. Control 51(7) (2006) 1207-1210.
[27] A. Tahbaz-Salehi, A. Jadbabaie, Consensus over ergodic stationary graph processes, IEEE Trans. Autom. Control 55(1) (2010) 225-230.
[28] H. Hu, W. Yu, Q. Xuan, Consensus of multi-agent systems in the cooperation-competition network with inherent nonlinear dynamics: a time-delayed control approach, Neurocomputing 158 (2015) 134-143.
[29] W. Zhu, Z. Jiang, Event-based leader-following consensus of multi-agent systems with input time delay, IEEE Trans. Autom. Control 60(5) (2015) 1362-1367.
[30] D. Xie, T. Liang, Second-order group consensus for multi-agent systems with time delays, Neurocomputing 153 (2015) 133-139.
[31] K. Sakurama, K. Nakano, Necessary and sufficient condition for average consensus of networked multi-agent systems with heterogeneous time delays, International Journal of Systems Science 46(5) (2015) 818-830.
[32] Y. Jiang, J. Liu, S. Wang, Cooperative output feedback tracking control for multi-agent consensus with time-varying delays and switching topology, Transactions of the Institute of Measurement and Control 37(4) (2015) 550-559.
[33] Y. Tang, H. Gao, W. Zhang, Leader-following consensus of a class of stochastic delayed multi-agent systems with partial mixed impulses, Automatica 53 (2015) 346-354.
[34] L. Ma, H. Min, S. Wang, Y. Liu, Consensus of nonlinear multi-agent systems with self and communication time delays: a unified framework, Journal of the Franklin Institute 352(3) (2015) 745-760.
[35] A. Olshevsky, J.N. Tsitsiklis, On the nonexistence of quadratic Lyapunov functions for consensus algorithms, IEEE Trans. on Automatic Control 53(11) (2008) 2642-2645.
[36] J. Xu, H. Zhang, L. Shi, Consensus and convergence rate analysis for multi-agent systems with time delay, in Proc. 12th International Conference on Control, Automation, Robotics and Vision, Guangzhou, China (2012) 590-595.
[37] G. Yin, L. Wang, Y. Sun, Stochastic recursive algorithms for networked systems with delay and random switching: multiscale formulations and asymptotic properties, Multiscale Model. Simul. 9(3) (2011) 1087-1112.
[38] Y. Sun, Mean square consensus for uncertain multiagent systems with noises and delays, Abstract and Applied Analysis (2012), Article ID 621060, 18 pages.
[39] J. Cortes, Distributed algorithms for reaching consensus on general functions, Automatica 44(3) (2008) 726-737.
[40] Y. Hu, P. Li, J. Lam, Consensus of multi-agent systems: a simultaneous stabilisation approach, IET Control Theory & Applications 6(11) (2012) 1758-1765.
[41] P. Ming, J. Liu, S. Tan, G. Wang, L. Shang, C. Jia, Consensus stabilization of stochastic multi-agent system with Markovian switching topologies and stochastic communication noise, Journal of the Franklin Institute 352 (2015) 3684-3700.
[42] K. Hengster-Movric, K. You, F.L. Lewis, L. Xie, Synchronization of discrete-time multi-agent systems on graphs using Riccati design, Automatica 49(2) (2013) 414-423.
[43] J. Ackermann, V. Utkin, Sliding mode control design based on Ackermann's formula, IEEE Trans. on Automatic Control 43(2) (1998) 234-237.
[44] E. Seneta, Non-negative Matrices and Markov Chains, 2nd ed., Springer, New York (2006).
[45] Y. Shang, Group consensus of multi-agent systems in directed networks with noises and time delays, International Journal of Systems Science 46(14) (2015) 2481-2492.
[46] J. Yu, L. Wang, Group consensus of multi-agent systems with directed information exchange, International Journal of Systems Science 43(2) (2012) 334-348.
[47] S.S. Stankovic, M.S. Stankovic, D.M. Stipanovic, Decentralized parameter estimation by consensus based stochastic approximation, IEEE Trans. on Automatic Control 56(3) (2011) 531-543.
[48] H. Fang, H. Chen, L. Wen, On control of strong consensus for networked agents with noisy observations, Journal of Systems Science and Complexity 25(1) (2012) 1-12.
[49] J. Xu, H. Zhang, L. Xie, Stochastic approximation approach for consensus and convergence rate analysis of multiagent systems, IEEE Trans. on Automatic Control 57(12) (2012) 3163-3168.
