
Distributed finite-time calculation of node eccentricities, graph radius and graph diameter

Gabriele Oliva (a,*), Roberto Setola (a), Christoforos N. Hadjicostis (b)

(a) Unit of Automatic, Department of Engineering, Università Campus Bio-Medico di Roma, via Álvaro del Portillo 21, 00128 Rome, Italy
(b) Department of Electrical and Computer Engineering, University of Cyprus, 75 Kallipoleos Avenue, P.O. Box 20537, 1678 Nicosia, Cyprus

Systems & Control Letters 92 (2016) 20-27

Article history: Received 9 July 2015; Received in revised form 25 November 2015; Accepted 18 February 2016.

Keywords: Distributed algorithms; Diameter; Eccentricity; Radius; Max-consensus.

Abstract: The distributed calculation of node eccentricities, graph radius and graph diameter is a fundamental step in tuning network protocols (e.g., setting an adequate time-to-live for packets), in selecting cluster heads, and in executing distributed algorithms, which typically depend on these parameters. Most existing methods deal with undirected topologies and have high memory and/or bandwidth requirements (or simply provide a bound on the diameter to reduce such costs). Other approaches require nodes that are able to communicate with their neighbors on a point-to-point basis, thus requiring each node to be aware of its neighbors. In this paper, we consider strongly connected directed graphs or connected undirected graphs, and we develop an algorithm that takes advantage of the robustness and versatility of the max-consensus algorithm and has low computational, memory and bandwidth requirements. Moreover, the agents communicate by broadcasting messages to their (out-)neighbors without requiring any knowledge of them or needing point-to-point communication capability. Specifically, each node has memory occupation proportional to the number of its neighbors, while the bandwidth for each link at each time instant is O(log n) bits, where n is the number of nodes. The completion time of the proposed algorithm is O(δn) for undirected graphs and O(n^2) for directed graphs, where δ is the network diameter.

1. Introduction

In the literature, much effort has been spent on the parallel (see [1] for a recent survey) and distributed calculation of the network diameter (i.e., the length of the maximum shortest path among any two nodes in the network), of the eccentricity of a given node (i.e., the maximum shortest path from a particular node to any other node) and of the network radius (i.e., the minimum among the eccentricities of the nodes). Having such insights can help reduce the computational effort in consensus algorithms [2-4], or can be used to set time-to-live parameters in routing protocols [5]. Moreover, the eccentricities of the nodes can be used to select cluster heads or local coordinators [6]. In the following, we denote the total number of nodes/agents in the network by n, and the network diameter by δ; we also denote by |N_i^in| the size of the in-neighborhood N_i^in of an agent i, i.e., the number of agents that may send information to the ith agent.




The approaches available to date apply to the connected undirected graph case and, although some methods like the one in [7] have short completion time, they may have high memory and/or bandwidth requirements, which become overwhelming, especially for big networks. A typical approach to calculate eccentricities [6,8] is to construct the minimum spanning tree [9] rooted at each node, or to calculate the shortest paths in the graph [10,11]. For undirected graphs, a distributed way to calculate the shortest paths is given in [12]; it requires O(n^2) steps to complete. The approach for undirected graphs presented in [13] is able to calculate the diameter in O(n) steps, with O(log n) bandwidth per link per step and O(|N_i^in| log n) bits of memory at each node (if a time-to-live is attached to the messages; otherwise, O(n log n) bits are required to check for re-transmissions). The algorithm in [13] requires nodes outfitted with nontrivial communication and computational capabilities, as it combines several communication approaches, ranging from flooding and breadth-first visits (broadcast) to depth-first visits and ConvergeCast (point-to-point). Specifically, the algorithm calculates the maximum among the eccentricities of the nodes by constructing n trees via breadth-first visits; the visits are suitably staggered to avoid collisions and terminate in O(n) steps. This feature, however, relies on a precise scheduling of the visits, and a failure (e.g., packet loss) may cause errors or inconsistencies.


Table 1. Comparison of the main features of the proposed algorithm against the state of the art.

Ref. [13]. Approximation: exact. Undirected: yes; directed: no. Steps: O(n). Memory: O(|N_i^in| log n) if using a time-to-live approach for flooding, O(n log n) if checking for re-transmissions. Bandwidth: O(log n). Remarks: the nodes must be able to switch between broadcasting and point-to-point communication, and the algorithm requires a precise scheduling of the messages exchanged.

Ref. [14]. Approximation: δ ≤ δ̂ ≤ 2δ. Undirected: yes; directed: no. Steps: O(n). Memory: O(M|N_i^in| log n). Bandwidth: O(M log n). Remarks: M ≥ 1 is used to trade off memory/bandwidth for accuracy.

Ref. [7]. Approximation: exact. Undirected: yes; directed: no. Steps: O(δ). Memory: O(n log n). Bandwidth: O(n log n).

Ref. [15]. Approximation: δ ≤ δ̂ ≤ (3/2 + η)δ. Undirected: yes; directed: no. Steps: O(√(n log n/(δη)) + δ). Memory: O(|N_i^in| log n). Bandwidth: O(log n). Remarks: η ∈ (0, 1/3] is used to trade off completion time for accuracy.

Ref. [4]. Approximation: δ ≤ δ̂ ≤ 2δ. Undirected: yes; directed: no. Steps: O(δn). Memory: O(|N_i^in| log n). Bandwidth: O(log n).

Proposed approach. Approximation: exact. Undirected: yes; directed: yes. Steps: O(n^2) for directed graphs, O(δn) for undirected graphs. Memory: O(|N_i^in| log n). Bandwidth: O(log n).

In [14] a procedure based on max-consensus [2] is used to obtain an approximation δ̂ of the diameter δ in O(n) steps; the accuracy, however, depends on a factor M which also influences the memory requirements (O(M|N_i^in| log n) bits for each agent) and the bandwidth (O(M log n) bits per link per time step). In [7] an algorithm for the exact calculation of the eccentricity of each node and of the network diameter is provided. Specifically, the nodes exchange messages containing information on an estimate of the diameter, on the hop counts and on the identifier of the sender, following a flooding approach. The algorithm requires a bandwidth of O(n log n) bits per link per step and terminates in O(δ) steps. As for the memory resources, at each step the agents need to store the identifiers associated with the messages received in the previous 2 steps; hence it requires storage of O(n) identifiers (i.e., O(n log n) bits of memory occupancy). In [15] the authors develop an algorithm that provides an approximation δ̂ of the diameter δ such that δ ≤ δ̂ ≤ (3/2 + η)δ, where η ∈ (0, 1/3] is a parameter that constitutes a trade-off between accuracy and completion time. Each node can transmit a different message of O(log n) bits to each of its neighbors in a synchronous way, and the graph is assumed to be undirected. The number of rounds for the algorithm to terminate is O(√(n log n/(δη)) + δ).

In [4] we develop an algorithm that provides an upper bound on the diameter in O(δn) steps; this upper bound is guaranteed to be at most twice the actual diameter. The memory requirement for this algorithm is O(|N_i^in| log n) bits per node and the bandwidth is O(log n) bits per link per step.

In this paper, we provide a distributed algorithm that computes the exact value of the eccentricities, the graph radius and the graph diameter, both in the undirected and directed graph cases. The proposed algorithm maintains low bandwidth (i.e., O(log n) bits per link per step) and memory (i.e., O(|N_i^in| log n) bits per node). Moreover, in the proposed approach each node only has to broadcast its messages, without any knowledge of its neighbors and without requiring point-to-point communication capability. In a nutshell, the algorithm calculates the eccentricities of the nodes in a sequential way, resorting to max-consensus algorithms [2]. Notice that the usage of max-consensus provides increased robustness to transmission failures such as packet loss (although mainly related to the average consensus case, some hints




on this issue can be found in [16-18]), compared to more fragile approaches like the one in [13]. As for the completion time, the proposed algorithm terminates in O(δn) steps in the connected and undirected graph case and in O(n^2) steps in the strongly connected and directed graph case. Table 1 summarizes the comparison between the proposed approach and the state of the art.

The proposed algorithm relies on successive runs of the max-consensus algorithm, and on a novel algorithm to calculate the depth of each node over the minimum spanning tree rooted at a given node, which has a structure similar to max-consensus. This approach has several advantages in terms of memory, bandwidth and robustness when compared against previous approaches. The advantages are explained in detail later on, once we have the chance to more precisely describe the characteristics of the algorithm.

The outline of the paper is as follows: Section 2 collects some background material; Sections 3 and 4 develop our algorithms to calculate the eccentricity and the diameter (and radius), respectively; Section 5 contains simulations that illustrate the potential of the proposed approach. Finally, some conclusive remarks and future work directions are collected in Section 6.

2. Preliminaries

Let G = {V, E} be a graph with n nodes V = {v_1, v_2, ..., v_n} and e edges E ⊆ V × V, where (v_i, v_j) ∈ E captures the existence of a link from node v_i to node v_j. A graph is said to be undirected if (v_i, v_j) ∈ E whenever (v_j, v_i) ∈ E, and is said to be directed otherwise. A path over a graph G = {V, E}, starting from a node v_i ∈ V and ending in a node v_j ∈ V, is a subset of links in E that connect v_i and v_j without creating loops. The length of the path is the cardinality of such a set. A graph is connected if for each pair of nodes v_i, v_j there is a path over G that connects them without necessarily respecting the edge orientation, while it is strongly connected if the path respects the orientation of the edges. It follows that every undirected connected graph is also strongly connected. A minimum path that connects v_i and v_j is a path from v_i to v_j of minimum length. A minimum spanning tree rooted at a node v_i ∈ V is a tree (i.e., an acyclic connected subgraph of G with n − 1 links, where n is the number of nodes) that connects v_i to each other node via edges belonging to the minimum paths in G


from v_i to the other nodes. Notice that, if G is strongly connected, we are guaranteed that there is a minimum spanning tree rooted at each node v_i ∈ V.

The eccentricity ϵ_i of a node v_i ∈ V is the length of the maximum among the minimum paths connecting v_i to each other node. The diameter δ of a graph G is the maximum length among the minimum paths that connect each possible pair of distinct nodes v_i, v_j ∈ V. In other terms

δ = max_{i=1,...,n} {ϵ_i}.

The radius r of a graph G is defined as

r = min_{i=1,...,n} {ϵ_i}.

Let the in-neighborhood N_i^in of a node v_i be the set of nodes {v_j | (v_j, v_i) ∈ E}, and the out-neighborhood N_i^out the set of nodes {v_j | (v_i, v_j) ∈ E}. For undirected graphs N_i^in = N_i^out = N_i, where N_i is simply the neighborhood of node v_i. A node v_j ∈ V is an m-hop neighbor of a node v_i ∈ V if there is a minimum path of length m from v_i to v_j which respects the orientation of the edges.
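To make these definitions concrete, here is a minimal centralized Python sketch (ours, for illustration only; it is not the distributed protocol developed below) that computes eccentricities, diameter and radius by breadth-first search on an adjacency-list representation:

```python
from collections import deque

def eccentricities(adj):
    """Eccentricity of every node of a strongly connected directed graph,
    given as an adjacency list {node: [out-neighbors]}."""
    ecc = {}
    for s in adj:
        dist = {s: 0}                      # BFS from s gives shortest path lengths
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ecc[s] = max(dist.values())        # longest shortest path from s
    return ecc

adj = {1: [2], 2: [1, 3], 3: [1]}          # a small strongly connected digraph
ecc = eccentricities(adj)                  # {1: 2, 2: 1, 3: 2}
diameter, radius = max(ecc.values()), min(ecc.values())   # 2, 1
```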

2.1. Max-consensus

Suppose each node in a graph G represents an agent with an initial condition x_i(0) ∈ R. In the max-consensus problem, the nodes have to converge to the maximum of the initial conditions. Assuming the graph is connected (in the undirected graph case) or strongly connected (in the directed graph case), the max-consensus problem is known to have a solution in finite time [2] (specifically, in no more than δ steps, where δ is the network diameter) if each agent adopts the following update rule

x_i(k + 1) = max_{h ∈ N_i^in ∪ {i}} { x_h(k) }.    (1)

In the pseudocode of the algorithms presented later in this paper, we use the notation x̄_i = max-consensus_i(x_1(0), ..., x_n(0); k) to represent the fact that the agents execute (in a distributed manner) a max-consensus procedure for k time steps and that each agent v_i selects x_i(0) as its initial condition.¹

¹ The value of x̄_i is the value that agent v_i has at the end of the execution at time step k.
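A minimal Python sketch of update rule (1), assuming synchronous rounds and reliable links (the function and variable names are ours, for illustration):

```python
def max_consensus(x0, in_neighbors, k):
    """Run k synchronous rounds of update rule (1):
    x_i(t+1) = max over h in N_i^in ∪ {i} of x_h(t).
    x0: dict node -> initial value; in_neighbors: dict node -> in-neighbor list."""
    x = dict(x0)
    for _ in range(k):
        # All agents update simultaneously from the previous round's values.
        x = {i: max([x[i]] + [x[h] for h in in_neighbors[i]]) for i in x}
    return x

# On the directed 3-cycle 1 -> 2 -> 3 -> 1 (δ = 2), all nodes hold the
# maximum of the initial conditions after δ rounds.
in_neighbors = {1: [3], 2: [1], 3: [2]}
print(max_consensus({1: 5.0, 2: -1.0, 3: 2.0}, in_neighbors, 2))
# {1: 5.0, 2: 5.0, 3: 5.0}
```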

2.2. Leader election

The max-consensus algorithm can be used to elect a leader among the nodes in the network. Suppose each node v_i has a unique identifier ID_i, represented by a real or integer number, and that it knows an upper bound ñ on the number n of nodes. If the graph is strongly connected, then executing ĪD_i = max-consensus_i(ID_1, ..., ID_n; ñ) implies that ĪD_i = max_{j=1,...,n} {ID_j} for all agents v_i; hence the node with ID_i = ĪD_i is elected as leader.
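Reusing the max_consensus sketch above, leader election reduces to one run of max-consensus over the identifiers; the tiny 3-node example is illustrative, not taken from the paper.

```python
ids = {1: 17, 2: 42, 3: 8}                 # unique identifiers ID_i
in_neighbors = {1: [3], 2: [1], 3: [2]}    # directed 3-cycle, δ = 2
n_tilde = 3                                # known upper bound on n (hence on δ)

winner = max_consensus(ids, in_neighbors, n_tilde)
# Every node now holds max_j ID_j = 42; node 2 recognizes itself as the leader.
leader = [i for i in ids if ids[i] == winner[i]]   # -> [2]
```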

3. Distributed eccentricity calculation

In this section we develop a distributed algorithm that, assuming the graph G is directed and strongly connected (or undirected and connected), elects a leader node v_i* and lets each node v_i calculate the eccentricity ϵ_i* of node v_i*. To this end, let us introduce an algorithm to calculate the depth Δ_i of each node v_i over the minimum spanning tree rooted at the leader node v_i*. This algorithm is used as a subroutine of the main algorithm presented in this section. We can state the following theorem. Notice that, for the sake of generality, we make weaker assumptions on G than what we do later in this section.

Theorem 1. Let Δ_i(0) = ∞ for all nodes v_i except for the leader node v_i*, for which Δ_i*(0) = 0. If each node executes the following update rule

Δ_i(k + 1) = min{ Δ_i(k), min_{h ∈ N_i^in} { Δ_h(k) + 1 } }    (2)

and G contains a (directed or undirected) minimum spanning tree rooted at the leader node, then each node v_i converges to its depth over the minimum spanning tree in at most δ steps, where δ is the network diameter.

Proof. Let us prove by induction that, for all integers h ≥ 0, the depth of the m-hop neighbors v_m ∈ V_m of node v_i* at time instant h is given by Δ_m(h) = m for all m = 0, ..., h, while the depth of all other nodes v_j ∈ V \ ∪_{l=0}^{h} V_l is equal to ∞. The statement holds true for h = 0 (in this case the only 0-hop neighbor of v_i* is v_i* itself). Let us suppose that the statement holds true for a given k ≥ 0. At time instant k + 1, by Eq. (2), all the m-hop neighbors v_m of node v_i* for m = 0, ..., k may only receive depths that are no smaller than m − 1 (otherwise they would not be m-hop neighbors of v_i*), and therefore their depth does not change; it can be shown that they receive depths in [m − 1, k] ∪ {∞} from their neighbors. As for the (k + 1)-hop neighbors v_i of node v_i*, they receive at least a value equal to k from some of their neighbors and they may receive +∞ from some others; hence they set Δ_i(k + 1) = k + 1. Notice that the m-hop neighbors for m > k + 1 only receive +∞, and hence they do not change their +∞ state, and the statement is proved. Since the maximum possible depth over the minimum spanning tree is δ, the proof is complete.
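Under the same synchronous-round assumption, a minimal Python sketch of update rule (2) follows (INF models the ∞ initialization; the names are ours, for illustration):

```python
INF = float('inf')

def spanning_tree_depth(leader, in_neighbors, k):
    """Run k synchronous rounds of update rule (2):
    Δ_i(t+1) = min{ Δ_i(t), min over h in N_i^in of Δ_h(t) + 1 }.
    After at most δ rounds each node holds its depth in the minimum
    spanning (BFS) tree rooted at the leader."""
    depth = {i: (0 if i == leader else INF) for i in in_neighbors}
    for _ in range(k):
        depth = {i: min([depth[i]] + [depth[h] + 1 for h in in_neighbors[i]])
                 for i in depth}
    return depth

in_neighbors = {1: [3], 2: [1], 3: [2]}          # directed 3-cycle 1 -> 2 -> 3 -> 1
print(spanning_tree_depth(1, in_neighbors, 2))   # {1: 0, 2: 1, 3: 2}
```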

In the pseudocode of the algorithm presented later in this paper, we use the notation Δ_i = spanning-tree-depth_i(Δ_1(0), ..., Δ_n(0); k, ID_i*) to represent the fact that each agent v_i selects Δ_i(0) as its initial condition and the agents calculate their depth over the minimum spanning tree rooted at the node whose identifier is ID_i*, by running the algorithm for k time steps.

Let us now present the Distributed Eccentricity Calculation (DEC) algorithm, which elects a leader node and calculates the eccentricity of the leader, assuming the graph G is strongly connected. The pseudocode of the DEC algorithm is shown in Algorithm 1. The DEC algorithm is composed of 3 subroutines, each executed for a fixed number k of steps.² Initially, a leader is elected via max-consensus, using the unique identifiers as initial conditions. Then, the eccentricity of the leader is calculated. This is done by first calculating the depth of each node in the spanning tree, using the spanning-tree-depth algorithm presented at the beginning of this section. Then, the eccentricity is calculated via a max-consensus algorithm, where each node chooses an initial condition equal to the depth calculated beforehand.

Proposition 1. If G is strongly connected and k ≥ δ, then Algorithm 1 succeeds in calculating the eccentricity of the leader node.

² Assuming all nodes know k, the agents are able to autonomously detect the termination of each of the 3 subroutines. The same remark applies in the following, where several successive runs of the DEC algorithm are considered.


Algorithm 1: Distributed Eccentricity Calculation (DEC) Algorithm.

Data: Number k of steps, unique identifier ID_i
Result: Eccentricity ϵ̄_i of the leader node, identifier ĪD of the leader

/* Initialization */
Δ_i = −∞; ϵ̄_i = −∞;
/* Select the maximum identifier */
ĪD = max-consensus_i(ID_1, ..., ID_n; k);
if ĪD > −∞ then
    /* Calculate the depth Δ_i over the minimum spanning tree rooted at the node with maximum identifier */
    Δ_i(0) = 0 if ID_i == ĪD, ∞ otherwise;
    Δ_i = spanning-tree-depth_i(Δ_1(0), ..., Δ_n(0); k, ĪD);
    ϵ̄_i = max-consensus_i(Δ_1, ..., Δ_n; k);
end
return ϵ̄_i, ĪD;

Proof. It is well known that, for a strongly connected graph G and for k ≥ δ, the max-consensus procedures succeed [2], and the same holds for the spanning-tree-depth algorithm. Hence, by Theorem 1, the result obtained is the eccentricity of the leader node.

Notice that, in the case of a directed strongly connected graph or a connected undirected graph, the conditions required by the above proposition hold true for any node in the network. It can be noted that the above algorithm requires O(|N_i^in| log n) memory at each node v_i and O(log n) bandwidth per link per step. This happens because each node stores, at each time step, the depth/identifier received from its neighbors, and at each time step each link is used to transmit just one depth/identifier.
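Combining the three subroutines, a hedged Python sketch of one DEC run could read as follows; it reuses max_consensus and spanning_tree_depth from the sketches above, NEG_INF models −∞, and the single-process emulation of the distributed rounds is for compactness only.

```python
NEG_INF = float('-inf')

def dec(ids, in_neighbors, k):
    """One run of the DEC algorithm: elect a leader by max-consensus on the
    identifiers, compute every node's depth on the leader's tree, then spread
    the maximum depth (the leader's eccentricity) by a final max-consensus.
    Each subroutine lasts k rounds, which suffices whenever k >= δ."""
    top = max_consensus(ids, in_neighbors, k)        # phase 1: leader election
    probe = next(iter(ids))                          # any node; all agree after k >= δ rounds
    if top[probe] == NEG_INF:
        return NEG_INF, NEG_INF                      # all identifiers already retired
    leader = next(i for i in ids if ids[i] == top[probe])
    depth = spanning_tree_depth(leader, in_neighbors, k)   # phase 2: depths Δ_i
    ecc = max_consensus(depth, in_neighbors, k)      # phase 3: leader's eccentricity
    return ecc[probe], top[probe]
```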

Remark 1. If the graph G is undirected and connected, and ϵ_i is the eccentricity of any node v_i ∈ V, then 2ϵ_i is an upper bound on the network diameter δ; this holds true because any node is connected to any other node by means of a path over the minimum spanning tree rooted at the leader node, which has length at most 2ϵ_i. Thus, in the undirected graph case, a bound on the diameter of the network is given by 2ϵ*, where ϵ* is the value provided by the DEC algorithm.

On the other hand, we can provide the following result for the directed graph case.

Proposition 2. Let G be a strongly connected directed graph and let MST_i be the minimum spanning tree rooted at node v_i. If we consider the subset of nodes V_M ⊆ V of the leaves of MST_i (i.e., the nodes whose out-neighborhood over MST_i is empty), it holds that

δ ≤ ϵ_i + max_{v_j ∈ V_M} {ϵ_j}.

Proof. Let us consider the path over G from a generic node v_h ∈ V to a generic node v_k ∈ V. Node v_h lies on a path from v_i to a node v_j ∈ V_M, otherwise MST_i would not be a spanning tree. Moreover, there is a path in G from a node v_j ∈ V_M to node v_k, otherwise G would not be strongly connected. There is, therefore, a path over G from v_h to v_k whose length is smaller than or equal to ϵ_i + max_{v_j ∈ V_M} {ϵ_j}. Such a path must be longer than or equal to the minimum path between v_h and v_k. Since the above statement holds true for any v_h, v_k in V, the proof is complete.


Notice that the above bound is quite hard to obtain in a distributed way over a directed graph, because it is a global bound, in that it depends on the eccentricities of other nodes, which are not known by node v_i. Based on this argument, we show in Section 4.1 that the distributed diameter calculation for a directed graph has increased complexity with respect to the undirected graph case. In the latter case, we can exploit the upper bound discussed in Remark 1 to significantly reduce the number of steps required to calculate the network diameter, as clarified in the next section.

4. Distributed diameter and radius calculation

Let us develop an algorithm for the exact calculation of the diameter and the radius of directed and undirected graphs in a fully distributed manner, using successive executions of Algorithm 1. We first discuss the strongly connected directed graph case; then we provide an even more efficient version of the algorithm for the connected undirected graph case.

4.1. Strongly connected directed graphs

In the case of a strongly connected directed graph, the pseudocode of the algorithm is shown in Algorithm 2. The algorithm executes the DEC algorithm (Algorithm 1) n times, and each time the selected leader sets its identifier to −∞, effectively removing itself from successive leader election phases. The agents calculate the maximum (diameter) and the minimum (radius) among the eccentricities found by iterating the DEC algorithm. Let us provide the following result.

Proposition 3. Let us assume that G is strongly connected, the nodes have a unique identifier, and they know an upper bound ñ on n. Algorithm 2 calculates the diameter and the radius exactly, and it requires T = (3n + 1)ñ steps.

Proof. Algorithm 2 calculates the diameter of the network as the maximum among the eccentricities of the nodes, where for each node the eccentricity is calculated via the DEC algorithm. Since this calculation is done for all the nodes, Algorithm 2 finds the exact diameter. The same argument applies for the radius. As for the steps required to terminate, Algorithm 2 executes each run of the DEC algorithm for ñ steps, and the DEC algorithm is executed exactly n times, as after n executions all the identifiers have been set to −∞. Notice that, in order to detect the termination of the algorithm, we need ñ more steps, as we need to run a max-consensus one last time to detect that the maximum of the identifiers is −∞.

The bound in Proposition 2 is hard to obtain in a distributed way; hence the agents need to execute each run of the DEC algorithm for ñ steps. The above algorithm, therefore, requires O(n^2) steps. We show in the next subsection that, in the case of connected undirected graphs, we can easily identify an upper bound on δ that yields a remarkable reduction in the number of steps. Moreover, the amount of memory required at each node is considerably small, being proportional to the number of neighbors (i.e., O(|N_i^in| log n) bits of memory occupation are required at each node). As for the bandwidth required for each link at each time step, it can be noted that, since the information exchanged involves just messages containing the identifier or the depth of a node, O(log n) bits are transmitted over each link at each time step. Notice that other implementations, like [7], require higher bandwidth, because at each step each node transmits information about a portion of the spanning tree to its neighbors.

Finally, it can be noted that, as a by-product of Algorithm 2, the agents are able to calculate the number n of nodes in the network by counting the executions of the DEC algorithm.


Algorithm 2: Distributed Diameter and Radius Calculation (DRC) Algorithm for strongly connected directed graphs.

Data: Upper bound ñ on the number of agents n, unique identifier ID_i for each agent
Result: Network diameter δ and radius r

ĪD = 0; ϵ_M = 0; ϵ_m = ∞;
/* Calculate the eccentricity of all the nodes */
while ĪD > −∞ do
    /* Elect a leader and calculate its eccentricity */
    [ϵ̄_i, ĪD] = DEC(ñ, ID_i);
    if ĪD > −∞ then
        ϵ_M = max{ϵ_M, ϵ̄_i}; ϵ_m = min{ϵ_m, ϵ̄_i};
        /* Set the identifier to −∞ if i is the current leader */
        if ID_i == ĪD then ID_i = −∞; end
    end
end
δ = ϵ_M; r = ϵ_m;
return δ, r;
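A hedged Python sketch of Algorithm 2's outer loop, reusing the dec helper from the earlier sketch (again, a centralized emulation of the distributed rounds; the names are ours):

```python
INF = float('inf')   # as defined with the spanning-tree-depth sketch

def drc_directed(ids, in_neighbors, n_tilde):
    """Sketch of Algorithm 2: run DEC with an n_tilde round budget per phase,
    retire each elected leader, and track the extreme eccentricities seen."""
    ids = dict(ids)                                  # local copy; IDs get overwritten
    ecc_max, ecc_min = 0, INF
    while True:
        ecc, top_id = dec(ids, in_neighbors, n_tilde)
        if top_id == NEG_INF:                        # no identifier left: terminate
            break
        ecc_max, ecc_min = max(ecc_max, ecc), min(ecc_min, ecc)
        leader = next(i for i in ids if ids[i] == top_id)
        ids[leader] = NEG_INF                        # the leader retires its identifier
    return ecc_max, ecc_min                          # diameter δ, radius r
```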

4.2. Undirected graphs

In the case of a connected undirected graph, the pseudocode of the algorithm is shown in Algorithm 3. The algorithm is analogous to the directed case, except for the fact that, in the undirected case, the eccentricity ϵ_i calculated at each run of the DEC algorithm is such that δ ≤ 2ϵ_i. In the proposed Algorithm 3, therefore, each agent initially executes the DEC algorithm for ñ steps, and then it stores the minimum ϵ_m among the eccentricities calculated so far, so that the next execution of the DEC algorithm lasts just 2ϵ_m steps (or ñ, if smaller). We can provide the following result.

Proposition 4. Let us assume that G is undirected and connected, the nodes have a unique identifier, and they know an upper bound ñ on n. Algorithm 3 calculates the diameter and the radius exactly, and it requires T ≤ 3ñ + (6n − 4)δ steps.

Algorithm 3: Distributed Diameter and Radius Calculation (DRC) Algorithm for undirected connected graphs.

Data: Upper bound ñ on the number of agents n, unique identifier ID_i
Result: Network diameter δ, network radius r

/* Calculate the first eccentricity */
[ϵ̄_i, ĪD] = DEC(ñ, ID_i);
/* Initialize the minimum and maximum eccentricities to ϵ̄_i */
ϵ_M = ϵ̄_i; ϵ_m = ϵ̄_i;
while ĪD > −∞ do
    /* Set the identifier to −∞ if i is the current leader */
    if ID_i == ĪD then ID_i = −∞; end
    /* Elect a new leader and calculate its eccentricity, using 2ϵ_m as the new upper bound */
    [ϵ̄_i, ĪD] = DEC(min{ñ, 2ϵ_m}, ID_i);
    if ĪD > −∞ then
        /* Update the minimum and maximum eccentricities */
        ϵ_M = max{ϵ_M, ϵ̄_i}; ϵ_m = min{ϵ_m, ϵ̄_i};
    end
end
δ = ϵ_M; r = ϵ_m;
return δ, r;

Proof. Algorithm 3 calculates the diameter of the network as the maximum among the eccentricities of the nodes, where for each node the eccentricity is calculated via the DEC algorithm. Since this calculation is done for all the nodes, Algorithm 3 finds the exact diameter. The same argument applies for the radius. As for the steps required to terminate, Algorithm 3 executes the DEC algorithm exactly n times, as after n executions all the identifiers have been set to −∞. Notice that the first run of the DEC algorithm is executed for ñ steps while, in order to save computational time, the information obtained at each execution of the DEC algorithm is used to reduce the number of steps of the next execution, by exploiting the upper bound discussed in Remark 1. Specifically, the agents maintain a variable ϵ_m(j) containing the minimum among the eccentricities found by the previous j runs of the DEC algorithm, so that 2ϵ_m(j) is an upper bound on δ. In other words, each run of the DEC algorithm after the first one is executed only for 2ϵ_m(j) steps instead of ñ steps (unless ñ is smaller). Notice that, after the first n runs are done, 2ϵ_m(n) more steps (or ñ, if smaller) are required to attempt to elect a leader when all identifiers are equal to −∞. The DEC algorithm, therefore, is executed exactly n + 1 times (although, during the last execution, all the identifiers are equal to −∞ and only the first max-consensus procedure within the DEC algorithm is executed); hence all minimum spanning trees are inspected and the maximum among the eccentricities is selected, which coincides with the diameter. The key to the reduction in complexity is the fact that, after each phase, the updated bound on the diameter is used to reduce the number of steps of subsequent phases. As for the number of steps required to terminate, it can be shown that the algorithm terminates in

T = 3ñ + 3 · Σ_{j=1}^{n−1} min{ñ, 2ϵ_m(j)} + min{ñ, 2ϵ_m(n)} ≤ 3ñ + (6n − 4)δ

steps.

The above algorithm, therefore, requires O(δn) steps to terminate. However, as shown in Section 5, this bound is quite conservative, and the actual number of steps required is remarkably smaller. Notice that, also in this case, the memory and bandwidth requirements are the same as in the directed graph case, and the agents are able to calculate the number n of nodes in the network by counting the executions of the DEC algorithm.
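For completeness, a matching Python sketch of Algorithm 3, where every DEC run after the first uses the tightened round budget min{ñ, 2ϵ_m} from Remark 1 (same assumptions and helpers as in the previous sketches):

```python
def drc_undirected(ids, in_neighbors, n_tilde):
    """Sketch of Algorithm 3: as in the directed case, but after the first run
    each DEC lasts only min(n_tilde, 2*ecc_min) rounds, since twice the smallest
    eccentricity found so far upper-bounds the diameter of an undirected graph."""
    ids = dict(ids)
    ecc, top_id = dec(ids, in_neighbors, n_tilde)     # first run: full budget
    ecc_max = ecc_min = ecc
    while top_id != NEG_INF:
        leader = next(i for i in ids if ids[i] == top_id)
        ids[leader] = NEG_INF                         # retire the current leader
        budget = min(n_tilde, 2 * ecc_min)            # bound from Remark 1
        ecc, top_id = dec(ids, in_neighbors, budget)
        if top_id != NEG_INF:
            ecc_max, ecc_min = max(ecc_max, ecc), min(ecc_min, ecc)
    return ecc_max, ecc_min                           # diameter δ, radius r
```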

5. Simulations

Fig. 1 shows an example of the execution of Algorithm 1 over an undirected graph with n = 20 nodes and δ = 5.


Fig. 1. Example of distributed eccentricity calculation using Algorithm 1 over an undirected graph with n = 20 nodes and δ = 5.

Fig. 2. Example of distributed diameter calculation using Algorithm 3 over two undirected graphs with n = 14 nodes and ñ = 14. In the upper plots δ = 10, while in the lower plots δ = 2. We show in gray the trajectories of the agents and by red triangles the current estimate of the diameter.

Choosing ñ = 30, Algorithm 1 elects as leader the node v_i* shown in white, and requires T = 90 steps to calculate the eccentricity ϵ_i* = 4 of the leader node. The three phases of the algorithm (leader election, depth calculation over the minimum spanning tree rooted at the leader node, and eccentricity calculation) are delimited by black dotted vertical lines.

In Fig. 2 we compare the results of Algorithm 3 over two graphs with n = 14 nodes, assuming ñ = n. The topology is shown in the left plots and the trajectory of each agent is shown in the right plots. The red triangles in the right plots show the current estimate of δ during the evolution of the algorithm, while the state of the different agents during the different phases of Algorithm 3 is reported in gray. The upper plots show the results for a graph with δ = 10, while the lower plots consider a graph with δ = 2. In the first case, the algorithm calculates the diameter in T = 505 steps, while T = 197 steps are required in the second case. Notice that the calculation of the upper bound on the diameter δ via the result of Remark 1 saves a considerable number of steps when the diameter of the network is small. In both cases, in fact, Algorithm 3 executed without calculating an upper bound for δ would converge in T = (3n + 1)ñ = 602 steps. This value is comparable to the number of steps required by the first example, because δ is close to n. In the second case, conversely, δ is considerably smaller than n and we save about 67% of the iterations.

In Fig. 3 we show the results of Algorithm 2 over a strongly connected directed graph with n = 20 nodes and δ = 10.

The graph has both bidirectional and oriented edges (the oriented edges are reported with black arrows, while the blue edges are bidirectional). In this case, we cannot rely on an upper bound for δ; hence the algorithm requires T = (3n + 1)ñ = 1220 steps to terminate. Notice that, for an undirected graph with the same topology, Algorithm 3 terminates in T = 470 steps, i.e., the directed case requires about 2.6 times as many steps.

In Fig. 4 we plot the completion time T of Algorithm 3 with respect to the number of nodes for several undirected networks; the results are the average over 100 runs. Specifically, we consider a Watts-Strogatz small-world network with 3 links per node and rewiring probability 30%, a Barabási-Albert scale-free network with 3 preferential attachments per node, and a unit disk graph where the nodes are generated at random in the unit square and are connected if their Euclidean distance is less than a radius ρ (the particular value chosen is ρ = 2√(1.44/n), as suggested in [19], to guarantee, with high probability, that the graph is connected). In the figure we also consider a star topology, and we plot the number of steps required when the bound on the diameter is not used to reduce the steps (e.g., for directed graphs). From the figure, it is evident that there is a relevant reduction in steps when such a bound is used: for n = 1000, the algorithm requires about 3 × 10^6 steps to terminate if the bounds are not used, while the upper bound identified in Proposition 4 is around 1.36 × 10^5 steps for


Fig. 3. Example of distributed diameter calculation using Algorithm 2 over a directed graph. The graph is composed of n = 20 nodes in random positions in the unit square, and a link is created provided that the Euclidean distance is less than ρ = 0.4. We choose ñ = n. We show in gray the trajectories of the agents and by red triangles the current estimate of the diameter.

Fig. 4. Upper plot: completion time depending on the number of nodes for several topologies. Lower plots: distribution of the eccentricities of the nodes for particular instances of unit disk graph, small world and scale-free topologies with n = 1000 nodes.

the unit disk graph (i.e., about 96% reduction), about 6.89 × 10^4 steps for the small-world network (i.e., about 97.7% reduction) and about 3.89 × 10^4 steps for the scale-free network (i.e., 98.7% reduction). Notice that the star topology, for n = 1000, requires about 1.49 × 10^4 steps to terminate. It can also be noted that the actual number of steps required to terminate is remarkably smaller than the upper bound identified in Proposition 4; for n = 1000 the average of the actual number of steps is 7.62 × 10^4 for the unit disk graph (i.e., 43.9% reduction with respect to the bound), 5.19 × 10^4 for the small-world network (i.e., 24.6% reduction) and 2.83 × 10^4 for the scale-free network (i.e., 27.2% reduction).

The differences in step reduction for the above topologies can be better understood considering the lower plots in Fig. 4, where we show the distribution of the eccentricities for particular instances of unit disk graph, small world and scale-free topologies, with n = 1000 nodes. It can be noted that, while in the small-world and scale-free cases the eccentricities are quite close to the diameter and vary in a small range (between 4 and 6 for the scale-free topology and between 8 and 10 for the small-world case), in the unit disk graph case there is much more variability (the diameter is 23, but the eccentricities vary from 12 to 23), hence the bound given in Proposition 4 is less sharp in this case.


6. Conclusions and future work

In this paper we provide novel distributed algorithms to exactly calculate the eccentricities of the nodes, the radius and the diameter of the network, both in the case of undirected and directed graphs. Such algorithms take advantage of the robustness and versatility of the max-consensus algorithm to calculate these quantities with low memory (i.e., each node has a memory occupation proportional to the number of its neighbors) and low bandwidth (i.e., O(log n) bits per link per step). The case of directed graphs appears particularly challenging; in fact, while in the undirected case it is possible to find a bound on δ that limits the number of steps, it is not trivial to do the same for directed graphs. Future work will be aimed at providing a real-world implementation, at using the insights obtained to optimize finite-time consensus algorithms, and at identifying local leaders to be used for the coordination of the agents. We will also investigate potential distributed mechanisms to obtain an upper bound on the diameter of a directed graph in reasonable time. A last foreseen work direction is to provide distributed stopping criteria to obtain a good upper bound on the network diameter in a reduced number of steps.

References

[1] J. Shun, An evaluation of parallel eccentricity estimation algorithms on undirected real-world graphs, in: ACM Conference on Knowledge Discovery and Data Mining, KDD, 2015, pp. 1095-1104.
[2] H.-L. Choi, L. Brunet, J.P. How, Consensus-based decentralized auctions for robust task allocation, IEEE Trans. Robot. 25 (4) (2009) 912-926.
[3] R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays, IEEE Trans. Automat. Control 49 (9) (2004) 1520-1533.
[4] G. Oliva, R. Setola, C.N. Hadjicostis, Distributed finite-time average-consensus with limited computational and storage capability, IEEE Trans. Control Netw. Syst., http://dx.doi.org/10.1109/TCNS.2016.2524983 (in press).


[5] S. Lee, E.M. Belding-Royer, C.E. Perkins, Scalability study of the ad hoc on-demand distance vector routing protocol, Int. J. Netw. Manage. 13 (2) (2003) 97-114.
[6] E. Korach, D. Rotem, N. Santoro, Distributed algorithms for finding centers and medians in networks, ACM Trans. Program. Lang. Syst. 6 (3) (1984) 380-401.
[7] P.S. Almeida, C. Baquero, A. Cunha, Fast distributed computation of distances in networks, in: Proceedings of the 51st IEEE Conference on Decision and Control, 2012, pp. 5215-5220.
[8] N.A. Lynch, Distributed Algorithms, Morgan Kaufmann, 1996.
[9] B. Awerbuch, Optimal distributed algorithms for minimum weight spanning tree, counting, leader election and related problems (detailed summary), in: Proceedings of the 19th Annual ACM Symposium on Theory of Computing, 1987, pp. 230-240.
[10] S. Kanchi, D. Vineyard, An optimal distributed algorithm for all-pairs shortest-path, Int. J. Inf. Theor. Appl. 11 (2) (2004) 141-146.
[11] D. Nanongkai, Distributed approximation algorithms for weighted shortest paths, in: Proceedings of the 46th Annual ACM Symposium on Theory of Computing, 2014, pp. 565-573.
[12] S. Haldar, An "All Pairs Shortest Paths" distributed algorithm using 2n^2 messages, J. Algorithms 24 (1) (1997) 20-36.
[13] D. Peleg, L. Roditty, E. Tal, Distributed algorithms for network diameter and girth, in: Automata, Languages, and Programming, Springer, 2012, pp. 660-672.
[14] F. Garin, D. Varagnolo, K.H. Johansson, Distributed estimation of diameter, radius and eccentricities in anonymous networks, in: Proceedings of the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2012, pp. 13-18.
[15] S. Holzer, D. Peleg, L. Roditty, R. Wattenhofer, Brief announcement: Distributed 3/2-approximation of the diameter, in: Proceedings of the 28th International Symposium on Distributed Computing, DISC 2014, 2014, pp. 562-564.
[16] F. Fagnani, S. Zampieri, Average consensus with packet drop communication, SIAM J. Control Optim. 48 (1) (2009) 102-133.
[17] S. Patterson, B. Bamieh, A. El Abbadi, Convergence rates of distributed average consensus with stochastic link failures, IEEE Trans. Automat. Control 55 (4) (2010) 880-892. http://dx.doi.org/10.1109/TAC.2010.2041998.
[18] C.N. Hadjicostis, N.H. Vaidya, A.D. Domínguez-García, Robust distributed average consensus via exchange of running sums, IEEE Trans. Automat. Control (2015), http://dx.doi.org/10.1109/TAC.2015.2471695 (in press).
[19] M. Penrose, Random Geometric Graphs, Oxford University Press, Oxford, 2003.