Artificial bee colony algorithm using problem-specific neighborhood strategies for the tree t-spanner problem


Accepted Manuscript
Title: Artificial Bee Colony Algorithm using Problem-Specific Neighborhood Strategies for the Tree t-Spanner Problem
Authors: Kavita Singh, Shyam Sundar
PII: S1568-4946(17)30627-0
DOI: https://doi.org/10.1016/j.asoc.2017.10.022
Reference: ASOC 4517

To appear in: Applied Soft Computing
Received date: 4-4-2017
Revised date: 21-8-2017
Accepted date: 9-10-2017

Please cite this article as: Kavita Singh, Shyam Sundar, Artificial Bee Colony Algorithm using Problem-Specific Neighborhood Strategies for the Tree t-Spanner Problem, (2017), https://doi.org/10.1016/j.asoc.2017.10.022. This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Kavita Singh, Shyam Sundar∗


Department of Computer Applications, National Institute of Technology Raipur Raipur 492010, India

Abstract


A tree t-spanner of a connected graph is a spanning tree T in which the distance between every pair of vertices is at most t times their shortest distance in the graph, where t is a value called the stretch factor of T. On a given connected, undirected, and weighted graph, this paper studies the tree t-spanner problem (Tree t-SP), which aims to find a spanning tree whose stretch factor is minimum amongst all spanning trees of the graph. Being NP-Hard for any fixed t > 1, this problem is under-studied in the domain of metaheuristic techniques; in the literature, only a genetic algorithm has been proposed for it. This paper presents an artificial bee colony (ABC) algorithm for this problem, where ABC algorithm is a swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. The neighborhood strategies of the ABC algorithm employ problem-specific knowledge, which makes the algorithm highly effective at finding high quality solutions in less computational time. Computational experiments on a large set of randomly generated graph instances exhibit the superior performance of the ABC algorithm over the existing genetic algorithm for the Tree t-SP.


Key words: Tree Spanner, Weighted Graph, Problem-Specific Knowledge, Swarm Intelligence, Artificial Bee Colony Algorithm

1. Introduction

Given a connected graph G, a spanning tree T is called a tree t-spanner if the distance between every pair of vertices, say x and y, in T is at most t times their shortest distance in G; t is a value called the stretch factor of T. The stretch factor is the maximum stretch taken over all pairs of vertices in G, where the stretch of a pair of vertices x and y in T is the ratio of the distance between x and y in T to their shortest distance in G. The tree t-spanner problem seeks to find a spanning tree in G whose stretch factor (t) is minimum amongst all spanning trees in G.

∗Corresponding author. Email addresses: [email protected] (Kavita Singh), [email protected] (Shyam Sundar)

Preprint submitted to Applied Soft Computing

October 20, 2017



The term t-spanner in tree t-spanner was introduced by Peleg and Ullman [10]. A t-spanner of a graph is a spanning subgraph in which the distance between every pair of vertices is at most t times their shortest distance in the graph. Peleg and Ullman demonstrated that spanners can be utilized to construct synchronizers for transforming synchronous algorithms into asynchronous ones [10]. Moreover, the property of spanners of approximating the pairwise vertex-to-vertex distances of the original graph by a spanning subgraph gives spanners practical relevance in areas such as communication networks, distributed systems, motion planning, network design, and parallel machine architectures [2, 3, 4, 16, 20, 21].

In a connected graph, a spanning subgraph T is a tree t-spanner iff T is both a t-spanner and a tree. Every spanning tree of a connected graph is a tree spanner, where a tree spanner is a tree t-spanner for some t ≥ 1. A tree spanner with a small stretch factor has practical relevance and can be utilized to perform multi-source broadcast in a network [3], which can significantly simplify message routing at the cost of only a small delay in message delivery.

Cai and Corneil [7] studied graph-theoretic, algorithmic, and complexity questions for tree spanners of edge-weighted and unweighted connected graphs. They proved that on edge-weighted graphs a tree 1-spanner, if one exists, is a minimum spanning tree and can be obtained in polynomial time. They also proved that on edge-weighted graphs, for any fixed t > 1, the problem of finding a tree t-spanner is NP-Hard, whereas on unweighted graphs the tree t-spanner problem is NP-Hard for any fixed integer t ≥ 4. For connected, undirected and unweighted graphs, the tree t-spanner problem is known in the literature as the minimum max-stretch spanning tree problem [10].
Most of the work in the literature has been devoted to developing approximation algorithms for some specific kinds of unweighted graphs [6, 19, 34]; however, such special graphs cannot be guaranteed in every situation. This paper concerns the tree t-spanner problem for connected, undirected and edge-weighted graphs, hereafter referred to as Tree t-SP. Since Tree t-SP is NP-Hard for t > 1, researchers look towards metaheuristic approaches, which can find high quality solutions in a reasonable time. In the literature, to the best of our knowledge, only one such metaheuristic approach, a genetic algorithm (GA) [17], has been proposed, and only recently. To develop a more efficient metaheuristic approach for Tree t-SP, we propose an ABC algorithm. ABC algorithm [13, 14] is a swarm intelligence technique inspired by the intelligent foraging behavior of honey bees and has shown its capability to find high quality solutions in reasonable time for many optimization problems. In our proposed ABC algorithm for Tree t-SP, problem-specific knowledge is employed in the neighborhood strategies, which makes the algorithm highly effective at finding high quality solutions in less computational time. On a large set of randomly generated graph instances, computational experiments show the superiority of the ABC algorithm over the existing GA [17] for the Tree t-SP.

The rest of this paper is organized as follows. Section 2 describes the Tree t-SP in detail. Section 3 and Section 4 present, respectively, a brief introduction to ABC algorithm and the ABC algorithm for the Tree t-SP. Computational results are reported in Section 5. Finally, Section 6 contains some concluding remarks.


2. Tree t-SP

Given a connected, edge-weighted and undirected graph G(V, E, w), where V is the set of vertices, E is the set of edges, and each edge has a weight that is a positive rational number, the tree t-spanner problem (Tree t-SP) aims to find a spanning tree of G whose stretch factor (t) is minimum amongst all spanning trees of G. For a spanning tree (say T) of G, the stretch factor t of T is the maximum stretch taken over all pairs of vertices in G, where the stretch of a pair of vertices u and v in T is dT(u, v)/dG(u, v); dT(u, v) is the distance between u and v in T, and dG(u, v) is the shortest distance between u and v in G.

Figure 1: A graph (G) and a spanning tree (T) of G

u-v    dT(u,v)    dG(u,v)    dT(u,v)/dG(u,v)
0-1       3          3           1.00
0-2      10          6           1.66
0-3      30         12           2.50
0-4      22         12           1.80
0-5      13          3           4.30
0-6      14          2           7.00
1-2       7          7           1.00
1-3      27         13           2.07
1-4      19         15           1.26
1-5      10          6           1.66
1-6      11          5           2.20
2-3      20          6           3.33
2-4      12         12           1.00
2-5       3          3           1.00
2-6       4          4           1.00
3-4       8          8           1.00
3-5      17          9           1.88
3-6      18         10           1.80
4-5       9          9           1.00
4-6      10         10           1.00
5-6       1          1           1.00

Table 1: The stretch of each pair of vertices for T depicted in Figure 1-(b)

Figure 1-(a) presents a connected, undirected and edge-weighted graph G with 7 vertices and 8 edges, and Figure 1-(b) presents a spanning tree (say T) of G. Table 1 reports the stretch dT(u, v)/dG(u, v) of each pair of vertices u-v in T: the first column gives the pair of vertices u-v, the second column the distance dT(u, v) between u and v in T, the third column the shortest distance dG(u, v) between u and v in G, and the fourth column the stretch dT(u, v)/dG(u, v) of the pair. The maximum stretch taken over all pairs of vertices in T is 7.00, attained between vertex 0 and vertex 6 (see the 7th row of Table 1, and the thick grey edges between vertex 0 and vertex 6 in T depicted in Figure 1-(b)); this is the stretch factor t of T.
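As a quick check, the stretch factor of this example can be recomputed from the columns of Table 1 (a small Python sketch, not part of the paper):

```python
# Columns of Table 1, copied verbatim: vertex pairs, d_T(u,v) and d_G(u,v).
pairs = ["0-1", "0-2", "0-3", "0-4", "0-5", "0-6", "1-2", "1-3", "1-4", "1-5",
         "1-6", "2-3", "2-4", "2-5", "2-6", "3-4", "3-5", "3-6", "4-5", "4-6",
         "5-6"]
d_T = [3, 10, 30, 22, 13, 14, 7, 27, 19, 10, 11, 20, 12, 3, 4, 8, 17, 18, 9, 10, 1]
d_G = [3, 6, 12, 12, 3, 2, 7, 13, 15, 6, 5, 6, 12, 3, 4, 8, 9, 10, 9, 10, 1]

# stretch factor t = maximum of d_T(u,v)/d_G(u,v) over all pairs
stretches = [t / g for t, g in zip(d_T, d_G)]
t_factor = max(stretches)
worst_pair = pairs[stretches.index(t_factor)]
print(worst_pair, t_factor)   # the pair 0-6 attains the stretch factor 7.0
```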


3. Artificial Bee Colony (ABC) Algorithm


Artificial bee colony (ABC) algorithm, introduced by Karaboga [13, 14], is a swarm intelligence technique that has been applied successfully to a number of optimization problems. ABC algorithm is inspired by the foraging behavior of honey bees, where bees exploit a number of food sources and concentrate more intensively on richer ones. This idea is modeled in the ABC algorithm, where each food source exploited by a bee corresponds to a candidate solution of the problem under consideration. ABC algorithm mimics the exploitation of a food source in the so-called employed bees phase and onlooker bees phase by determining neighboring solutions of the current set of candidate solutions. If a candidate solution does not improve over a fixed number of successive iterations, called limit, a new candidate solution is generated by a so-called scout bee, which corresponds to the search for a new food source by real bees. Although ABC algorithm was initially developed for continuous optimization problems [13, 14], a number of modifications of the initial version have been proposed [15]. In the domain of combinatorial optimization, Singh [24] was the first to propose a variant of ABC algorithm applicable to subset selection problems. Pan et al. [18] proposed a variant suitable for permutation problems. Later, Sundar and Singh [29, 31] proposed variants suitable for grouping problems. Recent years have witnessed successful applications of ABC algorithm to various combinatorial optimization problems, such as single machine batch processing with non-identical job sizes [1], hybrid flow shop problems [8], the job-shop scheduling problem with no-wait constraint [33], construction of spanning trees in vehicular ad hoc networks [35], the cooperative maximum covering location problem [12], and ring loading problems [25].
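The employed-bee, onlooker-bee and scout cycle described above can be sketched as a generic minimization skeleton in Python (our own illustration with placeholder callables, not the authors' C implementation):

```python
import random

def abc_sketch(init, neighbor1, neighbor2, fitness, select,
               n_e=50, n_o=100, limit=100, p_nbr=0.95, generations=100):
    """Generic ABC skeleton (illustrative only).  `init` builds a random
    solution, `neighbor1`/`neighbor2` are two neighborhood strategies, and
    `select` picks an employed-bee index for an onlooker.  Lower fitness
    is better (for Tree t-SP, the fitness is the stretch factor)."""
    bees = [init() for _ in range(n_e)]
    fit = [fitness(b) for b in bees]
    fails = [0] * n_e
    best, best_fit = min(zip(bees, fit), key=lambda p: p[1])
    for _ in range(generations):
        # employed bees phase: each food source is exploited once
        for i in range(n_e):
            strat = neighbor1 if random.random() < p_nbr else neighbor2
            cand = strat(bees[i])
            cand_fit = fitness(cand)
            if cand_fit < fit[i]:
                bees[i], fit[i], fails[i] = cand, cand_fit, 0
            else:
                fails[i] += 1
                if fails[i] >= limit:      # scout bee: abandon and restart
                    bees[i] = init()
                    fit[i], fails[i] = fitness(bees[i]), 0
            if fit[i] < best_fit:
                best, best_fit = bees[i], fit[i]
        # onlooker bees phase: richer sources attract more onlookers
        for _ in range(n_o):
            i = select(bees, fit)
            strat = neighbor1 if random.random() < p_nbr else neighbor2
            cand = strat(bees[i])
            cand_fit = fitness(cand)
            if cand_fit < fit[i]:
                bees[i], fit[i], fails[i] = cand, cand_fit, 0
                if cand_fit < best_fit:
                    best, best_fit = cand, cand_fit
    return best, best_fit
```

The callables are deliberately abstract; Sections 4.2 to 4.5 describe the concrete components used for the Tree t-SP.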
Readers interested in a general introduction to ABC algorithm and its applications may refer to [13, 14, 15].

4. ABC Algorithm for the Tree t-SP

At the beginning of the ABC algorithm for the Tree t-SP, shortest paths between all pairs of vertices of a given G are computed. The following subsections describe the components of the ABC algorithm for the Tree t-SP.


4.1. Solution Encoding


Edge-set encoding [23] is used to represent each solution, as each solution is itself a spanning tree. This encoding stores the |V|−1 edges of the spanning tree. It offers high heritability and locality, and it can easily accommodate problem-specific knowledge.


4.2. Initial Solution Generation


Since each solution is a spanning tree T, a randomized version of Prim's algorithm [22] is applied to generate each initial solution in the population. The procedure is as follows: T, initially empty, starts from a root vertex (say r) chosen at random from V and grows until it spans all vertices of V; however, at each step, instead of adding a minimum-cost edge that connects a new vertex to the growing T, a random edge connecting a new vertex to the growing T is added. Each generated initial solution is associated with an employed bee.
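A randomized Prim construction of this kind can be sketched in Python (a simplified illustration with our own function names; the authors' implementation is in C):

```python
import random

def random_spanning_tree(adj, rng=random):
    """Grow a tree from a random root; at each step add a uniformly random
    frontier edge instead of the cheapest one.  `adj` maps each vertex to an
    iterable of its neighbours (the graph is assumed connected)."""
    vertices = list(adj)
    root = rng.choice(vertices)
    in_tree = {root}
    tree_edges = []
    frontier = [(root, v) for v in adj[root]]   # edges leaving the tree
    while len(in_tree) < len(vertices):
        u, v = frontier.pop(rng.randrange(len(frontier)))
        if v in in_tree:
            continue                            # stale edge: no longer leaves the tree
        in_tree.add(v)
        tree_edges.append((u, v))
        frontier.extend((v, w) for w in adj[v] if w not in in_tree)
    return tree_edges
```

Every constructed edge connects a new vertex to the growing tree, so the result always has |V|−1 edges and is acyclic.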


4.3. Fitness Computation

Once a spanning tree (or tree spanner) T is constructed, the fitness (i.e., the stretch factor t) of T is computed as the maximum stretch taken over all pairs of vertices in G, i.e.,

t = max over all pairs (u, v) of vertices in G of dT(u, v)/dG(u, v)    (1)


In order to compute the fitness of T, lowest common ancestors for all pairs of vertices in T, rooted at r, are preprocessed by applying the lowest common ancestor algorithm [11], where the lowest common ancestor LCA(u, v) of a pair of vertices u and v in T is their common ancestor located farthest from the root. In doing so, the query LCA(u, v) for any pair of vertices u and v in T is answered in constant time. The distance dT(u, v) between any two vertices u and v in T can then be computed as

dT(u, v) = dT(r, u) + dT(r, v) − 2 × dT(r, LCA(u, v))    (2)
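Equation (2) can be illustrated with a naive walk-up LCA; the paper instead preprocesses LCAs with the constant-time scheme of [11]. Function names and the parent/depth/dist_r representation below are our own:

```python
def lca(parent, depth, u, v):
    """Lowest common ancestor of u and v via walking up parent pointers.
    `parent` maps each non-root vertex to its parent; `depth` is the number
    of edges from the root r."""
    while depth[u] > depth[v]:
        u = parent[u]
    while depth[v] > depth[u]:
        v = parent[v]
    while u != v:
        u, v = parent[u], parent[v]
    return u

def d_T(dist_r, parent, depth, u, v):
    """Eq. (2): d_T(u, v) = d_T(r, u) + d_T(r, v) - 2 * d_T(r, LCA(u, v)).
    `dist_r[x]` is the weighted distance from the root r to x in T."""
    a = lca(parent, depth, u, v)
    return dist_r[u] + dist_r[v] - 2 * dist_r[a]
```

For example, in a tree rooted at 0 with edges 0-1 (weight 2), 0-2 (weight 3) and 1-3 (weight 4), the distance between 3 and 2 is 6 + 3 − 2·0 = 9, since their LCA is the root.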

4.4. Probability of Selecting a Solution

The binary tournament selection method is applied to select a solution for an onlooker bee: two solutions are picked uniformly at random from the population of employed bees; with probability Pbts, the one with better fitness is selected, otherwise the worse one is selected (with probability 1 − Pbts).
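A sketch of this selection rule (names such as `binary_tournament` and `p_bts` are ours):

```python
import random

def binary_tournament(population, fitness, p_bts=0.8, rng=random):
    """Pick two distinct solutions uniformly at random; return the fitter one
    with probability p_bts, otherwise the worse one.  Lower fitness (stretch
    factor) is better."""
    a, b = rng.sample(range(len(population)), 2)
    better, worse = (a, b) if fitness[a] <= fitness[b] else (b, a)
    chosen = better if rng.random() < p_bts else worse
    return population[chosen]
```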



4.5. Determination of a Neighboring Solution

The success of ABC algorithm for a particular problem relies heavily on the neighborhood strategy applied for determining a new solution in the neighborhood of a solution. It has been observed [30, 32] that an ABC algorithm that employs problem-specific knowledge in its neighborhood strategy is considerably more effective than a simple ABC algorithm that does not. For the Tree t-SP, a piece of problem-specific knowledge emerges from observing the structure of a tree t-spanner (spanning tree, or solution), say X: only the edges that lie between the pair of vertices, say v1 and v2, with maximum stretch over all pairs of vertices in X contribute to the fitness of X. Such edges form a set, say Ef, which is in general much smaller than the |V|−1 edges of X. This knowledge is employed in two neighborhood strategies that are applied in a mutually exclusive way in order to determine a new solution X′ in the neighborhood of solution X. The first neighborhood strategy, Dex-Iey, is based on knowledge-directed deletion of an edge of the current solution X and insertion of an edge taken from another solution Y in the population, whereas the second neighborhood strategy, Dex-IeG(E), is based on knowledge-directed deletion of an edge of the current solution X and insertion of an edge from the graph. Initially, a copy (say X′) of the current solution X is created. With probability Pnbr, Dex-Iey is applied; otherwise Dex-IeG(E) is applied.


• Dex-Iey: In addition to problem-specific knowledge, this strategy also follows the basic idea [24, 28] that if a solution is good in terms of fitness, then the components (edges) that constitute the solution (spanning tree) are likely to be available in many other solutions of the population of employed bees. Under this strategy, an edge picked from X′ is first deleted, making X′ infeasible, as X′ is partitioned into two components. To make it feasible again, a new candidate edge (different from the deleted edge) whose insertion can make X′ feasible is searched for in another solution Y that is picked at random from the population of employed bees. However, instead of picking an edge at random from the |V|−1 edges of X′ for deletion, an edge is picked from the set Ef of X′ according to the problem-specific knowledge of the solution (spanning tree). As the ABC algorithm iteratively progresses, the population of employed bees becomes better and better in terms of fitness; hence the edges in the set Ef, which contribute to the fitness of the solution under consideration, are among the edges of X′ that are potentially better than the other good edges of X′. In such a scenario, if an edge of X′ that makes no contribution to the fitness of X′ is picked for deletion, then it is highly likely that the survival rate of the resultant solution X′ (made feasible by inserting an edge from another solution) is lower than that of a resultant solution obtained by knowledge-directed deletion, resulting in slower convergence towards high quality solutions overall. Note that if the procedure fails to find a new candidate edge (different from the deleted edge in X′) in another solution Y of the employed bee population whose insertion can make X′ feasible, then the deleted edge in X′ is restored and the same procedure is started again. Such an attempt is carried out at most five times, a parameter determined empirically. If it still fails, the second neighborhood strategy Dex-IeG(E) is applied on X′. Such repeated failures, if they occur, show that the current population is suffering from a lack of diversity; to overcome this situation, a diversification step in the form of the second neighborhood strategy Dex-IeG(E) is applied on X′.

Figure 2: The various stages in the execution of Dex-Iey.

One can understand Dex-Iey through the series of Figures 2(a-e). Figure 2(a) represents a connected, undirected and edge-weighted graph G = (V, E, w) with |V| = 9 and |E| = 14, where each edge in E has a weight w. Figure 2(b) depicts a solution (spanning tree) X′, which is a copy of a solution X selected from the population of employed bees. Edges in X′ are drawn as thick lines, either dark grey or light grey. The pair of vertices {2, 8} has the maximum stretch over all pairs of vertices in X′ and thus determines the fitness of X′. The edges on the path between vertex 2 and vertex 8 in X′ form the set Ef of X′: <2, 3>, <3, 4>, <4, 5>, <5, 6>, <6, 8>. Edges in Ef are drawn in light grey, whereas the remaining edges of X′ are dark grey. The fitness of X′ is 29. The Dex-Iey neighborhood strategy first deletes the edge <2, 3>, picked from the set Ef of X′ as per the problem-specific knowledge of X′. This partitions X′ into two components, {0, 1, 2} and {3, 4, 5, 6, 7, 8}, and makes X′ infeasible. To make it feasible, a new candidate edge (different from the deleted edge <2, 3>) whose insertion can make X′ feasible is searched for in another solution picked at random from the employed bee population. Figure 2(d) represents another solution (spanning tree) Y, whose edges are also drawn as thick lines. The edge found in Y is <2, 8>, which can make X′ feasible; it is selected and added to X′. Figure 2(e) shows the resultant X′, whose fitness is now 4.


Figure 3: The various stages in the execution of Dex-IeG(E).

• Dex-IeG(E): This strategy is similar to Dex-Iey; however, instead of searching for an edge in another solution Y in order to make X′ feasible, an edge is searched for among those edges of G that are not part of the spanning tree X′. This strategy allows the ABC algorithm to explore new areas of the search space by introducing new edges of the graph, creating a new spanning tree. One can understand this strategy through the series of Figures 3(a-d).


Figure 3(a) represents a connected, undirected and edge-weighted graph G = (V, E, w) with |V| = 9 and |E| = 14, where each edge in E has a weight w. Figure 3(b) represents a solution (spanning tree) X′, a copy of a solution selected from the population of employed bees. Edges in X′ are drawn as thick lines, either dark grey or light grey. The pair of vertices {7, 6} has the maximum stretch over all pairs of vertices in X′ and thus determines the fitness of X′. The edges on the path between vertex 7 and vertex 6 in X′ form the set Ef of X′: <7, 8>, <8, 2>, <2, 3>, <3, 5>, <5, 6>. Edges in Ef are drawn in light grey, whereas the remaining edges of X′ are dark grey. The fitness of X′ is 27. The Dex-IeG(E) neighborhood strategy first deletes the edge <7, 8>, picked from the set Ef of X′ as per the problem-specific knowledge of X′. This partitions X′ into two components, {7} and {0, 1, 2, 3, 4, 5, 6, 8}, and makes X′ infeasible. To make it feasible, a new candidate edge (different from the deleted edge <7, 8>) whose insertion can make X′ feasible is searched for among the edges of G. The edge found in G is <7, 6>, which can make X′ feasible; it is selected and added to X′. Figure 3(d) shows the resultant X′, whose fitness is now 6.33.
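The common skeleton of the two strategies is to delete an edge drawn from Ef and then reconnect the two resulting components with an edge from a donor set: the edges of another solution Y for Dex-Iey, or the edges of G for Dex-IeG(E). A sketch of this move (illustrative only; function and variable names are ours):

```python
import random

def neighborhood_move(tree_edges, e_f, donor_edges, rng=random):
    """Knowledge-directed delete-and-reconnect move.  `e_f` holds the edges
    on the maximum-stretch path of the tree; `donor_edges` is the edge set
    from which a reconnecting edge is drawn."""
    deleted = rng.choice(list(e_f))
    remaining = [e for e in tree_edges if e != deleted]
    # flood out the component containing one endpoint of the deleted edge
    comp = {deleted[0]}
    changed = True
    while changed:
        changed = False
        for u, v in remaining:
            if (u in comp) != (v in comp):
                comp.update((u, v))
                changed = True
    # feasible reconnecting edges cross the cut and differ from the deleted edge
    candidates = [(u, v) for (u, v) in donor_edges
                  if (u in comp) != (v in comp) and {u, v} != set(deleted)]
    if not candidates:
        return list(tree_edges)     # restore: no feasible insertion found
    return remaining + [rng.choice(candidates)]
```

Because the inserted edge always crosses the cut created by the deletion, the result is again a spanning tree with |V|−1 edges.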

4.6. Other features


In the course of searching for high quality solutions, if a solution that is currently part of the population of employed bees fails to improve over a fixed number of successive iterations, called limit, then keeping this solution in the population is presumed to be of no further use. In this situation, the employed bee associated with this solution (food source) becomes a scout, which finds a new food source (solution) in the search space by generating a new solution (see Section 4.2). Algorithm 1 presents the framework of the ABC algorithm for the Tree t-SP, where the binary tournament selection method is invoked through the function BTS_Method(E1, E2, ..., ENE), which returns a selected solution, and the functions Nbrhood_Strategy1(X) and Nbrhood_Strategy2(X) determine and return a new solution X′ in the neighborhood of a solution X.

5. Computational Results

The C language has been used to implement the ABC algorithm for the Tree t-SP. The proposed approach has been tested on a large set of randomly generated graph instances. All experiments have been carried out on a Linux machine with a 3.2 GHz 4-core Intel Core i5 processor and 4 GB RAM. Hereafter, the proposed ABC algorithm for the Tree t-SP will be referred to as ABC. Although the instances used for GA in [17] are unavailable, we have generated a large set of Euclidean graph instances. Note that the authors of [17] used only two instances (graphs of 50 and 100 nodes; see Table 5 of [17]) to check the effectiveness of GA for the Tree t-SP. Also, we have re-implemented GA [17] for the

Algorithm 1: The pseudo-code of ABC algorithm for the Tree t-SP


Generate NE solutions randomly, viz. E1, E2, ..., ENE;
best ← the best solution among E1, ..., ENE;
while termination criterion is not met do
    for i ← 1 to NE do                                   // employed bees phase
        if u01 < Pnbr then E′ ← Nbrhood_Strategy1(Ei);   // u01: uniform random number in [0, 1)
        else E′ ← Nbrhood_Strategy2(Ei);
        if E′ is better than Ei then Ei ← E′;
        else if Ei has not improved over the last limit iterations then Scout bee;
        if Ei is better than best then best ← Ei;
    for i ← 1 to NO do                                   // onlooker bees phase
        si ← BTS_Method(E1, E2, ..., ENE);
        if u01 < Pnbr then Oi ← Nbrhood_Strategy1(Esi);
        else Oi ← Nbrhood_Strategy2(Esi);
    for i ← 1 to NO do
        if Oi is better than Esi then Esi ← Oi;
        if Oi is better than best then best ← Oi;

purpose of comparison with our proposed approach (ABC). The same parameter values as in GA [17] are used: the population size equals the number of vertices |V| of the graph under consideration, the crossover probability is 0.9, the mutation probability is 0.2, and the total number of generations is 300. Since 300 generations is small in the domain of metaheuristic techniques, GA is also allowed to execute 150000/|V| generations so that, like ABC, it generates approximately 300,000 solutions for each graph instance. Both ABC and GA have been executed for 10 independent runs on each graph instance in order to assess their robustness. The subsequent subsections discuss the generation of graph instances, parameter tuning, and a comparative study of ABC and GA on a large set of randomly generated graph instances.


5.1. Graph Instances


To carry out the experiments, four sets of Euclidean graph instances with |V| ∈ {50, 100, 150, 200} are generated randomly in a 100 × 100 plane. The generation of a graph instance (a complete graph) of a given size, say |V| = 50, is as follows: 50 distinct points, each denoting a vertex of G, are selected uniformly at random in the plane, and the Euclidean distance between two vertices (say v1 and v2) is taken as the weight of the edge between them. The same procedure is followed to generate the complete graph instances with |V| = 100, 150 and 200. From each generated graph instance Gi(Vi, Ei), three sparser graphs with edge sets Ei1, Ei2 and Ei3 are derived, where |Ei1| = 0.8 × |Ei|, |Ei2| = 0.6 × |Ei| and |Ei3| = 0.4 × |Ei|; that is, 20%, 40% and 60% of the edges of Ei, chosen at random, are left out of Ei1, Ei2 and Ei3 respectively. Each graph instance is named A1_A2_A3 (see Table 3), where A1 is the number of vertices (column |V| of Table 3), A2 is the fraction of edges of the corresponding complete graph with A1 vertices that are not present (column E-D of Table 3), and A3 distinguishes different instances with the same number of vertices. For example, 50_0.0_1 in Table 3 is a complete graph with A1 = 50, A2 = 0.0 and A3 = 1; setting A2 ∈ {0.2, 0.4, 0.6} yields the derived instances 50_0.2_1, 50_0.4_1 and 50_0.6_1. The four sets of graph instances (|V| ∈ {50, 100, 150, 200}) thus comprise 48 graph instances in total.

5.2. Parameter Setting


Being stochastic in nature, ABC requires careful selection of parameter values for its success. Although parameter setting is a difficult task, one can approximate a setting under which high quality solutions are obtained overall. To set the various parameters used in the ABC algorithm, its performance has been tested on five instances: 100_0.0_1, 100_0.2_1, 150_0.0_1, 200_0.0_1 and 200_0.2_1. Several candidate values for each parameter were selected based on the available literature, our own previous experience with ABC and some preliminary experimentation (see Table 2). An offline full factorial strategy [5, 9] is used to determine the best combination of parameter values among all combinations shown in Table 2. We found that the combination NE = 50, NO = 100, limit = 100, Pbts = 0.8 and Pnbr = 0.95 provides the best results overall. ABC for the Tree t-SP is allowed to execute 2000 generations for each graph instance, which leads to the generation of approximately 300,000 solutions.

5.3. Comparison of ABC with GA [17]

In this subsection, we compare ABC with GA [17]. Table 3 reports the results of the various approaches for the Tree t-SP. In Table 3, the first three columns (Instance,


Table 2: Influence of parameter values on a set of graph instances

Parameter          Value   100_0.0_1      100_0.2_1      150_0.0_1      200_0.0_1      200_0.2_1
                           Best   Avg     Best   Avg     Best   Avg     Best   Avg     Best   Avg
NE                 25      6.30   6.58    6.00   6.30    8.88   9.35    11.58  13.37   10.85  12.36
NE                 50      6.17   6.36    6.04   6.19    8.12   9.06    9.96   11.63   11.20  12.28
NE                 100     6.10   6.28    6.04   6.20    8.68   9.28    10.90  12.29   11.23  12.98
NE                 150     6.24   6.36    5.98   6.07    8.22   9.07    10.17  12.44   10.59  12.34
NO                 25      6.55   6.72    6.10   6.38    9.55   9.95    13.00  14.84   11.52  14.38
NO                 50      6.16   6.55    5.98   6.28    8.66   9.68    11.02  12.74   10.64  12.58
NO                 100     6.17   6.36    6.04   6.19    8.12   9.06    9.96   11.63   11.20  12.28
NO                 150     6.18   6.38    6.04   6.16    8.70   9.09    10.82  11.74   11.01  12.45
BTS-Method (Pbts)  0.75    6.26   6.50    6.12   6.23    8.77   9.36    10.78  12.17   11.12  12.14
BTS-Method (Pbts)  0.80    6.17   6.36    6.04   6.19    8.12   9.06    9.96   11.63   11.20  12.28
BTS-Method (Pbts)  0.85    6.11   6.40    5.88   6.09    8.60   9.39    11.45  12.99   10.44  12.44
BTS-Method (Pbts)  0.90    6.30   6.47    5.98   6.20    8.94   9.40    11.65  13.38   10.63  12.81
Pnbr               0.90    6.28   6.50    5.82   6.14    8.90   9.50    11.21  12.48   11.81  13.50
Pnbr               0.95    6.17   6.36    6.04   6.19    8.12   9.06    9.96   11.63   11.20  12.28
Pnbr               0.98    6.34   6.52    5.88   6.07    8.91   9.21    10.80  12.09   10.39  11.85
limit              50      6.29   6.41    6.07   6.28    8.74   9.25    10.78  12.62   11.04  12.46
limit              100     6.17   6.36    6.04   6.19    8.12   9.06    9.96   11.63   11.20  12.28
limit              150     6.28   6.42    5.99   6.13    8.20   9.14    10.67  11.67   11.47  12.33


|V| and E-D) denote the instance name, the number of vertices and the edge density respectively; each subsequent group of columns reports the best value (Best), the average solution quality (Avg), the standard deviation (SD) and the average total execution time (ATET) obtained over 10 runs by the three approaches (GA, ABC-PSK and ABC). ABC-PSK, a version of the ABC algorithm for the Tree t-SP that does not use problem-specific knowledge, is discussed in the next subsection, as are the last two columns (%Gap-Best and %Gap-Avg), which denote the percentage improvement in Best and Avg. Table 3 clearly shows the superior performance of ABC over GA [17] for the Tree t-SP on all instances in terms of solution quality; the Best and Avg values obtained by ABC are many times better than those of GA. Table 3 also shows that ABC is faster than GA in terms of computational time (ATET). For statistical analysis, similarly to [26, 27], we have also used the Wilcoxon signed-rank test to compare ABC and GA, using the Wilcoxon Signed-Rank Test Calculator at http://www.socscistatistics.com/tests/signedranks/Default2.aspx. Using a significance level of 0.05,


Table 3: Results of GA [17], ABC-PSK and ABC on various sizes of graph instances.

Instance  |V|  E-D | GA: Best  Avg  SD  ATET | ABC-PSK: Best  Avg  SD  ATET | ABC: Best  Avg  SD  ATET | %Gap-Best  %Gap-Avg
50 0.0 1  50  0%  | 15.59  24.08  6.00  5.28 | 4.82  4.99  0.08  3.99 | 4.67  4.84  0.09  4.26 | 3.21  3.10
50 0.2 1  50  20% | 14.91  23.16  5.69  5.30 | 4.39  4.49  0.05  4.01 | 4.33  4.36  0.03  4.28 | 1.38  2.98
50 0.4 1  50  40% | 10.25  14.51  3.29  5.23 | 4.44  4.46  0.04  4.02 | 4.44  4.44  0.00  4.31 | 0.00  0.45
50 0.6 1  50  60% | 9.82  11.29  1.58  5.15 | 4.64  4.79  0.12  4.03 | 4.64  4.66  0.04  4.28 | 0.00  2.79
50 0.0 2  50  0%  | 12.74  22.27  7.40  5.32 | 5.10  5.27  0.08  3.95 | 5.02  5.04  0.06  4.20 | 1.59  4.56
50 0.2 2  50  20% | 12.10  20.07  4.22  5.43 | 4.69  4.71  0.03  3.96 | 4.69  4.69  0.00  4.26 | 0.00  0.42
50 0.4 2  50  40% | 10.15  16.06  3.35  5.33 | 3.98  4.09  0.14  3.98 | 3.98  3.98  0.00  4.28 | 0.00  2.76
50 0.6 2  50  60% | 7.45  10.15  2.04  5.11 | 4.00  4.04  0.05  3.90 | 4.00  4.00  0.00  4.23 | 0.00  0.99
50 0.0 3  50  0%  | 17.57  26.62  7.51  5.29 | 5.58  5.79  0.11  3.97 | 5.45  5.49  0.10  4.22 | 7.33  5.46
50 0.2 3  50  20% | 14.45  20.05  6.26  5.24 | 4.76  4.88  0.05  3.98 | 4.68  4.74  0.05  4.24 | 1.71  2.95
50 0.4 3  50  40% | 14.45  20.05  6.26  5.24 | 4.76  4.88  0.05  3.99 | 4.68  4.74  0.05  4.25 | 1.71  2.95
50 0.6 3  50  60% | 7.57  10.18  1.43  5.09 | 4.20  4.30  0.09  3.98 | 4.20  4.20  0.00  4.23 | 0.00  2.38
100 0.0 1  100  0%  | 100.64  193.01  78.69  17.81 | 8.47  9.84  1.03  12.29 | 6.17  6.36  0.12  13.51 | 34.27  54.71
100 0.2 1  100  20% | 92.10  154.31  43.51  17.97 | 8.31  10.51  1.15  12.32 | 6.04  6.19  0.09  13.48 | 37.58  69.78
100 0.4 1  100  40% | 70.33  120.60  59.66  17.52 | 8.26  10.12  1.43  12.38 | 6.16  6.36  0.15  13.64 | 34.09  59.11
100 0.6 1  100  60% | 61.03  79.89  14.92  17.94 | 7.48  9.07  1.22  12.38 | 6.51  6.65  0.10  13.72 | 14.90  36.39
100 0.0 2  100  0%  | 125.89  176.60  31.91  18.52 | 9.14  11.03  1.61  12.28 | 5.89  6.12  0.15  13.56 | 55.17  80.22
100 0.2 2  100  20% | 84.52  132.41  31.36  18.74 | 9.57  11.22  0.82  12.30 | 5.93  6.15  0.19  13.63 | 61.38  82.43
100 0.4 2  100  40% | 102.02  175.95  54.73  18.75 | 9.75  11.18  0.81  12.35 | 5.86  6.10  0.14  13.62 | 66.38  83.27
100 0.6 2  100  60% | 64.37  75.73  7.78  18.79 | 9.09  10.32  0.80  12.36 | 5.83  6.16  0.20  13.72 | 55.91  67.53
100 0.0 3  100  0%  | 116.94  151.16  15.36  19.09 | 13.49  15.43  1.32  12.30 | 7.09  7.49  0.24  13.55 | 90.26  106.00
100 0.2 3  100  20% | 105.12  127.47  16.25  19.00 | 12.67  14.82  1.82  12.36 | 7.08  7.57  0.26  13.58 | 78.95  95.77
100 0.4 3  100  40% | 59.33  107.56  21.66  18.80 | 10.63  12.68  1.39  12.37 | 6.84  7.30  0.30  13.71 | 55.40  73.69
100 0.6 3  100  60% | 53.58  91.50  15.04  19.02 | 8.46  10.62  1.04  12.37 | 6.83  7.25  0.17  13.72 | 23.86  46.48
150 0.0 1  150  0%  | 361.81  460.38  69.91  40.11 | 34.02  38.56  3.21  25.96 | 8.12  9.06  0.44  28.33 | 318.96  325.60
150 0.2 1  150  20% | 331.68  483.14  105.96  40.80 | 25.97  30.01  2.50  25.92 | 8.67  9.13  0.37  28.67 | 199.53  228.69
150 0.4 1  150  40% | 142.80  299.74  74.77  39.25 | 22.38  27.12  2.48  26.02 | 7.54  8.41  0.45  29.03 | 196.81  221.75
150 0.6 1  150  60% | 109.40  162.52  43.10  37.63 | 18.54  21.93  1.54  26.01 | 6.86  7.57  0.49  29.02 | 170.26  189.69
150 0.0 2  150  0%  | 295.98  411.61  85.83  36.89 | 30.28  38.10  4.34  25.79 | 8.97  9.40  0.28  28.56 | 237.56  305.31
150 0.2 2  150  20% | 285.25  339.01  43.80  37.51 | 31.30  35.18  2.51  25.90 | 8.53  8.95  0.27  28.62 | 266.94  293.07
(Table 3 continued below.)

Table 3 (continued):

Instance  |V|  E-D | GA: Best  Avg  SD  ATET | ABC-PSK: Best  Avg  SD  ATET | ABC: Best  Avg  SD  ATET | %Gap-Best  %Gap-Avg
150 0.4 2  150  40% | 213.08  282.13  61.25  40.25 | 30.71  32.86  1.84  26.10 | 8.00  8.78  0.52  28.96 | 211.26  274.25
150 0.6 2  150  60% | 146.37  201.87  50.84  41.06 | 22.66  25.74  2.02  26.44 | 7.28  8.20  0.49  28.88 | 211.26  213.90
150 0.0 3  150  0%  | 209.40  255.78  27.92  40.88 | 37.95  41.42  2.41  26.01 | 11.01  11.53  0.42  28.41 | 244.68  259.23
150 0.2 3  150  20% | 218.02  237.62  23.36  41.14 | 36.42  38.64  1.54  25.90 | 10.39  10.84  0.42  28.66 | 250.52  256.45
150 0.4 3  150  40% | 209.29  226.12  17.50  42.15 | 30.99  36.65  3.15  26.08 | 10.00  10.72  0.69  28.90 | 209.90  256.86
150 0.6 3  150  60% | 168.89  195.89  18.44  42.21 | 22.62  25.85  2.66  26.12 | 8.63  9.31  0.48  29.16 | 162.10  177.65
200 0.0 1  200  0%  | 555.56  682.71  63.74  73.22 | 75.44  82.60  3.52  45.74 | 9.96  11.63  0.72  49.63 | 657.42  610.23
200 0.2 1  200  20% | 475.45  578.38  54.02  72.86 | 58.96  64.48  2.16  45.34 | 11.20  12.28  0.76  50.68 | 426.42  425.08
200 0.4 1  200  40% | 367.35  544.24  76.51  71.86 | 51.11  57.11  3.49  45.47 | 10.43  11.71  0.61  52.55 | 390.02  387.70
200 0.6 1  200  60% | 238.48  348.32  99.53  70.42 | 42.52  47.63  2.99  45.62 | 9.57  10.53  0.64  50.67 | 344.30  352.32
200 0.0 2  200  0%  | 407.64  470.66  50.06  61.77 | 69.92  77.06  4.74  45.23 | 11.57  13.41  1.13  50.11 | 504.32  474.64
200 0.2 2  200  20% | 348.68  419.70  71.70  64.52 | 67.64  75.12  4.99  46.02 | 11.06  12.09  0.68  49.56 | 511.57  521.33
200 0.4 2  200  40% | 353.76  381.80  22.90  72.73 | 50.82  60.26  4.69  46.01 | 11.17  11.74  0.45  50.59 | 354.96  413.28
200 0.6 2  200  60% | 288.71  321.67  23.45  74.46 | 38.34  43.73  2.99  46.35 | 10.14  11.16  0.70  50.85 | 278.10  291.84
200 0.0 3  200  0%  | 414.90  474.24  27.94  72.97 | 74.46  82.16  6.52  45.35 | 12.37  13.35  0.63  49.66 | 501.94  515.43
200 0.2 3  200  20% | 316.86  362.72  29.86  75.08 | 60.52  71.02  6.23  46.56 | 12.47  14.04  1.18  49.70 | 385.32  405.84
200 0.4 3  200  40% | 262.36  324.65  31.46  73.38 | 49.26  58.97  4.70  45.67 | 11.77  13.27  1.01  50.67 | 318.52  344.38
200 0.6 3  200  60% | 255.22  300.03  18.76  71.99 | 51.26  54.81  3.26  47.35 | 10.90  12.07  1.13  50.54 | 370.27  354.10
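The %Gap-Best and %Gap-Avg columns are the percentage improvements of ABC over ABC-PSK. As a minimal sketch of how these columns are derived (the two input values below are taken from the first row of Table 3):

```python
def pct_gap(psk_value, abc_value):
    """Percentage by which the ABC-PSK value exceeds the ABC value."""
    return (psk_value - abc_value) / abc_value * 100.0

# Instance 50 0.0 1 from Table 3.
print(round(pct_gap(4.82, 4.67), 2))  # %Gap-Best: 3.21
print(round(pct_gap(4.99, 4.84), 2))  # %Gap-Avg: 3.1 (printed as 3.10 in the table)
```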

Table 4: p-values of the Wilcoxon signed-rank test of ABC over ABC-PSK and GA [17]. For each instance size |V| in {50, 100, 150, 200}, ABC is compared against GA and against ABC-PSK, across instance classes A2 and A3 in {1, 2, 3} and edge-densities 0%, 20%, 40% and 60% (one can see the details of A2 and A3 in Section 5.1). Nearly every reported p-value is 0.00512; the remaining entries (0.0251 and 0.00694) are also well below the 0.05 significance level. [The grid layout of this table was lost in extraction.]
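The p-values in Table 4 come from Wilcoxon's signed-rank test. The test statistic behind such p-values can be reproduced without the online calculator; below is a pure-Python sketch (not the authors' tooling). The paired values are the Avg stretch factors of ABC and GA on the first ten 50-vertex rows of Table 3, and the critical value 8 is the standard two-sided table entry for n = 10 at the 0.05 level:

```python
def signed_rank_W(x, y):
    """Two-sided Wilcoxon signed-rank statistic: the smaller of the
    positive and negative rank sums of the paired differences."""
    diffs = [b - a for a, b in zip(x, y) if b != a]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):  # assign average ranks over tied |differences|
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    return min(w_plus, sum(ranks) - w_plus)

# Avg stretch factors on the first ten 50-vertex instances (Table 3).
abc = [4.84, 4.36, 4.44, 4.66, 5.04, 4.69, 3.98, 4.00, 5.49, 4.74]
ga  = [24.08, 23.16, 14.51, 11.29, 22.27, 20.07, 16.06, 10.15, 26.62, 20.05]

W = signed_rank_W(abc, ga)  # every difference favors ABC, so W = 0
print(W <= 8)               # True: significant at the 0.05 level for n = 10
```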

we found that ABC performs significantly better than GA on all instances (see Table 4).

5.4. Influence of the Problem-Specific Knowledge in ABC


To investigate the effect of problem-specific knowledge in ABC, we also experimented with a version of the ABC algorithm for the Tree t-SP that uses no problem-specific knowledge; the resulting approach is referred to as ABC-PSK. Table 3 clearly shows the difference in solution quality between ABC-PSK and ABC on most of the instances: as the size of the graph instance increases, ABC performs better and better in comparison to ABC-PSK. This can be observed through the %Gap-Best value, ((Best_ABC-PSK − Best_ABC) / Best_ABC) × 100%, and the %Gap-Avg value, ((Avg_ABC-PSK − Avg_ABC) / Avg_ABC) × 100%, reported for each instance in the last two columns of Table 3, where Best_ABC-PSK, Best_ABC, Avg_ABC-PSK and Avg_ABC respectively denote the best value obtained by ABC-PSK, the best value obtained by ABC, the average value obtained by ABC-PSK, and the average value obtained by ABC. ABC shows significant improvement over ABC-PSK on most of the instances, and the improvement grows with instance size. On applying Wilcoxon's signed-rank test (significance level 0.05), we found that ABC performs significantly better than ABC-PSK on all instances except five (see Table 4). One can also observe that even ABC-PSK is much better than GA on all instances in terms of solution quality, and faster than GA in terms of computational time (ATET).

5.5. Analysis of Convergence Behaviour of GA, ABC-PSK and ABC

In this subsection, we analyze the convergence behaviour of GA, ABC-PSK and ABC in order to gain useful insight into their behaviour. For this analysis, we consider two instances picked at random from each of the last three sets of graph instances, i.e., inst 100 0.0 1, inst 100 0.6 2, inst 150 0.0 1, inst 150 0.4 2, inst 200 0.0 1 and inst 200 0.2 3. Figure 4 (a-f) plots solution quality (Y-axis) against the number of solutions generated so far (X-axis) on the considered instances. One can observe from Figure 4 (a-f) that the evolution curves of GA and ABC-PSK are very close to each other over the first few solutions

[Figure 4: Evolution of solution quality versus number of solutions generated so far for GA, ABC-PSK and ABC on (a) inst 100 0.0 1, (b) inst 100 0.6 2, (c) inst 150 0.0 1, (d) inst 150 0.4 2, (e) inst 200 0.0 1 and (f) inst 200 0.2 3.]

generated; however, as the number of solutions generated grows, ABC-PSK converges towards better solution quality much more rapidly than GA. This convergence behavior sheds light on its superiority over GA. One can also observe from Figure 4 (a-f) that ABC converges very rapidly from the beginning towards much better solution quality than both ABC-PSK and GA. This analysis underlines the rationale for employing problem-specific knowledge in the neighborhood strategies of ABC.
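Throughout, solution quality is the stretch factor t of the best spanning tree found so far, i.e., the maximum over vertex pairs of the ratio of tree distance to shortest-path distance in the graph. As a small self-contained sketch of this objective (a naive Floyd-Warshall evaluation for illustration, not necessarily the authors' implementation, which may use more efficient machinery):

```python
from itertools import combinations

def stretch_factor(n, graph_w, tree_edges):
    """Stretch factor of a spanning tree: max over vertex pairs of
    d_T(u, v) / d_G(u, v).  graph_w maps edges (u, v) with u < v to weights."""
    INF = float("inf")
    # All-pairs shortest paths in G (Floyd-Warshall; fine for small n).
    dG = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in graph_w.items():
        dG[u][v] = dG[v][u] = min(dG[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dG[i][k] + dG[k][j] < dG[i][j]:
                    dG[i][j] = dG[i][k] + dG[k][j]
    # All-pairs distances in the tree T (same routine on the tree's edges).
    dT = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v) in tree_edges:
        dT[u][v] = dT[v][u] = graph_w[(min(u, v), max(u, v))]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dT[i][k] + dT[k][j] < dT[i][j]:
                    dT[i][j] = dT[i][k] + dT[k][j]
    return max(dT[u][v] / dG[u][v] for u, v in combinations(range(n), 2))

g = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}    # triangle with unit weights
print(stretch_factor(3, g, [(0, 1), (1, 2)]))  # 2.0: path 0-1-2 doubles d(0, 2)
```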


6. Conclusions


This paper presents an artificial bee colony (ABC) algorithm for the tree t-spanner problem on a given connected, undirected, and edge-weighted graph. Problem-specific knowledge is employed in the neighborhood strategies of the ABC algorithm, making it highly effective in finding high-quality solutions in less computational time. Computational experiments on a large set of randomly generated Euclidean graph instances exhibit the superior performance of the ABC algorithm over the existing genetic algorithm (GA) [17] for this problem; on these instances, the solution quality obtained by the ABC algorithm is many-fold better than that of GA. Another version (ABC-PSK) of the ABC algorithm, which does not use problem-specific knowledge, has also been tested. Experimental results show that even ABC-PSK is superior to the existing GA in terms of both solution quality and computational time. Analyses of the convergence behavior of GA, ABC-PSK and ABC show that the ABC algorithm converges much faster towards much higher solution quality than either the existing GA or ABC-PSK; these analyses also justify the rationale for employing problem-specific knowledge in the neighborhood strategies of ABC. In future work, the problem-specific knowledge employed in the ABC algorithm can be incorporated into other metaheuristic techniques for this problem.

Acknowledgement

This work is supported in part by a grant (grant number YSS/2015/000276) from the Science and Engineering Research Board, Department of Science & Technology, Government of India.

References

[1] M. Al-Salamah, Constrained binary artificial bee colony to minimize the makespan for single machine batch processing with non-identical job sizes, Applied Soft Computing 29 (2015) 379–385.

[2] I. Althöfer, G. Das, D. Dobkin, D. Joseph, J. Soares, On sparse spanners of weighted graphs, Discrete Comput. Geom. 9 (1993) 81–100.

[3] B. Awerbuch, A. Baratz, D. Peleg, Efficient broadcast and light-weight spanners, 1992.


[4] S. Bhatt, F. Chung, F. Leighton, A. Rosenberg, Optimal simulations of tree machines, in: Proceedings of 27th IEEE Foundation of Computer Science, pp. 274– 282. [5] M. Birattari, Tuning Metaheuristics: A Machine Learning Perspective, SpringerVerlag, 2005.


[6] L. Cai, Tree spanners: Spanning trees that approximate distances, 1992.

[7] L. Cai, D. Corneil, Tree spanners, SIAM J. Discrete Math. 8 (1995) 359–387.


[8] Z. Cui, X. Gu, An improved discrete artificial bee colony algorithm to minimize the makespan on hybrid flow shop problems, Neurocomputing 148 (2015) 248–259.


[9] A.E. Eiben, S.K. Smith, Parameter tuning for configuring and analyzing evolutionary algorithms, Swarm and Evolutionary Computation 1 (2011) 19–31. [10] Y. Emek, D. Peleg, Approximating minimum max-stretch spanning trees on unweighted graphs, SIAM J. Comput. 38 (2008) 1761–1781.


[11] D. Harel, R. Tarjan, Fast algorithms for finding nearest common ancestors, SIAM J. Comput. 13 (1984) 338–355.


[12] B. Jayalakshmi, A. Singh, A hybrid artificial bee colony algorithm for the cooperative maximum covering location problem, International Journal of Machine Learning and Cybernetics 8 (2017) 691–697.


[13] D. Karaboga, An idea based on honey bee swarm for numerical optimization, 2005.


[14] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numeric function optimization: Artificial bee colony (ABC) algorithm, Journal of Global Optimization 39 (2007) 459–471. [15] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artificial Intelligence Review 42 (2014) 21–57. [16] A. Liestman, T. Shermer, Additive graph spanners, Networks 23 (1993) 343–364. [17] R. Moharam, E. Morsy, Genetic algorithms to balanced tree structures in graphs., Swarm and Evolutionary Computation 32 (2017) 132–139. [18] Q.K. Pan, M.F. Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Information Sciences 181 (2011) 2455–2468. [19] D. Peleg, D. Tendler, Low stretch spanning trees for planar graphs, 2001.


[20] D. Peleg, J. Ullman, An optimal synchronizer for the hypercube, in: Proceedings of 6th ACM Symposium on Principles of Distributed Computing, Vancouver, pp. 77–85.


[21] D. Peleg, E. Upfal, A tradeoff between space and efficiency for routing tables, in: Proceedings of 20th ACM Symposium on Theory of Computing, Chicago, pp. 43–52. [22] R. Prim, Shortest connection networks and some generalizations, Bell Systems Technical Journal 36 (1957) 1389–1401.


[23] G. Raidl, B. Julstrom, Edge-sets: an effective evolutionary coding of spanning trees, IEEE Transactions on Evolutionary Computation 7 (2003) 225–239.


[24] A. Singh, An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem, Applied Soft Computing 9 (2009) 625–631.


[25] A. Singh, B. Jayalakshmi, Hybrid artificial bee colony algorithm based approaches for two ring loading problems, Applied Intelligence (2017). doi:10. 1007/s10489-017-0950-z. [26] R. Soto, B. Crawford, R. Olivares, J. Barraza, I. Figueroa, F. Johnson, F. Paredes, E. Olgun, Solving the non-unicost set covering problem by using cuckoo search and black hole optimization, Natural Computing 16 (2017) 213–229.


[27] R. Soto, B. Crawford, R. Olivares, S. Niklander, F. Johnson, F. Paredes, E. Olgun, Online control of enumeration strategies via bat algorithm and black hole optimization, Natural Computing 16 (2017) 241–257.


[28] S. Sundar, A. Singh, A swarm intelligence approach to the quadratic minimum spanning tree problem, Information Sciences 180 (2010) 3182–3191. [29] S. Sundar, A. Singh, A swarm intelligence approach to the quadratic multiple knapsack problem, in: Proceedings of the 17th International Conference on Neural Information Processing (ICONIP 2010), LNCS, Sydney, volume 6443, pp. 626–633. [30] S. Sundar, A. Singh, New heuristic approaches for the dominating tree problem, Applied Soft Computing 13 (2013) 4695–4703. [31] S. Sundar, A. Singh, Metaheuristic approaches for the blockmodel problem, IEEE Systems Journal 9 (2015) 1237–1247. [32] S. Sundar, A. Singh, Metaheuristic approaches for the blockmodel problem, IEEE Systems Journal 9 (2015) 1237–1247. [33] S. Sundar, P. Suganthan, C. Jin, C. Xiang, C. Soon, A hybrid artificial bee colony algorithm for the job-shop scheduling problem with no-wait constraint, Soft Computing 7 (2015) 1–10.


[34] G. Venkatesan, U. Rotics, M.S. Madanlal, J.A. Makowsky, C.P. Rangan, Restrictions of minimum spanner problems, Information and Computation 136 (1997) 143–164.


[35] X. Zhang, X. Zhang, A binary artificial bee colony algorithm for constructing spanning trees in vehicular ad hoc networks, Ad Hoc Networks 58 (2017) 198– 204.


Highlights

- The tree t-spanner problem finds practical relevance in networks.

- Neighborhood strategies of the artificial bee colony (ABC) algorithm employ problem-specific knowledge that makes the ABC algorithm highly effective in finding high-quality solutions in less computational time.

- Computational experiments on a large set of randomly generated graph instances exhibit the superior performance of the ABC algorithm over the existing genetic algorithm for the tree t-spanner problem.

