Computers & Operations Research 32 (2005) 239 – 255
Heuristic and exact algorithms for the spanning tree detection problem

Takeo Yamada∗, Seiji Kataoka, Kohtaro Watanabe

Department of Computer Science, The National Defense Academy, Yokosuka, Kanagawa 239-8686, Japan
Abstract

Given an integer λ and an undirected graph with edges associated with integer weights, the spanning tree detection problem (STDP) is to find, if one exists, a spanning tree of weight λ. STDP is NP-hard. In this paper we develop both approximate and exact algorithms to solve STDP. The approximate algorithm consists of a greedy method that constructs an initial spanning tree quickly, and a local search method that improves the weight of the tree successively toward λ. To solve STDP completely, we take a 'divide and conquer' approach and develop an exact algorithm. These algorithms are implemented in the C language, and we conduct numerical tests to evaluate their performance for various types and sizes of instances. In most cases, we are able to solve STDPs with up to 1000 nodes in less than a few seconds. Moreover, to solve harder instances we also try tabu search, and we mention how the developed algorithm can be modified to list up all the spanning trees of weight λ.
© 2003 Elsevier Ltd. All rights reserved.
1. Introduction

Let G = (V, E) be an undirected graph [1] with vertex set V = {v_1, …, v_n} and edge set E = {e_1, …, e_m} ⊆ V × V. Each edge e ∈ E is associated with an integer weight w(e). We assume G is connected and simple, i.e., there exist neither self-loops nor multiple edges. A tree is a connected acyclic subgraph of G. For a tree T, its weight w(T) is defined as the sum of the weights of its constituent edges. A tree is a spanning tree if it covers all the nodes of G. The minimum spanning tree problem (MSTP, [2]) has been well studied in this framework, and efficient algorithms to solve it are widely known [3,4]. With minor changes, the same algorithms can be used to find the maximum spanning tree as well.
∗ Corresponding author. E-mail address: [email protected] (T. Yamada).
0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/S0305-0548(03)00215-6
Fig. 1. A planar graph.
In this paper, we are concerned with the following variation of the problem, which is termed the spanning tree detection problem (STDP).

STDP: Given an arbitrary integer λ and an undirected graph G, find, if one exists, a spanning tree T of G such that w(T) = λ.

Here, the integer λ is referred to as the 'target value', and we call a spanning tree with this weight a λ-spanning tree, or λ-ST for short. Closely related to this problem is the following exact spanning tree problem [5,6].

ESTP: Determine whether there exists a λ-ST in G.

STDP is NP-hard [7], since ESTP is solved whenever STDP is solved, and ESTP is known to be NP-complete [5]. Although ESTP can be solved by a pseudo-polynomial time algorithm due to Barahona et al. [8], this does not help in solving STDP, since the algorithm in [8] settles the existence question but does not actually produce a λ-ST as 'evidence'. Although no published algorithms to solve STDP are known to us, some naive approaches are possible. First, we may list up all the spanning trees of G and check whether a λ-ST exists among them. For an algorithm to list up all the spanning trees readers are referred to, e.g., Read and Tarjan [9]. Alternatively, we may list up the K smallest (or largest) spanning trees for sufficiently large K using the algorithm found in, e.g., [10]. However, except for very small instances, the number of spanning trees in a graph is usually far too large to allow such a primitive approach.

Example 1. By Kirchhoff's method [11] we know that there exist more than 2 × 10^9 spanning trees in the graph of Fig. 1. Thus, complete enumeration is practically infeasible.
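To make the count in Example 1 concrete, the following sketch computes the number of spanning trees by Kirchhoff's matrix-tree theorem [11]: build the graph Laplacian, delete one row and column, and take the determinant of what remains. The function name, the edge-list interface and the use of floating point are our assumptions for illustration, not part of the paper; for very large counts the result of the elimination should be rounded and treated as approximate.

#include <math.h>
#include <stdlib.h>

/* Number of spanning trees of a graph on vertices 0..nv-1 given as an edge
 * list, by the matrix-tree theorem: determinant of the Laplacian with the
 * last row and column deleted.                                             */
double count_spanning_trees(int nv, int ne, const int (*edge)[2])
{
    int n = nv - 1;                       /* order of the reduced Laplacian */
    double *L = calloc((size_t)n * n, sizeof *L);
    for (int k = 0; k < ne; k++) {
        int u = edge[k][0], v = edge[k][1];
        if (u < n) L[u * n + u] += 1.0;   /* degrees on the diagonal        */
        if (v < n) L[v * n + v] += 1.0;
        if (u < n && v < n) { L[u * n + v] -= 1.0; L[v * n + u] -= 1.0; }
    }
    double det = 1.0;                     /* Gaussian elimination with partial pivoting */
    for (int i = 0; i < n; i++) {
        int p = i;
        for (int r = i + 1; r < n; r++)
            if (fabs(L[r * n + i]) > fabs(L[p * n + i])) p = r;
        if (L[p * n + i] == 0.0) { free(L); return 0.0; }   /* disconnected graph */
        if (p != i) {
            det = -det;
            for (int c = 0; c < n; c++) {
                double t = L[i * n + c]; L[i * n + c] = L[p * n + c]; L[p * n + c] = t;
            }
        }
        det *= L[i * n + i];
        for (int r = i + 1; r < n; r++) {
            double f = L[r * n + i] / L[i * n + i];
            for (int c = i; c < n; c++) L[r * n + c] -= f * L[i * n + c];
        }
    }
    free(L);
    return det;
}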
STDP may be encountered in the following applications.

Example 2. Let us consider a set of n cities represented by G = (V, E), where E is the set of possible communication links. Here E consists of p mutually disjoint subsets E_i served by p different telephone companies, i.e., E = E_1 ∪ ··· ∪ E_p. We wish to construct a communication network connecting all the cities and using exactly r_i links served by company i. Here we assume r_1 + ··· + r_p = n − 1, so the resulting network is a spanning tree. Let us define σ := max{r_i | i = 1, …, p} + 1, and let the edge weights be w(e) := σ^(i−1) for e ∈ E_i. Then, by the uniqueness of the σ-adic expansion of an arbitrary integer, the problem can be solved as a STDP with λ = r_1 + r_2 σ + r_3 σ^2 + ··· + r_p σ^(p−1).

Example 3. Again consider G = (V, E) as a communication network connecting n cities. Let c_e be the maintenance cost allocated to link e ∈ E, and assume the probability of failure at that link is p_e = exp(−α c_e) for some constant α > 0. We want to find a spanning tree such that the total cost is minimized, while keeping the failure probability of the tree at most some prespecified value δ. This reduces to finding a spanning tree T such that w(T) := Σ_{e∈T} c_e = −(log δ)/α, which is exactly a STDP.

In this paper, we develop both approximate and exact algorithms to solve STDP. Section 2 is devoted to an approximate algorithm which quickly finds a spanning tree with weight as close to λ as possible. This consists of a greedy method [12] to construct an initial spanning tree, and a local search method [13] to improve the weight of the tree successively toward λ. To solve STDP completely, in Section 3 we make use of the approximate solution and develop an exact algorithm by taking the 'divide and conquer' approach [14]. That is, we use the solution from the heuristic algorithm to divide the problem into a set of mutually disjoint subproblems, and apply the same procedure to each subproblem recursively until we obtain a λ-ST or a proof that no such spanning tree exists. In Section 4, we implement these algorithms in the C language and solve STDPs for various types and sizes of instances. Exact solutions are often obtained by applying the heuristic method alone, and even when this is not the case a λ-ST is usually found by solving only a few subproblems. Some technical issues in implementing the algorithms are examined in Section 5. These include the branching strategy of the exact algorithms, a tabu search method to solve problems approximately but more accurately, and the extension of these algorithms to list up all the λ-STs in G. Finally, Section 6 concludes the paper.

2. Heuristic algorithms

We set out with a local search approach to solve STDP approximately. It starts with an initial spanning tree (or a 'solution'), and at each step it searches the neighborhood of the current solution for an improved spanning tree until no further improvement is possible. Of course, the solution obtained this way is not guaranteed to be a λ-ST.

As the initial spanning tree we take either the minimum or the maximum spanning tree, whichever is of weight closer to λ. Let Tmin (Tmax, resp.) denote a minimum (maximum) spanning tree, and let w̲ := w(Tmin) and w̄ := w(Tmax) be their weights. These trees are easily found using the well-known algorithms due to Kruskal [3] or Prim [4]. Then, the initial spanning tree is given by
  T_0 := Tmin,   if dist(Tmin, λ) ≤ dist(Tmax, λ),
         Tmax,   otherwise.                                          (1)
Here

  dist(T, λ) := |w(T) − λ|                                           (2)

is the distance (or the 'gap') of T from the target value λ. Next, for a spanning tree T, its neighborhood is defined as follows. For an arbitrary non-tree edge e ∈ E \ T, {e} ∪ T includes a unique elementary cycle. Let C_e ⊆ T be the set of edges such that {e} ∪ C_e is this cycle. Then, removing an arbitrary edge e′ ∈ C_e and adding e gives another spanning tree. This is denoted as T + e − e′ := T ∪ {e} \ {e′}, and the neighborhood of T is defined as the set of spanning trees obtained from T in this way, i.e.,

  N(T) := {T + e − e′ | e ∈ E \ T, e′ ∈ C_e}.                        (3)
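The following fragment illustrates one way the cycle C_e of (3) can be recovered when the current tree is stored by parent and depth arrays rooted at an arbitrary node; the representation and all names are ours, chosen for illustration only.

/* Fundamental cycle C_e of a non-tree edge e = (u,v): the tree edges on the
 * unique u-v path in T.  The tree is assumed to be stored by a parent[]
 * array (parent[root] = -1) together with the depth[] of every node.  Each
 * node x written to cycle_nodes[] stands for the tree edge (x, parent[x]);
 * exchanging any one of them with e yields a neighbor T + e - e' of (3).   */
int fundamental_cycle(int u, int v, const int *parent, const int *depth,
                      int *cycle_nodes)
{
    int len = 0;
    while (u != v) {       /* climb the deeper endpoint until the two paths meet */
        if (depth[u] >= depth[v]) { cycle_nodes[len++] = u; u = parent[u]; }
        else                      { cycle_nodes[len++] = v; v = parent[v]; }
    }
    return len;            /* |C_e| */
}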
Then, a prototype local search algorithm is as follows:

Algorithm Heuris
Step 1: Find the initial spanning tree T_0 by (1), and let T := T_0.
Step 2: If there exists T′ ∈ N(T) such that dist(T′, λ) < dist(T, λ), go to Step 3; else go to Step 4.
Step 3: Set T := T′, and go back to Step 2.
Step 4: Output T and stop.

In the above algorithm we move from T to T′ ∈ N(T) as soon as an improved solution is found in N(T). This algorithm, based on the random descent scheme, is henceforth referred to as Heuris1. Instead, we may search for the best solution in N(T) before leaving T. This is realized by replacing Step 2 as follows:

Step 2′: Find T′ that minimizes dist(·, λ) in N(T). If dist(T′, λ) < dist(T, λ), go to Step 3; else go to Step 4.

This version of the heuristic algorithm, based on the steepest descent scheme, is referred to as Heuris2.
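A minimal sketch of the Heuris1 loop is given below. The Tree and Swap types and the three helper routines are hypothetical stand-ins: first_improving_swap() represents a scan over the non-tree edges e and the cycle edges e′ ∈ C_e that returns the first exchange strictly decreasing dist(T, λ). Heuris2 would instead examine the whole neighborhood and keep the best exchange before moving.

#include <stdlib.h>

typedef struct Tree Tree;                      /* hypothetical spanning-tree type   */
typedef struct { int e_in, e_out; } Swap;      /* edge entering / leaving the tree  */

long tree_weight(const Tree *T);                                 /* hypothetical */
int  first_improving_swap(const Tree *T, long lambda, Swap *s);  /* hypothetical */
void apply_swap(Tree *T, const Swap *s);                         /* hypothetical */

/* Random-descent local search of Section 2; returns the final gap dist(T, lambda). */
long heuris1(Tree *T, long lambda)
{
    long gap = labs(tree_weight(T) - lambda);  /* dist(T, lambda) of (2) */
    Swap s;
    while (gap > 0 && first_improving_swap(T, lambda, &s)) {
        apply_swap(T, &s);                     /* T := T + e - e'        */
        gap = labs(tree_weight(T) - lambda);
    }
    return gap;                                /* 0 iff an exact lambda-ST was found */
}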
Example 4. For the graph of Fig. 1, we have w̲ = 1718 and w̄ = 3561. Let λ = 2000. In Heuris1 we start with T_0 = Tmin, and the tree weights are improved successively as 1718 → 1845 → 2018 → 2011. Thus, we obtain an approximate solution in 3 steps, with a gap of 11 remaining to the target value. Heuris2 produces trees with weights 1718 → 2009 → 1998 → 2001 → 2000. Thus, with this heuristic we obtain an exact λ-ST in 4 steps.
3. Exact algorithms

Let T = {e_1, e_2, …, e_{n−1}} be a spanning tree in G. This may be obtained by the heuristic methods of the previous section. We assume w(T) ≠ λ, since otherwise the problem is solved. If λ < w̲ or w̄ < λ holds, the problem is also solved, since no λ-ST exists in G. If these are not the case, we can divide STDP into the following set of mutually disjoint subproblems P_i, i = 1, …, n − 1.

P_i: Find, if one exists, a λ-ST that contains e_1, e_2, …, e_{i−1}, but does not contain e_i.

More generally, for a tree F = {e_1, …, e_k} in G and a set of edges R ⊆ E that is disjoint from F, i.e., F ∩ R = ∅, a spanning tree T is (F, R)-admissible if it contains all the edges of F but none of the edges of R. That is, T is (F, R)-admissible if and only if F ⊆ T and R ∩ T = ∅. Here F and R are the sets of fixed and restricted edges, respectively. We pose the following problem.

P(F, R): Find, if one exists, an (F, R)-admissible λ-ST.

Clearly STDP is solved if we solve P(∅, ∅). We note that an (F, R)-admissible spanning tree of minimum (or maximum) weight is easily obtained by slightly modifying Kruskal's (or Prim's) algorithm. Let the weight of such a minimum (maximum) spanning tree be denoted as w̲(F, R) and w̄(F, R), respectively. If no (F, R)-admissible spanning tree exists, we set w̲(F, R) := ∞ and w̄(F, R) := −∞. Then, the following is straightforward.

1. If w̲(F, R) = λ or w̄(F, R) = λ, we have a λ-ST and the problem P(F, R) is solved.
2. If λ < w̲(F, R) or w̄(F, R) < λ, no (F, R)-admissible λ-ST exists in G and the problem P(F, R) is also solved.

For the case of w̲(F, R) < λ < w̄(F, R), we can modify the heuristic algorithms of Section 2 to obtain an approximate solution that is (F, R)-admissible, provided that such a spanning tree exists. Indeed, we start with the (F, R)-admissible spanning tree of minimum (or maximum) weight, as obtained above, whichever is closer to λ in weight. Then, we apply Heuris with the additional restriction that no edge of F leaves, and no edge of R enters, the current tree. Clearly, the solution obtained this way is (F, R)-admissible. Let this solution be T = F ∪ {e_{k+1}, …, e_{n−1}}. Again, we assume w(T) ≠ λ, and define F_i := F ∪ {e_{k+1}, …, e_{i−1}} and R_i := R ∪ {e_i}. Then, P(F, R) is further divided into the set of mutually disjoint subproblems P(F_i, R_i), i = k + 1, …, n − 1.
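The modification of Kruskal's algorithm mentioned above can be sketched as follows: the fixed edges F are inserted first, the restricted edges R are never considered, and the remaining edges are processed in nondecreasing weight order; sorting in nonincreasing order instead gives w̄(F, R). The data layout and all names below are ours, not the authors' code.

#include <stdlib.h>

typedef struct { int u, v; long w; int in_F, in_R; } Edge;

static int find(int *p, int x) { while (p[x] != x) x = p[x] = p[p[x]]; return x; }

static int cmp_w(const void *a, const void *b)
{
    long d = ((const Edge *)a)->w - ((const Edge *)b)->w;
    return (d > 0) - (d < 0);
}

/* Weight of a minimum (F,R)-admissible spanning tree of an n-node graph with
 * m edges, or -1 if no such tree exists (weights are positive here).  F is
 * assumed to be a forest, so forcing its edges never creates a cycle.       */
long admissible_mst(int n, int m, Edge *e)
{
    int *p = malloc(n * sizeof *p);
    long total = 0; int used = 0;
    for (int i = 0; i < n; i++) p[i] = i;
    for (int i = 0; i < m; i++)                  /* contract the fixed edges first */
        if (e[i].in_F) {
            int a = find(p, e[i].u), b = find(p, e[i].v);
            p[a] = b; total += e[i].w; used++;
        }
    qsort(e, m, sizeof *e, cmp_w);               /* Kruskal on the remaining edges */
    for (int i = 0; i < m && used < n - 1; i++) {
        if (e[i].in_F || e[i].in_R) continue;    /* restricted edges never enter   */
        int a = find(p, e[i].u), b = find(p, e[i].v);
        if (a != b) { p[a] = b; total += e[i].w; used++; }
    }
    free(p);
    return (used == n - 1) ? total : -1;
}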
Fig. 2. The behavior of Exact.
Thus, a prototype algorithm can be constructed in the divide-and-conquer paradigm [12,14] as the following recursive procedure:

Algorithm Exact
Input: A tree F = {e_1, …, e_k}, and R ⊆ E such that F ∩ R = ∅.
Step 1: If no (F, R)-admissible spanning tree exists in G, return.
Step 2: Calculate w̲(F, R) and w̄(F, R).
Step 3: (i) If w̲(F, R) = λ or w̄(F, R) = λ, output the λ-ST and stop. (ii) If λ < w̲(F, R) or w̄(F, R) < λ, then no (F, R)-admissible λ-ST exists in G; thus return.
Step 4: Use Heuris to obtain an approximate (F, R)-admissible spanning tree T = F ∪ {e_{k+1}, …, e_{n−1}}. If w(T) = λ, output T and stop.
Step 5: For i = k + 1, …, n − 1 call Exact recursively with (F_i, R_i).

In implementing this algorithm, we have a choice of the heuristic method used in Step 4. If Heuris1 (Heuris2) is used in this part, the resulting algorithm is called Exact1 (Exact2, resp.). Also, the order in which the edges are arranged in representing a spanning tree may greatly affect the performance of the resulting algorithm, since it determines the order of branchings. In this paper, we assume that the edges of any spanning tree are arranged in the depth-first fashion as we traverse the tree starting from some fixed 'ROOT' node. By traversing the same tree in, say, a breadth-first way, we obtain a completely different order of edges, and thus a different set of subproblems is produced from the same parent problem. This will be examined experimentally in Section 5.
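The recursive structure of Exact can be summarized by the following compilable skeleton. Everything here, the EdgeSet type and all helper routines, is a hypothetical stand-in for the authors' implementation; it is meant only to show how Steps 1-5 and the subproblems (F_i, R_i) fit together.

#include <stdlib.h>

typedef struct EdgeSet EdgeSet;

long     admissible_min(const EdgeSet *F, const EdgeSet *R);  /* w(F,R); LONG_MAX if none  */
long     admissible_max(const EdgeSet *F, const EdgeSet *R);  /* w̄(F,R); LONG_MIN if none  */
EdgeSet *heuris_FR(const EdgeSet *F, const EdgeSet *R, long lambda);  /* Section 2 heuristic */
long     weight(const EdgeSet *T);
int      n_edges(const EdgeSet *S);
int      edge_at(const EdgeSet *T, int i);   /* fixed edges first, then free edges depth-first */
EdgeSet *with_edge(const EdgeSet *S, int e); /* fresh copy of S ∪ {e} */
void     set_free(EdgeSet *S);

/* Returns an (F,R)-admissible lambda-spanning tree, or NULL if none exists. */
EdgeSet *exact(const EdgeSet *F, const EdgeSet *R, long lambda)
{
    long lo = admissible_min(F, R), hi = admissible_max(F, R);
    if (lambda < lo || hi < lambda)            /* Steps 1-3(ii): infeasible or out of range */
        return NULL;

    EdgeSet *T = heuris_FR(F, R, lambda);      /* Steps 3(i)-4: the heuristic starts from the */
    if (weight(T) == lambda)                   /* admissible min/max tree, so an exact hit    */
        return T;                              /* covers lo == lambda or hi == lambda too     */

    int k = n_edges(F);                        /* Step 5: branch on the free edges of T       */
    EdgeSet *Fi = NULL;                        /* F_i = F ∪ {e_{k+1},...,e_{i-1}}             */
    for (int i = k; i < n_edges(T); i++) {
        const EdgeSet *Fcur = (i == k) ? F : Fi;
        EdgeSet *Ri  = with_edge(R, edge_at(T, i));         /* R_i = R ∪ {e_i} */
        EdgeSet *ans = exact(Fcur, Ri, lambda);
        set_free(Ri);
        if (ans) { if (Fi) set_free(Fi); set_free(T); return ans; }
        EdgeSet *Fnext = with_edge(Fcur, edge_at(T, i));    /* fix e_i for the next subproblem */
        if (Fi) set_free(Fi);
        Fi = Fnext;
    }
    if (Fi) set_free(Fi);
    set_free(T);
    return NULL;
}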
Example 5. Fig. 2 shows the behavior of Exact1 for the graph of Fig. 1 with λ = 2000 and ROOT = 1. Each node in this figure represents a subproblem P(F, R). Attached to its shoulder is w̲(F, R) / w̄(F, R), and the second line in italics is the weight of the (F, R)-admissible spanning tree obtained by Heuris1. Between the nodes, edge (i, j) is fixed or restricted depending on whether ◦ or × is attached to its shoulder. The algorithm starts with subproblem P0 = P(∅, ∅), and traverses the tree in a depth-first fashion, due to the recursive structure of the algorithm. P3 is terminated since it is infeasible. Thus, after producing 5 subproblems, the algorithm stops with a λ-ST found at subproblem P5.

4. Numerical experiments

We have implemented Heuris and Exact in the C language, and conducted a series of numerical experiments on an IBM RS/6000 44P Model 270 two-way workstation (CPU: POWER3-II, 375 MHz) to evaluate the performance of the developed algorithms.

4.1. Design of experiments

The instances used in our experiments are of the following four types:

1. H^U_{n,m}: a planar graph drawn on a 1000 × 1000 (pixel) rectangle. The edge weights are the Euclidean distances rounded to integers. For an example of this type of graph, see Fig. 1, which is H^U_{20,46}.
2. H^L_{n,m}: identical to H^U_{n,m} except that the edge weights are uniformly random integers over [1, 10^L] (L = 3, 4, 5).
3. K^L_n: the complete graph with n nodes. The edge weights are uniformly random over [1, 10^L].
4. R^L_{n,m}: the random graph obtained from H^L_{n,m′} by adding m − m′ more edges randomly. The weights of these edges are also uniformly random over [1, 10^L].

Here the planar graphs stand for the sparsely edged cases, the complete graphs (K^L_n) for the densely edged ones, and the random graphs represent the intermediate cases. For the data of these graphs readers are referred to [15]. Tables 1–3 summarize the characteristics of the instances, where 'mean' and 's.d.' represent the mean and the standard deviation of the edge weights. Also shown are the minimum and maximum spanning tree weights w̲ and w̄. For each instance, we try 200 points from the interval [w̲, w̄] as the target value, i.e.,

  λ_i := w̲ + i(w̄ − w̲)/200,   i = 0, 1, …, 199.                      (4)
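For reference, the target values of (4) can be generated as follows; whether the division in (4) is truncated or rounded is not stated in the paper, so the integer (truncating) division below is our assumption.

/* The 200 target values of (4), given the minimum and maximum spanning tree
 * weights; truncating integer division is assumed here.                     */
void make_targets(long wmin, long wmax, long lambda[200])
{
    for (int i = 0; i < 200; i++)
        lambda[i] = wmin + (long)i * (wmax - wmin) / 200;
}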
4.2. Results of the heuristic algorithms

Tables 4–6 give the results of the experiments for the heuristic algorithms. Here '#step', 'gap' and 'CPU' show, respectively, the number of iterations, the gap of the obtained solution to the target value, and the CPU time in seconds. These are averages of the measurements for the 200 target values given in (4).
Table 1
Planar graphs

Graph             Mean        s.d.        w̲          w̄
H^U_{100,260}     72.05       32.15       4591        9838
H^U_{200,560}     55.26       29.60       6618        15737
H^U_{400,1120}    38.95       18.88       9301        21825
H^U_{600,1680}    30.65       13.85       11362       25549
H^U_{800,2240}    26.17       11.46       13015       28872
H^U_{1000,2800}   23.35       10.02       14663       32029
H^3_{100,260}     501.35      286.99      22201       77836
H^3_{200,560}     520.13      281.02      46761       160317
H^3_{400,1120}    511.69      285.45      88095       319761
H^3_{600,1680}    508.16      287.46      126858      480378
H^3_{800,2240}    507.62      286.53      171891      637776
H^3_{1000,2800}   508.53      285.57      216644      797134
H^4_{100,260}     4839.81     2899.77     199469      765368
H^4_{200,560}     4943.34     2941.91     374845      1583310
H^4_{400,1120}    4964.37     2899.85     811015      3159283
H^4_{600,1680}    5004.59     2905.18     1217641     4778674
H^4_{800,2240}    4998.69     2871.58     1665438     6340755
H^4_{1000,2800}   4969.24     2873.36     2028588     7909117
H^5_{100,260}     47378.27    28668.25    1935491     7549913
H^5_{200,560}     48407.63    28771.92    3835900     15448854
H^5_{400,1120}    49134.01    29334.05    7724888     31704703
H^5_{600,1680}    49486.73    29085.40    12060698    47526829
H^5_{800,2240}    49471.90    28816.89    16042076    62940589
H^5_{1000,2800}   49047.82    28737.25    19613904    78488587
Since these algorithms may or may not end up with an exact λ-ST, '%hit' gives the percentage of the target values tried for which a λ-ST is found by the heuristic algorithm. From these, we conclude the following:

• For almost all cases tested, we obtain an approximate spanning tree within a few seconds. The solution is usually very close to λ in weight, and often an exact λ-ST is obtained.
• Heuris1 is faster than Heuris2, especially for the complete and random graphs. However, in solution quality (i.e., in gap and %hit) the two algorithms are almost competitive.
• As the size (n, m) of the instance increases, we usually obtain a solution of better quality, although the CPU time naturally increases.
• As L increases from 3 to 5, and thus the edge weights are distributed over a wider interval, gap increases and %hit decreases, while the CPU time remains almost the same. This deterioration of solution quality is especially marked for the planar graphs; it is much less pronounced for the complete and random graphs.
Table 2
Complete graphs

Graph        Mean        s.d.        w̲         w̄
K^3_{100}    504.20      286.56      1352       97754
K^3_{200}    499.53      288.07      1392       197725
K^3_{400}    500.77      289.23      1422       398013
K^3_{600}    500.26      288.67      1448       598082
K^4_{100}    4959.55     2902.08     9654       978328
K^4_{200}    4982.50     2892.66     10523      1978893
K^4_{400}    4978.36     2885.09     12419      3978842
K^4_{600}    4987.72     2884.11     11513      5978346
K^5_{100}    49561.57    28818.66    104907     9777045
K^5_{200}    49971.44    28886.07    126339     19786467
K^5_{400}    49904.93    28860.47    129450     39777840
K^5_{600}    49940.42    28877.33    124243     59774744
Table 3
Random graphs

Graph             Mean        s.d.        w̲          w̄
R^4_{400,5000}    4930.34     2896.27     238697      3734129
R^4_{400,10000}   4998.83     2901.02     133217      3860520
R^4_{400,20000}   4987.80     2900.66     62155       3923268
R^4_{400,40000}   4994.05     2885.42     28452       3963208
R^4_{800,10000}   4960.16     2892.60     499864      7474526
R^4_{800,20000}   4986.07     2897.00     271767      7717490
R^4_{800,40000}   4996.17     2883.44     140287      7858438
R^4_{800,80000}   4986.94     2881.81     67377       7924712
R^5_{400,5000}    49869.27    29106.60    2399791     37494547
R^5_{400,10000}   50334.83    28799.03    1335819     38586233
R^5_{400,20000}   50098.58    28927.49    650420      39281197
R^5_{400,40000}   50021.84    28909.14    301803      39601762
R^5_{800,10000}   50071.47    28013.62    5057642     74844804
R^5_{800,20000}   50098.07    28891.46    2724606     77292066
R^5_{800,40000}   50017.64    28879.84    1350440     78435327
R^5_{800,80000}   50023.22    28862.70    637804      79208166
4.3. Results of the exact algorithms

Throughout the experiments in this section, unless specified otherwise, we take the center node of G as the ROOT used to represent each spanning tree in that graph, and traverse the tree in the depth-first way starting from this node.
Table 4
Heuristic algorithms for the planar graphs

                  Heuris1                            Heuris2
Graph             #step    Gap      %hit    CPU      #step    Gap      %hit    CPU
H^U_{100,260}     35.4     0.154    96.5    0.00     12.2     0.005    99.5    0.00
H^U_{200,560}     79.4     0.005    99.5    0.01     19.3     0.000    100.0   0.01
H^U_{400,1120}    152.6    0.005    99.5    0.05     45.4     0.000    100.0   0.07
H^U_{600,1680}    226.7    0.000    100.0   0.12     73.1     0.000    100.0   0.21
H^U_{800,2240}    296.3    0.000    100.0   0.26     98.1     0.000    100.0   0.47
H^U_{1000,2800}   365.8    0.000    100.0   0.44     124.1    0.000    100.0   0.86
H^3_{100,260}     29.5     0.771    53.0    0.00     16.5     0.413    66.5    0.00
H^3_{200,560}     59.8     0.338    78.0    0.00     33.1     0.075    93.0    0.02
H^3_{400,1120}    113.0    0.055    95.0    0.02     65.1     0.005    99.5    0.09
H^3_{600,1680}    165.3    0.060    98.5    0.04     99.3     0.000    100.0   0.28
H^3_{800,2240}    219.8    0.035    98.0    0.08     130.3    0.010    99.0    0.61
H^3_{1000,2800}   274.1    0.000    100.0   0.12     162.7    0.000    100.0   1.13
H^4_{100,260}     29.1     8.338    12.5    0.00     16.6     4.746    14.0    0.00
H^4_{200,560}     60.9     2.682    26.5    0.01     34.1     1.841    32.0    0.02
H^4_{400,1120}    114.0    2.005    33.5    0.02     66.1     0.597    58.5    0.10
H^4_{600,1680}    170.0    0.796    71.5    0.04     99.6     0.294    76.0    0.28
H^4_{800,2240}    216.7    0.562    64.0    0.10     131.3    0.154    85.0    0.60
H^4_{1000,2800}   281.9    0.896    85.5    0.13     165.2    0.129    89.0    1.15
H^5_{100,260}     30.0     85.940   2.5     0.00     16.7     44.995   2.0     0.00
H^5_{200,560}     59.0     47.129   5.0     0.01     33.2     13.831   4.5     0.02
H^5_{400,1120}    117.7    9.950    12.0    0.02     67.4     6.075    7.5     0.10
H^5_{600,1680}    174.6    6.060    10.5    0.04     99.9     2.503    21.5    0.29
H^5_{800,2240}    223.4    5.682    13.5    0.12     131.1    2.781    12.0    0.63
H^5_{1000,2800}   279.7    2.910    23.5    0.16     165.1    1.866    24.0    1.17
Table 7 shows the results of the experiments for the planar graphs. Here '%exist' is the percentage of the 200 target values tested for which a λ-ST exists, and '#sub' denotes the number of subproblems generated. Similarly, Tables 8 and 9 give the results for the complete and random graphs, respectively. From these, we conclude:

• For almost all cases tested, STDP is solved exactly either by Exact1 or by Exact2 within a few seconds.
• Exact2 usually generates fewer subproblems than Exact1. This is especially the case for the planar graphs.
• For complete and random graphs Exact1 is much faster than Exact2. For planar graphs, however, this advantage of Exact1 is often reversed as L becomes large and the edge weights are thus more widely distributed.
• For planar graphs, the number of subproblems generated and the CPU time increase as L increases. The same tendency is observed, though far less markedly, for the complete and random graphs.
Table 5
Heuristic algorithms for the complete graphs

             Heuris1                            Heuris2
Graph        #step    Gap      %hit    CPU      #step    Gap      %hit    CPU
K^3_{100}    54.4     0.002    100.0   0.01     24.8     0.000    100.0   0.10
K^3_{200}    104.1    0.000    100.0   0.02     49.8     0.000    100.0   1.23
K^3_{400}    202.6    0.000    100.0   0.20     99.1     0.000    100.0   16.38
K^3_{600}    301.6    0.000    100.0   0.58     149.1    0.000    100.0   74.44
K^4_{100}    56.1     0.062    95.0    0.01     25.1     0.052    95.0    0.10
K^4_{200}    106.6    0.000    100.0   0.03     50.0     0.000    100.0   1.27
K^4_{400}    204.1    0.000    100.0   0.20     99.1     0.000    100.0   17.54
K^4_{600}    304.6    0.000    100.0   0.58     149.1    0.000    100.0   77.97
K^5_{100}    55.7     3.062    28.0    0.01     25.3     0.784    52.5    0.10
K^5_{200}    107.6    0.447    62.5    0.06     49.9     0.096    90.5    1.31
K^5_{400}    208.4    0.000    100.0   0.27     99.0     0.000    100.0   16.60
K^5_{600}    307.6    0.000    100.0   0.68     149.0    0.000    100.0   78.29
Table 6
Heuristic algorithms for the random graphs

                  Heuris1                            Heuris2
Graph             #step    Gap      %hit    CPU      #step    Gap      %hit    CPU
R^4_{400,5000}    182.2    0.026    98.0    0.03     91.6     0.014    98.5    0.65
R^4_{400,10000}   195.7    0.003    99.5    0.04     95.3     0.000    100.0   1.48
R^4_{400,20000}   204.7    0.000    100.0   0.06     97.8     0.000    100.0   3.40
R^4_{400,40000}   209.2    0.000    100.0   0.12     99.2     0.000    100.0   7.69
R^4_{800,10000}   354.5    0.000    100.0   0.14     181.2    0.000    100.0   4.34
R^4_{800,20000}   380.1    0.000    100.0   0.19     189.3    0.000    100.0   9.95
R^4_{800,40000}   393.8    0.000    100.0   0.28     193.3    0.000    100.0   21.77
R^4_{800,80000}   405.9    0.000    100.0   0.44     196.8    0.000    100.0   49.42
R^5_{400,5000}    187.4    0.715    54.5    0.04     91.8     1.014    36.5    0.66
R^5_{400,10000}   198.5    0.941    78.5    0.06     95.9     0.351    70.5    1.48
R^5_{400,20000}   209.6    0.749    94.5    0.08     98.0     0.127    87.5    3.34
R^5_{400,40000}   214.0    0.025    98.0    0.14     99.4     0.020    98.0    7.91
R^5_{800,10000}   359.2    0.314    71.5    0.17     181.6    0.614    54.5    4.40
R^5_{800,20000}   383.7    0.040    96.0    0.22     190.0    0.067    93.5    9.87
R^5_{800,40000}   398.2    0.000    100.0   0.31     193.0    0.000    100.0   22.06
R^5_{800,80000}   409.2    0.000    100.0   0.47     196.8    0.000    100.0   48.12
• In complete and random graphs with L = 5, especially for those with large n, the number of subproblems generated is usually very close to 1, which reflects the fact that Heuris almost always finds an exact λ-ST (see Table 5).
Table 7
Exact algorithms for the planar graphs

                            Exact1                 Exact2
Graph             %exist    #sub        CPU        #sub        CPU
H^U_{100,260}     100.0     1.08        0.00       1.01        0.00
H^U_{200,560}     100.0     1.02        0.01       1.00        0.01
H^U_{400,1120}    100.0     1.01        0.05       1.00        0.07
H^U_{600,1680}    100.0     1.00        0.12       1.00        0.21
H^U_{800,2240}    100.0     1.00        0.27       1.00        0.47
H^U_{1000,2800}   100.0     1.00        0.44       1.00        0.87
H^3_{100,260}     99.5      23.91       0.01       6.92        0.01
H^3_{200,560}     100.0     17.27       0.02       1.07        0.02
H^3_{400,1120}    100.0     1.05        0.02       1.01        0.09
H^3_{600,1680}    100.0     1.02        0.04       1.00        0.28
H^3_{800,2240}    100.0     1.08        0.09       1.01        0.61
H^3_{1000,2800}   100.0     1.00        0.12       1.00        1.13
H^4_{100,260}     99.5      1615.03     0.72       248.47      0.15
H^4_{200,560}     99.5      310.17      0.53       14.86       0.09
H^4_{400,1120}    99.5      113.41      0.58       31.32       0.29
H^4_{600,1680}    99.5      19.33       0.27       8.78        0.43
H^4_{800,2240}    100.0     54.64       1.72       13.79       0.93
H^4_{1000,2800}   100.0     2.35        0.20       1.17        1.24
H^5_{100,260}     99.5      47247.07    22.30      46119.10    21.63
H^5_{200,560}     99.5      16359.81    30.44      8336.97     15.60
H^5_{400,1120}    99.5      12856.00    37.89      539.34      5.72
H^5_{600,1680}    100.0     822.33      10.63      23.45       1.16
H^5_{800,2240}    100.0     333.94      8.56       106.05      6.80
H^5_{1000,2800}   100.0     4370.59     85.42      26.19       6.11
5. Further implementation issues

This section examines some alternative approaches to implementing the algorithms developed in the previous sections.

5.1. Branching strategy

As discussed in Section 3, different branching strategies may drastically change the performance of the exact algorithms. Here we solve the same instances as in Table 7, traversing the spanning trees in a breadth-first way. Table 10 summarizes the result of this calculation. Comparing the two tables, we observe no substantial difference between the two branching strategies, either in the number of subproblems generated or in CPU time.
Table 8
Exact algorithms for the complete graphs

                      Exact1               Exact2
Graph       %exist    #sub      CPU        #sub      CPU
K^3_{100}   100.0     1.00      0.00       1.00      0.11
K^3_{200}   100.0     1.00      0.03       1.00      1.29
K^3_{400}   100.0     1.00      0.22       1.00      17.09
K^3_{600}   100.0     1.00      0.65       1.00      75.46
K^4_{100}   100.0     1.12      0.01       1.05      0.11
K^4_{200}   100.0     1.01      0.03       1.00      1.28
K^4_{400}   100.0     1.00      0.23       1.00      17.72
K^4_{600}   100.0     1.00      0.67       1.00      80.61
K^5_{100}   99.5      12.07     0.07       7.59      0.27
K^5_{200}   100.0     4.65      0.11       3.11      1.54
K^5_{400}   100.0     1.00      0.29       1.00      16.65
K^5_{600}   100.0     1.00      0.72       1.00      80.35
Table 9
Exact algorithms for the random graphs

                            Exact1               Exact2
Graph             %exist    #sub      CPU        #sub      CPU
R^4_{400,5000}    100.0     19.86     0.15       19.83     0.82
R^4_{400,10000}   100.0     1.02      0.04       1.00      1.55
R^4_{400,20000}   100.0     1.00      0.06       1.00      3.53
R^4_{400,40000}   100.0     1.00      0.11       1.00      7.69
R^4_{800,10000}   100.0     1.00      0.14       1.00      4.59
R^4_{800,20000}   100.0     1.00      0.18       1.00      10.40
R^4_{800,40000}   100.0     1.00      0.26       1.00      22.81
R^4_{800,80000}   100.0     1.00      0.42       1.00      50.57
R^5_{400,5000}    100.0     142.83    0.99       62.11     2.48
R^5_{400,10000}   100.0     4.00      0.10       4.04      2.32
R^5_{400,20000}   100.0     5.22      0.13       1.16      4.12
R^5_{400,40000}   100.0     4.44      0.21       2.05      8.09
R^5_{800,10000}   100.0     27.09     1.03       2.42      10.73
R^5_{800,20000}   100.0     10.49     0.55       1.89      11.45
R^5_{800,40000}   100.0     1.00      0.29       1.00      23.02
R^5_{800,80000}   100.0     1.00      0.50       1.00      49.48
5.2. Tabu search

The algorithms developed in the previous sections were able to solve most of the complete and random instances in less than a second. However, for the planar graphs with edge weights distributed over a wide interval, the number of subproblems generated tends to be large.
Table 10
Exact algorithms for the planar graphs with breadth-first branching

                            Exact1                 Exact2
Graph             %exist    #sub        CPU        #sub        CPU
H^4_{100,260}     99.5      2025.90     0.84       331.95      0.19
H^4_{200,560}     99.5      488.30      0.82       17.39       0.08
H^4_{400,1120}    99.5      278.95      1.73       34.19       0.30
H^4_{600,1680}    99.5      52.43       0.51       32.61       0.65
H^4_{800,2240}    100.0     134.34      3.18       85.36       2.17
H^4_{1000,2800}   100.0     24.22       0.73       1.23        1.27
H^5_{100,260}     99.5      48060.13    23.20      44651.79    20.39
H^5_{200,560}     99.5      19272.59    33.67      9756.73     18.96
H^5_{400,1120}    99.5      6147.47     38.33      1774.20     15.38
H^5_{600,1680}    100.0     1673.19     18.92      22.04       1.75
H^5_{800,2240}    100.0     1005.83     12.20      48.21       4.76
H^5_{1000,2800}   100.0     1778.76     53.15      19.23       5.86
The average CPU time is usually less than 20 s in our instances, but for some target values it is sometimes more than a thousand seconds. One reason for this is the poor performance of the heuristic algorithms: as seen in Table 4, the gaps are extremely large and %hit is small for some instances. To overcome this difficulty, we incorporate tabu search into Heuris2 in the following way. First, we prepare a tabu list T̃ of maximum length TL; initially this list is empty. Then, we substitute Step 2′ of Heuris2 with the following:

Step 2″: Find T′ that minimizes dist(·, λ) in Ñ(T), and go to Step 3.

That is, we do not stop even if dist(T′, λ) ≥ dist(T, λ), and we repeat the iteration until we find a λ-ST or the number of iterations exceeds ITER cycles. Next, as in Heuris we look for a spanning tree of the form T + e − e′ in the neighborhood of T, but edges included in T̃ are not allowed to enter the spanning tree; this is why we write Ñ(T) instead of N(T) in Step 2″. On the other hand, the edge that leaves the tree is put into T̃ and remains there for a period of TL iterations.

Table 11 gives the results of the tabu search with ITER = 1000 and TL = 5, 10 and 20. From this, TL = 10 appears to be the most effective. Next, Table 12 shows the case where TL is fixed at 10. From this, ITER = 200 is clearly insufficient, while ITER = 500 and 1000 are almost competitive in solution quality. Since ITER = 500 is slightly faster than ITER = 1000, we fix ITER at 500. Finally, Table 13 gives the results of the exact algorithm incorporating tabu search in Step 4 of Exact. Comparing this against Table 7, we see that a smaller number of subproblems is generated with this method, and we were able to solve planar instances with L = 5 in less than a few seconds.
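One way to realize the tabu mechanism described above is a fixed-length circular buffer of the edges that most recently left the tree; in Step 2″ a swap whose entering edge appears in this buffer is simply skipped, which defines Ñ(T). This is our sketch under stated assumptions, not the authors' data structure.

#include <string.h>

/* Tabu list of the last TL edges that left the tree (TL is clamped to the
 * buffer size here; the experiments use TL = 5, 10, 20).                   */
typedef struct { int edge[64]; int len, pos, TL; } TabuList;

void tabu_init(TabuList *t, int TL)
{
    memset(t, 0, sizeof *t);
    t->TL = (TL > 64) ? 64 : TL;
}

int is_tabu(const TabuList *t, int e)    /* edge e may not re-enter the tree */
{
    for (int i = 0; i < t->len; i++)
        if (t->edge[i] == e) return 1;
    return 0;
}

void tabu_push(TabuList *t, int e)       /* record the edge that just left the tree */
{
    t->edge[t->pos] = e;
    t->pos = (t->pos + 1) % t->TL;       /* the entry it overwrites has expired     */
    if (t->len < t->TL) t->len++;
}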
Table 11
Tabu search with ITER = 1000

            TL = 5                     TL = 10                    TL = 20
Graph       Gap      %hit     CPU      Gap      %hit     CPU      Gap      %hit     CPU
H^4_{100}   0.433    83.58    0.03     0.060    99.01    0.01     0.070    98.51    0.02
H^4_{200}   0.039    96.02    0.04     0.005    99.50    0.03     0.005    99.50    0.03
H^4_{400}   0.050    96.52    0.14     0.020    99.50    0.11     0.020    99.50    0.11
H^4_{800}   0.015    98.51    1.11     0.005    99.50    1.05     0.005    99.50    1.04
H^5_{100}   5.209    28.36    0.10     0.517    68.16    0.07     0.756    57.71    0.08
H^5_{200}   1.403    56.72    0.22     0.323    92.54    0.13     0.358    88.56    0.15
H^5_{400}   1.159    53.23    0.71     0.104    97.51    0.30     0.075    98.01    0.35
H^5_{800}   0.348    81.10    2.51     0.060    99.01    1.67     0.060    99.01    1.68
Table 12
Tabu search with TL = 10

            ITER = 200                 ITER = 500                 ITER = 1000
Graph       Gap      %hit     CPU      Gap      %hit     CPU      Gap      %hit     CPU
H^4_{100}   0.164    88.56    0.01     0.070    98.01    0.01     0.060    99.00    0.01
H^4_{200}   0.020    98.01    0.03     0.005    99.50    0.03     0.005    99.50    0.03
H^4_{400}   0.025    99.01    0.10     0.020    99.50    0.11     0.020    99.50    0.11
H^4_{800}   ∞        74.63    0.94     0.005    99.50    1.03     0.005    99.50    1.05
H^5_{100}   2.607    22.39    0.02     0.985    41.27    0.05     0.517    68.16    0.07
H^5_{200}   0.980    49.25    0.07     0.468    80.10    0.11     0.323    92.54    0.13
H^5_{400}   0.582    58.21    0.20     0.159    92.04    0.27     0.105    97.51    0.30
H^5_{800}   ∞        48.76    1.23     0.119    94.53    1.59     0.110    95.52    1.76

∞: Includes cases where Heuris2 does not finish in 200 steps.
Table 13
Exact algorithm using tabu search (TL = 10, ITER = 500)

Graph         #sub       CPU        Graph         #sub        CPU
H^4_{100}     2.24       0.01       H^5_{100}     1170.93     3.47
H^4_{200}     10.30      0.07       H^5_{200}     629.36      2.95
H^4_{400}     30.14      0.41       H^5_{400}     4.66        0.38
H^4_{600}     8.48       0.47       H^5_{600}     18.36       0.68
H^4_{800}     13.63      0.94       H^5_{800}     40.13       1.82
H^4_{1000}    1.00       1.16       H^5_{1000}    21.85       2.06
5.3. Listing up all the λ-STs

The algorithm Exact can be modified to list up all the λ-STs. Indeed, instead of stopping after finding a λ-ST in Step 3(i) or in Step 4 of Exact, we simply continue the algorithm, using the λ-ST just obtained as the spanning tree T in Step 4. The resulting algorithm is termed Listup-All1 or Listup-All2, depending on whether Heuris1 or Heuris2 is used.
Table 14
Listing up all the λ-STs for the graph of Fig. 1

                    Listup-All1            Listup-All2
λ       #st         #sub       CPU         #sub       CPU
1768    26          3035       0.09        2003       0.06
1818    548         50718      1.53        39482      1.23
1868    4936        502187     14.50       422474     12.76
3411    2600        363677     7.48        315257     6.74
3461    295         43754      0.85        36868      0.75
3511    16          2732       0.05        2393       0.04
Table 14 gives the results of these algorithms for the graph of Fig. 1 with various values of λ. Here #st is the number of λ-STs identified, which always coincided between Listup-All1 and Listup-All2. For this instance, Listup-All2 produces a slightly smaller number of subproblems, but in CPU time the two algorithms are almost competitive.

6. Conclusion

We have formulated the spanning tree detection problem, and developed heuristic as well as exact algorithms to solve it. With these algorithms, problems with up to 1000 nodes have been solved within a few seconds. The algorithms have also been extended to list up all the spanning trees of an arbitrary specified weight. As future work, we mention refinements of the developed algorithms to solve still larger instances.

References

[1] Busacker RG, Saaty TL. Finite graphs and networks: an introduction with applications. New York: McGraw-Hill, 1965.
[2] Ahuja RK, Magnanti TL, Orlin JB. Network flows: theory, algorithms, and applications. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[3] Kruskal JB. On the shortest spanning subtree of a graph and the traveling salesman problem. Proc Amer Math Soc 1956;7:48–50.
[4] Prim RC. Shortest connection networks and some generalizations. Bell Syst Tech J 1957;36:1389–401.
[5] Papadimitriou CH, Yannakakis M. The complexity of restricted spanning tree problems. J Assoc Comput Mach 1982;29:285–309.
[6] Nakayama H, Nishizeki T, Saito N. Lower bounds for combinatorial problems on graphs. J Algorithms 1985;6:393–9.
[7] Garey MR, Johnson DS. Computers and intractability: a guide to the theory of NP-completeness. San Francisco: Freeman, 1979.
[8] Barahona F, Pulleyblank WR. Exact arborescences, matchings and cycles. Discrete Appl Math 1987;16:91–9.
[9] Read RC, Tarjan RE. Bounds on backtrack algorithms for listing cycles, paths, and spanning trees. Networks 1975;5:237–52.
[10] Hamacher HW, Queyranne M. K-best solutions to combinatorial optimization problems. Ann Oper Res 1985;4:123–43.
[11] Chen WK. Applied graph theory: graphs and electrical networks. Amsterdam: North-Holland, 1976.
[12] Baase S. Computer algorithms: introduction to design and analysis, 2nd ed. Reading, MA: Addison-Wesley, 1993.
[13] Aarts E, Lenstra JK, editors. Local search in combinatorial optimization. Chichester, England: Wiley, 1997.
[14] Sedgewick R. Algorithms in C, 3rd ed. Reading, MA: Addison-Wesley, 1998.
[15] Yamada T. http://www.nda.ac.jp/cc/users/yamada/Papers/stdp