A tabu search heuristic for the generalized minimum spanning tree problem


European Journal of Operational Research 191 (2008) 306–319 www.elsevier.com/locate/ejor

Discrete Optimization

Temel Öncan a, Jean-François Cordeau b,*, Gilbert Laporte c

a Department of Industrial Engineering, Galatasaray University, Ortaköy, Istanbul 34357, Turkey
b Canada Research Chair in Logistics and Transportation and Center for Research on Transportation, HEC Montréal, 3000 chemin de la Côte-Sainte-Catherine, Montréal, Canada H3T 2A7
c Canada Research Chair in Distribution Management and Center for Research on Transportation, HEC Montréal, 3000 chemin de la Côte-Sainte-Catherine, Montréal, Canada H3T 2A7

Received 2 October 2006; accepted 16 August 2007; available online 25 August 2007

Abstract

This paper describes an attribute based tabu search heuristic for the generalized minimum spanning tree problem (GMSTP), which is known to be NP-hard. Given a graph whose vertex set is partitioned into clusters, the GMSTP consists of designing a minimum cost tree spanning all clusters. An attribute based tabu search heuristic employing new neighborhoods is proposed. An extended set of TSPLIB test instances for the GMSTP is generated, and the heuristic is compared with recently proposed genetic algorithms. The proposed heuristic yields the best results for all instances. Moreover, an adaptation of the tabu search algorithm is proposed for a variation of the GMSTP in which each cluster must be spanned at least once.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Generalized minimum spanning tree; Tabu search

1. Introduction

Consider an undirected graph G = (V, E) where V = {v1, . . ., vn} is a vertex set partitioned into clusters V1, . . ., Vk, E = {{i, j} : i ∈ Vp, j ∈ Vq, for p, q = 1, . . ., k, p ≠ q} is an edge set, and C = (cij) is a symmetric cost matrix defined on E. The generalized minimum spanning tree problem (GMSTP) consists of determining a minimum spanning tree (MST) containing exactly one vertex per cluster. Fig. 1 illustrates a GMSTP solution with k = 5 clusters and n = 13 vertices. The GMSTP has applications, for example, in telecommunications, energy distribution and agricultural irrigation. The problem was first introduced by Myung et al. (1995) who have shown that it is NP-hard by reduction from the vertex cover problem. These authors have also provided four integer programming

* Corresponding author. Tel.: +1 514 340 6278.
E-mail addresses: [email protected] (T. Öncan), [email protected] (J.-F. Cordeau), [email protected] (G. Laporte).
doi:10.1016/j.ejor.2007.08.021


Fig. 1. An example of the GMSTP.

formulations and have developed a branch-and-bound algorithm. They have reported the exact solution of instances with up to 100 vertices. Feremans et al. (2002) have presented a theoretical comparison of eight GMSTP formulations, including those of Myung et al. (1995). In another article, Feremans et al. (2004) have proposed several classes of valid inequalities and a branch-and-cut algorithm for this problem. They have employed a basic tabu search algorithm to obtain an initial feasible solution. Optimal solutions were reported for instances with up to 200 vertices. Duin et al. (2004) have transformed the GMSTP into a Steiner tree problem, which can then be solved with any of the specialized algorithms for that problem. More recently, Pop et al. (2006) have proposed a new formulation of the GMSTP and a solution procedure that finds an optimal solution on instances with up to 240 vertices.

Feremans et al. (2004) have also suggested GMSTP adaptations of the well-known MSTP construction heuristics of Kruskal (1956), Prim (1957), and Sollin (see Ahuja et al., 1993). They have tested the three adapted heuristics and concluded that the adaptation of the Kruskal algorithm is the most effective. Golden et al. (2005) have also studied, from an experimental point of view, the straightforward adaptations of the Kruskal, Prim and Sollin algorithms. They report that although for most instances the adaptation of the Kruskal algorithm gives the best solution values, the adaptation of the Sollin algorithm yields the best average percent deviation from the optimal solution value.

The difficulty of obtaining optimal solutions for the GMSTP has led to the development of several metaheuristics. The first such algorithms were the tabu search (TS) heuristic of Feremans (2001) and the simulated annealing (SA) heuristic of Pop (2002). Two variants of a TS heuristic and four variable neighborhood search (VNS) based heuristics were later devised by Ghosh (2003).
Another VNS algorithm for the GMSTP was proposed by Hu et al. (2005). The authors report that their VNS approach can produce solutions comparable to those obtained with the second variant of the TS heuristic of Ghosh (2003). Golden et al. (2005) have devised a local search heuristic (LSH) and a genetic algorithm (GA) for the GMSTP, which were tested on the TSPLIB instances generated by Feremans (2001). Both algorithms have yielded improvements on TSPLIB instances with sizes 198 ≤ n ≤ 225 and unknown optimal solution values. The best known solution values for these instances were reported by Feremans (2001). On none of these instances did the LSH outperform the GA. Recently, another TS heuristic for the GMSTP was devised by Wang et al. (2006). The authors note that their TS heuristic produces solutions slightly better than those obtained by the GA of Golden et al. (2005).

A variation of the GMSTP involves the design of a tree spanning at least one vertex of each cluster. This version of the problem is known as the L-GMSTP, where the letter 'L' stands for the word least. The L-GMSTP was first discussed by Dror et al. (2000) who have proposed two formulations, a Lagrangian relaxation scheme, and five heuristics, including a GA. Haouari and Chaouachi (2006) have later improved the GA proposed by


Dror et al. (2000) and have devised a stochastic heuristic using the probabilistic greedy search method of Haouari and Chaouachi (2002). The authors have also analyzed three integer programming formulations and proposed an improved Lagrangian relaxation algorithm for the L-GMSTP. A branch-and-bound algorithm for this problem was also described by Haouari et al. (2005). A prize-collecting variation of the GMSTP has very recently been considered by Pop (2007). In the prize-collecting GMSTP (PC-GMSTP), vertices in each cluster compete to be selected, and each vertex offers a prize if it is selected. Pop (2007) has proposed several integer programming formulations and an efficient exact solution procedure for the PC-GMSTP.

In this paper we introduce an attribute based tabu search heuristic for the GMSTP. For all benchmark instances with known optimal solution values, the TS algorithm reaches an optimal solution within reasonable CPU times. Moreover, among the benchmark instances with unknown optimal solution values it yields new best known solutions on three instances. Encouraged by these preliminary results, we have generated larger GMSTP instances with sizes 229 ≤ n ≤ 783 using the TSPLIB test set (Reinelt, 1991). Experimental results on these new instances are reported for the LSH and GA proposed by Golden et al. (2005) and for our TS algorithm. The same instances are also tested with a new and improved GA proposed by Golden et al. (2006). The TS algorithm yields the best solution values for all instances. Moreover, the TS algorithm is also adapted to handle the L-GMSTP. On the test instances generated by Dror et al. (2000), the TS algorithm for the L-GMSTP gives results comparable to those of the GA developed by Haouari and Chaouachi (2006).

The remainder of this paper is organized as follows. Section 2 introduces our TS algorithm for both versions of the GMSTP. Section 3 presents a lower bounding procedure for the problem.
This is followed in Section 4 by computational results, and by conclusions in Section 5.

2. Tabu search

Tabu search is an iterative local search method widely used in combinatorial optimization (Glover, 1986). Starting from an initial solution, the search moves at each iteration from the current solution to the best one in a subset of its neighborhood, even if this causes a deterioration in the objective function value. To avoid cycling, solutions possessing some attributes of recently visited solutions are declared forbidden, or tabu, for a given number of iterations, called the tabu tenure. The algorithm stops whenever a preset criterion is satisfied. During the search, the algorithm maintains short term and long term memory structures. When an attribute is declared tabu, all solutions possessing this attribute are implicitly declared tabu. However, some of these solutions may in fact never have been considered by the search. To remedy this, an aspiration criterion is defined to override the tabu status of a solution. One common aspiration criterion is to allow tabu solutions yielding better solution values than that of the best known solution. We refer to the book by Glover and Laguna (1997) for more details on TS.

In our TS implementation, we apply an extension of the aspiration level concept by associating an attribute with each vertex of the graph. Hence, the attribute set A(s) of solution s is defined as the set of vertices selected for solution s. We define the tabu rule as follows: if vertex i is removed from the minimum spanning tree, reinserting it into the tree is forbidden for the next h iterations, where h is the tabu tenure. The last iteration for which attribute i is declared tabu is denoted by bi. The tabu status of an attribute can be revoked if it leads to a solution with smaller cost than that of the best solution identified having that attribute.
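As an illustrative sketch of this per-vertex bookkeeping (the class and method names are our own, not the authors' implementation; we assume a minimization objective), the tabu test with aspiration override might look like:

```python
class AttributeTabuList:
    """Per-vertex tabu status with an aspiration override (minimization).

    b[i] = last iteration at which vertex i is still tabu
    c[i] = aspiration level: best cost of any visited solution containing i
    """

    def __init__(self, tenure):
        self.tenure = tenure
        self.b = {}   # vertex -> tabu expiry iteration
        self.c = {}   # vertex -> aspiration level

    def forbid(self, vertex, iteration):
        """Vertex was just removed: forbid reinserting it for `tenure` iterations."""
        self.b[vertex] = iteration + self.tenure

    def admissible(self, vertex, move_cost, iteration):
        """Reinserting `vertex` is allowed if its tabu status has expired,
        or if the move beats the best cost seen for a solution containing it."""
        expired = self.b.get(vertex, 0) < iteration
        aspired = move_cost < self.c.get(vertex, float("inf"))
        return expired or aspired

    def record(self, vertices, cost):
        """Update aspiration levels for all attributes of the new solution."""
        for v in vertices:
            self.c[v] = min(self.c.get(v, float("inf")), cost)
```

For example, a vertex removed at iteration 10 with tenure 3 stays tabu through iteration 13 unless a move containing it improves on the best solution that ever contained it.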
The aspiration level ci of an attribute i is initially set equal to the cost of the initial solution s0 if vertex i belongs to this solution, and to +∞ otherwise. At every iteration, the aspiration level of each attribute i ∈ A(s) of the current solution is updated to min{ci, c(s)}, where c(s) stands for the cost of solution s. Given a solution s, its set of neighbor solutions is denoted by N(s). At each iteration, we consider a subset T(s) ⊆ N(s) which consists of all solutions s̄ ∈ N(s) reachable from s without incurring the risk of cycling, i.e., those that are not tabu or that satisfy the aspiration criterion.

2.1. Neighborhoods for the GMSTP

A solution s of the GMSTP is represented by an array of size k in which each element stores the vertex selected from one cluster Vp. Hence, one can obtain a solution of the GMSTP by solving the MSTP on the set of vertices in the array. In order to solve the MSTP we employ the Kruskal algorithm, which runs in O(|E| + k log k) time.
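To make this representation concrete, evaluating one such array solution (one chosen vertex per cluster, then Kruskal on the induced subgraph) can be sketched as follows. This is our own minimal illustration with a simple union-find, not the authors' implementation; `cost[i][j]` is assumed to give the edge cost for i listed before j:

```python
import itertools

def kruskal_mst_cost(vertices, cost):
    """Cost of a minimum spanning tree over `vertices` (Kruskal with union-find).

    `cost[i][j]` gives the edge cost for i appearing before j in `vertices`.
    """
    parent = {v: v for v in vertices}

    def find(v):                      # find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    edges = sorted((cost[i][j], i, j)
                   for i, j in itertools.combinations(vertices, 2))
    total, used = 0, 0
    for c, i, j in edges:             # scan edges in nondecreasing cost order
        ri, rj = find(i), find(j)
        if ri != rj:                  # edge joins two components: keep it
            parent[ri] = rj
            total += c
            used += 1
            if used == len(vertices) - 1:
                break
    return total

def gmstp_cost(representatives, cost):
    """Evaluate a GMSTP solution given as an array with one vertex per cluster."""
    return kruskal_mst_cost(list(representatives), cost)
```

A neighborhood move then simply swaps one entry of the array for another vertex of the same cluster and re-evaluates with `gmstp_cost`.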


For the GMSTP we propose the c-cluster neighborhood Nc(s), defined as follows. First select c clusters at random. For each selected cluster Vp, replace the vertex ip ∈ Vp ∩ A(s) with a different vertex jp ∈ Vp ∖ A(s). Then, reconstruct a solution from the set of selected vertices by using the Kruskal algorithm. The 1-cluster neighborhood N1(s) considers all possible |Vp| − 1 solutions as the neighborhood of s, while N2(s) considers all possible (|Vp| − 1) × (|Vq| − 1) solutions, where Vp and Vq are the two selected clusters. Clearly, the larger the number of selected clusters, the wider the search space. In our TS algorithm we employ the N1(s), N2(s) and N3(s) neighborhoods. Although N3(s) is more powerful than N1(s) and N2(s), it may require excessive CPU times to check all possible combinations. On the other hand, although the neighborhood N1(s) is the weakest, it allows the neighbor solutions to be searched faster than with N2(s) and N3(s). Hence, considering this trade-off, we propose a hybrid strategy that employs the three neighborhoods in a balanced way. More precisely, we use them consecutively following the sequence N1(s), N2(s), N3(s), N1(s), . . . We first start with neighborhood N1(s) and apply it until there is no improvement in the solution for t1 iterations. Then we switch to neighborhood N2(s) and use it until no improvement is obtained for t2 iterations. Finally, neighborhood N3(s) is employed until there is no improvement for t3 iterations, and we again switch to neighborhood N1(s). This process is repeated until the algorithm satisfies the stopping criterion. We have chosen t1 = 250, t2 = 175 and t3 = 100, which seemed to work best for most of the instances.

2.2. Neighborhoods for the L-GMSTP

Unlike what is done for the GMSTP, a solution of the L-GMSTP is represented by a binary array of size n: the ith element of this array is set equal to 1 if and only if vertex i is in the solution. As explained in Dror et al.
(2000), minimizing the cost of the spanning tree may require the inclusion of additional vertices beyond those already in the solution. Given a solution array of size n, let R denote the set of selected vertices in the solution, namely the vertices in A(s), and let S denote the set of vertices that are not in the solution, namely the vertices in V ∖ A(s). We can consider R as the terminal vertices and S as the Steiner vertices. To find a feasible topology for the L-GMSTP we need to solve a Steiner tree problem (see, e.g., Hwang et al., 1992; Chopra and Rao, 1994) with terminal vertices R and Steiner vertices S. For this purpose, we employ the insertion heuristic proposed by Takahashi and Matsuyama (1980), which runs in O(|R|n²) time. The insertion heuristic starts by removing a vertex i from the set R and assigning it to a temporary set U. At each iteration, a vertex j ∈ R closest to the set U is found. The edges of the shortest path from j to U are added to an edge list L. Then, vertex j is removed from R and inserted into U. The Steiner vertices on the shortest path are also added to U. The heuristic stops when the set R is empty. The insertion heuristic outputs the set of selected vertices U and the set of selected edges L. The steps of the insertion heuristic are summarized below:

Insertion Heuristic (Takahashi and Matsuyama, 1980)
begin
    Input R
    Initialization: set U = ∅, L = ∅
    Choose a vertex i from R and insert it into U
    while R ≠ ∅ do
        Find a vertex j ∈ R closest to the set U
        Add to the edge list L the edges of a shortest path from j to U
        Remove vertex j from R and insert it into U
        Add to U the Steiner vertices on the shortest path from j to U
    endwhile
    Output U and L
end

The definition of the c-cluster neighborhood Nc(s) discussed in the previous subsection is slightly modified for the L-GMSTP. Again, we first select c clusters at random. For each selected cluster p, we then define the set


Qp of vertices from Vp that belong to the current solution, i.e., Qp = Vp ∩ A(s). We then initialize the set R with the vertices in A(s) that do not belong to any of the selected clusters. Finally, one vertex ip ∈ Vp ∖ Qp is added to R for each selected cluster p, and the insertion heuristic is run from the set R. As a result, N1(s) considers |Vp ∖ Qp| solutions as the neighborhood of the current solution s, while N2(s) considers |Vp ∖ Qp| × |Vq ∖ Qq| solutions, where Vp and Vq are the two selected clusters. As for the GMSTP, we use the three neighborhoods by following the sequence N1(s), N2(s), N3(s), N1(s), . . . Here, we set t1 = 25, t2 = 20 and t3 = 15.

2.3. An attribute based tabu search algorithm

We now describe the steps of the TS algorithm, which are the same for both the GMSTP and the L-GMSTP; only the neighborhood definitions and the parameter settings differ.

2.3.1. Construction of an initial solution

In order to construct an initial feasible solution we randomly select a vertex from each cluster. For the GMSTP, we solve the MSTP on the set of selected vertices by using the Kruskal algorithm. For the L-GMSTP, we instead run the Takahashi and Matsuyama (1980) insertion heuristic. The resulting solution is set as the best solution s* identified so far. We should point out that the initial solution has little impact on the final result because of the quick improvements produced by the TS algorithm.

2.3.2. Initialization of parameters

We first initialize the number of times attribute i has been added to the solution. This parameter is set equal to 1 for all vertices in the current solution (i.e., ai = 1 for all i ∈ A(s), and ai = 0 for all i ∉ A(s)). Then, the last iteration for which attribute i is declared tabu is set equal to 0 for all vertices (i.e., bi = 0 for all i).
The aspiration levels ci are set equal to the initial solution cost c(s) for all vertices in the current solution (i.e., ci = c(s) for all i ∈ A(s) and ci = +∞ for all i ∉ A(s)). The number of times cluster j has been considered in the neighborhood search is set equal to 0 for all clusters (i.e., dj = 0 for all j = 1, . . ., k). Finally, the iteration counter λ is set equal to 0.

2.3.3. Neighborhood search

The first step of our neighborhood scheme is the cluster selection procedure, which is crucial for the performance of the neighborhoods N1(s), N2(s) and N3(s), and hence for the performance of the TS algorithm. As a simple rule, we can select clusters randomly. However, during the run of the TS algorithm this may cause the algorithm to repeatedly visit the same clusters, and hence the same search space, without improving the solution. Especially when the TS algorithm runs for a small number of iterations, some clusters are selected less often than others. Therefore, we propose a diversification scheme for the cluster selection based on the cluster visit frequencies dj, where dj denotes the number of times cluster j has been selected. The cluster selection is performed with probabilities proportional to the inverse of the dj values. Hence, the lower the value of dj, the higher the probability of selecting cluster j. When all clusters have been visited an equal number of times, we apply the random selection rule for at most u non-improving iterations. If the random selection of clusters has not yielded an improved solution for u iterations, we again switch to cluster selection by visit frequencies, and so on.

The next step after the cluster selection is to evaluate a subset T(s) of neighbor solutions. For this purpose, we first initialize T(s) = ∅. Then, for each neighbor solution s̄ of the current solution s, if there exists an attribute i ∈ A(s̄) ∖ A(s) such that bi < λ or c(s̄) < ci, we set T(s) = T(s) ∪ {s̄}. In other words, we add to T(s) the neighbor solutions s̄ such that either s̄ is non-tabu or the cost of s̄ is lower than the aspiration level ci of one of its attributes i ∈ A(s̄). To penalize the vertices frequently added to the solution, we define a penalized cost value for each solution s̄ in T(s) as follows:
\[
p(\bar{s}) =
\begin{cases}
c(\bar{s}) + \phi\, c(\bar{s}) \sqrt{kn} \displaystyle\sum_{i \in A(\bar{s}) \setminus A(s)} a_i / \lambda, & \text{if } c(\bar{s}) \ge c(s), \\[1ex]
c(\bar{s}), & \text{otherwise.}
\end{cases}
\]


Here, φ is a factor used to adjust the intensity of the diversification, while the square root factor √(kn), first proposed by Taillard (1993) in the context of the vehicle routing problem, is used to compensate for instance size. Finally, considering all solutions s̄ in T(s), we identify the solution s′ with minimum penalized cost value p(s′).

2.3.4. Parameters update

Considering the new solution s′, we update the following parameters. First, we update the tabu status of the removed vertices i ∈ A(s) ∖ A(s′) in order to forbid them from entering the solution for the next h iterations (i.e., bi = λ + h). Second, for each attribute i ∈ A(s′) ∖ A(s), the number of times attribute i has been added to the solution is increased by one (i.e., ai = ai + 1). Third, if the new solution is better than the current best solution (i.e., c(s′) < c(s*)), we update the current best solution (i.e., s* = s′). Finally, for each vertex of the new solution we update its aspiration level (i.e., ci = min{ci, c(s′)} for all i ∈ A(s′)).

2.3.5. Stopping criterion

Several stopping criteria can be adopted for the TS algorithm, such as a time limit, an iteration limit, or an absence of improvement for a given number of iterations. We choose our criterion so as to achieve a balance between computation time and solution quality: the TS algorithm stops when the iteration counter λ reaches the iteration limit g.

3. Lower bounding procedure

To compute lower bound values for the GMSTP we use the directed multi-commodity flow formulation proposed by Myung et al. (1995). In this formulation, for each cluster l = 2, . . ., k, a commodity l is sent from cluster 1 to cluster l, which receives exactly one unit of this commodity.
We define the following variables: continuous variables f_{ij}^l equal to 1 if and only if commodity l flows from vertex i to vertex j; binary variables y_i equal to 1 if and only if vertex i is in the solution; and binary variables w_{ij} equal to 1 if and only if vertices i and j are connected. Note that intra-cluster arcs and variables are not defined. The formulation is the following:
\[
\text{Minimize} \quad \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} w_{ij} \tag{1}
\]
subject to
\[
\sum_{i \in V_l} y_i = 1, \quad l = 1, \ldots, k \tag{2}
\]
\[
\sum_{j=1}^{n} f_{ij}^{l} - \sum_{j=1}^{n} f_{ji}^{l} = y_i, \quad i \in V_1,\ l = 2, \ldots, k \tag{3}
\]
\[
\sum_{j=1}^{n} f_{ji}^{l} - \sum_{j=1}^{n} f_{ij}^{l} = y_i, \quad i \in V_l,\ l = 2, \ldots, k \tag{4}
\]
\[
\sum_{j=1}^{n} f_{ij}^{l} - \sum_{j=1}^{n} f_{ji}^{l} = 0, \quad i \in V \setminus (V_1 \cup V_l),\ l = 2, \ldots, k \tag{5}
\]
\[
\sum_{i \in V \setminus V_l} \sum_{j \in V_l} w_{ij} = 1, \quad l = 2, \ldots, k \tag{6}
\]
\[
0 \le f_{ij}^{l} \le w_{ij}, \quad i, j = 1, \ldots, n,\ l = 2, \ldots, k \tag{7}
\]
\[
y_i, w_{ij} \in \{0, 1\}, \quad i, j = 1, \ldots, n. \tag{8}
\]

Constraints (2) ensure that exactly one vertex is chosen from each cluster. Constraints (3) state that one unit of each commodity l = 2, . . ., k leaves cluster 1, while constraints (4) state that this unit enters cluster l. Constraints (5) enforce flow conservation at intermediate vertices. Finally, constraints (6) ensure that there


exists exactly one incoming arc for each cluster. Note that constraints (3)–(5) prevent cycles in the solution. This formulation contains a polynomial number of variables and constraints. For small instances, its linear programming relaxation can thus be solved directly by any commercial solver. In our experiments we have used CPLEX 9.0.1 to solve these relaxations.

4. Computational experiments

We now present the details of our TS algorithm, which was implemented in C++ and run on a Pentium IV PC with a 3 GHz CPU and 2 GB of RAM.

4.1. Parameter calibration

The parameters used within the TS heuristic must be carefully tuned in order to obtain good results. For the GMSTP, we calibrated the parameters on the TSPLIB instances with sizes 198 ≤ n ≤ 226. We first fixed h = 7, u = 100 and g = 30,000 and performed tests on all instances using different values of the diversification parameter φ in the interval [0.01, 0.1] with increments of 0.01. We observed that for most of the instances the best value of φ lies in the interval [0.03, 0.08]. Hence we have set φ = 0.05 as the most appropriate value. To determine the most appropriate value for h we performed experiments with the other parameters fixed (namely, φ = 0.05, u = 100 and g = 30,000). Setting h = 7 seems appropriate for most of the TSPLIB instances with known optimal solution values. However, the proper value for this parameter also seems to depend on instance size. We thus performed experiments choosing the value of h from the interval [⌈3 log10 n⌉, ⌈5 log10 n⌉]. For most of the instances we observed that fixing the tabu duration h to ⌈4 log10 n⌉ gives better results. However, our preliminary tests showed that varying the tabu tenure throughout the search yields better solutions than choosing a fixed value.
Hence, every time a vertex is declared tabu, we randomly select the tabu tenure from the interval [⌈4 log10 n⌉ − 2, ⌈4 log10 n⌉ + 2]. In order to control the diversification of the cluster selection process, we use the parameter u. In this phase we successively apply two rules: random cluster selection and cluster selection according to visit frequency. At the start of the algorithm the clusters are selected randomly. Then, after u iterations without improvement of the solution value, we switch to cluster selection by visit frequency. In our preliminary experiments, we observed that the most appropriate value for u is 50. For values larger than 50 we observed that the algorithm wastes too much time selecting recently visited clusters. On the other hand, for smaller values we observed that the clusters are visited sequentially, since their visit frequencies become almost equal. To reach a balance between solution quality and CPU time we have set g = 75n iterations, with an upper bound of 40,000 in order to limit the CPU time on larger instances.

The parameters of the TS algorithm for the L-GMSTP were fine-tuned as for the GMSTP. To calibrate these parameters we performed preliminary experiments on the instances provided by Dror et al. (2000). For the L-GMSTP, we set φ = 0.05, u = 25 and g = 20n as the most appropriate values. The tabu tenure h is randomly chosen from [⌈4 log10 n⌉ − 3, ⌈4 log10 n⌉ + 3].

4.2. Extended TSPLIB instances for the GMSTP

We have generated an extended set of instances from the TSPLIB with sizes 229 ≤ n ≤ 783 by following the method used by Feremans (2001). The new set of test instances can be downloaded from http://neumann.hec.ca/chairedistributique/data/gmstp. For each instance we have applied the center clustering procedure devised by Fischetti et al. (1997). In this cluster generation process, the number of clusters is fixed to k = ⌈n/5⌉. Then, k center vertices are determined


by choosing k vertices that are as far as possible from each other. The remaining n − k vertices are then assigned to the closest center vertices, yielding k clusters. As explained by Fischetti et al. (1997), the grid clustering procedure applies to instances with known geographical coordinates (xi, yi) for i = 1, . . ., n. Considering all these coordinates, let xmin, ymin and xmax, ymax denote the minimum and maximum of the x and y coordinates, respectively. The rectangle with corners (xmin, ymin), (xmin, ymax), (xmax, ymax) and (xmax, ymin) is then subdivided into a g × g grid in which each cell has width (xmax − xmin)/g and height (ymax − ymin)/g. In this procedure, g is the smallest integer such that CLUSTER(g) ≥ n/l, where CLUSTER(g) is the number of nonempty clusters corresponding to the g × g grid. The grid clustering procedure is applied to all new instances except pa561 and si535, since for these two instances we only know the intervertex distances. For all other instances we have applied both center clustering and grid clustering with l = 3, l = 5, l = 7 and l = 10.

We should note a correction concerning the TSPLIB instances generated by Feremans (2001) for the GMSTP. For all these instances, Feremans (2001) has used the Euclidean distance (L2 metric). However, for the att type instances and for the geo type instances, pseudo-Euclidean and geographical distance functions, respectively, must be used to compute the distances (Reinelt, 1991). Hence, the cost values for the TSPLIB instances spain47, europ47, gr96, gr137, gr202 and att48 differ from the correct values. We believe, however, that this small difference does not significantly affect the results reported in earlier articles.

4.3. Comparison with several local search heuristics

We now compare our TS heuristic with several other local search heuristics.
The TS heuristic proposed by Feremans (2001) works as follows: randomly select a vertex ip ∈ Vp in the current solution and replace it with another non-tabu vertex jp ∈ Vp yielding the best solution; the selected vertex jp is then declared tabu for h = 8 iterations, and the algorithm is run for 1000n iterations. The two variants of the TS heuristic developed by Ghosh (2003) use the 1-swap neighborhood, which is defined by the following move operation: given a feasible solution of the GMSTP, one vertex ip ∈ Vp in the current solution leaves the solution while another vertex jp ∈ Vp, not in the current solution, enters it. More precisely, the 1-swap neighborhood considers all neighbor solutions that can be obtained by replacing a single vertex ip ∈ Vp in the current solution with another vertex jp ∈ Vp not in the current solution. The first variant incorporates recency based memory and aspiration rules. The second variant incorporates, in addition, frequency based memory to store the number of times a given move has been executed. Ghosh (2003) has also proposed four VNS based heuristics. The six heuristics were tested on two types of GMSTP instances: randomly generated instances and TSPLIB instances (Reinelt, 1991). Ghosh (2003) notes that the second variant of the TS heuristic typically yields the best solutions among the six heuristics.

The TS heuristic of Wang et al. (2006) uses the following search scheme, which is repeated until no improvement is observed for a preset number of iterations: randomly choose a cluster p and replace vertex ip ∈ Vp in the current solution with another vertex jp ∈ Vp. The heuristic maintains an elite candidate list of locally optimal solutions to produce an intensive search space and applies a threshold strategy to increase the diversity of the intensive search space. Wang et al.
(2006) report that their TS heuristic produces solutions slightly better than those obtained by the GA heuristic of Golden et al. (2005). Recently, a VNS algorithm for the GMSTP was proposed by Hu et al. (2005). To benchmark their VNS algorithm, Hu et al. (2005) have tested the second variant of the TS heuristic proposed by Ghosh (2003), the variable neighborhood decomposition search (VNDS) algorithm of Ghosh (2003), and the SA heuristic of Pop (2002). The authors report that their VNS approach can produce solutions comparable to those obtained by means of the second variant of the TS heuristic proposed by Ghosh. They also note that most of the time their VNS approach outperforms the algorithms of Pop (2002) and Ghosh (2003).

In Table 1 we report the experimental results on twelve TSPLIB instances for the GMSTP. In columns 3–7 we present the results reported by Hu et al. (2005). The column headings are as follows. Columns G-TS and G-VNDS give the solution costs obtained with the TS and VNDS heuristics of Ghosh (2003), respectively. Column P-SA gives the solution costs obtained with the SA heuristic of Pop (2002), as reported by Ghosh (2003). Column H-VNS gives the solution costs obtained with the VNS algorithm by


Table 1
Comparison of the proposed TS heuristic with several local search heuristics

Instance   k    G-TS(a)   G-VNDS(a)  P-SA(a)    H-VNS(a)   Limit(a)  W-TS(b)  Secs.(b)  TS(c)    Secs.(c)
gr137      28   329       330        352        329        150       na       na        329      14
kroa150    30   9815      9815       10,885.6   9815       150       na       na        9815     16
krob200    40   11,245    11,353     12,532     11,244     300       na       na        11,244   31
ts225      45   62,366    63,139     67,195.1   62,280.5   300       62,268   94        62,269   37
gil262     53   942       979        1022       943.2      300       942      230       887      74
pr264      54   21,886    22,115     23,445.8   21,890.8   300       21,886   267       21,872   72
pr299      60   20,339    20,578     22,989.4   20,347.4   450       20,316   307       20,290   94
lin318     64   18,521    18,533     20,268     18,511.2   450       18,501   350       18,471   130
rd400      80   5943      6056       6440.8     5955       600       na       na        5868     208
fl417      84   7990      7984       8076       7982       600       na       na        7935     233
pr439      88   51,852    52,104     55,694.1   51,849.7   600       na       na        51,760   574
pcb442     89   19,621    19,961     21,516     19,729.3   600       na       na        19,571   266

(a) Pentium IV 2.8 GHz and 2 GB RAM.
(b) Dual-core Xeon 3 GHz and 2 GB RAM.
(c) Pentium IV 3 GHz and 2 GB RAM.

Hu et al. (2005). Column Limit gives the time limits in seconds imposed by Hu et al. (2005) when running the heuristics. Column W-TS gives the solution costs obtained with the TS heuristic of Wang et al. (2006). In column TS we report the solution costs produced by our TS algorithm. The Secs. columns give the CPU times in seconds. Wang et al. (2006) did not perform experiments on all TSPLIB instances; hence, ''na'' indicates missing results. The platforms used to perform the experiments are listed below Table 1. Since these

Table 2
Computational results for LS, GA1 and our TS for TSPLIB instances with sizes 198 ≤ n ≤ 226

Instance   k    Lower bound  LS cost   GA cost   Secs.  TS cost

Center clustering
d198       40   7044         7044      7044      31     7044
gr202      41   242          244       243       33     242
ts225      45   62,246       62,400    62,315    37     62,269
pr226      46   55,515       55,515    55,515    39     55,515

l = 3
d198       67   8283         8285      8283      58     8283
gr202      68   292.5        293       293       64     293
ts225      75   79,019       79,085    79,019    70     79,019
pr226      84   62,527       62,527    62,527    84     62,527

l = 5
d198       40   7098         7098      7098      31     7098
gr202      41   232          234       232       34     232
ts225      45   60,578       60,713    60,659    37     60,588
pr226      50   56,721       56,721    56,721    43     56,721

l = 7
d198       32   6501         6501      6501      23     6501
gr202      31   202          203       203       23     203
ts225      35   50,813       50,813    50,813    26     50,813
pr226      33   48,249       48,249    48,249    26     48,249

l = 10
d198       25   6185         6185      6185      17     6185
gr202      21   177          177       177       14     177
ts225      25   40,339       40,339    40,339    18     40,339

Optimal solution values are in bold characters.


platforms are different, it is clear that we cannot directly compare the CPU times of these metaheuristics. Nevertheless, our TS heuristic achieves the best results in all cases except one (ts225). Finally, we report only the results of the best run of our tabu search algorithm on each instance because of the low variance observed in the final solution values.

4.4. Comparison with the LSH and GA of Golden et al.

Table 2 presents the computational results for the LSH and GA proposed by Golden et al. (2005), as well as our TS results, on the TSPLIB instances with sizes 198 ≤ n ≤ 226. The lower bound values are computed with the linear programming relaxation of the formulation of Myung et al. (1995). Although straightforward to compute, these bounds are tighter than the previous best lower bounds reported by Golden et al. (2005). Unfortunately, we were not able to compute lower bounds for larger instances with this relaxation because of prohibitive memory requirements. The results in columns LS and GA are taken from Golden et al. (2005). Note that the TS heuristic improves the best known upper bound for three instances: gr202 and ts225 with center clustering, and ts225 with grid clustering l = 5. For all other instances, the TS heuristic yields bounds at least as good as those of the LSH and GA. These preliminary results encouraged us to test our TS algorithm on larger instances. In Tables 3–7 we report experimental results on the new TSPLIB instances for the GMSTP. The experiments with the LSH and GA were performed separately by Golden et al. (2006) on a workstation with a 2.66 GHz Xeon processor and 2 GB RAM. In the LS column we report the results obtained with the LSH of Golden et al. (2005). In the GA1 column we provide the results obtained with the first version of the GA developed by Golden et al. (2005). The GA2 column is for the improved version recently developed by Golden et al. (2006). In the last column, we present the results obtained with our TS heuristic. As can be observed, for

Table 3
Computational results with the LS, GA1, GA2 and our TS on new TSPLIB instances with the center clustering procedure

Instance  k    LS cost   LS secs  GA1 cost  GA1 secs  GA2 cost  GA2 secs  TS cost   TS secs
ali535    107  114,350   374      114,379   492       114,303   595       114,303   683
att532    107  12,017    401      12,007    500       12,001    604       12,001    597
d493      99   16,512    295      16,526    388       16,493    506       16,493    587
d657      132  19,487    942      19,465    969       19,433    810       19,427    1056
fl417     84   7936      133      7936      218       7935      142       7935      233
gil262    53   890       40       887       73        887       41        887       74
gr229     46   59,740    27       59,740    39        59,740    25        59,740    63
gr431     87   86,923    218      86,899    266       86,891    200       86,885    233
gr666     134  145,015   797      144,918   2866      144,756   2296      144,756   1365
lin318    64   18,488    77       18,476    105       18,471    75        18,471    130
p654      131  22,218    641      22,214    881       22,209    678       22,208    1045
pa561     113  870       1114     868       559       864       425       864       702
pcb442    89   19,656    224      19,670    284       19,571    373       19,571    266
pr264     53   21,872    38       21,886    57        21,872    40        21,872    72
pr299     60   20,291    64       20,307    86        20,290    65        20,290    94
pr439     88   51,799    203      51,808    981       51,763    835       51,760    574
rat575    115  2187      603      2189      627       2170      542       2170      762
rat783    157  3043      1307     3044      1653      3021      1432      3017      1916
rd400     80   5884      155      5880      205       5869      171       5868      208
si535     107  12,791    302      12,791    458       12,791    487       12,791    573
u574      115  15,073    603      15,069    620       15,037    541       15,037    517
u724      145  16,039    1076     16,015    1281      15,905    1052      15,905    1290
Average        30,594.6  437      30,589.7  618       30,557.8  542       30,556.9  690


Table 4
Computational results with the LS, GA1, GA2 and our TS on new TSPLIB instances with the grid clustering procedure l = 3

Instance  k    LS cost   LS secs  GA1 cost  GA1 secs  GA2 cost  GA2 secs  TS cost   TS secs
ali535    181  134,468   1130     134,362   1449      134,370   1021      134,362   1576
att532    182  15,664    1102     15,662    1437      15,652    988       15,652    1569
d493      171  20,282    772      20,269    1058      20,269    699       20,269    1104
d657      221  25,781    2054     25,718    2541      25,714    1741      25,703    3027
fl417     142  8358      341      8355      2368      8355      1503      8353      681
gil262    95   1255      120      1255      178       1255      202       1255      204
gr229     81   74,792    87       74,792    128       74,792    72        74,792    67
gr431     149  103,844   563      103,844   2253      103,844   1406      103,844   1697
gr666     224  174,800   2055     174,683   2623      174,681   1802      174,671   3105
lin318    108  24,092    209      24,092    257       24,092    167       24,092    206
p654      230  23,561    1551     23,553    2641      23,555    5334      23,553    4127
pcb442    156  27,638    585      27,525    813       27,513    581       27,513    810
pr264     101  29,199    108      29,199    1228      29,199    128       29,199    102
pr299     102  23,096    155      23,096    1108      23,096    702       23,096    399
pr439     163  65,048    1695     64,996    824       64,983    543       64,953    932
rat575    196  2942      2854     2898      1671      2896      1192      2884      2080
rat783    285  4253      4823     4251      6081      4211      4161      4211      7180
rd400     135  7651      430      7642      646       7638      1094      7632      548
u574      198  19,743    2928     19,700    1555      19,701    1054      19,699    1340
u724      266  21,854    3408     21,823    4340      21,761    3002      21,761    5201
Average        40,416.1  1348     40,385.8  1759      40,378.9  1369      40,374.8  2042

Table 5
Computational results with the LS, GA1, GA2 and our TS on new TSPLIB instances with the grid clustering procedure l = 5

Instance  k    LS cost   LS secs  GA1 cost  GA1 secs  GA2 cost  GA2 secs  TS cost   TS secs
ali535    108  108,201   409      108,201   497       108,201   562       108,201   632
att532    110  11,903    420      11,897    634       11,896    661       11,896    641
d493      102  16,155    284      16,132    383       16,143    519       16,132    725
d657      137  19,914    952      19,878    980       19,854    763       19,846    1249
fl417     93   7952      153      7953      250       7952      163       7952      230
gil262    63   989       55       988       472       985       62        984       63
gr229     47   54,305    31       54,305    41        54,305    26        54,305    43
gr431     87   81,856    214      81,856    250       81,872    170       81,856    244
gr666     139  144,235   739      144,077   3490      144,077   2216      144,077   2192
lin318    64   17,667    66       17,667    92        17,667    59        17,667    116
p654      134  22,390    506      22,380    781       22,383    569       22,379    1316
pcb442    95   19,604    221      19,461    297       19,420    405       19,411    357
pr264     55   21,351    34       21,351    57        21,351    36        21,351    67
pr299     69   18,632    86       18,582    107       18,582    77        18,582    105
pr439     96   54,253    251      54,253    1547      54,253    976       54,230    587
rat575    121  2065      682      2036      661       2024      533       2018      705
rat783    169  2912      1594     2877      2262      2832      1420      2832      2424
rd400     81   5461      170      5450      210       5435      189       5434      284
u574      127  15,313    653      15,289    627       15,259    507       15,255    1037
u724      166  15,565    1334     15,615    1459      15,458    1075      15,458    2064
Average        32,036.2  442      32,012.4  754       31,997.5  549       31,993.3  814

all these new instances the TS heuristic yields the best solution values, indicated in boldface. We cannot directly compare the CPU times of our TS algorithm with those of the GAs of Golden et al. (2006), but


Table 6
Computational results with the LS, GA1, GA2 and our TS on new TSPLIB instances with the grid clustering procedure l = 7

Instance  k    LS cost   LS secs  GA1 cost  GA1 secs  GA2 cost  GA2 secs  TS cost   TS secs
ali535    83   94,149    242      94,149    1840      94,149    962       94,149    1012
att532    80   10,231    214      10,211    1499      10,214    1063      10,206    992
d493      78   14,174    169      14,152    225       14,152    176       14,152    212
d657      96   16,663    479      16,586    489       16,578    422       16,542    440
fl417     61   7446      69       7446      113       7446      74        7446      99
gil262    49   803       38       803       49        803       35        802       63
gr229     34   46,049    16       46,049    22        46,049    14        46,049    13
gr431     64   71,416    114      71,416    138       71,416    97        71,415    111
gr666     96   119,783   354      119,271   446       119,271   309       119,271   452
lin318    49   14,909    43       14,909    61        14,909    39        14,909    44
p654      100  21,773    2012     21,771    460       21,770    321       21,770    462
pcb442    64   14,753    105      14,709    996       14,644    102       14,644    117
pr264     43   20,438    23       20,438    37        20,438    24        20,438    28
pr299     47   15,271    38       15,241    55        15,238    36        15,238    37
pr439     74   47,101    151      47,101    217       47,101    141       47,101    140
rat575    100  1762      529      1745      475       1743      382       1735      601
rat783    121  2271      897      2238      1001      2230      747       2230      850
rd400     64   4595      109      4602      135       4581      102       4581      97
u574      92   12,230    290      12,191    349       12,186    265       12,186    304
u724      119  13,305    912      13,180    755       13,151    645       13,101    710
Average        27,456.1  340      27,410.4  468       27,403.5  297       27,398.3  357

Table 7
Computational results with the LS, GA1, GA2 and our TS on new TSPLIB instances with the grid clustering procedure l = 10

Instance  k    LS cost   LS secs  GA1 cost  GA1 secs  GA2 cost  GA2 secs  TS cost   TS secs
ali535    57   73,376    111      73,359    146       73,376    100       73,359    213
att532    57   8497      109      8497      151       8497      105       8497      187
d493      52   11,121    83       11,121    141       11,121    84        11,121    180
d657      73   13,531    244      13,495    318       13,530    275       13,495    386
fl417     42   6986      35       6986      54        6986      36        6986      75
gil262    36   639       19       639       27        639       18        639       33
gr229     23   39,793    8        39,793    16        39,793    7         39,793    28
gr431     52   62,722    73       62,722    95        62,722    64        62,722    110
gr666     70   97,229    208      97,198    251       97,198    172       97,198    315
lin318    36   10,119    352      10,119    37        10,119    24        10,119    31
p654      69   20,739    150      20,739    222       20,739    149       20,739    274
pcb442    48   11,941    74       11,941    102       11,941    58        11,941    135
pr264     27   16,546    22       16,546    17        16,546    11        16,546    28
pr299     35   11,624    25       11,624    32        11,624    21        11,624    63
pr439     48   40,518    338      40,518    80        40,518    53        40,518    117
rat575    64   1235      182      1235      206       1235      148       1235      284
rat783    81   1711      563      1697      445       1682      330       1682      468
rd400     49   3844      69       3835      78        3825      60        3825      99
u574      64   9758      146      9759      185       9755      135       9755      234
u724      80   9703      316      9633      365       9608      294       9608      365
Average        22,581.6  156      22,572.8  148       22,572.7  107       22,570.1  181

one can argue that they are comparable. Again, we have observed that our algorithm is very stable and that multiple runs on the same instance tend to produce solutions with similar costs.


Table 8
Computational results with the GA and our TS for the L-GMSTP

Instance  Optimal solution cost  GA cost  GA secs  TS cost  TS secs
DHC1      23                     23       0.02     23       0.08
DHC2      41                     41       0.01     41       0.06
DHC3      36                     36       0.03     36       0.1
DHC4      18                     18       0.01     18       0.6
DHC5      27                     27       0.11     27       0.41
DHC6      55                     55       0.12     55       0.48
DHC7      67                     67       0.2      67       0.59
DHC8      53                     53       0.3      53       0.68
DHC9      37                     37       0.26     37       0.63
DHC10     48                     48       1.64     48       2.43
DHC11     50                     50       0.46     50       1.01
DHC12     68                     68       3.24     68       7.03
DHC13     44                     44       2.36     44       5.38
DHC14     50                     50       7.13     50       16.9
DHC15     60                     60       8.25     60       19.6
DHC16     117                    118      45       118      134
DHC17     88                     89       115      89       269
DHC18     85                     86       246      86       395
DHC19     88                     91       331      94       594
DHC20     105                    120      379      124      617
Average   58                     59.05    57       59.40    120

4.5. Results for the L-GMSTP

In Table 8 we present the results of adapting our TS algorithm to the L-GMSTP, on the instances generated by Dror et al. (2000). The optimal solution values are reported in Feremans (2001). In the column GA, we report the results obtained by Haouari and Chaouachi (2006) with their GA; these experiments were performed on a Pentium IV 2.4 GHz PC. In the column TS, we report the results obtained with our TS algorithm for the L-GMSTP. As can be observed, in only two of the 20 instances (DHC19 and DHC20) does the GA yield better solutions. The TS heuristic requires more CPU time than the GA, but the attribute based TS heuristic remains promising for the L-GMSTP.
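In the L-GMSTP, each cluster must be spanned at least once, so a solution selects a nonempty subset of every cluster rather than exactly one vertex. A natural way to extend the 1-swap neighborhood to this setting is with add and drop moves; the sketch below is purely illustrative and is not the move set used by the authors, whose adaptation is described elsewhere in the paper.

```python
def at_least_one_neighbors(sel, clusters):
    """Neighbors of a vertex selection when every cluster must keep
    at least one selected vertex (L-GMSTP-style feasibility).

    A move either adds an unselected vertex or drops a selected vertex
    whose cluster would still be covered afterwards.
    """
    for cl in clusters:
        chosen = sum(1 for v in cl if v in sel)
        for v in cl:
            if v not in sel:
                yield sel | {v}        # add move
            elif chosen > 1:
                yield sel - {v}        # drop move keeps the cluster covered
    # Replacing a vertex within its cluster (the 1-swap of the GMSTP)
    # arises as an add followed by a drop.
```

A tabu search for the variant would evaluate each such neighbor by the cost of a minimum spanning tree over the selected vertices, exactly as in the one-per-cluster case.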

5. Conclusion We have developed an attribute based TS heuristic for the GMSTP and the L-GMSTP, using new neighborhoods. A new and larger set of test instances for the GMSTP was generated. The TS heuristic, two recently proposed state-of-the-art GAs and an LSH were tested on the new GMSTP instances. We have observed that the TS yields the best solutions. We have also adapted our TS algorithm to the L-GMSTP. Compared to a recent GA, the TS yields high quality results for the L-GMSTP.

Acknowledgments This research was partially funded by the Canadian Natural Sciences and Engineering Research Council under grants 227837-04 and 39682-05. The first author acknowledges the post-doctoral grant of the TÜBİTAK-BAYG ''Yurt Dışı Doktora Sonrası Araştırma Burs Programı (NATO-B1)''. The authors thank Bruce L. Golden, S. Raghavan and Daliborka Stanojević from the University of Maryland for performing additional experiments on the new data set. Thanks are due to three referees for their valuable comments which have helped improve the paper.


References

Ahuja, R.K., Magnanti, T.L., Orlin, J.B., 1993. Network Flows: Theory, Algorithms and Applications. Prentice-Hall, Englewood Cliffs, NJ.
Chopra, S., Rao, M.R., 1994. The Steiner tree problem I: Formulations, compositions and extension of facets. Mathematical Programming 64, 209–229.
Dror, M., Haouari, M., Chaouachi, J., 2000. Generalized spanning trees. European Journal of Operational Research 120, 583–592.
Duin, C.W., Volgenant, A., Voss, S., 2004. Solving group Steiner problems as Steiner problems. European Journal of Operational Research 154, 323–329.
Feremans, C., 2001. Generalized spanning tree and extensions. Ph.D. Thesis, Université Libre de Bruxelles.
Feremans, C., Labbé, M., Laporte, G., 2002. A comparative analysis of several formulations for the generalized minimum spanning tree problem. Networks 39, 29–34.
Feremans, C., Labbé, M., Laporte, G., 2004. The generalized minimum spanning tree problem: Polyhedral analysis and branch-and-cut algorithm. Networks 43, 71–86.
Fischetti, M., Salazar González, J.J., Toth, P., 1997. A branch-and-cut algorithm for the symmetric generalized traveling salesman problem. Operations Research 45, 378–394.
Ghosh, D., 2003. Solving medium to large sized Euclidean generalized minimum spanning tree problems. Technical report NEP-CMP-2003-09-28, Indian Institute of Management, Research and Publication Department, Ahmedabad, India.
Glover, F., 1986. Future paths for integer programming and links to artificial intelligence. Computers & Operations Research 13, 533–549.
Glover, F., Laguna, M., 1997. Tabu Search. Kluwer, Dordrecht.
Golden, B.L., Raghavan, S., Stanojević, D., 2005. Heuristic search for the generalized minimum spanning tree problem. INFORMS Journal on Computing 17, 290–304.
Golden, B.L., Raghavan, S., Stanojević, D., 2006. Private communication.
Haouari, M., Chaouachi, J.S., 2002. A probabilistic greedy search algorithm for combinatorial optimization with application to the set covering problem. Journal of the Operational Research Society 53, 792–799.
Haouari, M., Chaouachi, J.S., Dror, M., 2005. Solving the generalized minimum spanning tree problem by a branch-and-bound algorithm. Journal of the Operational Research Society 56, 382–389.
Haouari, M., Chaouachi, J.S., 2006. Upper and lower bounding strategies for the generalized minimum spanning tree problem. European Journal of Operational Research 172, 632–647.
Hu, B., Leitner, M., Raidl, G.R., 2005. Computing generalized minimum spanning trees with variable neighborhood search. In: Proceedings of the 18th Mini-Euro Conference on Variable Neighborhood Search.
Hwang, F.K., Richards, D.S., Winter, P., 1992. The Steiner Tree Problem. North-Holland, Amsterdam.
Kruskal, J.B., 1956. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society 7, 48–50.
Myung, Y.S., Lee, C.H., Tcha, D.W., 1995. On the generalized minimum spanning tree problem. Networks 26, 231–241.
Pop, P.C., 2002. The generalized minimum spanning tree problem. Ph.D. Thesis, University of Twente, The Netherlands.
Pop, P.C., 2007. On the prize collecting generalized minimum spanning tree problem. Annals of Operations Research 150, 193–204.
Pop, P.C., Kern, W., Still, G., 2006. A new relaxation method for the generalized minimum spanning tree problem. European Journal of Operational Research 170, 900–908.
Prim, R.C., 1957. Shortest connection networks and some generalizations. Bell System Technical Journal 36, 1389–1401.
Reinelt, G., 1991. TSPLIB – A traveling salesman problem library. INFORMS Journal on Computing 3, 376–384.
Taillard, É.D., 1993. Parallel iterative search methods for vehicle routing problems. Networks 23, 661–673.
Takahashi, H., Matsuyama, A., 1980. An approximate solution for the Steiner problem in graphs. Mathematica Japonica 24, 573–577.
Wang, Z., Che, C.H., Lim, A., 2006. Tabu search for generalized minimum spanning tree problem. In: Yang, Q., Webb, G. (Eds.), PRICAI 2006, LNAI, Vol. 4099. Springer-Verlag, Berlin, Heidelberg, pp. 918–922.