
Discrete Optimization

A tabu search algorithm for the open vehicle routing problem

José Brandão

Gestão e Administração Pública, EEG, Universidade do Minho, 4704-553 Braga, Portugal

Received 19 March 2001; accepted 15 February 2002. Available online 12 August 2003.

Abstract

The problem studied in this paper differs from the basic vehicle routing problem in that the vehicles do not return to the distribution depot after delivering the goods to the customers or, if they do so, they must visit the same customers, for the collection of goods, in the reverse order. The practical importance of this problem was established decades ago, but it has received very little attention from researchers. In this paper we present a new tabu search algorithm that explores the structure of this type of problem, and we compare its performance with another heuristic designed for the same purpose, which has been published recently. © 2003 Elsevier B.V. All rights reserved.

Keywords: Vehicle routing; Tabu search; Open vehicle routing; Hamiltonian path

1. Introduction

The open vehicle routing problem (OVRP) consists of defining the best routes for a fleet of vehicles that must service a set of customers with a given demand and known geographical location. Each route is a sequence of customers that starts at the depot and finishes at one of the customers. The constraints considered in this problem are the following: all the vehicles have the same capacity; the travelling time of each vehicle should not exceed a given threshold, which is defined by the drivers' legal travelling time; the total demand of all the customers on a route must not exceed the capacity of the vehicle; each customer is visited just once by one of the vehicles, and its requirements

E-mail address: [email protected] (J. Brand~ao).

must be completely fulfilled. The objective is to minimise the number of vehicles (we assume that each vehicle travels exactly one route) and, for a given number of vehicles, to minimise the total distance (or time) travelled by the vehicles. We assume that the OVRP studied here is undirected, but the same algorithm and computer programmes can be used for solving the directed case without any modification. However, a procedure for determining the initial solution and another for improving a route require some modifications; these are alternative procedures inside the main algorithm. The major difference between the OVRP and the well-known vehicle routing problem (VRP) is that in the OVRP each route is a Hamiltonian path, instead of a Hamiltonian cycle as in the VRP. This difference is due to the fact that the vehicles do not return to the starting depot or, if they do so, they make exactly the same trip in the
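The constraints listed above are easy to state programmatically. The sketch below is ours, not the paper's: it checks an OVRP solution (a list of open routes) against the capacity and maximum route length constraints, assuming Euclidean coordinates and a demand dictionary; function and variable names are our own.

```python
from math import hypot

def route_cost(route, coords, depot):
    """Travel time of an open route: depot -> c1 -> ... -> ck, no return leg."""
    pts = [depot] + [coords[c] for c in route]
    return sum(hypot(a[0] - b[0], a[1] - b[1]) for a, b in zip(pts, pts[1:]))

def feasible(routes, demand, coords, depot, Q, L=None):
    """Check the OVRP constraints: every customer served exactly once,
    vehicle capacity Q respected, and (optionally) route length at most L."""
    served = sorted(c for r in routes for c in r)
    if served != sorted(demand):                     # each customer once
        return False
    for r in routes:
        if sum(demand[c] for c in r) > Q:            # capacity constraint
            return False
        if L is not None and route_cost(r, coords, depot) > L:
            return False                             # driving time constraint
    return True
```

Note that `route_cost` omits the return leg to the depot, which is exactly the Hamiltonian-path structure the paper describes.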

0377-2217/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/S0377-2217(03)00238-8

J. Brand~ao / European Journal of Operational Research 157 (2004) 552–564

opposite order for the collection of goods from the customers. In practice this happens, for example, when a company contracts common carriers for the distribution of its goods since, after visiting the last customer of the route, the vehicles do not need to return to the depot. We can easily show that the OVRP is NP-hard: its resolution requires finding the best Hamiltonian path for each set of customers assigned to a vehicle. Since this subproblem is NP-hard, as stated, for example, by Syslo et al. (1983), because it can be converted into an equivalent Hamiltonian cycle problem, the whole problem is also NP-hard. The VRP has been widely studied by different researchers during the last three decades, and many good algorithms have been created for its resolution. A classical survey of the VRP and many of its extensions and applications is given by Bodin et al. (1983). Among other more recent surveys, we can cite Laporte (1992) and the book edited by Crainic and Laporte (1998). Since the exact algorithms are still unable to solve most of the practical examples, the algorithms used in practice are the approximate ones. The best of these algorithms can find optimal or near-optimal solutions in a very reasonable computing time. Among these we can mention those given by Osman (1993), Taillard (1993), Gendreau et al. (1994), Rochat and Taillard (1995), Brandão (2001) and, more recently, Toth and Vigo (submitted). This last algorithm has the particularity of finding very good solutions in a short computing time, due to the reduction of the size of the neighbourhoods by exploring only the most potentially promising moves. Since there are such good algorithms for the VRP, why not solve the OVRP as a VRP and, in the end, for each VRP route, delete the larger of the two edges between the depot and its adjacent customers? There are two reasons for not doing this: the computing time is much higher and the solutions are worse than using a specific algorithm for the OVRP.
An optimal VRP solution can originate a bad OVRP solution by that process, and vice versa. Furthermore, if maximum route length constraints are present, this approach is even less appropriate, because an infeasible VRP solution may correspond to a feasible OVRP solution. Our algorithm for the OVRP has been adapted from a VRP algorithm (Brandão, 2001) in order to take into account the many similarities between both problems but, at the same time, to exploit the differences. Contrary to the VRP, the OVRP has been studied by only a few people. So far as we know, the first author to mention the OVRP was Schrage (1981), in an article dedicated to the description of realistic routing problems, which brought attention to some of its applications. However, in terms of resolution methods for the OVRP, we are only aware of two studies: one is described in Bodin et al. (1983) and another is due to Sariklis and Powell (2000). The first one refers to a real problem of express airmail distribution in the USA and contains many practical features such as delivery or pickup time windows, total route length and the capacity of the plane. The authors of the study solved two routing problems separately, one for the deliveries and another for the pickups. They used the Clarke and Wright savings algorithm with the necessary modifications for taking into account the open routes ("incomplete", as they say) and all the other constraints that do not belong to the basic VRP. Sariklis and Powell's (2000) research is only theoretical and applies to the symmetric OVRP, without considering a maximum route length. They present a new algorithm that comprises two phases. In phase I, the customers are assigned to clusters, taking into account the capacity constraint and trying to create the minimum number of clusters. After this, reassignments of customers to different clusters are performed, according to some given rules, with the aim of reducing the travelling cost. In phase II, each cluster is transformed into an open route, starting with a minimum spanning tree (MST) and then applying to it a set of operations with the objective of converting it into a minimum cost path.
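The VRP-to-OVRP conversion argued against above is itself a one-step operation: drop the longer of the two depot edges of a closed route. A minimal sketch, assuming Euclidean coordinates (the function name `open_route` is ours):

```python
from math import hypot

def dist(a, b):
    return hypot(a[0] - b[0], a[1] - b[1])

def open_route(closed_route, depot, coords):
    """Turn a closed VRP route into an open one by dropping the longer of
    the two edges incident to the depot; the route is reversed if needed
    so that it still starts at the depot."""
    first, last = closed_route[0], closed_route[-1]
    if dist(depot, coords[first]) <= dist(depot, coords[last]):
        return list(closed_route)            # drop the return edge: last -> depot
    return list(reversed(closed_route))      # drop the edge depot -> first
```

As the text points out, applying this to an optimal VRP solution gives no guarantee at all about the quality of the resulting OVRP solution.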
When designing our algorithm we had two goals in mind: (i) To keep it simple: we only apply the basic principles of the tabu search theory, without using other refinements that usually make the algorithms much more complex, less efficient


computationally, and whose marginal benefit in terms of solution quality is normally low or null. Obviously, we try to use those tabu search concepts that we consider most important after several years of experience in using tabu search with different types of problems, as in Brandão and Mercer (1997) and Brandão (2001). There are already many studies that demonstrate the great potential of the tabu search methodology for solving combinatorial problems and, in particular, vehicle routing problems. Nevertheless, this research will show it even more clearly, since our comparison is made with a heuristic carefully designed for the OVRP structure, but one that is quickly trapped in a local optimum. (ii) To exploit the structure of the OVRP in order to enhance the quality of the solutions in a short computing time.

2. The initial solution

Any tabu search algorithm requires an initial solution from which the whole search process begins, namely the definition of the neighbourhood and the consequent first move. Most of the studies that apply tabu search, and in particular those dedicated to VRP problems, pay little attention to the initial solution. Generally, the initial solution is trivial, such as assigning each customer to a separate route, or is obtained with a fast and well-known heuristic. We think that the main reasons for this have been the belief that the initial solution has very little influence on the quality of the final solution, and the need for finding a starting solution very quickly, leaving all the improvement work for the tabu search algorithm. This practice has not impeded the attainment of very good results, because tabu search is in fact very powerful when correctly applied. However, as we will show in this article, the initial solution can be calculated quickly and still make an important contribution to enhancing the final solution. In relation to this, we should mention the article of Van Breedam (2001), which is dedicated to the VRP and states that in a tabu search algorithm "for almost all problems of

the four move types, the good initial solution gives significantly better final solutions". In this article we use two methods for obtaining the initial solutions: a nearest neighbour heuristic (NNH) and another based on a pseudo lower bound. Furthermore, we also experiment with initial solutions given by an insertion heuristic (IH) and a lower bound. We do this in order to compare them with the other two methods, but only global results are provided for these. In the selection of the method used to produce the initial solution, our goal has been to find a solution with a good structure; less importance has been given to its cost. The NNH creates a set of routes sequentially. Each route is started with the unrouted customer nearest to the depot, then the nearest admissible customer to this one is added, and so on. A customer is said to be admissible if its insertion in the route does not violate any of the constraints, namely, the capacity of the vehicle and the maximum allowable length of the route. When no customer is admissible in the route under construction, a new route is started, or the process stops if all the customers are already routed. The rationale behind this heuristic is explained in the following. The NNH has been applied frequently for solving the travelling salesman problem (TSP). Although it is a very natural heuristic for the TSP, the quality of its solutions is generally not high. The main reason for this behaviour is that the method is myopic: in the end there is no freedom of choice and the last unrouted customer must enter the tour. Usually, it is the edge that links this last customer with the initial customer that contributes most to hampering the solution. However, in the Hamiltonian path, such an edge does not exist, and so there is a better chance of good behaviour from the NNH.
Obviously, the resolution of the OVRP comprises another interrelated task, which is the formation of the clusters of customers corresponding to each of the paths. Concerning the creation of the clusters, this heuristic is rather greedy and short-sighted towards the end of the route, when the vehicle is almost full. This is because a customer is then chosen more for its demand and less for its proximity to the other members of the cluster.
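The NNH described above can be sketched directly. This is our reading of the procedure, not the author's code; it assumes Euclidean coordinates, ignores service times in the length check, and uses names of our own choosing.

```python
from math import hypot

def nnh(coords, demand, depot, Q, L=None):
    """Nearest neighbour heuristic (NNH): routes are built sequentially,
    always extending the current route with the nearest admissible
    unrouted customer; a new route starts when none is admissible."""
    def d(a, b):
        return hypot(a[0] - b[0], a[1] - b[1])
    unrouted, routes = set(demand), []
    while unrouted:
        route, load, length, last = [], 0, 0.0, depot
        while True:
            cand = [(d(last, coords[c]), c) for c in unrouted
                    if load + demand[c] <= Q
                    and (L is None or length + d(last, coords[c]) <= L)]
            if not cand:
                break
            step, c = min(cand)
            route.append(c)
            unrouted.remove(c)
            load += demand[c]
            length += step
            last = coords[c]
        if not route:      # nobody fits even in an empty vehicle
            raise ValueError("instance infeasible for the given Q and L")
        routes.append(route)
    return routes
```

The myopia discussed in the text is visible here: the last customers admitted to a route are chosen mainly because their demand still fits, not because they are close.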


The other method used to obtain the initial solution is based on a pseudo lower bound, as described in the following. Let us define K as the minimum number of vehicles necessary to service all the customers. For a given instance, the value of K is unknown, but a lower bound can be obtained through Eq. (1):

\left\lceil \sum_{i=1}^{n} d_i / Q \right\rceil, \qquad (1)

where n is the number of customers, d_i the demand of customer i and Q the capacity of each vehicle. If only capacity constraints are present, this is usually a good lower bound, being equal to K for all test problems used in this research. However, when time constraints are also considered, it is frequently loose. In any case, the algorithm accepts a value of K estimated by Eq. (1), which means that this value represents a goal that for some instances is unattainable. An MST with degree K on the depot (K-MST) defines a lower bound for the OVRP. This lower bound is, in general, quite loose, because the degree of each customer and the capacity of each vehicle are not taken into account. In spite of this, the initial solutions obtained from this lower bound have also been tried. Christofides et al. (1981) defined for the VRP a lower bound similar to the K-MST, which they called the k-degree centre tree, with K ≤ k ≤ 2K, but including a set of constraints that impose a degree of two on each customer. These constraints are not valid for the OVRP, because the last customer of each route, which is unknown beforehand, has degree one only. Fisher (1994a) formulated the VRP as a minimum cost K-tree, with degree 2K on the depot, and including two other types of constraints: degree two on each customer and the capacity constraints. The lower bound for the VRP is obtained by making a Lagrangian relaxation of these two types of constraints and then calculating the minimum degree-constrained K-tree. Neither the k-degree centre tree nor the K-tree gives a proven lower bound for the OVRP. However, the model based on the K-tree has the great advantage of taking into account the capacity and the route
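The bound of Eq. (1) is a one-liner in code. A minimal sketch (the function name `vehicle_lower_bound` is ours), assuming demands are held in a dictionary:

```python
from math import ceil

def vehicle_lower_bound(demand, Q):
    """Pseudo lower bound of Eq. (1): K >= ceil(total demand / capacity Q)."""
    return ceil(sum(demand.values()) / Q)
```

As the text notes, this bound is tight when only capacity binds, but is frequently loose once route length constraints enter.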


length constraints, which are not considered by the k-degree centre tree. Therefore, its structure should be more similar to a true lower bound for the OVRP. A K-tree is defined as a set of n + K edges that span a graph with n + 1 nodes. The 0-tree is the usual spanning tree. The minimum cost K-tree can be determined by a polynomial algorithm provided by Fisher (1994b). Very briefly, this algorithm consists of the following: first the MST of the graph is determined and then the K least cost edges not belonging to the MST are added to it. After this, appropriate interchanges of edges are made, until the degree of the depot is 2K. The starting point is the formulation of the VRP as a minimum cost K-tree; then the degree constraints and the capacity constraints are relaxed in a Lagrangian way, originating a new model that is solved by the subgradient method. We have followed Fisher's (1994a) research, but have also introduced a few modifications, especially when route length constraints are present, since Fisher (1994a) did not deal explicitly with this case. The details of the calculation can be found in Brandão (2001), but the way of dealing with the limit on the route length is presented as follows: let S be a set of customers, T the length of the MST of the graph constituted by S and the depot, and L the maximum allowable route length; then the minimum number of vehicles necessary to serve S is r_S = ⌈T/L⌉. This value of r_S is not very tight, but the computational experiments have shown that the VRP lower bounds are better than if only the demand of the customers in S and the capacity of each vehicle are taken into account for the calculation of r_S. The procedure used to obtain the initial solution from the minimum cost K-tree is explained in the following paragraphs. In the graph defined by the edges of the minimum cost K-tree, the minimum spanning tree (MST) is calculated, which is not necessarily a lower bound for the OVRP.
Then, deleting the edges incident on the depot in this MST, a given number of connected components is obtained and each will give rise to one or more routes, according to the procedure described in the following paragraph. For each connected component, the ﬁrst chain starts at the depot and follows the sequence of


customers (defined by the component) until one with a degree greater than two appears. At this point we include this customer in the chain if its demand does not exceed the capacity still available in the vehicle; otherwise, the chain ends and a new chain is started with this customer. In both cases, the next customer to enter the chain is the nearest customer in the component not yet in any chain. When this chain finishes, because a customer with degree one, or with degree greater than two but exceeding the capacity, was encountered, a new one is started at the last customer with degree greater than two that is linked to customers not yet in any chain. Each chain generated in this way is an OVRP route. The number of initial routes may be significantly larger than K. In order to reduce the number of routes and improve the initial solution before starting the tabu search algorithm, a simple heuristic method was applied, as follows. The routes are sorted by decreasing order of their free capacity. The first route of the list is then inserted into one of the other routes where the insertion is feasible (the capacity is not exceeded) and the cost of insertion is lowest. If the insertion is not feasible in any route, stop; otherwise, sort the routes again using the same criterion and repeat the process. The insertion cost is the minimum cost of inserting the whole route, in its original sequence or the reverse, between two customers (or the depot and one customer) of the destination route. Most of these OVRP solutions are infeasible in terms of the number of routes, which is greater than K, and also in terms of capacity. This is not a problem for the tabu search algorithm, because it can deal easily with infeasible solutions. However, these initial solutions, even if their cost is far from the optimum, contain very useful information for the tabu search algorithm, namely, the clusters of customers and crucial links between customers inside each cluster.
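The route-reduction heuristic described above can be sketched as follows. This is a simplified reading of the procedure (names are ours); the `insertion_cost` callback is a hypothetical stand-in for the paper's "minimum cost of inserting the whole route, in its original sequence or the reverse".

```python
def merge_routes(routes, demand, Q, insertion_cost):
    """Repeatedly take the route with the most free capacity and insert it
    whole (original or reversed order) into the cheapest feasible position
    of another route; stop when no feasible insertion exists."""
    routes = [list(r) for r in routes]
    while len(routes) > 1:
        # sort by decreasing free capacity; the first route is the candidate
        routes.sort(key=lambda r: Q - sum(demand[c] for c in r), reverse=True)
        src, rest = routes[0], routes[1:]
        best = None
        for i, dst in enumerate(rest):
            if sum(demand[c] for c in src + dst) > Q:
                continue                             # capacity would be exceeded
            for seq in (src, src[::-1]):             # original or reversed order
                for pos in range(len(dst) + 1):
                    cost = insertion_cost(seq, dst, pos)
                    if best is None or cost < best[0]:
                        best = (cost, i, seq, pos)
        if best is None:
            break                                    # no feasible insertion: stop
        _, i, seq, pos = best
        rest[i] = rest[i][:pos] + seq + rest[i][pos:]
        routes = rest
    return routes
```

A natural choice for `insertion_cost` would be the extra travel time caused by splicing `seq` into the destination route at position `pos`.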

3. The tabu search algorithm

The tabu search algorithm (TSA) starts with an initial solution, feasible or infeasible, generated by one of the methods described in the previous

section. Before starting the application of the TSA, the initial solution is submitted to one of the procedures described below, which aims to improve the solution cost by reducing the cost of each individual route: (i) The nearest neighbour (NN) method. The NN is identical to the NNH, except that the NNH is used for solving the whole problem, the OVRP, while the NN is only used for improving each route of the OVRP solution. Therefore, they are given different names to facilitate identification throughout the paper. Considering that each route is a set of customers (S) plus the depot, this method works as follows: starting at the depot, the next customer of the sequence is the customer of S not yet in the sequence that is nearest to the last one that entered the sequence. In this research this method has been applied only to the routes with three or more customers. For a route with g customers, the computational complexity of this procedure is O(g²). (ii) The unstringing and stringing (US) procedure created by Gendreau et al. (1992), with the necessary modifications for the Hamiltonian path, since the original algorithm was conceived for the TSP. This algorithm depends on a parameter p that defines the neighbourhood among the customers, and we use p = 5. The US comprises an unstringing operation followed by a stringing, which is performed by a procedure called GENI. The authors do not provide the computational complexity of the US which, given in detail, is the following: considering that these procedures (unstringing plus stringing) are applied for each customer (at least once because, if there is an improvement, there is a restart with the initial customer) and for both orientations of the route, the global computational complexity of the US is at least O(2g(g + p³ + p⁴ + g + p + p³ + p⁴)), where the first three terms correspond to the unstringing and the other four to the stringing. The neighbourhood of the current solution is defined through the set of candidates to change


their present position, which consists of all the customers, and by two types of trial moves: insert and swap. An insert move consists of taking a customer from one route and inserting it into another route or into a new route, and a swap move consists of exchanging two customers belonging to two different routes. The displaced customer is inserted between the customers (or depot and customer) of the receiving route, wherever its insertion cost is lowest. On the other hand, an insert move originates a solution with one route fewer if the customer is removed from a route containing only one customer. Since the first objective is to minimise the number of vehicles, the algorithm does not allow the creation of new routes, except while no feasible solution is known. The computing time of each iteration of the TSA depends, to a great extent, on the number of trial moves. Letting n be the number of customers, r the number of routes in the solution and n/r the average number of customers per route, the approximate number of potential moves in each iteration is the following: (i) Insert moves: n(r − 1)(n/r) = n²(1 − 1/r), because each customer can be inserted into one of the other (r − 1) routes, between each of the n/r customers of the route. (ii) Swap moves: [n(n − 1)/2] · 2(n/r), where the first term represents the total number of pairs of customers, not discounting the fact that no swap exists if both customers belong to the same route. The second term represents the total number of places inside each route where each customer can be inserted, multiplied by 2, because there are two customers involved in each swap. The approximate number of swaps is n³/r but, obviously, no swaps exist if r = 1. Therefore, we can say that the computational complexity of each iteration is approximately O(n² + n³), if the swap and insert moves are both considered.
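The two neighbourhoods can be enumerated explicitly, which makes the counting argument above concrete. A sketch with our own names; the exact counts differ slightly from the approximations in the text (a route of length g has g + 1 insertion positions, not g):

```python
from itertools import combinations

def insert_moves(routes):
    """All trial insert moves: (customer, destination route index, position)."""
    moves = []
    for i, src in enumerate(routes):
        for c in src:
            for j, dst in enumerate(routes):
                if j != i:
                    moves.extend((c, j, pos) for pos in range(len(dst) + 1))
    return moves

def swap_moves(routes):
    """All trial swap moves: pairs of customers lying on different routes."""
    where = {c: i for i, r in enumerate(routes) for c in r}
    return [(a, b) for a, b in combinations(sorted(where), 2)
            if where[a] != where[b]]
```

For two routes of two customers each this yields 12 insert moves and 4 swap pairs, in line with the n²(1 − 1/r) and n(n − 1)/2 orders of magnitude discussed above.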
The computing time has been reduced by about 30% (when only insert moves are performed this reduction is higher) by keeping in memory the cost of the trial moves that remain unchanged from one iteration to another.


A crucial feature of the TSA is the strategic oscillation that is achieved by crossing the boundary of feasibility. Therefore, a move can be performed even if the resulting solution is infeasible. Each potential move is evaluated according to the cost of the solution resulting from the move, which is given by the following equation:

\text{Cost} = \sum_{i=1}^{r} \left[ c(i) + P\,(l(i) + o(i)) \right], \qquad (2)

where c(i) is the cost (time) of route i, r is the total number of routes in the solution, P is a penalty, l(i) is the excess load in route i and o(i) is the overtime in route i. If a solution is feasible, l(i) and o(i) are equal to zero for all the routes. P ∈ [0.001, 1024] is initially equal to 1 and is multiplied by 2 if during the last 10 consecutive iterations all the solutions have been infeasible. Conversely, P is divided by 2 if all the solutions have been feasible during the last 10 consecutive iterations. Gendreau et al. (1994) use the same type of penalty adjustment for the VRP, but they use two independent penalties, one for the excess load and another for the overtime. In each iteration one chooses the trial move, without a tabu status, that generates the minimum cost solution as defined by Eq. (2). The tabu status of a move can be overruled if a solution is feasible and is better than any feasible solution known so far, or if it is infeasible and its cost is lower than the cost of any infeasible solution already known. In these two cases, not only is the tabu status of the move overruled, but the move is also immediately performed, without executing the remaining trial moves. This first-best strategy has the objective of accelerating the search but, obviously, it may impede the discovery of a better local optimum of the current neighbourhood. The tabu status of a move is defined in the following way: a customer that leaves a route cannot return to it during a given number of iterations, h = ⌈n/6⌉. This kind of tabu restriction has been used for the VRP by many authors, such as Osman (1993), Taillard (1993), Gendreau et al. (1994) and Xu and Kelly (1996). However, in all of them the tabu tenure (h) is random (Osman (1993) uses three fixed values, during a given number
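Eq. (2) and the adaptive penalty schedule can be sketched together. The class below is our illustration (the name `PenaltyManager` and the callback signatures are ours); `c`, `l` and `o` are caller-supplied functions returning the route cost, excess load and overtime of Eq. (2).

```python
class PenaltyManager:
    """Adaptive penalty P in [0.001, 1024]: doubled after 10 consecutive
    infeasible solutions, halved after 10 consecutive feasible ones."""
    def __init__(self):
        self.P, self.feas_run, self.infeas_run = 1.0, 0, 0

    def cost(self, routes, c, l, o):
        # Eq. (2): sum over routes of c(i) + P * (l(i) + o(i))
        return sum(c(r) + self.P * (l(r) + o(r)) for r in routes)

    def update(self, feasible):
        if feasible:
            self.feas_run, self.infeas_run = self.feas_run + 1, 0
            if self.feas_run == 10:
                self.P, self.feas_run = max(self.P / 2, 0.001), 0
        else:
            self.infeas_run, self.feas_run = self.infeas_run + 1, 0
            if self.infeas_run == 10:
                self.P, self.infeas_run = min(self.P * 2, 1024.0), 0
```

Because P rises while the search lingers on the infeasible side and falls while it stays feasible, the search oscillates across the feasibility boundary, which is exactly the strategic oscillation described above.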


of iterations, but the order in which those values are applied is chosen at random). This raises the problem of the reproducibility of the solutions given by these algorithms. The value chosen for h is based on our previous experience with several types of VRP problems (Brandão and Mercer, 1997; Brandão and Mercer, 1998; Brandão, 2001). Each iteration of the TSA modifies only two routes of the current solution. Each of these routes is a Hamiltonian path and, as explained before, the problem of defining the optimal path is NP-hard. Therefore, instead of using an exact method to find the optimal routes, we preferred to use two alternative heuristic methods, the NN and the US procedures already described at the beginning of this section, which are very fast and give good solutions. However, since both heuristics have very different characteristics and computational complexity, the computational experiments will show which is most appropriate for the OVRP. It must be very clear that in the current implementation of the TSA the chosen procedure is applied in each iteration only after the execution of the move. Obviously, this could have been implemented differently, by applying it to each trial move, in order to evaluate more precisely the cost of each move. However, that implementation has the disadvantage of increasing the computation time very substantially. Therefore, with the current programme, most of the computation time is spent on the trial moves, especially for the higher frequencies of the trial swap moves, and not on the procedure that improves the changed routes. In the TSA, there are three ways of controlling the execution time: the maximum total number of iterations (N1), the maximum number of iterations without improvement of the best known feasible solution (N2) and the maximum number of iterations without improvement of the best known feasible or infeasible solution (N3).
The execution of the programme is stopped when the numbers of iterations N1 and N2 are both attained, or when the value N3 is reached. Therefore, the total number of iterations is not known in advance, depending on the evolution of the search. If during a given number of iterations (M) there is no improvement either in the best known infeasible solution or in

the best known feasible one, the search restarts from the best known feasible solution (or the best infeasible one if none is feasible). With this restart, the tabu list is emptied. The aim of these restarts is to promote intensification. So far as we know, there is no other tabu search algorithm for the VRP or its variants that uses restarting. The values of all these parameters have been established as a function of the problem size (n) and in a way that allows them to adapt to the search process. For example, the combination of N1 and N2 as a stopping criterion allows the search to go on if a new best solution has been discovered recently, indicating that the algorithm is exploring a promising region. A similar reasoning is behind the use of N3. On the other hand, N2 should be greater than M to give time for improvement after the restart. When all the parameters are fixed, there is no reason for using N3 greater than twice M, since after two restarts without improvement the algorithm starts cycling. In order to try to reduce the number of routes in the solution, there is a mechanism, not related to the tabu search, which consists of joining the two routes with the lowest demand. This mechanism is only applied if the following two conditions are satisfied simultaneously: (i) the number of routes in the current solution is greater than the predefined value K; (ii) the current iteration is greater than 50√n and at least 200 iterations have been executed since the last application of this procedure, or at each restart.
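The stopping and restart rules can be sketched as two small predicates. The parameter values plugged in below (N1 = 300√n, N2 = 5n, N3 = 8n, M = 4n) are the ones reported in the computational experiments of Section 4; the function names are ours.

```python
from math import sqrt

def should_stop(n, iters, no_feas_impr, no_any_impr):
    """Stop when both N1 (total iterations) and N2 (iterations without a
    feasible improvement) are reached, or when N3 (iterations without any
    improvement, feasible or infeasible) is reached."""
    N1, N2, N3 = 300 * sqrt(n), 5 * n, 8 * n
    return (iters >= N1 and no_feas_impr >= N2) or no_any_impr >= N3

def should_restart(n, no_any_impr):
    """Restart from the best known solution after M = 4n iterations with no
    improvement in either the best feasible or best infeasible solution;
    the tabu list is then emptied."""
    return no_any_impr >= 4 * n
```

Note that N3 = 2M, consistent with the remark above that a larger N3 is pointless once two restarts have passed without improvement.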

4. Computational experiments

All the programmes of this article have been written in the C language and executed on a Pentium III HP Vectra VEi8 at 500 MHz. The test problems, which have been taken from Christofides et al. (1979) and from Fisher (1994a), are identified by their original number, prefixed, respectively, with the letters C and F. The problems C1–C5, C11, C12, F11 and F12 have no driving time constraints, while for all the others the maximum route length (L) is equal to the original multiplied


Table 1
Characteristics of the test problems

Problem   n     K    C      L
C1        50    5    0.97   –
C2        75    10   0.97   –
C3        100   8    0.91   –
C4        150   12   0.93   –
C5        199   16   0.996  –
C11       120   7    0.98   –
C12       100   10   0.91   –
F11       71    4    0.96   –
F12       134   7    0.95   –
C6        50    6    0.81   180
C7        75    10   0.89   144
C8        100   9    0.81   207
C9        150   13   0.80   180
C10       199   17   0.94   180
C13       120   11   0.63   648
C14       100   11   0.82   936

by 0.9. This transformation is required because the original problems were conceived for the VRP and not for the OVRP; otherwise, the driving time constraints are loose, being non-binding for almost all solutions. All these problems are Euclidean and it is assumed that the distance between each pair of customers is equal to the travelling time; for the problems with a maximum driving time there is also a service time at each customer, but in the solutions we report only the travelling time. Some of the characteristics of these problems are given in Table 1, where the value of K has been estimated, the capacity ratio (C) stands for \sum_{i=1}^{n} d_i/(QK), and the value of L for the problems without a driving time limit is not indicated. The objective of the computational experiments is to evaluate the performance of the TSA in terms of quality of the solutions and computing time, and to determine the influence of several procedures and parameters on that performance. Another aim is to compare the performance of the TSA with Sariklis and Powell's (2000) algorithm, which is done at the end of this section. Since the total number of combinations of procedures and parameters is very large, it is necessary to conduct the experiments in a very systematic way, in order to guarantee that the most important factors are considered without trying all possible combinations. The NN heuristic used for improving the routes is much faster than the US heuristic and, therefore, it will be used to determine the influence of the following factors: initial solution, tabu list size, fixed versus random tabu tenure, and swap moves. Obviously, while the factor under test is changed, all the others must remain stable. Therefore, in each experiment only the parameters under test are mentioned. In all the experiments we used the 16 test problems presented in Table 1. The TSA always produces solutions with the minimum number of routes for the

problems without route length constraints, independently of the set of parameters used. The total number of iterations is controlled by the parameters N1 = 300√n, N2 = 5n, N3 = 8n and M = 4n. The computational experience with this and other tabu search algorithms shows that in the beginning the solution improves very quickly and after some hundreds of iterations the improvement is, in general, very slow. Therefore, the values of these parameters have been established after some experiments, trying to get a good compromise between solution quality and computing time.

4.1. Influence of the tabu tenure and the trial swap moves

For choosing the tabu tenure, we experimented with the values ⌈n/6⌉, ⌈n/3⌉ and ⌈n/2⌉, designated, respectively, h, h3 and h2. Simultaneously, we tested the following frequencies of trial swap moves: no swap moves (S0), every 10 iterations (S10), every five iterations (S5), every three iterations (S3) and every iteration (S1). Thus, this originates 15 different experiments, which allowed us to draw the following main conclusions:

• On average, the lowest tabu tenure gives slightly better solutions than the other two, both in terms of number of routes and travelling time, and h2 presents the worst average behaviour.

• When the frequency of the trial swap moves increases, independently of the tabu tenure, there is a clear trend of improvement of the solutions, both in the number of routes and in the routing time. However, this does not always happen, i.e., there are a few exceptions, either on average or for individual solutions.

• The largest percentage gain resulting from the trial swap moves occurs for those problems with a larger number of customers and routes,


J. Brand~ao / European Journal of Operational Research 157 (2004) 552–564

especially when driving time constraints are present. Moreover, the percentage difference in routing time from S0 to S10 is higher than between any other two consecutive frequencies (S10, S5, etc.). Both facts can be explained by the existence of more scope for improvement in the respective solutions, since they should be far from the optimum. In this paper, the frequency used is S10, unless stated otherwise.
• The computing time increases very substantially when the frequency of the trial moves increases, as expected from the computational complexity analysis presented in the previous section.

In order to study the influence of a random tabu tenure on the performance of the TSA, its value was chosen randomly, in each iteration, in the interval [n/6, n/2] (the quotients are rounded up to the nearest integer). Since the TSA thus becomes stochastic, it was executed 10 times for each of the five swap frequencies defined previously. For each frequency of trial swap moves (and each individual problem), we registered the best and the average of the 10 runs, arriving at the following conclusions: the best solutions are considerably better than those obtained with a fixed tabu tenure (both in terms of number of routes and travelling time); the average values are almost identical to those obtained with the fixed tabu tenures, considering the average solutions given by the three values defined previously; the average CPU time is also almost the same.

4.2. Influence of the initial solution

The methods used for producing the initial solutions, other than the NNH, are the following:

(i) Generated from the pseudo lower bound based on the K-tree. The subgradient method, which is applied to improve the pseudo lower bound, was executed for 1200 iterations. Each iteration gives a pseudo lower bound, but only the best 10 have been chosen. Each of these 10 initial solutions is submitted to the TSA and the best three different solutions are selected (if, for example, all the ten

solutions are identical, only one is chosen). The TSA is then applied again to these three, and the best is the final solution. The time for computing the K-trees (62 seconds on average) is not included in the TSA computing time because they are computed in advance.
(ii) From the lower bound given by the K-MST. This lower bound is transformed into an OVRP solution by the same process described before for the K-tree.
(iii) The IH, starting each route with the unrouted customer nearest to the depot.
(iv) A random procedure (RH) that works as follows: each route is constructed sequentially, starting with a randomly chosen customer, and choosing the next customer to enter the route at random. This customer is inserted at the end of the route if its insertion does not violate the capacity and route length constraints; otherwise, a new route is started with it.

In our experiments, the TSA can take as initial solutions those given by the K-tree (10 different K-trees are used to produce 10 initial solutions), the NNH, the K-MST, the IH and the RH (this last procedure is only used to produce seven different initial solutions, which, jointly with those of the NNH, the K-MST and the IH, give a set of ten different solutions that are used by the TSA in the same way as the ten solutions generated from the K-trees). To facilitate identification, when the TSA uses these initial solutions we call it, respectively, TSA-K-tree, TSA-NNH, TSA-K-MST, TSA-IH and TSA-RH. The use of several initial solutions, as happens for the TSA-K-tree and the TSA-RH, contributes significantly to improving the performance of the TSA, as we will show in this section. Rochat and Taillard (1995) have also used 20 initial solutions in their algorithm, but in a very different way: they are obtained by partitioning the problem into subproblems and by applying a tabu search method, and they are then combined to produce other solutions of higher quality.
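The RH constructor described in (iv) can be sketched as follows. This is our own minimal Python illustration, not the paper's implementation; the function and parameter names are hypothetical, and we assume every customer is feasible on its own (its demand does not exceed the vehicle capacity Q):

```python
import random

def random_initial_solution(customers, demand, Q, route_cost, L=None):
    """Sketch of the RH constructor: build routes sequentially,
    appending customers in random order while the vehicle capacity Q
    and, if given, the route length limit L are respected; a customer
    that cannot be appended starts a new route."""
    unrouted = list(customers)
    random.shuffle(unrouted)              # random customer order
    routes, current, load = [], [], 0.0
    for c in unrouted:
        extended = current + [c]
        if load + demand[c] <= Q and (L is None or route_cost(extended) <= L):
            current, load = extended, load + demand[c]
        else:                              # start a new route with c
            routes.append(current)
            current, load = [c], demand[c]
    routes.append(current)
    return routes
```

In the paper, seven such solutions are combined with those of the NNH, the K-MST and the IH to form the ten starting points used by the TSA-RH.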
First we compare the performance of the TSA-NNH, the TSA-K-MST and the TSA-IH, considering the global average, i.e., the average over the sixteen test problems and over the five different


frequencies of trial swap moves. For the TSA-K-MST, the average number of routes is 0.38% higher and the routing time is 1.17% lower than for the TSA-NNH. Conversely, for the TSA-IH, the average number of routes is 0.12% lower and the routing time is 0.25% higher than for the TSA-NNH.

In Table 2 we present the results given by the TSA-K-tree and the TSA-RH. These results show that the solutions of the TSA-K-tree are consistently better; the global average differences are the following: 0.12% for the number of routes, 0.66% for the routing time and 2.86% for the computing time, always in favour of the TSA-K-tree. The diversity of the 10 initial solutions in each of the cases (TSA-K-tree and TSA-RH) can be measured by the number of common edges among them for each of the test problems (in the case of the TSA-K-tree we compare the MSTs from which the initial solutions are generated, instead of the solutions themselves, which are generally infeasible). For simplicity, we present the global averages for the sixteen problems: the range of variation is 65–83% and 5–55% for the TSA-K-tree and the TSA-RH, respectively, and the mean is, respectively, 74% and 30%. This shows that the better results given by the TSA-K-tree are not due to a higher diversity of the initial solutions, but to their intrinsic quality. Another indication of this quality is the fact that the average number of common edges between the best solutions discovered, for the sixteen test problems, and the MSTs from which their generation started, is 62%.

Comparing the solutions given by the TSA-K-tree with those produced by the TSA-NNH, we can conclude the following: in this last case the global average number of routes and travelling


time are, respectively, 1.28% and 3.27% higher, but the computing time is about eleven times smaller. Another way of showing the influence of the initial solution is given by the following experiment: the TSA-NNH was executed for the same time used by the TSA-K-tree, which is implicit in Table 2, for each individual problem. In order to do this, the values of N1, N2 and N3 are unlimited and the value of M is much higher, 40n, to avoid cycling while still allowing for several restarts. From this experiment we can draw the following conclusions: considering the average for each frequency of the trial swap moves, the TSA-K-tree always performs better, both in terms of number of routes and travelling time; in relation to the global average, the percentage increases of the TSA-NNH over the TSA-K-tree are 0.92% and 1.67% for the number of routes and routing time, respectively.

4.3. Influence of the route improving procedures NN and US

In order to compare the performance of the improving procedures NN and US, we again executed the TSA-NNH and the TSA-K-tree, considering the five frequencies of trial swap moves, using the US instead of the NN procedure. The results of these experiments allow us to conclude the following:

(i) For the TSA-NNH: Considering the averages for each frequency of trial swap moves, we find that in terms of number of routes, for some frequencies the NN performs better and for others the opposite is true. In terms of routing time, the US always performs better. Considering the global averages, we can

Table 2
Average solutions when the initial solutions are given by the K-trees and by other methods

           TSA-K-tree                                    TSA-RH
           S0      S10     S5      S3      S1            S0      S10     S5      S3      S1
r          9.94    9.81    9.81    9.75    9.75          9.94    9.81    9.81    9.81    9.75
Time       652.3   652.1   649.0   651.0   644.0         658.6   657.5   653.9   651.7   648.4
CPU (s)    141     316     470     760     1761          145     337     542     750     1771
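The edge-commonality measure used to assess the diversity of these initial solutions (the percentage of edges shared by a pair of solutions) can be sketched as follows. This is our own illustration: we assume undirected edges and that each open route contributes its depot-to-first-customer edge; the paper does not spell out the exact convention:

```python
def edge_set(routes, depot=0):
    """Undirected edges of an OVRP solution, where each route is a
    Hamiltonian path starting at the depot (node 0)."""
    edges = set()
    for route in routes:
        prev = depot
        for c in route:
            edges.add(frozenset((prev, c)))
            prev = c
    return edges

def common_edge_pct(sol_a, sol_b):
    """Percentage of sol_a's edges that also appear in sol_b."""
    ea, eb = edge_set(sol_a), edge_set(sol_b)
    return 100.0 * len(ea & eb) / len(ea)
```

For example, the solutions [[1, 2, 3], [4, 5]] and [[1, 2], [3, 4, 5]] share the edges 0–1, 1–2 and 4–5, i.e., 60% of the first solution's five edges.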



Table 3
Comparison of solutions

        A                  B                          C                          D                           E
        Time      CPU      Time      CPU     Df      Time      CPU     Df      Time     CPU      Df        Time      Df
C1      488.2     0.22     459.6     0.09    6.2     438.2     1.7     11.4    416.1    88.8     17.3      416.06    17.3
C2      795.3     0.16     782.9     0.06    1.6     584.7     4.9     36.0    574.5    167.5    38.4      567.14    40.2
C3      815.0     0.94     685.5     0.38    18.9    643.4     12.3    26.7    641.6    325.3    27.0      640.42    27.3
C4      1034.1    0.88     888.7     0.35    16.4    767.4     33.2    34.8    740.8    870.2    39.6      734.72    40.7
C5      1349.7    2.20     1324.4    0.88    1.9     1010.9    116.9   33.5    953.4    1415.0   41.6      914.51    47.6
C11     828.3     1.54     722.3     0.62    14.7    713.3     15.7    16.1    683.4    696.0    21.2      683.02    21.3
C12     882.3     0.76     665.9     0.30    32.5    543.2     7.8     62.4    535.1    233.6    64.9      534.40    65.1
Mean    884.7     0.96     789.9     0.38    13.2    671.6     27.5    31.6    649.3    542.3    35.7      641.47    37.1
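The 'Df' values in Table 3 (defined in the text just below it) are plain percentage increases of Sariklis and Powell's routing time over the TSA's. The helper below is our own illustration, assuming the one-decimal rounding shown in the table; it reproduces, for example, the C1 entries:

```python
def df(time_sp, time_tsa):
    """Percentage increase of Sariklis and Powell's routing time
    over the TSA's, rounded to one decimal as in Table 3."""
    return round(100.0 * (time_sp - time_tsa) / time_tsa, 1)

# Problem C1: column A (488.2) against columns B (459.6) and C (438.2).
assert df(488.2, 459.6) == 6.2   # 'Df' under column B
assert df(488.2, 438.2) == 11.4  # 'Df' under column C
```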

show that the number of routes is 0.14% lower, the routing time is 1.48% higher and the computing time is 24.76% lower when the NN is used, i.e., the NN procedure only performs worse in terms of routing time.
(ii) For the TSA-K-tree: Considering the averages for each frequency, the US always gives equal or better results in terms of number of routes, and always better results in terms of routing time. Considering the global averages, the number of routes and the travelling time are 0.12% and 0.43% lower, respectively, when the US is used, but the computing time is 37.27% higher.
(iii) The use of the US, instead of the NN, affects the computing time much more strongly at the lower frequencies of trial swap moves. For example, without swap moves this time roughly triples, both for the TSA-NNH and for the TSA-K-tree.

We performed another experiment with the TSA-NNH, using the US, which consisted of stopping the execution when the computing time, for each individual problem, was the same as that required by the NN. In this case, the NN produced slightly better results, both in terms of number of routes and routing time.

4.4. Comparison of the TSA algorithm with an algorithm from the literature

In this subsection we compare the performance of the TSA with Sariklis and Powell's (2000) algorithm, which is, as far as we know, the best published. For the comparison, we use the same test problems as these authors, excluding the smallest ones (between 6 and 32 customers), which are not used because they are too easy to solve. The results are presented in Table 3, where 'Df' is the percentage increase, in relation to the routing time, of Sariklis and Powell's (2000) algorithm over the TSA, CPU is the computing time in seconds, and the top of the columns indicates the following:

A––The solutions given by Sariklis and Powell (2000).
B––The solutions given by the TSA-NNH with S10, but stopping the execution when the computing time equals the value given in column A divided by 2.5. Therefore, we are assuming that our computer is 2.5 times faster than the one used by Sariklis and Powell (2000), a Pentium at 133 MHz.
C––The solutions given by the TSA-NNH with S10.
D––The solutions produced by the TSA-K-tree, with S10 and using the US.
E––The best solutions, according to the number of routes and, for the same number of routes, the routing time, found during the experiments described in this section.

The number of routes is never reported in Table 3, because it is the minimum possible, given in Table 1 as K. The results of Table 3 show that the TSA performs much better than Sariklis and



Powell's (2000) algorithm, even for the "same" computing time; the average percentage difference is 13.2%. Column C of this table shows that it is possible to improve the solutions much further, widening the percentage difference to 31.6%, with an increase in the computing time that is still very low in practical terms. The solutions under column D are even better, but require substantially more computing time as well as the use of two more complex procedures, the pseudo lower bounds and the US.

We finish this section by presenting Table 4, which contains the solutions for cases C, D and E, as in Table 3, for the other test problems, not solved by Sariklis and Powell (2000). The number of routes is given (after the slash) if it is different from the value K of Table 1.

Table 4
Solutions found for the other test problems

           F11       F12       C6        C7           C8        C9           C10       C13       C14          Mean
C  Time    179.45    825.9     416.00    580.97/11    652.09    827.57/14    946.84    994.26    651.92/12    675.00
   CPU     5.7       32.7      1.4       3.4          10.4      25.2         100.1     25.8      8.1          23.6
D  Time    177.40    781.21    412.96    634.54       644.63    785.20       884.63    943.66    597.31       651.28
   CPU     398.1     1000.2    55.8      123.7        351.7     622.2        2060.3    401.9     419.8        603.7
E  Time    177.00    771.75    412.96    583.19       644.63    767.27       879.22    916.70    592.18       638.32

5. Conclusions

In this research we created a tabu search algorithm that is able to find very good solutions for the OVRP in a very short computing time, strongly outperforming Sariklis and Powell's (2000) algorithm. We have introduced an innovative concept that consists of integrating the lower bounds of a problem into the tabu search. We are convinced that this technique could be applied much more widely in the future, to all types of combinatorial optimisation problems for which simple and fast methods of determining good lower bounds exist. The US procedure of Gendreau et al. (1992) has been adapted for the Hamiltonian path and, alternatively, a NNH was used for the same purpose. In spite of the former being more powerful than the latter, especially for routes with a large number of customers, the latter is also very effective. Inside the tabu search algorithm, it also performs better than the US procedure if the available computing time is low.

Frequently, the literature contains claims that a random tabu tenure produces better solutions than a fixed one, but this statement deserves clarification. Our experiments confirm that the best solution of several runs is generally better than the solution obtained with a fixed tabu tenure included in the interval of variation of the random tabu tenure. However, the mean of the solution values obtained with the random tabu tenure is similar to the solution value obtained with that fixed tabu tenure.

Acknowledgements

This research was supported by Fundação para a Ciência e Tecnologia under the program PRAXIS XXI, project no. 3/3.1/CEG/2661/95. The author is indebted to the anonymous referees for their valuable comments on this article.

References

Bodin, L., Golden, B., Assad, A., Ball, M., 1983. Routing and scheduling of vehicles and crews: The state of the art. Computers and Operations Research 10 (2), 63–211.
Brandão, J., 2001. A lower bound based meta-heuristic for the vehicle routing problem. In: Ribeiro, C., Hansen, P. (Eds.), Essays and Surveys in Metaheuristics. Kluwer Academic Publishers, pp. 151–168.
Brandão, J., Mercer, A., 1997. A tabu search algorithm for the multi-trip vehicle routing and scheduling problem. European Journal of Operational Research 100 (1), 180–191.
Brandão, J., Mercer, A., 1998. The multi-trip vehicle routing problem. Journal of the Operational Research Society 49, 799–805.
Christofides, N., Mingozzi, A., Toth, P., 1979. The vehicle routing problem. In: Christofides, N., Mingozzi, A., Toth, P., Sandi, C. (Eds.), Combinatorial Optimization. Wiley, Chichester, pp. 313–338.
Christofides, N., Mingozzi, A., Toth, P., 1981. Exact algorithms for the vehicle routing problem, based on spanning tree and shortest path relaxations. Mathematical Programming 20, 255–282.
Crainic, T., Laporte, G., 1998. Fleet Management and Logistics. Kluwer Academic Publishers.
Fisher, M., 1994a. Optimal solution of vehicle routing problems using minimum k-trees. Operations Research 42 (4), 626–642.
Fisher, M., 1994b. A polynomial algorithm for the degree-constrained minimum k-tree problem. Operations Research 42 (4), 775–779.
Gendreau, M., Hertz, A., Laporte, G., 1992. New insertion and post-optimization procedures for the traveling salesman problem. Operations Research 40 (6), 1086–1094.
Gendreau, M., Hertz, A., Laporte, G., 1994. A tabu search heuristic for the vehicle routing problem. Management Science 40 (10), 1276–1290.
Laporte, G., 1992. The vehicle routing problem: An overview of exact and approximate algorithms. European Journal of Operational Research 59 (3), 345–358.
Osman, I., 1993. Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem. In: Glover, F., Laguna, M., Taillard, E., de Werra, D. (Eds.), Annals of Operations Research, vol. 41. J.C. Baltzer AG, Science Publishers, Basel, Switzerland, pp. 421–451.
Rochat, Y., Taillard, E., 1995. Probabilistic diversification and intensification in local search for vehicle routing. Journal of Heuristics 1, 147–167.
Sariklis, D., Powell, S., 2000. A heuristic method for the open vehicle routing problem. Journal of the Operational Research Society 51, 564–573.
Schrage, L., 1981. Formulation and structure of more complex/realistic routing and scheduling problems. Networks 11, 229–232.
Syslo, M., Deo, N., Kowalik, J., 1983. Discrete Optimization Algorithms with Pascal Programs. Prentice Hall.
Taillard, E., 1993. Parallel iterative search methods for vehicle routing problems. Networks 23, 661–673.
Toth, P., Vigo, D. The granular tabu search (and its application to the vehicle routing problem). INFORMS Journal on Computing, submitted for publication.
Van Breedam, A., 2001. Comparing descent heuristics and metaheuristics for the vehicle routing problem. Computers and Operations Research 28, 289–315.
Xu, J., Kelly, J., 1996. A network flow-based tabu search heuristic for the vehicle routing problem. Transportation Science 30, 379–393.
