Computers & Operations Research 37 (2010) 432–442
A Hybrid Multi-Swarm Particle Swarm Optimization algorithm for the Probabilistic Traveling Salesman Problem

Yannis Marinakis a,∗, Magdalene Marinaki b
a Technical University of Crete, Department of Production Engineering and Management, Decision Support Systems Laboratory, 73100 Chania, Greece
b Technical University of Crete, Department of Production Engineering and Management, Industrial Systems Control Laboratory, 73100 Chania, Greece
ARTICLE INFO

Available online 24 March 2009

Keywords: Particle swarm optimization; Expanding Neighborhood Search-GRASP; Metaheuristics; Probabilistic Traveling Salesman Problem

ABSTRACT
The Probabilistic Traveling Salesman Problem (PTSP) is a variation of the classic Traveling Salesman Problem (TSP) and one of the most significant stochastic routing problems. In the PTSP, only a subset of potential customers need to be visited on any given instance of the problem. The number of customers to be visited each time is a random variable. In this paper, a new hybrid algorithmic nature inspired approach based on Particle Swarm Optimization (PSO), Greedy Randomized Adaptive Search Procedure (GRASP) and Expanding Neighborhood Search (ENS) Strategy is proposed for the solution of the PTSP. The proposed algorithm is tested on numerous benchmark problems from TSPLIB with very satisfactory results. Comparisons with the classic GRASP algorithm, the classic PSO and with a Tabu Search algorithm are also presented. Also, a comparison is performed with the results of a number of implementations of the Ant Colony Optimization algorithm from the literature and in 13 out of 20 cases the proposed algorithm gives a new best solution. © 2009 Elsevier Ltd. All rights reserved.
1. Introduction

During the last decade, nature inspired intelligence has become increasingly popular. Among the most popular nature inspired approaches, when the task is optimization within complex domains of data or information, are methods that mimic successful animal and micro-organism team behavior, such as swarm or flocking intelligence (bird flocks and fish schools inspired Particle Swarm Optimization, PSO), artificial immune systems (which mimic the biological immune system), the optimized performance of bees, and ant colonies (whose foraging behavior gave rise to Ant Colony Optimization, ACO).

PSO is a population-based swarm intelligence algorithm that was originally proposed by Kennedy and Eberhart [29]. PSO is inspired by the behavior of social organisms and the physical movements of individuals in a swarm, and it includes elements of exploration and exploitation (or local and global search). Most applications of PSO have concentrated on optimization in continuous space, while some work has also been done on discrete optimization [30,45]. Recent comprehensive surveys of PSO can be found in [3,4,40].
∗ Corresponding author. Tel.: +30 282 103 7282. E-mail addresses: [email protected] (Y. Marinakis), [email protected] (M. Marinaki).
0305-0548/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.cor.2009.03.004
PSO is a very popular optimization method and its wide use, mainly during the last years, is due to the number of advantages that this method has compared to other optimization methods. Some of the key advantages are that this method does not need the calculation of derivatives, that the knowledge of good solutions is retained by all particles, and that particles in the swarm share information with each other. PSO is less sensitive to the nature of the objective function, can be used for stochastic objective functions, and is less likely to become stuck in local minima. Concerning its implementation, PSO can easily be programmed and has few parameters to regulate.

Usually, in PSO-based algorithms only one swarm is used. Recently, a number of works have used more than one swarm, either with the classic PSO [16,39,47] or with variations of the classic PSO such as the method called TRIBES [19]. These methods have greater exploration and exploitation abilities, since the different swarms can explore different parts of the solution space.

In this paper, we demonstrate how a nature inspired intelligent technique, PSO [29], and two metaheuristic techniques, the Greedy Randomized Adaptive Search Procedure (GRASP) [43] and the Expanding Neighborhood Search (ENS) [37], can be incorporated into a hybrid scheme in order to give very good solutions to the Probabilistic Traveling Salesman Problem (PTSP). In the proposed Hybrid Multi-Swarm Particle Swarm Optimization (HybMSPSO) algorithm more than one swarm is used. The main difference of this algorithm from the other algorithms that use more than one swarm is a feedback procedure that is used in order to, initially,
distribute the information (i.e., the good solutions of each swarm) to all other swarms and, afterwards, to help all the particles of all swarms to follow this information in order to finally find a new, better solution. Another goal of our research is to achieve results that are as good as possible in a very short computational time. This goal led us to use two different procedures, one as a speeding-up technique (the ENS) and one for the production of initial solutions that are as good as possible (the ENS-GRASP). Thus, the combination of a PSO algorithm with a very fast local search strategy, like the ENS strategy [37,38], leads to a very fast and efficient algorithm and reduces, significantly, its computational time. This makes the algorithm suitable for solving very large scale PTSPs in short computational time.

The rest of the paper is organized as follows: in the next section a description of the PTSP is presented. In the third section, the proposed algorithm, the HybMSPSO algorithm, is presented and analyzed in detail. Computational results are presented and analyzed in the fourth section, while in the last section conclusions and future research are given.

2. The Probabilistic Traveling Salesman Problem

The PTSP is an extension of the well-known TSP, which has been extensively studied in the field of combinatorial optimization. The PTSP is perhaps the most fundamental stochastic routing problem that can be defined [41]. It was introduced in 1985 by Jaillet in his PhD thesis [26]. Some theoretical properties of the PTSP derived in [26] were later published in [27]. In the PTSP, a demand at each node occurs (with probability p) or does not occur (with probability 1 − p) during a given day. The main difference of the PTSP from the TSP is that while in the TSP the objective is to find the shortest tour through all the cities, such that no city is visited twice and the salesman returns at the end of the tour to the starting city, in the PTSP the objective is to minimize the expected length of the a priori tour, where each customer requires a visit only with a given probability. The a priori tour can be seen as a template for the visiting sequence of all customers. In a given instance, the customers are visited in the order of the a priori tour, while the customers that do not need to be visited are simply skipped [34]. The PTSP belongs to the class of NP-hard problems [5]. This means that no polynomial time algorithm is known for its solution. Thus, there is a great need for powerful heuristics that find good suboptimal solutions in reasonable amounts of time.

A formulation of the problem is the following [8,26]: consider a fully connected graph whose node set is denoted by V = {1, 2, . . . , n}. Each node i (i = 1, 2, . . . , n) is associated with a presence probability pi that represents the possibility that node i will be present in a given instance. Given an a priori tour τ, if a problem instance S (⊆ V) occurs with probability p(S) and requires covering a total distance Lτ(S) to visit the subset S of customers, that problem instance receives a weight of p(S)Lτ(S) in the computation of the expected length. If we denote the length of the tour by Lτ, then our problem is to find an a priori tour through all n potential customers which minimizes the quantity
E[L_{\tau}] = \sum_{S \subseteq V} p(S) \, L_{\tau}(S) \qquad (1)

with the summation being over all subsets of V. The probability for the subset of customers S to require a visit is

p(S) = \prod_{i \in S} p_i \prod_{i \in V - S} (1 - p_i) \qquad (2)
In fact, let us consider (without loss of generality) an a priori tour τ = (1, 2, . . . , n); then its expected length is

E[L_{\tau}] = \sum_{i=1}^{n} \sum_{j=i+1}^{n} d_{ij} \, p_i p_j \prod_{k=i+1}^{j-1} (1 - p_k) + \sum_{i=1}^{n} \sum_{j=i+1}^{n} d_{ij} \, p_i p_j \prod_{k=j+1}^{n} (1 - p_k) \prod_{l=1}^{i-1} (1 - p_l) \qquad (3)

where d_{ij} is the distance between nodes i and j. This expression is derived by looking at the probability of each arc of the complete graph being used, that is, when the a priori tour is adapted by skipping the customers which do not require a visit.
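To make the formulation concrete, the following is a minimal sketch (not the authors' Fortran 90 code) that evaluates the expected a priori tour length in two ways: directly from Eqs. (1)-(2) by enumerating all subsets of customers, which is tractable only for very small n, and through the closed form of Eq. (3). Symmetric (Euclidean) distances are assumed and the instance data are purely illustrative.

```python
from itertools import combinations

def tour_length(nodes, d):
    """Length of the closed tour visiting 'nodes' in the given order (0 if fewer than 2 nodes)."""
    if len(nodes) < 2:
        return 0.0
    return sum(d[nodes[k]][nodes[(k + 1) % len(nodes)]] for k in range(len(nodes)))

def expected_length_enum(tour, d, p):
    """E[L_tau] from Eqs. (1)-(2): enumerate every subset S of customers."""
    n = len(tour)
    total = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            present = set(S)
            prob = 1.0
            for v in range(n):
                prob *= p[v] if v in present else (1.0 - p[v])
            realized = [v for v in tour if v in present]   # a priori order, absentees skipped
            total += prob * tour_length(realized, d)
    return total

def expected_length_closed(tour, d, p):
    """E[L_tau] from Eq. (3): O(n^2) terms instead of 2^n subsets (symmetric distances assumed)."""
    n = len(tour)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = tour[i], tour[j]
            # arc between positions i and j used inside the realized tour:
            # every node strictly between them must be absent
            inside = 1.0
            for k in range(i + 1, j):
                inside *= 1.0 - p[tour[k]]
            total += d[a][b] * p[a] * p[b] * inside
            # same arc used as the closing ("wrap-around") arc of the realized tour:
            # every node after position j and before position i must be absent
            outside = 1.0
            for k in range(j + 1, n):
                outside *= 1.0 - p[tour[k]]
            for l in range(i):
                outside *= 1.0 - p[tour[l]]
            total += d[a][b] * p[a] * p[b] * outside
    return total

if __name__ == "__main__":
    # Four nodes on a unit square, homogeneous presence probability 0.5.
    pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
    d = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for x2, y2 in pts] for x1, y1 in pts]
    p = [0.5] * 4
    tour = [0, 1, 2, 3]
    assert abs(expected_length_enum(tour, d, p) - expected_length_closed(tour, d, p)) < 1e-9
```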
Since the introduction of the problem in 1985 by Jaillet [26], a number of heuristic and metaheuristic algorithms have been proposed for its solution. Initially, researchers focused on heuristics [7,6,28,42,44]. More recent studies adopt algorithmic approaches based on metaheuristics, such as ACO [2,1,8–11,13,12,15], genetic algorithms [35], threshold accepting (TA) [46], scatter search [34] and stochastic annealing [14]. An aggregation method for the solution of the PTSP is presented in [17]. An exact algorithm based on an integer L-shaped method has been used to solve 50-node instances [32], while in [18] a stochastic dynamic programming algorithm has been applied.

3. HybMSPSO for the Probabilistic Traveling Salesman Problem

3.1. General description of the HybMSPSO algorithm

In this paper, a new algorithm based on PSO is proposed that uses more than one swarm and utilizes a feedback procedure in order to extend the exploration abilities of the classical PSO. The proposed algorithm for the solution of the PTSP, HybMSPSO, combines a PSO algorithm, the GRASP algorithm and the ENS strategy. In this algorithm, the initial population of particles (where a particle is a solution to the problem) is calculated by using the GRASP algorithm (see Section 3.2). The pseudocode of the algorithm is the following.

algorithm Hybrid Multi-Swarm PSO for the PTSP
Initialization
  Select the number of swarms
  Select the number of particles for each swarm
  Initialize the position and velocity of each particle using ENS-GRASP
  Calculate the initial cost function value (fitness) of each particle
  Apply ENS to each particle
  Find the best particle of each swarm
  Set the initial solution of each particle as its current best solution
Main Phase
  Do while the maximum number of iterations has not been reached:
    Do for each swarm and for a prespecified number of iterations
      Calculate the velocity of each particle
      Calculate the new position of each particle
      Evaluate the new fitness of each particle
      Apply ENS to each particle
      Update the best solution of each particle
      Update the best particle of the whole swarm
      Update the inertia weight
    Enddo
    Create a new swarm with the best particles of all swarms
    Do for the new swarm and for a prespecified number of iterations
      Calculate the velocity of each particle
      Calculate the new position of each particle
      Evaluate the new fitness of each particle
      Apply ENS to each particle
      Update the best solution of each particle
      Update the best particle of the whole swarm
      Update the inertia weight
    Enddo
    Return the particles to their original swarms
    Delete the constructed swarm
  Enddo
Return the best particle (the best solution) over all swarms
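The control flow of the pseudocode above can be condensed into a short sketch. This is only an illustrative skeleton, not the authors' implementation (which is in Fortran 90); the callables ens_grasp_solution, ens_local_search, pso_iteration and fitness are assumed to exist and stand for the components described in Sections 3.2-3.4.

```python
import copy

def hybrid_multiswarm_pso(num_swarms, particles_per_swarm, max_iters, inner_iters,
                          ens_grasp_solution, ens_local_search, pso_iteration, fitness):
    """Skeleton of the multi-swarm loop with the temporary feedback swarm of best particles."""
    # Initialization: every particle starts from an ENS-GRASP solution.
    swarms = [[ens_grasp_solution() for _ in range(particles_per_swarm)]
              for _ in range(num_swarms)]
    best = [[copy.deepcopy(part) for part in swarm] for swarm in swarms]   # personal bests

    def evolve(swarm, personal_best):
        for _ in range(inner_iters):
            pso_iteration(swarm, personal_best)          # velocity and position updates
            for k, particle in enumerate(swarm):
                ens_local_search(particle)               # ENS applied to every particle
                if fitness(particle) < fitness(personal_best[k]):
                    personal_best[k] = copy.deepcopy(particle)

    for _ in range(max_iters):
        for s in range(num_swarms):                      # evolve each swarm independently
            evolve(swarms[s], best[s])
        # Feedback: the best particle of every swarm forms a temporary swarm.
        idx = [min(range(particles_per_swarm), key=lambda k: fitness(swarms[s][k]))
               for s in range(num_swarms)]
        feedback = [swarms[s][idx[s]] for s in range(num_swarms)]
        evolve(feedback, [copy.deepcopy(part) for part in feedback])
        for s in range(num_swarms):                      # improved particles return home
            swarms[s][idx[s]] = feedback[s]

    return min((part for swarm in swarms for part in swarm), key=fitness)
```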
3.2. Initial population

Instead of using a randomly generated initial population, which may or may not contain good candidate solutions, a modified version of the well-known GRASP algorithm, the Expanding Neighborhood Search-GRASP (ENS-GRASP) [37], is used to initialize the population. GRASP [21,43] is an iterative two-phase search method which has gained considerable popularity in combinatorial optimization. Each iteration consists of two phases, a construction phase and a local search phase. In the construction phase, a randomized greedy function is used to build up an initial solution. This randomized technique provides a feasible solution within each iteration. This solution is then submitted to improvement attempts in the local search phase. The final result is simply the best solution found over all iterations.

The construction phase can be described as a process which iteratively adds one element at a time to the partial (incomplete) solution. The choice of the next element to be added is determined by ordering all elements in a candidate list (Restricted Candidate List, RCL) with respect to a greedy function. The heuristic is adaptive because the benefits associated with every element are updated during each iteration of the construction phase to reflect the changes brought on by the selection of the previous element. The probabilistic component of GRASP is characterized by randomly choosing one of the best candidates in the list, but not necessarily the top candidate. The greedy algorithm is a simple one-pass procedure for solving the problem (the PTSP in our case). In the second phase, a local search is initialized from these points, and the final result is simply the best solution found over all searches (cf. multi-start local search).

Marinakis et al. [37] proposed an algorithm for the solution of the TSP that adds new features to the original GRASP algorithm and has proved very efficient for the solution of the problem. The ENS-GRASP [36–38] consists of two phases. In the first phase, initial feasible solutions are produced and, in the second phase, the ENS is utilized in order to improve these solutions. In the first phase, initially, an array is converted into a heap using the heap data structure, then the RCL is constructed, from which the candidate elements for inclusion in the partial solution are selected randomly, and the RCL is readjusted. Finally, a step-wise construction algorithm is used for the construction of the initial feasible solution. The second phase of the algorithm consists of the ENS strategy (see Section 3.4).

In the construction phase of ENS-GRASP, initially, the RCL of all the edges of a given graph G = (V, E) is created by ordering all the edges from the smallest to the largest cost using a heap data structure, where V is the set of nodes and E is the set of edges. From this list, the first D edges, where D is chosen empirically after thorough testing, are selected in order to form the final RCL. This type of RCL is a cardinality-based RCL. The candidate edge for inclusion in the tour is selected randomly from the RCL using a random number generator. Finally, the RCL is readjusted in every iteration by replacing the edge which has been included in the tour with the next edge that does not belong to the RCL, namely the (D + t1)-th edge, where t1 is the number of the iteration of the GRASP.
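A minimal sketch of this cardinality-based RCL, assuming a symmetric cost matrix d; the function names and the replenishment rule follow the description above, but the code is illustrative rather than the authors' implementation.

```python
import heapq
import random

def build_rcl(d, n, D):
    """Order all edges by cost with a heap and keep the D cheapest ones as the RCL;
    the remaining edges stay in the heap and refill the RCL one by one."""
    heap = [(d[i][j], i, j) for i in range(n) for j in range(i + 1, n)]
    heapq.heapify(heap)
    rcl = [heapq.heappop(heap) for _ in range(min(D, len(heap)))]
    return rcl, heap

def pop_random_edge(rcl, heap):
    """Pick a random edge from the RCL and replace it with the next cheapest edge."""
    cost, i, j = rcl.pop(random.randrange(len(rcl)))
    if heap:
        rcl.append(heapq.heappop(heap))
    return i, j
```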
Once an element (edge) has been chosen for inclusion in the tour, a tour construction heuristic is applied in order to insert it in the partial tour. The construction heuristic is the pure greedy algorithm. Initially, the degree of every node is set equal to zero. Then, one edge at a time is selected randomly from the RCL. Each edge (i, j) is inserted in the partial route taking into account that the degrees of i and j should be less than 2 and that the new edge should not close a tour with fewer than n vertices. In this algorithm, a number of paths are created which are finally joined into a single tour. For each candidate edge there are the following possibilities (a sketch of this step-wise insertion is given after the list):

• If both end-nodes of the edge do not belong to any path (their degrees are equal to 0), they are joined together, a new path is created and their degrees are increased by one.
• If one node belongs to a path and its degree is equal to 1, and the other node is a single node (its degree is equal to 0), they are joined together and their degrees are increased by one.
• If both nodes belong to different paths and their degrees are equal to 1, then they are both at the beginning or at the end of their paths and the two paths are joined into one. The degrees of the nodes are then increased by one.
• If both nodes have degree equal to 1 but they belong to the same path, there are two possibilities: (1) if the path has fewer nodes than the number of nodes in the problem, the edge is rejected, because joining the two nodes would create a tour with fewer nodes (a subtour); (2) if the number of nodes in the path is equal to the number of nodes in the problem, the two nodes are joined, thus producing a feasible tour.
• If one or both end-nodes have degree equal to 2, the edge is rejected and we proceed to the next edge.
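The rules of the list above can be condensed into the following illustrative sketch. The callable next_edge() is assumed to keep supplying candidate edges (i, j), for instance drawn from the RCL of the previous sketch; a union-find structure is used here to track which path each node belongs to.

```python
def greedy_edge_insertion(n, next_edge):
    """Build one Hamiltonian tour by accepting candidate edges that respect the rules above."""
    degree = [0] * n
    parent = list(range(n))                 # union-find over the partial paths

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tour_edges = []
    while len(tour_edges) < n:              # a closed tour on n nodes has exactly n edges
        i, j = next_edge()
        if degree[i] >= 2 or degree[j] >= 2:
            continue                        # one or both end-nodes already have two edges
        if find(i) == find(j) and len(tour_edges) < n - 1:
            continue                        # same path: closing it now would create a subtour
        tour_edges.append((i, j))           # accept: create, extend or merge paths, or close the tour
        degree[i] += 1
        degree[j] += 1
        parent[find(i)] = find(j)
    return tour_edges
```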
3.3. Multi-Swarm Particle Swarm Optimization

One of the key issues in designing a successful PSO for the PTSP is to find a suitable mapping between PTSP solutions and particles in PSO. Each particle is represented via the path representation of the tour, that is, via the specific sequence of the nodes. In the proposed algorithm, the node (customer) with number 1 is fixed to always occupy position 1 in the representation of every particle, thus overcoming the obstacle of multiple encodings of the same tour. The position of each individual (called a particle) is represented by an n-dimensional vector in problem space sij, i = 1, 2, . . . , N (N is the population size), j = 1, 2, . . . , n (n is the number of nodes), and its performance is evaluated on the predefined fitness function. Concerning the fitness function, it should be noted that in the PTSP the fitness of each particle is the expected length of the a priori tour, where each customer requires a visit only with a given probability; since the problem we deal with is a minimization problem, a feasible solution with a high objective function value is characterized as an unpromising candidate. The velocity vij of the ith particle is defined as the change of its position. The flying direction of each particle is the dynamical interaction of individual and social flying experience. The algorithm completes the optimization by following the personal best solution of each particle and the global best value of the whole swarm.

Each particle adjusts its trajectory toward its own previous best position and the previous best position attained by any particle of the swarm, namely pij and pgj. The basic PSO and its variants have been successfully applied to continuous optimization functions. In order to extend the application to discrete space, Kennedy and Eberhart proposed a discrete binary version of PSO [30], in which a particle moves in a state space restricted to zero and one on each dimension.
In the proposed algorithm we encode the position sij of each particle by using an adjacency matrix Ajl, where j = 1, 2, . . . , n and l = 1, 2, . . . , n (an adjacency matrix is a symmetric n × n matrix in which the entry in row j and column l is the number of connections from customer j to customer l [20]), in order to be able to apply the binary version of Particle Swarm Optimization. For example, for a particle i1 with five nodes whose position si1j is given by 13524 (i.e., the tour 1-3-5-2-4), the adjacency matrix becomes

A_{jl} = \begin{pmatrix}
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0
\end{pmatrix} \qquad (4)

In this matrix, e.g. node (customer) 1, which corresponds to both row 1 and column 1 of the matrix Ajl, is connected to nodes 3 and 4, because there are ones in the third and fourth columns of row 1 and in the third and fourth rows of column 1. As in the Probabilistic Traveling Salesman Problem each node is visited only once, the elements of Ajl must be either ones or zeros. A solution is feasible if and only if there are exactly two ones in each row and in each column. The main diagonal elements of the matrix must be zero, as there is no route from a node to itself.
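The following minimal sketch illustrates the conversion between the path representation and the adjacency matrix of Eq. (4), together with the feasibility condition just stated (two ones per row and column, zero diagonal). It is illustrative code, not the authors' implementation.

```python
import numpy as np

def tour_to_adjacency(tour):
    """Path representation (e.g. [1, 3, 5, 2, 4]) -> symmetric 0/1 adjacency matrix."""
    n = len(tour)
    A = np.zeros((n, n), dtype=int)
    for k in range(n):
        a, b = tour[k] - 1, tour[(k + 1) % n] - 1   # 1-based labels -> 0-based indices
        A[a, b] = A[b, a] = 1
    return A

def is_feasible(A):
    """Condition stated in the text: exactly two ones per row and column, zero diagonal."""
    return (np.all(A.sum(axis=0) == 2) and np.all(A.sum(axis=1) == 2)
            and np.all(np.diag(A) == 0))

def adjacency_to_tour(A):
    """Recover the path representation, fixing customer 1 in the first position."""
    n = A.shape[0]
    tour, prev, current = [1], None, 0
    for _ in range(n - 1):
        nxt = [j for j in np.flatnonzero(A[current]) if j != prev][0]
        tour.append(int(nxt) + 1)
        prev, current = current, nxt
    return tour

A = tour_to_adjacency([1, 3, 5, 2, 4])      # reproduces the matrix of Eq. (4)
assert is_feasible(A) and adjacency_to_tour(A) in ([1, 3, 5, 2, 4], [1, 4, 2, 5, 3])
```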
Thus, in the proposed algorithm each vij represents the probability of the corresponding element of row j of the matrix Ajl taking the value 1 or 0. The higher the velocity, the more likely it is that a 1 is chosen, while lower values favor choosing 0. A sigmoid function is applied to transform the velocity from real-number space to probability space:

sig(v_{ij}) = \frac{1}{1 + \exp(-v_{ij})}

The velocities of the particles are updated using the following equation [29,30,45]:

v_{ij}(t + 1) = w \, v_{ij}(t) + c_1 \, rand_1 \, (p_{ij} - s_{ij}(t)) + c_2 \, rand_2 \, (p_{gj} - s_{ij}(t)) \qquad (5)

In order to calculate the positions we apply, for each j iteratively (i.e., l times), Eq. (6):

A_{jl}(t + 1) = \begin{cases} 1 & \text{if } rand_3 < sig(v_{ij}) \\ 0 & \text{if } rand_3 \geq sig(v_{ij}) \end{cases} \qquad (6)

The adjacency matrix of each particle is then checked for feasibility and, if the solution is not feasible, the matrix is mutated by changing some elements accordingly, taking into account that a feasible solution always has exactly two elements equal to one in each row and in each column. Finally, the adjacency matrix is transformed back into the tour representation sij, so that these values of sij can be used in Eq. (5) and in the local search phase (ENS).

In Eqs. (5) and (6), t is the iteration counter, c1 and c2 are the acceleration coefficients, sig(vij) is the sigmoid function given above, and rand1, rand2, rand3 are three random numbers in [0, 1]. The acceleration coefficients c1 and c2 control how far a particle moves in a single iteration. Low values allow particles to roam far from target regions before being tugged back, while high values result in abrupt movement towards, or past, target regions [29]. Typically, both are set equal to 2.0, although assigning different values to c1 and c2 sometimes leads to improved performance. As in basic PSO, a parameter Umax is incorporated to limit vij so that sig(vij) does not approach 0 or 1 too closely [31]. Such an implementation can ensure that a bit can flip between 1 and 0 with a positive probability. In practice, Umax is often set to ±4.

The proposed algorithm is based on standard PSO, namely the basic PSO with inertia weight developed by Shi and Eberhart [45], where w is the inertia weight. The inertia weight controls the impact of previous velocities on the current velocity and is often used as a parameter to control the trade-off between exploration and exploitation. The particle adjusts its trajectory based on information about its previous best performance and the best performance of its neighbors. The inertia weight w is also used to control the convergence behavior of the PSO. In order to reduce this weight over the iterations, allowing the algorithm to exploit some specific areas, the inertia weight w is updated according to the following equation:

w = w_{max} - \frac{w_{max} - w_{min}}{t_{max}} \times t \qquad (7)

where wmax and wmin are the maximum and minimum values that the inertia weight can take, t is the current iteration (generation) of the algorithm, and tmax is the maximum number of iterations (generations).
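A compact illustration of Eqs. (5)-(7), under the same reading as above: one velocity component per dimension of the path representation, a sigmoid that turns it into a sampling probability for the corresponding row of the adjacency matrix, and a linearly decreasing inertia weight. The repair (mutation) of infeasible matrices and the conversion back to the path representation are omitted; parameter values follow Table 1 and the Umax limit mentioned in the text, and the code is only an illustrative sketch.

```python
import math
import random

U_MAX = 4.0  # clamp so that sig(v) does not approach 0 or 1 too closely

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def update_velocity(v, s, p_best, g_best, w, c1=2.0, c2=2.0):
    """Eq. (5): one velocity component per dimension j of the path representation s."""
    for j in range(len(v)):
        r1, r2 = random.random(), random.random()
        v[j] = (w * v[j]
                + c1 * r1 * (p_best[j] - s[j])
                + c2 * r2 * (g_best[j] - s[j]))
        v[j] = max(-U_MAX, min(U_MAX, v[j]))
    return v

def sample_adjacency(v, n):
    """Eq. (6): for each row j, draw every entry A_jl as 1 with probability sig(v_j).
    The sampled matrix is in general infeasible and has to be repaired (mutated)
    so that every row and column contains exactly two ones."""
    return [[1 if random.random() < sigmoid(v[j]) else 0 for _ in range(n)] for j in range(n)]

def inertia_weight(t, t_max, w_max=0.9, w_min=0.01):
    """Eq. (7): linearly decreasing inertia weight."""
    return w_max - (w_max - w_min) / t_max * t
```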
In each of the swarms, the main procedures of the classic PSO algorithm described previously are used. Thus, each swarm finds a local optimum by applying the PSO procedure, and this local optimum is located in a different part of the solution space. The ENS strategy is utilized in order to improve the solution of each particle in the swarm (see Section 3.4). Then, the best particles (solutions) of the swarms form a new swarm, in which a PSO procedure is again applied. The creation of the new swarm helps the distribution of the information, i.e., the movement direction of each swarm. The application of the PSO algorithm in the new swarm has the possibility to create even better solutions, which are restored back to the other swarms, i.e., the particles that constitute the new swarm return their new positions and velocities to their original swarms. Then, the whole procedure is repeated until a prespecified number of iterations is reached. When this number has been reached, the best particle over all swarms is returned as the best solution found.

3.4. Expanding Neighborhood Search

ENS is based on a method called Circle Restricted Local Search Moves (CRLSM) and, in addition, it has a number of local search phases. In the CRLSM strategy [36], the computational time is decreased significantly compared to other heuristic and metaheuristic algorithms, because all the edges that are not going to improve the solution are excluded from the search procedure. This is achieved by restricting the search into circles around the candidate for deletion edges; the center of each circle is one of the two nodes of the candidate for deletion edges. In the following, a description of the CRLSM strategy for a 2-opt trial move [33] is presented. In this case, there are three possibilities based on the costs of the candidate for deletion and inclusion edges (in Fig. 1, the dashed lines are the new edges, the dotted lines indicate that there may be more nodes between the two nodes they connect, and there are three circles in each of the subfigures in order to highlight the differences between the three possibilities):
Fig. 1. Circle Restricted Local Search Moves Strategy (Possibility A, Possibility B, Possibility C).
• If both new edges increase in cost, a 2-opt trial move cannot reduce the cost of the tour (e.g., in Fig. 1 (Possibility A), for both new edges the costs C2 and C4 are greater than the costs B2 and A of the two old edges).
• If one of the two new edges has a cost greater than the sum of the costs of the two old edges, a 2-opt trial move again cannot reduce the cost of the tour (e.g., in Fig. 1 (Possibility B), the cost of the new edge C3 is greater than the sum of the costs A + B3 of the old edges).
• The only case in which a 2-opt trial move can reduce the cost of the tour is when at least one new edge has a cost less than the cost of one of the two old edges (e.g., in Fig. 1 (Possibility C), the cost C1 of the new edge is less than the cost of the old edge A) and the other new edge has a cost less than the sum of the costs of the two old edges (e.g., C5 < A + B1 in Fig. 1 (Possibility C)).

Taking these observations into account, the CRLSM strategy restricts the search to edges of which one end-node lies inside a circle with radius length at most equal to the sum of the costs (lengths) of the two candidate for deletion edges. The algorithm also has the ability to switch between different local search strategies. The idea of using a larger neighborhood to escape from a local minimum to a better one was proposed initially by Garfinkel and Nemhauser [22] and more recently by Hansen and Mladenovic [25]. Garfinkel and Nemhauser proposed a very simple way to use a larger neighborhood: in general, if a local optimum is found with one neighborhood, a larger neighborhood is used in an attempt to escape from it. Hansen and Mladenovic, on the other hand, proposed a more systematic method of changing between different neighborhoods, called Variable Neighborhood Search.

The ENS method starts with one prespecified length of the radius of the circle of the CRLSM strategy. Inside this circle, a number of different local search strategies are applied until all the possible trial moves have been explored and the solution cannot be further improved in this neighborhood. Subsequently, the length of the radius of the circle is increased and the same procedure is repeated until the stopping criterion is activated. The main differences of ENS from the other two methods are the use of the CRLSM
strategy, which restricts the search to circles around the candidate for deletion edges, and the more sophisticated way in which the local search strategy can be changed inside the circles. In the ENS strategy, the size of the neighborhood is expanded in each external iteration [36]: in each external iteration, the radius of the neighborhood is increased. Initially, the size of the neighborhood, m, is defined based on the CRLSM strategy, for example m = A/2, where A is the cost of one of the candidate for deletion edges. For the selected size of the neighborhood, a number of different local search strategies are applied until all the possible trial moves have been explored and the solution cannot be further improved in this neighborhood. The local search strategy is changed whenever the current local search strategy reaches a local optimum. Then, the neighborhood is expanded by increasing the length of the radius m of the CRLSM strategy by a percentage, and the algorithm continues. When the length of the radius of the CRLSM strategy becomes equal to A, it continues to increase until it becomes equal to A + B, where B is the length of the other candidate for deletion edge. If the length of the radius of the CRLSM strategy is equal to A + B and the algorithm has reached the maximum number of iterations, the algorithm terminates with the current solution. The ENS method is illustrated in Fig. 2.

In the ENS strategy, two local search algorithms are used: the 2-opt and the 3-opt algorithms, which were introduced by Lin for the classic TSP [33]. In the first algorithm, the neighborhood function is defined as exchanging two edges of the current solution with two other edges not in the current solution, while in the second the neighborhood function is defined as exchanging three edges of the current solution with three other edges not in the current solution. When a new tour is obtained, its expected length is calculated; if it is less than the expected length of the current tour, the new tour becomes the current tour and the procedure continues from it.
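A small sketch of the two ingredients just described: the CRLSM screening conditions for a 2-opt exchange and the expansion schedule of the circle radius. The exact expansion rule is not fully specified in the text, so a multiplicative step equal to the 10% of Table 1 is assumed here purely for illustration.

```python
def crlsm_screen(a, b, c_new1, c_new2):
    """Necessary conditions, taken from the three possibilities listed above, for a
    2-opt move that deletes edges of cost a and b and inserts edges of cost c_new1
    and c_new2: at least one new edge must be shorter than one of the old edges and
    the other new edge must be shorter than their sum.  Moves failing this test are
    skipped without evaluating the expected length of the resulting tour."""
    shorter, longer = sorted((c_new1, c_new2))
    return shorter < max(a, b) and longer < a + b

def ens_radii(a, b, expand=0.10):
    """Expansion schedule of the CRLSM circle radius used by ENS: start at A/2 and
    grow by a percentage until A + B is reached (a multiplicative 10% step is assumed)."""
    radius = a / 2.0
    while radius < a + b:
        yield radius
        radius = min(radius * (1.0 + expand), a + b)
    yield a + b
```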
Fig. 2. Expanding Neighborhood Search Strategy (the radius of the circle expands from A/2 to A and then to A + B; beyond A + B no improvement in cost is possible).
Table 1
Parameters of the proposed algorithm and of the metaheuristic approaches used in the comparisons.

Parameter                     GRASP     Tabu Search   ENS-GRASP   PSO     MSPSO   HybPSO   HybMSPSO
Number of swarms              –         –             –           1       5       1        5
Particles in each swarm       –         –             –           100     20      100      20
Generations/iterations        10 000    10 000        10 000      100     100     100      100
c1                            –         –             –           2       2       2        2
c2                            –         –             –           2       2       2        2
wmax                          –         –             –           0.9     0.9     0.9      0.9
wmin                          –         –             –           0.01    0.01    0.01     0.01
D                             50        –             50          50      50      50       50
ENS radius expansion          –         –             10%         –       –       10%      10%
Tabu list                     –         10            –           –       –       –        –
4. Computational results

The HybMSPSO algorithm was implemented in Fortran 90 and compiled with the Lahey f95 compiler on a Centrino Mobile Intel Pentium M 750 at 1.86 GHz, running SuSE Linux 9.1. Homogeneous PTSP instances were generated starting from TSP instances and assigning to each customer a probability p of requiring a visit. The test instances were taken from the TSPLIB (http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95). Heterogeneous PTSP instances were also generated from the TSP instances, by assigning to each customer i a probability pi of requiring a visit drawn from the ranges (0, 1), (0.1, 0.9), (0.2, 0.8), (0.3, 0.7) and (0.4, 0.6), respectively. The algorithm was tested on a set of 10 Euclidean sample problems with sizes ranging from 51 to 1400 nodes. Each instance is described by its TSPLIB name and size; e.g., in Table 2 the instance named kroA100 has 100 nodes. For each PTSP instance tested, experiments were performed for several values of the customer probability p. Table 2 presents the results of the algorithm (the length of the best solution found) for three probability values (0.1, 0.5 and 0.9). For the instances that will
be used in comparisons with other algorithms from the literature, the results of the algorithm for two more probabilities (0.2 and 0.75) are presented. Finally, for two indicative instances, eil101 and kroA100, the results for probabilities 0.1–0.9 with step 0.1 for the homogeneous case, and the results for probabilities in the ranges (0, 1), (0.1, 0.9), (0.2, 0.8), (0.3, 0.7) and (0.4, 0.6) for the heterogeneous case, are presented. The parameters of the proposed algorithm were selected after thorough testing. A number of alternative values were tested for probabilities 0.1 and 0.9 for all instances (we started with some classic values that have already been used in other studies in the literature and then modified them), and the values finally selected are those that gave the best computational results concerning both the quality of the solution and the computational time needed to achieve it. The selected parameters are the following:

• Number of swarms: 5.
• Number of particles per swarm: 20.
• Number of generations: 100.
• c1 = 2, c2 = 2.
• wmax = 0.9, wmin = 0.01.
• D = 50.
• Expansion percentage of the ENS radius = 10%.
Table 2
Comparisons of the solutions of the proposed algorithm with other metaheuristic approaches (expected length of the best solution found). For each instance the probabilities are listed first; each algorithm line then gives the corresponding values, in the same order.

eil51 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       130.95, 315.34, 410.28
  Tabu Search: 130.82, 313.50, 411.63
  ENS-GRASP:   130.30, 311.04, 407.92
  PSO:         130.26, 312.18, 408.21
  MSPSO:       130.22, 311.29, 408.19
  HybPSO:      130.17, 311.04, 407.92
  HybMSPSO:    130.12, 310.75, 407.92

kroA100 (probabilities: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.8, 0.9, rand(0,1), rand(0.1,0.9), rand(0.2,0.8), rand(0.3,0.7), rand(0.4,0.6))
  GRASP:       9191.29, 11 791.12, 13 992.21, 15 801.21, 16 717.81, 18 498.41, 20 314.19, 20 398.21, 20 511.18, 17 643.21, 17 501.92, 17 284.92, 17 089.90, 17 102.34
  Tabu Search: 9116.64, 11 821.35, 14 032.21, 15 889.21, 16 658.46, 18 501.84, 20 319.21, 20 409.09, 20 510.21, 17 655.32, 17 502.32, 17 299.01, 17 101.21, 17 106.02
  ENS-GRASP:   9079.86, 11 765.67, 13 889.90, 15 678.98, 16 584.32, 18 321.21, 20 308.07, 20 378.90, 20 508.77, 17 601.29, 17 490.89, 17 203.43, 17 065.32, 17 097.87
  PSO:         9076.23, 11 741.21, 13 692.11, 15 543.12, 16 584.32, 18 109.09, 20 301.01, 20 345.84, 20 508.77, 17 553.20, 17 483.29, 17 201.19, 17 047.21, 17 023.19
  MSPSO:       9076.23, 11 728.21, 13 689.13, 15 531.17, 16 583.22, 18 103.49, 20 298.29, 20 333.10, 20 508.77, 17 448.98, 17 476.87, 17 195.42, 17 046.28, 17 022.21
  HybPSO:      9074.94, 11 730.85, 13 685.21, 15 510.21, 16 584.32, 18 101.21, 20 295.28, 20 321.97, 20 508.77, 17 329.81, 17 481.21, 17 178.23, 17 045.67, 17 001.34
  HybMSPSO:    9074.94, 11 726.50, 13 682.16, 15 407.06, 16 581.64, 18 068.79, 20 281.91, 20 184.05, 20 508.77, 17 327.54, 17 463.87, 17 171.71, 17 038.86, 16 991.24

eil101 (probabilities: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.8, 0.9, rand(0,1), rand(0.1,0.9), rand(0.2,0.8), rand(0.3,0.7), rand(0.4,0.6))
  GRASP:       203.06, 287.56, 290.01, 412.32, 465.05, 547.30, 583.21, 607.39, 603.45, 506.01, 487.01, 473.10, 472.90, 472.78
  Tabu Search: 202.42, 289.08, 293.01, 415.62, 461.52, 551.21, 584.12, 608.21, 602.35, 507.89, 488.21, 475.21, 473.23, 472.39
  ENS-GRASP:   200.03, 286.93, 289.59, 410.39, 455.73, 534.32, 581.19, 605.28, 601.51, 505.92, 485.32, 472.20, 471.37, 470.09
  PSO:         201.01, 286.01, 288.09, 409.89, 457.23, 521.10, 581.23, 602.35, 602.21, 504.87, 483.40, 471.97, 471.23, 469.21
  MSPSO:       200.03, 285.49, 287.05, 409.89, 457.23, 520.09, 580.22, 601.17, 601.50, 504.69, 483.40, 471.72, 470.28, 469.21
  HybPSO:      200.03, 285.29, 286.01, 407.98, 455.73, 519.21, 579.55, 601.68, 601.50, 504.51, 482.32, 469.21, 469.98, 468.59
  HybMSPSO:    200.03, 284.93, 283.72, 405.39, 455.65, 514.18, 579.24, 601.17, 601.50, 503.46, 471.87, 467.82, 469.17, 467.55

ch150 (probabilities: 0.1, 0.2, 0.5, 0.75, 0.9)
  GRASP:       2530.27, 3499.28, 5112.86, 6088.29, 6295.38
  Tabu Search: 2554.59, 3501.21, 5071.51, 6101.21, 6294.32
  ENS-GRASP:   2520.10, 3475.31, 5016.85, 6039.09, 6292.01
  PSO:         2515.08, 3458.21, 5017.23, 6017.54, 6293.23
  MSPSO:       2514.76, 3446.39, 5017.23, 6012.55, 6293.23
  HybPSO:      2514.31, 3447.21, 5016.85, 6001.23, 6292.01
  HybMSPSO:    2510.11, 3444.61, 5016.85, 5988.34, 6292.01

d198 (probabilities: 0.1, 0.2, 0.5, 0.75, 0.9)
  GRASP:       7564.90, 9457.21, 12 702.50, 15 131.21, 15 229.27
  Tabu Search: 7525.03, 9490.38, 12 606.23, 15 134.21, 15 227.34
  ENS-GRASP:   7525.03, 9438.94, 12 538.45, 15 001.23, 15 225.26
  PSO:         7512.12, 9432.57, 12 537.26, 14 952.01, 15 225.28
  MSPSO:       7509.18, 9429.12, 12 537.26, 14 935.28, 15 225.28
  HybPSO:      7504.94, 9421.29, 12 534.92, 14 928.31, 15 224.18
  HybMSPSO:    7504.94, 9415.08, 12 527.56, 14 876.35, 15 216.61

pr439 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       50 898.31, 91 955.72, 105 228.21
  Tabu Search: 50 848.29, 90 463.31, 104 828.21
  ENS-GRASP:   50 402.09, 88 914.76, 104 735.27
  PSO:         50 408.99, 90 121.76, 104 742.28
  MSPSO:       50 394.48, 90 012.81, 104 721.17
  HybPSO:      50 373.95, 88 841.09, 104 656.70
  HybMSPSO:    50 066.27, 88 752.34, 104 584.67

p654 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       20 069.91, 28 748.19, 33 828.28
  Tabu Search: 20 034.61, 28 510.05, 33 737.12
  ENS-GRASP:   19 766.74, 28 388.01, 33 646.79
  PSO:         19 821.01, 28 401.45, 33 649.33
  MSPSO:       19 805.04, 28 409.37, 33 647.51
  HybPSO:      19 728.54, 28 383.71, 33 642.6
  HybMSPSO:    19 690.95, 28 383.71, 33 631.26

rat783 (probabilities: 0.1, 0.2, 0.5, 0.75, 0.9)
  GRASP:       3737.69, 4998.23, 7299.24, 8312.12, 8699.01
  Tabu Search: 3705.31, 5001.23, 7123.76, 8321.29, 8677.34
  ENS-GRASP:   3618.01, 4973.17, 7097.85, 8269.01, 8625.26
  PSO:         3621.21, 4801.92, 7097.85, 8257.28, 8625.26
  MSPSO:       3620.11, 4800.09, 7096.21, 8255.67, 8624.32
  HybPSO:      3618.01, 4782.23, 7097.33, 8254.47, 8625.26
  HybMSPSO:    3616.44, 4775.14, 7094.87, 8217.31, 8604.28

pr1002 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       111 967.65, 215 116.35, 255 221.32
  Tabu Search: 113 868.22, 210 639.29, 255 001.23
  ENS-GRASP:   111 959.65, 207 916.81, 254 819.95
  PSO:         111 888.21, 209 330.70, 254 830.5
  MSPSO:       111 762.10, 209 028.90, 254 823.20
  HybPSO:      111 547.03, 207 916.81, 254 819.95
  HybMSPSO:    110 873.41, 207 605.5, 254 593.28

fl1400 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       9960.55, 17 092.57, 19 920.01
  Tabu Search: 9767.43, 16 570.99, 19 857.28
  ENS-GRASP:   9727.04, 16 541.93, 19 729.55
  PSO:         9752.24, 16 544.93, 19 732.23
  MSPSO:       9752.24, 16 543.71, 19 731.19
  HybPSO:      9727.04, 16 541.93, 19 729.55
  HybMSPSO:    9703.32, 16 416.11, 19 729.55
Also, a comparison with other metaheuristic approaches for the solution of the PTSP is presented in Table 2. In this table, the results of six other metaheuristic algorithms that have been used
for the solution of the PTSP are given. The first is an implementation of the classic GRASP algorithm without the ENS strategy. The second is a classic Tabu Search [23,24]. The third is the ENS-GRASP [38], the fourth is a simple PSO algorithm without the ENS-GRASP, the fifth is a Multi-Swarm Particle Swarm Optimization (MSPSO) without the ENS-GRASP, and the sixth is a Hybrid Particle Swarm Optimization (HybPSO) algorithm using all the characteristics of the proposed HybMSPSO but only one swarm.
Table 3
Comparisons of the computational times (in s) of the proposed algorithm with other metaheuristic approaches. For each instance the probabilities are listed first; each algorithm line then gives the corresponding times, in the same order.

eil51 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       7.79, 7.52, 8.23
  Tabu Search: 7.82, 7.12, 8.98
  ENS-GRASP:   0.78, 0.87, 2.27
  PSO:         4.57, 3.97, 5.67
  MSPSO:       4.21, 3.88, 5.21
  HybPSO:      0.62, 0.51, 0.67
  HybMSPSO:    0.57, 0.54, 0.63

kroA100 (probabilities: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.8, 0.9, rand(0,1), rand(0.1,0.9), rand(0.2,0.8), rand(0.3,0.7), rand(0.4,0.6))
  GRASP:       9.08, 8.21, 8.95, 10.21, 9.57, 9.01, 10.28, 9.28, 10.20, 10.29, 10.01, 10.78, 10.29, 10.67
  Tabu Search: 8.90, 8.09, 9.01, 9.18, 9.12, 8.79, 9.02, 9.45, 10.21, 10.45, 10.21, 10.02, 12.21, 11.21
  ENS-GRASP:   1.57, 1.48, 1.21, 1.57, 1.51, 1.51, 1.52, 1.61, 1.51, 1.51, 1.34, 1.48, 1.98, 1.44
  PSO:         7.37, 7.52, 7.01, 7.12, 7.84, 7.45, 7.23, 8.23, 8.57, 8.99, 8.01, 7.21, 8.67, 8.36
  MSPSO:       7.01, 7.36, 6.95, 7.01, 7.57, 7.21, 7.59, 8.01, 9.09, 8.78, 7.81, 7.09, 8.01, 7.99
  HybPSO:      1.28, 1.21, 1.38, 1.39, 1.21, 1.28, 1.34, 1.37, 1.30, 1.30, 1.28, 1.37, 1.23, 1.25
  HybMSPSO:    1.23, 1.17, 1.24, 1.28, 1.17, 1.31, 1.28, 1.22, 1.21, 1.19, 1.31, 1.22, 1.38, 1.26

eil101 (probabilities: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.8, 0.9, rand(0,1), rand(0.1,0.9), rand(0.2,0.8), rand(0.3,0.7), rand(0.4,0.6))
  GRASP:       11.37, 12.09, 13.51, 10.50, 13.09, 17.23, 11.90, 11.01, 10.88, 11.29, 10.90, 11.08, 11.98, 13.09
  Tabu Search: 11.21, 12.21, 11.29, 11.20, 12.21, 12.21, 12.29, 10.21, 10.29, 10.21, 11.29, 11.29, 12.09, 12.58
  ENS-GRASP:   2.01, 1.89, 2.01, 1.69, 1.48, 1.72, 1.90, 1.71, 1.79, 1.59, 2.07, 1.92, 2.29, 1.97
  PSO:         8.97, 9.78, 10.13, 9.87, 10.29, 10.90, 10.42, 10.34, 10.21, 10.57, 11.98, 8.97, 11.02, 11.02
  MSPSO:       9.01, 9.21, 10.20, 10.31, 10.78, 12.21, 10.38, 10.09, 10.61, 10.31, 12.20, 9.98, 10.98, 10.92
  HybPSO:      1.48, 1.51, 1.67, 1.52, 1.57, 1.61, 1.58, 1.67, 1.80, 1.69, 1.69, 1.72, 1.87, 1.89
  HybMSPSO:    1.55, 1.58, 1.54, 1.61, 1.68, 1.59, 1.61, 1.58, 1.68, 1.64, 1.71, 1.89, 1.79, 1.71

ch150 (probabilities: 0.1, 0.2, 0.5, 0.75, 0.9)
  GRASP:       14.97, 17.72, 17.58, 18.45, 18.79
  Tabu Search: 14.29, 19.21, 18.90, 19.01, 19.09
  ENS-GRASP:   2.41, 3.87, 3.01, 3.12, 3.29
  PSO:         15.21, 16.37, 17.62, 17.69, 16.92
  MSPSO:       14.28, 18.21, 17.39, 18.20, 16.79
  HybPSO:      2.51, 2.64, 2.89, 2.85, 2.89
  HybMSPSO:    2.41, 2.76, 2.77, 2.68, 3.01

d198 (probabilities: 0.1, 0.2, 0.5, 0.75, 0.9)
  GRASP:       24.51, 22.71, 25.91, 26.88, 25.34
  Tabu Search: 22.83, 23.82, 23.89, 27.01, 26.21
  ENS-GRASP:   6.09, 6.43, 6.45, 5.98, 5.88
  PSO:         20.19, 21.98, 21.89, 26.23, 23.01
  MSPSO:       21.20, 20.29, 21.48, 23.51, 22.49
  HybPSO:      5.38, 5.98, 5.28, 5.29, 5.51
  HybMSPSO:    5.48, 5.28, 5.32, 5.41, 5.38

pr439 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       97.23, 91.12, 93.45
  Tabu Search: 98.21, 89.29, 91.54
  ENS-GRASP:   36.31, 27.12, 31.28
  PSO:         95.34, 95.67, 88.32
  MSPSO:       97.28, 93.28, 87.29
  HybPSO:      35.21, 28.24, 30.47
  HybMSPSO:    32.35, 32.10, 31.01

p654 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       118.98, 150.23, 138.57
  Tabu Search: 120.34, 159.02, 140.89
  ENS-GRASP:   61.23, 51.12, 53.92
  PSO:         109.29, 133.28, 138.24
  MSPSO:       112.23, 134.25, 140.29
  HybPSO:      55.43, 54.35, 51.20
  HybMSPSO:    58.17, 55.43, 48.21

rat783 (probabilities: 0.1, 0.2, 0.5, 0.75, 0.9)
  GRASP:       190.34, 187.23, 165.43, 187.45, 194.62
  Tabu Search: 188.98, 188.95, 168.21, 171.02, 198.20
  ENS-GRASP:   74.32, 60.28, 61.20, 76.34, 63.29
  PSO:         173.29, 171.32, 172.30, 166.48, 176.46
  MSPSO:       170.21, 168.34, 178.23, 169.37, 189.54
  HybPSO:      68.24, 67.32, 58.21, 61.98, 58.92
  HybMSPSO:    70.29, 63.45, 64.39, 58.38, 55.32

pr1002 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       231.82, 251.20, 245.21
  Tabu Search: 240.18, 267.21, 278.99
  ENS-GRASP:   82.41, 73.40, 79.12
  PSO:         217.33, 281.39, 257.23
  MSPSO:       220.19, 278.34, 189.29
  HybPSO:      77.56, 71.29, 60.28
  HybMSPSO:    80.12, 67.28, 63.21

fl1400 (probabilities: 0.1, 0.5, 0.9)
  GRASP:       332.12, 198.20, 309.71
  Tabu Search: 301.20, 230.29, 302.61
  ENS-GRASP:   109.21, 100.21, 99.28
  PSO:         311.72, 201.29, 295.27
  MSPSO:       312.09, 258.21, 299.29
  HybPSO:      101.21, 109.09, 93.21
  HybMSPSO:    98.49, 103.21, 97.54
The simple PSO and the simple MSPSO use the same local search strategies as the proposed method, but not embedded in the ENS-GRASP procedure. In all implementations, the parameters were chosen in such a way that all algorithms perform the same number of function evaluations. The parameters that do not affect the
number of function evaluations, e.g., the size of the RCL, c1, c2, etc., were set equal to the parameters of HybMSPSO, and the local search strategies were the same as in HybMSPSO (see Table 1). It should be noted that the maximum number of iterations for the GRASP, Tabu Search and ENS-GRASP algorithms was set equal to 10 000 (Table 1), but if for a specific instance the solution was not improved for a number of iterations (100), the algorithm was stopped with the current best solution.
Table 4
Comparisons of the proposed algorithm with other metaheuristic approaches from the literature (expected tour lengths; N/M = not mentioned). Within each probability block the methods are listed first; each instance line then gives the values for those methods, in the same order.

Probability = 0.5. Methods (in order): HybMSPSO, HybPSO, MSPSO, PSO, ENS-GRASP, GRASP, Tabu Search, pACS [8], pACS-S [8], pACS + 1-shift [8], pACS-S + 1-shift-S [8], pACS + 1-shift-T [8], pACS + 1-shift-P [8], TSP-ACO [15], Angle-ACO [15], Depth-ACO [15], HS/1-Shift [15].
  kroA100: 16 581.64, 16 584.32, 16 583.22, 16 584.32, 16 584.32, 16 717.81, 16 658.46, 16 605.4, 16 793.61, 17 723.7, 16 679.3, 16 839.5, 16 681.61, N/M, N/M, N/M, N/M
  eil101:  455.65, 455.73, 457.23, 457.23, 455.73, 465.05, 461.52, 470.7, 476.03, 500.6, 460.7, 491.7, 465.96, 467.441, 463.570, 460.563, 463.368
  ch150:   5016.85, 5016.85, 5017.23, 5017.23, 5016.85, 5112.86, 5071.51, 5164.3, 5318.64, 5467.1, 5051.3, 5447.5, 5099.03, N/M, N/M, N/M, N/M
  d198:    12 527.56, 12 534.92, 12 537.26, 12 537.26, 12 538.45, 12 702.50, 12 606.23, 12 745.5, 13 109.46, 13 539.7, 12 613.3, 13 261.4, 13 502.58, N/M, N/M, N/M, N/M
  rat783:  7094.87, 7097.33, 7096.21, 7097.85, 7097.85, 7299.24, 7123.76, 7334.1, 7423.20, 7785.5, 7261.4, 7427.4, 7346.53, N/M, N/M, N/M, N/M

Probability = 0.75. Methods (in order): HybMSPSO, HybPSO, MSPSO, PSO, ENS-GRASP, GRASP, Tabu Search, TSP-ACO [15], Angle-ACO [15], Depth-ACO [15], HS/1-Shift [15].
  kroA100: 20 281.91, 20 295.28, 20 298.29, 20 301.01, 20 308.07, 20 314.19, 20 319.21, N/M, N/M, N/M, N/M
  eil101:  579.24, 579.55, 580.22, 581.23, 581.19, 583.21, 584.12, 567.07, 564.71, 564.03, 572.11
  ch150:   5988.34, 6001.23, 6012.55, 6017.54, 6039.09, 6088.29, 6101.21, N/M, N/M, N/M, N/M
  d198:    14 876.35, 14 928.31, 14 935.28, 14 952.01, 15 001.23, 15 131.21, 15 134.21, N/M, N/M, N/M, N/M
  rat783:  8217.31, 8254.47, 8255.67, 8257.28, 8269.01, 8312.12, 8321.29, N/M, N/M, N/M, N/M

Probability = 0.2. Methods (in order): HybMSPSO, HybPSO, MSPSO, PSO, ENS-GRASP, GRASP, Tabu Search, pACS [8], pACS-S [8], pACS + 1-shift [8], pACS-S + 1-shift-S [8], pACS + 1-shift-T [8], pACS + 1-shift-P [8].
  kroA100: 11 726.50, 11 730.85, 11 728.21, 11 741.21, 11 765.67, 11 791.12, 11 821.35, 11 720.6, 11 819.46, 13 679.3, 12 241.7, 12 115.8, 11 778.0
  eil101:  284.93, 285.29, 285.49, 286.01, 286.93, 287.56, 289.08, 286.7, 287.97, 349.2, 313.7, 308.1, 288.27
  ch150:   3444.61, 3447.21, 3446.39, 3458.21, 3475.31, 3499.28, 3501.21, 3444.2, 3486.08, 4053.5, 3539.1, 3753.0, 3446.41
  d198:    9415.08, 9421.29, 9429.12, 9432.57, 9438.94, 9457.21, 9490.38, 9489.2, 9607.01, 10 645.9, 10 071.9, 10 031.1, 9919.90
  rat783:  4775.14, 4782.23, 4800.09, 4801.92, 4973.17, 4998.23, 5001.23, 4781.2, 4863.61, 5672.8, 4943.5, 4886.7, 4857.52

Probability = 0.1. Methods (in order): HybMSPSO, HybPSO, MSPSO, PSO, ENS-GRASP, GRASP, Tabu Search, pACS [8], pACS-S [8], pACS + 1-shift [8], pACS-S + 1-shift-S [8], pACS + 1-shift-T [8], pACS + 1-shift-P [8].
  kroA100: 9074.94, 9074.94, 9076.23, 9076.23, 9079.86, 9191.29, 9116.64, 9039.4, 9098.05, 11 715.1, 9161.6, 9229.9, 9084.80
  eil101:  200.03, 200.03, 200.03, 201.01, 200.03, 203.06, 202.42, 199.7, 201.68, 283.6, 236.1, 209.8, 201.61
  ch150:   2510.11, 2514.31, 2514.76, 2515.08, 2520.10, 2530.27, 2554.59, 2493.6, 2533.73, 3418.3, 2814.2, 2577.8, 2540.74
  d198:    7504.94, 7504.94, 7509.18, 7512.12, 7525.03, 7564.90, 7525.03, 7556.1, 7584.43, 9312.1, 8028.3, 7748.7, 7650.94
  rat783:  3616.44, 3618.01, 3620.11, 3621.21, 3618.01, 3737.69, 3705.31, 3368.9, 3472.58, 4619.2, 3514.5, 3488.3, 3440.87
As can be observed from Table 2, the results of the HybMSPSO are better than the results of the six other metaheuristics in all instances. More precisely, the HybMSPSO improved the results of the classic GRASP from 0.01% (in kroA100 with probability 0.9) to 6.44% (in eil101 with probability 0.6), the results of the Tabu Search from 0.007% (in kroA100 with probability 0.9) to 7.20% (in eil101 with probability 0.6), the results of the ENS-GRASP from 0% (in some instances) to 4.14%
(in rat783 with probability 0.2), the results of the classic PSO from 0% (in kroA100 with probability 0.9) to 2.44% (in eil101 with heterogeneous probabilities in the range (0.1, 0.9)), the results of the MSPSO from 0% (in kroA100 with probability 0.9) to 2.44% (in eil101 with heterogeneous probabilities in the range (0.1, 0.9)), and the results of the classic HybPSO from 0% (in some instances) to 2.21% (in eil101 with heterogeneous probabilities in the range (0.1, 0.9)). It can also be seen that the results of the Tabu Search metaheuristic are better than the results of the GRASP algorithm without the ENS strategy, but worse than the results of the ENS-GRASP algorithm. Thus, taking
into account these results, we can say that the use of the ENS strategy is very significant for the improvement of the results. The results of HybMSPSO are also better than the results of the HybPSO. This confirms that the use of more than one swarm together with the feedback procedure improves the results significantly and, thus, gives the algorithm greater exploration abilities. These conclusions hold for both the homogeneous and the heterogeneous cases.

In Table 3, the computational times of all implementations are presented. From this table and Table 2 it can be observed that the use of each of the characteristics of the HybMSPSO improves the quality of the solution, the computational time, or both. The ENS strategy clearly improves the computational time, as ENS-GRASP, HybPSO and HybMSPSO find the best solution in less computational time than GRASP, PSO and MSPSO, respectively. The proposed HybMSPSO algorithm and the ENS-GRASP algorithm have competitive computational times due to the use of the ENS strategy. It should also be noted that the HybPSO and the proposed method find the best solution in almost the same computational time, as for these two methods the number of particles was chosen to be equal (i.e., 1 swarm × 100 particles for HybPSO and 5 swarms × 20 particles per swarm for HybMSPSO) and all the other parameters are common. The same conclusion holds for PSO and MSPSO.

The results of the algorithm are also compared (Table 4) with the results of a number of implementations of the ACO metaheuristic taken from [8,15]. In these implementations the same instances are used as in this paper and, thus, comparisons of the results can be performed. It should be noted that the comparisons are based only on the quality of the results and not on the computational time needed to find the solution, as a fair comparison in terms of computational efficiency is difficult: the computational speed is affected mainly by the compiler and the hardware used. Five instances with four different probabilities are used for the comparisons. As can be observed, the proposed algorithm gives a new best solution in 13 out of the 20 cases used in the comparisons. More precisely, the proposed algorithm gives better results in all instances when the probability is equal to 0.5, in all but one when the probability is equal to 0.75, in three out of five when the probability is equal to 0.2, and in one instance when the probability is equal to 0.1. For six instances the pACS [8] algorithm gives the best solution, while for one instance the Depth-ACO [15] algorithm gives the best solution; in five of these seven instances the proposed algorithm gives the second best solution.
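For clarity, the improvement percentages quoted above appear to be computed relative to the HybMSPSO value; for example, for eil101 with p = 0.6 the Table 2 entries give

\frac{547.30 - 514.18}{514.18} \times 100\% \approx 6.44\%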
5. Conclusions and future research

In this paper, a hybrid algorithmic nature inspired methodology, the HybMSPSO algorithm, was proposed for the effective handling of the PTSP. The algorithm was tested on a number of instances from the TSPLIB; homogeneous and heterogeneous PTSP instances were generated starting from TSP instances. In order to show the effectiveness of the algorithm, the results of the HybMSPSO were compared with the results of the classic GRASP, the ENS-GRASP, a Tabu Search metaheuristic, a classic PSO and a Hybrid PSO algorithm whose only difference from the HybMSPSO is that it uses a single swarm. The proposed algorithm gave very good results in all instances, and these results are better than the results of all the other algorithms used in the comparisons. Also, a comparison was performed with the results of a number of implementations of the ACO algorithm from the literature, and in 13 out of 20 cases the proposed algorithm gave a new best solution. Future research will focus on the application of the algorithm to other stochastic problems, such as the Stochastic Vehicle Routing Problem, and on the development of other metaheuristic approaches for the solution of the PTSP and the Stochastic Vehicle Routing Problem.
References

[1] Balaprakash P, Birattari M, Stutzle T. Engineering stochastic local search algorithms: a case study in estimation-based local search for the probabilistic traveling salesman problem. In: Cotta C, van Hemert J, editors. Recent advances in evolutionary computation for combinatorial optimization. Studies in computational intelligence. Berlin: Springer; 2008.
[2] Balaprakash P, Birattari M, Stutzle T, Dorigo M. An experimental study of estimation-based metaheuristics for the probabilistic traveling salesman problem. In: Learning and intelligent optimization, LION 2007 II, December 8–12, 2007, Trento, Italy.
[3] Banks A, Vincent J, Anyakoha C. A review of particle swarm optimization. Part I: background and development. Natural Computing 2007;6(4):467–84.
[4] Banks A, Vincent J, Anyakoha C. A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications. Natural Computing 2008;7:109–24.
[5] Bertsimas DJ. Probabilistic combinatorial optimization problems. PhD thesis, MIT, Cambridge, MA, USA; 1988.
[6] Bertsimas D, Howell L. Further results on the probabilistic traveling salesman problem. European Journal of Operational Research 1993;65(1):68–95.
[7] Bertsimas D, Chervi P, Peterson M. Computational approaches to stochastic vehicle routing problems. Transportation Science 1995;29(4):342–52.
[8] Bianchi L. Ant colony optimization and local search for the probabilistic traveling salesman problem: a case study in stochastic combinatorial optimization. PhD thesis, Universite Libre de Bruxelles, Belgium; 2006.
[9] Bianchi L, Knowles J, Bowler N. Local search for the probabilistic traveling salesman problem: correction to the 2-p-opt and 1-shift algorithms. European Journal of Operational Research 2005;162(1):206–19.
[10] Bianchi L, Gambardella LM, Dorigo M. An ant colony optimization approach to the probabilistic traveling salesman problem. In: Merelo Guervòs JJ, Adamidis P, Beyer H-G, Fernández-Villacañas J-L, Schwefel H-P, editors. Proceedings of the seventh parallel problem solving from nature (PPSN VII). Lecture notes in computer science, vol. 2439. Berlin: Springer; 2002. p. 883–92.
[11] Bianchi L, Gambardella LM, Dorigo M. Solving the homogeneous probabilistic traveling salesman problem by the ACO metaheuristic. In: Dorigo M, DiCaro G, Sampels M, editors. Proceedings of third international workshop ANTS 2002. Lecture notes in computer science, vol. 2463. Berlin: Springer; 2002. p. 176–87.
[12] Birattari M, Balaprakash P, Dorigo M. The ACO/F-RACE algorithm for combinatorial optimization under uncertainty. In: Doerner KF et al., editors. Metaheuristics: progress in complex systems optimization. Operations research/computer science interfaces series. Berlin, Germany: Springer; 2006.
[13] Birattari M, Balaprakash P, Stutzle T, Dorigo M. Estimation-based local search for stochastic combinatorial optimization using delta evaluations: a case study in the probabilistic traveling salesman problem. INFORMS Journal on Computing 2008;20(4):644–58.
[14] Bowler NE, Fink TMA, Ball RC. Characterization of the probabilistic traveling salesman problem. Physical Review E 2003;68(2):036703-1–7.
[15] Branke J, Guntsch M. Solving the probabilistic TSP with ant colony optimization. Journal of Mathematical Modelling and Algorithms 2004;3(4):403–25.
[16] Brits R, Engelbrecht AP, van den Bergh F. Locating multiple optima using particle swarm optimization. Applied Mathematics and Computation 2007;189:1859–83.
[17] Campbell AM. Aggregation for the probabilistic travelling salesman problem. Computers and Operations Research 2006;33:2703–24.
[18] Choi J, Lee JH, Realff MJ. An algorithmic framework for improving heuristic solutions: part II. A new version of the stochastic traveling salesman problem. Computers and Chemical Engineering 2004;28(8):1297–307.
[19] Clerc M. Particle swarm optimization. London: ISTE; 2006.
[20] Dolan A, Aldous J. Networks and algorithms: an introductory approach. Chichester: Wiley; 1993.
[21] Feo TA, Resende MGC. Greedy randomized adaptive search procedure. Journal of Global Optimization 1995;6:109–33.
[22] Garfinkel R, Nemhauser G. Integer programming. New York: Wiley; 1972.
[23] Glover F. Tabu search I. ORSA Journal on Computing 1989;1(3):190–206.
[24] Glover F. Tabu search II. ORSA Journal on Computing 1990;2(1):4–32.
[25] Hansen P, Mladenovic N. Variable neighborhood search: principles and applications. European Journal of Operational Research 2001;130:449–67.
[26] Jaillet P. Probabilistic traveling salesman problems. PhD thesis, MIT, Cambridge, MA, USA; 1985.
[27] Jaillet P. A priori solution of a traveling salesman problem in which a random subset of the customers are visited. Operations Research 1988;36(6):929–36.
[28] Jaillet P, Odoni AR. The probabilistic vehicle routing problem. In: Golden BL, Assad AA, editors. Vehicle routing: methods and studies. Amsterdam: Elsevier Science Publishers B.V., North-Holland; 1988. p. 293–318.
[29] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of 1995 IEEE international conference on neural networks, vol. 4, 1995. p. 1942–8.
[30] Kennedy J, Eberhart R. A discrete binary version of the particle swarm algorithm. In: Proceedings of 1997 IEEE international conference on systems, man, and cybernetics, vol. 5, 1997. p. 4104–8.
[31] Kennedy J, Eberhart R, Shi Y. Swarm intelligence. San Francisco: Morgan Kaufmann Publishers; 2001.
[32] Laporte G, Louveaux E, Mercure H. A priori optimization of the probabilistic traveling salesman problem. Operations Research 1994;42:543–9.
[33] Lin S. Computer solutions of the traveling salesman problem. Bell System Technical Journal 1965;44:2245–69.
[34] Liu YH. A hybrid scatter search for the probabilistic traveling salesman problem. Computers and Operations Research 2007;34(10):2949–63.
[35] Liu Y-H, Jou R-C, Wang C-J. Genetic algorithms for the probabilistic traveling salesman problem. In: Proceedings of the conference on E-logistics, 2004. p. 77–82.
[36] Marinakis Y. Vehicle routing in distribution problems. PhD thesis, Technical University of Crete, Department of Production Engineering and Management, Chania, Greece; 2005.
[37] Marinakis Y, Migdalas A, Pardalos PM. Expanding neighborhood GRASP for the traveling salesman problem. Computational Optimization and Applications 2005;32:231–57.
[38] Marinakis Y, Migdalas A, Pardalos PM. Expanding neighborhood search GRASP for the probabilistic traveling salesman problem. Optimization Letters 2008;2(3):351–61.
[39] Niu B, Zhu Y, He X, Wu H. MCPSO: a multi-swarm cooperative particle swarm optimizer. Applied Mathematics and Computation 2007;185:1050–62.
[40] Poli R, Kennedy J, Blackwell T. Particle swarm optimization. An overview. Swarm Intelligence 2007;1:33–57.
[41] Powell WB, Jaillet P, Odoni A. Stochastic and dynamic networks and routing. In: Ball MO, Magnanti TL, Monma CL, Nemhauser GL, editors. Network routing, handbooks in operations research and management science, vol. 8. Amsterdam: Elsevier Science B.V.; 1995. p. 141–295.
[42] Psaraftis HN. Dynamic vehicle routing problems. In: Golden BL, Assad AA, editors. Vehicle routing: methods and studies. Amsterdam: Elsevier Science Publishers B.V., North-Holland; 1988. p. 223–48.
[43] Resende MGC, Ribeiro CC. Greedy randomized adaptive search procedures. In: Glover F, Kochenberger GA, editors. Handbook of metaheuristics. Boston: Kluwer Academic Publishers; 2003. p. 219–49.
[44] Rossi FA, Gavioli I. Aspects of heuristics methods in the probabilistic traveling salesman problem. In: Andreatta G, Mason F, Serafini P, editors. Advanced school on stochastics in combinatorial optimization. Singapore: World Scientific Press; 1987. p. 214–27.
[45] Shi Y, Eberhart R. A modified particle swarm optimizer. In: Proceedings of 1998 IEEE world congress on computational intelligence. 1998. p. 69–73.
[46] Tang H, Miller-Hooks E. Approximate procedures for the probabilistic traveling salesperson problem. Transportation Research Record 2004;1882:27–36.
[47] Tillett T, Rao TM, Sahin F, Rao R. Darwinian particle swarm optimization. In: Proceedings of the 2nd Indian international conference on artificial intelligence, Pune, India. 2005. p. 1474–87.