Information Sciences 481 (2019) 311–329
A Multi-Adaptive Particle Swarm Optimization for the Vehicle Routing Problem with Time Windows

Yannis Marinakis a,∗, Magdalene Marinaki a, Athanasios Migdalas b,c

a Technical University of Crete, School of Production Engineering and Management, University Campus, Chania 73100, Greece
b Aristotle University of Thessalonike, Department of Civil Engineering, Thessalonike 54124, Greece
c Luleå Technical University, Industrial Logistics, Luleå 97187, Sweden
Article info

Article history: Received 30 December 2017; Revised 30 December 2018; Accepted 31 December 2018; Available online 2 January 2019

Keywords: Particle swarm optimization; Vehicle routing problem with time windows; Combinatorial neighborhood topology; Greedy randomized adaptive search procedure; Adaptive strategy
Abstract

In this paper, a new variant of the Particle Swarm Optimization (PSO) algorithm is proposed for the solution of the Vehicle Routing Problem with Time Windows (VRPTW). Three different adaptive strategies are used in the proposed Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm. The first adaptive strategy concerns the use of a Greedy Randomized Adaptive Search Procedure (GRASP) that is applied when the initial solutions are produced and when a new solution is created during the iterations of the algorithm. The second adaptive strategy concerns the adaptiveness in the movement of the particles from one solution to another, where a new adaptive strategy, the Adaptive Combinatorial Neighborhood Topology, is used. Finally, there is adaptiveness in all parameters of the Particle Swarm Optimization algorithm: the algorithm starts with random values of the parameters and, based on some conditions, all parameters are adapted during the iterations. The algorithm was tested on the two classic sets of benchmark instances, one that includes 56 instances with 100 nodes and another that includes 300 instances with the number of nodes varying between 200 and 1000. The algorithm was compared with other versions of PSO and with the best performing algorithms from the literature. © 2019 Elsevier Inc. All rights reserved.
1. Introduction

The Vehicle Routing Problem (VRP) is one of the most important problems in Combinatorial Optimization, in Operational Research and, especially, in the field of Supply Chain Management. A number of variants of the VRP have been proposed, most of them arising from real life problems for which the authors try to find the most suitable solution algorithm. The interested reader can find a large number of papers [23,24,40,50] and books [16,46] that cover the problem that he/she would like to solve. One of the most important variants of the VRP is the one in which time windows are used. This is a very important variant, as the addition of time windows to the Capacitated Vehicle Routing Problem leads to more realistic problems. Also, other variants of the VRP besides the Capacitated Vehicle Routing Problem incorporate time window constraints.
∗ Corresponding author.
E-mail addresses: [email protected] (Y. Marinakis), [email protected] (M. Marinaki), [email protected] (A. Migdalas).
https://doi.org/10.1016/j.ins.2018.12.086
In the Vehicle Routing Problem with Time Windows (VRPTW), the customers and, possibly, the vehicles are associated with time windows. Thus, the customers must be serviced within some time window and the vehicles may be allowed to operate only within some time window. In this paper, as in most publications concerning the Vehicle Routing Problem with Time Windows [16,45,46], the objective of the problem concerns the design of least cost routes for a fleet of identical capacitated vehicles that service geographically scattered customers within pre-specified time windows. Each customer must be serviced exactly once by a vehicle within a specified time window, and the total demand of the customers serviced by a vehicle must not exceed the capacity of the vehicle. If a vehicle arrives at a customer earlier than the lower bound of the customer's time window, it must wait until the service is possible. The depot also has a time window, and all vehicles must return to the depot by its closing time. In this paper, a hierarchical objective function is used in which, initially, the number of vehicles is minimized and, then, for this number of vehicles, the total traveled distance is minimized. In this paper, a Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm is proposed in which three different adaptive strategies are used. The Particle Swarm Optimization algorithm has proved to be a very efficient algorithm for the solution of routing-type problems. Our research group has efficiently applied variants of Particle Swarm Optimization to the solution of the Capacitated Vehicle Routing Problem [28,30,33], the Vehicle Routing Problem with Stochastic Demands [31,32], the Probabilistic Traveling Salesman Problem [29] and the Location Routing Problem [27]. Besides our group, a number of very interesting algorithms based on the PSO algorithm for the solution of VRP variants have been published by other researchers [1,9,17].
For a complete review of the application of PSO to Vehicle Routing Problems, please see [35]. The differences of the proposed algorithm from the other algorithms proposed by our group are the three different adaptation strategies described in Section 3.1. In order to see whether these adaptation strategies are efficient, the proposed algorithm is compared with the other versions of PSO that we have proposed, which do not use the adaptive strategy for the parameters. The comparisons are performed on the two classic sets of benchmark instances that are used for testing every algorithm that solves the VRPTW. The first was proposed by Solomon [45] and includes 56 instances divided into 6 sets with the number of nodes equal to 100, and the other was proposed by Gehring and Homberger [12] and includes 300 instances of different sizes divided into 5 sets of instances with the number of nodes equal to 200, 400, 600, 800 and 1000, where each of them contains 60 instances and each set is divided into 6 subsets of 10 instances each. The proposed algorithm is also compared with some of the best performing algorithms (twenty of them) from the literature used for the solution of the Vehicle Routing Problem with Time Windows on the same sets of instances. The motivation that led us to extend our previous research and to propose a PSO algorithm with these adaptive strategies is that there is a large number of routing problems and an even larger number of algorithms that solve them, and all of them have different parameters (calculated by the authors in effective ways in order to find the suitable parameters for their algorithms), but there is no single set of parameters that could be applied to every routing problem. We do not claim to have found such a set, as this is almost impossible; instead, we propose a way to give the algorithm the flexibility to find a suitable set of parameters for each instance without the interference of the user.
Thus, having proposed in the past a number of very efficient PSO-based algorithms for solving routing-type problems, none of which had an adaptive procedure for finding the best set of parameters, in this research we extend our work and propose a new version of the PSO algorithm for routing-type problems that uses a number of adaptive strategies to find the most effective set of parameters and to give very good results. As will be presented in the next sections of this paper, these two goals were achieved. We found, in an easy and very efficient way, a set of parameters different for each set but without large differences between them, and we achieved computational results that are better (in most of the sets), equal (in other sets) or at least competitive (for a small number of sets) compared either to the other PSO variants proposed by our research group or, especially, to the most effective methods from the literature used for solving the Vehicle Routing Problem with Time Windows. Finally, the proposed framework could be applied, with the necessary modifications, to other swarm intelligence algorithms. The rest of the paper is organized as follows: In Section 2, a short literature review concerning both the adaptive strategies and the VRPTW is given; in Section 3, an analytical presentation of the proposed algorithm is given; in Section 4, the computational results of the algorithm are presented and analyzed, with analytical comparisons with other algorithms from the literature. Finally, in Section 5, conclusions and future research are given.
2. Literature review

A number of adaptive strategies have been proposed for the calculation of the parameters of the PSO algorithm. Most of them are denoted as Adaptive Particle Swarm Optimization (APSO) algorithms [49]. Thus, there are papers devoted to finding an adaptive way to increase or decrease the inertia factor, to finding the acceleration coefficients, or to both, through the iterations. Also, fuzzy systems have been used for the adaptiveness of the inertia weight. A number of publications use an adaptive strategy to change the size of the population, using different kinds of adaptiveness. For an analytical review of the kinds of adaptiveness in PSO, please see the review section of [34]. A number of algorithms have been proposed for the solution of the Vehicle Routing Problem with Time Windows using the same formulation as the one used in this paper [2–8,12,13,17,19–22,25,26,36–39,41,42,44,47,48].
3. Multi-Adaptive Particle Swarm Optimization algorithm

3.1. Analysis of the three novel adaptive strategies of MAPSO

In this section, the proposed Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm is analyzed in detail. Before the detailed analysis of the proposed algorithm, we analyze the three adaptive strategies that are incorporated in it. Initially, in the initialization phase of the algorithm and when a new solution is created during the iterations, an adaptive strategy based on the Greedy Randomized Adaptive Search Procedure (GRASP) [11] is used for the development of the initial solutions and of the solutions during the iterations. With this strategy, a number of new solutions are developed, each different from the others, and it is ensured that the particles (solutions of the problem) are diversified so as to begin from different regions of the solution space. The second adaptive strategy concerns the movement of the particles from one solution to another. This is a very important part of the algorithm, as in the past the main problem in the application of a Particle Swarm Optimization algorithm to a routing problem, and in general to a combinatorial optimization problem, was the transformation of the solutions from continuous values (necessary for the proper operation of the PSO algorithm) to discrete values (necessary in routing problems, as in most routing problems a path or multiple-path representation is used) and vice versa, and the consequent loss of information. One way to avoid this transformation was proposed by our team in [30] and was denoted as the Combinatorial Neighborhood Topology, in which a Path Relinking procedure was used for the movement of the particles from one solution to another.
The addition of the Path Relinking procedure to the PSO significantly improved the performance of the PSO algorithm on these kinds of problems and gave algorithms that, on some problems, such as the Vehicle Routing Problem with Stochastic Demands [31], performed better than classic metaheuristic and evolutionary algorithms. The use of Path Relinking [15] gave algorithms that needed a step-by-step procedure to move from one solution to another. However, for the solution of the VRPTW, the use of this procedure may lead to a number of infeasible solutions due to the existence of the time windows, as this step-by-step procedure may move customers (nodes) that are initially feasibly assigned to routes into routes that finally become infeasible. This is not an unsolvable problem, as the feasibility of the routes can be regained by using a suitable strategy; however, this may increase the computational time and, for some solutions, the number of routes. Thus, in this paper, a new version of the Combinatorial Neighborhood Topology (CNT), the Adaptive Combinatorial Neighborhood Topology (ACNT), is proposed, in which the conditions of the CNT are maintained but an Adaptive Memory [43] procedure is used instead of the Path Relinking strategy. In this procedure, the main idea is that the movement from one solution to another is realized using whole tours. Thus, two kinds of adaptive memories are created: in the first one, a number records how many times a tour with a specific set of nodes (without taking into account the sequence of the nodes) participates in the global best particle. This number increases (or decreases, based on some conditions) the probability that a tour becomes a member of the new solution. In the second adaptive memory, the same procedure is followed for the personal best solution of each particle.
This distinction is made because in the CNT the new particle may follow either the personal best solution or the global best solution (or a new direction, as will be described analytically later). The addition of a number that records the consecutive iterations in which a route has been present in the global or in the local best solutions is inspired by the medium- and long-term memories of Tabu Search [14]. Finally, in the third adaptive strategy, all the parameters (acceleration coefficients, iterations, local search iterations, upper and lower bounds of the velocities and of the positions, and number of particles in each swarm) are adapted during the procedure and, thus, the algorithm works independently, without any interference from the user. All parameters are randomly initialized and, afterwards, during the iterations, the parameters are adapted using three different conditions: the first is used for all parameters except the number of particles, the second is used for increasing the number of particles, and the third is used for decreasing the number of particles. In the proposed algorithm, all parameters, including the number of particles, the iterations and the local search iterations, are adapted during the iterations. The parameters are adapted based on an equation in which the parameters that led to the best-so-far solution are kept as the best-so-far parameters, and the new parameters are calculated as the sum of the best-so-far parameters and quantities that depend each time on the best-so-far parameters and the current parameters. These are the main differences of the proposed algorithm from the above mentioned algorithms from the literature.
In the following subsections, the constriction Particle Swarm Optimization algorithm, on which the proposed algorithm is based, is presented first, and then all the new features of the algorithm are presented and analyzed in detail.

3.2. Constriction Particle Swarm Optimization algorithm

In a PSO algorithm, a set of particles is initially created at random, where each particle corresponds to a possible solution; it has a position in the space of solutions and moves with a given velocity. In the VRPTW, a particle is recorded via the path representation of the tour, that is, via the specific sequence of the nodes. It should be noted that when a classic PSO algorithm is applied to a routing problem, it needs a sequence of transformations of the solution from continuous to discrete values and vice versa in order to calculate the velocity equation of each particle (see Eq. (1) below). This is a
drawback of the PSO algorithm. The position of each particle is represented by the vector xi = (xi1, xi2, ..., xid), i = 1, 2, ..., N (N is the population size and d is the dimension of the vector), and its performance is evaluated using a predefined fitness function. The velocity vij represents the changes that will be made to move the particle from one position to another. In a PSO algorithm, a particle can follow its own path, move towards the best position it has found during the iterations (pbestij), or move towards the best particle's position (gbestj). In constriction PSO [10], the equations used for the velocities and positions are the following:
vij(t + 1) = χ(vij(t) + c1 rand1 (pbestij − xij(t)) + c2 rand2 (gbestj − xij(t)))  (1)

xij(t + 1) = xij(t) + vij(t + 1)  (2)

where

χ = 2 / |2 − c − √(c² − 4c)|, with c = c1 + c2, c > 4  (3)
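As an illustration (a sketch, not the authors' code; the function and variable names are ours), the constriction coefficient of Eq. (3) and the velocity update of Eq. (1) can be written as:

```python
import random

def constriction_coefficient(c1, c2):
    # Eq. (3): chi = 2 / |2 - c - sqrt(c^2 - 4c)|, with c = c1 + c2 > 4
    c = c1 + c2
    assert c > 4, "constriction PSO requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - c - (c * c - 4.0 * c) ** 0.5)

def velocity_update(v, x, pbest, gbest, c1=2.05, c2=2.05):
    # Eq. (1), applied component-wise over the d dimensions of a particle
    chi = constriction_coefficient(c1, c2)
    return [chi * (v[j]
                   + c1 * random.random() * (pbest[j] - x[j])
                   + c2 * random.random() * (gbest[j] - x[j]))
            for j in range(len(x))]
```

With the common choice c1 = c2 = 2.05, the coefficient evaluates to χ ≈ 0.7298.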
where t is the iteration counter, c1 and c2 are the acceleration coefficients, and rand1 and rand2 are two random variables in the interval (0, 1).

3.3. Adaptive Combinatorial Neighborhood Topology

In this paper, a new version of the Combinatorial Neighborhood Topology (CNT), the Adaptive Combinatorial Neighborhood Topology (ACNT), is proposed, in which the conditions of the CNT are maintained but an Adaptive Memory [43] procedure is used instead of the Path Relinking strategy. In the ACNT, the position equation (Eq. (2)) is not used at all. However, the role of the velocity equation is very important, as will be explained in the following. As previously mentioned, the main difference of the ACNT from the initially proposed CNT is the replacement of the Path Relinking strategy by an Adaptive Memory strategy [43]. In the ACNT, two kinds of memories are used: one for the global best particle (the global best solution in general, as the best particle may change during the iterations; it is denoted as the global best memory (GBM)) and one for every particle separately (denoted as the personal best memory (PBM)). In these memories, complete routes with specific nodes are recorded. Two routes with the same set of nodes but a different node sequence are recorded as one entry in the memories, and the sequence that is kept is the one with the better cost that does not violate the constraints. In each iteration, there are two possibilities for the routes in the GBM and in the PBM. The first possibility is that a route that already belongs to the GBM (or to the PBM) appears in the global best particle (or in the personal best particle); then, the number that records for how many iterations a route has been in the GBM (or in the PBM) (GBMvall and PBMvall, where l indexes the different routes in the GBM and in the PBM, respectively) is increased by one.
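The bookkeeping of the two adaptive memories can be sketched as follows (a hypothetical helper, not the authors' implementation; the dictionary layout and the cost_of callback are our assumptions):

```python
def update_memory(memory, routes, cost_of):
    """Record the routes of a (global or personal) best solution.

    memory maps frozenset(nodes) -> {"count": int, "best_seq": list}.
    Two routes with the same node set count as one entry; the node
    sequence kept is the one with the better cost.
    """
    for route in routes:
        key = frozenset(route)          # node set, ignoring the sequence
        entry = memory.get(key)
        if entry is None:
            # a new route appears: add it with an initial counter of one
            memory[key] = {"count": 1, "best_seq": list(route)}
        else:
            # a known route reappears: increase its counter
            entry["count"] += 1
            if cost_of(route) < cost_of(entry["best_seq"]):
                entry["best_seq"] = list(route)
```

The same function would serve both the GBM and the PBM, called with the routes of the global best or of each particle's personal best solution, respectively.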
The other possibility is that a new route appears in the global best particle (or in the personal best particle); then, it has to be added to the GBM (or to the PBM), with the counter of routes in the GBM (or in the PBM), l, increased by one and GBMvall (or PBMvall, depending on the case) taking an initial value equal to one. It is preferable that the best particle is different in most iterations, as this makes it possible for different routes to be added to the global best memory. In general, a particle can follow its own path, go towards its previous best solution, or go towards the global best solution (the best particle in the swarm). In the CNT, this was achieved with a Path Relinking strategy. In the ACNT, a number of routes from the two kinds of adaptive memories, the GBM or the PBM, are selected. The routes are selected from the GBM if the particle goes towards the global best solution and from the PBM if the particle goes towards its personal best solution. The next step of this procedure is how the particle decides whether to follow its previous best or the global best of the whole swarm. In this phase of the algorithm, we keep the same procedure as in the CNT, with the appropriate adaptations [30,31]. Initially, the average value of the velocity equation of each particle is calculated:

averagev = (1/d) Σ_{j=1}^{d} vij(t + 1)  (4)
Also, the values L1 and L2 are calculated from the following equations:

L1 = (ubound − lbound) × (w1 − w1 t / itermax) + lbound  (5)

and

L2 = (ubound − lbound) × (w2 − w2 t / (2 itermax)) + lbound  (6)
where t is the current iteration, itermax is the maximum number of iterations, and ubound and lbound are the upper and lower bounds for the velocities of each particle. The parameters w1 and w2 control the range of the values L1 and L2. If in some iteration the velocity of a particle violates these bounds, then the velocity is reinitialized with a new value inside the bounds. The parameters w1 and w2 should have large values, as the value of L1 is desired to be as large as possible at the
beginning of the algorithm and to be reduced during the iterations. Also, the value of L2 should be larger than the value of L1 and, thus, the value of w2 should be larger than the value of w1. There are three possibilities based on the averagev value:

1. averagev < L1: no selection is made from either of the two memories, meaning that the particle follows its own path (using a local search procedure, as will be described later).
2. L1 ≤ averagev ≤ L2: the particle selects routes from the PBM.
3. L2 < averagev: the particle selects routes from the GBM.

At the beginning of the algorithm, L1 and L2 have large values. These values decrease during the iterations, so that the possibility of selecting routes from the two memories increases while the possibility of the particle searching the solution space on its own using local search decreases. Thus, depending on the comparison with L1 and L2, a number of routes is selected from the memories and a number of routes is selected from the particle itself. The percentage of routes selected from the memories depends on the average value of the velocities (averagev). If the particle chooses to go towards its personal best solution and this value is near L1, then most of the routes are selected from the PBM, while if it is near L2, then most of the routes are selected from the current solution (as it holds that L1 ≤ averagev ≤ L2). On the other hand, if the particle chooses to go towards the global best solution, then if the value is near L2, most of the routes are selected from the GBM, and if it is far from L2, most of the routes are selected from the current solution. The routes are selected in such a way that no node belongs to two different routes.
Since with this procedure some nodes may not be assigned to any route, a greedy procedure based on the one described in Section 3.4 is used to produce new routes from these nodes. The local search procedure described in Section 3.5 is applied to each of the particles.
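The threshold-based movement decision of Eqs. (4)–(6) can be sketched as follows (a minimal illustration; the function name and return labels are ours):

```python
def movement_choice(average_v, t, itermax, ubound, lbound, w1, w2):
    # Eqs. (5)-(6): both thresholds shrink linearly as iteration t grows
    L1 = (ubound - lbound) * (w1 - w1 * t / itermax) + lbound
    L2 = (ubound - lbound) * (w2 - w2 * t / (2 * itermax)) + lbound
    if average_v < L1:
        return "own_path"       # case 1: local search only, no memory used
    elif average_v <= L2:
        return "personal_best"  # case 2: select routes from the PBM
    else:
        return "global_best"    # case 3: select routes from the GBM
```

Because L1 shrinks faster than L2 (the 2·itermax denominator in Eq. (6)), memory-guided moves become more likely in later iterations, matching the behavior described above.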
3.4. Description of the initial solutions

Usually, in a PSO algorithm the initial solutions are produced at random. However, this may lead to very bad initial solutions, due to the number of constraints that have to be taken into account in the solution of a VRP with Time Windows. Thus, it is preferable to use a more systematic way to produce the initial solutions. This was achieved with a variant of the Greedy Randomized Adaptive Search Procedure (GRASP) [11]. GRASP has been successfully used in the past for the production of the initial solutions of routing problems in the frame of a PSO algorithm [27–29,33]. GRASP is a two-phase procedure in which, at each step, a node is added to the partial solution until all nodes have been assigned to a route. The difference from the classic nearest neighbor procedure is that in the GRASP algorithm it is not necessarily the best node that is added to the solution; instead, the selection is performed randomly (in most cases) from a list, the Restricted Candidate List (RCL), in which the most promising moves (nodes, in this specific problem) are stored. When a node is selected, the list is updated with the next most promising candidate for inclusion in the route. The most interesting part of the GRASP algorithm is finding the suitable RCL for the problem. In the Vehicle Routing Problem with Time Windows, in order to find a solution with a greedy algorithm, we have to take into account both which node is nearest to the node that was served last and the time window constraints. The node nearest to the last served node may not be servable because its time window constraint is violated, or it may be servable only after the vehicle waits a long time for the service to start due to the time window. In order to avoid these two situations, we create the RCL as follows.
In the step-by-step procedure of adding a node (customer) to a route, there are nodes that have already been served, nodes that could be served by the current route, and nodes that could not be served by the current route. All nodes that have already been served are not taken into account anymore. For the other nodes, two constraints are initially checked. The first is the capacity constraint of the vehicle (whether the customer can be served with the residual capacity of the vehicle) and the other is the time window constraint. For any node i, a value (twvali) is calculated that gives the difference between the ending time of the time window of node i and the time at which the vehicle arrives at node i. If this value is negative, then node i cannot be a candidate for inclusion in the route. If this procedure yields no candidate node, the vehicle returns to the depot and a new route begins. On the other hand, if this value is positive, we have to combine the twvali with the distance from the current node to node i. Thus, two auxiliary lists are constructed. In the first one, the nodes are sorted in ascending order of their distance from the current node, while in the second one the nodes are sorted in ascending order of twvali. Thus, the most preferable nodes are high in both lists. Then, a new value (RCLvali) is calculated for each node, equal to the sum of the positions the node has in the two lists (for example, if a node is third in the first list and eighth in the second, its RCLvali is equal to eleven), and the RCL is constructed based on RCLvali in ascending order. The length of the RCL is different for each particle and is selected randomly. A node is selected for inclusion in the route randomly from the RCL. Before any node is added to the route under construction, it is also examined whether there is enough time for the vehicle to return to the depot.
If there is not enough time, then the node is rejected from the current route and either the vehicle moves to the next node (if possible) or it returns to the depot to start a new route.
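The RCL construction described above can be sketched as follows (an illustrative reading of the procedure; the helper names dist, arrival and tw_end are our assumptions, and the capacity check is omitted for brevity):

```python
import random

def build_rcl(current, candidates, dist, arrival, tw_end):
    """Rank candidate nodes by distance and by time-window slack (twval),
    sum the two ranks (RCLval), and return a randomly sized RCL."""
    feasible = []
    for i in candidates:
        twval = tw_end[i] - arrival(current, i)   # slack before window closes
        if twval >= 0:                            # negative: i cannot be served
            feasible.append((i, twval))
    if not feasible:
        return []                                 # vehicle returns to the depot
    by_dist = sorted(feasible, key=lambda p: dist(current, p[0]))
    by_tw = sorted(feasible, key=lambda p: p[1])
    rank_d = {p[0]: r for r, p in enumerate(by_dist, 1)}
    rank_t = {p[0]: r for r, p in enumerate(by_tw, 1)}
    # RCLval_i = position in list 1 + position in list 2, ascending
    ordered = sorted(feasible, key=lambda p: rank_d[p[0]] + rank_t[p[0]])
    length = random.randint(1, len(ordered))      # RCL length differs per particle
    return [p[0] for p in ordered[:length]]
```

A node would then be drawn at random from the returned list, as the text describes.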
3.5. Description of the local search procedure

A Variable Neighborhood Search (VNS) algorithm [18], suitably adjusted for the Vehicle Routing Problem with Time Windows, is applied to each particle in the swarm in order to improve the solutions (either the initial solutions (as local search is the second phase of GRASP) or the solutions produced in each iteration of the algorithm). The local search procedure begins with the addition of a penalty function (PFl) to each route (l). The penalty function shows whether the route is feasible or not based on the demands of the customers. Initially, all routes are feasible. In general, when the iterations of the local search algorithm start, there are two possibilities for a route: either to have a positive PFl value or to have a negative PFl value. A positive value means that the route is infeasible (a node should be relocated to another route) and a negative value means that the route is feasible. The higher the PFl value, the more unacceptable the route, meaning that this is the route from which we begin to reduce the PFl value. On the other hand, the smaller the PFl value, the more acceptable the route, meaning that into this route we will try to relocate a node from an infeasible route. The target is for all PFl values to be negative or equal to zero. As the Vehicle Routing Problem with Time Windows is treated as a hierarchical optimization problem in which, initially, the number of vehicles is minimized and, then, the corresponding routing cost is also minimized, we first calculate the ideal number of vehicles by dividing the sum of the demands by the capacity of the vehicles (we consider a homogeneous set of vehicles).
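The ideal-vehicles computation and the penalty bookkeeping can be sketched as follows (our reading of the text: we round the division up, which the paper does not state explicitly, and we take PFl as demand excess over capacity, which is one natural interpretation):

```python
import math

def ideal_vehicles(demands, capacity):
    # ideal fleet size: total demand divided by vehicle capacity,
    # rounded up so that the demand can actually be covered (our assumption)
    return math.ceil(sum(demands) / capacity)

def route_penalty(route_demands, capacity):
    # PF_l > 0  => route l is overloaded (infeasible)
    # PF_l <= 0 => route l is feasible, with slack -PF_l
    return sum(route_demands) - capacity
```

Under this reading, relocating a node from a route with positive PFl to one with the most negative PFl reduces infeasibility fastest, in line with the strategy described below.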
If the number of vehicles produced in the construction phase of the routes is equal to the ideal number of vehicles (a rare case in the VRPTW), then two kinds of local search algorithms are applied to the solution. The first operates inside the routes in order to improve each specific route, and the other is either a 1-0 relocate or a 1-1 exchange algorithm that relocates some nodes between different routes. When a successful move between two different routes is performed, a local search inside the routes that participate in this move is applied. On the other hand, if the number of vehicles is larger than the ideal number of vehicles, an attempt is made to reduce the number of routes and to approach the ideal number. This is achieved by using a sequence of 1-0 relocate moves, taking care not to violate the time windows, where nodes are relocated from the most promising route (the route with the fewest nodes) to other routes using either a 1-0 relocate procedure or, in general, a k-0 relocate procedure. Then, the PFl value of all remaining routes is calculated. If the PFl value of all routes is negative, then the new solution is feasible, local search inside the routes is performed, and we proceed by trying to reduce the number of routes again (if it is not equal to the ideal number) with the same procedure. If a further reduction is not possible, then we proceed to the next particle. On the other hand, if there are one or more positive PFl values, then a sequence of 1-0 relocates is performed in which nodes from the routes with positive PFl values are relocated to routes with negative PFl values until either a feasible solution is achieved or it is proved that a feasible solution with this number of routes cannot be obtained from this solution. In the latter case, the last feasible solution for this particle is kept and the algorithm proceeds to the next particle.

3.6.
Adaptiveness in the parameters of the Particle Swarm Optimization

The parameters that should be adapted are the number of particles, the number of iterations, the number of local search iterations, c1 and c2, the upper and lower bounds of the velocities and, finally, w1 and w2 from Eqs. (5) and (6). Initially, random values are given to the parameters of PSO, taking into consideration that these values should not violate some specific bounds. Thus, for example, the sum of c1 and c2 should be greater than 4 due to Eq. (3), and the upper bounds should not have values less than zero, as in that case the PSO algorithm could not work properly. The number of consecutive iterations (Max iterations) with no improvement in the best solution was selected to be equal to the initial number of particles. The initial velocities are set equal to zero. After the initialization of the particles, the fitness function of each particle and the best particle are calculated. The average fitness value of all particles and the average value of the best solutions of all particles are also calculated. In the first iteration, these values are equal for each particle. The initial random values of all parameters are the best values so far, and the algorithm proceeds as a classic constriction PSO. The best values of each parameter are updated when the average value of the best solutions of all particles in a specific iteration is less than the best average value of the best solutions of all particles. Three different conditions control the parameters during the iterations of the algorithm. In the first one, if the best solution has not been improved for a consecutive number of iterations, the values of c1 and c2, of the upper and lower bounds of positions and velocities, of the number of local search iterations, and of w1 and w2 are updated as follows:
c1 = c1opt + (c1 − c1opt)/c1opt   and   c2 = c2opt + (c2 − c2opt)/c2opt        (7)

u_positions = UB + (u_positions − UB)/UB   and   l_positions = −u_positions    (8)

u_velocities = V + (u_velocities − V)/V   and   l_velocities = −u_velocities   (9)

Local Search iter = LS + (Local Search iter − LS)/LS                           (10)

w1 = w1opt + α5   and   w2 = w2opt + α6                                        (11)
where c1opt, c2opt, UB, V and LS are the optimum values of c1, c2, the upper bound of the positions, the upper bound of the velocities and the number of local search iterations, respectively. The second condition increases the number of particles: if both the best solution and the average best solution of all particles have not improved for a consecutive number of iterations, a number of new particles (Particles) is created. All new particles are initialized with random values and use the current parameter values as their initial parameters. The number of new particles is calculated from:
Particles = NP + (Particles − NP)/NP        (12)
where NP is the best value of the number of particles found so far. Finally, the third condition decreases the number of particles: if the best solution has not improved for a consecutive number of iterations and the best value of a particle deviates by more than 5% from the best particle of the swarm, this particle is deleted from the swarm. The number of consecutive iterations was set equal to Max iterations/abs(Initial Number of Particles − Particles) if the number of particles has changed at least once during the iterations; otherwise, it was set equal to Initial Number of Particles − abs(Max iterations − Local Search iter). When the algorithm converges, besides the best solution, the best parameter values have also been found.

4. Computational results

4.1. General results

Two sets with a total of 356 benchmark instances were used to test the algorithm (implemented in modern Fortran and run on an Intel Core i5 2430M @ 2.40 GHz processor with 4 GB RAM under Windows 7 Home Premium 64-bit). The first set, proposed by Solomon [45], includes 56 instances with 100 nodes divided into 6 subsets, and the second, proposed by Gehring and Homberger [12], includes 300 instances of different sizes divided into 5 sets with 200, 400, 600, 800 and 1000 nodes, where each set contains 60 instances and is divided into 6 subsets of 10 instances each. These subsets are denoted R1, R2, C1, C2, RC1 and RC2 in all tables and figures. The results in all tables are the averages over the corresponding instances of each subset; this is how most researchers studying the Vehicle Routing Problem with Time Windows present their results. The subsets differ in the distribution of the nodes in the solution space. All instances denoted R1 and R2 have coordinates randomly generated from a uniform distribution.
On the other hand, all instances denoted C1 and C2 have clustered customers and, finally, the instances denoted RC1 and RC2 have semi-clustered customers (some clustered and some randomly generated). The subsets R1, C1 and RC1 differ from the subsets R2, C2 and RC2 in that the former have tighter time windows, shorter scheduling horizons and smaller vehicle capacities, whereas the latter have softer time windows, longer scheduling horizons and larger vehicle capacities [36,42]. The efficiency of the proposed Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm is measured by the quality of the produced solutions, given as the relative deviation from the best known solution, that is, ω = ((cMAPSO − cBKS)/cBKS) %, where cMAPSO denotes the cost of the solution found by MAPSO and cBKS is the cost of the best known solution. To test the performance of the proposed algorithm, we applied MAPSO (and every other algorithm used in the comparisons) 30 times to each test instance. In Table 1 and in Figs. 1 and 2, the results of the proposed algorithm are presented in detail. Table 1 is divided into six parts according to the number of nodes (customers) of the subsets in each part. The first part contains the 56 instances with 100 nodes proposed by Solomon [45]; the other five parts contain the subsets with 200, 400, 600, 800 and 1000 nodes, respectively, which form the 300 instances proposed by Gehring and Homberger [12].
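The relative deviation used throughout the tables is a direct transcription of the formula above; the small helper below is illustrative, not the authors' code.

```python
# Quality measure: relative deviation (in %) of a solution cost from
# the cost of the best known solution.

def quality(c_mapso: float, c_bks: float) -> float:
    """omega = 100 * (c_MAPSO - c_BKS) / c_BKS."""
    return 100.0 * (c_mapso - c_bks) / c_bks

# e.g. a solution of cost 1212.0 against a best known cost of 1200.0
# deviates by 1%.
omega = quality(1212.0, 1200.0)
```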
In Table 1, the average number of vehicles (NV), the average total traveled distance (TD), the average quality (ω), the average total traveled distance over the first 10 (TDav10), 20 (TDav20) and 30 (TDav30) runs, the corresponding average standard deviations (std10, std20, std30) and the average computational time in seconds (AT(s)) are presented for each instance of each subset. We present the average number of vehicles first because the problem is solved as a hierarchical optimization problem: the primary objective is the minimization of the number of vehicles used to serve the customers and, for that number of vehicles, the secondary objective is the minimization of the traveled distance. Thus, a solution with a shorter traveled distance but a larger number of vehicles was rejected if a solution with fewer vehicles and a longer traveled distance
Fig. 1. Analytical results of the proposed algorithm in the first set of benchmark instances.
Fig. 2. Analytical results of the proposed algorithm in the second set of benchmark instances.
Table 1. Results of the algorithm in the two sets of benchmark instances.

100 customers
       NV      TD        ω      TDav10    std10   TDav20    std20   TDav30    std30   AT(s)
R1     11.92   1209.99   0.01   1211.32   0.92    1211.09   0.85    1210.78   0.84    22.58
RC1    11.50   1384.18   0.00   1385.33   0.89    1385.23   0.77    1384.92   0.78    23.43
C1     10.00   828.38    0.00   829.67    0.92    829.47    0.81    829.15    0.82    18.55
R2     2.73    952.06    0.02   953.22    0.88    953.10    0.77    952.78    0.79    22.04
RC2    3.25    1119.60   0.02   1121.09   1.01    1120.78   0.90    1120.44   0.89    21.58
C2     3.00    589.86    0.00   591.20    0.97    590.72    0.64    590.65    0.84    19.55

200 customers
       NV      TD        ω      TDav10    std10   TDav20    std20   TDav30    std30   AT(s)
R1     18.20   3618.53   0.25   3621.17   1.46    3620.31   1.35    3620.01   1.24    53.45
RC1    18.00   3187.95   0.37   3190.43   1.74    3189.84   1.36    3189.46   1.28    51.28
C1     18.90   2719.09   0.22   2722.11   1.49    2721.01   1.47    2720.65   1.35    59.22
R2     4.00    2942.64   0.40   2945.46   1.42    2944.66   1.43    2944.28   1.33    55.48
RC2    4.30    2545.46   0.39   2547.02   1.66    2547.44   1.44    2547.09   1.32    54.83
C2     6.00    1832.46   0.05   1834.14   1.73    1834.30   1.34    1834.00   1.23    50.48

400 customers
       NV      TD        ω      TDav10    std10   TDav20    std20   TDav30    std30   AT(s)
R1     36.40   8483.26   1.67   8484.92   1.81    8485.19   1.36    8484.84   1.26    99.35
RC1    36.00   7921.42   0.56   7923.14   1.52    7923.49   1.42    7923.08   1.35    97.22
C1     37.60   7220.25   0.75   7222.33   1.52    7222.14   1.40    7221.79   1.29    85.48
R2     8.00    6234.38   1.63   6236.63   1.65    6236.26   1.29    6235.90   1.22    91.32
RC2    8.40    5357.34   1.30   5359.07   1.68    5359.39   1.37    5358.99   1.30    90.35
C2     11.60   3954.52   0.20   3957.09   1.46    3956.48   1.39    3956.14   1.29    86.14

600 customers
       NV      TD         ω     TDav10     std10  TDav20     std20  TDav30     std30  AT(s)
R1     54.50   18,569.92  3.52  18,571.66  1.38   18,571.65  1.36   18,571.52  1.25   155.31
RC1    55.00   16,176.27  0.94  16,178.77  1.26   16,178.36  1.40   16,177.95  1.33   154.29
C1     57.30   14,427.88  0.62  14,430.33  1.63   14,429.75  1.36   14,429.45  1.25   152.47
R2     11.00   12,776.09  4.26  12,778.50  1.50   12,778.13  1.45   12,777.73  1.35   158.19
RC2    11.40   10,726.38  1.37  10,728.03  1.55   10,728.29  1.47   10,727.91  1.36   156.31
C2     17.40   7602.39    0.30  7604.41    1.54   7604.27    1.41   7603.89    1.32   153.28

800 customers
       NV      TD         ω     TDav10     std10  TDav20     std20  TDav30     std30  AT(s)
R1     72.80   32,240.66  3.51  32,242.37  1.51   32,242.32  1.36   32,242.26  1.28   255.49
RC1    72.00   29,617.50  1.13  29,620.72  2.16   29,619.49  1.39   29,619.12  1.30   252.13
C1     74.90   25,482.40  0.22  25,485.74  1.56   25,484.26  1.40   25,483.93  1.28   235.41
R2     15.00   20,478.57  3.27  20,479.81  1.05   20,480.46  1.35   20,480.13  1.25   251.57
RC2    15.40   16,647.78  1.37  16,649.77  1.33   16,649.54  1.34   16,649.24  1.22   248.44
C2     23.20   11,669.78  0.28  11,671.89  1.59   11,671.58  1.35   11,671.28  1.24   244.19

1000 customers
       NV      TD         ω     TDav10     std10  TDav20     std20  TDav30     std30  AT(s)
R1     92.00   50,282.50  5.57  50,284.79  1.59   50,284.34  1.33   50,284.06  1.23   401.38
RC1    90.00   44,962.90  1.53  44,965.60  1.38   44,964.80  1.35   44,964.47  1.24   394.26
C1     94.10   41,982.32  0.73  41,984.47  1.71   41,984.31  1.33   41,983.93  1.26   387.15
R2     19.00   30,314.44  5.40  30,316.87  1.77   30,316.34  1.40   30,315.98  1.30   400.49
RC2    18.20   24,467.57  1.89  24,470.23  1.56   24,469.50  1.44   24,469.16  1.33   395.22
C2     28.90   16,777.49  0.61  16,780.02  1.68   16,779.41  1.46   16,779.07  1.33   391.31
than the other solution was found. The number of vehicles found by the algorithm is very good; we analyze it further in the last part of the comparisons, where the results of the proposed algorithm are compared with those of other algorithms from the literature. In general, the algorithm performs very well, as can be concluded from the quality and the average quality of the results. In the small set of instances the algorithm almost always finds the best known solution. The results for the instances with 200 to 1000 nodes are also very good: for the clustered instances (subsets C1 and C2) the quality of the solutions varies between 0.05 and 0.75, even for the instances with 1000 nodes; for the semi-clustered instances (RC1 and RC2) it varies between 0.37 and 1.89; and for the random instances (R1 and R2) it varies between 0.25 and 5.57. In the last case, although the quality is very good (less than 2) when the number of nodes is 200 or 400, it deteriorates for the larger instances. The reason is that the primary objective is the minimization of the number of vehicles and, since the coordinates of these instances are random, this leads to a deterioration in the quality of the distance values.
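The hierarchical objective described above can be expressed as a lexicographic comparison: a solution is preferred if it uses fewer vehicles, and only when the number of vehicles is equal is the traveled distance compared. An illustrative sketch (not the authors' code):

```python
# Lexicographic acceptance rule for the hierarchical objective:
# minimize the number of vehicles first, then the traveled distance.

def better(sol_a, sol_b):
    """sol = (number_of_vehicles, traveled_distance); returns True if
    sol_a is preferred under the hierarchical objective."""
    nv_a, td_a = sol_a
    nv_b, td_b = sol_b
    if nv_a != nv_b:
        return nv_a < nv_b      # fewer vehicles always wins
    return td_a < td_b          # ties broken by shorter distance

# A longer tour with fewer vehicles beats a shorter tour with more vehicles:
prefers_fewer_vehicles = better((10, 1250.0), (11, 1180.0))
```

This is why a solution with a better traveled distance but a larger number of vehicles is rejected.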
Fig. 3. Analytical comparison of the proposed algorithm with two versions of PSO in the first set of benchmark instances.
To assess the stability of the algorithm, we initially performed 10 runs for each instance in each set (356 instances with 10 runs each, 3560 runs in total). As the results stabilized at specific values, we increased the number of runs per instance first to 20 and finally to 30 (10,680 runs in total). We ran the algorithm this many times in order to see whether increasing the number of executions would deteriorate the computational results. An improvement of the results was hard to obtain, since in most instances the algorithm found the best known solution, or a value close to it, within the first ten runs. Table 1 presents three different average values and the corresponding standard deviations: over the first ten executions of the algorithm (for each instance in each subset), over twenty executions and, finally, over thirty executions. As can be observed, the average values decrease and are close to the best values. Moreover, the standard deviation is not affected by the increase in the number of executions; in most cases it decreases or remains constant. This means that there are no statistical differences between the different executions of the algorithm. As is well known, a swarm intelligence or evolutionary algorithm may, due to its stochasticity, produce very different results across different executions on the same instance; an algorithm that delivers such stable results with low standard deviations is therefore very promising and effective. Demonstrating this for our algorithm is the main reason we performed this kind of analysis.
Regarding the computational time, we observe that the algorithm converges faster on the clustered instances, that the computational times grow with the number of nodes, and that they range from around 20 seconds for the small instances to at most 400 seconds for the instances with 1000 nodes. Figs. 1 and 2 give an analytical presentation of the quality of the results of the proposed algorithm (MAPSO) and of the average quality over ten runs (MAPSOav), respectively. Fig. 1 presents the 56 small instances and Fig. 2 the 300 large instances. The order of the instances in Fig. 1 is the same as in the tables, namely R1, RC1, C1, R2, RC2 and C2, while in Fig. 2 the order is: first the 50 R1 instances for 200, 400, 600, 800 and 1000 nodes, respectively, then the 50 R2 instances, then the 50 RC1 and the 50 RC2 instances and, finally, the 50 C1 and the 50 C2 instances. The same order is followed in all figures (Figs. 3–8). Fig. 1 shows that the proposed algorithm finds the best known solution in almost all instances and that, where it does not, the quality of the solution is less than 0.1 in almost all cases. Fig. 2 leads to the same observations as Table 1. More precisely, the quality for the
Fig. 4. Analytical comparison of the proposed algorithm with two versions of PSO in the second set of benchmark instances.
Fig. 5. Analytical comparison of the proposed algorithm with two versions of the CNTPSO in the first set of benchmark instances.
Fig. 6. Analytical comparison of the proposed algorithm with two versions of the CNTPSO in the second set of benchmark instances.
Fig. 7. Analytical comparison of the proposed algorithm with another adaptive PSO algorithm (ACNTPSO) in the first set of benchmark instances.
Fig. 8. Analytical comparison of the proposed algorithm with another adaptive PSO algorithm (ACNTPSO) in the second set of benchmark instances.
random instances is, initially, very good and less than 1, but as the number of nodes increases (between instances 40 to 50 and 90 to 100 in Fig. 2) the quality deteriorates significantly. However, the results for the RC1, RC2, C1 and C2 subsets are much better, and show smaller variations in quality, than those for R1 and R2, as explained earlier. It should be noted that in some instances the quality of the solutions seems to be very bad, approaching 9% of the best known solution. This is because researchers on the Vehicle Routing Problem with Time Windows use a hierarchical optimization function in which the number of vehicles is optimized first and, afterwards, for the selected number of vehicles, the traveled distance is minimized. Nevertheless, when the proposed algorithm is compared with the most promising algorithms from the literature (see Table 4), it is, despite this fact, among the most effective in the Cumulative Total Distance in all data sets, which is the value most researchers use for comparing algorithms for the solution of the Vehicle Routing Problem with Time Windows. In Table 2, the average values of the best parameters calculated by the proposed algorithm are presented. As mentioned in the description of the algorithm, the algorithm has three different adaptive strategies, one of which is the adaptation (optimization) of the parameters inside the procedure. We use the term adaptation rather than optimization of the parameters, as it is very difficult to find the same set of parameters in every execution of the algorithm. Usually in the literature, these parameters are given in the initial phase of the algorithm and then remain constant for all its iterations.
In the proposed algorithm, a random value is initially assigned to each of these parameters and an adaptation of each of them is then performed inside the algorithm. The significant outcome of the results presented in Table 2 is that all these parameters converged to values that do not differ much from each other. This shows that the adaptation of the parameters spares the user the very difficult and time-consuming task of finding a suitable set of parameters before the execution of the algorithm. More precisely, Table 2 presents the nine parameters: the number of iterations (Max iterations), the number of local search iterations (LS), the number of particles (NP), the four parameters of the velocity equation (c1opt, c2opt, ubound and lbound) and the two parameters of the ACNT equations (w1opt and w2opt). The structure of the table is the same as that of Table 1. The most important outcome of this table is that the results are very stable, independently of the size of the instance and of whether the customers are clustered or not. The number of particles that gave the best results varies between 69.48 and 93.10 depending on the set, c1opt varies between 2.43 and 3.48, and the number of iterations in which the algorithm converges varies between 167.87 and 196.66. Similarly small variations hold for the other six parameters. This is very important because, although it is very difficult to find the optimum set of the parameters, the parameters found in all runs and
Table 2. Average values of the best parameters taken from the proposed algorithm in the two sets of benchmark instances (for the R1 subset the averages over the first 10, the first 20 and all 30 runs are given as R1-10, R1-20 and R1-30).

100 customers
        Max iterations  LS     NP     c1opt  c2opt  w1opt  w2opt  ubound  lbound
R1-10   184.42          36.67  77.42  2.43   2.55   0.98   1.17   3.18    −3.09
R1-20   184.62          36.87  77.60  2.56   2.66   1.07   1.55   3.59    −2.76
R1-30   184.82          37.10  77.56  2.45   2.81   1.32   1.38   3.60    −2.63
RC1     182.96          33.81  93.10  3.02   2.83   1.06   1.30   2.84    −2.83
C1      196.66          36.41  89.88  2.76   2.75   1.25   1.32   3.74    −3.14
R2      185.71          34.98  77.07  2.77   2.65   1.13   1.34   3.33    −2.74
RC2     184.87          32.21  76.08  2.87   2.58   1.03   1.34   3.65    −2.91
C2      185.17          32.47  75.92  3.38   2.90   1.21   1.58   3.76    −2.97

200 customers
        Max iterations  LS     NP     c1opt  c2opt  w1opt  w2opt  ubound  lbound
R1-10   179.80          35.90  80.70  2.60   2.59   1.13   1.49   3.34    −3.13
R1-20   180.09          36.13  80.77  3.07   2.81   1.44   1.75   3.36    −3.13
R1-30   179.85          36.20  80.87  2.83   3.09   1.58   1.49   3.45    −3.05
RC1     175.81          40.55  82.76  3.10   3.00   1.17   1.48   3.72    −2.98
C1      167.87          34.77  83.66  2.64   2.93   1.25   1.40   3.53    −3.27
R2      189.63          39.29  75.81  2.85   2.56   1.26   1.51   3.57    −3.04
RC2     170.47          37.17  76.35  3.17   2.71   1.16   1.61   3.70    −2.85
C2      179.95          37.22  78.14  2.61   2.73   1.30   1.44   3.62    −3.37

400 customers
        Max iterations  LS     NP     c1opt  c2opt  w1opt  w2opt  ubound  lbound
R1-10   174.20          33.90  75.50  2.50   2.39   0.99   1.24   3.55    −3.55
R1-20   174.35          34.13  75.88  2.76   2.58   1.48   1.34   3.86    −3.09
R1-30   174.68          34.25  75.87  2.58   2.86   1.08   1.43   3.61    −3.47
RC1     172.76          37.89  85.43  2.89   2.83   1.19   1.77   3.66    −3.18
C1      185.70          38.66  73.47  2.74   2.72   1.13   1.46   3.34    −3.14
R2      174.84          33.96  79.45  2.84   2.89   1.22   1.81   3.80    −3.23
RC2     186.07          34.99  69.48  2.59   2.79   1.43   1.46   3.62    −3.13
C2      175.84          41.47  78.52  2.80   2.69   1.29   1.53   3.65    −2.96

600 customers
        Max iterations  LS     NP     c1opt  c2opt  w1opt  w2opt  ubound  lbound
R1-10   176.40          36.30  74.20  2.50   2.59   1.00   1.26   3.05    −3.04
R1-20   176.48          36.72  74.50  2.94   2.97   1.12   1.54   3.38    −2.78
R1-30   176.86          36.63  74.70  2.68   2.62   1.41   1.50   3.33    −2.74
RC1     172.90          33.43  78.55  2.53   3.10   1.21   1.77   3.40    −2.62
C1      174.71          36.94  78.32  2.88   3.00   1.17   1.38   3.43    −2.59
R2      184.96          39.63  80.72  2.82   2.74   1.04   1.58   3.62    −3.03
RC2     177.05          33.57  75.84  2.85   2.81   1.29   1.70   3.42    −2.79
C2      181.58          35.70  82.29  2.58   2.84   1.07   1.78   3.52    −3.40

800 customers
        Max iterations  LS     NP     c1opt  c2opt  w1opt  w2opt  ubound  lbound
R1-10   184.40          35.20  84.10  2.53   2.44   1.04   1.30   2.98    −3.32
R1-20   184.41          35.35  84.29  2.69   2.77   1.32   1.46   3.06    −2.95
R1-30   184.43          35.55  84.46  2.63   2.63   1.23   1.56   3.08    −3.13
RC1     174.02          40.04  77.51  2.92   2.79   1.12   1.63   3.21    −3.14
C1      173.14          33.09  73.76  2.60   2.69   1.21   1.79   3.73    −3.21
R2      181.79          36.97  81.45  3.01   2.85   1.17   1.61   3.32    −3.07
RC2     174.49          37.24  74.71  2.54   3.05   1.45   1.53   3.68    −2.97
C2      175.75          34.36  79.12  3.12   2.86   1.22   1.32   3.49    −2.95

1000 customers
        Max iterations  LS     NP     c1opt  c2opt  w1opt  w2opt  ubound  lbound
R1-10   175.70          36.10  80.00  2.45   2.59   1.01   1.28   3.32    −3.35
R1-20   175.82          36.14  80.45  2.68   2.69   1.26   1.42   3.46    −2.86
R1-30   176.17          36.22  80.44  2.71   3.08   1.06   1.48   3.55    −3.29
RC1     169.82          36.55  80.54  2.90   2.94   1.27   1.49   3.37    −3.48
C1      173.04          37.29  79.22  2.68   2.87   1.16   1.67   3.21    −3.05
R2      173.50          37.67  79.53  2.82   3.04   1.54   1.92   3.46    −2.86
RC2     179.64          35.84  79.97  2.96   2.80   1.53   1.56   3.36    −3.12
C2      179.36          34.49  76.52  2.68   3.10   1.19   1.71   3.45    −3.19
in different sets are very close to each other, without any extreme values among them. An extreme value of, say, 500 particles in one instance, with the best number of particles between 70 and 90 in all other instances, would mean that the algorithm cannot converge to any parameter values but yields values strongly dependent on the instance being solved. Also, in Table 2, for the R1 set (for any number of customers), we present, in addition to the average results of the 30 runs of the algorithm, the average results over the first 10 and the first 20 runs. We give these numbers in order to demonstrate the stability of the algorithm. We observe that, independently of the number of executions of the algorithm, the parameters converge to similar values; thus, we can say that the proposed algorithm has located the region where the best parameter values for each instance lie.
Table 3. Comparison of the proposed algorithm with other PSO versions in the two sets of benchmark instances.

100 customers
            MAPSO      CNTPSO1    CNTPSO2    CNTPSOin   HybPSO     ACNTPSO
R1          1209.99    1211.37    1212.87    1214.12    1217.53    1210.08
RC1         1384.18    1386.98    1386.40    1389.14    1392.13    1384.24
C1          828.38     828.38     830.44     828.70     831.16     828.38
R2          952.06     955.60     955.27     958.36     961.86     952.71
RC2         1119.60    1122.30    1122.96    1125.20    1129.54    1120.00
C2          589.86     589.86     591.67     590.36     590.28     589.86
AT(s)100    21.29      22.41      22.28      23.01      23.56      21.85

200 customers
R1          3618.53    3622.61    3623.27    3630.00    3631.28    3621.80
RC1         3187.95    3192.81    3192.46    3199.90    3199.83    3194.91
C1          2719.09    2723.75    2723.25    2732.10    2732.13    2723.51
R2          2942.64    2948.45    2946.25    2955.29    2955.34    2948.35
RC2         2545.46    2553.04    2548.94    2559.16    2560.04    2550.97
C2          1832.46    1836.50    1838.34    1843.85    1842.55    1836.65
AT(s)200    54.12      55.28      55.17      55.49      56.15      54.88

400 customers
R1          8514.11    8490.04    8489.67    8501.35    8497.18    8489.50
RC1         7921.42    7927.69    7926.36    7938.14    7939.26    7928.43
C1          7220.25    7228.83    7226.91    7238.36    7237.82    7228.32
R2          6234.38    6239.81    6238.23    6249.92    6250.31    6241.39
RC2         5357.34    5363.72    5361.49    5373.89    5371.17    5362.63
C2          3954.52    3961.71    3960.29    3971.69    3969.04    3961.65
AT(s)400    91.64      92.15      92.18      92.59      93.18      91.82

600 customers
R1          18,569.92  18,578.00  18,575.70  18,587.66  18,590.57  18,579.04
RC1         16,176.27  16,182.77  16,183.17  16,196.70  16,195.43  16,183.49
C1          14,427.88  14,434.05  14,434.81  14,449.49  14,448.07  14,435.54
R2          12,776.09  12,785.69  12,783.53  12,795.54  12,798.92  12,784.06
RC2         10,726.38  10,732.51  10,736.94  10,745.89  10,744.49  10,732.13
C2          7602.39    7609.35    7610.40    7620.53    7618.37    7607.39
AT(s)600    154.98     155.91     156.18     156.11     157.31     155.08

800 customers
R1          32,240.66  32,249.73  32,252.49  32,269.11  32,269.81  32,253.55
RC1         29,617.50  29,628.14  29,630.40  29,647.45  29,643.18  29,627.98
C1          25,482.40  25,490.10  25,490.95  25,506.69  25,506.75  25,489.86
R2          20,478.57  20,483.56  20,491.11  20,508.33  20,504.99  20,490.26
RC2         16,647.78  16,656.04  16,657.73  16,672.42  16,674.40  16,657.04
C2          11,669.78  11,681.24  11,678.68  11,695.09  11,694.82  11,679.36
AT(s)800    247.87     247.91     247.75     248.85     249.91     247.18

1000 customers
R1          50,282.50  50,296.65  50,296.78  50,315.50  50,320.41  50,293.89
RC1         44,962.90  44,975.69  44,974.54  44,994.37  44,991.54  44,971.80
C1          41,982.32  42,001.79  41,994.18  42,017.42  42,016.71  41,993.43
R2          30,314.44  30,328.71  30,327.24  30,341.30  30,344.15  30,326.22
RC2         24,467.57  24,482.71  24,480.26  24,500.39  24,501.79  24,478.06
C2          16,777.49  16,788.44  16,789.97  16,817.03  16,816.83  16,796.11
AT(s)1000   394.97     398.15     398.28     399.31     402.15     396.28
AT(s)total  160.81     161.96     161.97     162.56     163.71     161.18
4.2. Comparison of the proposed method with other PSO algorithms

In Table 3, a comparison with other versions of the Particle Swarm Optimization algorithm is presented. More precisely, the algorithm is compared with 5 other versions of PSO, mostly published by our research group in the past for the solution of variants of the vehicle routing problem. We use these 5 algorithms because each of them has specific characteristics, and their comparison with the proposed algorithm shows that the adaptiveness used in the proposed algorithm leads to a very efficient implementation of PSO for routing-type problems. First, the algorithm is compared with the HybPSO algorithm [33]. The HybPSO algorithm was used for the solution of the Capacitated Vehicle Routing Problem with very good results, finding the best known solutions in most of the instances on which it was tested. It produces the initial solutions with a Greedy Randomized Adaptive Search Procedure and improves solutions with a variant of the VNS algorithm; it does not use the Combinatorial Neighborhood Topology at all. The second algorithm with which the proposed algorithm is compared is the CNTPSOin algorithm [27]. The CNTPSOin algorithm uses an initial version of the Combinatorial Neighborhood Topology without Eqs. (5) and (6) and was used for the solution of the Capacitated Location Routing Problem. When it was published, some new best
solutions were found (these best solutions have since been improved by other algorithms in the literature). The third and fourth algorithms are two versions of the CNTPSO algorithm [30,31], denoted CNTPSO1 and CNTPSO2. CNTPSO is the basis of the proposed algorithm, as analytically explained in previous sections. For the comparisons, we use two different versions of the CNTPSO algorithm: in the first, CNTPSO1, all parameters are optimized before the algorithm begins and the best set of parameters is used for all instances, while in the second, CNTPSO2, the algorithm uses the best set of parameters produced by the proposed algorithm, different for each instance. Finally, the proposed algorithm is compared with the results of another adaptive PSO algorithm, ACNTPSO, which uses GRASP and VNS in the same way as the proposed algorithm but uses the initial version of CNT and, instead of equations for the adaptation of the parameters, restricts all parameters to specific bounds and realizes the adaptation by adding a constant (random) number to the best value of each parameter. Table 3 shows that the algorithm performs better than all the other algorithms used in the comparisons. To analyze these comparisons further, as they are very important for assessing the effectiveness of the proposed algorithm, we present the analytical results of the proposed algorithm and of the other five algorithms in six figures in addition to Table 3. More precisely, the proposed algorithm is compared, first, with the results of the HybPSO and CNTPSOin algorithms in Figs. 3 and 4 (in Fig. 3 the results for the first set of instances and in Fig. 4 the results for the second set of instances are presented, respectively), then with the results of the CNTPSO1 and CNTPSO2 algorithms in Figs. 5 and 6 (in Fig.
5 the results for the first set of instances and in Fig. 6 the results for the second set are presented, respectively) and, finally, with the results of ACNTPSO in Figs. 7 and 8 (in Fig. 7 for the first set of instances and in Fig. 8 for the second). From Figs. 3 and 4, we can see that the differences in quality between the proposed algorithm and HybPSO and CNTPSOin are significant. In particular, the results of HybPSO deviate by more than 1 in quality from those of the proposed algorithm, with the proposed algorithm performing better. The main difference of the proposed algorithm from these two algorithms is the use of the Adaptive Combinatorial Neighborhood Topology, so it appears that the addition of this topology yields a more effective algorithm. The most important comparisons are presented in Figs. 5 and 6, where the proposed algorithm is compared with the CNTPSO algorithm without the adaptiveness. The improvement in the results is not large, but the proposed algorithm performs better than the other two algorithms on every instance in both figures. It is particularly telling that, when the best parameters found by the proposed algorithm are used as the parameters of the non-adaptive CNTPSO algorithm, that algorithm finds very good results but not better than, or even equal to, those of the proposed algorithm. This proves that what matters most is not the final parameter values but the fact that during the whole procedure the algorithm runs with different parameters, which gives it more exploration and exploitation abilities. Of course, the differences in quality are around 0.5, which is not a large value. Finally, when the algorithm is compared with the other adaptive PSO algorithm (Figs.
7 and 8), the proposed algorithm performed better on all instances, which shows that the adaptation used in the proposed algorithm is suitable for the application of PSO to the problem solved in this paper. Table 3 also presents the average running times over all sets and all executions of the algorithms: separately for the instances with 100 customers, 200 customers, and so on up to 1000 customers, with the overall average running times in the last line of the table. We can see that the computational running times increase with the number of nodes. However, the proposed algorithm is faster than all the other PSO implementations. This is because the proposed algorithm has adaptive procedures in each iteration for finding its parameters and has the ability to increase or decrease the number of particles, the number of local search iterations and the number of iterations of the algorithm. Thus, a more flexible, faster and more effective algorithm has been presented.
4.3. Comparison of the proposed method with the most effective algorithms from the literature

In the following, the proposed algorithm is compared with a number of algorithms from the literature. More precisely, in Table 4 the proposed algorithm is compared with the results of 22 other algorithms on both sets of instances. Most of the results are taken from [17,36,42] and concern the most important algorithms that have been applied to these benchmark sets for the solution of the VRPTW [7,12,17,36,42,47]. To be consistent with the literature, we use the same abbreviations for the algorithms as in [42]. More precisely, the proposed algorithm is compared with a hybrid genetic algorithm (HGSADC) [47], an arc-guided evolutionary algorithm (AGEA) [42], an effective local search algorithm (I05) [21], a two-stage heuristic with ejection pools and generalized ejection chains (LZ) [25], a two-phase metaheuristic (GH02) [13] and an improvement of that algorithm by the same authors, a two-phase hybrid metaheuristic (HG05) [20], a multi-start local search algorithm (BHD) [7], a multi-parametric evolution strategies algorithm (MBD) [37], a cooperative parallel metaheuristic (LC) [4], a guided cooperative search algorithm (LCK) [5], a branch-and-price-based large neighborhood search algorithm (PDR) [39], a discrete particle swarm optimization approach (S-PSO) [17], a two-stage hybrid local search (BVH) [3], an adaptive large neighborhood search (PR) [38], a reactive variable neighborhood search (B) [6], a reactive greedy randomized variable neighborhood tabu search (RPTI) [41], a scatter search algorithm (RC) [44], an evolutionary metaheuristic (HG99) [19], an iterated local search algorithm (I06) [22], an active guided evolution strategies algorithm (MB05) [36], a hybrid shuffled frog leaping algorithm (HSFLA) [26] and a cooperative population learning algorithm (SPLA) [2].
Table 4
Comparison of the proposed algorithm with other algorithms from the literature on both sets of benchmark instances in CNV and CTD.

| Algorithm | CNV (100) | CTD (100) | CNV (200) | CTD (200) | CNV (400) | CTD (400) | CNV (600) | CTD (600) | CNV (800) | CTD (800) | CNV (1000) | CTD (1000) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MAPSO | 405 | 57,197 | 694 | 168,461 | 1380 | 391,712 | 2066 | 802,789 | 2733 | 1,361,367 | 3422 | 2,087,872 |
| HGSADC [47] | 405 | 57,196 | 694 | 168,092 | 1381 | 388,013 | 2068 | 786,373 | 2739 | 1,334,963 | 3420 | 2,036,700 |
| AGEA [42] | 405 | 57,216 | 694 | 169,163 | 1381 | 395,936 | 2066 | 816,326 | 2739 | 1,424,321 | 3428 | 2,144,830 |
| I05 [21] | 405 | 57,444 | – | – | – | – | – | – | – | – | – | – |
| LZ [25] | 405 | 57,368 | 694 | 169,296 | 1382 | 393,695 | 2068 | 802,681 | 2742 | 1,372,427 | 3429 | 2,071,643 |
| GH02 [13] | 406 | 57,641 | 696 | 179,328 | 1392 | 428,489 | 2079 | 890,121 | 2760 | 1,535,849 | 3446 | 2,290,367 |
| HG05 [20] | 408 | 57,422 | 699 | 180,602 | 1397 | 431,089 | 2088 | 890,293 | 2773 | 1,516,648 | 3459 | 2,288,819 |
| BHD [7] | 406 | 57,422 | 695 | 172,406 | 1391 | 399,132 | 2084 | 820,372 | 2776 | 1,384,306 | 3465 | 2,133,376 |
| MBD [37] | 406 | 56,812 | 695 | 169,968 | 1390 | 394,818 | – | – | – | – | – | – |
| LC [4] | 407 | 57,412 | 694 | 173,061 | 1390 | 408,281 | 2088 | 836,261 | 2766 | 1,475,281 | 3451 | 2,225,366 |
| PDR [39] | 405 | 57,255 | 694 | 168,556 | 1385 | 389,011 | 2071 | 800,797 | 2745 | 1,391,344 | 3432 | 2,096,823 |
| S-PSO [17] | 422 | 58,677 | – | – | – | – | – | – | – | – | – | – |
| BVH [3] | 405 | 57,567 | – | – | – | – | – | – | – | – | – | – |
| PR [38] | 405 | 57,332 | 694 | 169,042 | 1385 | 393,210 | 2071 | 807,470 | 2758 | 1,358,291 | 3438 | 2,110,925 |
| B [6] | 405 | 57,710 | 694 | 176,244 | 1390 | 412,088 | – | – | – | – | – | – |
| LCK [5] | 405 | 57,360 | 694 | 169,958 | 1389 | 396,611 | 2086 | 809,493 | 2761 | 1,412,363 | 3442 | 2,133,645 |
| RPTI [41] | 406 | 58,006 | – | – | – | – | – | – | – | – | – | – |
| RC [44] | 408 | 57,589 | – | – | – | – | – | – | – | – | – | – |
| HG99 [19] | 406 | 57,876 | – | – | – | – | – | – | – | – | – | – |
| I06 [22] | 407 | 57,437 | 694 | 170,331 | 1384 | 401,285 | 2070 | 827,192 | 2750 | 1,421,225 | 3431 | 2,155,374 |
| MB05 [36] | – | – | 694 | 168,573 | 1389 | 390,386 | 2082 | 796,172 | 2765 | 1,361,586 | 3446 | 2,078,110 |
| HSFLA [26] | 405 | 57,186.87 | – | – | – | – | – | – | – | – | – | – |
| SPLA [2] | 406 | 57,198.93 | – | – | – | – | – | – | – | – | – | – |
In addition to these algorithms, a large number of researchers have solved the vehicle routing problem with time windows with different algorithms. However, these algorithms could not be used in the comparisons, as their authors tested them only on selected instances from the whole set of 356 instances, which makes a direct comparison with the proposed algorithm very difficult. The algorithms used in the comparisons are mainly single-population metaheuristics with very effective strategies for escaping local optima, such as tabu search, large neighborhood search and variable neighborhood search; there are a few evolutionary algorithms with local search and even fewer swarm intelligence algorithms.

Presenting the full results for each set and each number of nodes would produce a huge table (or tables) in which the most important outcomes would not stand out. Thus, we report only two values for each algorithm, the Cumulative Number of Vehicles (CNV) and the Cumulative Total Distance (CTD); readers interested in the detailed results of these algorithms may consult either the corresponding publications or the presentation and analysis in [42]. The results are divided into 6 sets based on the number of nodes. For the CNV, the proposed algorithm has the smallest number of vehicles in 5 out of the 6 categories (in the case of 1000 customers, only one algorithm has a smaller number of vehicles). This is a very important outcome, as the proposed algorithm focuses on the minimization of the number of vehicles. For the CTD, the results of the proposed algorithm are very satisfactory: it has the second smallest CTD when the number of nodes is equal to 200, the third smallest when the number of customers is equal to 800, the fourth smallest when the number of nodes is equal to 100, 400 and 1000, and the fifth smallest when the number of nodes is equal to 600.
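CNV and CTD are plain sums of the per-instance results over a benchmark set. A minimal sketch, using hypothetical per-instance figures:

```python
def cnv_ctd(results):
    """Cumulative Number of Vehicles (CNV) and Cumulative Total Distance (CTD)
    over a set of per-instance (vehicles, distance) results."""
    cnv = sum(vehicles for vehicles, _ in results)
    ctd = sum(distance for _, distance in results)
    return cnv, ctd

# hypothetical best-found results for three instances: (vehicles, total distance)
results = [(10, 828.94), (11, 1050.0), (9, 791.3)]
cnv, ctd = cnv_ctd(results)  # cnv == 30, ctd ≈ 2670.24
```

Because both measures are cumulative, an algorithm that skips even one instance of a set cannot be compared on them, which is exactly why partially tested algorithms are excluded above.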
Thus, we can say that the proposed algorithm is a very effective algorithm and one of the best performing algorithms for the solution of the Vehicle Routing Problem with Time Windows.

5. Conclusions and future research

In this paper, a new algorithm based on PSO, the Multi-Adaptive PSO (MAPSO), is presented for the solution of the Vehicle Routing Problem with Time Windows. The algorithm has three adaptive strategies: the first initializes the solutions using the Greedy Randomized Adaptive Search Procedure (GRASP), the second uses an Adaptive Memory procedure to replace the Path Relinking procedure in the Combinatorial Neighborhood Topology, and the last is an adaptive strategy for the calculation of the parameters of the PSO. Each of these strategies is very important. The first is important because very good initial solutions are produced. The second is important because the algorithm does not need any transformation to continuous values and, with the use of the adaptive memory, tours that have repeatedly belonged to the best solution have a high probability of belonging to the global best solution. Finally, the third is important because it allows the user to run the algorithm without first finding a good set of parameters, as the best parameter values are determined inside the algorithm during its execution; the algorithm can thus be used as a black box. The algorithm was tested on the two classic sets of benchmark instances and compared with other PSO implementations and with other algorithms from the literature, and it gave very good results. Our future research will focus on the application of the proposed algorithm to more difficult routing problems.
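The adaptive-memory idea summarized in the conclusions (tours that repeatedly appear in best solutions are favoured as building blocks) can be sketched as follows; the frequency counter and the selection rule are illustrative assumptions, not the paper's exact procedure:

```python
from collections import Counter

class AdaptiveMemory:
    """Track how often each route appears in iteration-best solutions and
    favour the most frequent routes when seeding a new solution (sketch)."""

    def __init__(self):
        self.freq = Counter()

    def record(self, solution):
        # a solution is a list of routes; each route is a tuple of customer ids
        for route in solution:
            self.freq[route] += 1

    def promising_routes(self, k):
        # routes seen most often in best solutions are reused as building blocks
        return [route for route, _ in self.freq.most_common(k)]

mem = AdaptiveMemory()
mem.record([(1, 2, 3), (4, 5)])
mem.record([(1, 2, 3), (6, 7)])
# route (1, 2, 3) has appeared twice, so it ranks first
```

Working directly on routes in this way avoids any encoding of particles as continuous vectors, which matches the second advantage stated above.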
References

[1] T.J. Ai, V. Kachitvichyanukul, A particle swarm optimization for the vehicle routing problem with simultaneous pickup and delivery, Comput. Oper. Res. 36 (2009) 1693–1702.
[2] D. Barbucha, A cooperative population learning algorithm for vehicle routing problem with time windows, Neurocomputing 146 (2014) 210–229.
[3] R. Bent, P. van Hentenryck, A two-stage hybrid local search for the vehicle routing problem with time windows, Transp. Sci. 38 (2004) 515–530.
[4] A.L. Bouthillier, T.G. Crainic, Co-operative parallel metaheuristic for vehicle routing with time windows, Comput. Oper. Res. 32 (2005) 1685–1708.
[5] A.L. Bouthillier, T.G. Crainic, P. Kropf, A guided cooperative search for the vehicle routing problem with time windows, IEEE Trans. Intell. Syst. 20 (2005) 36–42.
[6] O. Braysy, A reactive variable neighborhood search for the vehicle routing problem with time windows, INFORMS J. Comput. 15 (2003) 347–368.
[7] O. Braysy, G. Hasle, W. Dullaert, A multi-start local search algorithm for the vehicle routing problem with time windows, Eur. J. Oper. Res. 159 (2004) 586–605.
[8] W. Chaovalitwongse, D. Kim, P.M. Pardalos, GRASP with a new local search scheme for vehicle routing problems with time windows, J. Comb. Optim. 7 (2) (2003) 179–207.
[9] A.L. Chen, G.K. Yang, Z.M. Wu, Hybrid discrete particle swarm optimization algorithm for capacitated vehicle routing problem, J. Zhejiang Univ. Sci. A 7 (4) (2006) 607–614.
[10] M. Clerc, J. Kennedy, The particle swarm: explosion, stability and convergence in a multi-dimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[11] T.A. Feo, M.G.C. Resende, Greedy randomized adaptive search procedure, J. Global Optim. 6 (1995) 109–133.
[12] H. Gehring, J. Homberger, A parallel hybrid evolutionary metaheuristic for the vehicle routing problem with time windows, in: Proceedings of EUROGEN 99, University of Jyvaskyla, Jyvaskyla, Finland, 1999, pp. 57–64.
[13] H. Gehring, J. Homberger, Parallelization of a two-phase metaheuristic for routing problems with time windows, J. Heuristics 8 (2002) 251–276.
[14] F. Glover, Tabu search I, ORSA J. Comput. 1 (3) (1989) 190–206.
[15] F. Glover, M. Laguna, R. Marti, Scatter search and path relinking: advances and applications, in: F. Glover, G.A. Kochenberger (Eds.), Handbook of Metaheuristics, Kluwer Academic Publishers, Boston, 2003, pp. 1–36.
[16] B.L. Golden, A.A. Assad, Vehicle Routing: Methods and Studies, North-Holland, Amsterdam, 1988.
[17] Y.J. Gong, J. Zhang, O. Liu, R.Z. Huang, H.S.H. Chung, Y.H. Shi, Optimizing the vehicle routing problem with time windows: a discrete particle swarm optimization approach, IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 42 (2) (2012) 254–267.
[18] P. Hansen, N. Mladenovic, Variable neighborhood search: principles and applications, Eur. J. Oper. Res. 130 (2001) 449–467.
[19] J. Homberger, H. Gehring, Two evolutionary meta-heuristics for the vehicle routing problem with time windows, INFOR 37 (1999) 297–318.
[20] J. Homberger, H. Gehring, A two-phase hybrid metaheuristic for the vehicle routing problem with time windows, Eur. J. Oper. Res. 162 (2005) 220–238.
[21] T. Ibaraki, S. Imahori, M. Kudo, T. Masuda, T. Uno, M. Yagiura, Effective local search algorithms for routing and scheduling problems with general time window constraints, Transp. Sci. 39 (2005) 206–232.
[22] T. Ibaraki, S. Imahori, K. Nonobe, K. Sobue, T. Uno, M. Yagiura, An iterated local search algorithm for the vehicle routing problem with convex time penalty functions, Discrete Appl. Math. 156 (2008) 2050–2069.
[23] J. Li, P.M. Pardalos, H. Sun, J. Pei, Y. Zhang, Iterated local search embedded adaptive neighborhood selection approach for the multi-depot vehicle routing problem with simultaneous deliveries and pickups, Expert Syst. Appl. 42 (7) (2015) 3551–3561.
[24] J. Li, Y. Li, P.M. Pardalos, Multi-depot vehicle routing problem with time windows under shared depot resources, J. Comb. Optim. 31 (2) (2016) 515–532.
[25] A. Lim, X. Zhang, A two-stage heuristic with ejection pools and generalized ejection chains for the vehicle routing problem with time windows, INFORMS J. Comput. 19 (2007) 443–457.
[26] J. Luo, X. Li, M.R. Chen, H. Liu, A novel hybrid shuffled frog leaping algorithm for vehicle routing problem with time windows, Inf. Sci. 316 (2015) 266–292.
[27] Y. Marinakis, M. Marinaki, A particle swarm optimization algorithm with path relinking for the location routing problem, J. Math. Modell. Algorithms 7 (1) (2008) 59–78.
[28] Y. Marinakis, M. Marinaki, A hybrid genetic - particle swarm optimization algorithm for the vehicle routing problem, Expert Syst. Appl. 37 (2010) 1446–1455.
[29] Y. Marinakis, M. Marinaki, A hybrid multi-swarm particle swarm optimization algorithm for the probabilistic traveling salesman problem, Comput. Oper. Res. 37 (2010) 432–442.
[30] Y. Marinakis, M. Marinaki, Combinatorial neighborhood topology particle swarm optimization algorithm for the vehicle routing problem, in: M. Middendorf, C. Blum (Eds.), EvoCOP 2013, LNCS 7832, 2013, pp. 133–144.
[31] Y. Marinakis, M. Marinaki, Combinatorial expanding neighborhood topology particle swarm optimization for the vehicle routing problem with stochastic demands, in: GECCO 2013, Genetic and Evolutionary Computation Conference, 6–10 July 2013, Amsterdam, The Netherlands, 2013, pp. 49–56.
[32] Y. Marinakis, G. Iordanidou, M. Marinaki, Particle swarm optimization for the vehicle routing problem with stochastic demands, Appl. Soft Comput. 13 (4) (2013) 1693–1704.
[33] Y. Marinakis, M. Marinaki, G. Dounias, A hybrid particle swarm optimization algorithm for the vehicle routing problem, Eng. Appl. Artif. Intell. 23 (2010) 463–472.
[34] Y. Marinakis, M. Marinaki, A. Migdalas, Adaptive tunning of all parameters in a multi-swarm particle swarm optimization algorithm: an application to the probabilistic traveling salesman problem, in: A. Migdalas, A. Karakitsiou (Eds.), Optimization, Control, and Applications in the Information Age, Springer Proceedings in Mathematics & Statistics, vol. 130, 2015, pp. 187–207.
[35] Y. Marinakis, M. Marinaki, A. Migdalas, Particle swarm optimization for the vehicle routing problem: a survey and a comparative analysis, in: R. Marti, et al. (Eds.), Handbook of Heuristics, 2018, pp. 1163–1196.
[36] D. Mester, O. Braysy, Active guided evolution strategies for large-scale vehicle routing problems with time windows, Comput. Oper. Res. 32 (2005) 1593–1614.
[37] D. Mester, O. Braysy, W. Dullaert, A multi-parametric evolution strategies algorithm for vehicle routing problems, Expert Syst. Appl. 32 (2006) 508–517.
[38] D. Pisinger, S. Ropke, A general heuristic for vehicle routing problems, Comput. Oper. Res. 34 (2006) 2403–2435.
[39] E. Prescott-Gagnon, G. Desaulniers, L.M. Rousseau, A branch and price based large neighborhood search algorithm for the vehicle routing problem with time windows, Networks 54 (2009) 190–204.
[40] Y. Qiu, L. Wang, X. Xu, X. Fang, P.M. Pardalos, Formulations and branch-and-cut algorithms for multi-product multi-vehicle production routing problems with startup cost, Expert Syst. Appl. 98 (2018) 1–10.
[41] P.P. Repoussis, D.C. Paraskevopoulos, C.D. Tarantilis, G. Ioannou, A reactive greedy randomized variable neighborhood tabu search for the vehicle routing problem with time windows, in: Hybrid Metaheuristics 2006, LNCS 4030, Springer-Verlag, New York, 2006, pp. 124–138.
[42] P.P. Repoussis, C.D. Tarantilis, G. Ioannou, Arc-guided evolutionary algorithm for the vehicle routing problem with time windows, IEEE Trans. Evol. Comput. 13 (3) (2009) 624–647.
[43] Y. Rochat, E.D. Taillard, Probabilistic diversification and intensification in local search for vehicle routing, J. Heuristics 1 (1995) 147–167.
[44] R.A. Russell, W.C. Chiang, Scatter search for the vehicle routing problem with time windows, Eur. J. Oper. Res. 169 (2006) 606–622.
[45] M.M. Solomon, Algorithms for the vehicle routing and scheduling problems with time window constraints, Oper. Res. 35 (2) (1987) 254–265.
[46] P. Toth, D. Vigo, Vehicle Routing: Problems, Methods and Applications, second ed., MOS-SIAM Series on Optimization, SIAM, Philadelphia, 2014.
[47] T. Vidal, T.G. Crainic, M. Gendreau, C. Prins, A hybrid genetic algorithm with adaptive diversity management for a large class of vehicle routing problems with time-windows, Comput. Oper. Res. 40 (2013) 475–489.
[48] E.T. Yassen, M. Ayob, M.Z.A. Nazri, N.R. Sabar, Meta-harmony search algorithm for the vehicle routing problem with time windows, Inf. Sci. 325 (2015) 140–158.
[49] Z.H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 39 (6) (2009) 1362–1381.
[50] T. Zhang, W.A. Chaovalitwongse, Y.J. Zhang, P.M. Pardalos, The hot-rolling batch scheduling method based on the prize collecting vehicle routing problem, J. Ind. Manage. Optim. 5 (4) (2009) 749–765.