An Adaptive Hybrid Algorithm for Vehicle Routing Problems with Time Windows
Esam Taha Yassen 1,2, Masri Ayob 1, Mohd Zakree Ahmad Nazri 1 and Nasser R. Sabar 3

1 Data Mining and Optimization Research Group (DMO), Center for Artificial Intelligence (CAIT), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia. [email protected], {masri,zakree}@ukm.edu.my
2 Information Technology Centre, University of Anbar, Iraq. [email protected]
3 Queensland University of Technology, 2 George St, Brisbane City QLD 4000, Australia. [email protected]
Abstract
The harmony search algorithm (HSA) has been proven to be an effective optimization method for solving diverse optimization problems. However, due to its slow convergence, the performance of HSA on constrained optimization problems is not very competitive. Therefore, many researchers have hybridized HSA with local search algorithms. However, it is very difficult to know in advance which local search should be hybridized with HSA, as this depends heavily on the problem characteristics. The question is how to design an effective selection mechanism to adaptively select a suitable local search to be combined with HSA during the search process. Therefore, this work proposes an adaptive HSA that embeds an adaptive selection mechanism to select a suitable local search algorithm to be applied. This work hybridizes HSA with five local search algorithms: hill climbing, simulated annealing, record-to-record travel, reactive tabu search and great deluge. We use Solomon's vehicle routing problem with time windows benchmark to examine the effectiveness of the proposed algorithm. The obtained results are compared with the basic HSA, the local search algorithms and existing methods. The results demonstrate that the proposed adaptive HSA achieves very good results compared to other methods. This demonstrates that the selection mechanism can effectively assist HSA in adaptively selecting a suitable local search during the problem-solving process.
Keywords: harmony search algorithm; vehicle routing; adaptive algorithm; adaptive selection mechanism; metaheuristics.
1. Introduction
The harmony search algorithm (HSA), proposed in 2001 by Geem et al. [1], is a population-based approach inspired by the process of musical improvisation. HSA mimics the natural steps of musical improvisation taken by musicians to improve their musical tones. It has been shown to be an effective approach for tackling numerous difficult real-world problems, such as binary-coded problems [2], nurse rostering [6], university course timetabling [3], structural optimization [4], dynamic optimization [7], [8], the portfolio selection problem [5], and other optimization problems [9], [10].
HSA is similar to other population-based approaches with regard to the problem of slow convergence [10], [11], [12], [13]. This problem usually occurs because such approaches operate on a population of solutions scattered over the landscape of the search space. In addition, most population-based approaches are not good at exploiting the areas around the explored solutions [14], [15]. A local search algorithm (LS) has therefore been used within population-based approaches to address this issue [14]. An LS has good exploitation performance and can thus improve the convergence of HSA. To this end, various LSs have been introduced in the literature. Hence, the following question arises [15]: "Which LS algorithm should be used within HSA?" Different LS algorithms may perform well on different instances or only during certain stages of the solving process.
Studies such as [16-21] have successfully utilized adaptive parameter setting (internal tuning) to address the problem of selecting suitable parameter values or mutation/crossover strategies for evolutionary algorithms. Their success in attaining good results is because parameter setting depends on two issues: the problem features and the search area [17], [18].
Inspired by the studies mentioned above, this work proposes an adaptive HSA that uses an adaptive selection mechanism and a set of LS algorithms. The question is "how to use an effective selection mechanism to adaptively select a suitable local search to be combined with HSA during the search process?" Among the various selection mechanisms available, such as roulette wheel and tournament mechanisms, we adopt a multi-armed bandit (MAB) selection mechanism [17]. MAB has been effectively used in solving various optimization problems, such as the royal road problem. The MAB is utilized to control which LS is selected to improve the current solution, based on each LS's previous improvement strength during the search process.
Five common local search algorithms (LSs), hill climbing, simulated annealing, record-to-record travel, great deluge and reactive tabu search, are used to enhance the exploitation performance of HSA. That is, at each iteration, the solutions generated by HSA are further improved by an LS, and the decision of which LS should be applied is determined by the MAB. Our aims are as follows:
1- To improve the exploitation ability of HSA by hybridizing it with local search algorithms.
2- To propose an adaptive HSA that uses MAB as a selection mechanism that adaptively and effectively selects a suitable LS from a given set of LSs in an on-line manner.
3- To examine the performance of the standard HSA, the hybrid HSA and the adaptive HSA on a well-known hard optimization problem, Solomon's benchmark [22].
The results of the proposed HSA are compared against the basic HSA (without the selection mechanism), the five local search algorithms (implemented herein) and the best results reported in the literature.
2. Problem Description
Transportation and distribution systems are among the most significant activities in numerous real-world applications. These systems comprise various problems, such as grouping and scheduling. The vehicle routing problem (VRP) [23] is a well-known challenging problem in the transportation field. VRP seeks a set of vehicle trips to serve a group of customers [22]. The goal is to minimize the total cost (distance or time). A variant of VRP known as the vehicle routing problem with time windows (VRPTW) has been widely studied in the literature because the time window constraint embodies real-life conditions [24]. Consequently, this work focuses on VRPTW, in which a particular group of customers are distributed in various areas, and each of them has a specific demand to be delivered or picked up within its pre-defined time window. The goal is to generate a set of vehicle trips to serve all customers at minimal cost while satisfying the following constraints: (i) each customer should be visited within its time window; (ii) the demands assigned to each route should not be greater than the vehicle capacity; (iii) each vehicle must commence at the depot and terminate at the depot; (iv) split deliveries should be avoided.
Many solution techniques have been suggested to solve VRPTW. These techniques are divided into two groups [24]: exact methods and meta-heuristic methods. Exact methods are capable of achieving optimal results [25]. Nonetheless, they are only recommended for small-sized problems [24]. Thus, for large VRPTW instances, researchers have employed meta-heuristic methods, as they can provide good quality solutions in a practical amount of time, yet they do not guarantee the optimality of these solutions [26]. Such techniques include, for example, particle swarm optimization [27], simulated annealing [30], genetic algorithms [28], GRASP [29], and tabu search [31].
The VRPTW is described using a graph (N, A). N refers to the set of nodes N = {0, 1, 2, ..., n, n+1}, where nodes 0 and n+1 refer to the depot and the remaining nodes 1, 2, ..., n refer to the set of customers, symbolized by C. Connections between the nodes are represented by the arc set A = {(i, j) : i ≠ j and i, j ∈ N}, in which all routes should originate at node 0 and terminate at node n+1. Each arc (i, j) ∈ A has two elements: a cost (c_ij) and a travel time (t_ij), where the latter encompasses the service time of customer i. The symbol V refers to the set of vehicles, whose capacities q are identical. Each customer i, i ∈ C, has a demand d_i and should be visited within the time window [a_i, b_i]. All vehicles must depart from the depot (node 0) and return to the depot (node n+1). If a vehicle arrives before a_i, it should wait until the beginning of the time window; if it arrives after b_i, it cannot serve the customer.
All vehicles depart from the depot (node 0) at time 0. There are two kinds of variables, X_ijk and S_ik. The decision variable X_ijk is defined as follows [32]: X_ijk = 1 if vehicle k travelled directly from customer i to customer j, and 0 otherwise.

The decision variable S_ik refers to the time at which vehicle k starts to serve customer i. Consequently, if customer i is not served by vehicle k, S_ik has no meaning. It can be assumed that S_0k = 0, while S_{n+1,k} refers to the arrival time of vehicle k at the depot (node n+1). The goal of solving VRPTW is to generate feasible routes that serve all customers at minimal cost (see Equation (1)) while satisfying the four conditions mentioned above. The mathematical formulation is as follows [32]:
\min f(S) = \sum_{k \in V} \sum_{(i,j) \in A} c_{ij} X_{ijk}    (1)

s.t.

\sum_{k \in V} \sum_{j \in N} X_{ijk} = 1,   \forall i \in C    (2)

\sum_{i \in C} d_i \sum_{j \in N} X_{ijk} \le q,   \forall k \in V    (3)

\sum_{j \in N} X_{0jk} = 1,   \forall k \in V    (4)

\sum_{i \in N} X_{ihk} - \sum_{j \in N} X_{hjk} = 0,   \forall h \in C, \forall k \in V    (5)

\sum_{i \in N} X_{i,n+1,k} = 1,   \forall k \in V    (6)

X_{ijk}\,(S_{ik} + t_{ij} - S_{jk}) \le 0,   \forall (i,j) \in A, \forall k \in V    (7)

a_i \le S_{ik} \le b_i,   \forall i \in N, \forall k \in V    (8)

X_{ijk} \in \{0, 1\},   \forall (i,j) \in A, \forall k \in V    (9)
Based on the mathematical model above, the constraint set (2) ensures that each customer is served one time only. Constraint set (3) ensures that the capacity of the vehicle is not surpassed.
The flow constraint sets (4), (5) and (6) ensure that each vehicle k departs from node 0 exactly once, departs from node h, h ∈ C, only if it enters that node, and returns to node n+1. It is worth noting that set (6) is redundant, yet it is kept to underline the structure of the network. To allow empty tours, the arc (0, n+1) is included. Constraint set (7) ensures that vehicle k cannot start servicing node j before S_ik + t_ij when it moves from node i to node j. All time windows are enforced through constraint set (8), and finally (9) imposes the integrality (binary) restrictions on the decision variables.
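As an illustration of the objective in Equation (1) and of the capacity and time-window constraints, the following minimal Python sketch evaluates a solution represented as a list of routes (lists of customer indices, with the depot implicit at both ends). The data fields and helper names (coords, demand, ready, due, service, capacity) are illustrative assumptions, not the authors' data structures.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(route, coords):
    """Travel distance of one route: depot -> customers -> depot."""
    path = [0] + route + [0]
    return sum(dist(coords[i], coords[j]) for i, j in zip(path, path[1:]))

def total_cost(solution, coords):
    """Objective f(S): total travelled distance over all routes (Eq. (1))."""
    return sum(route_cost(r, coords) for r in solution)

def route_feasible(route, coords, demand, ready, due, service, capacity):
    """Check the capacity (3) and time-window (7)-(8) constraints for one route."""
    if sum(demand[c] for c in route) > capacity:
        return False
    time, prev = 0.0, 0
    for c in route:
        time += dist(coords[prev], coords[c])   # arrival time at customer c
        time = max(time, ready[c])              # wait if arriving before a_c
        if time > due[c]:                       # arriving after b_c is infeasible
            return False
        time += service[c]
        prev = c
    # the vehicle must be able to return to the depot within the depot's window
    return time + dist(coords[prev], coords[0]) <= due[0]
```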
3. Basic Harmony Search Algorithm
HSA was introduced as a nature-inspired population-based approach [1]. HSA works with a population of solutions and applies various procedures to evolve a new population of solutions. These procedures are known as memory consideration, pitch adjustment and random consideration. In what follows, we present the working steps of the basic HSA.
Step 1: Parameter initialization. The parameters are:
- Harmony memory size (HMS): the number of solutions stored in the harmony memory (HM).
- Harmony memory consideration rate (HMCR): controls the process of evolving new solutions. It takes a value between 0 and 1. Each element of a new solution is either selected from those in HM or randomly generated, according to the HMCR value.
- Pitch adjustment rate (PAR): used to improve the diversification process. It takes a value between 0 and 1. Based on the assigned value, the elements of a new solution are either randomly modified or kept unchanged.
- The number of improvisations (NI): the maximum number of iterations used to halt the search process.
Step 2: Initialization of HM. This step creates a set of initial solutions; the number of solutions is equal to HMS. For VRPTW, we use the following procedure to create the initial set of solutions:
i) Create one empty route, R.
ii) Randomly select one un-routed customer and add it to R. Repeat until no more un-routed customers can be added.
iii) If all customers have been added, stop. Otherwise, go to i.
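A minimal sketch of this construction procedure is given below, assuming a hypothetical predicate can_append(route, customer) that checks whether a customer still fits the route's capacity and time windows (for instance, the route_feasible check sketched earlier).

```python
import random

def build_initial_solution(customers, can_append):
    """Step 2 sketch: open a route, fill it with random un-routed customers that fit."""
    unrouted = list(customers)
    random.shuffle(unrouted)                    # randomize the insertion order
    solution = []
    while unrouted:
        route = []                              # i) one empty route R
        for c in list(unrouted):                # ii) add un-routed customers that fit
            if can_append(route, c):
                route.append(c)
                unrouted.remove(c)
        if not route:                           # guard: always place at least one customer
            route.append(unrouted.pop(0))
        solution.append(route)                  # iii) repeat until all customers are routed
    return solution
```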
Step 3: Improvisation process. This step is responsible for evolving a new solution. For VRPTW, we improvise a new solution using the following procedure:
i) Create an empty solution X.
ii) Draw a random number r between 0 and 1.
iii) If r < HMCR, select a route R from HM.
iv) Otherwise (r ≥ HMCR), randomly generate a route R.
v) Add R to X.
vi) If R was taken from HM, generate a random number rr within [0, 1].
vii) If rr < PAR, modify R using a neighborhood operator.
viii) Repeat ii to vii until a complete solution X is obtained.
ix) If X is infeasible, repair X by inserting the missed customers and removing the duplicated ones.
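The following minimal Python sketch illustrates this route-based improvisation, assuming each solution in HM is a list of routes. The helpers neighborhood_op, random_route and repair stand in for the operators described above; this is an illustrative sketch, not the authors' implementation.

```python
import random

def improvise(hm, hmcr, par, n_customers, neighborhood_op, random_route, repair):
    new_solution = []                                        # i)  empty solution X
    while sum(len(r) for r in new_solution) < n_customers:   # viii) until X is complete
        if random.random() < hmcr:                           # ii)-iii) memory consideration
            route = random.choice(random.choice(hm))         # a route from a stored solution
            if random.random() < par:                        # vi)-vii) pitch adjustment
                route = neighborhood_op(route)
        else:                                                # iv) random consideration
            route = random_route()
        new_solution.append(list(route))                     # v) add R into X
    return repair(new_solution)                              # ix) fix missing/duplicated customers
```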
Step 4: HM updating. Calculate the quality of X using Equation (1). Replace the worst solution in HM with X if X is better.
Step 5: Termination. If the total number of improvisations NI has been reached, terminate the process and return the best solution in HM. Otherwise, go to Step 3.
4. The proposed algorithm
Hybridizing local search algorithms with population-based algorithms has been shown to provide very good results for diverse optimization problems [12], [13]. The rationale for such a hybrid paradigm is that LS algorithms are good at exploitation, while population-based approaches are good at exploration [14]. Therefore, hybridizing them may produce an effective search method [15]. This hybrid paradigm is called memetic algorithms [14]. Motivated by the success of memetic algorithms, and to further enhance the exploitation mechanism in HSA, we hybridized HSA with LS algorithms. However, due to the existence of many LS algorithms in the literature, it is very difficult to determine the most suitable one for hybridization with HSA, as each LS has a different strategy to escape from local optima and may be suitable only for certain instances and/or stages of the search process. Furthermore, different VRPTW instances have different characteristics, such as the distribution of customers, the vehicle capacity and the time windows. As a result, it is difficult to determine which LS algorithm can provide the best results across all problem instances.
Therefore, to decide which LS algorithm should be hybridized with HSA, we propose an adaptive HSA that uses the multi-armed bandit selection strategy to choose the LS algorithm to be utilized at each decision point. The selection is based on the improvement strength that each LS algorithm obtained in its previous applications. The proposed adaptive HSA works as follows (see Figure 1):
i) Given a set of LS algorithms (see Section 4.1) and their recent improvement strengths, call MAB (see Section 4.2) to select one LS to be applied to the generated solution.
ii) Update the improvement strength of the selected LS and the solutions of the HM, and check the termination criterion of HSA.
iii) If the termination criterion has been reached (maximum number of improvisations, NI), terminate the search and save the best solution. Otherwise, start a new iteration.
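A high-level Python sketch of this control flow is given below. It assumes hypothetical helpers improvise(), update_hm() and cost(), and an mab object exposing select()/update() as sketched in Section 4.2; it illustrates the loop above rather than the authors' implementation.

```python
def adaptive_hsa(hm, local_searches, mab, improvise, update_hm, cost, n_iterations):
    for _ in range(n_iterations):
        solution = improvise(hm)                       # HSA generates a new solution
        ls_index = mab.select()                        # i)  MAB picks one LS
        improved = local_searches[ls_index](solution)  # apply the selected LS
        reward = (cost(solution) - cost(improved)) / cost(solution)   # Eq. (16)
        mab.update(ls_index, reward)                   # ii) update improvement strength
        update_hm(hm, improved)                        # ii) update the harmony memory
    return min(hm, key=cost)                           # iii) best solution in HM
```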
Figure 1 Adaptive HSA flowchart.

The following subsections discuss the employed LS algorithms and the adopted selection mechanism.
4.1 The pool of LS algorithms
An LS algorithm is often employed within population-based algorithms to accelerate the convergence process [26]. A traditional LS begins with a starting solution and then iteratively explores its neighborhood [26]. It moves to a new solution if it is better than the old one. The exploration of a neighborhood is performed by a neighborhood operator, and this process is repeated until the defined halting condition is satisfied (in our work, an LS stops after a number of consecutive non-improving iterations). In this work, we employ five different LSs within the proposed HSA and use the 2-opt* operator as the neighborhood operator within all of them [33]. The employed LSs, which share this skeleton and differ mainly in their acceptance rules (a minimal sketch is given after the list), are [26]:
1- Hill climbing (HC): Given an initial solution S, call the 2-opt* operator to generate a neighbor solution S′. Replace S with S′ if S′ is better.
2- Simulated annealing (SA): SA allows the acceptance of worse solutions to escape local optima using the so-called annealing acceptance rule. Given a starting solution S, call the 2-opt* operator to generate S′. Replace S with S′ if S′ is better. Otherwise, accept S′ with the probability [26]:

p = e^{-\Delta f / temp}    (10)

where ∆f is the difference in solution quality between S′ and S, i.e., ∆f = f(S′) − f(S), and temp is the temperature parameter that controls the acceptance rate. temp is updated at each iteration as temp = β · temp, where β is the cooling rate.
3- Great deluge (GD): GD is an improved variant of HC. Given an initial solution S, call the 2-opt* operator to generate S′. Replace S with S′ if S′ is better. Otherwise, accept S′ if its quality is better than a defined level. The initial value of the level is set equal to the quality of S and is updated at each iteration as follows [26]:

level = level − UP    (11)

where UP is the decreasing rate.
4- Record-to-record travel (RRT): Similar to GD, worse solutions can be accepted if their quality is better than the Record plus a Deviation. The initial value of Record is set equal to the quality of S. Deviation is the parameter of RRT and is updated as follows [26]:

Deviation = 0.01 × Record    (12)
5- Reactive tabu search (RTS): Tabu search (TS) is a well-known local search algorithm [34]. TS uses a memory known as the tabu list (tl) to avoid re-visiting the same solutions. The size of tl determines how long the search is prohibited from visiting previously explored areas. A basic TS generates a set of neighborhood solutions N(S) that are different from those already in tl, moves to the best one in N(S) and uses it as the starting point for the next iteration. The visited point is added to tl. This process is repeated until the stopping criterion is satisfied. An important parameter in TS is the size of tl, which controls the trade-off between returning to visited points and visiting new ones. To address this issue, a TS variant called reactive TS (RTS) was proposed [35]. In RTS, the size of tl keeps increasing or decreasing based on the search state: RTS counts how many times a solution has been visited and, based on this counter, the size of tl is either increased for diversification or decreased for intensification. The equation that assigns a value Sr to each solution is as follows:

Sr = \sum_{p=1}^{v} \frac{\sum_{x_j \in R_p} x_j}{|R_p|}    (13)
where Sr is the value used to compare two solutions. For the VRPTW application, Equation (13) can generate the same Sr value for different solutions. To remedy this issue, we propose a modified equation as follows:

Sr = \sum_{p=1}^{v} \frac{\sum_{j=1}^{|R_p|} x_j (x_j + i)}{|R_p|}    (14)
where p represents the route index, |R_p| > 0, x_j is a customer in route R_p, and i is the index of customer x_j in the current solution. The corresponding Sr of each visited solution is then added to tl along with the solution age.
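The five algorithms above share a common skeleton: generate a neighbor with the 2-opt* operator and stop after a number of consecutive non-improving iterations, differing only in their acceptance rule. The following minimal Python sketch (referenced at the start of this subsection) makes that skeleton explicit and shows the annealing rule of Equation (10) as one example accept() rule, using the SA parameter values of Section 5.2.3; two_opt_star is a hypothetical stand-in for the 2-opt* operator of [33], and the sketch is illustrative rather than the authors' implementation.

```python
import math
import random

def local_search(solution, cost, two_opt_star, accept, max_non_improving=300):
    """Shared LS skeleton: explore 2-opt* neighbors, stop after consecutive non-improving moves."""
    best, best_cost = solution, cost(solution)
    current, current_cost = best, best_cost
    non_improving = 0
    while non_improving < max_non_improving:
        neighbor = two_opt_star(current)
        neighbor_cost = cost(neighbor)
        if accept(neighbor_cost, current_cost):      # HC/SA/GD/RRT/RTS differ only here
            current, current_cost = neighbor, neighbor_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
            non_improving = 0
        else:
            non_improving += 1
    return best

def make_sa_accept(t_max=50.0, t_min=0.5, beta=0.99):
    """Annealing rule of Equation (10) with geometric cooling temp = beta * temp."""
    state = {"temp": t_max}

    def accept(neighbor_cost, current_cost):
        delta_f = neighbor_cost - current_cost
        ok = delta_f <= 0 or random.random() < math.exp(-delta_f / state["temp"])
        state["temp"] = max(state["temp"] * beta, t_min)
        return ok

    return accept

# Hill climbing is the special case that only accepts improving moves:
# hc_accept = lambda neighbor_cost, current_cost: neighbor_cost < current_cost
```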
4.2 The adaptive selection mechanism
Given a set of LS algorithms and their recent improvement strengths, the main goal of the selection mechanism is to choose one LS to be applied to a given problem instance. The selection mechanism has two components: an LS impact evaluation component and an LS selection component.
4.2.1 LS impact evaluation component
In this work, each LS is associated with an empirical quality estimate (q′_ls) that represents the average reward obtained by the LS from the first iteration until the current iteration i of the search process. The empirical quality estimate is calculated using Equation (15) and is updated during the search process [17]:

q'_{ls,i} = \frac{r_{ls} + (n_{ls,i} - 1)\, q'_{ls,i-1}}{n_{ls,i}}    (15)

where n_{ls,i} represents how many times the local search ls has been applied, q'_{ls,i-1} is the quality estimate of the local search ls before the i-th iteration, and r_ls represents the current reward of the local search ls when it is applied at the current iteration, calculated using Equation (16) [17]:

r_{ls} = \frac{f(S) - f(S')}{f(S)}    (16)
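A minimal Python sketch of this credit assignment, under the reconstruction of Equations (15) and (16) above:

```python
def reward(f_before, f_after):
    """r_ls = (f(S) - f(S')) / f(S): relative improvement produced by the LS."""
    return (f_before - f_after) / f_before

def update_quality(q_prev, r_ls, n_ls):
    """q'_{ls,i} = (r_ls + (n_{ls,i} - 1) * q'_{ls,i-1}) / n_{ls,i}  (Eq. (15))."""
    return (r_ls + (n_ls - 1) * q_prev) / n_ls
```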
4.2.2 LS selection component

This work utilizes MAB, following [17], as an on-line LS selection mechanism. MAB takes the empirical quality estimate (q′_ls) of each LS and deterministically selects the one that returns the maximum value of Equation (17) [17]:

Select = \arg\max_{ls = 1..L} \left( q'_{ls} + C \sqrt{\frac{2 \log \sum_{l=1}^{L} n_l}{n_{ls}}} \right)    (17)

where L represents the number of LS algorithms and C represents the scaling factor that balances exploration (represented by the right-hand term of Equation (17), which prefers the least-selected local search algorithm) and exploitation (represented by the left-hand term of Equation (17), which prefers the LS algorithm with the highest empirical quality estimate). The MAB process is presented in Algorithm 1. In this algorithm, the credit assignment step assigns a reward to the applied LS algorithm using Equation (16), and for each available LS the values of n_ls and q′_ls are initialized to 0 [17].
Algorithm 1 Multi-Armed Bandit Mechanism
Start
While NotTerminated do
    if there is a local search algorithm not applied yet then
        randomly select an LS that has not been applied
    else
        select the LS using Equation (17)
    end if
    apply LS_ls and calculate its improvement strength
    CreditAssignment.GetReward(ls)
end while
End
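A minimal, self-contained Python sketch of this mechanism (initial round-robin play, then the UCB rule of Equation (17) with the update of Equation (15)); C = 0.01 follows Section 5.2.7. It is a sketch of the mechanism as described, not the authors' code.

```python
import math

class MultiArmedBandit:
    def __init__(self, n_arms, c=0.01):
        self.c = c
        self.n = [0] * n_arms          # n_ls: how many times each LS has been applied
        self.q = [0.0] * n_arms        # q'_ls: empirical quality estimates

    def select(self):
        if 0 in self.n:                              # apply every LS at least once
            return self.n.index(0)
        total = sum(self.n)
        ucb = [q + self.c * math.sqrt(2.0 * math.log(total) / n)
               for q, n in zip(self.q, self.n)]      # Equation (17)
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, r):                        # credit assignment, Eqs. (15)-(16)
        self.n[arm] += 1
        self.q[arm] = (r + (self.n[arm] - 1) * self.q[arm]) / self.n[arm]
```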
5. Experimental Set-up
This work utilizes the VRPTW benchmark instances [22] to assess the effectiveness of the proposed algorithm (denoted AHSA). We first present the characteristics of the VRPTW benchmark; next, we discuss the adopted parameter values of the proposed AHSA.
5.1 VRPTW benchmark instances
The utilized VRPTW benchmark involves 56 instances [22], categorized into six groups: R1, R2, C1, C2, RC1 and RC2. The letter R indicates that customers are distributed randomly, C that customers are clustered around the depot, and RC that customers are both randomly distributed and clustered. Table 1 shows the characteristics of these groups. The symbols in the table read as follows: R: random, C: clustered, ST: small time windows, LT: large time windows, NI: number of instances, NC: number of customers, NV: number of vehicles, CV: capacity of vehicle, DC: distribution of customers, WTW: width of time window.

Table 1 The description of the VRPTW benchmark

Group   NI   NC    NV   CV     DC     WTW
R1      12   100   25   200    R      ST
R2      11   100   25   1000   R      LT
C1      9    100   25   200    C      ST
C2      8    100   25   700    C      LT
RC1     8    100   25   200    R&C    ST
RC2     8    100   25   1000   R&C    LT

5.2 Experimental set-up
The proposed AHSA was coded in Java, and the experiments were conducted on a PC with a 2.8 GHz Intel Xeon CPU and 6 GB of RAM. To find appropriate values for some AHSA parameters, we performed a preliminary test [36] on a randomly selected set of instances. All algorithms (HSA, HC, SA, GD, RRT, RTS and AHSA) were tested using various parameter values, and the values giving the best results over 10 runs were recorded.
5.2.1 HSA parameter settings
The parameters of HSA are HMS, HMCR, PAR and NI. We set PAR using Equation (18):

PAR(gn) = PAR_{max} - \frac{PAR_{max} - PAR_{min}}{NI} \times gn    (18)
where NI is the total number of iterations and gn indicates the current iteration. In this work, PAR is dynamically decreased throughout the improvisation process. The values of PAR_max and PAR_min were fixed to 0.9 and 0.3, respectively. The other parameters were set as follows: HMS = 20, HMCR = 0.7 and NI = 1000 (based on the best average results).
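A one-line Python sketch of this schedule, using the values above:

```python
def par(gn, par_max=0.9, par_min=0.3, ni=1000):
    """Dynamic PAR of Equation (18): decreases linearly from PAR_max (gn = 0) to PAR_min (gn = NI)."""
    return par_max - (par_max - par_min) / ni * gn
```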
5.2.2 HC parameter settings

HC has only one parameter, Max_Iter, which indicates the total number of iterations. This parameter was fixed to 3000 according to the results of the preliminary testing conducted with different Max_Iter values.
5.2.3 SA parameter settings

A traditional SA has three parameters: the starting temperature (T_max), the ending temperature (T_min) and the cooling rate (β). In our work, T_min, T_max and β were fixed to 0.5, 50 and 0.99, respectively.
5.2.4 GD parameter settings

GD has two parameters: UP and NI. The first is the rain speed (UP), which needs to be tuned and is calculated using Equation (19):

UP = \frac{f(IS) - estimated(LB)}{NI}    (19)

where IS is the initial solution, LB is the lower bound and NI is the total number of iterations. Following the literature, we set LB equal to the best known result. NI is set to 4000 based on preliminary testing.
5.2.5 RRT parameter settings
RRT has two parameters. The first is the Deviation, which needs to be tuned and usually calculated using equation (20).
Deviation = 0.01 × Record    (20)
The second is the number of iterations, which is set to 4000, based on preliminary testing.
5.2.6 RTS parameter settings
RTS uses five parameters: (i) the tabu list (tl) size (TL_size), (ii) the number of neighborhood solutions (N_neighbors), (iii) the total number of non-improving iterations (MAXI), (iv) the total number of iterations for which a solution remains in the tabu list (Max_age), and (v) the total number of iterations (T_itr) used to halt the search. The value of TL_size changes dynamically during the solution process. T_itr is set to n×200, where n represents the total number of customers. N_neighbors is set to 50, MAXI to 300 and Max_age to 10.
5.2.7 MAB parameter settings
MAB has one parameter, the scaling factor C. Based on the preliminary test, the value of this parameter was set to 0.01. Table 2 shows the parameter values of all the algorithms employed in this study.

Table 2 The parameter settings of the hybridized algorithms

Parameter      Algorithm   Value
HMS            HSA         20
HMCR           HSA         0.7
NI             HSA         1000
PAR_max        HSA         0.9
PAR_min        HSA         0.3
Max_Iter       HC          3000
T_max          SA          50
T_min          SA          0.5
β              SA          0.99
Max_Iter       GD          4000
Max_Iter       RRT         4000
T_itr          RTS         n×200
MAXI           RTS         300
Max_age        RTS         10
N_neighbors    RTS         50
C              MAB         0.01
6. Results and comparison
To investigate the effectiveness of the proposed algorithm (AHSA), we conducted three sets of experiments. The first (Section 6.1) was designed to investigate the impact of using the LSs. Thus, we compared the computational results of SA, HC, RRT, GD, RTS and HSA with those of the hybrid algorithms (HSA-HC, HSA-SA, HSA-GD, HSA-RRT and HSA-RTS). The number of fitness function evaluations for each of these algorithms (GD, RRT, RTS, HSA, HSA-HC, HSA-SA, HSA-GD, HSA-RRT and HSA-RTS) was set to 1,000,000 in this experiment. The stopping criterion for HC, SA and RTS inside the hybrid algorithms was set to 300 non-improving iterations (MAXI, see Table 2). The second experiment (Section 6.2) was designed to examine the effect of the implemented selection mechanisms on the search ability of the hybrid HSA (RHSA, RWHSA and AHSA). The third experiment (Section 6.3) was designed to verify the efficiency of the adaptive harmony search algorithm (AHSA) in solving the VRPTW compared to state-of-the-art methods. To conduct a reliable statistical analysis of the obtained results, all algorithms were executed 31 times on each of the tested instances.
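The statistical comparisons reported below (Wilcoxon tests at the 95% confidence level and a Friedman ranking) can be reproduced along the lines of the following sketch, assuming SciPy is available; the run arrays are illustrative placeholders, not the paper's raw run data.

```python
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical objective values (truncated) of two algorithms' runs on one instance;
# a p-value below 0.05 is reported as "S+" in the tables that follow.
hsa_hc_runs = [1650.4, 1648.9, 1652.1, 1649.7, 1651.3, 1647.8]
hc_runs     = [1806.7, 1799.2, 1810.5, 1801.9, 1808.3, 1795.6]
stat, p = wilcoxon(hsa_hc_runs, hc_runs)
print("Wilcoxon p-value:", p)

# Friedman test over per-instance average results of three (or more) algorithms.
stat, p = friedmanchisquare(
    [1650.4, 1278.2, 894.1, 674.0, 1709.0, 1425.5],   # HSA-HC averages
    [1659.4, 1286.3, 905.9, 668.4, 1721.2, 1438.8],   # HSA-SA averages
    [1645.7, 1259.6, 857.6, 666.4, 1674.7, 1386.4],   # AHSA averages
)
print("Friedman p-value:", p)
```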
6.1 HSA hybridization results
In this section, the computational results obtained by SA, HC, RRT, RTS and GD are compared with those of the hybrid algorithms (HSA-SA, HSA-HC, HSA-RRT, HSA-RTS and HSA-GD). The results are compared in terms of the best (Best), average (Ave) and standard deviation (Std) of 31 independent runs using different random seeds.

The results of HC vs. HSA-HC, SA vs. HSA-SA, GD vs. HSA-GD, RRT vs. HSA-RRT and RTS vs. HSA-RTS are presented in Tables 3, 4, 5, 6 and 7, respectively. The best results obtained over the 31 runs are shown in bold font. These tables demonstrate that the hybrid algorithms HSA-HC, HSA-SA, HSA-GD, HSA-RRT and HSA-RTS are more efficient than HC, SA, GD, RRT and RTS on all instances tested in this experiment, i.e., the hybrid algorithms obtained the best results in terms of Best, Ave and Std.
Table 3 The results of HC against HSA-HC

            HC                          HSA-HC
Dataset     Best      Ave       Std     Best      Ave       Std
R1-01       1707.77   1806.69   48.61   1644.25   1650.42   3.98
R2-01       1300.81   1358.83   37.54   1235.97   1278.16   16.22
C1-09       1016.97   1124.76   70.53   865.54    894.07    11.92
C2-06       706.09    784.65    39.30   647.32    674.02    12.76
RC1-01      1818.32   1895.51   46.29   1678.66   1709.00   13.56
RC2-01      1449.41   1532.65   49.69   1366.31   1425.54   21.65
Table 4 The results of SA against HSA-SA

            SA                          HSA-SA
Dataset     Best      Ave       Std     Best      Ave       Std
R1-01       1715.94   1777.50   36.81   1645.49   1659.43   5.80
R2-01       1284.52   1354.26   33.01   1246.51   1286.28   16.91
C1-09       977.77    1102.39   73.47   873.30    905.90    17.19
C2-06       704.21    803.59    36.15   643.40    668.35    10.55
RC1-01      1796.88   1894.18   70.46   1685.44   1721.21   14.02
RC2-01      1442.89   1529.23   44.58   1390.51   1438.83   21.99
Table 5 The results of GD against HSA-GD

            GD                          HSA-GD
Dataset     Best      Ave       Std     Best      Ave       Std
R1-01       1677.64   1786.21   46.59   1645.36   1653.29   6.61
R2-01       1307.30   1353.44   34.24   1216.12   1275.05   17.18
C1-09       1001.85   1109.48   58.41   874.30    899.35    14.00
C2-06       696.96    800.06    39.91   649.26    663.28    9.39
RC1-01      1826.44   1907.94   58.58   1675.34   1710.73   16.10
RC2-01      1429.96   1515.61   52.77   1384.42   1421.71   15.90
Table 6 The results of RRT against HSA-RRT

            RRT                         HSA-RRT
Dataset     Best      Ave       Std     Best      Ave       Std
R1-01       1719.82   1785.26   37.96   1646.30   1654.68   5.18
R2-01       1260.93   1348.49   38.44   1239.35   1274.33   14.99
C1-09       1004.10   1100.36   56.06   861.70    895.30    13.80
C2-06       717.48    788.16    37.51   644.13    664.98    13.16
RC1-01      1813.84   1905.73   44.37   1692.93   1709.55   11.11
RC2-01      1400.35   1512.85   51.54   1369.32   1419.21   20.19
Table 7 The results of RTS against HSA-RTS

            RTS                         HSA-RTS
Dataset     Best      Ave       Std     Best      Ave       Std
R1-01       1708.84   1785.42   38.99   1643.18   1659.17   7.66
R2-01       1259.45   1345.63   39.77   1261.32   1314.71   21.84
C1-09       993.97    1111.20   54.24   883.99    932.63    17.70
C2-06       700.98    799.85    49.50   675.33    706.26    14.06
RC1-01      1765.20   1877.04   41.13   1691.58   1732.75   15.00
RC2-01      1425.40   1516.85   42.88   1425.42   1472.97   20.76
A Wilcoxon test at the 95% confidence level was also performed to compare all algorithms. The p-values are presented in Table 8, where the symbol "S+" means that the hybrid algorithm is statistically better than the non-hybridized algorithm (p-value < 0.05), while "S-" means that there is no significant difference between the hybrid algorithm and the non-hybridized algorithm (p-value > 0.05).

Table 8 The p-values of the compared algorithms

Instances   HSA-HC vs. HC   HSA-SA vs. SA   HSA-GD vs. GD   HSA-RRT vs. RRT   HSA-RTS vs. RTS
R1-01       S+              S+              S+              S+                S+
R2-01       S+              S+              S+              S+                S+
C1-09       S+              S+              S+              S+                S+
C2-06       S+              S+              S+              S+                S+
RC1-01      S+              S+              S+              S+                S+
RC2-01      S+              S+              S+              S+                S+
Table 8 demonstrates that HSA-HC, HSA-SA, HSA-GD, HSA-RRT and HSA-RTS are statistically better than HC, SA, GD, RRT and RTS. To further investigate the effectiveness of the hybrid methods, the results of HSA-HC, HSA-SA, HSA-GD, HSA-RRT and HSA-RTS were compared with each other; we also included the standard HSA results for comparison. The results of this comparison are presented in Table 9 and can be read as follows:

i) In terms of the Best values, all hybrid algorithms are better than the standard HSA on all tested instances. The best results were distributed among the hybrid algorithms (i.e., HSA-GD obtained the best results on two instances, R2-01 and RC1-01, while each of HSA-HC, HSA-SA, HSA-RRT and HSA-RTS obtained the best result on one instance, RC2-01, C2-06, C1-09 and R1-01, respectively).

ii) Based on the Ave values, HSA-HC obtained the best Ave results on three of the six instances. HSA-GD and HSA-RRT obtained the best Ave results on one and two instances, respectively.

iii) According to the Std values, the best results were distributed among HSA-HC, HSA-GD and HSA-RRT; each obtained the best results on two different instances.

Table 9 The results of the compared hybrid methods
Instance   Metric   HSA       HSA-HC    HSA-SA    HSA-GD    HSA-RRT   HSA-RTS
R1-01      Best     1831.18   1644.25   1643.34   1645.36   1646.30   1643.18
           Ave      1937.02   1650.42   1658.67   1653.29   1654.68   1659.17
           Std      44.32     3.98      6.31      6.61      5.18      7.66
R2-01      Best     1723.80   1235.97   1246.51   1216.12   1239.35   1261.32
           Ave      1910.49   1278.16   1286.28   1275.05   1274.33   1314.71
           Std      67.95     16.22     16.91     17.18     14.99     21.84
C1-09      Best     1367.38   865.54    873.30    874.30    861.70    883.99
           Ave      1594.91   894.07    905.90    899.35    895.30    932.63
           Std      91.62     11.92     17.19     14.00     13.80     17.70
C2-06      Best     1246.32   647.32    643.40    649.26    644.13    675.33
           Ave      1648.09   674.02    668.35    663.28    664.98    706.26
           Std      203.63    12.76     10.55     9.39      13.16     14.06
RC1-01     Best     1818.27   1678.66   1685.44   1675.34   1692.93   1691.58
           Ave      1933.38   1709.00   1721.21   1710.73   1709.55   1732.75
           Std      70.88     13.56     14.02     16.10     11.11     15.00
RC2-01     Best     2054.50   1366.31   1390.51   1384.42   1369.32   1425.42
           Ave      2187.09   1425.54   1438.83   1421.71   1419.21   1472.97
           Std      84.94     21.65     21.99     15.90     20.19     20.76
Table 10 presents the p-values of HSA-HC, HSA-SA, HSA-GD, HSA-RRT and HSA-RTS compared to the standard HSA. These values show that the hybrid algorithms are statistically significantly better than the standard HSA.

Table 10 The p-values of the Wilcoxon test for the hybrid HSA algorithms compared to HSA

Instances   HSA-HC vs. HSA   HSA-SA vs. HSA   HSA-GD vs. HSA   HSA-RRT vs. HSA   HSA-RTS vs. HSA
R1-01       S+               S+               S+               S+                S+
R2-01       S+               S+               S+               S+                S+
C1-09       S+               S+               S+               S+                S+
C2-06       S+               S+               S+               S+                S+
RC1-01      S+               S+               S+               S+                S+
RC2-01      S+               S+               S+               S+                S+

The "S+" sign indicates that the p-value is less than or equal to the critical level (p-value ≤ 0.05).
Overall, results demonstrated that the hybrid algorithms are better than the standard HSA and local search algorithms. However, results also revealed that none of the hybrid algorithms performed the best over all instances, as each one obtained good results for a few instances only.
6.2 The effect of the selection mechanism on hybrid HSA
This experiment is designed to verify the effect of the selection mechanism on the performance of the hybrid HSA. In addition to MAB, two other selection mechanisms are utilized to control the selection of the LS to be hybridized with HSA:

1. The random selection mechanism (RM). This mechanism randomly selects a local search regardless of its past behavior.

2. The roulette wheel mechanism (RWM). In this mechanism, each local search within the available set is assigned a selection probability. To determine these probabilities, the RWM divides the wheel into a number of slots based on the number of available LSs, where each LS is allocated a slot. Initially, the widths of these slots are equal (1/L), and then they are changed according to the performance of the available LS algorithms during the search: the width of the applied LS's slot is either incremented or decremented by r_ls, while the widths of the other slots (unapplied LS algorithms) are decremented or incremented by r_ls/(L-1). Here r_ls is the current reward of the applied LS (calculated using Equation (16)).
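The following minimal Python sketch illustrates the RWM update described in item 2; slot widths are clamped at zero, which also reproduces the starvation effect noted for part 'b' of Figure 2 (an LS whose slot shrinks to zero is no longer selected). It is an illustrative sketch, not the authors' implementation.

```python
import random

class RouletteWheel:
    """Roulette wheel selection over L local searches with reward-driven slot widths."""

    def __init__(self, n_ls):
        self.w = [1.0 / n_ls] * n_ls          # equal initial slot widths 1/L

    def select(self):
        # Sample an LS index proportionally to the current slot widths.
        return random.choices(range(len(self.w)), weights=self.w, k=1)[0]

    def update(self, applied, r_ls):
        # Grow (or shrink, for a negative reward) the applied LS's slot by r_ls and
        # spread the opposite change over the other L-1 slots.
        share = r_ls / (len(self.w) - 1)
        for i in range(len(self.w)):
            self.w[i] += r_ls if i == applied else -share
            self.w[i] = max(self.w[i], 0.0)   # widths cannot become negative
```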
Consequently, three different hybrid algorithms, symbolized as RHSA, RWHSA and AHSA, emerge from utilizing RM, RWM and MAB, respectively. Figure 2 illustrates the process of selecting a local search algorithm during the search based on RM, RWM and MAB on instance RC1-05. In part 'a' of this figure, the selection depends on the RM. In part 'b', the selection is performed using the RWM; as shown there, some LSs (HC, SA and RTS) have a chance of being selected at the early stage of the search, after which RTS is selected constantly because it has the best improvement history, and the other local search algorithms get no further chance of being selected. In part 'c', the local search algorithm is selected based on the MAB; as shown there, although RTS, which has the best improvement history, is the most frequently selected local search algorithm, the other local search algorithms still get chances to be selected during the search.
Table 11 demonstrates that adopting the MAB as the selection mechanism inside HSA improves the ability of the search to find good quality results compared to RHSA and RWHSA, as AHSA performs the best on almost all instances.
The results of the three selection mechanisms and the other algorithms are further verified using a statistical test. The Friedman test is conducted to rank HC, SA, GD, RRT, RTS, HSA, HSA-HC, HSA-SA, HSA-GD, HSA-RRT, HSA-RTS, RHSA, RWHSA and AHSA. In the ranking, a lower value indicates a higher-ranked algorithm. The obtained ranking is presented in Table 12. As can be seen, AHSA is in first place, as it achieved the lowest value.
Table 11 Comparison between RHSA, RWHSA and AHSA

            RHSA                          RWHSA                         AHSA
Dataset     Best      Ave       Std       Best      Ave       Std       Best      Ave       Std
R1-01       1645.38   1657.35   5.88      1642.88   1646.02   2.45      1642.88   1645.71   2.31
R2-01       1256.52   1293.30   18.50     1223.68   1256.10   16.29     1207.45   1259.64   17.75
C1-09       879.80    918.14    17.46     856.60    886.63    12.26     833.61    857.63    16.86
C2-06       631.22    671.34    16.57     641.26    670.09    18.47     632.23    666.35    20.99
RC1-01      1686.71   1720.00   15.04     1670.27   1699.32   10.46     1653.32   1674.68   16.30
RC2-01      1402.78   1448.10   22.05     1358.91   1395.24   14.82     1332.56   1386.42   29.40
Figure 2 The process of selecting an LS during the search based on RM, RWM and MAB
Table 12 The ranking of the adopted algorithms

Index   Algorithm   Ranking
1       AHSA        1.5
2       RWHSA       2.33
3       HSA-RRT     3.5
4       HSA-GD      3.83
5       HSA-HC      4.33
6       HSA-SA      6
7       RHSA        6.5
8       HSA-RTS     8
9       RRT         10
10      RTS         10.5
11      SA          11
12      GD          11.5
13      HC          12
14      HSA         14
To further verify the AHSA results, we performed a Wilcoxon test. The p-values are summarized in Table 13. From the table we can see that, on five of the six tested instances, AHSA is statistically better than all other algorithms. On the remaining instance, C2-06, AHSA is statistically better than HC, SA, RRT, GD, RTS, HSA, HSA-HC and HSA-RTS, and shows no significant difference from the other five algorithms.

Table 13 The p-values of AHSA against the other algorithms

AHSA vs.    R1-01   R2-01   C1-09   C2-06   RC1-01   RC2-01
HC          S+      S+      S+      S+      S+       S+
SA          S+      S+      S+      S+      S+       S+
RRT         S+      S+      S+      S+      S+       S+
GD          S+      S+      S+      S+      S+       S+
RTS         S+      S+      S+      S+      S+       S+
HSA         S+      S+      S+      S+      S+       S+
HSA-HC      S+      S+      S+      S+      S+       S+
HSA-SA      S+      S+      S+      S-      S+       S+
HSA-RRT     S+      S+      S+      S-      S+       S+
HSA-GD      S+      S+      S+      S-      S+       S+
HSA-RTS     S+      S+      S+      S+      S+       S+
RHSA        S+      S+      S+      S-      S+       S+
RWHSA       S+      S+      S+      S-      S+       S+
Overall, the results demonstrate that the multi-armed bandit selection mechanism can effectively assist HSA in selecting a suitable LS algorithm in an on-line manner, producing good results across all instances.
6.3 Comparison of AHSA with state-of-the-art methods
We compared the results obtained by AHSA against those reported in the scientific literature. The reported best known results were obtained from [37]. There are many studies of VRPTW; four of these studies achieved the best known results, so they were selected for comparison with the proposed method. These studies are as follows:
CGH: genetic algorithm and a set partitioning [38].
MA: a memetic algorithm [39].
MetaOpt: a generic framework [40].
The best quality solutions of AHSA, CGH, MA and MetaOpt over all tested instances are presented and compared in Table 14, together with the best known results. The reported results are the average total distances for each category (R1, R2, C1, C2, RC1 and RC2). In the table, the last row reports the average total distance (Ave.) and the last column shows the deviation in percent (Gap): Gap = ((a1 − a2)/a2) × 100, where a1 is the AHSA best result and a2 is the corresponding best known result. For example, for R1, Gap = ((1216.54 − 1178.98)/1178.98) × 100 ≈ 3.19%. Bold font indicates the best obtained result.
Table 14 Comparison among different heuristics

Dataset   Best Known   CGH       MA        MetaOpt   AHSA      Gap (%)
R1        1178.98      1183.38   1184.16   1178.30   1216.54   3.19
R2        877.20       899.90    879.51    891.38    973.67    11.00
C1        828.38       828.38    828.38    826.83    835.09    0.81
C2        589.86       589.86    589.86    589.39    630.4     6.87
RC1       1338.18      1341.67   1352.02   1342.09   1381.51   3.24
RC2       1003.95      1015.90   1009.37   1016.28   1106.06   10.17
Ave.      969.43       976.51    973.88    974.05    1023.88   5.88
AHSA was executed for 31 runs, while CGH, MA, MetaOpt and HGA were executed 3, 1, 3 and 10 times, respectively. The results tabulated above show that AHSA achieved competitive results compared to the others. The average computational times (T) are presented in Table 15. We report the average running time in seconds, and the average running time over all instances (T) and the number of executions of each algorithm in the last two rows. The best computational times are highlighted in bold font. Please note that CGH, MetaOpt and HGA reported the total computational time only. As can be seen from the table, the computational time of AHSA is very competitive with regard to CGH, MA and MetaOpt.

The presented results show that the proposed AHSA obtained very competitive results for the VRPTW. This could be due to the ability of AHSA to select different LSs for different instances in an on-line manner. By using different LSs, the proposed AHSA can effectively solve different instances and handle changes that might happen as the search progresses.
Table 15 The computational time (average) of AHSA, CGH, MA and MetaOpt

Group         CGH T (s)   MA T (s)   MetaOpt T (s)   AHSA T (s)
R1            -           151.8      -               64.32
R2            -           210.91     -               635.09
C1            -           78.07      -               108.14
C2            -           161.29     -               141.54
RC1           -           152.32     -               109.88
RC2           -           199.63     -               442.07
T (s)         3600        159        1200            250.17
No. of Runs   3           1          3               31

7. Conclusion
An adaptive harmony search algorithm has been proposed for the vehicle routing problem with time windows. The performance of the harmony search algorithm is enhanced by hybridization with a local search algorithm. The hybridization integrates the exploration power of HSA with the exploitation power of the local search algorithms. Five local search algorithms were individually hybridized with the harmony search algorithm, and each of these hybrid algorithms obtained good results on certain instances. Therefore, the selection of the appropriate local search algorithm to be used inside the harmony search algorithm should depend on the problem instance as well as the search status. Thus, we have proposed an adaptive harmony search algorithm (AHSA) that embeds an adaptive selection mechanism to adaptively select which local search algorithm to apply. That is, the role of the selection mechanism is to select a suitable local search algorithm to be hybridized with HSA, where the local search is used to enhance the harmony search algorithm's exploitation. We adopted Solomon's benchmark instances to examine the performance of the proposed algorithm. The results illustrate that the proposed adaptive harmony search algorithm provides very good results compared to the standard harmony search algorithm, the hybrid methods and the local search algorithms. It is also competitive with state-of-the-art methods.

Acknowledgments
This work was supported by Universiti Kebangsaan Malaysia grant Dana Impak Perdana (DIP-2014-039).
References
[1] Z. W. Geem, J. H. Kim, and G. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, pp. 60-68, 2001.
[2] L. Wang, R. Yang, Y. Xu, Q. Niu, P. M. Pardalos, and M. Fei, "An improved adaptive binary harmony search algorithm," Information Sciences, vol. 232, pp. 58-87, 2013.
[3] M. A. Al-Betar, A. T. Khader, and M. Zaman, "University course timetabling using a hybrid harmony search metaheuristic algorithm," Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, vol. 42, pp. 664-681, 2012.
[4] M. P. Saka, O. Hasançebi, and Z. W. Geem, "Metaheuristics in structural optimization and discussions on harmony search algorithm," Swarm and Evolutionary Computation, vol. 28, pp. 88-97, 2016.
[5] N. R. Sabar and G. Kendall, "Using harmony search with multiple pitch adjustment operators for the portfolio selection problem," in Evolutionary Computation (CEC), 2014 IEEE Congress on, 2014, pp. 499-503.
[6] M. Hadwan, M. Ayob, N. R. Sabar, and R. Qu, "A harmony search algorithm for nurse rostering problems," Information Sciences, vol. 233, pp. 126-140, 2013.
[7] M. Turky, S. Abdullah, and N. R. Sabar, "Meta-heuristic algorithm for binary dynamic optimisation problems and its relevancy to timetabling," in 10th International Conference of the Practice and Theory of Automated Timetabling (PATAT 2014), 26-29 August 2014, York, United Kingdom, pp. 568-573.
[8] M. Turky, S. Abdullah, and N. R. Sabar, "A hybrid harmony search algorithm for solving dynamic optimisation problems," Procedia Computer Science, vol. 29, pp. 1926-1936, 2014.
[9] S. S. Shreem, S. Abdullah, and M. Z. A. Nazri, "Hybridising harmony search with a Markov blanket for gene selection problems," Information Sciences, vol. 258, pp. 108-121, 2014.
[10] O. M. Alia and R. Mandava, "The variants of the harmony search algorithm: an overview," Artificial Intelligence Review, vol. 36, pp. 49-68, 2011.
[11] S. Abdullah, N. R. Sabar, M. Z. A. Nazri, and M. Ayob, "An Exponential Monte-Carlo algorithm for feature selection problems," Computers & Industrial Engineering, vol. 67, pp. 160-167, 2014.
[12] A. Abuhamdah, M. Ayob, G. Kendall, and N. R. Sabar, "Population based local search for university course timetabling problems," Applied Intelligence, vol. 40, no. 1, pp. 44-53, 2014.
[13] C. Blum, J. Puchinger, G. R. Raidl, and A. Roli, "Hybrid metaheuristics in combinatorial optimization: A survey," Applied Soft Computing, vol. 11, pp. 4135-4151, 2011.
[14] F. Neri and C. Cotta, "Memetic algorithms and memetic computing optimization: A literature review," Swarm and Evolutionary Computation, vol. 2, pp. 1-14, 2012.
[15] Y. S. Ong, M.-H. Lim, N. Zhu, and K.-W. Wong, "Classification of adaptive memetic algorithms: a comparative study," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 36, pp. 141-152, 2006.
[16] D. Thierens, "Adaptive operator selection for iterated local search," in Engineering Stochastic Local Search Algorithms. Designing, Implementing and Analyzing Effective Heuristics, Springer, 2009, pp. 140-144.
[17] W. Gong, Á. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, pp. 5364-5386, 2011.
[18] N. R. Sabar, M. Ayob, R. Qu, and G. Kendall, "A graph coloring constructive hyper-heuristic for examination timetabling problems," Applied Intelligence, vol. 37, pp. 1-11, 2012.
[19] N. R. Sabar, M. Ayob, G. Kendall, and R. Qu, "Grammatical evolution hyper-heuristic for combinatorial optimization problems," Evolutionary Computation, IEEE Transactions on, vol. 17, pp. 840-861, 2013.
[20] N. R. Sabar, M. Ayob, G. Kendall, and R. Qu, "The automatic design of a hyper-heuristic framework with gene expression programming for combinatorial optimization problems," IEEE Transactions on Evolutionary Computation, vol. PP, pp. 1-1, 2014.
[21] N. R. Sabar, M. Ayob, G. Kendall, and R. Qu, "A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems," IEEE Transactions on Cybernetics, vol. PP, pp. 1-1, 2014.
[22] M. M. Solomon, "Algorithms for the vehicle routing and scheduling problems with time window constraints," Operations Research, pp. 254-265, 1987.
[23] G. B. Dantzig and J. H. Ramser, "The truck dispatching problem," Management Science, pp. 80-91, 1959.
[24] O. Bräysy and M. Gendreau, "Vehicle routing problem with time windows, part II: Metaheuristics," Transportation Science, vol. 39, pp. 119-139, 2005.
[25] A. W. J. Kolen, A. H. G. R. Kan, and H. Trienekens, "Vehicle routing with time windows," Operations Research, vol. 35, pp. 266-273, 1987.
[26] E. G. Talbi, Metaheuristics: from design to implementation, Wiley, 2009.
[27] Y. J. Gong, J. Zhang, O. Liu, R. Z. Huang, H. S. H. Chung, and Y. H. Shi, "Optimizing the vehicle routing problem with time windows: A discrete particle swarm optimization approach," IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, vol. 42, p. 254, 2012.
[28] C. B. Cheng and K. P. Wang, "Solving a vehicle routing problem with time windows by a decomposition technique and a genetic algorithm," Expert Systems with Applications, vol. 36, pp. 7758-7763, 2009.
[29] G. Kontoravdis and J. F. Bard, "A GRASP for the vehicle routing problem with time windows," ORSA Journal on Computing, vol. 7, pp. 10-23, 1995.
[30] Z. J. Czech and P. Czarnas, "Parallel simulated annealing for the vehicle routing problem with time windows," in Parallel, Distributed and Network-based Processing, 2002. Proceedings. 10th Euromicro Workshop on, 2002, pp. 376-383.
[31] B. L. Garcia, J. Y. Potvin, and J. M. Rousseau, "A parallel implementation of the tabu search heuristic for vehicle routing problems with time window constraints," Computers & Operations Research, vol. 21, pp. 1025-1033, 1994.
[32] O. Bräysy and M. Gendreau, "Tabu search heuristics for the vehicle routing problem with time windows," Top, vol. 10, pp. 211-237, 2002.
[33] O. Bräysy and M. Gendreau, "Vehicle routing problem with time windows, Part I: Route construction and local search algorithms," Transportation Science, vol. 39, pp. 104-118, 2005.
[34] F. Glover, "Tabu search—part I," ORSA Journal on Computing, vol. 1, pp. 190-206, 1989.
[35] R. Battiti and G. Tecchiolli, "The reactive tabu search," ORSA Journal on Computing, vol. 6, pp. 126-140, 1994.
[36] G. Kendall, R. Bai, J. Błazewicz, P. De Causmaecker, M. Gendreau, R. John, and N. Sabar, "Good laboratory practice for optimization research," Journal of the Operational Research Society, vol. 67, no. 4, pp. 676-689, 2016.
[37] T. Vidal, T. G. Crainic, M. Gendreau, and C. Prins, Time-Window Relaxations in Vehicle Routing Heuristics, Tech. rep., CIRRELT, Montréal, 2013.
[38] G. B. Alvarenga, G. R. Mateus, and G. De Tomi, "A genetic and set partitioning two-phase approach for the vehicle routing problem with time windows," Computers & Operations Research, vol. 34, pp. 1561-1584, 2007.
[39] N. Labadi, C. Prins, and M. Reghioui, "A memetic algorithm for the vehicle routing problem with time windows," RAIRO-Operations Research, vol. 42, pp. 415-431, 2008.
[40] İ. Muter, S. I. Birbil, and G. Sahin, "Combination of metaheuristic and exact algorithms for solving set covering-type optimization problems," INFORMS Journal on Computing, vol. 22, pp. 603-619, 2010.
Highlights
- We propose an adaptive hybrid algorithm for the vehicle routing problem with time windows.
- We hybridise a population-based algorithm with multiple local search algorithms.
- We propose an adaptive selection method to control the local search selection.
- We tested the proposed algorithm on 56 benchmark instances.
- The results demonstrate that the proposed algorithm outperforms the others.