Accepted Manuscript Title: Hybrid Evolutionary Approaches for the Single Machine Order Acceptance and Scheduling Problem Author: Sachchida Nand Chaurasia Alok Singh PII: DOI: Reference:
S1568-4946(16)30510-5 http://dx.doi.org/doi:10.1016/j.asoc.2016.09.051 ASOC 3850
To appear in:
Applied Soft Computing
Received date: Revised date: Accepted date:
25-3-2015 27-5-2016 30-9-2016
Please cite this article as: Sachchida Nand Chaurasia, Alok Singh, Hybrid Evolutionary Approaches for the Single Machine Order Acceptance and Scheduling Problem, (2016), http://dx.doi.org/10.1016/j.asoc.2016.09.051 This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
*Highlights (for review)
te
d
M
an
us
cr
ip t
Two evolutionary approaches are proposed for order acceptance and scheduling problem. First approach is based on steady-state genetic algorithm. Second approach is based on evolutionary algorithm with guided mutation. Our approaches are compared with two state-of-the-art approaches. Computational results show the effectiveness of our approaches.
Ac ce p
Page 1 of 43
us
cr
ip t
*Graphical abstract (for review)
an
EA/G-LS
20,000
M
15,000
10,000
d
5,000
0 0
10
20
30
40
50
60
te
Number of solutions
HSSGA
70
80
90
100 110 120 130 140 150 160 170 180 190 200 210 220 230 240 250
ce p
Instances with 100 orders (n=100)
Ac
Figure 1: Convergence behaviour of HSSGA and EA/G-LS on each instance with 100 orders
1
Page 2 of 43
TEX File of Manuscript Click here to view linked References
Hybrid Evolutionary Approaches for the Single Machine Order Acceptance and Scheduling Problem Sachchida Nand Chaurasia, Alok Singh∗
ip t
School of Computer and Information Sciences, University of Hyderabad Hyderabad 500 046, India
cr
Abstract
us
This paper presents two hybrid metaheuristic approaches, viz. a hybrid steady-state genetic algorithm (SSGA) and a hybrid evolutionary algorithm with guided mutation (EA/G) for order acceptance and scheduling (OAS) problem in a single machine environment where orders are supposed to have release
an
dates and sequence dependent setup times are incurred in switching from one order to next in the schedule. OAS problem is an N P-hard problem. We have compared our approaches with the state-of-the-art
M
approaches reported in the literature. Computational results show the effectiveness of our approaches. Keywords: Steady-state genetic algorithm, Estimation of distribution algorithm, Evolutionary algorithm, Guided mutation, Order acceptance and scheduling, Single machine scheduling, Sequence
te
1. Introduction
d
dependent setup time.
Ac ce p
In a make-to-order system, customers put their orders at a time for processing. Due to this an additional wait time is incurred to receive the product. But it gives system the flexibility to customize the processing of these orders according to its capability. Tight order delivery deadline requirements may force a system with limited processing capability to first decide which orders should be accepted and then where to put the accepted orders in the processing sequence. The order acceptance and scheduling problem (OAS) [1] deals with this class of problems where acceptance and timely delivery of orders play important role in generating revenue for the system. The OAS problem has two folds- first is to decide which orders should be accepted and the second is sequencing of the accepted orders. A proper balance need to be maintained between acceptance and rejection of orders. The acceptance of an order and its timely delivery earns the revenue and goodwill of a customer, whereas the rejection of an order aids in keeping the production load within the production limits. The ∗
Corresponding author Email addresses:
[email protected] (Sachchida Nand Chaurasia),
[email protected] (Alok Singh)
Preprint submitted to Applied Soft Computing
May 27,Page 2016 3 of 43
OAS problem can be considered as a mix of knapsack and scheduling problems. If there is no scheduling component then the OAS problem reduces to knapsack problem. On the other side, if all the orders are accepted then the OAS problem reduces to a single machine scheduling problem. The coupled decisions of the OAS problem makes it a complex problem, but solving this problem helps in maximizing the net revenue gained through the production system. The OAS problem finds application in diverse
ip t
industries such as printing [2], lamination [3], laundry services [4], textile [5], steal production [6]. A number of diverse versions of the OAS problem have been studied in the last two decades. The
cr
extensive study of various OAS problem versions can be found in the literature survey of [7] and [8]. This paper is focused on the OAS problem, where the number of incoming orders is fixed in a single
us
machine environment. In other words, a set of orders are put at a time and the orders are processed sequentially one after another.
an
Many exact approaches have been proposed to solve the diverse versions of OAS problem. An integer-programming approach was proposed in [5]. Mixed-integer programming approaches were developed in [9, 10]. Further, Yang and Geunes [11], Oguz et al. [3], and Nobibon and Leus [12] developed
M
mixed integer linear programming based approaches to solve the OAS problem. Dynamic programming based approaches were proposed in [13, 14, 15]. The generalized version of order acceptance and
d
scheduling problem was solved by two exact branch-and-bound approaches [12]. An optimal branchand-bound approach using a linear (integer) relaxation for bounding is applied to a problem where the
was proposed in [16].
te
acceptance and the scheduling of orders happen together. Another enhanced branch-and-bound method
Ac ce p
The complex decision procedure of the OAS problem makes it N P-hard [17, 18, 3]. Due to this, it is not possible to solve large sized problem instances through exact approaches. To get a good solution with a limited computational expense, a number of heuristic algorithms are also proposed. In general, the approximation algorithms can be classified into two categories, viz. construction heuristics (CHs) and improvement heuristics (IHs).
Construction heuristics also called augmentation heuristics are used to get quick feasible solutions for a problem under consideration. But these solution are not so competitive with regard to other heuristics. This limitation discourages their use when a good solution is desired. But on the other side, these heuristics are frequently used when only a fairly good solution is required which usually serve as an initial solution for improvement heuristics or other metaheuristics. In CHs, an order is added at a time according to some criterion. Once an order is accepted and scheduled, it cannot be reversed. The whole process is repeated until all the orders are processed. The examples of CHs are the methods proposed in [5, 14, 11, 19, 3, 20]. A common shortcoming of all these methods is the non-robustness of the solutions
Page 4 of 43
obtained by them. Another limitation of these methods is no one outperforms all other CHs on a given performance criteria on large sized problem instances. On the other hand, an IH method begins with an initial solution and repeatedly try to improve it in a reasonable time. Some of the IH methods which are proposed for the OAS problem can be found in [21, 13, 10, 18, 22, 20]. Further, metaheuristic approaches can also be regarded as extension of IH methods
ip t
where a single solution or a group of solutions are processed to get even better solution(s).
Several metaheuristic based algorithms have been proposed to solve the different versions of OAS
cr
problem in a single machine environment. The examples of metaheuristic algorithms are simulated annealing (SA) [3], genetic algorithm (GA) [23, 6], extremal optimization (EO) [23], learning and optimizing
us
system [24], tabu search (TS) [20] and artificial bee colony algorithm (ABC) [25]. The last two approaches, viz. TS and ABC are the two best performing approaches for the OAS problem, and hence, the next two
an
paragraphs provide a brief introduction to these approaches.
Cesaret et al. [20] proposed three algorithms for the OAS problem, viz. a modified apparent tardiness cost rule-based approach (called m-ATCS), a SA-based algorithm (called ISFAN) and a tabu search based
M
algorithm (called TS). The OAS problem is also solved by the CPLEX solver with a time limit of 3600 seconds using the mixed integer linear programming (MILP) formulation given in [3]. The tabu search (TS)
d
algorithm uses compound moves to guide the search. Each compound move performs iterative drop, add and insertion operations by using the problem specific information. The TS algorithm maintains
te
a database of two different factors- first is the status of accepted as well as rejected orders and the second is the processing sequence of the orders. Based on the information in the database about these two
Ac ce p
factors the acceptance and scheduling of orders are done simultaneously rather than sequentially. After each iteration of the TS algorithm the solution is passed through a probabilistic local search procedure to improve the solution and also to maintain diversification in the search process. Lin and Ying [25] proposed an artificial bee colony algorithm (ABC) which is a swarm-based metaheuristic to solve the OAS problem. They used permutation based representation of solution. In the ABC approach, the neighbouring solutions are generated using either order crossover [26] or IG algorithm [27]. The IG algorithm consists of destruction and construction phases, and implemented in the same way as Yin and Lin [28] implemented. In destruction phase, randomly a fixed number of orders is deleted, whereas, in construction phase, the orders deleted by destruction phase are reinserted. In the employed bee phase, the neighbouring solutions are generated by using either IG algorithm or order crossover. However, in the onlooker bee phase, neighboring solutions are generated through IG algorithm only. The performance of ABC algorithm was compared with TS approach on the same benchmark instances as used in [20]. Computational results established the superiority of ABC approach over TS.
Page 5 of 43
In this paper, we present a steady-state genetic algorithm (SSGA) and an evolutionary algorithm with guided mutation (EA/G), both hybridized with a local search for the OAS problem in a single machine environment. EA/G is a relatively recent evolutionary technique developed by Zhang et al. [29] that uses a guided mutation operator to produce offspring. This operator uses a combination of global statistical information and location information of the solutions found so far to generate an offspring. EA/G can be
ip t
considered as a cross between estimation of distribution algorithms (EDA) and genetic algorithm (GA). Many variants of estimation of distribution algorithm (EDA) have been proposed for various kinds of
cr
scheduling problems, e.g., [30, 31, 32, 33, 34, 35, 36, 37]. However, so far, no one has investigated the performance of an EDA for the OAS problem. Hence, our EA/G based approach is the first attempt in
us
this direction.
Actually, for some scheduling problems adjacency information (who is the immediate predecessor/-
an
successor of a job) among jobs matters, whereas for others, relative ordering (a job should occur early/late in a good schedule) among jobs matters. For the OAS problem both factors are vital. As there exists a sequence dependent setup time between orders so adjacency information matters. On the other
M
hand, presence of release dates and deteriorating profit with increased delay in processing of orders makes the relative ordering important. ABC approach [20] gave importance to both factors and it is the
d
best approach so far for the OAS problem. The two-point crossover operator such as partially matched crossover, order crossover is suitable for those problems where adjacency information among jobs mat-
te
ter, whereas multi-point crossover operators such as uniform order based crossover are suitable for those problems where relative ordering among jobs matter. The previous genetic algorithm based approaches
Ac ce p
for the OAS problem [23, 6] focused on adjacency information only and used two point crossover operators. Motivated by the success of ABC approach, we have designed our approaches in such a way so that they try to balance the two factors. For SSGA, we have used a multi-point crossover operator which generates a partial child where only a subset of jobs is allocated to the schedule based on the information about relative ordering. Similarly, mutation operator of SSGA deletes some jobs from a schedule preserving the relative ordering among the remaining jobs. Then we have used an assignment operator to insert the remaining jobs at the best possible position which considers the sequence dependent setup time also while determining the best position. Likewise, guided mutation operator of our EA/G approach generates a partial schedule and rely on the assignment operator to complete the schedule. We have compared our hybrid SSGA and hybrid EA/G approaches with the TS [20] and ABC [25] approaches on the same benchmark instances as used in [20, 25]. In comparison to these approaches, our hybrid approaches obtained better quality solutions. The remainder of this paper is organized as follows: Section 2 provides a survey of EDA based ap-
Page 6 of 43
proaches for scheduling problems. Section 3 formally defines the OAS problem studied in this paper. Section 4 and Section 5 respectively describe our hybrid steady-state genetic algorithm and hybrid evolutionary algorithm with guided mutation approaches for the OAS problem. Computational results and their analysis are presented in Section 6. Finally, Section 7 outlines some concluding remarks.
ip t
2. Survey of EDA Based Approaches for Scheduling problems There exists several variants of EDAs in the literature for solving scheduling problems. Wang et al.
cr
[30] did the seminal work in this area and proposed an EDA based approach to solve the flexible jobshop scheduling problem. In [31], EDA is applied for solving the flexible Job-Shop scheduling problem
us
with fuzzy processing times, which is an extension of the flexible job-shop scheduling problem. In real-world scheduling, the processing time of a job cannot be known precisely and hence the
an
completion time can only be obtained ambiguously. The fuzzy job-shop scheduling problem considers the processing times or the due-dates to be fuzzy variables. Jun and Yong [32] proposed an EDA to solve the job schedule problem. In this problem, jobs are scheduled on a single processor and the processing
M
time of all the jobs is same. In [33], an effective EDA approach is proposed to solve the stochastic job-shop scheduling problem where processing times were uncertain with an objective to minimize the expected average makespan within a reasonable amount of computational time. In [35], Wang et al. proposed
d
an effective EDA to solve the distributed permutation flow-shop scheduling problem. In this problem,
te
jobs to be processed is in different factories and each factory contains the same number of machines. Once a job is assigned to a factory, then the job can not be transferred to another factory and also all the
Ac ce p
operations on this job have to be performed in the same factory. In [36], Wang et al. proposed fuzzy logic-based hybrid EDA to solve the distributed permutation flow-shop scheduling problems under machine breakdown assumption. To produce a new offspring, crossover and mutation operators of genetic algorithm are hybridised with probabilistic model of EDA, and this enables better exploration of promising regions in thr search space. Ceberio et al. [37] proposed a novel EDA based approach that uses the generalized Mallows model [38] to deal with the flow-shop scheduling problem. However, the proposed method is adapted with a variable neighborhood search algorithm to further improve the solution quality. Salhi et al.[39] developed an EA/G for a complex flow shop scheduling problem where distribution of promising solutions in the search space is modelled using a probability matrix whose each element (i, j) gives the probability of job j being at the ith place in the schedule. In [40], Chen et al. designed an estimation of distribution algorithm to solve the single machine scheduling problem. The designed algorithm is an extension of the guided mutation of [29] and called adaptive EA/G. The algorithm is designed in such a way that it maintains the balance between intensification and diversification. In [41], Page 7 of 43
Ricardo et al. proposed simulation optimization using EDA for a flexible job-shop scheduling problem. The processing time of jobs is variable and they used the bivariate probabilistic model to estimate the distribution of promising solutions. The novelty of this EDA is that it uses a continuous optimization procedure instead of discrete one to solve the scheduling problem.
ip t
3. Problem formulation The formulation of the order acceptance and scheduling (OAS) problem is same as provided in [25].
cr
In a single machine environment, a set of n incoming orders is to be processed one at a time without any precedence and without pre-emption. In other words, any order can be processed at any time (af-
us
ter its release date) and can occupy any position in the processing sequence, and, once an order starts processing, it can not be stopped until it finishes. Each order i has the following characteristics:
an
1. A release date ri , i.e., processing for order i can not start before its release date. 2. A processing time Pi , i.e., machine takes Pi time to process order i 3. A maximum revenue gain Ei , i.e., if order i is accepted and its completion time Ci is less than or
M
equal to its due date di , then the revenue gain for the order i will be Ei . If Ci is greater than di then a tardiness penalty is incurred and revenue gain decreases with time and, finally, the revenue
d
gain will be zero when Ci is greater than or equal to the dead line di for order i. The per unit time tardiness penalty of an order i is calculated as follows wi = Ei /(di − di ) where di
te
4. A sequence dependent setup time Sij (i 6= j) between order i and j is incurred when order i is
Ac ce p
processed immediately before order j in the sequence. It is assumed that all the above information about all the orders are known in advance. Based on the above definitions, the objective function of OAS problem is to find a processing sequence of all the accepted orders on the single machine that maximizes the total net revenue (TNR). The objective function for TNR is presented in Equation (1).
T N R = max
n X
xi (Ei − wi Ti )
(1)
i=1
Where n is number of orders, Ti = max(0, Ci − di ) is tardiness of an order i. xi ∀i ∈ {1, 2, . . . , n} are boolean variables. xi = 1 means order i is accepted and xi = 0 means order is rejected. Here, it is to be noted that any order j for which Cj ≥ dj is rejected by setting its associated boolean variable xj to zero. 4. Hybrid steady-state genetic algorithm Our genetic algorithm for the OAS problem uses the steady-state population model [26]. During each generation a single offspring is produced which replaces the worst member of the population irrespective of its own fitness in case this offspring is different from all the existing population members. Page 8 of 43
This offspring is discarded in case it is found to be identical to any existing population members. Other salient features of our SSGA approach are described in the subsequent subsections. 4.1. Solution encoding We have used the same encoding scheme as used in [25]. In this scheme, each chromosome is a linear
ip t
permutation of all the n orders. An order at a particular position in the chromosome is accepted for processing only when its deadline constraint is not violated considering all accepted orders preceding it in the chromosome and its release date. For the sake of example, suppose there are 10 orders 1, 2,. . . ,10
cr
and consider a chromosome 10, 2, 8, 7, 1, 3, 4, 5, 9, 6 . Suppose the orders 2, 5 and 9 violate their respective deadlines when we try to schedule them as per their position in the chromosome then these orders will
us
not be accepted. The remaining orders will be accepted and will be scheduled according to their position in the chromosome, i.e., orders will be executed in the order 10, 8, 7, 1, 3, 4, 6.
an
4.2. Fitness evaluation
Fitness function calculates the fitness of only accepted orders in the solution and is same as the
M
objective function (Equation (1)).
Algorithm 1 The pseudo-code for generation of each initial solution. ⊲ n is number of orders
te
d
O ← {o1 , o2 , . . . , on } OAS ← Φ; oac ← 1; Ct ← 0; while (O 6= ∅) do Generate a random number r such that 0 ≤ r ≤ 1; if (r < θ) then o o ← arg max E ϕo ;
Ac ce p
1: 2: 3: 4: 5: 6: 7: 8:
⊲ Equation (2)
o∈O
9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23:
else o ← random(O); end if if (R[o] < Cl and (Cl + S[OAS[oac − 1]][o] + Po ) < d(o)) then OAS[oac ] ← o; Cl ← Cl + S[OAS[oac − 1]][o] + Po ; oac ← oac + 1; else Uo ← Uo ∪ {o}; end if O ← O\{o}; end while Apply Assignment Operator to insert the orders from Uo to OAS; Apply Local Search to improve OAS; return OAS;
⊲ Section 4.7 ⊲ Section 4.8
4.3. Initial population generation Each member of the initial population except the first member is generated in the following manner: Initially, OAS is an empty schedule, Cl is the completion time of the last order in the schedule which is Page 9 of 43
initialized to zero. O is set of orders which are yet to be scheduled. Initially, O contains all the orders. Uo is a set which maintains the rejected orders and is initially empty. Iteratively, an order o is selected from O in one of the two ways - With probability θ, Equation (2) is used to select an order, otherwise randomly an order is selected from the set O.
i∈O
Ei ϕi
(2)
ip t
o ← arg max
Where ϕi is (Ski + Pi ) × di and k is the order last added in OAS. Obviously, Equation (2) favors an order
cr
with higher revenue, lesser sum of setup & processing times and earlier deadline. Here, θ is a parameter to be determined empirically. o is deleted from O and added to OAS at the first available position if it is
us
possible to finish its processing before its deadline. If not, o is added to Uo . This process is repeated till O becomes empty. After that the orders from set Uo are inserted to OAS with the help of assignment operator
an
(Section 4.7). The first member of the initial population is also generated using the above process except for the fact that Equation (2) is used always to select an order from O, i.e., first member is generated by setting θ = 1. Each member of the initial population is passed through a local search to further improve
M
its quality. The pseudo-code of initial solution generation process is given in Algorithm 1. 4.4. Selection of parent(s)
d
We have utilized binary tournament selection (BTS) method to choose two parents for crossover and
te
one parent for mutation. To select a parent, BTS starts by choosing two chromosomes uniformly at random from the current population and comparing their fitness values. The chromosome with better
Ac ce p
fitness is selected to be a parent with the probability bt , otherwise the worse one is selected. 4.5. Crossover
As OAS problem posses the characteristics of subset selection and permutation problems both, we have developed a special crossover operator for it. Our crossover operator begins with an empty offspring, and then starting from the first position in the offspring, each position in the offspring is considered one-by-one. For each position, one of the two parents is selected uniformly at random, and the order at that position in the selected parent is copied at the same position into the offspring in case this order is not already copied to the offspring at some other position. If the order is already copied to the offspring then this position is left blank and the next position is considered. Once every position is considered, all empty positions are shifted towards the end by moving orders in the offspring towards the start while respecting the relative ordering among them. Now, all unassigned orders will be assigned to this offspring with the help of Assignment operator (Section 4.7). For example, consider two parents p1 and p2 as shown below:
Page 10 of 43
Parent p1 : 3 6 8 10 9 5 7 4 2 1. Parent p2 : 3 10 9 8 7 4 6 2 1 5. Position: 1 2 3 4 5 6 7 8 9 10. Parent Selected: p1 p1 p1 p2 p2 p1 p2 p2 p1 p1 . Offspring: 3 6 8
– 7
5 –
2 –
1.
ip t
Offspring after shifting: 3 6 8 7 5 2 1 – – –.
cr
From the positions 1st to 3rd , parent p1 is selected and the orders 3rd , 6th and 8th are copied to the offspring at the positions 1st , 2nd and 3rd respectively. At 4th position, parent p2 is selected, but the order
us
8th is already included in the child solution. Therefore, this position is left blank and the 5th position is considered. Similarly, at positions 7th and 9th , parents p2 and p1 are selected respectively, but the
an
corresponding orders 6th and 2nd are already included in the child solution. Therefore, the next position is considered without including these two orders. Finally, at the last position, parent p1 is selected and order 1st is included in the child solution. After this, orders already included in the offspring are shifted
M
towards start without disturbing their relative order. Now, the unassigned orders, viz. orders 4, 9 and 10 are assigned with the help of assignment operator (Section 4.7). The final solution sequence is 4, 3, 9, 6,
d
8, 7, 5, 2, 1, 10.
We have also tried UOB crossover for OAS problem, but the crossover operator described above in
te
combination with assignment operator consistently produced better results than UOB crossover. It is pertinent to mention that UOB crossover produces a permutation containing all n orders as offspring
Ac ce p
and there is no need to use assignment operator, so it is faster than the combination of our crossover operator and assignment operator. 4.6. Mutation operator
The purpose of mutation operator is to maintain diversity in the population as well as to prevent the search process from getting stuck into a local optima. The proposed mutation operator starts by randomly deleting mut (1 ≤ mut ≤ Del) number of orders, where Del is some fixed integer which is always less than the number of orders. Like the crossover operator, the remaining orders are shifted towards the beginning while maintaining the relative ordering among orders by shifting the vacant positions towards the end. For example, suppose the parent solution is 10, 8, 7, 1, 3, 4, 6, 2, 5, 9 and the value of mut is 4. Suppose the randomly deleted orders are 1, 6, 5 and 9. After deletion the sequence is 10, 8, 7, –, 4, –, 2, –, –. Now, all the empty positions are shifted towards end and then the sequence is 10, 8, 7, 3, 4, 6, 2, –, –, –, –. The deleted orders are re-inserted by the assignment operator (Section 4.7). Crossover and mutation are applied in a mutually exclusive manner. With probability πc crossover Page 11 of 43
is used, otherwise mutation is used. 4.7. Assignment operator Assignment operator assigns left-out orders into the schedule. Our assignment operator is a modified version of the Construction phase of [27]. It begins by computing the set of unassigned orders Uo and then
ip t
an iterative process follows. During each iteration, an order o is selected randomly from the set Uo and inserted at the best position in the partial schedule. Here by best position, we mean the position at which inserting the job o increases T N R by maximum amount. If no position is available which increases the
cr
T N R, then we look for a position at which inserting the job o reduces the completion time of the last order in the schedule while keeping the T N R same. If no such position is available also, then we insert
us
the order o at the end of the partial schedule. This whole process is repeated until Uo becomes empty. 4.8. Local search
an
To further improve the solution quality, each initial solution is passed through a local search. This local search is also applied to the child solution obtained after the application of crossover/mutation,
M
only when the difference of the fitness of child solution and the fitness of the best solution found so far is less than Pf times the fitness of the best solution found so far. Our local search is a modified version of the local search used in [25], which is based on exchange of orders with subsequent orders. This
d
local search considered only the T N R while exchanging the orders, whereas the modified version also
te
considers the completion time, Cl , of the last order in the schedule in case T N R remains same before and after the exchange. The motivation is to accept an exchange which makes space to accommodate other
Ac ce p
orders without losing any revenue. The pseudo-code of our local search is presented in Algorithm 2. Algorithm 2 The pseudo-code of local search 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13:
⊲ Csol : Input solution on which local search is to be applied Cl ← Completion time of last order in Csol ; for (i ← 1; i ≤ n − 1; i + +) do for (j ← i + 1; j ≤ n; j + +) do Obtain a new solution Nsol by exchanging the orders at positions i and j in solution Csol ; Ctem ← Completion time of last order in Nsol ; if ((T N R(Nsol ) > T N R(Csol )) or (T N R(Nsol ) = T N R(Csol ) and Ctem < Cl )) then Csol ← Nsol ; Cl ← Ctem ; end if end for end for return Csol ;
The pseudo-code of our hybrid SSGA approach for the OAS problem is given in Algorithm 3, where BTS(), Crossover(), Mutation(), Assignment Operator() and Local Search() functions return the solution as
Page 12 of 43
per their description in Section 4.4, Section 4.5, Section 4.6, Section 4.7 and Section 4.8 respectively. Hereafter, our hybrid steady-state genetic algorithm based approach for OAS problem will be referred to as HSSGA. Algorithm 3 The pseudo-code of HSSGA approach
us
cr
ip t
⊲ Section 4.3
⊲ Section 4.4 ⊲ Section 4.4 ⊲ Section 4.5 ⊲ Section 4.4 ⊲ Section 4.6 ⊲ Section 4.7
⊲ Section 4.8
te
d
M
an
Generate initial population of solutions; Solb ← Best solution of initial population; fSolb ← f itness(Solb ); while (termination condition is not satisfied) do Generate a random number r01 such that 0 ≤ r01 ≤ 1; if (r01 ≤ πc ) then p1 ← BT S(); p2 ← BT S(); Cs ← Crossover(p1 , p2 ); else p ← BT S(); Cs ← M utation(p); end if Cs ← Assignment Operator(Cs ); fCs ← f itness(Cs ); if ((fSolb − fCs ) < (fSolb × Pf )) then Cs ← Local Search(Cs ); fCs ← f itness(Cs ); end if if (Cs is different from current population members) then Replace worst member of population with Cs ; if (fCs > fSolb ) then Solb ← Cs ; fSolb ← fCs ; end if end if end while return Solb ;
Ac ce p
1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28:
5. Hybrid EA/G approach
The evolutionary algorithm with guided mutation (EA/G) approach was developed by Zhang et al. [29]. EA/G is a new member of the class of evolutionary algorithms. The intention behind the development of EA/G was to combine the benefits of two evolutionary algorithms, viz. estimation of distribution algorithm (EDA) and genetic algorithm (GA). Conventionally, GAs use the variety of crossover and mutation operators to generate an offspring from the selected parents. GAs directly use the location information of the current population members to generate offspring and do not make use of global statistical information about the search space which can be extracted by keeping track of the solution generated since the beginning of the algorithm. On the other hand, EDAs use probability model to generate offspring. The probability model characterizes the distribution of the promising solutions in the search space, and is updated at each iteration using the statistical information about the search Page 13 of 43
space extracted from the population members present in that iteration. An offspring is generated by sampling the probability model. So EDAs do not utilize the location information of the solutions. EA/G generates an offspring by making use of the location information of the solutions generated and global statistical information about the search space both. EA/G employs a mutation operator called guided mutation (GM) to produce an offspring. In guided mutation, an offspring is created partly by sampling
ip t
a probability model characterising global statistical information and partly by copying elements from its parent, i.e., offspring is produced by taking into account the global statistical information as well as the
cr
location information of the solutions found so far.
Our hybrid EA/G approach uses the same solution encoding, fitness function, assignment operator
us
and local search as used by HSSGA. Other salient features of our hybrid EA/G approach are described in subsequent subsections. Hereafter, our hybrid EA/G approach will be referred to as EA/G-LS.
an
5.1. Initialization and update of the probability matrix
There are many methods to model the distribution of promising solutions in the search space, i.e.,
M
univariate distribution model, bivariate distribution model, multivariate model. In a univariate distribution model, it assumed that the variables are independent of each other. Under this assumption, the probability distribution of any individual variable should not depend on the value of any other vari-
d
ables. In a bivariate distribution model, also called tree-base model [42], a pair-wise interaction between
te
variables is captured using tree-base models. The conditional probability of a variable may only depend on at most one other variable. When a variable has multiple interactions between variables, then
Ac ce p
a bivariate model fails to capture all possible interactions between variables and a multivariate distribution mode is used. Extended compact genetic algorithm [42], Bayesian optimization algorithm [42] and BayesianDirichlet (BD) metric [42] are examples of the multivariate distribution model. Permutation EDAs [42] are also developed to deal with permutation problems, but these are not frequently in use due to their complex design as well as inefficiency. One can refer to [42] for a detailed study about the above mentioned models.
Similar to EA/G approaches of [29] and [39], to model the distribution of promising solutions in the search space, a univariate marginal distribution (UMD) model is used. The reason for using this model is its simplicity and the failure of other models to perform significantly better despite their higher computational costs on permutation problems. In this model, a n × n probability matrix P (g) models the distribution of promising solutions at iteration g. The value pij gives the probability of order i at the position j in any schedule. Salhi et al. [39] used this matrix representation in the context of a flowshop scheduling problem. This probability matrix is initialised using Ns initial solutions which are generated in the same manner as all the initial population members except the first one in genetic algorithm. The Page 14 of 43
pseudo-code for initializing the probability matrix is given in Algorithm 4.
p11
p12 · · · p1n
ip t
p21 p22 · · · p2n P (g) = . .. .. .. .. . . . pn1 pn2 · · · pnn
cr
Algorithm 4 The pseudo-code for initializing the probability matrix P (g) 1: Compute nij ← number of initial solutions containing order i at position j, ∀i ∈ O, j = 1, 2, . . . , n; nij 2: Compute pij ← N , ∀i ∈ O, j = 1, 2, . . . , n; s
us
Once the n × n probability matrix P (g) is formed, a n element vector Bp (g) is computed which contains for each order i, the position ji which has maximum probability, i.e.,
an
i h Bp (g) = j1 j2 · · · ji · · · jn
M
where each ji is computed as follows:
ji ← arg max pik
(3)
d
k=1,2,...,n
At each generation g, a parent set parent(g) is formed by selecting the best
Ns 2
solution from the
te
current population pop(g) like the previous paper. Once parent(g) is formed, it is used for updating the probability vector P (g). The pseudo-code for updating the probability vector is given in Algorithm 5,
Ac ce p
where λ ∈ (0, 1] is the learning rate and it governs the contribution of solutions in parent(g) to the updated probability vector P (g), i.e., higher the value of λ, more is the contribution of solutions in parent(g). Algorithm 5 The pseudo-code for updating a probability vector p in generation g 1: Compute nij ← number of solutions in parent(g) containing order i at position j, ∀i ∈ O, j = 1, 2, . . . , n; n 2: Compute pij ← (1 − λ)pij + λ Nijs , ∀i ∈ O, j = 1, 2, . . . , n; 2
5.2. Guided mutation (GM) operator
As mentioned already in the beginning of this section, GM operator uses both the global statistical information about the search space (stored in the form of probability matrix P (g)) and the location information of the parent solution to generate a new offspring. Our GM operator is applied on a set of M best solutions in the current population pop(g). The proposed GM operator works as follows: On the set of M best solution {s1 , s2 , . . . , sM }, GM operator is applied only once on each st , t = 1, 2, . . . , M to generate {c1 , c2 , . . . , cM } offspring. The pseudo-code of GM operator is presented in Algorithm 6, where β ∈ [0, 1] is an adjustable parameter and N S is a new offspring constructed through GM operator. Positioning of Page 15 of 43
orders in N S are either sampled randomly from the probability matrix P (g) or directly copied from the solution st in pop(g). To sample the positioning of an order i through probability matrix, we first access Bp (g) to get ji and then make use of entry piji in P (g). In case of sampling from probability vector P (g), the order i is copied to position ji only when the position ji in the partial schedule N S is not occupied, otherwise that order is added into the set Uo . Whereas in case an order is to be directly copied from the
ip t
solution st , it is copied from solution st only when the corresponding position of that order in N S is not occupied. In other words, suppose the position index of the order i in the solution st is l (1 ≤ l ≤ n).
cr
Now, the order i will be copied at the position l in N S if that position is not occupied, otherwise add the order i into set Uo . This process is repeated till all the orders are considered. After this the orders in the
us
set Uo are assigned to N S using the assignment operator (Section 4.7).
Algorithm 6 The pseudo-code of generating a solution through GM operator
te
d
M
an
for (i ← 1; i ≤ n; i + +) do N S[i] ← 0; end for Uo ← ∅; for (each order i ∈ O) do Generate a random number r1 such that 0 ≤ r1 ≤ 1; if (r1 < β) then Generate a random number r2 such that 0 ≤ r2 ≤ 1; if (r2 < piji ) then if (N S[ji ]=0) then N S[ji ] ← i; else Uo ← Uo ∪ i; end if end if else if (N S[st [ji ]]=0) then N S[st [ji ]] ← i; else Uo ← Uo ∪ i; end if end if end for N S ← Assignment Operator(N S); return N S;
Ac ce p
1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25:
⊲ Section 4.7
5.3. Other features Same local search as used in HSSGA (Section 4.8) is applied to each solution obtained through GM operator only when the difference of the fitness of the solution under consideration and the fitness of the best solution found so far is less than Pf times the fitness of the best solution found so far. Zhang et al. [29] copied best
Ns 2
solutions of pop(g) into parent(g) and produced
by applying the guided mutation operator
Ns 2
Ns 2
new children
times on the best solution. The population of the next
Page 16 of 43
generation is formed by using the the best
Ns 2
Ns 2
newly produced children through guided mutation operator and
solutions of pop(g). On the other hand, in our approach parent(g) is formed by using the best
M solutions from pop(g), and M new solutions are produced through applying guided mutation operator once on each of the best M solutions in pop(g) in each generation. The best Ns − M solutions of pop(g) along with M newly produced children constitute pop(g+1). We have also tried the same scheme as [29],
ip t
but the proposed strategy produced better results.
We have made one more modification in the scheme of Zhang et al. [29]. If all the solutions in the
cr
population become identical, then they re-initialized the population and various probabilities. Whereas, in our version, if the best solution of the population does not improve over RI generations, we reinitialize
us
the population and probability matrix. For this, we create a new population of Ns solutions. The best solution obtained since the beginning of the algorithm is included in this population as its first member.
an
Remaining Ns − 1 members of this population are generated in the same manner as non-first members of initial population are generated in genetic algorithm. Then we make use of Algorithm 4 to re-initialize the probability matrix. Salhi et al. [39] followed another strategy to escape from local optima. In their
M
strategy, when there is no improvement in solution quality over certain number of iterations then Ns new solutions are generated by sampling a probability matrix whose each element is complement of
d
the original probability matrix. These Ns solutions are then used to update the probability matrix. This helps in directing the search process to an area different from the area it is currently exploring. For some
te
OAS problem instances, local optima are so strong that this strategy fails to escape from them. Therefore, we have used population and probability matrix reinitialization as described above.
Ac ce p
The pseudo-code of our EA/G-LS approach is given in Algorithm 7. Algorithm 7 EA/G-LS Approach for OAS 1: At generation g ← 0, an initial population pop(g) consisting of Ns solutions, is generated randomly; 2: Initialize the probability vector p for all orders using Algorithm 4; 3: Select best M solutions from pop(g) to form a parent set parent(g), and then update the probability vector p
using Algorithm 5;
4: Apply the GM operator once on each of the M best solutions in pop(g) in order to generate M new solutions.
The assignment operator is applied to each generated solution, if necessary, and then the local search is applied to each generated solution only in case solution under consideration is sufficiently close to the best solution found so far (See Section 5.2). Add all M newly generated solutions along with Ns − M best solutions in pop(g) to form pop(g+1). If the stopping condition is met, return the solution with maximum T N R found so far ; 5: g ← g + 1 ; 6: If the best solution of the population did not improve over RI generations, then reinitialize entire pop(g) except for the best solution, and then go to step 2 ; 7: Go to step 3 ;
Page 17 of 43
6. Computational results The proposed approaches have been coded in C language and executed on a Linux based system with 3.10GHz Intel core i5-2400 processor and 4GB RAM. gcc 4.6.3-2 compiler with O3 flag has been used to compile the C programs, and their performances are examined on the same set of
ip t
test instances as used in [20, 25]. These test instances were generated by Cesaret et al. [20] and can be downloaded from http://home.ku.edu.tr/˜coguz/Research/Dataset_OAS.zip. These test instance were generated considering the three factors: number of orders (n), the tardiness factor (τ ) and
cr
the due date range (R). Number of orders is set to 10, 15, 20, 25, 50 and 100. The values of τ and R are set to 0.1, 0.3, 0.5, 0.7 and 0.9. For each combination of these three factors, 10 instances were generated.
us
Therefore, total number of instances is 2000 (8×5×5×10). The instances were generated in the following way: A maximum revenue gain, Ei and a processing time, Pi , for each order is generated from the
an
uniform distribution [0, 20]. A release date ri is generated with uniform distribution in the interval of [0, τ PT ], where PT is the total processing time of all orders. The sequence-dependent setup time Sji
M
(j, i = 1, 2, . . . , n.j 6= i) between orders j and i is generated with uniform distribution in the interval of [0, 10]. A due date di = ri + maxj=0,1,...,n Sji + max{slack, Pi } of an order (i = 1, 2, . . . , n) is generated in the interval of [PT (1-τ -R/2), PT (1-τ +R/2)]. A dead line of an order i is defined as d¯i =di +RPi . A non-
d
integer tardiness weight of an order i is calculated as wi =Ei /(d¯i -di ). We have also considered some larger
te
sets of instances, viz. instances with 150 and 200 orders and evaluated the performance of HSSGA and EA/G-LS on them. These instances are generated in the same manner as the instances of[20].
Ac ce p
Proposed HSSGA parameters: The proposed hybrid steady-state genetic algorithm starts with the population size P ops =60, bt =0.50 is used to choose a better member through the binary tournament selection (BTS), probability for crossover Co =0.80 (hence mutation is used with probability 0.20 as crossover and mutation are used mutually exclusively). Del=6 is used to delete orders in mutation operator. Pf is set to 0.05, and θ=0.65 is used in initial solution generation. The HSSGA is allowed to execute till the best solution does not improve over Next = 5000 generations and it has executed for at least 10000 generations.
Proposed EA/G-LS parameters: For EA/G-LS, we have used the population size of Ns = 40, M =20 to generate new offspring through the guided mutation operator. The value of β is set to 0.10. λ=0.35 is used to update the probability vector of the promising solutions. The value of θ=0.65 is used in the initial solution generation. Pf is set to 0.05. If the best solution does not improve over RI =50 generations, the entire population, except the best solution, and the probability matrix are reinitialized. The EA/G-LS is allowed to execute till the best solution does not improve over 50 generations and it has executed for at Page 18 of 43
least 200 generations. All the above mentioned parameters for EA/G-LS and HSSGA are set empirically after extensively experimenting with different values. These selected parameter values produce good results, though they may not be optimal for all instances. We have compared our HSSGA and EA/G-LS approaches with Tabu Search (TS) [20] and Artificial
ip t
bee colony algorithm (ABC) [25] based approaches. Like TS and ABC approaches, our approaches have been executed on each instance once. The criteria to evaluate the performance of various approaches is
cr
the percentage deviation of T N R obtained by an approach from the upper bound (U B) on each of the 1500 instances. The percentage deviation on an instance is calculated as follows: (U B−T N R) UB
× 100%
us
% Deviation from UB=
an
Where T N R indicates the T N R obtained by the approach in consideration and U B is the upper bound proposed in Cesaret et al. [20]. We have reported the maximum, average and minimum percentage deviations from UB of various approaches on each group of 10 instances with same n, τ and R. Tables 1
M
to 6 report the results obtained by TS, ABC, HSSGA and EA/G-LS approaches for instances with 10, 15, 20, 25, 50 and 100 orders respectively. These tables also report the number of proven optimal solutions found over each group of 10 instances by each of the 4 methods and their average execution times.
d
Detailed instance by instance results for TS and ABC approaches have been obtained from authors of [25]
te
through personal communication. We have made use of these detailed results to report the percentage deviations of ABC and TS approaches with an increased precision in these tables than those reported in
Ac ce p
[20, 25]. TS approach was executed on 3.0 GHz Intel Xeon Processor based system with 4GB of RAM and ABC approach was executed on 3.0 GHz Intel Pentium 4 Processor based system with 4GB of RAM. As TS and ABC approaches were executed on systems whose configuration is different from the system used to execute HSSGA and EA/G-LS approaches, and therefore, execution times of different approaches in these tables can not be compared precisely. Tables 1 to 6 show that the HSSGA and EA/G-LS solve more instances optimally in comparison to TS and ABC approaches. For problem size 10, out of 250 test instances TS, ABC, HSSGA and EA/G-LS found 188, 247, 249 and 247 optimal solutions respectively (Table 1). For problem size 15, out of 250 test instances TS, ABC, HSSGA and EA/G-LS found 91, 101, 121 and 118 optimal solutions respectively(Table 2). Out of 250 test instances with size 20, TS, ABC, HSSGA and EA/G-LS found 47, 58, 76 and 75 optimal solutions respectively(Table 3). Table 4 shows that for problem size 25, out of 250 test instances TS, ABC, HSSGA and EA/G-LS found 34, 43, 55 and 54 optimal solutions respectively. Table 5 shows that for problem size 50, out of 250 test instances TS, ABC, HSSGA and EA/G-LS found 4, 18, 28 and 31 optimal solutions respectively. As per Table 6, for problem size 100, out of 250 test instances TS, ABC, HSSGA Page 19 of 43
and EA/G-LS found 5, 21, 30 and 31 optimal solutions respectively. From these tables, it can be observed that most of the Max., Avg. and Min. % deviation from UB of the proposed HSSGA approach are less than or equal to TS and ABC approaches. Similar observations can be made about EA/G-LS. As mentioned already that systems used to execute TS and ABC approaches had different configuration than the system on which HSSGA and EA/G-LS have been executed, and
ip t
therefore, execution times of HSSGA and EA/G-LS can not be compared precisely with those of TS and ABC. However, HSSGA and EA/G-LS approaches are definitely slower than TS and ABC on most of the
cr
instances. 6.1. Comparison between HGGA and EA/G-LS
us
As far as comparison between our two proposed approaches are concerned, we can observe from Tables 1 to 6 that on most of the instance groups HSSGA and EA/G-LS obtained results which are very
an
close to each other. But, overall HSSGA shows the dominance over EA/G-LS approach in terms of solution quality. From these tables we can see that the “Total Average” returned by HSSGA is better
M
than EA/G-LS in terms of Max., Avg. and Min. % deviation from U B, except on instances with 50 orders where the minimum (Min.) % deviation from U B is worse in comparison to EA/G-LS. If we compare the number of optimal solutions, HSSGA found more number of optimal solutions than EA/G-LS on all
d
instance groups (except the groups containing 50 orders). However, the computational time taken by
te
HSSGA is always greater than EA/G-LS on all instance groups. We have evaluated the performance of the proposed approaches on large size instances also, i.e.,
Ac ce p
instances with 150 and 200 orders. As the upper bounds for the instances with 150 and 200 orders are not known. Therefore, we take the best of EA/G-LS and HSSGA as an upper bound and reported the % deviation from this upper bound. Table 10 shows the results for instances with 150 and 200 orders. In Table 10, column ”HSSGA vs EA/G-LS” indicates that number of instances, out of 10 instances, on which HSSGA is better (>), equal (=) and worse (<) than EA/G-LS. For instances with 150 orders, HSSGA is better on 130, equal on 83 and worse on 37 in comparison to EA/G-LS. In case of instances with 200 orders, HSSGA is better on 157, equal on 65 and worse on 28 instances in comparison to EA/G-LS. If we compare the computational time of HSSGA and EA/G-LS, it is clear that HSSGA is slower than EA/G-LS on all instances HSSGA.
Page 20 of 43
Page 21 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
Total
Total Average
n=10
2.55
0.00 0.00 0.01 0.00 0.00
4.26 0.01 1.26 0.08 3.60
6.06 5.07 0.27 2.15 1.62
1.68 2.13 7.89 2.39 3.39
2.61 7.89 7.50 2.68 1.24
0.45
0.00 0.00 0.00 0.00 0.00
0.43 0.01 0.13 0.01 0.37
0.65 0.95 0.08 0.27 0.23
0.25 0.76 2.11 0.63 0.86
0.84 1.31 0.84 0.35 0.24
0.02
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.01 0.00 0.00 0.00
0.08 0.09 0.09 0.08 0.08
0.09 0.09 0.00 0.00 0.00
0.45
0.00 0.00 0.01 0.00 0.00
0.00 0.01 0.05 0.08 0.06
0.09 0.10 0.10 0.10 0.10
0.10 0.12 1.56 0.10 0.09
0.10 2.33 6.00 0.10 0.09
ABC
Min.
Max.
Avg.
TS
Max.
0.08
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.01 0.01
0.06 0.07 0.06 0.07 0.08
0.09 0.09 0.23 0.09 0.07
0.09 0.32 0.69 0.07 0.02
Avg.
0.02
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.01 0.00 0.00 0.00
0.08 0.09 0.09 0.08 0.00
0.09 0.09 0.00 0.00 0.00
Min.
0.30
0.00 0.00 0.01 0.00 0.00
0.00 0.01 0.04 0.08 0.07
0.09 0.10 0.10 0.10 0.10
0.07
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.01 0.02
0.06 0.07 0.06 0.07 0.08
0.09 0.09 0.08 0.09 0.07
0.09 0.10 0.67 0.06 0.02
0.02
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.01 0.00 0.00 0.00
d
0.08 0.09 0.09 0.08 0.00
0.09 0.09 0.00 0.00 0.00
Min.
te
0.10 0.12 0.10 0.10 0.10
0.10 0.10 6.00 0.10 0.09
Avg.
HSSGA
Max.
0.41
0.00 0.00 0.01 0.00 0.00
0.00 0.01 0.04 0.08 0.07
0.08
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.01 0.02
0.06 0.07 0.06 0.07 0.08
0.09 0.09 0.08 0.09 0.07
0.25 0.10 0.67 0.06 0.13
0.02
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.01 0.00 0.00 0.00
0.08 0.09 0.09 0.08 0.00
0.09 0.09 0.00 0.00 0.00
Min.
TS
188
7.52
10 9 8 9 8
247
9.88
10 10 10 10 10
10 10 10 10 10
10 10 10 10 10
10 10 9 10 10
10 9 9 10 10
ABC
249
9.96
247
9.88
0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.01 0.00 0.00
0.00 0.00 0.00 0.01 0.00
TS
0.02
0.03 0.03 0.03 0.03 0.03
0.03 0.02 0.03 0.03 0.03
0.03 0.02 0.02 0.03 0.03
0.02 0.02 0.03 0.02 0.03
0.02 0.02 0.02 0.03 0.02
ABC
Run times (s)
ip t
10 10 10 10 10
10 10 10 10 10
10 10 10 10 10
10 10 10 10 10
9 10 9 10 9
EA/G-LS
cr 10 10 10 10 10
10 10 10 10 10
10 10 10 10 10
10 10 10 10 10
10 10 9 10 10
HSSGA
us
9 10 9 8 6
9 8 9 4 7
9 6 6 6 4
6 5 8 8 7
an
M
0.09 0.10 0.10 0.10 0.10
0.10 0.12 0.10 0.10 0.10
1.68 0.10 6.00 0.10 1.24
Avg.
EA/G-LS Max.
Number of optimal solutions
Table 1: Performance of various approaches on instances with 10 orders
Ac ce p
% Deviations from UB
0.02
0.02 0.02 0.02 0.02 0.02
0.02 0.02 0.02 0.02 0.02
0.02 0.02 0.02 0.02 0.02
0.02 0.02 0.02 0.02 0.03
0.02 0.02 0.02 0.02 0.02
HSSGA
0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
EA/G-LS
Page 22 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
Total
Total Average
n=15
Ac ce p
6.80
2.70 4.59 0.08 0.10 0.10
4.06 2.78 2.40 8.28 0.27
12.56 13.73 15.29 11.00 18.49
8.05 11.33 8.04 7.57 7.10
3.57 13.73 3.65 4.45 6.08
2.98
0.27 0.47 0.03 0.03 0.05
0.70 0.57 0.43 2.05 0.11
7.48 7.68 9.35 6.47 6.36
4.42 4.92 4.61 3.97 3.79
2.70 3.19 1.71 1.77 1.32
0.93
0.00 0.00 0.00 0.00 0.00
0.00 0.07 0.07 0.08 0.08
2.47 4.26 5.81 1.16 0.10
3.36 1.57 1.57 1.55 0.00
1.19 0.00 0.00 0.00 0.00
5.84
0.00 0.08 0.08 0.09 0.10
2.21 0.10 0.10 8.28 0.10
12.56 13.73 15.29 11.00 18.39
5.75 11.33 7.76 7.57 7.10
3.55 7.19 3.65 3.86 6.08
Max.
Min.
Max.
Avg.
ABC
TS
% Deviations from UB
2.55
0.00 0.01 0.03 0.03 0.05
0.30 0.09 0.08 1.03 0.08
7.00 7.51 9.17 6.09 6.09
3.21 3.94 4.05 2.98 2.94
2.34 3.01 1.59 0.99 1.16
Avg.
0.63
0.00 0.00 0.00 0.00 0.00
0.05 0.06 0.00 0.03 0.00
2.47 4.27 4.07 0.09 0.10
0.00 1.08 1.08 1.17 0.00
0.00 1.20 0.00 0.00 0.00
Min.
5.73
0.00 0.08 0.08 0.09 0.10
0.10 1.25 0.13 8.28 0.10
12.56 13.73 15.29 11.00 18.39
2.45
0.00 0.01 0.03 0.03 0.05
0.08 0.21 0.09 0.90 0.08
7.00 6.89 9.10 6.04 6.09
2.77 3.73 3.88 2.91 2.94
1.96 2.76 1.46 1.01 1.16
Avg.
0.44
0.00 0.00 0.00 0.00 0.00
0.00 0.07 0.07 0.03 0.00
2.47 3.05 3.88 0.09 0.10
0.00 0.00 0.00 1.17 0.00
0.00 0.00 0.00 0.00 0.00
Min.
d
te
5.75 11.33 6.88 7.57 7.10
3.55 6.21 3.65 3.87 6.08
Max.
HSSGA
5.80
0.00 0.08 0.08 0.09 0.10
0.10 0.10 0.10 8.28 0.10
2.49
0.00 0.01 0.03 0.03 0.05
0.08 0.09 0.08 1.03 0.08
7.00 6.89 9.11 6.04 6.09
3.06 3.73 4.05 3.03 2.94
2.07 2.85 1.49 1.33 1.16
Avg.
0.44
0.00 0.00 0.00 0.00 0.00
0.00 0.06 0.00 0.03 0.00
2.47 3.05 4.07 0.09 0.10
0.00 0.00 0.00 1.17 0.00
0.00 0.00 0.00 0.00 0.00
Min.
TS
91
3.64
9 9 10 10 10
101
4.04
10 9 8 7 8
9 10 10 6 8
0 0 0 0 1
1 0 0 0 2
1 0 1 5 5
121
4.84
118
4.72
0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
0.04 0.01 0.01 0.01 0.01
TS
0.04
0.05 0.06 0.05 0.05 0.05
0.04 0.05 0.05 0.05 0.05
0.04 0.05 0.04 0.05 0.05
0.04 0.04 0.03 0.04 0.04
0.04 0.03 0.04 0.03 0.04
ABC
Run times (s)
ip t
10 10 10 10 10
10 10 10 8 10
0 0 0 1 1
1 1 0 0 3
1 1 3 3 5
EA/G-LS
cr 10 10 10 10 10
10 9 10 9 10
0 0 0 1 1
1 1 0 0 3
1 1 3 6 5
HSSGA
us
7 7 7 3 9
0 0 0 0 1
0 0 0 0 2
0 1 1 1 4
ABC
Number of optimal solutions
an
M
12.56 13.73 15.29 11.00 18.39
6.90 11.33 7.76 7.57 7.10
3.55 6.54 3.65 4.45 6.08
Max.
EA/G-LS
Table 2: Performance of various approaches on instances with 15 orders
0.06
0.05 0.05 0.06 0.05 0.06
0.05 0.05 0.06 0.06 0.06
0.05 0.05 0.06 0.07 0.06
0.06 0.05 0.07 0.07 0.07
0.06 0.06 0.07 0.07 0.06
HSSGA
0.02
0.01 0.01 0.02 0.01 0.01
0.02 0.01 0.02 0.02 0.01
0.02 0.02 0.02 0.02 0.02
0.02 0.02 0.02 0.02 0.02
0.02 0.02 0.02 0.02 0.02
EA/G-LS
Page 23 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
Total
Total Average
n=20
Ac ce p
7.90
1.21 0.74 2.33 1.48 13.21
14.85 14.29 20.18 12.17 12.83
7.69 9.16 10.09 10.70 10.57
7.65 6.22 7.61 6.96 4.69
4.57 9.16 3.29 1.92 3.91
4.25
0.12 0.12 0.48 0.44 1.94
9.50 9.21 9.13 6.71 7.90
5.75 6.32 6.91 5.32 6.25
4.65 4.75 5.21 3.27 2.13
3.16 3.27 1.91 0.64 1.16
1.39
0.00 0.00 0.00 0.00 0.00
5.47 5.94 0.09 0.09 0.00
3.61 2.83 3.09 1.21 0.00
1.65 3.17 3.17 1.29 0.00
1.45 0.82 0.82 0.00 0.00
7.03
0.01 0.74 0.10 0.10 13.21
13.97 12.61 20.18 12.17 12.50
5.91 8.74 9.21 10.70 10.57
7.45 6.22 5.77 6.96 4.69
4.11 2.34 3.04 1.32 3.04
Max.
Min.
Max.
Avg.
ABC
TS
% Deviations from UB
3.53
0.00 0.11 0.07 0.08 1.45
8.75 8.45 8.72 6.46 7.11
4.60 5.43 5.49 4.65 5.54
3.82 3.69 3.70 2.47 1.68
1.90 1.70 1.41 0.46 0.54
Avg.
0.97
0.00 0.00 0.00 0.00 0.00
5.47 5.94 0.09 0.09 0.00
2.06 1.42 2.06 1.16 0.00
1.65 1.44 1.44 0.62 0.00
0.00 0.82 0.00 0.00 0.00
Min.
6.89
0.01 0.09 0.10 0.10 13.21
13.54 11.76 20.18 12.17 12.50
5.53 9.16 9.21 10.70 10.57
3.41
0.00 0.04 0.07 0.08 1.45
8.57 8.23 8.60 6.51 7.11
4.17 5.40 5.49 4.74 5.48
3.23 3.18 3.71 2.38 1.49
1.62 1.66 1.43 0.33 0.40
Avg.
0.93
0.00 0.00 0.00 0.00 0.00
5.47 5.94 0.09 0.09 0.00
2.06 1.42 2.06 1.16 0.00
0.00 1.44 1.44 0.55 0.00
0.00 0.82 0.73 0.00 0.00
Min.
d
te
7.45 5.33 7.00 6.96 4.69
2.74 2.34 3.04 1.28 2.61
Max.
HSSGA
6.90
0.01 0.09 0.10 0.10 13.21
13.97 11.76 20.18 12.17 12.50
3.46
0.00 0.04 0.07 0.08 1.45
8.27 8.44 8.71 6.51 7.11
4.33 5.10 5.53 4.85 5.55
3.45 3.45 3.93 2.46 1.45
1.95 1.66 1.49 0.33 0.41
Avg.
0.87
0.00 0.00 0.00 0.00 0.00
1.99 5.94 0.09 0.09 0.00
2.06 1.42 2.06 1.16 0.00
1.65 1.59 1.59 0.55 0.00
0.00 0.82 0.77 0.00 0.00
Min.
TS
47
1.88
0 8 8 8 6
58
2.32
10 8 7 6 4
0 0 1 2 0
0 0 0 0 1
0 0 0 1 2
2 0 1 6 7
76
3.04
75
3.00
0.04
0.06 0.03 0.03 0.03 0.03
0.05 0.04 0.04 0.07 0.04
0.03 0.07 0.04 0.03 0.05
0.03 0.04 0.03 0.04 0.04
0.10 0.04 0.03 0.04 0.03
TS
0.08
0.11 0.12 0.11 0.11 0.10
0.09 0.10 0.09 0.10 0.08
0.08 0.07 0.09 0.09 0.07
0.07 0.07 0.07 0.07 0.08
0.06 0.06 0.05 0.05 0.07
ABC
Run times (s)
ip t
10 10 10 10 8
0 0 2 3 1
0 0 0 0 1
0 0 0 0 4
1 0 0 7 8
EA/G-LS
cr 10 10 10 10 8
0 0 2 3 1
0 0 0 0 1
1 0 0 0 3
2 0 0 7 8
HSSGA
us
0 0 2 3 1
0 0 0 0 1
0 0 0 0 2
0 0 0 5 3
ABC
Number of optimal solutions
an
M
5.91 7.65 9.21 10.70 10.57
6.67 5.33 7.00 6.96 4.69
4.11 2.34 3.04 1.28 3.04
Max.
EA/G-LS
Table 3: Performance of various approaches on instances with 20 orders
0.11
0.09 0.10 0.10 0.12 0.11
0.10 0.11 0.10 0.11 0.10
0.10 0.10 0.10 0.13 0.12
0.10 0.11 0.12 0.13 0.13
0.12 0.11 0.13 0.13 0.14
HSSGA
0.05
0.02 0.03 0.03 0.04 0.05
0.05 0.05 0.05 0.05 0.05
0.05 0.05 0.06 0.07 0.06
0.05 0.06 0.06 0.06 0.06
0.06 0.06 0.05 0.06 0.05
EA/G-LS
Page 24 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
Total
Total Average
n=25
9.54
6.40 0.44 12.43 24.81 22.27
18.24 14.11 14.75 13.75 14.64
7.37 9.24 7.89 10.89 7.45
5.24 7.33 5.59 5.69 4.05
4.90
1.28 0.07 3.51 7.53 6.77
9.28 9.74 11.51 7.86 10.24
6.35 5.49 5.28 5.70 4.32
3.66 4.89 2.99 2.48 1.85
1.74
0.00 0.00 0.00 0.00 0.00
3.24 7.22 6.61 1.69 2.55
2.80 3.17 1.94 1.60 1.03
1.71 2.61 2.61 1.06 0.00
1.09 1.61 0.87 0.00 0.00
Min.
8.13
4.93 1.48 12.30 20.99 19.33
15.88 12.40 14.03 12.08 13.64
6.76 7.29 6.02 7.17 7.00
5.24 6.04 5.24 5.32 3.38
3.85 5.78 2.45 2.68 2.09
Max.
4.02 3.31 2.25 1.19 0.94
Avg.
Max.
6.28 9.24 4.29 4.07 2.11
ABC
3.95
0.59 0.16 2.58 6.81 6.09
7.72 8.64 10.45 6.90 8.12
4.88 4.49 4.48 4.20 3.37
3.11 3.37 2.40 1.97 1.27
2.70 2.18 1.38 0.63 0.35
Avg.
1.15
0.00 0.00 0.00 0.00 0.00
0.93 5.05 5.37 1.69 0.01
2.80 2.85 1.94 0.64 1.03
1.21 1.12 1.12 0.88 0.00
1.45 0.69 0.00 0.00 0.00
Min.
7.63
4.93 0.01 12.30 20.99 19.10
14.86 12.40 14.03 12.08 13.64
6.76 7.29 6.08 7.17 7.00
3.65
0.49 0.00 2.49 6.67 5.99
7.43 8.49 10.05 6.56 8.12
4.17 4.22 4.28 4.16 3.05
2.48 3.12 1.52 1.61 1.05
2.08 1.66 0.90 0.39 0.35
Avg.
1.00
0.00 0.00 0.00 0.00 0.00
0.93 5.78 5.37 1.58 0.01
1.68 1.82 0.97 0.64 1.03
1.21 1.12 1.12 0.00 0.00
1.09 0.69 0.00 0.00 0.00
Min.
d
te
4.55 4.62 3.50 4.61 2.82
3.16 3.20 1.90 1.67 2.09
Max.
HSSGA
7.76
4.93 0.01 12.30 20.99 19.10
16.22 12.50 14.03 12.08 13.64
3.70
0.49 0.00 2.49 6.67 5.99
1.08
0.00 0.00 0.00 0.00 0.00
0.93 5.78 5.37 1.58 0.01
1.68 1.82 0.97 0.64 1.03
1.21 2.15 2.15 0.00 0.00
1.09 0.69 0.00 0.00 0.00
Min.
TS
34
1.36
5 8 3 4 4
43
1.72
0 0 0 0 1 8 6 3 4 2
0 0 0 3 0
0 0 0 0 2
0 0 1 6 7
55
2.20
54
2.16
0.07
0.06 0.06 0.07 0.08 0.09
0.06 0.08 0.08 0.07 0.08
0.07 0.08 0.09 0.08 0.08
0.10 0.09 0.08 0.08 0.07
0.08 0.06 0.06 0.06 0.05
TS
0.13
0.18 0.17 0.19 0.16 0.17
0.14 0.11 0.18 0.15 0.15
0.12 0.12 0.16 0.19 0.11
0.11 0.10 0.12 0.12 0.12
0.09 0.09 0.08 0.10 0.10
ABC
Run times (s)
ip t
9 10 6 5 4
0 0 0 0 1
0 0 0 0 0
0 0 0 1 2
0 0 2 7 7
EA/G-LS
cr 9 10 6 5 4
0 0 0 0 1
0 0 0 0 0
0 0 0 1 3
0 0 2 7 7
HSSGA
us
0 0 0 0 0
0 0 0 0 0
0 0 0 0 2
0 0 0 4 4
ABC
Number of optimal solutions
an 7.53 8.53 10.17 6.47 8.27
4.11 4.12 4.25 4.19 3.13
2.44 3.46 1.85 1.90 1.10
2.16 1.58 0.91 0.39 0.35
Avg.
M 6.76 7.29 6.08 7.17 6.79
4.55 6.04 3.50 5.32 2.82
3.16 3.20 1.77 1.67 2.09
Max.
EA/G-LS
Table 4: Performance of various approaches on instances with 25 orders
Ac ce p
TS
% Deviations from UB
0.21
0.18 0.17 0.17 0.18 0.21
0.19 0.19 0.19 0.21 0.20
0.20 0.22 0.21 0.21 0.21
0.21 0.21 0.29 0.30 0.26
0.22 0.22 0.24 0.24 0.24
HSSGA
0.12
0.07 0.07 0.08 0.09 0.10
0.12 0.12 0.12 0.12 0.12
0.12 0.14 0.15 0.14 0.13
0.14 0.13 0.13 0.14 0.14
0.13 0.14 0.13 0.12 0.11
EA/G-LS
Page 25 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
Total
Total Average
n=50
9.86
18.49 20.66 20.26 21.05 18.74
8.61 10.77 13.17 17.72 17.66
5.34 8.02 8.01 5.16 5.83
3.48 5.05 4.84 3.44 2.85
6.25
12.67 16.33 15.50 13.85 14.37
6.61 7.26 8.94 9.23 10.32
4.15 5.69 4.42 3.10 3.17
2.63 3.54 2.56 1.40 1.35
3.58
8.05 11.62 7.21 7.75 10.75
3.72 4.25 6.99 2.41 4.27
2.82 3.08 1.55 1.51 2.41
1.58 2.66 2.66 0.00 0.40
1.20 1.39 1.12 0.00 0.00
Min.
8.24
16.77 17.80 17.29 17.85 16.27
5.79 9.36 11.39 15.26 13.67
4.17 6.11 6.82 4.24 4.04
3.77 4.49 3.88 1.56 1.83
2.81 2.33 2.03 16.39 0.00
Max.
2.23 2.44 1.61 2.45 0.54
Avg.
Max.
3.00 3.92 2.10 16.39 1.90
ABC
5.01
11.33 13.77 12.77 11.53 12.38
4.57 5.74 7.09 7.56 8.02
3.07 4.45 3.78 2.28 1.71
2.59 2.98 1.88 0.75 0.50
1.89 1.77 0.96 1.90 0.00
Avg.
2.90
7.30 9.28 3.28 5.81 10.02
3.16 3.51 5.30 1.89 3.69
1.71 3.08 2.07 0.76 3.61
1.63 1.94 1.94 0.00 0.00
1.20 1.06 0.34 0.00 0.00
Min.
7.73
16.77 17.60 16.88 17.85 15.92
5.20 8.30 11.03 15.38 13.38
3.20 5.15 6.33 3.87 4.31
4.57
10.97 13.49 12.33 11.48 12.37
4.17 5.02 6.71 7.18 7.69
2.20 3.43 3.04 2.00 1.54
1.80 2.06 1.51 0.39 0.29
1.18 1.12 0.43 1.73 0.00
Avg.
2.38
6.55 9.28 3.28 5.62 9.57
2.42 3.01 5.08 1.72 2.91
1.11 2.26 1.03 0.36 0.00
1.02 1.55 1.55 0.00 0.00
0.51 0.71 0.00 0.00 0.00
Min.
d
te
2.79 2.66 3.88 1.03 1.21
1.69 1.57 0.84 16.39 0.00
Max.
HSSGA
7.74
16.77 17.40 16.73 17.85 16.09
5.39 8.30 11.03 14.31 13.00
4.58
11.15 13.47 12.45 11.39 12.30
2.28
7.30 9.28 3.46 5.62 9.58
2.42 3.17 5.17 1.37 3.30
1.11 2.26 1.03 0.38 -4.02
1.22 1.54 1.54 0.00 0.00
0.51 0.71 0.00 0.00 0.00
Min.
TS
4
0.16
0 0 0 0 0
18
0.72
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 3
0 0 0 4 10
28
1.12
31
1.24
1.19
1.25 1.26 1.42 1.43 1.37
1.44 1.52 1.59 1.27 1.15
1.36 1.18 1.35 1.22 1.16
1.61 1.49 1.16 0.85 0.68
1.19 0.98 0.89 0.58 0.46
TS
0.89
1.48 1.32 1.19 1.24 1.03
0.97 0.95 0.94 0.95 1.09
0.64 0.45 0.77 1.01 0.72
0.69 0.44 0.80 0.72 1.01
0.68 0.60 0.80 0.73 1.14
ABC
Run times (s)
ip t
0 0 0 0 0
0 0 0 0 0
0 0 0 0 1
0 0 0 4 6
0 0 2 8 10
EA/G-LS
cr 0 0 0 0 0
0 0 0 0 0
0 0 0 0 1
0 0 0 3 6
0 0 1 7 10
HSSGA
us
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 1 2
ABC
Number of optimal solutions
an 4.18 5.05 6.70 7.09 7.62
2.23 3.53 3.17 2.03 1.57
1.93 2.18 1.53 0.35 0.23
1.08 1.10 0.45 1.66 0.00
Avg.
M 3.38 5.73 7.14 4.24 4.04
2.86 3.05 3.68 1.04 1.02
1.47 1.57 0.95 16.39 0.00
Max.
EA/G-LS
Table 5: Performance of various approaches on instances with 50 orders
Ac ce p
TS
% Deviations from UB
1.91
1.68 1.34 1.30 1.38 1.78
1.63 1.83 1.73 1.72 2.02
1.88 2.10 2.37 2.85 2.27
1.71 2.11 2.07 1.87 2.14
1.85 2.30 2.06 1.93 1.89
HSSGA
1.33
1.10 0.99 1.03 1.10 1.16
1.35 1.35 1.47 1.28 1.30
1.42 1.37 1.66 1.51 1.41
1.48 1.46 1.42 1.45 1.40
1.35 1.52 1.49 1.15 1.04
EA/G-LS
Page 26 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
Total
Total Average
n=100
7.99
11.95 17.41 18.38 19.95 22.21
6.42 10.83 12.85 13.20 12.95
5.27 5.99 4.53 4.04 4.65
3.96 5.39 3.81 2.75 2.41
3.28 3.06 2.74 0.80 0.89
5.44
9.03 13.61 15.68 15.22 15.50
4.98 6.66 6.45 7.36 8.40
3.96 3.95 3.54 2.71 2.10
2.65 2.94 2.21 1.65 0.91
2.44 2.10 1.39 0.43 0.22
3.43
7.30 7.32 10.80 10.70 11.23
2.70 3.93 4.04 3.36 5.30
2.49 2.68 2.57 1.61 0.67
1.47 1.94 1.94 0.37 0.19
1.18 1.51 0.36 0.00 0.00
5.73
9.44 11.86 17.25 12.85 16.83
4.84 7.16 11.94 7.37 8.33
4.33 4.00 4.21 2.99 3.13
2.71 3.97 2.45 1.16 0.82
2.21 1.71 1.23 0.35 0.00
ABC
Min.
Max.
Avg.
TS
Max.
3.96
6.62 9.40 11.88 11.19 11.31
3.99 4.72 5.27 5.37 5.90
3.37 3.01 3.13 1.82 1.42
2.11 2.12 1.67 0.63 0.26
1.75 1.35 0.70 0.09 0.00
Avg.
2.44
4.70 5.39 8.82 7.96 4.11
3.20 2.37 3.27 2.77 3.57
2.50 2.46 2.26 0.77 0.53
1.27 1.40 1.40 0.00 0.00
1.11 0.90 0.36 0.00 0.00
Min.
5.03
9.06 11.25 16.29 13.07 16.60
3.53 6.07 10.95 7.78 7.49
2.99 2.95 3.35 2.33 2.15
3.43
6.27 9.03 11.26 11.23 11.07
2.84 3.88 4.62 5.16 5.48
2.09 2.29 2.36 1.49 1.04
1.18 1.27 1.07 0.35 0.10
0.77 0.71 0.19 0.00 0.00
Avg.
1.97
4.57 5.11 8.60 8.07 3.57
2.09 1.71 2.25 2.18 3.73
1.22 1.54 1.22 0.51 0.18
0.50 0.66 0.66 0.00 0.00
0.40 0.45 0.00 0.00 0.00
Min.
d
te
1.71 2.64 1.52 0.85 0.56
1.32 0.98 0.38 0.00 0.00
Max.
HSSGA
5.08
9.34 12.35 15.84 13.06 16.73
3.82 6.26 10.95 7.65 6.88
3.47
6.45 9.09 11.20 11.07 11.02
2.98 4.09 4.69 5.39 5.34
2.31 2.43 2.51 1.32 0.90
1.25 1.41 1.11 0.24 0.14
0.94 0.74 0.20 0.00 0.00
Avg.
2.07
4.75 5.39 8.28 8.05 4.07
2.27 1.67 2.62 2.77 3.15
1.69 1.67 1.48 0.51 0.18
0.69 0.66 0.66 0.00 0.00
0.72 0.44 0.00 0.00 0.00
Min.
TS
5
0.20
0 0 0 0 0
21
0.84
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 4
0 0 0 6 10
ABC
30
1.20
31
1.24
19.13
29.46 26.32 21.51 22.70 17.48
33.60 26.62 22.30 26.36 17.84
25.96 28.87 20.56 15.57 12.15
22.37 20.10 16.77 9.48 7.58
16.76 17.26 10.87 6.21 3.53
TS
7.03
13.63 9.82 6.46 7.52 8.61
6.04 6.25 6.77 6.78 7.42
5.27 6.94 5.86 6.57 6.76
4.89 5.29 4.88 6.16 8.59
3.98 8.22 7.61 6.25 9.08
ABC
Run times (s)
ip t
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 4 5
0 0 2 10 10
EA/G-LS
cr 0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 7
0 0 2 10 10
HSSGA
us
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 4
an
M
3.27 2.96 3.61 2.24 2.15
1.71 2.64 1.69 0.63 0.47
1.22 1.06 0.37 0.00 0.00
Max.
EA/G-LS
Number of optimal solutions
Table 6: Performance of various approaches on instances with 100 orders
Ac ce p
% Deviations from UB
19.16
19.59 19.64 19.06 17.61 17.92
23.40 24.90 19.89 24.24 21.90
19.30 20.96 21.52 17.27 18.66
16.13 19.70 21.00 16.60 18.33
17.91 16.68 17.07 15.08 14.73
HSSGA
12.78
10.65 11.75 11.62 14.30 11.04
11.75 12.79 12.79 14.33 15.45
12.24 12.87 13.75 11.79 13.72
13.53 14.33 13.31 14.04 12.10
13.72 13.76 12.82 11.06 10.02
EA/G-LS
6.2. Comparison between the proposed approaches without local search In this section, we investigate the potential of proposed approaches without local search. The steadystate genetic algorithm without local search is called SSGA and the evolutionary algorithm with guided mutation without local search is called EA/G. The computational results are presented in Tables 7 to 9. A careful look at Tables 1 to 9 show that the local search does not have much influence on the solution
ip t
quality. On small problem sizes, viz. instances with 10, 15 and 20 orders, SSGA and EA/G are able to find as good solutions as HSSGA and EA/G-LS obtained.
cr
Tables 7 to 9 clearly show that SSGA dominates EA/G in terms of solution quality, whereas in terms of computational time EA/G always takes lesser time in comparison to SSGA. From Tables 7 to 9, it can
us
be observed that most of the Max., Avg. and Min. % deviation from UB of the proposed SSGA approach are less than or equal to EA/G. The other observation can be made from these tables is that SSGA and
an
EA/G always perform better than TS (Tables 1 to 6) on all problem sizes, viz. 10, 15, 20, 25, 50 and 100. If we compare the number of optimal solutions returned by these two approaches, SSGA always found more number of optimal solutions than EA/G. Another important observation that can be made is that
M
SSGA and EA/G always found more number optimal solutions than TS and ABC. It is interesting to note that on problem sizes 20 and 25, SSGA found more number of optimal solutions than HSSGA. This
Ac ce p
te
d
is due to the difference in sequence of random numbers generated with and without local search.
Page 27 of 43
Page 28 of 43
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1
0.3
0.5
0.7
0.9
Total
Total Average
R
τ
Ac ce p
0.74
10.81 0.01 0.01 0.00 0.00
0.00 0.01 0.04 0.08 0.07
0.09 0.10 0.10 0.10 0.10
0.10 0.12 0.10 0.10 0.10
0.11
1.08 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.01 0.02
0.06 0.07 0.06 0.07 0.08
0.09 0.09 0.08 0.09 0.07
0.02
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.01 0.01
0.00 0.01 0.00 0.00 0.00
0.08 0.09 0.09 0.08 0.00
0.09 0.09 0.00 0.00 0.00
Min.
0.66
0.00 0.01 0.01 0.00 0.00
0.00 0.01 0.04 0.08 0.07
0.09 0.10 0.10 1.79 2.43
0.10 0.12 0.10 0.93 0.51
0.95 1.15 6.00 0.10 1.79
Avg.
0.11
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.01 0.02
0.06 0.07 0.06 0.24 0.31
0.09 0.09 0.08 0.20 0.12
0.18 0.20 0.78 0.06 0.20
Min.
0.02
0.00 0.00 0.00 0.00 0.01
0.00 0.00 0.00 0.00 0.00
0.00 0.01 0.00 0.00 0.00
0.08 0.09 0.09 0.08 0.00
0.09 0.09 0.00 0.00 0.00
248
9.96
9 10 10 10 10
10 10 10 10 10
10 10 10 10 10
242
9.68
10 10 10 10 10
10 10 10 10 10
10 10 10 9 9
10 10 10 9 10
9 9 8 10 9
0.01 0.01 0.02 0.02 0.01
SSGA
0.02
0.01 0.01 0.01 0.02 0.02
0.01 0.01 0.02 0.02 0.02
0.01 0.02 0.02 0.02 0.02
0.01 0.02 0.02 0.02 0.02
d
te
10 10 10 10 10
10 10 9 10 10
Max.
0.09 0.10 0.67 0.06 0.02
Avg.
Max.
0.10 0.10 6.00 0.10 0.09
SSGA
EA/G
SSGA
EA/G
Number of optimal solutions
% Deviations from UB
0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
15 (Number of orders)
5.57
0.00 0.08 0.84 0.09 0.10
0.10 1.25 0.13 8.28 0.10
12.56 13.73 15.29 11.00 18.39
5.75 11.33 6.88 7.57 7.10
3.55 6.54 3.04 3.87 6.08
Max.
SSGA
2.45
0.00 0.01 0.10 0.03 0.05
0.08 0.21 0.09 0.90 0.08
7.00 6.89 9.11 6.04 6.27
2.59 3.73 3.88 2.98 2.94
2.07 2.73 1.35 0.97 1.16
Avg.
0.44
0.00 0.00 0.00 0.00 0.00
6.25
2.64
0.00 0.01 0.03 0.03 0.05
0.44 0.21 0.08 1.21 0.20
7.05 7.72 9.11 6.23 6.40
3.36 3.89 4.40 2.98 3.11
2.22 3.27 1.60 1.14 1.16
Avg.
cr 0.00 0.08 0.08 0.09 0.10
3.68 1.25 0.10 8.28 1.15
12.56 16.34 15.29 12.44 18.39
8.05 11.33 7.76 7.57 7.10
3.55 6.54 3.65 4.86 6.08
Max.
EA/G
us 0.00 0.07 0.07 0.03 0.00
2.47 3.05 4.07 0.09 0.10
0.00 0.00 0.00 1.17 0.00
0.00 0.00 0.00 0.00 0.00
Min.
% Deviations from UB
an
M
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
EA/G
Run times (s)
Table 7: Performance of SSGA and EA/G approaches on instances with 10 and 15 orders
10 (Number of orders)
0.49
0 0 0 1 1
1 1 0 0 3
1 2 3 6 5
121
4.84
10 10 9 10 10
10 9 10 9 10
ip t
0.00 0.00 0.00 0.00 0.00
0.05 0.07 0.00 0.03 0.08
2.47 3.05 4.07 0.09 0.10
0.00 0.59 0.59 1.17 0.00
0.00 0.00 0.00 0.00 0.00
Min.
SSGA
116
4.64
10 10 10 10 10
9 9 10 8 10
0 0 0 1 1
1 0 0 0 3
1 1 2 5 5
EA/G
Number of optimal solutions
0.04
0.04 0.04 0.04 0.05 0.05
0.04 0.04 0.04 0.05 0.05
0.04 0.04 0.04 0.05 0.04
0.04 0.04 0.05 0.05 0.04
0.04 0.05 0.04 0.05 0.04
SSGA
0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01
EA/G
Run times (s)
Page 29 of 43
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1
0.3
0.5
0.7
0.9
Total
Total Average
R
τ
Ac ce p
7.02
0.01 0.09 0.10 0.10 13.21
13.97 11.76 20.18 12.17 12.50
5.91 9.16 9.21 10.70 11.50
6.67 6.22 6.52 6.96 4.69
3.46
0.00 0.04 0.07 0.08 1.44
8.45 8.23 8.60 6.51 7.18
4.06 5.26 5.53 4.75 5.88
3.69 3.64 3.62 2.38 1.53
0.83
0.00 0.00 0.00 0.00 0.00
1.99 5.94 0.09 0.09 0.00
2.06 1.42 2.06 1.16 0.00
1.65 1.44 1.44 0.55 0.00
0.00 0.82 0.00 0.00 0.00
Min.
7.61
0.01 0.09 1.31 0.10 13.21
13.97 12.61 20.18 13.91 12.50
7.53 9.56 10.09 10.70 12.18
9.41 6.51 6.52 6.96 6.50
4.57 3.47 3.59 1.32 3.59
Avg.
3.73
0.00 0.04 0.20 0.08 1.44
8.61 8.38 8.93 6.68 7.18
4.63 5.81 6.05 5.12 5.82
4.48 4.01 4.24 2.40 1.91
2.27 1.98 1.80 0.46 0.59
Min.
1.02
0.00 0.00 0.00 0.00 0.00
5.47 5.94 0.09 0.09 0.00
2.06 1.42 2.06 1.21 0.00
1.65 1.59 1.59 0.83 0.00
0.00 0.82 0.77 0.00 0.00
78
3.12
10 10 10 10 8
0 0 2 3 1
0 0 0 0 1
0 0 0 0 3
70
2.80
10 10 9 10 8
0 0 1 3 1
0 0 0 0 1
0 0 0 0 3
1 0 0 6 7
0.09 0.08 0.09 0.09 0.09
SSGA
0.08
0.09 0.09 0.09 0.09 0.08
0.08 0.08 0.09 0.08 0.08
0.07 0.08 0.08 0.08 0.08
0.08 0.09 0.08 0.09 0.08
d
te
2 0 2 7 9
Max.
1.91 1.70 1.25 0.33 0.26
Avg.
Max.
4.57 2.34 3.04 1.28 2.61
SSGA
EA/G
SSGA
EA/G
Number of optimal solutions
0.03
0.02 0.02 0.03 0.03 0.03
0.03 0.03 0.03 0.03 0.03
0.03 0.03 0.03 0.03 0.03
25 (Number of orders)
8.26
5.42 0.01 12.30 20.99 19.33
14.86 12.50 14.03 12.08 13.64
6.76 7.29 6.37 7.47 6.79
4.55 5.57 3.50 4.61 3.38
3.16 5.78 1.81 1.67 2.09
Max.
SSGA
4.01
0.54 0.00 2.49 6.76 6.08
7.48 8.61 10.43 6.65 8.07
4.01 4.19 4.14 4.37 3.05
2.53 3.13 1.80 1.61 1.32
2.40 1.88 0.86 0.39 0.35
Avg.
1.23
0.00 0.00 0.00 0.00 0.00
1.75
0.87
0.78 0.06 2.49 6.78 6.12
7.71 8.68 10.60 7.15 8.54
4.90 4.39 4.84 4.66 3.59
3.74 3.54 1.95 1.86 1.31
2.35 2.14 1.16 0.53 0.35
Avg.
cr 4.93 0.58 12.30 22.14 20.35
16.89 12.40 15.19 13.00 13.64
6.76 8.26 7.14 7.47 6.79
6.32 5.28 3.50 5.32 3.04
3.16 5.78 2.45 1.67 2.09
Max.
EA/G
us 0.93 5.78 6.61 1.69 0.01
1.68 1.68 0.97 0.64 1.03
1.21 1.12 1.12 0.00 0.00
1.09 0.69 0.00 0.00 0.00
Min.
% Deviations from UB
an
M
0.03 0.03 0.03 0.03 0.03
0.03 0.03 0.03 0.03 0.03
EA/G
Run times (s)
Table 8: Performance of SSGA and EA/G on instances with 20 and 25 orders
% Deviations from UB
20 (Number of orders)
1.16
0 0 0 0 1
0 0 0 0 0
0 0 0 2 4
0 0 3 7 7
57
2.28
9 10 6 5 3
ip t
0.00 0.00 0.00 0.00 0.00
0.93 5.78 7.02 1.69 0.01
2.80 1.68 0.97 0.96 1.03
1.90 2.15 2.15 0.00 0.00
1.09 0.69 0.00 0.00 0.00
Min.
SSGA
51
2.04
8 9 6 5 4
0 0 0 0 1
0 0 0 0 0
0 0 0 1 2
0 0 2 6 7
EA/G
Number of optimal solutions
0.15
0.16 0.15 0.14 0.15 0.14
0.13 0.13 0.14 0.15 0.15
0.15 0.14 0.14 0.14 0.15
0.14 0.15 0.16 0.16 0.15
0.14 0.15 0.17 0.16 0.16
SSGA
0.06
0.05 0.05 0.06 0.06 0.06
0.07 0.07 0.06 0.06 0.06
0.06 0.06 0.06 0.06 0.06
0.06 0.06 0.06 0.06 0.06
0.05 0.06 0.06 0.05 0.05
EA/G
Run times (s)
Page 30 of 43
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1
0.3
0.5
0.7
0.9
Total
Total Average
R
τ
Ac ce p Min.
8.05
16.77 18.60 16.88 17.98 15.92
5.59 8.58 11.92 15.60 14.29
3.54 5.80 6.82 4.05 4.20
3.63 3.05 3.88 1.18 1.83
2.17 1.74 0.97 16.39 0.00
4.80
11.18 13.77 12.56 11.76 12.55
4.33 5.48 6.98 7.73 7.94
2.61 3.86 3.58 2.10 1.76
2.20 2.22 1.69 0.48 0.32
1.46 1.24 0.54 1.67 0.00
2.54
6.93 9.28 3.28 5.81 9.27
2.42 3.70 4.98 2.92 3.30
1.33 2.58 1.03 0.38 0.00
1.02 1.55 1.55 0.00 0.00
1.03 0.71 0.34 0.00 0.00
8.37
16.99 17.80 19.01 18.58 16.54
6.18 9.19 11.74 16.18 15.06
3.38 6.30 7.14 4.79 4.85
3.82 3.23 3.60 1.56 1.83
2.40 1.57 1.26 16.39 0.00
4.98
11.57 14.03 13.20 11.95 12.96
4.81 5.45 6.90 8.01 8.38
2.66 4.09 3.57 2.33 1.89
2.35 2.39 1.73 0.74 0.28
1.55 1.32 0.63 1.82 0.00
2.67
7.49 11.11 4.19 5.62 10.27
2.79 3.67 5.17 4.12 4.08
1.52 2.26 1.38 0.55 4.02
1.41 1.55 1.55 0.00 0.00
1.00 0.76 0.34 0.00 0.00
27
1.08
0 0 0 0 0
0 0 0 0 0
0 0 0 0 1
0 0 0 2 6
25
1.00
0 0 0 0 0
0 0 0 0 0
0 0 0 0 1
0 0 0 2 7
0 0 0 5 10
1.32 1.49 1.54 1.20 1.03
SSGA
1.13
0.95 0.85 0.92 0.84 0.93
0.98 1.02 1.04 0.85 1.04
1.16 1.14 1.13 1.24 1.12
1.27 1.33 1.30 1.26 1.18
0.43
0.41 0.42 0.42 0.46 0.43
0.49 0.53 0.50 0.49 0.46
0.48 0.46 0.45 0.45 0.44
6.13
10.79 14.39 18.05 15.12 17.79
5.23 7.71 11.61 9.68 9.07
4.24 3.74 5.11 2.80 2.69
2.89 3.36 2.05 1.16 0.74
2.48 1.51 0.64 0.30 0.00
Max.
SSGA
4.11
7.15 10.01 12.53 12.18 11.61
4.05 5.00 5.54 6.41 6.46
3.18 2.98 3.00 1.81 1.35
2.18 1.91 1.37 0.52 0.31
1.65 1.14 0.37 0.03 0.00
Avg.
EA/G
2.44
5.50 6.68 9.84 8.75 4.07
6.18
4.11
7.60 10.02 12.03 11.89 12.13
4.08 5.09 5.65 6.63 6.25
2.97 3.22 3.27 1.79 1.43
1.81 1.98 1.44 0.55 0.14
1.40 1.10 0.31 0.00 0.00
Avg.
cr 11.27 12.87 16.62 14.40 19.09
5.73 8.62 13.60 9.12 8.92
4.36 4.52 4.58 3.23 2.95
2.40 4.48 2.03 1.16 0.56
1.98 1.52 0.53 0.00 0.00
Max.
us 2.83 2.19 2.25 3.11 3.36
1.96 2.02 1.66 0.77 0.35
1.49 1.15 1.15 0.00 0.00
1.09 0.81 0.00 0.00 0.00
Min.
% Deviations from UB
an
M
0.43 0.41 0.39 0.39 0.41
0.41 0.38 0.42 0.37 0.36
EA/G
Run times (s)
d
te
0 0 0 8 10
EA/G
SSGA
Avg.
EA/G
Max.
Max.
Min.
SSGA
Avg.
Number of optimal solutions
100 (Number of orders)
Table 9: Performance of SSGA and EA/G on instances with 50 and 100 orders
% Deviations from UB
50 (Number of orders)
2.47
22
0.88
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 1
0 0 1 9 10
SSGA
ip t
5.62 6.59 8.91 8.52 5.06
2.83 2.28 3.10 3.95 3.66
1.63 1.76 2.10 0.34 0.35
0.88 1.23 1.23 0.00 0.00
1.09 0.72 0.00 0.00 0.00
Min.
29
1.16
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 2 6
0 0 1 10 10
EA/G
Number of optimal solutions
11.36
8.15 8.82 8.17 9.07 10.72
9.22 11.58 10.86 11.70 12.97
11.76 13.12 12.75 12.23 12.85
9.53 13.10 15.56 13.21 10.32
12.20 14.16 14.54 10.16 7.36
SSGA
3.55
3.52 3.68 3.87 4.33 4.21
3.52 3.57 4.01 4.57 5.44
3.83 3.19 3.91 3.47 3.74
3.14 3.17 3.63 3.21 3.11
3.11 2.83 2.67 2.59 2.50
EA/G
Run times (s)
Page 31 of 43
R
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
0.1 0.3 0.5 0.7 0.9
τ
0.1
0.3
0.5
0.7
0.9
0.00 1.04 0.35 0.77 0.52
0.07 0.00 0.08 0.48 0.26
0.14 0.06 0.00 0.06 0.12
0.00 0.19 0.06 0.00 0.06
0.00 0.23 0.04 0.13 0.11
0.01 0.00 0.01 0.07 0.04
0.01 0.01 0.00 0.01 0.03
0.00 0.04 0.01 0.00 0.01
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
1.53 1.28 1.03 0.37 0.32
0.92 1.25 1.23 1.19 0.81
0.44 0.63 0.48 0.12 0.19
0.42 0.19 0.00 0.06 0.00
0.31 0.00 0.00 0.00 0.00
0.85 0.34 0.27 0.12 0.08
0.5 0.67 0.61 0.5 0.32
0.21 0.19 0.19 0.03 0.05
0.14 0.05 0.00 0.01 0.00
0.12 0.00 0.00 0.00 0.00
Avg.
Max.
0.00 0.01 0.01 0.00 0.00
Avg.
Max.
0.00 0.06 0.06 0.00 0.00
EA/G-LS
Min.
Ac ce p
0.13 0.00 0.00 0.00 0.00
0.00 0.13 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
Min.
10 6 6 5 5
8 10 9 8 7
8 7 9 4 4
9 6 0 0 0
9 0 0 0 0
>
0 0 2 1 1
1 0 0 0 1
1 2 1 4 2
1 1 9 10 9
1 7 9 10 10
=
0 4 2 4 4
1 0 1 2 2
1 1 0 2 4
0 3 1 0 1
0 3 1 0 0
<
HSSGA vs EA/G-LS
171.20 131.14 107.15 115.00 92.56
138.01 186.74 195.00 170.35 111.56
123.84 151.02 148.85 113.44 102.68
200 (Number of orders)
49.57 41.44 40.10 33.63 33.50
36.49 42.58 44.88 44.93 31.77
0.00 0.00 0.02 0.02 0.01
0.00 0.00 0.00 0.00 0.00
Avg.
0.00 0.81 0.92 0.95 0.27
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.08 0.28 0.16 0.05
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
2.49 1.03 0.93 0.52 0.28
0.99 1.13 1.27 1.69 0.85
0.64 0.67 0.47 0.42 0.23
0.52 0.28 0.14 0.00 0.00
0.42 0.00 0.00 0.00 0.00
Max.
1.55 0.50 0.21 0.14 0.09
10 9 3 5 7
10 9 10 10 10
10 10 10 9 5
10 8 2 0 0
10 0 0 0 0
>
0 0 0 1 0
0 1 0 0 0
0 0 0 1 5
0 2 4 4 8
0 9 10 10 10
=
0 1 7 4 3
0 0 0 0 0
0 0 0 0 0
0 0 4 6 2
0 1 0 0 0
<
HSSGA vs EA/G-LS
ip t 0.98 0.00 0.00 0.00 0.00
0.33 0.00 0.39 0.50 0.16
0.05 0.05 0.09 0.00 0.00
0.19 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
Min.
cr
0.72 0.61 0.77 1.15 0.44
0.38 0.3 0.22 0.17 0.06
0.29 0.1 0.02 0.00 0.00
0.17 0.00 0.00 0.00 0.00
Avg.
EA/G-LS
us
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
Min.
an
0.00 0.00 0.05 0.05 0.05
0.00 0.01 0.00 0.00 0.00
Max.
HSSGA
% Deviations from UB
M
41.10 40.28 41.32 39.45 45.03
46.12 45.66 45.00 42.19 41.47
43.12 42.59 39.14 31.57 31.25
EA/G-LS
d
105.24 105.36 97.49 74.26 71.68
te
90.49 75.49 71.02 68.12 69.09
HSSGA
Run times (s)
Table 10: Performance of HSSGA and EA/G-LS on instances with 150 and 200 orders
HSSGA
% Deviations from UB
150 (Number of orders)
532.10 379.68 243.39 301.99 308.00
502.93 476.25 514.40 459.64 399.87
331.93 412.94 362.96 360.71 241.31
281.13 269.00 183.34 157.35 181.68
208.91 180.07 164.93 162.11 172.16
HSSGA
139.67 105.25 98.68 90.98 94.85
100.64 105.49 96.93 102.91 99.15
99.57 119.52 100.10 93.44 95.09
95.66 104.12 100.94 104.19 105.83
103.45 102.90 95.70 85.93 84.91
EA/G-LS
Run times (s)
6.3. Comparison of TS, ABC, HSSGA, EA/G-LS, SSGA and EA/G approaches over groups of same size Table 11 compares TS, ABC, HSSGA, EA/G-LS, SSGA and EA/G approaches over the groups of instances with same problem size, i.e., same number of orders. Each problem size consists of 25 groups, and each group contains 10 instances with the same number of orders, tightness factor (τ ) and range factor (R). Results in the corresponding columns indicate that on how many groups, the former approach
ip t
is less than (Entries marked with <), better than (entries marked with >) and equal to (entries marked with =) to the latter approach. For example, column “HSSGA vs. TS” reports that, out of 25 groups, on
cr
how many groups HSSGA is worse than TS, better than TS and equal to TS for each problem size. The comparison is done in terms of the maximum (Max.), average (Avg.) and minimum (Min.) % deviation
us
from upper bound. In this table, row “Total” provides the overall count out of all 150 groups (25 groups for each problem size). This table provides a summary of all the results and observations similar to
Ac ce p
te
d
M
an
Tables 1 to 9 can be drawn from this table also about the relative performance of various approaches.
Page 32 of 43
Page 33 of 43
0< 25> 0=
0< 25> 0=
0<
0< 24> 1=
0< 25> 0=
1<
25
50
100
9=
0< 25> 0=
0< 25> 0=
20
27=
0< 25> 0=
0< 17> 8=
15
141>
0< 22> 3=
0< 13> 12=
10
122>
0< 19> 6=
1< 18> 6=
Total
Avg.
69=
81>
0<
0< 23> 2=
0< 22> 3=
0< 15> 10=
0< 10> 15=
0< 10> 15=
0< 1> 24=
Min.
HSSGA vs. TS
Max.
n
=
28=
121>
1<
0< 25> 0=
0< 24> 1=
0< 25> 0=
0< 18> 7=
0< 12> 13=
1< 17> 7=
Max.
9=
141>
0<
0< 25> 0=
0< 25> 0=
0< 25> 0=
0< 25> 0=
0< 22> 3=
0< 19> 6=
Avg.
67=
83>
0<
0< 23> 2=
0< 22> 3=
0< 15> 10=
0< 10> 15=
0< 12> 13=
0< 1> 24=
Min.
EA/G-LS vs. TS
71=
66>
13<
2< 22> 1=
2< 18> 5=
1< 12> 12=
2< 8> 15=
3< 3> 19=
3< 3> 20=
Max.
40=
101>
9<
1< 23> 1=
0< 24> 1=
0< 23> 2=
4< 15> 6=
3< 12> 10=
1< 4> 20=
Avg.
90=
52>
8<
2< 19> 4=
0< 19> 6=
1< 6> 18=
2< 2> 21=
3< 6> 16=
0< 0> 25=
Min.
HSSGA vs. ABC
77=
60>
13<
3< 21> 12=
1< 18> 6=
3< 10> 12=
1< 6> 18=
2< 2> 21=
3< 3> 19=
Max.
39=
95>
16<
1< 23> 1=
1< 24> 0=
2< 22> 1=
7< 13> 5=
2< 9> 14=
3< 4> 18=
Avg.
92=
48>
10<
2< 17> 6=
0< 18> 7=
3< 6> 16=
4< 2> 19=
1< 5> 19=
0< 0> 25=
Min.
93=
34>
23<
9< 10> 6=
8< 10> 7=
2< 4> 19=
56=
65>
29<
7< 16> 2=
11< 13> 14=
6< 12> 7=
3< 13> 9=
2< 9> 14=
0< 2> 23=
d
2< 4> 19=
2< 4> 19=
0< 2> 23=
Max.
Avg.
1< 14> 10=
2< 18> 5=
Max.
113=
26>
11<
5< 11> 9=
4< 8> 13=
0< 2> 23=
1< 4> 21=
25=
119>
6<
1< 24> 0=
1< 23> 1=
0< 25> 0=
1< 15> 9=
0< 10> 15=
0< 11> 14=
0< 1> 24=
Min.
7< 12> 6=
4< 10> 11=
3< 16> 6=
Max.
7=
141>
2<
0< 25> 0=
0< 25> 0=
0< 25> 0=
70=
77>
3<
2< 21> 2=
1< 21> 3=
0< 13> 12=
24=
106>
9=
135>
6<
0< 25> 0=
0< 25> 0=
1< 24> 0=
70=
73>
8<
2< 21> 2=
1< 21> 3=
2< 11> 12=
0< 8> 17=
1< 11> 13=
2< 1> 23=
Min.
us 20<
2< 23> 0=
1< 23> 1=
3< 22> 0=
0< 25> 0=
4< 18> 3=
1< 18> 6=
Avg.
EA/G vs. TS
an
0< 25> 0=
1< 22> 2=
1< 19> 5=
Avg.
SSGA vs. TS
M
1< 1> 23=
0< 0> 25=
Min.
HSSGA vs. EA/G-LS
te
EA/G-LS vs. ABC
12< 12> 1=
4< 19> 2=
2< 22> 1=
6< 15> 4=
4< 12> 9=
2< 4> 19
Avg.
70=
46>
34<
95=
42>
13<
6< 14> 5=
2< 15> 8=
2< 5> 18=
1< 2> 22=
2< 6> 17=
0< 0> 25=
Min.
57=
28>
65<
14< 9> 2=
13< 7> 5=
9< 7> 9=
14< 1> 10=
7< 1> 17=
8< 3> 14=
Max.
28=
47>
75<
13< 11> 1=
10< 14> 1=
13< 11> 1=
17< 5> 3=
14< 3> 8=
8< 3> 14=
Avg.
84=
36>
30<
7< 14> 4=
7< 13> 5=
6< 4> 15=
6< 0> 19=
2< 5> 18=
2< 0> 23=
Min.
EA/G vs. ABC
ip t
27=
84>
30<
cr
12< 11> 2=
6< 13> 6=
4< 10> 11=
4< 5> 16=
4< 4> 17=
4< 3> 18=
Max.
SSGA vs. ABC
58=
68>
24<
12< 11> 2=
5< 17> 3=
4< 11> 10=
0< 14> 11=
2< 8> 15=
1< 7> 17=
Max.
33=
96>
21<
11< 13> 1=
4< 20> 1=
2< 21> 2=
1< 19> 5=
2< 15> 8=
1< 8> 16=
Avg.
96=
40>
14<
8< 9> 8=
5< 13> 7=
0< 6> 19=
0< 6> 19=
1< 4> 20=
0< 2> 23=
Min.
SSGA VS. EA/G
Table 11: Comparison of our approaches (HSSGA, EA/G-LS, SSGA and EA/G) among themselves and with TS and ABC in terms of number of groups of same size where they perform better(>)/same (=)/worse(<)
Ac ce p
6.4. Convergence behaviour of HSSGA, EA/G-LS, SGGA and EA/G We have also studied the convergence behaviour of HSSGA, EA/G-LS, SGGA and EA/G on each instance. Figures 1(a) to 1(f) plot the number of solutions generated by HSSGA, EA/G-LS, SGGA and EA/G to reach the best solution on each of the 250 instances of a particular size. In these figures, the vertical axis indicates the average number of solutions generated by the corresponding approach (HSSGA,
ip t
EA/G-LS, SGGA or EA/G) to reach the best solution. From these figures, it can be observed that HSSGA, EA/G-LS, SGGA and EA/G approaches converge very fast on most of the instances.
cr
HSSGA generates minimum 10000 solutions, but on most of the instances, it reached the best solution after generating very few solutions only. On problem size 10, HSSGA took on an average 42 and
us
maximum 3075 solutions to reach the best solution, on problem size 15, the average number of solutions is 417, whereas the maximum number of solutions is 7404, on problem size 20, the average number of
an
solutions is 889, whereas the maximum number of solutions is 13368. Similarly, on problem sizes 25, 50 and 100, HSSGA generates on an average 1379, 3950 and 6618 solutions respectively to reach the best solution, whereas the maximum number of solutions generated by HSSGA to reach the best solution
M
are 12223, 22178 and 21561 on instances of size 25, 50 and 100 respectively. Same kind of observations can be made about the convergence behaviour of EA/G-LS. On problem sizes 10, 15, 20, 25, 50 and 100,
d
EA/G-LS generates on an average 120, 320, 580, 820, 1940 and 3520 solutions respectively to reach the
te
best solution. The maximum number of solutions generated by EA/G-LS to reach the best solution are about 2800, 5540, 5700, 6580, 6580 and 12760 on instances of size 10, 15, 20, 25, 50 and 100 respectively.
Ac ce p
SGGA, on problem sizes 10, 15, 20, 25, 50 and 100 generates on an average 87, 662, 1038, 1523, 6108 and 15324 solutions respectively to reach the best solution. The maximum number of solutions generated by SGGA to reach the best solution are about 3126, 11795, 9082, 10854, 23158 and 39598 on instances of size 10, 15, 20, 25, 50 and 100 respectively. Similarly EA/G, on problem size 10, took on an average 240 and maximum 3460 solutions to reach the best solution, on problem size 15, the average number of solutions is 566, whereas the maximum number of solutions is 8000, on problem size 20, the average number of solutions is 906, whereas the maximum number of solutions is 5180. Similarly, on problem sizes 25, 50 and 100, EA/G generates on an average 1278, 2497 and 4044 solutions respectively to reach the best solution, whereas the maximum number of solutions generated by EA/G to reach the best solution are 6940, 10460 and 15020 on instances of size 25, 50 and 100 respectively.
Page 34 of 43
HSSGA
EA/G-LS
SGGA
HSSGA
EA/G
EA/G-LS
SGGA
EA/G
3,000
2,000
1,000
10,000
8,000
6,000
ip t
Number of solutions
Number of solutions
12,000
4,000
0
cr
2,000
0 0
25
50
75
100 125 150 175 200 225 250
0
75
100 125 150 175 200 225 250
SGGA
(b) Instances with 15 orders (n=15) HSSGA
EA/G
EA/G-LS
SGGA
EA/G
an
EA/G-LS
50
Instances
(a) Instances with 10 orders (n=10) HSSGA
25
us
Instances
M
Number of solutions
10,000
d
5,000
0 25
50
75
8,000
6,000
4,000
0
100 125 150 175 200 225 250
Ac ce p
0
10,000
2,000
te
Number of solutions
12,000
0
25
50
Instances
Number of solutions
20,000
15,000
EA/G-LS
SGGA
(d) Instances with 25 orders (n=25) HSSGA
EA/G
10,000
5,000
0
EA/G-LS
SGGA
EA/G
40,000
Number of solutions
HSSGA
100 125 150 175 200 225 250
Instances
(c) Instances with 20 orders (n=20)
25,000
75
30,000
20,000
10,000
0 0
25
50
75
100 125 150 175 200 225 250
Instances (e) Instances with 50 orders (n=50)
0
25
50
75
100 125 150 175 200 225 250
Instances (f) Instances with 100 orders (n=100)
Figure 1: Number of solutions generated by HSSGA and EA/G-LS to reach the best solution on each instance with 10, 15, 20, 25, 50 and 100 orders
Page 35 of 43
Figure 2(a), Figure 2(b), Figure 2(c) and Figure 2(d) depict the convergence graphs of EA/G-LS, EA/G, HSSGA and SGGA approaches on four instances of different sizes respectively. In these figures, horizontal axis indicates the number of generations and vertical axis indicates the revenue gain. Please note that different approaches terminate at different generation as per our stopping criteria. For a fair comparison, each generation in these figures is a set of 20 solutions, because SGGA generates one solution in
ip t
each generation, whereas EA/G generates 20 solutions in each generation. From these figures, it is clear that EA/G-LS and HSSGA converge faster than EA/G and SGGA respectively. Form these figures, we
cr
can see that convergence rate of both EA/G-LS and HSSGA comes closer to each other as the number of
us
orders increases.
900 500
an
880
480
840
M
Revenue
Revenue
860
490
820
800
470
HSSGA
EA/G
SGGA
d
EA/G-LS
0
100
200
300
400
500
0
70
140
210
Generation
Ac ce p
Revenue
1,010
1,000
990
280
350
420
490
560
(b) Instance with 100 orders (n=100)
1,580
1,560
1,540
Revenue
1,020
SGGA
Generation
(a) Instance with 50 orders (n=50) 1,030
HSSGA
EA/G
780
te
460
EA/G-LS
1,520
1,500
1,480 980 EA/G-LS
HSSGA
EA/G
SGGA
EA/G-LS
1,460
HSSGA
EA/G
SGGA
970 1,440
0
70
140
210
280
350
420
490
Generation (c) Instance with 150 orders (n=150)
560
0
70
140
210
280
350
420
490
560
Generation (d) Instance with 200 orders (n=200)
Figure 2: Comparison of convergence graphs of EA/G-LS, EA/G, HSSGA and SGGA approaches on instances of different sizes
Page 36 of 43
6.5. Influence of parameter settings on solution quality To investigate how the solution quality is affected by the parameter settings, we have taken different groups of instances comprising different number of orders, τ and R values. Each group has a name of the form (number of orders)orders τ (τ value) R(R value). For HSSGA, the groups are 25orders τ 0.3 R0.7, 100orders τ 0.3 R0.1 and 100orders τ 0.7 R0.1. For EA/G-LS, groups that are chosen are
ip t
50orders τ 0.7 R0.7, 100orders τ 0.3 R0.7 and 100orders Tao0.7 R0.9. We have varied all the parameters one-by-one while keeping all the other parameters unaltered. In doing so all other parameters were set
cr
to their values reported at the start of Section 6. The results are reported in Table 12. In Table 12, rows in bold show the results with original control parameter values that are used in all the computational
us
experiments involving our approaches. The row with dash (-) indicates that M value can not be more than pop value. From the table, it can be clearly observed that the original parameter values provide as
Ac ce p
te
d
M
an
good as or better results than other parameter values.
Page 37 of 43
Page 38 of 43
12.08 12.08 12.08 12.08 12.08
12.08 12.08 12.08 12.08 12.08
12.08 12.08 12.08 12.08 12.08
12.08 12.08 12.08 12.08 12.08
12.08 12.08 12.08 12.08 12.08
12.08 12.08 12.08 12.08 12.08
0.40 0.45 0.50 0.55 0.60
0.70 0.75 0.80 .85 .90
2 4 6 8 10
0.01 0.05 0.10 0.15 0.20
0.55 0.60 0.65 0.70 0.75
3000 4000 5000 6000 7000
BT
CO
Del
P er
IM
Next
6.72 6.72 6.56 6.72 6.72
6.49 6.77 6.56 6.47 6.47
6.59 6.56 6.59 6.59 6.70
6.59 6.59 6.56 6.71 6.63
6.49 6.77 6.56 6.62 6.62
6.59 6.59 6.56 6.67 6.62
1.69 1.69 1.58 1.69 1.69
1.58 1.69 1.58 1.58 1.58
1.69 1.58 1.69 1.69 1.69
1.69 1.69 1.58 1.69 1.69
1.58 1.69 1.58 1.58 1.58
1.69 1.69 1.58 1.69 1.58
1.69 1.69 1.58 1.58 1.69
2.16 1.89 1.71 1.89 1.89
2.16 1.89 1.71 1.80 1.89
2.12 1.71 1.71 1.71 1.71
2.07 1.89 1.71 1.98 1.89
2.07 2.16 1.71 2.16 2.16
1.89 2.07 1.71 1.98 1.80
1.89 1.89 1.71 1.89 2.25
1.34 1.31 1.18 1.26 1.22
1.33 1.21 1.18 1.27 1.29
1.29 1.18 1.24 1.27 1.25
1.25 1.27 1.18 1.30 1.28
1.31 1.27 1.18 1.42 1.42
1.26 1.31 1.18 1.34 1.30
1.28 1.23 1.18 1.32 1.3
0.69 0.59 0.50 0.59 0.59
0.69 0.50 0.50 0.78 0.59
0.69 0.50 0.59 0.88 0.69
0.69 0.59 0.50 0.78 0.69
0.69 0.59 0.50 0.78 0.78
0.78 0.78 0.50 0.78 0.79
0.79 0.69 0.50 0.79 0.69
3.82 3.75 3.53 3.75 3.75
4.02 3.53 3.53 3.92 4.02
3.58 3.53 3.92 3.82 3.82
3.53 4.02 3.53 3.61 3.72
3.16 3.12 2.84 3.08 3.08
2.90 2.80 2.84 2.89 2.99
3.03 2.84 2.91 2.97 2.99
2.89 2.86 2.84 3.02 2.88
2.92 2.92 2.84 3.11 3.11
2.89 2.88 2.84 2.87 2.93
2.99 2.94 2.84 2.88 2.9
2.45 2.45 2.09 2.45 2.45
2.18 2.07 2.09 2.18 2.36
2.49 2.09 2.37 2.18 2.18
2.28 2.09 2.09 2.28 2.01
2.28 2.47 2.09 2.59 2.59
1.87 2.09 2.09 2.45 2.09
2.39 2.27 2.09 2.27 1.87
d
te
3.72 3.55 3.53 3.95 3.95
3.62 3.65 3.53 3.72 3.72
3.82 3.72 3.53 3.72 3.55
Rn
IM
P er
β
0.25 0.30 0.35 0.40 0.45
10 20 30 40 -
20 30 40 50 60
Value
30 40 50 60 70
0.55 0.60 0.65 0.70 0.75
0.01 0.05 0.10 0.15 0.20
0.05 0.10 0.15 0.20 0.25
14.31 14.31 14.31 14.31 14.31
15.67 14.65 14.31 15.33 15.38
15.04 14.31 14.31 14.31 14.31
15.38 14.31 15.38 14.31 14.31
15.38 14.48 14.31 14.31 14.31
15.33 14.31 14.31 14.31 -
15.67 15.26 14.31 15.31 15.26
Max.
7.01 6.93 7.09 7.01 7.14
7.3 7.09 7.09 7.23 7.28
7.38 7.09 6.99 7.05 7.05
7.26 7.09 7.17 7.2 7.2
7.3 7.09 7.09 7 7.05
7.63 7.09 7.06 6.95 -
7.5 7.32 7.09 7.3 7.1
Avg.
2.06 1.37 1.37 1.37 2.41
2.41 1.37 1.37 2.06 2.41
0.26 0.24 0.28 0.23 0.28
0.39 0.24 0.45 0.71 0.71
0.29 0.32 0.24 0.28 0.31
0.28 0.24 0.37 0.30 -
0.39 0.3 0.24 0.31 0.27
Avg.
0.81 0.63 0.63 0.72 0.77
0.81 0.74 0.63 0.81 0.63
0.34 0.28 0.24 0.28 0.30
0.31 0.26 0.24 0.27 0.29
ip t
0.58 0.63 0.72 0.63 0.72
0.99 0.63 0.99 1.38 1.38
0.77 0.85 0.63 0.72 0.77
0.74 0.63 0.81 0.81 -
0.85 0.63 0.63 0.81 0.81
Max.
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.19 0.19
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.16 0.00 -
0.00 0.00 0.00 0.00 0.00
Min.
100orders τ 0.3R0.7
cr
2.41 1.37 1.37 1.89 1.89
2.41 1.37 2.06 2.41 2.41
2.06 2.06 1.37 2.06 1.89
2.41 1.37 2.41 1.37 -
2.41 2.41 1.37 2.41 2.06
Min.
50orders τ 0.7R0.7
% Deviations from UB
us
an
M λ
M
P op
P op
6.80 6.66 6.56 6.47 6.74
Min.
13.00 12.08 12.08 12.08 12.08
Avg.
100orders τ 0.7R0.1
40 50 60 70 80
Min.
Max.
Ac ce p Avg.
100orders τ 0.3R0.1
Min.
Max.
Avg.
Max.
Parameter
25orders τ 0.7R0.7
EA/G-LS
% Deviations from UB
Value
Parameter
Table 12: Influence of parameter settings on solution quality
HSSGA
7.39 7.83 6.88 7.83 7.83
8.10 7.31 6.88 7.16 7.07
7.70 6.88 6.88 6.97 7.39
7.52 6.88 8.51 9.37 9.37
7.07 7.43 6.88 7.49 7.19
7.52 6.88 7.57 7.83 -
8.10 7.57 6.88 7.13 7.13
Max.
5.40 5.49 5.34 5.49 5.57
5.31 5.33 5.34 5.35 5.38
5.75 5.34 5.49 5.40 5.59
5.92 5.34 6.45 6.94 6.94
5.43 5.52 5.34 5.50 5.32
5.65 5.34 5.68 5.76 -
5.85 5.49 5.34 5.17 5.36
Avg.
3.8 3.48 3.15 3.32 3.32
3.18 3.15 3.15 3.14 3.32
3.35 3.15 3.30 3.48 3.48
3.74 3.15 3.42 4.54 4.54
3.15 3.07 3.15 3.57 3.32
3.48 3.15 3.18 3.32 -
3.62 3.48 3.15 3.32 2.90
Min.
100orders τ 0.7R0.9
7. Conclusions This paper presented two hybrid metaheuristic approaches, viz. HSSGA and EA/G-LS for order acceptance and scheduling (OAS) problem in a single machine environment. We have compared our approaches with two state-of-the-art approaches, viz. TS and ABC for OAS problem and the computational
ip t
results show the superiority of the proposed approaches in terms of solution quality. However, in terms of computational times, TS and ABC approaches are faster than HSSGA and EA/G-LS approaches on most of the instances. As far as comparison between the two proposed approaches is concerned, HSSGA
cr
and EA/G-LS obtained solutions close to each other on most of the instances. Though, HSSGA is slightly better than EA/G-LS in terms of solution quality, it is slower.
us
As a future work, we would like to extend our approaches to the other version of OAS problem where orders are processed in a multiple machine environment. Approaches similar to our approaches can be
an
developed for other permutation problems also. In our EA/G-LS approach for OAS problem, we have used univariate marginal distribution (UMD) model to represent the distribution of promising solutions
M
in the search space. UMD model is simple and may not be able to properly model all the requirements of a complex permutation problem like OAS problem. Recently, a hybrid approach combining an estimation of distribution algorithm with a variable neighborhood search has been developed for permutation
d
flowshop scheduling problem which provides very good results [37]. This estimation of distribution
te
algorithm uses a dedicated probabilistic model for permutation problems called generalized Mallows model [38]. Therefore, a possible future work is to explore this model for representing the distribution
Ac ce p
of promising solutions in the search space for OAS problem. The exact algorithm proposed in [43] efficiently solves a number of closely related problems. Therefore, another possible future work is to extend this algorithm to the OAS problem version considered in this paper and evaluate its performance on the benchmark instances used in this paper. Acknowledgement
Authors would like to place on record their sincere thanks to Dr. S.-W. Lin and Dr. K.-C. Ying for providing the test instances and the detailed instance-by-instance solution values. Authors are also grateful to three anonymous reviewers for their valuable comments and suggestions which helped in improving the quality of this manuscript. References [1] H. Guerrero, G. Kern, How to more effectively accept and refuse orders, Production and Inventory Management 29 (1988) 59–62. Page 39 of 43
[2] J. Herbots, W. Herroelen, R. Leus, Dynamic order acceptance and capacity planning on a single bottleneck resource, Naval Research Logistics 54 (2007) 874–889. [3] C. Oguza, ˘ F. Salmana, Z. Yalc¸in, Order acceptance and scheduling decisions in make-to-order systems, International Journal of Production Economics 125 (2010) 200–211.
ip t
[4] Y.-Y. Xiao, R.-Q. Zhang, Q.-H. Zhao, I. Kaku, Permutation flow shop scheduling with order acceptance and weighted tardiness, Applied Mathematics and Computation 218 (2012) 7911–7926.
cr
[5] H. Stern, Z. Avivi, The selection and scheduling of textile orders with due dates, European Journal
us
of Operational Research 44 (1990) 11–16.
[6] W. Rom, S. Slotnick, Order acceptance using genetic algorithms, Computers & Operations Research
an
36 (2009) 1758–1767.
[7] P. Keskinocak, S. Tayur, Due date management policies, Handbook of Quantitative Supply Chain
M
Analysis, International Series in Operations Research & Management Science 74 (2004) 485–554. [8] S. Slotnick, Order acceptance and scheduling: A taxonomy and review, European Journal of Oper-
d
ational Research 212 (2011) 1–11.
[9] K. Charnsirisakskul, P. Griffin, P. Keskinocak, Order selection and scheduling with leadtime flexi-
te
bility, IIE Transactions 36 (2004) 697–707.
Ac ce p
[10] K. Charnsirisakskul, P. Griffin, P. Keskinocak, Pricing and scheduling decisions with leadtime flexibility, European Journal of Operational Research 171 (2006) 153–169. [11] J. G. B. Yang, Heuristic approaches for solving single resource scheduling problems with jobselection flexibility, Tech. rep., Department of Industrial & Systems Engineering, University of Florida (2003).
[12] F. Nobibon, R. Leus, Exact algorithms for a generalization of the order acceptance and scheduling problem in a single-machine environment, Computers & Operations Research 38 (2011) 367–378. [13] H. Lewis, S. Slotnick, Multi-period job selection: Planning work loads to maximize profit, Computers & Operations Research 29 (2002) 1081–1098. [14] D. Engelsa, D. Kargerb, S. Kolliopoulosc, S. Senguptab, R. Umad, J. Weine, Techniques for scheduling with rejection, Journal of Algorithms 49 (2003) 175191.
Page 40 of 43
[15] V. Gordona, V. Strusevich, Single machine scheduling and due date assignment with positionally dependent processing times, Computers & Operations Research 198 (2009) 57–62. [16] S. Nguyen, M. Zhang, M. Johnston, Enhancing branch-and-bound algorithms for order acceptance and scheduling with genetic programming, in: Genetic Programming, Lecture Notes in Computer
ip t
Science, Vol. 8599, 2014, pp. 124–136. [17] J. Ghosh, Job selection in a heavily loaded shop, Computers & Operations Research 24 (1997) 141–
cr
145.
[18] S. Slotnick, T. Morton, Order acceptance with weighted tardiness, Computers & Operations Re-
us
search 34 (2007) 3029–3042.
[19] I. Lee, C. Sung, Single machine scheduling with outsourcing allowed, International Journal of Pro-
an
duction Economics 111 (2008) 623–634.
[20] B. Cesaret, C. Oguz, ˘ F. Salman, A tabu search algorithm for order acceptance and scheduling, Com-
M
puters & Operations Research 39 (2012) 1197–1205.
[21] C. Akkan, Finite-capacity scheduling-based planning for revenue-based capacity management, Eu-
d
ropean Journal of Operational Research 100 (1997) 170–179.
te
[22] B. Yang, J. Geunes, A single resource scheduling problem with job-selection flexibility, tardiness
Ac ce p
costs and controllable processing times, Computers & Industrial Engineering 53 (2007) 420–432. [23] Y.-W. Chen, Y.-Z. Lu, G.-K. Yang, Hybrid evolutionary algorithm with marriage of genetic algorithm and extremal optimization for production scheduling, The International Journal of Advanced Manufacturing Technology 36 (2008) 959–968. [24] S. Nguyen, A learning and optimizing system for order acceptance and scheduling, The International Journal of Advanced Manufacturing Technology (2016) 1–16. [25] W. Lin, K.-C. Ying, Increasing the total net revenue for single machine order acceptance and scheduling problems using an artificial bee colony algorithm, journal of the Operational Research Society 64 (2013) 293–311. [26] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991. [27] R. Ruiz, T. Stutzle, ¨ A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem, European Journal Operational Research 177 (2007) 2033–2049.
Page 41 of 43
[28] K.-C. Ying, S.-W. Lin, Unrelated parallel machine scheduling with sequence and machinedependent setup times and due date constraints, International Journal of Innovative Computing, Information and Control 8 (2012) 3279–3297. [29] Q. Zhang, J. Sun, E. Tsang, An evolutionary algorithm with guided mutation for the maximum
ip t
clique problem, IEEE Transactions on Evolutionary Computation 9 (2005) 192–200. [30] S. Wang, L. Wang, G. Zhou, Y. Xu, An estimation of distribution algorithm for the flexible job-shop
cr
scheduling problem, in: Advanced Intelligent Computing Theories and Applications: With Aspects of Artificial Intelligence, Lecture Notes in Computer Science, Vol. 6839, Springer Berlin Heidelberg,
us
2012, pp. 9–16.
[31] S. Wang, L. Wang, Y. Xu, M. Liu, An effective estimation of distribution algorithm for the flexible job-
an
shop scheduling problem with fuzzy processing time, International Journal of Production Research 51 (12) (2013) 3778–3793.
M
[32] J. Li, Y. Jiang, Estimation of distribution algorithms for job schedule problem, in: Information and Computing Science, 2009. ICIC ’09. Second International Conference on, Vol. 1, 2009, pp. 7–10.
d
[33] X.-J. He, J.-C. Zeng, S.-D. Xue, L.-F. Wang, An efficient estimation of distribution algorithm for job shop scheduling problem, in: Swarm, Evolutionary, and Memetic Computing, Vol. 6466 of Lecture
te
Notes in Computer Science, Springer-Verlag, 2010, pp. 656–663.
Ac ce p
[34] X. Hao, L. Lin, M. Gen, K. Ohno, Effective estimation of distribution algorithm for stochastic job shop scheduling problem, Procedia Computer Science 20 (2013) 102–107. [35] S.-Y. Wang, L. Wang, M.L., Y. Xu, An effective estimation of distribution algorithm for solving the distributed permutation flow-shop scheduling problem, International Journal of Production Economics 145 (2013) 387–396.
[36] K. Wang, Y. Huang, H. Qin, Fuzzy logic-based hybrid estimation of distribution algorithm for distributed permutation flowshop scheduling problems under machine breakdown, Journal of the operational research society 67 (2015) 68–82. [37] J. Ceberio, E. Irurozki, A. Mendiburu, J. Lozano, A distance-based ranking model estimation of distribution algorithm for the flowshop scheduling problem, IEEE Transactions on Evolutionary Computation 18 (2014) 286–300. [38] M. Flinger, J. Verducci, Distance based ranking models, Journal of the Royal Statistical Society 48 (1986) 359–369. Page 42 of 43
[39] A. Salhi, J. Rodriguez, Q. Zhang, An estimation of distribution algorithm with guided mutation for a complex flow shop scheduling problem, in: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, 2007, pp. 570–576. [40] S.-H. Chen, M.-C. Chen, P.-C. Chang, Q. Zhang, Y.-M. Chen, Guidelines for developing effective es-
ip t
timation of distribution algorithms in solving single machine scheduling problems, Expert Systems with Applications 37 (9) (2010) 6441–6451.
cr
[41] R. P´erez-Rodr´ıguez, S. Jons, ¨ A. Hern´andez-Aguirre, C. Alberto-Ochoa, Simulation optimization for a flexible jobshop scheduling problem using an estimation of distribution algorithm, The Interna-
us
tional Journal of Advanced Manufacturing Technology 73 (1) (2014) 3–21.
[42] M. Hauschild, M. Pelikan, An introduction and survey of estimation of distribution algorithms,
an
Swarm and Evolutionary Computation 1 (3) (2011) 111–128.
[43] S. Tanaka, A unified approach for the scheduling problem with rejection, in: Automation Science
Ac ce p
te
d
M
and Engineering (CASE), 2011 IEEE Conference on, 2011, pp. 369–374.
Page 43 of 43