Single-machine batch delivery scheduling with job release dates, due windows and earliness, tardiness, holding and delivery costs
Fardin Ahmadizar*, Soma Farhadi
Department of Industrial Engineering, University of Kurdistan, Pasdaran Boulevard, Sanandaj, Iran
* Corresponding author. Tel./fax: +98-871-6660073.
[email protected] (F. Ahmadizar); [email protected] (S. Farhadi)
Abstract
This paper deals with a single-machine scheduling problem in which jobs are released at different points in time but delivered to customers in batches. A due window is associated with each job. The objective is to schedule the jobs, to form them into batches and to decide the delivery date of each batch so as to minimize the sum of earliness, tardiness, holding, and delivery costs. A mathematical model of the problem is presented, and a set of dominance properties is established. To solve this NP-hard problem efficiently, a solution method is then proposed by incorporating the dominance properties into an imperialist competitive algorithm. Unforced idleness and the forming of discontinuous batches are allowed in the proposed algorithm. Moreover, the delivery date of a batch may be set later than the completion time of the last job in the batch. Finally, computational experiments are conducted to evaluate the proposed model and solution procedure, and the results are discussed.

Keywords: Scheduling; Single-machine; Batch delivery; Release dates; Due windows; Dominance properties; Imperialist competitive algorithm.
1 Introduction
Although single-machine scheduling problems have been broadly studied with different objective functions in past decades, most of these studies have focused on the simple form of the problem, and less attention has been paid to single-machine batch delivery scheduling problems. Batch delivery is a common characteristic of many real-life systems in which jobs are ultimately delivered to customers in batches, leading to a decrease in the total delivery cost. Batch delivery scheduling problems were first introduced by Cheng and Kahlbacher [1]. They have studied a single-machine batch delivery scheduling problem with the objective of minimizing the sum of the total weighted earliness and delivery costs, where the earliness of a job is defined as the difference between the delivery time of the batch it belongs to and the job completion time. They have shown that the problem is NP-hard in the ordinary sense but polynomially solvable for equal weights. Herrmann and Lee [2] have considered a single-machine scheduling problem where all the jobs have a given restrictive common due date and the objective is to minimize the sum of earliness and tardiness penalties and delivery costs of the tardy jobs; the authors have assumed that all early jobs are delivered in one batch at the due date without any cost, thus referring to holding costs as earliness penalties. They have claimed that the problem is NP-hard, and proposed a pseudo-polynomial dynamic programming algorithm to solve it. Moreover, Cheng et al. [3] have studied single-machine scheduling with batch deliveries to minimize the sum of the batch delivery and earliness penalties, where the earliness of a job is defined as in [1]. They have formulated the problem as a classical parallel-machine scheduling problem; as a result, one can straightforwardly extend the complexity results as well as algorithms for the corresponding parallel-machine problem to this problem.
Wang and Cheng [4] have considered a parallel-machine batch delivery scheduling problem with the objective of minimizing the sum of flow times and delivery cost, where the delivery cost is a non-decreasing function of the number of deliveries, and shown that the problem is strongly NP-hard. They have developed a dynamic programming algorithm for the problem and two polynomial time algorithms for the special cases where the job assignment is predetermined or the job processing times are all identical. A similar problem with a single machine and unequal job weights has been investigated by Ji et al. [5]. They have shown that the problem of minimizing the sum of the total weighted flow time and delivery cost on a single machine is strongly NP-hard, and developed a dynamic programming algorithm for the problem and two polynomial time algorithms for the special cases where the jobs have a linear precedence constraint or the weights are agreeable with the processing times. Moreover, Mazdeh et al. [6] have developed a branch-and-bound algorithm for the same problem with the assumption that the delivery cost is a linear increasing function of the number of deliveries. For the special case in which the maximum number of batches is fixed, they have shown, through experiments carried out with problems up to 20 jobs, that their algorithm is superior to that of Ji et al. [5] in terms of efficiency. Hall and Potts [7] have provided dynamic programming algorithms for a variety of scheduling, batching and delivery problems arising in a supply chain, where the objective is to minimize the overall scheduling and delivery cost, using several classical scheduling objectives. Single-machine scheduling with batch deliveries and job release dates has been investigated by Mazdeh et al. [8]. They have allowed forming both continuous and discontinuous batches, and proposed a branch-and-bound algorithm to minimize the sum of flow times and delivery costs under the assumptions that the release dates are agreeable with the processing times and the jobs for each customer are to be processed in the shortest processing time order. The authors have shown, through experiments carried out with
problems up to 40 jobs, that their algorithm is more efficient than the dynamic programming algorithm of Hall and Potts [7] for solving the problem. Hamidinia et al. [9] have introduced a novel complex single-machine batch delivery scheduling problem with the objective of minimizing the sum of earliness, tardiness, holding, and delivery costs. They have presented a mathematical model of the problem, and proposed a genetic algorithm-based heuristic to solve it. Furthermore, in the literature there are some references where batch delivery scheduling and due date assignment problems are considered. Shabtay [10] has studied single-machine scheduling with batch deliveries and earliness, tardiness, holding, batching and due date assignment costs, where the due dates are controllable. The author has proved that the problem is NP-hard, and presented a polynomial time algorithm for the special cases where the job processing times are all identical or the acceptable lead times are all equal to zero and the holding penalty is less than the tardiness or due date assignment penalty. Yin et al. [11] have studied a single-machine batch delivery scheduling and common due date assignment problem with the option of performing a rate-modifying activity on the machine. Considering the objective of minimizing the sum of earliness, tardiness, holding, due date and delivery costs, they have presented polynomial time algorithms for the special cases where the jobs have equal modifying rates, the processing times are all identical or the jobs and modifying rates are agreeable. Recently, Yin et al. [12] have studied single-machine scheduling with batch delivery and an assignable common due window, where the objective is to minimize the sum of earliness, tardiness, holding, window location, window size, and delivery costs. They have shown that the problem is polynomially solvable by a dynamic programming algorithm under a reasonable assumption on the relationships among the cost parameters.
Although a variety of batch delivery scheduling problems have been investigated in the above-mentioned papers, the researchers have relied on two common assumptions: (1) the delivery date of a batch is equal to the completion time of the last job in the batch, and (2) there is no capacity limitation on the size of a batch and the delivery cost is a non-decreasing function of the number of batch deliveries. This study provides an extension to that of Hamidinia et al. [9] by taking into consideration job release dates and due windows. The problem considered here is a single-machine scheduling problem where jobs are released at different points in time but delivered in batches to their respective customers or to other machines for further processing. Each job must be delivered within its due window; otherwise, a penalty is incurred. It is assumed that the delivery cost for a batch, which is different for each customer, is independent of the number of jobs in the batch. The objective is then to schedule the jobs, to form them into batches and to decide the delivery date of each batch so as to minimize the sum of earliness, tardiness, holding, and delivery costs. The major limitations of the work of Hamidinia et al. [9] concern the following implicit assumptions: (1) continuous batch forming, and (2) non-delay scheduling. The first assumption states that the jobs forming a batch are processed continuously, that is, no job belonging to another customer is processed between them. The second assumption implies that the machine is not kept idle while a job is waiting for processing, that is, unforced idleness is prohibited. Moreover, Hamidinia et al. [9] have simply assumed that the delivery date of a batch equals the completion time of the last job in the batch. In this paper, however, we show that, even when all jobs are released at time zero, it may be advantageous to form discontinuous batches, to have periods of unforced idleness, and to deliver a batch at a time later than the completion time of the last job in the batch. Furthermore, Hamidinia et al. [9] have proposed a mathematical model of their problem, which is clearly a special case of the problem considered here, and claimed that it
cannot be solved by off-the-shelf optimizers even for small test problems. Although they have argued that this failure is due to the problem complexity, we show that the erroneous formulation is the main reason. In addition to presenting an appropriate mathematical model, we establish a set of dominance properties and propose a solution method by incorporating them with an imperialist competitive algorithm (ICA). The proposed solution procedure schedules the jobs, forms them into batches and decides the delivery date of each batch. The rest of the paper is organized as follows. In the next section, the problem is described and formulated. Section 3 is devoted to the derivation of some dominance properties. In Section 4, the proposed solution method is described, followed by Section 5 providing computational results. Finally, Section 6 gives a summary as well as future work.
2 Problem formulation

2.1 Problem description
The problem considered in this paper is a single-machine scheduling problem in which jobs are ultimately delivered to customers in batches, leading to a decrease in the total delivery cost. There are N jobs belonging to F customers that have to be processed by the machine; each customer j has $n_j$ jobs and $N = \sum_j n_j$. The machine is continuously available and can process at most one job at any point of time. Preemption is not allowed, and there is no setup time. Associated with each job i belonging to customer j, which is denoted by job $(i, j)$, is a release date $r_{ij}$ at which it becomes available for processing on the machine, a processing time $p_{ij}$, and a due window $[d_{ij}^l, d_{ij}^u]$, where $d_{ij}^l$ and $d_{ij}^u$ are the earliest and latest due dates, respectively; the job is on time and thus no penalty is incurred if it is delivered to the customer within the due window, while it incurs an earliness penalty if delivered before $d_{ij}^l$ and a tardiness penalty if delivered after $d_{ij}^u$. Distinct due windows represent a generalization of distinct due dates and are relevant in many practical situations because of uncertainty and tolerance [13]. Moreover, associated with each job $(i, j)$ are also a unit tardiness cost $\alpha_{ij}$, a unit earliness cost $\beta_{ij}$, and a unit holding cost $h_{ij}$; the job incurs a holding penalty if it is not delivered to the customer at its completion time. The cost of delivering each batch to customer j is denoted by $D_j$. It is assumed that this cost is independent of the number of jobs in the batch and that there is no capacity limit on each batch delivery. Although this assumption may appear restrictive, it may be reasonable in situations where all the jobs of each customer can be delivered by one truck in a single shipment, so that the delivery cost remains unchanged regardless of whether the truck is fully loaded; the cost clearly depends on the customer site. The jobs of each customer may thus be delivered in one or more batches; each job must clearly wait for all other jobs belonging to the same batch to be completed. The total delivery cost is then a linear increasing function of the number of batch deliveries. The objective of the problem investigated is to schedule the jobs, to form them into batches and to decide the delivery date of each batch so as to minimize the sum of earliness, tardiness, holding, and delivery costs. Several special cases of this problem, such as the single-machine total weighted tardiness scheduling problem [14], have been proved to be strongly NP-hard. Hence, the batch delivery scheduling problem under consideration, which is significantly more complicated, is strongly NP-hard too.
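To make the cost structure concrete, the following sketch (ours, not part of the paper; the data layout and names are illustrative assumptions) evaluates the objective for a given set of completion times, batch assignments and batch delivery dates.

```python
# Illustrative sketch (ours): total cost of a candidate solution.
# customer[k], alpha[k], beta[k], hold[k], d_lo[k], d_hi[k] are per-job data;
# D[j] is the cost of one delivery to customer j.
def total_cost(customer, completion, batch_of, batch_date,
               alpha, beta, hold, d_lo, d_hi, D):
    """batch_of[k] is the (customer-specific) batch index of job k;
    batch_date[(j, b)] is the delivery date of batch b of customer j."""
    cost, used = 0.0, set()
    for k in range(len(completion)):
        key = (customer[k], batch_of[k])
        R, C = batch_date[key], completion[k]        # job k is delivered with its batch
        cost += alpha[k] * max(0.0, R - d_hi[k])     # tardiness
        cost += beta[k] * max(0.0, d_lo[k] - R)      # earliness
        cost += hold[k] * (R - C)                    # holding (R >= C by construction)
        used.add(key)
    return cost + sum(D[j] for j, _ in used)         # one delivery charge per non-empty batch
```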
2.2 Mathematical model
A special case of the problem in which, for each job $(i, j)$, $r_{ij} = 0$ and $d_{ij}^l = d_{ij}^u$ has been investigated by Hamidinia et al. [9]. They have proposed a mathematical model and claimed that, due to the problem complexity, off-the-shelf optimizers such as Lingo are unable to solve it even for simple test problems. As shown in the next section, the work of Hamidinia et al. [9] has limitations in that unforced idleness and the forming of discontinuous batches are not allowed, and the delivery of a batch is forbidden to occur at a time later than the completion time of the last job in the batch. However, even apart from these limitations, we found that their mathematical model suffers from serious flaws: there is no guarantee that each part inside a batch is assigned to at most one job; the third set of constraints has to be revised so that, without adding any auxiliary binary variables, each batch is assigned to at most one customer, in which case the last term in the objective function is redundant and should be omitted; and finally, instead of imposing the fourth and sixth types of constraints for each part inside a batch, the summations in these constraints have to be taken over all parts inside a batch. Therefore, in the following an appropriate mathematical model is presented which, in addition to overcoming the above-mentioned limitations, can be solved by off-the-shelf optimizers.

To formulate the problem under consideration, the following decision variables are introduced:

$C_{ij}$: completion time of job $(i, j)$
$R_{ij}$: delivery date of job $(i, j)$
$R'_{bj}$: delivery date of the bth batch of customer j ($1 \le b \le n_j$)
$x_{ijb}$: 1 if job $(i, j)$ is in the bth batch of customer j, and 0 otherwise
$y_{jb}$: 1 if the bth batch of customer j contains at least one job, and 0 otherwise
$z_{iji'j'}$: 1 if job $(i, j)$ is scheduled before job $(i', j')$, and 0 otherwise

Let M be a big number. The problem can then be formulated as the following mathematical model:
Min $\sum_j \sum_i \alpha_{ij} \max\{0, R_{ij} - d_{ij}^u\} + \sum_j \sum_i \beta_{ij} \max\{0, d_{ij}^l - R_{ij}\} + \sum_j \sum_i h_{ij} (R_{ij} - C_{ij}) + \sum_j \sum_b y_{jb} D_j$   (1)

Subject to:

$C_{ij} \ge r_{ij} + p_{ij}$,   $j = 1, \ldots, F$, $i = 1, \ldots, n_j$   (2)

$C_{i'j'} - C_{ij} \ge p_{i'j'} - M(1 - z_{iji'j'})$ and $C_{ij} - C_{i'j'} \ge p_{ij} - M z_{iji'j'}$,   $j, j' = 1, \ldots, F$, $i = 1, \ldots, n_j$, $i' = 1, \ldots, n_{j'}$, $(i, j) \ne (i', j')$   (3)

$\sum_b x_{ijb} = 1$,   $j = 1, \ldots, F$, $i = 1, \ldots, n_j$   (4)

$\sum_i x_{ijb} + M(1 - y_{jb}) \ge 1$ and $M y_{jb} - \sum_i x_{ijb} \ge 0$,   $j = 1, \ldots, F$, $b = 1, \ldots, n_j$   (5)

$R'_{bj} \ge x_{ijb} C_{ij}$,   $j = 1, \ldots, F$, $i = 1, \ldots, n_j$, $b = 1, \ldots, n_j$   (6)

$R_{ij} = \sum_b x_{ijb} R'_{bj}$,   $j = 1, \ldots, F$, $i = 1, \ldots, n_j$   (7)

$x_{ijb}, y_{jb}, z_{iji'j'} \in \{0, 1\}$,   $j, j' = 1, \ldots, F$, $i, b = 1, \ldots, n_j$, $i' = 1, \ldots, n_{j'}$, $(i, j) \ne (i', j')$   (8)
The objective function (1) minimizes the total cost, which includes, respectively, the sum of tardiness costs, the sum of earliness costs, the sum of holding costs, and the sum of delivery costs. Constraints (2) ensure that the processing of each job cannot start before its release date. Constraints (3) make sure that the processing of each job may only start after the processing of the previously scheduled jobs has been completed. Constraints (4) ensure that each job is assigned to exactly one batch. As the initial number of batches of each customer j equals $n_j$, some batches may remain empty; accordingly, constraints (5) ensure that no empty batch contributes to the total delivery cost, that is, to the last term in the objective function. Constraints (6) make sure that each batch may only be delivered to the corresponding customer after the processing of the last job in the batch has been completed; that is, the batch may be held in the manufacturer's store for some time before being delivered to its destination. Constraints (7) are used to calculate the delivery date of each job, which is clearly equal to the delivery date of the batch containing the job. Finally, constraints (8) specify that the decision variables x, y and z are binary. As can be seen, constraints (2) and (3) are used to schedule the jobs, constraints (4) and (5) to form the jobs belonging to each customer j into one or at most $n_j$ batches, and constraints (6) and (7) to decide the job and batch delivery dates.

The above mixed integer non-linear program can simply be converted to a mixed integer linear programming (MILP) model by linearizing the objective function as well as constraints (6) and (7). After linearization, the model has $10\sum_j n_j^2 + 3N^2 + 17N + 2$ constraints, $2\sum_j n_j^2 + 5N$ continuous variables and $\sum_j n_j^2 + N^2$ binary variables. As a result, for every fixed number of jobs N, the number of constraints and variables increases with increasing $\sum_j n_j^2$. It is easy to see that $\sum_j n_j^2 < (\sum_j n_j)^2 = N^2$ for all $F > 1$ and $\sum_j n_j^2 = N^2$ when $F = 1$. Hence, if all the N jobs belong to just one customer, the resulting MILP model has the maximum number of constraints and variables and thus is harder to solve. Nevertheless, when the N jobs belong to two or more customers, the number of constraints and variables may even increase with increasing F. For example, let us consider two cases, both with 10 jobs but with different numbers of customers: in the first case, there are two customers each with 5 jobs, while in the other case there are three customers with 7, 1, and 2 jobs; then, the model corresponding to the latter case clearly has more constraints and variables.
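As an illustration of the linearization mentioned above (this sketch is ours; the paper does not spell out the transformation), the nonlinear terms can be removed in the standard big-M fashion. The max terms in (1) are replaced by nonnegative tardiness and earliness variables $T_{ij}$ and $E_{ij}$ with

$T_{ij} \ge R_{ij} - d_{ij}^u$,   $E_{ij} \ge d_{ij}^l - R_{ij}$,   $T_{ij}, E_{ij} \ge 0$,

so that the objective charges $\alpha_{ij} T_{ij} + \beta_{ij} E_{ij}$ instead. Constraint (6) can be written as $R'_{bj} \ge C_{ij} - M(1 - x_{ijb})$, and each bilinear product $x_{ijb} R'_{bj}$ in (7) can be replaced by an auxiliary variable $w_{ijb} \ge 0$ with

$w_{ijb} \le M x_{ijb}$,   $w_{ijb} \le R'_{bj}$,   $w_{ijb} \ge R'_{bj} - M(1 - x_{ijb})$,   $R_{ij} = \sum_b w_{ijb}$.

Any linearization of this type is equivalent to (1)-(8) provided M is chosen sufficiently large; the exact counts reported above of course depend on the particular linearization adopted.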
3 Dominance properties
In this section, three lemmas are first introduced and then a number of dominance properties providing necessary conditions for any given solution to be optimal are derived.
From the objective function stated in (1), we obtain immediately the following corollary.

Corollary 1 The objective function (1) is not necessarily non-decreasing in the job completion times and delivery dates.

Lemma 1 The optimal solution is not necessarily non-delay.

Proof Given a non-delay solution, if any idle time is inserted before processing a job, the job completion time and possibly its delivery date increase; clearly, the completion time and delivery date of some other jobs may increase too. From Corollary 1, this may improve the solution; that is, it may be advantageous to keep the machine idle while a job is waiting for processing, and so unforced idleness may occur in the optimal solution.

Lemma 2 In the optimal solution, the jobs forming a batch are not necessarily processed continuously.

Proof Given a solution in which the jobs forming every batch are processed continuously, if a job belonging to a batch is instead processed later, between the jobs of one of the following batches (a batch of the same customer or of another customer), the job completion time, its delivery date and the delivery date of all other jobs of the same batch increase; clearly, the completion time and delivery date of some other jobs may increase too. From Corollary 1, this may improve the solution and, consequently, discontinuous batches may occur in the optimal solution.

Lemma 3 In the optimal solution, the delivery of a batch does not necessarily occur at the completion time of the last job in the batch.

Proof Given a solution in which the delivery date of every batch is equal to the completion time of the last job in the batch, if the delivery of a batch is postponed, the delivery dates of all its jobs clearly increase. From Corollary 1, this may improve the solution and, consequently, in the optimal solution, completed batches may be held in the manufacturer's store for a while before being delivered to their destinations.

It is obvious that the above lemmas remain valid for the special case in which $r_{ij} = 0$ and $d_{ij}^l = d_{ij}^u$ for each job $(i, j)$ (i.e., for the problem investigated by Hamidinia et al. [9]).
Property 1 Given a solution in which, after the processing of a job $(i, j)$, the machine is idle for a time period $IT$, if $R_{ij} \ge C_{ij} + IT'$, where $0 < IT' \le IT$, the processing of job $(i, j)$ should be postponed to complete at $C_{ij} + IT'$.

Proof Suppose that $S_1$ denotes a solution in which, after the processing of job $(i, j)$, the machine is idle for a time period $IT$ and $R_{ij} > C_{ij}$. Let $IT'$ be the maximum period of time, but not longer than $IT$, for which $R_{ij} \ge C_{ij} + IT'$. Assume that $S_2$ denotes the same solution in which the processing of job $(i, j)$ is postponed to complete at $C_{ij} + IT'$; under $S_2$, the delivery date of job $(i, j)$ is still $R_{ij}$. Clearly, the two solutions differ only in the holding cost for job $(i, j)$; under $S_1$, this cost is equal to $h_{ij}(R_{ij} - C_{ij})$, while under $S_2$ it is equal to $h_{ij}(R_{ij} - C_{ij} - IT')$. Thus, $S_2$ dominates $S_1$, and the proof is complete.

Since, with the exception of the last job in a batch, for the other jobs $(i, j)$ belonging to the batch we necessarily have $R_{ij} > C_{ij}$, the following corollary is obtained immediately. (Recall that for the last job $(i, j)$ in a batch we may have $R_{ij} = C_{ij}$.)

Corollary 2 In any optimal solution, the machine is not idle after the processing of a job, except for the last jobs in the batches.

Property 2 Given a solution in which a job $(i, j)$ is processed immediately after another job $(i', j)$ belonging to the same batch, if $r_{ij} \le C_{i'j} - p_{i'j}$ and $h_{i'j} p_{ij} > h_{ij} p_{i'j}$, the two jobs should be interchanged.

Proof Suppose that $S_1$ denotes a solution in which two adjacent jobs $(i', j)$ and $(i, j)$, with $(i, j)$ following $(i', j)$, belong to the same batch; from Property 1, the machine is not idle between the processing of these two jobs. Assume that $r_{ij} \le C_{i'j} - p_{i'j}$ (i.e., when the processing of job $(i', j)$ starts, job $(i, j)$ is available) and $h_{i'j} p_{ij} > h_{ij} p_{i'j}$. Now, construct a new solution $S_2$ in which jobs $(i', j)$ and $(i, j)$ are interchanged and all other jobs finish at the same time as in $S_1$; under $S_2$, the delivery date of these two jobs is still $R_{ij} = R_{i'j}$, but their completion times change. Clearly, the two solutions differ only in the holding cost for jobs $(i', j)$ and $(i, j)$. Let $ST$ be the point in time at which job $(i', j)$ begins in $S_1$ and at which job $(i, j)$ begins in $S_2$. In addition, let $H(S_1)$ and $H(S_2)$ denote the holding costs for jobs $(i', j)$ and $(i, j)$ in $S_1$ and $S_2$, respectively. Then,

$H(S_1) = h_{i'j}(R_{i'j} - ST - p_{i'j}) + h_{ij}(R_{ij} - ST - p_{i'j} - p_{ij})$,   (9)

$H(S_2) = h_{ij}(R_{ij} - ST - p_{ij}) + h_{i'j}(R_{i'j} - ST - p_{ij} - p_{i'j})$.   (10)

Taking the difference of (9) and (10), we obtain $H(S_1) - H(S_2) = h_{i'j} p_{ij} - h_{ij} p_{i'j} > 0$. Thus, $S_2$ dominates $S_1$, and the proof is complete.

Property 3 Given a solution in which a job $(i, j)$ belonging to a batch b of customer j is early, if for another batch $b'$ of the same customer $d_{ij}^l \le R'_{b'j} \le d_{ij}^u$ and $\beta_{ij}(d_{ij}^l - R'_{bj}) > h_{ij}(R'_{b'j} - R'_{bj})$, job $(i, j)$ should be included in batch $b'$.

Proof Suppose that $S_1$ denotes a solution in which job $(i, j)$ belonging to batch b of customer j is early, i.e., $R_{ij} = R'_{bj} < d_{ij}^l$. Assume that there is another batch $b'$ of the same customer such that $d_{ij}^l \le R'_{b'j} \le d_{ij}^u$ (so batch $b'$ is delivered after batch b) and $\beta_{ij}(d_{ij}^l - R'_{bj}) > h_{ij}(R'_{b'j} - R'_{bj})$. Now, construct a new solution $S_2$ in which job $(i, j)$ is included in batch $b'$; under $S_2$, the job is on time and its delivery date is now equal to $R'_{b'j}$. Clearly, the two solutions differ only in the earliness and holding costs for job $(i, j)$. Denoting by $EH(S_1)$ and $EH(S_2)$ these costs in $S_1$ and $S_2$, respectively, we have

$EH(S_1) = \beta_{ij}(d_{ij}^l - R'_{bj}) + h_{ij}(R'_{bj} - C_{ij})$,   (11)

$EH(S_2) = h_{ij}(R'_{b'j} - C_{ij})$.   (12)

Taking the difference of (11) and (12), we obtain $EH(S_1) - EH(S_2) = \beta_{ij}(d_{ij}^l - R'_{bj}) - h_{ij}(R'_{b'j} - R'_{bj}) > 0$. Thus, $S_2$ dominates $S_1$, and the proof is complete.

Property 4 Given a solution in which a job $(i, j)$ belonging to a batch b of customer j is tardy, if for another batch $b'$ of the same customer $R'_{b'j} \ge C_{ij}$ and $d_{ij}^l \le R'_{b'j} \le d_{ij}^u$, job $(i, j)$ should be included in batch $b'$.

Proof Consider a solution in which job $(i, j)$ belonging to batch b of customer j is tardy, i.e., $R_{ij} = R'_{bj} > d_{ij}^u$. Assume that there is another batch $b'$ of the same customer such that $R'_{b'j} \ge C_{ij}$ and $d_{ij}^l \le R'_{b'j} \le d_{ij}^u$ (so batch $b'$ is delivered before batch b). Clearly, if job $(i, j)$ is included in batch $b'$ so as to be delivered earlier, not only is the job on time, but its corresponding holding cost also decreases.

Property 5 Given a solution in which every job $(i, j)$ in a batch b of customer j is early or on time, if $\sum_{i \in e_{bj}} \beta_{ij} > \sum_{i \in a_{bj}} h_{ij}$, where $a_{bj}$ is the set of all jobs in batch b and $e_{bj}$ is the set of its early jobs, the delivery of batch b should be postponed until $\min_{i \in a_{bj}} \{d_{ij}^u\}$.

Proof Suppose that $S_1$ denotes a solution in which every job $(i, j)$ in the bth batch of customer j is early or on time, i.e., $R_{ij} = R'_{bj} \le d_{ij}^u$, and so $R'_{bj} \le \min_{i \in a_{bj}} \{d_{ij}^u\}$. Assume that $\sum_{i \in e_{bj}} \beta_{ij} > \sum_{i \in a_{bj}} h_{ij}$ and $R'_{bj} < \min_{i \in a_{bj}} \{d_{ij}^u\}$. Now, construct a new solution $S_2$ in which the delivery of batch b is postponed until $R'_{bj} + TP$, where $TP$ is the time period for which $R'_{bj} + TP = \min_{i \in a_{bj}} \{d_{ij}^u\}$; under $S_2$, every job $(i, j)$ in the batch is still early or on time and its delivery date is now equal to $R_{ij} + TP$. Clearly, the two solutions differ only in the earliness and holding costs for the jobs belonging to batch b. Let $EH(S_1)$ and $EH(S_2)$ denote these costs in $S_1$ and $S_2$, respectively. Then,

$EH(S_1) = \sum_{i \in e_{bj}} \beta_{ij}(d_{ij}^l - R_{ij}) + \sum_{i \in a_{bj}} h_{ij}(R_{ij} - C_{ij})$,   (13)

$EH(S_2) = \sum_{i \in e_{bj}} \beta_{ij}(d_{ij}^l - R_{ij} - TP) + \sum_{i \in a_{bj}} h_{ij}(R_{ij} + TP - C_{ij})$.   (14)

In view of the fact that under $S_2$ the number of early jobs belonging to batch b may decrease, $EH(S_2)$ may be even less than the value stated in (14). Taking the difference of (13) and (14), we obtain

$EH(S_1) - EH(S_2) = \sum_{i \in e_{bj}} \beta_{ij} TP - \sum_{i \in a_{bj}} h_{ij} TP$.

Since $TP > 0$ and $\sum_{i \in e_{bj}} \beta_{ij} > \sum_{i \in a_{bj}} h_{ij}$, $S_2$ dominates $S_1$, and the proof is complete.
Property 6 Given a solution in which every job in a batch is tardy or on time, the delivery of the batch should occur at the completion time of its last job.

Proof Consider a solution in which every job in a batch is tardy or on time. Clearly, each job belonging to the batch incurs holding and/or tardiness costs, which increase if the job delivery date increases. Thus, the delivery of the batch should not be postponed.
4 Proposed solution procedure
In order to efficiently solve the problem under consideration, a hybrid algorithm is proposed by incorporating the dominance properties into an ICA. Such an integrating framework has previously been applied to solve other scheduling problems (see, e.g., [15-17]). The ICA, first introduced by Atashpaz-Gargari and Lucas in 2007 [18], is a population-based algorithm in the evolutionary computation field mimicking human socio-political evolution. The ICA was originally developed for continuous optimization problems; however, researchers have recently applied ICAs to successfully solve a range of optimization problems (see, e.g., [19-25]).

As introduced by Atashpaz-Gargari and Lucas [18], ICAs start with an initial population composed of many candidate solutions to a specific problem, each of which is called a country. Random generation or heuristic procedures can be used to initialize the population. Some of the most powerful countries in the initial population are chosen to be the imperialists, while the rest of the countries form the colonies of these imperialists. The colonies are then distributed among the imperialists in proportion to their power, where the power of a country is determined on the basis of its objective value: the lower the country cost, the higher its power. An imperialist and its colonies form an empire whose total power is a function of both the power of the imperialist country and that of its colonies. Over time, as the search progresses, each imperialist tries to assimilate its colonies, modeled by moving all the colonies in the empire towards the imperialist, thus forcing the search towards more promising areas of the search space. In addition, a social and political revolution in a country causes changes in the country's characteristics without it being assimilated by an imperialist; in an ICA, the revolution, devised as a diversity preservation procedure, is modeled by randomly moving some of the colonies to produce new solutions. If in an empire a colony reaches a position better than the imperialist, their positions are exchanged and the algorithm then continues with the imperialist in the new position (i.e., the colonies of this empire will then move towards the new imperialist). Moreover, all the empires compete to take possession of colonies from each other, modeled by picking some (usually one) of the weakest colonies of the weakest empire and making a competition among the empires to possess them. Through the imperialistic competition, powerless empires gradually lose their power and ultimately collapse. This process can be iterated until a situation with just one empire, hopefully holding the optimal solution, is reached. Finally, it is noted that an ICA makes use of many stochastic components, like any other metaheuristic.

In order to improve the ICA's search capability, the derived dominance properties are employed as a local search strategy enhancing the quality of a solution. They play an important role in generating the initial population as well as in enhancing the quality of solutions generated by the ICA. The dominance properties, which are implemented in the same order as presented in Section 3, test whether a solution satisfies the necessary optimality conditions; whenever a property applies, the solution is modified accordingly. From a computational time point of view, it is better to apply Property 1 in the backward direction (i.e., by starting from the last job in the schedule and going backward to the first one); an illustrative sketch of this improvement step is given after the step list below. Moreover, if two adjacent jobs $(i', j)$ and $(i, j)$, with $(i, j)$ following $(i', j)$, are interchanged through Property 2 and the job immediately before $(i, j)$ in the new solution belongs to the same batch, interchanging these two jobs via the property is then considered as well. Implementing the dominance properties may result in a solution better than the original one, even though the resulting solution may not be optimal.

The general structure of the proposed ICA, called ICADP, is then as follows (the details are presented in the remainder of this section):

Step 1. (Initialization) Randomly generate a population of feasible solutions, improve them by implementing the dominance properties, and then initialize the empires.
Step 2. (Assimilation) Move the colonies towards their relevant imperialists.
Step 3. (Revolution) Implement the revolution process.
Step 4. (Improvement) Implement the dominance properties. If a colony is better than its imperialist, exchange their positions.
Step 5. (Imperialistic competition) Make the competition among the empires.
Step 6. (Elimination) Eliminate any empire with no colony.
Step 7. (Termination) Repeat steps 2 to 6 until the termination criterion is satisfied.
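As a concrete illustration of the improvement step (Step 4), the sketch below (ours; the data layout and function names are assumptions, not the authors' code) applies Property 1 in the backward direction and a conservative reading of Property 6.

```python
# Illustrative sketch (ours): two of the dominance properties as repair moves.
def property_1_backward(order, proc, completion, delivery):
    """Property 1: scan the processing order backwards and delay each job into the
    idle time that follows it, never beyond its delivery date (cuts holding cost)."""
    for pos in range(len(order) - 1, -1, -1):
        k = order[pos]
        if pos + 1 < len(order):
            nxt = order[pos + 1]
            next_start = completion[nxt] - proc[nxt]   # the idle gap ends here
        else:
            next_start = float("inf")                  # last job on the machine
        new_c = min(delivery[k], next_start)
        if new_c > completion[k]:
            completion[k] = new_c                      # saves h_ij * shift in holding cost

def property_6_conservative(batch_jobs, d_lo, completion, batch_date):
    """Property 6 (conservative reading): if delivering a batch at the completion
    time of its last job would leave no job early, deliver it exactly then."""
    for key, members in batch_jobs.items():
        finish = max(completion[k] for k in members)
        if finish < batch_date[key] and all(finish >= d_lo[k] for k in members):
            batch_date[key] = finish
```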
4.1 Initialization
4.1.1 Solution representation
The solution representation, mapping solution characteristics into a string of symbols, is the first step in designing an ICA for a particular problem. It sets up a bridge between the original problem space and the space searched by the algorithm. Defining an appropriate solution representation strategy is an important issue that affects the algorithm performance. As mentioned earlier, a solution to the problem under consideration is completely characterized by a schedule of the jobs, a partition of them into batches, and a choice of the job (and batch) delivery dates. Accordingly, to describe and manipulate a solution, which in the ICA terminology is called a country, a matrix with 4 rows and N columns is employed in ICADP. Assume that all the jobs are numbered from 1 to N, considering first the jobs of customer 1, then those of customer 2, and so on. The first row of the matrix is then a permutation of the numbers from 1 to N, indicating a processing order of the jobs. As unforced idleness of the machine is allowed, a job sequence alone cannot represent a schedule, and consequently, the second row of the matrix, a string of real numbers, is employed to determine the job completion times. Clearly, for a solution to be feasible, the schedule represented by the first two rows must satisfy constraints (2) and (3).
It should be noted here that this type of schedule representation gives the algorithm the opportunity to efficiently search the solution space: the processing order of the jobs will change through changes in the entries in the first or even the second row, while it will also be possible to complete some jobs at other points in time, without changing the order of processing on the machine, through changes in the entries in the second row. Moreover, the third row of the matrix, a string of integers (with repetition allowed), is used to assign the jobs of each customer j to one or at most $n_j$ batches so as to satisfy constraints (4) and (5). Finally, the last row, a string of real numbers, is employed to determine the job (and batch) delivery dates; for a solution to be feasible, the delivery schedule represented by this row must satisfy constraints (6) and (7). It is noted that every element of the second, third, and last rows of the matrix is attached to the job in the same column (in the first row). As an illustrative example, consider a simple problem instance with two customers, where customer 1 has three jobs and customer 2 five jobs. The jobs of customer 1, numbered from 1 to 3, may be delivered in at most 3 batches, and those of customer 2, numbered from 4 to 8, in at most 5 batches. Fig. 1 then gives an example of a solution representation for this problem instance. From the first two rows of the matrix shown in Fig. 1, the jobs are appended to the schedule: job 3, the third job of customer 1, is processed first and completed at time 63, and then job 5, the second job of customer 2, is processed and completed at time 151.2, and so on. From the third row, the jobs of each customer having the same index are assigned to the same batch; e.g., the third and first jobs of customer 1 have index '3' and form the first batch of the customer, while the second job of this customer, having index '1', itself forms the second batch (due to the absence of index '2', the jobs of this customer are assigned to 2 batches). Accordingly, the jobs of customer 2 are delivered in 3 batches. As seen, batch indices in the third row are cardinal and not ordinal numbers. Finally, the job (and batch) delivery dates are set according to the last row.
-- Fig. 1 --
4.1.2 Generation of initial population
Each solution of the initial population is generated randomly and then improved by implementing the dominance properties. The following procedure is exploited to randomly produce a feasible solution. The entries in the first row of the given matrix are first selected by generating a random permutation of the numbers from 1 to N. To choose the entries in the second row, starting from the first job in the first row, the completion time of each job $(i, j)$ is generated from a uniform distribution between $\max\{FT, r_{ij}\} + p_{ij}$ and $UIF \times (\max\{FT, r_{ij}\} + p_{ij})$, where FT is the time at which the machine is free to start job $(i, j)$ and UIF is a factor allowing unforced idleness (indeed, $\max\{FT, r_{ij}\}$ is the earliest time at which job $(i, j)$ can start its processing). Evidently, the schedule represented by the first two rows then satisfies constraints (2) and (3). The entries in the third row are set by picking, for each job $(i, j)$, a random integer from 1 to $n_j$. Considering the decoding scheme described in the last paragraph of the previous subsection, the assignment of the jobs to batches clearly satisfies constraints (4) and (5). Furthermore, to choose the entries in the last row, the delivery date of each batch b of customer j is first generated from a uniform distribution between $C_{last}^{bj}$ and $DF \times C_{last}^{bj}$, where $C_{last}^{bj}$ is the completion time of the last job in the batch and DF is a factor allowing a completed batch to be held in the manufacturer's store for a while before being delivered to its destination; then, the delivery date of every job belonging to this batch is set equal to the generated delivery date. Obviously, the delivery schedule satisfies constraints (6) and (7).
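A minimal sketch of this random construction (ours; names and data layout are assumptions) might look as follows.

```python
import random

def random_solution(customer, release, proc, n_j, UIF=1.1, DF=1.1):
    """Illustrative sketch (ours) of the generation procedure of Section 4.1.2.
    customer[k], release[k], proc[k] describe job k; n_j[j] is the number of jobs
    of customer j. Returns the four rows of the representation matrix."""
    n = len(proc)
    order = random.sample(range(n), n)                        # row 1: processing order
    completion, free = [0.0] * n, 0.0                         # row 2: completion times
    for k in order:
        earliest = max(free, release[k]) + proc[k]            # earliest possible completion
        completion[k] = random.uniform(earliest, UIF * earliest)   # allows unforced idleness
        free = completion[k]
    batch_of = [random.randint(1, n_j[customer[k]]) for k in range(n)]   # row 3: batch index
    members = {}
    for k in range(n):
        members.setdefault((customer[k], batch_of[k]), []).append(k)
    delivery = [0.0] * n                                      # row 4: delivery dates
    for key, ks in members.items():
        last = max(completion[k] for k in ks)
        date = random.uniform(last, DF * last)                # batch may be held before delivery
        for k in ks:
            delivery[k] = date
    return order, completion, batch_of, delivery
```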
4.1.3 Initialization of the empires
Let $Size_p$ be the size of the initial population. A number, say $Size_i$, of the most powerful countries (solutions) in the population are selected as imperialists. The rest of the countries are the colonies (with total size $Size_c$) of these imperialists and are distributed among them based on their power. Let $OV_S$ denote the objective value of a solution S, that is, the sum of earliness, tardiness, holding, and delivery costs as stated in (1). To distribute the colonies among the imperialists, the normalized objective value of each imperialist imp is calculated as follows:

$OV'_{imp} = OV_{max} - OV_{imp}$   (15)

where $OV_{max}$ is the maximum objective value among all imperialists. The normalized power of imperialist imp is then estimated by

$POW_{imp} = OV'_{imp} / \sum_{q=1}^{Size_i} OV'_q$   (16)

Having the normalized power of all imperialists, the initial colonies are distributed among them in proportion to their normalized power as follows:

$NC_{imp} = \mathrm{round}(POW_{imp} \times Size_c)$   (17)

A number $NC_{imp}$ of the colonies are randomly chosen and assigned to empire imp; since the equation $\sum_{q=1}^{Size_i} NC_q = Size_c$ has to be satisfied, the last empire of course takes possession of the remaining colonies. As a result, empires with greater power, i.e., with a lower objective function value, have a larger number of colonies.
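The following sketch (ours; the empire data structure is an assumption) shows how Eqs. (15)-(17) could be used to form the initial empires.

```python
import random

def init_empires(costs, size_i):
    """Illustrative sketch (ours) of Eqs. (15)-(17): split a population with
    objective values `costs` into imperialists and their colonies."""
    idx = sorted(range(len(costs)), key=lambda s: costs[s])    # lower cost = more power
    imperialists, colonies = idx[:size_i], idx[size_i:]
    ov_max = max(costs[s] for s in imperialists)
    ov_norm = [ov_max - costs[s] for s in imperialists]        # Eq. (15)
    total = sum(ov_norm) or 1.0
    n_col = [round(v / total * len(colonies)) for v in ov_norm]   # Eqs. (16)-(17)
    n_col[-1] = len(colonies) - sum(n_col[:-1])                # remainder goes to the last empire
    random.shuffle(colonies)
    empires, start = [], 0
    for imp, k in zip(imperialists, n_col):
        empires.append({"imp": imp, "colonies": colonies[start:start + k]})
        start += k
    return empires
```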
4.2 Assimilation
Each imperialist tries to absorb its colonies; the aim of this process, called assimilation, is to assimilate the socio-political characteristics of the colonies. In ICAs, the assimilation is modeled by moving all the colonies in an empire towards its imperialist, thus evolving weak countries. As the imperialists are the best solutions found so far, in addition to having a role in diversification due to the generation of new candidate solutions, the assimilation has a role in the intensification of the search. Given a colony and its relevant imperialist, the assimilating operator adopted in this paper, which always generates a feasible solution, is as follows. One of the first three rows of the matrix used as the representation scheme is first selected at random, and then one of the following policies is implemented depending on the selected row.
• Assimilation policy 1
If the first row, i.e., the job permutation, is chosen, the assimilating operator developed by Shokrollahpour et al. [20] for a job permutation is applied. One of the jobs in the imperialist array is randomly selected. That job is found in the colony's array and shifted until it reaches the same position as in the imperialist array; the other jobs are shifted accordingly. Finally, the job on the right-hand side of the selected job in the imperialist array is put in the colony's array at the same position by a swapping process. An example of this procedure is demonstrated in Fig. 2 using the problem instance given in the previous subsection; let job 7 be the job selected at random.

-- Fig. 2 --

After applying the above operator, the entries in the other rows of the assimilated colony are reset. For every job, its batch index in the old colony is retained, while its corresponding completion time and delivery date may no longer be feasible. Therefore, considering the new job permutation, they are chosen in the same way as in generating the initial population; in addition to resulting in a feasible solution, this may allow unforced idleness as well as the delivery of some completed batches to be postponed.

• Assimilation policy 2
If the second row, i.e., the job completion times, is chosen, an assimilating operator derived from the original ICA suggested for continuous optimization problems is applied. Let $C_{ij}^{imp}$ and $C_{ij}^{col}$ be the completion times of job $(i, j)$ in the imperialist and the colony, respectively. For every job $(i, j)$, a random number $\delta_{ij}$ is picked from a uniform distribution on $[0, \tau \times dis_{ij}]$, where $dis_{ij}$ is the distance between the colony and the imperialist along dimension ij, that is, $dis_{ij} = |C_{ij}^{imp} - C_{ij}^{col}|$, and $\tau$ is a parameter greater than 1 causing more areas around the imperialist to be searched. Then, the completion time of job $(i, j)$ in the assimilated colony is set to $C_{ij}^{col} + \delta_{ij}$ if $C_{ij}^{imp} - C_{ij}^{col} \ge 0$, and to $C_{ij}^{col} - \delta_{ij}$ otherwise. An example of this procedure is demonstrated in Fig. 3 using a two-dimensional optimization problem.

-- Fig. 3 --

After applying the above operator, the jobs in the first row of the assimilated colony are rearranged in non-decreasing order of their new completion times, where ties are broken at random; clearly, the new job permutation may be the initial one. Nevertheless, the new job completion times are not necessarily feasible, and consequently, starting from the first job in the first row, if the completion time of a job is smaller than the earliest time at which the job can be completed, it is set to that time. For every job, its batch index in the old colony is retained, while its corresponding delivery date may no longer be feasible. In the case where the job permutation in the assimilated colony is the same as in the old colony, if the delivery date of a job is smaller than the completion time of the last job in the corresponding batch, it is changed to that time; otherwise, the old date is retained. However, in the other case, in which the processing order of the jobs has changed through the changes in the entries in the second row, the delivery dates are all chosen, considering the new job permutation, in the same way as in generating the initial population.
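A compact sketch of this continuous assimilation move (ours; the symbol and parameter names follow the reconstruction above and are not the authors' code):

```python
import random

def assimilate_completions(col, imp, tau=2.0):
    """Illustrative sketch (ours) of assimilation policy 2: pull the colony's
    completion times towards the imperialist's; tau > 1 lets the move overshoot."""
    new = []
    for c_col, c_imp in zip(col, imp):
        step = random.uniform(0.0, tau * abs(c_imp - c_col))   # random step length
        new.append(c_col + step if c_imp >= c_col else c_col - step)
    # the caller then re-sorts the job permutation by the new completion times and
    # repairs infeasible completion times and delivery dates, as described above
    return new
```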
• Assimilation policy 3
If the third row, i.e., the batch indices, is chosen, an assimilating operator is applied as follows. One of the customers is selected at random. Let customer j be the selected customer; then, for each batch of customer j in the imperialist, with probability $P_{bi}$ all the jobs belonging to that batch are reassigned in the colony to one batch; this can simply be achieved by setting the batch indices of these jobs to an unused integer between 1 and $n_j$. Applying this operator only affects the delivery dates of some or all of the jobs belonging to the selected customer, and so the corresponding entries in the last row of the assimilated colony are reset; for each newly formed batch, as well as each of the other batches whose last job has been removed, the job delivery dates are chosen in the same way as in generating the initial population.

It should be noted here that an assimilating operator may not be developed to efficiently apply to the last row of the colony, i.e., the job delivery dates, since the batches in the imperialist may differ considerably from those in the colony. However, as in all three above cases changes are made in the entries in the last row of the colony, there may be no strong need for applying such an operator to the job delivery dates.
4.3 Revolution
In a country, a social and political revolution may take place, causing sudden changes in its characteristics without it being assimilated by its relevant imperialist. In ICAs, the revolution is modeled by randomly moving some of the colonies to generate new solutions in as yet unvisited regions of the search space, in order to keep the population diverse. After the assimilation process is applied to a colony, the revolution takes place with probability $P_{rev}$, referred to as the revolution rate. To ensure that good solutions are not distorted too much, the revolution rate is set to a small value. The revolution operator, which always generates a feasible solution, is then as follows. One of the rows of the matrix is selected at random, and then one of the following policies is implemented depending on the row selected.

• Revolution policy 1
If the first row is chosen, one of the jobs is randomly selected and inserted into another randomly chosen position in the sequence. Then, starting from the first job whose position has been changed, the job completion times and delivery dates are chosen in the same way as in generating the initial population; obviously, the delivery date of any other job in the first positions of the sequence may change as well.

• Revolution policy 2
If the second row is chosen, an operator derived from the genetic algorithms suggested for continuous optimization problems is applied: an amount taken from a normal distribution with zero mean and standard deviation $\sigma_{CT}$ is added to the completion time of a job, where $\sigma_{CT}$ is a parameter controlling the amount of change. Then, the jobs in the first row are rearranged in non-decreasing order of their new completion times, where ties are broken at random; clearly, the resulting permutation may be the initial one. As the new job completion times may not be feasible, starting from the first job in the sequence, if the completion time of a job is smaller than the earliest time at which the job can be completed, it is set to that time. Besides, the job delivery dates may no longer be feasible. In the case where the job permutation has not changed, if the delivery date of a job is smaller than the completion time of the last job in the corresponding batch, it is changed to that time. However, in the other case, where the job permutation has changed, the delivery dates are all chosen in the same way as in generating the initial population.

• Revolution policy 3
If the third row is chosen, one of the jobs is randomly selected; let job $(i, j)$ belonging to batch b be the selected job. Then, the batch index of the job is changed to another randomly generated integer between 1 and $n_j$. If either the job alone forms a new batch or it is included in an existing batch as its last job, its delivery date is chosen in the same way as in generating the initial population; in the latter case, the delivery date of any other job belonging to the same batch is then set to the delivery date of job $(i, j)$. In the other case, in which it is included in an existing batch but not as its last job, its delivery date is clearly set equal to the delivery date of the last job in the batch. Moreover, in all three cases, if job $(i, j)$ was previously the last job in batch b, the delivery date of any other job belonging to batch b is chosen in the same way as in generating the initial population.

• Revolution policy 4
If the last row is chosen, an operator similar to that of revolution policy 2 is applied: an amount taken from a normal distribution with zero mean and standard deviation $\sigma_{DD}$ is added to the delivery date of a batch, where $\sigma_{DD}$ is a parameter controlling the amount of change. If the new delivery date of a batch is smaller than the completion time of its last job, it is set to that time. Then, the delivery date of every job is set equal to the delivery date of the batch containing that job.
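As an example, revolution policy 4 could be sketched as follows (ours; names are assumptions).

```python
import random

def revolve_delivery(batch_date, completion, members, sigma_dd=5.0):
    """Illustrative sketch (ours) of revolution policy 4: perturb one batch's
    delivery date by N(0, sigma_dd), never earlier than its last completion."""
    key = random.choice(list(batch_date))                      # pick a batch at random
    last = max(completion[k] for k in members[key])
    batch_date[key] = max(last, batch_date[key] + random.gauss(0.0, sigma_dd))
    # the delivery date of every job in the batch is then set to batch_date[key]
```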
4.4 Imperialistic competition
The empires compete to take possession of colonies from each other and thereby increase their power. The imperialistic competition gradually results in an increase in the power of more powerful empires and a decrease in the power of weaker ones. In other words, any empire that is not able to succeed cannot increase its power and will gradually be eliminated from the competition.

In our ICA, the imperialistic competition is implemented by picking the weakest colony of the weakest empire and making a competition among all the empires to possess it. To commence the competition, the total power of an empire is measured as a function of both the power of the imperialist and that of its colonies. This is modeled by defining the total cost of each empire imp as follows:

$TC_{imp} = OV_{imp} + \gamma \, AOV_{imp}$   (18)

where $AOV_{imp}$ is the average objective function value of the colonies of empire imp and $\gamma$ is a parameter between 0 and 1 determining the effect of the colonies. The normalized total cost of empire imp is then calculated as

$TC'_{imp} = TC_{max} - TC_{imp}$   (19)

where $TC_{max}$ is the maximum total cost among all empires, i.e., the total cost of the weakest empire. Now, the possession probability of each empire imp is estimated by

$POS_{imp} = TC'_{imp} / \sum_{q=1}^{Size_i} TC'_q$   (20)

For each empire imp, a random number $rand_{imp}$ is then picked from a uniform distribution between 0 and 1, and the freed colony is finally assigned to the empire with the highest $POS_{imp} - rand_{imp}$. As a result, each empire having more total power has a higher chance to take possession of the weakest colony of the weakest empire.
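The competition step of Eqs. (18)-(20) could be sketched as follows (ours; it reuses the hypothetical empire representation from the earlier initialization sketch).

```python
import random

def imperialistic_competition(empires, costs, gamma=0.05):
    """Illustrative sketch (ours) of Eqs. (18)-(20): free the weakest colony of the
    weakest empire and give it to the empire with the largest POS - rand."""
    def total_cost(e):                                         # Eq. (18)
        col = [costs[c] for c in e["colonies"]]
        return costs[e["imp"]] + gamma * (sum(col) / len(col) if col else 0.0)
    tc = [total_cost(e) for e in empires]
    weakest = max(range(len(empires)), key=lambda q: tc[q])    # weakest empire
    if not empires[weakest]["colonies"]:
        return
    colony = max(empires[weakest]["colonies"], key=lambda c: costs[c])   # its weakest colony
    empires[weakest]["colonies"].remove(colony)
    tc_norm = [max(tc) - v for v in tc]                        # Eq. (19)
    total = sum(tc_norm) or 1.0
    pos = [v / total for v in tc_norm]                         # Eq. (20)
    winner = max(range(len(empires)), key=lambda q: pos[q] - random.random())
    empires[winner]["colonies"].append(colony)
```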
4.5 Elimination
As time passes, the colonies of a powerless empire are divided among other empires, and accordingly, it gradually loses its power and ultimately collapses. Different criteria may be applied in modeling the collapse mechanism. In this research, however, an empire is eliminated when it loses all of its colonies.
4.6 Termination
The termination condition in the original ICA is that the algorithm stops when only one empire remains. However, to ensure that the algorithm terminates in a reasonable amount of time, ICADP continues until only one empire survives or the time limit Max_Time has elapsed, whichever criterion is met first.
5 Computational results
Computational experiments are conducted in order to assess the performance of the proposed mathematical model as well as the performance of the proposed ICA hybridized with the dominance properties, called ICADP. A total of 100 problem instances with sizes ranging from 7 to 70 jobs are randomly generated, where there are five instances for each problem size, obtained by considering 1 to 5 customers. For an instance with N jobs and $F > 1$ customers, the jobs are randomly distributed among the customers in such a way that each customer has at least one job. The job processing times are uniformly sampled from the range [1, 100]. Since it may be reasonable to assume that each job is released before its latest due date, the due windows and release dates are generated in the following way. As in [13], the center of the due window associated with each job $(i, j)$ is drawn from a uniform distribution between $(1 - TAR - RDD/2) \sum_j \sum_i p_{ij}$ and $(1 - TAR + RDD/2) \sum_j \sum_i p_{ij}$, where TAR and RDD are the tardiness factor and the relative range of the due window, respectively, and the window size is drawn from a uniform distribution between 1 and $\sum_j \sum_i p_{ij} / N$. Then, the release date of the job is uniformly sampled from the range $[0, d_{ij}^u]$. For each job, TAR and RDD take values randomly chosen from {0.1, 0.2, 0.3} and {0.8, 1.0, 1.2}, respectively. Moreover, the unit earliness and tardiness costs are chosen from a uniform distribution between 1 and 10, and the unit holding costs from a uniform distribution between 1 and 5. Finally, the cost of delivering each batch to a given customer is drawn from a uniform distribution over the range [100, 200].

ICADP has been coded in Visual C# and run on a 2.27 GHz Pentium IV processor with 3 GB of RAM. In all experiments of this paper, we set the parameters UIF and DF used in the initial population generation to 1.1, the parameters $\sigma_{CT}$ and $\sigma_{DD}$ used in revolution policies 2 and 4, respectively, to 5, and the time limit Max_Time to 600 seconds. The other numeric parameters of ICADP have been determined empirically; in a preliminary experiment, several values for each of them were considered. The population size $Size_p$ has been tested on 50N, 75N, 100N, and 125N, and the initial number of imperialists $Size_i$ on $0.1 Size_p$ and $0.2 Size_p$. The parameter $\tau$ used in assimilation policy 2 has been tested on 1.5, 2.0, and 2.5, and $P_{bi}$ used in assimilation policy 3 on 0.05, 0.1, and 0.15. The revolution rate $P_{rev}$ has been tested between 0.05 and 0.25 in increments of 0.05. Moreover, the parameter $\gamma$ determining the effect of the colonies in the imperialistic competition has been tested on 0.01, 0.05, 0.1, and 0.5. To make the performance of ICADP robust, each combination of the parameter values has been tested on 10 randomly chosen instances. Experimental results have shown that the following values obtain satisfactory results: $Size_p = 100N$, $Size_i = 0.1 Size_p$, $\tau = 2$, $P_{bi} = 0.05$, $P_{rev} = 0.1$, and $\gamma = 0.05$.
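For reference, the instance generator described above could be sketched as follows (ours; continuous draws are used throughout, and the delivery cost of each customer would be drawn from U[100, 200] separately).

```python
import random

def generate_jobs(n_jobs):
    """Illustrative sketch (ours) of the instance generation scheme of Section 5."""
    proc = [random.uniform(1, 100) for _ in range(n_jobs)]      # processing times
    total_p = sum(proc)
    jobs = []
    for p in proc:
        tar = random.choice([0.1, 0.2, 0.3])                    # tardiness factor TAR
        rdd = random.choice([0.8, 1.0, 1.2])                    # due-window range RDD
        center = random.uniform((1 - tar - rdd / 2) * total_p,
                                (1 - tar + rdd / 2) * total_p)  # due-window centre
        size = random.uniform(1, total_p / n_jobs)              # due-window size
        d_lo, d_hi = center - size / 2, center + size / 2
        jobs.append({"proc": p,
                     "release": random.uniform(0, d_hi),        # released before d^u
                     "d_lo": d_lo, "d_hi": d_hi,
                     "alpha": random.uniform(1, 10),            # unit tardiness cost
                     "beta": random.uniform(1, 10),             # unit earliness cost
                     "hold": random.uniform(1, 5)})             # unit holding cost
    return jobs
```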
The performance of ICADP is compared with the performance of the following alternative methods:

• MILP: the MILP model implemented in CPLEX 11, with a time limit of 7200 seconds
• RODP: the revolution operator together with the dominance properties as a heuristic algorithm
• ICA1: the proposed ICA without the revolution operator and the dominance properties
• ICA2: the proposed ICA without the dominance properties
Like local search algorithms, RODP starts with a random initial solution (generated by the procedure stated in Section 4.1.2) and repeatedly tries to improve the current solution. At each iteration, a new solution is generated by applying the revolution operator to the current solution and then improving the resulting solution through the application of the dominance properties; the new solution is accepted only if it is better than the current solution. However, to avoid entrapment in local optima, following a preliminary experiment, RODP restarts with a random solution if 3N successive iterations are performed without finding a solution better than the current one. To make a fair comparison, RODP terminates when the maximum time spent by ICADP is reached. The proposed assimilating operator is the only operator exploited by ICA1 to generate new solutions, and consequently, the contribution of this operator can be observed by comparing the performance of ICA1 with MILP and RODP. The contribution of the proposed revolution operator can be clearly observed by comparing the performance of ICA2 with ICA1. Moreover, a question that may arise is whether the proposed ICA would not perform better without the derived dominance properties but with generating an additional number of solutions. Accordingly, the performance comparison of the proposed ICA with (ICADP) and without (ICA2) the dominance properties serves to validate their effectiveness and efficiency. Each of ICADP, RODP, ICA1 and ICA2 has been run five times for each problem instance, and the minimum, average and maximum of the objective function values obtained have been recorded. The results obtained for small and large instances are shown in Tables 1 and 2, respectively, where the solution quality is measured by the mean percentage difference from the best found solution. In both tables, the third column gives, for each problem instance, the number of jobs belonging to the customers in order. The average CPU times (in seconds) spent by ICADP, RODP, ICA1 and ICA2 are also provided in the tables.
In Table 1, a value indicated in boldface corresponds to a solution whose optimality has been verified by CPLEX, and an asterisk in Table 2 indicates that CPLEX has failed to identify even a feasible solution within the imposed time limit. In these tables, if the time spent by CPLEX is less than 7200 seconds but its solution is not optimal, this implies that CPLEX has terminated owing to memory limitations.
-- Table 1 --
-- Table 2 --
From Tables 1 and 2, it can be observed that CPLEX is capable of achieving optimal solutions for all of the instances with up to 10 jobs and for some instances with up to 13 jobs. However, as the number of jobs increases, it fails to find optimal solutions within the time limit of 7200 seconds; for 14 out of 25 instances with 50 or more jobs, CPLEX has failed to identify even a feasible solution within the imposed time limit. ICA1, which exploits only the proposed assimilating operator to produce new solutions, achieves optimal solutions for 13 small instances. On the whole, the performance of ICA1 is comparable with that of MILP in solution quality, although ICA1 is superior to MILP in terms of CPU time. When comparing ICA1 with RODP, ICA1 finds better solutions in almost the same CPU time. ICA2, which exploits both the proposed assimilating and revolution operators to produce new solutions, achieves optimal solutions for 21 small instances. When comparing ICA2 with ICA1, ICA2 finds better solutions in almost the same CPU time. ICADP achieves optimal solutions for all of the instances with up to 10 jobs as well as for those instances with up to 13 jobs whose optimality has been verified by CPLEX. ICADP performs promisingly even for the instances with a large number of jobs; besides, the differences between its best and worst results are sufficiently small. When comparing ICADP with ICA2, they perform almost the same in terms of solution quality for the instances with up to 13 jobs. However, as the
number of jobs increases, ICA2 cannot compete with ICADP; even the worst results achieved by ICADP are better than the best results of ICA2. Moreover, in terms of CPU time, and except for four small instances, ICADP does not require more computational effort than ICA2 despite employing the dominance properties. The reason is that, lacking the dominance properties, which play an important role in the intensification of the search, ICA2 converges to a situation with just one empire only after generating a larger number of solutions than ICADP. Accordingly, it is recommended to apply the dominance properties in order to achieve better solutions in relatively shorter computation times. On the whole, the computational results suggest that ICADP is fast and robust, and provides an efficient approach for achieving optimal or near-optimal solutions with small computational requirements.
Furthermore, another experiment is conducted on the two instances reported in detail in [9], denoted simple cases a and b. In each of these two small instances, there are 2 customers, each with 4 jobs, with r_ij = 0 and d_ij^l = d_ij^u for every job (i, j). For each test problem, Hamidinia et al. [9] have executed their proposed genetic algorithm (GA) 20 times and reported the average objective function value. Accordingly, each of ICADP, RODP, ICA1 and ICA2 has been run 20 times for each instance, and the average results obtained are reported. Besides, for each instance we have solved the corrected version (a MILP model, denoted MILP1) of the mathematical model of Hamidinia et al. [9] using CPLEX 11. Recall that, unlike in MILP, unforced idleness and forming discontinuous batches are not allowed in MILP1, and the delivery of a batch is also forbidden to occur at a time later than the completion time of the last job in the batch. The results are summarized in Table 3; the average CPU times are in seconds, and the optimality of the solutions of MILP and MILP1 has been verified by CPLEX.
-- Table 3 --
As can be seen from Table 3, only ICA1, ICA2 and ICADP always find optimal solutions for both instances on every run. Although all the algorithms perform the same in terms of solution quality for instance b, for instance a even RODP significantly outperforms GA. In terms of CPU time, however, a proper comparison is not possible since Hamidinia et al. [9] have not reported the computer platform used in their experiments. Finally, by comparing MILP against MILP1, it is observed that for instance a the total cost increases by 15.32% if unforced idleness and forming discontinuous batches are not allowed and the delivery of a batch is forbidden to occur at a time later than the completion time of the last job in the batch.
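As a quick check of that figure, and assuming the instance-a optimal costs listed in Table 3 (842 for MILP and 971 for MILP1), the relative increase is

\[
\frac{971 - 842}{842} \times 100\% \approx 15.32\%.
\]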
6 Summary and future work
This paper deals with a single-machine scheduling problem in which jobs are released at different points in time but delivered in batches to their respective customers or to other machines for further processing. Every job must be delivered within its due window; otherwise, a penalty is incurred. The delivery cost for a batch, which differs between customers, is assumed to be independent of the number of jobs in the batch. The aim is to schedule the jobs, to form them into batches and to decide the delivery date of each batch so as to minimize the sum of earliness, tardiness, holding, and delivery costs. A mathematical formulation is developed, and it is then shown that it may be advantageous to have periods of unforced idleness, to form discontinuous batches, and to deliver a batch at a time later than the completion time of the last job in the batch. Moreover, a number of dominance properties providing necessary conditions for any solution to be optimal are established. To solve this NP-hard problem efficiently, a hybrid algorithm is then proposed by integrating the dominance properties with an ICA utilizing both assimilating and revolution operators. The dominance properties, employed as a local search strategy, have an important role in
generating the initial population as well as in enhancing the quality of the solutions generated by the ICA. It is demonstrated through computational experiments that the proposed hybrid algorithm is able to obtain optimal or near-optimal solutions within a reasonable amount of time. A number of possible, significant extensions to this study could be investigated in future research. It would be interesting to consider the problem with sequence-dependent setup times (or costs). Since the jobs are released at different points in time, two versions of such a problem may be investigated: setup can be assumed to begin either when both the corresponding job and the machine are available, or when the machine is available even if the job has not yet been released. Another extension would be to consider the case in which there are precedence constraints between the jobs belonging to the same customer, or even between all jobs. Moreover, it would be interesting to study the problem where unavailability constraints are imposed on the machine, i.e., when it is not continuously available for processing, for example due to preventive maintenance.
Acknowledgements The authors would like to thank the two anonymous referees for their constructive comments and suggestions, which have been very helpful in improving the presentation of this paper.
References
1. Cheng TCE, Kahlbacher HG. Scheduling with delivery and earliness penalties. Asia-Pacific Journal of Operational Research 1993;10:145–152.
2. Hermann JW, Lee CY. On scheduling to minimize earliness-tardiness and batch delivery costs with a common due date. European Journal of Operational Research 1993;70:272–288.
3. Cheng TCE, Gordon VS, Kovalyov MY. Single machine scheduling with batch delivery. European Journal of Operational Research 1996;94:227–283.
4. Wang G, Cheng TCE. Parallel machine scheduling with batch delivery costs. International Journal of Production Economics 2000;68:177–183.
5. Ji M, He Y, Cheng TCE. Batch delivery scheduling with batch delivery cost on a single machine. European Journal of Operational Research 2007;176:745–755.
6. Mazdeh MM, Shashaani S, Ashouri A, Hindi KS. Single-machine batch scheduling minimizing weighted flow times and delivery costs. Applied Mathematical Modelling 2011;35:563–570.
7. Hall NG, Potts CN. Supply chain scheduling: batching and delivery. Operations Research 2003;51:566–584.
8. Mazdeh MM, Sarhadi M, Hindi KS. A branch-and-bound algorithm for single-machine scheduling with batch delivery and job release times. Computers and Operations Research 2008;35:1099–1111.
9. Hamidinia A, Khakabimamaghani S, Mazdeh MM, Jafari M. A genetic algorithm for minimizing total tardiness/earliness of weighted jobs in a batched delivery system. Computers and Industrial Engineering 2012;62:29–38.
10. Shabtay D. Scheduling and due date assignment to minimize earliness, tardiness, holding, due date assignment and batch delivery costs. International Journal of Production Economics 2010;123:235–242.
11. Yin Y, Cheng TCE, Xu D, Wu CC. Common due date assignment and scheduling with a rate-modifying activity to minimize the due date, earliness, tardiness, holding, and batch delivery cost. Computers and Industrial Engineering 2012;63:223–234.
12. Yin Y, Cheng TCE, Hsu CJ, Wu CC. Single-machine batch delivery scheduling with an assignable common due window. Omega 2013;41:216–225.
13. Wan G, Yen BPC. Tabu search for single machine scheduling with distinct due windows and weighted earliness/tardiness penalties. European Journal of Operational Research 2002;142:271–281.
14. Baker KR, Scudder GD. Sequencing with earliness and tardiness penalties: a review. Operations Research 1990;38:22–36.
15. Chang PC, Chen SH, Mani V. A hybrid genetic algorithm with dominance properties for single machine scheduling with dependent penalties. Applied Mathematical Modelling 2009;33:579–596.
16. Chang PC, Chen SH. Integrating dominance properties with genetic algorithms for parallel machine scheduling problems with setup times. Applied Soft Computing 2011;11:1263–1274.
17. Ahmadizar F, Hosseini L. Bi-criteria single machine scheduling with a time-dependent learning effect and release times. Applied Mathematical Modelling 2012;36:6203–6214.
18. Atashpaz-Gargari E, Lucas C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. IEEE Congress on Evolutionary Computation 2007;4661–4667.
19. Nazari-Shirkouhi S, Eivazy H, Ghodsi R, Rezaie K, Atashpaz-Gargari E. Solving the integrated product mix-outsourcing problem using the imperialist competitive algorithm. Expert Systems with Applications 2010;37:7615–7626.
20. Shokrollahpour E, Zandieh M, Dorri B. A novel imperialist competitive algorithm for bi-criteria scheduling of the assembly flowshop problem. International Journal of Production Research 2011;49:3087–3103.
21. Karimi N, Zandieh M, Najafi AA. Group scheduling in flexible flow shops: a hybridised approach of imperialist competitive algorithm and electromagnetic-like mechanism. International Journal of Production Research 2011;49:4965–4977.
22. Niknam T, Fard ET, Pourjafarian N, Rousta A. An efficient hybrid algorithm based on modified imperialist competitive algorithm and K-means for data clustering. Engineering Applications of Artificial Intelligence 2011;24:306–317.
23. Lian K, Zhang C, Gao L, Li X. Integrated process planning and scheduling using an imperialist competitive algorithm. International Journal of Production Research 2012;50:4326–4343.
24. Naimi Sadigh A, Mozafari M, Karimi B. Manufacturer–retailer supply chain coordination: a bi-level programming approach. Advances in Engineering Software 2012;45:144–152.
25. Banisadr AH, Zandieh M, Mahdavi I. A hybrid imperialist competitive algorithm for single-machine scheduling problem with linear earliness and quadratic tardiness penalties. International Journal of Advanced Manufacturing Technology 2013;65:981–989.
Table 1 Results of MILP, RODP, ICA1, ICA2 and ICADP for small instances
Table 2 Results of MILP, RODP, ICA1, ICA2 and ICADP for large instances
Table 3 Results for two small instances taken from [9]

Fig. 1 An example of the proposed representation scheme
Fig. 2 Assimilation of a job permutation

Fig. 3 Moving a colony towards its relevant imperialist
Research highlights:
1. This paper considers single-machine batch delivery scheduling with job release dates and due windows.
2. The aim is to schedule the jobs, to form them into batches and to decide the delivery date of each batch so as to minimize the sum of earliness, tardiness, holding, and delivery costs.
3. A mathematical model of the problem is presented, and a number of dominance properties are derived.
4. To solve this NP-hard problem efficiently, a hybrid algorithm is proposed by incorporating the dominance properties with an ICA.
5. In the proposed algorithm, unforced idleness, forming discontinuous batches, and postponing the delivery of completed batches are allowed.