Parallel machine scheduling problem to minimize the makespan with resource dependent processing times


Applied Soft Computing 11 (2011) 5551–5557

Contents lists available at ScienceDirect

Applied Soft Computing journal homepage: www.elsevier.com/locate/asoc

Kai Li a,b,∗, Ye Shi a, Shan-lin Yang a,b, Ba-yi Cheng a,b

a School of Management, Hefei University of Technology, Hefei 230009, PR China
b Key Laboratory of Process Optimization and Intelligent Decision-making, Ministry of Education, Hefei 230009, PR China

Article info

Article history:
Received 27 January 2010
Received in revised form 28 December 2010
Accepted 1 May 2011
Available online 6 May 2011

Keywords: Parallel-machine; Scheduling; Makespan; Resource allocation; Controllable processing time

Abstract

This paper considers the identical parallel machine scheduling problem to minimize the makespan with controllable processing times, in which the processing times are linear decreasing functions of the consumed resource. The total resource consumption is limited. This problem is NP-hard even if the total resource consumption equals zero. Two kinds of machines, critical machines and non-critical machines, are defined, and some theoretical results are provided. A simulated annealing algorithm is then designed to obtain near-optimal solutions of high quality. To evaluate the performance of the proposed algorithm, we generate random test data in our experiments that simulate the ingot preheating before the hot-rolling process in steel mills. The accuracy and efficiency of the simulated annealing algorithm are tested on data with problem sizes varying from 200 to 1000 jobs. Over 10,000 randomly generated instances, the proposed simulated annealing algorithm shows excellent performance in both solution quality and computation time. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

Most classical scheduling problems assume that the job processing times are fixed numbers. However, job processing times may depend on resources such as manpower, energy, gas, catalyzer and money. For example, in the steel industry, the ingot preheating process before hot rolling consumes a great deal of energy, and the manufacturer can improve production efficiency by allocating some extra energy; it is therefore necessary to balance the conflict between production efficiency and energy consumption. Another practical example comes from a case where a manufacturer can improve production efficiency by adding workers, overtime or outsourcing, all of which consume more money. In this paper, we consider the corresponding parallel machine scheduling problem with resource dependent processing times, in which the resource consumption function is assumed to be a linear decreasing function. The goal is to minimize the makespan under the constraint of a limited total resource consumption. This problem can be described generally as $P_m \mid p_j = \bar p_j - x_j, \sum_{j=1}^n x_j \le \hat X \mid C_{\max}$, where $P_m$ means that there are m identical parallel machines, $\bar p_j$ is the basic processing time of $J_j$, and $x_j$ is the shortened amount of the

∗ Corresponding author at: School of Management, Hefei University of Technology, Hefei 230009, PR China. Tel.: +86 551 2919156. E-mail address: [email protected] (K. Li). 1568-4946/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.asoc.2011.05.005

processing time by consuming resource, which can be regarded as the consumed resource amount; thus the real processing time is $p_j = \bar p_j - x_j$. The constraint $\sum_{j=1}^n x_j \le \hat X$ means that the total resource consumption cannot exceed the limited value $\hat X$, and $C_{\max}$ means that the objective is to minimize the makespan. Obviously the $P_m \mid\mid C_{\max}$ problem is a special case of this problem with $\hat X = 0$, and it is NP-hard [1]; therefore this problem is also NP-hard.

The remainder of this paper is organized as follows. The next section gives a literature review of scheduling problems with resource dependent processing times. Section 3 gives the formal mathematical description of the $P_m \mid p_j = \bar p_j - x_j, \sum_{j=1}^n x_j \le \hat X \mid C_{\max}$ problem. Section 4 analyzes the properties of the optimal solutions, and a simulated annealing algorithm is then proposed in Section 5 to obtain near-optimal solutions of high quality. Section 6 tests and analyzes the accuracy and efficiency of the proposed simulated annealing algorithm experimentally. Finally, Section 7 gives the conclusions.

2. Literature review

It is usually believed that the initial research work on resource dependent scheduling problems was done by Vickson [2,3] and Van Wassenhove and Baker [4]. Resource dependent scheduling problems have received much attention in recent years; Janiak et al. [5], Shabtay and Steiner [6] and Nowicki and Zdrzałka [7] summarize the existing work.


Most of the papers about resource dependent processing times focus on single machine scheduling problems. For linear non-increasing functions, Vickson [2] considers the single machine scheduling problem to minimize the total compression cost plus weighted completion times. He proposes a branch-and-bound scheme for solving this problem optimally and a heuristic algorithm for fast approximation. Wan et al. [8] and Janiak et al. [9] analyze the computational complexity of the same problem and show that it is NP-hard. Wang and Xia [10] consider the problem of minimizing a cost function containing the total completion time, the total absolute differences in completion times and the total compression cost; this problem can be transformed into an assignment problem and thus solved in $O(n^3)$ time. Daniels and Sarin [11] provide theoretical results for constructing the trade-off curve between the number of tardy jobs and the total amount of allocated resource. Cheng et al. [12] prove that the problem of minimizing the total amount of allocated resource subject to a limited number of tardy jobs is NP-hard and propose a pseudo-polynomial-time dynamic programming algorithm. Janiak and Kovalyov [13] consider the single machine problem with controllable processing times, finding simultaneously a sequence of the jobs and a resource allocation so that the deadlines are satisfied and the total weighted resource consumption is minimized; this problem is shown to be solvable in $O(n \log n)$ time. Ng et al. [14] study the single machine scheduling problem with a variable common due date to minimize a linear combination of scheduling, due date assignment and resource consumption costs. Algorithms with $O(n^2 \log n)$ running times are presented for scheduling costs involving the earliness/tardiness and the number of tardy jobs.
Nowicki and Zdrzałka [15] deal with a bicriterion approach to preemptive scheduling of m parallel machines for jobs having processing costs which are linear functions of variable processing times. An $O(n^2)$ greedy algorithm is given which generates all breakpoints of a piecewise linear efficient frontier for the case of identical machines. For uniform machines, an algorithm is provided which solves the problem of least processing cost under limited completion time in $O(n \max\{m, \log n\})$ time. Jansen and Mastrolilli [16] consider identical parallel machine scheduling problems with controllable processing times, in which the processing times of the jobs lie in different intervals and the costs depend linearly on the position of the corresponding processing times in the intervals; they present polynomial time approximation schemes for these problems. In some papers, the job processing time is described by a convex decreasing resource consumption function. Kaspi and Shabtay [17] present a single machine scheduling problem that simultaneously determines the optimal job permutation and the resource allocation, such that the maximal completion time is minimized. They provide an $O(n)$ time optimization algorithm for the case of identical job release dates and an $O(n^2)$ time optimization algorithm for the case of non-identical job release dates. Shabtay and Kaspi [18] consider identical parallel machine problems. They show that the makespan problem with non-preemptive jobs is NP-hard; if preemption is allowed, the makespan problem is shown to be solvable in $O(n^2)$ time. They also show that the problem of minimizing the sum of completion times is solvable in $O(n \log n)$ time. Shabtay and Kaspi [19] consider the single machine problem of minimizing the total weighted flow time with controllable processing times. They propose an exact dynamic programming algorithm for small-scale problems and some heuristic algorithms for large-scale problems.
Cheng and Janiak [20] consider a single machine scheduling problem in which the job release dates are given but the processing times are decreasing functions of the amount of resource consumed. They show that ordering the jobs in non-decreasing order of release dates gives an optimal solution and that the problem of minimizing both the makespan and the resource consumption is polynomially solvable.

Some other papers assume that the release dates and the processing times are both resource dependent (e.g. Wang and Cheng [21], Choi et al. [22], Cheng et al. [23,24], Kaspi and Shabtay [25]). In this paper, we only consider problems with resource dependent processing times, hence these papers are not reviewed in detail.

3. Problem description

Given n jobs $J_j$ (j = 1, 2, ..., n), the processing time of $J_j$ is $p_j = \bar p_j - x_j$, where $\bar p_j$ is the basic processing time without resource allocation and $x_j$ is the resource consumed by $J_j$ ($x_j \le \bar p_j$). The total resource consumption is limited: $\sum_{j=1}^n x_j \le \hat X \le \sum_{j=1}^n \bar p_j$, where $\hat X$ is the limited amount. There are m machines $M_i$ (i = 1, 2, ..., m), all with speed 1. Once a machine begins processing a job, it must continue processing until the job is finished. A machine can process at most one job at a time, and a job can run on only one machine at a time.

Let $\Pi$ be the universal set of schedules. A schedule $\pi \in \Pi$ can be denoted as $\pi = \{\pi_1, \pi_2, ..., \pi_m\}$, in which $\pi_i$ is the sub-schedule on machine $M_i$ containing $n_i$ ($n_i \ge 0$) jobs, so that $\sum_{i=1}^m n_i = n$. Let $\pi_{ij}$ be the jth job in the sub-schedule $\pi_i$; then $\pi_i = \{\pi_{i1}, \pi_{i2}, ..., \pi_{i n_i}\}$ for i = 1, 2, ..., m, and $\bigcup_{i=1}^m \{\pi_{i1}, \pi_{i2}, ..., \pi_{i n_i}\} = \{J_j \mid j = 1, 2, ..., n\}$.

Let $\Theta$ be the universal set of resource allocations. A resource allocation $\theta \in \Theta$ can be denoted as $\theta = \{\theta_1, \theta_2, ..., \theta_m\}$, where $\theta_i = \{\theta_{i1}, \theta_{i2}, ..., \theta_{i n_i}\}$ is the resource allocation corresponding to the sub-schedule $\pi_i$. The corresponding total consumed resource is $X = \sum_{i=1}^m \sum_{j=1}^{n_i} \theta_{ij} \le \hat X$. Let $X_i$ be the total resource consumption corresponding to $\theta_i$; then $X = \sum_{i=1}^m X_i$.
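As a concrete reading of the definitions above, the following minimal sketch (Python used purely for illustration; the numeric values are invented, not from the paper) checks feasibility of a resource vector and computes the real processing times:

```python
# Feasibility sketch for the resource constraint (hypothetical data).
p_bar = [10, 6, 8]        # basic processing times p_bar_j
x = [3, 0, 2]             # consumed resource x_j per job
x_hat = 6                 # resource limit X_hat

assert all(0 <= xj <= pj for xj, pj in zip(x, p_bar))  # x_j <= p_bar_j
assert sum(x) <= x_hat <= sum(p_bar)                   # total resource limit
p_real = [pj - xj for pj, xj in zip(p_bar, x)]         # p_j = p_bar_j - x_j
print(p_real)   # [7, 6, 6]
```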

For a fixed solution $(\pi, \theta)$ ($\pi \in \Pi$, $\theta \in \Theta$), the completion time of the jth job on machine $M_i$ is

$C_{ij} = \sum_{q=1}^{j} (\bar p_{iq} - \theta_{iq}) = \sum_{q=1}^{j} \bar p_{iq} - \sum_{q=1}^{j} \theta_{iq}$.

The maximal completion time of $(\pi_i, \theta_i)$ is

$C_{\max}^{i} = \sum_{j=1}^{n_i} (\bar p_{ij} - \theta_{ij}) = \sum_{j=1}^{n_i} \bar p_{ij} - \sum_{j=1}^{n_i} \theta_{ij}$.

The makespan of the solution $(\pi, \theta)$ is

$C_{\max} = \max_{i=1}^{m} C_{\max}^{i} = \max_{i=1}^{m} \left( \sum_{j=1}^{n_i} \bar p_{ij} - \sum_{j=1}^{n_i} \theta_{ij} \right)$.

The goal of this problem is to find an optimal solution $(\pi^*, \theta^*)$ such that $C_{\max}(\pi^*, \theta^*) = \min_{\pi \in \Pi} \min_{\theta \in \Theta} C_{\max}(\pi, \theta)$, subject to $X(\pi^*, \theta^*) \le \hat X$.

4. Problem analysis

In this section, we first give the definitions of critical and non-critical machines, and then propose some properties of the problem.

Definition 1. In a solution $(\pi, \theta)$, if the maximal completion time of the sub-solution $(\pi_i, \theta_i)$ (i = 1, 2, ..., m) equals the makespan, i.e. $C_{\max}^{i} = C_{\max}$, then we call machine $M_i$ a critical machine; otherwise it is a non-critical machine. Denote the universal set of the critical machines in the solution $(\pi, \theta)$ as $K(\pi, \theta) = \{i \mid C_{\max}^{i} = C_{\max}\}$.

Property 1. There is at least one optimal solution $(\pi^*, \theta^*)$ such that $\sum_{i=1}^{m} \sum_{j=1}^{n_i} \theta_{ij}^{*} = \hat X$.
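The completion-time and makespan formulas above can be made concrete with a small sketch (Python used only for illustration; the job and resource values are invented):

```python
# Sketch: makespan of a fixed solution (pi, theta).
# pi[i] holds the basic processing times on machine M_i,
# theta[i] holds the resource allocated to each of those jobs.

def machine_cmax(p_bar_i, theta_i):
    # C^i_max = sum of basic times minus resource on M_i,
    # since each job's real processing time is p_j = p_bar_j - x_j
    return sum(p_bar_i) - sum(theta_i)

def makespan(pi, theta):
    # C_max = max over machines of C^i_max
    return max(machine_cmax(p, t) for p, t in zip(pi, theta))

# Hypothetical instance: two machines, X_hat = 4.
pi = [[5, 3], [6]]          # basic processing times p_bar_ij
theta = [[2, 1], [1]]       # resource allocation, total = 4 <= X_hat
assert sum(sum(t) for t in theta) <= 4   # feasibility
print(makespan(pi, theta))  # machine loads: 8-3=5 and 6-1=5 -> 5
```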

Proof. By contradiction. Suppose that $(\pi^*, \theta^*)$ is an optimal solution; it is feasible, so $\sum_{i=1}^{m} \sum_{j=1}^{n_i} \theta_{ij}^{*} \le \hat X$. Suppose that $\sum_{i=1}^{m} \sum_{j=1}^{n_i} \theta_{ij}^{*} < \hat X$, and let $\sum_{i=1}^{m} \sum_{j=1}^{n_i} \theta_{ij}^{*} = X^*$. Then we can construct another solution $(\pi, \theta)$ as follows. Suppose that there are k ($1 \le k \le m$) critical machines in $(\pi^*, \theta^*)$; then we append $(\hat X - X^*)/k$ resource to each sub-solution corresponding to


the k machines, and the corresponding maximal completion times are all decreased: for any machine $M_i \in K(\pi^*, \theta^*)$, $C_{\max}^{i} = C_{\max}^{i*} - (\hat X - X^*)/k$. Since $(\hat X - X^*)/k > 0$, we have $C_{\max}^{i} < C_{\max}^{i*}$. This contradicts the assumption that $(\pi^*, \theta^*)$ is an optimal solution.

Property 2. If the solution $(\pi, \theta)$ is not optimal and $\sum_{i=1}^{m} \sum_{j=1}^{n_i} \theta_{ij} = \hat X$, then there is at least one non-critical machine in the solution.

It is obvious that the contrapositive of this property holds, i.e., if $C_{\max}^{i} = C_{\max}^{h}$ for all $i, h \in \{1, 2, ..., m\}$ with $i \ne h$, and all the available resource is consumed, then the solution $(\pi, \theta)$ must be optimal. This property shows that in a non-optimal solution in which all the limited total resource is consumed, there are certainly two different machines $M_i$ and $M_h$, $i \ne h$, $i, h \in \{1, 2, ..., m\}$, such that $C_{\max}^{i} \ne C_{\max}^{h}$.

Property 3. In the solution $(\pi, \theta)$, if the total resource consumption $X_i$ of $\theta_i$ is unchanged, then changing the job positions in $\pi_i$ or changing the resource allocation within $\theta_i$ cannot change the maximal completion time $C_{\max}^{i}$ of $\pi_i$.

This property holds obviously according to the formula for $C_{\max}^{i}$: $C_{\max}^{i} = \sum_{j=1}^{n_i} (\bar p_{ij} - \theta_{ij}) = \sum_{j=1}^{n_i} \bar p_{ij} - \sum_{j=1}^{n_i} \theta_{ij} = \sum_{j=1}^{n_i} \bar p_{ij} - X_i$. When the jobs in $\pi_i$ are not changed, the value of $\sum_{j=1}^{n_i} \bar p_{ij}$ is also unchanged; if the total resource consumption $X_i$ of $\theta_i$ is fixed, then $C_{\max}^{i}$ is also fixed.
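Property 3 is easy to check numerically. The following sketch (hypothetical values, for illustration only) permutes the jobs on one machine and redistributes the same total resource; the machine's maximal completion time is unchanged:

```python
# Property 3 sketch: C^i_max depends only on sum(p_bar) and X_i.
def cmax(p_bar_i, theta_i):
    return sum(p_bar_i) - sum(theta_i)

p_bar = [7, 4, 9]        # basic processing times on machine M_i (invented)
theta = [2, 1, 3]        # resource allocation, X_i = 6
before = cmax(p_bar, theta)

# Permute job positions and redistribute the same total resource X_i = 6.
p_perm = [9, 7, 4]
theta2 = [0, 5, 1]
after = cmax(p_perm, theta2)

assert before == after == 14   # 20 - 6, unchanged in both cases
```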

Property 4. In a solution $(\pi, 0)$ without consumed resource, let $M_l$ be the machine with the minimal maximal completion time among all machines, i.e. $l = \arg\min_{1 \le i \le m} C_{\max}^{i}$. If $\sum_{i=1}^{m} (C_{\max}^{i} - C_{\max}^{l}) \le \hat X$, then an optimal solution, transformed from $(\pi, 0)$, can be obtained in O(n) time.

Proof. For the solution $(\pi, 0)$, we allocate $(C_{\max}^{i} - C_{\max}^{l}) + (\hat X - \sum_{i=1}^{m} (C_{\max}^{i} - C_{\max}^{l}))/m$ resource to each machine $M_i$ (i = 1, 2, ..., m). This ensures that all the machines are critical machines and that all the available resource is consumed. Therefore an optimal solution is obtained.

5. Simulated annealing algorithm

5.1. Algorithm idea

According to Property 1, there is at least one optimal solution consuming all the available resource; therefore we only consider solutions $(\pi, \theta)$ such that $\sum_{i=1}^{m} \sum_{j=1}^{n_i} \theta_{ij} = \hat X$. According to Property 2, if the current solution $(\pi, \theta)$ consumes all the limited resource but is not optimal, then there is at least one critical machine and one non-critical machine in the solution. Therefore, in our simulated annealing algorithm, we primarily aim at the local solution formed by a critical machine $M_i$ ($i \in K(\pi, \theta)$) and the non-critical machine with the minimal maximal completion time $M_l$ ($l := \arg\min \{C_{\max}^{h} \mid h = 1, 2, ..., m\}$). The aim is to quickly shorten the difference between the maximal completion times $C_{\max}^{i}$ and $C_{\max}^{l}$ of the two machines $M_i$ and $M_l$ by local transformation.

According to Property 3, $C_{\max}^{i}$ is determined by two parts, $\sum_{j=1}^{n_i} \bar p_{ij}$ and $X_i$; it does not depend on the job positions in $\pi_i$ or on how the resource is distributed within $\theta_i$. Therefore, for the local solution formed by $(\pi_i, \theta_i)$ and $(\pi_l, \theta_l)$, we first consider the corresponding local solution without resource allocation, i.e., let the maximal completion times corresponding to the two sub-solutions $(\pi_i, 0)$ and $(\pi_l, 0)$ be $C_{\max}^{i'}$ and $C_{\max}^{l'}$ respectively. We first shorten the difference between $C_{\max}^{i'}$ and $C_{\max}^{l'}$; we then shorten the difference further by allocating the resource.

A certain local transformation is shown in Fig. 1. Fig. 1(a) is a local solution formed by $(\pi_i, \theta_i)$ and $(\pi_l, \theta_l)$. Let $\delta = C_{\max}^{i} - C_{\max}^{l}$; then $\delta \ne 0$ if the solution $(\pi, \theta)$ is not optimal, according to Property 2. Fig. 1(b) is the corresponding local solution without resource allocation, formed by $(\pi_i', 0)$ and $(\pi_l', 0)$; let $\delta' = C_{\max}^{i'} - C_{\max}^{l'}$. Fig. 1(c) is the transformed result for the local solution in Fig. 1(b). Let the transformed sub-solutions be $(\pi_i'', 0)$ and $(\pi_l'', 0)$, and let $\delta'' = C_{\max}^{i''} - C_{\max}^{l''}$. Then we reallocate the total local resource as follows:

(1) If $|\delta''| > X_i + X_l$, then allocate $X_i + X_l$ resource to the machine $M_k$, $k = \arg\max \{C_{\max}^{h''} \mid h = i, l\}$, hence $|\delta''|$ is decreased;
(2) If $|\delta''| \le X_i + X_l$, then we can obtain an optimal local solution according to Property 4.

[Fig. 1. The transformation for the local solution.]
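The proof of Property 4 is constructive and can be sketched directly. The machine loads and the resource limit below are invented for illustration; `equalize` returns the per-machine allocation used in the proof:

```python
# Property 4 sketch: if X_hat covers the gaps to the lightest machine,
# a resource allocation making every machine critical exists and can be
# computed in one pass over the machines.
def equalize(loads, x_hat):
    c_l = min(loads)                    # C^l_max, the lightest machine
    gaps = [c - c_l for c in loads]     # C^i_max - C^l_max
    slack = x_hat - sum(gaps)
    assert slack >= 0, "Property 4 precondition not met"
    # allocate gap_i + slack/m to each machine M_i
    return [g + slack / len(loads) for g in gaps]

loads = [10.0, 8.0, 7.0]   # C^i_max of (pi, 0), hypothetical
x_hat = 7.0                # sum of gaps = 3 + 1 + 0 = 4 <= 7
alloc = equalize(loads, x_hat)
new_loads = [c - a for c, a in zip(loads, alloc)]
assert abs(sum(alloc) - x_hat) < 1e-9          # all resource consumed
assert max(new_loads) - min(new_loads) < 1e-9  # all machines critical
print(new_loads[0])  # 6.0
```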

5.2. Algorithm presentation

Simulated annealing is a neighborhood search approach for obtaining a global optimum of combinatorial optimization problems. It was proposed by Metropolis [27] and popularized by Kirkpatrick et al. [28]. Many researchers use simulated annealing to solve scheduling problems (e.g. see the work by Zhang and Wu [29], Naderi et al. [30], Loo and Wells [31] and Figielska [32]).

The steps of the proposed simulated annealing algorithm for the $P_m \mid p_j = \bar p_j - x_j, \sum_{j=1}^n x_j \le \hat X \mid C_{\max}$ problem are as follows:

Step 1. Construct an initial solution by the following Modified LPT (Longest Processing Time first) algorithm (MLPT): Consider the corresponding problem without resource allocation and use the LPT rule to obtain a solution $(\pi', 0)$. Allocate $\hat X/m$ resource to each sub-schedule $\pi_i$. Compute the maximal completion time $C_{\max}^{i} = C_{\max}^{i'} - \hat X/m$ of each sub-solution $(\pi_i, \theta_i)$ (i = 1, 2, ..., m) and the makespan $C_{\max} = \max_{i=1}^{m} C_{\max}^{i}$.

Step 2. Compute the lower bound $LB := \left( \sum_{j=1}^{n} \bar p_j - \hat X \right) / m$.

Step 3. If $(C_{\max} - LB)/LB < \varepsilon_1$ ($\varepsilon_1 = 0.0002$), then accept the (near-)optimal solution $(\pi, \theta)$ with the corresponding makespan $C_{\max}$ and stop.


Step 4. Choose a critical machine $M_i$, $i \in K(\pi, \theta)$, and the non-critical machine with the minimal maximal completion time $M_l$, $l := \arg\min \{C_{\max}^{h} \mid h = 1, 2, ..., m\}$. Denote the local solution composed of the two sub-solutions $(\pi_i, \theta_i)$ and $(\pi_l, \theta_l)$ as $(\pi_{Local}, \theta_{Local})$; its maximal completion time is $Z_{Local}$ and its total resource consumption is $\hat X_{Local}$. Denote the corresponding local solution without resource allocation as $(\pi_{Local}', 0)$; its maximal completion time is $Z_{Local}' = \max(C_{\max}^{i'}, C_{\max}^{l'})$, where $C_{\max}^{i'} = C_{\max}^{i} + \sum_{h=1}^{n_i} \theta_{ih}$ and $C_{\max}^{l'} = C_{\max}^{l} + \sum_{h=1}^{n_l} \theta_{lh}$.

Step 5. Compute the lower bound of the local solution without resource allocation, $LB_{Local} := (C_{\max}^{i'} + C_{\max}^{l'})/2$.

Step 6. Set the initial temperature $T := (Z_{Local}' - LB_{Local}) \cdot K$ (K := 300).

Step 7. If $T < \varepsilon_2$ ($\varepsilon_2 = 0.0005$) or no new solution is accepted at the current temperature, then go to Step 12.

Step 8. Set the number of loop iterations at the current temperature, L := 10.

Step 9. Randomly choose a job $\pi_{ij}'$ in $(\pi_i', 0)$ and a job $\pi_{lq}'$ in $(\pi_l', 0)$, and exchange the positions of the two jobs. Obtain the new local solution without resource allocation $(\pi_{Local}'', 0)$, the corresponding maximal completion time $Z_{Local}''$, and the maximal completion times $C_{\max}^{i''}$ and $C_{\max}^{l''}$ of the two sub-schedules $(\pi_i'', 0)$ and $(\pi_l'', 0)$, respectively.

Step 10. Compute $\Delta Z_{Local} = Z_{Local}'' - Z_{Local}'$. If $\Delta Z_{Local} < 0$, or $\Delta Z_{Local} \ge 0$ and $\exp(-\Delta Z_{Local}/T) > r$ (r is a random number, $0 \le r < 1$), then $\pi_{Local}' := \pi_{Local}''$ and $Z_{Local}' := Z_{Local}''$.

Step 11. L := L − 1. If L = 0, then $T := T \times \alpha$ ($\alpha := 0.7$) and go to Step 7; otherwise, go to Step 9.

Step 12. If $|C_{\max}^{i''} - C_{\max}^{l''}| \le \hat X_{Local}$, then go to Step 14.

Step 13. If $C_{\max}^{i''} \ge C_{\max}^{l''}$, then allocate all $\hat X_{Local}$ resource to the jobs in $\pi_i''$; otherwise allocate it to the jobs in $\pi_l''$. Compute the corresponding maximal completion times on the two machines and the global makespan $C_{\max}$. Go to Step 3.

Step 14. Allocate $(\hat X_{Local} + C_{\max}^{i''} - C_{\max}^{l''})/2$ resource to $\theta_i$ and $(\hat X_{Local} + C_{\max}^{l''} - C_{\max}^{i''})/2$ resource to $\theta_l$ (both amounts are non-negative by Step 12, and they equalize the two maximal completion times). Compute the corresponding maximal completion times on the two machines and the global makespan $C_{\max}$. Go to Step 3.
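The steps above can be condensed into the following sketch. It is a simplified illustration, not the authors' implementation: the MLPT initialization (Step 1), the lower bound (Step 2), and one annealed local transformation between two machines (Steps 6–14) are shown; the outer loop driven by the gap test of Step 3 is omitted, and the "no solution accepted" stopping rule of Step 7 is reduced to the temperature test alone. The parameter values follow the paper ($\varepsilon_2$ = 0.0005, K = 300, L = 10, $\alpha$ = 0.7); the job data are invented.

```python
import heapq
import math
import random

def mlpt(p_bar, m, x_hat):
    """Step 1 (MLPT): LPT assignment of basic processing times, then
    X_hat/m resource per machine. Returns job lists and completion times."""
    heap = [(0, i) for i in range(m)]
    heapq.heapify(heap)
    jobs = [[] for _ in range(m)]
    for p in sorted(p_bar, reverse=True):
        load, i = heapq.heappop(heap)     # least-loaded machine
        jobs[i].append(p)
        heapq.heappush(heap, (load + p, i))
    loads = [sum(js) - x_hat / m for js in jobs]
    return jobs, loads

def lower_bound(p_bar, m, x_hat):
    # Step 2: LB = (sum of basic processing times - X_hat) / m
    return (sum(p_bar) - x_hat) / m

def anneal_pair(jobs_i, jobs_l, x_local, rng):
    """Steps 6-14 on one pair of machines: anneal job swaps so the two
    no-resource completion times get close, then split the resource."""
    ci, cl = sum(jobs_i), sum(jobs_l)
    t = (max(ci, cl) - (ci + cl) / 2) * 300       # Step 6, K = 300
    z = max(ci, cl)
    while t >= 0.0005:                            # Step 7, eps2 = 0.0005
        for _ in range(10):                       # Step 8, L = 10
            a = rng.randrange(len(jobs_i))
            b = rng.randrange(len(jobs_l))
            d = jobs_l[b] - jobs_i[a]             # load shift of a swap
            nz = max(ci + d, cl - d)
            if nz < z or rng.random() < math.exp(-(nz - z) / t):
                jobs_i[a], jobs_l[b] = jobs_l[b], jobs_i[a]
                ci, cl, z = ci + d, cl - d, nz    # Step 10: accept
        t *= 0.7                                  # Step 11, alpha = 0.7
    if abs(ci - cl) > x_local:
        return max(ci, cl) - x_local   # Step 13: all to the heavier machine
    return (ci + cl - x_local) / 2     # Step 14: both machines made critical

# Demo on a tiny hypothetical instance (2 machines, X_hat = 6).
rng = random.Random(0)
p_bar = [5, 4, 4, 4, 3]
jobs, loads = mlpt(p_bar, 2, 6)
print(lower_bound(p_bar, 2, 6))                 # (20 - 6) / 2 = 7.0
print(anneal_pair(jobs[0], jobs[1], 6, rng))    # 7.0 -> matches the LB
```

On this tiny instance every reachable 2-machine split differs by at most the local resource, so Step 14 always applies and the local makespan reaches the lower bound exactly; on larger instances several local transformations are needed, which is what the Step 3 loop provides.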

6. Computational experiments

In this section, we test and analyze the accuracy and efficiency of the proposed algorithm for the $P_m \mid p_j = \bar p_j - x_j, \sum_{j=1}^n x_j \le \hat X \mid C_{\max}$ problem by experiments. The problems considered have 200, 400, 600, 800 or 1000 jobs and 2, 4, 6 or 8 machines. For convenience of presentation, we let n × m denote the problems with n jobs and m machines. The algorithm is implemented in C++ using the Dev-C++ 4.9.9.2 compiler. The experimental environment is a Pentium IV 2.93 GHz CPU with 480 MB of memory, running Microsoft Windows XP SP1.

In order to evaluate the performance, a data-generating scheme is necessary. Janiak [26] provides a uniform-distribution generating scheme based on intervals typical of the ingot preheating before the hot-rolling process. Following this scheme, for each job $J_j$ an integer basic processing time $\bar p_j$ was generated from the uniform distribution [170, 890]. To test the influence of the total resource consumption $\hat X$ on the solutions, we let $\hat X = \gamma \cdot \sum_{j=1}^n \bar p_j$ with $\gamma \in \{0.1, 0.2, 0.3, 0.4, 0.5\}$, i.e., the total processing time can be decreased by 10% to 50% through resource consumption in the different problems. To improve the objectivity of the experimental results, for the same $\gamma$ and the same problem size we ran the experiment 100 times and recorded the average, minimal and maximal values. The algorithm is therefore used to solve 5 × 5 × 4 × 100 = 10,000 random problem instances.

Let $C_{\max}(H)$ be the objective value obtained by algorithm H; the gap percentage between $C_{\max}(H)$ and the lower bound LB is calculated as $Gap(H) = (C_{\max}(H) - LB)/LB \cdot 100$. Note that Gap(H)% is always an upper bound on the relative error between $C_{\max}(H)$ and the optimal value. Time(H) is the CPU time occupied by algorithm H, in seconds. T1 is the number of times the optimal solution is obtained among the 100 randomly generated instances, and T2 is the number of times the CPU time is near zero among the 100 instances. Tables 1–5 give the experimental results for the different $\gamma$ values; Table 6 gives a comprehensive comparison.

Table 1 gives the experimental results for the problems with the setting $\gamma = 0.1$. This experiment corresponds to the problems with the minimal total available resource among the five experiments. When $\gamma = 0.1$, the average relative gap over all problems is 0.000532%. For all the {400, 800, 1000} × 2 and {400, 800} × 4 problems, the simulated annealing algorithm obtains the optimal solutions. The CPU time occupied by the simulated annealing algorithm lies between zero and 0.875 s, and the average CPU time is 0.010226 s.

Table 2 gives the experimental results for the problems with $\gamma = 0.2$. The total available resource is increased, and the average relative gap over all instances in Table 2 is 0.00029%, lower than the corresponding value in Table 1. In Table 2, the simulated annealing algorithm obtains the optimal solutions for six situations, the {400, 600} × 2 and {200, 600, 800, 1000} × 4 problems.
The CPU time occupied by the simulated annealing algorithm is less than 0.375 s, and the average CPU time is 0.005012 s.

Table 3 gives the experimental results for the problems with $\gamma = 0.3$. The average relative gap of the solutions obtained by simulated annealing is further reduced, to 0.00012%. Seven situations, the {200, 400} × 2 and {200, 400, 600, 800, 1000} × 4 problems, are solved optimally by the simulated annealing algorithm. The simulated annealing for each problem in Table 3 terminates within 0.484 s, and the average CPU time is 0.00379 s.

Table 4 gives the experimental results for the problems with the setting $\gamma = 0.4$. The average relative gap to the lower bound of the solutions obtained by the simulated annealing algorithm is 0.0000469%. The simulated annealing algorithm obtains the optimal solutions for all the {200, 600, 1000} × {2, 4} and {400, 800} × 4 problems. The CPU time in Table 4 lies between zero and 0.016 s, and the average value is 0.003362 s.

Table 5 gives the experimental results for the problems with $\gamma = 0.5$. This setting presents the problems with the maximal total available resource among all five experiments. The average relative gap of the simulated annealing algorithm under this setting is 0.0000184%. For all the {200, 600, 800, 1000} × {2, 4} and 400 × 4 problems, the simulated annealing algorithm obtains the optimal solutions. The CPU time occupied by the simulated annealing algorithm lies between zero and 0.016 s, and the average CPU time for the problems with $\gamma = 0.5$ is 0.003266 s.

Overall, from all the data in Tables 1–6, we can find the following: (1) The accuracy and efficiency of the proposed simulated annealing algorithm both improve as the value of $\gamma$ increases. Obviously, more total available resource makes it easier to balance the difference of the maximal


Table 1 Experimental results for the problems with the setting  = 0.1. n

200 200 200 200 400 400 400 400 600 600 600 600 800 800 800 800 1000 1000 1000 1000

m

2 4 6 8 2 4 6 8 2 4 6 8 2 4 6 8 2 4 6 8

Gap (MLPT)

Gap (SA)

Time (SA)

Ave

Min

Max

Ave

Min

Max

Ave

Min

Max

2.15114 5.62875 8.6641 11.6112 1.70621 3.80516 6.03187 8.13234 1.25295 3.78992 4.89551 6.62913 1.18614 2.96986 4.51384 5.4034 1.03148 2.62615 4.00418 5.17385

0.007093 0.973664 1.54851 3.36429 0.020645 0.211328 1.48469 2.86056 0.008126 0.894932 1.75513 2.63402 0.029033 0.314109 1.21047 2.52997 0.016789 0.276964 0.959536 2.06394

6.62102 18.888 18.0236 21.2446 6.53523 8.33588 12.2038 16.986 5.53367 9.79986 10.1583 12.1485 3.71788 7.24998 13.4199 11.4341 3.77909 7.10152 9.06018 10.042

0.000197 9.74E−06 0.000884 0.001284 0 0 0.000971 0.001813 0.000349 9.56E−05 0.000578 0.000985 0 0 0.000432 0.000968 0.000349 0 0.000269 0.001446

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0.012618 0.000974 0.018693 0.019844 0 0 0.015987 0.018973 0.016129 0.009565 0.016745 0.018005 0 0 0.007633 0.017296 0.01808 0 0.005146 0.016065

0.00109 0.00126 0.01673 0.01374 0.00203 0.00329 0.0164 0.01406 0.00217 0.00343 0.01767 0.01156 0.00313 0.00454 0.02031 0.01499 0.00376 0.00842 0.0222 0.02374

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0.016 0.016 0.078 0.156 0.016 0.078 0.187 0.25 0.016 0.016 0.141 0.078 0.016 0.047 0.141 0.484 0.016 0.375 0.141 0.875

T1

T2

98 99 49 73 100 100 52 71 97 99 41 76 100 100 47 77 98 100 44 77

93 92 27 41 87 84 31 45 86 78 29 41 80 73 21 39 76 69 18 26

T1

T2

98 100 43 84 100 99 46 78 100 100 45 83 99 100 43 77 99 100 64 85

96 92 62 72 91 89 54 70 86 84 51 64 79 81 49 61 76 83 36 55

T1

T2

100 100 34 85 100 100 41 92 98 100 43 95 98 100 64 94 96 100 74 96

96 94 75 84 91 90 64 82 85 84 65 74 82 89 51 73 79 76 46 71

Table 2 Experimental results for the problems with the setting  = 0.2. n

200 200 200 200 400 400 400 400 600 600 600 600 800 800 800 800 1000 1000 1000 1000

m

2 4 6 8 2 4 6 8 2 4 6 8 2 4 6 8 2 4 6 8

Gap (MLPT)

Gap (SA)

Time (SA)

Ave

Min

Max

Ave

Min

Max

Ave

Min

Max

2.60896 5.87424 9.57239 11.2718 1.70218 4.19256 6.42145 7.68048 1.61703 3.7518 5.09311 6.82995 1.26869 3.2034 4.59746 5.51701 1.17144 2.56358 3.74199 5.23925

0.008366 1.47522 0.905265 3.57973 0.038606 0.534823 1.56792 2.77336 0.029909 0.375815 1.97683 1.96003 0.002412 0.535046 1.68129 1.9331 0 0.338198 1.28336 2.0566

8.71202 14.4341 21.7058 22.2174 4.90137 11.4889 12.8985 16.574 5.29818 10.4542 13.0923 12.8995 3.97944 7.64653 8.81079 11.6173 2.92463 6.15324 7.06054 11.7891

0.000223 0 0.001068 0.000805 0 1.68E−06 0.000464 0.000889 0 0 0.000255 0.000532 2.41E−05 0 9.27E−05 0.00051 0.000173 0 0.000101 0.000664

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0.013939 0 0.016986 0.016969 0 0.000168 0.01439 0.015305 0 0 0.005543 0.014923 0.002412 0 0.003834 0.008794 0.017304 0 0.008189 0.019946

0.00063 0.00125 0.00687 0.0047 0.00143 0.00171 0.00735 0.0072 0.00217 0.0025 0.00844 0.00577 0.0033 0.00298 0.00843 0.00656 0.00376 0.00267 0.01484 0.00767

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0.016 0.016 0.078 0.047 0.016 0.016 0.031 0.172 0.016 0.016 0.032 0.031 0.016 0.016 0.031 0.047 0.016 0.016 0.375 0.047

Table 3 Experimental results for the problems with the setting  = 0.3. n

m

200 200 200 200 400 400 400 400 600 600 600 600 800 800 800 800 1000 1000 1000 1000

2 4 6 8 2 4 6 8 2 4 6 8 2 4 6 8 2 4 6 8

Gap (MLPT)

Gap (SA)

Time (SA)

Ave

Min

Max

Ave

Min

Max

Ave

Min

Max

2.5844 6.54069 9.86171 11.4426 1.90273 4.24968 6.58327 8.20195 1.52189 3.40156 5.03151 6.62883 1.31636 2.97276 4.47175 5.87707 1.21837 2.74795 4.11157 5.18386

0.051625 0.965094 1.92498 5.18836 0.022321 0.467233 1.19382 2.92219 0.001498 0.816408 1.4081 1.92029 0.004177 0.615036 1.2119 2.55385 0.006868 0.705359 0.885482 2.08247

7.58702 13.9615 18.6796 21.4123 4.98303 10.8465 14.3311 23.1693 4.89156 7.4969 10.6793 12.6082 5.17397 7.56037 9.66122 12.2573 4.42955 7.24295 10.6483 9.60562

0 0 0.000371 0.000553 0 0 7.60E−05 0.000232 0.00014 0 8.87E−06 8.08E−05 0.00017 0 4.37E−06 0.000119 0.000518 0 3.59E−05 8.53E−05

0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.51E−13 0 0 0 1.63E−12 0

0 0 0.010487 0.013077 0 0 0.002019 0.016502 0.012523 0 0.000366 0.007 0.012853 0 6.98E−05 0.007882 0.019149 0 0.002311 0.004302

0.00061 0.00094 0.00391 0.00251 0.00141 0.00155 0.00579 0.00329 0.00236 0.0025 0.00548 0.00422 0.0028 0.00172 0.01237 0.00437 0.00325 0.00375 0.00844 0.00452

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

0.016 0.016 0.016 0.016 0.016 0.016 0.031 0.063 0.016 0.016 0.016 0.031 0.016 0.016 0.484 0.031 0.016 0.016 0.016 0.016

5556

K. Li et al. / Applied Soft Computing 11 (2011) 5551–5557

Table 4
Experimental results for the problems with the setting λ = 0.4 (gaps in %, times in s).

 n     m   Gap (MLPT)                     Gap (SA)                         Time (SA)                T1    T2
           Ave      Min       Max         Ave        Min        Max        Ave      Min   Max
 200   2   2.60731  0.055529  11.4923     0          0          0          0.00093  0     0.016    100   94
 200   4   6.05246  1.32389   14.5833     0          0          0          0.00094  0     0.016    100   94
 200   6   9.61169  3.38066   24.5077     5.19E−05   0          0.002153   0.00343  0     0.016    43    78
 200   8   12.0771  3.11382   23.6647     0.000651   0          0.014391   0.00204  0     0.016    89    87
 400   2   1.91204  0.001192  5.63289     1.19E−05   0          0.001192   0.00156  0     0.016    99    90
 400   4   4.4699   1.06534   10.3179     0          0          0          0.00141  0     0.016    100   91
 400   6   6.21415  1.75701   13.3896     4.96E−06   0          6.78E−05   0.00469  0     0.016    51    70
 400   8   8.079    2.34686   15.9487     1.71E−05   0          0.001294   0.00235  0     0.016    98    85
 600   2   1.5794   0.038607  5.44847     0          0          0          0.00203  0     0.016    100   87
 600   4   3.76646  0.470428  8.67168     0          0          0          0.00219  0     0.016    100   86
 600   6   5.66749  2.08364   12.0618     1.42E−06   2.69E−13   1.23E−05   0.005    0     0.016    62    68
 600   8   6.7453   2.72352   12.8801     1.84E−05   0          0.001469   0.00375  0     0.016    96    76
 800   2   1.35219  0.018091  4.04134     0.000181   0          0.018091   0.00312  0     0.016    99    80
 800   4   3.10401  0.515826  7.5087      0          0          0          0.00312  0     0.016    100   80
 800   6   4.87077  1.26527   12.2258     5.88E−07   1.70E−13   3.54E−06   0.00687  0     0.016    81    56
 800   8   5.67742  2.57161   11.6343     1.15E−08   0          1.15E−06   0.00344  0     0.016    99    78
 1000  2   1.2229   0.021477  4.12597     0          0          0          0.00407  0     0.016    100   74
 1000  4   2.72937  0.609776  6.59835     0          0          0          0.00315  0     0.016    100   80
 1000  6   4.20821  1.22607   9.30621     5.60E−07   2.73E−12   4.52E−06   0.00844  0     0.016    83    46
 1000  8   5.45148  1.43845   10.1709     6.51E−08   0          6.51E−06   0.00471  0     0.016    99    70

Table 5
Experimental results for the problems with the setting λ = 0.5 (gaps in %, times in s).

 n     m   Gap (MLPT)                     Gap (SA)                         Time (SA)                T1    T2
           Ave      Min       Max         Ave        Min        Max        Ave      Min   Max
 200   2   2.97995  0.105502  11.1494     0          0          0          0.00078  0     0.016    100   95
 200   4   6.819    0.909469  20.52       0          0          0          0.00094  0     0.016    100   94
 200   6   9.88049  3.01836   25.3232     3.40E−05   0          0.002005   0.00282  0     0.016    33    82
 200   8   11.7871  3.13369   23.3011     1.55E−05   0          0.000853   0.00155  0     0.016    98    90
 400   2   1.90676  0.000638  7.50522     0.00014    0          0.013411   0.00142  0     0.016    98    91
 400   4   4.45375  1.03781   11.6408     0          0          0          0.00173  0     0.016    100   89
 400   6   6.95917  1.89384   16.6949     0.000108   2.54E−12   0.009858   0.00406  0     0.016    50    74
 400   8   8.58556  3.27137   18.5966     3.49E−06   0          0.000255   0.00187  0     0.016    98    88
 600   2   1.82557  0.023281  6.43588     0          0          0          0.00218  0     0.016    100   86
 600   4   3.68295  0.415454  10.0707     0          0          0          0.00266  0     0.016    100   83
 600   6   5.03857  2.1101    9.27677     6.18E−05   2.16E−13   0.00611    0.0053   0     0.016    78    66
 600   8   6.99803  2.11927   13.1386     3.53E−06   0          0.000353   0.00297  0     0.016    99    81
 800   2   1.5576   0.044364  4.35216     0          0          0          0.0033   0     0.016    100   79
 800   4   3.25993  0.911831  7.79392     0          0          0          0.0028   0     0.016    100   82
 800   6   4.81562  1.09837   8.95811     5.58E−07   0          4.59E−06   0.00642  0     0.016    84    59
 800   8   5.49245  1.75874   12.8345     8.64E−08   0          7.12E−06   0.00327  0     0.016    98    79
 1000  2   1.3021   0.029463  5.16914     0          0          0          0.00407  0     0.016    100   74
 1000  4   2.90082  0.441783  6.87262     0          0          0          0.00485  0     0.016    100   69
 1000  6   4.24312  1.21769   8.89599     5.23E−07   3.27E−13   3.76E−06   0.0081   0     0.016    88    48
 1000  8   5.22396  1.45827   10.99       6.35E−08   0          6.35E−06   0.00423  0     0.016    99    73

completion times corresponding to the sub-schedules according to Property 4. (2) The efficiency of the proposed simulated annealing algorithm shows no obvious dependence on the problem size, so the algorithm can be applied to large-scale problems. The reason is that the number of inner-loop iterations is set to a fixed number, and the well-directed local search further improves the efficiency of the proposed simulated annealing algorithm.
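The size-independence noted in observation (2) can be illustrated with a minimal sketch of such an annealing loop. The temperature initialization follows the paper's rule T := (Z_Local − LB_Local) · K; the neighbor and cost functions, the geometric cooling rate, and the stopping threshold below are illustrative assumptions, not the authors' exact implementation.

```python
import math
import random

def simulated_annealing(cost, neighbor, start, z_local, lb_local,
                        K=300, inner_loops=50, cooling=0.9, t_final=1e-3):
    """Generic SA skeleton with a FIXED number of inner-loop iterations.

    Because inner_loops does not grow with the instance size, the work per
    temperature step is constant; this is why the runtime of such a scheme
    depends only weakly on the number of jobs.
    """
    # Initial temperature as in the paper: T := (Z_Local - LB_Local) * K.
    t = max((z_local - lb_local) * K, t_final)
    current, best = start, start
    while t > t_final:                   # outer loop: cooling schedule
        for _ in range(inner_loops):     # inner loop: fixed iteration count
            cand = neighbor(current)
            delta = cost(cand) - cost(current)
            # Metropolis acceptance rule: always accept improvements,
            # accept deteriorations with probability exp(-delta / t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling                     # geometric cooling (assumed)
    return best
```

Since the number of outer loops is set by the cooling schedule alone, the total iteration count is independent of n, matching the behavior reported in the experiments.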

(3) Over all 10,000 randomly generated instances, the simulated annealing algorithm obtains the optimal solution 8573 times, and the CPU time is close to zero for 7270 instances. The average relative error of the solutions obtained by the simulated annealing algorithm over all instances is only 0.000201%, and the worst-case relative error is 0.019946%. The average CPU time over all instances is 0.005131 s, and the worst-case CPU time is 0.875 s. The simulated annealing algorithm can reach

Table 6
The statistical experimental results according to the values of λ (gaps in %, times in s).

 λ          Gap (MLPT)                    Gap (SA)                     Time (SA)                 T1     T2
            Ave       Min       Max       Ave        Min   Max         Ave       Min   Max
 0.1        4.560359  0.007093  21.2446   5.32E−04   0     0.019844    0.010226  0     0.875     79.9   56.8
 0.2        4.695939  0         22.2174   2.9E−04    0     0.019946    0.005012  0     0.375     82.15  71.55
 0.3        4.792526  0.001498  23.1693   1.2E−04    0     0.019149    0.00379   0     0.484     85.5   77.55
 0.4        4.869933  0.001192  24.5077   4.69E−05   0     0.018091    0.003362  0     0.016     89.95  78.5
 0.5        4.985625  0.000638  25.3232   1.84E−05   0     0.013411    0.003266  0     0.016     91.15  79.1
 Statistic  4.780876  0         25.3232   0.000201   0     0.019946    0.005131  0     0.875     85.73  72.7
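The Ave/Min/Max columns in the tables are simple aggregates of per-instance values. Assuming the relative gap of a heuristic makespan Z against a lower bound LB is defined as (Z − LB)/LB × 100 — the exact gap definition is not shown in this excerpt — the statistics can be reproduced as follows:

```python
def relative_gap(z, lb):
    """Percent relative gap of heuristic value z over lower bound lb (assumed definition)."""
    return (z - lb) / lb * 100.0

def summarize(gaps):
    """Ave/Min/Max aggregation as used in the result tables."""
    return {"Ave": sum(gaps) / len(gaps), "Min": min(gaps), "Max": max(gaps)}

# Hypothetical per-instance (heuristic, lower-bound) pairs for one (n, m, lambda) cell:
gaps = [relative_gap(z, lb) for z, lb in [(101.0, 100.0), (100.0, 100.0), (102.0, 100.0)]]
stats = summarize(gaps)   # {'Ave': 1.0, 'Min': 0.0, 'Max': 2.0}
```

The "Statistic" row of Table 6 is the same aggregation applied over all 10,000 instances rather than per-λ.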


a (near) global optimum solution when the initial temperature T (T := (Z_Local − LB_Local) · K) is large enough. The idea of the proposed simulated annealing algorithm is to improve efficiency by first dealing with the sub-problems with two machines; therefore we set K := 300 and the efficiency is also guaranteed. These results show that the proposed simulated annealing algorithm is both accurate and efficient.

7. Conclusions

In this paper, we propose a simulated annealing algorithm for the Pm | pj = p̂j − xj, Σ(j=1..n) xj ≤ X | Cmax problem. The idea focuses on the local solution formed by a critical machine and the non-critical machine with the minimal maximal completion time, so the search process is well-directed and fast. In the experiments, we introduce the parameter λ to control the total available resource and consider problems under five settings of λ. A large body of experimental results shows that the average relative error of all the solutions obtained by the simulated annealing algorithm is only 0.000201%; the proposed algorithm is therefore very accurate. Additionally, since the number of inner-loop iterations is a constant, the efficiency of the simulated annealing algorithm shows no obvious dependence on the problem size: the algorithm solves problems with 1000 jobs within 0.875 s.

Further research may test the practicability of the proposed simulated annealing algorithm on problems of very large size, e.g. more than 1000 jobs. We also intend to extend the idea of the proposed algorithm to more complex practical parallel machine scheduling problems.

Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments and suggestions that have significantly improved the paper.
This work was supported by the National Natural Science Foundation of China under grant numbers 70871032, 90924021 and 70971035, and by the Anhui Provincial Natural Science Foundation under grant 11040606Q27.

