Computers & Operations Research 32 (2005) 3059–3072

Terminal penalty rolling scheduling based on an initial schedule for single-machine scheduling problem

Bing Wang (a, b), Yugeng Xi (a), Hanyu Gu (a)
a Institute of Automation, Shanghai Jiaotong University, Shanghai 200030, China
b Department of Information Science and Control Engineering, Shandong University at Weihai, Shandong 264209, China

Available online 20 December 2004

Abstract

This article addresses the single-machine deterministic scheduling problem with release times, with the objective of minimizing the total completion time. A type of rolling horizon procedure (RHP) based on an initial schedule is used to improve the global performance while controlling the computational cost. A penalty function is added to the total completion time of each local sub-problem of the RHP; this technique guarantees that the global performance improves over the initial schedule at each iteration. Extensive experiments were conducted to compare the procedure with existing approaches. Computational results indicate that the procedure improves the initial schedule and that its solution quality is higher than that of existing procedures in most situations, while the computational effort remains moderate. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Rolling horizon procedures; Terminal penalty; Initial schedule; Local scheduling

Supported by the National Natural Science Foundation of China (Grant # 60274013) and the Natural Science Foundation for Youths of Shandong University (Grant # 11010053187075).
Corresponding author. E-mail address: [email protected] (B. Wang).
doi:10.1016/j.cor.2004.11.004

1. Introduction

Consider a single-machine scheduling problem with release dates to minimize the total completion time, designated 1/r_i/ΣC_i and described as follows: there are n jobs to be scheduled. Each job has a release time r_i and a processing time p_i. Setup times are sequence-independent and included in the processing times. Preemption is not allowed. All information is assumed to be deterministic and

known a priori. This is a classical scheduling problem that has attracted attention for a long time. The problem is strongly NP-hard [1], so an efficient polynomial-time algorithm for its exact solution is unlikely to exist.

The scheduling problem with release times can be solved off-line or on-line, depending on whether the global information about the jobs is known a priori. Traditional optimization methods, such as the branch-and-bound approaches of Dessouky and Deogun [2], Hariri and Potts [3], and Chu [4], can be used for off-line scheduling to obtain optimal solutions. However, they can only handle small- or medium-scale problems in a reasonable CPU time. More practically, heuristics are often used to obtain near-optimal solutions for large-scale scheduling problems. Among these, dispatching rules are the most popular, such as shortest processing time (SPT) [5], earliest completion time (ECT) [2], the priority rule for total flowtime (PRTF), and the alternative priority rule for total flowtime (APRTF) [6]. Dispatching rules can produce reasonable solutions with little computational effort and even achieve optimal solutions in some cases; in many other cases, however, their solutions may be very poor. To counter the myopic nature of dispatching rules, Suresh et al. proposed a release date iteration (RDI) procedure in [7], which improves the initial schedule generated by a dispatching rule by modifying the problem data iteratively.

Another way to reduce the computational burden of large scheduling problems is to use the decomposition mechanism of the rolling horizon procedure (RHP), as in [8-11]. RHP is a heuristic mechanism in which the global scheduling problem is replaced by a procedure that solves a sequence of local scheduling problems. It can either reduce the computational complexity of off-line scheduling or enable on-line scheduling for problems with incomplete information. However, since the RHP separates the local problems from the global one and ignores the impact of the local objectives on the global objective, its solutions are sometimes very poor. The computational results in [9] show that RHP is better than dispatching rules in some cases and worse in others. For off-line scheduling, however, the rolling mechanism of RHP allows an initial schedule to be improved by properly defining the local problems using the known global information.

In this paper, we combine dispatching rules and a special RHP to form a two-stage heuristic called terminal penalty rolling scheduling (TPRS). An initial schedule is first generated using a dispatching rule with low computational burden, and then a rolling scheduling procedure, in which a terminal penalty function is appended to the objective function of each sub-problem, is applied to improve the performance of the initial schedule iteratively. It will be shown that TPRS improves the procedural solution as the iterations proceed, and that the ultimate solution is no worse than the initial schedule. Thus, TPRS can be viewed as a procedure for improving an initial schedule, just like RDI [7]. The computational complexity of TPRS, however, is lower than that of RDI. Furthermore, extensive computational experiments show that the solutions of TPRS are better than those of RDI in most cases, and that TPRS almost consistently obtains better global solutions than RHP [9] with a modest additional computational burden.

The paper is organized as follows. In Section 2, TPRS is described in detail. In Section 3, the performance of TPRS is analyzed and some conclusions are drawn. Computational results are presented in Section 4. We conclude the paper with a summary and a discussion of future research in Section 5.

2. TPRS

For 1/r_i/ΣC_i, the RHP in [9] was described as a procedure consisting of a series of iterations. Each iteration involves three steps: identifying a sub-problem, solving the sub-problem to obtain its optimal solution, and implementing a portion of this solution.
As a special RHP, TPRS includes two stages: initial scheduling and rolling scheduling based on the initial schedule. The rolling scheduling procedure in TPRS also consists of a series of iterations, and each iteration involves three steps like those of the RHP in [9]: identifying the rolling window for a sub-problem, local scheduling and resetting the global schedule, and fixing a partial schedule. However, there are two important differences. The first is that the sub-problems of rolling scheduling are identified based on an initial schedule in TPRS. The second concerns the definition of the sub-problems. In order to describe them in detail, we first give some notation and conventions.

Let N be the set of all jobs. t denotes the iteration number, i.e. the decision point of the t-th local scheduling. The global schedule for set N at t is denoted S(t), and the global ΣC_i performance of S(t) is denoted J(t). A partial schedule is a portion of a global schedule. If a job subset of N at t is denoted g(t), the partial schedule created from the jobs of g(t) is denoted S(g(t)). The beginning time of S(g(t)) is the beginning time of the first job in S(g(t)), and the completion time of S(g(t)) is the completion time of the last job in S(g(t)). An active partial schedule S_R(g(t)) is called a Δt-shift schedule of the partial schedule S_P(g(t)) if the beginning time of S_R(g(t)) is shifted to the right by Δt (Δt ≥ 0), or to the left by |Δt| (Δt < 0), from that of S_P(g(t)), as far as necessary to absorb idle times, while the sequence of jobs is kept unchanged.

2.1. Initial scheduling

For any RHP, two important issues should be considered when analyzing its global performance: how to describe the global performance during the rolling procedure, and how the local scheduling is executed and carried forward iteratively. In the on-line RHP proposed by Suresh et al. [9], the global performance cannot be described because global information is lacking, and the sequence of jobs included in the sub-problem at each iteration is uncertain; this makes the evaluation of the global procedural performance of RHP difficult. For the scheduling problem addressed in this paper, however, all information is known a priori, so an initial scheduling of the job set N can be carried out to cope with these two issues. In the first stage of TPRS, all jobs are first scheduled using a dispatching rule. This initial schedule, denoted S, helps to describe the global performance during the rolling scheduling procedure and also gives a basic sequence of jobs from which each sub-problem can be identified. The initial schedule is the base for rolling scheduling and is important to the global performance of TPRS.

2.2. Rolling scheduling based on the initial schedule

The second stage of TPRS is a rolling scheduling procedure in which local scheduling proceeds forward iteratively. At each iteration a partial schedule is fixed, until the global schedule is completely fixed. In the following, we describe the three steps of the rolling scheduling procedure starting from the first iteration, with more details presented later.

At the first iteration, t = 1, the initial schedule S has been generated. The first α jobs in S are assigned to the job set w(1), which is called a rolling window. In S, the partial schedule for jobs in w(1) is denoted S_P(w(1)), and following S_P(w(1)) there is the partial schedule S_P(w̄(1)) of the later job set w̄(1). The initial schedule S, called the global previous schedule S_P(1) at t = 1, is thus divided into two parts after identifying the rolling window, i.e. S = S_P(1) = S_P(w(1)) + S_P(w̄(1)), where the subscript P identifies the previous schedule before local scheduling.
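To make the first stage and the identification of w(1) concrete, the following is a minimal sketch that builds an initial schedule with a non-delay SPT dispatching rule for 1/r_i/ΣC_i and then splits it into w(1) and w̄(1). It is an illustration only: the Job class, the function names, and the sample data are our own assumptions and are not taken from the paper, whose procedures were implemented in C.

```python
# Sketch (assumption): non-delay SPT dispatching for jobs with release dates,
# followed by identification of the first rolling window of alpha jobs.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Job:
    idx: int      # job identifier
    r: float      # release time r_i
    p: float      # processing time p_i


def spt_schedule(jobs: List[Job]) -> Tuple[List[int], List[float]]:
    """Whenever the machine is free, start the released job with the shortest
    processing time; if nothing is released, jump to the next release date.
    Returns the job sequence and the completion times."""
    unscheduled = sorted(jobs, key=lambda j: j.r)
    pending: List[Job] = []
    t, seq, completions = 0.0, [], []
    while unscheduled or pending:
        while unscheduled and unscheduled[0].r <= t:   # newly released jobs
            pending.append(unscheduled.pop(0))
        if not pending:                                # machine idle: advance time
            t = unscheduled[0].r
            continue
        pending.sort(key=lambda j: j.p)                # SPT priority among released jobs
        job = pending.pop(0)
        t += job.p
        seq.append(job.idx)
        completions.append(t)
    return seq, completions


def split_window(sequence: List[int], alpha: int) -> Tuple[List[int], List[int]]:
    """Rolling window w(1) = first alpha jobs of the initial schedule S;
    w_bar(1) = the later job set."""
    return sequence[:alpha], sequence[alpha:]


if __name__ == "__main__":
    jobs = [Job(0, 0, 7), Job(1, 2, 3), Job(2, 4, 9), Job(3, 5, 2), Job(4, 10, 6)]
    seq, C = spt_schedule(jobs)
    print("initial SPT sequence:", seq, " total completion time:", sum(C))
    w1, w1_bar = split_window(seq, alpha=3)
    print("w(1):", w1, " w_bar(1):", w1_bar)
```

With release dates, SPT is applied only among the jobs already released whenever the machine becomes free; this is the usual dynamic reading of the rule and is assumed here.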

Fig. 1. Window's rolling in TPRS: (a) identifying a rolling window at t; (b) window's rolling from t to t + 1. (Figure omitted; it depicts the partial schedules S(e(t−1)), S_P(w(t)), S_R(w(t)), S_R(wl(t)), S(e(t)), and S_P(w(t+1)) as line segments.)

For the jobs in the rolling window w(1), local scheduling is executed and a local optimal solution S_R(w(1)) is obtained. The partial schedule S_P(w̄(1)) may be changed into S_R(w̄(1)) according to the result of S_R(w(1)). Therefore, after local scheduling at t = 1, the global previous schedule S_P(1) is changed into S_R(1), called the global current schedule, consisting of S_R(w(1)) and S_R(w̄(1)), i.e. S_R(1) = S_R(w(1)) + S_R(w̄(1)), where the subscript R identifies the current schedule after local scheduling.

At the third step of the iteration, the local optimal schedule S_R(w(1)) is divided into two partial schedules, S_R(wl(1)) and S_R(w̄l(1)). The partial schedule S_R(wl(1)), consisting of the job set wl(1), includes the first β jobs of S_R(w(1)), and the partial schedule S_R(w̄l(1)) includes the remaining jobs of S_R(w(1)). S_R(wl(1)) is fixed. If we let e(t) denote the set of cumulatively fixed jobs at t and call the fixed schedule for e(t) the existing schedule S(e(t)), then wl(1) is exactly e(1) and S_R(wl(1)) is exactly S(e(1)). Apart from S(e(1)), the remaining partial schedule of S_R(1) is simply denoted S(ē(1)), which consists of S_R(w̄l(1)) and S_R(w̄(1)).

At the next iteration, t = 2, the global current schedule S_R(1) of the last iteration is taken as the global previous schedule S_P(2), i.e. S_P(2) = S_R(1). Thus S_P(2) consists of the existing schedule S(e(1)) and the remaining schedule S(ē(1)). S(ē(1)) has not been fixed and needs to be further rescheduled. The rolling window w(2) is identified based on S(ē(1)). At any t, the iteration repeats the same procedure as at t = 1.

(a) Identifying a rolling window: At t, there is a global previous schedule S_P(t) consisting of the existing schedule S(e(t−1)) and the remaining schedule S(ē(t−1)). The rolling window at t, denoted w(t), is identified as the first α jobs of S(ē(t−1)). In S_P(t), following the rolling window w(t), there is the later job set w̄(t). Therefore, after identifying the rolling window, S(ē(t−1)) is divided into two parts, i.e. S(ē(t−1)) = S_P(w(t)) + S_P(w̄(t)), and the global previous schedule S_P(t) is divided into three parts, i.e.

S_P(t) = S(e(t−1)) + S_P(w(t)) + S_P(w̄(t)).    (1)

These parts are shown in Fig. 1(a). (In Fig. 1, the length of a line indicates the completion time of the partial schedule for the corresponding subset.) The number of jobs in w(t), α, is a parameter of TPRS that controls the size of each sub-problem during the procedure. The number of jobs in each rolling window is kept constant except for the last rolling window, in which the number of jobs may be less than α due to insufficient remaining jobs. After identifying the rolling window w(t), local scheduling is executed based on w(t).

(b) Local scheduling and resetting the global schedule: At t, local scheduling is executed only for the jobs in the current rolling window w(t). The sub-problem at t is formulated as follows: find an optimal schedule S_R(w(t)) such that

min over S(w(t)) of { Σ_{i∈w(t)} C_i + |w̄(t)| · ΔC_{w(t)} },    (2)

where

ΔC_{w(t)} = max{0, C_{w(t)} − C^P_{w(t)}},    (3)

C_i denotes the completion time of job i in w(t), |w̄(t)| denotes the number of jobs in w̄(t), C_{w(t)} denotes the completion time of a feasible schedule S(w(t)), and C^P_{w(t)} denotes the completion time of the partial previous schedule S_P(w(t)). A sub-problem of the form (2)–(3) is called a terminal penalty sub-problem (TP sub-problem). In Eq. (2), the first term Σ_{i∈w(t)} C_i is the ΣC_i performance of the local schedule, which is compatible with the global performance. The second term |w̄(t)| · ΔC_{w(t)} is a penalty for the increase of the global performance caused by delaying the beginning times of the later jobs when the completion time of the local schedule is delayed; it takes the impact of the local schedule on the global objective into account. Adding the terminal penalty function therefore prevents the sub-problem from seeking only its own optimal schedule while ignoring its negative impact on the global performance. In addition, the penalty function has a simple form that requires only the number of jobs in w̄(t), which keeps the TP sub-problems tractable. If we denote the completion time of S_R(w(t)) by C^R_{w(t)}, then ΔC^R_{w(t)} = max{0, C^R_{w(t)} − C^P_{w(t)}}.

After local scheduling based on the rolling window, the sequence of jobs in S_P(w̄(t)) is unchanged, since these jobs are not rescheduled. However, if we let the global current schedule S_R(t) consist of the three partial schedules S(e(t−1)), S_R(w(t)), and S_P(w̄(t)), then S_R(t) may be infeasible or non-active: S_R(w(t)) and S_P(w̄(t)) would overlap if the completion time C^R_{w(t)} is later than C^P_{w(t)}, or leave unnecessary idle time if C^R_{w(t)} is earlier than C^P_{w(t)}. In order to keep the global schedule feasible and active, the beginning time of the first job in w̄(t) is shifted by Δ_f C^R_{w(t)}; we then obtain a partial schedule S_R(w̄(t)), which is the Δ_f C^R_{w(t)}-shift schedule of S_P(w̄(t)), i.e.

B^R_{w̄(t)} = B^P_{w̄(t)} + Δ_f C^R_{w(t)},    (4)

where

Δ_f C^R_{w(t)} = C^R_{w(t)} − C^P_{w(t)} if r_f ≤ C^R_{w(t)}, and Δ_f C^R_{w(t)} = r_f − C^P_{w(t)} otherwise,    (5)

r_f is the release time of the first job f in S_P(w̄(t)), and B^P_{w̄(t)} and B^R_{w̄(t)} are the beginning times of S_P(w̄(t)) and S_R(w̄(t)), respectively. According to (4) and (5), if ΔC^R_{w(t)} = C^R_{w(t)} − C^P_{w(t)} ≥ 0, then S_R(w̄(t)) is right-shifted by ΔC^R_{w(t)} from S_P(w̄(t)) and the performance of S_R(w̄(t)) increases. If C^R_{w(t)} − C^P_{w(t)} < 0 and r_f ≤ C^R_{w(t)}, then S_R(w̄(t)) is left-shifted by |C^R_{w(t)} − C^P_{w(t)}|; otherwise it is left-shifted by |r_f − C^P_{w(t)}|. In both of these cases the performance of S_R(w̄(t)) decreases. Thus at t, the global current schedule is reset to S_R(t), consisting of S(e(t−1)), S_R(w(t)), and S_R(w̄(t)), i.e.

S_R(t) = S(e(t−1)) + S_R(w(t)) + S_R(w̄(t)).    (6)
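As a concrete illustration of Eqs. (2)-(5), the sketch below evaluates the terminal-penalty objective for one candidate sequence of the window jobs and computes the shifted beginning time of the later jobs. The function names and arguments (for example, machine_free) are our own assumptions; in the paper the TP sub-problem is solved with a modified branch-and-bound procedure, which this evaluation routine does not attempt to reproduce.

```python
# Sketch (assumption): evaluating the TP objective of Eqs. (2)-(3) for a given
# window sequence, and the shift rule of Eqs. (4)-(5) for the later jobs.

from typing import List, Sequence, Tuple


def evaluate_tp_objective(sequence: Sequence[int],
                          release: List[float], proc: List[float],
                          machine_free: float,
                          n_later: int, C_P: float) -> Tuple[float, float]:
    """Return (TP objective of Eq. (2), window completion time C_{w(t)}) for an
    active schedule of the window jobs in the given order.
    n_later = |w_bar(t)|, C_P = completion time of S_P(w(t))."""
    t = machine_free
    total_completion = 0.0
    for j in sequence:
        t = max(t, release[j]) + proc[j]     # active timing with release dates
        total_completion += t
    delta_C = max(0.0, t - C_P)              # Eq. (3): terminal delay
    return total_completion + n_later * delta_C, t   # Eq. (2)


def shift_of_later_jobs(C_R: float, C_P: float, r_f: float, B_P: float) -> float:
    """Eqs. (4)-(5): new beginning time B_R of S_R(w_bar(t)); r_f is the release
    time of the first job f of w_bar(t), B_P the old beginning time."""
    if r_f <= C_R:
        delta_f = C_R - C_P                  # shift bounded by the new window completion
    else:
        delta_f = r_f - C_P                  # left shift stops at the release of job f
    return B_P + delta_f


if __name__ == "__main__":
    release = [0, 2, 4, 5]
    proc = [7, 3, 9, 2]
    obj, C_R = evaluate_tp_objective([0, 3, 1], release, proc,
                                     machine_free=0.0, n_later=1, C_P=10.0)
    print("TP objective:", obj, " window completion C_R:", C_R)
    print("new start of later jobs:",
          shift_of_later_jobs(C_R, C_P=10.0, r_f=4.0, B_P=10.0))
```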

(c) Fixing a partial schedule: At the third step, the local optimal schedule S_R(w(t)) is divided into two partial schedules, S_R(wl(t)) and S_R(w̄l(t)). The set wl(t) of the first β jobs in S_R(w(t)) forms the partial schedule S_R(wl(t)), which is fixed at this iteration; the remaining partial schedule of S_R(w(t)) is S_R(w̄l(t)). S(e(t−1)) and S_R(wl(t)) are merged into the existing schedule S(e(t)), which consists of all jobs fixed so far. Following S(e(t)), S_R(w̄l(t)) and S_R(w̄(t)) are merged into S(ē(t)). Therefore, the global current schedule S_R(t) can be regarded as consisting of the existing schedule S(e(t)) and the remaining schedule S(ē(t)). S(ē(t)) is not yet fixed and needs to be further rescheduled at the following iterations. At t + 1, S_R(t) is taken as the global previous schedule S_P(t + 1), i.e. S_P(t + 1) = S_R(t), and a new iteration proceeds, as shown in Fig. 1(b).

This process is repeated until the last rolling window. When the number of jobs in S(ē(t)) is no more than α, S(ē(t)) is assigned as the last rolling window; the optimal schedule of the last rolling window is completely fixed and merged into the existing schedule, and the global schedule is then completely fixed. The number β of jobs fixed at each iteration should be smaller than α. β is the second parameter of TPRS; it describes the step size of the rolling scheduling and controls the number of local scheduling problems in the procedure. Since β, like α, is kept constant, at each rolling window the number of fixed jobs equals the number of jobs taken into the next rolling window from the later job set. The number of local scheduling problems is M = ⌈(n − α)/β⌉ + 1. Thus the large-scale global scheduling problem with n jobs is decomposed into M small-scale local scheduling problems with no more than α jobs each.

2.3. Analysis of computational complexity

The computational complexity of TPRS is clearly polynomial in n. For the first stage, the complexity of the dispatching rule is O(n log n). For the second stage, since the sizes of the sub-problems are strictly restricted to no more than α jobs, the complexity of local scheduling using branch-and-bound is O(α! log α) [2], so the rolling scheduling can be implemented in O(α! log α · ⌈(n − α)/β⌉). Therefore, the computational complexity of TPRS is O(n log n + α! log α · ⌈(n − α)/β⌉). Although the two-stage TPRS requires some additional computational effort compared with RHP, mainly in the first stage, both TPRS and RHP have polynomial-time complexity of order lower than two in n for fixed α and β. By comparison, RDI [7] has complexity O(n^4 log n), so TPRS is advantageous in computational complexity.

3. Analysis of the global performances

At t, let J^P(t) be the ΣC_i performance of the global previous schedule S_P(t), and let J_{e(t−1)}, J^P_{w(t)}, and J^P_{w̄(t)} be the ΣC_i performances of S(e(t−1)), S_P(w(t)), and S_P(w̄(t)), respectively. According to (1), we have

J^P(t) = J_{e(t−1)} + J^P_{w(t)} + J^P_{w̄(t)}.    (7)

Let J^R(t) be the performance of the global current schedule S_R(t). Analogously, according to (6), we have

J^R(t) = J_{e(t−1)} + J^R_{w(t)} + J^R_{w̄(t)},    (8)

where J^R_{w(t)} and J^R_{w̄(t)} are the performances of S_R(w(t)) and S_R(w̄(t)), respectively. In the following, let C^P_i and C^R_i be the completion times of job i in the previous schedule S_P(t) and the current schedule S_R(t), respectively. To derive the conclusion about the global performance, three lemmas are given below.

Lemma 1. In TPRS, J^R_{w̄(t)} ≤ J^P_{w̄(t)} + |w̄(t)| · ΔC^R_{w(t)}.

Proof. If C^R_{w(t)} > C^P_{w(t)}, then to keep S_R(w̄(t)) feasible the jobs in S_R(w̄(t)) must be delayed, and the ΣC_i performance of S_R(w̄(t)) increases. The worst case is that all jobs in S_R(w̄(t)) are delayed by ΔC^R_{w(t)}, so that C^R_i = C^P_i + ΔC^R_{w(t)} for every i ∈ w̄(t). Since J^R_{w̄(t)} = Σ_{i∈w̄(t)} C^R_i, we have J^R_{w̄(t)} ≤ Σ_{i∈w̄(t)} (C^P_i + ΔC^R_{w(t)}) = Σ_{i∈w̄(t)} C^P_i + |w̄(t)| · ΔC^R_{w(t)} = J^P_{w̄(t)} + |w̄(t)| · ΔC^R_{w(t)}.

Lemma 2. In TPRS, J^R_{w(t)} + |w̄(t)| · ΔC^R_{w(t)} ≤ J^P_{w(t)}.

Proof. Since S_R(w(t)) is an optimal solution of the TP sub-problem (2), for any feasible schedule S(w(t)) we have J^R_{w(t)} + |w̄(t)| · ΔC^R_{w(t)} ≤ J_{w(t)} + |w̄(t)| · ΔC_{w(t)}. If we let S(w(t)) be S_P(w(t)), then J_{w(t)} = J^P_{w(t)} and ΔC_{w(t)} = max{0, C^P_{w(t)} − C^P_{w(t)}} = 0; therefore J^R_{w(t)} + |w̄(t)| · ΔC^R_{w(t)} ≤ J^P_{w(t)}.

From Lemmas 1 and 2 we immediately obtain the following.

Lemma 3. In TPRS, J^R_{w(t)} + J^R_{w̄(t)} ≤ J^P_{w(t)} + J^P_{w̄(t)}.

Theorem 1. In TPRS for 1/r_i/ΣC_i, if the initial schedule S is feasible, then at each iteration the ΣC_i performance of the global current schedule is no worse than that of the global previous schedule, so the performance improves monotonically as the iterations proceed. The global ΣC_i performance of the ultimate solution is no worse than that of the initial schedule, and hence the initial schedule is an upper bound on the TPRS solution.

Proof. If the initial schedule is feasible, let J^S be the ΣC_i performance of S; J^S is also the ΣC_i performance of the global previous schedule S_P(1). According to (7), since e(0) is empty and J_{e(0)} = 0, J^P(1) = J^P_{w(1)} + J^P_{w̄(1)}. After the first local scheduling, the ΣC_i performance of the current schedule S_R(1) is J^R(1) = J^R_{w(1)} + J^R_{w̄(1)}. By Lemma 3, J^R(1) ≤ J^P(1) = J^S.

At t, since S_R(t − 1) = S_P(t), we have from (7) that J^R(t − 1) = J^P(t) = J_{e(t−1)} + J^P_{w(t)} + J^P_{w̄(t)}. After local scheduling at t, from (8), J^R(t) = J_{e(t−1)} + J^R_{w(t)} + J^R_{w̄(t)}. From Lemma 3, J^R(t) ≤ J^P(t) = J^R(t − 1). Finally, we have

J* ≤ J^R(M) ≤ J^R(M − 1) ≤ · · · ≤ J^R(1) ≤ J^P(1) = J^S,

where J* is the ΣC_i performance of the global optimal solution.
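The monotonicity argument above can be sanity-checked numerically. The sketch below is a self-contained, simplified rolling loop written under stated assumptions: the TP sub-problems are solved by brute-force enumeration (feasible because the window is small), the later jobs are simply re-timed actively after the window rather than shifted exactly as in Eqs. (4)-(5), and all names are illustrative rather than the authors'. On random instances the recorded trace of global performances J^R(t) should be non-increasing, as Theorem 1 asserts.

```python
# Sketch (assumptions: brute-force local solver, active re-timing of later jobs)
# that rolls a TPRS-style procedure and checks the monotone improvement of Theorem 1.

import itertools
import random
from typing import List, Tuple


def time_sequence(seq: List[int], r: List[float], p: List[float],
                  start: float) -> List[float]:
    """Active completion times of `seq` on a machine that becomes free at `start`."""
    t, C = start, []
    for j in seq:
        t = max(t, r[j]) + p[j]
        C.append(t)
    return C


def spt_sequence(r: List[float], p: List[float]) -> List[int]:
    """Non-delay SPT dispatching rule used for the initial schedule."""
    remaining, t, seq = set(range(len(p))), 0.0, []
    while remaining:
        released = [j for j in remaining if r[j] <= t]
        if not released:
            t = min(r[j] for j in remaining)
            continue
        j = min(released, key=lambda k: p[k])
        t = max(t, r[j]) + p[j]
        seq.append(j)
        remaining.remove(j)
    return seq


def tprs(r: List[float], p: List[float],
         alpha: int, beta: int) -> Tuple[List[int], List[float]]:
    """Rolling stage of a TPRS-style procedure on top of an SPT initial schedule.
    Returns the final sequence and the trace of global performances."""
    seq = spt_sequence(r, p)
    fixed, rest = [], seq[:]                           # e(t-1) and the not-yet-fixed part
    machine_free = 0.0
    trace = [sum(time_sequence(seq, r, p, 0.0))]       # J of the initial schedule
    while rest:
        window, later = rest[:alpha], rest[alpha:]     # w(t) and w_bar(t)
        C_P = time_sequence(window, r, p, machine_free)[-1]
        best, best_obj = None, float("inf")
        for perm in itertools.permutations(window):    # exact TP sub-problem (alpha small)
            C = time_sequence(list(perm), r, p, machine_free)
            obj = sum(C) + len(later) * max(0.0, C[-1] - C_P)   # Eqs. (2)-(3)
            if obj < best_obj:
                best, best_obj = list(perm), obj
        rest = best + later
        # global performance of the current schedule S_R(t)
        trace.append(sum(time_sequence(fixed, r, p, 0.0))
                     + sum(time_sequence(rest, r, p, machine_free)))
        # fix the first beta jobs (all remaining jobs in the last window)
        k = len(rest) if len(rest) <= alpha else beta
        fixed += rest[:k]
        machine_free = time_sequence(fixed, r, p, 0.0)[-1]
        rest = rest[k:]
    return fixed, trace


if __name__ == "__main__":
    random.seed(1)
    n, rho = 20, 0.8
    p = [float(random.randint(1, 100)) for _ in range(n)]
    r = [random.uniform(1, 50.5 * n * rho) for _ in range(n)]
    final_seq, J = tprs(r, p, alpha=5, beta=2)
    assert all(J[t + 1] <= J[t] + 1e-9 for t in range(len(J) - 1)), "monotonicity violated"
    print("initial J:", round(J[0]), " final J:", round(J[-1]))
```

Because the local solver is exact and the later jobs are never delayed by more than the window's terminal delay under active re-timing, the assertion mirrors Lemmas 1-3; it is a sanity check of the argument, not a reproduction of the authors' experiments.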

4. Computational results and discussion

In this section, we present the results of a series of computational experiments that show the effectiveness of TPRS. In Section 4.1, the improvements of TPRS over its initial schedule are presented; TPRS is then compared with the RDI of [7] and the RHP of [9] in Sections 4.2 and 4.3, respectively. Since the best parameter selection for the RHP in [9] is {12, 5, 2}, the parameter pair {α, β} of TPRS is set to {17, 2} in accordance with that suggestion. The branch-and-bound procedure of Dessouky and Deogun [2] was modified to solve the sub-problems with terminal penalty functions: the dominance rules of that procedure can be shown to remain valid for the TP sub-problems, and the lower and upper bounds can be obtained by modifying those of [2]. All procedures were coded in C, and all tests were run on a computer with a Pentium 4-M 1.80 GHz CPU under Windows XP.

Problems were randomly generated in a format similar to that used by Suresh et al. [7,9]. Processing times p_i were generated from a discrete uniform distribution between 1 and 100, so the expected processing time of each job is 50.5. For a problem with n jobs, the release dates were generated from a discrete uniform distribution between 1 and an upper limit UL = 50.5nρ. The range parameter ρ controls how rapidly jobs are expected to arrive for processing: a small ρ value means that jobs arrive rapidly, and a large ρ value means that jobs arrive slowly. A total of ten different ρ values, varying from 0.20 to 3.00, were considered.

4.1. Testing the improvements over the initial schedule in TPRS

In TPRS, a dispatching rule is used to generate the initial schedule, just as in the RDI of [7]. Among the four dispatching rules mentioned in the introduction, SPT and APRTF tend to perform better than the other two in most cases [9]. Although APRTF tends to identify better solutions than SPT when jobs arrive slowly, we prefer SPT for generating the initial solution of TPRS, because SPT is better when jobs arrive rapidly, which is where TPRS is more likely to make large improvements. In [7], RDI with an SPT initial solution is denoted RDI/SPT; analogously, TPRS with an SPT initial solution is denoted TPRS/SPT.

Problems of five sizes, with 50, 100, 150, 200 and 250 jobs, were generated to test the improvements of TPRS/SPT over the initial SPT solution. The percentage improvement was calculated as (SPT − TPRS)/TPRS. One thousand instances were generated over the combinations of problem size and range value, with twenty instances tested for each problem size at each of the ten ρ values. Table 1 presents the average and maximum improvements of TPRS/SPT over SPT for the 50-job and 250-job problems, which represent the smaller and larger sizes, respectively. The average percentage improvements of TPRS/SPT over SPT for all five sizes and all ten range parameters are shown in Tables 2 and 3.

The computational results show that the solutions obtained by TPRS/SPT are never worse than their initial SPT schedules, with different amounts of improvement for different problem parameters; this verifies Theorem 1. It is also observed that the improvements are smaller when the problems are larger, probably because for larger problems the initial schedules created by SPT are consistently better and there is less room for improvement. This is also visible in Table 2, where the improvements decrease as the problem size grows.
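For reference, the following is a minimal sketch of the instance-generation scheme described at the start of this section. The function name is our own, and the upper limit UL = 50.5·n·ρ and the symbol ρ for the range parameter follow the reconstruction used in this text.

```python
# Sketch (assumption): random instance generator following the experimental setup
# described above (discrete uniform processing times and release dates).

import random
from typing import List, Tuple


def generate_instance(n: int, rho: float, seed: int = 0) -> Tuple[List[int], List[int]]:
    """Processing times p_i ~ U{1,...,100} (mean 50.5); release dates
    r_i ~ U{1,...,UL} with UL = 50.5 * n * rho, so a small rho means jobs
    arrive rapidly and a large rho means they arrive slowly."""
    rng = random.Random(seed)
    p = [rng.randint(1, 100) for _ in range(n)]
    upper = max(1, int(round(50.5 * n * rho)))
    r = [rng.randint(1, upper) for _ in range(n)]
    return r, p


if __name__ == "__main__":
    r, p = generate_instance(n=50, rho=0.6, seed=42)
    print("first five (r_i, p_i) pairs:", list(zip(r, p))[:5])
```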

Table 1
Improvements of TPRS/SPT over SPT

                      50-job problems              250-job problems
Range parameter ρ     Average (%)   Maximum (%)    Average (%)   Maximum (%)
0.20                  0.121         0.184          0.006         0.011
0.40                  0.123         0.134          0.012         0.035
0.60                  0.119         0.226          0.014         0.031
0.80                  0.149         0.382          0.011         0.027
1.00                  0.200         0.271          0.020         0.035
1.25                  0.173         0.243          0.022         0.039
1.50                  0.128         0.204          0.020         0.029
1.75                  0.175         0.271          0.017         0.024
2.00                  0.129         0.183          0.013         0.021
3.00                  0.041         0.050          0.009         0.013

Note: entries are the average and maximum percentage improvements over SPT.

Table 2
Improvements of TPRS/SPT over SPT by number of jobs

Number of jobs (n)     50      100     150     200     250
Average percentage     0.136   0.044   0.025   0.019   0.015

Table 3
Improvements of TPRS/SPT over SPT by arrival time range

Range parameter (ρ)    0.20    0.40    0.60    0.80    1.00    1.25    1.50    1.75    2.00    3.00
Average percentage     0.032   0.038   0.042   0.053   0.068   0.065   0.053   0.057   0.044   0.011

This indicates that SPT performs better as the problem size grows. As claimed by Posner [12], SPT is asymptotically optimal as the problem size approaches infinity; since TPRS improves any initial schedule created by SPT, TPRS/SPT is asymptotically optimal as well. In Table 3, it can be observed that the improvements of TPRS/SPT over SPT are larger for problems with ρ values from 0.60 to 1.25, which are considered the hardest problems [3]. This shows that TPRS differs from RDI, whose performance is not obviously affected by the ρ value.

As a procedure for improving an existing initial schedule, TPRS has another advantage: it can be executed repeatedly, taking the newly obtained schedule as the new initial schedule. Theorem 1 ensures that each new schedule is no worse than the previous one, so TPRS can improve the initial schedule more than once. It may, however, be unnecessary to run TPRS again once the solution is good enough, because a small further improvement may not be worth the additional computational cost.

Table 4
Comparison of TPRS/SPT and RDI/SPT: 50-job problems

                      TPRS/SPT better than RDI/SPT             RDI/SPT better than TPRS/SPT
Range parameter ρ     Number   Average (%)   Maximum (%)       Number   Average (%)   Maximum (%)
0.20                  5        0.232         1.047             6        0.904         2.423
0.40                  7        0.131         0.294             5        0.283         0.371
0.60                  12       0.173         0.325             1        0.132         0.132
0.80                  15       0.187         0.366             2        0.011         0.020
1.00                  20       0.144         0.347             0        —             —
1.25                  15       0.098         0.183             0        0             0
1.50                  12       0.115         0.167             1        0.042         0.042
1.75                  11       0.081         0.218             0        0             0
2.00                  16       0.038         0.091             0        0             0
3.00                  2        0.014         0.018             0        0             0

Note: "Number" is the number of times out of 20 instances; "Average" and "Maximum" are the average and maximum percentage improvements.

Table 5
Comparison of TPRS/SPT and RDI/SPT: 250-job problems

                      TPRS/SPT better than RDI/SPT             RDI/SPT better than TPRS/SPT
Range parameter ρ     Number   Average (%)   Maximum (%)       Number   Average (%)   Maximum (%)
0.20                  11       0.008         0.012             7        0.130         0.501
0.40                  17       0.009         0.019             2        0.001         0.002
0.60                  16       0.012         0.021             4        0.020         0.073
0.80                  19       0.011         0.031             1        0.001         0.001
1.00                  20       0.013         0.033             0        —             —
1.25                  18       0.010         0.023             1        0.004         0.004
1.50                  19       0.009         0.017             0        0             0
1.75                  16       0.008         0.014             1        0.002         0.002
2.00                  18       0.008         0.015             0        0             0
3.00                  12       0.004         0.008             0        0             0

Note: "Number" is the number of times out of 20 instances; "Average" and "Maximum" are the average and maximum percentage improvements.

4.2. Comparing TPRS/SPT with RDI/SPT

RDI/SPT and TPRS/SPT were compared on the same instances for the 50-job and 250-job problems. The percentage improvement was calculated as (RDI − TPRS)/TPRS when TPRS was better than RDI, and as (TPRS − RDI)/RDI when RDI was better than TPRS. Each entry was obtained from the statistics of 20 instances. The computational results are presented in Tables 4 and 5. For each ρ value, the left-hand side of Table 4 reports the number of times that TPRS found a better solution than RDI, the average magnitude by which these solutions were better, and the maximum

improvement. The right-hand side of Table 4 reports the corresponding results when RDI found a better solution than TPRS. Table 5 reports the same comparison for the 250-job problems.

Tables 4 and 5 show that TPRS/SPT is almost consistently better than RDI/SPT for problems with ρ > 0.60; in particular, for ρ values from 0.80 to 2.00, RDI/SPT rarely outperforms TPRS/SPT. On the other hand, the performance of TPRS is rather poorer than that of RDI for some problems with the smallest ρ values, and there are also problems on which TPRS/SPT and RDI/SPT perform identically, especially for the smallest and largest ρ values. TPRS thus seems to perform best when the arrival time range is moderate. Although the improvements, both average and maximum, amount to at most a few percentage points, TPRS is advantageous in most cases. For the 50-job problems the improvements of TPRS/SPT are larger, and for the 250-job problems they are smaller; however, the number of problems on which TPRS/SPT found a better solution than RDI/SPT is larger for the 250-job problems. Although the computational results show that TPRS requires more computational effort than RDI for most problems, its lower-order computational complexity is beneficial as the problem size grows.

4.3. Comparing TPRS/SPT with RHP

Following the suggestion of [9], the parameters {x, y, z} of the RHP are set to {12, 5, 2}, which means that each sub-problem includes at most 12 future jobs and 5 current jobs, and that 2 jobs are fixed per iteration; the 2 fixed jobs play the same role as the β jobs in the parameters {α, β} of TPRS. The parameters of TPRS/SPT are set to {α, β} = {17, 2}, so that the largest sub-problems of the two procedures both contain 17 jobs and the procedures are comparable. Problems with 50 and 250 jobs were generated: four hundred instances in total over the combinations of problem size and range value, with twenty instances for each of the ten ρ values from 0.2 to 3.0. Each instance was solved with the RHP and then immediately with TPRS/SPT. The computational results are shown in Tables 6 and 7. For each ρ value, the left-hand side of these tables reports the number of times that TPRS/SPT found a better solution than RHP, the average amount by which these solutions were better, and the maximum improvement; the percentage improvement was calculated as (RHP − TPRS)/TPRS. The right-hand side reports the corresponding results when RHP found a better solution than TPRS/SPT, with the percentage improvement calculated as (TPRS − RHP)/RHP.

Tables 6 and 7 show that TPRS/SPT is consistently better than RHP for almost all ρ values. In particular, TPRS/SPT clearly outperforms RHP for ρ ≤ 1.0, which is exactly the case in which RHP performs poorly in [9]. These observations demonstrate that accounting for the correlation and consistency between local and global performances in the sub-problems of TPRS improves the solution quality of RHP, especially in its worst cases. The advantage of TPRS over RHP is more pronounced when ρ is smaller, because the terminal penalty of TPRS is more effective for rapidly arriving jobs. Note that the performances of TPRS/SPT are close to those of RHP for most instances with larger ρ values, which is the case in which RHP performs well in [9]. Thus TPRS/SPT makes large improvements over RHP where RHP performs poorly, while performing no worse than RHP where RHP performs well. Although the performances of TPRS/SPT are slightly poorer than those of RHP on a few

Table 6
Comparison of TPRS/SPT and RHP: 50-job problems

                      TPRS/SPT better than RHP                 RHP better than TPRS/SPT
Range parameter ρ     Number   Average (%)   Maximum (%)       Number   Average (%)   Maximum (%)
0.20                  20       0.931         1.249             0        0             0
0.40                  20       1.649         1.739             0        0             0
0.60                  12       0.661         1.064             0        0             0
0.80                  19       0.316         1.715             1        0.003         0.003
1.00                  20       0.151         0.212             0        0             0
1.25                  6        0.079         0.084             0        0             0
1.50                  0        0             0                 0        0             0
1.75                  3        0.021         0.032             0        0             0
2.00                  0        0             0                 0        0             0
3.00                  0        0             0                 0        0             0

Note: "Number" is the number of times out of 20 instances; "Average" and "Maximum" are the average and maximum percentage improvements.

Table 7
Comparison of TPRS/SPT and RHP: 250-job problems

                      TPRS/SPT better than RHP                 RHP better than TPRS/SPT
Range parameter ρ     Number   Average (%)   Maximum (%)       Number   Average (%)   Maximum (%)
0.20                  20       0.616         1.282             0        0             0
0.40                  20       0.521         1.235             0        0             0
0.60                  20       0.364         0.645             0        0             0
0.80                  20       0.332         0.490             0        0             0
1.00                  20       0.181         0.398             0        0             0
1.25                  18       0.025         0.110             2        0.004         0.005
1.50                  19       0.012         0.029             0        0             0
1.75                  18       0.008         0.031             2        0.001         0.001
2.00                  20       0.004         0.015             0        0             0
3.00                  9        0.001         0.001             0        0             0

Note: "Number" is the number of times out of 20 instances; "Average" and "Maximum" are the average and maximum percentage improvements.

instances, this occurs too rarely to change the fact that the global performances of TPRS/SPT are consistently better than those of RHP.

Tables 6 and 7 also show the effect of the problem size on the improvement of TPRS/SPT over RHP. The improvements for 50-job problems with small ρ values are larger than those for 250-job problems, but the improvement magnitudes for the 50-job problems also vary more than those for the 250-job problems. The likely reason is that TPRS/SPT is built on the initial schedule identified by SPT and therefore behaves similarly to SPT; this indicates that the solution quality of TPRS depends to a large extent on the quality of the initial schedule.

Regarding computational effort, since TPRS is a two-stage procedure and terminal penalty functions are added to the sub-problems, the computational effort of TPRS/SPT is higher than that of RHP. The increase in computational time appears mainly for problems with smaller ρ values, but these are also the problems for which the larger improvements in solution quality are achieved. The study of Suresh et al. [9] suggests that if ρ > 1 is known in advance, simply using RHP is a better choice, while if ρ < 1 is known, the best dispatching rule may be preferred. In practice, however, the ρ value of a problem is usually unknown, so one would have to compute solutions with both RHP and a dispatching rule and then select the better one; the computational effort is then that of a two-stage procedure even though neither stage improves on the other's result. TPRS/SPT, by contrast, combines the advantages of RHP and SPT: its ultimate solution is better than that of either procedure in most situations, its improvement over SPT is guaranteed by Theorem 1, and this guarantee holds regardless of the ρ value, the parameters {α, β}, or the choice of initial schedule. This reflects the generality of the idea behind TPRS.

5. Conclusions and future research

For the single-machine scheduling problem in a deterministic rescheduling environment, the two-stage TPRS procedure has been presented as a special RHP based on an initial schedule. Since the global information is known at the decision point, an initial schedule can be generated by a dispatching rule at low computational cost, and a rolling scheduling procedure is then applied to improve this initial schedule with a limited computational burden. The rolling windows of TPRS, built on the initial schedule, keep a fixed size in order to restrict the computational complexity, and the sub-problem at each iteration addresses only the jobs in the rolling window. A terminal penalty function is appended to the objective function of each sub-problem in order to account for the correlation and consistency between the local objective and the global one. Theorem 1 ensures that the solution of TPRS is no worse than the initial schedule. Extensive computational results indicate that TPRS/SPT is effective and better than the existing procedures in most situations.

TPRS has two clear advantages: a controllable computational effort, and a generic rolling mechanism that can be applied to other problems and other objective functions. Ovacik and Uzsoy [10] showed that the RHP is effective for a single-machine scheduling problem minimizing the maximum lateness, and a similar extension can be expected for TPRS; terminal penalty functions may also be used for other problems and objectives. While such procedures exist for single-machine scheduling problems, there are many parallel-machine scheduling problems for which developing such a procedure remains challenging; this is the next research area to be explored. Moreover, since RHPs are heuristics, few general analyses have been devoted to improving their solution quality, and this paper is a preliminary attempt in that direction. Finally, TPRS represents a kind of rescheduling heuristic, and its idea may be used to develop rescheduling strategies with both efficiency and stability as criteria.

References

[1] Lawler EL, Lenstra JK, Rinnooy Kan AHG, Shmoys DB. Sequencing and scheduling: algorithms and complexity. In: Graves SC, Rinnooy Kan AHG, Zipkin P, editors. Handbooks in operations research and management science, vol. 4: Logistics of production and inventory. Amsterdam: North-Holland; 1993. p. 445–522.

[2] Dessouky MI, Deogun JS. Sequencing jobs with unequal ready times to minimize mean flow time. SIAM Journal on Computing 1981;10(1):192–202.
[3] Hariri AMA, Potts CN. An algorithm for single machine sequencing with release dates to minimize total weighted completion time. Discrete Applied Mathematics 1983;5:99–109.
[4] Chu C. A branch-and-bound algorithm to minimize total flow time with unequal release dates. Naval Research Logistics 1992;39:859–75.
[5] Chandra R. On n/1/F̄ dynamic deterministic problems. Naval Research Logistics Quarterly 1979;26:537–44.
[6] Chu C. Efficient heuristics to minimize total flow time with release dates. Operations Research Letters 1992;12:321–30.
[7] Suresh C, Rodney T, Reha U. An iterative heuristic for the single machine dynamic total completion time scheduling problem. Computers & Operations Research 1996;23(7):641–51.
[8] Suresh C, Rodney T, Reha U. Single-machine scheduling with dynamic arrivals: decomposition results and an improved algorithm. Naval Research Logistics 1996;43:709–19.
[9] Suresh C, Rodney T, Reha U. Rolling horizon procedures for the single machine deterministic total completion time scheduling problem with release dates. Annals of Operations Research 1997;70:115–25.
[10] Ovacik IM, Uzsoy R. Rolling horizon algorithms for a single-machine dynamic scheduling problem with sequence-dependent setup times. International Journal of Production Research 1994;32(6):1243–63.
[11] Fang J, Xi Y. Rolling horizon job shop rescheduling strategy in the dynamic environment. International Journal of Advanced Manufacturing Technology 1997;13:227–32.
[12] Posner M. The deadline constrained weighted completion time problem: analysis of a heuristic. Operations Research 1988;36:742–6.