Theoretical Computer Science 583 (2015) 67–77
www.elsevier.com/locate/tcs
Preemptive scheduling on identical machines with delivery coordination to minimize the maximum delivery completion time

Youjun Chen a,b, Lingfa Lu a, Jinjiang Yuan a,∗

a School of Mathematics and Statistics, Zhengzhou University, Zhengzhou, Henan 450001, People’s Republic of China
b College of Mathematics and Information Science, North China University of Water Resources and Electric Power, Zhengzhou, Henan 450045, People’s Republic of China

Article info

Article history: Received 31 January 2015; Accepted 30 March 2015; Available online 3 April 2015. Communicated by D.-Z. Du.

Keywords: Preemptive scheduling; Identical machines; Delivery coordination; NP-hard; Approximation algorithm

Abstract. In this paper, we consider a two-stage scheduling problem on identical machines in which the jobs are first processed preemptively on m identical machines at a manufacturing facility and then delivered to their customers by one vehicle which can deliver one job at each shipment. The objective is to minimize the maximum delivery completion time, i.e., the time when all the jobs have been delivered to their respective customers and the vehicle has returned to the facility. We first show that the problem is strongly NP-hard. We then present a 3/2-approximation algorithm and show that the bound is tight. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

This paper studies a two-stage scheduling problem in which the first stage is job production and the second stage is job delivery. The modern business market has become more and more competitive, and in order to stay competitive, companies have to reduce their storage costs. That is, finished jobs need to be transported as soon as possible to another machine for further processing or to their customers. Due to its importance in the manufacturing industry, machine scheduling with delivery coordination has been widely studied over the last twenty years. According to the transportation function, problems on this topic can be classified into two types (see Lee and Chen [12]). The first type (type-1) involves intermediate transportation of unfinished jobs from one machine to another for further processing. The second type (type-2) involves outbound transportation of finished jobs from the machine(s) to the customer(s).

The earliest scheduling paper with type-1 transportation is the one by Maggu and Das [17]. They studied a two-machine flow-shop scheduling problem to minimize the makespan. In this problem, both machines have unlimited buffers and there are enough vehicles to transport jobs from the first machine to the other. They solved the problem by using a generalization of Johnson's rule [10]. Maggu et al. [18] considered the same problem but imposed additional constraints on the sequence of the processing jobs. Kise [11] studied a similar problem in which there is only one vehicle and the vehicle can transport one job at a time. Stern and Vitner [24], Ganesharajah et al. [3], Panwalkar [19], and

∗ Corresponding author. E-mail address: [email protected] (J. Yuan).

http://dx.doi.org/10.1016/j.tcs.2015.03.046


Gemmill and Stevens [6] considered a similar problem with limited buffer spaces and unit vehicle capacity. Hurink and Knust [7] independently studied a similar problem in which the return time of the vehicle is zero.

The earliest scheduling paper with type-2 transportation is the one by Potts [20]. He studied a single-machine scheduling problem with unequal job arrival times and delivery times to minimize the makespan. In this problem, it is assumed that there are sufficiently many vehicles so that each finished job can be delivered individually and immediately to its customer. The author presented a heuristic algorithm with a worst-case performance ratio of 3/2 for the problem. Hall and Shmoys [8] presented two polynomial-time approximation schemes for the same problem. Woeginger [25] studied the same problem in the parallel-machine environment with equal job arrival times. Strusevich [21] considered a two-machine open-shop scheduling problem. For this problem, the author presented a heuristic algorithm with a worst-case performance ratio of 3/2. Lee and Chen [12] studied several scheduling problems with type-2 transportation. However, in their problems, there are v vehicles with the same capacity c to transport all finished jobs. Thus, when a job completes its processing, it may have to wait until some vehicle becomes available. Soukhal et al. [22] and Yuan et al. [28] studied the two-machine flow-shop scheduling problem to minimize the makespan. They showed that the problem is binary NP-hard when c = 2 and is strongly NP-hard when c ≥ 3, even if the jobs have an equal processing time on the first machine and all jobs have equal transportation time. Lu et al. [15] considered single-machine scheduling with release dates in which only one vehicle can be used to deliver all jobs to a single customer. They showed that the problem is strongly NP-hard for each fixed c ≥ 1 and gave a heuristic with a tight worst-case performance ratio of 5/3.
Wang and Cheng [26] considered scheduling problems with an unavailable interval on the machine(s). For the single-machine scheduling problem, they showed that this problem is NP-hard and proposed a heuristic with a tight worst-case performance ratio of 3/2. For the two-parallel-machine scheduling problem, they proposed a heuristic with a worst-case performance ratio of 5/3. Wang and Cheng [27] further studied a single-machine scheduling problem in which there is a capacitated vehicle to transport unprocessed jobs from the supplier's warehouse to the factory and another capacitated vehicle to deliver finished jobs to the customer. They showed that this problem is NP-hard in the strong sense and proposed a heuristic with a tight performance ratio of 2. Dong et al. [2] considered a two-machine open-shop problem with one customer. They gave two algorithms with worst-case performance ratios of 2 and 3/2 when c ≥ 2 and c = 1, respectively. Chang and Lee [1] extended Lee and Chen's model in [12] by considering the situation where each job might occupy a different amount of physical space in a vehicle. They provided a heuristic with a worst-case performance ratio of 5/3 for the single-machine scheduling problem and a heuristic with a worst-case performance ratio of 2 for the two-parallel-machine scheduling problem. For the single-machine scheduling problem, He et al. [9] presented an improved approximation algorithm with a worst-case performance ratio of 53/35. For the same problem, Lu and Yuan [13] provided a heuristic with the best-possible worst-case performance ratio of 3/2. Lu and Yuan [14] also extended Chang and Lee's problem in [1] to an unbounded parallel-batch machine. They showed that the problem is strongly NP-hard and gave a heuristic with a worst-case performance ratio of 7/4. For the two-parallel-machine scheduling problem, Zhong et al. [29] presented an improved algorithm with a worst-case ratio of 5/3 and Su et al. [23] proposed a heuristic with a worst-case performance ratio of 63/40, except for two particular cases.

In this paper, we consider a two-stage scheduling problem in which a set N = {1, 2, · · · , n} of n jobs are first processed preemptively on m identical machines and then delivered to their customers by only one vehicle, which can deliver one job at each shipment. A schedule for the problem includes a scheme for the preemptive processing of the n jobs on the m machines and a scheme for the delivery of the n jobs, where a job j can be delivered only if it has completed its processing and the vehicle is available. The objective is to minimize the maximum delivery completion time, i.e., the time when all the jobs have been delivered to their respective customers and the vehicle has returned to the facility. Let D_j be the delivery completion time of job j, i.e., the time when job j is delivered to its customer and the vehicle returns to the facility. We use D_max = max{D_j : j ∈ N} to denote the maximum delivery completion time of all jobs. Following the classification scheme for scheduling problems by Graham et al. [5], the problem under consideration is denoted by P|pmtn|D_max. We show in this paper that the problem is strongly NP-hard and present a 3/2-approximation algorithm.

The paper is organized as follows. In Section 2, we provide some useful notations and lemmas. The NP-hardness proof is given in Section 3 and the approximation algorithm is presented in Section 4.

2. Preliminaries

The following notations are used in this paper.

• N = {1, 2, · · · , n} is the set of n jobs to be scheduled.
• p_j is the processing time of job j.
• q_j is the round-trip delivery time of job j between the machine and the customer.
• p_max(J) = max{p_j : j ∈ J} is the maximum processing time of the jobs in J ⊆ N.
• p_min(J) = min{p_j : j ∈ J} is the minimum processing time of the jobs in J ⊆ N.
• q_max(J) = max{q_j : j ∈ J} is the maximum delivery time of the jobs in J ⊆ N.
• q_min(J) = min{q_j : j ∈ J} is the minimum delivery time of the jobs in J ⊆ N.

• p(J) = Σ_{j∈J} p_j is the sum of the processing times of the jobs in J ⊆ N.
• q(J) = Σ_{j∈J} q_j is the sum of the delivery times of the jobs in J ⊆ N.
• J_≥(k) = {j ∈ N : p_j ≥ p_k} for k ∈ N.
• S_j(σ) is the processing starting time of job j on the m machines in schedule σ.
• C_j(σ) is the processing completion time of job j on the m machines in schedule σ.
• C_max(σ) = max{C_j(σ) : 1 ≤ j ≤ n} is the makespan of schedule σ.
• τ_j(σ) is the departure time (i.e., the delivery starting time) of job j in schedule σ. In our discussion, we require that σ is a feasible schedule. Then τ_j(σ) ≥ C_j(σ) and the vehicle is available at time τ_j(σ).
• D_j(σ) = τ_j(σ) + q_j is the delivery completion time of job j in schedule σ.
• D_max(σ) = max{D_j(σ) : 1 ≤ j ≤ n} is the maximum delivery completion time of all jobs in schedule σ.
• σ(i) is the i-th delivered job in schedule σ, 1 ≤ i ≤ n.

Lemma 2.1. There is an optimal schedule σ of problem P|pmtn|D_max such that τ_{σ(i)}(σ) = max{C_{σ(i)}(σ), D_{σ(i−1)}(σ)} for 1 ≤ i ≤ n, where D_{σ(0)}(σ) = 0. Consequently, C_{σ(1)}(σ) ≤ C_{σ(2)}(σ) ≤ · · · ≤ C_{σ(n)}(σ).

Proof. Let σ be an optimal schedule and let e = e(σ) be the largest index i with 1 ≤ i ≤ n so that τ_{σ(j)}(σ) = max{C_{σ(j)}(σ), D_{σ(j−1)}(σ)} for each 1 ≤ j ≤ i. For our purpose, we may assume that σ is chosen so that e(σ) is as large as possible. If e = n, then the result holds for σ. Suppose in the following that e ≤ n − 1. Since σ(i) cannot be delivered before times C_{σ(i)}(σ) and D_{σ(i−1)}(σ), we have τ_{σ(i)}(σ) ≥ max{C_{σ(i)}(σ), D_{σ(i−1)}(σ)} for 1 ≤ i ≤ n. Then τ_{σ(i)}(σ) = max{C_{σ(i)}(σ), D_{σ(i−1)}(σ)} for 1 ≤ i ≤ e and τ_{σ(e+1)}(σ) > max{C_{σ(e+1)}(σ), D_{σ(e)}(σ)}. Let π be a new schedule obtained from σ by revising the departure time of job σ(e+1) to max{C_{σ(e+1)}(σ), D_{σ(e)}(σ)}. It can be observed that π is feasible and D_max(π) ≤ D_max(σ). This implies that π is also an optimal schedule. But then e(π) ≥ e + 1 = e(σ) + 1, contradicting the choice of σ. The lemma follows. □

For the classical scheduling problem P|pmtn|C_max, McNaughton [16] showed that the optimal objective value of the problem is given by max{(1/m)·p(N), p_max(N)}. In addition, McNaughton [16] presented an algorithm, which we call LS-Pmtn, that solves problem P|pmtn|C_max optimally in O(n + m) time. Let LS = (1, 2, · · · , n) be an arbitrary list of the jobs in N. The algorithm LS-Pmtn can be described as follows.

LS-Pmtn: Set max{(1/m)·p(N), p_max(N)} to be the time bound. Then schedule the n jobs according to the list LS on the m machines successively. Whenever the time bound is met and the scheduling is not completed, split the current job (if necessary) into two pieces and schedule the second piece of the preempted job on the next machine from time zero.
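The wrap-around rule is simple enough to state in a few lines of code. The following is a minimal sketch of LS-Pmtn written for this exposition (the function name ls_pmtn and the piece representation are ours, not the paper's); exact rational arithmetic is used because the time bound p(N)/m need not be an integer.

```python
# A sketch (not the authors' code) of McNaughton's wrap-around rule
# LS-Pmtn for P|pmtn|C_max: fill machine 1 up to the time bound T,
# split the job that crosses T, and continue its second piece on the
# next machine from time 0.
from fractions import Fraction

def ls_pmtn(p, m):
    """Return (T, machines): the optimal makespan and, per machine,
    a list of (job, start, end) pieces."""
    T = max(Fraction(sum(p), m), Fraction(max(p)))
    machines = [[] for _ in range(m)]
    k, t = 0, Fraction(0)              # current machine and time on it
    for j, pj in enumerate(p):
        remaining = Fraction(pj)
        while remaining > 0:
            if t == T:                 # machine k is full: wrap around
                k, t = k + 1, Fraction(0)
            piece = min(remaining, T - t)
            machines[k].append((j, t, t + piece))
            t += piece
            remaining -= piece
    return T, machines
```

A split job never runs on two machines at the same time: its first piece ends at T, and its second piece, of length at most t, starts at time 0 on the next machine, since p_j ≤ T.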

The following lemma is a direct consequence of the work of McNaughton [16]. The correctness of the lemma follows from the validity of the algorithm LS-Pmtn.

Lemma 2.2. Let J ⊆ N, let LS be a list of the jobs in J, let [a, b] be a time interval with 0 ≤ a < b, and let k be a positive integer. If max{(1/k)·p(J), p_max(J)} ≤ b − a, then algorithm LS-Pmtn can schedule the jobs in J completely in the time interval [a, b] on k machines. □

3. NP-hardness proof

The following is the well-known strongly NP-complete 3-Partition problem [4].

3-Partition: In an instance I of the 3-Partition problem, we are given a set of 3t positive integers a_1, a_2, . . . , a_{3t}, each of size strictly between b/4 and b/2, such that a_1 + a_2 + · · · + a_{3t} = tb. The decision asks whether there is a partition of the a_i's into t groups of 3, each summing exactly to b.

Theorem 3.1. Problem P|pmtn|D_max is strongly NP-hard.

Proof. We use the strongly NP-complete 3-Partition problem for the reduction. Given an instance (a_1, . . . , a_{3t}; b) of 3-Partition, we construct an instance of problem P|pmtn|D_max as follows:

• We have m = t identical machines 1, 2, · · · , t and n = 5t jobs 1, 2, · · · , 5t.
• The first 3t jobs 1, 2, · · · , 3t are called partition jobs, the middle t jobs 3t+1, 3t+2, · · · , 4t are called restricted jobs, and the last t jobs 4t+1, 4t+2, · · · , 5t are called tail jobs.
• For each partition job i with 1 ≤ i ≤ 3t, the processing time p_i and the delivery time q_i are given by

  p_i = t·a_i and q_i = a_i. (1)

• For each restricted job 3t+i with 1 ≤ i ≤ t, the processing time p_{3t+i} and the delivery time q_{3t+i} are given by

  p_{3t+i} = (i−1)(t+1)b and q_{3t+i} = tb. (2)

• For each tail job 4t+i with 1 ≤ i ≤ t, the processing time p_{4t+i} and the delivery time q_{4t+i} are given by

  p_{4t+i} = (t−i+1)(t+1)b − tb and q_{4t+i} = 0. (3)

• The threshold value of D_max is given by

  Y = t(t+1)b. (4)

• The decision asks whether there is a feasible schedule σ so that D_max(σ) ≤ Y.

Fig. 1. The processing scheme of σ.

The above construction can be done in time polynomial in the unary encoding. From (1), (2), (3), and (4), one can easily check that

  p_1 + p_2 + · · · + p_{5t} = tY, (5)

  q_1 + q_2 + · · · + q_{5t} = Y, (6)

and

  p_{3t+i} + p_{4t+i} = t²b = Y − tb for each 1 ≤ i ≤ t. (7)
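To make the construction concrete, the following sanity check (written for this note, not part of the paper) builds the instance for a small hypothetical 3-Partition input and verifies identities (5), (6), and (7).

```python
# Sanity check of identities (5)-(7) on a toy 3-Partition instance
# (t = 2, b = 16); the numbers below are hypothetical, not from the paper.
t, b = 2, 16
a = [5, 5, 6, 5, 6, 5]              # 3t integers in (b/4, b/2), summing to t*b
assert all(b / 4 < ai < b / 2 for ai in a) and sum(a) == t * b

p, q = [], []
for ai in a:                         # partition jobs, Eq. (1)
    p.append(t * ai); q.append(ai)
for i in range(1, t + 1):            # restricted jobs, Eq. (2)
    p.append((i - 1) * (t + 1) * b); q.append(t * b)
for i in range(1, t + 1):            # tail jobs, Eq. (3)
    p.append((t - i + 1) * (t + 1) * b - t * b); q.append(0)

Y = t * (t + 1) * b                  # threshold, Eq. (4)
assert sum(p) == t * Y               # Eq. (5)
assert sum(q) == Y                   # Eq. (6)
for i in range(1, t + 1):            # Eq. (7)
    assert p[3*t + i - 1] + p[4*t + i - 1] == t * t * b == Y - t * b
```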

We show in the following that the instance of 3-Partition has a solution if and only if the scheduling instance has a feasible schedule σ with D_max(σ) ≤ Y.

Suppose first that the instance of 3-Partition has a solution. Then the index set {1, 2, · · · , 3t} can be partitioned into t subsets I_1, I_2, · · · , I_t so that |I_i| = 3 and Σ_{j∈I_i} a_j = b for each i = 1, · · · , t. We construct a feasible schedule σ of the scheduling instance in the following way.

• For the processing scheme, machine i, i = 1, · · · , t, processes the restricted job 3t+i, the partition jobs in I_i, and the tail job 4t+i in this order. After the processing scheme is generated, C_j(σ) is determined for all j with 1 ≤ j ≤ 5t. Fig. 1 indicates the processing scheme of the 5t jobs.
• For the delivery scheme, we deliver the 5t jobs in nondecreasing order of their completion times, i.e.,

  C_{σ(1)}(σ) ≤ C_{σ(2)}(σ) ≤ · · · ≤ C_{σ(5t)}(σ). (8)

Note that the total processing time and the total delivery time of the jobs in each I_i, 1 ≤ i ≤ t, are given by p(I_i) = tb and q(I_i) = b, respectively. From (7), we can see that, in the schedule σ, there is no idle time from time 0 to time Y on the m = t machines, and each machine completes the processing of the jobs assigned to it at time Y. For each i with 1 ≤ i ≤ t, set S_(i)(σ) = min{S_j(σ) : j ∈ I_i}, C_(i)(σ) = max{C_j(σ) : j ∈ I_i}, τ_(i)(σ) = min{τ_j(σ) : j ∈ I_i}, and D_(i)(σ) = max{D_j(σ) : j ∈ I_i}, and call them the processing starting time, the processing completion time, the departure time, and the delivery completion time of I_i, respectively. Since q_{4t+i} = 0 for 1 ≤ i ≤ t, we only need to consider the first 4t jobs 1, · · · , 4t, which are delivered in the following order in σ:

  3t+1 ≺ I_1 ≺ 3t+2 ≺ I_2 ≺ · · · ≺ 4t ≺ I_t. (9)

With (8) and (9) in hand, Table 1 indicates the processing and the delivery of the first 4t jobs in σ. From Table 1, we observe that D_max(σ) = t(t+1)b = Y. Hence, σ is a feasible schedule of the scheduling instance with D_max(σ) ≤ Y, as required.

Conversely, suppose that π is a feasible schedule such that D_max(π) ≤ Y. We show that the instance of the 3-Partition problem has a solution. From (5) and (6), we have

Claim 1. Each of the m machines is busy at any time instant t ∈ [0, Y] in π.

Claim 2. The vehicle is busy throughout the time interval [0, Y] in π.


Table 1
The processing and the delivery of the first 4t jobs.

Jobs    | 3t+1 | I_1    | ··· | 3t+i        | I_i         | ··· | 4t          | I_t
p_j     | 0    | tb     | ··· | (i−1)(t+1)b | tb          | ··· | (t−1)(t+1)b | tb
q_j     | tb   | b      | ··· | tb          | b           | ··· | tb          | b
S_j(σ)  | 0    | 0      | ··· | 0           | (i−1)(t+1)b | ··· | 0           | (t−1)(t+1)b
C_j(σ)  | 0    | tb     | ··· | (i−1)(t+1)b | i(t+1)b − b | ··· | (t−1)(t+1)b | t(t+1)b − b
τ_j(σ)  | 0    | tb     | ··· | (i−1)(t+1)b | i(t+1)b − b | ··· | (t−1)(t+1)b | t(t+1)b − b
D_j(σ)  | tb   | (t+1)b | ··· | i(t+1)b − b | i(t+1)b     | ··· | t(t+1)b − b | t(t+1)b
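The schedule behind Table 1 can be checked mechanically. The sketch below (a hypothetical toy run written for this note, not from the paper) builds the completion times of the constructed schedule for t = 2, b = 16 and applies the greedy delivery rule of Lemma 2.1, confirming that D_max(σ) = Y.

```python
# Simulate the constructed schedule sigma for a toy instance (t = 2,
# b = 16, groups I_1, I_2 chosen by hand) and check D_max(sigma) = Y.
t, b = 2, 16
groups = [[5, 5, 6], [5, 6, 5]]     # hypothetical solution: each triple sums to b
Y = t * (t + 1) * b

jobs = []                            # (completion time C_j, delivery time q_j)
for i, g in enumerate(groups, start=1):
    cur = (i - 1) * (t + 1) * b      # machine i runs restricted job 3t+i first
    jobs.append((cur, t * b))        # restricted job: C = p_{3t+i}, q = tb
    for a in g:                      # then the partition jobs of I_i
        cur += t * a
        jobs.append((cur, a))
    jobs.append((Y, 0))              # finally tail job 4t+i completes at Y

# Greedy delivery rule of Lemma 2.1: in nondecreasing order of completion
# times, depart at max{C_j, D_prev} and return q_j time units later.
D = 0
for C, qj in sorted(jobs):
    D = max(C, D) + qj
assert D == Y                        # D_max(sigma) = t(t+1)b = Y
```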

The result of Claim 1 implies that there are t = m jobs j with D_j(π) = C_j(π) = Y in π. These jobs j must have delivery times q_j = 0, since D_j(π) = τ_j(π) + q_j ≥ C_j(π) + q_j. Note that 4t+i, 1 ≤ i ≤ t, are the only m = t jobs with delivery time 0. Then we have

Claim 3. C_{4t+i}(π) = Y for each i = 1, 2, · · · , t.

The result of Claim 2 implies that there is at least one job with departure time 0 in π. Since 3t+1 is the only job with processing time 0, we have

Claim 4. 3t+1 is the only job with departure time 0 in π.

We need more claims to reveal the properties of π.

Claim 5. C_{3t+1}(π) < C_{3t+2}(π) < · · · < C_{4t}(π) < Y.

Proof of Claim 5. Since job 4t has a positive delivery time, we have C_{4t}(π) < Y. Suppose to the contrary that Claim 5 is violated. Let i ∈ {2, · · · , t} be the maximum index so that C_{3t+i}(π) ≤ C_{3t+i−1}(π). Then i ≤ t−1 and C_{3t+i}(π) < C_{3t+i+1}(π) < · · · < C_{4t}(π). Note that the t−i+2 jobs 3t+i−1, 3t+i, · · · , 4t, each having delivery time tb, have departure times at least C_{3t+i}(π) ≥ p_{3t+i} = (i−1)(t+1)b. It follows that D_max(π) ≥ (i−1)(t+1)b + (t−i+2)tb > t(t+1)b = Y. This contradicts the assumption that D_max(π) ≤ Y. Claim 5 follows. □

From Claim 5, we can partition the index set {1, 2, · · · , 3t} into t subsets by setting

  I_i = {j : 1 ≤ j ≤ 3t, C_{3t+i}(π) ≤ C_j(π) < C_{3t+i+1}(π)}, i = 1, 2, · · · , t, (10)

where, for i = t, we interpret C_{4t+1}(π) as Y. Then the definition in (10) implies that I_1 ∪ I_2 ∪ · · · ∪ I_t = {1, 2, · · · , 3t} is the set of partition jobs. We further set

  b_i = Σ_{j∈I_i} a_j, i = 1, 2, · · · , t. (11)

From (1) and (11), we have

  q(I_i) = b_i and p(I_i) = t·b_i, i = 1, 2, · · · , t. (12)

By (11), to complete the proof, we only need to show that b_i = b and |I_i| = 3 for 1 ≤ i ≤ t.

Claim 6. For each index i with 1 ≤ i ≤ t, we have

  C_{3t+i}(π) = (i−1)(t+1)b = p_{3t+i} (13)

and

  b_i + b_{i+1} + · · · + b_t = (t−i+1)b. (14)

Proof of Claim 6. Suppose to the contrary that the claim is violated. Then there is a certain i with 1 ≤ i ≤ t so that one of (13) and (14) does not hold. For our purpose, we assume that i is the maximum such index. Then either C_{3t+i}(π) > p_{3t+i} or b_i + b_{i+1} + · · · + b_t ≠ (t−i+1)b, and furthermore, for each index i′ (if any) with i+1 ≤ i′ ≤ t, we have C_{3t+i′}(π) = (i′−1)(t+1)b = p_{3t+i′} and b_{i′} + b_{i′+1} + · · · + b_t = (t−i′+1)b.

Set δ = C_{3t+i}(π) − p_{3t+i}. For each job j, we use p̃_j to denote the length of the piece of j processed in the interval (C_{3t+i}(π), Y] on the m = t machines in π. From Claim 1, we have

  Σ_{j ∈ I_i ∪ ··· ∪ I_t} p̃_j + Σ_{j: 3t+1 ≤ j ≤ 5t} p̃_j = t(Y − C_{3t+i}(π)). (15)

The following facts can be observed:

• p̃_{3t+i′} = 0 for 1 ≤ i′ ≤ i. (From Claim 5.)
• p̃_{3t+i′} ≤ p_{3t+i′} − C_{3t+i}(π) for i+1 ≤ i′ ≤ t. (The choice of i implies that C_{3t+i′}(π) = p_{3t+i′} for i+1 ≤ i′ ≤ t.)
• p̃_{4t+i′} ≤ p_{4t+i′} for i ≤ i′ ≤ t. (Trivial.)
• p̃_{4t+i′} ≤ Y − C_{3t+i}(π) for 1 ≤ i′ ≤ i−1. (Trivial.)

Then we have

  Σ_{j: 3t+1 ≤ j ≤ 5t} p̃_j ≤ (i−1)(Y − C_{3t+i}(π)) + p_{4t+i} + Σ_{i′: i+1 ≤ i′ ≤ t} (p_{3t+i′} + p_{4t+i′} − C_{3t+i}(π)). (16)

From (7) and the definition of δ, we have Y − C_{3t+i}(π) − p_{4t+i} = tb − δ. It follows from (15) and (16) that

  Σ_{j ∈ I_i ∪ ··· ∪ I_t} p̃_j ≥ Σ_{i′: i+1 ≤ i′ ≤ t} (Y − p_{3t+i′} − p_{4t+i′}) + (Y − C_{3t+i}(π) − p_{4t+i}) = (t−i)tb + (tb − δ). (17)

Note that, from (12), we have

  t(b_i + · · · + b_t) = Σ_{j ∈ I_i ∪ ··· ∪ I_t} p_j ≥ Σ_{j ∈ I_i ∪ ··· ∪ I_t} p̃_j. (18)

From (17) and (18), we deduce that

  t(b_i + b_{i+1} + · · · + b_t) = Σ_{j ∈ I_i ∪ ··· ∪ I_t} p_j ≥ (t−i)tb + (tb − δ) = (t−i+1)tb − δ. (19)

Equivalently, (19) can be rewritten as

  b_i + b_{i+1} + · · · + b_t ≥ (t−i+1)b − δ/t. (20)

Since either δ > 0 or b_i + b_{i+1} + · · · + b_t ≠ (t−i+1)b, inequality (20) further implies that

  δ + b_i + b_{i+1} + · · · + b_t > (t−i+1)b. (21)

Note that the jobs j ∈ I_i ∪ · · · ∪ I_t and the jobs 3t+i′ with i ≤ i′ ≤ t have departure times at least C_{3t+i}(π), each of the former jobs j has delivery time a_j, and each of the latter jobs 3t+i′ has delivery time tb. The total delivery time of the jobs in I_i ∪ · · · ∪ I_t is b_i + b_{i+1} + · · · + b_t. From (21), we have

  Y ≥ C_{3t+i}(π) + (t−i+1)tb + (b_i + b_{i+1} + · · · + b_t)
    = (i−1)(t+1)b + δ + (t−i+1)tb + (b_i + b_{i+1} + · · · + b_t)
    > (i−1)(t+1)b + (t−i+1)tb + (t−i+1)b
    = t(t+1)b = Y. (22)

The contradiction in (22) completes the proof of the claim. □


As a consequence of Claim 6, we have Σ_{j∈I_i} a_j = b_i = b for each index i with 1 ≤ i ≤ t. Since b/4 < a_j < b/2 holds for all j with 1 ≤ j ≤ 3t, we have |I_i| = 3 for each i with 1 ≤ i ≤ t. It follows that (I_1, I_2, · · · , I_t) is a solution of the instance of 3-Partition. The result follows. □

4. A 3/2-approximation algorithm

Let D_max(σ*) = max{D_j(σ*) : 1 ≤ j ≤ n} be the maximum delivery completion time of all jobs in an optimal schedule σ*. Let

  D = max{(1/m)·p(N) + q_min(N), max_{j∈N} {p_j + q(J_≥(j))}}. (23)

Recall that J_≥(j) = {i : p_i ≥ p_j}.

Lemma 4.1. D_max(σ*) ≥ D, p(N) ≤ mD, and q(N) ≤ D.

Proof. In a feasible schedule, at least one job has a processing completion time at least (1/m)·p(N) and every job has a delivery time at least q_min(N). Hence, we have D_max(σ*) ≥ (1/m)·p(N) + q_min(N). To show that D_max(σ*) ≥ max_{j∈N} {p_j + q(J_≥(j))}, we fix an index j ∈ N and consider the jobs i with p_i ≥ p_j. Each of these jobs has a processing completion time at least p_j, and the delivery of these jobs is performed by only one vehicle. It follows that D_max(σ*) ≥ max_{j∈N} {p_j + q(J_≥(j))}. Consequently, D_max(σ*) ≥ D.


The relations p(N) ≤ mD and q(N) ≤ D follow directly from the definition of D in (23). The lemma follows. □
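The bound D of (23) is straightforward to compute. Below is a small sketch written for this note (the function name and the instance in the usage example are made up, not from the paper).

```python
# Compute the lower bound D of Eq. (23) for P|pmtn|D_max.
from fractions import Fraction

def lower_bound_D(p, q, m):
    n = len(p)
    # First term: (1/m)*p(N) + q_min(N)
    term1 = Fraction(sum(p), m) + min(q)
    # Second term: max_j { p_j + q(J_>=(j)) }, with J_>=(j) = {i : p_i >= p_j}
    term2 = max(p[j] + sum(q[i] for i in range(n) if p[i] >= p[j])
                for j in range(n))
    return max(term1, term2)
```

For example, for p = (4, 2, 6), q = (1, 3, 2) and m = 2 machines, the first term is 12/2 + 1 = 7, while the shortest job forces 2 + q(N) = 8, so D = 8.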

We first present the following greedy Algorithm GA, which returns a set J consisting of a part of the jobs. For j ∈ J, we define J_{>j} = {i ∈ J : i > j} to be the set of indices greater than j in J.

Algorithm GA. For problem P|pmtn|D_max.

Step 0. Sort the jobs in N in SPT order so that p_1 ≤ p_2 ≤ · · · ≤ p_n. Set J := N.
Step 1. If q(J) > D/2, go to Step 2. If q(J) ≤ D/2, go to Step 4.
Step 2. Find the minimum index j ∈ {1, 2, · · · , |J|} so that q(J_{>j}) ≤ D/2 and p(J_{>j}) < p(N) − m·D/2. Reset J := {j, j+1, · · · , n}. Then either q(J) > D/2 or p(J) ≥ p(N) − m·D/2.
Step 3. Do the following:
 (3.1) If p(J) ≥ p(N) − m·D/2, then output J and stop.
 (3.2) If q(J) > D/2 and p(J) < p(N) − m·D/2, then q(N\J) = q(N) − q(J) < D/2 (since q(N) ≤ D by Lemma 4.1) and p(N\J) = p(N) − p(J) > m·D/2 ≥ p(N) − m·D/2 (since p(N) ≤ mD by Lemma 4.1). In this case, reset J := {1, 2, · · · , j−1} = N\J and go to Step 4.
Step 4. Find the minimum index j ∈ {1, 2, · · · , |J|} so that p(J_{>j}) < p(N) − m·D/2. Reset J := {j, j+1, · · · , |J|} and stop.

It can be observed that Algorithm GA runs in O(n log n) time. The following lemma reveals the property of the job subset J returned by Algorithm GA.

Lemma 4.2. Let J* be the job set returned by Algorithm GA. Then p(J*) ≥ p(N) − m·D/2, and there is a job z ∈ J* with p_z = p_min(J*) so that q(J*) − q_z ≤ D/2 and p(J*) − p_z < p(N) − m·D/2.

Proof. At the beginning of Step 2, we have J = {1, 2, · · · , n}. The implementation of the algorithm implies that at the end of Step 2, we have J = {j, j+1, · · · , n} and either q(J) > D/2 or p(J) ≥ p(N) − m·D/2. Since Step 3 follows Step 2 directly, Steps (3.1) and (3.2) cover all possibilities for the set J returned by Step 2.

If Algorithm GA stops at Step (3.1), then J* = {j, j+1, · · · , n} for some j and p(J*) ≥ p(N) − m·D/2. By the implementation of Step 2, we further have q(J* \ {j}) ≤ D/2 and p(J* \ {j}) < p(N) − m·D/2. Then the result follows by setting z = j.

Suppose in the following that Algorithm GA stops at Step 4. No matter whether Step 4 is executed after Step 1 or Step (3.2), at the beginning of Step 4, we have q(J) ≤ D/2 and p(J) ≥ p(N) − m·D/2. By the implementation of Step 4, J* = {j, j+1, · · · , |J|} for some j, q(J*) ≤ D/2, p(J*) ≥ p(N) − m·D/2, q(J* \ {j}) ≤ D/2, and p(J* \ {j}) < p(N) − m·D/2. Then the result is also valid by setting z = j. The lemma follows. □
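A direct transcription of Algorithm GA (our sketch, not the authors' code; it uses 0-indexed jobs, assumes the jobs are already in SPT order, and assumes p(N) > m·D/2 so that the indices sought in Steps 2 and 4 exist).

```python
# A sketch of Algorithm GA. Jobs are 0-indexed and assumed sorted in SPT
# order (p[0] <= ... <= p[n-1]); D is the lower bound of Eq. (23).
from fractions import Fraction

def algorithm_ga(p, q, m, D):
    n = len(p)
    half = Fraction(D, 2)
    target = sum(p) - m * half                       # p(N) - m*D/2
    J = list(range(n))
    if sum(q[x] for x in J) > half:                  # Step 1 -> Step 2
        j = next(k for k in range(n)
                 if sum(q[x] for x in J[k+1:]) <= half
                 and sum(p[x] for x in J[k+1:]) < target)
        J = J[j:]
        if sum(p[x] for x in J) >= target:           # Step (3.1)
            return J
        J = list(range(j))                           # Step (3.2): J := N \ J
    j = next(k for k in range(len(J))                # Step 4
             if sum(p[x] for x in J[k+1:]) < target)
    return J[j:]
```

On the made-up instance p = (1, 2, 3, 4), q = (5, 1, 1, 1), m = 2 (for which D = 9 by (23)), the algorithm stops at Step (3.1) with J* consisting of the last job; with all-zero delivery times and D = 5, it goes directly to Step 4 and returns the last two jobs.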

Let J* be the set of jobs returned by Algorithm GA. The following notations will be used in our discussion.

• J_A = {j ∈ J* : p_j > D/2}.
• J_B = J* \ J_A = {j ∈ J* : p_j ≤ D/2}.
• J_A^c = {j ∈ N \ J* : p_j > D/2}.
• J_B^c = N \ (J* ∪ J_A^c) = {j ∈ N \ J* : p_j ≤ D/2}.
• n_A = |J_A|, n_A^c = |J_A^c|, n_B = |J_B|, and n_B^c = |J_B^c|. Note that |J*| = n_A + n_B and |N \ J*| = n_A^c + n_B^c.

A job j is called big if p_j > D/2, and small if p_j ≤ D/2. Then the jobs in J_A ∪ J_A^c are big and the jobs in J_B ∪ J_B^c are small.

Lemma 4.3. p(J_B) − p_min(J_B) ≤ (m − n_A)·D/2 if J_B ≠ ∅.

Proof. From Lemma 4.1, we have p(N) ≤ mD. From Lemma 4.2, we have p(J*) − p_min(J*) < p(N) − m·D/2. Since J* = J_A ∪ J_B and the jobs in J_A are big, we have p_min(J*) = p_min(J_B) and p(J_A) > n_A·D/2. Then p(J_B) − p_min(J_B) = p(J*) − p_min(J*) − p(J_A) < mD − m·D/2 − n_A·D/2 = (m − n_A)·D/2. The lemma follows. □

Lemma 4.4. Let ε = max{0, p(J_B) − (m − n_A)·D/2}. Then p(J_B^c) + p(J_A^c) + ε ≤ m·D/2.

Proof. From Lemma 4.2, we have p(J_A) + p(J_B) = p(J*) ≥ p(N) − m·D/2. Then p(J_B^c) + p(J_A^c) + ε = p(N) − (p(J_A) + p(J_B)) + ε ≤ m·D/2 + ε. Hence, the lemma holds when ε = 0.

Suppose in the following that ε > 0. Then ε = p(J_B) − (m − n_A)·D/2, and so p(J_B) − ε = (m − n_A)·D/2. Since p(J_A) > n_A·D/2 and p(N) ≤ mD, we have p(J_B^c) + p(J_A^c) + ε = p(N) − p(J_A) − (p(J_B) − ε) < mD − n_A·D/2 − (m − n_A)·D/2 = m·D/2. The lemma follows. □

Fig. 2. The processing scheme of Step 2.

Lemma 4.5. Suppose that n_A + n_A^c ≥ m and let δ = (p(J_A) + p(J_A^c) − m·D/2)/(n_A + n_A^c). Then δ ≤ D/2 and Σ_{j ∈ J_A ∪ J_A^c} (p_j − δ) = m·D/2. Furthermore, there are at most m−1 jobs j in J_A ∪ J_A^c so that p_j − δ > D/2.

Proof. Since p(J_A) + p(J_A^c) ≤ p(N) ≤ mD and n_A + n_A^c ≥ m, we have δ ≤ (mD − m·D/2)/m = D/2. The relation Σ_{j ∈ J_A ∪ J_A^c} (p_j − δ) = m·D/2 follows from the definition of δ. This further implies that there are at most m−1 jobs j in J_A ∪ J_A^c so that p_j − δ > D/2, since p_j − δ ≥ p_j − D/2 > 0 for each j ∈ J_A ∪ J_A^c. The lemma follows. □

Now we are ready to describe our approximation algorithm for problem P|pmtn|D_max.

Algorithm APP. For problem P|pmtn|D_max.

Step 1. Set J* to be the job set returned by Algorithm GA and let z be the job in J_B of the minimum processing time. In the case that J_B = ∅, we assume that z is a dummy job with p_z = 0 and q_z = 0. In the case that J_B ≠ ∅, by Lemma 4.2, we may further assume that q(J*) − q_z ≤ D/2 and p(J*) − p_z < p(N) − m·D/2. If n_A + n_A^c < m, go to Step 2, and if n_A + n_A^c ≥ m, go to Step 3.

Step 2. Note that n_A + n_A^c < m. Do the following (see Fig. 2):
 (2.1) Set ε = max{0, p(J_B) − (m − n_A)·D/2}. In the case that J_B ≠ ∅, split job z into two pieces z′ and z″ so that p_{z′} = ε and p_{z″} = p_z − ε.
 (2.2) From time D/2, schedule the n_A jobs in J_A on the first n_A machines 1, 2, · · · , n_A so that each job occupies a single machine starting at time D/2.
 (2.3) In the time interval [D/2, D] and on the m − n_A machines n_A+1, n_A+2, · · · , m, run algorithm LS-Pmtn for the job list LS1 = (z″, J_B \ {z}). Furthermore, from time 0, schedule the n_A^c jobs in J_A^c on the last n_A^c machines m−n_A^c+1, m−n_A^c+2, · · · , m so that each job occupies a single machine starting at time 0. Since n_A + n_A^c < m, the processing of J_A^c has no overlap with the processing of J_A and z″. But there may be overlaps between the processing of J_A^c and J_B \ {z}. Denote by J_B′ the part of the jobs in J_B \ {z} which is covered by the processing of the jobs in J_A^c, and by J_B″ the part of the jobs in J_B \ {z} which is missed by the processing of the jobs in J_A^c. We conventionalize that, for each job j ∈ J_B \ {z}, the part of job j in J_B′ (if any) forms one piece of j.
 (2.4) Omit the processing of the jobs (pieces) in J_B′ in Step (2.3) and regard J_B′ as unscheduled, so that the overlap in Step (2.3) no longer exists.
 (2.5) In the time interval [0, D/2] and on the first m − n_A^c machines 1, 2, · · · , m−n_A^c, run algorithm LS-Pmtn for the jobs in J_B^c ∪ {z′} ∪ J_B′ in an arbitrary list.
 (2.6) The processing-completed jobs are delivered in nondecreasing order of their processing completion times as soon as possible.

Step 3. Note that n_A + n_A^c ≥ m. Do the following (see Fig. 3):
 (3.1) Set δ = (p(J_A) + p(J_A^c) − m·D/2)/(n_A + n_A^c). Split each job j in J_A ∪ J_A^c into two pieces j′ and j″ so that p_{j′} = δ and p_{j″} = p_j − δ. Set J_A′ ∪ J_A^c′ = {j′ : j ∈ J_A ∪ J_A^c} and J_A″ ∪ J_A^c″ = {j″ : j ∈ J_A ∪ J_A^c}. Let J_L = {j″ ∈ J_A″ ∪ J_A^c″ : p_{j″} > D/2} and set n_L = |J_L|.
 (3.2) In the time interval [0, D/2] and on the m machines, run algorithm LS-Pmtn for the jobs in (J_A′ ∪ J_A^c′) ∪ J_B ∪ J_B^c in an arbitrary list.
 (3.3) From time D/2, schedule the n_L jobs in J_L on the first n_L machines 1, 2, · · · , n_L so that each job occupies a single machine starting at time D/2.
 (3.4) In the time interval [D/2, D] and on the last m − n_L machines n_L+1, n_L+2, · · · , m, run algorithm LS-Pmtn for the jobs in (J_A″ ∪ J_A^c″) \ J_L in an arbitrary list.
 (3.5) The processing-completed jobs are delivered in nondecreasing order of their processing completion times as soon as possible.

Theorem 4.1. Algorithm APP has a worst-case performance ratio of at most 3/2.


Fig. 3. The processing scheme of Step 3.

Proof. We first show the rationality of the algorithm.

If $n_A + n_A^c < m$, then $n_A < m$. This implies that, in Step (2.2) of APP, the $n_A$ jobs in $J_A$ can be scheduled on $n_A$ machines separately, each starting at time $\frac{D}{2}$. From Lemma 4.3 and the definitions of $\varepsilon$, $z'$ and $z''$, we have $p_{z'} = \varepsilon \geq 0$ and $p_{z''} = p_z - \varepsilon \geq 0$, so $z'$ and $z''$ are well defined. The definition of $\varepsilon$ also implies that the jobs in $\{z''\} \cup (J_B \setminus \{z\})$ have total processing time at most $p(J_B) - \varepsilon \leq (m - n_A) \cdot \frac{D}{2}$. Since the maximum processing time of the jobs in $\{z''\} \cup (J_B \setminus \{z\})$ is less than $\frac{D}{2}$, from Lemma 2.2, in Step (2.3) of APP the jobs in $\{z''\} \cup (J_B \setminus \{z\})$ are completely scheduled in the time interval $[\frac{D}{2}, D]$ on the last $m - n_A$ machines $n_A + 1, n_A + 2, \cdots, m$. Since $n_A^c < m - n_A$, in Step (2.3) of APP the $n_A^c$ jobs in $J_A^c$ can be scheduled on the last $n_A^c$ machines separately starting at time 0, but overlap may occur between the processing of $J_A^c$ and $J_B \setminus \{z\}$ after time $\frac{D}{2}$. At Step (2.4), the jobs in $J'_B$ have total processing time at most $\sum_{j \in J_A^c} (p_j - \frac{D}{2}) = p(J_A^c) - n_A^c \cdot \frac{D}{2}$. From Lemma 4.4, the total processing time of the jobs in $J_B^c \cup \{z'\} \cup J'_B$ is at most $p(J_B^c) + \varepsilon + p(J_A^c) - n_A^c \cdot \frac{D}{2} \leq (m - n_A^c) \cdot \frac{D}{2}$. Since the maximum processing time of the jobs in $J_B^c \cup \{z'\} \cup J'_B$ is less than $\frac{D}{2}$, from Lemma 2.2, in Step (2.5) of APP the jobs in $J_B^c \cup \{z'\} \cup J'_B$ are completely scheduled in the time interval $[0, \frac{D}{2}]$ on the first $m - n_A^c$ machines $1, 2, \cdots, m - n_A^c$. Consequently, APP is reasonable in the case that $n_A + n_A^c < m$.

Alternatively, if $n_A + n_A^c \geq m$, then from Lemma 4.5, the jobs in $J'_A \cup J'^c_A \cup J_B \cup J_B^c$ have maximum processing time at most $\frac{D}{2}$ and the jobs in $J''_A \cup J''^c_A$ have total processing time $m \cdot \frac{D}{2}$. This further implies that the jobs in $J'_A \cup J'^c_A \cup J_B \cup J_B^c$ have total processing time $p(N) - m \cdot \frac{D}{2} \leq m \cdot \frac{D}{2}$, since $p(N) \leq mD$. From Lemma 2.2, in Step (3.2) of APP the jobs in $J'_A \cup J'^c_A \cup J_B \cup J_B^c$ are completely scheduled in the time interval $[0, \frac{D}{2}]$. In Step (3.3), since $n_L \cdot \frac{D}{2} < p(J_L) \leq m \cdot \frac{D}{2}$, we have $n_L < m$, and so the $n_L$ jobs in $J_L$ can be scheduled on $n_L$ machines separately starting at time $\frac{D}{2}$. The remaining jobs in $(J''_A \cup J''^c_A) \setminus J_L$ have maximum processing time at most $\frac{D}{2}$ and total processing time $m \cdot \frac{D}{2} - p(J_L) \leq m \cdot \frac{D}{2} - n_L \cdot \frac{D}{2} \leq (m - n_L) \cdot \frac{D}{2}$. So, from Lemma 2.2, in Step (3.4) of APP the jobs in $(J''_A \cup J''^c_A) \setminus J_L$ are completely scheduled in the time interval $[\frac{D}{2}, D]$. Consequently, APP is also reasonable in the case that $n_A + n_A^c \geq m$.
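The feasibility claims above all rest on Lemma 2.2, which has the shape of McNaughton's classical wrap-around rule [16]: if every processing time is at most $T$ and the total processing time is at most $mT$, the jobs can be preemptively packed into $[0, T]$ on $m$ machines. A minimal sketch of that rule (the function name and interface are ours, not the paper's):

```python
def wraparound(p, m, T):
    """McNaughton's wrap-around rule: preemptively pack jobs with
    processing times p (each <= T, total <= m*T) into [0, T] on m
    machines.  Returns, per job, a list of (machine, start, end) pieces."""
    assert max(p) <= T and sum(p) <= m * T
    pieces = [[] for _ in p]
    mach, t = 0, 0.0              # current machine and time cursor
    for j, pj in enumerate(p):
        left = pj
        while left > 0:
            run = min(left, T - t)
            pieces[j].append((mach, t, t + run))
            left -= run
            t += run
            if t == T:            # wrap to the next machine at time 0
                mach, t = mach + 1, 0.0
    return pieces
```

Each job wraps across machines at most once, and because $p_j \leq T$ the two pieces of a wrapped job never overlap in time, so the schedule is feasible.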

Now we are ready to analyze the performance ratio of Algorithm APP. Let $\sigma$ be the schedule obtained by Algorithm APP. Let $l$ be the first job in $\sigma$ such that the vehicle is busy from time $\tau_l(\sigma)$ to time $D_{\max}(\sigma)$. Then $\tau_l(\sigma) = C_l(\sigma)$. Let $J^+(l) = \{j \in N : C_j(\sigma) \geq C_l(\sigma)\}$. Then $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l))$. The following inequalities are repeatedly used in our discussion:

$q(N) \leq D$ (by Lemma 4.1). (24)

$p_j + q(J^{\geq}(j)) \leq D$ for $j \in N$ (by the definition of $D$). (25)

$q(J_A \cup J_A^c) \leq \frac{D}{2}$ (since we have (25) and $p_{\min}(J_A \cup J_A^c) > \frac{D}{2}$). (26)

If $C_l(\sigma) \leq \frac{D}{2}$, then from (24) and the fact $q(J^+(l)) \leq q(N)$, we have $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) \leq \frac{D}{2} + D = \frac{3}{2}D$, as required. Hence we may assume in the following that $C_l(\sigma) > \frac{D}{2}$. We distinguish the following two cases.

Case 1. $n_A + n_A^c < m$. Then $\sigma$ is obtained by Algorithm APP in Step 2. Consequently, $l \in J_A \cup J_A^c \cup J_B$.

If $l \in J_A$, then $C_l(\sigma) = \frac{D}{2} + p_l > D$. Since all jobs in $J_A^c \cup J_B$ are processed by time $D$, we have $J^+(l) \subseteq J^{\geq}(l)$. From (25), we have $p_l + q(J^+(l)) \leq p_l + q(J^{\geq}(l)) \leq D$. It follows that $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) = \frac{D}{2} + p_l + q(J^+(l)) \leq \frac{3}{2}D$, as required.

Suppose in the following that $l \in J_A^c \cup J_B$. Then $\frac{D}{2} < C_l(\sigma) \leq D$. If $J_B = \emptyset$, then $l \in J_A^c$, $C_l(\sigma) = p_l \leq D$, and $J^+(l) \subseteq J_A \cup J_A^c$. From (26), we have $q(J^+(l)) \leq q(J_A \cup J_A^c) \leq \frac{D}{2}$. Consequently, $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) = p_l + q(J^+(l)) \leq \frac{3}{2}D$, as required. Hence, we may assume that $J_B \neq \emptyset$. Note that $z$ is the job with the minimum processing time in $J_A \cup J_A^c \cup J_B$. From (25), we have $p_z + q(J^+(l)) \leq p_z + q(J^{\geq}(z)) \leq D$. In the case that $C_l(\sigma) \leq C_{z''}(\sigma)$, from the fact that $C_{z''}(\sigma) = \frac{D}{2} + p_{z''} \leq \frac{D}{2} + p_z$, we have $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) \leq \frac{D}{2} + p_z + q(J^+(l)) \leq \frac{3}{2}D$, as required.

Suppose in the following that $C_l(\sigma) > C_{z''}(\sigma)$. Then $J^+(l) \subseteq J_A^c \cup (J_B \setminus \{z\})$. If either $J_A^c = \emptyset$ or $C_l(\sigma) > \max\{C_j(\sigma) : j \in J_A^c\}$, then $J^+(l) \subseteq J_A \cup (J_B \setminus \{z\}) = J^* \setminus \{z\}$. From Lemma 4.2, we have $q(J^+(l)) \leq q(J^* \setminus \{z\}) \leq \frac{D}{2}$. From the fact that $C_l(\sigma) \leq D$, we have $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) \leq D + \frac{D}{2} = \frac{3}{2}D$, as required.


Suppose in the following that $J_A^c \neq \emptyset$ and $C_l(\sigma) \leq \max\{C_j(\sigma) : j \in J_A^c\}$. Let $h \in J_A^c$ with $C_l(\sigma) \leq C_h(\sigma)$ be such that $C_h(\sigma)$ is as small as possible. Then $J^+(l) \subseteq (J^* \setminus \{z\}) \cup J^{\geq}(h)$, and so $q(J^+(l)) \leq q(J^* \setminus \{z\}) + q(J^{\geq}(h))$. Note that $C_h(\sigma) = p_h$, $q(J^* \setminus \{z\}) \leq \frac{D}{2}$ (from Lemma 4.2), and $p_h + q(J^{\geq}(h)) \leq D$ (from (25)). Then we have $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) \leq p_h + q(J^* \setminus \{z\}) + q(J^{\geq}(h)) \leq D + \frac{D}{2} = \frac{3}{2}D$, as required. This completes the discussion of Case 1.

Case 2. $n_A + n_A^c \geq m$. Then $\sigma$ is obtained by Algorithm APP in Step 3, and so $l \in J_A \cup J_A^c$ and $p_l > \frac{D}{2}$. If $l'' \in J_L$, then $C_l(\sigma) = \frac{D}{2} + p_{l''}$ and $q(J^+(l)) \leq q(J^{\geq}(l))$. From (25), we have $p_l + q(J^{\geq}(l)) \leq D$. Thus, we have $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) \leq \frac{D}{2} + p_l + q(J^{\geq}(l)) \leq \frac{D}{2} + D = \frac{3}{2}D$, as required. If $l'' \in (J''_A \cup J''^c_A) \setminus J_L$, then $C_l(\sigma) \leq D$. From (26), we further have $q(J^+(l)) \leq q(J_A \cup J_A^c) \leq \frac{D}{2}$. Consequently, $D_{\max}(\sigma) = C_l(\sigma) + q(J^+(l)) \leq D + \frac{D}{2} = \frac{3}{2}D$, as required. This completes the proof of Theorem 4.1. □
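Throughout the case analysis, $D_{\max}(\sigma)$ is computed as $C_l(\sigma) + q(J^+(l))$, i.e., the start of the vehicle's last busy period plus the total delivery time shipped in it. The same value can be obtained by simulating the vehicle directly; the helper below is our own illustration (assuming, as the paper's arithmetic suggests, that $q_j$ occupies the vehicle for the entire shipment of job $j$):

```python
def delivery_makespan(completions, deliveries):
    """Simulate the single vehicle: jobs are shipped one per trip, in
    nondecreasing order of their processing completion times, and each
    trip occupies the vehicle for the job's delivery time.  Returns
    D_max, the time at which the last shipment is finished."""
    free = 0                              # time the vehicle becomes idle
    for c, q in sorted(zip(completions, deliveries)):
        free = max(free, c) + q           # wait for the job, then ship it
    return free
```

For completion times $(1, 1)$ and delivery times $(2, 2)$, the vehicle is busy on $[1, 5]$, matching $C_l(\sigma) + q(J^+(l)) = 1 + 4 = 5$.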

Theorem 4.2. The upper bound $\frac{3}{2}$ established in Theorem 4.1 is tight.

Proof. We construct an instance as follows:

• We have $m$ identical machines $1, 2, \cdots, m$ and $n = 2m$ jobs $1, 2, \cdots, 2m$.

• For the first $m$ jobs, the processing time $p_i$ and the delivery time $q_i$ are given by

$p_i = 1$ and $q_i = 1 + \frac{1}{m}$, $1 \leq i \leq m$. (27)

• For the last $m$ jobs, the processing time $p_{m+i}$ and the delivery time $q_{m+i}$ are given by

$p_{m+i} = 1 + m$ and $q_{m+i} = 1 - \frac{2}{m}$, $1 \leq i \leq m$. (28)

From (27) and (28), we have $p(N) = 2m + m^2$, $q_{\min}(N) = 1 - \frac{2}{m}$, and $\max_{j \in N}\{p_j + q(J^{\geq}(j))\} = \max\{1 + m(1 + \frac{1}{m}) + m(1 - \frac{2}{m}),\ 1 + m + m(1 - \frac{2}{m})\} = 2m$. From the definition of $D$ in (23), we can easily verify that $D = 2m$. From Lemma 4.1, we have $D_{\max}(\sigma^*) \geq D = 2m$. Furthermore, we can obtain a feasible schedule $\sigma'$ in the following way: assign the first $m$ jobs to the $m$ machines separately in the time interval $[0, 1]$, and the last $m$ jobs to the $m$ machines separately in the time interval $[1, m + 2]$. It can be verified that $D_{\max}(\sigma') = 2m$. Hence, we have $D_{\max}(\sigma^*) = 2m$.

For the above instance, no matter which set $J^*$ is returned by algorithm GA, we have $J_A \cup J_A^c = \{m + 1, m + 2, \cdots, 2m\}$, and so $|J_A \cup J_A^c| = m$. This implies that Algorithm APP executes only Step 3. Furthermore, it can be verified that $\delta = 1$ in Step 3 of APP. Then, in the schedule $\sigma$ returned by Algorithm APP, each job $j \in \{m + 1, m + 2, \cdots, 2m\}$ is partitioned into two pieces $j'$ and $j''$ so that $p_{j'} = 1$ and $p_{j''} = m$; the jobs in $\{j : 1 \leq j \leq m\} \cup \{j' : m + 1 \leq j \leq 2m\}$ are scheduled in the interval $[0, m]$ on two machines, and the pieces in $\{j'' : m + 1 \leq j \leq 2m\}$ are scheduled in the interval $[m, 2m]$ on the $m$ machines separately. It can be verified that $D_{\max}(\sigma) = 3m - 2$. Consequently, $\lim_{m \to \infty} \frac{D_{\max}(\sigma)}{D_{\max}(\sigma^*)} = \lim_{m \to \infty} \frac{3m - 2}{2m} = \frac{3}{2}$. The result follows. □
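The arithmetic of this instance is easy to check numerically. The sketch below is ours, not the paper's; it fixes one plausible LS-Pmtn outcome for Step (3.2) (the small jobs listed first, sharing two machines on $[0, m]$, so the $k$-th small job completes at time $\lceil k/2 \rceil$) and treats $q_j$ as the vehicle time of job $j$'s shipment:

```python
from fractions import Fraction as F

def vehicle(trips):
    """Deliver (completion_time, delivery_time) pairs one per trip, in
    nondecreasing order of completion time; return the time at which
    the last shipment is finished."""
    t = F(0)
    for c, q in sorted(trips):
        t = max(t, c) + q
    return t

def tight_instance(m):
    """Instance of Theorem 4.2: m small jobs (p = 1, q = 1 + 1/m) and
    m large jobs (p = 1 + m, q = 1 - 2/m).  Returns (D_max of the
    assumed APP schedule, D_max of the optimal schedule)."""
    q_small, q_large = 1 + F(1, m), 1 - F(2, m)
    # Step (3.1): delta = (p(J_A) + p(J_A^c) - m*D/2)/(n_A + n_A^c) = 1.
    assert F(m * (1 + m) - m * m, m) == 1
    # Assumed LS-Pmtn outcome of Step (3.2): the m small jobs and the m
    # unit pieces j' share two machines on [0, m], small jobs first, so
    # the k-th small job completes at time ceil(k/2); every large job
    # finishes at time 2m (its piece j'' runs alone on [m, 2m]).
    app = [((k + 1) // 2, q_small) for k in range(1, m + 1)]
    app += [(2 * m, q_large)] * m
    # Optimal schedule: small jobs on [0, 1], large jobs on [1, m + 2],
    # one machine each.
    opt = [(1, q_small)] * m + [(m + 2, q_large)] * m
    return vehicle(app), vehicle(opt)
```

With these assumptions the computed values are $3m - 2$ and $2m$, so the ratio approaches $\frac{3}{2}$ as $m$ grows.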

Acknowledgements

This research was supported by NSFC (11271338, 11171313, 11426094 and 11301528).

References

[1] Y.C. Chang, C.Y. Lee, Machine scheduling with job delivery coordination, European J. Oper. Res. 158 (2004) 470–487.
[2] J.M. Dong, A. Zhang, Y. Chen, Q.F. Yang, Approximation algorithms for two-machine open shop scheduling with batch and delivery coordination, Theoret. Comput. Sci. 491 (2013) 94–102.
[3] T. Ganesharajah, N.G. Hall, C. Sriskandarajah, Design and operational issues in AGV-served manufacturing systems, Ann. Oper. Res. 76 (1998) 109–154.
[4] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, 1979.
[5] R.L. Graham, E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, Optimization and approximation in deterministic sequencing and scheduling: a survey, Ann. Discrete Math. 5 (1979) 287–326.
[6] D.D. Gemmill, J.W. Stevens, Scheduling a two-machine flowshop with travel times to minimize maximum lateness, Int. J. Prod. Res. 35 (1997) 1–15.
[7] J. Hurink, S. Knust, Makespan minimization for flow-shop problems with transportation times, Discrete Appl. Math. 112 (2001) 199–216.
[8] L.A. Hall, D.B. Shmoys, Jackson's rule for single-machine scheduling: making a good heuristic better, Math. Oper. Res. 17 (1992) 22–35.
[9] Y. He, W.Y. Zhong, H.K. Gu, Improved algorithms for two single machine scheduling problems, Theoret. Comput. Sci. 363 (2006) 257–265.
[10] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Nav. Res. Logist. Q. 1 (1954) 61–68.
[11] H. Kise, On an automated two-machine flowshop scheduling problem with infinite buffer, J. Oper. Res. Soc. Japan 34 (1991) 354–361.
[12] C.Y. Lee, Z.L. Chen, Machine scheduling with transportation considerations, J. Sched. 4 (2001) 3–24.
[13] L.F. Lu, J.J. Yuan, Single machine scheduling with job delivery to minimize makespan, Asia-Pac. J. Oper. Res. 25 (2008) 1–10.
[14] L.F. Lu, J.J. Yuan, Unbounded parallel batch scheduling with job delivery to minimize makespan, Oper. Res. Lett. 36 (2008) 477–480.
[15] L.F. Lu, J.J. Yuan, L.Q. Zhang, Single machine scheduling with release dates and job delivery to minimize the makespan, Theoret. Comput. Sci. 393 (2008) 102–108.
[16] R. McNaughton, Scheduling with deadlines and loss functions, Manage. Sci. 6 (1959) 1–12.
[17] P.L. Maggu, G. Das, On 2 × n sequencing problem with transportation times of jobs, Pure Appl. Math. Sci. 12 (1980) 1–6.


[18] P.L. Maggu, G. Das, R. Kumar, On equivalent job-for-job block in 2 × n sequencing problem with transportation times, J. Oper. Res. Soc. Japan 24 (1981) 136–146.
[19] S.S. Panwalkar, Scheduling of a two-machine flowshop with travel time between machines, Oper. Res. 42 (1991) 609–613.
[20] C.N. Potts, Analysis of a heuristic for one machine sequencing with release dates and delivery times, Oper. Res. 28 (1980) 1436–1441.
[21] V.A. Strusevich, A heuristic for the two-machine open-shop scheduling problem with transportation times, Discrete Appl. Math. 93 (1999) 287–304.
[22] A. Soukhal, A. Oulamara, P. Martineau, Complexity of flow shop scheduling problems with transportation constraints, European J. Oper. Res. 161 (2005) 32–41.
[23] C.S. Su, J.C.H. Pan, T.S. Hsu, A new heuristic algorithm for the machine scheduling problem with job delivery coordination, Theoret. Comput. Sci. 410 (2009) 2581–2591.
[24] H.I. Stern, G. Vitner, Scheduling parts in a combined production-transportation work cell, J. Oper. Res. Soc. 41 (1990) 625–632.
[25] G.J. Woeginger, Heuristics for parallel machine scheduling with delivery times, Acta Inform. 31 (1994) 503–512.
[26] X.L. Wang, T.C.E. Cheng, Machine scheduling with an availability constraint and job delivery coordination, Naval Res. Logist. 54 (2007) 11–20.
[27] X.L. Wang, T.C.E. Cheng, Production scheduling with supply and delivery considerations to minimize the makespan, European J. Oper. Res. 194 (2009) 743–752.
[28] J.J. Yuan, A. Soukhal, Y.J. Chen, L.F. Lu, A note on the complexity of flow shop scheduling with transportation constraints, European J. Oper. Res. 178 (2007) 918–925.
[29] W.Y. Zhong, G. Dósa, Z.Y. Tan, On the machine scheduling problem with job delivery coordination, European J. Oper. Res. 182 (2007) 1057–1072.