Operations Research Letters 42 (2014) 21–26
The coupled unit-time operations problem on identical parallel machines with respect to the makespan

Alix Munier-Kordon a, Djamal Rebaine b,∗

a Laboratoire LIP6/département SOC, Université Pierre et Marie Curie, 4 place Jussieu, 75252 Paris Cedex 05, France
b Département d'informatique et de Mathématique, Université du Québec à Chicoutimi, 555, Boul. de l'Université, Saguenay (Québec), Canada G7H 2B1
∗ Corresponding author. E-mail addresses: [email protected] (A. Munier-Kordon), [email protected], [email protected] (D. Rebaine).

Article history: Received 25 January 2013; received in revised form 15 November 2013; accepted 15 November 2013; available online 22 November 2013.

Abstract: This paper addresses the problem of scheduling n unit-time coupled operations on m identical parallel machines with minimum time delay considerations so as to minimize the overall completion time, known as the makespan. Two approximation algorithms, along with their worst-case analysis, are presented.

Keywords: Coupled operations; Makespan; Special case; Time delays; Worst-case analysis
1. Introduction

The scheduling problem we address in this paper can be described as follows. We are given a set J = {1, . . . , n} of n jobs and a set P = {1, . . . , m} of m identical parallel machines. Each job comprises two unit-time operations, each of which can be processed by any of the m machines. Furthermore, these two operations must be processed at least a given time interval apart, the time delay τj associated with job j ∈ J. In other words, if C1(j) and S2(j) denote, respectively, the completion time of the first operation and the starting time of the second operation of job j in a valid schedule, then S2(j) ≥ C1(j) + τj. Such jobs are known as coupled unit-time operations. We seek a schedule that minimizes the overall completion time, known as the makespan.

Let us mention that there also exists literature on other variants of the above problem, in which the time that must elapse between the two operations of the same job is exact, or lies between minimum and maximum values. The case we are investigating in this paper is known as the minimum time delay case. For more details, see e.g. [3,8].

Motivation for the formulated problem comes from real-world applications. The one most often cited in the literature is the single
radar application, corresponding to the case m = 1: the two operations of a job correspond to a pulse transmission and a pulse reception, while the time delay corresponds to the time needed for the pulse to travel to the target and be reflected back to the radar. More details on this scheduling problem may be found in [3,6,7,9]. Another potential application occurs in chemical plants or production systems where the same machine is used to perform several operations on the same job. In this context, after completing one of its operations, a job has to wait for a given time, due to chemical reactions or cooling processes, before starting its next operation on the same machine [4]. In the present paper, we generalize this model to the case where more than one identical machine may operate in parallel (m ≥ 1). In addition to delineating the borderline between polynomiality and intractability, the assumption of unit-time operations may be justified by the fact that, in some cases, the processing times of the jobs are negligible compared to their associated time delays, and thus have little influence on the makespan.

The above scheduling problem is shown in [11] to be NP-hard in the strong sense, even for the restricted case of a single machine. Further complexity results and polynomial-time algorithms may be found in [1,2,5,10]. Therefore, the search for heuristic algorithms merits investigation through their worst-case analysis, by measuring the distance between the makespan of the solution that is produced and a lower bound on the optimal value. To the best of our knowledge, no such result has been reported for the minimum time delay case we consider in this paper, not even for m = 1.
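Although the model is stated abstractly, it is easy to encode. The following minimal Python sketch (our own illustration; the data layout and function names are not from the paper) represents an instance by its delay vector and checks that a candidate schedule respects the unit processing times, the machine capacities and the minimum time delays.

```python
# A minimal sketch (our own illustration, not from the paper): an instance is a
# delay vector tau and a number of machines m; a schedule gives, for each job,
# the start slots and machines of its two unit-time operations.

def is_valid(tau, m, schedule):
    """schedule[j] = (s1, m1, s2, m2): integer start slots and machines of the
    first and second operation of job j (0-indexed jobs and machines)."""
    busy = set()                          # (machine, slot) pairs already in use
    for j, (s1, m1, s2, m2) in enumerate(schedule):
        if not (0 <= m1 < m and 0 <= m2 < m and s1 >= 0):
            return False
        if s2 < s1 + 1 + tau[j]:          # S2(j) >= C1(j) + tau_j, with C1(j) = s1 + 1
            return False
        for key in ((m1, s1), (m2, s2)):
            if key in busy:               # at most one operation per machine and slot
                return False
            busy.add(key)
    return True

def makespan(schedule):
    return max(s2 + 1 for (_, _, s2, _) in schedule)

# Toy instance: n = 3 jobs, m = 2 machines, delays tau = (0, 2, 1).
tau, m = (0, 2, 1), 2
schedule = [(0, 0, 1, 0), (0, 1, 3, 1), (1, 1, 3, 0)]
assert is_valid(tau, m, schedule)
print(makespan(schedule))                 # 4
```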
Fig. 1. The structure of an optimal solution.
This paper is organized as follows. Section 2 presents definitions and preliminary results used in the remaining sections of the paper. In Sections 3 and 4, we discuss the approximation approach and undertake a worst-case analysis for two algorithms, respectively. The first algorithm, The Online Algorithm, produces an asymptotic worst-case ratio of (3/2 − 1/(2n)), while the second algorithm, The Sorting Algorithm, produces an asymptotic worst-case ratio of (5/4 + 4m/n). We also prove in Section 4 that when the time delays are distinct, The Sorting Algorithm generates an optimal solution for m > 1. Concluding remarks are presented in Section 5.
2. Preliminary results

In this section, we present some definitions and useful results needed in the forthcoming sections. Let us note that the results presented in the remainder of this paper hold for m ≥ 1 unless stated otherwise.

Definition 1. For each job j ∈ J, a valid schedule processes one of the two operations earlier than the other. Such an operation is called the first operation; the remaining operation is the second operation.

Definition 2. The optimal makespan produced by X ⊆ J is denoted by ωopt(X).

Lemma 1. There exists an optimal schedule in which the first operations of the set of all jobs are processed first, followed by the second operations.

Proof. Let us assume that a machine processes the second operation of some job k immediately before the first operation of some job j. These two operations can be interchanged, still yielding a valid schedule with the same makespan. Repeating this argument establishes the lemma.

Lemma 2. There exists an optimal solution in which (n mod m) machines process ⌈n/m⌉ first operations, and the remaining machines process ⌊n/m⌋ first operations of the set of the n jobs.

Proof. By Lemma 1, first operations may always be scheduled before second operations. Because of the precedence constraints, first operations may also be pushed to the left. The schedule obtained performs ⌊n/m⌋ × m first operations during the interval [0, ⌊n/m⌋]. The remaining (n mod m) first operations are scheduled at time ⌊n/m⌋. Thus, the statement of the lemma follows immediately.

Lemma 3. There exists an optimal solution in which, without an idle slot, from time ωopt(J) − ⌈n/m⌉, (n mod m) machines process ⌈n/m⌉ second operations, and, from time ωopt(J) − ⌊n/m⌋, the remaining machines process ⌊n/m⌋ second operations of the set of the n jobs.
Proof. Reversing the time and using the same argument as in Lemma 2 establishes Lemma 3.

Let us observe that, from Lemmas 2 and 3, the structure of an optimal solution may be visualized by Fig. 1, where p1 = ωopt(J) − ⌈n/m⌉ and p2 = ωopt(J) − ⌊n/m⌋. The next results are about establishing lower bounds on the makespan.

Lemma 4. Let C1(j) and C2(j) be the completion times of the first and the second operation of job j, respectively, in an optimal schedule. If n mod m = 0, then we have that

Σ_{j∈J} C1(j) − Σ_{j∈J} C2(j) + n ωopt(J) − n²/m = 0.

Proof. Without loss of generality, we may assume that an optimal solution satisfies Lemmas 2 and 3. If t1 = n/m, then we may deduce that the first operations are processed during the time interval T1 = [0, t1], and the second operations during the time interval T2 = [ωopt(J) − t1, ωopt(J)], as may be seen in Fig. 1. In other words, for any time slot in T1 (T2), the number of processed operations is equal to m. We therefore derive that

Σ_{j=1}^{n} C1(j) = m(1 + · · · + t1),

and

Σ_{j=1}^{n} C2(j) = n(ωopt(J) − t1) + m(1 + · · · + t1) = n(ωopt(J) − t1) + Σ_{j=1}^{n} C1(j).

Therefore, the result follows.

Theorem 1. ωopt(J) ≥ (Σ_{j=1}^{n} τj)/n + 1 + ⌊n/m⌋.

Proof. Let J′ ⊆ J be the set of the m × ⌊n/m⌋ jobs with the maximum time delays, and n′ = |J′|. Let also C1′(j) and C2′(j) be the completion times of the first and the second operation of job j ∈ J′, respectively, in an optimal schedule of J′ with the structure described by Lemmas 2 and 3. Clearly, we have that

C2′(j) − C1′(j) ≥ τj + 1, and

ωopt(J′) = ωopt(J′) − C2′(j) + (C2′(j) − C1′(j)) + C1′(j).
Thus,

ωopt(J′) ≥ ωopt(J′) − C2′(j) + τj + 1 + C1′(j).

Adding up the above inequalities for all j ∈ J′, we get that

n′ ωopt(J′) ≥ Σ_{j∈J′} C1′(j) + Σ_{j∈J′} τj + n′ − Σ_{j∈J′} C2′(j) + n′ ωopt(J′).

From the definition of J′ we have that n′ mod m = 0. Therefore, from Lemma 4 we derive that

Σ_{j∈J′} C1′(j) − Σ_{j∈J′} C2′(j) + n′ ωopt(J′) = n′²/m,

and thus

n′ ωopt(J′) ≥ Σ_{j∈J′} τj + n′ + n′²/m.

Since J′ ⊆ J consists of the n′ jobs with the largest time delays, and since n′/m = ⌊n/m⌋, it follows that

ωopt(J) ≥ ωopt(J′) ≥ (Σ_{j∈J′} τj)/n′ + 1 + n′/m ≥ (Σ_{j∈J} τj)/n + 1 + ⌊n/m⌋.

The result is thus established.
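The lower bound of Theorem 1, as reconstructed above, together with the trivial volume bound ⌈2n/m⌉ used later in the paper, can be evaluated directly. The short sketch below is our own illustration; the function names are ours and not from the paper.

```python
from math import ceil

def theorem1_bound(tau, m):
    """Lower bound of Theorem 1 as reconstructed: mean delay + 1 + floor(n/m)."""
    n = len(tau)
    return sum(tau) / n + 1 + n // m

def volume_bound(n, m):
    """Trivial bound: the 2n unit operations share m machines."""
    return ceil(2 * n / m)

tau, m = [4, 3, 2, 1, 0], 2
print(theorem1_bound(tau, m), volume_bound(len(tau), m))   # 5.0 5
```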
The next two sections deal with the worst-case analysis of two heuristics, viz. The Online Algorithm and The Sorting Algorithm. Before proceeding any further, let us first describe them. The first heuristic, The Online Algorithm, processes the first operations of the jobs on the first available of the m machines, according to an arbitrary ordering of the jobs with no relation to their time delays, starting with Machine 1. Then, the second operations are processed as soon as possible on the m machines as they become free. Note that the running time of this algorithm is clearly O(n). The second heuristic, The Sorting Algorithm, consists first of renaming the jobs so that the time delays satisfy τ1 ≥ τ2 ≥ · · · ≥ τn. The second step consists of assigning, in that order, the first operations of the n jobs to the first available of the m identical machines. Once this task is completed, the algorithm assigns the second operations of the jobs as soon as possible to the m identical parallel machines as they become free. Note that the running time of this algorithm is clearly dominated by that of the sorting procedure, which can be implemented in O(n log n) time. In what follows, heuristic γ = 1 (γ = 2) refers to The Online Algorithm (The Sorting Algorithm).

Definition 3. The makespan produced by heuristic γ, with respect to an instance of a given set of jobs X ⊆ J, is denoted by ωγ(X).

Definition 4. The release time of the second operation of job j, through heuristic γ, is δj = ⌈j/m⌉ + τj (job j being the j-th job scheduled by heuristic γ, its first operation completes at time ⌈j/m⌉).

We assume that the schedule produced by heuristic γ ∈ {1, 2} has at least one idle slot at a time instant t < ωγ(J) − 1; otherwise, it is optimal. Definition 5 now follows.

Definition 5. The smallest value r such that the second operations of the jobs of J within [r, ωγ(J) − 1] are processed by heuristic γ without any idle time slot on the machines is denoted by r⋆. The set of jobs for which the release time of the second operation is at least r⋆ is denoted by M = {j ∈ J, δj ≥ r⋆}.

Let us note that the makespan of heuristic γ verifies ωγ(J) = r⋆ + ⌈|M|/m⌉. In what follows, we present another lower bound on the optimal makespan for set M.

Lemma 5. ωopt(M) ≥ 1 + r⋆ − ⌈n/m⌉ + ⌈|M|/m⌉.

Proof. By definition, we have that r⋆ ≥ ⌈n/m⌉, since all the first operations are completed by time r⋆. Therefore, p = r⋆ − ⌈n/m⌉ is a non-negative integer. Moreover, for any j ∈ M we have δj ≥ r⋆ and δj = ⌈j/m⌉ + τj ≤ ⌈n/m⌉ + τj, so that τj ≥ p. Thus, in the best case, for the optimal solution, every task of set M is available at time p + 1. It then follows that 1 + p + ⌈|M|/m⌉ is a lower bound on the optimal makespan for set M. Therefore, the result follows.
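Both heuristics are greedy list schedulers, and their makespans can be obtained by a direct simulation. The sketch below is our own reading of the description above (names are ours, and the simulation assumes, as the analysis does, that all first operations are packed before any second operation); it is not part of the paper.

```python
def greedy_makespan(tau, m, sort_delays=False):
    """Greedy heuristics sketched from the paper's description (our own code,
    with our own names).  sort_delays=False mimics The Online Algorithm (jobs
    taken in the given order); sort_delays=True mimics The Sorting Algorithm
    (jobs renumbered by non-increasing time delay)."""
    n = len(tau)
    order = sorted(range(n), key=lambda j: -tau[j]) if sort_delays else range(n)
    # First operations are packed greedily, so the job placed in position p
    # (0-indexed) completes its first operation at slot p // m + 1, i.e. the
    # j-th job (1-indexed) completes at ceil(j/m).  Its second operation is
    # then released at delta_j = ceil(j/m) + tau_j, as in Definition 4.
    releases = sorted(pos // m + 1 + tau[j] for pos, j in enumerate(order))
    # n mod m machines become free at ceil(n/m), the others at floor(n/m).
    free_at = [n // m] * (m - n % m) + [n // m + 1] * (n % m)
    # Second operations are unit-time, so serving them by non-decreasing release
    # time on the earliest available machine realizes the "as soon as possible"
    # rule of both heuristics.
    for r in releases:
        i = min(range(m), key=lambda k: free_at[k])
        free_at[i] = max(free_at[i], r) + 1
    return max(free_at)

tau = [0, 0, 0, 5]
print(greedy_makespan(tau, m=2))                    # Online order: 8
print(greedy_makespan(tau, m=2, sort_delays=True))  # Sorting order: 7 = max(tau) + 2
```

For instance, with τ = (0, 0, 0, 5) and m = 2, processing the jobs in the given order yields makespan 8, whereas sorting them by non-increasing delay yields 7 = τmax + 2, which is optimal for this small instance.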
3. The Online Algorithm

In this section, we address the worst-case performance of The Online Algorithm.

Theorem 2. ω1(J)/ωopt(J) ≤ 3/2 − 1/(2n). Furthermore, this bound is asymptotically tight.

Proof. Let us recall from Lemma 5 that

ωopt(M) ≥ 1 + r⋆ − ⌈n/m⌉ + ⌈|M|/m⌉.

Thus,

ω1(J) = r⋆ + ⌈|M|/m⌉ ≤ ωopt(M) − 1 + ⌈n/m⌉ ≤ ωopt(J) − 1 + ⌈n/m⌉.

Now, since ⌈n/m⌉ = ⌊(n + m − 1)/m⌋, we may deduce that

ω1(J) ≤ (n + m − 1)/m − 1 + ωopt(J).

It then follows that

ω1(J)/ωopt(J) ≤ (n − 1)/(m ωopt(J)) + 1.

Since ωopt(J) ≥ ⌈2n/m⌉ ≥ 2n/m, we derive that

ω1(J)/ωopt(J) ≤ (n − 1)/(2n) + 1 = 3/2 − 1/(2n).

The asymptotic tightness of the above bound is achieved by the following instance I of set J. Indeed, consider n = km jobs with τj = 0 for j = 1, . . . , n − 1, and τn = 2k − 2. If the jobs are processed from 1 to n, in this order, then The Online Algorithm schedules the first operations of jobs j = 1, . . . , (k − 1)m on the m machines from time 0 up to time k − 1. At time k − 1, the first operations of jobs j = (k − 1)m + 1, . . . , km − 1 are processed on the first (m − 1) machines, and the first operation of job n = km on machine m. The second operations of the first km − 1 jobs start their processing from time k up to time 2k − 1. The second operation of job n = km can only start its processing at time k + 2k − 2, and thus finishes at time ω1(I) = 3k − 1.

An optimal schedule for the above instance I may be built as follows. Process, on the first machine, the first and the second operation of job n = mk at time 0 and 2k − 1, respectively. The rest of the jobs is processed as follows: the first and the second operation of job j, j = 1, . . . , n − 1, start at time ⌊j/m⌋ and k + ⌊(j − 1)/m⌋, respectively. The resulting schedule produces a makespan of value ωopt(I) = 2k. It then follows that

ω1(I)/ωopt(I) = (3k − 1)/(2k) → 3/2 as k → ∞.
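As a sanity check of the tight instance above, the following self-contained sketch (our own code; it repeats the greedy simulation of the earlier sketch so that it runs on its own) evaluates The Online Algorithm on n = km jobs with τj = 0 for j < n and τn = 2k − 2, and compares the result with the optimal value 2k stated above.

```python
from fractions import Fraction

def online_makespan(tau, m):
    # Same greedy simulation as in the earlier sketch, specialized to the given
    # (arbitrary) job order used by The Online Algorithm.
    n = len(tau)
    releases = sorted(j // m + 1 + tau[j] for j in range(n))
    free_at = [n // m] * (m - n % m) + [n // m + 1] * (n % m)
    for r in releases:
        i = min(range(m), key=lambda k: free_at[k])
        free_at[i] = max(free_at[i], r) + 1
    return max(free_at)

m = 3
for k in (2, 5, 50):
    n = k * m
    tau = [0] * (n - 1) + [2 * k - 2]
    w1 = online_makespan(tau, m)          # expected: 3k - 1
    print(k, w1, Fraction(w1, 2 * k))     # the optimum of this family is 2k
# The ratio (3k - 1) / (2k) approaches the bound 3/2 as k grows.
```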
4. The Sorting Algorithm

This section is devoted to the analysis of The Sorting Algorithm. We first discuss a special case in which this heuristic produces an optimal solution. Then, we present the worst-case analysis of this heuristic.

4.1. Distinct time delays

We assume here that the jobs are associated with time delays that are all different from each other. For the sake of simplicity, the jobs are renamed so that τ1 > · · · > τn. This case is well solvable, as shown below.

Theorem 3. If m > 1 and the time delays are distinct, non-negative, and integral, then The Sorting Algorithm generates an optimal solution for the m-machine problem with coupled unit-time operations.

Proof. Let us first observe that ⌈2n/m⌉ and τ1 + 2 are two obvious lower bounds on the makespan. However, in our case, we necessarily have ⌈2n/m⌉ < τ1 + 2. Indeed, let us suppose this is not the case. Then, since the time delays are distinct, integral and non-negative, we get τ1 ≥ n − 1, and thus ⌈2n/m⌉ ≥ τ1 + 2 ≥ n + 1 > n, a contradiction since m > 1 implies ⌈2n/m⌉ ≤ n. Now, since The Sorting Algorithm is greedy and starts with Machine 1, it is clear that the time delays of the first operations on that machine are greater than those of the first operations on the rest of the machines. Furthermore, since the time delays are distinct, and the processing times of the jobs are of unit length, the number of first and second operations on Machine 1 is at least as large as on any other machine. Therefore, the makespan is attained on Machine 1. Let us now consider the first operations of two jobs i and j processed consecutively, in this order, on Machine 1; then j = i + m. As the time delays are distinct and integral, τi > τ_{i+1} > · · · > τ_{i+m} = τj, so that τi ≥ τj + m ≥ τj + 2 since m ≥ 2. It follows that the start time of the second operation of job i is strictly greater than the start time of the second operation of job j. Therefore, as job 1 is processed first on Machine 1, the makespan generated by The Sorting Algorithm on Machine 1 is clearly τ1 + 2. The result is thus established.

Let us observe that the above result remains valid for a set of jobs that satisfy τj ≥ τj+m + 2, for j ∈ {1, . . . , n − m}.
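Theorem 3 can also be checked experimentally: for instances with distinct integral delays and m > 1, the makespan of The Sorting Algorithm should coincide with the lower bound τ1 + 2. The script below is our own sketch (it reuses the same greedy simulation, repeated here so the block is self-contained); if our reconstruction of the heuristic is faithful, the assertion holds on all generated instances.

```python
import random

def sorting_makespan(tau, m):
    # Same greedy simulation as before, with the jobs renumbered by
    # non-increasing time delay (The Sorting Algorithm).
    tau = sorted(tau, reverse=True)
    n = len(tau)
    releases = sorted(j // m + 1 + tau[j] for j in range(n))
    free_at = [n // m] * (m - n % m) + [n // m + 1] * (n % m)
    for r in releases:
        i = min(range(m), key=lambda k: free_at[k])
        free_at[i] = max(free_at[i], r) + 1
    return max(free_at)

random.seed(0)
for _ in range(200):
    m = random.randint(2, 5)
    n = random.randint(m, 20)
    tau = random.sample(range(3 * n), n)     # distinct non-negative integral delays
    # tau_1 + 2 is always a lower bound; Theorem 3 says the heuristic attains it.
    assert sorting_makespan(tau, m) == max(tau) + 2
print("Theorem 3 checked on 200 random distinct-delay instances")
```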
4.2. Arbitrary time delays

We present in this section the worst-case analysis of The Sorting Algorithm when the time delays are arbitrary. Without loss of generality, we assume that the makespan ω2(J) produced by The Sorting Algorithm for an instance of set J is such that ω2(J) > ⌈2n/m⌉. The idea behind the proof of the worst-case performance of The Sorting Algorithm is to first analyse a special family of instances, and then, in a second step, to show that the worst-case performance on a general instance is no larger than that of the special instances considered first.

4.2.1. A special case

The special case we consider here is a set J of n jobs defined by a value ρ ≥ ⌈n/m⌉ and the time delays τj = ρ − ⌈j/m⌉. By construction, we see that the second operations of the jobs are all available at time slot ρ (i.e. δj = ρ for any j ∈ J). It then follows that the makespan of The Sorting Algorithm is ω2(J) = ρ + ⌈n/m⌉.

Let us first prove the following technical result, needed to evaluate a lower bound on the makespan of an optimal schedule.

Lemma 6. Σ_{j=1}^{n} ⌈j/m⌉ = (m/2)⌈n/m⌉² + (m/2)⌈n/m⌉ + (n − m⌈n/m⌉)⌈n/m⌉.

Proof. We have that

Σ_{j=1}^{n} ⌈j/m⌉ = m Σ_{α=1}^{⌊n/m⌋} α + (n − m⌊n/m⌋)⌈n/m⌉ = (m/2)⌊n/m⌋(⌊n/m⌋ + 1) + (n − m⌊n/m⌋)⌈n/m⌉.

Therefore,

Σ_{j=1}^{n} ⌈j/m⌉ = (m/2)⌈n/m⌉² + (m/2)⌈n/m⌉ + (n − m⌈n/m⌉)⌈n/m⌉.

The result is thus established.

Lemma 7. If τ̄ = (Σ_{i=1}^{n} τi)/n, then τ̄ ≥ ρ − n/(2m) − 2.

Proof. From Lemma 6, we derive that

n τ̄ = nρ − Σ_{i=1}^{n} ⌈i/m⌉ = nρ − n⌈n/m⌉ + (m/2)⌈n/m⌉² − (m/2)⌈n/m⌉.

So,

τ̄ = ρ − ⌈n/m⌉ + (m/(2n))⌈n/m⌉² − (m/(2n))⌈n/m⌉.

Since n/m ≤ ⌈n/m⌉ ≤ n/m + 1, we then get that

τ̄ ≥ ρ − (n/m + 1) + (m/(2n))(n/m)² − (m/(2n))(n/m + 1).

Therefore, we have that

τ̄ ≥ ρ − n/(2m) − 3/2 − m/(2n).

Since m ≤ n, we have that −m/n ≥ −1. Thus, the result follows.
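Lemma 6 is a purely arithmetic identity and Lemma 7 an elementary consequence of it, so both can be checked mechanically. The small script below (our own code) compares the closed form of Lemma 6, as reconstructed here, with a direct summation, and verifies the bound of Lemma 7 for the special delays τj = ρ − ⌈j/m⌉.

```python
from math import ceil

def lemma6_closed_form(n, m):
    c = ceil(n / m)
    return m * c * (c + 1) // 2 + (n - m * c) * c

for m in range(1, 8):
    for n in range(1, 61):
        direct = sum(ceil(j / m) for j in range(1, n + 1))
        assert direct == lemma6_closed_form(n, m)        # Lemma 6 identity
        if m <= n:
            rho = ceil(n / m) + 3                        # any rho >= ceil(n/m) will do
            tau_bar = sum(rho - ceil(j / m) for j in range(1, n + 1)) / n
            assert tau_bar >= rho - n / (2 * m) - 2      # Lemma 7 bound
print("Lemmas 6 and 7 verified for all tested (n, m)")
```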
We are now ready to prove the following result.

Theorem 4. If J denotes a set of jobs defined by a value ρ ≥ ⌈n/m⌉ and τj = ρ − ⌈j/m⌉, then we have that

ω2(J)/ωopt(J) ≤ 5/4 + 3/ωopt(J).

Moreover, this bound is asymptotically tight.

Proof. From Theorem 1 and Lemma 7, we have that

ωopt(J) ≥ τ̄ + 1 + ⌊n/m⌋ ≥ τ̄ + n/m ≥ ρ − n/(2m) − 2 + n/m = ρ + n/(2m) − 2.
Since ⌈2n/m⌉ is another lower bound on ωopt(J), we may derive that

ωopt(J) ≥ max{2n/m, ρ + n/(2m) − 2}.

Now, as noticed above, we have that

ω2(J) = ρ + ⌈n/m⌉ ≤ ρ + 1 + n/m ≤ (ρ + n/(2m) − 2) + (n/(2m) + 3).

We therefore get that

ω2(J) ≤ ωopt(J) + (1/4) ωopt(J) + 3,

since n/(2m) = (2n/m)/4 ≤ ωopt(J)/4. Dividing the above inequality by ωopt(J) establishes the result.

4.2.2. General case

We suppose here that we are given a general instance on the set J. We show in this section that the worst-case performance of The Sorting Algorithm for the general instance is no larger than that of the special case described in Section 4.2.1. Let us denote by t1, . . . , tk the time instants at which the first operations of the jobs in the set M = {j ∈ J, δj ≥ r⋆} are completed.

Lemma 8. If i is a job in J − M such that its first operation is completed at time tα − 1 with α ∈ {1, . . . , k}, then δi = r⋆ − 1.

Proof. Let j be a job in M such that its first operation is completed at time tα. Then, since job i is processed earlier, we get i < j and hence τi ≥ τj. Since i ∈ J − M, we have that δi = ⌈i/m⌉ + τi < r⋆. Again, since j ∈ M, we also have that δj = ⌈j/m⌉ + τj ≥ r⋆. Now, ⌈j/m⌉ = 1 + ⌈i/m⌉. It then follows that

r⋆ > ⌈i/m⌉ + τi ≥ ⌈i/m⌉ + τj = ⌈j/m⌉ − 1 + τj ≥ r⋆ − 1.

So, τi = τj and δi = ⌈i/m⌉ + τi = r⋆ − 1. The result is thus established.

Lemma 9. For any α ∈ {1, . . . , k}, we have that tα = α.

Proof. It follows, from the definition of r⋆, that there is at least one idle slot at time r⋆ − 1. Let α ∈ {1, . . . , k} and suppose that tα − 1 ≥ 1 is not one of the instants t1, . . . , tk; the m first operations completed at time tα − 1 then all belong to jobs of J − M. From Lemma 8, we know that their release dates are all equal to r⋆ − 1, which is impossible by the definition of r⋆. So, we must have that t1 = 1, and the k time instants t1, . . . , tk are consecutive. Thus, the result follows.

Lemma 10. The number of jobs in J − M with first operation completed in [t1, tk − 1] is bounded by m − 1.

Proof. From Lemma 8, we have that δj = r⋆ − 1 for any job j ∈ J − M satisfying the assumption of the lemma. Since there is at least one idle slot at time r⋆ − 1, the result follows immediately.

Now, let us build a new instance J′ from J as follows:
1. The jobs of J′ correspond to the jobs of J with first operation completed in [t1, tk − 1].
2. The release date of each job in J′ is exactly r⋆ − 1. So, for any j ∈ J′, we set τj′ = r⋆ − 1 − ⌈j/m⌉.

Lemma 11. δj ≥ r⋆ − 1 and τj ≥ τj′ for any j ∈ J′.

Proof. Let us consider a job j ∈ J′.
1. If j ∈ M, then δj ≥ r⋆ by assumption. Now, τj′ = r⋆ − 1 − ⌈j/m⌉ ≤ δj − 1 − ⌈j/m⌉ and δj = ⌈j/m⌉ + τj. So, we get τj′ ≤ τj − 1.
2. Otherwise, j ∈ J′ − M and δj = r⋆ − 1, following Lemma 8. Now, δj = ⌈j/m⌉ + τj and r⋆ − 1 = ⌈j/m⌉ + τj′, so τj = τj′.
Thus, the result of the lemma follows.

We are now ready to prove the following result.

Theorem 5. ω2(J)/ωopt(J) ≤ ω2(J′)/ωopt(J′) + 2/ωopt(J).

Proof. Let us consider a job j ∈ M − J′. Then, by definition of the time instants t1, . . . , tk, we must have ⌈j/m⌉ = tk. So, |M − J′| ≤ m and |M| = |M − J′| + |M ∩ J′| ≤ m + |J′|. Hence, ⌈|J′|/m⌉ + 1 ≥ ⌈|M|/m⌉. The makespan produced by The Sorting Algorithm for instance J′ is ω2(J′) = r⋆ − 1 + ⌈|J′|/m⌉, while the makespan for instance J is ω2(J) = r⋆ + ⌈|M|/m⌉. So, ω2(J) − ω2(J′) ≤ 2. Now, from Lemma 11, we deduce that ωopt(J) ≥ ωopt(J′), and the result follows from the two previous inequalities.

Corollary 1. ω2(J)/ωopt(J) ≤ 5/4 + 4m/n. Moreover, this bound is asymptotically tight.

Proof. From Theorems 4 and 5, we may deduce that

ω2(J)/ωopt(J) ≤ ω2(J′)/ωopt(J′) + 2/ωopt(J) ≤ 5/4 + 3/ωopt(J′) + 2/ωopt(J).

Now, we have that ωopt(J) ≥ 2n/m and ωopt(J′) ≥ r⋆ ≥ n/m, so that 3/ωopt(J′) + 2/ωopt(J) ≤ 3m/n + m/n = 4m/n. Thus, the upper bound of the corollary is established.

The asymptotic tightness of the above bound is achieved by the following instance I of J. Indeed, consider n = (2k + 1)m jobs with τj = (3k + 1) − ⌈j/m⌉ for any j ∈ {1, . . . , n}. Applying The Sorting Algorithm, the first operation of job j ∈ {1, . . . , n} finishes at time ⌈j/m⌉. Thus, its second operation is available at time δj = 3k + 1. It then follows that ω2(I) = 3k + 1 + ⌈n/m⌉ = 5k + 2. An optimal schedule for the above instance I may be built as follows. If ⌈j/m⌉ is odd, we set the start times of the first and the second operation of job j to (⌈j/m⌉ + 1)/2 − 1 and 3k + 2 − (⌈j/m⌉ + 1)/2, respectively. If ⌈j/m⌉ is even, we set the start times of the first and the second operation of job j to k + ⌈j/m⌉/2 and 4k + 2 − ⌈j/m⌉/2, respectively. So, this schedule produces a makespan of value ωopt(I) = 4k + 2. Hence,
ω2(I)/ωopt(I) = (5k + 2)/(4k + 2) → 5/4 as k → ∞.

5. Conclusion

The special case of distinct time delays is solved to optimality by The Sorting Algorithm when m > 1. The complexity status of the corresponding problem for m = 1 is an open question. In addition, it would be of interest to propose other patterns of time delays for which the problem can be well solved. Other heuristic algorithms, with better worst-case ratios than the ones we proposed, should also be developed. Finally, exploring the exact time delay case for the problem introduced in this paper would be another challenging direction of research.
Acknowledgments

This research is partially funded, for Djamal Rebaine, by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Ministère des Relations Internationales du Québec, and the Consulat Général de France à Montréal.

References

[1] D. Ahr, J. Bekesi, G. Galambos, M. Oswald, G. Reinelt, An exact algorithm for scheduling identical coupled tasks, Math. Methods Oper. Res. 59 (2004) 193–203.
[2] Ph. Baptiste, A note on scheduling identical coupled tasks in logarithmic time, Discrete Appl. Math. 158 (2010) 583–587.
[3] J. Blazewicz, E. Ecker, K. Kis, C.N. Potts, M. Tanas, J.D. Whitehead, Scheduling of coupled tasks with unit processing times, J. Sched. 13 (2010) 453–461.
[4] N. Brauner, G. Finke, V. Lehoux-Lebacque, C.N. Potts, J.D. Whitehead, Scheduling of coupled tasks and one-machine no-wait robotic cells, Comput. Oper. Res. 36 (2009) 301–307.
[5] P. Brucker, S. Knust, Complexity results for single-machine problems with positive finish-start time-lags, Computing 63 (1999) 299–316.
[6] J.N.D. Gupta, Single facility scheduling with two operations per job and time-lags, Preprint, 1994.
[7] A.J. Orman, C.N. Potts, A.K. Shahani, A.R. Moore, Scheduling for a multifunction phased array radar system, European J. Oper. Res. 90 (1996) 13–25.
[8] C.N. Potts, J.D. Whitehead, Heuristics for a coupled-operation problem, J. Oper. Res. Soc. 58 (2007) 1375–1388.
[9] R.D. Shapiro, Scheduling coupled tasks, Nav. Res. Logist. Q. 27 (1980) 477–481.
[10] G. Simonin, R. Giroudeau, J.C. Konig, Polynomial time algorithms for scheduling problems for coupled-tasks in the presence of treatment tasks, Electron. Notes Discrete Math. 36 (2010) 647–654.
[11] W. Yu, H. Hoogeveen, J.K. Lenstra, Minimizing makespan in a two-machine flow shop with delays and unit-time operations is NP-hard, J. Sched. 7 (2004) 333–348.