On the optimality conditions of the two-machine flow shop problem

Hatem HADDA, Najoua DRIDI, Mohamed Karim HAJJI

PII: S0377-2217(17)30850-0
DOI: 10.1016/j.ejor.2017.09.029
Reference: EOR 14707

To appear in: European Journal of Operational Research

Received date: 2 June 2015
Revised date: 14 September 2017
Accepted date: 18 September 2017

Please cite this article as: Hatem HADDA, Najoua DRIDI, Mohamed Karim HAJJI, On the optimality conditions of the two-machine flow shop problem, European Journal of Operational Research (2017), doi: 10.1016/j.ejor.2017.09.029

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Highlights

• We introduce new optimality conditions for the two-machine flow shop.
• We construct group sequences leading to large sets of optimal solutions.
• We discuss the asymptotic behavior of the makespan for randomly generated instances.
• We comment and point out errors in a previously published paper.

On the optimality conditions of the two-machine flow shop problem

Hatem HADDA∗, Najoua DRIDI, Mohamed Karim HAJJI

Université de Tunis El Manar, Ecole Nationale d'Ingénieurs de Tunis, Unité de recherche OASIS, BP 37, Le Belvédère, 1002 Tunis, Tunisie.

Abstract

This paper tackles the makespan minimization for the well-known two-machine flow shop problem. Several optimality conditions are discussed, aiming at the characterization of a large set of optimal solutions. We show that our approach dominates some of the results found in the literature. We also establish a number of necessary optimality conditions and discuss the asymptotic behavior of the optimal makespan when the number of jobs goes to infinity.

Keywords: Scheduling, Flow shop, Optimality conditions, Group sequence, Asymptotic behavior.

1. Introduction


This paper considers the minimization of the maximum completion time (i.e. makespan) for the well-known two-machine flow shop problem. The problem consists in scheduling a set J = {J1, J2, . . . , Jn} of n jobs. Each job Ji is composed of two operations OiA and OiB to be processed without preemption on machines A and B respectively. Operation OiB cannot start until OiA is finished. Furthermore, each machine cannot handle more than one operation at a time, and each job cannot be processed by more than one machine at a time. We recall that permutation schedules are dominant [8],

∗Corresponding author. Tel +21697772980. Fax +21673304688.
Email addresses: [email protected] (Hatem HADDA), [email protected] (Najoua DRIDI), [email protected] (Mohamed Karim HAJJI)


and that the problem is polynomially solvable by the well-known Johnson's algorithm [17]. The problem is denoted by F2|perm|Cmax. We are mainly interested in developing a sufficient condition of optimality that allows the characterization of a large set of optimal schedules.

One of the topics that have attracted much attention over the last few years is the development of new techniques generating a set of optimal solutions instead of a single one. These techniques are particularly important for robust scheduling approaches [16] as they offer the decision maker the possibility of switching from one solution to another in case of disruption. To achieve this goal, authors [2, 6, 10, 19] commonly exploit the notion of a group sequence. According to this concept, the jobs are partitioned into a number of groups (in which the order may be arbitrary), and then a sequence of groups is specified, which allows the characterization of a whole set of solutions. In some cases, such sets may be characterized in terms of properties over the jobs, which avoids the explicit enumeration.

Several authors have considered generating a large set of optimal solutions for the F2||Cmax problem. In [3], the authors are interested in the enumeration of all permutations satisfying Johnson's rule [17]. In [4], another enumeration-based approach is proposed that, starting from Johnson's sequences, constructs the complete set of optimal solutions by job permutations. This approach can only be used for small sizes (a few dozen jobs) due to the excessively high time complexity of the proposed algorithm. In [2, 10], the notion of a group sequence is used for the F2|perm|Cmax problem. In [2], the author develops a group sequence characterizing a set of sub-optimal solutions for which the worst makespan is bounded by some predefined threshold. In [10], the authors are interested in the minimization of the makespan under the constraint that the jobs have to be partitioned into a given number of groups. They show that the problem is NP-hard, and develop an approach that, starting from a given optimal sequence and some fixed upper bound on the makespan value, determines a group sequence in which the number of groups is minimized. In [6], the authors consider identifying a set of optimal solutions. The proposed algorithm constructs a partial order that gives the freedom of positioning a number of jobs without risking any increase in the makespan value.

The work presented in this paper fits in this context. We construct a group sequence allowing the characterization of a large number of optimal solutions. We particularly show that our approach dominates the one given in [6]. We also establish a number of necessary conditions for optimality,


and discuss the asymptotic behavior of the makespan for randomly generated instances.

The motivation behind this work emerges from several aspects. Indeed, the F2||Cmax is considered to be one of the few non-trivial problems for which a solution is known. For this reason, Johnson's rule has been extensively used for more complex shops [5, 7, 11, 12, 14, 18] to derive elimination rules, to identify polynomial particular cases, or simply to construct heuristics. In this perspective, generalizing or deriving new optimality conditions for the F2||Cmax may lead to new results for more general shops. Another aspect concerns the importance of methods characterizing sets of optimal solutions for robust scheduling approaches. Indeed, many researchers [2, 6, 10, 19] exploited the concept of group sequence to tackle the robustness issue.

The remainder of this paper is organized as follows. Section 2 presents the notations and establishes a number of basic definitions. Sections 3 and 4 introduce new sufficient conditions for the optimality of F2|perm|Cmax. Section 5 presents several comments on the work of Briand et al. [6]. Section 6 introduces a number of necessary conditions for optimality. Section 7 analyses the asymptotic behavior of the makespan, and finally, Section 8 gives some concluding remarks.

2. Notations and basic definitions

The following notations are used in the subsequent analysis.

• J = {J1, J2, . . . , Jn}: set of all jobs.
• ai: processing time of Ji ∈ J on A.
• bi: processing time of Ji ∈ J on B.
• J̲ = {Ji ∈ J | ai < bi} and J̄ = {Ji ∈ J | ai > bi}. Moreover, each job satisfying ai = bi is included (arbitrarily) either in J̲ or J̄.
• π = ⟨π1, π2, . . . , πn⟩: a permutation schedule, where πi is the job at the ith position.
• Sij(π) and Cij(π): start and finish time of operation Oij, i ∈ {1, . . . , n}, j ∈ {A, B}, in permutation π. We will drop the reference to π whenever no confusion can arise.


• Cmax(π): makespan of solution π.
• C*max: optimal makespan.
• π*: an optimal solution.
• Given a set G ⊆ J, we denote a(G) = Σ_{Ji∈G} ai and b(G) = Σ_{Ji∈G} bi (if G = ∅ then a(G) = b(G) = 0).


The concept of a group sequence is very useful to characterize large sets of solutions while avoiding their complete enumeration. In fact, a group sequence specifies a strict partial order over the jobs. We recall that a strict partial order (SPO for short) is defined by a pair R = (X, ≺R) where the binary relation ≺R on X × X is transitive and asymmetric. An SPO for which every pair (u, v) ∈ X × X, with u ≠ v, satisfies either u ≺R v or v ≺R u is called a total order. In our case, the set X corresponds to the set of jobs J, and we consider the precedence relation ≺ defined by: Ju ≺ Jv for a pair of jobs (Ju, Jv) ∈ J × J, u ≠ v, if Svj ≥ Cuj for j ∈ {A, B} (i.e. Ju precedes Jv). Moreover, we will overload the symbol ≺ to denote precedence constraints between disjoint groups and sequences of jobs. More specifically, when we write G ≺ H (with G ∩ H = ∅), then ∀(Ju, Jv) ∈ G × H we have Ju ≺ Jv, which means that the jobs of H cannot start until all the jobs of G are finished. In the case where G and H are two distinct sub-sequences, the relation reduces to a concatenation of sequences. Consider for instance the group sequence J1 ≺ G ≺ H defined over 5 jobs with the group G = {J2, J3} and the sub-sequence H = J4 ≺ J5. Here, the group sequence characterizes two solutions, namely J1 ≺ J2 ≺ J3 ≺ J4 ≺ J5 and J1 ≺ J3 ≺ J2 ≺ J4 ≺ J5. As can be seen, the question of determining the solutions characterized by a given group sequence (or equivalently by its associated SPO) amounts to finding all the total orders that are consistent with it.

We recall that F2|perm|Cmax can be solved by Johnson's rule (JR) [17]: Ji ≺ Jj if min(ai, bj) < min(aj, bi). If min(ai, bj) = min(aj, bi), the tie is broken arbitrarily. Based on this rule it is possible to derive an O(n log n) algorithm (see Algorithm 1) to construct an optimal sequence.

Furthermore, by using JR it is always possible to rearrange any given solution so that the jobs of J̲ are scheduled before those of J̄ without any increase in the makespan. Only schedules of this type will be considered


Algorithm 1 Johnson's algorithm
1: Schedule the jobs of J̲ in non-decreasing order of ai (let π̲Joh be the obtained sub-sequence).
2: Schedule the jobs of J̄ in non-increasing order of bi (let π̄Joh be the obtained sub-sequence).
3: Concatenate the two sub-permutations and consider the optimal solution πJoh = π̲Joh ≺ π̄Joh.
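As an illustration, here is a minimal Python sketch of Algorithm 1; the encoding of jobs as a dict of (ai, bi) pairs and the function name are our own illustrative choices.

    def johnson(jobs):
        """Johnson's algorithm: jobs maps a job name to its pair (a_i, b_i)."""
        # J̲: jobs with a_i <= b_i (jobs with a_i = b_i may go in either set);
        # J̄: the remaining jobs.
        J_under = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]),
                         key=lambda j: jobs[j][0])   # non-decreasing a_i
        J_over = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]),
                        key=lambda j: -jobs[j][1])   # non-increasing b_i
        return J_under + J_over                      # concatenation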


henceforth. Thereby, any given solution π can be written as π = π̲ ≺ π̄, where π̲ (respectively π̄) is a permutation of J̲ (respectively J̄). Another important characteristic of the F2|perm|Cmax problem is its reversibility. Indeed, the problem is equivalent to the inverse configuration in which the first machine is B and the second is A (with inverted routes for the jobs). Given a permutation π for F2|perm|Cmax, its corresponding makespan can be written as

M

Cmax (π) = max {

u X

1≤u≤n

i=1

aπ i +

n X i=u

bπi }.

(1)
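Formula (1) translates directly into code; the following sketch (with our own illustrative names) evaluates the makespan of a permutation this way.

    def makespan(perm, jobs):
        """Cmax of a permutation schedule, evaluated via formula (1)."""
        best, prefix_a = 0, 0
        suffix_b = sum(jobs[j][1] for j in perm)   # sum of b over positions u..n
        for j in perm:
            prefix_a += jobs[j][0]                 # sum of a over positions 1..u
            best = max(best, prefix_a + suffix_b)  # candidate crossover at u
            suffix_b -= jobs[j][1]
        return best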


A job Jz = πx that realizes Cmax(π) = Σ_{i=1}^{x} a_{πi} + Σ_{i=x}^{n} b_{πi} will be referred to as a crossover job. Note that for Jz, operation OzB starts directly after the completion of OzA (i.e. SzB = CzA), and that machine B is continuously busy starting from SzB.

Given a sub-permutation π̲ (over the jobs of J̲), any job π̲w that is preceded by an idle period on B is called a ground job and is denoted by gp = π̲w. Such a job satisfies Sπ̲wB = Cπ̲wA and Cπ̲w−1B ≠ Sπ̲wB (with Cπ̲0B = 0). We assume here that all the operations of π̲ are started as early as possible. We denote by l the number of ground jobs in π̲, and by agp and bgp the processing times corresponding to gp, p ∈ {1, . . . , l}. Furthermore, we denote by θp, p ∈ {1, . . . , l − 1}, the sequence of jobs scheduled between gp and gp+1; θl refers to the sequence succeeding gl. Note that θp may be empty for some p ∈ {1, . . . , l}. Based on this notation, the sub-permutation π̲ can be written as π̲ = g1 ≺ θ1 ≺ g2 ≺ θ2 ≺ · · · ≺ gl ≺ θl. We also associate with each ground job gp = π̲w, p ∈ {1, . . . , l}, the value δp = Sπ̲wB − Cπ̲w−1B corresponding to the last idle period on machine B before the processing of Oπ̲wB (see Figure 1).

Symmetrically, by considering the inverse problem for the jobs of π̄, we define the l′ ground jobs g′p, their corresponding sub-sequences θ′p, and the idle times δ′p, for p ∈ {1, . . . , l′}. Note that, in the original problem, the operations of π̄ are started as late as possible. Based on this notation, a permutation π can be written as π = g1 ≺ θ1 ≺ · · · ≺ gl ≺ θl ≺ θ′l′ ≺ g′l′ ≺ · · · ≺ θ′1 ≺ g′1 (see Figure 1). Note that at least one of the two ground jobs gl and g′l′ is a crossover job.

[Figure 1: Ground jobs and idle times (Gantt chart of machines A and B showing the blocks gp ≺ θp with idle periods δp, and the blocks θ′p ≺ g′p with idle periods δ′p).]
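To make the decomposition concrete, here is a small illustrative sketch that identifies the ground jobs and the idle times δp of a sub-permutation of J̲ scheduled as early as possible.

    def ground_jobs(perm, jobs):
        """Ground jobs and idle times (delta_p) of a sub-permutation of J̲
        scheduled as early as possible. perm: names in order; jobs: name -> (a, b)."""
        grounds = []
        end_a = end_b = 0
        for j in perm:
            a, b = jobs[j]
            end_a += a                    # completion time on machine A
            start_b = max(end_a, end_b)   # B waits for A and for the previous job
            if start_b > end_b:           # idle period on B: j is a ground job
                grounds.append((j, start_b - end_b))
            end_b = start_b + b
        return grounds

On the Johnson sequence of the jobs J1 to J10 of Table 1 (Section 3.3), this returns (J1, 1) and (J6, 2), matching δ1 = 1 and δ2 = 2.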

3. Sufficient optimality condition


This section introduces a new sufficient optimality condition yielding a large set of optimal solutions. We schedule the sets J̲ and J̄ separately, and concatenate them to form a complete solution. We first investigate the case where J̄ = ∅ (or symmetrically J̲ = ∅).

3.1. Case J = J̲

Before proceeding, we present the following lemma that captures the key idea behind the reasoning to follow. In the following, we assume that machines A and B become available for processing at dates sA and sB respectively. These availability dates may represent the time required to process the jobs of some initial partial permutation.

Lemma 1. Consider an F2|perm|Cmax instance with J̄ = ∅ and machine availability dates sA and sB. If ai ≤ sB − sA ∀Ji ∈ J, then all solutions yield the same completion times on both machines, namely sA + a(J) on A and sB + b(J) on B.

Proof. Let Ju be the first job sequenced among the jobs in J. Since au ≤ sB − sA, its completion time on A is CuA = sA + au ≤ sB. Therefore operation OuB starts at date sB on B, and its completion time is CuB = sB + bu. Note that the new availability dates of the machines (after

the completion of Ju) satisfy CuB − CuA = sB − sA + bu − au ≥ sB − sA. This means that all the remaining jobs satisfy the condition ai ≤ CuB − CuA. By repeatedly applying the same argument, we conclude that all solutions yield the same completion dates, with no idle periods on either machine. □

We now present Algorithm OG that, based on the problem data (the jobs' processing times), constructs an optimal group sequence (OOG) over the jobs. This way, any sequence that is consistent with the resulting SPO is optimal. We emphasize that, unlike other approaches [4], our algorithm does not need any initial schedule to initiate the computation.

Algorithm OG selects the jobs to be set as ground jobs (gp, p ∈ {1, . . . , l}), and then associates with each one of them a sequence of groups of jobs θp^q, q ∈ {1, . . . , np}. More specifically, let gp be a ground job succeeded by a set R such that there is no idle time on machine B between gp and the end of the block gp ≺ R. The availability dates of the machines after the completion of gp ≺ R are sA = CgpA + a(R) and sB = CgpA + bgp + b(R). Algorithm OG proceeds to the identification of the set I formed by the jobs not yet scheduled with ai ≤ sB − sA = bgp + b(R) − a(R). These jobs respect the condition of Lemma 1, and may therefore be sequenced arbitrarily after R with no idle periods on either machine. Algorithm OG then selects an arbitrary non-empty subset θp^q of I, and schedules it after R (i.e. gp ≺ R ≺ θp^q). Note that the new schedule does not contain any idle period on machine B after gp, and that the new set R′ = R ≺ θp^q satisfies b(R′) − a(R′) = b(R) − a(R) + b(θp^q) − a(θp^q) ≥ b(R) − a(R). This means that for the next step, the set I′ (formed by the not yet scheduled jobs satisfying ai ≤ bgp + b(R′) − a(R′)) will include all the jobs left in I at the previous step, and may also include new ones.

If at some iteration the set I is empty, then any job scheduled after R will induce an idle time on B. In this case, Algorithm OG selects the job with the least ai value to be set as a new ground job. Note that this choice minimizes the introduced idle time on machine B. This process is repeated until all the jobs of J are scheduled. In this manner, Algorithm OG specifies the composition and the order in which the groups θp^q (p ∈ {1, . . . , l}, q ∈ {1, . . . , np}) should appear; however, the order of the jobs within each group θp^q may be arbitrary. A formal statement of the procedure is given by Algorithm 2.

We are also interested in a particular variant of Algorithm OG (denoted ROG hereafter) in which Line 7 is replaced by the following one:
7': Set θp^q ← I.


Algorithm 2 OG
1: Select Jz ∈ J with az = min_{Ji∈J} ai;
2: p ← 1, q ← 0, g1 ← Jz, S ← J\{g1}, Q ← ∅, R ← ∅, OOG ← g1;
3: while S ≠ ∅ do
4:   Set I ← {Ji ∈ S | ai ≤ bz + b(R) − a(R)};
5:   if I ≠ ∅ then
6:     q ← q + 1;
7:     Set θp^q to be an arbitrary non-empty subset of I;
8:     R ← R ≺ θp^q;
9:     OOG ← OOG ≺ θp^q;
10:    S ← S\θp^q;
11:  else
12:    Q ← Q ≺ Jz ≺ R;
13:    np ← q;
14:    p ← p + 1;
15:    q ← 0;
16:    Select Jz ∈ S with az = min_{Ji∈S} ai;
17:    gp ← Jz;
18:    R ← ∅;
19:    OOG ← OOG ≺ gp;
20:    S ← S\{gp};
21:  end if
22: end while
23: l ← p;
24: nl ← q;
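The following Python sketch implements the deterministic ROG variant (Line 7': θp^q ← I) for the jobs of J̲; for J̄ the same procedure is applied to the inverse problem. The data layout and names are our own.

    def rog(jobs):
        """ROG variant of Algorithm OG for jobs with a_i <= b_i.

        jobs: dict name -> (a_i, b_i). Returns the SPO as a list of blocks
        (ground job, [group_1, group_2, ...]), each group a set of job names.
        """
        pending = set(jobs)
        spo = []
        while pending:
            # New ground job: the pending job with the least a_i.
            g = min(pending, key=lambda j: jobs[j][0])
            pending.remove(g)
            slack = jobs[g][1]      # b_z + b(R) - a(R), with R initially empty
            groups = []
            while True:
                # I: pending jobs whose a_i fits within the current slack.
                I = {j for j in pending if jobs[j][0] <= slack}
                if not I:
                    break
                slack += sum(jobs[j][1] - jobs[j][0] for j in I)
                groups.append(I)
                pending -= I
            spo.append((g, groups))
        return spo

On the jobs J1 to J10 of Table 1, this yields the blocks J1 ≺ {J2, J3} ≺ {J4, J5} and J6 ≺ {J7, J8} ≺ {J9, J10}, as in the example of Section 3.3.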

An SPO OOG generated by Algorithm OG has the following form:

OOG = g1 ≺ θ1^1 ≺ θ1^2 ≺ · · · ≺ θ1^{n1} ≺ g2 ≺ θ2^1 ≺ · · · ≺ θ2^{n2} ≺ · · · ≺ gl ≺ θl^1 ≺ · · · ≺ θl^{nl}.

The number of solutions characterized by OOG is given by Π_{p=1}^{l} Π_{q=1}^{np} |θp^q|!, where |θp^q| is the cardinality of the set θp^q. Given a solution π respecting OOG, its makespan is given by

Cmax(π) = a(Q) + az + bz + b(R),    (2)

where Q and R are the sets of jobs respectively preceding and succeeding Jz, as output by Algorithm OG. Note that by construction Jz is the last ground job (i.e. Jz = gl), which means that it starts the last busy period on B, and consequently it is a crossover job.

Theorem 1. Any solution π that is consistent with an SPO generated by Algorithm OG is optimal.

Proof. The sequence π can be written as π = Q ≺ gl ≺ R. As the crossover job Jz is preceded by Q and succeeded by R, then Cmax(π) = a(Q) + az + bz + b(R). On the other hand, by construction it can be seen that ai ≥ az ∀Ji ∈ R, and that ai ≤ az ∀Ji ∈ Q. This means that JR can yield a solution πJoh in which the jobs of Q precede Jz, and those of R succeed Jz. Considering the position of Jz in πJoh, and using (1), we conclude that C*max ≥ a(Q) + az + bz + b(R) = Cmax(π), which proves that π is optimal. □

Note that all solutions respecting a particular SPO OOG have the same ground jobs and the same completion times for the sets θp, p ∈ {1, . . . , l}. On this basis, when dealing with the ground jobs and the completion times, we may refer indifferently to OOG or to any particular sequence consistent with it.

3.2. General case

For the general case, we schedule the sets J̲ and J̄ separately. For J̲, we consider the SPO OOG generated by Algorithm OG. We also apply the same algorithm to J̄, considering the inverse problem in which B is the first machine and A is the second one. We then retain the inverse SPO O′OG for the original problem, with its associated ground job Jz′ = g′l′ and sets Q′ and R′. We finally concatenate OOG and O′OG to obtain the SPO OOG ≺ O′OG over all the jobs of J (see Figure 2).

Table 1: Example 1.

     J1  J2  J3  J4  J5  J6  J7  J8  J9  J10  J11  J12
ai    1   2   2   4   5  10  11  11  15   15   27    3
bi    2   3   4   6   6  12  12  13  16   17    4    1

Considering a solution π = π̲ ≺ π̄ respecting the SPO OOG ≺ O′OG, its makespan can be written as

Cmax(π) = a(Q) + az + max{bz + b(R) + b(R′), a(R) + a(R′) + az′} + bz′ + b(Q′).    (3)

We denote by ∆ = |bz + b(R) + b(R′) − (a(R) + a(R′) + az′)| the idle time that will appear on one of the two machines after the concatenation of π̲ and π̄.

[Figure 2: Solution respecting the SPO OOG ≺ O′OG.]

Theorem 2. Any solution π = π̲ ≺ π̄ respecting the SPO OOG ≺ O′OG is optimal.

Proof. Consider the optimal solution πJoh = π̲Joh ≺ π̄Joh. Theorem 1 proves that any permutation π̲ (respectively π̄) respecting the SPO OOG (respectively O′OG) has the same completion times on A and B as π̲Joh (respectively π̄Joh). Consequently π is optimal. As for the makespan, its value depends on the maximum between bz + b(R) + b(R′) and a(R) + a(R′) + az′ (see Figure 2), which determines which of the two jobs Jz and Jz′ is the crossover job. □

3.3. Illustrative example

Consider the example specified by the data in Table 1. We have J̲ = {J1, J2, J3, J4, J5, J6, J7, J8, J9, J10} and J̄ = {J11, J12}. We first illustrate the

application of Algorithm ROG. For J̲ we get g1 = J1. θ1^1 contains the jobs with ai ≤ bg1 = 2, which gives θ1^1 = {J2, J3}. θ1^2 contains the jobs with ai ≤ bg1 + b(θ1^1) − a(θ1^1) = 5, which yields θ1^2 = {J4, J5}. At this stage, there is no job with ai ≤ bg1 + b(θ1^1) + b(θ1^2) − a(θ1^1) − a(θ1^2) = 8, so Algorithm ROG sets g2 to be the job with the least ai value among the remaining jobs, which gives g2 = J6. Next, θ2^1 contains the jobs with ai ≤ bg2 = 12, which yields θ2^1 = {J7, J8}. Finally, θ2^2 contains the jobs with ai ≤ bg2 + b(θ2^1) − a(θ2^1) = 15, which gives θ2^2 = {J9, J10}. For J̄ we get g′1 = J12 and g′2 = J11. Consequently, Algorithm ROG generates the SPO

J1 ≺ {J2, J3} ≺ {J4, J5} ≺ J6 ≺ {J7, J8} ≺ {J9, J10} ≺ J11 ≺ J12.

This SPO characterizes (2!)^4 = 16 optimal solutions, with δ1 = 1, δ2 = 2, δ′1 = 1, δ′2 = 1, ∆ = 9, and with g′2 = J11 being the crossover job. We now list all possible outputs of Algorithm OG:

J1 ≺ {J2, J3} ≺ {J4, J5} ≺ J6 ≺ {J7, J8} ≺ {J9, J10} ≺ J11 ≺ J12,
J1 ≺ J3 ≺ J4 ≺ {J2, J5} ≺ J6 ≺ {J7, J8} ≺ {J9, J10} ≺ J11 ≺ J12,

which yields a total of 24 optimal solutions.
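These values can be cross-checked with the earlier sketches (johnson, makespan); the snippet below verifies that a solution drawn from the SPO reaches the optimal makespan.

    jobs = {'J1': (1, 2), 'J2': (2, 3), 'J3': (2, 4), 'J4': (4, 6),
            'J5': (5, 6), 'J6': (10, 12), 'J7': (11, 12), 'J8': (11, 13),
            'J9': (15, 16), 'J10': (15, 17), 'J11': (27, 4), 'J12': (3, 1)}
    opt = makespan(johnson(jobs), jobs)   # optimal makespan, here 108
    # Swapping jobs inside the groups {J2, J3} and {J9, J10} keeps optimality:
    alt = ['J1', 'J3', 'J2', 'J4', 'J5', 'J6', 'J7', 'J8', 'J10', 'J9', 'J11', 'J12']
    assert makespan(alt, jobs) == opt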

3.4. Complexity of Algorithm OG

The construction of an SPO by Algorithm OG (selection of the ground jobs and partition into the groups θp^q) requires O(n log n) steps; starting from a non-decreasing order of the ai, this complexity reduces to O(n). However, Algorithm OG may lead to different SPOs. Indeed, when selecting a ground job, Algorithm OG may have to choose among several ties. OG may as well make different choices for the groups θp^q, which leads to distinct SPOs. Nevertheless, although different, these SPOs still present some similarities, as stated by Lemma 2.

Lemma 2. Starting from the same sets J̲ and J̄, we have:

• All SPOs generated by Algorithm OG have the same number of ground jobs.
• For the jobs of J̲ (respectively J̄), each block gp ≺ θp, p ∈ {1, . . . , l} (respectively θ′p ≺ g′p, p ∈ {1, . . . , l′}) is composed of the same jobs for any SPO.

Proof. Let g1 ≺ θ1 and ĝ1 ≺ θ̂1 be the first blocks associated with two different SPOs returned by Algorithm OG for J̲. We have aĝ1 = ag1 ≤ bg1, which means that, if g1 ≠ ĝ1, then Algorithm OG will schedule the job ĝ1 somewhere in θ1. Consequently, all the jobs of θ̂1 will also be included in the block g1 ≺ θ1. By symmetry, the jobs of g1 ≺ θ1 are also included in the block ĝ1 ≺ θ̂1, which means that g1 ≺ θ1 and ĝ1 ≺ θ̂1 are composed of the same jobs. Using the same argument for the following blocks, we conclude that all SPOs have the same number of ground jobs, and that each block gp ≺ θp, p ∈ {1, . . . , l}, is composed of the same jobs. The same argument applies for the jobs of J̄. □

The previous result states that the jobs manipulated by Algorithm OG for the construction of any given block gk ≺ θk are always the same for any SPO, independently of the previous choices. Hence, when setting some ground job, Algorithm OG will always have to choose from the same set of jobs. This set is entirely determined by the relative order of the processing times and not by the previous choices made by Algorithm OG.

As one can imagine, two different SPOs may lead to the same solutions, but they may also lead to distinct sequences. Therefore, it would be interesting to consider not only one SPO, but to generate all the SPOs that Algorithm OG can produce. Unfortunately, it turns out that the enumeration of all possible outputs of Algorithm OG may require exponential time, as shown by the following example.

Consider the example with n = 4n′ (with n′ > 1 an integer) specified by the data in Table 2. The data has been chosen in such a way that Algorithm OG will always set the jobs J4k−3, k ∈ {1, . . . , n′}, as ground jobs, and for each one of them, the groups θp^q will be selected among the jobs J4k−2, J4k−1 and J4k. Hence, any SPO will contain n′ blocks gk ≺ θk, k ∈ {1, . . . , n′}, each composed of the jobs {J4k−3, J4k−2, J4k−1, J4k}.

Let us focus first on the first 4 jobs (k = 1): J1, J2, J3 and J4. Algorithm OG can produce the three following SPOs: O1: J1 ≺ {J2, J3} ≺ J4, O2: J1 ≺ J2 ≺ {J3, J4} and O3: J1 ≺ J3 ≺ {J2, J4}. Together, those three SPOs characterize 4 optimal sequences: J1 ≺ J2 ≺ J3 ≺ J4, J1 ≺ J2 ≺ J4 ≺ J3, J1 ≺ J3 ≺ J2 ≺ J4 and J1 ≺ J3 ≺ J4 ≺ J2. It can be seen that we need at least the two distinct SPOs O2 and O3 to get all 4 optimal sequences. This means that Algorithm OG needs to be run at least twice to get every possible order over the jobs {J1, J2, J3, J4}.

Table 2: Example.

Job (for k ∈ {1, . . . , n′})   J4k−3     J4k−2     J4k−1     J4k
ai                             10k − 9   10k − 8   10k − 8   10k − 7
bi                             10k − 8   10k − 7   10k − 6   10k − 6

Now, considering all n jobs, Algorithm OG will each time deal with the set {J4k−3, J4k−2, J4k−1, J4k} for each k ∈ {1, . . . , n′}. As explained earlier, we need at least two SPOs to cover all possible outputs for the set {J4k−3, J4k−2, J4k−1, J4k}. Therefore, to cover all possible outputs for the n jobs, we need to run Algorithm OG at least twice for every k ∈ {1, . . . , n′}, which amounts to 2^{n/4} SPOs. We conclude that the enumeration of all possible SPOs that Algorithm OG can produce may require exponential time.

4. Order variation

Hereafter, we first examine the relation between Algorithm OG and JR. We then suggest a method for extending the order generated by Algorithm OG.

4.1. Relation between OG and JR

Theorem 3. Every solution πJoh can be generated by an SPO given by Algorithm OG in which the ground jobs coincide with the ones in πJoh. Conversely, for each SPO given by Algorithm OG, there exists a solution πJoh with the same ground jobs and the same jobs between those ground jobs.

Proof. We show the result by recurrence. We first consider π̲Joh and its corresponding ground jobs gp and sets θp, p ∈ {1, . . . , l}. Clearly the first job of π̲Joh equals g1 and satisfies ag1 = min_{Ji∈J̲}{ai}, hence Algorithm OG can also set that job as a ground job. Furthermore, writing π̲q for the job in position q of π̲Joh, each job π̲q ∈ θ1 satisfies a_{π̲q} ≤ bg1 + Σ_{i=2}^{q−1} b_{π̲i} − Σ_{i=2}^{q−1} a_{π̲i}, for otherwise π̲q would be a ground job. Thus Algorithm OG may set θ1^{q−1} = {π̲q} for all π̲q ∈ θ1. In other terms, we consider a sequence of groups (with a single job each) corresponding to the order of appearance of the jobs in θ1.

Now suppose that for some u ∈ {1, . . . , l − 1}, the sub-permutation ϕ = g1 ≺ · · · ≺ gu ≺ θu can be generated by Algorithm OG. We show that Algorithm OG can schedule gu+1 ≺ θu+1 directly after ϕ.

Note that JR ensures that agu+1 ≤ ai for all Ji ∈ J̲ \ ϕ. Hence Algorithm OG can also set gu+1 as a ground job. Using the same argument as earlier, it can be shown that Algorithm OG can schedule θu+1 after gu+1. The same argument also applies for π̄Joh.

Conversely, let OOG be an SPO generated by Algorithm OG. A solution π respecting OOG in which the jobs of the sets θp, p ∈ {1, . . . , l} (respectively θ′p, p ∈ {1, . . . , l′}) are scheduled in non-decreasing order of ai (respectively non-increasing order of bi) respects JR. □

Lemma 2 and Theorem 3 show the close relation between the solutions given by JR and the ones generated by Algorithm OG. Indeed, Algorithm OG varies the order of the jobs that appear between two consecutive ground jobs in some solution πJoh without changing the completion time of each block. The idle times δp also remain unchanged. Roughly speaking, we can say that the solutions produced by Algorithm OG are structurally similar to the ones given by JR.

4.2. Equivalent solutions generation

We now introduce Algorithm ESG that, starting from some given solution π, generates an SPO yielding a set of permutations with an equivalent makespan. Algorithm ESG starts by identifying, for π̲, the ground jobs and their associated sequences θp, p ∈ {1, . . . , l}. Then each sub-permutation θp is partitioned into a sequence of groups θp^q, q ∈ {1, . . . , np}, as stated in Algorithm 3. We also apply the same procedure to the jobs of π̄ (considering the inverse problem), which generates for each sub-permutation θ′p a sequence of groups θ′p^q, q ∈ {1, . . . , n′p}.

Algorithm 3 ESG
1: Identify for π̲ the ground jobs gp and the corresponding sub-sequences θp, p ∈ {1, . . . , l};
2: for p = 1 to l do
3:   q ← 1, S ← θp, Q ← ∅;
4:   while S ≠ ∅ do
5:     Set I ← {Ji ∈ S | ai ≤ bgp + b(Q) − a(Q)};
6:     Set θp^q to be an arbitrary non-empty subset of I;
7:     Q ← Q ≺ θp^q;
8:     S ← S\θp^q;
9:     q ← q + 1;
10:  end while
11:  np ← q − 1;
12: end for
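A minimal Python sketch of ESG's grouping step for one sub-sequence θp, in the greedy variant that always takes the whole eligible set I (analogous to ROG; names are ours):

    def esg_groups(theta, g, jobs):
        """Partition the sub-sequence theta (following ground job g) into groups
        whose internal order is free (Lemma 1), greedily taking the whole set I."""
        groups = []
        slack = jobs[g][1]       # b_{g_p} + b(Q) - a(Q), with Q initially empty
        remaining = list(theta)
        while remaining:
            # I: remaining jobs fitting in the current slack; the original order
            # guarantees that at least the next job of theta qualifies.
            I = [j for j in remaining if jobs[j][0] <= slack]
            if not I:            # cannot happen for a feasible theta (see proof)
                break
            slack += sum(jobs[j][1] - jobs[j][0] for j in I)
            groups.append(set(I))
            remaining = [j for j in remaining if j not in I]
        return groups

On θ1 = ⟨J1, J2, J3, J4, J5, J6, J8, J9, J10⟩ after ground job J7 (the example of Section 4.3), this returns the groups {J1, J2, J3, J4, J5, J6, J8} and {J9, J10}.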

Theorem 4. Let OESG be an SPO obtained by applying Algorithm ESG to a given solution π. Then any permutation respecting OESG has the same makespan as π.

Proof. First, we consider the application of ESG to π̲. Given a particular sub-sequence θp, it should be clear that as long as S ≠ ∅, the set I is never empty. Indeed, the original order ensures that I contains at least the next job of θp. Note that by definition of θp, starting from gp, machine B is continuously busy until the end of θp. Note also that, by virtue of Lemma 1, the placement of the jobs of I does not generate any idle time on B. Consequently, the sequence of groups θp^q leads to the same completion time as the

original order θp. Moreover, the idle times δp remain exactly the same as in the original solution π, since Algorithm ESG does not alter the choice of the ground jobs. The same argument applies for π̄ as well. Thus, any solution respecting the new SPO has the same makespan as π. □

In the light of the previous result, and considering again Theorem 3, it can be seen that any SPO generated by Algorithm OG corresponds to the application of Algorithm ESG to some solution πJoh. In this sense, Algorithm OG only changes the order of the jobs that appear between two consecutive ground jobs in some solution πJoh, without changing either the completion time of each block gp ≺ θp or the idle times δp.

In what follows, we introduce new conditions that allow some of the jobs to be rescheduled before their respective ground jobs, which alters the idle times δp without changing the makespan. This yields a new set of optimal solutions which extends the one given by Algorithm OG. The idea can be formulated as follows. Consider, for a given sequence π̲ (over the jobs of J̲), the ground jobs gp and their respective sequences θp (p ∈ {1, . . . , l}). First, note that the starting time of any given job Ji on A depends only on the set of jobs that precede it and not on their order. Furthermore, the starting time of the crossover job Jz = gx on machine B depends only on its finishing time on A. Now, if we change the order of the jobs preceding Jz so that it remains a

ground job (i.e. we do not alter its starting time on B), then the finishing times of Jz and of all its successors are unchanged. Consequently, the new solution has the same makespan. The idea is thus to characterize movements that may change the idle times without changing the crossover job, which yields solutions with an equivalent makespan.

Given a ground job gu ∈ J̲ that is not a crossover job (i.e. gu ≠ Jz), we denote by τu the total idle time that appears on machine B between gu and Jz. More specifically,

τu = Σ_{p=u+1}^{x} δp          if Jz ∈ J̲,
τu = ∆ + Σ_{p=u+1}^{l} δp      otherwise.

Before stating the main result, we establish the following two preliminary lemmas.

Lemma 3. Let π be a solution respecting an SPO generated by Algorithm OG. Then for any ground job gp, p ∈ {2, . . . , l}, we have δp ≤ agp − agp−1.

Proof. Consider the ground job gp = π̲w for some p ∈ {2, . . . , l}. Clearly its associated idle time satisfies δp ≤ agp − bπ̲w−1. As π̲w−1 is associated with the ground job gp−1, we have aπ̲w−1 ≥ agp−1. Since aπ̲w−1 ≤ bπ̲w−1, we derive that δp ≤ agp − bπ̲w−1 ≤ agp − aπ̲w−1 ≤ agp − agp−1. □

Lemma 4. Consider an SPO generated by Algorithm OG. Let Jw ∈ J̲ be a job preceding the crossover job, and let gu be a ground job preceding Jw. If aw − agu > τu, then for any ground job gv with v ∈ {1, . . . , u}, we have aw − agv > τv.

Proof. Let gv be a ground job preceding gu (i.e. v < u). We have τv = τu + Σ_{p=v+1}^{u} δp. Using Lemma 3 we obtain

aw − agv = (aw − agu) + (agu − agu−1) + (agu−1 − agu−2) + · · · + (agv+1 − agv)
         > τu + δu + δu−1 + · · · + δv+1 = τv. □

Theorem 5. Given an SPO OOG generated by Algorithm OG, let Jz = gx be a crossover job. Consider a job Jw ∈ J̲ preceding Jz, and let gv be its

associated ground job (i.e. Jw ∈ {gv ≺ θv}). Let, if it exists, gs (s ≤ v) be a ground job with aw − ags ≤ τs. Then any solution respecting the SPO ÕOG, derived from OOG by inserting Jw

• just before any of the ground jobs gu, u ∈ {s, . . . , v}, or
• in any of the groups θu^q, u ∈ {s, . . . , v}, q ∈ {1, . . . , nu},

is optimal.

Proof. Note that in accordance with Lemma 4, we have aw − agu ≤ τu for every ground job gu, u ∈ {s, . . . , v}. We consider first the case where Jz ∈ J̲. Suppose that Jw ∈ θv^r, r ∈ {1, . . . , nv}, is shifted to be scheduled in the group θu^q for some q ∈ {1, . . . , nu}. Compared to OOG, the only changes in the SPO ÕOG are that θv^r is replaced by θ̃v^r ← θv^r \ {Jw}, and that θu^q is replaced by θ̃u^q ← θu^q ∪ {Jw} (see Figures 3 and 4). In order to compute the new makespan, we consider two particular permutations π and π̃ associated with OOG and ÕOG respectively. Let µ (respectively µ̃) be the sequence of jobs preceding Jz in π (respectively π̃). Shifting the position of Jw may lead to the appearance of an idle time (denoted Λ) on machine B just before Jw (see Figure 4). Let θ̂u be the set of jobs of θu scheduled before Jw in π̃; we have Λ = max{0; a(θ̂u) + aw − bgu − b(θ̂u)}. As by definition a(θ̂u) − b(θ̂u) ≤ 0 and agu ≤ bgu, then Λ ≤ aw − agu ≤ τu. Note that shifting the position of Jw does not alter the starting time of Jz on machine A, since Jz is still preceded by the same jobs. As for the starting time of Jz on machine B, it depends on the completion time of the sequence µ̃. The maximum possible shift of the completion time of the last job of µ̃ occurs when the insertion of Jw shifts its succeeding jobs so that machine B is continuously busy. In this case the completion time of the sequence µ̃ will be

Cmax(µ̃) = CguA(π̃) + Σ_{p=u}^{x−1} (bgp + b(θp)) + Λ
        ≤ CguA(π) + Σ_{p=u}^{x−1} (bgp + b(θp)) + τu
        = Cmax(µ).

[Figure 3: An SPO satisfying the conditions of Theorem 5 (Gantt chart of the blocks g1 ≺ · · · ≺ θu−1, θu, gu+1 ≺ · · · ≺ θv−1, θv containing Jw, gv+1 ≺ · · · ≺ θx−1, the crossover job Jz = gx, and the idle times δu, . . . , δx).]

[Figure 4: Shifting job Jw (after insertion into θu, Jw may be preceded by an idle time Λ on machine B).]

This means that the total increase in the completion time of the new sequence µ̃ will not affect the crossover job Jz, and consequently π̃ is optimal. If Jw is inserted just before gu, then the idle time δu increases by exactly aw − agu. Using similar arguments as earlier, we conclude that π̃ is optimal.

Consider now the case where the crossover job is part of J̄. This means that the maximum completion time of the set J̲ on machine B (i.e. Cmax(J̲)) will not affect the makespan as long as it does not affect the starting times of the jobs of J̄. Using similar arguments as earlier, it can be proven that the condition aw − agu ≤ τu ensures that the starting times of the jobs of J̄ remain unchanged, which completes the proof. □

The results presented in this section give us the possibility to extend the set of optimal solutions. Indeed, starting from any SPO given by Algorithm OG, Theorems 4 and 5 allow us to rapidly construct a set of new optimal solutions structurally different from those derived from JR.

4.3. Illustrative example

We consider again the example specified by the data in Table 1. The application of Algorithm ROG generates the SPO

J1 ≺ {J2, J3} ≺ {J4, J5} ≺ J6 ≺ {J7, J8} ≺ {J9, J10} ≺ J11 ≺ J12,

with δ1 = 1, δ2 = 2, δ′1 = 1, δ′2 = 1, ∆ = 9, and g′2 = J11 being the crossover job.

with δ1 = 1, δ2 = 2, δ10 = 1, δ20 = 1, ∆ = 9, and g 0 2 = J11 being the crossover job. 19

θu \θbu

...

bz

...

ACCEPTED MANUSCRIPT

Considering the conditions of Theorem 5 for gs = g1 and Jw = J7, we have a7 − ag1 = 10 ≤ τ1 = δ2 + ∆ = 11. Consequently, any solution respecting one of the following SPOs is optimal:

Õ1: J7 ≺ J1 ≺ {J2, J3} ≺ {J4, J5} ≺ J6 ≺ J8 ≺ {J9, J10} ≺ J11 ≺ J12,
Õ2: J1 ≺ {J2, J3, J7} ≺ {J4, J5} ≺ J6 ≺ J8 ≺ {J9, J10} ≺ J11 ≺ J12,
Õ3: J1 ≺ {J2, J3} ≺ {J4, J5, J7} ≺ J6 ≺ J8 ≺ {J9, J10} ≺ J11 ≺ J12,
Õ4: J1 ≺ {J2, J3} ≺ {J4, J5} ≺ J6 ≺ J8 ≺ {J9, J10, J7} ≺ J11 ≺ J12,

which yield 64 additional optimal solutions.

It is possible to extend those sets using Algorithm ESG. Consider for instance the sequence π = ⟨J7, J1, J2, J3, J4, J5, J6, J8, J9, J10, J11, J12⟩ that respects the SPO Õ1. π is composed of the ground jobs g1 = J7, g′2 = J11 and g′1 = J12, with g1 being succeeded by the sequence θ1 = ⟨J1, J2, J3, J4, J5, J6, J8, J9, J10⟩. In its first run, Algorithm ESG considers the set I = {Ji ∈ θ1 | ai ≤ bg1} = {J1, J2, J3, J4, J5, J6, J8}. Suppose we choose θ1^1 ← I. In its second run, ESG considers the set I = {Ji ∈ θ1 \ θ1^1 | ai ≤ bg1 + b(θ1^1) − a(θ1^1)} = {J9, J10}. If we choose θ1^2 ← I, we get the SPO

J7 ≺ {J1, J2, J3, J4, J5, J6, J8} ≺ {J9, J10} ≺ J11 ≺ J12,

characterizing 7! × 2! = 10080 optimal solutions.
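Again, this can be checked mechanically with the earlier sketches (a sanity test under our illustrative encoding of Table 1):

    import random

    group = ['J1', 'J2', 'J3', 'J4', 'J5', 'J6', 'J8']
    random.shuffle(group)             # any of the 7! internal orders is allowed
    seq = ['J7'] + group + ['J9', 'J10', 'J11', 'J12']
    assert makespan(seq, jobs) == 108   # the optimal makespan of Table 1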


5. Comments on Briand et al. [6]

In [6], Briand et al. describe a sufficient condition for the optimality of F2|perm|Cmax. When applied, the proposed condition generates a partial order characterizing a set of optimal solutions. Hereafter, we comment on the presented results after briefly describing their approach.

Two interval structures are associated with the sets J̲ and J̄. The interval [ai, bi] (respectively [bi, ai]) is associated with each job Ji ∈ J̲ (respectively Ji ∈ J̄); Allen's algebra [1] is then used to represent the relations between those intervals.

For the interval structure associated with J̲, a job Ju^b is said to be a base if there is no Ji ∈ J̲ with ai < au^b ≤ bu^b < bi, where au^b and bu^b denote the processing times of Ju^b on A and B respectively. Given a base Ju^b, a b-pyramid σu is defined by the set of jobs Ji ∈ J̲ satisfying au^b < ai ≤ bi < bu^b. The base jobs of J̲ are indexed

in non-decreasing order of ai^b, i ∈ {1, . . . , n1} (where n1 denotes the number of base jobs in J̲). Symmetrically, the bases and their respective b-pyramids are defined for the interval structure associated with J̄. Here, the bases are indexed in non-increasing order of their bi^b, i ∈ {n1 + 1, . . . , n1 + n2} (where n2 is the number of base jobs in J̄). Note that a given job Ji may be part of several b-pyramids. Let u(i) and v(i) denote the indexes of the first and the last b-pyramids to which Ji belongs (the reader is referred to [6] for more details). Briand et al. [6] consider sequences of the form

J_1^b ≺ σ1 ≺ J_2^b ≺ · · · ≺ J_{n1}^b ≺ σ_{n1} ≺ σ_{n1+1} ≺ J_{n1+1}^b ≺ · · · ≺ σ_{n1+n2} ≺ J_{n1+n2}^b,

and prove the following result.

Theorem 6 ([6]). Any solution such that

• the bases are sequenced according to the non-decreasing order of their indexes;
• each Ji ∈ J is scheduled inside any sub-sequence from σu(i) to σv(i), in any order;

is optimal.
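For intuition, a small illustrative sketch identifying the bases of J̲ and their b-pyramids as defined above:

    def bases_and_pyramids(J_under, jobs):
        """Bases of J̲: jobs whose interval [a_i, b_i] is not strictly contained
        in another job's interval. Each base carries the b-pyramid of the jobs
        strictly inside its own interval."""
        def is_base(u):
            au, bu = jobs[u]
            return not any(jobs[i][0] < au <= bu < jobs[i][1]
                           for i in J_under if i != u)
        bases = sorted((u for u in J_under if is_base(u)),
                       key=lambda u: jobs[u][0])
        pyramids = {u: {i for i in J_under
                        if jobs[u][0] < jobs[i][0] <= jobs[i][1] < jobs[u][1]}
                    for u in bases}
        return bases, pyramids

On the J̲ jobs of Table 3 below, this returns the bases J1, J2, J3 with pyramids {J5, J6}, {J6, J7} and {J8}.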


We now show that our approach dominates the one presented in [6].

Theorem 7. Any solution respecting the conditions of Theorem 6 can be generated by Algorithm OG.

Proof. We show by recurrence that any solution σ respecting Theorem 6 can also be generated by Algorithm OG. Note that in σ as well, the jobs of J̲ are sequenced before those of J̄. We first consider the jobs of J̲. By construction, the first base J_1^b is such that a_1^b = min_{Ji∈J̲}{ai}, which means that Algorithm OG can set g1 = J_1^b. Now the jobs Ji ∈ σ1 are by definition such that ai < b_1^b. Hence σ1 can be part of θ1^1 (i.e. Algorithm OG can schedule the jobs of σ1 directly after g1 in any order).

Now suppose that for a given u ∈ {1, . . . , n1 − 1}, the sub-permutation ϕ = J_1^b ≺ · · · ≺ J_u^b ≺ σu can be generated by Algorithm OG. This means that ϕ can be written in the form ϕ = g1 ≺ · · · ≺ gv ≺ R, where J_u^b ≺ σu forms the last part of gv ≺ R. We show that Algorithm OG can schedule J_{u+1}^b ≺ σu+1 after R. Two cases are distinguished.

Case 1. a_{u+1}^b ≤ bgv + b(R) − a(R). Then Algorithm OG can schedule J_{u+1}^b directly after R as a continuation (not as a ground job).

Case 2. a_{u+1}^b > bgv + b(R) − a(R). Then for any job Ji ∈ J̲\ϕ one of the following two cases must be realized: either Ji is a base, or it is part of a b-pyramid associated with some base J_w^b. Knowing that the bases are indexed in non-decreasing order of ai^b, in either case we obtain ai ≥ a_w^b ≥ a_{u+1}^b ∀Ji ∈ J̲\ϕ. Hence J_{u+1}^b can be set as a ground job by Algorithm OG.

Let us now consider the jobs of σu+1. By definition we have ai < b_{u+1}^b ∀Ji ∈ σu+1. If J_{u+1}^b is set as a ground job, then all the jobs of σu+1 are eligible to be scheduled by Algorithm OG after J_{u+1}^b. Otherwise, if J_{u+1}^b is set as a continuation of R (Case 1), then a_{u+1}^b ≤ bgv + b(R) − a(R), which gives bgv + b(R) − a(R) − a_{u+1}^b + b_{u+1}^b ≥ b_{u+1}^b > ai. In this case as well, all the jobs of σu+1 are eligible to be scheduled by Algorithm OG after J_{u+1}^b as a continuation of R.

The same argument applies to the jobs of J̄ (considering the inverse problem), which completes the proof. □

We add that a ground job as defined in our approach is a base job as defined in [6], while the reverse is not always true. This explains why our SPOs yield larger sets than the ones given by Briand et al. [6], since some of the base jobs (which cannot be moved in Briand et al.'s partial order) may be part of some group in our SPOs.

Another comment on the work presented in [6] concerns the number of sequences characterized by Theorem 6. This number is announced in [6] to be Σ_{s∈S} (Π_{i=1}^{m} |σi(s)|!), where S is the set of all possible job assignments inside the sequences σi, and |σi(s)| is the number of jobs in σi for a particular assignment s ∈ S. In reality this is not true, and the announced value is actually an upper bound on the number of characterized sequences. Indeed, several assignments may lead to the same permutations in the case where some of the jobs may be assigned to both σn1 and σn1+1. Consider for instance the example proposed in [6] (see Table 3). We have J̲ = {J1, J2, J3, J5, J6, J7, J8} and J̄ = {J4, J7, J9}. Note that J7 belongs to both sets.

Table 3: Example 2 [6].

     J1  J2  J3  J4  J5  J6  J7  J8  J9
ai    1   3   8   8   2   4   7   9   5
bi    6   8  12   2   4   5   7  11   3

The approach used by the authors of [6] leads to the sequences of the form J1 ≺ σ1 ≺ J2 ≺ σ2 ≺ J3 ≺ σ3 ≺ σ4 ≺ J4. The possible assignments of the jobs are J5 ∈ σ1; J6 ∈ σ1 or σ2; J7 ∈ σ2, σ3 or σ4; J8 ∈ σ3; and J9 ∈ σ4. The enumeration of all possible assignments gives Σ_{s∈S} (Π_{i=1}^{m} |σi(s)|!) = 16, while the number of distinct sequences is 13. In this case, the assignments of J7 to σ3 and σ4 generate three redundant sequences.

We add that the application of Algorithm ROG to this example gives the SPOs

J1 ≺ {J2, J5, J6} ≺ {J3, J7, J8} ≺ J9 ≺ J4

if J7 is assigned to J̲, and

J1 ≺ {J2, J5, J6} ≺ {J3, J8} ≺ {J9, J7} ≺ J4

if J7 is assigned to J̄, characterizing 48 optimal solutions. Moreover, the enumeration of all possible outputs of Algorithm OG yields a total of 324 optimal solutions.

For the sake of exactitude, we also point out that the makespan expression given in Theorem 8 of [6] is incorrect (the theorem itself is valid), as shown by the following example. Consider a three-job instance with a1 = 1, b1 = 5, a2 = 2, b2 = 6, a3 = 7 and b3 = 8. The approach used by the authors indicates that the jobs form three independent bases with σ1 = σ2 = σ3 = ∅, and that the sequence π = ⟨J1, J2, J3⟩ is optimal (which is correct). However, the makespan expression given in Theorem 8 of [6] reduces to

Cmax(π) = C{J1} + max(C{J2} − a2, C{J2} − ∆tσ1) + max(C{J3} − a3, C{J3} − ∆tσ2),

with C{Ji} = ai + bi and ∆tσi = C{Ji} − ai = bi. This gives Cmax(π) = 21, whereas it is easy to see that the optimal makespan is C*max = 20. This is due to the fact that the proposed formula does not correctly compute the starting times of the jobs on the first machine.


6. Necessary optimality condition

Although the F2|perm|Cmax problem has been extensively studied, to the best of our knowledge, no necessary optimality conditions have been discussed. We address this issue hereafter. Let π* be an optimal solution generated by Algorithm OG, and let Jz be a crossover job. Without loss of generality, we suppose that Jz ∈ J̲. We denote by Q and R the sets of jobs from J̲ scheduled respectively before and after Jz in π* (see Figure 5). We have C*max = a(Q) + az + bz + b(R) + b(J̄). The following result applies.

[Figure 5: Position of Jz in schedules π*, σ and σ′ (in σ and σ′, the jobs of J̲ preceding and succeeding Jz form the sets E and F; in σ′ the jobs of E ∩ R are placed just before Jz).]

Theorem 8. A necessary condition for the optimality of any given sequence is that

(C1) all the jobs Ji ∈ Q with ai ≠ bi must be scheduled before Jz;
(C2) all the jobs Ji ∈ R with ai ≠ az must be scheduled after Jz;
(C3) all the jobs Ji ∈ J̄ with ai ≠ bi must be scheduled after Jz.

Proof. Let σ be an optimal solution where one of the conditions (C1) and (C2) is not satisfied. Using JR, it is always possible to rearrange σ so that the jobs of J̄ are scheduled after those of J̲ with no increase in the makespan. Let E and F denote the sets of jobs from J̲ respectively preceding and succeeding Jz in σ (see Figure 5). Based on JR it is possible to construct a permutation σ′, derived from σ by rearranging the jobs of E ∩ R to be scheduled just before Jz, with Cmax(σ′) = Cmax(σ) (see Figure 5). Let Jy be the first job of

the block E ∩ R in σ′ (Jy = Jz if E ∩ R = ∅). Considering the position of Jy in σ′ and using (1), we obtain

a(E ∩ Q) + ay + b(E ∩ R) + bz + b(F ∩ Q) + b(F ∩ R) + b(J̄) ≤ Cmax(σ′).    (4)

By definition we have b(F ∩ Q) ≥ a(F ∩ Q) and ay ≥ az. However, if condition (C1) is not satisfied, then b(F ∩ Q) > a(F ∩ Q), and if condition (C2) is not satisfied, then ay > az. In either case, using (4) we get

Cmax(σ′) > a(E ∩ Q) + az + b(E ∩ R) + bz + a(F ∩ Q) + b(F ∩ R) + b(J̄)
         = a(Q) + az + bz + b(R) + b(J̄) = C*max,

which is absurd. We then conclude that both conditions (C1) and (C2) have to be satisfied.

Suppose now that (C1) and (C2) are satisfied and that σ does not respect (C3). Hence there exists a set of jobs G ⊆ J̄ scheduled before Jz in σ with a(G) > b(G). Given the position of Jz in σ, we have

Cmax(σ) ≥ a(Q) + a(G) + az + bz + b(R) + b(J̄\G)
        > a(Q) + az + bz + b(R) + b(J̄) = C*max,

which is absurd. This means that all three conditions have to be satisfied. □
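Conditions (C1) to (C3) are easy to test mechanically; a small illustrative checker (names and data layout are ours):

    def necessary_conditions_hold(seq, Q, R, z, J_over, jobs):
        """Theorem 8: check (C1)-(C3) for a candidate sequence seq."""
        pos = {j: k for k, j in enumerate(seq)}
        az = jobs[z][0]
        c1 = all(pos[j] < pos[z] for j in Q if jobs[j][0] != jobs[j][1])       # (C1)
        c2 = all(pos[j] > pos[z] for j in R if jobs[j][0] != az)               # (C2)
        c3 = all(pos[j] > pos[z] for j in J_over if jobs[j][0] != jobs[j][1])  # (C3)
        return c1 and c2 and c3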

7. Asymptotic behavior

Multiple attempts have been made to extend the results found for the F2||Cmax problem. This led to the study of several close configurations, such as the two-stage assembly flow shop and many variants of the two-stage hybrid flow shop. In this context, several authors reported similar conclusions regarding the behavior of those problems on large randomly generated instances [9, 13, 15, 20]. Indeed, large instances seem to be easier to solve, with makespan values frequently coinciding with some basic lower bound. Hereafter, we present a number of observations explaining the origin of such behavior for the F2|perm|Cmax problem.

In the following, we assume that the F2|perm|Cmax instances are generated under the following assumptions:

• All processing times are integers randomly generated over the same interval [α, β].
• The processing times are generated independently from each other.
• The probability distribution used to generate the processing times is supposed to be the same for all jobs.

We note that these assumptions are commonly used for the generation of random instances. This way, there are (β − α + 1) possible values for ai (or bi), which gives (β − α + 1)^2 possible outcomes for the couple of processing times (ai, bi) of any given job Ji.

Let λ be a fixed integer from the interval [α, β]. In the subsequent analysis we are particularly interested in the jobs Ji ∈ J such that ai ≤ λ with ai < bi (or equivalently bi − ai ≥ 1). We denote by ρλ the probability of getting such a job, i.e. ρλ = P((ai ≤ λ) and (ai < bi)). This event is realized by the couples of processing times (ai = e, bi = f) for e ∈ {α, α + 1, . . . , λ} and f ∈ {e + 1, e + 2, . . . , β}. Consequently, ρλ is given by the sum of the probabilities of all the outcomes satisfying the two conditions. We have

ρλ = Σ_{e=α}^{λ} Σ_{f=e+1}^{β} P((ai = e) and (bi = f)).

As the processing times ai and bi are generated independently, then

ρλ = Σ_{e=α}^{λ} Σ_{f=e+1}^{β} P(ai = e) P(bi = f).    (5)

Note that for a fixed interval [α, β] and a given probability distribution, the probabilities P(ai = e) and P(bi = f) are constant values that depend only on e and f, with no dependence on n (the number of generated jobs). Consequently ρλ depends only on λ (and not on n).

In order to illustrate the previous analysis, let us consider for example the commonly used uniform distribution over [α, β]. We have P(ai = k) = P(bi = k) = 1/(β − α + 1) ∀k ∈ {α, α + 1, . . . , β}. Using (5) we get

ρλ = Σ_{e=α}^{λ} Σ_{f=e+1}^{β} 1/(β − α + 1)^2 = (λ − α + 1)(2β − α − λ) / (2(β − α + 1)^2),


which clearly depends on λ only (and not on n). The following result applies.

Lemma 5. Consider an F2|perm|Cmax instance where the processing times are integers generated over a constant interval [α, β]. The probability that the optimal makespan C*max coincides with the lower bound lb = max{min_J{ai} + b(J); a(J) + min_J{bi}} grows as n → +∞.

Proof. When fixing the first ground job g1, Algorithm OG seeks a job with ag1 = min_{J̲}{ai}. Let λ = bg1, and let ρλ = P((ai ≤ λ) and (ai < bi)), that is, the probability that a job Ji ∈ J satisfies bi − ai ≥ 1 with ai ≤ λ. Recall that for a given probability distribution, ρλ depends only on λ (and not on n). Several data structures for the processing times (events) can be imagined so that Algorithm OG generates an SPO in which g1 remains the only ground job over the jobs of J̲. This yields an SPO whose makespan is Cmax(π̲) = min_{J̲}{ai} + b(J̲). In the following we consider a particular event, and we show that the probability of its occurrence grows when n → +∞.

Consider the set E = {Ji ∈ J | ai ≤ λ and ai < bi}, and let |E| denote its cardinality. A job Ji has probability ρλ of being part of E, and probability (1 − ρλ) of not being included in E. If we consider this as a Bernoulli trial, then the probability that the cardinality of E is exactly m ∈ {0, 1, . . . , n} (i.e. obtaining exactly m successes out of n trials) follows the binomial distribution. More specifically, we have P(|E| = m) = C(n, m) (ρλ)^m (1 − ρλ)^{n−m}, where C(n, m) denotes the binomial coefficient.

We are interested in the event by which E contains no less than (β − λ) jobs. Indeed, in this case, all the jobs of E can be scheduled directly after g1 with no idle time on B. Then, after their completion, the difference between the completion times on machines B and A will be such that

λ + b(E) − a(E) ≥ λ + |E|   (as bi − ai ≥ 1 ∀Ji ∈ E)
               ≥ λ + β − λ = β.

Consequently, the rest of the jobs (i.e. J̲ \ E) can be scheduled arbitrarily after E, yielding an SPO with Cmax(π̲) = min_{J̲}{ai} + b(J̲). Now the probability that E contains no less than (β − λ) jobs is given by

P(|E| ≥ β − λ) = Σ_{m=β−λ}^{n} C(n, m) (ρλ)^m (1 − ρλ)^{n−m}
              = 1 − Σ_{m=0}^{β−λ−1} C(n, m) (ρλ)^m (1 − ρλ)^{n−m}.    (6)

We have C(n+1, m) (ρλ)^m (1 − ρλ)^{n−m+1} = [(n+1)(1−ρλ)/(n−m+1)] C(n, m) (ρλ)^m (1 − ρλ)^{n−m}, and for n sufficiently large (e.g. n > (m − ρλ)/ρλ), we get (n+1)(1−ρλ)/(n−m+1) < 1, which implies that C(n+1, m) (ρλ)^m (1 − ρλ)^{n−m+1} < C(n, m) (ρλ)^m (1 − ρλ)^{n−m} for all m ∈ {0, . . . , β − λ − 1}. We then conclude that Σ_{m=0}^{β−λ−1} C(n, m) (ρλ)^m (1 − ρλ)^{n−m} decreases as n grows, and consequently P(|E| ≥ β − λ) grows when n → +∞.

Since this latter result is true for all possible values of λ, we conclude that the probability that the optimal makespan over the jobs of J̲ (i.e. Cmax(π̲)) coincides with the lower bound min_{J̲}{ai} + b(J̲) grows as n → +∞. Symmetrically, the same argument applied to the jobs of J̄ shows that the probability to get Cmax(π̄) = a(J̄) + min_{J̄}{bi} grows when n → +∞. When concatenating π̲ and π̄, the obtained makespan coincides with lb. □

A more specific characterization of the elements discussed above requires the knowledge of the particular probability distribution used for the generation of the processing times. However, in order to establish an empirical assessment of Lemma 5, we compare the optimal makespan and lb for several instances. For that, we consider 199 different sizes (n ∈ {2, . . . , 200}) and three values for β ∈ {20, 50, 100} (with α = 1). For each size n and each value of β, 100 instances are generated using a uniform discrete distribution over the interval [1, β]. Table 4 summarizes the average percentage of instances for which the optimal makespan coincides with lb. It can be seen that this percentage grows rapidly. Indeed, it exceeds 90% as soon as n exceeds 10. We also notice that this percentage grows faster when the interval [α, β] is smaller. These results clearly indicate that the relative hardness of randomly generated instances depends on the considered intervals.
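The experiment of Table 4 below can be reproduced with a few lines, reusing the johnson and makespan sketches above (illustrative code, uniform integer processing times):

    import random

    def pct_matching_lb(n, beta, trials=100):
        """Share (%) of random instances whose optimal makespan equals lb."""
        hits = 0
        for _ in range(trials):
            jobs = {i: (random.randint(1, beta), random.randint(1, beta))
                    for i in range(n)}
            a = [jobs[i][0] for i in jobs]
            b = [jobs[i][1] for i in jobs]
            lb = max(min(a) + sum(b), sum(a) + min(b))
            if makespan(johnson(jobs), jobs) == lb:
                hits += 1
        return 100.0 * hits / trials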

Table 4: Average percentage of instances with C*max = lb.

n        [1,20]  [1,50]  [1,100]    n         [1,20]  [1,50]  [1,100]
2-10      79.4    79.0    79.8      101-110    99.8    99.5    99.4
11-20     95.8    93.6    92.6      111-120    99.9    99.7    98.7
21-30     96.6    96.6    95.2      121-130   100      99.3    99.7
31-40     98.5    98.0    95.6      131-140   100      99.7    98.8
41-50     98.9    98.1    97.6      141-150   100      99.8    99.6
51-60     99.7    99.1    98.0      151-160   100     100      99.8
61-70     99.5    98.8    98.5      161-170   100      99.8    99.7
71-80     99.7    97.9    98.6      171-180   100      99.9    99.8
81-90    100      99.3    98.3      181-190   100      99.8    99.4
91-100   100      99.3    98.6      191-200   100      99.9    99.9

8. Conclusion

In this paper we established several sufficient conditions for the optimality of the F2|perm|Cmax problem. The proposed conditions led to the characterization of a large set of optimal solutions. The generated set includes the

solutions given by Johnson's algorithm, and dominates the partial order presented in [6]. We also established a number of necessary conditions for optimality, which showed the importance of the crossover job and of the structure of the strict partial order generated by Algorithm OG. The latter, in turn, was also used to explain the high probability with which the makespan coincides with some trivial lower bound for large instances.

The results presented above show once more that the notion of group sequence is a rather powerful concept, which further justifies its investigation for more general shops. The investigation of optimality conditions in the non-permutation context is also an interesting issue. Finally, we think that the characterization of similar structures for more general shops could help building robust scheduling approaches, and may offer a basis for explaining the empirical easiness of solving large instances of similar problems.

References

[1] Allen, J. (1981). An interval based representation of temporal knowledge. Proceedings of the 7th IJCAI, Vancouver, Canada, 221-226.

[2] Baptiste, P. (1996). Sub-optimal groups of sequences in the classical flow-shop F2||Cmax. 5th Workshop on Project Management and Scheduling, Poznan, 27-30.

[3] Bellman, R., Esogbue, A. O., & Nabeshima, I. (1982). Mathematical aspects of scheduling and applications. Pergamon Press, Oxford.

[4] Billaut, J.-C., & Lopez, P. (1998). Enumeration of all optimal sequences in the two-machine flow shop. Proceedings CESA'98 Computational Engineering in Systems Application - Symposium on Industrial and Manufacturing Systems, Nabeul-Hammamet, Tunisia, 378-382.

[5] Brah, S. A., & Loo, L. L. (1999). Heuristics for scheduling in a flow shop with multiple processors. European Journal of Operational Research, 113, 113-122.

[6] Briand, C., La, H. T., & Erschler, J. (2006). A new sufficient condition of optimality for the two-machine flow shop problem. European Journal of Operational Research, 169, 712-722.

[7] Campbell, H. G., Dudek, R. A., & Smith, M. L. (1970). A heuristic algorithm for the n job, m machine sequencing problem. Management Science, 16, 630-637.

[8] Conway, R. W., Maxwell, W. L., & Miller, L. W. (1967). Theory of scheduling. Addison-Wesley, Reading, MA.

[9] Dridi, N., Hadda, H., & Hajri-Gabouj, S. (2009). Méthode heuristique pour le problème de flow shop hybride avec machines dédiées. RAIRO - Operations Research, 43, 421-436.

[10] Esswein, C., Billaut, J.-C., & Strusevich, V. A. (2005). Two-machine shop scheduling: Compromise between flexibility and makespan value. European Journal of Operational Research, 167, 796-809.

[11] Gupta, J. N. D. (1988). Two-stage hybrid flowshop scheduling problem. The Journal of the Operational Research Society, 39, 359-364.

[12] Hadda, H., Dridi, N., & Hajri-Gabouj, S. (2012). A note on the two-stage hybrid flow shop problem with dedicated machines. Optimization Letters, 6, 1731-1736.

[13] Hadda, H., Dridi, N., & Hajri-Gabouj, S. (2014). Exact resolution of the two-stage hybrid flow shop with dedicated machines. Optimization Letters, 8, 2329-2339.

[14] Hadda, H., Hajji, M. K., & Dridi, N. (2015). On the two-stage hybrid flow shop with dedicated machines. RAIRO - Operations Research, 49, 795-804.

[15] Haouari, M., & M'Hallah, R. (1997). Heuristic algorithms for the two-stage hybrid flowshop problem. Operations Research Letters, 21, 43-53.

[16] Herroelen, W., & Leus, R. (2005). Project scheduling under uncertainty: Survey and research potentials. European Journal of Operational Research, 165, 289-306.

[17] Johnson, S. M. (1954). Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly, 1, 61-68.

[18] Lee, C.-Y., & Vairaktarakis, G. L. (1994). Minimizing makespan in hybrid flowshop. Operations Research Letters, 16, 149-158.

[19] Wu, S. D., Byeon, E. S., & Storer, R. H. (1999). A graph-theoretic decomposition of the job shop scheduling problem to achieve scheduling robustness. Operations Research, 47, 113-124.

[20] Sun, X., Morizawa, K., & Nagasawa, H. (2003). Powerful heuristics to minimize makespan in fixed, 3-machine, assembly-type flowshop scheduling. European Journal of Operational Research, 146, 498-516.