An FPTAS for the parallel two-stage flowshop problem


JID:TCS AID:10794 /FLA Doctopic: Algorithms, automata, complexity and games [m3G; v1.180; Prn:10/06/2016; 11:30] P.1 (1-9) Theoretical Computer Sci...



Theoretical Computer Science ••• (••••) •••–•••

Contents lists available at ScienceDirect

Theoretical Computer Science www.elsevier.com/locate/tcs

Jianming Dong a,1, Weitian Tong b,c,1, Taibo Luo d,b,1, Xueshi Wang a, Jueliang Hu a, Yinfeng Xu d,e, Guohui Lin a,b,*

a Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
b Department of Computing Science, University of Alberta, Edmonton, Alberta T6G 2E8, Canada
c Department of Computer Sciences, Georgia Southern University, Statesboro, GA 30458, USA
d Business School, Sichuan University, Chengdu, Sichuan 610065, China
e State Key Lab for Manufacturing Systems Engineering, Xi'an, Shaanxi 710049, China

Article info

Article history: Received 8 September 2015; Accepted 27 April 2016; Available online xxxx.

Keywords: Two-stage flowshop scheduling; Multiprocessor scheduling; Makespan; Dynamic programming; Fully polynomial-time approximation scheme

Abstract

We consider the NP-hard m-parallel two-stage flowshop problem, abbreviated as the (m, 2)-PFS problem, where n jobs need to be scheduled on m parallel identical two-stage flowshops in order to minimize the makespan, i.e., the maximum completion time of all the jobs on the m flowshops. The (m, 2)-PFS problem can be decomposed into two subproblems: to assign the n jobs to the m parallel flowshops, and, for each flowshop, to schedule the jobs assigned to it. We first present a pseudo-polynomial time dynamic programming algorithm to solve the (m, 2)-PFS problem optimally, for any fixed m, based on an earlier idea for solving the (2, 2)-PFS problem. Using the dynamic programming algorithm as a subroutine, we then design a fully polynomial-time approximation scheme (FPTAS) for the (m, 2)-PFS problem.

© 2016 Elsevier B.V. All rights reserved.

1. Introduction

In the m-parallel k-stage flowshop problem, denoted as (m, k)-PFS, there are m parallel identical k-stage flowshops F_1, F_2, ..., F_m. Each of these classic k-stage flowshops contains exactly one machine at every stage, or equivalently k sequential machines. An input job has k tasks, and it can be assigned to one and only one of the m flowshops for processing; once it is assigned to a flowshop, its k tasks are processed on the k sequential machines in that flowshop, respectively. Let M_{ℓ,1}, M_{ℓ,2}, ..., M_{ℓ,k} denote the k sequential machines in the flowshop F_ℓ, for every ℓ. Let J denote a set of n input jobs J_1, J_2, ..., J_n. The job J_i is represented as a k-tuple (p_{i,1}, p_{i,2}, ..., p_{i,k}), where p_{i,j} is the processing time of the j-th task; that is, the j-th task needs to be processed non-preemptively on the j-th machine of the flowshop to which the job J_i is assigned. For all i, j, p_{i,j} is a non-negative integer. The objective of this problem is to minimize the makespan, that is, the completion time of the last job. Clearly, when m = 1 the problem reduces to the classic k-stage flowshop problem (flowshop scheduling in [4]); when k = 1, it reduces to the classic m-parallel identical machine scheduling problem (multiprocessor scheduling in [4]). When only two-stage flowshops are involved, i.e., (m, 2)-PFS, the problem has been previously studied in [11,20,23].

* Corresponding author at: Department of Computing Science, University of Alberta, Edmonton, Alberta T6G 2E8, Canada. E-mail address: [email protected] (G. Lin).
1 Co-first authors.

http://dx.doi.org/10.1016/j.tcs.2016.04.046 0304-3975/© 2016 Elsevier B.V. All rights reserved.


Table 1
Known results for the hybrid k-stage flowshop problem, with m_j parallel machines in stage j.

| k stages     | m_j = 1                           | m_j fixed                         | m_j arbitrary                     |
|--------------|-----------------------------------|-----------------------------------|-----------------------------------|
| k = 1        | Polynomial time                   | FPTAS [18]                        | PTAS [12]                         |
| k = 2        | Polynomial time [15]              | PTAS [10]                         | PTAS [19]                         |
| k >= 3 fixed | PTAS [10]                         | PTAS [10]                         | PTAS [14]                         |
| k arbitrary  | Not approximable within 1.25 [22] | Not approximable within 1.25 [22] | Not approximable within 1.25 [22] |

The (m, k)-PFS problem is closely related to the well-studied hybrid k-stage flowshop problem [16], which also includes the classic k-stage flowshop and the classic parallel identical machine scheduling problem as special cases. The hybrid k-stage flowshop problem is a flexible manufacturing system model; it contains m_j ≥ 1 parallel identical machines in the j-th stage, j = 1, 2, ..., k, and is abbreviated as (m_1, m_2, ..., m_k)-HFS. A job J_i is again represented as a k-tuple (p_{i,1}, p_{i,2}, ..., p_{i,k}), where p_{i,j} is the processing time of the j-th task, which can be processed non-preemptively on any one of the m_j machines in the j-th stage. The objective of the (m_1, m_2, ..., m_k)-HFS problem is also to minimize the makespan. One clearly sees that when m_1 = m_2 = ... = m_k = 1, the problem reduces to the classic k-stage flowshop problem; when k = 1, it reduces to the classic m-parallel identical machine scheduling problem.

We next review some of the most important and relevant results on the k-stage flowshop problem and on the m-parallel identical machine scheduling problem. For the k-stage flowshop problem, it is known that for k ∈ {2, 3} there exists an optimal schedule that is a permutation schedule, in which all the k machines process the jobs in the same order; but for k ≥ 4 there may exist no optimal schedule that is a permutation schedule [3]. When k = 2, the two-stage flowshop problem is polynomial-time solvable by Johnson's algorithm [15]; the k-stage flowshop problem becomes strongly NP-hard when k ≥ 3 [5]. After several efforts [15,5,6,2], Hall presented a polynomial-time approximation scheme (PTAS) for the k-stage flowshop problem, for any fixed integer k ≥ 3 [10]. Note that due to the strong NP-hardness, a PTAS is the best possible result unless P = NP. When k is a part of the input (i.e., an arbitrary integer), the problem cannot be approximated within 1.25 [22]; nevertheless, it remains unknown whether the problem can be approximated within a constant factor. For the m-parallel identical machine scheduling problem, it is NP-hard when m ≥ 2 [4]. When m is a fixed integer, the problem admits a pseudo-polynomial time exact algorithm [4] that can be used to construct an FPTAS [18]; when m is a part of the input, the problem becomes strongly NP-hard, but admits a PTAS by Hochbaum and Shmoys [12].

The literature on the hybrid k-stage flowshop problem (m_1, m_2, ..., m_k)-HFS is also rich [17], especially for the hybrid two-stage flowshop problem (m_1, m_2)-HFS. First, (1, 1)-HFS is the classic two-stage flowshop problem, which can be solved optimally in polynomial time [15]. When max{m_1, m_2} ≥ 2, the (m_1, m_2)-HFS problem becomes strongly NP-hard [13]. The special cases (m_1, 1)-HFS and (1, m_2)-HFS have attracted many researchers' attention [7,9,1,8]; the interested reader may refer to [21] for a survey on the hybrid two-stage flowshop problem with a single machine in one stage. For the general hybrid k-stage flowshop problem, when all of m_1, m_2, ..., m_k are fixed integers, Hall claimed that the PTAS for the classic k-stage flowshop problem can be extended to a PTAS for the (m_1, m_2, ..., m_k)-HFS problem [10]. Later, Schuurman and Woeginger presented a PTAS for the hybrid two-stage flowshop problem (m_1, m_2)-HFS, even when the numbers of machines m_1 and m_2 in the two stages are a part of the input [19]. Jansen and Sviridenko generalized this result to the hybrid k-stage flowshop problem where k is a fixed integer while m_1, m_2, ..., m_k can be a part of the input [14]. Due to the inapproximability of the classic k-stage flowshop problem, when k is arbitrary the (m_1, m_2, ..., m_k)-HFS problem cannot be approximated within 1.25 either, unless P = NP [22]. Table 1 summarizes the results reviewed above. Besides, there are plenty of heuristic algorithms in the literature for the general hybrid k-stage flowshop problem; the interested reader can refer to the survey by Ruiz et al. [17].

Compared to the rich literature on the hybrid k-stage flowshop problem, the (m, k)-PFS problem is much less studied. In fact, the general (m, k)-PFS problem is almost untouched, except when only two-stage flowshops are involved [11,20,23]. He et al. first proposed the m-parallel identical two-stage flowshop problem (m, 2)-PFS, motivated by an application from the glass industry [11]. In their work, the (m, 2)-PFS problem is formulated as a mixed-integer program and an efficient heuristic is proposed [11]. Vairaktarakis and Elhafsi [20] also studied the (m, 2)-PFS problem, in order to investigate the hybrid k-stage flowshop problem. Among other results, Vairaktarakis and Elhafsi observed that the (2, 2)-PFS problem can be broken down into two subproblems: a job partition problem and a classic two-stage flowshop problem [20]. Note that the second subproblem can be solved optimally by Johnson's algorithm [15]. The NP-hardness of the first subproblem [4] implies the NP-hardness of (2, 2)-PFS, simply by setting all the p_{i,2}'s to zero. One of the major contributions in [20] is an O(n P^3)-time dynamic programming algorithm for solving the (2, 2)-PFS problem optimally, where n is the number of jobs and P is the sum of all processing times; that is, this exact algorithm runs in pseudo-polynomial time. The NP-hardness of (2, 2)-PFS implies that the general (m, 2)-PFS problem is NP-hard, whether m is a part of the input (arbitrary) or a fixed integer greater than one. Recently, Zhang et al. [23] studied the (m, 2)-PFS problem from the approximation algorithm perspective, more precisely only for the special cases m = 2 and m = 3. They designed a 3/2-approximation algorithm for the (2, 2)-PFS problem and a 12/7-approximation algorithm for the (3, 2)-PFS problem [23]. Both algorithms are variations of Johnson's algorithm: the main idea is to first sort all the jobs into a sequence by Johnson's algorithm, and then to cut this sequence into two (respectively, three) parts for the two (respectively, three) two-stage flowshops


in order to minimize the makespan. The performance analysis is technical, proving how the cutting points guarantee the approximation ratios.

In this paper, we investigate the (m, 2)-PFS problem for any fixed m ≥ 2. There are two major results. The first is a pseudo-polynomial time dynamic programming algorithm that solves the problem exactly. This algorithm is inspired by the idea of Vairaktarakis and Elhafsi for solving the (2, 2)-PFS problem, and is presented in Section 2. Using this exact algorithm as a subroutine, we present in Section 3 the second result: an FPTAS for the (m, 2)-PFS problem. This FPTAS clearly improves the previously best approximation results by Zhang et al. [23] for the special cases m = 2, 3. We conclude the paper in Section 4.

2. An exact algorithm for the (m, 2)-PFS problem

In this section, we present a dynamic programming algorithm for solving the (m, 2)-PFS problem exactly. The running time of the algorithm is shown to be O(n m^2 P^{2m-1}), and thus it is pseudo-polynomial when m is a fixed integer. Here n is the number of jobs and P is the sum of all processing times. For the sake of simplicity, we write PFS instead of (m, 2)-PFS in the sequel; since only two-stage flowshops are involved, we use a_i = p_{i,1} and b_i = p_{i,2} to denote the processing times of the job J_i in the two stages, respectively, and p_i = a_i + b_i. We also write J_i = (a_i, b_i), for i = 1, 2, ..., n. Without loss of generality, we assume that the jobs are pre-sorted by Johnson's algorithm, that is, J_i precedes J_j (i.e., i < j) if and only if

min{a_i, b_j} ≤ min{b_i, a_j}. Such a job sequence is referred to as the Johnson's sequence for the set of jobs.

Lemma 1 ([15]). In the classic two-stage flowshop problem, given a set of jobs in the Johnson's sequence ⟨J_1, J_2, ..., J_n⟩, where J_i = (a_i, b_i), processing the jobs in this sequence is an optimal schedule, and its makespan is

  max_{w=1}^{n} ( Σ_{i=1}^{w} a_i + Σ_{i=w}^{n} b_i ).
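As a concrete illustration (ours, not part of the paper's algorithms; the function names are hypothetical), Johnson's rule and the makespan value of Lemma 1 can be sketched in Python:

```python
def johnson_order(jobs):
    # Johnson's rule: jobs with a_i <= b_i first, by non-decreasing a_i;
    # then jobs with a_i > b_i, by non-increasing b_i.  Any sequence built
    # this way satisfies min(a_i, b_j) <= min(b_i, a_j) for i before j.
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    second = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + second

def two_stage_makespan(seq):
    # Completion-time recurrence for a fixed order on one two-stage
    # flowshop; its value equals max_w (sum_{i<=w} a_i + sum_{i>=w} b_i)
    # of Lemma 1.
    t1 = t2 = 0
    for a, b in seq:
        t1 += a                 # stage-1 machine runs jobs back to back
        t2 = max(t2, t1) + b    # stage 2 waits for stage 1 and itself
    return t2

jobs = [(3, 2), (1, 4), (2, 1)]
print(two_stage_makespan(johnson_order(jobs)))  # optimal makespan: 8
```

For the three jobs above, the Johnson sequence is (1, 4), (3, 2), (2, 1), and the max_w formula attains its maximum at w = 1 with value 1 + (4 + 2 + 1) = 8.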

Lemma 2. Given a set of jobs in the Johnson's sequence ⟨J_1, J_2, ..., J_n⟩, where J_i = (a_i, b_i), any subsequence of this job sequence is the Johnson's sequence for the subset of jobs therein.

Proof. The lemma holds since, for any two jobs J_i and J_j, whether J_i precedes J_j is independent of all the other jobs. □

Lemma 3. For the PFS problem, there is an optimal schedule in which the sub-schedule on the ℓ-th two-stage flowshop F_ℓ is the Johnson's sequence for the subset of jobs assigned to F_ℓ, for every ℓ.

Proof. The lemma follows from Lemma 1 by considering only the subset of jobs assigned to the flowshop F_ℓ. □

In the following we aim to compute an optimal schedule π* for the PFS problem satisfying the property stated in Lemma 3, that is, its sub-schedule on the ℓ-th two-stage flowshop F_ℓ is the Johnson's sequence for the subset of jobs assigned to F_ℓ, for every ℓ. We assume without loss of generality that in any feasible schedule every machine processes the jobs assigned to it as early as possible; this way, every first-stage machine processes its jobs consecutively. Recall that all the jobs are pre-sorted by Johnson's algorithm into ⟨J_1, J_2, ..., J_n⟩, where J_i = (a_i, b_i). In the optimal schedule π*, when we restrict it to the job prefix J_i = ⟨J_1, J_2, ..., J_i⟩, for any i, the job J_i is processed last on one of the m flowshops. This way, we must be able to construct the optimal schedule π* by sequentially assigning the jobs of ⟨J_1, J_2, ..., J_n⟩ to the m flowshops.

To this purpose, we introduce an important concept called a configuration X, which captures the machine idling status after each machine finishes processing all the jobs assigned to it. Note that a configuration does not memorize which flowshop every job is assigned to. Formally, let I_ℓ denote the amount of idle time from the finishing time of the last job on the machine M_{ℓ,1} to the current makespan, and S_ℓ the amount of idle time from the finishing time of the last job on the machine M_{ℓ,2} to the current makespan. A configuration X is defined as a tuple (I_1, I_2, ..., I_m, S_1, S_2, ..., S_m). Clearly, every valid configuration must have I_ℓ ≥ S_ℓ for all ℓ and S_ℓ = 0 for some ℓ; when the current makespan is attained on F_ℓ, we have S_ℓ = 0.

For every job prefix J_i = ⟨J_1, J_2, ..., J_i⟩, there could be multiple schedules which give rise to the same configuration X. The smallest makespan achieved by these schedules is denoted as f_i(X). Therefore, the makespan of an optimal schedule, denoted by C*_max, is equal to min_X f_n(X), where the minimization goes over all possible configurations. Let P = Σ_{i=1}^{n} (a_i + b_i) be the sum of all the processing times. We only need to consider those configurations in which P > I_ℓ ≥ S_ℓ ≥ 0 for all ℓ and S_ℓ = 0 for some ℓ; because we always have Σ_{j=1}^{i} a_j + Σ_{ℓ=1}^{m} I_ℓ = m f_i(X), in total there are at most m P^{2m-2} configurations


Fig. 1. Subcase 1.a. The parent configuration X′ leads to the configuration X with respect to J_i through a (k, k)-event, and J_i is added to F_k.

to be discussed. We next develop recurrences for computing f_i(X) for a configuration X using only the f_{i-1}(X′)'s, by determining which configurations X′ should be considered and how the job J_i is added to lead the configuration X′ to the configuration X. Once this is done, one clearly sees that the space complexity of our algorithm is O(m P^{2m-2}).

2.1. The recurrences

Let X and X′ be two configurations. If adding the job J_i to the end of one of the m flowshops leads the configuration X′ to the configuration X, then X′ is a parent (configuration) of X with respect to J_i. Note that X′ being a parent of X with respect to J_i does not necessarily imply that it is also a parent of X with respect to another job J_j, for any j ≠ i. Suppose that for the configuration X the makespan is achieved at the k-th flowshop, that is, S_k = 0, and that for the parent configuration X′ with respect to J_i the makespan is achieved at the ℓ-th flowshop, that is, S′_ℓ = 0. The process of adding the job J_i to the end of one of the m flowshops to lead the configuration X′ to the configuration X is referred to as a (k, ℓ)-event, denoted as E(k, ℓ).

In the following, we want to determine all the parent configurations X′ = (I′_1, I′_2, ..., I′_m, S′_1, S′_2, ..., S′_m) with respect to the job J_i for a given configuration X = (I_1, I_2, ..., I_m, S_1, S_2, ..., S_m), and to compute f_i(X) using these f_{i-1}(X′)'s. Recall that in the configuration X, S_k = 0 (i.e., the makespan is achieved at the k-th flowshop). We group the parent configurations into two classes: those through a (k, k)-event, and those through a (k, ℓ)-event for ℓ ≠ k.

Case 1. E(k, k). In this case, the parent configuration X′ leads to the configuration X through a (k, k)-event, that is, S′_k = 0. We distinguish two subcases depending on whether the job J_i is added to F_k.

Subcase 1.a. The job J_i is added to F_k (see the illustrations in Fig. 1). In this subcase, we have I_k ≥ b_i.
If I_k = b_i, then we have a_i ≥ I′_k (see the illustration in Fig. 1a). The idle times of the machines M_{t,1} and M_{t,2} in the configuration X, for any t ≠ k, are increased by a_i + b_i - I′_k = p_i - I′_k from those in the configuration X′, respectively. That is, the parent configuration X′ = (I′_1, I′_2, ..., I′_m, S′_1, S′_2, ..., S′_m) must satisfy

  I′_k = 0, 1, ..., a_i,
  S′_k = 0,
  I′_t = I_t - (p_i - I′_k),  for t ≠ k,
  S′_t = S_t - (p_i - I′_k),  for t ≠ k.      (2.1)

Denote the set of configurations each satisfying Eq. (2.1) as C_1. Clearly, |C_1| = a_i + 1. Note that f_i(X) = f_{i-1}(X′) + (p_i - I′_k) for every parent configuration X′ ∈ C_1.

If I_k > b_i, then we have a_i < I′_k (see the illustration in Fig. 1b). The idle times of the machines M_{t,1} and M_{t,2} in the configuration X, for any t ≠ k, are increased by b_i from those in the configuration X′, respectively. That is, the parent configuration X′ = (I′_1, I′_2, ..., I′_m, S′_1, S′_2, ..., S′_m) must satisfy

  I′_k = I_k + a_i - b_i,
  S′_k = 0,
  I′_t = I_t - b_i,  for t ≠ k,
  S′_t = S_t - b_i,  for t ≠ k.      (2.2)

Denote the set of configurations each satisfying Eq. (2.2) as C_2. Clearly, there is only one configuration in C_2. Note that f_i(X) = f_{i-1}(X′) + b_i for every parent configuration X′ ∈ C_2. In summary, in Subcase 1.a the smallest makespan of the configuration X on the job prefix J_i = ⟨J_1, J_2, ..., J_i⟩ is calculated as


  f_i(X) = min_{X′∈C_1} { f_{i-1}(X′) + (p_i - I′_k) },  if I_k = b_i;
         = min_{X′∈C_2} f_{i-1}(X′) + b_i,               if I_k > b_i;
         = ∞,                                            otherwise.      (2.3)

Fig. 2. Subcase 1.b. The parent configuration X′ leads to the configuration X with respect to J_i through a (k, k)-event, and the job J_i is added to F_j with j ≠ k.

Fig. 3. Case 2. The parent configuration X′ leads to the configuration X with respect to J_i through a (k, ℓ)-event, where k ≠ ℓ.

Subcase 1.b. The job J_i is added to F_j with j ≠ k (see the illustration in Fig. 2). In this subcase, we have I′_t = I_t and S′_t = S_t for every t ≠ j; in particular, S′_k = S_k = 0.

Denote A_i = Σ_{h=1}^{i} a_h, the sum of the first-stage processing times for the job prefix J_i. By the assumption that every machine processes its jobs as early as possible, we have I′_j = I_j + a_i and S′_j ≥ S_j + b_i. That is, the parent configuration X′ = (I′_1, I′_2, ..., I′_m, S′_1, S′_2, ..., S′_m) must satisfy

  I′_j = I_j + a_i,
  S′_j = S_j + b_i, S_j + b_i + 1, ..., I′_j,
  I′_t = I_t,  for t ≠ j,
  S′_t = S_t,  for t ≠ j.      (2.4)

Denote the set of configurations each satisfying Eq. (2.4) as C_3. Clearly, |C_3| = (I_j + a_i) - (S_j + b_i) + 1. Note that f_i(X) = f_{i-1}(X′) for every parent configuration X′ ∈ C_3. In summary, in Subcase 1.b the smallest makespan of the configuration X on the job prefix J_i = ⟨J_1, J_2, ..., J_i⟩ is calculated as

  f_i(X) = min_{X′∈C_3} f_{i-1}(X′).      (2.5)

Case 2. E(k, ℓ). In this case, the parent configuration X′ leads to the configuration X through a (k, ℓ)-event (k ≠ ℓ), that is, S′_ℓ = 0. Note that in this case the job J_i must be added to F_k to ensure that the makespan of X is achieved at F_k, and that S_ℓ ≤ S_t for any t ≠ k (see the illustrations in Fig. 3). From f_i(X) = f_{i-1}(X′) + S_ℓ, we conclude that I′_t = I_t - S_ℓ and S′_t = S_t - S_ℓ for every t ≠ k. The following determines the values of I′_k and S′_k.

If I_k > b_i (see the illustration in Fig. 3a), then the machine M_{k,2} starts processing the job J_i not immediately after the job J_i is finished on the machine M_{k,1}, but I_k - b_i time units later. So we have I′_k = I_k + a_i - S_ℓ and S′_k = b_i - S_ℓ. That is, the parent configuration X′ = (I′_1, I′_2, ..., I′_m, S′_1, S′_2, ..., S′_m) must satisfy


  I′_k = I_k + a_i - S_ℓ,
  S′_k = b_i - S_ℓ,
  I′_t = I_t - S_ℓ,  for t ≠ k,
  S′_t = S_t - S_ℓ,  for t ≠ k.      (2.6)

Denote the set of configurations each satisfying Eq. (2.6) as C_4. Clearly, there is only one configuration in C_4. Note that f_i(X) = f_{i-1}(X′) + S_ℓ for every parent configuration X′ ∈ C_4.

If I_k = b_i (see the illustration in Fig. 3b), then the machine M_{k,2} starts processing the job J_i immediately after the job J_i is finished on the machine M_{k,1}. So we still have I′_k = I_k + a_i - S_ℓ, but S′_k can be any non-negative integer less than or equal to I′_k. That is, the parent configuration X′ = (I′_1, I′_2, ..., I′_m, S′_1, S′_2, ..., S′_m) must satisfy

  I′_k = I_k + a_i - S_ℓ,
  S′_k = 0, 1, ..., I_k + a_i - S_ℓ,
  I′_t = I_t - S_ℓ,  for t ≠ k,
  S′_t = S_t - S_ℓ,  for t ≠ k.      (2.7)

Denote the set of configurations each satisfying Eq. (2.7) as C_5. Clearly, |C_5| = I_k + a_i - S_ℓ + 1. Note that f_i(X) = f_{i-1}(X′) + S_ℓ for every parent configuration X′ ∈ C_5. In summary, in Case 2 the smallest makespan of the configuration X on the job prefix J_i = ⟨J_1, J_2, ..., J_i⟩ is calculated as

  f_i(X) = S_ℓ + min_{X′∈C_4} f_{i-1}(X′),  if I_k > b_i;
         = S_ℓ + min_{X′∈C_5} f_{i-1}(X′),  if I_k = b_i;
         = ∞,                               otherwise.      (2.8)

2.2. The dynamic programming algorithm

The exact algorithm for the (m, 2)-PFS problem essentially computes f_i(X) for all i's and all configurations X, and returns the minimum min_X f_n(X). Recall that P = Σ_{i=1}^{n} (a_i + b_i) is the sum of all the processing times, and that there are at most m P^{2m-2} values f_i(X) for every i = 1, 2, ..., n. In the last subsection we developed the recurrences for computing f_i(X) for a configuration X using only the f_{i-1}(X′)'s, by determining which parent configurations X′ should be considered and how the job J_i is added to lead the configuration X′ to the configuration X. The exact algorithm is a dynamic programming algorithm based on these recurrences. Since the computation of the f_i(X)'s over all feasible configurations relies only on the f_{i-1}(X′)'s over all feasible configurations, one clearly sees that the algorithm needs only two arrays, each of length m P^{2m-2}, during the execution. That is, the space complexity of the exact algorithm is O(m P^{2m-2}). For every infeasible configuration X, f_i(X) = ∞ for all i = 0, 1, ..., n. The boundary conditions for the algorithm are

  f_0(X) = 0,  if X = (0, 0, ..., 0);
         = ∞,  otherwise,      (2.9)

and by putting Eqs. (2.3), (2.5), (2.8) together the overall recurrence is

  f_i(X) = min_{X′∈C_1} { f_{i-1}(X′) + (p_i - I′_k) },  if E(k, k), J_i is added to F_k, and I_k = b_i;
         = min_{X′∈C_2} f_{i-1}(X′) + b_i,               if E(k, k), J_i is added to F_k, and I_k > b_i;
         = min_{X′∈C_3} f_{i-1}(X′),                     if E(k, k) and J_i is added to F_j, j ≠ k;
         = S_ℓ + min_{X′∈C_4} f_{i-1}(X′),               if E(k, ℓ), ℓ ≠ k, and I_k > b_i;
         = S_ℓ + min_{X′∈C_5} f_{i-1}(X′),               if E(k, ℓ), ℓ ≠ k, and I_k = b_i;
         = ∞,                                            otherwise,      (2.10)

where the sets of configurations C_1, C_2, C_3, C_4, and C_5 are specified in Eqs. (2.1), (2.2), (2.4), (2.6), and (2.7), respectively. Since |C_1| = a_i + 1, |C_2| = 1, |C_3| = (I_j + a_i) - (S_j + b_i) + 1, |C_4| = 1, and |C_5| = a_i + b_i - S_ℓ + 1, every entry f_i(X) can be computed in O(mP) time. Therefore, the total running time of the exact algorithm is O(n m^2 P^{2m-1}). We summarize this result in the following.

Theorem 1. The (m, 2)-PFS problem can be solved exactly by a dynamic programming algorithm in O(n m^2 P^{2m-1}) time and O(m P^{2m-2}) space.
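To make the dynamic programming idea concrete, here is a small Python sketch of ours (not the paper's implementation). It uses an equivalent but simpler state, the per-flowshop pair (first-stage load A_ℓ, completion time C_ℓ), instead of the idle-time configurations: given the current makespan f, the two encodings are related by I_ℓ = f - A_ℓ and S_ℓ = f - C_ℓ. The state space is likewise pseudo-polynomial for fixed m.

```python
def johnson_order(jobs):
    # Johnson's rule for a two-stage flowshop: jobs with a <= b first,
    # by non-decreasing a; then jobs with a > b, by non-increasing b.
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    second = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + second

def exact_pfs_makespan(jobs, m):
    # DP over the jobs in Johnson order (optimal per flowshop by
    # Lemmas 1-3).  A state is the sorted tuple of (A_l, C_l) pairs,
    # one per flowshop: first-stage load and completion time.
    states = {tuple((0, 0) for _ in range(m))}
    for a, b in johnson_order(jobs):
        nxt = set()
        for st in states:
            for l in range(m):          # try appending the job to F_l
                A, C = st[l]
                cfg = list(st)
                cfg[l] = (A + a, max(C, A + a) + b)  # two-stage recurrence
                nxt.add(tuple(sorted(cfg)))          # flowshops are identical
        states = nxt
    return min(max(C for _, C in st) for st in states)
```

For instance, `exact_pfs_makespan([(2, 1), (1, 2)], 2)` returns 3, placing one job on each flowshop, while with m = 1 the routine reduces to the single two-stage flowshop makespan of the Johnson sequence.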


3. An FPTAS for the (m, 2)-PFS problem

The dynamic programming algorithm presented in the last section is an exponential-time algorithm in general, and pseudo-polynomial when m is a fixed integer. In this section, we use it as a subroutine to design a fully polynomial-time approximation scheme, denoted as {A(ε), ε > 0}, for the (m, 2)-PFS problem.

3.1. Description of the algorithm A(ε)

Given a positive real number ε and a set of jobs J = ⟨J_1, J_2, ..., J_n⟩ in the Johnson's sequence, with J_i = (a_i, b_i) where a_i, b_i are non-negative integers, we let A = A_n = Σ_{i=1}^{n} a_i, B = B_n = Σ_{i=1}^{n} b_i, D = max{A, B}/m, and K = εD/n. The basic idea of the algorithm A(ε) is to assign the jobs of J in a good way to the m flowshops. To this purpose, the algorithm A(ε) first scales down the job processing times to

  a′_i = ⌈a_i / K⌉,  b′_i = ⌈b_i / K⌉

for every job J_i, from a_i, b_i, respectively. For ease of presentation, this new job is denoted by J′_i. Clearly, the job sequence J′ = ⟨J′_1, J′_2, ..., J′_n⟩ is still in Johnson's order, since for any two integers x, y, x ≤ y implies ⌈x/K⌉ ≤ ⌈y/K⌉. In the second step, the algorithm A(ε) calls the dynamic programming algorithm presented in the last section to obtain an optimal schedule π for the job sequence J′ = ⟨J′_1, J′_2, ..., J′_n⟩ with J′_i = (a′_i, b′_i), minimizing the makespan with respect to the scaled processing times. For the job schedule π, the algorithm A(ε) then calculates the makespan using the original processing times, and returns both the schedule π and its makespan. We remark that in the optimal schedule π for the job sequence J′, the sequence of the jobs assigned to each flowshop is a subsequence of ⟨J′_1, J′_2, ..., J′_n⟩, and thus is in Johnson's order.

3.2. Performance analysis

Let π* denote an optimal schedule for the job sequence J = ⟨J_1, J_2, ..., J_n⟩ with the original processing times J_i = (a_i, b_i). We assume without loss of generality that in π*, the sequence of the jobs assigned to each flowshop is a subsequence of ⟨J_1, J_2, ..., J_n⟩. Let C*_j denote the finishing time of the last job assigned to F_j on the machine M_{j,2}, j = 1, 2, ..., m, and let C*_max = max_{j=1}^{m} C*_j denote the makespan of the schedule π*. For the schedule π*, when the new scaled processing times are used in the calculation, correspondingly let C*′_j denote the finishing time of the last job assigned to F_j on the machine M_{j,2}, j = 1, 2, ..., m.

Let π denote the optimal schedule generated by the dynamic programming algorithm for the job sequence J′ = ⟨J′_1, J′_2, ..., J′_n⟩ with J′_i = (a′_i, b′_i). Let C′_j denote the finishing time of the last job assigned to F_j on the machine M_{j,2}, j = 1, 2, ..., m. For the schedule π, when the original processing times are used in the calculation, correspondingly let C_j denote the finishing time of the last job assigned to F_j on the machine M_{j,2}, j = 1, 2, ..., m. Let C^π_max = max_{j=1}^{m} C_j denote the makespan of the schedule π.²
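The two-step algorithm A(ε) described in Section 3.1 can be sketched end to end in Python. This is our own illustration under simplifying assumptions: the function names are hypothetical, and the exact solver below uses a per-flowshop (load, completion time) state without canonicalization, so it is only suitable for tiny instances.

```python
import math

def johnson_indices(jobs):
    # Indices of `jobs` in a Johnson order (a <= b first by a; rest by -b).
    first = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])
    second = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                    key=lambda i: -jobs[i][1])
    return first + second

def fptas_pfs(jobs, m, eps):
    # Sketch of A(eps): scale the times by K = eps*D/n with D = max(A,B)/m,
    # solve the scaled instance exactly by DP, then evaluate the chosen
    # assignment with the ORIGINAL processing times.
    n = len(jobs)
    A = sum(a for a, _ in jobs)
    B = sum(b for _, b in jobs)
    K = eps * max(A, B) / (m * n)
    scaled = [(math.ceil(a / K), math.ceil(b / K)) for a, b in jobs]
    order = johnson_indices(jobs)   # also a Johnson order for `scaled`
    # DP state -> one job->flowshop assignment achieving that state.
    states = {tuple((0, 0) for _ in range(m)): {}}
    for i in order:
        sa, sb = scaled[i]
        nxt = {}
        for st, asg in states.items():
            for l in range(m):
                Al, Cl = st[l]
                cfg = list(st)
                cfg[l] = (Al + sa, max(Cl, Al + sa) + sb)
                nxt.setdefault(tuple(cfg), {**asg, i: l})
        states = nxt
    best = min(states, key=lambda st: max(C for _, C in st))
    asg = states[best]
    # Evaluate the assignment with the original processing times.
    makespan = 0
    for l in range(m):
        t1 = t2 = 0
        for i in order:
            if asg[i] == l:
                t1 += jobs[i][0]
                t2 = max(t2, t1) + jobs[i][1]
        makespan = max(makespan, t2)
    return makespan
```

On the toy instance [(2, 1), (1, 2)] with m = 2 and eps = 0.5, the returned makespan is 3, which here coincides with the optimum; in general the analysis below only guarantees a makespan within a factor (1 + ε) of the optimum.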

Lemma 4. max_{j=1}^{m} C*′_j < C*_max / K + n.

Proof. Recall that C*′_j is the finishing time of the last job on F_j in the schedule π*, using the scaled job processing times in the calculation, while C*_max is the optimal makespan for the job sequence using the original job processing times. According to the scaling scheme, we have

  a_i / K ≤ a′_i < a_i / K + 1,    b_i / K ≤ b′_i < b_i / K + 1.      (3.1)

Consider the optimal schedule π* for the job sequence using the original job processing times. For each flowshop F_j, the sequence of the jobs assigned to it is a subsequence of ⟨J_1, J_2, ..., J_n⟩, thus in Johnson's order; therefore it is still in Johnson's order when using the scaled job processing times. Assume these job indices are π*_j[1], π*_j[2], ..., π*_j[n*_j]. It follows from Lemma 1 and Eq. (3.1) that for every j,

  C*′_j = max_{w=1}^{n*_j} ( Σ_{i=1}^{w} a′_{π*_j[i]} + Σ_{i=w}^{n*_j} b′_{π*_j[i]} )
        < max_{w=1}^{n*_j} ( Σ_{i=1}^{w} ( a_{π*_j[i]}/K + 1 ) + Σ_{i=w}^{n*_j} ( b_{π*_j[i]}/K + 1 ) )

² In notation, a quantity with a prime indicates that it is calculated using the scaled job processing times.


        = max_{w=1}^{n*_j} ( (1/K) ( Σ_{i=1}^{w} a_{π*_j[i]} + Σ_{i=w}^{n*_j} b_{π*_j[i]} ) ) + n*_j + 1
        = C*_j / K + n*_j + 1.

Therefore,

  max_{j=1}^{m} C*′_j < max_{j=1}^{m} { C*_j / K + n*_j + 1 } ≤ C*_max / K + n,

where the last inequality follows since m ≥ 2, and thus no flowshop processes all the n jobs (so that n*_j + 1 ≤ n). This proves the lemma. □

Lemma 5. C^π_max ≤ K max_{j=1}^{m} C′_j.

Proof. Recall that max_{j=1}^{m} C′_j is the optimal makespan for the job sequence using the scaled job processing times in the calculation, while C^π_max is the makespan of the schedule π using the original job processing times. Consider the optimal schedule π for the job sequence using the scaled job processing times. For each flowshop F_j, the sequence of the jobs assigned to it is a subsequence of ⟨J_1, J_2, ..., J_n⟩, which is the same as the Johnson's sequence for the jobs using the original job processing times. Assume these job indices are π_j[1], π_j[2], ..., π_j[n_j]. It follows from Lemmas 1 and 2 and Eq. (3.1) that for every j,

  C_j = max_{w=1}^{n_j} ( Σ_{i=1}^{w} a_{π_j[i]} + Σ_{i=w}^{n_j} b_{π_j[i]} )
      ≤ max_{w=1}^{n_j} ( Σ_{i=1}^{w} K a′_{π_j[i]} + Σ_{i=w}^{n_j} K b′_{π_j[i]} )
      = K max_{w=1}^{n_j} ( Σ_{i=1}^{w} a′_{π_j[i]} + Σ_{i=w}^{n_j} b′_{π_j[i]} )
      = K C′_j.

Therefore,

  C^π_max = max_{j=1}^{m} C_j ≤ K max_{j=1}^{m} C′_j.

This proves the lemma. □

Theorem 2. The algorithm A(ε) is an O(2^{2m-1} n^{2m} m^2 (1 + m/ε)^{2m-1})-time (1 + ε)-approximation algorithm for the (m, 2)-PFS problem.

Proof. Combining Lemmas 4 and 5, from the optimality of the schedule π for the scaled instance we have

  C^π_max ≤ K max_{j=1}^{m} C′_j ≤ K max_{j=1}^{m} C*′_j < K ( C*_max / K + n ) = C*_max + nK.

Recall that we set K = εD/n and D = max{A, B}/m. We have C*_max ≥ D and thus

  C^π_max < C*_max + εD ≤ (1 + ε) C*_max.

That is, the makespan of the schedule π generated by the algorithm A(ε) is within a factor of (1 + ε) of the optimum. The running time of the algorithm A(ε) is dominated by its call to the dynamic programming algorithm on the job sequence J′ = ⟨J′_1, J′_2, ..., J′_n⟩ with J′_i = (a′_i, b′_i). From Eq. (3.1),

$$P' = \sum_{i=1}^{n}\left(a'_i + b'_i\right) < 2n + \frac{1}{K}\sum_{i=1}^{n}\left(a_i + b_i\right) = 2n + P/K = 2n + (A + B)/K$$


$$\le 2n + 2\max\{A, B\}/K = 2n + 2mD \cdot \frac{n}{\varepsilon D} = 2n\left(1 + m/\varepsilon\right).$$

Therefore, from Theorem 1, the running time of the algorithm A(ε) is O(2^{2m−1} n^{2m} m^2 (1 + m/ε)^{2m−1}), which is polynomial in n and 1/ε for any fixed integer m. This proves the theorem. □

4. Conclusions

We presented a pseudo-polynomial time dynamic programming algorithm for solving the (m, 2)-PFS problem for any fixed integer m. This exact algorithm is used as a subroutine to design an FPTAS for the (m, 2)-PFS problem. Given that the problem is NP-hard, our FPTAS is the best possible unless P = NP. We remark that our exact algorithm relies heavily on the fact that Johnson's sequence of the jobs is an optimal schedule for the classic two-stage flowshop problem. Since the classic k-stage flowshop problem is strongly NP-hard when k ≥ 3, and no permutation schedule can be guaranteed to be optimal when k ≥ 4, our design techniques do not carry over to the (m, k)-PFS problem when k ≥ 3. Nevertheless, the classic k-stage flowshop problem admits a PTAS when k is fixed, and the PTAS presented in [10] might contain hints to a PTAS for the (m, k)-PFS problem. We leave this as an open question.

Acknowledgements

Dong is supported by the National Natural Science Foundation of China (NNSF) Grant No. 11501512. Tong is supported by an Alberta Innovates - Technology Futures (AITF) Graduate Student Scholarship, the NSERC, and the FY16 Startup Funding from Georgia Southern University. Luo is supported by a sabbatical research grant of Lin; his work was mostly done during his visit to the University of Alberta. Hu is supported by the NNSF Grants No. 11271324 and 11471286. Xu is supported by the NNSF Grants No. 61221063 and 71371129, and the Program for Changjiang Scholars and Innovative Research Team in University Grant No. IRT1173. Lin is supported by the NSERC, the NNSF Grant No. 61471124, and the Science Foundation of Zhejiang Sci-Tech University (ZSTU) Grant No. 14062170-Y; his work was mostly done during his sabbatical leave at the ZSTU.

References

[1] B. Chen, Analysis of classes of heuristics for scheduling a two-stage flow shop with parallel machines at one stage, J. Oper. Res. Soc. 46 (1995) 234–244.
[2] B. Chen, C.A. Glass, C.N. Potts, V.A. Strusevich, A new heuristic for three-machine flow shop scheduling, Oper. Res. 44 (1996) 891–898.
[3] R.W. Conway, W.L. Maxwell, L.W. Miller, Theory of Scheduling, Addison-Wesley, Reading, MA, 1967.
[4] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman & Co., New York, NY, USA, 1979.
[5] M.R. Garey, D.S. Johnson, R. Sethi, The complexity of flowshop and jobshop scheduling, Math. Oper. Res. 1 (1976) 117–129.
[6] T. Gonzalez, S. Sahni, Flowshop and jobshop schedules: complexity and approximation, Oper. Res. 26 (1978) 36–52.
[7] J.N.D. Gupta, Two-stage, hybrid flowshop scheduling problem, J. Oper. Res. Soc. 39 (1988) 359–364.
[8] J.N.D. Gupta, A.M.A. Hariri, C.N. Potts, Scheduling a two-stage hybrid flow shop with parallel machines at the first stage, Ann. Oper. Res. 69 (1997) 171–191.
[9] J.N.D. Gupta, E.A. Tunc, Schedules for a two-stage hybrid flowshop with parallel machines at the second stage, Int. J. Prod. Res. 29 (1991) 1489–1502.
[10] L.A. Hall, Approximability of flow shop scheduling, Math. Program. 82 (1998) 175–190.
[11] D.W. He, A. Kusiak, A. Artiba, A scheduling problem in glass manufacturing, IIE Trans. 28 (1996) 129–139.
[12] D.S. Hochbaum, D.B. Shmoys, Using dual approximation algorithms for scheduling problems: theoretical and practical results, J. ACM 34 (1987) 144–162.
[13] J.A. Hoogeveen, J.K. Lenstra, B. Veltman, Preemptive scheduling in a two-stage multiprocessor flow shop is NP-hard, European J. Oper. Res. 89 (1996) 172–175.
[14] K. Jansen, M.I. Sviridenko, Polynomial time approximation schemes for the multiprocessor open and flow shop scheduling problem, in: STACS, Lecture Notes in Comput. Sci., vol. 1770, 2000, pp. 455–465.
[15] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Nav. Res. Logist. Q. 1 (1954) 61–68.
[16] C.-Y. Lee, G.L. Vairaktarakis, Minimizing makespan in hybrid flowshops, Oper. Res. Lett. 16 (1994) 149–158.
[17] R. Ruiz, J.A. Vázquez-Rodríguez, The hybrid flow shop scheduling problem, European J. Oper. Res. 205 (2010) 1–18.
[18] S.K. Sahni, Algorithms for scheduling independent tasks, J. ACM 23 (1976) 116–127.
[19] P. Schuurman, G.J. Woeginger, A polynomial time approximation scheme for the two-stage multiprocessor flow shop problem, Theoret. Comput. Sci. 237 (2000) 105–122.
[20] G. Vairaktarakis, M. Elhafsi, The use of flowlines to simplify routing complexity in two-stage flowshops, IIE Trans. 32 (2000) 687–699.
[21] H. Wang, Flexible flow shop scheduling: optimum, heuristics and artificial intelligence solutions, Expert Syst. 22 (2005) 78–85.
[22] D.P. Williamson, L.A. Hall, J.A. Hoogeveen, C.A.J. Hurkens, J.K. Lenstra, S.V. Sevast'janov, D.B. Shmoys, Short shop schedules, Oper. Res. 45 (1997) 288–294.
[23] X. Zhang, S. van de Velde, Approximation algorithms for the parallel flow shop problem, European J. Oper. Res. 216 (2012) 544–552.