Decomposition approaches in permutation scheduling problems with application to the M-machine flow shop scheduling problems


J.G. SHANTHIKUMAR and Yih-Bor WU
Systems and Industrial Engineering, University of Arizona, Tucson, AZ 85721, U.S.A.

Abstract. In this article, we consider three decomposition techniques for permutation scheduling problems. We introduce a general iterative decomposition algorithm for permutation scheduling problems and apply it to the permutation flow shop scheduling problem. We also develop bounds needed for this iterative decomposition approach and compare its computational requirements to those of the traditional branch and bound algorithms. Two heuristic algorithms based on the iterative decomposition approach are also developed. An extensive numerical study indicates that the heuristic algorithms are practical alternatives to very costly exact algorithms for large flow shop scheduling problems.

Keywords: Scheduling, heuristics, planning, production

1. Introduction

The idea of decomposing a problem to simplify its solution is not new. However, the use of large scale computers in recent years has led to a rapid expansion of decomposition techniques for optimization, for solving reliability and electrical network problems, for process controls, and in a wide variety of other problems. In this paper we consider three decomposition techniques for the permutation scheduling problems.
2. The decomposition approaches

In this section, we shall define and discuss three different decomposition approaches, in the order of declining efficiency, to solve scheduling problems. They are: (1) complete decomposition, (2) partial decomposition, and (3) iterative decomposition.

Received January 1982; revised February 1984

North-Holland
European Journal of Operational Research 19 (1985) 125-141
0377-2217/85/$3.30 © 1985, Elsevier Science Publishers B.V. (North-Holland)

2.1. Complete decomposition (C-decomposition)

Under a complete decomposition, it is assumed that a given set J = {1, 2, …, n} of n jobs can be partitioned, within a number of operations bounded by a polynomial function of n, into (m+1) mutually exclusive subsets J(i), i = 1, 2, …, m+1, such that the following two conditions are satisfied:

(C1) All j ∈ J(i) are known to precede all k ∈ J(i−1), ∀i = 2, …, m+1, in an optimal schedule π*; and

(C2) The cost φ(π) for the schedule π is equal to Σ_{i=1}^{m+1} f_i(φ(π_i)), where φ(π_i) is the cost of the schedule π_i for the subproblem p(i) formed by the job set J(i), π = ∪_{i=1}^{m+1} π_{m+2−i}, and the f_i(·), i = 1, 2, …, m+1, are monotonically nondecreasing functions that are independent of the schedules π_i of the jobs in J(i), ∀i = 1, 2, …, m+1.


Theorem 2.1. If π_i* is the optimal schedule of the subproblem p(i), such that φ(π_i*) is the minimum over all schedules π_i of the jobs in J(i), then the optimal schedule π* of the jobs in J can be obtained by the recomposition rule

π* = ∪_{i=1}^{m+1} π*_{m+2−i},

and the cost φ(π*) of this schedule is equal to

Σ_{i=1}^{m+1} f_i(φ(π_i*)).

Proof. Since the functions f_i(·), i = 1, 2, …, m+1, are independent of the sequences π_i, i = 1, 2, …, m+1, each f_j(φ(π_j)) can be minimized without any regard to the other subsets J(i), ∀i ≠ j. Since f_j(·) is a monotonically nondecreasing function, the minimum of f_j(φ(π_j)) is obtained when φ(π_j) is minimum. If π_j* is the optimal schedule of the jobs in J(j), then it is obvious that the minimum of φ(π) is φ(π*), where

π* = ∪_{i=1}^{m+1} π*_{m+2−i}  and  φ(π*) = Σ_{i=1}^{m+1} f_i(φ(π_i*)).  □

Using the above principle, a large scheduling problem can be decomposed into two or more disjoint subsets of smaller scheduling problems. These subsets can be solved separately and the optimal schedule can be obtained by recomposing them as discussed earlier. The jobs can normally be partitioned into subsets based on either the feasibility or the optimality of the final schedule. Application of this complete decomposition approach to a single machine scheduling problem, to minimize the weighted flow time subject to maintaining a minimum maximum tardiness, is illustrated in [21]. Except for the one machine scheduling problems and for some special cases of two machine and M machine scheduling problems, it is not possible to construct the functions f_i(·) independent of the sequences π_j, ∀j ≠ i. The following theorem provides sufficient conditions under which complete decomposition can be used.

Theorem 2.2. If a function g_i exists for all i = 1, 2, …, m+1, such that

(C3)

g_i(π_i(j+1)) ≤ g_i(π_i(j)), for π_i(j), π_i(j+1) ∈ J(i),

implies that

φ(…, π_i(j), π_i(j+1), …) ≤ φ(…, π_i(j+1), π_i(j), …),

where π_i(j) is the j-th job in the permutation π_i, then the schedule π*, satisfying

g_i(π_i*(j)) ≤ g_i(π_i*(j+1)), ∀j, i = 1, 2, …, m+1,

and

π* = ∪_{i=1}^{m+1} π*_{m+2−i},

is optimal when the subsets J(i), i = 1, 2, …, m+1, satisfy condition (C1).


Proof. Suppose π* exists. Any π ≠ π* must contain two adjacent elements (π(t), π(t+1)) violating the above ordering. Interchanging them will produce a new permutation without increasing φ because of condition (C3). Continuing in this way, π_i*, i = 1, 2, …, m+1, and π* will be reached eventually. □

The above concept can be applied to Johnson's [14] two machine and three machine cases. Also Jackson's [13] n/2/G, n_j ≤ 2/F_max problem can be shown to satisfy the above conditions. Anyhow, the existence of the function g indicates that there will not be much benefit in applying the decomposition concept. But the above concept can be applied efficiently to problems with precedence constraints, and a very good example of this case can be found in Kurisu [15]. He has solved the two machine scheduling problem with required precedence among jobs by identifying these subsets. In most of the scheduling problems, it is unlikely that one can identify subsets J(i), i = 1, 2, …, m+1, satisfying the conditions specified for complete decomposition. In such cases it will not be possible to employ the complete decomposition approach. In the next section we discuss a slightly less efficient decomposition approach than complete decomposition.

2.2. Partial decomposition (P-decomposition)

In some permutation scheduling problems, it is possible to identify the subsets J(i), i = 1, …, m+1, of jobs which may satisfy condition (C2) or (C3) even though such a subset partitioning may not satisfy condition (C1). That is, these partitions do not necessarily satisfy any precedence conditions. In such cases it is possible to use the following partial decomposition approach. Since condition (C1) will not be satisfied by an initially identifiable job partition, in this approach we have to consider all possible partitions, such that at least one partition, which we could not identify initially, satisfies condition (C1). This concept will
be illustrated through the following theorem. Briefly, we can say that the partial decomposition approach, also termed decomposition with partial enumeration, can be used when condition (C2) or (C3) is satisfied by some job partitioning into subsets of fixed cardinality. Let the J_k(i) be job partitions such that

P_k = {J_k(1), J_k(2), …, J_k(m+1)},  ∪_{i=1}^{m+1} J_k(i) = J,  and  |J_k(i)| = n_i, ∀i = 1, 2, …, m+1,

where the n_i, i = 1, 2, …, m+1, are fixed integers.

Theorem 2.3. If either condition (C2) or condition (C3) is satisfied, then the following procedure will identify an optimal solution π*.

Algorithm 2.1.
Step 0. Initialize k = 1 and let 𝒫 = {P_1, P_2, …, P_K} be the set of all such partitions.
Step 1. Choose the partition P_k and solve the subproblems formed by the jobs in J_k(i), i = 1, 2, …, m+1, and let these solutions be π_i^k, i = 1, 2, …, m+1.
Step 2. Recompose these solutions such that π_k* = ∪_{i=1}^{m+1} π^k_{m+2−i}.
Step 3. Set 𝒫 = 𝒫 − {P_k} and k = k + 1. If 𝒫 = ∅, go to step 4; otherwise go to step 1.
Step 4. Choose the best schedule π* such that

φ(π*) = min_k {φ(π_k*)};

π* is the optimal schedule. Stop.
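As a sketch of Algorithm 2.1, consider a toy single machine problem with total flow time as the cost; for a fixed partition this cost separates across the subsets in the sense of condition (C2), so each subproblem can be solved independently (here by brute force). The instance, the fixed subset sizes, and the exhaustive subproblem solver below are all illustrative assumptions, not part of the original algorithm statement.

```python
from itertools import combinations, permutations

def flow_time(schedule, p):
    """Total flow time on a single machine (illustrative cost)."""
    done, total = 0, 0
    for j in schedule:
        done += p[j]
        total += done
    return total

def algorithm_2_1(jobs, sizes, p):
    """Enumerate every partition of `jobs` into subsets of the fixed sizes,
    solve each subproblem exhaustively, recompose, and keep the best schedule."""
    def partitions(remaining, sizes):
        if not sizes:
            yield []
            return
        for subset in combinations(sorted(remaining), sizes[0]):
            for rest in partitions(remaining - set(subset), sizes[1:]):
                yield [subset] + rest
    best, best_cost = None, float("inf")
    for part in partitions(set(jobs), sizes):
        schedule = []
        for subset in part:            # subsets are concatenated in the fixed order
            sub = min(permutations(subset), key=lambda s: flow_time(s, p))
            schedule.extend(sub)
        cost = flow_time(schedule, p)
        if cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

p = {0: 4, 1: 1, 2: 3, 3: 2}           # hypothetical processing times
print(algorithm_2_1([0, 1, 2, 3], (2, 2), p))   # → ([1, 3, 2, 0], 20)
```

Since some partition is always consistent with an optimal order, the enumeration recovers the optimum, exactly as Theorem 2.3 asserts.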


Proof. Let π be an optimal schedule with π ≠ π*. Since 𝒫 consists of all possible partitions, there exists one partition P_t such that all jobs in J_t(i) follow all jobs in J_t(i+1), ∀i ≤ m, in the schedule π. Then the partition P_t satisfies condition (C1) and therefore, from Theorems 2.1 and 2.2, we can conclude that π_t* is also an optimal schedule. But we know that φ(π*) ≤ φ(π_t*) and therefore π* is an optimal schedule. □

In some problems it may be possible to eliminate certain partitions. Again, such partition elimination can be based either on the optimality criteria or on the feasibility condition of the final schedule. In [20] we solved a special structure flow shop problem to illustrate the above principle. Job partitions, in the majority of the NP-complete scheduling problems, do not satisfy conditions (C1) and (C2) or (C3). So, in such cases, either assumptions can be made regarding these conditions or a branch and bound technique can be used. Ashour [1] has assumed such a partition a priori for the n job M machine flow shop scheduling problem n/M/P/F_max in his heuristic decomposition approach; Algorithm 2.1 can then be used to solve the above problem. Gupta and Maykut [12] have also proposed a heuristic decomposition approach by defining a definite partition. That is, it has been assumed that condition (C1) is true according to their partition. But it should be noted that, because of these assumptions, the optimality of the final schedules obtained through these decomposition approaches [1,12] cannot be guaranteed. We shall now consider the branch and bound approach.

2.3. Branch and bound as a decomposition tool: Iterative decomposition (I-decomposition)

McMahon and Burton [17] have indicated the duality of the flow shop scheduling problems with minimum make-span criterion. It was also observed that, in some cases, it would be computationally efficient to solve the dual problem. So far, no sufficient condition has been derived to identify these cases. In this decomposition approach, job sequences will be built from both ends, thus making use of the duality of machine scheduling problems with minimum make-span criterion. Bounds at each node are estimated through heuristic schedules and these schedules are used to update the upper bound at each node. A similar approach has been independently developed by Potts [19]. Through extensive computational results, Potts [19] has established that his approach outperforms the previously published algorithms. Our algorithm, although it uses the same basic idea, is different from his. Furthermore, two heuristic algorithms are developed based on our basic algorithm.

Even though the idea of decomposition in this approach is not explicit, we claim that the underlying concept can well be thought of as a heuristic decomposition approach. We shall make this clear in the following discussion. Let σ and ρ be the scheduled front end and tail end partial sequences, respectively. Also, let π be the set of unscheduled jobs. Under the assumption that these three subsets {σ}, π, and {ρ} satisfy the conditions (C1) and (C2), or that π satisfies the condition (C3), the optimal schedule with the fixed partial sequences σ and ρ is (σπ̄ρ), where π̄ is the optimal schedule for the permutation scheduling problem formed by the job set π. Since we resort to the iterative decomposition approach only because the assumption regarding conditions (C1) and (C2) or (C3) is invalid, it is straightforward that the schedule (σπ̄ρ) can only be interpreted as a good schedule obtained through heuristic decomposition. Therefore, it can easily be realized that the effort needed to identify or obtain π̄ may not be worthwhile. Instead of π̄, we shall therefore use a heuristic schedule π̂ of the jobs belonging to π and use the schedule Ŝ = (σ, π̂, ρ) to estimate a lower bound LB(Y) and to update the upper bound UB.

The I-decomposition approach to obtain the optimal schedule can be characterized as follows:
(1) Throughout the execution of the decomposition approach, the best solution S* found so far provides an upper bound UB on the value of the optimal solution.
(2) A branching rule d, one of which is discussed below, will be used to choose a node η for further expansion. The selection of node η will be described as η = d{N}, where N is the set of all active nodes.
(3) Bounding rule B provides a lower bound LB(Y), where Y is the set of all feasible schedules that could be generated from that node.


(4) The heuristic schedule Ŝ is used to estimate the lower bound LB(Y) = C(Ŝ), where Ŝ ∈ Y and C(S) ≤ φ(S) for all feasible schedules S ∈ Y.

Branching rules (d).
(1) Jump track: implements a frontier search where a node with the lowest lower bound LB is chosen for branching.
(2) Newest active node search: this is essentially a depth first search where the descendants of a parent node are selected for branching in an arbitrary manner.
(3) Restricted flooding: similar to the newest active node search except that the descendants are chosen in order of nondecreasing lower bounds.

We shall discuss two possible approaches in I-decomposition. They are: select and combine (Section 2.3.1), and select and branch (Section 2.3.2).
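The branching rules map naturally onto standard data structures; the fragment below (with made-up bounds) sketches the jump track rule with a priority queue.

```python
import heapq

# Jump track (frontier search): always expand the active node with the lowest lower bound.
active = []                                   # heap of (lower_bound, node_id, node_data)
heapq.heappush(active, (42, 1, {"sigma": (), "rho": ()}))     # bounds here are made up
heapq.heappush(active, (37, 2, {"sigma": (3,), "rho": ()}))
heapq.heappush(active, (40, 3, {"sigma": (), "rho": (5,)}))
lb, node_id, node = heapq.heappop(active)
print(node_id, lb)                            # → 2 37
```

The newest active node rule would instead use a plain stack (`list.append`/`list.pop`), and restricted flooding would sort each parent's children by bound before pushing them.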

2.3.1. Select and combine approach

In this algorithm, jobs suitable to be scheduled first and to be scheduled last are selected using the bound LB1, and then the possible combinations of selected jobs are evaluated through the bound LB2. If LB2 ≥ UB, that combination is eliminated. Let UB* be the initial upper bound obtained through some heuristic approach.

Algorithm 2.2.
Step 0. Set ℓ = 1, σ₁ = ρ₁ = ∅, UB = UB*, N = {1}, and the node identity NI(1) = {ℓ, σ₁, ρ₁, π₁} for node 1, where π₁ = J.
Step 1. If N = ∅, then go to step 4; else select node η = d{N}. Set N = N − {η}. Go to step 2.
Step 2. Choose J(1) = {j ∈ π_η : LB1(σ_η j π ρ_η) < UB} and J(2) = {j ∈ π_η : LB1(σ_η π j ρ_η) < UB}, where π = π_η − {j}, LB1(σπρ) is the lower bound for the fixed partial schedules σ and ρ, and π is the set of unscheduled jobs.
Step 3. ∀i ∈ J(1) and j ∈ J(2), j ≠ i: if 2ℓ < n − 2, then, if LB2(σ_η i π j ρ_η) < UB, set NI(η') = {ℓ+1, σ_η i, j ρ_η, π_η'} and N = N + {η'}, where η' is a new node number assigned to this new node and π_η' = π_η − {i, j}, and update the upper bound UB using the heuristic schedule (σ_η i π̂ j ρ_η); else reject this combination. Else, if 2ℓ = n − 2, then UB = min{UB; φ(σ_η i a₁ a₂ j ρ_η); φ(σ_η i a₂ a₁ j ρ_η)} and choose S* such that φ(S*) = UB, where π_η = {i, a₁, a₂, j}; else UB = min{UB; φ(σ_η i a j ρ_η)} and choose S* such that φ(S*) = UB, where π_η = {i, a, j}. Go to step 1.
Step 4. S* is the optimal schedule and UB is the optimum cost. Stop.

2.3.2. Select and branch approach

In this algorithm, the jobs are selected either to be augmented to the front end partial sequence σ or to the tail end partial sequence ρ, depending on the numbers of jobs ℓ1 and ℓ2, respectively, in these partial sequences σ and ρ.

Algorithm 2.3.
Step 0. Set ℓ1 = ℓ2 = 1; σ₁ = ρ₁ = ∅; N = {1}; UB = UB*, and NI(1) = {ℓ1, ℓ2, σ₁, ρ₁, π₁}, where π₁ = J.
Step 1. If N = ∅, then go to step 6; else select node η = d{N}. Set N = N − {η} and π' = π_η. If ℓ1 > ℓ2, then go to step 4; else go to step 2.
Step 2. If π' = ∅, then go to step 1; else choose any j ∈ π', set π' = π' − {j}, and go to step 3.
Step 3. If LB(σ_η j π ρ_η) < UB, then create a new node η' with NI(η') = {ℓ1+1, ℓ2, σ_η j, ρ_η, π_η'} and N = N + {η'}, where η' is the new node and π_η' = π_η − {j}. If ℓ1 + ℓ2 + 1 < n − 2, then go to step 2; else, if ℓ1 + ℓ2 + 1 = n − 2, then let UB = min{UB; φ(σ_η j a₁ a₂ ρ_η); φ(σ_η j a₂ a₁ ρ_η)} and choose S* such that φ(S*) = UB, where π_η = {a₁, a₂, j}; go to step 2. Else let UB = min{UB; φ(σ_η j a ρ_η)} and choose S* such that φ(S*) = UB, where π_η = {a, j}; go to step 2. If LB(σ_η j π ρ_η) ≥ UB, then go to step 2.


Step 4. If π' = ∅, then go to step 1; else choose any j ∈ π', set π' = π' − {j}, and go to step 5.
Step 5. If LB(σ_η π j ρ_η) < UB, then create a new node η' with NI(η') = {ℓ1, ℓ2+1, σ_η, j ρ_η, π_η'}, where π_η' = π_η − {j}, set N = N + {η'}, and update UB as in step 3; in either case repeat step 4.
Step 6. S* is the optimal schedule. Stop.
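A compact runnable sketch of the select and branch idea: jobs are appended alternately to the front sequence σ and the tail sequence ρ, children are pruned with a weak machine-based bound (in the spirit of the bounds developed in Section 3), and the search uses the newest active node rule. The instance, the 0-indexed jobs, and the depth-first stack are illustrative assumptions, not the authors' exact implementation.

```python
def makespan(seq, t, M):
    """Make-span of a complete permutation via the recursion (1)."""
    T = [0] * (M + 1)
    for a in seq:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[a][m - 1]
    return T[M]

def lower_bound(sigma, pi, rho, t, M):
    """Weak bound: elapsed times of sigma, plus all unscheduled work per machine,
    then the tail rho run with those machine-available times."""
    T = [0] * (M + 1)
    for a in sigma:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[a][m - 1]
    for m in range(1, M + 1):
        T[m] += sum(t[j][m - 1] for j in pi)
    for b in rho:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[b][m - 1]
    return T[M]

def select_and_branch(jobs, t, M):
    best_seq = list(jobs)                       # initial upper bound from an arbitrary schedule
    best = makespan(best_seq, t, M)
    stack = [([], list(jobs), [])]              # newest active node rule: depth-first stack
    while stack:
        sigma, pi, rho = stack.pop()
        if not pi:
            cost = makespan(sigma + rho, t, M)
            if cost < best:
                best, best_seq = cost, sigma + rho
            continue
        extend_front = len(sigma) <= len(rho)   # build the schedule from both ends
        for j in pi:
            rest = [k for k in pi if k != j]
            child = (sigma + [j], rest, rho) if extend_front else (sigma, rest, [j] + rho)
            if lower_bound(child[0], child[1], child[2], t, M) < best:
                stack.append(child)             # prune children whose bound reaches the incumbent
    return best_seq, best

t = [[5, 4, 3], [2, 6, 3], [4, 2, 5], [3, 5, 1]]   # hypothetical 4-job, 3-machine instance
seq, cost = select_and_branch([0, 1, 2, 3], t, 3)
print(seq, cost)
```

Because pruning occurs only when the bound reaches the incumbent, no subtree containing a strictly better schedule is ever discarded, so the returned schedule is optimal.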

3. Application of the I-decomposition approach

We have presented three distinct decomposition approaches to solve the permutation scheduling problems in the previous sections. In this section, we discuss the application of the I-decomposition approach to the n/M/P/F_max problem. See [21] and [20] for the application of the C-decomposition and P-decomposition approaches. During the last two decades, considerable attention has been shown in the area of flow shop scheduling. Several optimization and heuristic algorithms have been proposed and compared. Some of the optimization techniques, such as branch and bound, employ a heuristic schedule as the initial upper bound. Anyhow, as pointed out by Baker [4], there is no reported research available to obtain the optimal initial phase. Though decomposition approaches have been proved to be efficient tools to tackle large size mathematical programming problems, not much effort has been shown to develop decomposition approaches to solve flow shop scheduling. In this section, we extend the I-decomposition approach, presented in Section 2, to solve the flow shop scheduling problem without any buffer restriction. In the decomposition approach, a step by step heuristic updating of the best available schedule is employed, which, in turn, was observed to be very effective in solving the flow shop scheduling problem. Briefly, this problem can be stated as: find the order in which the given n jobs should be processed on the M machines, in the same technological order, so that the total elapsed time is a minimum. In general, at each machine there can be n! possible schedules, in a total of (n!)^M schedules on all M machines. Due to various considerations, like technology, etc., it is normally assumed that the jobs are processed in the same order on all the machines. For this simplified version, the permutation flow shop, the number of feasible schedules reduces from (n!)^M to n!.
It should be noted that the optimal solution to the n/M/P/F_max problem will be the same as that of the n/M/F/F_max problem if the number of machines in the flow shop is less than or equal to three. The proof of the above statement can be seen in Conway et al. [7]. The completion time T(σa; m) of a partial schedule σa, obtained by augmenting job a to the partial schedule σ, on machine m is given by the following recursive relation:

T(σa; m) = max{T(σa; m−1), T(σ; m)} + t(a, m), ∀m ∈ M,   (1)

where T(σa; 0) = T(∅; m) = 0, ∀σa and m, and t(a, m) is the processing time of job a on machine m, ∀a ∈ J and m ∈ M. By recursively applying the above relationship, one may derive one of the following equations for the make-span F(S), where S is the schedule (a₁, a₂, …, aₙ):
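As a concrete illustration of the recursion (1), the completion times of a partial schedule can be computed machine by machine; the 3-job, 2-machine instance below is purely hypothetical.

```python
def completion_times(schedule, t, M):
    """T(schedule; m) for m = 1..M via recursion (1); t[a][m-1] is t(a, m)."""
    T = [0] * (M + 1)                  # T[0] = 0 handles the boundary condition
    for a in schedule:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[a][m - 1]
    return T[1:]

t = [[3, 2], [1, 4], [2, 2]]           # hypothetical 3-job, 2-machine instance
print(completion_times([0, 1, 2], t, 2))   # → [6, 11]
```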

F(S) ≜ T(S; M) = max over 1 ≤ m₁ ≤ m₂ ≤ … ≤ m_{M−1} ≤ n of

{ Σ_{i=1}^{m₁} t(a_i, 1) + Σ_{i=m₁}^{m₂} t(a_i, 2) + … + Σ_{i=m_{M−1}}^{n} t(a_i, M) },   (2a)

or

F(S) ≜ T(S; M) = max over 1 ≤ m₁ ≤ m₂ ≤ … ≤ m_{n−1} ≤ M of

{ Σ_{s=1}^{m₁} t(a₁, s) + Σ_{s=m₁}^{m₂} t(a₂, s) + … + Σ_{s=m_{n−1}}^{M} t(aₙ, s) }.   (2b)
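The equivalence between the recursion (1) and the max-over-paths expression (2b) can be checked numerically; the sketch below enumerates the break-points m₁ ≤ … ≤ m_{n−1} by brute force (the instance is made up, and this is only practical for tiny n and M).

```python
from itertools import combinations_with_replacement

def makespan(schedule, t, M):
    """Make-span via the recursion (1)."""
    T = [0] * (M + 1)
    for a in schedule:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[a][m - 1]
    return T[M]

def makespan_path(schedule, t, M):
    """Make-span via the max-over-paths formula (2b): job i occupies machines
    bounds[i]..bounds[i+1], with nondecreasing break-points m_1 <= ... <= m_{n-1}."""
    n = len(schedule)
    best = 0
    for brk in combinations_with_replacement(range(1, M + 1), n - 1):
        bounds = (1,) + brk + (M,)
        total = sum(t[a][s - 1]
                    for i, a in enumerate(schedule)
                    for s in range(bounds[i], bounds[i + 1] + 1))
        best = max(best, total)
    return best

t = [[3, 2], [1, 4], [2, 2]]           # hypothetical instance: t[job][machine]
print(makespan([0, 1, 2], t, 2), makespan_path([0, 1, 2], t, 2))   # → 11 11
```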

It has been pointed out by McMahon and Burton [17] that there exists a dual problem for each flow shop scheduling problem. They have also suggested that, if the processing time structure is such that the solution to the dual problem is easier, one may solve the dual problem. Szwarc [22] has extended this idea and has presented a combinatorial algorithm to build schedules from both ends. Recently Potts [19] has developed a branch and bound algorithm that builds up the schedules from both ends. Extensive computational results indicate that his algorithm is superior to the previously published algorithms. We too found that the decomposition approach developed here, based on the idea of building schedules from both ends and a step by step application of the heuristic updating of upper bounds, required less computational effort than the other existing algorithms. In this paper, we will apply the select and branch I-decomposition approach to the n/M/P/F_max problem. To implement Algorithm 2.3, we need a lower bound LB(σπρ) for the front end partial schedule σ and the tail end partial sequence ρ. This lower bound is such that LB(σπρ) ≤ min_π {T(σπρ; M)}. The following bounds can be easily developed (see the appendix for the proofs). The first lower bound is

LB1(σπρ) = max over 1 ≤ m ≤ m₁ ≤ … ≤ m_{k−1} ≤ M of

{ T(σ; m) + Σ_{j∈π} t(j, m) + Σ_{s=m}^{m₁} t(b₁, s) + Σ_{s=m₁}^{m₂} t(b₂, s) + … + Σ_{s=m_{k−1}}^{M} t(b_k, s) },   (3)

where ρ = (b₁, b₂, …, b_k), ρ and σ are fixed partial sequences, and π is any sequence of the unscheduled jobs c_i, i = 1, 2, …, ℓ. The above is a very weak bound. It should be noted that, as the number of machines increases, the above fails to provide a good bound. But anyhow, due to its low computational effort, the above bound will be used to select the jobs that are suitable to be augmented to the partial schedules σ and ρ already formed from the front and back ends. The acceptable combinations of such augmented jobs will be decided based upon stronger bounds, which we shall present next. Let

T'(σ; m) = max over 1 ≤ u ≤ m of { T(σ; u) + min_{j∈π} Σ_{s=u}^{m−1} t(j, s) },   (4)

where the inner sum is empty for u = m, so that T'(σ; m) ≥ T(σ; m).

Then the next bound is

LB1'(σπρ) = max over 1 ≤ m ≤ m₁ ≤ … ≤ m_k ≤ M of

{ T'(σ; m) + Σ_{j∈π} t(j, m) + min_{j∈π} Σ_{s=m+1}^{m₁} t(j, s) + Σ_{s=m₁}^{m₂} t(b₁, s) + … + Σ_{s=m_k}^{M} t(b_k, s) }.   (5)
In the above two bounds, the total span of the unscheduled jobs j ∈ π was taken as the sum of their processing times on each machine; as one would expect, Σ_{j∈π} t(j, s) is independent of the order of the jobs in π, ∀s. So the above bounds can be described as 'single machine schedule' bounds. They can be strengthened by introducing a 'two machine schedule' bound. Since it is easy to determine the make-span


of the two machine schedule using Johnson's rule, the following bounds can be used effectively. Let S_m^π = (c₁^m, c₂^m, …, c_ℓ^m) be the Johnson's two machine schedule for the jobs in π at machines m and m+1. That is,

min{t(c_i^m, m), t(c_{i+1}^m, m+1)} ≤ min{t(c_i^m, m+1), t(c_{i+1}^m, m)}.

Then the lower bound is

LB2'(σπρ) = max over 1 ≤ m < M and m+1 ≤ m₁ ≤ … ≤ m_{k−1} ≤ M of

{ T_J(σπ; m+1) + Σ_{s=m+1}^{m₁} t(b₁, s) + … + Σ_{s=m_{k−1}}^{M} t(b_k, s) },   (6)

where T_J(σπ; m+1) is the completion time, on machine m+1, of the Johnson schedule S_m^π = (c₁^m, …, c_ℓ^m) processed on machines m and m+1 with the machine available times T'(σ; m) and T'(σ; m+1).
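Johnson's two machine rule referred to above has a simple direct implementation; the instance below is hypothetical and jobs and machines are 0-indexed.

```python
def johnson(jobs, t, m1, m2):
    """Johnson's rule on the (0-indexed) machine columns m1 and m2 of t:
    jobs with t[j][m1] <= t[j][m2] first, in nondecreasing t[j][m1];
    the rest last, in nonincreasing t[j][m2]."""
    first = sorted((j for j in jobs if t[j][m1] <= t[j][m2]), key=lambda j: t[j][m1])
    last = sorted((j for j in jobs if t[j][m1] > t[j][m2]), key=lambda j: -t[j][m2])
    return first + last

t = [[3, 2], [1, 4], [4, 4], [2, 1]]    # hypothetical t[job][machine]
print(johnson([0, 1, 2, 3], t, 0, 1))   # → [1, 2, 0, 3]
```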

Since it requires considerable computational effort to estimate the above bound, the following simplification will be made. Our limited computational experience showed that this simplification results in a better overall performance than the above bound. The definition of the critical machine set, which will be used to simplify the above bound, is presented next.

Definition 1 (Critical machine set). Let

C(S_m; M) = min_{j∈J} Σ_{s=1}^{m−1} t(j, s) + max over 1 ≤ u ≤ n of { Σ_{i=1}^{u} t(a_{im}, m) + Σ_{i=u}^{n} t(a_{im}, m+1) } + min_{j∈J} Σ_{s=m+2}^{M} t(j, s),

where S_m = (a_{1m}, a_{2m}, …, a_{nm}) is the Johnson's schedule for the jobs in J on machines m and m+1. If

C(S_m̄; M) = max over 1 ≤ m < M of {C(S_m; M)},

then {m̄, m̄+1} is defined as the critical machine set.
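Definition 1 can be sketched directly: for each machine pair, a Johnson schedule is built, its two machine make-span is evaluated, and the minimal head and tail work are added; the pair maximizing C(S_m; M) is the critical set. The instance is made up, and the 0-indexed data layout is an assumption of this sketch.

```python
def johnson(jobs, t, m1, m2):
    first = sorted((j for j in jobs if t[j][m1] <= t[j][m2]), key=lambda j: t[j][m1])
    last = sorted((j for j in jobs if t[j][m1] > t[j][m2]), key=lambda j: -t[j][m2])
    return first + last

def two_machine_makespan(seq, t, m1, m2):
    """Make-span of seq on the two machines m1, m2 alone."""
    T1 = T2 = 0
    for j in seq:
        T1 += t[j][m1]
        T2 = max(T1, T2) + t[j][m2]
    return T2

def critical_machine_set(jobs, t, M):
    """C(S_m; M) = minimal head work before machine m + Johnson make-span on (m, m+1)
    + minimal tail work after machine m+1; the maximizing pair is the critical set."""
    best_pair, best_c = None, -1
    for m in range(1, M):                        # pairs (m, m+1), machines 1-indexed
        head = min(sum(t[j][s] for s in range(m - 1)) for j in jobs)
        tail = min(sum(t[j][s] for s in range(m + 1, M)) for j in jobs)
        seq = johnson(jobs, t, m - 1, m)
        c = head + two_machine_makespan(seq, t, m - 1, m) + tail
        if c > best_c:
            best_pair, best_c = (m, m + 1), c
    return best_pair, best_c

t = [[3, 2, 4], [1, 4, 2], [4, 4, 3], [2, 1, 3]]   # hypothetical t[job][machine]
print(critical_machine_set([0, 1, 2, 3], t, 3))     # → ((1, 2), 14)
```

By Theorem 3.1, the returned value of C is a lower bound on the minimum make-span of the full M-machine problem.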

Now the bound presented above can be evaluated with respect to the critical machine set {m̄, m̄+1} only. Then we have

LB2(σπρ) = max over m̄+1 ≤ m₁ ≤ … ≤ m_{k−1} ≤ M of

{ T_J(σπ; m̄+1) + Σ_{s=m̄+1}^{m₁} t(b₁, s) + … + Σ_{s=m_{k−1}}^{M} t(b_k, s) };   (7)

that is, bound (6) evaluated only at the critical pair m = m̄.

The estimation of the above bound will not be of much computational burden, because the sequence S_m̄^π for the job set π can be updated for every π without much computational effort. Apart from this, at every node the sequence (σ S_m̄^π ρ) can be treated as a heuristic rule and the upper bound for the optimal schedule can be updated. The following theorem provides a lower bound for the minimum make-span.

Theorem 3.1. Let C(S*; M) be the value of C(S_m̄; M) obtained according to Definition 1. Then

C(S*; M) ≤ min_S { T(S; M) }.


Proof. Consider the make-span T(S; M) for the schedule S = (a₁, a₂, …, aₙ):

T(S; M) = max over 1 ≤ m₁ ≤ … ≤ m_{n−1} ≤ M of { Σ_{s=1}^{m₁} t(a₁, s) + Σ_{s=m₁}^{m₂} t(a₂, s) + … + Σ_{s=m_{n−1}}^{M} t(aₙ, s) }.

Restricting the maximization to m₁ ≥ m̄ and m_{n−1} ≤ m̄+1, we have

T(S; M) ≥ Σ_{s=1}^{m̄−1} t(a₁, s) + max over 1 ≤ u ≤ n of { Σ_{i=1}^{u} t(a_i, m̄) + Σ_{i=u}^{n} t(a_i, m̄+1) } + Σ_{s=m̄+2}^{M} t(aₙ, s).

Therefore,

min_S { T(S; M) } ≥ min_{j∈J} Σ_{s=1}^{m̄−1} t(j, s) + max over 1 ≤ u ≤ n of { Σ_{i=1}^{u} t(a_{im̄}, m̄) + Σ_{i=u}^{n} t(a_{im̄}, m̄+1) } + min_{j∈J} Σ_{s=m̄+2}^{M} t(j, s),

where S_m̄ = (a_{1m̄}, a_{2m̄}, …, a_{nm̄}) is the Johnson's schedule on machines m̄ and m̄+1, which minimizes the middle (two machine make-span) term. That is,

min_S { T(S; M) } ≥ C(S*; M).  □

Therefore, if the make-span T(S*; M) for the schedule S* is equal to C(S*; M), it is clear that S* is an optimal schedule. Similar, but stronger, optimality conditions can be derived for special structure problems (see [20]).

3.1. Computational procedures to estimate the bounds

1. Bound 1 (equation (3)):

LB1 = max over 1 ≤ m ≤ m₁ ≤ … ≤ m_{k−1} ≤ M of

{ T(σ; m) + Σ_{j∈π} t(j, m) + Σ_{s=m}^{m₁} t(b₁, s) + … + Σ_{s=m_{k−1}}^{M} t(b_k, s) }.

For the selected partial schedule σ, compute the elapsed time on all machines using the recursive relation (1). Sum the processing times of all jobs in π on each machine and add these to the elapsed time at each corresponding machine. That is, we have estimated

T(σ; m) + Σ_{j∈π} t(j, m), ∀1 ≤ m ≤ M.

Now, using these values as the machine available times, estimate the elapsed time for the partial schedule ρ. The elapsed time thus obtained on machine M is the lower bound LB1.
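The Bound 1 procedure above can be sketched as follows (the partial sequences and the instance are illustrative; jobs and machines are 0-indexed):

```python
def lb1(sigma, pi, rho, t, M):
    """Bound 1: elapsed times of sigma via recursion (1), plus the total work of the
    unscheduled jobs pi on each machine, then the tail rho run with those
    machine-available times; the value on machine M is LB1."""
    T = [0] * (M + 1)
    for a in sigma:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[a][m - 1]
    for m in range(1, M + 1):
        T[m] += sum(t[j][m - 1] for j in pi)
    for b in rho:
        for m in range(1, M + 1):
            T[m] = max(T[m - 1], T[m]) + t[b][m - 1]
    return T[M]

t = [[3, 2, 4], [1, 4, 2], [4, 4, 3], [2, 1, 3]]   # hypothetical t[job][machine]
print(lb1([0], [1, 2], [3], t, 3))                  # → 17
```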


2. Bound 2 (equation (7)):

LB2 = max over m̄+1 ≤ m₁ ≤ … ≤ m_{k−1} ≤ M of

{ T_J(σπ; m̄+1) + Σ_{s=m̄+1}^{m₁} t(b₁, s) + … + Σ_{s=m_{k−1}}^{M} t(b_k, s) }.

The two machine schedules for machines m and m+1, ∀1 ≤ m < M, must be obtained using Johnson's rule. Using Definition 1, the critical machine set {m̄, m̄+1} can be determined. Now let S = (a₁, a₂, …, aₙ) be the Johnson's schedule on m̄ and m̄+1. Eliminate the jobs j (all j ∈ σ and ρ) from the schedule S without altering the relative positions of the other jobs, and let the result be π_J = (c₁, c₂, …, c_ℓ). Estimate the elapsed time of the partial sequence σ on all machines m using the recursive relation (1). Estimate the sum Σ_{j∈π} t(j, m), ∀m ≠ m̄, m̄+1. Add these to the elapsed time of the partial sequence σ on all machines m ≠ m̄, m̄+1 and let the result be T'(σπ; m). Now, using T(σ; m̄) and T(σ; m̄+1) as the machine available times, estimate the elapsed time of the sequence π_J on the two machines m̄ and m̄+1 and let them be T'(σπ; m̄) and T'(σπ; m̄+1), respectively. Finally, estimate the make-span of the schedule ρ using recursive relation (1) and the machine available times T'(σπ; m), ∀m ∈ M. The make-span thus evaluated is the lower bound LB2.

In the decomposition algorithm, the multi-sort heuristic algorithm due to Gupta [11] will be used as the initial phase. This algorithm, along with Palmer's [18] and Campbell, Dudek, and Smith's [6] heuristic rules, was tested as the initial phase for the decomposition algorithm, and it was observed that Gupta's heuristic algorithm performed comparatively better than the others. The heuristic algorithm due to Gupta [11] is presented below.

Algorithm 3.1. Heuristic Algorithm (Gupta). Let

g(a) = sign{t(a, 1) − t(a, M)} / min over 1 ≤ m ≤ M−1 of {t(a, m) + t(a, m+1)},

where

sign(A − B) = −1 if A < B,  +1 if A ≥ B.

Step 1. Calculate g(a) for each job a using the above equation.
Step 2. Arrange the jobs in ascending order of the g(a) values, breaking ties in favor of jobs with the smallest sum of processing times on all machines.
Step 3. Calculate the make-span of the above schedule. Treat this as the initial best solution.

Now we shall describe the decomposition algorithm which, after considerable computational experimentation, was found to be computationally better than the existing branch and bound and combinatorial algorithms. Some results of these computational experiments can be seen in Section 4.

Algorithm 3.2. I-Decomposition Algorithm.
Step 0 (Initialization). Let UB be the make-span of the schedule S* obtained through some heuristic algorithm. Find the critical machine set {m̄, m̄+1} and the lower bound C(S_m̄; M) of the optimal schedule. If C(S_m̄; M) = UB, then go to step 5; else enter node 1 in the node list, set the bound LB = 0, and let ℓ1 = ℓ2 = 1. Also initialize the partial sequences σ₁ and ρ₁ empty; that is, σ₁ = ρ₁ = ∅. Enter step 1.
Step 1 (Active node selection). If there is no active node left in the node list, then go to step 5; else select the active node η with the lowest bound. In case of ties, select the node with the higher value of ℓ1 + ℓ2.


If ties occur again, then break the tie arbitrarily. Remove this node from the node list and, IF ζ1 = ζ2, THEN go to Step 2; ELSE go to Step 3.

Step 2 (Front-end job selection). Compute the lower bound ℓ_a for the sequence (σ_v a π ρ_v), where a ∉ σ_v ∪ ρ_v, using any one of the bounds discussed earlier. IF ℓ_a < U*, THEN create a new node, enter it in the node list, let the corresponding value of ζ1 = ζ1 + 1, and let the partial sequences be σ_v a and ρ_v for that new node, and continue; ELSE continue. (Repeat Step 2 for every a ∉ σ_v ∪ ρ_v, and go to Step 1.)

Step 3 (Back-end job selection). Compute the lower bound ℓ_a for the sequence (σ_v π a ρ_v), where a ∉ σ_v ∪ ρ_v, using any one of the bounds discussed earlier. Note that the fixed partial sequences are σ_v and a ρ_v in this case. IF ℓ_a ≥ U*, THEN repeat Step 3 for every a ∉ σ_v ∪ ρ_v and go to Step 1; ELSE create a new node v', enter it in the node list, and let the corresponding values be ζ2 = ζ2 + 1 and ζ1 = ζ1; also let the corresponding partial sequences be σ_v and a ρ_v for the new node v'. IF the new value of ζ1 + ζ2 < n − 2, THEN go to Step 4; ELSE, IF ζ1 + ζ2 = n − 2, THEN compute the schedule cost (makespan) for the sequences (σ_v a1 a2 ρ_{v'}) and (σ_v a2 a1 ρ_{v'}), where π = {a1, a2} is the set of the two remaining jobs, and choose the best schedule S out of these two; ELSE compute the makespan of the schedule S = σ_v π ρ_{v'}. IF the schedule S has a schedule cost less than U*, THEN set S* = S and U* = T(S; M). IF U* = C(S_{v'}; M), THEN go to Step 5; ELSE repeat Step 3 for every a ∉ σ_v ∪ ρ_v and go to Step 1.

Step 4 (Intermediate heuristic updating). Find the Johnson schedule σ_J for the jobs j ∈ π on the critical machine set {m̂, m̂ + 1}. Obtain the cost T(S; M) for the schedule S = (σ_v σ_J a ρ_v). IF T(S; M) < U*, THEN set S* = S and U* = T(S; M); and IF C(S_{v'}; M) = U*, THEN go to Step 5; ELSE repeat Step 3 for every a ∉ σ_v ∪ ρ_v and go to Step 1.

Step 5 (Final sequence). We have obtained the optimal solution. The present upper bound U* is the optimum makespan and S* is the optimal schedule. Stop.
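Step 4 relies on Johnson's two-machine rule [14] for the critical machine pair. The paper gives no code, so the following Python sketch (the function name and argument layout are our own) shows one way to build Johnson's sequence for a pair of machines:

```python
def johnson_sequence(jobs, t1, t2):
    """Johnson's rule for a two-machine flow shop.

    jobs  -- list of job identifiers
    t1[j] -- processing time of job j on the first machine
    t2[j] -- processing time of job j on the second machine
    Returns a sequence minimizing the two-machine makespan.
    """
    # Jobs faster on the first machine go first, in nondecreasing order
    # of t1; the remaining jobs go last, in nonincreasing order of t2.
    front = sorted((j for j in jobs if t1[j] < t2[j]), key=lambda j: t1[j])
    back = sorted((j for j in jobs if t1[j] >= t2[j]), key=lambda j: -t2[j])
    return front + back
```

For example, with t1 = {a: 3, b: 1, c: 6} and t2 = {a: 4, b: 2, c: 5}, the rule yields the sequence (b, a, c).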

4. Some computational results and extensions

Here we give the computational performance of the Decomposition Algorithm 3.2 discussed in Section 3. The proposed Algorithm 3.2 was tested against the branch and bound algorithm. In the implementation of the branch and bound algorithm, for a fixed partial schedule σ, the lower bounds ℓ_σ used for the partial schedule σ are as follows:

(1)
$$\ell_\sigma^{1} = \max_{1 \le m \le M} \Big\{ T'(\sigma; m) + \sum_{j \in \pi} t(j, m) + \min_{j \in \pi} \sum_{s=m+1}^{M} t(j, s) \Big\},$$

(2)
$$\ell_\sigma^{2} = \max_{1 \le m < m' \le M} \Big\{ T'(\sigma; m) + \min_{j \in \pi} \sum_{s=m}^{m'-1} t(j, s) + \sum_{k \in \pi} t(k, m') + \min_{j \in \pi} \sum_{s=m'+1}^{M} t(j, s) \Big\},$$

where

$$T'(\sigma; m) = \max\Big\{ T(\sigma; m),\ \max_{1 \le m' < m} \Big[ T'(\sigma; m') + \min_{j \in \pi} \sum_{s=m'}^{m-1} t(j, s) \Big] \Big\}$$

for all m > 1, and T'(σ; m) = T(σ; m) for m = 1, with π = J − σ. The processing times for the test problems were generated from a uniform distribution between 1 and 30. The average execution time in CPU seconds on an IBM 360/370 is given in Table 4.1. The computational results clearly show that the decomposition approach performed well for the problems tested. But it does not exclude the possibility of identifying problem structures which might give a
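The completion-time recursion T(σj; m) = max{T(σj; m − 1), T(σ; m)} + t(j, m) and the machine-based lower bound can be written out directly. Below is a Python sketch (the function names and the dict-of-lists data layout are our own; for simplicity it uses the plain head times T(σ; m) rather than the strengthened T'(σ; m)):

```python
def completion_times(seq, t, M):
    """T(seq; m) for machines m = 0..M-1, via the recursion
    T(seq+j; m) = max(T(seq+j; m-1), T(seq; m)) + t[j][m]."""
    T = [0] * M
    for j in seq:
        for m in range(M):
            # T[m-1] already holds job j's completion on machine m-1;
            # T[m] still holds the previous job's completion on machine m.
            T[m] = max(T[m - 1] if m > 0 else 0, T[m]) + t[j][m]
    return T

def machine_bound(sigma, unscheduled, t, M):
    """Machine-based lower bound: for each machine m, the head time
    plus the total unscheduled work on m plus the cheapest tail."""
    T = completion_times(sigma, t, M)
    best = 0
    for m in range(M):
        work = sum(t[j][m] for j in unscheduled)
        tail = min(sum(t[j][s] for s in range(m + 1, M)) for j in unscheduled)
        best = max(best, T[m] + work + tail)
    return best
```

The bound is valid because every unscheduled job must be processed on machine m after T(σ; m), and the last of them still needs at least the cheapest tail of work on machines m + 1, …, M.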


Table 4.1. Some computational results for the branch and bound and decomposition algorithms

Number    Number of   Number of        Average execution time in seconds
of jobs   machines    test problems    Branch and bound   Branch and bound     Decomposition
                                       (Bound 1)          (Bounds 1 and 2)     Algorithm 3.2
4         3           10               0.039              0.038                0.036
5         3           10               0.103              0.092                0.055
6         3           10               0.336              0.342                0.101
7         3           10               1.328              0.864                0.170
4         4           10               0.094              0.093                0.075
5         4           10               0.124              0.154                0.108
6         4           10               0.972              0.860                0.242
7         4           10               1.448              2.120                0.390
5         5           10               0.667              0.416                0.193

negative result. Although the decomposition algorithm in general seems to perform better than the traditional branch and bound algorithm, it was decided to test the capability of this algorithm for large scheduling problems. A program for Algorithm 3.2 was written in PASCAL on a VAX-11/780 interactive computing system. This computing system is designed to provide a virtual memory, multiprogramming environment by using its central processor and the VAX/VMS operating system. The system has 4 million bytes of ECC MOS memory; however, only 7200 bytes are required for our program. The set-up time needed for the program is negligible, while the CPU time to obtain the optimal solution even for the 20 job permutation scheduling problem is noted to be large. Indeed, for one such problem, Algorithm 3.2 required more than 8 hours in the interactive mode before obtaining an optimal schedule. Hence two heuristic algorithms were developed based on Algorithm 3.2. They are:

Algorithm 4.1. This algorithm is the same as Algorithm 3.2, except that here nodes with 2m scheduled jobs are eliminated from the node list for some m. If we set m > ½n, we would then get the same schedule as that obtained from Algorithm 3.2.

Algorithm 4.2. Algorithm 4.2 is a modification of Algorithm 3.2. Here a depth-first search is carried out and the algorithm is terminated as soon as the search reaches the bottom for the first time.

Algorithms 4.1 and 4.2 are implemented on a VAX-11/780 interactive computing system using the PASCAL programming language; 7200 bytes of memory are required for each of these two programs. Using the PASCAL programming language, we have compared the efficiency of these programs with respect to different data structures. We found that in storing the node information and in searching for a particular node in the branch and bound process for Algorithms 3.2 and 4.1, using a linked data structure is much better than using array structures. This superiority is observed with respect to both the storage requirements and the CPU time requirements for running the program. Besides, when using a linked list structure, it is not required to specify the maximum node number. However, if an array structure is used for the node information, one needs to specify the maximum node number. This requires one to guess the maximum node number in advance and may force the program to terminate before obtaining an optimal schedule. It is recommended that the linked list data structure be used for these algorithms.

The comparisons of the test results for the Algorithms 3.2, 4.1, and 4.2 in terms of CPU time and makespan obtained for various cases are shown in Tables 4.2, 4.3 and 4.4. The processing times for the test problems were generated from a uniform distribution.
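The node bookkeeping described above does not depend on Pascal linked lists; any structure that supports unbounded insertion and retrieval of the most promising node will do. As a modern stand-in for the linked list the authors recommend, the following Python sketch (all names are our own) keeps the active nodes in a heap ordered by lower bound:

```python
import heapq

class NodeList:
    """Active nodes of the double-ended search, ordered by lower bound.
    Each node stores the front partial sequence sigma, the back partial
    sequence rho, and the lower bound used to rank it."""

    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker: FIFO among nodes with equal bounds

    def push(self, bound, sigma, rho):
        heapq.heappush(self._heap, (bound, self._count, sigma, rho))
        self._count += 1

    def pop_best(self):
        bound, _, sigma, rho = heapq.heappop(self._heap)
        return bound, sigma, rho
```

With a heap, selecting the node of least lower bound (ties broken by insertion order, as in Step 1) costs O(log n) per operation, and, as with the linked list, no maximum node count has to be fixed in advance, which was the drawback of the array version.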


Table 4.2. Case 1: 5 machines / 5 jobs

Test       Algorithm 3.2            Algorithm 4.1 (m = 1)    Algorithm 4.2
numbers    Makespan   CPU (sec.)    Makespan   CPU (sec.)    Makespan   CPU (sec.)
1          63         20.04         65         2.67          63         3.96
2          71         9.06          72         2.68          72         3.78
3          66         18.57         69         2.80          69         3.84
4          62         4.90          62         2.79          64         3.84
5          72         8.64          72         2.64          72         3.79
6          62         7.08          64         2.72          62         3.86
7          54         6.26          59         2.77          57         3.82
8          50         7.70          54         2.91          57         3.77
9          50         7.70          54         2.91          57         3.77
10         60         4.15          60         2.70          67         3.85
11         61         3.94          61         2.70          61         3.96
12         46         7.08          48         2.92          48         3.86
13         53         5.37          55         2.58          56         3.85
14         61         8.85          71         2.91          71         3.91
15         48         5.77          53         2.66          43         3.96
16         50         3.91          50         2.64          50         3.91
17         57         7.45          60         2.70          60         3.89
18         60         6.78          63         2.62          63         4.16
19         55         15.41         55         2.84          55         4.19
20         55         5.55          56         2.55          56         3.96
Average    58.20      10.83         60.40      2.72          60.85      3.90
Std. dev.  7.08       12.21         7.03       0.11          7.01       0.11

Table 4.3. Case 2: 4 machines / 10 jobs a

Test       Algorithm 4.1 (m = 2)    Algorithm 4.1 (m = 3)    Algorithm 4.2            Gupta
numbers    Makespan   CPU (sec.)    Makespan   CPU (sec.)    Makespan   CPU (sec.)    Makespan
1          69         11.02         67         14.50         67         18.22         .
2          78         15.27         78         54.80         .          .             .
3          80         10.69         80         13.56         .          .             .
4          83         23.91         82         29.82         89         17.42         .
5          91         16.92         85         3:09.38       87         .             .
Average    80.20      15.56         78.40      60.41         81.80      16.92         .
Std. dev.  7.98       5.38          6.88       74.00         8.93       1.8           .

a Algorithm 3.2 has been tested, but it required more than 8 hours to get an optimal solution in some cases. Hence we did not run Algorithm 3.2 for all five cases.

Table 4.4. Case 3: 3 machines / 20 jobs a

Algorithm 4.1 m=4

Al~or~t ~m m= 5

m = 6

m = 7

Mp h

CPU "

Mp

('PU

1"14.62 1:46.81 .

118 156 .

I 1"~62.

1~"...,_

4 5

118 158 . 142 138 t36

0:56.55 1:09.97

136 136

1:30.48 115 2:04.30 . 156 ;. 2C,.~': f, '~ ..,,-, . 1:12.36 136 1:25.87 136

A,,erage Sld o¢ ' x.

138.40 14 1/

116.25 18 54

137.60 1~.. 67

1:31.°7 137.(J0 1:4',';.57 !3().20 19 ... "~'¢ 1,1..76 21 :,g "" I6 .... ~'

1 ."~ .'~

Mp

.

,n-=8

(~'PU

Mp

('PU

1:,19.71 "~-~528 .1 • "u). ¢~',. 1.=.8.64 l"4_._9 "~'~

111 156

.2,

Algorithm 3.2 not Ii.,,tcd due to ~,.:une rea.,,on a,, In Tabt.,: 4. ~,. ~' hap = Makespan. ( ' P U --- CPU time in {minute:~econd).

t_~"

~3~, 136 .

,kip

42

(iupta

CPt.

.kip

(PL

Xip

2:07.00 111 .~~46.95 156 1'"',, ~t,,, 1. -,_+" t .:b).~*, t~6 2:10.81 134

2.32.13 3:07.35 2!5as, ~.1(;.~[

120 165 l:, ~ 1.+9

t:3439 1:35.'~9 1 35 ,iJ ] :3(~. :,~.

121 165 ~..;" is-:,

"19.45

!35

i .;~; ,i

~ ",;.

2:10.29 _. "" I()

2.29.c24

"40.(;'12 I ~ 6 K,

135'{~; !6.32

..a

.

"-,

.

!4~.Zi


4.1. Discussion and conclusion on computational results

The computational results show that Algorithm 3.2 is efficient in looking for an optimal solution for small permutation flow shop problems (see Table 4.2 for problems of size 5 jobs / 5 machines). Even in these cases we noted that the variance of the CPU times required to obtain the optimal solution is high compared to its mean value. This indicates the possible danger of using this algorithm without a proper cutoff. As pointed out earlier, in one case with 4 machines and 10 jobs we had to be on the computer for 8 hours in interactive mode before obtaining an optimal sequence. Observe that the standard deviations of the CPU time required by Algorithms 4.1 and 4.2 are very small. Further, the average of the makespans obtained by Algorithm 4.1 is reasonably close to the average of the optimal makespans. However, the average CPU time required by Algorithm 4.1 is much smaller than the CPU time required by Algorithm 3.2. Hence we recommend Algorithm 4.1 as a practical alternative to algorithms developed to obtain optimal schedules. The value of m should be selected such that a proper trade-off between the CPU time and the makespan of the final schedule is obtained. Algorithm 4.2 is less time consuming than Algorithm 4.1 when m > ½n. However, the results for this algorithm in general are not as good as those of Algorithm 4.1.

Appendix

In this appendix, we prove the validity of the lower bounds given by equations (3) and (5) to (7).

Theorem A1 (for equation (3)).

$$\min_\pi \{ T(\sigma\pi\rho; M) \} \ge \max_{1 \le m \le m_1 \le \cdots \le m_{\zeta-1} \le M} \Big\{ T(\sigma; m) + \sum_{i=1}^{\xi} t(c_i, m) + \sum_{s=m}^{m_1} t(b_1, s) + \cdots + \sum_{s=m_{\zeta-1}}^{M} t(b_\zeta, s) \Big\},$$

where ρ = (b_1, b_2, …, b_ζ), ρ and σ are fixed partial sequences, π is any sequence of the jobs c_i, i = 1, 2, …, ξ, and σ, ρ and π are mutually exclusive with ρ ∪ σ ∪ π = J.

Proof. The repeated use of equation (1) will yield

$$T(\sigma\pi\rho; M) = \max_{1 \le u_0 \le u_1 \le \cdots \le u_{\xi+\zeta-1} \le M} \Big\{ T(\sigma; u_0) + \sum_{s=u_0}^{u_1} t(c_1', s) + \cdots + \sum_{s=u_{\xi-1}}^{u_\xi} t(c_\xi', s) + \sum_{s=u_\xi}^{u_{\xi+1}} t(b_1, s) + \cdots + \sum_{s=u_{\xi+\zeta-1}}^{M} t(b_\zeta, s) \Big\},$$

where π = (c_1', c_2', …, c_ξ'). Now, restricting the maximization to u_0 = u_1 = … = u_ξ = m and writing m_i = u_{ξ+i}, and since \(\sum_{i=1}^{\xi} t(c_i', m) = \sum_{i=1}^{\xi} t(c_i, m)\) for every m ≤ M, we get

$$T(\sigma\pi\rho; M) \ge T(\sigma; m) + \sum_{i=1}^{\xi} t(c_i, m) + \sum_{s=m}^{m_1} t(b_1, s) + \cdots + \sum_{s=m_{\zeta-1}}^{M} t(b_\zeta, s) \quad (A1)$$

for any 1 ≤ m ≤ m_1 ≤ … ≤ m_{ζ−1} ≤ M. Since the right-hand side of (A1) is independent of the sequence π, we have

$$\min_\pi \{ T(\sigma\pi\rho; M) \} \ge \max_{1 \le m \le m_1 \le \cdots \le m_{\zeta-1} \le M} \Big\{ T(\sigma; m) + \sum_{i=1}^{\xi} t(c_i, m) + \sum_{s=m}^{m_1} t(b_1, s) + \cdots + \sum_{s=m_{\zeta-1}}^{M} t(b_\zeta, s) \Big\}. \quad \square$$
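Theorem A1 can be spot-checked numerically: enumerate all orders π of the free jobs, compute the true makespan by the recursion (1), and compare with the right-hand side of (3). A Python sketch (helper names and the instance are our own; for brevity the back partial sequence is a single job, i.e. ζ = 1, so no m_i indices are needed):

```python
from itertools import permutations

def T(seq, t, M):
    """Completion times of seq on machines 0..M-1 (recursion (1))."""
    C = [0] * M
    for j in seq:
        for m in range(M):
            C[m] = max(C[m - 1] if m > 0 else 0, C[m]) + t[j][m]
    return C

def bound_A1(sigma, free, b, t, M):
    """Right-hand side of (3) for a back partial sequence rho = (b):
    max over m of T(sigma; m) plus the free-job work on machine m
    plus job b's work on machines m..M-1."""
    C = T(sigma, t, M)
    return max(C[m] + sum(t[j][m] for j in free)
               + sum(t[b][s] for s in range(m, M))
               for m in range(M))

# Check: the bound never exceeds the makespan of any completion.
t = {0: [2, 3, 1], 1: [4, 1, 2], 2: [1, 2, 5], 3: [3, 2, 2]}
M, sigma, free, b = 3, [0], [1, 2], 3
lb = bound_A1(sigma, free, b, t, M)
best = min(T(sigma + list(p) + [b], t, M)[M - 1]
           for p in permutations(free))
assert lb <= best
```

On this instance the bound evaluates to 15 while the best completion has makespan 16, so the inequality of Theorem A1 holds with a gap of one.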

Theorem A2 (for equation (5)).

$$\min_\pi \{ T(\sigma\pi\rho; M) \} \ge \max_{1 \le m \le m_1 \le \cdots \le m_\zeta \le M} \Big\{ T'(\sigma; m) + \sum_{i \in \pi} t(i, m) + \min_{j \in \pi} \sum_{s=m+1}^{m_1} t(j, s) + \sum_{s=m_1}^{m_2} t(b_1, s) + \cdots + \sum_{s=m_\zeta}^{M} t(b_\zeta, s) \Big\}.$$
Proof. Repeated use of (1) for the sequence (σπρ) will yield

$$T(\sigma\pi\rho; M) = \max_{1 \le u_0 \le u_1 \le \cdots \le u_{\xi+\zeta-1} \le M} \Big\{ T(\sigma; u_0) + \sum_{s=u_0}^{u_1} t(c_1', s) + \cdots + \sum_{s=u_{\xi-1}}^{u_\xi} t(c_\xi', s) + \sum_{s=u_\xi}^{u_{\xi+1}} t(b_1, s) + \cdots + \sum_{s=u_{\xi+\zeta-1}}^{M} t(b_\zeta, s) \Big\}. \quad (A2)$$

If we set u_0 = u_1 = … = u_{ξ−1} = m, u_ξ = m_1 and u_{ξ+i} = m_{i+1} for i = 1, …, ζ − 1, then from (A2) we get

$$T(\sigma\pi\rho; M) \ge T(\sigma; m) + \sum_{i \in \pi} t(i, m) + \sum_{s=m+1}^{m_1} t(c_\xi', s) + \sum_{s=m_1}^{m_2} t(b_1, s) + \cdots + \sum_{s=m_\zeta}^{M} t(b_\zeta, s).$$

Since the last job c_ξ' of π is not known, replacing its tail by the smallest tail over all jobs of π gives

$$T(\sigma\pi\rho; M) \ge T(\sigma; m) + \sum_{i \in \pi} t(i, m) + \min_{j \in \pi} \sum_{s=m+1}^{m_1} t(j, s) + \sum_{s=m_1}^{m_2} t(b_1, s) + \cdots + \sum_{s=m_\zeta}^{M} t(b_\zeta, s). \quad (A3)$$

From (4) and (A3), T(σ; m) may be replaced by T'(σ; m); since the right-hand side is then independent of the sequence π, we have

$$\min_\pi \{ T(\sigma\pi\rho; M) \} \ge \max_{1 \le m \le m_1 \le \cdots \le m_\zeta \le M} \Big\{ T'(\sigma; m) + \sum_{i \in \pi} t(i, m) + \min_{j \in \pi} \sum_{s=m+1}^{m_1} t(j, s) + \sum_{s=m_1}^{m_2} t(b_1, s) + \cdots + \sum_{s=m_\zeta}^{M} t(b_\zeta, s) \Big\}. \quad \square$$

Theorem A3 (for equation (7)).

$$\min_\pi \{ T(\sigma\pi\rho; M) \} \ge \max_{\substack{1 \le m < M \\ m+1 \le m_1 \le \cdots \le m_{\zeta-1} \le M}} \Big\{ T_J(\sigma; m, m+1) + \sum_{s=m+1}^{m_1} t(b_1, s) + \cdots + \sum_{s=m_{\zeta-1}}^{M} t(b_\zeta, s) \Big\},$$

where T_J(σ; m, m + 1) is the completion time on machine m + 1 of Johnson's schedule π_J of the jobs in π for the two-machine flow shop formed by machines m and m + 1, with machine availability times T(σ; m) and T(σ; m + 1).
Proof. Substituting u_i ∈ {m, m + 1} for i = 0, 1, …, ξ in equation (A2), i.e., restricting the maximization to paths on which every job of π is processed along machines m and m + 1 only, and writing m_i = u_{ξ+i}, we get

$$T(\sigma\pi\rho; M) \ge C_{(m, m+1)}(\pi) + \sum_{s=m+1}^{m_1} t(b_1, s) + \cdots + \sum_{s=m_{\zeta-1}}^{M} t(b_\zeta, s),$$

where C_{(m, m+1)}(π) denotes the makespan of π on the two-machine flow shop formed by machines m and m + 1 with machine availability times T(σ; m) and T(σ; m + 1). Since π_J is the Johnson schedule for machines m and m + 1 [14], we have C_{(m, m+1)}(π) ≥ C_{(m, m+1)}(π_J) = T_J(σ; m, m + 1) for every sequence π, and therefore

$$\min_\pi \{ T(\sigma\pi\rho; M) \} \ge \max_{\substack{1 \le m < M \\ m+1 \le m_1 \le \cdots \le m_{\zeta-1} \le M}} \Big\{ T_J(\sigma; m, m+1) + \sum_{s=m+1}^{m_1} t(b_1, s) + \cdots + \sum_{s=m_{\zeta-1}}^{M} t(b_\zeta, s) \Big\}. \quad \square$$

References

[1] Ashour, S., "A decomposition approach for the machine scheduling problem", International Journal of Production Research 6 (1967) 109-122.
[2] Ashour, S., "An experimental investigation and comparative evaluation of flow-shop scheduling techniques", Operations Research 18 (1970) 541-549.
[3] Ashour, S., "A modified decomposition algorithm for scheduling problems", International Journal of Production Research 8 (1970) 281-284.
[4] Baker, K.R., "A comparative study of flow-shop algorithms", Operations Research 23 (1975) 62-73.
[5] Brown, A.P.G., and Lomnicki, Z.A., "Some applications of the branch-and-bound algorithm to the machine scheduling problem", Operational Research Quarterly 17 (1966) 173-186.
[6] Campbell, H.G., Dudek, R.A., and Smith, M.L., "A heuristic algorithm for the n-job, m-machine sequencing problem", Management Science 16 (1970) 630-637.
[7] Conway, R.W., Maxwell, W.L., and Miller, L.W., Theory of Scheduling, Addison-Wesley, Reading, MA, 1967.
[8] Dannenbring, D.G., "An evaluation of flow shop sequencing heuristics", Management Science 23 (1977) 1174-1182.
[9] Garey, M.R., Johnson, D.S., and Sethi, R., "The complexity of flowshop and jobshop scheduling", Mathematics of Operations Research 1 (1976) 117-129.
[10] Gupta, J.N.D., "A functional heuristic algorithm for the flow-shop scheduling problem", Operational Research Quarterly 22 (1971) 39-47.
[11] Gupta, J.N.D., "Heuristic algorithms for multistage flow shop scheduling", AIIE Transactions 3 (1971).
[12] Gupta, J.N.D., and Maykut, A.R., "Flow-shop scheduling by heuristic decomposition", International Journal of Production Research 11 (1973) 105-111.
[13] Jackson, J.R., "An extension of Johnson's result on job lot scheduling", Naval Research Logistics Quarterly 3 (1956) 201-203.
[14] Johnson, S.M., "Optimal two- and three-stage production schedules with set-up times included", Naval Research Logistics Quarterly 1 (1954) 61-68.
[15] Kurisu, T., "Two-machine scheduling under required precedence among jobs", Journal of the Operations Research Society of Japan 19 (1976) 1-13.
[16] Lageweg, B.J., Lenstra, J.K., and Rinnooy Kan, A.H.G., "A general bounding scheme for the permutation flow-shop problem", Operations Research 26 (1978) 53-67.
[17] McMahon, G.B., and Burton, P.G., "Flow-shop scheduling with the branch-and-bound method", Operations Research 15 (1967) 473-481.
[18] Palmer, D.S., "Sequencing jobs through a multi-stage process in the minimum total time: a quick method of obtaining a near optimum", Operational Research Quarterly 16 (1965) 101-107.
[19] Potts, C.N., "An adaptive branching rule for the permutation flow-shop problem", European Journal of Operational Research 5 (1980) 19-25.
[20] Shanthikumar, J.G., Decomposition Approaches in Permutation Scheduling Problems, M.A.Sc. Thesis, University of Toronto, Toronto, 1977.
[21] Shanthikumar, J.G., and Buzacott, J.A., "On the use of decomposition approaches in a single machine scheduling problem", Journal of the Operations Research Society of Japan 25 (1982) 29-47.
[22] Szwarc, W., "Elimination methods in the m × n sequencing problem", Naval Research Logistics Quarterly 18 (1971) 295-305.