
Computers ind. Engng Vol. 19, Nos 1-4, pp. 81-91, 1990 Printed in Great Britain. All rights reserved

0360-8352/90 $3.00 + 0.00 Copyright © 1990 Pergamon Press plc

GENETIC ALGORITHMS AND JOB SHOP SCHEDULING

John E. Biegel and James J. Davern, University of Central Florida, Orlando, Florida 32816

ABSTRACT

We describe applications of Genetic Algorithms (GAs) to the Job Shop Scheduling (JSS) problem. More specifically, the task of generating inputs to the GA process for schedule optimization is addressed. We believe GAs can be employed as an additional tool in the Computer Integrated Manufacturing (CIM) cycle. Our technique employs an extension to the Group Technology (GT) method for generating manufacturing process plans. It positions the GA scheduling process to receive outputs from both the automated process planning function and the order entry function. The GA scheduling process then passes its results to the factory floor in terms of optimal schedules. An introduction to the GA process is discussed first. Then, an elementary n-task, one processor (machine) problem is provided to demonstrate the GA methodology in the JSS problem arena. The technique is then demonstrated on an n-task, two processor problem, and finally, the technique is generalized to the n-tasks on m-processors (serial) case.

INTRODUCTION

Job Shop Scheduling (JSS) problems become difficult very quickly. They belong to a class of problems said to be unsolvable in polynomial time (i.e., in time proportional to n, n^2, n^3, ...). The following discussion characterizes just how quickly the problem gets too large to solve. Assume the world is 20 billion years old. This means the world is

20 x 10^9 x 365.25 x 24 x 60 x 60 = 6.31152 x 10^17 seconds old,

or about 6.3 x 10^23 microseconds old.

If we had a computer system capable of producing and evaluating a job shop schedule every microsecond since the beginning of time, we could have produced only 24! (about 6.20 x 10^23) schedules. In terms of a one-machine scenario, this means that since the beginning of time we could have optimally scheduled only 24 jobs in the job queue waiting for processing on our single machine. Certainly we recognize that with heuristics, common sense, experience, and other quantitative and non-quantitative tools at our disposal, we really don't do too badly at scheduling. However, the above discussion suggests just how big the JSS problem is -- especially when we're faced with the n-task, m-machine scenario. The solution we're after is a tool that provides optimal schedules in near real-time. GAs could provide the basis for this tool.


The GA Process in General

To illustrate the GA process, we draw on Goldberg's [1] example:

Objective: Optimize f(x) = x^2 on the integer interval [0,31].

Further, assume we have 5 switches, SW1 SW2 SW3 SW4 SW5, each switch having settings of 0 (off) or 1 (on) in this example.

The maximum sum obtainable from a configuration of all switches in the on position is 31 (2^5 - 1). Therefore, the objective is to maximize our output with a value of x = 31, or x^2 = 31^2 = 961. To proceed using the GA process, we first generate an initial population of 5-bit strings (randomly generated strings). We use an initial population of 4 strings -- each string representing, from left to right, switch settings for switches Sw(1) through Sw(5). The initial strings are as follows:

0 1 1 0 1
1 1 0 0 0
0 1 0 0 0
1 0 0 1 1

Successive populations are generated using the genetic algorithm. The objective is to search for the optimal value from the string. Obviously, the real objective is to do this in as few iterations as possible. We use "payoff" values from an evaluation process to determine the best strings. The payoff in this example amounts to selecting strings that increase the string's value toward our known maximum value of 31^2 = 961. (In our JSS problems we obviously do not know our optimal value in advance.) The above problem can have 2^5 = 32 possible switch settings. Note that if each switch had 10 versus 2 positions, then the possible outcomes would be 10^5 = 100,000. As stated earlier, we're looking for GAs to help us solve "big" problems easier and faster. We continue on with the above problem and make some observations regarding the GA process as we proceed. GAs use probabilistic transition rules to guide the search. Salient features of the process include:

1. Direct use of a coding or abstraction.
2. The search is from a population (of schedules in JSS).
3. The search is blind to auxiliary information or noise.
4. Randomized operators are used. These are:
   a. Reproduction (new strings are generated based on a fitness function -- survival of the fittest).
   b. Crossover.
   c. Mutation.

Reproduction gives an increasing number of samples to the observed best strings. Therefore, we give exponentially increasing numbers of samples to the best strings. The strings themselves are composed of abstractions of the actual problem. Note that GAs work from a population while many other techniques work from a single point. Other techniques may find local optima rather than the global optimum. GAs search for the global optimum. Further, the transition rules of GAs are stochastic rather than deterministic. Finally, GAs ignore all information except the payoff.


When a new schedule (i.e. string) is required, we simply duplicate an existing string based on the fitness of the existing strings. In our example, fitness is measured by the highest value of the string sum squared. We simply sum the fitness values and determine the percentage of the total for each string. Then, we Monte Carlo select the string to reproduce by generating random numbers between 1 and 1000. Therefore, we reproduce the more fit strings with a higher probability. The table below illustrates this concept.

STRING #   STRING      x    Fitness x^2    % of Total   Range
1          0 1 1 0 1   13   169            14.4         1-144
2          1 1 0 0 0   24   576            49.2         145-646
3          0 1 0 0 0    8    64             5.5         647-700
4          1 0 0 1 1   19   361            30.9         701-1000
                            SUM  1170      100.0
                            Average 293
                            MAX VAL 576

For example, we have a 49.2% chance of obtaining a random number that will result in reproducing the string 1 1 0 0 0. We keep the same number of strings in our population throughout the process, so we really have a survival-of-the-fittest approach.

Crossover occurs by first randomly mating the newly produced strings. Then, the mated pairs of strings are crossed over based on a random number between 1 and the string length (sl) minus one, i.e. in the interval [1, sl-1]. The new strings are generated by exchanging the values between the strings at all string positions starting with the random number plus one. Continuing with Goldberg's example, assume we wish to mate the following two strings:

A = 0 1 1 0 1
B = 1 1 0 0 0

We select a random number, say 4, and we exchange the values at string position 5. The strings resulting from the crossover are as follows:

A' = 0 1 1 0 0
B' = 1 1 0 0 1

Consider the next iteration, where random numbers for selecting crossover mates have been generated. Further, a second random number (2) is generated to identify the crossover point on strings 3 and 4:

STRING #   String Following Reproduction   Random Mate   Crossover Site (Rnd)   NEW POPULATION   x    Fitness x^2
1          0 1 1 0 1                       2             4                      0 1 1 0 0        12   144
2          1 1 0 0 0                       1             4                      1 1 0 0 1        25   625
3          1 1 0 0 0                       4             2                      1 1 0 1 1        27   729
4          1 0 0 1 1                       3             2                      1 0 0 0 0        16   256
                                                                                SUM                   1754
                                                                                Average               439
                                                                                MAX VAL               729

Observe in the above table that the population average fitness has improved from 293 to 439 in one generation (iteration), and the maximum fitness has increased from 576 to 729 during this iteration. We have moved from a set of switch settings valued at 60% of our known maximum value (961) to a set of switch settings valued at 76% of our known maximum value in this single iteration. The mutation operator plays a secondary role in the GA process. Mutation is the occasional (with small probability) random alteration of a string position value. That is, we change a 0 to a 1 or a 1 to a 0 in the above example. We leave Goldberg's example at this point with the following observations: GA operations rely simply on random number generation, string copying, and partial string exchanging. In theory, GAs may be applied to any problem and could be ideal for our JSS problems.
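To make the mechanics concrete, the following is a minimal sketch (in Python, our illustration rather than code from Goldberg or from this paper) of the reproduction, crossover, and mutation loop applied to the 5-bit f(x) = x^2 example. The function names and the mutation rate are illustrative assumptions.

import random

STRING_LEN = 5          # five switches, Sw(1)..Sw(5)
POP_SIZE = 4            # initial population of 4 strings
MUTATION_RATE = 0.01    # mutation plays a secondary role (small probability)

def fitness(string):
    """Payoff: decode the bit string to x and return x^2."""
    x = int(string, 2)
    return x * x

def reproduce(population):
    """Monte Carlo (roulette-wheel) selection proportional to fitness."""
    weights = [fitness(s) for s in population]
    return random.choices(population, weights=weights, k=len(population))

def crossover(a, b):
    """Single-point crossover: swap everything after a random site in [1, sl-1]."""
    site = random.randint(1, STRING_LEN - 1)
    return a[:site] + b[site:], b[:site] + a[site:]

def mutate(string):
    """Occasionally flip a bit (0 -> 1 or 1 -> 0)."""
    bits = [('1' if b == '0' else '0') if random.random() < MUTATION_RATE else b
            for b in string]
    return ''.join(bits)

population = ['01101', '11000', '01000', '10011']   # initial strings from the text

for generation in range(10):
    mated = reproduce(population)
    next_population = []
    for i in range(0, POP_SIZE, 2):          # pair off the mated strings
        a, b = crossover(mated[i], mated[i + 1])
        next_population += [mutate(a), mutate(b)]
    population = next_population
    print(generation, population, max(fitness(s) for s in population))

Run repeatedly, the loop gives exponentially increasing numbers of samples to the better strings and reproduces the kind of improvement shown in the tables above (average fitness rising from 293 toward the known maximum of 961), although the exact trajectory depends on the random draws.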


The GA Process in JSS

Our specific goal is to increase the throughput of the manufacturing process by optimally scheduling the factory floor. If we can achieve this, we can contribute to a reduction in the total time required for the design-through-production cycle. The GA process, therefore, represents a potentially powerful tool for achieving improvements in the total manufacturing cycle time by reducing the time required following the engineering design phase. A simplified view of where the GA process fits is as follows:

[Diagram: Order Entry (work order/demand) and Engineering feed the GA scheduling process, which releases schedules to the factory floor.]

We recognize that a manufacturing process plan in the Computer Integrated Manufacturing (CIM) cycle can be a result of a Group Technology (GT) process. Our suggestion is to follow through on the GT concept by extracting the process planning data from the GT process and marrying this data with the work order (demand) requirements generated by the order entry process. This, in turn, results in the input needed to abstract the data such that the GA approach can be used for determining an optimal schedule. Further, it is at this point in the cycle that information about the order must be captured and maintained for the remainder of the work order's life cycle. This management data is directly obtainable from the process plan and work order. The information captured within the process plan includes the following essential data needed for the GA process:

- Part Number
- Operation 1 ID (Job, Step, or Task)
- Operation 1 Machine Requirement
- Operation 1 Processing Time
- Operation 2 ID
- Operation 2 Machine Requirement
- Operation 2 Processing Time
- ...
- Operation n ID
- Operation n Machine Requirement
- Operation n Processing Time

The work order itself includes the following essential information needed for the GA process:

- Part Number
- Lot Number
- Lot Quantity
- Due Date

Armed with the above set of information for each lot, we are prepared to move into the GA scheduling process.

For simplicity, we start with the case of n-tasks and a single machine. Further, we assume the static case where all lots are identified up front,


and no additional lots are added during the processing cycle. And finally, we assume a flow-type of shop where all routings are identical for all parts (even if the processing time can be zero for some parts). The GA process proceeds as follows:

1. Table (store) all data for each lot for a given machine except for the following job queue management data:
   a. Lot number
   b. Step ID
   c. Step processing time

2. Execute the GA process (a minimal sketch of this loop follows the list):
   a. Randomly generate a population of legal schedules without regard to processing time.
   b. Evaluate each schedule for its mean flow time (fitness function).
   c. Perform the traditional GA operations of reproduction, crossover, and mutation.
   d. Evaluate each new schedule for its mean flow time.
   e. Continue with the GA process steps c and d above until no "substantial" improvement in mean flow time is recognized.
   f. Stop the process, and present the results to the user as a sequence of jobs to be loaded onto the objective machine.

3. Execute the schedule.
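As promised above, here is a minimal sketch of the loop in step 2 (Python; ours, with hypothetical lot identifiers and times). For brevity it varies schedules with a legality-preserving swap mutation only; a permutation crossover such as the PMX operator discussed later in this paper would normally be applied as well.

import random

# Hypothetical lots: lot ID -> step processing time on the machine being scheduled.
jobs = {'L1': 4, 'L2': 9, 'L3': 2, 'L4': 7, 'L5': 5}

def mean_flow_time(schedule):
    """Fitness (steps 2b/2d): mean flow time of the sequence -- lower is better."""
    completion = total = 0
    for lot in schedule:
        completion += jobs[lot]
        total += completion
    return total / len(schedule)

def reproduce(population):
    """Survival of the fittest: sample schedules with probability ~ 1 / mean flow time."""
    weights = [1.0 / mean_flow_time(s) for s in population]
    return random.choices(population, weights=weights, k=len(population))

def mutate(schedule):
    """Swap two queue positions; the result is still a legal schedule."""
    s = schedule[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

# Step 2a: random population of legal schedules (no regard to processing time).
population = [random.sample(list(jobs), len(jobs)) for _ in range(8)]
best = min(population, key=mean_flow_time)

for _ in range(100):                    # steps 2c-2e: iterate until improvement levels off
    population = [mutate(s) for s in reproduce(population)]
    best = min(population + [best], key=mean_flow_time)

print(best, mean_flow_time(best))       # step 2f: job sequence for the objective machine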

The GA operators (reproduction, crossover, and mutation) are performed using the above process. Each new schedule produced is validated for legality (i.e. no duplicate or missing jobs) prior to evaluation, and the "survival of the fittest" GA process continues. The "best" solution is the schedule sequence providing the minimum mean flow time.

The n-task, single machine Problem

The goal of the GA process is to minimize the time required to provide the optimal schedule. For our purposes, the objective function is to minimize the mean flow time. The n-task (job), single machine situation can be represented graphically as:

| JOB | JOB | ... | JOB | ------> | Single Machine |

In this, the simplest situation, we assume n lots with one job (or step or task in this case) per lot and one machine that must process each job. Note: for simplicity, we use the term "job or jobs" to represent the work to be processed on a given machine. Obviously, this "job" may represent only one of many tasks on a given work order. Our objective is to minimize the mean flow time as given by:

F(n) = 1/n [n t(1) + (n-1) t(2) + ... + 2 t(n-1) + t(n)]

where n is the number of jobs, and t(n) is the processing time for the nth sequential job.


To illustrate, consider the following table:

Job Queue Posn:   1    2    3    4    5    6    7    8    9    10
Job ID & Seq:     h    c    a    i    j    e    b    d    f    g
Job Time:         2    4    7    3    8    9    5    1    10   6
Cum Flow Time:    20   56   112  133  181  226  246  249  269  275

For the above 10-job situation, the mean flow time is given by:

F(n) = 1/10 (275) = 27.5

It is obvious that the old reliable SPT (Shortest Processing Time) first rule yields the minimum solution. SPT would align the jobs as follows:

Job Queue Posn:   1    2    3    4    5    6    7    8    9    10
Job ID & Seq:     d    h    i    c    b    g    a    j    e    f
Job Time:         1    2    3    4    5    6    7    8    9    10
Cum Flow Time:    10   28   52   80   110  140  168  192  210  220

For the above sequence of jobs, the mean flow time is given by:

F(n) = 1/10 (220) = 22.0,

and represents the optimal case (what we'd shoot for with our GA process).
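The mean flow time figures above are easy to check with a short computation. The following Python snippet (ours; the function name is illustrative) reproduces the 27.5 and 22.0 values for the original and SPT sequences.

# Job processing times from the 10-job example above.
job_times = {'a': 7, 'b': 5, 'c': 4, 'd': 1, 'e': 9,
             'f': 10, 'g': 6, 'h': 2, 'i': 3, 'j': 8}

def mean_flow_time(sequence, times):
    """Mean flow time F(n): average of the jobs' completion (flow) times."""
    completion, total = 0, 0
    for job in sequence:
        completion += times[job]   # completion time of this job
        total += completion        # accumulate flow times
    return total / len(sequence)

original = ['h', 'c', 'a', 'i', 'j', 'e', 'b', 'd', 'f', 'g']
spt = sorted(job_times, key=job_times.get)   # shortest processing time first

print(mean_flow_time(original, job_times))   # 27.5
print(mean_flow_time(spt, job_times))        # 22.0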

On close inspection, the JSS problem is similar to the Traveling Salesman Problem (TSP). That is, legal sequences must be preserved, and no city may be missed. In the TSP case, we cannot pass through the same city more than once. Similarly, in the n-task, one machine JSS problem, a job cannot pass through our single machine more than once (no duplication), and no job may be missed. For example, the sequence of jobs:

| JOB a | JOB c | JOB b | JOB e | JOB d | ------> | Single Machine |

is legal. Each job is scheduled through the single machine only one time. However, the schedule:

| JOB a | JOB c | JOB a | JOB c | JOB a | ------> | Single Machine |

certainly is neither a feasible nor a practical schedule. We have both duplicates and missing jobs. Therefore, it is an illegal schedule. This is so obvious, why is it important? It turns out that the GA crossover operator can produce illegal schedules. To remedy this illegal schedule problem, several "permutation crossover" techniques or operators have been introduced. This paper addresses Goldberg and Lingle's [2] Partially Mapped Crossover (PMX). The example Goldberg and Lingle use to illustrate PMX is a ten-city TSP case, and we draw an analogy to a ten-job schedule for a single machine. The following sequences represent two permutations (schedules) of ten jobs (a through j):

Job Queue Position:  1  2  3  4  5  6  7  8  9  10
Schedule A:          i  h  d  e  f  g  a  c  b  j
Schedule B:          h  g  a  b  c  j  i  e  d  f

With PMX, two random numbers (e.g. 4 and 6) are chosen as the crossover points for the schedules. This means that crossover starts at the lower random number queue position and continues through the larger random number queue position. The resulting schedules following crossover are as follows:


Step 1:
Job Queue Position:  1  2  3 | 4  5  6 | 7  8  9  10
Schedule A':         i  h  d | b  c  j | a  c  b  j
Schedule B':         h  g  a | e  f  g | i  e  d  f

Note that we have indeed generated illegal schedules. In Schedule A', by swapping queue positions 4, 5, and 6 with Schedule B', we duplicated jobs b, c, and j, and we eliminated jobs e, f, and g from our schedule. We have similar problems in Schedule B'. PMX remedies this situation by executing a second step. This second step detects problem situations created by the crossover step and fixes the illegal situations -- the duplicates and missing jobs. Step two results in the following schedules:

Step 2:
Job Queue Position:  1  2  3 | 4  5  6 | 7  8  9  10
Schedule A":         i  h  d | b  c  j | a  f  e  g
Schedule B":         h  j  a | e  f  g | i  b  d  c

Note that this second step operated on queue positions outside of the original crossover range (queue positions 4 through 6) and eliminated the duplicates with the crossover values that caused the duplicates. For example, in Schedule B', the first duplicate situation was caused by the crossover at queue location 6. This resulted in a duplicate in queue position 2. Therefore, PMX detects the duplicate and replaces queue position 2 with job j -- the job in the crossover position that caused the illegal schedule. Schedules A" and B" now represent legal schedules and, according to the GA process, can now be evaluated for merit or payoff.
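A compact implementation of this repair logic might look like the following (Python; our sketch of PMX as commonly described, not code from the paper). Applied to Schedules A and B with crossover points 4 through 6, it reproduces legal offspring of the kind shown above.

def pmx(parent_a, parent_b, lo, hi):
    """Partially Mapped Crossover (PMX) for job sequences.
    lo..hi are 0-based indices of the crossover segment (inclusive)."""
    def build_child(p, q):
        # Child starts as a copy of p with q's segment mapped in.
        child = list(p)
        child[lo:hi + 1] = q[lo:hi + 1]
        # Mapping: a job in q's segment corresponds to the job it displaced in p.
        mapping = {q[i]: p[i] for i in range(lo, hi + 1)}
        # Repair duplicates outside the segment by following the mapping.
        for i in list(range(0, lo)) + list(range(hi + 1, len(child))):
            while child[i] in mapping:
                child[i] = mapping[child[i]]
        return child
    return build_child(parent_a, parent_b), build_child(parent_b, parent_a)

schedule_a = list('ihdefgacbj')
schedule_b = list('hgabcjiedf')

# Crossover points 4 and 6 (queue positions) -> 0-based indices 3..5.
a2, b2 = pmx(schedule_a, schedule_b, 3, 5)
print(''.join(a2))   # ihdbcjafeg  (Schedule A")
print(''.join(b2))   # hjaefgibdc  (Schedule B")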

In this paper, we accept this PMX process without criticism as being a suitable execution process for our JSS problem. Other permutation crossover techniques may be more or less efficient. What's left then? Not much, really. We store our management data and carry only the lot ID (lot and job are equivalent in this case) and process time into the GA process. Items A and H in the table below represent the GA required input data. The remainder of the data are management data.

A       B        C         E         F        G                      H
LOT #   Part #   Due Date  Quantity  Machine  Processing Time (ea)   Processing Time (ext)
1008    18-104   10-12-90  10        a        10                     100
1009    12-110   10-5-90   12        a        12                     144

Why is the processing time a required input to the GA process? We actually don't need processing times for the crossover and mutation operators of the genetic algorithm -- these operators work directly on the population of strings (lists of jobs). However, we do need the times for the reproduction operator. Recall that the GA operates on the strings based on our fitness measuring process, and it is the result of the evaluation process that tells the GA which strings to reproduce. It is obvious, then, that we must carry the process time with the respective job. The GA process is then executed, and we obtain our schedule. What about the output? It is presented to the factory floor simply as a list of jobs in the sequence specified by the GA process. For example:

JOB-1009, JOB-1008, JOB-1015, ...

The management data can be filled back in to present the schedule in a satisfactory report format with times, part numbers, etc.
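The split between GA data and management data described above amounts to a small data-handling step. The sketch below (Python, ours; the field names are illustrative, and only lots 1008 and 1009 come from the table above) extracts items A and H for the GA and fills the management data back in when the resulting sequence is reported to the floor.

# Full records as captured from the process plan and work order (columns A-H above).
lots = [
    {'lot': '1008', 'part': '18-104', 'due': '10-12-90', 'qty': 10,
     'machine': 'a', 'time_ea': 10, 'time_ext': 100},
    {'lot': '1009', 'part': '12-110', 'due': '10-5-90', 'qty': 12,
     'machine': 'a', 'time_ea': 12, 'time_ext': 144},
]

# Items A and H are all the GA needs: lot ID plus extended processing time.
# This dict is what would be handed to the GA's mean flow time evaluation.
ga_input = {rec['lot']: rec['time_ext'] for rec in lots}

# Suppose the GA returns this sequence (illustrative, matching the example output).
ga_output = ['1009', '1008']

# Fill the management data back in for the factory-floor report.
by_lot = {rec['lot']: rec for rec in lots}
for lot in ga_output:
    rec = by_lot[lot]
    print(f"JOB-{lot}  part {rec['part']}  machine {rec['machine']}  "
          f"ext time {rec['time_ext']}  due {rec['due']}")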


Is our resulting schedule optimal? Is the GA process fast? The literature suggests the process provides encouraging results [3]. We need to do more testing with the process to assess its contribution in the JSS arena.

The n-task, two-machine Problem

We now progress to the more involved case, the n-task, 2-machine JSS problem (serial). The n-task (job), 2-machine situation can be represented graphically as:

| JOB | JOB | ... | JOB | ------> | Machine 1 | Machine 2 |

The first thing we note is the change to our input table. It now carries the processing times for both machines as shown below:

A       B        C         E         F        G                      H
LOT #   Part #   Due Date  Quantity  Machine  Processing Time (ea)   Processing Time (ext)
1008    18-104   10-12-90  10        1        10                     100
                                     2        8                      80
1009    12-110   10-5-90   12        1        12                     144
                                     2        6                      72

In the static case (i.e. no additional work entering the system during the processing), the optimal ordering of lots at the first machine is carried through each successor machine. Consider the following example:

        Processing Time P(t)
JOB     Machine 1   Machine 2
a       4           3
b       1           3
c       5           4
d       2           5
e       5           6

We first schedule the machines on a first-come-first-served (FCFS) basis and obtain a total processing time of 25 as shown below.

[Gantt chart: FCFS sequence a, b, c, d, e through Machine 1 and Machine 2; time axis 2-26.]

Next, we load machine 1 using SPT and obtain a total processing time of 23 as shown below.

[Gantt chart: SPT sequence through Machine 1 and Machine 2; time axis 2-26.]
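The makespan comparison above can be verified with a short simulation of the two serial machines. The snippet below (Python, ours) reproduces the 25 (FCFS) and 23 (SPT) figures; chaining the same max() logic once per machine generalizes it to the m-machine serial case discussed later.

# Processing times from the example above: job -> (Machine 1 time, Machine 2 time).
times = {'a': (4, 3), 'b': (1, 3), 'c': (5, 4), 'd': (2, 5), 'e': (5, 6)}

def makespan(sequence, times):
    """Time to push all jobs through Machine 1 and then Machine 2 in the given order."""
    m1_done = m2_done = 0
    for job in sequence:
        t1, t2 = times[job]
        m1_done += t1                          # Machine 1 works continuously
        m2_done = max(m2_done, m1_done) + t2   # Machine 2 waits for the job and for itself
    return m2_done

fcfs = ['a', 'b', 'c', 'd', 'e']
spt = sorted(times, key=lambda j: times[j][0])   # order by Machine 1 time

print(makespan(fcfs, times))   # 25
print(makespan(spt, times))    # 23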


SPT looks like it provided the optimum schedule, but this determination can only be made after viewing the entire schedule through both machines. SPT cannot be relied on to provide the optimal solution in the general case. With the GA approach, we also perform our evaluation against both machines. Actually, we simulate the above diagram and determine the best schedules. In our situation, we're looking for the shortest time span required to process all jobs through both machines. This is where the real strength of the GA process shows through. Because we evaluate the entire two-machine schedule, we optimize at the population level rather than at the individual machine level. Again, the evaluation process, where we determine the "best" of the schedules, has the significant role, because it tells us which of the schedules the GA operators will be applied to during the next iteration.

The n-task, m-machine Problem

The n-task (job), m-machine situation can be represented graphically as:

| JOB | JOB | ... | JOB | ------> | Mach 1 | Mach 2 | ... | Mach m |

Extrapolating on the 2-machine case discussed above, what difference does it make if we have more than two machines serially? None! As long as we evaluate the entire population of schedules through all machines, we employ the same procedure as with the 2-machine scenario. Also, note that regardless of the number of jobs or machines, the schedule is represented quite simply as a set of job identifiers indicating the sequence of job processing. The management data is carried in the background and is not made apparent until it is required at the presentation step of the process. A modified version of our Gantt chart schedule representation would be satisfactory in most cases.

The Dynamic Case

In the dynamic case, where we have additional tasks entering the environment as processing continues, we may want to perform a scheduling action after each new lot enters. The new or resultant schedule on the first machine is again considered optimal for the successor machines. (This is valid, since part of the GA process requires evaluation of each proposed schedule in its entirety and reproduction of the minimum span time schedule.) An approach to the dynamic case is as follows:

Each time a job is completed on Machine 1:

   Determine if a new task entered the system:

   IF YES -- Reschedule.
             AND: move task just completed to front of Machine 2 queue,
             AND: apply new schedule to all machines.

   IF NO  -- Continue with current schedule on all machines.

Consider the following 2-machine dynamic schedule scenario:

Time = 0, first schedule:
JOB SEQUENCE:        b | d | a | e | c |
JOB TIME (Mach 1):   1 | 3 | 2 | 5 | 4 |


Time = 1, Job b complete on Mach 1. Job f enters -- time = 3.
Reschedule:

Second Schedule
JOB SEQUENCE (Mach 1): d | f | a | e | c |
JOB SEQUENCE (Mach 2): b | d | f | a | e | c |

Time = 4, Job d complete on Mach 1. Job g enters -- time = 1.
Reschedule:

Third Schedule
JOB SEQUENCE (Mach 1): g | f | a | e | c |
JOB SEQUENCE (Mach 2): d | g | f | a | e | c |

etc.
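A sketch of this event-driven rule follows (Python, ours; the job data are hypothetical, and the reschedule() stand-in orders the remaining Machine 1 work by SPT where the paper would invoke the GA).

# Hypothetical Machine 1 processing times; late arrivals map job -> entry time.
mach1_time = {'a': 2, 'b': 1, 'c': 4, 'd': 3, 'e': 5, 'f': 3, 'g': 1}
arrivals = {'f': 3, 'g': 6}

def reschedule(remaining):
    """Stand-in for the GA: order the remaining Machine 1 work by SPT."""
    return sorted(remaining, key=mach1_time.get)

schedule = reschedule(['a', 'b', 'c', 'd', 'e'])   # first schedule at time 0
pending = dict(arrivals)
clock = 0

while schedule:
    job, schedule = schedule[0], schedule[1:]
    clock += mach1_time[job]                       # job completes on Machine 1
    new_jobs = [j for j, t in pending.items() if t <= clock]
    if new_jobs:                                   # IF YES: a new task has entered
        for j in new_jobs:
            del pending[j]
        schedule = reschedule(schedule + new_jobs)         # new Machine 1 sequence
        mach2 = [job] + schedule                           # completed job to front of Mach 2
        print(f"t={clock}: reschedule  Mach 1: {schedule}  Mach 2: {mach2}")
    # IF NO: simply continue with the current schedule on all machines.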

Again, to be effective, the evaluation process must be accomplished in near real-time in order to keep up with new schedule generation requirements. As in the static case, there is no separate, unique schedule for machine 2 (or machines 2, ..., m in the m-machine scenario).

CONCLUSION AND FUTURE STUDY

What we presented in this paper is a sort of primer on applying genetic algorithms to the JSS problem. Our initial research indicates that GAs could be the appropriate tool to bring JSS problems into a manageable arena. However, many areas of study and opportunity exist. Our research will continue to explore specific applications of GAs to JSS. We believe study remains to be done in the following areas. It is important to note that while most of these areas have received at least some consideration, specific attention to GA applications in the JSS area has been limited.

1. JSS problems with precedence constraints, due date constraints, inventory constraints, and multiple parallel servers.

2. Comparing GAs with the traditional quantitative and non-quantitative techniques.

3. Combining GAs with other techniques (including heuristics).

4. Building new schedules based on learned information from prior schedules of a similar nature (remembering old, good starting strings).

5. Developing algorithms to optimize population size selection, mutation percentages, and evaluation speeds as a function of the number of jobs being scheduled.

6. Developing the data base structures to accommodate the integration of GT with the process plan data, the order entry data, and the factory floor presentation requirements.

7. Building "fast response" scheduling prototype software to handle immediate change requirements and keep the shop floor working near-optimal while a more refined schedule is being generated.

8. Performing sensitivity analyses to determine which features of the GA algorithm have the most significant effect on performance.


REFERENCES

1. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.

2. Goldberg, D. E., and Lingle, R. (July 1985). "Alleles, Loci, and the Traveling Salesman Problem," Proceedings of an International Conference on Genetic Algorithms and Their Applications, Carnegie-Mellon University, Pittsburgh.

3. Oliver, I. M., Smith, D. J., and Holland, J. R. C. (1987). "A Study of Permutation Crossover Operators on the Traveling Salesman Problem," Proceedings of the Second International Conference on Genetic Algorithms and Their Applications, 224-230.

Not Cited

Campbell, H. G., Dudek, R. A., and Smith, M. L., "A Heuristic Algorithm for the n Job, m Machine Sequencing Problem," Management Science, vol. 16, no. 10, June 1970.

Johnson, S. M., "Optimal Two- and Three-Stage Production Schedules with Setup Times Included," Naval Research Logistics Quarterly, vol. 1, no. 1, March 1954.

Dr. John E. Biegel is a Professor of Engineering and Director of the Intelligent Simulation Laboratory at the University of Central Florida, Orlando, Florida. Mr. James J. Davern is a Martin Marietta Internal Information Systems Consultant for Manufacturing Systems and an Industrial Engineering doctoral student at the University of Central Florida, Orlando, Florida.
