Genetic Algorithm Approach to a Production Ordering Problem in an Assembly Process with Buffers

Copyright © IFAC Information Control Problems in Manufacturing Technology, Toronto, Canada, 1992

N. Sannomiya and H. Iima
Department of Electronics and Information Science, Kyoto Institute of Technology, Matsugasaki, Sakyo-ku, Kyoto 606, Japan

Abstract. This paper deals with an optimal production scheduling problem for an assembly process with buffers at the input and output sides of a machine. The problem has two constraints. One constraint is that the buffer capacity is limited. The other constraint is that the cycle time of the worker is constant, without waiting time. An optimal production ordering is determined subject to these constraints in such a way that the production rate of each product is kept as constant as possible. A procedure for applying the genetic algorithm to this problem is shown, and the operations of reproduction, crossover and mutation are discussed. It is observed from numerical results that the genetic algorithm is more effective than other methods.

Keywords. Optimization; Genetic algorithm; Scheduling; Manufacturing process; Buffer operation.

1. INTRODUCTION

Flexible manufacturing systems have become increasingly important because of their advantages, such as lower costs, consistent product quality, and flexible system management and planning. However, in order to develop such systems, many theoretical and technical problems must be solved. A typical problem is the development of a scheduling algorithm for production systems. This is a combinatorial optimization problem, and it involves difficulties such as complicated constraints and many local optimum solutions. Recently, several approaches have been investigated for overcoming these difficulties; for example, the simulated annealing algorithm (Coroyer and Liu, 1991) and the genetic algorithm (Nishikawa and Tamaki, 1991).

In this paper, we consider a production ordering problem for an assembly process with several constraints. One constraint is that the capacity of the buffers at the input and output sides of the process is limited. The other constraint is that the cycle time of the worker is constant. In this system, an optimal production ordering is determined in such a way that the production rate of each product is kept as constant as possible. For obtaining a good suboptimal feasible solution, the genetic algorithm (Goldberg, 1989) is applied to this problem. A procedure for carrying out the operations of reproduction, crossover and mutation is shown. The effectiveness of the algorithm is compared with that of other methods, such as the greedy method and the simulated annealing method, from the viewpoints of accuracy and computation time. It is observed from numerical results that the genetic algorithm is more effective in terms of accuracy than the other methods. The greedy method takes less computation time than the other methods, but its accuracy depends on the severity of the constraints on the assembly process.

2. PROBLEM STATEMENT

We consider a production ordering problem in the case where a set of N products A_i (i = 1, 2, ..., N) are processed on a machine. As shown in Fig. 1, the machine has buffers of capacity B at its input and output sides. A worker carries a product from the preceding stage to the input buffer, and a product from the output buffer to the succeeding stage. After that, the worker returns to the preceding stage. The cycle time of the worker is h_0. For simplicity, the worker needs no time for moving from the input buffer to the output buffer.

Fig. 1. A production process.

The present problem has two constraints. One is the constraint on limited buffer capacity. The other is that the worker must make a tour at regular intervals. An optimal production ordering is determined subject to these constraints in such a way that the production rate of each product is as constant as possible. A constant rate of production is desirable from the viewpoint of stationary and economic operation of the whole production process.

We define h_i as the processing time for product A_i on the machine, and Q_i as the amount of A_i to be processed. Then the total amount Q of products and the total processing time T are given by

    Q = Σ_{i=1}^{N} Q_i                                            (1)

    T = Σ_{i=1}^{N} h_i Q_i                                        (2)

In order to formulate the problem, we define the variables in period k (1 ≤ k ≤ T) as follows.

    x_i(k): actual value of the total amount of A_i processed up to period k.
    m_i(k) = Q_i k / T: ideal value of the total amount of A_i processed up to period k.
    u_i(k): binary variable representing the state of the machine (i.e., u_i(k) = 1 when product A_i was processed at k on the machine, and u_i(k) = 0 otherwise).
    v(k): binary variable representing the state of the worker (i.e., v(k) = 1 when the worker reached the machine at k, and v(k) = 0 otherwise).
    b(k): number of products in the output buffer.
    c(k): number of products in the input buffer.

The time variations of the variables defined above are given by

    x_i(k) = x_i(k-1) + u_i(k)                                     (3)

    Σ_{i=1}^{N} u_i(k) ≤ 1                                         (4)

    b(k) = b(k-1) + Σ_{i=1}^{N} u_i(k) - v(k)                      (5)

    c(k) = c(k-1) - Σ_{i=1}^{N} u_i(k) + v(k)                      (6)

At the initial period (k = 0), the output buffer holds B_0 products and the input buffer holds C_0 products, where at least one of B_0 and C_0 is positive and B_0, C_0 ≤ B. The initial position of the worker is the front of the output buffer. Thus

    x_i(0) = 0                                                     (7)

    b(0) = B_0                                                     (8)

    c(0) = C_0                                                     (9)

The following constraints must be satisfied:

    x_i(k) ≤ Q_i                                                   (10)

    x_i(T) = Q_i                                                   (11)

    0 ≤ b(k) ≤ B                                                   (12)

    0 ≤ c(k) ≤ B                                                   (13)

    i = 1, 2, ..., N;  k = 1, 2, ..., T

The objective function to be minimized is given by

    Z = Σ_{i=1}^{N} Σ_{k=1}^{T} [x_i(k) - m_i(k)]^2

Let s be the parameter representing the processing order. We assume that A_{i_s} is the s-th product to be processed. Then the sequence of products is as follows:

    Γ = A_{i_1}, A_{i_2}, ..., A_{i_s}, ..., A_{i_Q}               (14)

The processing of product A_{i_s} is completed at the following period:

    t(s) = h_{i_1} + h_{i_2} + ... + h_{i_s},  s = 1, 2, ..., Q    (15)

In this case, u_i(k) and v(k) are given by

    u_i(k) = 1 for k = t(s) and i = i_s;  u_i(k) = 0 otherwise     (16)

    v(k) = 1 for k = m h_0;  v(k) = 0 for k ≠ m h_0;  m = 1, 2, 3, ...   (17)

Adding both sides of (5) from k = 1 to t(s) leads to

    b(t(s)) = b(0) + Σ_{k=1}^{t(s)} Σ_{i=1}^{N} u_i(k) - Σ_{k=1}^{t(s)} v(k)   (18)

From (8), (16) and (17), we have

    b(t(s)) = B_0 + s - [t(s)/h_0]                                 (19)

where [a] implies the integer part of a.

(i) The case where v(t(s)) = 0. Equation (5) becomes

    b(t(s)) = b(t(s)-1) + 1,  i.e.  b(t(s)-1) = b(t(s)) - 1        (20)

From (12), we have

    0 ≤ b(t(s)) ≤ B                                                (21)

    0 ≤ b(t(s)-1) ≤ B                                              (22)

Substituting (20) into (22) yields

    1 ≤ b(t(s)) ≤ B + 1                                            (23)

From (21) and (23), we obtain

    1 ≤ b(t(s)) ≤ B                                                (24)

Substituting (19) into (24) yields

    1 ≤ B_0 + s - [t(s)/h_0] ≤ B                                   (25)
    ∴ (s - B + B_0) h_0 ≤ t(s) < (B_0 + s) h_0

(ii) The case where v(t(s)) = 1. Equation (17) implies

    [t(s)/h_0] = t(s)/h_0                                          (26)

from which (19) becomes

    b(t(s)) = B_0 + s - t(s)/h_0                                   (27)

Substituting (27) into (12) yields

    0 ≤ B_0 + s - t(s)/h_0 ≤ B                                     (28)
    ∴ (s - B + B_0) h_0 ≤ t(s) ≤ (B_0 + s) h_0

Thus, in order to satisfy either (25) or (28), the following relation must hold:

    (s - B + B_0) h_0 ≤ t(s) ≤ (B_0 + s) h_0                       (29)

On the other hand, adding (6) from k = 1 to t(s) and using (9), (16) and (17) gives

    c(t(s)) = C_0 - s + [t(s)/h_0]                                 (30)

By calculating c(t(s)) in the same manner as that for b(t(s)), we obtain

    (s - C_0) h_0 ≤ t(s) ≤ (B + s - C_0) h_0                       (31)

Since both (29) and (31) must hold, the following constraint is obtained:

    (s - B + B_0) h_0 ≤ t(s) ≤ (s + B - C_0) h_0   for B_0 + C_0 > B
    (s - C_0) h_0 ≤ t(s) ≤ (s + B_0) h_0           for B_0 + C_0 ≤ B
    s = 1, 2, ..., Q                                               (32)

Consequently, the problem is expressed in the following form:

    min Z = Σ_{i=1}^{N} Σ_{k=1}^{T} [x_i(k) - m_i(k)]^2
    subject to (4), (7), (10), (11), (16) and (32)

The length R_{t(s)} of the feasible range for t(s) is given, from (32), by

    R_{t(s)} = (2B - B_0 - C_0) h_0   for B_0 + C_0 > B
             = (B_0 + C_0) h_0        for B_0 + C_0 ≤ B            (33)

Figure 2 shows the relationship between the range for t(s) and the initial number of products in both buffers. The value of R_{t(s)} has an influence upon the feasibility of the problem. It is observed from the figure that R_{t(s)} attains its maximum value at B_0 + C_0 = B. Hence, when the initial total number of products in both buffers is close to the buffer capacity, the problem has relatively many feasible solutions.

Fig. 2. The range for t(s).
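As a concrete illustration of constraint (32), the feasibility of a candidate sequence can be checked directly from its cumulative completion times t(s) of (15). The following sketch is not from the paper; the function name `is_feasible` is hypothetical, and the example data are the parameter values used later in the numerical study (B = C_0 = 1, B_0 = 0, h_0 = 6).

```python
# Sketch: check the feasibility window (32) for a candidate sequence.
# The helper name is hypothetical; parameters follow the paper's example.

def is_feasible(seq, h, B, B0, C0, h0):
    """seq is a list of product numbers (the string Gamma of (14));
    h[i] is the processing time of product A_i."""
    t = 0
    for s, i in enumerate(seq, start=1):
        t += h[i]                      # t(s) of (15): cumulative processing time
        if B0 + C0 > B:                # window of (32), case B0 + C0 > B
            lo, hi = (s - B + B0) * h0, (s + B - C0) * h0
        else:                          # window of (32), case B0 + C0 <= B
            lo, hi = (s - C0) * h0, (s + B0) * h0
        if not lo <= t <= hi:
            return False
    return True

# Example data: N = 6, B = C0 = 1, B0 = 0, h0 = 6, and the processing
# times h_1..h_6 = 1, 3, 5, 7, 9, 11 used in the numerical study.
h = {1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11}
```

With these values the window for the s-th completion is [6(s-1), 6s], so for instance the partial sequence A_3, A_4 (completion times 5 and 12) passes the check, while starting with A_6 (completion time 11 > 6) fails it.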

3. APPLICATION OF GENETIC ALGORITHM

The genetic algorithm (Goldberg, 1989) is a search technique based on the mechanics of natural selection and natural genetics. It searches from a population of points, not from a single point, using random choice as a tool for obtaining the global optimum. In this algorithm, a point corresponds to an individual, which is represented in terms of a string, i.e. a sequence of genes. Each individual has its own fitness function value, which is to be maximized.

We apply the genetic algorithm to the production ordering problem. The string is defined as the sequence of products given by (14). For simplicity, the product A_i is denoted by i. Then the string has length Q and is a sequence of figures belonging to {1, 2, ..., N}. The s-th figure in the sequence represents the number of the product to be processed in the s-th place.

As an example, let N = 3, Q_1 = 3, Q_2 = 3, Q_3 = 4, and Q = 10. Then an individual is expressed as

    1 1 2 3 3 3 1 2 2 3

which means Γ = A_1, A_1, A_2, A_3, A_3, A_3, A_1, A_2, A_2, A_3.

3.1 Outline of the Algorithm

We consider a set of individual strings P_j(t), j = 1, 2, ..., M, belonging to the population P(t) at generation t. The genetic algorithm is composed of three operators: reproduction, crossover and mutation. They generate the population P(t+1) of the next generation t+1.

The algorithm is summarized as follows.

Algorithm 1
Step 1. Select at random the initial population P(1) satisfying (32). Set t = 1.
Step 2 (Reproduction). Calculate the fitness functions of the individual strings, and generate the population P(t+1) according to the distribution of their function values.
Step 3. Go to Step 4 or Step 5 by a random choice.
Step 4 (Crossover). Mate the members of P(t+1) at random, and carry out the crossover operation for each pair of strings. Then we have a new population P(t+1). Go to Step 6.
Step 5 (Mutation). Select a string P_j(t+1) at random from the population P(t+1). Carry out the mutation operation for the string, and a new string is obtained for P_j(t+1). By replacing P_j(t+1) with the new one, we have a new population P(t+1).
Step 6. If t = t*, the string with the highest fitness is adopted as the solution of the present problem. If t < t*, set t = t+1 and return to Step 2.

3.2 Generation of the Initial Population

The present problem has a narrow feasible region. If the string (14) is selected at random as an initial string, it is seldom feasible. Hence a search procedure is needed for finding a feasible population.

Let Γ_{Q-s} be the sequence in which s products have already been ordered backwards. Then Γ_{Q-s} is a subsequence of Γ which has length s. Further, Ω(s) is defined as the set of product numbers available at position s.

The initial population P(1) used in Step 1 of Algorithm 1 is obtained by the following algorithm.

Algorithm 2
Step 1. Set Ω(s) = {1, 2, ..., N} for all s ∈ {1, 2, ..., Q}.
Step 2. Set s = Q and Γ_Q = φ. In this case we have t(Q) = T.
Step 3. Find an i ∈ Ω(s) which satisfies

    (s-1-B+B_0) h_0 ≤ t(s) - h_i ≤ (s-1+B-C_0) h_0   for B_0 + C_0 > B
    (s-1-C_0) h_0 ≤ t(s) - h_i ≤ (s-1+B_0) h_0       for B_0 + C_0 ≤ B    (34)

If (34) has many solutions, select one of them at random. The solution is denoted by i_s. If (34) has no solution, go to Step 4.
Step 4. Set s = s+1 and Ω(s) = Ω(s) - {i_s}. Return to Step 3.
Step 5. Define Γ_{s-1} = A_{i_s}, Γ_s.
Step 6. If s = 1, an individual P_j(1) is obtained; go to Step 7. If s > 1, set t(s-1) = t(s) - h_{i_s} and s = s-1. Return to Step 3.
Step 7. Continue the procedure between Step 1 and Step 6 until M individuals P_j(1) (j = 1, 2, ..., M) are obtained.

Since (34) is equivalent to (32), a feasible population is sought by using Algorithm 2.

3.3 Reproduction

In Algorithm 1, reproduction is an operator by which individual strings are copied according to their fitness function values J_j, j = 1, 2, ..., M. We want to maximize the value of each J_j. Then copying strings means that strings with a higher value have a higher probability of survival in the next generation.

We propose the following fitness function for the string P_j:

    J_j = U min_{1≤k≤M} Z_k + max_{1≤k≤M} Z_k - Z_j                (35)

where Z_j is the objective function value of P_j. In (35), the second term is a constant introduced in such a way that the fitness function value becomes non-negative. The function of the first term is to control the survival of the strings in the next generation. For this purpose, the parameter U is adjusted in such a way that a bad string tends to survive as U becomes large. As an example, the following formula is used:

    U = -0.45 t/t* + 0.5                                           (36)

where t* is the final generation.

The expected number of survivors for P_j is given by

    E_j = J_j / J̄                                                  (37)

where J̄ is the average value of J_j, i.e.

    J̄ = (1/M) Σ_{j=1}^{M} J_j                                      (38)

Then the population P(t+1) of the next generation is generated based on the values E_j.

3.4 Crossover

Generation of new strings is carried out by the operations of crossover and mutation.

Crossover in Step 4 of Algorithm 1 proceeds in two steps. First, members of P(t+1) are mated at random. Second, each pair of strings undergoes the following operation: positions are selected along the strings, the two strings are split at those positions, and two new strings are generated by swapping all figures in the split subsequence. In this case, the feasibility of the solution must be preserved by the operation. For this purpose, the number of each product must be the same between the swapped subsequences.

New strings satisfying this condition are obtained as follows. We apply the crossover operation to the two strings

    Γ_1 = A_{i_1}, A_{i_2}, ..., A_{i_Q}
    Γ_2 = A_{j_1}, A_{j_2}, ..., A_{j_Q}                           (39)

Let ν_l(i) be the number of products A_i to be processed according to the sequence Γ_l (l = 1, 2). Further, Λ is the set of the positions at which the strings can be split while preserving the feasibility of the solution. The set Λ is given in the following manner.

Algorithm 3
Step 1. Set Λ = {0, Q} and ν_1(i) = ν_2(i) = 0 for every i ∈ {1, 2, ..., N}. Set s = 1.
Step 2. The s-th product of the sequence Γ_l is examined for each l. For these products, i.e. A_{i_s} and A_{j_s}, ν_1(i) and ν_2(i) are updated as

    ν_1(i_s) = ν_1(i_s) + 1,  ν_2(j_s) = ν_2(j_s) + 1

Step 3. If ν_1(i) = ν_2(i) for every i, Λ = Λ + {s}.
Step 4. If s = Q-1, stop. If s < Q-1, set s = s+1 and return to Step 2.

Two elements are chosen at random from the set Λ thus obtained; say α and β. Then, for the old strings (39), the crossover operation gives the following new strings:

    Γ'_1 = A_{i_1}, ..., A_{i_α}, A_{j_{α+1}}, ..., A_{j_β}, A_{i_{β+1}}, ..., A_{i_Q}
    Γ'_2 = A_{j_1}, ..., A_{j_α}, A_{i_{α+1}}, ..., A_{i_β}, A_{j_{β+1}}, ..., A_{j_Q}    (40)

Since the processing periods are the same between the swapped subsequences, the new sequences Γ'_1 and Γ'_2 are feasible solutions.

As an example, two strings P'_1(t+1) and P'_2(t+1) are generated by the crossover operation as shown in Fig. 3. In this case we have Λ = {0, 5, 8, 13}; in the figure, α = 5 and β = 13 were chosen.

    P_1(t+1):  1 1 2 3 3 1 2 3 2 3 3 2 1
    P_2(t+1):  2 3 1 3 1 3 2 1 3 3 1 2 2

    P'_1(t+1): 1 1 2 3 3 3 2 1 3 3 1 2 2
    P'_2(t+1): 2 3 1 3 1 1 2 3 2 3 3 2 1

Fig. 3. An example of the crossover operation.

3.5 Mutation

Mutation in Step 5 of Algorithm 1 is the occasional random alteration of the figure at a string position. First, a string is selected at random from the population:

    Γ = A_{i_1}, ..., A_{i_α}, ..., A_{i_β}, ..., A_{i_Q}          (41)

Second, two positions in the string are selected at random; say the α-th and β-th positions. A new string is then generated by exchanging the figure at the α-th position with that at the β-th position:

    Γ' = A_{i_1}, ..., A_{i_β}, ..., A_{i_α}, ..., A_{i_Q}         (42)

If the string obtained by the mutation operation corresponds to an infeasible solution, another string is selected and mutation is carried out again.

3.6 Parameters of the Algorithm

As the parameters of Algorithm 1, we use the following values:

    t* = 2000,  M = 60
    Probability of selecting crossover in Step 3 = 0.9
    Probability of selecting mutation in Step 3 = 0.1

4. NUMERICAL RESULTS

As an example, we set the following values of the problem parameters: N = 6, B = C_0 = 1, B_0 = 0 and h_0 = 6. The processing time of each product is given by

    h_1 = 1, h_2 = 3, h_3 = 5, h_4 = 7, h_5 = 9, h_6 = 11          (43)

The amount Q_i of each product is assigned various values corresponding to the case studies. The number of case studies is decided by investigating all possibilities of Q_i which satisfy (1), (2) and (43). Consequently, it is found that the problem with Q = 10 has 43 case studies, and the problem with Q = 20 and T = 120 has 414 case studies.

First, it is shown that the present problem has a narrow feasible region. We define

    P_1 = number of possible sequences (14) under no consideration of the constraint (32)
    P_2 = number of possible sequences (14) under consideration of the constraint (32)

In Fig. 4 the ratio P_2/P_1 is plotted in non-increasing order over the respective case studies for Q = 10. It is observed from the figure that, for a sequence (14) chosen arbitrarily, the probability that the sequence is feasible is at most 4%.

Fig. 4. Rate of feasible solutions (Q = 10).
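The three genetic operators described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the function names are hypothetical, `fitness` implements (35) with the time-varying U of (36), `split_positions` implements Algorithm 3, and `crossover` and `mutate` implement (40) and (41)-(42).

```python
import random

def fitness(Z, t, t_star):
    """Fitness (35) with the time-varying U of (36).
    Z is the list of objective values Z_1..Z_M of the population."""
    U = -0.45 * t / t_star + 0.5
    return [U * min(Z) + max(Z) - z for z in Z]

def split_positions(g1, g2):
    """Algorithm 3: the set Lambda of positions at which the prefixes
    of g1 and g2 contain the same number of each product."""
    Q = len(g1)
    lam, diff = [0], {}
    for s in range(1, Q):
        diff[g1[s-1]] = diff.get(g1[s-1], 0) + 1   # count in Gamma_1 prefix
        diff[g2[s-1]] = diff.get(g2[s-1], 0) - 1   # minus count in Gamma_2 prefix
        if all(v == 0 for v in diff.values()):
            lam.append(s)
    lam.append(Q)
    return lam

def crossover(g1, g2):
    """Swap the segment between two randomly chosen elements of Lambda,
    as in (40); the children keep the product counts of the parents."""
    a, b = sorted(random.sample(split_positions(g1, g2), 2))
    return g1[:a] + g2[a:b] + g1[b:], g2[:a] + g1[a:b] + g2[b:]

def mutate(g):
    """Exchange the figures at two random positions, as in (41)-(42)."""
    g = list(g)
    a, b = random.sample(range(len(g)), 2)
    g[a], g[b] = g[b], g[a]
    return g
```

For the two parent strings of Fig. 3, `split_positions` returns [0, 5, 8, 13], matching Λ = {0, 5, 8, 13} quoted in the text.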

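For small instances, the ratio P_2/P_1 can be obtained by brute-force enumeration of all distinct sequences. The sketch below assumes the parameter values quoted above (B = C_0 = 1, B_0 = 0, h_0 = 6, so the second case of (32) applies); the function names are hypothetical.

```python
from itertools import permutations

H = {1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11}   # processing times of (43)
B, B0, C0, H0 = 1, 0, 1, 6                   # example parameters (B0 + C0 <= B)

def feasible(seq):
    """Check (32) for one sequence, second case (B0 + C0 <= B)."""
    t = 0
    for s, i in enumerate(seq, start=1):
        t += H[i]                            # t(s) of (15)
        if not (s - C0) * H0 <= t <= (s + B0) * H0:
            return False
    return True

def ratio(amounts):
    """amounts maps product number -> Q_i; returns (P_1, P_2)."""
    base = [i for i, q in amounts.items() for _ in range(q)]
    seqs = set(permutations(base))           # distinct sequences (14) only
    return len(seqs), sum(feasible(s) for s in seqs)
```

For example, with one unit each of A_3 and A_4, the two distinct orders give P_1 = 2 and only A_3, A_4 is feasible, so P_2 = 1.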

Second, the validity of the formula (36) is shown in Table 1 for 50 case studies with Q = 50. It is observed from the table that the time-varying value of U given by (36) is more effective than a fixed value of U.

Table 1. Dependence of the objective value on U (Q = 50).

    U                           Objective value: average (min - max)
    (a) 0.5 (constant)          349 (259 - 504)
    (b) 0.05 (constant)         387 (255 - 935)
    (c) 0.5 → 0.05 (by (36))    318 (246 - 440)

In order to investigate the effectiveness of the present method, numerical calculations have been executed for various case studies. We compare the results with those obtained by other methods, such as the greedy method (Sannomiya et al., 1991) and the simulated annealing method (Aarts and Korst, 1989). The problems with Q ≤ 20 are small-scale problems, and the optimal solution can be obtained for such problems by applying the branch-and-bound method.

Table 2 shows the computational results obtained for the cases where Q = 10 and Q = 20. The objective value is shown as a relative value, i.e. the objective value obtained divided by the optimal value. Table 3 shows the computational results for the cases where Q = 30, Q = 40 and Q = 50. In these cases the optimal solution cannot be obtained, because too much computation time would be required. In addition, the problems with Q ≥ 30 have too many case studies; therefore, for these problems, our consideration is confined to 50 case studies. It is observed from the two tables that the genetic algorithm takes much computation time but finds a good suboptimal solution, as compared with the greedy method and the simulated annealing method.

Table 2. Comparison of the computation results for the cases where Q ≤ 20.

    (a) Q = 10 (43 case studies)
                                              Greedy    Simulated   Genetic
                                              method    annealing   algorithm
                                                        method
    Average objective value (normalized)      1.197     1.457       1.028
    Number of times the optimal solution
    is obtained                               2         5           28
    Average CPU time (sec)                    0.0058    14.47       49.70

    (b) Q = 20 (414 case studies)
                                              Greedy    Simulated   Genetic
                                              method    annealing   algorithm
                                                        method
    Average objective value (normalized)      1.311     1.478       1.083
    Number of times the optimal solution
    is obtained                               6         6           42
    Average CPU time (sec)                    0.0061    77.90       142.81

Table 3. Comparison of the computation results for the cases where Q > 20.

    (a) Q = 30 (50 case studies)
                                              Greedy    Simulated   Genetic
                                              method    annealing   algorithm
                                                        method
    Average objective value                   242.79    256.03      176.53
    Average CPU time (sec)                    0.017     227.65      388.02

    (b) Q = 40 (50 case studies)
    Average objective value                   368.67    310.33      256.88
    Average CPU time (sec)                    0.022     403.76      492.19

    (c) Q = 50 (50 case studies)
    Average objective value                   452.39    379.43      317.64
    Average CPU time (sec)                    0.029     704.18      775.80

Figure 5 shows the objective value obtained by the genetic algorithm in the case of Q = 10. The case numbers in the figure correspond to those of Fig. 4. As shown in the figure, the genetic algorithm gives good results in spite of the narrow feasible region.

Fig. 5. Objective value obtained by the genetic algorithm (Q = 10).

In Fig. 6, the objective values of the genetic algorithm (GA) and the greedy method (GM) are compared for each case study in the case of Q = 50. Figure 7 shows the corresponding comparison between the genetic algorithm and the simulated annealing method (SA).

Fig. 6. Comparison of the objective value between the genetic algorithm and the greedy method (Q = 50). GA: genetic algorithm; GM: greedy method.

Fig. 7. Comparison of the objective value between the genetic algorithm and the simulated annealing method (Q = 50). GA: genetic algorithm; SA: simulated annealing method.

5. CONCLUDING REMARKS

An optimal production ordering problem has been considered for an assembly line system. The system consists of two subsystems. One is a machine whose function is to process a set of products. The other is a carrier, in this case a worker, who carries the products. These two subsystems are required to continue their operations without rest in order to increase the efficiency of the system operation. In addition, the capacity of the buffers connecting the subsystems is limited. In order to satisfy these conditions, the two subsystems must be synchronized in a certain sense. However, this constraint is severe: the feasibility of the problem strongly depends on the order of the products to be processed on the machine.

In the present problem, an optimal production ordering has been determined in such a way that the production rate of each product is kept as constant as possible. As pointed out previously by Kotani (1987), parts or materials should be used at a constant rate in order to process the products in a kanban-controlled assembly system, i.e. a kind of just-in-time system. In the present paper we did not consider the use of parts or materials for processing the products; this simplifies the problem by omitting insignificant constraints. However, the present problem formulation is also applicable to the just-in-time system by supplementing it with the constraints which define the relationship between the products and the parts.

For obtaining a suboptimal solution, the genetic algorithm has been applied to the present problem. The genetic algorithm is one of the effective approaches for solving combinatorial optimization problems. The algorithm is a kind of random search method, but it aims at finding a good solution efficiently by using the genetic operators. Various ideas and procedures have been proposed for the details of the algorithm.

The genetic algorithm proposed in this paper is considered to have the following merits:
1) The present problem has a narrow feasible region. Nevertheless, feasible solutions can be obtained through the generation of the initial population and through the crossover operation.
2) Either the crossover operation or the mutation operation is necessarily applied at each generation in order to search a new region.
3) The reproduction operation is carried out in a deterministic way in order to increase the efficiency of the search. A time-varying parameter is introduced in the operation to prevent the individual strings from converging at a relatively early generation.

The effectiveness of the algorithm has been compared with that of other methods, such as the greedy method and the simulated annealing method, from the viewpoints of accuracy and computation time. It was observed from numerical results of 607 case studies that the genetic algorithm takes much computation time but finds a better solution than the other methods. Further, the genetic algorithm was found to be effective in spite of the narrow feasible region.

REFERENCES

Aarts, E. and Korst, J. (1989). Simulated Annealing and Boltzmann Machines. John Wiley, Chichester.

Coroyer, C. and Liu, Z. (1991). Effectiveness of heuristics and simulated annealing for the scheduling of concurrent tasks - an empirical comparison. Rapports de Recherche No. 1379, INRIA.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading.

Kotani, S. (1987). Mathematical treatment of kanban controlled systems. Communications of the Operations Research Society of Japan, 32, 730-738 (in Japanese).

Nishikawa, Y. and Tamaki, H. (1991). A genetic algorithm as applied to the jobshop scheduling. Trans. of the Society of Instrument and Control Engineers, 27, 592-599 (in Japanese).

Sannomiya, N., Torii, H., Iima, H. and Kawai, S. (1991). Just-in-time optimization in a production ordering problem. Papers of Technical Meeting on Systems and Control, SC-91-4, IEE Japan (in Japanese).