Journal of Mechanical Working Technology, 20 (1989) 403-413
Elsevier Science Publishers B.V., Amsterdam - Printed in The Netherlands

FORMATION OF MANUFACTURING CELLS BY CLUSTER-SEEKING ALGORITHMS

P.H. Gu and H.A. ElMaraghy

Centre for Flexible Manufacturing Research and Development, McMaster University, Hamilton, Ontario, Canada, L8S 4L7

SUMMARY
This paper presents three cluster-seeking algorithms - K-means, revised K-means and Isodata - for the formation of part families and machine cells. These algorithms are based on the concept of pattern recognition and are capable of producing variable-size, mutually independent groups of parts and/or machines without excluding exceptional components. The algorithms are compared with existing grouping algorithms, and examples are used to demonstrate the effect of clustering criteria on the final solutions. It has been found that the Isodata algorithm is more efficient and more flexible than existing machine/component matrix manipulation techniques.

INTRODUCTION
Cellular manufacturing enables small-batch production to achieve the high productivity and low cost normally associated with mass production. Manufacturing cells may be used by themselves or as modules in flexible manufacturing systems. Part classification and grouping into families with similar geometric and/or processing attributes are the basic concepts leading to the formation of manufacturing cells. It is also a prerequisite for the successful development of any flexible manufacturing system.

Two main approaches in Group Technology are generally applied for forming machine cells and grouping parts into families: coding systems and Production Flow Analysis (PFA). The first coding system was developed by Opitz (1970). A number of other coding systems, such as MICLASS, KK, DCLASS and COFORM (Rembold et al., 1985), were also developed. Parts coding and classification systems are concerned with the description of part characteristics. Production Flow Analysis, originally developed by Burbidge, is concerned with the sequence of production processes. PFA is an analytical technique which finds the groups and families by analyzing the information given in the component route cards.

A number of algorithms for parts grouping have been developed. Burbidge (1971) presented a manual technique to form part families and machine groups which is particularly suitable for small problems. The simple linkage cluster analysis approach of numerical taxonomy was applied by McAulley (1972). A similarity coefficient for each machine pair is computed and a tree diagram called a dendrogram is constructed. The dendrogram is simply a pictorial representation of the bonds of similarity between machines. This approach can be used manually for simple problems. For large problems, a minimum spanning tree method was proposed by McAulley, based on the algorithms given by Ross (1969). McCormick et al. (1972) proposed the bond energy approach, in which the bond energy is defined as the product of the adjoining elements in the machine-component matrix.


This method requires long computing time, and heuristic methods, leading to approximate solutions, must be used for problems of realistic size. An iterative algorithm called the Rank Order Clustering (ROC) algorithm was implemented on computer by King (1980); it is designed to generate diagonalized groupings of the machine-component matrix. The algorithm rearranges the rows and columns of the machine-component matrix in an iterative manner that eventually, in a finite number of steps, produces a matrix in which both rows and columns are arranged in order of decreasing value when read as binary words. The algorithm is simple and can easily be implemented on computers, but it cannot provide final mutually independent groupings. Chan and Milner (1982) developed an approach called the Direct Clustering Algorithm (DCA). It forms families and groups by using blocks and rods, and by changing the sequence in which components and machines are listed in the matrix. Based on the available descriptions, both the ROC and DCA algorithms contain six solution steps, and the termination criterion is exactly the same. The solution obtained by the Direct Clustering Algorithm is identical to Burbidge's trial-and-error result. Based on an analysis of solutions to the same problem produced by Burbidge, Chan et al. and King, the only difference between the algorithms is that King's solution produces one group which is divided into two by the others. Han and Ham (1986) reported a multi-objective cluster analysis for part family formation using Goal Programming, based on the concepts of group technology and parts coding: all parts are coded first, then the method is applied to form part families. Wu et al. (1986) applied a syntactic pattern recognition method to the design of a cellular manufacturing system.

This paper presents a cluster-seeking approach to group parts into families and form machines into cells. The K-means, revised K-means and Isodata algorithms are discussed.

CLUSTER-SEEKING ALGORITHMS
The group formation of cellular manufacturing can be viewed as unsupervised learning when only one set of component routes is available. The unsupervised learning problem may be stated as that of identifying classes in a given set of patterns, which are components in the context of this work, in order to group all given components into families and form the machines into cells. The application of cluster-seeking algorithms is, in principle, straightforward. Suppose that a set of components {X1, X2, ..., XN} has known operation routes and that the component families and associated machine cells are unknown. The following algorithms may be used to identify representative cluster centres. The resulting cluster domains may then be interpreted as different component families and associated machine cells. The cluster centres are reference points in the pattern space, and the number of centres indicates the number of part families and machine cells. In statistical pattern recognition, a pattern (a component in this context) is usually expressed as a vector X:

X = [x_1, x_2, ..., x_n]^T    (1)

Each element of the vector represents one attribute dimension. In this problem, X is a component and the entries x_i, i = 1, 2, ..., n, encode its process route in terms of the machines it requires. If a machine is required for a given process, the corresponding x_i = 1; otherwise x_i = 0.
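For illustration (a hypothetical route, not one taken from the case study below): with n = 16 machines, a component whose route requires machines 1, 4 and 7 would be represented as X = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]^T.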

Since the components are to be clustered, the similarity between them must be determined. In the cluster-seeking algorithms, the measure of similarity is defined as the Euclidean distance between two pattern vectors X and Z (Tou and Gonzalez, 1974):

D = ||X - Z||    (2)

Based on the measure of similarity between the patterns representing process plans, a clustering criterion is required for partitioning the given data into component families and associated machine cells. The clustering criterion used is based on optimizing a certain performance index. One of the commonly used performance indices is the sum of the squared errors:

J = \sum_{j=1}^{N_c} \sum_{X \in S_j} ||X - \mu_j||^2    (3)

where N_c is the number of cluster domains, i.e. component families, S_j is the set of samples belonging to the jth domain, i.e. the jth group of components in this research, and \mu_j is the component mean vector of set S_j. The number of samples in S_j is N_j and X is a sample, i.e. one component. The value of \mu_j is computed as follows:

\mu_j = (1/N_j) \sum_{X \in S_j} X    (4)
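For concreteness, a minimal Python sketch of this performance index, assuming 0/1 route vectors stored in a NumPy array; the function and variable names are illustrative and not taken from the paper, and every domain is assumed non-empty.

import numpy as np

def cluster_means(X, labels, n_clusters):
    # Eq. (4): mu_j is the mean vector of the components in S_j.
    return np.array([X[labels == j].mean(axis=0) for j in range(n_clusters)])

def sum_of_squared_errors(X, labels, centres):
    # Performance index J of Eq. (3): for each cluster domain S_j,
    # accumulate the squared Euclidean distances from its members
    # to the domain mean vector mu_j.
    J = 0.0
    for j, mu in enumerate(centres):
        members = X[labels == j]            # S_j: components of family j
        J += np.sum((members - mu) ** 2)    # sum of ||X - mu_j||^2
    return J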

Revised K-Means Algorithm
The K-means algorithm uses the first K samples as initial cluster centres, which influences the clustering results. To overcome this drawback, a revised K-means algorithm is proposed and implemented in this work. The basic idea is to use the maximum-distance concept (Tou and Gonzalez, 1974) to determine the initial cluster centres; the K-means algorithm is then applied. A Python sketch of the whole procedure follows the steps below.

1. To find all possible cluster centres, set a threshold T as the criterion for creating a new centre when the distance to the existing centres exceeds T.

2. Randomly choose a first cluster centre from the given set of samples.

3. Compare all components with the existing cluster centres. A component X_i is added as a new cluster centre if, for every existing centre Z_p,

\sum_{j=1}^{n} |X_{ij} - Z_{pj}| > T,   i = 1, 2, ..., N,   p = 1, 2, ..., K    (5)

where X_{ij} is the jth component of the ith sample in the sample set, Z_{pj} is the jth component of the pth cluster centre, N is the number of samples (components), and K is the number of cluster centres. This process determines all initial cluster centres.

4. Start the iteration. At the kth iteration, distribute the samples {X} (the component set) among the K cluster domains (component families), using the relation

X \in S_j(k)  if  ||X - Z_j(k)|| < ||X - Z_i(k)||    (6)

for all i = 1, 2, ..., K, i \neq j, where S_j(k) denotes the set of components whose cluster centre is Z_j(k).

5. After new domains (groups) have been formed, update the cluster centres such that the sum of the squared distances from all points in S_j(k) to the new cluster centre is minimized. In other words, the new cluster centre Z_j(k+1) is computed so that the performance index

J_j = \sum_{X \in S_j(k)} ||X - Z_j(k+1)||^2    (7)

is minimized. The Z_j(k+1) which minimizes this performance index is simply the sample mean of S_j(k). The new cluster centre, therefore, is determined by

Z_j(k+1) = (1/N_j) \sum_{X \in S_j(k)} X    (8)

where N_j is the number of samples in S_j(k).

6. Check the convergence of the iteration by comparing the movement of the centres with a preset criterion:

|Z_{ij}(k+1) - Z_{ij}(k)| <= \epsilon,   i = 1, 2, ..., K,   j = 1, 2, ..., n    (9)

If it is satisfied, the iteration terminates; otherwise go to step 4.
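The following is a minimal Python sketch of the revised K-means procedure described above, assuming 0/1 route vectors in a NumPy array. It uses the Euclidean distance of Eq. (2) throughout, including in the threshold test (where the paper's Eq. (5) compares component-wise sums), and the names T, eps and max_iter are illustrative rather than taken from the paper.

import numpy as np

def initial_centres(X, T, rng):
    # Steps 1-3: choose a random first centre, then make any
    # component whose distance to every existing centre exceeds
    # the threshold T a new centre (maximum-distance concept).
    centres = [X[rng.integers(len(X))].astype(float)]
    for x in X:
        if all(np.linalg.norm(x - z) > T for z in centres):
            centres.append(x.astype(float))
    return np.array(centres)

def revised_k_means(X, T, eps=1e-6, max_iter=100, seed=0):
    # Steps 4-6: assign each component to its nearest centre
    # (Eq. (6)), recompute each centre as its domain mean (Eq. (8)),
    # and stop when no centre moves by more than eps (Eq. (9)).
    rng = np.random.default_rng(seed)
    Z = initial_centres(X, T, rng)
    for _ in range(max_iter):
        dists = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        Z_new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else Z[j]          # keep a centre whose domain emptied
                          for j in range(len(Z))])
        if np.all(np.abs(Z_new - Z) <= eps):
            break
        Z = Z_new
    return labels, Z_new

The part families are read from labels; each family's machine cell is the set of machines (the non-zero dimensions) used by its members.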

Isodata Algorithm
The Isodata algorithm (Tou and Gonzalez, 1974) identifies the cluster centres and associated cluster domains, which are interpreted as part families; the machines required by each part family form its cell. The algorithm requires a set of N_c initial cluster centres, Z_1, Z_2, ..., Z_{N_c}. This set need not be equal in number to the desired cluster centres and can be formed by selecting samples from the given set of components. Other parameters which should be specified before executing the iteration, or at the first step of the iteration, are:

K = number of cluster domains (groups) desired;
N_c = K at the beginning of the iteration;
Q_N = a parameter against which the number of samples in a cluster domain is compared;
Q_s = standard deviation parameter;
Q_c = lumping parameter;
L = maximum number of pairs of cluster centres which can be lumped together;
I = maximum number of iterations allowed;
\epsilon = convergence criterion.

The Isodata algorithm is described below; a condensed Python sketch follows the steps.

1. Distribute the N samples (components) among the present cluster centres, using the relation

X \in S_j  if  ||X - Z_j|| < ||X - Z_i||,   i = 1, 2, ..., N_c,  i \neq j    (10)

for all X in the component set, where S_j represents the subset of components assigned to cluster centre Z_j.

2. Discard sample subsets with fewer than Q_N members; that is, if for any j, N_j < Q_N, discard S_j and reduce N_c by 1.

3. Update each cluster centre Z_j, j = 1, 2, ..., N_c, by setting it equal to the sample mean of its corresponding set S_j:

Z_j = (1/N_j) \sum_{X \in S_j} X,   j = 1, 2, ..., N_c    (11)

where N_j is the number of components in S_j.

4. Compute the average distance D_j of the components in cluster domain S_j from their corresponding cluster centre, using the relation

D_j = (1/N_j) \sum_{X \in S_j} ||X - Z_j||,   j = 1, 2, ..., N_c    (12)

5. Compute the overall average distance of the components from their respective cluster centres, using the relation

D = (1/N) \sum_{j=1}^{N_c} N_j D_j    (13)

6. If this is the last iteration, set Q_c = 0 and go to step 10. If this is an even-numbered iteration, or if N_c >= 2K, go to step 10; otherwise continue.

7. Find the standard deviation vector \sigma_j = (\sigma_{1j}, \sigma_{2j}, ..., \sigma_{nj})^T for each component subset, using the relation

\sigma_{ij} = \sqrt{(1/N_j) \sum_{X_k \in S_j} (x_{ik} - z_{ij})^2},   i = 1, 2, ..., n,   j = 1, 2, ..., N_c    (14)

where n is the sample dimensionality, x_{ik} is the ith component of the kth sample in S_j, z_{ij} is the ith component of Z_j, and N_j is the number of components in S_j. Each component of \sigma_j represents the standard deviation of the samples in S_j along a principal coordinate axis.

8. Find the maximum component of each \sigma_j, j = 1, 2, ..., N_c, and denote it by \sigma_{j,max}.

9. If for any \sigma_{j,max}, j = 1, 2, ..., N_c, we have \sigma_{j,max} > Q_s and either

(a)  D_j > D and N_j > 2(Q_N + 1)    (15)

or

(b)  N_c <= K/2    (16)

then split Z_j into two new cluster centres Z_j+ and Z_j-, and increase N_c by 1. Cluster centre Z_j+ is determined by adding a given quantity \gamma_j to the component of Z_j which corresponds to the maximum component of \sigma_j; Z_j- is formed by subtracting \gamma_j from the same component of Z_j. One way of specifying \gamma_j is to let it equal some fraction of \sigma_{j,max}, that is, \gamma_j = k \sigma_{j,max}, where 0 < k <= 1. The basic requirement in choosing \gamma_j is that it should be sufficient to provide a detectable difference in the distance from an arbitrary sample to the two new cluster centres, but not so large as to change the overall cluster domain arrangement appreciably. If cluster splitting took place in this step, go to step 1; otherwise continue.

10. Compute the pairwise distances D_ij between all cluster centres:

D_ij = ||Z_i - Z_j||,   i = 1, 2, ..., N_c - 1,   j = i+1, ..., N_c    (17)

11. Compare the distances D_ij against the parameter Q_c. Arrange the L smallest distances which are less than Q_c in ascending order:

[D_{i1 j1}, D_{i2 j2}, ..., D_{iL jL}]

where D_{i1 j1} < D_{i2 j2} < ... < D_{iL jL} and L is the maximum number of pairs of cluster centres which can be lumped together.

12. With each distance D_{il jl} there is an associated pair of cluster centres Z_{il} and Z_{jl}. Starting with the smallest of these distances, perform a pairwise lumping operation according to the following rule: for l = 1, 2, ..., L, if neither Z_{il} nor Z_{jl} has been lumped in this iteration, merge the two cluster centres using the relation

Z_l* = [N_{il} Z_{il} + N_{jl} Z_{jl}] / (N_{il} + N_{jl})    (18)

Delete Z_{il} and Z_{jl} and reduce N_c by 1. It should be noted that only pairwise lumping is allowed and that a lumped cluster centre is obtained by weighting each old cluster centre by the number of components in its domain. Experimental evidence indicates that more complex lumping can produce unsatisfactory results (Tou and Gonzalez, 1974). The above procedure makes the lumped cluster centres representative of the true average point of the combined subsets. It is also important to note that, since a cluster centre can be lumped only once, this step will not always result in L lumped centres.

13. Check the convergence of the iteration using the criterion \epsilon set by the user:

|Z_{ij}(k+1) - Z_{ij}(k)| < \epsilon,   i = 1, 2, ..., N_c,   j = 1, 2, ..., n    (19)

If the above expression is satisfied for all i and j, the iteration is terminated and all derived solutions are printed out. If it is not satisfied and this is the last iteration, the algorithm fails, and an adjustment of parameters such as K, Q_N, Q_s, Q_c or L should be made. Otherwise go to step 1. This algorithm allows the user to change the initial parameters during the iterations. The following case study illustrates the Isodata algorithm.
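The following condensed Python sketch follows the numbered steps above under stated simplifying assumptions: the split increment is fixed at gamma_j = 0.5 * sigma_j,max (the paper allows any 0 < k <= 1), the convergence test of Eq. (19) is replaced by a fixed number of iterations I, and empty-domain edge cases are handled crudely. All names are illustrative.

import numpy as np

def isodata(X, K, Z_init, QN, Qs, Qc, L, I):
    # X: (N, n) 0/1 route vectors; Z_init: initial cluster centres.
    Z = [np.asarray(z, dtype=float) for z in Z_init]
    for it in range(1, I + 1):
        # Step 1 (Eq. 10): assign each component to its nearest centre.
        d = np.array([[np.linalg.norm(x - z) for z in Z] for x in X])
        labels = d.argmin(axis=1)
        # Step 2: discard domains with fewer than QN members, then
        # Step 3 (Eq. 11): recompute surviving centres and reassign.
        keep = [j for j in range(len(Z)) if np.sum(labels == j) >= QN]
        Z = [X[labels == j].mean(axis=0) for j in keep]
        d = np.array([[np.linalg.norm(x - z) for z in Z] for x in X])
        labels = d.argmin(axis=1)
        # Steps 4-5 (Eqs. 12-13): per-domain and overall average distances.
        Dj = [d[labels == j, j].mean() if np.any(labels == j) else 0.0
              for j in range(len(Z))]
        Dbar = sum(np.sum(labels == j) * Dj[j] for j in range(len(Z))) / len(X)
        # Step 6: split only on odd iterations while fewer than 2K domains.
        last = it == I
        if not last and it % 2 == 1 and len(Z) < 2 * K:
            # Steps 7-9 (Eqs. 14-16): split one over-spread domain.
            split = False
            for j in range(len(Z)):
                S = X[labels == j]
                sigma = np.sqrt(((S - Z[j]) ** 2).mean(axis=0))
                i_max = int(sigma.argmax())
                if sigma[i_max] > Qs and (
                        (Dj[j] > Dbar and len(S) > 2 * (QN + 1))
                        or len(Z) <= K / 2):
                    gamma = 0.5 * sigma[i_max]        # assumed k = 0.5
                    Z_plus, Z_minus = Z[j].copy(), Z[j].copy()
                    Z_plus[i_max] += gamma
                    Z_minus[i_max] -= gamma
                    Z[j] = Z_plus
                    Z.append(Z_minus)
                    split = True
                    break
            if split:
                continue                              # back to step 1
        # Steps 10-12 (Eqs. 17-18): lump the closest centre pairs;
        # per step 6, Qc is forced to 0 on the last iteration.
        qc = 0.0 if last else Qc
        pairs = sorted((np.linalg.norm(Z[i] - Z[j]), i, j)
                       for i in range(len(Z)) for j in range(i + 1, len(Z)))
        lumped, drop = set(), set()
        for dist, i, j in pairs[:L]:
            if dist < qc and i not in lumped and j not in lumped:
                Ni, Nj = np.sum(labels == i), np.sum(labels == j)
                Z[i] = (Ni * Z[i] + Nj * Z[j]) / (Ni + Nj)
                lumped.update((i, j))
                drop.add(j)
        Z = [z for j, z in enumerate(Z) if j not in drop]
    return labels, np.array(Z)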

A CASE STUDY
To illustrate the use of the above algorithms, a case study, shown in Figure 1, which consists of 43 components and 16 machine tools, was chosen. It was originally provided by Burbidge (1971), who used a manual method to obtain a solution consisting of five component groups and three exceptional components (Figure 2).

Figure 1: The Machine-Components Matrix Presented by Burbidge

Figure 2: Solution by Burbidge

This example was subsequently used by King (1980) with his Rank Order Clustering algorithm, and by Chan and Milner (1982) with their Direct Clustering Algorithm. In order to compare the cluster-seeking algorithms developed in this work with the above algorithms, the same problem is used in this case study.

Comparison of Different Cluster-Seeking Algorithms
The results in Figure 3 were derived by specifying the number of desired groups as four and the associated initial cluster centres, arbitrarily, as the first K components in the set of 43. In the revised K-means algorithm, the initial cluster centres are determined by applying the maximum-distance algorithm with a pre-determined threshold T. Since the determination of subsequent cluster centres is based on the selected initial cluster centre, the first one is chosen randomly from the given 43 components. Comparing Figures 3 and 4, it is found that, for the same number of machine cells, the revised K-means solution requires fewer extra machines than the original K-means solution: 24 machines for the revised K-means versus 31 machines for K-means.

Figure 3: Solution by K-Means with K = 4

Figure 4: Solution by Revised K-Means with Threshold T = 6

The Isodata algorithm is affected by its input parameters, which include the number of desired groups, the minimum number of members in a group, the standard deviation parameter, the lumping parameter used as the basis for merging two cluster centres during each iteration, and the convergence criterion. For the same example, the solutions produced by the Isodata algorithm are shown in Figures 5 and 6. Only 22 machines are required to form four cells, as shown in Figure 5.

Figure 5: Solution by ISODATA with Q_N = 6, N_c = 5, Q_s = 1 and Q_c = 1

Figure 6: Solution by ISODATA with Q_N = 5, N_c = 6, Q_s = 0.2 and Q_c = 1

Comparison of Isodata with Other Algorithms
If the result in Figure 5 is compared with that obtained by King (1980), it is found that if component No. 9 in King's solution is moved to the lower-corner group and machine No. 11 is added, the two solutions become identical. If the solution in Figure 6 is compared with the solutions by Burbidge (1971) and Chan and Milner (1982), it is found that if all three exceptional components are moved to their corresponding groups, as shown in Figure 6, and the corresponding three additional machines are added to those cells, the solutions become identical. That is, machine No. 16 is added to the cell containing machines No. 3, 6 and 14; machine No. 11 is placed in the cell with machines No. 4, 5, 6, 8 and 15; and machine No. 14 is added to the cell containing machines No. 1, 2, 6, 8, 9 and 16. These observations are quite interesting.

In general, all machines used to form cells can be thought of as one cell and all components as one family. When that cell is split into a number of new cells and the components are divided into different families, it is possible that certain types of machines will be required by several cells. This means that more than one machine of the same type is needed if independent cells are to be formed. In fact, in a production environment, load balancing should also be considered; this can be done by checking the types and numbers of parts on the machines as well as the time they occupy the machines. If, for some reason, the additional machines cannot be provided, redesign of the part or of the machining sequence should be performed so that all cells can be formed using only the existing machines. A small sketch of the duplication check is given below.
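As an illustration of this check (an illustrative helper, not part of the original paper), the following sketch lists the machines that would have to be duplicated because more than one cell requires them:

import numpy as np

def machines_needing_duplication(A, labels):
    # A: (machines, components) 0/1 machine-component matrix;
    # labels[c]: family index assigned to component c.
    # A machine required by more than one family must either be
    # duplicated or have its parts re-routed/redesigned before
    # mutually independent cells can be formed.
    shared = {}
    for m in range(A.shape[0]):
        cells = {int(labels[c]) for c in range(A.shape[1]) if A[m, c]}
        if len(cells) > 1:
            shared[m + 1] = cells   # report 1-based machine numbers
    return shared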

DISCUSSIONS AND CONCLUSIONS
In this paper, cluster-seeking algorithms for cell formation are presented. These algorithms are flexible enough to adapt to real production situations. Compared with other techniques, it is found that Isodata can produce better results. However, intensive simulation is still required to determine the best values of the initial parameters, such as the desired number of clusters, the lumping parameter, the minimum number of components in each cell and the standard deviation parameter. This situation has since been improved (Gu, 1989) using optimization techniques. Based on the formed machine cells, a knowledge-based assignment system has been developed to integrate the cell formation procedure with a feature-based design system; it is discussed in detail by ElMaraghy and Gu (1988).

REFERENCES

Burbidge, J.L., 1971, "Production Flow Analysis", The Production Engineer, April/May issue, pp. 139-152.

Chan, H.M. and Milner, D.A., 1982, "Direct Clustering Algorithm for Group Formation in Cellular Manufacturing", Journal of Manufacturing Systems, Volume 1, pp. 65-74.

ElMaraghy, H.A. and Gu, P.H., 1988, "Knowledge-Based System for Assignment of Parts to Machine Cells", International Journal of Advanced Manufacturing Technology, 3(1), pp. 33-44.

Gu, P.H., 1989, "Artificial Intelligence Approach to Integration of Feature-Based Design and Manufacturing Tasks Planning", Ph.D. Thesis in preparation, Department of Mechanical Engineering, McMaster University, Canada.

Han, C. and Ham, I., 1986, "Multi-objective Cluster Analysis for Part Family Formations", Journal of Manufacturing Systems, Volume 5, Number 4, pp. 223-230.

King, J.R., 1980, "Machine Component Group Formation Using ROC Algorithm", International Journal of Production Research, April issue, pp. 213-231.

McAulley, J., 1972, "Machine Grouping for Efficient Production", The Production Engineer, February issue, pp. 53-57.

McCormick, W.T., Schweitzer, P.J. and White, T., 1972, "Problem Decomposition and Data Reorganization by a Clustering Technique", Operations Research, pp. 993-1009.

Opitz, H., 1970, "A Classification System to Describe Workpieces", Pergamon, Oxford, England.

Rembold, U., Blume, C. and Dillmann, R., 1985, Computer-Integrated Manufacturing Technology and Systems, Marcel Dekker, Inc., New York.

Ross, G.J.S., 1969, "Algorithms AS13, AS14 and AS15", Applied Statistics, 18(1), pp. 103-110.

Tou, J.T. and Gonzalez, R.C., 1974, Pattern Recognition Principles, Addison-Wesley Publishing Company, Inc.

Wu, H.L., Venugopal, R. and Barash, M.M., 1986, "Design of a Cellular Manufacturing System: A Syntactic Pattern Recognition Approach", Journal of Manufacturing Systems, Volume 5, Number 2, pp. 82-88.