Journal of Manufacturing Systems, Volume 11, No. 3
Generalized Part Family Formation Using Neural Network Techniques
Y.B. Moon and S.C. Chi, Syracuse University, Syracuse, NY
Abstract
The spontaneous generalization capability of neural network models has been exploited for solving the part family formation problem. Our approach combines the useful capabilities of the neural network technique with the flexibility of the similarity coefficient method. Notably, the constraint satisfaction model of neural networks is adopted. In generalized part family formation, several practical factors such as sequence of operations, lot size, and multiple process plans are also considered. Our approach proves to be highly flexible in satisfying various requirements and efficient for integration with other manufacturing functions.

Keywords: Group Technology, Part Family Formation, Neural Networks, Constraint Satisfaction

Introduction
Group Technology (GT) is a manufacturing philosophy that can be applied to improve production efficiency, reduce production cost, and manage batch-oriented production systems more easily.¹¹,¹²,²⁸ GT identifies and brings together related or similar parts and processes in terms of their similarities in design and manufacturing. The part family formation problem, which centers on production flow analysis (PFA), involves partitioning a number of parts into families, and a number of machines into manufacturing cells, based on the similarities among them. This is achieved by analyzing route sheets or process plans that specify manufacturing data such as operation sequences and quantity requirements. Depending on the type of information utilized, the family formation problem can be further classified into standard family formation and generalized family formation. In standard family formation, a machine-part incidence matrix that consists of 0 and 1 entries is the only information utilized. Burbidge¹,² developed and extended GT applications through the implementation of standard family formation in various manufacturing facilities. McAuley¹⁷ regarded the formation problem as a clustering analysis in his single linkage (SLINK) clustering algorithm. Other methods for standard family formation were developed by McCormick,²⁰ Carrie,⁴ Rajagopalan and Batra,²² King,¹³ Chan and Milner,⁵ Wu et al.,²⁹ Ballakur and Steudel,³ and Chandrasekharan and Rajagopalan.⁷

Generalized family formation is more practical than standard family formation, but it is also more complicated because additional manufacturing factors and production constraints are involved. Manufacturing factors that have been considered include different process plans for each part,¹⁵,²⁶ the sequence of operations in the part production routing,¹⁰ part production cost,²⁴ part processing time,⁹ and lot size.⁹,¹⁴ Production constraints that may be incorporated in generalized family formation include the maximum frequency of trips that can be handled by each material handling system, the maximum processing time available on each machine, and the maximum number of machines in each cell. After analyzing the existing generalized family formation approaches, we find that manufacturing information such as production volume, sequence of operations, and processing time is handled easily if the similarity coefficient method (SCM) is used. SCM generates pairwise similarity coefficients among entities and uses this similarity measure to generate machine cells and part families. McAuley¹⁷ first adopted SCM in his cluster analysis approach, SLINK clustering analysis. Later, ideas based on SCM to solve the formation problem were
developed by other researchers such as Carrie,⁴ De Witte,⁹ Rajagopalan and Batra,²² Waghodekar and Sahu,²⁷ and Chandrasekharan and Rajagopalan.⁶ We have developed an approach that applies the spontaneous generalization capability of neural network models to the part family formation problem. As a kind of SCM, our method is more flexible and considers many manufacturing factors and constraints within a unified framework. The following sections introduce the background of our method and provide a detailed explanation.
Constraint Satisfaction (CS) Model of Neural Networks
Information processing in neural networks takes place through the interactions of large numbers of interconnected elements called processing units. These processing units influence each other by generating excitatory and inhibitory messages. The units may represent hypotheses or possible goals and actions; the interconnections among the units may represent the constraints or relations between them. When learning or information storage occurs, the connections are modified and the activation values of the processing units are updated.¹⁶,¹⁸,¹⁹

The constraint satisfaction (CS) model²³ is a neural network model in which the interconnection strengths among units are not changeable. The CS model is designed to find near-optimal solutions to problems with a large set of simultaneous constraints. The processing units are organized into several pools, each representing a specific characteristic of the chosen problem. In general, the units in a pool are mutually inhibitory: when a unit reaches the highest activation value, it tends to turn off the activation of the other units in its pool, so units within a pool compete with each other. Excitatory connections exist among units in different pools, however, and these units tend to activate each other. Each connection between two units represents a constraint. Since the constraints are not all equally important, each constraint is assumed to have its own importance value, and the goal of the system is to find a solution that simultaneously satisfies as many of the most important constraints as possible. In addition, each unit has a bias representing its priority, and an appropriate solution satisfies such prior information as much as possible.

To measure performance, goodness is defined as the degree of satisfaction with respect to the desired constraints. For a positive weight, the more the two connected units are activated, the better the constraint is satisfied; for a negative weight, the more the two units are activated, the less the constraint is satisfied. The goodness of unit i is expressed as

goodness_i = Σ_j W_ij a_i a_j + input_i a_i + bias_i a_i

where
W_ij = the connection weight between units i and j,
a_i = the activation value of unit i,
input_i = the external input value to unit i, and
bias_i = the bias value of unit i.

The overall goodness is simply the sum of the individual goodnesses:

overall goodness = Σ_i Σ_j W_ij a_i a_j + Σ_i input_i a_i + Σ_i bias_i a_i

The goal of the system is therefore to maximize the overall goodness. While the system is running, the overall goodness increases until all units reach their maximum or minimum activation values. When the goodness value becomes stable, a peak has been reached and the network has settled into a stable state.
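As an illustration only, the goodness measure and a simple settling loop can be sketched in Python as follows. The asynchronous hill-climbing update, the parameter values, and the function names are our own simplification of the constraint satisfaction schema, not the exact rule used in the implementation.

```python
import numpy as np

def goodness(W, a, ext, bias):
    """Overall goodness: sum_ij W_ij a_i a_j + sum_i input_i a_i + sum_i bias_i a_i."""
    return a @ W @ a + ext @ a + bias @ a

def run_cs_network(W, ext, bias, steps=200, rate=0.1, seed=0):
    """Settle the network by nudging each unit's activation in the direction of its
    net input, a simple hill-climbing heuristic on the goodness.  Activations are
    kept in [0, 1]; the network is considered settled after a fixed number of sweeps.
    """
    rng = np.random.default_rng(seed)
    a = np.full(len(ext), 0.5)                 # start every unit at a neutral value
    for _ in range(steps):
        for i in rng.permutation(len(ext)):    # asynchronous, random-order updates
            net = W[i] @ a + ext[i] + bias[i]
            a[i] = np.clip(a[i] + rate * net, 0.0, 1.0)
    return a
```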
Standard Family Formation: Spark Clustering Analysis
In standard family formation, the only information considered is the part-machine incidence matrix. The matrix is composed simply of 0 and 1 values; a value of 1 means that the corresponding part type must be processed by the corresponding machine type. Once the given information is expressed in matrix form, the main task of the part family formation problem is to find a clustering technique that transforms the initial matrix into a well-structured block-diagonal form. In other words, the goal is to create a proper
set of part families, and thus machine cells, with the least interaction between machine cells. We refer to our approach as spark clustering analysis (SCA).⁸
Construction of a Neural Network
Each machine and part is represented by a processing unit. All processing units are separated into three pools: a machine pool, a part pool, and an instance pool. The machine pool contains all the machine units, while the part pool is composed of all the part units. The units within these two pools may receive external signals, so that extra stimulus may be provided from outside. The units within the instance pool, called hidden units, cannot receive external signals. These hidden units may be instances of either machine units or part units.²¹ Without loss of generality, the hidden units are assumed from now on to be instances of part units.

Two types of connection matrices are established before the neural network is constructed. The first is the requirement incidence matrix, which represents the connection strengths between the machine pool and the instance pool (when the hidden units are instances of part units). An entry of this matrix has the value 1 if the corresponding machine is required to process the corresponding part; all other entries have the value 0. The second type is the similarity matrix, derived from the requirement incidence matrix using the Jaccard coefficient formula²⁵ presented in Equation (1). Two similarity matrices are constructed: the machine similarity matrix and the part similarity matrix. The entries in the machine similarity matrix are used as connection weights between units within the machine pool; similarly, the entries in the part similarity matrix are used as connection weights between units within the part pool. These connection weights W_ik for units i and k are derived as follows:

d(i, k) = a11 / (c - a00)   (1)

A = (Σ_i Σ_k d(i, k)) / n(n - 1),  i ≠ k   (2)

W_ik = 0, if i = k
W_ik = d(i, k) - A, otherwise   (3)

where
i, k = 1, ..., n,
d(i, k) = the Jaccard coefficient between units i and k,
a11 (or a00) = the number of positions at which units i and k both show the value 1 (or 0),
c = the total number of entries in a pattern,
A = the mathematical average of the Jaccard coefficients d(i, k), and
n = the number of entries in each pattern.

Here, a pattern corresponds to a machine (or a part) and a unit corresponds to a part (or a machine). The value of W_ik lies between -1 and 1. A positive weight means an excitatory connection between the processing units; conversely, negative weights cause units to inhibit each other. The weight between each part unit and its own shadow unit in the instance pool is always equal to 1.

An example requirement incidence matrix consisting of five part types and four machine types is presented in Figure 1. Figures 2 and 3 are developed from the Jaccard coefficient formula, and the machine similarity matrix and the part similarity matrix are given in Figures 4 and 5. The connectivity of the example network is partially illustrated in Figure 6. The network is made up of machine units designated M1, M2, M3, and M4, part units designated P1, P2, ..., P5, and hidden part units designated _P1, _P2, ..., _P5.
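Assuming the requirement incidence matrix is available as a 0/1 array with machines as rows and parts as columns, Equations (1) through (3) can be computed along the following lines. This is a sketch; the function name and the assumption that every row contains at least one 1 (so the Jaccard denominator is nonzero) are ours.

```python
import numpy as np

def similarity_weights(incidence):
    """Connection weights within one pool from Equations (1)-(3).

    `incidence` is a 0/1 array whose rows are the units being compared
    (machine rows for the machine pool; transpose it for the part pool).
    """
    X = np.asarray(incidence, dtype=float)
    n, c = X.shape                       # n units, c entries per pattern
    a11 = X @ X.T                        # positions where both units show 1
    a00 = (1 - X) @ (1 - X).T            # positions where both units show 0
    d = a11 / (c - a00)                  # Equation (1): Jaccard coefficients
    off = ~np.eye(n, dtype=bool)
    A = d[off].sum() / (n * (n - 1))     # Equation (2): average over i != k
    W = d - A                            # Equation (3)
    W[np.eye(n, dtype=bool)] = 0.0       # zero self-connections
    return W
```

Under that layout, the machine similarity matrix would be similarity_weights(incidence) and the part similarity matrix similarity_weights(incidence.T).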
Figure 1: Requirement Incidence Matrix
Figure 2: Matrix of Jaccard Coefficients for Machines
Figure 3: Matrix of Jaccard Coefficients for Parts
Figure 4: Machine Similarity Matrix
Figure 5: Part Similarity Matrix
Execution Procedure
After the network has been constructed, part family formation is achieved by the following procedure:
Step 1: Give an external input to the first available part only (a "spark").
Step 2: Run the neural network and store the result (unit activations).
Step 3: Select the activated parts as a family and the activated machines as a group.
Step 4: If all the parts are assigned, go to Step 5. Otherwise, go to Step 1.
Step 5: If any part is assigned to two or more families, or if any ambiguous part could not be assigned to any family easily, go to Step 6. Otherwise, stop the program.
Step 6: Put aside all the doubly assigned and ambiguous parts.
Step 7: Give one of these parts an external input.
Step 8: Run the network and decide which family this part should belong to.
Step 9: If all of these parts are assigned, stop. Otherwise, go to Step 7.
A diagram of this procedure is given in Figure 7. For a more detailed explanation and discussion of the neural network implementation, refer to Reference 21.
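A rough Python sketch of Steps 1 through 4 of this loop is shown below. The unit ordering (machine units first, then part units) and the run_network callable are assumptions made for illustration; Steps 5 through 9, which re-check ambiguous parts, are omitted.

```python
import numpy as np

def spark_clustering(W, n_machines, n_parts, run_network):
    """Steps 1-4 of the spark procedure (ambiguity handling of Steps 5-9 omitted).

    W           : full connection-weight matrix over machine, part, and instance units
    run_network : callable(W, external_input, bias) -> stable activation vector
    Machine units are assumed to occupy indices 0..n_machines-1 and part units
    the next n_parts indices; any instance units follow.
    """
    n_units = W.shape[0]
    bias = np.zeros(n_units)
    unassigned = set(range(n_machines, n_machines + n_parts))
    families = []
    while unassigned:
        spark = min(unassigned)                  # Step 1: spark the first available part
        ext = np.zeros(n_units)
        ext[spark] = 1.0
        a = run_network(W, ext, bias)            # Step 2: run and store activations
        machines = [i for i in range(n_machines) if a[i] > 0.5]
        parts = [i for i in unassigned if a[i] > 0.5] or [spark]
        families.append((parts, machines))       # Step 3: activated parts and machines
        unassigned -= set(parts)                 # Step 4: loop until all parts assigned
    return families
```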
Discussion
One spark generates two corresponding clusters (one for parts, the other for machines) simultaneously. The spark entity can be chosen arbitrarily from the set of unassigned entities. In ideal cases, clustering is finished when the unassigned entity set is empty. In real situations, however, ambiguous entities usually appear. A practical advantage of spark clustering analysis is the flexibility that allows the analyst to evaluate the performance of the obtained partitions and then repeatedly check the ambiguous entities one by one. Not only are ambiguous entities properly assigned to existing clusters, but the partitions are also improved by this repeated checking. Repetitive cycles through the spark clustering analysis continue to refine the partitions until a balanced and suitable result is obtained.

Another advantage of the described structure is that many decision factors are eliminated from the neural network computation. The number of hidden layers is fixed at one, and this hidden layer plays the role of a buffer that absorbs sudden changes in the units' activation values. Also, the number of units is fixed for each pool once the size of the problem (the number of machines and parts) is known. Since the connection weights are determined in advance and do not change, there is no need for training. We are interested in the stable activation values of the units, not in the learning aspects of the network. Usually a unit is considered activated if its activation value is greater than 0.5 in the stable condition.

Figure 6: Connectivity of a Machine-Part Network Represented by Connection Weights
Figure 7: Flow Diagram of the Experimental Procedure
Generalized Family Formation
Three practical factors are considered in our generalized family formation approach. The first is the sequence of operations, which has a significant effect on material handling cost and time. In clustering approaches used for standard family formation, the operations of a part are assigned to a corresponding group of machines regardless of the sequence of operations: the machines performing the first and last operations of a part may form one cell, while the machines performing the intermediate operations form another. Although the within-cell layout problem can be considered after the cell configurations are determined, it is more desirable to take the layout problem into account before the cells are formed.

The second factor is lot size. In standard family formation, the lot size of every part is considered to be the same. However, the required quantity of each part varies in actual situations, and the importance of each part is proportional to its lot size. For example, we may prefer to gather into one family two high-volume part types that share only some machines rather than two low-volume part types that share more of the same machines. To improve the effectiveness of our approach, lot size data are added so that the generated part families and machine cells reduce the actual intercellular moves.

The third extension is the involvement of multiple process plans. Each part can be manufactured by more than one process plan, and a process plan must be assigned to each part so that the overall performance is optimal.

Involvement of the Sequence of Operations
The sequence of operations for a part is an ordered list of the machines that process the part. This information is also taken from route sheets. The setup of the network system for involving the sequence of operations is the same as in standard family formation except for the weights between the machine units in the machine pool. The strengths between part units are still derived from the Jaccard coefficient formula based on the machine-part incidence matrix. However, the strengths between machine units in the machine pool are developed from both the Jaccard coefficient formula and a sequence coefficient. The new formulas used for the machine unit connections are:

d(i, k) = a11 / (d - a00)   (4)

S(i, k) = d(i, k) + C * t(i, k) / N   (5)

A = (Σ_i Σ_k S(i, k)) / m(m + 1)   (6)

W_ik = 0, if i = k
W_ik = S(i, k) - A, otherwise   (7)

where
i, k = 1, ..., n,
d(i, k) = the Jaccard similarity coefficient between units i and k,
a11 (or a00) = the number of positions at which units i and k both show the value 1 (or 0),
d = the number of entries in a pattern,
A = the mathematical average of the similarity values S(i, k),
n = the number of entries in each pattern,
W_ik = the strength of the connection between machine units i and k,
S(i, k) = the similarity value between machine units i and k,
C = an experimental constant,
t(i, k) = the number of times machines i and k appear consecutively in the operation sequences of all the part types,
N = the maximum of t(i, k) over i, k = 1, ..., m, and
m = the number of machine types.

The sequence coefficient thus adds to the Jaccard similarity coefficient a priority term based on how often two machines appear consecutively. As a result, the activations of highly related units are excited not only by their partners in the same pool, but also by the related pool. For example, a family of similar parts is generated by the main excitations from all related parts as well as by indirect excitations from the machine pool.
Therefore, parts operated on mostly by the same consecutive machines are organized together more readily than parts that merely require the same machines. Furthermore, since consecutive machine units have stronger relations, they tend to be assigned to the same cell; after the machine units are excited, the corresponding part units are also strongly excited. The constant C determines how fast a stable condition is reached. If C is small, the execution of the neural network slows down; if C is too large, an oscillatory situation may result in which the resting level is never reached, because of abrupt changes in the activation values of the units. Usually, the value of C can be fixed after several experimental runs with the network. In our experiments, C has been set to 0.3.
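A sketch of Equations (4) through (7) is given below, under the assumption that each part's routing is supplied as an ordered list of machine indices. The function name is ours, and the averaging step follows the i ≠ k convention of Equation (2).

```python
import numpy as np

def sequence_weights(incidence, routings, C=0.3):
    """Machine-unit connection weights with the sequence coefficient, Equations (4)-(7).

    incidence : 0/1 machine-by-part matrix
    routings  : one ordered list of machine indices per part type (its routing)
    C         : experimental constant (0.3 in the experiments reported here)
    """
    X = np.asarray(incidence, dtype=float)
    m, p = X.shape
    a11 = X @ X.T
    a00 = (1 - X) @ (1 - X).T
    d = a11 / (p - a00)                          # Equation (4)

    t = np.zeros((m, m))                         # consecutive-occurrence counts t(i, k)
    for seq in routings:
        for u, v in zip(seq, seq[1:]):
            t[u, v] += 1
            t[v, u] += 1
    N = t.max() if t.max() > 0 else 1.0

    S = d + C * t / N                            # Equation (5)
    off = ~np.eye(m, dtype=bool)
    A = S[off].sum() / (m * (m - 1))             # average similarity over i != k
    W = S - A                                    # Equation (7)
    W[~off] = 0.0                                # zero self-connections
    return W
```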
Example
A small example illustrates the proposed method for calculating the weights between machine units. There are 4 part types and 6 machine types. The requirement incidence matrix, in which the entries indicate the operation sequence, is shown in Figure 8. First, the Jaccard similarity coefficients between machines are calculated using Equation (4), as shown in Figure 9. Next, the consecutive-occurrence counts between machines are obtained as shown in Figure 10, and the modified similarity values between the machines, computed with Equation (5), are shown in Figure 11. Finally, Figure 12 shows the weights used for the connections between machine units in the setup of the parallel network; these numbers are determined using Equation (7). After the network system settles down, two part families are found (see Figure 13).

Figure 8: Requirement Incidence Matrix for the Involvement of the Sequence of Operations
Figure 9: Jaccard Similarity Coefficient Matrix Between Machines

Involvement of Lot Size
For the case of different lot sizes, the quantitative analysis of the interrelations between part types is developed from the Jaccard similarity coefficient formula. The similarities between machines are derived by weighting each entry by the lot size of the corresponding part type. The formula for calculating the similarities between machines is as follows:

S(i, k) = Σ_u X_i(u) X_k(u) Q(u) / Σ_u [X_i(u) + X_k(u) - X_i(u) X_k(u)] Q(u)   (8)

where
u = 1, ..., n,
S(i, k) = the similarity coefficient between machine types i and k,
X_i(u) = the entry for machine type i and part type u in the incidence matrix; X_i(u) = 0 or 1,
Q(u) = the lot size of part type u, and
n = the total number of part types.

Obviously, the forecasted demand for each part must be collected before the similarity coefficients between machine types can be calculated.
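Equation (8) can be evaluated directly from the incidence matrix and the lot sizes, as in the following sketch (the function name is ours, and every machine is assumed to process at least one part so that the denominator is nonzero).

```python
import numpy as np

def lot_size_similarity(incidence, lot_sizes):
    """Machine-to-machine similarity weighted by part lot sizes, Equation (8).

    incidence : 0/1 machine-by-part matrix, X_i(u)
    lot_sizes : Q(u), the required lot size (number of batches) of each part type
    """
    X = np.asarray(incidence, dtype=float)
    Q = np.asarray(lot_sizes, dtype=float)
    weighted = X * Q                             # X_i(u) Q(u)
    both = weighted @ X.T                        # sum_u X_i(u) X_k(u) Q(u)
    totals = weighted.sum(axis=1)                # sum_u X_i(u) Q(u)
    either = totals[:, None] + totals[None, :] - both
    return both / either                         # S(i, k)
```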
Figure 10: Consecutive Times Between Machines
Figure 11: Modified Similarities Between Machines
Figure 12: Connection Weights Between Machine Units
Figure 13: Final Incidence Matrix (2 cells)

The calculation of the strengths of the connections between part units, and between all other units, remains the same as in standard family formation. Figure 14 presents an example used to explain and analyze the result of the implementation; the final incidence matrix is shown in Figure 15. All parts are assigned into four families, and the four corresponding machine cells are formed. In this case, machines 2, 7, and 13 were used as the test seeds. Each of these machines processes two part types with different lot sizes. For instance, machine type 2 is used to process either part type 8, with 2 batches required, or part type 7, with 4 batches required. After the test is executed, machine 2 is grouped with machines 7, 9, and 12 to process parts 3, 7, 11, and 19 rather than with machines 1, 5, and 6, because the lot size of part type 7 is larger than that of part type 8.
Figure 14: Initial Matrix for the Involvement of Lot Size
Figure 15: Final Incidence Matrix (4 cells)

Involvement of Multiple Process Plans
Kusiak¹⁵ proposed a method for the generalized GT concept in which each part type has one or more process plans and a specific process plan must be chosen for each part type. His method is an integer programming formulation called the p-median model. The advantage of this integer programming method is that an optimal solution can be obtained. However, the computations involved are so extensive that large cases cannot be handled efficiently. Furthermore, it is unreasonable to assume that the number of part families (clusters) can be determined in advance.

The family formation problem with multiple process plans can be restated as finding the optimal grouping of parts into families processed by groups of machines, under the constraint that every part type is assigned exactly one process plan. In our approach, exactly one process plan is chosen for each part type because we set strong inhibitory connections among the process plans of a part. If three process plans are designed for a part type, the connection strength between any two of them is set to be strongly inhibitory, so that eventually only one of the process plans will be activated.

In standard family formation, the strengths among the machine units are developed from the similarity coefficients in order to promote default assignment. In the multiple process plan case, however, the same methodology is not valid because the number of process plans for each part varies. Instead, a value of -0.2 is assigned to all the connection weights between the machine units. This value, which minimizes the influence of the machine units, was obtained through extensive experimentation.

Experimental Procedure
The constraint satisfaction model does not guarantee an optimal solution. Usually, several runs must be made to search for the best feasible solution before a part family and a machine cell are determined. In general, the solution with the highest goodness value is the best answer. After a part family is selected, the next step is to assign negative inputs to the process plans of the chosen parts. The system is then run again, and the next part family and machine cell are obtained. This procedure is repeated until all of the parts are assigned. The detailed steps are listed below:
Step 1: Run the neural network and store the result.
Step 2: Select the activated parts as a family and the activated machines as a group.
Step 3: Inhibit the process plans that are related to the chosen parts.
Step 4: Run the neural network and store the result.
Step 5: Select the activated parts as a family and the activated machines as a group.
Step 6: If all the parts are assigned, stop the system. Otherwise, go to Step 3.

Example
The example shown in Figure 16 is a case of 6 machines and 6 parts in which part 1 has three designed process plans and each of the other parts has two plans. In the beginning, when the system was run several times, we obtained:
Cluster 1: plans P2b, P3a, P5a; machines M1, M3, M6; goodness = 7.7038
A negative input (-1) was then set to process plans P2a, P2b, P3a, P3b, P5a, and P5b, and the second cluster was obtained:
Cluster 2: plans P1a, P4a, P6b; machines M2, M4, M5; goodness = 8.4787
In this case, the first feasible solution consists of these two clusters, and the total goodness is equal to 16.1825. The rearranged result is presented in Figure 17. The other feasible solution, shown in Figure 18, has a total goodness of 11.0759. The first solution, which has the higher goodness, is finally chosen.

Figure 16: Initial Matrix for the Involvement of Multiple Process Plans
Figure 17: Feasible Solution With Goodness = 16.1825
Figure 18: Feasible Solution With Goodness = 11.0759
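For illustration, the inhibitory wiring described in this section might be set up as follows. The magnitude of the inhibition between competing plans is not stated in the text, so the value used here is only an assumption; the -0.2 weight between machine units is the value reported above, and the function name is ours.

```python
import numpy as np

def process_plan_weights(plan_part, n_machines, inhibit=-2.0, machine_weight=-0.2):
    """Connection weights enforcing one process plan per part.

    plan_part : for each process-plan unit, the index of the part type it belongs to
    inhibit   : strongly inhibitory weight between plans of the same part (assumed value)
    """
    plan_part = list(plan_part)
    n_plans = len(plan_part)
    Wp = np.zeros((n_plans, n_plans))
    for i in range(n_plans):
        for k in range(n_plans):
            if i != k and plan_part[i] == plan_part[k]:
                Wp[i, k] = inhibit               # competing plans of one part inhibit each other
    Wm = np.full((n_machines, n_machines), machine_weight)   # -0.2 between machine units
    np.fill_diagonal(Wm, 0.0)
    return Wp, Wm
```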
Conclusion
This research demonstrates that the developed neural network can address the GT family formation problem efficiently. Furthermore, it shows that extensions involving other manufacturing information can be made easily within the same unified structure. Our approach has the following major features and potential extensions:
• The method starts from an implementation of standard family formation that uses the similarity coefficient method in the setup of the network system. In traditional approaches, the final decision on part families and corresponding machine cells must be made by a human from the rearranged matrix. The neural network approach, however, generates a part family and a machine cell simultaneously, without examining the final matrix again. This feature supplies the capability of dealing with large part family formation problems efficiently.
• The approach also considers a practical manufacturing concept with multiple process plans. The method can solve a formation problem in which each part type has one or more process plans and a specific process plan must be chosen for each part type. This extension will allow integration with other computer-aided manufacturing functions such as computer-aided process planning (CAPP).
• The sequence of operations is also involved in the generation of part families and machine cells. The method described is capable of refining the basic assignment and minimizing inter-cell material movement.
• A quantitative analysis of the interrelationships between part types permits machine types to be formed into cells in such a way that there are strong interdependencies between machine types that process large lots of similar part types.
• The experimental procedure can be initiated from either part types or machine types. With this feature, the results obtained by the network system can be re-checked from the other perspective.
• Taking advantage of the inherent parallel processing of neural networks, the approach can solve a large problem quickly.
• Additional feature pools can be added to the original model without major modification. This will allow the use of part geometries as well as part topologies.

Some drawbacks of our approach overlap with those of neural network computing; that is, several parameters or constants must be determined on a trial-and-error basis. Even though this paper presents some guidelines for such parameters, more systematic and fundamental research will be needed. Nonetheless, our approach minimizes such arbitrariness by choosing a neural network structure that does not require many decisions regarding the number of hidden layers, the number of units, and the size and scope of the training sample.
References
1. J.L. Burbidge, "Production Flow Analysis," Production Engineering, 1971, pp. 139-52.
2. J.L. Burbidge, "A Manual Method of Production Flow Analysis," Production Engineering, 1977, pp. 34-38.
3. A. Ballakur and H.J. Steudel, "A Within-Cell Utilization Based Heuristic for Designing Cellular Manufacturing Systems," International Journal of Production Research, Vol. 25, No. 5, 1987, pp. 639-65.
4. A.S. Carrie, "Numerical Taxonomy Applied to Group Technology and Plant Layout," International Journal of Production Research, Vol. 11, No. 4, 1973, pp. 399-416.
5. H.M. Chan and D.A. Milner, "Direct Clustering Algorithm for Group Formation in Cellular Manufacture," Journal of Manufacturing Systems, Vol. 1, No. 1, 1982, pp. 65-75.
6. M.P. Chandrasekharan and R. Rajagopalan, "MODROC: An Extension to Rank Order Clustering," International Journal of Production Research, Vol. 24, No. 5, 1986, pp. 1221-33.
7. M.P. Chandrasekharan and R. Rajagopalan, "ZODIAC: An Algorithm for Concurrent Formation of Part-Families and Machine-Cells," International Journal of Production Research, Vol. 25, No. 6, 1987, pp. 835-50.
8. S.C. Chi, "A Neural Network Approach for Part Family Formation in Group Technology," Master's thesis, Syracuse University, 1990.
9. J. De Witte, "The Use of Similarity Coefficients in Production Flow Analysis," International Journal of Production Research, Vol. 18, No. 4, 1980, pp. 503-14.
10. G. Harhalakis, R. Nagi, and J.M. Proth, "An Efficient Heuristic in Manufacturing Cell Formation for Group Technology Applications," International Journal of Production Research, Vol. 28, No. 1, 1990, pp. 185-98.
11. N.L. Hyer and U. Wemmerlov, "Group Technology and Productivity," Harvard Business Review, July-August 1984, pp. 140-49.
12. N.L. Hyer and U. Wemmerlov, "Group Technology in the US Manufacturing Industry: A Survey of Current Practices," International Journal of Production Research, Vol. 27, No. 8, 1989, pp. 1287-1304.
13. J.R. King, "Machine Component Grouping in Production Flow Analysis: An Approach Using Rank Order Clustering," International Journal of Production Research, Vol. 18, No. 2, 1980, pp. 213-32.
14. K.R. Kumar, A. Kusiak, and A. Vannelli, "Grouping of Parts and Components in Flexible Manufacturing Systems," European Journal of Operational Research, Vol. 24, 1986, pp. 387-97.
15. A. Kusiak, "The Generalized Group Technology Concept," International Journal of Production Research, Vol. 25, No. 4, 1987, pp. 561-69.
16. R. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, Vol. 4, 1987, pp. 4-22.
17. J. McAuley, "Machine Grouping for Efficient Production," Production Engineering, 1972, pp. 53-57.
18. J.L. McClelland and D.E. Rumelhart, Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises, MIT Press, Cambridge, MA, 1988.
19. J.L. McClelland, D.E. Rumelhart, and G.E. Hinton, "The Appeal of Parallel Distributed Processing," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, MA, 1986.
20. W.T. McCormick, P.J. Schweitzer, and T.W. White, "Problem Decomposition and Data Reorganization by a Clustering Technique," Operations Research, 1972, pp. 993-1009.
21. Y.B. Moon, "Forming Part-Machine Families for Cellular Manufacturing: A Neural Network Approach," International Journal of Advanced Manufacturing Technology, Vol. 5, No. 4, 1990, pp. 278-91.
22. R. Rajagopalan and J.L. Batra, "Design of Cellular Production Systems: A Graph Theoretic Approach," International Journal of Production Research, Vol. 13, 1975, pp. 567-79.
23. D.E. Rumelhart, J.L. McClelland, and The PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, MA, 1988.
24. H.K. Seifoddini and P.M. Wolfe, "Selection of a Threshold Value Based on Material Handling Cost in Machine-Component Grouping," IIE Transactions, Vol. 19, No. 3, 1987, pp. 226-70.
25. S.M. Shafer and J.R. Meredith, "A Comparison of Selected Manufacturing Cell Formation Techniques," International Journal of Production Research, Vol. 28, No. 4, 1990, pp. 661-73.
26. A. Shtub, "Modelling Group Technology Cell Formation as a Generalized Assignment Problem," International Journal of Production Research, Vol. 27, No. 5, 1989, pp. 775-82.
27. P.H. Waghodekar and S. Sahu, "Machine-Component Cell Formation in Group Technology: MACE," International Journal of Production Research, Vol. 22, No. 6, 1984, pp. 937-48.
28. U. Wemmerlov and N.L. Hyer, "Cellular Manufacturing in the US Industry: A Survey of Users," International Journal of Production Research, Vol. 27, No. 9, 1989, pp. 1511-30.
29. H.L. Wu, R. Venugopal, and M.M. Barash, "Design of a Cellular Manufacturing System: A Syntactic Pattern Recognition Approach," Journal of Manufacturing Systems, Vol. 5, No. 2, 1986, pp. 81-87.
Authors' Biographies Young B. Moon is Assistant Professor of Mechanical and Aerospace Engineering at Syracuse University, Syracuse, NY. He has a BS from Seoul National University, an MS from Stanford University, and a PhD from Purdue University, all in industrial engineering. His current research interests include distributed artificial intelligence applications in manufacturing systems, concurrent engineering, and intelligent control of manufacturing systems. Sheng-Chai Chi has a BS in mechanical engineering from Tatung Institute of Technology, and an MS in manufacturing engineering from Syracuse University. His research interests are neural network applications in group technology, and control of manufacturing systems.