An improved neural network for manufacturing cell formation

Chao-Hsien Chu *

Department of Management, Iowa State University, Ames, IA 50011, USA

Decision Support Systems 20 (1997) 279-295

* Tel.: +1-515-294-9693; fax: +1-515-294-6060; e-mail: chu_c@iastate.edu

Abstract

With structures inspired by the structure of the human brain and nervous system, neural networks provide a unique computational architecture for addressing problems that are difficult or impossible to solve with traditional methods. In this paper, an unsupervised neural network model, based upon the interactive activation and competition (IAC) learning paradigm, is proposed as a good alternative decision-support tool to solve the cell-formation problem of cellular manufacturing. The proposed implementation is easy to use and can simultaneously form part families and machine cells, which is very difficult or impossible to achieve by conventional methods. Our computational experience shows that the procedure is fairly efficient and robust, and it can consistently produce good clustering results. © 1997 Elsevier Science B.V.

Keywords: Neural networks; Unsupervised learning; Interactive activation and competitive learning; Cellular manufacturing

1. Introduction

Neural networks (also known as neural nets, connectionist systems or artificial neural systems) are massively parallel computing systems that have the ability to learn from experience (examples) and to adapt to new situations. In addition, they are extremely fast, particularly when implemented in hardware. With structures inspired by the human brain and nervous system, neural networks provide a unique computational architecture for addressing problems that are difficult or impossible to solve with traditional methods. Since the inception of neural nets, more than 50 paradigms have been developed for a variety of applications, including economic prediction, financial analysis, mortgage loan evaluation, credit approval, bankruptcy prediction, stock market prediction, machine fault diagnosis, robotic arm-positioning control, resource allocation, scheduling and many others [1-7]. In this paper, an effective neural network based upon the interactive activation and competition (IAC) learning logic is proposed as a good alternative decision-support tool to solve the cell-formation (CF) problem of cellular manufacturing (CM).

Cellular manufacturing is an innovative manufacturing strategy proposed for improving shop-floor productivity and flexibility. By grouping and producing similar products (or parts) together, CM can transform batch-type into line-type production while maintaining a high degree of flexibility, as in job-shop systems. Potential benefits of CM include simplified material flows, faster throughput, reduced setup times, reduced inventory, better control over the shop floor and lower scrap rates [8]. The grouping philosophy has been widely used in modern manufacturing practices such as just-in-time (JIT) production and flexible manufacturing systems (FMS). Cell formation, which involves grouping parts and machines into manufacturing cells, is the first step towards designing an effective CM system.

Forming manufacturing cells is a complicated task in manufacturing practice, as many factors, such as product profiles, design features, production quantity and demand patterns, machine functions and capacity, processing times and sequences, budgets, tools and space restrictions, must be considered [9,8]. Also, there are several performance criteria, such as grouping efficiency, total machine investment, total material handling cost, average machine utilization and manufacturing lead time, against which a solution must be evaluated [9,10,8]. Over the past decade, the CF problem has attracted much attention and numerous analytical approaches have been developed. Two categories of procedures can be discerned: those based on design features and those based on processing requirements. These procedures can be further divided into three categories: (1) identifying part families and then machine cells; (2) identifying machine cells and then part families; and (3) identifying part families and machine cells simultaneously [11]. The first two procedures are normally performed sequentially. Most hierarchical (for instance, Ref. [12]), nonhierarchical (for instance, Refs. [13,14]) and heuristic (for instance, Refs. [15-17]) methods either form part families first and then use a search algorithm to locate machine cells, or work in the reverse direction. Although a mathematical programming model can be developed to form machine cells and part families simultaneously, the number of groups must be specified in advance, and the model becomes difficult or impossible to solve as the problem size grows; thus most such studies (for instance, Refs. [18-20]) use the first or second procedure. Most array-based clustering methods (for instance, Refs. [21-24]) and a few heuristic methods (for instance, Ref. [25]) use the simultaneous clustering logic; although these methods are efficient, the clustering procedure normally requires some type of human intervention (e.g., visually determining the block diagram or the similarity coefficient for the grouping) and the results may not always be satisfactory [26]. Thorough reviews of the CF problem can be found in various sources, e.g., Refs. [27,9,24,19,28,11].

Clearly, from the above discussion, there is an emerging need for a better decision-support tool for making CF decisions. Recently, a number of neural networks have been adopted as good alternative tools to solve the CF problem; see our review in Section 2. The following unique features of neural networks make them more appealing than conventional techniques for CF: (1) the memory in neural networks is distributed and the information can be processed in parallel; (2) most neural networks are hardware-implementable and thus are more suitable for solving large-scale problems than purely software-implemented alternatives; and (3) neural networks can automatically learn from examples without going through tedious acquisition and representation procedures. In this study, a variation of the interactive activation and competition network was adopted for CF. The IAC network has the following advantages over other neural network paradigms: (1) it can simultaneously form part families and machine cells; (2) the required number of cells can be detected automatically from the data; and (3) it is less sensitive to the system parameters than some other networks, such as ART 1. The use of an IAC network in decomposing manufacturing cells has been explored by Moon [29,30] and Moon and Chi [31]. However, they used a more complicated three-layer scheme, and a large number of similarity coefficients between every pair of parts and machines had to be computed; this requires additional memory to keep track of and store the coefficients and takes longer to process the results. Also, their implementation has not been tested extensively, so little is known about the performance of the model [10]. In this paper, a simpler two-layer scheme is proposed, and the connecting weights between neurons are set to '1' for excitatory connections and '-1' for inhibitory connections, thus avoiding the difficulty of selecting suitable similarity coefficients [27] and reducing the need for tedious computations.

2. Related research

Neural networks offer an effective algorithmic approach that has been studied intensively by mathematicians, statisticians, physicists, engineers and computer scientists.


Since their inception, they have proven to be an effective approach for performing the following types of tasks [3]: (1) modeling, in which a neural network is used to develop the mathematical relationship between input and output variables and can thus estimate or predict output values from inputs; (2) pattern mapping, in which one can, for instance, input written text to receive a spoken word; (3) pattern completion, in which one can input a partially obscured object to recall a stored complete pattern; (4) pattern classification, by which one can sort different patterns into meaningful categories; and (5) optimization, in which one can efficiently solve a complicated combinatorial optimization problem. A detailed discussion and critical review of neural networks and their applications can be found, for example, in Refs. [1-4,7,32].


The CF problem can be solved directly by using the classification capability of neural networks; alternatively, it can first be formulated as a mathematical model and then be solved by an optimization-based neural net such as an annealed neural net, a Hopfield network or a Boltzmann machine [5]. Results of research in both of these directions have appeared in the literature; Table 1 summarizes these related studies. The focus of the first direction is to find neural networks with good classifying capability. Several neural topologies, including back-propagation, the self-organizing map, competitive learning, adaptive resonance theory (ART) and interactive activation and competition learning, have been adopted to model the CF problem. Depending on the training methods used, these nets can be divided into two categories: supervised and unsupervised learning.

Table 1. Summary of related literature

Model used | Data used | No. of layers | Comments | Ref.

Supervised learning
Back-propagation | Design feature | 3 | (1) Form part families only; (2) have to specify the required no. of cells | [33]
Back-propagation | Design feature | 3 | Same as above | [34]
Back-propagation | Design feature | 3 | Same as above | [35]

Unsupervised learning
Self-organizing map | Design feature | 3 | Same as above | [33]
Self-organizing map | Routing | 2 | (1) Form part families and machine cells simultaneously; (2) have to specify the required no. of cells | [36]
Competitive learning | Routing | 2 | (1) Form part families and machine cells separately (sequentially); (2) have to specify the required no. of cells | [37]
Competitive learning | Routing | 2 | Same as above | [38]
Adaptive resonance theory (ART 1) | Routing | 2 | (1) Form part families and machine cells separately (sequentially); (2) have to specify the maximum no. of cells | [39]
ART 1 | Routing | 2 | Same as above | [10]
ART 1 | Routing | 2 | Same as above | [40]
ART 1 | Routing | 2 | Same as above | [41]
ART 1 | Routing | 2 | Same as above | [38]
ART 1 | Routing | 2 | Form machine cells first and then search for the part families | [42,43]
Fuzzy ART | Routing | 2 | Same as above | [43]
Multiconstraint net | Routing and other production data | 3 | (1) Form machine cells only; (2) have to specify the required no. of cells | [44]
Interactive activation and competition | Routing | 3 | (1) Form part families and machine cells simultaneously; (2) have to compute similarity coefficients | [29,30]
Interactive activation and competition | Routing and other production data | 3 | Same as above | [31]


Supervised learning requires that output patterns from earlier groupings be specified or presented during the training stage; these networks can then be used as the basis for classifying new inputs. Networks in this category are thus more suitable for CF applications based upon design features. Back-propagation is the only supervised paradigm used in these studies. In contrast, without using existing classified data, unsupervised learning can self-organize the presented data to discover common properties. Because the manufacturing environment is dynamic, it is difficult, if not impossible, to know beforehand the patterns of existing parts and processes; networks of this category are therefore more appropriate for practical CF applications. The second direction of research requires users to formulate a mathematical model for the problem under consideration, after which a neural net is developed to represent and solve the mathematical model. The major advantage of this approach is that additional constraints, such as demand, cell size and capacity, may be taken into consideration. However, developing a neural network that can properly represent a complicated mathematical model is very difficult and requires considerable computational effort [5]. To our knowledge, there has been no study using this approach to model the CF problem. Although previous studies have shown that neural networks are suitable for solving the CF problem, the following deficiencies still exist: (1) Trial and error is normally needed to figure out the network structure and system parameters, especially for the back-propagation paradigm, in which the number of hidden layers and the number of neurons in each hidden layer must be determined beforehand. (2) The number of output neurons (that is, the number of cells to be formed) in the back-propagation, competitive learning and multiconstraint networks must be specified before the network is constructed and simulated. (3) A set of parameters must be predetermined for almost every neural paradigm, but their sensitivity to the system's performance varies. ART 1 is the most popular neural net in use, but it has been found to be very sensitive to the vigilance values used [45,42,36,38] and to the sequence in which input vectors are applied [39,10]; hence, a number of variations and extensions have been devised [39,10,44,43].

(4) Some neural models, especially those based on design features, can be used only to form part families [33-35], to form machine cells first and then use a search algorithm to find part families [42,44,43], or to form part families and machine cells separately (sequentially) [39,37,10,40,41,38]. (5) Most proposed neural nets, except the multiconstraint net [44], consider routing information only in forming manufacturing cells. (6) Although the IAC model used in some studies [29-31] can effectively form part families and machine cells simultaneously, its implementation, which uses a three-layer network structure, is complicated and requires extensive effort in computing similarity coefficients for every pair of parts and machines; the implementation may thus be inappropriate for large-scale CF problems. In this study, we propose a simpler IAC neural network to model the CF problem. Our implementation avoids most of the aforementioned deficiencies, except item 5.

3. Notation used

The notation for indices and parameters is enumerated below.

Indices:
i        index of processing units or neurons (part or machine); i = 1, ..., (n + m)
j        index of sending neurons (machine or part); j = 1, ..., (n + m)
k        index of manufacturing cells; k = 1, 2, ...

Parameters:
a_i      the activation value of processing unit i
a'_i     the updated activation value of processing unit i
α        a constant to scale the strength of excitatory inputs
C_k      set of cell k
C'_k     set of initial cell k produced by the IAC net
decay    a constant that determines the strength of the tendency to return to the resting level
estr     a constant to scale the strength of external input
ext(i)   strength of external input to unit i
excit(i) sum of excitation to unit i
γ        a constant to scale the strength of inhibitory inputs
inhib(i) sum of inhibition to unit i
m        number of machines needed
M        set of machines available for receiving external inputs
M'_k     set of machines contained in C'_k
max      the maximum activation level allowed
min      the minimum activation level allowed
n        number of parts to be clustered
net(i)   net input (weighted sum) of unit i
ncycles  number of cycles (epochs) by which the network is to be trained
P        set of parts available for receiving external inputs
P'_k     set of parts contained in C'_k
rest     the resting level to which activation tends to settle in the absence of external input
w_ij     weight associated with the connection from unit j to unit i

4. Generic scheme of IAC networks

The IAC net, a simple neural paradigm that uses unsupervised learning logic, has been extensively studied by Grossberg [45], McClelland [46], McClelland and Rumelhart [47,2,48] and Rumelhart et al. [4]. An IAC network normally consists of a set of neurons divided into a number of competitive layers [48]. These layers can be either visible or hidden, depending on whether or not the units in the layers can receive direct input from outside the network. Within a layer, all units are mutually inhibitory; that is, competition exists among units such that the unit or units with the strongest activation tend to drive down the activation of the other units. Between layers, units may have excitatory or inhibitory connections. The connections between neurons are normally bidirectional; that is, the connection weights between neurons i and j, w_ij and w_ji, are equal. As a result, the activation in each layer both influences and is influenced by the activation in other layers. Fig. 1 depicts the computational algorithm of the IAC net [2].


The algorithm, in general, can be divided into five steps. (1) Develop a network to represent the problem. This step, called representation or encoding, is often problem-dependent; for a different domain, the encoding scheme and network structure may have to be specially designed to reflect its own characteristics. (2) Determine and assign the system parameters. Seven parameters (max, min, rest, decay, estr, α and γ) are used by the IAC network, and they play a key role in the network's learning. At any instant in time, each neuron's status, called its activation, is updated until the network is stabilized. The max and min values restrict the highest and lowest activation levels each processing unit can reach. The rest value is the activation level at which neurons tend to settle in the absence of external input. The decay term acts as a restoring force that tends to bring the activation of units back to the rest level. The α term scales the strength of the excitatory input to units from other units in the network, and the γ term scales the strength of the inhibitory input to units from other units in the network.

1) /* Develop the IAC neural net */
2) /* Assign system parameters (max, min, rest, decay, alpha, gamma) */
3) /* Select a processing unit for clamping and apply an external input to it */
   Clamp an external input, ext(r), to the selected unit r
4) /* Train the network */
   /* Initialize the activation value for all processing units */
   Let a_i = rest, i = 1, 2, ..., n+m
   /* Start the training process for ncycles times */
   Repeat:  (* the outside loop *)
     /* Inside loop: process each unit i */
     For each processing unit i:
       /* Compute the net input for unit i */
       Compute the sum of excitation: excit(i) = sum of w_ij a_j over excitatory connections
       Compute the sum of inhibition: inhib(i) = sum of w_ij a_j over inhibitory connections
       Compute the net input: net(i) = (estr) ext(i) + (alpha) excit(i) + (gamma) inhib(i)
       /* Update the activation value a'_i */
       If net(i) > 0, a'_i = a_i + (max - a_i) net(i) - (decay)(a_i - rest)
       Otherwise,     a'_i = a_i + (a_i - min) net(i) - (decay)(a_i - rest)
       /* Bound the activation value */
       If a'_i > max, a'_i = max
       If a'_i < min, a'_i = min
     Until all units are processed  /* end of inside loop */
   Return  (* end of outside loop *)
5) /* Check network stabilization */
   If the network is stabilized, stop the procedure; else repeat the outside loop

Fig. 1. The computational algorithm of the IAC network [2].


(3) Choose a clamping unit. A processing unit, r, is randomly selected from the visible layers and an external input, ext(r), is applied to that unit. (4) Train the network. This step involves the calculation of each neuron's net input, net(i), and the updating of its activation level, a_i. The net input consists of three terms: the external input, scaled by estr; the excitatory input from other units, scaled by α; and the inhibitory input from other units, scaled by γ. The updating routine increments the activation of each unit based on its net input and current activation level. The training continues until all connection weights remain unchanged (i.e., the network is stabilized). (5) Group units with positive activation values into a cluster. Steps 3-5 are repeated until all units in the network are classified. A detailed description of the algorithm and its variations has been published in Ref. [2].
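To make steps 4 and 5 concrete, the sketch below gives one training cycle in Python. It is our own rendering of the algorithm in Fig. 1, not the author's implementation (which drives the iac program of Ref. [2]); the parameter defaults are the values quoted later in Section 5.

```python
import numpy as np

# Default IAC parameters as suggested in Ref. [2] (see Phase 1 in Section 5).
MAX, MIN, REST = 1.0, -0.2, -0.1
DECAY, ESTR, ALPHA, GAMMA = 0.1, 0.4, 0.1, 0.1

def iac_cycle(a, w, ext):
    """Run one training cycle (the inside loop of Fig. 1) over all units.

    a   : activation vector, shape (n + m,)
    w   : weight matrix; w[i, j] is the connection from unit j to unit i
    ext : external-input vector, nonzero only for the clamped unit
    """
    for i in range(len(a)):
        contrib = w[i] * a                     # weighted input from every unit
        excit = contrib[contrib > 0].sum()     # excit(i): sum of excitation
        inhib = contrib[contrib < 0].sum()     # inhib(i): sum of inhibition (negative)
        net = ESTR * ext[i] + ALPHA * excit + GAMMA * inhib
        # Positive net input pushes the unit toward MAX, negative toward MIN;
        # decay always pulls the activation back toward the resting level.
        if net > 0:
            a[i] += (MAX - a[i]) * net - DECAY * (a[i] - REST)
        else:
            a[i] += (a[i] - MIN) * net - DECAY * (a[i] - REST)
        a[i] = np.clip(a[i], MIN, MAX)         # keep activation within [MIN, MAX]
    return a
```

Calling iac_cycle repeatedly until the activation vector stops changing corresponds to the stabilization check in Fig. 1.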

5. The proposed implementation

Unlike the networks described in previous studies [29-31], the IAC network adopted for this study consists of only two layers: a machine layer and a part layer. The part layer is made up of the parts to be classified, and the machine layer consists of the machines needed to process the parts. Both layers are considered visible, because an external input can be applied to any neuron in either layer. The two layers are fully interconnected, complying with the following rule: if a part needs to be processed by a machine, the connection between that part and the machine is excitatory; otherwise, it is inhibitory. Within a layer, the processing units compete with each other, each having an excitatory connection to itself. Meanwhile, in our implementation, there is no need to compute similarity coefficients for parts or machines; instead, the initial connecting weights are assigned as constants ('1' for excitatory connections and '-1' for inhibitory connections). This arrangement not only avoids the difficulty of selecting an appropriate formula for computing the similarity coefficients [27], but also saves much time by not carrying out such tedious computations. Because the number of neurons, n, in a hidden layer and the number of pairs of similarity coefficients, (1/2)[n(n - 1) + m(m - 1)], tend to increase very rapidly as the problem size increases, our proposed architecture is more efficient and thus more suitable for practical applications.

Table 2. Data set for numerical illustration

Machine (i) | Part (j): 1  2  3  4  5  6  7  8  9  10
 1 | 1 1 1 1 0 0 0 0 0 0
 2 | 1 1 1 1 0 0 0 0 0 0
 3 | 1 1 1 0 0 0 0 0 0 0
 4 | 1 1 0 0 0 0 0 0 0 0
 5 | 0 0 0 0 1 1 1 0 0 0
 6 | 0 0 0 0 1 1 1 0 0 0
 7 | 0 0 0 1 1 1 0 0 0 0
 8 | 0 0 0 0 0 0 0 1 1 0
 9 | 1 1 0 0 1 1 0 0 0 0
10 | 0 0 0 0 0 0 0 1 1 1
11 | 1 1 0 0 1 1 0 0 0 0
12 | 0 0 0 0 0 0 0 1 1 1

For illustration, let us consider the same data set analyzed in Ref. [30], but with correction of some typographical errors in the data. Table 2 depicts the machine/part matrix used, with '1' indicating that part j needs to be processed at machine i. As shown in Fig. 2, a two-layer IAC network with 22 neurons (12 units in the machine layer and 10 units in the part layer) is needed for our implementation. In contrast, a three-layer network (see Fig. 3) with 32 neurons (12 units in the machine layer, 10 units in the part layer and 10 units in the part instance layer, a hidden layer) is used in Ref. [30], and 111 pairs of similarity coefficients also need to be computed.
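Given this encoding, the weight matrix can be written down mechanically from Table 2. The sketch below (our own code and variable names, offered as an illustration rather than the paper's software) builds the 22 × 22 matrix for the network of Fig. 2:

```python
import numpy as np

def build_weights(mp):
    """Build the two-layer IAC weight matrix from an m x n machine/part matrix.

    Units 0..m-1 are machines, units m..m+n-1 are parts.  Between the layers
    the weight is +1 if the part visits the machine and -1 otherwise; within a
    layer each unit has only an excitatory connection to itself.
    """
    m, n = mp.shape
    w = np.zeros((m + n, m + n))
    between = np.where(mp == 1, 1.0, -1.0)  # +1 excitatory, -1 inhibitory
    w[:m, m:] = between                     # machine rows vs. part columns
    w[m:, :m] = between.T                   # bidirectional, so weights are symmetric
    np.fill_diagonal(w, 1.0)                # self-excitation within each layer
    return w

# Machine/part matrix of Table 2 (12 machines x 10 parts).
mp = np.array([
    [1,1,1,1,0,0,0,0,0,0],
    [1,1,1,1,0,0,0,0,0,0],
    [1,1,1,0,0,0,0,0,0,0],
    [1,1,0,0,0,0,0,0,0,0],
    [0,0,0,0,1,1,1,0,0,0],
    [0,0,0,0,1,1,1,0,0,0],
    [0,0,0,1,1,1,0,0,0,0],
    [0,0,0,0,0,0,0,1,1,0],
    [1,1,0,0,1,1,0,0,0,0],
    [0,0,0,0,0,0,0,1,1,1],
    [1,1,0,0,1,1,0,0,0,0],
    [0,0,0,0,0,0,0,1,1,1],
])
w = build_weights(mp)   # 22 x 22 matrix for the network of Fig. 2
```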

Fig. 2. Structure of the proposed IAC network (solid lines: excitatory connections; dotted lines: inhibitory connections).

Fig. 3. Structure of the IAC net used in Ref. [30].

Regarding connections, although we use both excitatory and inhibitory connections to link units between layers (as was done in Refs. [29,30]), our implementation avoids the need for connections between units within each layer, thus simplifying the network structure and reducing the computation time. For instance, because part 1 needs to be processed at machines 1, 2, 3, 4, 9 and 11, the connections of p1 with m1, m2, m3, m4, m9 and m11 are considered excitatory, and its connections with the other machines are inhibitory. The core of the interactive activation and competition logic is performed via the IAC program provided in Ref. [2]. Because the program requires several complicated files (template, look, network and start-up) as inputs, we used the BASIC programming language to develop a generator, IACG, to simplify the task. IACG is a menu-driven front-end that allows users to perform the following tasks from an integrated menu: (1) enter the data (machine/part matrix) needed for CF from the keyboard or retrieve the data from a file; (2) update data files; (3) enter and update the default system parameters; and (4) generate network files. Our implementation proceeds in a four-phase procedure, as shown in Fig. 4.

Phase 1: Assigns system parameters. To generate the related network files, the system parameters need to be determined beforehand. In general, max is set to '1', min < rest ≤ 0, and decay lies between 0 and 1. According to our experiments (reported in Section 7.2), all of these parameters except ncycles can be set to the default values (i.e., α = 0.1, decay = 0.1, estr = 0.4, γ = 0.1, max = 1.0, min = -0.2 and rest = -0.1) suggested in Ref. [2] without affecting the clustering results. Also, in most cases, the value of ncycles can be set at 120, although these numbers can be changed interactively during the simulation. Phase 2: Generates network files. Four files are needed to use the iac program: (a) a template file, used to tell the program what to display and where to display the results; (b) a startup file, needed to initialize the network configuration and to set the various parameters of the model; (c) a look file, used to specify where in a rectangular array the elements of a vector or matrix are to be displayed; and (d) a network file, needed to specify the pattern of connections and connecting weights.

Fig. 4. The proposed IAC implementation procedure.


definitions:
  nunits 22        (total number of neurons)
end
constraints:
  u  1.0           (excitatory connection weight)
  v -1.0           (inhibitory connection weight)
end
network:
  [22 rows of 22 symbols, one row per unit; the 12 machine rows are followed by the 10 part rows. 'u' marks an excitatory connection (a unit's connection to itself, or a machine/part pair with a '1' in Table 2), 'v' marks an inhibitory connection between the layers, and '.' marks no connection.]
end

Fig. 5. Structure of the network file.

A detailed discussion of the structure of these files can be found in Ref. [2]. Fig. 5 depicts an example of the network file using the data given in Table 2. As shown, if a part needs to be processed by a machine, the connection between them is excitatory (denoted as u); otherwise, it is inhibitory (denoted as v). Also, each processing unit has an excitatory connection to itself, denoted as u.
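For illustration, a small sketch of such a generator is given below. It is a hypothetical Python stand-in for the BASIC IACG program and emits only the network section of Fig. 5:

```python
def network_section(mp):
    """Write the 'network:' block of Fig. 5 for an m x n machine/part matrix:
    'u' for an excitatory link, 'v' for an inhibitory link, '.' for none."""
    m, n = len(mp), len(mp[0])
    rows = ["network:"]
    for i in range(m + n):
        symbols = []
        for j in range(m + n):
            if i == j:
                symbols.append("u")                  # self-excitation
            elif i < m <= j:                         # machine i vs. part j - m
                symbols.append("u" if mp[i][j - m] else "v")
            elif j < m <= i:                         # part i - m vs. machine j
                symbols.append("u" if mp[j][i - m] else "v")
            else:
                symbols.append(".")                  # no within-layer link
        rows.append("".join(symbols))
    rows.append("end")
    return "\n".join(rows)
```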

Phase 3: Trains the network. This phase consists of several steps. First, a neuron is randomly selected from the part (P) or machine (M) sets, which are initially set to contain all the parts and machines to be grouped. Second, the network is trained, following the learning algorithm outlined in Fig. 1; this learning process repeats until all connection weights remain unchanged (i.e., the network is stabilized). Third, the processing units with positive activation values are grouped together (denoted as C'_k); the initial cell C'_k contains an initial machine cell M'_k and a part family P'_k, and these elements are then deleted from the sets M and P. This training phase continues until all the processing units have been grouped (i.e., sets M and P have become empty). Please note that the sets M and P are merely indices used to control the clamping process. Although, as training progresses, parts or machines may be deleted from these sets and thereby excluded from further clamping consideration, they remain active in the network pools (layers) for possible interaction with other neurons.

Phase 4: Merges cells. This phase is optional, depending on the data set used. In most cases, if the data set contains few or no exceptional elements, the initial grouping results are normally satisfactory, requiring no postanalysis. However, if exceptional elements or bottleneck machines exist in the data, this step must be performed to produce disjoint groups. Merging cells is somewhat tricky at times. According to our experiments, the initial cell C'_l should be considered for merging with cell C'_k if one of the following conditions is satisfied: (a) the initial machine cell M'_l is a subset of cell M'_k, that is, M'_l ⊆ M'_k; (b) the initial part family P'_l is a subset of family P'_k, that is, P'_l ⊆ P'_k; or (c) the initial manufacturing cell C'_l is a subset of cell C'_k, that is, C'_l ⊆ C'_k. Meanwhile, if a cell contains only one part along with several machines (or the reverse), the cell may need to be merged with another cell as long as the two cells have an intersection; that is, if M'_l or P'_l contains only one element and C'_l ∩ C'_k ≠ ∅, then cell C'_l should be considered for merging with cell C'_k.
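The merge conditions amount to a handful of set tests. A minimal sketch, assuming machines and parts carry distinct labels such as 'm9' and 'p9' (our own code, not the paper's), follows:

```python
def should_merge(cell_l, cell_k):
    """Conditions (a)-(c) of Phase 4, plus the singleton-with-overlap test.
    A cell is a pair (machines, parts) of sets."""
    m_l, p_l = cell_l
    m_k, p_k = cell_k
    whole_l, whole_k = m_l | p_l, m_k | p_k
    if m_l <= m_k or p_l <= p_k or whole_l <= whole_k:   # conditions (a)-(c)
        return True
    # A cell with a single machine or part that intersects another cell.
    return (len(m_l) == 1 or len(p_l) == 1) and bool(whole_l & whole_k)

def merge_cells(cells):
    """Absorb initial cells into one another until no merge condition holds."""
    cells = [(set(m), set(p)) for m, p in cells]
    merged = True
    while merged:
        merged = False
        for i in range(len(cells)):
            for j in range(len(cells)):
                if i != j and should_merge(cells[i], cells[j]):
                    cells[j][0].update(cells[i][0])   # absorb machines
                    cells[j][1].update(cells[i][1])   # absorb parts
                    del cells[i]
                    merged = True
                    break
            if merged:
                break
    return cells

# The five initial cells of the Section 6 example reduce to three cells:
initial = [({'m1','m2','m3','m4','m9','m11'}, {'p1','p2','p3'}),
           ({'m5','m6','m7','m9','m11'}, {'p5','p6'}),
           ({'m8','m10','m12'}, {'p8','p9','p10'}),
           ({'m1','m2','m3'}, {'p1','p2','p3','p4'}),
           ({'m5','m6','m7','m9','m11'}, {'p5','p6','p7'})]
print(merge_cells(initial))
```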

6. Numerical illustration


We use the data set given in Table 2 to demonstrate the above implementation procedure. First, we use the IACG generator to enter or select the data set needed for CF, specify the system parameters, generate the required network files and initialize the M and P sets. This completes the first two phases of our implementation. We then trigger the IAC program from IACG to start the training, in which the process repeats five times (iterations) for this particular data set. Table 3 summarizes the results of these iterations. In the first iteration, assume that we select machine 1 (i.e., m1) as the unit to be clamped and apply an external input of value '1' to m1.

Table 3. Summary of numerical illustration by iterations (activation values after training; * marks the clamped unit; units not listed settled at 0)

Iteration 1 a: m1* 87, m2 74, m3 74, m4 65, m9 65, m11 65; p1 83, p2 83, p3 46
Iteration 2 b: m5* 86, m6 70, m7 70, m9 70, m11 70; p5 81, p6 81
Iteration 3 c: m8* 85, m10 73, m12 73; p8 75, p9 75, p10 53
Iteration 4 d: m1 79, m2 79, m3 67; p1 75, p2 75, p3 75, p4* 85
Iteration 5 e: m5 76, m6 76, m7 57, m9 57, m11 57; p5 80, p6 80, p7* 82

a Machine 1 was clamped; C'1 = {m1, m2, m3, m4, m9, m11 / p1, p2, p3}; M = {m5, m6, m7, m8, m10, m12} ≠ ∅; P = {p4, p5, p6, p7, p8, p9, p10} ≠ ∅.
b Machine 5 was clamped; C'2 = {m5, m6, m7, m9, m11 / p5, p6}; M = {m8, m10, m12} ≠ ∅; P = {p4, p7, p8, p9, p10} ≠ ∅.
c Machine 8 was clamped; C'3 = {m8, m10, m12 / p8, p9, p10}; M = ∅; P = {p4, p7} ≠ ∅.
d Part 4 was clamped; C'4 = {p4, p1, p2, p3 / m1, m2, m3}; M = ∅; P = {p7} ≠ ∅.
e Part 7 was clamped; C'5 = {p7, p5, p6 / m5, m6, m7, m9, m11}; M = ∅; P = ∅.

After 120 cycles of training, the connection weights of all units remain unchanged, indicating that the network has stabilized. Because machines 1, 2, 3, 4, 9, 11 and parts 1, 2, 3 have positive activation values, we assign them to the initial cell 1 (C'1) and remove them from the M and P sets. Because these two sets are not empty, we reset the system and continue with the next iteration. In the second iteration, assume that machine 5 (i.e., m5) is clamped. Following the same learning process, we produce the second initial cell (C'2), which consists of machines 5, 6, 7, 9, 11 and parts 5 and 6. Parts 5 and 6 and machines 5, 6 and 7 are then removed from the sets M and P. Because both sets are still not empty, we continue with the third iteration. In the third iteration, assume that we clamp machine 8 and produce the third initial cell (C'3), which contains machines 8, 10, 12 and parts 8, 9, 10. After removing these parts and machines from sets M and P, the machine set is empty. However, two parts, p4 and p7, still remain in the part set; we thus continue with the next iteration.


In the fourth iteration, we select part 4 as the clamping unit and form the fourth initial cell (C'4), which consists of machines 1, 2, 3 and parts 1, 2, 3, 4; part 4 is then removed from set P. In the fifth iteration, the last unit, p7, is clamped and the fifth initial cell (C'5) is formed. This cell contains machines 5, 6, 7, 9, 11 and parts 5, 6, 7; part 7 is removed from the part set. Because both sets M and P are now empty, we terminate the training phase. Once the initial manufacturing cells are formed, we move on to phase four for possible merge consideration. By examining the five cells formed, we find that M'4 is a subset of M'1 and C'5 is a superset of C'2; thus, we merge C'4 with C'1 and C'5 with C'2. We conclude that only three cells are actually needed, although five iterations of training were performed. The final clustering results are: cell 1 (C1) contains m1, m2, m3, m4, m9, m11, p1, p2, p3 and p4; cell 2 (C2) consists of m5, m6, m7, m9, m11, p5, p6 and p7; and the last cell (C3) contains m8, m10, m12, p8, p9 and p10. Please note that our clustering results are exactly the same as those produced by Moon [30], even though there are major differences between our network structure and implementation and his.

Also note that, in this example, machines 9 and 11 have been assigned to both cells 1 and 2. This provides users with some flexibility for part-reallocation consideration. The final decision on whether these elements are to be duplicated, or where they are to be allocated, can be made by informal judgment based upon cell capacity and workload, or by a theoretical analysis. For instance, one can simply examine the activations of these elements and assign them to the cell in which they have the higher activation value (i.e., the higher similarity to the other units in that cell). Another way of determining where they belong is to compute the grouping efficacy (GE) [49] for all possible cell combinations and then assign them to the cell that results in the higher grouping efficacy. For instance, using the former method, because the activation values of m9 and m11 are higher in cell 2 than in cell 1, we can strategically assign both machines to cell 2.
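As a sketch of the former rule (our own helper, with the activation values taken from Table 3):

```python
def assign_shared_unit(activation_by_cell):
    """Pick the cell in which a duplicated unit reached the higher activation."""
    return max(activation_by_cell, key=activation_by_cell.get)

# m9's activations from Table 3: 65 in iteration 1 (cell 1) and 70 in
# iteration 2 (cell 2), so m9 goes to cell 2; the same holds for m11.
print(assign_shared_unit({"cell 1": 65, "cell 2": 70}))   # -> 'cell 2'
```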

7. Computational experience

In this section, a sensitivity analysis is first conducted to explore the possible impacts of changing the clamping sequence and the parameter values on the final clustering results.

Table 4. Summary of testing data sets

No. | Size (m × n) | No. of cells | No. of elements | Matrix density a (%) | Exceptional elements | Bottleneck machines | Ref.
1 | 5 × 7 | 2 | 14 | 40.00 | No | No | [24]
2 | 5 × 7 | 2 | 16 | 45.71 | Yes | No | [17]
3 | 5 × 7 | 2 | 16 | 45.71 | Yes | No | [17]
4 | 7 × 12 | 3 | 23 | 27.38 | Yes | No | [19]
5 | 8 × 20 | 3 | 61 | 38.13 | Yes | No | [13]
6 | 9 × 9 | 3 | 32 | 39.51 | Yes | No | [50]
7 | 10 × 20 | 4 | 49 | 24.50 | No | No | [20]
8 | 12 × 10 | 3 | 38 | 31.67 | Yes | No | [15]
9 | 14 × 24 | 4 | 60 | 17.86 | Yes | No | [15]
10 | 14 × 24 | 4 | 61 | 18.15 | Yes | No | [23]
11 | 15 × 10 | 3 | 46 | 30.67 | No | No | [21]
12 | 16 × 30 | 4 | 116 | 24.17 | Yes | No | [20]
13 | 16 × 43 | 5 | 126 | 18.31 | Yes | Yes | [24]
14 | 18 × 10 | 3 | 53 | 29.44 | Yes | No | [51]
15 | 20 × 35 | 4 | 136 | 19.43 | Yes | No | [52]
16 | 24 × 40 | 7 | 130 | 13.54 | Yes | No | [53]
17 | 24 × 40 | 7 | 131 | 13.65 | Yes | No | [53]
18 | 30 × 41 | 3 | 128 | 10.41 | Yes | No | [18]
19 | 39 × 10 | 3 | 83 | 21.28 | Yes | No | [54]
20 | 40 × 100 | 10 | 424 | 10.60 | Yes | No | [25]

a Matrix density = number of elements/(number of parts × number of machines).


The performance of the proposed IAC implementation is then compared with that of another IAC implementation [29,30] and of two other popular neural network approaches, competitive learning (CL) and adaptive resonance theory (ART 1). To explore these issues, 20 data sets were selected from the open literature (see Table 4). These data sets have been extensively explored in related studies [39,26,16,37,14]. Please note that the largest problem we could find contains 40 machines and 100 parts. Although some other studies [10,43] have tested their neural networks on larger data sets, those data were computer-generated and are not available in the published literature.

7.1. Impacts of clamping sequences

To explore the possible impacts of changing the sequence of clamping processing units, we used three alternative clamping sequences to train the network. The first sequence randomly selects the clamping units starting from machine set M. The second sequence randomly chooses the clamping units starting from part set P. In the last sequence, units are selected from both sets in mixed order. For illustration, Table 5 shows the computational results for the illustrative data set. As can be seen, despite the different clamping sequences, the final clustering results are in fact the same. However, there are exceptions. If the data set contains bottleneck machines (for example, data set 13), the clamping sequence will have an impact on the final clustering results. Here, a bottleneck machine is defined as a machine used to process a majority of the parts. For instance, machine 6 of data set 13 is a bottleneck machine because 19 out of 43 parts have to be processed at this machine. According to our experiments, the following conclusions can be drawn: if the data set contains no bottleneck machines (even if it contains a few exceptional elements), the sequence of selecting units for clamping appears to have no impact on the final clustering results. However, if the data set contains bottleneck machines, one should avoid selecting the bottleneck machines as the clamping units; otherwise, the clustering results will not be as good as expected (i.e., a lower grouping efficacy will result).
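A screen for bottleneck machines can be sketched as follows. The one-half threshold is our own reading of "majority"; note that the data set 13 example flags a machine visiting 19 of 43 parts, so in practice a lower fraction may be appropriate.

```python
def bottleneck_machines(mp, frac=0.5):
    """Return the indices of machines that process more than frac of all parts."""
    n_parts = len(mp[0])
    return [i for i, row in enumerate(mp) if sum(row) > frac * n_parts]
```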


Table 5. The impacts of clamping sequences (* marks the clamped unit)

Sequence 1 (m1, m5, m8, p4, p7):
Iteration 1, clamp m1: C'1 = {m1*, m2, m3, m4, m9, m11 / p1, p2, p3}
Iteration 2, clamp m5: C'2 = {m5*, m6, m7, m9, m11 / p5, p6}
Iteration 3, clamp m8: C'3 = {m8*, m10, m12 / p8, p9, p10}
Iteration 4, clamp p4: C'4 = {p4*, p1, p2, p3 / m1, m2, m3}
Iteration 5, clamp p7: C'5 = {p7*, p5, p6 / m5, m6, m7, m9, m11}
Merge cells: merge C'4 and C'1; merge C'5 and C'2
Clustering results: C1 = {m1, m2, m3, m4, m9, m11 / p1, p2, p3, p4}; C2 = {m5, m6, m7, m9, m11 / p5, p6, p7}; C3 = {m8, m10, m12 / p8, p9, p10}

Sequence 2 (p1, p3, p4, p5, p7, p8):
Iteration 1, clamp p1: C'1 = {p1*, p2 / m1, m2, m3, m4, m9, m11}
Iteration 2, clamp p3: C'2 = {p3*, p1, p2 / m1, m2, m3, m4, m9, m11}
Iteration 3, clamp p4: C'3 = {p4*, p1, p2, p3 / m1, m2, m3}
Iteration 4, clamp p5: C'4 = {p5*, p6 / m5, m6, m7, m9, m11}
Iteration 5, clamp p7: C'5 = {p7*, p5, p6 / m5, m6, m7, m9, m11}
Iteration 6, clamp p8: C'6 = {p8*, p9, p10 / m8, m10, m12}
Merge cells: merge C'1, C'2 and C'3; merge C'4 and C'5
Clustering results: C1 = {p1, p2, p3, p4 / m1, m2, m3, m4, m9, m11}; C2 = {p5, p6, p7 / m5, m6, m7, m9, m11}; C3 = {p8, p9, p10 / m8, m10, m12}

Sequence 3 (m12, p2, m5, p4, p7):
Iteration 1, clamp m12: C'1 = {m12*, m8, m10 / p8, p9, p10}
Iteration 2, clamp p2: C'2 = {p2*, p1, p3 / m1, m2, m3, m4, m9, m11}
Iteration 3, clamp m5: C'3 = {m5*, m6, m7, m9, m11 / p5, p6}
Iteration 4, clamp p4: C'4 = {p4*, p1, p2, p3 / m1, m2, m3}
Iteration 5, clamp p7: C'5 = {p7*, p5, p6 / m5, m6, m7, m9, m11}
Merge cells: merge C'4 and C'2; merge C'5 and C'3
Clustering results: C1 = {m8, m10, m12 / p8, p9, p10}; C2 = {m1, m2, m3, m4, m9, m11 / p1, p2, p3, p4}; C3 = {m5, m6, m7, m9, m11 / p5, p6, p7}

In fact, in the latter case, the bottleneck machines should be removed before clustering.

7.2. Impacts of parameter values

To study the possible impacts of changing the system parameters, we applied three different parameter sets to train the network.


The first set, which assigns a larger scaling factor to the external input than to the internal inputs (i.e., estr > α and γ), is the default one suggested in Ref. [2]. In the second set, the scaling factors for external and internal inputs are the same (i.e., estr = α = γ). In the last set, we assigned a higher scaling factor to the internal inputs than to the external input (i.e., estr < α and γ). Table 6 depicts the values of the parameter sets and their computational results for the illustrative example. As can be seen, changing the parameter values alters the activation values of individual neurons; however, the final clustering results remain the same. The computations for other examples lead to similar observations. Thus, we can conclude that, as long as the values of these parameters are set within a reasonable range, they have no influence on the clustering results of the IAC network.

7.3. Computational results

The CL and ART 1 networks used for comparison are adopted from Refs. [37] and [38], respectively. In these two neural network approaches, two neural nets had to be developed sequentially: a machine-based net, based upon part features, was developed to group part families, and a part-based net, based upon machine usage, was used to form machine cells. To test the performance of the CL net, we wrote a generator in BASIC as a front-end to trigger the cl system provided in Ref. [2].

Table 6. The impacts of changing parameter values *

Parameters used:

Parameter | Set 1 | Set 2 | Set 3
α | 0.1 | 0.1 | 0.4
decay | 0.1 | 0.1 | 0.1
estr | 0.4 | 0.1 | 0.1
γ | 0.1 | 0.1 | 0.4
max | 1.0 | 1.0 | 1.0
min | -0.2 | -0.2 | -0.2
rest | -0.1 | -0.1 | -0.1

Activation values under parameter sets 1/2/3 (–: illegible in the original):

Iteration 1 (C'1): m1 87/79/93; m2 74/73/93; m3 74/73/93; m4 65/65/88; m9 65/65/88; m11 65/65/88; p1 83/83/100; p2 83/83/100; p3 46/42/78
Iteration 2 (C'2): m5 86/77/92; m6 70/69/91; m7 70/69/91; m9 70/69/91; m11 70/69/91; p5 81/81/100; p6 81/81/100
Iteration 3 (C'3): m8 85/72/89; m10 73/73/93; m12 73/73/93; p8 75/74/93; p9 75/74/93; p10 53/56/88
Iteration 4 (C'4): p1 75/83/100; p2 75/83/100; p3 75/37/78; p4 85/–/–; m1 79/73/93; m2 79/73/93; m3 67/73/93; m4 –/66/88; m9 –/66/88; m11 –/66/88
Iteration 5 (C'5): p5 80/80/90; p6 80/80/90; p7 82/52/92; m5 76/74/92; m6 76/74/92; m7 57/63/92; m9 57/63/92; m11 57/63/92

Final clustering results (merge C'4 and C'1; merge C'5 and C'2):
C1 = {p1, p2, p3, p4 / m1, m2, m3, m4, m9, m11}
C2 = {p5, p6, p7 / m5, m6, m7, m9, m11}
C3 = {p8, p9, p10 / m8, m10, m12}

* The clamping sequence follows sequence 1.


The ART 1 net was evaluated using a neural network environment, called PCN, written in Turbo-C [32]. We used the recommended set of default system parameters from Ref. [2] (i.e., parameter set 1) to test the IAC and CL implementations. The sequence of the units to be clamped in the IAC net was randomly selected to avoid any possible bias. If the data contain known bottleneck machines (e.g., data set 13), these bottleneck machines are removed before clustering and then duplicated afterwards. As for ART 1, the order of presenting the inputs is randomly selected, but, to ensure that the best possible results are produced for comparison, an effort was made to fine-tune the vigilance value used. To facilitate comparison with existing work, three popular CF measures (the number of exceptional elements (EE), the percentage of exceptional elements (PE) and the grouping efficacy (GE)) and the CPU time are used to evaluate the computational performance. An EE is produced either when certain parts must be processed through more than one machine cell or when certain machines are required by more than one part family.


An EE increases the tangible and intangible costs of developing manufacturing cells; hence, a final clustering with a smaller number of exceptional elements (EEs) is preferable. The PE is defined as the number of EEs divided by the total number of '1' entries in the machine/part matrix [26]; as with EE, a final clustering with a smaller PE is better. The GE is defined as (total number of 1's - number of EEs)/(total number of 1's + number of voids, i.e., '0' entries, in the block diagram) [49]. Since GE is a composite index that measures the density of the block diagram, as opposed to the percentage of EEs, a better clustering results in a larger GE. The CPU time for the ART 1 network was not compared, for the following two reasons: (1) the program used for testing was obtained from a different source, so it is almost impossible to ensure that the data structure used and the programming skill involved in designing the system are equivalent to those of the other programs, and an empirical comparison of computational time may therefore be meaningless; and (2) extensive effort was devoted in ART 1 to selecting the best (proper) vigilance value, so it would be difficult to accumulate the CPU time fairly given this human intervention.
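The three measures can be computed directly from the machine/part matrix and a candidate clustering. A minimal sketch, under our own naming, follows:

```python
def cf_measures(mp, machine_cell, part_cell):
    """Compute EE, PE (%) and GE (%) for a clustering of an m x n matrix.

    machine_cell[i] and part_cell[j] give the cell assigned to machine i and
    part j; an entry lies inside the block diagram when the two coincide.
    """
    ones = ee = voids = 0
    for i, row in enumerate(mp):
        for j, v in enumerate(row):
            inside = machine_cell[i] == part_cell[j]
            if v == 1:
                ones += 1
                if not inside:
                    ee += 1              # exceptional element
            elif inside:
                voids += 1               # a '0' inside a diagonal block
    pe = 100.0 * ee / ones                       # percentage of EEs
    ge = 100.0 * (ones - ee) / (ones + voids)    # grouping efficacy [49]
    return ee, pe, ge
```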

Table 7. Summary of clustering results

No. | Best solution: EE / PE (%) / GE (%) | IAC a: EE / PE (%) / GE (%) | CL [37]: EE / PE (%) / GE (%) | ART 1 [38]: EE / PE (%) / GE (%)
1 | 0 / 0.00 / 82.35 | 0 / 0.00 / 82.35 | 0 / 0.00 / 82.35 | 0 / 0.00 / 82.35
2 | 2 / 12.50 / 73.68 | 2 / 12.50 / 73.68 | 2 / 12.50 / 73.68 | 3 / 18.75 / 61.90
3 | 2 / 12.50 / 73.68 | 2 / 12.50 / 73.68 | 2 / 12.50 / 73.68 | 2 / 12.50 / 73.68
4 | 1 / 4.35 / 73.33 | 1 / 4.35 / 73.33 | 1 / 4.35 / 73.33 | 4 / 17.39 / 76.00
5 | 9 / 14.75 / 85.25 | 9 / 14.75 / 85.25 | 9 / 14.75 / 85.25 | 9 / 14.75 / 85.25
6 | 6 / 18.75 / 74.29 | 6 / 18.75 / 74.29 | 7 / 21.88 / 73.53 | 6 / 18.75 / 74.29
7 | 0 / 0.00 / 100.00 | 0 / 0.00 / 100.00 | 0 / 0.00 / 100.00 | 0 / 0.00 / 100.00
8 | 5 / 13.16 / 73.33 | 5 / 13.16 / 73.33 | 5 / 13.16 / 73.33 | 5 / 13.16 / 73.33
9 | 2 / 3.33 / 66.67 | 2 / 3.33 / 66.67 | 2 / 3.33 / 66.67 | 2 / 3.33 / 66.67
10 | 2 / 3.28 / 67.05 | 2 / 3.28 / 67.05 | 2 / 3.28 / 67.05 | 6 / 9.84 / 66.27
11 | 0 / 0.00 / 92.00 | 0 / 0.00 / 92.00 | 0 / 0.00 / 92.00 | 0 / 0.00 / 92.00
12 | 19 / 16.38 / 67.83 | 19 / 16.38 / 67.83 | 19 / 16.38 / 67.83 | 35 / 30.17 / 45.51
13 | 3 / 2.38 / 60.89 | 3 / 2.38 / 60.89 | 4 / 3.17 / 60.40 | 4 / 3.17 / 60.40
14 | 4 / 7.55 / 72.06 | 4 / 7.55 / 72.06 | 7 / 13.21 / 69.70 | 9 / 16.98 / 62.86
15 | 2 / 1.47 / 75.14 | 2 / 1.47 / 75.14 | 2 / 1.47 / 75.14 | 3 / 2.21 / 76.88
16 | 10 / 7.69 / 85.10 | 10 / 7.69 / 85.10 | 10 / 7.69 / 85.10 | 10 / 10.77 / 81.69
17 | 20 / 15.27 / 73.03 | 20 / 15.27 / 73.03 | 20 / 15.27 / 73.03 | 38 / 29.01 / 50.54
18 | 8 / 6.25 / 36.36 | 8 / 6.25 / 36.36 | 8 / 6.25 / 36.36 | 10 / 7.81 / 35.54
19 | 3 / 3.61 / 61.07 | 3 / 3.61 / 61.07 | 3 / 3.61 / 61.07 | 3 / 3.61 / 61.07
20 | 36 / 8.49 / 82.32 | 36 / 8.49 / 82.32 | 36 / 8.49 / 82.32 | 45 / 10.61 / 81.33

EE: number of exceptional elements; PE: percentage of exceptional elements; GE: grouping efficacy.
a For both the IAC net of Refs. [29,30] and the proposed implementation.


The control performances (best solutions) used for comparison purposes were recorded as follows: (1) collect the best clustering results from the literature; (2) plot the block diagram; (3) improve the block structures if needed; (4) record the revised clustering results; and (5) compute the percentage of exceptional elements and the grouping efficacy values. It is not surprising that the clustering results for the proposed IAC implementation and for the other IAC model are the same, because both implementations use the same learning logic despite the differences in their network structures. Although the differences in structure may have some impact on the computational time, the final clustering results should be the same; the results of both IAC implementations are therefore summarized in the same columns. Table 7 summarizes the clustering results of our computations. As it shows, the IAC networks outperformed the two other neural nets (CL and ART 1) on all three CF measures that we examined, with the CL net in second place.

As shown, using the default parameter values, the IAC net yields the same clustering results as the best solutions for all data sets, whereas the CL net has problems in properly clustering three data sets and ART 1 did not perform well on 10 of the 20 data sets. The ART 1 net was again shown to be quite sensitive to the vigilance value used; similar findings have been reported in previous studies [10,42,40,41]. Although we made an extensive effort to fine-tune the parameters of ART 1, its computational results still cannot match those of the IAC and CL nets. Meanwhile, if the data sets contain no exceptional elements (e.g., data sets 1, 7 and 11), all the neural network models produce good clustering results. On the other hand, if the data set contains bottleneck machines (e.g., data set 13), none of the neural networks can produce a good clustering without preprocessing; in this case, the bottleneck machines need to be removed first and then duplicated after the preliminary clustering. In fact, although some studies have avoided touching this sensitive issue, most CF methods, to our knowledge, are very sensitive to the existence of bottleneck machines in the data [21,27,26,9,22-24].

Table 8. Summary of computational time (in seconds)

No. | Proposed IAC: N / CPU time / LTS a (%) | IAC [29,30]: N / SM / CPU time / LTS (%) | CL [37]: CPU time / LTS (%)
1 | 12 / 7.1 / 0.00 | 19 / 31 / 9.5 / 33.80 | 12.3 / 73.24
2 | 12 / 7.1 / 0.00 | 19 / 31 / 9.6 / 35.21 | 12.4 / 74.65
3 | 12 / 7.2 / 0.00 | 19 / 31 / 10.1 / 40.28 | 12.4 / 72.22
4 | 19 / 8.5 / 0.00 | 31 / 87 / 14.2 / 67.06 | 19.1 / 124.71
5 | 28 / 9.3 / 0.00 | 48 / 218 / 19.8 / 112.90 | 25.2 / 170.97
6 | 18 / 8.1 / 0.00 | 27 / 72 / 13.4 / 65.43 | 18.9 / 133.33
7 | 30 / 13.6 / 0.00 | 50 / 235 / 23.3 / 71.32 | 32.7 / 140.44
8 | 22 / 14.5 / 0.00 | 32 / 111 / 17.2 / 18.62 | 24.9 / 71.72
9 | 38 / 17.8 / 0.00 | 62 / 367 / 31.5 / 76.97 | 51.8 / 191.01
10 | 38 / 19.2 / 0.00 | 62 / 367 / 32.6 / 69.79 | 53.0 / 176.04
11 | 25 / 17.3 / 0.00 | 35 / 150 / 17.9 / 3.47 | 26.3 / 52.02
12 | 46 / 21.9 / 0.00 | 76 / 555 / 38.7 / 76.71 | 59.6 / 172.15
13 | 59 / 24.8 / 0.00 | 102 / 1023 / 45.2 / 82.26 | 88.1 / 255.24
14 | 28 / 15.2 / 0.00 | 38 / 198 / 18.7 / 23.03 | 30.2 / 98.68
15 | 55 / 27.8 / 0.00 | 90 / 785 / 43.4 / 56.12 | 78.8 / 183.45
16 | 64 / 29.6 / 0.00 | 104 / 1056 / 53.1 / 79.39 | 95.9 / 223.99
17 | 64 / 31.3 / 0.00 | 104 / 1056 / 55.2 / 79.36 | 73.3 / 134.19
18 | 71 / 39.1 / 0.00 | 112 / 1255 / 61.6 / 57.54 | 87.7 / 124.30
19 | 49 / 40.4 / 0.00 | 59 / 786 / 42.1 / 4.21 | 79.5 / 96.78
20 | 140 / 53.3 / 0.00 | 240 / 5730 / 81.3 / 52.53 | 102.0 / 91.37
Average LTS | 0.00 | 55.15 | 133.03

N: number of neurons; SM: number of similarity coefficients.
a LTS: percentage longer than the shortest CPU time.


Some general conclusions regarding the use of IAC for CF can be drawn from the computational tests. First, if the data set can be completely blocked (i.e., there are no exceptional elements), the IAC algorithm quickly arrives at a disjoint block diagram without going through the merging phase. Second, if the data set contains a few exceptional elements, the model sometimes produces a number of good alternative solutions, which provides users with additional flexibility in making their final CF decision; in addition, the merging phase is most likely needed under these conditions. Third, if the data set contains bottleneck machines, the proposed procedure may be unable to produce a good result if these machines are not identified and removed before clustering. This happened in data set 13: as expected, the initial clustering results for this data set were not good because there were two bottleneck machines (m6 and m8). However, after removal of these two machines, the clustering results improved significantly, indicating that, without preprocessing, the IAC net may be incapable of handling data sets with bottleneck machines. This situation is, however, also common to many of the available clustering algorithms [27,26,22]. The computational CPU times are summarized in Table 8. The table also lists the number of neurons and the number of pairs of similarity coefficients required by the IAC implementations. Although the CPU times for the tested problem sets may not show very significant differences, the results in terms of the percentage longer than the shortest CPU time (LTS) do provide a guideline for understanding the relative computational efficiency of the methods compared. As can be seen from the table, the IAC nets are far more efficient than the CL net: on average, the proposed IAC model and the one in Refs. [29,30] are about 133% and 78% faster than the CL net, respectively. This is because two separate networks (a machine-based and a part-based net) have to be developed and tested for the CL net, while the IAC nets form machine cells and part families simultaneously. It is also clear from our test that the proposed IAC implementation is much more efficient than the other IAC model [29,30]: the former is on average about 55% more efficient than the latter.


This superiority can be credited to its simpler network structure (that is, a structure that uses fewer neurons and connections) and to the absence of the need (in memory space and time) to compute similarity coefficients.

8. Concluding remarks

We have tested a neural network procedure based upon a variation of the interactive activation and competition paradigm to form manufacturing cells. The computational results show that the proposed procedure is more efficient and effective than a similar IAC model and two other neural networks. Also, the proposed implementation appears to be insensitive to the parameter values used, and the sequence of selecting units for applying the external input does not matter, except when the data contain bottleneck machines. Moreover, all the computational times, on a personal computer, are within a reasonable range. This demonstrates that the proposed IAC implementation has potential for practical applications. Because we have tested it only on problems involving up to 100 parts and 40 machines, its use for larger problems remains to be verified. However, because most neural networks can be programmed in parallel logic and are hardware-implementable [5], the concern over long computational times is less critical than the quality of the clustering results.

The proposed procedure has the following advantages over optimal algorithms and other conventional heuristics. First, the procedure can form part families and machine cells simultaneously, whereas many conventional methods and neural network approaches can only form part families or machine cells sequentially or separately. Second, the procedure is very easy to use, and there is no need to compute similarity coefficients before classification; by contrast, many analytical methods, such as hierarchical clustering [12], P-median [16,19] and neural network models [29-31], require this time-consuming and tedious step. Computing similarity coefficients for a large problem requires extra memory space to keep track of and store the large number of coefficients and also takes longer CPU time to complete the process.


Third, the procedure uses a simple two-layer architecture that not only greatly reduces the processing time, but also makes it an especially appealing alternative for practical applications. Finally, the procedure can find optimal or near-optimal clustering results in a small fraction of the time required by optimal solution methods. For example, it takes about 54 s to solve a problem with 40 machines and 100 parts on a 586 PC, whereas most of the optimization approaches either cannot solve this problem or take more than 2 h to solve it [16]. Using neural networks to solve CF problems is an interesting and promising approach. Although neural networks have been shown to be a useful tool for modeling CF problems, most neural nets proposed thus far consider only the machine/part matrix (i.e., routing data) in the formation stage; thus, the manufacturing cells formed may need to be adjusted afterwards for other factors, such as machine and system capacity, product demand, processing times, material handling costs, etc. A possible breakthrough is to use optimization- or constraint-based nets, such as Hopfield nets and the Boltzmann machine [5], to solve the problem.

Acknowledgements

The author would like to thank the anonymous referees for their constructive comments and suggestions. An earlier version of this paper was completed while the author served as a visiting professor of management sciences at the Institute of Policy and Planning Science, University of Tsukuba, Tsukuba, Ibaraki, Japan.


References

[1] J. Dayhoff, Neural Network Architecture, Van Nostrand-Reinhold, New York, 1990.
[2] J.L. McClelland, D.E. Rumelhart, Explorations in Parallel Distributed Processing: A Handbook of Models, Programs and Exercises, MIT Press, Boston, MA, 1988.
[3] Neuralworks Professional II: Users' Guide, Neuralware, Pittsburgh, PA, 1989.
[4] D.E. Rumelhart, J.L. McClelland, PDP Group, Parallel Distributed Processing: Exploration in the Microstructure of Cognition, vol. 1, MIT Press, Boston, MA, 1989.
[5] Y. Takefuji, Neural Network Parallel Computing, Kluwer Academic Publishers, Boston, MA, 1992.
[6] K.Y. Tam, Neural network models and the prediction of bank bankruptcy, Omega Int. J. Manage. Sci. 19 (1991) 429-445.
[7] P.D. Wasserman, Neural Computing: Theory and Practice, Pergamon Press, New York, NY, 1989.
[8] U. Wemmerlöv, N.L. Hyer, Research issues in cellular manufacturing, Int. J. Prodn. Res. 25 (1987) 413-431.
[9] C.H. Chu, Recent advances in mathematical programming for cell formation, in: A.K. Kamrani, H.R. Parsaei, D.H. Liles (Eds.), Planning, Design and Analysis of Cellular Manufacturing Systems, Elsevier, Amsterdam, Netherlands, 1995, pp. 3-46.
[10] C. Dagli, R. Huggahalli, Machine-part family formation with the adaptive resonance theory paradigm, Int. J. Prodn. Res. 33 (1995) 893-913.
[11] U. Wemmerlöv, N.L. Hyer, Procedures for the part-family, machine-group identification problem in cellular manufacturing, J. Oper. Manage. 6 (1986) 125-147.
[12] J. McAuley, Machine grouping for efficient production, Prodn. Engr. 51 (1972) 53-57.
[13] K.P. Chandrasekham, R. Rajagopolan, An ideal-seed nonhierarchical clustering algorithm for cellular manufacturing, Int. J. Prodn. Res. 24 (1986) 451-464.
[14] G. Srinivasan, T.Y. Narendran, GRAFICS--a nonhierarchical clustering algorithm for group technology, Int. J. Prodn. Res. 29 (1991) 463-478.
[15] R.G. Askin, S.P. Subramanian, A cost-based heuristic for group technology configuration, Int. J. Prodn. Res. 25 (1987) 101-113.
[16] C.H. Chu, W. Lee, An efficient heuristic for grouping part families, Proceedings of the Midwest Decision Sciences Conference, 1990, pp. 62-64.
[17] P.H. Waghodekar, S. Shau, Machine-component cell formation in group technology: MACE, Int. J. Prodn. Res. 22 (1984) 937-948.
[18] K.R. Kumar, A. Vannelli, Strategic subcontracting for efficient disaggregated manufacturing, Int. J. Prodn. Res. 25 (1987) 1715-1728.
[19] A. Kusiak, The generalized group technology concept, Int. J. Prodn. Res. 25 (1987) 561-569.
[20] G. Srinivasan, T.Y. Narendran, B. Mahadevan, An assignment model for the part-families problem in group technology, Int. J. Prodn. Res. 28 (1990) 145-152.
[21] H. Chan, D. Milner, Direct clustering algorithm for group formation in cellular manufacturing, J. Manufg. Sys. 1 (1982) 66-75.
[22] J.R. King, Machine-component grouping in production flow analysis: an approach using a rank-order clustering algorithm, Int. J. Prodn. Res. 18 (1980) 213-232.
[23] J.R. King, Machine-component group formation in group technology, Omega Int. J. Manage. Sci. 8 (1980) 193-199.
[24] J.R. King, V. Nakomchai, Machine-component group formation in group technology: review and extension, Int. J. Prodn. Res. 20 (1982) 117-133.
[25] K.P. Chandrasekham, R. Rajagopolan, ZODIAC--an algorithm for concurrent formation of part-families and machine-cells, Int. J. Prodn. Res. 25 (1987) 835-850.
[26] C.H. Chu, M. Tsai, A comparison of three array-based clustering techniques for manufacturing cellular formation, Int. J. Prodn. Res. 28 (1990) 1417-1433.
[27] C.H. Chu, Clustering analysis in manufacturing cellular formation, Omega Int. J. Manage. Sci. 17 (1989) 289-295.
[28] N. Singh, Design of cellular manufacturing systems: an invited review, Eur. J. Oper. Res. 69 (1993) 284-291.
[29] Y.B. Moon, An interactive activation and competition model for machine-part family formation in group technology, Proceedings of the International Joint Conference on Neural Networks, Washington, DC, 1990, pp. 667-670.
[30] Y.B. Moon, Forming part-machine families for cellular manufacturing: a neural network approach, Int. J. Adv. Manufg. Technol. 5 (1990) 278-291.
[31] Y.B. Moon, S.C. Chi, Generalized part family formation using neural network techniques, J. Manufg. Sys. 11 (1993) 149-159.
[32] Y.C. Yeh, Artificial Neural Networks: Models, Applications and Practice, 2nd edn., Ru-Lin Publishers, Taipei, Taiwan, 1993 (in Chinese).
[33] K. Chakraborty, U. Roy, Connectionist models for part family classifications, Comput. Ind. Eng. 24 (1993) 189-198.
[34] S. Kaparthi, N.C. Suresh, A neural network system for shape-based classification and coding of rotational parts, Int. J. Prodn. Res. 29 (1991) 1771-1784.
[35] Y. Kao, Y.B. Moon, A unified group technology implementation using the back-propagation learning rule of neural network, Comput. Ind. Eng. 20 (1991) 425-437.
[36] C.O. Malavé, S. Ramachandran, Neural network-based design of cellular manufacturing systems, J. Intel. Manufg. 2 (1991) 305-314.
[37] C.H. Chu, Manufacturing cell formation by competitive learning, Int. J. Prodn. Res. 31 (1993) 829-843.
[38] V. Venugopal, T.T. Narendran, Machine-cell formation through neural network models, Int. J. Prodn. Res. 32 (1994) 2105-2116.
[39] S.J. Chen, C.S. Cheng, A neural network-based cell formation algorithm in cellular manufacturing, Int. J. Prodn. Res. 33 (1995) 293-318.
[40] A. Kusiak, Y. Chung, GT/ART: using neural networks to form machine cells, Manufg. Rev. 4 (1991) 293-301.
[41] T.W. Liao, L.J. Chen, An evaluation of ART 1 neural models for GT part family and machine cell forming, J. Manufg. Sys. 12 (1993) 282-290.
[42] S. Kaparthi, N.C. Suresh, Machine-component cell formation in group technology: a neural network approach, Int. J. Prodn. Res. 30 (1992) 1353-1367.
[43] N.C. Suresh, S. Kaparthi, Performance of fuzzy ART neural network for group technology cell formation, Int. J. Prodn. Res. 32 (1994) 1693-1713.
[44] H.A. Rao, P. Gu, A multiconstraint neural network for the pragmatic design of cellular manufacturing systems, Int. J. Prodn. Res. 33 (1995) 1049-1070.
[45] S. Grossberg, A theory of visual coding, memory and development, in: E.L.T. Leeuwenberg, H.F. Buffart (Eds.), Formal Theories of Visual Perception, Wiley, New York, 1978.
[46] J.L. McClelland, Retrieving general and specific information from stored knowledge of specifics, Proceedings of the Third Annual Meeting of the Cognitive Science Society, 1981, pp. 170-172.
[47] J.L. McClelland, D.E. Rumelhart, An interactive activation model of context effects in letter perception. Part 1. An account of basic findings, Psychol. Rev. 88 (1981) 375-407.
[48] D.E. Rumelhart, J.L. McClelland, An interactive activation model of context effects in letter perception. Part 2. The contextual enhancement effect and some tests and extensions of the model, Psychol. Rev. 89 (1982) 60-94.
[49] C.S. Kumar, K.P. Chandrasekham, Grouping efficacy: a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology, Int. J. Prodn. Res. 28 (1990) 233-243.
[50] T.A. Gongaware, I. Ham, Cluster analysis applications for group technology manufacturing systems, Proceedings of the Ninth North American Manufacturing Research Conference, 1981, pp. 503-508.
[51] B.B. Flynn, Grouping technology versus process layout: a comparison using computerized job shop simulation, Unpublished DBA dissertation, Indiana University, IN, 1984.
[52] J. Burbidge, Production flow analysis, Prodn. Engr. 42 (1963) 742-752.
[53] K.P. Chandrasekham, R. Rajagopolan, Groupability: an analysis of the properties of binary data for group technology, Int. J. Prodn. Res. 27 (1989) 1035-1052.
[54] B.B. Flynn, F.R. Jacobs, An experimental comparison of cellular (group technology) layout with process layout, Decision Sci. 18 (1987) 562-581.

Chao-Hsien Chu is an associate professor of management at Iowa State University. He received his Ph.D. in Business Administration from Pennsylvania State University. He has several years of industrial experience and was a visiting professor at the University of Tsukuba in Japan. His research interests are in emerging information technologies, cellular manufacturing and the management of technology. He has published articles in Decision Support Systems, IIE Transactions, Journal of Operations Management and International Journal of Production Research, among others. He received the best theoretical/empirical research paper award from the Decision Sciences Institute in 1989 and serves on the editorial review boards of the Journal of Production and Operations Management and the Journal of End User Computing.