Int. J. Production Economics 103 (2006) 185–198 www.elsevier.com/locate/ijpe
Mixed-variable fuzzy clustering approach to part family and machine cell formation for GT applications

Miin-Shen Yang a,*, Wen-Liang Hung b, Fu-Chou Cheng a
a Department of Applied Mathematics, Chung Yuan Christian University, Chung-Li, Taiwan 32023, ROC
b Department of Mathematics Education, National Hsinchu Teachers College, Hsin-Chu, Taiwan, ROC

Received 23 September 2004; accepted 2 June 2005
Available online 5 October 2005

* Corresponding author. Tel.: +886 3 456 3171; fax: +886 3 456 3160. E-mail address: [email protected] (M.-S. Yang).
Abstract

Group technology (GT) is a useful way to increase productivity with high quality in flexible manufacturing systems. Cell formation (CF) is a key step in GT: it designs a good cellular manufacturing system by using similarity measures between parts and machines to identify part families and machine groups. Recently, fuzzy clustering has been applied in GT because fuzzy clustering algorithms can assign partial memberships to part–machine cells, which suits cellular manufacturing systems in a variety of real cases. However, these fuzzy clustering algorithms are used only for numeric part and machine data. In this paper, we apply a mixed-variable fuzzy clustering algorithm, called mixed-variable fuzzy c-means (MVFCM), to CF in GT. Real application examples show that the MVFCM algorithm gives good results when applied to mixed-variable types of CF data.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Cellular manufacturing systems; Cell formation; Fuzzy clustering; Fuzzy c-means; Symbolic data; Fuzzy data; Mixed-variable data
1. Introduction

High manufacturing profit comes from lower cost and higher product quality. Generally, there are some guidelines for reducing product cost while keeping quality high, and creating a good production process is an important step. Finishing the manufacturing requires various kinds of machines and many complicated work procedures, and the parts frequently have to move from one place to another. This movement may cause not only machine idleness but also consumption of moving time and manpower. On the other hand, more and more manufacturing companies need to carry out batch manufacturing with small or medium production lot sizes. In this situation, more setup changes and more frequent part or machine travel occur. Group technology (GT) has been a useful way to treat these problems with a flexible manufacturing process. It exploits similarities between components to achieve lower cost and to increase productivity with high quality. Cell formation (CF) is a key step in GT: it designs a good cellular manufacturing system by using the similarities of parts relative to machines to identify part families and machine groups. The parts in the same machine group have
similar requirements, so GT can reduce travel and setup times. In CF, an m × p part/machine matrix with binary components is usually given. The m rows indicate the m machines and the p columns represent the p parts to be operated upon. In the matrix, a ''1'' (''0'') indicates that the part should (should not) be worked on the machine. The matrix exhibits the parts' requirements relative to the machines, and the objective is to group parts and machines into cells. There are many CF methods in the literature (see Singh, 1993; Singh and Rajamani, 1996; Molleman et al., 2002). Some of them use algorithms with certain energy functions or codes to rearrange the part/machine matrix directly, such as the bond energy algorithm (BEA) (McCormick et al., 1972), rank order clustering (ROC) (King, 1980), modified rank order clustering (MODROC) (Chandrasekaran and Rajagopalan, 1986), the direct clustering algorithm (DCA) (Chan and Milner, 1982) and evolutionary algorithms (EAs) (Plaquin and Pierreval, 2000). Others use similarity-based hierarchical clustering (Mosier, 1989; Wei and Kern, 1989; Gupta and Seifoddini, 1990; Shafer and Rogers, 1993; Liao et al., 1998), for example, single linkage clustering (SLC), complete linkage clustering (CLC), average linkage clustering (ALC), and linear cell clustering (LCC). However, these CF methods assume well-defined boundaries between part–machine cells. Such crisp boundary assumptions may fail to fully describe cases where the part–machine cell boundaries are fuzzy, which is why fuzzy clustering algorithms have been applied to CF. Xu and Wang (1989) first applied fuzzy clustering to CF. Chu and Hayya (1991) then improved its usage. Gindy et al. (1995) considered its optimal cluster number using validity indexes. Venugopal (1999) gave a state-of-the-art review of the use of soft computing, including fuzzy clustering. Moreover, Güngör and Arikan (2000) applied fuzzy decision making to CF. However, these fuzzy clustering methods all consider numeric data. In fact, part or machine data are often presented in symbolic and fuzzy variable types, and these fuzzy clustering algorithms cannot be used for such mixed-variable data. Symbolic data are different from numeric data: symbolic variables may represent human knowledge, nominal, categorical and synthetic data. Cluster analysis for symbolic data with hierarchical, hard and fuzzy clustering algorithms has been widely studied (see Gowda and Diday, 1992; Gowda and
Ravi, 1995). Fuzzy data are another type, carrying imprecision or a source of uncertainty not caused by randomness, such as linguistic assessments. For example, Liao (2001) proposed classification and coding approaches to part family formation under a fuzzy environment, constructing a coding structure for fuzzy data by treating a fuzzy feature as a linguistic variable. The fuzzy data type is easily found in natural language, social science, and knowledge representation. Fuzzy numbers are used to model the fuzziness of data and are usually used to represent fuzzy data. Several papers (see Hathaway et al., 1996; El-Sonbaty and Ismail, 1998) give fuzzy clustering algorithms for such fuzzy data. In real situations, there are data sets with mixed-variable types of symbolic and fuzzy data. Recently, Yang et al. (2004) proposed a fuzzy clustering algorithm for these mixed types of data, called mixed-variable fuzzy c-means (MVFCM). In this paper, we apply this mixed-type data clustering algorithm to CF. The distance measures for symbolic and fuzzy data defined by Yang et al. (2004) are used in a part–machine matrix, and the MVFCM algorithm is then applied to these mixed-variable types of data. In Section 2, we present fuzzy clustering methods for a binary part–machine matrix with numeric data. In Section 3, we review the definitions of symbolic and fuzzy data and give the distance measures for these mixed-variable data. Section 4 gives the MVFCM algorithm for CF. In Section 5, we use validity indexes to find optimal cell numbers for CF. Section 6 presents examples, and conclusions are stated in Section 7.

2. Fuzzy clustering for cell formation

In CF, an n × m part–machine matrix with binary data is usually given. Assume that there are n parts and m machines to be grouped into c part–machine cells. We can first group the n parts into c part families and then use the part family centers as references to form machine cells. Finally, we reallocate part families and machine cells to form the final part–machine cells. Let X = {x_1, ..., x_n} be a set of n parts, where x_j = (x_{j1}, ..., x_{jm}) is the jth part feature vector with

x_{j\ell} = 1 if part j requires machine \ell, and x_{j\ell} = 0 otherwise.

Thus, c part families form a partition X_1, ..., X_c of X, where X_1, ..., X_c are mutually
disjoint subsets with X_1 ∪ ⋯ ∪ X_c = X. Equivalently, the partition {X_1, ..., X_c} can be denoted by the indicator functions \mu_1, ..., \mu_c such that \mu_i(x) = 1 if x ∈ X_i and \mu_i(x) = 0 if x ∉ X_i for all i = 1, ..., c. Usually, {\mu_1, ..., \mu_c} is called a hard c-partition of X. Traditionally, part family formation in GT seeks an optimal hard c-partition {\mu_1, ..., \mu_c} using methods such as coding, production flow analysis, and ROC. However, this crisp membership of part x_j belonging to family i with values 0 or 1 may not fit reality; in general, partial membership with values in the interval [0,1] fits most real cases better. The idea of a fuzzy-set membership function was first used in part family formation by Xu and Wang (1989), who applied the well-known fuzzy c-means (FCM) clustering algorithm to GT. Now consider an extension that allows \mu_i(x) to be a membership function of a fuzzy set \mu_i on X, assuming values in the interval [0,1], such that \sum_{i=1}^{c} \mu_i(x) = 1 for all x in X. In this fuzzy extension, {\mu_1, ..., \mu_c} is called a fuzzy c-partition of X. The FCM clustering algorithm is based on the minimization of the sum-of-squared-error objective

J_{FCM}(\mu, a) = \sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^{m} \| x_j - a_i \|^2,

where m > 1 is the index of fuzziness, a_i is the cluster center of part family i, and \mu_{ij} = \mu_i(x_j). The necessary conditions for a minimizer (\mu, a) of J_{FCM} are the following update equations:

\mu_{ij} = \left( \sum_{k=1}^{c} \frac{\| x_j - a_i \|^{2/(m-1)}}{\| x_j - a_k \|^{2/(m-1)}} \right)^{-1}    (1)

and

a_i = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} x_j}{\sum_{j=1}^{n} \mu_{ij}^{m}}.    (2)

The FCM clustering algorithm and its many generalizations have been widely studied and applied (see Bezdek, 1981; Yang, 1993; Wu and Yang, 2002). FCM was first used in part family formation by Xu and Wang (1989). Chu and Hayya (1991) formed part families based on the FCM membership functions \mu_1, ..., \mu_c and then configured the machine cells on the basis of the FCM part family centers; finally, they reallocated the resulting part–machine matrix to the final part–machine cells. They demonstrated that the method was sensitive to the family number c, the index of fuzziness m, and also to the stopping criterion of the algorithm. The family number c in FCM is supposed to be known a priori. However, we do not have a priori knowledge of c in most real cases. To solve this problem, Gindy et al. (1995) proposed a validity measure for part family formation in GT applications. To improve performance, Josien and Liao (2000) proposed an integrated use of FCM and the fuzzy k-nearest neighbor (KNN) rule, in which the results from FCM are used as training data for the supervised fuzzy KNN; this approach improves the performance of part–machine CF. The FCM clustering algorithm has thus been successfully used in part–machine CF. However, FCM can only be used for numeric-type part–machine data. In real cases, part–machine data may appear in different types, such as numeric, symbolic, or fuzzy. Moreover, the part–machine matrix may have multiple attributes for parts on machines; in some situations, part data may include several different attributes, and similarly for machines. To handle these mixed-variable types in part–machine CF, we use an extended type of FCM, called mixed-variable FCM (MVFCM) (see Yang et al., 2004).
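Returning to the numeric case, the following is a minimal NumPy sketch of the update equations (1) and (2); the function name, the toy binary part–machine matrix and the tolerance values are illustrative choices, not taken from the paper.

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Standard fuzzy c-means via the update equations (1) and (2)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))                      # random initial fuzzy c-partition
    U /= U.sum(axis=0, keepdims=True)           # memberships of each part sum to 1
    for _ in range(max_iter):
        Um = U ** m
        A = Um @ X / Um.sum(axis=1, keepdims=True)          # Eq. (2): centers
        d2 = ((X[None, :, :] - A[:, None, :]) ** 2).sum(axis=2) + 1e-12
        U_new = d2 ** (-1.0 / (m - 1.0))                     # Eq. (1): memberships
        U_new /= U_new.sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return U, A

# toy binary part–machine matrix: 6 parts (rows) on 4 machines (columns)
X = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0],
              [0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1]], dtype=float)
U, A = fcm(X, c=2)
print(U.argmax(axis=0))   # hard assignment of each part to a family
```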
3. Mixed-variable data and distances

Numeric (vector) data are used in most cases; besides numeric data, however, there are many other types, such as symbolic and fuzzy data. In this section, we consider the mixed feature types of numeric, symbolic, and fuzzy data. The distance for these mixed-feature data defined by Yang et al. (2004) is adopted here. Suppose that a feature vector F is written as a d-tuple of feature components F_1, ..., F_d, with F = F_1 × ⋯ × F_d. For any two feature vectors A = A_1 × ⋯ × A_d and B = B_1 × ⋯ × B_d, the distance between A and B is defined as

d(A, B) = \sum_{k=1}^{d} d(A_k, B_k),

where d(A_k, B_k) represents the dissimilarity of the kth feature component according to its feature type.
Because there are symbolic and fuzzy feature components in a d-tuple feature vector, the distance d(A, B) is the sum of the dissimilarities of the symbolic components, defined by d_p(A_k, B_k) for the ''position'', d_s(A_k, B_k) for the ''span'' and d_c(A_k, B_k) for the ''content'', and of the dissimilarity of the fuzzy components, defined by d_f(A_k, B_k). Thus, d(A, B) can include d_p(A_k, B_k), d_s(A_k, B_k), d_c(A_k, B_k) and d_f(A_k, B_k) terms.

3.1. Symbolic feature components

Various definitions and descriptions of symbolic objects were given by Diday (1988). According to Gowda and Diday (1992), symbolic features can be divided into quantitative and qualitative features, and each feature can be described by d_p(A_k, B_k) due to position p, d_s(A_k, B_k) due to span s and d_c(A_k, B_k) due to content c. They then defined a distance for these symbolic data. Recently, Yang et al. (2004) gave an improved distance measure by modifying Gowda and Diday's dissimilarity definition as follows.

3.1.1. Quantitative type of A_k and B_k

The distance between two feature components of quantitative type is defined as the dissimilarity of these values due to position, span, and content. Let

al = lower limit of A_k;  am = upper limit of A_k;
bl = lower limit of B_k;  bm = upper limit of B_k;
inters = length of the intersection of A_k and B_k;
l_s = span length of A_k and B_k = |max(am, bm) − min(al, bl)|;
U_k = the difference between the highest and lowest values of the kth feature over all objects;
l_a = |am − al|;  l_b = |bm − bl|.

The three distance components are then defined as follows:

due to position:  d_p(A_k, B_k) = \frac{| (al + am)/2 - (bl + bm)/2 |}{U_k},

due to span:      d_s(A_k, B_k) = \frac{| l_a - l_b |}{U_k + l_a + l_b - inters},

due to content:   d_c(A_k, B_k) = \frac{| l_a + l_b - 2\,inters |}{U_k + l_a + l_b - inters}.

Thus, the distance d(A_k, B_k) is defined as

d(A_k, B_k) = d_p(A_k, B_k) + d_s(A_k, B_k) + d_c(A_k, B_k).

3.1.2. Qualitative type of A_k and B_k

For qualitative feature types, the dissimilarity component due to position is absent, and so is the term U_k. The two components that contribute to the dissimilarity are ''due to span'' and ''due to content''. Let

l_a = length of A_k = the number of elements in A_k;
l_b = length of B_k = the number of elements in B_k;
l_s = span length of A_k and B_k = the number of elements in the union of A_k and B_k;
inters = the number of elements in the intersection of A_k and B_k.

The two dissimilarity components are then defined as

d_s(A_k, B_k) = \frac{| l_a - l_b |}{l_s},

d_c(A_k, B_k) = \frac{| l_a + l_b - 2\,inters |}{l_s}.

Thus, the distance d(A_k, B_k) for qualitative A_k and B_k is defined as

d(A_k, B_k) = d_s(A_k, B_k) + d_c(A_k, B_k).
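A small sketch of the modified dissimilarities above may make the position/span/content components concrete. The function names are illustrative; intervals are passed as (lower, upper) pairs and qualitative features as sets, and the overlap length of disjoint intervals is clipped at zero, which is our reading rather than a detail stated in the paper.

```python
def d_quantitative(ak, bk, Uk):
    """Dissimilarity of two interval (quantitative symbolic) features
    ak = (al, am) and bk = (bl, bm); Uk is the range of the kth feature
    over all objects.  Implements the position/span/content components."""
    al, am = ak
    bl, bm = bk
    la, lb = abs(am - al), abs(bm - bl)
    inters = max(0.0, min(am, bm) - max(al, bl))   # overlap length, 0 if disjoint
    dp = abs((al + am) / 2.0 - (bl + bm) / 2.0) / Uk
    ds = abs(la - lb) / (Uk + la + lb - inters)
    dc = abs(la + lb - 2.0 * inters) / (Uk + la + lb - inters)
    return dp + ds + dc

def d_qualitative(Ak, Bk):
    """Dissimilarity of two qualitative symbolic features given as sets."""
    Ak, Bk = set(Ak), set(Bk)
    la, lb = len(Ak), len(Bk)
    ls = len(Ak | Bk)                              # span: size of the union
    inters = len(Ak & Bk)
    return abs(la - lb) / ls + abs(la + lb - 2 * inters) / ls

# e.g. the A4 attribute values of parts 1 and 3 in Table 1 of Example 1 below
print(d_qualitative({"T1", "T2", "T3", "T4", "T5"}, {"T1", "T2", "T3", "T6"}))
```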
3.2. Fuzzy feature components

Fuzzy data types often appear in real applications. Fuzzy numbers are used to model the fuzziness of data and are usually used to represent fuzzy data; trapezoidal fuzzy numbers (TFNs) are used most often. Hathaway et al. (1996) proposed an FCM clustering algorithm for symmetric TFNs using a parametric approach: they defined a dissimilarity for two symmetric TFNs and then used it in FCM clustering. However, they did not consider the left or right shapes of the fuzzy numbers (i.e., LR-type TFNs). Yang et al. (2004) gave a distance for LR-type TFNs based on Yang and Ko's (1996) distance definition, as follows. Hathaway et al.'s parametric approach represents a TFN A as A = (a_1, a_2, a_3, a_4), where a_1 is the center, a_2 the inner diameter, a_3 the left outer radius, and a_4 the right outer radius. With this parametric representation we can parameterize the four kinds of TFNs: real numbers, intervals, triangular fuzzy numbers, and TFNs. Let L (and R) be decreasing shape functions from R^+ to [0, 1], with L(0) = 1; L(x) < 1 for all x > 0; L(x) > 0 for all x < 1; and L(1) = 0, or (L(x) > 0 for all x and L(+\infty) = 0). A fuzzy number X with membership function

\mu_X(x) = L((m_1 - x)/\alpha)  for x \le m_1;   1  for m_1 \le x \le m_2;   R((x - m_2)/\beta)  for x \ge m_2,

is called an LR-type TFN. Thus, in Hathaway's parametric representation, X is denoted by X = ((m_1 + m_2)/2, m_2 - m_1, \alpha, \beta), where \alpha > 0 and \beta > 0 are called the left and right spreads, respectively. Given A = ((m_{1a} + m_{2a})/2, m_{2a} - m_{1a}, \alpha_a, \beta_a) and B = ((m_{1b} + m_{2b})/2, m_{2b} - m_{1b}, \alpha_b, \beta_b), Hathaway et al. (1996) defined the distance d_h(A, B) of A and B by

d_h^2(A, B) = ((m_{1a} + m_{2a})/2 - (m_{1b} + m_{2b})/2)^2 + (m_{2a} - m_{2b} + m_{1b} - m_{1a})^2 + (\alpha_a - \alpha_b)^2 + (\beta_a - \beta_b)^2.

Since d_h^2(A, B) does not take the shape functions L and R into account, Yang et al. (2004) defined another distance d_{LR}(A, B) with

d_{LR}^2(A, B) = (m_{1a} - m_{1b})^2 + (m_{2a} - m_{2b})^2 + ((m_{1a} - l\alpha_a) - (m_{1b} - l\alpha_b))^2 + ((m_{2a} + r\beta_a) - (m_{2b} + r\beta_b))^2,

where l = \int_0^1 L^{-1}(w)\,dw and r = \int_0^1 R^{-1}(w)\,dw. If L and R are linear, then l = r = 1/2. Thus, for any two given TFNs A = (a_1, a_2, a_3, a_4) and B = (b_1, b_2, b_3, b_4), we have the distance d_f(A, B) based on Yang et al.'s definition with

d_f^2(A, B) = \left(\frac{2a_1 - a_2}{2} - \frac{2b_1 - b_2}{2}\right)^2 + \left(\frac{2a_1 + a_2}{2} - \frac{2b_1 + b_2}{2}\right)^2 + \left(\left(\frac{2a_1 - a_2}{2} - \frac{a_3}{2}\right) - \left(\frac{2b_1 - b_2}{2} - \frac{b_3}{2}\right)\right)^2 + \left(\left(\frac{2a_1 + a_2}{2} + \frac{a_4}{2}\right) - \left(\frac{2b_1 + b_2}{2} + \frac{b_4}{2}\right)\right)^2.

Then

d_f^2(A, B) = \frac{1}{4}\big(\gamma_-^2 + \gamma_+^2 + (\gamma_- - (a_3 - b_3))^2 + (\gamma_+ + (a_4 - b_4))^2\big),

where \gamma_- = 2(a_1 - b_1) - (a_2 - b_2) and \gamma_+ = 2(a_1 - b_1) + (a_2 - b_2).
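As a quick check of the simplified form of d_f^2, the following sketch evaluates it for two TFNs in the parametric form (center, inner diameter, left spread, right spread), assuming linear shape functions so that l = r = 1/2; the sample values are the A3 attributes of parts 1 and 2 in Table 1 of Example 1 below.

```python
def d_f_squared(A, B):
    """Squared distance between two trapezoidal fuzzy numbers given as
    (a1, a2, a3, a4) = (center, inner diameter, left spread, right spread),
    assuming linear shape functions (l = r = 1/2)."""
    a1, a2, a3, a4 = A
    b1, b2, b3, b4 = B
    g_minus = 2 * (a1 - b1) - (a2 - b2)
    g_plus = 2 * (a1 - b1) + (a2 - b2)
    return 0.25 * (g_minus ** 2 + g_plus ** 2
                   + (g_minus - (a3 - b3)) ** 2
                   + (g_plus + (a4 - b4)) ** 2)

# A3 values of parts 1 and 2 in Table 1 of Example 1
print(d_f_squared((6, 0, 3, 3), (9, 0, 3, 3)))   # -> 36.0
```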
The distance d_f differs from Hathaway et al.'s (1996) distance. The main difference is that d_f(A, B) takes the shape functions L and R into account, whereas Hathaway et al.'s does not; that is, d_f(A, B) includes the information of the shape functions L and R in the distance measure. Yang et al. (2004) have illustrated these differences.

4. Mixed-variable fuzzy clustering algorithm for cell formation

Let X = {x_1, ..., x_n} be a set of n part feature vectors. If the part feature vectors are in R^s, then the FCM algorithm can be used to group part families based on the update Eqs. (1) and (2). However, in applying FCM to symbolic or fuzzy data, problems are encountered; for example, the weighted-mean equation and the Euclidean distance \|\cdot\| are not suitable for these mixed-feature data. To overcome these problems, a new representation for cluster centers should be used (e.g., El-Sonbaty and Ismail, 1998; Yang et al., 2004). A cluster center is assumed to be formed as a group of features, and each feature is composed of several events. Let A_{kp|i} be the pth event of feature k in cluster i, and let e_{kp|i} be the membership degree of association of the pth event A_{kp|i} to the feature k in cluster i. Thus, the kth feature of the ith cluster center A_{ik} can be represented as

A_{ik} = [(A_{k1|i}, e_{k1|i}), ..., (A_{kp|i}, e_{kp|i})].    (3)

In this case, we shall have

0 \le e_{kp|i} \le 1  and  \sum_p e_{kp|i} = 1,
\bigcap_p A_{kp|i} = \emptyset  and  \bigcup_p A_{kp|i} = \bigcup_j X_{jk},

where e_{kp|i} = 0 if the event A_{kp|i} is not a part of feature k in the cluster center A_i, and e_{kp|i} = 1 if no event other than A_{kp|i} shares in forming the feature k in the cluster center A_i. Thus, the update equation for e_{kp|i} is

e_{kp|i} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m}\, y}{\sum_{j=1}^{n} \mu_{ij}^{m}},    (4)

where y \in \{0, 1\} and y = 1 if the kth feature of the jth datum X_j contains the pth event, and y = 0 otherwise; \mu_{ij} = \mu_i(X_j) is the membership of X_j in cluster i.
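A minimal sketch of the representation (3) and the update (4) for one symbolic feature follows. The function name is illustrative and, for brevity, the feature is assumed single-valued (categorical); a set-valued qualitative feature would contribute a weight for every event it contains.

```python
from collections import defaultdict

def update_event_memberships(values, U_i, m=2.0):
    """Eq. (4): membership degree e of every event (observed value) of one
    symbolic feature in the centre of cluster i.  `values` holds that feature
    for the n data; `U_i` holds the memberships mu_i1, ..., mu_in."""
    w = [u ** m for u in U_i]
    total = sum(w)
    e = defaultdict(float)
    for value, wj in zip(values, w):
        e[value] += wj / total      # y = 1 exactly when the datum shows this event
    return dict(e)                  # the centre feature A_ik as {event: e_kp|i}

# illustrative only: a categorical feature of five data and memberships in cluster i
print(update_event_memberships(["a", "b", "a", "c", "a"], [0.9, 0.1, 0.8, 0.2, 0.7]))
```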
The membership degree e_{kp|i} is an important index function when FCM is used with symbolic data. Besides applying FCM to symbolic feature components, we also consider FCM for fuzzy feature components. Let {X_1, ..., X_n} be a mixed-feature data set. Then the FCM objective function can be defined as

J(\mu, e, a) = \sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^{m}\, d^2(X_j, A_i),    (5)

where

d^2(X_j, A_i) = \sum_{k'\,\text{symbolic}} \Big( \sum_p d^2(X_{jk'}, A_{k'p|i})\, e_{k'p|i} \Big) + \sum_{k\,\text{fuzzy}} d_f^2(X_{jk}, A_{ik}).    (6)

The parameters of the FCM objective function J(\mu, e, a) are {\mu_1, ..., \mu_c}, {e_{k'1|i}, ..., e_{k'p|i}} and {a_{ik1}, a_{ik2}, a_{ik3}, a_{ik4}}. The Picard method (see Carothers et al., 2004) for approximating the optimal solutions that minimize J(\mu, e, a) is considered, with the necessary conditions of a minimizer of J over the parameters. The update equation for the membership functions is

\mu_{ij} = \left( \sum_{q=1}^{c} \left( \frac{d^2(X_j, A_i)}{d^2(X_j, A_q)} \right)^{1/(m-1)} \right)^{-1},  i = 1, ..., c;  j = 1, ..., n,    (7)

where d^2(X_j, A_i) is defined by Eq. (6), in which d^2(X_{jk'}, A_{k'p|i}) and d_f^2(X_{jk}, A_{ik}) are the dissimilarities for symbolic and fuzzy data proposed in Section 3. According to Yang et al. (2004), two groups of parameters, e and a, are to be considered: the parameters e_{k'p|i} are for the features k' that are symbolic, and the parameters {a_{ik1}, a_{ik2}, a_{ik3}, a_{ik4}} are for the features k that are fuzzy. Thus:

(a) For the features k' that are symbolic, we have the update equation

e_{k'p|i} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m}\, y}{\sum_{j=1}^{n} \mu_{ij}^{m}}.    (8)

(b) For the features k that are fuzzy, we have the update equations

a_{ik1} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} (8x_{jk1} - x_{jk3} + x_{jk4} + a_{ik3} - a_{ik4})}{8 \sum_{j=1}^{n} \mu_{ij}^{m}},    (9)

a_{ik2} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} (4x_{jk2} + x_{jk3} + x_{jk4} - a_{ik3} - a_{ik4})}{4 \sum_{j=1}^{n} \mu_{ij}^{m}},    (10)

a_{ik3} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} (-2x_{jk1} + x_{jk2} + x_{jk3} + 2a_{ik1} - a_{ik2})}{\sum_{j=1}^{n} \mu_{ij}^{m}},    (11)

a_{ik4} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} (2x_{jk1} + x_{jk2} + x_{jk4} - 2a_{ik1} - a_{ik2})}{\sum_{j=1}^{n} \mu_{ij}^{m}}.    (12)

On the basis of these necessary conditions we can construct an iterative algorithm.
Mixed-variable FCM (MVFCM):

Step 1: Fix m and c and give an \epsilon > 0. Initialize a fuzzy c-partition \mu^{(0)} = \{\mu_1^{(0)}, ..., \mu_c^{(0)}\} and set \ell = 0.
Step 2: For each symbolic feature k', compute the ith cluster center A_{ik'}^{(\ell)} = [(A_{k'1|i}^{(\ell)}, e_{k'1|i}^{(\ell)}), ..., (A_{k'p|i}^{(\ell)}, e_{k'p|i}^{(\ell)})] using Eq. (8). For each fuzzy feature k, compute the ith cluster center A_{ik}^{(\ell)} = (a_{ik1}^{(\ell)}, a_{ik2}^{(\ell)}, a_{ik3}^{(\ell)}, a_{ik4}^{(\ell)}) using Eqs. (9)–(12).
Step 3: Update \mu^{(\ell+1)} using Eq. (7).
Step 4: Compare \mu^{(\ell+1)} to \mu^{(\ell)} in a convenient matrix norm. IF \|\mu^{(\ell+1)} - \mu^{(\ell)}\| < \epsilon, THEN STOP; ELSE set \ell = \ell + 1 and GOTO Step 2.
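The four steps can be arranged as the following Python skeleton. It is a sketch only: `update_center` and `mixed_distance2` are user-supplied callables standing for Eqs. (8)–(12) and Eq. (6); they, the record format of `data`, and the default tolerances are assumptions of this sketch, not functions given in the paper.

```python
import numpy as np

def mvfcm(data, c, mixed_distance2, update_center, m=2.0, eps=1e-5,
          max_iter=100, seed=0):
    """Skeleton of the MVFCM iteration (Steps 1-4).

    data              : list of n mixed-feature records
    update_center(data, weights) : builds one cluster centre from Eq. (8)
                                   (symbolic) and Eqs. (9)-(12) (fuzzy),
                                   where weights = mu_i ** m
    mixed_distance2(x, centre)   : d^2(X_j, A_i) of Eq. (6)
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)            # Step 1: initial fuzzy c-partition
    centers = None
    for _ in range(max_iter):
        Um = U ** m
        centers = [update_center(data, Um[i]) for i in range(c)]        # Step 2
        d2 = np.array([[mixed_distance2(x, a) for x in data]
                       for a in centers]) + 1e-12
        U_new = d2 ** (-1.0 / (m - 1.0))          # Step 3: Eq. (7)
        U_new /= U_new.sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < eps:         # Step 4: stopping criterion
            return U_new, centers
        U = U_new
    return U, centers
```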
In GT, part families may be formed from part feature vectors. Another approach is based on production flow analysis (PFA) (Burbidge, 1991). In PFA, a part–machine matrix X = [x_{j\ell}]_{n \times m} is assumed, a binary matrix with x_{j\ell} = 1 if part j requires machine \ell and x_{j\ell} = 0 otherwise. There are many grouping methods used for these GT applications. According to the review in Venugopal (1999), the approaches to the part–machine grouping problem can be classified, from a computing viewpoint, into soft computing and traditional hard computing. However, all of these part–machine grouping approaches consider numeric data. In Example 1, we apply the MVFCM algorithm to part family formation when the part feature vectors are mixed-variable types of symbolic and fuzzy data.
Table 1
Ten parts with mixed-variable attributes for Example 1

Part   A1    A2                       A3           A4                   A5           A6
1      153   Quenched                 [6,0,3,3]    T1,T2,T3,T4,T5       [10,0,2,2]   23
2      153   Quenched                 [9,0,3,3]    T1,T2,T3,T4,T6       [9,0,2,2]    21.4
3      165   Hot working              [1,0,3,3]    T1,T2,T3,T6          [3,0,2,2]    20
4      165   Hot working              [4,0,2,3]    T1,T2,T4,T6          [3,1,2,3]    22.9
5      165   Hot working              [5,0,1,3]    T1,T2,T3,T6,F        [4,1,2,2]    25.9
6      153   Quenched                 [7,0,2,3]    T1,T2,T4,T6,T7,T8    [8,0,1,2]    21.1
7      165   Hot working              [2,0,1,1]    T1,T2,T3,T4,T6       [2,2,4,4]    24
8      143   Quenched and Cold work   [5,0,2,3]    T1,T2,T3             [8,4,2,2]    19.4
9      153   Quenched                 [9,0,3,2]    T1,T2,T3,T4,T6,F     [10,0,2,4]   23.1
10     143   Quenched and Cold work   [4,0,1,3]    T1,T4,T6,T7          [8,3,1,2]    18.7
Example 1. There are 10 parts of forging steels. Each part is described by the six attributes A1–A6, which include numeric, symbolic, and fuzzy data. The data are shown in Table 1. Attribute A1 is a categorical type of symbolic data, attributes A2 and A4 are qualitative types of symbolic data, attributes A3 and A5 are fuzzy data, and attribute A6 is numeric. According to a priori knowledge, the cluster number is c = 3. Because the data in Table 1 are of mixed-variable types, we use the MVFCM algorithm to group the part families. The results of MVFCM on the data set of Table 1 are shown in Table 2. We find the part families {1, 2, 6, 9}, {3, 4, 5, 7} and {8, 10}, which is a good result. Although we have a priori knowledge of c = 3 here, c is usually unknown in real cases. Finding an optimal cluster number c is called a validity problem, and a validity index is usually used for this purpose. We treat this problem for the MVFCM algorithm in the next section so that it can be practically used in GT applications.
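Before MVFCM can be run on Example 1, the mixed records of Table 1 have to be encoded in some form; one possible encoding of the first part is sketched below, with field names and container choices that are illustrative rather than taken from the paper.

```python
# One possible in-memory encoding of part 1 of Table 1 (illustrative only).
part_1 = {
    "A1": "153",                            # categorical symbolic value
    "A2": {"Quenched"},                     # qualitative symbolic: set of events
    "A3": (6, 0, 3, 3),                     # fuzzy TFN: (center, inner diameter,
                                            #             left spread, right spread)
    "A4": {"T1", "T2", "T3", "T4", "T5"},   # qualitative symbolic: set of events
    "A5": (10, 0, 2, 2),                    # fuzzy TFN
    "A6": 23.0,                             # numeric; may be treated as the
                                            # degenerate TFN (23, 0, 0, 0)
}
```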
Table 2
Clustering results from the MVFCM algorithm for the data in Table 1

Part   μ1       μ2       μ3
1      0.7635   0.0663   0.1702
2      0.9147   0.0231   0.0622
3      0.1156   0.6149   0.2695
4      0.0195   0.9529   0.0276
5      0.1624   0.7047   0.1329
6      0.7209   0.0581   0.2210
7      0.0286   0.9309   0.0405
8      0.0394   0.0182   0.9424
9      0.9082   0.0305   0.0613
10     0.0359   0.0213   0.9428
5. Optimal cell number in part–machine cell formation

After a fuzzy c-partition {\mu_1, ..., \mu_c} is provided by a fuzzy clustering algorithm such as FCM or MVFCM, we may ask whether the c-partition accurately represents the structure of the data set. This is the cluster-validity problem (see Bezdek, 1974b; Yang, 1993). Since most fuzzy clustering methods need to pre-assume the cluster number c, a validity index for finding an optimal c that describes the data structure well has become one of the most studied topics in cluster validity. We briefly review the three most-used FCM-based indexes and then extend them to the proposed MVFCM.

(a) The first validity index associated with FCM was the partition coefficient (PC) (Bezdek, 1974a), defined by

PC(c) = \frac{1}{n} \sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^{2},    (13)

where 1/c \le PC(c) \le 1. An optimal cluster number c is obtained by solving \max_{2 \le c \le n-1} PC(c) to produce the best clustering performance.

(b) The partition entropy (PE) (Bezdek, 1974b) was defined by

PE(c) = -\frac{1}{n} \sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij} \log_2 \mu_{ij},    (14)
where 0 \le PE(c) \le \log_2 c. An optimal c is obtained by solving \min_{2 \le c \le n-1} PE(c) to produce the best clustering performance.

(c) A validity function proposed by Xie and Beni (XB) (1991) was defined by

XB(c) = \frac{J_m(\mu, a)/n}{\mathrm{Sep}(a)} = \frac{\sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^{m} \|x_j - a_i\|^2}{n \min_{i,j} \|a_i - a_j\|^2},    (15)

where J_m(\mu, a) is a compactness measure and Sep(a) is a separation measure. An optimal c is found by solving \min_{2 \le c \le n-1} XB(c) to produce the best clustering performance.

The above three indexes are the most-used validity indexes. The indexes PC and PE can be directly used for MVFCM with mixed-feature data. However, the XB index uses compactness and separation measures that involve the data types, so it needs to be modified when used with MVFCM. It can be modified as follows:

XB(c) = \frac{\sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^{m} \sum_{k'\,\text{symbolic}} \big( \sum_p d^2(X_{jk'}, A_{k'p|i})\, e_{k'p|i} \big)}{n \min_{i,j} \sum_{k'\,\text{symbolic}} \big( \sum_p d^2(A_{k'p|i}, A_{k'p|j})\, e_{k'p|i} \big)} + \frac{\sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^{m} \sum_{k\,\text{fuzzy}} d_f^2(X_{jk}, A_{ik})}{n \min_{i,j} \sum_{k\,\text{fuzzy}} d_f^2(A_{ik}, A_{jk})}.
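For reference, a short sketch of the three indexes follows; PC and PE operate on the membership matrix only, while the XB function is written for numeric data and would use the symbolic and fuzzy dissimilarities of the modified index above when applied to MVFCM. Function names and the small numerical safeguards are illustrative.

```python
import numpy as np

def partition_coefficient(U):
    """Eq. (13): PC(c); maximize over c.  U has shape (c, n)."""
    return float((U ** 2).sum() / U.shape[1])

def partition_entropy(U):
    """Eq. (14): PE(c); minimize over c (0 * log 0 taken as 0)."""
    P = np.clip(U, 1e-12, 1.0)
    return float(-(U * np.log2(P)).sum() / U.shape[1])

def xie_beni(U, A, X, m=2.0):
    """Eq. (15) for numeric data: compactness over separation; minimize over c."""
    d2 = ((X[None, :, :] - A[:, None, :]) ** 2).sum(axis=2)
    compact = ((U ** m) * d2).sum() / U.shape[1]
    sep = min(((A[i] - A[j]) ** 2).sum()
              for i in range(len(A)) for j in range(len(A)) if i != j)
    return float(compact / sep)
```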
Next, we continue Example 1 of Section 4 using these validity indexes.

Example 1 (Continued). We run the data set of Table 1 from Example 1 in Section 4 using the indexes PC, PE, and XB for c = 2 to c = 9. The results are shown in Table 3. We find that PC and PE give the optimal cluster number c* = 2, while XB gives c* = 7. In this case, we choose c* = 2 as the optimal cluster number estimate. We then implement MVFCM on the data set with c = 2. The results are shown in Table 4. We obtain the optimal part families {3, 4, 5, 7} and {1, 2, 6, 8, 9, 10}.
Table 3
Validity index results from MVFCM for the data in Table 1

Cluster number   PC(c)    PE(c)    XB(c)
c = 2            0.8008   0.4846   2.0306
c = 3            0.7422   0.7031   2.5828
c = 4            0.7350   0.7467   3.1896
c = 5            0.7422   0.8013   1.8523
c = 6            0.7931   0.6804   1.3022
c = 7            0.7488   0.8035   0.3652
c = 8            0.6991   0.5028   0.6705
c = 9            0.6210   0.5042   1.0154
Table 4
Clustering results from MVFCM when c = 2

Part   μ1       μ2
1      0.0856   0.9144
2      0.0615   0.9385
3      0.8325   0.1675
4      0.9771   0.0229
5      0.7794   0.2206
6      0.0296   0.9704
7      0.9519   0.0481
8      0.2032   0.7968
9      0.1029   0.8971
10     0.2711   0.7289
6. Applications to part–machine cell formation

We constructed the MVFCM algorithm for mixed-variable data in Section 4 and gave an estimate of the optimal cluster number c based on the validity indexes in Section 5; Example 1 demonstrated its efficiency. In this section, we apply MVFCM to part family and machine CF in real cases.

6.1. A part family application

We consider a real example concerning the production of forging steels used in shipbuilding. There are five forging-steel parts: screwshaft, steel bar, square steel, countershaft, and hoist plate. Each part has seven principal attributes: section equivalent, heat treatment technicality, carbon equivalent, yield point, hardness, tensile strength, and reduction of area. The data set of the five parts with mixed-variable data is shown in Table 5. The first attribute is the section equivalent, which describes the size of the part (in mm); this section size is the main outer geometric characteristic of a forging-steel part. It is numeric data.
Table 5
Five parts with mixed-variable attributes for forging steels used in shipbuilding

Forging part            Section      Heat treatment                        Carbon       Yield point   Hardness      Tensile          Area reduction
                        equivalent   technicality                          equivalent   (MPa)         (HB)          strength (MPa)   rate (%)
Screwshaft              780          830 °C normalized, 600 °C tempered    0.48–0.56    340           [1,30,15,0]   655              40
Steel bar               400          890 °C normalized, 600 °C tempered    0.16–0.23    265           [1,10,25,0]   455              50
Square steel            500          860 °C normalized, 600 °C tempered    0.37–0.45    265           [1,25,0,10]   545              45
Idler or countershaft   680          830 °C normalized, 600 °C tempered    0.48–0.56    350           [1,30,15,0]   690              40
Hoist plate             300          890 °C normalized, 600 °C tempered    0.16–0.23    280           [1,10,25,0]   490              50
The second attribute is the heat treatment technicality; different tempered heat treatments influence the inner structure of the forging steel. It is qualitative-type symbolic data. The third attribute is the carbon equivalent percentage, an impact factor for the hardness and other properties of the forging steel. It is quantitative-type symbolic data. The fourth attribute is the yield point, at which the material begins to deform plastically; its measurement unit is N/mm^2, denoted MPa. It is quantitative-type symbolic data. The fifth attribute is hardness, measured in Brinell hardness (HB). It is fuzzy data. The final two attributes are the tensile strength and the area reduction rate; these are quantitative-type symbolic data.

We now apply the dissimilarity measure defined in Section 3 and the MVFCM algorithm described in Section 4 to the data set of Table 5. The most complicated calculation is that of the symbolic-data cluster center and its memberships. From Table 5, the heat treatment technicality, carbon equivalent, yield point, tensile strength, and area reduction rate are symbolic data. Next, we demonstrate the calculation of a cluster center. Assume the cluster number is c = 3 and the initial memberships of the five parts in the first cluster are \mu_{11} = 0.26, \mu_{12} = 0.02, \mu_{13} = 0.94, \mu_{14} = 0.37, and \mu_{15} = 0.16. We work out the structure of the cluster center for the heat-treatment-technicality component. For this cluster center, the total weight is 0.26 + 0.02 + 0.94 + 0.37 + 0.16 = 1.75. Thus, for the heat treatment technicality, the memberships of the cluster center are

830 °C normalized, 600 °C tempered: (0.26 + 0.37)/1.75 = 0.36,
890 °C normalized, 600 °C tempered: (0.02 + 0.16)/1.75 = 0.1029,
860 °C normalized, 600 °C tempered: 0.94/1.75 = 0.5371.

Similarly, we can find the memberships of the other symbolic data components of the cluster center, as shown in Table 6.
Table 6
Structure of the symbolic data components for a cluster center

Heat treatment technicality
  830 °C normalized, 600 °C tempered    0.36
  890 °C normalized, 600 °C tempered    0.1029
  860 °C normalized, 600 °C tempered    0.5371

Carbon equivalent
  0.48–0.56                             0.36
  0.16–0.23                             0.1029
  0.37–0.45                             0.5371

Yield point σ_s (MPa)
  340                                   0.1486
  265                                   0.5486
  350                                   0.2114
  280                                   0.0914

Tensile strength σ_b (MPa)
  655                                   0.1486
  455                                   0.0114
  545                                   0.5371
  690                                   0.2114
  490                                   0.0914

Area reduction rate (%)
  40                                    0.36
  50                                    0.1029
  45                                    0.5371
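The worked calculation above (and hence Table 6) can be checked with a few lines of Python; note that the example weights the parts by the memberships themselves, whereas Eq. (8) would use \mu_{ij}^m.

```python
# Reproducing the heat-treatment memberships of Table 6 from the initial
# memberships mu_11..mu_15 = 0.26, 0.02, 0.94, 0.37, 0.16.
mu = {"screwshaft": 0.26, "steel bar": 0.02, "square steel": 0.94,
      "countershaft": 0.37, "hoist plate": 0.16}
heat = {"screwshaft": "830C norm./600C temp.", "steel bar": "890C norm./600C temp.",
        "square steel": "860C norm./600C temp.", "countershaft": "830C norm./600C temp.",
        "hoist plate": "890C norm./600C temp."}
total = sum(mu.values())                  # = 1.75
center = {}
for part, event in heat.items():
    center[event] = center.get(event, 0.0) + mu[part] / total
print(center)   # 830C -> 0.36, 890C -> 0.1029..., 860C -> 0.5371... (Table 6)
```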
When we implement MVFCM, we need to assign the cluster number c. Based on the discussion in Section 5, we can use the validity indexes to find an optimal c. Thus, we implement MVFCM on the data of Table 5 and compute the validity indexes for c = 2, 3 and 4.
The results are shown in Table 7. We find that PC and PE give an optimal c* = 2, while XB gives c* = 3. In this case, we choose c = 2 as the cluster number estimate. For c = 2, the final clustering results are shown in Table 8: the screwshaft and the countershaft are in one cluster, while the steel bar, square steel and hoist plate are in the other. The grouping results are quite reasonable.

Table 7
Validity indexes for the steel data

Cluster number   PC(c)    PE(c)    XB(c)
c = 2            0.9356   0.1869   1.6757
c = 3            0.8618   0.4159   0.3787
c = 4            0.2750   1.9624   0.4369

Table 8
Membership results for the steel data

Part           μ1       μ2
Screwshaft     0.9883   0.0117
Steel bar      0.0276   0.9724
Square steel   0.1236   0.8764
Countershaft   0.9902   0.0098
Hoist plate    0.0045   0.9955

6.2. A part–machine cell formation application

We consider a real application of the MVFCM algorithm to the production of cast aluminum alloys and forging steels based on eight machine processes, where the part–machine matrix has mixed-variable data types. There are seven parts, ZL207, 50SiMn, 35SiMn, ZL205A, ZL402, 42SiMn, and ZL202, and eight machines corresponding to eight different processes: fusion, air impermeability, tensile test, cast, impact test, crack-arresting fracture, heat treatment, and cooling. The part–machine matrix is shown in Table 9. We now construct part families, machine groups, and part–machine cells for the mixed-variable data of Table 9.

Each of the seven cast aluminum alloy and forging steel parts has eight (machine) attribute components. These attributes are of mixed-variable data types, so the MVFCM algorithm can be used to form part families. We first implement MVFCM with the validity indexes PC, PE and XB for c = 2 to c = 6. The results are shown in Table 10. We find that PE and XB give an optimal c* = 4, while PC gives c* = 5. In this case, we choose c = 4 as the cluster number estimate. The final MVFCM memberships of the seven parts in the four families c1, c2, c3, and c4 are shown in Table 11, giving the four part families {50SiMn, 35SiMn, 42SiMn}, {ZL402, ZL202}, {ZL207}, and {ZL205A}.

We continue with the machine grouping. There are eight machines with seven attribute components in Table 9. However, different attribute data types correspond to the same component for different machines; for example, machine 4 has the attribute ''S'' and machine 6 has the attribute ''[20,50]'' for the component ZL402. In such cases, we cannot use MVFCM directly on the machine data. Fortunately, we can transform these mixed-variable machine data into binary data: an entry is set to 1 if the part is processed by the machine and 0 otherwise. For example, machine 8 performs the ''OC'' (oil cooled) process on part 50SiMn, so that entry is set to 1. The transformed binary machine data are shown in Table 12.
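The binarization rule is simple enough to state as code; in the sketch below only two machine rows of Table 9 are transcribed for illustration, and the rule "non-responsive (N) maps to 0, anything else to 1" is applied entry by entry.

```python
# Binarizing mixed part–machine entries from Table 9: a machine operates on a
# part whenever the corresponding entry is not 'N' (non-responsive).
table9_rows = {
    4: ["S", "N", "N", "S", "S", "N", "S,J"],      # machine 4 over the 7 parts
    8: ["N", "OC", "OC", "N", "N", "OC", "N"],     # machine 8 over the 7 parts
}
binary_rows = {mach: [0 if entry == "N" else 1 for entry in row]
               for mach, row in table9_rows.items()}
print(binary_rows)   # machine 4 -> [1, 0, 0, 1, 1, 0, 1]; machine 8 -> [0, 1, 1, 0, 0, 1, 0]
```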
Table 9
Part–machine matrix for the production of cast aluminum alloys and forging steels

Machine   ZL207     50SiMn   35SiMn   ZL205A    ZL402     42SiMn   ZL202
1         603–637   N        N        544–633   N         N        N
2         N         N        N        N         [40,70]   N        [60,90]
3         N         14       14       3         4         14       N
4         S         N        N        S         S         N        S,J
5         N         39       39       N         N         29       N
6         N         N        N        N         [20,50]   N        [20,50]
7         N         T        T        N         N         T        N
8         N         OC       OC       N         N         OC       N

N: non-responsive; S: sand-casting; J: alloy-casting; T: tempering; OC: oil cooled.
Table 10
Validity indexes for the seven parts from Table 9

Cluster number   PC(c)    PE(c)    XB(c)
c = 2            0.8524   0.3544   0.3342
c = 3            0.8875   0.3383   0.0886
c = 4            0.9195   0.2489   0.0409
c = 5            1.0143   0.7345   0.0421
c = 6            0.9850   0.8179   0.0441

Because binary data are a special case of mixed-variable data, MVFCM can still be used on them. Thus, we implement MVFCM on the data of Table 12 and compute the validity indexes for c = 2, 3, 4, and 5. The results are shown in Table 13. We find that PC, PE, and XB all give the optimal cluster number c* = 4. We then implement MVFCM on the eight machine data of Table 12 with c = 4. The final MVFCM memberships with c = 4 are shown in Table 14, giving the four machine groups {5, 7, 8}, {2, 6}, {1, 4} and {3}.

With the four part families {50SiMn, 35SiMn, 42SiMn}, {ZL402, ZL202}, {ZL207}, and {ZL205A} and the four machine groups {5, 7, 8}, {2, 6}, {1, 4} and {3}, we can construct the part–machine cells. These four cells are as follows:

Cell 1: {50SiMn, 35SiMn, 42SiMn} with {5, 7, 8};
Cell 2: {ZL402, ZL202} with {2, 6};
Cell 3: {ZL207, ZL205A} with {1, 4};
Cell 4: {50SiMn, 35SiMn, 42SiMn} with {3}.

The results are shown in Table 15. However, it is reasonable to merge cell 1 and cell 4 into one cell. In this case, we have three part–machine cells:

Cell 1: {50SiMn, 35SiMn, 42SiMn} with {3, 5, 7, 8};
Cell 2: {ZL402, ZL202} with {2, 6};
Cell 3: {ZL207, ZL205A} with {1, 4}.

The results are shown in Table 16. These are good final results.
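Rearranging the binary matrix of Table 12 according to the three final cells makes the block structure of Table 16 visible; the following sketch does exactly that reordering (the few nonzero entries outside the diagonal blocks correspond to the exceptional operations also visible in Table 16).

```python
import numpy as np

# Binary part–machine matrix of Table 12 (rows = machines 1..8,
# columns = parts ZL207, 50SiMn, 35SiMn, ZL205A, ZL402, 42SiMn, ZL202).
B = np.array([[1, 0, 0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 0, 1],
              [0, 1, 1, 1, 1, 1, 0],
              [1, 0, 0, 1, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 0],
              [0, 0, 0, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1, 0]])
parts = ["ZL207", "50SiMn", "35SiMn", "ZL205A", "ZL402", "42SiMn", "ZL202"]
machines = [1, 2, 3, 4, 5, 6, 7, 8]
# Part and machine orders of the three final cells, as in Table 16
part_order = ["50SiMn", "35SiMn", "42SiMn", "ZL402", "ZL202", "ZL207", "ZL205A"]
machine_order = [3, 5, 7, 8, 2, 6, 1, 4]
rows = [machines.index(m) for m in machine_order]
cols = [parts.index(p) for p in part_order]
print(B[np.ix_(rows, cols)])   # block structure of the three part–machine cells
```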
Table 11
MVFCM memberships of the four part families for the seven parts of Table 9

Part family   ZL207     50SiMn    35SiMn    ZL205A    ZL402     42SiMn    ZL202
c1            0.00004   0.99997   0.99994   0.00004   0.03930   0.99997   0.02665
c2            0.00005   0.00001   0.00002   0.00008   0.82113   0.00001   0.86311
c3            0.99973   0.00001   0.00003   0.00025   0.05965   0.00001   0.06302
c4            0.00018   0.00001   0.00001   0.99963   0.07992   0.00001   0.04722
Table 12
Transformed binary data for the eight machines from Table 9

Machine   ZL207   50SiMn   35SiMn   ZL205A   ZL402   42SiMn   ZL202
1         1       0        0        1        0       0        0
2         0       0        0        0        1       0        1
3         0       1        1        1        1       1        0
4         1       0        0        1        1       0        1
5         0       1        1        0        0       1        0
6         0       0        0        0        1       0        1
7         0       1        1        0        0       1        0
8         0       1        1        0        0       1        0
Table 13
Validity indexes for the eight machines from Table 12

Cluster number   PC(c)    PE(c)    XB(c)
c = 2            0.9437   0.4811   0.5245
c = 3            0.9519   0.5002   0.2070
c = 4            1.0328   0.3119   0.0716
c = 5            0.9548   0.5816   0.0952
Table 14
MVFCM memberships of the four groups for the eight machines from Table 12

          Machines
Group     1         2         3         4         5         6         7         8
c1        0.01571   0.00098   0.00052   0.08492   0.99993   0.00099   0.99993   0.99994
c2        0.02060   0.99652   0.00021   0.32745   0.00001   0.99652   0.00001   0.00001
c3        0.80357   0.00150   0.00023   0.46581   0.00003   0.00149   0.00003   0.00002
c4        0.16012   0.00100   0.99904   0.12182   0.00003   0.00100   0.00003   0.00003
Table 15
Part–machine cell formation with four cells based on MVFCM

Machine   50SiMn   35SiMn   42SiMn   ZL402     ZL202     ZL207     ZL205A
5         39       39       29       N         N         N         N
7         T        T        T        N         N         N         N
8         OC       OC       OC       N         N         N         N
2         N        N        N        [40,70]   [60,90]   N         N
6         N        N        N        [20,50]   [20,50]   N         N
1         N        N        N        N         N         603–637   544–633
4         N        N        N        S         S,J       S         S
3         14       14       14       4         N         N         3
Table 16
Part–machine cell formation with three cells based on MVFCM

Machine   50SiMn   35SiMn   42SiMn   ZL402     ZL202     ZL207     ZL205A
3         14       14       14       4         N         N         3
5         39       39       29       N         N         N         N
7         T        T        T        N         N         N         N
8         OC       OC       OC       N         N         N         N
2         N        N        N        [40,70]   [60,90]   N         N
6         N        N        N        [20,50]   [20,50]   N         N
1         N        N        N        N         N         603–637   544–633
4         N        N        N        S         S,J       S         S
7. Conclusions

GT is a useful way of increasing productivity for manufacturing high-quality products and improving the flexibility of manufacturing systems. CF is an important GT procedure: it forms part families, machine groups and part–machine cells. There are many useful CF methods in the literature; however, most of them handle only numeric data, especially binary data. In fact, part or machine data are often presented not only in numeric form but also in other types, such as symbolic and fuzzy data. Recently, Yang et al. (2004) proposed a fuzzy clustering algorithm for these mixed types of data, called mixed-variable fuzzy c-means (MVFCM). Our objective in this paper was to apply the MVFCM algorithm to CF when the part–machine matrix has mixed-variable data types. Because real data in cellular manufacturing systems vary, numeric data may not be enough for the part–machine matrix in real cases; symbolic and fuzzy data may also be present. In these cases, the MVFCM algorithm is very useful. Such data have been described and treated in this paper, and we have given several real examples in which MVFCM is used for CF with mixed-variable data types.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments to improve the presentation of the paper. This work was supported in part by the National Science Council of Taiwan under Grant NSC-92-2118-M-033-001.
References

Bezdek, J.C., 1974a. Numerical taxonomy with fuzzy sets. Journal of Mathematical Biology 1, 57–71.
Bezdek, J.C., 1974b. Cluster validity with fuzzy sets. Journal of Cybernetics 3, 58–73.
Bezdek, J.C., 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York.
Burbidge, J.L., 1991. Production flow analysis for planning group technology. Journal of Operations Management 10 (1), 5–27.
Carothers, D., et al., 2004. An overview of the modified Picard method. Available from: http://www.math.jmu.edu/jim/expository.pdf.
Chan, H.M., Milner, D.A., 1982. Direct clustering algorithm for group formation in cellular manufacture. Journal of Manufacturing Systems 1 (1), 64–76.
Chandrasekaran, M.P., Rajagopalan, R., 1986. MODROC: An extension of rank order clustering of group technology. International Journal of Production Research 24 (5), 1221–1233.
Chu, C.H., Hayya, J.C., 1991. A fuzzy clustering approach to manufacturing cell formation. International Journal of Production Research 29 (7), 1475–1487.
Diday, E., 1988. The symbolic approach in clustering. In: Bock, H.H. (Ed.), Classification and Related Methods of Data Analysis. North-Holland, Amsterdam.
El-Sonbaty, Y., Ismail, M.A., 1998. Fuzzy clustering for symbolic data. IEEE Transactions on Fuzzy Systems 6 (2), 195–204.
Gindy, N.N.G., Ratchev, T.M., Case, K., 1995. Component grouping for GT applications—a fuzzy clustering approach with validity measure. International Journal of Production Research 33 (9), 2493–2509.
Gowda, K.C., Diday, E., 1992. Symbolic clustering using a new similarity measure. IEEE Transactions on Systems, Man, and Cybernetics 22, 368–378.
Gowda, K.C., Ravi, T.V., 1995. Divisive clustering of symbolic objects using the concepts of both similarity and dissimilarity. Pattern Recognition 28, 1277–1282.
Güngör, Z., Arikan, F., 2000. Application of fuzzy decision making in part–machine grouping. International Journal of Production Economics 63, 181–193.
Gupta, T., Seifoddini, H., 1990. Production data based similarity coefficient for machine-component grouping decisions in the design of a cellular manufacturing system. International Journal of Production Research 28 (7), 1247–1269.
Hathaway, R.J., Bezdek, J.C., Pedrycz, W., 1996. A parametric model for fusing heterogeneous fuzzy data. IEEE Transactions on Fuzzy Systems 4 (3), 270–281.
Josien, K., Liao, T.W., 2000. Integrated use of fuzzy c-means and fuzzy KNN for GT part family and machine cell formation. International Journal of Production Research 38 (15), 3513–3536.
King, J.R., 1980. Machine-component grouping in production flow analysis: An approach using a rank order clustering algorithm. International Journal of Production Research 18 (2), 213–232.
Liao, T.W., 2001. Classification and coding approaches to part family formation under a fuzzy environment. Fuzzy Sets and Systems 122 (3), 425–441.
Liao, T.W., Zhang, Z., Mount, C.R., 1998. Similarity measures for retrieval in case-based reasoning systems. Applied Artificial Intelligence 12 (4), 267–288.
McCormick, W.T., Schweitzer, P.J., White, T.W., 1972. Problem decomposition and data reorganization by a clustering technique. Operations Research 20 (5), 993–1009.
Molleman, E., Slomp, J., Rolefes, S., 2002. The evolution of a cellular manufacturing system—a longitudinal case study. International Journal of Production Economics 75, 305–322.
Mosier, C.T., 1989. An experiment investigating the application of clustering procedures and similarity coefficients to the GT machine cell formation problem. International Journal of Production Research 27 (10), 1811–1835.
Plaquin, M.F., Pierreval, H., 2000. Cell formation using evolutionary algorithms with certain constraints. International Journal of Production Economics 64, 267–278.
Shafer, S.M., Rogers, D.F., 1993. Similarity and distance measures for cellular manufacturing, Part 1: A survey. International Journal of Production Research 31 (5), 1133–1142.
Singh, N., 1993. Design of cellular manufacturing systems—an invited review. European Journal of Operational Research 69 (3), 284–291.
Singh, N., Rajamani, D., 1996. Cellular Manufacturing Systems. Chapman & Hall, New York.
Venugopal, V., 1999. Soft-computing-based approaches to the group technology problem: A state-of-the-art review. International Journal of Production Research 37, 3335–3357.
Wei, J.C., Kern, G.M., 1989. Commonality analysis: A linear cell clustering algorithm for group technology. International Journal of Production Research 27 (12), 2053–2062.
Wu, K.L., Yang, M.S., 2002. Alternative c-means clustering algorithms. Pattern Recognition 35, 2267–2278.
Xie, X.L., Beni, G., 1991. A validity measure for fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 13, 841–847.
Xu, H., Wang, H.P., 1989. Part family formation for GT applications based on fuzzy mathematics. International Journal of Production Research 27 (9), 1637–1651.
Yang, M.S., 1993. A survey of fuzzy clustering. Mathematical and Computer Modelling 18, 1–16.
Yang, M.S., Ko, C.H., 1996. On a class of fuzzy c-numbers clustering procedures for fuzzy data. Fuzzy Sets and Systems 84, 49–60.
Yang, M.S., Hwang, P.Y., Chen, D.H., 2004. Fuzzy clustering algorithms for mixed feature variables. Fuzzy Sets and Systems 141, 301–317.