Signal Processing 82 (2002) 461 – 472 www.elsevier.com/locate/sigpro
Color image segmentation using fuzzy C-means and eigenspace projections Jar-Ferr Yang ∗ , Shu-Sheng Hao, Pau-Choo Chung Department of Electrical Engineering, National Cheng Kung University, 1, University Road, Tainan, Taiwan, Republic of China Received 7 September 2000; received in revised form 26 June 2001; accepted 16 November 2001
Abstract

In this paper, we propose two eigen-based fuzzy C-means (FCM) clustering algorithms to accurately segment the desired images, which have the same color as the pre-selected pixels. From the selected color pixels, we can first divide the color space into principal and residual eigenspaces. Combining the eigenspace transform and the FCM method, we can effectively achieve color image segmentation. The separate eigenspace FCM (SEFCM) algorithm independently applies the FCM method to the principal and residual projections to obtain two intermediate segmented images and combines them by logically selecting their common pixels. Jointly considering the principal and residual eigenspace projections, we then suggest the coupled eigen-based FCM (CEFCM) algorithm, which uses an eigen-based membership function in the clustering procedure. Simulations show that the proposed SEFCM and CEFCM algorithms can successfully segment the desired color image with substantial accuracy. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Color image segmentation; Fuzzy C-means; Principal component transformation
1. Introduction

Wireless and Internet communications have become more and more popular for multimedia information retrieval. However, the bandwidth of wireless and Internet networks varies with the number of users, access facilities, media types, and the amount of data. In order to e
This research was partially supported by the National Science Council under Contract #NSC-88-2213-E-006-104 and by the Image/Graphics Technology Research and Application Development Project of the Institute for Information Industry sponsored by MOEA, Taiwan, Republic of China. ∗ Corresponding author. Tel.: +886-6-2763874; fax: +886-6-2345482. E-mail address:
[email protected] (J.-F. Yang).
standards with flexible, interactive and scalable advantages have been proposed for multimedia transmission and storage [14]. Instead of frame-based coding, MPEG-4 video can be designed with video object planes, which are encoded or decoded separately according to the user's selection. In order to fully utilize the multiple-object and interactivity features of the MPEG-4 standards, the most important and di
of the desired color image, we might help the search engine retrieve the desired image picture or video sequence correctly. Recently, many image segmentation algorithms have been proposed according to motion [2], optic flow [13], region edge [4], and color information [3,6,9,11,19–21,23,25,27,28]. Generally, the color information appearing in an image provides an important feature for humans to cluster the desired objects. Based on color information, many techniques have been proposed, including fuzzy C-means (FCM) [19,29], neural networks [9,20], region growing and merging [21,25], edge enhancement [6,23], color normalization [11], and color histograms and clustering [3,27,28]. Recently, several researchers have utilized the principal component transformation (PCT) to successfully extract the desired parameters [5,8,24,30]. Color image segmentation using the PCT approach requires a manual selection of desired color samples. Although the PCT methods exhibit good performance in color image segmentation, we still need to face the threshold and data smoothing problems. By using an iterative clustering technique, the fuzzy C-means (FCM) methods [19,29,32] can automatically cluster the color images without requiring any prior threshold setting. The PCT-based segmentation approaches detect the pixels that have a large projection along the principal eigenvector of the color components. Generally, the PCT concept gives a detected component higher preference once it has a larger projection onto the principal eigenvector. However, a strong projection out of the normal range does not always imply that the tested pixel has the same color as the desired one. To achieve better segmentation, we should use all eigenspace projections. In this paper, we propose two algorithms that combine eigenspace projections and the FCM concept for color image segmentation. In Section 2, we review the general eigenspace projections and introduce the conventional FCM algorithm for data clustering.
In Section 3, we first propose the separate eigenspace FCM (SEFCM) algorithm, which applies the FCM algorithm separately to the principal and residual eigenspace projections. To achieve better performance, we then suggest the coupled eigen-based FCM (CEFCM) method by introducing an eigen-based membership function directly embedded in the FCM algorithm. In
Section 4, some simulation results are shown to verify the effectiveness of the proposed methods for color image segmentation. Finally, a brief conclusion is addressed in Section 5.

2. PCT and FCM

The principal component transformation (PCT), which is known as the Karhunen–Loève (KL) transformation, can achieve the optimal energy compaction for data representation [7]. For image segmentation, the PCT could help to identify the most likely components [5,8,24,30,32]. First, the user is required to manually select, by mouse clicks, several blocks of the desired color image. Then, the PCT-based segmentation algorithms will select the color image which possesses the same characteristics as the selected block pixels. Let the kth sample pixel of the selected blocks in a color space be represented as

x_k = [x_{k,1}  x_{k,2}  x_{k,3}]^T,  (1)

where x_{k,1}, x_{k,2}, and x_{k,3} are the color components. If we choose the RGB color space, then x_{k,1}, x_{k,2}, and x_{k,3} represent the red, green, and blue gray levels of the kth sample pixel, respectively. From the selected M samples, we first compute the correlation matrix R̂_x as

R̂_x = (1/M) Σ_{k=1}^{M} x_k x_k^T.  (2)
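As a concrete sketch (in NumPy, with variable names of our own choosing, not the paper's), the sample correlation matrix of (2) and the eigen-decomposition used in the following equations might be computed as:

```python
import numpy as np

def correlation_matrix(samples):
    """Estimate the correlation matrix R_x of (2) from M selected
    color pixels, given as an (M, 3) array of [x1, x2, x3] rows."""
    samples = np.asarray(samples, dtype=float)
    M = samples.shape[0]
    # R_x = (1/M) * sum_k x_k x_k^T, written as one matrix product.
    return samples.T @ samples / M

# Example: a few RGB pixels clustered around a reddish color.
pixels = np.array([[200.0, 40.0, 30.0],
                   [210.0, 50.0, 35.0],
                   [190.0, 45.0, 25.0]])
R = correlation_matrix(pixels)

# Eigen-decomposition of the symmetric R; eigh returns ascending
# eigenvalues, so reverse to get lambda_1 >= lambda_2 >= lambda_3.
vals, vecs = np.linalg.eigh(R)
vals, vecs = vals[::-1], vecs[:, ::-1]
```

Here `vecs[:, 0]` plays the role of the principal eigenvector w_1 and `vals` holds the eigenvalues in the descending order assumed by the text.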
After the eigen-decomposition, the correlation matrix stated in (2) can be expressed by

R̂_x = Σ_{i=1}^{3} λ_i w_i w_i^T,  (3)
where λ_1 ≥ λ_2 ≥ λ_3 are the eigenvalues listed in descending order and w_i, for i = 1, 2, and 3, are the corresponding eigenvectors. Hence, the eigenvector w_1 corresponding to the largest eigenvalue is called the principal eigenspace. Statistically, the projection of the desired color pixels onto w_1 will be the largest due to the energy compaction of the KL transform. Since the desired color pixels of the image are mostly concentrated along the direction of the principal eigenspace, the PCT methods test all the color pixels
by projecting them onto the principal component to identify the desired color image. The larger the projection, the closer the tested color pixel is to the principal component. Thus, in the PCT approach, we should find a threshold of the projection to determine whether the tested pixel is the desired one or not. The performance of the PCT-based approach will be degraded if the threshold is not properly selected. Thus, for the PCT-based segmentation algorithms, we need to further determine a threshold through statistical analysis of eigen-structures [5,8,24,30,32]. The KL transformation, which computes all the projections of the eigenvectors, provides the optimal data representation in the mean-square sense [7]. Due to the orthogonality of the eigenvectors, the maximum projection of a pixel onto the first principal eigenvector is physically identical to the minimum projection onto the least eigenvectors. Hence, the eigenvectors associated with the least eigenvalues also provide useful information for signal modeling and parameter estimation [12,17,18,22]. In this paper, we jointly consider the projections of w_1, w_2, and w_3 for color image segmentation. Hereafter, we further define the residual eigenspaces, which are orthogonal to the principal eigenspace w_1, to be spanned by the eigenvectors w_2 and w_3.

The fuzzy C-means (FCM) algorithm is an iterative unsupervised clustering algorithm, which adjusts group representative centers to best partition the image into several distinct classes [1,10,31]. The clustering process is accomplished by minimizing an objective function defined by some similarity measure of the data samples. The objective function can be expressed as follows:

J_m(U; V; X) = Σ_{q=1}^{N} Σ_{j=1}^{c} u_{jq}^m · dist²(x_q, C_j),  (4)

where N is the number of data samples, c is the number of clusters, and m is an arbitrarily chosen FCM weighting exponent, which must be greater than one. In (4), X = {x_1, x_2, ..., x_N} denotes a set of unlabeled vectors and V = {C_1, C_2, ..., C_c} represents the unknown prototypes, which are known as the cluster centers. The vectors x_q and C_j are both in the k-dimensional real Euclidean space R^k. Hence, the similarity measure dist(x_q, C_j) in (4) can be specified by either the Euclidean distance or the Mahalanobis
distance. The fuzzy c-partition matrix U = {u_jq} is of size c × N. The fuzzy membership value u_jq, which defines the degree to which x_q belongs to the jth cluster, satisfies 0 ≤ u_jq ≤ 1 for all j and q. Unlike traditional classification algorithms, the FCM algorithm assigns every test sample to each cluster in a fuzzy fashion. The fuzzy membership value describes how closely a sample resembles an ideal element of a population. The imprecision caused by vagueness or ambiguity is characterized by the membership value. By including the concept of fuzziness, the FCM algorithm computes each class center more precisely and with higher robustness to noise. To normalize the membership function, we should make Σ_{j=1}^{c} u_jq = 1 for all q and 0 < Σ_{q=1}^{N} u_jq < N for all j. For the Euclidean distance, the similarity measure dist(x_q, C_j) can be expressed as

dist_jq = dist(x_q, C_j) = [Σ_{l=1}^{k} (x_{q,l} − C_{j,l})²]^{1/2},  (5)
where k is the number of feature parameters. For the Mahalanobis distance, the similarity measure can be expressed by an inner-product norm as

dist_jq = dist²(x_q, C_j) = (x_q − C_j)^T A_j (x_q − C_j) = Q_j^T A_j Q_j,  (6)

In (6), A_j is a k × k positive definite matrix derived from the jth cluster, and Q_j = x_q − C_j. When A_j = I, the Mahalanobis distance becomes the Euclidean norm. For m > 1 and x_q ≠ C_j, the minimization of the objective function defined in (4) leads to

u_jq = (dist_jq)^{−2/(m−1)} / Σ_{i=1}^{c} (dist_iq)^{−2/(m−1)}  for all j, q  (7)

and

C_i = Σ_{q=1}^{N} u_{iq}^m x_q / Σ_{q=1}^{N} u_{iq}^m  for all i.  (8)
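To illustrate (7) numerically (a toy example with numbers of our own, not from the paper): with c = 2 clusters and m = 2, the memberships of one sample are proportional to dist^{−2/(m−1)} and normalize to one:

```python
import numpy as np

def fcm_memberships(dists, m=2.0):
    """Membership update of (7): u_jq proportional to dist_jq^(-2/(m-1)),
    normalized over the c clusters. `dists` holds the distances from one
    sample x_q to each of the c cluster centers."""
    w = np.asarray(dists, dtype=float) ** (-2.0 / (m - 1.0))
    return w / w.sum()

# A sample twice as far from center 2 as from center 1 (m = 2):
u = fcm_memberships([1.0, 2.0])
# u = [0.8, 0.2]: membership favors the nearer center, and the
# memberships sum to one as required by the normalization constraint.
```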
By iteratively updating the fuzzy membership with (7) and the centroids with (8), the algorithm converges to a local minimum of J_m(U; V; X) [1]. The procedures of the FCM algorithm [1,10] are listed as follows:
Step 1: Initialization: Determine the number of clusters c and set the iteration loop index t = 1.
We then randomly select the cluster centers C_j^{(0)}, for j = 1, 2, ..., c, from the component space.
Step 2: Sampling: Select N data samples x_q, for q = 1, ..., N, from the image and compute the initial membership function U^{(0)} by using (5) and (7).
Step 3: Calculate new fuzzy cluster centers: Compute all new cluster centers {C^{(t)}} with U^{(t−1)} as stated in (8).
Step 4: Update the membership function U^{(t)}: Compute the new similarity measures of x_q, for q = 1, ..., N, with respect to the new cluster centers {C^{(t)}} by (5), and update U^{(t)} based on the new similarity measures with (7).
Step 5: Check the convergence: Compute

Δ = |C^{(t)} − C^{(t−1)}|.  (9)

If Δ < ε, then terminate; otherwise set t = t + 1 and go to Step 3, where ε is the preset terminating criterion. In the above procedure, the superscript t for U^{(t)} and C^{(t)} denotes the iteration number. Once the change of the class centers during the iterations is less than the predefined criterion ε, the objective function J_m(U; V; X) is treated as converged, and the final segmentation result is considered achieved.

Schmid [26] suggested an orientation-sensitive FCM algorithm (OS-FCM) by modifying A_j described in (6) as

A_j = V_j^T L_j V_j,  (10)

where L_j denotes the diagonal matrix containing the inverses of the eigenvalues and V_j represents the unitary matrix lining up the corresponding eigenvectors of the fuzzy covariance matrix C_j^x for the jth cluster. The fuzzy covariance matrix for the jth cluster is given by

C_j^x = (1/N) Σ_{q=1}^{N} u_{jq}^m (x_q x_q^T − C_j C_j^T).  (11)
From simple matrix derivations, it is obvious that A_j = (C_j^x)^{−1}.

3. Eigenspace FCM algorithms

The projection onto the principal eigenspace exhibits the likeness of the detected pixel to the selected color space, while the projections onto the residual eigenspaces reveal the difference from the desired color.
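As a reference point for the eigenspace variants developed below, the plain FCM iteration reviewed in Section 2 (Steps 1–5, with the update rules (7) and (8) and the convergence test (9)) can be sketched as follows. This is an illustrative sketch of ours, not the authors' implementation; the deterministic center initialization is our own simplification of the random Step 1:

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-4, max_iter=100):
    """Basic FCM on an (N, k) data array X with c clusters, using the
    Euclidean distance. Returns the (c, N) membership matrix U and the
    (c, k) cluster centers C."""
    N = X.shape[0]
    # Step 1: initialize centers from c samples spread across the data
    # (a deterministic stand-in for the random selection in the text).
    idx = np.linspace(0, N - 1, c).astype(int)
    C = X[idx].astype(float)
    for _ in range(max_iter):
        # Steps 2/4: distances of every sample to every center, then (7).
        d = np.linalg.norm(X[None, :, :] - C[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)            # guard against x_q == C_j
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=0, keepdims=True)
        # Step 3: new centers from (8).
        Um = U ** m
        C_new = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Step 5: convergence test, cf. (9).
        if np.abs(C_new - C).max() < eps:
            C = C_new
            break
        C = C_new
    return U, C

# Two tight, well-separated point clouds: the two centers converge
# near the cloud means.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(10.0, 0.1, (20, 2))])
U, C = fcm(X, c=2)
```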
Fig. 1. The proposed eigenspace FCM algorithms.
To achieve effective color segmentation, we can combine the FCM classification with the principal and residual eigenspace projections. Similar to the PCT approaches, we first use the eigenvectors of the correlation matrix of the desired color samples to transform the original color space as

z_q = V x_q = [z_{q,1}  z_{q,2}  z_{q,3}]^T,  (12)

where V = [w_1 w_2 w_3] is formed by the eigenvectors. The first element z_{q,1}, for q = 1, ..., N, specifies the projection of the qth sample onto the principal eigenspace, while the second and third elements z_{q,2} and z_{q,3} represent the projections onto the residual eigenspaces. With the transformed color samples at hand, we then develop the separate eigenspace FCM (SEFCM) and the coupled eigen-based FCM (CEFCM) methods. As shown in Fig. 1, we first compute the eigenvalues and eigenvectors of the correlation matrix R̂_x from the manually selected color samples. After the eigenspace transformation, the iterative FCM segmentation process with updated membership functions is applied to the transformed data. With pre-selected color samples, we only need to perform the eigenspace transformation once for the FCM processes. If no desired colors are pre-selected, the correlation matrix has to be estimated from the membership values u_jq as

R_{x,j} = Σ_{q=1}^{N} u_{jq}^m x_q x_q^T / Σ_{q=1}^{N} u_{jq}^m  (13)
and the new eigenspaces for the jth class should be computed in the FCM processes. The detailed descriptions of the SEFCM and the CEFCM are addressed in the following two subsections.
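Under the same notation, the eigenspace transform of (12) and the membership-weighted correlation estimate of (13) might look like the following sketch (variable names are ours):

```python
import numpy as np

def eigenspace_transform(X, R):
    """Project pixels into the eigenspace of R, as in (12).
    X is (N, 3); each row of the result is z_q = V x_q, with the
    eigenvectors ordered by descending eigenvalue."""
    vals, vecs = np.linalg.eigh(R)      # eigh returns ascending order
    V = vecs[:, ::-1].T                 # rows: w_1, w_2, w_3
    return X @ V.T, vals[::-1]

def weighted_correlation(X, u_j, m=2.0):
    """Membership-weighted correlation of the jth class, cf. (13):
    R_{x,j} = sum_q u_jq^m x_q x_q^T / sum_q u_jq^m."""
    w = np.asarray(u_j, dtype=float) ** m
    return (X.T * w) @ X / w.sum()

# Usage with a diagonal correlation matrix, whose eigenvectors are the
# coordinate axes; the orthogonal transform preserves pixel norms.
R = np.diag([4.0, 1.0, 0.25])
X = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.0, 1.0]])
Z, lam = eigenspace_transform(X, R)
```

Since V is orthogonal, `Z` carries exactly the same information as `X`; only the coordinate system changes, which is what allows the weighting matrices of Section 3.1 to act on the principal and residual components separately.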
3.1. Separate eigenspace FCM (SEFCM) method

In the SEFCM, we propose two separate transforms, which partition tested pixels into principal and residual eigenspaces by further considering their corresponding eigenvalues. After the eigenspace transformation stated in (12), we do not directly pick z_{q,1} as the principal component. For extracting the principal eigenspace, we suggest the principal weighting matrix Λ_{P,j} as

Λ_{P,j} = diag( ((λ_{j,2} + λ_{j,3})/2)^{−1}, λ_{j,1}^{−1}, λ_{j,1}^{−1} ).  (14)

Statistically, the eigenvalue represents the energy distribution of the corresponding eigenvector in the selected color samples. If λ_{j,1} is much larger than λ_{j,2} and λ_{j,3}, it implies that the selected pixels mostly come from the principal component only. In this case, we can use Λ_{P,j} to further weight the transformed pixel such that we intensively raise the value of z_{q,1} and suppress the values of z_{q,2} and z_{q,3}. If λ_{j,1} is close to λ_{j,2} and λ_{j,3}, the selected pixels do not exhibit any significant component; in this case, the principal weighting matrix will not enhance z_{q,1} as radically as in the previous case. Thus, depending on the distribution of the eigenvalues, we can use (14) to extract the principal eigenspace robustly. On the contrary, we can extract the residual eigenspaces by using the residual weighting matrix

Λ_{R,j} = diag( λ_{j,1}^{−1}, ((λ_{j,2} + λ_{j,3})/2)^{−1}, ((λ_{j,2} + λ_{j,3})/2)^{−1} ).  (15)

Similarly, since the eigenvalue λ_{j,1} is larger than λ_{j,2} and λ_{j,3}, we can robustly select the residual components z_{q,2} and z_{q,3} by the weighting matrix Λ_{R,j}.

Fig. 2. Signal flow diagram of the proposed SEFCM algorithm.

By referring to (6) and (10), the SEFCM membership function for the principal component depicted in (7)
becomes

u_jq = [(x_q − C_j)^T A_{P,j} (x_q − C_j)]^{−2/(m−1)} / Σ_{i=1}^{c} [(x_q − C_i)^T A_{P,i} (x_q − C_i)]^{−2/(m−1)}
     = [(z_q − C′_j)^T Λ_{P,j} (z_q − C′_j)]^{−2/(m−1)} / Σ_{i=1}^{c} [(z_q − C′_i)^T Λ_{P,i} (z_q − C′_i)]^{−2/(m−1)},  (16)

where A_{P,j} = V_j^T Λ_{P,j} V_j and A_{P,i} = V_i^T Λ_{P,i} V_i, as in (6), are the transform and weighting matrices of classes j and i, respectively. In (16), we have z_q = V_i x_q and C′_j = V_j C_j, where C_j and C′_j denote the jth cluster centers in the original and transform domains, respectively. To achieve (16), we should first perform the PCT as z_q = V_i x_q. By using the principal weighting matrix, we can compute the principal-weighted transform as

z_{P,q} = Λ_{P,j}^{1/2} z_q,  q = 1, ..., N.  (17)

Secondly, the traditional FCM algorithm is applied to the principal-weighted samples {z_{P,q}} to obtain a segmented image. Similarly, we can achieve the residual-weighted transform as

z_{R,q} = Λ_{R,j}^{1/2} z_q,  q = 1, ..., N.  (18)

Then, the traditional FCM algorithm is applied to the residual-weighted samples {z_{R,q}} to acquire another segmented image. Finally, we can apply a simple logical "AND" operation to these two extracted images to obtain the SEFCM result. As shown in Fig. 2, the detailed procedures of the SEFCM are
stated as follows:
Step 1: Manually select a few desired color object blocks from the image.
Step 2: Compute the correlation matrix and obtain the eigenvectors according to (2).
Step 3: Transform the color image into eigenspaces by using (12) to obtain Z = {z_q, q = 1, ..., N}.
Step 4: Compute the principal-weighted transformed samples by using (17) to achieve Z_P = {z_{P,q}, q = 1, ..., N} and perform the traditional FCM procedure to minimize J_m(U_P; V_P; Z_P) to segment the desired image.
Step 5: Compute the residual-weighted transformed samples by using (18) to acquire Z_R = {z_{R,q}, q = 1, ..., N} and perform the traditional FCM procedure to minimize J_m(U_R; V_R; Z_R) to segment the desired image.
Step 6: Perform a logical "AND" operation to extract the coexisting pixels from the segmented images obtained in Steps 4 and 5. The result after the logical "AND" is the final segmented image.

After the FCM classification in Steps 4 and 5, each pixel has been marked with a specific class number in the segmented image. If the segmentation process starts from a manual selection of the desired pixels in the entire image, we can easily identify the desired image by checking whether the segmented image contains the selected pixels or not. If the segmentation starts with the selected color information but without the locations of the desired pixels, we can pick the segmented image whose cluster center C_j is closest to the principal component √λ_1 w_1. In other words, we choose the jth class as the desired image if dist(√λ_1 w_1, C_j) ≤ dist(√λ_1 w_1, C_i) for all i. Once the pixels of the desired image are identified, we mark the selected pixels in the jth class with 1's and the remaining pixels with 0's. Finally, in Step 6, we use the logical "AND" of the binary images obtained in Steps 4 and 5 to acquire the final segmented result, where 1's and 0's denote the detected and unwanted pixels, respectively.

Fig. 3. Signal flow diagram of the proposed CEFCM algorithm.

3.2. Coupled eigen-based FCM (CEFCM) method

In order to precisely segment the desired images, we further propose the coupled eigen-based FCM
(CEFCM) algorithm. Considering the principal and residual subspaces together, the CEFCM directly adopts the three-dimensional eigenspaces for classification. The function block diagram of the CEFCM is shown in Fig. 3. During the iterative FCM procedure, we also adopt the updated correlation matrix of (13). For the jth class correlation matrix R_{x,j}, we can obtain the eigenvectors w_{j,i} and the corresponding eigenvalues λ_{j,i}, in descending order as

λ_{j,1} ≥ λ_{j,2} ≥ λ_{j,3}.  (19)

Statistically, the jth cluster center can be expressed by the principal component as

C_j = √λ_{j,1} w_{j,1},  (20)

since the principal eigenvector is the best projection vector and its corresponding eigenvalue is the average projection length in the mean-square sense. The second and third components can be treated as the spreading terms in the class. To design a new similarity measure to the jth cluster center, we can use all three eigenspace projections together. For any sample x_q whose three eigenspace projections satisfy ||x_q^T w_{j,1}||² ≈ λ_{j,1}, ||x_q^T w_{j,2}|| ≈ 0, and ||x_q^T w_{j,3}|| ≈ 0, we can say that this sample is very close to the jth cluster center, i.e., √λ_{j,1} w_{j,1}. Considering the normalization, we can design an eigen-based distance measure as

dist_jq = dist_3(x_q, C_j) = (1/λ_{j,2}) ||x_q^T w_{j,2}||² + (1/λ_{j,3}) ||x_q^T w_{j,3}||² + (1/λ_{j,1}) ( ||x_q^T w_{j,1}||² − λ_{j,1} )
        = (1/λ_{j,2}) ||z_{q,j,2}||² + (1/λ_{j,3}) ||z_{q,j,3}||² + (1/λ_{j,1}) ( ||z_{q,j,1}||² − λ_{j,1} ),  (21)
where z_{q,j,i} = x_q^T w_{j,i} denotes the projection of x_q onto the ith eigenvector of the jth class. With the above eigen-based distance measure, we can use (7) to compute the eigen-based membership function easily. The detailed procedures of the CEFCM are listed as follows:
Step 1: Manually sample a few desired color samples from the image.
Step 2: Compute the correlation matrix and obtain the eigenvectors and eigenvalues.
Step 3: Initialize the class center C_j = √λ_{j,1} w_{j,1} for the selected color samples and initialize the membership function by using the eigen-based distance measure stated in (21). (Repeat Steps 1–3 if more than one color is desired.)
Step 4: Randomly select the remaining class centers C_i, which should satisfy that (1/λ_{j,1}) ( ||C_i^T w_{j,1}||² − λ_{j,1} ) exceeds a preset threshold, to assure that the dummy color centers stay away from the class center(s) initialized in Step 3. Initialize the membership function by using the Euclidean distance measure stated in (5).
Step 5: Update the new correlation matrix (13) and compute the eigen-based membership function until the FCM procedure converges.

4. Simulation results and discussion

In order to verify the effectiveness of the proposed algorithms, we adopt four test images, Mosaic, Ball, Akiyo, and News, for the simulations. As shown in Fig. 4, we put a triangular mark on each image to indicate the desired color object we want to extract. In particular, the Mosaic test image with isolated color tiles can be treated as a synthetic image, which helps us to verify the accuracy of the segmentation. It is noted that the desired color tile in the Mosaic test image contains a dark spot and a light scratched line near the upper-right corner. Because we do not introduce any spatial and
temporal information in our algorithms, all the other objects with the same color will appear in the segmented results. According to the FCM property, the other colors will be automatically classified into different clusters as well. To clearly exhibit the desired color in the segmented images, we show the desired color objects as bright pixels with gray level 255 and the undesired ones as dark pixels with gray level 0.

First, we apply the PCT to transform the image from the original color space into the eigenspace of the selected pixels. By using the threshold determination method suggested in [16], we obtain the four segmented images shown in Fig. 5. Although the main parts of the desired object are extracted, some missing or extra pixels unexpectedly come out due to threshold sensitivity. After we apply the conventional FCM to these four images in the RGB color space, the segmented results are shown in Fig. 6. Without any threshold determination, the desired objects obtained by the traditional FCM are better than those obtained by the PCT method. However, the segmented results still contain many undesired pixels. For the Mosaic image, the FCM method gives a correct detection of the desired color; however, it incorrectly detects two extra tiles, on the upper-right and lower-left corners. Fig. 7 shows the simulation results of applying the conventional FCM on the transformed eigenspace. Compared to Fig. 6, some improvements are achieved because the projected data have been re-distributed after the PCT projections. For the Mosaic image, the lower-left tile has been correctly removed but the upper-right tile still exists. For comparison, we also apply the OS-FCM algorithm [26] to extract the desired images. Fig. 8 shows the segmented images obtained from the OS-FCM method. Although all the main objects can be detected, most similar color pixels are also extracted.
By combining the eigen-projection and FCM algorithms, Fig. 9 shows the segmented results obtained by the SEFCM. We find that the undesired pixels have been mostly removed. The SEFCM can correctly segment the desired color tile from the
Fig. 4. Test images: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News.
Fig. 5. Segmented images obtained by the PCT method using threshold and logical operation: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News images.
Fig. 6. Segmented images obtained by the conventional FCM on RGB planes: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News images.
Fig. 7. Segmented images obtained by the conventional FCM on eigenspaces: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News images.
Fig. 8. Segmented images obtained by the OS-FCM method on eigenspaces: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News images.
Fig. 9. Segmented images obtained by the SEFCM: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News images.
Fig. 10. Segmented images obtained by the CEFCM: (a) Mosaic; (b) Ball; (c) Akiyo; (d) News images.
Mosaic test image. However, the SEFCM still cannot correctly segment the skin color from the Akiyo image. In order to improve the performance, the CEFCM algorithm jointly adopts the principal and residual subspace projections. Fig. 10 shows the segmented images obtained from the CEFCM method. It is obvious that the CEFCM method outperforms the other color image segmentation algorithms: the CEFCM achieves more accurate segmentation for all test images.

5. Conclusions

In this paper, we proposed two color segmentation algorithms that combine the PCT and FCM methods to extract the desired color images. Considering both principal and residual projections, both the SEFCM and CEFCM methods show better performance for color object segmentation than the PCT-based methods or the FCM approaches. The proposed SEFCM and CEFCM methods are more robust and less susceptible to noise, but require higher computation than the existing algorithms. The SEFCM method, which requires less computation than the CEFCM, independently applies the FCM method to the principal-weighted and residual-weighted eigenspaces and extracts the final desired color image by a logical "AND" of their common pixels. The CEFCM approach, with the suggested eigen-based membership function, achieves more precise color image extraction than the SEFCM. The proposed methods only consider color information. By further considering spatial and temporal information, we believe that our proposed algorithms can be further improved for color image segmentation.
References

[1] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum, New York, 1981.
[2] M.M. Chang, A.M. Tekalp, M.I. Sezan, Simultaneous motion estimation and segmentation, IEEE Trans. Image Process. 6 (9) (September 1997) 1326–1333.
[3] D. Chai, K.N. Ngan, Face segmentation using skin-color map in videophone applications, IEEE Trans. Circuits Systems Video Technol. 9 (4) (June 1999) 551–564.
[4] C.-C. Chu, J.K. Aggarwal, The integration of image segmentation maps using region and edge information, IEEE Trans. Pattern Anal. Mach. Intell. 15 (12) (December 1993) 1241–1252.
[5] R.D. Dony, S. Haykin, Image segmentation using a mixture of principal components representation, IEE Proc. Vision, Image Signal Process. 144 (April 1997) 73–80.
[6] C. Garcia, C. Georgios, Face detection using quantized skin color regions merging and wavelet packet analysis, IEEE Trans. Multimedia 1 (3) (September 1999) 264–277.
[7] R.C. Gonzalez, P.A. Wintz, Digital Image Processing, Addison-Wesley, Reading, MA, 1992, p. 153.
[8] A. Guzman de Leon, J.F. Lerallut, J.C. Boulanger, Application of the principal components transform to colposcopic color images, IEEE 17th Annual Conference on Engineering in Medicine and Biology Society, Vol. 1, 20–23 September 1995, pp. 511–512.
[9] G.A. Hance, S.E. Umbaugh, R.H. Moss, W.V. Stoecker, Unsupervised color image segmentation: with application to skin tumor borders, IEEE Eng. Med. Biol. Mag. 15 (1) (January–February 1996) 104–111.
[10] S. Haykin, Neural Networks—A Comprehensive Foundation, 2nd Edition, Prentice-Hall, Englewood Cliffs, NJ, 1999.
[11] G. Healey, Segmenting images using normalized color, IEEE Trans. Systems, Man, Cybern. 22 (1) (1992) 64–73.
[12] B. Hu, R.G. Gosine, A new eigenstructure method for sinusoidal signal retrieval in white noise: estimation and pattern recognition, IEEE Trans. Signal Process. 45 (12) (December 1997) 3073–3083.
[13] Y. Huang, K. Palaniappan, X. Zhuang, J.E. Cavanaugh, Optic flow field segmentation and motion estimation using a robust genetic partitioning algorithm, IEEE Trans. Pattern Anal. Mach. Intell. 17 (12) (December 1995) 1177–1190.
[14] ISO/IEC JTC1/SC29/WG11, Information Technology—Coding of Audio-Visual Objects, Part 2: MPEG-4 Visual, 14496-1, October 1998.
[15] ISO/IEC JTC1/SC29/WG11, Multimedia Content Description Interface, Part 3: MPEG-7 Visual, 15938-1, March 2001.
[16] M. Kaveh, A.J. Barabell, The statistical performance of the MUSIC and the minimum-norm algorithms in resolving plane waves in noise, IEEE Trans. Acoustics, Speech, Signal Process. ASSP-34 (2) (April 1986) 331–341.
[17] I.B. Kerfoot, Y. Bresler, Theoretical analysis of multispectral image segmentation criteria, IEEE Trans. Image Process. 8 (6) (December 1997) 798–820.
[18] J.H. Lee, B.H. Chang, S.D. Kim, Comparison of colour transformation for image segmentation, Electron. Lett. 30 (20) (September 1994) 1660–1661.
[19] Y.W. Lim, S.U. Lee, On the color image segmentation algorithm based on the thresholding and fuzzy C-means techniques, Pattern Recognition 23 (9) (1990) 935–952.
[20] E. Littmann, H. Ritter, Adaptive color segmentation—a comparison of neural and statistical methods, IEEE Trans. Neural Networks 8 (1) (January 1997) 175–185.
[21] A. Moghaddamzadeh, N. Bourbakis, A fuzzy region growing approach for segmentation of color images, Pattern Recognition 30 (6) (1997) 867–881.
[22] H. Murase, S.K. Nayar, Illumination planning for object recognition using parametric eigenspaces, IEEE Trans. Pattern Anal. Mach. Intell. 16 (12) (December 1994) 1219–1227.
[23] E. Saber, A.M. Tekalp, G. Bozdagi, Fusion of color and edge information for improved segmentation, Image Vision Comput. 15 (1997) 680–769.
[24] E. Sahouria, A. Zakhor, Content analysis of video using principal components, IEEE Trans. Circuits Systems Video Technol. 9 (8) (December 1999) 1290–1298.
[25] R. Schettini, A segmentation algorithm for color image, Pattern Recognition Lett. 14 (1993) 499–506.
[26] P. Schmid, Segmentation of digitized dermatoscopic images by two-dimensional color clustering, IEEE Trans. Med. Imaging 18 (2) (February 1999) 164–171.
[27] L. Shafarenko, M. Petrou, J. Kittler, Histogram-based segmentation in a perceptually uniform color space, IEEE Trans. Image Process. 7 (9) (September 1998) 1354–1358.
[28] X. Wan, C.J. Kuo, A new approach to image retrieval with hierarchical color clustering, IEEE Trans. Circuits Systems Video Technol. 8 (5) (September 1998) 628–643.
[29] H. Wu, Q. Chen, M. Yachida, Face detection from color images using a fuzzy pattern matching method, IEEE Trans. Pattern Anal. Mach. Intell. 21 (6) (June 1999) 557–563.
[30] J. Xiuping, J.A. Richards, Segmented principal components transformation for e