The image segmentation based on optimized spatial feature of superpixel

Xiaolin Tian, Licheng Jiao, Long Yi, Kaiwu Guo, Xiaohua Zhang

Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Xidian University, Xi'an, People's Republic of China

Corresponding author: Dr. Xiaolin Tian, Institute of Intelligent Information Processing, Xidian University, P.O. Box 224, Xi'an 710071, People's Republic of China. Email: [email protected]; Fax: +86-29-88201023
Abstract: This paper proposes an image segmentation method based on superpixels and applies it to the segmentation of synthetic aperture radar (SAR) images. First, superpixels are extracted based on multi-scale features. Then, fuzzy c-means (FCM) clustering is performed on the superpixels, in which the influence of neighboring and similar superpixels is incorporated into FCM and the influential degree is optimized to improve segmentation performance. Experimental results show that the proposed method achieves impressive SAR segmentation accuracy. As an application extension, when the corresponding feature is extracted from several other types of images, the proposed method also achieves better segmentation performance.

Keywords: Image segmentation, fuzzy c-means clustering, superpixel, optimization
1. Introduction

Image segmentation aims to divide an image into disjoint regions with uniform and homogeneous attributes, and many segmentation methods have been explored and proposed [1-3]. Clustering is an important approach to segmentation. There are many clustering strategies, such as hard clustering and fuzzy clustering, each with its own characteristics. Among the fuzzy clustering methods, fuzzy c-means (FCM) clustering [4] has been widely studied and successfully applied to image segmentation [5-13]. FCM is robust to ambiguity and preserves much more structural information [14]. The conventional FCM method can effectively segment noise-free images; however, it does not incorporate any spatial information, so its segmentation results are sensitive to noise [15-20]. Therefore, spatial constraints have been imposed on fuzzy clustering [8, 10, 16]. For instance, a spatially weighted FCM based on the image histogram and spatial information has been proposed, in which the weights are given by the ratio of every gray value in the histogram [21-23]. Methods that enhance the robustness of FCM to noise by modifying the objective function have also been proposed [18-20]. Meanwhile, several methods overcome the noise sensitivity by incorporating a regularization term into the adaptive FCM method [19, 20], where the regularization term imposes a neighborhood effect on the traditional FCM. In kernel-based clustering methods, the original low-dimensional input feature is transformed into a higher-dimensional feature through a nonlinear mapping to deal with data sets containing noise or outliers [23]. These kernel-based clustering methods have been discussed and applied in image segmentation and other fields, such as clusters of different shapes [24, 25], incomplete data processing [26], the semi-supervised kernel-FCM (KFCM) algorithm [27], the multiple-kernel fuzzy c-means method [11, 18], and the kernel-FCM-based fuzzy support vector machine method [15]. In the aforementioned algorithms, the state of each pixel is determined by the memberships of neighboring pixels. Because these techniques are computationally intensive, applying them directly to the pixels of an image usually leads to long computation times.

It is crucial to extract discriminative features; the most popular features include edges, texture, and wavelets, among which wavelets have provided a new dimension to the field of computer vision. Because of their multi-resolution property, many studies have applied wavelets to image segmentation [28, 29]. Based on these features, we can obtain superpixels. Superpixels are the result of a perceptual grouping of pixels, i.e., of an image oversegmentation. Superpixels carry more information than pixels and align better with image edges than rectangular image patches [30]. A superpixel is local, coherent, and preserves most of the information necessary for image segmentation at the scale of interest [31, 32]. The pre-segmentation into superpixels also reduces the computational burden.

The proposed method is based on superpixels extracted from multi-scale spatial features. The influence of neighboring and similar superpixels is incorporated into FCM, and the influential degree is optimized to improve segmentation performance. In experiments, real synthetic aperture radar (SAR) images and synthetic SAR images are tested, and the results show that the proposed method achieves better segmentation than the compared methods. In addition, by extracting the corresponding feature from specific image types, the method extends to other applications. The main contributions of this paper are: 1) spatially neighboring and similar superpixels are incorporated into the clustering method; 2) the influential degree of spatial superpixels is optimized; 3) application extension is achieved by varying the extracted feature.

The paper is organized as follows: the characteristics of superpixels are introduced in Section 2; superpixel-based FCM with optimized influential degree is presented in Section 3; Section 4 presents the implementation details of the proposed method; Section 5 describes the experiments, and Section 6 concludes the paper.
2. Superpixel

2.1 Superpixel extraction

Image pixels are a consequence of the discrete representation of images and are not natural entities; consequently, when we estimate the illuminant, partial image regions are often unable to produce a robust estimate [33]. By grouping pixels with constant gray level and texture, superpixels can be formed. Superpixels partition an image into regions that are approximately uniform in size and shape, and they are becoming increasingly popular in many computer vision applications [34]. The advantages of superpixels have been analyzed and demonstrated in applications such as object recognition [35] and segmentation [36]. Figure 1 shows a superpixel segmentation, in which the image is divided into superpixels and each superpixel has a uniform visual appearance; this can substantially speed up subsequent processing. The careful choice of the superpixel method and its parameters for a particular application is therefore crucial. We use TurboPixels [37] to extract superpixels from an image, in which each superpixel is roughly uniform in texture and gray level, so that region boundaries are preserved. In order to encode gray-level, texture, and spatial information into superpixels, we describe each superpixel $j$ by a 7-dimensional wavelet feature vector $F_j = (f_1, f_2, \ldots, f_7)$, where $F_j$ is the average wavelet value of all pixels in superpixel $j$ across 3 decomposition layers. This feature is denoted $F$. $Sp_j$ is the average location of all pixels in superpixel $j$.
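The following is a minimal sketch of this feature extraction. SLIC stands in for TurboPixels (which has no standard Python implementation), and keeping the approximation band plus the two coarsest detail triplets of a 3-level Haar decomposition (1 + 3 + 3 = 7 bands) is our assumption about how the 7-D vector is assembled; the paper does not spell out the band selection.

```python
import numpy as np
import pywt
from skimage.segmentation import slic
from skimage.transform import resize

def superpixel_features(image, n_segments=500):
    """Per-superpixel mean wavelet feature F_j and mean location Sp_j (sketch)."""
    # Over-segmentation; SLIC is a stand-in for TurboPixels here.
    labels = slic(image, n_segments=n_segments, start_label=0, channel_axis=None)
    # 3-level Haar decomposition; the 7-band choice below is an assumption.
    cA3, d3, d2, _ = pywt.wavedec2(image, 'haar', level=3)
    bands = [cA3, *d3, *d2]
    # Upsample every sub-band to image size so each pixel carries 7 responses.
    stack = np.stack([resize(np.abs(b), image.shape) for b in bands], axis=-1)
    ys, xs = np.mgrid[:image.shape[0], :image.shape[1]]
    n_sp = labels.max() + 1
    F = np.zeros((n_sp, 7))
    Sp = np.zeros((n_sp, 2))
    for j in range(n_sp):
        mask = labels == j
        F[j] = stack[mask].mean(axis=0)               # average wavelet value F_j
        Sp[j] = ys[mask].mean(), xs[mask].mean()      # average location Sp_j
    return labels, F, Sp
```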
2.2 Selection of neighboring and similar superpixels

To reduce computational complexity, we operate on superpixels rather than pixels. Given a chosen superpixel, shown in red in Figure 1, the neighboring superpixels are all superpixels adjacent to it, shown in yellow in Figure 1. The similar superpixels lie outside the neighboring superpixels but near the chosen superpixel, shown in blue in Figure 1. Selecting the similar superpixels amounts to searching for a certain number of the most similar superpixels; the search uses a similarity metric between the chosen superpixel and a nearby superpixel, for which we adopt a hierarchical histogram difference kernel [38]. Accordingly, in constructing the FCM objective function, the relative distance and the feature difference between the chosen superpixel and its neighboring and similar superpixels serve as spatial weighting information that modifies the FCM clustering (see Section 3.2).

[Insert Figure 1 about here]
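A sketch of this selection is below. A plain Euclidean feature distance stands in for the hierarchical histogram difference kernel of [38], and restricting the similarity search to the spatially nearest candidates is our heuristic for "nearby", not a detail taken from the paper.

```python
import numpy as np

def neighbors_and_similar(labels, F, Sp, j, n_similar):
    """Sketch: adjacent superpixels of j, plus the n_similar most similar
    nearby non-adjacent ones (Section 2.2)."""
    # Adjacency: superpixel pairs that touch along a horizontal/vertical pixel edge.
    h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    pairs = np.concatenate([h, v])
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    nbrs = np.unique(pairs[np.any(pairs == j, axis=1)])
    nbrs = nbrs[nbrs != j]
    # Candidate pool: non-adjacent superpixels ranked by spatial closeness to j.
    others = np.setdiff1d(np.arange(len(F)), np.append(nbrs, j))
    near = others[np.argsort(np.linalg.norm(Sp[others] - Sp[j], axis=1))][:5 * n_similar]
    # Among those, keep the n_similar most feature-similar superpixels.
    sim = near[np.argsort(np.linalg.norm(F[near] - F[j], axis=1))][:n_similar]
    return nbrs, sim
```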
3. Superpixel-based FCM with optimum influential degree

3.1 Superpixel-based FCM

Fuzzy c-means (FCM) is a data clustering method in which each data point belongs to a cluster to a degree specified by a membership grade; FCM allows one pixel to belong to one or more clusters [39, 40]. In superpixel-based FCM, we partition a finite collection of superpixels into $C$ fuzzy clusters with respect to a given criterion [41]. Mathematically, the superpixel-based FCM objective function for partitioning a superpixel dataset $\{F_j\}_{j=1}^{N}$ into $C$ clusters is

$$J(\mathbf{U}, \mathbf{V}) = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{m} d_{ij}^{2} \quad (1)$$
where $u_{ij}$ is a membership between 0 and 1; $\mathbf{V} = \{v_1, v_2, \ldots, v_i, \ldots, v_C\}$ is the set of cluster centroids; $d_{ij}$ is the Euclidean distance between the $i$-th centroid and the $j$-th superpixel; and $m \in [1, \infty)$ determines the level of cluster fuzziness. Fuzzy partitioning of the superpixel samples is carried out through an iterative optimization of the objective function:
$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( d_{ij} / d_{kj} \right)^{2/(m-1)}}, \qquad v_i = \frac{\sum_{j=1}^{N} u_{ij}^{m} F_j}{\sum_{j=1}^{N} u_{ij}^{m}} \quad (2)$$
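As a concrete illustration, the sketch below implements the standard FCM iteration of Equations (1)-(2) with NumPy, assuming the superpixel features are stacked as rows of an $N \times d$ array; the small constant added to the distances is our guard against division by zero, not part of the formulation.

```python
import numpy as np

def fcm(F, C, m=2.0, eps=1e-3, max_iter=100, seed=0):
    """Standard FCM on superpixel features F (N x d), Equations (1)-(2)."""
    rng = np.random.default_rng(seed)
    U = rng.random((C, len(F)))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per column
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ F) / Um.sum(axis=1, keepdims=True)     # centroids v_i, Eq. (2)
        d2 = ((F[None, :, :] - V[:, None, :]) ** 2).sum(axis=-1) + 1e-12
        U_new = d2 ** (-1.0 / (m - 1))                   # d_ij^(-2/(m-1))
        U_new /= U_new.sum(axis=0, keepdims=True)        # memberships u_ij, Eq. (2)
        if np.abs(U_new - U).max() < eps:
            return U_new, V
        U = U_new
    return U, V
```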
The traditional FCM method functions well on most noise-free images; however, it cannot segment images corrupted by noise, imaging artifacts, or intensity inhomogeneity, which leads to non-robust results. The main reasons are the non-robust Euclidean distance and the disregard of spatial contextual information in the image. To deal with these problems, we propose a robust distance measure that incorporates neighboring and similar superpixels into the traditional FCM objective function.

3.2 Modified superpixel-based FCM

According to Equation (2), membership degrees and cluster centers are predominantly determined by the similarity measure $d_{ij}^2$. In the proposed method, neighboring and similar superpixels are introduced into $d_{ij}^2$, in which a fractional structure relying on the membership $u_{ij}$ and the influential degrees establishes the spatial model. This model is based on the similarity of neighboring and similar superpixels:
$$D_{ij}^{2} = d_{ij}^{2} \left( 1 - \alpha_1 \frac{\sum_{k=1}^{S} u_{ik} t_{jk}^{2}}{\sum_{k=1}^{S} t_{jk}^{2}} \right) \left( 1 - \alpha_2 \frac{\sum_{k=1}^{S} u_{ik} r_{jk}}{\sum_{k=1}^{S} r_{jk}} \right), \qquad \alpha_1 + \alpha_2 \le 1 \quad (3)$$

where $t_{jk}^{2} = \| Sp_j - Sp_k \|^{2}$ is the relative distance between superpixel $j$ and superpixel $k$, and $\alpha_1$ is its influential degree; $r_{jk} = \| F_j - F_k \|$ is the spatial feature difference between superpixel $j$ and superpixel $k$, and $\alpha_2$ is its influential degree; $S$ is the number of neighboring and similar superpixels corresponding to superpixel $j$. The modified superpixel-based FCM objective function is

$$J(\mathbf{U}, \mathbf{V}) = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{m} D_{ij}^{2} \quad (4)$$
The iterative optimization of the objective function is similar to Equation (2), with $d_{ij}^2$ replaced by $D_{ij}^2$:
$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( D_{ij} / D_{kj} \right)^{2/(m-1)}}, \qquad v_i = \frac{\sum_{j=1}^{N} u_{ij}^{m} F_j}{\sum_{j=1}^{N} u_{ij}^{m}} \quad (5)$$
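A sketch of the modified distances of Equation (3) follows. Here `nbr_sim` is an assumed precomputed list mapping each superpixel to the indices of its neighboring and similar superpixels (Section 2.2); the double loop favors clarity over vectorized speed.

```python
import numpy as np

def modified_distances(F, Sp, U, V, nbr_sim, alpha1, alpha2):
    """Sketch of Equation (3): spatially modified squared distances D_ij^2."""
    C, N = U.shape
    d2 = ((F[None, :, :] - V[:, None, :]) ** 2).sum(axis=-1)   # plain d_ij^2
    D2 = np.empty_like(d2)
    for j in range(N):
        k = np.asarray(nbr_sim[j])
        t2 = ((Sp[j] - Sp[k]) ** 2).sum(axis=-1)               # t_jk^2, relative distances
        r = np.linalg.norm(F[j] - F[k], axis=-1)               # r_jk, feature differences
        for i in range(C):
            spatial = (1 - alpha1 * (U[i, k] * t2).sum() / t2.sum()) \
                    * (1 - alpha2 * (U[i, k] * r).sum() / r.sum())
            D2[i, j] = d2[i, j] * spatial
    return D2
```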
3.3 Optimizing the influential degrees

Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. In a social group, the behavior of each individual is influenced not only by its own past experience and cognition but also by the overall behavior of the group [42, 43]. PSO maintains a population of candidate solutions (particles) that are moved and guided by their own best positions and by the best position of the entire population. To simplify the description, the $i$-th particle is represented as $\alpha_{id} = (\alpha_{i1}, \alpha_{i2})$, encoding the influential degrees $(\alpha_1, \alpha_2)$. $P_{id} = (p_{i1}, p_{i2})$ denotes the best position found so far by individual $i$, and $G_{id} = (g_{i1}, g_{i2})$ the best position found so far by the population. The velocity of particle $i$ is $V_{id} = (v_{i1}, v_{i2})$. In PSO, each individual in the search space has a direction and a speed, and the probabilistic search is adjusted based on past experience and group behavior:

$$\begin{cases} V_{id}^{(k+1)} = w V_{id}^{(k)} + c_1 \cdot \mathrm{rand}(\cdot) \cdot \left( P_{id}^{(k)} - \alpha_{id}^{(k)} \right) + c_2 \cdot \mathrm{Rand}(\cdot) \cdot \left( G_{id}^{(k)} - \alpha_{id}^{(k)} \right) \\ \alpha_{id}^{(k+1)} = \alpha_{id}^{(k)} + V_{id}^{(k+1)} \end{cases} \quad (6)$$
where $w$ is the inertia weight. The larger $w$ is, the stronger the global search ability of the method; the smaller $w$ is, the more the method tends toward local search. In this paper, $w$ decreases linearly from an initial maximum $w_{\max}$ to a minimum $w_{\min}$ as the number of iterations increases. $c_1$ and $c_2$ are acceleration constants, and $\mathrm{rand}(\cdot)$ and $\mathrm{Rand}(\cdot)$ are two random functions with range $[0, 1]$.
In our experiments, the population size is 30; the maximum number of generations is 100; both acceleration constants are 2; the maximum inertia weight is 0.9; and the minimum inertia weight is 0.4.
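The sketch below implements Equation (6) with the settings above. The `fitness` callable maps $(\alpha_1, \alpha_2)$ to the objective $J$ to be minimized; enforcing $\alpha_1 + \alpha_2 \le 1$ by clipping and rescaling is our assumption, since the paper does not state how the constraint is handled inside PSO.

```python
import numpy as np

def pso_alphas(fitness, pop=30, gens=100, c1=2.0, c2=2.0,
               w_max=0.9, w_min=0.4, seed=0):
    """Sketch of Equation (6): PSO over (alpha1, alpha2), minimizing `fitness`."""
    rng = np.random.default_rng(seed)
    a = rng.random((pop, 2)) / 2                       # feasible starts: sum <= 1
    v = np.zeros((pop, 2))
    pbest = a.copy()
    pval = np.array([fitness(x) for x in a])
    g = pbest[pval.argmin()].copy()                    # global best position
    for k in range(gens):
        w = w_max - (w_max - w_min) * k / (gens - 1)   # linearly decreasing inertia
        v = (w * v
             + c1 * rng.random((pop, 2)) * (pbest - a)
             + c2 * rng.random((pop, 2)) * (g - a))
        a = np.clip(a + v, 0.0, 1.0)
        over = a.sum(axis=1) > 1.0                     # project onto alpha1+alpha2 <= 1
        a[over] /= a[over].sum(axis=1, keepdims=True)
        val = np.array([fitness(x) for x in a])
        better = val < pval
        pbest[better], pval[better] = a[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g
```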
4. Implementation of the proposed method

4.1 Flowchart of the proposed method

Initialization: determine the number of clusters $C$, the degree of fuzziness $m = 2$, and the iteration thresholds $\varepsilon = 0.1\%$ and $\delta = 0.1\%$; set the initial fuzzy matrix $\mathbf{U}^{(t)}$, where $t$ is the iteration index and $u_{ij}$ are its elements; initialize the influential degrees to $\alpha_1 = \alpha_2 = 0$.

Step 1. Perform the wavelet transform on the input image and obtain the wavelet feature $F$ of each superpixel.

Step 2. Obtain the superpixels of the image by over-segmentation using TurboPixels.

Step 3. Search the neighboring superpixels of each superpixel (see Section 2.2), and find $S_N$ similar superpixels outside the neighboring superpixels, where $S_N = \log(X \times Y)$ in our experiments, with $X$ the width and $Y$ the height of the image.
Step 4. According to Equations (4) and (5), iteratively update the cluster centers and the membership of each superpixel. If $\max \left( \left| \mathbf{U}^{(k)} - \mathbf{U}^{(k+1)} \right| / \mathbf{U}^{(k)} \right) \le \varepsilon$ is met, go to Step 5; otherwise, set $k \leftarrow k + 1$ and repeat Step 4.
Step 5. Optimize the influential degrees $\alpha_1$ and $\alpha_2$ by the PSO method. If $\left| J(\mathbf{U}, \mathbf{V})^{(t)} - J(\mathbf{U}, \mathbf{V})^{(t+1)} \right| \big/ J(\mathbf{U}, \mathbf{V})^{(t)} \le \delta$ is met in five consecutive iterations, or the evolutionary generation reaches its maximum, the iterations stop; otherwise, return to Step 4.

The steps of the proposed method are illustrated in Figure 2.
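As a compact illustration of this flowchart, the sketch below wires together the hypothetical helpers sketched earlier (`superpixel_features`, `neighbors_and_similar`, `fcm`, `modified_distances`, `pso_alphas`); it fixes $m = 2$ and uses a small fixed inner iteration budget instead of the paper's exact $\varepsilon$/$\delta$ stopping tests.

```python
import numpy as np

def segment(image, C):
    """Sketch of the Section 4.1 flowchart (m = 2 throughout)."""
    labels, F, Sp = superpixel_features(image)                 # Steps 1-2
    n_sim = int(np.log(image.shape[0] * image.shape[1]))       # S_N = log(X*Y), Step 3
    nbr_sim = [np.concatenate(neighbors_and_similar(labels, F, Sp, j, n_sim))
               for j in range(len(F))]

    def run_fcm(alphas, iters=20):
        # Step 4: modified FCM updates, Equations (4)-(5).
        U, V = fcm(F, C)                                       # warm start with plain FCM
        for _ in range(iters):
            D2 = modified_distances(F, Sp, U, V, nbr_sim, *alphas)
            w = 1.0 / (D2 + 1e-12)                             # exponent -1/(m-1), m = 2
            U = w / w.sum(axis=0, keepdims=True)
            Um = U ** 2
            V = (Um @ F) / Um.sum(axis=1, keepdims=True)
        return U, (Um * D2).sum()                              # objective J, Eq. (4)

    alphas = pso_alphas(lambda a: run_fcm(a)[1])               # Step 5: optimize (a1, a2)
    U, _ = run_fcm(alphas)
    return U.argmax(axis=0)[labels]                            # per-pixel cluster labels
```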
[Insert Figure 2 about here]

4.2 Computational complexity

To test the validity of the proposed method, we study three categories of images (SAR, texture, and natural images), so the computational complexities corresponding to the three different features are discussed in this section. For an image of size $M \times N$, extracting the wavelet feature of a SAR image brings a complexity increase of $O(MN \log MN)$. Extracting the gray-level co-occurrence matrix (GLCM) [44, 45] of a texture image brings a complexity increase of $O(MN D_M F_M)$, where $D_M$ is the number of directions and $F_M$ is the number of features. Extracting the 3-D CIE Lab color of a natural image brings a complexity increase of $O(MN)$. Similarly, applying PSO to the proposed method brings a complexity increase of $O(N_{SP} S P)$ per clustering pass, where $N_{SP}$ is the number of superpixels in the image, $S$ is the number of neighboring and similar superpixels, and $P$ is the population size.

In our MATLAB implementation, the proposed algorithm runs on an HP Pro 3380 PC with an Intel(R) i5-3470 CPU, 4 GB of memory, and the Windows 7 operating system. The processing times of the wavelet, GLCM, and 3-D CIE Lab features for a $256 \times 256$ image are about 178 s, 159 s, and 0.6 s, respectively. The segmentation times of the proposed method on a $256 \times 256$ image are about 17 s, 16 s, and 1 s for SAR, texture, and natural images, respectively.
5. Experimental results and analysis

We evaluate our method in two groups of experiments: (1) synthetic and real SAR image segmentation, and (2) application extensions of the proposed method. SAR image segmentation is performed using the wavelet feature $F$; the application extension to texture images is based on the texture feature, i.e., the gray-level co-occurrence matrix (GLCM) [44, 45]; and the 3-D CIE Lab color feature is extracted for natural image segmentation. The compared methods are FCM, Nyström [46], K-means clustering [47], FCM_S [23], KFCM [25], and FLICM [12], all of which adopt the same features as our proposed method. In addition, to verify the effect on SAR image segmentation, we compare our method with CHUMSIS [32], a recent superpixel-based SAR image segmentation method. For the application extensions to texture and natural images, we compare the proposed method with two recent superpixel-based methods, CoSand [48] and CDHIS [49].
5.1 SAR image segmentation

A. Synthetic SAR image segmentation

To analyze SAR segmentation results quantitatively, we synthesize several SAR images. Figure 3(a) is a template, and Figure 3(b) shows the synthetic single-look SAR image, of size $256 \times 256$. Four synthetic SAR images are used to test segmentation performance and are segmented by FCM, Nyström, K-means, FCM_S, KFCM, FLICM, CHUMSIS, and the proposed method; all results are shown in Figure 3. Figure 3(c) is the FCM result, Figure 3(d) the Nyström result, and Figure 3(e) the K-means result. Because spatial information is not considered in these methods, their results are poor on noisy images, and the synthetic SAR images cannot be segmented correctly. Figures 3(f) and 3(g) are the results of FCM_S and KFCM, respectively; although both methods consider spatial information, their results are not ideal because they are not robust on badly speckled SAR images. Figure 3(h) is the FLICM result: the local segmentation (the blue area and the circular yellow area) is good, but the other areas are not. Figure 3(i) is the CHUMSIS result, which is generally better than the other compared clustering methods and comparable to ours. Figure 3(j) is the result of our method, which is better than all compared methods. We conclude that our method is more powerful than the compared methods, since it sufficiently uses the underlying spatial features, i.e., neighboring and similar superpixels.
[Insert Figure 3 about here]

Figure 4 shows how the objective function value changes with the evolutionary generation while processing the four synthetic SAR images with our method. As the evolutionary generation increases, ideal segmentation results are gradually achieved.
[Insert Figure 4 about here]

Table 1 shows the pixel misclassification numbers and percentages of the different methods. From Table 1, the number of misclassifications is significantly reduced by the proposed method.
[Insert Table 1 about here]

B. Real SAR image segmentation

Three real SAR images are tested. SAR1, of size $550 \times 430$, shows a pipeline over the Rio Grande river near Albuquerque, New Mexico (Ku-band radar, 1-m resolution). SAR2, of size $550 \times 400$, shows the Theodore Roosevelt Memorial Bridge, Washington, D.C. (Ku-band radar, 1-m resolution). SAR3, of size $470 \times 450$, shows China Lake Airport, California (Ku-band radar, 3-m resolution). All three images have 8-bit gray levels, and the wavelet feature $F$ is adopted to process them. The three images are segmented by FCM, Nyström, K-means, FCM_S, KFCM, FLICM, CHUMSIS, and the proposed method; the results are shown in Figures 5, 6, and 7. The original images are shown in Figures 5(a), 6(a), and 7(a). Figures 5(b), 6(b), and 7(b) are the FCM results; Figures 5(c), 6(c), and 7(c) the Nyström results; and Figures 5(d), 6(d), and 7(d) the K-means results. These results show that the three methods cannot reach the desired segmentation because of the absence of spatial information: there are many obvious misclassified regions, and the methods are not sufficient for badly speckled SAR images. In general, there are many small regions in the results of FCM_S and KFCM [Figures 5(e), 6(e), 7(e) and Figures 5(f), 6(f), 7(f)]; in other words, many regions cannot be segmented correctly. FLICM obtains better results [Figures 5(g), 6(g), 7(g)] than the other compared clustering methods, but there are still many obvious misclassified areas, such as the bank of the river [Figure 5(g)] and the middle of the bridge [Figure 6(g)]. The CHUMSIS results [Figures 5(h), 6(h), 7(h)] are better than those of the other compared clustering methods; however, CHUMSIS divides one complete object area into different areas, affecting the integrity of regions, such as the bank area in Figure 5 and the bridge in Figure 6. Figures 5(i), 6(i), and 7(i) are the results of the proposed method, which show that it correctly segments the different regions of each image. In Figure 5(i), the river, bank, and lawn are correctly segmented. In Figure 6(i), the achieved segmentation is visually correct, since river, bridge, and vegetation are well separated. In Figure 7(i), the runway, the living area, and the other areas are properly divided, and the fine runway is also segmented accurately. These results benefit from the neighboring and similar superpixels, the optimized influential degrees, and the alternating optimization process. Figure 8 shows the objective function values for the three real SAR images using the proposed method. From Figure 8, ideal segmentation results are gradually obtained as the evolutionary generation increases; by about generation 50, the ideal segmentation results have been achieved.
[Insert Figure 5 about here] [Insert Figure 6 about here] [Insert Figure 7 about here] [Insert Figure 8 about here]

Two real SAR images (SAR4 and SAR5) with typical texture appearance are also segmented; the results are shown in Figures 9 and 10. Our method uses the wavelet feature $F$ to implement the segmentation. The results show that the proposed method is superior to the other compared methods.
[Insert Figure 9 about here] [Insert Figure 10 about here]

5.2 Other applications

A. Synthetic texture image segmentation

For typical texture images, we adopt the GLCM feature. A co-occurrence matrix is specified by the relative frequencies $P(u, v, d, \theta)$ with which two pixels of gray levels $u$ and $v$ occur at distance $d$ in the direction specified by the angle $\theta$. We use the four directions $\theta \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}$; different $d$ describe different texture scales, and $d = 1$ in our experiments. The following secondary statistics of these features are used as classification features: (1) angular second moment, $g_1 = \sum_{u=0}^{L-1} \sum_{v=0}^{L-1} p^{2}(u, v)$; (2) homogeneity, $g_2 = \sum_{u=0}^{L-1} \sum_{v=0}^{L-1} p(u, v) / (1 + |u - v|)$; (3) contrast, $g_3 = \sum_{u=0}^{L-1} \sum_{v=0}^{L-1} (u - v)^{2} p(u, v)$, where $L - 1$ is the maximum gray level. The angular second moment values $(g_{10}, g_{11}, g_{12}, g_{13})$ correspond to $0^\circ, 45^\circ, 90^\circ, 135^\circ$; the homogeneity in the four directions is represented as $(g_{20}, g_{21}, g_{22}, g_{23})$; and the contrast in the four directions is $(g_{30}, g_{31}, g_{32}, g_{33})$. We describe each superpixel $j$ by a 12-dimensional GLCM feature vector $G_j = (g_{10}, \ldots, g_{13}, g_{20}, \ldots, g_{23}, g_{30}, \ldots, g_{33})$, where $G_j$ is the average GLCM value over all pixels in superpixel $j$. We only need to replace $F_j$ by $G_j$ in Equations (4) and (5); the overall implementation steps are the same as in Section 4 (a feature sketch follows at the end of this subsection).

To quantitatively compare segmentation results, four synthetic texture images from the Brodatz database are used to test segmentation performance. These synthetic images contain 2, 3, 4, and 5 types of texture regions, respectively, and each image is of size $256 \times 256$; they are named SYN1, SYN2, SYN3, and SYN4. In the experiments, these images are segmented by FCM, Nyström, K-means, FCM_S, KFCM, FLICM, CoSand, CDHIS, and the proposed method; all results are shown in Figure 11. The original images are shown in Figure 11(a), and Figure 11(b) is the ground truth. Figure 11(c) shows the FCM results: because spatial information is not considered, ideal segmentation results cannot be obtained. Figure 11(d) shows the Nyström results and Figure 11(e) the K-means results; in both, many unconnected parts are inaccurately segmented, and ideal segmentation cannot be achieved. Figure 11(f) shows the FCM_S results: although the method incorporates spatial information, it is not robust on texture images. KFCM and FLICM generally yield better results [Figure 11(g) and Figure 11(h)], although some misclassification remains. CoSand and CDHIS are superpixel-based methods, and their results are shown in Figure 11(i) and Figure 11(j), respectively. A common characteristic of both methods is over-segmentation: local similar areas are extracted, so a segmented area is often only part of one complete object area. Moreover, the two methods have a weak ability to identify similar texture areas, which leads to poor segmentation of texture images. Figure 11(k) shows the segmentation results of the proposed method, which are clearly better. In Figure 11, the misclassified points mainly lie at texture borders, which contain more than one texture. Misclassifications at boundaries are unavoidable, but their number is largely reduced by the optimized influential degrees. We conclude that our method is more powerful than the compared clustering methods and superpixel-based methods, since it sufficiently uses the underlying spatial superpixel feature.
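As a sketch of the 12-D GLCM descriptor defined above, the snippet below uses scikit-image's `graycomatrix`/`graycoprops`. Two caveats: scikit-image's homogeneity uses $1 + (u - v)^2$ in the denominator rather than the paper's $1 + |u - v|$ (close but not identical), and computing the descriptor per local patch and averaging it over each superpixel is our assumption about how $G_j$ is assembled.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_vector(patch, levels=256):
    """Sketch: 12-D GLCM descriptor (g1, g2, g3 in four directions) of one
    2-D uint8 patch, with d = 1 and theta in {0, 45, 90, 135} degrees."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    P = graycomatrix(patch, distances=[1], angles=angles,
                     levels=levels, normed=True)
    asm = graycoprops(P, 'ASM')[0]           # g1: angular second moment, 4 angles
    hom = graycoprops(P, 'homogeneity')[0]   # g2: homogeneity (skimage's variant)
    con = graycoprops(P, 'contrast')[0]      # g3: contrast
    return np.concatenate([asm, hom, con])   # (g10..g13, g20..g23, g30..g33)
```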
[Insert Figure 11 about here]

Figure 12 shows how the objective function value changes with the evolutionary generation while processing the four synthetic texture images with our method. As the evolutionary generation increases, ideal segmentation results are gradually achieved; by about generation 60, the alternating optimization ends and ideal segmentation results are obtained.
[Insert Figure 12 about here]

Table 2 shows the pixel misclassification numbers and percentages of the different methods for segmenting the synthetic texture images. Because the two superpixel-based methods, CoSand and CDHIS, perform over-segmentation and extract similar local areas, their number of clusters is not determined by prior knowledge, and the number of classes they obtain differs from that of the other methods; therefore, we do not compare their misclassification numbers. From Table 2, the number of misclassifications is significantly reduced by our method.
[Insert Table 2 about here]

B. Natural image segmentation

In each superpixel, 3-D CIE Lab color features are extracted. The CIE Lab color model is considered perceptually uniform and is referred to as a uniform color model; it is uniformly derived from the standard CIE XYZ space. We describe each superpixel $j$ by a 3-D CIE Lab feature vector $L_j = (l_1, l_2, l_3)$, where $L_j$ is the average value of all pixels in superpixel $j$. We only need to replace $F_j$ by $L_j$ in Equations (4) and (5); the overall implementation steps are the same as in Section 4. The natural image segmentation results are shown in Figure 13; the images are from the MSRC dataset [50]. From the experimental results, our method obtains better segmentation results than the other compared methods. Figure 14 shows how the objective function value changes with the evolutionary generation while processing the natural images with our method; as the evolutionary generation increases, ideal segmentation results are gradually achieved.
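A minimal sketch of the per-superpixel Lab feature, assuming an RGB input image and a `labels` map from the over-segmentation:

```python
import numpy as np
from skimage.color import rgb2lab

def lab_features(image_rgb, labels):
    """Sketch: L_j = (l1, l2, l3), the mean CIE Lab value of each superpixel."""
    lab = rgb2lab(image_rgb)
    n_sp = labels.max() + 1
    L = np.zeros((n_sp, 3))
    for j in range(n_sp):
        L[j] = lab[labels == j].mean(axis=0)   # average Lab over superpixel j
    return L
```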
[Insert Figure 13 about here] [Insert Figure 14 about here]
6. Conclusions

The proposed method achieves superpixel-based FCM clustering in which the influence of neighboring and similar superpixels is incorporated into the clustering. The influential degrees control how strongly the inter-superpixel similarity is imposed and are optimized to improve segmentation performance. Experimental results show that the proposed method achieves better segmentation accuracy on SAR images. Furthermore, the proposed method extends to other applications when the feature corresponding to the specific image type is adopted.
Acknowledgement

This work is supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2014JM8301); the Fundamental Research Funds for the Central Universities; the National Natural Science Foundation of China under grants No. 60972148, 61072106, 61173092, 61271302, 61272282, 61001206, 61202176, and 61271298; the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project, No. B07048); and the Program for Cheung Kong Scholars and Innovative Research Team in University (IRT1170).
References

[1] N. Pal, S. Pal. A review on image segmentation techniques. Pattern Recognition, 1993, 26: 1277-1294.
[2] M. Gong, Y. Liang, J. Shi, W. Ma, J. Ma. Fuzzy c-means clustering with local information and kernel metric for image segmentation. IEEE Transactions on Image Processing, 2013, 22(2): 573-584.
[3] H. Cao, H. Deng, Y. Wang. Segmentation of M-FISH images for improved classification of chromosomes with an adaptive fuzzy c-means clustering method. IEEE Transactions on Fuzzy Systems, 2012, 20(1): 1-8.
[4] J. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Kluwer Academic Publishers, Norwell, MA, USA, 1981.
[5] H. Zhang, Q. M. J. Wu, T. M. Nguyen. A robust fuzzy method based on student's t-distribution and mean template for image segmentation application. IEEE Signal Processing Letters, 2013, 20(2): 117-120.
[6] J. C. Noordam, W. H. A. M. van den Broek. Geometrically guided fuzzy c-means clustering for multivariate image segmentation. In: Proc. International Conference on Pattern Recognition, 2000, 1: 462-465.
[7] M. J. Kwon, Y. J. Han, I. H. Shin, H. W. Park. Hierarchical fuzzy segmentation of brain MR images. International Journal of Imaging Systems and Technology, 2003, 13(2): 115-125.
[8] M. N. Ahmed, S. M. Yamany, N. Mohamed, A. A. Farag, T. Moriarty. A modified fuzzy c-means method for bias field estimation and segmentation of MRI data. IEEE Transactions on Medical Imaging, 2002, 21(3): 193-199.
[9] D. Q. Zhang, S. C. Chen, Z. S. Pan, K. R. Tan. Kernel-based fuzzy clustering incorporating spatial constraints for image segmentation. In: Proc. International Conference on Machine Learning and Cybernetics, 2003, 4: 2189-2192.
[10] X. Li, L. Li, H. Lu, D. Chen, Z. Liang. Inhomogeneity correction for magnetic resonance images with fuzzy c-means method. In: Proc. SPIE Int. Soc. Opt. Eng., 2003, 5032: 995-1005.
[11] L. Chen, C. L. P. Chen, M. Lu. A multiple-kernel fuzzy c-means method for image segmentation. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2011, 41(5): 1263-1274.
[12] S. Krinidis, V. Chatzis. A robust fuzzy local information c-means clustering method. IEEE Transactions on Image Processing, 2010, 19(5): 1328-1337.
[13] O. Sjahputera, G. J. Scott, B. Claywell, M. N. Klaric, N. J. Hudson, J. M. Keller, C. H. Davis. Clustering of detected changes in high-resolution satellite imagery using a stabilized competitive agglomeration method. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(12): 4687-4703.
[14] S. N. Sulaiman, N. A. M. Isa. Denoising-based clustering methods for segmentation of low level salt-and-pepper noise-corrupted images. IEEE Transactions on Consumer Electronics, 2010, 56(4): 2702-2710.
[15] X. Yang, G. Zhang, J. Lu, J. Ma. A kernel fuzzy c-means clustering-based fuzzy support vector machine method for classification problems with outliers or noises. IEEE Transactions on Fuzzy Systems, 2011, 19(1): 105-115.
[16] H. Le Capitaine, C. Frelicot. A cluster-validity index combining an overlap measure and a separation measure based on fuzzy-aggregation operators. IEEE Transactions on Fuzzy Systems, 2011, 19(3): 580-588.
[17] M. Gong, Z. Zhou, J. Ma. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Transactions on Image Processing, 2012, 21(4): 2141-2151.
[18] H. C. Huang, Y. Y. Chuang, C. S. Chen. Multiple kernel fuzzy clustering. IEEE Transactions on Fuzzy Systems, 2012, 20(1): 120-134.
[19] S. Balla-Arabe, X. Gao, B. Wang. A fast and robust level set method for image segmentation using fuzzy clustering and lattice Boltzmann method. IEEE Transactions on Cybernetics, 2013, 43(3): 910-920.
[20] I. Despotovic, E. Vansteenkiste, W. Philips. Spatially coherent fuzzy clustering for accurate and noise-robust image segmentation. IEEE Signal Processing Letters, 2013, 20(4): 295-298.
[21] E. K. K. Ng, A. W.-C. Fu, R. C.-W. Wong. Projective clustering by histograms. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(3): 369-383.
[22] B. Li, W. Chen, D. Wang. An improved FCM method incorporating spatial information for image segmentation. In: Proc. International Symposium on Computer Science and Computational Technology, 2008: 493-495.
[23] S. Chen, D. Zhang. Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2004, 34(4): 1907-1916.
[24] J. M. Leski. Fuzzy c-varieties/elliptotypes clustering in reproducing kernel Hilbert space. Fuzzy Sets and Systems, 2004, 141(2): 259-280.
[25] D. Graves, W. Pedrycz. Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study. Fuzzy Sets and Systems, 2010, 161(4): 522-543.
[26] D. Zhang, S. Chen. Clustering incomplete data using kernel-based fuzzy c-means algorithm. Neural Processing Letters, 2003, 18(3): 155-162.
[27] C. Zhu, S. Yang, Q. Zhao, S. Cui, N. Wen. Robust semi-supervised kernel-FCM algorithm incorporating local spatial information for remote sensing image classification. Journal of the Indian Society of Remote Sensing, 2014, 42(1): 35-49.
[28] S. Arivazhagan, L. Ganesan. Texture segmentation using wavelet transform. Pattern Recognition Letters, 2003, 24: 3197-3203.
[29] S. Liapis, E. Sifakis, G. Tziritas. Colour and texture segmentation using wavelet frame analysis, deterministic relaxation, and fast marching methods. J. Vis. Commun. Image R., 2004, 15: 1-26.
[30] X. Ren, J. Malik. Learning a classification model for segmentation. In: International Conference on Computer Vision, 2003, 1: 10-17.
[31] X. Ren, J. Malik. Learning a classification model for segmentation. In: International Conference on Computer Vision, 2003, 10-17.
[32] H. Yu, X. Zhang, S. Wang, B. Hou. Context-based hierarchical unequal merging for SAR image segmentation. IEEE Transactions on Geoscience and Remote Sensing, 2013, 51(2): 995-1009.
[33] S. Wang, H. Lu, F. Yang, M. H. Yang. Superpixel tracking. In: International Conference on Computer Vision, 2011, 1323-1330.
[34] B. Fulkerson, A. Vedaldi, S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In: International Conference on Computer Vision, 2009, 670-677.
[35] C. Pantofaru, C. Schmid, M. Hebert. Object recognition by integrating multiple image segmentations. In: European Conference on Computer Vision, 2008, 481-494.
[36] P. Mehrani, O. Veksler. Saliency segmentation based on learning and graph cut refinement. In: Proc. British Machine Vision Conference, 2010, 1-12.
[37] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, K. Siddiqi. TurboPixels: fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(12): 2290-2297.
[38] J. Shotton, M. Johnson, R. Cipolla. Semantic texton forests for image categorization and segmentation. In: European Conference on Computer Vision, 2008, 1-8.
[39] T. Soni Madhulatha. An overview on clustering methods. IOSR Journal of Engineering, 2012, 2(4): 719-725.
[40] J. Ke, L. O. Hall, D. B. Goldgof. Fast accurate fuzzy clustering through data reduction. IEEE Transactions on Fuzzy Systems, 2003, 11(2): 262-270.
[41] D. J. Hemanth, D. Selvathi, J. Anitha. Effective fuzzy clustering method for abnormal MR brain image segmentation. In: IEEE International Advance Computing Conference, 2009, 609-614.
[42] K. E. Parsopoulos, M. N. Vrahatis. On the computation of all global minimizers through particle swarm optimization. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 211-224.
[43] S. Bird, X. Li. Enhancing the robustness of a speciation-based PSO. In: IEEE Congress on Evolutionary Computation, 2006, 843-850.
[44] R. M. Haralick. Statistical and structural approaches to texture. Proceedings of the IEEE, 1979, 67(5): 786-804.
[45] R. M. Haralick, K. Shanmugam, I. Dinstein. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 1973, SMC-3: 610-621.
[46] C. Fowlkes, S. Belongie, F. Chung, J. Malik. Spectral grouping using the Nyström method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(2): 214-225.
[47] A. Rakhlin. Stability of clustering methods. In: Conference on Neural Information Processing Systems, 2005.
[48] G. Kim, E. P. Xing, L. Fei-Fei, T. Kanade. Distributed cosegmentation via submodular optimization on anisotropic diffusion. In: International Conference on Computer Vision, 2011, 169-176.
[49] P. Arbelaez, M. Maire, C. Fowlkes, J. Malik. Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(5): 898-916.
[50] J. Winn, A. Criminisi, T. Minka. Object categorization by learned universal visual dictionary. In: International Conference on Computer Vision, 2005, 1800-1807.
Tables and Figures
Table 1. Pixel misclassification number and percentage of different segmentation methods.

| Image  | FCM | Nyström | K-means | FCM_S | KFCM | FLICM | CHUMSIS | Ours |
|--------|-----|---------|---------|-------|------|-------|---------|------|
| 1-look | 10315 (15.74%) | 20877 (31.86%) | 2595 (3.96%) | 11624 (17.73%) | 10619 (16.20%) | 7241 (11.04%) | 1126 (1.72%) | 1004 (1.53%) |
| 2-look | 3477 (5.31%) | 8922 (13.61%) | 1581 (2.41%) | 7766 (11.85%) | 3472 (5.29%) | 1851 (2.82%) | 1029 (1.57%) | 751 (1.15%) |
| 4-look | 1237 (1.89%) | 1439 (2.20%) | 1256 (1.92%) | 3920 (5.98%) | 1237 (1.88%) | 1977 (3.01%) | 898 (1.37%) | 654 (1.00%) |
| 6-look | 1276 (1.95%) | 1546 (2.36%) | 1305 (1.99%) | 2907 (4.43%) | 1276 (1.94%) | 2082 (3.17%) | 876 (1.34%) | 655 (1.00%) |
Table 2. Pixel misclassification number and percentage of different texture segmentation methods.

| Image | FCM | Nyström | K-means | FCM_S | KFCM | FLICM | Ours |
|-------|-----|---------|---------|-------|------|-------|------|
| SYN1 | 5890 (8.99%) | 2145 (3.27%) | 7208 (11.00%) | 5742 (8.76%) | 5890 (8.98%) | 5627 (8.58%) | 1987 (3.03%) |
| SYN2 | 33887 (51.71%) | 33158 (50.60%) | 42981 (65.58%) | 38814 (59.23%) | 33889 (51.71%) | 32292 (49.27%) | 2504 (3.82%) |
| SYN3 | 1571 (2.40%) | 25968 (39.62%) | 1466 (2.24%) | 11931 (18.21%) | 1571 (2.40%) | 1271 (2.20%) | 1518 (2.32%) |
| SYN4 | 2828 (4.32%) | 2097 (3.20%) | 2867 (4.37%) | 2370 (3.62%) | 2828 (4.32%) | 2867 (4.37%) | 1921 (2.93%) |
Figure 1. Selection of neighboring and similar superpixels.
[Figure 2: flowchart of the proposed method, built from Equations (3)-(6).]

Figure 2. The steps of the proposed method.
Figure 3. Synthetic single-look SAR image segmentation. (a) template, (b) synthetic SAR image, (c) FCM, (d) Nyström, (e) K-means, (f) FCM_S, (g) KFCM, (h) FLICM, (i) CHUMSIS, and (j) Ours.
[Figure 4: plot of objective function value vs. generation for the four synthetic SAR images.]

Figure 4. The objective function value of our method for segmenting four synthetic SAR images.
Figure 5. Real SAR1 image segmentation. (a) original images, (b) FCM, (c) Nyström, (d) K-means, (e) FCM_S, (f) KFCM, (g) FLICM, (h) CHUMSIS, and (i) Ours.
Figure 6. Real SAR2 image segmentation. (a) original images, (b) FCM, (c) Nyström, (d) K-means, (e) FCM_S, (f) KFCM, (g) FLICM, (h) CHUMSIS, and (i) Ours.
Figure 7. Real SAR3 image segmentation. (a) original images, (b) FCM, (c) Nyström, (d) K-means, (e) FCM_S, (f) KFCM, (g) FLICM, (h) CHUMSIS, and (i) Ours.
[Figure 8: plot of objective function value vs. generation for SAR1, SAR2, and SAR3.]

Figure 8. The objective function value of our method for segmenting three real SAR images.
Figure 9. Real SAR4 image segmentation. (a) original images, (b) FCM, (c) Nyström, (d) K-means, (e) FCM_S, (f) KFCM, (g) FLICM, (h) CHUMSIS, and (i) Ours.
Figure 10. Real SAR5 image segmentation. (a) original images, (b) FCM, (c) Nyström, (d) K-means, (e) FCM_S, (f) KFCM, (g) FLICM, (h) CHUMSIS, and (i) Ours.
Figure 11. Synthetic texture image segmentation (SYN1-SYN4). (a) original images, (b) ground truth, (c) FCM, (d) Nyström, (e) K-means, (f) FCM_S, (g) KFCM, (h) FLICM, (i) CoSand, (j) CDHIS, and (k) ours.
[Figure 12: plot of objective function value vs. generation for SYN1-SYN4.]

Figure 12. The objective function value of our method for segmenting four synthetic texture images.
Figure 13. Natural image segmentation. (a) original images, (b) FCM, (c) Nyström, (d) K-means, (e) FCM_S, (f) KFCM, (g) FLICM, (h) CoSand, (i) CDHIS, and (j) ours.
[Figure 14: plot of objective function value vs. generation for the MSRC images 1_20_s, 3_29_s, 5_13_s, 5_15_s, 12_26_s, 17_27_s, 18_14_s, 18_25_s, and 19_29_s.]

Figure 14. The objective function value of our method for segmenting natural images.
Highlights

1) Spatially neighboring and similar superpixels are incorporated into the clustering method.
2) The influential degree of spatial superpixels is optimized.
3) Application extension is achieved by varying the extracted feature.