Graph-based learning for segmentation of 3D ultrasound images


Author's Accepted Manuscript

Graph-Based Learning for Segmentation of 3D Ultrasound Images

Huali Chang, Zhenping Chen, Qinghua Huang, Jun Shi, Xuelong Li

www.elsevier.com/locate/neucom

PII: S0925-2312(14)01387-3
DOI: http://dx.doi.org/10.1016/j.neucom.2014.05.092
Reference: NEUCOM14831

To appear in: Neurocomputing

Received date: 16 November 2013
Revised date: 22 May 2014
Accepted date: 24 May 2014

Cite this article as: Huali Chang, Zhenping Chen, Qinghua Huang, Jun Shi, Xuelong Li, Graph-Based Learning for Segmentation of 3D Ultrasound Images, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2014.05.092

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Graph-Based Learning for Segmentation of 3D Ultrasound Images

Huali Chang¹, Zhenping Chen¹,ᵃ, Qinghua Huang¹,*, Jun Shi², and Xuelong Li³

¹ School of Electronic and Information Engineering, South China University of Technology, Guangzhou, P.R. China
² School of Communication and Information Engineering, Shanghai University, Shanghai, P.R. China
³ Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, P.R. China

* Corresponding author: [email protected]

ABSTRACT

The analysis of 3D medical images has become necessary as 3D imaging techniques are increasingly widely applied in medicine. This paper introduces a novel segmentation method for extracting objects of interest (OOI) from 3D ultrasound images. In the proposed method, a bilateral filtering model is first applied to a 3D ultrasound volume data set for speckle reduction. We then take advantage of graph theory to construct a 3D graph, and merge sub-graphs into larger ones during the segmentation process; the proposed method can therefore be called a 3D graph-based segmentation algorithm. After the merging of sub-graphs, a set of minimum spanning trees, each of which corresponds to a 3D subregion, is generated. In terms of segmentation accuracy, experiments using an ultrasound fetus phantom, a resolution phantom and human fingers demonstrate that the proposed method outperforms the 3D Snake and Fuzzy C Means clustering methods, indicating improved performance for potential clinical applications.

Keywords: Graph theory, 3D ultrasound, image segmentation, pairwise region comparison predicate.

1. Introduction

Medical ultrasound (US) is one of the most widely used diagnostic tools: it is low-cost and low-risk to patients, and it allows more accurate and faster procedures in comparison with other medical imaging modalities [1]. It has been

ᵃ Huali Chang and Zhenping Chen contributed equally to this paper.

frequently used in clinical applications. Segmentation is the procedure of separating the object of interest (OOI) from the background in image analysis. Image segmentation plays a significant role in both qualitative and quantitative image analysis, whose main purpose is to detect and measure the OOI in an image so as to establish a description of the image from the extracted objective information. The major objective of image segmentation is to partition an image into a number of regions that are visually distinct and uniform with respect to some image property, such as color, gray level or texture. 3D US image segmentation is helpful in 3D image analysis since it extracts a 3D OOI, which may be a lesion or a specific tissue located in the human body, so that diagnosis can be conducted based on 3D analyses. In this paper, we aim to isolate the 3D OOI from its surroundings in a medical US volumetric image.

Segmentation of 3D US images has drawn particular attention in the past decades. However, because 3D segmentation is often performed on 2D slices, it is time-consuming for operators to delineate the boundaries of the OOI in a 3D image manually; robust and fast segmentation algorithms are thus needed to accomplish segmentation tasks after 3D image reconstruction. A large number of 3D medical image segmentation algorithms have been presented in recent years, including an adaptive thresholding method [2], a watershed and rule-based merging method [3], watershed segmentation [4], an active contour model [5], a deformable shape model [6], and a method based on prediction, block-matching and partial 3D constraints [7].

It is often difficult to segment medical US images due to the speckle and low contrast inherent in US images. Accurate detection of boundaries or contours therefore plays an important role in extracting the OOI in US images. To this end, a deformable model also called the Snake [8] has been widely utilized in US image segmentation. The Snake was first proposed for obtaining contours in 2D images; it deforms iteratively to converge on the contour of the OOI as accurately as possible. Accordingly, some efforts have been made to segment US images using the Snake [9,10]. Although the Snake has been widely employed for the segmentation of various medical images, it is still not robust enough against noise and heavily depends on the initial contour of the OOI. Snake-based methods are not computationally efficient on account of the required initial contours, which must be defined either by complex auto-initialization methods or by manual delineation. In contrast, clustering techniques [11], which require less user participation, may be a feasible option for image segmentation. In clustering-based image segmentation, points with similar intensities are clustered into the same class. Fuzzy C means (FCM) clustering, which can be performed without initial contours, is a popular and classical method [12]. In [13], a fuzzy co-clustering approach based on the simultaneous clustering of both object and

feature memberships is used for the color segmentation of natural images. Clustering has also been used for segmentation of breast tumors in 3D US images [14]. However, clustering methods may suffer from over-segmentation and under-segmentation.

In recent years, graph learning techniques have found a wide variety of applications in computer science. Due to their simple structure and solid theoretical foundations, graph learning has become a hot research topic and a large number of techniques based on it have been developed. In [15,40], hypergraph-based techniques were employed for image relevance learning and image classification. Additionally, 3-D object retrieval and recognition have attracted much research interest in the past years [16,17]. Gao et al. [18,19] proposed several novel graph-based algorithms which were successfully employed in view-based 3-D object retrieval and recognition. A new graph-based algorithm for overlapping clustering was proposed with a graph-covering and filtering strategy, which together allow building overlapping clusters more accurately [20]. Liu et al. [21] proposed a graph learning framework for image annotation, which achieved success in the field of image and video analysis. In [22], the authors first proposed a model for transductive learning on graphs and developed a margin analysis for multi-class graph learning; they provided a better understanding of the normalization of the graph Laplacian matrix as well as dimension reduction in graph learning. Some studies have been conducted on the basis of graph learning techniques to achieve improved performance for video annotation [23,24]. In the field of multimedia analysis, Wang et al. [25] introduced a web image search re-ranking approach that explores multiple modalities in a graph-based learning scheme, and Huang et al. [36] explored the user-video tripartite graph for personalized video recommendation.

The graph technique [26] is regarded as a segmentation approach in its own right, in addition to clustering. Image segmentation based on graph theory, the so-called graph-based method, has become a hot subject of study in recent years. Felzenszwalb and Huttenlocher [27] proposed a typical graph-based segmentation algorithm (the efficient graph-based algorithm, EGB) which was successfully applied to general images. There are two steps in the method: graph construction, which maps an image to a graph, and the merging of vertices in the graph. Both local spatial and global information are taken into account in the EGB method, hence regions at different locations but with similar intensity levels can be partitioned into different groups. Huang et al. [28] proposed a robust graph-based (RGB) segmentation algorithm in which a novel pairwise region comparison predicate (PRCP) was designed for US images by taking into account local statistics and the signal-to-noise ratio (SNR). In [29], a parameter-automatically optimized robust graph-based (PAORGB) image segmentation method was proposed to optimize the parameters of the RGB method. Furthermore, Zheng and Huang [39] extended the RGB method to a 3D form (called 3D RGB in this paper) and used it for segmentation of 3D

US images. In spite of successful applications on 3D US phantoms, its performance decreases as the image size increases due to the intrinsic shortcomings of its PRCP. Accordingly, in this paper we propose a new graph-based learning technique to extract the OOI in the segmentation of 3D US images. Because 3D US images contain complex artifacts, we perform a preprocessing procedure to preserve boundaries and reduce speckle before the segmentation. Afterward, making use of the graph-based method, we construct a 3D graph in which each vertex corresponds to a voxel and the edge weight between two vertices is their intensity difference. The image segmentation is thereby converted into a graph segmentation in which each subgraph corresponds to a subregion. Learning based on local statistics is conducted during the merging of subgraphs. Comparisons with the traditional Snake and FCM clustering methods are finally conducted to evaluate the performance of the proposed method.

The rest of this paper is organized as follows. The proposed segmentation method is presented in Section 2. Section 3 shows and discusses the experimental results as well as the comparisons among different methods. The conclusions of this paper are drawn in Section 4.

2. Methods

In this study, considering the characteristics of US images, we first use a preprocessing procedure (i.e. a bilateral filtering model) for speckle reduction. Thereafter, the proposed graph-based 3D image segmentation method, which mainly includes two steps (mapping the image to a graph and merging homogeneous neighboring regions), is employed. We incorporate hypothesis testing into the PRCP to determine whether two neighboring subgraphs should be merged. Essentially, the process of image segmentation is transformed into that of graph segmentation. Afterwards, we use Kruskal's algorithm to obtain the minimum spanning trees (MSTs), each representing an isolated subregion. Finally, the original image becomes a forest which contains a number of MSTs, i.e. subregions.

2.1. Speckle reduction

A US image often contains plenty of noise and artifacts, such as speckle, shadows and low contrast, on account of the complex imaging environment and imaging principles. They significantly lower the performance of conventional segmentation methods. To improve the robustness of the proposed method against noise, a preprocessing procedure for speckle reduction is necessary. A bilateral filtering model, which has been proven to have high-efficiency

and high-accuracy applications, especially for the speckle reduction of US images, is used as the preprocessing method in this study [30]. The advantage of the bilateral filtering algorithm is that it does not need complicated iterative computation: even when the filtering window is large, it is guaranteed to complete the calculation within a limited time. It can be expressed as

$$f[k] = \left\{ \sum_{n \in \Omega} W[k,n] \cdot g[n] \right\} \times \left\{ \sum_{n \in \Omega} W[k,n] \right\}^{-1} \qquad (1)$$

where g is the original image, f denotes the filtered image, and Ω represents the neighborhood of the k-th voxel, n ∈ Ω. The weight coefficient W[k,n] is defined as

$$W[k,n] = W_d[k,n] \times W_r[k,n] \qquad (2)$$

where the spatial weight coefficient and the gray-level weight coefficient are computed as follows:

$$W_d[k,n] = \exp\left(-\frac{d_d^2([k],[k-n])}{2\delta_d^2}\right) = \exp\left(-\frac{n^2}{2\delta_d^2}\right) \qquad (3)$$

$$W_r[k,n] = \exp\left(-\frac{d_r^2([k],[k-n])}{2\delta_r^2}\right) = \exp\left(-\frac{(g[n]-g[k-n])^2}{2\delta_r^2}\right) \qquad (4)$$

where δ_d represents the geometric distribution factor and δ_r denotes the gray-level distribution factor. The value of δ_d determines the number of voxels in the filtering window, which is set to 3×3×3 in this study, and δ_r is the threshold of the gray-scale difference.
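To make Eqs. (1)-(4) concrete, below is a minimal sketch of the 3×3×3 bilateral filter in Python/NumPy. It is our illustration rather than the paper's implementation (which was written in VC++, see Section 2.6); the function name and the default spread factors are assumptions.

```python
import numpy as np

def bilateral_filter_3d(g, sigma_d=1.0, sigma_r=20.0):
    """Minimal 3x3x3 bilateral filter following Eqs. (1)-(4).

    g: 3D array of voxel intensities.
    sigma_d, sigma_r: the geometric and gray-level spread factors
    (delta_d / delta_r in the text); the default values are illustrative.
    """
    g = g.astype(np.float64)
    pad = np.pad(g, 1, mode='edge')
    num = np.zeros_like(g)
    den = np.zeros_like(g)
    zs, ys, xs = g.shape
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                # Neighbor intensities for the offset n = (dz, dy, dx).
                nb = pad[1 + dz:zs + 1 + dz,
                         1 + dy:ys + 1 + dy,
                         1 + dx:xs + 1 + dx]
                w_d = np.exp(-(dz*dz + dy*dy + dx*dx) / (2.0 * sigma_d**2))
                w_r = np.exp(-((nb - g) ** 2) / (2.0 * sigma_r**2))
                w = w_d * w_r          # Eq. (2)
                num += w * nb          # numerator of Eq. (1)
                den += w               # normalizer of Eq. (1)
    return num / den
```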

2.2. Graph construction

Graphs are ubiquitous in computer science. A graph is a collection of vertices V and edges E, where V is the set of vertices and E is the set of edges connecting the vertices. A graph may be either undirected or directed: a graph is called directed when its edges are one-way connections, whereas an undirected graph has edges that form "two-way" or "duplex" connections between its vertices. In an undirected graph, an edge merely connects two adjacent vertices. In this method, we use undirected graphs to represent 3D images. The emphasis of graph construction before the 3D image segmentation is on computing the edge weights. Each undirected graph G has vertices v_i ∈ V and edges (v_i, v_j) ∈ E. The edges connect pairs of adjacent vertices, and each edge has a corresponding weight w(v_i, v_j), which is a measurement of the dissimilarity between the two neighboring vertices v_i and v_j, i.e.

$$w_{ij} = | I(v_i) - I(v_j) | \qquad (5)$$

where I(v_i) is the intensity of the vertex v_i. As each vertex with more than one neighboring vertex is connected to all of its neighbors, it may have more than one edge. The graph of a 2D image can be constructed with a 4-connected, 6-connected or 8-connected neighborhood when using a conventional graph-based segmentation method [28]: each pixel has up to eight edges connecting it to its eight adjacent pixels, and one of the neighborhoods can be chosen. For a 3D image, the neighborhood models become more complex, as each voxel has at most 26 neighboring voxels, i.e. a voxel has up to 26 edges connecting it to its neighbors. In the procedure of graph construction, each vertex corresponds to a voxel, and each edge connecting two adjacent vertices v_i and v_j is weighted by the intensity difference between the two connected vertices [26]. The templates used to traverse the 3D US image are shown in Fig. 1. We present three types of templates, i.e. the 6-connected neighbors in Fig. 1a, the 12-connected neighbors in Fig. 1b, and the 26-connected neighbors in Fig. 1c. Each cube stands for a voxel in the 3D image; the cube marked in red represents the reference voxel and the remaining cubes represent its adjacent voxels. The templates establish the relationship between each pair of neighboring voxels.
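As an illustration of the construction, the sketch below builds the weighted edge list of Eq. (5) for the 6-connected template of Fig. 1a; indexing vertices by their flattened voxel position is a bookkeeping convention of ours. The 12- and 26-connected templates are obtained by adding the corresponding diagonal offsets.

```python
import numpy as np

# Forward offsets of the 6-connected template (Fig. 1a): using only
# forward offsets generates each undirected edge exactly once.
FORWARD_6 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def build_edges(volume, offsets=FORWARD_6):
    """Return a list of (weight, i, j) edges with w_ij = |I(v_i) - I(v_j)|."""
    zs, ys, xs = volume.shape
    vol = volume.astype(np.int64)               # avoid uint8 wrap-around
    idx = np.arange(volume.size).reshape(volume.shape)
    edges = []
    for dz, dy, dx in offsets:
        src = idx[:zs - dz, :ys - dy, :xs - dx].ravel()
        dst = idx[dz:, dy:, dx:].ravel()
        w = np.abs(vol[:zs - dz, :ys - dy, :xs - dx].ravel()
                   - vol[dz:, dy:, dx:].ravel())   # Eq. (5)
        edges.extend(zip(w, src, dst))
    return edges
```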


Fig. 1. Graph templates in which the cubes denote the traversal cells. (a) 6-connected neighbors, (b) 12-connected neighbors, and (c) 26-connected neighbors.

2.3. Pairwise region comparison predicate for region mergence

The conventional PRCP applied in 2D image segmentation was proposed in [27] as a measurement of the difference between two neighboring regions. Huang et al. [28] took advantage of the statistical information of each region and modified the predicate. In this paper, by taking into account the smoothness and distributions of the subregions (represented by subgraphs in the graph-based segmentation) to be merged, we make further modifications to the PRCP. At the beginning of the segmentation, just after graph construction, each vertex is treated as a subgraph and there is a boundary between each pair of connected voxels. In order to merge small adjacent subgraphs into a larger homogeneous subgraph, a criterion is needed to judge whether neighboring subgraphs should be merged. If two adjacent subgraphs

cannot be merged, their boundary is valid; otherwise, their boundary is invalid. To ensure accuracy and make the results more convincing, two PRCPs are employed for two different conditions, differentiated by a pre-defined threshold N_v. When the number of voxels in either subgraph is less than N_v, indicating a small scale of voxels, a PRCP similar to [28] is used to determine whether the two neighboring regions should be merged. Conversely, when the numbers of voxels in both adjacent subgraphs are larger than N_v, indicating a large scale of voxels to be compared, we evaluate the intensity distributions of the two subgraphs and use the hypothesis testing result as the criterion for merging. In [27,28], the key point of the PRCP is the comparison between the between-component difference and the within-component difference, which determines whether or not the two components should be merged. To a certain extent, this gives the predicate adaptive capability when segmenting images with a large range of intensities and complex textures. Given a graph G = (V, E), the predicate D(C₁, C₂), where C₁ and C₂ represent two different components, is defined as follows:

$$D(C_1, C_2) = \begin{cases} \text{TRUE} & \text{if } Dif(C_1, C_2) > MInt(C_1, C_2) \\ \text{FALSE} & \text{otherwise} \end{cases} \qquad (6)$$

where MInt(C₁, C₂) is the minimum internal component difference and Dif(C₁, C₂) is the difference between the two components, C₁, C₂ ⊆ V. Whether a boundary connecting two components (C₁, C₂) should be eliminated depends on the PRCP. If Dif(C₁, C₂) is larger than MInt(C₁, C₂), the predicate D(C₁, C₂) is TRUE, the boundary is valid, and the two regions should not be merged; otherwise they should be merged into a homogeneous region.

2.3.1. Mergence of subregions containing small-scale voxels

If the subgraphs contain small numbers of voxels, it is difficult to estimate their real intensity distributions, so a simple PRCP is designed to let the merging continue. In a graph G = (V, E), we first define the internal difference of a component C ⊆ V to be the intensity standard deviation of C, expressed as

$$Int(C) = \sigma(C) \qquad (7)$$

where σ(C) represents the intensity standard deviation of C. The difference between two components C₁, C₂ ⊆ V is then defined to be the dissimilarity of their respective average intensity values, considering local statistical information, i.e.

$$Dif(C_1, C_2) = | \mu(C_1) - \mu(C_2) | \qquad (8)$$

where μ(C) stands for the average intensity of the component C. If there is no edge between the two components C₁ and C₂ (in other words, if they are not adjacent regions), their difference is Dif(C₁, C₂) = ∞. Obviously, using the mean values of the voxels significantly suppresses the effect of noise in a 3D image. MInt(C₁, C₂), C₁, C₂ ⊆ V, is formulated as

$$MInt(C_1, C_2) = \min\bigl(Int(C_1) + \tau(\delta, C_1),\; Int(C_2) + \tau(\delta, C_2)\bigr) \qquad (9)$$

where τ(δ, C) is a threshold function, expressed as

$$\tau(\delta, C) = \frac{k}{\Gamma(\delta) \cdot |C| + 1} \qquad (10)$$

$$\Gamma(\delta) = 1 - e^{-\alpha \cdot \delta}, \qquad \delta = \sigma_C^2 \qquad (11)$$

where |C| denotes the size of C, δ is the smoothness of C, and σ_C² is the intensity variance of C. Note that Γ(δ) is a threshold adjustment factor, and the parameters k and α are positive constants. The threshold function τ(δ, C) is influenced by k, α, |C|, and Γ(δ). For a fixed k, the lower Γ(δ) is, the higher τ(δ, C) is, and vice versa. A higher τ(δ, C) leads to a higher MInt(C₁, C₂), weakening the evidence for a boundary between the two neighboring components; thus two connected components containing significantly similar textures are likely to be merged into one homogeneous component. From the above formulas we conclude that the threshold function τ(δ, C) controls when to invalidate the boundary between two components: at that moment, their minimum internal difference must be no smaller than the difference between them. In other words, if the textures of the two components are homogeneous and their sizes are very small, the threshold τ(δ, C) in Eq. (10) removes the boundary in all probability. Therefore, α and k have a significant influence on the segmentation performance. Note that the threshold function employed in this study is different from that used in [28]. This is due to the non-linear behavior of the ratio of variance to mean in 3D US images [37], indicating that this ratio is not an appropriate factor for examining the smoothness of 3D image regions.
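The small-scale predicate can be sketched as follows; the dictionary of per-component statistics is a hypothetical bookkeeping format, and δ is taken to be the component's intensity variance per Eq. (11). Returning True means the boundary is invalid, i.e. the two components merge.

```python
import math

def tau(var, size, k, alpha):
    """Threshold function of Eqs. (10)-(11):
    tau = k / (Gamma(delta) * |C| + 1), Gamma(delta) = 1 - exp(-alpha*delta),
    with delta taken as the intensity variance of the component."""
    gamma = 1.0 - math.exp(-alpha * var)
    return k / (gamma * size + 1.0)

def should_merge_small(c1, c2, k, alpha):
    """Small-scale PRCP of Eqs. (6)-(9); c1, c2 hold per-component
    statistics {'mean', 'std', 'var', 'size'} (a format we assume)."""
    dif = abs(c1['mean'] - c2['mean'])                               # Eq. (8)
    mint = min(c1['std'] + tau(c1['var'], c1['size'], k, alpha),
               c2['std'] + tau(c2['var'], c2['size'], k, alpha))     # Eq. (9)
    return dif <= mint    # D(C1, C2) = FALSE -> boundary invalid -> merge
```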


2.3.2. Mergence of subregions with large-scale voxels

If both subgraphs to be merged contain a large number of voxels, it is inappropriate to judge the merge only with respect to means and standard deviations; it is more reasonable to judge whether their statistical distributions are identical. Hence, we use hypothesis testing to accept or reject the merge. Hypothesis testing is a method for determining whether a statistical hypothesis is true. The most reliable way to make such a judgment is to inspect all the data; however, this is not always practical, especially for 3D volume data, which often contain a very large number of voxels, making the computation quite time-consuming. Statisticians therefore typically inspect random samples drawn from the data: if the sampled data are inconsistent with the statistical hypothesis, the hypothesis is rejected. In this study, when the numbers of voxels in both subgraphs are larger than N_v, we use hypothesis testing as the criterion to determine whether the two neighboring regions should be merged. In order to save computational resources, we inspect samples (i.e. voxel intensities) randomly selected from the two neighboring subgraphs (denoted ξ and η) to estimate the overall distributions. To simplify the problem, we assume that the test statistic follows a Normal distribution [31]. If the two subregions belong to a homogeneous region, their statistical distributions are identical, and the dissimilarity of their intensity means and variances should not be significant. First, we test the variances of the two subregions. Under a given significance level α, the null hypothesis H₀ and alternative hypothesis H₁ for the two subregions are expressed as:

$$H_0: \sigma_1^2 = \sigma_2^2; \qquad H_1: \sigma_1^2 \neq \sigma_2^2 \qquad (12)$$

where σ₁² and σ₂² refer to the variances of ξ and η, respectively. Because the means and variances of ξ and η are unknown, the decision rule for rejecting H₀ is defined by

$$\frac{s_{\max}^{*2}}{s_{\min}^{*2}} \geq F_{1-\alpha/2}(n_1 - 1,\, n_2 - 1) \qquad (13)$$

where $s_{\max}^{*2} = \max(s_1^{*2}, s_2^{*2})$, $s_{\min}^{*2} = \min(s_1^{*2}, s_2^{*2})$, and $s_1^{*2}$, $s_2^{*2}$ represent the variances of the samples selected from ξ and η, respectively; n₁ and n₂ refer to the lengths of the two sample series. If we reject H₀, the two components cannot be merged. Otherwise, we go on to test the difference of their means to judge whether they belong to a homogeneous region.

Likewise, under a given significance level α, the null hypothesis H₀ and alternative hypothesis H₁ for the means of ξ and η are expressed as:

$$H_0: \mu_1 - \mu_2 \leq \delta; \qquad H_1: \mu_1 - \mu_2 > \delta \qquad (14)$$

where μ₁, μ₂ denote the mean values of ξ and η, and δ is a parameter used as a threshold. If the value of δ is chosen adequately small, it is reasonable to merge the two subgraphs under H₀, since their means are then almost the same, indicating that they may belong to the same homogeneous region. If H₀ is rejected, the two neighboring regions should not be merged. Since the variances of ξ and η are unknown, the decision rule for rejecting H₀ is defined by

$$\frac{(\bar{x} - \bar{y}) - \delta}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} \geq \mu_{1-\alpha} \qquad (15)$$

where $\bar{x}$, $\bar{y}$ are the means of the two data sequences randomly selected from ξ and η, respectively, $s_1^2$ and $s_2^2$ denote their variances, and $\mu_{1-\alpha}$ is the (1−α) quantile of the standard Normal distribution. We empirically assign the parameters in our experiments as follows: α = 0.01, δ = 2, and n₁ = n₂ = 1000 for all data; N_v = 1000 for the resolution phantom, and N_v = 5000 for the fetus phantom and human fingers. In Eqs. (14) and (15), the value of the test statistic yields the p-value, i.e. the probability of the observation under H₀. If the p-value is less than the significance level α, there is sufficient evidence to reject the null hypothesis H₀, meaning that the two neighboring regions should not be merged. Conversely, if the p-value is larger than the significance level α, we accept H₀ and the two neighboring regions should be merged.

where x , y are the means of the two data sequences randomly selected from ξ and η, respectively, s12 and s22 denote their variances. We empirically assign the parameters in our experiments as follows: α =0.01, and δ = 2, n1 = n2 =1000 for all data, N v = 1000 for the resolution phantom, and N v =5000 for fetus phantom and human fingers. In Eqs. (14) and (15), the value of the test statistic is used to obtain the p-value which is resulting from the statistics for calculation of a probability of the observation under H 0 . If the p-value is less than the significance level α, there is sufficient evidence that we can reject the null hypothesis H 0 . That means that the two neighboring regions should not be merged. In the opposite case, if the p-value is larger than the significance level α, we accept H 0 and the two neighboring regions should be merged. 2.4. Region mergence When the distributions of the intensity in any pair of adjacent subgraphs are statistically identical to each other according to the judgment criterion mentioned above, we merge them into one subgraph. It is realized by obtaining the minimum spanning tree (MST) of the subgraph. The MSTs are achieved by Kruskal’s algorithm [32] whose algorithm complexity is O(eloge), where e denotes the number of edges. It is worth noting that the approach which merges small 10

subgraphs into a larger graph is different from the widely-applied graph aggregation. Graph aggregation is a new graph summarization technique for representing a large scale graph by a concise graph that can capture the underlying structural and attributive information of the original large graph. Therefore, it helps users extract and understand the information encoded in the large graphs by mere visual inspection [33, 34].
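A sketch of the merging machinery follows: Kruskal-style scanning of the sorted edges over a disjoint-set forest, with the PRCP supplied as a callback so that either predicate of Section 2.3 can be plugged in. The structure is our reading of the algorithm, not code from the paper.

```python
class DisjointSet:
    """Union-find over voxel vertices; each final root is one MST/subregion."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def segment(edges, n_vertices, predicate):
    """Scan edges in nondecreasing weight order and merge the two
    components whenever `predicate` invalidates their boundary."""
    ds = DisjointSet(n_vertices)
    for w, i, j in sorted(edges):
        a, b = ds.find(i), ds.find(j)
        if a != b and predicate(a, b):
            ds.union(a, b)
    return ds
```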

Fig. 2. The flow chart for the graph-based segmentation algorithm.

2.5. Summary of the proposed algorithm

Fig. 2 illustrates the flow chart of the proposed graph-based method. The procedure is summarized as follows (a sketch of the dispatching predicate used in step 4 is given after the list).

1. Apply the bilateral filtering model to reduce the speckle in the 3D image.
2. Construct the graph G for the 3D image. Set all edges valid and treat each vertex as a separate component.
3. Sort the edges E in nondecreasing order of their weights. Let k = 0.
4. Pick the k-th edge in the sorted list. The two subgraphs connected by this edge are merged if their boundary should be removed according to the proposed PRCP.
5. Let k = k + 1. Repeat step 4 until all edges of the image have been traversed.

Having traversed all edges in the 3D graph, each tree in the final forest is an MST that corresponds to a subregion in the 3D US image.
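As noted before the list, step 4 dispatches between the two predicates of Section 2.3 by the threshold N_v. A sketch, with stats_of and samples_of as hypothetical accessors into per-component bookkeeping maintained during the merging:

```python
def make_prcp(stats_of, samples_of, k, alpha, n_v=1000):
    """Return a predicate for `segment` above: the hypothesis tests of
    Section 2.3.2 when both components exceed n_v voxels, otherwise the
    small-scale PRCP of Section 2.3.1."""
    def predicate(a, b):
        ca, cb = stats_of(a), stats_of(b)
        if ca['size'] > n_v and cb['size'] > n_v:
            return distributions_match(samples_of(a), samples_of(b))
        return should_merge_small(ca, cb, k, alpha)
    return predicate
```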

2.6. Experimental methods

This work was approved by the Human Subject Ethics Committee of South China University of Technology. Our method was developed in VC++ and incorporated into a 3D US imaging system [38], which is responsible for raw data collection and volume reconstruction. Based on this system, 3D US images were reconstructed from different objects, i.e. a fetus phantom (Model 068, CIRS, Inc., Norfolk, VA), a US resolution phantom (Model 044, CIRS, Inc., Norfolk, VA), and real human fingers. For each object, we conducted US scanning for 10 runs, yielding 10 raw data sets and 10 corresponding 3D images. Eventually, 10 volume data sets of the fetus phantom, 10 of the US resolution phantom and 10 of the human fingers were obtained. The sizes of the 3D images for the resolution phantom, fetus phantom and human fingers are 36×30×80, 60×55×238 and 148×87×172, respectively. In order to give a sensitivity analysis of the proposed method, we empirically set different values of k and α for the resolution phantom, fetus phantom, and human fingers in the experiments; Table 1 shows the three groups of values assigned to k and α. In order to demonstrate the merit of the proposed technique, we also implemented a 3D deformable model based on the Snake [9] (called 3D Snake in this paper) and the Fuzzy C means (FCM) clustering method [35] and applied them to the same 3D image data sets for comparison. Based on the 30 data sets, we measured the computational time.

Table 1. The three sets of parameters for k and α.

| Data type | k | α |
| --- | --- | --- |
| resolution phantom | 2000 | 0.05 |
| fetus phantom | 4000 | 0.15 |
| human fingers | 2000 | 0.2 |

For quantitative evaluation of segmentation accuracy, the False Negative volume fraction (FNVF), False Positive volume fraction (FPVF), and True Positive volume fraction (TPVF) are measured on the phantom data sets. The FNVF indicates the fraction of tissue defined in the 'true' region that is missed by a segmentation method. The FPVF denotes the amount of tissue falsely identified by a segmentation method, as a fraction of the total amount of tissue in the 'true' region. The TPVF indicates the total fraction of tissue in the 'true' region with which the segmented region overlaps. Therefore, a larger TPVF together with a smaller FPVF and FNVF indicates better segmentation performance. In this study, the FNVF, FPVF, and TPVF with respect to a segmented OOI are computed on three orthogonal slices (the longitudinal section (LS), the cross section (CS) and the side section (SS)) extracted from the segmented 3D US image. The three metrics are defined by

$$TPVF = \frac{|S_n \cap S_m|}{|S_m|}, \qquad FNVF = \frac{|S_n \cup S_m| - |S_n|}{|S_m|}, \qquad FPVF = \frac{|S_n \cup S_m| - |S_m|}{|S_m|} \qquad (23)$$

where S_m is the real area on one of the orthogonal slices extracted from the segmented OOI, and S_n denotes the area extracted using a segmentation method. The real area is obtained by averaging the delineations of 3 experienced operators. Fig. 3 shows the areas corresponding to the TPVF, FNVF and FPVF, respectively.

Fig. 3. Areas corresponding to TPVF, FNVF and FPVF, respectively.
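On a single slice, the three fractions of Eq. (23) reduce to a few set operations. A sketch, assuming the segmented area S_n and the expert-delineated area S_m are given as boolean NumPy masks of equal shape:

```python
import numpy as np

def volume_fractions(seg, truth):
    """TPVF, FNVF, FPVF (in percent) per Eq. (23);
    seg is S_n, truth is S_m, both boolean masks of one slice."""
    sm = truth.sum()
    tpvf = np.logical_and(seg, truth).sum() / sm    # overlap with S_m
    fnvf = np.logical_and(~seg, truth).sum() / sm   # missed part of S_m
    fpvf = np.logical_and(seg, ~truth).sum() / sm   # falsely added area
    return 100.0 * tpvf, 100.0 * fnvf, 100.0 * fpvf
```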


3. Experimental Results

3.1. Qualitative analysis

For brevity, we only present three sets of segmentation result images in this paper. Figs. 4-6 show the segmentation results on the three types of 3D US image (the resolution phantom, fetus phantom and human fingers, respectively). In the proposed method, the 6-connected, 12-connected, and 26-connected neighborhoods are used to obtain the segmentation results and the OOIs. Figs. 4a, 5a and 6a show the source images. After applying the bilateral filtering model, the processed images, in which most of the speckle has been removed, can be seen in Figs. 4b, 5b and 6b. Based on the preprocessed images, the OOIs extracted using the graph-based method with the 6-connected neighborhood are illustrated in Figs. 4d, 5d and 6d, those with the 12-connected neighborhood in Figs. 4f, 5f and 6f, those with the 26-connected neighborhood in Figs. 4h, 5h and 6h, those using the FCM (C=2) in Figs. 4j, 5j and 6j, those using the FCM (C=3) in Figs. 4l, 5l and 6l, those using the FCM (C=5) in Figs. 4n, 5n and 6n, and those using the 3D Snake in Figs. 4o, 5o and 6o. From a qualitative view of the results, it can be concluded that the proposed method outperforms the FCM and the 3D Snake: the 3D OOIs extracted using our method look smoother than those extracted using the FCM and Snake. Furthermore, we illustrate the orthogonal slices of the segmented OOIs produced by our method, the 3D Snake and the FCM clustering method in Figs. 7-9. It is obvious that the proposed method generates more accurate regions of interest (ROI) than the other two methods. The boundaries of the ROI extracted using our method are more complete, without the significant outliers and discontinuities seen with the FCM and Snake methods. We can also note that the 26-connected and 12-connected neighborhoods outperform the 6-connected neighborhood. This is easily explained: more neighboring connections obviously lead to more complete 3D contours. However, the computational cost increases, as more edges have to be considered when generating the MSTs.

3.2. Quantitative analysis

Tables 2-4 show the mean and standard deviation of the running time of the different segmentation methods on the resolution phantom, the fetus phantom and the human fingers, respectively. Overall, our method is the most computationally expensive due to the complicated region merging (i.e. the computation of MSTs). However, the computational efficiency of the proposed method is better than that of the 3D RGB method [39], in which an average computation time of 5.9373±0.0398 s is required to segment the 3D images of the resolution phantom using the 6-connected neighborhood. It is worth noting that the computational time increases hugely as the size of the original object becomes larger. This is because the generation of MSTs, whose complexity is O(e log e), takes most of the computational cost in the region merging; according to the graph construction, the number of edges is significantly larger than the number of vertices, hence the much longer computational time.

Table 2. The computation time (sec., mean ± SD) of different methods for the resolution phantom.

| Method | Time (s) |
| --- | --- |
| Graph-based method (6) | 3.062 ± 0.032 |
| Graph-based method (12) | 5.178 ± 0.065 |
| Graph-based method (26) | 5.911 ± 0.044 |
| 3D Snake | 7.264 ± 3.249 |
| FCM (C=2) | 0.0158 ± 0.000016 |
| FCM (C=3) | 0.0156 ± 0.00030 |
| FCM (C=5) | 0.0159 ± 0.00009 |

Table 3. The computation time (sec., mean ± SD) of different methods for the fetus phantom.

| Method | Time (s) |
| --- | --- |
| Graph-based method (6) | 736.719 ± 11.162 |
| Graph-based method (12) | 876.083 ± 54.912 |
| Graph-based method (26) | 898.927 ± 2.727 |
| 3D Snake | 210.515 ± 2.7 |
| FCM (C=2) | 0.0203 ± 0.002219 |
| FCM (C=3) | 0.0303 ± 0.002495 |
| FCM (C=5) | 0.0498 ± 0.002626 |

Tables 5-6 show the FNVF, FPVF, and TPVF measured on the 3 orthogonal slices extracted from the segmented OOIs generated by the different segmentation methods applied to the resolution phantom and the fetus phantom, respectively. It is clear that our method outperforms the 3D Snake and FCM methods in most metrics. Although the 3D Snake and FCM methods perform better in some measurements (e.g. the FPVF in Table 6), they are more sensitive to noise, such that their performance in other measurements is significantly degraded; hence they are not as robust as ours. Though the computational efficiency of the FCM and the 3D Snake is superior to that of our method, our method achieves the most accurate results, indicating a general improvement of segmentation performance.

4. Discussion and Conclusions

In this paper, a graph-based method with a newly designed 3D graph structure is proposed for segmentation of 3D US images. Different neighborhoods for construction of the graph are investigated. Compared to the 3D RGB method [39], a new predicate for determining the merging of adjacent subregions is proposed by taking into account the scale of voxels: hypothesis testing is performed when the number of voxels in the subgraphs is large. The proposed method is compared with two widely used segmentation methods, the 3D Snake and the FCM. According to the results, our method is capable of extracting the OOI of 3D US images with improved overall accuracy.

Table 4. The computation time (sec., mean ± SD) of different methods for the human fingers.

| Method | Time (s) |
| --- | --- |
| Graph-based method (6) | 7138.281 ± 6.834 |
| Graph-based method (12) | 7320.213 ± 3.452 |
| Graph-based method (26) | 7405.452 ± 4.612 |
| 3D Snake | 1853.145 ± 14.584 |
| FCM (C=2) | 0.0422 ± 0.00290 |
| FCM (C=3) | 0.0547 ± 0.00495 |
| FCM (C=5) | 0.0739 ± 0.00546 |


Table 5. The segmentation performance (in percentage, mean ± SD) of the three segmentation methods for the 3 orthogonal slices of the segmented OOI for the resolution phantom.

| Methods | TPVF LS | TPVF CS | TPVF SS | FNVF LS | FNVF CS | FNVF SS | FPVF LS | FPVF CS | FPVF SS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Graph-based Method (6) | 88.843 ± 1.475 | 88.000 ± 2.213 | 96.212 ± 0.855 | 11.157 ± 1.475 | 12.000 ± 2.213 | 3.788 ± 0.855 | 8.591 ± 3.663 | 9.434 ± 2.898 | 7.598 ± 2.110 |
| Graph-based Method (12) | 88.963 ± 0.743 | 85.581 ± 2.768 | 95.740 ± 0.400 | 11.837 ± 0.743 | 14.419 ± 2.768 | 4.260 ± 0.400 | 6.285 ± 0.581 | 6.876 ± 1.907 | 7.398 ± 1.243 |
| Graph-based Method (26) | 90.118 ± 1.375 | 87.233 ± 1.543 | 94.365 ± 0.975 | 9.882 ± 1.375 | 12.767 ± 1.543 | 5.635 ± 0.975 | 6.850 ± 1.931 | 7.477 ± 2.180 | 6.162 ± 1.366 |
| 3D Snake | 89.663 ± 1.375 | 96.663 ± 2.288 | 79.111 ± 0.324 | 10.337 ± 1.857 | 3.337 ± 2.288 | 20.889 ± 1.857 | 29.487 ± 2.836 | 35.940 ± 6.003 | 23.198 ± 2.598 |
| FCM (C=2) | 87.199 ± 0.907 | 85.961 ± 1.478 | 95.101 ± 0.705 | 12.801 ± 0.907 | 14.039 ± 1.478 | 4.899 ± 0.705 | 9.421 ± 1.911 | 8.274 ± 1.635 | 10.340 ± 4.010 |
| FCM (C=3) | 85.819 ± 1.453 | 84.526 ± 1.332 | 92.861 ± 0.422 | 14.181 ± 1.159 | 15.474 ± 1.332 | 7.139 ± 0.422 | 19.910 ± 1.035 | 8.824 ± 1.453 | 12.232 ± 1.444 |
| FCM (C=5) | 86.945 ± 1.647 | 82.036 ± 1.150 | 91.590 ± 0.863 | 13.055 ± 1.647 | 17.964 ± 1.150 | 8.410 ± 0.863 | 15.322 ± 0.539 | 7.829 ± 0.974 | 15.854 ± 0.615 |

Table 6. The segmentation performance (in percentage, mean ± SD) of the three segmentation methods for the 3 orthogonal slices of the segmented OOI for the fetus phantom.

| Methods | TPVF LS | TPVF CS | TPVF SS | FNVF LS | FNVF CS | FNVF SS | FPVF LS | FPVF CS | FPVF SS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Graph-based Method (6) | 80.248 ± 0.852 | 90.827 ± 1.467 | 73.318 ± 4.365 | 19.752 ± 0.852 | 9.173 ± 1.467 | 26.682 ± 4.365 | 2.099 ± 0.614 | 3.950 ± 2.476 | 1.523 ± 1.230 |
| Graph-based Method (12) | 90.765 ± 1.060 | 91.507 ± 0.982 | 88.457 ± 1.975 | 9.235 ± 1.060 | 8.493 ± 0.982 | 11.543 ± 1.975 | 2.298 ± 0.624 | 2.935 ± 2.167 | 5.147 ± 2.232 |
| Graph-based Method (26) | 90.777 ± 1.048 | 90.998 ± 1.576 | 88.929 ± 2.264 | 9.223 ± 1.048 | 9.002 ± 1.576 | 11.071 ± 2.264 | 1.732 ± 0.581 | 3.340 ± 2.386 | 1.902 ± 0.825 |
| 3D Snake | 86.159 ± 0.220 | 74.242 ± 2.917 | 99.061 ± 1.807 | 13.841 ± 0.220 | 25.758 ± 2.917 | 0.939 ± 1.807 | 38.272 ± 3.225 | 20.314 ± 2.615 | 79.013 ± 21.806 |
| FCM (C=2) | 84.985 ± 1.100 | 89.287 ± 1.989 | 87.646 ± 2.215 | 15.015 ± 1.100 | 10.713 ± 1.989 | 12.354 ± 2.215 | 2.996 ± 0.627 | 3.982 ± 2.250 | 7.954 ± 2.591 |
| FCM (C=3) | 62.708 ± 1.040 | 68.151 ± 1.275 | 77.176 ± 2.583 | 37.292 ± 1.040 | 31.849 ± 1.275 | 22.824 ± 0.177 | 0.288 ± 1.275 | 1.522 ± 2.231 | 0.989 ± 1.339 |
| FCM (C=5) | 45.503 ± 0.998 | 59.653 ± 1.408 | 64.863 ± 1.865 | 54.497 ± 0.998 | 40.347 ± 1.408 | 35.137 ± 1.865 | 0.275 ± 0.243 | 1.149 ± 2.026 | 0.377 ± 0.783 |

However, there are several limitations to our method. First, it is more computationally expensive due to the complex graph structure and the time-consuming process of generating the MSTs, and the complexity of the proposed algorithm can hardly be reduced. Nevertheless, it is expected that parallel computing technology would help accelerate the computation, and the design of a parallelized algorithmic framework will be one of our future studies. Furthermore, the values of α, k, and N_v are empirically set in this study, which may be inconvenient in practice; it is also worth noting that the selection of parameters has a significant influence on the segmentation results. In future work, we will therefore try to make use of optimization algorithms (e.g. the genetic algorithm) to search for the optimal parameter values and thereby address under-segmentation and over-segmentation. In summary, the experimental results demonstrate that the graph-based segmentation method outperforms the traditional Snake-based and FCM clustering methods, indicating improved performance for 3D analysis in potential clinical applications. With further efforts to parallelize the proposed method, it can be expected that it will be applied even more successfully to the segmentation of various 3D medical images.


Acknowledgements This work is supported by National Natural Science Funds of China (Nos. 61125106, 61372007), Natural Science Funds of Guangdong Province (No. S2012010009885), the Fundamental Research Funds for the Central Universities (No. 2014ZG0038), Projects of innovative science and technology, Department of Education, Guangdong Province (No. 2013KJCX0012), and Shaanxi Key Innovation Team of Science and Technology (Grant No.: 2012KCT-04).

References

[1] D.R. Chen, R.F. Chang, W.J. Wu, W.K. Moon, W.L. Wu, 3-D breast ultrasound segmentation using active contour model, Ultrasound in Medicine & Biology 29 (7) (2003) 1017-1026.
[2] J. Zhang, C.H. Yan, C.K. Chui, S.H. Ong, Fast segmentation of bone in CT images using 3D adaptive thresholding, Computers in Biology and Medicine 40 (2) (2010) 231-236.
[3] P.S. Umesh Adiga, B.B. Chaudhuri, An effective method based on watershed and rule-based merging for segmentation of 3-D histo-pathological images, Pattern Recognition 34 (7) (2001) 1449-1458.
[4] M. Lascu, D. Lascu, A new morphological image segmentation with application in 3D echographic images, WSEAS Transactions on Electronics 5 (3) (2008) 72-82.
[5] J. Anquez, E.D. Angelini, I. Bloch, Segmentation of fetal 3D ultrasound based on statistical prior and deformable model, in: The 5th IEEE International Symposium on Biomedical Imaging (ISBI), 2008, pp. 17-20.
[6] X.J. Zhu, P.F. Zhang, J.H. Shao, Y.Z. Cheng, Y. Zhang, J. Bai, A snake-based method for segmentation of intravascular ultrasound images and its in vivo validation, Ultrasonics 51 (2) (2011) 181-189.
[7] J. Yang, J.S. Duncan, 3D image segmentation of deformable objects with joint shape-intensity prior models using level sets, Medical Image Analysis 8 (3) (2004) 285-294.
[8] M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision 1 (4) (1988) 321-331.
[9] R.F. Chang, W.J. Wu, C.C. Tseng, D.R. Chen, W.K. Moon, 3-D snake for US in margin evaluation for malignant breast tumor excision using mammotome, IEEE Transactions on Information Technology in Biomedicine 7 (3) (2003) 197-201.
[10] A.K. Jumaat, W.E.Z.W.A. Rahman, A. Ibrahim, R. Mahmud, Comparison of Balloon Snake and GVF Snake in segmenting masses from breast ultrasound images, in: The 2010 IEEE Second International Conference on Computer Research and Development, 2010, pp. 505-509.
[11] R.O. Duda, P.E. Hart, D.G. Stork, Pattern Classification, Wiley-Interscience, 2000.
[12] K.S. Chuang, H.L. Tzeng, S. Chen, T.J. Chen, Fuzzy c-means clustering with spatial information for image segmentation, Computerized Medical Imaging and Graphics 30 (1) (2006) 9-15.
[13] M. Hanmandlu, O.P. Verma, S. Susan, V.K. Madasu, Color segmentation by fuzzy co-clustering of chrominance color features, Neurocomputing 120 (2013) 235-249.
[14] D. Boukerroui, O. Basset, A. Hernandez, N. Guerin, G. Gimenez, Texture based adaptive clustering algorithm for 3D breast lesion segmentation, in: Proceedings of the IEEE Ultrasound Symposium, vol. 2, 1997, pp. 1389-1392.
[15] Y. Gao, M. Wang, Z.J. Zha, J.L. Shen, X.L. Li, X.D. Wu, Visual-textual joint relevance learning for tag-based social image search, IEEE Transactions on Image Processing 22 (1) (2013) 363-376.
[16] Y. Gao, M. Wang, R.R. Ji, X.D. Wu, Q.H. Dai, 3D object retrieval with Hausdorff distance learning, IEEE Transactions on Industrial Electronics 61 (4) (2013) 2088-2098.
[17] Y. Gao, M. Wang, Z.J. Zha, Q. Tian, Q.H. Dai, N.Y. Zhang, Less is more: efficient 3-D object retrieval with query view selection, IEEE Transactions on Multimedia 13 (5) (2011) 1007-1018.
[18] Y. Gao, M. Wang, R.R. Ji, Z.J. Zha, J.L. Shen, K-partite graph reinforcement and its application in multimedia information retrieval, Information Sciences 194 (2012) 224-239.
[19] Y. Gao, M. Wang, D.C. Tao, R.R. Ji, Q.H. Dai, 3-D object retrieval and recognition with hypergraph analysis, IEEE Transactions on Image Processing 21 (9) (2012) 4290-4303.
[20] A. Pérez-Suárez, J.F. Martínez-Trinidad, J.A. Carrasco-Ochoa, J.E. Medina-Pagola, OClustR: A new graph-based algorithm for overlapping clustering, Neurocomputing 121 (2013) 234-247.
[21] J. Liu, M.J. Li, Q.S. Liu, H.Q. Lu, S.D. Ma, Image annotation via graph learning, Pattern Recognition 42 (2) (2009) 218-228.
[22] R.K. Ando, T. Zhang, Learning on graph with Laplacian regularization, Advances in Neural Information Processing Systems (2006) 25-32.
[23] M. Wang, X.S. Hua, J.H. Tang, R.C. Hong, Beyond distance measurement: constructing neighborhood similarity for video annotation, IEEE Transactions on Multimedia 11 (3) (2009) 465-476.
[24] M. Wang, X.S. Hua, R.C. Hong, J.H. Tang, G.J. Qi, Y. Song, Unified video annotation via multigraph learning, IEEE Transactions on Circuits and Systems for Video Technology 19 (5) (2009) 733-746.
[25] M. Wang, H. Li, D.C. Tao, K. Lu, X.D. Wu, Multimodal graph-based reranking for web image search, IEEE Transactions on Image Processing 21 (11) (2012) 4649-4661.
[26] C.T. Zahn, Graph-theoretical methods for detecting and describing gestalt clusters, IEEE Transactions on Computers 100 (1) (1971) 68-86.
[27] P.F. Felzenszwalb, D.P. Huttenlocher, Efficient graph-based image segmentation, International Journal of Computer Vision 59 (2) (2004) 167-181.
[28] Q.H. Huang, S.Y. Lee, L.Z. Lu, M.H. Lu, L.W. Jin, A.H. Li, A robust graph-based segmentation method for breast tumors in ultrasound images, Ultrasonics 52 (2) (2011) 266-275.
[29] Q.H. Huang, X. B, Y.G. Li, L.W. Jin, X.L. Li, Optimized graph-based segmentation for ultrasound images, Neurocomputing (2013). (In press)
[30] C. Tomasi, R. Manduchi, Bilateral filtering for gray and color images, in: Proceedings of the Sixth International Conference on Computer Vision, 1998, pp. 839-846.
[31] V.K. Rohatgi, An Introduction to Probability Theory and Mathematical Statistics, John Wiley & Sons, 1976.
[32] J.B. Kruskal, On the shortest spanning subtree of a graph and the traveling salesman problem, in: Proceedings of the American Mathematical Society, vol. 7, 1956, pp. 48-50.
[33] Y.Y. Tian, R.A. Hankins, J.M. Patel, Efficient aggregation for graph summarization, in: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, 2008, pp. 567-580.
[34] A.Z. Broder, R. Lempel, F. Maghoul, J. Pedersen, Efficient PageRank approximation via graph aggregation, Information Retrieval 9 (2) (2006) 123-138.
[35] H. Zhou, A.H. Sadka, M.E. Celebi, Anisotropic mean shift based Fuzzy C-Means segmentation of dermoscopy images, IEEE Journal of Selected Topics in Signal Processing 3 (1) (2009) 26-34.
[36] Q.H. Huang, B.S. Chen, J.D. Wang, T. Mei, Personalized video recommendation through graph propagation, ACM Transactions on Multimedia Computing, Communications, and Applications. (In press)
[37] Q.H. Huang, Y.P. Zheng, M.H. Lu, T.F. Wang, S.P. Chen, A new adaptive interpolation algorithm for 3D ultrasound imaging with speckle reduction and edge preservation, Computerized Medical Imaging and Graphics 33 (2) (2009) 100-110.
[38] Q.H. Huang, Z. Yang, W. Hu, L.W. Jin, G. Wei, X.L. Li, Linear tracking for 3-D medical ultrasound imaging, IEEE Transactions on Cybernetics 43 (6) (2013) 1747-1754.
[39] L.F. Zheng, Q.H. Huang, A graph-based segmentation method for 3D ultrasound images, in: Proceedings of the 2012 Control Conference (CCC), 2012, pp. 4001-4005.
[40] R.J. Ji, Y. Gao, R.C. Hong, Q. Liu, D.C. Tao, X.L. Li, Spectral-spatial constraint hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing 52 (3) (2014) 1811-1824.


Fig. 4. 3D segmentation results of the resolution phantom. (a) Source image; (b) filtered image; (c) our method with the 6-connected neighborhood; (d) the OOI of (c); (e) our method with the 12-connected neighborhood; (f) the OOI of (e); (g) our method with the 26-connected neighborhood; (h) the OOI of (g); (i) result of the FCM (m=2, C=2); (j) the OOI of (i); (k) result of the FCM (m=2, C=3); (l) the OOI of (k); (m) result of the FCM (m=2, C=5); (n) the OOI of (m); (o) result of the 3D Snake.


Fig. 5. 3D segmentation results of the fetus phantom. (a) Source image; (b) filtered image; (c) our method with the 6-connected neighborhood; (d) the OOI of (c); (e) our method with the 12-connected neighborhood; (f) the OOI of (e); (g) our method with the 26-connected neighborhood; (h) the OOI of (g); (i) result of the FCM (m=2, C=2); (j) the OOI of (i); (k) result of the FCM (m=2, C=3); (l) the OOI of (k); (m) result of the FCM (m=2, C=5); (n) the OOI of (m); (o) result of the 3D Snake.


Fig. 6. 3D segmentation results of the human fingers. (a) Source image; (b) filtered image; (c) our method with the 6-connected neighborhood; (d) the OOI of (c); (e) our method with the 12-connected neighborhood; (f) the OOI of (e); (g) our method with the 26-connected neighborhood; (h) the OOI of (g); (i) result of the FCM (m=2, C=2); (j) the OOI of (i); (k) result of the FCM (m=2, C=3); (l) the OOI of (k); (m) result of the FCM (m=2, C=5); (n) the OOI of (m); (o) result of the 3D Snake.


Fig. 7. The orthogonal slices of the extracted OOI for the resolution phantom. From left to right, the columns are the 3 orthogonal slices of the segmented OOIs: the longitudinal section (LS), the cross section (CS) and the side section (SS). From top to bottom, the rows are: (a) our method with the 6-connected neighborhood; (b) our method with the 12-connected neighborhood; (c) our method with the 26-connected neighborhood; (d), (e) and (f) the results of FCM with C = 2, 3, and 5, respectively; (g) results of the 3D Snake.


Fig. 8. The orthogonal slices of the extracted OOI for the fetal phantom. From left to right, the columns are the 3 orthogonal slices of the segmented OOIs: the longitudinal section (LS), the cross section (CS) and the side section (SS). From top to bottom, the rows are: (a) our method with the 6-connected neighborhood; (b) our method with the 12-connected neighborhood; (c) our method with the 26-connected neighborhood; (d), (e) and (f) the results of FCM with C = 2, 3, and 5, respectively; (g) results of the 3D Snake.


Fig. 9. The orthogonal slices of the segmented OOI for human fingers. From left to right, the columns are 4 orthogonal slices of the segmented OOIs: the longitudinal section (LS), the cross section (CS), the side section of the left finger (SSL), and the side section of the right finger (SSR). From top to bottom, the rows are: (a) our method with the 6-connected neighborhood; (b) our method with the 12-connected neighborhood; (c) our method with the 26-connected neighborhood; (d), (e) and (f) the results of FCM with C = 2, 3, and 5, respectively; (g) results of the 3D Snake.


Huali Chang received the BE degree in information engineering at Chang’an University, China, in 2012. Her research interests include biomedical engineering and pattern recognition.

Zhenping Chen received the BE degree in information engineering at South China University of Technology, China, in 2012. His research interests include cloud computing and intelligent computation.

Qinghua Huang received the BE and ME degrees in automatic control and pattern recognition from the University of Science and Technology of China in 1999 and 2002, respectively, and the PhD degree in biomedical engineering from the Hong Kong Polytechnic University, Hong Kong, in 2007. Since 2008, he has been an associate professor in the School of Electronic and Information Engineering, South China University of Technology, China. His research interests include ultrasonic imaging, medical image analysis, bioinformatics, intelligent computation and its applications.

Jun Shi received the B.S. degree and the Ph.D. degree from the Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China, in 2000 and 2005, respectively. From 2002 to 2003, he was a Research Assistant in the Jockey Club Rehabilitation Engineering Center, Hong Kong Polytechnic University. In 2005, he joined the School of Communication and Information Engineering, Shanghai University, China, where he has been an Associate Professor since 2008. He was a Visiting Scholar at the University of North Carolina at Chapel Hill in 2011. His current research interests include medical imaging, signal processing, and pattern recognition.

Xuelong Li is a full professor with the Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, P.R. China.
