Shape collaborative representation with fuzzy energy based active contour model




Engineering Applications of Artificial Intelligence 56 (2016) 60–74



Van-Truong Pham a,b,c, Thi-Thao Tran b,d, Kuo-Kai Shyu b,*, Chen Lin a, Pa-Chun Wang e,f,g, Men-Tzung Lo a,*

a Department of Biomedical Sciences and Engineering, National Central University, Chung-li, Taiwan
b Department of Electrical Engineering, National Central University, Chung-li, Taiwan
c School of Engineering Education, Hanoi University of Science and Technology, Hanoi, Vietnam
d School of Electrical Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam
e Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
f School of Medicine, Taipei Medical University, Taipei, Taiwan
g School of Medicine, Fu Jen Catholic University, Taipei, Taiwan
* Corresponding authors.

Article history: Received 15 April 2016; Received in revised form 7 July 2016; Accepted 27 August 2016

Abstract

This paper presents a fuzzy energy-based active contour model for image segmentation with a shape prior based on collaborative representation of training shapes. A fuzzy energy functional consisting of a data term and a shape prior term is proposed. The data term relies on image information to guide the evolution of the contour, while the shape prior term constrains the evolving contour with respect to the prior shape in order to handle background clutter and object occlusion. In this study, the prior shape is represented as a combination of atoms in a shape dictionary based on collaborative representation. In particular, instead of using ℓ1-norm regularization as in sparse representation, we utilize an ℓ2-regularized linear regression scheme, which yields a closed-form solution for the coding coefficients and significantly reduces the computation time. The proposed model can therefore segment images with background clutter and object occlusion even when the training set includes shapes with large variation. In addition, the proposed shape collaborative representation model takes less computational time than the shape sparse representation approach. Experimental results on various images and comparisons with other models demonstrate the desired performance of the proposed model. © 2016 Elsevier Ltd. All rights reserved.

Keywords: Image segmentation; Level set method; Shape collaborative representation; Shape sparse representation; Shape prior; Fuzzy energy

1. Introduction

Image segmentation is a fundamental problem in computer vision and plays an important role in various areas of image processing, such as medical image analysis (Andreopoulos and Tsotsos, 2008; Mandal et al., 2014), object detection and classification, and video surveillance (Cremers et al., 2007). Various methods for image segmentation have been proposed in the literature, such as clustering methods (Wang et al., 2009), threshold methods (Hammouche et al., 2008), edge detection methods (Wang and Oliensis, 2010), and active contour models (ACMs) (Chan and Vese, 2001; Cremers et al., 2007; Kass et al., 1988; Roy et al., 2011). However, image segmentation remains a challenging problem due to various disturbing factors such as noise, low contrast, and weak


boundaries (He et al., 2008). Among the developed methods for image segmentation, active contour models (Kass et al., 1988) have proven to be a well-established approach for performing segmentation tasks. Compared with other classical methods, ACMs offer flexible segmentation frameworks and provide closed, smooth contours (Chan and Vese, 2001). In addition, in image segmentation using ACMs, one can formulate the segmentation task as the minimization of an energy functional. This allows other useful information, such as shape and appearance, to be encoded into the cost function to assist the segmentation. ACM-based image segmentation is performed by either explicit (Kass et al., 1988; Xu and Prince, 1998) or implicit (Caselles et al., 1997; Chan and Vese, 2001) motion of the curve such that the contour finally settles on the object boundaries. Explicit ACMs, or snakes (Kass et al., 1988), are normally formulated as optimization problems in which edge information of the image is used as an external force to guide the motion of the curve. However, for images corrupted by noise or with weak boundaries, it is difficult to obtain satisfactory segmentation results with the snake model


(Chan and Vese, 2001). As an alternative, Mumford and Shah (1989) proposed to use image intensity in the energy functional to perform segmentation. This region-based approach was later developed and popularized by Chan and Vese (2001) by minimizing the energy functional of the Mumford-Shah model within the level set framework (Sethian, 1999). In the level set framework, or implicit approach, the contour is represented as the zero level of a level set function and can automatically change its topology, for example by merging or splitting (Sethian, 1999). The Chan-Vese model has therefore been widely used for various segmentation tasks as well as other applications (Cremers et al., 2007). Later on, many works extending the Chan-Vese model were developed (Krinidis and Chatzis, 2009; Michailovich et al., 2007; Rousson and Deriche, 2002; Yezzi et al., 2002), in which the energy functionals are defined based on other intensity-related information such as mean separation (Yezzi et al., 2002), histogram separation (Michailovich et al., 2007), and Bayesian models (Rousson and Deriche, 2002), to name just a few examples. In an alternative approach, Bernard et al. (2009) proposed a variational B-spline level set model, in which the level set function is expressed as a combination of continuous basis functions using B-splines. More recently, to combine the advantages of fuzzy logic theory and the Chan-Vese model, Krinidis and Chatzis (2009) proposed a region-based active contour model, namely the fuzzy energy-based active contour model (FAC). Other improved models inspired by the work of Krinidis and Chatzis can be found in Maoguo et al. (2015), Shyu et al. (2012) and Yue et al. (2015). Compared to classical ACMs, these fuzzy energy-based ACMs possess advantages such as fast convergence to the segmentation solution and the ability to avoid becoming trapped in local minima (Krinidis and Chatzis, 2009). Nevertheless, for images in the presence of background clutter and object occlusion, the obtained results are not satisfactory. This stems from the fact that such models are based solely on low-level image information such as gradient and intensity. To overcome this shortcoming, we introduced shape prior knowledge into the energy functional of fuzzy energy-based ACMs in our previous work (Tran et al., 2014). However, the shape prior in Tran et al. (2014) is reconstructed by performing principal component analysis (PCA) on the set of training shapes. Since PCA-based shape reconstruction needs to compute the mean shape of the training set, a good shape prior for segmentation can only be obtained when the training shapes are similar. Otherwise, the shape prior is not faithful to the shape of the desired object, which leads to unsatisfactory segmentation results. To handle this drawback of PCA-based shape prior reconstruction, in this work we propose a new fuzzy energy based active contour model with a shape prior built from training shapes with a high degree of variability. In more detail, we employ collaborative representation to construct the shape prior from a training set that may include shapes from various classes. In collaborative representation, by solving an ℓ2-minimization problem, one can find the largest coding coefficients, and the shape prior is constructed as a linear combination of the shapes corresponding to these coefficients.
The main contributions of this work can be outlined as follows. First, in the current model, we propose a shape prior energy based on the collaborative representation approach. This permits a faithfully reconstructed shape to be obtained from training shapes even when they come from various classes, and it does not require an over-complete dictionary. Second, we utilize ℓ2-norm regularization instead of ℓ1-norm regularization for the coding coefficients, which significantly reduces the computational cost. The remainder of this paper is organized as follows. Section 2 briefly reviews active contour segmentation. The sparse and


collaborative representation-based shape prior construction approaches are presented in Section 3. The description of the proposed shape prior model for image segmentation is presented in Section 4. The implementation and experimental results are given in Section 5. We end this paper with a brief conclusion in Section 6.

2. Active contour segmentation

Let I denote an image defined on the image domain Ω ⊂ R², and let C ⊂ Ω denote a contour that partitions the image domain into two disjoint regions, inside(C) and outside(C). The piecewise constant model proposed by Chan and Vese (2001) models the inside(C) and outside(C) regions by the scalars c1 and c2. The energy functional of the Chan-Vese model is defined as:

$$F_{CV}(c_1, c_2, C) = \int_{inside(C)} (I(\mathbf{x}) - c_1)^2 \, d\mathbf{x} + \int_{outside(C)} (I(\mathbf{x}) - c_2)^2 \, d\mathbf{x} + \mu |C|, \qquad (1)$$

where μ ≥ 0 is an adjustment coefficient, |C| is the length of the curve C, and c1, c2 are the averages of the image intensities in the two regions inside(C) and outside(C), respectively. In the level set framework (Sethian, 1999), the energy in Eq. (1) can be written as:

$$F_{CV}(c_1, c_2, \phi) = \int_{\Omega} H(\phi(\mathbf{x}))(I(\mathbf{x}) - c_1)^2 \, d\mathbf{x} + \int_{\Omega} \left(1 - H(\phi(\mathbf{x}))\right)(I(\mathbf{x}) - c_2)^2 \, d\mathbf{x} + \mu \int_{\Omega} \delta(\phi(\mathbf{x})) \, |\nabla \phi(\mathbf{x})| \, d\mathbf{x}, \qquad (2)$$

where ϕ is a level set function defined to be positive inside the zero level set and negative outside it. The zero level set of ϕ represents the object boundaries. H(ϕ) is the Heaviside function, whose value is 1 if ϕ ≥ 0 and 0 otherwise, and δ(ϕ) = H′(ϕ) is the Dirac function. Inspired by the work of Chan and Vese, Krinidis and Chatzis (2009) proposed an active contour model based on fuzzy energy. In this model, unlike the Chan-Vese model (Chan and Vese, 2001), the evolving curve is implicitly represented as the 0.5 level of a pseudo level set function, which is defined as follows:

$$\begin{cases} C = \{\mathbf{x} \in \Omega : u(\mathbf{x}) = 0.5\}, \\ inside(C) = \{\mathbf{x} \in \Omega : u(\mathbf{x}) > 0.5\}, \\ outside(C) = \{\mathbf{x} \in \Omega : u(\mathbf{x}) < 0.5\}, \end{cases} \qquad (3)$$

where u(x) ∈ [0, 1] is the degree of membership of image point x to the inside of contour C. The energy functional of the fuzzy energy based ACM (Krinidis and Chatzis, 2009) is defined as:

$$F(c_1, c_2, u) = \int_{\Omega} [u(\mathbf{x})]^m (I(\mathbf{x}) - c_1)^2 \, d\mathbf{x} + \int_{\Omega} [1 - u(\mathbf{x})]^m (I(\mathbf{x}) - c_2)^2 \, d\mathbf{x} + \mu |C|, \qquad (4)$$

where m is a weighting exponent on each fuzzy membership. To handle clutter and occlusion in the image to be segmented, Tran et al. (2014) proposed a fuzzy energy-based active contour model with a shape prior, expressed as:



$$F(c_1, c_2, u, \hat{\psi}) = \gamma_1 \int_{\Omega} [u(\mathbf{x})]^m (I(\mathbf{x}) - c_1)^2 \, d\mathbf{x} + \gamma_2 \int_{\Omega} [1 - u(\mathbf{x})]^m (I(\mathbf{x}) - c_2)^2 \, d\mathbf{x} + \beta \int_{\Omega} [u(\mathbf{x})]^m \left(1 - \hat{\psi}(\mathbf{x})\right) d\mathbf{x} + \beta \int_{\Omega} [1 - u(\mathbf{x})]^m \hat{\psi}(\mathbf{x}) \, d\mathbf{x}, \qquad (5)$$

where γ1, γ2 > 0 are fixed parameters, β > 0 is a weighting factor, and ψ̂ is the prior shape, which can be reconstructed by performing PCA on the set of training shapes. PCA-based shape reconstruction, however, has the drawback of relying on the mean and modes of variation of the training shapes, so it may not work well when the training set includes shapes with a high degree of variability. To handle this drawback of the PCA-based approach, in this study we employ collaborative representation to construct the prior shape from a training set that may include shapes from various classes. The details of the proposed approach are presented in the following sections.
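As a concrete illustration of the fuzzy region terms in Eqs. (4) and (5), the sketch below evaluates the data energy of a membership map u on a toy image. It is a minimal sketch, not the authors' code: the function name, the synthetic image, and the small constant added to the denominators are illustrative assumptions.

```python
import numpy as np

def fuzzy_region_energy(I, u, m=2):
    """Evaluate the fuzzy data energy of Eq. (4) for an image I and membership map u.

    c1 and c2 are the fuzzy means of the two regions; the returned value is the sum
    of the two weighted fitting terms (the length term mu*|C| is omitted here).
    """
    w_in, w_out = u ** m, (1.0 - u) ** m
    c1 = (w_in * I).sum() / (w_in.sum() + 1e-12)    # fuzzy mean inside the contour
    c2 = (w_out * I).sum() / (w_out.sum() + 1e-12)  # fuzzy mean outside the contour
    energy = (w_in * (I - c1) ** 2 + w_out * (I - c2) ** 2).sum()
    return energy, c1, c2

# Toy example: a bright square on a dark background and a rough initial membership.
I = np.zeros((64, 64)); I[20:44, 20:44] = 1.0
u = np.full_like(I, 0.5); u[16:48, 16:48] = 0.8     # membership > 0.5 marks "inside"
print(fuzzy_region_energy(I, u))
```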

3. Sparse representation and collaborative representation of shapes

Consider an observed input shape g ∈ R^(W×H), such as the desired object shape in active contour segmentation. The shape g can be approximately represented as a combination of N elements in a shape dictionary D = [f_1, f_2, …, f_N]:

$$g = s_1 f_1 + s_2 f_2 + \cdots + s_N f_N = D\mathbf{s}, \qquad (6)$$

where s = [s_1, s_2, …, s_N]^T ∈ R^N is a coefficient vector. To find s, one can solve the sparse representation problem described as

$$\min_{\mathbf{s}} \|\mathbf{s}\|_0 \quad \text{subject to} \quad D\mathbf{s} = g, \qquad (7)$$

where ‖·‖_0 stands for the ℓ0 norm. In practical applications, the target shape g may be corrupted or occluded. Considering an error tolerance ε > 0, the above problem becomes

$$\min_{\mathbf{s}} \|\mathbf{s}\|_0 \quad \text{subject to} \quad \|g - D\mathbf{s}\|_2 \leq \varepsilon. \qquad (8)$$

Fig. 1. Input data from MNIST handwritten digit database: (a) shapes used for the training set, and (b) test image for segmentation.

Generally, the problem of identifying the sparsest solution of an underdetermined system of linear equations is known to be NP-hard and difficult even to approximate (Wright et al., 2009). Fortunately, recent developments in sparsity theory (Elad and Aharon, 2006; Kreutz-Delgado et al., 2003; Wright et al., 2009) provide an ℓ1-norm relaxation for effectively solving the problem in Eq. (7). The ℓ0 minimization in Eq. (7) is reformulated as the following ℓ1 minimization (Wright et al., 2009):

$$\min_{\mathbf{s}} \|\mathbf{s}\|_1 \quad \text{subject to} \quad \|g - D\mathbf{s}\|_2 \leq \varepsilon. \qquad (9)$$

After relaxation, the problem becomes solvable. An equivalent formulation is

$$\min_{\mathbf{s}} \left\{ E_\lambda(\mathbf{s}) = \|g - D\mathbf{s}\|_2^2 + \lambda \|\mathbf{s}\|_1 \right\}, \qquad (10)$$

where λ is the regularization parameter. The coding coefficient s in Eq. (10) can be iteratively updated by the iterative soft-thresholding algorithm as:

$$\mathbf{s}^{k+1} = \mathrm{soft}\left( \mathbf{s}^{k} - \tau D^{T}(D\mathbf{s}^{k} - g), \, \tau\lambda \right), \qquad (11)$$

where τ is the step size, and the soft-thresholding operator is defined as soft(y, T) = sign(y) max(0, |y| − T). Though ℓ1 minimization has been used in numerous shape prior-based ACMs with excellent segmentation performance, such as the works in Aghasi and Romberg (2013), Chen et al. (2013) and Zhang et al. (2011b), it is time consuming due to the non-smooth constraint (Li et al., 2014). In order to reduce the complexity of the ℓ1 minimization and speed up the computation, Shi et al. (2011) and Zhang et al. (2011a) suggested replacing the ℓ1-norm sparsity constraint by an ℓ2-norm constraint. Eq. (10) is then rewritten in the following form:

$$\min_{\mathbf{s}} \left\{ E_\lambda(\mathbf{s}) = \|g - D\mathbf{s}\|_2^2 + \lambda \|\mathbf{s}\|_2^2 \right\}. \qquad (12)$$

4. Fuzzy energy-based active contour with shape collaborative representation

4.1. Energy functional of the proposed model

To handle the challenges of segmenting images with clutter and occlusion, even when the training set contains multiple different object classes, we propose to incorporate shape prior knowledge into the fuzzy energy based active contour model. In more detail, we introduce into the energy functional a shape prior whose reference shape is reconstructed by shape collaborative representation. We term the proposed method SCRFAC, which is short for shape collaborative representation with fuzzy energy based active contour. The energy functional of the shape prior based fuzzy energy model is defined as:

$$F = F_{data} + \beta F_{shape} + \mu F_{length}, \qquad (13)$$

where β and μ are positive weighting coefficients, F_data is the data term, F_shape is the shape prior term, and F_length is the length term. The data term, inspired by Krinidis and Chatzis (2009), is given as follows:

$$F_{data}(c_1, c_2, u) = \int_{\Omega} [u(\mathbf{x})]^m (I(\mathbf{x}) - c_1)^2 \, d\mathbf{x} + \int_{\Omega} [1 - u(\mathbf{x})]^m (I(\mathbf{x}) - c_2)^2 \, d\mathbf{x}. \qquad (14)$$

The shape prior term represents the distance between the evolving shape and the reference shape. In this work, we define the shape constraint term by using E_λ(s) in Eq. (12) to measure the dissimilarity between the evolving shape g and the prior shape Ds. Note that, in this study, the evolving curve is implicitly represented as the T (0 < T < 1) level of the pseudo level set function u. For example, when T = 0.5, the evolving curve is represented as in Eq. (3). The evolving shape can therefore be represented via the Heaviside function as g = H(u(x) − T). The shape prior term is then defined as:

$$F_{shape}(u, \mathbf{s}) = \int_{\Omega} \left( H(u(\mathbf{x}) - T) - D\mathbf{s} \right)^2 d\mathbf{x} + \lambda \|\mathbf{s}\|_2^2. \qquad (15)$$
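To make the difference between the two coding schemes concrete, the sketch below codes a target shape over a small dictionary using both the iterative soft-thresholding update of Eq. (11) and the closed-form ridge solution of Eq. (12). It is a schematic illustration only: the random binary dictionary, the function names, and the parameter values are assumptions, not the MNIST or MPEG-7 data used in the paper.

```python
import numpy as np

def ista_l1_coding(D, g, lam, tau=None, n_iter=200):
    """Sparse coding of Eq. (10) by iterative soft thresholding, Eq. (11).
    D has one vectorized training shape per column; g is the vectorized target."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(D, 2) ** 2          # step size from the Lipschitz constant
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        r = s - tau * D.T @ (D @ s - g)                 # gradient step on the data term
        s = np.sign(r) * np.maximum(0.0, np.abs(r) - tau * lam)   # soft thresholding
    return s

def collaborative_l2_coding(D, g, lam):
    """Collaborative coding of Eq. (12): closed-form ridge-regression solution
    s = (D^T D + lam*I)^(-1) D^T g; the projection matrix can be precomputed once."""
    N = D.shape[1]
    P = np.linalg.solve(D.T @ D + lam * np.eye(N), D.T)
    return P @ g

# Illustrative dictionary of random binary "shapes" (assumption, not real shape data).
rng = np.random.default_rng(0)
D = (rng.random((400, 10)) > 0.5).astype(float)
g = D[:, 3] + 0.1 * rng.standard_normal(400)            # noisy copy of atom 3
s_l1 = ista_l1_coding(D, g, lam=0.1)
s_l2 = collaborative_l2_coding(D, g, lam=0.1)
print(np.argmax(np.abs(s_l1)), np.argmax(np.abs(s_l2))) # both should point to atom 3
```

The closed-form ℓ2 solution replaces the iterative loop with a single matrix-vector product, which is the source of the speed-up reported later in Section 5.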




Fig. 2. Evolution of contour from initial position to final result. The evolution of the contour is presented in the top row, and the corresponding coding coefficients are plotted in the bottom row.

Fig. 3. (a) Sample shapes in the shape dictionary from MPEG-7 database, and (b) test images for segmentation.




Fig. 4. Segmentation by the proposed shape model for images with various perturbations. (a) Initial curves on the test images, (b) segmentation results, (c) reconstructed shapes and (d) the corresponding coding coefficients.

Fig. 5. Representative results of tracking image sequence by the proposed model.

Note also that the evolving shape and the training shapes need to be aligned to the same coordinate frame. In this work, we adopt the shape alignment approach in Tran et al. (2013).
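The alignment itself follows Tran et al. (2013); as a rough illustration of the general idea only (not the authors' exact procedure), the sketch below normalizes a binary shape using its zeroth- and first-order moments for translation and its area for scale. The function name, the target area fraction, and the nearest-neighbour resampling are assumptions made for brevity; rotation handling is omitted.

```python
import numpy as np

def align_shape_by_moments(shape, out_size, target_area_frac=0.25):
    """Crude moment-based normalization of a binary shape (illustrative only):
    translate the centroid to the frame centre and rescale so the shape occupies
    a fixed fraction of the frame."""
    ys, xs = np.nonzero(shape)
    cy, cx = ys.mean(), xs.mean()                       # first-order moments -> centroid
    area = float(len(ys))                               # zeroth-order moment
    scale = np.sqrt(target_area_frac * out_size[0] * out_size[1] / area)
    out = np.zeros(out_size)
    # Map each foreground pixel into the normalized frame (nearest-neighbour).
    ny = np.clip(np.round((ys - cy) * scale + out_size[0] / 2).astype(int), 0, out_size[0] - 1)
    nx = np.clip(np.round((xs - cx) * scale + out_size[1] / 2).astype(int), 0, out_size[1] - 1)
    out[ny, nx] = 1.0
    return out
```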

Fig. 6. Representative results of tracking image sequence under noise by the proposed model.

The length term can be calculated from the T level of the pseudo level set function, u(x) = T, as:

$$F_{length}\{u(\mathbf{x}) = T\} = \int_{\Omega} |\nabla H(u(\mathbf{x}) - T)| \, d\mathbf{x} = \int_{\Omega} \delta(u(\mathbf{x}) - T) \, |\nabla (u(\mathbf{x}) - T)| \, d\mathbf{x}, \qquad (16)$$

where H(·) and δ(·) are the Heaviside and Dirac functions, respectively. The energy functional of the proposed SCRFAC model defined in Eq. (13) can then be rewritten as

$$F(c_1, c_2, u, \mathbf{s}) = \int_{\Omega} [u(\mathbf{x})]^m (I(\mathbf{x}) - c_1)^2 \, d\mathbf{x} + \int_{\Omega} [1 - u(\mathbf{x})]^m (I(\mathbf{x}) - c_2)^2 \, d\mathbf{x} + \beta \left( \int_{\Omega} \left( H(u(\mathbf{x}) - T) - D\mathbf{s} \right)^2 d\mathbf{x} + \lambda \|\mathbf{s}\|_2^2 \right) + \mu \int_{\Omega} \delta(u(\mathbf{x}) - T) \, |\nabla (u(\mathbf{x}) - T)| \, d\mathbf{x}. \qquad (17)$$

4.2. Energy minimization

To minimize the energy functional in Eq. (17), we first fix u and s and minimize F(c_1, c_2, u, s) with respect to c_1 and c_2, which yields:

$$c_1 = \frac{\int_{\Omega} [u(\mathbf{x})]^m I(\mathbf{x}) \, d\mathbf{x}}{\int_{\Omega} [u(\mathbf{x})]^m \, d\mathbf{x}}, \qquad (18)$$

$$c_2 = \frac{\int_{\Omega} [1 - u(\mathbf{x})]^m I(\mathbf{x}) \, d\mathbf{x}}{\int_{\Omega} [1 - u(\mathbf{x})]^m \, d\mathbf{x}}. \qquad (19)$$

With c_1, c_2 and s fixed, we minimize the energy functional F(c_1, c_2, u, s) with respect to u. The membership u that minimizes the energy functional satisfies the Euler-Lagrange equation:

$$\frac{\partial u}{\partial t} = -\frac{\partial F}{\partial u}. \qquad (20)$$

The membership u is updated as:

$$u^{n+1} = u^{n} - \Delta t \, \frac{\partial F}{\partial u}, \qquad (21)$$

where u^n is the fuzzy membership at the nth iteration. The newly updated fuzzy membership function is then constrained to the range [0, 1] as follows:

$$u = \begin{cases} 0, & \text{if } u \leq 0, \\ u, & \text{if } 0 < u < 1, \\ 1, & \text{if } u \geq 1. \end{cases} \qquad (22)$$

The Gateaux derivative of the functional F in Eq. (17) can be written as:

$$\frac{\partial F}{\partial u} = m [u(\mathbf{x})]^{m-1} (I(\mathbf{x}) - c_1)^2 - m [1 - u(\mathbf{x})]^{m-1} (I(\mathbf{x}) - c_2)^2 + 2\beta \, \delta(u(\mathbf{x}) - T) \left( H(u(\mathbf{x}) - T) - D\mathbf{s} \right) - \mu \, \delta(u(\mathbf{x}) - T) \, \mathrm{div}\!\left( \frac{\nabla (u(\mathbf{x}) - T)}{|\nabla (u(\mathbf{x}) - T)|} \right). \qquad (23)$$

With u, c_1, and c_2 fixed, minimizing the energy functional F(c_1, c_2, u, s) with respect to s gives the following equation:

$$\mathbf{s} = \left( D^{T} D + \lambda I_N \right)^{-1} D^{T} H(u(\mathbf{x}) - T), \qquad (24)$$

where I_N is the identity matrix. Denote P = (D^T D + λI_N)^(-1) D^T. From Eq. (24), it is clear that P is independent of the evolving shape representation H(u(x) − T), so it can be pre-calculated and stored once. Consequently, during the curve evolution, once the pseudo level set function u has been updated, we can simply project H(u(x) − T) onto P by computing P H(u(x) − T). This makes the collaborative representation approach faster than the sparse representation and potentially applicable to real-time applications. The algorithm for the proposed shape prior-based fuzzy energy model is outlined as follows:

Step 1. Initialize u and s.
Step 2. Calculate c_1 and c_2 using Eqs. (18) and (19).
Step 3. Compute the fuzzy membership function u by Eqs. (21) and (22).
Step 4. Align the evolving shape to the coordinate frame of the training set D.
Step 5. Update the coefficient vector s using Eq. (24), then construct the reference shape Ds. Align the shape Ds to the coordinate frame of the evolving shape.
Step 6. Check whether the segmentation solution has converged. If not, repeat Steps 2 to 5.
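A compact sketch of the update loop in Steps 1-6 is given below. It follows Eqs. (18)-(24) but omits the shape alignment of Step 4 and the curvature/length term; m, λ and β follow the settings reported in Section 5, while the smoothed Heaviside/Dirac pair, the time step, the iteration count, and the toy dictionary are assumptions. It should therefore be read as a schematic of the algorithm rather than the authors' implementation.

```python
import numpy as np

def heaviside(z, eps=1.0):
    """Smoothed Heaviside function; dirac() below is its derivative."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac(z, eps=1.0):
    return (1.0 / np.pi) * eps / (eps ** 2 + z ** 2)

def scrfac_segment(I, D, m=2, lam=50.0, beta=2.0, T=0.5, dt=0.1, n_iter=100):
    """Schematic SCRFAC loop (Steps 1-6). D holds one vectorized training shape
    per column, with D.shape[0] == I.size. Alignment and the length term are omitted."""
    h, w = I.shape
    u = np.full(I.shape, 0.4); u[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 0.6   # Step 1
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)   # precompute P once
    for _ in range(n_iter):
        w_in, w_out = u ** m, (1.0 - u) ** m
        c1 = (w_in * I).sum() / w_in.sum()                         # Step 2, Eq. (18)
        c2 = (w_out * I).sum() / w_out.sum()                       # Step 2, Eq. (19)
        g = heaviside(u - T)                                       # evolving shape H(u - T)
        s = P @ g.ravel()                                          # Step 5, Eq. (24)
        prior = (D @ s).reshape(I.shape)                           # reference shape Ds
        dF_du = (m * u ** (m - 1) * (I - c1) ** 2                  # Gateaux derivative, Eq. (23)
                 - m * (1.0 - u) ** (m - 1) * (I - c2) ** 2
                 + 2.0 * beta * dirac(u - T) * (g - prior))
        u = np.clip(u - dt * dF_du, 0.0, 1.0)                      # Step 3, Eqs. (21)-(22)
    return u > T, prior

# Toy usage: a one-atom dictionary built from a square shape (illustrative data only).
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
atom = img.ravel()[:, None]
noisy = img + 0.05 * np.random.default_rng(1).standard_normal(img.shape)
mask, prior = scrfac_segment(noisy, atom)
```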

5. Experimental results and validation

In this section, experimental results are provided to validate the effectiveness of the proposed model when segmenting images




Fig. 7. Comparison of results by our approach when using SSRFAC (ℓ1-norm regularization) and SCRFAC ( ℓ2 -norm regularization) for segmentation on three images. In each row: first column: original images; second column: input image with initial contours; third column: segmentation results; fourth column: reconstructed prior shape; fifth column: coding coefficients.

in the presence of clutter and occlusion. Segmentation results by the proposed SCRFAC model on various images are also evaluated and compared with those by other active contour models. The

models used for comparison include: the fuzzy energy-based active contour model (FAC) in Krinidis and Chatzis (2009), the linear PCA-based active contour model (Tsai et al., 2003), the shape-


Table 1. Comparison of computation time (in s) for segmentation of the images in Fig. 7 when using the SSRFAC (ℓ1-norm) and SCRFAC (ℓ2-norm) approaches.

Image                    SSRFAC (ℓ1-norm)    SCRFAC (ℓ2-norm)
Bird (Fig. 7(A))         782.3               205.1
Helicopter (Fig. 7(B))   422.8               157.2
Horse (Fig. 7(C))        793.0               164.5

based active contour model by Liu et al. (2011), and our recently proposed fuzzy energy-based active contour with shape prior model (here termed PCAFAC) in Tran et al. (2014). We also compare the results of the proposed approach when using the ℓ2-norm (SCRFAC) with those obtained when using the ℓ1-norm, namely SSRFAC, which is short for sparse shape representation with fuzzy energy based active contour. The experiments are conducted using Matlab on a PC with an Intel Pentium Dual-Core CPU at 2.80 GHz and 2 GB of RAM, without any particular code optimization. In this study, the following parameters are set: m = 2, λ = 50, β = 2, and μ = 0.3 for most images. The evolving curve is implicitly represented as the T = 0.5 level of the pseudo level set function.

5.1. Test on shape databases

We first conduct an experiment with one synthetic image, a handwritten digit. This experiment aims to illustrate how the model can take shapes from a training set to segment an object. A dictionary including 10 characters taken from the MNIST database (LeCun and Cortes, 2016) of handwritten digits is presented in Fig. 1(a). The reference shape is constructed as a linear combination of the digits in this dictionary. Fig. 1(b) shows the test image, the numeral "seven" with background clutter. The segmentation process for this experiment is shown in Fig. 2, in which the evolution of the contour is presented in the first row. As can be seen in Fig. 2(a), the final contour matches the desired shape and a satisfactory result is obtained. In addition, the corresponding coding coefficients for each prior shape in the training set are plotted in Fig. 2(b). From Fig. 2(b) we can see that at first the coefficients are distributed over all the training shapes and finally correspond to the correct prior shape.

In the next experiment, we validate the performance of the proposed model when the given training set includes shapes from various classes. To this end, we apply the proposed model to segment various images with background clutter and missing parts. The dictionary, in this case, is formed from the shapes in the MPEG-7 data set, which has 1400 binary images from 70 different classes, each class including 20 images (Latecki et al., 2000). A part of the shape dictionary is presented in Fig. 3(a) and four test images are given in Fig. 3(b). The experiments, in this instance, are presented in Fig. 4, in which the test images with initial curves are shown in the first column. Segmentation results obtained by the proposed model are provided in the second column. As can be seen from this


figure, with the proposed shape prior model the interference from the background clutter is eliminated, and the desired objects are segmented, as shown in the second column of Fig. 4. It is also noted that, in this experiment, the model introduces a certain amount of sparsity into the coding coefficients. The reconstructed shapes for segmenting the test images and the corresponding coding coefficients are presented in the third and fourth columns of the figure.

In the next experiment, we apply the proposed model to track a partially occluded walking person. The data set for this experiment is taken from Cremers et al. (2008), which is publicly available. Since the provided training set only includes shapes of a person walking left, we extend the training set by horizontally flipping all shapes. Fig. 5 presents the tracking results of the proposed model for the walking person in two directions: from left to right (first row) and from right to left (second row). As can be seen in Fig. 5, although the walking person is occluded (see the frames in Fig. 5), satisfactory results are still obtained by applying the proposed model using collaborative shape representation. In order to show the performance of the proposed model on noisy images, we add Gaussian white noise to the frames and apply the proposed model to track the walking person in these noisy frames. The results of this experiment are presented in Fig. 6. It can be seen from this figure that, although the object in these images is occluded and corrupted by noise, the proposed model still successfully recovers the shape of interest.

5.2. Method evaluation

As pointed out previously, sparse representation has been used in many works for shape prior based segmentation (Aghasi and Romberg, 2013; Chen et al., 2013; Zhang et al., 2011b). However, in sparse representation, the ℓ1-norm optimization problem is time consuming due to the non-smooth constraint. As an alternative to ℓ1-minimization in shape sparse representation, in this study we use collaborative representation with ℓ2-minimization to speed up the computation. To compare the performance of sparse representation and collaborative representation for segmentation, in this part we applied the sparse shape representation with fuzzy energy based active contour (SSRFAC) and the proposed shape collaborative representation with fuzzy energy based active contour (SCRFAC) approaches to some images, with the training set taken from the MPEG-7 data set (Latecki et al., 2000). The experiments are shown in Fig. 7, with the training set including 1400 shapes, and all images are of size 100 × 100 pixels. The test images and initial contours are given in the first and second columns; the results obtained by the SSRFAC and SCRFAC approaches are shown in the same row for each image. As can be observed from Fig. 7, SCRFAC gives competitive results compared with SSRFAC. Nevertheless, the SCRFAC approach takes much less computational time than the SSRFAC approach, as presented in Table 1.

In the next experiment, we validate the segmentation performance on some test images with a dictionary including 10

Fig. 8. Input data from MNIST handwritten digit database: (a) training set, (b) image with clutter, (c) image in (b) with noise, (d) image with occlusion, (e) image in (d) with noise.



Fig. 9. Segmentation of the digit “seven” with background clutter and occlusion. (a): Original images and initial curves. (b): Results by FAC (Krinidis and Chatzis, 2009). (c): Results by PCAFAC (Tran et al., 2014). (d): Results of the proposed SCRFAC model.

characters taken from the MNIST database (LeCun and Cortes, 2016) of handwritten digits, shown in Fig. 8(a). The test images given in Fig. 8(b) and (d) contain background clutter and occlusion. These images are also corrupted by Gaussian white noise, as shown in Fig. 8(c) and (e). In this experiment, we compare the performance of the proposed SCRFAC model with the FAC model (Krinidis and Chatzis, 2009) and our recently proposed PCAFAC model (Tran et al., 2014), in which the shape prior is constructed by applying PCA to the training shapes. The segmentation results of the FAC, PCAFAC (Tran et al., 2014), and SCRFAC models are, respectively, presented in the second, third, and fourth columns of Fig. 9. It can be seen from this figure that the results by the FAC model are not satisfactory, since the final contours cannot recover the real boundaries in the test images. The PCAFAC model (Tran et al., 2014) and the SCRFAC model can deal with the test images in this case. However, we can observe that the segmentation results obtained by the SCRFAC model are more accurate than those by the PCAFAC model, especially when the segmented images contain clutter and noise. This is clearly visible in the regions indicated by black arrows in the third column of Fig. 9.

To further validate the performance of the SCRFAC model, we compare the model with other approaches. The models used for comparison include the region scalable fitting (RSF) model (Li et al., 2008), the L2S model (Mukherjee and Acton, 2015), the PCAFAC model (Tran et al., 2014), and the SSRFAC (ℓ1-norm) approach. In the RSF model, at each image pixel a local area is first determined, and then two functions are used to estimate the local intensity means inside and outside the contour within the local area. Meanwhile, in the L2S model, the local intensities are modeled by Legendre polynomials, and level sets are

used to perform segmentation tasks. We first apply the four approaches above and our proposed SCRFAC model to segment some images from the MPEG-7 data set (Latecki et al., 2000). The segmentation results, in this case, are shown in Fig. 10, in which the results obtained by the RSF, L2S, PCAFAC, SSRFAC, and proposed SCRFAC models are, respectively, presented in the second, third, fourth, fifth, and sixth columns. In addition, the average computation time of each supervised model (PCAFAC, SSRFAC, and SCRFAC) to segment one image in this experiment is given in Table 2. It can be seen from Fig. 10 that the results obtained by the RSF and L2S models are not satisfactory, as the real boundaries of the desired objects are not segmented (see the second and third columns of this figure). Results by the PCAFAC model, except for the helicopter image, are not as desired since the real object boundaries are not delineated, as can be seen in the fourth column of this figure. This stems from the fact that the training set in this scenario includes shapes from various classes, so the reconstructed prior shape is not faithful to the desired object shape when performing PCA on the training set. The SSRFAC approach can deal with the images except for the cuttlefish image in the fourth row of this figure. The proposed SCRFAC works well with all of the images in this experiment and gives results comparable with the SSRFAC approach. However, the proposed SCRFAC takes much less computational time than the SSRFAC approach, as reported in Table 2.

In the next experiment, we apply five models, RSF, L2S, PCAFAC, SSRFAC, and the proposed SCRFAC, to segment the left atrium (LA) from MR images. Left atrium segmentation plays an important role in the treatment of left atrial fibrillation (McGann et al., 2008). LA segmentation is challenging since the LA is small compared with the lung or left ventricle, and its boundaries



Fig. 10. Segmentation results by RSF (Li et al., 2008), L2S (Mukherjee and Acton, 2015), PCAFAC (Tran et al., 2014), SSRFAC, and SCRFAC on images from MPEG-7 data set. First column: Original image with initial curves; second column: results by RSF; third column: results by L2S; fourth column: results by PCAFAC; fifth column: results by SSRFAC; sixth column: results by SCRFAC model.

Table 2. The average computation time (in s) of PCAFAC (Tran et al., 2014), SSRFAC, and SCRFAC when segmenting the images in Fig. 10.

Method             PCAFAC    SSRFAC    SCRFAC
Average runtime    87.75     100.95    29.55

are generally not clear where the blood pool enters the pulmonary veins from the LA. The data set for LA segmentation in this experiment is taken from the Comprehensive Arrhythmia Research and Management Center, publicly accessible at http://insight-journal.org/midas/collection/view/197, and includes MR images from 61

patients. We use images from 20 patients for segmentation and the ground truths of the remaining patients as training shapes. Some representative segmentation results by the models are presented in Fig. 11, in which the results obtained by the RSF, L2S, PCAFAC, SSRFAC, and proposed SCRFAC models are shown in the second, third, fourth, fifth, and sixth columns, respectively. The ground truths are provided in the last column of this figure. It can be seen from this figure that the RSF model cannot deal with these images, since it extracts some areas outside the regions of interest and the final contours spill over the LA boundaries. The results by the L2S, PCAFAC, SSRFAC, and proposed SCRFAC models are visually similar.




Fig. 11. Representative segmentation of left atrium by RSF (Li et al., 2008), L2S (Mukherjee and Acton, 2015), PCAFAC (Tran et al., 2014), SSRFAC and SCRFAC on MRI database. First column: original images; second column: results by RSF; third column: results by L2S; fourth column: results by PCAFAC; fifth column: results by SSRFAC; sixth column: results by SCRFAC; last column: ground truths.

Table 3. The mean and standard deviation of Dice similarity coefficient (DSC) and Mean absolute distance (MAD) for left atrium segmentation by RSF (Li et al., 2008), L2S (Mukherjee and Acton, 2015), PCAFAC (Tran et al., 2014), SSRFAC, and SCRFAC.

Method    DSC              MAD
RSF       0.812 ± 0.060    1.811 ± 0.037
L2S       0.806 ± 0.045    2.108 ± 0.265
PCAFAC    0.891 ± 0.042    1.410 ± 0.225
SSRFAC    0.902 ± 0.031    1.401 ± 1.179
SCRFAC    0.911 ± 0.026    1.320 ± 0.158

Table 4. Average computation time (in s) of PCAFAC (Tran et al., 2014), SSRFAC (ℓ1-norm), and SCRFAC (ℓ2-norm) for segmentation of the left atrium.

Method             PCAFAC    SSRFAC    SCRFAC
Average runtime    35.06     42.77     33.20

To quantitatively assess the segmentation results by the proposed model and the other models, we use several metrics for evaluation. The results segmented by the radiologist are referred to as the manual segmentations (ground truth), and the results produced by the segmentation methods are considered the automatic segmentations.



Fig. 12. Representative segmentation of lung by RSF (Li et al., 2008), L2S (Mukherjee and Acton, 2015), PCAFAC (Tran et al., 2014), SSRFAC, and SCRFAC on chest X-ray database. First column: original images; second column: results by RSF; third column: results by L2S; fourth column: results by PCAFAC; fifth column: results by SSRFAC; sixth column: results by SCRFAC; last column: ground truths.

Table 5. The mean and standard deviation of Dice similarity coefficient (DSC) and Mean absolute distance (MAD) for lung segmentation on the X-ray database by RSF (Li et al., 2008), L2S (Mukherjee and Acton, 2015), PCAFAC (Tran et al., 2014), SSRFAC, and SCRFAC.

Method    DSC (Left lung)    DSC (Right lung)    MAD (Left lung)    MAD (Right lung)
RSF       0.868 ± 0.025      0.900 ± 0.023       2.488 ± 0.269      2.436 ± 0.337
L2S       0.913 ± 0.028      0.928 ± 0.026       1.934 ± 0.337      1.972 ± 0.395
PCAFAC    0.931 ± 0.023      0.948 ± 0.021       1.782 ± 0.368      1.729 ± 0.360
SSRFAC    0.950 ± 0.018      0.961 ± 0.016       1.482 ± 0.231      1.467 ± 0.285
SCRFAC    0.955 ± 0.014      0.964 ± 0.015       1.424 ± 0.196      1.443 ± 0.269

Table 6. Average computation time (in seconds) of PCAFAC (Tran et al., 2014), SSRFAC (ℓ1-norm), and SCRFAC (ℓ2-norm) for segmentation of the left and right lungs.

Method    Average runtime (Left lung)    Average runtime (Right lung)
PCAFAC    51.8                           62.9
SSRFAC    63.3                           62.6
SCRFAC    36.5                           36.3

The similarity and dissimilarity between automatic and manual segmentations are calculated via the Dice similarity coefficient (DSC) and the Mean absolute distance (MAD). The DSC and MAD are defined as follows.



Fig. 14. Quantitative results: Dice coefficient (a) and mean absolute distance (b) as a function of image number for the Tsai et al. model (Tsai et al., 2003), the Liu et al. model (Liu et al., 2011), and the proposed SCRFAC model when segmenting the subject in Fig. 13.

Table 7. The mean and standard deviation of Dice similarity coefficient (DSC) and Mean absolute distance (MAD) for segmentation of the subject in Fig. 13 by the Tsai et al. model (Tsai et al., 2003), the Liu et al. model (Liu et al., 2011) and the proposed model.

Method                DSC              MAD
Tsai et al. model     0.897 ± 0.031    1.750 ± 0.315
Liu et al. model      0.921 ± 0.027    1.603 ± 0.243
The proposed model    0.931 ± 0.017    1.437 ± 0.150

Fig. 13. Segmentation results when applying the proposed model and two other shape-based active contour models to segment images of one subject. Results by: (a) Tsai et al. model (Tsai et al., 2003), (b) Liu et al. model (Liu et al., 2011), and (c) the proposed SCRFAC model.

The Dice coefficient measures the similarity between automatic and manual segmentations and is defined as (Lynch et al., 2008)

$$DSC = \frac{2 A_{am}}{A_a + A_m}, \qquad (25)$$

where A_a, A_m, and A_am are the automatically delineated region, the manually segmented region, and the intersection between the two regions, respectively. DSC is always between 0 and 1. The mean absolute distance is used to measure the distance between two curves and is defined as follows. First, let


{c_1, c_2, …, c_n} be n points on the segmenting contour C, and {m_1, m_2, …, m_k} be k points on the ground truth curve M. Then, the distance between point c_i and the closest point on the curve M is given as

$$d(c_i, M) = \min_{m_j \in M} \| m_j - c_i \|. \qquad (26)$$

These distances are calculated for all points in the two curves and then averaged to get the mean absolute distance between two curves MAD( C , M ) (Mikic et al., 1998). Finally, the MAD is defined as:

$$MAD(C, M) = \frac{1}{2} \left( \frac{1}{n} \sum_{i=1}^{n} d(c_i, M) + \frac{1}{k} \sum_{j=1}^{k} d(m_j, C) \right). \qquad (27)$$
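For reference, a direct translation of Eqs. (25)-(27) into code is given below. It is a minimal sketch: the contour points are assumed to be given already as coordinate arrays, and the toy curves used in the usage example are illustrative only.

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice similarity coefficient of Eq. (25) for two binary masks."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    return 2.0 * inter / (a.sum() + m.sum())

def mean_absolute_distance(C, M):
    """Symmetric mean absolute distance of Eqs. (26)-(27).
    C and M are (n, 2) and (k, 2) arrays of contour point coordinates."""
    d = np.linalg.norm(C[:, None, :] - M[None, :, :], axis=2)   # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy usage with two slightly shifted circular contours (illustrative data only).
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
C = np.stack([10 * np.cos(t) + 32, 10 * np.sin(t) + 32], axis=1)
M = C + np.array([1.0, 0.5])
print(mean_absolute_distance(C, M))
```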

By quantitative comparison, the SSRFAC and the proposed SCRFAC models obtain more accurate results than RSF, L2S, and PCAFAC, as reported in Table 3. Nevertheless, compared to SSRFAC, SCRFAC takes less computational time, as presented in Table 4.

In the following comparative experiment, we present the segmentation results when applying the RSF, L2S, PCAFAC, SSRFAC, and proposed SCRFAC models to segment the lungs from X-ray images. The X-ray images are taken from a publicly available database (Ginneken et al., 2006; Shiraishi et al., 2000) including 247 chest radiographs. From this data set, 75 images are used for segmentation of the lungs, and the lung ground truths of the remaining images are used as training shapes. Because of its low cost and fast imaging speed, chest radiography (X-ray) has been widely used in medical imaging research. Segmentation of the lungs from chest radiographs provides important information, namely the lung shapes, which are critical signs for pathology detection, diagnosis, and treatment evaluation. It is also essential for other analysis tasks in computer-aided diagnosis, e.g., cardiac size measurement. However, due to imaging artifacts, lung boundaries may appear broken. In addition, there may be a lack of contrast between overlapping objects, e.g., bones and soft tissues. Moreover, the challenges of lung segmentation in radiography also come from large variations of lung shapes, lung disease, and the pseudo-boundary close to the diaphragm. Fig. 12 shows representative segmentation results of the lungs by the above approaches. In the figure, the results by the RSF, L2S, PCAFAC, SSRFAC, and proposed SCRFAC models are shown in the second, third, fourth, fifth, and sixth columns, respectively. The corresponding ground truths of the test images are given in the last column of this figure. It can be observed from this figure that the RSF model cannot cope with the test images, since the final contours are not located on the desired objects' boundaries. The L2S model gives better results than the RSF model on the test images. However, the obtained results are still not satisfactory since the final contours do not correctly locate the genuine object boundaries. The PCAFAC, SSRFAC, and proposed SCRFAC models give similar results by visual inspection. However, by quantitative comparison, we can see that the proposed model produces more accurate results than the PCAFAC and SSRFAC models, as reported in Table 5. Moreover, the proposed SCRFAC model takes less computational time than the PCAFAC and SSRFAC models, as reported in Table 6.

Finally, we compare the performance of the proposed SCRFAC model with two other statistical shape prior-based active contour models. To this end, we apply the proposed model, the model of Tsai et al. (2003), and the recently proposed shape-based active contour model of Liu et al. (2011) to segment images from one subject of the cardiac MR data set in Andreopoulos and Tsotsos (2008). The shape prior model proposed by Tsai et al. (2003) performs PCA on the given training set to reconstruct the shape


prior, while the model of Liu et al. (2011) uses kernel density estimation. The reconstructed shape priors from these two models are then incorporated into the Chan-Vese model (Chan and Vese, 2001) to enhance the segmentation performance. To validate the performance, the segmentation results of the three models are evaluated against the corresponding ground truths. Some representative segmentation results of one subject by the Tsai et al., Liu et al., and proposed SCRFAC models are shown in Fig. 13(a)-(c), respectively. The mean and standard deviation of the obtained Dice similarity coefficient (DSC) and Mean absolute distance (MAD), computed according to Eqs. (25) and (27), are reported for each model on the test subject in Table 7. In addition, DSC and MAD as functions of the image number for the three models are plotted in Fig. 14. By visual inspection of the results presented in Fig. 14, one might conclude that these models obtain similar results. However, by quantitative comparison, the proposed SCRFAC model achieves more accurate segmentation results, as presented in Table 7 and Fig. 14.

6. Conclusion

We have presented a new supervised active contour model for image segmentation, in which shape prior information is incorporated into a fuzzy energy based active contour model to perform segmentation tasks. In our work, we employ collaborative representation to construct the shape prior from a training set that may include shapes from various classes. In addition, we utilize the ℓ2-norm to regularize the coding coefficients. The proposed model can therefore deal with images containing background clutter and object occlusion, especially when the training set does not consist of similar shapes but instead exhibits large shape variability or includes shapes of multiple different object classes. In addition, the model significantly improves the computational speed. Experimental results on various images validate the effectiveness of the proposed shape prior model.

Acknowledgment

The authors would like to thank the reviewers and the Associate Editor for their valuable comments and suggestions, which have greatly helped in improving the content of this paper. K.-K. Shyu was supported by MOST (Taiwan, ROC), Grant No. 102-2221-E-008-087-MY3. M.-T. Lo was supported by MOST (Taiwan, ROC), Grant Nos. 103-2221-E-008-006-MY3 and 105-2218-E-008-003.

References

Aghasi, A., Romberg, J., 2013. Sparse shape reconstruction. SIAM J. Imaging Sci. 16, 2075–2108.
Andreopoulos, A., Tsotsos, J.K., 2008. Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI. Med. Image Anal. 12, 335–357.
Bernard, O., Friboulet, D., Thevenaz, P., Unser, M., 2009. Variational B-spline level-set: a linear filtering approach for fast deformable model evolution. IEEE Trans. Image Process. 18, 1179–1191.
Caselles, V., Kimmel, R., Sapiro, G., 1997. Geodesic active contours. Int. J. Comput. Vis. 22, 61–79.
Chan, T., Vese, L., 2001. Active contours without edges. IEEE Trans. Image Process. 10, 266–277.
Chen, F., Yu, H., Hu, R., 2013. Shape sparse representation for joint object classification and segmentation. IEEE Trans. Image Process. 22, 992–1004.
Cremers, D., Rousson, M., Deriche, R., 2007. A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape. Int. J. Comput. Vis. 72, 195–215.
Cremers, D., Schmidt, F.R., Barthel, F., 2008. Shape priors in variational image segmentation: convexity, Lipschitz continuity and globally optimal solutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–6.



Elad, M., Aharon, M., 2006. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745.
Ginneken, B.v., Stegmann, M.B., Loog, M., 2006. Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database. Med. Image Anal. 10, 19–40.
Hammouche, K., Diaf, M., Siarry, P., 2008. A multilevel automatic thresholding method based on a genetic algorithm for a fast image segmentation. Comput. Vis. Image Underst. 109, 163–175.
He, L., Peng, Z., Everding, B., Wang, X., Han, C., Weiss, K., Wee, W.G., 2008. A comparative study of deformable contour methods on medical image segmentation. Image Vis. Comput. 26, 141–163.
Kass, M., Witkin, A., Terzopoulos, D., 1988. Snakes: active contour models. Int. J. Comput. Vis. 1, 321–331.
Kreutz-Delgado, K., Murray, J.F., Rao, B.D., Engan, K., Lee, T.W., Sejnowski, T.J., 2003. Dictionary learning algorithms for sparse representation. Neural Comput. 15, 349–396.
Krinidis, S., Chatzis, V., 2009. Fuzzy energy-based active contour. IEEE Trans. Image Process. 18, 2747–2755.
Latecki, L., Lakamper, R., Eckhardt, U., 2000. Shape descriptors for nonrigid shapes with a single closed contour. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 424–429.
LeCun, Y., Cortes, C. The MNIST database of handwritten digits. 〈http://yann.lecun.com/exdb/mnist/〉 (Accessed 11/4/2016).
Li, C., Kao, C., Gore, J., Ding, Z., 2008. Minimization of region-scalable fitting energy for image segmentation. IEEE Trans. Image Process. 17, 1940–1949.
Li, J., Zhang, H., Huang, Y., Zhang, L., 2014. Hyperspectral image classification by nonlocal joint collaborative representation with a locally adaptive dictionary. IEEE Trans. Geosci. Remote Sens. 52, 3707–3719.
Liu, W., Shang, Y., Yang, X., Deklerck, R., Cornelis, J., 2011. A shape prior constraint for implicit active contours. Pattern Recogn. Lett. 32, 1937–1947.
Lynch, M., Ghita, O., Whelan, P.F., 2008. Segmentation of the left ventricle of the heart in 3-D+t MRI data using an optimized nonrigid temporal model. IEEE Trans. Med. Imaging 27, 195–203.
Mandal, D., Chatterjee, A., Maitra, M., 2014. Robust medical image segmentation using particle swarm optimization aided level set based global fitting energy active contour approach. Eng. Appl. Artif. Intell. 35, 199–214.
Maoguo, G., Dayong, T., Linzhi, S., Licheng, J., 2015. An efficient bi-convex fuzzy variational image segmentation method. Inf. Sci., 351–369.
McGann, C.J., Kholmovski, E.G., Oakes, R.S., Blauer, J.E., Daccarett, M., Segerson, N., Airey, K.J., Akoum, N., Fish, E.N., Badger, T.J., DiBella, E.V.R., Parker, D., MacLeod, R.S., Marrouche, N.F., 2008. New magnetic resonance imaging based method to define extent of left atrial wall injury after the ablation of atrial fibrillation. J. Am. Coll. Cardiol. 52, 1263–1271.
Michailovich, O., Rathi, Y., Tannenbaum, A., 2007. Image segmentation using active contours driven by the Bhattacharyya gradient flow. IEEE Trans. Image Process. 15, 2787–2810.
Mikic, I., Krucinski, S., Thomas, J., 1998. Segmentation and tracking in echocardiographic sequences: active contours guided by optical flow estimates. IEEE Trans. Med. Imaging 17, 274–284.
Mukherjee, S., Acton, S.T., 2015. Region based segmentation in presence of intensity inhomogeneity using Legendre polynomials. IEEE Signal Process. Lett. 22, 298–302.
Mumford, D., Shah, J., 1989. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42, 577–685.
Rousson, M., Deriche, R., 2002. A variational framework for active and adaptive segmentation of vector valued images. In: Proceedings of the IEEE Workshop on Motion and Video Computing, pp. 56–62.
Roy, K., Bhattacharya, P., Suen, C.Y., 2011. Towards nonideal iris recognition based on level set method, genetic algorithms and adaptive asymmetrical. Eng. Appl. Artif. Intell. 24, 458–475.
Sethian, J.A., 1999. Level Set Methods and Fast Marching Methods. Cambridge University Press, Cambridge.
Shi, Q., Eriksson, A., Hengel, A., Shen, C., 2011. Is face recognition really a compressive sensing problem? In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 553–560.
Shiraishi, J., Katsuragawa, S., Ikezoe, J., Matsumoto, T., Kobayashi, T., Komatsu, K., Matsui, M., Fujita, H., Kodera, Y., Doi, K., 2000. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. Am. J. Roentgenol. 174, 71–74.
Shyu, K.K., Pham, V.T., Tran, T.T., Lee, P.L., 2012. Global and local fuzzy energy based active contours for image segmentation. Nonlinear Dyn. 67, 1559–1578.
Tran, T.T., Pham, V.T., Shyu, K.K., 2013. Moment-based alignment for shape prior with variational B-spline level set. Mach. Vis. Appl. 24, 1075–1091.
Tran, T.T., Pham, V.T., Shyu, K.K., 2014. Image segmentation using fuzzy energy-based active contour with shape prior. J. Vis. Commun. Image Represent. 25, 1732–1745.
Tsai, A., Yezzi, A., Wells, W., Temany, C., Tucker, D., Fan, A., Grimson, W.E., Willsky, A., 2003. A shape-based approach to the segmentation of medical imagery using level sets. IEEE Trans. Med. Imag. 22, 137–154.
Wang, H., Oliensis, J., 2010. Generalizing edge detection to contour detection for image segmentation. Comput. Vis. Image Underst. 114, 731–744.
Wang, Z.M., Soh, Y.C., Song, Q., Sim, K., 2009. Adaptive spatial information-theoretic clustering for image segmentation. Pattern Recogn. 42, 2029–2044.
Wright, J., Yang, A., Ganesh, A., Sastry, S., Ma, Y., 2009. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31, 210–227.
Xu, C., Prince, J.L., 1998. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 7, 359–369.
Yezzi, A., Tsai, A., Willsky, A., 2002. A fully global approach to image segmentation via coupled curve evolution equations. J. Vis. Commun. Image Represent. 13, 195–216.
Yue, W., Wenping, M., Maoguo, G., Hao, L., Licheng, J., 2015. Novel fuzzy active contour model with kernel metric for image segmentation. Appl. Soft Comput., 301–311.
Zhang, L., Yang, M., Feng, X., 2011a. Sparse representation or collaborative representation: which helps face recognition? In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 471–478.
Zhang, S., Zhan, Y., Dewan, M., Huang, J., Metaxas, D., Zhou, X., 2011b. Sparse shape composition: a new framework for shape prior modeling. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1025–1032.