Accepted Manuscript
Segmentation of Medical Images using Mean Value Guided Contour
Ali A. Kiaei, Hassan Khotanlou

PII: S1361-8415(17)30093-2
DOI: 10.1016/j.media.2017.06.005
Reference: MEDIMA 1266

To appear in: Medical Image Analysis

Received date: 15 December 2016
Revised date: 11 May 2017
Accepted date: 9 June 2017

Please cite this article as: Ali A. Kiaei, Hassan Khotanlou, Segmentation of Medical Images using Mean Value Guided Contour, Medical Image Analysis (2017), doi: 10.1016/j.media.2017.06.005
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights
- Applying the mean value theorem in contour repositioning to enhance the accuracy of medical image segmentation.
- Avoiding energy minimization gives the proposed model promising capability in complex geometries, convergence speed that does not degrade at saddle and stationary points, recovery of boundary ruptures, and contours whose edges are not rounded.
- Unlike standard optimization methods, the user is freed from the numerous parameters that would otherwise have to be set manually during segmentation.
- The model runs very fast; e.g., it is more than 1400 times faster than Gauss-Newton optimization, one of the best-performing methods.
Segmentation of Medical Images using Mean Value Guided Contour

Ali A. Kiaei1, Hassan Khotanlou1*
1: Department of Computer Engineering, Bu-Ali Sina University, Hamedan, Iran
*Corresponding author. Tel: +98-81-38272006; fax: +98-81-38292631
Ali A. Kiaei ([email protected])
Hassan Khotanlou ([email protected])
Abstract
Partial differential equation-based (PDE-based) methods are extensively used in image segmentation, especially in contour models. Difficulties associated with the boundaries, namely sensitivity to initialization, inadequate convergence to boundary concavities, and problems connected to saddle points and stationary points of active contours, make contour models perform weakly on complex geometries. The present paper takes advantage of the mean value theorem rather than minimizing an energy function for contours. It efficiently resolves the above-mentioned problems by applying this theorem to the gradient vectors of an edge map calculated from the image. Since the contour is computed in a straightforward manner from an edge map instead of a force balance equation, it differs from other contour-based image segmentation methods. To illustrate the ability of the proposed model in complex geometries and ruptures, several experiments were provided to validate the model. The experimental results demonstrate that the proposed method, which is called mean value guided contour (MVGC), is capable of repositioning contours into boundary concavities and is robust in complex geometries.
Keywords: Mean Value Theorem; Image Segmentation; Active Contour Models; Snakes; Gradient Vector Flow; Mean Value Guided Contour; Deformable Surface Models; Edge Detection

1. Introduction
Partial differential equation-based (PDE-based) methods are one of the important approaches to segmenting an image (Caselles et al. 1997). A popular technique in this class of methods is curve propagation: an initial curve is developed and steered in the direction of the lowest value of an energy function. With this end in view, one technique, known as the active contour method (Kass et al. 1988), parameterizes the contour with a sampling method and then evolves each component. To find the object boundary, it uses internal and external forces (Xu et al. 2000) as contour evaluators. The first one guarantees the regularity of the contour and is defined by the geometric features of the contour. The second one steers the curve to the object boundaries and is characterized by features of the image.
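For reference, the internal/external decomposition described above corresponds to the classical snake energy of Kass et al. (1988), which energy-minimizing contour models seek to minimize (the MVGC introduced later avoids such a functional altogether):

```latex
% Classical snake energy (Kass et al. 1988): internal (regularity) term plus external (image) term.
E_{\mathrm{snake}}
  = \int_{0}^{1} \tfrac{1}{2}\Big(\alpha\,\lvert v'(s)\rvert^{2} + \beta\,\lvert v''(s)\rvert^{2}\Big)
    + E_{\mathrm{ext}}\big(v(s)\big)\, ds ,
  \qquad v(s) = \big(x(s),\, y(s)\big),
```

where the weights α and β control the elasticity and rigidity of the contour v(s).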
1.1 Related works
There are two classes of active contours based on the fundamental source of the external force (Chan et al. 2005). In the first one, called region-based models (Wu et al. 2013a), the external force is created from attributes of the image. The second one, the edge-based models, uses the gradient of the image to compute the external force. Since using the gradient is effective for overcoming the drawbacks of active contours, edge-based models quickly came into focus. The gradient can be calculated directly from the image or through an intermediary. Despite the effectiveness of these techniques, the original design is widely criticized for its limits. These limits raise questions such as which sampling method should be selected, what the geometric properties of the curve are, whether the geometry is divided or combined, and how the problem can be generalized to higher dimensions. Using a finite difference in the role of the derivative, the energy minimization was largely driven by steepest gradient descent. Nevertheless, the effectiveness of active contour methods led many researchers to try to overcome their weaknesses, and various active contour methods were produced. One of their most common weaknesses is the noise sensitivity of the gradient, which leads these models to poor performance in noisy images. To obtain a better convergence rate, a fixed force was used by Caselles et al. (1997). In an indirect manner, the image gradient builds an external force field, and the active contour is applied in that field (Xu et al. 1998a). One of the most far-reaching and long-established external force fields of this kind is gradient vector flow (GVF) (Xu et al. 1998b, Jaouen et al. 2014). GVF forces are closely related to rotational forces, and this is a deep-seated dissimilarity between the contour methods that use GVF as an external force and the other ones. These models overcome weak convergence when the contour is initialized far from the object boundary (Xu et al. 1997). However, the smoothing term is the main difficulty of these contours, which leads to rounded contour edges. On the contrary, decreasing the value of the smoothing term deteriorates the amount of smoothing. Besides, these contour models cannot progress over saddle and stationary points; to state the matter differently, their ability to handle complex geometries is impoverished. By introducing an external force for snake advancement, called the alignment term, the problem connected to the alignment of the image gradient was overcome by Kimmel et al. (2003). This term, which technically was built by employing the divergence of the image gradient field, can drive a snake to a more precise edge location, especially in the absence of homogeneity. By dividing the contour into stationary and saddle points, the disadvantage of handling complex geometries connected to the contour models that use GVF as an external force was overcome by Li et al. (2005). However, their snake model was unable to initialize within the object boundary (Xie et al. 2008). In connection with handling complex geometries, Xie et al. (2008) also proposed a more sophisticated model. Although presenting further region or edge data to the active contours makes a better segmentation, it often leads to wrong segmentation in local areas. Representing a modified version of the energy function with different properties and different levels of region fitting energy, Ge Qi et al. (2013) proposed a region-based model; to define this type of energy function, a structure tensor was specified. Using the Harris corner detector, Kovacs et al. (2012) obtained feature points that were wrapped to get the contour initialization. By separating the fitting energy into global and local intensities, Yang et al. (2012) overcame the intensity inhomogeneity difficulty. There is some evidence suggesting that, more than anything, active contour models depend on external forces; therefore, there are numerous studies on discovering new external force fields (Yu et al. 2013). In addition to GVF, Zhou et al. (2013) defined some external forces using gradient and edge information. To segment areas that appear inhomogeneous, Fang et al. (2013) used the edge flow-based snake. However, by comparing edge-based methods with region-based ones, the outstanding competence of handling complex geometries with much less complexity can be seen for both methods. Based on the edge-flow model (Fang et al. 2013) and bearing in mind heterogeneous regions, Cai et al. (2015) emphasized edge detection; using a local gradient and global contrast based initialization model, a novel gradient algorithm was suggested that is well suited to reveal weak object boundaries and more capable of preventing multiple edges. Looking at an image as a Riemannian surface embedded in a higher-dimensional Euclidean space, Li et al. (2016) took a differential geometry viewpoint. Their model provided a unified framework for choosing the edge map for both vector-valued and scalar images, and it was able to retrieve object boundaries even in complex geometries.
1.2 Proposed approach

This paper presents the mean value guided contour (MVGC) model, which uses the mean value theorem for external forces. The methodology uses three-stage iterations. The first stage creates new contour points based on the mean value theorem. The second stage uses the external force to guide these points towards the object boundaries. The third stage refines the contour points to address the details more accurately. The MVGC is distinguished from all previous snake formulations in that it does not involve any energy minimization; instead, it is specified directly from the mean value theorem. Particular advantages of the MVGC over contour models that use energy minimization are as follows. Firstly, the capability of the MVGC in complex geometries is promising. Secondly, its convergence speed does not degrade at saddle and stationary points. Thirdly, favorable boundaries can be estimated; besides, using an appropriate external force makes the MVGC independent of the initial contour. Fourthly, the design of the method is simple. Fifthly, the user is freed from the numerous parameters that should be manually set during the segmentation. Finally, leakage at boundary ruptures and rounding of the contour edges, which are caused by the internal force, do not arise in the proposed method.

The remainder of this paper is outlined as follows: Section 2 provides an overview of the mean value theorem and the edge map. In Section 3 the proposed MVGC model is developed. Section 4 exhibits the experimental results, and Section 5 provides the connected discussion. The conclusion is presented in Section 6.
2. Background

2.1 Mean value theorem
The mean value theorem expresses that there is at least one point between the two endpoints of an arc at which the tangent to the arc is parallel to the secant through its endpoints. In other words, assuming a function f: [a, b] → R that is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), there is at least one point c (with a < c < b) such that

f'(c) = \frac{f(b) - f(a)}{b - a}.    (1)

The mean value theorem reduces to the more particular statement of Rolle's theorem when f(a) = f(b), and therefore f'(c) = 0. There are several generalizations of the mean value theorem, for example the mean value theorem in several variables, the mean value theorem for vector-valued functions, Cauchy's mean value theorem, the mean value theorem for definite integrals, and a generalization in complex analysis. Since there are scalar and vector-valued images, the first two are described below.

Several variables: The mean value theorem can be generalized to real functions of several variables. To this aim, a real function of one variable is created by parameterizing the variables. Let f: G → R be a differentiable function, where G is an open connected subset of R^n. For each pair of points x and y in G, the theorem states that there exists at least one real value c between 0 and 1, which makes (1 - c)x + cy a point on the line segment with x and y at its ends, such that

f(y) - f(x) = \nabla f\big((1 - c)x + c\,y\big) \cdot (y - x),    (2)

where ∇ indicates the gradient and "·" denotes the dot product. By defining a function g: [0, 1] → R, in which g(t) = f((1 - t)x + ty), the above equation is summarized to g(1) - g(0) = g'(c) for some 0 < c < 1.
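As a concrete illustration (this example is ours, not from the manuscript), the value of c in Eq. 2 can be computed in closed form for a simple quadratic function:

```latex
% Illustrative example: f(x_1, x_2) = x_1^2 + x_2^2, with x = (0, 0) and y = (1, 1).
\begin{aligned}
f(y) - f(x) &= 2, \\
\nabla f\big((1-c)\,x + c\,y\big)\cdot(y - x) &= (2c,\ 2c)\cdot(1,\ 1) = 4c, \\
\text{so Eq.~(2) holds for } c &= \tfrac{1}{2}.
\end{aligned}
```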
Vector-valued functions: For vector-valued functions, Dieudonné et al. (2013) replaced the mean value theorem with the mean value inequality, whereas Lang et al. (2013) used the Henstock-Kurzweil integral (Boccuto et al. 2009) to state the theorem in integral form. However, both have their own disadvantages: in the first theorem it is impossible to calculate the exact mean value, and in the second one every derivative should be Henstock-Kurzweil integrable. Specifically, assume that f: U ⊂ R^n → R^m is a differentiable function. The parameterization technique can be applied to each f_i (i = 1, ..., m), the components of f. So points x + α_i h can be found on the line segment that satisfy the following equation:

f_i(x + h) - f_i(x) = \nabla f_i(x + \alpha_i h) \cdot h,    (3)

where x, h ∈ R^n and α_i ∈ [0, 1]. Although there is typically more than one point x + α_i* h satisfying the above equation, the main problem is that these points mostly do not coincide for all i at the same time. Still, related to vector-valued functions, a particular kind of generalization of the mean value theorem exists. That means there is at least one α* ∈ [0, 1] so that

\nabla f(x + \alpha^* h) \cdot h = \int_0^1 \nabla f(x + t\,h) \cdot h \, dt,    (4)

where x and x + h are points in the open set U, and f is a continuously differentiable real-valued function. The right-hand side of Eq. 4 is computed by the fundamental theorem of calculus and a change of variable:

\int_0^1 \nabla f(x + t\,h) \cdot h \, dt = f(x + h) - f(x).    (5)

Dependent upon Eq. 4, the value ∇f(x + α* h) · h can therefore be replaced by this mean value. There are more generalizations to vector-valued functions, which are explained in Appendix A. In general, different generalizations of the mean value theorem exist, and based on the type of the function one of them can be used.
2.2 Edge map
A more desirable performance of active contours needs an appropriate choice of the edge map. Any binary or gray-level edge map defined in the image processing literature can be used (Jain et al. 1989). For example, by viewing the image I(x,y) as a function of continuous position, |∇(Gσ(x,y) * I(x,y))| and |∇I(x,y)| are two edge maps used by GVF-based active contours as their external forces, where ∇ indicates the gradient operator and Gσ(x,y) is a 2-D Gaussian function with standard deviation σ. By suggesting a unified framework for choosing the edge map for both types of image, Li et al. (2016) considered the subject from the differential geometry viewpoint. By regarding R^n as the feature space of the 2-D image I, a real symmetric and positive definite Riemannian metric tensor is calculated as follows:
G = \begin{pmatrix} 1 + \sum_k I_{k,x}^2 & \sum_k I_{k,x} I_{k,y} \\ \sum_k I_{k,x} I_{k,y} & 1 + \sum_k I_{k,y}^2 \end{pmatrix},    (6)

where G has two real eigenvalues λ1 and λ2. By assuming

G u_i = \lambda_i u_i, \quad i = 1, 2,    (7)

with orthonormal eigenvectors u_1 and u_2, Eq. 6 can be represented as:

G = \lambda_1 u_1 u_1^T + \lambda_2 u_2 u_2^T.    (8)

On the other side, the squared curve length element of the parameterized curve (x(l), y(l)) is

ds_1^2 = \big[ x'(l)^2 + y'(l)^2 \big] \, dl^2.    (9)

In addition, another squared curve length element, of the parameterized curve (x(l), y(l), I_1(x(l),y(l)), ..., I_n(x(l),y(l))), is

ds_2^2 = \Big[ x'(l)^2 + y'(l)^2 + \sum_k \big( \tfrac{d}{dl} I_k(x(l), y(l)) \big)^2 \Big] \, dl^2.    (10)

It can be reformulated as

ds_2^2 = g_{11}\,dx^2 + 2 g_{12}\,dx\,dy + g_{22}\,dy^2,    (11)

where g_{11}, g_{12}, and g_{22} are the entries of G. Assuming (a_1, b_1) = (dx/ds_1, dy/ds_1), ds_2^2 is divided by ds_1^2 to obtain the ratio

\frac{ds_2^2}{ds_1^2} = g_{11} a_1^2 + 2 g_{12} a_1 b_1 + g_{22} b_1^2,    (12)

where a_1^2 + b_1^2 = 1. So:

\lambda_2 \le \frac{ds_2^2}{ds_1^2} \le \lambda_1.    (13)

For a vector-valued image:

\lambda_1 = \frac{ g_{11} + g_{22} + \sqrt{ (g_{11} - g_{22})^2 + 4 g_{12}^2 } }{2}.    (14)

On the other hand,

\lambda_1 + \lambda_2 = g_{11} + g_{22} = 2 + \sum_k \lvert \nabla I_k \rvert^2.    (15)

Depending on the last two equations,

\lambda_1 \ge 1 + \tfrac{1}{2} \sum_k \lvert \nabla I_k \rvert^2.    (16)
Not only is the edge map λ1 bigger than the mean square gradient on the edge, which makes ∇f larger close to the edge, but it also extracts all the features of the image.
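As an illustration of how such an eigenvalue-based edge map could be computed, the sketch below evaluates the larger eigenvalue of the 2x2 metric tensor of Eq. 6 at every pixel. It is our own minimal NumPy rendering; the function and variable names are not from the original implementation:

```python
import numpy as np

def lambda1_edge_map(image):
    """Larger eigenvalue of the 2x2 metric tensor G (Eq. 6) at every pixel of an H x W x n image."""
    I = image.astype(float)
    if I.ndim == 2:                       # treat a scalar image as a single-channel stack
        I = I[..., None]
    Iy, Ix = np.gradient(I, axis=(0, 1))  # per-channel partial derivatives
    g11 = 1.0 + np.sum(Ix * Ix, axis=-1)
    g12 = np.sum(Ix * Iy, axis=-1)
    g22 = 1.0 + np.sum(Iy * Iy, axis=-1)
    # larger root of the characteristic polynomial of [[g11, g12], [g12, g22]]
    return 0.5 * (g11 + g22 + np.sqrt((g11 - g22) ** 2 + 4.0 * g12 ** 2))
```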
3. MVGC Model Development

In general, the proposed method utilizes the mean value theorem on the edge map of an image for contour repositioning. In the proposed method, the gradient vectors of the edge map should point toward the edges, so that they are almost parallel to the normals of the edges, and should have magnitudes proportional to the edge saliency near them. According to these conditions, different edge maps can be defined for this purpose. However, the proposed method uses E(x,y) = -Gσ(x,y) * I(x,y) as the edge map, where Gσ(x,y) is a 2-D Gaussian function with standard deviation σ and * denotes convolution.
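A minimal sketch of computing this edge map and its gradient field with NumPy/SciPy is given below; the function name and parameter values are illustrative and not taken from the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_map_and_gradient(image, sigma=2.0):
    """Edge map E = -G_sigma * I (Gaussian-smoothed image, negated) and its gradient field."""
    E = -gaussian_filter(image.astype(float), sigma=sigma)  # E(x, y) = -G_sigma(x, y) * I(x, y)
    gy, gx = np.gradient(E)                                  # gradient of the edge map (rows, columns)
    return E, gx, gy
```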
Fig. 1. There exists at least one point, d, on y = f(x) between a and b, such that the secant joining the endpoints of the line segment ab is parallel to the tangent at d. This direction is perpendicular to the gradient orientation of the edge map at point c on the line segment ab.
Pointing toward the edges means being parallel to the normals of the edges, and so the gradient vectors of the edge map are perpendicular to the edges. Depending on the image dimension and on whether the image is scalar or vector-valued, Eq. 1 to Eq. 5 can be employed in this work, as shown in Fig. 1. To achieve the desired dynamic contour for scalar (medical) images, Eq. 2 is reformulated in the present paper as a model that searches for a point c(x_c, y_c) between a(x_a, y_a) and b(x_b, y_b) on the line segment ab in a way that:

\nabla E(x_c, y_c) \perp (b - a),    (17)

where E is the edge map and ∇ indicates the gradient operator. In other words:

\nabla E(x_c, y_c) \cdot (b - a) = 0,    (18)

where "·" indicates the dot product.

Finding the desirable c is solved by searching all points between a and b on the line segment ab. Based on the mean value theorem, given a planar arc between the endpoints of the line segment ab, there is at least one point, d, at which the tangent to the arc is parallel to the secant through its endpoints. The gradient orientation of the edge map at point c is perpendicular to this direction. As shown in Fig. 1, the closer a point on the line segment ab is to the point d, the larger its gradient magnitude becomes. Besides, its direction on the line segment ab is an orientation from c to d. So, by starting from c and moving in the orientation of its edge map gradient direction, it comes closer to d as long as the gradient magnitude increases at each point. Once it reaches d and continues on this path, the edge map gradient magnitude of those points starts to decrease and their direction becomes opposite to that at point c. As in edge-based active contours, in this model the edge map of the image is built. An initial contour can be formed by connecting several points on the image to each other, which are identified manually. These selected points are guided toward the boundary using the edge map gradient vectors. The line segments with such guided endpoints together form a contour, which starts estimating the object boundary. As shown in Fig. 2, after initializing the first contour and guiding its points towards the object boundaries, the methodology iterates over three stages. The first stage is creating new contour points. To provide a better estimation, this stage proposes creating a new point in each line segment; with these newly created points, each line segment is decomposed into two. Guiding these points towards the object boundaries is the second stage. In addition, the third stage is refining contour points using the obtained information to address the details more accurately. Each of these steps is described below.

Fig. 2. The block diagram of the proposed method.
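Read as pseudocode, the overall iteration could be organized as in the following sketch; the three stage functions are passed in as callables because their concrete realizations are described only in the subsections below, and all names here are ours rather than the authors':

```python
def mvgc(initial_contour, create_new_points, guide_to_boundary, refine, tol=0.01, max_iter=100):
    """Skeleton of the MVGC iteration; the three stage functions are supplied as callables."""
    contour = [guide_to_boundary(p) for p in initial_contour]   # move the user-given points onto the boundary
    for _ in range(max_iter):
        n_before = len(contour)
        created = create_new_points(contour)                    # stage I: one new point per line segment
        guided = [guide_to_boundary(p) for p in created]        # stage II: push the new points onto the boundary
        contour = refine(contour, guided)                       # stage III: interleave and prune outliers
        # One possible proxy for "contour change" (Section 3.4): relative growth in the point count.
        if abs(len(contour) - n_before) < tol * n_before:
            break
    return contour
```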
3.1 Creating new contour points

Fig. 3. Stage I: creating new points.
At this stage, the aim is to create new contour points that help to segment the objects in more detail. To create new points, the method uses the mean value theorem. As shown in Fig. 5, for every two neighboring contour points such as a and b, the proposed method looks for a point c such that, after guiding it along its edge map gradient vector and reaching the point d, the line segments ad and db describe the object boundary better than the line segment ab. To this end, the mean value theorem states that there is at least one point (such as d) on the object boundary whose tangent line is parallel to the secant line connecting the neighboring contour points (a and b), and both are perpendicular to the gradient vector at that point. In other words, since the endpoints of each line segment touch the object boundaries, the proposed method looks for a new point on the boundary whose tangent line is parallel to the secant line of that line segment. Because the direction of the edge map gradient vectors on the boundary is approximately perpendicular to the boundary route, the direction of the edge map gradient vector at the new points should be perpendicular to the secant line segment with the related guided endpoints.
On the other hand, because of the Gaussian function, the edge map gradient vectors diffuse. Among the points of the image, the object boundaries, having larger gradient magnitudes, play the greater role in spreading them.
As shown in Fig. 5(a), there exists a point c on the line ab whose edge map gradient vector is propagated by the gradient vector of the suggested point d. The point c has several characteristics. Firstly, its edge map gradient vector is approximately parallel to that of d. Secondly, starting from c and moving in the direction of its edge map gradient vector brings it to the point d. Finally, all the points on the straight line from c to d have edge map gradient vectors in the direction of c's one. These characteristics together convert the issue of finding the point d on the object boundary into the issue of finding a point c on the line segment ab whose edge map gradient vector is perpendicular to the line segment ab. Since the images are digital, pure orthogonality makes no sense; therefore, the implementation of this stage is condensed into finding such a point c that holds Eq. 18 for points a, b. Since digital images are discrete, a point c that exactly satisfies Eq. 18 will rarely be found. The following equation is used instead:

c = \arg\min_{p \in \overline{ab}} \big| \nabla E(p) \cdot (b - a) \big|,    (19)

where p can be any point on the line segment ab. However, for the first iteration, the line segments are introduced by the user as the initial contour.

For creating new contour points, the proposed method applies the mean value theorem to each line segment separately, as follows. Assume that the two endpoints of this line segment are a and b. Among the points p on the line segment ab, find the point c which holds Eq. 19; that is, the method looks for a point c on the line segment ab for which the absolute value of the dot product of its edge map gradient vector and the line segment ab attains its minimum. Add point c to the new contour points. However, after all new contour points are created, they should pass stages II and III to be guided towards the object boundary and be refined. The block diagram of this stage is shown in Fig. 3.

Fig. 5. The overall process of the proposed method for creating and repositioning contour points: (a) stage I: finding a point c on the line segment ab whose edge map gradient vector is perpendicular to the line segment ab; (b) stage II: guiding point c towards d on the object boundary, and the subsequent splitting of the line segment ab into the two line segments ad and db.
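A minimal sketch of stage I for a single line segment is shown below, assuming the edge map gradient components gx and gy are stored as arrays indexed by [row, column]; it simply samples candidate points along ab and applies Eq. 19 (the function name and sampling density are illustrative):

```python
import numpy as np

def create_point_on_segment(a, b, gx, gy, n_samples=64):
    """Stage I for one segment (Eq. 19): the sample point p on ab whose edge-map gradient
    is most nearly perpendicular to ab, i.e. minimises |grad E(p) . (b - a)|."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    t = np.linspace(0.0, 1.0, n_samples)
    pts = a[None, :] + t[:, None] * (b - a)[None, :]                 # candidates along ab, as (x, y)
    cols = np.clip(np.rint(pts[:, 0]).astype(int), 0, gx.shape[1] - 1)
    rows = np.clip(np.rint(pts[:, 1]).astype(int), 0, gx.shape[0] - 1)
    dots = np.abs(gx[rows, cols] * (b[0] - a[0]) + gy[rows, cols] * (b[1] - a[1]))
    return tuple(pts[np.argmin(dots)])                               # the candidate for the new point c
```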
3.2 Guiding to the boundary

Fig. 4. Stage II: guiding newly created points toward the boundaries.
The goal of this stage is to guide the contour points created in the previous stage to the object boundaries. The characteristics of the edge map gradient vector at a newly created point were mentioned in the previous stage. By starting from the newly created points and guiding them in the direction of their edge map gradient vectors, they move towards the points on the object boundaries at which the mean value theorem holds. During each straight guiding route, the magnitude of the edge map gradient vector increases and its direction stays almost parallel to the direction at the starting point. This is because, by approaching the object boundary, the effectiveness of the diffusion of the edge gradient increases. Conversely, by receding from the object boundary, the
edge map gradient magnitude decreases. The direction of the edge map gradient vector also points toward the object boundaries existing around it. Therefore, when each newly created point moves in a straight line in the direction of the edge map gradient vector at its starting location, the edge map gradient magnitude increases per step until it touches the object boundary. After crossing the boundary, its magnitude starts to decrease. From the other point of view, along this path the direction of the edge map gradient vector is almost the same as that at the starting point before reaching the boundary; after crossing the boundary, it is opposite to the edge map gradient vector of the starting location. Hence, at this stage, the created point moves in a straight line in its edge map gradient vector direction until the magnitude of the edge map gradient vector starts to decrease and its direction is reversed along the path.

To implement stage II, for each point c found in stage I as a new contour point, the proposed method operates as follows. At first, it finds the angle of its edge map gradient vector. For a scalar 2-D image this angle is simply computed by

\theta_c = \arccos\!\left( \frac{v_c \cdot e_1}{\lVert v_c \rVert \, \lVert e_1 \rVert} \right),    (20)

where v_c is the edge map gradient vector at point c, and e_1 = (1, 0) is the coordinate unit vector. Secondly, the proposed method considers the point that is the neighbor of c in the direction of θ_c as d_0. Point d_0 should have an edge map gradient vector that is nearly parallel to that of c; besides, its edge map gradient magnitude should be larger. In other words:

\theta_{d_0} \approx \theta_c, \qquad \lVert v_{d_0} \rVert > \lVert v_c \rVert,    (21)

where θ_{d_0} and ‖v_{d_0}‖ indicate the angle of the edge map gradient vector and the edge map gradient magnitude at point d_0, respectively. Thirdly, the proposed method considers the point neighboring d_0 along the direction θ_c as d_1. If it satisfies the same conditions, point d_1 is closer to the object boundary than the point d_0. The process of considering new points d_i (i = 0, 1, ...) as the neighbor of the previous point along the direction θ_c is continued by the proposed method until Eq. 22 no longer holds for a new point d_n:

\theta_{d_{i+1}} \approx \theta_{d_i}, \qquad \lVert v_{d_{i+1}} \rVert > \lVert v_{d_i} \rVert.    (22)

The point d_n has the property that its edge map gradient direction is opposite to θ_c and its magnitude has started to decrease. As explained above, this point is located on the object boundary, and it can be named d. The proposed method considers this point as a new contour point instead of c. Finally, as shown in Fig. 5(b), the model at this stage splits the line segment ab into the line segments ad and db. The latter is clearly seen in Fig. 6. Moreover, the block diagram of this stage is shown in Fig. 4.
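Stage II for one newly created point might be sketched as follows, again assuming gradient arrays gx and gy indexed by [row, column]; bounds checking is omitted for brevity and the names are ours:

```python
import numpy as np

def guide_to_boundary(c, gx, gy, max_steps=500):
    """Stage II for one point: march from c in a straight line along the edge-map gradient
    direction at c until the gradient magnitude stops growing or the direction reverses."""
    x, y = float(c[0]), float(c[1])
    r, q = int(round(y)), int(round(x))
    vx0, vy0 = gx[r, q], gy[r, q]                        # gradient at the starting point c
    mag = np.hypot(vx0, vy0)
    if mag == 0.0:
        return (x, y)                                    # flat region: nothing to follow
    ux, uy = vx0 / mag, vy0 / mag                        # fixed unit step direction (straight-line march)
    for _ in range(max_steps):
        nx, ny = x + ux, y + uy
        vx, vy = gx[int(round(ny)), int(round(nx))], gy[int(round(ny)), int(round(nx))]
        new_mag = np.hypot(vx, vy)
        if new_mag < mag or vx * vx0 + vy * vy0 < 0.0:   # magnitude drops or direction reverses: boundary reached
            break
        x, y, mag = nx, ny, new_mag
    return (x, y)                                         # the guided point d
```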
3.3 Refinement
In the last two stages, the number of contour points, which lie on the object boundary, was doubled. Thus a more detailed description of the object boundary is obtained relative to the original contour.
Fig. 7. Stage III: refining newly guided points.

3.4 Stop condition
The stop condition of the iterations is the change of the contour compared to itself in the last iteration. If this change is less than a threshold, the method stops the algorithm. However, the stop condition may be replaced; for example, the method can iterate until the entire object boundary is detected exactly.
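One plausible realization of this stop test, assuming both contours are given as lists of (x, y) points, is sketched below (this formulation is ours, not necessarily the authors'):

```python
import numpy as np

def contour_change(current, previous):
    """Mean distance from each point of the current contour to its nearest point on the previous contour."""
    cur, prev = np.asarray(current, dtype=float), np.asarray(previous, dtype=float)
    d = np.linalg.norm(cur[:, None, :] - prev[None, :, :], axis=-1)  # pairwise point distances
    return d.min(axis=1).mean()

# Inside the iteration loop: if contour_change(contour, previous_contour) < threshold, stop.
```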
4. Experimental Results
In this section, some key features of the MVGC are shown by presenting a few examples of its calculation on simple and complex objects. With this end in view, different types of experiments were used. At first, the proposed method was implemented on a simple object, to show how it works. Secondly, objects were chosen whose boundaries are complex because of critical points, convexities, and concavities. Thirdly, besides these complexities, high-density noise was added to the images and rough initializations were chosen to make the experiments difficult; the aim was to show the capability of MVGC in segmenting them. Fourthly, segmenting objects with ruptures on the boundary, either on straight lines or at critical points, is shown; reconstructing the remaining boundary without leakage, with the ruptures located at critical points, is illustrated. Fifthly, complex noisy gray-level images were used to show the ability of MVGC in these areas. And finally, MVGC was implemented on higher-dimensional medical images and the results were compared with other works.
4.1 Convexity, concavity, and convergence
The MVGC was performed on a heart-shaped object in the first experiment, as shown in Fig. 8. While a simple heart-shaped object is shown in Fig. 8(a), in Fig. 8(b) the edge map of the input image along with four strips - green, red, gray, and blue - is shown. The green strip indicates the initial contour that was specified by the user. As can be seen, the contour has four vertices. These vertices were transported to the vertices of the red quadrilateral using the guiding-to-the-boundary stage. The guiding route is identified with blue strips. Therefore, when the initial green contour is specified, the guiding-to-the-boundary stage transfers it to the red contour. In practice, the blue strips lead the green vertices to the object boundaries, which are the red vertices. Along the blue strips and starting from the green vertices, the magnitude of the edge map gradient vector increases and the direction is almost preserved. However, after passing the red vertices, the magnitude of the edge map gradient vector decreases and its direction is reversed. In other words, the blue guidance strips end at the red vertices, where the magnitudes of the edge gradient vectors attain their maximum values. Then the "creating new contour points" stage starts to find the best point on each red strip, where its edge map gradient vector direction is perpendicular to the red strip. Since this orthogonality cannot be realized precisely in digital images, at this stage the method tries to find a point on each red strip that is closest to a perpendicular angle. These newly created contour points were guided using the "guiding to the boundary" stage and are shown by gray strips. Therefore, the initial contour with four points was converted to a contour with eight points, which is shown in Fig. 8(c). In the same way, eight new points were introduced and their guidance routes are shown by gray strips. These sixteen contour points are shown in Fig. 8(d), which were obtained after the second iteration. Similarly, Fig. 8(e) shows contours with 32 points. The result was obtained after seven iterations with 476 points, which is shown in Fig. 8(g). Having notable convergence and a wide capture range together with the mean value theorem, the proposed method is thoroughly able to segment the object boundary. After the "guiding to the boundary" stage, the "refinement" stage is implemented so that new points with no information can be removed. The numerical values of Fig. 8 are demonstrated in Table 1. As another example, Fig. 9 shows the use of MVGC on an object with sharp edges, concavities, and convexities. The initial contour, the newly created points, the guided contour, and the guided points are shown in Fig. 9(b). In Fig. 9(c) the first iteration is performed and a contour with 10 vertices is obtained. The results of the second and third iterations are shown in Fig. 9(d) and Fig. 9(e), respectively. In Fig. 9(f) the result after 10 iterations is shown. The gray strips indicate the ability of the proposed method to find spots on the object boundary that have the highest convexity and concavity with respect to the contour line segments. This makes the method quite fast at converging to the object boundary, because the Euclidean distance between the object boundary and the contour is greatly reduced in every iteration. One way to compute this Euclidean distance is to convert the Cartesian coordinates to polar coordinates with the object's center of gravity as the center of the coordinates.
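The polar-coordinate distance mentioned above can be sketched as follows; the helper is illustrative and assumes the contour is a list of (x, y) points:

```python
import numpy as np

def radial_profile(points):
    """Polar representation (angle, radius) of contour points about their centre of gravity."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)                       # centre of gravity of the contour points
    dx, dy = pts[:, 0] - centre[0], pts[:, 1] - centre[1]
    return np.arctan2(dy, dx), np.hypot(dx, dy)     # angle and radius of every point

# Comparing the radii of the contour and of the boundary at matching angles gives the
# per-iteration distance discussed above.
```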
Fig. 6. The close-up of the edge map gradient vector. Moving from a point in the direction of its edge map gradient vector leads it to the object boundary. Along this path, the magnitude of edge map gradient vectors increases, and they are almost in the same direction of the path. After crossing the object boundary, the magnitude starts to decrease and the direction is reversed.
To show how MVGC works on noisy images, salt and pepper noise with 50% density was added to the images in the last three experiments. Moreover, the proposed method was employed with rough initializations. These initialization contours were outside, inside, or crossing the object boundaries. Fig. 11 shows the first noisy gray-scale experiment with rough initialization. In its top row, the rough initial contour that is located outside of the object boundary and the red guided contour are shown in Fig. 11(a). Fig. 11(b) shows the 226 points of the contour after six iterations. The result after 34 iterations is shown in Fig. 11(c). In the bottom row of Fig. 11, where the rough initial contour is located inside the object boundary, the resulting contour converges to the same object boundary. Fig. 11(d) shows that the five points, which form the initial green contour, are guided to the boundary using stage II (red). Fig. 11(e-f) demonstrate the contour after five iterations and the result after 15 iterations, respectively. As shown in Fig. 12, two rough initializations, high-density noise, and sharp concavities and convexities in the object boundary are combined to challenge the convergence of the proposed method. Fig. 12(a) and Fig. 12(d) show two different initializations. The first one is located outside the object boundary, and the second one crosses the object boundary in some regions. Fig. 12(b) and Fig. 12(e) show their contours using the proposed method after some iterations, respectively. However, MVGC overcomes these difficulties as shown in Fig. 12(c) and Fig. 12(f), i.e., the results completely converge to the object boundaries.
In the third experiment, as shown in Fig. 10, the MVGC is performed on a hand-shaped object. It is a black-and-white image without noise that contains an object with deep convexities and concavities. In Fig. 10(b) the initial green contour, the newly created points, and the red and gray guidance routes are shown. The first iteration is shown in Fig. 10(c). Fig. 10(d) to Fig. 10(f) show the results after 2, 4, and 8 iterations, respectively. The result is shown in Fig. 10(g) with 1848 contour points after 15 iterations. Table 1 demonstrates its numerical values. However, the object in Fig. 10 consists of four inner black U-shaped regions near the palmar digitals and five inner white U-shaped regions on the fingertips. These U-shaped areas are intertwined within each other and they are very close together relative to their height. As shown in Fig. 10(c), if the object is considered as a human hand, in the first iteration a new contour point is created on a line segment between the ring and middle finger endpoints, which is correctly guided towards the palmar digital region. However, this new point cannot be found in Fig. 10(d). It is notable that the ratio of the length of the guiding strip to the length of that line segment is more than a threshold, so the "refinement" stage assumes this new point is located on a different object. Therefore, at this step, a new contour is created by assuming the existence of two different objects in the image. However, in the next iterations, when these two contours become closer to each other, they are merged into a combined contour. Although this type of refinement slightly slows down the convergence rate, it guarantees that the proposed method segments deep concavities and convexities.
Fig. 8. Estimating black-white simple heart-shaped object: (a) 617*449-pixel input black-white image; (b) manual and its optimized contour in the first step; (c-e) the proposed method with 1, 2, and 3 iterations, respectively; (f) the final result after 7 iterations in less than 2.1s. Number of iterations and points, which estimate the object boundary, are shown in Table 1.
Fig. 9. Estimating black-white concave and convex surfaces without noise: (a) 512*512-pixel input image; (b) manual and optimized contour in the first step; (c-e) the proposed method with 1, 2, and 3 iterations, respectively; (f) the final result after 10 iterations in less than 3.3s. Number of iterations and points, which recover the image, are shown in Table 1.
Fig. 10. Estimating black-white hand-shape object with deep U-shapes: (a) 376x411-pixel input image; (b) manual and optimized contour in the first step; (c-f) the proposed method with 1, 2, 4, and 8 iterations respectively; (g) the result after 15 iterations in less than 23s. Number of iterations and points, which recover the image, are shown in Table 1.
Fig. 11. Segmenting grayscale noisy image with 50% density of salt and pepper noise using MVGC: (a) rough initialization from the outside of the object with 4 points (green) and the result of moving vertices using stage II (red); (b) the contour after 6 iterations; (c) the final result after 34 iterations in 19.71sec and 1878 points; (d) rough initialization from inside of the object with 5 points (green) and after guided to the object boundary (red); (e) the contour after 5 iterations; (f) the final result after 15 iterations in 22.36s, 1763 points. More details are shown in Table 2.
Fig. 13. Segmenting grayscale noisy image with 50% density of salt and pepper noise using MVGC: (a) initialization with 8 points, while it crosses the object boundary in some regions (green); (b-e) results after 40, 80, 120, and 160 iterations; (f) result after 197 iterations in 170.53s, 3938 points. See Table 2.
Fig. 13 is another experiment that combines an initialization crossing the boundary, high-density noise, and an object boundary with deep convexities and concavities. The aim was to converge to the hand-shaped object. In Fig. 13(a) the initial green contour and the guided red contour are shown. The 40th iteration is shown in Fig. 13(b), where the contour contains 2210 points. The contours produced after 80, 120, and 160 iterations are shown in Fig. 13(c) to Fig. 13(e), respectively. Fig. 13(f) shows the resulting contour with 3938 points after 197 iterations, which has completely converged to the object boundary. By comparing the experiments shown in Figs. 8-10 with Figs. 11-13, which respectively contain exactly the same objects, some conclusions can be drawn. At first, the high-density noise in the images could not stop the convergence of the contours produced by the MVGC. Secondly, the proposed method can handle objects with either sharp concavities or convexities, or deep convexities and concavities in the object boundaries. Thirdly, the proposed method is dynamic and is regularized without any parameter tuning; the complexities in the experiments just cause the model to produce more contour points to reach a reasonable accuracy. The reasons why the proposed method has these advantages are covered in the discussion.
Fig. 12. Segmenting grayscale noisy image with 50% density of salt and pepper noise using MVGC: (a) rough initialization from the outside of the object with 7 points (green); (b) result after 6 iterations; (c) the final result after 31 iterations in 11.33s, 1064 points; (d) initialize with 5 points, while it crosses the object boundary in some regions (green) (e) result after 2 iterations; (f) the final result after 29 iterations in 10.37s, 1028 points. See Table 2.
4.2 Ruptures and Convergence
This section presents a few images with objects having ruptures to show their impact on the convergence of the proposed method. The desired objects are closed curves, which are ruptured in several areas. It is desirable that the proposed method reduce the role of the ruptures and continue the curves as a human would.
Fig. 14 shows the use of MVGC on an object with ruptures. In Fig. 14(a), a dodecagon with four ruptures is demonstrated. Three of them are along a straight line, while the fourth one is located on a vertex. The initial contour, the newly created contour points, and the guidance strips are shown in Fig. 14(b). The first iteration is shown in Fig. 14(c). More accurate estimates of the object boundary are shown in Fig. 14(d) to Fig. 14(e), with the results after 2 and 3 iterations, respectively. The result obtained after seven iterations is shown in Fig. 14(f). The "guiding to the boundary" stage tends to guide contour points to the vertices of the object, where the mean value theorem holds true. This tendency makes the contour converge to the object boundary very fast, even in the first iteration. This experiment demonstrates that the ruptures on straight lines are recovered fast. The ruptures are recovered by the "guiding to the boundary" stage using the edge map gradient vectors. Therefore, depending on the severity of the ruptures located on sharp boundaries, they can be recovered in a way that is understandable to humans, or they become paths that connect the two ends of the rupture with a shorter length. To show the convergence of the MVGC in the presence of ruptured boundaries, another experiment is demonstrated in Fig. 15. An object with four ruptures on the boundary is shown in Fig. 15(a). Two of them are on a straight line and the two others are on vertices with right and reflex angles. Regardless of the extra explanation of Fig. 15(b) to Fig. 15(f), the result after nine iterations is shown in Fig. 15(g). Since the initial contour was outside of the object, the right-angle rupture was recovered fast. However, the reflex-angle rupture was recovered at a slower rate. This is because the edge map gradient vectors tend to converge to the vertices with larger angles. In other words, when the contour is initialized from outside the object, this right-angle rupture appears to be a reflex angle, and vertices with reflex angles are more accessible using the edge map gradient vector.

Fig. 14. Recovering simple dodecagon with ruptures: (a) Initial curve; (b) manual and optimized contour in the first step; (c-e) the proposed method with 1, 2, and 3 iterations respectively; (f) the result after 7 iterations in less than 2.1s. Number of iterations and points, which recover the curve, are shown in Table 1.

Fig. 15. Recovering simple cross with ruptures: (a) Initial curve; (b) manual and optimized contour in the first step; (c-f) the proposed method with 1, 2, 3, and 4 iterations respectively; (g) the result after 9 iterations in less than 4s. Number of iterations and points that recover the object are shown in Table 1.
4.3 Complex Noisy Gray-Level Images
As further gray-scale experiments, in Figs. 16-20 MVGC is performed on some medical images. In Fig. 16, the left ventricle of a human heart is used: the magnetic resonance image that Xu et al. already segmented (Xu et al. 1998b). The initial contour and the result of GVF are shown in Fig. 16(a). An almost identical initial contour (green) with eight points and the newly created contour points (red) are shown in Fig. 16(b). Fig. 16(c) shows the 105-point contour after five iterations. The process of convergence to the object boundary is demonstrated in Fig. 16(d) to Fig. 16(e), where the MVGC passes its 10th and 15th iterations, respectively. After eighteen iterations, the final 597-point contour is shown in Fig. 16(f), which can be compared with the GVF method in Fig. 16(a). More details on the iterations, number of points, and time are shown in Table 2.
Fig. 16. Rounding problem. Estimating the magnetic resonance image of left ventricle of the human heart: (a) the original image, the initialization and convergence using GVF snake; (b) approximately same initialization (green) with only 8 points and the first step using MVGC on the 475x475 pixels image; (c-e) the proposed method after 5, 10, and 15 iterations, respectively; (f) the result after 18 iterations.
Fig. 17. Leakage problem in weak boundaries. Segmentation of a cardiac MR image with weak boundaries: (a) the original image and the initial green contour; (b) result of GVF with its default parameters, i.e. alpha=0.5, beta=0, gamma=1, kappa=0.6, Dmin=0.5, Dmax=2, Mu=0.1 (Liu et al. 2009); (c) result of MVGC after 7 iterations with 159 points in 1.23 s; (d) another initial contour near the object boundaries; (e) result of GVF when the parameters are optimized by manually tuning them (Liu et al. 2009); (f) result of MVGC after 5 iterations with 137 points in 1.41 s. Number of iterations and points are shown in Table 2.
Fig. 18. Leakage problem in weak boundaries. Segmentation of a cardiac CT image (Wu et al. 2013b): (a) the original image; (b) GVF result with µ = 0.1 (Wu et al. 2013b); (c) NGVF result with µ = 0.1 (Wu et al. 2013b); (d) MVGC result after 8 iterations, 191 points, in 0.59 s.
Table 1. Number of points required for estimating the object boundary using MVGC: (col 1) the object names; (cols 2-12) number of points which estimate the images per iteration; (col 13) total time using a 1.4 GHz Intel Core i5 processor and 4 GB 1600 MHz DDR3 memory.

Object                     Initial  it.1  it.2  it.3  it.4  it.5  it.6  it.7  it.8  it.10  it.15  time (sec)
BW heart-shape               4       8     16    32    64    128   254   476   --    --     --     2.0392
BW concave and convex        5       10    20    40    76    144   264   443   614   873    --     3.2606
BW hand-shape                6       12    22    43    81    158   289   469   689   1138   1848   22.986
Dodecagon with ruptures      6       12    24    48    92    166   273   356   --    --     --     2.0341
Cross with ruptures          6       12    24    48    90    169   286   438   573   --     --     3.962
Fig. 17 concentrates on leakage at weak boundaries. To this aim, a cardiac MR image that has weak boundaries was estimated by GVF and MVGC. Fig. 17(a) shows the original image initialized by a 5-point contour (green). In Fig. 17(b) the result of the GVF method with its default parameters is shown (Liu et al. 2009); leaking out of the weak boundaries is clearly observable. To obtain a better result for GVF, the parameters were tuned and an initial contour close to the desired boundary was applied in Fig. 17(d). Fig. 17(e) shows the result of GVF with a circle-shaped initialization contour close to the boundary and parameters optimized for this image (Liu et al. 2009). Even in such a favorable situation, the leakage problem still persists. Fig. 17(c) and Fig. 17(f) show the results of MVGC when the initial contours are as before, shown in Fig. 17(a) and Fig. 17(d). More details on the iterations, number of points, and time are shown in Table 2. In Fig. 18, MVGC was employed to segment a cardiac CT image of the endocardium of the left ventricle of the human heart (Wu et al. 2013b). In this experiment, the result of the proposed method was compared to some active contour models, while the original image in Fig. 18(a)
is affected by noise, weak edges, and inhomogeneity. Fig. 18(b) shows that the result of the GVF model leaked out of the weak boundaries, especially where they are close to strong edges (Jifeng et al. 2007). In Fig. 18(c), the normal gradient vector flow (NGVF) model was employed with an almost identical initial contour to segment the image (Jifeng et al. 2007). Although its result was much better than GVF, it failed near neighboring strong edges. The result of MVGC is shown in Fig. 18(d), where its eight-point initial contour was made almost the same as the initial contour used for GVF. As another grayscale experiment, in Fig. 19, a cell tissue slide image is segmented by different active contour models. Fig. 19(a) and Fig. 19(b) show the results of GVF and NGVF, respectively; both of them fail to segment the image. In Fig. 19(c), the vector field convolution (VFC) active contour model is used (Li et al. 2007). The result shows that it also leaks out of the boundary. The satisfactory result of MVGC is shown in Fig. 19(d), while the initial contour was almost the same as for GVF. Moreover, a rough initial contour and the result of MVGC when this initial contour is used are shown in Fig. 19(e).
Fig. 19. Leakage problem when weak boundaries are close to strong edges: Estimating cell tissue slide image (Wu et al. 2013b). (a) GVF result with µ = 0.1 (Wu et al. 2013b); (b) NGVF result with µ = 0.1 (Wu et al. 2013b); (c) VFC result with c = 1.7 (Wu et al. 2013b); (d) green initialization contour and MVGC result in 17 iterations, 564 points, in 2.05 s; (e) green rough initialization contour and MVGC result in 16 iterations, 597 points, in 1.94 s.
Fig. 20. Leakage problem when weak boundaries are close to strong edges: Segmentation of human lung CT image (Wu et al. 2013b). (a) original image; (b) initial curves; (c) GVF result with µ = 0.1 (Wu et al. 2013b); (d) NGVF result with µ = 0.05 (Wu et al. 2013b); (e) VFC result with c = 1.7 (Wu et al. 2013b); (f) ILGVF result with µ = 0.1, Dt = 0.5 (Wu et al. 2013b); and (g) MVGC result after 12 iterations, 268 points, in 0.96 s.
Table 2. More details on experiments using MVGC: (col 1) grayscale images; (col 2) number of initial points; (cols 3-7) number of iterations, time in seconds, and number of points, written as iteration_time_points; using a 1.4 GHz Intel Core i5 processor with 4 GB memory.

Image                                                         # of initial points   Iteration_time(s)_points
Noisy heart-shape with outside rough initialization            4                    6_0.61_226      34_19.71_1878
Noisy heart-shape with inside rough initialization             5                    5_0.56_142      15_22.36_1763
Noisy concave and convex with outside rough initialization     7                    6_0.62_307      31_11.33_1064
Noisy concave and convex with crossing-the-boundary init.      5                    2_0.32_19       29_10.37_1028
Noisy hand-shape with crossing-the-boundary initialization     8                    40_23.16_2210   80_55.04_3529   120_98.74_3701   160_135.29_3811   197_170.53_3938
Left ventricle                                                 8                    5_0.32_105      10_0.97_312     15_1.78_495      18_2.85_597
Cardiac CT image                                               12                   4_0.34_84       8_0.59_191
Cardiac MR image with 5-point initial contour                  5                    7_0.52_159
Cardiac MR image with circle-shape initial contour             18                   5_0.16_137
Cell tissue                                                    8                    6_0.56_228      10_1.03_419     17_2.05_564
Cell tissue with rough initialization                          5                    4_0.42_76       8_0.82_369      12_1.35_554      16_1.94_597
Lung CT image                                                  4                    5_0.41_108      8_0.62_217      12_0.96_268
In Fig. 20, a comparison of MVGC and different active contour models on a CT image of a human lung is illustrated. The aim is to segment the right parenchyma of the lung, whose boundaries are weak while strong edges are very close to them. Fig. 20(a) shows the original image, and the initial contour is shown in Fig. 20(b). Fig. 20(c) to Fig. 20(e) respectively show the results of the GVF, NGVF, and VFC active contour models, where they leaked out of the true boundary. The result of the vector field using the infinity Laplacian (ILGVF), in Fig. 20(f), also fails to segment the true boundary (Wu et al. 2013b). In contrast, as shown in Fig. 20(g), MVGC correctly segments the boundary.
In general, the reason why MVGC does not leak out of weak boundaries while easily passing even high-density noise is as follows: weak boundaries consist of a great number of neighboring points with low gradient magnitudes, while each noise point comprises one (or close to one) point with (at worst) a high gradient magnitude. MVGC reduces the effect of noise by using the edge map. Even if a noise point attracts a moving contour point in the "guiding to the boundary" step and captures it, the refinement step finds that this captured contour point is far from the other ones and immediately prunes it. Even if that point escapes these two steps, it is in imminent danger of facing them in the next iterations, when they force that contour point either to move towards the boundary or to be pruned. So it is reasonable for MVGC to require more time to converge to the object boundaries in the presence of noise. On the other hand, exactly the same reasoning prevents contour points from leaking out of weak boundaries. Like noise, the neighboring points in the weak boundaries capture the contours that move towards them. However, the contour points stay at these local extrema even if a strong boundary exists behind them. This is because weak boundaries have a great number of points stuck together that keep all captured contour points, so neither does the "refinement" step find each of them far from the others, nor is the "guiding to the boundary" step willing to push them away from the local extrema.
4.4 4D (3D + time) Gray-Level Images
After the experiments on binary and 2D gray-scale images, the MVGC was performed on four-dimensional images. A known dataset, introduced by Andreopoulos et al. (2008), was used for the experiments. To show how MVGC works, different known and hybrid models were compared to it.
4.4.1. The dataset
The dataset includes 7980 short-axis cardiac MR images from 33 patients with heart abnormalities, whose ages were under 18. The dataset was introduced by Andreopoulos et al. (2008) and provided by the Department of Diagnostic Imaging of the Hospital for Sick Children, Toronto, Canada. The abnormalities consist of aortic regurgitation, enlarged ventricles and ischemia, cardiomyopathy, and other abnormalities related to the left ventricle. For each patient the images were sequenced both over time and along the long axis. In the time sequence, there were 20 frames. The slices were separated by 6 to 13 mm; consequently there were 8 to 15 slices per patient. The images were 256x256 pixels, and each pixel had a distance of 0.93-1.64 mm to its neighbor. The manual segmentations of the endocardial left ventricle of the images (Rosse and Gaddum-Rosse, 1997) were drawn behind the papillary muscles and trabeculae. All the images, their manual segmentations, and the metadata are available online at http://www.cse.yorku.ca/~mridataset/ and at the (Andreopoulos et al. 2008) journal's website as an Electronic Annex. Fig. 21(a) shows one short-axis cardiac MRI of patient one at slice 6 (z=6) and time 10 (frame=10). Manual contours of this patient at slice 6 and all frames are shown in Fig. 21(b). Fig. 21(c) shows manual contours of this patient at time=10 and all slices.
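Because the volume error reported later is expressed in cm3, the ventricular volume has to be integrated over the slice stack; a minimal sketch under the stated slice spacing is given below (the function name and the assumption that contours are supplied in millimetres are ours):

```python
import numpy as np

def lv_volume_cm3(slice_contours, slice_thickness_mm):
    """Approximate LV volume: per-slice polygon area (shoelace formula) times the slice thickness."""
    total_mm3 = 0.0
    for contour in slice_contours:                 # one closed (x, y) contour per short-axis slice, in mm
        x, y = np.asarray(contour, dtype=float).T
        area_mm2 = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        total_mm3 += area_mm2 * slice_thickness_mm
    return total_mm3 / 1000.0                      # mm^3 -> cm^3
```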
Fig. 21. (a) Short axis cardiac MRI of one patient at z=6 and time=10. (b) Contours of this patient at z=6 and times from one to 20. (c) Contours of this patient at time=10 and z from 2 to 12.
Fig. 22. Segmenting patients 1-8 using MVGC. The manual contours are shown in green while the red ones are resulting after using MVGC.
Fig. 23. Segmenting patients 9-16 using MVGC. The manual contours are shown in green while the red ones are resulting after using MVGC.
Fig. 24. Using MVGC to segment patients 18-33 randomly. The manual contours are shown in green while the red ones are resulting after using MVGC.
4.4.2. Comparing with other methods

At first, the result of MVGC was compared to some known methods, which use different fundamental principles (Peng et al. 2016). As shown in Table 3, the first compared method (Mitchell et al. (2002)) uses a three-dimensional active appearance model (3D AMM), and the user should manually segment the training sets. The second and third ones are by Kaus et al. (2004), who used coupled meshes, PDM, and a deformable model; in these methods the user manually segments the training sets. Lotjonen et al. (2005) used SSM and non-rigid registration as their fundamental principle. Their method requires the user to create the prior shape model as a 3D surface fitting. Van Assen et al. (2006) used a 3D active shape model (ASM) and fuzzy inference. Their method requires manual segmentation on the basal and apical slices. Huang and Metaxas's (2008) method is based on a deformable shape and appearance model. Their method requires specifying the centroid and the radius. Constantinides et al. (2009) introduced a GVF-based deformable model and fuzzy k-means papillary muscle detection, which requires the user to place a point at the center of the LV and a point at the upper intersection of the LV and RV. Morphological region growth plus watershed cuts were the principle of the Cousty et al. (2010) method, for which the user should specify a single point located at the center of the cavity. Schaerer et al. (2010) presented a deformable elastic template with temporal constraints, for which specifying a point at the center of the cavity in the ED frame by the user is a vital operation. The method of Zhu et al. (2010) was a propagation-based subject-specific dynamic model; it required manual segmentation of the first frame. A Markov random field-based deformable model was introduced by Cordero-Grande et al. (2011).
GVF snake as their fundamental principle. Eslami et al. (2013) used retrieval of the closest subject with guided random walks, which requires the user to provide myocardial and background seeds on the ED frame. A GMM (EM) plus region-restricted dynamic programming are the principle of the method of Hu et al. (2013). Lu et al. (2013) used optimal thresholding, FFT, and multiple-seed region growing as their fundamental method; it requires the user to choose the mid-slice and to perform manual corrections. Alba et al. (2014) introduced a method based on graph cuts on intensity plus inter-slice and shape constraints. B-spline explicit active surfaces, sequential thresholding, and EM were the methods of Queiros et al. (2014), which require the user to choose the basal and apical slices. Qin et al. (2015) used feature competition, a sparse model, and incremental learning as their fundamental principles; the user must manually segment the first frame. To evaluate the performance of MVGC, three metrics were used, and the results were compared to the corresponding results of the other methods. These metrics express the errors between the ground truth and the results of the particular methods from different points of view. The first metric is the landmark error, which is the minimum distance (in mm) between each landmark of the manually segmented contour and the contour produced by a method. The second metric is the volumetric error, which measures the average error of each frame separately; to this end, the average landmark error (in mm) was computed separately for each frame over all images. The third metric is the volume error, which computes the difference (in cm3) between the volumes of the cardiac structures enclosed by the manually segmented contours and by the contours provided by the methods. In addition to these metrics, statistical tests were used to show how significant the differences are. The first two were the t-test and the z-test of significance; since the errors generally have non-Gaussian distributions, the third was the Wilcoxon test of significance.
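For concreteness, the sketch below shows one way the landmark error and the Wilcoxon signed-rank comparison described above could be computed; the synthetic contours, array shapes, and function names are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import wilcoxon

def landmark_errors(manual_contour, method_contour):
    # Minimum distance (in mm) from each manually placed landmark to the
    # closest point of the automatically produced contour.
    d = cdist(manual_contour, method_contour)
    return d.min(axis=1)

# Synthetic example: a circle as the "manual" contour and two noisy versions
# of it standing in for two automatic methods (purely illustrative data).
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
manual = np.stack([20.0 * np.cos(t), 20.0 * np.sin(t)], axis=1)   # coordinates in mm
rng = np.random.default_rng(0)
method_a = manual + rng.normal(scale=0.5, size=manual.shape)
method_b = manual + rng.normal(scale=1.0, size=manual.shape)

err_a = landmark_errors(manual, method_a)
err_b = landmark_errors(manual, method_b)
stat, p = wilcoxon(err_a, err_b)   # non-parametric paired test, as used for the tables
print(err_a.mean(), err_b.mean(), p)
```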
4.4.3. Comparing with hybrid methods
Although MVGC is a basic method, free from any complexity such as a combination of several methods, it was also compared to robust hybrid models to show how well it works. To this aim, eight robust combined methods were used (Andreopoulos et al. 2008). The first model was the AMM (Cootes et al. 1998), which is robust for analyzing medical images (Cootes et al. 2004). The second one was a hierarchical 2D + time ASM, whose modes of deformation are based on the coefficients of the shapes' wavelet transform (Davatzikos et al., 2003; Andreopoulos et al. 2008). The first fitting approach segmented the data using three-fold cross-validation. The frames close to end-diastole differ markedly in size and shape from the frames close to end-systole, so two individual models were built. Each fold consists of 11 patients and each model includes 10 frames; therefore, two models segment each image, and the 3D model with the minimum fitting criterion provides the final segmentation for each frame. Because it is an inverse compositional-based approach, it was referred to as IC3 in (Andreopoulos et al. 2008).
Table 3. Results of landmark errors after fitting different known methods to the data set.
METHOD                          Dim      Error (mm)    p-value    z-value    sign-rank
MVGC                            2D + T   1.35 ± 0.25   N/A        N/A        N/A
Xu et al. (1998a)               2D       2.07 ± 0.56   0          -60.83     4.9E+03
Mitchell et al. (2002)          3D       2.75 ± 0.86   0          -61.31     1.3E+02
Kaus et al. (2004)              3D       2.76 ± 1.02   0          -61.31     1.2E+02
Kaus et al. (ES) (2004)         3D       2.28 ± 0.93   0          -61.23     7.5E+03
Lorenzo-Valdes et al. (2004)    3D + T   1.88 ± 2      0          -59.67     1.7E+05
Lotjonen et al. (2005)          3D       2.01 ± 0.31   0          -60.57     7.6E+04
van Assen et al. (2006)         3D       1.97 ± 0.54   0          -60.34     9.9E+04
van Assen et al. (SAX) (2006)   3D       2.24 ± 0.54   0          -61.20     1.1E+04
van Assen et al. (RAD) (2006)   3D       2.02 ± 0.93   0          -60.62     7.1E+04
Huang et al. (2008)             2D       2.16 ± 0.46   0          -61.07     2.4E+04
Constantinides et al. (2009)    2D       2.04 ± 0.47   0          -60.71     6.1E+04
Cousty et al. (2010)            3D + T   1.42 ± 0.36   1.2E-76    -25.35     3.7E+06
Schaerer et al. (2010)          2D + T   2.97 ± 0.38   0          -61.31     1.2E+01
Zhu et al. (2010)               3D + T   1.47 ± 0.16   4.4E-218   -36.21     2.6E+06
Cordero-Grande et al. (2011)    3D + T   1.37 ± 0.2    5.5E-06    -11.28     5.1E+06
Ringenberg et al. (2012)        2D       1.73 ± 0.69   0          -57.48     3.9E+05
Eslami et al. (2013)            3D + T   1.54 ± 0.31   0          -46.42     1.5E+06
Hu et al. (2013)                2D       2.24 ± 0.4    0          -61.20     1.1E+04
Lu et al. (2013)                2D       2.08 ± N/A    0          -60.86     4.6E+04
Alba et al. (Cine) (2014)       3D       2.76 ± 0.53   0          -61.31     1.2E+02
Alba et al. (LE) (2014)         3D       1.83 ± 0.5    0          -59.16     2.2E+05
Queiros et al. (2014)           3D + T   1.76 ± 0.45   0          -58.12     3.3E+05
Qin et al. (2015)               2D       1.75 ± 0.5    0          -57.93     3.5E+05
Error: the landmark errors (mean ± standard deviation); p-value: the statistical significance p-value for the database of all MVGC landmark errors and the respective row of mean landmark error; z-value: value of the z-statistic; sign-rank: the Wilcoxon signed-rank test.
The second approach is optimization using Gauss-Newton. This approach is well known and has been tested exhaustively by Baker et al. (2004). Although the accuracy of this approach is high, its speed is very low; therefore, comparing its results with MVGC, both in terms of accuracy and in terms of speed, gives significant information. The third one was fitting the AMMs or the 2D + time ASM using the classical approach (Cootes and Taylor, 1998; Mitchell et al., 2002), with the variants used in optimization (Andreopoulos et al. 2008; Cootes and Taylor, 2004; Stegmann, 2004). The last one was another inverse compositional-based approach by Andreopoulos et al. (2008). This approach, which segments 3D images and focuses on end-systole frames, is called IC9; the models segment the frames in the same way as IC3 but with a different mean shape and with the corresponding diastole frames. The above approaches were used with the 3D AMM method, and the 2D + time ASM optimization methods were then implemented with the aim of improving the results.
Table 4. Results of landmark errors after fitting different hybrid methods to the data set.
METHOD                          Error (mm)    min    Max     p-value    z-value   sign-rank
MVGC                            1.35 ± 0.25   0.52   3.12    N/A        N/A       N/A
InvComp9 (3D AMM)               1.49 ± 1.37   0      14.89   3.9E-289   -39.67    2.2E+06
InvComp3 (3D AMM)               1.68 ± 1.52   0      14.89   0          -55.96    5.5E+05
Gauss-Newton (3D AMM)           1.42 ± 1.34   0      13.89   1.3E-76    -25.35    3.7E+06
Classical (3D AMM)              1.78 ± 1.56   0      13.8    0          -58.48    2.9E+05
InvComp9 (2D + time ASM)        1.41 ± 1.31   0      14.89   3.7E-56    -22.78    3.9E+06
InvComp3 (2D + time ASM)        1.55 ± 1.49   0      14.89   0          -47.49    1.4E+06
Gauss-Newton (2D + time ASM)    1.35 ± 1.3    0      13.89   0.24       -4.90     5.8E+06
Classical (2D + time ASM)       1.63 ± 1.49   0      13.8    0          -53.68    7.8E+05
Error: the landmark errors (mean ± standard deviation); min/Max: the minimum/maximum landmark error; p-value: the statistical significance p-value for the database of all MVGC landmark errors and the respective row of mean landmark error; z-value: value of the z-statistic; sign-rank: the Wilcoxon signed-rank test.
Table 5. Results of volumetric errors after fitting different hybrid methods to the data set.
METHOD                          Error (mm)     Min    Max    p-value   z-value   sign-rank
MVGC                            1.35 ± 0.02    1.33   1.39   N/A       N/A       N/A
InvComp9 (3D AMM)               1.51 ± 0.55    0.57   4.03   1.1E-19   -3.92     0
InvComp3 (3D AMM)               1.68 ± 0.73    0.57   6.35   1.3E-25   -3.92     0
Gauss-Newton (3D AMM)           1.47 ± 0.056   0.6    4.25   2.3E-17   -3.92     0
Classical (3D AMM)              1.8 ± 0.66     0.64   4.93   3.6E-28   -3.92     0
InvComp9 (2D + time ASM)        1.43 ± 0.49    0.57   3.63   3.9E-14   -3.92     0
InvComp3 (2D + time ASM)        1.55 ± 0.66    0.61   5.87   1.6E-21   -3.92     0
Gauss-Newton (2D + time ASM)    1.4 ± 0.5      0.55   3.8    1.5E-10   -3.92     0
Classical (2D + time ASM)       1.64 ± 0.58    0.62   4.37   1.5E-24   -3.92     0
Error: the volumetric errors (mean ± standard deviation); Min/Max: the minimum/maximum volumetric error; p-value: the statistical significance p-value for the database of all MVGC volumetric errors and the respective row; z-value: value of the z-statistic; sign-rank: the Wilcoxon signed-rank test.

Table 6. Results of volume errors after fitting different hybrid methods to the data set.
METHOD                          Error (cm3)    min    Max     p-value    z-value   sign-rank
MVGC                            4.53 ± 2.34    0.59   14.99   N/A        N/A       N/A
InvComp9 (3D AMM)               6.79 ± 7.44    0      50.04   1.7E-96    -17.64    2.3E+04
InvComp3 (3D AMM)               8.77 ± 10.9    0.01   98.97   1.5E-210   -21.41    4.2E+03
Gauss-Newton (3D AMM)           5.94 ± 5.83    0.01   38.79   2.4E-46    -14.14    4.0E+04
Classical (3D AMM)              8.4 ± 7.67     0      56.12   6.1E-191   -21.03    6.0E+03
InvComp9 (2D + time ASM)        6.1 ± 6.44     0.02   47.07   3.0E-55    -14.95    3.6E+04
InvComp3 (2D + time ASM)        7.58 ± 9.3     0.01   85.57   2.2E-144   -19.69    1.3E+04
Gauss-Newton (2D + time ASM)    5.76 ± 5.35    0      32.4    7.2E-37    -13.17    4.5E+04
Classical (2D + time ASM)       7.24 ± 6.31    0.04   43.22   5.4E-124   -18.90    1.6E+04
Error: the volume errors (mean ± standard deviation); min/Max: the minimum/maximum volume error; p-value: the statistical significance p-value for the database of all MVGC volume errors and the respective row; z-value: value of the z-statistic; sign-rank: the Wilcoxon signed-rank test.

Table 7. The average fitting time per 3D volume using the corresponding algorithm. The MVGC and GVF methods ran on an Intel 1.4 GHz machine with 4 GB RAM, while the other methods were run by Andreopoulos et al. (2008) on an Intel Xeon 3.06 GHz machine with 3 GB RAM, which was a more powerful computer.
METHOD       MVGC   GVF    InvComp9 (3D AMM)   InvComp3 (3D AMM)   Gauss-Newton (3D AMM)   Classical (3D AMM)
Time (sec)   0.69   25.2   22.8                8.6                 1013.4                  5.4
4.4.4. Results
Fig. 22 to Fig. 24 show the results of MVGC on a random slice of some patients. The contours segmented by MVGC are displayed in red and the green ones are the results of manual segmentation. Although the last experiment leads to a 3D segmentation result, MVGC is by no means a real 3D method. While the proposed segmentation method takes the inter-slice relationship into account, it does not simultaneously segment the whole 3D image. As a matter of fact, after segmenting one slice, MVGC spreads the segmented contour to the neighboring slices, moved in the direction of the inter-slice gradient vectors, as their initial contours. In other words, for 3D cases the method only needs a manually initialized contour for the first slice; after the final contour of the first slice is obtained, the initial contour of its neighboring slice is the result of the first one moved in the direction of the inter-slice gradients. Similarly, in real 3D images, the proposed method requires just a 2D initialization on one cutting plane of the image. The proposed method segments the object on that plane using the mean value theorem as before; the segmented contour then spreads to the neighboring planes by moving in the direction of the 3D gradient vectors. The directions of the 3D gradient vectors are towards the object boundaries in the neighboring planes, so they are very useful for MVGC to autonomously initialize contours in the next planes. The segmenting-and-spreading cycle continues until the entire 3D object is segmented. In general, MVGC guides the contour to the boundaries in 2D form, while in 3D cases it uses 3D gradient vectors to employ inter-slice information. In addition, the proposed method incorporates the temporal information in the last experiment, so MVGC is a "2D + Time" method. Table 3 presents the results of landmark errors after using MVGC and after fitting different known methods to the data set. Moreover, the average and standard deviation of the errors and statistical tests were employed to assess the significance of the MVGC results against the other methods; the statistical tests obviously do not apply to the MVGC row itself. The same results after fitting different hybrid methods to the data set are shown in Table 4, where the hybrid methods were the 3D AMM and the 2D + time ASM using four optimization approaches. Tables 5 and 6 present the results of the volumetric error and the volume error using MVGC and the other methods, respectively. Fig. 26 demonstrates the mean and standard deviation of the frames' volumetric errors using MVGC and the 3D AMM hybrid methods. The figure shows that the end-systole results using MVGC are far better than those of the other methods, where the contours have bigger areas than in the end-diastole frames. Fig. 27 shows the regression graph of the endocardial volumes obtained using MVGC versus the ground truth volumes. The mean time it took to segment each 3D volume is presented in Table 7. In this paper, the results were obtained on an Intel 1.4 GHz machine with 4 GB RAM, and the times for implementing the 3D AMMs were retrieved from Andreopoulos et al. (2008), who used an Intel Xeon 3.06 GHz machine with 3 GB RAM, which was a more powerful computer. Nevertheless, the MVGC results were obtained in far less time than the other ones.
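A minimal sketch of the slice-propagation strategy described above is given next; it assumes the volume is a NumPy array indexed as (z, y, x), and segment_slice_mvgc is a placeholder for the 2D MVGC step itself, so the listing is an illustration under these assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def propagate_contour_3d(volume, z0, init_contour, segment_slice_mvgc, sigma=2.0):
    # Segment slice z0 first, then use each result, shifted along the in-plane
    # gradient of the neighbouring slice, as that neighbour's initial contour.
    # 'segment_slice_mvgc(img, init)' is a placeholder for the 2D MVGC step and
    # is expected to return an (N, 2) array of (y, x) contour points.
    smoothed = gaussian_filter(volume.astype(float), sigma)
    gy, gx = np.gradient(smoothed, axis=(1, 2))        # in-plane gradient components
    contours = {z0: segment_slice_mvgc(volume[z0], np.asarray(init_contour, float))}

    for step in (+1, -1):                              # spread up and down the stack
        z, prev = z0, contours[z0]
        while 0 <= z + step < volume.shape[0]:
            z += step
            pts = np.round(prev).astype(int)
            pts[:, 0] = np.clip(pts[:, 0], 0, volume.shape[1] - 1)
            pts[:, 1] = np.clip(pts[:, 1], 0, volume.shape[2] - 1)
            # move each point by at most one pixel per axis, following the sign
            # of the in-plane gradient evaluated on the new slice
            shift = np.stack([np.sign(gy[z, pts[:, 0], pts[:, 1]]),
                              np.sign(gx[z, pts[:, 0], pts[:, 1]])], axis=1)
            contours[z] = prev = segment_slice_mvgc(volume[z], prev + shift)
    return contours

# Example with a dummy 2D step that simply returns its initialization:
# contours = propagate_contour_3d(np.random.rand(10, 64, 64), 5,
#                                 np.array([[32.0, 20.0], [32.0, 44.0]]),
#                                 lambda img, c: c)
```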
Fig. 25. Regularized model or overfitted model? The closely fitting red contour shows the result of the MVGC method on a slice for which the landmark error with respect to the manual segmentation is significant. Does this mean overfitting? Several facts reject the occurrence of overfitting in MVGC: MVGC has no parameters of its own, the image is smoothed before applying MVGC to reduce the effect of noise, MVGC does not use any training data, and pruning is applied while the contour is reshaped.
Fig. 26. The volumetric errors on each frame (mean and standard deviation) using MVGC versus 3D AMM with Gauss-Newton algorithm, Classic formula, IC9, and IC3 approaches (Andreopoulos et al., 2008).
Fig. 27. Regression graph of the endocardial volume results using MVGC (vertical axis) versus the ground truth volumes (horizontal axis).
5. Discussion
To have a wide capture range, MVGC uses the edge map gradient vectors. By viewing them as a wireframe mesh in which the color is proportional to the surface height, the capture range of MVGC can be examined. Fig. 28 shows the wireframe mesh of the heart-shaped object shown in Fig. 8. The edge map mesh is demonstrated in Fig. 28(a), where the mesh height represents the gradient magnitude. The direction of the edge map gradient vector at each pixel is towards the maximum slope, i.e., the direction in which the gradient magnitude has its highest growth rate. Since the mesh colors are proportional to the gradient magnitude, the directions of the edge map gradient vectors become intuitive. Fig. 28(b) represents the relation between the edge map gradient vector direction and the color change: the direction starts from the blue spectrum, where the gradient magnitude is close to zero, and goes to the dark red spectrum, where the gradient magnitude is at its greatest extent. With the help of this mesh display, the direction and magnitude of the edge map gradient vectors can be demonstrated qualitatively. To show small values of the edge map gradient magnitude together with large ones, Fig. 28(c) shows the logarithm of
(edge map gradient magnitude + 1) at each pixel. In the areas near the object boundary, the edge map gradient vectors have a voracious appetite for the boundaries; in areas far away from the boundary, this appetite is much reduced. The "guiding to the boundary" stage guides the contour points towards the boundary with a minimum speed of one pixel per step, so a very small gradient magnitude cannot take the convergence speed below a certain limit. In other words, the direction of the edge map gradient vector plays the important role in guiding contour points towards the object boundary. For this reason, Fig. 28(d) shows the logarithm of the edge map gradient magnitude with the direction of the edge map gradient vector at each pixel drawn more intuitively, regardless of the gradient magnitude. Although values between zero and one have a logarithm less than zero, the direction at each pixel is still shown; it points towards the steepest upward slope, so from the perspective of the color spectrum the direction tends to move greedily from blue to red. This perspective can be seen from the top view in Fig. 28(e), where the action of the "guiding to the boundary" stage is visualized by the color spectrum. As shown in Fig. 28, since the edge map focuses on the boundary, representing it by gradient vectors has several consequences. Firstly, the gradient vectors point towards the object boundary, and their direction is normally perpendicular to the tangent of the object boundary. Secondly, these vectors have large magnitudes only near the object boundaries, which causes a limited capture range. Thirdly, if the image has an almost uniform color distribution, the edge map function approaches zero. These properties give MVGC some advantages over edge-based active contours. The first advantage of the proposed model compared to edge-based active contours is its capability of converging fast to the object boundary in areas where the gradient magnitude approaches zero, for example at saddle and stationary points. In other words, finding a new point on a line segment whose endpoints are two neighboring contour points, such that its edge map gradient vector direction is perpendicular to that line segment, imposes only one necessary condition on the new point: it must have at least a minimal edge map gradient magnitude. Therefore, this point cannot be located in an area where there is no edge map gradient vector at all. On the other hand, the minimum speed of MVGC's convergence to the boundary is one pixel per step; therefore, after a limited number of steps the contour points are transferred to the object boundaries. The second advantage of the proposed model compared to active contour methods is its ability in the convexities and concavities that make objects complex. This is because the proposed method does not use an energy function, which is the resultant of two independent internal and external forces. When active contour models are defined by their coefficients, they define an energy function that must satisfy the Euler equation, with the proviso that the sum of the external and internal forces becomes zero. This may perform poorly when the object has deep concavities and convexities, for example U-shaped areas. Since the Euler equation makes active contours compute the sum of the external and internal forces, in these areas the forces arise in opposite directions.
Therefore, the sum approaches zero, which makes the contour unresponsive to the U-shaped areas. From the MVGC viewpoint, however, if the coordinate axes are rotated and translated so that the line segment whose endpoints are two neighboring contour points matches the x-axis, the proposed model follows Rolle's theorem: it looks for a relative and an absolute extremum. These local and global extrema cause MVGC to converge to the object boundary as fast as possible. Additionally, the MVGC method is not sensitive to the intensity, whereas changing the global intensity makes IC9 and IC3 produce different results; inevitably, this problem has to be handled before applying those models, for example by normalizing the intensities. Moreover, MVGC is not sensitive to shifts of the images, for example of the cardiac structures due to inhaling and exhaling, whereas most of the other compared 4D methods require a preprocessing step to correct this issue. Furthermore, MVGC is a method with no parameters of its own, while the standard optimization methods require a large number of parameters; in 3D and 4D cases, where the complexity increases, this issue gets even worse and those methods become more susceptible to falling into local minima. In addition to the best performance in the 4D experiments, MVGC also ran much faster than the other methods. For example, MVGC ran more than 1400 times faster than Gauss-Newton, the previous work with the best performance, even on a weaker machine (MVGC was run on an Intel 1.4 GHz machine with 4 GB RAM, while Andreopoulos et al. (2008) ran the compared methods on a more powerful machine with an Intel Xeon 3.06 GHz and 3 GB RAM). Even if the difference between the running machines is not considered, MVGC runs more than 33, 12, and 8 times faster than the IC9, IC3, and classical approaches used for the 3D AMM model, respectively.
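A minimal sketch of the "guiding to the boundary" step, under the one-pixel-per-step and stop-condition description given in the text, could look as follows; the nearest-pixel sampling, bounds handling, and synthetic test image are illustrative choices rather than details taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def guide_to_boundary(edge_map, point, max_steps=200):
    # Move one contour point towards the boundary along the edge-map gradient.
    # The point advances one pixel per step along the local gradient direction
    # (so the speed never drops below one pixel per step) and it stops when the
    # gradient magnitude decreases while the gradient direction reverses, i.e.
    # when the point has just crossed the ridge of the edge map.
    gy, gx = np.gradient(edge_map.astype(float))
    p = np.asarray(point, dtype=float)                    # (row, col)
    prev_dir, prev_mag = None, None
    for _ in range(max_steps):
        r = int(np.clip(round(p[0]), 0, edge_map.shape[0] - 1))
        c = int(np.clip(round(p[1]), 0, edge_map.shape[1] - 1))
        g = np.array([gy[r, c], gx[r, c]])
        mag = float(np.hypot(g[0], g[1]))
        if mag < 1e-12:
            break                                         # no direction to follow
        direction = g / mag
        if prev_dir is not None and mag < prev_mag and float(np.dot(direction, prev_dir)) < 0:
            break                                         # ridge crossed: stop here
        p += direction                                    # one pixel per step
        prev_dir, prev_mag = direction, mag
    return p

# Illustrative use on a synthetic edge map of a bright disk:
yy, xx = np.mgrid[0:128, 0:128]
disk = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)
dy, dx = np.gradient(disk)
edge_map = gaussian_filter(dy ** 2 + dx ** 2, sigma=3.0)  # Gaussian spreads the gradient
print(guide_to_boundary(edge_map, (64.0, 10.0)))          # drifts towards the disk boundary
```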
Fig. 28. The capture range of the edge map gradient vectors of the black-and-white simple heart-shaped object shown in Fig. 8. (a) wireframe mesh of the edge map; (b) the color-spectrum view of the gradient magnitude (it increases from blue to dark red); (c) the logarithm of the edge map gradient magnitude, shown for better intuition; (d) the same logarithm with the gradient magnitude ignored, so that the directions are shown even more intuitively than in (c); (e) view (d) from the top.
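A mesh view in the spirit of Fig. 28 could be produced along the following lines; the synthetic disk image and the Gaussian width are assumptions of the example, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

# Synthetic binary object standing in for the heart-shaped image of Fig. 8.
yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
image = ((xx ** 2 + yy ** 2) < 0.4).astype(float)

# Edge map: squared gradient magnitude of the Gaussian-smoothed image
# (the Gaussian spreads the gradient vectors away from the boundary).
smoothed = gaussian_filter(image, sigma=3.0)
gy, gx = np.gradient(smoothed)
edge_map = gx ** 2 + gy ** 2

# log(edge-map gradient magnitude + 1), as in Fig. 28(c)
egy, egx = np.gradient(edge_map)
log_mag = np.log1p(np.hypot(egx, egy))

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(xx, yy, log_mag, cmap="jet", linewidth=0)  # mesh colored by height
ax.set_title("log(edge map gradient magnitude + 1)")
plt.show()
```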
Other advantages of the proposed method compared with snakes are its convergence speed and its accuracy, which were demonstrated in the experiments. Although the hand-shaped object needed more convergence time than the other experiments, without taking the "refinement" stage into account MVGC converged to the object boundary in less than 4.1 seconds. The "refinement" stage, however, was considered as a part of the model, with several goals. One of them is removing overlapped contour points: the newly created contour points may overlap the old ones after being guided to the boundary, and this stage eliminates them from the contour points. The second goal is to separate objects that are far from each other; by assigning an independent contour to each object, the "refinement" stage separates them. Actually, in the first iteration of the hand-shaped object in Fig. 10, the "refinement" stage treats the middle and ring fingers separately. Regardless of the refinement stage, however, MVGC converges much more quickly to the object boundary. This stage ensures that if there are different objects in the image the contour is split, so that in the next iteration they are independent of each other. The third objective is to remove newly created contour points that were wrongly guided to a corner of the image; this happens, for example, when the sharp edges of the object have a twist near the corner of the image. The fourth goal is to discard new points that are very close to the endpoints of the current contour line
segments. These useless points do not accelerate the convergence on their own, and as their number increases the algorithmic complexity rises; by discarding them, the proposed model can find new points that make larger changes to the contour in the next iteration, and so the convergence rate rises. Table 2 shows the number of contour points needed to estimate the object boundary in the grayscale experiments. Due to the presence of noise and boundaries with sharp edges, more points were needed to estimate the boundary with high accuracy. Comparing the results of the proposed method and the previous methods highlights its accuracy and convergence speed. The obtained result can be an estimation of the boundary whose accuracy is determined by the user, or truly the object boundary with all its details. In Section 4.1 there are two types of experiments. The first are simple experiments that show how the different stages of the proposed method work, and thus how exactly the proposed method works. The second add complexities to the images and difficulties to the initialization, to show whether MVGC still works well in those situations. The complexity increases by adding high-density noise to the images while the object boundaries have sharp or deep concavities and convexities; the difficulty rises through rough initializations, where the initial contours are outside, inside, or crossing the object boundaries. Different combinations of these complexities and difficulties form these types of experiments. There are several reasons why MVGC handles them correctly. Firstly, MVGC overcomes the effect of noise, even at high density, because of the Gaussian function in the edge map. This function has two main advantages. One is that it spreads the gradient through the image, so the object boundaries, which have large gradient magnitudes, have a greater share in spreading the gradient, and the gradient directions of regions near the object boundaries tend towards the objects. The other is that, by the nature of the Gaussian function, it acts as a low-pass filter, which reduces the effect of noise on segmenting the object boundaries. Therefore, the noise is largely resolved at the heart of the edge map, and the mean value theorem works well because the effect of the object boundaries is highlighted while the effect of noise is reduced. Secondly, sharp or deep concavities and convexities in the object boundaries are correctly segmented by MVGC. For each line segment whose endpoints are located on the object boundary, the mean value theorem searches for points on the object boundary at which the boundary is parallel to that segment; consequently, these points have a gradient vector perpendicular to the segment. From another point of view, if the object boundaries were regarded as functions, the peaks of convexities and concavities are the points most likely to be approximately parallel to the linear interpolants of those functions. Putting these viewpoints together, it can be concluded that the mean value theorem strongly tends to find the peaks of the object boundaries: the more sharp or deep concavities and convexities the object boundaries have, the more willing the mean value theorem is to converge to them. Thirdly, while the previous methods require numerous parameters to be manually tuned to segment images, the proposed method has no such parameters, so parameter tuning is unnecessary. Nevertheless, the proposed method has some properties that make it a dynamic and regularized model.
At first, using the mean value theorem to create new contour points lets the proposed method find the best object boundary points, with which the boundary can be estimated optimally. In other words, assume that on a line segment whose endpoints are located on the object boundary, the proposed method finds a point d as a new contour point. By rotating the object so that this line segment becomes parallel to the horizontal axis, and by considering the rotated object boundary as a function f, the point d is located at a local extremum of f; this is because the gradient vector at d is parallel to the segment, hence parallel to the horizontal axis, which makes d an extremum. Therefore, splitting the segment at d into two sub-segments produces a piecewise-linear interpolation of f consisting of linear interpolants (line segments). Moreover, because the extrema of the function are found before the linear interpolants are split, this type of linear interpolation tends to approach the convexities and concavities of f, so the linear interpolants converge to f very fast. Secondly, if the line segments were split simply by taking the middle of each segment instead of using the mean value theorem, there would be no difference in estimating the smooth and the complex parts of the object boundary. In that case, not only would the model have to create more contour points than the proposed method to estimate the object boundary with the same accuracy, but parts of the object boundary with weak gradient magnitude might also be prioritized and then be doomed among the noise. Using the mean value theorem makes the creation of new contour points more intelligent: the proposed method prioritizes the convexities and concavities of the object boundary and makes boundaries with weak gradient magnitude less important, to the extent that ruptures are simply estimated by the linear interpolants. Thirdly, the proposed method uses an edge map that includes a Gaussian function which spreads the gradient vectors the way a stain of ink spreads on paper; this function by itself smooths the noise and therefore weakens its role in the segmentation process. Consequently, besides the ability of the proposed method to segment scalar, vector, and multi-dimensional images, the above properties make it a dynamic model that converges fast to the boundaries of complex objects, objects with weak gradient magnitudes or ruptures in their boundaries, and even objects in noisy images, which together cover a wide range of images. However, complexities and difficulties make the proposed method produce more contour points to reach a reasonable accuracy, because MVGC uses more iterations to overcome them. For example, an object like a triangle can be segmented exactly by just three contour points, while many contour points must be created to segment a circle-shaped object reasonably well. The more concavities and convexities an object has, the more contour points are required to estimate its boundary within a given accuracy threshold. Moreover, as explained above, the mean value theorem tries to find the best points that converge to the object boundaries; in other words, a simple object can be estimated by a few contour points, while a complex object with exactly the same area may need many contour points to be estimated with the same accuracy. This is another reason why the proposed method is a dynamic model. Although the Gaussian function in the edge map reduces the effect of high-density noise, some noise still has a small influence, and the proposed method requires more iterations to overcome it, so more contour points are produced. This is because in stage II the stop condition of the guiding-to-the-boundary process is that the gradient magnitude decreases and the gradient direction reverses; noise can increase or reduce the gradient magnitude, and the low-pass filter reduces this effect by spreading it to the neighboring pixels. Therefore, although the method tries to converge to the object boundaries, the stop condition in stage II slows it down in such a way that MVGC must guide these points towards the boundaries in the next iteration; increasing the number of iterations to overcome the noise increases the number of contour points. Fig. 25 shows a particular slice in which the landmark error of the red contour segmented by MVGC, compared to the manual green contour, is significant. While the red contour of the proposed method closely follows the object boundary, the manual green contour is very smooth. At first glance it seems that MVGC produces an over-fitted contour while the green contour represents a regularized segmentation; there are, however, reasons that reject this assumption. First, in overfitting a model represents noise instead of the fundamental relationships, whereas the edge map is obtained after the image is smoothed several times by a Gaussian filter, so the effect of noise is far weaker than that of the edges. Secondly, complex models are more prone to overfitting, for instance a model with too many parameters and too few observations; MVGC exploits the mean value theorem in an original way and therefore has no parameters of its own. Thirdly, when a model tries to fit the variations in the training data, it becomes weak on unseen data,
which leads to overfitting. MVGC, however, does not use any training data; its result follows the boundaries, which are not training data. Fourthly, performance is the ability to fit unseen data, and this is reliable in MVGC, whereas overfitted models do not perform well on test data. Fifthly, "memorizing" instead of "learning" the training data results in overfitting, whereas MVGC neither memorizes any data nor uses training data at all. Finally, pruning, an efficient method for avoiding overfitting, is implemented in the refinement stage of MVGC. Therefore, the overfitting problem does not apply to the proposed method.
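The contour-splitting step discussed above (finding, on the segment between two neighboring contour points, a point whose edge-map gradient is perpendicular to that segment) can be sketched as follows; the dense sampling of the segment and the midpoint fallback are illustrative implementation choices rather than details taken from the paper.

```python
import numpy as np

def split_point_mvt(edge_map, a, b, n_samples=50):
    # Among densely sampled points of the segment a-b, pick the one whose
    # edge-map gradient is most nearly perpendicular to the segment direction
    # (the dot product of the unit gradient with the unit segment direction is
    # closest to zero), in the spirit of the mean value theorem argument above.
    gy, gx = np.gradient(edge_map.astype(float))
    a, b = np.asarray(a, float), np.asarray(b, float)
    seg = (b - a) / np.linalg.norm(b - a)

    ts = np.linspace(0.0, 1.0, n_samples)
    pts = a[None, :] + ts[:, None] * (b - a)[None, :]
    rows = np.clip(np.round(pts[:, 0]).astype(int), 0, edge_map.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 1]).astype(int), 0, edge_map.shape[1] - 1)
    g = np.stack([gy[rows, cols], gx[rows, cols]], axis=1)
    mag = np.linalg.norm(g, axis=1)
    valid = mag > 1e-12                       # ignore points with no gradient
    if not np.any(valid):
        return 0.5 * (a + b)                  # fall back to the midpoint
    cosine = np.abs((g[valid] / mag[valid, None]) @ seg)
    return pts[valid][np.argmin(cosine)]      # gradient most perpendicular to a-b
```

A point selected in this way would then be handed to a guiding step such as the one sketched earlier.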
6. Conclusions
In this paper, the mean value theorem was employed as an alternative to the energy minimization that usually drives the estimation of the boundary. Similar to edge-based active contours, the proposed method uses edge map gradient vectors. Interestingly, from the mean value theorem it can be concluded that on a line segment whose endpoints are two neighboring contour points there is at least one point whose edge map gradient vector direction is perpendicular to that line segment. This point is introduced as a new contour point and is then guided towards the object boundary using the edge map gradient vectors; consequently, the number of contour points is at most doubled per iteration. In comparison with conventional models, the proposed method demonstrates the following advantages.
(i) It shows a promising capability in the convexities and concavities that make an object complex.
(ii) The convergence speed is irreducible at stationary and saddle points.
(iii) It does not need any prior information, and its initial contour can be inside, outside, or across the boundary of the object.
(iv) There is no restriction on the capture range.
(v) It is capable of converging fast in areas where the gradient magnitude of the edge map is close to zero, because the convergence speed is independent of the magnitude of the edge map gradient vector and depends only on its direction.
(vi) The proposed model is able to segment objects without leakage through boundary ruptures.
(vii) Unlike the active contour methods, the user no longer has to set manual parameters during the segmentation.
(viii) The proposed method is a noise-resistant model; for instance, it converges to the object boundary even in the presence of high-density noise.
Additionally, the convergence speed, the accuracy, and the simplicity of design and execution are remarkable. Expanding MVGC to a real 3D model is an interesting topic for future research.

APPENDIX A
GENERALIZATION OF THE MEAN VALUE THEOREM TO VECTOR-VALUED FUNCTIONS
Following the definitions in Eq. 2 and using the Cauchy-Schwarz inequality (Aldaz et al. 2015), the mean value theorem for several variables leads to
\[ \lvert f(y) - f(x) \rvert \le \lVert \nabla f\big((1-c)x + cy\big) \rVert \, \lVert y - x \rVert \quad \text{for some } c \in (0,1). \tag{23} \]
Lipschitz continuity is established when the partial derivatives of f are bounded, and f is then uniformly continuous. Thus, f neither needs to be continuously differentiable nor needs to be continuous on the closure of G (O'Searcoid, 2006).

Lemma 1. Assume that f: U \subset R^n \to R^m is continuously differentiable, let f_1, \dots, f_m denote the elements of f, and define g_i(t) = f_i(x + th). Then
\[ f_i(x+h) - f_i(x) = g_i(1) - g_i(0) = \int_0^1 g_i'(t)\,dt = \int_0^1 \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j}(x + th)\, h_j \, dt . \]
Considering that Df is the Jacobian matrix of f, which consists of the elements \partial f_i / \partial x_j, this can be written in vector form as
\[ f(x+h) - f(x) = \left( \int_0^1 Df(x + th)\, dt \right) h , \]
where x \in U, h \in R^n, x + th \in U, t \in [0, 1], and Df signifies the Jacobian matrix of f.

Lemma 2. Assume that v: [a, b] \to R^m is a continuous function defined on the interval [a, b] \subset R, and let u \in R^m be the value of the integral
\[ u = \int_a^b v(t)\, dt . \]
Using the Cauchy-Schwarz inequality (Steele, 2004), it can be shown that
\[ \lVert u \rVert^2 = \left\langle u, \int_a^b v(t)\, dt \right\rangle = \int_a^b \langle u, v(t) \rangle \, dt \le \int_a^b \lVert u \rVert \, \lVert v(t) \rVert \, dt . \]
Canceling the norm of u from both ends gives
\[ \left\lVert \int_a^b v(t)\, dt \right\rVert \le \int_a^b \lVert v(t) \rVert \, dt . \]

Lemma 3. Assume that the norm of Df(x + th) is bounded by some constant M for t \in [0, 1]. The mean value inequality then follows from the above lemmas:
\[ \lVert f(x+h) - f(x) \rVert = \left\lVert \left( \int_0^1 Df(x + th)\, dt \right) h \right\rVert \le \int_0^1 \lVert Df(x + th)\, h \rVert \, dt \le M \lVert h \rVert . \]
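As an illustrative numerical check of the mean value inequality of Lemma 3 (not part of the original appendix), the sketch below compares the norm of f(x+h) - f(x) with M times the norm of h for a simple smooth map; the choice of f and the sampling used to bound the Jacobian norm are assumptions of the example.

```python
import numpy as np

def f(p):
    # A smooth map f: R^2 -> R^2 used only to illustrate Lemma 3.
    x, y = p
    return np.array([np.sin(x) + y ** 2, x * y])

def jacobian(p):
    x, y = p
    return np.array([[np.cos(x), 2.0 * y],
                     [y,         x      ]])

x = np.array([0.3, -0.7])
h = np.array([0.5, 0.2])

# Bound M on the operator (spectral) norm of Df(x + th) over t in [0, 1],
# estimated by sampling the segment densely.
ts = np.linspace(0.0, 1.0, 101)
M = max(np.linalg.norm(jacobian(x + t * h), ord=2) for t in ts)

lhs = np.linalg.norm(f(x + h) - f(x))
rhs = M * np.linalg.norm(h)
print(lhs, "<=", rhs, lhs <= rhs)   # the mean value inequality holds
```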
REFERENCES
Alba X, i Ventura F, Rosa M, Lekadir K, Tobon-Gomez C, Hoogendoorn C, Frangi AF (2014) Automatic cardiac LV segmentation in MRI using modified graph cuts with smoothness and interslice constraints. Magn Reson Med 72(6):1775–1784
Aldaz, J.M., Barza, S., Fujii, M. and Moslehian, M.S., 2015. Advances in Operator Cauchy-Schwarz inequalities and their reverses. Annals of Functional Analysis, 6(3), pp.275-295.
Andreopoulos, A. and Tsotsos, J.K., 2008. Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI. Medical Image Analysis, 12(3), pp.335-357.
Baker, S., Goss, R., Matthews, I., 2004. Lucas-Kanade 20 years on: a unifying framework. Int. J. Comput. Vis. 56(3), 221–255.
Boccuto, A., Riečan, B. and Vrábelová, M., 2009. Kurzweil-Henstock integral in Riesz spaces. Bentham Science Publishers.
Cai, B., Liu, Z., Wang, J. and Zhu, Y., 2015. Image segmentation framework using gradient guided active contours. Int. J. Sign. Process, 8(7), pp.51-62.
Caselles, V., Kimmel, R. and Sapiro, G., 1997. Geodesic active contours. International Journal of Computer Vision, 22(1), pp.61-79.
Chan, T.F. and Shen, J.J., 2005. Image processing and analysis: variational, PDE, wavelet, and stochastic methods. SIAM.
Constantinides C, Chenoune Y, Kachenoura N, Roullot E, Mousseaux E, Herment A, Frouin F (2009) Semi-automated cardiac segmentation on cine magnetic resonance images using GVF-Snake deformable models. MIDAS J. http://hdl.handle.net/10380/3108
Cootes, T.F., Taylor, C., 1998. Active appearance models. In: Proceedings of the European Conference on Computer Vision, vol. 2, pp. 484–498.
Cootes, T.F., Taylor, C.J., 2004. Statistical models of appearance for computer vision. Tech. rep., Imaging Science and Biomedical Engineering, University of Manchester.
Cordero-Grande L, Vegas-Sánchez-Ferrero G, Casaseca-dela-Higuera P, SanRomán-Calvar JA, Revilla-Orodea A, Martín-Fernández M, Alberola-López C (2011) Unsupervised 4D myocardium segmentation with a Markov Random Field based deformable model. Med Image Anal 15(3):283–301
Mitchell SC, Bosch JG, Lelieveldt BP, Van der Geest RJ, Reiber JH, Sonka M (2002) 3-D active appearance models: segmentation of cardiac MR and ultrasound images. IEEE Trans Med Imaging 21(9):1167–1178
O'Searcoid, M., 2006. Metric spaces. Springer Science & Business Media.
Osher, S., 2003. Level set methods. In Geometric level set methods in imaging, vision, and graphics (pp. 3-20). Springer New York.
Paragios, N., Mellina-Gottardo, O. and Ramesh, V., 2001. Gradient vector flow fast geodesic active contours. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on (Vol. 1, pp. 67-73). IEEE.
Peng, P., Lekadir, K., Gooya, A., Shao, L., Petersen, S.E. and Frangi, A.F., 2016. A review of heart chamber segmentation for structural and functional analysis using cardiac magnetic resonance imaging. Magnetic Resonance Materials in Physics, Biology and Medicine, 29(2), pp.155-195.
Qin X, Tian Y, Yan P., 2015. Feature competition and partial sparse shape modeling for cardiac image sequences segmentation. Neurocomputing 149:904–913
Queirós S, Barbosa D, Heyde B, Morais P, Vilaça JL et al., 2014. Fast automatic myocardial segmentation in 4D cine CMR datasets. Med Image Anal 18(7):1115–1131
Ringenberg J, Deo M, Devabhaktuni V, Filgueiras-Rama D, Pizarro G, Ibanez B et al., 2012. Automated segmentation and reconstruction of patient-specific cardiac anatomy and pathology from in vivo MRI. Meas Sci Technol 23(12):125405
Rosse, C., Gaddum-Rosse, P., 1997. Hollinshead's Textbook of Anatomy, fifth ed. Lippincott-Raven.
Schaerer J, Casta C, Pousin J, Clarysse P, 2010. A dynamic elastic model for segmentation and tracking of the heart in MR image sequences. Med Image Anal 14(6):738–749
Steele, J.M., 2004. The Cauchy-Schwarz master class: an introduction to the art of mathematical inequalities. Cambridge University Press.
Tölli, T., Koikkalainen, J., Lauerma, K. and Lötjönen, J., 2006, October. Artificially enlarged training set in image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 75-82). Springer Berlin Heidelberg.
Van Assen HC, Danilouchkine MG, Frangi AF, Ordás S, Westenberg JJ, Reiber JH, Lelieveldt BP., 2006. SPASM: a 3D-ASM for segmentation of sparse and arbitrarily oriented cardiac MRI data. Med Image Anal 10(2):286–303
Wu, H., Appia, V. and Yezzi, A., 2013a. Numerical conditioning problems and solutions for nonparametric iid statistical active contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6), pp.1298-1311.
Wu, Y., Wang, Y. and Jia, Y., 2013b. Adaptive diffusion flow active contours for image segmentation. Computer Vision and Image Understanding, 117(10), pp.1421-1435.
Xie, X. and Mirmehdi, M., 2008. MAC: Magnetostatic active contour model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(4), pp.632-646.
Xu, C. and Prince, J.L., 1997, June. Gradient vector flow: A new external force for snakes. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (pp. 66-71). IEEE.
Xu, C. and Prince, J.L., 1998a. Generalized gradient vector flow external forces for active contours. Signal Processing, 71(2), pp.131-139.
Xu, C. and Prince, J.L., 1998b. Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing, 7(3), pp.359-369.
Xu, C., Yezzi, A. and Prince, J.L., 2000, October. On the relationship between parametric and geometric active contours. In Signals, Systems and Computers, 2000. Conference Record of the Thirty-Fourth Asilomar Conference on (Vol. 1, pp. 483-489). IEEE.
Yang, Y. and Wu, B., 2012. Split Bregman method for minimization of improved active contour model combining local and global information dynamically. Journal of Mathematical Analysis and Applications, 389(1), pp.351-366.
Yu, C.Y., Zhang, W.S., Yu, Y.Y. and Li, Y., 2013. A novel active contour model for image segmentation using distance regularization term. Computers & Mathematics with Applications, 65(11), pp.1746-1759.
Zhou, H., Li, X., Schaefer, G., Celebi, M.E. and Miller, P., 2013. Mean shift based gradient vector flow for image segmentation. Computer Vision and Image Understanding, 117(9), pp.1004-1016.
Cousty J, Najman L, Couprie M, Clément-Guinaudeau S, Goissen T, Garot J (2010) Segmentation of 4D cardiac MRI: automated method based on spatio-temporal watershed cuts. Image Vis Comput 28(8):1229–1243
Davatzikos, C., Tao, X., Shen, D., 2003. Hierarchical active shape models, using the wavelet transform. IEEE Trans. Med. Imaging 22(3), 414–422.
Dieudonné, J., 2013. Foundations of modern analysis. Read Books Ltd.
Eslami A, Karamalis A, Katouzian A, Navab N (2013) Segmentation by retrieval with guided random walks: application to left ventricle segmentation in MRI. Med Image Anal 17(2):236–253
Fang, L. and Wang, X., 2013. Image segmentation framework using EdgeFlow-based active contours. Optik - International Journal for Light and Electron Optics, 124(18), pp.3739-3745.
Ge, Q., Xiao, L., Huang, H. and Wei, Z.H., 2013. An active contour model driven by anisotropic region fitting energy for image segmentation. Digital Signal Processing, 23(1), pp.238-243.
Guillot, L. and Le Guyader, C., 2009. Extrapolation of vector fields using the infinity Laplacian and with applications to image segmentation. Scale Space and Variational Methods in Computer Vision, pp.87-99.
Hu H, Liu H, Gao Z, Huang L (2013) Hybrid segmentation of left ventricle in cardiac MRI using Gaussian-mixture model and region restricted dynamic programming. Magn Reson Imaging 31(4):575–584
Huang X, Metaxas DN (2008) Metamorphs: deformable shape and appearance models. IEEE Trans Pattern Anal Mach Intell 30(8):1444–1459
Jain, A.K., 1989. Fundamentals of digital image processing. Prentice-Hall, Inc.
Jaouen, V., Gonzalez, P., Stute, S., Guilloteau, D., Chalon, S., Buvat, I. and Tauber, C., 2014. Variational segmentation of vector-valued images with gradient vector flow. IEEE Transactions on Image Processing, 23(11), pp.4773-4785.
Jeffreys, H. and Jeffreys, B., 1999. Methods of mathematical physics. Cambridge University Press.
Jifeng, N., Chengke, W., Shigang, L. and Shuqin, Y., 2007. NGVF: An improved external force field for active contour model. Pattern Recognition Letters, 28(1), pp.58-63.
Kass, M., Witkin, A. and Terzopoulos, D., 1988. Snakes: Active contour models. International Journal of Computer Vision, 1(4), pp.321-331.
Kaus MR, von Berg J, Weese J, Niessen W, Pekar V (2004) Automated segmentation of the left ventricle in cardiac MRI. Med Image Anal 8(3):245–254
Kimmel, R. and Bruckstein, A.M., 2003. Regularized Laplacian zero crossings as optimal edge integrators. International Journal of Computer Vision, 53(3), pp.225-243.
Kovacs, A. and Sziranyi, T., 2012. Harris function based active contour external force for image segmentation. Pattern Recognition Letters, 33(9), pp.1180-1187.
Lang, S., 2013. Complex analysis (Vol. 103). Springer Science & Business Media.
Li, B. and Acton, S.T., 2007. Active contour external force using vector field convolution for image segmentation. IEEE Transactions on Image Processing, 16(8), pp.2096-2106.
Li, C., Liu, J. and Fox, M.D., 2005. Segmentation of external force field for automatic initialization and splitting of snakes. Pattern Recognition, 38(11), pp.1947-1960.
Li, Q., Deng, T. and Xie, W., 2016. Active contours driven by divergence of gradient vector flow. Signal Processing, 120, pp.185-199.
Liu, L., Wu, Y. and Wang, Y., 2009. A novel method for segmentation of the cardiac MR images using generalized DDGVF snake models with shape priors. Inf Technol J, 8(4), pp.486-494.
Lorenzo-Valdés M, Sanchez-Ortiz GI, Elkington AG, Mohiaddin RH, Rueckert D (2004) Segmentation of 4D cardiac MR images using a probabilistic atlas and the EM algorithm. Med Image Anal 8(3):255–265
Lötjönen J, Kivistö S, Koikkalainen J, Smutek D, Lauerma K., 2004. Statistical shape model of atria, ventricles and epicardium from short- and long-axis MR images. Med Image Anal 8(3):371–386
Lu YL, Connelly KA, Dick AJ, Wright GA, Radau PE, 2013. Automatic functional analysis of left ventricle in cardiac cine MRI. Quant Imaging Med Surg 3(4):200
Zhu Y, Papademetris X, Sinusas AJ, Duncan JS. , 2010. Segmentation of the left ventricle from cardiac MR images using a subject-specific dynamical model. IEEE Trans Med Imaging 29(3):669–687
Graphical Abstract