An effusion–evaporation model for image edge detection

Optics and Lasers in Engineering 49 (2011) 946–953

Hong Liu a,b, Yaobin Zou a,b, Renchao Jin a,b,*

a School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
b Key Laboratory of Education Ministry for Image Processing and Intelligent Control, Wuhan, Hubei 430074, China

Article history: Received 4 September 2010; Received in revised form 18 December 2010; Accepted 5 February 2011; Available online 25 February 2011

Keywords: Edge detection; Quasi-physical algorithm; Anisotropic diffusion; Bilateral filter

Abstract

A novel quasi-physical edge detection model is presented. The model, referred to as the effusion–evaporation model (EEM), is inspired by the natural phenomenon that water effusing from the ground evaporates in the sunshine and leaves a wire-like water stain on the ground surface, which reflects the physiognomy of the terrain. Based on a simulation of water effusing and evaporating, the EEM regards the complement of the gradient magnitude image as a three-dimensional terrain, and the concave regions, which contain the residual water in the final state of the evolution, are used to determine the edges. Subjective and objective comparisons are performed between the proposed algorithm and two conventional edge detectors, namely Canny and LoG. The comparison results show that the proposed method outperforms the Canny and LoG detectors on real images and on standard test images with Gaussian noise. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Edge detection is one of the fundamental tasks in image processing and computer vision because of its wide use in techniques such as segmentation, object recognition, tracking, stereo analysis, data hiding and image coding [1–3]. The efficacy of these subsequent techniques is heavily affected by the accuracy of edge detection. The task requires edge detectors that are sensitive to changes, suppress areas of constant gray level, and are resistant to noise. Edge detection has been an active area for more than 40 years, and many effective methods have been proposed. Typical algorithms include derivative-based methods [4–6], Gaussian-based methods [7–14], multi-scale methods [15–19] and statistical methods [20–23]. Owing to the complexity of image edge detection, every algorithm has its advantages and disadvantages.

The derivative-based methods [4–6] have no smoothing pre-processing; they usually approximate the first- or second-order derivatives by the corresponding discrete first or second differences. These methods obtain an estimate of the gradient at each pixel of the input image and look for local maxima to detect object edges. Typically, these methods involve little computation time. However, they are very sensitive to noise present in the image. Furthermore, in regions with smooth gray-level variation, the detected edges are thick.

* Corresponding author at: School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China. Tel.: +86 27 8779 2212, +86 132 0710 8527; fax: +86 27 8754 5004. E-mail address: [email protected] (R. Jin).

doi:10.1016/j.optlaseng.2011.02.005

Since real images always contain some kind of noise, pre-smoothing is often required. Most approaches to edge detection, including the Canny [7] and LoG [8] detectors, smooth the image with Gaussian kernels as a pre-processing step to suppress noise. However, image smoothing also blurs sharp edges that carry important information about scene details. It is a well-known problem that the derivative of a Gaussian-smoothed image causes inaccurate edge localization around corner points. Marr and Hildreth [8] pointed out that edges occur at different levels. This implies the need to smooth the input image with Gaussian filters at different scales, as a single Gaussian filter cannot be optimal for all possible levels. As a result, the methods proposed in Refs. [9–14] involve multiple scales of the Gaussian filter. Although these methods seem promising, they face three main challenges: how to select a proper range for the scales, how to combine the outputs corresponding to different scales, and how to adapt automatically to the noise level in the image [24].

In the last decade, multi-scale edge detection has attracted increasing attention. The algorithm based on the standard wavelet transform presented by Mallat et al. is a typical multi-scale edge detection method [15,16]. In their work, edges are classified as singularity points that can be detected as the local maxima of the gradient modulus or the zero-crossings of wavelet coefficients. Such a method keeps the design and the computation simple. However, the standard wavelet transform can capture only very limited directional information, so it is not adequate for treating more complex discontinuities. Recently, a number of methods that provide finer directional analysis have been proposed. The


ridgelet and curvelet transforms were proposed by Candes and Donoho [17,18]. Both transforms show the potential of non-separable methods, but come at a price in design and computational complexity. Most recently, Sheng Yi et al. proposed a shearlet approach to edge detection [19]. Unlike traditional wavelets, shearlets are theoretically optimal in representing images with edges and, in particular, have the ability to fully capture directional and other geometrical features.

The statistical approaches usually consist of dividing the neighborhood of each pixel into two equal parts along a given orientation and using a two-sample statistical test of independence to measure the dissimilarity between the two halves. High values of the dissimilarity indicate the presence of a region boundary. This analysis is repeated for several directions, and the one that gives rise to the maximum dissimilarity is regarded as the local edge direction. Several statistics have been used for this purpose, such as the likelihood ratio [20], the Kolmogorov–Smirnov test [21], the Jensen–Rényi divergence measure [22] and the nonextensive information-theoretic divergence called the Jensen–Tsallis measure [23]. In general, the statistical methods are more effective than the derivative-based methods, but they are computationally more demanding.

This paper proposes a novel edge detection algorithm. The proposed algorithm is inspired by the natural phenomenon that water effusing from the ground evaporates in the sunshine and leaves a wire-like water stain on the ground surface, which reflects the physiognomy of the terrain. If we regard the Complement of the Gradient Magnitude Image (CGMI) as a three-dimensional terrain, the concave regions that contain the residual water can be used to determine the pixels belonging to edges.

Note that some similar concepts can be found in the classical watershed transform, such as viewing a gray-level image as a topographic surface and interpreting the gray level as the height of that surface. But the watershed transform is fundamentally different from our proposed method, since it exploits the flooding or drainage analogy [25]. We performed subjective and objective comparative analyses of the proposed algorithm against the Canny and LoG edge detectors. The comparison results show that the proposed method is robust to Gaussian noise and outperforms the Canny and LoG detectors in accurate edge localization.

2. Proposed edge detection algorithm


The bilateral filter is a nonlinear filter that combines space and range filtering. Given an input image I^(t)(x), where x = (x1, x2) denotes the space coordinates, a discrete version of Gaussian bilateral filtering can be written as follows [28]:

I^(t+1)(x) = [ Σ_{i=−s..+s} Σ_{j=−s..+s} I^(t)(x1+i, x2+j) · w^(t) ] / [ Σ_{i=−s..+s} Σ_{j=−s..+s} w^(t) ]    (1)

with the kernel function w given by

w^(t)(x, x′) = exp( −‖x − x′‖² / (2σ_D²) ) · exp( −(I(x) − I(x′))² / (2σ_R²) )    (2)

where x′ = (x1 + i, x2 + j) denotes a neighboring pixel.

Here s is the half-width of the filter window, and σ_D and σ_R are the standard deviations of the space and range Gaussian filters, respectively. σ_R determines how edges are preserved or blurred. Small values of σ_R preserve almost all edges and noise, and thus yield filters with little effect on the image, whereas for large values, exp(−(I(x) − I(x′))²/(2σ_R²)) approaches 1, turning w^(t)(x, x′) into a standard, linear Gaussian blurring function. For intermediate values of σ_R, w^(t)(x, x′) smooths images while preserving edges, outperforming the standard linear Gaussian filter. Although optimal parameters can be obtained by least-mean-square-error optimization [29,30], here we follow the advice of Ref. [31] and set σ_D = 3 and s = 3, with σ_R set interactively.

2.2. Effusion–evaporation model

The quasi-physical approach is to find a natural phenomenon equivalent to the edge detection problem in the physical world [32,33]. By observing the evolution rules of water, we are inspired to formulate an algorithm that solves the edge detection problem. Imagine some swampy land that is rich in groundwater. The water effusing from the pores of the ground surface naturally pools in concave regions. Meanwhile, the water partly evaporates in the sunshine. When the volume of water evaporation is greater than that of water effusion, some lower regions stay wet, whereas some higher ones become dry. However, not all regions with relatively high elevation are dry: some of them stay wet because they are connected to lower regions. Fig. 1 illustrates this phenomenon. Suppose the initial state is that the whole region is submerged by water. Afterward, water begins to evaporate in the sunshine with a constant

The proposed detector consists of three parts. In the first part, instead of a Gaussian filter, a bilateral filter is applied as a pre-processing step to suppress noise in the image. The second part employs the Sobel operator to obtain the gradient magnitude image and non-maxima suppression to locate the 1-pixel-wide local maxima corresponding to sharp edges; this part is similar to the Canny detector and so is not discussed in detail. The last part is the main process, which detects edges with our quasi-physical idea.

2.1. Bilateral filter based image pre-smoothing

A class of filters, called anisotropic diffusion filters, has the desirable property of edge-preserving smoothing, namely blurring small discontinuities and sharpening edges, as guided by a diffusion conduction function that varies over the image [26]. Since anisotropic diffusion solvers can be extended to larger neighborhoods, a broader class of extended nonlinear diffusion filters can be produced. This class includes bilateral filters, which we prefer because of their larger support size and the fact that they can be implemented quickly [27].
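As a concrete illustration, the discrete bilateral filter of Eqs. (1)–(2) can be sketched in NumPy as follows. This is a minimal, unoptimized sketch under our own assumptions: intensities normalized to [0, 1] and border handling by edge replication, neither of which is specified in the paper.

```python
import numpy as np

def bilateral_filter(img, s=3, sigma_d=3.0, sigma_r=0.1):
    """One pass of the discrete Gaussian bilateral filter of Eqs. (1)-(2).

    s is the half-window size; sigma_d / sigma_r are the standard deviations
    of the space and range Gaussian kernels. img is assumed in [0, 1].
    """
    img = np.asarray(img, dtype=float)
    m, n = img.shape
    out = np.zeros_like(img)
    # the spatial Gaussian depends only on the offset, so precompute it once
    ii, jj = np.mgrid[-s:s + 1, -s:s + 1]
    space_w = np.exp(-(ii ** 2 + jj ** 2) / (2.0 * sigma_d ** 2))
    pad = np.pad(img, s, mode='edge')   # border handling: edge replication
    for x in range(m):
        for y in range(n):
            win = pad[x:x + 2 * s + 1, y:y + 2 * s + 1]
            # range kernel: penalizes intensity differences (edge-preserving)
            range_w = np.exp(-(win - img[x, y]) ** 2 / (2.0 * sigma_r ** 2))
            w = space_w * range_w
            out[x, y] = (win * w).sum() / w.sum()   # Eq. (1)
    return out
```

With the paper's settings (σ_D = 3, s = 3), a small σ_R leaves the image almost untouched, a large σ_R degenerates into plain Gaussian blurring, and intermediate values smooth flat regions while keeping step edges sharp.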

Fig. 1. Effusion–evaporation model (EEM): (a) initial state of the evolution and (b) final state of the evolution.


evaporation rate. At the same time, new water effuses from the ground, and the effusion rate varies with the ground surface elevation: the lower the elevation, the higher the effusion rate. Under the same evaporation rate, some lower regions, such as the region labeled a, remain wet permanently because the volume of water effusion there is greater than that of water evaporation, whereas higher regions, such as those labeled d, eventually become dry. However, some regions, such as b and c, whose elevations are about equal to that of the regions d, remain wet because they are connected to region a, which supplies surplus water to them.

Regarding the gradient magnitudes as geographical elevations, the Complement of the Gradient Magnitude Image (CGMI) can be imagined as a terrain that is rich in groundwater, and each pixel can be imagined as a pore that effuses a certain quantity of water at an effusion rate determined by its elevation. Weak edges and noise have similar gray levels; however, their positions relative to the strong edges differ. Generally, weak edges are closer to the strong edges than noise is. Moreover, weak edges are the natural prolongation of strong ones, whereas noise is scattered randomly, even when it is near the strong edges. Therefore, the regions b and c are more likely to be weak edges, whereas the regions d are more likely to be noise, and the weak edges can be discriminated from the noise with the EEM.

2.3. Edge detection

Let h(x, y) denote the CGMI image (see Steps 1–4 below for its computation), and let h_max represent the maximal gray level of the image h(x, y). Let m and n represent the width and height of h(x, y), respectively. For the sake of clarity, several definitions are introduced first.

Definition 1. Given a CGMI image h(x, y) and a gray level l, a maximal 8-connected region containing pixels with gray levels less than or equal to l is defined as a Connected Water Area (CWA). The gray level l is vividly called the water level.

As the subsequent description shows, the edge map is obtained through the continuous evolution of the CWAs. During the evolution, some new CWAs are formed and some old CWAs disappear. The states of the CWAs are reflected by their residual water volumes. We use the variable r_{k,t} to record these volumes, where the subscript k denotes the k-th CWA and the subscript t the t-th evolution step. For the CGMI image h(x, y), let the initial water level be h_max; there is then only one CWA in the initial state, and its initial residual water volume is

r_{1,0} = Σ_{x=1..m} Σ_{y=1..n} (h_max − h(x, y))    (3)

Definition 2. For a pixel (x, y) with gray level h(x, y), its quantity of water yielded per unit time interval is defined as e(x, y) = (h_max − h(x, y))². For a CWA with d pixels, the total quantity of water yielded per unit time interval is

f = Σ_{i=1..d} e(x_i, y_i) = Σ_{i=1..d} (h_max − h(x_i, y_i))²    (4)

In the initial state, the total quantity of water yielded per unit time interval is

f_initial = Σ_{x=1..m} Σ_{y=1..n} e(x, y)    (5)

Definition 3. Define the evaporation rate of each pixel as v(x, y) = a + p, where a denotes the mean water yield of all pixels per unit time interval and p is used to control the evaporation rate of all pixels. a can be written as

a = f_initial / (m·n)    (6)

For a CWA with d pixels, the total quantity of water evaporating per unit time interval is

u = v·d = (a + p)·d    (7)

Definition 2 indicates that the volume of water effusion depends on the distribution of magnitudes in the gradient image. Once f_initial is fixed, the selection of the parameter p becomes important, since it determines the edge candidates as well as the computation time. As p increases, the effect of noise decreases and the time cost is reduced; however, if p is too large, some weak edge signals cannot be detected. In the proposed method, the parameter p is determined experimentally, and p ∈ [3, 6] yields satisfactory results.

With the above definitions, for an 8-bit gray-level image I(x, y), the proposed edge detection algorithm can be described by the following steps. For the sake of clarity, we use one image as an example to show the step-by-step results of the proposed method (see Fig. 2). Note that Steps 5–10 involve an iterative computation,

Fig. 2. An example showing the step-by-step results of the proposed method: (a) original image; (b) after pre-smoothing with the bilateral filter; (c) gradient magnitude image; (d) after non-maxima suppression; (e) CGMI image; and (f) edge detection result.


therefore we show the final result of the edge detection instead of the result of each iterative step (see Fig. 2(f)).

Step 1: smooth the input image I(x, y) with the bilateral filter described in Section 2.1.

Step 2: compute the gradient orientation and magnitude on the smoothed image. A pair of 3 × 3 convolution masks is used, one for estimating the gradient g_x in the x-direction:

[ −1  0  +1 ]
[ −2  0  +2 ]
[ −1  0  +1 ]

and the other for estimating the gradient g_y in the y-direction:

[ +1  +2  +1 ]
[  0   0   0 ]
[ −1  −2  −1 ]

The gradient magnitude image is approximated as G = sqrt(g_x² + g_y²), and the gradient direction is given by θ = arctan(g_y / g_x).

Step 3: apply non-maxima suppression to the gradient magnitude image G to obtain the image G* [7]. Non-maxima suppression checks whether a pixel is a local maximum along its gradient direction. For example, if the gradient direction θ of the current pixel (x, y) is rounded to 0°, the pixel is considered to be on an edge if its gradient magnitude is greater than those of its neighbors in the west and east directions; in that case let G*(x, y) = G(x, y), otherwise let G*(x, y) = 0. Since the discrete gradient approximation is implemented with 3 × 3 convolution masks, the rounded angle θ can only take the values 0°, 45°, 90°, 135°, 180°, 225°, 270°, or 315°; the other angles are processed similarly. After non-maxima suppression one ends up with an image that is zero everywhere except at the local maxima, where the value of the gradient magnitude is preserved.

Step 4: compute the CGMI image h(x, y) using

h(x, y) = 255 − G*(x, y)    (8)
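Steps 2–4 can be sketched as follows: a naive NumPy version written for clarity, not speed. The rescaling of G* to the 0–255 range before complementing is our assumption (Eq. (8) presupposes 8-bit magnitudes), and ties on gradient-magnitude plateaus are kept, so a step edge may come out two pixels wide in this simplified non-maxima suppression.

```python
import numpy as np

def sobel_gradients(img):
    """Step 2: g_x and g_y via the two 3x3 masks (interior pixels only).
    Here x indexes rows and y indexes columns of the array."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
    m, n = img.shape
    gx, gy = np.zeros((m, n)), np.zeros((m, n))
    for x in range(1, m - 1):
        for y in range(1, n - 1):
            win = img[x - 1:x + 2, y - 1:y + 2]
            gx[x, y] = (win * kx).sum()
            gy[x, y] = (win * ky).sum()
    return gx, gy

def nonmax_suppress(gmag, gdir):
    """Step 3: keep a pixel only if its magnitude is not exceeded by either
    neighbor along the (quantized) gradient direction; ties are kept."""
    m, n = gmag.shape
    out = np.zeros((m, n))
    for x in range(1, m - 1):
        for y in range(1, n - 1):
            a = (np.degrees(gdir[x, y]) + 180.0) % 180.0   # fold to [0, 180)
            if a < 22.5 or a >= 157.5:                     # ~0 deg
                n1, n2 = gmag[x, y - 1], gmag[x, y + 1]
            elif a < 67.5:                                 # ~45 deg diagonal
                n1, n2 = gmag[x - 1, y + 1], gmag[x + 1, y - 1]
            elif a < 112.5:                                # ~90 deg
                n1, n2 = gmag[x - 1, y], gmag[x + 1, y]
            else:                                          # ~135 deg diagonal
                n1, n2 = gmag[x - 1, y - 1], gmag[x + 1, y + 1]
            if gmag[x, y] >= n1 and gmag[x, y] >= n2:
                out[x, y] = gmag[x, y]
    return out

def cgmi(img):
    """Steps 2-4: gradient magnitude, non-maxima suppression, then Eq. (8)."""
    gx, gy = sobel_gradients(img)
    gmag = np.hypot(gx, gy)                 # G = sqrt(gx^2 + gy^2)
    gstar = nonmax_suppress(gmag, np.arctan2(gy, gx))
    if gstar.max() > 0:                     # rescale to 8 bits (our assumption)
        gstar = gstar * (255.0 / gstar.max())
    return 255.0 - gstar                    # Eq. (8)
```

For a step image, the resulting CGMI is 255 (a plateau) everywhere except along the step, where the preserved local maxima form a deep valley, exactly the terrain the EEM floods in the next stage.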

Step 5: initialize the evolution step counter t to 0. Set k = 1, let CWA_1 be the area covering the whole image, and label CWA_1 with 1 to indicate that its evolution is ongoing.

Step 6: according to Definitions 2 and 3, compute the residual water volume of each CWA. For the k-th CWA, its residual water volume r_{k,t+1} at the next step is

r_{k,t+1} = r_{k,t} + f_{k,t} − u_{k,t}    (9)

where r_{1,0} is defined by Eq. (3), and f_{k,t} and u_{k,t} are computed by Eqs. (4) and (7), respectively. If the volume of water effusion is greater than that of water evaporation in the k-th CWA, label the CWA with 2 to indicate that its evolution has stopped; the following Steps 8 and 9 are not applied to such CWAs.

Step 7: if all CWAs are labeled 2, output the pixels of the CWAs labeled 2 as the resulting edge image; otherwise, go to Step 8.

Step 8: label dry pixels according to the residual water volume r of each CWA as follows. For each CWA, use a discrete approach to estimate the water level l. Initially, let the variable l_1 denote the maximal gray level of the current CWA and let l_2 = l_1 − 1 (see Fig. 3(a)); the cubages of the CWA for l_1 and l_2 can then be computed as

b_i = Σ_{(x,y) ∈ CWA} max(l_i − h(x, y), 0),  i = 1, 2    (10)

Note that the gray level of some pixels may be greater than l_2; the term l_i − h(x, y) is taken as 0 for these pixels. If r is greater than or equal to b_1, go to Step 10 directly; otherwise, iteratively decrease l_1 and l_2 by one gray level and recompute b_1 and b_2 until the residual water volume r satisfies b_2 ≤ r < b_1 (see Fig. 3(b)). For each pixel in this CWA whose gray level is greater than l_2, label it with 3 to indicate that it is dry.

Fig. 3. Estimating the level l and labeling dry pixels: (a) initial iterative state and (b) final iterative state.

Step 9: if any pixels are labeled 3, divide the current CWA: replace it by the new CWAs and divide the residual water among them according to their cubages.

Step 10: update the set of all CWAs. Let t = t + 1 and go to Step 6.
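The iterative evolution of Steps 5–10 can be condensed into the following toy implementation. It is a sketch under several of our own assumptions, namely a set-based CWA representation, breadth-first search for the 8-connected components of Definition 1, integer gray levels for the water-level search, and equal sharing of the residual water when all cubages are zero; it is written for readability on small images, not for efficiency.

```python
import numpy as np
from collections import deque

def cwa_components(pixels):
    """Split a pixel set into 8-connected components (Definition 1)."""
    pixels = set(pixels)
    comps = []
    while pixels:
        seed = pixels.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (x + dx, y + dy)
                    if nb in pixels:
                        pixels.remove(nb)
                        comp.add(nb)
                        queue.append(nb)
        comps.append(comp)
    return comps

def eem_edges(h, p=4, max_iter=100):
    """EEM evolution (Steps 5-10) on a CGMI image h; returns the edge map."""
    h = np.asarray(h, dtype=float)
    hmax = h.max()
    e = (hmax - h) ** 2                  # per-pixel water yield (Definition 2)
    v = e.mean() + p                     # evaporation rate a + p (Definition 3)
    pixels = {(x, y) for x in range(h.shape[0]) for y in range(h.shape[1])}
    cwas = [(pixels, float((hmax - h).sum()), 1)]   # Eq. (3); label 1 = ongoing
    for _ in range(max_iter):
        nxt = []
        for pix, r, lab in cwas:
            if lab == 2:                 # evolution of this CWA has stopped
                nxt.append((pix, r, lab))
                continue
            f = sum(e[q] for q in pix)                    # Eq. (4)
            u = v * len(pix)                              # Eq. (7)
            r = r + f - u                                 # Eq. (9), Step 6
            if f >= u:                   # effusion >= evaporation: label 2
                nxt.append((pix, r, 2))
                continue
            # Step 8: lower the water level until the cubage fits r (Eq. (10))
            l1 = max(h[q] for q in pix)
            if r >= sum(max(l1 - h[q], 0.0) for q in pix):
                nxt.append((pix, r, 1))  # still fully covered; go to Step 10
                continue
            lmin = min(h[q] for q in pix)
            l2 = l1 - 1
            while l2 > lmin and sum(max(l2 - h[q], 0.0) for q in pix) > r:
                l2 -= 1                  # integer gray levels assumed
            wet = {q for q in pix if h[q] <= l2}          # pixels not dry
            if not wet:
                continue                                  # whole CWA dried out
            # Step 9: split into new CWAs, sharing r according to their cubages
            comps = cwa_components(wet)
            cubs = [sum(max(l2 - h[q], 0.0) for q in c) for c in comps]
            total = sum(cubs)
            for c, b in zip(comps, cubs):
                share = r * b / total if total > 0 else r / len(comps)
                nxt.append((c, share, 1))
        cwas = nxt
        if all(lab == 2 for _, _, lab in cwas):           # Step 7
            break
    edge = np.zeros(h.shape, dtype=np.uint8)
    for pix, _, lab in cwas:
        if lab == 2:
            for x, y in pix:
                edge[x, y] = 255
    return edge
```

On a CGMI with a single deep valley, the first evolution step dries the flat background, the surviving CWA over the valley yields more water than it evaporates and is labeled 2, and its pixels are reported as edges.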

3. Experimental results and discussion

The need for performance evaluation of edge detection algorithms is now widely recognized, and to date there are two major categories of evaluation methods. One is subjective comparison, which usually runs a proposed method and other methods on standard test images and displays the edge images side by side. The other is objective evaluation, which evaluates different edge detectors within a uniform framework. The algorithm evaluation here has been performed using both standard test images and real images. The results obtained on the real images have been evaluated by comparing them with manual segmentations.

3.1. Subjective evaluation

Several examples on gray-level images are presented to compare the proposed method with the Canny and LoG methods. The house image shown in Fig. 4(a) contains many edges with different directions and is often used in the literature to evaluate the performance of edge detectors. The results of the proposed method, the Canny method and the LoG method are presented in Fig. 4(b)–(d), respectively. The parameters for the proposed method are σ_R = 0.03 and p = 5; for the Canny method, σ = 1, low threshold = 0.04, high threshold = 0.14; for the LoG method, σ = 2. Fig. 4(b) and (c) shows that the proposed method and the Canny method produce similar edge images for the house image, and both correctly detect the important edges of the house. However, in the result of the LoG method (see Fig. 4(d)), some important edges are missing and many false edge pixels appear.

To investigate the performance of the proposed method in noisy environments, we consider a number of degraded images. Fig. 4(e) and (i) shows the noisy house images corrupted by Gaussian white noise with mean 0 and variance 0.005 and 0.01, respectively. As we can see, when Gaussian noise is added


Fig. 4. Comparison between the proposed method and the two classical methods: (a) original house image; (b) result obtained with the EEM (σ_R = 0.03, p = 5); (c) result obtained with Canny (σ = 1.0, low threshold = 0.04, high threshold = 0.14); (d) result obtained with LoG (σ = 2.0); (e) noisy house image (μ = 0, variance 0.005 Gaussian noise); (f) result obtained with the EEM (σ_R = 0.2, p = 5); (g) result obtained with Canny (σ = 1.5, low threshold = 0.04, high threshold = 0.10); (h) result obtained with LoG (σ = 2.5); (i) noisy house image (μ = 0, variance 0.01 Gaussian noise); (j) result obtained with the EEM (σ_R = 0.4, p = 5); (k) result obtained with Canny (σ = 2.0, low threshold = 0.03, high threshold = 0.10); and (l) result obtained with LoG (σ = 3.0).

to the original house image, the proposed method still outperforms the Canny and LoG detectors (see Fig. 4(f)–(h) and (j)–(l)). Even in the noisy image, the proposed method can still detect the important edges correctly, and its result includes far fewer false edge pixels. Moreover, the false edge pixels of the proposed method are scattered points or short line sections, whereas those of the Canny detector are continuous long curves. Since continuous long curves are similar in shape to true edges, they are more difficult to handle in post-processing than scattered points or short line sections.

The second group of experimental results is shown in Fig. 5. The image shown in Fig. 5(a) contains many edges with different directions, and it also includes a random texture pattern in its central region. The results of the proposed method, the Canny method and the LoG method are presented in Fig. 5(b)–(d), respectively. The parameters for the proposed method are σ_R = 0.03 and p = 6; for the Canny method, σ = 1, low threshold = 0.15, high threshold = 0.2; for the LoG method, σ = 2. As shown in Fig. 5(b)–(d), all three methods succeed in detecting the edges of the grids. However, the Canny and LoG methods detect many false edge pixels in the central region of the test image; in contrast, the result of the proposed method appears cleaner. As in the first group of experiments, we also compare the performance of the three methods when Gaussian noise is present in the image. Fig. 5(e) and (i) shows the noisy images corrupted by Gaussian white noise with mean 0 and variance 0.005 and 0.01, respectively, and Fig. 5(f)–(h) and (j)–(l) shows the detection results of the three methods on the noisy images. Again, the proposed method outperforms the other two methods.

To provide further experimental results, another three real images, shown in Figs. 6(a), 7(a) and 8(a), are used for testing. For these three test images, the parameters for the proposed method are σ_R = 0.03 and p = 3; for the Canny method, σ = 1, low threshold = 0.04, high threshold = 0.1; for the LoG method, σ = 2. Comparing the detection results of the three edge detectors, we find that the proposed method correctly detects more edge information than the other two methods while including far fewer false edge pixels.

3.2. Objective evaluation

The objective performance evaluation of image segmentation algorithms is performed with several quantitative measures proposed by Huang and Dom [34]. Concretely, an edge-based evaluation scheme is adopted in this paper. The edge-based approach evaluates edge detection quality in terms of the precision of the extracted edge pixels. Let B represent the edge point set derived from the segmentation and G the ground truth. Two distance distribution signatures are calculated from the experimental results: one from the ground truth to the estimated set, denoted D_BG, and the other from the estimated set to the ground truth, denoted D_GB. Here we take D_GB as an example to introduce its calculation; the calculation of D_BG is similar. Define the distance from an arbitrary point x in set B to G as the minimum absolute distance from x to all the points in G, d(x, G) = min{d_E(x, y)}, ∀y ∈ G, where d_E(x, y) denotes the Euclidean distance between points


Fig. 5. Comparison between the proposed method and the two classical methods: (a) original image; (b) result obtained with the EEM (σ_R = 0.03, p = 6); (c) result obtained with Canny (σ = 1.0, low threshold = 0.15, high threshold = 0.2); (d) result obtained with LoG (σ = 2.0); (e) noisy image (μ = 0, variance 0.005 Gaussian noise); (f) result obtained with the EEM (σ_R = 0.6, p = 6); (g) result obtained with Canny (σ = 1.5, low threshold = 0.15, high threshold = 0.20); (h) result obtained with LoG (σ = 2.5); (i) noisy image (μ = 0, variance 0.01 Gaussian noise); (j) result obtained with the EEM (σ_R = 1.0, p = 6); (k) result obtained with Canny (σ = 2.0, low threshold = 0.15, high threshold = 0.20); and (l) result obtained with LoG (σ = 3.0).

Fig. 6. Comparison between the proposed method and the two classical methods: (a) original image; (b) result obtained with the EEM (σ_R = 0.03, p = 3); (c) result obtained with Canny (σ = 1.0, low threshold = 0.04, high threshold = 0.10); and (d) result obtained with LoG (σ = 2.0).

Fig. 7. Comparison between the proposed method and the two classical methods: (a) original image; (b) result obtained with the EEM (σ_R = 0.03, p = 3); (c) result obtained with Canny (σ = 1.0, low threshold = 0.04, high threshold = 0.10); and (d) result obtained with LoG (σ = 2.0).

Fig. 8. Comparison between the proposed method and the two classical methods: (a) original image; (b) result obtained with the EEM (σ_R = 0.03, p = 3); (c) result obtained with Canny (σ = 1.0, low threshold = 0.04, high threshold = 0.10); and (d) result obtained with LoG (σ = 2.0).

x and y. Then the signature D_GB can be established as D_GB = {d(x, G) | ∀x ∈ B}. A perfect match between B and G should yield zero mean and zero standard deviation, indicating that B and G completely coincide with each other. Generally, a D_GB with a smaller

mean and a smaller standard deviation indicates higher quality of the image segmentation. The experimental materials for objective evaluation are a set of 20 real images and the corresponding manually-specified ground


Table 1. The distance from the estimated edge point set to the ground truth.

Image ID     EEM μ_DGB   EEM σ_DGB   Canny μ_DGB   Canny σ_DGB   LoG μ_DGB   LoG σ_DGB
#22,013      6.2404      7.1449      6.3204        7.3296        6.2819      7.3448
#24,063      2.3979      4.9251      2.8092        6.2531        6.9766      18.0936
#25,098      4.9144      5.794       5.1741        5.8916        5.3735      5.4223
#35,008      6.1275      7.0543      6.8115        7.5735        6.7277      7.424
#37,073      2.9649      4.401       2.9569        4.3979        3.5061      5.7621
#41,004      8.3825      10.3307     10.3207       10.7271       11.165      16.5992
#42,049      3.7169      6.731       4.5345        7.4469        4.477       7.2895
#66,075      9.1099      11.1463     13.6719       16.49         10.2808     9.8347
#106,025     20.1034     26.4597     20.7054       27.02         26.5219     27.4341
#118,035     3.0527      5.7625      3.2507        5.3473        4.4739      7.7865
#126,039     4.8471      6.9085      5.213         7.5575        5.153       7.0252
#140,075     7.0991      8.4929      6.9451        8.398         7.2939      8.5017
#143,090     12.6774     18.7302     12.6195       17.0403       12.8001     16.6625
#145,086     11.7075     29.9625     12.5142       31.1775       26.1341     42.0054
#176,035     15.9087     19.7255     15.7225       18.9008       15.1203     16.2009
#189,003     2.8855      4.2196      2.9516        4.1835        3.1488      3.8745
#208,001     19.5396     21.8417     19.7852       22.299        19.3961     22.3785
#238,011     0.8429      1.2325      0.99498       1.5494        1.1467      1.8516
#241,004     8.1704      15.9093     8.2731        15.6463       13.8985     19.2613
#296,059     13.2782     16.9806     13.7496       17.473        14.3451     16.5786
Mean         8.198345    11.68764    8.766204      12.13512      10.21105    13.36655

Table 2. The distance from the ground truth to the estimated edge point set.

Image ID     EEM μ_DBG   EEM σ_DBG   Canny μ_DBG   Canny σ_DBG   LoG μ_DBG   LoG σ_DBG
#22,013      1.0369      0.95015     1.1233        1.0093        1.2605      1.0895
#24,063      1.7262      3.7301      1.2196        1.9221        2.2732      4.2238
#25,098      0.8428      0.86212     1.1128        0.99979       1.1919      1.0529
#35,008      4.0348      5.7617      5.4515        8.9038        6.1381      8.7755
#37,073      0.91681     0.93261     0.85884       0.88991       1.3804      1.2738
#41,004      1.8501      2.6418      1.126         1.0873        2.2313      5.6591
#42,049      0.67821     0.74643     0.78581       0.8105        0.83613     0.69935
#66,075      1.2592      1.6301      1.5199        1.7321        1.4677      1.6312
#106,025     2.3045      3.1644      3.6749        6.4242        16.7919     22.2702
#118,035     1.7061      1.9999      1.7108        2.0017        1.8052      1.619
#126,039     1.8197      2.593       1.7917        2.3517        2.543       3.4047
#140,075     1.8144      2.1571      2.0007        2.2377        1.9234      1.8967
#143,090     2.0788      2.0563      1.9108        1.7262        3.7247      7.2913
#145,086     2.226       2.9025      2.3968        3.3407        3.0915      4.8001
#176,035     2.6024      4.5751      2.267         4.1189        1.5787      1.8842
#189,003     1.5137      1.8462      1.5284        1.7139        2.1485      3.2017
#208,001     1.4425      1.4382      2.1114        2.5414        1.9902      1.9717
#238,011     1.1655      1.0366      1.2597        1.0782        0.94934     0.76861
#241,004     2.8048      4.786       2.463         4.0047        2.31        4.7143
#296,059     1.3816      1.6691      1.3968        1.3598        1.3932      1.5287
Mean         1.760251    2.373931    1.885488      2.512695      2.851444    3.987818

truth images. The 20 images were selected randomly from the Berkeley Segmentation Dataset and Benchmark [35], and their IDs are listed in the first column of Tables 1 and 2. The quantitative results obtained with the three algorithms, namely EEM, Canny and LoG, over the 20 real images are shown in Tables 1 and 2 in terms of the edge-based evaluation scheme described above. Taking the quality of the results in Tables 1 and 2 into account, the EEM method outperforms the Canny and LoG detectors overall, since its mean and standard deviation of D_GB and D_BG are smaller.
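The distance distribution signatures reported in Tables 1 and 2 can be computed as follows: a brute-force sketch with our own array-of-points interface; for large point sets a k-d tree nearest-neighbor query would be preferable.

```python
import numpy as np

def distance_signature(A, R):
    """Distances from every point of A to its nearest point in the reference
    set R, i.e. {d(x, R) | x in A}. With A = B (estimated edges) and R = G
    (ground truth) this is the signature D_GB; swapping the roles gives D_BG."""
    A = np.asarray(A, dtype=float)          # shape (nA, 2): (row, col) points
    R = np.asarray(R, dtype=float)          # shape (nR, 2)
    diff = A[:, None, :] - R[None, :, :]    # pairwise coordinate differences
    d = np.sqrt((diff ** 2).sum(axis=2))    # pairwise Euclidean distances
    return d.min(axis=1)                    # nearest-neighbor distance per point

def signature_stats(A, R):
    """Mean and standard deviation of a signature, as reported in Tables 1-2."""
    sig = distance_signature(A, R)
    return sig.mean(), sig.std()
```

A perfect match yields zero mean and zero standard deviation; smaller values of both indicate better agreement between the detected edge map and the ground truth.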

4. Conclusions

We proposed a new edge detection method based on an effusion–evaporation model inspired by a natural phenomenon: water effusing from the ground evaporates in the sunshine and leaves a wire-like stain that reflects the physiognomy of the terrain. The proposed algorithm has two key components. The first uses a bilateral filter to pre-smooth the original image. The bilateral filter has the desirable property of edge-preserving smoothing: it can smooth away noise while retaining important edge structures, thus improving the quality of the pre-processed images. The second component simulates the process of water effusing and evaporating. Because of the fluidity of water, edge contours are well connected and isolated false edges are effectively removed. Experimental results from subjective and objective evaluations show that the proposed method is robust for Gaussian noisy images and outperforms the Canny and LoG detectors in edge localization accuracy. Although the proposed method localizes edges more accurately than the Canny and LoG detectors and is more robust to Gaussian noise,
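For the first component, a brute-force bilateral filter can be sketched as follows. This is a generic textbook formulation, not the authors' implementation, and the parameter values (window radius, `sigma_s`, `sigma_r`) are illustrative only:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter for a 2-D grayscale image.

    Each output pixel is a weighted mean of its neighbours, with weights
    falling off with both spatial distance (sigma_s) and intensity
    difference (sigma_r); the range term is what preserves edges.
    """
    img = img.astype(float)
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # Precompute the spatial (domain) Gaussian kernel
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalise large intensity differences so that
            # pixels across an edge contribute almost nothing
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * window).sum() / w.sum()
    return out
```

With a sharp step edge (intensity jump much larger than `sigma_r`), the filter leaves the two sides essentially untouched while still averaging noise within each side, which is exactly the edge-preserving behaviour the pre-smoothing stage relies on.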


it also has its disadvantages. Compared with the Canny and LoG methods, the proposed method requires more running time; its main computational cost arises from the bilateral-filter-based pre-smoothing and from the simulation of the EEM. Furthermore, like the Canny and LoG detectors, the proposed method has difficulty dealing with images corrupted by impulse noise, since the bilateral filter is not a good choice for suppressing impulse noise. In our future work, we will explore how to apply the EEM idea to the segmentation of images corrupted by impulse noise.

Acknowledgements

The authors would like to thank the two anonymous reviewers for their careful reading and for providing insightful comments and suggestions on an earlier version of this paper. This work was supported by the China International Science and Technology Cooperation Project (Grant no. 2009DFA12290), the Guangxi Provincial Plan for Scientific Research and Technology Development (Grant no. 0816004-18) and the Nanning City Plan for Scientific Research and Technology Development (Grant no. 2007011409c).

References

[1] Bernaus RL, Stevenson RL. Edge-assisted upper band coding techniques. Int J Imaging Syst Technol 1999;10:67–75.
[2] Liang LR, Looney CG. Competitive fuzzy edge detection. Appl Soft Comput 2003;3:123–37.
[3] Lin CH, Chan YK, Chen CC. Detection and segmentation of cervical cell cytoplast and nucleus. Int J Imaging Syst Technol 2009;19:260–70.
[4] Haralick R. Digital step edges from zero crossing of second directional derivatives. IEEE Trans Pattern Anal Mach Intell 1984;6:58–68.
[5] Rajab MI, Woolfson MS, Morgan SP. Application of region-based segmentation and neural network edge detection to skin lesions. Comput Med Imaging Graphics 2004;28:61–8.
[6] Pellegrino FA, Vanzella W. Edge detection revisited. IEEE Trans Syst Man Cybern 2004;34:1500–18.
[7] Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 1986;8:679–98.
[8] Marr D, Hildreth E. Theory of edge detection. Proc R Soc London B 1980;207:187–217.
[9] Schunck BG. Edge detection with Gaussian filters at multiple scales. In: Proc IEEE Comput Soc Workshop Comput Vision; 1987. p. 208–10.
[10] Witkin AP. Scale-space filtering. In: Proc Int Joint Conf Artif Intell; 1983. vol. 2, p. 1019–22.
[11] Bergholm F. Edge focusing. IEEE Trans Pattern Anal Mach Intell 1987;9:726–41.


[12] Lacroix V. The primary raster: a multiresolution image description. In: Proc 10th Int Conf Pattern Recognition; 1990. p. 903–7.
[13] Williams DJ, Shah M. Edge contours using multiple scales. Comput Vision Graphics Image Process 1990;51:256–74.
[14] Goshtasby A. On edge focusing. Image Vision Comput 1994;12:247–56.
[15] Mallat S, Zhong S. Characterization of signals from multiscale edges. IEEE Trans Pattern Anal Mach Intell 1992;14:710–32.
[16] Mallat S, Hwang WL. Singularity detection and processing with wavelets. IEEE Trans Inf Theory 1992;38:617–43.
[17] Candes EJ, Donoho DL. Ridgelets: a key to higher-dimensional intermittency? Philos Trans R Soc London A 1999:2495–509.
[18] Candes EJ, Donoho DL. Curvelets: a surprisingly effective nonadaptive representation for objects with edges. In: Cohen A, Rabut C, Schumaker LL, editors. Curve and Surface Fitting. Saint-Malo: Vanderbilt University Press; 1999. p. 105–20.
[19] Yi S, Labate D, Easley GR, Krim H. A shearlet approach to edge analysis and detection. IEEE Trans Image Process 2009;18:929–41.
[20] Huang JS, Tseng DH. Statistical theory of edge detection. Comput Vision Graphics Image Process 1988;43:337–46.
[21] Lim DH, Jang SJ. Comparison of two-sample tests for edge detection in noisy images. Statistician 2002;51:21–30.
[22] Hamza AB, Krim H. Jensen–Renyi divergence measure: theoretical and computational perspectives. In: Proc IEEE Int Symp Inf Theory; Yokohama, Japan; 2003. p. 257.
[23] Hamza AB. A nonextensive information-theoretic measure for image edge detection. J Electron Imaging 2006;15.
[24] Basu M. Gaussian-based edge-detection methods: a survey. IEEE Trans Syst Man Cybern Part C Appl Rev 2002;32:252–60.
[25] Lin YC, Tsai YP, Hung YP, Shih ZC. Comparison between immersion-based and toboggan-based watershed image segmentation. IEEE Trans Image Process 2006;15:632–40.
[26] Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell 1990;12:629–39.
[27] Barash D, Comaniciu D. A common framework for nonlinear diffusion, adaptive smoothing, bilateral filtering and mean shift. Image Vision Comput 2004;22:73–81.
[28] Barash D. A fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation. IEEE Trans Pattern Anal Mach Intell 2002;24:844–7.
[29] Hu H, de Haan G. Trained bilateral filters and applications to coding artifacts reduction. In: Proc 2007 IEEE Int Conf Image Process; Texas, USA; 2007. p. 325–8.
[30] Zhang B, Allebach JP. Adaptive bilateral filter for sharpness enhancement and noise removal. IEEE Trans Image Process 2008;17:664–78.
[31] Paris S, Kornprobst P, Tumblin J, Durand F. A gentle introduction to bilateral filtering and its applications. Course notes for SIGGRAPH 2007. URL: http://people.csail.mit.edu/sparis/bf_course/slides/04_applications_simple_bf.ppt.
[32] Huang W, Jin R. Quasiphysical and quasisociological algorithm Solar for solving SAT problem. Sci China (Ser E) 1999;42:485–93.
[33] Huang WQ, Zhao XW. Quasi-physical and quasi-sociological algorithms for solving the problem of orthogonal arrays. J Comput Res Dev 2002;39:205–12.
[34] Huang Q, Dom B. Quantitative methods of evaluating image segmentation. In: Proc Int Conf Image Process, vol. III; Washington DC; 1995. p. 53–6.
[35] Arbelaez P, Fowlkes C, Martin D. The Berkeley Segmentation Dataset and Benchmark. 2007. URL: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench.