2-D Gabor filter based transition region extraction and morphological operation for image segmentation

Computers and Electrical Engineering 000 (2016) 1–16
Contents lists available at ScienceDirect
Journal homepage: www.elsevier.com/locate/compeleceng

Priyadarsan Parida∗, Nilamani Bhoi
Department of Electronics & Telecommunication Engineering, Veer Surendra Sai University of Technology (VSSUT), Burla, Sambalpur, 768018, Odisha, India

ARTICLE INFO

Article history: Received 18 May 2016; Revised 31 October 2016; Accepted 31 October 2016; Available online xxx
Keywords: Transition region; Thresholding; 2-D Gabor filter (two-dimensional Gabor filter); Morphological operation

ABSTRACT

Transition region-based image segmentation techniques have proved effective due to their simplicity and efficient computation. These techniques greatly depend on the accurate extraction of transition regions. Transition region extraction becomes difficult when there is grey level overlap between foreground and background. Further, the performance of these methods deteriorates when the background and foreground are textured or are of varying intensities. Also, they are applied mostly to single object segmentation. To overcome these shortcomings, we propose a robust hybrid method for segmenting images containing single and multiple objects. The proposed method uses a two-dimensional Gabor filter which enhances the boundaries of object regions for better extraction of transition regions. These transition regions undergo morphological operations to get the object contours and object regions. Finally, objects are extracted from the object regions. Experimental results show that the proposed method yields superior performance for the segmentation of images containing single and multiple objects. © 2016 Elsevier Ltd. All rights reserved.

1. Introduction

Image segmentation has a wide range of applications such as biomedical image analysis, forensics, character recognition, vegetation location, etc. It is thus an essential pre-processing step for all computer vision and image understanding applications. In image segmentation, the foreground (object) is separated from the background based on characteristics like colour, intensity, texture, etc. Existing segmentation methods are classified into four broad categories, i.e., thresholding based methods [1,2], boundary based methods [3], region-based methods and hybrid methods [4]. Thresholding is the simplest method, where it is assumed that the foreground (object) and background have distinct grey levels. This implies that the grey level distribution has two or more distinct peaks, so a carefully chosen threshold can separate them. In thresholding based schemes, segmentation is therefore carried out by assigning the grey values above a threshold to the object and those below it to the background, or vice versa. The boundary based method relies on transitional characteristics such as edges [5] or graph cuts [3] to separate the object from the background. The classical edge operators, i.e., Canny, Prewitt and Sobel [5], are mostly applied in boundary based methods, but these operators suffer from edge discontinuities and are highly sensitive to noise. Graph cut based methods consider the image as a graph where segmentation is achieved by partitioning the graph iteratively. One of the widely used methods, normalized cut [6], achieves better segmentation results. But it

Reviews processed and recommended for publication to the Editor-in-Chief by Area Editor Dr. E. Cabal-Yepez. Corresponding author. E-mail addresses: [email protected] (P. Parida), [email protected] (N. Bhoi).

http://dx.doi.org/10.1016/j.compeleceng.2016.10.019 0045-7906/© 2016 Elsevier Ltd. All rights reserved.

Please cite this article as: P. Parida, N. Bhoi, 2-D Gabor filter based transition region extraction and morphological operation for image segmentation, Computers and Electrical Engineering (2016), http://dx.doi.org/10.1016/j.compeleceng.2016.10.019


Fig. 1. (a) Aeroplane image (b) Rock image. The encircled areas R1 and R2 of the Rock image represent overlapping grey levels.

Fig. 2. Transition region extraction of Aeroplane and Rock images for different methods (a) Original image, (b) LE, (c) MLE, (d) LGLD, (e) Parida et al. [15], (f) RIB [7], (g) Proposed method.

relies on simple images with good contrast [7]. Region based segmentation considers region similarity (region growing) [8] or region dissimilarity (region splitting and merging) [9] for image segmentation. The hybrid method combines two or more approaches to yield better segmentation. Transition region based methods [7,10–15] are recent hybrid methods that combine both the thresholding and the boundary based approach. Existing transition region based methods use a local statistical descriptor (entropy, variance, grey level difference or complexity) for transition region extraction. The transition region was first demonstrated by Gerbrands [10]. The gradient-based transition region measure named effective average gradient (EAG), proposed by Zhang et al. [11], had the limitation of reflecting only abrupt grey level changes rather than frequent grey value changes; it is also sensitive to noise. To ease this limitation, the local entropy (LE) [12] based method was proposed. But it has the limitation that if frequent grey level changes occur in a local neighbourhood, the local entropy increases and the pixels of the neighbourhood are identified as transition region even though they belong to the foreground or background. To overcome these drawbacks, Li et al. [13] developed the local grey level difference (LGLD) based transition region extraction method, which considers both the grey level changes and the extent of these changes. But the parameter selection for determining the threshold is a problem. Later, the modified local entropy (MLE) method [14] was introduced to improve the performance of transition region extraction. This method also suffers from the same problem as LGLD. These techniques are inefficient when the foreground and background are of varying intensities. Further, they are applied mostly to images containing a single object.
A recent transition region based method named robust single-object image segmentation based on salient transition region (RIB), proposed by Li et al. [7], yields good segmentation results, but it is applicable only to images containing a single object. To alleviate this limitation, Parida et al. [15] proposed a morphological operation based method that yields good performance. However, it is unable to give good output when transition region extraction is inaccurate. Hence, we propose a method where the 2-D Gabor filter is used for better extraction of the transition region. Later, morphological operations are used for the separation of foregrounds (objects) from the background. In transition region based techniques, image segmentation depends on efficient extraction of the transition region. Efficient extraction is possible when the background and foreground have distinct grey levels. But when the image has overlapping grey levels, existing transition region based methods face difficulty in extracting the transition region. These cases are illustrated in Figs. 1–3. Fig. 1(a) shows an Aeroplane image taken from the Weizmann [16] database where the background and foreground are simple and have distinct grey values. Fig. 1(b) shows the Rock image from the Weizmann [16] database where regions like ‘R1’ and ‘R2’, which contain background and foreground, have overlapping grey values. Also, the background and foreground of this image are textured. The transition regions produced by various methods for the Aeroplane and Rock images are shown in Fig. 2. From Fig. 2, it is well depicted that all existing methods (LE, MLE, LGLD, Parida et al. [15], and RIB [7]) are able to extract the transition region for the Aeroplane image, but find it difficult to extract transition regions effectively when applied to the Rock image.
To overcome this difficulty, we have used a 2-D Gabor filter which enhances the object boundaries, resulting in better extraction of transition regions. This can be visualized clearly from Fig. 2(g). Because of the better extraction of transition regions, the objects of the image are well separated from the background. Fig. 3 shows the segmentation results of various methods. It can be seen that the proposed method outperforms the other methods in separating objects from the background. The rest of the paper is organized as follows: Section 2 describes the proposed method and the algorithm in brief. Section 3 briefly presents the images along with their corresponding ground truths used in our experiments. The


Fig. 3. Segmentation result of Aeroplane and Rock images for different methods (a) LE, (b) MLE, (c) LGLD, (d) Parida et al. [15], (e) RIB [7], (f) Proposed method.

various performance measures used are given in Section 4. An experiment for determining the parameters of the Gabor filter is presented in Section 5. Section 6 gives the qualitative and quantitative comparison of the proposed method with existing methods. The work is concluded in Section 7.

2. Proposed method

The method begins with an application of 2-D Gabor filters on the original image to get a Gabor feature image. The Gabor feature image has object regions with enhanced boundaries. As the intensity values in these regions are high, the standard deviation of the Gabor feature matrix is calculated and used as a threshold for extraction of the transition region. The transition regions obtained are several pixels wide, so a thinning operation is applied to extract edges of single pixel width. The edges are not continuous and undergo an edge linking process to achieve continuous edges (object contours). Next, a morphological filling operation is employed on the object contours to find the object regions. Finally, the objects are separated from the background in the original grey image using these object regions.

2.1. 2-D Gabor filtering for achieving a Gabor feature image with enhanced object boundaries

Gabor filters have a wide range of applications in computer vision and pattern recognition. They are advantageous because of their invariance to rotation, illumination, scale and translation [17–19]. They can also withstand photometric disturbances such as uneven illumination or noise present in the images. Moreover, their frequency and orientation selectivity resembles that of the human visual system, which has been found useful for texture discrimination [19]. Gabor features are extracted from the original grey image as

G(x, y) = f(x, y) ∗ gf(x, y)    (1)

where, f(x, y) is the original grey image and gf(x, y) is the impulse response of the 2-D Gabor filter. The sign ∗ represents the convolution sum. The Gabor kernel [27] for generating gf(x, y) is defined by ψμ,ν(z) as

ψμ,ν(z) = (‖kμ,ν‖²/σ²) exp(−‖kμ,ν‖²‖z‖²/(2σ²)) [exp(i kμ,ν·z) − exp(−σ²/2)]    (2)

where, μ and ν determine the orientation and scale of the Gabor filter kernel, z = (x, y) and ‖·‖ denotes the norm operator. The wave vector kμ,ν = kν e^(jϕμ), where kν = kmax/λ^ν and ϕμ = πμ/8. The parameter λ represents the spacing between filters in the frequency domain. The first square-bracketed term determines the oscillatory part of the kernel, whereas the second term compensates the DC value of the kernel to avoid an unwanted dependence of the filter response on the image intensity. The parameter kmax is set to the value π/2 as the Gabor kernels are very wide in frequency space [18]. The value of λ is chosen as √2 for better intensification near the transition regions [19]. The Gabor filter is shown in Fig. 4(a). For better visualization, the Gabor filter real coefficients with kmax = π and λ = 4 are displayed in Fig. 4(b). For confirmation of the choice of kmax = π/2 and λ = √2, an experiment is performed which is discussed in Section 5. With these parameters, Gabor filters of 8 orientations are determined. Convolving the image with the 8 Gabor filters generates the Gabor feature matrix. Only the magnitude part is considered as the phase is time varying in nature. The Gabor feature matrix is given as
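The kernel of Eq. (2) and the 8-orientation filter bank can be sketched in a few lines. This is an illustrative sketch, not the authors' MATLAB implementation: the kernel size (31 × 31), σ = 2π, the single scale ν = 1, and pooling the 8 magnitude responses by summation are all assumptions made here for concreteness.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(mu, nu, size=31, sigma=2 * np.pi,
                 k_max=np.pi / 2, lam=np.sqrt(2)):
    """Gabor kernel of Eq. (2) for orientation index mu and scale nu."""
    # wave vector k_{mu,nu} = k_nu * exp(j * phi_mu), phi_mu = pi * mu / 8
    k = (k_max / lam ** nu) * np.exp(1j * np.pi * mu / 8)
    k2 = np.abs(k) ** 2
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    z = x + 1j * y                      # pixel offset (x, y) as a complex number
    z2 = np.abs(z) ** 2
    gauss = (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2 * sigma ** 2))
    # oscillatory carrier minus the DC-compensation term exp(-sigma^2 / 2)
    osc = np.exp(1j * (k * z.conj()).real) - np.exp(-sigma ** 2 / 2)
    return gauss * osc

def gabor_feature(image, kernels):
    """Pool the magnitude responses over orientations (phase discarded)."""
    mags = [np.abs(fftconvolve(image, ker, mode='same')) for ker in kernels]
    return np.sum(mags, axis=0)  # summation over orientations is an assumption

# bank of 8 orientations at a single scale
bank = [gabor_kernel(mu, nu=1) for mu in range(8)]
```

The resulting feature image is then normalized as in Eqs. (4)–(5) before thresholding.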

G = [ r(x0, y0)  r(x1, y0)  …  r(xW, y0)
      r(x0, y1)  r(x1, y1)  …  r(xW, y1)
      …          …          …  …
      r(x0, yH)  r(x1, yH)  …  r(xW, yH) ]    (3)


Fig. 4. (a) Real component of Gabor filter coefficients with kmax = π/2, λ = √2 and μ = 1 with 8 orientations. (b) Real component of Gabor filter coefficients with kmax = π, λ = 4 and μ = 1 with 8 orientations.

where, W and H are the width and height of the feature image. The Gabor feature matrix captures intensity variations near the object boundaries. After convolution, the resultant Gabor feature matrix is normalized using the L2 norm. The L2 norm can be denoted as

g′(x, y) = ‖G(x, y)‖    (4)

The normalization over the L2 norm is performed as

g(x, y) = g′(x, y) / max{g′(x, y)}    (5)

where, g is the Gabor feature image.

2.2. Extraction of transition region

The Gabor feature image obtained in the former step has enhanced object boundaries. Here, the standard deviation is used as the threshold to extract the transition regions. The threshold T is expressed as

T = [ (1/(H × W)) Σ_{x=1}^{H} Σ_{y=1}^{W} (g(x, y) − E)² ]^{1/2}    (6)

where, E is the expected or mean value, which is expressed as

E = (1/(H × W)) Σ_{x=1}^{H} Σ_{y=1}^{W} g(x, y)    (7)

The threshold T is used to determine the transition region according to the following equation

TR(x, y) = 1 if g(x, y) > T;  0 if g(x, y) ≤ T    (8)

where, TR is the transition region of the image.

2.3. Extraction of object contour

For extraction of object contours the transition region has to undergo (A) morphological operations for achieving an edge of single pixel width, and (B) an edge linking operation to extract the object contour.

2.3.1. Morphological operations for achieving an edge of single pixel width

The transition region, which represents the boundaries of the object regions, is several pixels wide. To achieve a boundary of single pixel width, a morphological thinning operation is applied to the transition region. After thinning, the resultant edge image has several isolated pixels, H-connected pixels, and spurious edge pixels [15]. To get rid of these pixels, morphological cleaning, H-break, and spurious removal operations are performed. Finally, an edge of single pixel width is obtained from the transition region.

2.3.2. Edge linking operation to extract the object contour

The edge thus obtained may be discontinuous; to achieve continuity, an edge linking process is performed. Various edge linking algorithms are available in the literature [20,21]. The edge linking process is applied to a pixel when the pixel is 8-adjacent, exhibits 8-connectivity, or is not associated with any other edges. The entire edge linking process is as follows [15]: From each end point (discontinuous edge pixel), 8-connectivity is checked till an end point or junction is encountered. A number is assigned as a label to all end points along with the junctions. The distance between adjacent end points or junctions labelled with different numbers is calculated. If the D4-distance



Fig. 5. Segmentation steps of the proposed method for the Aeroplane image.

Table 1. Classification of various images considering the foreground and background.

Types    Background   Foreground   Single object            Multiple objects
Type-1   Simple       Simple       Aeroplane, Eagle         Bird
Type-2   Textured     Simple       Signboard1, Signboard2   Duck
Type-3   Simple       Textured     Clock, Teddy             Walldecor
Type-4   Textured     Textured     Flower                   Mushroom, Rock

(city block distance) is less than or equal to 10, the adjacent end points/junctions are linked [15]. If the distance threshold exceeds 10, false edge links are formed beyond the object regions to isolated points/lines belonging to the background, which is not desirable in the edge linking process. So, the distance is chosen to be 10. The pixels which remain unlabelled correspond to isolated pixels. Finally, a continuous edge, called the object contour, is obtained from the edge linking operation.

2.4. Extraction of object region

The object contour further undergoes a morphological region filling operation to achieve the object region. Many isolated pixels are still left along with the object regions; to eliminate these, a morphological shrinking operation is performed. Thus, isolated pixels and object regions are separated. The structuring element used for the various morphological operations is of disk type with a radius of 3.

2.5. Extraction of object from the object regions

The object regions obtained in the previous step form a binary image where the object regions are represented as 1 and the background region as 0. The pixels of the object regions are replaced with the pixels of the original image to extract the objects.

2.6. The algorithm

The overall algorithm is summarized below.

A. Apply the 2-D Gabor filter to get a Gabor feature image with enhanced boundaries of object regions.
B. Use the standard deviation of the Gabor feature image as a threshold to extract the transition region.
C. Extract the object contours from the transition region by applying morphological operations and an edge linking operation.
D. Find the object regions from the object contours using morphological region filling and shrinking operations.
E. Extract the objects from the original image using the object regions.

The steps of the above algorithm are illustrated in Fig. 5.
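Steps C–E can be sketched end-to-end with generic binary morphology. This is only an illustrative approximation: the thinning, cleaning, H-break, spurious-removal and D4 edge-linking operations of Sections 2.3–2.4 are stood in for here by a binary closing, and the shrinking step by a binary opening, using SciPy's ndimage routines.

```python
import numpy as np
from scipy import ndimage

def segment(image, transition_region):
    """Approximate steps C-E: contour -> object region -> object."""
    # close small gaps in the transition region
    # (a crude stand-in for thinning followed by edge linking)
    contour = ndimage.binary_closing(transition_region, structure=np.ones((3, 3)))
    # step D: fill the closed contours to obtain the object regions
    region = ndimage.binary_fill_holes(contour)
    # drop isolated specks (stand-in for the morphological shrinking operation)
    region = ndimage.binary_opening(region, structure=np.ones((3, 3)))
    # step E: replace object-region pixels with the original grey values
    out = np.zeros_like(image)
    out[region] = image[region]
    return out
```

Given a closed one-pixel ring as the transition region, the fill step recovers the whole interior, and the final image keeps the original grey values only inside the object region.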

3. The images and ground truths

The various images used for experimentation are taken from the MSRM [22] and Weizmann [16] image databases. The images considered are grouped into four categories based on whether the foreground and background are simple or textured. The different categories of images are given in Table 1. The Aeroplane, Bird, Duck, Walldecor, Mushroom, and Rock images are taken from the Weizmann data set whereas the Eagle, Signboard1, Signboard2, Clock, Teddy and Flower images are taken from the MSRM database. Fig. 6 represents the original grey images along with their ground truths.



Fig. 6. (A) Original images: (a) Aeroplane (b) Eagle (c) Bird (d) Signboard1 (e) Signboard2 (f) Duck (g) Clock (h) Teddy (i) Walldecor (j) Flower (k) Mushroom (l) Rock; (B) Ground truths.

4. Performance measures

The performance of the proposed method along with the existing methods is measured via three mathematical measures: misclassification error (ME) [23,24], false positive rate (FPR) [24] and false negative rate (FNR) [24]. The pixels of the foreground (object) falsely classified as background, or vice versa, are quantified by the misclassification error. The ME is defined as

ME = 1 − (|BO ∩ BT| + |FO ∩ FT|) / (|BO| + |FO|)    (9)

where, BO and FO correspond to the background and foreground pixels in the ground truth image, BT and FT correspond to the background and foreground pixels respectively in the segmented image, and the operator |·| represents the cardinality of a set. The value of ME varies between 0 and 1, where 0 represents errorless segmentation and 1 corresponds to fully erroneous segmentation; the lower the value, the better the segmentation. The FPR and FNR characterize the ME measure more precisely. The FPR is the ratio of the number of background pixels classified as foreground pixels to the total number of background pixels. The FNR is the ratio of the number of foreground pixels classified as background pixels to the total number of foreground pixels. The FPR and FNR are defined as

FPR = |BO ∩ FT| / |BO|    (10)

FNR = |FO ∩ BT| / |FO|    (11)

Like ME, the values of FPR and FNR also vary from 0 to 1. High values of FPR and FNR lead to serious over-segmentation and under-segmentation respectively. In over-segmentation, a portion of the background region appears with the actual foreground in the segmented image, whereas in under-segmentation some portion of the object is missed in the resultant segmented image [25].

5. Determination of parameters for Gabor filters

The parameters kmax and λ affect the performance measures ME, FPR and FNR. The parameter λ is taken as √2 for better intensification near the transition region [19]. To choose the value of kmax, the image is tested with different values of kmax. When kmax > π/2, the result is edges along with many isolated pixels. When kmax < π/2, it gives rise to a wider edge width, which is not desirable. When kmax = π/2, the resultant image has enhanced edges of more than one pixel width and fewer isolated pixels. This can be explained by examining the 4 × 4 window of the filter mask shown in Fig. 7. When kmax = π, the 4-connected neighbours of “1” have small values, so it gives rise to edges with isolated pixels. When kmax = π/2, the weighting value 1 has nearly equal 4-connected neighbours, and the output enhanced edge has more than one pixel width with few isolated pixels. For kmax = π/3, all 8-neighbours of “1” have nearly the same values; hence, on convolving the mask, it blurs the edges instead of enhancing them. The effects of the Gabor filter with different values of kmax are shown in Fig. 8. For confirmation, an experiment is also conducted on different images in which the performance measures (ME, FPR, FNR) are determined for various combinations of kmax and λ. The experiment is performed on four images, i.e. Aeroplane, Signboard1, Teddy and Flower. The values of kmax chosen are π/6, π/3, π/2, 2π/3, π and the values of λ are 1/√2, 1, √2, 2, 2√2.
All 25 combinations of these values are assigned indices given in Table 2. The values of the performance measures of the Aeroplane image for all combinations of kmax and λ are given in Table 3. For each combination,


(a) kmax = π:
0.294  0.106  0      0.263
0.392  1      0.176  0.161
0.161  0.176  1      0.392
0.263  0      0.106  0.294

(b) kmax = π/2:
0.435  0.322  0.058  0
0.722  1      0.816  0.416
0.416  0.816  1      0.722
0      0.058  0.322  0.435

(c) kmax = π/3:
0.651  0.616  0.443  0.255
0.863  1      0.922  0.678
0.678  0.922  1      0.863
0.255  0.443  0.616  0.651

Fig. 7. Gabor window for a single orientation for various values of kmax: (a) kmax = π, (b) kmax = π/2, (c) kmax = π/3.

Fig. 8. Output of Gabor filter on original Teddy image with different values of kmax: (a) kmax = π, (b) kmax = π/2, (c) kmax = π/3.

Table 2. Indices corresponding to different combinations of kmax and λ.

Index: (kmax, λ)
1: (π, 1/√2)     2: (π, 1)      3: (π, √2)      4: (π, 2)      5: (π, 2√2)
6: (2π/3, 1/√2)  7: (2π/3, 1)   8: (2π/3, √2)   9: (2π/3, 2)   10: (2π/3, 2√2)
11: (π/2, 1/√2)  12: (π/2, 1)   13: (π/2, √2)   14: (π/2, 2)   15: (π/2, 2√2)
16: (π/3, 1/√2)  17: (π/3, 1)   18: (π/3, √2)   19: (π/3, 2)   20: (π/3, 2√2)
21: (π/6, 1/√2)  22: (π/6, 1)   23: (π/6, √2)   24: (π/6, 2)   25: (π/6, 2√2)

Table 3. Performance measures of the Aeroplane image for various values of kmax and λ.

Index  kmax   λ      ME      FPR         FNR
1      π      1/√2   0.3127  0.3317      2.5887e−04
2      π      1      0.3107  0.3295      2.5887e−04
3      π      √2     0.0196  0.0102      0.1745
4      π      2      0.0211  0.0089      0.2224
5      π      2√2    0.0296  0.0029      0.4696
6      2π/3   1/√2   0.3120  0.3309      2.5887e−04
7      2π/3   1      0.2910  0.3087      2.5887e−04
8      2π/3   √2     0.0200  0.0089      0.2030
9      2π/3   2      0.0256  0.0085      0.3062
10     2π/3   2√2    0.0460  0.0041      0.7370
11     π/2    1/√2   0.3107  0.3295      2.5887e−04
12     π/2    1      0.0211  0.0089      0.2224
13     π/2    √2     0.0196  0.0102      0.1745
14     π/2    2      0.0296  0.0029      0.4696
15     π/2    2√2    0.0550  2.9857e−04  0.9555
16     π/3    1/√2   0.2910  0.3087      2.5887e−04
17     π/3    1      0.0200  0.0089      0.2030
18     π/3    √2     0.0256  0.0085      0.3062
19     π/3    2      0.0460  0.0041      0.7370
20     π/3    2√2    0.0519  0.0082      0.7706
21     π/6    1/√2   0.0200  0.0089      0.2030
22     π/6    1      0.0256  0.0085      0.3062
23     π/6    √2     0.0460  0.0041      0.7370
24     π/6    2      0.0519  0.0082      0.7706
25     π/6    2√2    0.0570  4.5571e−04  0.9886

Note: The best combination of kmax and λ, for which the values of ME, FPR and FNR are lowest, is marked in bold.



Fig. 9. Indices (corresponding to different combinations of kmax and λ) versus various performance measures (ME, FPR, FNR) for different images: (a) Aeroplane, (b) Signboard1, (c) Teddy, (d) Flower.

the performance measures ME, FPR and FNR are calculated and plotted against the indices. The various plots are shown in Fig. 9(a)–(d). Examining all the plots, it is seen that compromise values of ME, FPR and FNR are achieved for the combination kmax = π/2 and λ = √2.

6. Results and discussion

The entire experimentation process is carried out on a PC having a Core-i3, 1.9 GHz processor and 8 GB RAM. All experiments are performed using MATLAB 7.0. The proposed method is compared with several existing transition region based methods, i.e., LE [12], MLE [14], LGLD [13], Parida et al. [15] and RIB [7], along with the thresholding based methods Otsu [3] and Kapur [4] and a graph cut method NC [6]. The experimentation is done with images containing both single and multiple objects. All images used are 8-bit with different resolutions, to show the effectiveness of the proposed method at various resolutions. To begin our analysis, for each type of image given in Table 1, we have computed the ME, FPR and FNR values listed in Table 4. The best (lowest) values of ME, FPR and FNR among the different methods appear in bold in Table 4. For subjective evaluation, the segmentation results of the various methods are shown in Figs. 10 to 13. The proposed method consumes an average time of 5.04 seconds. For Type-1 images, three images are considered: Aeroplane, Eagle, and Bird. All three images have a simple foreground and simple background. All methods work well in segmenting the Aeroplane image: the method RIB attains the lowest value of ME, whereas the method LE attains the lowest FNR, showing that no part of the object is missed in the segmented image. The values of ME and FNR achieved by RIB are nearly the same as those of the proposed method. For the Eagle image, the LE method gives the lowest FNR, but the image is highly over-segmented, as can be seen from Fig. 10.
For the Bird image, the method LGLD attains the lowest value of ME, which is nearly equal to that of the proposed method. For Type-2 images, three images are considered: Signboard1, Signboard2, and Duck. For Signboard1, the proposed method gives the lowest value of ME, which signifies that its segmentation result is better compared to the other methods. Similarly, for Signboard2 the proposed method achieves the lowest values of ME, FPR, and FNR. For the Duck image, the lowest value of


Table 4. Performance measures (ME, FPR, FNR) of different methods for various types of images.

Type-1 images (simple background & simple foreground)

Aeroplane
  Method               ME       FPR      FNR
  Otsu                 0.0136   0.0015   0.2376
  Kapur                0.0249   0        0.4346
  LE                   0.0630   0.0666   0.0031
  MLE                  0.0263   0.0248   0.0507
  LGLD                 0.0135   0        0.2351
  Parida et al. [15]   0.0179   0.0089   0.1670
  RIB                  0.0095   0.0085   0.0260
  NC                   0.0140   0.0064   0.1380
  Proposed method      0.0211   0.0089   0.2224

Eagle
  Otsu                 0.2446   0.2573   0.0250
  Kapur                0.0377   0.0005   0.6884
  LE                   0.2992   0.3165   0
  MLE                  0.0534   0.0308   0.4460
  LGLD                 0.1991   0.2085   0.0367
  Parida et al. [15]   0.0065   0.0047   0.0378
  RIB                  0.0155   0.0163   0.0017
  NC                   0.2480   0.2608   0.0270
  Proposed method      0.0102   0.0049   0.1013

Bird
  Otsu                 0.0209   0.0103   0.2004
  Kapur                0.0209   0.0103   0.2004
  LE                   0.0939   0.0990   0.0072
  MLE                  0.0654   0.0414   0.4808
  LGLD                 0.0208   0.0099   0.2068
  Parida et al. [15]   0.0274   0.0192   0.1684
  RIB                  0.0365   0.0198   0.3248
  NC                   0.0252   0.0061   0.3503
  Proposed method      0.0256   0.0148   0.2084

Type-2 images (textured background & simple foreground)

Signboard1
  Otsu                 0.8118   0.9923   0.1506
  Kapur                0.9104   0.9749   0.6743
  LE                   0.8661   0.9544   0.5426
  MLE                  0.8047   0.8062   0.7995
  LGLD                 0.9111   0.9475   0.7778
  Parida et al. [15]   0.3899   0.4964   0
  RIB                  0.0593   0.0755   0
  NC                   0.0829   0.0454   0.2203
  Proposed method      0.0497   0.0632   0.0003

Signboard2
  Otsu                 0.4113   0.4809   0.0877
  Kapur                0.8127   0.9876   0
  LE                   0.3261   0.3925   0.0179
  MLE                  0.2290   0.1078   0.7916
  LGLD                 0.4126   0.4677   0.1566
  Parida et al. [15]   0.0579   0.0704   0
  RIB                  0.0381   0.0464   0
  NC                   0.4094   0.4644   0.1540
  Proposed method      0.0350   0.0426   0

Duck
  Otsu                 0.9782   0.9994   0.8258
  Kapur                0.9326   1        0.4498
  LE                   0.1380   0.1392   0.1290
  MLE                  0.1756   0.0957   0.7555
  LGLD                 0.9807   0.9993   0.8476
  Parida et al. [15]   0.0327   0.0239   0.0957
  RIB                  0.0420   0.0056   0.3063
  NC                   0.0263   0.0098   0.2083
  Proposed method      0.0340   0.0198   0.1359

Type-3 images (simple background & textured foreground)

Clock
  Otsu                 0.4271   0.2557   0.7738
  Kapur                0.3085   0.0041   0.9243
  LE                   0.1484   0.0760   0.2950
  MLE                  0.2474   0.0333   0.6819
  LGLD                 0.3918   0.1853   0.8094
  Parida et al. [15]   0.0615   0.0025   0.1809
  RIB                  0.0620   0.0036   0.1835
  NC                   0.3236   0.0588   0.8591
  Proposed method      0.0629   0.0026   0.1849

Teddy
  Otsu                 0.9368   1.0000   0.7047
  Kapur                0.8120   1        0.1207
  LE                   0.0642   0.0810   0.0024
  MLE                  0.1444   0.0720   0.4117
  LGLD                 0.9262   1        0.6548
  Parida et al. [15]   0.0389   0.0031   0.1706
  RIB                  0.0109   0.0133   0.0021
  NC                   0.0460   0.0461   0.2135
  Proposed method      0.0116   0.0015   0.0490

Walldecor
  Otsu                 0.1200   0.0072   0.3241
  Kapur                0.3406   0        0.9570
  LE                   0.9229   0.8843   0.9927
  MLE                  0.0719   0.0475   0.1163
  LGLD                 0.1337   0.0032   0.3699
  Parida et al. [15]   0.0628   0.0208   0.1389
  RIB                  0.1909   0.0275   0.4884
  NC                   0.0845   0.0403   0.1644
  Proposed method      0.0594   0.0220   0.1272

Type-4 images (textured foreground & textured background)

Flower
  Otsu                 0.8910   0.9012   0.8502
  Kapur                0.8171   0.9963   0.1004
  LE                   0.7616   0.8889   0.2528
  MLE                  0.6077   0.5417   0.8724
  LGLD                 0.8417   0.8220   0.9205
  Parida et al. [15]   0.5332   0.4452   0.8860
  RIB                  0.0120   0.0161   0.0075
  NC                   0.0127   0.0109   0.0176
  Proposed method      0.3653   0.4550   0.0056

Mushroom
  Otsu                 0.9496   0.9700   0.8512
  Kapur                0.8732   0.9997   0.2629
  LE                   0.7866   0.9265   0.1114
  MLE                  0.8191   0.8116   0.8549
  LGLD                 0.9154   0.9028   0.9759
  Parida et al. [15]   0.6353   0.7648   0.0101
  RIB                  0.0593   0.0712   0.0025
  NC                   0.0843   0.0027   0.4782
  Proposed method      0.1911   0.2249   0.0281

Rock
  Otsu                 0.4049   0.5133   0.1281
  Kapur                0.2431   0.0226   0.8062
  LE                   0.4803   0.6678   0.0014
  MLE                  0.2069   0.0989   0.4820
  LGLD                 0.3484   0.3898   0.2426
  Parida et al. [15]   0.1809   0.0507   0.5136
  RIB                  0.2687   0.0035   0.9442
  NC                   0.2599   0.2167   0.3701
  Proposed method      0.0890   0.0834   0.1034

Note: The best values of ME, FPR and FNR are marked in bold.

The overall performance of the proposed method is found to be best on Type-2 images, as can be seen from Fig. 11. For the Type-3 category, three images (Clock, Teddy and Walldecor) are taken, of which the first two contain a single object while the third comprises two objects. It can be observed from Fig. 12 that all methods except that of Parida et al. [15] and the proposed method include background pixels along with the object in the segmented result. For the Clock image, the best values of ME, FPR and FNR are achieved by the method of [15], but the proposed method and RIB attain nearly equal values of all three measures. With the Teddy image, the best FPR is obtained by the proposed method, as Fig. 12 also shows. The RIB method achieves the lowest ME, which is nearly equal to that of the proposed method, and the LE method gives the lowest FNR, indicating that its result is not over-segmented. For the Walldecor image, the ME of the proposed method is the lowest; the Kapur method gives the lowest FPR, although its output is highly under-segmented, as can be seen from Fig. 12. In Type-4 images both the foreground and the background are textured; here three images (Flower, Mushroom and Rock) are considered for experimentation. Due to the presence of texture in the background, the existing transition-region-based and thresholding-based methods fail completely in segmenting these images. For the Flower image, the RIB and NC methods give the best (lowest) values of ME and FPR respectively, whereas the proposed method attains the lowest FNR. Similarly, for the Mushroom image, the RIB method achieves the lowest ME and FNR, which are nearly equal to those of the proposed method, whereas NC has the lowest FPR.
When the proposed method is tested on the Rock image it attains the lowest ME, whereas the lowest FPR and FNR are provided by RIB and LE respectively. The segmentation results of the different methods for Type-4 images are depicted in Fig. 13. For a visual comparison of the segmented results, the segmented binary images (segmentation masks) of the existing methods and the proposed method, along with the ground truths, are presented in Fig. 14. It can be inferred from this discussion that no single method yields the best performance for all types of images.
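For reference, the three error measures used throughout this comparison can be computed from a ground-truth mask and a segmentation mask. The following is a minimal NumPy sketch assuming the standard definitions of ME, FPR and FNR (cf. the survey in [24]), with foreground coded as 1 and background as 0; the function name is ours, not from the original article:

```python
import numpy as np

def segmentation_errors(gt, seg):
    """ME, FPR and FNR between two binary masks (1 = foreground).

    Standard definitions (cf. [24]):
      ME  = 1 - (|B_gt & B_seg| + |F_gt & F_seg|) / (|B_gt| + |F_gt|)
      FPR = |B_gt & F_seg| / |B_gt|   (background labelled as object)
      FNR = |F_gt & B_seg| / |F_gt|   (object labelled as background)
    """
    gt = np.asarray(gt).astype(bool)
    seg = np.asarray(seg).astype(bool)
    fg = gt.sum()          # ground-truth foreground pixel count
    bg = (~gt).sum()       # ground-truth background pixel count
    me = 1.0 - ((~gt & ~seg).sum() + (gt & seg).sum()) / (fg + bg)
    fpr = (~gt & seg).sum() / bg
    fnr = (gt & ~seg).sum() / fg
    return me, fpr, fnr
```

A perfect segmentation gives ME = FPR = FNR = 0, matching the zero entries seen in Table 4.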

Fig. 10. Image segmentation results of Type-1 images: Aeroplane, Eagle and Bird: (a) Otsu, (b) Kapur, (c) LE, (d) MLE, (e) LGLD, (f) Parida et al. [15], (g) RIB, (h) NC, (i) Proposed method.

Fig. 11. Image segmentation results of Type-2 images: Signboard1, Signboard2 and Duck: (a) Otsu, (b) Kapur, (c) LE, (d) MLE, (e) LGLD, (f) Parida et al. [15], (g) RIB, (h) NC, (i) Proposed method.

Fig. 12. Image segmentation results of Type-3 images: Clock, Teddy and Walldecor: (a) Otsu, (b) Kapur, (c) LE, (d) MLE, (e) LGLD, (f) Parida et al. [15], (g) RIB, (h) NC, (i) Proposed method.

Fig. 13. Image segmentation results of Type-4 images: Flower, Mushroom and Rock: (a) Otsu, (b) Kapur, (c) LE, (d) MLE, (e) LGLD, (f) Parida et al. [15], (g) RIB, (h) NC, (i) Proposed method.


Fig. 14. Segmented binary image of various methods along with ground truths for Aeroplane, Eagle, Bird, Signboard1, Signboard2, Duck, Clock, Teddy, Walldecor, Flower, Mushroom and Rock images: (a) Ground truth, (b) Otsu, (c) Kapur, (d) LE, (e) MLE, (f) LGLD, (g) Parida et al. [15], (h) RIB, (i) NC, (j) Proposed method.

Therefore, the average performance of the different methods on the various measures is calculated and given in Table 5. To find the average ME for a particular method, the misclassification errors of that method over all of the test images are averaged; the average FPR and FNR are calculated in the same way. From Table 5 it can be seen that RIB has the lowest average ME, 0.0670, and that the proposed method, at 0.0795, is close behind. The average FPRs of the proposed method and RIB are 0.0786 and 0.0256 respectively, so the output of the proposed method retains somewhat more background pixels than that of RIB. The average FNR of the proposed method, 0.0972, is the lowest of all the methods, whereas that of RIB is 0.1905; this indicates that RIB loses many more foreground pixels than the proposed method. Hence, the overall performance of the proposed method can be considered better than that of the other methods. To test robustness, three methods (Parida et al. [15], RIB and the proposed method) are evaluated in noisy environments. Here the Signboard2, Teddy and Flower images are corrupted with additive white Gaussian noise (AWGN) of standard deviations σ = 3, 5, 7, 9, 11, 13 and 15.
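The noisy test inputs can be generated as in the sketch below, which adds zero-mean AWGN of a given standard deviation to an 8-bit grey-scale image. Clipping the result back to [0, 255] is our assumption about how out-of-range values are handled; the original article does not specify this detail:

```python
import numpy as np

def add_awgn(image, sigma, seed=None):
    """Add zero-mean additive white Gaussian noise with standard
    deviation `sigma` to an 8-bit grey-scale image, clipping the
    result back into the valid [0, 255] range."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# The seven noise levels used in Table 6 would then be produced as:
# for sigma in (3, 5, 7, 9, 11, 13, 15):
#     noisy = add_awgn(img, sigma)
```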


Table 5. Average performance of different methods for various performance measures.

  Method               Average ME   Average FPR   Average FNR
  Otsu                 0.5174       0.5324        0.4299
  Kapur                0.5111       0.4996        0.4682
  LE                   0.4125       0.4577        0.1962
  MLE                  0.2872       0.2259        0.5619
  LGLD                 0.5079       0.4946        0.5194
  Parida et al. [15]   0.1704       0.1592        0.1974
  RIB                  0.0670       0.0256        0.1905
  NC                   0.1347       0.0973        0.2667
  Proposed method      0.0795       0.0786        0.0972
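As a check, the averages in Table 5 follow directly from the per-image values in Table 4. For example, the average ME of the proposed method can be recomputed from its twelve ME entries (a short sketch; the list below is transcribed from Table 4 in image order):

```python
# ME of the proposed method for the twelve test images (Table 4):
# Aeroplane, Eagle, Bird, Signboard1, Signboard2, Duck,
# Clock, Teddy, Walldecor, Flower, Mushroom, Rock
me_values = [0.0211, 0.0102, 0.0256, 0.0497, 0.0350, 0.0340,
             0.0629, 0.0116, 0.0594, 0.3653, 0.1911, 0.0890]

average_me = sum(me_values) / len(me_values)
print(f"{average_me:.4f}")  # approximately 0.0796; Table 5 reports 0.0795
```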

Table 6. Performance measures (ME, FPR, FNR) of different methods under various standard deviations (σ) of AWGN for different images.

Signboard2
       Parida et al. [15]            RIB                           Proposed method
  σ    ME      FPR     FNR           ME      FPR     FNR           ME      FPR     FNR
  3    0.0586  0.0712  0             0.0345  0.0419  0             0.0384  0.0467  0
  5    0.0640  0.0779  0             0.0358  0.0435  0             0.0392  0.0477  0
  7    0.0964  0.1172  0             0.0350  0.0426  0             0.0401  0.0487  0
  9    0.1684  0.2047  7.53e-05      0.0390  0.0474  0             0.0429  0.0522  0
  11   0.3549  0.4313  9.41e-05      0.0427  0.0519  0             0.0483  0.0587  0
  13   0.7664  0.9314  9.41e-06      0.0479  0.0582  0             0.0518  0.0630  0
  15   0.7771  0.9443  4.05e-04      0.0615  0.0748  0             0.0505  0.0614  0

Teddy
  3    0.0435  0.0060  0.1814        0.0109  0.0133  0.0020        0.0116  0.0015  0.0489
  5    0.0596  0.0264  0.1818        0.0113  0.0139  0.0018        0.0118  0.0015  0.0497
  7    0.1632  0.1647  0.1575        0.0121  0.0146  0.2754        0.0121  0.0015  0.0513
  9    0.6971  0.8866  0             0.0141  0.0178  0.1194        0.0129  0.0019  0.0536
  11   0.7408  0.9422  0             0.0204  0.0256  0.0011        0.0147  0.0036  0.0555
  13   0.7319  0.9309  0             0.0609  0.0772  8.38e-04      0.0211  0.0107  0.0593
  15   0.7053  0.8971  0             0.3763  0.4782  3.09e-04      0.0344  0.0291  0.0537

Flower
  3    0.0206  0.0210  0.0193        0.0121  0.0163  6.99e-04      0.0170  0.0170  0.0170
  5    0.0302  0.0339  0.0202        0.0125  0.0168  6.64e-04      0.0171  0.0170  0.0173
  7    0.0924  0.1179  0.0230        0.0130  0.0176  6.58e-04      0.0177  0.0178  0.0172
  9    0.4625  0.6281  0.0114        0.0144  0.0195  5.93e-04      0.0215  0.0229  0.0177
  11   0.6428  0.8773  0.0037        0.0194  0.0263  5.23e-04      0.0219  0.0234  0.0181
  13   0.6753  0.9231  0             0.0326  0.0444  4.00e-04      0.0262  0.0292  0.0177
  15   0.6729  0.9198  0             0.3042  0.4152  1.35e-04      0.0314  0.0362  0.0181

Fig. 15. Plot of ME versus noise standard deviation of different methods for various images: (a) Signboard2, (b) Teddy, (c) Flower.

Under these noise conditions, the performance measures ME, FPR and FNR are calculated and shown in Table 6. The ME values for the Signboard2, Teddy and Flower images are plotted against the noise standard deviation in Fig. 15(a)–(c). To assess the qualitative (subjective) performance, the segmentation outputs of these methods are shown in Fig. 16. From Table 6, the proposed method and RIB have nearly the same performance up to σ = 11. At σ = 15, RIB has ME values of 0.3763 and 0.3042 for the Teddy and Flower images respectively, whereas the corresponding values for the proposed method are 0.0344 and 0.0314. This shows that under heavier noise RIB yields substantially higher ME than the proposed method, which is also evident from Fig. 15(a)–(c).


Fig. 16. Effect of Gaussian noise on various segmentation methods with variations in σ = 3, 5, 7, 9, 11, 13, 15 for different images. (a) Original Signboard2 image added with AWGN, (b)–(d) segmented results of different methods: (b) Parida et al. [15], (c) RIB, (d) Proposed method, (e) Original Teddy image added with AWGN, (f)–(h) segmented results of different methods: (f) Parida et al. [15], (g) RIB, (h) Proposed method, (i) Original Flower image added with AWGN, (j)–(l) segmented results of different methods: (j) Parida et al. [15], (k) RIB, (l) Proposed method.

From Fig. 16 we can likewise observe that the proposed method performs better than the others, and that the method of Parida et al. [15] fails under noisy conditions. Hence, the proposed method is more robust than the other methods.
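The robustness gap can also be quantified directly from Table 6, for instance by how much each method's ME grows between the lowest and highest noise levels on the Teddy image (a small sketch using the values transcribed from Table 6):

```python
# Teddy-image ME values from Table 6 at sigma = 3 and sigma = 15
me_sigma3  = {"Parida et al. [15]": 0.0435, "RIB": 0.0109, "Proposed": 0.0116}
me_sigma15 = {"Parida et al. [15]": 0.7053, "RIB": 0.3763, "Proposed": 0.0344}

for method in me_sigma3:
    growth = me_sigma15[method] / me_sigma3[method]
    print(f"{method}: ME grows {growth:.1f}x from sigma=3 to sigma=15")
```

The ME of the proposed method grows by roughly a factor of three, whereas those of the other two methods grow by more than an order of magnitude, consistent with the robustness claim above.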

7. Conclusion

In this paper, we propose a novel hybrid transition region based segmentation method. Unlike conventional transition region based approaches, the proposed method extracts the transition region using a 2-D Gabor filter together with global thresholding. The method works well for images containing both single and multiple objects. It is compared with different approaches both qualitatively and quantitatively. The quantitative measures show that the proposed method has superior performance over the other methods. Moreover, the proposed method suffers little from over-segmentation and under-segmentation, providing good segmentation results with minimal loss of foreground and little inclusion of background. For qualitative analysis, the method is tested on a variety of images containing single and multiple objects, and it is found to outperform the other existing schemes. To verify robustness, the proposed method is also tested in noisy environments, where it demonstrates effective segmentation. Overall, the experimental results show that our method has several good properties, such as low loss of object information, high robustness and better overall performance than other existing methods.


References

[1] Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979;9:62–6. doi:10.1109/TSMC.1979.4310076.
[2] Kapur JN, Sahoo PK, Wong AKC. A new method for gray-level picture thresholding using the entropy of the histogram. Comput Vis Graphics Image Process 1985;29:273–85. doi:10.1016/0734-189X(85)90125-2.
[3] Felzenszwalb PF, Huttenlocher DP. Efficient graph-based image segmentation. Int J Comput Vis 2004;59:167–81. doi:10.1023/B:VISI.0000022288.19776.77.
[4] Salembier P, Marqués F. Region-based representations of image and video: segmentation tools for multimedia services. IEEE Trans Circuits Syst Video Technol 1999;9:1147–69. doi:10.1109/76.809153.
[5] Pollefeys M. Edge detection. In: Computer and machine vision. Elsevier; 2012. p. 1–19. doi:10.1016/B978-0-12-386908-1.00005-7.
[6] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 2000;22:888–905. doi:10.1109/34.868688.
[7] Li Z, Liu G, Zhang D, Xu Y. Robust single-object image segmentation based on salient transition region. Pattern Recogn 2016;52:317–31. doi:10.1016/j.patcog.2015.10.009.
[8] Kang C-C, Wang W-J, Kang C-H. Image segmentation with complicated background by using seeded region growing. AEU Int J Electron Commun 2012;66:767–71. doi:10.1016/j.aeue.2012.01.011.
[9] Ohlander R, Price K, Reddy DR. Picture segmentation using a recursive region splitting method. Comput Graphics Image Process 1978;8:313–33. doi:10.1016/0146-664X(78)90060-6.
[10] Gerbrands JJ. Segmentation of noisy images. Delft, Netherlands: Technische Univ.; 1988.
[11] Zhang YJ, Gerbrands JJ. Transition region determination based thresholding. Pattern Recogn Lett 1991;12:13–23. doi:10.1016/0167-8655(91)90023-F.
[12] Yan C, Sang N, Zhang T. Local entropy-based transition region extraction and thresholding. Pattern Recogn Lett 2003;24:2935–41. doi:10.1016/S0167-8655(03)00154-5.
[13] Li Z, Liu C. Gray level difference-based transition region extraction and thresholding. Comput Electr Eng 2009;35:696–704. doi:10.1016/j.compeleceng.2009.02.001.
[14] Li Z, Zhang D, Xu Y, Liu C. Modified local entropy-based transition region extraction and thresholding. Appl Soft Comput 2011;11:5630–8. doi:10.1016/j.asoc.2011.04.001.
[15] Parida P, Bhoi N. Transition region based single and multiple object segmentation of gray scale images. Eng Sci Technol Int J 2016. doi:10.1016/j.jestch.2015.12.009.
[16] Alpert S, Galun M, Brandt A, Basri R. Image segmentation by probabilistic bottom-up aggregation and cue integration. IEEE Trans Pattern Anal Mach Intell 2012;34:315–27. doi:10.1109/TPAMI.2011.130.
[17] Liu C, Wechsler H. Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Trans Image Process 2002;11:467–76. doi:10.1109/TIP.2002.999679.
[18] Lades M, Vorbrueggen JC, Buhmann J, Lange J, von der Malsburg C, Wuertz RP, et al. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans Comput 1993;42:300–11. doi:10.1109/12.210173.
[19] Zhang W, Shan S, Gao W, Chen X, Zhang H. Local Gabor binary pattern histogram sequence (LGBPHS): a novel non-statistical model for face representation and recognition. In: Proc IEEE Int Conf Comput Vis; 2005. p. 786–91. doi:10.1109/ICCV.2005.147.
[20] Chen Q, Sun Q-S, Heng P-A, Xia D-S. A double-threshold image binarization method based on edge detector. Pattern Recogn 2008;41:1254–67. doi:10.1016/j.patcog.2007.09.007.
[21] Zhijie W, Hong Z. Edge linking using geodesic distance and neighborhood information. In: IEEE/ASME international conference on advanced intelligent mechatronics (AIM); 2008. p. 151–5.
[22] Liu T, Yuan Z, Sun J, Wang J, Zheng N, Tang X, et al. Learning to detect a salient object. IEEE Trans Pattern Anal Mach Intell 2011;33:353–67.
[23] Yasnoff WA, Mui JK, Bacus JW. Error measures for scene segmentation. Pattern Recogn 1977;9:217–31. doi:10.1016/0031-3203(77)90006-1.
[24] Sezgin M, Sankur B. Survey over image thresholding techniques and quantitative performance evaluation. J Electron Imaging 2004;13:220. doi:10.1117/1.1631316.
[25] Feng Y, Shen X, Chen H, Zhang X. A weighted-ROC graph based metric for image segmentation evaluation. Signal Process 2016;119:43–55. doi:10.1016/j.sigpro.2015.07.010.



Nilamani Bhoi received the B.E. degree from Sambalpur University, India, in 1998, the M.E. degree from Jadavpur University, India, in 2001, and the Ph.D. degree in image processing from the National Institute of Technology, Rourkela, India, in 2009. He is an Assistant Professor in Electronics & Telecommunication Engineering, Veer Surendra Sai University of Technology, India. His research interests include image processing and machine learning.

Priyadarsan Parida received the B.Tech. and M.Tech. degrees from Biju Pattnaik University of Technology, India, in 2007 and 2011, respectively. He is currently pursuing the Ph.D. in Electronics & Telecommunication Engineering, Veer Surendra Sai University of Technology, India. His research interests include image processing and pattern recognition.
