Unsupervised skin lesions border detection via two-dimensional image analysis


Computer Methods and Programs in Biomedicine 104 (2011) e1–e15

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Qaisar Abbas a,b,∗, Irene Fondón c, Muhammad Rashid d

a Department of Computer Science and Technology, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan 430074, China
b Center for Biomedical Imaging and Bioinformatics, Key Laboratory of Image Processing, 1037 Luoyu Road, Wuhan 430074, China
c Department of Signal Theory and Communications, School of Engineering, Path of Discovery, s/n C. P., 41092 Seville, Spain
d Department of Life Science and Technology, Biochemistry and Molecular Biology, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan 430074, China

Article info

Article history: Received 9 November 2009; received in revised form 25 June 2010; accepted 28 June 2010.

Keywords: Skin cancer; Melanoma; Dermoscopy; Artifacts removal; Border detection; Region-based active contour; Exemplar-based image inpainting.

Abstract

Dermoscopy is a helpful tool for dermatologists in the analysis of skin cancer. Classifying melanoma and carcinomas such as basal cell, squamous cell, and merkel cell carcinoma can increase diagnostic sensitivity and specificity. Automated border detection is an important step for the correctness of the subsequent phases in computerized melanoma recognition systems. Dermoscopic images, however, also contain artifacts such as dermoscopy-gel, specular reflection, and outlines (skin lines, blood vessels, and hair or ruler markings). In this paper, we present an unsupervised approach for multiple lesion segmentation that combines artifact reduction steps with a modified Region-based Active Contour (RAC) model. Iterative thresholding is applied to initialize the level set automatically, and the stability of the curves is enforced by a maximum smoothing constraint on the Courant–Friedrichs–Lewy (CFL) function. The work has been tested on a dermoscopic database of 320 images. The border detection error is quantified by five distinct statistical metrics, using borders manually traced by a dermatologist as the ground truth. The segmentation results were compared with other state-of-the-art methods under these evaluation criteria. The unsupervised border detection system increased the true detection rate (TDR) by 4.31% and reduced the false positive rate (FPR) by 5.28%. © 2010 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Skin cancer is the most common cancer in the U.S. [1]. There are two major types of skin cancer [2], namely malignant melanoma and non-melanoma (basal cell, squamous cell, and merkel cell carcinomas, etc.). Melanoma is the more dangerous of the two and can be fatal if untreated. These tumors produce changes in skin texture and color, but they can be cured in more

than 90% of cases if they are detected and treated in the early stages. Some cutaneous tumors remain an enigma. A striking example, perhaps the deadliest of all skin cancers, is merkel cell carcinoma, whose incidence tripled between 1986 and 2001 in the U.S. and Canada [3]. It is well known that early detection and treatment of skin cancer reduce patient mortality and morbidity. Digital dermoscopy [4] is widely considered one of the most cost-effective means to identify and classify skin cancer.

∗ Corresponding author at: Department of Computer Science and Technology Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan 430074, China. Tel.: +86 27 13437168375; fax: +86 27 87559091. E-mail address: [email protected] (Q. Abbas). 0169-2607/$ – see front matter © 2010 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.cmpb.2010.06.016


Fig. 1 – Border detection problems in both clinical and dermoscopic views for various types of lesions: (a) benign melanocytic naevi, (b) malignant melanoma on the scapula, (c) non-pigmented basal cell carcinoma, (d) merkel cell carcinoma, (e) keratosis tumors, (f) non-pigmented tumor.

In order to differentiate benign lesions from malignant ones, expert dermatologists advocate the use of computerized image analysis schemes. An automatic dermoscopy image analysis system [5–9] usually has three stages: (1) boundary detection, (2) feature extraction and selection, and (3) lesion recognition. The border detection step is one of the most important, since it affects the precision of the subsequent steps. Unsupervised border detection is a difficult task due to the great variety of lesion shapes, sizes, and colors along with diverse skin types and textures, as shown in Fig. 1. In addition, artifacts such as dermoscopy-gel (Fig. 1(a)), specular reflection (Fig. 1(b)), and outlines (markers, skin lines, hair; Fig. 1(f)) may decrease the accuracy of border detection in both clinical (Fig. 1(b)) and dermoscopic (Fig. 1(f)) images.

1.1. Background

A well-designed unsupervised segmentation method should be a single system that combines lesion border detection with an artifact reduction step. Both lesion border finding and artifact removal are challenging and vital for fast and precise skin border description. In recent years, many studies have focused on unsupervised and supervised segmentation of lesions in digital dermoscopy; however, none of these efforts has fully addressed the removal of all artifacts. Building on advances in skin imaging technology, Celebi et al. [10] recommended an unsupervised approach to detect borders in dermoscopy images with a statistical region merging algorithm. Celebi et al. [11] also discussed the difficulty and subjectivity of human interpretation versus computerized analysis in terms of denoising and border detection techniques. Moreover, Celebi et al. [12] suggested a modified JSeg algorithm to segment a tumor from its surroundings. A multi-direction gradient vector flow (GVF) snake-based scheme was

proposed by Tang [13]. He put forward an AD filter using adaptive threshold selection and a new gradient computation method, which is robust to noise and can effectively remove hair. Yuan et al. [14] suggested a novel multimodal technique based on region fusion and narrow-band energy graph partitioning for accurate lesion segmentation. In [15], Schmid proposed a method to segment tumors by fuzzy c-means (FCM) clustering and mathematical morphology. Erkol et al. [16] used gradient vector flow (GVF) snakes to find the borders of skin lesions. The region-growing method proposed by Iyatomi et al. [17,18] yields results close to those obtained by dermatologists. Gómez et al. [19] designed an unsupervised algorithm, the Independent Histogram Pursuit (IHP), for segmentation of dermatological lesions. Zhou et al. [20] developed a new mean shift based fuzzy c-means method that requires less computational time than previous techniques while providing good segmentation results. Silveira et al. [21] evaluated six different segmentation systems on three skin tumor types: (1) adaptive thresholding (AT), (2) gradient vector flow (GVF), (3) adaptive snake (AS), (4) the level set method of Chan et al. (C-LS), (5) expectation–maximization level set (EM-LS), and (6) the fuzzy-based split-and-merge algorithm (FBSM).

2. Aim and our approach

Although significant research effort has gone into developing computerized algorithms, reducing artifacts, detecting lesion boundaries, and categorizing lesions remain technical challenges, because the segmentation step is highly dependent on noise such as illumination, dermoscopic-gel or air bubbles, and outlines such as line markers, skin lines, and hair. Three major problems are present in previous studies in particular. The first is that none of the earlier efforts has been fully devoted to developing an efficient routine for every type of artifact attenuation.


Fig. 2 – Flow diagram of the proposed unsupervised skin lesion border detection.

Second, very few studies have dealt with the automatic segmentation of multiple lesions. Third, in some cases of basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and merkel cell carcinoma (MCC) tumors, the skin lesion borders are not well defined, which reduces the accuracy of border detection. In addition, Celebi et al. [11] suggested that image smoothing effectively removes skin lines and blood vessels, that bubbles can be removed by a morphological top-hat operator followed by the radial search procedure introduced by Fleming et al. [22], and that hair can be removed by the algorithms of Wighton et al. [23] and Zhou et al. [24]. Wighton et al. developed hair disocclusion using inpainting with an anisotropic diffusion scheme that performed on average 32.7% better than the linear interpolation of the DullRazor [25] hair-removal algorithm. Lately, Zhou et al. have developed more sophisticated feature-preserving algorithms to remove artifacts such as dermoscopic-gel and hair with the help of curve fitting and exemplar-based image inpainting [26], later extended by Criminisi et al. [27]. Similarly, an unsupervised PDE-based image inpainting algorithm for hair-occluded information was described by Xie et al. [28]. However, these techniques have several drawbacks, including high computational requirements and the fact that the required number of iterations can be unbounded. For automatic border detection, our approach consists of three main steps: (1) image rescaling, (2) preprocessing, and (3) tumor border detection. In the first step, all images are rescaled to a standard size.

In the second step, several filters are used to reduce each type of artifact with the help of homomorphic filtering, weighted median filtering with a local cost function, and an efficient exemplar-based inpainting scheme. In the third step, a region-based active contour model is applied to segment the lesion boundaries. The flow diagram of the unsupervised border detection method is shown in Fig. 2.

3. Materials and methods

3.1. Image dataset

A clinical online database of 320 color dermoscopic and clinical-view lesion images was obtained from various sources, but most images came from the Department of Dermatology [29], Health Waikato, New Zealand. Most of these skin cancer images were captured with a Nikon 995 digital acquisition system. The images are stored in RGB color format with dimensions varying from 640 × 480 to 640 × 405 or from 620 × 300 to 340 × 460. The database was subdivided into six categories comprising 320 images in total: (1) 50 benign melanocytic lesions, (2) 70 malignant melanoma lesions, (3) 40 basal cell carcinoma lesions, (4) 40 merkel cell carcinoma lesions, (5) 90 seborrhoeic keratosis lesions, and (6) 30 non-melanocytic lesions. We left out 60 of the original 380 images, which contain non-lesion objects, to show the significance of the proposed method for tumor delineation.

3.2. Image rescaling

Image rescaling is an initial and necessary step for preprocessing and for comparing segmentation results with other methods. In this step, all images were rescaled to a standard size of 450 × 360 pixels.

3.3. Preprocessing

This step makes accurate detection of the lesion borders possible. Dermoscopic images often contain artifacts, and although many recent studies have addressed some of them, none has dealt with all cases. For this reason, we have developed an effective preprocessing method divided into three steps: (1) specular reflection reduction, (2) air bubble or dermoscopic-gel reduction, and (3) hair and line reduction.

3.3.1. Specular reflection reduction

A dermoscope such as the DermLite has a 10× magnifying lens and 32 light-emitting diodes (LEDs) of polarized light with an output of approximately 1800 foot-candles. When using such imaging devices, non-uniform illumination can produce unsatisfactory border detection results. To address this issue, homomorphic filtering with an FFT and a high-pass filter is used to compensate for uneven illumination or specular reflection variations and to obtain high-contrast lesion images. Homomorphic filtering [30] is a general technique for nonlinear image enhancement and correction; it simultaneously normalizes the brightness across an image and increases contrast. A skin image s(x, y) can be decomposed as the product of illumination I(x, y) and reflectance R(x, y):

s(x, y) = I(x, y) · R(x, y)    (1)

If the illumination and reflectance can be acted upon separately, the illumination problem can be solved. Consequently, the logarithm is applied to Eq. (1):

log[s(x, y)] = log[I(x, y)] + log[R(x, y)], i.e. S(u, v) = I(u, v) + R(u, v)    (2)

Subsequently, the 2-D Fourier transform is applied to Eq. (2) and filtering is performed in the frequency domain; the coordinate variables become u and v, and H(u, v) is the homomorphic filter function applied to the illumination and reflectance. After taking the inverse Fourier transform and the exponential transform, an enhanced skin image s(x, y) is obtained. The H(u, v) used in this paper has the following form:

H(u, v) = (HH − HL) × [1 − exp(−Bw × (D0 / D(u, v)))] + HL    (3)

and

Bw = [1 + (((u/2)² + (v/2)²)^0.5 / α)²]^(−1)    (4)

where Bw is the Butterworth filter calculated by Eq. (4), D(u, v) is the distance between the point (u, v) and the origin in the frequency domain, D0 is the normalized transition, and α is a constant parameter with a value in the range 1.4 ≤ α ≤ 2.5. If the parameters HL and HH are chosen such that HL < 1 and HH > 1, the filter H(u, v) decreases the contribution of the low frequencies (illumination) and amplifies the contribution of the mid and high frequencies (reflectance). Computing the inverse Fourier transform gives:

s(x, y) = I(x, y) + R(x, y)    (5)

To reverse the logarithm applied at the beginning of the process, an exponential function is computed:

I(x, y) = exp[I(x, y)] + exp[R(x, y)]    (6)

The Butterworth formulation allows simultaneous dynamic range compression and contrast enhancement, and we use the Butterworth high-pass filter calculated by Eq. (4). Note that the RGB skin image is first converted to grayscale for this step. To obtain the illumination correction without the effect of this conversion, the image is converted back to RGB using the previously stored HSV color map information with a 0.5 gamma adjustment, as shown in Fig. 3. This figure shows the light-variation results obtained by setting α = 1.5 and α = 1.6 with constant values HL = 0.5 and HH = 1.4. The fixed values of the HL and HH parameters were chosen experimentally by averaging the threshold values over the whole data set.
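For illustration, the correction described by Eqs. (1)–(6) can be sketched in a few lines of NumPy. This is a minimal sketch rather than the authors' Matlab implementation: the function name, the default parameter values, and the exact Butterworth form used for Eq. (4) are our assumptions based on the partly garbled original text.

import numpy as np

def homomorphic_correction(gray, h_l=0.5, h_h=1.4, alpha=1.5, d0=1.0):
    # Illumination correction following Eqs. (1)-(6). `gray` is a float
    # image in [0, 1]; h_l, h_h and alpha play the roles of H_L, H_H and
    # alpha in the text. The Butterworth form below is our reading of Eq. (4).
    rows, cols = gray.shape

    # Eq. (2): take the logarithm so illumination and reflectance add.
    log_img = np.log1p(gray)
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    # Centred frequency coordinates and the distance D(u, v) to the origin.
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    dist = np.sqrt(U**2 + V**2) + 1e-6

    # Eq. (4): Butterworth-style term (reconstruction, may differ in detail).
    bw = 1.0 / (1.0 + (np.sqrt((U / 2.0)**2 + (V / 2.0)**2) / alpha)**2)

    # Eq. (3): high-frequency emphasis between H_L and H_H.
    h = (h_h - h_l) * (1.0 - np.exp(-bw * (d0 / dist))) + h_l

    # Eqs. (5)-(6): filter, invert the transforms and undo the logarithm.
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(h * spectrum)))
    return np.clip(np.expm1(filtered), 0.0, 1.0)

In a pipeline, this would be applied to the greyscale channel of each rescaled image before the colour information is restored as described above.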

3.3.2. Air bubbles or dermoscopic-gel reduction

Dermoscopy images often contain air bubbles or dermoscopic-gel, and it is vital to recognize these artifacts so that they do not interfere with border detection. They were found to possess two distinctive characteristics: speckles appear in their interiors (Fig. 3(b)), and a strong, bright edge appears at their boundaries (Fig. 4(a)). To remove dermoscopic-gel, we utilized the adaptive and recursive weighted median filter developed by Sweet [31], which has an edge-preserving capability. We used this filter to reduce both speckle noise (Fig. 3(b)) and air bubbles (Fig. 4(a)) with a radius of 3 × 3 pixels, white-level pixel weights for the brightest pixels in the range 234–245, and the recursive and adaptive filter type. This practice may introduce some blurring when higher weights and a larger radius are chosen. We therefore define a criterion for replacing pixels that are brighter than their neighbours, adding one step to the pixel-replacement strategy based on the maximum and minimum gradient values in the blue RGB channel, as shown in Fig. 4.


Fig. 3 – An example of the preprocessing step to reduce specular reflection. (a) Original images, (b) after illumination correction with α = 1.5, and (c) with α = 1.6, HL = 0.5, HH = 1.4.

We selected the blue channel because the noise in this channel is typically clearer than in the other channels of the RGB color space. This image gradient information is used to design the local cost function. The cost function Cn(i, j) from pixel i to a neighbouring pixel j is defined as:

Cn(i, j) = (max G(N) − (g(i) − g(j)))^(1/2) / (max G(N) − min G(N) + K)    (7)

Fig. 4 – An example of the preprocessing step to reduce dermoscopic-gel or air bubbles using weighted median filtering. (a) Original ground truth image, (b) maximum gradient magnitude in the red channel, and (c) blue-channel output of the air-bubble reduction step.


where Cn(i, j) denotes the control cost function of the n-pixel map, max G(N) and min G(N) are the maximum and minimum gradient values of the overall image g, and K denotes an intensity value calculated from a histogram of the absolute gradient values over the whole image. The brightest-pixel replacement criterion is then defined by Eq. (8) as:

Cn(i, j) = 1 if g(x + i) or g(y + j) = max G(N) or min G(N), and 0 otherwise    (8)

As shown in Fig. 4, the maximum gradient magnitude of the brightest pixels in the red and blue channels resembles speckle noise. The brightest pixels in this range are selected as air bubbles and replaced using neighbouring pixels. When bright pixels or white-level noise lie inside the tumor region, the pixel-replacement strategy may therefore disturb the tumor structure to some degree; however, we observed that this side effect has a negligible influence on border detection accuracy.
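A compact sketch of this flag-and-replace step is given below. It is our simplified reading of Eqs. (7) and (8) rather than the exact procedure of the paper: the Sobel gradient, the fixed 95% threshold on the maximal gradient, and the plain median fallback (standing in for the adaptive recursive weighted median filter of Sweet [31]) are all assumptions made for illustration.

import numpy as np
from scipy.ndimage import median_filter, sobel

def bubble_pixel_replacement(blue, k=1.0, win=3):
    # `blue` is the blue RGB channel as a float array. Pixels lying on
    # near-maximal gradients (bright bubble rims, our reading of Eq. (8))
    # are replaced by a local median value.
    gx = sobel(blue, axis=1)
    gy = sobel(blue, axis=0)
    grad = np.hypot(gx, gy)
    g_max, g_min = grad.max(), grad.min()

    # Eq. (7): normalised cost of stepping across a gradient difference.
    cost = np.sqrt(np.clip(g_max - grad, 0.0, None)) / (g_max - g_min + k)

    # Eq. (8), simplified: flag pixels whose gradient reaches the extremes.
    flagged = grad >= 0.95 * g_max

    smoothed = median_filter(blue, size=win)
    out = blue.copy()
    out[flagged] = smoothed[flagged]
    return out, cost

The returned cost map corresponds to Eq. (7) and could be thresholded instead of the fixed rule used here.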

3.3.3. Hair and lines reduction

The detection and removal of hair pixels, blood vessels, and skin lines with ruler markings is an important early step in a dermoscopy image analysis system. As pointed out above, many skin lesion artifact removal algorithms have been proposed in the literature [22–25] just to remove hair pixels in dermoscopy images, but these techniques often leave behind undesirable blurring, disturb the texture of the tumor, and cause color bleeding, which makes the color-diffused image very difficult to use for further skin tumor differentiation. In contrast, a new artifact removal algorithm that focuses on accurate detection of curvilinear artifacts and pays special attention to the lesion structure during the removal stage was introduced by Zhou et al. [20]. They confirmed that lines can be accurately detected by explicit curve modeling [32] and then removed by exemplar-based inpainting. This approach effectively removes artifacts such as ruler markings and hair, but it has high computational requirements. To address these issues, we developed a novel method for the removal of hair, blood vessels, and ruler markings using line detection and exemplar-based inpainting [27]. These line structures can be accurately identified with a line detection scheme because they possess characteristic properties such as thickness, magnitude, and length in addition to direction; lesion features, on the other hand, can be distinguished because they do not share these properties. Instead of repeatedly using the local neighborhood average, we utilize an exemplar-based inpainting technique, which actively searches for image regions with characteristics similar to the patch of pixels being replaced.

To detect lines, we propose a line detection procedure based on 2-D derivatives of Gaussian (DOG) [33]. For completeness, we describe this filter briefly here. The response filter for thin and thick line detection is calculated as:

Ri(x, y) = gi(x, y) ∗ i(x, y) + gw(x, y) ∗ i(x, y)    (9)

where i(x, y) is the input image, ∗ is the convolution operator, gi(x, y) is the Gaussian derivative for thin smooth lines calculated by Eq. (10), and gw(x, y) is used for thick line detection by Eq. (11). The 2-D derivative of Gaussian (DOG) filter for smooth thin line detection [33] is defined as:

gi(x, y) = (−x / (√(2π) σi³)) · exp(−x² / (2σi²)),  for |x| ≤ 3σi, |y| ≤ Li/2    (10)

This derivative of Gaussian efficiently detects lines in all directions, where σi is the standard deviation of the Gaussian function at scale i, and Li is the span of the filter in the y direction at that scale. The parameter Li is used to smooth a line along its tangent direction. The rotation of gi(x, y) by an angle θ is applied using gθ(x′, y′) = gi(x, y), where x′ = x cos θ + y sin θ and y′ = y cos θ − x sin θ. Some lines, such as blood vessels, are significantly thicker than normal ruler markings or hair lines. To capture this characteristic, a Gaussian function weighted towards the centre is defined as:

gw(x, y) = cos(πx/(2α)) · cos(πy/α) · exp(−(x² + y²)/(2(0.5α)²))    (11)

where α is a fixed parameter equal to a weighted value of 0.5. This parameter reduces noise while still retaining all the blood vessel information. Blood vessels appear darker than their neighbouring pixels, so we account for this by inverting the convolved image. We repeat this process for values of θ between 0 and π. The following criterion is then used to eliminate superfluous curves:

Ci(x, y) = Ri(x − d, y) + Ri(x + d, y), if Ri(x − d, y) > 0 and Ri(x + d, y) < 0; and Ci(x, y) = 0 otherwise    (12)

where d is used to capture the extent around the "zero-crossing"; the advantage of the d parameter is that it effectively separates smooth thin and thick lines. The line direction is determined as the direction with the maximal centreline response, and the line width can be approximated by measuring the distance between a local maximum and a local minimum along the direction perpendicular to the line. Once the centre, direction, and width of the lines are obtained, they can be interpolated from the information calculated by thresholding this filter response. By computing these properties, unwanted curves that actually belong to the tumor structure can be eliminated. The line segments extracted after filling the line points with green RGB color are shown in Fig. 5(b). After detecting the lines, we adopt the exemplar-based inpainting and patch-ordering mechanism; the details of this algorithm are given in [27]. As an optimization, we only add the target-region lines to be replaced and control the number of iterations.


Fig. 5 – A close-up view of edge and line detection for the original image shown in Fig. 6. (a) False-positive line detections, (b) true-positive line detections, filled with green intensity values by the proposed method.

In this implementation, the contour of the target region is modeled as a dense list of image point locations marked by the green lines shown in Fig. 5(b). The width of the lines varies depending on the line detection algorithm, and these lines are selected automatically by the method. Finally, pixels are classified as belonging to the target region Ω, the source region Φ, or the remainder of the image by assigning contrast values to their alpha component. The image alpha channel is therefore updated (locally) at each iteration (t ≤ 144) of the filling algorithm. The iteration limit t was determined through experiments on the available datasets; in images with a large amount of hair around the tumor, this value should be increased. Moreover, if too many lines lie close to each other, it becomes very difficult both to detect the lines accurately and to replace them with the exemplar-based inpainting method, and if hair is completely surrounded by the tumor surface, the pixel-replacement method may not give satisfactory results. The presented artifact reduction algorithm has been tested on a data set of 147 images, about half of which contain visible artifacts of this kind. The algorithm automatically detects and removes these visible artifacts; examples of artifact reduction results are shown in Fig. 6. Note that, as in other hair detection studies, the pixels associated with hair or ruler markings are picked up while a few non-artifact pixels are falsely selected, and in the presence of a large amount of hair this procedure may damage the structure of the tumor.
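The line detection and mask-driven fill can be sketched as follows. The oriented kernels follow the spirit of Eqs. (9) and (10); the kernel sizes, the number of orientations, the threshold, and the use of OpenCV's diffusion-based cv2.inpaint as a stand-in for the exemplar-based inpainting of Criminisi et al. [27] are our assumptions for illustration and are not the authors' implementation.

import numpy as np
import cv2

def dog_line_kernel(sigma=1.5, length=15, angle_deg=0.0):
    # First derivative of a Gaussian across the line, limited to a span of
    # `length` along it (our reading of Eq. (10)), rotated to `angle_deg`.
    half = int(3 * sigma)
    x = np.arange(-half, half + 1, dtype=np.float32)
    y = np.arange(-(length // 2), length // 2 + 1, dtype=np.float32)
    X, _ = np.meshgrid(x, y)
    g = (-X / (np.sqrt(2 * np.pi) * sigma**3)) * np.exp(-X**2 / (2 * sigma**2))
    centre = ((g.shape[1] - 1) / 2.0, (g.shape[0] - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(centre, angle_deg, 1.0)
    return cv2.warpAffine(g, rot, (g.shape[1], g.shape[0]))

def remove_curvilinear_artifacts(gray_u8, bgr_u8, n_angles=8, thr=6.0):
    # Respond to dark lines (hair, ruler marks) at several orientations,
    # threshold the maximal response into a mask, and fill the mask.
    # cv2.inpaint (Telea) is only a diffusion-based stand-in for the
    # exemplar-based inpainting of Criminisi et al. [27].
    inverted = 255.0 - gray_u8.astype(np.float32)   # lines are darker than skin
    response = np.zeros_like(inverted)
    for k in range(n_angles):
        kern = dog_line_kernel(angle_deg=180.0 * k / n_angles)
        resp = np.abs(cv2.filter2D(inverted, cv2.CV_32F, kern))
        response = np.maximum(response, resp)
    mask = (response > thr * response.mean()).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    return cv2.inpaint(bgr_u8, mask, 5, cv2.INPAINT_TELEA)

In the paper, the detected mask instead drives the patch-ordered exemplar search with the iteration limit t discussed above.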

4. Tumors border detection

Following artifact reduction, we determine lesion borders using a modified region-based active contour (RAC) [34] scheme. Tumor segmentation is critical because lesion borders are often fuzzy or irregular. Several approaches have been developed to segment single or multiple tumors from dermoscopic images; they can be categorized as thresholding, region growing, clustering-based, and active contour (snake) based methods. Within the active contour framework, mostly edge-based or region-based segmentation techniques have been used. These methods demonstrate satisfactory results in the case of a single lesion with clear boundaries

and no extra artifacts. To avoid these problems, two algorithms have been developed: multi-direction gradient vector flow (DGVF) and region fusion with narrow-band energy graph partitioning (NBGP). These methods can effectively segment tumors in the presence of smooth edges, strong hair, and bubbles. However, they have several drawbacks, including difficulty in handling multiple lesions, the need for a proper choice of parameter values, and the requirement of level set re-initialization. To address these issues, in this paper we develop a more localized region-based active contour (RAC) model with constraints to automatically segment skin lesions; a reference implementation is available at [35]. The basic concept of this model is the active contour without edges, known as the Chan–Vese (CV) segmentation technique. The CV snake is an energy minimization method that has been widely used in many computer vision and medical applications. We use this model [34] because it can segment multiple regions simultaneously and because its region-based approach is preferable to edge-based methods. Its advantages for skin lesion segmentation are its ability to separate heterogeneous objects, its insensitivity to noise, and its automatic convergence, along with some control of overlapping contours. In this framework, the localization is presented in an improved form that can segment multiple tumors with "local energies" when "global energies" fail. This implementation of the RAC method works well for reasonable initial curve placement, but requires more time when the contour is placed far away from the tumors; good initialization is therefore essential to increase the convergence speed. For this purpose, we define a level set initialization method based on iterative histogram thresholding. Following this approach, we explain the RAC model for multiple-tumor segmentation based on the CV model in the following sections.

4.1. Automatic level set initialization

In this subsection, the lesion region position is determined and an initial estimate of the lesion shape within the skin image is obtained by performing an iterative thresholding method on luminance values.


Fig. 6 – An example of the preprocessing step for reduction of hair and outlines such as blood vessels or ruler markings. (a, c, e) Original ground truth input images; (b, d, f) results after hair and line artifact reduction.

The monochrome luminance image is calculated by combining the RGB values according to the NTSC 1953 coefficients. Iterative thresholding is then performed by calculating min–max intensity histograms on this luminance image. This type of thresholding is called global thresholding because it uses the histogram, and the iterative scheme was developed by Ridler and Calvard in 1978. In this simple method, the algorithm first computes the mean intensity of the image; next, using this threshold, it computes the mean values of the pixels above and below it; finally, it iterates this procedure until the threshold value stabilises. The technique is illustrated in Fig. 7: the RGB color image is first converted into the luminance image, and histogram thresholding then yields the segmented

binary image shown in Fig. 7(c) and (g). In this image, the tumors are represented by intensity value 0, while the background is represented by value 1. We can therefore initialize a level set curve close to the tumors, as displayed in Fig. 7(d) and (h). Moreover, in order to obtain smooth lesion boundaries, we utilize the localized RAC model in a variational level set framework.
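A minimal sketch of this initialization is shown below, assuming the NTSC luminance weights mentioned in the text; the tolerance, the iteration cap, and the ±1 signed initialization of the level set are our assumptions.

import numpy as np

def ridler_calvard_threshold(lum, tol=0.5, max_iter=100):
    # Iterative threshold selection (Ridler and Calvard, 1978): start at the
    # global mean and move to the midpoint of the two class means until the
    # value stabilises.
    t = lum.mean()
    for _ in range(max_iter):
        low, high = lum[lum <= t], lum[lum > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

def initial_level_set(rgb):
    # Lesions are darker than the surrounding skin, so pixels below the
    # threshold start inside the contour (value 0 in Fig. 7(c) and (g)).
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    t = ridler_calvard_threshold(lum)
    inside = lum < t
    phi = np.where(inside, -1.0, 1.0)   # signed initialisation of phi
    return phi, inside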

4.2. Lesions segmentation model

We then apply a localized region-based active contour model to segment the lesion boundaries. This multiple border detection model is based on an implementation of the Chan–Vese (CV) segmentation technique. In this model, various energy models can be integrated, such as the uniform modeling energy, the mean separation energy, and the histogram separation energy.

Fig. 7 – Level set initialization through adaptive thresholding: (a, e) original input RGB images after the preprocessing steps, (b, f) conversion into a monochrome luminance (greyscale) image, (c, g) initial tumor boundaries determined by thresholding, and (d, h) initialized contour.


However, we use the uniform modeling energy due to its easy implementation. For this local energy, each point is considered individually: to calculate the local energy, the neighbourhoods are split into a local "interior" and "exterior" by the evolving curves. We now briefly describe this active contour model. The Matlab code of the model was downloaded from [35] and integrated into our application together with the level set initialization method and a modified curve stability enforcement. Let the signed distance function be denoted by φ, with zero level set C = {x | φ(x) = 0}. The interior and exterior of C with respect to a smoothed Heaviside function are defined in the same way as in [34]. The generic force function F allows the curve to move freely, and the function B(x, y) denotes the local mask region: it returns 1 when the point y is within a ball of radius r centered at x and 0 otherwise. The value of the radius is an important consideration in this lesion segmentation model, because the choice of r also enforces smoothness inside and outside the contour; for this reason we use r = 2 pixels, a value determined experimentally. Using B(x, y), the energy function defined in [34] in terms of a generic force function is:

E(φ) = ∫Ωx δφ(x) ∫Ωy B(x, y) · F(I(y), φ(y)) dy dx    (13)

where E(φ) represents the energy function for the level set, which only uses points near the contour. Keeping the Dirac delta δφ(x) outside the force integral means that a much broader range of objects can be captured, and the points selected by this function still allow contours to merge and split. For every point selected by the Dirac function δφ(x), the mask B(x, y) ensures that F operates only on local regions about x. Furthermore, in order to keep the curve smooth, a weighted regularization term λ is added [37]. This regularization term is kept constant at 0.4 throughout the experiments; higher values produce smoother results.

E(φ) = ∫Ωx δφ(x) ∫Ωy B(x, y) · F(I(y), φ(y)) dy dx + λ ∫Ωx δφ(x) ‖∇φ(x)‖ dx    (14)

By taking the first variation of this energy with respect to φ, we obtain the evolution equation of the curve, Eq. (15):

∂φ/∂t (x) = δφ(x) ∫Ωy B(x, y) · ∇φ(y) F(I(y), φ(y)) dy + λ δφ(x) div(∇φ(x)/‖∇φ(x)‖)    (15)

From Eq. (15), the only restriction on F is that its first variation with respect to φ can be computed. This means that any region-based energy can be integrated into this framework; for ease of implementation we use the CV energy model.

4.3. Energy minimization

In this subsection, we introduce the energy function and the way this energy is optimized for numerous tumors without overlapping contours. We use the CV energy as a constant-intensity model, namely the uniform modelling energy. For the curvature flow of this energy, the global region-based energy that uses this local intensity model is given by:

∂φ/∂t (x) = δφ(x) ∫Ωy B(x, y) · δφ(y) · [(I(y) − ux)² − (I(y) − vx)²] dy + λ δφ(x) div(∇φ(x)/‖∇φ(x)‖)    (16)

where u and v are the means of the interior and exterior points, which drive the curve. In this localized version of curvature flow, every point on the curve moves according to the best approximation given by the local means ux and vx, which are calculated in the same way as in [34]. By introducing these energies in terms of a signed distance function, the flows are implemented in a level set framework as proposed in [36,37]. In order to improve efficiency [38], we only compute values of φ in a narrow band around the zero level set. Furthermore, in order to segment multiple lesions, we use the method of Brox and Weickert [39], which presents an effective solution for multiple-region segmentation based on the idea of computing N regions. For this purpose, we allow multiple contours to compete with each lesion object on a single interface and to split until the energy no longer decreases at the zero level set. In this case, the scheme for the n level set functions φi(x) is given by:

∂φi/∂t (x) = δφi(x) · [ max over j ≠ i with δφj(x) > 0 of (ai − rj) + min over j ≠ i with δφj(x) > 0 of (ri − aj) ]    (17)

where ai represents the advance force that moves the contour of level set function i forward, and ri is the retreat force that splits and merges contours while preventing overlaps. These parameters are calculated as:

ai = ∫Ωy B(x, y) · δφi(y) · (I(y) − vx)² dy + (λ/2) div(∇φi(x)/‖∇φi(x)‖)    (18)

ri = ∫Ωy B(x, y) · δφi(y) · (I(y) − ux)² dy + (λ/2) div(∇φi(x)/‖∇φi(x)‖)    (19)

We can therefore enforce stability by using the Courant–Friedrichs–Lewy (CFL) condition [40], which moves the curve very fast in the case of multiple tumors. The CFL condition asserts that the numerical waves should propagate at least as fast as the physical waves within a time frame; it is a coarse measure of the stability and convergence of the curves.


However, this stability can be achieved by using a much more restrictive CFL condition, although this is computationally costly. Various methods to enforce curve stability, such as temporal discretization and Runge–Kutta methods, have been proposed in the literature, but they may be too time consuming for multiple objects. In our case, we modify the CFL condition in the RAC model by a maximum smoothing constraint on the level set curves, defined as:

φn+1(x) = φn(x) + D(φ) × ∂φ/∂t (x)    (20)

where the step term D(φ) of the level set function is calculated as:

D(φ) = Sm / (max |∂φ/∂t (x)| + ε)    (21)

where Sm is a speed distance function for the CFL condition, and the maximal curvature flow depends on the cost of introducing this parameter. Choosing an appropriate value of Sm is very important for increasing the speed of computation and obtaining a faster curvature flow. We compute Sm from the mean values of the interior and exterior points along each curvature vector:

Sm = (255 − φn(x) + ux) / vx    (22)

The speed distance function and the maximum smoothing constraint therefore improve the CFL function and, consequently, reduce the number of iterations needed for the contours to converge towards the actual boundaries. By updating the signed distance function φn(x), the curves evolve in a smooth manner. Moreover, we use the fast marching scheme to re-initialize the contours [41].
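One update step of this scheme can be sketched as follows. The sketch uses global interior and exterior means in place of the localized means ux and vx, a smooth Dirac approximation, and a Gaussian re-smoothing in place of the fast marching re-initialization, so it only illustrates the structure of Eqs. (16) and (20)–(22); all function names and defaults are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def curvature(phi):
    # div(grad(phi) / |grad(phi)|) by finite differences.
    fy, fx = np.gradient(phi)
    norm = np.sqrt(fx**2 + fy**2) + 1e-8
    return np.gradient(fx / norm, axis=1) + np.gradient(fy / norm, axis=0)

def level_set_step(phi, img, lam=0.4, eps=1e-6):
    # One uniform-modelling (CV) update with the CFL-style smoothing
    # constraint of Eqs. (20)-(22). Global means u, v stand in for the
    # localized means u_x, v_x of the paper.
    inside = phi <= 0
    u = img[inside].mean() if inside.any() else img.mean()
    v = img[~inside].mean() if (~inside).any() else img.mean()

    dirac = 0.5 * np.exp(-np.abs(phi))          # narrow-band Dirac approximation

    # Eq. (16)-style force plus curvature regularisation.
    force = dirac * ((img - u)**2 - (img - v)**2) + lam * dirac * curvature(phi)

    # Eqs. (21)-(22): step size from the speed term and the maximal flow.
    s_m = (255.0 - np.abs(phi).max() + u) / (v + eps)
    d_phi = s_m / (np.abs(force).max() + eps)

    # Eq. (20): explicit update of the signed distance function.
    phi_new = phi + d_phi * force
    return gaussian_filter(phi_new, sigma=0.5)  # mild re-smoothing stand-in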

5. Evaluation results

The algorithm was applied to 320 images of pigmented and non-pigmented lesions. These were 24-bit RGB color images with dimensions of 640 × 480 pixels. Manual tracing of the tumors was performed by an experienced expert and served as the gold standard for evaluating the performance of the proposed multi-phase region-based active contours (RACs). The algorithm was preliminarily implemented in Matlab 7.6.0.324® (The MathWorks, Natick, MA). On average, the proposed procedures took 2475 ms for segmentation (175 iterations) and 3328 ms for artifact reduction (144 iterations of exemplar-based inpainting) for all tumors present in a 640 × 480 image (rescaled to 450 × 360). The region-based active contour took 119.27 iterations on average to reach a stable border for dermoscopic images. All computations were performed on a 2.3 GHz dual-core 64-bit AMD processor with 4 GB of DDR2 RAM, running Windows 2003 Server edition. As discussed above, quantitative evaluations of the segmentation were also performed using five statistical metrics, and the border detection results were compared with the reference (GT) images.

Table 1 – Parameter values used for experiments and comparisons.

Methods   IT    TS (s)   r (pixels)   λ     α
DGVF      280   0.2      –            1.5   0.5
GPAC      320   0.2      –            1.5   0.5
RACs      175   0.2      2            0.4   0.5

IT: maximum iterations; TS (s): time step in seconds; RACs: region-based active contours; DGVF: directional gradient vector flow; GPAC: graph-based active contours; r (pixels): radius in pixels; λ: regularization parameter; α: elasticity of curve.

In the experiments, the parameters used by each scheme were kept almost identical in all cases and tests. As pointed out above, each image was enhanced using the proposed preprocessing method: homomorphic filtering, weighted median filtering with the control function, and exemplar-based image inpainting for line reduction were applied to reduce the effect of artifacts. To determine the significance of our segmentation method, we compared the border detection results with existing state-of-the-art techniques, which also required setting up a common environment for fixing parameters and threshold values.

5.1. Experimental setup

To assess the efficiency of our method, we used Matlab implementations of the GVF [42], graph-partitioning active contours (GPAC) [43], and color segmentation (JSeg) [44] algorithms. For the multiple GVF snake, we simply added directional gradient information with a varying direction, as done by Tang [13]; we also call this snake model DGVF. The initial parameter settings of each technique are summarized in Table 1. The main deformable-model parameters — elasticity, rigidity, viscosity, and regularization — were set to α = 0.5, β = 0.1, γ = 0.1, and λ = 1.5, respectively. For the DGVF and GPAC snakes, we used the parameter values given in Table 1, and both were initialized with the iterative thresholding method on the luminance channel. For the JSeg algorithm, the whole RGB image was used to segment the tumors based on color and texture information. Finally, the ROI boundaries of the tumors detected by the JSeg, DGVF, and GPAC algorithms were extracted for performance evaluation.

5.2. Statistical analysis

We employed the four metrics used in a recent systematic study [21] to quantify boundary differences, together with the mean border error (MBE). The Hammoude distance (HM), the true detection rate (TDR), and the false positive rate (FPR) are area-based metrics, while the Hausdorff distance (HD) measures the distance between the boundaries in pixels. We denote by AS the automatic segmentation and by GT the ground truth segmentation obtained by the expert. We also calculate the MBE, which measures the pixels for which AS disagrees with GT. These metrics are computed as follows:


Table 2 – Comparison of segmentation results with the proposed region-based active contour for multiple skin lesions across 6 cases.

Case 1: 50 benign melanocytic lesions
Method          HM (%)   TDR (%)   FPR (%)   HD (pixels)   MBE (%)
DGVF            12.74    78.28     12.01     47.81         08.91
JSeg            15.67    79.20     22.81     52.06         19.03
GPAC            13.45    83.10     14.82     32.67         10.03
Proposed RACs   10.25    92.18     04.26     28.95         06.00

Case 2: 70 malignant melanoma lesions
Method          HM (%)   TDR (%)   FPR (%)   HD (pixels)   MBE (%)
DGVF            17.50    83.00     10.11     59.61         14.91
JSeg            19.67    79.70     21.82     78.21         18.16
GPAC            14.67    85.70     16.82     49.45         11.51
Proposed RACs   11.37    93.18     03.78     35.39         02.04

Case 3: 40 basal cell carcinoma lesions
Method          HM (%)   TDR (%)   FPR (%)   HD (pixels)   MBE (%)
DGVF            14.68    76.24     24.48     78.61         16.76
JSeg            16.38    79.34     31.78     44.72         21.47
GPAC            13.10    83.69     11.69     42.34         18.49
Proposed RACs   13.21    94.10     05.48     31.28         08.89

Case 4: 40 merkel cell carcinoma lesions
Method          HM (%)   TDR (%)   FPR (%)   HD (pixels)   MBE (%)
DGVF            21.34    81.28     23.53     47.50         13.41
JSeg            29.67    67.50     25.90     62.20         19.18
GPAC            19.61    77.43     19.60     52.36         16.26
Proposed RACs   14.25    88.17     06.13     24.95         07.02

Case 5: 90 seborrhoeic keratosis lesions
Method          HM (%)   TDR (%)   FPR (%)   HD (pixels)   MBE (%)
DGVF            31.34    78.28     14.57     37.66         09.25
JSeg            35.00    71.20     24.12     42.31         15.52
NBGP            25.67    76.00     19.22     52.45         13.70
Proposed RACs   13.00    79.97     04.03     21.75         02.01

Case 6: 30 non-melanocytic lesions
Method          HM (%)   TDR (%)   FPR (%)   HD (pixels)   MBE (%)
DGVF            22.11    81.24     11.27     68.17         09.62
JSeg            24.64    64.30     14.29     82.28         14.15
GPAC            17.28    79.49     13.69     44.10         08.35
Proposed RACs   14.68    90.18     10.10     36.27         04.48

Total average for proposed RACs: TDR = 92.17%, FPR = 05.62%, MBE = 04.58%.

Segmentation results calculated for the 320 images; the region-based active contour shows the best performance for skin lesion segmentation. HM: Hammoude distance; TDR: true detection rate; FPR: false positive rate; HD: Hausdorff distance; MBE: mean border error; DGVF: directional gradient vector flow; JSeg: texture and color segmentation; GPAC: graph partitioning active contours; NBGP: narrow-band energy graph partitioning; RACs: region-based active contours.

The HM metric makes a pixel-by-pixel comparison, counting the pixels classified as lesion by the dermatologist that were not classified as such by the AS together with the pixels classified as lesion by the automatic segmentation but not by the expert:

Hammoude distance (HM) = (#(AS ∪ GT) − #(AS ∩ GT)) / #(AS ∪ GT)    (23)

The TDR metric measures the rate of pixels classified as lesion by both the automatic and the medical expert segmentation:

True detection rate (TDR) = #(AS ∩ GT) / #(GT)    (24)

The FPR metric measures the rate of pixels classified as lesion by the automatic segmentation that were not classified as lesion by the medical expert:

False positive rate (FPR) = #(AS \ GT) / #(GT)    (25)

The HD metric finds the largest distance between the boundary points. This metric, like the HM, does not distinguish between distances from inside or from outside of the reference curve. Let GT = {gt1, gt2, gt3, ..., gtm} and AS = {AS1, AS2, AS3, ..., ASn} denote the sets of points belonging to the GT and AS contours, respectively. The distance from gti to its closest point (DCP) in AS is given by:

d(gti, AS) = min over j of ‖ASj − gti‖

and the Hausdorff distance is:

HD(GT, AS) = max{ max over i of d(gti, AS), max over j of d(ASj, GT) }    (26)


Fig. 8 – Unsupervised border detection results.

Likewise, the mean border error in percentage is calculated using Eq. (27), where ⊕ is an exclusive-OR operation that gives the pixels for which AS and GT disagree:

Mean border error (MBE) = (area(AS) ⊕ area(GT)) / area(GT) × 100    (27)
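The five metrics can be computed directly from binary masks, as in the sketch below; the 4-neighbour boundary extraction and the use of SciPy's directed Hausdorff helper are our choices, not part of the paper.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def border_metrics(auto_mask, gt_mask):
    # HM, TDR, FPR, HD and MBE (Eqs. (23)-(27)) for binary masks AS and GT.
    AS = auto_mask.astype(bool)
    GT = gt_mask.astype(bool)

    union = np.logical_or(AS, GT).sum()
    inter = np.logical_and(AS, GT).sum()

    hm = (union - inter) / union                              # Eq. (23)
    tdr = inter / GT.sum()                                    # Eq. (24)
    fpr = np.logical_and(AS, ~GT).sum() / GT.sum()            # Eq. (25)
    mbe = 100.0 * np.logical_xor(AS, GT).sum() / GT.sum()     # Eq. (27)

    def boundary_points(mask):
        # Mask pixels with at least one 4-neighbour in the background.
        padded = np.pad(mask, 1, constant_values=False)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        return np.argwhere(mask & ~interior)

    a, g = boundary_points(AS), boundary_points(GT)
    hd = max(directed_hausdorff(g, a)[0], directed_hausdorff(a, g)[0])  # Eq. (26)
    return {"HM": hm, "TDR": tdr, "FPR": fpr, "HD": hd, "MBE": mbe}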

These five statistical measures are adequate to quantify border detection errors. Table 2 shows the border detection performance of each method for (1) benign melanocytic, (2) malignant melanoma, (3) basal cell carcinoma, (4) merkel cell carcinoma, (5) seborrhoeic keratosis, and (6) non-melanocytic lesions. The best result in terms of average mean border error is obtained by the proposed method, with a score of 4.58%, and the best true detection rate, 92.17%, is also achieved by the proposed RACs technique. The best average FPR is obtained by the proposed method (5.62%), reduced from 11.32%. The HD behaves similarly to the HM. The TDR is especially important since it measures the fraction of lesion pixels that are detected across all images. The border detection results produced by our method are highlighted in Fig. 8; for clarity of comparison and explanation of the steps, consider the case of merkel cell carcinoma. An experimental example of one of the results is displayed in Fig. 9. In this experiment, we explored the difficulty of finding accurate image edges where the transition between the lesion and the healthy skin is very smooth. To demonstrate the results for clinical images, we selected a 3 cm merkel cell carcinoma (MCC) on the dorsal left forearm of a 55-year-old woman. This border detection difficulty is encountered even by the detectors found in the literature.

Fig. 9 – Border detection results for a primary 3-cm merkel cell carcinoma (MCC) on the dorsal left forearm of a 55-year-old woman: (a) original MCC tumor, (b) outline manually marked by a dermatologist as the ground truth (GT), and segmentation results obtained with (c) the proposed RACs, (d) GPAC, (e) DGVF, and (f) JSeg algorithms.


Fig. 10 – Cases in which border detection is challenging: (a) skin-colored lesion on the acral volar surface, (b) non-pigmented basal cell carcinoma, (c) malignant melanoma, and (d) molluscum contagiosum tumors.

One can see that many of these methods do not give good results when applied to images of this degree of complexity. This becomes apparent when the result obtained by the proposed method (Fig. 9(c)) is compared with the results obtained by the other three systems: the method proposed here extracts only the actual lesion edges, while the other three methods (DGVF, JSeg, and GPAC) produce many undesirable edges that may belong to the skin.

6. Discussion and conclusion

In this paper, we have presented an algorithm for unsupervised border detection of numerous tumors, which is also very useful for the elimination of artifacts. We have tested this method not only on melanomas but also on pigmented and non-pigmented (basal cell, squamous cell, and merkel cell carcinoma) lesions. The algorithm is robust, efficient, and intuitive, and this type of border detection technique with artifact reduction steps has not been proposed before. The segmentation algorithm delivers good results for the large majority of the tested images, but there are two groups of lesions whose borders are not detected correctly. The first group comprises lesions with areas that are lighter than the skin and situated on the boundary of the lesion; in these cases, the light area is regarded as skin (see Fig. 10 for more details). The other group comprises images with more than one skin lesion lying very close together. In that situation, our border detection can accurately track the outer borders but cannot precisely separate the lesions. If the lesions are well separated, the problem can be solved by letting the physician give the number of lesions as input; when the lesions lie close together, the result is an over-segmentation. In these cases it may be possible to divide the image pixels into three classes, based on the location of the skin mode and the first and second minima. Moreover, the images in the training data set were taken from people with fair (white) skin, as this is the group most likely to develop melanoma; even so, there is no guarantee that a person with darker skin will never develop melanoma. The border detection algorithm therefore has to be tested on images taken from people with darker skin to verify that it is independent of skin color; if it turns out that the method works only on images from fair-skinned people, this issue will have to be addressed.

The evaluation of the 320 test images shows that border detection most often fails when the lesion area diffuses into the skin or is not well separated from it. The hair-removal algorithm performs as intended, removing hair that would otherwise affect the border detection. Testing the border detection algorithm on a sample with more malignant melanomas still needs to be done to ensure that the algorithm performs well in all cases. The evaluation of the border detection performance is also somewhat limited: at least one more expert dermatologist should evaluate the borders, and more lesions from the test sample should be included in the evaluation. Moreover, as Fig. 10(c) shows, it is difficult for this model to differentiate between skin lesions and non-lesion objects, especially in clinical images, and segmenting multiple tumors from a single contour initialization is also hard, as shown in Fig. 10(d). Accordingly, this segmentation model could be enhanced by a shape prior constraint, automatic contour initialization, and better selection of the constant parameters. The proposed techniques were evaluated for unsupervised border detection of skin lesions in both clinical and dermoscopic views. The method was applied to a set of 320 dermoscopic images from the database, including 50 benign melanocytic, 70 malignant melanoma, 40 basal cell carcinoma, 40 merkel cell carcinoma, 90 keratosis, and 30 non-melanocytic lesions, segmented manually by an expert dermatologist. The output of the automatic border detection was compared with the manually segmented images using five metrics (Hammoude distance, TDR, FPR, and Hausdorff distance, together with MBE) against three other schemes (JSeg, GPAC, and DGVF). The TDR was considered the most relevant metric from a clinical point of view, and the algorithm is robust on this data set, with an average mean border error of 4.58%. The best results were obtained by the presented algorithm, including the cases with multiple tumors present in the selected data set. Several potential improvements have been developed in this study. These novel ideas for skin cancer analysis for dermatologists include: (1) a new and efficient method for analyzing skin images with multiple lesions; (2) faster convergence in the case of multiple lesions through level set initialization by iterative thresholding and a CFL condition modified by maximum smoothing constraints; (3) a solution for better reduction of artifact noise such as camera flash, bubbles, and lines such as skin lines


or ruler markings, hair, and blood vessels; and (4) the inclusion of BCC and MCC lesions, for which early and correct diagnosis is also important. In addition, we have performed a separate evaluation of the border detection for the various types of lesions. This kind of comparison has also been addressed to some extent in [21], which considered only melanoma and basal cell carcinoma lesions and reported an increased TDR for the melanoma cases but also an increased FPR for basal cell carcinoma, interpreted as a consequence of the low contrast between the lesion and the skin that characterizes this type of lesion. We found that both the preprocessing and the segmentation are robust and useful for border detection of any type of lesion in a computer-aided diagnosis system to assist the clinical diagnosis of dermatologists.

Conflict of interest statement All authors in this paper have no potential conflict of interest.

Acknowledgements This study was supported by the Chinese Scholarship Council (CSC) (grant no. 2008GXZ143). We would like to thank Shawn Lankton for providing us the implementation of local region-based active contours to start the basic segmentation algorithm. We would also like to thank Assoc. Prof. Amanda Oakley for providing us patient‘s skin cancer database from the New Zealand Dermatological Society.

references

[1] Skin cancer facts and figures, 2009. Available from: http://www.skincancer.org/Skin-Cancer-Facts. [2] I. Papamichail, N. Nikolaidis, C. Nikolaidis, I. Glava, K. Lentzas, Marmagkiolis, et al., Merkel cell carcinoma of the upper extremity: case report and an update, J. Surg. Oncol. 6 (32) (2008). [3] A. Schwartz, W.C. Lambert, The merkel cell carcinoma: a 50-year retrospect, J. Surg. Oncol. 89 (1) (2005) 1–4. [4] G. Argenziano, H.P. Soyer, S. Chimenti, R. Talamini, R. Corona, F. Sera, et al., Dermoscopy of pigmented skin lesions: results of a consensus meeting via the Internet, J. Am. Acad. Dermatol. 48 (5) (2003) 679–693. [5] M.E. Celebi, H.A. Kingravi, B. Uddin, et al., A methodological approach to the classification of dermoscopy images, Comput. Med. Imaging Graph. 31 (2007) 362–373. [6] P. Rubegni, M. Burroni, G. Cevenini, et al., Digital dermoscopy. Analysis and artificial neural network for the differentiation of clinically atypical pigmented skin lesions: a retrospective study, J. Invest. Dermatol. 119 (2002) 471–474. [7] H. Blum, U. Luedtke, Ellwanger, et al., Digital image analysis for diagnosis of cutaneous melanoma. Development of a highly effective computer algorithm based on analysis of 837 melanocytic lesions, Brit. J. Dermatol. 151 (2004) 1029–1038. [8] M. Burroni, P. Sbano, G. Cevenini, et al., Dysplastic naevus vs. in situ melanoma: digital dermoscopy analysis, Brit. J. Dermatol. 152 (2005) 679–684. [9] S.V. Patwardhan, A.P. Dhawan, P.A. Relue, Classification of melanoma using tree structured wavelet transforms, Comput. Methods Programs Biol. 72 (2003) 223–239.

[10] M.E. Celebi, H.A. Kingravi, H. Iyatomi, et al., Border detection in dermoscopy images using statistical region merging, Skin Res. Technol. 14 (2008) 347–353. [11] M.E. Celebi, H. Iyatomi, G. Schaefer, W.V. Stoecker, Lesion border detection in dermoscopy images, Comput. Med. Imaging Graph. 33 (2009) 148–153. [12] M.E. Celebi, Y.A. Aslandogan, W.V. Stoecker, et al., Unsupervised border detection in dermoscopy images, Skin Res. Technol. 13 (2007) 454–462. [13] J. Tang, A multi-direction GVF snake for the segmentation of skin cancer images, Pattern Recognit. 42 (6) (2009) 1172–1179. [14] X. Yuan, N. Situ, G. Zouridakis, A narrow band graph partitioning method for skin lesion segmentation, Pattern Recognit 42 (6) (2009) 1017–1028. [15] P. Schmid, Segmentation of digitized dermatoscopic images by two-dimensional color clustering, IEEE Trans. Med. Imaging 18 (1999) 164–171. [16] B. Erkol, R.H. Moss, R.H. Stanley, et al., Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes, Skin Res. Technol. 11 (2005) 17–26. [17] H. Iyatomi, H. Oka, M. Saito, et al., Quantitative assessment of tumor extraction from dermoscopy images and evaluation of computer-based extraction methods for automatic melanoma diagnostic system, Melanoma Res. 16 (2006) 183–190. [18] H. Iyatomi, H. Oka, M.E. Celebi, et al., An improved internet-based melanoma screening system with dermatologist-like tumor area extraction algorithm, Comput. Med. Imaging Graph. 32 (2008) 566–579. [19] D.D. Gómez, C. Butakoff, B.K. Ersboll, W.V. Stoecker, Independent histogram pursuit for segmentation of skin lesions, IEEE Trans. Biomed. Eng. 55 (2008) 157–161. [20] H. Zhou, G. Schaefer, A. Sadka, M.E. Celebi, Anisotropic mean shift based fuzzy C-means segmentation of dermoscopy images, IEEE J. Sel. Top. Signal. 3 (2009) 26–34. [21] M. Silveira, J.C. Nascimento, J.S. Marques, comparison of segmentation methods for melanoma diagnosis in dermoscopy images, in: Proceedings of IEEE J. Sel. Top. Signal. 3 (February) (2009) 35–45. [22] M.G. Fleming, C. Steger, J. Zhang, Techniques for a structural analysis of dermatoscopic imagery, Comput. Med. Imaging Graph. 22 (5) (1998) 375–389. [23] P. Wighton, T.K. Lee, M.S. Atkinsa, Dermoscopic hair disocclusion using inpainting, in: Proceedings of the SPIE Medical Imaging 2008 Conference, 6914, pp. 691427–691427-8. [24] H. Zhou, M. Chen, R. Gass, J.M. Rehg, L. Ferris, J. Ho, et al., Feature-preserving artifact removal from dermoscopy images, in: Proceedings of the SPIE Medical Imaging 2008 Conference, 6914, 69141B–69141B-9. [25] T.K. Lee, V. Ng, R. Gallagher, A. Coldman, D. McLean, Dullrazor: a software approach to hair removal from images, J. Comput. Biol. Med. 27 (6) (1997) 533–543. [26] A. Criminisi, P. Perez, and K. Toyama Object removal by exemplar-based inpainting, in: Proceedings of Comput. Vis. Pattern Rec., Madison, WI, June, 2003, pp. 721–728. [27] A. Criminisi, P. Perez, K. Toyama, Region filling and object removal by exemplar-based image inpainting, IEEE Trans. Image Process. 13 (9) (2004). [28] F.-Y. Xie, S.-Y. Qin, Z.-G. Jiang, R.-S. Meng, PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma, Comput. Med. Imaging Graph. 33 (2009) 275–282. [29] Dermatologic Image Database, University of Auckland, New Zealand. Available from: http://dermnetnz.org/doctors/dermoscopy-course/ (accessed on 15.06.2009).


[30] H.G. Adelmann, ‘Butterworth equations for homomorphic filtering of images, J. Comput. Biol. Med. 2 (28) (1998) 169–181. [31] M.R. Sweet, adaptive and recursive median filtering. Available from: http://www.easysw.com/∼mike/gimp/despeckle.html (accessed on 22.07.2009). [32] C. Steger, Unbiased detector of curvilinear structures, IEEE Trans. Pattern Anal. 20 (2) (1998) 113–125. [33] Q.L.L. Zhang, J. You, D. Zhang, P. Bhattacharya, Dark line detection with line width extraction, in: Proceedings of the IEEE International Conference on Image Processing (2008) 621–624. [34] S. Lankton, A. Tannenbaum, Localizing region-based active contours, IEEE Trans. Image Process. 17 (11) (2008) 2029–2039. [35] Active Contour Segmentation matlab implementation. Available from: http://www.mathworks.com/matlabcentral/fileexchange/ 19567-active-contour-segmentation (accessed on 26.01.2009). [36] J.A. Yezzi, A. Tsai, A. Willsky, A fully global approach to image segmentation via coupled curve evolution equations, J. Vis. Commun. Image Represent. 13 (1) (2002) 195–216.


[37] S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Cambridge Univ. Press, New York, 2003. [38] J.A. Sethian, Level Set Methods and Fast Marching Methods, second ed., Springer, New York, 1999. [39] T. Brox, J. Weickert, Level set based image segmentation with multiple regions, Pattern Recognit. (2004) 415–423. [40] R. Courant, E. Issacson, E. Rees, On the solution of nonlinear hyperbolic differential equations by finite differences, Commun. Pure Appl. Math. 5 (1952) 243–255. [41] M. Sussman, P. Smerka, S. Osher, A Level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 119 (1994) 146–159. [42] Gradient Vector Flow (GVF). Available from: http://www.iacl.ece.jhu.edu/static/gvf/ (accessed on 3.09.2009). [43] Graph Partitioning Active Contours (GPAC). Available from: http://vision.ece.ucsb.edu/∼lbertelli/soft GPAC.html (accessed on 3.09.2009). [44] Color and texture segmentation (JSEG). Available from: http://vision.ece.ucsb.edu/segmentation/jseg/ (accessed on 3.09.2009).