Computerized Medical Imaging and Graphics 35 (2011) 99–104
Colour and contrast enhancement for improved skin lesion segmentation

Gerald Schaefer a,∗, Maher I. Rajab b, M. Emre Celebi c, Hitoshi Iyatomi d

a Department of Computer Science, Loughborough University, Loughborough, UK
b Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia
c Department of Computer Science, Louisiana State University, Shreveport, USA
d Department of Electrical Informatics, Hosei University, Tokyo, Japan
Article info

Article history: Received 25 October 2009; received in revised form 13 August 2010; accepted 16 August 2010.

Keywords: Skin cancer; Skin lesion; Melanoma; Dermoscopy; Segmentation; Colour normalisation; Contrast enhancement

Abstract

Accurate extraction of lesion borders is a critical step in analysing dermoscopic skin lesion images. In this paper, we consider the problems of poor contrast and lack of colour calibration which are often encountered when analysing dermoscopy images. Different illumination or different devices will lead to different image colours of the same lesion and hence to difficulties in the segmentation stage. Similarly, low contrast makes accurate border detection difficult. We present an effective approach to improve the performance of lesion segmentation algorithms through a pre-processing step that enhances colour information and image contrast. We combine this enhancement stage with two different segmentation algorithms. One technique relies on analysis of the image background by iterative measurements of non-lesion pixels, while the other technique utilises co-operative neural networks for edge detection. Extensive experimental evaluation is carried out on a dataset of 100 dermoscopy images with known ground truths obtained from three expert dermatologists. The results show that both techniques are capable of providing good segmentation performance and that the colour enhancement step is indeed crucial, as demonstrated by comparison with results obtained from the original RGB images.
1. Introduction

Malignant melanoma, the most deadly form of skin cancer, is one of the most rapidly increasing cancers in the world, with an estimated 68,720 new cases and 8650 deaths in the United States in 2009 alone [9]. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. Dermoscopy has become one of the most important tools in the diagnosis of melanoma and other pigmented skin lesions. It is a non-invasive skin imaging technique that involves optical magnification, along with optics that minimise surface reflection, making subsurface structures more easily visible when compared to conventional clinical images [1]. This in turn reduces screening errors and provides greater differentiation between difficult lesions such as pigmented Spitz nevi and small, clinically equivocal lesions [17]. However, it has also been shown that dermoscopy may lower the diagnostic accuracy in the hands of inexperienced dermatologists [2]. Therefore, in order to minimise diagnostic errors that result from the difficulty and subjectivity of visual interpretation, computerised image analysis techniques are highly sought after [7].
∗ Corresponding author. E-mail address: [email protected] (G. Schaefer).
doi:10.1016/j.compmedimag.2010.08.004
Automated border detection is often the first step in the automated analysis of dermoscopy images [4] and is crucial for two main reasons. First, the border structure provides important information for accurate diagnosis, as many clinical features, such as asymmetry, border irregularity, and abrupt border cutoff, are calculated directly from the border. Second, the extraction of other important clinical features such as atypical pigment networks, globules, and blue-white areas, critically depends on the accuracy of border detection. Automated border detection is a challenging task since dermoscopy images often suffer from low contrast between the lesion and the surrounding skin. In addition, different images or even the same image but under different lighting conditions will lead to different image colours, which in turn can reduce segmentation performance. In this paper we address these issues by pre-processing the images using a colour normalisation technique that both removes colour variations and enhances image contrast. The processed images are then segmented using two different segmentation algorithms. The first iteratively analyses the image background and derives an optimal threshold for segmentation, while the second employs a co-operative neural network scheme for lesion edge detection. Results on a large set of dermoscopy lesion images confirm that our approach achieves good segmentation performance as judged against manual borders obtained from three expert dermatologists. Our results also
demonstrate that the colour normalisation step is indeed crucial in achieving this accurate segmentation.

The rest of the paper is organised as follows: in Section 2 we describe the colour normalisation technique that we employ as a pre-processing step. Section 3 details our segmentation algorithms, while experimental results are presented in Section 4. Section 5 concludes the paper.

Fig. 1. Sample original lesion image (left) and the same image after colour normalisation (right).
2. Colour normalisation and contrast enhancement

In this paper we consider the problems of poor contrast and lack of colour calibration which are often encountered when analysing dermoscopy images. Different illumination or different devices will lead to different image colours [6] and hence to different colours of the same lesion, leading to difficulties in the segmentation stage. Similarly, low contrast makes accurate border detection difficult [5]. We therefore address these issues by applying a colour normalisation technique, namely Automatic Color Equalization (ACE) [15], as a pre-processing step.

ACE colour normalisation is based on a model that is designed to merge two popular normalisation techniques, namely Grayworld [3] and MaxRGB [10] normalisation. Local adjustment is performed by considering the spatial distribution of colour in the image. ACE consists of two main stages: chromatic/spatial adjustment and dynamic tone reproduction scaling. The chromatic/spatial adjustment stage is where the actual colour normalisation is performed and the image contrast enhanced, while the second stage is responsible for accurate tone mapping and lightness constancy.

In detail, in the chromatic/spatial adjustment stage an intermediate image M is generated from an input image I according to

M_i(p) = \sum_{j \neq p} \frac{r(I_i(p) - I_i(j))}{d(p, j)}, \quad i = R, G, B \qquad (1)

where I_i(p) − I_i(j) implements a lateral inhibition mechanism, d(p, j) is the distance (in our experiments the Euclidean distance) between pixel locations p and j and balances local and global filtering effects, and j is varied to include the whole image (that is, differences between the current location and all other image locations are extracted). r(·) is a function that accounts for the relative lightness of a pixel and was (empirically) chosen to be a sloped linear saturation function with a slope s_r of 5 of the form

r(x) = \begin{cases} -127.5 & \text{for } x < -127.5/s_r \\ s_r x & \text{for } -127.5/s_r \leq x \leq 127.5/s_r \\ 127.5 & \text{for } x > 127.5/s_r \end{cases} \qquad (2)

In the dynamic tone reproduction stage, the pixel values of the intermediate image M are mapped to generate the output image O. Various choices for this mapping are possible; the one we adopt maximises image contrast by

O_i(p) = 127.5 + s_i M_i(p), \quad i = R, G, B \qquad (3)

where s_i is the slope of the segment [(min_p M_i(p), 0), (max_p M_i(p), 255)].

The effect of the employed colour normalisation is shown in Fig. 1, which displays a typical lesion image before and after enhancement. The original image has poor contrast, and consequently the border between lesion and background is not well defined, which in turn will cause difficulties for automated delineation methods. The processed version clearly has much better defined contrast, which should aid in deriving a better border detection. A second example is given in Fig. 2; again, the processed image has enhanced contrast, which is particularly noticeable in the red channel. Segmentation should therefore be more accurate on the processed images than on the original ones. It should be noted that while our proposed approach clearly increases the contrast between the lesion and the skin background, the contrast within the lesion is also somewhat decreased. This, however, is not a problem, since our application here is segmentation: once the lesion border has been identified, it can be applied to the original image and the original data of the lesion region used for further analysis.
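To make the two ACE stages concrete, the following is a minimal, unoptimised sketch of Eqs. (1)-(3). Since the full sum over all pixel pairs in Eq. (1) is quadratic in the number of pixels, the sketch approximates it with a random subsample of comparison locations j; this subsampling, the function names and the n_samples parameter are our choices for illustration and are not part of [15]:

```python
import numpy as np

def r_saturation(x, slope=5.0):
    # Sloped linear saturation function of Eq. (2): linear with slope s_r,
    # clipped to the range [-127.5, 127.5].
    return np.clip(slope * x, -127.5, 127.5)

def ace_enhance(image, slope=5.0, n_samples=500, seed=0):
    """Naive ACE following Eqs. (1)-(3) on an HxWx3 uint8 image.
    The full pairwise sum is O(N^2), so a random subset of comparison
    pixels j is used as an approximation (our choice, not the paper's)."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    img = image.astype(np.float64)
    # coordinates of all pixels, and a random subset used as locations j
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(np.float64)
    j_idx = rng.choice(h * w, size=min(n_samples, h * w), replace=False)
    out = np.empty_like(img)
    for c in range(3):                                   # i = R, G, B
        chan = img[:, :, c].ravel()
        M = np.zeros(h * w)
        for p in range(h * w):
            d = np.sqrt(((coords[j_idx] - coords[p]) ** 2).sum(axis=1))
            keep = d > 0                                 # exclude j = p
            # Eq. (1): distance-weighted lateral inhibition
            M[p] = (r_saturation(chan[j_idx][keep] - chan[p], slope)
                    / d[keep]).sum()
        # Eq. (3): linear tone mapping around mid-grey, where s_i is the
        # slope of the segment [(min M, 0), (max M, 255)]
        s = 255.0 / (M.max() - M.min())
        out[:, :, c] = np.clip(127.5 + s * M, 0, 255).reshape(h, w)
    return out.astype(np.uint8)
```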
Fig. 2. Top row: sample original lesion image and greyscale images of its red, green and blue channel respectively. Bottom row: the same image after colour normalisation.
3. Lesion segmentation

For lesion segmentation, we employ two techniques based on our earlier work: one employs an iterative segmentation scheme, while the other is based on co-operative neural network edge detection. Both techniques operate on a single colour channel and can hence be applied to the R, G, and B channels of a dermoscopic colour image or to a greyscale channel derived from it. In the following we briefly describe both algorithms; for further details we refer the reader to [13].

3.1. Iterative segmentation
Fig. 3. Example edge patterns used for training the neural network based segmentation technique.
Two criteria are considered in the design of this segmentation method: (1) an accurate search for the optimal lesion border is achieved by analysing the whole image, and consequently the true lesion border is retrieved directly; approximations of the lesion border within refined regions and curve fitting approaches are not adopted; and (2) the input parameters are mostly image dependent and have to be adjusted based on the properties of the class of images used in the segmentation method.

First, we apply a simple noise suppression method which aims to reduce artefacts such as hair that are often present in dermoscopic images. To do so, we subtract the median of the background followed by recursive Gaussian smoothing. To estimate the median of the background we inspect two strips (each with a width of 1/10 of the image width) located at the top and the bottom of the image and extract the median value of these regions (in doing so we assume that the lesion does not extend to the image borders). This value is subtracted from the image, and the resulting values are passed through an exponential mapping function. On the resulting image, recursive smoothing using Gaussian kernels of sizes 3, 5, 7, and 9 with adaptively determined values is performed.

Following noise reduction, we employ an iterative scheme to segment the image into regions containing the lesion and the background skin. To do so, we apply an ISODATA algorithm to determine an optimal threshold value T for an image [14]. If I is the image and R_i denotes the i-th of N regions of the image (N = 2 in our case), the iterative scheme for estimating the N − 1 optimal thresholds and the N mean intensities is given by

T_{i,k+1} = \frac{\mu_{R_i,k} + \mu_{R_{i+1},k}}{2} \qquad (4)

and

\mu_{R_i,k} = \frac{\sum_{(m,n) \in R_{i,k}} I(m, n)}{N_{R_i,k}} \qquad (5)

where T_{i,k+1} is the optimal threshold separating pixels in region R_i from pixels in region R_{i+1}, \mu_{R_i,k} and N_{R_i,k} represent the mean pixel value and the number of pixels of region R_i at step k, and I(m, n) is the pixel value at location (m, n). The process is repeated based upon the newly generated threshold until the threshold value has converged, i.e. until T_{i,k+1} = T_{i,k}. Finally, a post-processing step is applied that removes isolated pixels and small objects that are not likely to be part of the lesion.
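For concreteness, a minimal sketch of this iterative (ISODATA) threshold selection of Eqs. (4) and (5) for N = 2 regions, assuming a darker lesion on a lighter background, could look as follows (function and parameter names are ours; the noise suppression and post-processing steps are omitted):

```python
import numpy as np

def isodata_threshold(channel, eps=0.5):
    """Iterative threshold selection following Eqs. (4) and (5): the
    threshold is moved to the midpoint of the two region means until it
    no longer changes. `channel` is a single greyscale channel."""
    I = channel.astype(np.float64)
    T = I.mean()                              # initial threshold guess
    while True:
        lesion = I[I <= T]                    # region R_1 (darker lesion)
        skin = I[I > T]                       # region R_2 (background skin)
        T_new = (lesion.mean() + skin.mean()) / 2.0   # Eqs. (4) and (5)
        if abs(T_new - T) < eps:              # converged: T_{k+1} = T_k
            return T_new
        T = T_new

# lesion_mask = channel <= isodata_threshold(channel)
```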
3.2. Co-operative neural network segmentation

The second algorithm we use is based on trained neural networks for the detection of the outlines of skin lesions. We employ a noise-free training set of a limited number of prototype edge patterns of 3 × 3 pixels. Various experiments were performed using neural network models with a multi-layer perceptron architecture, trained using standard back-propagation [11] with training sets of different sizes. When constructing and experimenting with the various training sets, the total number of prototype edge patterns and their redundancy were considered. The training set finally used comprises 42 prototype edge patterns, incorporating five edge profiles in all their possible orientations, together with 100 redundant edge patterns. In Fig. 3 we show an example of five edge profiles in 3 × 3 pixel grids. Network training is performed until a compromise solution is achieved and the neural network is capable of detecting object boundaries. A small network structure was used (9 input neurons, 10 hidden nodes, and 7 output neurons), and training was deemed successful when the network's mean square error converged to a value below 0.01 using a learning rate of 0.001 and a momentum rate of 0.5. The trained network was found to be highly capable of recognising edge patterns in noisy lesions and less likely to respond to intensity variations surrounding lesions. Its low-pass, averaging behaviour makes the network less sensitive to noise and improves its edge detection ability. We therefore convolve the template of weights at the hidden layer for each output node with the original image, thus performing low-level feature extraction. Using maximal thresholds for each network output, we then fuse the thresholded outputs to combine the various edge maps.
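As one plausible reading of this scheme, the sketch below applies a trained 9-10-7 network to every 3 × 3 neighbourhood, which is equivalent to convolving the image with the hidden-layer weight templates, and fuses the thresholded outputs into a single edge map. The weight matrices, activation functions and threshold value are illustrative assumptions; the paper specifies only the layer sizes and training parameters:

```python
import numpy as np

def mlp_edge_map(channel, W1, b1, W2, b2, thresh=0.5):
    """Slide a trained 9-10-7 MLP over every 3x3 neighbourhood and fuse
    the thresholded outputs into a binary edge map. W1 (10x9), b1 (10,),
    W2 (7x10), b2 (7,) are assumed to come from back-propagation training
    on the 3x3 prototype edge patterns; all names here are ours."""
    h, w = channel.shape
    edges = np.zeros((h, w), dtype=bool)
    x = channel.astype(np.float64) / 255.0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = x[r-1:r+2, c-1:c+2].reshape(9)           # 9 inputs
            hidden = np.tanh(W1 @ patch + b1)                # 10 hidden nodes
            out = 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))  # 7 outputs
            # fuse: mark a pixel as edge if any edge-profile output fires
            edges[r, c] = np.any(out > thresh)
    return edges
```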
Fig. 4. Segmentation results for the images in Fig. 1 using the iterative segmentation algorithm. Top row: manual segmentations derived by three dermatologists. Middle row: segmentation based on red, green, and blue channel of original lesion image. Bottom row: segmentation based on red, green, and blue channel of colour normalised image.
Table 1
XOR segmentation errors of the iterative segmentation algorithm based on raw RGB and ACE processed input, for the red (R), green (G), blue (B) and luminance (L) channels. Each row lists the per-image errors for images 1–50, in order.

RGB R: 0.74 1.06 1.09 1.07 0.47 1.09 1.14 1.08 1.18 1.08 1.23 0.60 0.48 0.41 0.47 0.40 0.65 0.23 0.36 0.28 0.32 0.39 0.38 0.40 0.40 0.79 0.43 0.63 0.37 0.51 0.52 0.34 0.26 0.23 0.23 0.34 0.30 0.73 0.36 0.29 0.46 0.25 0.41 0.50 0.31 0.27 0.81 0.77 1.04 1.48
RGB G: 0.47 1.06 0.43 1.07 0.24 0.74 0.76 1.08 1.18 0.65 1.22 0.38 0.21 0.25 0.28 0.19 0.35 0.10 0.28 0.21 0.17 0.22 0.22 0.29 0.20 0.52 0.28 0.33 0.23 0.22 0.27 0.19 0.15 0.10 0.11 0.19 0.15 0.24 0.20 0.13 0.23 0.12 0.23 0.24 0.14 0.09 0.68 0.45 0.58 1.25
RGB B: 0.33 0.40 0.24 1.07 0.17 0.33 0.30 1.04 1.18 1.08 0.52 0.28 0.18 0.19 0.20 0.16 0.28 0.06 0.24 0.17 0.12 0.16 0.15 0.20 0.12 0.41 0.18 0.21 0.15 0.15 0.20 0.13 0.11 0.07 0.07 0.11 0.10 0.19 0.15 0.09 0.16 0.09 0.16 0.19 0.09 0.06 0.70 0.27 0.43 0.92
RGB L: 0.52 1.06 0.55 1.07 0.28 1.09 0.80 1.09 1.18 1.09 1.22 0.40 0.27 0.30 0.31 0.24 0.35 0.13 0.29 0.21 0.20 0.24 0.24 0.29 0.21 0.57 0.27 0.33 0.25 0.23 0.28 0.22 0.16 0.11 0.12 0.20 0.17 0.27 0.22 0.17 0.24 0.14 0.25 0.29 0.16 0.12 0.72 0.41 0.66 1.50
ACE R: 0.29 0.54 0.40 0.36 0.19 0.25 0.26 0.37 0.21 0.10 0.09 0.37 0.23 0.28 0.16 0.17 0.42 0.07 0.15 0.17 0.19 0.30 0.27 0.25 0.20 0.67 0.30 0.29 0.24 0.36 0.32 0.24 0.11 0.13 0.07 0.21 0.11 0.30 0.15 0.06 0.30 0.08 0.23 0.32 0.05 0.11 0.95 0.33 0.28 0.76
ACE G: 0.21 0.20 0.17 0.10 0.15 0.12 0.16 0.11 0.08 0.07 0.12 0.09 0.13 0.25 0.12 0.09 0.10 0.05 0.08 0.16 0.12 0.19 0.15 0.10 0.06 0.30 0.17 0.11 0.18 0.13 0.18 0.16 0.10 0.06 0.07 0.08 0.04 0.08 0.16 0.07 0.11 0.07 0.09 0.22 0.06 0.07 1.62 0.30 0.23 0.51
ACE B: 0.14 0.11 0.11 0.07 0.10 0.07 0.12 0.05 0.06 0.06 0.17 0.07 0.11 0.19 0.12 0.09 0.12 0.05 0.09 0.13 0.10 0.15 0.11 0.09 0.06 0.25 0.13 0.11 0.13 0.11 0.14 0.11 0.17 0.05 0.12 0.07 0.04 0.08 0.18 0.08 0.08 0.11 0.08 0.20 0.06 0.05 1.80 0.32 0.23 0.42
ACE L: 0.21 0.27 0.21 0.14 0.15 0.14 0.17 0.14 0.11 0.08 0.10 0.13 0.17 0.26 0.13 0.13 0.13 0.05 0.09 0.16 0.14 0.22 0.19 0.14 0.10 0.49 0.22 0.13 0.20 0.22 0.22 0.19 0.10 0.08 0.05 0.11 0.06 0.11 0.13 0.06 0.15 0.06 0.14 0.27 0.04 0.08 1.34 0.29 0.26 0.55
Table 1 (continued)
Per-image errors for images 51–100, in order; the final value in each row is the average over all 100 images.

RGB R: 0.17 0.19 0.63 0.57 0.23 0.32 0.30 0.07 0.14 0.56 0.39 0.28 0.37 0.32 0.49 0.57 0.30 0.29 0.52 1.34 0.70 0.43 0.81 0.43 1.02 1.25 0.50 0.80 0.55 0.93 0.95 0.82 0.76 0.66 0.68 0.73 0.40 0.58 0.34 0.87 0.92 0.98 1.09 1.19 1.95 0.62 1.15 0.51 0.77 0.95 | 0.62
RGB G: 0.08 0.12 0.35 0.32 0.17 0.19 0.19 0.08 0.15 0.37 0.21 0.20 0.26 0.19 0.33 0.23 0.21 0.16 0.27 0.88 0.30 0.22 0.29 0.17 0.48 1.26 0.31 0.54 0.26 0.59 0.68 0.43 0.58 0.32 0.38 0.43 0.27 0.32 1.62 0.59 0.67 0.16 0.95 1.11 1.48 0.36 1.15 0.10 0.48 0.40 | 0.41
RGB B: 0.12 0.23 0.22 0.25 0.17 0.11 0.19 0.14 0.16 0.28 0.14 0.12 0.20 0.14 0.31 0.16 0.18 0.10 0.20 0.81 0.23 0.27 0.15 0.12 0.41 1.24 0.31 0.46 0.21 0.48 0.54 0.28 0.46 0.24 0.30 0.18 0.22 0.26 2.12 0.49 0.48 0.11 0.56 0.82 1.44 0.24 1.14 0.07 0.34 0.35 | 0.33
RGB L: 0.09 0.13 0.34 0.34 0.19 0.20 0.22 0.08 0.15 0.38 0.22 0.20 0.27 0.20 0.34 0.23 0.22 0.18 0.32 1.06 0.45 0.23 0.38 0.21 0.50 1.26 0.33 0.55 0.28 0.69 0.74 0.64 0.59 0.32 0.42 0.31 0.29 0.39 0.30 0.67 0.74 0.13 1.10 1.19 1.77 0.39 1.16 0.15 0.53 0.40 | 0.44
ACE R: 0.09 0.09 0.42 0.39 0.06 0.17 0.12 0.05 0.06 0.38 0.26 0.13 0.18 0.18 0.34 0.45 0.11 0.10 0.14 0.24 0.32 0.29 0.25 0.14 0.25 0.88 0.40 0.46 0.33 0.64 0.16 0.26 0.32 0.23 0.14 0.45 0.21 0.35 0.30 0.50 0.25 0.93 0.16 0.49 0.66 0.22 0.55 0.16 0.27 0.67 | 0.28
ACE G: 0.15 0.11 0.16 0.14 0.08 0.10 0.09 0.04 0.08 0.19 0.14 0.08 0.19 0.10 0.22 0.11 0.09 0.11 0.15 0.15 0.09 0.36 0.07 0.05 0.08 0.27 0.18 0.29 0.13 0.20 0.21 0.21 0.13 0.09 0.09 0.29 0.09 0.27 0.67 0.23 0.14 0.11 0.06 0.19 0.56 0.16 0.47 0.06 0.12 0.55 | 0.17
ACE B: 0.16 0.20 0.11 0.13 0.12 0.07 0.08 0.09 0.11 0.14 0.10 0.06 0.29 0.16 0.25 0.07 0.09 0.16 0.25 0.15 0.07 0.48 0.13 0.11 0.08 0.24 0.29 0.22 0.10 0.20 0.70 0.18 0.10 0.17 0.09 0.21 0.08 0.43 0.80 0.15 0.14 0.07 0.08 0.16 0.66 0.12 0.39 0.05 0.10 0.80 | 0.18
ACE L: 0.13 0.11 0.24 0.15 0.07 0.13 0.09 0.05 0.07 0.25 0.18 0.10 0.16 0.11 0.24 0.15 0.09 0.11 0.09 0.20 0.21 0.35 0.12 0.06 0.11 0.34 0.25 0.35 0.17 0.18 0.15 0.22 0.19 0.10 0.11 0.38 0.13 0.21 0.51 0.38 0.19 0.09 0.10 0.31 0.52 0.18 0.53 0.10 0.20 0.57 | 0.19
4. Experimental results

We evaluated our approach on a large dataset of 100 dermoscopy images (30 invasive malignant melanomas and 70 benign lesions) obtained from the EDRA Interactive Atlas of Dermoscopy [1], and the dermatology practices of Dr. Ashfaq Marghoob (New York, NY), Dr. Harold Rabinovitz (Plantation, FL) and Dr. Scott Menzies (Sydney, Australia). The benign lesions included nevocellular nevi and dysplastic nevi. Manual borders were obtained by selecting a number of points on the lesion border, connecting these points by a second-order B-spline and finally filling the resulting closed curve. Three sets of manual borders were determined using this method by three expert dermatologists, Dr. William Stoecker, Dr. Joseph Malters, and Dr. James Grichnik.

To quantify the segmentation performance we use the XOR measure proposed in [8], which calculates the percentage border detection error by

Error = \frac{\mathrm{Area}(AB \oplus MB)}{\mathrm{Area}(MB)} \qquad (6)

where AB and MB are the binary images obtained by filling the automatic and manual borders, respectively, ⊕ is the exclusive-OR (XOR) operation that gives the pixels for which AB and MB disagree, and Area(I) denotes the number of pixels in the binary image I.

We segment these images using both the iterative and the neural network segmentation techniques, and apply them (as both algorithms operate on greyscale images) to all three colour channels (i.e., red, green, and blue). In addition, we also run both algorithms on the luminance channel L = (R + G + B)/3 [12]. We do this both for unprocessed RGB images, which undergo only the segmentation methods detailed in Section 3, and for the full algorithms that incorporate the colour normalisation step described in Section 2. In this way we can quantify the effect of the applied enhancement technique.
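Eq. (6) translates directly into code on binary masks; the sketch below (names are ours) also shows the averaging over the three expert ground truths used for the reported figures:

```python
import numpy as np

def xor_error(AB, MB):
    """Border detection error of Eq. (6): number of pixels on which the
    filled automatic border AB and the filled manual border MB disagree,
    normalised by the area of the manual border. Both are boolean masks."""
    AB = AB.astype(bool)
    MB = MB.astype(bool)
    return np.logical_xor(AB, MB).sum() / MB.sum()

# averaged over the three dermatologist ground truths, as in Tables 1 and 2:
# error = np.mean([xor_error(auto_mask, gt) for gt in (gt1, gt2, gt3)])
```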
Table 2
XOR segmentation errors of the neural network segmentation algorithm based on raw RGB and ACE processed input, for the red (R), green (G), blue (B) and luminance (L) channels. Each row lists the per-image errors for images 1–50, in order.

RGB R: 0.33 1.00 3.26 1.44 0.18 0.47 0.53 1.00 0.73 0.22 0.24 0.59 0.44 0.39 0.46 0.34 0.62 0.22 0.35 0.26 0.31 0.37 0.37 0.39 0.38 0.76 0.42 0.61 0.37 0.50 0.49 0.33 0.25 0.22 0.22 0.34 0.27 0.68 0.35 0.28 0.45 0.24 0.40 0.42 0.24 0.18 0.36 0.39 0.38 1.07
RGB G: 0.21 0.22 0.19 0.27 0.12 0.21 0.15 0.21 0.36 0.12 0.12 0.35 0.18 0.23 0.26 0.15 0.29 0.09 0.27 0.19 0.16 0.21 0.21 0.28 0.18 0.38 0.25 0.31 0.21 0.20 0.24 0.18 0.14 0.10 0.10 0.17 0.12 0.21 0.19 0.13 0.22 0.11 0.22 0.22 0.13 0.08 0.57 0.29 0.27 0.36
RGB B: 0.13 0.10 0.11 0.14 0.08 0.11 0.10 0.10 0.13 0.07 0.09 0.25 0.14 0.18 0.18 0.13 0.21 0.06 0.23 0.15 0.11 0.15 0.12 0.16 0.11 0.24 0.16 0.20 0.13 0.14 0.17 0.12 0.12 0.06 0.07 0.10 0.08 0.14 0.15 0.09 0.15 0.09 0.14 0.17 0.08 0.06 0.95 0.38 0.19 0.25
RGB L: 0.23 0.20 0.19 0.31 0.13 0.21 0.14 0.20 0.36 0.13 0.13 0.36 0.21 0.28 0.29 0.20 0.29 0.12 0.29 0.20 0.19 0.23 0.21 0.28 0.19 0.42 0.25 0.31 0.23 0.21 0.25 0.20 0.14 0.11 0.11 0.19 0.14 0.23 0.21 0.16 0.23 0.13 0.23 0.25 0.15 0.10 0.30 0.25 0.21 0.38
ACE R: 0.20 0.34 0.25 0.33 0.09 0.23 0.10 0.24 0.14 0.08 0.12 0.34 0.21 0.16 0.14 0.14 0.40 0.06 0.14 0.14 0.15 0.25 0.21 0.23 0.18 0.63 0.25 0.27 0.18 0.33 0.28 0.21 0.10 0.12 0.07 0.21 0.09 0.28 0.14 0.06 0.26 0.09 0.22 0.26 0.05 0.07 0.96 0.27 0.22 0.38
ACE G: 0.14 0.11 0.10 0.09 0.06 0.09 0.07 0.07 0.07 0.07 0.17 0.13 0.11 0.10 0.12 0.07 0.11 0.05 0.08 0.11 0.08 0.14 0.09 0.10 0.07 0.18 0.11 0.11 0.10 0.11 0.12 0.12 0.11 0.05 0.09 0.08 0.04 0.07 0.19 0.08 0.09 0.10 0.08 0.14 0.08 0.05 2.49 0.23 0.16 0.17
ACE B: 0.10 0.07 0.07 0.07 0.04 0.06 0.06 0.05 0.06 0.09 0.27 0.10 0.06 0.08 0.13 0.07 0.14 0.06 0.11 0.10 0.07 0.11 0.07 0.13 0.08 0.11 0.08 0.12 0.08 0.09 0.09 0.08 0.21 0.05 0.14 0.07 0.05 0.08 0.24 0.09 0.05 0.16 0.07 0.12 0.09 0.05 2.49 0.37 0.14 0.09
ACE L: 0.14 0.12 0.11 0.07 0.10 0.11 0.06 0.07 0.07 0.07 0.15 0.11 0.12 0.19 0.12 0.11 0.10 0.05 0.07 0.13 0.12 0.20 0.15 0.10 0.09 0.39 0.17 0.11 0.16 0.16 0.19 0.17 0.11 0.06 0.06 0.09 0.05 0.08 0.14 0.07 0.11 0.06 0.11 0.23 0.06 0.06 1.77 0.18 0.17 0.19
Table 2 (continued)
Per-image errors for images 51–100, in order; the final value in each row is the average over all 100 images.

RGB R: 0.14 0.18 0.62 0.55 0.21 0.30 0.28 0.05 0.11 0.55 0.38 0.26 0.33 0.31 0.45 0.54 0.28 0.28 0.32 0.53 0.43 0.28 0.57 0.20 0.43 1.04 0.39 0.55 0.52 0.81 0.48 0.26 0.66 0.41 0.30 0.62 0.37 0.57 0.28 1.44 0.75 1.07 0.76 1.01 0.73 0.38 1.01 0.29 0.53 7.58 | 0.56
RGB G: 0.08 0.12 0.32 0.28 0.16 0.17 0.17 0.07 0.12 0.36 0.20 0.18 0.23 0.19 0.26 0.22 0.20 0.15 0.15 0.29 0.14 0.21 0.14 0.07 0.26 0.66 0.24 0.38 0.23 0.36 0.32 0.19 0.39 0.28 0.17 0.22 0.24 0.28 1.34 0.37 0.34 0.14 0.11 0.41 0.43 0.21 0.42 0.08 0.25 0.32 | 0.24
RGB B: 0.13 0.19 0.17 0.20 0.15 0.09 0.14 0.12 0.15 0.25 0.13 0.11 0.20 0.13 0.21 0.14 0.17 0.10 0.10 0.28 0.11 0.26 0.10 0.10 0.23 0.40 0.23 0.26 0.19 0.29 0.25 0.16 0.29 0.20 0.13 0.15 0.19 0.20 1.83 0.20 0.24 0.10 0.07 0.22 0.37 0.15 0.31 0.06 0.15 0.27 | 0.19
RGB L: 0.08 0.12 0.30 0.29 0.17 0.18 0.19 0.05 0.12 0.36 0.20 0.18 0.24 0.19 0.26 0.21 0.21 0.16 0.19 0.39 0.18 0.22 0.15 0.09 3.09 1.03 0.26 0.36 0.25 0.35 0.37 0.19 0.42 0.26 0.19 0.20 0.26 0.35 1.06 0.41 0.41 0.11 0.10 0.51 0.55 0.22 0.49 0.11 0.29 0.31 | 0.28
ACE R: 0.12 0.09 0.40 0.35 0.05 0.15 0.10 0.07 0.07 0.35 0.24 0.10 0.16 0.16 0.26 0.44 0.11 0.12 0.12 0.14 0.28 0.38 0.18 0.12 0.21 0.76 0.18 0.43 0.19 0.62 0.15 0.20 0.30 0.15 0.10 0.40 0.14 0.33 0.23 0.48 0.18 0.90 0.11 0.37 0.24 0.15 0.43 0.13 0.15 0.64 | 0.24
ACE G: 0.19 0.12 0.12 0.13 0.06 0.09 0.08 0.04 0.07 0.17 0.12 0.07 0.13 0.12 0.13 0.10 0.10 0.15 0.09 0.10 0.07 0.43 0.08 0.05 0.08 0.15 0.09 0.19 0.06 0.16 0.22 0.17 0.09 0.07 0.08 0.21 0.06 0.31 0.80 0.14 0.12 0.07 0.04 0.09 0.16 0.10 0.24 0.05 0.05 1.36 | 0.16
ACE B: 0.18 0.19 0.11 0.16 0.10 0.08 0.10 0.07 0.09 0.11 0.08 0.05 0.22 0.20 0.16 0.06 0.10 0.25 0.31 0.12 0.05 0.56 0.22 0.11 0.08 0.13 0.19 0.12 0.04 0.25 0.66 0.15 0.09 0.13 0.07 0.17 0.06 0.48 1.11 0.10 0.12 0.04 0.09 0.07 0.29 0.09 0.17 0.05 0.05 1.02 | 0.17
ACE L: 0.16 0.09 0.19 0.12 0.05 0.10 0.08 0.05 0.07 0.23 0.16 0.07 0.15 0.11 0.11 0.13 0.10 0.12 0.07 0.09 0.13 0.39 0.10 0.05 0.08 0.18 0.21 0.22 0.12 0.17 0.12 0.17 0.09 0.14 0.08 0.31 0.12 0.45 0.78 0.12 0.14 0.07 0.05 0.15 0.07 0.13 0.36 0.08 0.13 0.95 | 0.16
The results, expressed in terms of XOR errors averaged over all three ground truth segmentations, are presented in Table 1 for the iterative segmentation technique, and in Table 2 for the neural network based scheme. From Table 1 we can see that our approach based on colour enhancement and iterative segmentation is capable of achieving good segmentation performance, characterised by average XOR errors over the whole dataset of only 0.28, 0.17 and 0.18 for segmentations based on the red, green and blue channels, respectively, and 0.19 for the luminance channel L. This compares favourably to other lesion segmentation algorithms such as the orientation-sensitive fuzzy c-means algorithm [16], which produces an average error of 0.27 on the same dataset. Comparing this with the results based on raw RGB images, i.e., using the segmentation algorithm without colour normalisation, we also see that the colour enhancement step indeed proves crucial in achieving this performance, as segmentation based on the same border detection algorithm but on unprocessed images is clearly inferior: the original RGB images yield average errors of 0.62, 0.41 and 0.33 using the red, green and blue channels, and 0.44 based on luminance, much higher than those obtained with the colour enhancement stage. An example of this difference is illustrated in Fig. 4, which shows the final segmentations of the two images from Fig. 1 (Image 30 in the table). The segmentation obtained from the normalised image is clearly more accurate (average errors of 0.36, 0.13, and 0.11 for segmentations based on the red, green, and blue channels) than that based on the original image (average errors of 0.51, 0.22, and 0.15).

Looking at the results obtained using the neural network segmentation algorithm, given in Table 2, we observe a similar picture. The average segmentation errors based on the original RGB images are 0.56, 0.24 and 0.19 for border detection based on the red, green and blue colour channels respectively, and 0.28 using luminance. With colour and contrast enhancement these improve to 0.24, 0.16 and 0.17 for red/green/blue, and 0.16 for luminance. The improvement achieved through the pre-processing step thus again proves significant, while the overall segmentation results are slightly better than those of the iterative segmentation technique. In Fig. 5 we show the segmentation results for the images shown in Fig. 2 (Image 8 in the table). Segmentation based on the original RGB image gives errors of 1.00, 0.21 and 0.10 for the red, green and blue channels, while after colour enhancement these reduce to 0.24, 0.07 and 0.05. This clear improvement is also obvious in the segmentation masks displayed in Fig. 5, in particular for the red channel, where segmentation based on the original RGB input fails completely while after the enhancement step a reasonable border delineation is achieved.

Fig. 5. Segmentation results for the images in Fig. 2 using the neural network segmentation algorithm. Top row: manual segmentations derived by three dermatologists. Middle row: segmentation based on red, green, and blue channel of original lesion image. Bottom row: segmentation based on red, green, and blue channel of colour normalised image.

5. Conclusions

In this paper we have shown that by employing a colour normalisation technique to reduce colour variations and enhance image contrast, improved segmentation of skin lesions can be achieved. We utilise two different border detection techniques, one based on an iterative segmentation method, and one using a neural network based scheme. For both algorithms we show, using a large dataset of 100 dermoscopic images, that pre-processing using colour enhancement leads to significantly better segmentation results, which in turn can form the basis of improved diagnosis. As the pre-processing step is independent of the actual segmentation technique, it can equally be employed in conjunction with other segmentation algorithms, with similar performance improvements to be expected.

References
[1] Argenziano G, Soyer H, De Giorgi V. Dermoscopy: a tutorial. EDRA Medical Publishing & New Media; 2002.
[2] Binder M, Schwarz M, Winkler A, Steiner A, Kaider A, Wolff K, et al. Epiluminescence microscopy. A useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Archives of Dermatology 1995;131(3):286–91.
[3] Buchsbaum G. A spatial processor model for object colour perception. Journal of the Franklin Institute 1980;310:1–26.
[4] Celebi M, Iyatomi H, Schaefer G, Stoecker W. Lesion border detection in dermoscopy images. Computerized Medical Imaging and Graphics 2009;33(2):148–53.
[5] Delgado D, Butakoff C, Ersboll B, Stoecker W. Independent histogram pursuit for segmentation of skin lesions. IEEE Transactions on Biomedical Engineering 2008;55(1):157–61.
[6] Finlayson G, Schaefer G. Colour indexing across devices and viewing conditions. In: 2nd international workshop on content-based multimedia indexing. 2001. p. 215–21.
[7] Fleming M, Steger C, Zhang J, Gao J, Cognetta A, Pollak I, et al. Techniques for a structural analysis of dermatoscopic imagery. Computerized Medical Imaging and Graphics 1998;22(5):375–89.
[8] Hance G, Umbaugh S, Moss R, Stoecker W. Unsupervised color image segmentation with application to skin tumor borders. IEEE Engineering in Medicine and Biology Magazine 1996;15(1):104–11.
[9] Jemal A, Siegel R, Ward E, Hao Y, Xu J, Thun M. Cancer statistics, 2009. CA: A Cancer Journal for Clinicians 2009;59(4):225–49.
[10] Land E. The retinex theory of color constancy. Scientific American 1977:108–29.
[11] Nabney I. Netlab: algorithms for pattern recognition. Springer; 2002.
[12] Ohta Y, Kanade T, Sakai T. Color information for region segmentation. Computer Graphics Image Processing 1980;13(3):222–41.
[13] Rajab M, Woolfson M, Morgan S. Application of region-based segmentation and neural network edge detection to skin lesions. Computerized Medical Imaging and Graphics 2004;28(1–2):61–8.
[14] Ridler T, Calvard S. Picture thresholding using an iterative selection method. IEEE Transactions on Systems Man and Cybernetics 1978;8:630–2.
[15] Rizzi A, Gatta C, Marini D. A new algorithm for unsupervised global and local color correction. Pattern Recognition Letters 2003;24(11):1663–77.
[16] Schmid P. Segmentation of digitized dermatoscopic images by two-dimensional color clustering. IEEE Transactions on Medical Imaging 1999;18(2):164–71.
[17] Steiner K, Binder M, Schemper M, Wolff K, Pehamberger H. Statistical evaluation of epiluminescence dermoscopy criteria for melanocytic pigmented lesions. Journal of the American Academy of Dermatology 1993;29(4):581–8.