Biomedical Signal Processing and Control 58 (2020) 101832
Improved image processing techniques for optic disc segmentation in retinal fundus images

R. Geetha Ramani, J. Jeslin Shanthamalar*

Department of Information Science and Technology, College of Engineering, Anna University, Guindy, Chennai, 600025, India

* Corresponding author. E-mail addresses: [email protected] (R.G. Ramani), [email protected] (J.J. Shanthamalar).

https://doi.org/10.1016/j.bspc.2019.101832

Article history: Received 3 June 2019; Received in revised form 11 November 2019; Accepted 21 December 2019

Keywords: Pixel density calculation; Circular Hough peak; Superpixel segmentation; Glaucoma; Circular Hough transform

Abstract

Glaucoma is one of the leading causes of blindness in the world and is projected to affect over 79.6 million people globally. Automated computer-aided systems are increasingly used in disease detection and have proven highly useful in assisting experts with early diagnosis. Automated optic disc segmentation through an intelligent system is therefore very helpful for the early detection of glaucoma. This paper presents an improved image processing algorithm for retinal fundus images, using a region-based pixel density calculation method for optic disc localization, and an improved Circular Hough Transform with Hough peak value selection combined with red channel superpixel segmentation for optic disc segmentation. Optic disc segmentation was applied to eight publicly available databases (HRF, DRISHTI-GS1, DRIONS-DB, DRIVE, ONHSD, CHASE-DB1, INSPIRE and MESSIDOR); accuracies of 99.73%, 99.31%, 99.37%, 99.38%, 99.64%, 99.20%, 99.31% and 99.72%, and specificities of 99.90%, 99.43%, 99.60%, 99.52%, 99.82%, 99.43%, 99.84% and 99.89%, were obtained with less than 2 s of computation time per image. The results show that the proposed system is highly competitive with the state of the art and achieves better accuracy with fast execution time.

© 2019 Elsevier Ltd. All rights reserved.

1. Introduction

Automated retinal image analysis through image processing is of great significance in the early detection of retinal diseases, especially sight-threatening diseases such as Glaucoma and Diabetic Retinopathy, whose damage is irreversible [1-4]. Glaucoma is an incurable eye pathology around the optic disc (OD) that badly damages the optic nerve and leads to blindness, often without the patient being aware of it [5]. Optic nerve damage occurs due to high intraocular pressure [6]; if the pressure rises above 21 mm Hg, it damages the starting point of the optic nerve head, which lies within the OD. The features of the OD, optic cup (OC), neuroretinal rim (NRR), optic nerve head (ONH), retinal nerve fibre layer (RNFL) around the OD, the ISNT rule and the cup-to-disc ratio (CDR) are all very useful to ophthalmologists in identifying retinal diseases such as glaucoma, diabetic retinopathy (DR) and macular edema in their earlier stages [1,7-9]. Manual identification requires many experts in ophthalmology, which is a limiting factor; an automatic system would therefore be of great benefit for identifying and evaluating pathological behavior [10]. Many existing techniques use

OD features such as shape, size, intensity and blood vessel density for glaucoma diagnosis, because the OD is the brightest circular region in the anatomical structure of the eye and one of the most important features used to identify diseases including glaucoma, hypertensive retinopathy, brain tumor, cardiovascular diseases, papilledema and retinal lesions [7,11]. In the field of medical image analysis, automatic identification and segmentation of the OD is a highly challenging task because of changes in OD features caused by abnormalities such as ONH damage, peripapillary atrophy (PPA), cotton wool spots, microaneurysms, bright lesions, and hard and soft exudates. Many supervised and unsupervised approaches have been proposed for OD localization, segmentation of the OD and OC, and CDR computation from retinal fundus images for glaucoma identification [12]. Fig. 1 shows the features of a retinal fundus image. The main motive of this work is to improve the accuracy of OD segmentation through automatic image processing techniques, to help ophthalmologists identify the "silent thief", i.e., glaucoma, in its initial stages. The challenges addressed and the work done are:


1. OD detection with an overall accuracy of 99.43% is achieved, in less than 2 seconds per image, through region-based pixel density calculation.



Fig. 1. Features of Healthy Retina.

2. All the localized ODs are segmented using improved image processing algorithms, with an overall accuracy of 99.5% and specificity of 99.68%, which is highly competitive with existing techniques, and with a computation time much lower than that of existing techniques.
3. The complete process from OD localization to segmentation is explained thoroughly with pictorial representation, which will help researchers with further development.
4. A large number of images (1654) were analyzed using the unsupervised approach with high accuracy, making the proposed method well suited for glaucoma diagnosis.

Our results show that the improved image processing techniques yield a high-accuracy, robust automated system for OD segmentation. The organization of this paper is as follows: related work is given in Section 2, the materials and methods of the proposed work are presented in Section 3, the experimental results and visual representation of the output are given in Section 4, a discussion of the proposed system and a comparison with other algorithms appear in Section 5, and the paper is concluded in Section 6.

2. Related work

Computerized OD localization and segmentation are still challenging tasks due to changes in OD features. A significant number of articles have been published dealing with OD localization, OD segmentation, OC segmentation, CDR calculation and glaucoma identification. These articles can be grouped into five categories under the two main headings of supervised and unsupervised approaches. Among unsupervised methods for OD localization, [7] proposed a fuzzy logic controller using the convergent point of the vessels and color properties, [2] used a circle operator, [13] presented an improved corner detection algorithm, [14] proposed an entropy-based algorithm using a sliding window technique, [15] presented a unified approach based on normalized cross-correlation, [16] used a concentric circular convolution mask computed via the Fast Fourier Transform, and [17] adopted a hill-clustering technique with histogram-based multilevel thresholding. However, all these methods concentrate only on the detection and localization of the OD. Unsupervised OD segmentation can be categorized into four groups: morphology-based methods [10,18-20], region-based methods [21,22],

edge-based methods [17,23,24] and vessel-based approaches [25-27]. The Circular Hough Transform [11,12,28] is applied to extract the circular OD shape and is computationally efficient. Many existing approaches use the Circular Hough Transform for better OD segmentation accuracy; however, they concentrate only on circular object identification, which can mislead disc segmentation due to pathological distractions, unclear OD edges and blood vessels. To avoid these problems, the proposed approach uses the Circular Hough Transform with circular Hough peak value selection for better OD segmentation; this solves the above problems and also achieves better accuracy than existing approaches. Other methods use the bat meta-heuristic algorithm [11,29] and a variational model with three energies (a phase-based boundary term and Principal Component Analysis (PCA)-based shape and region terms) for OD segmentation. In the morphology-based approach, [10] presented extraction of the OD contour by mathematical morphology along with PCA, with good accuracy. This approach would, however, be more robust if the process started from a localized OD instead of a cropped image as input for PCA grayscale conversion, because OD detection and localization are very difficult over a wide variety of images; it would also be more suitable for further development toward glaucoma identification. In [12], the circular Hough transform is applied to detect the OD center, followed by a grow-cut algorithm that extracts the OD using the OD center as the seed of region growing. This is one of the competitive state-of-the-art techniques, covering the entire pipeline from localization to segmentation needed for an automated system, with quite promising results over a wide variety of pathological images; however, its performance results indicate that the stability of its accuracy does not hold across all the databases used for testing. In [28], preprocessing is done by morphological operations followed by the circular Hough transform for OD localization, and the OD is then segmented by polar-transform-based adaptive thresholding over a region of interest. This approach is very competitive with the state of the art in terms of accuracy measures; however, its accuracy is less stable than the proposed approach when handling a wide variety of images. Among supervised methods, for OD localization [30] proposed a hybrid directional model, and for OD segmentation [10] used mathematical morphology along with PCA, [18] adopted fuzzy C-means combined with thresholding, and [31] presented histogram-based swarm optimization. Supervised OD segmentation methods can be grouped into feature-based approaches [32-34] and deep learning-based approaches [1,35,36]. Recently, deep learning approaches have been developed for OD localization using a pre-trained convolutional neural network [37], for OD and OC segmentation using U-Net and Dense-Net convolutional neural networks [35,36], and for glaucoma identification using deep residual learning for image recognition [1]. The accuracy of these methods depends entirely on the amount of training data used, and their training time is very high compared to testing time.
Deep learning methods give high accuracy once a large dataset has been trained, but training a large dataset is difficult because many pre-trained networks restrict the size of the training data, and publicly available ground truth for glaucoma is scarce. Deep neural network approaches also impose limitations on clinical facilities, which would need high-end computing resources for screening and diagnosis; this is impossible for all clinics, and hence the cost is high. In the feature-based approach, [5] proposed superpixel OD segmentation by extracting multiple parameters for each pixel, such as statistical features, texton histograms and fractal features. This method is chosen as one of the state-of-the-art methods on


benchmark datasets because of its performance. Although the results were promising, its computation time for feature extraction is very high compared with other state-of-the-art techniques and with the proposed method. In that technique, after preprocessing, the OD is localized based on an eccentricity measure, which could misguide the localization process over a wide variety of images with different pathologies; the localization results were also not reported. In [23], a classifier model based on structured learning is used to identify the OD edge, followed by the circular Hough transform for OD extraction. This is one of the quite competitive approaches referenced here; however, its classifier model is based on structural features and would need a greater variety of fundus images for training to identify novel features and give better segmentation results. Training such a classifier model also needs more computation time than other existing approaches, which is one of its major disadvantages. Superpixel segmentation using Simple Linear Iterative Clustering (SLIC) [5,38,39] is commonly used as a preprocessing step during OD localization. Features such as mean, median, maximum, minimum, average and mode intensity and centroid distance, extracted from the retinal fundus images, have been used to segment the OD [5]. These approaches require a large computation time for feature extraction, which is a major drawback, although their OD segmentation accuracy is high and quite competitive with previous approaches. The proposed method is favorable for handling a large number of images with different pathological distractions, with better accuracy on the benchmark datasets. Its low computation time makes the proposed method suitable for an automated system and helps the ophthalmologist diagnose retinal diseases quickly. The accuracy of the proposed system depends entirely on OD localization, and our performance evaluation shows that the process automatically segments all localized images without any loss. Another important strength of the proposed method is that, to improve segmentation accuracy, it uses an improved circular Hough transform for OD segmentation with peak value selection combined with red channel superpixel segmentation. This makes the proposed approach more robust, with high accuracy, and provides a fast automated system compared with existing techniques.

3. Materials and methods

The proposed method provides an automatic system for OD segmentation that can be used in the identification of glaucoma. This automated system helps ophthalmologists analyze retinal images efficiently and identify the disease earlier. The proposed method is tested and evaluated on the ONHSD, DRIONS, HRF, DRISHTI-GS1, DRIVE, CHASE-DB1, INSPIRE and MESSIDOR databases; the details are presented in subsection 3.1. Fig. 2 shows the framework of the proposed system, and the details of this framework are explained in the following subsections.

3.1. Database description

DRIONS-DB (Digital Retinal Images for Optic Nerve Segmentation DataBase) is a public database for benchmarking optic nerve head segmentation from retinal fundus images. This database [5] contains 110 images with a resolution of 600 × 400 pixels, acquired with a color analog fundus camera and approximately centered on the ONH.
The ground truth for each image was marked by two experts with 36 coordinate points of the OD. ONHSD (Optic Nerve Head Segmentation Dataset) is a public database [5] for benchmarking optic nerve head segmentation from


retinal fundus images. It contains 99 images with a resolution of 760 × 570 pixels, acquired using a Canon CR6 45MNf fundus camera. The ground truth of the optic nerve head edge was marked by four clinicians.

MESSIDOR (Methods to Evaluate Segmentation and Indexing Techniques in the field of Retinal Ophthalmology, from the French) is a public database [5] containing 1200 images at resolutions of 1440 × 960, 2240 × 1488 and 2304 × 1536 pixels; 800 images were acquired with pupil dilation (one drop of Tropicamide at 0.5%) and 400 without dilation. The images were acquired by three ophthalmologic departments using a color video 3CCD camera mounted on a Topcon TRC NW6 non-mydriatic retinograph with a 45° field of view.

The DRISHTI-GS1 database consists of 101 color fundus images at 2045 × 1751 resolution, used for validation of OD segmentation, optic cup segmentation and notching detection. It provides ground truth for segmentation soft maps, average OD and cup boundaries, CDR values, image-level decisions and notching.

HRF (High Resolution Fundus Image Database) is a public database of 45 retinal color fundus images with a resolution of 3504 × 2336 pixels, comprising normal, Diabetic Retinopathy and Glaucoma images. It contains gold standard data for field of view (FOV) masks, vessel segmentation, and the center point and radius of the OD.

INSPIRE-AVR (Iowa Normative Set for Processing Images of the Retina-Arterio-Venous Ratio) consists of 40 images at 2392 × 2048 resolution, with expert references for the vessels, the OD and the arterio-venous ratio.

CHASE-DB1 is a public database of 28 retinal fundus images acquired from the left and right eyes of 14 patients with a 30° field of view. The images [12] were captured with a Nidek NM 200D camera at a resolution of 999 × 960 pixels and are affected by illumination artifacts and poor contrast.

DRIVE (Digital Retinal Images for Vessel Extraction) consists of 40 retinal color fundus images of 565 × 584 pixels, mainly used for blood vessel segmentation. The images [12] comprise 33 normal images and 7 images showing mild Diabetic Retinopathy with pigment epithelium changes, exudates and hemorrhages. Manual segmentation of the vasculature serves as ground truth for blood vessel segmentation.

3.2. Fundus image preprocessing

Initially, the retinal color fundus images are preprocessed to enhance image quality and make subsequent processing simpler and more effective. In our proposed method, preprocessing of the retinal color fundus images consists of image resizing, binary conversion and masking, erosion, mapping and Gaussian filtering. These steps greatly assist the localization process. Fig. 3 shows the flow of the fundus image preprocessing. The first level of preprocessing resizes the images from all databases to a uniform scale for automatic processing, which improves both the efficiency of the proposed technique and the processing time. The system uses eight publicly available databases containing images of different sizes, and the automatic approach would not be efficient without resizing. In this method, fundus images are resized by a simple calculation using the original resolution of the image.
Resizing maintains the originality of the image, because every pixel holds important information used to identify disease at an initial stage. Forcing identical dimensions across all databases would distort the shape of the OD, which would affect the segmentation accuracy of the proposed system. Because images of different sizes are used in our approach, the proposed algorithm calculates different resize dimensions for each database, as follows.


Fig. 2. A proposed framework for Optic Disc Segmentation.

Fig. 3. Processing steps for Fundus Image Preprocessing. (a) RGB retinal image (b) Binarized mask (c) Morphological erosion (d) Element-wise multiplication (e) Gaussian filter.


Resizing is applied to every database image having a row or column size above 1000. If both the column and row sizes are below this threshold, the original fundus image is used as-is; otherwise, resizing is applied to both, and the process is performed iteratively until the desired size is reached. The calculation of the resize dimensions of an image is given in Eq. (1) and Eq. (2):

IRRsize = OIR / 2   (1)

IRCsize = OIC / 2   (2)

where OIR and OIC are the row and column sizes of the original image (OI), and IRRsize and IRCsize represent the row and column sizes of the image to be resized (IR). This process halves the image size row-wise and column-wise until both the row and column sizes fall below 1000. The method strongly supports the circular Hough peak approach, because it reduces the dimensions of an image without affecting the original shape of the OD. Binarization is performed with a threshold value empirically set to 0.1 and is used to remove background noise, abnormality markings and image details recorded in the background, which would distract the region-of-interest process during localization. Background noise distracts the OD localization process when intensity regions are calculated, and it increases the number of high-region center points, which in turn increases the number of iterations during localization center point calculation. To avoid these problems, all resized fundus images are binarized using the global thresholding method [40], with the threshold normalized to the range from 0 (minimum) to 1 (maximum). The proposed system uses Matlab's built-in functions for global thresholding with their default parameters at the initial stage. Our experiments were performed with threshold values from 0 to 1 in steps of 0.1 (0, 0.1, 0.2, ..., 1), and the lowest threshold level of 0.1 was observed to be the most suitable for removing background noise from the retinal images without affecting the foreground details. From our experiments, a threshold level of 0.1 is sufficient to remove the background markings present outside the retinal image in the DRIONS, ONHSD and INSPIRE databases, as well as in the other databases, while creating the binary mask. For the morphological erosion operation, all images were tested using a disk structuring element with radii from 1 to 30 to eliminate small irrelevant details. From our experiments, a radius of 20 is apt for all retinal fundus images, because larger radii erode the OD from many images; since the OD is almost circular, erosion with a disk structuring element of radius 20 is used during the localization process. This operation erodes details smaller than the structuring element during fundus image processing. The process creates a mask of the resized image, followed by morphological erosion using the disk structuring element to remove unwanted background information from the RGB image. The erosion of A by B, denoted A ⊖ B, is given in Eq. (3):

A ⊖ B = { z ∈ E | (B)z ⊆ A }   (3)

where E is the Euclidean space (or integer grid), z is a point of E, A is the binarized mask and B is the disk structuring element. The erosion of binary image A by structuring element B is the set of all points z such that B, translated by z, is contained in A [41]. In other words, the origin of B (here, the radius-20 disk) is superimposed on each pixel of the binary image A; if B is completely contained inside A, the pixel value is retained, otherwise it is eroded. After erosion, element-wise multiplication is performed with the resized image, and the result is then processed by


a 2D Gaussian filter [42] to smooth the image. The 2D Gaussian filter is formalized in Eq. (4):

Gf(x1, x2) = exp( -(x1² + x2²) / (2σ²) )   (4)

where x1 and x2 are the coordinate points of an image pixel and σ is the standard deviation of the 2D Gaussian distribution, set to 0.9 for all images.
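For concreteness, the whole preprocessing chain of Section 3.2 can be summarized in a short sketch. This is an illustrative Python/OpenCV rendering, not the authors' Matlab implementation; in particular, scaling the 0.1 threshold to 8-bit values and the function name are our assumptions.

```python
import cv2
import numpy as np

def preprocess_fundus(rgb):
    """Sketch of Section 3.2: resize, binarize, erode, mask and smooth."""
    # Halve both dimensions until rows and columns fall below 1000 (Eqs. 1-2).
    while rgb.shape[0] >= 1000 or rgb.shape[1] >= 1000:
        rgb = cv2.resize(rgb, (rgb.shape[1] // 2, rgb.shape[0] // 2))
    # Global binarization at a normalized threshold of 0.1 to remove
    # background noise and markings outside the retina.
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    _, mask = cv2.threshold(gray, int(0.1 * 255), 1, cv2.THRESH_BINARY)
    # Morphological erosion with a disk structuring element of radius 20 (Eq. 3).
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (41, 41))
    mask = cv2.erode(mask, disk)
    # Element-wise multiplication maps the mask back onto the resized image.
    masked = rgb * mask[:, :, None]
    # 2D Gaussian filter with sigma = 0.9 smooths the result (Eq. 4).
    return cv2.GaussianBlur(masked, (0, 0), sigmaX=0.9)
```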

3.3. Optic disc localization

After fundus image preprocessing, all preprocessed images are passed to OD localization through a set of processes. In our approach, disc identification is performed by color channel selection, morphological opening, contrast enhancement, binarization, dilation, region-of-interest center point calculation, pixel density calculation and multilevel localization. OD detection is considered the most important and complex step in an automatic glaucoma diagnosis system due to pathological distractions. In our proposed approach, pixel density calculation plays the major role in automatic disc localization and is found to be robust, as it gives better accuracy in the localization process. Fig. 4 shows the process followed in disc localization. The first step of OD localization is color channel selection from the RGB image. Retinal color fundus images are composed of three channels: red, green and blue. Of the three color components, the green channel gives the best view of the OD and the blood vessels. The proposed system uses eight publicly available databases with images of different resolutions, intensities, pathological distractions, field-of-view (FOV) focus and sizes; the green channel is selected for further processing because, compared to the red channel, it gives a better view of both the OD and the blood vessels, which helps the localization process. The red channel contains high-contrast regions in which the OD is difficult to distinguish from bright lesions in many retinal fundus images, which would misguide localization toward a bright lesion instead of the OD. The blue channel contains low-contrast regions in which the OD is difficult to identify, since most blood vessels are hidden, so pixel density calculation would not give good results. The proposed system is time efficient, requiring less than 2 seconds per image for all images used here. In the second step, morphological opening is performed on the green channel of the preprocessed images using a line structuring element with length 5 and line angle 5. These values are used to smooth small pathological distractions such as lesions, cotton wool spots, exudates and hemorrhages. The proposed system tested all images with disk structuring elements of radius 1-20 and line structuring elements with length and angle from 1 to 10, using Matlab's built-in functions. From our experiments, most pathological distractions have varied shapes, and the line structuring element smoothed more distractions than the disk structuring element; hence the line structuring element's length and angle were both fixed at 5. Morphological opening with the line structuring element removes small pathological distractions, which helps reduce processing time by reducing the number of high-intensity regions and helps identify the OD region effectively. Morphological opening generally smoothes the contour of an object, breaks narrow isthmuses, and eliminates thin protrusions [41]. The opening of set A by structuring element B, denoted A ∘ B, is given in Eq. (5):



A ∘ B = (A ⊖ B) ⊕ B   (5)


Fig. 4. Processing steps for Optic Disc Localization (a) Preprocessed Image (b) Green channel selection (c) Contrast enhancement (d) Morphological opening (e) Binarization & Dilation (f) Optic Disc Localization (g) Multilevel localization.

Thus the opening of A by B is the erosion of A by B, followed by a dilation of the result by B. The quality of the green channel images is then improved by enhancing their contrast. Contrast-Limited Adaptive Histogram Equalization (CLAHE) [40] is applied so that the OD appears brighter than the other features in the retinal color fundus image. Rather than enhancing the contrast of the entire image, CLAHE operates on 8 × 8 tiles, i.e., small regions of the image, computing a contrast transform function for each tile. Here the clip limit is fixed at 0.01 to avoid background noise, and a uniform distribution parameter of 0.4 is used to create a flat histogram; these values correspond to Matlab's defaults. With higher values for the clip limit and number of tiles, the contrast of background pixels also increases, exposing the non-uniform texture of the background; with lower values, the OD contrast is not enhanced enough to calculate the region of interest efficiently in subsequent steps. From our experiments, the default parameter values are the most suitable for enhancing OD contrast on all eight databases. After CLAHE, the enhanced images are binarized by replacing all values above a globally determined threshold with ones and all other values with zeros. The binarization threshold starts at 0.98 and is decreased by 0.02 iteratively until the binarization yields non-zero values; starting at 0.98 allows the region-of-interest count for OD localization to be computed, with the threshold range normalized between 0 and 1. From our experiments, thresholds between 0.98 and 0.50 are suitable for calculating the region of interest on all eight databases, with no major difference below 0.50. To reduce the number of iterations and the computation time, the threshold range is therefore fixed from 0.98 down to 0.50

for all the databases. The threshold is decreased by 0.02 because in some fundus images the region of interest is identified at the initial threshold, while others require a lower threshold. From our experiments, there is no major difference between decrements of 0.01 and 0.02, but major differences occur with a decrement of 0.03: about 25% of images then miss a single region of interest, which is usually the correct localization. For example, if the region of interest is not identified at the 0.98 threshold, decrements of 0.01, 0.02 and 0.03 identify 1, 1 and 3 regions of interest respectively; a decrement of 0.03 thus yields 3 regions of interest, tripling the OD localization verification instead of a single verification. To avoid these differences and reduce the number of iterations, 0.02 is selected as the threshold decrement for identifying the OD region effectively. After binarization and dilation, the resulting image is used to measure the properties of the high-intensity image regions (those with value one); the region properties are used to calculate the center points of all connected regions. [5] proposed localization using the centroid of the most intense and least eccentric region of the green channel preprocessed image; however, this is not suitable for pathological images having more than one high-intensity center point, as illustrated in Fig. 5, which compares pixel density calculation with the high-intensity eccentric approach and shows that pixel density calculation performs better. The proposed pixel density calculation method for OD localization achieves 100% accuracy on the DRIONS, HRF, DRISHTI-GS1, DRIVE, CHASE-DB1, INSPIRE and ONHSD databases, and 99.43% on the MESSIDOR database. The final and most important steps of OD localization are pixel density calculation and multilevel localization, and the flow of these processes is detailed in the following subsections.
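The enhancement and descending-threshold search can be sketched as follows. Note that OpenCV's CLAHE clip limit is not expressed on Matlab's normalized 0-1 scale, so the value below is only a rough stand-in, and the helper name is ours.

```python
import cv2
import numpy as np

def candidate_od_centers(green):
    """Sketch of Section 3.3: CLAHE on 8x8 tiles, then a global threshold
    swept from 0.98 down to 0.50 in steps of 0.02 until at least one
    high-intensity region appears."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # stand-in value
    enhanced = clahe.apply(green)
    for t in np.arange(0.98, 0.49, -0.02):
        _, binary = cv2.threshold(enhanced, int(t * 255), 255, cv2.THRESH_BINARY)
        binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))
        # Connected components yield the center points of the bright regions.
        n, _, _, centroids = cv2.connectedComponentsWithStats(binary)
        if n > 1:                 # label 0 is the background
            return centroids[1:]  # candidate centers for the pixel density check
    return np.empty((0, 2))
```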


Fig. 5. Pixel Density Calculation (Proposed Method) Vs High intense Eccentric Approach. (a) Center no = 1, pixel count = 6185, eccentricity = 0.69 (b) Center no = 2, pixel count = 5980, eccentricity = 0.79 (c) Center no = 3, pixel count = 4497, eccentricity = 0.61.

3.3.1. Pixel density calculation

In the OD localization method, pixel density calculation is considered the most important technique for identifying the OD. Many previous approaches relied only on blood vessel density, blood vessel thickness or blood vessel pixel count, without including the OD itself. The proposed system uses both blood vessel and OD pixels for better localization, because the OD is the brightest region in the retinal fundus image and contains many blood vessels. The parameters used in this work were obtained by testing all eight databases and comparing against the available ground truth. In the pixel density calculation step, the first image of each database was tested and the center point of its first high-intensity region was calculated. Based on that center point, a circular binary region mask with a radius of 100 is created, and element-wise binary array multiplication is performed with the original image. The region mask radius was tested at sizes of 50, 100 and 150; a radius of 100 was selected for the circular binary mask because a radius of 50 partially masks the OD region in some images, while a radius of 150 masks an overly large region with unwanted details. The binarization threshold details are explained in Section 3.3. After mapping onto the original image, the corresponding pixel count is calculated and recorded for each database in order to set the pixel density threshold range. From that knowledge, the pixel density threshold was tested from 500 to 6000 in steps of 500. Based on our observations, a pixel density threshold of 2000 localized the most images, i.e., 1647 out of 1654. For some low-quality images, the OD could not meet this density threshold; in that case, the pixel density threshold is reduced to 1500. The method calculates the total number of non-zero pixels for each center by applying a circular mask of radius 100 to all images. Fig. 6 shows the steps followed in the calculation of pixel density. The first high-intensity center is converted into an ROI (region of interest) polygon with radius 100 for all resized images and then

converted into a circular region binary mask. The region binary mask is applied in element-wise binary array multiplication with the original image for further processing. In step 3, the Canny edge detection algorithm with a sensitivity threshold of 0.10 is used to identify strong and weak edges in each region. We also tested our images with other edge detection methods such as Prewitt, Roberts and Sobel; compared to these, Canny detects very small details such as thin vessels, which are very useful for pixel density calculation. The Canny threshold was tested between 0 and 1, and 0.10 was finalized from our experiments. After edge detection, connected components of the binarized image having fewer than 50 pixels are considered pathological distractions and removed; a larger component-size threshold would remove blood vessel details and lead to failure of OD identification. Before the pixel count calculation, the binarized image is dilated for a better view of the detected edges. The procedural steps for pixel density calculation are given below (a sketch follows the list):

Step 1: Initialize the pixel density threshold to 2000, with the binarization threshold swept from 0.98 down to 0.50 in steps of 0.02 for all images.
Step 2: Choose the first region of interest from the identified highest-intensity regions and convert it to a grayscale image.
Step 3: Apply the Canny edge detection algorithm to the grayscale image with a sensitivity threshold of 0.10.
Step 4: Remove connected components having fewer than 50 pixels from the binary image.
Step 5: Dilate the resulting image using a disk structuring element of radius 1.
Step 6: Calculate the total number of non-zero pixels in the dilated image.
Step 7: Compare the resulting pixel count with the pixel density threshold, for single as well as multiple regions; steps 2-7 are performed iteratively for multiple regions.
Step 8: If the resulting pixel count < pixel density threshold, reinitialize the pixel density threshold to 1500 and repeat the process from step 1.
Step 9: The center with the highest pixel count is identified as the OD location.
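A minimal Python/OpenCV sketch of Steps 1-9 follows, assuming 8-bit images. The Canny thresholds are stand-ins for Matlab's 0.10 sensitivity value, and all helper names are ours.

```python
import cv2
import numpy as np

def pixel_density(image, center, radius=100, min_component=50):
    """Edge-pixel count inside a circular ROI around one candidate center."""
    # Step 2: circular ROI mask of radius 100 around the candidate center.
    mask = np.zeros(image.shape[:2], np.uint8)
    cv2.circle(mask, (int(center[0]), int(center[1])), radius, 1, thickness=-1)
    roi = cv2.cvtColor(image * mask[:, :, None], cv2.COLOR_RGB2GRAY)
    # Step 3: Canny edge detection (absolute thresholds stand in for 0.10).
    edges = cv2.Canny(roi, 25, 50)
    # Step 4: drop connected components smaller than 50 pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    for lbl in range(1, n):
        if stats[lbl, cv2.CC_STAT_AREA] < min_component:
            edges[labels == lbl] = 0
    # Steps 5-6: dilate with a disk of radius 1, then count non-zero pixels.
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return int(np.count_nonzero(cv2.dilate(edges, disk)))

def select_od_center(image, centers, threshold=2000, fallback=1500):
    """Steps 7-9: the highest count wins; relax the threshold if none passes."""
    counts = [pixel_density(image, c) for c in centers]
    if not counts:
        return None
    if max(counts) < threshold:
        threshold = fallback  # low-quality images: reduced density threshold
    best = int(np.argmax(counts))
    return centers[best] if counts[best] >= threshold else None
```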


Fig. 6. Pixel Density Calculation for Optic Disc localization with 5 centers. (a) - (e) Process of Region of Interest center selection using radius = 100, Grayscale conversion, Canny edge detection, Connected components having < 50 pixels removed & dilated.

3.3.2. Multilevel localization

After pixel density calculation, the images undergo multilevel localization in order to crop the full OD: peripapillary atrophy and pathological distractions near the OD can sometimes perturb the localization center point, leading to partial cropping of the OD. To avoid partially cropping the OD when using

localization points, multilevel localization is performed to locate the center point of the OD. The proposed system automatically crops the images using the OD multilevel localization point for the segmentation process; this makes the proposed system more robust, fast and effective, and also gives a clear view of the disc for better segmentation. The values used in multilevel localization were obtained from observations on all eight databases, matched against the ground truth for better accuracy. To blur other small distractions (cotton wool spots, small lesions, etc.), the grayscale image is smoothed by a Gaussian filter with standard deviation 3, and the maximum intensity is calculated from the filtered image. The step 4 process was tested with reduction points of 5, 10, 15 and 20, and a threshold reduction point of 15 was found suitable for all images; for example, if the maximum intensity of an image is 200, the binarization threshold is fixed as threshold > 185. Step 5 was tested with threshold reduction points of 5, 10 and 15 for all images, and the value 5 was selected as the final threshold reduction point for the multilevel localization center point calculation. Fig. 7 shows the steps followed to identify the OD center points and how the partially localized disc problem is solved by multilevel localization. For multilevel localization, the following steps are performed using the localized OD center point:

Step 1: The localized disc center point is taken as an input parameter, and the green channel is chosen for the multilevel localization process because it differentiates the OD and OC regions better than the red channel.
Step 2: The green channel is converted to grayscale with its range adjusted from 0.2 to 0.9 to increase contrast, which highlights the OD center point more clearly.
Step 3: A Gaussian filter with standard deviation 3 is applied, and the maximum intensity is calculated from


Fig. 7. Multilevel Localization Processing Steps.

the Gaussian-filtered image.
Step 4: To create a binary image, the threshold is initialized as threshold > (maximum intensity of the corresponding image - 15); pixels above the threshold are represented by 1 and the rest by 0.
Step 5: The center points of the high-intensity regions are calculated from the binarized image.

i. If there is one center point, its coordinates are used as the image cropping point as well as the multilevel localization point.
ii. If there is more than one center point, the first high-intensity center is converted into a circular region binary mask with radius 100 (for all resized images) and then mapped onto the original image. The maximum intensity of each of the three color channels (red, green and blue) corresponding to the first center point is calculated, and the threshold for each color channel is initialized as threshold > (maximum intensity of the corresponding color channel - 5). Each color channel calculation is defined below.
iii. For the red color channel, pixel countR = count(threshold > maximum intensity of the red channel - 5).
iv. For the green color channel, pixel countG = count(threshold > maximum intensity of the green channel - 5).
v. For the blue color channel, pixel countB = count(threshold > maximum intensity of the blue channel - 5).
vi. The total pixel count is pixel countT = pixel countR + pixel countG + pixel countB.
vii. Each center point is compared with the other center points based on pixel countT, and the highest one is selected as the final center point.
Step 6: The center point with the highest number of high-density pixels is considered the final localization point and is used for image cropping in the further segmentation process. A sketch of this selection follows.
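The following is a hedged NumPy sketch of the per-channel count used in sub-steps (ii)-(vii), assuming RGB arrays; helper names are illustrative.

```python
import numpy as np

def channel_pixel_count(patch, reduction=5):
    """Sub-steps (iii)-(vi): per-channel counts of pixels within 5 of the
    channel maximum, summed over R, G and B."""
    total = 0
    for ch in range(3):  # R, G, B
        channel = patch[:, :, ch].astype(np.int32)
        total += int(np.count_nonzero(channel > channel.max() - reduction))
    return total

def pick_multilevel_center(image, centers, radius=100):
    """Sub-step (vii) / Step 6: highest combined near-maximum count wins."""
    best, best_count = None, -1
    for cx, cy in centers:
        # Circular region binary mask of radius 100 around the candidate.
        ys, xs = np.ogrid[:image.shape[0], :image.shape[1]]
        inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
        patch = image * inside[:, :, None]
        count = channel_pixel_count(patch)
        if count > best_count:
            best, best_count = (cx, cy), count
    return best  # final multilevel localization point
```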


Fig. 8. Optic Disc segmentation process. (a) Cropped Image for Optic Disc segmentation (b) Preprocessing done on Red Channel (c) Canny edge detection on Green channel (d) Circular Hough Transform with peak value = 5 (e) Red channel selection (f) Superpixel segmentation region size = 15 (g) Circular Hough Transform with peak value = 5 (h) Optic Disc boundary.

3.4. Optic disc segmentation

In the OD segmentation method, all correctly localized images undergo localized image preprocessing, improved circular Hough transform with circular Hough peak value selection, and red channel superpixel segmentation for accurate segmentation; the details of the segmentation are explained in the following subsections. Fig. 8 shows the process of OD segmentation.

3.4.1. Localized image preprocessing

In this step, the localized images are cropped by a rectangular window using the multilevel localization center point and the row and column sizes of the corresponding image. Normally a common rectangular window [5] is used for cropping; here, the window size used in our algorithm differs for each database and is calculated automatically from the multilevel localization center point and the column and row sizes of the image. The


window size differs for each database, and the formulas used for the window size calculation are given in Eq. (6)-(10):

WI = IC / 2   (6)

HI = IR / 2   (7)

Xmin = round(Locx - WI / 2)   (8)

Ymin = round(Locy - HI / 2)   (9)

RWS = [Xmin, Ymin, WI, HI]   (10)

where IC and IR represent the column and row sizes of an image, WI and HI the width and height of the window for the corresponding image, Xmin and Ymin the starting position of the crop window, RWS the rectangle window size, and Locx and Locy the x and y coordinates of the multilevel localization center point.
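Eqs. (6)-(10) translate directly into a small cropping helper. The sketch below adds boundary clamping, which the paper does not state but which is needed when the disc lies near the image border; the function name is ours.

```python
def crop_window(image, loc_x, loc_y):
    """Sketch of Eqs. (6)-(10): a rectangle window of half the image's
    width and height, centered on the multilevel localization point."""
    rows, cols = image.shape[:2]
    w, h = cols // 2, rows // 2                      # Eqs. (6)-(7)
    x_min = int(round(loc_x - w / 2))                # Eq. (8)
    y_min = int(round(loc_y - h / 2))                # Eq. (9)
    # Clamp to the image bounds before cropping (our addition).
    x_min, y_min = max(x_min, 0), max(y_min, 0)
    return image[y_min:y_min + h, x_min:x_min + w]   # Eq. (10)
```

Sizing the window relative to the image's own dimensions keeps the OD's aspect ratio intact regardless of the database, which is exactly the motive described next.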

The main motive of this calculation is not to change the shape of the OD: if the system chose one common rectangular window for all databases, the OD would be shrunk or stretched horizontally or vertically, causing difficulties in circular region identification. Hence the proposed approach uses half the column and row sizes of the corresponding image as the length and width of the rectangular window, avoiding any change in the shape of the OD while handling images of different sizes. In our approach, the rectangular window size varies across databases because it is defined from the size of the corresponding image. This type of cropping does not affect the shape of the OD, since the system uses the Circular Hough Transform (CHT) to detect the circular OD region in the segmentation process. The cropped images are further processed in two stages. The first stage has three steps: first, image filtering, red channel selection, grayscale adjustment, and binarization by global image thresholding using Otsu's method; second, green channel selection, grayscale adjustment, Gaussian filtering and Canny edge detection; and finally, improved CHT with the circular Hough peak value fixed at 5 for OD segmentation. The second stage consists of red channel selection, superpixel segmentation [5] with region size 15, and improved CHT with the circular Hough peak value fixed at 5 for OD segmentation. Improved CHT and red channel superpixel segmentation are explained in the following subsections. The first stage begins by applying a circular averaging filter with disk value 5 to the cropped images, after which the red color channel is selected for the initial processing; the parameters for this stage were determined experimentally using Matlab. The red channel is adjusted to the range 0.5-0.9 for a clear view of the OD and then binarized with threshold points calculated by Otsu's global image thresholding algorithm. The binarized image is mapped onto the original cropped image using array multiplication. These images are further processed by selecting the green channel and adjusting it to the range 0.2-0.8 for a clear view of the OD. The adjusted green channel is processed by a Gaussian filter with standard deviation 2 to blur blood vessel distractions, and Canny edge detection is then applied to extract the OD boundary.

3.4.2. Improved circular Hough transform for optic disc segmentation

The OD appears as a circular bright region in retinal fundus images. Hence the proposed algorithm uses an improved CHT to detect the circular OD in the preprocessed localized images, with the radius adjusted between 40 and 100.


The CHT [12], an extension of the Hough transform, detects circle-shaped objects in an image using Eq. (11):

(X - A)² + (Y - B)² = R²   (11)
where A and B represent the circle center for each edge point (X, Y), and R represents the radius of the circle. In our approach, the CHT radius is fixed between 85 and 100 for the CHASE-DB1 database and between 30 and 55 for the remaining seven databases, for better peak value selection. The values used in this method were obtained from observations on all eight databases, compared against the available ground truth. The value of the Circular Hough Transform lies in finding circles in imperfect image inputs: circle-shaped regions are voted into an accumulator, and the local maximum of the accumulator is selected as the final circle. In general, the OD region is the bright, highest-intensity circular region in the retinal fundus image. However, several factors complicate detection: blood vessel distraction (the OD is the starting point of the optic nerve head and has a high density of blood vessels), unclear OD edges due to peripapillary atrophy, OC distraction (the cup is more intense than the OD region and is also roughly circular), pathology distraction, and the unknown OD radius. To avoid these problems, the proposed approach tested all images using the Circular Hough Transform with Hough peak values from 2 to 15. From our experiments, a circular Hough peak value of 5 gives the best accuracy for all images during OD segmentation. After peak value selection, each peak is analyzed by calculating the minimum and maximum intensities of the R, G and B color channels for each circle, and the best one is selected for segmentation. For this calculation, each circular Hough peak is processed by green channel selection with adjustment between 0.2 and 0.9, morphological opening using a disk structuring element of radius 5, and Prewitt edge detection on the complemented image, followed by total non-zero pixel count calculation. Prewitt edge detection is used mainly for better OD boundary extraction; it performed better than other edge detection methods when identifying the best circular region using the Circular Hough Transform. From the two remaining Hough values, one is selected using the total pixel countT method and chosen as the OD boundary; this time the threshold reduction point is empirically set to 10 for the total pixel countT calculation. The sums of the maximum and minimum intensities of the three color channels are calculated by adding and subtracting 10 to and from the maximum and minimum values of the three channels for each circle, as in step 5 of multilevel localization. The peak with the highest total, maximum-intensity and minimum-intensity pixel counts is considered the best peak and is used for OD extraction. Most existing approaches use the CHT [11,12,28] for OD localization and segmentation; in our approach, the CHT is improved with the circular Hough peak value selection technique for better accuracy. The highest sum of pixels, with the highest maximum and minimum counts across the three color channels, is taken as the final peak value for OD segmentation. Because of this step, the system gives competitive, robust and time-efficient results comparable with existing approaches; it segments all images correctly, or at worst partially in very few cases.
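A simplified sketch of the peak-value idea, using OpenCV's HoughCircles in place of Matlab's circle finder. The scoring here is mean brightness inside each candidate circle, which is only a stand-in for the paper's per-channel near-maximum/near-minimum counting.

```python
import cv2
import numpy as np

def segment_od_cht(gray, r_min=30, r_max=55, n_peaks=5):
    """Collect several Hough-circle candidates ('peaks') instead of taking
    only the strongest one, then score them and keep the best."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=10, param1=100, param2=20,
                               minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None
    candidates = circles[0][:n_peaks]  # top accumulator peaks, up to 5

    def score(c):
        # Mean intensity inside the candidate circle (simplified scoring).
        mask = np.zeros_like(gray)
        cv2.circle(mask, (int(c[0]), int(c[1])), int(c[2]), 255, -1)
        return cv2.mean(gray, mask=mask)[0]

    return max(candidates, key=score)  # (x, y, r) of the chosen OD circle
```

Keeping several peaks guards against the strongest accumulator vote landing on the optic cup or a bright lesion, which is the failure mode of plain CHT described above.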
3.4.3. Red channel superpixel segmentation for optic disc segmentation

The cropped image is segmented by the Simple Linear Iterative Clustering (SLIC) technique, proposed by [38], whose only required parameter is the region size (Rs), i.e., the number of equally sized superpixels. In this process, the red channel is taken for further processing, with the default parameters. Based on our observations, the red channel gives a better view


compared to the blue and green channels in the process of segmenting the OD using superpixel segmentation. In this work, the region size (Rs) was selected by visualizing the variation between images and the boundary adherence over the range 8-15, and the value was finalized as 15. [5] proposed superpixel segmentation of cropped RGB images with region sizes 5 and 10 and a regularizer factor of 0.2; in our approach, the red channel superpixel segmentation uses Rs = 15 for all eight databases. SLIC creates cluster centers and iteratively updates the center of each superpixel. It uses distance measures to determine the nearest cluster for each pixel, combining the intensity and spatial distance values within a cluster. The spatial distance between two pixels i and j is calculated in Eq. (12):

Dspatial = sqrt( (Si - Sj)² + (Li - Lj)² )   (12)

where i and j are the pixel locality components. The intensity distance between two adjacent pixels is calculated in Eq. (13):

Dintensity = sqrt( (Ii - Ij)² )   (13)

where Ii and Ij are the normalized intensity values of pixels i and j. The total distance measure Dtotal is calculated in Eq. (14):

Dtotal = sqrt( (Dintensity / m)² + (Dspatial / s)² )   (14)

where m is the compactness value and s is the sampling interval; m determines the relative importance of the spatial and intensity distances in the total distance metric. Fig. 8 shows how the red channel image is segmented into small patches with region size 15.
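A brief sketch using scikit-image's SLIC in place of the implementation used in the paper; converting a 15-pixel region size into a segment count, the compactness value, and the file name are our assumptions.

```python
from skimage.segmentation import slic, mark_boundaries
from skimage import io

# Red channel SLIC over a cropped OD image (illustrative, not the paper's code).
cropped = io.imread("cropped_od.png")          # hypothetical file name
red = cropped[:, :, 0]                         # red channel only
h, w = red.shape
n_segments = (h // 15) * (w // 15)             # ~one superpixel per 15x15 block
labels = slic(red, n_segments=n_segments, compactness=0.2,
              channel_axis=None, start_label=1)
overlay = mark_boundaries(cropped, labels)     # visualize superpixel borders
```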

Table 1
Accuracy and processing time per image for optic disc localization.

Database    Total Images   Images Localized   Accuracy (%)   Time/Image (s)
DRIONS      110            110                100            0.98
DRISHTI     101            101                100            1.34
ONHSD       90             90                 100            1.23
HRF         45             45                 100            1.49
INSPIRE     40             40                 100            1.46
CHASE       28             28                 100            1.47
DRIVE       40             40                 100            1.17
MESSIDOR    1200           1193               99.42          1.46

3.5. Performance evaluation

The performance of the proposed localization method was evaluated on all eight databases, and the accuracy was calculated for each database individually as shown in Eq. (15):

Localization Accuracy = ODCI / Total Images   (15)

where ODCI represents the total number of correctly identified ODs. Localization accuracy was calculated against ground truth marked by human experts; manual annotation was performed using an open-source image annotation tool. The performance of the proposed OD segmentation was measured in terms of Sensitivity (SN), Specificity (SC), Accuracy (A), DICE Coefficient (DC), Optic Disc Overlap (ODO) and the per-image completion time for OD detection and segmentation. All these metrics [12] were calculated from the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) pixel classifications. Table 2 shows the performance evaluation of the proposed system for all the databases used here. The four pixel classifications used for the evaluation metrics are:

• TP: pixels labeled OD in both the ground truth and the segmentation.
• TN: pixels labeled non-OD in both the ground truth and the segmentation.
• FP: non-OD pixels segmented as OD pixels.
• FN: OD pixels segmented as non-OD pixels.

a) Sensitivity (SN), the ratio of correctly segmented OD pixels to the total number of ground truth OD pixels, is calculated as shown in Eq. (16):

Sensitivity = TP / (TP + FN)   (16)

b) Specificity (SC), the ratio of correctly segmented non-OD pixels to the total number of ground truth non-OD pixels, is calculated as shown in Eq. (17):

Specificity = TN / (TN + FP)   (17)

c) Accuracy (A), the ratio of correctly segmented OD and non-OD pixels to the total number of pixels, is calculated as shown in Eq. (18):

Accuracy = (TP + TN) / (TP + FP + TN + FN)   (18)

d) DICE Coefficient (DC), twice the intersection of ODS and ODGT divided by the total number of OD pixels in both, is calculated as shown in Eq. (19):

DICE = 2·TP / (2·TP + FN + FP)   (19)

e) Optic Disc Overlap (ODO), the intersection of ODS (the segmented optic disc) and ODGT (the ground truth optic disc) divided by their union, is calculated as shown in Eq. (20):

ODO = TP / (TP + FN + FP)   (20)

A compact sketch of these computations follows.
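The five metrics follow mechanically from the four pixel counts; below is a NumPy sketch (function name ours, masks assumed binary).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Eqs. (16)-(20) from two binary masks (1 = OD pixel, 0 = non-OD)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)      # OD in both masks
    tn = np.count_nonzero(~pred & ~truth)    # non-OD in both masks
    fp = np.count_nonzero(pred & ~truth)     # predicted OD, truly non-OD
    fn = np.count_nonzero(~pred & truth)     # missed OD pixels
    return {
        "sensitivity": tp / (tp + fn),                  # Eq. (16)
        "specificity": tn / (tn + fp),                  # Eq. (17)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),    # Eq. (18)
        "dice": 2 * tp / (2 * tp + fn + fp),            # Eq. (19)
        "overlap": tp / (tp + fn + fp),                 # Eq. (20)
    }
```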

4. Results and discussion

4.1. Experimental results

To evaluate the proposed algorithm, eight publicly available databases, namely DRIONS-DB, DRISHTI-GS1, CHASE-DB1, ONHSD, MESSIDOR, DRIVE, INSPIRE and HRF, were used. These databases contain 1654 fundus images with different sizes, resolutions, color intensities, fields of view, pathologies and other distractions. Table 1 shows the accuracy and time taken for the identification of the OD, along with the total number of test images from each database. Compared to the state-of-the-art methods, our method was tested on more images, with image processing algorithms taking less than two seconds per image across all databases. Localization accuracy was calculated using Eq. (15), and Fig. 9 shows the experimental localization results for all databases. All the localized images, 1647 fundus images in total, were taken forward for segmentation. In our method, localization accuracy directly bounds OD segmentation, and all localized images were segmented with good accuracy. Table 2 shows the performance results for the eight databases in terms of Accuracy, Sensitivity, Specificity, Dice Coefficient and


Fig. 9. Visual results of the proposed method for Optic Disc Localization from all databases. (a) DRIONS Database (600 × 400 resolution) (b) ONHSD Database (760 × 570 resolution) (c) MESSIDOR Database (560 × 372 resolution) (d) DRISHTI-GS1 Database (512 × 438 resolution) (e) HRF Database (876 × 584 resolution) (f) INSPIRE Database (598 × 512 resolution) (g) CHASE-DB1 Database (999 × 960 resolution) (h) DRIVE Database (565 × 584 resolution).


Table 2
Performance evaluation of the proposed system for eight databases.

Database   Total Images   Images Localized   Ground Truth   A (%)    SN (%)   SC (%)   DC (%)   ODO (%)
DRIONS     110            110                Expert 1       99.37    92.49    99.59    89.62    82.17
DRIONS     110            110                Expert 2       99.33    92.65    99.55    88.89    80.84
DRISHTI    101            101                Expert         99.31    95.82    99.43    88.43    80.05
ONHSD      90             90                 Expert         99.64    85.47    99.82    84.48    74.04
HRF        45             45                 Expert 1       99.67    85.21    99.90    88.23    80.18
HRF        45             45                 Expert 2       99.73    84.73    99.95    89.63    81.65
INSPIRE    40             40                 Manual         99.31    80.35    99.84    85.43    75.47
CHASE      28             28                 Manual         99.20    91.26    99.43    85.98    77.11
DRIVE      40             40                 Manual         99.38    90.35    99.52    81.38    71.00
MESSIDOR   1200           1193               Manual         99.72    83.76    99.89    85.42    75.59

Segmentation accuracies of 99.37%, 99.33%, 99.31%, 99.64%, 99.67%, 99.73%, 99.31%, 99.20%, 99.38% and 99.72% were achieved on DRIONS-DB (Expert 1 and Expert 2), DRISHTI, ONHSD, HRF (Expert 1 and Expert 2), INSPIRE, CHASE-DB1, DRIVE and MESSIDOR, respectively. Fig. 10 shows the segmentation results obtained during experimentation together with the ground truth marking.

4.2. Comparison of performance measures with other algorithms

This section compares the performance of the proposed system with benchmark methods in terms of five evaluation measures, Accuracy, Specificity, Sensitivity, Dice Coefficient and Optic Disc Overlap, as well as the time taken per image. The benchmark methods selected for the comparative study are the approaches of [5,10,12,23,28], chosen for their recency and good performance. Table 3 shows the overall performance of the proposed system compared with the state-of-the-art methods in terms of accuracy. From Table 3 it can be observed that the proposed system is highly competitive with the state-of-the-art methods on all databases, with accuracies of 0.994, 0.996, 0.997, 0.994, 0.997, 0.992, 0.993 and 0.993 and specificities of 0.9959, 0.9982, 0.9989, 0.9952, 0.9995, 0.9943, 0.9943 and 0.9984 on the DRIONS, ONHSD, MESSIDOR, DRIVE, HRF, CHASEDB1, DRISHTI and INSPIRE databases, respectively. The proposed system achieved the highest accuracy among all the compared methods on the HRF, CHASEDB1, DRISHTI and INSPIRE databases, with 99.73%, 99.20%, 99.31% and 99.31% accuracy. Table 4 compares the proposed method with the state-of-the-art methods in terms of Accuracy, Sensitivity, Specificity, Dice Coefficient, Optic Disc Overlap and segmentation time on the eight benchmark databases.

Rehman et al. [5] presented a multi-parametric optic disc detection technique with region-based statistical and textural feature extraction for OD segmentation. Their approach was tested on the DRIONS, MESSIDOR and ONHSD databases, and its computation time is very high compared with the proposed approach. The proposed approach achieves higher accuracy and specificity on all three of those databases, and it was additionally tested on eight databases in total, more than any of the state-of-the-art methods mentioned. Zahoor and Fraz [28] presented a circular Hough transform for OD localization and polar transform-based adaptive thresholding for OD segmentation. Their approach was tested on the DRIONS, DIARETDB1, HRF, SHIFA, MESSIDOR, RIM-ONE and DRIVE databases with accuracies of 0.9986, 0.9937, 0.9774, 0.9963, 0.9918, 0.9750 and 0.9980. These results show that the accuracy of their approach is not stable across all the databases tested, and the localization accuracy on those databases was not reported. Our approach was tested on the DRIONS, HRF, MESSIDOR and DRIVE databases with accuracies of 0.9937, 0.9973, 0.9972 and 0.9938, which shows that the proposed system's accuracy is more stable across all the databases used. Fan et al. [23] presented a structured learning technique for OD localization, with an edge map and circular Hough transform used for the OD boundary. Their work was tested on the DRIONS, MESSIDOR and ONHSD databases with accuracies of 0.976, 0.977 and 0.990; computation time and localization accuracy were not reported. Morales et al. [10] proposed a method for OD extraction through mathematical morphology together with principal component analysis. It was tested on DRIONS, MESSIDOR, ONHSD and DRIVE with accuracies of 0.993, 0.995, 0.994 and 0.990; computation time and localization accuracy were again not reported. The proposed system's accuracy is higher on all the databases used in [23] and [10]. Abdullah et al. [12] proposed a robust methodology based on morphological operations, the circular Hough transform and a grow-cut algorithm for OD boundary segmentation; it was applied to the DRIONS, DIARETDB1, CHASEDB1, SHIFA, MESSIDOR and DRIVE databases and obtained accuracies of 0.9549, 0.9772, 0.9579, 0.9793, 0.9989 and 0.9672. Chakravarty and Sivaswamy [43] proposed a novel boundary-based OD and cup extraction on the INSPIRE and DRISHTI databases, but accuracy measures were not reported; in our approach, OD segmentation on INSPIRE and DRISHTI reaches an accuracy of 99.3%. Compared with the other state-of-the-art methods, this paper presents clear results from localization through to OD boundary segmentation, which is helpful for comparative study. The proposed system's accuracy is higher on the DRIONS, CHASEDB1 and DRIVE databases, and its computation time is very low. With our approach, all localized images, normal as well as pathological, are segmented automatically without any loss, indicating a method that is competitive, robust and fast compared with the existing approaches.


Fig. 10. Visual results of Optic Disc segmentation for all databases. Ground truth is represented by the yellow boundary and the proposed method by the green boundary. (a) DRIONS Database (600 × 400 resolution size) (b) ONHSD Database (760 × 570 resolution size) (c) MESSIDOR Database (560 × 372 resolution size) (d) DRISHTI-GS1 Database (512 × 438 resolution size) (e) HRF Database (876 × 584 resolution size) (f) INSPIRE Database (598 × 512 resolution size) (g) CHASE-DB1 Database (999 × 960 resolution size) (h) DRIVE Database (565 × 584 resolution size)


Table 3
Comparison of the proposed method's accuracy with the state-of-the-art methods for all databases. Entries marked "–" are not available.

Author Name          | DRIONS | ONHSD | MESSIDOR | DRIVE | HRF   | CHASEDB1 | DRISHTI | INSPIRE
Rehman et al. [5]    | 0.993  | 0.993 | 0.988    | –     | –     | –        | –       | –
Zahoor and Fraz [28] | 0.998  | –     | 0.991    | 0.998 | 0.977 | –        | –       | –
Fan et al. [23]      | 0.976  | 0.989 | 0.977    | –     | –     | –        | –       | –
Morales et al. [10]  | 0.993  | 0.994 | 0.995    | 0.990 | –     | –        | –       | –
Abdullah et al. [12] | 0.955  | 0.997 | 0.999    | 0.967 | –     | 0.958    | –       | –
Proposed Method      | 0.994  | 0.996 | 0.997    | 0.994 | 0.997 | 0.992    | 0.993   | 0.993

Table 4
Comparison of the proposed method with the state-of-the-art methods in terms of Accuracy (A), Sensitivity (SN), Specificity (SC), Dice Coefficient (DC), Optic Disc Overlap (ODO) and segmentation time on eight benchmark databases. Entries marked "–" are not available.

DRIONS (110 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Rehman et al. [5]    | –    | 0.9930 | 0.9480 | 0.9940 | 0.8990 | 0.8210 | 31.10
Zahoor and Fraz [28] | –    | 0.9986 | 0.9384 | 0.9994 | –      | 0.8862 | 1.60
Fan et al. [23]      | –    | 0.9760 | –      | –      | 0.9137 | 0.8473 | –
Morales et al. [10]  | –    | 0.9934 | –      | –      | 0.9084 | –      | –
Abdullah et al. [12] | 109  | 0.9549 | 0.8508 | 0.9966 | 0.9102 | 0.8510 | 43.20
Proposed Method      | 110  | 0.9937 | 0.9249 | 0.9959 | 0.8962 | 0.8217 | 1.41

MESSIDOR (1200 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Rehman et al. [5]    | –    | 0.9880 | 0.9530 | 0.9880 | 0.8510 | 0.7470 | 31.10
Zahoor and Fraz [28] | –    | 0.9918 | 0.8891 | 0.9973 | –      | 0.8441 | 1.8
Fan et al. [23]      | –    | 0.9770 | –      | –      | 0.9196 | 0.8636 | –
Morales et al. [10]  | –    | 0.9949 | –      | –      | 0.8950 | –      | –
Abdullah et al. [12] | 1191 | 0.9989 | 0.8954 | 0.9995 | 0.9339 | 0.8793 | 71.3
Proposed Method      | 1193 | 0.9972 | –      | 0.9989 | –      | –      | 1.25

ONHSD (90 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Rehman et al. [5]    | –    | 0.9930 | 0.9240 | 0.9950 | 0.8970 | 0.8240 | 31.10
Fan et al. [23]      | –    | 0.9895 | –      | –      | 0.9032 | 0.8346 | –
Morales et al. [10]  | –    | 0.9941 | –      | –      | 0.8867 | –      | –
Abdullah et al. [12] | 90   | 0.9967 | 0.8857 | 0.9992 | 0.9197 | 0.8610 | 65.3
Proposed Method      | 90   | 0.9945 | 0.8636 | 0.9960 | 0.7780 | 0.6663 | 0.86

DRIVE (40 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Zahoor and Fraz [28] | –    | 0.9980 | 0.8309 | 0.9993 | –      | 0.7561 | 1.6
Morales et al. [10]  | –    | 0.9903 | –      | –      | 0.8169 | –      | –
Abdullah et al. [12] | 40   | 0.9672 | 0.8187 | 0.9966 | 0.8720 | 0.7860 | 59.2
Proposed Method      | 40   | 0.9938 | –      | 0.9952 | –      | –      | 0.69

HRF (45 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Zahoor and Fraz [28] | –    | 0.9774 | 0.9233 | 0.9892 | –      | 0.8686 | –
Proposed Method      | 45   | 0.9967 | 0.8456 | 0.9987 | 0.8713 | 0.7941 | 0.75

CHASEDB1 (28 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Abdullah et al. [12] | 28   | 0.9579 | 0.8313 | 0.9971 | 0.9050 | 0.8320 | –
Proposed Method      | 28   | 0.9900 | 0.8516 | 0.9940 | 0.8211 | 0.7282 | 1.81

DRISHTI (101 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Proposed Method      | 101  | 0.9922 | 0.9495 | 0.9934 | 0.8663 | 0.7780 | 1.39

INSPIRE (40 images)
Method               | Segmented images | A      | SN     | SC     | DC     | ODO    | Time/Image (s)
Proposed Method      | 40   | 0.9895 | 0.7317 | 0.9966 | 0.7740 | 0.6850 | 1.05

4.3. Discussion

This paper presents an improved image processing algorithm for OD detection, localization and segmentation. The presented approach relies on image processing algorithms and used the DRIONS, MESSIDOR, ONHSD, DRIVE, CHASEDB1, HRF, INSPIRE and DRISHTI databases for evaluation. All processing was performed on a personal computer with an Intel(R) Core(TM) 1.70 GHz processor and 4 GB RAM. The proposed algorithm was implemented in Matlab R2017b, experimented on the eight publicly available fundus image databases and evaluated against the available ground truth. During testing, the proposed system was first analyzed using the default values of the Matlab functions, and the parameter ranges were then adjusted to match the ground truth results for all the databases. The algorithm was built on Matlab's inbuilt functions, whose parameter ranges were validated from their minimum to maximum values, and the results were noted. Before setting the value of each function parameter, the system needs to account for the wide variety of image properties, such as different image sizes, resolutions, fields of view, low- and high-quality images, and pathological images.
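As an illustration of this minimum-to-maximum validation, the sketch below (in Python rather than the authors' MATLAB; `segment_fn` and the candidate `values` are hypothetical stand-ins) keeps the parameter setting that maximizes the mean Dice coefficient against the ground truth masks.

import numpy as np

def sweep_parameter(images, gts, segment_fn, values):
    """Pick the parameter value with the best mean Dice over a database.

    segment_fn(image, value) -> boolean OD mask; `values` is the assumed
    min-to-max candidate range. This mirrors the validation described
    above, not the authors' exact MATLAB procedure.
    """
    def dice(pred, gt):
        tp = np.sum(pred & gt)
        return 2.0 * tp / (pred.sum() + gt.sum())  # Eq. (19)

    scores = {v: np.mean([dice(segment_fn(im, v), gt)
                          for im, gt in zip(images, gts)])
              for v in values}
    return max(scores, key=scores.get)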


Through our experiments, we found that the chosen parameter values are suitable for automatic OD segmentation on all the images. Each database was tested and evaluated individually with this algorithm and the results were recorded. This makes the proposed approach well suited to glaucoma diagnosis, and extending it with OC segmentation and CDR calculation would yield a complete automatic glaucoma screening system. The proposed work localized 100% of the images in all databases except MESSIDOR, where 1193 of 1200 images were localized; the failures were caused by pathological distractions such as bright intensities, large numbers of hard and soft exudates, hemorrhages and bright lesions. Future work will focus on 100% disc localization on MESSIDOR as well as on further pathological image databases. The proposed system was tested on a large number of retinal fundus images, which makes the comparison with the state-of-the-art methods reasonable, and the improved image processing algorithm gives better segmentation accuracy than the existing techniques. In the pixel density calculation method, both blood vessel and disc pixels are considered, which is unique compared with existing methodologies: previous approaches were mostly based on the shape of the OD, using either the circular Hough transform, blood vessel density, blood vessel thickness or region-based analysis. The improved CHT with circular Hough peak value selection segments the OD accurately once the disc is identified. Rather than relying on this technique alone, the proposed system further checks the selected peak value against a red channel superpixel segmentation; this step acts as a cross-validation that assures better segmentation. This makes the approach suitable for building an automatic diagnosis tool for the health care system. The method is superior to the state-of-the-art methods listed in Table 3 and has low computational complexity throughout the process.
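To make this two-stage check concrete, here is a minimal Python sketch using scikit-image rather than the authors' MATLAB implementation; the radius range, superpixel count and brightness cut-off are illustrative assumptions, not the paper's parameters.

import numpy as np
from skimage import feature, segmentation, transform

def cht_disc_candidates(red, radii, n_peaks=3):
    """Circular Hough transform with Hough-peak selection on the red channel.

    red: 2-D float image (red channel); radii: assumed range of disc radii
    in pixels. Returns candidate circles as (peak_score, row, col, radius).
    """
    edges = feature.canny(red, sigma=2.0)          # disc-boundary edge map
    hspace = transform.hough_circle(edges, radii)  # one vote plane per radius
    scores, cols, rows, rads = transform.hough_circle_peaks(
        hspace, radii, total_num_peaks=n_peaks)
    return sorted(zip(scores, rows, cols, rads), reverse=True)

def passes_red_superpixel_check(rgb, row, col, n_segments=200, top_frac=0.05):
    """Cross-validate a Hough peak against bright red-channel superpixels.

    Accepts a candidate centre only if it lies in one of the brightest
    superpixels by mean red intensity; n_segments and top_frac are
    illustrative settings, not the paper's values.
    """
    labels = segmentation.slic(rgb, n_segments=n_segments, start_label=1)
    red = rgb[..., 0].astype(float)
    means = np.array([red[labels == lab].mean()
                      for lab in range(1, labels.max() + 1)])
    cutoff = np.quantile(means, 1.0 - top_frac)    # brightest ~5 % of regions
    bright = set(np.flatnonzero(means >= cutoff) + 1)
    return labels[int(row), int(col)] in bright

A caller would keep the strongest Hough peak that passes the superpixel check, which is the cross-validation role this step plays in the proposed pipeline.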
5. Conclusion

In this paper, automatic identification and segmentation of the OD is achieved through improved image processing techniques. The proposed algorithm contains four stages, namely image preprocessing, OD localization, OD segmentation and performance evaluation, where each stage includes its own set of processes. Various image processing techniques were adopted to improve OD localization performance and OD segmentation accuracy; the accuracy of OD segmentation plays a major role in the identification of glaucoma. The performance evaluation was done on eight publicly available databases, DRIONS-DB, CHASE-DB1, HRF, DRISHTI-GS1, ONHSD, DRIVE, MESSIDOR and INSPIRE, and competitive results were achieved in terms of Accuracy, Sensitivity, Specificity, DC, ODO and time taken per image. Table 2 shows the results for all the databases after segmentation. The proposed method is robust and requires little computation time per image, which makes it competitive on the benchmark datasets. All eight databases have ground truth marked by human experts or produced with an image annotation tool, which supports the evaluation of the proposed system.

The system addresses the following challenges in retinal disease identification:

• OD localization is a complicated step in automated systems; it is addressed here by the pixel density calculation and achieves a localization accuracy of 99.43% for MESSIDOR and 100% for the remaining databases.


• The proposed method requires less than 2 s to identify the OD in a retinal fundus image, as shown in Table 4.
• Better localization accuracy leads to better segmentation, with high accuracies of 0.994, 0.996, 0.997, 0.994, 0.997, 0.992, 0.993 and 0.993 on the DRIONS, ONHSD, MESSIDOR, DRIVE, HRF, CHASEDB1, DRISHTI and INSPIRE databases.
• Specificities of 0.9959, 0.9982, 0.9989, 0.9952, 0.9995, 0.9943, 0.9943 and 0.9984 were achieved on the above databases.
• Low computation times of 1.41 s, 0.86 s, 1.25 s, 0.69 s, 0.75 s, 1.81 s, 1.39 s and 1.05 s were achieved on the DRIONS, ONHSD, MESSIDOR, DRIVE, HRF, CHASEDB1, DRISHTI and INSPIRE databases.
• The method was tested on a large number of publicly available images, with evaluation against ground truth provided by experts or open-source annotation tools; an overall accuracy of 99.46% and specificity of 99.68% were achieved. This shows that the stability of the proposed system is very high compared with recent approaches.

Future work will extend the proposed approach to glaucoma identification through a classification framework by extracting OD features from each image. The proposed system will also be tested on real-time datasets, and the results will be analyzed against the benchmark datasets.

Acknowledgement

This work was supported by the Centre For Research, Anna University, under the Anna Centenary Research Fellowship, Anna University, Chennai, India (Reference: CFR/ACRF/2018/AR1).

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] N. Shibata, M. Tanito, K. Mitsuhashi, Y. Fujino, M. Matsuura, H. Murata, R. Asaoka, Development of a deep residual learning algorithm to screen for glaucoma from fundus photography, Sci. Rep. 8 (2018) 1–9, http://dx.doi.org/10.1038/s41598-018-33013-w.
[2] M.N. Reza, Automatic detection of optic disc in color fundus retinal images using circle operator, Biomed. Signal Process. Control 45 (2018) 274–283, http://dx.doi.org/10.1016/j.bspc.2018.05.027.
[3] P. Das, S.R. Nirmala, J.P. Medhi, Detection of glaucoma using neuroretinal rim information, 2016 International Conference on Accessibility to Digital World, ICADW 2016 - Proceedings (2017) 181–186, http://dx.doi.org/10.1109/ICADW.2016.7942538.
[4] J. Sivaswamy, S.R.K. Gopal, D. Joshi, M. Jain, U. Syed, A.E. Hospital, Drishti-GS: retinal image dataset for optic nerve head (ONH) segmentation, IIIT Hyderabad, India, 2014, pp. 53–56.
[5] Z.U. Rehman, S.S. Naqvi, T.M. Khan, M. Arsalan, M.A. Khan, M.A. Khalil, Multi-parametric optic disc segmentation using superpixel based feature classification, Expert Syst. Appl. 120 (2019) 461–473, http://dx.doi.org/10.1016/j.eswa.2018.12.008.
[6] U. Balakrishnan, NDC-IVM: an automatic segmentation of optic disc and cup region from medical images for glaucoma detection, J. Innov. Opt. Health Sci. 10 (2017) 1750007, http://dx.doi.org/10.1142/s1793545817500079.
[7] M. Nergiz, M. Akın, A. Yıldız, Ö. Takeş, Automated fuzzy optic disc detection algorithm using branching of vessels and color properties in fundus images, Biocybern. Biomed. Eng. 38 (2018) 850–867, http://dx.doi.org/10.1016/j.bbe.2018.08.003.
[8] S. Chakraborty, A. Mukherjee, D. Chatterjee, P. Maji, S. Acharjee, N. Dey, A semi-automated system for optic nerve head segmentation in digital retinal images, Proceedings - 2014 13th International Conference on Information Technology, ICIT 2014 (2014) 112–117, http://dx.doi.org/10.1109/ICIT.2014.51.
[9] P. Das, S.R. Nirmala, J.P. Medhi, Diagnosis of glaucoma using CDR and NRR area in retina images, Netw. Model. Anal. Health Inform. Bioinform. 5 (2016), http://dx.doi.org/10.1007/s13721-015-0110-5.


[10] S. Morales, V. Naranjo, U. Angulo, M. Alcaniz, Automatic detection of optic disc based on PCA and mathematical morphology, IEEE Trans. Med. Imaging 32 (2013) 786–796, http://dx.doi.org/10.1109/TMI.2013.2238244.
[11] B. Dai, X. Wu, W. Bu, Optic disc segmentation based on variational model with multiple energies, Pattern Recognit. 64 (2017) 226–235, http://dx.doi.org/10.1016/j.patcog.2016.11.017.
[12] M. Abdullah, M.M. Fraz, S.A. Barman, Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm, PeerJ 4 (2016) e2003, http://dx.doi.org/10.7717/peerj.2003.
[13] B. Gui, R.J. Shuai, P. Chen, Optic disc localization algorithm based on improved corner detection, Procedia Comput. Sci. 131 (2018) 311–319, http://dx.doi.org/10.1016/j.procs.2018.04.169.
[14] L.A. Muhammed, Localizing optic disc in retinal image automatically with entropy based algorithm, Int. J. Biomed. Imaging 2018 (2018) 1–7, http://dx.doi.org/10.1155/2018/2815163.
[15] J.R.H. Kumar, S. Sachi, K. Chaudhury, S. Harsha, B.K. Singh, A unified approach for detection of diagnostically significant regions-of-interest in retinal fundus images, IEEE Region 10 Annual International Conference, Proceedings/TENCON (2017) 19–24, http://dx.doi.org/10.1109/TENCON.2017.8227829.
[16] G. Zhang, Y. Wan, A new optic disc detection method in retinal images based on concentric circular mask, Proceedings of 2016 5th International Conference on Computer Science and Network Technology, ICCSNT 2016 (2017) 790–794, http://dx.doi.org/10.1109/ICCSNT.2016.8070267.
[17] T. Devasia, P. Jacob, T. Thomas, Automatic extraction and localisation of optic disc in colour fundus images, PeerJ 12 (2015).
[18] T. Devasia, P. Jacob, T. Thomas, Automatic optic disc boundary extraction from color fundus images, Int. J. Adv. Comput. Sci. Appl. 5 (2014) 117–124, http://dx.doi.org/10.14569/ijacsa.2014.050718.
[19] S. Bharkad, Automatic segmentation of optic disk in retinal images, Biomed. Signal Process. Control 31 (2017) 483–498, http://dx.doi.org/10.1016/j.bspc.2016.09.009.
[20] H.A. Nugroho, L. Listyalina, N.A. Setiawan, S. Wibirama, D.A. Dharmawan, Automated segmentation of optic disc area using mathematical morphology and active contour, Proceeding - 2015 International Conference on Computer, Control, Informatics and Its Applications: Emerging Trends in the Era of Internet of Things, IC3INA 2015 (2016) 18–22, http://dx.doi.org/10.1109/IC3INA.2015.7377739.
[21] A. Li, Z. Niu, J. Cheng, F. Yin, D.W.K. Wong, S. Yan, J. Liu, Learning supervised descent directions for optic disc segmentation, Neurocomputing 275 (2018) 350–357, http://dx.doi.org/10.1016/j.neucom.2017.08.033.
[22] R. Kamble, M. Kokare, G. Deshmukh, F.A. Hussin, F. Mériaudeau, Localization of optic disc and fovea in retinal images using intensity based line scanning analysis, Comput. Biol. Med. 87 (2017) 382–396, http://dx.doi.org/10.1016/j.compbiomed.2017.04.016.
[23] Z. Fan, Y. Rong, X. Cai, J. Lu, W. Li, H. Lin, X. Chen, Optic disk detection in fundus image based on structured learning, IEEE J. Biomed. Health Inform. 22 (2018) 224–234, http://dx.doi.org/10.1109/JBHI.2017.2723678.
[24] S. Aruchamy, P. Bhattacharjee, G. Sanyal, Automated glaucoma screening in retinal fundus images, Int. J. Multimed. Ubiquitous Eng. 10 (2015) 129–136, http://dx.doi.org/10.14257/ijmue.2015.10.9.14.
[25] A.M.N. Allam, A.A.H. Youssif, A.Z. Ghalwash, Optic disc segmentation by weighting the vessels density within the strongest candidates, in: Proceedings of 2016 SAI Computing Conference, SAI, 2016, pp. 91–99, http://dx.doi.org/10.1109/SAI.2016.7555967.

[26] R. Panda, N.B. Puhan, G. Panda, Robust and accurate optic disk localization using vessel symmetry line measure in fundus images, Biocybern. Biomed. Eng. 37 (2017) 466–476, http://dx.doi.org/10.1016/j.bbe.2017.05.008.
[27] F. Abdali-Mohammadi, A. Poorshamam, Automatic optic disc center and boundary detection in color fundus images, Journal of AI and Data Mining 6 (2018) 35–46.
[28] M.N. Zahoor, M.M. Fraz, Fast optic disc segmentation in retina using polar transform, IEEE Access 5 (2017) 12293–12300, http://dx.doi.org/10.1109/ACCESS.2017.2723320.
[29] A.S. Abdullah, Y.E. Özok, J. Rahebi, A novel method for retinal optic disc detection using bat meta-heuristic algorithm, Med. Biol. Eng. Comput. 56 (2018) 2015–2024, http://dx.doi.org/10.1007/s11517-018-1840-1.
[30] X. Wu, B. Dai, W. Bu, Optic disc localization using directional models, IEEE Trans. Image Process. 25 (2016) 4433–4442, http://dx.doi.org/10.1109/TIP.2016.2590838.
[31] T. Devasia, P. Jacob, T. Thomas, Automatic optic disc localization and segmentation using swarm intelligence, World Comput. Sci. Inf. Technol. J. 5 (2015) 92–97.
[32] T. Devasia, P. Jacob, T. Thomas, Automatic computation of CDR using fuzzy clustering techniques, 2015, pp. 27–40.
[33] K. Pradhepa, S. Karkuzhali, D. Manimegalai, Segmentation and localization of optic disc using feature match and medial axis detection in retinal images, Biomed. Pharmacol. J. 8 (2015) 391–397, http://dx.doi.org/10.13005/bpj/626.
[34] M. de L. Claro, L. de M. Santos, W. Lima e Silva, F.H.D. de Araújo, N.H. de Moura, A.M. Santana, Automatic glaucoma detection based on optic disc segmentation and texture feature extraction, CLEI Electron. J. 19 (2016) 4:1–4:10, http://dx.doi.org/10.19153/cleiej.19.2.4.
[35] A. Sevastopolsky, Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network, arXiv 27 (2017) 618–624, http://dx.doi.org/10.1134/S1054661817030269.
[36] B. Al-Bander, B.M. Williams, W. Al-Nuaimy, M.A. Al-Taee, H. Pratt, Y. Zheng, Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis, Symmetry 10 (2018), http://dx.doi.org/10.3390/sym10040087.
[37] F. Calimeri, A. Marzullo, C. Stamile, G. Terracina, Optic disc detection using fine tuned convolutional neural networks, Proceedings - 12th International Conference on Signal Image Technology and Internet-Based Systems, SITIS 2016 (2017) 69–75, http://dx.doi.org/10.1109/SITIS.2016.20.
[38] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, S. Süsstrunk, SLIC superpixels compared to state-of-the-art superpixel methods, IEEE Trans. Pattern Anal. Mach. Intell. 34 (2012) 2274–2281, http://dx.doi.org/10.1109/TPAMI.2012.120.
[39] Z. Fan, F. Li, Y. Rong, W. Li, X. Cai, H. Lin, Detecting optic disk based on AdaBoost and active geometric shape model, 2015 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, IEEE-CYBER 2015 (2015) 311–316, http://dx.doi.org/10.1109/CYBER.2015.7287954.
[40] K. Zuiderveld, Contrast limited adaptive histogram equalization, 1994, pp. 474–485.
[41] R.C. Gonzalez, R.E. Woods, Digital Image Processing, 2006.
[42] A. Poorshamam, Automatic optic disc localization in color retinal fundus images, Adv. Comput. Sci. Technol. 11 (2018) 1–13.
[43] A. Chakravarty, J. Sivaswamy, Joint optic disc and cup boundary extraction from monocular fundus images, Comput. Methods Programs Biomed. 147 (2017) 51–61, http://dx.doi.org/10.1016/j.cmpb.2017.06.004.