Computational Methods for Automated Analysis of Corneal Nerve Images: Lessons Learned from Retinal Fundus Image Analysis

Abstract

Corneal and retinal imaging provide a descriptive view of the nerve and vessel structures inside the human eye in a non-invasive manner, which aids the identification and diagnosis of ocular and other diseases. However, analyzing these images is laborious and requires domain expertise, so there is a dire need for process automation. Although a large body of literature is available on automated analysis of retinal images, research on corneal nerve image analysis has lagged for several reasons. In this article, we cover recent research trends in automated analysis of corneal and retinal images, highlighting the requirements for automating corneal nerve image analysis and the possible reasons impeding its research progress. We also present a comparative analysis of segmentation algorithms versus their processing power, derived from the studies included in the survey. Given the advances in retinal image analysis and the implicit similarities between retinal and corneal images, we extract lessons from the former and suggest ways to apply them to the latter, presented as future research directions for automatic detection of neuropathy using corneal nerve images. We believe this article will be informative for computer scientists and medical professionals alike: the former will learn about the research problems waiting to be addressed in the field, while the latter will see what is required from them to facilitate computer scientists in finding effective solutions to these problems.

Keywords

Corneal confocal microscopy, retinal fundoscopy, automated image analysis, deep learning, neuropathy

1. Introduction

Imaging various parts of the eye is a highly technical process that yields informative insights. It has transformed the way physicians deal with visual impairment and eye diseases. Imaging the layers of the retina and cornea provides a non-invasive facility for prompt diagnosis, treatment planning and monitoring of changes in eye structure. There are several ways of acquiring images from the human eye. Optical coherence tomography (OCT) uses infrared light and provides a cross-sectional view of the layered retina. Fig 1(a) shows a retinal scan obtained using OCT (image source: https://www.octscans.com/). Fundus photography images the entire retina at the back of the eye. These images help identify alterations in the retinal blood vessels occurring due to diabetic retinopathy, dry eye disease, macular degeneration, etc. Fig 1(b) shows a full retinal scan obtained using fundoscopy (image source: https://en.wikipedia.org/wiki/Fundus_photography). Corneal confocal microscopy (CCM) captures images from the different layers of the human cornea. This laser microscope provides a magnified and detailed view of the structures in the cornea. Various eye diseases can be identified and monitored by observing the changes occurring in different layers through these images. Notably, images of the subbasal nerve plexus serve as a tool for very early diagnosis of neuropathy due to diabetes (1) and other chronic diseases (2,3). Fig 1(c) shows a sample image obtained from the subbasal nerve plexus of the human cornea.
Fig 1. (a) Retinal scan obtained using OCT; (b) retinal scan obtained using fundoscopy; (c) corneal nerve image obtained using CCM.

The analysis following the imaging process is a tedious and time-consuming task requiring expertise in vision sciences. Therefore, computational methods have become a necessity to automate image analysis and simplify disease identification from these images. Computer scientists have addressed two common research problems on retinal fundus images and CCM nerve images. The first is to segment the vessels/nerves and compute the required features automatically; this has traditionally been solved with various image processing algorithms, while the trend has recently moved towards deep convolutional neural networks. The second is to classify the images as healthy or diseased, and then further into different categories of disease, if applicable.

In this paper, we provide a focused review of state-of-the-art image segmentation and classification techniques applied to retinal fundus images and corneal nerve images. Given the advances in retinal image analysis and the implicit similarities between retinal and corneal images, we extract lessons from the former and suggest ways to apply them to the latter. To the best of our knowledge, this is the first article that (a) reviews the recent use of computational techniques on corneal nerve images alongside retinal fundus images, (b) compares the segmentation algorithms' performance and the processing power used in each study, and (c) provides relevant future research directions in the area of automatic neuropathy detection through corneal image analysis. Previously published reviews are either outdated or provide a broader view of automated medical image analysis techniques (4). Moreover, the knowledge base lacks an article that outlines techniques learned from retinal image analysis to be applied to corneal image analysis for research and development. Furthermore, this article will be informative for computer scientists and medical professionals alike: the former will learn about the different research problems waiting to be addressed in the field, while the latter will see what is required from them to facilitate computer scientists in finding effective solutions to these problems. To the extent of our research, previous articles have not bridged this gap.

This paper is organized as follows. In the next two sections, we provide an overview of the current state-of-the-art segmentation and classification techniques, respectively, available in the literature for corneal and retinal images. In section 4, we provide a preview of our proposed method and how it addresses some limitations of previous works. In section 5, a comparative analysis of the segmentation algorithms' performance and processing power is conducted for the studies included in the survey. In section 6, we present our thoughts on corneal image analysis and how progress can be improved and accelerated by observing the highly similar research area of retinal image analysis. Finally, section 7 concludes the paper.

2. Image Processing Techniques for Segmentation

As CCM progresses towards establishing itself as a biomarker for the detection of preliminary neuropathy, the need for systems facilitating completely automatic image analysis, neuropathy severity prediction and disease prediction is becoming inevitable. The establishment and recognition of new and reliable standards for nerve measurements can be achieved through correct segmentation and quantification of nerves in reasonable time. The problem of nerve segmentation in corneal images resembles vessel segmentation in fundus images, vascular structure segmentation in angiographic images, vein segmentation in leaf images, and fingerprint segmentation in fingerprint images. These problems all involve the identification of curvilinear structures; consequently, techniques proposed for one of them can be exploited to solve another. In the following subsections, we review the existing literature on image processing techniques, mainly for corneal nerve segmentation and retinal vessel segmentation.

2.1. Region of Interest Expansion

A study of the related literature reveals several research groups that have developed successful methods for automating the segmentation of corneal subbasal images. Ruggeri et al. (5) proposed a nerve recognition and tracing method for corneal images. Their algorithm was a modification of one originally devised for vessel segmentation in retinal images (6). The images first undergo a denoising process to enhance image contrast. This is followed by a nerve tracking process based on region growing, which extracts seed points from the image using a uniformly spaced grid of 10 pixels. The region of interest (ROI) is then expanded by matching neighboring pixels according to a preset pixel intensity similarity threshold. When a nerve intersection is encountered, a technique called bubble analysis is applied.
The bubble analysis algorithm identifies nerve pixels occurring in the path of concentric circles drawn from the point of intersection. The next step is to classify pixels as belonging to the nerve or to the background using fuzzy c-means clustering. The tracking process may terminate prematurely, resulting in discontinuous nerve segments; this is addressed by a nerve continuity algorithm which connects disjoint nerve segments.
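To make the ROI-expansion step concrete, the following is a minimal sketch of intensity-based region growing from grid-spaced seed points. The function name, the seed-acceptance rule and the similarity tolerance are illustrative assumptions, not the exact criteria or values used in (5).

```python
import numpy as np
from collections import deque

def trace_nerves(img, grid_step=10, intensity_tol=0.08):
    """Grow nerve regions from uniformly spaced seed points.

    img: 2-D float array in [0, 1] (contrast-enhanced CCM frame).
    A seed is accepted if it is brighter than the global mean plus one
    standard deviation; neighbours join a region while their intensity
    stays within `intensity_tol` of the seed intensity. Both rules are
    illustrative, not the thresholds of the original paper.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seeds = [(y, x) for y in range(0, h, grid_step)
                    for x in range(0, w, grid_step)
             if img[y, x] > img.mean() + img.std()]
    for sy, sx in seeds:
        if mask[sy, sx]:
            continue
        ref = img[sy, sx]
        queue = deque([(sy, sx)])
        while queue:                       # breadth-first region growing
            y, x = queue.popleft()
            if mask[y, x] or abs(img[y, x] - ref) > intensity_tol:
                continue
            mask[y, x] = True
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    queue.append((ny, nx))
    return mask
```

Bubble analysis at intersections and the nerve continuity step would then operate on the resulting mask.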
In order to assess the performance of the algorithm, 12 images of the corneal subbasal nerve plexus, obtained via a slit-lamp scanning CCM, were segmented using the proposed algorithm. The segmentation showed a tendency to generate false positives due to the presence of cells in the image background. However, this problem may have arisen only because the images were obtained via slit-lamp CCM; state-of-the-art laser scanning CCM eliminates much of the image artefacts and produces images with better resolution. The time required for the segmentation of each image was approximately 4-5 minutes. Scarpa et al. (7) used a similar algorithm but introduced a Gabor filter prior to the nerve tracking procedure. They evaluated the algorithm on a set of 90 images taken from the corneas of control subjects and patients; the technique correctly identified more than 80% of the nerve length as compared to manual segmentation. Poletti and Ruggeri (8) further improved the nerve tracking algorithm of (5) by identifying seed points through multiple orientations of lines. Nerves were then segmented from the background by connecting seed points using Dijkstra's shortest path algorithm. The proposed algorithm was tested on a set of 30 CCM images, and single-image processing time was reduced from 4 minutes to 25 seconds.

2.2. Gabor Wavelets

A significant contribution in this domain is by Dabbah et al. (9). Their dual-model algorithm for nerve fiber detection was a pioneering attempt to address the problem. The dual model consists of a 2D Gabor wavelet filter, which enhances the nerve fibers for identification, and a Gaussian envelope for background noise removal. Nerve fiber orientation is estimated by calculating distances through the least mean squares (LMS) algorithm, after which the image is passed through a low-pass Gaussian filter that diminishes the background texture. They presented a comparative analysis of the Gabor method against a previously implemented Linop method (a linear operator originally used for detecting asbestos fibers in mammograms (10)) on a small dataset of 12 images. Results showed improved performance by the Gabor method and a lower equal error rate (EER) (11). However, experiments conducted on such a small dataset cannot establish how the algorithm generalizes at scale. Wang et al. (12) extracted 100 features using Gabor filters, matched filters, gray-level features, Frangi filters and difference-of-Gaussian kernels. Using asymmetric principal component analysis (PCA) to select the most contributing features, a cascade classification network was trained on the pixel features. The proposed method showed comparable performance with contemporary approaches, with a 2% accuracy gain over previous approaches.
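In the spirit of this subsection (though not the authors' exact formulation), the Gabor-enhancement idea can be sketched as a bank of oriented kernels whose maximum response is kept per pixel, with a broad Gaussian subtracted to suppress background texture. All kernel parameters below are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_enhance(img, n_orient=8, ksize=21, sigma=4.0,
                  lambd=10.0, gamma=0.5):
    """Enhance curvilinear structures with a bank of oriented Gabor
    kernels; parameters are illustrative, not those used in (9)."""
    img = img.astype(np.float32)
    response = np.zeros_like(img)
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi=0)
        kernel -= kernel.mean()          # zero-DC: flat regions give ~0
        filtered = cv2.filter2D(img, cv2.CV_32F, kernel)
        response = np.maximum(response, filtered)
    # Subtract a heavily smoothed copy, loosely echoing the dual-model
    # idea of a nerve response minus a Gaussian background estimate.
    background = cv2.GaussianBlur(img, (0, 0), sigmaX=15)
    return response - background
```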
2.3. Morphological Operations

Al-Fahdawi et al. (13) designed a fully automatic system for nerve quantification based on morphological operations. The first stage is nerve contrast enhancement, which uses a combination of coherence and Gaussian filters for background noise reduction. Then, morphological dilation, erosion, opening and closing are applied to remove stray and unwanted segments from the nerve structure. This is followed by Canny edge detection to identify the nerve fibers; the Canny detector was chosen because it is known for accurately filtering multiple responses from a single edge. The work also presents a new algorithm for linking gaps in the fibers, which works as follows. The image is converted to its skeleton, and the endpoints of each segment are identified and marked with a binary circle. Based on the distance between two disconnected endpoints, a straight line connects the disjoint endpoints. Finally, the image is thinned again, which shrinks the circles to a line. The segmented binary image is then used to extract clinical features, namely nerve tortuosity, length, density and thickness. The performance of the proposed system was evaluated on two datasets of 498 and 919 images respectively, captured with a laser CCM; single-image processing time is approximately 7 seconds. They used the non-conventional evaluation metrics of structural similarity index (SSIM), variation of information (VOI) and probabilistic rand index (PRI). These metrics have not been used elsewhere to evaluate nerve segmentation, and the method lacks a comparison with other methods in the literature.
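A minimal sketch of this clean-up and gap-linking idea follows. It joins nearby skeleton endpoints with straight lines rather than using the binary circles and re-thinning of (13), and the size and gap thresholds are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import remove_small_objects, skeletonize
from skimage.draw import line

def link_gaps(binary, min_size=50, max_gap=15):
    """Clean a binary nerve map and bridge small gaps by joining
    nearby skeleton endpoints. `min_size` (pixels) and `max_gap`
    (pixels) are illustrative thresholds."""
    clean = remove_small_objects(binary.astype(bool), min_size=min_size)
    skel = skeletonize(clean)
    # An endpoint is a skeleton pixel with exactly one skeleton
    # neighbour: 3x3 sum equals 2 (itself plus one neighbour).
    neighbours = convolve(skel.astype(int), np.ones((3, 3)),
                          mode='constant')
    ends = np.argwhere(skel & (neighbours == 2))
    for i in range(len(ends)):
        for j in range(i + 1, len(ends)):
            (y0, x0), (y1, x1) = ends[i], ends[j]
            if np.hypot(y1 - y0, x1 - x0) <= max_gap:
                rr, cc = line(y0, x0, y1, x1)   # bridge the gap
                skel[rr, cc] = True
    return skel
```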
More recently, along similar lines, Hosseinaee et al. (14) proposed a segmentation procedure for volumetric corneal nerve images acquired from OCT, based primarily on morphological operations. They follow an approach similar to (13), but edge-enhancing diffusion is used instead of the Canny edge detector, followed by Hessian-based multiscale filtering to enhance the nerve structures. A series of morphological operations is then carried out, including thresholding, opening and erosion; objects with fewer than 100 pixels are eliminated, and blobs are used to connect disjoint nerves. Similarly, a fuzzy morphological top-hat method for vessel segmentation in real time is presented in (15). This method was useful in recovering vessels that were darker than their neighboring pixels. A fuzzy erosion technique is also applied, which helps remove noise at the edges, and hysteresis thresholding is employed for conversion to a binary image.

2.4. Non-Deep Learning Techniques

Pixel classification for CCM images has also been approached through supervised machine learning techniques. In this formulation, each pixel is considered a sample with a corresponding label of nerve/non-nerve (for nerve images) or vessel/non-vessel (for retinal images) and a feature vector extracted from pixel-based information, turning segmentation into a simple binary classification problem. A multiscale enhancement of the dual model in (9) is presented in (16), which classifies pixels as nerve or non-nerve by training neural network or random forest classifiers. The evaluation of the trained model resulted in the best sensitivity and specificity at an EER of 15.44%. The software, named ACCMetrics, is available upon request. Guimaraes et al. (17) proposed a nerve segmentation system combining morphological operations and machine learning for nerve classification. First, they used top-hat filtering for enhancing the image contrast, computed by subtracting the morphological opening of the image from the image itself. This is followed by a nerve enhancement procedure using log-Gabor filters, and then nerve segmentation through hysteresis thresholding. This yields a number of candidate nerve segments which may be true or false positives; to differentiate between them, they trained a Support Vector Machine (SVM) with a radial basis kernel function. They tested their approach by classifying pixels of 246 CCM images using the SVM, achieving an average sensitivity of 88%.
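The pixel-as-sample formulation described above can be sketched as follows; the particular descriptors and the random forest classifier are illustrative stand-ins, since the exact feature sets vary between the surveyed studies.

```python
import numpy as np
from skimage.filters import gabor, sobel
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack a few per-pixel descriptors: raw intensity, multi-scale
    Gaussian smoothing, gradient magnitude and two Gabor responses."""
    feats = [img,
             gaussian_filter(img, 1), gaussian_filter(img, 3),
             sobel(img),
             gabor(img, frequency=0.1, theta=0)[0],
             gabor(img, frequency=0.1, theta=np.pi / 2)[0]]
    return np.stack([f.ravel() for f in feats], axis=1)

def train_pixel_classifier(images, masks):
    """images: list of 2-D arrays; masks: matching binary ground truth."""
    X = np.vstack([pixel_features(im) for im in images])
    y = np.concatenate([m.ravel() for m in masks])
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X, y)
    return clf

# Prediction reshapes the per-pixel labels back into a segmentation map:
# seg = clf.predict(pixel_features(test_img)).reshape(test_img.shape)
```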
For retinal fundus scans, Rani et al. (18) proposed a vessel segmentation approach using matched filter designs. They selected the green channel of the images for segmentation, and the preprocessing stage applies contrast limited adaptive histogram equalization. A Gaussian-based matched filter then detects vessel structures in the image. Because this also extracts non-vessel structures, pixel-based features from each connected component are fed to an SVM and a TreeBagger to classify components as vessel or non-vessel. They reported an accuracy of 95% on the STARE database and 94% on the DRIVE database. An unsupervised segmentation algorithm based on independent component analysis (ICA) is presented in (19), where morphological operations are performed as a preprocessing step to remove uneven image contrast. This is followed by ICA enhancement of each output channel, and the images are converted to grayscale using a PCA-based method. To further enhance the images, an anisotropic oriented diffusion filter is applied; the final binary image is created using a flood-fill reconstruction method. The method achieved accuracies of 95.3% and 96.7% on DRIVE and STARE respectively, but a very low sensitivity compared to other methods. A similar approach was followed in (20), where preprocessing was performed using morphological operations. After preprocessing, a set of 13 features is extracted from each pixel, consisting of intensity values and half-wave rectified Gabor responses. Applying PCA on these features, a new feature set is generated, and pixels are clustered into two classes (vessel, non-vessel) using KNN; the non-vessel pixels then undergo root guided decision tree classification. The method takes a long time to process the images and produces low sensitivity.

2.5. Deep Neural Networks / Convolutional Neural Networks

A plethora of studies in the literature use CNNs for image segmentation. Fu et al. (21) used a deep learning architecture for retinal vessel segmentation, addressing the problem as a boundary detection task. They proposed a CNN for learning the vessel features, followed by a fully connected conditional random field (CRF) for converting the coarse probability map to binary. Relating the detection of curvilinear segments to boundary detection, they based their implementation on the holistically-nested edge detection (HED) technique proposed elsewhere (22). Their architecture consists of 5 stages, each comprising several convolutional and ReLU layers, with the last convolutional layer in each stage linked to side output layers. Two datasets were used for evaluation, yielding sensitivities of 73% and 71% on DRIVE and STARE respectively. Although the method did not produce extraordinary results compared with other techniques, it did succeed in reducing false positives in the optic disc and pathological regions. Inspired by (21), Hu et al. (23) proposed a CNN followed by a CRF for retinal vessel segmentation. Their architecture combines a fully convolutional neural network with deeply supervised nets for edge detection; the CNN is adapted from VGG-16. They reported accuracies of 95% and 96% on DRIVE and STARE respectively, with a single-image execution time of 1.1 s. The concept of using U-Net (24) for corneal nerve segmentation was introduced by Colonna et al. (25). U-Net is a convolutional neural network intentionally designed for the segmentation of biomedical images (24). It has successfully been applied to the segmentation of brain tumors in MRI scans (26), generating synthetic images by mapping retinal vessel trees to retinal fundus scans (27), cell membrane segmentation in electron microscopy images (22), and many others.
Colonna et al. (25) trained a U-Net based CNN on CCM nerve images obtained from healthy subjects and diabetic patients. In the preprocessing stage, the images were cropped by 10 pixels from the edges and resized to the input size required by U-Net using bicubic interpolation. Training was performed on 8909 images, of which 30% were used for validation only, and the network was trained for 6 epochs. Instead of using manual tracings as ground truth, they obtained segmented images by applying a previously proposed segmentation algorithm. They evaluated the trained model on 30 test images against manual tracings, allowing a tolerance of 3 pixels for slight shifts in nerve position. Results revealed a sensitivity of 97% and a false detection rate of 18%. However, with so few test images, the ability of the model to generalize cannot be determined. In a previous publication (28), we compared the segmentation output of U-Net and ACCMetrics (MA Dabbah, Imaging Science and Biomedical Engineering, Manchester, UK) on corneal images; our findings show that U-Net significantly reduces the false positive rate, providing better results than ACCMetrics. Recently, U-Net was also used by Williams et al. (29) for corneal nerve segmentation; in addition to a single U-Net, they evaluated an approach where a majority vote from five parallel U-Net models determined the final decision. Moreover, Son et al. (30) employed U-Net as part of a generative adversarial network (GAN) to segment vessels from retinal fundoscopy images. To obtain sharp segmented structures, they experimented with various discriminators; they reduced false positives, with some trade-off in the number of false negatives, and reported a precision and recall of 91% on the DRIVE and STARE datasets. Likewise, Maninis et al. (31) also addressed retinal image segmentation using a CNN, combining the architectures of VGG-18 and Inception V3: the fully connected layers at the end of the VGG-18 architecture were removed, specialized convolutional layers were added to the end of each stage, and the final result is formed by adding another convolutional layer to the end of the model. Zhu et al. (32) proposed a method based on extreme learning machines (ELM), where features are first extracted for each pixel and the feature vector is fed to the ELM classifier for training. These features include local and morphological features, phase congruency, and the Hessian and divergence of vector fields. The achieved sensitivity and specificity (71%, 98%) did not outperform the best in the literature, but the approach is suitable for real-time processing.
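Because U-Net recurs throughout this section, a compact sketch of the architecture is given below. It is a reduced version for illustration only: the original (24) uses more levels and feature channels, and none of the surveyed studies necessarily used these exact hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def small_unet(input_shape=(384, 384, 1), base=16, depth=3):
    """A reduced U-Net: `depth` encoder/decoder levels with skip
    connections and a sigmoid map for nerve-vs-background."""
    inp = tf.keras.Input(shape=input_shape)
    x, skips = inp, []
    for d in range(depth):                        # contracting path
        x = conv_block(x, base * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 2 ** depth)          # bottleneck
    for d in reversed(range(depth)):              # expanding path
        x = layers.Conv2DTranspose(base * 2 ** d, 2, strides=2,
                                   padding='same')(x)
        x = layers.Concatenate()([x, skips[d]])   # skip connection
        x = conv_block(x, base * 2 ** d)
    out = layers.Conv2D(1, 1, activation='sigmoid')(x)
    return Model(inp, out)

model = small_unet()
model.compile(optimizer='adam', loss='binary_crossentropy')
```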
3. Classification Techniques

Convolutional neural networks have recently gained distinguished popularity among data scientists and have been proven to solve a multitude of image-based problems, including segmentation, localization and classification. Especially with the introduction of transfer learning using pretrained networks, such as GoogLeNet (33) and AlexNet (34), a large amount of training data ceases to be necessary; for this reason, pretrained CNNs have become the quintessence of deep learning. To the extent of our knowledge, there has been a scarcity of work on neuropathy classification of corneal images using artificial intelligence. Therefore, since corneal nerve images hold an apparent visual and conceptual similarity to retinal fundoscopy images, the following subsections explore the literature on retinopathy classification techniques applied to retinal fundus scans, in addition to the meagre literature on classification of corneal nerve images. Several approaches in the literature classify retinal images by transfer learning and fine-tuning pretrained CNNs; others tackle the problem with custom-designed deep learning models.

3.1. Transfer Learning and Parameter Tuning
The literature shows much evidence of the successful use of pretrained models through transfer learning and fine-tuning. Pretrained CNNs have become widespread due to their ease of use; moreover, they do not require millions of training samples to produce a well-trained model, as is the case for networks trained from random initialization. Gulshan et al. (35) used transfer learning for a detailed, multi-aspect classification of retinal fundus photographs. They classified images (a) as having referable diabetic retinopathy or not (mydriatic, non-mydriatic or both), (b) into subtypes of diabetic retinopathy (moderate or worse only, severe or worse only, diabetic macular edema only), and (c) as gradable or non-gradable. They fine-tuned a pretrained Inception-v3 model, retraining it on retinal scans starting from the pretrained weights. They assessed the approach on 2 datasets and reported a sensitivity of approximately 90% for each classification type. Similarly, Choi et al. (36) employed transfer learning and fine-tuned VGG-19 and AlexNet architectures for classifying normal retinal images and 9 types of retinal pathologies. Their 10-class classification resulted in an accuracy of 58%. They compared this approach with commonly used machine learning classifiers (SVM, random forest, ANN, etc.) and found that the CNN transfer learning method outperformed them. Moreover, even with deep learning, accuracy decreases as classification moves from binary to multiclass and the number of classes grows. Lam et al. (37) performed multiclass classification of retinopathy severity levels from retinal fundus scans. They evaluated different optimizers, learning rates, gradient update policies and dropout levels for fine-tuning pretrained GoogLeNet and AlexNet architectures. Experiments were conducted on the publicly available Kaggle dataset of 35000 retinal images categorized into levels of retinopathy (class labels: normal, mild, moderate, severe, end stage) and on the Messidor dataset (class labels: normal, mild, moderate, severe). They reported a best accuracy of 74% on Messidor, obtained when the augmented dataset was used for training, and an accuracy of 84% on the Kaggle dataset. Burlina et al. (38) employed pretrained Overfeat for feature extraction in retinal fundus scan classification. Overfeat (39) is a pretrained model, similar to GoogLeNet, trained on ImageNet images; it implements a multiscale, sliding-window approach inside a convolutional network and combines multiple localized predictions instead of focusing on the background. The problem was cast as four stages of retinopathy, subdivided into multiple binary classifications: stages 1,2 vs 3,4; stages 1,2 vs 3; stage 1 vs 3; and stage 1 vs 3,4. Features extracted from Overfeat were fed to a linear SVM for classification, achieving an accuracy greater than 90% for all stages. Continuing their experiments, Burlina et al. (40) compared the previous approach with an AlexNet trained from scratch; experimental results on the AREDS fundus image data showed that the AlexNet model (accuracy 91%) outperformed Overfeat (accuracy 84%).
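Across these studies the fine-tuning recipe is broadly similar. A minimal two-stage sketch using Keras is given below; the InceptionV3 backbone, the 5-class head and the learning rates are illustrative assumptions rather than the configuration of any particular study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

# Start from ImageNet weights, replace the classification head, and
# later unfreeze the backbone for end-to-end fine-tuning.
backbone = InceptionV3(weights='imagenet', include_top=False,
                       input_shape=(299, 299, 3), pooling='avg')
backbone.trainable = False                 # stage 1: train the head only

x = layers.Dropout(0.2)(backbone.output)
out = layers.Dense(5, activation='softmax')(x)   # e.g. 5 severity grades
model = Model(backbone.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

backbone.trainable = True                  # stage 2: fine-tune everything
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # smaller LR
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```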
The reason for the discrepancy between the accuracies reported for Overfeat in (38) and (40) is not stated; it might be because the latter experiments were evaluated using cross-validation. Poplin et al. (41) trained an Inception-v3 network to predict cardiovascular risk factors from retinal fundus images. They trained binary classification models for predicting smoking status and gender, another classification model for predicting major adverse cardiovascular events (MACE), and regression models for predicting age, body mass index (BMI), systolic blood pressure (SBP), diastolic blood pressure (DBP) and HbA1c. The results were compared against the accuracy of a random classifier. For HbA1c, BMI and DBP, the model's predictions were close to those of a random classifier. However, for age and SBP, the trained model scored 78% and 72% respectively, a clear improvement over random classification, and areas under the curve (AUC) of 97% and 70% were reported for the gender and MACE classifications. These results were reported on the UK Biobank dataset. Mateen et al. (42) used a VGG-19 deep neural network for feature extraction from retinal fundus scans segmented using Gaussian mixture models. Once the features were extracted, PCA and singular value decomposition (SVD) were applied for feature reduction. A softmax classifier on the reduced feature vector then labelled each image as non-DR, mild, moderate, severe or proliferative DR. A classification accuracy of 98% was reached for this five-class problem on the Kaggle dataset, and results using VGG-Net were better than with AlexNet. The idea of using multimodal data, combining OCT and fundus images for retinopathy classification in patients with age-related macular degeneration (AMD), was demonstrated by Yoo et al. (43). They used transfer learning, training a VGG-19 net on both types of images; the features extracted from the OCT images were combined with those of the fundus images, and a deep belief network consisting of multiple restricted Boltzmann machines (RBM) was trained on the combined features. The multimodal models yielded better accuracy for both binary and 3-class classification (96% and 72% respectively) than single-modality models.

3.2. Combination of Pretrained Models and Ensembles

The use of ensembles is becoming common in neural networks: multiple classifiers are trained, and a weighted score is taken from each to give a final output. Each of these classifiers may be custom-designed or ready-made. Ting et al. (44) adapted the VGG network and trained eight different CNNs for different kinds of outputs contributing to the final score. Three ensembles of two CNNs each were used for classification of retinal images into (a) retinopathy severity levels, (b) AMD scores, and (c) glaucoma levels; the two networks in each pair are trained on original images and contrast-normalized images respectively, and the final score for each category is the average of the scores from its pair of classifiers. Two more CNNs determine whether the image is ungradable or non-retinal. The final scores from each category formulate the decision for referable diagnosis. Abramoff et al. (45) presented an automated device for the detection of diabetic retinopathy (DR). They developed a CNN architecture inspired by AlexNet and VGG networks. Their classifier produced outputs indicating (a) presence of diabetic retinopathy, (b) presence of referable DR, (c) vision-threatening DR, and (d) quality of the image or examination. Image augmentation included rotational, spatial and scale augmentations. The CNN outputs probabilities that detections are abnormalities; these probabilities form a feature vector supplied to two fusion algorithms, which predict whether the image represents referable DR or vision-threatening DR respectively. The system was evaluated on the Messidor-2 dataset of retinal fundus images; a sensitivity of 96% was reported for the first output and 100% for the rest. Aside from deep learning, ensembles are also used with machine learning classifiers.
A bagging ensemble classifier for retinal images using machine learning techniques was presented in (46). Feature extraction is first carried out using t-distributed stochastic neighbor embedding (t-SNE), and multiple machine learning classifiers are trained on the extracted features to create an ensemble. The paper provides no detail about the machine learning algorithms used in the ensemble, nor about the number of classifiers used.
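The score-fusion idea underlying these ensembles is simple to state in code. A minimal sketch follows, assuming each trained model exposes a `predict` method returning per-class probabilities of shape (N, C); the weighting scheme is illustrative.

```python
import numpy as np

def ensemble_predict(models, images, weights=None):
    """Fuse (optionally weighted) class probabilities from several
    trained classifiers and return the majority decision."""
    weights = np.ones(len(models)) if weights is None else np.asarray(weights)
    probs = np.stack([m.predict(images) for m in models])   # (M, N, C)
    fused = np.tensordot(weights / weights.sum(), probs, axes=1)  # (N, C)
    return fused.argmax(axis=1)                              # class labels
```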
3.3. Modification of Available Neural Networks

Studies have also used neural networks derived from, or modifications of, pre-designed networks. Colonna et al. (25) proposed neuropathy classification of CCM nerve images using a CNN inspired by U-Net, in addition to nerve segmentation. The existing U-Net architecture was modified for image-level classification as normal or neuropathy: the features extracted from the lower layers of U-Net were fed to a convolution layer, followed by max pooling and fully connected layers for the final classification. They trained this modified network on 5000 images, using 30% of them for validation. The network was trained with a batch size of 256 images for 15 epochs, with shuffling enabled before every epoch; the optimizer was stochastic gradient descent with L2 regularization and a learning rate of 0.01. The trained model was evaluated by predicting labels on 100 unseen images, achieving an accuracy of 83%; however, no further insight into the evaluation results is provided. Takahashi et al. (47) performed a thorough study of two types of classification for retinal images. The first is a 3-class classification of retinopathy grades (simple DR, pre-proliferative DR, proliferative DR) using a modified version of the GoogLeNet architecture, in which the top five layers were removed and some parameters fine-tuned; trained on more than 9000 images, it scored an accuracy of 81% for predicting retinopathy grade. They trained another model with the same network design on images from patients with follow-ups to predict their prognosis. The prognoses consisted of multistage outputs: the first predicted whether treatment was required and, if so, whether in the current visit or the next; the second predicted one of three possible treatments; the third determined the visual acuity as stable, improved or worsened. They achieved an accuracy of 96% with a low false negative rate, implying few cases where treatment was not required but the network predicted otherwise. Phasuk et al. (48) proposed a retinal image classification method in which features were extracted using DenseNet-121 and fed to a 3-layer fully connected neural network for classification as normal or glaucomatous. The method was trained on the complete ORIGA dataset (586 images) and the training sets from the RIMONE-R3 (127 images) and DRISHTI-GS (80 images) datasets. The results showed a 90% AUC on the test sets from RIMONE-R3 (17 images) and DRISHTI-GS (11 images). Although approximately 800 images were used for training, the total number of test images (28) is too small, leaving room for further experimentation with the proposed method.

3.4. Custom-Designed Convolutional Neural Networks

In addition to using readily available neural networks, custom-designed CNNs are also popular. This approach is more flexible, as layers can be added or removed depending on the structure and performance of the network. Tan et al. (49) introduced the idea of detecting AMD in retinal images using deep learning.
Their CNN architecture consisted of 14 layers, including 7 convolution layers, 4 max pooling layers and 3 fully connected layers. They trained the model using blind and 10-fold cross-validation on a private dataset; prior to training, data augmentation was performed by rotation, and the Adam optimizer was used. They achieved a sensitivity of 93% and a specificity of 88% for blind cross-validation, and an average of 96% sensitivity and 94% specificity for 10-fold cross-validation on the AREDS dataset. Gargeya and Leng (50) designed a convolutional neural network based on the principles of deep residual learning for classifying retinal color fundus images as normal or retinopathy. During preprocessing, image brightness levels were adjusted and data augmentation was applied to allow rotation-invariant predictions. The network was designed so that each convolutional layer combined the output from itself and previous layers; the authors give only an abstract description of the network without details. A visual heatmap was also developed to highlight severe retinopathy regions in the images. The CNN constructed a vector of 1024 features from each image, which was combined with metadata features and fed to a decision tree classifier for the final prediction. Experiments were conducted using 5-fold cross-validation; a sensitivity of 93% and a specificity of 87% were achieved on the Messidor-2 dataset, a very high specificity compared with other approaches. An interpretable CNN classifier for retinopathy grading was presented by Torre et al. (51). The CNN consists of 17 layers, of which 14 perform feature extraction. They use a score propagation model in which output scores are computed from the input-space contributions and the receptive fields of each layer; the score represents the importance of each pixel to the final prediction and is depicted by highlighting pixels in the image. The study was conducted on the EyePACS dataset, reaching an accuracy of 91% for binary classification. Rehman et al. (52) proposed a simple 5-layer CNN for classification of diabetic retinal images: the first layer consists of 4 convolution kernels followed by pooling, the second of 16 kernels followed by pooling, and the last three layers are fully connected networks with different numbers of neurons. Compared to AlexNet, SqueezeNet and VGG-Net, the proposed CNN showed better performance, with 98% accuracy on the Messidor dataset.

3.5. Non-Deep Learning Techniques

Retinal image classification has advanced to the point where, as in other image classification areas, classical machine learning techniques have largely been superseded; recent advancements rely on the deep learning methods detailed in sections 3.1-3.4. For CCM image classification, however, machine learning still features in recent publications. Silva et al. (53) approached neuropathy classification of corneal images using SVM classifiers. They extracted a vector of 61 texture-based features from the images, reduced to 6 features after applying PCA. An SVM was used to classify images as (a) with or without neuropathy, and (b) mild or moderate neuropathy. Using 10-fold cross-validation, accuracies of 73.5% and 79.3% were achieved for the first and second classifications respectively on the dataset used in (54). Although the total number of images is large (631), they were acquired from only 20 subjects, so the dataset lacks image diversity from multiple patients.
Moreover, the acquired images overlap, so the same corneal region appears in multiple images, contributing to the high accuracy values. We have evaluated different machine learning techniques for CCM image classification in (55). Models were trained with SVM, Naïve Bayes, linear discriminant analysis (LDA), decision tree and KNN algorithms on images selected from the dataset of (54). Three features were extracted after morphological image processing before proceeding to the classification stage. KNN showed superior performance, with an accuracy of 92%.
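As an illustration of this classical pipeline, a minimal sketch of texture-feature extraction with PCA-based reduction and a KNN classifier is given below. The GLCM properties, the PCA dimensionality and the neighbor count are illustrative stand-ins for the richer feature sets used in (53) and (55).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def texture_features(img_u8):
    """A handful of GLCM texture descriptors per uint8 image; the
    surveyed studies used larger sets (61 features in (53))."""
    glcm = graycomatrix(img_u8, distances=[1, 3],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel()
                           for p in ('contrast', 'homogeneity',
                                     'energy', 'correlation')])

def fit_image_classifier(images_u8, labels, n_components=6):
    """Train an image-level neuropathy classifier on texture features."""
    X = np.array([texture_features(im) for im in images_u8])
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=n_components),
                        KNeighborsClassifier(n_neighbors=5))
    clf.fit(X, labels)
    return clf
```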
From the collected literature, we observe that the trend in retinal image classification is moving towards CNNs, which do not require a separate image segmentation step. Neural networks have shortened the number of phases required for image analysis: instead of focusing on which features to extract from the images after segmentation, the focus has shifted to improving the design of the neural network so as to produce better results. Although the training time of deep neural networks can be a limitation, with state-of-the-art GPUs it is a matter of a few hours. Existing research shows that segmentation of vessel/nerve structures has been approached in many ways, with extensive experiments on corneal nerve images, retinal fundus scans, leaf vein images and other similar images. Table 1 and Table 2 summarize the segmentation and classification techniques for retinal and corneal nerve images collected in this research. To the extent of our knowledge, neuropathy classification of corneal nerve images has not been approached by the research community except for two pilot studies (25,53). In (25), the authors do not give detailed insight into, or discussion of, the reported neuropathy classification results, and the classification conducted is only binary (healthy/pathological). In (53), the dataset contains largely overlapping images from a small number of patients, resulting in many repetitive images; the evaluation therefore does not clearly indicate whether the classification model will generalize. Thus, this area leaves much room for investigation: extending the classification to multiclass and exploring the potential of other machine learning algorithms. We discuss this further in Section 6.

Table 1. Summary of Segmentation Techniques
Method: Region of Interest Expansion
• Ruggeri et al. (5): Seed point extraction on a uniform grid, fuzzy c-means clustering and a nerve continuity algorithm; tested on 12 images; produced a large number of false positives;
• Scarpa et al. (7): Based on (5); introduced the use of Gabor filters; tested on 90 images; correct identification of more than 80% of the nerve length;
• Poletti and Ruggeri (8): Improvement on (5) by identifying seed points through multiple orientations of lines; segmentation of nerves by connecting seed points using Dijkstra's shortest path algorithm.

Method: Gabor Wavelets
• Dabbah et al. (9): Proposition of a dual model consisting of a Gabor wavelet filter and a Gaussian envelope;
• Wang et al. (12): Use of a cascade network to classify pixels on Gabor, matched-filter, DoG and other features.

Method: Morphological Operations
• Al-Fahdawi et al. (13): Nerve contrast enhancement using coherence and Gaussian filters; morphological dilation, erosion, opening and closing to remove stray and unwanted segments; Canny edge detection; new method for linking gaps in the nerves;
• Hosseinaee et al. (14): Corneal nerve images from OCT scans segmented using morphological erosion, opening and closing, with Hessian-based filtering for enhancement;
• Bibiloni et al. (15): Real-time fuzzy morphological top-hat algorithm and fuzzy erosion technique.

Method: Non-Deep Learning Techniques
• Rani et al. (18): Retinal vessel segmentation using Gaussian-based matched filters; selection of the green channel; contrast limited adaptive histogram equalization; pixel-based features from each connected component;
• Guimaraes et al. (17): Top-hat filtering for contrast enhancement; nerve enhancement using log-Gabor filters; nerve segmentation through hysteresis thresholding; pixel classification using SVM with an RBF kernel;
• Dabbah et al. (16): A multiscale enhancement of the dual model in (9); classification of pixels as nerve/non-nerve by training NN or RF classifiers;
• Geetharamani & Balasubramanian (20): Clustering using KNN on PCA-generated features; further classification using root guided decision trees;
• Soomro et al. (19): ICA for image enhancement and PCA-based grayscale conversion.

Method: Convolutional Neural Networks
• Fu et al. (21): A CNN for retinal vessel segmentation using the HED technique proposed in (22);
• Hu et al. (23): A CNN with deeply supervised nets for edge detection, followed by CRF, for retinal vessel segmentation; the CNN is adapted from VGG-16;
• Colonna et al. (25): Corneal nerve segmentation using a U-Net based CNN; tested on 30 images; achieved a sensitivity of 97% and an FDR of 18%;
• Williams et al. (29): Corneal nerve segmentation using single and multiple (majority-vote) U-Nets;
• Son et al. (30): U-Net as part of a GAN for retinal vessel segmentation;
• Maninis et al. (31): Retinal image segmentation combining the CNN architectures of VGG-18 and Inception V3;
• Zhu et al. (32): Retinal vessel segmentation using ELMs.
Table 2. Summary of Classification Techniques

Method: Transfer Learning and Parameter Tuning
• Gulshan et al. (35): Inception V3 architecture; three types of classification: (a) having referable DR (mydriatic, non-mydriatic or both) or not, (b) subtypes of DR (moderate or worse only, severe or worse only, diabetic macular edema only) and (c) gradable or non-gradable;
• Choi et al. (36): VGG-19 and AlexNet architectures for classification of normal retinal images and 9 types of retinal pathologies (10-class classification);
• Lam et al. (37): Multiclass classification of retinopathy severity levels from retinal fundus scans using GoogLeNet and AlexNet;
• Burlina et al. (40): Pretrained model, Overfeat, for retinal image classification, compared with AlexNet trained from scratch;
• Mateen et al. (42): VGG-19 for feature extraction, PCA and SVD for feature reduction, and a softmax classifier for classification;
• Yoo et al. (43): Use of VGG-19 for multimodal data.

Method: Combination of Pretrained Models and Ensembles
• Ting et al. (44): Adapted the VGG network and trained 8 CNNs for different kinds of outputs contributing to the final score; three ensembles of two CNNs each for classification of retinal images into (a) retinopathy severity levels, (b) AMD scores, and (c) glaucoma levels;
• Abramoff et al. (45): An automated device for detection of diabetic retinopathy (DR) inspired by AlexNet and VGG networks; the classifier produced outputs indicating (a) presence of DR, (b) presence of referable DR, (c) vision-threatening DR, and (d) quality of image or examination.

Method: Modification of Available Neural Networks
• Colonna et al. (25): Binary classification of CCM nerve images (healthy/pathological) using U-Net;
• Takahashi et al. (47): Modified GoogLeNet architecture for (a) 3-class classification of retinopathy grades and (b) multistage classification of patients with follow-ups to predict their prognosis;
• Phasuk et al. (48): DenseNet for feature extraction and a custom-designed fully connected network for classification.

Method: Custom-Designed Convolutional Neural Networks
• Tan et al. (49): A 14-layer CNN for retinal image classification (7 convolution, 4 max pooling and 3 fully connected layers);
• Gargeya and Leng (50): A CNN based on deep residual learning for binary classification of retinal images; a 1024-feature vector per image combined with meta-data features and fed to a decision tree classifier for final prediction;
• Torre et al. (51): A CNN based on a prediction score propagation model for retinopathy;
• Rehman et al. (52): A 5-layered CNN with embedded neural networks in 3 layers.

Method: Non-Deep Learning Techniques
• Silva et al. (53): Extraction of 61 texture-based features from CCM images and feature reduction using PCA; SVM used to classify images as (a) with or without neuropathy and (b) mild or moderate neuropathy;
• Qidwai and Salahuddin (55): Extraction of 3 features from CCM images, followed by an evaluation of neuropathy classification by 6 algorithms: SVM, LDA, KNN, NB, decision trees and ANFIS.
Table 3. Evaluation Metrics for Segmentation

Study | Acc | Prec | Sen/TPR/Recall | Spe/TNR/(1-FPR) | FDR | Execution Time | Dataset / No. of Images
Poletti and Ruggeri (8)* | NA | NA | 83.3% | NA | 5.1% | 18-32 s | 12
Scarpa et al. (7)* | NA | NA | 80.4% | NA | 6.3% | 1 min | 90
Ruggeri et al. (5)* | NA | NA | 86.3% | NA | 8.6% | 4 min | 12
Hu et al. (23) | 95%, 96% | NA | 77.72%, 75.43% | 97.93%, 98.14% | NA | 1.1 s | DRIVE, STARE
Dabbah et al. (9)* | NA | NA | 82.42% | 81.72% | NA | NA | 525
Wang et al. (12) | 95.41%, 96.40% | NA | 76.48%, 75.23% | 98.17%, 98.85% | NA | NA | DRIVE, STARE
Al-Fahdawi et al. (13)* | NA | NA | NA | NA | NA | 7 s | 1417
Hosseinaee et al. (14)* | NA | NA | NA | NA | NA | 18-20 s | NA
Rani et al. (18) | 93.71%, 94.96% | NA | 72.74%, 74.05% | 96.85%, 97.44% | NA | 3.26 s | DRIVE, STARE
Guimaraes et al. (17)* | NA | NA | 88% | NA | 8% | 0.61 s | 246
Dabbah et al. (16)* | NA | NA | 84.67% | 84.78% | NA | NA | 521
Fu et al. (21) | 94.70%, 95.45% | NA | 72.94%, 71.40% | NA | NA | 0.2 s | DRIVE, STARE
Colonna et al. (25)* | NA | NA | 97% | NA | 18% | <1 s | 30
Son et al. (30) | NA | 91.49%, 91.67% | 91.49%, 91.67% | NA | NA | NA | DRIVE, STARE
Maninis et al. (31) | NA | ≈82%, ≈83% | ≈83%, ≈84% | NA | NA | 0.1 s | DRIVE, STARE
Zhu et al. (32) | NA | NA | 71% | 98% | NA | 12 s | DRIVE
Soomro et al. (19) | 95.3%, 96.7% | NA | 75.2%, 78.6% | 97.6%, 98.2% | NA | NA | DRIVE, STARE
Geetharamani & Balasubramanian (20) | 95.36% | NA | 70.79% | 97.78% | NA | 2 min | DRIVE
Bibiloni et al. (15) | 93.8% | 78.6% | 72% | 97% | NA | 0.037 s | DRIVE

Acc - Accuracy, Prec - Precision, Sen - Sensitivity, Spe - Specificity, FDR - False Detection Rate, NA - Not Available. Paired values refer to the DRIVE and STARE datasets respectively.
*Corneal Nerve Images

4. Neuropathy Detection using Fuzzy Systems
Attempting to advance research in corneal nerve image analysis, in recent studies (56) we have proposed and evaluated a system for neuropathy detection in CCM images using Adaptive Neuro-Fuzzy Inference Systems (ANFIS). ANFIS has been used successfully in different types of medical data classification (57,58); it combines the learning power of neural networks with the logic-based reasoning of fuzzy systems. Progressing with this research, we propose a neuropathy detection system comprising three stages.

In the first stage, corneal nerves are segmented with a U-Net based CNN as proposed in (25), as opposed to (56), where we used morphological processing for segmentation. The image dataset is split into train (n=174) and test (n=578) sets for segmentation, and we train the CNN on training images representing the artefacts present in the dataset. The segmented image contains white pixels representing the nerves and black pixels representing the background.

In the second stage, a feature set is extracted from the segmented images for classification. Research has shown that the feature correlating most highly with diabetic neuropathy is corneal Nerve Fiber Length (NFL) (59), defined as the total nerve length in the image. We therefore extract NFL from each image as the sum of all nerve pixels in the segmented image. Since the appearance of nerves differs in different sections of the image as neuropathy progresses, and nerve damage is not necessarily present in all parts of the image, intra-segment NFL (ISNFL) is a useful additional set of features: we split the image into 4 equal segments and calculate the NFL of each, giving a set of five features per image.

Images are classified in the last stage using a custom-designed ANFIS classifier. Out of 642 labelled images, 129 are set aside for testing, and the network is trained on the feature set generated in the previous stage. The trained classifier automatically labels images as diseased or healthy with a classification accuracy of 95%; further classification of the diseased class yielded an accuracy of 93.7%. Detailed results are shown in Fig. 2. The precision, recall and macro-F1 for the middle class (at-risk neuropathy) are the lowest, which is logical because images depicting at-risk neuropathy are difficult to identify. The results of this study could not be compared with the two other studies in the literature because of the different criteria defined for neuropathy and the unavailability of the images used in those studies.
Fig. 2. Precision, Recall and Macro-f1 for all 3 classes of neuropathy using the ANFIS classifier.
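A minimal sketch of the second-stage feature extraction is given below. It assumes the four "equal segments" are the image quadrants and uses a raw nerve-pixel count as the length proxy; both are illustrative simplifications of our actual implementation.

```python
import numpy as np

def nfl_features(seg):
    """Compute NFL and the four intra-segment NFLs (ISNFL) from a
    binary segmented image where nerve pixels are 1. A nerve-pixel
    count serves as the length proxy in this sketch."""
    seg = seg.astype(bool)
    h, w = seg.shape
    nfl = int(seg.sum())                      # total nerve pixels
    hy, hx = h // 2, w // 2                   # quadrant boundaries
    quadrants = [seg[:hy, :hx], seg[:hy, hx:],
                 seg[hy:, :hx], seg[hy:, hx:]]
    isnfl = [int(q.sum()) for q in quadrants]
    return np.array([nfl] + isnfl)            # 5 features per image
```

The resulting five-feature vectors are what the ANFIS classifier in the third stage is trained on.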
5. Processing Time and Processor Power Comparison

In this section, we provide a comparative analysis of the segmentation time taken by the algorithm proposed in each study, alongside an estimate of processor power in GFLOPS (giga floating-point operations per second) (Table 4). Studies that did not report the segmentation time or the processor used (8, 9, 12, 16, 19, 30) were excluded from this comparison. Studies (5) and (7) could be considered pioneers of CCM image segmentation, published in 2006 and 2008 respectively. From the magnified illustration in Fig 3, it is evident that, using a low-power Intel Pentium processor, they reduced image segmentation time from 240 seconds in the first study to 60 seconds in the second. Considering the best processing power available at the time, this was an improvement. Studies (13) and (18) used processors only slightly more powerful than those in (5) and (7) but significantly optimized the segmentation time, utilizing the available processing power to the maximum. The method in (20) is less time-efficient even though its processor is more powerful than those used in (5) and (7), indicating suboptimal algorithm performance. Studies (21), (31) and (25) used high-performance computers for their experiments, naturally resulting in segmentation times below one second per image. The methods presented in (17) and (15) and our proposed method achieve very low segmentation times (<1 s) with the most commonly available low-cost processing power.
Table 4. A comparison of processing power and processing time for the segmentation algorithms in the studied literature.

Study | GFLOPS | Year | Processor¹ | Processing time² (s)
Ruggeri et al. (5)* | 1.6 | 2006 | Intel(R) Pentium(R) M 1.86 GHz | 240
Scarpa et al. (7)* | 3.1 | 2008 | Intel(R) Pentium(R) 4 3.40 GHz | 60
Guimaraes et al. (17)* | 31.63 | 2014 | Intel(R) Core(TM) i7-4770 @ 3.40 GHz | 0.61
Al-Fahdawi et al. (13)* | 9.88 | 2016 | Intel(R) Core(TM) i5-3337U @ 1.80 GHz | 7
Rani et al. SVM (18) | 9.88 | 2016 | Intel(R) Core(TM) i5-3337U @ 1.80 GHz | 3.26
Fu et al. (21) | 3600 | 2016 | NVIDIA Tesla K20 GPU | 0.2
Maninis et al. (31) | 6691 | 2016 | NVIDIA Titan X GPU | 0.1
Geetharamani & Balasubramanian (20) | 20.03 | 2016 | Intel(R) Core(TM) i5-5257U @ 2.70 GHz | 120
Zhu et al. (32) | 44.26 | 2017 | Intel(R) Core(TM) i7-4790K @ 4.0 GHz | 12
Colonna et al. (25)* | 8900 | 2018 | NVIDIA GeForce GTX 1080 | <1
Bibiloni et al. (15) | 14 | 2018 | Intel(R) Core(TM) i5-3340 @ 3.10 GHz | 0.04
Hosseinaee et al. (14)* | 45.42 | 2019 | Intel(R) Core(TM) i7-4930K @ 3.40 GHz | 20
Our Method* | 25.17 | 2019 | Intel(R) Core(TM) i7-4712HQ @ 2.30 GHz | 0.05

Notes: * Corneal nerve images (CCM). ¹ An approximate processor was chosen where studies provided incomplete details of the processor. ² The original segmentation time per image as provided by the study.
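As a rough illustration of how Table 4 can be read, one can normalize each reported time by the processor's throughput: multiplying time by GFLOPS approximates the total floating-point budget consumed per image, so a slow time on a weak processor may still reflect an efficient algorithm. This back-of-the-envelope normalization is ours, not taken from the surveyed studies; only a subset of Table 4 is shown.

```python
# Approximate per-image floating-point work = reported time x processor GFLOPS.
# Values copied from Table 4; "<1 s" for (25) is pessimistically taken as 1.0 s.
studies = {
    "Ruggeri et al. (5)":   (1.6,   240),
    "Scarpa et al. (7)":    (3.1,   60),
    "Guimaraes et al. (17)": (31.63, 0.61),
    "Bibiloni et al. (15)": (14,    0.04),
    "Colonna et al. (25)":  (8900,  1.0),
    "Our Method":           (25.17, 0.05),
}
# Sort by estimated work per image (lower suggests a leaner algorithm).
for study, (gflops, seconds) in sorted(studies.items(),
                                       key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{study:22s} ~{gflops * seconds:9.2f} GFLOP per image")
```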
Fig. 3. (a) A graphical representation of GFLOPS and reported segmentation time for each study. (b) A magnified version of the top graph for better visualization of smaller values. Faded tips of the bars indicate continuation of the bar beyond the chart area.
6. Lessons Learned and Future Research Directions

Given that CCM nerve images have not been explored to their full potential for neuropathy identification, techniques for automated retinal image analysis provide an open ground of relevant ideas that can be applied to CCM nerve images. Although seemingly different, both kinds of images point to the worsening of the clinical condition that caused the retinopathy or neuropathy. This raises a question: if both relate to the same underlying clinical condition, what necessitates the separate exploration and automation of corneal nerve image analysis? To answer it, firstly, the two images reveal insight into different complications of the chronic disease: retinopathy (which affects vision) and neuropathy (which affects movement and sensation in the limbs). Secondly, the absence of axonal nerves in the cornea indicates very early neuropathy, which is currently difficult to diagnose by other means; if explored well, CCM could serve as a widely available, accurate, non-invasive and quick alternative to the other methods of neuropathy diagnosis summarized elsewhere (60).

There are many directions that research in this area could take, benefiting the general community and adding useful insights on neuropathy for the scientific community. Moreover, applying artificial intelligence to automate the image analysis will speed up the entire process; this requires substantial effort from computer scientists and relevant medical professionals alike. We list some potential research ideas below, along with thoughts on why each would be a good area to explore.

1. Retinal image classification has been approached by a multitude of researchers, so the criteria for identifying retinopathy are standardized across the research community. In contrast, work on single-image classification (into healthy or diseased) of CCM images lacks standardization in terms of established criteria for identifying neuropathy. The two studies found in the literature (25,53) do not specify the exact protocol used for determining the state of the nerves in the images and lack a discussion of the obtained features. The field also lacks corroborating evidence from experiments by diverse groups of experts; further, repeated machine learning experiments on a multitude of images obtained from diverse patient groups would be necessary to establish the findings, whereas a plethora of studies already confirm the findings from retinal fundus images. Machine learning research also needs to advance feature extraction techniques and identify which features are indicative of neuropathy in these images; examples of representative features include quantification of nerve clustering, tissue reflection, and the presence of Langerhans cells in CCM images. These results can then be correlated with the statistical findings abundantly available in the literature for CCM images, which would additionally yield valuable insight into neurophysiology.
2. The current state-of-the-art in CCM image analysis focuses on image-based neuropathy classification, where each image, representing a part of the cornea, is classified as having neuropathy or not (25,56). It must be noted that the cornea is currently captured in multiple images per eye; a complete representation of the cornea in a single image is not performed. It might therefore be useful to extend this to patient-based classification, where a patient's level of neuropathy is determined from all of his or her corneal images (a sketch of one possible aggregation rule follows this list).
3. Visual heatmaps provide an effective way to observe affected areas; a heatmap of retinopathy-affected regions of the retina was presented in (50). Impactful research in CCM image analysis would be to test the effectiveness of different approaches to risk-stratifying patients based on their CCM results, and to create a heatmap visualization of the areas depicting neuropathy (a patch-based sketch follows this list).
4. There are many standard datasets for retinal images: DRIVE (61), STARE (62), Messidor-2 (63) and Kaggle (https://www.kaggle.com/c/diabetic-retinopathy-detection). As Table 3 shows, studies on retinal image segmentation have used standard datasets (DRIVE, STARE) and compared their results with others on the same data. The studies on CCM images, however, do not use any standard datasets, so their results are not comparable. The unavailability of labelled (healthy/diseased) CCM datasets is a major factor behind the scarcity of research efforts in this area. To make room for more research, a large volume of CCM images from different conditions needs to be analyzed and ground-truth data generated.
5. A retinal scan provides a complete picture of the retina, whereas corneal images do not. Another open problem is therefore to determine, and then standardize, the area of the cornea considered for CCM image capture: whether it must lie at a fixed distance from the inferior whorl, or whether some other specification is preferable. An alternative, although expensive, approach would be to image the entire cornea, stitch the images into a complete picture, and use this for patient classification; automatic montaging from a sequence of corneal nerve images has already been accomplished (64).
6. Presently, the work on image classification in the literature is limited to CCM images obtained from diabetic patients only, since the appearance of diabetic neuropathy in the human cornea is a well-established fact reported by many studies (65,66). The literature on retinal image classification explores more than diabetes: glaucoma (48) and AMD (43,49). What remains to be explored is whether different diseases affect the cornea differently in terms of neurodegeneration. For this purpose, artificial intelligence techniques are needed to understand the apparent condition of nerve loss in various disease conditions and its relationship to clinical outcomes.

7. The experiments and results reported so far apply only to CCM images captured from the central region of the cornea. The use of inferior whorl images remains to be explored, to define the relationship between percent change at baseline versus follow-up and to quantify the risk of disease progression.
8. Current research identifies neuropathy through images alone and neglects patient history. A robust and reliable future system could combine information from the images with the patient's clinical history; the relevant features to draw from that history also need to be determined.
9. An important upgrade to the process would be to embed the image processing system in the imaging device, allowing instant automatic processing. Where hardware storage is limited, an effective alternative is processing in the cloud. In this regard, Teikari et al. (67) provide a very informative review of embedded systems for retinal imaging and relevant cloud solutions.
10. Lastly, a very important and broad research direction would be to investigate, and seek to establish, relationships between the different modalities that reflect a patient's neuropathy with respect to a disease. These modalities include CCM images, the corpus callosum in brain MRI scans, the cross-sectional retina captured in OCT scans, skin biopsy and DNA sequencing. In the context of retinal image classification, data from multiple modalities are used in (43), where the authors combine OCT images and fundus images to improve diagnostic accuracy (a simple fusion sketch follows this list).
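For direction 2, a patient-based decision could be as simple as aggregating the per-image outputs of an existing image-level classifier across all CCM images of one patient. The sketch below is a hypothetical aggregation rule (mean probability plus a flag count); the threshold and the rule itself are assumptions, not an established protocol.

```python
import numpy as np

def classify_patient(image_probs: np.ndarray, threshold: float = 0.5) -> dict:
    """Aggregate per-image neuropathy probabilities (one per CCM image of
    the same patient) into a single patient-level decision. Mean probability
    is used here; majority voting or a worst-case (max) rule would be
    equally plausible choices."""
    mean_prob = float(image_probs.mean())
    return {
        "patient_probability": mean_prob,
        "diseased": mean_prob >= threshold,
        "images_flagged": int((image_probs >= threshold).sum()),
    }

# Example: six images captured from one eye of a single patient.
print(classify_patient(np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.6])))
```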
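For direction 3, one simple way to build a neuropathy heatmap is to score the image patch by patch and render the scores as a colour overlay. The sketch below uses local nerve-pixel density on a random binary image purely as a stand-in score; in practice, a trained classifier's per-patch probability would replace `score_fn`. All names and the patch size are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def patch_scores(image: np.ndarray, patch: int, score_fn) -> np.ndarray:
    """Score each non-overlapping patch of a 2-D image, producing a coarse
    grid that can be rendered as a heatmap over the original image."""
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    grid = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            grid[i, j] = score_fn(
                image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch])
    return grid

# Stand-in data: a random sparse binary image in place of a segmented CCM image.
img = (np.random.rand(384, 384) > 0.9).astype(float)
heat = patch_scores(img, patch=32, score_fn=np.mean)
plt.imshow(img, cmap="gray")
# Upsample the coarse grid to image size and overlay it semi-transparently.
plt.imshow(np.kron(heat, np.ones((32, 32))), cmap="jet", alpha=0.4)
plt.title("Per-patch score overlay (illustrative)")
plt.show()
```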
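For direction 10, a simple starting point for combining modalities is late fusion of per-modality feature vectors, loosely in the spirit of (43), which combined OCT and fundus data using deep learning. The sketch below uses random stand-in features and a linear classifier purely to illustrate the fusion step; all dimensions and names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Late fusion: concatenate feature vectors from two modalities (e.g.
# CCM-derived NFL features and hypothetical OCT measurements) per patient
# and train a single classifier on the fused representation.
rng = np.random.default_rng(0)
ccm_features = rng.normal(size=(100, 5))   # stand-in for NFL + ISNFL features
oct_features = rng.normal(size=(100, 8))   # stand-in for OCT-derived features
labels = rng.integers(0, 2, size=100)      # stand-in neuropathy labels

fused = np.hstack([ccm_features, oct_features])
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```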
7. Conclusion

This article summarizes research trends in retinal vessel segmentation, corneal nerve segmentation, retinal image classification for retinopathy and corneal image classification for neuropathy. Given the scarcity of literature on corneal nerve image analysis for neuropathy, we examined the reasons hindering its research progress. We sought to bridge the gap between the two fields of research (computing and medicine) by outlining the different aspects of retinal image analysis and suggesting how the same could be implemented in corneal image analysis. This would allow new insights into neuropathy and would therefore be of great benefit to the research community.
Conflict of Interest Statement There are no conflicts of interest for this research.
References

1. Tavakoli M, Begum P, Mclaughlin J, Malik RA. Corneal confocal microscopy for the diagnosis of diabetic autonomic neuropathy. Muscle Nerve. 2015.
2. Khan A, Akhtar N, Kamran S, Ponirakis G, Petropoulos IN, Tunio NA, et al. Corneal confocal microscopy detects corneal nerve damage in patients admitted with acute ischemic stroke. Stroke. 2017.
3. Podgorny PJ, Suchowersky O, Romanchuk KG, Feasby TE. Evidence for small fiber neuropathy in early Parkinson's disease. Park Relat Disord. 2016.
4. Miranda E, Aryuni M, Irwansyah E. A survey of medical image classification techniques. In: 2016 International Conference on Information Management and Technology (ICIMTech). IEEE; 2016. p. 56–61.
5. Ruggeri A, Scarpa F, Grisan E. Analysis of corneal images for the recognition of nerve structures. In: 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS'06); 2006. Available from: https://ieeexplore.ieee.org/abstract/document/4462860/
6. Grisan E, Pesce A, Giani A, Foracchia M, Ruggeri A. A new tracking system for the robust extraction of retinal vessel structure. In: 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEMBS'04); 2004. Vol. 1, p. 1620–3. Available from: https://ieeexplore.ieee.org/abstract/document/1403491/
7. Scarpa F, Grisan E, Ruggeri A. Automatic recognition of corneal nerve structures in images from confocal microscopy. Investig Ophthalmol Vis Sci. 2008.
8. Poletti E, Ruggeri A. Automatic nerve tracking in confocal images of corneal subbasal epithelium. In: 26th IEEE International Symposium on Computer-Based Medical Systems (CBMS); 2013. p. 119–24. Available from: https://ieeexplore.ieee.org/abstract/document/6627775/
9. Dabbah MA, Graham J, Petropoulos I, Tavakoli M, Malik RA. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images. In: Lecture Notes in Computer Science. 2010.
10. Dixon RN, Taylor CJ. Automated asbestos fiber counting. In: 1979 Institute of Physics Conference; 1979.
11. Dabbah MA, Graham J, Tavakoli M, Petropoulos Y, Malik RA. Nerve fibre extraction in confocal corneal microscopy images for human diabetic neuropathy detection using Gabor filters. Med Image Underst Anal. 2009;254–8. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.538.479&rep=rep1&type=pdf
12. Wang X, Jiang X, Ren J. Blood vessel segmentation from fundus image by a cascade classification framework. Pattern Recognit. 2019;88:331–41.
13. Al-Fahdawi S, Qahwaji R, Al-Waisy AS, Ipson S, Malik RA, Brahma A, et al. A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images. Comput Methods Programs Biomed. 2016.
14. Hosseinaee Z, Tan B, Kralj O, Han L, Wong A, Sorbarad L, et al. Fully automated corneal nerve segmentation algorithm for corneal nerves analysis from in-vivo UHR-OCT images. In: International Society for Optics and Photonics; 2019.
15. Bibiloni P, González-Hidalgo M, Massanet S. A real-time fuzzy morphological algorithm for retinal vessel segmentation. J Real-Time Image Process. 2018;1–14.
16. Dabbah MA, Graham J, Petropoulos IN, Tavakoli M, Malik RA. Automatic analysis of diabetic peripheral neuropathy using multi-scale quantitative morphology of nerve fibres in corneal confocal microscopy imaging. Med Image Anal. 2011.
17. Guimaraes P, Wigdahl J, Poletti E, Ruggeri A. A fully-automatic fast segmentation of the sub-basal layer nerves in corneal images. In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2014.
18. Rani P, Priyadarshini N, Rajkumar ER, Rajaman K. Retinal vessel segmentation under pathological conditions using supervised machine learning. In: International Conference on Systems in Medicine and Biology (ICSMB). IEEE; 2016. Available from: https://ieeexplore.ieee.org/abstract/document/7915088/
19. Soomro TA, Khan TM, Khan MA, Gao J, Paul M, Zheng L. Impact of ICA-based image enhancement technique on retinal blood vessels segmentation. IEEE Access. 2018;6:3524–38. Available from: https://ieeexplore.ieee.org/abstract/document/8264697/
20. Geetharamani R, Balasubramanian L. Retinal blood vessel segmentation employing image processing and data mining techniques for computerized retinal image analysis. Biocybern Biomed Eng. 2016;36(1):102–18.
21. Fu H, Xu Y, Wong DWK, Liu J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In: Proceedings - International Symposium on Biomedical Imaging; 2016.
22. Xie S, Tu Z. Holistically-nested edge detection. Int J Comput Vis. 2017.
23. Hu K, Zhang Z, Niu X, Zhang Y, Cao C, Xiao F, et al. Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function. Neurocomputing. 2018.
24. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Lecture Notes in Computer Science. 2015.
25. Colonna A, Scarpa F, Ruggeri A. Segmentation of corneal nerves using a U-Net-based convolutional neural network. 2018. p. 185–92. Available from: http://link.springer.com/10.1007/978-3-030-00949-6_22
26. Dong H, Yang G, Liu F, Mo Y, Guo Y. Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks. 2017. p. 506–17. Available from: http://link.springer.com/10.1007/978-3-319-60964-5_44
27. Costa P, Galdran A, Meyer MI, Abràmoff MD, Niemeijer M, Mendonça AM, et al. Towards adversarial retinal image synthesis. arXiv preprint. 2017. Available from: https://arxiv.org/abs/1701.08974
28. Salahuddin T, Al-Maadeed S, Petropoulos IN, Malik RA, Ilyas SK, Qidwai U. Smart neuropathy detection using machine intelligence: Filling the void between clinical practice and early diagnosis. In: IEEE World Conference on Smart Trends in Systems, Security and Sustainability; 2019.
29. Williams BM, Borroni D, Liu R, Zhao Y, Zhang J, Lim J, et al. An artificial intelligence-based deep learning algorithm for the diagnosis of diabetic neuropathy using corneal confocal microscopy: a development and validation study. Diabetologia. 2019.
30. Son J, Park SJ, Jung K-H. Retinal vessel segmentation in fundoscopic images with generative adversarial networks. 2017. Available from: http://arxiv.org/abs/1706.09318
31. Maninis K-K, Pont-Tuset J, Arbeláez P, Van Gool L. Deep retinal image understanding. 2016. p. 140–8. Available from: http://link.springer.com/10.1007/978-3-319-46723-8_17
32. Zhu C, Zou B, Zhao R, Cui J, Duan X, Chen Z, et al. Retinal vessel segmentation in colour fundus images using Extreme Learning Machine. Comput Med Imaging Graph. 2017;55:68–77.
33. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015. Available from: https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Szegedy_Going_Deeper_With_2015_CVPR_paper.html
34. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012. Available from: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks
35. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016. Available from: https://jamanetwork.com/journals/jamapediatrics/fullarticle/2588763
36. Choi JY, Yoo TK, Seo JG, Kwak J, Um TT, Rim TH. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database. PLoS One. 2017;12(11):e0187336. Available from: https://dx.plos.org/10.1371/journal.pone.0187336
37. Lam C, Yi D, Guo M, Lindsey T. Automated detection of diabetic retinopathy using deep learning. In: AMIA Summits on Translational Science Proceedings 2017; 2018. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5961805/
38. Burlina P, Freund DE, Joshi N, Wolfson Y, Bressler NM. Detection of age-related macular degeneration via deep learning. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI); 2016. Available from: https://ieeexplore.ieee.org/abstract/document/7493240/
39. Sermanet P, Eigen D, Zhang X, Mathieu M, Fergus R, LeCun Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint. 2013. Available from: https://arxiv.org/abs/1312.6229
40. Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017;135:1170–6. Available from: https://jamanetwork.com/journals/jamaophthalmology/fullarticle/2654969
41. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018.
42. Mateen M, Wen J, Nasrullah, Song S, Huang Z. Fundus image classification using VGG-19 architecture with PCA and SVD. Symmetry (Basel). 2018;11(1):1. Available from: http://www.mdpi.com/2073-8994/11/1/1
43. Yoo TK, Choi JY, Seo JG, Ramasubramanian B, Selvaperumal S, Kim DW. The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment. Med Biol Eng Comput. 2019;57(3):677–87.
44. Ting DSW, Cheung CY-L, Lim G, Tan GSW, Quang ND, Gan A, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations. JAMA. 2017;318:2211–23. Available from: https://jamanetwork.com/journals/jama/fullarticle/2665775
45. Abràmoff MD, Lou Y, Erginay A, Clarida W, Amelon R, Folk JC, et al. Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Invest Ophthalmol Vis Sci. 2016;57:5200–6. Available from: http://tvst.arvojournals.org/article.aspx?articleid=2565719
46. Somasundaram SK, Alli P. A machine learning ensemble classifier for early prediction of diabetic retinopathy. J Med Syst. 2017;41(12).
47. Takahashi H, Tampo H, Arai Y, Inoue Y, Kawashima H. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS One. 2017;12(6):e0179790. Available from: http://dx.plos.org/10.1371/journal.pone.0179790
48. Phasuk S, Poopresert P, Yaemsuk A, Suvannachart P, Itthipanichpong R, Chansangpetch S, et al. Automated glaucoma screening from retinal fundus image using deep learning. IEEE; 2019. p. 904–7.
49. Tan JH, Bhandary SV, Sivaprasad S, Hagiwara Y, Bagchi A, Raghavendra U, et al. Age-related macular degeneration detection using deep convolutional neural network. Futur Gener Comput Syst. 2018;87:127–35. Available from: https://www.sciencedirect.com/science/article/pii/S0167739X17319167
50. Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology. 2017;124:962–9. Available from: https://www.sciencedirect.com/science/article/pii/S0161642016317742
51. de la Torre J, Valls A, Puig D. A deep learning interpretable classifier for diabetic retinopathy disease grading. Neurocomputing. 2019.
52. Mobeen-Ur-Rehman, Khan SH, Abbas Z, Danish Rizvi SM. Classification of diabetic retinopathy images based on customised CNN architecture. In: Proceedings - 2019 Amity International Conference on Artificial Intelligence (AICAI). IEEE; 2019. p. 244–8.
53. Silva SF, Gouveia S, Gomes L, Negrão L, Quadrado MJ, Domingues JP, et al. Diabetic peripheral neuropathy assessment through texture based analysis of corneal nerve images. J Phys Conf Ser. 2015;616:012002. Available from: http://stacks.iop.org/1742-6596/616/i=1/a=012002
54. Otel I, Cardoso P, Gomes L, Gouveia S, Silva SF, Domingues JP, et al. Diabetic peripheral neuropathy assessment through corneal nerve morphometry. In: Bioengineering (ENBENG), 2013 IEEE 3rd Portuguese Meeting; 2013. Available from: https://ieeexplore.ieee.org/abstract/document/6518436/
55. Qidwai U, Salahuddin T. Classification of corneal nerve images using machine learning techniques. Int J Integr Eng. 2019;11(3). Available from: https://publisher.uthm.edu.my/ojs/index.php/ijie/article/view/4530
56. Salahuddin T, Qidwai U. Neuro-fuzzy classifier for corneal nerve images. In: IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES); 2018. p. 131–6. Available from: https://ieeexplore.ieee.org/abstract/document/8626633/
57. Sharif MS, Qahwaji R, Ipson S, Brahma A. Medical image classification based on artificial intelligence approaches: A practical study on normal and abnormal confocal corneal images. Appl Soft Comput J. 2015.
58. Ali R, Qidwai U, Ilyas SK. Use of combination of PCA and ANFIS in infarction volume growth rate prediction in ischemic stroke. In: 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES); 2018. p. 324–9. Available from: https://ieeexplore.ieee.org/abstract/document/8626629/
59. Petropoulos IN, Alam U, Fadavi H, Asghar O, Green P, Ponirakis G, et al. Corneal nerve loss detected with corneal confocal microscopy is symmetrical and related to the severity of diabetic polyneuropathy. Diabetes Care. 2013;36:3646–51. Available from: http://care.diabetesjournals.org/content/36/11/3646.short
60. Petropoulos IN, Ponirakis G, Khan A, Almuhannadi H, Gad H, Malik RA. Diagnosing diabetic neuropathy: Something old, something new. Diabetes Metab J. 2018;42(4):255. Available from: https://synapse.koreamed.org/DOIx.php?id=10.4093/dmj.2018.0056
61. Staal J, Abràmoff MD, Niemeijer M, Viergever MA, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging. 2004;501–9.
62. Hoover AD, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging. 2000;203–10. Available from: https://ieeexplore.ieee.org/abstract/document/845178/
63. Decencière E, Zhang X, Cazuguel G, Lay B, Cochener B, Trone C, et al. Feedback on a publicly distributed image database: the Messidor database. Image Anal Stereol. 2014;33(3):231–4. Available from: http://www.ias-iss.org/ojs/IAS/article/view/1155
64. Turuwhenua JT, Patel DV, McGhee CNJ. Fully automated montaging of laser scanning in vivo confocal microscopy images of the human corneal subbasal nerve plexus. Investig Ophthalmol Vis Sci. 2012;53(4):2235. Available from: http://iovs.arvojournals.org/article.aspx?doi=10.1167/iovs.11-8454
65. Petropoulos IN, Green P, Chan AWS, Alam U, Fadavi H, Marshall A, et al. Corneal confocal microscopy detects neuropathy in patients with type 1 diabetes without retinopathy or microalbuminuria. PLoS One. 2015;10(4).
66. Hossain P, Sachdev A, Malik RA. Early detection of diabetic peripheral neuropathy with corneal confocal microscopy. Lancet. 2005.
67. Teikari P, Najjar RP, Schmetterer L, Milea D. Embedded deep learning in ophthalmology: making ophthalmic imaging smarter. Ther Adv Ophthalmol. 2019;11:2515841419827172. Available from: http://www.ncbi.nlm.nih.gov/pubmed/30911733