Journal Pre-proof

Automated detection of glaucoma using optical coherence tomography angiogram images

Chan Yam Meng, E.Y.K. Ng, V. Jahmunah, Joel En Wei Koh, Oh Shu Lih, Leonard Yip Wei Leon, U. Rajendra Acharya

PII: S0010-4825(19)30352-X
DOI: https://doi.org/10.1016/j.compbiomed.2019.103483
Reference: CBM 103483
To appear in: Computers in Biology and Medicine
Received Date: 20 August 2019
Revised Date: 25 September 2019
Accepted Date: 3 October 2019

Please cite this article as: C.Y. Meng, E.Y.K. Ng, V. Jahmunah, J.E. Wei Koh, O.S. Lih, L.Y. Wei Leon, U.R. Acharya, Automated detection of glaucoma using optical coherence tomography angiogram images, Computers in Biology and Medicine (2019), doi: https://doi.org/10.1016/j.compbiomed.2019.103483.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2019 Published by Elsevier Ltd.
Automated detection of glaucoma using optical coherence tomography angiogram images

Chan Yam Meng1, E.Y.K. Ng1*, V. Jahmunah2, Joel En Wei Koh2, Oh Shu Lih2, Leonard Yip Wei Leon3, U. Rajendra Acharya2,4,5

1 Department of Mechanical and Aerospace Engineering, Nanyang Technological University.
2 Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore.
3 Department of Ophthalmology, Tan Tock Seng Hospital, National Healthcare Group, Singapore.
4 International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan.
5 Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore.

*Corresponding Address: Department of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798.
ABSTRACT

Glaucoma is a malady that occurs due to the buildup of fluid pressure in the inner eye. Detection of glaucoma at an early stage is crucial, as 111.8 million people are expected to be afflicted with glaucoma globally by 2040. Feature extraction methods have proven promising in the diagnosis of glaucoma. In this study, we have used optical coherence tomography angiogram (OCTA) images for automated glaucoma detection. Ocular sinister (OS) images were obtained from the left eye and ocular dexter (OD) images from the right eye of the subjects, yielding OS macular, OS disc, OD macular and OD disc images. In this work, the local phase quantization (LPQ) technique was applied to extract features, and information fusion and principal component analysis (PCA) were used to combine and reduce the features. Our method achieved the highest accuracy of 94.3% using LPQ coupled with PCA on right-eye optic disc images with the AdaBoost classifier. The proposed technique can aid clinicians in detecting glaucoma at an early stage. The developed model should be tested with more images before being deployed for clinical application.
Keywords – Glaucoma, Information fusion, Principal component analysis, 10-fold cross validation, Local phase quantization, Image analysis.

1. Introduction
Glaucoma is a disorder of the eye that causes damage to the optic nerve. The optic nerve, the part of the eye that transmits information from the retina to the brain [1],[2], deteriorates over time, leading to vision loss and blindness. Glaucoma is mostly caused by the buildup of fluid pressure, known as intraocular pressure (IOP), in the inner part of the eye [3]. Insufficient and defective fluid drainage in the eye leads to higher pressure, harming the optic nerve [4]. The damage to the optic nerve manifests as a decrease in the thickness of the retinal nerve fiber layer (RNFL) and an increase in the cup to disc ratio (CDR) of the optic nerve head (ONH). An increase in the CDR over time is a sign of glaucomatous progression, as it reflects the loss of healthy neuroretinal tissue [5]. Glaucoma is mostly asymptomatic until the advanced stages, but symptoms may include blurry vision, reddening of the eye, severe eye pain, as well as nausea and vomiting. About 14 million people have been reported to be afflicted with glaucoma globally [6]. This figure is projected to soar to 111.8 million worldwide by 2040 [7]. The advancement of glaucoma usually goes undetected until irreversible damage is done to the optic nerve, resulting in permanent visual loss. These reasons merit the need for accurate and timely diagnosis of glaucoma.

1.1 Imaging diagnostic approaches
Some imaging diagnostic approaches for glaucoma include optic disc photography (ODP), confocal scanning laser ophthalmoscopy (CSLO), scanning laser polarimetry (SLP) and optical coherence tomography (OCT) [8]. ODP is a cost-effective method in which a three-dimensional (3D) interpretation of the ONH is used to record structural damage in patients suspected of glaucoma [9]. CSLO is an imaging device that provides a numerical 3D image of the ONH and posterior segment. Stereometric parameters ranging from the rim area to the RNFL, produced by the device software, are important indicators used to distinguish healthy subjects from early glaucoma patients. Birefringence is the optical property of a material that can be used to identify the molecular orientation of the material by calculating the delay of polarized light passing through it [10]. This principle is useful in detecting glaucoma. SLP is non-invasive and is used to measure the RNFL successfully. The SLP device contains a
confocal scanning laser ophthalmoscope and a laser beam. The polarized light entering the birefringent RNFL produces a calculable phase shift that corresponds to the RNFL thickness, a valuable parameter for glaucoma diagnosis [11]. In OCT, an optical biopsy is produced by using low coherence interferometry to obtain a cross-sectional tomographic image of the retina. Automated segmentation of the image then allows the thickness of the RNFL to be measured. Despite being useful, these methods exhibit some shortcomings. In ODP, intra- and interobserver variability exists amongst glaucoma specialists during interpretation of the images [12]. In CSLO, the clinician has to manually outline the disc margin to extract the stereometric parameters, which is both a subjective and an arduous task. Some SLP devices only have fixed corneal compensation, whereas variable compensation using the anterior and posterior segments may provide better accuracy of SLP measurements [8]. A disadvantage of OCT is the need to dilate patients' pupils, causing them discomfort.

1.2 Non-imaging diagnostic approaches

Some non-imaging diagnostic tests for glaucoma include tonometry and Humphrey Visual Field (HVF) testing. In tonometry, the IOP, which is of paramount interest for glaucoma detection, is measured. Glaucoma is diagnosed if the pressure is 22 mm Hg or above [13]. The Goldmann Applanation Tonometer (GAT) comprises a double prism affixed to a slit lamp. The GAT is currently the yardstick for the measurement of IOP: the force needed to flatten an area of the cornea is computed in relation to the IOP. Air-puff tonometry is another tool used to measure the IOP in a similar manner, without the risk of corneal abrasions [14]. HVF testing aids clinicians in detecting and monitoring the progression of glaucoma. The visual field test maps out the area of vision and shows the changes in the field of vision of the patient's eye [15]. Dark shaded areas at the periphery of the retina presented on the visual map indicate signs of visual loss [15]. The patient is required to fixate their gaze on the fixation light in the center of the HVF machine and use the clicker provided whenever flickering lights are observed. Various factors are taken into consideration when the results are interpreted by the practitioner. Indicators such as false positives and negatives can help indicate whether the patient fixated their gaze properly during the test [15].
1.3 Digital fundus versus OCTA images
Digital fundus images acquired via a fundus camera are frequently used for screening for eye diseases such as diabetic retinopathy and glaucoma [16,17-19]. Glaucoma can be classified into borderline, moderate or advanced stages depending on the cup to disc ratio in fundus images [8]. Fundus images are advantageous as they are inexpensive to obtain. However, segmentation or textural techniques employed on these images involve the removal of blood vessels, which contribute minimally to glaucoma [20]; this questions the reliability of the method. On the other hand, machine learning techniques that are employed without removing the blood vessels may not have the flexibility to segment the fundus images deeply enough for better analysis, unlike optical coherence tomography angiography (OCTA) images. Hence, OCTA images have the upper hand over fundus images for analysis. OCTA is a new variant of OCT imaging. It is non-invasive and allows concurrent visualization of structure and blood flow [16]. It is able to generate high-quality images almost instantly. Furthermore, it is programmed to delineate individual vascular layers at various depths and to segment images covering the superficial, middle and deep layers [21] of the retina and choroid, potentially leading to better and more accurate glaucoma detection. Figure 1 shows OCTA images of normal versus glaucoma eyes, from the radial peripapillary capillary (RPC) part of the eye. Studies that employed CAD systems using OCTA for the diagnosis of glaucoma are summarized in Table 1.
Figure 1: OCTA images of (a) normal and (b) glaucoma eyes from the radial peripapillary capillary part of the eyes.
1.4 CAD for glaucoma diagnosis using OCTA images
Table 1 summarizes the computer-aided diagnostic (CAD) approaches explored by other researchers for glaucoma diagnosis using OCTA images. Yip et al. [22] acquired images of the ONH and macula and computed the microvascular density of the ONH and macula using software developed in-house. Mean vessel density values of the optic disc and macula were computed using the IBM Statistical Package for Social Sciences (SPSS) Statistics. The area under the receiver operating characteristic curve (AUROC) was used to distinguish glaucoma patients from healthy subjects. Higher vessel densities were reported amongst healthy subjects as compared to glaucoma patients at the segmented layers of the optic disc and macula. The AUROC was highest for the radial peripapillary capillary layer (0.96) of the optic disc and the deep retina (0.86) of the macula, respectively. Baudouin et al. [23] used the AngioVue Imaging System to obtain angiography images with amplitude decorrelation. Parameters including the RNFL, density of the temporal disc vessels, rim area and thickness of the ganglion cell complex were computed using the AngioVue software. The Mann-Whitney statistical test was applied to compare the mean values of the visual field (VF) parameters between normal and glaucomatous eyes. To determine whether the calculation of ONH vessel density was influenced by age, sex and IOP, linear regression was applied to the healthy group. Pearson correlation and analysis of covariance (ANCOVA) tests were employed to investigate the relationships between ONH vessel density and other parameters such as age, RNFL thickness, ganglion cell complex (GCC) and rim area. The temporal ONH vessel density, total ONH vessel density and rim area correlated strongly. Strong correlations also existed between the temporal ONH vessel density and other parameters such as total ONH density, RNFL, GCC, VF mean deviation and VF index. Bakr et al. [24] acquired OCTA images of normal and open angle glaucoma (OAG) patients, after which the optic disc perfusion, RNFL thickness and ONH parameters were studied. Statistical analysis was performed using SPSS. The results showed a significant correlation between optic disc perfusion and the glaucoma group, compared to healthy subjects. The optic disc perfusion was also significantly correlated with the VF and RNFL.
Rao et al. [25] acquired stereoscopic optic disc images with a digital fundus camera, which were analysed by two glaucoma experts with all clinical and eye data concealed. OCTA images were acquired from the AngioVue Imaging System. The split-spectrum amplitude-decorrelation angiography (SSADA) algorithm was employed to produce merged three-dimensional OCTA volumes, and vessel densities were computed in the different layers of the retina and ONH. The Shapiro-Wilk statistical test was employed, and sensitivities were computed at fixed specificities of 80% and 95% for all parameters. The AUROC was used to evaluate the ability of OCTA vessel densities to differentiate between glaucomatous and normal eyes. The results revealed that the diagnostic ability of the OCTA vessel density parameters was modest: the AUROC ranged between 0.56 and 0.64 for the nasal and temporal regions of the macula, and was 0.59, 0.73, 0.70 and 0.89 for the superonasal, average inside-disc, (nasal, superonasal, temporal) and inferotemporal regions of the ONH vessel density, respectively. Pre-treatment IOP in glaucoma patients had a meaningful effect on the AUROC of ONH vessel density. Anantrasirichai et al. [26] explored the performance of three different groups of features: layer thickness, texture features, and the combination of layer thickness and texture features. They employed the dual tree complex wavelet transform and PCA techniques, and obtained the highest accuracy of 85.15% using a support vector machine (SVM) classifier with the combined (layer thickness and texture) features. From Table 1, it can be noted that most studies used statistical methods to establish correlations between optic disc perfusion and visual parameters, or to compare optic disc perfusion/vessel density between glaucoma and healthy groups. Hence, this is the first study which uses a machine learning method for the detection of glaucoma using OCTA images.

Table 1: Summarized studies of CAD systems using OCTA for the diagnosis of glaucoma.
Anantrasirichai et al. [26], 2013
  Techniques: Dual tree complex wavelet transform; texture features; PCA; SVM.
  Participants: Normal: 14 eyes; Glaucoma: 10 eyes.
  Results: Combination of layer thickness and texture features: accuracy of 85.15%.

Baudouin et al. [23], 2016
  Techniques: Vessel density of optic nerve head; density of temporal disc vessel; visual field parameters; statistical analysis; linear regression analysis.
  Participants: Normal: 30 healthy subjects; Glaucoma: 50 patients.
  Results: Total and temporal ONH vessel densities decreased in glaucoma patients compared to the healthy group. Rim area, temporal ONH vessel density and total ONH vessel density correlate largely. Large correlation exists between temporal and total ONH density, RNFL, GCC, VF mean deviation and VF index.

Rao et al. [25], 2016
  Techniques: SSADA algorithm; AUROC measured; sensitivities of specific vessel densities in ONH, peripapillary and macular regions examined; ROC regression to analyse covariates of diagnostic competencies; Shapiro-Wilk statistical test.
  Participants: Normal: 53 healthy subjects (78 eyes); Primary open angle glaucoma: 39 patients (64 eyes).
  Results: ONH vessel density: AUROC of 0.59, 0.73, 0.70 and 0.89 for superonasal, average inside disc, (nasal, superonasal, temporal) and inferotemporal areas. Macula: AUROC ranging between 0.56 and 0.64 for nasal and temporal regions. Mean deviation is negatively correlated with AUROC of vessel densities in all regions. Pre-treatment IOP shows a significant effect on AUROC of ONH vessel density.

Bakr et al. [24], 2018
  Techniques: Visual field assessment; optic disc perfusion (radial peripapillary capillaries vessel density); RNFL thickness; ONH analysis; SPSS statistical test.
  Participants: Normal: 10 eyes; Open angle glaucoma: 12 patients.
  Results: Statistically significant correlation between optic disc perfusion and glaucoma group compared to healthy group. Statistically significant relationship between optic disc perfusion and visual field and RNFL thickness.

Yip et al. [22], 2019
  Techniques: Automatic segmentation of optic disc and macula; vessel density computation; Gaussian band-pass filter; microvascular density; statistical analysis using IBM SPSS Statistics.
  Participants: Normal: 29 healthy subjects (58 eyes); Glaucoma: 24 patients (32 eyes).
  Results: Higher vessel density at the macula and optic disc in the healthy group compared to the glaucoma group. Optic disc: highest AUROC for radial peripapillary capillary, 0.96. Macula: highest AUROC for deep retina, 0.86.

Present work
  Techniques: Adaptive histogram equalization; local phase quantization; information fusion; principal component analysis; 10-fold cross validation.
  Participants: OD disc images: Normal: 157 healthy subjects, Glaucoma: 52 patients. OS disc images: Normal: 149 healthy subjects, Glaucoma: 53 patients. OD macula images: Normal: 80 healthy subjects, Glaucoma: 50 patients. OS macula images: Normal: 78 healthy subjects, Glaucoma: 41 patients.
  Results: 94.3% accuracy with PCA combined with LPQ feature extraction, on OD disc images.
2. Data used
The OCTA images used in this study were obtained from the Department of Ophthalmology, Tan Tock Seng Hospital, Singapore. Approval was granted by the NHG Domain Specific Review Board (DSRB) (NHG DSRB Ref: 2019/00138) for the use of the images in this study. The images were captured with the AngioVue Enhanced Microvascular Imaging System from Optovue. Figure 2 depicts the four layers that were segmented from the OCTA macula images. OS (ocular sinister) images were obtained from the left eye and OD (ocular dexter) images from the right eye of healthy subjects and patients. Hence, four types of images were examined in this study: OS macular, OS disc, OD macular and OD disc. OD disc images were obtained from 157 normal and 52 glaucoma subjects, OS disc images from 149 normal and 53 glaucoma subjects, OD macula images from 80 normal and 50 glaucoma subjects, and OS macula images from 78 normal and 41 glaucoma subjects. The images are further classified into different retinal layers as shown in Table 2.
Table 2: Classification of retinal layers on OCTA images.

Disc                                      Macula
Choroid Disc                              Choroid Retina
Nerve head                                Deep
Radial peripapillary capillaries (RPC)    Outer Retina
Vitreous                                  Superficial
Figure 2: Segmentation layers within OCTA macula images: (a) superficial layer, (b) deep layer, (c) outer retina layer, (d) choroid capillaries layer.

3. Methodology

3.1 Image Pre-processing

Prior to feature extraction, image pre-processing was performed to ensure that all images were normalized with respect to each other before the actual analysis. Three pre-processing steps were employed to achieve image normalization: the OCTA images were first resized to 300 x 300 pixels, then converted to grayscale, and lastly contrast limited adaptive histogram equalization (CLAHE) was applied.
3.1.1 Contrast Limited Adaptive Histogram Equalization (CLAHE)
The quality of OCTA images may be degraded by artefacts and noise, resulting in poor contrast. Hence, contrast enhancement techniques such as CLAHE are applied to refine image quality. Contrast enhancement improves the differences between the intensities of pixels within close proximity [27]. The speed of determining and identifying diseases using OCTA depends on the quality of the images. Histogram equalization adjusts the grey levels of an image according to the probability distribution and stretches the range of the distribution to enhance
the contrast and visual effects [28]. CLAHE specifically targets the entropy of an image and achieves the best equalization using the maximum entropy. As its name suggests, the contrast-limited variant eliminates a common issue of ordinary adaptive histogram equalization (AHE), which tends to over-amplify nearly uniform regions within an image, since the histograms of such regions are usually dense and highly concentrated. CLAHE allows the user to set a predefined value to restrict and limit the contrast amplification, preventing the over-amplification of nearly uniform regions within an image [29].
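The three pre-processing steps above can be illustrated with the minimal Python/OpenCV sketch below; the 300 x 300 target size follows the paper, while the CLAHE clip limit and tile size are assumed values not reported by the authors.

```python
import cv2

def preprocess_octa(path, size=(300, 300), clip_limit=2.0, tile_grid=(8, 8)):
    """Resize, convert to grayscale, and apply CLAHE to one OCTA image.
    clip_limit and tile_grid are assumed defaults, not values from the paper."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)          # load the exported OCTA image
    img = cv2.resize(img, size)                       # step 1: resize to 300 x 300 pixels
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # step 2: convert to grayscale
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)                          # step 3: contrast-limited AHE
```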
3.2 Feature extraction

Features were then extracted from the images using local phase quantization (LPQ) [30], an analysis technique based on the blur invariance of the Fourier phase spectrum [28], calculated in local image windows. In digital image processing, the model for spatially shift-invariant blurring of an image f(x,y), resulting in an observed image g(x,y), can be expressed as:

g(x,y) = f(x,y) * h(x,y) + n(x,y),
where h(x,y) is the point spread function (PSF) of the model, n(x,y) is noise and * denotes 2D convolution. The LPQ features use the local phase information extracted by the 2D discrete Fourier transform, computed over a rectangular neighbourhood N_x centred at each pixel position x, where only four frequency points are considered. This results in a vector F_x of four complex coefficients. The phase information in the Fourier coefficients is recorded by inspecting the signs of the real and imaginary parts of each component of F_x using a simple scalar quantizer. The resulting eight binary coefficients are represented as integers between 0 and 255. A histogram is then formed using all the pixels of the image and is utilized as a 256-dimensional feature vector [31]. Hence, in this study, LPQ is computed locally at each pixel neighbourhood and the resulting codes are accumulated into a histogram. Figure 3 explains the LPQ feature extraction algorithm.
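A simplified sketch of the LPQ descriptor described above (following Ojansivu and Heikkilä [30], without the optional decorrelation step used in some variants) is given below; the window size is an assumed parameter, as the paper does not report it.

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(img, win_size=5):
    """256-bin LPQ histogram of a grayscale image (simplified, no decorrelation).
    win_size is an assumed parameter; the paper does not report its value."""
    img = np.float64(img)
    r = (win_size - 1) // 2
    x = np.arange(-r, r + 1)[np.newaxis, :]
    alpha = 1.0 / win_size                         # lowest non-zero frequency

    # Separable 1-D filters realizing the local short-term Fourier transform
    w0 = np.ones_like(x) + 0j
    w1 = np.exp(-2j * np.pi * alpha * x)
    w2 = np.conj(w1)

    # Responses at the four frequency points u1=(a,0), u2=(0,a), u3=(a,a), u4=(a,-a)
    f1 = convolve2d(convolve2d(img, w0.T, mode='valid'), w1, mode='valid')
    f2 = convolve2d(convolve2d(img, w1.T, mode='valid'), w0, mode='valid')
    f3 = convolve2d(convolve2d(img, w1.T, mode='valid'), w1, mode='valid')
    f4 = convolve2d(convolve2d(img, w1.T, mode='valid'), w2, mode='valid')

    # Quantize the signs of real and imaginary parts -> 8 bits -> codes in 0..255
    resp = np.stack([f1.real, f1.imag, f2.real, f2.imag,
                     f3.real, f3.imag, f4.real, f4.imag], axis=-1)
    codes = (resp >= 0).astype(np.int64) @ (2 ** np.arange(8))

    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()                       # normalized 256-D feature vector
```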
Figure 3: Function of the LPQ feature extraction algorithm.

After the LPQ features of the four (choroid disc, nerve head, RPC and vitreous) images are computed individually, data fusion techniques [32] are used to aggregate the data for efficient analysis. The LPQ features of the images are fused separately by addition, by multiplication, and by concatenation followed by PCA. Principal component analysis (PCA) [33] is a method in which a group of features is converted into a group of linearly uncorrelated values. It works by geometrically projecting high-dimensional data into a lower-dimensional space [34]. PCA can summarize crucial information while retaining the variability of the data. Thus, it is prevalently used as a dimension reduction technique and to solve classification problems. Unnecessary information is eradicated by converting the original data into eigenvectors. Although some information can be lost through dimensionality reduction, the converted data is more condensed and has the potential to enhance classification performance [26]. It is also commonly used due to its simplicity, hence it is employed in this study. Cross-validation is utilized to assess and evaluate the performance of the developed model and to reduce the chance of overfitting. In this study, the data were tested with both 5-fold (5 iterations) and 10-fold (10 iterations) cross-validation. 10-fold cross-validation [35] gave a better performance and was used to evaluate the proposed method of glaucoma detection. The average over the ten folds is used to evaluate the model accuracy, sensitivity, specificity and positive predictive value. Z-score normalization [36] was then applied, such that the original unstructured data are presented within a structured range.

3.3 Feature selection

Highly significant features were then selected using a decision tree classifier [37] based on predictor importance.
This enables the trained model to predict the classification accurately. A decision tree-based learning algorithm is one of the most commonly used supervised learning methods. Decision trees can be scaled according to the size of the database, as the tree size is dependent on the size of the database [38]. The decision tree is simple and supports straightforward interpretation [39]. Unlike the random forest method, in which the feature selections cannot be controlled and components are chosen on a random basis, the decision tree classifier allows observation of, and changes to, the internal controls [39]. This permits better reproducibility of the method for use in subsequent analysis. Figure 4 summarizes the flow diagram of the proposed method for the diagnosis of glaucoma.
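As an illustration of the data fusion, PCA and predictor-importance steps described above, a minimal sketch in Python with scikit-learn is given below; the array layout (one 256-dimensional LPQ histogram per layer per subject), the function names and the number of retained components are assumptions made for illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fuse_features(lpq_layers, method="pca", n_components=20):
    """Fuse per-layer 256-D LPQ histograms for every subject.
    lpq_layers: array of shape (n_subjects, n_layers, 256); parameters are assumed."""
    if method == "add":                      # element-wise addition across layers
        return lpq_layers.sum(axis=1)
    if method == "multiply":                 # element-wise multiplication across layers
        return lpq_layers.prod(axis=1)
    # "pca": concatenate the layer histograms, then project onto leading components
    flat = lpq_layers.reshape(len(lpq_layers), -1)
    return PCA(n_components=n_components).fit_transform(flat)

def select_features(features, labels, n_select=6):
    """Rank features by decision-tree predictor importance and keep the top n_select."""
    tree = DecisionTreeClassifier(random_state=0).fit(features, labels)
    top = np.argsort(tree.feature_importances_)[::-1][:n_select]
    return features[:, top]
```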
[Figure 4 flow: Preprocessing of eye images → Feature extraction using LPQ → Data fusion/PCA → 10-fold cross validation → Z-score normalization → Feature selection using Decision Tree → Classification using AdaBoost]

Figure 4: Flow diagram of the proposed method to detect glaucoma.
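Continuing the flow of Figure 4, the z-score normalization, 10-fold cross-validation and AdaBoost classification stages might be sketched as follows; the scikit-learn pipeline and the AdaBoost hyper-parameters are assumptions, as the paper does not report implementation details.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_model(x, y):
    """10-fold cross-validated AdaBoost on the selected features.
    x: (n_subjects, n_selected_features); y: 0 = normal, 1 = glaucoma."""
    model = make_pipeline(
        StandardScaler(),                                    # z-score normalization
        AdaBoostClassifier(n_estimators=50, random_state=0)  # assumed hyper-parameters
    )
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_validate(model, x, y, cv=folds,
                            scoring=("accuracy", "recall", "precision"))
    # Report the average over the ten folds, as in Table 3
    return {name: vals.mean() for name, vals in scores.items()
            if name.startswith("test_")}
```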
3.4 Classification

Classification was performed using the AdaBoost classifier [40-41]. This is a self-adjusting algorithm that works by generating a set of classifiers to improve the performance of weak classifiers. It automatically adjusts to the error rate of the algorithm to modify the probability distribution of each sample: a larger weight is assigned to wrongly classified samples and vice versa, enabling a strong classifier to be generated. The parameters accuracy, sensitivity, specificity and precision were used for evaluation. Table 3 presents the best classification results achieved by the three fusion methods for the OS macular, OS disc, OD macular and OD disc OCTA images.

4. Results and Discussion

Comparing the results in Table 3, it is evident that the highest classification accuracy was achieved with the PCA technique coupled with the LPQ feature extraction method, applied to the OD disc images. Only six features were needed to yield the highest accuracy of 94.3%. Comparing the three techniques, it is noteworthy that PCA yielded the highest accuracy compared to addition and multiplication for each type of image.
Table 3: Best 10-fold results obtained using different methods with LPQ for each type of image.

Type of image   Type of method   No. of features   Mean accuracy (%)   Precision (%)   Sensitivity (%)   Specificity (%)
OS macular      Multiplication   6                 76.4                83.8            79.6              71.0
OS macular      Addition         6                 72.2                82.5            74.5              68.5
OS macular      PCA              6                 84.9                92.4            84.6              85.5
OS disc         Multiplication   6                 85.7                94.9            85.2              87.0
OS disc         Addition         6                 85.2                93.4            85.9              83.3
OS disc         PCA              6                 93.2                95.4            96.0              86.0
OD macular      Multiplication   6                 80.8                87.7            80.0              82.0
OD macular      Addition         6                 78.5                85.7            78.8              78.0
OD macular      PCA              4                 91.5                95.4            91.2              92.0
OD disc         Multiplication   6                 90.9                94.6            93.8              82.3
OD disc         Addition         6                 89.5                95.6            90.5              86.0
OD disc         PCA              6                 94.3                98.2            94.4              94.0
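For reference, the accuracy, sensitivity, specificity and precision values reported in Table 3 can be computed from each fold's confusion-matrix counts as in the short, generic sketch below (fold values are then averaged over the ten folds); this is an illustrative formula, not the authors' code.

```python
def fold_metrics(tp, fp, tn, fn):
    """Metrics as reported in Table 3, from one fold's confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall for the glaucoma (positive) class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)        # positive predictive value
    return accuracy, sensitivity, specificity, precision
```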
The proposed CAD system employed a state-of-the-art feature extraction method for the classification of OCTA images into glaucoma and normal classes. PCA with LPQ feature extraction and 10-fold cross-validation obtained the highest performance (accuracy of 94.3%). With fewer than six features the accuracy was below 94.3%, and with more than six features the accuracy was also below 94.3%; hence, we have chosen six features in our study. The graphical representation of glaucoma diagnosis is presented in Figure 5. The images are loaded to the system by pressing the 'load images' button, and the diagnosis (Glaucoma) is displayed after clicking the 'diagnose' button. The proposed method is also novel compared to the other techniques discussed in Table 1. The main advantages of the proposed method are as follows:
a. As can be seen from Table 1, we have used the highest number of OCTA images and also obtained the highest performance (94.3%).
b. Our method is simple to use and required only six features to attain the highest performance.
c. The method is robust, as we have employed ten-fold cross-validation.
d. The method is novel and can be extended to the detection of other eye diseases such as diabetic retinopathy and age-related macular degeneration.
e. To the best of our knowledge, this is the first study to use machine learning for the detection of glaucoma using OCTA images.
The limitations of the proposed system are as follows:
a. For the system to be clinically significant, a larger number of images needs to be used.
b. The system needs to be tested with images from other races as well (only Asian subjects were tested).
c. The feature extraction step is computationally intensive. Other feature extraction methods such as the empirical wavelet transform [42], nonlinear methods [43], wavelet transforms [48,50], and higher order spectra [49] can also be employed.
In future, we intend to develop a comprehensive model using deep learning techniques, which does not require separate pre-processing, feature extraction and classification steps, with more images [44-47,51].
Figure 5: Graphical representation of glaucoma diagnosis.

5. Conclusion

Accurate and early diagnosis of glaucoma helps to prevent permanent loss of vision. In this study, we have extracted LPQ features coupled with data fusion techniques from OCTA images. Feature selection and classification were then done using decision tree and AdaBoost classifiers, respectively. Our proposed method obtained the highest accuracy of 94.3% with six features using PCA on OD disc OCTA images. The proposed technique is also robust, as it has been validated using a 10-fold cross-validation strategy. Although the proposed method is promising, the manual extraction of features is tedious. To mitigate this, we intend to use more data and apply deep learning models for the detection of glaucoma in the future.
6. References

[1] A. Sommer et al., "Relationship Between Intraocular Pressure and Primary Open Angle Glaucoma Among White and Black Americans: The Baltimore Eye Survey," JAMA Ophthalmol., vol. 109, no. 8, pp. 1090–1095, Aug. 1991.
[2] H. A. Quigley and A. T. Broman, "The number of people with glaucoma worldwide in 2010 and 2020," Br. J. Ophthalmol., vol. 90, no. 3, pp. 262–267, Mar. 2006.
[3] U. R. Acharya, E. Y. K. Ng, and J. S. Suri, Image Modeling of the Human Eye. Artech House, 2008.
[4] T. C. Lim, S. Chattopadhyay, and U. R. Acharya, "A survey and comparative study on the instruments for glaucoma detection," Med. Eng. Phys., vol. 34, no. 2, pp. 129–139, 2012.
[5] J. A. Giaconi, S. K. Law, A. L. Coleman, and J. Caprioli, Pearls of Glaucoma Management. Berlin, Heidelberg: Springer-Verlag, 2010.
[6] The Eye Diseases Prevalence Research Group, "Prevalence of Open-Angle Glaucoma Among Adults in the United States," Arch. Ophthalmol., vol. 122, no. 4, pp. 532–538, 2004.
[7] Y. C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, and C. Y. Cheng, "Global prevalence of glaucoma and projections of glaucoma burden through 2040: A systematic review and meta-analysis," Ophthalmology, vol. 121, no. 11, pp. 2081–2090, 2014.
[8] P. Sharma, P. A. Sample, L. M. Zangwill, and J. S. Schuman, "Diagnostic Tools for Glaucoma Detection and Management," Surv. Ophthalmol., vol. 53, no. 6 Suppl., pp. 17–32, 2008.
[9] M. Fingeret, F. A. Medeiros, R. Susanna, and R. N. Weinreb, "Five rules to evaluate the optic disc and retinal nerve fiber layer for glaucoma," Optometry, vol. 76, no. 11, pp. 661–668, 2005.
[10] J. Bergström, Experimental Characterization Techniques. 2015.
[11] M. Sehi, D. C. Guaqueta, W. J. Feuer, and D. S. Greenfield, "Scanning Laser Polarimetry with Variable and Enhanced Corneal Compensation in Normal and Glaucomatous Eyes," Am. J. Ophthalmol., vol. 143, no. 2, pp. 272–279, 2007.
[12] D. E. Gaasterland et al., "The Advanced Glaucoma Intervention Study (AGIS): 10. Variability among academic glaucoma subspecialists in assessing optic disc notching," Trans. Am. Ophthalmol. Soc., vol. 99, pp. 177–184; discussion 184–185, 2001.
[13] Q. K. Farhood, "Comparative evaluation of intraocular pressure with an air-puff tonometer versus a Goldmann applanation tonometer," Clin. Ophthalmol., vol. 7, no. 1, pp. 23–27, 2012.
[14] P. Asman, A Textbook of Clinical Ophthalmology, vol. 81, no. 4, 2003.
[15] M. Y. Kahook and R. J. Noecker, "How do you interpret a 24-2 Humphrey Visual Field printout?," Glaucoma Today, pp. 57–62, 2007.
[16] H. A. Khan et al., "A major review of optical coherence tomography angiography," Expert Rev. Ophthalmol., vol. 12, no. 5, pp. 373–385, 2017.
[17] U. R. Acharya, S. Bhat, J. E. W. Koh, S. V. Bhandary, and H. Adeli, "A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images," Comput. Biol. Med., vol. 88, pp. 72–83, 2017.
[18] Y. Hagiwara, J. E. W. Koh, J. H. Tan, S. V. Bhandary, A. Laude, E. J. Ciaccio, L. Tong, and U. R. Acharya, "Computer-aided diagnosis of glaucoma using fundus images: A review," Comput. Methods Programs Biomed., vol. 165, pp. 1–12, 2018.
[19] U. Raghavendra, H. Fujita, S. V. Bhandary, A. Gudigar, J. H. Tan, and U. R. Acharya, "Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images," Inf. Sci., vol. 441, pp. 41–49, 2018.
[20] M. Mishra, M. K. Nath, and S. Dandapat, "Glaucoma Detection from Color Fundus Images," Int. J. Comput. Commun. Technol., vol. 2, no. VI, pp. 7–10, 2011.
[21] J. J. Park, B. T. Soetikno, and A. A. Fawzi, "Characterization of the middle capillary plexus using optical coherence tomography angiography in healthy and diabetic eyes," Retina, vol. 36, no. 11, pp. 2039–2050, 2016.
[22] V. C. H. Yip et al., "Optical Coherence Tomography Angiography of Optic Disc and Macula Vessel Density in Glaucoma and Healthy Eyes," J. Glaucoma, vol. 28, no. 1, pp. 80–87, 2019.
[23] C. Baudouin, E. Brasnu, A. Labbé, P. Zéboulon, and P.-M. Lévêque, "Optic Disc Vascularization in Glaucoma: Value of Spectral-Domain Optical Coherence Tomography Angiography," J. Ophthalmol., vol. 2016, pp. 1–9, 2016.
[24] A. Bakr, M. Farid, A. Al, and H. M. Bayoumy, "Assessment of Optic Disc Perfusion in Open Angle Glaucoma (OAG) Using Optical Coherence Tomography Angiography (OCTA)," Egypt. J. Hosp. Med., vol. 73, pp. 7638–7643, 2018.
[25] H. L. Rao et al., "Regional Comparisons of Optical Coherence Tomography Angiography Vessel Density in Primary Open-Angle Glaucoma," Am. J. Ophthalmol., vol. 171, pp. 75–83, 2016.
[26] N. Anantrasirichai, A. Achim, J. E. Morgan, I. Erchova, and L. Nicholson, "SVM-based texture classification in Optical Coherence Tomography," Proc. Int. Symp. Biomed. Imaging, pp. 1332–1335, 2013.
[27] B. Bhan and S. Patel, "Efficient Medical Image Enhancement using CLAHE Enhancement and Wavelet Fusion," Int. J. Comput. Appl., vol. 167, no. 5, pp. 1–5, 2017.
[28] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, 2002.
[29] S. M. Pizer et al., "Adaptive histogram equalization and its variations," Comput. Vision Graph. Image Process., vol. 39, no. 3, pp. 355–368, 1987.
[30] V. Ojansivu and J. Heikkilä, "Blur insensitive texture classification using local phase quantization," Lect. Notes Comput. Sci., vol. 5099, pp. 236–243, 2008.
[31] Z. Zhang, F. Li, M. Liu, and P. K. Yadav, "Image Matching Based on Local Phase Quantization Applied for Measuring the Tensile Properties of High Elongation Materials," Math. Probl. Eng., vol. 2016, 2016.
[32] F. Castanedo, "A Review of Data Fusion Techniques," Sci. World J., vol. 2013, p. 19, 2013.
[33] I. T. Jolliffe, Principal Component Analysis, 2nd ed., Springer Series in Statistics, 2002.
[34] R. Indhumathi and D. S. Sathiyabama, "Reducing and Clustering high Dimensional Data through Principal Component Analysis," Int. J. Comput. Appl., vol. 11, no. 8, pp. 1–4, 2010.
[35] J. Brownlee, "A Gentle Introduction to k-fold Cross Validation," Machine Learning Mastery, Statistical Methods, 2018.
[36] S. Gopal, K. Patro, and K. Kumar, "Normalization: A Preprocessing Stage."
[37] J. Vicnesh and Y. Hagiwara, "Accurate Detection of Seizure Using Nonlinear Parameters Extracted From EEG Signals," J. Mech. Med. Biol., vol. 19, no. 1, p. 1940004, 2019.
[38] K.-C. Lu and D.-L. Yang, "Image Processing and Image Mining using Decision Trees," J. Inf. Sci. Eng., vol. 25, pp. 989–1003, 2009.
[39] J. Ali, R. Khan, N. Ahmad, and I. Maqsood, "Random Forests and Decision Trees," Int. J. Comput. Sci. Issues, vol. 9, no. 5, 2012.
[40] Y. Freund, "Boosting a weak learning algorithm by majority," Inf. Comput., vol. 121, pp. 256–285, 1995.
[41] S. Y. Kim and A. Upneja, "Predicting restaurant financial distress using decision tree and AdaBoosted decision tree models," Econ. Model., vol. 36, pp. 354–362, 2014.
[42] S. Maheshwari, R. B. Pachori, and U. R. Acharya, "Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images," IEEE J. Biomed. Health Inform., vol. 21, no. 3, pp. 803–813, 2017.
[43] U. R. Acharya, V. K. Sudarshan, H. Adeli, J. Santhosh, J. E. W. Koh, S. D. Puthankatti, and A. Adeli, "A novel depression diagnosis index using nonlinear features in EEG signals," Eur. Neurol., vol. 74, no. 1–2, pp. 79–83, 2015.
[44] U. R. Acharya, H. Fujita, S. L. Oh, U. Raghavendra, J. H. Tan, M. Adam, A. Gertych, and Y. Hagiwara, "Automated identification of shockable and non-shockable life-threatening ventricular arrhythmias using convolutional neural network," Future Gener. Comput. Syst., vol. 79, pp. 952–959, 2018.
[45] S. L. Oh, Y. Hagiwara, U. Raghavendra, R. Yuvaraj, N. Arunkumar, M. Murugappan, and U. R. Acharya, "A deep learning approach for Parkinson's disease diagnosis from EEG signals," Neural Comput. Appl., pp. 1–7, 2018.
[46] U. R. Acharya, S. L. Oh, Y. Hagiwara, J. H. Tan, H. Adeli, and D. P. Subha, "Automated EEG-based screening of depression using deep convolutional neural network," Comput. Methods Programs Biomed., vol. 161, pp. 103–113, 2018.
[47] S. L. Oh, J. Vicnesh, E. J. Ciaccio, R. Yuvaraj, and U. R. Acharya, "Deep convolutional neural network model for automated diagnosis of schizophrenia using EEG signals," Appl. Sci., vol. 9, no. 14, p. 2870, 2019.
[48] E. S. Jayachandran, K. P. Joseph, and U. R. Acharya, "Analysis of Myocardial Infarction Using Discrete Wavelet Transform," J. Med. Syst., vol. 34, no. 6, pp. 985–992, 2010.
[49] R. J. Martis, U. R. Acharya, C. M. Lim, K. M. Mandana, A. K. Ray, and C. Chakraborty, "Application of higher order cumulant features for cardiac health diagnosis using ECG signals," Int. J. Neural Syst., vol. 23, no. 4, p. 1350014, 2013.
[50] U. R. Acharya, O. Faust, S. V. Sree, F. Molinari, R. Garberoglio, and J. S. Suri, "Cost-Effective and Non-Invasive Automated Benign & Malignant Thyroid Lesion Classification in 3D Contrast-Enhanced Ultrasound Using Combination of Wavelets and Textures: A Class of ThyroScan™ Algorithms," Technol. Cancer Res. Treat., vol. 10, no. 4, pp. 371–380, 2011.
[51] J. H. Tan, U. R. Acharya, S. V. Bhandary, K. C. Chua, and S. Sivaprasad, "Segmentation of optic disc, fovea and retinal vasculature using a single convolutional neural network," J. Comput. Sci., vol. 20, pp. 70–79, 2017.
Highlights
• Automated detection of glaucoma using OCTA images
• OS macular, OS disc, OD macular and OD disc images are considered
• LPQ, information fusion, PCA, and AdaBoost classifier are used
• Obtained an accuracy of 94.3% for OD optic disc images
Statement of conflict of interest The authors of the paper “Automated detection of glaucoma using Optical Coherence Tomography Angiography” hereby declare that there are no conflicts of interest.
Corresponding author
*E.Y.K. Ng
School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore 639798
Telephone: (+65) 6790 4455; Email: [email protected]
Type of article: Full length research
Declarations

Authors' contributions: Y.M. & L.Y. did data collection. Y.M., V.J., J.E., E.Y.K.N. and O.H. implemented the model and analyzed the data. Y.M. wrote the manuscript with critical input from E.Y.K.N. and U.R. All authors read and approved the final manuscript.

Agreements: All authors approved the final manuscript.

Competing interests: The authors declare that they have no competing interests.

Ethics approval and consent to participate: This study solely processed OCTA images, and the use of the OCTA images was approved by the Institutional Review Board (NHG DSRB Ref: 2019/00138). The data were analyzed anonymously.

Funding: N.A.