Computer Methods and Programs in Biomedicine 130 (2016) 65–75
journal homepage: www.intl.elsevierhealth.com/journals/cmpb
Automated anterior segment OCT image analysis for Angle Closure Glaucoma mechanisms classification

Swamidoss Issac Niwas a,∗, Weisi Lin a, Xiaolong Bai b,c, Chee Keong Kwoh a, C.-C. Jay Kuo d, Chelvin C. Sng f, Maria Cecilia Aquino e, Paul T.K. Chew f

a School of Computer Engineering, Nanyang Technological University (NTU), 639798 Singapore, Singapore
b The State Key Laboratory of Fluid Power Transmission and Control, Zhejiang University, Hangzhou 310027, People's Republic of China
c School of Electrical and Electronics Engineering, Nanyang Technological University (NTU), 639798 Singapore, Singapore
d Ming Hsieh Department of Electrical Engineering, Signal and Image Processing Institute, University of Southern California, Los Angeles, CA 90089, USA
e Eye Surgery Centre, National University Health System (NUHS), 119228 Singapore, Singapore
f Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore (NUS), 119228 Singapore, Singapore
Article info

Article history: Received 25 November 2015; Received in revised form 10 February 2016; Accepted 17 March 2016

Keywords: Angle closure glaucoma; Compound image transforms; Feature selection; Segmentation-free method; Machine learning classifier

Abstract

Background and objectives: Angle closure glaucoma (ACG) is an eye disease prevalent throughout the world. ACG is caused by four major mechanisms: exaggerated lens vault, pupil block, thick peripheral iris roll, and plateau iris. Identifying the specific mechanism in a given patient is important because each mechanism requires a specific medication and treatment regimen. Traditional methods of classifying these four mechanisms are based on clinically important parameters measured from anterior segment optical coherence tomography (AS-OCT) images, which rely on accurate segmentation of the AS-OCT image and identification of the scleral spur in the segmented AS-OCT images by clinicians.

Methods: In this work, a fully automated method of classifying the different ACG mechanisms based on AS-OCT images is proposed. Since manual diagnosis is mainly based on the morphology of each mechanism, in this study a complete set of morphological features is extracted directly from raw AS-OCT images using compound image transforms, from which a small set of informative features with minimum redundancy is selected and fed into a Naïve Bayes Classifier (NBC).

Results: We achieved an overall accuracy of 89.20% and 85.12% with leave-one-out cross-validation and 10-fold cross-validation, respectively. This study proposes a fully automated way of classifying the different ACG mechanisms, which requires no intervention by doctors and is less subjective than existing methods.

Conclusions: We directly extracted compound image transform features from the raw AS-OCT images without any segmentation or parameter measurement. Our method provides a completely automated and efficient way of classifying the different ACG mechanisms.
∗ Corresponding author. Tel.: +65 98677034.
E-mail addresses: [email protected] (S.I. Niwas), [email protected] (W. Lin), [email protected] (X. Bai), [email protected] (C.K. Kwoh), [email protected] (C.-C. Jay Kuo), [email protected] (C.C. Sng), [email protected] (M.C. Aquino), [email protected] (P.T.K. Chew).
http://dx.doi.org/10.1016/j.cmpb.2016.03.018
0169-2607/© 2016 Elsevier Ireland Ltd. All rights reserved.
1. Introduction
Glaucoma is an optic neuropathy that causes a gradual loss of vision due to damage to the retinal cells and the retinal nerve fiber layer of the eye [1]. It is mainly associated with increased intraocular pressure (IOP) of the fluid inside the eye [2]. If left untreated, the damaged optic nerve does not recover, leading to eventual blindness; however, reducing IOP may slow disease progression. Glaucoma is the second major cause of visual impairment and blindness worldwide. Globally, the number of glaucoma patients was estimated at 60.5 million in 2010 and may increase to almost 80 million by 2020 [3]. The timing of intervention is a key element in glaucoma treatment, and the disease is usually asymptomatic in its early stage, so early diagnosis is required to slow progression toward complete vision loss.

There are three main types of glaucoma: angle closure glaucoma (ACG), open angle glaucoma and developmental glaucoma. ACG is a specific type of glaucoma in which a sudden rise in IOP is experienced, usually as a result of poor drainage: the eye produces more aqueous humor than it can remove, causing a build-up of fluid [4]. ACG is more prevalent than the other two types. An eye that is susceptible to ACG usually has noticeable features which can be identified upon visual examination [5]. Anterior chamber angle (ACA) assessment is mainly used for the detection of ACG, and the ACA can be visualized and quantified by the anterior segment optical coherence tomography (AS-OCT) imaging technique [6,7].

It has been observed that ACG can result from one or more mechanisms in the anterior segment of the eye: exaggerated lens vault (LV), pupil block (PB), thick peripheral iris roll (PIR), and plateau iris (PL) [6]. Laser peripheral iridotomy (LPI) is performed at an early and appropriate time for eyes with anatomically narrow angles. Ophthalmologists and clinicians have found that LPI is not always suitable for treating ACG patients with different mechanisms, and many patients continue to have appositional angle closure after LPI treatment [7]. The optimal treatments for ACG patients with different mechanisms should differ. It is therefore important to classify these four ACG mechanisms effectively, not only to provide different treatments for patients with different mechanisms, but also to help doctors tailor the best treatment for each mechanism.

It has been demonstrated that these four mechanisms of ACG have different patterns of angle configuration with different anterior chamber (AC) parameters [7]. AS-OCT provides excellent repeatability and reproducibility for imaging the anterior segment of the eye, and AS-OCT was used to measure
the AC parameters in the four different ACG mechanisms mentioned above [6]. Wirawan et al. used key parameters measured from the segmented AS-OCT images by customized software, such as the scleral spur-to-scleral spur distance, anterior chamber depth, and anterior chamber angle and area [8]. A set of reliable selected features was fed into an Adaboost classifier and found to be clinically important for distinguishing the four mechanisms of ACG [8]. In another study [9], the selected features were cross-examined against the four types of ACG mechanisms. In [10], various supervised and unsupervised feature selection methods were used to find reliable features and classify the different ACG mechanisms. However, the measurement of AC parameters relies on accurate identification of the scleral spur in the segmented AS-OCT images by clinicians, which is subjective and may not be reliable, introducing additional noise into the features. In [11], a new ensemble learning method based on error-correcting output codes (ECOC) was proposed and applied to the classification of the four ACG mechanisms, and shown to be an effective approach for multiclass classification.

The existing methods for the classification of different ACG mechanisms are still not fully automated, requiring the help of clinicians in feature extraction. Our study is therefore motivated by the goal of providing a fully automated expert system for the classification of different ACG mechanisms. We attempt to directly extract features from the raw AS-OCT images without segmentation or parameter measurement, which is less subjective because no intervention by doctors is needed. Specifically, in the proposed method, the compound transform [12] is applied to the raw AS-OCT images to extract around three thousand different morphological features, from which a small set of discriminative features is selected for classification.

In the rest of this paper, Section 2 reviews related literature on general glaucoma diagnosis methods. The proposed method is presented in Section 3, followed by experimental results in Section 4. The last section concludes the paper and highlights future work.
2. Related methods
During the past decade, several studies have investigated the usefulness of computer-based expert support systems for the early detection of glaucoma using different imaging modalities [13–18]. From fundus images, image processing techniques were used to extract the optic disk and blood vessels, which can provide useful information for diagnosing glaucoma. Higher order spectra features and textural features of fundus images combined with a random-forest classifier resulted in an
accuracy of more than 91% for correctly identifying glaucoma images [13]. An algorithm based on a neural network classifier using internal morphological features from fundus images was used for glaucoma diagnosis in [14]. Another study [15] explored the ability of a combination of imaging methods to distinguish normal eyes from glaucomatous ones using optic nerve head stereo photographs, scanning laser polarimetry, confocal scanning laser ophthalmoscopy and optical coherence tomography (OCT) images. Optic nerve head (ONH) topography images were analyzed by an orthogonal decomposition method for the detection of glaucomatous progression in [16]. Super pixel information from three-dimensional OCT images was used in a boosting-based machine classifier to improve the early detection of glaucoma [17]. An expert support system for glaucoma in which three-dimensional ONH features, stereometric ONH parameters, and disc and cup margin features were measured was presented in [18].

The aforementioned works attempted to distinguish glaucoma from non-glaucoma cases using various image sources, rather than classifying subtypes of glaucoma. AS-OCT images are the most convenient for ACG diagnosis, since they allow clinicians to observe a cross-sectional view of the anterior chamber and the angle structures inside the eye. Classifying the various mechanisms of ACG using reliable features from the measured parameters of AS-OCT images has been studied with machine learning classifiers, with improved accuracy, in [8–11].

A number of tools with different diagnostic approaches are available for glaucoma. However, the sensitivity and specificity of any single diagnostic test are not adequate for it to be considered the gold standard. Also, to the best of our knowledge, no previous work has addressed ACG mechanism classification using textural features without prior segmentation. Hence, in this research work, we attempt to classify the different ACG mechanisms based only on AS-OCT images in a fully automated process. Features based on the compound hierarchy of algorithms representing morphology (CHARM) algorithm are extracted, followed by a multivariate classifier for classification.
3. Proposed method
The proposed method for the classification of different ACG mechanisms is shown in Fig. 1. It includes the acquisition of raw AS-OCT images, followed by the extraction of a large number of morphological features using compound image transforms and image statistics [19]. A small set of discriminative and mutually independent features, selected using the maximum relevance-minimum redundancy (mRMR) algorithm [20], is fed into a Naïve Bayes Classifier for classification.
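As a concrete illustration, the sketch below wires these three stages together in Python with scikit-learn. Here extract_charm_features and mrmr_rank are placeholder names for the feature-extraction and ranking steps detailed in Sections 3.2 and 3.3; they are not part of the original implementation, which used Matlab toolboxes.

```python
# Minimal sketch of the pipeline in Fig. 1 (feature extraction -> mRMR -> NBC),
# assuming hypothetical helpers extract_charm_features and mrmr_rank.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

def classify_acg(images, labels, n_selected=15):
    # 1. Segmentation-free feature extraction: ~2919 CHARM descriptors per image.
    X = np.array([extract_charm_features(img) for img in images])
    # 2. Rank features with mRMR and keep the top n_selected.
    ranked = mrmr_rank(X, labels)
    X_sel = X[:, ranked[:n_selected]]
    # 3. Leave-one-out cross-validated Naive Bayes classification.
    scores = cross_val_score(GaussianNB(), X_sel, labels, cv=LeaveOneOut())
    return scores.mean()
```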
Table 1 – The number of samples with different ACG mechanisms in the experimental dataset.

Class number | Mechanism                        | Number of samples
1            | Exaggerated lens vault (LV)      | 26
2            | Pupil block (PB)                 | 28
3            | Thick peripheral iris roll (PIR) | 10
4            | Plateau iris configuration (PL)  | 10
Total        |                                  | 74

3.1. AS-OCT images
The data used in this work consist of AS-OCT images of 74 ACG patients provided by the Department of Ophthalmology of the National University Hospital, Singapore (NUHS). Ethics approval was obtained from the review board of NUHS, and written consent was obtained from all subjects prior to AS-OCT imaging. The numbers of patients with the different mechanisms are listed in Table 1.

A skilled technician obtained the clinical images of all subjects through a horizontal scan, including sections of the nasal and temporal quadrants, using AS-OCT (Visante, software version 2.01.88; Carl Zeiss Meditec, Dublin, CA) in a dark room (0 lux), with the images centered on the pupil. Infrared light at 1300 nm was used to acquire a high-resolution cross-sectional view of the anterior segment. The standard AS single-scan protocol, producing 256 scans in 0.125 s, was used to obtain the scans. The image saturation and noise were adjusted, and the polarization for each scan was optimized by the examiner to obtain the best quality images. Because several AS-OCT images were acquired for each patient, the image with the smallest amount of artifacts was selected for analysis. AS-OCT images with artifacts resulting from movement or the eyelids, and poor quality images secondary to tear film abnormalities or corneal scars, were excluded. Each eye was imaged several times with the pupil undilated, and only images with clearly visible scleral spurs were analyzed qualitatively by three glaucoma specialists (P.T.K. Chew, M.C. Aquino and C.C. Sng). The images were categorized into four groups based on ACG mechanism.

Each mechanism has its own distinctive morphological characteristics [5], illustrated by the dotted lines in Fig. 2. The LV mechanism (Fig. 2a) can be spotted by the lens pushing the iris upward, which reduces the angle between the cornea and the iris. The PB mechanism (Fig. 2b) can be seen as a convex forward iris profile, causing a shallow peripheral anterior chamber. The PIR mechanism (Fig. 2c) can be detected by a thick iris, which narrows the angle between the iris and cornea due to circumferential folds along the periphery of the iris. The PL mechanism (Fig. 2d) can be identified by a sharp rise of the iris at the periphery, close to the angle wall, before it turns sharply away from the angle wall toward the visual axis.
Fig. 1 – Block diagram of the proposed method for ACG mechanism classification.
Fig. 2 – AS-OCT images of eyes with the different mechanisms: (a) exaggerated lens vault – LV; (b) pupil block – PB; (c) thick peripheral iris roll – PIR; (d) plateau iris configuration – PL.
3.2. Extraction of CHARM features
The feature set used in the analysis is based on compound image transforms and image statistics. The principle is to extract a very large number of generic image descriptors (up to ∼3000 features) from the raw AS-OCT image. The compound transform has been validated as useful in the classification of cell images [12,19]. The features fall into three major groups (A, B and C), as shown in Fig. 3, and a graphical representation of the extracted features is illustrated in Fig. 4. The types of features in each group are as follows. Group A, high contrast features, includes edge statistics, object statistics and Gabor statistics features. Group B, polynomial decomposition features, includes Chebyshev statistics, Chebyshev-Fourier statistics and Zernike polynomial features.
Group C, pixel statistics and texture features, includes Haralick and Tamura features, multi-scale histograms, the first four moments, pixel intensity statistics, Radon features, fractal features and the Gini coefficient. These features are calculated from the raw pixels as well as from image transforms and multi-order transforms, including the Fourier transform, wavelet transform (Symlet 5, Level 1), Chebyshev transform and tandem combinations of these transforms. Altogether, the feature vector consists of 2919 values, each of which captures different information about an image (Fig. 4). Features generated by the compound transform have been widely used for the analysis of complex visual arts and paintings [21], and have proved very effective in the classification of galaxy images [22–24] and knee radiography images [25,26]. For face recognition, both coarse and fine features generated by the compound transform are required [27]. The abundant features generated by the compound transform fall into the following groups [12].
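The tandem construction can be outlined as follows. This is an illustrative sketch, assuming PyWavelets for the Symlet-5 wavelet; feature_bank is a hypothetical stand-in for the per-group extractors of Sections 3.2.1–3.2.3, not the authors' Matlab implementation.

```python
# Sketch of the compound (tandem) transform idea: the same descriptors are
# computed on the raw image, on single transforms, and on transform chains
# such as Fourier(Wavelet(*)) or Wavelet(Fourier(*)), as seen later in Table 2.
import numpy as np
import pywt

def fourier(img):
    return np.abs(np.fft.fft2(img))

def wavelet(img):
    approx, _ = pywt.dwt2(img, "sym5")   # Symlet 5, level 1; keep approximation
    return approx

def compound_features(img, feature_bank):
    planes = [img, fourier(img), wavelet(img),
              fourier(wavelet(img)), wavelet(fourier(img))]
    return np.concatenate([np.asarray(feature_bank(p)) for p in planes])
```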
Fig. 3 – Basic features used for the construction of compound transform features.
Fig. 4 – Graphical representation for the construction of compound hierarchy of algorithms representing morphology features.
3.2.1. Group A: high contrast features
High contrast features include edge and object features: statistics about object number, spatial distribution, size and shape.

• Edge statistics features were calculated on the Prewitt gradient [28] and comprise the mean, median, variance, and an 8-bin histogram of both the magnitude and the direction components. Additional edge features include the total number of edge pixels, the direction homogeneity [29], and the differences between the direction histogram bins, sampled into a 4-bin histogram.
• Object statistics were calculated for all 8-connected objects found in the Otsu and Inverse Otsu binary masks of the image [30]. The Euler number, and the minimum, maximum, mean, median, variance, and a 10-bin histogram of both the object areas and the distances from the objects to the image centroid, are also included [31].
• Gabor statistics features were calculated through convolution with a kernel in the form of a Gaussian harmonic function [32,33], since Gabor filters are suitable for filtering in both the spatial and frequency domains. Seven distinct frequencies were used, providing seven image descriptor values.
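As an illustration of the first item above, a minimal Python sketch of the Prewitt-based edge statistics (means, medians, variances and 8-bin histograms of gradient magnitude and direction) might look as follows; the exact binning and thresholding in [28,29] may differ.

```python
# Hedged sketch of the edge-statistics portion of Group A.
import numpy as np
from scipy import ndimage

def edge_statistics(img):
    img = img.astype(float)
    gx = ndimage.prewitt(img, axis=1)       # horizontal Prewitt gradient
    gy = ndimage.prewitt(img, axis=0)       # vertical Prewitt gradient
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    feats = [magnitude.mean(), np.median(magnitude), magnitude.var(),
             direction.mean(), np.median(direction), direction.var()]
    feats += list(np.histogram(magnitude, bins=8)[0])   # 8-bin magnitude histogram
    feats += list(np.histogram(direction, bins=8)[0])   # 8-bin direction histogram
    return np.array(feats, dtype=float)
```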
3.2.2. Group B: polynomial decomposition features
In polynomial decomposition, a polynomial is produced that approximates the image to some degree of fidelity, and the generated polynomial coefficients are used as descriptors of the image.

• Chebyshev statistics [34] were generated as a 32-bin histogram of a 1 × 400 vector calculated by a Chebyshev transformation of the image with order N = 20.
• Chebyshev-Fourier features [35] comprise a 32-bin histogram of the polynomial coefficients of a Chebyshev-Fourier transformation with a polynomial order of up to N = 23.
• Zernike features [36] are the absolute values of the Zernike polynomial approximation coefficients of the image [29], yielding 72 image descriptors. These features offer rotation invariance, expression efficiency, robustness to noise and a multilevel representation for describing the shapes of patterns.
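A hedged sketch of the Chebyshev statistics follows, assuming a separable two-dimensional least-squares projection of order N = 20 (giving the 20 × 20 = 400 coefficients mentioned above) followed by the 32-bin histogram; the original WND-CHARM implementation may compute the transform differently.

```python
# Sketch of Chebyshev statistics under the stated assumptions.
import numpy as np
from numpy.polynomial import chebyshev

def chebyshev_statistics(img, order=20, bins=32):
    h, w = img.shape
    # Chebyshev-Vandermonde bases on [-1, 1] for columns and rows.
    vx = chebyshev.chebvander(np.linspace(-1, 1, w), order - 1)   # (w, 20)
    vy = chebyshev.chebvander(np.linspace(-1, 1, h), order - 1)   # (h, 20)
    # Least-squares projection of the image onto the separable basis.
    coeffs, _, _, _ = np.linalg.lstsq(vy, img @ vx, rcond=None)   # (20, 20)
    return np.histogram(coeffs.ravel(), bins=bins)[0]             # 32-bin histogram
```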
3.2.3. Group C: pixel statistics and texture features

Pixel statistics are mainly based on the distribution of pixel intensities of the image, including histograms and moments. Texture features capture the inter-pixel variations in intensity for several directions and resolutions.

• The first four moments (mean, standard deviation, skewness, and kurtosis) were computed on image stripes in four different directions (0°, 45°, 90° and 135°). Each set of stripes is then sampled into a 3-bin histogram, providing 48 image descriptors.
• Haralick features were computed on the image's co-occurrence matrix based on standard equations as described in [37], and contribute 28 image descriptors. These were obtained by tabulating how often combinations of pixel brightness values occur in the image.
• Multi-scale histograms were calculated using various numbers of bins (3, 5, 7, and 9), as proposed by [38], providing 24 image descriptors.
• Tamura texture features [39] of contrast, directionality and coarseness, together with a 3-bin histogram of coarseness, provide 6 image descriptors.
• Radon transform features [40] were calculated for four different directions (0°, 45°, 90°, and 135°), and each of the resulting series was convolved into a 3-bin histogram, providing a total of 12 image features.
• Fractal features were computed as proposed by [41,42], providing a total of 20 image features. Since the fractal dimension is an indicator of surface roughness, fractal-based texture analysis correlates with texture coarseness.
• Pixel intensity features such as the mean, median, standard deviation, and the minimum and maximum pixel intensity values were calculated using standard formulas.
• The Gini coefficient provides a quantitative measure of the inequality with which an object's brightness is distributed amongst its constituent pixels. One Gini coefficient is calculated using the method described in [43].
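Two of the simpler Group C descriptors translate directly into code; the sketch below implements the multi-scale histograms (3, 5, 7 and 9 bins, i.e. the 24 descriptors noted above) and the basic pixel intensity statistics.

```python
# Sketch of two Group C descriptor families.
import numpy as np

def multiscale_histograms(img):
    feats = []
    for bins in (3, 5, 7, 9):
        hist, _ = np.histogram(img, bins=bins)
        feats.extend(hist / hist.sum())   # normalized occupancy per bin
    return np.array(feats)                # 3 + 5 + 7 + 9 = 24 values

def pixel_intensity_statistics(img):
    return np.array([img.mean(), np.median(img), img.std(),
                     img.min(), img.max()])
```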
Although a larger set of image features can be more informative, not all of the 2919 image features are equally informative: some of the features are expected to be noisy or redundant. In the proposed method, the mRMR feature selection algorithm is used, which has been shown to be superior for reliable and useful feature selection in previous work on ACG mechanism classification [8–10]. The mRMR algorithm, described in the following subsection, selects the most informative image features and rejects the redundant and noisy ones prior to classification.
3.3. Feature selection using mRMR
Minimum Redundancy Maximum Relevance (mRMR) is a feature selection algorithm that aims to find the features that are most relevant to the target classes while simultaneously reducing the redundancy between the selected features [20]. To find relevant features, the mutual information between a feature and the target class should be maximized. The mutual information between two variables m and n is defined as

$$I(m, n) = \iint p(m, n) \, \log \frac{p(m, n)}{p(m)\,p(n)} \, dm \, dn \qquad (1)$$

The mean mutual information between the selected feature set F and class n can be defined as

$$D(F, n) = \frac{1}{|F|} \sum_{m_i \in F} I(m_i, n) \qquad (2)$$

with F containing x features {m_1, m_2, ..., m_x}, while the mean mutual information between each pair of features m_i and m_j (i, j = 1, 2, ..., x) in F is defined as

$$R(F) = \frac{1}{|F|^2} \sum_{m_i, m_j \in F} I(m_i, m_j) \qquad (3)$$

The objective of feature selection is to select a compact feature set with high discrimination and low redundancy by maximizing D(F, n) and minimizing R(F) simultaneously. Hence the final feature set is determined by maximizing an objective function Φ(D, R), defined as D(F, n) − R(F) under the mutual information difference (MID) criterion or as D(F, n)/R(F) under the mutual information quotient (MIQ) criterion. In practice, the best feature set is found by an incremental search method, such as sequential forward selection, maximizing Φ(D, R) at each search step.
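A compact Python sketch of this incremental search under the MID criterion is given below; it assumes the features have already been discretized (as in Section 4) and uses scikit-learn's mutual_info_score for both the relevance and redundancy terms. The function name matches the mrmr_rank placeholder used in the Section 3 overview sketch.

```python
# Sequential forward mRMR selection with the MID criterion Phi = D - R.
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_rank(X_disc, y, n_features=15):
    n = X_disc.shape[1]
    relevance = np.array([mutual_info_score(X_disc[:, j], y) for j in range(n)])
    selected = [int(np.argmax(relevance))]           # most relevant feature first
    while len(selected) < n_features:
        best_j, best_score = None, -np.inf
        for j in set(range(n)) - set(selected):
            redundancy = np.mean([mutual_info_score(X_disc[:, j], X_disc[:, k])
                                  for k in selected])
            score = relevance[j] - redundancy        # MID: D(F, n) - R(F)
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```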
3.4. Naive Bayes Classifier

The Naive Bayes Classifier (NBC) is effective in binary and multiclass classification when the features are independent of each other [44]. It builds a single probabilistic model for each class, assuming conditional independence of the features given the class label. NBC can often outperform more sophisticated classification methods in medical diagnosis models [45]. Since the mutual information among the features selected by mRMR is minimized, these features are predominantly independent of each other and therefore suitable for NBC. Moreover, classification based on Bayes' theorem is useful for medical diagnosis models because it combines the training samples with a priori knowledge to obtain the posterior probability of a hypothesis. It considers the contribution of each feature independently to the probability of the target class and hence provides more useful information for decision support than a class label or class probability alone [46].
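As a usage sketch, a Gaussian NBC over the selected features can return class posteriors rather than only labels, in line with the decision-support argument above; scikit-learn's GaussianNB is used here as a stand-in for the authors' implementation.

```python
# Fit a Gaussian Naive Bayes model and return both labels and posteriors.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fit_predict_nbc(X_train, y_train, X_test):
    nbc = GaussianNB().fit(X_train, y_train)
    posteriors = nbc.predict_proba(X_test)     # P(class | features) per sample
    labels = nbc.classes_[np.argmax(posteriors, axis=1)]
    return labels, posteriors
```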
3.5. Performance analysis

The performance of the classifier is evaluated in terms of accuracy, F-measure, sensitivity and specificity, which can be derived from the counts of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) using standard formulas [9]:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (4)$$

Concerning statistical significance, the F-measure is computed to characterize the test's accuracy using the precision and recall rates:

$$\text{F-measure} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (5)$$

where

$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad \text{and} \quad \mathrm{Recall} = \frac{TP}{TP + FN} \qquad (6)$$
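These per-class quantities can be derived directly from a confusion matrix such as Table 5 (rows: actual class, columns: predicted class), as in the following sketch.

```python
# Per-class metrics of Eqs. (4)-(6) from a square confusion matrix.
import numpy as np

def per_class_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp            # predicted as class c but actually other
    fn = cm.sum(axis=1) - tp            # actually class c but predicted other
    tn = cm.sum() - tp - fp - fn
    accuracy = (tp + tn) / cm.sum()     # Eq. (4), one value per class
    precision = tp / (tp + fp)          # Eq. (6)
    recall = tp / (tp + fn)             # Eq. (6); equals sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)   # Eq. (5)
    return accuracy, recall, specificity, f_measure
```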
4. Experimental results and discussion
Our experiment was conducted using the original authors' Matlab toolbox for feature selection with the mRMR method [19]. Each feature in the original feature set was normalized to zero mean and unit standard deviation and then discretized in order to compute the mutual information for feature selection. Prior to classification using NBC, features were ranked among the 2919 extracted features using the mRMR algorithm with the mutual information difference criterion. Leave-one-out cross-validation (LOOCV) was used because of the small sample size. Fig. 5 plots the accuracy when training the classifier using NBC on the mRMR-ranked features: the accuracy grows quickly to a peak of 89.20% at the top fifteen features. Beyond this peak, the accuracy dips slightly and eventually stabilizes, reaching 84.90% when all 2919 extracted features are used, as shown in Fig. 5. The mRMR feature selection algorithm with NBC performs well on a small feature set (15 out of the 2919 features) because it selects features with high relevance to the target class while discarding features that are correlated with those already selected. The fifteen selected features, which are hence significant for detecting the ACG mechanism, are listed in Table 2.
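The experiment behind Fig. 5 can be outlined as follows, assuming ranked is the feature ordering returned by the mRMR step; this is a schematic reconstruction in Python, not the authors' Matlab code.

```python
# LOOCV accuracy of the NBC as a function of the number of top-ranked features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

def accuracy_vs_feature_count(X, y, ranked, counts=range(1, 101)):
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # zero mean, unit variance
    curve = {}
    for k in counts:
        Xk = X[:, ranked[:k]]
        curve[k] = cross_val_score(GaussianNB(), Xk, y, cv=LeaveOneOut()).mean()
    return curve                                # peak expected near k = 15
```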
Fig. 5 – Accuracy of the NBC classifier using mRMR-ranked features.
Table 2 – Top fifteen features ranked by the mRMR method and their accuracy using NBC.

Feature rank | Feature number | Feature name
1  | 917  | Radon Coefficients (Wavelet (*))
2  | 1843 | Haralick Textures (Fourier (Chebyshev (*)))
3  | 2326 | Radon Coefficients (Edge (*))
4  | 2740 | Zernike Coefficients (Wavelet (Edge (*)))
5  | 1215 | Fractal Features (Chebyshev (*))
6  | 2392 | Chebyshev Coefficients (Fourier (Edge (*)))
7  | 731  | Zernike Coefficients (Wavelet (*))
8  | 1701 | Haralick Textures (Fourier (Wavelet (*)))
9  | 1836 | Comb Moments (Fourier (Chebyshev (*)))
10 | 218  | Zernike Coefficients (*)
11 | 1565 | Chebyshev Coefficients (Fourier (Wavelet (*)))
12 | 516  | Zernike Coefficients (Fourier (*))
13 | 2187 | Zernike Coefficients (Edge (*))
14 | 1509 | Pixel Intensity Statistics (Wavelet (Fourier (*)))
15 | 1143 | Haralick Textures (Chebyshev (*))

(*) – raw input AS-OCT image.
All fifteen selected features capture morphology and shape information, high-contrast features, and the textural statistical distribution of the pixel values, as explained in Section 3.2; they are therefore useful for ACG mechanism classification from AS-OCT images.
Table 5 – Confusion matrix of the 4-way classifier using NBC with the LOOCV method.

Actual class | Predicted LV | Predicted PB | Predicted PIR | Predicted PL
LV           | 24           | 2            | 0             | 0
PB           | 3            | 25           | 0             | 0
PIR          | 0            | 2            | 8             | 0
PL           | 0            | 1            | 0             | 9
Fig. 5 compares the accuracy for different numbers of ranked features (from 1 to 2919) and demonstrates the robustness of feature selection in the proposed method. We also compared the accuracy of different combinations of the top 15 features, as given in Table 3. The accuracy is 58.35% when only the top feature is used and generally improves as more features are added. The accuracy, F-measure, specificity, sensitivity and receiver operating characteristic (ROC) area of each class using the Naïve Bayes Classifier on the selected 15 features are shown in Table 4. Our method achieves an accuracy of 89.20% and an F-measure of 89.30%, with a sensitivity of 88.90% and a specificity of 89.19%. Table 5 shows the confusion matrix of the 4-way classifier using NBC with the LOOCV method. From the experimental results in Tables 4 and 5, we can
Table 3 – Variation of accuracy using different combinations of the top 15 features.

S.No. | Feature combination | Accuracy (%)
1  | 917 | 58.35
2  | 917, 1843 | 59.46
3  | 917, 1843, 2326 | 64.86
4  | 917, 1843, 2326, 2740 | 68.02
5  | 917, 1843, 2326, 2740, 1215 | 68.92
6  | 917, 1843, 2326, 2740, 1215, 2392 | 67.56
7  | 917, 1843, 2326, 2740, 1215, 2392, 731 | 71.63
8  | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701 | 74.32
9  | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836 | 71.62
10 | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836, 218 | 77.02
11 | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836, 218, 1565 | 77.20
12 | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836, 218, 1565, 516 | 77.50
13 | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836, 218, 1565, 516, 2187 | 83.79
14 | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836, 218, 1565, 516, 2187, 1509 | 85.13
15 | 917, 1843, 2326, 2740, 1215, 2392, 731, 1701, 1836, 218, 1565, 516, 2187, 1509, 1143 | 89.20
Table 4 – Accuracies, sensitivities and specificities of each class using the NB Classifier with the leave-one-out cross-validation method.

Class No. | Accuracy (%) | Sensitivity (%) | Specificity (%) | F-measure (%) | ROC area
1        | 92.96 | 92.31 | 93.33  | 90.57 | 0.96
2        | 89.19 | 89.29 | 89.13  | 86.21 | 0.92
3        | 97.06 | 80.00 | 100.00 | 88.89 | 0.95
4        | 98.51 | 90.00 | 100.00 | 94.74 | 0.99
Weighted | 89.20 | 88.90 | 89.19  | 89.30 | 0.95
Table 6 – Comparison of accuracy using various classifiers for ACG detection.

S.No | Classifier type     | LOOCV (%) | 10-Fold CV (%)
1    | NB classifier       | 89.20     | 85.12
2    | ANN classifier      | 75.67     | 79.72
3    | k-NN classifier     | 74.32     | 74.32
4    | Adaboost classifier | 63.51     | 78.40
5    | RF classifier       | 58.10     | 59.45
see that the classification accuracies of the PIR and PL mechanisms are relatively higher than those of LV and PB. There are two reasons for this: (1) from a medical point of view, both PB and LV have very evident visual characteristics, such as the convex forward iris profile in PB and the greatly reduced AC volume in LV, which are highly discriminative; (2) the experimental dataset is imbalanced, with more samples for PB and LV and fewer for PIR and PL. Because the number of samples collected for PIR and PL is small compared with the other two mechanisms, the dataset is imbalanced for machine learning; with more samples in equal numbers per class, the overall accuracy and F-measure could be improved. In machine learning applications, feature selection (e.g., mRMR) is an important step in obtaining high classification performance. However, most feature selection methods are based on statistics of the data, and more data makes the selected features more reliable. That is to say, more samples would help to improve the performance of feature selection, and with it the classification accuracy and F-measure.

The performance of the NBC in the proposed method has been compared (see Table 6) with the contemporary classifiers most used for medical diagnosis: an ANN (a multilayer perceptron trained with the back-propagation algorithm), k-Nearest Neighbor (k-NN), the Adaboost classifier and the random forest (RF) classifier. For each classifier, we used leave-one-out cross-validation (LOOCV) and k-fold cross-validation (here k = 10) for a fair comparison. In 10-fold cross-validation, the training set is divided into 10 subsets with equal numbers of samples. Each time, k − 1 (here nine) of these subsets are used for training and the remaining one for testing. This procedure is repeated 10 times (folds), using a different part for testing in each case, and the average of the ten results is taken as the classifier performance. LOOCV follows the same procedure with k equal to the number of samples, so that each sample is held out for testing exactly once. Because of the small number of samples, the statistics of the data are hard to model fully, which explains the difference between the LOOCV and 10-fold results in this work. Since the features selected by mRMR are nearly independent of each other, the combination of mRMR with NBC provides better results than the other multivariate classifiers.

To the best of our knowledge, there is no other automated method for classifying the different ACG mechanisms; existing approaches are mainly based on the manual observations of clinicians. Future work includes using a larger dataset to learn more discriminative features and improve the classification accuracy. Furthermore, some samples may exhibit multiple mechanisms simultaneously, which should be addressed by new machine learning methods. In summary, this study proposes a fully automated way of classifying the different ACG mechanisms that requires no intervention by doctors and is less subjective than existing methods.
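The comparison in Table 6 can be reproduced schematically with common scikit-learn counterparts of the classifiers named above; the exact hyperparameters of the original Matlab implementations are not stated in the text, so library defaults are used here.

```python
# 10-fold cross-validated comparison of the five classifiers in Table 6.
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def compare_classifiers(X_sel, y):
    models = {"NB": GaussianNB(),
              "ANN": MLPClassifier(max_iter=2000),   # multilayer perceptron
              "k-NN": KNeighborsClassifier(),
              "Adaboost": AdaBoostClassifier(),
              "RF": RandomForestClassifier()}
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return {name: cross_val_score(model, X_sel, y, cv=cv).mean()
            for name, model in models.items()}
```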
5. Conclusions
Angle closure glaucoma, a specific type of glaucoma, has been observed to result from one or more mechanisms in the anterior segment of the eye. Traditional methods for classifying the different mechanisms are based on calculating key parameters, such as the angle opening distance (AOD), trabecular-iris space area (TISA) and angle recess area (ARA), from segmented AS-OCT images with the help of doctors, and are therefore not fully automated. Since manual diagnosis is mainly based on the morphology of each mechanism, in this study we directly extract multiple morphological features using the compound transform and image statistics, from which a small set of features is selected using mRMR and then used for classification. Our method provides a completely automated and efficient way of classifying the different ACG mechanisms; segmentation of the anterior chamber area and its measurement are not required. Experimental results show that our method achieves 89.20% classification accuracy.

A limitation of this method is that all the features extracted from the raw AS-OCT images are compound image transform features selected by a supervised feature selection method, and manual labeling of the various mechanisms in a large ACG dataset is a time-consuming task for medical practitioners. In future work, unsupervised learning and transfer learning methods will be explored to overcome this problem. The possibility that one mechanism occurs in conjunction with others, e.g. an iris roll mechanism occurring with a lens mechanism, could also be studied. Classifying all samples under only one class label when some samples may have the features of two or more mechanisms may adversely affect the performance of feature selection and classification; this can be investigated as well.
Conflict of interest statement

None declared.
Acknowledgement

This work was supported by Ministry of Education (MoE) AcRF Tier 1 Funding, Singapore, under Grant M4010981.020 RG36/11.
References
[1] Z.B. Eliash, G. Wollstein, T. Chu, J.D. Ramsey, C. Glymour, R.J. Noecker, H. Ishikawa, J.S. Schuman, Optical coherence tomography machine learning classifiers for glaucoma detection: a preliminary study, Invest. Ophthalmol. Vis. Sci. 46 (11) (2005) 4147–4152.
[2] J. Cheng, J. Liu, B.H. Lee, D.W.K. Wong, F. Yin, T. Aung, M. Baskaran, P. Shamira, T.Y. Wong, Closed angle glaucoma detection in RetCam images, in: Proc. Annual Int. Conf. IEEE EMBC, 2010, pp. 4096–4099.
[3] H.A. Quigley, A.T. Broman, The number of people with glaucoma worldwide in 2010 and 2020, Br. J. Ophthalmol. 90 (3) (2006) 262–267.
[4] Glaucoma Research Foundation, 2013. Available at: http://www.glaucoma.org/glaucoma/types-of-glaucoma.php.
[5] C.C. Sng, M.C. Aquino, J. Liao, M. Ang, C. Zheng, S.C. Loon, P.T. Chew, Pretreatment anterior segment imaging during acute primary angle closure: insights into angle closure mechanisms in the acute phase, Ophthalmology 121 (1) (2014) 119–125.
[6] N. Shabana, M.C. Aquino, J. See, A.M. Tan, W.P. Nolan, R. Hitchings, S.M. Young, S.C. Loon, C.C. Sng, W. Wong, P.T.K. Chew, Quantitative evaluation of anterior chamber parameters using anterior segment optical coherence tomography in primary angle closure mechanisms, Clin. Exp. Ophthalmol. 40 (2012) 792–801.
[7] C.K. Leung, W.M. Chan, C.Y. Ko, S.I. Chui, J. Woo, M.K. Tsang, R.K. Tse, Visualization of anterior chamber angle dynamics using optical coherence tomography, Ophthalmology 112 (2005) 980–984.
[8] A. Wirawan, C.K. Kwoh, P.T.K. Chew, M.C.D. Aquino, C.L. Seng, J. See, C. Zheng, W. Lin, Feature selection for computer-aided angle closure glaucoma mechanism detection, J. Med. Imaging Health Inform. 2 (4) (2012) 438–444.
[9] S. Issac Niwas, W. Lin, C.K. Kwoh, C.-C. Jay Kuo, C.C. Sng, M.C. Aquino, P.T.K. Chew, Cross-examination for angle-closure glaucoma feature detection, IEEE J. Biomed. Health Inform. 20 (1) (2016) 343–354.
[10] S. Issac Niwas, X. Bai, W. Lin, C.K. Kwoh, C.C. Sng, M.C. Aquino, P.T.K. Chew, Reliable feature selection technique for automated angle closure glaucoma mechanism detection, J. Med. Syst. 39 (3) (2015) 1–10.
[11] X.S. Bai, S. Issac Niwas, W. Lin, B.-F. Ju, C.K. Kwoh, L. Wang, C.C. Sng, M.C. Aquino, P.T.K. Chew, Learning ECOC code matrix for multiclass classification with application to glaucoma diagnosis, J. Med. Syst. 40 (4) (2016) 1–10.
[12] N. Orlov, L. Shamir, T. Macura, J. Johnston, D.M. Eckley, I.G. Goldberg, WND-CHARM: multipurpose image classification using compound image transforms, Pattern Recognit. Lett. 29 (11) (2008) 1684–1693.
[13] U.R. Acharya, S. Dua, X. Du, V.S. Sree, C.K. Chua, Automated diagnosis of glaucoma using texture and higher order spectra features, IEEE Trans. Inf. Technol. Biomed. 15 (3) (2011) 449–455.
[14] J. Nayak, U.R. Acharya, P.S. Bhat, A. Shetty, T.C. Lim, Automated diagnosis of glaucoma using digital fundus images, J. Med. Syst. 33 (5) (2009) 337–346.
[15] M.J. Greany, D.C. Hoffman, D.F.G. Heath, M. Nakla, A.L. Coleman, J. Caprioli, Comparisons of optic nerve imaging methods to distinguish normal eyes from those with glaucoma, Invest. Ophthalmol. Vis. Sci. 43 (1) (2002) 140–145.
[16] M. Balasubramanian, S. Zabic, C. Bowd, H.W. Thompson, P. Wolenski, S.S. Iyengar, B.B. Karki, L.M. Zangwill, A framework for detecting glaucomatous progression in the optic nerve head of an eye using proper orthogonal decomposition, IEEE Trans. Inf. Technol. Biomed. 13 (5) (2009) 781–793.
[17] J. Xu, H. Ishikawa, G. Wollstein, J.S. Schuman, 3D optical coherence tomography super pixel with machine classifier analysis for glaucoma detection, in: Proc. IEEE Eng. Med. Biol. Soc., 2011, pp. 3395–3398.
[18] J. Xu, H. Ishikawa, G. Wollstein, R.A. Bilonick, K.R. Sung, L. Kagemann, K.A. Townsend, J.S. Schuman, Automated assessment of the optic nerve head on stereo disc photographs, Invest. Ophthalmol. Vis. Sci. 49 (6) (2008) 2512–2517.
[19] L. Shamir, N. Orlov, D.M. Eckley, T. Macura, J. Johnston, I.G. Goldberg, Wndchrm – an open source utility for biological image analysis, Source Code Biol. Med. 3 (2008) 1–13.
[20] H. Peng, F. Long, C. Ding, Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy, IEEE Trans. Pattern Anal. Mach. Intell. 27 (8) (2005) 1226–1238.
[21] L. Shamir, T. Macura, N. Orlov, D.M. Eckley, I.G. Goldberg, Impressionism, expressionism, surrealism: automated recognition of painters and schools of art, ACM Trans. Appl. Percept. 7 (2) (2010) 8.
[22] L. Shamir, Automatic detection of peculiar galaxies in large datasets of galaxy images, J. Comput. Sci. 3 (2012) 181–189.
[23] L. Shamir, G-analyzer: a tool for automatic galaxy image analysis, Astrophys. J. 736 (2) (2011) 141.
[24] L. Shamir, Automatic morphological classification of galaxy images, Mon. Notices R. Astron. Soc. 399 (2009) 1367–1372.
[25] L. Shamir, S.M. Ling, W.W. Scott, A. Bos, N. Orlov, T. Macura, D.M. Eckley, I.G. Goldberg, Knee X-ray image analysis method for automated detection of osteoarthritis, IEEE Trans. Biomed. Eng. 56 (2) (2009) 407–415.
[26] L. Shamir, S.M. Ling, W. Scott, M. Hochberg, L. Ferrucci, I.G. Goldberg, Early detection of radiographic knee osteoarthritis using computer-aided analysis, Osteoarthr. Cartil. 17 (10) (2009) 1307–1312.
[27] L. Shamir, Evaluation of face datasets as tools for assessing the performance of face recognition methods, Int. J. Comp. Vis. 79 (2008) 225–229.
[28] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Pearson Education Inc, 2008.
[29] R.F. Murphy, M. Velliste, J. Yao, G. Porreca, Searching online journals for fluorescence microscopy images depicting protein subcellular location patterns, in: Proc. 2nd IEEE International Symposium on Bioinformatics and Biomedical Engineering, 2001, pp. 119–128.
[30] N. Otsu, A threshold selection method from gray level histograms, IEEE Trans. Syst. Man Cybern. 9 (1979) 62–66.
[31] S.B. Gray, Local properties of binary images in two dimensions, IEEE Trans. Comput. 20 (1971) 551–561.
[32] D. Gabor, Theory of communication, J. IEEE 93 (1946) 429–457.
[33] C. Gregorescu, N. Petkov, P. Kruizinga, Comparison of texture features based on Gabor filters, IEEE Trans. Image Process. 11 (2002) 1160–1167.
[34] I. Gradshtein, I. Ryzhik, Table of Integrals, Series and Products, 5th ed., Academic Press, 1994.
[35] N. Orlov, J. Johnston, T. Macura, L. Shamir, I. Goldberg, Computer vision for microscopy applications, in: Vision Systems – Segmentation and Pattern Recognition, ARS Press, 2007, pp. 221–242.
[36] M. Teague, Image analysis via the general theory of moments, J. Opt. Soc. Am. 70 (1980) 920–930.
[37] R.M. Haralick, K. Shanmugam, I. Dinstein, Textural features for image classification, IEEE Trans. Syst. Man Cybern. 6 (1973) 269–285.
[38] E. Hadjidementriou, M. Grossberg, S. Nayar, Spatial information in multiresolution histograms, IEEE Conf. Comput. Vis. Pattern Recognit. 1 (2001) 702.
[39] H. Tamura, S. Mori, T. Yamavaki, Textural features corresponding to visual perception, IEEE Trans. Syst. Man Cybern. 8 (1978) 460–472.
[40] J.S. Lim, Two-dimensional Signal and Image Processing, Prentice Hall, New Haven, 1990.
[41] C.M. Wu, Y.C. Chen, K.S. Hsieh, Texture features for classification of ultrasonic liver images, IEEE Trans. Med. Imaging 11 (2) (1992) 141–152.
[42] C.C. Chen, J.S. Daponte, M.D. Fox, Fractal feature analysis and classification in medical imaging, IEEE Trans. Med. Imaging 8 (2) (1989) 133–142.
[43] R. Abraham, S. van den Bergh, P. Nair, A new approach to galaxy morphology: I. Analysis of the Sloan digital sky survey early data release, Astrophys. J. 588 (2003) 218–229.
[44] I. Rish, An empirical study of the Naïve Bayes classifier, in: Proc. of IJCAI Workshop on Empirical Methods in AI, 2001.
[45] A. Mahdi, A. Razali, A. Alwakil, Comparison of fuzzy diagnosis with K-nearest neighbor and Naïve Bayes classifiers in disease diagnosis, Broad Res. Artif. Intell. Neurosci. 2 (2) (2011) 58–66.
[46] A.Y. Ng, M.I. Jordan, On discriminative vs. generative classifiers: a comparison of logistic regression and Naïve Bayes, Neural Inf. Process. Syst. 14 (2001) 605–610.