Accepted Manuscript

Title: Automated Estimation of Choroidal Thickness Distribution and Volume based on OCT Images of Posterior Visual Section
Authors: Kiran Kumar Vupparaboina, Srinath Nizampatnam, Jay Chhablani, Ashutosh Richhariya, Soumya Jana
PII: S0895-6111(15)00141-X
DOI: http://dx.doi.org/10.1016/j.compmedimag.2015.09.008
Reference: CMIG 1410

To appear in: Computerized Medical Imaging and Graphics

Received date: 19-4-2015
Revised date: 24-7-2015
Accepted date: 29-9-2015

Please cite this article as: Kiran Kumar Vupparaboina, Srinath Nizampatnam, Jay Chhablani, Ashutosh Richhariya, Soumya Jana, Automated Estimation of Choroidal Thickness Distribution and Volume based on OCT Images of Posterior Visual Section, Computerized Medical Imaging and Graphics (2015), http://dx.doi.org/10.1016/j.compmedimag.2015.09.008

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Automated Estimation of Choroidal Thickness Distribution and Volume based on OCT Images of Posterior Visual Section

Kiran Kumar Vupparaboina (a,*), Srinath Nizampatnam (a), Jay Chhablani (b), Ashutosh Richhariya (b), Soumya Jana (a)

(a) Dept. of Electrical Engg, Indian Institute of Technology Hyderabad, Telangana, India-502205
(b) L. V. Prasad Eye Institute, Hyderabad, Telangana, India-500034
Abstract
A variety of vision ailments are indicated by anomalies in the choroid layer of the posterior visual section. Consequently, choroidal thickness and volume measurements, usually performed by experts based on optical coherence tomography (OCT) images, have assumed diagnostic significance. Now, to save precious expert time, it has become imperative to develop automated methods. To this end, one requires choroid outer boundary (COB) detection as a crucial step, where difficulty arises as the COB divides the choroidal granularity and the scleral uniformity only notionally, without marked brightness variation. In this backdrop, we measure the structural dissimilarity between choroid and sclera by the structural similarity (SSIM) index, and hence estimate the COB by thresholding. Subsequently, smooth COB estimates, mimicking manual delineation, are obtained using tensor voting. On five datasets, each consisting of 97 adult OCT B-scans, automated and manual segmentation results agree visually. We also demonstrate a close statistical match (greater than 99.6% correlation) between choroidal thickness distributions obtained algorithmically and manually. Further, quantitative superiority of our method is established over existing results by respective factors of 27.67% and 76.04% in two quotient measures defined relative to observer repeatability. Finally, automated choroidal volume estimation, being attempted for the first time, also yields results in close agreement with those of manual methods.

Keywords: Optical coherence tomography (OCT), Choroidal thickness, Choroidal volume, Structural similarity (SSIM) index, Tensor voting.
Preprint submitted to Computerized Medical Imaging and Graphics, July 24, 2015
1. Introduction

Recent advances in medical imaging allow physicians to visualize, understand and diagnose complex medical conditions. For instance, Optical Coherence Tomography (OCT) [1] enables ophthalmologists to image as well as assess pathological changes in blood vessels present in the inner walls of the posterior visual section [2]. Specifically, the choroid layer has complex vasculature, lies sandwiched between the retinal pigment epithelium (RPE) and the sclera, performs critical physiological functions [3]–[7], and plays a crucial role in determining various disease conditions [8]–[12]. The advent of OCT has led to improved visualization of the choroid, and hence improved diagnosis [13]. Among various OCT technologies, the spectral domain OCT (SD-OCT) is perhaps the most ubiquitous [14]. Typical SD-OCT images are shown in Fig. 1. The left portion of each image, called en-face, depicts the infrared face-on view of the retina, the innermost layer, while the dashed line on it indicates the vertical location of the OCT plane. The right part depicts an OCT scan, consisting of retina (layered), including RPE (bright), choroid (granular), and sclera (smooth), from top to bottom (from inner to outer layer, physiologically), as labeled in the last image [15]. Usually, OCT is performed at various (e.g., 97) vertical locations, and the ophthalmologist browses through OCT images, paying attention to the choroid region to assess its condition.
In particular, thickness distribution of the choroid estimated from OCT images of the posterior part of the eye has emerged as an important metric in disease management [16]. Consequently, estimation accuracy has assumed a vital role in ensuring accurate diagnostic outcome [19]. Choroidal thickness measurements have in turn been used to obtain choroidal volume. So far, such thickness measurements have been performed by experts by manually delineating the choroid inner and outer boundaries and then taking the difference. On average, an expert labels about one hundred OCT scans per subject, making this procedure time consuming, laborious as well as susceptible to fatigue-induced error. Further, since the number of available experts is often few compared to the number of subjects (especially,

* Corresponding author. Email address: [email protected] (Kiran Kumar Vupparaboina)
Figure 1: The 1st, 48th and 97th of a set of 97 SD-OCT images of the posterior visual section (courtesy Dr. William R. Freeman, University of California, San Diego, La Jolla, CA). A typical OCT image contains the en-face view on the left portion, and retina, RPE (outermost part of retina), choroid and sclera on the right portion.
in developing regions), only a fraction of potential subjects actually receive OCT-based diagnosis. Against this backdrop, automated segmentation of the choroid layer could be crucial in reducing professional effort and time per subject, potentially allowing more subjects to obtain specialized medical attention. Automation would also avoid human error induced by fatigue and tedium. Accordingly, we propose a novel automated algorithm for choroid segmentation and related thickness and volume measurements.

Over the last three years, automation of choroid segmentation has been attracting considerable attention. Owing to eye physiology, choroid segmentation consists of two tasks: detecting (i) the choroid inner boundary (CIB) and (ii) the choroid outer boundary (COB). Of these, the first task is relatively well posed because the RPE, defining the CIB, is significantly brighter than adjacent layers. Indeed, the gradient-based approach in various flavors has proven accurate not only in detecting the CIB [24]–[30], but also in the related problem of detecting boundaries between successive retinal layers with well-defined brightness transition [35]–[38]. Accordingly, we shall also adopt a gradient-based approach for CIB detection.

In contrast, the task of detecting the COB poses considerable challenge. This happens because the COB is essentially a notional divide between the choroidal granularity and the scleral uniformity, which is not defined by marked variation in brightness, and often open to subjective interpretation. Even so, gradient-based deterministic methods have been
Table 1: Comparative summary of various attempts towards automated choroid layer segmentation. Columns: Group/OCT Mode; Dataset; Segmentation Method for COB; Evaluation Criterion; Results; Remarks.

Hu et al. [25] — OCT Mode: SD-OCT; wavelength, transverse resolution and axial resolution not specified; images are averaged 9 times. Dataset: 37 B-scans per eye, one eye randomly chosen per subject, 20 eyes from 20 healthy and 10 eyes from 10 non-neovascular AMD adult subjects. Method: Gradient-based multistage graph search approach. Evaluation: Mean border position difference (MBPD), absolute border position difference (ABPD), and correlation coefficient (CC). Results: MBPD: -3.90µm (SD 15.93µm), ABPD: 21.39µm (SD 10.71µm); overall CC: 93% (95% CI, 81%–96%). (SD: standard deviation, CI: confidence interval.) Remarks: Observer repeatability and volume comparison not reported.

Tian et al. [26] — OCT Mode: SD-OCT (EDI†), 14µm transverse resolution, 3.9µm axial resolution; wavelength not specified. Dataset: 45 B-scans, one each from 45 healthy adult subjects. Method: Gradient-based graph search, posed as dynamic programming. Evaluation: Dice coefficient (DC). Results: Mean DC: 90.5% (SD 3%). Remarks: The image from each eye that passes through the center of the fovea is selected; observer repeatability not reported; suitability for volume estimation unclear.

Lu et al. [27] — OCT Mode: SD-OCT (EDI†), 14µm transverse resolution, 7µm axial resolution; wavelength not specified; images are averaged 100 times. Dataset: One B-scan per eye, one eye randomly chosen per subject, 30 adult subjects with diabetes. Method: Semi-automated gradient-based graph search, posed as dynamic programming. Evaluation: Dice coefficient (DC). Results: Mean DC: 92.7% (SD 3.6%). Remarks: Images close to the foveal cross section are considered; observer repeatability not reported; suitability for volume estimation unclear.

Kajić et al. [28] — OCT Mode: 1060nm SD-OCT (HP*), 15 to 20µm transverse resolution, 7µm axial resolution. Dataset: 871 B-scans from 12 adult eyes (691 diabetes-1 and 180 advanced pathologies). Method: Machine learning using stochastic modeling. Evaluation: Fraction of misclassified pixels. Results: Average error: 13%. Remarks: Observer repeatability and volume comparison not reported.

Danesh et al. [29] — OCT Mode: 870nm SD-OCT (EDI†), 14µm transverse resolution, 3.9µm axial resolution. Dataset: 100 B-scans randomly chosen from 10 eyes of 6 healthy adult subjects. Method: Graph cut segmentation via machine learning using multiresolution features. Evaluation: Mean unsigned and signed border position error (MUBPE and MSBPE). Results: MUBPE: 9.79 pixels (SD 3.29 pixels), MSBPE: 5.77 pixels (SD 2.77 pixels). Remarks: Images close to the foveal cross section are considered; observer repeatability not reported; suitability for volume estimation unclear.

Alonso-Caneiro et al. [30] — OCT Mode: 870nm SD-OCT (EDI†), 14µm transverse resolution, 3.9µm axial resolution; radial images separated by 30°, centered around the fovea; images are averaged 30 times. Dataset: 1083 pediatric B-scans from 104 healthy subjects and 90 adult B-scans from 15 healthy subjects. Method: Graph search, weights obtained using a dual gradient probabilistic method, solution using Dijkstra's algorithm. Evaluation: Dice coefficient (DC), mean absolute difference (MAD). Results: MAD: 12.6µm (SD 9.00µm) [pediatric]; 16.27µm (SD 11.48µm) [adult]; 8.01µm (SD 3.34µm) [observer repeatability]. Mean DC: 97.3% (SD 1.5%) [pediatric]; 96.7% (SD 2.1%) [adult]. Remarks: Images passing through the fovea center are considered; suitability for volume estimation unclear; observer repeatability reported in terms of MAD only for 120 scans.

Proposed — OCT Mode: 870nm SD-OCT (EDI†), 14µm transverse resolution, 7µm axial resolution; images are averaged 25 times. Dataset: 97 B-scans per eye, one eye randomly chosen per subject, 5 healthy adult subjects. Method: Structural similarity (SSIM) and adaptive Hessian matrix analysis for segmentation, tensor voting for smoothing. Evaluation: Mean absolute difference (MAD), correlation coefficient (CC), Dice coefficient (DC) and absolute volume difference (AVD). Results: MAD: 19.15µm (SD 15.98µm), mean CC: 99.64% (SD 0.27%), mean DC: 95.47% (SD 1.73%), mean AVD: 0.3046mm³; observer repeatability: MAD: 11.97µm (SD 10.34µm), mean CC: 99.82% (SD 0.16%), mean DC: 97.28% (SD 1.06%), mean AVD: 0.1240mm³; Tables 2 and 3 provide further details. Remarks: Scans taken at locations even far from the fovea; volumetric analysis performed; observer repeatability is taken into account for all scans.

† EDI (enhanced depth imaging) SD-OCT: mean wavelength greater than 800nm (often 870nm); * HP (high penetration) SD-OCT: mean wavelength greater than 1000nm (often 1060nm) [14, 31].
suggested for COB detection [25]–[27]. However, statistical methods appear more suitable to handle the inherent uncertainties involved. Accordingly, machine learning [28, 29] as well as gradient-based probabilistic methods [30] have been attempted. Yet, earlier work does not directly exploit the structural transition from granularity to uniformity across the
COB. Against this backdrop, we propose to quantify the structural dissimilarity between choroid and sclera using the yardstick of the structural similarity (SSIM) index, and hence, by choosing appropriate thresholds, find an initial estimate of the COB. Unfortunately, an SSIM-based method on its own sometimes leaves discontinuities in the initial estimate. We attempt to remove such discontinuities using an adaptive Hessian analysis step, which has elsewhere proven effective in localizing choroid vessels [24]. The resulting boundary, although it adequately separates the choroidal vessels from the scleral uniformity, is generally not smooth and deviates substantially from the smooth boundaries manually drawn by experts [32]. With a view to obtaining a close match with the latter, we finally make use of tensor voting to achieve the desired smoothing.

Despite our systematic approach, it remains problematic to clearly establish the advantages of our method over earlier work. First, experimental datasets used by various researchers differ in terms of the mean wavelengths used in SD-OCT acquisition. Further, some scans are obtained from healthy subjects, some from diseased subjects [25, 27, 28], some from adult, and some from pediatric subjects [30]. Moreover, some scans are taken only near the foveal cross section [26, 27], rather than at various vertical locations over a wide range. In addition, a variety of evaluation criteria, including correlation coefficient (CC) [25], Dice coefficient (DC) [26, 27, 30], mean border position difference (MBPD) [25, 29], and mean absolute difference (MAD) [30], have been used. Also, the image quality and subjective complexity appear to vary among the various datasets, rendering performance comparison among various algorithms difficult. For fair comparison, ideally, there should be standard datasets, fairly representing possible OCT images in an unbiased manner, as well as standardized performance measures. Indeed, the desired standardization has been achieved in certain fields, such as stereo vision [33] and electrocardiogram (ECG) signal analysis [34]. However, such standardization requires enormous resources. Until it is realized in the study of choroid segmentation, to improve contextual comprehension, we compare results generated by the proposed automated algorithm against observer repeatability figures on the same datasets, as those figures implicitly reflect image quality. Various aspects of the reported literature as well as our work are presented in Table 1.
Our algorithm is evaluated on five sets of 97 SD-OCT B-scans, each set taken from one of five healthy adult subjects, and algorithmic results are compared with those obtained manually. We take the average of the two manual segmentations performed by the same expert as the reference, following current clinical practices [17, 18], and perform both qualitative and quantitative performance assessment. First we visually compare the automated choroid segmentation vis-à-vis the manual reference. Subsequently, we turn to quantitative assessment of the proposed automated algorithm in terms of the estimation accuracy of the choroidal thickness distribution and volume. Specifically, we quantify thickness estimation accuracy in terms of four measures, namely, difference (D), absolute difference (AD), correlation coefficient (CC), and Dice coefficient (DC), using thorough statistical analysis. We also propose certain quotient measures to fairly compare the performance of our algorithm against that of previously reported ones, even when the latter used completely different datasets. However, from a clinical perspective, even an accurate estimate of choroidal thickness on its own could sometimes be inadequate in assessing choroidal involvement in chorioretinal diseases. To see this, suppose specific scans are taken at selected foveal locations. Then the thickness measure would clearly be inadequate in representing the overall choroidal distribution. In contrast, volumetric analysis of the choroid would be better suited to assess the disease course and response to treatment [19]. Against this backdrop, we automate choroidal volume measurement for the first time in the literature (to the best of our knowledge). In particular, we obtain volume measurements for the five aforementioned datasets, and compare our results against the corresponding manual references. Finally, as earlier, we propose and make use of quotient measures for volume estimation. Using the aforementioned systematic statistical analysis, we demonstrate the algorithmic results to be in general agreement with their manual counterparts.
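The agreement measures named above (D, AD, CC, DC, and the volume difference) admit straightforward definitions; the following is a minimal illustrative sketch, not the paper's code, and the voxel dimensions passed to the volume routine are assumptions supplied by the caller.

```python
import numpy as np

def thickness_agreement(t_auto, t_ref):
    """Pointwise difference (D), absolute difference (AD), and Pearson
    correlation coefficient (CC) between two thickness maps of equal shape."""
    d = t_auto - t_ref
    ad = np.abs(d)
    cc = np.corrcoef(t_auto.ravel(), t_ref.ravel())[0, 1]
    return d, ad, cc

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient (DC) between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def absolute_volume_difference(t_auto, t_ref, dx_mm, dz_mm):
    """Absolute volume difference (AVD) in mm^3, assuming each thickness
    sample (in mm) covers a dx_mm (transverse) x dz_mm (inter-scan) patch."""
    v_auto = t_auto.sum() * dx_mm * dz_mm
    v_ref = t_ref.sum() * dx_mm * dz_mm
    return abs(v_auto - v_ref)
```

For instance, summing a thickness map over its footprint with the transverse pitch and inter-scan spacing of the acquisition converts the per-column thickness estimates directly into a choroidal volume, which is then compared against the manual reference.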
In summary, our main contributions are as follows:

1. End-to-end robust automation is demonstrated for choroid segmentation. (a) Structural dissimilarity between choroid and sclera is directly measured by the SSIM index and exploited towards automated COB estimation. (b) Desired smoothness in the COB estimate is achieved using tensor voting.
Figure 2: Choroid inner boundary and choroid outer boundary, labeled manually.

2. Automated thickness estimates are reported vis-à-vis respective observer repeatability to facilitate contextual comprehension.
3. Automated choroidal volume estimation is attempted, and statistical performance analysis is carried out.
4. Quotient measures are proposed and adopted to facilitate comparison among algorithms tested on different datasets.

The rest of the paper is organized as follows. Details of the OCT image acquisition process and the proposed methodology are presented in Sec. 2. In Sec. 3, results illustrating various segmentation steps are presented, followed by choroidal thickness and volumetric analysis. Notes on implementation are given in Sec. 4. Finally, Sec. 5 concludes the paper with a discussion.

2. Materials and methods
Now we set out to automatically detect the choroid inner boundary (CIB) and the choroid outer boundary (COB), which are manually drawn by an expert in Fig. 2. We begin by describing our experimental datasets and the proposed methodology.

2.1. Experimental datasets

We consider OCT scans performed by a single retina specialist, using the Heidelberg Retina Angiograph (HRA - Spectralis, Heidelberg Engineering, Dossenheim, Germany). The Spectralis OCT device provides up to 40,000 A-scans/sec with a depth resolution of 7 µm in tissue and a transverse resolution of 14 µm, using a superluminescence diode with a mean wavelength of 870 nm. Raster imaging consisting of 97 high-resolution B-scans was performed on each eye, centered on the fovea. An internal fixation light was used to center
Figure 3: Proposed methodology as a flowchart.

the scanning area on the fovea. Each scan was 9.0 mm in length, and scans were spaced 30 µm apart from each other. Single OCT images consisting of 512 A-lines were acquired in 0.78 ms. The scans were obtained for analysis after 25 frames, averaged using built-in automatic averaging software (TruTrack; Heidelberg Engineering, Heidelberg, Germany) to obtain a high-quality choroidal image. In this work, experimental evaluation is performed on B-scans taken from five healthy adult subjects; one eye is randomly chosen per subject, and 97 B-scans are taken per eye. The third dataset has image resolution 496 × 1536 (covering a larger area), while each of the rest has 351 × 770. Manual segmentation is performed twice by the same expert on each scan to study the observer repeatability. The average of the two manual segmentations is taken as the reference, following current clinical practices [17, 18].

2.2. Proposed automated methodology

As depicted in Fig. 3, the proposed automated methodology consists of various steps: (i) denoising, (ii) localization of the choroid, and (iii) choroid outer boundary (COB) detection.

2.2.1. Denoising

Generally, OCT images are noisy (Fig. 4a), and appropriate denoising improves algorithmic accuracy. Accordingly, for denoising, we adopted the block-matching and 3D filtering
Figure 4: Choroid inner and outer boundary segmentation: (a) the 95th OCT scan from the third dataset; (b) denoised OCT scan; (c) output of Canny edge detection; (d) detected RPE inner boundary mapped onto the original slice; (e) and (f) respective images after peeling off retinal layers inside the RPE, and the RPE, with the lower boundary of the region of interest (ROI) marked in (f); (g) extracted ROI; (h) pixels with SSIM index below threshold; (i) outcome of the connected components algorithm; (j) outermost pixels in (i), i.e., the initial estimate of the COB based on SSIM; (k) refined COB estimate using adaptive Hessian analysis; (l) manually drawn COB; (m) final COB after interpolation using tensor voting; (n) segmented choroid; and (o) choroid thickness variation.
(BM3D) algorithm, which is generally accepted as the state of the art [39]. Indeed, the BM3D algorithm has been used for denoising OCT images in a similar context [40]. This algorithm is based on an enhanced sparse representation in the transform domain, where enhancement of the sparsity is achieved by grouping similar 2D image blocks into 3D data arrays called “groups”, followed by performing collaborative filtering on them. As groups exhibit high correlation, a decorrelating transform attenuates noise. Finally, denoised images are obtained by applying the inverse transform (Fig. 4b). We use the BM3D toolbox in MATLAB, with the default noise standard deviation of 25.

2.2.2. Localization of choroid
Next we locate the RPE layer, and more specifically its inner boundary. This further helps us locate the RPE outer boundary, which defines the CIB, and specify a region of interest (ROI) between the CIB and the sclera which is expected to contain the COB.

A. RPE inner boundary detection: Noting the higher brightness of the RPE compared
to that of the adjacent layers, we obtain an initial edge map using the gradient-based Canny edge operator [41], where an empirically determined threshold of 0.4 and a standard deviation of 2 for the Gaussian filter have been used. However, also notice in Fig. 4a that a sharp change in brightness occurs not only at the RPE inner boundary, but at the retinal inner boundary as well. Accordingly, the edge operator principally detects both the above boundaries alongside some secondary edges (Fig. 4c). We take the outer one of the principal edges as the RPE inner boundary and remove its discontinuities using the dilation operator (Fig. 4d) [41].
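As a rough sketch of this step (not the paper's implementation): below, the Canny stage is stood in for by a thresholded vertical Sobel gradient, the map is dilated to close small gaps, and the outermost strong edge in each column is taken as the RPE inner boundary. The `grad_thresh` parameter is illustrative, and the simple "take the deepest edge" rule assumes the principal edges dominate the edge map.

```python
import numpy as np
from scipy import ndimage

def rpe_inner_boundary(img, grad_thresh=30.0):
    """Simplified stand-in for the Canny-based RPE step: threshold the
    vertical Sobel gradient magnitude, dilate to close small gaps, and
    take the outermost (lowest) strong edge in each column."""
    g = ndimage.sobel(img.astype(float), axis=0)   # vertical gradient
    edges = np.abs(g) > grad_thresh
    edges = ndimage.binary_dilation(edges, iterations=1)
    h, w = img.shape
    boundary = np.full(w, -1, dtype=int)
    for col in range(w):
        rows = np.flatnonzero(edges[:, col])
        if rows.size:
            boundary[col] = rows.max()   # outermost principal edge
    return boundary
```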
B. Choroid inner boundary (CIB) detection: Now the various retinal layers inside the RPE inner boundary are peeled off (Fig. 4e). The RPE outer boundary, which also defines the CIB, occurs at a more or less uniform distance from the RPE inner boundary, and is then detected based on the gradient-based bright-to-dim transition. In particular, the CIB possesses a high negative gradient. Accordingly, to detect the CIB, the vertical gradient in Fig. 4e is computed using the Sobel operator [41]. Then the CIB in any column corresponds to that pixel below the RPE inner boundary where we first encounter a negative gradient. Subsequently, the RPE layer is also peeled off (Fig. 4f). At this point, we are left with detecting the COB. To this end, we select a region of interest (ROI) of sufficient thickness outside the CIB such that the ROI would contain the COB (Fig. 4g).
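The first-negative-gradient rule above can be sketched as follows; this is a minimal illustration, with `min_grad` an illustrative cutoff not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_cib(img, rpe_inner, min_grad=-1.0):
    """For each column, walk down from the RPE inner boundary and return
    the first pixel whose vertical Sobel gradient is negative (bright-to-dim
    transition) as the CIB estimate."""
    g = ndimage.sobel(img.astype(float), axis=0)  # vertical gradient
    h, w = img.shape
    cib = np.full(w, -1, dtype=int)
    for col in range(w):
        for row in range(rpe_inner[col] + 1, h):
            if g[row, col] < min_grad:
                cib[col] = row
                break
    return cib
```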
A. Initial COB estimate based on SSIM: Observe in Fig. 4a that the sclera (uniform) and the choroid (granular) have dissimilar statistical structure. We exploit such dissimilarity by taking a small window from the sclera as a template, and calculate the structural similarity (SSIM) index between the template and the neighborhood (of the same size as the template) of every pixel throughout the ROI. Here the SSIM between two windows A and B of equal dimensions (we take 5 × 5) is given by [20]

    SSIM(A, B) = [(2 µ_A µ_B + c1)(2 σ_AB + c2)] / [(µ_A² + µ_B² + c1)(σ_A² + σ_B² + c2)],    (1)

where µ_A and µ_B denote the respective means, and σ_A² and σ_B² the respective variances of windows A and B, whereas σ_AB denotes their covariance. Further, c1 (= 6.5) and c2 (= 58.5) are small constants, chosen to stabilize the expression. As the template is chosen from the sclera, we expect scleral pixels to have higher and choroidal pixels to have lower SSIM indices. Indeed, the lowest SSIM values are observed in the transition region between the sclera and the choroid, i.e., near the COB. More accurately, pixels with SSIM index below a suitable threshold (= 0.15, empirically determined) are mostly concentrated in clusters on the choroid side of the COB, and generally isolated on the scleral side (Fig. 4h). Such scleral pixels are removed using the connected components algorithm (Fig. 4i) [41]. Finally, the lower boundary of the remaining sub-threshold pixels is taken as an initial estimate of the COB (Fig. 4j), which however is generally discontinuous.
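A minimal sketch of the SSIM map of Eq. (1) and the connected-components cleanup, assuming 8-bit intensities: the 5 × 5 window size and the constants c1, c2 follow the text, while `min_size` for the cluster filter is an illustrative parameter.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy import ndimage

C1, C2 = 6.5, 58.5  # stabilizing constants from the paper (8-bit images)

def ssim_map(roi, template):
    """SSIM index of Eq. (1) between a fixed scleral template and the
    equally sized neighbourhood of every interior ROI pixel."""
    k = template.shape[0]
    t = template.astype(float)
    mu_b, var_b = t.mean(), t.var()
    win = sliding_window_view(roi.astype(float), (k, k))
    mu_a = win.mean(axis=(-2, -1))
    var_a = win.var(axis=(-2, -1))
    cov = (win * t).mean(axis=(-2, -1)) - mu_a * mu_b
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
        (mu_a**2 + mu_b**2 + C1) * (var_a + var_b + C2))

def keep_clusters(mask, min_size):
    """Drop isolated sub-threshold pixels (scleral side) with a connected
    components pass, keeping only clusters of at least min_size pixels."""
    labels, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask, labels, index=range(1, n + 1)))
    good = 1 + np.flatnonzero(sizes >= min_size)
    return np.isin(labels, good)
```

Thresholding the returned map at 0.15 and passing the result through `keep_clusters` yields the clustered sub-threshold pixels on the choroid side, whose lower boundary serves as the initial COB estimate.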
B. Refinement via eigenvalue analysis of the Hessian matrix: To remove such discontinuities, we adopt an adaptive Hessian analysis method [21], [22]. In particular, we first find whether a pixel belongs to a blood vessel (present only in the choroid). To this end, we estimate the Hessian matrix H at every pixel based on its neighborhood, compute the eigenvalues λ1 and λ2 of H, and verify whether λ1 is small and λ2 is large, which has been shown to correspond to dark tubular structures such as choroidal blood vessels [42]. Accordingly, to improve accuracy, we pick the threshold on λ2 in an adaptive manner. In particular, we divide the ROI into small windows (15 pixels wide), and compute the average intensity of each window. Each such average intensity is scaled to the range [0, 10], and mapped to predefined thresholds for λ2, varying from 40 to 105 in steps of 5. In this premise, the threshold for λ2 is adaptively chosen based on the average intensity of each window. Further, the threshold for λ1 is set to an empirically determined −5. In this setting, the outer boundaries of newly detected choroid vessels are added to the initial COB estimate, thereby removing undesirable discontinuities (Fig. 4k).
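The eigenvalue test can be sketched as follows. For simplicity this sketch uses fixed thresholds `t1`, `t2` and a fixed smoothing scale `sigma` (all illustrative), whereas the paper chooses the λ2 threshold adaptively per 15-pixel window.

```python
import numpy as np
from scipy import ndimage

def hessian_vessel_mask(img, t1=5.0, t2=3.0, sigma=2.0):
    """Per-pixel Hessian eigenvalue test for dark tubular structures:
    flag pixels where |lambda1| is small (< t1) and lambda2 is large
    (> t2), with lambda1, lambda2 ordered by magnitude."""
    f = ndimage.gaussian_filter(img.astype(float), sigma)
    fy, fx = np.gradient(f)
    fyy, _ = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    # eigenvalues of the symmetric 2x2 Hessian [[fyy, fxy], [fxy, fxx]]
    tr = fyy + fxx
    disc = np.sqrt(((fyy - fxx) / 2) ** 2 + fxy ** 2)
    e1, e2 = tr / 2 - disc, tr / 2 + disc
    swap = np.abs(e1) > np.abs(e2)
    lam1 = np.where(swap, e2, e1)   # smaller magnitude
    lam2 = np.where(swap, e1, e2)   # larger magnitude
    return (np.abs(lam1) < t1) & (lam2 > t2)
```

A dark line on a bright background has large positive curvature across the line and near-zero curvature along it, which is exactly the (small |λ1|, large λ2) signature tested here.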
C. Smooth interpolation using tensor voting: The refined COB estimate appears to divide the choroidal granularity and the scleral uniformity adequately, albeit in a jagged manner. In contrast, manual delineation of the COB by an expert is generally smooth (Fig. 4l). To achieve similar smoothness in our automated COB estimate, we make use of tensor voting [23, 43]. Directly applying tensor voting to the refined COB estimate may lead to the omission of some choroid vessels, because the final boundary may pass through the choroid layer, cutting some of the blood vessels. Therefore, preprocessing is performed on the refined COB estimate, which involves discarding boundary pixels that are close to local minima. This is done by dividing each scan into three windows along its length and fixing a local threshold based on the mean thickness value of the corresponding window; the threshold is chosen slightly greater than the local mean.
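One plausible reading of this preprocessing can be sketched as follows: split the refined COB depth profile into three windows, and in each window drop samples shallower than a threshold slightly above the window mean, marking them for the subsequent tensor-voting interpolation. The 5% margin is an illustrative choice, not a value from the paper.

```python
import numpy as np

def prune_near_minima(depth, n_windows=3, margin=1.05):
    """Discard refined-COB samples close to local minima: per window,
    drop entries below margin * (window mean depth), marking them NaN
    so a later interpolation stage can bridge them."""
    out = depth.astype(float).copy()
    for idx in np.array_split(np.arange(out.size), n_windows):
        seg = out[idx]
        thresh = margin * seg.mean()
        seg[seg < thresh] = np.nan
        out[idx] = seg
    return out
```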
First, we describe in brief the tensor voting technique, which propagates information using tokens conveying various objects’ orientation preferences (i.e., votes) to their neighbors. When such votes are tallied, objects belonging to the same structure tend to join together. The influence of a vote decays away from the object, and the saliency decay function (DF) is generally taken as Gaussian:

    DF(s, k, σ) = exp(−(s² + c k²)/σ²),    (2)

where s denotes arc length and k curvature, while c controls the degree of decay with curvature, and σ the scale of voting, which in turn determines the effective neighborhood size [43]. In particular, for a given σ, we choose s = θl/sin(θ), k = 2 sin(θ)/l, and c = −16 log(0.1) × (σ − 1)/π², where l indicates the distance between the two tokens, namely, the voter and the receiver, while θ indicates the angle between the tangent of the osculating circle at the voter and the line
Figure 5: Segmentation results: each of the five rows presents results for three representative B-scans from a total of 97. Yellow – automated; orange, maroon – two manual segmentations.
going through the voter and the receiver [43]. Now, in order to achieve the desired smoothing of the post-processed COB, we apply tensor voting in two stages. First, a relatively large σ (= 140) is applied with a view to finding a mean interpolated COB. Finally, a smaller σ (= 40) is used to smooth out small left-over transients. This results in our final estimate of the COB (Fig. 4m). Subsequently, we segment the choroid between the estimated CIB and the estimated COB (Fig. 4n), and hence obtain the thickness distribution (Fig. 4o).

3. Results and Statistical Analysis
Following the aforementioned steps, we obtain the CIB and the COB estimates in each OCT B-scan of our five datasets. In Fig. 5, we depict results for three representative B-scans per dataset. For visual comparison, manual COB delineations performed by experts are also depicted alongside. Next we present a quantitative assessment of the proposed automated algorithm in terms of the estimation accuracy of the resulting choroidal thickness distribution and volume.

3.1. Choroidal thickness

First we estimate the choroidal thickness distribution, and quantify the estimation accuracy.
Figure 6: Thickness distribution comparison for the five datasets: (a) en-face image; (b) thickness distribution in the manual reference; (c) thickness distribution obtained by the proposed automated method; (d) difference (c)−(b); (e) absolute difference |(b)−(c)|.
3.1.1. Thickness distribution
The choroidal thickness distribution obtained using the proposed method is presented for each dataset in Fig. 6c, while the corresponding distribution obtained using the manual reference, taken as the average of the two manual segmentations following current clinical practice [17, 18], is presented in Fig. 6b. For positional reference, corresponding en-face images are depicted in Fig. 6a. Further, the estimation error, measured by the difference (D) between the automated and the manual reference thickness estimates, is presented in Fig. 6d, while the corresponding absolute error/difference (AD) is presented in Fig. 6e.
Table 2: Statistical analysis: Automated versus manual methods. Notation: M1 – Manual segmentation-1, M2 – Manual segmentation-2, M – Average of M1 and M2, NC – Not comparable. Ranges are reported as Min to Max.

Bias or difference (D):

| Method | Measure (unit) | Dataset-1 | Dataset-2 | Dataset-3 | Dataset-4 | Dataset-5 | Overall | Alonso-Caneiro et al. [30] |
|---|---|---|---|---|---|---|---|---|
| Automated (Proposed vs M) | MD (SDD), µm (µm) | -12.62 (19.38) | -22.79 (20.85) | -14.48 (19.10) | -15.36 (16.62) | -11.31 (12.82) | -15.31 (17.97) | 2.35 (15.48) |
| | Min to Max, µm | -52.98 to 13.75 | -45.17 to 1.64 | -32.06 to 10.22 | -38.86 to -2.82 | -41.41 to 1.43 | -52.98 to 13.75 | – |
| Manual (M1 vs M2) | MD (SDD), µm (µm) | -2.18 (15.20) | -6.60 (14.38) | -8.17 (13.97) | -3.09 (11.29) | -2.55 (10.84) | -4.51 (13.25) | -1.34 (6.08) |
| | Min to Max, µm | -21.36 to 15.25 | -30.67 to 20.18 | -38.71 to 15.26 | -19.12 to 9.07 | -16.58 to 17.47 | -38.71 to 20.18 | – |

Absolute difference (AD):

| Method | Measure (unit) | Dataset-1 | Dataset-2 | Dataset-3 | Dataset-4 | Dataset-5 | Overall | Alonso-Caneiro et al. [30] | Improvement |
|---|---|---|---|---|---|---|---|---|---|
| Automated (Proposed vs M) | MAD (SDAD), µm (µm) | 19.61 (16.39) | 26.45 (18.38) | 19.59 (17.46) | 16.73 (15.58) | 13.38 (11.15) | 19.15 (15.98) | 16.27 (11.48) | NC |
| | Min to Max, µm | 6.15 to 52.98 | 5.63 to 45.17 | 9.98 to 32.06 | 4.71 to 39.09 | 5.26 to 45.44 | 4.71 to 52.98 | – | – |
| | CVAD, ratio | 0.84 | 0.69 | 0.89 | 0.93 | 0.83 | 0.83 | 0.71 | NC |
| Manual (M1 vs M2) | MAD (SDAD), µm (µm) | 12.63 (12.62) | 14.09 (11.76) | 14.50 (10.56) | 9.01 (8.54) | 9.64 (7.29) | 11.97 (10.34) | 8.01 (3.34) | NC |
| | Min to Max, µm | 5.71 to 22.4 | 5.55 to 30.67 | 6.20 to 38.71 | 3.99 to 22.47 | 4.19 to 18.26 | 3.99 to 38.71 | – | – |
| | CVAD, ratio | 0.99 | 0.83 | 0.73 | 0.95 | 0.75 | 0.86 | 0.42 | NC |
| Quotients | QMAD, ratio | 1.55 | 1.87 | 1.35 | 1.85 | 1.38 | 1.59 | 2.03 | 27.67% |
| | QCVAD, ratio | 0.84 | 0.83 | 1.22 | 0.98 | 1.10 | 0.96 | 1.69 | 76.04% |

Correlation coefficient (CC) (no CC values are available for Alonso-Caneiro et al. [30]):

| Method | Measure (unit) | Dataset-1 | Dataset-2 | Dataset-3 | Dataset-4 | Dataset-5 | Overall |
|---|---|---|---|---|---|---|---|
| Automated (Proposed vs M) | MCC (SDCC), % (%) | 99.71 (0.20) | 99.56 (0.28) | 99.35 (0.42) | 99.73 (0.27) | 99.87 (0.15) | 99.64 (0.27) |
| | Min to Max, % | 98.84 to 99.95 | 98.48 to 99.95 | 97.90 to 99.88 | 98.63 to 99.98 | 99.18 to 99.98 | 97.90 to 99.98 |
| | CVCC, ratio | 0.0020 | 0.0028 | 0.0042 | 0.0027 | 0.0015 | 0.0027 |
| Manual (M1 vs M2) | MCC (SDCC), % (%) | 99.85 (0.09) | 99.81 (0.13) | 99.65 (0.24) | 99.89 (0.12) | 99.91 (0.06) | 99.82 (0.14) |
| | Min to Max, % | 99.57 to 99.97 | 99.20 to 99.96 | 98.18 to 99.92 | 99.20 to 99.98 | 99.63 to 99.98 | 98.18 to 99.98 |
| | CVCC, ratio | 0.0009 | 0.0013 | 0.0024 | 0.0012 | 0.0006 | 0.0014 |
| Quotients | QMCC, ratio | 1.93 | 2.31 | 1.85 | 2.48 | 1.44 | 2.00 |
| | QCVCC, ratio | 2.22 | 2.15 | 1.96 | 2.29 | 2.44 | 1.92 |

Dice coefficient (DC):

| Method | Measure (unit) | Dataset-1 | Dataset-2 | Dataset-3 | Dataset-4 | Dataset-5 | Overall | Alonso-Caneiro et al. [30] |
|---|---|---|---|---|---|---|---|---|
| Automated (Proposed vs M) | MDC (SDDC), % (%) | 95.87 (1.97) | 93.94 (1.55) | 94.16 (1.47) | 96.33 (1.84) | 97.06 (1.78) | 95.47 (1.73) | 96.7 (2.1) |
| | Min to Max, % | 88.81 to 98.73 | 89.79 to 98.44 | 90.42 to 96.94 | 90.72 to 99.05 | 89.23 to 98.94 | 88.81 to 99.05 | – |
| | CVDC, ratio | 0.0205 | 0.0165 | 0.0156 | 0.0191 | 0.0184 | 0.0181 | 0.0217 |
| Manual (M1 vs M2) | MDC (SDDC), % (%) | 97.43 (0.82) | 96.90 (1.30) | 95.87 (1.50) | 98.11 (0.84) | 98.10 (0.59) | 97.28 (1.06) | NC |
| | Min to Max, % | 95.00 to 98.78 | 93.00 to 98.75 | 89.44 to 98.38 | 95.45 to 99.16 | 96.21 to 99.23 | 89.44 to 99.23 | – |
| | CVDC, ratio | 0.0084 | 0.0134 | 0.0156 | 0.0086 | 0.0060 | 0.0109 | NC |
| Quotients | QMDC, ratio | 1.60 | 1.95 | 1.41 | 1.94 | 1.54 | 1.66 | – |
| | QCVDC, ratio | 2.44 | 1.23 | 0.99 | 2.23 | 3.05 | 1.66 | – |
The absolute error appears to be tolerable, while a tendency to underestimate thickness is noticeable. Ideally, one desires automated algorithms to perform as well as the manual approach. We now present a thorough statistical analysis to quantitatively establish closeness to this ideal goal, as well as comparative advantage over reported algorithms.

3.1.2. Statistical performance measures
For a collection {z_k}, k = 1, ..., N, of samples of a quantity z, its mean Mz, standard deviation (SD) SDz, and coefficient of variation (CV) CVz are defined by

    Mz = (1/N) Σ_{k=1}^{N} z_k,    SDz = sqrt( (1/N) Σ_{k=1}^{N} (z_k − Mz)^2 ),    CVz = SDz / Mz,        (3)
which respectively measure the central tendency, the dispersion, and the standardized dispersion. In particular, we shall quantify the estimation accuracy of the choroidal thickness distribution in terms of four quantities, namely, difference (D), absolute difference (AD), correlation coefficient (CC) and Dice coefficient (DC). Accordingly, we define specific measures, such as MAD, MCC, SDDC, CVCC, CVDC (replacing z by the suitable specific quantity). Here note that the standardized dispersion measure CVz is meaningful only if z is nonnegative. Accordingly, since the difference (D) is signed, CVD is not meaningful and not reported. Further, statistical measures are computed for results obtained not only by the proposed algorithm (superscripted 'auto') but by manual methods (superscripted 'ref') as well. We take as reference the average of the two manual segmentations in such computations. To ensure fair comparison, our results are reported vis-à-vis observer repeatability, i.e., the consistency of performing manual segmentations multiple times by the same observer.
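These statistics can be sketched directly from (3); note the population form of the standard deviation, dividing by N rather than N − 1:

```python
import numpy as np

def stats(z):
    """Mean Mz, standard deviation SDz, and coefficient of variation CVz per (3)."""
    z = np.asarray(z, dtype=float)
    Mz = z.mean()                           # central tendency
    SDz = np.sqrt(((z - Mz) ** 2).mean())   # dispersion (population SD, 1/N)
    CVz = SDz / Mz                          # standardized dispersion; meaningful for nonnegative z
    return Mz, SDz, CVz
```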
A. Difference and Absolute difference: Suppose x_i and y_i denote the thickness values at the i-th (i = 1, ..., N) column (A-scan index) in two measurements. Then the difference (D) and the absolute difference (AD) between those measurements at the i-th column are respectively given by

    D_i = x_i − y_i,    AD_i = |x_i − y_i|.        (4)
For each scan, the corresponding mean difference (MD) and mean absolute difference (MAD) are obtained based on (3). For the 485 B-scans from the five datasets, the proposed algorithm achieves MD between -52.98µm and 13.75µm with an average of -15.31µm and standard deviation (SDD) of 17.97µm, while the MD between the two manual segmentations varies between -38.71µm and 20.18µm with an average of -4.51µm and SDD of 13.25µm. Table 2 provides further dataset-wise details. Difference plots for the five datasets are furnished in Fig. 6. Inspecting those values, there appears to be a slight negative bias in the proposed method vis-à-vis the reference manual method, indicating room for further improvement. However, the standard deviations appear to be desirably close.

Next we ignore the sign of the error and turn to the absolute difference (AD) measure. The MAD between the estimated thickness and the reference thickness for the five datasets is plotted in the first column of Fig. 7. To facilitate comparison, the MAD between the two manual segmentations, measuring observer repeatability, is also presented. The proposed automated algorithm achieves MAD between 4.71µm and 52.98µm with an average of 19.15µm and standard deviation (SDAD) of 15.98µm, while the MAD between manual
segmentations varies between 3.99µm and 38.71µm with an average of 11.97µm and SDAD of 10.34µm. Notice that although the algorithmic results are generally worse than (i.e., above) the manual results, the former outperform the latter in certain scans. This (encouragingly) indicates that our algorithm can outperform an expert, albeit on rare occasions.
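A minimal sketch of (4) together with the scan-wise summaries MD and MAD (the example arrays are hypothetical):

```python
import numpy as np

def scan_differences(x, y):
    """Per-column difference D and absolute difference AD per (4),
    summarized by their scan-wise means MD and MAD."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    D = x - y          # signed difference per A-scan column
    AD = np.abs(D)     # absolute difference per column
    return D.mean(), AD.mean()   # MD, MAD
```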
B. Correlation coefficient: For the two measurements x_i and y_i, i = 1, ..., N, seen earlier, the correlation coefficient (CC) is defined by [44]

    CC = ( Σ_{i=1}^{N} x_i y_i ) / sqrt( Σ_{i=1}^{N} x_i^2 · Σ_{i=1}^{N} y_i^2 ).        (5)
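Note that (5), as written, is an uncentered correlation (no mean subtraction); a direct sketch:

```python
import numpy as np

def correlation_coefficient(x, y):
    """Uncentered correlation between two thickness profiles, per (5)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())
```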
The scan-wise CC between the estimated choroid thickness and the reference thickness for the five datasets is now plotted in the second column of Fig. 7. In the same figure, the corresponding CC between the thickness values obtained by the two manual segmentations, measuring observer repeatability, is also plotted for comparison. The proposed automated algorithm achieves CC between 97.90% and 99.98% with an average (MCC) of 99.64% and standard deviation (SDCC) of 0.27%, across the five datasets, while the CC between manual segmentations varies between 98.18% and 99.98% with an MCC of 99.82% and SDCC of
0.14%, thus demonstrating the general reliability of our method. Even for CC, our algorithm outperforms the manual methods for certain scans, as indicated in the plots.

C. Dice coefficient: Denote the respective sets of pixel indices in the i-th (i = 1, ..., N) column of the two segmentations by C_i^x (|C_i^x| = x_i) and C_i^y (|C_i^y| = y_i). Then the Dice coefficient (DC) is defined by [26]

    DC = 2 Σ_{i=1}^{N} |C_i^x ∩ C_i^y| / ( Σ_{i=1}^{N} |C_i^x| + Σ_{i=1}^{N} |C_i^y| ).        (6)
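A set-based sketch of (6), where each column of a segmentation is represented by its set of choroidal pixel (row) indices; the input names and example sets are hypothetical:

```python
def dice_coefficient(cols_x, cols_y):
    """Dice coefficient per (6): cols_x[i] and cols_y[i] are the sets of pixel
    indices segmented as choroid in column i by the two methods."""
    inter = sum(len(cx & cy) for cx, cy in zip(cols_x, cols_y))     # total overlap
    total = sum(len(cx) for cx in cols_x) + sum(len(cy) for cy in cols_y)
    return 2 * inter / total
```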
Now the scan-wise DC between the estimated choroid thickness and the reference thickness, for each dataset, is plotted in the third column of Fig. 7, alongside the DC between the two manual segmentations as a measure of observer repeatability. The proposed algorithm achieves DC between 88.81% and 99.05% with an average (MDC) of 95.47% and SDDC of 1.73%, while the DC between the two manual segmentations varies between 89.44% and 99.23% with an average (MDC) of 97.28% and SDDC of 1.06%. Again, even for DC, our algorithm occasionally
Figure 7: Proposed vs Manual: Left column – Mean absolute difference (MAD, µm); Middle column – Correlation coefficient; Right column – Dice coefficient. Rows correspond to Datasets 1–5; the x-axis in each panel is the OCT B-scan sequence number. Notation: M1 – Manual segmentation-1, M2 – Manual segmentation-2, M – Average of both manual segmentations.
outperforms manual methods.
3.1.3. Performance comparison

A. General comparison: Next we turn to comparing the performance of our algorithm against that of reported algorithms. As alluded to earlier, performance comparison among various algorithms is potentially problematic because those are generally tested on disparate datasets. To highlight the issues, consider comparing the algorithm proposed by us against that reported by Alonso-Caneiro et al. [30]. Specifically, we have rival mean difference (MD) values of -15.31µm versus 2.35µm, and rival standard deviations on difference (SDD) of 17.97µm versus 15.48µm (see Table 2). If one considers the above numerical figures alone, one would infer the superiority of the latter algorithm. However, such simplistic comparison inherently ignores the fact that the respective datasets on which the competing algorithms are applied do not necessarily pose similar levels of difficulty in choroidal delineation. Indeed, when presented to the human expert, those datasets exhibit respective standard deviation (SDD) values of 13.25µm and 6.08µm on observer repeatability, indicating the relatively higher difficulty level posed by the former (our) dataset. In other words, the SDD for our algorithm worsens by 35.6% compared to manual methods, while such worsening factor is 155.5% for Alonso-Caneiro et al. Thus, with reference to manual methods, our approach exhibits less relative dispersion.

Now we attempt a similar comparison based on other performance criteria. In terms of correlation coefficient (CC), the proposed method achieves an overall mean CC (MCC) value of 99.64%, which compares well with the corresponding observer repeatability value of 99.82%. Further, our algorithmic MCC value of 99.64% is much higher than the value of 93% reported by Hu et al. [25] (refer to Table 1). However, as seen in the previous paragraph, the above comparison is not necessarily fair in the absence of the observer repeatability value for the latter method. Turning to the Dice coefficient (DC), the results reported by Alonso-Caneiro et al. [30] appear to improve on the earlier methods (refer to Table 1), albeit in an absolute sense, because observer repeatability values are not available. Referring to Table 2, we obtain an overall algorithmic mean DC (MDC) of 95.47%, which compares well with the corresponding observer repeatability of 97.28%. The latter is close to the algorithmic MDC of 96.7% reported by Alonso-Caneiro et al.; however, as their observer repeatability value,
expected to be higher, is unavailable, a fair comparison is ruled out.

B. Performance quotients: We now propose quotient measures that incorporate observer repeatability. Such quotient measures should be useful not only for performance comparison with reported algorithms, but also for setting benchmarks for future research. Specifically, we define two quotients. The quotient of mean, QMz, is defined by the ratio

    QMz = |Mz^auto − z^ideal| / |Mz^ref − z^ideal|,        (7)

where Mz^auto and Mz^ref, respectively, indicate the mean values obtained by the algorithm and the manual method, and z^ideal denotes the ideal value of z. A low QMz value is desirable. Specifically, QMz = 1 would make the algorithmic accuracy indistinguishable from the accuracy of manual methods in terms of mean error. Similarly, the quotient of CV, QCVz, is defined by

    QCVz = CVz^auto / CVz^ref,        (8)

where CVz^auto and CVz^ref, respectively, indicate the CV obtained by the algorithm and that obtained manually. Again, we desire QCVz, measuring relative standardized dispersion, to be low, and QCVz = 1 would put the algorithm at par with manual methods. In (7) and (8), the general quantity z can specifically be either AD, CC, or DC, as mentioned earlier.
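As an illustration, the quotients in (7) and (8) reduce to two one-line computations (function names hypothetical; taking z^ideal as 0 for AD and 1, i.e., 100%, for CC and DC is our reading, stated here as an assumption):

```python
def quotient_of_mean(m_auto, m_ref, z_ideal):
    """QMz per (7): algorithmic vs manual deviation from the ideal value."""
    return abs(m_auto - z_ideal) / abs(m_ref - z_ideal)

def quotient_of_cv(cv_auto, cv_ref):
    """QCVz per (8): ratio of standardized dispersions."""
    return cv_auto / cv_ref
```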
In this backdrop, we obtain respective overall QMAD and QCVAD values of 1.59 and 0.96. Interestingly, we obtain a QCVAD less than one, indicating that the algorithmic consistency exceeds the manual consistency. These quotient values improve upon the respective quotients of 2.03 and 1.69, reported by Alonso-Caneiro et al. [30], by respective factors of 27.67% and 76.04%. In particular, the proposed method achieves QMAD values within a small range between 1.35 and 1.87 for the five datasets, indicating consistent algorithmic performance across datasets. Such performance consistency is further buttressed by the corresponding consistent QCVAD values ranging from 0.83 to 1.22. In addition, the low overall QMCC and QCVCC (resp. QMDC and QCVDC) values of 2 and 1.92 (resp.
1.66 and 1.66), respectively, further corroborate the effectiveness of the proposed algorithm. Table 2 provides further dataset-wise details.

3.2. Choroidal volume
Finally, we turn to estimating choroidal volume. As mentioned in Sec. 2.1, each of our datasets consists of 97 B-scans taken at a uniform vertical separation of 30µm. For volume computation, one needs choroidal thickness estimates even at intervening vertical locations. To this end, we perform cubic interpolation, while converting B-scan pixels to the physical unit of length (mm). As earlier, ideally, we would like the automated volume measurement to approximate the reference manual measurement. Accordingly, to quantify the proximity between those volume estimates, we now systematically introduce certain statistical measures.

3.2.1. General performance measures
As alluded to earlier, dataset-3 has larger scan images (in terms of pixels), and pertains to a considerably larger region, compared to the other four. In dataset-3, the algorithmic volume estimate is 8.4426 mm³ compared to the manual estimate of 9.1847 mm³. For the other four, the algorithmic volume estimate ranges between 2.5853 mm³ and 3.0473 mm³, while the manual estimates range between 2.8755 mm³ and 3.1899 mm³. Now we turn to more involved performance criteria, beginning with the absolute volume difference (AVD). Specifically, the AVD between the automated algorithm and the manual reference is defined by

    AVD^auto = |vol^auto − vol^ref|,        (9)

where vol^auto denotes the volume estimated by the proposed method, and vol^ref denotes that per the manual reference (average of manual segmentations M1 and M2). Similarly, the AVD between the manual segmentations M1 and M2 is given by

    AVD^ref = |vol^M1 − vol^M2|,        (10)
Table 3: Comparison of choroidal volumes obtained from the proposed and manual segmentations. Notation: M1 – Manual segmentation-1, M2 – Manual segmentation-2, M – Average of M1 and M2; see text for other notations. In the final column, the AVD (RAVD) rows report MAVD (MRAVD), and the QAVD row reports QMAVD.

| Parameter | Unit | Method | Dataset-1 | Dataset-2 | Dataset-3 | Dataset-4 | Dataset-5 | Mean |
|---|---|---|---|---|---|---|---|---|
| Volume | mm³ | Proposed (P) | 2.9947 | 2.5853 | 8.4426 | 2.8025 | 3.0473 | 3.9745 |
| | | M1 | 3.1664 | 2.9174 | 9.4024 | 3.0124 | 3.2068 | 4.3411 |
| | | M2 | 3.1399 | 2.8335 | 8.9669 | 2.9722 | 3.1730 | 4.2171 |
| | | M | 3.1532 | 2.8755 | 9.1847 | 2.9922 | 3.1899 | 4.2791 |
| AVD (RAVD) | mm³ (%) | P and M | 0.1580 (5.01) | 0.2902 (10.09) | 0.7421 (8.08) | 0.1897 (6.34) | 0.1426 (4.47) | 0.3046 (7.12) |
| | | M1 and M2 | 0.0265 (0.84) | 0.0839 (2.92) | 0.4355 (4.74) | 0.0402 (1.34) | 0.0338 (1.06) | 0.1240 (2.90) |
| QAVD | ratio | | 5.9 | 3.4 | 1.7 | 5.6 | 4.2 | 2.4 |
which we take as a measure of observer repeatability. As comparing raw AVD values across various eyes could be unfair, we define the relative AVD (RAVD) as the ratio of AVD to vol^ref:

    RAVD^flag = AVD^flag / vol^ref,        (11)
where "flag" stands for either "auto" (automated) or "ref" (manual reference). Further, MAVD^flag (resp. MRAVD^flag) indicates the mean of the AVD^flag (resp. RAVD^flag) values taken across datasets.
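To make the definitions concrete, here is a minimal sketch of the volume computation and of the measures in (9)–(11); the function and variable names are hypothetical, and a simple rectangle-rule sum over the thickness map stands in for the cubic interpolation between B-scans described above:

```python
import numpy as np

def choroidal_volume(thickness_map, dx_mm, dy_mm):
    """Approximate choroidal volume (mm^3) from a thickness map in mm.

    thickness_map: rows = B-scans (spaced dy_mm apart, e.g. 0.03 mm for a
    30 µm separation), columns = A-scans (spaced dx_mm apart).
    """
    thickness_map = np.asarray(thickness_map, dtype=float)
    per_scan_area = thickness_map.sum(axis=1) * dx_mm  # mm^2 per B-scan
    return float(per_scan_area.sum() * dy_mm)          # mm^3

def avd(vol_a, vol_b):
    """Absolute volume difference, per (9) and (10)."""
    return abs(vol_a - vol_b)

def ravd(avd_value, vol_ref):
    """Relative AVD, per (11)."""
    return avd_value / vol_ref
```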
For the five datasets, the proposed method achieves an MRAVD^auto of 7.12% vis-à-vis an observer repeatability MRAVD^ref of 2.90%. Table 3 provides further dataset-wise details. In particular, for the datasets under consideration, the respective RAVD^auto values are 5.01%, 10.09%, 8.08%, 6.34%, and 4.47%. In contrast, the corresponding observer repeatability (RAVD^ref) figures are 0.84%, 2.92%, 4.74%, 1.34%, and 1.06%. Notice that our datasets pose dissimilar challenges to the automated and manual methods with regard to volume computation. In particular, dataset-1 is highly amenable to the manual approach, but poses significantly more challenge to the automated algorithm. On the other hand, dataset-3 is relatively less amenable to the manual approach, but poses only marginally more difficulty to the automated algorithm.
3.2.2. Performance quotients

Now, to quantify the closeness of the algorithmic performance to observer repeatability in each dataset, we calculate the quotient of absolute volume difference

    QAVD = AVD^auto / AVD^ref = RAVD^auto / RAVD^ref,        (12)

for which a low value is desirable. Similarly, the corresponding quotient QMAVD across all five datasets is defined by

    QMAVD = MAVD^auto / MAVD^ref = MRAVD^auto / MRAVD^ref.        (13)
Over our five datasets, we achieve a QMAVD value of 2.4, i.e., the proposed algorithm incurs only about 2.4 times the error of a human expert while computing choroidal volume. However, the dataset-wise QAVD values of 5.9, 3.4, 1.7, 5.6 and 4.2 vary significantly.

4. Notes on implementation
The proposed algorithm has been implemented using the MATLAB R2013a scientific programming software on a personal computer with an Intel® Core i7 processor, 16 GB system memory, and the Windows® 7 Enterprise 64-bit operating system. Further, MATLAB toolboxes have been used for BM3D denoising [45], SSIM index calculation [46] and tensor voting [47]. We recorded an average computational time of approximately 6 minutes and 12 minutes per image for datasets 1, 2, 4, 5 and dataset 3, respectively. Tensor voting turns out to be the most compute-intensive step, accounting for about 70% of the computation.

5. Discussion
In this paper, we proposed a novel technique for automated choroid layer segmentation in OCT images of the posterior visual section. In particular, noting that the choroid and the sclera layers are structurally dissimilar and not associated with sharp intensity variation, we
departed from the usual gradient-based approach and adopted a technique based on the structural similarity (SSIM) index. Further, upon smoothing using tensor voting, we obtained an automated choroid segmentation that exhibits close visual agreement with manual segmentation. Based on our segmentation, we then estimated the choroidal thickness distribution and volume, and quantified various aspects of estimation accuracy. Our automated choroidal volume estimation is, to the best of our knowledge, the first such attempt.

We presented exhaustive statistical quantification to (i) establish the closeness of the automated estimates to the corresponding manual ones, (ii) demonstrate the superior performance of our method over known results, and (iii) facilitate future benchmarking. In particular, we achieved a correlation of greater than 99.6% between choroidal thickness distributions obtained algorithmically and manually. Also, our results improve upon those of Alonso-Caneiro et al. [30] by respective factors of 27.67% and 76.04% in two quotient measures defined relative to observer repeatability. Of course, standardized datasets would be ideal for comparison among competing algorithms, which, however, could be impractical due to the large resource requirement. Meanwhile, the said quotients are defined to facilitate comparison among algorithms that have been tested using disparate datasets.

At this point, it is worthwhile to address a methodological issue. Recall that we picked the average (M) of the two manual segmentations (M1 and M2) as the reference, following reported practice [17, 18]. In this context, one may argue that the better one between M1 and M2 should serve as a more natural reference. Of course, one cannot directly choose one over the other. Instead, including M as well, one can compare the proposed algorithm with each of M1, M2 and M under various criteria. In terms of absolute difference (AD), proposed versus M1 (PvsM1, henceforth) and proposed versus M2 (PvsM2, henceforth) results are respectively 19.75µm and 20.12µm on average. In comparison, proposed versus M (PvsM, henceforth) gives 19.15µm. In terms of correlation coefficient (CC), PvsM1 = 99.60%, PvsM2 = 99.60%, and PvsM = 99.64%. In terms of Dice coefficient (DC), PvsM1 = 95.39%, PvsM2 = 95.28%, and PvsM = 95.47%. In other words, according to all three criteria, PvsM outperforms each of PvsM1 and PvsM2 (although the latter two are generally numerically close to the former). This experimentally validates our choice of the average segmentation as
the manual reference, as well as the underlying intuition that the averaging process tends to remove undesired subjective fluctuations.

In the near future, we envision our algorithm having significant clinical impact. In particular, recall that information about choroidal thickness was traditionally obtained using ICG and ultrasonography before the development of EDI-OCT. The former devices provide low-resolution images with poor repeatability and mainly qualitative information. With the advent of EDI-OCT, quantitative assessment of the choroid became possible. However, measurement of choroidal thickness provides only single sectional information about the choroid. In contrast, measurement of choroidal volume at the macular region would provide global information about the disease and help to assess treatment response. This information could be applicable immediately in diseases such as central serous chorioretinopathy and Vogt-Koyanagi-Harada syndrome, where the choroid is the primary site of involvement [49, 50].
Clinical studies have already shown a decrease in choroidal thickness in response to treatment in these diseases, and we anticipate that the assessment of choroidal volume in such situations would provide a broader perspective on the treatment response. However, time-consuming manual methods have till now remained the main obstacle to clinical adoption of the aforementioned assessment. Our automated method is well suited to remove this obstacle.

In the long run, we envisage improved quantification and visualization of the choroidal layer and vessels in 3D to enable next-generation screening and diagnosis [51], in which direction certain general mathematical techniques have been suggested [21, 22], and significant specific research has been initiated [24]. In closing, we hasten to add that although we argued for the appropriateness of SSIM and tensor voting for COB detection and hence choroidal thickness estimation, a vastly different approach could be appropriate in a different context for detecting a different feature (see, e.g., [52] for optic disk detection), and for dealing with thickness (see, e.g., [53] for the thickness of the nerve fiber layer).

Acknowledgment

This work was supported by the Department of Electronics and Information Technology (DeitY), Govt. of India, under the Cyber Physical Systems Innovation Project: 13(6)/2010-
CC&BT. References [1] Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, et al. Optical coherence tomog-
ip t
raphy. Science. 1991;254(5035):1178–1181.
[2] Wojtkowski M, Kaluzny B, Zawadzki RJ. New directions in ophthalmic optical coherence tomography.
[3] Nickla DL, Wallman J.
The multifunctional choroid.
cr
Optometry & Vision Science. 2012;89(5):524–542.
Progress in retinal and eye research.
us
2010;29(2):144–168.
[4] Bill A, Sperber G, Ujiie K. Physiology of the choroidal vascular bed. International ophthalmology. 1983;6(2):101–107.
an
[5] Parver LM. Temperature modulating action of choroidal blood flow. Eye. 1991;5(2):181–185. [6] Alm A, Nilsson SF. Uveoscleral outflow–a review. Experimental eye research. 2009;88(4):760–768. [7] Van Norren D, Tiemeijer L. Spectral reflectance of the human eye. Vision research. 1986;26(2):313–320.
M
[8] Dhoot DS, Huo S, Yuan A, Xu D, Srivistava S, Ehlers JP, et al. Evaluation of choroidal thickness in retinitis pigmentosa using enhanced depth imaging optical coherence tomography. British Journal of Ophthalmology. 2013;97(1):66–69.
ed
[9] Imamura Y, Fujiwara T, Margolis R, Spaide RF. Enhanced depth imaging optical coherence tomography of the choroid in central serous chorioretinopathy. Retina. 2009;29(10):1469–1473. [10] Manjunath V, Goren J, Fujimoto JG, Duker JS. Analysis of choroidal thickness in age-related macular
2011;152(4):663–668.
pt
degeneration using spectral-domain optical coherence tomography. American journal of ophthalmology.
[11] Esmaeelpour M, Povaˇzay B, Hermann B, Hofer B, Kajic V, Hale SL, et al. Mapping choroidal and retinal
Ac ce
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
thickness variation in type 2 diabetes using three-dimensional 1060-nm optical coherence tomography. Investigative ophthalmology & visual science. 2011;52(8):5311–5316. [12] Sim DA, Keane PA, Mehta H, Fung S, Zarranz-Ventura J, Fruttiger M, et al. Repeatability and reproducibility of choroidal vessel layer measurements in diabetic retinopathy using enhanced depth optical coherence tomography. Investigative ophthalmology & visual science. 2013;p. IOVS–12. [13] Margolis R, Spaide RF. A pilot study of enhanced depth imaging optical coherence tomography of the choroid in normal eyes. American journal of ophthalmology. 2009;147(5):811–815. [14] Miki A, Ikuno Y, Jo Y, Nishida K. Comparison of enhanced depth imaging and high-penetration optical coherence tomography for imaging deep optic nerve head and parapapillary structures. Clinical ophthalmology (Auckland, NZ). 2013;7:1995.
26
Page 26 of 31
[15] Adler FH, Kaufman PL, Levin LA, Alm A. Adler's Physiology of the Eye. Elsevier Health Sciences; 2011.
[16] Spaide RF. Enhanced depth imaging optical coherence tomography of retinal pigment epithelial detachment in age-related macular degeneration. American Journal of Ophthalmology. 2009;147(4):644–652.
[17] Zijdenbos AP, Forghani R, Evans AC. Automatic pipeline analysis of 3-D MRI data for clinical trials: application to multiple sclerosis. IEEE Transactions on Medical Imaging. 2002;21(10):1280–1291.
[18] Armato SG, Sensakovic WF. Automated lung segmentation for thoracic CT: impact on computer-aided diagnosis. Academic Radiology. 2004;11(9):1011–1021.
[19] Chhablani J, Barteselli G, Wang H, El-Emam S, Kozak I, Doede AL, et al. Repeatability and reproducibility of manual choroidal volume measurements using enhanced depth imaging optical coherence tomography. Investigative Ophthalmology & Visual Science. 2012;53(4):2274–2280.
[20] Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing. 2004;13(4):600–612.
[21] Sato Y, Nakajima S, Shiraga N, Atsumi H, Yoshida S, Koller T, et al. Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical Image Analysis. 1998;2(2):143–168.
[22] Chen J, Amini AA. Quantifying 3-D vascular structures in MRA images using hybrid PDE and geometric deformable models. IEEE Transactions on Medical Imaging. 2004;23(10):1251–1262.
[23] Medioni G, Tang CK, Lee MS. Tensor voting: theory and applications. Proceedings of RFIA, Paris, France. 2000;3.
[24] Zhang L, Lee K, Niemeijer M, Mullins RF, Sonka M, Abràmoff MD. Automated segmentation of the choroid from clinical SD-OCT. Investigative Ophthalmology & Visual Science. 2012;53(12):7510–7519.
[25] Hu Z, Wu X, Ouyang Y, Ouyang Y, Sadda SR. Semiautomated segmentation of the choroid in spectral-domain optical coherence tomography volume scans. Investigative Ophthalmology & Visual Science. 2013;54(3):1722–1729.
[26] Tian J, Marziliano P, Baskaran M, Tun TA, Aung T. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images. Biomedical Optics Express. 2013;4(3):397–411.
[27] Lu H, Boonarpha N, Kwong MT, Zheng Y. Automated segmentation of the choroid in retinal optical coherence tomography images. In: 35th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC). IEEE; 2013. p. 5869–5872.
[28] Kajić V, Esmaeelpour M, Považay B, Marshall D, Rosin PL, Drexler W. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model. Biomedical Optics Express. 2012;3(1):86–103.
[29] Danesh H, Kafieh R, Rabbani H, Hajizadeh F. Segmentation of choroidal boundary in enhanced depth imaging OCTs using a multiresolution texture based modeling in graph cuts. Computational and Mathematical Methods in Medicine. 2014.
[30] Alonso-Caneiro D, Read SA, Collins MJ. Automatic segmentation of choroidal thickness in optical coherence tomography. Biomedical Optics Express. 2013;4(12):2795–2812.
[31] Spaide RF, Koizumi H, Pozonni MC. Enhanced depth imaging spectral-domain optical coherence tomography. American Journal of Ophthalmology. 2008;146(4):496–500.
[32] Srinath N, Patil A, Kumar VK, Jana S, Chhablani J, Richhariya A. Automated detection of choroid boundary and vessels in optical coherence tomography images. In: 36th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC). IEEE; 2014. p. 166–169.
[33] http://vision.middlebury.edu/stereo/
[34] http://www.physionet.org/physiobank/database/#ecg
[35] Chiu SJ, Li XT, Nicholas P, Toth CA, Izatt JA, Farsiu S. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Optics Express. 2010;18(18):19413–19428.
[36] Yang Q, Reisman CA, Wang Z, Fukuma Y, Hangai M, Yoshimura N, et al. Automated layer segmentation of macular OCT images using dual-scale gradient information. Optics Express. 2010;18(20):21293–21307.
[37] Garvin MK, Abràmoff MD, Wu X, Russell SR, Burns TL, Sonka M. Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Transactions on Medical Imaging. 2009;28(9):1436–1447.
[38] Haeker M, Sonka M, Kardon R, Shah VA, Wu X, Abràmoff MD. Automated segmentation of intraretinal layers from macular optical coherence tomography images. In: Proc. SPIE. vol. 6512; 2007. p. 651214.
[39] Dabov K, Foi A, Katkovnik V, Egiazarian K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing. 2007;16(8):2080–2095.
[40] Mahajan NR, Donapati RCR, Channappayya SS, Vanjari S, Richhariya A, Chhablani J. An automated algorithm for blood vessel count and area measurement in 2-D choroidal scan images. In: IEEE International Conference of the Engineering in Medicine and Biology Society (EMBC); 2013.
[41] Gonzalez RC. Digital Image Processing. Pearson Education India; 2009.
[42] Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer; 1998. p. 130–137.
[43] Medioni G, Kang SB. Emerging Topics in Computer Vision. Prentice Hall PTR; 2004.
[44] Sörnmo L, Laguna P. Bioelectrical Signal Processing in Cardiac and Neurological Applications. Academic Press; 2005.
[45] http://www.cs.tut.fi/~foi/GCF-BM3D/
[46] http://www.ece.uwaterloo.ca/~z70wang/research/ssim/
[47] http://in.mathworks.com/matlabcentral/fileexchange/21051-tensor-voting-framework
[48] Liu X, Tanaka M, Okutomi M. Noise level estimation using weak textured patches of a single noisy image. In: 19th IEEE International Conference on Image Processing (ICIP). IEEE; 2012. p. 665–668.
[49] Maruko I, Iida T, Sugano Y, Oyamada H, Sekiryu T, Fujiwara T, et al. Subfoveal choroidal thickness after treatment of Vogt-Koyanagi-Harada disease. Retina. 2011;31(3):510–517.
[50] Maruko I, Iida T, Sugano Y, Furuta M, Sekiryu T. One-year choroidal thickness results after photodynamic therapy for central serous chorioretinopathy. Retina. 2011;31(9):1921–1927.
[51] Kiran Kumar V, Chandra TR, Jana S, Richhariya A, Chhablani J. 3D visualization and mapping of choroid thickness based on optical coherence tomography: a step-by-step geometric approach. In: International Conference on 3D Imaging (IC3D). IEEE; 2013. p. 1–8.
[52] Ramakanth SA, Babu RV. Approximate nearest neighbour field based optic disk detection. Computerized Medical Imaging and Graphics. 2014;38(1):49–56.
[53] Odstrcilik J, Kolar R, Tornow RP, Jan J, Budai A, Mayer M, et al. Thickness related textural properties of retinal nerve fiber layer in color fundus images. Computerized Medical Imaging and Graphics. 2014;38(6):508–516.
Highlights (for review)
- Choroid is statistically separated from sclera using the structural similarity index.
- Smoothness in choroid delineation is achieved using tensor voting.
- Automated choroidal volume estimation is attempted for the first time.
- Choroidal thickness and volume estimates are given vis-à-vis observer repeatability.
- Quotients are defined to fairly compare among methods tested on disparate datasets.
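The first highlight rests on the structural similarity (SSIM) index of Wang et al. [20]. As an illustrative sketch only — the patch size, the synthetic "choroid-like" and "sclera-like" patches, and the idea of comparing a sliding window against a reference texture are assumptions for demonstration, not the paper's actual pipeline — a per-patch SSIM can be computed as:

```python
import numpy as np

def ssim_patch(x, y, L=255.0, k1=0.01, k2=0.03):
    """SSIM between two equal-size patches, per Wang et al. [20].
    L is the dynamic range of pixel values (255 for 8-bit images)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()             # luminance terms
    vx, vy = x.var(), y.var()               # contrast terms
    cov = ((x - mx) * (y - my)).mean()      # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Hypothetical use: compare a window sliding down an OCT A-scan against a
# patch of known choroidal texture; a sharp SSIM drop suggests the
# transition into the more homogeneous sclera.
rng = np.random.default_rng(0)
choroid_like = rng.integers(0, 256, (8, 8))  # textured (vessel-rich) patch
sclera_like = np.full((8, 8), 128)           # near-homogeneous patch
print(ssim_patch(choroid_like, choroid_like))  # identical patches: SSIM ~ 1
print(ssim_patch(choroid_like, sclera_like))   # dissimilar: well below 1
```

The constants c1 and c2 keep the quotient stable when means or variances approach zero, which matters in the nearly texture-free scleral region.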
Graphical Abstract (for review)