A graph-based approach to automated EUS image layer segmentation and abnormal region detection


ARTICLE IN PRESS


Neurocomputing xxx (xxxx) xxx

Contents lists available at ScienceDirect

Neurocomputing journal homepage: www.elsevier.com/locate/neucom

Xu Chen a, Yiqun Hu b, Zhihong Zhang a,∗, Beizhan Wang a, Lichi Zhang c, Fei Shi d, Xinjian Chen d, Xiaoyi Jiang e

a Xiamen University, Xiamen, China
b Zhongshan Hospital, Xiamen University, Xiamen, China
c Shanghai Jiao Tong University, Shanghai, China
d Soochow University, Suzhou, China
e University of Münster, Münster, Germany

Article info

Article history: Received 5 June 2017; Revised 23 January 2018; Accepted 21 March 2018; Available online xxx.

Keywords: EUS image; Layer segmentation; Abnormal region detection; Early carcinoma diagnosis

Abstract

Endoscopic ultrasonography (EUS) has shown great advantages in the diagnosis and staging of gastrointestinal malignant tumors. However, EUS-based diagnosis is limited by variability in the examiner's subjective interpretation when differentiating between normal tissue and early esophageal carcinoma. In this paper, we propose a novel approach for automatic abnormal region detection in esophageal EUS images; the contribution is three-fold. First, we present a series of preprocessing strategies developed specifically for the enhancement of EUS images to aid estimation in the subsequent steps. Second, we provide an automatic layer segmentation method based on a multiple-surface graph search with incorporated geometric constraints, which is applied to segment the EUS images into five discernible layers. Third, we introduce a novel feature extraction strategy that utilizes features from each column of the segmented layers. An SVM classifier is then applied to perform the normal versus early esophageal carcinoma classification. Subsequently, a clustering method is used to assemble the abnormal columns so as to detect the abnormal region. Experimental results show that our method is robust even on noisy EUS images and achieves high accuracy in detecting abnormal regions. © 2018 Elsevier B.V. All rights reserved.

1. Introduction

Digestive system malignant tumors are common diseases in humans, and the early detection of gastrointestinal carcinoma is crucial for patient prognosis. However, the accurate diagnosis of gastrointestinal early carcinoma remains a difficult task, even for experienced doctors. Since first being reported in the early 1980s, EUS has been proven to be a more powerful imaging procedure than other imaging methods (e.g., CT, MRI) for the diagnosis and staging of gastrointestinal malignant tumors [1–3]. EUS can produce high-resolution ultrasonography images of the esophagus due to the proximity of the transducer to the esophagus, thus avoiding interference caused by other organs. More specifically, EUS can visualize the walls of the esophagus and present a five-layer bright-dark-bright-dark-bright image. As presented in Fig. 1(b), the layers from inside to outside correspond



Corresponding author. E-mail address: [email protected] (Z. Zhang).

to the superficial mucosa, the mucosa, the submucosa, the muscularis propria and the adventitia, respectively. Such layer visualization is beneficial to determining the depth of tumor invasion, because early esophageal cancer is often visualized as hypoechoic disruption of the first three wall layers (see Fig. 1(d)). However, EUS-based diagnosis is usually limited by the variability of the doctors' subjective interpretation of images [4], and the quality of EUS images is greatly affected by technical limitations and tumor-related factors. Consequently, biopsies are indispensable for specifying target cells, but they further complicate the detection procedure. Thus, it is of great value to develop a new technology complementary to manual inspection for the diagnosis of early carcinoma. In particular, a computer-assisted EUS-based diagnostic system would be a great help in simplifying the decision-making process as well as in guiding endoscopic ultrasonography-fine needle aspiration (EUS-FNA). Wang et al. [5] demonstrate that the 5-year survival after early cancer detection is over 90%, in stark contrast to esophageal cancers that present with symptoms. However, early esophageal carcinoma remains a major diagnostic challenge today because it

https://doi.org/10.1016/j.neucom.2018.03.083 0925-2312/© 2018 Elsevier B.V. All rights reserved.

Please cite this article as: X. Chen, Y. Hu and Z. Zhang et al., A graph-based approach to automated EUS image layer segmentation and abnormal region detection, Neurocomputing, https://doi.org/10.1016/j.neucom.2018.03.083


usually does not show any symptoms, thus resulting in poor prognosis. Once symptoms appear, the esophageal cancer has already invaded the muscularis propria and spread to local lymph nodes [6], and the effect of treatment falls dramatically. Texture analysis, a powerful tool in digital image processing (DIP) and computer vision, is a promising scheme for overcoming this problem due to its ability to characterize multiple patterns in an image. It can quantify the intuitive qualities of patterns in pixel intensities in terms such as smooth, gradient and bumpy, which makes it helpful in diagnosing several diseases in clinical practice. Papers [7–9] have already demonstrated the potential of texture analysis in improving tumor diagnosis. However, sonographic texture analysis approaches for EUS image classification are still few, and most of them focus on the analysis and diagnosis of pancreas-related diseases. For example, papers [10–12] successfully make use of neural networks to analyze the differences between pancreatic cancer and non-cancer EUS images. The method of [11] achieved high sensitivity (93%) and specificity (92%) as well as outstanding positive predictive value (PPV, 87%) and negative predictive value (NPV, 96%). Two works [13,14] adopted a simple support vector machine (SVM) [15], a successful supervised learning model, to differentiate pancreatic cancer from chronic pancreatitis. Moreover, Zhang et al. [16] proposed a refined SVM model based on a high-order graph matching kernel, leading to an optimal prediction of the types of esophageal lesions. Olowe et al. [17] determined the feasibility of differentiating benign from malignant mediastinal and abdominal lymph nodes using spectrum analysis of ultrasound backscatter from EUS. Loren et al. [18] showed that it is feasible to analyze lymph node metastasis in patients with esophageal carcinoma by computer-assisted evaluation of EUS images.

Fig. 1. EUS images of normal esophagus and early carcinoma.

Based on the above literature review, two issues need to be addressed. First, although much improvement has been achieved in the works mentioned above on EUS image analysis, they cannot automatically detect the abnormal regions in EUS images: the region of interest (ROI) must be manually outlined by endoscopic specialists in advance, and the diagnostic performance relies greatly on the effectiveness of this ROI selection. Second, for the diagnosis of early esophageal carcinoma, research using DIP and pattern recognition is rarely explored.

In this paper, we focus on developing an automatic abnormal region detection approach based on EUS images. In contrast with previous work, the proposed method does not require the ROI to be manually outlined in advance. Consequently, the system is fully automatic and can work without any manual intervention. In addition, several preprocessing methods specific to EUS images, as well as a layer segmentation method based on optimal surface search [19,20], are also proposed. To the best of our knowledge, the proposed method is the first approach for fully automatic diagnosis of early esophageal carcinoma in EUS images.

2. Method overview

The proposed method consists of three major steps: (1) EUS image preprocessing, (2) EUS image layer segmentation using a modified multiple-surface graph search, and (3) abnormal region detection. Fig. 2 shows the flowchart of the proposed method. In preprocessing, each of the original EUS images is first transformed to a polar coordinate image, where columns and rows correspond to angle and distance from the center of the image, respectively. Representation of EUS images in polar coordinates is important for facilitating the description of local image regions in terms of their radial and tangential characteristics. It also facilitates a number of subsequent detection steps, such as EUS image layer segmentation and the further feature selection. Then a blind deconvolution algorithm is applied for image restoration. Subsequently, a simple image sharpening method makes the boundaries between adjacent layers clearer. These three parts are described in Section 3. Section 4 illustrates in detail how the multiple-surface graph search method segments the EUS image into all discernible layers. We call the boundaries between adjacent layers curves, as illustrated in Fig. 3. Curve 1 is detected first with a specified cost function, then curve 5 is detected, and last the other three curves are detected simultaneously, with curves 1 and 5 acting as constraints. The final step is to extract the morphological features from the obtained segmented layers and thus perform classification, which is presented in Section 5. To be specific, once the layers are obtained, features are extracted column by column in the region between curves 1 and 5. A texture-classification-based method is adopted to produce the classification results. Furthermore, a clustering method is utilized to locate the abnormal region.

3. Pre-processing

Playing an important role in our method, preprocessing provides a pre-segmented surface for the multiple-surface graph search and makes our approach fully automatic. It consists of three steps: (1) representation of EUS images in polar coordinates, (2) removal of catheter-induced artifacts by a deconvolution technique, and (3) sharpening of the layer boundaries by a non-linear filter. As presented in Fig. 4, EUS images are first transformed from Cartesian to polar coordinates. In this paper, an EUS image of size X × Y is transformed to a polar-form image of size Xp × Yp, where Yp = max(X/2, Y/2) denotes radii, and Xp denotes angles and should be specified manually. The origin is at the center of the image. The polar coordinate transformation formula is as follows [21].





r = sqrt((x − X/2)^2 + (y − Y/2)^2),  θ = atan2(y − Y/2, x − X/2)    (1)

where (θ, r) represent the angular (horizontal) and radial (vertical) coordinates in the transformed image, respectively. Each column in the polar-form image then shows a series of ultrasonic signal detection responses in a certain direction, in which each pixel represents the signal strength in terms of pixel intensity. Subsequently, the problem of finding five circle-like curves in the original image becomes that of identifying five horizontal curves in the polar image. The latter problem can be tackled effectively with various segmentation methods or path-finding approaches.
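As a concrete illustration, Eq. (1) can be inverted to resample a Cartesian EUS frame into polar form. The NumPy sketch below uses nearest-neighbour sampling; the angular resolution n_angles is an assumed free parameter (the paper leaves Xp to be specified manually), and this is an illustrative reconstruction, not the authors' implementation:

```python
import numpy as np

def to_polar(img, n_angles=360):
    """Nearest-neighbour polar resampling of a Cartesian EUS frame.

    Output is indexed [angle, radius], matching the paper's I(x, y)
    with x the angle and y the distance from the image centre; Eq. (1):
    r = sqrt((x - X/2)^2 + (y - Y/2)^2), theta = atan2(y - Y/2, x - X/2).
    """
    X, Y = img.shape
    Yp = max(X // 2, Y // 2)                       # number of radii
    thetas = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    rs = np.arange(Yp)
    # Inverse mapping: for every (theta, r), locate the Cartesian source pixel.
    xs = np.round(X / 2 + rs[None, :] * np.cos(thetas)[:, None]).astype(int)
    ys = np.round(Y / 2 + rs[None, :] * np.sin(thetas)[:, None]).astype(int)
    return img[np.clip(xs, 0, X - 1), np.clip(ys, 0, Y - 1)]
```

With this orientation, the five circle-like layer boundaries of the original frame become roughly horizontal runs along the angle axis, which is exactly the setting the later graph search assumes.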



Fig. 2. Flowchart of the proposed method.

Fig. 3. Illustration of curves.

In practice, EUS images are usually blurred to varying degrees. As presented in Fig. 4, boundaries between layers in EUS images are usually indistinct. Such low quality may be ascribed to various causes, such as target motility and instrument error. Commonly, the blurring effect can be modeled as a convolution: f ∗ g = h, where f and h are the raw and observed signals (images), respectively, and g denotes the blur kernel. The objective of deconvolution is to solve this equation, i.e., to estimate the raw signal (image). In this paper, in order to avoid undesirable segmentation results, a deconvolution method [22] is used to improve the quality of the images by estimating the de-blurred image. Finally, a non-linear filter is applied to make the boundaries between adjacent layers clearer, which benefits the following layer segmentation procedure.

Fig. 4. Illustration of image transformation. (a) An original EUS image in Cartesian coordinates. (b) A transformed EUS image in polar coordinates.

Let Col_i = (p_i1, p_i2, . . . , p_iYp) denote the ith column in P, where p_ij (1 ≤ i ≤ Xp, 1 ≤ j ≤ Yp) represents the intensity of the jth pixel of the ith column. The δ-neighbourhood of a pixel p_ij in the ith column is denoted by U_ij(δ) = {p_i,max(0,j−δ), p_i,max(0,j−δ+1), . . . , p_i,min(Yp−1,j+δ)}, i.e., the set of all pixels in the ith column at distance at most δ from p_ij. Each pixel is compared with the median intensity of its neighbourhood: if it is larger, it is doubled; otherwise, it is halved. This is formulated as:



p_ij = min(p_ij × 2, 255) if p_ij ≥ median(U_ij(δ)), and p_ij / 2 otherwise.    (2)
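The sharpening rule of Eq. (2) can be sketched per column as a direct, unoptimized NumPy transcription; the neighbourhood radius delta is a free parameter here, since the paper does not report the value of δ:

```python
import numpy as np

def sharpen_column(col, delta=3):
    """Non-linear boundary sharpening of one polar-image column, Eq. (2):
    double a pixel (capped at 255) if it is >= the median of its
    delta-neighbourhood, otherwise halve it."""
    n = len(col)
    out = np.empty(n, dtype=float)
    for j in range(n):
        lo, hi = max(0, j - delta), min(n - 1, j + delta)
        med = np.median(col[lo:hi + 1])
        out[j] = min(col[j] * 2, 255) if col[j] >= med else col[j] / 2
    return out
```

The effect is to push pixels on either side of a bright-dark boundary further apart in intensity, which is exactly what the subsequent gradient-based cost functions reward.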

4. Layer segmentation For layer segmentation, various approaches have been proposed and well-researched, such as dynamic programming approaches



Fig. 5. Illustration of graph construction. Col1 (x) and Col2 (x) are two corresponding columns in adjacent subgraphs. (a) The intra subgraph edges. (b) The inter subgraph edges.

[23–27]. As shown in [23], a dynamic programming approach can be used to segment multiple layers simultaneously, but its time cost grows exponentially as the number of layers increases. Thus, in our case the optimal surface approach [19,20] is a better choice. The multiple-surface graph search is an optimization process for locating a set of target surfaces with minimal cost. Its efficiency primarily lies in the integration of shape information and the globally optimal 3-D delineation graph cut technique [28]. Although the traditional optimal surface approach has mostly been applied in 3-D space [29–33], it also works naturally in 2-D space, as in our EUS images. The multiple-surface graph search is composed of three key components: (1) graph construction; (2) cost function design; and (3) optimal surface recovery. The first two components are usually implemented within a single process to clearly capture the properties and correlations of the surfaces. Once the graph is formed, the optimal surfaces can be detected using the traditional max-flow/min-cut algorithm [28].

4.1. Graph construction

Suppose we are going to find n optimal curves in a polar-form EUS image I of size Xp × Yp, in which each pixel is denoted by I(x, y). Each curve of interest can be defined as a mapping function Si(x) → y, x ∈ {0, 1, . . . , Xp − 1}, y ∈ {0, 1, . . . , Yp − 1}, i ∈ {1, 2, . . . , n}, which means that the desired curve intersects each column at exactly one pixel. The curves are assumed to be geometrically ordered, i.e., Si+1 is above Si. Let c(x, y) denote the cost of each pixel; the cost of a curve is then the summation of the pixel costs along it, and the question is how to find n curves with minimal total cost. The key point of the optimal surface search approach lies in building a specific weighted directed graph.
In this way, the cost of the optimal curves in an EUS image equals that of the minimum closed set in the corresponding graph. A closed set of a directed graph is a subset of nodes such that no directed edge leaves the set. As presented in Fig. 5, the graph is constructed as follows. • Firstly, for each curve of interest, a subgraph is derived. Let Gi = (Vi, Ei) denote the subgraph corresponding to the ith curve. Every node Vi(x, y) ∈ Vi represents exactly one pixel I(x, y), whose cost w(x, y) is computed as follows.



w(x, y) = c(x, y) if y = 0, and c(x, y) − c(x, y − 1) otherwise.    (3)

As illustrated in Fig. 5(a), the intra-subgraph edges Ei are constructed according to the following rules: (1) along the bottom row, connect each node Vi(x, 0) with its left and right neighbours (i.e., Vi(x − 1, 0) and Vi(x + 1, 0)); (2) along each column, connect each node Vi(x, y) with the node under it (i.e., Vi(x, y − 1)); (3) along each column, connect each node Vi(x, y) with the nodes located in the adjacent columns and Δx units under it along the y-direction (i.e., Vi(x − 1, y − Δx) and Vi(x + 1, y − Δx)), where Δx is a parameter that controls the smoothness of the curve. Note that in our case the polar-form EUS image wraps around along the x-direction, which means that column 0 and column Xp − 1 are considered adjacent; thus for simplicity we define Vi(−1, y) = Vi(Xp − 1, y). The construction rules of the intra-subgraph edges Ei are detailed as follows.

Ei = {⟨Vi(x, 0), Vi(x − 1, 0)⟩}
   ∪ {⟨Vi(x, 0), Vi(x + 1, 0)⟩}
   ∪ {⟨Vi(x, y), Vi(x, y − 1)⟩ | y > 0}
   ∪ {⟨Vi(x, y), Vi(x + 1, y − Δx)⟩ | y ≥ Δx}
   ∪ {⟨Vi(x, y), Vi(x − 1, y − Δx)⟩ | y ≥ Δx}

• Then, adjacent subgraphs are connected to form the whole graph. In this paper, it is assumed that subgraphs are ordered by position, like their corresponding curves; for instance, subgraph Gi+1 is above subgraph Gi. Suppose that Coli(x) and Coli+1(x) denote two corresponding columns with the same x coordinate in Gi and Gi+1. As illustrated in Fig. 5(b), the inter-subgraph edges, denoted by Es, are constructed according to the following rules: (1) connect each node Vi(x, y) in Coli(x) with y ≥ δu to the node in Coli+1(x) that is δu units under it along the y-direction (i.e., Vi+1(x, y − δu)); (2) connect each node Vi+1(x, y) in Coli+1(x) with y < Yp − δl to the node in Coli(x) that is δl units above it along the y-direction (i.e., Vi(x, y + δl)), where δl and δu are two parameters that control the minimum and maximum distances between two adjacent curves, respectively. The construction rules of the inter-subgraph edges Es are detailed



as follows.

Es = {⟨Vi(x, y), Vi+1(x, y − δu)⟩ | i < n, y ≥ δu}
   ∪ {⟨Vi+1(x, y), Vi(x, y + δl)⟩ | i > 0, y < Yp − δl}

As elaborated above, a graph G = (V, E) is derived from the polar-form EUS image I, where V = {V1, V2, . . . , Vn} and E = {E1, E2, . . . , En} ∪ Es. The construction of the graph ensures that any non-empty closed set in G defines n feasible curves in I with the same cost. Consequently, the problem of searching for n optimal curves in I can be transformed into finding a minimum-cost closed set in G, which is a well-studied problem in graph theory and can in turn be transformed into computing a minimum s-t cut in a derived graph. More details can be found in [19,20].
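The reduction from a minimum-cost closed set to a minimum s-t cut can be illustrated on any node-weighted directed graph. The pure-Python sketch below (a toy stand-in for the full EUS graph construction, using a small Edmonds-Karp max-flow rather than the solver of [28]) hangs negative-cost nodes off a source, connects positive-cost nodes to a sink, and gives the original edges infinite capacity so they can never be cut; the source side of the resulting minimum cut is then a minimum-cost closed set:

```python
from collections import defaultdict, deque

def min_cost_closed_set(nodes, edges, cost):
    """Minimum-cost closed set of a directed graph via an s-t min cut.

    A closed set must contain every successor of each of its members,
    so each original edge gets infinite capacity (it may never be cut);
    node costs become finite terminal capacities."""
    INF = float('inf')
    cap = defaultdict(lambda: defaultdict(float))
    for v in nodes:
        if cost[v] < 0:
            cap['s'][v] += -cost[v]        # gain for including v
        elif cost[v] > 0:
            cap[v]['t'] += cost[v]         # penalty for including v
    for u, v in edges:
        cap[u][v] = INF                    # closure constraint u -> v

    def augmenting_path():
        """BFS for a residual s -> t path; returns the parent map or None."""
        parent, queue = {'s': None}, deque(['s'])
        while queue:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    if v == 't':
                        return parent
                    queue.append(v)
        return None

    # Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    parent = augmenting_path()
    while parent is not None:
        path, v = [], 't'
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        parent = augmenting_path()

    # Nodes still reachable from 's' in the residual graph form the
    # source side of the min cut, i.e. the minimum-cost closed set.
    side, queue = {'s'}, deque(['s'])
    while queue:
        u = queue.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in side:
                side.add(v)
                queue.append(v)
    return side - {'s'}
```

For example, with nodes a, b, c, a single closure edge a → b and costs {a: −5, b: 3, c: −2}, the cheapest closed set is {a, b, c} (total cost −4); raising b's cost to 10 makes including a unprofitable and the answer shrinks to {c}.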

4.2. Design of graph cost function

Observe that there are 5 curves to be found in our case, and the difficulty of finding each curve is quite different. Thus, for layer segmentation, prior knowledge can be utilized to design specific cost functions for individual curves. The cost function of curve 1 (i.e., the bottom curve in the polar image, corresponding to the inner boundary of the innermost layer in the original image, see Fig. 3(a)) is designed first, because this curve has the most obvious characteristics: it is the boundary between the first ultrasonic signal feedback region (i.e., the lowermost area of white pixels) and the no-signal region (i.e., the black pixels below it). Thus, the cost of each pixel for curve 1 is assigned as follows.



c1(x, y) = 0 if x = 0, and I(x − 1, y) − I(x, y) + sum(U−_{x,y}(σ)) otherwise.    (4)

where U−_{x,y}(σ) = {I(x, max(0, y − σ)), I(x, max(0, y − σ + 1)), . . . , I(x, y − 1)} denotes the set of the σ nearest pixels below I(x, y) in column x, and sum(U−_{x,y}(σ)) represents the summation of the intensities in U−_{x,y}(σ). The pixel cost for curve 1 takes a combined form: it not only takes the gray-level gradient (i.e., a dark-to-bright transition, from bottom to top) into consideration, but also favors mostly black neighbors below. More specifically, pixels located at the boundary between a white region and a black region that have a set of black neighbors below them (e.g., pixels on curve 1 in Fig. 3(a)) will be assigned a small cost value. Similarly, the cost function of curve 5 (the top curve in the polar image, corresponding to the inner boundary of the outermost layer in the original image) is designed as follows.



c5(x, y) = 0 if x = 0, and I(x − 1, y) − I(x, y) − sum(U+_{x,y}(σ)) otherwise.    (5)

where U+_{x,y}(σ) = {I(x, y + 1), . . . , I(x, min(y + σ, Yp − 1))} denotes the set of the σ nearest pixels above I(x, y) in column x. That is, the pixel costs for curve 5 favor a black-to-white transition and a set of white neighbors above. Finally, the pixel costs for the other 3 curves are assigned as follows.



ck(x, y) = 0 if x = 0; I(x − 1, y) − I(x, y) if x > 0 and k is odd; I(x, y) − I(x − 1, y) if x > 0 and k is even.    (6)

where k ∈ {2, 3, 4}. That is, the cost favors either a white-to-black transition (curves 2 and 4) or a black-to-white transition (curve 3). Pixels located at the boundaries between black regions and white regions will be assigned a small cost value.
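Eqs. (4)-(6) can be evaluated densely with NumPy. The sketch below assumes the polar image is indexed I[x, y] (x = angle, y = radius) and, as an illustrative simplification of the clamped U−/U+ definitions, simply sums fewer neighbours at the image border:

```python
import numpy as np

def curve_costs(I, sigma=5):
    """Dense pixel costs of Eqs. (4)-(6) for a polar-form image I,
    indexed I[x, y] with x the angle and y the radius.  Column x = 0
    gets zero cost, as in the equations."""
    I = np.asarray(I, dtype=float)
    Xp, Yp = I.shape
    first_col = np.arange(Xp)[:, None] == 0
    grad = np.zeros_like(I)
    grad[1:, :] = I[:-1, :] - I[1:, :]            # I(x-1, y) - I(x, y)
    below = np.zeros_like(I)                      # sum over U-(sigma): pixels beneath
    above = np.zeros_like(I)                      # sum over U+(sigma): pixels above
    for d in range(1, sigma + 1):
        below[:, d:] += I[:, :-d]
        above[:, :-d] += I[:, d:]
    c1 = np.where(first_col, 0.0, grad + below)   # Eq. (4): dark below is cheap
    c5 = np.where(first_col, 0.0, grad - above)   # Eq. (5): bright above is cheap
    c_odd = np.where(first_col, 0.0, grad)        # Eq. (6), k odd (curve 3)
    c_even = -c_odd                               # Eq. (6), k even (curves 2, 4)
    return c1, c5, c_odd, c_even
```

These per-pixel costs would then be turned into node weights via Eq. (3) before running the min-cut search.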

4.3. Optimal curves segmentation

With the cost functions decided, we now detect the optimal curves in EUS images. Considering the different difficulties of finding individual curves, we divide the layer segmentation procedure into 3 steps instead of directly applying the optimal surface approach to detect all curves simultaneously. Curve 1 is detected first, as it has the most obvious characteristics, which makes it hard to confuse with the other curves. To be specific, we first construct the weighted directed graph corresponding to curve 1 as discussed in Section 4.1, with the costs defined in Eq. (4). Subsequently, the max-flow/min-cut algorithm [28] is applied to find the minimum closed set of the graph. As discussed in [19,20], the minimum closed set in the derived graph specifies the optimal curve in I, i.e., the expected curve 1. Then, curve 5 is detected similarly, with the previously detected curve 1 serving as a geometric constraint: pixels under curve 1 are ignored during the detection of curve 5. Finally, since curves 1 and 5 confine curves 2, 3 and 4 to a certain area, the remaining curves can be detected simultaneously. As introduced in Section 4.1, we construct a joint graph consisting of three subgraphs corresponding to curves 2, 3 and 4, respectively; pixels under curve 1 or above curve 5 are ignored. By finding the minimum closed set of the joint graph, the optimal curves 2, 3 and 4 are detected simultaneously. Thus, all curves are detected. By transforming the curves from polar coordinates back to Cartesian coordinates, the curves in the original EUS image are obtained. Fig. 3 shows an example of layer segmentation in an EUS image.

5. Feature selection and pattern classification

Once the layers are obtained, features are extracted column by column in the region between curves 1 and 5, and a texture-classification-based method is carried out to produce the classification results.
Being able to estimate layer thickness and its related variations, the texture classification method has the power to present different features of tissues. As presented in Table 1, for each EUS image under the polar coordinate system, a total of 12 features in 3 categories are drawn from each column to form and classify patterns. The three categories and texture features are as follows: (1) intensity level distribution measures: mean, standard deviation, and gray-intensity entropy; these features describe the occurrence frequency of the gray levels in a subcolumn of the region of interest (ROI); (2) run length measures: short run emphasis, long run emphasis, gray level nonuniformity, run length nonuniformity, and run percentage. For a region of interest, a gray level run is defined as a set of consecutive, collinear pixels with the same gray level, and the length of a run is the number of pixels in the run; the run length features describe the heterogeneity and tonal distributions of the intensity levels in a subcolumn of interest; (3) co-occurrence matrix measures: angular second moment, contrast, entropy, and inverse difference moment; the co-occurrence matrix measures describe the overall spatial relationships between one intensity and another in the subcolumn of interest [34]. For each column in a polar-form EUS image, let Rx(μ) denote the intensities of the pixels in the area of 2μ + 1 columns centered on the xth column. More specifically, Rx(μ) = {p_ij | x − μ ≤ i ≤ x + μ, L5x ≤ j ≤ L1x}, where Lkx denotes the height of curve k in the xth column. Features are calculated column by column: given a polar-form EUS image of size Xp × Yp, a total of Xp × 12 features will be extracted. It is necessary to point out that the features are extracted from the raw images rather than the preprocessed images, since the preprocessing operation changes the image characteristics.
The preprocessing procedure is only used to facilitate layer segmentation.
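A few of the Table 1 measures can be sketched directly. The run-length part below operates on a single 1-D gray-level sequence as a simplified stand-in for the full 2-D run-length matrix M:

```python
import numpy as np

def intensity_features(R):
    """Intensity-level distribution measures from Table 1 for a region R
    of gray levels: mean, standard deviation and gray-level entropy."""
    R = np.asarray(R, dtype=float)
    _, counts = np.unique(R, return_counts=True)
    p = counts / R.size                          # gray-level probabilities p_i
    return R.mean(), R.std(), -np.sum(p * np.log2(p))

def run_length_features(line):
    """Short/long run emphasis for one 1-D gray-level sequence (a
    simplified stand-in for the 2-D run-length matrix of Table 1)."""
    runs, j = [], 0
    while j < len(line):
        k = j
        while k < len(line) and line[k] == line[j]:
            k += 1                               # extend the current run
        runs.append(k - j)
        j = k
    runs = np.asarray(runs, dtype=float)
    sre = np.sum(1.0 / runs ** 2) / len(runs)    # short run emphasis
    lre = np.sum(runs ** 2) / len(runs)          # long run emphasis
    return sre, lre
```

In the 1-D case the total run count plays the role of the Table 1 denominator Σ M_ij; the remaining run-length and co-occurrence measures follow the same pattern.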



Table 1. Used classification features.

Intensity level distribution measures (a):
  Mean: (1 / (w × h)) Σ_{i,j} R_{i,j}
  Standard deviation: sqrt((1 / (w × h)) Σ_{i,j} (R_{i,j} − R̄)^2)
  Gray level entropy: −Σ_i p_i log2 p_i

Run length measures (b):
  Short run emphasis: (Σ_{i,j} M_{i,j}/j^2) / Σ_{i,j} M_{i,j}
  Long run emphasis: (Σ_{i,j} j^2 M_{i,j}) / Σ_{i,j} M_{i,j}
  Gray level nonuniformity: (Σ_i (Σ_j M_{i,j})^2) / Σ_{i,j} M_{i,j}
  Run length nonuniformity: (Σ_j (Σ_i M_{i,j})^2) / Σ_{i,j} M_{i,j}
  Run percentage: (1 / (w × h)) Σ_{i,j} M_{i,j}

Co-occurrence matrix measures (c):
  Angular second moment: Σ_{i,j} C_{i,j}^2
  Contrast: Σ_{i,j} (i − j)^2 C_{i,j}
  Entropy: −Σ_{i,j} C_{i,j} log2 C_{i,j}
  Inverse difference moment: Σ_{i,j} C_{i,j} / (1 + (i − j)^2)

(a) R represents the region of interest (ROI) of size w × h, R̄ denotes the mean value of R, and p_i is the probability that pixels are in the ith gray level. (b) M represents the run length matrix. (c) C represents the co-occurrence matrix.

In the training stage, the features of the training images are calculated. For normal EUS images (i.e., without lesions), the features of all subcolumns serve as negative instances. For early carcinoma EUS images, the features of the subcolumns within the region delineated by specialists serve as positive instances, while the features of the other subcolumns serve as negative instances. An SVM classifier is trained on these instances. In the classification stage, the trained classifier is applied to the test images. Once preprocessing, layer segmentation and feature extraction are finished, every subcolumn between the top and bottom curves is labeled as either 0 or 1. For a well-trained classifier, it is reasonable to assume that most subcolumns with label 1 gather in the abnormal region, while a few others are sparsely distributed in the other regions. It is natural to cluster these 1-labeled subcolumns based on their distribution density so as to form the regions of interest, i.e., the abnormal or lesion regions. In this paper, in order to obtain a binary footprint of the abnormal region, the DBSCAN method [35] is employed to cluster the neighboring subcolumns with label 1 in terms of their column indices. If no cluster is produced, the image is considered a normal image without lesion; otherwise, each cluster implies an abnormal region. Finally, we would like to point out that curves 2, 3 and 4 are actually unused in the feature selection, classification and abnormal region detection procedures; for the abnormal region detection task, the segmentation of these three curves can thus be omitted.

6. Results

This research adhered to the tenets of the Declaration of Helsinki and was approved by the Ethics Committee of Zhongshan Hospital affiliated with Xiamen University. To assess the efficiency of the proposed method, a total of 124 2-D EUS images are used in the experiment. These EUS images, including 50 early carcinoma EUS images and 74 normal EUS images, were collected at Zhongshan Hospital affiliated with Xiamen University, Xiamen, Fujian, China, using an ultrasonic endoscopic micro probe (3.4 mm diameter, 15 MHz frequency, 7.5–20 MHz dynamic range, ultrasonic gain usually 50 dB). Informed consent was obtained from each subject. Each image contains 450 × 450 pixels. We asked an experienced clinician to provide the EUS reference images of the

abnormal region of each early carcinoma (e.g., see Fig. 12(c)) so that we are able to quantitatively evaluate the performance of the proposed method. All EUS images were processed and analyzed. Besides, we adopt the support vector machine (SVM) with a 10-fold cross-validation strategy to evaluate the classification performance.

6.1. Restoration and sharpening

The layer segmentation approach relies heavily on the preprocessing of EUS images. The quality of EUS images can be negatively affected by speckle noise, and degraded EUS images directly impede the effectiveness of the image processing and the efficiency of the analysis algorithms. In order to better complete the segmentation tasks, we should effectively remove the speckle noise while avoiding damage to edge-like features, which is handled by the restoration and sharpening approach. In this paper, we apply the deconvolution technique to polar-coordinate-based EUS images, which distinguishes our work from most existing work, where the deconvolution technique is applied directly to the original Cartesian-coordinate-based EUS images. Subsequently, a non-linear filter is applied to the restored images as a sharpening step. The visual comparison results are illustrated in Fig. 6, where we transform the sharpened images back to Cartesian coordinates to facilitate the comparison. Note that this back transformation is unnecessary in the subsequent layer segmentation and abnormal region detection stages. The first row shows three original EUS images, and the second row shows their corresponding preprocessed images. Comparing the first and second rows in Fig. 6, we can observe that this preprocessing step effectively suppresses speckle noise and provides better visual quality.

6.2. Evaluation of layer segmentation

Fig. 7 shows three examples of the segmented layers.
For the parameter settings, the smoothness constraint is set to 2, and the geometric constraints δl and δu are set to 10 and 80, respectively (both empirically determined). To evaluate the segmentation performance, two specialists independently traced the curves in the raw EUS images, and the two tracings were averaged to form the ground-truth reference standard (referred to as Ref.).
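The per-curve errors reported in Table 2 are the mean ± SD of per-column absolute position differences. A minimal sketch, assuming each curve is stored as a list of per-column heights in the polar-form image:

```python
def mean_unsigned_error(curve_a, curve_b):
    """Mean and SD of per-column absolute border position differences
    (in pixels) between two curves stored as per-column heights."""
    diffs = [abs(a - b) for a, b in zip(curve_a, curve_b)]
    mean = sum(diffs) / len(diffs)
    sd = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
    return mean, sd

# toy per-column heights for an automated and a reference tracing
auto_curve = [10, 12, 13, 11]
ref_curve = [11, 12, 12, 13]
err_mean, err_sd = mean_unsigned_error(auto_curve, ref_curve)
```

The same statistic computed between the two observers' tracings gives the inter-observer variability that the automated results are compared against.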

Please cite this article as: X. Chen, Y. Hu and Z. Zhang et al., A graph-based approach to automated EUS image layer segmentation and abnormal region detection, Neurocomputing, https://doi.org/10.1016/j.neucom.2018.03.083


Fig. 6. Experimental results for three examples of EUS images before and after preprocessing. (a)–(c): the original images. (d)–(f): the preprocessed images.

Table 2. Summary of mean unsigned border positioning errors for each curve (mean ± standard deviation, in pixels).

Curve     Algo. vs. Ref.    Obs. 1 vs. Obs. 2
1         1.84 ± 1.27       1.70 ± 1.48
2         2.36 ± 2.22       2.11 ± 1.42
3         1.94 ± 1.39       1.96 ± 1.54
4         1.62 ± 1.21       2.04 ± 1.85
5         1.55 ± 1.13       2.20 ± 1.89
Overall   1.85 ± 1.51       2.01 ± 1.67

As shown in Table 2, we measured the absolute Euclidean distances between the segmentation results of the proposed method and the reference standard to obtain the unsigned border positioning errors for each curve, and compared them with the inter-observer variability, i.e., the unsigned border positioning differences between the two manual tracings. The paired t-tests on these errors (Table 3) indicate that the proposed approach yields a statistically significant improvement in layer segmentation. Compared with the inter-observer differences, the errors of curves 4 and 5 are smaller, while the errors of curves 1 and 2 tend to be larger; for curve 3, the error is not statistically distinguishable from the inter-observer difference (p = 0.7344). The proposed layer segmentation method proves effective overall: the overall mean unsigned error is 1.85 ± 1.51 pixels, which is significantly lower than the inter-observer mean unsigned error of 2.01 ± 1.67 pixels.

Table 3. Summary of p-values for paired t-tests between the unsigned border positioning errors of our segmentation results versus the reference standard and the unsigned border positioning errors between the manual tracings of the two observers.

Curve     p-value
1         0.001
2         0.001
3         0.7344
4         0.001
5         0.001
Overall   0.001

Fig. 8 illustrates the Bland–Altman analysis [36] of the tracings obtained by the proposed method versus the reference. The horizontal coordinate represents the mean of the curve heights (in the polar-form EUS image) of the tracings obtained by the proposed method and the reference standard, and the vertical coordinate denotes their difference. The Bland–Altman plots show that the proposed automated segmentation generated curves higher than the manual tracings (in polar-form EUS images) for most curves except curve 1. The proportions of points within 1.96 SDs (i.e., the 95% limits of agreement) for curves 1–5 are 94.6%, 95.9%, 94.1%, 93.8% and 94.4%, respectively, with an overall proportion of 94.6%. This suggests that the automatically segmented curves agree well with the reference standard. Finally, we calculated the intraclass correlation coefficient (ICC) [37] between the tracings of the automatically segmented curves and the reference standard, which measures their similarity. As shown in Table 4, all ICCs are greater than 0.94, indicating promising results.
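The Bland–Altman quantities above (bias, 95% limits of agreement, and the proportion of points inside them) reduce to a few lines; a sketch over toy per-column tracings, not the paper's data:

```python
def bland_altman(auto, ref):
    """Bland-Altman agreement summary for two tracings given as
    per-column curve heights: the bias (mean difference), the 95%
    limits of agreement (bias +/- 1.96 SD of the differences), and
    the proportion of points falling within those limits."""
    diffs = [a - r for a, r in zip(auto, ref)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / n) ** 0.5
    lo, hi = bias - 1.96 * sd, bias + 1.96 * sd
    inside = sum(lo <= d <= hi for d in diffs) / n
    return bias, (lo, hi), inside

# toy tracings: the last column disagrees strongly and falls outside
auto = [10, 12, 13, 11, 15, 9, 30]
ref  = [11, 12, 12, 13, 14, 9, 10]
bias, limits, inside = bland_altman(auto, ref)
```

Here the single large disagreement lands outside the limits, so 6 of the 7 points fall within them.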


Fig. 7. Examples of EUS image layer segmentation.

Table 4. The ICCs of each curve.

Curve     1       2       3       4       5       Overall
ICC       0.995   0.981   0.966   0.957   0.941   0.968

Table 5. The confusion matrix for EUS image classification.

                 Predicted positive (%)    Predicted negative (%)
Real positive    73.1                      26.9
Real negative    17.6                      82.4

Table 6. Evaluation of EUS image classification.

Accuracy (%)    Sensitivity (%)    Specificity (%)    PPV (%)    NPV (%)
81.2            73.1               82.4               38.5       95.3

6.3. Classification performance

As introduced in Section 5, every polar-form EUS image was set to 720 columns (i.e., Xp = 720) and texture features were extracted from each column of the image. In total, 124 × 720 columns were collected, and these columns serve as samples for classification. We adopt a 10-fold cross-validation strategy to evaluate the performance of the proposed method: all images are randomly separated into 10 disjoint subsets; in each experiment, the samples extracted from one subset are used for testing, while the samples from the remaining nine subsets are used for training. An SVM with a radial basis function kernel was used for the classification (we choose LibSVM [38] for training and classification; software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm/). To study the effect of μ on the classification performance of the proposed method, we vary the value of μ from 0 to 119. Fig. 9 shows the AUC results with respect to different μ. With increasing μ, the AUC first rises and then falls after reaching a peak; both the increase and the decrease are smooth, without drastic changes. Some representative ROC curves with respect to different μ values are shown in Fig. 10.
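The gap between PPV and NPV in Table 6 follows from the class imbalance: given the per-class rates in Table 5 and the stated ≈1:6.6 positive-to-negative sample ratio, the aggregate figures can be recovered arithmetically. A sketch of that consistency check (not part of the detection pipeline):

```python
def prevalence_metrics(sens, spec, pos_to_neg):
    """Accuracy, PPV and NPV implied by per-class rates and the
    positive:negative sample ratio (here assumed ~1:6.6, as stated)."""
    p = pos_to_neg / (1 + pos_to_neg)      # prevalence of positives
    tp, fn = sens * p, (1 - sens) * p
    tn, fp = spec * (1 - p), (1 - spec) * (1 - p)
    accuracy = tp + tn
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return accuracy, ppv, npv

# per-class rates from Table 5, imbalance ratio from the text
acc, ppv, npv = prevalence_metrics(sens=0.731, spec=0.824, pos_to_neg=1 / 6.6)
```

This reproduces the reported accuracy (≈81.2%) and NPV (≈95.3%), and shows the low PPV (≈0.39) is driven by the imbalance rather than by a weak classifier.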

These ROC curves are not very different from each other, and the corresponding AUC values differ only slightly. This observation verifies the robustness of the proposed method with respect to the parameter μ. In this study, we set μ to 55, with a corresponding AUC of 0.837.

The confusion matrix is provided in Table 5, showing that the proposed method performs well: more than 80% of the samples from normal regions are correctly classified. The false alarm rate, i.e., normal samples mistaken for early esophageal carcinoma, is low (17.6%). The missed detection rate, i.e., early esophageal carcinoma samples mistaken for normal ones, poses a bigger risk than the false alarm rate; at 26.9% for the early esophageal carcinoma group, it leaves room for further improvement. We also calculate the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) (see Table 6). According to Table 6, the accuracy of column classification reaches 81.2%. For early esophageal carcinoma, the sensitivity of classification reaches 73.1% and its specificity achieves 82.4%, which supports better detection and treatment. There is a large difference between PPV and NPV: the former counts the proportion of predicted positive samples that are truly positive, while the latter measures the proportion of true negatives among all samples predicted as free of esophageal disease. This remarkable gap arises because the proportions of positive and negative samples are unbalanced (about 1:6.6), so even a small fraction of negative samples mistaken for positive appreciably lowers the positive predictive value. Note that these results are based on the classification of column instances, which are intermediate results; our final goal is to detect abnormal regions within EUS images based on these intermediate results, as elaborated in Section 6.4.

Fig. 8. Bland–Altman plots of the differences between the tracings obtained by the proposed method and the reference standard. Solid lines indicate the mean of the differences, while dashed lines represent the CR values.

Fig. 9. AUC value w.r.t. different μ values.

Table 7. Evaluation of abnormal region detection based on segmented and delineated layers.

                     Accuracy (%)  Sensitivity (%)  Specificity (%)  PPV (%)  NPV (%)
Segmented layers     83.1          70.0             91.9             85.4     81.9
Delineated layers    83.9          72.3             90.9             82.9     84.3

6.4. Abnormal region detection

Fig. 11 illustrates the classification results in a polar-form early carcinoma EUS image. The red track is the abnormal region manually delineated by the specialist and validated by the pathological examination. The columns marked in blue at the bottom are those labeled 1 in the classification experiment. The majority of these 1-labeled columns are gathered in the abnormal region, so it is reasonable to apply a clustering method that assembles these abnormal columns and thereby detects the abnormal region. We use the DBSCAN algorithm to cluster the 1-labeled columns based on their column indices; after clustering, each cluster indicates an abnormal region. Fig. 12(a) shows the abnormal region manually delineated by the specialist, and (b) shows the region automatically detected by our method, where the left and right boundaries of the region are determined by the clustering result (the leftmost and rightmost columns in the cluster), and the top and bottom boundaries come from the layer segmentation results (curve 1 and curve 5, respectively). Fig. 12(c) and (d) show the abnormal regions delineated by the specialist and by our method on the original image, respectively.

For quantitative evaluation of abnormal region detection, the following strategy is adopted: (1) for early esophageal carcinoma EUS images, if the automatically detected abnormal region overlaps the manually delineated region, it is counted as a correct detection; otherwise, it is a wrong detection; (2) for normal EUS images, if no abnormal region is detected, it is a correct detection; otherwise, it is a wrong detection. For comparison, we also evaluate the performance on manually delineated layers. The detailed performance, i.e., accuracy, sensitivity, specificity, PPV and NPV, is presented in Table 7. The performance based on segmented layers and that based on delineated layers differ little, which demonstrates the robustness of our detection method.
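The column clustering step above uses DBSCAN [35]. A minimal 1-D sketch over column indices (the `eps` and `min_pts` values here are illustrative, not the paper's settings), with each detected region's extent taken from the leftmost and rightmost columns of its cluster, as described above:

```python
def dbscan_1d(points, eps, min_pts):
    """Minimal DBSCAN over 1-D column indices: a column whose
    eps-neighborhood contains >= min_pts columns is a core point;
    chains of core points grow a cluster, and isolated misclassified
    columns are rejected as noise ("N")."""
    points = sorted(points)
    label = {}                       # point -> cluster id, or "N" for noise
    def neighbors(p):
        return [q for q in points if abs(q - p) <= eps]
    cid = 0
    for p in points:
        if p in label:
            continue
        nb = neighbors(p)
        if len(nb) < min_pts:
            label[p] = "N"           # noise (may later become a border point)
            continue
        cid += 1
        label[p] = cid
        seeds = [q for q in nb if q != p]
        while seeds:
            q = seeds.pop()
            if label.get(q) == "N":
                label[q] = cid       # border point claimed by the cluster
            if q in label:
                continue
            label[q] = cid
            nb_q = neighbors(q)
            if len(nb_q) >= min_pts:
                seeds.extend(nb_q)
    return label

# 1-labeled columns: two dense runs plus one stray misclassification
cols = [100, 101, 102, 103, 104, 105, 300, 500, 501, 502, 503]
labels = dbscan_1d(cols, eps=2, min_pts=3)
regions = {}
for col, lab in labels.items():
    if lab != "N":
        lo, hi = regions.get(lab, (col, col))
        regions[lab] = (min(lo, col), max(hi, col))
```

The stray column (300) is rejected as noise, which is exactly the behavior that suppresses dispersed misclassified columns in the detection results.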
Note that the correct detection rate (83.1%) and the specificity (91.9%) are better than the corresponding column classification results stated above (81.2% and 82.4%). In addition, the large gap between PPV and NPV has disappeared, although there is a slight drop in sensitivity. These effects are due to the DBSCAN algorithm, which suppresses some of the dispersed misclassified columns (either those located in the abnormal region but labeled 0, or those located outside the abnormal region but labeled 1; see Fig. 11) by treating them as noise. Our method thus marks out the abnormal region with high accuracy, which aids disease diagnosis. More importantly, the whole procedure is fully automatic, requiring no manual operation.

Fig. 10. Some ROC curves w.r.t. different μ values.

Fig. 11. Classification result. The red track is the abnormal region manually delineated by the specialist and validated by the pathological examination. The columns marked in blue at the bottom are those labeled 1 in the classification experiment.

Fig. 12. Illustration of abnormal region detection. The red tracks indicate the abnormal regions manually delineated by the specialist; the blue footprints are generated by the proposed method.

6.5. Effect of preprocessing

In this section, we assess how the preprocessing procedure (i.e., blind deconvolution and enhancement) affects the performance of the subsequent layer segmentation and the final abnormal region detection. Table 8 shows the layer segmentation performance on EUS images without part or all of the preprocessing operations. Compared to the results in Section 6.2, it is clear that the blind deconvolution and enhancement operations do improve the segmentation quality, especially in terms of mean unsigned border positioning errors.

Table 8. Layer segmentation performance without preprocessing.

Mean unsigned border positioning errors (pixels):

Curve     Without enhancement    Without deconvolution & enhancement
1         1.48 ± 1.07            1.34 ± 1.02
2         3.61 ± 3.77            7.36 ± 3.89
3         4.16 ± 4.32            9.15 ± 4.73
4         4.72 ± 5.68            11.36 ± 6.13
5         4.15 ± 5.04            11.63 ± 6.56
Overall   3.69 ± 4.50            8.43 ± 6.18

Proportion of points within 1.96 SDs:

Curve     Without enhancement    Without deconvolution & enhancement
1         95.0%                  94.4%
2         92.9%                  94.0%
3         92.9%                  95.0%
4         91.5%                  95.6%
5         90.7%                  95.6%
Overall   92.6%                  94.9%

ICC:

Curve     Without enhancement    Without deconvolution & enhancement
1         0.996                  0.996
2         0.971                  0.934
3         0.964                  0.904
4         0.944                  0.858
5         0.952                  0.840
Overall   0.965                  0.906


Table 9. Abnormal region detection performance without preprocessing.

              Without enhancement (%)    Without deconvolution & enhancement (%)
Accuracy      82.3                       80.7
Sensitivity   64                         68
Specificity   94.6                       89.2
PPV           88.9                       81.0
NPV           79.6                       80.5

The effect of the preprocessing operations on the final abnormal region detection is also evaluated, as shown in Table 9. Compared to Table 7, the detection accuracy drops only slightly. Although the preprocessing procedure has a large influence on layer segmentation, it does not greatly affect the final detection accuracy, which demonstrates the robustness of the proposed method against variance in the layer segmentation.

7. Conclusion and discussion

In conclusion, an automatic approach to layer segmentation and abnormal region detection in EUS images has been presented. The experimental results show that the proposed method is effective and helpful for improving the accuracy of abnormal region detection in EUS images. Most importantly, compared with existing methods, the new method is fully automatic, requiring no manual operation to locate the region of interest (ROI).

Since EUS images are acquired by an ultrasonic probe with sector scanning, polar coordinates are better suited to describing their characteristics. Representing EUS images in polar coordinates is therefore crucial for describing local image regions in terms of their radial and tangential traits. As discussed in Section 6.5, the deconvolution and enhancement operations applied to the transformed polar-form images effectively suppress speckle noise in EUS images and provide better visual quality, which contributes to the subsequent layer segmentation procedure.

Segmentation of layers from EUS images is fundamental to diagnosing esophageal diseases, because many significant features are defined on the basis of tumor shape, gray level, or texture within the layer areas. As demonstrated in Section 6.2, the automatically segmented layers have quality comparable to the manual tracings, which shows a promising prospect for relieving the labor-intensive manual segmentation work.
The experimental results demonstrate the effectiveness and robustness of the proposed method in detecting abnormal regions. In Table 7, the accuracy (83.1%) shows a promising prospect for clinical applications. However, the sensitivity is relatively low, and further work can improve the performance of abnormal region detection. Note also that the proposed abnormal region detection consists of two separate steps, column classification and clustering; integrating these two steps may achieve better performance. In our future work, we will focus on: (1) studying the attributes of EUS images and proposing better graph constructions (e.g., hypergraphs) and cost functions to improve the automated layer segmentation; (2) exploiting more effective features to differentiate early esophageal carcinoma cases from normal ones, since the existing feature vectors often suffer from characteristic conflicts in EUS images; (3) extending the proposed method to 3D segmentation once a 3D model creation facility is available.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (Grant No. 61402389), the Fundamental Research Funds for the Central Universities (No. 20720160073) and the Health Joint Fund of the Provincial Department of Science and Technology (No. 2015J01534). The first two authors contributed equally to this paper.

References

[1] J. Choi, S.G. Kim, J.S. Kim, H.C. Jung, I.S. Song, Comparison of endoscopic ultrasonography (EUS), positron emission tomography (PET), and computed tomography (CT) in the preoperative locoregional staging of resectable esophageal cancer, Surg. Endosc. 24 (6) (2010) 1380–1386.
[2] K.J. Napier, M. Scheerer, S. Misra, Esophageal cancer: a review of epidemiology, pathogenesis, staging workup and treatment modalities, World J. Gastrointest. Oncol. 6 (5) (2014) 112–120.
[3] S. Misra, M. Choi, A.S. Livingstone, D. Franceschi, The role of endoscopic ultrasound in assessing tumor response and staging after neoadjuvant chemotherapy for esophageal cancer, Surg. Endosc. 26 (2) (2012) 518–522.
[4] O. Pech, E. Günter, F. Dusemund, J. Origer, D. Lorenz, C. Ell, Accuracy of endoscopic ultrasound in preoperative staging of esophageal cancer: results from a referral center for early esophageal cancer, Endoscopy 42 (06) (2010) 456–461.
[5] V.S. Wang, J.L. Hornick, J.A. Sepulveda, et al., Low prevalence of submucosal invasive carcinoma at esophagectomy for high-grade dysplasia or intramucosal adenocarcinoma in Barrett's esophagus: a 20-year experience, Gastrointest. Endosc. 69 (2009) 777–783.
[6] E.L.B. Lieberman, R.C. Fitzgerald, Early diagnosis of oesophageal cancer, Br. J. Cancer 101 (2009) 368–370.
[7] M.C. Kolios, G.J. Czarnota, M. Lee, J.W. Hunt, M.D. Sherar, Ultrasonic spectral parameter characterization of apoptosis, Ultrasound Med. Biol. 28 (2002) 589–597.
[8] H.C. Van, C.B. Van, L. Valentin, et al., External validation of mathematical models to distinguish between benign and malignant adnexal tumors: a multicenter study by the International Ovarian Tumor Analysis group, Clin. Cancer Res. 13 (2007) 4440–4447.
[9] H.C. Van, C.B. Van, L. Valentin, et al., Prospective internal validation of mathematical models to predict malignancy in adnexal masses: results from the International Ovarian Tumor Analysis study, Clin. Cancer Res. 15 (2009) 684–691.
[10] I.D. Norton, Y. Zheng, M.S. Wiersema, J. Greenleaf, J.E. Clain, E.P. Dimagno, Neural network analysis of EUS images to differentiate between pancreatic malignancy and pancreatitis, Gastrointest. Endosc. 54 (2001) 625–629.
[11] A. Das, C.C. Nguyen, F. Li, B. Li, Digital image analysis of EUS images accurately differentiates pancreatic cancer from chronic pancreatitis and normal tissue, Gastrointest. Endosc. 67 (2008) 861–867.
[12] A. Săftoiu, P. Vilmann, F. Gorunescu, D.I. Gheonea, M. Gorunescu, T. Ciurea, G.L. Popescu, A. Iordache, H. Hassan, S. Iordache, Neural network analysis of dynamic sequences of EUS elastography used for the differential diagnosis of chronic pancreatitis and pancreatic cancer, Gastrointest. Endosc. 68 (2008) 1086–1094.
[13] M. Zhang, H. Yang, Z. Jin, J. Yu, Z. Cai, Z. Li, Differential diagnosis of pancreatic cancer from normal tissue with digital imaging processing and pattern recognition based on a support vector machine of EUS images, Gastrointest. Endosc. 72 (2004) 978–985.
[14] M. Zhu, C. Xu, J. Yu, Y. Wu, C. Li, M. Zhang, Z. Jin, Z. Li, Differentiation of pancreatic cancer and chronic pancreatitis using computer-aided diagnosis of endoscopic ultrasound (EUS) images: a diagnostic test, PLOS ONE 8 (2013) 1–6.
[15] W.S. Noble, What is a support vector machine?, Nat. Biotechnol. 24 (2006) 1565–1567.
[16] Z. Zhang, L. Bai, P. Ren, E. Hancock, High-order graph matching kernel for early carcinoma EUS image classification, Multimedia Tools Appl. 75 (2016) 3993–4012.
[17] K. Olowe, R. Kumon, F.T. Farooq, et al., Differentiation of benign and malignant lymph nodes by endoscopic ultrasound (EUS) spectrum analysis, Gastrointest. Endosc. 65 (2007) AB194.
[18] D.E. Loren, C.M. Seghal, G.G. Ginsberg, et al., Computer-assisted analysis of lymph nodes detected by EUS in patients with esophageal carcinoma, Gastrointest. Endosc. 56 (2002) 742–746.
[19] K. Li, X.D. Wu, D.Z. Chen, et al., Efficient optimal surface detection: theory, implementation, and experimental validation, Proc. SPIE 5370 (2004) 620–627.
[20] K. Li, X.D. Wu, D.Z. Chen, et al., Optimal surface segmentation in volumetric images - a graph-theoretic approach, IEEE Trans. Pattern Anal. Mach. Intell. 28 (2006) 119–134.
[21] B.F. Torrence, E.A. Torrence, The Student's Introduction to MATHEMATICA®: A Handbook for Precalculus, Calculus, and Linear Algebra, Cambridge University Press, 2009.
[22] D.S. Biggs, M. Andrews, Acceleration of iterative image restoration algorithms, Appl. Opt. 36 (1997) 1766–1775.
[23] C.M. Sun, A. Ben, Multiple paths extraction in images using a constrained expanded trellis, IEEE Trans. Pattern Anal. Mach. Intell. 27 (2005) 1923–1933.
[24] C. Sun, S. Pallottino, Circular shortest path in images, Pattern Recognit. 36 (2003) 709–719.


[25] S. Timp, N. Karssemeijer, A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography, Med. Phys. 31 (2004) 958–971.
[26] A. Rojas-Dominguez, Improved dynamic-programming-based algorithms for segmentation of masses in mammograms, Med. Phys. 34 (2007) 4256–4269.
[27] Y. Zhou, X. Cheng, X. Xu, et al., Dynamic programming in parallel boundary detection with application to ultrasound intima-media segmentation, Med. Image Anal. 17 (2013) 892–906.
[28] Y. Boykov, V. Kolmogorov, An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, IEEE Trans. Pattern Anal. Mach. Intell. 26 (2004) 1124–1137.
[29] F. Shi, X.J. Chen, H.M. Zhao, et al., Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments, IEEE Trans. Med. Imag. 34 (2015) 441–452.
[30] X.J. Chen, N. Meindert, et al., Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut, IEEE Trans. Med. Imaging 31 (2012) 1521–1531.
[31] S. Hussein, A. Green, A. Watane, D. Reiter, X. Chen, G.Z. Papadakis, B. Wood, A. Cypess, M. Osman, U. Bagci, Automatic segmentation and quantification of white and brown adipose tissues from PET/CT scans, IEEE Trans. Med. Imag. 36 (3) (2017) 734–744.
[32] E. Gao, F. Shi, W. Zhu, C. Jin, M. Sun, H. Chen, X. Chen, Graph search-active appearance model based automated segmentation of retinal layers for optic nerve head centered OCT images, in: SPIE Medical Imaging, International Society for Optics and Photonics, 2017, p. 101331Q.
[33] K. Yu, X. Chen, F. Shi, W. Zhu, B. Zhang, D. Xiang, A novel 3D graph cut based co-segmentation of lung tumor on PET-CT images with Gaussian mixture models, in: SPIE Medical Imaging, 2016, p. 97842V.
[34] S.R. Fleagle, W. Stanford, T. Burns, D.J. Skorton, Feasibility of quantitative texture analysis of cardiac magnetic resonance imagery: preliminary results, Proc. SPIE 50 (3) (1994) 23–33.
[35] M. Ester, H.P. Kriegel, J. Sander, et al., A density-based algorithm for discovering clusters in large spatial databases with noise, in: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, 1996.
[36] J.M. Bland, D. Altman, Statistical methods for assessing agreement between two methods of clinical measurement, The Lancet 327 (8476) (1986) 307–310.
[37] P.E. Shrout, J.L. Fleiss, Intraclass correlations: uses in assessing rater reliability, Psychol. Bull. 86 (2) (1979) 420–428.
[38] C.C. Chang, C.J. Lin, LibSVM: a library for support vector machines, ACM Trans. Intell. Syst. Technol. 2 (3) (2011) 1–27.

Xu Chen received his B.S. and M.S. degrees in the School of Mathematical Sciences from Xiamen University, China, in 2004 and 2007, respectively. He is currently a Ph.D. candidate at the Software School of Xiamen University. His research interests include pattern recognition, machine learning, and computer vision.

Yiqun Hu received his M.D. degree from Concord Hospital, Beijing, China, in 2006. He is now an associate professor of digestive medicine at Zhongshan Hospital, Xiamen University. His research interests include the pathogenesis of pancreatic cancer, pancreatic diseases, inflammatory bowel disease, and clinical applications of ERCP and EUS.

Zhihong Zhang received his B.Sc. degree (1st class Hons.) in computer science from the University of Ulster, UK, in 2009 and the Ph.D. degree in computer science from the University of York, UK, in 2013. He won the K. M. Stott prize for best thesis from the University of York in 2013. He is now an associate professor at the Software School of Xiamen University, China. His research interests are wide-reaching but mainly involve the areas of pattern recognition and machine learning, particularly problems involving graphs and networks.


Beizhan Wang received his B.E., M.E. and Ph.D. degrees from the Northwestern Polytechnical University, China, in 1987, 1997 and 2003, respectively. He is now a professor at the software school of Xiamen University, China. His research interests include pattern recognition, machine learning and data mining.

Lichi Zhang received the BE degree from the School of Computer Science, Beijing University of Posts and Telecommunications, China in 2008, and the Ph.D. degree from Department of Computer Science, University of York, UK, in 2014. He is now a postdoc researcher in Med-X Research Institute, Shanghai Jiao Tong University, China. His research interests include 3D shape reconstruction, object reflectance estimation and medical image analysis.

Fei Shi received her Ph.D. degree from the School of Engineering, New York University in 2006. She is now a lecturer at the school of electronic and information engineering, Soochow University, China. Her research interests include medical image segmentation and pattern recognition.

Xinjian Chen, IEEE Senior Member, received his Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2006. After graduation, he worked on research projects with several prestigious groups: Microsoft Research Asia, Beijing, China (2006–2007); Medical Image Processing Group, University of Pennsylvania (2008–2009); Department of Radiology and Image Sciences, National Institutes of Health (2009–2011); and Department of Electrical and Computer Engineering, University of Iowa (2011–2012). In 2012, he joined the School of Electrical and Information Engineering, Soochow University, where he serves as a Distinguished Professor and Director of the Medical Image Processing, Analysis and Visualization Laboratory. Xinjian has published more than 70 peer-reviewed papers in prestigious international journals and conferences, and currently holds 3 granted patents with 8 patents pending. He is a recipient of the National One Thousand Young Talents Award, China (2012), the JiangSu Provincial High Level Creative Talents Award (2013), and the Beijing Science and Technology Advancement Award (2011). His research interests include medical image processing, quantitative image analysis, and their clinical applications.

Xiaoyi Jiang studied Computer Science at Peking University and received his Ph.D. and Venia Docendi (Habilitation) degrees from the University of Bern, Switzerland. He was an associate professor at the Technical University of Berlin. Since 2002 he has been a full professor of Computer Science at the University of Münster, Germany. Currently, he is Editor-in-Chief of the International Journal of Pattern Recognition and Artificial Intelligence and also serves on the advisory and editorial boards of several journals, including Pattern Recognition, IEEE Transactions on Cybernetics, and Chinese Science Bulletin. He is a senior member of IEEE and a fellow of IAPR.
