Journal Pre-proof

Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans

Arafet Sbei, Khaoula ElBedoui, Walid Barhoumi, Chokri Maktouf
PII: S0010-4825(20)30061-5
DOI: https://doi.org/10.1016/j.compbiomed.2020.103669
Reference: CBM 103669
To appear in: Computers in Biology and Medicine
Received date: 15 October 2019
Revised date: 17 February 2020
Accepted date: 17 February 2020

Please cite this article as: A. Sbei, K. ElBedoui, W. Barhoumi et al., Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans, Computers in Biology and Medicine (2020), doi: https://doi.org/10.1016/j.compbiomed.2020.103669.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2020 Published by Elsevier Ltd.
*Conflict of Interest Statement: There is no conflict of interest.
Highlights (for review)
- Segmentation method for both homogeneous and heterogeneous tumors
- Preliminary IUR includes necrosis and regions with cystic change within the tumor
- Separation technique excludes regions with high uptake similar to tumor from the preliminary IUR
- Intermediate images for PET and MRI are defined by combining tumor map gradient and gradient image
- Validation on PET phantoms and real-world PET/MRI scans of prostate, liver and pancreatic tumors
Gradient-based Generation of Intermediate Images for Heterogeneous Tumor Segmentation within Hybrid PET/MRI Scans
Arafet Sbei1, Khaoula ElBedoui1,2, Walid Barhoumi1,2 and Chokri Maktouf3

1 Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia
2 Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
3 Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
Abstract
Segmentation of tumors from hybrid PET/MRI scans plays an essential role in accurate diagnosis and treatment planning. However, when treating tumors, several challenges, notably heterogeneity and the problem of leaking into surrounding tissues with similar high uptake, have to be considered. To address these issues, we propose an automated method for accurate delineation of tumors in hybrid PET/MRI scans. The method is mainly based on creating intermediate images. In fact, an automatic detection technique that determines
a preliminary Interesting Uptake Region (IUR) is firstly performed. To overcome the leakage problem, a separation technique is adopted to generate the final IUR. Then, smart seeds are provided for the Graph Cut (GC) technique to obtain the tumor map. To create intermediate images that tend to reduce the heterogeneity faced in the original images, the tumor map gradient is combined with the gradient image. Lastly, segmentation based on the GC_sum^max technique
is applied to the generated images. The proposed method has been validated on PET phantoms as well as on real-world PET/MRI scans of prostate, liver and pancreatic tumors. Experimental comparison revealed the superiority of the proposed method over state-of-the-art methods. This confirms the crucial role of automatically creating intermediate images in dealing with the problem
of wrongly estimating arc weights for heterogeneous targets.

Keywords: PET/MRI scans, heterogeneous tumors, intermediate images, tumor map, co-segmentation, gradient

Preprint submitted to Elsevier, February 17, 2020
1. Introduction
Medical imaging has gained an increasingly important role in cancer assessment and treatment planning in recent years. Anatomical imaging modalities (also called structural imaging modalities), such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), are widely used for the examination of abnormal regions of the body caused by disease. Indeed, anatomical images are characterized by their high spatial resolution and their sensitivity in providing detailed anatomical information about tumors. However, detecting pathology using only anatomical features is a difficult task, because these modalities are weak at capturing cellular activity. Conversely, functional imaging modalities play an important role in bringing metabolic information about cancers or infectious
diseases. In particular, Positron Emission Tomography (PET) [1] gives quantitative information about lesions by exploiting the unique decay physics of
positron-emitting isotopes. 18F-FluoroDeoxyGlucose (FDG) is the most commonly used radiotracer for PET imaging. FDG accumulates in areas with high levels of metabolism and glycolysis, such as sites of inflammation, tissue repair, hyperactivity (e.g. muscle), and particularly in cancer cells, which are often highly metabolically active. The interpretation of the relatively increased uptake of FDG by many malignancies compared to the background requires knowledge of
the normal distribution of 18F-FDG uptake, as well as of physiological variants, benign lesions, and imaging-related artifacts. This makes the PET modality a reliable tool for the quantitative assessment of changes in radiotracers. In fact, different quantitative methods have been proposed in the literature. The most used one is
the Standardized Uptake Value (SUV), which provides a physiologically relevant measurement of cellular metabolism. Nevertheless, despite its strength in
providing a functional or metabolic assessment of normal tissue or disease conditions, PET exams are limited by their low spatial resolution, and their images most often lack the anatomical detail required for clinical interpretation, compared to other structural imaging modalities. In order to provide complementary information in one single procedure, medical imaging based on hybrid techniques has recently been employed in clinical routine. Indeed, images from integrated PET/CT and PET/MRI instruments have received extensive interest in medical image processing research. Thus, in the case of PET/CT, many segmentation methods have been developed to extract tumor boundaries. However, the segmentation of PET/MRI is an open research issue, since many basic challenges,
particularly the heterogeneity of tumors [2], are not yet completely solved. The definition of heterogeneity, which is one of the most challenging issues in radiotherapy treatment assessment, changes according to the used modality. Concerning PET images, it refers to the radiotracer's spatial distribution through the tumor due to spatially varying vascularization, hypoxia, necrosis, and cellularity. For the MRI modality, heterogeneity can also include the spatial variability of vessel density and the physiological tissue characteristics [4]. Figure 1 illustrates an example of a heterogeneous prostate tumor in both PET and MRI images. In this work, we propose an automated co-segmentation method that delineates tumors simultaneously in MRI and PET images. Taking into consideration the
tumor heterogeneity in both modalities, the proposed method is mainly based on the automatic determination of foreground and background seeds. Then, intermediate images are created by combining the tumor map gradient and the gradient image, in order to relax the amount of heterogeneity inside tumors. Lastly, tumors within the obtained intermediate images are co-segmented using joint fuzzy connectedness and the graph cut technique. Mainly based on two contributions, the proposed method treats two major problems usually encountered in PET/MRI segmentation frameworks. Indeed, tumors are identified with regard to nearby high uptake regions using a two-stage separation algorithm. Moreover, heterogeneous targets are delineated via the automated creation of gradient-based intermediate images.
Figure 1: Two images, from a hybrid PET/MRI scanner, that represent a prostate tumor: (a) MRI image showing a heterogeneous target, (b) PET image showing different uptake regions.
The rest of this paper is organized as follows. In Section 2, we present a brief review of tumor segmentation using monomodal PET scans as well as multimodal PET/CT and PET/MRI scans. Section 3 describes the proposed segmentation method and the framework used to assess system performance. In Section 4, we investigate the performance of the method on simulated and clinical cases. Finally, advantages of the proposed method compared to state-of-the-art methods are discussed in Section 5.
2. Related Work
A variety of segmentation methods have been developed to delineate tumors within monomodal PET scans as well as multimodal PET/CT and PET/MRI scans. For PET segmentation, thresholding-based methods (e.g. fixed, adaptive and iterative thresholding) and region-based methods (e.g. region growing) are the most used [5, 6]. However, due to noise and scanner properties, these methods usually fail to delineate inhomogeneous targets [7]. To deal with this issue, stochastic and learning-based methods have been widely adopted in recent
works for PET image segmentation, given that the fuzziness of PET images is well suited for boundary representation. In fact, stochastic-based methods exploit the statistical difference between uptake regions and surrounding tissues. For instance, Gaussian Mixture Models (GMM) are used for segmenting PET images, while assuming that the intensity distribution within the Region Of Interest (ROI) can be approximated by summing Gaussian densities
[8]. The approximation can be achieved using an optimization technique such as Expectation-Maximization (EM), allowing it to outperform fixed thresholding methods [8]. Similarly, among unsupervised statistical methods, 3-FLAB (3-Fuzzy Locally Adaptive Bayesian) has shown high accuracy when dealing with heterogeneous targets [9]. Furthermore, classification and clustering methods are also used to delineate tumors in PET images. For instance, Fuzzy
C-Means (FCM) is adopted for PET image segmentation [10], taking advantage of the fuzzy distributions of PET images during the tumor delineation procedure. Generally, most of the above-mentioned methods are not suited for multi-focal region segmentation [6]. Therefore, Affinity Propagation (AP)
can be used to determine similarity between data points on image histograms. Then, the determined affinities allow choosing an optimal threshold to segment the image into different regions [11]. Some graph-based methods are also applied to the segmentation of PET images. The majority of these methods are based on the Graph Cut (GC) technique [12]. However, the major shortcoming of this technique lies in the "min cuts" that occur when a small set of seeds is used. Other graph-based methods, such as Random Walk (RW) [13], have also been adopted. Alternatively, a Dempster-Shafer Theory (DST)-based delineation method has shown high accuracy for segmenting PET images [14]. Recently, a
slice-by-slice marching Local Active Contour (LAC) segmentation method with an automatic stopping condition has been proposed in [15]. This LAC-based
method recorded higher accuracy than many existing methods. Nevertheless, despite its strength in segmenting images based on seeds provided on only one slice, this method requires a set of seeds very close to the desired object in order to avoid leakage through physiological high uptake regions. Likewise, local active contours, with an energy function that combines a machine learning component with discriminant analysis, have shown noticeable segmentation results on both phantoms and clinical studies [16]. The LAC-based segmentation method has also been coupled with the K-Nearest Neighbor (KNN) classification technique to integrate expert knowledge in the delineation process [17]. In [18], a
joint solution for PET image segmentation, denoising and partial volume correction has been proposed. In fact, the segmentation process has been driven by an AP-based iterative clustering technique that integrated both denoising and partial volume correction algorithms into the delineation process. More recently, a segmentation method based on Convolutional Neural Networks (CNN) [19] has proved its performance against other PET segmentation methods dealing with lung and head-and-neck tumors.
Furthermore, modern methods of PET image segmentation have taken advantage of information gained from structural modalities. For instance, GC-based methods [20, 21] incorporate both functional and anatomical information using a context function, while considering two sub-graphs to represent PET and CT images. In [22], an image mapping has been proposed in order to estimate the arc weights' costs. Generally, a common problem with GC-based methods is the unrealistic assumption of a one-to-one correspondence between structural and
functional modalities. Due to their metabolic characteristics, lesions on PET scans may have smaller uptake regions compared to those on anatomical images
[23]. To deal with this limitation, an automated co-segmentation framework based on RW was introduced in [24] typically based on an asymmetric weighting technique. However, when using an automated co-segmentation method, the detection of Interesting Uptake Regions (IUR) in PET images could include
the background when dealing with high uptake regions surrounding tumors. Moreover, it is not able to encompass details such as spiculations [24]. Due to this fact, a co-segmentation method based on Fuzzy Connectedness (FC) was
proposed in [25]. Indeed, a visibility weighting scheme was introduced in order to avoid asymmetric relation between the two images. This method has shown
similar results to the RW-based co-segmentation method. Nevertheless, given the fact that it is a region-based method, FC co-segmentation may fail when
targeted tumors are heterogeneous. Moreover, a co-segmentation method based on joint FC and GC was developed in [26]. This method surpasses the FC-based methods, especially when tumors are heterogeneous. However, it was shown that, despite the combination, the whole weight can be extracted only from PET due to its high contrast [26]. In [27], a fully automated co-segmentation method was designed for gamma knife treatment targets. This method seems beneficial for PET images, but it does not take into account the slow intensity variation between tumors and surrounding tissues in MRI scans. Recently, belief functions have recorded good results for the joint segmentation of PET/CT images [28]. The main originality of these functions resides in describing voxels not only by intensities, but also by textural features. In [29], thanks to anatomical information gained from CT images, a restoration process was integrated into the co-segmentation process in order to deal with tumor boundary fuzziness. This method has shown prominent results. However, it does not take into consideration the presence of normal tissues having similar FDG uptake to the tumor; thus, it can leak through the background. It was also designed for homogeneous tumors in CT scans and has not been reliably extended to MRI images. Likewise, graph cut optimization using hybrid kernel functions has been investigated for delineating FDG uptakes in fused preoperative and postoperative PET/CT images [30].
Nevertheless, since the segmentation is applied to a single fused image, the same ROI is obtained in both modalities. This makes the method unable to deal with the asymmetric relation between both modalities. More recently, a deep learning-based method [31] has been proposed in order to delineate tumors in both CT and PET scans, while avoiding interference with the complex background of CT images. This method has improved segmentation accuracy thanks to the use of a probability map rather than the original CT images, where tumors usually have a similar intensity to the surrounding tissues. However, generalizing the former method to different tumors, especially rare ones, would be difficult due to restricted datasets of hybrid multimodal functional/anatomical scans. In addition, an empirical study on multimodal PET/CT/MRI images showed that fusing images at a convolutional or fully connected layer makes CNNs perform well on tumor delineation [32].
3. Materials and Methods
In this section, we firstly present the simulated and the real-world data used in this work. Then, the proposed method for tumor delineation in PET and MRI scans is described. Lastly, a framework for the evaluation of tumor segmentation within hybrid PET/MRI scans is carried out.
3.1. Materials
To evaluate the proposed segmentation method, both phantoms and clinical studies have been investigated. In fact, Zeolite phantoms generated in [33] are used in order to validate the segmentation of PET scans. The PET acquisition was achieved using a PET/CT scanner (Biograph TruePoint 64; Siemens Healthcare, Erlangen, Germany). Attenuation-Weighted-OSEM (AWOSEM) was performed for PET image reconstruction. Four iterations, eight projection subsets, and a 3D Gaussian post-reconstruction filter with 4-mm Full Width at Half Maximum (FWHM) were used. Each PET slice consists of 336×336
voxels of size 2.04×2.04×2.00 mm³. The images simulate eight lesions with volumes ranging from 1.86 to 7.24 ml and lesion-to-background contrasts ranging from 4.8:1 to 21.7:1. Furthermore, the proposed method is validated on eight anonymized PET/MRI images of liver, pancreas and prostate tumors from scans performed on a fully integrated PET/MRI scanner (SIGNA PET/MR 3.0 T, GE Healthcare) at the Pitié-Salpêtrière Hospital, Paris, France. All patients fasted for at least 4 hours before the PET/MRI examination. Serum glucose levels were checked prior to the injection and were less than 200 mg/dl (11.1 mmol/L) in all patients. A body-weight-adapted dose of 18F-FDG (3.7 MBq/kg) was intravenously injected. Both MRI and PET images were acquired simultaneously, 60 minutes after injection. The PET scan was reconstructed using the VPFXS reconstruction method. The image resolution is 384×384, and voxel sizes are 1.0937×1.0937×3.4000 mm³ for MRI images and 1.5625×1.5625×2.7800 mm³ for corresponding PET images. The minimum volume determined for MRI (resp. PET) is 670 mm³ (resp. 500 mm³), while the mean contrast-to-noise measured for PET images is 3.9, ranging from 2.5 to 5.5, and 1.28 for MRI images, ranging from 0.25 to 2.5.
3.2. Methods
In this work, we present an accurate method for segmenting tumors in hybrid PET/MRI scans based on creating intermediate images. Figure 2 illustrates the outline of the proposed method, which is composed of four steps: (1) the automatic determination of the preliminary IUR, (2) the separation of foreground from background when the preliminary IUR includes a normal high uptake region, (3) the creation of intermediate images to address the issue of wrongly estimating arc weights, and (4) the co-segmentation based on the GC_sum^max technique. The main contribution of this work resides in the automatic generation of intermediate images, particularly in order to deal with heterogeneous tumors. The idea was firstly proposed in [34], where a learning method proceeded by iteratively drawing foreground and background pixels until a classifier becomes able to determine the object map. To achieve this, the authors used image attributes such as texture and gradient. Alternatively, instead of using a classifier synergistically, the high activity of PET images is explored herein to determine object maps in both PET and MRI images. This permits determining the object map while avoiding manual user intervention, which is commonly based on visual feedback. In what follows, we detail the proposed method, for segmenting
homogeneous as well as heterogeneous tumors in PET/MRI scans, based on automatic intermediate image generation.

Figure 2: Outline of the proposed method. The final IUR is determined based on one of three ordered conditions: (1) when the preliminary IUR is homogeneous (denoted IUR 1), (2) after the separation using the MRI image (denoted IUR 2), (3) after the separation using the PET image (denoted IUR 3).
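Step (3) of the outline can be sketched in NumPy as follows. The weighted-sum combination and the coefficient `alpha` are illustrative assumptions; the text only specifies that the tumor map gradient is combined with the gradient image so that heterogeneity inside the tumor contributes weaker edges than the tumor boundary:

```python
import numpy as np

def gradient_magnitude(img):
    """Finite-difference gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def intermediate_image(image, tumor_map, alpha=0.5):
    """Combine the tumor-map gradient with the image gradient.

    The weighted sum and alpha are assumptions for illustration; the
    paper only states that the two gradients are combined in order to
    relax the heterogeneity inside tumors.
    """
    g_img = gradient_magnitude(image)
    g_map = gradient_magnitude(tumor_map.astype(float))
    return alpha * g_map + (1.0 - alpha) * g_img

# Toy example: a heterogeneous "tumor" (two intensity levels) whose
# binary tumor map contributes a single clean outer boundary.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
img[4:6, 2:6] = 0.6            # heterogeneity inside the target
tmap = np.zeros((8, 8))
tmap[2:6, 2:6] = 1.0
inter = intermediate_image(img, tmap)
```

On the internal edge (rows 3/4 inside the square), the tumor map contributes no gradient, so the intermediate image attenuates the spurious internal boundary relative to the raw image gradient.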
3.2.1. Automatic detection of preliminary interesting uptake regions

The main goal of determining the IUR is to provide seeds for the GC segmentation technique in order to obtain the tumor map. For this purpose, we propose an algorithm that consists of two main steps. First, a preliminary IUR is determined using a region growing technique based on the mean SUV of the target region. Then, the tumor and surrounding tissues with similar high uptakes are separated. As well, similar to [26], the PET image is registered on the MRI one
(Figure 3) using a Modality Independent Neighbourhood Descriptor (MIND) for a deformable multi-modal registration [35].

Figure 3: Registration of the PET and MRI images. (a) and (b) represent the original PET and MRI images, respectively; (c) illustrates the PET image overlaid on the MRI image.
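Since PET and MRI slices live on different grids (see Section 3.1 for the voxel sizes), any co-segmentation first requires resampling one modality onto the other's grid. The NumPy sketch below shows only this nearest-neighbour grid matching; the MIND-based deformable registration of [35] is far more involved and is not reproduced here, and the function name and 2-D setting are assumptions:

```python
import numpy as np

def resample_nearest(src, src_spacing, dst_shape, dst_spacing):
    """Nearest-neighbour resampling of a 2-D slice onto a target grid.

    Illustrative grid-matching only; no deformable alignment is done.
    """
    rows = (np.arange(dst_shape[0]) * dst_spacing[0] / src_spacing[0]).round().astype(int)
    cols = (np.arange(dst_shape[1]) * dst_spacing[1] / src_spacing[1]).round().astype(int)
    rows = np.minimum(rows, src.shape[0] - 1)   # clamp to the source extent
    cols = np.minimum(cols, src.shape[1] - 1)
    return src[np.ix_(rows, cols)]

# Toy PET slice (1.5625 mm pixels, Section 3.1) resampled onto the
# finer MRI grid (1.0937 mm pixels).
pet = np.arange(16.0).reshape(4, 4)
out = resample_nearest(pet, (1.5625, 1.5625), (6, 6), (1.0937, 1.0937))
```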
Moreover, in order to take into account the functional aspects of patients, PET images are converted into SUVs. The SUV is calculated as the ratio of the tissue Radioactivity Concentration (RC), in kBq/ml, to the injected dose (ID), in MBq, at the time of injection T, divided by a normalization factor (such as body weight, lean body weight, or body surface area). Normalization using the body weight BW in kg has been widely used and has provided interesting results in several works [15, 24]. Accordingly, the adopted SUV is defined as follows:

SUV = RC / (ID / BW),     (1)

where RC is defined as the factor between the image intensity and the image
local scale factor. In order to determine the SUV_max voxels, a thresholding step (2) is thereafter adopted:

c(u) = 1 if SUV(u) > 40% · SUV_max^global, and c(u) = 0 otherwise,     (2)
where c(·) is the encoder function, SUV(u) is the SUV of the voxel u, and SUV_max^global is the maximum SUV within the current slice. Then, we seek iteratively, in each of the eight directions, for voxels that exceed the mean value of those calculated in the previous iteration. Using the mean as a threshold value, instead of comparing to the SUV_max^local, increases the probability of including some regions, such as necrotic tissues, among the foreground voxels (Figure 4). The pseudo-code of the automatic detection of the preliminary IUR can be summarized as follows:
    input : PET_image
    output: IUR (Interesting Uptake Region)

    IUR ← c(PET_image);                       // c is the encoder function (Eq. (2))
    for i ← 1 to Card(IUR) do
        flag ← true;
        while (flag = true) do
            Tmp ← ∅;
            NewSeeds(i) ← search8neighborhoods(IUR(i));
            for k ← 1 to Card(NewSeeds(i)) do
                if (SUV(NewSeeds(i)(k)) > η·mean(SUV(IUR(i)))) then
                    // η is empirically set to 80%
                    IUR(i) ← NewSeeds(i)(k) ∪ IUR(i);
                    Tmp ← NewSeeds(i)(k) ∪ Tmp;
                end
            end
            if (Tmp = ∅) then
                flag ← false;
            end
        end
    end

Algorithm 1: Automatic detection of the preliminary IUR.
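Assuming RC is simply the image intensity times a scale factor, Eqs. (1)-(2) and a simplified, single-region 2-D version of Algorithm 1 can be sketched in Python as follows (the region mean is recomputed per visited seed rather than per full sweep, a deliberate simplification):

```python
import numpy as np
from collections import deque

def suv(image, dose_mbq, weight_kg, scale=1.0):
    """Eq. (1): SUV = RC / (ID / BW), with RC = intensity * scale (assumption)."""
    return (image * scale) / (dose_mbq / weight_kg)

def preliminary_iur(suv_img, frac=0.40, eta=0.80):
    """Simplified single-region 2-D sketch of Algorithm 1.

    Seeds come from Eq. (2) (frac of the slice-wise SUV maximum); growth
    accepts 8-neighbours whose SUV exceeds eta * mean SUV of the region.
    """
    seeds = suv_img > frac * suv_img.max()
    region = set(zip(*np.nonzero(seeds)))
    frontier = deque(region)
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while frontier:
        mean = np.mean([suv_img[p] for p in region])
        r, c = frontier.popleft()
        for dr, dc in nbrs:
            q = (r + dr, c + dc)
            if (0 <= q[0] < suv_img.shape[0] and 0 <= q[1] < suv_img.shape[1]
                    and q not in region and suv_img[q] > eta * mean):
                region.add(q)
                frontier.append(q)
    return region

# Toy slice: uniform background, a bright 3x3 "tumor" with a dimmer
# (necrotic-like) centre that still passes the 40% seed threshold.
img = np.ones((7, 7))
img[2:5, 2:5] = 10.0
img[3, 3] = 8.0
s = suv(img, 370.0, 100.0)       # hypothetical dose/weight values
region = preliminary_iur(s)
```

With these toy numbers, the whole 3x3 block (including the dimmer centre) is retained while the background, far below 80% of the region mean, is excluded, which is exactly the behaviour motivated above for necrotic tissue.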
Figure 4: Automated determination of the preliminary IUR: (a) seeds' initialization based on thresholding according to the SUV_max^global in the PET image; (b) IUR determined by the proposed detection technique (superimposed on the original PET image).
3.2.2. Separation of foreground and background
The main objective of this step is to verify if the preliminary IUR has leaked through the background. In fact, the proposed method verifies if the studied target is homogeneous based on a fixed thresholding value, defined as 40% of the maximum SUV within the target (SUV_max^local). If so, the preliminary IUR is designated to be the final IUR. Otherwise, this preliminary IUR tends to include regions of cystic change or necrosis. It is worth noting that, in the case of the presence of high uptakes nearby the tumor, the SUV is low on the boundary that separates the tumor from these surrounding tissues. Therefore, the presence of different regions with low uptake within the preliminary IUR makes the differentiation between necrotic and boundary regions a challenging task. Thus, the method begins by clustering MRI voxels of the preliminary IUR into two classes using the K-means algorithm. If the voxels of each cluster are fully connected, erosion and dilation are applied to provide seeds for the GC technique in order to obtain the tumor map. In fact, tumor identification is based on the first seeds that are determined based on Eq. 2. These first seeds reflect the high metabolism within the tumor compared to nearby tissues [21]. However, when the preliminary IUR on MRI images is heterogeneous, at least the voxels of one cluster would be disconnected from each other (Figure 5). In this case, the
PET image is used in order to detect local minima within the preliminary IUR, which are more likely to be detected inside necrotic and/or boundary regions. These local minima would include regions with high metabolic activity. In this way, the K-means algorithm is applied again, aiming to keep only voxels having a low uptake. Finally, local minima are connected to the contour based on the Dijkstra minimum path algorithm [36]. To this end, a graph is constructed where nodes represent the voxels of the tumor and arcs represent the relation between them in the eight directions. In fact, the aim is to find the most homogeneous path outgoing from the local minima regions' extremities to the contour of the initial IUR. A path between two voxels p and q can be expressed as follows [37]:
π_pq = (r_1, r_2, ..., r_k),     (3)

Figure 5: (a) Heterogeneous tumor clustering based on MRI. The red and blue colors illustrate two different clusters that are not fully connected. (b) Homogeneous tumor clustering based on MRI. The region with a blue contour represents the tumor, while the one with the red contour is a neighboring region that has similar metabolic activity to the tumor.
where r_1 = p, r_k = q, and r_i is connected to r_{i+1} (∀ i ∈ {1, ..., k−1}). Thus, the cost function, which represents the path between two successive nodes, is defined as follows:
π_{r_i, r_{i+1}} = max{ exp(−|f(r_i) − f(r_{i+1})|²) },     (4)
where π is the path between two successive voxels r_i and r_{i+1}, such that r_{i+1} belongs to the set of all non-visited nodes, and f(r_i) and f(r_{i+1}) are the normalized SUVs of voxels r_i and r_{i+1}, respectively. Then, in order to decide if
the set of voxels that joins the determined paths with the local minima voxels represents a boundary region, a homogeneity function is applied. In fact, for each voxel r_i in the final path, we define the homogeneity function as follows:

homo(r_i) = 1 if f(r_i) < min(f(π_{q1 q2})) + γ·min(f(π_{q1 q2})), and homo(r_i) = 0 otherwise,     (5)
where q_1 and q_2 are the two voxels that connect the final path to the contour of the tumor, f(π_{q1 q2}) is the SUV within the path π_{q1 q2}, and γ is the percentage of the minimum value. Hence, the homogeneity function excludes paths that are likely to have an abrupt change in the SUVs of their voxels. We notice
that, because of the heterogeneity of the tumor and of the fuzzy and blurred nature of PET images, the separation from surrounding tissues can be represented with more than a simple path that splits a region into two. Therefore, we seek voxels that lie between the two separated regions and that are more likely to have low metabolic activity, in order to ensure that all foreground voxels are included. This could be achieved by simply clustering the nearby active region; thereafter, voxels belonging to the cluster with the lowest mean SUV, while being connected to the ROI, are added to the tumor. In some cases, this could lead to a leakage into the background. To surpass this issue, the GC technique is applied in order to adjust the determination of the IUR, thanks to its capacity to identify objects with small boundaries. In fact, the GC technique prevents the delineated object from crossing boundaries that have gaps or a truly weakly visible boundary, which leads thereafter to a smoothing effect on the boundary of the output object. In order to better understand the whole process described for the separation of the preliminary IUR in PET scans, an example is given in Figure 6. In Figure 6(b), blue nodes correspond to the background, red nodes represent highly active voxels and green nodes illustrate necrotic or boundary voxels. These nodes are all related by blue arcs, and black paths refer to either the contour of the preliminary IUR or to the local minima voxels, while green paths, related to the black ones, represent the determined path based on Eq. 4. The K-means algorithm permits keeping only paths with green nodes. Based on the homogeneity function, heterogeneous paths, which include green and red nodes, are excluded from the separation process. Finally, green nodes belonging to the surrounding active region are merged into the final object after clustering the region near the tumor into two clusters using K-means. As mentioned before, these green nodes could belong to the background. But, thanks to the GC technique, the determination of the object map can be adjusted.

Figure 6: Tumor separation from a nearby active region on a PET image: (a) local minima of the IUR, (b) graph illustration of the determination of the shortest homogeneous path, (c) ROI determined after separating the tumor from the nearby active organ, (d) graph illustration of the region determined after separating the tumor from the nearby active organ.

The pseudo-code of the separation algorithm and the necrosis detection can be summarized as follows:
input : PET/MRI registered images
output: TCV: tumor contour voxels

(Cluster(1), Cluster(2)) ← K-MeansClustering(MRI_IUR, 2);
if (IsConnected(Cluster(1)) and IsConnected(Cluster(2))) then
    TCV ← Contour(Cluster(1));
else
    MinimumPETCluster ← arg min (MinLocal(IUR));
    for j ← 1 to Card(MinimumPETCluster) do
        π_{p1,q1} ← find minimum path using Eq. (4);
        π_{p2,q2} ← find minimum path using Eq. (4);
        π_{q1,q2} ← π_{p1,q1} ∪ MinimumPETCluster(j) ∪ π_{p2,q2};
        if (homo(π_{q1,q2}) = 1) then    // determine if the path is homogeneous (Eq. (5))
            TCV ← Contour(IUR)[q1, ..., q2] ∪ π_{q1,q2};
            SCV ← Contour(IUR)[q2, ..., q1] ∪ π_{q1,q2};
            (SCV(1), SCV(2)) ← K-MeansClustering(SCV, 2);
            LowUptake ← min(SCV(1), SCV(2));
            // Include voxels with low metabolic activity lying between the
            // tumor and the nearby normal regions
            TCVLowUptake ← find(IsConnected(TCV, LowUptake));
            TCV ← TCV ∪ TCVLowUptake;
        else
            TCV ← Contour(IUR);
        end
    end
end

Algorithm 2: Region separation and necrosis detection
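The first branch of Algorithm 2 clusters the MRI IUR into two intensity groups and keeps the simple contour only when both groups are spatially connected. A minimal sketch of that test is given below; the helper names (kmeans_1d, is_connected) are hypothetical, and the toy works on a 2-D boolean grid whereas the actual method operates on 3-D volumes:

```python
from collections import deque

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D K-means on a list of intensities; returns one label per value."""
    c0, c1 = min(values), max(values)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]
        g0 = [v for v, l in zip(values, labels) if l == 0]
        g1 = [v for v, l in zip(values, labels) if l == 1]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return labels

def is_connected(mask):
    """True if the True cells of a 2-D grid form one 4-connected component (BFS)."""
    cells = {(i, j) for i, row in enumerate(mask) for j, v in enumerate(row) if v}
    if not cells:
        return True
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        i, j = queue.popleft()
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(cells)
```

If both clusters pass the connectivity test, the tumor contour is taken directly from the first cluster; otherwise the PET-based shortest-path separation of Algorithm 2 is triggered.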
3.2.3. Generation of intermediate images

Given the object defined based on the IUR detection, and in order to determine its map, a morphological operation is undertaken on the determined target to obtain seeds for both modalities. After that, the segmentation is formulated as an energy minimization problem in order to perform a binary labeling for both
MRI and PET images. This is achieved by solving a maximum flow problem in low-order polynomial time. Therefore, similar to [26], the graph construction considers two graphs representing an image pair (MRI, PET). For each image, a graph is constructed, where nodes v correspond to voxels of the MRI image, while their corresponding nodes on the PET image are denoted v′. Label lv = 1 (resp. lv′ = 1) indicates that the voxel belongs to the target object of the MRI image (resp. PET image). Label lv = 0 (resp. lv′ = 0) indicates that the voxel belongs to the background of the MRI image (resp. PET image). Each graph represents two sets of links: n-links to encode the boundary term Buv (resp. Bu′v′) that measures the penalty of assigning different labels to two neighboring voxels u and v (resp. u′ and v′) in the MRI image (resp. PET
image) and t-links to encode the regional term Rv(lv) (resp. Rv′(lv′)). Given the neighborhood relationship NMRI (resp. NPET) for the MRI image (resp. PET image), the total energy function can be described as follows:

$$E(l) = \begin{cases} E_{MRI}(l) = \sum_{v \in MRI} R_v(l_v) + \sum_{(u,v) \in N_{MRI}} B_{uv},\\[4pt] E_{PET}(l) = \sum_{v' \in PET} R_{v'}(l_{v'}) + \sum_{(u',v') \in N_{PET}} B_{u'v'}. \end{cases} \tag{6}$$
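The minimization of each energy in Eq. (6) reduces to a minimum s-t cut. The sketch below is not the authors' implementation (which labels full 3-D volumes); it is a toy 1-D version showing how regional costs become t-links and boundary costs become n-links, solved with a plain Edmonds–Karp max-flow:

```python
from collections import deque

def graph_cut_1d(r_obj, r_bkg, b):
    """Binary labeling of a 1-D image by min-cut.

    r_obj[i]: regional cost of labeling pixel i as object
    r_bkg[i]: regional cost of labeling pixel i as background
    b[i]:     boundary cost between neighboring pixels i and i+1
    Returns a 0/1 label per pixel (1 = object)."""
    n = len(r_obj)
    s, t = n, n + 1
    cap = [dict() for _ in range(n + 2)]

    def add_edge(u, v, c):
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)          # residual edge

    for i in range(n):
        add_edge(s, i, r_bkg[i])         # paid if i ends up labeled background
        add_edge(i, t, r_obj[i])         # paid if i ends up labeled object
    for i in range(n - 1):
        add_edge(i, i + 1, b[i])         # n-links, both directions
        add_edge(i + 1, i, b[i])

    while True:                          # Edmonds-Karp: shortest augmenting paths
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[u][w] for u, w in path)
        for u, w in path:
            cap[u][w] -= f
            cap[w][u] += f

    # pixels still reachable from the source in the residual graph get the object label
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return [1 if i in reach else 0 for i in range(n)]
```

With low boundary cost at the true edge, the cut separates the bright pixels from the dark ones at minimum total energy.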
In fact, in what concerns MRI images, the regional term Rv(lv) is set such that Rv(lv = 1) = 0 and Rv(lv = 0) = +∞. This represents the hard region costs for the foreground object, while the hard region costs for the background are set such that Rv(lv = 1) = +∞ and Rv(lv = 0) = 0. For the area outside the background and foreground fields, the cost function can be expressed by a Gaussian mixture model. Given ḡf (resp. ḡb) the mean intensity of objects assumed to be foreground (resp. background), and the corresponding standard deviation σf (resp. σb), the region terms are expressed as follows:

$$R_v(l_v = 1) = -\lambda_1 \log P(g_v \mid l_v = 1) \propto \frac{(g_v - \bar{g}_f)^2}{\sigma_f^2}, \tag{7}$$

$$R_v(l_v = 0) = -\lambda_1 \log P(g_v \mid l_v = 0) \propto -\log\left(1 - \exp\left(-\frac{(g_v - \bar{g}_b)^2}{\sigma_b^2}\right)\right), \tag{8}$$

where λ1 is a scaling parameter. For the boundary term, a gradient cost function is adopted as follows:

$$B_{uv} = -\lambda_2 \log\left(1 - \exp\left(-\frac{|\nabla I|^2(u, v)}{2\sigma_{MRI}^2}\right)\right), \tag{9}$$
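Under the Gaussian assumptions above, the soft region costs (Eqs. (7)–(8)) and the gradient-based boundary cost (Eq. (9)) can be sketched as follows. The parameter values are placeholders, and the small epsilon guarding the logarithms is our implementation choice, not something stated in the text:

```python
import math

def region_cost_fg(g, g_f, sigma_f):
    """Eq. (7): cost of labeling a voxel with intensity g as foreground."""
    return (g - g_f) ** 2 / sigma_f ** 2

def region_cost_bg(g, g_b, sigma_b, eps=1e-12):
    """Eq. (8): cost of labeling a voxel with intensity g as background."""
    return -math.log(max(1.0 - math.exp(-(g - g_b) ** 2 / sigma_b ** 2), eps))

def boundary_cost(grad_sq, sigma, lam=10.0, eps=1e-12):
    """Eq. (9): n-link cost; high in homogeneous regions (small squared gradient),
    low across strong boundaries, so the cut prefers to pass through edges."""
    return -lam * math.log(max(1.0 - math.exp(-grad_sq / (2.0 * sigma ** 2)), eps))
```

Note how the boundary cost decreases as the squared gradient grows, which is exactly what makes cutting along visible boundaries cheap.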
where λ2 is a scaling constant, |∇I|²(u, v) is the squared gradient magnitude between u and v, and σMRI is a given Gaussian parameter illustrating the standard deviation in homogeneous regions. Similarly, for PET images, the hard region costs are set such that Rv′(lv′ = 1) = 0 and Rv′(lv′ = 0) = +∞ for foreground voxels, and Rv′(lv′ = 1) = +∞ and Rv′(lv′ = 0) = 0 for background voxels. For voxels that lie between the two regions, similar to [20], a thresholding step is applied. In fact, given the SUV S(v′) of the voxel v′, and the higher SH (= 50%·SUVmax^local) and the lower SL (= 15%·SUVmax^local) threshold values, it is highly probable that voxels with SUVs that exceed SH belong to the tumor, while voxels with SUVs lower than SL have a high likelihood of belonging to the background. Therefore, the region cost functions are defined as follows:

$$R_{v'}(l_{v'} = 1) = \begin{cases} 0 & \text{if } S(v') > S_H,\\[4pt] \lambda_3\, C_{max}\left(1 - \dfrac{1}{1 + \exp\left(-\left(\frac{S(v') - S_L}{S_H - S_L} - \alpha\right)/\beta\right)}\right) & \text{if } S_L \le S(v') \le S_H,\\[4pt] \lambda_3\, C_{max} & \text{if } S(v') \le S_L, \end{cases} \tag{10}$$

$$R_{v'}(l_{v'} = 0) = \lambda_3\, C_{max} - R_{v'}(l_{v'} = 1), \tag{11}$$
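A direct reading of Eqs. (10)–(11), with the thresholds SH = 0.5·SUVmax and SL = 0.15·SUVmax given above, can be sketched as follows. The default parameter values mirror the empirical setting reported later in Section 3.2.5, but they are placeholders here:

```python
import math

def pet_region_cost_fg(s, suv_max, lam3=1.0, c_max=10000.0, alpha=0.2, beta=0.04):
    """Eq. (10): regional cost of labeling a PET voxel with SUV `s` as tumor."""
    s_h, s_l = 0.5 * suv_max, 0.15 * suv_max
    if s > s_h:
        return 0.0
    if s <= s_l:
        return lam3 * c_max
    # sigmoid on the normalized uptake: the lower the SUV, the closer the
    # cost gets to the maximum lam3 * c_max
    x = (s - s_l) / (s_h - s_l)
    return lam3 * c_max * (1.0 - 1.0 / (1.0 + math.exp(-(x - alpha) / beta)))

def pet_region_cost_bg(s, suv_max, lam3=1.0, c_max=10000.0, alpha=0.2, beta=0.04):
    """Eq. (11): complementary background cost."""
    return lam3 * c_max - pet_region_cost_fg(s, suv_max, lam3, c_max, alpha, beta)
```

The two costs always sum to λ3·Cmax, so a voxel that is cheap to label as tumor is expensive to label as background, and vice versa.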
where λ3 is a scaling constant for the region term and Cmax is the maximum cost allowed. A sigmoid function is employed to assign a high cost to a voxel with a low SUV (between SL and SH). The parameters α and β control the curvature and the center point of the function, respectively. The boundary term, which is similar to the one used for the MRI image, is defined as follows:

$$B_{u'v'} = -\lambda_4 \log\left(1 - \exp\left(-\frac{|\nabla I|^2(u', v')}{2\sigma_{PET}^2}\right)\right), \tag{12}$$
where λ4 is a scaling constant, |∇I|²(u′, v′) denotes the squared gradient magnitude between u′ and v′, and σPET is a given Gaussian parameter that is interpreted as the standard deviation in homogeneous object regions. So, the GC segmentation result is the membership map used in order to obtain the intermediate images. In fact, the main goal of using intermediate images is to make local affinities of Iterative Relative Fuzzy Connectedness (IRFC) [38] higher inside and outside the object than on its boundary [34]. For this purpose, the gradient of each object map and the gradient of the corresponding image for both modalities are combined, after creating the image map for each object determined based on the GC technique for PET and MRI images. This enhances the discontinuities between the target and the background (Figure 7). The weight w for each arc is given according to the following linear combination [34]:

$$w = \lambda \cdot w_i + (1 - \lambda) \cdot w_o, \tag{13}$$

where λ ∈ [0, 1], wo represents the object-based weight and wi denotes the image-based weight.
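For a single arc, Eq. (13) simply blends a precomputed image-gradient weight with an object-map-gradient weight. With the 0.7/0.3 split reported later in Section 3.2.5, a sketch reads (the per-arc gradient weights are assumed to be computed elsewhere):

```python
def intermediate_arc_weight(w_image, w_object, lam=0.3):
    """Eq. (13): w = lam * w_i + (1 - lam) * w_o, with lam in [0, 1].

    With lam = 0.3 the object-map gradient dominates (weight 0.7), which
    reduces the apparent heterogeneity inside the tumor on the intermediate image."""
    assert 0.0 <= lam <= 1.0
    return lam * w_image + (1.0 - lam) * w_object
```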
Figure 7: Intermediate image generation: (a), (b) and (c) illustrate the gradient image of the object map, the gradient image and the intermediate image, respectively.

3.2.4. Co-segmentation

In our previous work [26], it was shown that combining the IRFC [38] technique with the GC technique yields promising results for the co-segmentation of tumors in hybrid PET/MRI scans. On the one hand, IRFC showed its accuracy in handling the potentially unrealistic one-to-one correspondence between PET and MRI tumors. On the other hand, GC is considered as a post-processing step allowing to improve the final segmentation results. Thus, the GCmax sum is adopted in this
work, where the IRFC technique is applied henceforth on intermediate images. In fact, the fuzzy connectedness technique describes how two voxels, denoted c and d, hang together through an affinity function [39]. Besides, a feature-based affinity, denoted by μφ(c, d), is used. It requires prior knowledge about the mean intensity m and the standard deviation σφ of the object to be segmented, which represents in this work the voxels corresponding to the object map determined using the GC technique. Based on our experimentation, the following affinity function was adopted [39]:

$$\mu_{\phi}(c, d) = \min\left\{\exp\left(-\frac{|f(c) - m|^2}{2\sigma_{\phi}^2}\right),\; \exp\left(-\frac{|f(d) - m|^2}{2\sigma_{\phi}^2}\right)\right\}. \tag{14}$$
Combining affinities to improve the segmentation of each modality showed its efficiency in handling the potentially unrealistic one-to-one correspondence between PET and MRI tumors. In fact, local affinities are calculated for each intermediate image. Then, the affinity function combines the two fuzzy affinities for each modality by assigning weights according to the visibility of the target. So, the combination of the MRI affinity (denoted μMφ) with the PET one (denoted μPφ) takes the following form:

$$\mu_{\kappa}(c, d) = \begin{cases} 0 & \text{if } \mu_{\phi}^{M}(c, d) = 0 \text{ or } \mu_{\phi}^{P}(c, d) = 0,\\[4pt] w_M\, \mu_{\phi}^{M}(c, d) + w_P\, \mu_{\phi}^{P}(c, d) & \text{otherwise}, \end{cases} \tag{15}$$

where wM (∈ [0, 1]) and wP (= 1 − wM) are the weights for the MRI affinity and the PET one, respectively. Therefore, the determined segmentation result is propagated to the original images of both modalities. Finally, the GC technique is applied similarly as described in Section 3.2.3. It is worth noting that, despite the automation of the proposed method for determining ROIs and identifying a tumor within its surrounding physiological high-uptake region without any user intervention, MRI images are still required by our collaborating expert in nuclear medicine to confirm that the segmentation proceeds correctly.
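Equations (14)–(15) can be sketched directly. Here fc and fd are assumed to be intermediate-image intensities of two neighboring voxels, and the statistics (m, σφ) come from the GC object map of each modality; all numeric values below are illustrative:

```python
import math

def feature_affinity(fc, fd, m, sigma):
    """Eq. (14): minimum of two Gaussian memberships around the object mean m."""
    a_c = math.exp(-abs(fc - m) ** 2 / (2.0 * sigma ** 2))
    a_d = math.exp(-abs(fd - m) ** 2 / (2.0 * sigma ** 2))
    return min(a_c, a_d)

def combined_affinity(mu_mri, mu_pet, w_mri=0.5):
    """Eq. (15): visibility-weighted combination; zero if either modality gives zero."""
    if mu_mri == 0.0 or mu_pet == 0.0:
        return 0.0
    return w_mri * mu_mri + (1.0 - w_mri) * mu_pet
```

The zero branch ensures that a voxel pair rejected by either modality cannot be rescued by the other, while the weighted sum lets the more visible modality dominate.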
3.2.5. Parameter Setting

In our experiments, the following parameter setting was empirically employed for all analyzed data, always according to a theoretical basis. Concerning the GC parameters for MRI images, the proposed method is not too sensitive to the choice of λ1. Thus, while a wide range of values can be used, we set the region term parameter as λ1 = 10 and the standard deviation σMRI to 0.25. The boundary term parameter λ2 plays a more important role, since its choice depends strongly on the image heterogeneity and the tumor volume. In fact, λ2 is set to 10 for the liver and the pancreatic tumors, which are homogeneous cases, and to 10×10⁷ for the prostate tumor, since it is more heterogeneous. Concerning the PET images, the standard deviation σPET for the boundary term is
set to 0.5, while the region cost is set to 10000. Indeed, this indicates that the lower the probability that a voxel belongs to the desired region, the higher the cost of labeling it as the target region. For the segmentation of PET images, the contribution of the two parameters λ3 and λ4 is quite similar. Therefore, a value of 1 is attributed to both the region and smoothness costs. For the SUV distribution term, relying on the theoretical basis and experimental findings in [21], the parameters α and β, which control the curvature and the center point of the cost function, are set to 20%·SUVmax^local and 4%·SUVmax^local, respectively. Finally, for the intermediate image generation, the weight of the object gradient plays a more important role than the one of the image gradient. This is mainly due to its contribution in reducing the amount of heterogeneity within the tumor. In fact, the weight for the object gradient is set to 0.7, while the one of the gradient image is set to 0.3.
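The empirical setting above can be collected into a single configuration sketch. The values are transcribed from this section; the dictionary layout itself is ours, not the authors':

```python
# Empirical parameters reported in Section 3.2.5 (per-case values noted inline).
PARAMS = {
    "mri": {
        "lambda1": 10.0,       # region term scaling
        "sigma_mri": 0.25,     # boundary-term standard deviation
        "lambda2": 10.0,       # boundary scaling (10e8 for the heterogeneous prostate case)
    },
    "pet": {
        "sigma_pet": 0.5,      # boundary-term standard deviation
        "c_max": 10000.0,      # maximum region cost
        "lambda3": 1.0,        # region cost scaling
        "lambda4": 1.0,        # smoothness cost scaling
        "alpha": 0.20,         # sigmoid parameter, as a fraction of the local SUVmax
        "beta": 0.04,          # sigmoid parameter, as a fraction of the local SUVmax
    },
    "intermediate": {
        "object_gradient_weight": 0.7,
        "image_gradient_weight": 0.3,
    },
}
```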
3.3. Framework for performance evaluation

In order to validate numerically the proposed segmentation method, overlap-based and spatial distance-based metrics have been measured, given a gold standard. In fact, a manual delineation was performed by our collaborator expert, who has more than 20 years of experience in nuclear medicine, with the intention of comparing the proposed segmentation method against state-of-the-art methods. Some works [15, 16] reported the subjectivity of the manual delineation with respect to the clinical specialization of the operator. For instance, oncologists will draw, on average, smaller boundaries than radiotherapists. Thus, it is more reliable for a study to consider different operators' delineations in order to obtain a consolidated reference. In our case, since the images had been entrusted to the expert under some ethical constraints, we were obliged to rely on the fact that the experience of the expert can guarantee a dependable ground-truth. In addition to that, the Dice Similarity Coefficient (DSC), the Hausdorff Distance (HD), the sensitivity and the Positive Predictive Value (PPV) were considered according to True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN) voxels. The DSC (16) [40] evaluates the overlapping ratio between the tumors delineated by a segmentation method (U1) and the ones produced within the ground-truth that was performed by our collaborator expert (U2). The HD (17) measures the most mismatched boundary points between the segmented tumor and the ground-truth [41].
$$DSC(U_1, U_2) = 2 \cdot \frac{|U_1 \cap U_2|}{|U_1| + |U_2|} = \frac{2 \cdot TP}{2 \cdot TP + FP + FN}, \tag{16}$$

$$HD(\partial U_1, \partial U_2) = \max\left\{ \sup_{x \in \partial U_1} \inf_{y \in \partial U_2} d(x, y),\; \sup_{y \in \partial U_2} \inf_{x \in \partial U_1} d(x, y) \right\}, \tag{17}$$
where d(x, y) is the Euclidean distance between points x and y, ∂U1 and ∂U2 are the boundaries of the segmented region U1 and the ground-truth U2, respectively, and |·| denotes the set cardinality operator. In order to obtain an optimal Radiation Treatment Planning (RTP), the sensitivity (18), also called True Positive Volume Fraction (TPVF), is considered [42]. It defines the fraction of the ground-truth U2 that overlaps with the segmented tissue U1. A perfect segmentation method would be 100% sensitive (segmenting all voxels from the target voxels) and 100% specific (not segmenting anything from the background voxels). Nevertheless, considering that the number of TN voxels depends on the space volume, the specificity makes little sense and only the sensitivity conveys useful information [43]. Indeed, most of the existing works replace the specificity with the Positive Predictive Value (PPV) (19), also referred to as precision, which is the fraction of the total amount of tissue in U1 that overlaps with U2.

$$sensitivity = \frac{|U_1 \cap U_2|}{|U_2|} = \frac{TP}{TP + FN}. \tag{18}$$

$$PPV = \frac{TP}{TP + FP}. \tag{19}$$

In [19], the importance of sensitivity and PPV was studied with regard to the clinical use. On the one hand, sensitivity could be considered more important than PPV in radiotherapy planning, where the aim is to reduce the risk of missing the target, even if it means delivering a higher dose to the surrounding healthy tissues and organs-at-risk. On the other hand, PPV plays a more important role in radiotherapy follow-up, since the aim is to obtain consistent volume measurements in sequential PET scans and to avoid including background/nearby
tissues. Thus, three scores have been defined as follows:
• score = 0.5·sensitivity + 0.5·PPV;
• score RadioTherapy planning (RT) = 0.6·sensitivity + 0.4·PPV;
• score Follow-Up (FU) = 0.4·sensitivity + 0.6·PPV.
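Given binary masks represented as voxel coordinate sets, the four metrics (Eqs. (16)–(19)) and the three scores above can be computed as in the following sketch. The brute-force Hausdorff distance is only meant for small examples, and all function names are ours:

```python
import math

def overlap_metrics(seg, gt):
    """DSC (Eq. 16), sensitivity (Eq. 18) and PPV (Eq. 19) from voxel sets."""
    tp = len(seg & gt)
    fp = len(seg - gt)
    fn = len(gt - seg)
    return {
        "dsc": 2.0 * tp / (2.0 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
    }

def hausdorff(boundary1, boundary2):
    """Eq. (17): symmetric Hausdorff distance between two boundary point lists."""
    def directed(a, b):
        return max(min(math.dist(x, y) for y in b) for x in a)
    return max(directed(boundary1, boundary2), directed(boundary2, boundary1))

def scores(sensitivity, ppv):
    """The three sensitivity/PPV trade-off scores defined above."""
    return {
        "score": 0.5 * sensitivity + 0.5 * ppv,
        "score_rt": 0.6 * sensitivity + 0.4 * ppv,   # radiotherapy planning
        "score_fu": 0.4 * sensitivity + 0.6 * ppv,   # follow-up
    }
```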
The proposed method is firstly compared to various single-modality PET segmentation methods widely used in the literature. These methods include thresholding using 50% of the SUVmax [44], a GC-based PET segmentation method [12], Geodesic Active Contours (GAC) [45], and clustering-based methods (Fuzzy C-Means (FCM) [10] and Affinity Propagation (AP) [11]). Then, qualitative and quantitative results of the proposed method are compared with those of three relevant state-of-the-art PET/MRI co-segmentation methods: Song's GC-based co-segmentation method [20], Xu's FC-based co-segmentation method [25] and our previous method based on GCmax sum [26].
It is worth noting that the AP method is the original authors' implementation, while the proposed method as well as the other methods have been carefully implemented in Matlab R2018a using a machine with a 2.40 GHz CPU and 8 GB of
RAM and Ubuntu 18.04 as the operating system. This was necessary in order to test all methods under the same environment.
535
4. Results
536 537 538
The performance of the proposed method is firstly compared with PET solely-based segmentation methods for both simulated and clinical cases. Then, the comparison with relevant PET/MRI co-segmentation methods is conducted for real-world scans.
533
540 541
542
543
4.1. Comparison with segmentation methods using PET solely
Performance results are first investigated for zeolite phantoms. Then, experiments are conducted on real-world clinical scans.
544
24
lP repro of
Journal Pre-proof
Figure 8: Objective evaluation of the accuracy (DSC, HD, sensitivity and PPV) obtained over zeolite phantoms, while using the proposed method (PM), 50%·SUVmax [44], GC [12], GAC [45], FCM [10] and AP [11].
4.1.1. Comparison using simulated data

The zeolite phantoms produced in [33] are used to validate the proposed method on PET scans for both homogeneous and heterogeneous tumors. Quantitative results shown in Figure 8 illustrate the mean ± standard deviation of the DSC, HD, sensitivity and PPV metrics. As demonstrated, for PET scans, the proposed method shows a better performance than the other state-of-the-art methods in terms of DSC and HD. Concerning the sensitivity and the PPV, the proposed method does not record the best results. Nevertheless, a good PPV can be offset by a bad sensitivity, and vice versa. For instance, a sensitivity of 91% is obtained by the FCM technique, whereas its recorded PPV is only 76%. In fact, in terms of the trade-off between the two values, the proposed method outperforms the state-of-the-art methods by recording 86% for both sensitivity and PPV. In comparison to the GC technique, the proposed method is slightly better, and this is mainly due to the smart seeds that are defined after determining the IUR.
Figure 9: Segmentation results on a heterogeneous prostate tumor using 50%·SUVmax [44] (a), GC [12] (b), GAC [45] (c), AP [11] (d), FCM [10] (e), and the proposed method (f). The segmentation result is in red and the ground-truth is in blue.
4.1.2. Comparison using clinical data

Figure 9 shows the segmentation results for a challenging prostate tumor case. It is clear that the compared methods are almost unable to overcome either the issue of heterogeneity within the tumor, or the problem of the presence of nearby active organs. It is worth noting that both GC-based [12] and GAC-based [45] methods are initialized with seeds close to the desired object, which explains the absence of leakage effects through the background. However, the thresholding [44], FCM [10] and AP [11] clustering methods (Figures 9(a), 9(d) and 9(e), respectively) do not rely on any seed initialization, and hence neither inhomogeneous tumors nor surrounding regions with high uptake similar to the tumor are taken into consideration. Thanks to the separation and intermediate-image generation steps, the proposed method shows its capability in handling both issues. Likewise, the numerical performances (Figure 10) confirm that the proposed method performs excellently compared to existing relevant segmentation methods using PET solely. In fact, Figure 10 illustrates the Average-Max-Min values of the DSC, HD, sensitivity and PPV metrics on eight PET images of the pancreas, liver and prostate.
Figure 10: Objective evaluation of the accuracy (DSC, HD, sensitivity and PPV) of segmenting tumors in PET images, while using the proposed method (PM), 50%·SUVmax [44], GC [12], GAC [45], FCM [10] and AP [11].
4.2. Comparison with PET/MRI co-segmentation methods
Qualitative results are shown in Figure 11 for the case of a heterogeneous prostate tumor. The tumor presents an abrupt change of intensity within both modalities. The subtle intensity change in the PET image could be explained by either necrotic regions or the presence of a boundary region between the tumor and the surrounding artery. However, heterogeneity in the MRI image reflects the presence of different tissues within the tumor. The example of the prostate tumor was studied in [26], where the combination of affinities is determined only from PET images in order to avoid the heterogeneity of MRI images. In fact, that segmentation is based on GCmax sum [26] and takes the mean and the standard deviation of the SUV as prior knowledge in order to delineate the desired objects based on seeds given by the users. Nevertheless, the risk of leaking through the background is always present. However, the creation of intermediate images has shown that the estimated affinity function works well for MRI images, allowing the proposed method to jointly use information from both modalities (Figure 11(d)).
Figure 11: Segmentation results on a heterogeneous prostate tumor using GC [20] (a), FC [25] (b), GCmax sum [26] (c) and the proposed method (d). The first row represents the MRI image and the second row shows the corresponding PET image. The segmentation result on both modalities is in red and the ground-truth is in blue.
Furthermore, an example of a segmented liver tumor is given in Figure 12. This example confirms the effectiveness of the proposed method, which generates intermediate images while exploiting anatomical images for the separation of the tumor from the surrounding tissues. Concerning homogeneous tumors, the
proposed method shows similar results to the co-segmentation method based on GCmax sum (Figure 13). The numerical performances are shown in Table 1 in terms of DSC mean ± standard deviation as well as in terms of HD mean ±
standard deviation for both PET and MRI images. Also, quantitative results in terms of sensitivity mean ± standard deviation as well as PPV mean ± stan-
dard deviation for both PET and MRI images are given in Table 2. According to the recorded results, it is clear that the proposed method outperforms various relevant state-of-the-art methods. In particular, for the MRI images, the Proposed Method (PM) reaches a DSC performance of 92%, which is signifi-
cantly higher than the ones recorded by GC [20] (= 81%), FC [25](= 84%) and GCmax sum [26] (= 91%). Besides, the percentage of DSC for PET images is 87%
for GC [20], 90% for FC [25], 92% for GCmax sum [26] and 93% for the proposed method. Similarly, the proposed method records prominent results in terms of
HD for different tumor boundaries, which confirms its effectiveness more than
the other methods. In fact, for the MRI images, the recorded HD value is equal to 18.97 for GC [20], 15.78 for FC [25], 8.77 for GCmax sum [26] and 7.67 for the
proposed method. For the PET images, the PM significantly outperforms the compared methods, since the recorded HD value is 19.53 for GC [20], 11.85 for FC [25], 11.55 for GCmax sum [26] and 6.10 for the proposed method. Besides, when combining PPV and sensitivity, the proposed method records higher values than the other methods for both PET and MRI images (Table 3 and Table 4, respectively). Note that the minimum value of DSC recorded by the proposed
method and GCmax sum [26] is the same for both MRI and PET modalities (Table 5). This can be explained by the fact that the lowest DSC value is recorded
within the homogeneous areas. Moreover, statistical tests were performed in order to decide whether or not there is a significant difference between the performances of the proposed method and those of the compared methods. Tests of proportions were conducted based on the DSC percentages. Since the number of images is quite limited, the number of observations considered herein refers to the mean number of voxels of the reference standard. In fact, compared to [20] (resp. [25]), where the DSC percentage is equal to 84% (resp. 85%), the recorded p-values are almost null for a left-tailed test (H1: 84% < 92%) (resp. (H1: 85% < 92%)). These p-values are less than a level of 0.05, which indicates that a 5% risk is used as the cutoff for significance. Therefore, the statistical test shows that the difference with [20] (resp. [25]) is significant with a risk level equal to 0.05. To confirm this, the method that recorded 84% (resp. 85%) as DSC percentage has a confidence interval estimation (at
a 95% confidence level), which varies in the interval of [82.977, 85.023] (resp.
[84.004, 85.996]). This confirms that the proposed method significantly outperforms [20] (resp. [25]) since p < 0.05. In comparison to [26], where the DSC
percentage is equal to 91%, for a left tailed test (H1: 91% < 92%) a p-value of 0.0045 is obtained. Accordingly, the GCmax sum [26] has a confidence interval of [90.202, 91.798]. As mentioned before, the proposed method records similar results to [26] for homogeneous cases. Therefore, the statistical test confirms
that the proposed method significantly outperforms [26] for the segmentation of heterogeneous tumors (p < 0.05).
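The test of proportions used above can be sketched as follows. The observation count n (the mean voxel count of the reference standard) is not restated in this excerpt, so the value used below is purely illustrative:

```python
import math

def one_sided_proportion_test(p1, p2, n):
    """Left-tailed two-proportion z-test of H1: p1 < p2, with n observations per group."""
    p_pool = (p1 + p2) / 2.0
    se = math.sqrt(p_pool * (1.0 - p_pool) * (2.0 / n))
    z = (p1 - p2) / se
    # one-sided p-value from the standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def proportion_ci(p, n, z=1.96):
    """95% normal-approximation confidence interval for a single proportion."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return (p - half, p + half)

# Illustrative values: DSC proportions 0.84 vs 0.92 with an assumed n = 5000.
p_value = one_sided_proportion_test(0.84, 0.92, 5000)
```

With n in the thousands (as with voxel counts), even an 8-point DSC gap yields a p-value far below the 0.05 cutoff used in the text.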
It is worth noting that intermediate images can be effectively generated and integrated within some existing co-segmentation methods [25, 26]. Nevertheless, these methods require manually initialized seeds, contrary to the proposed method, which is automated. Indeed, we propose to extend these works by creating intermediate images that estimate arc weights. The proposed improvement allows exploiting the tumor map determined after using IRFC segmentation on PET images. The determination of this map helps to provide smart seeds for the GC technique in order to determine MRI tumor maps. Once the tumor maps are provided, the gradient image can be combined with the tumor-map gradient to decrease the amount of heterogeneity within both modalities. Figure 14 shows examples of segmenting prostate and liver tumors after creating the intermediate images semi-automatically. We can clearly conclude that the creation of intermediate images helps to improve the segmentation results for both methods. Moreover, although its computational cost is higher, the proposed method performs more accurately than [25, 26], even while using the extended versions of these two methods. The obtained values of the mean DSC rate after applying FC [25] and GCmax sum [26] on both initial and intermediate images are presented in Table 6.
30
(a)
lP repro of
Journal Pre-proof
(b)
(c)
(d)
rna
Figure 12: Segmentation results for a liver tumor using GC [20] (a), FC [25] (b), GCmax sum [26] (c) and the proposed method (d). The first row shows the MRI image and the second row illustrates the corresponding PET image. The segmentation result on both modalities is in red and the ground-truth is in blue.
(a)
(b)
(c)
(d)
Jou
Figure 13: Segmentation results for a homogeneous pancreas tumor using GC [20] (a), FC [25] (b), GCmax sum [26] (c) and the proposed method (d). The first row represents the MRI image and the second row shows the corresponding PET image. The segmentation result on both modalities is in red and the ground-truth is in blue.
31
method            DSC for MRI    DSC for PET    HD for MRI    HD for PET
GC [20]           81.49±3.48     87.67±1.44     18.97±7.96    19.53±11.24
FC [25]           84.22±3.92     90.15±1.49     15.78±8.01    11.85±6.13
GCmax sum [26]    91.19±2.37     92.32±0.93     8.77±3.54     11.55±6.43
PM                92.03±2.69     93.53±1.25     7.67±3.53     6.10±2.11

Table 1: Quantitative DSC and HD rates (mean ± standard deviation) for MRI and PET images using GC [20], FC [25], GCmax sum [26] and the proposed method (best values are in bold).
method            Sensitivity for MRI    Sensitivity for PET    PPV for MRI    PPV for PET
GC [20]           80.80±7.24             98.37±0.66             84.69±5.30     83.43±3.44
FC [25]           86.01±6.52             85.72±3.59             84.83±5.37     95.52±1.34
GCmax sum [26]    96.34±0.41             95.05±1.51             88.46±4.14     93.26±0.68
PM                96.60±0.41             97.10±0.25             90.32±4.58     95.14±0.93

Table 2: Quantitative sensitivity and PPV rates (mean ± standard deviation) for MRI and PET images using GC [20], FC [25], GCmax sum [26] and the proposed method (best values are in bold).
method            score          score RT       score FU
GC [20]           90.90±1.81     92.40±1.71     89.41±1.40
FC [25]           90.62±1.26     89.64±1.00     91.60±0.84
GCmax sum [26]    94.15±0.96     94.33±1.06     93.97±0.87
PM                96.12±0.59     96.32±0.52     95.93±0.66

Table 3: score, score RT and score FU values for PET images using GC [20], FC [25], GCmax sum [26] and the proposed method (best values are in bold).
method            score          score RT       score FU
GC [20]           82.75±3.52     82.36±3.99     83.14±3.33
FC [25]           85.42±3.84     85.54±4.12     85.30±3.77
GCmax sum [26]    92.40±2.25     93.18±1.88     91.61±2.63
PM                93.45±2.46     94.08±2.04     92.83±2.88

Table 4: score, score RT and score FU values for MRI images using GC [20], FC [25], GCmax sum [26] and the proposed method (best values are in bold).
method            min DSC for MRI    min DSC for PET
GC [20]           71.14              84.75
FC [25]           75.60              87.10
GCmax sum [26]    84.11              91.51
PM                84.11              91.51

Table 5: Minimum DSC values for MRI and PET images using GC [20], FC [25], GCmax sum [26] and the proposed method (best values are in bold).
method                                 DSC MRI        DSC PET
FC [25]                                84.22±3.92     90.15±1.49
GCmax sum [26]                         91.19±2.37     92.32±0.93
FC with intermediate images            91.92±2.63     92.29±1.26
GCmax sum with intermediate images     91.92±2.63     92.91±1.36

Table 6: Quantitative DSC rates (mean ± standard deviation) for MRI and PET images using FC [25], GCmax sum [26], FC with intermediate images and GCmax sum with intermediate images (best values are in bold).
Figure 14: Segmentation results for liver ((a) and (b)) and prostate ((c) and (d)) tumors using FC on intermediate images ((a) and (c)) and GCmax sum on intermediate images ((b) and (d)). The first row represents MRI images and the second row shows the corresponding PET images. The segmentation result on both modalities is in red and the ground-truth is in blue.
In this work, we proposed an automated method for segmenting heterogeneous tumors in hybrid PET/MRI scans, based on the generation of intermediate images. First, we proceeded by suggesting an IUR detection algorithm that determines the first seeds based on the SU Vmax value. Besides, the preliminary
IUR region is determined using the mean SU V value in order to include regions with necrotic or cystic change due to the heterogeneous nature of tumors. This permits to avoid the risk of being trapped into local minima, contrary to existing methods [24] based on the SU Vmax . Thus, initial IUR generated by the proposed method ensures that voxels with low SUV belong to the set of foreground voxels. However, several tumors may be surrounded by normal tissues with a similar cellular activity such as the heart when dealing with lung tumor
segmentation [46]. Therefore, it is highly probable that automatic detection algorithm leaks through the background what explains the need for a separa-
tion technique. Both MRI and PET images are considered for the separation process. On the one hand, thanks to the high soft tissue contrast, MRI images could provide essential structures. Hence, the determined preliminary IUR from the MRI image is clustered into two clusters assuming that the studied tumor is homogeneous. The clustering process does not take into consideration the spatial distribution of voxels. In fact, the number of clusters used is limited
to only two, such that voxels of each cluster are all connected. The separation based on MRI images may reduce the computational cost, in comparison with the separation based on PET images which needs numerous steps. On the other hand, spatial distribution of the radiotracer within a particular organ can be explained by the low activity on its boundaries compared to its core. This is the main assumption used for the proposed separation algorithm when using PET images. In fact, this low activity is considered to be a local minima within the studied tumor. In such a way, the homogeneity function permits to facilitate the decision, due to the fact that the contrast on boundaries is homogeneous.
Jou
696
5. Conclusion and Discussion
rna
668
697
698
699
700
701
702
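The IUR detection step described above can be illustrated with a short 2D sketch. This is not the authors' implementation: the function name, the flood-fill region growing, and the use of the plain image mean as the threshold are simplifying assumptions made for illustration.

```python
import numpy as np

def preliminary_iur(suv):
    """Sketch: seed at the SUVmax voxel, then grow a connected region
    thresholded at the mean SUV so that low-uptake necrotic or cystic
    voxels inside the tumor are kept in the foreground."""
    seed = np.unravel_index(np.argmax(suv), suv.shape)
    mask = suv >= suv.mean()           # mean-SUV threshold, not SUVmax-based
    region = np.zeros(suv.shape, dtype=bool)
    stack = [seed]                     # flood fill from the seed (4-connectivity)
    while stack:
        y, x = stack.pop()
        if 0 <= y < suv.shape[0] and 0 <= x < suv.shape[1] \
                and mask[y, x] and not region[y, x]:
            region[y, x] = True
            stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return seed, region
```

On a toy image whose bright rim surrounds a low-uptake core, the grown region keeps the core voxels, whereas a threshold relative to SUVmax would tend to discard them.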
However, even though it is rare, the risk of determining a homogeneous path through necrosis could still occur. Graph-based methods have shown their efficiency and accuracy in segmenting multimodal images. Nevertheless, all these methods depend strongly on image homogeneity, which makes their performance on heterogeneous tumors quite limited. Some state-of-the-art methods of PET/MRI tumor segmentation [20, 21] handle inconsistent information by penalizing the segmentation difference. This leads to identical delineation results in both modalities and thereby neglects the asymmetric relation between them. Results recorded using GC-based segmentation methods on PET alone outperformed the co-segmentation methods that assume a one-to-one correspondence between multi-modal images, which confirms that the asymmetric relation should be taken into account. To handle this issue, we opted for the IRFC technique thanks to its visibility weighting scheme. Nevertheless, since it is region-based, the IRFC technique depends strongly on arc-weight estimation for the affinity function, notably in the case of heterogeneous targets. Furthermore, the segmented tumor can leak into the background due to the variability of tissues within MRI images. In the case of heterogeneous tumors, the segmentation can be trapped in the homogeneous region where the initial seeds are defined. For instance, both FC [25] and GC^max_sum [26] can suffer from this issue, which makes the use of MRI images useless in such cases. The proposed method deals efficiently with this issue by taking advantage of the generated intermediate images: they enhance the definition of tumor boundaries while reducing the amount of inhomogeneity, so that IRFC can perform better. It is worth mentioning that when segmenting homogeneous tumors, the proposed method records results similar to those of [26]. In fact, the proposed method performs well on different types of tumors (pancreas, liver and prostate), and its accuracy is higher than those recorded by relevant state-of-the-art methods, which makes it an effective alternative.

We note that, due to the ethical issues of obtaining a large dataset of hybrid scans, and similarly to most existing hybrid PET/MRI co-segmentation methods, we were obliged to limit the number of real-world test images to eight. It is also important to note that the delineation process for some cases, such as brain tumors, is based on separating three tumor tissues (solid or active tumor, edema and necrosis) [47]; since it considers the necrosis as part of the active tumor, the proposed method could fail to separate these tissues from each other. It should further be noted that low spatial resolution leads to the inevitable Partial Volume Effect (PVE); one limitation of the present work is that partial volume correction is outside its scope, and hence the PVE is not treated. Likewise, the effects of cardiac or respiratory motion and of patient movement are not addressed. Considering that these limitations have already been treated in the literature, we believe that the proposed method may fit in a pipeline that assists clinicians in delineating not only homogeneous tumors but also challenging heterogeneous cases. Finally, the proposed method does not work directly in 3D; nevertheless, the thick slices could justify the 2D approach.
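The MRI-based separation discussed above clusters the preliminary IUR into two intensity clusters. The exact clustering used by the method is not reproduced here; the following stand-in uses a plain two-means split of voxel intensities and, as in the discussion, ignores the spatial distribution of voxels (the connectivity constraint on each cluster is omitted for brevity):

```python
import numpy as np

def split_two_clusters(intensities):
    """Illustrative two-means (k = 2) split of IUR voxel intensities."""
    vals = np.asarray(intensities, dtype=float)
    c0, c1 = vals.min(), vals.max()                    # centers start at extremes
    for _ in range(100):
        labels = np.abs(vals - c0) > np.abs(vals - c1)  # True -> cluster 1
        if labels.all() or not labels.any():
            break                                       # degenerate: one cluster
        n0, n1 = vals[~labels].mean(), vals[labels].mean()
        if n0 == c0 and n1 == c1:
            break                                       # converged
        c0, c1 = n0, n1
    return c0, c1, labels
```

In the actual separation, the cluster containing the seeds would then be kept as the tumor side of the split.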
To summarize, an automated method for segmenting both homogeneous and heterogeneous tumors in hybrid PET/MRI scans has been proposed. Qualitative and quantitative results prove the accuracy of the proposed method, notably thanks to the generated intermediate images. In addition, we have demonstrated that intermediate images can enhance other semi-automated state-of-the-art methods. As a future objective, and in order to enhance the proposed method, empirically configured parameters could be determined more effectively using machine learning techniques, especially when images are acquired and reconstructed with different scanners. Moreover, since the proposed method is able to delineate more than a single tumor according to their positions in the image, it would be interesting to test it on images with multiple tumors. Furthermore, since CT images have shown their efficiency in bringing complementary information, integrating them into the proposed method would improve both region separation and tumor delineation. Convolutional Neural Networks (CNNs) have shown their superiority over state-of-the-art methods for segmenting PET images; to the best of our knowledge, using CNNs for the co-segmentation of PET/MRI scans has not yet been studied. Therefore, it would be interesting to investigate the segmentation of hybrid PET/MRI scans using CNNs.
References

[1] H. Zaidi, Molecular Imaging of Small Animals, New York, NY, USA: Springer, 2014.

[2] M. Hatt, F. Tixier, L. Pierce, P. E. Kinahan, C. C. L. Rest, D. Visvikis, Characterization of PET/CT images using texture analysis: the past, the present... any future?, European Journal of Nuclear Medicine and Molecular Imaging, 44(1) (2017) 151-165.

[3] J. O'Connor, C. Rose, J. Waterton, R. Carano, G. Parker, A. Jackson, Imaging intratumor heterogeneity: role in therapy response, resistance, and clinical outcome, Clinical Cancer Research, 21(2) (2015) 249-257.

[4] S. Yoon, C. Park, S. Park, J. Yoon, S. Hahn, J. Goo, Tumor heterogeneity in lung cancer: assessment with dynamic contrast-enhanced MR imaging, Radiology, 280(3) (2016) 940-948.
[5] A. Baazaoui, W. Barhoumi, E. Zagrouba, R. Mabrouk, A survey of PET image segmentation: applications in oncology, cardiology and neurology, Current Medical Imaging Reviews, 12(1) (2016) 13-27.

[6] B. Foster, U. Bagci, A. Mansoor, Z. Xu, D. Mollura, A review on segmentation of positron emission tomography images, Computers in Biology and Medicine, 50 (2014) 76-96.

[7] F. Fahey, P. Kinahan, R. Doot, M. Kocak, H. Thurston, T. Poussaint, Variability in PET quantitation within a multicenter consortium, Medical Physics, 37 (2010) 3660-3666.

[8] M. Aristophanous, B. C. Penney, M. K. Martel, C. A. Pelizzari, A Gaussian mixture model for definition of lung tumor volumes in positron emission tomography, Medical Physics, 34 (2007) 4223-4235.

[9] M. Hatt, C. Le Rest, P. Descourt, A. Dekker, D. De Ruysscher, M. Oellers, P. Lambin, O. Pradier, D. Visvikis, Accurate automatic delineation of heterogeneous functional volumes in positron emission tomography for oncology applications, International Journal of Radiation Oncology, Biology, Physics, 77(1) (2010) 301-308.
[10] A. Boudraa, J. Champier, L. Cinotti, J. Bordet, F. Lavenne, J. Mallet, Delineation and quantitation of brain lesions by fuzzy clustering in positron emission tomography, Computerized Medical Imaging and Graphics, 20(1) (1996) 31-41.

[11] B. Foster, U. Bagci, Z. Xu, B. Luna, W. Bishai, S. Jain, D. J. Mollura, Robust segmentation and accurate target definition for positron emission tomography images using affinity propagation, IEEE Transactions on Biomedical Engineering, 61(3) (2014) 711-724.

[12] Y. Boykov, M. Jolly, Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images, In Proceedings of the Eighth IEEE International Conference on Computer Vision, 1 (2001) 105-112.

[13] U. Bagci, J. Yao, J. Caban, E. Turkbey, O. Aras, D. Mollura, A graph-theoretic approach for segmentation of PET images, In Proceedings of the Annual International Conference on Engineering in Medicine and Biology, (2011) 8479-8482.

[14] C. Lian, S. Ruan, T. Denoeux, H. Li, P. Vera, Spatial evidential clustering with adaptive distance metric for tumor segmentation in FDG-PET images, IEEE Transactions on Biomedical Engineering, 65(1) (2017).

[15] A. Comelli, A. Stefano, G. Russo, M. G. Sabini, M. Ippolito, S. Bignardi, G. Petrucci, A. Yezzi, A smart and operator independent system to delineate tumours in Positron Emission Tomography scans, Computers in Biology and Medicine, 102 (2018) 1-15.
[16] A. Comelli, A. Stefano, S. Bignardi, G. Russo, M. G. Sabini, M. Ippolito, S. Barone, A. Yezzi, Active contour algorithm with discriminant analysis for delineating tumors in positron emission tomography, Artificial Intelligence in Medicine, 94 (2019) 67-78.

[17] A. Comelli, A. Stefano, G. Russo, S. Bignardi, M. G. Sabini, G. Petrucci, M. Ippolito, A. Yezzi, K-nearest neighbor driving active contours to delineate biological tumor volumes, Engineering Applications of Artificial Intelligence, 81 (2019) 133-144.

[18] Z. Xu, M. Gao, G. Z. Papadakis, B. Luna, S. Jain, D. J. Mollura, U. Bagci, Joint solution for PET image segmentation, denoising, and partial volume correction, Medical Image Analysis, 46 (2018) 229-243.

[19] M. Hatt, B. Laurent, A. Ouahabi, H. Fayad, S. Tan, W. Lu, V. Jaouen, C. Tauber, J. Czakon, F. Drapejkowski, W. Dyrka, S. Camarasu-Pop, F. Cervenansky, P. Girard, T. Glatard, M. Kain, Y. Yao, D. Visvikis, The first MICCAI challenge on PET tumor segmentation, Medical Image Analysis, 44 (2018) 177-195.

[20] Q. Song, J. Bai, D. Han, S. Bhatia, W. Sun, W. Rockey, J. Bayouth, J. Buatti, X. Wu, Optimal co-segmentation of tumor in PET-CT images with context information, IEEE Transactions on Medical Imaging, 32(9) (2013) 1685-1697.

[21] W. Ju, D. Xiang, B. Zhang, L. Wang, I. Kopriva, X. Chen, Random walk and graph cut for co-segmentation of lung tumor on PET-CT images, IEEE Transactions on Image Processing, 24(12) (2015) 5854-5867.
[22] Z. Zhong, Y. Kim, J. Buatti, X. Wu, 3D alpha matting based co-segmentation of tumors on PET-CT images, In Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment, (2017) 31-42.

[23] S. Roels, P. Slagmolen, J. Nuyts, J. Lee, D. Loeckx, F. Maes, Biological image-guided radiotherapy in rectal cancer: challenges and pitfalls, International Journal of Radiation Oncology, Biology, Physics, 75 (2009) 782-790.

[24] U. Bagci, J. Udupa, N. Mendhiratta, B. Foster, Z. Xu, J. Yao, X. Chen, D. Mollura, Joint segmentation of anatomical and functional images: applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images, Medical Image Analysis, 17(8) (2013) 929-945.

[25] Z. Xu, U. Bagci, J. Udupa, D. Mollura, Fuzzy connectedness image co-segmentation for hybrid PET/MRI and PET/CT scans, In Computational Methods for Molecular Imaging, Springer International Publishing, 22 (2015) 15-24.

[26] A. Sbei, K. ElBedoui, W. Barhoumi, P. Maksud, C. Maktouf, Hybrid PET/MRI co-segmentation based on joint fuzzy connectedness and graph cut, Computer Methods and Programs in Biomedicine, 149 (2017) 29-41.
[27] L. Rundo, A. Stefano, C. Militello, G. Russo, M. Sabini, C. D'Arrigo, F. Marletta, M. Ippolito, G. Mauri, S. Vitabile, M. C. Gilardi, A fully automatic approach for multimodal PET and MR image segmentation in Gamma Knife treatment planning, Computer Methods and Programs in Biomedicine, 144 (2017) 77-96.

[28] C. Lian, S. Ruan, T. Denoeux, H. Li, P. Vera, Joint tumor segmentation in PET-CT images using co-clustering and fusion based on belief functions, IEEE Transactions on Image Processing, 28(2) (2018) 755-766.

[29] L. Li, W. Lu, Y. Tan, S. Tan, Variational PET/CT tumor co-segmentation integrated with PET restoration, IEEE Transactions on Radiation and Plasma Medical Sciences, (2019).
[30] J. A. Jeba, S. N. Devi, Efficient graph cut optimization using hybrid kernel functions for segmentation of FDG uptakes in fused PET/CT images, Applied Soft Computing, 85 (2019) 105815.

[31] L. Li, X. Zhao, W. Lu, S. Tan, Deep learning for variational multimodality tumor segmentation in PET/CT, Neurocomputing, (2019).

[32] Z. Guo, X. Li, H. Huang, N. Guo, Q. Li, Deep learning-based image segmentation on multimodal medical imaging, IEEE Transactions on Radiation and Plasma Medical Sciences, 3(2) (2019) 162-169.

[33] C. D. Soffientini, E. De Bernardi, R. Casati, G. Baselli, F. Zito, A new zeolite PET phantom to test segmentation algorithms on heterogeneous activity distributions featured with ground-truth contours, Medical Physics, 44(1) (2017) 221-226.

[34] P. de Miranda, A. Falcão, J. Udupa, Synergistic arc-weight estimation for interactive image segmentation using graphs, Computer Vision and Image Understanding, 114(1) (2010) 85-99.

[35] M. Heinrich, M. Jenkinson, M. Bhushan, T. Matin, F. Gleeson, M. Brady, J. Schnabel, MIND: modality independent neighbourhood descriptor for multi-modal deformable registration, Medical Image Analysis, 16(7) (2012) 1423-1435.
[36] E. W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik, 1 (1959) 269-271.

[37] P. Belén, C. Jesús, S. Daniel, On automatic selection of fuzzy homogeneity measures for path-based image segmentation, EUSFLAT (2005) 691-698.

[38] K. Ciesielski, J. Udupa, P. Saha, Y. Zhuge, Iterative relative fuzzy connectedness for multiple objects with multiple seeds, Computer Vision and Image Understanding, 107(3) (2007) 160-182.
[39] K. Ciesielski, J. Udupa, Affinity functions in fuzzy connectedness based image segmentation II: defining and recognizing truly novel affinities, Computer Vision and Image Understanding, 114(1) (2010) 155-166.

[40] L. Dice, Measures of the amount of ecologic association between species, Ecology, 26 (1945) 297-302.

[41] P. Cignoni, C. Rocchini, R. Scopigno, Metro: measuring error on simplified surfaces, Computer Graphics Forum, 17(2) (1998) 167-174.

[42] J. K. Udupa, V. R. LeBlanc, Y. Zhuge, C. Imielinska, H. Schmidt, L. M. Currie, B. E. Hirsch, J. Woodburn, A framework for evaluating image segmentation algorithms, Computerized Medical Imaging and Graphics, 30 (2006) 75-87.

[43] M. Hatt, J. A. Lee, I. El Naqa, C. Caldwell, E. De Bernardi, W. Lu, S. Das, X. Geets, V. Gregoire, R. Jeraj, M. P. MacManus, O. R. Mawlawi, U. Nestle, A. B. Pugachev, H. Schoder, T. Shepherd, E. Spezi, D. Visvikis, H. Zaidi, A. S. Kirova, Classification and evaluation strategies of auto-segmentation approaches for PET: report of AAPM task group No. 211, Medical Physics, 44(6) (2017) e1-e42.

[44] E. Deniaud-Alexandre, E. Touboul, D. Lerouge, D. Grahek, J. Foulquier, Y. Petegnief, B. Grés, H. ElBalaa, K. Keraudy, K. Kerrou, F. Montravers, B. Milleron, B. Lebeau, J. Talbot, Impact of computed tomography and 18F-deoxyglucose coincidence detection emission tomography image fusion for optimization of conformal radiotherapy in non-small-cell lung cancer, International Journal of Radiation Oncology, Biology, Physics, 63(5) (2005) 1432-1441.

[45] V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision, 23(1) (1997) 61-79.

[46] S. Basu, T. Kwee, S. Surti, E. Akin, D. Yoo, A. Alavi, Fundamentals of PET and PET/CT imaging, Annals of the New York Academy of Sciences, 1228(1) (2011) 1-18.

[47] M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P. Jodoin, H. Larochelle, Brain tumor segmentation with deep neural networks, Medical Image Analysis, 35 (2017) 18-31.