Tracking tumor boundary using point correspondence for adaptive radio therapy


Computer Methods and Programs in Biomedicine 165 (2018) 187–195


Nazanin Tahmasebi a,b,∗, Pierre Boulanger a,b,c, Jihyun Yun d, B. Gino Fallone d,e,f, Kumaradevan Punithakumar a,b,c

a Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
b Servier Virtual Cardiac Centre, Mazankowski Alberta Heart Institute, Edmonton, Alberta, Canada
c Department of Radiology & Diagnostic Imaging, University of Alberta, Edmonton, Alberta, Canada
d Department of Oncology, Medical Physics Division, University of Alberta, Alberta, Canada
e Department of Physics, University of Alberta, Edmonton, Alberta, Canada
f Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta, Canada

Article history: Received 24 March 2017; Revised 12 July 2018; Accepted 2 August 2018.

Keywords: Non-rigid image registration; Image segmentation; MRI-guided radiotherapy; Tumor tracking; Linac-MR; Radiation therapy

Abstract

Background and objective: Tracking mobile tumor regions during treatment is a crucial part of image-guided radiation therapy for two main reasons which negatively affect the treatment process: (1) a tiny error will lead to some healthy tissue being irradiated; and (2) some cancerous cells may survive if the beam is not accurately positioned, as it may not cover the entire cancerous region. However, tracking or delineating such a tumor region from magnetic resonance imaging (MRI) is challenging due to the photometric similarity of the region of interest and the surrounding area, as well as the influence of organ motion. The purpose of this work is to develop an approach to track the center and boundary of a tumor region by auto-contouring the region of interest in moving organs for radiotherapy.

Methods: We utilize a nonrigid registration method as well as the publicly available RealTITracker algorithm for MRI to delineate and track tumor regions in a sequence of MRI images. The location and shape of the tumor region in the MRI image sequence vary over time due to breathing. We investigate two approaches: the first uses manual segmentation of the first frame of the pretreatment stage; the second utilizes manual segmentation of all the frames of the pretreatment stage.

Results: We evaluated the proposed approaches over a sequence of 600 images acquired from 6 patients. The method that utilizes all the frames in the pretreatment stage with moving mesh based registration yielded the best performance, with an average Dice score of 0.89 ± 0.04 and Hausdorff distance of 3.38 ± 1.10 mm.

Conclusions: This study demonstrates a promising boundary tracking tool for delineating the tumor region that can deal with respiratory movement and the constraints of adaptive radiation therapy.

© 2018 Elsevier B.V. All rights reserved.

∗ Corresponding author at: Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada. E-mail address: [email protected] (N. Tahmasebi).

https://doi.org/10.1016/j.cmpb.2018.08.002

1. Introduction

Image-guided radiotherapy (IGRT) is an important treatment option for mobile tumors, promising improved targeting and delivery of a highly conformal radiation dose [1]. Currently, many different indirect tumor tracking methods using internal or external tumor surrogates are used to treat mobile tumors [2–4]. Despite the variety of tracking techniques, however, reliance on surrogates has been shown to be sub-optimal because (1) internal surrogates require invasive procedures, and the surrogates may migrate from their initial locations [5]; and (2) external surrogates must rely on ambiguous correlations between internal tumor motion and external surrogate displacement [6]. More importantly, any tumor shape deformation is completely unknown during treatment. To compensate for these inaccuracies, a margin around the tumor is assigned and irradiated, which may result in (1) unacceptable medical complications due to excessive healthy tissue irradiation, or (2) poor disease control due to limiting the necessary therapeutic dose to avoid those complications [7,8]. Recently, hybrid radiotherapy MR systems, called Linac-MR, have been proposed by several groups for intra-fractional tumor motion management [9–13] without the need for surrogates,


which may overcome those difficulties. With Linac-MR, one can directly image the tumor during irradiation and use the images in real time to adjust the therapeutic radiation to track changes in tumor position and shape. The first and most important step of real-time tumor tracking is the accurate delineation of the tumor region in each intra-fractional MRI image. Manual contouring to obtain such a delineation is time-consuming and thus not an option. Therefore, a precise and fast auto-contouring algorithm is critical for performing real-time tumor tracking during radiation therapy. With a reliable algorithm, the dosage of the beam delivery can be significantly increased, leading to better treatment of patients.

Auto-contouring of mobile tumors in MR images is challenging due to the low contrast between the tumor region and its surrounding anatomy. The problem becomes more challenging with a large range of tumor motion and deformation; this arises, for instance, when tumors are located in the lung [14]. Lung tumors are often difficult to treat due to the large ranges of motion and deformation produced by breathing. Several studies have shown that lung tumors may move up to 40 mm in the superior–inferior (SI), 15 mm in the anterior–posterior (AP), and 10 mm in the left–right (LR) directions during normal breathing [15–18]. To delineate tumors in the lung region, Yun et al. [19] recently proposed a neural network based auto-contouring algorithm.

This study proposes a non-rigid registration with moving meshes as a framework to track tumor region boundaries. The imaging for lung tumor treatment consists of two stages: (1) the pretreatment stage; and (2) the treatment stage [14]. In the pretreatment stage, the algorithm requires an expert oncologist to manually contour the tumor regions of MR slices, whereas in the treatment stage the algorithm is expected to produce automatic contours to track the tumor region.
Two approaches were proposed in this context: (1) the first frame registration approach, which uses the manual segmentation of the first frame of the pretreatment stage; and (2) the best frame registration approach, which uses the manual segmentations of all the frames from the pretreatment stage to find the best frame, decided based on the minimum L2-norm between the selected treatment image and the pretreatment images. We also tested both the first frame and best frame methods using the publicly available RealTITracker registration algorithm¹.
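The best frame selection just described — picking the pretreatment frame with the minimum L2 distance to the current treatment image — can be sketched as follows. This is a minimal illustration with hypothetical array names, not the authors' implementation:

```python
import numpy as np

def select_best_frame(pretreatment: np.ndarray, treatment_image: np.ndarray) -> int:
    """Return the index of the pretreatment frame closest to the
    current treatment image in the L2 sense (cf. Eq. (8))."""
    # pretreatment: (N, H, W) stack of pretreatment frames
    # treatment_image: (H, W) current intra-fractional frame
    diffs = pretreatment - treatment_image[None, :, :]
    l2 = np.sqrt((diffs ** 2).sum(axis=(1, 2)))  # one L2 norm per frame
    return int(np.argmin(l2))

# Toy usage: 30 random "pretreatment" frames; the closest to frame 7 is itself.
rng = np.random.default_rng(0)
stack = rng.random((30, 64, 64))
assert select_best_frame(stack, stack[7]) == 7
```

The selected frame then serves as the moving image for the registration step described in the Method section.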

2. Method

2.1. Non-rigid image registration

A two-dimensional moving mesh is applied in order to obtain a point-to-point correspondence between the ith image Ti and the kth image Tk, defined over Ω ⊂ R². To reduce the time required to compute the moving mesh, only a part of the image is used, with enough margin to ensure that the region of interest is included. Finding the point correspondence can be formulated as the following optimization problem [20]:

φ̂k,i = arg minφ ‖Ti(ξ) − φ(Tk(ξ))‖2,  (1)

where for each pixel location ξ ∈ Ω ⊂ R², φ : Ω → Ω is a transformation function and ‖·‖2 denotes the L2-norm. To obtain a unique solution to the problem, we introduce a deformation field using a Jacobian transformation μ and the curl of the end velocity field γ, where μ : Ω → R and γ : Ω → R.

2.1.1. Moving mesh generation

In order to generate a moving mesh, we first define a continuous monitor function μ(ξ), which is used to obtain a vector field [21]:

∫Ω μ(ξ) dξ = |Ω|.  (2)

Our goal is to find a transformation φ : Ω → Ω, ∂Ω → ∂Ω with

Jφ(ξ) = μ(ξ),  (3)

where Jφ denotes the Jacobian of the transformation. To find a transformation φ that satisfies (2) and (3), let a vector field ρ(ξ) be such that

div ρ(ξ) = μ(ξ) − 1  (4)

and

curl ρ(ξ) = γ(ξ),  (5)

with null boundary condition ρ(ξ) = 0 ∀ξ ∈ ∂Ω, where γ(ξ) is a continuous function over Ω. Let a velocity vector field be formed from ρ(ξ):

νt(ξ) = ρ(ξ) / (t + (1 − t)μ(ξ)),  t ∈ [0, 1],  (6)

where t is an artificially introduced time parameter. Then φ is obtained using

dψ(ξ, t)/dt = νt(ψ(ξ, t)),  t ∈ [0, 1],  ψ(ξ, t = 0) = ξ,  (7)

where φ(ξ) = ψ(ξ, t = 1). The above optimization problem is solved using a step-then-correct approach [20]. We refer the reader to Chen et al. [20] for the derivation and numerical implementation details.

Given the segmentation in one frame, we computed a sequence of corresponding points along a window surrounding the border of the tumor region in all the frames using the transformation function φ̂. In contrast to registering images sequentially [22], we took the following approaches to reduce the accumulative error of the algorithm.

2.1.2. First frame registration based technique

We used the above moving mesh point correspondence method between the first image in the pretreatment stage, T1, and each kth image in the treatment stage, Tk, to obtain a sequence of points over time.

2.1.3. Best frame registration based technique

In this approach, we use the pretreatment images as the training set. For each image in the treatment stage, we select the closest pretreatment image based on the L2-norm between the selected treatment image and the pretreatment images, and compute the point correspondence between them. Let Nq be the number of frames in the pretreatment set; the typical number of manually segmented pretreatment frames is 30. The best frame Tq (among Tj for j ∈ {1, . . . , Nq}) for any treatment frame Tk can be found by

Tq = arg minTj ‖Tj(ξ) − Tk(ξ)‖2.  (8)

Then a point correspondence between Tq and Tk is computed to obtain the segmentation on Tk.

¹ The RealTITracker is available at: http://bsenneville.free.fr/RealTITracker/.

3. Results

3.1. Experimental protocol

The data consist of free-breathing 3T MRI images acquired from six patients with lung cancer. Additional image noise was added to reflect the image noise at 0.5T, which will be used during radiation treatment. Each data set consists of 130 images: 30 images from the pretreatment stage and 100 images from the treatment stage. A pretreatment, dynamic MR scan is performed with the treatment unit (i.e., Linac-MR), using the same MR sequence and patient setup intended for treatment. During treatment, the Linac-MR will provide 2D intra-fractional, dynamic MR imaging of a lung tumor. The plane of MR imaging will be selected to visualize the maximum tumor dimensions for the beam's eye view. MR images will be acquired at an imaging rate of 4 fps to keep the time delay below 500 ms. The details of the datasets are presented in Table 1.

Table 1
Details of the data sets used in the evaluation of the proposed method.

Number of subjects: 6
Breathing pattern: Free breathing
Number of pretreatment frames: 30 per subject
Number of treatment frames: 100 per subject
Original image matrix: 128 × 128 pixels
Interpolated image matrix: 256 × 256 pixels
Field of view: 400 × 400 mm
Pixel spacing: 1.56 × 1.56 mm
Imaging speed: 4 fps
TE: 1.1 ms
TR: 2.2 ms
Slice chosen: sagittal
Slice thickness: 20 mm

To reduce the computational time of the registration approach, only a part Ω of the image was used, with enough margin to allow for tissue deformation due to respiration. The bounding box Ω for the image was determined by finding the tight bound that contains the initial manual contours in the first frame of the pretreatment images, with a margin of 15 pixels. Although a better bounding box could have been obtained for the best frame approach by utilizing the set of all images in the pretreatment stage, we used the same bounding box for both the first and best frame approaches to prevent any bias.

The following parameter settings were used for the moving mesh approach: maximum number of iterations = 100; number of steps in the Euler's method for solving (7) = 20; and lower and upper bounds for the Jacobian = 0.5 and 2.0, respectively. The reader is referred to [20] for a description of the parameter settings. We used the L2–L2 functional for the RealTITracker implementation, with 0.3 as the weighting factor between the data fidelity term and the regularity of the motion field [23].

Table 2
Maximum tumor motion in both superior–inferior and anterior–posterior directions as well as mean tumor area of each subject during the treatment stage.

Patient     Max SI centroid motion (mm)   Max AP centroid motion (mm)   Tumor area (mm²)
Patient 1   22.17                         6.15                          491.95
Patient 2   28.91                         9.43                          688.66
Patient 3   11.47                         8.04                          240.32
Patient 4   11.81                         4.43                          696.79
Patient 5   12.01                         8.27                          414.00
Patient 6   12.47                         3.68                          823.09

Table 2 shows the maximum tumor motion in both the SI and AP directions as well as the mean tumor area of each subject during the treatment stage. The highest movement in both the SI and AP directions belongs to patient 2. Patients 6 and 4 have the biggest tumor sizes with relatively low motion. Patient 3 has the smallest tumor size, with high AP motion and low SI motion. Patients 1 and 5 both have medium tumor sizes; however, patient 5 has higher AP motion and lower SI motion than patient 1. In addition, the tumor shape in patient 5 differs from that of the other patients. The shapes of the tumors are shown in Fig. 2. The proposed method was evaluated over 600 treatment stage frames acquired from the six patients with lung tumors. The results of the automated algorithms were compared with manual segmentations by an expert radiation oncologist.

3.2. Visual inspection

In Fig. 1, we give representative examples of the segmentation results for the first frame and best frame registration methods for patient 2. The manual segmentation and the automated segmentation by the best frame diffeomorphic registration based algorithm are shown by green and yellow contours, respectively. Although both methods tracked the boundary of the tumor region during the breathing motion, the best frame registration approach yielded better conformance with the expert manual segmentation in most frames. Fig. 2 shows representative examples of the automated (yellow) and manual (green) boundaries of the tumor regions for the first frame (second column) and best frame (third column) registration techniques for all subjects.

3.3. Quantitative evaluation metrics

The following criteria were used to evaluate the similarity between the manual and automatic segmentations.

3.3.1. The Dice metric (DM)
We computed the Dice metric, a common measure of similarity between manual and automatic segmentations. The DM is defined by

DM(Aa, Am) = 2Aam / (Aa + Am),  (9)

where Aa, Am, and Aam are the areas enclosed by the automatically segmented region, the corresponding expert-labeled region, and their intersection, respectively. Note that DM = 1 means a complete match.

3.3.2. The Hausdorff distance (HD)
We computed the HD [24], a symmetric measure of distance between the automatic and manual borders of the tumor region. Let the automatic and manual borders be denoted by Ca and Cm, respectively. For each point pa^i on Ca, we computed the distances to all the points pm^j on Cm. The HD is then calculated by

HD(Ca, Cm) = max( max_i min_j d(pa^i, pm^j), max_j min_i d(pm^j, pa^i) ),  (10)

where d(·, ·) is the Euclidean distance.

3.3.3. Root mean square distance (RMSD)
The RMSD was obtained by calculating the square root of the averaged squared Euclidean distances between the automatic and manual contour points in a symmetric manner. The RMSD is given by

RMSD = √{ (1 / (Nm + Na)) [ Σ_{pa∈Ca} min_{pm∈Cm} d²(pa, pm) + Σ_{pm∈Cm} min_{pa∈Ca} d²(pm, pa) ] },  (11)

where d²(·, ·) is the square of the Euclidean distance, and Na and Nm are the numbers of sample points on Ca and Cm, respectively.
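Under definitions (9)–(11), the three metrics can be sketched with NumPy as follows. This is a minimal illustration on binary masks and contour point sets with hypothetical names, not the evaluation code used in the study:

```python
import numpy as np

def dice(mask_auto: np.ndarray, mask_manual: np.ndarray) -> float:
    """Dice metric (Eq. (9)) between two binary masks."""
    inter = np.logical_and(mask_auto, mask_manual).sum()
    return 2.0 * inter / (mask_auto.sum() + mask_manual.sum())

def _pairwise_dist(ca: np.ndarray, cm: np.ndarray) -> np.ndarray:
    # (Na, Nm) matrix of Euclidean distances between contour points
    return np.linalg.norm(ca[:, None, :] - cm[None, :, :], axis=-1)

def hausdorff(ca: np.ndarray, cm: np.ndarray) -> float:
    """Symmetric Hausdorff distance (Eq. (10)) between (N, 2) contours."""
    d = _pairwise_dist(ca, cm)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def rmsd(ca: np.ndarray, cm: np.ndarray) -> float:
    """Symmetric root mean square distance (Eq. (11))."""
    d2 = _pairwise_dist(ca, cm) ** 2
    total = d2.min(axis=1).sum() + d2.min(axis=0).sum()
    return float(np.sqrt(total / (len(ca) + len(cm))))

# Toy check: identical inputs give a perfect Dice score and zero distances.
c = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
m = np.ones((4, 4), dtype=bool)
assert dice(m, m) == 1.0 and hausdorff(c, c) == 0.0 and rmsd(c, c) == 0.0
```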

3.3.4. Reliability
We examined the reliability of the algorithm by evaluating the complementary cumulative distribution function (CCDF) of the obtained Dice metrics, defined for each d ∈ [0, 1] as the probability of obtaining a DM higher than d over all frames, i.e., the number of frames segmented with a DM higher than d divided by the total number of frames.
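A reliability curve of this kind can be sketched as follows (the `dice_scores` array is hypothetical):

```python
import numpy as np

def ccdf(dice_scores, thresholds):
    """Fraction of frames whose Dice metric exceeds each threshold d."""
    scores = np.asarray(dice_scores)
    return np.array([(scores > d).mean() for d in thresholds])

# Toy usage: half of these frames exceed a Dice of 0.85.
scores = [0.80, 0.84, 0.90, 0.95]
assert list(ccdf(scores, [0.0, 0.85])) == [1.0, 0.5]
```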


Fig. 1. Representative examples showing the automated (yellow) and manual (green) boundaries of the tumor regions for the first frame (first row) and best frame (second row) registration techniques from patient 2. The results are based on the diffeomorphic registration approach. The best frame registration yielded a better conformance with the expert manual contours than the first frame registration in most frames.

3.3.5. Centroid difference
We computed the centroid difference by measuring the Euclidean distance between the centroid positions of the regions of interest from the manual and automatic segmentations. The centroid difference is given by

Centroid difference = √( (Δx)² + (Δy)² ),  (12)

where Δx and Δy are the centroid differences between the manual and automatic regions in the x (anterior–posterior) and y (superior–inferior) coordinate directions, respectively.
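On binary masks, the centroid difference of (12) reduces to a few lines (a sketch with hypothetical mask names, assuming pixel-index coordinates):

```python
import numpy as np

def centroid_difference(mask_auto: np.ndarray, mask_manual: np.ndarray) -> float:
    """Euclidean distance between region centroids (Eq. (12))."""
    ca = np.array(np.nonzero(mask_auto)).mean(axis=1)    # centroid of auto mask
    cm = np.array(np.nonzero(mask_manual)).mean(axis=1)  # centroid of manual mask
    return float(np.linalg.norm(ca - cm))

# Toy usage: two single-pixel regions three columns apart.
a = np.zeros((5, 5), dtype=bool); a[2, 1] = True
b = np.zeros((5, 5), dtype=bool); b[2, 4] = True
assert centroid_difference(a, b) == 3.0
```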

We also evaluated the accuracy of the automated segmentation results with respect to the L2 similarity metric for subject 1. For every image in the treatment stage, the auto segmentation results were computed multiple times using every manual segmentation in the pretreatment stage. The treatment stage consists of 100 images and the pretreatment stage consists of 30 manual segmentations, which resulted in 3000 auto segmentations. Fig. 5 shows the Dice metric evaluated for different ranges of the L2 similarity metric values. An unoptimized MATLAB implementation of the registration took on average 0.486 seconds to compute the contours in each image on an Intel Core i7 3.5 GHz CPU with 64 GB RAM.
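The exhaustive sweep described above — every pretreatment frame paired with every treatment frame — can be sketched as follows. The function names are hypothetical, and `dice_fn` stands in for the full register-and-propagate pipeline:

```python
import numpy as np

def l2_metric(a: np.ndarray, b: np.ndarray) -> float:
    """L2 distance between two frames."""
    return float(np.linalg.norm(a - b))

def evaluate_pairs(pretreatment, treatment, dice_fn):
    """Record (L2 metric, Dice score) for every pretreatment/treatment pair."""
    records = []
    for p in pretreatment:
        for t in treatment:
            # dice_fn stands in for: register p -> t, propagate p's manual
            # contour, and compare the result with t's manual contour.
            records.append((l2_metric(p, t), dice_fn(p, t)))
    return records

# Toy usage: 30 x 100 pairs, as in the subject-1 experiment.
rng = np.random.default_rng(1)
pre = rng.random((30, 8, 8))
tre = rng.random((100, 8, 8))
recs = evaluate_pairs(pre, tre, lambda p, t: 1.0)
assert len(recs) == 3000
```

Binning the recorded Dice scores by their L2 value then yields a curve like the one reported in Fig. 5.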

3.4. Quantitative results

We compared the proposed nonrigid registration and RealTITracker [25] approaches for the first and best frame methods. Table 3 reports the performance in terms of DM, HD, centroid difference, and RMSD. The average values of DM, HD, and RMSD for the nonrigid registration best frame approach are 0.89, 3.38 mm, and 1.34 mm, respectively; for the RealTITracker best frame approach, the respective values are 0.89, 3.45 mm, and 1.45 mm. Table 4 reports the mean and standard deviation of the centroid difference in the SI and AP directions, as well as the centroid differences in 2D, between automatic and manual segmentations of the lung tumor regions in each patient. Fig. 3 shows the tumor centroid motion obtained using the manual (blue) and the nonrigid registration best frame based automatic (red) approaches in the treatment stage for subjects one to six (each subject is shown in the respective row). The anterior–posterior position is shown in column 1 and the superior–inferior position in column 2. The displacements are measured from the centroid of the manual regions. Fig. 4 shows the reliability graph of the nonrigid registration first frame and best frame methods; the best frame approach outperformed the first frame method in terms of reliability.

4. Discussion

This study proposed and compared different approaches to track the center and boundary of a mobile lung tumor region using MRI. Current clinical radiation therapy systems are limited and lack sufficient speed to deliver the radiation dose with precise location accuracy. However, knowing the exact location of the tumor region is more advantageous than tracking only one representative location for the following reasons: (1) treatment margins can be significantly reduced by adjusting the size of the beam, reducing normal tissue toxicity; and (2) the amount of dose can be determined based on the size of the tumor. Additionally, the boundary delineation approach proposed in this study can be implemented as an online system in hybrid Linac-MR systems in the future.

The first frame approach significantly reduces the number of manually segmented images required for the pretreatment stage, which saves the expert's time. However, due to the high amplitude of tumor motion in patient 2, the algorithm fails to track the tumor boundary in some frames. The best frame approach outperforms the first frame method both in terms of performance and in its ability to track the


Fig. 2. Representative examples showing the automated (yellow) and manual (green) boundaries of the tumor regions for the first frame (second column) and best frame (third column) registration techniques for all patients. The results are based on the diffeomorphic registration approach.


Table 3
Mean (standard deviation) of the Dice metric, Hausdorff distance, and root mean square distance between automatic and manual segmentations of the lung tumor regions in each patient. (MM = moving mesh.)

Dice metric (DM)
Method                      P1           P2           P3           P4           P5           P6           Mean
First frame (MM)            0.85 (0.04)  0.82 (0.25)  0.85 (0.05)  0.92 (0.02)  0.88 (0.02)  0.89 (0.03)  0.87 (0.06)
First frame RealTITracker   0.86 (0.05)  0.89 (0.09)  0.89 (0.04)  0.92 (0.02)  0.83 (0.07)  0.88 (0.04)  0.88 (0.05)
Best frame (MM)             0.86 (0.06)  0.91 (0.03)  0.87 (0.05)  0.95 (0.02)  0.86 (0.03)  0.91 (0.03)  0.89 (0.04)
Best frame RealTITracker    0.90 (0.04)  0.92 (0.02)  0.89 (0.05)  0.91 (0.03)  0.86 (0.04)  0.90 (0.03)  0.89 (0.04)

Hausdorff distance (HD, mm)
First frame (MM)            4.21 (1.19)  7.60 (11.16) 2.96 (1.18)  3.05 (0.68)  4.47 (1.42)  4.55 (1.23)  4.47 (2.81)
First frame RealTITracker   4.88 (1.71)  4.20 (3.13)  2.37 (0.68)  3.82 (0.91)  5.18 (1.75)  4.70 (1.27)  4.19 (1.57)
Best frame (MM)             3.67 (1.37)  3.40 (1.00)  2.48 (0.67)  2.58 (0.60)  4.34 (1.82)  3.82 (1.14)  3.38 (1.10)
Best frame RealTITracker    3.39 (0.94)  3.34 (0.84)  2.43 (0.86)  3.45 (0.71)  4.09 (1.58)  3.99 (1.14)  3.45 (1.01)

Root mean square distance (RMSD, mm)
First frame (MM)            1.82 (0.50)  3.75 (7.17)  1.30 (0.42)  1.21 (0.20)  1.34 (0.25)  1.82 (0.48)  1.87 (1.51)
First frame RealTITracker   2.00 (0.70)  1.78 (1.78)  1.09 (0.28)  1.50 (0.32)  1.72 (0.54)  2.04 (0.59)  1.69 (0.70)
Best frame (MM)             1.62 (0.66)  1.40 (0.44)  1.11 (0.30)  1.00 (0.24)  1.35 (0.36)  1.56 (0.44)  1.34 (0.41)
Best frame RealTITracker    1.43 (0.43)  1.40 (0.34)  1.15 (0.38)  1.52 (0.34)  1.50 (0.40)  1.71 (0.47)  1.45 (0.39)

Table 4
Mean (standard deviation) of the difference in centroid positions between manual and automatic segmentations: overall as well as along the SI and AP directions for each patient. (MM = moving mesh.)

Centroid difference (mm)
Method                      P1           P2           P3           P4           P5           P6           Mean
First frame (MM)            1.30 (0.58)  4.40 (10.23) 0.79 (0.55)  1.54 (0.48)  1.44 (0.80)  2.07 (1.07)  1.92 (2.29)
First frame RealTITracker   2.70 (1.39)  1.83 (2.82)  0.83 (0.50)  1.76 (0.53)  1.47 (0.97)  2.29 (1.28)  1.81 (1.25)
Best frame (MM)             1.34 (0.89)  1.20 (0.64)  0.95 (0.41)  0.80 (0.48)  1.34 (1.23)  1.43 (0.81)  1.18 (0.67)
Best frame RealTITracker    1.25 (0.80)  1.03 (0.62)  0.96 (0.49)  0.84 (0.41)  1.29 (0.80)  1.39 (0.75)  1.13 (0.64)

Centroid difference, S–I (mm)
First frame (MM)            1.13 (0.60)  1.51 (2.88)  0.51 (0.51)  1.45 (0.46)  0.85 (0.62)  0.92 (0.59)  0.91 (0.59)
First frame RealTITracker   1.59 (0.95)  0.71 (1.03)  0.59 (0.50)  1.67 (0.55)  1.00 (0.93)  0.81 (0.53)  1.06 (0.75)
Best frame (MM)             0.53 (0.45)  0.68 (0.56)  0.59 (0.41)  0.49 (0.37)  0.86 (0.68)  1.08 (0.80)  0.70 (0.55)
Best frame RealTITracker    0.96 (0.83)  0.71 (0.57)  0.59 (0.46)  0.57 (0.40)  0.82 (0.63)  0.85 (0.64)  0.75 (0.59)

Centroid difference, A–P (mm)
First frame (MM)            0.50 (0.37)  3.91 (9.92)  0.50 (0.41)  0.45 (0.30)  1.09 (0.67)  1.75 (1.11)  1.75 (1.11)
First frame RealTITracker   2.07 (1.23)  1.60 (2.68)  0.47 (0.36)  0.45 (0.31)  0.90 (0.71)  1.98 (1.43)  1.24 (1.12)
Best frame (MM)             1.16 (0.93)  0.85 (0.61)  0.62 (0.42)  0.55 (0.45)  0.89 (0.65)  0.70 (0.63)  0.80 (0.61)
Best frame RealTITracker    0.59 (0.48)  0.63 (0.48)  0.64 (0.44)  0.53 (0.35)  0.86 (0.71)  0.91 (0.74)  0.69 (0.53)

tumor region. Finding the best frame and applying point correspondence allows for small deformation in the moving mesh approach, thereby converging to the final solution faster. Hence, investing some time to find the best frame among the pretreatment images saves time in the registration process. Tables 3 and 4 report the results for first frame and best frame registration as well

as RealTITracker approaches in terms of DM, HD, centroid difference, and RMSD. The number of sample points used in the RMSD computation is slightly different for each method due to the difference in the number of points in the automated contours. The numbers of sample points used in the RMSD computations are 391 ± 13, 399 ± 11, 384 ± 11, and 398 ± 12 for the first frame moving mesh, first frame RealTITracker, best frame moving mesh, and best frame RealTITracker methods, respectively.

Fig. 3. Tumor centroid positions estimated from the manual (blue) and automated (red) methods in the treatment stage for patients one to six (each patient is shown in the respective row). The automated position estimates are based on the best frame diffeomorphic registration approach. The anterior–posterior position is shown in column 1 and the superior–inferior position in column 2. The displacements are measured from the centroid of the manual regions.

Fig. 5 shows the Dice score values against different ranges of the L2 similarity metric. We observed that the lower the L2 similarity metric, the higher the accuracy. Therefore, using pairs of images with a lower L2 metric, as in the best frame approach, can lead to improved segmentation results, which is consistent with our evaluation of the first and best frame approaches.

Tracking based approaches to guide radiation therapy have been proposed recently in the literature. Zachiu et al. proposed an optical flow based tracking approach for MR-guided beam therapies of moving organs, which was tested on livers and kidneys [23]. Wilms et al. proposed a deformable registration method for tracking, which was tested on thoracic/abdominal MR and liver ultrasound data sets [26]. One of the advantages of the proposed diffeomorphic registration based method is that it allows for tracking large deformations due to the movement of the lungs during respiration. In this study, we have demonstrated that the proposed method can accurately track organ movement of up to 29 mm in patient 2. In previous work related to this study, 30 frames were manually contoured by the expert oncologist to train the two neural network models defined in [19]. That study yielded values of 0.89 ± 0.04, 4.69 ± 1.68 mm, and 1.45 ± 0.79 mm for the Dice score, HD, and centroid difference, respectively. The proposed method yielded better values for HD and centroid difference in comparison to the method by Yun et al. [19] for the majority of the subjects.

Our study utilized single 2D image sequences acquired from Linac-MR scanners. Therefore, we relied on a 2D implementation of the registration algorithm, which does not track tissue movements in the through-plane direction. However, this tracking limitation stems from the scanning protocol; the proposed approach is not limited to 2D and can be extended to 3D.

Although our method relies on point correspondence between the image frames in computing the auto segmentation, it does not explicitly utilize the temporal consistency of the motion of the tumor region over the image sequence. In the future, we plan to investigate options for adding temporal constraints on the smoothness of the motion of the tumor regions. One limitation of our study is that the method was evaluated over only a small set of patients. The proposed approach utilized the experimental Linac-MR system, and therefore only patient volunteers participated in the study, which limited the number of scans available for analysis.

Fig. 4. Reliability graph of the first frame and best frame methods. The best frame approach outperformed the first frame method in terms of reliability.

Fig. 5. The median, 5th percentile, and 95th percentile values of the Dice score between manual and automated segmentation results with respect to the L2 similarity metric, evaluated over 3000 pairs of images from subject 1. The lower the L2 similarity metric, the higher the accuracy.

5. Conclusion

In this paper, we investigated the delineation of lung tumor regions from MR images acquired during free breathing for a real-time radiation therapy application. To automate tumor boundary tracking, we proposed a non-rigid registration method based on a moving mesh approach. Given an expert oncologist's manual segmentation of a single frame in the sequence, the proposed method tracks the tumor borders in the remaining frames. Moreover, we investigated a further improvement of the method by training the algorithm to find the best match between the image in the treatment stage and slices acquired during the pretreatment stage. We evaluated the proposed methods over a data set consisting of 600 frames of MR images acquired from 6 patients with lung cancer. The best frame based registration method yielded the best results, with less error relative to the manual ground truth segmentation. In addition, the best frame method outperformed the algorithm proposed by Yun et al. [19] in terms of the Hausdorff distance.

Ethical Approval
The protocol was approved by the University of Alberta Health Research Ethics Board.

Funding
This work was supported by a grant from SERVIER Canada Inc.

Conflict of Interest
None declared.

Supplementary material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.cmpb.2018.08.002.

References

[1] X. Han, Feature-constrained nonlinear registration of lung CT images, in: B. van Ginneken, K. Murphy, T. Heimann, V. Pekar, X. Deng (Eds.), MICCAI 2010 Workshop: Medical Image Analysis for the Clinic – A Grand Challenge, 2010, pp. 63–72.
[2] S.S. Vedam, V.R. Kini, P.J. Keall, V. Ramakrishnan, H. Mostafavi, R. Mohan, Quantifying the predictability of diaphragm motion during respiration with a noninvasive external marker, Med. Phys. 30 (4) (2003) 505–513.
[3] E. Nioutsikou, Y. Seppenwoolde, J.R.N. Symonds-Tayler, B. Heijmen, P. Evans, S. Webb, Dosimetric investigation of lung tumor motion compensation with a robotic respiratory tracking system: an experimental study, Med. Phys. 35 (4) (2008) 1232–1240.
[4] R.L. Smith, K. Lechleiter, K. Malinowski, D. Shepard, D. Housley, M. Afghan, J. Newell, J. Petersen, B. Sargent, P. Parikh, Evaluation of linear accelerator gating with real-time electromagnetic tracking, Int. J. Radiat. Oncol. Biol. Phys. 74 (3) (2009) 920–927.
[5] K. Kitamura, H. Shirato, S. Shimizu, N. Shinohara, T. Harabayashi, T. Shimizu, Y. Kodama, H. Endo, R. Onimaru, S. Nishioka, H. Aoyama, K. Tsuchiya, K. Miyasaka, Registration accuracy and possible migration of internal fiducial gold marker implanted in prostate and liver treated with real-time tumor-tracking radiation therapy (RTRT), Radiother. Oncol. 62 (3) (2002) 275–281.

[6] C. Ozhasoglu, M.J. Murphy, Issues in respiratory motion compensation during external-beam radiotherapy, Int. J. Radiat. Oncol. Biol. Phys. 52 (5) (2002) 1389–1399.
[7] L. Ekberg, O. Holmberg, L. Wittgren, G. Bjelkengren, T. Landberg, What margins should be added to the clinical target volume in radiotherapy treatment planning for lung cancer? Radiother. Oncol. 48 (1) (1998) 71–77.
[8] J.H. Hendry, B. Jeremic, E.H. Zubizarreta, Normal tissue complications after radiation therapy, Rev. Panam. Salud Publica 20 (2006) 151–160.
[9] A. Sawant, R.L. Smith, R.B. Venkat, L. Santanam, B. Cho, P. Poulsen, H. Cattell, L.J. Newell, P. Parikh, P.J. Keall, Toward submillimeter accuracy in the management of intrafraction motion: the integration of real-time internal position monitoring and multileaf collimator target tracking, Int. J. Radiat. Oncol. Biol. Phys. 74 (2) (2009) 575–582.
[10] B. Cho, P.R. Poulsen, A. Sloutsky, A. Sawant, P.J. Keall, First demonstration of combined kV/MV image-guided real-time dynamic multileaf-collimator target tracking, Int. J. Radiat. Oncol. Biol. Phys. 74 (3) (2009) 859–867.
[11] T. Depuydt, D. Verellen, O. Haas, T. Gevaert, N. Linthout, M. Duchateau, K. Tournel, T. Reynders, K. Leysen, M. Hoogeman, et al., Geometric accuracy of a novel gimbals based radiation therapy tumor tracking system, Radiother. Oncol. 98 (3) (2011) 365–372.
[12] B. Fallone, B. Murray, S. Rathee, T. Stanescu, S. Steciw, S. Vidakovic, E. Blosser, D. Tymofichuk, First MR images obtained during megavoltage photon irradiation from a prototype integrated linac-MR system, Med. Phys. 36 (6) (2009) 2084–2088.
[13] J.J. Lagendijk, B.W. Raaymakers, A.J. Raaijmakers, J. Overweg, K.J. Brown, E.M. Kerkhof, R.W. van der Put, B. Hårdemark, M. van Vulpen, U.A. van der Heide, MRI/Linac integration, Radiother. Oncol. 86 (1) (2008) 25–29.
[14] J. Yun, E. Yip, K. Wachowicz, S. Rathee, M. Mackenzie, D. Robinson, B. Fallone, Evaluation of a lung tumor autocontouring algorithm for intrafractional tumor tracking using low-field MRI: a phantom study, Med. Phys. 39 (3) (2012) 1481–1494.
[15] H. Shirato, Y. Seppenwoolde, K. Kitamura, R. Onimaru, S. Shimizu, Intrafractional tumor motion: lung and liver, Semin. Radiat. Oncol. 14 (1) (2004) 10–18.
[16] C. Plathow, C. Fink, S. Ley, M. Puderbach, M. Eichinger, I. Zuna, A. Schmähl, H.-U. Kauczor, Measurement of tumor diameter-dependent mobility of lung tumors by dynamic MRI, Radiother. Oncol. 73 (3) (2004) 349–354.


[17] H. Shirato, K. Suzuki, G.C. Sharp, K. Fujita, R. Onimaru, M. Fujino, N. Kato, Y. Osaka, R. Kinoshita, H. Taguchi, S. Onodera, K. Miyasaka, Speed and amplitude of lung tumor motion precisely detected in four-dimensional setup and in real-time tumor-tracking radiotherapy, Int. J. Radiat. Oncol. Biol. Phys. 64 (4) (2006) 1229–1236.
[18] Y. Zhang, M. Brady, S. Smith, Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm, IEEE Trans. Med. Imaging 20 (1) (2001) 45–57.
[19] J. Yun, E. Yip, Z. Gabos, K. Wachowicz, S. Rathee, B. Fallone, Neural-network based autocontouring algorithm for intrafractional lung-tumor tracking using linac-MR, Med. Phys. 42 (5) (2015) 2296–2310.
[20] H.-M. Chen, A. Goela, G.J. Garvin, S. Li, A parameterization of deformation fields for diffeomorphic image registration and its application to myocardial delineation, in: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2010, pp. 340–348.
[21] J. Liu, New Development of the Deformation Method, Ph.D. thesis, Department of Mathematics, The University of Texas at Arlington, 2006.
[22] K. Punithakumar, M. Noga, I. Ben Ayed, P. Boulanger, Right ventricular segmentation in cardiac MRI with moving mesh correspondences, Comput. Med. Imaging Graph. 43 (2015) 15–25.
[23] C. Zachiu, N. Papadakis, M. Ries, C. Moonen, B.D. de Senneville, An improved optical flow tracking technique for real-time MR-guided beam therapies in moving organs, Phys. Med. Biol. 60 (23) (2015) 9003.
[24] D.P. Huttenlocher, G.A. Klanderman, W.J. Rucklidge, Comparing images using the Hausdorff distance, IEEE Trans. Pattern Anal. Mach. Intell. 15 (9) (1993) 850–863.
[25] C. Zachiu, B. Denis de Senneville, C. Moonen, M. Ries, A framework for the correction of slow physiological drifts during MR-guided HIFU therapies: proof of concept, Med. Phys. 42 (7) (2015) 4137–4148.
[26] M. Wilms, I.Y. Ha, H. Handels, M.P. Heinrich, Model-based regularisation for respiratory motion estimation with sparse features in image-guided interventions, in: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 89–97.