Biomedical Signal Processing and Control 48 (2019) 61–68
A fast method for simulating ultrasound image from patient-specific CT data

Anila Satheesh B., Arun K. Thittai

Indian Institute of Technology Madras, India
Article history: Received 16 April 2018; received in revised form 4 September 2018; accepted 6 October 2018.

Keywords: Image-guided; CT; Simulation; Ultrasound; Scatter; Convolution
Abstract

Introduction and problem statement: Ultrasound imaging is one of the most preferred modalities for image-guided procedures owing to its low cost, non-ionizing nature, and real-time capability. However, its interpretation is complex, and hence considerable research has gone into developing methods that can help improve the interpretation of ultrasound images. One such solution is to use ultrasound simulation tools that can generate patient-specific ultrasound images from image data obtained from other modalities during the pre-planning stage. Such tools can help the user gain sufficient practice before the actual image-guided procedure. In this regard, ultrasound simulation from CT data has gained much interest over the past few years.

Methodology: One of the most recent ways of simulating ultrasound images from CT is to combine the ultrasound echo reflection image, the transmission intensity map, and the scatter image of the region of interest from the CT image. However, the scatter image is simulated using the Field II program, which is computationally very intensive. In this paper, we propose combining the traditional convolution method for scatter map generation with ray tracing approaches to simulate an ultrasound image. The methodology is tested on CT data from the Visible Human Project (VHP) in simulations and validated on a multi-modality tissue-mimicking phantom in vitro. Simulation is done for a curvilinear array transducer at 2.5 MHz frequency and an imaging depth of 180 mm.

Results and conclusion: The results obtained from simulation suggest that using the convolution method reduces the computation time significantly, from almost 2.4 h to about 17 s for a chosen region of interest in VHP data of dimensions 180 mm x 60 mm x 1 mm, without affecting the image quality. In the case of the in-vitro phantom, the computation time is reduced from almost 3 h to about 10 s for a chosen region of interest of dimensions 180 mm x 60 mm x 1.25 mm. Thus, the convolution method may be preferred over Field II for generating the ultrasound scatter image, since it provides an image comparable to real-time images while requiring significantly less computation time.
1. Introduction

Biomedical ultrasound imaging is regarded as the most preferred modality for image-guided procedures because of its non-ionizing nature, low cost, and real-time capability. However, interpretation of an ultrasound image is challenging and requires experience and skill. To address this problem, researchers have tried to simulate ultrasound images for training the workforce. In the case of image-guided procedures, patient-specific data from the pre-planning stage acquired with other modalities can be used to simulate the ultrasound images [1–15]. Use of such ultrasound simulation tools can ensure that radiologists are adequately trained to interpret the
ultrasound images accurately while performing the imaging on a real patient. In this regard, ultrasound image simulation from CT image data has gained much attention over the last few years [1–14]. The reported methods aim to ease the difficulties faced by practitioners in obtaining and interpreting anatomical information of the subject. Further, these methods can also be a useful tool in image-guided procedures. For example, in the case of brachytherapy, patient-specific CT data can be used to simulate ultrasound images, which can be co-registered with real-time ultrasound images obtained during applicator insertion. This helps in a better understanding of the patient anatomy, which can thereby improve the accuracy of the whole procedure [16]. Further, these ultrasound images can be used for planning the brachytherapy session, as reported in [17].
Several methodologies have been reported in the literature for simulation of ultrasound from CT images. A low-cost ultrasound simulator based on CT scan images for training radiologists is proposed by Nicolau et al. in [1], which is based on the work reported by Hostettler et al. in [2]. In this work, ultrasound is simulated from CT images by combining an ultrasound echo map, an absorption image, and a texture image obtained from the CT image. However, speckle information is not modeled in the work. Another work by Burger et al. [3] simulates an ultrasound image from CT by defining a 3D model of the image structure from the CT data according to the tissue characteristics. Once the model is defined, the ultrasound image is obtained by simulating ray propagation, beamforming, and backscattering. Simulating ultrasound from CT using a computationally efficient parallel programming platform like CUDA is reported by Reichl et al. [4]. Yet another method of ultrasound simulation from CT is proposed by Karamalis et al. in [5]. The method involves modelling ultrasound propagation in tissues by the Westervelt equation and solving it using a finite difference scheme on a graphics processing unit to simulate ultrasound images in real time.

Shams et al. [6] in 2008 reported a method which simulates an ultrasound image from CT data in two phases. In the first phase, called the 'runtime' phase, the ultrasound echo reflection map of the chosen region of interest (ROI) is obtained in accordance with the view angle. Reflections occur on surfaces with dimensions greater than the wavelength of the ultrasound signal. These surfaces can be determined from the edge map within the ROI. During the second phase, called the 'pre-processing' phase, the ultrasound scattering image of the ROI is generated from Field II [18], which is a programming tool used for simulating ultrasound transducer fields and ultrasound imaging using linear acoustics. Scattering occurs in regions with dimensions smaller than or comparable to the wavelength of the ultrasound signal and hence is independent of view angle. The final B-mode ultrasound image is produced by combining the reflection image from phase 1 and the scatter image from phase 2. As an extension to this work, Kutter et al. [7] generated an ultrasound transmission texture along with the reflection image in phase 1 and combined this with the reflection and scatter images to generate the final B-mode image. The transmission texture is generated to account for the attenuation effects. However, in both works, ultrasound scatter image generation is performed using the Field II program, which is known to be computationally very intensive. Hence, for certain image-guided procedures, for example brachytherapy, where intervening time is an important parameter, this approach may not be preferred. In brachytherapy, after the preplanning data of the patient is obtained, there must be minimum delay between placing the catheter, treatment planning, and starting the treatment [20].

Another approach for simulating an ultrasound image from CT data was reported by Dillenseger et al. [8] in 2007, which is based on the convolution model proposed by Bamber and Dickinson [19] in 1980. According to the model, the ultrasound image can be obtained as the convolution between the system point spread function (PSF) and the tissue scattering function (TSF) within the ROI. The method is computationally fast since it is performed in the frequency domain and also assumes the system PSF to be linear, separable, and spatially invariant.
The TSF is taken as a function of the compressibility of the tissues in the region of interest. However, the effects of attenuation and ultrasound reflections at the interfaces are not reflected in the obtained image. Yet another approach for ultrasound simulation from CT was proposed recently by Szostek and Piorkowski [9], which uses a ray-trace-based wavefront reconstruction method. The main focus of that paper was to implement the ultrasound refraction artifact. Another paper, by Camara et al. [12] in 2017, simulates patient-specific deformable ultrasound in real time. In this work, the input to the algorithm is a pre-operative CT image from which a deformable ultrasound image is simulated by implementing the algorithm within a GPU-accelerated NVIDIA
Fig. 1. Flowchart for ultrasound simulation from CT data.
Flex framework. Again, the various methods used for simulating ultrasound images from patient-specific data from other modalities are compared and reported by Gao et al. [14] in their 2012 paper. Salehi et al. [15] in 2015 simulated ultrasound images using a convolutional ray tracing approach with a segmented MRI scan of the subject as input.

The current paper follows the method reported in [6]. However, instead of using the Field II program for simulating the ultrasound scatter image, the convolution model [8] is used. The final B-mode image is obtained by combining the scatter image, reflection image, and transmission image. Thus, a combinational approach is put forth which utilizes the method proposed by Shams et al. and Kutter et al., but without the computational time constraints posed by Field II, owing to the use of the convolution model to generate the scatter image.

The paper is organized as follows. Section 2 explains the methodology used for simulation of ultrasound from CT as well as the performance metrics used for comparison purposes. Section 3 presents the various results obtained. Discussion regarding the obtained results and the future scope is given in Section 4. Section 5 gives the conclusions from the work.

2. Methodology

The overall methodology of the proposed work is shown in Fig. 1. CT image data of the subject from the pre-planning stage is used to generate the ultrasound echo reflection image, the intensity transmission image, and the scatter image. Combining the three images forms the final B-mode ultrasound image. Before this, the CT data is preprocessed and a region of interest is chosen for simulation.

For any given CT image data, the region of interest is chosen according to the ultrasound transducer position. In this work, a curvilinear ultrasound transducer with 2.5 MHz centre frequency and 60 mm footprint is assumed to be positioned at the centre of an axial CT slice. The depth of penetration is fixed at 180 mm for simulation. Thus, the ROI is a sector image at the centre of the slice with a radius of 180 mm. This sector image, which is in Cartesian coordinates, is remapped to a rectangular image through a Cartesian-to-polar (sector-to-rectangle) conversion, as done in [6]. This is performed for two reasons. Firstly, the convolution model assumes parallel wave propagation; sector-to-rectangle conversion ensures that the propagation is parallel while the convolution method is performed. Moreover, converting the sector to a rectangle enables the same simulation method to be used for both linear and curvilinear array transducers. Also, the pixel spacing in CT (∼0.9 mm) is larger than the ultrasound signal wavelength (∼0.6 mm), so in order to simulate an ultrasound image of appreciable quality, data samples need to be added in between (a minimal sketch of this remapping and resampling step is given after Table 1).
Table 1
Parameters used for ultrasound simulation from CT.

Parameter                   Value
Operating frequency         2.5 MHz
Bandwidth                   2–5 MHz
Transducer element pitch    0.47 mm
Number of elements          128
Speed of sound              1540 m/s
Sampling frequency          40 MHz
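As a concrete illustration of the sector-to-rectangle remapping and resampling described above, the sketch below uses Python/NumPy rather than the authors' MATLAB code; the apex position, the sector angular span, and the output grid sizes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sector_to_rectangle(ct_slice, pixel_mm=0.9, apex_rc=(0.0, 0.0),
                        depth_mm=180.0, half_angle_deg=30.0,
                        n_depth=1200, n_angles=256):
    """Remap a sector ROI of a CT slice onto a (depth x angle) rectangle.

    ct_slice       : 2-D array of CT values (rows = depth, cols = lateral).
    apex_rc        : (row, col) of the virtual transducer apex, in pixels.
    half_angle_deg : assumed sector half-angle (illustrative, not from the paper).
    n_depth, n_angles : output grid; n_depth is chosen finer than the CT
                        pixel spacing so the ~0.6 mm wavelength is resolved.
    """
    depths = np.linspace(0.0, depth_mm, n_depth) / pixel_mm          # in pixels
    angles = np.deg2rad(np.linspace(-half_angle_deg, half_angle_deg, n_angles))

    # Polar sampling grid expressed in the Cartesian CT pixel coordinates.
    rr = apex_rc[0] + depths[:, None] * np.cos(angles)[None, :]      # rows
    cc = apex_rc[1] + depths[:, None] * np.sin(angles)[None, :]      # cols

    # Bilinear interpolation (order=1) pulls CT values onto the rectangle.
    rect = map_coordinates(ct_slice, [rr, cc], order=1, mode='nearest')
    return rect   # shape (n_depth, n_angles): depth along rows, beam angle along cols
```

The same mapping, inverted, can be used at the end of the pipeline to display the simulated rectangular image back in sector form.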
Hence, the selected ROI is resampled using a simple bilinear interpolation. The different steps involved in the simulation of ultrasound images from CT data are explained under separate sub-sections below.
2.1. CT data preprocessing

For generating the ultrasound reflection image of the region of interest, edge information is necessary. Several methods have been reported in the literature for edge detection in images, among which the Canny method is found to perform better than other existing methods [21]. In this work, therefore, Canny edge detection is performed to extract the edge information. Moreover, directly using the CT image for scatter image generation is not preferred, since soft tissue delineation in CT is poor. Therefore, in order to improve contrast in the image, a simple contrast stretching is performed, which rescales the input image values using its 5th and 95th percentiles as the lower and upper limits. This makes sure that the image pixel values are not affected by outliers. A minimal sketch of this preprocessing step is given below.
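A possible Python rendering of this preprocessing step is sketched below (the authors' implementation is in MATLAB); the Canny smoothing parameter sigma is an illustrative assumption, not a value reported in the paper.

```python
import numpy as np
from skimage import feature

def preprocess_ct(roi):
    """Contrast-stretch a CT ROI and extract its edge map, as described above."""
    roi = np.asarray(roi, dtype=float)

    # Clip the dynamic range to the 5th-95th percentile so that outliers
    # (isolated very bright or very dark pixels) do not dominate the scaling.
    p5, p95 = np.percentile(roi, (5, 95))
    stretched = np.clip((roi - p5) / (p95 - p5 + 1e-12), 0.0, 1.0)

    # Canny edge detection on the stretched image gives the interface map
    # used later for the reflection and transmission images.
    edges = feature.canny(stretched, sigma=2.0)   # sigma is illustrative
    return stretched, edges
```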
2.2. Ultrasound reflection and transmission image generation

Edge information obtained using Canny edge detection is used for generating the ultrasound reflection and transmission images for a curvilinear array transducer. The transducer specifications used for simulation are listed in Table 1. Specular reflection of ultrasound signals occurs at the interface between two media when their dimensions are larger than the wavelength of the incoming ultrasound signal. The location of these interfaces can be obtained from the edge map. The strength of the reflected signal depends on the acoustic impedance mismatch between the media as well as on the angle of insonification. The intensity reflection coefficient (α_R) gives the relative amount of reflected signal and is given by the following formula [22]:

\alpha_R = \left( \frac{Z_2 \cos\theta_i - Z_1 \cos\theta_t}{Z_2 \cos\theta_i + Z_1 \cos\theta_t} \right)^2    (1)
where θ_i is the incident angle, θ_t is the transmitted angle, and Z_1 and Z_2 are the acoustic impedances of media 1 and 2, respectively. The CT Hounsfield values of tissues are reported to be approximately proportional to their acoustic impedance values [10] and hence can be used in Eq. (1). However, air and bone do not satisfy this proportionality, and therefore the α_R values for air–tissue and bone–tissue interfaces are fixed at 0.99 and 0.4, respectively [23]. The transmission coefficient can be written in terms of the reflection coefficient as α_T = 1 − α_R. The ultrasound reflected intensity at any point (x, y) can then be written as:

I_R(x, y) = I_a(x, y) \cdot \alpha_R(x, y) \cdot \cos\theta    (2)

where I_a(x, y) is the incident intensity at the point (x, y) and θ is the angle between the normal to the surface and the direction of propagation of the ultrasound wave. The transmitted intensity at that point will then be the difference between the incident and reflected intensities. The incident intensity I_a at the point (x, y) can be written as [7]:

I_a(x, y) = I_0 \, 10^{-\mu d f / 20}    (3)

where I_0 is the initial incident intensity, μ is the attenuation coefficient in dB cm^{-1} MHz^{-1}, d is the depth in cm, and f is the frequency in MHz.

Typically, an ultrasound transducer consists of a number of elements. A group of elements forms the active aperture that transmits the ultrasound signal and receives the reflected echoes. The active aperture is then electronically shifted across the entire transducer aperture to cover the region of interest. Thus, reflections from a point target are partially received by all elements of the active aperture. The reflections are integrated over the active aperture after applying a Hamming apodization window function, as is routinely done [24]. Mathematically, this can be written for a linear array as:

I_{R1}(x, y) = \int_{u = x - l}^{x + l - 1} I_R(x, y) \, 10^{-\mu v f / 20} \, w(u) \, du    (4)

where l is half the number of active elements, v is the path length between the point (x, y) and the active aperture element u, and w is a Hamming window.
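To make the sequence of Eqs. (1)–(4) and the transmission texture described in the next paragraph concrete, the following is a heavily simplified Python sketch, not the authors' MATLAB implementation. It assumes normal incidence (cos θ = 1), folds the return-path attenuation of Eq. (4) into the reflected intensity, and uses illustrative values for the depth step and aperture size; the alpha_r and mu_map inputs are assumed to have been built beforehand from the edge map, Eq. (1), and the Hounsfield values.

```python
import numpy as np

def reflection_transmission(alpha_r, mu_map, dz_cm=0.075, freq_mhz=2.5,
                            n_active=32):
    """Simplified sketch of Eqs. (1)-(4) on the rectangular ROI grid.

    alpha_r : 2-D map of intensity reflection coefficients (depth x lateral),
              zero away from interfaces.
    mu_map  : attenuation coefficients in dB cm^-1 MHz^-1 per pixel.
    dz_cm, n_active : illustrative depth step and active aperture size.
    """
    n_depth, n_lat = alpha_r.shape
    i_inc = np.zeros((n_depth, n_lat))     # incident intensity I_a
    i_ref = np.zeros((n_depth, n_lat))     # reflected intensity I_R
    intensity = np.ones(n_lat)             # I_0 = 1 at the transducer face

    for z in range(n_depth):
        # Attenuate the beam over one depth step (Eq. (3) applied incrementally).
        intensity = intensity * 10.0 ** (-mu_map[z] * dz_cm * freq_mhz / 20.0)
        i_inc[z] = intensity
        # Reflection at interfaces (Eq. (2) with cos(theta) = 1 assumed).
        i_ref[z] = intensity * alpha_r[z]
        # The remainder is transmitted to the next depth (shadowing accumulates).
        intensity = intensity - i_ref[z]

    # Eq. (4), discretised: sum reflections over the active aperture with a
    # Hamming apodisation; the 10^(-mu v f / 20) return-path term is folded
    # into i_ref here for brevity.
    w = np.hamming(n_active)
    w = w / w.sum()
    i_r1 = np.apply_along_axis(lambda row: np.convolve(row, w, mode='same'),
                               axis=1, arr=i_ref)

    i_trans = i_inc - i_ref                # transmission texture (Section 2.2)
    return i_r1, i_trans
```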
The transmission texture in the image at any point along the depth can be calculated as the difference between the incident ultrasound signal intensity and the reflected intensity at that point. The value of the reflected intensity will be high when an interface is present and small otherwise. Also, the incident signal intensity at any point depends on the attenuation undergone by the ultrasound signal up to that point in the tissue. Thus, the attenuation of the ultrasound signal along the depth, together with the shadowing effect due to air–tissue and bone–tissue interfaces, forms the transmission texture image for the selected ROI.

2.3. Ultrasound scatter image generation

To generate the ultrasound scatter image, the convolution method proposed in [19] is used. The model assumes the ultrasound system to have a linear, separable and space-invariant point spread function (PSF). In this paper, the PSF is a cosine-modulated Gaussian function given by:
PSF(x, y, z) = \exp\left( -\left( \frac{x^2}{2\sigma_x^2} + \frac{y^2}{2\sigma_y^2} + \frac{z^2}{2\sigma_z^2} \right) \right) \cos(2\pi f y)    (5)
where σ_x, σ_y, and σ_z represent the pulse width, pulse length, and thickness, respectively [8]. The tissue scatter function (TSF) can be written as a function of the compressibility values of the tissues involved [19]. In order to determine the compressibility values, the various regions separated by the edge-map interfaces are labelled as air, fat, blood, tissue, and bone depending on their CT Hounsfield values. The TSF can then be written as:

TSF(x, y, z) = \nabla^2 \beta(x, y, z)    (6)
where ∇² represents the Laplacian operator and β is the compressibility value of the tissue. In order to capture the speckle information, we introduce 10 scatterers within a resolution cell, which satisfies the condition for fully developed speckle [25]. The proposed methodology is not affected by the amount of speckle; however, the appearance of speckle may make the simulated US image resemble a realistic image more closely. The speckle spread is assumed to be Gaussian with the compressibility value of the tissue as its mean. The final scatter image is then obtained by convolving the PSF and TSF as:

I_S = PSF ⊗ TSF    (7)
The process is implemented in the frequency domain, where convolution becomes multiplication. The resultant data is then envelope detected using the Hilbert transform and log compressed at 70 dB to form a scatter image fit for visualization. A minimal sketch of this scatter-image step is given below.
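The sketch below illustrates this scatter-image pipeline (PSF of Eq. (5), TSF of Eq. (6), frequency-domain convolution of Eq. (7), Hilbert-transform envelope detection, and 70 dB log compression) in Python rather than the authors' MATLAB code. A 2-D slice is used instead of the full 3-D formulation, and the PSF widths and grid spacing are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace
from scipy.signal import fftconvolve, hilbert

def scatter_image(beta, dx_mm=0.3, dy_mm=0.3, f_mhz=2.5, c_mps=1540.0,
                  sigma_x_mm=0.6, sigma_y_mm=0.9, dyn_range_db=70.0):
    """2-D sketch of the convolution-model scatter image (Eqs. (5)-(7)).

    beta : 2-D compressibility map (lateral x along axis 0, axial y along
           axis 1) with speckle scatterers already drawn around each
           tissue's mean compressibility value.
    sigma_x_mm, sigma_y_mm : assumed PSF pulse width/length (illustrative).
    """
    wavelength_mm = c_mps / (f_mhz * 1e6) * 1e3      # ~0.6 mm at 2.5 MHz

    # PSF: cosine-modulated Gaussian, Eq. (5) restricted to 2-D.
    x = np.arange(-3 * sigma_x_mm, 3 * sigma_x_mm + dx_mm, dx_mm)
    y = np.arange(-3 * sigma_y_mm, 3 * sigma_y_mm + dy_mm, dy_mm)
    X, Y = np.meshgrid(x, y, indexing='ij')
    psf = (np.exp(-(X**2 / (2 * sigma_x_mm**2) + Y**2 / (2 * sigma_y_mm**2)))
           * np.cos(2 * np.pi * Y / wavelength_mm))   # spatial frequency 1/lambda

    # TSF: Laplacian of the compressibility map, Eq. (6).
    tsf = laplace(beta)

    # Convolution performed via FFT, Eq. (7).
    rf = fftconvolve(tsf, psf, mode='same')

    # Envelope detection along the axial (propagation) direction.
    env = np.abs(hilbert(rf, axis=1))

    # Log compression to the display dynamic range.
    env = env / (env.max() + 1e-12)
    img_db = 20.0 * np.log10(env + 1e-12)
    return np.clip(img_db, -dyn_range_db, 0.0)
```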
2.4. Ultrasound image formation

The generated reflection, transmission, and scatter images are then blended together to form the final ultrasound image as shown below [7]:

I_US = (G_1 ⊗ I_R + α G_2 ⊗ I_T) · I_S    (8)
where G_1 and G_2 are Gaussian filters with zero mean and adjustable standard deviations (σ_1 and σ_2), α is the blending parameter, and I_S is the scatter image obtained using Eq. (7). The values of σ_1 and σ_2 are chosen so as to smooth the output of the image fusion process. Since the reflection image represents the edge information, we require its smoothing to be lower than the transmission texture smoothing. Also, the value of α is chosen so as to make the transmission texture in the image noticeable. Nevertheless, the values are defined such that information from the reflection, transmission, and scatter images is available in the fused image. Since the parameter values are mentioned to be user-defined and are not given in [7], for the images displayed in this paper we fixed the values at 2, 4, and 2 for σ_1, σ_2, and α, respectively.

2.5. Image database used

The simulation methodology explained above is implemented on two different datasets. The first set of CT images used for simulation is taken from the Visible Human Project (VHP) [26] database, which contains images of a female cadaver in 'png' format. The axial slices are 1 mm thick and 1 mm apart and are stored as 16-bit grayscale images. The scatter image is generated using the convolution model as well as Field II for comparison purposes, and the results obtained from both methods are compared against each other. However, the results obtained could not be validated, since a ground truth ultrasound image is not available for this database. Hence, the simulation is also implemented on a CIRS triple-modality 3D abdominal phantom (Model No. 057A) of dimensions 26 cm x 12.5 cm x 19 cm. The phantom is representative of an adult abdomen and can be imaged using CT, ultrasound, and MRI [27]. The internal structures include liver, partial lung, portal vein, two partial kidneys, abdominal aorta, vena cava, simulated spine, and six ribs. A total of eight lesions are also present inside. For our purposes, CT images of the phantom were taken 1 mm apart, each having a thickness of 1.25 mm. Ultrasound images of the phantom were acquired using a SONIX TOUCH Q+® scanner (Ultrasonix, Analogic Corporation, Peabody, MA, USA). This acquired ultrasound image is then taken as the ground truth, and the ultrasound images simulated from CT data using scatter images from Field II and the convolution model are compared against it. For the purposes of simulation, the transducer array is assumed to be at the centre of an axial CT slice. A curvilinear transducer C25 with the specifications listed in Table 1 is used for the purpose. The depth of penetration of the ultrasound signal in both datasets is fixed at 180 mm.

2.6. Performance metrics

For both datasets, the process of ultrasound simulation from CT is performed on 25 different axial CT slices, considering the transducer to be at the centre of the slice at all times. All the simulation programs are written in MATLAB 2016a and executed on an Intel Core i7 3.6 GHz 64-bit system with 16 GB RAM. Computation time and normalized RMSE are the metrics used to quantitatively compare the images. Normalized RMSE is a dimensionless
Fig. 2. Images showing the a) input CT slice from VHP data that is used for ultrasound simulation. The transducer is assumed to be at the centre of the slice. The sector region of interest is marked with the black solid line. b) Rectangular image corresponding to the sector image shown in (a).
metric typically used to compare an image with a reference image and can be obtained using Eq. (9) given below:

NRMSE = \frac{\sqrt{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left( I_{in}(m, n) - I_{ref}(m, n) \right)^2}}{I_{max} - I_{min}}    (9)

where I_in and I_ref are the input and reference images, respectively, of dimensions M x N, and I_max and I_min are the maximum and minimum pixel values in the images, which are set to 1 and 0 in this paper. A minimal sketch of this metric is given below.
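For completeness, a minimal implementation of this metric, under the paper's convention of I_max = 1 and I_min = 0, could look as follows (the function name is ours):

```python
import numpy as np

def normalized_rmse(img, ref, i_max=1.0, i_min=0.0):
    """Normalized RMSE between an image and a reference image, per Eq. (9)."""
    diff = np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)
    rmse = np.sqrt(np.mean(diff ** 2))
    return rmse / (i_max - i_min)
```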
However, no corresponding experimental ultrasound image is available to serve as the ground truth reference for the simulation of ultrasound images from the VHP CT data. Hence, only the comparison between the scatter images obtained from Field II and the convolution method could be done. For the CIRS phantom, on the other hand, since the ground truth ultrasound image was obtained experimentally, the simulated ultrasound images are compared with the ground truth image. In addition, a comparison between the scatter images from Field II and the convolution model is also done.

3. Results

3.1. Ultrasound image simulation results for VHP CT data

Fig. 2a shows an axial CT slice image and the sector-shaped ROI chosen for simulating the corresponding ultrasound image. This sector image in Cartesian coordinates is then converted to a rectangular image (Fig. 2b) and is resampled for the reasons explained in the methodology section (Section 2). This image is then given as the input to the CT preprocessing phase, as explained in Section 2.1. As can be seen from the figures, the anatomy of the image consists of a subcutaneous fat layer (top layer) and muscles (slightly brighter than fat). The region also has several air pockets (dark regions). The image has pixel values ranging from -1024 to 200.

The ultrasound reflection image obtained from the run-time phase is shown in Fig. 3a. It can be noticed that the edges in the CT image appear in the reflection image, but with different weightings for each edge depending on the ultrasound signal reflection at that edge. Edges corresponding to air–tissue interfaces reflect 99% of the input ultrasound beam and hence appear brighter than the others. Also, tissue–tissue interface reflections are very small (∼0.01%), so they are barely visible. Another image obtained from the run-time phase is the transmission image (Fig. 3b), which shows how the beam is transmitted along the depth direction. Dark lines show that very little or no signal is transmitted along the depth in that region. Hence, the dark region in the transmission image is expected, since the same region shows an air–tissue interface in the reflection image. The transmission image also shows the signal attenuation along the depth.
Fig. 5. Comparison between the sector ultrasound images obtained from scatter images generated through (a) Field II, and (b) the convolution method.
Fig. 3. Images of a) ultrasound echo reflection image computed from the input in Fig. 2b, and b) ultrasound intensity transmission image.

Table 2
Quantitative comparison between scatter images generated from the convolution model and Field II (simulation).

                              Field II model      Convolution model
Computation time              141.3 ± 2.8 min     16.75 ± 1.28 s
Normalized RMSE (between the two scatter images)  0.142 ± 0.025
Fig. 6. Images showing the a) input CT slice from the CIRS phantom that is used for US simulation. The transducer is assumed to be at the centre of the slice. The sector region of interest is marked with a black solid line. b) Rectangular image corresponding to the sector image shown in (a).

Fig. 4. Comparison of scatter images generated from a) Field II, and b) the convolution model. Simulation parameters are kept the same in both cases.
Fig. 4 compares the scatter images obtained from Field II and from the convolution model. The parameters used for the Field II simulation are the same as in Table 1. One can see that there is no appreciable visible difference between the two images, although the scatter image from Field II appears slightly blurred as we move along the depth direction. Speckle information can be seen well in both images. For a more quantitative comparison, computation time and normalized RMSE are used. The scatter image generation in Field II took nearly 2.4 h for the given region of interest, whereas the convolution method yielded the image in approximately 17 s. Fig. 5 compares the final ultrasound images in sector form, obtained using scatter images from Field II and the convolution model. The various interfaces are visible due to the reflection image, attenuation effects as well as acoustic shadowing beneath the edges are contributed by the transmission image, and speckle information is obtained from the scatter image. All the ultrasound images are displayed at a dynamic range of 70 dB. The mean computation time and mean normalized RMSE values are then computed by repeating the simulation for 25 different axial CT slices. The values obtained are listed in Table 2. It can be seen that the computation time required for the convolution model is several orders of magnitude lower than that required for Field II. Further, the mean normalized RMSE value between the images is also found
to be small, indicating that the US images obtained using the two different approaches are quite similar.

3.2. Ultrasound image simulation results for CIRS phantom data

Fig. 6a shows an axial CT slice from the database. The chosen sector ROI within the CT slice is marked with a black solid line. The sector image in Cartesian coordinates is then converted to a rectangular image (Fig. 6b) and resampled for further processing. The anatomy of the region of interest contains a top fat layer (dark), a muscle layer (slightly brighter than fat), the liver with two lesions (dark round circles) and the hepatic vein, the vertebra (bright region), the abdominal aorta and vena cava (two round regions above the vertebra), and surrounding soft tissue (dark region) with a portion of kidney in it. The image has pixel values ranging from -1024 to 250.

The ultrasound reflection image from the run-time phase is shown in Fig. 7a. One can observe that the bone–tissue interface has the highest weighting, since around 40% of the incoming signal is reflected here. The rest of the edges are basically tissue–tissue or blood–tissue interfaces, where reflection is minimal. The transmission image obtained during this phase is shown in Fig. 7b. As expected, the dark region is around the vertebral region, since ultrasound reflection is higher in this region. Ultrasound attenuation is also large in bone, and hence the region below the bone–tissue interface is darker.
Fig. 7. Images of a) ultrasound echo reflection image computed from the input in Fig. 6b, and b) ultrasound intensity transmission image.

Fig. 8. Comparison of scatter images generated from a) Field II, and b) the convolution model. Simulation parameters are kept the same in both cases.
Fig. 8 compares the scatter images of the input CT image obtained from Field II and from the convolution model. The visible difference between the two images is minimal, although one can notice that the scatter image generated by Field II has a blur which worsens with depth. The images are compared quantitatively using computation time and normalized RMSE as metrics for 25 different axial CT slices (Table 3), similar to what was done and reported for the VHP database in the previous section. However, since the experimentally acquired ultrasound image of the same ROI is available from ultrasound scanning of the CIRS phantom, a comparison is also made between the ultrasound images simulated from CT images and the ground truth ultrasound images (Fig. 9). It can be observed from Fig. 9 that the ground truth ultrasound
Table 3
Comparison between scatter images generated from the convolution model and Field II (CIRS phantom).

                              Field II model      Convolution model
Computation time              177.4 ± 5.4 min     9.2 ± 0.22 s
Normalized RMSE (between the two scatter images)  0.18 ± 0.033
image and the ultrasound images simulated from the CT image, using scatter images obtained from Field II and the convolution model, appear very similar. The mean normalized RMSE value between the above-mentioned images is also calculated and is
Fig. 9. Comparison between the ground truth ultrasound image (a), the ultrasound image from the scatter image obtained through Field II (b), and the ultrasound image from the scatter image obtained through the convolution method (c).
Fig. 10. Quantitative comparison between the ground truth ultrasound image and the ultrasound images from scatter images obtained through Field II as well as through the convolution method. Normalized RMSE is the metric used.
depicted in Fig. 10. Again, the results obtained show that the actual ultrasound image and the simulated ultrasound images are quite similar. All the images are displayed at a 70 dB dynamic range.

4. Discussion

In this work, we propose a novel combinational approach that uses the convolution method, instead of Field II, for simulating the ultrasound scatter image from a given input CT image and blends it with the ultrasound reflection image and transmission image to obtain the final B-mode ultrasound image. From the results obtained by implementing this methodology on two different datasets, it can be seen that the convolution method may be preferred over Field II for generating the scatter image, since it provides an image comparable to real-time images while requiring significantly less computation time. This is because the convolution model assumes the system PSF to be linear, separable, and space invariant, and hence the PSF needs to be defined only once for a system. In Field II, on the other hand, the PSF is space variant, i.e., the PSF needs to be redefined for every depth, thus increasing the computation time. Although the assumption in the convolution model may seem unrealistic, new scanners are coming up with techniques that maintain the beamwidth throughout the depth during transmit [28]. Hence, the convolution model is not as restrictive as it was considered a few years back.

Moreover, the other methodologies mentioned in the introduction that simulate ultrasound from CT images differ from our work both in approach and in objective. The objective of most existing methodologies is to simulate ultrasound from CT in order to train radiologists to interpret the data. While this can be one application of this work, the intent here is to simulate ultrasound images from pre-planning stage data for use in image-guided procedures. Additionally, the objective is to simulate ultrasound images that are as realistic as possible (including speckle information) in a very computationally efficient manner. This would prove to be beneficial in applications such as brachytherapy catheter placement and planning, where intervening time is an important factor to be taken care of. Once the applicator is placed inside the patient's body, treatment has to begin immediately, since there is always a chance of applicator movement relative to the target, as well as to organs at risk, which is undesirable [20]. In such cases, the convolution model is better suited for ultrasound simulation than the Field II model, as the difference between the two images is also found to be small. This is experimentally verified by comparing the ground truth ultrasound image of a multimodal phantom with the simulated ultrasound images.

Note that the transducer used for simulation in this work is a curvilinear array transducer, since the region of interest is the abdominal area. However, the algorithm works equally well for regions which are imaged using linear transducers. Again, the ultrasound image simulations performed here are for ROIs in axial CT slices. However, the approach described in this paper can be expanded to simulate ultrasound images for any arbitrary transducer position and orientation from the given CT volume. It can be anticipated that the significance of the gain in computational efficiency achieved using the proposed approach will be even better appreciated in such applications, where there is a need to simulate the ultrasound volume corresponding to the whole patient-specific CT volume within a short period of time.

5. Conclusions
A new framework to simulate ultrasound images from CT is introduced, in which the convolution model is used in place of Field II for generating the scatter image, which is then combined with the reflection and transmission images to form the B-mode ultrasound image. The proposed methodology is demonstrated on two datasets, namely the Visible Human Project CT data and a commercial tissue-mimicking phantom. From the results, it is clear that the convolution model approach to simulating the ultrasound scatter image is preferable to the Field II model with respect to its capability of yielding images of comparable quality at reduced computation time. This can be specifically advantageous for certain image-guided procedures like brachytherapy, where patient-specific CT data from the preplanning phase can be used to aid the US-guided procedure.

Funding

This work is supported by a seed grant from Indian Institute of Technology Madras. The CIRS phantom was purchased with support from research project grant No. BT/PR14677/MED/32/406/2015 from the Department of Biotechnology, India.

Acknowledgements

The authors are indebted to Dr. Paul Ravindran, Dr. Thomas Samuel, and Dr. Jeba Karunya, CMC Vellore, for their valuable suggestions and for providing help to acquire the CT data of the CIRS phantom. The authors are also thankful to the National Library of Medicine (NLM) for providing access to the VHP database. The authors would also like to thank the P.G. Senapathy Computing Centre, IIT Madras, for granting access to HPC cluster computing, which was a great help during the preliminary studies of the work.

References

[1] S. Nicolau, A. Vemuri, H.S. Wu, M.H. Huang, Y. Ho, A. Charnoz, A. Hostettler, L. Soler, J. Marescaux, A low cost simulator to practice ultrasound image interpretation and probe manipulation: design and first evaluation, ISVRI 2011 - IEEE Int. Symp. Virtual Real. Innov. 2011, Proc. (2011) 37–40, http://dx.doi.org/10.1109/ISVRI.2011.5759598.
[2] A. Hostettler, C. Forest, A. Forgione, L. Soler, J. Marescaux, Real-time ultrasonography simulator based on 3D CT-scan images, Stud. Health Technol. Inform. 111 (2005) 191–193.
[3] B. Burger, C. Abkai, Simulation of dynamic ultrasound based on CT models for medical education, Stud. Health Technol. Inform. 132 (2008) 56–61.
[4] T. Reichl, J. Passenger, O. Acosta, O. Salvado, Ultrasound goes GPU: real-time simulation using CUDA, Proc. SPIE (2009), http://dx.doi.org/10.1117/12.812486.
[5] A. Karamalis, W. Wein, N. Navab, Fast ultrasound image simulation using the Westervelt equation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Berlin, Heidelberg, 2010, pp. 243–250.
[6] R. Shams, R. Hartley, N. Navab, Real-time simulation of medical ultrasound from CT images, Med. Image Comput. Comput. Assist. Interv. 11 (2008) 734–741, 18982670.
[7] O. Kutter, R. Shams, N. Navab, Visualization and GPU-accelerated simulation of medical ultrasound from CT images, Comput. Methods Programs Biomed. 94 (2009) 250–266, http://dx.doi.org/10.1016/j.cmpb.2008.12.011.
[8] J.L. Dillenseger, S. Laguitton, É. Delabrousse, Fast simulation of ultrasound images from a CT volume, Comput. Biol. Med. 39 (2009) 180–186, http://dx.doi.org/10.1016/j.compbiomed.2008.12.009.
[9] K. Szostek, A. Piórkowski, Real-time simulation of ultrasound refraction phenomena using ray-trace based wavefront construction method, Comput. Methods Programs Biomed. 135 (2016) 187–197, http://dx.doi.org/10.1016/j.cmpb.2016.07.034.
[10] W. Wein, A. Khamene, D.-A. Clevert, O. Kutter, N. Navab, Simulation and fully automatic multimodal registration of medical ultrasound, Med. Image Comput. Comput. Assist. Interv. 10 (2007) 136–143, http://dx.doi.org/10.1007/978-3-540-75757-3.
[11] H. Gao, H.F. Choi, P. Claus, S. Boonen, S. Jaecques, G.H. Van Lenthe, G. Van Der Perre, W. Lauriks, J. D'Hooge, A fast convolution-based methodology to simulate 2-D/3-D cardiac ultrasound images, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 56 (2009) 404–409, http://dx.doi.org/10.1109/TUFFC.2009.1051.
[12] M. Camara, E. Mayer, A. Darzi, P. Pratt, Simulation of patient-specific deformable ultrasound imaging in real time, in: Imaging for Patient-Customized Simulations and Systems for Point-of-Care Ultrasound, Springer International Publishing, Cham, 2017, http://dx.doi.org/10.1007/978-3-319-67552-7.
[13] O. Goksel, S.E. Salcudean, B-mode ultrasound image simulation in deformable 3-D medium, IEEE Trans. Med. Imaging 28 (2009) 1657–1669, http://dx.doi.org/10.1109/TMI.2009.2016561.
[14] H. Gao, T. Hergum, H. Torp, J. D'Hooge, Comparison of the performance of different tools for fast simulation of ultrasound data, Ultrasonics 52 (2012) 573–577, http://dx.doi.org/10.1016/j.ultras.2012.01.009.
[15] M. Salehi, S.A. Ahmadi, R. Prevost, N. Navab, W. Wein, Patient-specific 3D ultrasound simulation based on convolutional ray-tracing and appearance optimization, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 2015, pp. 510–518.
[16] J. Rotmensch, S.E. Waggoner, C. Quiet, Ultrasound guidance for placement of difficult intracavitary implants, Gynecol. Oncol. 54 (1994) 159–162, http://dx.doi.org/10.1006/gyno.1994.1186.
[17] S. Van Dyk, K. Narayan, R. Fisher, D. Bernshaw, Conformal brachytherapy planning for cervical cancer using transabdominal ultrasound, Int. J. Radiat. Oncol. Biol. Phys. 75 (2009) 64–70, http://dx.doi.org/10.1016/j.ijrobp.2008.10.057.
[18] J.A. Jensen, Field: a program for simulating ultrasound systems, in: 10th Nordic Conf. Biomed. Img. 34 (1996) 351–353.
[19] J.C. Bamber, R.J. Dickinson, Ultrasonic B-scanning: a computer simulation, Phys. Med. Biol. 25 (1980) 463–479, http://dx.doi.org/10.1088/0031-9155/25/3/006.
[20] C. Kirisits, M.J. Rivard, D. Baltas, F. Ballester, M. De Brabandere, R. Van Der Laarse, Y. Niatsetski, P. Papagiannis, T.P. Hellebust, J. Perez-Calatayud, K. Tanderup, J.L.M. Venselaar, F.A. Siebert, Review of clinical brachytherapy uncertainties: analysis guidelines of GEC-ESTRO and the AAPM, Radiother. Oncol. 110 (2014) 199–212, http://dx.doi.org/10.1016/j.radonc.2013.11.002.
[21] R. Maini, H. Aggarwal, Study and comparison of various image edge detection techniques, Int. J. Image Process. 3 (2009) 1–11.
[22] T.L. Szabo, Diagnostic Ultrasound Imaging: Inside Out, 2004, http://dx.doi.org/10.1016/B978-0-12-396487-8.00001-X.
[23] W. Hedrick, D. Hykes, D. Starchman, Ultrasound Physics and Instrumentation, 3rd edn., Mosby-Year Book, Inc., 1995, http://dx.doi.org/10.1118/1.597501.
[24] F.W. Kremkau, Multiple-element transducers, RadioGraphics (1993) 1163–1176, http://dx.doi.org/10.1148/radiographics.13.5.8210599.
[25] J.A. Jensen, S. Nikolov, Fast simulation of ultrasound images, in: Proceedings of the IEEE Ultrasonics Symposium, Puerto Rico, October 2000.
[26] M.J. Ackerman, The Visible Human Project, Proc. IEEE 86 (1998) 504–511, http://dx.doi.org/10.1109/5.662875 [dataset].
[27] Triple Modality 3D Abdominal Phantom, Model 057A, Computerized Imaging Reference Systems, Inc., 2013. Available at: http://www.cirsinc.com/products/all/65/triple-modality-3d-abdominal-phantom/.
[28] K. Thiele, J. Jago, R. Entrekin, R. Peterson, Exploring nSIGHT Imaging – a Totally New Architecture for Premium Ultrasound, White Paper, Koninklijke Philips NV, 2013.