Computers in Biology and Medicine 37 (2007) 1591 – 1599 www.intl.elsevierhealth.com/journals/cobm
Boundary delineation in transrectal ultrasound image for prostate cancer

Ying Zhang a, Ravi Sankar a,∗, Wei Qian b

a Department of Electrical Engineering, University of South Florida, Tampa, FL 33620, USA
b Department of Interdisciplinary Oncology, H. Lee Moffitt Cancer Center & Research Institute at University of South Florida, Tampa, FL 33620, USA
Received 25 October 2005; received in revised form 10 November 2006; accepted 23 February 2007
Abstract

This paper presents a new automatic edge delineation model for the detection and diagnosis of prostate cancer on transrectal ultrasound (TRUS) images. The proposed model improves prostate boundary detection by combining a set of preprocessing algorithms: the tree-structured nonlinear filter (TSF), directional wavelet transforms (DWT) and the tree-structured wavelet transform (TSWT). The model consists of a preprocessing module and a segmentation module. The preprocessing module performs noise suppression, image smoothing and boundary enhancement; the active contour model is used in the segmentation module for prostate boundary detection in two-dimensional (2D) TRUS images. Experimental results show that adding the preprocessing module improves the accuracy and sensitivity of the segmentation module compared with running the segmentation module alone. The proposed automatic boundary detection module for TRUS images is a promising approach that provides an efficient and robust detection and diagnosis strategy and acts as a "second opinion" for the physician's interpretation of prostate cancer. © 2007 Elsevier Ltd. All rights reserved.

Keywords: Transrectal ultrasound; Tree-structured nonlinear filter; Directional wavelet transforms; Tree-structured wavelet transform; Active contour model
1. Introduction

Prostate cancer is the most common type of cancer found in American men, other than skin cancer. The American Cancer Society estimated about 232,090 new cases of prostate cancer in the United States in 2005, and about 30,350 deaths from the disease (American Cancer Society, http://www.cancer.org). Early detection of cancer will undoubtedly improve the survival rate tremendously, but there are still many unanswered questions about finding prostate cancer early. The PSA (prostate specific antigen) blood test and the digital rectal exam (DRE) alone offer only a limited chance of clinically assessing prostate cancers found early. Transrectal ultrasound (TRUS) scanning of the prostate is therefore commonly used as a routine method for prostate cancer detection and diagnosis [1]. TRUS imaging was introduced in 1971 by Watanabe et al. [2]. It is used clinically for guiding the biopsy, brachytherapy,
∗ Corresponding author. Tel.: +1 813 974 4769; fax: +1 813 974 5250. E-mail address: [email protected] (R. Sankar).
0010-4825/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.compbiomed.2007.02.008
prostate gland volume studies, and cancer treatment planning and monitoring, owing to several favorable properties: it is inexpensive and easy to use; it is not inferior to MRI or CT in diagnostic value; and it can follow anatomical deformations in real time during biopsy and treatment [3]. Boundaries and volumes obtained from TRUS images play a key role in clinical decisions, since accurate boundary delineation is essential to guarantee preservation of organ function while controlling organ-confined cancer [4]. TRUS imaging has seen several engineering advances over the past 30 years, which have improved qualitative visual inspection of the image in real time by a trained physician [5]. In spite of recent advances in ultrasonic imaging, manual boundary delineation on TRUS images by physicians is still a challenging task owing to the images' poor contrast, missing boundaries, low SNR, speckle noise and refraction artifacts. Moreover, manual identification of prostate edges is tedious and sensitive to observer bias and experience. Previous studies have demonstrated that visual interpretation of gray-scale images is not highly accurate in identifying these internal architectural changes [6]. This limitation is partially due to the fact that the human eye is capable of distinguishing
only about 30 gray tones between black and white [5]. Thus, automated boundary delineation that removes the physical weaknesses and subjectivity of observer interpretation of ultrasound images is essential for the early detection and treatment of prostate cancer. Research has therefore been initiated into automatic algorithms, with minimal manual involvement, that can segment the prostate boundaries from TRUS images accurately and effectively. In this paper, the proposed segmentation algorithm is implemented on 2D TRUS images. It comprises two modules: a preprocessing module, which includes the tree-structured nonlinear filter (TSF), directional wavelet transform (DWT) and tree-structured wavelet transform (TSWT), and a segmentation module, in which an active contour model is used for prostate boundary delineation. Because of the inherent noise and poor contrast of ultrasound images, the preprocessing module presented in this study is important for accurate delineation of the prostate boundary on TRUS images [7]: it reduces false edges and enables efficient detection of the prostate boundary. A comparison between images segmented with and without the preprocessing module shows that preprocessing greatly improves the accuracy of boundary detection. The organization of this paper is as follows: the 2D segmentation method is presented in Section 2; the algorithm implementation and evaluation results are described in Section 3; and the conclusions are given in Section 4.
2. Computer-aided diagnosis algorithms

A CAD system for prostate boundary segmentation on TRUS images is presented in this paper; it includes a preprocessing module for noise reduction and image enhancement, and a segmentation module for edge delineation. Algorithms for prostate boundary delineation in 2D TRUS images can be classified into edge-based, texture-based and model-based methods [8]. The edge-based method uses edge detectors to identify all the edges in the image, followed by edge selection and linking to outline the prostate boundary. This method is straightforward and works well when the boundary is clearly defined [9]; however, it produces spurious boundaries in highly textured areas and misses boundaries where the prostate boundary is not well delineated [6]. The texture-based method is essentially a pixel classifier, and the effect of using texture information alone is marginal [10]. Richard et al. suggested that combining the edge information produced by the edge-based approach with the segmentation information produced by a neural-network pixel classifier promises a better, more fully automated segmentation algorithm [10]. Model-based segmentation is another frequently adopted technique; most recent research has shown that model-based methods are more efficient and powerful in delineating object boundaries [8]. Some prior knowledge, such as anatomic information, physical characteristics of the object and radiological features of the imaging [11], is integrated into this algorithm. Methods based on deformable contour models are physically motivated, model-based techniques for delineating object boundaries using closed curves or surfaces that deform under the influence of internal and external forces [11]. For accurate delineation of the prostate boundary, the 2D TRUS images in this paper were processed for noise reduction and artifact removal before the deformable contour model was applied in the segmentation module. Fig. 1 shows the flowchart of this procedure.

2.1. Preprocessing module
The preprocessing module, which has been used and tested successfully in mammography, plays a key role in accurate boundary delineation on TRUS images because of the inherent noise and low resolution of the images. It suppresses noise and artifacts, enhances edge information in the TRUS images, and allows the follow-up segmentation module to work efficiently. The preprocessing module is composed of three algorithms: tree-structured nonlinear filtering (TSF), directional wavelet transforms (DWT) and the tree-structured wavelet transform (TSWT), as shown in Fig. 2.
Fig. 1. Flowchart of the prostate delineation algorithm: a digital TRUS image passes through the preprocessing module (noise suppression and artifact removal; image enhancement and smoothing with edge preservation) and then through the segmentation module (prostate boundary segmentation).
2.1.1. Tree-structured filter (TSF)

The major advantages of the TSF for noise suppression and artifact removal are that it requires no prior knowledge of the local statistics within the filter window and that it is computationally efficient [12]. As a symmetric multistage filter, the TSF combines the advantages of center-weighted median filters (CWMFs), linear and curved windows, and multistage operations that sequentially compare filtered and raw image data, with the objective of obtaining more robust noise suppression and detail preservation. The CWMF is a nonlinear image smoothing and enhancement technique that is especially effective in reducing both signal-dependent and random noise [12]. However, a CWMF with a large central weight preserves more image detail but suppresses less noise than one with a smaller central weight. In order to
Fig. 2. Algorithms used in the preprocessing module: the original image passes through the tree-structured filter (TSF) for noise suppression, then through the directional wavelet transforms (DWT) and the tree-structured wavelet transform (TSWT) for image enhancement and smoothing, before entering the segmentation module.
improve the performance of the CWMF, the filter window of N × N pixels with different linear and curved shapes, and the number of stages of the series of CWMFs in the multistage tree-structured architecture, were varied. Since a TRUS image contains various curved edges, it is desirable to alter the orientation of the filtering window on a pixel-by-pixel basis so as to match the curvature of the various edges in the image. Therefore, the CWMF was modified to use a 5 × 5 pixel window with 2N − 2 linear or curved "sticks", where N = 5. The multistage filtering structure provides different filter architectures and can be applied to a wide range of tasks. In the first stage of the filter, the number of CWMFs equals the number of different "sticks", and the original unfiltered signal is used as a single input to each subfilter for better preservation of image details. The second stage consists of fewer CWMFs, whose inputs are combinations of an even number of CWMF outputs from the first stage and the original signal. Several intermediate stages can be added by repeating this procedure; in the final stage, one CWMF is left. The number of CWMFs employed in the intermediate stages and the number of stages implemented in the filter depend on the requirements of the particular problem [12]. In this study, a three-stage architecture was empirically selected, combining well-balanced performance, ease of implementation and fast processing time.
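As a concrete illustration, a center-weighted median filter operating along directional "sticks" in a 5 × 5 window can be sketched as below. The stick geometries, the center weight, and the rule for combining stick outputs are illustrative assumptions, not the paper's exact multistage TSF design:

```python
import numpy as np

def cwmf_sticks(img, center_weight=3):
    """Center-weighted median over 8 directional 'sticks' in a 5x5 window.

    Illustrative sketch: for each pixel, the median along each stick is
    taken (with the center pixel replicated `center_weight` times), and
    the stick median closest to the raw value is kept, a crude stand-in
    for the multistage selection of the full TSF.
    """
    # 2N - 2 = 8 sticks for N = 5: straight and bent shapes (assumed geometries).
    offs = [
        [(-2, 0), (-1, 0), (0, 0), (1, 0), (2, 0)],       # vertical
        [(0, -2), (0, -1), (0, 0), (0, 1), (0, 2)],       # horizontal
        [(-2, -2), (-1, -1), (0, 0), (1, 1), (2, 2)],     # diagonal
        [(-2, 2), (-1, 1), (0, 0), (1, -1), (2, -2)],     # anti-diagonal
        [(-2, -1), (-1, 0), (0, 0), (1, 0), (2, 1)],      # bent ("curved") sticks
        [(-2, 1), (-1, 0), (0, 0), (1, 0), (2, -1)],
        [(-1, -2), (0, -1), (0, 0), (0, 1), (1, 2)],
        [(1, -2), (0, -1), (0, 0), (0, 1), (-1, 2)],
    ]
    pad = np.pad(img, 2, mode='edge')
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            meds = []
            for stick in offs:
                vals = [pad[y + 2 + dy, x + 2 + dx] for dy, dx in stick]
                vals += [pad[y + 2, x + 2]] * (center_weight - 1)  # extra center copies
                meds.append(np.median(vals))
            # keep the stick median closest to the raw pixel (detail preservation)
            out[y, x] = min(meds, key=lambda m: abs(m - img[y, x]))
    return out
```

A flat region passes through unchanged, while impulsive speckle is replaced by a stick median, which is the behavior the TSF stages build on.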
2.1.2. Directional wavelet transform (DWT)

Wavelet models can be described as multi-channel frequency decomposition filters that contain both frequency and directional information, as opposed to the Fourier transform, which contains only frequency information. The DWT is a wavelet transform for multi-orientation signal decomposition. Pioneered by Mallat and Zhong, it has proved very useful when analysis of multi-scale features is important. It has also been shown that orientation-sensitive filters, such as Gabor filters or directional wavelets, are highly suitable when one faces multi-scale oriented tokens in images [13]. Implemented on a pixel-by-pixel basis, the DWT allows enhancement of structures that have a specific orientation in the image. With the noise-suppressed image as input, two output images were obtained from the feature decomposition with directional wavelet analysis. One is a directional texture image, which can be used for directional feature enhancement. The other is a smoothed version of the original image with the directional information removed; this output image was further processed by the TSWT in the next stage [13].

2.1.3. Tree-structured wavelet transform (TSWT)

Gabor filters and conventional wavelet transforms normally require prior knowledge of image-related parameters, such as spatial frequencies, orientations and the shape of the Gaussian envelope function [14]. The M-channel TSWT used in this module is different: it can be readily expanded to M × M resolution levels to generate M² subimages. As a highly efficient multiresolution representation method for image enhancement, the TSWT was designed to zoom into desired frequency channels and perform selective decomposition and reconstruction of the region of interest (ROI) in the TRUS image. The lower-resolution characteristics are useful for localizing suspicious areas, while the high-resolution information is essential for fine detail in the segmentation module. In this work, "separable wavelets" were used for the 2D wavelet implementation, with subsampling by two in each dimension. Incorporating quadrature mirror filters (QMFs) as the basic subunits of the tree structure, the TSWT decomposition can be determined automatically during the transform process [14]. It should be noted that two changes occur in the original image during the decomposition and reconstruction processes: a (spatial) frequency resolution change and a scale change. The resolution change is due to the low-pass or high-pass filtering (loss of high or low frequencies); the scale change is due to the decimation or expansion operations. In order to reconstruct an undistorted signal from a set of subimages, the synthesis filters should satisfy the perfect-reconstruction condition. Therefore, operations in the analysis part were matched by equivalent operations in the synthesis part of the QMF, and the output image generated is identical to the input.

2.2. Segmentation module
The segmentation module was designed to detect the prostate edge and thus differentiate the prostate region in the TRUS images. A characteristic of an edge is that it is located at a significant change in some physical aspect of the image, such as gray level, color or texture. The active contour model (snake) was used in this module to outline the prostate boundary in TRUS images, since the active contour model has been shown to be more advantageous than other edge detectors because it considers 2D spatial information [15].
2.2.1. Prostate contour initialization

The snake, first proposed by Kass et al. in the 1980s, is a specific example of a deformable model [16]. It has been successful in tasks such as edge detection, image understanding, corner detection, motion tracking and stereo matching. The snake is an energy-minimizing spline controlled by minimizing a function that converts high-level contour information, such as curvature and discontinuities, and low-level image information, such as edge gradients and terminations, into energies. From a given starting point, it deforms itself to conform to the nearest salient contour. Snakes do not detect contours on their own but depend on identifying shape within an image; the initial location must be provided either by other processing or by higher-level knowledge. When segmenting structures in noisy data such as TRUS images using deformable models, it is generally beneficial to introduce some prior information about the structure being recovered to constrain the model deformation [17–19]. The active contour model used for boundary segmentation requires an initial curve, input by the user, to initiate the boundary detection process; the algorithm delineates the prostate boundaries successfully when the initial contour is reasonably close to them [8]. In TRUS images, prostate boundaries are smooth and generally define closed, near-convex contours. This knowledge can be incorporated into the deformable model as prior information in the form of initial conditions, constraints on the model shape parameters, or the model-fitting procedure, to achieve higher specificity in detecting shapes from the data [17]. Incorporating prior information about the object shape requires either training or global shape modeling [20]. Chiu et al. initialized the contour by having the user enter four initial vertices close to the boundary as judged by eye [21].
Then, a spline interpolation technique was applied to generate the initial contour. Although this reduces user involvement by requiring fewer points, the algorithm has some deficiencies: the generated contour is less accurate because the deformable model is sensitive to the initial contour. In [22], Chiu et al. revised their deformable algorithm; the initial contour is again derived from four user-entered vertices followed by conventional spline interpolation. However, concave sections of the boundary on TRUS images are not captured, and the DDC (discrete dynamic contour) model is easily attracted to false edges. To improve the performance, they used the intensity information provided by the approximation coefficients of the dyadic wavelet transform to drive the DDC model away from regions of high gray-scale value and towards the actual boundary. Shen et al. proposed the use of a statistical shape model [3]: using prior knowledge of the prostate in TRUS images, the initialization model was created from the average shape of the manually outlined prostates in the test data. Pathak et al. compared three initial-contour generation methods [23]: (1) the initial contour is manually outlined on each image; (2) the user specifies eight points on the prostate boundary and the initial curve is derived by linear interpolation between them; (3) a seed point is chosen near the center of the prostate gland and the initial contour is derived using a zero-crossing-based
edge detection method. The experimental results show that the prostate volume measured with method 1 was closer to the expert's estimate than that of method 2, method 2 gave a larger 95% confidence interval than method 1, and method 3 failed to detect the proper boundaries. Gong et al. observed that initial-contour training involves manually capturing the object-shape variability associated with delineations by different users [17]. An alternative approach is to model global shape properties using parametric shape models. Traditional deformable models are local models, i.e., contours are assumed to be locally smooth; global properties, such as orientation and size, are not explicitly modeled. Modeling global shape properties can provide greater robustness to initialization and model evolution. Furthermore, global properties are important in image interpretation because they can be characterized using only a few parameters, and they tend to be much more stable than local ones. If information about the global properties is known a priori, it can improve segmentation performance. The initial contour used in our paper is based on the same viewpoint as Gong et al.: the anatomical model of the prostate is less subjective and less variable across users, and it removes the bias that could be introduced by the selection of reference points [17]. With many parameters controlling local deformations, complex shapes can be modeled, but at the expense of increased computational complexity. On the other hand, abstracting shape into a small number of parameters through global shape modeling leads to more stable and faster numerical schemes. In practice, it is desirable to reduce the number of parameters, and the range of each parameter, used in a model to obtain a compact representation [17].
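As a concrete example of such a compact global model, Gong et al. [17] use deformable superellipses, whose whole outline is determined by a handful of parameters. The sampling routine below is a minimal sketch; the parameter names and conventions are illustrative:

```python
import numpy as np

def superellipse(a, b, eps, n=100, center=(0.0, 0.0)):
    """Sample n points on the superellipse |x/a|^(2/eps) + |y/b|^(2/eps) = 1.

    eps = 1 gives an ordinary ellipse; other values of eps bend the
    outline toward a rectangle or a diamond. Only a few global
    parameters (a, b, eps, center) describe the entire closed contour.
    """
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = center[0] + a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** eps
    y = center[1] + b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** eps
    return np.stack([x, y], axis=1)
```

Because the model is parametric, an initial prostate-like contour can be placed and scaled without any user-entered reference points, which is the stability argument made above.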
Note also that the anatomical model shape adopted in our paper makes it easy to specify the relative spatial location of a given point with respect to the whole prostate. It is similar to an average prostate outline and is reflectionally symmetric about the vertical axis through the center of the image. Previously proposed algorithms for automatic prostate boundary identification, while possessing a certain degree of clinical utility, are not in wide clinical use because of their inability to handle speckle noise, poor contrast between the prostate and surrounding tissues, missing boundary segments, etc. In addition, the presence of structures such as the seminal vesicles makes automatic boundary delineation difficult [9], and achieving accurate and robust prostate boundary identification remains a difficult task [17]. Our proposed boundary segmentation module therefore relies on the preprocessing module, which combines the TSF, directional wavelet transforms and TSWT to remove noise and enhance the edge information of the image, so that accurate boundary delineation can be achieved by the following module. Fig. 3(c) shows the initial contour laid on the TRUS image for boundary segmentation.

2.2.2. Snake (active contour model) algorithm

The snake is an open or closed contour represented parametrically as v(s) = (x(s), y(s)), where x(s) and y(s) are the coordinates along the contour and s ∈ [0, 1]. Each contour configuration has an energy associated with it. The contour is placed on
Fig. 3. Segmentation results on 2D TRUS images: (a) original image; (b) corresponding output image after preprocessing; (c) preprocessed image with the initial contour laid on; (d) the final contour image with the prostate boundaries segmented at α = 0.3 and β = 0.5.
an image and moves toward an optimal position and shape by minimizing its own energy. The energy is a weighted combination of internal and external forces; these energies influence all points along the contour, and the total energy is at a minimum when all forces are balanced. In our edge delineation module, the snake defines the desired image boundaries autonomously using the internal and image energy forces. The total energy along the contour is

E_snake = ∫₀¹ [E_int(v(s)) + E_image(v(s))] ds,    (2.1)
where E_int is the internal energy of the spline due to bending and E_image is the image force. The internal forces enforce smoothness and emanate from the shape of the snake; the image force attracts the contour to the desired features. The internal energy depends on the shape of the contour and can be written as

E_int = α(s) |dv/ds|² + β(s) |d²v/ds²|²,    (2.2)

where α(s) and β(s) specify the elasticity and stiffness of the snake. The first-derivative term |v′(s)|² discourages stretching and makes the model behave like an elastic string or membrane; the second-derivative term |v″(s)|² discourages bending and makes the model behave like a rigid rod or thin plate. α(s) controls the tension along the spline; β(s) controls its rigidity. If α(s) equals zero at some point, a discontinuity is allowed there; if β(s) equals zero at a point s, discontinuous curvature, such as a corner, can develop. In practice, the parameter functions are often chosen to be constants [23].
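With constant parameters, the internal energy of Eq. (2.2) for a discretized closed contour reduces to a few lines of finite differences. The sketch below uses the values α = 0.3 and β = 0.5 reported in Fig. 3 as defaults; the discretization scheme is an illustrative assumption:

```python
import numpy as np

def internal_energy(v, alpha=0.3, beta=0.5):
    """Discrete internal energy of a closed contour v (N x 2 array).

    First and second circular differences approximate dv/ds and
    d^2v/ds^2 in Eq. (2.2); alpha weights stretching (elasticity),
    beta weights bending (stiffness), both constant as is common.
    """
    d1 = np.roll(v, -1, axis=0) - v                              # v_{i+1} - v_i
    d2 = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)  # second difference
    return np.sum(alpha * np.sum(d1**2, axis=1) + beta * np.sum(d2**2, axis=1))
```

As expected from Eq. (2.2), the energy depends only on the contour's shape: translating the contour leaves it unchanged, while stretching or kinking it raises the energy.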
The image force is a weighted combination of different functionals and can be expressed as

E_image = E_line + E_edge.    (2.3)
The line-based functional E_line may be very simple:

E_line = f(x, y),    (2.4)
where f(x, y) denotes the image gray level at location (x, y). The edge-based functional E_edge attracts the snake to contours with large image gradients, that is, to locations of strong edges:

E_edge = −|∇f(x, y)|²,    (2.5)
where ∇f(x, y) is the gradient of the enhanced image produced by the preprocessing module. In accordance with the calculus of variations, the final contour that minimizes the total energy is found by solving the vector-valued partial differential (Euler–Lagrange) equation

∂E_snake/∂v = −∂/∂s(α ∂v/∂s) + ∂²/∂s²(β ∂²v/∂s²) + ∇f(v) = 0.    (2.6)

This equation can be solved by the gradient-descent method to obtain the final contour. Discretizing the contour and converting it to a vector v simplifies Eq. (2.6). Let v(s) be sampled as v_i(s), i = 1, 2, …, N; then

∂v(s)/∂s → v_{i+1}(s) − v_i(s).    (2.7)
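The image-energy field of Eqs. (2.3)–(2.5) can be computed directly from the preprocessed image. The sketch below uses numpy's gradient; the weights w_line and w_edge are illustrative assumptions standing in for the weighted combination mentioned above:

```python
import numpy as np

def image_energy(f, w_line=0.0, w_edge=1.0):
    """Image-energy field: w_line * f - w_edge * |grad f|^2.

    np.gradient approximates the spatial gradient of the (preprocessed)
    image f; the snake is attracted to minima of this field, i.e. to
    locations of strong edges, as in Eq. (2.5).
    """
    gy, gx = np.gradient(f.astype(float))
    return w_line * f - w_edge * (gx**2 + gy**2)
```

On a synthetic step image, the energy is lowest along the intensity step, which is exactly where the snake should be pulled.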
The remaining derivatives in Eq. (2.6) can be converted to

−∂/∂s(α ∂v/∂s) → α(2v_i(s) − v_{i−1}(s) − v_{i+1}(s)),    (2.8)

∂²/∂s²(β ∂²v/∂s²) → β(v_{i−2}(s) − 4v_{i−1}(s) + 6v_i(s) − 4v_{i+1}(s) + v_{i+2}(s)).    (2.9)

The final solution is obtained iteratively as

v_{j+1} = M^{−1}(v_j + ∇F(v_j) · γ),    (2.10)

with γ the step size,
where M is a pentadiagonal matrix whose rows contain the coefficient sequence β, −α − 4β, 1 + 2α + 6β, −α − 4β, β.

3. Experiment results and evaluation statistics

The TRUS images were provided by the Medical Image Processing Lab at the H. Lee Moffitt Cancer Center and Research Institute and were obtained with a 7-MHz ultrasound probe, typically covering 55 × 30 mm at a resolution of 440 × 240 pixels. Twenty images were used to verify the algorithm. Fig. 3 shows the results of the segmentation algorithm on 2D TRUS images. Fig. 3(a) is the original image; after noise removal and image enhancement by the preprocessing module, the output image is shown in Fig. 3(b). Fig. 3(c) shows the anatomical contour of the prostate overlaid on the preprocessed image for boundary segmentation, and Fig. 3(d) shows the final prostate boundary outlined by the segmentation module. TRUS images with manually outlined and computer-aided boundaries are presented in Fig. 4: Fig. 4(a) compares the manually outlined boundary with the computer-aided boundary obtained without the preprocessing module, and Fig. 4(b) shows the comparison after the preprocessing module is implemented. With the preprocessing module, which removes image noise, smooths the images and enhances edge information, the performance of the segmentation module is significantly improved.
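The iterative scheme of Eqs. (2.8)–(2.10) can be sketched in a few lines. The cyclic (closed-contour) boundary handling, the step size and the fixed iteration count are illustrative assumptions:

```python
import numpy as np

def snake_matrix(n, alpha, beta):
    """Cyclic pentadiagonal matrix M of Eq. (2.10).

    Each row holds (beta, -alpha - 4*beta, 1 + 2*alpha + 6*beta,
    -alpha - 4*beta, beta) on its diagonals, wrapped circularly
    for a closed contour.
    """
    M = np.zeros((n, n))
    row = {-2: beta, -1: -alpha - 4 * beta, 0: 1 + 2 * alpha + 6 * beta,
           1: -alpha - 4 * beta, 2: beta}
    for i in range(n):
        for k, c in row.items():
            M[i, (i + k) % n] = c
    return M

def snake_iterate(v, grad_F, alpha=0.3, beta=0.5, step=1.0, iters=100):
    """Iterate v_{j+1} = M^{-1}(v_j + step * grad_F(v_j)), Eq. (2.10).

    v is an N x 2 array of contour points; grad_F returns the image
    force (N x 2) at those points. step and iters are assumed values.
    """
    Minv = np.linalg.inv(snake_matrix(len(v), alpha, beta))
    for _ in range(iters):
        v = Minv @ (v + step * grad_F(v))
    return v
```

With the image force set to zero, the internal-energy terms alone smooth and shrink a closed contour toward its centroid, which is a quick sanity check on the matrix construction.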
The computer-defined boundary on each TRUS image was compared with the truth file using area-based and distance-based metrics. Area-based metrics are insensitive to shape. To evaluate the performance of our segmentation module, three area-based metrics were used: fractional area difference (FAD), sensitivity and accuracy; their definitions can be found in [24]. The evaluation results are shown in Table 1. The implementation of the preprocessing module in our algorithm reduces the difference between the boundaries defined by the truth file and those delineated by the computer, and improves the sensitivity and accuracy of the segmentation module. The mean absolute distances between the computer estimates obtained with and without the preprocessing module on the same images are given in Table 2. Distance-based metrics are described in detail by Ladak et al. [24] and Mao et al. [25]. A reference pixel was manually placed in the center of the segmented image for calculating the
Table 1
Evaluation results of the segmented images based on area-based metrics

Area-based metric              Without preprocessing (%)    With preprocessing (%)
Fractional area difference     2.5                          1.4
Sensitivity                    93.3                         98.2
Accuracy                       90.2                         97.6

Table 2
Mean absolute distance (MAD) between CAD detection and the corresponding truth files (20 images)

                               With preprocessing    Without preprocessing
Mean (mm)                      1.26                  1.93
Standard deviation (mm)        0.72                  1.07
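The area-based metrics of Table 1 can be computed from binary masks of the two regions. The definitions below are commonly used ones, stated here as assumptions; the paper's exact formulas are given in its Ref. [24]:

```python
import numpy as np

def area_metrics(manual, computer):
    """Area-based metrics from two boolean masks (manual = truth).

    Assumed, commonly used definitions:
      FAD        : relative difference of the enclosed areas
      sensitivity: fraction of the manual region covered by the computer one
      accuracy   : 1 - (false positives + false negatives) / manual area
    """
    manual = manual.astype(bool)
    computer = computer.astype(bool)
    Am = manual.sum()
    tp = (manual & computer).sum()          # correctly covered pixels
    fp = (~manual & computer).sum()         # over-segmented pixels
    fn = (manual & ~computer).sum()         # missed pixels
    return {
        "FAD": abs(computer.sum() - Am) / Am,
        "sensitivity": tp / Am,
        "accuracy": 1.0 - (fp + fn) / Am,
    }
```

Identical masks give FAD = 0 and sensitivity = accuracy = 1; any missed or spurious pixels move the metrics away from those ideals, mirroring the with/without-preprocessing gap in Table 1.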
Fig. 4. Manually outlined structures overlaid on the automatic boundary segmentation images. Dotted line: manually outlined boundary. Solid line: computer-defined boundary. (a) Without the preprocessing module implemented on the TRUS images; (b) with the preprocessing module implemented before the segmentation module.
Fig. 5. Local bias metrics (pixels) versus angle (0–360°). Dotted line: difference between the manually segmented boundary on TRUS images and the computer-delineated boundary with the preprocessing module. Solid line: difference for the computer-delineated boundary without the preprocessing module.
Fig. 6. Local error metrics (pixels) versus angle (0–360°). Dotted line: computer-delineated boundary with the preprocessing module. Solid line: computer-delineated boundary without the preprocessing module.
local bias and local errors on the segmented images with and without the preprocessing module. The difference between the manually delineated boundary and the computer-extracted boundary is defined as d_j(θ_i), where j indexes the images and θ_i is the angle, ranging from 0° to 360°. One advantage of the local bias and local error metrics is that they show directly the difference between the manually outlined boundary and the computer-defined one. Fig. 5 plots the local bias metrics: the solid line shows the difference between the manually outlined edge and the computer-outlined edge without our preprocessing module, and the dotted line shows the difference after the preprocessing module is applied before segmentation. Fig. 6 shows
the local error metrics, comparing the segmentation results obtained with and without the preprocessing module. The average error is reduced from 3.84 to 2.76 pixels by the preprocessing module, which corresponds to approximately 0.35 mm. In the range of 220–320°, the error is smaller than in the rest of the image, owing to the location of the transducer during TRUS imaging. Clearly, the preprocessing module reduces the disagreement in prostate boundary delineation between the physicians and the proposed computer-aided algorithm.
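Given a reference pixel and two closed boundaries, the signed local bias d(θ) and the mean absolute distance (MAD) can be sketched as below; the nearest-angle radial lookup is an illustrative implementation choice, not the paper's exact procedure:

```python
import numpy as np

def radial_profile(boundary, center, angles):
    """Radius of a closed boundary (N x 2 points) at the given angles,
    measured from a reference center pixel, via circular interpolation."""
    rel = boundary - np.asarray(center)
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    r = np.linalg.norm(rel, axis=1)
    order = np.argsort(theta)
    return np.interp(angles, theta[order], r[order], period=2 * np.pi)

def local_bias(manual, computer, center, n_angles=360):
    """Signed manual-minus-computer radial difference d(theta), one value
    per degree, plus the mean absolute distance MAD = mean(|d|)."""
    ang = np.deg2rad(np.arange(n_angles))
    d = radial_profile(manual, center, ang) - radial_profile(computer, center, ang)
    return d, np.mean(np.abs(d))
```

Two concentric circular boundaries of radii 10 and 8 give a constant bias of 2 pixels at every angle, which makes the plot shapes of Figs. 5 and 6 easy to interpret.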
4. Conclusion

Semi-automatic or automatic prostate boundary detection methods provide robust, consistent and reproducible results with a certain degree of accuracy, although it is unlikely that they will ever replace physicians [8]. As a "second opinion" for physicians in the routine clinical setting, however, computer-aided segmentation techniques have been widely used in the detection and diagnosis of breast, lung and prostate cancer. In this study, a preprocessing module comprising TSF, DWT and TSWT for noise suppression and image enhancement was implemented, and the resulting images were further processed by the segmentation module for boundary delineation. In the segmentation module, the snake algorithm was modified and applied to the 2D preprocessed TRUS images. The preprocessing module improves the sensitivity and accuracy of the segmentation significantly, and the evaluation results show that the proposed algorithm is an efficient boundary delineation method among edge segmentation algorithms. To give physicians a more accurate and direct view of the prostate structure during clinical examination, 3D reconstruction of the prostate based on the algorithms implemented on 2D images can be pursued in the future. 3D ultrasound imaging is a relatively new modality whose potential is still being explored; the volume images it produces are appropriate for post-imaging interpretation and manipulation with suitable image processing and analysis techniques. The data are collected in 3D space as a sequence of 2D image planes, so the preprocessing module presented in this study can be applied to each 2D TRUS image before 3D rendering, greatly reducing false edges during segmentation and making a 3D segmentation algorithm efficient.
This combination of 2D and 3D information about the prostate would significantly enhance physicians' situational awareness and could thus lead to substantial improvements in cancer detection performance.

References

[1] E. Feleppa, A. Kalisz, J. Sokil-Melgar, F. Lizzi, T. Liu, A. Rosado, M. Shao, W. Fair, Y. Wang, M. Cookson, V. Reuter, W. Heston, Typing of prostate tissue by ultrasonic spectrum analysis, IEEE Trans. Ultrason. Ferroelectrics Freq. Control 43 (4) (1996) 609–619.
[2] S. Tong, D.B. Downey, H.N. Cardinal, A. Fenster, A three-dimensional ultrasound prostate imaging system, Ultrasound Med. Biol. 22 (6) (1996) 735–746.
[3] D. Shen, Y. Zhan, C. Davatzikos, Segmentation of prostate boundaries from ultrasound images using statistical shape model, IEEE Trans. Med. Imaging 22 (4) (2003) 539–551.
[4] J. Priestly, D. Beyer, Guided brachytherapy for the treatment of confined prostate cancer, Urology 40 (1992) 27–32.
[5] A. Houston, S. Premkumar, D. Pitts, Prostate ultrasound image analysis: localization of cancer lesions to assist biopsy, CBMS '95, Lubbock, TX, 1995, pp. 94–102.
[6] R. Huben, The USA experience: diagnosis and follow-up of prostate malignancy by transrectal ultrasound, Prog. Clin. Biol. Res. 237 (1987) 153–159.
[7] Y. Zhang, W. Qian, R. Sankar, Prostate boundary detection in transrectal ultrasound images, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Philadelphia, PA, vol. 5, 2005, pp. 617–620.
[8] F. Shao, K.V. Ling, W.S. Ng, R.Y. Wu, Prostate boundary detection from ultrasonographic images, Ultrasound Med. (2003) 605–623.
[9] S. Pathak, V. Chalana, D. Haynor, Y. Kim, Edge-guided boundary delineation in prostate ultrasound images, IEEE Trans. Med. Imaging 19 (12) (2000) 1211–1219.
[10] W.D. Richard, C.G. Keen, Automated texture-based segmentation of ultrasound images of the prostate, Comput. Med. Imaging Graphics 20 (1996) 131–140.
[11] Y. Zhan, D. Shen, Automated segmentation of 3D US prostate images using statistical texture-based matching method, MICCAI 2003, Lecture Notes in Computer Science, vol. 2878, Springer, Berlin, 2003, pp. 688–696.
[12] W. Qian, L.P. Clarke, M. Kallergi, R.A. Clark, Tree-structured nonlinear filters in digital mammography, IEEE Trans. Med. Imaging 13 (1) (1994) 25–36.
[13] L. Li, W. Qian, L.P. Clarke, Digital mammography: computer-assisted diagnosis method for mass detection with multiorientation and multiresolution wavelet transforms, Acad. Radiol. 4 (1997) 724–731.
[14] W. Qian, M. Kallergi, L.P. Clarke, H.D. Li, P. Venugopal, Tree-structured wavelet transform segmentation of microcalcifications in digital mammography, Med. Phys. 22 (1995) 1247–1254.
[15] R. Chang, W. Wu, C. Tseng, D. Chen, W. Moon, 3-D snake for US in margin evaluation for malignant breast tumor excision using mammotome, IEEE Trans. Inf. Technol. Biomed. 7 (3) (2003) 197–201.
[16] M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, Int. J. Comput. Vision 1 (1988) 321–331.
[17] L. Gong, S.D. Pathak, D.R. Haynor, P.S. Cho, Y. Kim, Parametric shape modeling using deformable superellipses for prostate segmentation, IEEE Trans. Med. Imaging 23 (2004) 340–349.
[18] T. McInerney, D. Terzopoulos, Deformable models in medical image analysis: a survey, Med. Image Anal. 1 (1996) 91–108.
[19] J. Montagnat, H. Delingette, N. Ayache, A review of deformable surfaces: topology, geometry and deformation, Image Vision Comput. 19 (2001) 1023–1040.
[20] C. Xu, D.L. Pham, J.L. Prince, Image segmentation using deformable models, in: J.M. Fitzpatrick, M. Sonka (Eds.), Handbook of Medical Imaging, Medical Image Processing and Analysis, vol. 2, SPIE Press, Bellingham, WA, 2000, pp. 129–174.
[21] B.C.Y. Chiu, G.H. Freeman, M. Salama, A. Fenster, K. Rizkalla, D.B. Downey, A segmentation algorithm using dyadic wavelet transform and discrete dynamic contour, CCECE 2003, Montreal, 2003, pp. 1–4.
[22] B. Chiu, G.H. Freeman, M.M.A. Salama, A. Fenster, Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour, Phys. Med. Biol. 49 (2004) 4943–4960.
[23] S.D. Pathak, R.G. Aarnink, J. Rosette, V. Chalana, H. Wijkstra, D. Haynor, F. Debruyne, Y. Kim, Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging, Proc. SPIE 3335 (1998) 83–92.
[24] H.M. Ladak, F. Mao, Y. Wang, D.B. Downey, D.A. Steinman, A. Fenster, Prostate boundary segmentation from 2D ultrasound images, Med. Phys. 27 (8) (2000) 1777–1788.
[25] F. Mao, J. Gill, A. Fenster, Technique for evaluation of semi-automatic segmentation methods, Proc. SPIE 3661 (1999) 1027–1036.

Ying Zhang received the MSEE degree from the University of South Florida, Tampa, Florida, in 2004. Currently, she is working towards the Ph.D. degree at the Department of Electrical Engineering, University of South Florida. Her main research interests are in medical image processing and computer vision. Her current research aims to establish a strong model/network between pathology and radiology for computer-assisted image interpretation and differential diagnosis rendering by exploiting computational methods and machine learning techniques.

Ravi Sankar received the B.E.
(Honors) degree in Electronics and Communication Engineering from the University of Madras, India, in 1978, the M. Eng. degree in Electrical Engineering from Concordia University, in 1980,
and the Ph.D. degree in Electrical Engineering from the Pennsylvania State University, in 1985. Since then, he has been with the Department of Electrical Engineering at the University of South Florida, Tampa. He is currently a Professor of Electrical Engineering and Director of the Interdisciplinary Communications, Networking and Signal Processing (iCONS) Research Group and the Interdisciplinary Center of Excellence in Telemedicine (ICE-T). His main interests are in the interdisciplinary research areas of wireless communications, networking, signal processing and their applications. In particular, he is interested in processing, coding, and recognition applications to speech, image, biomedical and other signals, and in integrating intelligent techniques, including neural networks and fuzzy logic, in the simulation, modeling, and design of high-performance and robust systems. He has published widely in these areas and has successfully conducted numerous funded research projects over the past 25 years. He is an Associate Editor of IEEE Communications Surveys and Tutorials and was a guest editor for the IEEE Transactions on Information Technology in Biomedicine. He also serves on the Honorary Editorial Board of Libertas Academica's Cancer Informatics. He has served on the organizing committees,
technical program committees, and as a session organizer and chair for many conferences. He was the invited keynote speaker for the Fourth IASTED International Conference on Communications, Internet, and Information Technology, 2006. He is the current Chair and Founding Officer of the Engineering in Medicine and Biology Society (EMBS) Chapter of the IEEE Florida West Coast Section.

Wei Qian is currently an Associate Professor at the Department of Interdisciplinary Oncology at the University of South Florida and Director of the Medical Imaging Program. He received the B.Sc. and M.Sc. degrees in Electrical Engineering from Nanjing University of Telecommunications, and the Ph.D. degree in Biomedical Engineering from Southeast University, Nanjing, China, in 1982, 1985, and 1990, respectively. He was a Postdoctoral Research Associate at the Department of Electrical Engineering at the University of Notre Dame in 1990–1991 and a Postdoctoral Research Associate at the Department of Radiology at the University of South Florida in 1991–1993. Dr. Qian's research interests include CAD in medical imaging, biomedical imaging and molecular imaging.