Fast and robust registration of PET and MR images of human brain




NeuroImage 22 (2004) 434–442
www.elsevier.com/locate/ynimg

Jiří Čížek,a,b Karl Herholz,a,c,* Stefan Vollmar,a Rainer Schrader,b Johannes Klein,a,c and Wolf-Dieter Heissa,c

a Max-Planck-Institute for Neurological Research, 50931 Cologne, Germany
b Center for Applied Computer Science, University of Cologne, 50923 Cologne, Germany
c Department of Neurology, University of Cologne, Joseph-Stelzmann-Str. 9, 50931 Cologne, Germany

Received 21 October 2003; revised 13 January 2004; accepted 15 January 2004

* Corresponding author. Fax: +49-221-4726-298. E-mail address: [email protected] (K. Herholz).
Available online on ScienceDirect (www.sciencedirect.com). doi:10.1016/j.neuroimage.2004.01.016

In recent years, mutual information has proved to be an excellent criterion for registration of intra-individual images from different modalities. Multi-resolution coarse-to-fine optimization has been proposed to speed up the registration process. The aim of our work was to further improve registration speed without compromising robustness or accuracy. We present and evaluate two procedures for coregistration of positron emission tomography (PET) and magnetic resonance (MR) images of human brain that combine a multi-resolution approach with an automatic segmentation of the input image volumes into areas of interest and background. We show that an acceleration factor of 10 can be achieved for clinical data and that suitable preprocessing can improve the robustness of registration. Emphasis was laid on the creation of an automatic registration system that could be used routinely in a clinical environment. For this purpose, an easy-to-use graphical user interface has been developed. It allows physicians with no special knowledge of the registration algorithm to perform a fast and reliable alignment of images. Registration progress is presented on the fly on a fusion of images, which enables visual checking during a registration.
© 2004 Elsevier Inc. All rights reserved.

Keywords: Registration; Multi-modal; Mutual information; Preprocessing; PET; MRI

Introduction

Registration of functional and morphological 3D image data sets has become an essential tool for individual anatomical localization of functional processes in the brain (Woods et al., 1998). This is true not only for the use of functional imaging in neuroscientific research, but increasingly also for applications of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) in clinical neurology, neurosurgery, and psychiatry (Pietrzyk, 2001).


For instance, registration of fMRI or PET to magnetic resonance (MR) images of macroscopic anatomy is necessary for accurate presurgical localization of language or motor areas in patients with brain tumors in critical locations (Thiel et al., 2001) or of epileptic foci in patients with refractory focal seizures (Muzik et al., 2000). Speed and robustness are always important aspects of the quality of registration procedures, and clinical applications are especially demanding in that respect because they may involve patients with abnormal anatomy and function due to brain lesions, and results must be produced within the tight time frame of the clinical diagnostic setting. Until recently, this was often achieved by manual interactive registration procedures (e.g., Pietrzyk et al., 1994, 1996). Because different tracers are used, the functional appearance of anatomical structures on PET images may be very diverse, which also demands a high degree of versatility and robustness of coregistration procedures.

Registration of medical images by maximization of mutual information has become very popular in recent years and has undergone intensive research. Several papers validated its performance for registration of images from different modalities (Maes et al., 1997; Pluim et al., 2001; Studholme et al., 1997; Thurfjell et al., 2000; West et al., 1997). An extensive survey covering all aspects of multi-modal registration based on mutual information can be found in Pluim et al. (2003). A multi-resolution approach was proposed by several researchers for acceleration of image registration (Maes et al., 1999; Pluim et al., 2001; Thurfjell et al., 2000). They showed that a suitable multi-resolution scheme can improve registration speed by a factor of more than 2 without loss of accuracy.

In this paper, two automatic preprocessing procedures for further acceleration of PET-MR registration are proposed and compared: (1) thresholding followed by a sequence of morphological operations and (2) utilization of a well-established automatic brain extraction procedure. The acceleration obtained by the preprocessing is demonstrated for PET-MR registration using simulated and real images of human brain. We show that a suitable preprocessing step can bring an additional speed-up by a factor of 3 without loss of accuracy and that it can help in cases where the basic registration method fails to find a good match. A speed-up by a factor of 10 can be achieved with a small loss of accuracy.


A common PET image (128 × 128 × 47 voxels) and MR image (256 × 256 × 100 voxels) can be precisely and reliably co-registered in less than 1 min on current PCs.

Materials and methods

Mutual information

The general principle of the registration algorithm is an iterative search for a transformation that maximizes the mutual information of two image volumes. Mutual information (MI) is a reliable and precise measure of alignment for registration of images from different modalities, for example, PET-MR (Pluim et al., 2001), MRI-CT (Maes et al., 1997; Studholme et al., 1997; Wells et al., 1996), or SPET-MR (Thurfjell et al., 2000). A brief summary of the basic principles of MI follows.

Let us consider two image volumes that need to be coregistered. One of them is static throughout the registration process and will be called the reference volume R. The other one is called the reslice image S and is subject to geometrical transformations that aim at bringing it into alignment with the reference volume. In this paper, multi-modal, intra-subject registrations are considered. The set of searched transformations is therefore restricted to the six-parameter rigid-body transformations in 3D (translation and rotation). S_α denotes the volume S transformed according to a set of parameters α.

Mutual information is computed using the normalized marginal intensity histograms (H_R and H_S) and the joint intensity histogram (H_{R,S}) of the two image volumes. These histograms approximate the marginal and joint probability distributions of intensity values in both images. The mutual information is then computed as

MI(\alpha) = \sum_{i \in R,\, j \in S_\alpha} H_{R,S_\alpha}(i,j) \, \log \frac{H_{R,S_\alpha}(i,j)}{H_R(i) \, H_{S_\alpha}(j)}

where i and j are indices of voxels from the overlapping parts of the volumes R and S_α. In the case of a multi-modality registration of intra-subject images, mutual information must be maximized to bring the images into alignment. The aim is to find a transformation α that realigns the reslice image so that the amount of information that one image bears about the other is maximal. We compute the mutual information using the partial volume interpolation described in Maes et al. (1999). This approach makes MI a smooth function of the transformation parameters and possibly reduces the number of local minima (Pluim et al., 2000). For the computation of the marginal and joint histograms, 64 bins are used, as proposed in Thurfjell et al. (2000).
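For illustration, a minimal C++ sketch of this computation (our naming, not the actual implementation): it assumes the 64-bin joint histogram has already been filled and normalized, in our case by partial volume interpolation, and derives the marginal histograms from it.

```cpp
#include <array>
#include <cmath>

// Illustrative sketch only: computes MI(alpha) from a normalized 64x64
// joint intensity histogram. The histogram is assumed to have been filled
// beforehand, e.g. by partial volume interpolation (Maes et al., 1999),
// and normalized so that all entries sum to 1.
constexpr int kBins = 64;
using JointHistogram = std::array<std::array<double, kBins>, kBins>;

double mutualInformation(const JointHistogram& joint) {
    // Marginal histograms H_R(i) and H_S(j) are obtained by summing the
    // joint histogram over rows and columns, respectively.
    std::array<double, kBins> hR{}, hS{};
    for (int i = 0; i < kBins; ++i)
        for (int j = 0; j < kBins; ++j) {
            hR[i] += joint[i][j];
            hS[j] += joint[i][j];
        }
    // MI = sum_{i,j} H_RS(i,j) * log( H_RS(i,j) / (H_R(i) * H_S(j)) )
    double mi = 0.0;
    for (int i = 0; i < kBins; ++i)
        for (int j = 0; j < kBins; ++j) {
            const double p = joint[i][j];
            if (p > 0.0)  // empty bins contribute nothing
                mi += p * std::log(p / (hR[i] * hS[j]));
        }
    return mi;
}
```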

Optimization

Maximization of mutual information is accomplished by the downhill simplex optimization method of Nelder and Mead (1965). We have chosen this method because it does not require computation of a gradient of the similarity measure and is comparable in speed and accuracy to other commonly used algorithms such as Powell's minimization or the conjugate gradient method, as demonstrated in Maes et al. (1999).


Moreover, they show that the downhill simplex method is even slightly more efficient than Powell's method when a multi-scale approach is used and, owing to the simplicity of its implementation (no gradient estimation needed), is to be preferred to the similarly efficient gradient-based methods.

For an N-dimensional minimization, the method begins with N + 1 initial points defining a geometrical object called a simplex in the N-dimensional space of transformation parameters. The simplex is then iteratively transformed using a predefined sequence of transformations that aim to move the vertices of the simplex towards the minimum. Our implementation is based on the code presented in Press et al. (1992). The offsets of the vertices of the initial simplex are +10 mm for translation and +5° for rotation. Convergence of the optimization is declared when the fractional increase of mutual information is smaller than 10⁻⁶ (an empirically chosen value).

Multi-scale approach

The most time-consuming part of a voxel-based registration is resampling of the image volume during the computation of MI(α). The computational expense is proportional to the number of image voxels used. Using a multi-scale coarse-to-fine optimization approach, the computational demand can be significantly reduced without loss of accuracy (Maes et al., 1999; Pluim et al., 2001; Thurfjell et al., 2000). First, optimization is performed at low resolution using a small subset of the image volume. After reaching convergence, it proceeds at higher resolutions using the transformation computed at the lower resolution as an initial estimate. This results in a smaller number of iterations required to reach convergence at higher resolutions and markedly shortens computational time.

Sub-sampled subsets are constructed by sampling the reference image with integral factors [f_x, f_y, f_z] along the x, y, and z dimensions using nearest-neighbor interpolation. In our implementation, a factor f_x simply means that only every f_x-th voxel in the x-direction is used. For a PET-MR registration, we use a three-level optimization scheme, starting with sub-sampling factors [4, 4, 1], continuing with factors [2, 2, 1] in the second level, and (optionally) finishing at full resolution with factors [1, 1, 1]. Thurfjell et al. (2000) demonstrated that an optimization scheme without the full-resolution level can give sufficiently precise results in much shorter time. We evaluate this statement in Results. In the following text, full-resolution registration denotes a registration using all three levels, whereas low-resolution registration is a registration without the last optimization level [1, 1, 1].
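As an illustration, a compact sketch of this three-level scheme (names and structure are ours, not the actual implementation); the downhill simplex optimizer and the sub-sampled MI evaluation are assumed as declared helpers and are not shown.

```cpp
#include <array>
#include <functional>

// Six rigid-body parameters: 3 translations (mm), 3 rotations (degrees).
using Params = std::array<double, 6>;

// Downhill simplex maximizer (assumed helper, e.g. adapted from Press et
// al., 1992; not shown). 'offsets' defines the initial simplex around
// 'start'; iteration stops when the fractional increase of the objective
// falls below 'ftol'.
Params nelderMeadMaximize(const std::function<double(const Params&)>& objective,
                          const Params& start, const Params& offsets, double ftol);

// Mutual information of the two volumes for a given transformation,
// evaluated on the reference volume sub-sampled by [fx, fy, fz] (only
// every fx-th/fy-th/fz-th voxel is visited); assumed helper, not shown.
double miAtResolution(const Params& a, int fx, int fy, int fz);

// Three-level coarse-to-fine optimization as described in the text:
// [4,4,1] -> [2,2,1] -> (optionally) [1,1,1]. Each level starts from the
// result of the previous one, which greatly reduces the number of
// iterations needed at the finer levels.
Params registerMultiScale(bool fullResolution) {
    const int levels[3][3] = { {4, 4, 1}, {2, 2, 1}, {1, 1, 1} };
    const int nLevels = fullResolution ? 3 : 2;  // "low-resolution" omits [1,1,1]
    Params a{};                                   // identity as initial estimate
    const Params offsets = {10, 10, 10, 5, 5, 5}; // +10 mm translation, +5 deg rotation
    const double ftol = 1e-6;                     // empirically chosen tolerance
    for (int l = 0; l < nLevels; ++l) {
        const int fx = levels[l][0], fy = levels[l][1], fz = levels[l][2];
        a = nelderMeadMaximize(
                [&](const Params& p) { return miAtResolution(p, fx, fy, fz); },
                a, offsets, ftol);
    }
    return a;
}
```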



Further improvements of registration speed

The multi-scale approach itself brings a large gain in registration speed without loss of accuracy. However, there still remains room for improvement. One possibility is to reduce the total volume of data used for registration. A common brain image volume contains a large number of voxels that do not bear any usable information (e.g., background voxels). By excluding these areas from the computations, it is possible to shorten registration time. Removal of a noisy background may also help to improve the robustness of registration. We present and evaluate two approaches that remove these areas reliably and automatically: intensity thresholding followed by morphological operations, and extraction of the brain volume from MR images before registration.

Intensity thresholding

One of the easiest ways to remove a part of an image volume from the computation is to select a certain intensity value and mask out all voxels with intensities lower (or higher) than this value. With a well-chosen threshold, it is possible to remove most background areas and keep most head areas in PET and MR images. To allow an easy distinction between background and head areas, we use a quantity that does not vary much for a given scanner and acquisition protocol: the ratio q of the background (non-head) volume to the whole image volume. The intensity threshold corresponding to a certain ratio q is the q% quantile of the intensity histogram of the whole volume (i.e., the lowest intensity higher than q% of all image voxels). Thresholding results in a 3D binary mask where zero voxels correspond to voxels in the original image with an intensity lower than the intensity threshold. An example of such a thresholded image is given in Fig. 1. When a low threshold is used, many background voxels are not removed (Fig. 1, left). On the other hand, a higher threshold unavoidably removes some portion of brain voxels (Fig. 1, right).

Adjustment using morphological operations. We use a sequence of binary 3D morphological operations to "clean" background areas and to fill "holes" in areas of interest. Morphological operations can simplify image data and eliminate irrelevant objects while preserving the shape of large enough objects (Gonzales and Woods, 1993; Sonka et al., 1994). A similar method based on morphological operations was proposed in Alpert et al. (1996); they used manual threshold selection and a registration procedure based on the AIR algorithm (Woods et al., 1993). We have employed two operators: an erosion followed by a sequence of dilations. Fig. 2 shows the result of applying thresholding at 70% followed by an erosion and dilation to a PET and an MR image. All background voxels are removed while the head volume is preserved.

An advantage of this approach is that it is not necessary to select the intensity threshold very precisely. As demonstrated in Fig. 3, the differences between binary masks created using thresholds at 65%, 75%, and 85% of the image volume are quite significant. When erosion and dilation are used after thresholding, the resulting masks differ only slightly in the brain outline. Moreover, for a given scanner and acquisition protocol, the head-to-whole-volume ratio does not vary much. For routine clinical usage, it needs to be estimated only once and can be reused in subsequent studies.
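As an illustration, a minimal sketch of the mask construction (our naming and data layout, not the actual implementation): a quantile-based threshold followed by binary erosion with a spherical structuring element; dilation is analogous with the roles of foreground and background exchanged.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Build a binary head mask by quantile-based thresholding. 'q' is the
// assumed background-to-whole-volume ratio (e.g. 0.80 for PET, 0.70 for
// MR in our evaluation); the threshold is approximately the q-quantile of
// the intensities, i.e. the lowest intensity higher than q of all voxels.
std::vector<uint8_t> thresholdMask(const std::vector<float>& volume, double q) {
    std::vector<float> sorted(volume);
    const size_t k = static_cast<size_t>(q * (sorted.size() - 1));
    std::nth_element(sorted.begin(), sorted.begin() + k, sorted.end());
    const float threshold = sorted[k];
    std::vector<uint8_t> mask(volume.size());
    for (size_t v = 0; v < volume.size(); ++v)
        mask[v] = volume[v] > threshold ? 1 : 0;  // 1 = head, 0 = background
    return mask;
}

// Binary erosion with a spherical structuring element of radius r voxels
// (e.g. 3 for PET, 1 for MR). The mask is stored as one flat vector with
// dimensions nx, ny, nz; voxels outside the volume count as background.
std::vector<uint8_t> erode(const std::vector<uint8_t>& mask,
                           int nx, int ny, int nz, int r) {
    std::vector<uint8_t> out(mask.size(), 0);
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                uint8_t keep = 1;
                for (int dz = -r; dz <= r && keep; ++dz)
                    for (int dy = -r; dy <= r && keep; ++dy)
                        for (int dx = -r; dx <= r && keep; ++dx) {
                            if (dx*dx + dy*dy + dz*dz > r*r) continue; // outside sphere
                            const int X = x + dx, Y = y + dy, Z = z + dz;
                            if (X < 0 || X >= nx || Y < 0 || Y >= ny ||
                                Z < 0 || Z >= nz ||
                                !mask[(static_cast<size_t>(Z)*ny + Y)*nx + X])
                                keep = 0;  // a background neighbor erases this voxel
                        }
                out[(static_cast<size_t>(z)*ny + y)*nx + x] = keep;
            }
    return out;
}
```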

Fig. 1. A binary mask of a PET image created by thresholding at the 65% and 85% quantile.

In all registrations in our evaluation, we use thresholding at 80% for PET images and at 70% for MR images, followed by an erosion (spherical structuring element, radius 3 voxels for PET and 1 voxel for MR) and a sequence of three dilations (radius 3 voxels for PET, 1 voxel for MR). The additional time needed to erode and dilate a 3D image volume is about 3 s for PET images and 3–13 s for MR images¹ and is small in comparison with the gain in speed due to the masking, as demonstrated in Results.

Automatic extraction of brain volume from MR images

For MR images, several sophisticated tools exist that make use of the detailed anatomical pattern of MR images and are able to automatically extract the brain volume from a given MR image. Using an extracted brain instead of the whole MR image volume has potential advantages in a large reduction of the image volume and in the removal of areas that do not have a significant corresponding signal in the PET image (e.g., the neck). In our study, we used the Brain Extraction Tool (BET) developed by Smith (2002). BET performs a fully automatic and reliable brain extraction in a short time (approximately 15–35 s) using a surface-model approach.

Computation of mutual information with masking

Mutual information is computed from the marginal histograms H_R, H_S and the joint histogram H_{R,S}. They are created by taking samples v from the reference image volume, transforming them to samples v′ in the reslice image volume, and distributing the intensities of the samples v and v′ to the appropriate histogram bins by partial volume interpolation as described in Maes et al. (1999). For efficiency reasons, the volume with the smaller number of voxels is always used as the reference (static) volume (i.e., usually the PET image in the case of a PET-MR registration). When registration of a PET image to a reference MR image is requested, a MR-to-PET registration is performed first, after which the inverse of the resulting transformation is applied to the PET image. The similarity measure is computed only from the overlapping part of the images, that is, a pair of corresponding samples v, v′ is disregarded if either of them lies outside its image volume. When masking of voxels is used, the computation is adjusted by incorporating the information from the binary masks as follows:

(1) The smallest rectangular box (bounding box) containing all masked voxels² is determined in the reference volume (Fig. 4, top left) using the reference mask (Fig. 4, bottom left). Samples are taken only from this box.
(2) The volume within the bounding box in the reference image is sampled; non-masked voxels are skipped (e.g., the voxel v1 in Fig. 4). After this, all non-masked voxels in the reference image have been removed from the computation without the need for a transformation.
(3) Sample voxels v that pass the two preceding steps are transformed to corresponding samples v′ in the reslice volume. If v′ is not masked in the reslice volume, the pair v, v′ is removed from the computation (e.g., the voxel pair v2, v2′ in Fig. 4).

¹ All computational times in this paper were measured on a PC with a 1.1-GHz AMD Athlon processor.
² Masked voxels have a non-zero value in the corresponding binary mask.



Fig. 2. Computation of a mask for a PET (upper row) and MR (bottom row) image. The original image (a) is first thresholded at a certain quantile (b). Small spots in the background areas are removed by erosion (c). Holes and gulfs in the eroded image are then filled and the mask is enlarged by a dilation (d).

Voxels that pass all these steps (e.g., the voxel pair v3, v3′ in Fig. 4) are then used for the computation of mutual information. This approach markedly reduces the number of necessary transformations: only voxels that pass steps 1 and 2 need to be transformed. The time required to co-register two images with and without the preprocessing is compared in Results.
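The following is a minimal sketch of steps 1-3 (our naming and data layout; the actual implementation differs); the rigid-body coordinate mapping is assumed as a declared helper, and the histogram update itself is omitted.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Illustrative sketch of how the binary masks reduce the work per MI
// evaluation, following steps 1-3 above.
struct Volume {
    int nx, ny, nz;
    std::vector<float>   data;  // intensities (used for the histogram update)
    std::vector<uint8_t> mask;  // non-zero = voxel participates
    size_t index(int x, int y, int z) const {
        return (static_cast<size_t>(z) * ny + y) * nx + x;
    }
};

// Step 1: smallest box containing all masked voxels of the reference volume.
struct Box { int x0, y0, z0, x1, y1, z1; };
Box boundingBox(const Volume& ref) {
    Box b{ref.nx, ref.ny, ref.nz, -1, -1, -1};
    for (int z = 0; z < ref.nz; ++z)
        for (int y = 0; y < ref.ny; ++y)
            for (int x = 0; x < ref.nx; ++x)
                if (ref.mask[ref.index(x, y, z)]) {
                    b.x0 = std::min(b.x0, x); b.x1 = std::max(b.x1, x);
                    b.y0 = std::min(b.y0, y); b.y1 = std::max(b.y1, y);
                    b.z0 = std::min(b.z0, z); b.z1 = std::max(b.z1, z);
                }
    return b;
}

// Rigid-body mapping of reference voxel coordinates into the reslice
// volume; assumed helper, not shown.
std::array<double, 3> transform(const std::array<double, 6>& a, int x, int y, int z);

// Steps 2 and 3: sample only masked reference voxels within the bounding
// box, transform the survivors, and drop pairs whose counterpart is masked
// out. Accepted pairs would then be distributed into the joint histogram
// by partial volume interpolation (omitted here).
void collectSamples(const Volume& ref, const Volume& reslice,
                    const std::array<double, 6>& a, const Box& b) {
    for (int z = b.z0; z <= b.z1; ++z)
        for (int y = b.y0; y <= b.y1; ++y)
            for (int x = b.x0; x <= b.x1; ++x) {
                if (!ref.mask[ref.index(x, y, z)]) continue;  // step 2: skip v1-type voxels
                const auto p = transform(a, x, y, z);         // step 3: transform survivors
                const int X = static_cast<int>(p[0]), Y = static_cast<int>(p[1]),
                          Z = static_cast<int>(p[2]);
                if (X < 0 || X >= reslice.nx || Y < 0 || Y >= reslice.ny ||
                    Z < 0 || Z >= reslice.nz)
                    continue;                                 // outside overlap: disregard
                if (!reslice.mask[reslice.index(X, Y, Z)]) continue;  // v2-type pair dropped
                // ... update joint histogram with ref/reslice intensities here ...
            }
}
```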

Implementation

The registration algorithm has been implemented in C++. A command-line version runs under Linux, Solaris, and Windows. Volume Imaging in Neurological Research (VINCI), a graphical image analysis tool for Windows NT/2000/XP, has been adapted for configuration and execution of registration jobs and evaluation of registration results. VINCI was first presented as a visualization tool for high-resolution PET data (Vollmar et al., 2002). It has been designed for research purposes as well as for routine clinical usage: no deep understanding of the registration procedure and its parameters is necessary. On the other hand, many parameters can be tuned by an experienced user.

Registration progress is presented using a fusion of images and updated on the fly every few seconds with the current transformation (Fig. 5). In this way, a user can immediately see whether a registration evolves in the right direction. In case of problems, it is possible to pre-align images manually before automatic registration. Tools are provided for visual inspection of registration results: image volumes can be resliced and zoomed synchronously in an intuitive fashion with optional contour rendering. A leading manufacturer of PET systems has recently acquired a license to ship VINCI with their machines.

Fig. 3. Binary masks created from a PET image using thresholding only (upper row) and thresholding followed by erosion and dilation (bottom row).



Evaluation using simulated PET images

The described registration procedure and the preprocessing methods were evaluated using simulated PET images generated from a set of five high-resolution, high-contrast T1-weighted MR images (TE 5.5 ms, TR 9.8 ms, flip angle 8°) of normal patients acquired on the 1.5 T Gyroscan Intera scanner (Philips Medical Systems, Best, The Netherlands). The image dimensions were 256 × 256 × 200 voxels with a voxel size of 1.0 × 1.0 × 1.0 mm. The aim of this test was to demonstrate the speed-up achieved with the preprocessing and to show that the gain in speed does not come at the cost of lower robustness or accuracy. For each MR image, a simulated PET image was created using a fully automatic procedure described below. The procedure is based on the simulation procedure used by Kiebel et al. (1997) for cross-validation of SPM and AIR. High-resolution, high-contrast MR images were used to enable robust segmentation of cortex and white matter.

(1) The brain volume was extracted from the MR image using the Brain Extraction Tool (Smith, 2002).
(2) The extracted volume was automatically segmented into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) using the segmentation tool FAST from the FMRIB software package (Zhang et al., 2001).
(3) Voxel intensities of the MR image were scaled so that the mean intensities of GM, WM, and CSF were related by a ratio of 10:3:1.
(4) A random rigid-body transformation was applied and the image was resliced to dimensions of 128 × 128 × 47 voxels with a voxel size of 2.2 × 2.2 × 3.125 mm using trilinear interpolation.
(5) The image was smoothed by an isotropic 3D Gaussian filter with a FWHM of 8 mm.
(6) Additive Gaussian white noise with a variance equal to 30% of the mean intensity value of brain voxels was applied.

In this way, a misaligned MR-PET pair was created with a known exact transformation needed to realign it.
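As an illustration of steps (3) and (6), a minimal sketch under assumed conventions (the label values, data layout, and function names are ours; steps (4) and (5) are omitted):

```cpp
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

// Assumed FAST label convention (ordered by T1 intensity): 0 = non-brain,
// 1 = CSF, 2 = GM, 3 = WM. 'image' holds the BET-extracted MR volume.
enum : uint8_t { kBackground = 0, kCSF = 1, kGM = 2, kWM = 3 };

void simulatePetIntensities(std::vector<float>& image,
                            const std::vector<uint8_t>& labels) {
    // Step 3: scale tissue classes so that mean(GM):mean(WM):mean(CSF) = 10:3:1.
    double sum[4] = {0, 0, 0, 0};
    size_t cnt[4] = {0, 0, 0, 0};
    for (size_t v = 0; v < image.size(); ++v) {
        sum[labels[v]] += image[v];
        ++cnt[labels[v]];
    }
    const double target[4] = {0.0, 1.0, 10.0, 3.0};  // background, CSF, GM, WM
    double scale[4] = {0, 0, 0, 0};                  // background is zeroed
    for (int c = 1; c < 4; ++c)
        if (cnt[c] > 0) scale[c] = target[c] / (sum[c] / cnt[c]);
    double brainSum = 0; size_t brainCnt = 0;
    for (size_t v = 0; v < image.size(); ++v) {
        image[v] = static_cast<float>(image[v] * scale[labels[v]]);
        if (labels[v] != kBackground) { brainSum += image[v]; ++brainCnt; }
    }
    // Steps (4) and (5), random rigid-body reslicing and 8-mm Gaussian
    // smoothing, are omitted in this sketch.
    // Step 6: additive Gaussian white noise with variance equal to 30% of
    // the mean intensity of brain voxels.
    const double mean = brainCnt ? brainSum / brainCnt : 0.0;
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, std::sqrt(0.30 * mean));
    for (auto& v : image) v = static_cast<float>(v + noise(rng));
}
```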

Fig. 4. Reduction of image volumes in computation of mutual information using binary masks.


Fig. 5. A snapshot of a running registration in VINCI.

Table 1
Mean computational times and speed-up factors of registrations of simulated images

                     NOPREP   QMPET    QMMR      QM     BET   BETQM
Full-resolution
  Preprocessing (s)    0.00    3.69    7.45   11.14   32.23   36.01
  Optimization (s)   482.36  184.09  370.29  188.61  231.98  154.84
  Total time (s)     482.36  187.78  377.74  199.75  264.21  190.85
  Speed-up factor      1.00    2.57    1.28    2.41    1.83    2.53
Low-resolution
  Preprocessing (s)    0.00    3.69    7.45   11.14   32.23   36.01
  Optimization (s)   133.72   51.80  100.82   53.94   60.56   40.90
  Total time (s)     133.72   55.49  108.27   65.07   92.79   76.90
  Speed-up factor      1.00    2.41    1.24    2.05    1.44    1.74
                      (3.61)  (8.70)  (4.46)  (7.41)  (5.20)  (6.27)

The speed-up factors relative to the average time of the full-resolution NOPREP registrations are given in brackets.

The simulation sequence was repeated 30 times for each MR image. The rigid-body transformation was generated randomly from a normal distribution with mean μ = 0 and standard deviation σ = 9° for the rotation parameters and σ = 15 mm for the translation parameters. The parameters σ were chosen large enough to produce some failed registrations for a comparison of the robustness of the preprocessing enhancements. The maximal generated rotation about a single axis was 19.4°, the maximal translation was 38.5 mm. Each simulated PET image was co-registered to its corresponding MR image using the described registration procedure with six types of preprocessing:

(1) without any preprocessing (NOPREP);
(2) using thresholding, erosion, and dilation for PET images only, as described in Intensity thresholding (QMPET);
(3) using thresholding, erosion, and dilation for MR images only (QMMR);
(4) using thresholding, erosion, and dilation for PET and MR images (QM);
(5) using the program BET to extract the brain volume from MR images, as described in Automatic extraction of brain volume from MR images, with no preprocessing for PET images (BET);
(6) combining BET (5) for MR images and QMPET (2) for PET images (BETQM).

Together, 6 × 30 × 5 = 900 automatic registrations were performed. Registration errors were evaluated using a fixed set of 14 points located on the boundary of the brain volume in each MR image. For each registration, these points were transformed once using the exact original transformation and once using the transformation computed by the registration procedure. The mean and maximum Euclidean distances D_E and D_max between the two sets of transformed points were used as the measure of error.
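As an illustration, a sketch of the random transformation generation and of this error measure (our naming; the conversion from parameters to a matrix is an assumed helper, not shown):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <random>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;  // homogeneous 4x4 transform

// Rigid-body matrix from 6 parameters (3 translations in mm, 3 rotations
// in degrees); assumed helper, not shown.
Mat4 rigidBodyMatrix(const std::array<double, 6>& params);

// Draw a random rigid-body transformation as in the simulations:
// rotations ~ N(0, 9 deg), translations ~ N(0, 15 mm).
std::array<double, 6> randomRigidParams(std::mt19937& rng) {
    std::normal_distribution<double> rot(0.0, 9.0), trans(0.0, 15.0);
    return {trans(rng), trans(rng), trans(rng), rot(rng), rot(rng), rot(rng)};
}

// Apply the affine part of a homogeneous transform to a 3D point.
std::array<double, 3> apply(const Mat4& m, const std::array<double, 3>& p) {
    std::array<double, 3> q{};
    for (int r = 0; r < 3; ++r)
        q[r] = m[r][0]*p[0] + m[r][1]*p[1] + m[r][2]*p[2] + m[r][3];
    return q;
}

// D_E (mean) and D_max (maximum) over the 14 brain-boundary points: each
// point is mapped once with the exact transformation and once with the
// recovered one, and the Euclidean distances are compared.
void registrationError(const Mat4& exact, const Mat4& recovered,
                       const std::vector<std::array<double, 3>>& points,
                       double& meanErr, double& maxErr) {
    meanErr = 0.0; maxErr = 0.0;
    for (const auto& p : points) {
        const auto a = apply(exact, p), b = apply(recovered, p);
        const double d = std::sqrt((a[0]-b[0])*(a[0]-b[0]) +
                                   (a[1]-b[1])*(a[1]-b[1]) +
                                   (a[2]-b[2])*(a[2]-b[2]));
        meanErr += d;
        maxErr = std::max(maxErr, d);
    }
    meanErr /= points.size();
}
```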


Evaluation using clinical data

The performance of the registration procedure with the preprocessing enhancements was also evaluated on a data set of real MR and PET images. The aim of this test was to demonstrate that the speed results from the simulations can be achieved for real data as well. The test set consisted of PET and T1-weighted MR images of patients with various diseases (epilepsy, tumor, stroke, Parkinson's disease). The PET images were acquired using different radiotracers: 11C-methionine (11C-MET), 11C-flumazenil (11C-FLUMA), 2-18F-fluoro-2-deoxy-D-glucose (18F-FDG), 6-18F-fluoro-L-dopa (18F-DOPA), 15O-water, 11C-raclopride (11C-RACL), and N-11C-methyl-4-piperidyl acetate (11C-MP4A).

The PET images were acquired on an ECAT Exact HR scanner (Siemens Medical Systems, Erlangen, Germany) in three-dimensional data acquisition mode and reconstructed to 47 contiguous planes with 128 × 128 voxels (voxel size 2.2 × 2.2 × 3.125 mm). A Hanning filter with a cutoff frequency of 0.5 cycles/pixel was applied. Before reconstruction, corrections for random coincidences, scatter, and photon attenuation were applied (Wienhard et al., 1994). After a transmission scan of 10-min duration, data were recorded in 6 (11C-MET, 11C-FLUMA, 18F-FDG, 11C-RACL, 11C-MP4A) or 9 (18F-DOPA) frames of 10 min duration each, except for 15O-water, where 120 s of data were acquired into a single frame.

The T1-weighted MR scans were obtained on different scanners: a 1 T Magnetom Impact (Siemens Medical Systems), a 1 T Gyroscan NT, or a 1.5 T Gyroscan Intera (Philips Medical Systems). In-plane dimensions were 256 × 256 voxels (voxel size 1 × 1 mm); the number of planes was 64, 100, or 150 (slice thickness 2.5, 1.5, or 1 mm). Each PET image was automatically co-registered to its corresponding T1-weighted MR image using NOPREP, QM, BET, and BETQM. Altogether, 107 different PET-MR pairs were co-registered, that is, 4 × 107 = 428 registration runs were performed.

Results

Simulated PET images

Speed

Table 1 presents the average computational times and speed-up factors for each type of preprocessing.³ These results were computed from the successful registration cases and include the time necessary to threshold, erode, and dilate the images (for QMPET, QMMR, QM, BETQM) and the time needed for brain extraction (BET, BETQM). The fastest full-resolution registration was achieved using QMPET, BETQM, and QM, outperforming the basic registration method by a factor of approximately 2.5. If the full-resolution level is omitted, the longer time needed for BET preprocessing becomes more pronounced and causes BETQM to be slower in comparison with QM. An average low-resolution registration with QM takes slightly more than 1 min and is two times faster than the low-resolution NOPREP registration and even more than seven times faster than the full-resolution NOPREP.

Robustness

Table 2 summarizes the number of failed registrations for each type of registration. For this purpose, we consider a registration to have failed if the error measure D_max exceeds 2 mm. Note that the distinction between failed and successful registrations was very clear: the minimum error in the unsuccessful group exceeded 10 mm.

³ The computational times were measured on a PC with a 1.1-GHz AMD Athlon processor.



Table 2
Number of failed registrations for each type of preprocessing

                  NOPREP  QMPET  QMMR   QM  BET  BETQM
Total                150    150   150  150  150    150
Full-resolution
  Failed              52     68    27    4    0      5
  %                   35     45    18    3    0      4
Low-resolution
  Failed              56     72    28    4    0      6
  %                   37     48    19    3    0      4

It can be seen from the reduced number of failed registrations that all types of preprocessing except QMPET tend to improve the robustness of the registration. The results show that removing background voxels only in the PET image increases the number of failed registrations. Similar speed but significantly more robust results were achieved with QM and BETQM. BET turned out to be the most robust type of registration. For QM, BET, and BETQM, omitting the full-resolution level practically did not affect robustness.

Precision

An important question is whether the preprocessing results in a loss of accuracy. Table 3 compares the mean and maximum registration errors E{D_E} and max{D_max} over all successful registrations with and without preprocessing. The registration errors are smaller than for NOPREP for all tested types of preprocessing. The average precision of low-resolution registrations is only slightly worse than that of full-resolution ones.

Clinical data

Speed

Table 4 presents the average computational times and speed-up factors for each type of preprocessing. For the BETQM and QM preprocessing, an average speed-up factor of approximately 3 was achieved in full-resolution registrations. Without the full-resolution optimization step, an average time of less than 1 min was achieved with QM. This is more than 10 times faster than the full-resolution NOPREP registration.

Robustness

The quality of alignment was evaluated by visual inspection of a fusion of the registered image pairs in transversal, coronal, and sagittal slices.

Table 3
Mean registration errors over all successful registrations

                    NOPREP  QMPET  QMMR    QM   BET  BETQM
Full-resolution
  E{D_E} (mm)         0.71   0.52  0.48  0.47  0.30   0.29
  max{D_max} (mm)     1.26   0.94  0.84  0.62  0.53   0.51
Low-resolution
  E{D_E} (mm)         0.82   0.59  0.51  0.50  0.36   0.34
  max{D_max} (mm)     1.91   1.70  1.90  1.49  0.68   0.78

Table 4
Mean computational times and speed-up factors of registrations of clinical images

                     NOPREP      QM     BET   BETQM
Full-resolution
  Preprocessing (s)       0     7.1    20.5    23.9
  Optimization (s)    558.5   178.6   254.5   157.9
  Total time (s)      558.5   185.7   275.0   181.8
  Speed-up factor      1.00    3.01    2.03    3.07
Low-resolution
  Preprocessing (s)       0     7.1    20.5    23.9
  Optimization (s)    132.3    46.3    66.9    40.4
  Total time (s)      132.3    53.4    87.4    64.3
  Speed-up factor      1.00    2.48    1.51    2.06
                      (4.22) (10.46)  (6.39)  (8.69)

The speed-up factors relative to the average time of the full-resolution NOPREP registrations are given in brackets.

The rate of unsuccessful registrations for clinical data was very low (see Table 5). All types of preprocessing gave more robust results than the basic NOPREP. The only exception was one 18F-DOPA image where the registration succeeded with NOPREP but failed with QM and BETQM. No failed registration was recorded for BET.

Discussion

For both simulated and clinical images, the largest speed-up was achieved using the QM and BETQM preprocessing. For clinical data, an average speed-up factor of 3 was achieved with QM and BETQM using a full-resolution registration. A speed-up factor of 2.5 was achieved with QM in registrations without the full-resolution step. This allows a registration to be performed in less than a minute on a moderately fast PC. Note that the time for visual checking of the registration result must also be considered in the true overall processing time; it depends on the physician's experience and is usually small. Both QM and BETQM are more robust than the basic registration procedure (NOPREP) and are therefore suitable as a standard preprocessing step in the described registration method.

Excellent robustness was achieved using the BET preprocessing, where no failed registration was recorded. Although not as fast as QM or BETQM, it is the method of choice when other approaches fail to find a good match.

Table 5
Number of failed registrations for each type of PET (rows) and each type of preprocessing (columns)

PET radiotracer   Total no.   No. of unsuccessful registrations
                              NOPREP     QM    BET  BETQM
15O-water              62          2      0      0      1
11C-MET                 8          1      0      0      0
18F-FDG                12          0      0      0      0
11C-RACL                6          0      0      0      0
18F-DOPA                8          0      1      0      1
11C-MP4A                8          0      0      0      0
11C-FLUMA               3          1      0      0      0
Total                 107          4      1      0      2
                                3.7%   0.9%   0.0%   1.9%


According to the results achieved using simulated PET images, all preprocessing methods increase registration accuracy in comparison with the basic registration without any preprocessing (NOPREP). Moreover, even low-resolution registrations with some kind of preprocessing are on average more accurate than the full-resolution NOPREP. However, the inference about accuracy is based on tests with simulated images only and should therefore be treated with care. A confirmatory test of accuracy using real images (e.g., with the help of fiducial markers) is needed.

Another aspect for comparison of the presented types of preprocessing is the range of possible applications. BET and BETQM can only be used in registrations where an MR image is involved, whereas QM does not require a particular image modality and could probably also be used in applications without MR images (e.g., co-registration of intra-individual PET images acquired using different radiotracers, or correction of slight motion in a sequence of PET frames).

Conclusion

In this paper, we have presented and evaluated a technique for improving the registration speed of multi-modal PET-MR registration of human brain images. The technique is an extension of registration methods based on multi-resolution maximization of mutual information. We have demonstrated that elimination of background voxels using a quantile-based intensity thresholding followed by a sequence of morphological operations, optionally combined with an automated MR brain extraction, results in a speed-up by a factor of 3. It also tends to improve the robustness and accuracy of registration. In combination with a low-resolution optimization scheme, registration time can be decreased more than 10-fold at the cost of a small decrease in accuracy. We propose that the described preprocessing should be used routinely in registration of PET and MR images of human brain.

The described registration method has been implemented in an application with a convenient graphical user interface and a real-time on-screen visual presentation of registration progress. This makes it a powerful yet easy-to-use tool for both clinical and research environments.

References

Alpert, N.M., Berdichevsky, D., Levin, Z., Morris, E.D., Fischman, A.J., 1996. Improved methods for image registration. NeuroImage 3 (1), 10–18.
Gonzales, R.C., Woods, R.E., 1993. Digital Image Processing. Addison-Wesley Publishing.
Kiebel, S.J., Ashburner, J., Poline, J.B., Friston, K.J., 1997. MRI and PET coregistration—a cross validation of statistical parametric mapping and automated image registration. NeuroImage 5 (4), 271–279.
Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P., 1997. Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging 16 (2), 187–198.
Maes, F., Vandermeulen, D., Suetens, P., 1999. Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information. Medical Image Analysis 3 (4), 373–386.
Muzik, O., da Silva, E.A., Juhasz, C., Chugani, D.C., Shah, J., Nagy, F., Canady, A., von Stockhausen, H.M., Herholz, K., Gates, J., Frost, M., Ritter, F., Watson, C., Chugani, H.T., 2000. Intracranial EEG versus flumazenil and glucose PET in children with extratemporal lobe epilepsy. Neurology 54 (1), 171–179.
Nelder, J.A., Mead, R., 1965. A simplex method for function minimization. Computer Journal 7, 308–313.
Pietrzyk, U., 2001. Registration of MRI and PET images for clinical applications. In: Hajnal, J., Hawkes, D., Hill, D. (Eds.), Medical Image Registration. CRC Press, Boca Raton, pp. 199–216.
Pietrzyk, U., Herholz, K., Fink, G., Jacobs, A., Mielke, R., Slansky, I., Wuerker, M., Heiss, W.-D., 1994. An interactive technique for three-dimensional image registration: validation for PET, SPECT, MRI and CT brain studies. Journal of Nuclear Medicine 35, 2011–2018.
Pietrzyk, U., Herholz, K., Schuster, A., von Stockhausen, H.-M., Lucht, H., Heiss, W.-D., 1996. Clinical applications of registration and fusion of multi-modality brain images from PET, SPECT, CT and MRI. European Journal of Radiology 21, 174–182.
Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., 2000. Interpolation artefacts in mutual information-based image registration. Computer Vision and Image Understanding 77 (2), 211–232.
Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., 2001. Mutual information matching in multiresolution contexts. Image and Vision Computing 19 (1–2), 45–52.
Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., 2003. Mutual-information-based registration of medical images: a survey. IEEE Transactions on Medical Imaging 22 (8), 986–1004.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992. Numerical Recipes in C, 2nd edition. Cambridge Univ. Press, Cambridge.
Smith, S.M., 2002. Fast robust automated brain extraction. Human Brain Mapping 17 (3), 143–155.
Sonka, M., Hlavac, V., Boyle, R., 1994. Image Processing, Analysis and Machine Vision. Chapman & Hall Computing, London.
Studholme, C., Hill, D.L.G., Hawkes, D.J., 1997. Automated three-dimensional registration of magnetic resonance and positron emission tomography brain images by multiresolution optimization of voxel similarity measures. Medical Physics 24 (1), 25–35.
Thiel, A., Herholz, K., Koyuncu, A., Ghaemi, M., Kracht, L.W., Habedank, B., Heiss, W.D., 2001. Plasticity of language networks in patients with brain tumors: a positron emission tomography activation study. Annals of Neurology 50 (5), 620–629.
Thurfjell, L., Lau, Y.H., Andersson, J.L.R., Hutton, B.F., 2000. Improved efficiency for MRI-SPET registration based on mutual information. European Journal of Nuclear Medicine 27 (7), 847–856.
Vollmar, S., Michel, C., Treffert, J.T., Newport, D.F., Casey, M., Knoess, C., Wienhard, K., Liu, X., Defrise, M., Heiss, W.-D., 2002. HeinzelCluster: accelerated reconstruction for FORE and OSEM3D. Physics in Medicine and Biology 47 (15), 2651–2658.
Wells, W.M., Viola, P., Atsumi, H., Nakajima, S., Kikinis, R., 1996. Multi-modal volume registration by maximization of mutual information. Medical Image Analysis 1 (1), 35–51.
West, J., Fitzpatrick, J.M., Wang, M.Y., Dawant, B.M., Maurer, C.R., Kessler, R.M., Maciunas, R.J., Barillot, C., Lemoine, D., Collignon, A., Maes, F., Suetens, P., Vandermeulen, D., van den Elsen, P.A., Napel, S., Sumanaweera, T.S., Harkness, B., Hemler, P.F., Hill, D.L.G., Hawkes, D.J., Studholme, C., Maintz, J.B.A., Viergever, M.A., Malandain, G., Pennec, X., Noz, M.E., Maguire, G.Q., Pollack, M., Pelizzari, C.A., Robb, R.A., Hanson, D., Woods, R.P., 1997. Comparison and evaluation of retrospective intermodality brain image registration techniques. Journal of Computer Assisted Tomography 21 (4), 554–566.
Wienhard, K., Dahlbom, M., Eriksson, L., Michel, C., Bruckbauer, T., Pietrzyk, U., Heiss, W.D., 1994. The ECAT EXACT HR: performance of a new high resolution positron scanner. Journal of Computer Assisted Tomography 18 (1), 110–118.
Woods, R.P., Mazziotta, J.C., Cherry, S.R., 1993. MRI-PET registration with automated algorithm. Journal of Computer Assisted Tomography 17 (4), 536–546.
Woods, R.P., Grafton, S.T., Holmes, C.J., Cherry, S.R., Mazziotta, J.C., 1998. Automated image registration: I. General methods and intrasubject, intramodality validation. Journal of Computer Assisted Tomography 22 (1), 139–152.
Zhang, Y.Y., Brady, M., Smith, S., 2001. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging 20 (1), 45–57.