Choosing the ART relaxation parameter for Clear-PEM 2D image reconstruction

Computer Methods and Programs in Biomedicine 98 (2010) 183–190

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

J. Mesquita a,b,1, N. Matela a,∗, N. Oliveira a, M.V. Martins a, P. Almeida a

a Universidade de Lisboa, Faculdade de Ciências, Instituto de Biofísica e Engenharia Biomédica (IBEB), Campo Grande, 1749-016 Lisboa, Portugal
b Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia, 2829-516 Caparica, Portugal

Article history:
Received 16 April 2009
Received in revised form 8 October 2009
Accepted 13 November 2009

Keywords:
Image reconstruction
Iterative algorithms
Emission tomography
Positron emission mammography
ART
OSEM
MLEM
Relaxation parameter

Abstract

The Algebraic Reconstruction Technique (ART) is an iterative image reconstruction algorithm. During the development of the Clear-PEM device, a PET scanner designed for the evaluation of breast cancer, multiple tests were done in order to optimise the reconstruction process. The comparison between ART, MLEM and OSEM indicates that ART can perform faster and with better image quality than the other, most common algorithms. It is claimed in this paper that if ART's relaxation parameter is carefully adjusted to the reconstruction procedure it can produce high quality images in short computational time. This is confirmed by showing that with the relaxation parameter evolving as a logarithmic function, ART can match in terms of image quality and overcome in terms of computational time the performance of MLEM and OSEM algorithms. However, this study was performed only with simulated data and the level of noise with real data may be different.

© 2009 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

The high incidence of breast cancer and the relatively low specificity of the conventional detection methods (X-ray and ultrasound mammography) suggest the need for new imaging techniques. The Clear-PEM device, currently being developed for positron emission mammography (PEM) [1], answers this need by allowing the evaluation of early malignant neoplasms in the breast and of ganglion loco-regional invasion. In nuclear medicine, obtaining tomographic images from projections can be performed using either analytical or



iterative reconstruction algorithms. There are two types of iterative algorithms: algebraic and statistical. The Algebraic Reconstruction Technique (ART) is the focus of this study. This algorithm is the most commonly used algebraic iterative image reconstruction algorithm and is the earliest image reconstruction method from a large family of methods called constraint-satisfaction methods. It was developed by Kaczmarz to solve systems of linear equations and was first applied to medical image reconstruction by Herman [2]. Specifically, we aimed at a detailed study of the influence of the relaxation parameter on the performance of an ART algorithm when compared with two statistical iterative

Corresponding author. Tel.: +351 21 7500177; fax: +351 21 7500030. E-mail address: [email protected] (N. Matela). 1 Tel.: +351 21 2948300; fax: +351 21 2954461. 0169-2607/$ – see front matter © 2009 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.cmpb.2009.11.010


algorithms for 2D image reconstruction, MLEM (maximum-likelihood expectation-maximization) [3,4] and OSEM (ordered subsets expectation-maximization) [5], for the specific case of noisy data acquired using planar detectors. This study also intends to provide some clues for finding a criterion for varying the relaxation parameter value along the iterations, in order to maximise image quality in the shortest amount of time possible.

The task of recovering an object from its projections implies the construction of some specific variables. Let $Y_i$ be the sum of all events detected along the $i$th line over the object, $f_j$ the activity in the $j$th voxel of the object and $a_{ij}$ the probability that an annihilation in $j$ is detected in a line of response (LOR) that contributes to the integral $Y_i$. Under these conditions, each line integral of the activity results from the sum (Eq. (1))

$$Y_i = \sum_j a_{ij} f_j \qquad (1)$$

We can understand each measurement $Y_i$ as a hyperplane on which the solution $f$ must lie. There are as many hyperplanes as projections, and the solution $f$ must belong simultaneously to all hyperplanes, which means that the solution lies on the intersection of all hyperplanes. In the presence of noiseless data, if we have at least as many projections as image pixels to calculate, the intersection of all hyperplanes will be a single point and the solution will be unique.

ART's iterative process starts with the definition of a first estimate of the function $f$. This initial estimate corresponds to a vector in the space defined by the hyperplanes. The first iteration of the algorithm projects this vector onto one of the hyperplanes, by determining the point on the hyperplane closest to the current estimate. This new point is then used as the estimate for the next iteration, which consists in projecting it onto another hyperplane. This procedure is repeated for all hyperplanes, forcing the estimate to converge to the desired solution (Fig. 1). The numerical expression that represents this operation is given by Eq. (2), where $k$ is the iteration number and $a_{ij}$ is the model of the emission and detection processes, corresponding to the probability that a detection $Y_i$ originated from an event in pixel $j$:

$$f_j^k = f_j^{k-1} + \frac{Y_i - \sum_l a_{il} f_l^{k-1}}{\sum_l a_{il}^2}\, a_{ij} \qquad (2)$$

By multiplying the estimate resulting from the previous iteration by the system matrix, we are calculating the expected measurements from the estimated activity. After that, we compare these expected measurements with the real measurements, Yi , using subtraction. The next step consists in multiplying the difference obtained by the respective system matrix element, performing the backprojection operation. Finally, we update the estimate from the previous iteration by adding the correction factor obtained from the backprojection operation.
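The projection–comparison–backprojection–update cycle just described can be sketched as a plain Kaczmarz sweep over a toy system. The function name `art_sweep` and the 2 × 2 example system are illustrative assumptions, not the Clear-PEM implementation.

```python
import numpy as np

def art_sweep(f, A, Y):
    """One full ART (Kaczmarz) sweep over all measurements, per Eq. (2):
    project the current estimate onto each measurement hyperplane in turn."""
    f = f.copy()
    for i in range(A.shape[0]):
        a_i = A[i]                    # i-th row of the system matrix
        norm = a_i @ a_i              # sum_l a_il^2
        if norm == 0.0:
            continue
        residual = Y[i] - a_i @ f     # compare expected and measured data
        f += (residual / norm) * a_i  # backproject the scaled difference
    return f

# Toy 2x2 noiseless example: the two hyperplanes intersect in a single point.
A = np.array([[1.0, 0.5], [0.2, 1.0]])
f_true = np.array([3.0, 2.0])
Y = A @ f_true
f = np.zeros(2)
for _ in range(50):
    f = art_sweep(f, A, Y)
print(np.round(f, 4))   # converges to [3. 2.]
```

With consistent (noiseless) data the sweep converges to the unique intersection point, exactly as the geometrical description above predicts.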

Fig. 1 – An inverse problem with two pixels with an unknown activity (f1 and f2 ) and two measurements (Y1 and Y2 ) defining two hyperplanes. The solution lies on the intersection of all hyperplanes (adapted from [14]).

If we analyse the inverse problem using a geometrical approach, we realise that convergence speed is highly dependent on the hyperplanes considered to find the solution. The mathematical explanation is that the more perpendicular two consecutive hyperplanes are, the more new information is added to solve the problem, as opposed to the situation where very close hyperplanes are used, which carry almost the same information and do not contribute much to the solution [6].

When noise is present in the acquired data, the assumption that all hyperplanes intersect in a single point is generally not true, since each hyperplane is slightly shifted from its original position (Fig. 2 (left)). In this situation, multiple intersections corresponding to partial solutions will probably arise, i.e., solutions that satisfy part of the constraints but not all of them. This is a problem for the iterative procedure, since it will not converge to a unique solution, switching cyclically between partial solutions (Fig. 2 (middle)). This kind of behaviour can be limited by introducing a relaxation parameter (λ) in the algorithm expression used to update the solution $f$ (Eq. (3)):

$$f_j^k = f_j^{k-1} + \lambda_k\, \frac{Y_i - \sum_l a_{il} f_l^{k-1}}{\sum_l a_{il}^2}\, a_{ij} \qquad (3)$$

In order to keep the result in the space between the different hyperplanes, minimising the distance to the different partial solutions, this relaxation parameter must have a value between 0 and 1, limiting the update process. By doing this, we prevent the algorithm from entering a loop, forcing convergence to a point somewhere in the middle of the partial solutions (Fig. 2 (right)). The choice of the relaxation parameter value is critical to the quality and speed of image reconstruction using ART


Fig. 2 – Hyperplanes shifted due to the presence of noise in the acquisitions (left); attempt to solve the corresponding inverse problem but not converging to any solution (middle); problem with under-relaxation of ART (right) (adapted from [14]).

[2]. However, there is no generally accepted method to find the optimum value of λ, and several approaches have been published [2,6–9]. Nonetheless, it is unanimously accepted that the choice of λ depends on the purpose of the image reconstruction. High values of λ allow fast convergence but also noisy reconstructed images (mostly salt-and-pepper noise). On the contrary, low values of λ produce smooth images but slow convergence. λ can have a fixed value over the whole iterative process or it can be iteration dependent. Its value can be chosen by previous evaluation for a certain kind of examination, or it can be calculated as a function of some parameters measured from the image obtained in the previous iteration.

The accepted gold standard iterative algorithm in image reconstruction for emission tomography is the maximum-likelihood expectation-maximization (MLEM) algorithm. This method has its mathematical foundations in the statistical nature of the data acquired in PET. The concept was developed by Dempster et al. [10] and later applied to image reconstruction for emission tomography by Shepp and Vardi [3] and by Lange and Carson [4]. The MLEM method is based on the assumption that the number of photon pairs emitted in each voxel $j$ is given by $f_j$ and follows a Poisson distribution. It is possible to obtain a likelihood function that describes the probability of observing a measurement described by vector $Y$, if a certain activity distribution $f_j$ is true. The goal of the MLEM algorithm is to find the activity distribution $f_j$ that maximises the likelihood function, and this is achieved iteratively by using Eq. (4)

$$f_j^{(k)} = \frac{f_j^{(k-1)}}{\sum_i a_{ij}} \sum_i a_{ij} \frac{Y_i}{\sum_t a_{it} f_t^{(k-1)}} \qquad (4)$$

Analysing Eq. (4), we can see that the sum in the denominator of the second fraction ($\sum_t a_{it} f_t^{(k-1)}$) calculates the projections that would be measured if the estimate resulting from the previous iteration were correct. These estimated projections are compared to the measured projections by dividing the measured values ($Y_i$) by the result of the projection step. The results of these comparisons are then summed, weighted by the system matrix elements, in

order to obtain the update factor. By multiplying the update factor by the estimate obtained from the previous iteration, the update step is performed.

One of the limitations of the MLEM algorithm is its slow convergence speed. The most successful way of addressing this problem is the use of a subset strategy. This kind of approach consists in dividing the complete set of LORs into subsets. This approach, when applied to MLEM, resulted in a new algorithm called ordered subsets expectation-maximization (OSEM), proposed by Hudson and Larkin in 1994 [5]. The projections grouped in each subset should be chosen in order to maximise the angular difference between them. In this way, it is guaranteed that each subset will contribute the maximum amount of new information possible. The equation describing OSEM (Eq. (5)) is similar to that of MLEM. The only difference between the two equations rests in the fact that the two sums over the LORs are not applied to the whole set of LORs but only to the LORs that belong to the subset $S_n$

$$f_j^{(k)} = \frac{f_j^{(k-1)}}{\sum_{i \in S_n} a_{ij}} \sum_{i \in S_n} a_{ij} \frac{Y_i}{\sum_t a_{it} f_t^{(k-1)}} \qquad (5)$$

We can see that an update of the activity distribution estimate is made for each subset. This means that if we have divided the complete set of LORs into n subsets, after all LORs are considered the algorithm will have performed n times more updates of the estimated image than the MLEM algorithm. Obviously, when the number of subsets is fixed to one, the OSEM algorithm reduces to the MLEM algorithm. Usually, each update, corresponding to one subset, is called a sub-iteration. One iteration of OSEM, composed of n sub-iterations, takes a time comparable to the time it takes to iterate MLEM once, but consists of n times more updates of the estimated activity distribution.

The distribution of the LORs by the subsets should be done so as to have a maximum angular distance between the LORs in each subset, to include as much new information as possible in each sub-iteration. The order by which each subset is accessed should follow the same criterion. In this way, it is possible to include the highest quantity of new information in each update. The method by which we have divided our LORs into subsets and defined the order of the subsets was also based on the method presented in [6].

When compared with other iterative algorithms, such as MLEM and OSEM, ART lost some of its popularity, especially due to the difficulties of its optimisation [2]. However, it is claimed in [6] that if a correct optimisation is performed, the use of ART results in an image reconstruction process with quality similar to the ones obtained using MLEM and OSEM, in times which are one order of magnitude lower.
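The MLEM and OSEM updates of Eqs. (4) and (5) can be sketched as a single routine, since MLEM is the special case of a single subset containing all LORs. The toy system matrix, the subset split and the helper name `osem` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def osem(Y, A, subsets, n_iter, f0=None):
    """OSEM per Eq. (5): one multiplicative update per subset (sub-iteration).
    With a single subset holding all LORs this reduces to MLEM, Eq. (4)."""
    f = np.ones(A.shape[1]) if f0 is None else f0.astype(float).copy()
    for _ in range(n_iter):
        for S in subsets:
            As = A[S]
            proj = As @ f                      # forward-project current estimate
            # Compare measured to estimated projections by division (guard zeros).
            ratio = np.divide(Y[S], proj, out=np.zeros_like(proj), where=proj > 0)
            # Backproject the ratios, normalise by the subset sensitivity sum_i a_ij.
            f = f * (As.T @ ratio) / np.clip(As.sum(axis=0), 1e-12, None)
    return f

# Toy noiseless system: 4 LORs, 3 pixels.
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.5, 0.5, 1.0],
              [1.0, 0.5, 0.0]])
f_true = np.array([2.0, 1.0, 3.0])
Y = A @ f_true

f_mlem = osem(Y, A, [np.arange(4)], 2000)                        # MLEM: 1 subset
f_osem = osem(Y, A, [np.array([0, 2]), np.array([1, 3])], 1000)  # OSEM: 2 subsets
print(np.round(f_mlem, 3), np.round(f_osem, 3))
```

Both runs recover the true activity on this consistent toy problem; the OSEM run performs two updates per iteration and so needs fewer iterations, mirroring the acceleration described above.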

Table 1 – Sequences of λ tested ("it" stands for iteration number).

Sequence number | Expression for calculating λ
1  | λ = 0.1 − 0.01 × (it − 1), λmin = 0.01
2  | λ = 0.1 e^(−(it−1)), λmin = 0.01832
3  | λ = ln(1.1)/it
4  | λ = 0.1 − 0.005 × (it − 1)
5  | λ = 0.1
6  | λ = 0.05
7  | λ = 0.005
8  | λ = 0.002
9  | λ = 1 − 0.05 × (it − 1)
10 | λ = 1
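The ten sequences of Table 1 can be written out programmatically. This sketch assumes the listed minimum values act as clamps on the decaying expressions, and writes sequence 3 as read from the table; neither assumption comes from code published with the paper.

```python
import math

# One-based iteration index `it`, as in Table 1; min values clamp the decay.
LAMBDA_SEQUENCES = {
    1:  lambda it: max(0.1 - 0.01 * (it - 1), 0.01),
    2:  lambda it: max(0.1 * math.exp(-(it - 1)), 0.01832),
    3:  lambda it: math.log(1.1) / it,
    4:  lambda it: 0.1 - 0.005 * (it - 1),
    5:  lambda it: 0.1,
    6:  lambda it: 0.05,
    7:  lambda it: 0.005,
    8:  lambda it: 0.002,
    9:  lambda it: 1.0 - 0.05 * (it - 1),
    10: lambda it: 1.0,
}

# Tabulate a few iterations of all ten sequences.
for it in (1, 5, 20):
    print(it, [round(LAMBDA_SEQUENCES[s](it), 4) for s in sorted(LAMBDA_SEQUENCES)])
```

Printing a few iterations makes the qualitative difference visible: sequences 1–4 decay towards small λ, 5–8 are small constants, 9 decays from 1, and 10 is the unrelaxed case.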

Table 2 – Reconstruction times on a P4 2 GHz computer.

Reconstruction algorithm | Reconstruction time per iteration (s)
ART  | 60
MLEM | 142
OSEM | 53

2. Methods

The algorithms were evaluated using emission data simulated with the Geant4 Monte Carlo toolkit developed specifically for Clear-PEM [11]. With the purpose of analysing the performance of the reconstruction algorithms (ART, MLEM and OSEM), the breast of the NURBS Cardiac Torso (NCAT) phantom [12] was used in the simulation framework to generate an 18F-FDG activity distribution in the breast, including two lesions. We have simulated two 14.2 cm × 16.1 cm detector plates placed 10 cm apart, acquiring data in two orthogonal projection angles. The simulated lesions consisted of two homogeneous spheres with a diameter of 10 mm, one at the centre of the breast and the other 5 cm closer to the chest wall. The simulations included depth of interaction (DoI) information, Compton scattering, positron range and background activity. The uptakes were 4.81 kBq/ml for the background and 20.35 kBq/ml for the lesions, representative of physiological uptake. The simulated data were reconstructed with the three different image reconstruction algorithms implemented. The image reconstruction with the OSEM algorithm was performed with 4 subsets. A description of their implementation can be found in [13]. The system matrix values, $a_{ij}$, needed during the reconstruction process were calculated using the tube-driven method [13].

As said before, the value of λ is chosen in order to optimise the algorithm convergence speed and to maximise image uniformity. It is not an easy task to find the relaxation parameter that best fits the reconstruction process [2]. There is not a single choice for the "best" relaxation parameter. This choice depends on the medical purpose of the reconstruction, the method of data acquisition, the existence of noise in the measurements, and the number of iterations that we intend to perform [6]. λ may be a function of the kth iteration, so we assume that for every integer k there is a positive real number λ(k).

Using this hypothesis, ART performance was tested for 20 iterations with 10 different sequences of the relaxation parameter (Table 1). We have made this choice based on previous tests done during the implementation of the reconstruction algorithms for the Clear-PEM scanner [13]. These tests showed that the maximum and minimum λ values that enabled good results were 1 and 0.005, respectively.
We have selected three figures of merit (FoMs) that we believe best evaluate the algorithms' performance: image uniformity, contrast and noise. To calculate the values for these three criteria, we have drawn a region of interest (RoI) over the lesions and several RoIs over the background, and we have measured the average number of counts and the standard deviation inside each RoI. The numerical expressions for image uniformity, contrast and noise are given by Eqs. (6)–(8), respectively:

$$U = \frac{\overline{RoI}_{lesion}}{\sigma_{lesion}} \qquad (6)$$

$$C = \frac{\overline{RoI}_{lesion} - \overline{RoI}_{background}}{\overline{RoI}_{lesion} + \overline{RoI}_{background}} \qquad (7)$$

$$Noise = \sigma_{background} = \sqrt{Variance} \qquad (8)$$

In the previous equations, $\overline{RoI}_{lesion}$ and $\overline{RoI}_{background}$ stand for the average number of counts inside the region of interest around the lesion and on the background, respectively, and $\sigma_{lesion}$ and $\sigma_{background}$ stand for the standard deviation inside the region of interest around the lesion and on the background. For each image, obtained at the end of each iteration, two circular RoIs were applied, one centred on the lesion and the other on the background. These two RoIs were drawn, stored and then applied to all the images used in this study.
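The three figures of merit of Eqs. (6)–(8) can be sketched directly from RoI pixel samples. The synthetic samples below are illustrative, drawn around the simulated uptake values (4.81 and 20.35 kBq/ml), and are not the paper's actual RoIs.

```python
import numpy as np

def figures_of_merit(lesion_roi, background_roi):
    """Uniformity (Eq. 6), contrast (Eq. 7) and noise (Eq. 8) from two RoIs."""
    m_les, s_les = lesion_roi.mean(), lesion_roi.std()
    m_bkg, s_bkg = background_roi.mean(), background_roi.std()
    uniformity = m_les / s_les                      # Eq. (6)
    contrast = (m_les - m_bkg) / (m_les + m_bkg)    # Eq. (7)
    noise = s_bkg                                   # Eq. (8)
    return uniformity, contrast, noise

# Sanity check against the simulated contrast:
# (20.35 - 4.81) / (20.35 + 4.81) ~= 0.618, the reference line of Fig. 5.
rng = np.random.default_rng(1)
lesion = rng.normal(20.35, 1.0, 500)
background = rng.normal(4.81, 0.5, 500)
u, c, n = figures_of_merit(lesion, background)
print(round(c, 3))
```

On these samples the computed contrast lands close to the simulated value of 0.618, confirming that Eq. (7) reproduces the reference contrast used later in the Results section.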

3. Results

3.1. Reconstruction times

The reconstruction was processed on a P4 2 GHz computer. For ART, MLEM and OSEM, 20 iterations were performed. The computation times per iteration for each algorithm are presented in Table 2.


Fig. 3 – Image uniformity (ROIlesion / lesion ) plotted as function of iteration number. Results obtained for the 10 sequences of ART and for MLEM and OSEM.

Fig. 5 – Contrast plotted as function of iteration number. Results obtained for the 10 sequences of ART and for MLEM and OSEM.

3.2. Uniformity vs. iteration number

The plot of the image uniformity (U) against iteration number obtained with each relaxation parameter sequence and with each image reconstruction algorithm is presented in Fig. 3. To decide where to stop the iterative process of each algorithm we had to define a convergence criterion. We have decided to define this criterion based on the uniformity value, considering that convergence was reached when uniformity variation was less than 1% on the subsequent iteration. Based on this criterion we present in Fig. 4 the computation times required by each sequence or algorithm to reach convergence.
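The 1% uniformity-variation stopping rule just described can be written as a short helper. The function name and the sample uniformity curve are illustrative assumptions, not values from Fig. 3.

```python
def convergence_iteration(uniformity, tol=0.01):
    """Return the first iteration (1-based) at which the uniformity variation
    on the subsequent iteration drops below `tol` (1%), per the criterion
    described in the text; return the run length if it never does."""
    for k in range(len(uniformity) - 1):
        if abs(uniformity[k + 1] - uniformity[k]) < tol * abs(uniformity[k]):
            return k + 1
    return len(uniformity)

# Hypothetical uniformity curve flattening out after a few iterations.
u = [1.0, 1.5, 1.8, 1.95, 1.96, 1.965]
print(convergence_iteration(u))   # -> 4
```

Applying such a rule per sequence yields the convergence iteration used for the timing comparison of Fig. 4 and for the convergence-point plots of Figs. 6 and 9.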

3.3. Contrast vs. iteration number

The variation of the calculated contrast values along the iterative process, for each relaxation parameter sequence and each algorithm, is presented in Fig. 5. We have included a line corresponding to the simulated contrast, which is 0.618, assuming a background uptake of 4.81 kBq/ml and a lesion uptake of 20.35 kBq/ml.

Fig. 6 – Contrast plotted as function of noise. Results obtained for the 10 sequences of ART and for MLEM and OSEM after reaching convergence.

Fig. 4 – Bar graph illustrating the time taken to reach convergence.

3.4. Contrast vs. uniformity

As we have said before, we have used the uniformity index to decide when each image reconstruction procedure has converged. However, Fig. 3 shows that the value to which each image reconstruction procedure converges is not always the same. To evaluate simultaneously the values of uniformity index and contrast obtained after convergence, we present in Fig. 6 the values obtained at the convergence iteration. With the help of this plot we can see that sequences 1, 4 and 5 converge to higher uniformity values and, simultaneously, allow higher contrast values.


Fig. 7 – Axial central planes of the reconstructed volumes obtained with the NCAT phantom: (a) 2nd iteration with Seq. 2, (b) 20th iteration with Seq. 7, (c) 20th iteration with Seq. 8, (d) 5th iteration with Seq. 9 and (e) 13th iteration with Seq. 10.

3.5. Contrast vs. image noise

In many situations, images with higher contrast are not useful images, since they may also present higher noise levels. We can verify this in Fig. 7, where we present the axial central planes of the reconstructed volumes of the NCAT breast phantom when convergence was reached, for some of the sequences used. To evaluate the evolution of image contrast with noise level, we have plotted contrast against noise (Fig. 8). As seen in Fig. 7, some of the sequences produce images with higher noise levels. Those sequences are located to the right in the plot presented in Fig. 8. Since it is very difficult to distinguish all the other lines, we have decided to plot

Fig. 9 – Contrast plotted as function of noise. Results obtained for the 10 sequences of ART and for MLEM and OSEM after reaching convergence.

Fig. 8 – Contrast plotted as function of noise. Results obtained for the 10 sequences of ART and for MLEM and OSEM (20 iterations in each case).

only the points corresponding to the convergence iteration of each sequence (Fig. 9). From the analysis of this last plot, we can observe that there are λ sequences which allow contrast values similar to the ones obtained with OSEM and MLEM, needing fewer iterations to converge, but with higher noise values (sequences 3 and 2, respectively). There is also a group of sequences that allow contrast values considerably higher than the ones obtained with OSEM and MLEM, but again with higher noise values (sequences 1, 4, 5 and 6). From this last group, the sequence that produces images with the least noise for the same contrast is sequence 1.


Fig. 10 – Axial central planes of the reconstructed volumes obtained with the NCAT phantom: (a) 5th iteration with Seq. 1, (b) 2nd iteration with Seq. 2, (c) 5th iteration with Seq. 3, (d) 18th iteration with MLEM and (e) 9th iteration with OSEM.

In order to illustrate the effect of the variations of the contrast and noise values on the reconstructed images, in Fig. 10 we present the axial central planes of the reconstructed volumes when convergence was reached for sequences 1, 2, and 3 and for MLEM and OSEM.

4. Discussion and conclusions

The main goal of image reconstruction processes is maximising image contrast. It is important to do this without paying the price of increasing the noise level in the reconstructed images too much. Otherwise, one is forced to use noise removal filters which can, in the majority of situations, decrease image contrast and image resolution.

Analysing the contrast vs. uniformity and contrast vs. noise plots obtained in this study, we can verify that it is possible to define relaxation parameter sequences for ART that allow higher contrast values with lower noise levels. Additionally, these sequences allow obtaining higher uniformity levels with a smaller number of iterations. The sequences that present this kind of behaviour are characterised by applying higher relaxation parameter values at the beginning of the image reconstruction process and smaller values as it approaches convergence. Of the 10 sequences tested, sequences 1, 2 and 3 proved to be the ones that enabled obtaining images with the best quality. It is easy to see that in the first iterations the values of the relaxation parameter λ differ considerably from each other, but in the middle of the reconstruction process it is advantageous to have small variation until the last iteration. This can be understood with the help of the explanation using hyperplanes presented in the introduction of this paper. We start the image reconstruction process with an estimate that corresponds to a homogeneous FOV. This estimate, in the

hyperplane space, is very far away from the intersection of the hyperplanes corresponding to each measurement. Because of this, a high relaxation parameter is desirable at the onset of the image reconstruction process. This allows faster convergence of the solution sequence to a region close to the intersection of the hyperplanes. After reaching this region, it is important to refine the search for an optimal solution without being a "prisoner" of essentially the same result; otherwise the iterative algorithm would never, in fact, reach the optimum solution, although one might be led to think that it had reached convergence. This may explain the better performance obtained with sequences that have lower relaxation parameter values after a certain number of iterations. Nonetheless, one must be careful not to use sequences with excessively low relaxation parameter values. In fact, we have shown that although their use prevents final images from having salt-and-pepper noise, it results in very poor contrast.

As expected, MLEM and OSEM gave similar results regarding contrast, uniformity and noise, with OSEM always needing fewer iterations to converge. We must stress that the sequences found cannot be regarded as universal, because they were tested with simulated data developed for a specific and non-traditional device (the Clear-PEM scanner). We believe, however, that this study provided evidence that ART can, in 2D mode, match the performance of MLEM and OSEM in terms of image quality. The results obtained must now be validated using real data.

Acknowledgments

We would like to thank A. Trindade and P. Rodrigues (Laboratório de Instrumentação e Física Experimental de Partículas, Lisboa, Portugal) for the simulated data, without


which the comparisons presented would not be possible. This work was supported in part by Fundação para a Ciência e a Tecnologia, Portugal (grant numbers SFRH/BD/6187/2001 and SFRH/BD/3002/2000) and by Agência de Inovação, Portugal (Projecto PET, Programa de I&D em Consórcio – POSI, DGDRSIFEC/14/01/03/FDR/00134).

references

[1] M.C. Abreu, et al., Design and evaluation of the Clear-PEM scanner for positron emission mammography, IEEE Transactions on Nuclear Science 53 (1) (2006) 71–77. [2] G.T. Herman, Image Reconstruction from Projections—The Fundamentals of Computerized Tomography, Academic Press, 1980, p. 316. [3] L.A. Shepp, Y. Vardi, Maximum likelihood reconstruction for emission tomography, IEEE Transactions on Medical Imaging MI-1 (2) (1982) 113–122. [4] K. Lange, R. Carson, EM reconstruction algorithms for emission and transmission tomography, Journal of Computer Assisted Tomography 8 (2) (1984) 306–316. [5] H.M. Hudson, R.S. Larkin, Accelerated image-reconstruction using ordered subsets of projection data, IEEE Transactions on Medical Imaging 13 (4) (1994) 601–609. [6] G.T. Herman, L.B. Meyer, Algebraic reconstruction techniques can be made computationally efficient, IEEE Transactions on Medical Imaging 12 (3) (1993) 600–609.

[7] D. Ros, et al., The influence of a relaxation parameter on SPECT iterative reconstruction algorithms, Physics in Medicine and Biology 41 (5) (1996) 925–937. [8] G.T. Herman, A. Lent, P.H. Lutz, Relaxation methods for image-reconstruction, Communications of the ACM 21 (2) (1978) 152–158. [9] P. Toft, The Radon Transform—Theory and Implementation, Department of Mathematical Modelling, Technical University of Denmark, 1996, p. 326. [10] A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society Series B—Methodological 39 (1) (1977) 1–38. [11] A. Trindade, et al., Breast cancer imaging studies by Monte Carlo simulation with Clear-PEM, in: 2005 IEEE Nuclear Science Symposium—Conference Record, vols. 1–3, 2005. [12] W.P. Segars, Development and Application of the New Dynamic NURBS-Based Cardiac-Torso (NCAT) Phantom, University of North Carolina, Chapel Hill, 2003, p. 243. [13] N. Matela, et al., Comparison of different image reconstruction strategies in Clear-PEM, in: CIMED 05—Second International Conference on Computational Intelligence in Medicine and Healthcare, Conference Records, 2005, IEE. [14] M.N. Wernick, J.N. Aarsvold, Emission Tomography—The Fundamentals of PET and SPECT, 1st ed., Elsevier Academic Press, 2004, p. 576.