High-speed Tomographic Reconstruction Employing Fourier Methods

Real-Time Imaging 3, 255–274 (1997)

Direct Fourier methods (DFM) of tomographic reconstruction have been investigated since the introduction of the technique in the early 1970s, although computerized tomography systems employ the convoluted back-projection method (CBPM) as the means of computing the reconstruction. The main difficulty with DFM is the required interpolation of radially sampled data to the Cartesian grid in order to allow a 2D Fourier transform to be performed on the frequency plane data, and this is at least part of the reason for the preference for CBPM. However, CBPM is difficult to implement quickly even on highly specialized hardware. In contrast to this, the main computational components of the DFM are a series of 1D Fourier transforms on the projection data followed by a 2D Fourier transform subsequent to a radial-to-Cartesian coordinate conversion. The Fourier transforms can be computed very efficiently using a fast Fourier transform (FFT) algorithm implemented in digital signal processing hardware. This paper describes the development of a DFM reconstruction method. The issue of the frequency plane interpolation is addressed and the effect of the accuracy with which this is performed on image reconstruction quality is examined. It was found that good reconstruction quality could be obtained with a combination of zero-padding the projection data and simple bilinear interpolation of the frequency plane data array from radial to Cartesian coordinates. Hardware was developed based around the Sharp LH9124 FFT DSP chipset that is capable of computing a 2D 512 × 512 resolution complex-to-complex Fourier transform in 32 ms. Thus, by implementing the DFM with this hardware it is possible to compute tomographic reconstructions in several tens of milliseconds, some two orders of magnitude faster than currently achieved with CBPM. This would be of value in a number of medical and industrial applications that require dynamic processes to be examined in real-time.

© 1997 Academic Press Limited

Stefan Brantner¹, Rupert C. D. Young¹, David Budgett² and Chris R. Chatwin¹

¹School of Engineering, University of Sussex, Brighton BN1 9QT, U.K. E-mail: [email protected]
²Department of Engineering Science, University of Auckland, Auckland, New Zealand

Introduction

Tomography is an important technique in modern medicine and industry, being employed to determine the density variations within a cross-section or volume of material from measurements of radiation absorbed by the medium. The main approach for reconstruction algorithms is the convoluted back-projection method (CBPM), in spite of its high computational effort. Although it is possible to generate good quality images with this approach, the powerful pipeline hardware architectures which have to be employed require several seconds to perform the reconstruction. A different approach is the direct Fourier method (DFM)


[1,2], which is based on the projection theorem for Fourier transforms. This theorem is known as the 'central slice' or 'central projection' theorem, and relates the line integrals of a physical property to the 2D Fourier transform (FT) of the desired image (see Appendix). The major computational burden of this approach then becomes the FT. However, this can be ameliorated by exploiting the fast Fourier transform (FFT) algorithm, reducing the required computational time for 2D transforms from O(N³) to O(N² log₂N). The central slice theorem demonstrates that the 2D FT of the desired image can be obtained by resampling a set of 1D FTs of the projection data. Fourier inversion of the 2D array yields the reconstruction.

The resampling step is basically a coordinate transformation from the polar to the Cartesian grid. Thus the polar sampling of the 2D transform has to be interpolated to the Cartesian grid in order to be able to apply a discrete Fourier transform. This resampling step is quite difficult, and its accuracy is the crucial point of the DFM for reconstruction quality. Several simple interpolation techniques, such as nearest neighbor interpolation (NNB) and 2D linear interpolation, do not yield satisfactory results and introduce reconstruction errors, evident as noise-like effects in the image. However, bilinear (BI) interpolation yields much better results, yet is still relatively simple (and fast) to implement (see Appendix). More sophisticated interpolation techniques evaluate each point in the complex frequency domain from many neighboring samples, but are correspondingly more computationally intensive.

Several different solutions have been suggested in recent publications that involve quite complex interpolation techniques. For example, Bellon and Lanzavecchia [3] introduced a novel technique, the 'moving window Shannon reconstruction' (MWSR). This method uses a high number of coefficients to resample a single complex point of the 2D transform, i.e. each point is reconstructed from all samples encompassed by a window moving along the sampling interval. Matej and Bajla [4] suggested using 'hybrid spline-linear interpolation' (SPLI), which calculates, as a first step, de Boor's cubic spline interpolation in the radial direction; as a second step a linear interpolation is performed in the angular direction. Like the bilinear interpolation, the SPLI interpolation uses just four points to calculate the value of the desired point. Additionally, the partial derivatives with respect to the angular direction are required. Other authors, like Schomberg and Timmer [5] or O'Sullivan [6], take advantage of gridding methods to resample the 2D FT.

In this paper, very good reconstructions were obtained with a combination of zero supplementation of the projection data and simple bilinear interpolation of the radially sampled data to the Cartesian grid. This allows fast reconstruction times to be maintained, particularly as the computational burden of the radial interpolation can be absorbed by the FFT hardware directly, since it simply results in the Fourier transformation of longer data sets. The paper also describes an algorithm to simulate X-ray sinograms by means of the Hough transform (see Appendix). This simulated projection data was needed in order to test the reconstruction algorithm on both phantom and real images. A set of C functions has been developed to carry out computer simulations of the reconstruction process described above. A 2D video-rate FFT board based on the Sharp LH9124 digital signal processor (DSP) chipset has been developed to enable near real-time reconstruction of projection data. The hardware design of the board is described in a later section.

Computational Results

This section presents results generated from a set of experimental C programs, which implement the algorithms outlined in the Appendix. The software was developed on a Sun Sparc 20 workstation with 64 MB memory running under the Sun Solaris operating system. The routines were also adapted to the PC operating system Linux, where the program was tested on a Pentium 90 with 16 MB memory. Computational results of simulated projections and reconstructed images are presented. Various methods to improve image reconstruction quality are discussed and compared. Finally, consideration is given to the porting of the software to the specialized DSP FFT hardware developed for fast computation of the 1D and 2D FFTs.

Simulated projections

Due to the unavailability of real projection data from a computed tomography (CT) or magnetic resonance (MR) device, an algorithm for projection simulation was developed, the fidelity of which was critical since inaccurate simulation will introduce spurious errors which do not occur in real projection data. These errors are introduced because the sinograms (projections) are simulated on the basis of discrete data, i.e. sampled images, which makes it necessary to use interpolation techniques.


Figure 1. An enlarged detail of simulated projections: (a) without interpolation; (b) with linear interpolation.

The first approach used to simulate sinograms was the nearest neighbor one outlined in the Appendix. Because this method does not employ any interpolation, strong artifacts were introduced in the simulation process that influenced the subsequent reconstruction quite heavily. Reconstructed images were of poor quality due to high noise levels. Furthermore, a dependency upon projection direction appeared, which accumulated spurious effects at certain angular values such as π/4. These effects occurred because the gray-level image function, which is determined by its samples, was extended to the continuous domain simply by convolving it with a square tile interpolation function [3]. Figure 1(a) shows the artifacts introduced in the region around π/4 due to this rough approximation.

The linear interpolation technique for projection simulation, presented in the Appendix, improves the quality of the sinograms considerably. Though one cannot overcome the spurious effects near θ = π/4 entirely, they are reduced substantially with this technique. Figure 1(b) depicts the improvement due to linear interpolation. Further improvements could involve more sophisticated interpolation techniques, for instance giving a certain fraction of the gray level to more than two neighbors according to a Gaussian curve.

The dimensions of the projection data in the (r,θ)-domain were chosen as follows. The number of data points in the r direction was set to the width of the image. The number of projections was set to 180, 360, or 720 for the angular range of 0 to π in the θ-direction. In subsequent sections different projection resolutions are compared to each other with respect to the quality of the reconstructed image. Here some images and their Hough transforms, which represent the simulated projections, are presented.

The first example is a simple image, which contains only several parallel lines. This example illustrates the two tasks which can be accomplished by the Hough transform: the traditional function of line detection on the one hand and the ability of projection simulation on the other (Figure 2). The Hough transform of an image, used to detect lines, will generate local maxima in the (r,θ)-domain. Each maximum corresponds to a line in the spatial domain or, conversely, an edge segment coincides with a cluster in the Hough transform. A cluster analysis of the Hough transform yields the two parameters of each line segment, r and θ. In Figure 2 it can be seen that all the lines in the original image have an angle of about 50° to the y-axis. Hence the line normals have an angle of about 140° to the y-axis.


Figure 2. Simulated sinograms of parallel lines.

The first parameter of these lines is thus determined as θ ≅ 140 × π/180 radians. The second parameter, r, is just the signed shortest distance of the line to the origin (which is in the middle of the image). Therefore, there are 10 local maxima in the Hough transform at θ ≅ 140°.

On the other hand, the use of the Hough transform of an image for projection simulation can be appreciated by examining Figure A-1. It can be seen that the X-rays penetrating the object sum all the gray-levels along the line of propagation. For the image in Figure 2 the integration along lines only yields local maxima at one angle α, when the imaginary rays and the lines in the image have the same angle to the y-axis. This is again only the case for θ ≅ 140°. At all other angles the rays cross more than one line and the integration yields a smaller value. Figure 3 depicts these considerations.


Figure 3. Illustration of the line integral action of the Hough transform (used for projection simulation).

The second example shows the simulation of projections for the Shepp-Logan phantom image with 8 bits of dynamic range (Figure 4). In this example an angular resolution of 720 data points for the range 0–π was used. The third example is a gray-level medical image, which was obtained from a reconstruction of MR data (Figure 5). This image will also be used to demonstrate the reconstruction algorithm. Again the angular resolution was chosen to be 720 projections for the range 0–π.


Figure 4. Simulated sinograms of the Shepp-Logan phantom image.

Figure 5. Simulated sinograms of a gray level image.

The process of projection simulation is relatively time consuming, because for every single pixel in the image a sinusoid has to be drawn in (r,θ)-space. If the width (= height) of the image is N, then the algorithm requires O(N³) time. Only pixels with distance R = √(x² + y²) < N/2 from the image centre are considered.

Image reconstruction

Image reconstruction using the direct Fourier method basically consists of three steps: (i) calculating the 1D FTs of the projections; (ii) resampling the 2D FT of the image from this set of 1D FTs; and (iii) computing an inverse FFT to obtain the final image. As many authors of papers concerning DFM reconstruction have noted, the resampling step is the most critical task [1–4,7,8]. If this step is done with low accuracy, the final image will contain artifacts and have a high noise level. In this section, different methods to improve the resampling step are presented. Generally, a trade-off exists between the quality of reconstruction and the computing time needed. The superb reconstruction quality described in Bellon and Lanzavecchia [3] requires a high computing time. As already mentioned in the introduction, these authors use a window of size up to 16 × 16 values to compute a single interpolated value of the 2D FT field. However, it is possible to obtain effective interpolation with much less computationally intensive methods, such as the spline interpolation technique investigated by Matej and Bajla [4] and Niki et al. [7].

In this work, four different steps were carried out to improve reconstruction quality while retaining a simple, and hence fast, bilinear interpolation of the frequency domain samples. These were: (i) increasing polar sampling density; (ii) increasing the number of projections; (iii) increasing the size of the 2D Fourier spectrum of the image; and (iv) low pass filtering the interpolated Fourier field. All four steps are considered separately in the following subsections. Also, the degree of improvement is compared to the corresponding increase in computing time required.

Increase of the polar sampling density

If we take the FFT of a 1D function over a set of discrete samples, the frequency representation of this function will also be a discrete set of (complex) samples, the number of


frequency samples being the same as in the spatial domain. To avoid an aliasing error, care must be taken with the spatial sampling increment ∆r. If ∆r is too large, then the projections are undersampled and aliasing occurs in the Fourier domain, which will corrupt the estimates of P(ρ,θ). The first condition to get reasonable results in the Fourier domain is, therefore, a proper sampling rate in the spatial domain. As mentioned earlier, the sampling increments were chosen such that ∆x = ∆y = ∆r, where ∆x and ∆y are the sampling increments of the image and ∆r is the radial sampling increment in the (r,θ)-projection domain. This choice should be reasonable for a large range of images. However, if an image contains very high frequency components, the above assumption will produce aliasing errors. This is especially the case for images which contain only lines. The aliasing effect is depicted in Figure 6. In this figure we use a simple image with a single line, simulate projections (with sampling rate ∆r), and finally take the 1D FT of every single projection p(r,θn). This yields the field of FTs of the projections illustrated in Figure 6, in which artifacts due to aliasing errors are clearly visible.

The central slice theorem states that the FT of an (n−1)-dimensional projection is a central slice of the n-dimensional FT of an object, orthogonal to the projection direction. Therefore the FTs of the projections are reordered in such a way that P(ρ,θ) = F₁{p(r, (θ + π/2) mod π)}. The FTs of the projections are thus already prepared for the subsequent resampling step.

If, on the other hand, we want to improve the polar-to-Cartesian mapping, we could decrease the radial sampling increment in the Fourier domain, ∆ρ, by increasing M, the number of samples in the radial direction. This results in an accuracy improvement of the radial part of the BI interpolation. When using the FFT, this can be implemented by padding each projection with zero value samples before transforming [9]. Thus, samples of zero value are added to both ends of the projection data, increasing the size of each data line by a factor of 2^β (β = 1, 2, 3) before computing the FT of the projections. This issue will be discussed later. Though Niki et al. [7] suggest reducing computing time by zero-padding the data, enlarging the input data of the 1D FT by a factor of 2, 4, or 8 will increase the computing time needed for the FT (by a factor β log₂β). However, the increase in reconstruction quality is quite dramatic.
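As an illustration of this zero-padding step, the following C fragment pads a single projection of M samples symmetrically with zeros so that its length becomes 2^β·M before the 1D FFT is taken. It is a sketch under our own naming assumptions, not a reproduction of the software described above.

```c
#include <stdlib.h>
#include <string.h>

/* Pad one projection of M samples with zeros at both ends so that its
 * length becomes M * 2^beta, keeping the original data centred.
 * The caller owns (and must free) the returned buffer. */
double *zero_pad_projection(const double *proj, int M, int beta)
{
    int     padded = M << beta;                        /* M * 2^beta samples          */
    int     offset = (padded - M) / 2;                 /* equal padding on both sides */
    double *out    = calloc(padded, sizeof(double));   /* calloc -> zero filled       */

    if (out != NULL)
        memcpy(out + offset, proj, M * sizeof(double));
    return out;
}
```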

Figure 6. Illustration of the aliasing error, which is apparent when reconstructing images containing strong, high-frequency features.

With β = 2 or 3 the quality of the reconstructed image is quite good, and one cannot see a difference from the original image without enlargement. Nevertheless, this depends

rather strongly on the number of projections N. There is a quite dramatic improvement from β = 0 to β = 1, independent of N. If we continue to increase β, the improvement is quite low compared to the large increase in quality from β = 0 to β = 1. In a later subsection we deal with reconstruction improvement resulting from enlargement of the 2D Fourier spectrum of the original function f(x,y). Following Niki et al. [7], a performance index was applied to quantitatively evaluate the quality of the image:

P = √[ Σx Σy (Oxy − Rxy)² / Σx Σy (Oxy − Ō)² ]     (1)

where both double sums run over x = 0, …, Width and y = 0, …, Height, and the denominator is ≅ σ², the variance of the original image. Oxy is the gray level of the original image at (x,y), Rxy the gray level of the reconstructed image at (x,y), and Ō the average gray level of the original image:

Ō = (1 / (Width · Height)) · Σx Σy Oxy

Width and Height are width and height of the reconstructed image.
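For reference, the performance index of Eqn (1) can be computed directly from the two images, as in the following C sketch (function and parameter names are our own):

```c
#include <math.h>

/* Performance index P of Eqn (1): normalized root mean squared distance
 * between the original image 'orig' and the reconstruction 'recon'
 * (both row-major, width*height gray levels). */
double performance_index(const double *orig, const double *recon,
                         int width, int height)
{
    int    npix = width * height;
    double mean = 0.0;

    for (int i = 0; i < npix; i++) mean += orig[i];
    mean /= npix;

    double num = 0.0, den = 0.0;
    for (int i = 0; i < npix; i++) {
        double d = orig[i] - recon[i];
        num += d * d;
        den += (orig[i] - mean) * (orig[i] - mean);
    }
    return sqrt(num / den);                 /* smaller is better */
}
```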


P, the 'normalized root mean squared distance measure', is the square root of the average squared difference between original and reconstructed image divided by the variance of the original image. The smaller this value is, the better is the image quality for a particular reconstruction.

At this point we introduce parameters which characterize a reconstruction process: (i) the number of projections N; (ii) the polar projection radial resolution enhancement factor β; and (iii) the 2D Fourier spectrum enhancement factor χ. As with the parameter β, the frequency field can be enhanced by a factor of 2^χ.

Hence, every reconstruction process is characterized by the triple (N, β, χ). This notation will be used for all the following examples.

The first example illustrated in Figure 7 is the reconstruction of the Shepp-Logan phantom image. A fixed number of N = 720 projections was used, while β was varied between 0, 1, and 2. χ was set to 0, i.e. the size of the 2D Fourier field was not enhanced. The reconstruction with β = 0 has strong rectilinear artifacts, which disappear when β is increased. These reconstructions yielded the performance indices shown in Table 1.

Increase of the number of projections N

Again, increasing the number of projections N is equivalent to decreasing the angular sampling increment of the simulated projections, ∆θ. By increasing N we obtain an accuracy improvement of the angular part of the BI interpolation. This results in a smaller spacing of the 1D FTs of the polar projection, and thus in peripheral regions of the 2D Fourier field we greatly improve the interpolation accuracy. (In the inner regions, there is already an oversampling.) The number of projections is defined in the simulation step and can be adjusted freely. In our simulations we used N = 180, 360 or 720. The interpolation improvement in the outer regions of the 2D Fourier field, and hence accuracy improvement in the computation of high frequency components, gives rise to a decrease in noise present in the reconstructed image. Due to the fact that image noise is characterized by high frequency components, it can be reduced by low pass filtering the 2D Fourier field. But this also implies a general blurring of the image, detailed features being lost at the expense of a noise level reduction. The better method, to reduce noise and to improve the sharpness of high frequency components at the same time, is to increase the angular sampling density. Obviously this step leads to an increased amount of computing time.

Figure 8 illustrates the reconstruction of an MR image with different azimuthal resolutions. β was held constant at 1, and χ was set to 1 as well. This reconstruction yielded the performance indices shown in Table 2.

One can see that the amount of noise reduces as N increases. Figure 8(b) and (c) show strong noise patterns instead of a uniform gray level. In Figure 8(d) the noise reduces to a minimum and the image quality is quite acceptable, with only a small resolution loss from the original.

If we again examine Figure A-2, we see that the polar grid is a rather inefficient sampling method to represent the Fourier domain. Close to the origin, where F(u∆X, v∆Y) normally is large, there are many samples available to interpolate a single point on the Cartesian grid, i.e. low frequency polar samples are separated by a very short distance in the angular direction, yet they are far apart in the radial direction [9]. However, at higher frequencies they are relatively far apart in the angular direction, while the radial sampling step remains constant. Hence if we increase the azimuthal resolution by increasing N, we store a large amount of data not needed for the interpolation process.


Figure 7. Different reconstructions of the Shepp-Logan phantom image: (a) the original image; (b) reconstructed with parameters (720,0,0); (c) with (720,1,0); and (d) with (720,2,0).

Table 1. Values of the performance index P (defined in the text by Eqn (1)) with increasing values of the parameter β, the degree of zero supplementation of the projection data prior to 1D Fourier transformation. Fixed parameters: N = 720, χ = 0.

β     0        1        2
P     0.502    0.237    0.194

This makes it reasonable to increase the number of Cartesian sampling points in the frequency domain, to take advantage of the high number of low frequency samples. These considerations are dealt with in the following subsection.

Increase of the size of the 2D Fourier spectrum

As an illustration of the 2D Fourier field enlargement, Figure 9 shows the reconstruction of a CT image. The two fixed parameters were N = 720 and β = 2; χ was set to 0, 1, and 2. The performance indices shown in Table 3 were obtained. These results show that one cannot gain a lot in accuracy by increasing the number of samples on the Cartesian grid. The results stay more or less the same,


Figure 8. Comparison of original and reconstructions obtained from simulated projections of an MR head section image. (a) The original; (b) reconstructed with parameters (180,1,1); (c) with (360,1,1); and (d) with (720,1,1).

while the amount of computing time and memory requirement increases considerably. With other images also, only very limited improvement occurred with increase in size of the 2D Fourier field. Therefore it can be concluded that if computing time is a critical factor, it is not worth using an enhancement factor χ > 0.

If we compare the reconstructions of the Shepp-Logan phantom on one hand, and the reconstructions of the MR and the CT head sections on the other, we note that the latter yield much better performance indices. A difference between the Shepp-Logan and the medical images is that the latter were gained from a CT device and hence are already frequency limited, whereas the former contains high frequency components which are difficult to reconstruct.

Low pass filtering

Low pass filtering (LPF) can be used as a means to reduce the noise in the reconstructed image, but there exists a trade-off between a reduction in noise level and a blurring of the image. Nevertheless, if one carries out a low pass filtering process in the frequency domain, requiring little computational overhead, one can improve the performance index of the reconstruction. Thus the LPF process is carried out prior to the frequency data being transformed back to the spatial domain by an inverse FT.


Table 2. Improvement in the performance index P with increase of the number of projections N. Fixed parameters: β = 1, χ = 1.

N     180      360      720
P     0.209    0.181    0.166


A Butterworth LPF (see Appendix) is characterized by two parameters. The first, NF, is the order of the filter; the second, sF, is the 'strength' of the filter, which can be characterized as follows. Let Dmax be the maximum frequency in the 2D Fourier field; then |H(u,v)| is reduced to 0.5 at the locus frequency D0 = Dmax/sF.

As every reconstruction is characterized by the triple (N, β, χ), we can characterize each filtering process with the tuple (NF, sF). If one applies a stronger filter like (1,4), then the reconstructed image is blurred too much and too many fine details are lost. The aim is to improve the performance index and to keep the amount of detail at a reasonably high level. Only first order filters are used because higher order low pass filters have too steep a filter curve and hence introduce a similar ringing effect to a simple window function. In the subsequent examples just (1,2) and (1,3) filters are used, where the latter is already beginning to give a subjective impression of too much blurring.

Figure 9. Comparison between original and reconstructions of a CT head section image. (a) The original; (b) the reconstruction with parameters (720,2,0); and (c) with (720,2,2).

HIGH-SPEED TOMOGRAPHIC RECONSTRUCTION

265

Table 3. Variation in the performance index P with increasing values of the parameter χ, the degree of zero padding of the Fourier plane of the image. Fixed parameters: N = 720, β = 2.

χ     0        1        2
P     0.084    0.078    0.077

Table 4. Values of the performance index P resulting from application of the Butterworth filters with the indicated parameters (defined in the text).

Image                      No filter    (1,2)     (1,3)
Shepp-Logan (360,2,2)      0.231        0.201     0.195
CT image (720,2,1)         0.078        0.075     0.089


The use of low pass filters is illustrated in Figure 10, where they are applied to the (360,2,2) reconstruction of the Shepp-Logan phantom and to the (720,2,1) reconstruction of the CT image used in Figure 9. In both examples the (1,2) filter considerably improved the performance index without blurring the image details. The (1,3) filter improved the performance index just a little for the Shepp-Logan phantom, but the index deteriorated when this low pass filter was applied to the CT image. The CT image in Figure 10 does not contain any sharp edges and, therefore, the blurring process does not visually change the image appearance a great deal. The performance indices obtained are summarized in Table 4.

Generally, the low pass filtering process does not require a long computing time, due to the fact that every sample in the field simply has to be multiplied by the corresponding filter value. Therefore it is worth filtering the 2D Fourier field with a 'weak' low pass filter if the data to be reconstructed contains high-frequency features, like sharp edges.

If one compares the different steps introduced in the preceding sub-sections and carried out to improve reconstruction quality, one can establish the following basic rules for image reconstruction by Fourier methods when using bilinear interpolation:

Figure 10. Comparison of unfiltered and filtered reconstructions of the Shepp-Logan phantom and the CT image from Figure 9. (a) The (360,2,2) reconstruction without filtering; (b) application of a (1,2) filter; and (c) application of a (1,3) filter. The same sequence is shown in (d)–(f), where (d) is the (720,2,1) reconstruction and (e), (f) the corresponding filtered reconstructions.

● The resolution of the projection data should be enhanced at least by a factor of two, since without enlargement rectilinear artifacts arise. For most cases this should be enough, further enhancement not improving the image quality tremendously.
● The number of projections N highly influences the image quality. Thus N should be as high as possible, as long as computing time is acceptable. Reasonable values for simulation turned out to be 360 or 720. However, other considerations, such as the X-ray dose level given to the patient in medical applications, must also be borne in mind and may restrict this to lower values.
● Two-dimensional Fourier field enlargement is not desirable, since the amount of computation for the inverse Fourier transform and for low pass filtering rises by a factor of four when the size of the Fourier field is doubled, while the image quality improves only marginally.


● Low pass filtering should be carried out if the amount of artifact noise in the reconstructed image is high due to high-frequency components in the original image.

Hardware Implementation

The major computational requirement is that for a series of 1D discrete Fourier transforms and a 2D discrete Fourier transform subsequent to a coordinate conversion. Since the 2D DFT is realized as a series of 1D FTs on the rows and columns of the 2D input array, the initial 1D DFT on the projection data may be implemented by identical hardware. In addition, there is a requirement for relatively simple hardware for generation of a look-up table to effect the radial-to-Cartesian coordinate transformation, which will also be briefly described in this section.

The FFT sub-system can complete the 2D FFT of each image within 40 ms. The memory input and output buffers will introduce a pipeline delay, so the FT of an image is not available until 40 ms after the image has been digitized. Although delayed, the throughput rate is maintained at 25 frames per second. A common synchronization signal at 25 Hz coordinates the transfer of data through the FFT subsystem.

The Sharp LH9124 DSP chipset is a specialized high-performance FFT DSP. The real and imaginary data have 24 bits of resolution; the four-port hardware architecture provides very efficient data transfer and the hardware implementation of radix-2, -4 and -16 butterfly structures offers very fast FFT computations. A single processor system, using a simple architecture, is capable of completing a 512 × 512 FFT in 32 ms.

A PC has been selected to host the FFT system. The ISA bus provides a simple interface, and software to communicate with the FFT system can be written in C. Host access for initialization, programming and self-diagnosis has been incorporated into the design. A block diagram of the FFT system is shown in Figure 11(a) and a photograph of the board developed in Figure 11(b).

As originally designed, it was required to operate on 512 × 512 pixel size images, obtained from a standard CCIR format camera [10]. The CCIR standard has 25 frames per second and accommodates 575 active lines of image data, with an additional 50 lines capable of carrying information such as teletext. A CCD camera will typically have 581 vertical pixels, and a 512-line image is simply obtained by dropping lines at the top or bottom of the image. The duration of each line is approximately 64 µs, with 12 µs used for timing and reference levels. The remaining 52 µs contains a continuous analogue signal formulated from the horizontal pixels from the CCD camera. It is the sampling rate of the frame grabber that determines the number of pixels obtained per line. Most cameras have pixels with a 3:4 aspect ratio, so to obtain a 1:1 aspect ratio image the sampling rate must be set to sample 575 × 4/3 = 762 pixels per 52 µs, i.e. 14.75 MHz. The Analogue Devices AD 9502 is a hybrid video digitizer capable of converting the CCIR signal at 14.75 MHz directly into 8-bit digital information and control signals.

All memories employed are standard 15 ns access time SRAMs with conventional control signals, requiring a 5 V power supply. Address bus, data bus and write-enable control signals from each memory are multiplexed to the appropriate hardware, as shown in Figure 11(a). The QuickSwitch range of multiplexers, featuring a propagation delay of less than 1 ns, is employed. The QS3383 is a 9-bit bus exchanger available in a surface mount QSOP package.

The input memory buffers are two 256K × 8-bit memories, MIA and MIB. One memory is assigned to the video digitizing circuit and the other to the DSP; the connection of the memories toggles every 40 ms in response to the setting of the control signal Mux 1, which has two states:
0: video digitizer to MIA, AG1 control signals to MIB and MIB data bus to DSP Port Q.
1: video digitizer to MIB, AG1 control signals to MIA and MIA data bus to DSP Port Q.

The output memory buffers are two 256K × 48-bit memories, MOA and MOB. One memory is assigned to the output device and the other to the DSP; the connection of the memories toggles every 40 ms in response to the setting of the control signal Mux 2, which has two states:
0: DSP Port B to MOA data bus and AGB control signals to MOA, output device to MOB.
1: DSP Port B to MOB data bus and AGB control signals to MOB, output device to MOA.

Multiplexer 3 assigns output to either the video output device (VOD) or to the PC host. The address bus and control for the output come from either the VOD or the PC host. The 48-bit output data bus is always connected to the 48-bit VOD data bus. The selection of which 16 bits of the 48-bit wide output bus are connected to the host 16-bit data bus is determined by the state of multiplexer 3.

Figure 11. Two-dimensional fast Fourier transform hardware. (a) Block diagram – data flow: data buses black, address buses gray. (b) Photograph of the developed board.

The FFT is performed by the Sharp LH9124 DSP chip, which is optimized to compute FFTs. It differs from conventional DSPs in that it is a pass-based processor. This means that a single instruction is used on the entire data set. To perform a 512 × 512 FFT, the columns can be completed in three passes (radix-2, radix-16, radix-16) and the rows in another three passes. (Although the order of the radix operations could be changed, it is advisable to use a radix-2 instruction as the first pass, because the block floating point scheme will not be operating until after the first pass. This will not cause a problem initially, because the input data is only 8 bits and the word size is 24.) Only six instructions are given to the DSP to compute the entire 2D complex Fourier transform of the input image. From the design point of view, the main consequence of the DSP being a pass-based processor is that the control hardware need only operate at low speeds;


the processor, address generators and memories may be processing new data at 40 MHz, but new instructions are delivered to the functional units only six times in each 40 ms CCIR frame period.

The FFT computation is carried out by the LH9124 and LH9320 computing devices, which communicate at high speed with the memories. These devices need to be coordinated, and this is the job of the scheduler. For each pass, the LH9124 must be supplied with a function code, various control registers must be set and the devices must be told when to start. The scheduler operates at relatively low speed. During the 40 ms interval allowed for computing the FFT, it only needs to service the devices six times – once for each pass. Once the control registers are set, the scheduler starts the devices and waits for the pass to complete, then the control registers are set for the next pass, and so on.

There are latencies associated with the different devices. Latency is the delay between starting a device and a valid output being produced. For the address generators, there is a delay of five clock cycles before the first valid address is produced. The DSP chip will have latency delays between the input data and output data – these delays will vary according to which instruction the DSP is implementing. The AGs have a programmable latency delay for each pass. This enables all devices to be started together, and the internal programmable latency counter can be used to ensure that each device actually produces memory addresses after the correct delay. The programmable delays will be greater than the theoretical minimum of five cycles so that the system processing starts on the boundary of the maximum cycle word length of the DSP (which is 16). This means the DSP can be started at the same time as the AGs.

Each AG produces a TC (terminal count) output, which goes HIGH, indicating its address sequence has completed. The end of a pass (including latency times) occurs when all AG TCs have gone HIGH. Then the scheduler can increment the program counter and start the next pass. However, the TC signal will remain HIGH until the next valid address is produced. This means that a simple test on the state of the TCs is not sufficient to detect the completion of a pass throughout the whole FT sequence. A state machine is used to detect that a pass is underway before enabling the detection of all TCs being HIGH as a test that a pass is finishing. The end of an FFT process can be detected by setting the programmable output from one AG. This signal is used by the scheduler to reset the program counter.

The scheduler then waits for the 40 ms synchronization GO signal before starting the next FFT process.

The simple scheduler consists of a small memory, a program counter and a registered PAL. The program counter output is connected to the address lines of the memory. Each memory location holds the control codes required for each pass. These include the DSP function code, data flow and also multiplexer state selection controls. The PAL implements a state machine which is responsible for initializing the FFT sequence, and loading and running each pass. In addition, a multiplexer is required to permit host programming of the scheduler memory. The scheduler overheads contribute minimally to the total time to compute the FFT.

The result from the 1D FFT is mapped to a Cartesian coordinate system using a third digital system based on programmable gate arrays (PGAs). Each Cartesian point has six reference values stored in a supplementary 48-bit 512 × 512 memory. The values include two 9-bit indices, n and m, and four 8-bit scalers (m+1−m*), (m*−m), (n+1−n*) and (n*−n), as defined in Eqns (A.13) and (A.14). The multiplications and additions are implemented using four Xilinx 3130–3 PGAs. These devices contain a relatively low density of combinational logic blocks, which permits very efficient use of long lines to minimize propagation delays. The speed of operation is constrained by the need to access four data values from the Fourier plane for each interpolation. The PGAs are clocked at 50 MHz, and the parallel configuration of shifters and adders permits real-time interpolation of a 512 × 512 complex array within 21 ms.

The host PC is responsible for programming the scheduler and AG memories, and also for initializing the coefficient data memory. Supplementary tasks include the transfer of image data from the FFT subsystem to the PC. The host section communicates with the PC over the PC/AT I/O (ISA) bus through a common parallel-type interface using standard 8255 programmable peripheral interface chips. Two 8255 devices are used, one addressing the high byte and the other the low byte of the ISA bus. The PC uses the 8255 Port A as a 16-bit control port and Port C as a bi-directional 16-bit data port. Writing to the control registers and reading or writing data can be done from the C programming language using the IN and OUT instructions. The host is responsible for programming the device and initialization. Once operating, the timing for the FFT system is derived from the input data signal, and co-ordination of other components of the system can be derived from the timing information contained in the output video signal.
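As an illustration of this host-side access, the fragment below shows how a 16-bit control word might be written and a data word read back using port I/O from C on a Linux/x86 host. The base address, register offsets and command code are placeholders only; the board's actual I/O map and 8255 programming sequence are not reproduced here.

```c
#include <stdio.h>
#include <sys/io.h>        /* ioperm, outw, inw (Linux/x86, needs root) */

/* Placeholder I/O map -- the board's real base address, register offsets
 * and command codes are not given here and are invented purely to show
 * the access pattern. */
#define FFT_BASE        0x300              /* hypothetical ISA base address */
#define FFT_CTRL_PORT   (FFT_BASE + 0)     /* 8255 Port A: 16-bit control   */
#define FFT_DATA_PORT   (FFT_BASE + 2)     /* 8255 Port C: 16-bit data      */
#define CMD_READ_RESULT 0x0001             /* hypothetical command code     */

int main(void)
{
    if (ioperm(FFT_BASE, 8, 1) != 0) {     /* request access to the port range */
        perror("ioperm");
        return 1;
    }

    outw(CMD_READ_RESULT, FFT_CTRL_PORT);          /* write a 16-bit control word */
    unsigned short word = inw(FFT_DATA_PORT);      /* read back one data word     */
    printf("first result word: 0x%04x\n", word);

    ioperm(FFT_BASE, 8, 0);                /* release the ports */
    return 0;
}
```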


Conclusion

The paper describes the direct Fourier method of image reconstruction from projections and presents results of reconstructions from simulated projections. For projection simulation, a simple but accurate Hough transform was used.

Two simple interpolation techniques were used to perform the coordinate transformation from the polar to the Cartesian grid: namely, the nearest neighbor and bilinear interpolation techniques, where only the latter was investigated further, since the former gave poor results. By increasing the size and number of projections and low pass filtering the Fourier domain before the inverse Fourier transformation, the quality of the reconstructed image was improved. Two-dimensional Fourier field enhancement turned out not to be worth carrying out, because of the high increase in computing time required on the one hand and the small gain in quality on the other.

Different images, such as the Shepp-Logan phantom image and other reconstructed head section images derived from CT and MR machines, were used to simulate projections. The reconstructions of the head sections were very good, due to the bandlimited nature of these images. Simulated projections which were obtained from the set of ellipses comprising the Shepp-Logan phantom produced reconstructions with a lower performance index due to the presence of stronger high-frequency components. For each image, appropriate reconstruction parameters were determined, with the amount of computing time being borne in mind. The interpolation techniques employed yielded good results, comparable with those achieved by other workers [1–4,7,8] using nominally more involved (and computationally intensive) techniques.

Implementation of the Fourier method of tomographic reconstruction with the specialized 2D FFT hardware previously described will allow reconstruction of 512 × 512 resolution projection data to be accomplished in under two video-frame times with a single board and at video rate with two cascaded boards. This is approximately two orders of magnitude faster than possible with methods based on convoluted back-projection, even when employing dedicated hardware. Certain medical and industrial processes requiring the real-time imaging of dynamic processes could potentially benefit from this capability.

Acknowledgement

The CT and MR images used as test data for the simulation work presented in this paper were obtained from the internet site http://www-ipg.umds.ac.uk/achive.html, belonging to the Image Processing Group at the United Medical and Dental Schools of Guy's and St. Thomas Hospitals, London, U.K. The authors are grateful to this group for making these images available.

Appendix: Mathematical Background

In this appendix the mathematical background of the central slice theorem, bilinear interpolation, the Hough transform and low pass filtering is presented.

Radon and Fourier transforms

Tomographic reconstruction deals with finding the unknown function f(x,y), given the projection function p(r,θ). The two coordinates are the position in the projection profile, r, and θ, the projection angle, which is the angle between the normal of the projection beam and the x-axis (see Figure A-1). f(x,y) represents the 2D spatial distribution of some physical quantity, and is transformed to p(r,θ) by parallel projection. Thus p(r,θ) denotes the line integral of f(x,y) along the line specified by r and θ:

p(r,θ) = ∫_{−∞}^{+∞} f(r cosθ − t sinθ, r sinθ + t cosθ) dt     (A.1)

The integration limits can be restricted to finite values, the spatial function f vanishing outside a finite region Σ:

Σ = {(x,y) : √(x² + y²) ≤ rmax}     (A.2)

Σ can be considered as a region within a circle of radius rmax. p(r,θ) is also referred to as the Radon transform of f(x,y):

p(r,θ) = R(f(x,y))     (A.3)

A line then has two parameters: r, the signed distance to the origin, and θ, the angle of the line normal to the x-axis. It follows from Eqn (A.1) that each point (x0,y0) on a line can be expressed by the two fixed line parameters r and θ and the free parameter t, as given by Eqn (A.4):

x0 = r cosθ − t sinθ
y0 = r sinθ + t cosθ     (A.4)


Spatial functions and their variables are denoted by lower case letters, while upper case letters are used for frequency domain function names and their variables. Spatial functions and their corresponding frequency domain functions are related through the Fourier transform. The Fourier transform of f(x) is given by:

F(u) = ∫_{−∞}^{+∞} f(x) · e^(−j2πux) dx     (A.5)

Given F(u), f(x) can be obtained by applying the inverse FT operator:

f(x) = ∫_{−∞}^{+∞} F(u) · e^(j2πux) du     (A.6)

Central slice theorem

The central slice theorem relates the 1D polar projections of a spatial function, f, to the cross-sections (slices) of the 2D FT of this function. More generally, it determines that the FT of an (n − 1)-dimensional projection is a central slice of the n-dimensional FT of an object, i.e. a section through the origin of the transform, orthogonal to the projection direction. Figure A-1 depicts this relationship.

Figure A-1. Illustration of the central slice theorem: (a) projection generation; (b) relationship of the projections to sections through the 2D Fourier transform of the image.

If f(x,y) is the unknown function to reconstruct, p(r,θ) the projection function of f, and P(ρ,θ) the 1D FT of p(r,θ) with respect to the first variable r, then:

P(ρ,θ) = F(ρ·cosθ, ρ·sinθ)     (A.7)

To interpret this theorem physically we consider a fixed angle θ. As ρ varies, the value of P(ρ,θ) is the same as the value of F at distance ρ from the origin at angle θ to the X coordinate axis.

The central slice theorem relationship therefore indicates a way to obtain the desired image from projections. Eqn (A.8) describes this relationship in a compact notation:

p(r,θ) = R(f(x,y))
P(ρ,θ) = F₁(p(r,θ))          0 ≤ θ ≤ π,  −rmax ≤ r ≤ rmax
f(x,y) = F₂⁻¹(P(ρ,θ))     (A.8)

R is the Radon transform operator, which transforms the spatial function f(x,y) to (r,θ)-space. F₁ represents the 1D forward FT operator, and F₂⁻¹ the 2D inverse FT operator.

If we take 1D FTs of the projections, perform a radial-to-Cartesian coordinate transformation, and finally take the 2D inverse Fourier transform of the interpolated frequency field, we obtain a reconstructed image. If we perform the whole process on discrete data, starting with a discrete image, we obtain the projection function p(r,θ) with samples ∆r and ∆θ apart; the samples of the Fourier transformed function P(ρ,θ) are then ∆ρ and ∆θ apart.

Bilinear interpolation

In order to be able to perform a 2D fast Fourier transform, the data has to be available on a Cartesian grid. Unfortunately, the Fourier-transformed projection data is not available on a Cartesian, but rather a polar, grid. This makes it necessary to reorder the polar data to the Cartesian grid, which is basically a coordinate transformation from ( ρ,θ )-space to (X,Y)-space. As already mentioned, two

different simple interpolation methods were examined, namely: nearest neighbor interpolation and bilinear interpolation. In (ρ,θ)-space, samples are arranged on a uniform polar grid with spacing constants ∆ρ and ∆θ, where ∆ρ is the radial sample increment and ∆θ is the angular sampling increment in the frequency domain. Every sample position on the polar grid is characterized by the two variables m and n:

ρ(m) = m·∆ρ = ρm
θ(n) = n·∆θ = θn,      with ∆ρ = 1/(M·∆r) and ∆θ = π/N     (A.9)

Figure A-3. Illustration of nearest neighbor and bi-linear interpolation.

In the radial direction we have a total number of M samples and in the angular direction we take projections at N different angles θn. Figure A-2 shows the polar sample positions with the superimposed Cartesian grid. Cartesian grid positions are marked with a ‘+’. At these positions we wish to know the value of the Fourier transform, and so some form of interpolation has to be introduced. Figure A-3 depicts the NNB and BI interpolation schemes.

The nearest neighbor estimate of a Cartesian grid point (u∆X, v∆Y) is P(ρm, θn), where the variables m and n are chosen to minimize the distance between the grid point and the polar sample, i.e. P(ρm, θn) is the nearest polar neighbor of the Cartesian point (u∆X, v∆Y). Equation (A.10) describes this relationship:

F(u∆X, v∆Y) = P(ρm, θn),  where (m,n) minimizes (u∆X − ρm·cosθn)² + (v∆Y − ρm·sinθn)²     (A.10)

F(u∆X, v∆Y) denotes a single sample of the 2D FT of the unknown function f(x,y). Any point on the Cartesian grid has a certain distance to the origin and a certain angle to the X-axis:

r(u,v) = √((u∆X)² + (v∆Y)²),   ϑ(u,v) = arctan(v∆Y / u∆X)     (A.11)

Figure A-2. Polar sample positions with superimposed Cartesian grid.

If we calculate the bilinear interpolation estimate [9] of the point (u∆X, v∆Y), the four neighboring samples on the polar grid are used. These samples are denoted by P(ρm, θn), P(ρm+1, θn), P(ρm, θn+1), and P(ρm+1, θn+1).

To find m and n we use the results of Eqn (A.11):

m∆ρ ≤ r(u,v) ≤ (m+1)∆ρ,   n∆θ ≤ ϑ(u,v) ≤ (n+1)∆θ     (A.12)

With m* = r(u,v)/∆ρ and n* = ϑ(u,v)/∆θ, Eqn (A.12) reduces to:

m ≤ m* ≤ m + 1,   n ≤ n* ≤ n + 1


We now calculate a linear interpolation in the ρ (radial) direction for θn and θn+1, respectively. A final linear interpolation of these two interim results in the θ (angular) direction yields the bilinearly interpolated value of F(u∆X,v∆Y).

For ñ = n and ñ = n+1 we calculate the intermediate results r(n) and r(n+1):

r(ñ) = (m + 1 − m*)·P(ρm, θñ) + (m* − m)·P(ρm+1, θñ)     (A.13)

and the final interpolated grid point results in:

F(u∆X, v∆Y) = (n + 1 − n*)·r(n) + (n* − n)·r(n+1)     (A.14)
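A compact C sketch of the bilinear resampling of Eqns (A.9)–(A.14) is given below. The storage layout, function name and the simplified treatment of boundaries and negative radial frequencies are our own assumptions.

```c
#include <complex.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Bilinear estimate of the Cartesian frequency sample F(u*dX, v*dY) from the
 * polar samples P[n*M + m] = P(rho_m, theta_n), m = 0..M-1, n = 0..N-1,
 * following Eqns (A.9)-(A.14). */
double complex bilinear_polar(const double complex *P, int M, int N,
                              double drho, double dtheta,
                              double u_dX, double v_dY)
{
    /* Eqn (A.11): polar coordinates of the Cartesian grid point */
    double r  = sqrt(u_dX * u_dX + v_dY * v_dY);
    double th = atan2(v_dY, u_dX);
    if (th < 0.0) th += M_PI;                  /* keep theta in [0, pi) */

    /* Eqn (A.12): m*, n* and the enclosing sample indices m, n */
    double ms = r  / drho;
    double ns = th / dtheta;
    int m = (int)ms;
    int n = (int)ns;
    if (m >= M - 1 || n >= N - 1) return 0.0;  /* outside the sampled region */

    /* Eqn (A.13): linear interpolation in the radial direction, for theta_n and theta_{n+1} */
    double complex r_n  = (m + 1 - ms) * P[n       * M + m] + (ms - m) * P[n       * M + m + 1];
    double complex r_n1 = (m + 1 - ms) * P[(n + 1) * M + m] + (ms - m) * P[(n + 1) * M + m + 1];

    /* Eqn (A.14): final linear interpolation in the angular direction */
    return (n + 1 - ns) * r_n + (ns - n) * r_n1;
}
```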

Hough transform and projection simulation

The method proposed by Hough [11] can be used to determine parameterizations of curves such as circles, ellipses or straight lines in an image. If we wish to detect straight lines, we first consider a single point (x0,y0) and the (r,θ) representation of a line passing through this point:

r = f(θ) = x0 sinθ − y0 cosθ     (A.15)

Infinitely many different lines with parameters r and θ pass through the point (x0,y0). Figure A-4 depicts the meaning of the parameters r and θ.

Figure A-4. Parameters and equation of a line.

Figure A-5. Image and parameter space. One point (x0,y0) in image space maps to a sinusoid in (r,θ) space.

Again, r is the shortest (signed) distance from the line to the origin, and θ is the angle between the line normal and the y-axis. Figure A-5 shows the mapping of a single point in (x,y)-space to a sinusoid in (r,θ)-space. N colinear points lying on a line given by x sinθ0 − y cosθ0 = r0 yield N sinusoidal curves that intersect at (r0,θ0) in parameter space [12].

To simulate the Hough transform of an image digitally, we divide the (r,θ) parameter space into cells. Each cell is ∆r 'wide' and ∆θ 'high'. This 2D array then has (2rmax/∆r) × (π/∆θ) cells, with Eqn (A.15) indicating that for each point (x0,y0) in image space, all possible lines r(θ), with θ ranging from 0 to π, are calculated. rmax is the radius of the biggest circle fitting into the image (see also Eqn (A.2)). According to Eqn (A.15), we simulate the Hough transform of an image by drawing a sinusoid into parameter space (a discrete array of cells) for each point in image space lying within Σ (see Eqn (A.2)). This process could also involve an interpolation step. When the sinusoid r(θ) hits a cell in parameter space, the cell value is incremented by the gray level of the corresponding pixel (x0,y0) in image space. Again, N points lying on a line with parameters (r0,θ0) map to N sinusoids in parameter space that intersect at (r0,θ0). All gray levels of these N points are added to yield the value of cell (r′,θ′), where r0 ≤ r′ ≤ r0 + ∆r and θ0 ≤ θ′ ≤ θ0 + ∆θ. Thus, the Hough transform enables the simulation of projections of an image as would be acquired in a tomographic process.

If we take a single point (x0,y0) and draw the corresponding sinusoid into the (r,θ) cell array, this can be done in a straightforward way without interpolation. It would involve the calculation of r(θ) for 0 ≤ θ ≤ π and adding the gray level of (x0,y0) to the cells which are the nearest neighbors of r(θ). A somewhat more difficult method is a process that is the reverse of a linear interpolation. Here the two surrounding cells of r(θ) are considered. Dependent on the distances r′ − r0 and r0 + ∆r − r′, ratios of the corresponding pixel gray level are added to these two neighbouring cells. Figure A-6 depicts this method. Much better accuracy of the sinograms can be achieved when projections are simulated in this manner. Nevertheless, both approaches are just approximations which introduce a dependency upon the projection direction and accumulate spurious effects at certain angles, such as π/4. An earlier section of the paper gives further details of the reverse linear interpolation method and presents simulations of real and phantom images.
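The projection simulation with reverse linear interpolation can be sketched in C as follows; the array layout and names are our own, and gray levels are split between the two radial cells surrounding r(θ) as described above.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Simulate a sinogram by the Hough transform with reverse linear interpolation.
 * image: width x width gray levels (row-major); the rotation centre is the
 *        image centre.
 * sino:  ntheta x nr accumulator (row-major), zero-initialized by the caller. */
void simulate_projections(const double *image, int width,
                          double *sino, int ntheta, int nr)
{
    double rmax = width / 2.0;
    double dr   = 2.0 * rmax / nr;                    /* radial cell size  */
    double dth  = M_PI / ntheta;                      /* angular cell size */

    for (int yi = 0; yi < width; yi++) {
        for (int xi = 0; xi < width; xi++) {
            double x = xi - width / 2.0;
            double y = yi - width / 2.0;
            if (x * x + y * y >= rmax * rmax) continue;   /* only pixels inside Sigma */
            double g = image[yi * width + xi];

            for (int n = 0; n < ntheta; n++) {
                double th = n * dth;
                double r  = x * sin(th) - y * cos(th);    /* Eqn (A.15) */
                double c  = (r + rmax) / dr;              /* continuous radial cell index */
                int    m  = (int)floor(c);
                double f  = c - m;                        /* fractional part */
                /* split the gray level between the two surrounding cells */
                if (m     >= 0 && m     < nr) sino[n * nr + m]     += (1.0 - f) * g;
                if (m + 1 >= 0 && m + 1 < nr) sino[n * nr + m + 1] += f * g;
            }
        }
    }
}
```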

Low pass filtering

Good results can be achieved with a Butterworth low pass filter (BLPF). The transfer function of the BLPF of order n and with cut-off frequency locus at a distance D0 from the origin is defined by the relation:

H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)]     (A.16)

where D(u,v) is the distance of (u,v) from the origin, i.e. D(u,v) = √(u² + v²).

This type of filter has no sharp discontinuity which establishes a clear cut-off between passed and filtered frequencies. Rather, it has a smooth transfer function which defines a cut-off frequency locus at points where H(u,v) is down to a certain fraction of its maximum value (see Figure A-7). In Eqn (A.16), H(u,v) = 0.5 at the locus frequency D(u,v) = D0.
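A possible C implementation of applying the Butterworth transfer function of Eqn (A.16) to a centred 2D Fourier field is sketched below; the definition of Dmax as the corner distance of the field and the array layout are our own assumptions.

```c
#include <complex.h>
#include <math.h>

/* Multiply a centred 2D Fourier field F (size x size, row-major, DC at the
 * centre) by the Butterworth low pass transfer function of Eqn (A.16).
 * order corresponds to NF and strength to sF, so that D0 = Dmax / sF;
 * Dmax is taken here as the corner distance of the field (an assumption). */
void butterworth_lpf(double complex *F, int size, int order, double strength)
{
    double dmax = (size / 2.0) * sqrt(2.0);
    double d0   = dmax / strength;

    for (int v = 0; v < size; v++) {
        for (int u = 0; u < size; u++) {
            double du = u - size / 2.0;
            double dv = v - size / 2.0;
            double d  = sqrt(du * du + dv * dv);
            double h  = 1.0 / (1.0 + pow(d / d0, 2.0 * order));
            F[v * size + u] *= h;
        }
    }
}
```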

Figure A-6. Illustration of the reverse linear interpolation process for projection simulation.

Figure A-7. Transfer function of a Butterworth low pass filter.


References

1. Shepp, L. & Logan, B. (1974) The Fourier reconstruction of a head section. IEEE Trans. Nucl. Sci., 21: 21–43.
2. Stark, H., Woods, J., Paul, I. & Hingorani, R. (1981) Direct Fourier reconstruction in computer tomography. IEEE Trans. Acoust. Speech Signal Process., 2: 237–245.
3. Bellon, P. & Lanzavecchia, S. (1995) A direct Fourier method (DFM) for X-ray tomographic reconstructions and the accurate simulation of sinograms. Int. Journal of Bio-Medical Computing, 38: 55–69.
4. Matej, S. & Bajla, I. (1990) A high-speed reconstruction from projections using direct Fourier method with optimized parameters – an experimental analysis. IEEE Trans. Med. Imaging, 9: 421–429.
5. Schomberg, H. & Timmer, J. (1995) The gridding method for image reconstruction by Fourier transformation. IEEE Trans. Med. Imaging, 14(3): 596–607.
6. O'Sullivan, J. (1985) A fast sinc function gridding algorithm for Fourier inversion in computer tomography. IEEE Trans. Med. Imaging, MI-4(4): 200–207.
7. Niki, N., Mizutani, T. & Takahashi, Y. (1983) A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method. Systems Computers Controls, 14(3): 56–65.
8. Stark, H. & Paul, I. (1981) An investigation of computerized tomography by direct Fourier inversion and optimum interpolation. IEEE Trans. Biomedical Eng., BME-28: 496–505.
9. Lewitt, R. (1983) Reconstruction algorithms: transform methods. Proc. IEEE, 71: 390–408.
10. Young, R., Budgett, D. & Chatwin, C. (1995) Video rate implementation of the Hough transform. Proc. 28th ISATA, Robotics, Motion and Machine Vision in the Automotive Industries, pp. 35–43.
11. Hough, P. (1962) Methods and means for recognizing complex patterns. U.S. Patent 3,069,654.
12. Gonzalez, R. & Woods, R. (1992) Digital Image Processing. Addison Wesley.