Fast MAP-based multiframe super-resolution image reconstruction


Image and Vision Computing 23 (2005) 671–679 www.elsevier.com/locate/imavis

Di Zhang a,*, Huifang Li b, Minghui Du b

a Department of Computer Science, Shaoguan University, Shaoguan, China
b Department of Electronics and Communication Engineering, South China University of Technology, Guangzhou, China

Received 5 February 2004; received in revised form 26 March 2005; accepted 30 March 2005

Abstract

Super-resolution image reconstruction produces a high-resolution image from a set of shifted, blurred, and decimated versions thereof. Previously published papers have not adequately addressed the computational complexity of this ill-conditioned, large-scale problem. In this paper, the computational complexity of MAP-based multiframe super-resolution algorithms is studied, and a new fast algorithm, as well as methods for parallel image reconstruction, is presented. The proposed fast algorithm splits the multiple input low-resolution images into several subsets according to their translation relations and then applies the normal MAP algorithm to each subset; the reconstructed images are processed in the same way at successive levels until the desired resolution is achieved. Experimental results are provided to demonstrate the efficiency of the proposed techniques. © 2005 Elsevier B.V. All rights reserved.

Keywords: Super-resolution; MAP; Computational complexity; Image reconstruction

1. Introduction

Super-resolution (SR) image reconstruction is the process of combining multiple frames of low-resolution (LR) images of the same scene to form a single high-resolution (HR) image. Usually, an SR algorithm consists of two steps: image registration and fusing the multiple registered LR images into an HR image. The first step is also called motion estimation, and many effective techniques [1,2] have been developed to do the job. The second step is based on the fact that the HR image, after being appropriately shifted, blurred, and down-sampled to account for the alignment and to model the imaging process, should reproduce the LR images. Many SR algorithms have been proposed, including the frequency domain method [3–5], the projection onto convex sets (POCS) method [6,7], and the maximum a posteriori (MAP) method [8–14]. Baker and Kanade [15] proposed a recognition-based method by adding a recognition-based

* Corresponding author. E-mail address: [email protected] (D. Zhang).

0262-8856/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.imavis.2005.03.004

gradient prior to the MAP framework, so the proposed estimator is still a MAP estimator. Among all the proposed SR algorithms, MAP has received much attention in recent years because it has many outstanding advantages, such as a complete theoretical framework, a flexible spatial-domain observation model, powerful inclusion of a priori knowledge, and superior results. To decrease the computational complexity, Elad and Hel-Or [16] demonstrated that SR image reconstruction can be divided into two separate stages, de-blurring and fusing, and presented a fast algorithm for the fusing stage. Kim et al. [5] presented a fast block decomposition method for the frequency domain method. For single-image MAP expansion, Schultz and Stevenson [18] suggested the multigrid algorithm, which was first presented by Terzopoulos [19]. As to multiframe MAP-based SR image reconstruction, Hardie et al. [11], as well as Connolly and Lane [13], suggested the conjugate gradient (CG) method, whereas Nguyen et al. [12] and Shin et al. [14] presented the preconditioned conjugate gradient (PCG) method. To a large degree, however, the computational complexity of SR image reconstruction has not been addressed adequately so far. We present here a detailed study of


the computational complexity of MAP-based multiframe SR algorithms, as well as methods for computational complexity reduction.

2. MAP algorithm

2.1. Formulation
In SR image reconstruction, each LR frame contributes new information for interpolating subpixel values of the same scene. To obtain different information about the same scene, relative scene motion must be recorded from frame to frame. If these scene motions are known, or can be estimated to subpixel accuracy, SR image reconstruction is possible. Each LR frame can be modeled as a noisy, down-sampled version of the HR image that has been shifted and blurred, or more formally [20]

y = Wz + n    (1)

where y is an NM^2 × 1 vector representing the N measured LR images, each of the same M × M size, in lexicographic order; z is a q^2M^2 × 1 vector representing the qM × qM HR image in lexicographic order; q is the magnification factor; W is an NM^2 × q^2M^2 matrix representing the geometric shift, blur, and down-sampling operator that acts on z to yield y; and n is independent identically distributed (iid) Gaussian noise with the probability density function (pdf)

\Pr(n) = \frac{1}{(2\pi)^{p/2}\sigma_{\eta}^{p}} \exp\left(-\frac{1}{2\sigma_{\eta}^{2}} n^{T} n\right)    (2)

A MAP estimate of the HR image z can be computed as

\hat{z} = \arg\max_{z}\{\log \Pr(z|y)\} = \arg\min_{z}\{-\log \Pr(y|z) - \log \Pr(z)\}    (3)

Commonly, the prior image model is chosen to be a Gauss–Markov-random-field-based model [17], whose density has the form

\Pr(z) = \frac{1}{(2\pi)^{K/2}|C|^{1/2}} \exp\left(-\frac{1}{2\lambda} z^{T} C^{-1} z\right)    (4)

where C is the covariance matrix of z and λ is a tuning parameter. Using Eqs. (1) and (2), the conditional density can be written as

\Pr(y|z) = \frac{1}{(2\pi)^{K/2}\sigma_{\eta}^{K}} \exp\left(-\frac{1}{2\sigma_{\eta}^{2}} \|y - Wz\|^{2}\right)    (5)

By substituting Eqs. (4) and (5) into Eq. (3), after some manipulation, we get

\hat{z} = \arg\min_{z} f(z) = \arg\min_{z}\left\{\frac{1}{2\sigma_{\eta}^{2}}\|y - Wz\|^{2} + \frac{1}{2\lambda} z^{T} C^{-1} z\right\}    (6)

Many techniques can be used to solve Eq. (6), such as the gradient descent (GD), CG, and PCG algorithms. Owing to restrictions on paper length, we present a complete computational complexity study only for the GD algorithm, followed by a brief discussion of the other algorithms. For the GD algorithm, the estimate of the HR image z is updated by

\hat{z}^{n+1} = \hat{z}^{n} - \varepsilon^{n} g^{n}    (7)

where g^n is the gradient of the cost function f(z) at the nth iteration, given by

g^{n} = \nabla f(\hat{z}^{n}) = \frac{1}{\sigma_{\eta}^{2}} W^{T}(W\hat{z}^{n} - y) + \frac{1}{\lambda} C^{-1}\hat{z}^{n}    (8)

and ε^n is the optimal step size, which can be calculated by minimizing

f(\hat{z}^{n+1}) = f(\hat{z}^{n} - \varepsilon^{n} g^{n})    (9)

Substituting Eq. (6) into Eq. (9), differentiating the right-hand side of Eq. (9) with respect to ε^n, and setting the derivative to zero, after some manipulation, we get

\varepsilon^{n} = \frac{\frac{1}{\sigma_{\eta}^{2}}\tilde{g}^{T}\tilde{d} + \frac{1}{\lambda} c}{\frac{1}{\sigma_{\eta}^{2}}\tilde{g}^{T}\tilde{g} + \frac{1}{\lambda} h}    (10)

where the following notation is adopted hereafter for convenience:

\tilde{g} = Wg^{n}, \quad \tilde{d} = W\hat{z}^{n} - y, \quad c = (g^{n})^{T} C^{-1}\hat{z}^{n}, \quad h = (g^{n})^{T} C^{-1} g^{n}

(11)

Although MAP can deal with general geometric motion, in this paper we concentrate on a special case in which the geometric motions are pure global translations. This assumption is practical in many SR applications, such as satellite image enhancement; extraction of HR stills from video sequences when the photographed scene is static and the images are obtained with slight translations; and extraction of an object of interest from video surveillance sequences. The extension to general geometric motion is discussed in Section 5. For global translations, motion estimation techniques can be used to register each LR image on a fixed HR sampling grid as a pair of integers (k, l), meaning that the LR image has a translation of k HR pixels down and l HR pixels to the right (this motion vector notation is used throughout this paper). In the ideal case, at most q^2 nonredundant LR frames are available for a resolution enhancement of q in each dimension. However, it is usually almost impossible to acquire exactly q^2 such frames, except with translation-controlled imagers or similar systems. In most SR applications, there are more than q^2 frames for small q, but far fewer than q^2 frames for large q. Fig. 1 illustrates a simple SR image reconstruction conceptually. In the figure, the pixel (1,1) of the LR image is


a weighted average over the solid box, whereas pixel (1,1) in another frame is a weighted average over the dashed box. The relative motion from the solid box to the dashed box is 1 HR pixel down and 1 HR pixel to the right.

Fig. 1. Super-resolution image reconstruction.

Table 1
Estimation of OP

Item      | OP
ẑ^n       | q^2M^2
d̃         | q^4M^2
g         | q^4M^2 + q^2M^2 + Dq^2M^2
g̃         | q^4M^2
g̃^T d̃     | 2q^2M^2
g̃^T g̃     | 2q^2M^2
c         | q^2M^2 + Dq^2M^2
h         | q^2M^2 + Dq^2M^2
Total     | 3q^4M^2 + 8q^2M^2 + 3Dq^2M^2
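Returning to the GD iteration of Section 2.1, Eqs. (7)-(10) amount to only a few lines of code. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes the trivial prior covariance C = I and a dense toy W that block-averages a 4×4 HR image into a single 2×2 LR frame.

```python
import numpy as np

def gd_sr(W, y, lam=1.0, sigma2=1.0, n_iter=100):
    """MAP super-resolution by gradient descent, Eqs. (6)-(10), with C = I."""
    z = W.T @ y                           # crude initial HR estimate
    for _ in range(n_iter):
        d = W @ z - y                     # d~ = W z - y
        g = (W.T @ d) / sigma2 + z / lam  # gradient, Eq. (8) with C^{-1} = I
        gt = W @ g                        # g~ = W g
        num = (gt @ d) / sigma2 + (g @ z) / lam   # numerator of Eq. (10)
        den = (gt @ gt) / sigma2 + (g @ g) / lam  # denominator of Eq. (10)
        if den == 0.0:
            break
        z = z - (num / den) * g           # update, Eq. (7)
    return z

# Toy problem: one 2x2 LR frame observed from a 4x4 HR image by 2x2
# block averaging (a stand-in for the shift/blur/decimation operator W).
W = np.zeros((4, 16))
for bi in range(2):
    for bj in range(2):
        for di in range(2):
            for dj in range(2):
                W[bi * 2 + bj, (2 * bi + di) * 4 + (2 * bj + dj)] = 0.25
rng = np.random.default_rng(0)
z_true = rng.random(16)
y = W @ z_true
z_hat = gd_sr(W, y, lam=1e6)              # large lam: nearly flat prior
```

With a single frame the problem is underdetermined, so the prior (here the trivial C = I) selects one of the many HR images consistent with y; only the re-projected residual ‖Wẑ − y‖ is expected to be small.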

2.2. Computational complexity analysis
For simplicity, set N = q^2; then ẑ^n, g, g̃, and d̃ are vectors of the same q^2M^2 × 1 size. Since calculating ẑ^n, g, g̃, and d̃, together with the related g̃^T d̃, g̃^T g̃, c, and h, demands the most computation, the computational complexity of the GD algorithm can reasonably be judged by totaling the number of operations spent on each element of the above four vectors, denoted OP (operation count). An optimized implementation of the GD algorithm should first calculate and store the four vectors in each iteration, and then use them to calculate the other quantities. From Eq. (1), if N = q^2, then W is a q^2M^2 × q^2M^2 matrix in which each row has the form

[\,0 \cdots a_{1} \cdots a_{q}\; 0 \cdots 0\; a_{q+1} \cdots a_{2q}\; 0 \cdots 0\; a_{q^{2}-q+1} \cdots a_{q^{2}}\; 0 \cdots\,]    (12)

where the a_j are constants, fully determined by the point spread function of the imaging system. There are q^2 nonzero entries in each row, so when Eq. (11) is used to calculate d̃, each element of d̃, as Eq. (12) shows, needs q^2 operations. Since there are q^2M^2 elements in d̃, the total OP for calculating d̃ is

OP = q^{4}M^{2}    (13)

The required OPs for calculating ẑ^n, g, g̃, d̃, g̃^T d̃, g̃^T g̃, c, and h are listed in Table 1. The constant D in Table 1 is the number of pixels involved in evaluating the prior image model per HR pixel; the more complex the prior image model, the higher the value of D. In summary, when the GD algorithm is used to solve Eq. (6), the total OP of each iteration depends mainly on the magnification factor and the image size. (Note that the computational complexity also depends on the total number of LR images, N; however, N can be disregarded here because we have set N = q^2.) More formally,

OP = 3q^{4}M^{2} + 8q^{2}M^{2} + 3Dq^{2}M^{2}    (14)
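As a quick check on this bookkeeping, Eq. (14) is easy to evaluate programmatically; the helper below is a hypothetical illustration, not part of the paper:

```python
def op_map(q, M, D):
    """Per-iteration operation count of the GD-based MAP algorithm, Eq. (14)."""
    return 3 * q**4 * M**2 + 8 * q**2 * M**2 + 3 * D * q**2 * M**2
```

For q = 4, M = 1, D = 0 this gives 896 = 56·q^2M^2; the q^4 term dominates as the magnification factor grows.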

3. Computational complexity reduction

There are, so far, two methods for reducing the OP: the recursive multilevel reconstruction algorithm and parallel SR image reconstruction.

3.1. Recursive multilevel reconstruction algorithm

3.1.1. STEP algorithm
The recursive multilevel reconstruction algorithm is also called the STEP algorithm. Given N frames of LR images of the same M × M size, from which an HR image of size qM × qM is to be reconstructed, we magnify in n stages (for simplicity, set N = q^2 and q = 2^n, where n is an integer; nonideal cases are discussed later). The STEP algorithm can then be stated as:

1. Calculate the motion vectors of the LR images.
2. Set the image number m = N and the stage i = 1, and take the m LR images as input images.
3. Split the m input images into m/4 subsets of four images each, such that each image in a subset has a vertical/horizontal translation of 1/2 pixel with respect to the other images in the subset; i.e. if the translation of one image is (k, l), then the translations of the other three images in the same subset should be (k+1/2, l), (k, l+1/2), and (k+1/2, l+1/2).
4. Apply the MAP algorithm to each subset to get m/4 images of size 2^i M × 2^i M.
5. If i = n, stop; otherwise, set m = m/4 and i = i+1, take all the reconstructed images as input images, and return to step 3.

The basic principle of the grouping is that the relation among the motion vectors of the reconstructed images of the subsets is the same as that among the reference images of the subsets before reconstruction.
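The grouping in step 3 can be sketched in a few lines. The function below is a hypothetical illustration (not the authors' code), assuming translations are given in LR-pixel units with fractional parts that are multiples of 1/2, i.e. the ideal case for a per-stage magnification of 2:

```python
from collections import defaultdict

def group_into_subsets(translations):
    """One grouping stage of the STEP algorithm (step 3 above).

    translations maps a frame id to its estimated translation (k, l) in
    LR-pixel units.  Frames whose translations are (k, l), (k+1/2, l),
    (k, l+1/2), (k+1/2, l+1/2) for the same integer part (k, l) end up
    in the same subset; incomplete subsets (the nonideal case of
    Section 3.1.4) are simply dropped here.
    """
    buckets = defaultdict(dict)
    for fid, (k, l) in translations.items():
        base = (int(k // 1), int(l // 1))                # shared integer part
        half = (round((k % 1) * 2), round((l % 1) * 2))  # 0 or 1 half-pixels
        buckets[base][half] = fid
    return [set(b.values()) for b in buckets.values() if len(b) == 4]
```

For example, eight ideally placed frames form two complete subsets of four.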


Table 2
Comparison of OP

         | D = 0                                                             | D = 5
N/q^a    | MAP        | STEP (% decreased) | Parallel (% decreased) | MAP        | STEP (% decreased)  | Parallel (% decreased)
4/2      | 20q^2M^2   | 20q^2M^2 (0%)      | 20q^2M^2 (0%)          | 35q^2M^2   | 35q^2M^2 (0%)       | 35q^2M^2 (0%)
16/4     | 56q^2M^2   | 40q^2M^2 (28.6%)   | 25q^2M^2 (55.4%)       | 71q^2M^2   | 70q^2M^2 (1.4%)     | 43.8q^2M^2 (38.3%)
64/8     | 200q^2M^2  | 60q^2M^2 (70%)     | 26.3q^2M^2 (86.9%)     | 215q^2M^2  | 105q^2M^2 (51.2%)   | 45.9q^2M^2 (78.7%)
256/16   | 776q^2M^2  | 80q^2M^2 (89.7%)   | 26.6q^2M^2 (96.6%)     | 791q^2M^2  | 140q^2M^2 (82.3%)   | 46.5q^2M^2 (94.1%)

a N is the total frame number and q is the magnification factor; "% decreased" is relative to the MAP column. "Parallel" denotes parallel reconstruction.
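The D = 0 columns of Table 2 can be regenerated from the complexity formulas derived in Sections 2.2, 3.1.2, and 3.2.1 (Eqs. (14), (18), and (23)); the script below is a small hypothetical check, not part of the paper:

```python
from math import log2

def op_map(q, M=1, D=0):       # Eq. (14): plain MAP, per iteration
    return 3 * q**4 * M**2 + 8 * q**2 * M**2 + 3 * D * q**2 * M**2

def op_step(q, M=1, D=0):      # Eq. (18): STEP algorithm, q = 2**n
    return (20 + 3 * D) * q**2 * M**2 * log2(q)

def op_parallel(q, M=1, D=0):  # Eq. (23): parallel STEP, q = 2**n
    n = int(log2(q))
    series = 1 + (4**(n - 1) - 1) / (3 * 4**(n - 1))
    return (20 + 3 * D) * q**2 * M**2 * series

for q in (2, 4, 8, 16):
    saved = 100 * (1 - op_step(q) / op_map(q))
    print(f"q={q}: STEP saves {saved:.1f}% over plain MAP")
```

The savings grow with q because the MAP cost grows like q^4 while the STEP cost grows like q^2 log q.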

3.1.2. Computational complexity analysis
For the STEP algorithm, at stage i there are N/4^i subsets, each containing four images of the same 2^{i-1}M × 2^{i-1}M size, which are to be fused into an image of size 2^i M × 2^i M. Applying Eq. (14) to one subset gives

OP = 3\times 2^{4}(2^{i-1}M)^{2} + 8\times 2^{2}(2^{i-1}M)^{2} + 3D\times 2^{2}(2^{i-1}M)^{2} = 3\times 4^{i+1}M^{2} + 8\times 4^{i}M^{2} + 3D\times 4^{i}M^{2}    (15)

Since there are N/4^i subsets at stage i, the total OP of stage i is

OP = (3\times 4^{i+1}M^{2} + 8\times 4^{i}M^{2} + 3D\times 4^{i}M^{2})\times N/4^{i} = 3\times 4\times NM^{2} + 8NM^{2} + 3DNM^{2}    (16)

Note that this OP is a constant, independent of the stage i. Because the total number of stages is n, for the STEP algorithm we have

OP = (3\times 4\times NM^{2} + 8NM^{2} + 3DNM^{2})\times n    (17)

Because N = q^2 and q = 2^n, after some manipulation we get

OP = (20q^{2}M^{2} + 3Dq^{2}M^{2})\times \log_{2} q    (18)

Comparing Eq. (14) with Eq. (18), it is obvious that, as N (or q) increases, the OP of the STEP algorithm drops exponentially compared with the OP of the normal MAP algorithm (Table 2).

Fig. 2. Image segmentation for parallel reconstruction, with (a) one frame of the four LR images of the same 4×4 size, and (b) the reconstructed HR image of size 8×8. Part 1/2/3/4 of the LR images is used to reconstruct the corresponding part of the HR image.

Fig. 3. The average convergent DSNR versus the magnification factor.

3.1.3. Relation to other MAP-based algorithms
The only difference between the CG algorithm and the GD algorithm is the gradient direction g^n in Eq. (7). For the GD algorithm, g^n is given by Eq. (8); for the CG algorithm, it is determined by

g^{n} = p^{n} + \beta^{n} g^{n-1}    (19)

where the following notation is used:

p^{n} = \nabla f(\hat{z}^{n}), \qquad \beta^{n} = \begin{cases} 0 & n = 0 \\ \dfrac{(\nabla f(\hat{z}^{n}))^{T} p^{n}}{(\nabla f(\hat{z}^{n-1}))^{T} p^{n}} & n > 0 \end{cases}    (20)

Note that no new terms appear in Eqs. (19) and (20), which implies that the CG algorithm can also be used in the STEP algorithm to achieve the same increase in efficiency. The PCG algorithm differs from the CG algorithm in that p^n is modified as

p^{n} = Q\nabla f(\hat{z}^{n})    (21)

where Q is a q^2M^2 × q^2M^2 preconditioner matrix. When PCG is used in the STEP algorithm, a different Q of a different size must be preset for each stage. A similar analysis reveals that the efficiency increase achieved by using PCG in the STEP algorithm is the same as that achieved by using GD. Using Baker and Kanade's method evidently results in a quadratic error function similar to Eq. (6) [15], which

implies that it can also be used in the STEP algorithm, and a corresponding increase in efficiency can be expected as well.

3.1.4. Extension to nonideal cases
Since there is no restriction on the number of LR images in the MAP framework, the same grouping method is applicable to all nonideal cases. After applying the grouping method, if N < q^2, then some subsets classified in the ideal case may be null or may have fewer than four images; if N > q^2, some subsets may contain more than four images. In the latter case, Kim et al. [5] proved that an HR image with a much higher peak SNR (PSNR) can be reconstructed by incorporating all the noisy LR images when more than the necessary number of noisy LR frames is available (N = 31 > q^2 = 4^2 in their simulation). If the magnification is not a power of 2 (e.g. 6), we can simply factor it (e.g. 2×3) and process the magnification stage by stage accordingly (i.e. magnify by 2, then by 3, consecutively). When

Fig. 4. Reconstructed Cameraman magnified by (from top to bottom) 2, 4, 8, and 16, with (a) a typical LR image, (b) the result yielded by the MAP algorithm, and (c) the result yielded by the STEP algorithm.


magnifying by 3, each of the 9 images in a subset has a vertical/horizontal translation of 1/3 or 2/3 pixel with respect to the other images; i.e. if an image has a translation of (k, l), the translations of the other eight images in the same subset should be (k+1/3, l), (k, l+1/3), (k+1/3, l+1/3), (k+2/3, l), (k, l+2/3), (k+2/3, l+2/3), (k+1/3, l+2/3), and (k+2/3, l+1/3).

3.2. Parallel SR image reconstruction

There are two kinds of parallel SR image reconstruction: parallel reconstruction for the STEP algorithm and parallel reconstruction for segmented image blocks.

3.2.1. Parallel reconstruction for the STEP algorithm
For the STEP algorithm, because the subsets at each stage are independent of one another, parallel reconstruction is possible. If parallel reconstruction is applied, then from Eq. (16) the actual OP at stage i is given by

OP = (20q^{2}M^{2} + 3Dq^{2}M^{2})\times 4^{i}/N = (20q^{2}M^{2} + 3Dq^{2}M^{2})\times 1/4^{n-i}    (22)

Thus, the total OP of parallel reconstruction is

OP = (20q^{2}M^{2} + 3Dq^{2}M^{2})\times\left(1 + \frac{1}{4} + \cdots + \frac{1}{4^{n-1}}\right) = (20q^{2}M^{2} + 3Dq^{2}M^{2})\times\left(1 + \frac{4^{n-1}-1}{3\times 4^{n-1}}\right)    (23)

and the results are listed in Table 2 for comparison.

3.2.2. Parallel reconstruction for segmented image blocks
From Eqs. (14), (18), and (23), a small M evidently results in a low OP. Therefore, another effective way to reduce the OP is to segment the LR images into small blocks and then perform SR reconstruction on all blocks simultaneously. The principle of image segmentation is that the relation of Eq. (1) should be maintained between the blocks of the LR images and the corresponding blocks of the HR image. To illustrate the segmentation method, an example is given in Fig. 2. Four LR images of the same 4×4 size are to be fused into an image of size 8×8. The motion vectors of the four LR images are (0,0), (0,1), (1,0), and (1,1), respectively. After segmentation, each LR image is divided into four blocks of the same 2×2 size (marked 1, 2, 3, and 4 in Fig. 2(a)). As a consequence, the reconstructed HR image is divided into four overlapping blocks of the different sizes 4×4, 4×5, 5×4, and 5×5 (Fig. 2(b)). By image segmentation, a single SR image reconstruction turns into four reconstructions, i.e. part 1/2/3/4 of the four LR images is used to reconstruct the corresponding part of the HR image. After all four reconstructions are finished, the overlapping pixels are deleted and the four parts are merged into the final HR image. Obviously, the reconstruction of each block of the HR image is independent, so parallel reconstruction is possible. From Eq. (14), it is easy to see that the reduction of OP is nearly 75% for the case shown in Fig. 2.

4. Experiment results

Here we present only the results of using the STEP algorithm and of parallel reconstruction for the STEP algorithm. For motion estimation, the technique described in [12] was used. As the MAP SR image reconstruction algorithm, since there is not much perceptual difference in reconstruction quality among the published algorithms, the algorithm developed by Schultz and Stevenson [8] was chosen. RUN TIME, the CPU time required by the MAP algorithm or by the proposed methods, is used to make a direct quantitative comparison of computational complexity. All the tests were run on a personal computer with a Pentium IV 1.5-GHz CPU.

4.1. Simulations

The images used for simulation were Airplane, Lena, Baboon, Bird, and Cameraman. Each original image was synthetically translated by all possible pixel amounts, blurred, and down-sampled by factors of 2, 4, 8, and 16. For each down-sampling factor, we produced enough LR images so that there were as many LR pixels in total as pixels in the original image, i.e. 4 half-size images, 16 quarter-size images, and so on. All the generated LR images were bilinearly interpolated before motion estimation. To provide a quantitative comparison of reconstruction quality, the improved SNR (DSNR) is used, defined by

\mathrm{DSNR} = 10\log_{10}\frac{\|z - z_{0}\|^{2}}{\|z - \hat{z}\|^{2}}\ \mathrm{(dB)}    (24)
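Eq. (24) translates directly into code. The sketch below is a hypothetical helper (z is the original image, z0 the bilinearly interpolated reference, zhat the reconstruction; all names are illustrative):

```python
import numpy as np

def dsnr(z, z0, zhat):
    """Improved SNR of Eq. (24), in dB: gain of zhat over the bilinear z0."""
    return 10 * np.log10(np.sum((z - z0)**2) / np.sum((z - zhat)**2))
```

Halving the reconstruction error relative to the interpolation error raises the DSNR by 10·log10(4) ≈ 6 dB.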

Fig. 5. The DSNR of the reconstructed Cameraman versus the iteration number under magnification of 4.

Table 3
Comparison of average RUN TIME (seconds) under the equal DSNR criterion^a

N/q^b   | MAP     | STEP (% decreased)  | Parallel reconstruction (% decreased)
4/2     | 250.1   | 245.8 (0%)          | 245.8 (0%)
16/4    | 780     | 472.6 (39.4%)       | 290.2 (62.8%)
64/8    | 4455    | 966.7 (78.3%)       | 343.0 (92.3%)
256/16  | 16,560  | 1208.8 (92.7%)      | 397.8 (97.6%)

a Run on a personal computer with a Pentium IV 1.5-GHz CPU.
b N is the total frame number and q is the magnification factor.

where z0 is the bilinear interpolation of the reference frame at each down-sampling factor. For the ideal cases, the STEP and MAP algorithms were applied to all the LR images generated at each down-sampling factor; hence, there were 4, 16, 64, and 256 different frames for magnifications of 2, 4, 8, and 16, respectively. Fig. 3 shows the average convergent DSNR versus the magnification factor. For subjective evaluation, Fig. 4 depicts the convergent Cameraman under magnifications of 2, 4, 8, and 16. Although the STEP algorithm suffers a little degradation in DSNR, no visual difference can be observed between the images produced by the MAP and STEP algorithms, which implies that a quantitative error criterion better adapted to the human eye is needed for this kind of comparison. For the STEP algorithm, as demonstrated by Eq. (16), the RUN TIME of each stage is almost the same. For example, when reconstructing Lena under a magnification of 16, the RUN TIME for stages 1, 2, 3, and 4 was 236.37, 247.13, 252.70, and 259.37 s, respectively. Fig. 5 shows the DSNR of the reconstructed Cameraman versus the iteration number under a magnification of 4. Because the convergent error of the STEP and MAP algorithms is not the same, the equal DSNR criterion was used here for a fair comparison of RUN TIME; i.e. we stop the MAP and STEP algorithms when the same DSNR is reached. For example, for the case shown

Fig. 6. Reconstructed Lena magnified by 4 from 32 noisy frames, with (a) yielded by the MAP algorithm, (b) yielded by the STEP algorithm.

Fig. 7. Reconstructed Lena magnified by 8 from 16 noisy frames, with (a) yielded by the MAP algorithm, (b) yielded by the STEP algorithm.


Table 4
Comparison of average RUN TIME (seconds) under the equal DSNR criterion^a

N/q^b  | MAP      | STEP (% decreased)  | Parallel reconstruction (% decreased)
64/8   | 5103.68  | 1219.13 (76.1%)     | 427.58 (91.6%)
48/8   | 3931.05  | 1018.32 (74.1%)     | 418.66 (89.4%)
32/8   | 2705.25  | 796.06 (70.6%)      | 426.32 (84.2%)
16/8   | 1911.79  | 586.92 (69.3%)      | 401.92 (79%)
48/4   | 2568.2   | 1088.25 (57.6%)     | 444.33 (82.7%)
32/4   | 1746.81  | 815.47 (53.3%)      | 376.2 (78.5%)
24/4   | 1302.07  | 679.73 (47.8%)      | 342.15 (73.7%)
16/4   | 846.03   | 543.45 (35.8%)      | 308.1 (63.6%)

a Run on a personal computer with a Pentium IV 1.5-GHz CPU.
b N is the total frame number and q is the magnification factor.

in Fig. 5, we stop both the MAP and the STEP algorithm after the sixth iteration. The average RUN TIME results under the equal DSNR criterion are shown in Table 3. For magnifications larger than 2, the efficiency increase is remarkable for both the STEP algorithm and parallel reconstruction.

For the nonideal cases, Lena was employed for further investigation. For N ≥ q^2, we randomly chose 16, 24, 32, and 48 frames from the 16 images generated at the down-sampling factor of 4 (each LR image could be chosen repeatedly), added white noise to each frame, and then proceeded with SR image reconstruction. The procedure was repeated a sufficient number of times for each specified number of LR images. For N ≤ q^2, we randomly picked 16, 32, 48, and 64 frames from the 64 images generated at the down-sampling factor of 8, then conducted the same experiment as described for the N ≥ q^2 cases. For subjective evaluation, Fig. 6 shows the reconstructed Lena magnified by 4 from 32 noisy frames, and Fig. 7 shows the reconstructed Lena magnified by 8 from 16 noisy frames. No visual difference can be observed between the results yielded by the MAP and STEP algorithms in either figure. The comparison of RUN TIME for all nonideal cases is listed in Table 4, which clearly shows that (1) reducing the number of input LR images results in some degradation of the efficiency gain when N ≤ q^2, and (2) more frames result in an obvious increase in the efficiency gain when N ≥ q^2.

Fig. 8. SR reconstruction of a real Calendar video sequence (magnified by 4), with (a) a typical LR image, (b) yielded by bilinear interpolation, (c) yielded by the MAP algorithm, and (d) yielded by the STEP algorithm.


4.2. Real videos

Our final experiment was the resolution enhancement of a real Calendar video sequence of 20 frames, captured by a mini webcam (300,000 pixels). One of the LR input images is shown in Fig. 8(a); the bilinear interpolation of Fig. 8(a) by a factor of 4 is shown in Fig. 8(b). The results yielded by the MAP and STEP algorithms are shown in Fig. 8(c) and (d), respectively. Again, no perceptual difference can be observed between (c) and (d). The RUN TIME needed by the MAP algorithm was 770.235 s, whereas only 383.571 s was needed by the STEP algorithm; an efficiency increase of 50.2% was thus achieved by the proposed STEP algorithm. Moreover, the RUN TIME drops to 153.43 s, i.e. more than 80% of the RUN TIME is saved, if parallel reconstruction is also applied.

5. Concluding remarks

This paper studied the computational complexity of MAP-based multiframe SR image reconstruction algorithms and provided a new fast algorithm, as well as methods for parallel SR image reconstruction. Although only pure global translations were examined, the application of the proposed techniques to the case of nonglobal translations is straightforward: segment the images into blocks so that a pure global translation can be assumed for each block, apply the proposed techniques to each block, and, after finishing all the reconstructions, merge the blocks to obtain the final result. Moreover, it should be pointed out that all the techniques proposed in this paper are also applicable to other SR algorithms, such as the POCS method and the frequency domain method, and the same efficiency increase can be expected there as well.

Acknowledgements

This research is supported in part by the Ministry of Education, Government of Guangdong Province (Research Grant #8303069). The authors would like to thank the anonymous reviewers for their valuable suggestions, which helped make this paper more complete, and Dr Ren Zhang for his work that helped improve the presentation of this paper.

References

[1] R. Li, B. Zeng, M.L. Liou, A new three-step search algorithm for block motion estimation, IEEE Trans. Circuits Syst. Video Technol. 4 (4) (1994) 438–442.
[2] J. Chalidabhongse, C.-C.J. Kuo, Fast motion vector estimation using multiresolution-spatio-temporal correlation, IEEE Trans. Circuits Syst. Video Technol. 7 (3) (1997) 477–488.


[3] R.Y. Tsai, T.S. Huang, Multiframe image restoration and registration, Advances in Computer Vision and Image Processing, vol. 1, 1984, pp. 317–339.
[4] E. Kaltenbacher, R.C. Hardie, High-resolution infrared image reconstruction using multiple low resolution aliased frames, in: Proceedings of the IEEE National Aerospace Electronics Conference (NAECON), Dayton, OH, vol. 2, 1996, pp. 702–709.
[5] S.P. Kim, N.K. Bose, H.M. Valenzuela, Recursive reconstruction of high resolution image from noisy undersampled multiframes, IEEE Trans. Acoust., Speech, Signal Process. 38 (6) (1990) 1013–1027.
[6] P. Oskoui-Fard, H. Stark, Tomographic image reconstruction using the theory of convex projections, IEEE Trans. Med. Imaging 7 (1) (1988) 45–58.
[7] A.J. Patti, Y. Altunbasak, Artifact reduction for set theoretic super resolution image reconstruction with edge adaptive constraints and higher-order interpolants, IEEE Trans. Image Process. 10 (1) (2001) 179–186.
[8] R.R. Schultz, R.L. Stevenson, Extraction of high-resolution frames from video sequences, IEEE Trans. Image Process. 5 (6) (1996) 996–1011.
[9] P. Cheeseman, B. Kanefsky, R. Kraft, J. Stultz, R. Hanson, Super-resolved surface reconstruction from multiple images, NASA Tech. Rep. FIA-94-12, NASA Ames Research Center, Moffett Field, CA, 1994.
[10] R.C. Hardie, K.J. Barnard, E.E. Armstrong, Joint MAP registration and high-resolution image estimation using a sequence of undersampled images, IEEE Trans. Image Process. 6 (12) (1997) 1621–1633.
[11] R.C. Hardie, K.J. Barnard, J.G. Bognar, E.E. Armstrong, E.A. Watson, High resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system, Opt. Eng. 7 (3) (1998) 247–260.
[12] N. Nguyen, P. Milanfar, G. Golub, A computationally efficient superresolution image reconstruction algorithm, IEEE Trans. Image Process. 10 (4) (2001) 573–583.
[13] T.J. Connolly, R.G. Lane, Gradient methods for superresolution, in: Proceedings of the International Conference on Image Processing, vol. 1, 1997, pp. 917–920.
[14] J.H. Shin, J.S. Yoon, J.K. Paik, M. Abidi, Fast superresolution for image sequences using motion adaptive relaxation parameters, in: Proceedings of the 1999 International Conference on Image Processing, vol. 3, 1999, pp. 676–680.
[15] S. Baker, T. Kanade, Limits on super-resolution and how to break them, IEEE Trans. Pattern Anal. Mach. Intell. 24 (9) (2002) 1167–1183.
[16] M. Elad, Y. Hel-Or, A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur, IEEE Trans. Image Process. 10 (8) (2001) 1187–1193.
[17] C. Bouman, K. Sauer, A generalized Gaussian image model for edge-preserving MAP estimation, IEEE Trans. Image Process. 2 (3) (1993) 296–310.
[18] R.R. Schultz, R.L. Stevenson, A Bayesian approach to image expansion for improved definition, IEEE Trans. Image Process. 3 (3) (1994) 233–242.
[19] D. Terzopoulos, Multilevel computational processes for visual surface reconstruction, Comput. Vis. Graph. Image Process. 24 (1983) 52–96.
[20] M. Elad, A. Feuer, Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images, IEEE Trans. Image Process. 6 (12) (1997) 1646–1658.
Lane, Gradient methods for superresolution, in Proceedings of the International Conference on Image Processing. vol. 1, 1997, pp. 917–920. [14] J.H. Shin, J.S. Yoon, J.K. Paik, M. Abidi, Fast superresolution for image squences using motion adaptive relaxation parameters, in Proceedings of the 1999 Internationl Conference on Image Processing. vol. 3, 199, pp. 676–680. [15] S. Baker, T. Kanade, Limits on super-resolution and how to break them, IEEE Trans. Pattern Anal. Mach. Intell. 24 (9) (2002) 1167–1183. [16] M. Elad, Y. Hel-or, A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur, IEEE Trans. Image Process. 10 (8) (2001) 1187–1193. [17] C. Bouman, K. Sauer, A generalized Gaussian image model for edgepreserving MAP estimation, IEEE Trans. Image Process. 2 (3) (1993) 296–310. [18] R.R. Schultz, R.L. Stevenson, A Bayesian approach to image expansion for improved definition, IEEE Trans. Image Process. 3 (3) (1994) 233–242. [19] D. Terzopoulos, Multilevel computational processes for visual surface reconstruction, Comput. Vis. Graph. Image Process. 24 (1983) 52–96. [20] M. Elad, A. Feuer, Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images, IEEE Trans. Image Process. 6 (12) (1997) 1646–1658.