Optik 125 (2014) 3612–3618
Single image motion deblurring: An accurate PSF estimation and ringing reduction

Ashwini M. Deshpande a,∗, Suprava Patnaik b

a Department of Electronics and Telecommunication Engineering, TSSM's Bhivarabai Sawant College of Engineering and Research, Pune (MS), India
b Department of Electronics Engineering, S. V. National Institute of Technology, Surat, Gujarat, India
∗ Corresponding author.
Article history: Received 16 July 2013; Accepted 8 January 2014

Abstract
In this paper we propose a single-image motion deblurring algorithm based on a novel use of the dual Fourier spectrum combined with a bit plane slicing algorithm and the Radon transform (RT) for accurate estimation of the PSF parameters, namely the blur length and the blur angle. Even after very accurate PSF estimation, deconvolution algorithms tend to introduce ringing artifacts at boundaries and near strong edges. To prevent this post-deconvolution effect, a post-processing method is also proposed within the framework of the traditional Richardson–Lucy (RL) deconvolution algorithm. Experimental results, evaluated with both qualitative and quantitative (PSNR, SSIM) metrics and verified on a dataset of grayscale and color blurred images, show that the proposed method outperforms existing algorithms for the removal of uniform blur. A comparison with state-of-the-art methods demonstrates the usefulness of the proposed algorithm for deblurring real-life images and photographs.
Keywords: Motion blur; Dual Fourier spectrum; Bit plane slicing; Ringing artifacts; Weight matrix
1. Introduction

One of the long-standing challenges in the field of imaging is motion blur. Motion blur is generated by relative motion between the camera and the scene during exposure. To remove an undesirable blur from a single image, it is necessary to estimate the blur kernel, or point-spread function (PSF), and then restore the latent image through deconvolution. When the PSF of a blurred image is unknown, the problem of estimating the latent image is typically referred to as blind image deblurring. It is a more challenging problem due to the loss of high-frequency details of the image and the unknown blurring process. Most existing blind motion deblurring methods model the blurring process as a convolution of a latent image f and a blur kernel h:
$$g(x, y) = f(x, y) \otimes h(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\alpha, \beta)\, h(x - \alpha, y - \beta)\, d\alpha\, d\beta \tag{1}$$
where ⊗ is the convolution operator, h(x, y) is a linear PSF characterizing the blurring filter for any α and β, f(x, y) is the original image and g(x, y) is the observed blurry image [10]. A uniform motion blur is characterized mathematically in terms of a blur kernel as
$$h(x, y; L, \theta) = \begin{cases} \dfrac{1}{L} & \text{if } \sqrt{x^2 + y^2} \le \dfrac{L}{2} \ \text{ and } \ \dfrac{x}{y} = -\tan\theta \\[4pt] 0 & \text{elsewhere} \end{cases} \tag{2}$$

where L is the length of the blur and θ is the blur direction, or blur angle.
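To make the parameterization concrete, the following minimal sketch (not the authors' code) rasterizes such a kernel with NumPy; the helper name motion_blur_psf, the odd support size and the sampling density are our own assumptions. MATLAB's fspecial('motion', L, theta) produces a comparable kernel.

```python
import numpy as np

def motion_blur_psf(L, theta_deg, size=None):
    """Linear motion-blur PSF of length L (pixels) at angle theta (degrees), cf. Eq. (2).
    A sketch: the kernel is a line segment rasterized onto an odd-sized grid and
    normalized to unit sum."""
    theta = np.deg2rad(theta_deg)
    size = size or int(2 * np.ceil(L / 2) + 1)   # odd support just large enough
    psf = np.zeros((size, size))
    c = size // 2
    # Sample points along the motion direction and mark them on the grid.
    for t in np.linspace(-L / 2.0, L / 2.0, 4 * int(np.ceil(L)) + 1):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))    # minus: image rows grow downwards
        psf[y, x] = 1.0
    return psf / psf.sum()                       # unit DC gain
```

The blurred observation of Eq. (1) can then be simulated, for example with scipy.signal.fftconvolve(f, motion_blur_psf(25, 35), mode='same').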
Eq. (2) shows that the motion blur kernel is determined by the PSF parameters L and θ; therefore, estimating the PSF accurately depends on how accurately one can estimate these blur parameters.

The widely used methods of PSF estimation for linearly motion-blurred images are based on locating spectral zeros [28,3]. However, these methods have limited scope because the image frequencies tend to dominate the spectral zero locations, and they are also susceptible to noise; they are therefore not suitable for deblurring real blurred photographs. The blur kernel can instead be estimated using additional hardware [2,25,17,12] that augments the camera system to aid the deblurring process, or by software-based blind motion deblurring techniques that impose certain priors on the motion-blur kernel [8,27,6,13,34]. A comprehensive overview of blind deconvolution algorithms is given by Levin et al. [18]. Instead of using an inverse filter in the frequency domain, a Bayesian-based iterative procedure can be used for recovering the sharp image [31,7,16].
One such representative approach is the so-called Richardson–Lucy (RL) method [26,22], which uses Bayes' rule to infer the clear image iteratively. Various adaptations of the RL method can be found in [29,36]. These methods are easy to implement and the results are of relatively good quality; however, unpleasant ringing artifacts near strong edges are a known side effect. In recent years, regularization-based methods have been widely used, such as Tikhonov regularization [30] and the total variation (TV) and its variants [11,20,4,5]. In addition, heavy-tailed statistical priors have been developed to regularize the deblurred images [8,7]. With this approach, manual intervention is required to choose a relevant patch (typically an area without saturated pixels) on which to run the blur kernel estimator in order to obtain satisfying results; modifications of this approach can be found in [27,6,34,14,19]. To suppress the ringing artifacts caused by failures in blur kernel estimation, Wang et al. [35] applied a local prior within the RL algorithm, with deblurring performed using kernel information captured by hardware. A regularized iterative image restoration algorithm [15] and a variational regularization-based approach [24] have also been proposed to reduce ringing artifacts.

In this paper, we propose a novel dual Fourier spectrum method to overcome the limitations of the existing algorithms. The proposed method is robust and improves the PSF estimation accuracy irrespective of the frequency content of the image. A small error in the PSF estimate leads to ringing artifacts in existing deconvolution algorithms, which degrades the quality of the restored image. To reduce these ringing artifacts, we propose a new weight matrix that applies different weights to the pixels during RL deconvolution, yielding a further improvement in restored image quality. The proposed algorithm works equally well for synthetically and naturally blurred images. The results are validated on the Berkeley segmentation dataset [1].

The rest of the paper is organized as follows. The proposed algorithm is discussed in Section 2. Experimental results, including details of the implementation methodology and qualitative as well as quantitative comparison with state-of-the-art deblurring methods, are provided in Section 3. Finally, conclusions are presented in Section 4.
Fig. 1. Block diagram of the proposed blind deblurring algorithm, comprising both PSF parameter estimation and RL deconvolution with the proposed post-processing (the new weight matrix for ringing reduction).
2. Proposed single image deblurring algorithm

The blind image deconvolution problem is divided into two parts: (1) PSF estimation to obtain the blur kernel, and (2) image deconvolution to obtain the latent image. Our contribution to each of these two parts is discussed in this section.

2.1. Motion blur PSF estimation

A uniform motion blur is specified by two parameters: (i) the direction of the motion, θ, and (ii) the length of the motion, L. As shown in Eq. (2), these two parameters play a very important role in estimating the PSF accurately and hence in improving the deblurring performance. The proposed algorithm for deblurring a single uniformly motion-blurred input image is depicted in Fig. 1.

In the first step of the PSF estimation phase, the blurred input image is transformed into the frequency domain, as shown in Fig. 2(d), where a periodic pattern of zeros characterizes the linear blur. The directed sinc-like patches are cues to the underlying blur direction, and the distance between any two zero crossings is a cue to the amount of displacement, i.e., the blur length [3].
Fig. 2. (a) Latent image, (b) first log-spectrum of (a), (c) motion blurred image with L = 25 pixels and θ = 35° without noise, (d) first log-spectrum of (c), (e) second log-spectrum of (c), (f) motion blurred image with zero-mean Gaussian noise of variance 0.001, (g) first log-spectrum of (f), (h) second log-spectrum of (f).
As shown in Fig. 2(g), these zero crossings tend to disappear with the addition of noise, and the directional patches tend to merge with the frequency content of the original image, which makes it difficult to identify the blur parameters. Therefore, one more frequency-domain transformation is applied, as a result of which the zero crossings become more prominent. Fig. 2(e) and (h) depict this second log-spectral representation.
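For reference, the two spectra can be computed as in the sketch below (our helper names, assuming a grayscale floating-point image); the "dual" spectrum is simply the log-spectrum applied twice.

```python
import numpy as np

def log_spectrum(img):
    """Centered log-magnitude Fourier spectrum of a 2-D array."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def dual_log_spectrum(img):
    """Second ("dual") log-spectrum: the log-spectrum of the log-spectrum.
    The periodic zero pattern of a uniform motion blur shows up as a prominent
    directional structure here, even when noise masks it in the first spectrum."""
    return log_spectrum(log_spectrum(img))

# Example: F2 = dual_log_spectrum(blurred_gray.astype(float))
```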
Fig. 3. FFT spectra of all eight bit planes of a blurred image.
To estimate the angle θ̂, the second log-spectral image is divided into eight bit planes (a grayscale image is considered for estimation) by the bit plane slicing algorithm, and the Radon transform of the 4th bit plane is computed. Fig. 3 shows the Fourier analysis of all eight bit planes; among them, the 4th bit plane retains the information of the motion blur and provides an important cue for obtaining the blur angle. This observation was verified on all blurred images in the dataset.

To estimate L̂, the grayscale-transformed dual spectrum is rotated by the estimated angle θ̂. It is observed that the central three rows carry the cues of the blur length. They are converted into a one-dimensional profile by averaging, as shown in Fig. 4(g). Finally, a peak detection step estimates the blur length as the distance between the central peak and a prominent peak on either side of it. The estimated parameters constitute the PSF kernel, which is used to deblur the input image with the conventional RL algorithm. Fig. 4 shows the results at the various stages of the proposed algorithm.
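Putting the two steps together, a rough sketch is given below (it reuses the dual_log_spectrum helper above and relies on scikit-image and SciPy). The bit-plane numbering from the least significant bit, the peak-prominence threshold and the sign convention relating the Radon peak to the motion direction are assumptions and may need adjustment; the paper does not spell out these details.

```python
import numpy as np
from scipy.signal import find_peaks
from skimage.transform import radon, rotate

def estimate_blur_angle(F2, plane=4):
    """Blur-angle estimate from the dual spectrum F2 via bit-plane slicing + Radon.
    Planes are assumed to be numbered 1..8 starting at the least significant bit."""
    F2_u8 = np.uint8(255 * (F2 - F2.min()) / (np.ptp(F2) + 1e-12))
    bitplane = (F2_u8 >> (plane - 1)) & 1
    angles = np.arange(180.0)
    sinogram = radon(bitplane.astype(float), theta=angles, circle=False)
    # The projection angle with the strongest peak follows the dominant stripe;
    # a fixed 90-degree offset w.r.t. the motion direction may apply in practice.
    return float(angles[np.argmax(sinogram.max(axis=0))])

def estimate_blur_length(F2, theta_hat):
    """Blur-length estimate: rotate the dual spectrum by the estimated angle,
    average the central three rows and take the distance from the central peak
    to the nearest prominent side peak, as described in the text."""
    rotated = rotate(F2, -theta_hat, resize=False, preserve_range=True)
    r = rotated.shape[0] // 2
    profile = rotated[r - 1:r + 2, :].mean(axis=0)         # central three rows
    center = profile.shape[0] // 2
    peaks, _ = find_peaks(profile, prominence=0.1 * np.ptp(profile))
    side = peaks[peaks != center]
    if side.size == 0:
        return None                                         # no prominent side peak found
    return int(np.min(np.abs(side - center)))
```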
2.1.1. Comparison with existing PSF estimation methods

The blur parameter estimation performance on a synthetically blurred standard image dataset is compared with the methods in [9,33,21,23]. Table 1 shows a comparative analysis in terms of the maximum error in estimating the motion direction (blur angle) and the blur length. Among these methods, the method in [33] has the best accuracy for estimating the blur length, but its error in angle estimation is too large. The method in [9], based on a cepstral-domain approach and the isotropy derivative arithmetic operator, estimates the blur angle better, but its deblurring results contain a lot of ringing. The method in [23], based on fuzzy logic, shows better results than [21].

Table 1. Comparison of maximum error in PSF parameter estimation.

Method               | Test range of blur parameters  | Max. error, blur angle (°) | Max. error, blur length (pixels)
Lokhande et al. [21] | 0° ≤ θ ≤ 60°, 15 ≤ L ≤ 60      | 5                          | 7
Wu et al. [33]       | 0° ≤ θ ≤ 180°, 10 ≤ L ≤ 50     | 5                          | 0
Moghaddam [23]       | 0° ≤ θ ≤ 180°, 10 ≤ L ≤ 50     | 2                          | 1.9
Fu et al. [9]        | 0° ≤ θ ≤ 180°, 10 ≤ L ≤ 100    | 1                          | 2
Proposed method      | 0° ≤ θ ≤ 170°, 5 ≤ L ≤ 70      | 1                          | 1
Fig. 4. (a) Latent image, (b) motion blurred image with L = 20 pixels and θ = 45°, (c) first log-spectrum of (b), (d) second log-spectrum of (b), (e) 4th bit plane of (d), (f) Radon transform of (e), (g) 1-D profile after averaging the central three rows of the rotated dual spectrum, and (h) deblurred image without post-processing.
In comparison with all of these existing methods, the proposed method shows better estimation accuracy for both the blur length and the blur angle. We evaluated the results by varying the blur length from 5 to 70 pixels and the blur angle from 0° to 170°. It is also noted that the performance of the existing methods deteriorates considerably for very small or very large blur lengths and blur angles, whereas the proposed method gives good results even in these cases.

2.2. Non-blind deconvolution with ringing reduction

To cope with an unavoidable post-deconvolution effect, namely ringing artifacts, either the RL algorithm must be supplied with accurate values of L̂ and θ̂, or some technique must be devised to minimize these artifacts. Although our estimation method comes close to the actual PSF parameter values, as reported in Table 1, under worst-case conditions the maximum deviation in L̂ is 1 pixel and in θ̂ is 1°. Taking this into account, a new post-processing step is introduced into the proposed deblurring algorithm.
Fig. 5. TAJ example: (a) latent image, (b) motion blurred image with L = 20 pixels and θ = 45°, deblurred image without post-processing with the number of RL iterations equal to (c) 50, (d) 100, and (e) 150, respectively.
The discrete Fourier transform (DFT), used by the deblurring function, implicitly assumes the image to be periodic. This assumption creates a high-frequency drop-off at the image boundaries, resulting in boundary-related ringing in the deblurred image. Basic RL iterations also introduce ringing in the restored image as the number of iterations increases, as shown in Fig. 5. These ringing artifacts can be reduced by applying appropriate weights to the pixels during the iterative deconvolution process; however, obtaining an appropriate weight matrix from the blurred image is a challenging task. The weight matrix has the same size as the image, and each of its elements is the weight applied to the corresponding pixel of the blurred image during deconvolution. The proposed weight matrix W(x, y) in the spatial domain is formulated as

$$W(x, y) = \frac{1}{1 + \left[\max\left(0, \sigma_g^2\right)\right]} \tag{3}$$
where (x, y) denotes an arbitrary pixel location and σ_g² is the local variance of the observed image, computed over a local window of variable size such as 3 × 3, 5 × 5, 7 × 7 or 11 × 11. The window size can be selected depending on the extent of the ringing; a 7 × 7 window is used for the experimental results in Section 3. Ringing is most annoying in smooth regions, yet it is most likely to appear near edges. While removing ringing with a smoothing filter, care must be taken to preserve the edges so that the deblurred image remains of good quality. The weight matrix in Eq. (3) is therefore constructed such that pixels on strong edges receive a weight close to 0 and are thus ignored during the RL iterations, while all other pixels receive weights between 0 and 1 so as to suppress the ringing artifacts. The proposed post-processing technique incorporates this spatial weight matrix into the basic RL framework [26,22]: the RL deconvolution accepts the blurred image, the estimated PSF and the proposed weight matrix as inputs.
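A minimal sketch of the weight computation, and of one way to use it inside the RL iterations, is given below. The exact manner in which the authors inject the weights into the RL update is not spelled out in the text, so the blending step here is an assumption; the interface merely mirrors, for example, the WEIGHT argument of MATLAB's deconvlucy. The sketch assumes an 8-bit intensity scale (0–255) so that strong edges receive weights close to zero.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import fftconvolve

def ringing_weight_matrix(g, win=7):
    """Weight matrix of Eq. (3): W = 1 / (1 + max(0, local variance)), computed over a
    win x win sliding window (7 x 7 in the paper's experiments). Smooth regions get
    weights near 1; strong edges (large local variance) get weights near 0."""
    g = g.astype(float)
    mean = uniform_filter(g, size=win)
    mean_sq = uniform_filter(g * g, size=win)
    local_var = np.maximum(mean_sq - mean * mean, 0.0)
    return 1.0 / (1.0 + local_var)

def weighted_richardson_lucy(g, psf, W, n_iter=50, eps=1e-7):
    """RL deconvolution with a per-pixel weight matrix (a sketch of the idea in this
    section, not the authors' exact implementation)."""
    f = np.full_like(g, g.mean(), dtype=float)
    psf_T = psf[::-1, ::-1]                                # flipped PSF for the adjoint
    for _ in range(n_iter):
        est = fftconvolve(f, psf, mode='same')
        ratio = g / (est + eps)
        correction = fftconvolve(ratio, psf_T, mode='same')
        # Blend the multiplicative RL update with the identity: W ~ 1 applies the
        # usual update; W ~ 0 leaves the pixel essentially unchanged, i.e. edge
        # pixels are "ignored" during the iterations (an assumed weighting scheme).
        f = f * (W * correction + (1.0 - W))
    return np.clip(f, 0, None)
```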
Fig. 6. TAJ example: (a) blurred image, (b) deblurred image without post-processing (PSNR = 34.4698 dB and SSIM = 0.888), (c) new weight matrix, (d) deblurred image with the proposed post-processing technique (PSNR = 35.272 dB and SSIM = 0.932), (e) close-up views of the deblurred image showing the effect without and with post-processing.
Increasing the number of RL iterations without any weight matrix yields a sharper reconstruction, but the ringing artifacts also increase (see Fig. 5). Moreover, even when the correct PSF is known, ringing artifacts near strong edges are present and propagate into the deblurred image. By applying the proposed weight matrix, the ringing is reduced to a great extent and the restored image quality is enhanced, as shown in Fig. 6. Here, SSIM as well as PSNR is used as the measure of image quality.

2.3. Fine tuning the estimated blur parameters

Fig. 7 plots the quality of the restored image, measured by the SSIM index, as a function of the estimated length L̂ and the estimated angle θ̂. If L̂ = L and θ̂ = θ, the restored image is of good quality, with the maximum SSIM value (0.784 in Fig. 7, located at the center). Fig. 7 also shows that an estimation error in the length is more serious than an error in the angle. As the estimation error increases (in length as well as angle), the restored image quality becomes poorer, as indicated by the reduction in the SSIM index. From Fig. 7, it can be seen that SSIM is a good measure of the quality of the deblurred image when compared with the latent image.
Fig. 7. Restored image quality in terms of SSIM as a function of the estimated length L̂ and the estimated angle θ̂, for an actual length L = 15 and angle θ = 35°.
Section 2.1 showed that the maximum error obtained while estimating the blur length is ±1 pixel and the blur angle is ±1°. To further reduce this error, a simple refinement algorithm is proposed (Algorithm 1). This refinement is applicable only when the latent image is known, because SSIM is used as the image quality measure. The refinement yields the exact values of the blur parameters, so the error obtained in Section 2.1 is reduced to zero. Based on the estimates L̂ and θ̂ obtained in Section 2.1, matrices L_MAT and θ_MAT are built with deviations of ±1, as shown in Algorithm 1. New PSF kernels are then obtained for all elements of these two matrices. After deblurring the image with each of these PSFs, the SSIM index is computed. Finally, the refined estimates, new L̂ and new θ̂, are the values of L̂ and θ̂ that lead to the maximum SSIM value.
Algorithm 1. Refinement of blur parameter estimates.

Require: Blurred image g, estimated length L̂, estimated angle θ̂, latent image f
  Build the matrices L_MAT and θ_MAT as
    L_MAT = [L̂ − 1, L̂, L̂ + 1; L̂ − 1, L̂, L̂ + 1; L̂ − 1, L̂, L̂ + 1]
    θ_MAT = [θ̂ − 1, θ̂ − 1, θ̂ − 1; θ̂, θ̂, θ̂; θ̂ + 1, θ̂ + 1, θ̂ + 1]
  for i = 1 to 3 do
    for j = 1 to 3 do
      Compute a new PSF kernel from L_MAT(i, j) and θ_MAT(i, j)
      Deconvolve the blurred image g with this PSF kernel using the RL algorithm
      Compute SSIM(i, j)
    end for
  end for
  Obtain the index (i, j) of max(SSIM)
  return new L̂ = L_MAT(i, j), new θ̂ = θ_MAT(i, j)
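In Python, the same grid search might look as follows (a sketch: it reuses the hypothetical motion_blur_psf helper from Section 1, uses scikit-image's standard RL routine rather than the weighted variant, and assumes images scaled to [0, 1]).

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from skimage.restoration import richardson_lucy

def refine_blur_parameters(g, f, L_hat, theta_hat, n_iter=50):
    """Algorithm 1: exhaustive search over {L_hat-1, L_hat, L_hat+1} x
    {theta_hat-1, theta_hat, theta_hat+1}, keeping the pair with the highest SSIM.
    Requires the latent image f, since SSIM is the quality measure."""
    best_score, best_L, best_theta = -np.inf, L_hat, theta_hat
    for L in (L_hat - 1, L_hat, L_hat + 1):
        for theta in (theta_hat - 1, theta_hat, theta_hat + 1):
            psf = motion_blur_psf(L, theta)              # hypothetical helper, see Section 1
            f_hat = richardson_lucy(g, psf, n_iter)      # plain (unweighted) RL here
            score = ssim(f, f_hat, data_range=1.0)       # images assumed in [0, 1]
            if score > best_score:
                best_score, best_L, best_theta = score, L, theta
    return best_L, best_theta
```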
3. Experimental results

The proposed blind deblurring algorithm is tested on a wide dataset of grayscale and color images [1], with known as well as unknown blur. The implementation methodology, experimental set-up and results are presented in this section.

3.1. Implementation methodology

For the state-of-the-art comparison of deblurring algorithms, five candidate methods are chosen: Fergus et al. [8], Shan et al. [27], Cho and Lee [6], Krishnan et al. [13] and Xu and Jia [34]. These algorithms are suitable for comparison with the proposed method because, like ours, they assume a uniform blur model and account for translational motion. A number of experiments were conducted on numerous test images to validate the proposed single image deblurring algorithm, qualitatively as well as quantitatively.

3.2. Comparison with state-of-the-art methods: qualitative evaluation

For comparing the deblurring performance of the above-mentioned algorithms with the proposed one, the following experimental set-up is considered:

• Test dataset: (1) Berkeley segmentation dataset (grayscale images of size 321 × 481 and 481 × 321) [1]; (2) color images/photographs from [1] and from [27,8].
• Motion blur parameters: (1) known blur, 5 ≤ L ≤ 70 pixels and 0° ≤ θ ≤ 170°; (2) unknown blur.
• Blind deblurring algorithms: (1) state-of-the-art methods: Fergus et al. [8], Shan et al. [27], Cho and Lee [6], Krishnan et al. [13] and Xu and Jia [34]; (2) the proposed method (see Fig. 1), with 50 RL iterations and with post-processing.
• Performance metrics: (1) peak signal-to-noise ratio (PSNR): a low value indicates poor quality and a high value good quality; (2) structural similarity (SSIM) index [32]: lies between 0 and 1, with values closer to 0 indicating poor quality and values closer to 1 indicating good quality (see the sketch after this list).
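For completeness, both metrics are available in scikit-image; a small sketch is given below (assuming grayscale images with a shared dynamic range).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(latent, deblurred):
    """PSNR (dB) and SSIM between the latent and the deblurred image."""
    data_range = latent.max() - latent.min()
    psnr = peak_signal_noise_ratio(latent, deblurred, data_range=data_range)
    ssim_val = structural_similarity(latent, deblurred, data_range=data_range)
    return psnr, ssim_val
```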
Fig. 8. Experiment 1 (grayscale boat image): (a) original image from the BSD database [1], (b) blurred test image with ground-truth PSF: L = 10 pixels and θ = 15°. Comparison of deblurred images and corresponding estimated PSFs by state-of-the-art blind deblurring algorithms: (c) Fergus et al. [8], (d) Shan et al. [27], (e) Cho and Lee [6], (f) Xu and Jia [34], (g) Krishnan et al. [13] and (h) the proposed blind deblurring algorithm.
The performance of the proposed method is compared with the state-of-the-art methods qualitatively (visual aspects) as well as quantitatively, in terms of PSNR and SSIM, whenever both the latent and the deblurred images are available. For images with unknown blur, human perception is used to verify the usefulness of the proposed method.

3.2.1. Experiment 1: Deblurring a grayscale image with known blur

This experiment was conducted on all images in the Berkeley segmentation dataset [1], and better results were obtained compared with the state-of-the-art methods. As an example, the results for a grayscale image (the boat image, of size 321 × 481) are shown in Fig. 8 and Table 2. The boat image is blurred with PSF parameters L = 10 pixels and θ = 15°. This blurred image is given as input to all the candidate algorithms, and their deblurring results, including the estimated PSFs, are recorded. As shown in Fig. 8, the proposed method outperforms the other algorithms in terms of both visual quality and quantitative evaluation (see Table 2).

3.2.2. Experiment 2: Deblurring a color image with known blur

In this experiment, a color image (Picasso.png, of size 800 × 532) [27] is blurred with PSF parameters L = 10 pixels and θ = 25°.
Table 2. Quantitative performance comparison with state-of-the-art deblurring methods.

Method               | Experiment 1 PSNR (dB) | Experiment 1 SSIM | Experiment 2 PSNR (dB) | Experiment 2 SSIM
Fergus et al. [8]    | 28.2146                | 0.316             | 28.9127                | 0.5251
Shan et al. [27]     | 30.3112                | 0.5312            | 35.1332                | 0.9195
Cho and Lee [6]      | 32.3022                | 0.5832            | 35.1669                | 0.8443
Xu and Jia [34]      | 31.566                 | 0.6812            | 34.3135                | 0.8846
Krishnan et al. [13] | 29.7341                | 0.3835            | 33.542                 | 0.7975
Proposed method      | 34.6635                | 0.8936            | 40.1794                | 0.9760
This blurred image is given as input to all the candidate algorithms, and their deblurring results, including the estimated PSFs, are recorded. As shown in Fig. 9, the proposed method outperforms the other algorithms in terms of both visual quality and quantitative evaluation (see Table 2). Ringing in the restored image and trailing noise in certain directions of the estimated kernel are noticeable shortcomings of the competing methods, whereas negligible ringing artifacts, an estimated kernel close to the ground truth, and detail-preserving restoration are the promising features of the proposed algorithm.

3.2.3. Experiment 3: Deblurring a color image with unknown blur

In this experiment, a cropped portion (617 × 780) of a color image (ian1.jpg, of size 1600 × 1200) from Fergus et al. [8] with unknown blur is passed as input to all the candidate algorithms.
Fig. 10. Experiment 3: (a) blurred test image with unknown blur, taken from [8]. Comparison of deblurred images and corresponding estimated PSFs by state-of-the-art blind deblurring algorithms: (b) Fergus et al. [8], (c) Shan et al. [27], (d) Cho and Lee [6], (e) Xu and Jia [34], (f) Krishnan et al. [13] and (g) the proposed blind deblurring algorithm.
Their deblurring results, including the estimated PSFs, are shown in Fig. 10. The proposed method also performs satisfactorily in this case and produces visually appealing deblurring. As the latent image is unknown, only a qualitative analysis is performed; the proposed method yields deblurring results comparable to those of the competing methods.
Fig. 9. Experiment 2 (Picasso color image): (a) original image from [27], (b) blurred test image with ground-truth PSF: L = 10 pixels and θ = 25°. Comparison of deblurred images and corresponding estimated PSFs by state-of-the-art blind deblurring algorithms: (c) Fergus et al. [8], (d) Shan et al. [27], (e) Cho and Lee [6], (f) Xu and Jia [34], (g) Krishnan et al. [13] and (h) the proposed blind deblurring algorithm.
4. Conclusions

In this paper we have presented a new framework for uniform motion deblurring from a single image. We proposed a unified model that addresses both PSF parameter estimation and deconvolution with reduced ringing. The PSF estimation exploits the dual spectrum and bit plane slicing to estimate the blur parameters without any other cumbersome processing steps. A new spatial weight matrix is proposed to handle ringing artifacts in smooth regions within the framework of the traditional Richardson–Lucy (RL) algorithm. The quantitative and qualitative
deblurring results show that the proposed algorithm outperforms the state-of-the-art methods. In future work, we would like to extend the framework of our algorithm to handle complex camera motions.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable suggestions and constructive comments, which helped to improve the quality of this work.

References

[1] P. Arbelaez, C. Fowlkes, D. Martin, The Berkeley Segmentation Dataset and Benchmark, U.C. Berkeley Computer Vision Group, 2007. Available: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds
[2] M. Ben-Ezra, S.K. Nayar, Motion-based motion deblurring, IEEE Trans. Pattern Anal. Mach. Intell. 26 (2004) 689–698.
[3] M. Cannon, Blind deconvolution of spatially invariant image blurs with phase, IEEE Trans. Acoust. Speech Sig. Process. 24 (1976) 58–63.
[4] T. Chan, A. Marquina, P. Mulet, High-order total variation-based image restoration, SIAM J. Sci. Comput. 22 (2000) 503–516.
[5] T.F. Chan, C.K. Wong, Total variation blind deconvolution, IEEE Trans. Image Process. 7 (1998) 370–375.
[6] S. Cho, S. Lee, Fast motion deblurring, in: ACM Transactions on Graphics (SIGGRAPH Asia 2009), vol. 28, 2009.
[7] T.S. Cho, C.L. Zitnick, N. Joshi, S.B. Kang, R. Szeliski, W.T. Freeman, Image restoration by matching gradient distributions, IEEE Trans. Pattern Anal. Mach. Intell. 34 (2012) 683–694.
[8] R. Fergus, B. Singh, A. Hertzmann, S. Roweis, W. Freeman, Removing camera shake from a single photograph, ACM Trans. Graph. (TOG) 25 (2006) 787–794.
[9] Z. Fu, H. Xian, J. Xu, X. Ge, Evaluation of motion blur parameter based on cepstrum domain of the intentional restored image, in: IEEE ICCP 2010, 2010, pp. 271–274.
[10] R. Gonzalez, R. Woods, S. Eddins, Digital Image Processing Using MATLAB, Pearson Prentice-Hall, Upper Saddle River, NJ, 2004.
[11] H. Ji, K. Wang, Robust image deblurring with an inaccurate blur kernel, IEEE Trans. Image Process. 21 (2012) 1624–1634.
[12] N. Joshi, S. Kang, C. Zitnick, R. Szeliski, Image deblurring using inertial measurement sensors, in: ACM Transactions on Graphics (SIGGRAPH 2010), 2010.
[13] D. Krishnan, R. Fergus, Fast image deconvolution using hyper-Laplacian priors, Neural Inf. Process. Syst. (2009) 1–9.
[14] D. Krishnan, T. Tay, R. Fergus, Blind deconvolution using a normalized sparsity measure, in: Proc. 24th IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[15] R.L. Lagendijk, J. Biemond, D.E. Boekee, Regularized iterative image restoration with ringing reduction, IEEE Trans. Acoust. Speech Sig. Process. 36 (1988) 1874–1888.
[16] J.M. Lee, J.H. Lee, K.T. Park, Y.S. Moon, Image deblurring based on the estimation of PSF parameters and the post-processing, Optik – Int. J. Light Electron Opt. (2012), http://dx.doi.org/10.1016/j.ijleo.2012.06.067.
[17] A. Levin, A. Rav-Acha, D. Lischinski, Spectral matting, in: IEEE CVPR, 2007, pp. 1–8.
[18] A. Levin, Y. Weiss, F. Durand, W. Freeman, Understanding and evaluating blind deconvolution algorithms, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[19] A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Efficient marginal likelihood optimization in blind deconvolution, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 2657–2664.
[20] X. Li, Fine-granularity and spatially-adaptive regularization for projection-based image deblurring, IEEE Trans. Image Process. 20 (2011) 971–983.
[21] R. Lokhande, K.V. Arya, P. Gupta, Identification of parameters and restoration of motion blur images, in: ACM Symposium on Applied Computing, 2006, pp. 301–305.
[22] L. Lucy, An iterative technique for the rectification of observed distributions, Astron. J. 79 (1974) 745–754.
[23] M.E. Moghaddam, M. Jamzad, Linear motion blur parameter estimation in noisy images using fuzzy sets and power spectrum, EURASIP J. Adv. Sig. Process. (2007).
[24] V.B.S. Prasath, A. Singh, Ringing artifact reduction in blind image deblurring and denoising problems by regularization methods, in: Seventh International Conference on Advances in Pattern Recognition (ICAPR), 2009, pp. 333–336.
[25] R. Raskar, A. Agrawal, J. Tumblin, Coded exposure photography: motion deblurring using fluttered shutter, ACM Trans. Graph. 25 (2006) 795–804.
[26] W.H. Richardson, Bayesian-based iterative method of image restoration, J. Opt. Soc. Am. 62 (1972) 55–59.
[27] Q. Shan, J. Jia, A. Agarwala, High-quality motion deblurring from a single image, in: ACM Transactions on Graphics (SIGGRAPH), vol. 27, 2008, pp. 73:1–73:10.
[28] T.G. Stockham, T.M. Cannon, R.B. Ingebretsen, Blind deconvolution through digital signal processing, Proc. IEEE 63 (1975) 678–692.
[29] Y.W. Tai, P. Tan, M.S. Brown, Richardson–Lucy deblurring for scenes under a projective motion path, IEEE Trans. Pattern Anal. Mach. Intell. 33 (2011) 1603–1618.
[30] A.N. Tikhonov, V.Y. Arsenin, Solutions of Ill-posed Problems, V.H. Winston & Sons/John Wiley & Sons, Washington, DC/New York, 1977.
[31] C. Wang, L. Sun, P. Cui, J. Zhang, S. Yang, Analyzing image deblurring through three paradigms, IEEE Trans. Image Process. 21 (2012) 115–129.
[32] Z. Wang, A.C. Bovik, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process. 13 (2004) 600–612.
[33] S.Q. Wu, Z.K. Lu, E.P. Ong, W.S. Lin, Evaluation of motion blur parameter based on cepstrum domain of the intentional restored image, in: Proc. 16th International Conference on Computer Communications and Networks, Honolulu, USA, 2007, pp. 1166–1171.
[34] L. Xu, J. Jia, Two-phase kernel estimation for robust motion deblurring, Springer-Verlag, New York, 2010, pp. 157–170.
[35] Y. Wang, H. Feng, Z. Xu, Q. Li, C. Dai, An improved Richardson–Lucy algorithm based on local prior, Opt. Laser Technol. 42 (2010) 845–849.
[36] J.F. Zhao, H.J. Feng, Z.H. Xu, Q. Li, An improved image restoration approach using adaptive local constraint, Optik – Int. J. Light Electron Opt. (2012) 982–985, http://dx.doi.org/10.1016/j.ijleo.2011.07.014.