Image deblurring based on the estimation of PSF parameters and the post-processing

Jong Min Lee a, Jeong Ho Lee a, Ki Tae Park b, Young Shik Moon a,*

a Department of Computer Science and Engineering, Hanyang University, Ansan, Gyeonggi-do 426-791, Republic of Korea
b General Education Curriculum Center, Hanyang University, Ansan, Gyeonggi-do 426-791, Republic of Korea

Optik 124 (2013) 2224–2228

Article history: Received 9 February 2012; Accepted 23 June 2012

Keywords: Image reconstruction; Deblurring; Point spread function; Deconvolution

Abstract: The two parameters of a point spread function (PSF) are the length and the angle of the blur kernel, and deblurring performance depends on how accurately these two parameters are estimated. In this paper, we propose a method for estimating the PSF parameters that exploits the periodicity of motion-blurred images in the frequency domain, together with a reconstruction method that uses the proposed post-processing to reduce ringing artifacts. Experimental results show that the proposed method estimates the PSF parameters more accurately than existing methods. © 2012 Elsevier GmbH. All rights reserved.

1. Introduction

Blurred images result from camera shake or object motion [1,2]; such a degraded image is called a motion-blurred image. Motion blur is one of the major causes of image deterioration. A motion-blurred image is modeled as the convolution of a latent image with the point spread function (PSF) [3]. Recently, methods that estimate the PSF parameters by exploiting the periodicity of motion-blurred images in the frequency domain have often been used [4–7]. However, because these methods may lose data when transforming the Fourier spectrum of the blurred image or binarizing the spectrum, the accuracy of the PSF parameter estimation can be degraded. Since errors propagate through the iterative reconstruction process, any error in the PSF parameter estimation affects the quality of the reconstructed image. Therefore, to overcome the drawbacks of conventional methods caused by data loss, we propose a method that estimates the correct PSF parameters without modifying the spectrum data. Fig. 1 shows the framework of the proposed deblurring method.

The outline of the paper is as follows. In Section 2, we explain the linear motion blur model. In Section 3, we describe the method for estimating the PSF parameters. In Sections 4 and 5, we explain the reconstruction method. In Section 6, we show the experimental results. Our conclusions are summarized in Section 7.

2. Linear motion blur model

Linear motion-blurred images are generated by convolving the latent image with the PSF, as shown in Eq. (1).


g(x, y) = f(x, y) ∗ h(x, y)    (1)

h(x, y) = (1/L) · Λ(x cos θ + y sin θ),    Λ(u) = { 1 if |u| ≤ L/2;  0 if |u| > L/2 }    (2)

Here, g(x, y), f(x, y), and h(x, y) represent the blurred image, the latent image, and the PSF, respectively. In general, the PSF describes the impulse response of a focused optical system to a point source. The PSF in Eq. (2) is determined by two parameters: the blur length L and the blur angle θ. In order to estimate the periodicity of a linear motion-blurred image in the frequency domain, we compute the Fourier spectrum of the blurred image.
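To make the blur model concrete, the following sketch rasterizes a one-pixel-wide line of length L at angle θ and normalizes it to sum to one (playing the role of the 1/L factor in Eq. (2)), then convolves it with a latent image as in Eq. (1). The kernel construction and the FFT-based convolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_blur_psf(L, theta_deg):
    """Rasterize a linear motion-blur kernel of length L (pixels) at angle theta."""
    size = int(np.ceil(L)) + 2
    size += (size + 1) % 2                      # odd size so the centre pixel is well defined
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(theta_deg)
    # sample points along a segment of length L centred on the kernel centre
    for t in np.linspace(-L / 2.0, L / 2.0, max(4 * int(np.ceil(L)), 2)):
        psf[int(round(c + t * np.sin(theta))), int(round(c + t * np.cos(theta)))] = 1.0
    return psf / psf.sum()                      # normalization corresponds to the 1/L factor

def blur(latent, L, theta_deg):
    """g = f * h, the linear motion blur model of Eq. (1)."""
    return fftconvolve(latent, motion_blur_psf(L, theta_deg), mode="same")
```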


Fig. 1. Framework of the proposed method.

Fig. 3. Edge map from the reconstructed result of 200 iterations: (a) edges after 200 iterations, (b) erosion of (a), and (c) edge map.

3. Estimation of PSF parameters

3.1. Angle estimation

The angle of the motion blur is estimated by measuring the direction of the periodically dark regions in the Fourier transform of the linear motion-blurred image. In this paper, in order to detect the correct blur direction, we detect the points nearest to zero on the u and v axes. Fig. 2 shows how these points are found on each axis. In order to estimate the periodicity of the Fourier spectrum in Fig. 2(a), we examine the values along the u and v axes. As shown in Fig. 2(c), the gradient variations of the values along each axis are irregular. Therefore, in order to find the points that reveal the periodicity automatically, we measure the gradient variations instead of detecting transition points directly. When we select the minimum value among three adjacent points along the v axis in Fig. 2(c), the section of minimum values in Fig. 2(d) corresponds to the red circles in Fig. 2(a) and (c); this point is the minimum point on the v axis. We then calculate the angle θ from the property of a right triangle, as in Eq. (3).

θ = tan⁻¹(OB / OA)    (3)
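A simplified sketch of this step follows. It assumes that OA and OB are the distances from the spectrum origin to the first local minimum along the positive u and v axes, and it uses a plain local-minimum test in place of the paper's three-point gradient-variation criterion; both choices are illustrative assumptions.

```python
import numpy as np

def estimate_blur_angle(blurred):
    """Return (theta_deg, OA, OB) estimated from the centred log-magnitude spectrum."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    u_profile = spec[cy, cx:]            # values along the positive u axis
    v_profile = spec[cy:, cx]            # values along the positive v axis

    def first_minimum(profile):
        # index of the first value that is lower than both of its neighbours
        for i in range(1, len(profile) - 1):
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]:
                return i
        return len(profile) - 1

    OA, OB = first_minimum(u_profile), first_minimum(v_profile)
    return np.degrees(np.arctan2(OB, OA)), OA, OB    # theta = arctan(OB / OA), Eq. (3)
```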

3.2. Length estimation

The other PSF parameter, L, is calculated by Eq. (4) using the angle θ obtained from Eq. (3). In Eq. (4), d is the distance between spectrum lines, N is the image size, and L is the blur length. As shown in Fig. 2(c), d is calculated from the definition of the cosine.

Fig. 2. The estimation process of a blur angle and length: (a) Fourier spectrum, (b) Origin region of spectrum, (c) Minimum value of v axis, and (d) Gradient distribution of v axis. (For interpretation of the references to color in the text, the reader is referred to the web version of the article.)

L = N / d,    d = OB · cos θ    (4)
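Continuing the sketch above, the length follows directly from Eq. (4); N is the image side length and OB is the value returned by estimate_blur_angle (the wrapper function itself is hypothetical).

```python
import numpy as np

def estimate_blur_length(N, OB, theta_deg):
    """Blur length from Eq. (4): d = OB * cos(theta), L = N / d."""
    d = OB * np.cos(np.deg2rad(theta_deg))
    return N / d
```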


Fig. 4. The intensity distribution of test images.

Fig. 5. Performance comparison with the conventional methods.

3.3. Reduction of estimation complexity

In order to estimate the correct PSF parameters, we use the Fourier spectrum in the frequency domain [8]. However, transforming the observed image to the frequency domain requires a large amount of computation, and the larger the image, the longer the transform takes. Therefore, we propose a method for reducing the parameter estimation time. To this end, we downscale the observed image to a quarter of its original size (half of each side) and apply the Discrete Fourier Transform (DFT) to the resized image. Applying the estimation method described above to the downscaled image yields the same angle and half the blur length of the original image. Experiments have shown that the estimated blur length is linearly proportional to the side length of the image. With this technique, the time complexity of estimating the PSF parameters can be reduced.
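A minimal sketch of this speed-up, assuming a full-image estimator estimate_psf_parameters that implements Sections 3.1 and 3.2 (the function name and the naive stride-2 downscaling are illustrative assumptions):

```python
def estimate_psf_parameters_fast(blurred):
    """Estimate (theta, L) on a quarter-size image to reduce the DFT cost."""
    small = blurred[::2, ::2]                    # quarter-size image: half of each side
    theta, L_small = estimate_psf_parameters(small)
    return theta, 2.0 * L_small                  # angle unchanged; length scales with the image side
```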

4. Image deconvolution

Deconvolution is the process of removing motion blur to reconstruct a blurred image. A representative image reconstruction method is the Richardson–Lucy (RL) iterative deconvolution algorithm [9].

Fig. 6. The experimental results of image reconstruction without the postprocessing: (a) Original image, (b) Blurred image, (c) Proposed method, and (d) Existing method.

Table 1. The results of PSF parameter estimation.

Image        Real angle (deg)   Real length (px)   Estimated angle (deg)   Estimated length (px)   Angle error (deg)   Length error (px)
Cameraman    30.00              10.00              29.62                   10.15                   0.38                0.15
Lena         51.00              22.00              50.96                   21.97                   0.04                0.03
Airplane     43.00              15.00              43.39                   15.21                   0.39                0.21


Table 2. Performance comparison with the conventional methods.

                      Proposed method              Moghaddam and Jamzad [11]    Zhang and Xu [12]
Parameter             Angle (deg)   Length (px)    Angle (deg)   Length (px)    Angle (deg)   Length (px)
Minimum error         0.0           0.0            0.0           0.0            0.0           0.0
Maximum error         1.2           0.8            2.0           1.9            3.0           2.0
Average error         0.4           0.2            0.6           0.9            0.9           1.5
Standard deviation    0.3           0.2            0.7           0.4            0.7           0.6

Table 3. Similarity comparison among the reconstructed results.

             20 iterations   200 iterations   Proposed method
PSNR (dB)    25.83           28.37            32.70

Pixels in the observed image can be represented in terms of the point spread function and the latent image as in Eq. (5):

d_i = Σ_j p_ij u_j    (5)

where p_ij is the point spread function, u_j is the pixel value at position j in the latent image, and d_i is the observed value at position i. The basic idea of [9] is to estimate the most likely u_j given the observed d_i and the known p_ij. The iterative update is given in Eq. (6); if the iteration converges, it has been shown empirically to converge to the maximum-likelihood solution for u_j [10]. In our experiments, the reconstructed image is generated by performing 200 iterations of RL deconvolution.

u_j^(t+1) = u_j^(t) Σ_i (d_i / c_i) p_ij,    c_i = Σ_j p_ij u_j^(t)    (6)
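The update in Eq. (6) can be implemented with two convolutions per iteration: one with the PSF to form c, and one with the mirrored PSF to redistribute the ratio d/c. The sketch below follows the standard RL formulation; the flat initial estimate and the small eps guard against division by zero are conventional choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(d, psf, iterations=200, eps=1e-12):
    """Richardson-Lucy deconvolution following Eqs. (5) and (6)."""
    u = np.full(d.shape, 0.5)                   # flat initial estimate of the latent image
    psf_mirror = psf[::-1, ::-1]                # mirrored kernel (adjoint of the blur)
    for _ in range(iterations):
        c = fftconvolve(u, psf, mode="same")                   # c_i = sum_j p_ij u_j^(t)
        ratio = d / (c + eps)                                   # d_i / c_i
        u = u * fftconvolve(ratio, psf_mirror, mode="same")     # Eq. (6) multiplicative update
    return u
```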

5. Post-processing for reducing ringing artifacts

Even though increasing the number of iterations of the RL algorithm improves the accuracy of the reconstructed image, ringing artifacts may occur. Moreover, even if the correct PSF is known, ringing artifacts will appear near strong edges. Therefore, in this paper, we apply the proposed post-processing to reduce the ringing artifacts. When we compare a blurred image with its reconstruction, the overall intensity variations of the two images are almost the same, but there are many differences around the edges. The proposed post-processing method modifies the color values according to the characteristics of each divided region. First, we obtain an edge map of the reconstructed image using the Sobel edge detector, as shown in Fig. 3. We apply erosion to the edge map in order to remove responses caused by the ringing artifacts, and then apply dilation to the eroded edge map to identify the edge regions in the reconstructed image (erosion and dilation are the two basic morphological operations). To enhance the reconstructed image, we utilize the blurred image, the edge map, and the reconstructed image. We divide the reconstructed image into three parts: smooth regions, edge regions, and complex texture regions. Ringing artifacts appearing in smooth regions are the most objectionable, so in the smooth regions we reduce the color value transitions of the reconstructed image; the result is better than a simply smoothed image. In the edge regions, we estimate the color variations around the edges using the gradient orthogonal to the edge and compensate for the false transitions caused by the ringing artifacts. In the complex texture regions, we leave the reconstructed result unchanged.
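The sketch below approximates this region-based post-processing. The thresholds, the local-variance test separating smooth from complex-texture regions, and the blend toward the blurred image standing in for the paper's transition-reduction step are all illustrative assumptions; the orthogonal-gradient compensation in edge regions is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def postprocess(restored, blurred, edge_thresh=50.0, var_thresh=100.0):
    """Simplified region-based post-processing: edge map + per-region handling."""
    restored = np.asarray(restored, dtype=float)
    blurred = np.asarray(blurred, dtype=float)

    # 1. Sobel edge map of the restored image, cleaned by erosion then dilation
    gx = ndimage.sobel(restored, axis=1)
    gy = ndimage.sobel(restored, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    edges = ndimage.binary_erosion(edges)                  # drop thin ringing responses
    edge_region = ndimage.binary_dilation(edges, iterations=2)

    # 2. Split the non-edge pixels into smooth and complex-texture regions by local variance
    mean = ndimage.uniform_filter(restored, size=5)
    local_var = ndimage.uniform_filter(restored ** 2, size=5) - mean ** 2
    smooth_region = (~edge_region) & (local_var < var_thresh)

    # 3. Damp ringing in smooth regions; leave edge and texture regions as reconstructed
    out = restored.copy()
    out[smooth_region] = (0.5 * restored + 0.5 * blurred)[smooth_region]
    return out
```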

Fig. 7. The experimental results of image reconstruction with the proposed postprocessing: (a) Original image, (b) 20 iterations, (c) 200 iterations, and (d) Proposed method generated by post-processing.


After performing the proposed post-processing, we obtain the final result image. As shown in Fig. 4, the final reconstructed result is better than the results obtained without the post-processing. We reduce the color value transitions of the deblurred image so that the ringing artifacts near strong edges are decreased.

6. Experimental results

6.1. Accuracy of PSF parameter estimation

We applied the proposed PSF parameter estimation method to various images, such as Lena, Cameraman, and Airplane. The experiments were carried out under various motion blur conditions, and the estimated PSF parameters are shown in Table 1. We also compare our algorithm with conventional ones in terms of parameter estimation error, using the same images as the existing methods [11,12] and measuring the errors under random blur conditions. As shown in Table 2 and Fig. 5, the proposed method estimates the PSF parameters more accurately than the existing deblurring methods.

6.2. Performance of image reconstruction

This experiment shows the relation between the accuracy of the parameter estimation and the quality of the image reconstruction. Because errors can propagate through the iterative deconvolution process, the PSF parameters must be estimated accurately for good reconstruction. As shown in Fig. 6, the proposed method effectively reduces the ringing artifacts, and the quality of the reconstructed image is improved compared to the existing method [11].

6.3. Final result with the post-processing

This experiment shows the results of the post-processing. As shown in Fig. 7, the final result with the proposed post-processing is more similar to the original image, and most ringing artifacts have been reduced. Table 3 compares the similarity between the original image and the reconstructed images in terms of the peak signal-to-noise ratio (PSNR). Our method improves the PSNR by about 15% compared to the result without the proposed post-processing.
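For reference, the PSNR used in Table 3 can be computed as follows (a standard definition, assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original and a reconstructed image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```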

7. Conclusion

In this paper, we have proposed a method for estimating the PSF parameters θ and L for reconstructing a linear motion-blurred image, together with a reconstruction method that uses the proposed post-processing. The proposed method estimates the parameters accurately by considering the periodicity of the Fourier spectrum of the blurred image. In addition, the reconstruction performance has been improved by applying the proposed post-processing, which reduces the ringing artifacts near edges. Compared with conventional approaches to image reconstruction, the proposed method shows better performance for linear motion-blurred images.

Acknowledgement

This work was supported by a National Research Foundation of Korea Grant funded by the Korean Government (2009-0077434).

References

[1] K. Sato, S. Ishizuka, A. Nikami, M. Saot, Control techniques for optical image stabilizing system, IEEE Trans. Consum. Electron. 39 (3) (1993) 461–466.
[2] M. Oshima, T. Hayashi, S. Fujioka, T. Inaji, H. Mitani, J. Kajino, K. Ikeda, K. Komoda, VHS camcorder with electronic image stabilizer, IEEE Trans. Consum. Electron. 35 (4) (1989) 749–758.
[3] M. Tanaka, K. Yoneji, M. Okutomi, Motion blur parameter identification from a linearly blurred image, in: Proceedings of the International Conference on Consumer Electronics, Las Vegas, USA, 2007, pp. 1–2.
[4] M. Cannon, Blind deconvolution of spatially invariant image blur with phase, IEEE Trans. Acoust. Speech Signal Process. 24 (1) (1976) 58–63.
[5] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, W.T. Freeman, Removing camera shake from a single photograph, ACM Trans. Graph. 25 (3) (2006) 787–794.
[6] L. Yuan, J. Sun, L. Quan, H.-Y. Shum, Image deblurring with blurred/noisy image pairs, ACM Trans. Graph. 26 (3) (2007), Article 1.
[7] S. Chalkov, N. Meshalkina, C.S. Kim, Post-processing algorithm for reducing ringing artifacts in deblurred images, in: International Technical Conference on Circuits/Systems, Computers and Communications, Shimonoseki, Japan, 2008, pp. 1193–1196.
[8] J.H. Lee, K.T. Park, Y.S. Moon, Image deblurring by using the estimation of PSF parameters for image devices, in: Proceedings of the International Conference on Consumer Electronics, Las Vegas, USA, 2010, pp. 1–2.
[9] W.H. Richardson, Bayesian-based iterative method of image restoration, J. Opt. Soc. Am. 62 (1) (1972) 55–60.
[10] L.A. Shepp, Y. Vardi, Maximum likelihood reconstruction for emission tomography, IEEE Trans. Med. Imaging 1 (2) (1982) 113–122.
[11] M.E. Moghaddam, M. Jamzad, Linear motion blur parameter estimation in noisy images using fuzzy sets and power spectrum, EURASIP J. Adv. Signal Process. 2007 (1) (2007) 1–8.
[12] T.T. Zhang, G. Xu, Identification of motion blurred parameter by using T-norm operator, in: International Conference on Machine Learning and Cybernetics, Hong Kong, 2007, pp. 1606–1610.