Image deblurring by means of defocus


Optics Communications 240 (2004) 81–88 www.elsevier.com/locate/optcom

T.E. Gureyev *, A.W. Stevenson, Ya.I. Nesterets, S.W. Wilkins

CSIRO Manufacturing and Infrastructure Technology, Private Bag 33, Clayton South, Vic. 3169, Australia

Received 18 March 2004; received in revised form 9 June 2004; accepted 9 June 2004

Abstract

A new method for rapid deblurring (partial deconvolution) is proposed which is based on the physics of coherent image formation. The method uses slightly out-of-focus images in a way that makes the Fresnel diffraction counteract the blurring due to the point-spread function. The resultant deblurring improves spatial resolution in the images and is remarkably insensitive to noise. Successful performance of several variants of this technique is demonstrated using simulated and experimental images. It is anticipated that this method will find applications in medical, astronomical and other imaging problems, as well as in lithography. Crown Copyright © 2004 Published by Elsevier B.V. All rights reserved.

PACS: 42.30.Va; 85.40.Hp; 87.57.Gg; 95.75.Mn

Keywords: Image forming and processing; Lithography; Image reconstruction

We consider the problem of deconvolution of a noisy image given complete or partial knowledge of the point-spread function (PSF) of the imaging system. Such a problem is encountered in a variety of scientific and industrial applications, e.g. in imaging using visible light, electrons, X-rays, etc. (see e.g. [1]). It is also well known that the deconvolution problem is a difficult one due to its mathematical ill-posedness [2]. The latter means in particular that small errors (e.g. noise) in the measured data may result in strong artefacts in the reconstructed (deconvolved) image. A large number of methods have been developed over the years which implement various regularization techniques for more robust deconvolution of noisy data [2,3]. It has been demonstrated that accurate deconvolution can be achieved in the presence of suitable additional (a priori) information that helps constrain the set of all possible solutions, thus eliminating some reconstruction artefacts [3]. However, deconvolution methods that make effective use of a priori

* Corresponding author. Tel.: +61-3-9545-2702; fax: +61-3-9544-1128. E-mail address: [email protected] (T.E. Gureyev).

0030-4018/$ - see front matter. Crown Copyright © 2004 Published by Elsevier B.V. All rights reserved. doi:10.1016/j.optcom.2004.06.020


knowledge are often iterative, computationally intensive and may occasionally diverge. Therefore, a need still exists for rapid, simple and robust methods capable of providing at least partial deconvolution of noisy images, particularly when the PSF is not known precisely. One possibility to avoid the computational instabilities associated with the deconvolution procedure is to implement the required operations in hardware, before the measurement noise affects the experimental data. The method proposed in the present paper can be viewed as an implementation of such 'hardware deconvolution'.

Virtually every imaging system suffers to some extent from two classes of aberrations. The first class includes 'incoherent aberrations', such as the blurring of an image due to the finite size of the source of light ('penumbral blurring'). The second class includes 'coherent aberrations', such as Fresnel diffraction effects ('diffraction fringes'). Both classes of aberrations usually degrade the performance of the optical system and, in particular, its spatial resolution. The present paper shows that in certain cases it is possible to make the two types of aberrations partially cancel each other's effect and thereby improve the overall performance of an optical system. Ideas of the same general kind have previously been applied successfully to construct achromatic optical systems by combining optical elements with opposite chromatic aberrations. A similar approach also led to the definition of Scherzer defocus in electron microscopy [4, p. 298].

Let the measured image, D(x,y), be described by the convolution of the 'ideal' image, I(x,y) (corresponding to a delta-function PSF), with the actual PSF of the imaging system, P(x,y):

$$D(x,y) = \iint I(x - x',\, y - y')\, P(x', y')\, \mathrm{d}x'\, \mathrm{d}y'. \qquad (1)$$

In reality, the measured image usually contains some noise (e.g. due to photon counting statistics). Therefore, any useful deconvolution method, i.e. a method for finding the 'ideal' image I(x,y) from Eq. (1), must be able to cope with noisy input data.

Following [5], let us expand the ideal signal, I(x − x', y − y'), in Eq. (1) into the Taylor series

at point (x, y) and exchange the order of integration and summation:

$$D(x,y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} a_{mn}\, \partial_x^m \partial_y^n I(x,y), \qquad (2)$$

where the coefficients a_mn are proportional to the moments of the PSF:

$$a_{mn} = \frac{(-1)^{m+n}}{m!\, n!} \iint x^m y^n P(x,y)\, \mathrm{d}x\, \mathrm{d}y. \qquad (3)$$

Here we assumed that the function I(x,y) is infinitely differentiable, the PSF has finite moments of all orders and the series, Eq. (2), converges with respect to a chosen functional metric. We then construct a solution to Eq. (1) in the form of a formal series (ansatz)

$$I(x,y) = \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} b_{kl}\, \partial_x^k \partial_y^l D(x,y) \qquad (4)$$

with unknown coefficients b_kl. Substituting Eq. (2) into Eq. (4) and equating the coefficients at the partial derivatives of the ideal signal I, we obtain the following recursive expressions for the coefficients b_kl:

$$b_{kl} = \left( \delta_{k0}\delta_{l0} - \sum_{m=0}^{k-1} \sum_{n=0}^{l-1} a_{(k-m)(l-n)}\, b_{mn} - \sum_{m=0}^{k-1} a_{(k-m)0}\, b_{ml} - \sum_{n=0}^{l-1} a_{0(l-n)}\, b_{kn} \right) \Big/\, a_{00}, \qquad (5)$$

where k, l = 0, 1, 2, . . ., and δ_ij is the Kronecker delta. Sufficient conditions for the convergence of the formal series, Eq. (4), with coefficients defined by Eq. (5) can be found in [5].

In the present work we are interested primarily in a second-order approximation that can be derived from the general deconvolution formula defined by Eqs. (4) and (5). In what follows, we consider PSFs with the 'center of gravity' coinciding with the origin of coordinates (this can always be achieved by an appropriate shift of the origin). Then the first-order moments of the PSF are equal to zero, i.e. a_10 = a_01 = 0. We also consider at first only symmetrical PSFs with the second-order moments satisfying the relationships a_20 = a_02 ≡ a_2 and a_11 = 0. In this case, as it follows from Eqs.


(4) and (5), the second-order partial deconvolution can be expressed by a simple formula:

$$I(x,y) \approx b_0\, D(x,y) - b_2\, \nabla^2 D(x,y), \qquad (6)$$

where ∇² ≡ ∂²_x + ∂²_y, b_0 = 1/a_00 and b_2 = a_2/a_00². This formula states that for any symmetric PSF, a partial deconvolution (deblurring) of an image can be achieved by subtracting the Laplacian of the original (blurred) image, with an appropriate weight, from the image itself. General sufficient conditions for the validity (accuracy) of Eq. (6) have been presented in [5]. In the important special case of a Gaussian PSF P(x,y) with standard deviation σ, these validity conditions can be expressed in a simple form as the requirement

$$\left| \partial_x^{2k} \partial_y^{2l} D(x,y) \right| < C\, (1/\sigma)^{2(k+l)},$$

which should hold for all integers k, l = 1, 2, 3, . . ., and some constant C. In other words, the characteristic length of variation of the measured image intensity should be larger than σ.

Eq. (6) reflects the simple fact that, at the level of second-order Taylor approximations, the changes in an image due to a convolution with an arbitrary PSF are proportional to the width of the PSF and to the local curvature of the ideal image [5] (for example, it is easy to see that image regions with zero curvature do not change as a result of convolution, apart from a possible trivial shift and multiplication by a constant factor). Although Eq. (6) may not provide an exact deconvolution, it possesses two important advantages over some more conventional deconvolution formulae (such as Wiener deconvolution). Firstly, Eq. (6) does not require precise knowledge of the PSF, but only of its width. Secondly, unlike the general deconvolution expressions, Eq. (6) is 'local' in the sense that the value of the deconvolved image at a point (x, y) depends only on the values of the blurred image and its second derivatives at the same point. These two properties can be particularly valuable in cases where the knowledge of the PSF and/or of the blurred image is incomplete. The inherent ill-posedness of the deconvolution problem, Eq. (1), appears in Eq. (6) in the form of the instability of the numerical differentiation required for evaluation of the image Laplacian. Deconvolution techniques based on Eq. (6) have been extensively studied in [5]. In the present work we use


Eq. (6) as a starting point for the development of our new method.

We first consider a special case of objects consisting predominantly of a single material [6]. Let such an object be illuminated by a monochromatic plane wave with wavenumber k = 2π/λ (λ is the wavelength) propagating along the optic axis z. We denote by I(x,y) the distribution of transmitted intensity in the plane z = 0 immediately after the object. The intensity distribution in a slightly 'defocused' image, I_z(x,y) (z is the defocus distance), can be described by the special form of the Transport of Intensity equation (TIE) [6]:

$$I_z(x,y) \approx I(x,y) - \alpha\, \nabla^2 I(x,y), \qquad (7)$$

where α = zδ/(2kβ) and n = 1 − δ + iβ is the complex refractive index of the material. The region of validity of Eq. (7) is approximately defined by the inequality α < h², where h is the size of the smallest resolvable feature present in the in-focus image. In the presence of a non-trivial PSF, Eq. (7) changes to

$$D_z(x,y) \approx D(x,y) - \alpha\, \nabla^2 D(x,y), \qquad (8)$$

where D_z = I_z ∗ P, the blurred image D(x,y) is defined by Eq. (1), and we assumed that the difference between the PSFs in the two planes is negligible, i.e. |I_z ∗ (P_z − P)| ≪ |I_z ∗ P|. Note that Eqs. (7) and (8), as presented above, are valid in the case of a quasi-plane incident wave. Their generalization to the case of a quasi-spherical incident wave is quite straightforward and can be found e.g. in [6]. In the latter case, however, the geometrical expansion associated with free-space propagation needs to be taken into account not only for the image D_z, but also for the PSF. In other words, all functions defined on the 'defocused' plane z have to be rescaled in accordance with the magnification relative to the 'focal' plane z = 0. This conventional procedure has been applied in the experimental example presented later in the paper.

Eqs. (7) and (8) show that the defocused image of a single-material object represents a linear combination of the in-focus image and its Laplacian. Isolating ∇²D(x,y) in Eq. (8) and substituting it into Eq. (6), we obtain

$$I(x,y) \approx (b_0 - b_2/\alpha)\, D(x,y) + (b_2/\alpha)\, D_z(x,y). \qquad (9)$$
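As a concrete illustration, Eq. (9) can be sketched in a few lines of NumPy. The function names and all numerical values below (grid size, PSF moments, defocus parameter) are our own illustrative assumptions, not values from the paper; the synthetic test uses the second-order blur model of Eq. (2) and an 'ideal' image with vanishing fourth derivatives, for which the second-order recovery is exact.

```python
import numpy as np

def lap(A):
    # 5-point discrete Laplacian with unit grid spacing (periodic wrap at edges)
    return (np.roll(A, 1, 0) + np.roll(A, -1, 0)
            + np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)

def deblur_two_plane(D, Dz, b0, b2, alpha):
    # Eq. (9): I ≈ (b0 - b2/alpha) D + (b2/alpha) Dz
    return (b0 - b2 / alpha) * D + (b2 / alpha) * Dz

# Synthetic check on a smooth 'ideal' image with vanishing 4th derivatives
n = 32
y, x = np.mgrid[:n, :n]
I = 1.0 + 0.01 * ((x - n / 2) ** 2 + (y - n / 2) ** 2)

a00, a2 = 1.0, 0.5          # PSF moments for a unit-sigma Gaussian (a2 = sigma^2/2)
b0, b2 = 1.0 / a00, a2 / a00 ** 2
alpha = 0.3                  # hypothetical defocus parameter

D = a00 * I + a2 * lap(I)    # second-order model of the blurred image, cf. Eq. (2)
Dz = D - alpha * lap(D)      # defocused blurred image, Eq. (8)

I_est = deblur_two_plane(D, Dz, b0, b2, alpha)
# Away from the periodic boundary the recovery is exact to rounding error
assert np.allclose(I_est[4:-4, 4:-4], I[4:-4, 4:-4])
```

The residual of this recovery is of fourth order (proportional to ∇⁴I), which is why the interior of the quadratic test image is recovered exactly.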


We see that deblurring of an image of a single-material object can be achieved by adding a properly weighted defocused image to the original blurred one. Such 'hardware' deblurring does not suffer from the numerical instability intrinsic to Eq. (6), as the Laplacian of the blurred image is now obtained through hardware, by means of defocus. Furthermore, a deblurred in-focus image can be obtained from a single defocused image using the 'phase retrieval' procedure, i.e. solving Eq. (8) for the in-focus image:

$$D(x,y) \approx \left[ 1 - \alpha \nabla^2 \right]^{-1} D_z(x,y). \qquad (10)$$

Substituting Eq. (10) into Eq. (9) we derive

$$I(x,y) \approx \left[ (b_0 - b_2/\alpha)\left( 1 - \alpha \nabla^2 \right)^{-1} + b_2/\alpha \right] D_z(x,y). \qquad (11)$$
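The operator [1 − α∇²]⁻¹ in Eqs. (10) and (11) is diagonal in Fourier space, where ∇² becomes multiplication by −4π²(u² + v²). A minimal NumPy sketch follows; the function names are hypothetical, and the implementation assumes periodic boundaries, as is implicit in any FFT-based evaluation.

```python
import numpy as np

def inv_tie_operator(Dz, alpha, pixel=1.0):
    # [1 - alpha*lap]^(-1) applied via FFT. The Fourier filter is
    # 1 / (1 + 4*pi^2*alpha*(u^2 + v^2)): for alpha > 0 the denominator is
    # never smaller than 1, which is the source of the noise robustness.
    v = np.fft.fftfreq(Dz.shape[0], d=pixel)
    u = np.fft.fftfreq(Dz.shape[1], d=pixel)
    q2 = u[np.newaxis, :] ** 2 + v[:, np.newaxis] ** 2
    filt = 1.0 / (1.0 + 4 * np.pi ** 2 * alpha * q2)
    return np.real(np.fft.ifft2(np.fft.fft2(Dz) * filt))

def deblur_from_single_defocused(Dz, b0, b2, alpha, pixel=1.0):
    # Eq. (11): I ≈ [(b0 - b2/alpha)(1 - alpha*lap)^(-1) + b2/alpha] Dz
    return (b0 - b2 / alpha) * inv_tie_operator(Dz, alpha, pixel) + (b2 / alpha) * Dz
```

Because the filter only ever attenuates Fourier components, the operator never amplifies noise, in contrast to the direct numerical Laplacian of Eq. (6).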

Eq. (11) allows one to obtain a deblurred in-focus image from a single defocused image. It has been previously demonstrated [6] that the main software component of the deblurring method described by Eq. (11), i.e. the application of the operator [1 − α∇²]⁻¹ to the blurred defocused image, is very stable with respect to image noise. It can be computed using fast and efficient numerical algorithms, such as the fast Fourier transform. The robustness of this deblurring method can be understood as a consequence of the fact that coherent defocused images have more spectral energy (and therefore a better signal-to-noise ratio) in high-order Fourier components than the in-focus images.

Finally, the phase retrieval procedure can be made redundant if the defocus distance is chosen in such a way that α = b_2/b_0. In this case the first of the two additive terms on the right-hand side of Eq. (9) disappears and we arrive at a simple equation:

$$I(x,y) \approx b_0\, D_z(x,y), \quad \text{when } z = z_d \equiv 2k\beta a_2 / (\delta a_{00}). \qquad (12)$$
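For a Gaussian PSF with standard deviation σ one has a_2/a_00 = σ²/2, so Eq. (12) reduces to z_d = kσ²β/δ. A small numeric sketch of this optimal-defocus condition follows; the function name and the value σ = 1.5 μm are our own illustrative assumptions, while the wavelength (0.1 nm) and δ/β = 10 match the simulation parameters quoted later in the paper.

```python
import math

def optimal_defocus_gaussian(sigma, wavelength, delta_over_beta):
    # Gaussian-PSF special case of Eq. (12): z_d = k * sigma^2 * beta / delta,
    # with wavenumber k = 2*pi/wavelength
    k = 2.0 * math.pi / wavelength
    return k * sigma ** 2 / delta_over_beta

# Hard X-ray example (all lengths in metres)
z_d = optimal_defocus_gaussian(sigma=1.5e-6, wavelength=0.1e-9, delta_over_beta=10.0)
print(f"optimal defocus: {z_d * 100:.2f} cm")   # prints "optimal defocus: 1.41 cm"
```

Note that z_d grows quadratically with the PSF width σ and linearly with β/δ, so weakly absorbing samples (large δ/β) require correspondingly shorter defocus distances.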

Therefore, the blurred defocused image at the defocus distance z_d is approximately equal (apart from an overall multiplicative constant) to the deblurred in-focus image. The deblurring is achieved because at the defocus distance z_d the Fresnel diffraction effects optimally compensate the blurring due to the PSF. This is analogous to Scherzer defocus, defined in electron microscopy as the distance at which the defocus optimally compensates the spherical aberration of an electron microscope [4]. In the case of a symmetrical Gaussian PSF with standard deviation σ, one obtains a_2/a_00 = σ²/2 and, hence, z_d^Gauss = kσ²β/δ. Introducing the Fresnel number, N_F ≡ kσ²/z, corresponding to the feature size σ, we see that the optimal defocus distance z_d corresponds to the condition

$$N_F = \delta/\beta. \qquad (13)$$

This condition describes a specific match between the parameters of the imaging system (represented by the width of the Gaussian PSF, the radiation wavelength and the defocus distance) and the fundamental characteristic (δ/β) of the complex refractive index of the sample that is required for optimal deblurring by means of defocus.

So far the deblurring method based on Eqs. (9), (11) or (12) has been developed only for monochromatic incident radiation (with wavenumber k), symmetric PSFs (for which a_20 = a_02) and transmission images of objects consisting of a single material (it is actually sufficient that the ratio of the real and imaginary parts of the projected complex refractive index of the object, ñ(x,y) ≡ ∫ n(x,y,z) dz, is the same for all x and y, i.e. Re ñ(x,y)/Im ñ(x,y) = constant [6]). If polychromatic incident radiation is used, the deblurring method can still be applied with the refractive index averaged with respect to the spectral distribution of the incident radiation [7]. In the case of asymmetric PSFs, partial isotropic deblurring can still be achieved with a_2 ≡ (a_20 + a_02)/2. Alternatively, anisotropic deblurring can be achieved in a preferred direction, e.g. in the x-direction when one sets a_2 ≡ a_20. The experimental test results presented below were obtained using a polychromatic spatially anisotropic source.

A similar procedure of 'hardware' deblurring can be applied to slightly defocused images of pure phase (non-absorbing) samples. In this case I(x,y) ≡ I_0 = constant, D(x,y) ≡ D_0 = I_0 a_00, and instead of Eq. (8) we have

$$D_z(x,y) \approx D_0 \left\{ 1 - b_0 (z/k)\, \nabla^2 (\varphi * P)(x,y) \right\}, \qquad (14)$$

where φ(x,y) is the phase distribution in the transmitted beam in the object plane, z = 0 [4, p. 61].
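The pure-phase forward model of Eq. (14) can be sketched spectrally. The function names and all numerical values below are illustrative assumptions (not from the paper); the check simply verifies that the simulated phase contrast is linear in the defocus distance z, as Eq. (14) predicts.

```python
import numpy as np

def spectral_laplacian(A, pixel=1.0):
    # Laplacian evaluated in Fourier space: multiplication by -4*pi^2*(u^2 + v^2)
    v = np.fft.fftfreq(A.shape[0], d=pixel)
    u = np.fft.fftfreq(A.shape[1], d=pixel)
    q2 = u[np.newaxis, :] ** 2 + v[:, np.newaxis] ** 2
    return np.real(np.fft.ifft2(-4 * np.pi ** 2 * q2 * np.fft.fft2(A)))

def defocused_pure_phase(phi_P, D0, b0, z, k, pixel=1.0):
    # Eq. (14): Dz ≈ D0 * [1 - b0*(z/k)*lap(phi*P)], where phi_P is the
    # PSF-blurred phase distribution phi*P
    return D0 * (1.0 - b0 * (z / k) * spectral_laplacian(phi_P, pixel))

# Illustrative numbers: 0.1 nm X-rays, 1 um pixels, a weak sinusoidal phase
n = 64
phi_P = 0.1 * np.cos(2 * np.pi * np.arange(n) / n)[np.newaxis, :] * np.ones((n, 1))
k = 2 * np.pi / 0.1e-9
Dz1 = defocused_pure_phase(phi_P, D0=1.0, b0=1.0, z=0.01, k=k, pixel=1e-6)
Dz2 = defocused_pure_phase(phi_P, D0=1.0, b0=1.0, z=0.02, k=k, pixel=1e-6)

# The phase contrast (departure from D0) is linear in the defocus distance z
assert np.allclose(Dz2 - 1.0, 2.0 * (Dz1 - 1.0))
```

Since the Laplacian has zero mean, the mean intensity of the defocused image stays equal to D_0, consistent with a non-absorbing sample.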


Solving Eq. (14) for φ ∗ P,

$$(\varphi * P)(x,y) = \frac{k}{z b_0}\, \nabla^{-2} \left[ 1 - D_z(x,y)/D_0 \right],$$

and substituting the result into the second-order deconvolution formula for the phase (a direct analogue of Eq. (6)),

$$\varphi(x,y) \approx b_0\, (\varphi * P)(x,y) - b_2\, \nabla^2 (\varphi * P)(x,y), \qquad (15)$$

we derive the following analogue of Eq. (11), which allows one to obtain the deblurred phase distribution in the plane z = 0 from a single blurred defocused image:

$$\varphi(x,y) \approx (k/z) \left( b_2/b_0 - \nabla^{-2} \right) \left[ D_z(x,y)/D_0 - 1 \right]. \qquad (16)$$

We should note that Eq. (16) is more sensitive to low-frequency noise in the experimental data and, therefore, is less stable than Eq. (11) [6]. Compared to the usual TIE-based phase retrieval formula for a pure phase object [8, Eq. (7)], Eq. (16) contains an additional term, b_2/b_0, which takes into account the blurring of the phase-contrast image due to the PSF of the imaging system.

We tested the deblurring methods based on Eqs. (6), (9), (11) and (12) using the 256 × 256 pixel 'Lena' image shown in Fig. 1. Fig. 2 depicts a blurred image obtained by the convolution of the 'ideal' image from Fig. 1 with a 3 × 3 pixel wide Gaussian

Fig. 1. 'Ideal' 256 × 256 pixel image used in numerical experiments.

Fig. 2. Blurred image obtained by the convolution of the 'ideal' image from Fig. 1 with a 3 × 3 pixel wide Gaussian PSF and with 1% simulated Poisson noise.

PSF and with 1% pseudo-random Poisson noise. Before attempting to deconvolve the noisy image from Fig. 2, we first calculated the Laplacian of the noise-free blurred image and added it with an appropriate weight to the blurred image itself to obtain a deblurred image in accordance with Eq. (6). The relative RMS error of the resultant image with respect to the ideal image from Fig. 1 was equal to 1.6% (compared to 2.1% for the noise-free blurred image). Note that in this noise-free case one can still achieve a much better result by using a simple Wiener deconvolution [5]. However, when we applied Eq. (6) to the blurred image with noise (from Fig. 2), the result (shown in Fig. 3) turned out to be very noisy. The relative RMS error of the image in Fig. 3 was 6%, compared to only 2.3% for the raw noisy blurred image (Fig. 2). This demonstrates the desirability of collecting the Laplacian data experimentally, in order to avoid the noise-amplification effect inherent in the numerical differentiation involved in Eq. (6).

We have verified the robustness of the deblurring procedure proposed in this paper by applying Eqs. (9), (11) and (12) to deblur the noisy image from Fig. 2. For this purpose we assumed that the images had a physical size of 256 × 256 μm, the wavelength was λ = 0.1 nm and δ/β = 10 (these


Fig. 3. Deblurred image obtained using Eq. (6) from noisy blurred image in Fig. 2.

Fig. 4. Deblurred image obtained using Eq. (12), i.e. the noisy blurred defocused image at the optimal defocus distance z_d^Gauss.

conditions correspond to the case of hard X-ray microscopy). We calculated the blurred noisy defocused images at z = 1 cm for use with Eqs. (9) and (11), and at the 'optimal' propagation distance z_d^Gauss = 1.4 cm for Eq. (12). We convolved all the defocused images with the 3 × 3 pixel wide Gaussian PSF and 'added' 1% Poisson noise to simulate realistic blurred noisy images. The relative RMS error of the deblurred images obtained from these blurred noisy images using Eqs. (9), (11) and (12) was found to be 2.1%, 2.0% and 1.8%, respectively, i.e. substantially better than the result obtained above with Eq. (6). The deblurred image obtained using Eq. (12) is presented in Fig. 4. This image clearly displays some fine details not visible in the blurred image, Fig. 2. The image in Fig. 4 is also much less noisy than Fig. 3, confirming the robustness of the proposed deblurring technique.

We have further tested the stability of deconvolution using Eq. (12) by applying it to images with different levels of noise. For this purpose we 'added' 1%, 3%, 5% and 10% Poisson noise to the previously simulated blurred defocused image at the 'optimal' propagation distance z_d^Gauss = 1.4 cm. The relative RMS error between these noisy blurred defocused images and the 'ideal' image from Fig. 1 was found to be equal to 1.8%, 3.4%, 5.2% and 10.1%, respectively, which

confirms the excellent stability properties of the proposed 'deblur by defocus' procedure.

We have also tested the deblurring method based on Eq. (11) using an experimental in-line X-ray image of an edge of an approximately 100 μm thick polyethylene sheet (Fig. 5(a)), collected using a laboratory microfocus source with a tungsten target operated at 30 keV. The sample was located at a distance of 4 cm from the source, with the detector located 196 cm from the sample. The spatial resolution of the detector referred to the object plane was 2 μm. One can see a typical phase-contrast effect (Fresnel fringe) near the edge of the polyethylene sheet in Fig. 5(b), which displays a cross-sectional profile through the edge in Fig. 5(a) averaged over the height of the image. The source size in the direction orthogonal to the edge was equal to 7 μm, which determined the approximate width of the PSF in this direction.

Fig. 5. (a) Experimental in-line X-ray image of an edge of a polyethylene sheet; (b) cross-section of image intensity through the edge of the sheet.

For comparison, we present in Fig. 6(a) a 'phase-retrieved' version of the image from Fig. 5(a) obtained using Eq. (10) with the wavelength λ = 0.09 nm (average for the incident X-ray spectrum). Fig. 6(a), unlike Fig. 5(a), displays the expected projected thickness profile of the sample blurred by the PSF. We also applied Eq. (11) to the image in Fig. 5(a), with the result shown in Fig. 7(a).

Fig. 6. (a) Distribution of the projected thickness of polyethylene reconstructed according to Eq. (10); (b) cross-section of the reconstructed thickness distribution through the edge of the sheet.

Fig. 7. (a) Deblurred distribution of the projected thickness of polyethylene obtained in accordance with Eq. (11); (b) cross-section of the deblurred thickness distribution through the edge of the sheet.

One can see that the latter image still displays the correct projected density profile as in Fig. 6(a), but the edge in Fig. 7(a) is much sharper than in Fig. 6(a), confirming the presence of a clear deblurring effect due to the partial compensation of the PSF-induced spread. The general nature of the deblurring effect can be easily visualized by considering the profile in Fig. 7(b) to be a weighted sum of the profiles from Figs. 5(b) and 6(b). The compensation of the PSF-induced blurring by the Fresnel diffraction effect is evident. The spatial resolution estimated from Fig. 7(b) is equal to 5 μm, which is smaller than the source size. Note that this deblurring technique worked quite well despite the experimental image being rather noisy (the noise level was 7.6%). In fact, as can be seen from a comparison of Fig. 5 with Fig. 7, the deblurred image is less noisy (7.1%) than the original one, which is quite a remarkable result for a deconvolution technique.

In conclusion, we have presented a new method for rapid and robust deblurring of some types of images. The method can be viewed as a hardware (or combined software-hardware) implementation of second-order deconvolution which relies on knowledge of the width of the PSF. In that respect the proposed method is quantitative in nature and should be viewed as a special variant of image deconvolution. The method is quite different from most conventional techniques for image sharpening, as it is based on the actual physics of image formation and does not arbitrarily change the true information content of the image. We have developed several variants of the new deblurring method, including one that does not require any image processing at all, provided that the image is collected at a specific defocus distance at which Fresnel diffraction optimally compensates

the PSF-induced blurring. We have tested the proposed deblurring techniques on simulated and experimental images with realistic amounts of noise.

The proposed method should be useful in a variety of imaging problems [9]. Applications to electron and X-ray transmission imaging appear particularly straightforward. Other application areas may include lithography, where improved resolution can be achieved by optimally selecting the defocus distance as a function of the radiation wavelength and the complex refractive index of the material of the mask.

Acknowledgement

The authors are grateful to Drs. D.M. Paganin and A. Pogany for helpful comments and suggestions.

References

[1] J.C. Dainty, R. Shaw, Image Science, Academic Press, London, 1974.
[2] A.N. Tichonov, V. Arsenin, Solutions of Ill-posed Problems, Wiley, New York, 1977.
[3] C.R. Vogel, Computational Methods for Inverse Problems, SIAM, Philadelphia, 2002.
[4] J.M. Cowley, Diffraction Physics, third revised ed., Elsevier, Amsterdam, 1995.
[5] T.E. Gureyev, Ya.I. Nesterets, A.W. Stevenson, S.W. Wilkins, Appl. Opt. 42 (2003) 6488.
[6] D. Paganin, S.C. Mayo, T.E. Gureyev, P.R. Miller, S.W. Wilkins, J. Microsc. 206 (2002) 33.
[7] T.E. Gureyev, D.M. Paganin, A.W. Stevenson, S.C. Mayo, S.W. Wilkins (submitted).
[8] M.R. Teague, J. Opt. Soc. Am. 73 (1983) 1434.
[9] T.E. Gureyev, A.W. Stevenson, Provisional patent no. 2003905370, priority date October 2003.