Optics & Laser Technology 37 (2005) 235–238
Improving the sharpness of an image with non-uniform illumination

Ching-Chung Yang
Department of Electrical Engineering, Far East College, 49 Chung Hua Road, Hsin-Shih, Tainan, Taiwan, Republic of China

Received 6 October 2003; received in revised form 10 March 2004; accepted 24 March 2004
Abstract

We demonstrate a new method to sharpen a non-uniformly illuminated image by means of a wavelet transformation employing the Haar function. Both the fine characteristics and the object edges in the image are well enhanced without degrading the contrast. This method offers some advantages over the conventional algorithms.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Image enhancement; Visibility; Haar function
1. Introduction

Image enhancement has been developed for decades in many fields. One common way to sharpen an image is to use a high-pass filter [1,2]. Another familiar approach is the Laplacian operator [3,4]. However, these methods have difficulty revealing the fine structures of an image with low illumination. The homomorphic filtering approach has been demonstrated to bring out the details of objects in dim areas, but deciding the filter parameters makes the process complicated. In this work, we propose a new method to enhance the sharpness of a non-uniformly illuminated image.
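As a point of reference, the conventional Laplacian sharpening mentioned above can be sketched as follows. This is a minimal illustration in Python/NumPy rather than the Matlab environment used later in the paper; the 4-neighbour Laplacian kernel and the unit weight are our assumptions, not parameters given by the paper.

```python
import numpy as np

def laplacian_sharpen(img, weight=1.0):
    """Classical Laplacian sharpening: subtract a weighted
    4-neighbour Laplacian from the image (img - weight * lap)."""
    img = img.astype(float)
    # 4-neighbour Laplacian with replicated borders
    padded = np.pad(img, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * img)
    return img - weight * lap
```

On a flat region the Laplacian is zero and the image is unchanged; across an edge it produces overshoot on both sides, which sharpens the transition but, as noted above, does little for fine structure hidden in dim regions.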
At first we let M = (I1 − I2)/2 represent the high-pass filtered result, and N = (I1 + I2)/2 the low-pass filtered one. Then we modify M and N to M′ and N′ to sharpen the image. At last we reconstruct the new I1′ and I2′ by I1′ = N′ + M′ and I2′ = N′ − M′. The most important relationship in the transformation is

M/N = (I1 − I2)/(I1 + I2).   (1)

Eq. (1) represents the finest visibility for every two nearby pixels from the optical point of view. In order to exaggerate the finest characteristics in the picture, M with lower value should be magnified by some calculation without saturating the larger M. We modify M to M′ = 7 × log((I1 − I2) + 1) for positive M, and M′ = −7 × log(−(I1 − I2) + 1) for negative M. The parameter 7 could be replaced by any other constant for a better result. Then, to acquire the best visibility, the value of M′/N′ must be 1 or −1. This means that for positive M we choose

N′ = M′ = 7 × log((I1 − I2) + 1),   (2)

and for negative M

N′ = −M′ = 7 × log(−(I1 − I2) + 1).   (3)

At last, we reconstruct the new I1′ and I2′. For positive M we obtain

I1′ = N′ + M′ = 14 × log((I1 − I2) + 1),   I2′ = N′ − M′ = 0,   (4)
Fig. 1. The flow chart for the algorithm by our transformation.
and for negative M

I1′ = N′ + M′ = 0,   I2′ = N′ − M′ = 14 × log(−(I1 − I2) + 1).   (5)

Fig. 2. The physical concept of our algorithm.
The total procedure is summarized in Fig. 1 and the physical meaning is shown in Fig. 2. Fig. 2(a) is the original signal sequence. Fig. 2(b) represents the new I1′ and I2′ signal sequence. Fig. 2(c) shows that the desired final result is obtained by adding Fig. 2(b) to Fig. 2(a).
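The pairwise transform of Eqs. (2)–(5) can be sketched as follows. This is an illustrative Python/NumPy version, not the paper's Matlab code; the function names are ours, and we assume the natural logarithm, since the paper does not state the base.

```python
import numpy as np

def pairwise_enhance(i1, i2, c=7.0):
    """Transform one pixel pair per Eqs. (2)-(5): the modified detail
    M' is a log-compressed version of I1 - I2, and N' is chosen so
    that |M'/N'| = 1 (the best visibility of Eq. (1)).
    Returns the reconstructed pair (I1', I2')."""
    d = i1 - i2
    if d >= 0:
        m = c * np.log(d + 1.0)      # Eq. (2): N' = M'
        return 2.0 * m, 0.0          # Eq. (4): I1' = N'+M', I2' = N'-M' = 0
    m = c * np.log(-d + 1.0)         # Eq. (3): N' = -M'
    return 0.0, 2.0 * m              # Eq. (5): I1' = 0, I2' = N'-M'

def enhance_row(row, c=7.0):
    """Apply the pair transform to non-overlapping pixel pairs of a
    1-D signal, producing the extracted sequence of Fig. 2(b)."""
    out = np.zeros_like(row, dtype=float)
    for k in range(0, len(row) - 1, 2):
        out[k], out[k + 1] = pairwise_enhance(row[k], row[k + 1], c)
    return out
```

Note that the whole difference is pushed onto one pixel of each pair (the other becomes zero), so the extracted sequence carries only magnified local contrast; as described above, it is later added back to the original signal.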
3. Experimental results

We use Matlab 5.3 to process the transformation. The input image is a two-dimensional matrix of size 256 × 256. We perform the transformation on the image matrix in the row direction and in the column direction, respectively. The transformation in each direction is carried out twice, with a one-pixel shift the second time, in order to detect all of the finest variations. The total procedure is described in Fig. 3. After completing the total procedure, we add the derived new matrix to the original one to acquire the final image.

Fig. 3. The total procedure for the two-dimensional transformation.

The original input image is shown in Fig. 4(a). Objects in the dim area are not illuminated enough to be distinguished in the picture. At the beginning, we use the homomorphic filtering approach to sharpen the image. We use the transfer function of the Butterworth high-pass filter of order n and
Fig. 4. (a) The original input image. (b) The filtered image by the homomorphic filtering approach.
with cutoff frequency D0 shown below:

H(k, l) = rl + rh / (1 + (D0/D(k, l))^(2n)),   (6)

where D(k, l) = (k² + l²)^(1/2) denotes the distance from the point (k, l) to the center of the frequency domain, and rl and rh are adjustable parameters. In this case, we select n = 3, D0 = 0.45, rh = 1.35, and rl = 0.3.
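A homomorphic filter built on the transfer function of Eq. (6) can be sketched as below. This is our Python/NumPy reconstruction, not the paper's Matlab code; in particular, the paper does not state how frequencies are normalised, so we assume D is measured in normalised cycles per pixel (which makes D0 = 0.45 dimensionless), and we use the standard log/exp wrapping of homomorphic filtering.

```python
import numpy as np

def homomorphic_filter(img, n=3, d0=0.45, rh=1.35, rl=0.3):
    """Homomorphic filtering with the Butterworth-style high-pass
    transfer function of Eq. (6): H = rl + rh / (1 + (D0/D)^(2n)).
    The image is filtered in the log domain, then exponentiated back."""
    img = img.astype(float)
    rows, cols = img.shape
    # centred, normalised frequency coordinates (assumption: D0 is
    # expressed in the same normalised units)
    k = np.fft.fftshift(np.fft.fftfreq(rows))
    l = np.fft.fftshift(np.fft.fftfreq(cols))
    D = np.sqrt(k[:, None] ** 2 + l[None, :] ** 2)
    H = rl + rh / (1.0 + (d0 / np.maximum(D, 1e-12)) ** (2 * n))
    F = np.fft.fftshift(np.fft.fft2(np.log1p(img)))
    out = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.expm1(out)
```

At zero frequency H tends to rl < 1, so the slowly varying illumination component is attenuated, while high frequencies are boosted by up to rl + rh.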
Fig. 5. (a) The extracted image from Fig. 4(a) by our method. (b) Image derived by 0.78 × Fig. 4(a) + Fig. 5(a).
We find that the filtered image shown in Fig. 4(b) is not as clear as we wish. We then perform the process described in Fig. 3 to transform the image and obtain a new one, shown in Fig. 5(a). In performing the transformation, we adjust the constant 7 in Eqs. (2) and (3) to 10.5. Then we derive Fig. 5(b) by adding 0.78 × Fig. 4(a) to Fig. 5(a). The parameters 10.5 and 0.78 are selected to obtain a better result. Note that the tree leaves and the wall bricks have all become sharper compared to the original image, without degrading the contrast.
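The full two-dimensional procedure of Fig. 3 can be sketched as follows. This Python/NumPy version is ours, not the paper's Matlab implementation; in particular, the paper does not specify how the four passes (rows and columns, unshifted and shifted by one pixel) are combined into the extracted matrix, so summing them here is an assumption.

```python
import numpy as np

def extract_detail(img, c=10.5, w=0.78):
    """Sketch of Fig. 3: the pairwise log transform of Eqs. (2)-(5)
    is applied along rows and columns, each twice (once starting at
    pixel 0 and once shifted by one pixel), the four extracted fields
    are accumulated, and the result is blended with the original as
    w * original + extracted (w = 0.78 in the paper)."""
    img = img.astype(float)

    def pass_1d(v, start):
        out = np.zeros_like(v)
        for k in range(start, len(v) - 1, 2):
            d = v[k] - v[k + 1]
            t = 2.0 * c * np.log(abs(d) + 1.0)  # |I1' - I2'|, Eqs. (4)/(5)
            if d >= 0:
                out[k] = t        # Eq. (4): I1' = t, I2' = 0
            else:
                out[k + 1] = t    # Eq. (5): I1' = 0, I2' = t
        return out

    detail = np.zeros_like(img)
    for shift in (0, 1):                                   # unshifted + shifted
        detail += np.apply_along_axis(pass_1d, 1, img, shift)  # rows
        detail += np.apply_along_axis(pass_1d, 0, img, shift)  # columns
    return w * img + detail
```

On a flat image the extracted field is zero and only the 0.78-scaled original remains; every local intensity step contributes a log-compressed detail term at the darker/brighter side of the pair, which is what lifts the fine structure in the dim regions.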
Fig. 6. The magnitude spectra of Figs. 4(a), 5(a), 5(b), and 4(b), sequentially. The arrows indicate the critical frequencies for sharpening the image.
4. Discussion and conclusion

To quantitatively evaluate our method, we transform Figs. 4(a), 5(a), 5(b), and 4(b) to the Fourier domain and compare their magnitude spectra. This is also accomplished using Matlab 5.3. Fig. 6(a) is the spectrum of the original image. Fig. 6(b) is the spectrum of the extracted image; some specific frequencies appear to have been selectively magnified. The same is found in the final image's spectrum, shown in Fig. 6(c). In Fig. 6(d) we notice that the homomorphic filtering approach enhances the critical frequencies to a lesser extent, so the processed image is not sharpened enough to observe the fine details. Furthermore, many non-critical frequencies are also amplified in the process, which results in image saturation in the brighter region, as shown in Fig. 4(b). The algorithm suggested in this study mainly reconstructs an image with better local visibility. It really helps to reveal the fine structures in a non-uniformly illuminated
image. This approach is useful for various applications, such as X-ray film enhancement, night vision, pattern recognition, etc.

References

[1] Huang TS, Yang GJ, Tang GY. A fast two-dimensional median filtering algorithm. IEEE Trans Acoust Speech Signal Process 1979;ASSP-27:13–8.
[2] Brinkman BH, Manduca A, Robb RA. Optimized homomorphic unsharp masking for MR grayscale inhomogeneity correction. IEEE Trans Med Imaging 1998;17(2):161–71.
[3] Frei W, Chen CC. Fast boundary detection: a generalization and a new algorithm. IEEE Trans Comput 1977;C-26:988–98.
[4] Marr D, Hildreth E. Theory of edge detection. Proc R Soc London 1980;B207:187–217.
[5] Daubechies I. Orthonormal bases of compactly supported wavelets. Commun Pure Appl Math 1988;41:909–96.
[6] Averbuch A, Lazar D, Israeli M. Image compression using wavelet transform and multiresolution decomposition. IEEE Trans Image Process 1996;5(1):4–15.