A combined temporal and spatial deghosting technique in scene based nonuniformity correction

Infrared Physics & Technology 71 (2015) 408–415
Fan Fan a, Yong Ma a, Jun Huang a,*, Zhe Liu b, Chengyin Liu b

a Electronics Information School, Wuhan University, Wuhan 430072, China
b School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China

Highlights

- A combined temporal and spatial deghosting technique is proposed.
- The NUC parameters can be updated only in smooth areas.
- It can deal with infrared sequences in which the background stays unchanged.
- The method can reduce FPN robustly without generating ghosting artifacts.

Article info

Article history: Received 23 November 2014; Available online 6 June 2015.

Keywords: Infrared imaging; Nonuniformity correction; Scene based; Deghosting

Abstract

The least mean square error based nonuniformity correction algorithm is a classical method for reducing the fixed pattern noise in infrared focal plane arrays. It is well known for its low cost in computation and storage resources. However, it suffers from the drawback that ghosting artifacts are easily generated in edge areas when the inter-frame motion slows. In this paper, a combined temporal and spatial deghosting technique is proposed. Both spatial correlation detection and temporal motion detection are used to gate the update of the correction parameters. The experimental results demonstrate that the deghosting performance of the proposed method is superior to other deghosting methods. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

The least mean square (LMS) based nonuniformity correction (NUC) algorithm [1,2] is a classical method to reduce the fixed pattern noise (FPN) in infrared focal plane arrays (IRFPAs) [3,4]. It is well known for its low cost in computation and storage resources. However, ghosting is a non-negligible drawback of LMS based NUC algorithms. Ghosting occurs whenever the scene motion slows or stops. Once the scene remains stationary for a while, the textures and edges gradually degenerate and become blurred: the static scene "burns" into the NUC parameters. This leaves a ghosting artifact in the original place after motion resumes, and the artifacts can remain visible for hundreds of frames before they are eliminated.

Fig. 1 gives an example of how ghosting artifacts appear and disappear. In the figure, the top row shows the original images, and the bottom row shows the corresponding images corrected by an LMS based NUC algorithm, namely Scribner's algorithm [1]. The first

* Corresponding author. E-mail address: [email protected] (J. Huang). http://dx.doi.org/10.1016/j.infrared.2015.06.001

column is the first frame of the infrared sequence, after which the scene remains stationary for a while. When Scribner's algorithm is used to reduce the FPN, the textures and edges become blurred, as shown in the second column. Ghosting artifacts appear when motion begins, as shown in the third column, and they remain visible for a long period before they disappear, as shown in the last column.

Fig. 1. Examples in the infrared sequence processed by Scribner's algorithm [1]. The top row is the original images and the bottom row is the corresponding corrected images. From left to right: the first frame of the infrared sequence; the frame after the scene remains stationary for a while; the frame after the scene motion resumes; the frame after the scene keeps moving for hundreds of frames.

To avoid ghosting artifacts, many algorithms have been proposed [2,5,6]. Vera and Torres argued that ghosting artifacts are mainly generated by the fixed learning rate used when updating the NUC parameters, and that a small step size should be adopted in edge areas; they therefore improved Scribner's algorithm with an adaptive learning rate [5]. Rossi also pointed out that ghosting artifacts arise mainly in strong edge areas and, instead of a classical low-pass filter, introduced the edge-preserving bilateral filter [6]. Hardie noted that such deghosting algorithms only slow the burn-in process and do not eliminate it for long motion pauses. Ghosting artifacts generally occur when motion across the whole image, or a part of it, temporarily slows or stops, so Hardie gated the update of the NUC parameters in such situations [2]. Hardie thereby formulated a well-known principle of LMS based NUC algorithms: no motion, no update.

The drawback of Hardie's deghosting method is that the update of the NUC parameters is gated only by temporal pixel-to-pixel change; the scene information is not taken into account. When the background stays unchanged, walking people, cars or other moving objects will make the static background burn into the NUC parameters. Hardie's method cannot gate the update of the NUC parameters in this situation, so ghosting artifacts can still be generated. A more robust deghosting method should stop the update of the NUC parameters using information from both the spatial and the temporal domain. In this paper, a combined temporal and spatial deghosting method is proposed. In each frame, the NUC parameters of one detector can only be updated when (i) the response of the detector is highly correlated with its neighborhood, and (ii) a large temporal deviation of the detector's response exists.

1.1. Background

Infrared focal plane array (IRFPA) sensors are widely used in aviation, industry, agriculture, medicine, and scientific research [3,4]. However, the differing photo-response of each detector within the IRFPA, caused by the manufacturing process, results in nonuniformity, and FPN is therefore superposed on the original image. To reduce FPN, many NUC algorithms have been proposed. They fall mainly into two categories: (i) calibration based NUC algorithms [7,8] and (ii) scene based nonuniformity correction (SBNUC) algorithms [9–11]. The calibration based methods determine the NUC parameters by inserting extended blackbodies into the optical path and recording the detector responses at one or more background temperatures. However, the FPN is influenced by external conditions such as the ambient temperature and variations in the transistor bias voltage [1,7,12], so the response of each detector drifts slowly over time and periodic calibration is needed.
However, this requires halting the infrared sensor, which is unacceptable in most applications.

In contrast, the scene based methods determine the NUC parameters from the observed scene. For example, registration based SBNUC algorithms [9–11,13] adaptively update the NUC parameters through inter-frame registration; Kalman filter based SBNUC algorithms [14,15] use the Kalman filter to estimate the best NUC parameters; the SBNUC algorithm based on midway histogram equalization corrects each column of the image by midway-equalizing the histograms of its neighboring columns [16]. However, the registration based SBNUC algorithms assume that the motion between adjacent frames consists only of translation, ignoring any scaling, rotation or other warping of the images [17–20], and the other algorithms require a large amount of computational and/or storage resources. This limits their use in the embedded systems of infrared cameras. Besides these algorithms, one classical SBNUC algorithm is based on the LMS error [1,2,7,21,22]. It adaptively updates the NUC parameters according to the LMS error between the corrected images and their desired images, and is well known for its low cost in computation and storage resources. The main drawbacks of the LMS based algorithms are that they require continuous scene motion and that ghosting artifacts are easily generated [2]. Our work focuses on the deghosting technique in LMS based NUC algorithms, so that a simple, versatile and robust algorithm can be applied directly in real-time infrared cameras.
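As a concrete illustration of the calibration based approach described above, a two-point calibration solves a per-pixel gain and offset from two uniform blackbody frames. The sketch below is a generic reconstruction of the standard technique, not the authors' code; the function name, variable names, and radiance levels are our own choices:

```python
import numpy as np

def two_point_calibration(x_low, x_high, t_low, t_high):
    """Per-pixel two-point calibration: solve gain g and offset o so that
    the responses to two uniform blackbody frames map onto the flat
    target levels t_low and t_high (removing gain and offset FPN)."""
    g = (t_high - t_low) / (x_high - x_low)  # per-pixel gain
    o = t_low - g * x_low                    # per-pixel offset
    return g, o

# A detector with 5% gain spread and random offsets, imaged at two
# blackbody radiance levels (all magnitudes are illustrative):
rng = np.random.default_rng(2)
a = 1.0 + 0.05 * rng.standard_normal((4, 4))
b = 20.0 * rng.standard_normal((4, 4))
x_low, x_high = a * 1000.0 + b, a * 2000.0 + b

g, o = two_point_calibration(x_low, x_high, 1000.0, 2000.0)
assert np.allclose(g * x_low + o, 1000.0)   # both calibration frames
assert np.allclose(g * x_high + o, 2000.0)  # become perfectly uniform
```

As the text notes, these parameters drift with ambient conditions, which is why periodic recalibration, or a scene based update, is needed.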

2. Deghosting method

2.1. Observation model

Assuming the photo-response of each detector in the IRFPA is linear over the operating response range, the output of the IRFPA is given by:

x_n(i,j) = a_n(i,j) · Φ_n(i,j) + b_n(i,j) + η_n(i,j),    (1)

where i, j are the spatial coordinates and the subscript n is the serial number of the frame, Φ_n(i,j) stands for the real infrared radiation collected by the (i,j)th detector, a_n(i,j) and b_n(i,j) are the gain and offset of the radiation-voltage response of the (i,j)th detector respectively, x_n(i,j) is the observed image, and η_n(i,j) is the random electrical noise. The observed image is typically corrupted by FPN. To eliminate the FPN and provide an estimate of the true scene radiation Φ_n(i,j), NUC is required, which is performed by applying a linear mapping to the output of the IRFPA. The NUC function is given by:

y_n(i,j) = g_n(i,j) · x_n(i,j) + o_n(i,j),    (2)

where g_n(i,j) and o_n(i,j) are the gain and offset NUC parameters respectively, and y_n(i,j) is the corrected image. The SBNUC updates the gain and offset NUC parameters according to the scene in real time.
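Eqs. (1) and (2) can be exercised numerically. The sketch below simulates a small detector array with per-pixel gain and offset and shows that the linear NUC of Eq. (2) with the ideal parameters g = 1/a and o = −b/a removes the FPN exactly; all names and magnitudes are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4                                    # a tiny detector array

phi = rng.uniform(1000.0, 2000.0, (H, W))      # true scene radiation Phi_n(i,j)
a = 1.0 + 0.05 * rng.standard_normal((H, W))   # per-detector gain a_n(i,j)
b = 20.0 * rng.standard_normal((H, W))         # per-detector offset b_n(i,j)
eta = rng.standard_normal((H, W))              # random electrical noise

x = a * phi + b + eta                          # Eq. (1): observed image with FPN

# Eq. (2): linear NUC. With the ideal parameters g = 1/a and o = -b/a,
# the corrected image recovers the scene up to the (rescaled) random noise.
g, o = 1.0 / a, -b / a
y = g * x + o
assert np.allclose(y, phi + eta / a)
```

The scene based methods below estimate g and o without ever observing Φ directly, which is where the residuals discussed next come from.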

2.2. Scribner's algorithm and Hardie's deghosting method

Scribner's algorithm is essentially an LMS adaptive filter. It first defines the "desired" image, which is expected to approximate the reference image and to be free from fixed pattern noise. However, the reference images, i.e., the ideal images, are unknown in real infrared sequences. The general approach is to estimate the desired image from the observed infrared image sequence by removing the high-frequency FPN with a spatial low-pass filter, e.g., the mean filter, Gaussian filter, or bilateral filter. The NUC parameters of each pixel are then adaptively adjusted by steepest descent so that the corrected images approximate the desired images. The error image between the corrected image and its desired image is defined by [1,2,7,21,22]:

e_n(i,j) = y_n(i,j) − d_n(i,j),    (3)

where e_n(i,j) is the error image, y_n(i,j) is the corrected image, d_n(i,j) is the desired image, and i and j denote the spatial coordinates. The algorithm seeks to force the corrected image close to the desired image. The gain and offset NUC parameters are updated adaptively by minimizing the instantaneous squared error e_n²(i,j). The steepest descent algorithm is applied to update the NUC parameters iteratively:

g_{n+1}(i,j) = g_n(i,j) − μ_n(i,j) ∂_{g_n} e_n²(i,j),
o_{n+1}(i,j) = o_n(i,j) − μ_n(i,j) ∂_{o_n} e_n²(i,j),    (4)

where μ_n(i,j) is the iteration step size, which determines the convergence speed of the algorithm. The subscripts n and n+1 indicate that one iteration is performed per frame. Substituting Eqs. (2) and (3) into Eq. (4), we obtain the exact update rule:

g_{n+1}(i,j) = g_n(i,j) − 2 μ_n(i,j) e_n(i,j) x_n(i,j),
o_{n+1}(i,j) = o_n(i,j) − 2 μ_n(i,j) e_n(i,j).    (5)
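One iteration of Eqs. (3)-(5) can be sketched as follows. The 3 × 3 mean filter stands in for the spatial low-pass estimator of the desired image; the function names and the step size value are our own choices, not the paper's:

```python
import numpy as np

def box3(im):
    """3x3 mean filter with edge replication -- a simple stand-in for the
    spatial low-pass filter that estimates the desired image."""
    H, W = im.shape
    p = np.pad(im, 1, mode="edge")
    return sum(p[di:di + H, dj:dj + W]
               for di in range(3) for dj in range(3)) / 9.0

def scribner_step(x, g, o, mu=1e-7):
    """One LMS iteration of Eqs. (3)-(5): correct the frame, estimate the
    desired image, and apply the steepest-descent parameter update."""
    y = g * x + o                  # Eq. (2): corrected image
    d = box3(y)                    # desired image: low-pass of y
    e = y - d                      # Eq. (3): error image
    g_next = g - 2.0 * mu * e * x  # Eq. (5): gain update
    o_next = o - 2.0 * mu * e      # Eq. (5): offset update
    return y, g_next, o_next
```

Iterating this on one static frame drives e_n toward zero, i.e., the frame burns into g_n and o_n; that is precisely the ghosting mechanism analyzed below.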

For each incoming frame, we first calculate its desired image and update the NUC parameters; the new parameters are then used to correct the next frame. Thus the NUC parameters for one frame are obtained from its previous frames. Fig. 2 shows the real-time block scheme of Scribner's algorithm. The input data flow x_n(i,j), which comes from the IRFPA, is handled by the NUC subblock with one multiplication and one addition, producing the output data flow y_n(i,j). The desired image d_n(i,j) is estimated from y_n(i,j) by the spatial filter block, and the NUC parameters are updated by Eq. (4) once per incoming frame. The dashed arrow indicates that the NUC parameters g_n(i,j) and o_n(i,j) are adaptively adjusted.

In a real-time infrared imaging system, Scribner's algorithm updates the NUC parameters once per frame, as an LMS adaptive filter, as long as the camera is working. Residuals always exist in the desired image, because the reference images are unknown in a real-time imaging system. Moreover, the LMS adaptive filter presumes that its input is a stationary process, whereas temporary stops and motions are common in infrared sequences, so the radiation of the infrared sequence is not a stationary process in the real world. Therefore, as the algorithm iteratively updates the NUC parameters according to desired images that contain residuals, most of the NUC parameters are updated correctly, but a fraction of them are slowly corrupted by the residuals. The degree of corruption increases with the step size and with the number of iterations. As a result, the residuals accumulate in the NUC parameters whenever desired images with similar residuals recur in the infrared sequence. For example, when motion across the whole image or a portion of it temporarily slows or stops, the NUC parameters are updated according to the same desired images, or the same portion of the desired image, and the residuals accumulate quickly. After motion resumes, the updated NUC parameters are no longer suitable: the old scene remains visible, superposed on the newly corrected scene, which produces ghosting artifacts.

To avoid them, Hardie improved Scribner's algorithm. The block scheme of Hardie's algorithm is shown in Fig. 3. Compared with Fig. 2, Hardie added the temporal motion detection subblock, which is his main contribution. The update of one detector's NUC parameters is then gated when the temporal deviation of its response is small. In this method, the NUC parameters are updated according to the following rules:

μ_n(i,j) = { K / (1 + M²·σ_n²(i,j)),  if |d_n(i,j) − Z_n(i,j)| > T
           { 0,                        otherwise    (6)

Z_{n+1}(i,j) = { d_n(i,j),   if |d_n(i,j) − Z_n(i,j)| > T
              { Z_n(i,j),    otherwise    (7)

where Z is used to detect the temporal motion, T is the threshold, and σ_n²(i,j) is an estimate of the local spatial variance centered at pixel (i,j) in frame n. The parameter K is the maximum step size and M is the normalization value. When the scene stays motionless, the update of the NUC parameters is paused. Clearly, in Hardie's algorithm, the update of the NUC parameters is gated by the temporal domain only.

Fig. 2. Block scheme of Scribner's algorithm.

Fig. 3. Block scheme of Hardie's deghosting algorithm.

2.3. Combined temporal and spatial deghosting technique

Essentially, the cause of the ghosting artifacts is that the residuals of the desired image corrupt the NUC parameters. If the desired image of each frame were exactly equal to its ideal infrared radiation, the NUC parameters would be updated by minimizing the squared error between the corrected image and the ideal infrared radiation; after hundreds of iterations the updated NUC parameters would be the most accurate, and the mean square error between the corrected image and the ideal image would approach zero. However, residuals always exist. An example of a desired image


Fig. 4. An example of production of bias noise. From the left to the right are: the ideal image; the desired image; the residual of the desired image.

and its residuals are shown in Fig. 4. Fig. 4(a) is an infrared image with little FPN, which can be regarded as the ideal image. Its desired image is shown in Fig. 4(b). The residuals between the ideal image and the desired image are clearly visible, as shown in Fig. 4(c). They depend on the spatial geometry and are mainly generated in edge or texture areas. Therefore, when we update the NUC parameters according to the desired image in Fig. 4(b), the corrected image becomes more and more blurred: the residuals burn into the NUC parameters. The burn-in phenomenon is worse in three cases: (i) the residuals are large; (ii) identical or similar frames recur in the infrared sequence; (iii) the step size in Eq. (4) is relatively large. Handling these three cases is the key to deghosting.

As analyzed above, the first condition is the most important one. Since the residuals are small in uniform regions with no edges or textures, a prerequisite of our deghosting technique is that only pixels in uniform regions are considered for update. That is, the NUC parameters of a pixel can be updated only if it has high correlation with its neighbors in the desired image. Therefore, we define the spatial local correlation used to detect uniform areas by:

S_n(i,j) = { 1,  if ∀(p,q) ∈ Ω: |d_n(p,q) − d_n(i,j)| ≤ T_s
           { 0,  otherwise    (8)

where S_n(i,j) is the spatial local correlation, T_s is the constant threshold of the spatial correlation detection, and Ω indicates the local region around pixel (i,j). S_n(i,j) = 1 indicates that pixel (i,j) and its neighbors are highly correlated, so (i,j) is likely in a uniform area and its residual is small. S_n(i,j) = 0 indicates that pixel (i,j) and its neighbors have low correlation, so it is probably close to an edge or texture area and its residual is large. In general, the correlation between different pixels is related not only to their photometric similarity but also to their geometric closeness. However, calculating the geometric closeness is not worthwhile given its high computational cost; therefore, in Eq. (8) we directly use the difference of pixel values to evaluate the correlation. The computation required for the spatial correlation detection thus depends on the size of Ω, and the total computational complexity of Eq. (8) per frame is size(Ω) · M · N, where M and N are the column and row resolutions of the IRFPA. Considering that S_n(i,j) is used to evaluate the residuals, Ω should cover the support of the low-pass filter in the LMS based NUC algorithm. Furthermore, since the calculation of geometric closeness is omitted, the size of Ω should be larger than that of the low-pass filter: for example, when the spatial low-pass filter is 3 × 3, Ω should be 5 × 5 or 7 × 7. Then the geometry of the pixels within the spatial filter support cannot influence S_n(i,j) heavily. Besides, the reason we use the

desired image d_n is that it is more resistant to random noise and fixed pattern noise than the observed image. The value of T_s depends on the FPN and the random noise: when the FPN is strong, a relatively large T_s should be used, and when the FPN is weak, a relatively small T_s should be used.

We now discuss the second case: identical or similar frames recurring in the infrared sequence. Even tens of iterations can lead to a visible change in the corrected image; therefore, repeating residuals, corresponding to repeated identical or similar frames, result in ghosting artifacts. In practice, the same desired image can be observed tens of times in two different situations. One is that a static scene is recorded continuously for a long period of time. The other is that the background is static while the foreground keeps moving. Hardie's method, which gates the update of the correction parameters by temporal motion detection, mainly handles the first situation. In the latter situation it does not perform well: the residuals of the static background slowly burn into the NUC parameters while the moving foreground repeatedly covers and uncovers the background. Therefore, S_n should also be incorporated into the temporal motion detection. Only when the residuals are small, i.e., S_n = 1, is the temporal motion detection applied. The temporal motion detection in our method is then given by:



R_n(i,j) = { 1,  if |d_n(i,j) − Z_n(i,j)| > T_R AND S_n(i,j) = 1
           { 0,  otherwise    (9)

Z_{n+1}(i,j) = { d_n(i,j),   if |d_n(i,j) − Z_n(i,j)| > T_R AND S_n(i,j) = 1
              { Z_n(i,j),    otherwise    (10)

where T_R is the threshold of the temporal detection. We also initialize Z_0(i,j) so that |d_0(i,j) − Z_0(i,j)| > T_R for all i, j. The additional condition S_n(i,j) = 1 expresses that only uniform regions are considered, since the residuals of the desired image should not influence the temporal motion detection. Both the intra-frame spatial correlation and the inter-frame temporal deviation are thus used to gate the update of the NUC parameters: in each iteration, the NUC parameters of a pixel can only be updated when both R_n(i,j) and S_n(i,j) are equal to one. Combining the temporal and spatial detections, the update of the proposed method is gated by:

μ_n(i,j) = { K / (1 + M²·σ_n²(i,j)),  if R_n(i,j)·S_n(i,j) = 1
           { 0,                        otherwise    (11)

where K is the maximum step size and M is the normalization value. Only if a pixel satisfies both the temporal motion detection and the spatial correlation detection can the correction parameters be updated. In fact, R_n(i,j) = 1 already implies S_n(i,j) = 1, as shown in Eq. (9); we write R_n(i,j)·S_n(i,j) = 1 to emphasize that our deghosting technique depends on both the temporal motion detection and the spatial correlation detection. When the step size μ_n(i,j) equals 0, the update is paused.

In summary, the block scheme of the proposed deghosting method is shown in Fig. 5; it is the main contribution of our work in this paper. Compared with the block scheme in Fig. 3, a spatial correlation detection subblock is introduced, whose input is d_n(i,j) and whose output is S_n(i,j). Both d_n(i,j) and S_n(i,j) feed the temporal motion detection subblock, which outputs R_n(i,j). The update of the NUC parameters is then gated by the logical AND of S_n(i,j) and R_n(i,j). Note that the convergence rate of our method may be slower than that of Hardie's method, as the update of the NUC parameters can be gated by the two thresholds.

Fig. 5. Block scheme of the combined temporal and spatial gated adaptive SBNUC algorithm.

The proposed deghosting method loses some efficiency by using the additional spatial information. To analyze how much, we count all the computational resources per pixel from Eq. (3) to Eq. (11). The computational resources per pixel used by the different algorithms, including comparison, add/minus, multiply, and division operations, are listed in Table 1. The operation counts assume a 3 × 3 spatial low-pass filter and a 5 × 5 window Ω. Clearly, our algorithm has the highest computational cost. Compared with Hardie's algorithm, our method adds 51 comparison/AND operations and 25 add/minus operations per pixel. Fortunately, division, the most time-consuming operation, is not involved in the additional subblock. The efficiency of our algorithm is only 9.6% lower than that of Hardie's algorithm, as shown in the last column of the table (tested in Matlab 2008a on an Intel i3 CPU).
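The spatial correlation detection of Eq. (8) and the combined gate of Eqs. (9)-(11) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the edge padding of the window Ω, and the default thresholds are our own choices; Hardie's purely temporal gate is recovered by passing an all-ones S.

```python
import numpy as np

def spatial_correlation(d, Ts=15.0, half=2):
    """Eq. (8): S_n(i,j) = 1 when every pixel of the (2*half+1)^2 window
    Omega differs from d(i,j) by at most Ts, i.e. (i,j) is in a uniform area."""
    H, W = d.shape
    p = np.pad(d, half, mode="edge")          # our choice of border handling
    S = np.ones((H, W), dtype=bool)
    k = 2 * half + 1
    for di in range(k):
        for dj in range(k):
            S &= np.abs(p[di:di + H, dj:dj + W] - d) <= Ts
    return S

def combined_gate(d, Z, S, local_var, TR=10.0, K=2.0, M=1.0):
    """Eqs. (9)-(11): temporal motion detection restricted to uniform areas.
    Since R_n = 1 already implies S_n = 1, mu is gated by R alone."""
    R = (np.abs(d - Z) > TR) & S                            # Eq. (9)
    Z_next = np.where(R, d, Z)                              # Eq. (10)
    mu = np.where(R, K / (1.0 + M ** 2 * local_var), 0.0)   # Eq. (11)
    return mu, R, Z_next
```

On a synthetic frame that is flat except for one strong edge, S is zero wherever the window straddles the edge, so those pixels never update even when the temporal test fires.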

3. Experimental results

In this section, infrared sequences with real nonuniformity are used to verify the performance and robustness of the proposed deghosting method against other developed deghosting techniques. The infrared sequences are obtained from a long-wave uncooled IRFPA camera. The IRFPA is the UL03191-019 produced by ULIS (http://www.ulis-ir.com/index.php?infrared-detector=ul03191-019). Its resolution is 384 × 288 and its analog/digital sampling precision is 14 bit. All images in the infrared sequences are first corrected by the initial gain and offset parameters, which are calculated by the typical two-point calibration method [7]. Then our deghosting method is compared with Hardie's method [2], denoted GALMS, and Vera's method [5], denoted ALRLMS. The key parameters of the three algorithms are set identically. The maximum step size K is 2 and the normalization value M is 1. The temporal threshold in GALMS and our method is 10, and the spatial threshold in our method is 15. The spatial filter in Vera's method is the four-neighborhood average filter; the spatial filter in GALMS and our algorithm is a 3 × 3 Gaussian low-pass filter with standard deviation 1.5. The size of Ω in the local correlation detection is 7 × 7. The parameter settings, including the temporal threshold T_R and the spatial threshold T_s, are analyzed at the end of this section.

The first infrared sequence for comparison records 2000 frames of a static background with two walking persons. One person walks through the field of view from time to time. The other

Table 1
Comparison of the computational resources and the consumed time per pixel.

Algorithm              Comparison/AND   Add/Minus   Multiplication   Division   Time consumed (s)
Scribner's algorithm   0                12          5                1          9.8204e−007
Hardie's algorithm     2                27          8                4          3.1057e−006
Our algorithm          53               52          8                4          3.4828e−006

Fig. 6. The corrected images of the infrared sequence processed by: (a) the initial correction parameters; (b) ALRLMS; (c) GALMS; (d) ours. The upper row corresponds to the 643rd frame. The bottom row corresponds to the 1425th frame.


person stands still in the middle of the view for the first half of the sequence and then walks away. The corrected images produced by the three algorithms are shown in Video 1 in the supplementary materials. In the video, we are interested in whether ghosting artifacts are generated when the standing person walks away. Therefore, the 643rd and the 1426th frames of the video are picked, as shown in Fig. 6. Frame 643 corresponds to the static background with a moving foreground; frame 1426 corresponds to the moment when the person is walking away. In the images processed by ALRLMS, as shown in the upper image of Fig. 6(b), the textures and edges are all blurred. The blurred scene information has burned into the correction parameters, which results in ghosting artifacts after the static person moves, as shown in the bottom image of Fig. 6(b); the artifacts are highlighted by the red box. With Hardie's deghosting method, the details and textures are clear before the background changes; however, ghosting artifacts can still be seen in the red box on frame 1426 in Fig. 6(c). In contrast, no obvious ghosting artifacts can be found in Fig. 6(d). Clearly, our method outperforms the other two algorithms visually.

However, the ghosting artifacts cannot be quantitatively analyzed in Video 1, since it is hard to automatically distinguish ghosting artifacts from true scene targets in a complicated scene. In this paper, we use the updated correction parameters to correct a uniform blackbody so that the ghosting artifacts can be evaluated. The corrected image of the uniform blackbody is expected to be uniform; any contour on it indicates that scene information has "burned" into the correction parameters and ghosting artifacts have been introduced. Thus, the residual nonuniformity U_r is used as a measure of the ghosting artifacts on a uniform background. It is calculated as follows:

U_r = (1 / f̄) · sqrt( (1 / (M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i,j) − f̄)² ) × 100%,    (12)

where f is the image under analysis, f̄ is the average of all pixel values f(i,j), and M and N are the column and row numbers of the image. Generally, a smaller U_r indicates a better correction and deghosting performance.

Therefore, to evaluate the ghosting artifacts quantitatively, we recorded another infrared sequence specially: the infrared camera stares at a static scene, and its optical path is covered by a uniform baffle periodically, so that the U_r of the baffle's NUC images can be examined to evaluate the ghosting artifacts. The correction results produced by the different algorithms are shown in Video 2. The

Fig. 8. The residual nonuniformity curves processed by different algorithms.

Fig. 7. The corrected images of the infrared sequence processed by: (a) the initial correction parameters (corresponding to the original image); (b) ALRLMS; (c) GALMS; (d) ours. Images in the first row show the static scene of Frame 3015 corrected by the different algorithms. Images in the second row show the scene of Frame 3095, in which the cup is covered by a uniform baffle. Images in the bottom row are segmented from the images to quantitatively evaluate the ghosting artifacts. The U_r of the bottom images from (a) to (d) are: 0.087%; 0.182%; 0.116%; 0.059%.


Fig. 9. The mean of the residual nonuniformity processed by our algorithm using different spatial thresholds T_s and temporal thresholds T_R. The window size of Ω is (a) 5 × 5, (b) 7 × 7, and (c) 9 × 9. The minimal residual nonuniformity is (a) 0.579%, (b) 0.569%, (c) 0.571%, when T_s = 12 and T_R = 4.

3015th frame and the 3095th frame corrected by the different methods are shown in Fig. 7. The images in the upper row of Fig. 7 are the corrected results of the static background. The images in the middle row are the corrected results when the uniform baffle blocks the optical path. The area of the uniform baffle in the corrected images are expected to be uniform and have signal gray value. However, the inverse contours of the cup can be seen in the middle row of Fig. 7(b) and (c). These inverse contours are the ghosting artifacts. They are introduced since the static scene ‘‘burns’’ in the correction parameters. The ghosting artifacts in Fig. 7(c) are less serious than those in Fig. 7(b), since the temporal gate in Hardie’s method works. However, ghosting artifacts still exist as the residuals of the static background slowly accumulate. All the pixels where the ghosting artifacts are introduced do not have high correlation with their neighborhoods. The spatial correlation detection in our method can detects these pixels. Then once the updates of the NUC parameters of these pixels are gated, the risk of generating ghosting artifacts is reduced. The spatial correlation detection subblock in the proposed method can detect these pixels. Therefore, no ghosting artifact is generated by our method, as shown in Fig. 7(d). To obtain the quantized evaluation of the ghosting artifacts, the same local regions of the images, where the ghosting artifacts are generated, are segmented from the original images, as show in the bottom row of Fig. 7. They are expected to be uniform gray images and the residual nonuniformity is expected to be 0. The residual nonuniformity U r of the uniform segments image indicates the proposed deghosting method performs best. We also collect all the local regions in the infrared sequence and record their residual nonuniformity U r curves. 
The reason we choose only the uniform area is that a uniform scene is well suited to evaluating ghosting artifacts. The U_r curves generated by the different methods are shown in Fig. 8. The curves of ALRLMS and GALMS reduce fixed pattern noise in the first half of the infrared sequence and reach a low U_r. The curves then rise in the second half as ghosting artifacts are introduced, so the U_r curves of ALRLMS and GALMS become poorer than that of the original images. Their convergence performance is poor, and ALRLMS performs worst in deghosting. The proposed algorithm obtains the best deghosting performance: it reduces the residual nonuniformity from 0.09% to 0.06% without ghosting artifacts.

The performance of our method obviously depends on the spatial threshold T_s, the temporal threshold T_R, and the window size of X. In this section, further experiments are carried out on Video 2 to analyze the choice of thresholds and window size. The results are shown in Fig. 9. The horizontal and vertical coordinates represent the spatial threshold T_s and the temporal threshold T_R, respectively. Each pixel value is the mean of the residual nonuniformity U_r of the NUC images in the infrared image sequence obtained with the corresponding T_s and T_R. Overall, the mean residual nonuniformity U_r forms a bowl shape over the T_s–T_R plane and thus has a minimum. Across all of the images in Fig. 9, the best U_r is obtained when T_s = 12 and T_R = 4. Therefore, T_s = 12 and T_R = 4 are the ideal empirical thresholds of our algorithm for the FPA we used. Furthermore, comparing the values at the bottom of the color bars of the three images, Fig. 9(b) has the minimum among the three. All of the above indicates that the best NUC performance is obtained when T_s = 12, T_R = 4, and the size of X is set to 7 × 7.
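The threshold selection above is an exhaustive grid search over (T_s, T_R). A minimal sketch follows; `mean_residual_nonuniformity` is a hypothetical stand-in for running the full correction pipeline and averaging U_r over the sequence (here replaced by a toy bowl-shaped surrogate with its minimum near (12, 4), mimicking the error surface reported in Fig. 9).

```python
import numpy as np

def mean_residual_nonuniformity(frames, ts, tr, win):
    """Placeholder: would run the NUC on `frames` with thresholds (ts, tr)
    and window size `win`, returning the mean U_r over corrected frames.
    Toy surrogate: a bowl with its minimum at (12, 4)."""
    return 0.569 + 0.001 * ((ts - 12) ** 2 + (tr - 4) ** 2)

def best_thresholds(frames, ts_range, tr_range, win):
    """Exhaustive grid search for the (T_s, T_R) pair minimizing mean U_r."""
    grid = {(ts, tr): mean_residual_nonuniformity(frames, ts, tr, win)
            for ts in ts_range for tr in tr_range}
    return min(grid, key=grid.get)

print(best_thresholds(None, range(6, 19), range(1, 9), win=7))  # → (12, 4)
```

The outer comparison across panels of Fig. 9 is the same search repeated for each candidate window size (5 × 5, 7 × 7, 9 × 9), keeping the size with the lowest minimum.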

4. Conclusion

In this paper, a temporal and spatial gated deghosting technique for LMS scene based nonuniformity correction was proposed. Both spatial correlation detection and temporal motion detection are used to gate the update of the NUC parameters. The additional computational complexity required is equal to the product of the window size of the spatial correlation detection and the resolution of the images. Therefore, without sacrificing too much efficiency, our technique can reduce the FPN robustly with few ghosting artifacts, especially when the background stays unchanged. The experimental results demonstrate that the deghosting performance of the proposed method is superior to that of other deghosting methods. Only limited additional computational and storage resources are required compared with other algorithms, so our algorithm is suitable for embedded systems.
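The combined gating can be sketched as one LMS step per frame, as below. This is a minimal sketch, not the paper's exact update rule: the local-mean desired signal, the step size mu, and the exact gate forms are assumptions, with the thresholds set to the empirical values T_s = 12 and T_R = 4 and a 7 × 7 window. The per-pixel work is dominated by the window sum, matching the stated complexity of window size times image resolution.

```python
import numpy as np

def gated_lms_update(gain, offset, frame, prev_frame,
                     t_s=12.0, t_r=4.0, win=7, mu=1e-4):
    """One gated LMS step: a pixel's correction parameters are updated only
    when (a) its error against the local mean is small, i.e. it lies in a
    smooth area (spatial gate), and (b) sufficient inter-frame motion is
    present (temporal gate)."""
    corrected = gain * frame + offset
    pad = win // 2
    padded = np.pad(corrected, pad, mode="edge")
    # Local mean over the win x win neighborhood: the desired smooth image.
    local_mean = np.zeros_like(corrected)
    for dy in range(win):
        for dx in range(win):
            local_mean += padded[dy:dy + corrected.shape[0],
                                 dx:dx + corrected.shape[1]]
    local_mean /= win * win
    error = corrected - local_mean
    spatial_gate = np.abs(error) < t_s               # smooth areas only
    temporal_gate = np.abs(frame - prev_frame) > t_r  # moving pixels only
    gate = spatial_gate & temporal_gate
    # Gradient-descent step on the squared error, applied only where gated.
    gain = np.where(gate, gain - mu * error * frame, gain)
    offset = np.where(gate, offset - mu * error, offset)
    return gain, offset
```

On a static background the temporal gate blocks every pixel, so the parameters stay frozen and no residual accumulates, which is the mechanism that prevents ghosting in the baffle experiment.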

Conflict of interest

There is no conflict of interest.

Acknowledgements

This work was supported in part by the Ph.D. Programs Foundation of the Ministry of Education of China under Grant 20120142110088, the Postdoctoral Science Foundation of China under Grant 2015M572194, the National Natural Science Foundation of China under Grant 61275098, and the Natural Science Foundation of Hubei Province of China under Grant 2011CDB027.


Appendix A. Supplementary material

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.infrared.2015.06.001.

References

[1] D.A. Scribner, M.R. Kruer, J.M. Killiany, Infrared focal plane array technology, Proc. IEEE 79 (1991) 66–85.
[2] R.C. Hardie, F. Baxley, B. Brys, P. Hytla, Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm, Opt. Express 17 (2009) 918–933.
[3] A. Milton, F. Barone, M. Kruer, Influence of nonuniformity on infrared focal plane array performance, Opt. Eng. 24 (1985) 855–856.
[4] O. Riou, S. Berrebi, P. Bremond, Non uniformity correction and thermal drift compensation of thermal infrared camera, in: Proc. of SPIE, vol. 5405, 2004, pp. 294–302.
[5] R.E. Vera, I.S. Torres, Ghosting reduction in adaptive nonuniformity correction of infrared focal-plane array image sequences, in: Proc. 2003 International Conference on Image Processing (ICIP 2003), vol. 2, IEEE, 2003, pp. II-1001.
[6] A. Rossi, M. Diani, G. Corsini, Bilateral filter-based adaptive nonuniformity correction for infrared focal-plane array systems, Opt. Eng. 49 (2010) 057003–057013.
[7] D.L. Perry, E.L. Dereniak, Linear theory of nonuniformity correction in infrared staring sensors, Opt. Eng. 32 (1993) 1854–1859.
[8] Y.M. Chiang, J.G. Harris, An analog integrated circuit for continuous-time gain and offset calibration of sensor arrays, Analog Integr. Circ. Sig. Process 12 (1997) 231–238.
[9] B.M. Ratliff, M.M. Hayat, R.C. Hardie, An algebraic algorithm for nonuniformity correction in focal-plane arrays, JOSA A 19 (2002) 1737–1747.
[10] D.R. Pipa, E.A. Silva, C.L. Pagliari, P.S. Diniz, Recursive algorithms for bias and gain nonuniformity correction in infrared videos, IEEE Trans. Image Process. 21 (2012) 4758–4769.
[11] C. Zuo, Q. Chen, G. Gu, X. Sui, Scene-based nonuniformity correction algorithm based on interframe registration, JOSA A 28 (2011) 1164–1176.
[12] M.M. Hayat, S.N. Torres, E. Armstrong, S.C. Cain, B. Yasuda, Statistical algorithm for nonuniformity correction in focal-plane arrays, Appl. Opt. 38 (1999) 772–780.
[13] R.C. Hardie, M.M. Hayat, E. Armstrong, B. Yasuda, Scene-based nonuniformity correction with video sequences and registration, Appl. Opt. 39 (2000) 1241–1250.
[14] S.N. Torres, M.M. Hayat, Kalman filtering for adaptive nonuniformity correction in infrared focal-plane arrays, JOSA A 20 (2003) 470–480.
[15] H. Zhou, H. Qin, Y. Jian, B. Wang, S. Liu, Improved Kalman-filter nonuniformity correction algorithm for infrared focal plane arrays, Infrared Phys. Technol. 51 (2008) 528–531.
[16] Y. Tendero, J. Gilles, ADMIRE: a locally adaptive single-image, non-uniformity correction and denoising algorithm: application to uncooled IR camera, in: SPIE Defense, Security, and Sensing, Proc. SPIE, International Society for Optics and Photonics, 2012, pp. 83531O1–83531O18.
[17] J. Ma, J. Zhao, J. Tian, X. Bai, Z. Tu, Regularized vector field learning with sparse approximation for mismatch removal, Pattern Recogn. 46 (2013) 3519–3532.
[18] J. Ma, J. Zhao, J. Tian, A.L. Yuille, Z. Tu, Robust point matching via vector field consensus, IEEE Trans. Image Process. 23 (2014) 1706–1721.
[19] J. Ma, J. Zhao, Y. Ma, J. Tian, Non-rigid visible and infrared face registration via regularized Gaussian fields criterion, Pattern Recogn. 48 (2015) 772–784.
[20] J. Ma, W. Qiu, J. Zhao, Y. Ma, A.L. Yuille, Z. Tu, Robust L2E estimation of transformation for non-rigid registration, IEEE Trans. Signal Process. 63 (2015) 1115–1129.
[21] H. Zhou, H. Qin, R. Lai, B. Wang, L. Bai, Nonuniformity correction algorithm based on adaptive filter for infrared focal plane arrays, Infrared Phys. Technol. 53 (2010) 295–299.
[22] S.N. Torres, E.M. Vera, R.A. Reeves, S.K. Sobarzo, Adaptive scene-based nonuniformity correction method for infrared-focal plane arrays, in: AeroSense 2003, International Society for Optics and Photonics, 2003, pp. 130–139.