Infrared Physics & Technology 69 (2015) 198–205
Interframe phase-correlated registration scene-based nonuniformity correction technology

Ning Liu (corresponding author), Jun Xie
Nanjing XiaoZhuang University, College of Physics & Electronics, Nanjing, Jiangsu Province 211171, China
Highlights
- Interframe phase-correlated registration is proposed to calculate the overlapping area more precisely.
- A new gain coefficient convergent method is proposed.
- The algorithm works regardless of the level of the nonuniformity.
- The ‘‘ghost effect’’ is well controlled because of the precise registration.
Article info
Article history: Received 11 August 2014. Available online 14 January 2015.
Keywords: Phase-correlated; Scene-based; Nonuniformity correction; Interframes
Abstract
In this paper, we propose an interframe phase-correlated registration scene-based nonuniformity correction technology. This technology determines the precise overlapping area of two neighboring frames by calculating the correlated phase information between them. Common registration algorithms use the scene motion information to calculate the relative displacement of neighboring frames and thereby determine the overlapping area; this approach can be disturbed by the level of nonuniformity, causing registration errors and, in turn, negative consequences for the correction process. Our technology effectively overcomes this problem and makes the registration insensitive to the level of nonuniformity. We also adopt a new gain coefficient convergent method, proposed in our latest study, to finish the correction. The whole technology performs well. Detailed analysis, images and flow charts of this technology are also provided.
© 2015 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.infrared.2015.01.004
1350-4495/© 2015 Elsevier B.V. All rights reserved.

1. Introduction

The scene-based nonuniformity correction (SBNUC) has been studied worldwide in recent years, and many SBNUC algorithms have been proposed [1–4], such as the least-mean-square (LMS) algorithm, the constant statistics (CS) algorithm and the interframe registration algorithm [4–7]. These algorithms have their own advantages and disadvantages. For example, the LMS algorithm is effective against high-frequency nonuniformity (NU) but ineffective against low-frequency NU. Moreover, when conducting the LMS, the scene motion must be continuous; otherwise it causes serious ‘‘ghost effects’’ [8–13]. It also has the disadvantage of low correction speed. The CS algorithm greatly improves the correction speed and the correction of low-frequency NU, but it requires that the scene remain statistically similar during the correction process; otherwise the deviation calculation of the moving frames will fail, causing the malfunction of this algorithm [14–18]. Since 2011, the
interframe registration SBNUC has become the most effective method of correcting both high-frequency and low-frequency NU [20–23,25–27]. Meanwhile, the interframe registration SBNUC has the advantage of speed: it takes only dozens of frames to finish the correction process. However, it also has some disadvantages. For one thing, because the pixel values of the infrared focal plane array (IRFPA) are integers, the calculated displacement between frames can be off by several pixels. For another, this displacement error can lead to serious edge residuals when doing the interframe registration. Therefore, when the correction coefficients are converged, the edge residuals will cause the malfunction of this algorithm. Furthermore, the level of NU affects the accuracy of interframe registration. Considering all the situations mentioned above, we propose a new interframe phase-correlated registration SBNUC algorithm to cover most of these defects. This algorithm determines the exact overlapping scene of two neighboring frames by calculating the phase correlation information without considering
Fig. 1. Demonstration of relative motion of neighboring frames. (a) and (b) Neighboring frames with certain motion. (c) Scene motion peak and NU peak of the calculation of relative motion.
Fig. 2. Reestablishment of the NU peak and the scene motion peak.
Fig. 3. Scene motion peak leftover.
the level of NU, rather than calculating the displacement directly. Thus, we can subtract the neighboring frames so that only the scene information cancels, leaving the NU to be corrected. Since the NU is the only leftover of the subtraction, it can be corrected effectively using the coefficient convergence method afterwards. According to our study, this algorithm can accomplish the image registration regardless of the level of NU, which makes it very effective. It has great robustness and overcomes the malfunction problem that exists in the interframe registration algorithm under a strong level of NU. This paper is organized as follows: in Section 2, the correction model is described and the process of this algorithm is presented in detail. In Section 3, a series of analyses is carried out to determine the performance of this algorithm. In Section 4, we draw conclusions and discuss prospects for future study.
2. Discussion

2.1. Correction model of nonuniformity

Generally, the relationship between the signal response and the incident IR photon flux is nonlinear, especially when an FPA operates in a wide dynamic incident flux range [19]. For SBNUC, to simplify the problem formulation, the photo responses of the
Fig. 4. Coefficient elimination besides the peak.
Fig. 6. Subtraction with only NU leftover.
individual detectors in an FPA are commonly approximated using a linear irradiance-voltage model and their output is given by [23,24]
Yn(i, j) = gn(i, j) · Xn(i, j) + on(i, j)    (1)
The subscript n is the frame index. gn(i, j) and on(i, j) are respectively the real gain and offset of the (i, j)th detector. Xn(i, j) stands for real incident IR photon flux collected by the respective detector. This model is reasonable especially for some SBNUC methods with a fast convergence rate, because during a short period of time, the temperature of an object can be ensured to remain in a small range so as to satisfy the linear response model. We apply a linear mapping to each observed pixel value to provide an estimate of the true scene value so that the detectors appear to be performing uniformly.
Xn(i, j) = wn(i, j) · Yn(i, j) + bn(i, j)    (2)
Here wn(i, j) and bn(i, j) are respectively the NUC gain and offset of the linear correction model of the (i, j)th detector. Their relations with the real gain and offset can be represented by
wn(i, j) = 1 / gn(i, j)    (3)

bn(i, j) = −on(i, j) / gn(i, j)    (4)
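As a concrete illustration of the linear model of Eqs. (1)–(4), the following sketch (hypothetical values; NumPy assumed, function names ours) simulates a small FPA with per-pixel gain and offset NU and inverts it with the linear correction mapping:

```python
import numpy as np

def simulate_detector(X, g, o):
    # Eq. (1): observed detector output under the linear irradiance-voltage model
    return g * X + o

def correct(Y, g, o):
    # Eqs. (2)-(4): invert the model with w = 1/g and b = -o/g
    w = 1.0 / g
    b = -o / g
    return w * Y + b

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(4, 4))   # true incident flux
g = rng.normal(1.0, 0.05, size=(4, 4))   # per-pixel gain nonuniformity
o = rng.normal(0.0, 0.02, size=(4, 4))   # per-pixel offset nonuniformity

Y = simulate_detector(X, g, o)
X_hat = correct(Y, g, o)
assert np.allclose(X_hat, X)             # exact recovery with ideal coefficients
```

With ideal coefficient estimates the inversion is exact; the SBNUC problem is precisely that gn and on are unknown and must be converged from the scene.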
Thus, we can conduct the NUC using Eq. (2) once we acquire ideal estimates of wn(i, j) and bn(i, j), or of gn(i, j) and on(i, j).

2.2. Phase correlated registration

Consider two neighboring frames f1(x, y) and f2(x, y), which both contain a certain level of NU. We first obtain their relative translation by calculating their normalized cross-power spectrum [24,27]:
ĉ(l, t) = F2(l, t)F1*(l, t) / |F2(l, t)F1*(l, t)|    (5)
where the asterisk denotes the complex conjugation, F1(l, t) and F2(l, t) are the Fourier transforms of f1(x, y) and f2(x, y) respectively, and (l, t) are the Fourier domain coordinates. The neighboring frames f1(x, y) and f2(x, y) have the following features:
1. Since one frame is the subsequent motion of the other, the two frames have an overlapping area which contains the same scene information.
2. The NU barely changes during the scene motion. See Fig. 1.
From these two features we can see that when we calculate the relative translation, there will be two peaks in the Fourier domain. One peak is caused by the scene motion, which shows the similarity of the two frames; the other is caused by the NU, because the NU is ‘‘too similar’’ in the two frames. These two peaks are pointed out separately in Fig. 1. The previously proposed interframe registration algorithm then uses the scene motion peak to determine the displacement of the two frames by locating the phase ramp e^(−2πj(l·x0 + t·y0)) and taking (x0, y0) as the displacement. Sometimes this works well, but on some occasions the scene motion peak will be mistaken, depending on the level of the NU. In that case, the displacement will be calculated incorrectly. For this reason, we propose the phase correlated registration method as follows. We intend to create a Fourier domain which contains only the scene motion peak, so that the NU will not affect the calculation. From Fig. 1(c), we can see that the location of the NU peak is almost in the middle of the Fourier domain, while the scene motion peak changes its location once the scene has a certain displacement. We conduct the following calculation to reestablish the two peaks:
c̃(a, b) = F[ĉ(l, t)] · e^(2πj(a·l0 + b·t0))    (6)
Fig. 5. Demonstration of phase correlated registration. (a) Reference image f1(x, y), (b) shifted image f2(x, y), and (c) registered image f2′(x, y).
Fig. 7. Registration error comparison. (a) and (b) Neighboring frames without NU. (c) Error image of phase correlated registration. (d) Error image of interframe registration. (e) and (f) Neighboring frames with NU. (g) Error image of phase correlated registration. (h) Error image of interframe registration.
The aim of the exponential term is to shift the peaks to the center of the Fourier domain. The result of Eq. (6) is shown in Fig. 2. During the registration, we do not want the whole process to be disturbed by the NU, so we create a window function to suppress the NU peak. As mentioned, the location of the NU peak does not change from frame to frame, because the shape of the NU is stable in the frame stream. The window function is determined as follows:
h(a, b) = 0, if M/2 − d0 ≤ a ≤ M/2 + d0 and N/2 − d1 ≤ b ≤ N/2 + d1; h(a, b) = 1, otherwise    (7)

c̃new(a, b) = h(a, b) · c̃(a, b)    (8)
where M and N are the dimensions of a frame, and d0 and d1 are the area thresholds that determine the region where the NU peak exists. With the window function, we can eliminate the NU peak from the Fourier domain, leaving only the scene motion peak. Since the scene motion peak is much stronger than the other coefficients in the Fourier domain, we eliminate those other coefficients as well, giving the result shown in Fig. 3. When we conduct the phase correlated registration, we only need the information shown in Fig. 4; that is to say, we need only the phase information of Fig. 4 to determine the registration. Thus, we conduct the following calculations:
c̃′new(l, t) = F⁻¹[c̃new(a, b)]    (9)

Φphase(l, t) = tan⁻¹[Im(c̃′new(l, t)) / Re(c̃′new(l, t))]    (10)

f2′(x, y) = F⁻¹[F2(l, t) · e^(−iΦphase(l, t))]    (11)
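The pipeline of Eqs. (5)–(11) can be sketched in NumPy as follows. This is a minimal illustration under the assumption of a pure circular translation between frames; the function name, the default window half-widths `d0`, `d1`, and the retention of only the single strongest coefficient are our reading of the windowing and elimination steps:

```python
import numpy as np

def phase_correlated_register(f1, f2, d0=1, d1=1):
    """Register f2 onto f1 following Eqs. (5)-(11) (sketch)."""
    M, N = f1.shape
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)

    # Eq. (5): normalized cross-power spectrum
    cross = F2 * np.conj(F1)
    c_hat = cross / (np.abs(cross) + 1e-12)

    # Eq. (6): transform to the correlation domain; fftshift centers the peaks
    C = np.fft.fftshift(np.fft.fft2(c_hat))

    # Eqs. (7)-(8): window out the NU peak around the center...
    h = np.ones((M, N))
    h[M // 2 - d0 : M // 2 + d0 + 1, N // 2 - d1 : N // 2 + d1 + 1] = 0.0
    C_new = C * h
    # ...and eliminate all coefficients except the dominant scene motion peak
    keep = np.zeros((M, N))
    keep.flat[np.argmax(np.abs(C_new))] = 1.0
    C_new = C_new * keep

    # Eqs. (9)-(10): back-transform and extract the correlated phase
    C_back = np.fft.ifft2(np.fft.ifftshift(C_new))
    phi = np.angle(C_back)

    # Eq. (11): shift f2 by the correlated phase (sign per our FFT convention)
    return np.real(np.fft.ifft2(F2 * np.exp(-1j * phi)))
```

For a circularly shifted test image, the returned frame coincides with the reference frame; real imagery with boundary effects and NU would leave a residual, which is what the subsequent correction step consumes.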
where Φphase(l, t) is the correlated phase that we are seeking. The total image shifting information characterized by the scene motion peak can be acquired precisely from Φphase(l, t), and since no NU information exists in Φphase(l, t), the registration accuracy is good. We then obtain the final shifted image f2′(x, y) from Φphase(l, t) and F2(l, t), as given in Eq. (11).

We use the image data shown in Fig. 1(a) and (b) to conduct the whole phase correlated calculation using Eqs. (5)–(11). The registered image f2′(x, y) is shown in Fig. 5. It can be seen from Fig. 5 that, while the neighboring frames have a certain motion, the phase correlated registration algorithm can locate the overlapping area. We then give a demonstration of the subtraction of Fig. 5(a) and (c) to show the registration result. In Fig. 6, the subtraction means that the error matrix is obtained by subtracting the overlapping areas of the neighboring frames. We can see that, using phase correlated registration, the error matrix contains only a certain NU leftover; there is almost no scene information or edge residual in Fig. 6. The detailed analysis will be given in Section 3.

2.3. Nonuniformity correction

After the error matrix of the neighboring frames has been determined, we can continue updating the correction coefficients in the overlapping area of these two frames. The interframe registration NUC algorithm and the LMS algorithm both use the steepest descent method to correct the gain and offset coefficients. The high-frequency NU can be corrected rapidly by this method in the case of precise registration. However, the low-frequency NU is difficult to correct at the same time. This is because the high-frequency NU usually exists in the offset coefficients while the low-frequency NU usually exists in the gain coefficients. When similar equations are used to correct the gain and offset coefficients, the correction of the offset coefficients is faster than that of the gain coefficients; when the high-frequency NU has been corrected, the low-frequency NU still remains in the image. In our research, we use the following new method to correct the gain coefficient.

Suppose the two neighboring frames have an overlapping area in a scene; that is to say, the scene radiation in this area should theoretically be the same, and after the NUC the neighboring frames should have the same output in the overlapping area. These assumptions can be written as the following equations [28]:

y1 = k1·x1 + b1,  y2 = k2·x2 + b2    (12)

Here, x1 and x2 are the same scene in the overlapping area of the two neighboring frames, y1 and y2 are the outputs of the neighboring frames in the overlapping area, and k1, k2, b1, and b2 are the gain and offset coefficients of the two neighboring frames. Setting y1 = y2, the following can be deduced easily:

k2 = k1 − (b2 − b1) / x̂    (13)

Here, x̂ is the average of x1 and x2. The correction of the offsets b1 and b2 in Eq. (13) must be carried out before that of the gains k1 and k2, which can be easily realized. Suppose the image represented by y2 is displaced towards the image represented by y1, and the subtraction result over the overlapping area of these two frames is defined as the error matrix ERR. Then the offset correction can be given as follows:

bn+1(i, j) = bn(i, j) + α·ERR(i, j), when pixel (i, j) is in the overlapping area; bn+1(i, j) = bn(i, j), otherwise    (14)

From Eq. (13), we can see that the gain adjustment can be completed along with the offset adjustment. We complete the equation by adding a convergence step to it, and finally we have the following equation:

kn+1(i, j) = kn(i, j) − α·[bn+1(i, j) − bn(i, j)] / x̂(i, j), when pixel (i, j) is in the overlapping area; kn+1(i, j) = kn(i, j), otherwise    (15)

Under this coefficient convergent method, the convergence speed and performance are both improved. The detailed analysis will be given in Section 3.

3. Performance analysis

Table 1
Standard deviation (STD) comparison of phase correlated registration and interframe registration.

         Without NU                    With NU
STD      Fig. 7(c)    Fig. 7(d)       Fig. 7(g)    Fig. 7(h)
         0.0562       0.1624          0.2646       0.3780
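Before analyzing performance, the coefficient update rules of Eqs. (14) and (15) can be sketched as follows. This is a minimal NumPy sketch; the function name is hypothetical, `err` is the subtraction error matrix over the registered frames, and `x_hat` stands for the per-pixel scene estimate x̂(i, j) (e.g. the mean of the two aligned frames):

```python
import numpy as np

def update_coefficients(k, b, err, x_hat, overlap, alpha=0.06):
    """One convergence step of Eqs. (14)-(15), applied only on the overlap."""
    # Eq. (14): offset convergence driven by the error matrix
    b_next = np.where(overlap, b + alpha * err, b)
    # Eq. (15): gain convergence driven by the offset increment
    k_next = np.where(overlap, k - alpha * (b_next - b) / x_hat, k)
    return k_next, b_next
```

Pixels outside the overlapping area keep their previous coefficients, so the correction propagates over the frame stream as the scene moves.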
In this section, we mainly focus on two aspects of performance analysis: 1. the registration error of the phase correlated registration versus the interframe registration; 2. the NUC performance of the two algorithms.

3.1. Registration error analysis

The main difference between the phase correlated registration and the interframe registration is that the phase correlated registration does not calculate the relative displacement of the neighboring frames directly. Instead, we determine the correlated phase information to find the exact overlapping area of the neighboring frames. We have chosen two neighboring frames and used the two algorithms to calculate the overlapping areas; the results are shown in Fig. 7. First, we apply the two algorithms to a pair of neighboring frames without NU to test the registration performance. The error image is obtained by subtracting the
Fig. 8. NUC performance after 50 frames. (a) Raw image with NU. (b) NUC performance with phase correlated registration algorithm. (c) NUC performance with interframe registration algorithm.
Fig. 9. NUC performance after the whole 150-frame stream. (a) Raw image with NU. (b) NUC performance with the phase correlated registration algorithm. (c) NUC performance with the interframe registration algorithm.
Fig. 10. RMSE comparison of phase correlated registration (PCR) and interframe registration (IR).
Fig. 11. NUC performance comparison with phase correlated registration algorithm using different convergence step a.
calculated overlapping area. It can be seen from Fig. 7(c) that, once the overlapping area has been calculated precisely, very few edge contours are left in the error image. But in Fig. 7(d), some edge contours remain. This is because the relative displacement calculated by the interframe registration may deviate slightly from the actual displacement value; when doing the subtraction, the overlapping areas of the neighboring frames do not coincide precisely, and edge contours emerge. The same situation occurs in Fig. 7(g) and (h). In Fig. 7(e) and (f), we add a certain level of NU to Fig. 7(a) and (b) and redo the registration with the two algorithms. As we mentioned in Section 2, the phase correlated registration is not affected by the level of NU, so after the subtraction only NU is left in the error image of Fig. 7(g). However, in Fig. 7(h), edge contours are left in the error image; meanwhile, the NU leftover has also increased because of the registration error. So it is reasonable to believe that we can get better convergence performance with the result of the phase correlated registration. We now focus on the overlapping areas of Fig. 7(c), (d), (g) and (h). The standard deviations (STD) of the error images have been calculated to show the numerical results in Table 1. Since every error image is obtained by subtracting two overlapping areas, its STD will be very small if the registration is precise.
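The STD measure used in Table 1 can be computed directly from the subtraction error restricted to the overlapping area; a small sketch (function name ours, `overlap` a boolean mask of the overlapping pixels):

```python
import numpy as np

def registration_std(ref, registered, overlap):
    # STD of the error image over the overlapping area (cf. Table 1):
    # precise registration leaves little scene/edge residue, hence a small STD
    err = (ref - registered)[overlap]
    return float(np.std(err))
```

A perfectly registered, NU-free pair yields an STD of zero; residual edge contours or NU drive it up, which is the ordering Table 1 reports.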
Table 1 confirms the analysis of Fig. 7: although the STD of the error image calculated after the interframe registration is small, that of the phase correlated registration is smaller still.

3.2. NUC performance analysis

After the registration has been done correctly, we can continue the NUC process. As mentioned above, the more precise the registration is, the fewer edge residuals are left on the subtraction image, which leads to better NUC performance. In this section, we analyze the NUC performance of the phase correlated registration algorithm and the interframe registration algorithm to demonstrate the advance of our study. Besides, we use the RMSE index to show the convergence performance of the two algorithms. According to Eqs. (14) and (15), the convergence step α plays an important role in the NUC. Thus, we set α to 0.06 for both our algorithm and the interframe registration. We then test the performance of the two algorithms on the same frame stream with a strong level of NU, and the results are shown below. In Fig. 8, we can see that the NU converges rapidly after 50 frames, which means both algorithms are effective in correcting the NU. But in Fig. 8(c), we can still spot some stripes
left on the image, pointed out with the red arrow. However, the stripes are hardly noticeable in Fig. 8(b). According to the discussion in Section 2, the phase correlated registration has better registration performance: there is only NU left in the error matrix, as shown in Fig. 7. When conducting the NUC process, the NU is therefore corrected more precisely under the phase correlated registration algorithm. We keep the frame stream going, and the results are shown below. After the whole 150 frames' NUC, we can see that the raw NU has been corrected by both algorithms. However, we can spot some annoying ‘‘ghost effect’’ in Fig. 9(c). This is because, if a registration error occurs when calculating the overlapping area determined by a chosen registration algorithm, the overlapping areas will not coincide precisely. If an edge in one frame is one or two pixels away from the corresponding edge in the other frame, the NUC will cause the ‘‘ghost effect’’. It is clear that the interframe registration has a certain registration error while the phase correlated registration has not. Next, we calculate the RMSE index to show the performance of the two algorithms. The RMSE is calculated as:
RMSE = √[ (1 / (M·N)) · Σi,j (X(i, j) − X̂(i, j))² ]    (16)
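Eq. (16) translates directly to code; in this sketch (function name ours) `X` is the raw frame, `X_hat` the corrected frame, and M, N the image dimensions:

```python
import numpy as np

def rmse(X, X_hat):
    # Eq. (16): root-mean-square difference between raw and corrected frames
    M, N = X.shape
    return float(np.sqrt(np.sum((X - X_hat) ** 2) / (M * N)))
```

Because the raw stream is uncorrected, the RMSE between raw and corrected frames grows as the correction takes hold, which is why the curves in Fig. 10 rise.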
where M and N are the dimensions of the images, X(i, j) is the raw value of pixel (i, j), and X̂(i, j) is the corrected value of pixel (i, j). In our study, the frame stream is captured directly by a thermal imager without any correction. Thus, if we use Eq. (16) to determine the RMSE index, the curve should have a rising trend. In Fig. 10, we can clearly see the advantage of phase correlated registration when doing the NUC. According to the analysis of Fig. 9, after 50 frames of NUC the NU has been corrected by both algorithms, but the RMSE index shows that the effect is better with the phase correlated registration algorithm. When the NUC goes through all 150 frames in the frame stream, the RMSE with phase correlated registration is also better than that with interframe registration, which explains Fig. 9(b) and (c). Usually, the convergence step α is set between 0.01 and 0.1, which is sufficient for NUC. If α is set too large, the NUC process becomes less stable; if α is set too small, the NUC process becomes less effective. The results are shown in Fig. 11. In Fig. 11, we can see that if the convergence step α is set to 0.15, the RMSE index becomes unstable, as the blue plots show; during the NUC, the correction result could then be less pleasing. If the convergence step α is set to 0.01, the RMSE index rises very slowly, and even at the end of the frame stream the algorithm has still not acquired a satisfying correction effect. Thus, as mentioned, the convergence step α should be set between 0.01 and 0.1, which is suitable for most applications.

4. Conclusion

In this paper, we propose a phase-correlated registration scene-based nonuniformity correction technology. This technology uses the correlated phase information between two neighboring frames to precisely determine their overlapping area. We shield the NU peak in the correlated phase information and focus on the scene motion peak in the Fourier domain.
Thus, the registration process is not affected by the level of NU, which makes the registration more precise. We then adopted a new nonuniformity convergence method, which uses the offset coefficient as the variable to correct the gain coefficient. The performance has been tested and analyzed in this paper. However, this technology still has some defects. The NU correction model is established in the Cartesian coordinate system, which
means that this technology may fail when dealing with rotation or zooming of the scene motion. Meanwhile, the whole calculation of this technology is based on the Fourier transform, which means it may be difficult for engineers to apply this technology to real-time hardware. All these defects should be conquered in future research.

Conflict of interest

There is no conflict of interest.

References

[1] D.A. Scribner, K.A. Sarkaday, J.T. Caulfield, M.R. Kruer, G. Katz, C.J. Gridley, Nonuniformity correction for staring IR focal plane arrays using scene-based techniques, SPIE 1308 (1990) 224–233.
[2] D.A. Scribner, K.A. Sarkaday, J.T. Caulfield, M.R. Kruer, Adaptive nonuniformity correction for IR focal plane arrays using neural networks, SPIE 1541 (1991) 100–109.
[3] D.A. Scribner, K.A. Sarkaday, J.T. Caulfield, M.R. Kruer, J.D. Hunt, M. Colbert, M. Descour, Adaptive retina-like preprocessing for imaging detector arrays, IEEE 93 (1993) 1955–1960.
[4] W.X. Qian, Q. Chen, G.H. Gu, Space low-pass and temporal high-pass nonuniformity correction algorithm, Opt. Rev. 17 (2010) 24–29.
[5] W.X. Qian, Q. Chen, G.H. Gu, Minimum mean square error method for stripe nonuniformity correction, Chin. Opt. Lett. 051103 (2011) 1–3.
[6] W.X. Qian, Q. Chen, J.Q. Bai, G.H. Gu, Adaptive convergence nonuniformity correction algorithm, Appl. Opt. 50 (2011) 1–10.
[7] E. Vera, S. Torres, Fast adaptive nonuniformity correction for infrared focal plane array detectors, Appl. Sig. Proc. 13 (2005) 1994–2004.
[8] S.N. Torres, E.M. Vera, R.A. Reeves, S.K. Sobarzo, Adaptive scene-based nonuniformity correction method for infrared-focal plane arrays, SPIE 5076 (2003) 130–139.
[9] R.E. Vera, I.S. Torres, Ghosting reduction adaptive non-uniformity correction of infrared focal-plane array image sequences, ICIP 7803 (2003) 8–11.
[10] A. Rossi, M. Diani, G. Corsini, A comparison of deghosting techniques in adaptive nonuniformity correction for IR focal-plane array systems, SPIE 7834 (2010) 1–10.
[11] Tianxu Zhang, Yan Shi, Edge-directed adaptive nonuniformity correction for staring infrared focal plane arrays, Opt. Eng. 45 (2006) 1–11.
[12] A. Rossi, M. Diani, G. Corsini, Temporal statistics de-ghosting for adaptive nonuniformity correction in infrared focal plane arrays, Electron. Lett. 46 (2006) 1–2.
[13] J.G. Harris, Yu-Ming Chiang, Nonuniformity correction of infrared image sequences using the constant-statistics constraint, IEEE 8 (1999) 1148–1151.
[14] J.G. Harris, Yu-Ming Chiang, Nonuniformity correction using the constant-statistics constraint: analog and digital implementations, SPIE 3061 (1997) 895–905.
[15] M.M. Hayat, S.N. Torres, E.E. Armstrong, B. Yasuda, Statistical algorithm for non-uniformity correction in focal-plane arrays, Appl. Opt. 38 (1999) 772–780.
[16] B.M. Ratliff, M.M. Hayat, An algebraic algorithm for nonuniformity correction in focal-plane arrays, J. Opt. Soc. Am. 19 (2002) 1737–1747.
[17] J.G. Harris, Yu-Ming Chiang, Minimizing the ‘ghosting’ artifact in scene-based nonuniformity correction, SPIE 3377 (1998) 106–113.
[18] B.M. Ratliff, M.M. Hayat, J.S. Tyo, Radiometrically accurate scene-based nonuniformity correction for array sensors, J. Opt. Soc. Am. A 20 (2003) 1890–1899.
[19] B.M. Ratliff, M.M. Hayat, J.S. Tyo, Generalized algebraic scene-based nonuniformity correction algorithm, J. Opt. Soc. Am. A 22 (2005) 239–250.
[20] C. Zuo, Q. Chen, G. Gu, X. Sui, Registration method for infrared images under conditions of fixed-pattern noise, Opt. Commun. 285 (2012) 2293–2302.
[21] C. Zuo, Q. Chen, G.H. Gu, X.B. Sui, Scene-based nonuniformity correction algorithm based on interframe registration, J. Opt. Soc. Am. A 28 (2011) 1164–1176.
[22] C. Zuo, Q. Chen, G.H. Gu, X.B. Sui, W.X. Qian, Scene-based nonuniformity correction method using multiscale constant statistics, Opt. Eng. 50 (2011).
[23] C. Zuo, Q.A. Chen, G.H. Gu, W.X. Qian, New temporal high-pass filter nonuniformity correction based on bilateral filter, Opt. Rev. 18 (2011) 197–202.
[24] Y.J. Liu, H. Zhu, Y.G. Zhao, Scene-based nonuniformity correction technique for infrared focal-plane arrays, Appl. Opt. 48 (2009) 2364–2372.
[25] S.C. Cain, M.M. Hayat, E.E. Armstrong, Projection-based image registration in the presence of fixed-pattern noise, IEEE 10 (2001) 1860–1872.
[26] Chao Zuo, Qian Chen, Gu Guohua, Xiubao Sui, Jianle Ren, Improved interframe registration based nonuniformity correction for focal plane arrays, Infrared Phys. Technol. 55 (2012) 263–269.
[27] Chao Zuo, Yuzhen Zhang, Qian Chen, Gu Guohua, Weixian Qian, Xiubao Sui, Jianle Ren, A two-frame approach for scene-based nonuniformity correction in array sensors, Infrared Phys. Technol. 60 (2013) 190–196.
[28] Ning Liu, Hang Qiu, A time-domain projection-based registration scene-based nonuniformity correction technology and its detailed hardware realization, Opt. Rev. 21 (1) (2014) 17–26.