Online adaptive computational ghost imaging


Heng Wu a,b, Ruizhou Wang a,c,∗, Zepeng Huang e, Huapan Xiao b, Jian Liang b, Daodang Wang b, Xiaobo Tian b, Tao Wang a,d, Lianglun Cheng a,d

a Guangdong Provincial Key Laboratory of Cyber-Physical System, School of Automation, Guangdong University of Technology, Guangzhou 510006, China
b College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
c School of Electro-mechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China
d School of Computer, Guangdong University of Technology, Guangzhou 510006, China
e Guangzhou Haige Communications Group Incorporated Company, China

∗ Corresponding author: Ruizhou Wang, [email protected]


Keywords: Computational ghost imaging; Hadamard transform; Image reconstruction techniques

Abstract

We propose an online adaptive computational ghost imaging (CGI) scheme that offers high-quality imaging with a small number of measurements. Different from the conventional second-order correlation CGI, the proposed scheme utilizes an online imaging method for image reconstruction, where the object image is recovered immediately after each light intensity is acquired. An adaptive measurement termination method is proposed to terminate the online imaging once the quality of the recovered image is satisfactory. The effectiveness of the proposed scheme is verified by numerical simulations and experiments, and its imaging performance is assessed by comparing the results with the state-of-the-art FWHTGI (fast Walsh–Hadamard transform CGI) and RDCGI (Russian dolls ordering CGI). The results demonstrate that the proposed scheme can achieve high-quality imaging with few measurements, and that its imaging quality is better than that of FWHTGI and RDCGI under low-measurement-number conditions. The proposed scheme may be helpful for high-resolution and real-time CGI.

1. Introduction

Ghost imaging (GI) is an intriguing imaging method for the nonlocal recovery of an object image by calculating the second-order correlations between a series of illumination patterns and the transmitted (or reflected) intensities of the object [1–6]. Given advantageous features such as a large imaging spectrum bandwidth (e.g., from the visible spectrum to the THz region) [7–9], imaging in harsh environments (e.g., turbulence) [10,11], and imaging with particles (e.g., atoms and electrons) [12,13], GI is considered an ideal alternative to conventional multipixel imaging techniques (e.g., CMOS and CCD cameras). In recent years, GI has found application in various fields, such as X-ray imaging [14], remote sensing [15] and multispectral imaging [16].

According to the number of light paths, GI can be categorized into two types: traditional GI (TGI) and computational GI (CGI). TGI consists of two light paths: the reference path and the object path. In CGI, however, the reference path is removed and its light intensity distribution is obtained by calculating the pre-known patterns [17]. Generally, in CGI the imaging quality depends on the number of illumination patterns. The major obstacle to the development of CGI is that achieving high-quality imaging is extremely time consuming because a huge number of illumination patterns is required.




Various solutions have been proposed to solve this problem in the past few years [18–20]. One of the most widely used solutions is to improve the hardware. For instance, a high-speed digital micro-mirror device (DMD) has been used to speed up the display of the illumination patterns [19,20]. Although the refresh rate of a high-speed DMD can reach 22 kHz or even higher, a projection lens is usually required in this type of CGI system, and the lens introduces aberrations in the recovered image [21]. As an alternative to the DMD, Xu et al. used an LED matrix for structured illumination in a CGI system, where the frame rate reached 1000 fps [22], but the resolution of this CGI system was lower than that of the DMD-based CGI. To further improve the imaging speed, Liu et al. developed a CGI system based on an optical fiber phased array and achieved a theoretical frame rate of over 100 kHz [23]. However, many challenges exist in the practical realization of this system, such as the initial phase compensation, fiber array fabrication, and precise pattern computation.

Another widely adopted solution is to apply advanced algorithms. Various algorithms have been proposed recently to reduce the imaging time and the number of illumination patterns [24–27]. Wang et al. described a fast and high-quality CGI scheme using the fast Walsh–Hadamard transform [24]. Although the scheme may decrease the reconstruction time, the number of illumination patterns is not reduced. Luo et al. used a Gram-Schmidt process to orthonormalize the illumination patterns [25], whereby the number of illumination patterns was reduced. However, this method is sensitive to noise. The solutions of improving the hardware and algorithms can boost the imaging speed and quality, and reduce the number of illumination patterns [18–27].



Most recently, Zhang et al. demonstrated a CGI scheme which featured a liquid crystal display (LCD) screen for displaying the illumination patterns [28]. The imaging setup was low cost and simple, but it required 131,072 measurements (illumination patterns) and about 4.05 h to restore a 256 × 256-pixel image.

In this study, we report an online adaptive CGI scheme which can recover high-quality images with fewer illumination patterns. The proposed scheme, named zigzag-scanning-based Hadamard CGI (ZHCGI), first utilizes a zigzag scanning method to construct a Hadamard pattern sequence for the structured illumination. It then uses a matrix decomposition method to display the sequence on the light modulation device (LMD). Instead of using a second-order correlation CGI algorithm, the proposed scheme achieves image reconstruction with an online correlation algorithm. Here, "online" means that the object image is retrieved immediately after the temporal light intensity signal is captured, without the need for post-processing or offline reconstruction. To stop the online imaging, an adaptive measurement termination method is proposed. By combining the zigzag scanning method, the online correlation algorithm and the adaptive measurement termination method, the number of illumination patterns is reduced and high-quality images can be obtained.

2. Model and method

The schematic of the proposed scheme is shown in Fig. 1(a). The imaging system is composed of an LMD, a test object, a collection lens, a bucket detector (BD) and a personal computer (PC). The LMD, which is controlled by the PC, projects the Hadamard patterns Si one by one onto the object. The transmitted light is converged by the collection lens and the light intensity Ii is recorded by the BD, where i = 1, 2, ⋯, 2K, K is a constant and 2K is the total number of measurements.

Fig. 1. (a) Schematic of the imaging system. The transmitted light is converged onto the BD using lens (L). The PC controls the LMD and the BD, and reconstructs the object image. (b) Generation process of the index arrays by the zigzag order, (c) default order and (d) zigzag order.

The object image (GI) is restored by the second-order correlation algorithm between Si and Ii, expressed as GI = (1/2K) Σ_{i=1}^{2K} (Ii − ⟨I⟩) Si, where ⟨I⟩ = (1/2K) Σ_{i=1}^{2K} Ii [5]. The ith pattern Si is obtained from the Walsh–Hadamard transform matrix (WHTM), as shown in Fig. 1(b). The WHTM is given by

WHTM = wht(H_K),    (1)

where wht() denotes the Walsh transform, H_K = H_2 ⊗ H_{K/2} is the Hadamard matrix (HM), H_2 = [1, 1; 1, −1], K = 2^k, ⊗ is the Kronecker product operator, and k is a positive integer with k > 2. Note that the column and row numbers of the WHTM and HM are both equal to K, and the HM is a symmetric square matrix whose rows (columns) are mutually orthogonal with entries ±1 [24,26].

Zigzag scanning shows great advantages in Hadamard-pattern-based single-pixel imaging [21,29]. Therefore, the zigzag scanning method is used to create the Hadamard pattern sequence in this paper. In Ref. [29], Ma et al. proposed a zigzag scanning ordering of a four-dimensional Walsh basis for single-pixel imaging and explained the theory in detail. To create the Hadamard pattern sequence, a four-dimensional Walsh vector matrix is built [29]. Different from Ma's method, we propose a zigzag-scanning-based method which constructs the Hadamard pattern sequence from a two-dimensional Walsh basis matrix. As a consequence, the efficiency of the Hadamard pattern sequence construction is greatly improved. As exhibited in Fig. 1(b), the WHTM is divided into multiple small matrices fi with a size of √K × √K, and the small matrices are assigned index numbers from 1 to K by rows. Here, fi = fi(x, y), where x and y are the pixel coordinates. After that, a pattern sequence F = [f1, f2, ⋯, fK] is built from the small matrices fi using the index numbers, which are stored in the array shown in Fig. 1(c). Then the small matrices are scanned in a zigzag order, as indicated by the red arrow line in Fig. 1(b), and a new index array is formed from the index numbers along the scanning line, as described in Fig. 1(d). Using the new index array, the pattern sequence F is reordered and written as G = [g1, g2, ⋯, gK], where gi = gi(x, y).

In the pattern sequence G, the values of each pattern gi are +1 and −1. However, the LMD can only display an image whose values are in the range of [0, 1] or [0, 255]. To solve this problem, a matrix decomposition gi = gi+ − gi− is introduced, where gi− = 1 − gi+, gi+ = (gi + 1)/2 and gi± = gi±(x, y). Since gi is composed only of +1 and −1, the values of gi± are equal to 1 or 0. Consequently, the Hadamard pattern sequence S, which can be displayed on the LMD, is formed by the pattern pairs gi+ and gi−, expressed as S = [g1+, g1−, g2+, g2−, ⋯, gK+, gK−]. The total pattern number of the sequence S is 2K, and the total pattern pair number (PPN) is K.
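As a concrete illustration of this construction, the following is a minimal Python sketch under our own assumptions: the function names are ours, scipy.linalg.hadamard (natural ordering) is used as a stand-in for wht(H_K), and the exact scan direction should follow the red arrow in Fig. 1(b).

```python
import numpy as np
from scipy.linalg import hadamard

def zigzag_indices(n):
    """Anti-diagonal (zigzag) scan order of an n x n grid of block indices."""
    idx = np.arange(n * n).reshape(n, n)
    order = []
    for s in range(2 * n - 1):
        diag = [idx[i, s - i] for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 == 0 else diag[::-1])
    return order

def build_pattern_pairs(K):
    """Build zigzag-ordered Hadamard pattern pairs (g_i^+, g_i^-) of size sqrt(K) x sqrt(K)."""
    r = int(round(np.sqrt(K)))
    assert r * r == K, "K must be a perfect square, K = 2**k with k even"
    H = hadamard(K)                       # stand-in for the K x K matrix wht(H_K)
    # split into K blocks f_i of size r x r, indexed row by row as in Fig. 1(b)
    blocks = [H[i * r:(i + 1) * r, j * r:(j + 1) * r] for i in range(r) for j in range(r)]
    # reorder the blocks along the zigzag scan of the r x r block grid: G = [g_1, ..., g_K]
    G = [blocks[i] for i in zigzag_indices(r)]
    # decompose each +/-1 pattern into displayable 0/1 pairs: g_i = g_i^+ - g_i^-
    return [((g + 1) // 2, 1 - (g + 1) // 2) for g in G]
```

For a 128 × 128-pixel image, for example, build_pattern_pairs(16384) would return the K = 16,384 pattern pairs to be displayed on the LMD in sequence.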

Corresponding to the ith pattern Si, a light intensity value Ii is recorded, which is given by Ii = ∬ T(x, y) Si dx dy, where T(x, y) is the object function. With the light intensity Ii and the pattern Si, the object image O(x, y) can be recovered by the traditional second-order correlation algorithm [2,5,7],

G^(2)(x, y) = ⟨ΔU ΔV⟩ = O(x, y),    (2)

where ΔU = U − ⟨U⟩, ΔV = V − ⟨V⟩, U = [ΔI1, ΔI2, ⋯, ΔIK], V = [ΔS1, ΔS2, ⋯, ΔSK], ΔIn = I2n−1 − I2n, ΔSn = S2n−1 − S2n, n = 1, 2, ⋯, K, and ⟨·⟩ = (1/K) Σ_{n=1}^{K} (·) is the ensemble average. Note that each value of n corresponds to a pattern pair gn±, ΔSn = gn is the illumination pattern and ΔIn is the corresponding light intensity value. Expanding Eq. (2) with the ensemble average, we can obtain

O(x, y) = (1/K) Σ_{n=1}^{K} ΔUn ΔVn,    (3)

where ΔUn = Un − ⟨U⟩ and ΔVn = Vn − ⟨V⟩.
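For reference, Eq. (3) can be evaluated offline once all K differential intensities are available. The sketch below is illustrative only: the helper name batch_reconstruction is ours, and pairs and dI are assumed to be the zigzag-ordered pattern pairs and the corresponding differential bucket intensities.

```python
import numpy as np

def batch_reconstruction(pairs, dI):
    """Offline evaluation of Eq. (3): O(x, y) = (1/K) * sum_n dU_n * dV_n."""
    dS = np.stack([(gp - gm).astype(float) for gp, gm in pairs])  # dS_n = g_n
    dI = np.asarray(dI, dtype=float)                 # dI_n = I_{2n-1} - I_{2n}
    dU = dI - dI.mean()                              # U - <U>
    dV = dS - dS.mean(axis=0)                        # V - <V>
    return np.tensordot(dU, dV, axes=1) / len(dI)    # sum over n, divided by K
```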

As the constant 1/K has no influence on the recovered image, the object image OM(x, y) corresponding to the Mth pair of patterns can be written as

OM(x, y) = Σ_{n=1}^{M} ΔUn ΔVn,    (4)

where M = 1, 2, ⋯, K. According to Eq. (4), we develop an online ZHCGI scheme using an accumulation method, where the current image OM(x, y) is obtained based on the previous image OM−1(x, y), given by

OM(x, y) = OM−1(x, y) + ΔUn ΔVn,    (5)

where O0(x, y) = 0 is an image whose pixel values are all equal to 0. In Eq. (5), since the pattern ΔVn is pre-known, the object image is reconstructed immediately after the intensity value ΔUn is acquired. Additionally, as Eq. (5) can be executed quickly on current-generation PCs, the proposed method can become a real-time ZHCGI scheme if the pattern display and intensity collection speeds are sufficiently fast.
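A minimal Python sketch of this online accumulation is given below. The measure callback is a hypothetical stand-in for the LMD display plus bucket-detector readout, and the running mean of the acquired differential intensities is used as an online approximation to the ensemble average ⟨U⟩ (our assumption).

```python
import numpy as np

def online_zhcgi(pairs, measure):
    """Online accumulation of Eq. (5): O_M(x, y) = O_{M-1}(x, y) + dU_n * dV_n.

    pairs   -- zigzag-ordered pattern pairs (g_plus, g_minus), e.g. from build_pattern_pairs()
    measure -- callback returning one bucket-detector intensity for a displayed pattern
    """
    dV_mean = np.mean([gp - gm for gp, gm in pairs], axis=0)  # <V>; the patterns are pre-known
    O = np.zeros_like(pairs[0][0], dtype=float)               # O_0(x, y) = 0
    dI_history = []
    for g_plus, g_minus in pairs:
        dI = measure(g_plus) - measure(g_minus)               # differential intensity dI_n
        dI_history.append(dI)
        dU = dI - np.mean(dI_history)                         # dU_n, with a running mean for <I>
        dV = (g_plus - g_minus) - dV_mean                     # dV_n = dS_n - <S>, with dS_n = g_n
        O += dU * dV                                          # Eq. (5): update as soon as dI_n arrives
        yield O.copy()                                        # current estimate O_M(x, y)
```

In a numerical test, measure can simply be lambda pattern: float(np.sum(pattern * T)) for a ground-truth object T, so each yielded array is the current estimate OM(x, y) that can be displayed, or checked by the termination criterion described below, after every pattern pair.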



The recovered image quality is evaluated quantitatively by the peak signal-to-noise ratio (PSNR) and the root mean square error (RMSE),

PSNR = 10 log10(T² / MSE),    (6)

RMSE = √MSE,    (7)

where MSE = (1/N) Σ_{x,y} (O − O0)², O0 and O denote the original and reconstructed images, respectively, T is the maximum gray value of O, and N is the total pixel number.
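Eqs. (6) and (7) can be computed directly; the helper below is a minimal sketch (the function name is ours).

```python
import numpy as np

def psnr_rmse(O, O0):
    """PSNR (Eq. 6) and RMSE (Eq. 7) of a reconstruction O against a reference image O0."""
    O, O0 = np.asarray(O, dtype=float), np.asarray(O0, dtype=float)
    mse = np.mean((O - O0) ** 2)     # MSE = (1/N) * sum_{x,y} (O - O0)^2
    T = O.max()                      # maximum gray value of the reconstructed image
    return 10.0 * np.log10(T ** 2 / mse), np.sqrt(mse)
```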



During the online imaging process, an adaptive measurement termination method (AMTM) is proposed to stop the online imaging and reduce the measurement number. Firstly, when the measurement number reaches m, the recovered image O′m(x, y) is set as the original image. After that, the PSNR of each newly restored image O′w(x, y) is calculated and a PSNR array P = [p1, p2, ⋯, pB] is formed, where w = 1, 2, ⋯, B and B denotes the predetermined measurement number. The deviation array of the adjacent PSNR values is then obtained, given by ΔP = [ΔP2, ΔP3, ⋯, ΔPB], where ΔPw+1 = pw+1 − pw. Then a standard deviation array is acquired from ΔP, written as ΔE = [ΔEa, ΔEa+1, ⋯, ΔEB], where ΔEa = std(ΔP2, ΔP3, ⋯, ΔPa), ΔEa+1 = std(ΔP3, ΔP4, ⋯, ΔPa+1), std() is the standard deviation function, and a is an integer with a > 2. Here, a is a parameter related to the calculation step. Specifically, a − 2 is the step for calculating the standard deviation ΔE from the data in the deviation array ΔP. The value of a is determined by the recovered image quality and the computational cost. In other words, a can be adjusted so as to obtain the desired imaging quality without reducing the imaging speed. To terminate the measurement, a threshold value (TV) δ is set. If ΔEt < δ, the measurement is stopped; otherwise, the measurement continues until the predetermined measurement number is reached. Here, t is a positive integer and a ≤ t ≤ B. The advantage of the online adaptive ZHCGI is that the measurement can be adaptively terminated according to the quality of the reconstructed image, which helps to improve the imaging efficiency.
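A sketch of the AMTM stopping test is given below, interpreting std(ΔP2, ⋯, ΔPa) as the standard deviation over that set of adjacent-PSNR deviations; the function name and signature are our own assumptions.

```python
import numpy as np

def amtm_should_stop(psnr_history, a, delta):
    """Adaptive measurement termination (AMTM) check, as an illustrative sketch.

    psnr_history -- PSNR values p_1..p_w of the restored images against the reference O'_m
    a            -- window parameter (integer, a > 2); each Delta_E uses a - 1 adjacent deviations
    delta        -- threshold value delta; terminate once the latest Delta_E falls below it
    """
    if len(psnr_history) < a:
        return False                   # not enough values yet to form Delta_E_a
    dP = np.diff(psnr_history)         # deviations of adjacent PSNR values
    dE = np.std(dP[-(a - 1):])         # std over the most recent a - 1 deviations
    return dE < delta
```

With the reference image fixed at measurement number m, the online loop can call amtm_should_stop(psnr_history, a, delta) after each new PSNR value (e.g., a = 6 and δ = 0.02 as used in the simulations) and stop as soon as it returns True.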

3. Results and analysis

Fig. 2 shows the simulation results for the image (128 × 128 pixels) of a "butterfly" with different pattern pair numbers (PPNs). We compare the simulation results of ZHCGI with those of the state-of-the-art FWHTGI (fast Walsh–Hadamard transform GI) method [24] and the Russian dolls (RD) ordering CGI method (named RDCGI) [26]. Note that the total PPN is 128 × 128 = 16,384 for a 128 × 128-pixel image. Here, to show the advantage of the proposed method under few-PPN conditions, the numerical calculations are implemented with the PPNs set from 819 to 4914 at intervals of 819. The simulation results are presented in Fig. 2(a), where the images in rows 1, 2 and 3 are recovered by FWHTGI, ZHCGI and RDCGI, respectively. As shown in Fig. 2(a), the imaging quality of ZHCGI is better than that of FWHTGI and RDCGI when the PPN increases from 819 to 4914. In particular, ZHCGI can reconstruct the object image even when the PPN is equal to 819, whereas FWHTGI and RDCGI cannot achieve an imaging quality as good as that of ZHCGI (PPN of 819) even when the PPN is 1638. The comparison results for FWHTGI, RDCGI and ZHCGI with the original image are presented in Fig. 2(b)−2(d). Here, Fig. 2(b)−2(d) correspond to the three images marked with red lines in Fig. 2(a), which are produced by FWHTGI, RDCGI and ZHCGI, respectively. The normalized intensity distributions in Fig. 2(b)−2(d) correspond to the intensity distributions along the red lines shown in Fig. 2(a). It can be seen that the intensity distributions of ZHCGI are closer to those of the original image in comparison with FWHTGI and RDCGI, indicating that the imaging quality of ZHCGI is superior.

The PSNR and normalized standard deviation (NSD) of ZHCGI using different PPNs are shown in Fig. 3. Here, a = 6, the predetermined measurement number is B = 4914, and NSD is the normalized standard deviation ΔE. The original image is obtained when the measurement number reaches m = 328. Under this condition, the quality of the original image is low. Since the quality of the recovered image improves as the PPN increases, when the low-quality original image is used to calculate the PSNR of the recovered images, the PSNR curve decreases, as shown in Figs. 3(a) and 6(a). On the contrary, when a high-quality original image is used to calculate the PSNR of the recovered images, the PSNR curve increases, as shown in Fig. 7(a). As displayed in Fig. 3(a), the PSNR decreases as the PPN increases, and when the PPN is in the range of [2500, 4200], the variation of the PSNR is quite small, which means the image quality is similar in this range. The NSD of the PSNR in Fig. 3(a) is exhibited in Fig. 3(b). The TV is set as δ = 0.02 and the measurement is terminated by the AMTM when the measurement number reaches 3671. Compared with the predetermined measurement number B = 4914, the number of measurements is reduced by 1243. On the other hand, the parameters m, a and δ can be adjusted so as to obtain the desired imaging quality.

To demonstrate the proposed online adaptive ZHCGI, an experimental setup is built as shown in Fig. 4(a). The illumination LMD is a 6-inch mobile phone LCD screen with a pixel resolution of 1920 × 1080. A Blackfly S USB3 CCD camera is used as the BD, and a transmissive USAF 1951 resolution test chart is adopted as the test object. A zoom lens (ZL), whose focal length can be changed from 20 cm to 100 cm, is installed before the CCD camera and is used to converge the transmitted light. The distance between the test object and the LCD screen is 4.40 cm, and the distance between the CCD plane and the test object is 28.50 cm. The display of the LCD screen and the collection of the light intensity by the CCD camera are controlled by a PC (Intel Core i7-8700K, 32 GB memory). Fig. 4(b) exhibits the original object (128 × 128 pixels) captured by a mobile phone camera and processed by binarization and resizing algorithms using MATLAB. Note that Fig. 4(b) is used as the reference image for the computation of the RMSE and PSNR values in Fig. 7. The total PPN is 16,384 for a 128 × 128-pixel image. Fig. 5 shows the experimental results with different PPNs, where the image reconstruction calculations are implemented with the PPN set from 819 to 3276 at intervals of 819.
The experimental and simulation results of ZHCGI are compared with those of the state-of-the-art FWHTGI [24] and RDCGI [26]. Here, "-E" and "-S" denote experiment and simulation, respectively. As shown in Fig. 5, when the PPN is 819, both FWHTGI-E and FWHTGI-S fail to provide a ghost image, and RDCGI-E and RDCGI-S obtain a ghost image with a large amount of noise, whereas the proposed ZHCGI-E and ZHCGI-S offer recognizable images. Even when the PPN is 2457, the imaging quality of FWHTGI and RDCGI is still worse than that of ZHCGI (PPN = 819). Moreover, it can be seen that the imaging quality of ZHCGI for both the simulation and the experiment is always better than that of FWHTGI and RDCGI as the PPN increases, indicating that ZHCGI realizes better imaging performance than FWHTGI and RDCGI under low PPN conditions.


Fig. 2. (a) Simulation results of FWHTGI, ZHCGI and RDCGI using various PPN conditions. (b) - (d) Comparison of normalized intensity distributions. Ori: original intensity distributions.

Fig. 3. (a) PSNR of restored images with different PPN, and (b) normalized standard deviation (NSD) of PSNR in (a).


Fig. 4. (a) Experimental setup and (b) Test object. ZL: zoom lens.

Fig. 6 presents the PSNR and NSD of ZHCGI with different PPNs. When the measurement number is m = 328, the restored image is set as the original image. Here, B = 4914, a = 9, and δ = 0.02. Note that we set a = 9 in the experiments and a = 6 in the simulations because these values achieve a good balance between image quality and computational cost. As exhibited in Fig. 6, when the measurement number is 2894, we obtain ΔE2894 = 0.018 < 0.02, and the measurement is terminated by the AMTM. This reduces the number of measurements by 4914 − 2894 = 2020.

Fig. 7 shows the PSNR and RMSE values for FWHTGI-E, FWHTGI-S, RDCGI-E, RDCGI-S, ZHCGI-E and ZHCGI-S, where the original image is shown in Fig. 4(b).


As the measurement is terminated when the measurement number reaches 2894, the maximum PPN is set to 3276 for simplicity. When the PPN increases from 1 to 3276, the PSNR and RMSE values of ZHCGI-S are always better than those of RDCGI-S and FWHTGI-S, and the PSNR and RMSE values of ZHCGI-E are always better than those of RDCGI-E and FWHTGI-E, meaning that the imaging quality of ZHCGI is superior to that of FWHTGI and RDCGI. Even if the measurement is terminated when the measurement number reaches 2894, a high-quality image can still be obtained by ZHCGI-S and ZHCGI-E. Additionally, imaging quality enhancement methods, such as deep learning [27], the differential algorithm [30] and compressive sensing [31], may be used to further improve the imaging quality of the proposed method.

The numerical and experimental results shown in Figs. 2, 5 and 7 demonstrate that ZHCGI can recover the object image with a low PPN (e.g., 819). As the object image is restored immediately after the intensity value ΔUn is obtained, the setup in Fig. 4(a) can be reconfigured into a real-time system by using a high-speed LMD and a high-speed light intensity collection device (e.g., photomultiplier tubes). The zigzag-scanning-based methods in Refs. [21,29] can provide high-quality imaging with a small measurement number, which is called the predetermined measurement number in this paper.

Fig. 5. Experimental and simulation results under different PPN conditions. FWHTGI-E, RDCGI-E and ZHCGI-E are experimental results; FWHTGI-S, RDCGI-S and ZHCGI-S are simulation results.


Fig. 6. Experimental results of ZHCGI-E. (a) PSNR of restored images with different PPN, and (b) NSD of PSNR in (a).

Fig. 7. Comparisons of the PSNR and the RMSE for FWHTGI-E, FWHTGI-S, RDCGI-E, RDCGI-S, ZHCGI-E and ZHCGI-S.

However, the proposed method can further reduce the measurement number under a given predetermined measurement number condition. Moreover, the proposed method does not need to wait until the whole intensity sequence ΔU has been acquired, because the object image is reconstructed immediately after each intensity value ΔUn is acquired, owing to the online correlation algorithm. These advantages make the proposed method superior to the methods reported in Refs. [21,29]. In summary, compared with the existing CGI methods, the proposed method offers high imaging performance, adaptive measurement, and the potential for real-time imaging.

4. Conclusion

We have proposed an online adaptive CGI scheme, which achieves high-quality imaging with fewer illumination patterns. We use a zigzag scanning method to generate the Hadamard pattern sequence and introduce an online imaging method for image reconstruction. We propose an adaptive measurement termination method to terminate the online imaging and reduce the measurement number. The effectiveness of the proposed scheme is validated and its imaging performance is compared with those of FWHTGI and RDCGI. Both the experimental and numerical results demonstrate that the imaging quality of the proposed scheme is better than that of the FWHTGI and RDCGI methods under conditions of a low illumination pattern number. In addition, the imaging quality of the proposed scheme can be further improved by advanced algorithms such as compressive sensing, differential and deep learning algorithms. Moreover, real-time CGI can be realized by using a high-speed LMD and a high-speed light intensity collection device. The proposed scheme may find application in many areas, such as real-time CGI, full-color CGI and three-dimensional CGI.

Funding

National Natural Science Foundation of China (61805048, 61803093, 61672168, 61701123, 51775528); China Scholarship Council (CSC, 201808440010); Guangdong Provincial Key Laboratory of Cyber-Physical System (2016B030301008); Natural Science Foundation of Guangdong Province (2018A030310599); Application Technologies R&D Program of Guangdong Province (2015B090922013); Key Area R&D Plan Program of Guangdong Province (2016B090918017, CXZJHZ201730).

Author statement

All the authors contributed equally to this work.

Declaration of Competing Interest

The authors declare that they have no conflict of interest related to this work.

References

[1] Jiang W, Li X, Jiang S, Wang Y, Zhang Z, He G, Sun B. Increase the frame rate of a camera via temporal ghost imaging. Opt Laser Eng 2019;122:164–9.
[2] Cheng J, Han S. Incoherent coincidence imaging and its applicability in X-ray diffraction. Phys Rev Lett 2004;92(9):93903.
[3] Liansheng S, Yin C, Ailing T, Asundi AK. An optical watermarking scheme with two-layer framework based on computational ghost imaging. Opt Laser Eng 2018;107:38–45.
[4] Chen M, Wu H, Wang R, He Z, Li H, Gan J, Zhao G. Computational ghost imaging with uncertain imaging distance. Opt Commun 2019;445:106–10.
[5] Pittman TB, Shih YH, Strekalov DV, Sergienko AV. Optical imaging by means of two-photon quantum entanglement. Phys Rev A 1995;52(5):R3429–32.

[6] Wang S, Meng X, Yin Y, Wang Y, Yang X, Zhang X, Peng X, He W, Dong G, Chen H. Optical image watermarking based on singular value decomposition ghost imaging and lifting wavelet transform. Opt Laser Eng 2019;114:76–82.
[7] Dongfeng S, Jiamin Z, Jian H, Yingjian W, Kee Y, Kaifa C, Chenbo X, Dong L, Wenyue Z. Polarization-multiplexing ghost imaging. Opt Laser Eng 2018;102:100–5.
[8] Liu H, Zhang S. Computational ghost imaging of hot objects in long-wave infrared range. Appl Phys Lett 2017;111(3):31110.
[9] Stantchev RI, Sun B, Hornett SM, Hobson PA, Gibson GM, Padgett MJ, Hendry E. Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector. Sci Adv 2016;2(6):e1600190.
[10] Meyers RE, Deacon KS, Shih Y. Positive-negative turbulence-free ghost imaging. Appl Phys Lett 2012;100(13):131114.
[11] Luo C, Lei P, Li Z, Qi J, Jia X, Dong F, Liu Z. Long-distance ghost imaging with an almost non-diffracting Lorentz source in atmospheric turbulence. Laser Phys Lett 2018;15(8):85201.
[12] Khakimov RI, Henson BM, Shin DK, Hodgman SS, Dall RG, Baldwin K, Truscott AG. Ghost imaging with atoms. Nature 2016;540(7631):100.
[13] Li S, Cropp F, Kabra K, Lane TJ, Wetzstein G, Musumeci P, Ratner D. Electron ghost imaging. Phys Rev Lett 2018;121(11):114801.
[14] Yu H, Lu R, Han S, Xie H, Du G, Xiao T, Zhu D. Fourier-transform ghost imaging with hard X rays. Phys Rev Lett 2016;117(11):113901.
[15] Gong W, Yu H, Zhao C, Bo Z, Chen M, Xu W. Improving the imaging quality of ghost imaging lidar via sparsity constraint by time-resolved technique. Remote Sens 2016;8(12):991.
[16] Huang J, Shi D. Multispectral computational ghost imaging with multiplexed illumination. J Opt-UK 2017;19(7):75701.
[17] Shapiro JH. Computational ghost imaging. Phys Rev A 2008;78(6):61802.
[18] Wu H, Zhang X, Gan J, Luo C, Ge P. High-quality correspondence imaging based on sorting and compressive sensing technique. Laser Phys Lett 2016;13(11):115205.

[19] Hu X, Zhang H, Zhao Q, Yu P, Li Y, Gong L. Single-pixel phase imaging by Fourier spectrum sampling. Appl Phys Lett 2019;114(5):51102.
[20] Phillips DB, Sun M, Taylor JM, Edgar MP, Barnett SM, Gibson GM, Padgett MJ. Adaptive foveated single-pixel imaging with dynamic supersampling. Sci Adv 2017;3(4):e1601782.
[21] Zhang Z, Wang X, Zheng G, Zhong J. Hadamard single-pixel imaging versus Fourier single-pixel imaging. Opt Express 2017;25(16):19619–39.
[22] Xu Z, Chen W, Penuelas J, Padgett M, Sun M. 1000 fps computational ghost imaging using LED-based structured illumination. Opt Express 2018;26(3):2427–34.
[23] Liu C, Chen J, Liu J, Han XE. High frame-rate computational ghost imaging system using an optical fiber phased array and a low-pixel APD array. Opt Express 2018;26(8):10048–64.
[24] Wang L, Zhao S. Fast reconstructed and high-quality ghost imaging with fast Walsh–Hadamard transform. Photon Res 2016;4(6):240–4.
[25] Luo B, Yin P, Yin L, Wu G, Guo H. Orthonormalization method in ghost imaging. Opt Express 2018;26(18):23093–106.
[26] Sun M, Meng L, Edgar MP, Padgett MJ, Radwell N. A Russian dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci Rep 2017;7(1):3464.
[27] Lyu M, Wang W, Wang H, Wang H, Li G, Chen N, Situ G. Deep-learning-based ghost imaging. Sci Rep 2017;7(1):17865.
[28] Zhang Z, Su Z, Deng Q, Ye J, Peng J, Zhong J. Lensless single-pixel imaging by using LCD: application to small-size and multi-functional scanner. Opt Express 2019;27(3):3731–45.
[29] Ma H, Sang A, Zhou C, An X, Song L. A zigzag scanning ordering of four-dimensional Walsh basis for single-pixel imaging. Opt Commun 2019;443:69–75.
[30] Ferri F, Magatti D, Lugiato LA, Gatti A. Differential ghost imaging. Phys Rev Lett 2010;104(25):253603.
[31] Li X, Meng X, Yang X, Wang Y, Yin Y, Peng X, He W, Dong G, Chen H. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme. Opt Laser Eng 2018;102:106–11.