Single photon counting compressive imaging based on a sampling and reconstruction integrated deep network


Yanqiu Guan, Qiurong Yan*, Shengtao Yang, Bing Li, Qianqian Cao, Zheyu Fang
School of Information Engineering, Nanchang University, Nanchang 330031, China

Keywords: Single-photon imaging; Single photon counting compressive imaging; Compressed sensing; Deep learning

ABSTRACT

Single photon counting compressive imaging, a very efficient implementation of compressed sensing in photon counting imaging, offers the advantages of low cost and ultra-high sensitivity. However, when performing high-resolution imaging, single photon counting compressive imaging requires a long imaging time because of the large number of measurements and the heavy image reconstruction computation. In this paper, we demonstrate a single-photon counting compressive imaging system based on a novel sampling and reconstruction integrated deep network, which we call BF2C-Net. A binarized fully-connected layer is specially designed as the first layer of the network and trained as a binary measurement matrix that can be loaded directly onto the DMD to perform compressive sampling efficiently. The remaining layers of the network are used to quickly reconstruct the compressively sensed image. The effects of the compression sampling rate, the measurement matrix and the reconstruction algorithm on imaging performance are compared. The experimental results show that BF2C-Net significantly outperforms the existing iterative method and most other deep learning-based methods.

1. Introduction

Single-photon imaging, which uses a single photon detector to detect and count individual photons, can realize object imaging under extremely weak light, and it has been widely used in biomedical imaging [1–3], fluorescence lifetime microscopic imaging [4,5], multi-spectral imaging [6,7], etc. To achieve two-dimensional imaging, spatially resolved single photon detectors such as multi-anode PMTs and APD arrays have been developed, but they are expensive and have low resolution. An alternative way to obtain a high-resolution image is to scan the imaging plane with a point detector, but this approach leads to a long imaging time due to low photon collection efficiency. Single-pixel imaging based on compressive sensing theory provides a new idea for single-photon imaging. In 2012, Wen-Kai Yu et al. proposed a single photon counting compressive imaging system [8] based on the single-pixel imaging technique. It has two main advantages. First, two-dimensional imaging can be achieved using only point detectors, so this imaging method has a low cost, especially in some special bands. Second, the point detector in a single-pixel imaging system can simultaneously collect the light intensity of multiple pixels, so the imaging sensitivity of the system is no longer limited by the detection sensitivity of the single photon point detector, and the signal-to-noise ratio is greatly improved.

However, the imaging speed of single photon counting compressive imaging is very slow, which limits its application. Two aspects still need improvement. The first is to design the measurement matrix so that the most informative measurements are sampled, thereby reducing the total sampling time. The second is to develop fast, high-quality reconstruction algorithms. In terms of the measurement matrix, the signal can be effectively recovered from a small number of measurements using matrices such as Gaussian matrices [9], binary random matrices [10], and Toeplitz matrices [11]. To perform more efficient sampling, some researchers have constructed adaptive measurement matrices based on prior information obtained from previous measurement data to reduce the number of measurements [12]. In this paper, in order to sample information more efficiently, the measurement matrix is obtained by a specially designed deep learning network. In terms of reconstruction algorithms, many excellent algorithms have been proposed, such as OMP [13], ROMP [14], IHT [15], and TVAL3 [16]. Most of these algorithms are based on the assumption that the image is sparse, either directly or in a certain transform domain, and solve a convex optimization problem with iterative strategies. The TVAL3 algorithm combines an augmented Lagrangian function with an alternating minimization method based on total variation minimization. These algorithms still take a long time to reconstruct the image, especially when processing large images.
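As an illustration of the sampling model discussed above (this sketch is ours, not code from the experimental system, and uses a plain binary random matrix rather than the learned one introduced later), compressive sampling of a flattened image block can be simulated as follows:

```python
import numpy as np

def compressive_sample(x, mr=0.10, seed=0):
    """Simulate CS sampling y = Phi @ x of a flattened image block x with a
    binary (0/1) random measurement matrix at measurement ratio mr.
    For a 1024-pixel block, the ceiling gives n = 256, 103, 41, 11 measurements
    at MR = 0.25, 0.10, 0.04, 0.01."""
    n_pixels = x.size
    n_meas = int(np.ceil(mr * n_pixels))                # number of measurements
    rng = np.random.default_rng(seed)
    phi = rng.integers(0, 2, size=(n_meas, n_pixels))   # binary 0/1 measurement matrix
    y = phi @ x.ravel()                                  # each y[k] plays the role of one PMT count
    return y, phi

# Example: a 32 x 32 block sampled at MR = 0.10 -> 103 measurements
block = np.random.rand(32, 32)
y, phi = compressive_sample(block, mr=0.10)
print(y.shape, phi.shape)   # (103,) (103, 1024)
```

Each entry of y plays the role of one photon-count measurement; an iterative solver such as TVAL3 would then recover the block from y and phi.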

∗ Corresponding author. E-mail address: [email protected] (Q. Yan).

https://doi.org/10.1016/j.optcom.2019.124923 Received 18 July 2019; Received in revised form 18 October 2019; Accepted 10 November 2019 Available online xxxx 0030-4018/© 2019 Elsevier B.V. All rights reserved.


Deep neural networks [17] have made a series of breakthroughs in computer vision tasks such as image classification [17], super-resolution [18], and restoration [19]. Several deep neural networks for reconstructing images from compressively sensed (CS) measurements have been proposed in recent years. Due to their powerful learning ability, deep learning-based methods effectively avoid the problems of large computational complexity and long reconstruction time, and achieve good reconstruction performance [20]. Shimobaba et al. used deep learning to improve the quality of computational ghost imaging (CGI) images and achieved good results [21]. In 2015, Mousavi et al. applied the stacked denoising autoencoder (SDA) to unsupervised feature learning, which greatly shortened the reconstruction time [22]. In 2016, Kulkarni et al. proposed the ReconNet model based on image super-resolution reconstruction [20] and achieved better reconstruction results than SDA. In 2017, Yao et al. used deep residual learning to build DR2-Net [23], which produced more accurate reconstruction results. Their test results show that the time complexity of reconstructing compressed-sensing images with deep learning is reduced by about 100 times without reducing the quality of the reconstructed images. Xie et al. proposed an adaptive measurement network for compressed sensing image reconstruction, which uses a fully connected layer instead of a random Gaussian matrix to obtain the CS measurements and achieves a more accurate reconstruction [24]. All of the above deep learning based compressed sensing imaging schemes use floating-point random matrices as the measurement matrix, and their reconstruction networks are verified only by simulation. However, in a single photon counting compressive imaging system, the measurement matrix loaded on the DMD must be binary, so the feasibility of the above schemes requires verification with an experimental system. This paper proposes a novel sampling and reconstruction integrated deep network. The first layer of the network uses a binarized fully-connected layer to obtain the CS measurements of the image. That is, the weights of the first layer of the network are binarized and loaded onto the DMD to sample the image, and the subsequent deep convolutional network is used to reconstruct the image, thereby realizing large-area, high-resolution, fast imaging. On this basis, we built a single photon counting imaging experimental system to verify the reliability of BF2C-Net. Extensive experiments show that our BF2C-Net significantly outperforms the existing iterative method and most other deep learning-based methods. Our contributions can be summarized as follows:

• We propose a novel sampling and reconstruction integrated deep network for image compressive sensing. We modified the adaptive measurement network (AD_RE-Net) [24] and DR2-Net [23] to fit our system and compared them with BF2C-Net. We find that BF2C-Net gives better reconstruction results. In addition, we find that BF2C-Net outperforms the TVAL3 algorithm in terms of reconstruction quality at most measurement ratios.
• The measurement matrix we use is generated by the first fully-connected layer of the network and has clear advantages over a random Gaussian matrix. It not only enables good reconstruction based on deep learning, but also enables traditional algorithms such as TVAL3 to achieve better reconstruction.
• We use a very deep convolutional neural network to obtain good reconstruction quality and adopt two methods to speed up training: residual learning and extremely high learning rates.

Fig. 1. Experimental apparatus for single photon counting compressive imaging. Light: 10 W LED lamp; DMD: digital micromirror device; PMT: photomultiplier tube; FPGA: field programmable gate array; Measurement vector: the count values of the PMT; Computer processing unit: provides the sampling matrix from BF2C-Net and uses BF2C-Net for reconstruction.

2. Principle and realization of experimental system

2.1. Single photon counting compressive imaging system

Fig. 1 shows the single photon counting compressive imaging system in our lab. The light source consists of an LED, a collimator, an attenuator, and an aperture. After the light emitted by the LED passes through the collimator, the attenuator, and the aperture, it becomes a very weak parallel beam whose intensity is at the single-photon level. The DMD (TI 0.7 XGA DDR DMD) has 1024 × 768 micromirrors that can be individually controlled for deflection and works as a spatial light modulator that continuously loads the measurement matrix to achieve random modulation of the spatial light. The size of each micromirror is 13.68 μm × 13.68 μm. Each digital micromirror has two reflection states, and deflections of +12 degrees and −12 degrees represent the "on" and "off" modulation states, respectively. We placed a focusing lens in the +12 degree direction and collected the modulated light into a photon counting PMT (Hamamatsu Photonics H10682-110). As a point detector, the PMT simultaneously collects the light intensity of multiple pixels on the imaging surface in each measurement. Therefore, the system has a very high signal-to-noise ratio, which enables imaging with higher detection sensitivity. The weight matrix of the first fully-connected layer of the trained BF2C-Net is used as the measurement matrix and loaded onto the DMD through the FPGA control module to achieve random modulation of the image on the DMD. The count value of the PMT is recorded after each acquisition and sent to the host computer through the serial port of the FPGA. The measurement data are processed and then fed into the trained BF2C-Net for reconstruction to obtain the image acquired by the single-pixel camera.
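To make the DMD loading step concrete, the following sketch (our own illustration; the mapping of the binarized weights to mirror states is an assumption consistent with the description above, i.e. +1 corresponds to the +12 degree "on" state that sends light to the PMT) turns one row of the binarized weight matrix into a 32 × 32 on/off pattern:

```python
import numpy as np

def row_to_dmd_pattern(w1b_row, block_shape=(32, 32)):
    """Turn one binarized weight row (+1/-1) into a DMD on/off pattern.

    Assumption (ours): +1 -> mirror 'on' (+12 deg, light reaches the PMT),
    -1 -> mirror 'off' (-12 deg). The row is reshaped to the image block size.
    """
    pattern = (w1b_row.reshape(block_shape) > 0).astype(np.uint8)  # 1 = on, 0 = off
    return pattern

# Example: a random +/-1 row of length 1024 (one measurement of a 32 x 32 block)
rng = np.random.default_rng(0)
row = rng.choice([-1, 1], size=32 * 32)
print(row_to_dmd_pattern(row).shape)  # (32, 32)
```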


Fig. 2. The process of training and testing with BF2C-Net. The network consists of 2 fully connected layers and 20 convolutional layers.

2.2. Network architecture

Here we describe the proposed BF2C-Net, as shown in Fig. 2. The network divides the original image into 32 × 32 blocks as input, and the reconstructed image is formed by splicing the 32 × 32 image blocks output by the network. BF2C-Net includes a compressed sampling sub-network $F_c(\cdot)$, a preliminary reconstruction sub-network $F_f(\cdot)$ and a deep convolutional reconstruction sub-network $F_r(\cdot)$. $F_c(\cdot)$ is used to generate the CS measurements of the original image, and $F_f(\cdot)$ processes the CS measurements to form a preliminarily reconstructed image. $F_r(\cdot)$ uses a deep convolutional network and residual learning to further reconstruct the output of $F_f(\cdot)$. In the following, we first give details of the three sub-networks and of residual learning, and then summarize the procedure by which BF2C-Net reconstructs an image.

Compressed Sampling Sub-network. The input of the network is the $i$th original image block $x_i \in \mathbb{R}^{1\times m}$. The weight matrix $W_1 \in \mathbb{R}^{m\times n}$ of the first fully-connected layer can be regarded as a measurement matrix for compressed sensing, replacing the traditional random Gaussian measurement matrix. In order to adapt to the DMD sampling characteristics, we binarize the weight matrix $W_1$ of the first fully-connected layer and denote it by $W_1^b$. The binarization method and the corresponding training method are introduced in Section 2.3. Compressed sampling is the process of loading the binarized matrix $W_1^b$ onto the DMD for sampling, which can be expressed as

$$y_i = x_i W_1^b \qquad (1)$$

where $y_i \in \mathbb{R}^{1\times n}$ is the result of the compressed sampling of the $i$th original image block, and $n$ is the number of measurements.

Preliminary Reconstruction Sub-network. The second fully-connected layer in the network is used for the preliminary reconstruction of the CS measurements. This layer is necessary because it resizes the measurement vector back to the size of the original image block, which enables the subsequent deep convolution and residual learning to proceed smoothly. It can be expressed as

$$x_i^r = y_i W_2 \qquad (2)$$

where $x_i^r \in \mathbb{R}^{1\times m}$ is the preliminarily reconstructed image. The mapping from $y_i$ to $x_i^r$ in the above equation can be seen as an approximately linear mapping, where $W_2 \in \mathbb{R}^{n\times m}$ is the mapping matrix. The loss function is

$$L(W_2) = \frac{1}{N}\sum_{i=1}^{N} \lVert x_i^r - x_i \rVert^2 \qquad (3)$$

Deep Convolutional Reconstruction Sub-network. Inspired by the VDSR network [25], we use a deep convolutional network as the reconstruction network for compressed sampling. We set up 20 convolutional layers, where all layers except the first and the last are of the same type: 64 filters of size 3 × 3 × 64, where each filter operates on a 3 × 3 spatial region across 64 channels (feature maps). The first layer operates on the input image, and the last layer, used for image reconstruction, consists of a single filter of size 3 × 3 × 64. Each convolutional layer is followed by a ReLU layer except for the last one.

Residual learning. A convolutional neural network (CNN) is an end-to-end learning process. If the mapping is learned directly, as ReconNet does in [20,24], the CNN needs to retain all the information of the image, that is, the network must remember all the information of the low-dimensional images while reconstructing. In this way, each layer in the network needs to carry all the image information, which leads to information overload. It makes the network very sensitive to gradients and easily causes vanishing or exploding gradients. Residual learning can solve these problems. In the BF2C-Net proposed in this paper, the preliminarily reconstructed image is very similar to the input image, so in the deep convolutional network we define a residual image $r_i = x_i - x_i^r$, in which most values are likely to be zero or small, and we predict this residual image. The loss function then becomes $\frac{1}{2}\lVert r_i - f(x)\rVert^2$, where $f(x)$ is the prediction of the network. During training, we set the learning rate to 10 times that of ReconNet [20] to accelerate the training of our deep network; as a result, training converges after only 2000 iterations.

Reconstruction Procedure. Given an image, we first divide it into 32 × 32 image patches with no overlap. The block size of 32 × 32 is chosen to facilitate the segmentation of the test images, since most of the image sizes in the test set are multiples of 32. We also tried other sizes such as 33 × 33 and 41 × 41, which are not as effective as 32 × 32. Each divided image patch is used as the input of BF2C-Net and reshaped to size 1 × 1024. The number of measurements $n$ in the compressed sampling sub-network varies with the measurement ratio (MR): $n$ = 256, 103, 41, and 11 corresponding to MR = 0.25, 0.10, 0.04, and 0.01, respectively. The first fully-connected layer processes the input patch and outputs the CS measurements. The second fully-connected layer is composed of 1024 neurons, and its 1 × 1024 output is reshaped to 32 × 32 as the preliminarily reconstructed image. As shown in Fig. 2, the deep convolutional reconstruction network $F_r(\cdot)$ takes the preliminarily reconstructed image $x_i^r$ as input, and the reconstruction result is the sum of the predicted residual image $r_i$ and the preliminarily reconstructed image $x_i^r$. The output size is 32 × 32, and the final output image is formed by splicing the image blocks in sequence.
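A minimal PyTorch sketch of the architecture described in this subsection is given below. It is our own reading of the text, not the authors' released code: the initialization scale, the use of on-the-fly binarization during training, and all identifier names are assumptions; the binarization and its gradient follow Section 2.3.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient (see Section 2.3)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()   # cancel gradient where |w| > 1

class BF2CNet(nn.Module):
    def __init__(self, m=1024, n=103, depth=20):   # n = 103 corresponds to MR = 0.10
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(m, n) * 0.01)  # real-valued weights, binarized in forward
        self.fc2 = nn.Linear(n, m)                         # preliminary reconstruction, Eq. (2)
        layers = [nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(64, 1, 3, padding=1)]         # single filter, predicts the residual
        self.deep = nn.Sequential(*layers)

    def forward(self, x):                      # x: (batch, 1024) flattened 32x32 blocks
        w1b = BinarizeSTE.apply(self.w1)       # binary measurement matrix
        y = x @ w1b                            # compressed sampling, Eq. (1)
        x_r = self.fc2(y)                      # preliminarily reconstructed block
        img = x_r.view(-1, 1, 32, 32)
        return (img + self.deep(img)).view(-1, 1024)   # residual learning

net = BF2CNet()
out = net(torch.rand(4, 1024))
print(out.shape)   # torch.Size([4, 1024])
```

Training would minimize the residual loss described above over 32 × 32 patches; after training, the binarized w1 is what gets loaded onto the DMD.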

2.3. Binarization scheme and training method

In this section, we introduce the binarization method and the corresponding training method used in this paper. Inspired by [26], we binarize the weight $W_1$ of the first fully-connected layer of the network. All layers in [26] are binarized, whereas we binarize only the first layer to better preserve the reconstruction accuracy. The binarization scheme we use is a deterministic method based on the sign function:

$$x^b = \mathrm{Sign}(x) = \begin{cases} +1 & \text{if } x \ge 0, \\ -1 & \text{otherwise,} \end{cases} \qquad (4)$$

where $x^b$ is the binarized variable and $x$ the real-valued variable. It is very straightforward to implement and works quite well in practice. The binarization operation (forward propagation) is

$$q = \mathrm{Sign}(r) \qquad (5)$$

The derivative of the sign function is zero almost everywhere, making it apparently incompatible with backpropagation. Therefore, we have to redefine the gradient during backpropagation. Assume that an estimator $g_q$ of the gradient $\partial C/\partial q$ has been obtained, where $C$ is the loss function. Then our straight-through estimator of $\partial C/\partial r$ is simply

$$g_r = g_q \, 1_{|r|\le 1} \qquad (6)$$

Note that this preserves the gradient's information and cancels the gradient when $r$ is too large; not canceling the gradient when $r$ is too large significantly worsens the performance. The derivative $1_{|r|\le 1}$ can also be seen as propagating the gradient through the hard tanh, which is the following piecewise-linear activation function:

$$\mathrm{Htanh}(x) = \mathrm{Clip}(x, -1, 1) = \max(-1, \min(1, x)) \qquad (7)$$
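The following numerical sketch (added for illustration; not from the paper) spells out Eqs. (4)–(7): the forward sign binarization, the straight-through gradient, and its interpretation as the derivative of the hard tanh:

```python
import numpy as np

def sign_binarize(r):
    """Forward pass, Eqs. (4)-(5): q = Sign(r), with Sign(0) = +1."""
    return np.where(r >= 0.0, 1.0, -1.0)

def straight_through_grad(g_q, r):
    """Backward pass, Eq. (6): g_r = g_q * 1_{|r| <= 1}."""
    return g_q * (np.abs(r) <= 1.0)

def htanh(x):
    """Eq. (7): Htanh(x) = Clip(x, -1, 1); its derivative is exactly 1_{|x| <= 1}."""
    return np.clip(x, -1.0, 1.0)

r = np.array([-2.0, -0.5, 0.0, 0.7, 3.0])
g_q = np.ones_like(r)                      # pretend upstream gradient dC/dq = 1
print(sign_binarize(r))                    # [-1. -1.  1.  1.  1.]
print(straight_through_grad(g_q, r))       # [0. 1. 1. 1. 0.]  (gradient canceled where |r| > 1)
print(htanh(r))                            # [-1.  -0.5  0.   0.7  1. ]
```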


Table 1
PSNR values in dB for the test images reconstructed by different algorithms at different measurement ratios (simulation).

Image name   Methods      MR = 0.25   MR = 0.10   MR = 0.04   MR = 0.01
Barbara      TVAL3        24.11       21.89       19.86       12.99
             AD_RE-Net    20.89       19.44       18.94       18.82
             DR2-Net      24.30       23.11       22.28       20.54
             BF2C-Net     24.54       23.15       22.42       20.86
Parrot       TVAL3        27.01       23.40       20.01       12.59
             AD_RE-Net    20.96       19.46       18.79       19.21
             DR2-Net      26.88       20.67       22.56       20.66
             BF2C-Net     27.08       24.70       23.12       21.16
House        TVAL3        31.67       26.29       21.84       13.96
             AD_RE-Net    22.22       20.17       19.87       20.00
             DR2-Net      29.76       24.33       24.81       21.66
             BF2C-Net     29.27       27.51       25.00       22.08
Boats        TVAL3        28.45       23.96       20.26       12.90
             AD_RE-Net    20.66       18.98       18.58       18.46
             DR2-Net      27.96       25.86       23.34       20.38
             BF2C-Net     27.46       25.07       23.19       20.58
Cameraman    TVAL3        25.87       21.88       18.85       13.80
             AD_RE-Net    19.36       17.73       17.48       17.43
             DR2-Net      24.89       23.06       21.22       18.65
             BF2C-Net     25.23       22.55       21.22       19.05
Mean PSNR    TVAL3        27.42       23.48       20.16       13.25
             AD_RE-Net    20.82       19.16       18.73       18.78
             DR2-Net      26.76       23.41       22.84       20.38
             BF2C-Net     26.72       24.60       22.99       20.75

Fig. 3. Comparison of different algorithms in Table 1 in terms of mean PSNR at MR = 0.25, 0.10, 0.04 and 0.01 (simulation).

Fig. 4. Comparison of different algorithms in Table 1 in terms of mean SSIM at MR = 0.25, 0.10, 0.04 and 0.01 (simulation).

3. Experimental results and discussion

In this section, we conduct a series of experiments to test the reconstruction performance of BF2C-Net. In terms of reconstruction quality, we compare our reconstruction scheme with the current state-of-the-art CS image reconstruction algorithm TVAL3 and with two deep learning-based algorithms, AD_RE-Net and DR2-Net. Our single-photon counting compressive imaging system requires a binarized measurement matrix, while AD_RE-Net and DR2-Net are only suited to measurements with a floating-point measurement matrix. Therefore, in order to adapt them to our experimental system and ensure a fair comparison, we binarize the weights of the first layer of AD_RE-Net and add a binarized fully-connected layer to the front of DR2-Net for compressive sampling. In addition, we use the measurement matrix generated by BF2C-Net and a random Gaussian matrix to obtain CS measurements, respectively; we then reconstruct these two sets of measurements using the TVAL3 algorithm and compare their reconstruction quality.

We use the same set of 91 images used in [20] to generate the training data for BF2C-Net and the other networks, extracting 32 × 32 image patches with stride 14. This process yields 21 760 image patches from the 91 images. For each image patch, we first extract its luminance component, denote it by $x_i$, and use it as the input of the network (see the sketch below). The test set is also the same as in [20].

Table 1 and Figs. 3 and 4 show the PSNR and SSIM of the simulated reconstruction results of five test images using different reconstruction methods and different measurement ratios, where TVAL3 uses a random Gaussian matrix for the compressive measurement. The results show that the performance of BF2C-Net is very close to that of TVAL3 when the measurement ratio is high. BF2C-Net has a clear advantage at low measurement ratios, probably because TVAL3 is not suited to reconstruction at low measurement ratios. The test results of the modified AD_RE-Net [24] and DR2-Net [23] are also given in Table 1; the PSNR of AD_RE-Net is much lower than that of BF2C-Net at all measurement ratios. With a binarized fully-connected layer as its first layer, AD_RE-Net cannot achieve good training results. This also confirms that BF2C-Net adapts better to the binarized fully-connected layer and can achieve good reconstruction results. The PSNR and SSIM of the reconstruction results of BF2C-Net are very close to those of DR2-Net at all measurement ratios: at a high measurement ratio, the reconstruction result of DR2-Net is slightly better than that of BF2C-Net, while at the other measurement ratios BF2C-Net is better.

As shown in Fig. 5, we compare the reconstruction results of a multi-layer binarized network and a single-layer binarized network through simulation. Visually, the reconstructed image of the single-layer binarized network has fewer blocking artifacts, and the imaging quality is significantly better than that of the multi-layer binarized network. The PSNR of the single-layer binarized network is also clearly higher.

Fig. 5. Reconstruction results with different binarization degrees (simulation). (a) Multiple layers are binarized. (b) Only the first fully-connected layer is binarized.
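The training-data preparation described above (32 × 32 luminance patches extracted with stride 14 from the 91-image set of [20]) can be sketched as follows; the code is illustrative, the RGB-to-luminance conversion is omitted, and the example image is hypothetical:

```python
import numpy as np

def extract_patches(img_luma, size=32, stride=14):
    """Slide a size x size window with the given stride over a luminance image
    and return the stack of training patches."""
    h, w = img_luma.shape
    patches = [img_luma[r:r + size, c:c + size]
               for r in range(0, h - size + 1, stride)
               for c in range(0, w - size + 1, stride)]
    return np.stack(patches)            # shape: (num_patches, 32, 32)

# Example on one (hypothetical) 256 x 256 luminance image
img = np.random.rand(256, 256)
print(extract_patches(img).shape)       # (289, 32, 32) for this image size
```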


The simulation results in Table 1 all use grayscale images. However, in the actual experiments we use a resolution plate as the imaging object, so the actual experimental result is a binary image. For example, the experimental results in Figs. 8 and 9 are obtained using the USAF 1951 resolution plate. Therefore, in Fig. 6 we give the simulated reconstruction results of binary images for BF2C-Net and DR2-Net. It can be seen from Fig. 6 that DR2-Net is not as good as BF2C-Net when reconstructing binary images at all measurement ratios, which indicates that the generalization capability of DR2-Net is not as good as that of BF2C-Net.

The above results are all obtained through simulation; the following are the test results of our single photon counting compressive imaging system. In the actual experiment, we use the image obtained after a long measurement time as the reference image for calculating PSNR. Fig. 7 shows the results of single photon counting compressive imaging reconstructed using AD_RE-Net and BF2C-Net at measurement ratios of 0.01, 0.04, 0.1, and 0.25. We have previously proposed a micromirror combination method that achieves large-area imaging up to the entire DMD mirror using traditional measurement matrices and reconstruction algorithms [27]. In this experiment, the imaging resolution over the entire DMD mirror is 128 × 128. The entire DMD mirror is divided into 4 × 4 regions, each implementing an imaging resolution of 32 × 32 as designed in BF2C-Net, and 8 × 6 micromirrors are combined to form one pixel. According to the binarized measurement matrix generated by the first fully-connected layer, we simultaneously control all the micromirrors in each combined pixel through the FPGA control module to turn to the same direction, realizing the light modulation in each measurement. For example, if the value at the (1,1) position of the measurement matrix is 1, then the 8 × 6 micromirrors in the top left corner are simultaneously deflected by +12 degrees. It is clear that the reconstruction results of BF2C-Net are far superior to those of AD_RE-Net, which is consistent with the results obtained from the simulation experiments.
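The micromirror combination described above can be sketched as a simple pixel-to-mirror expansion (our illustration; the 6 × 8 row/column split of each combined pixel is an assumption chosen so that 128 × 128 pixels fill the 768 × 1024 mirror array):

```python
import numpy as np

def pattern_to_micromirrors(pixel_pattern, mirrors_per_pixel=(6, 8)):
    """Expand a binary per-pixel pattern (1 = on) to individual micromirror states
    by repeating each pixel over a block of micromirrors.

    Assumption (ours): 6 x 8 micromirrors (rows x cols) per combined pixel, so a
    128 x 128 pixel pattern fills the 768 x 1024 DMD.
    """
    return np.kron(pixel_pattern, np.ones(mirrors_per_pixel, dtype=np.uint8))

# One measurement pattern for the full 128 x 128 image (built from 4 x 4 regions of 32 x 32)
pixel_pattern = np.random.randint(0, 2, size=(128, 128), dtype=np.uint8)
dmd_frame = pattern_to_micromirrors(pixel_pattern)
print(dmd_frame.shape)   # (768, 1024) -> one on/off state per micromirror
```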

Fig. 6. Reconstruction results of binary images (simulation). (a) Reconstruction results of DR2-Net. (b) Reconstruction results of BF2C-Net.

Fig. 7. Reconstruction results. (a) Single photon counting compressive imaging using AD_RE-Net to reconstruct. (b) Single photon counting compressive imaging using BF2C-Net to reconstruct.


Fig. 8. Reconstruction results. (a) Single photon counting compressive imaging using TVAL3 to reconstruct. (b) Single photon counting compressive imaging using BF2C-Net to reconstruct.

Fig. 9. Single photon counting compressive imaging using TVAL3 to reconstruct. (a) Sampling with a random Gaussian matrix. (b) Sampling with the matrix from the first fully-connected layer of BF2C-Net.

The imaging resolution of Figs. 8 and 9 is 256 × 256. We divide the entire DMD into 8 × 8 regions, each for 32 × 32 imaging, and 4 × 3 micromirrors are combined to form one pixel. Fig. 8 shows the results of single photon counting compressive imaging reconstructed using TVAL3 and BF2C-Net at measurement ratios of 0.01, 0.04, 0.1, and 0.25. At high sampling ratios, the reconstruction result of BF2C-Net is very close to that of TVAL3; at low sampling ratios, BF2C-Net is better. The experimental results are consistent with the previous simulation results. BF2C-Net does not perform quite as well as TVAL3 at high sampling ratios, probably because TVAL3 is well suited to high sampling ratios. It may also be because the training set of BF2C-Net uses grayscale images, which puts the network at a disadvantage when reconstructing binary images.

Fig. 9 shows the results of single photon counting compressive imaging reconstructed using TVAL3 with different measurement matrices for the compressive measurements: (a) reconstruction results of compressive measurements using a random Gaussian measurement matrix; (b) reconstruction results after loading the weight matrix of the first fully-connected layer of BF2C-Net onto the DMD for the compressive measurement. Figs. 10 and 11 show the PSNR and SSIM of the reconstruction results in Fig. 9 at different measurement ratios. These evaluations show that the reconstruction results obtained using the measurement matrix from BF2C-Net are superior to those obtained using the random Gaussian measurement matrix. We can conclude that BF2C-Net produces a better measurement matrix, which is suitable not only for reconstruction using neural networks but also for reconstruction using the TVAL3 algorithm.
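For reference, the PSNR used for the comparisons in Figs. 10 and 11 (with the long-exposure image as reference, as described earlier) can be computed as in the sketch below; the example data are hypothetical:

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """PSNR in dB between a reference image (here, the long-exposure measurement)
    and a reconstruction, both scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical example: compare a noisy 256 x 256 reconstruction to its reference
ref = np.random.rand(256, 256)
rec = np.clip(ref + 0.01 * np.random.randn(256, 256), 0, 1)
print(psnr(ref, rec))
```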

4. Conclusions

In this paper, we demonstrate a single-photon counting compressive imaging system based on a novel sampling and reconstruction integrated deep network. A binarized fully-connected layer is set as the first layer of BF2C-Net; it serves as the measurement matrix loaded onto the DMD for the compressive measurement, and the subsequent deep convolutional network is used for reconstruction. The experimental results show that the proposed BF2C-Net for single-photon compressive imaging is completely feasible, and its reconstruction quality is far superior to that of AD_RE-Net and very close to that of DR2-Net. The reconstruction quality of BF2C-Net is close to that of the TVAL3 algorithm at high measurement ratios and better than TVAL3 at low measurement ratios. In addition, the measurement matrix generated by BF2C-Net is also suitable for reconstruction with the TVAL3 algorithm, and the result is superior to that obtained with a random Gaussian measurement matrix. Moreover, once the deep learning reconstruction network is trained, the time complexity of the image reconstruction phase is greatly reduced.



Fig. 10. Comparison of different measurement matrices in terms of mean PSNR at MR = 0.25, 0.10, 0.04 and 0.01.

Fig. 11. Comparison of different measurement matrices in terms of mean SSIM at MR = 0.25, 0.10, 0.04 and 0.01.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Nos. 61565012 and 61865010), the Science and Technology Plan Project of Jiangxi Province, China (No. 20151BBE50092), and the Funding Scheme to Outstanding Young Talents of Jiangxi Province, China (No. 20171BCB23007).

References

[1] V. Studer, J. Bobin, M. Chahid, H.S. Mousavi, E. Candes, M. Dahan, Compressive fluorescence microscopy for biological and hyperspectral imaging, Proc. Natl. Acad. Sci. USA 109 (26) (2012) E1679–E1687.
[2] A. Pourmorteza, R. Symons, V. Sandfort, M. Mallek, M.K. Fuld, G. Henderson, E.C. Jones, A.A. Malayeri, L. Folio, D.A. Bluemke, Abdominal imaging with contrast-enhanced photon-counting CT: First human experience, Radiology 279 (1) (2016) 239–245.
[3] K. Taguchi, J.S. Iwanczyk, Vision 20/20: Single photon counting x-ray detectors in medical imaging, Med. Phys. 40 (10) (2013) 100901.
[4] Ye Chen, Time-correlated single-photon counting fluorescence lifetime imaging-FRET microscopy for protein localization, Mol. Imaging (2005) 239–259.
[5] W. Becker, A. Bergmann, M.A. Hink, K. König, K. Benndorf, C. Biskup, Fluorescence lifetime imaging by time-correlated single-photon counting, Microsc. Res. Technol. 63 (1) (2003) 58–66.
[6] Y. Liu, J. Shi, G. Zeng, Single-photon-counting polarization ghost imaging, Appl. Opt. 55 (36) (2016) 10347–10351.
[7] X.F. Liu, W.K. Yu, X.R. Yao, B. Dai, L.Z. Li, C. Wang, G.J. Zhai, Measurement dimensions compressed spectral imaging with a single point detector, Opt. Commun. 365 (2016) 173–179.
[8] W.K. Yu, X.F. Liu, X.R. Yao, C. Wang, S.Q. Gao, G.J. Zhai, Q. Zhao, M.L. Ge, Single photon counting imaging system via compressive sensing, 2012, arXiv preprint arXiv:1202.5866.
[9] C.W. He, T.T. Yin, W.B. Yu, et al., Information-weighted Gaussian matrix in compressed sensing for ECG, in: 28th Chinese Control and Decision Conference, Yinchuan, China, 2016, pp. 3827–3830.
[10] Z.Z. Zhu, C.B. Zhou, F.L. Liu, et al., Binarized measurement matrix for compressive sensing, J. Microw. 30 (2) (2014) 79–83.
[11] S. Xu, H.P. Yin, Y. Chai, et al., An improved Toeplitz measurement matrix for compressive sensing, Int. J. Distrib. Sens. Netw. 2014 (8) (2014) 1–8.
[12] S.H. Ji, Y. Xue, L. Carin, Bayesian compressive sensing, IEEE Trans. Signal Process. 56 (6) (2008) 2346–2356.
[13] J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inform. Theory 53 (12) (2007) 4655–4666.
[14] D. Needell, R. Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit, IEEE J. Sel. Top. Sign. Proces. 4 (2) (2010) 310–316.
[15] T. Blumensath, M.E. Davies, Iterative hard thresholding for compressed sensing, Appl. Comput. Harmon. Anal. 27 (3) (2009) 265–274.
[16] C.B. Li, An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing, Rice University, Houston, 2010.
[17] A. Krizhevsky, I. Sutskever, G. Hinton, ImageNet classification with deep convolutional neural networks, Commun. ACM 60 (6) (2017) 84–90.
[18] C. Dong, C.C. Loy, K. He, X. Tang, Learning a deep convolutional network for image super-resolution, in: 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 2014, pp. 184–199.
[19] P. Svoboda, M. Hradis, D. Barina, P. Zemcik, Compression artifacts removal using convolutional neural networks, 2016, arXiv preprint arXiv:1605.00366.
[20] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, A. Ashok, ReconNet: Non-iterative reconstruction of images from compressively sensed random measurements, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 449–458.
[21] T. Shimobaba, Y. Endo, T. Nishitsuji, et al., Computational ghost imaging using deep learning, Opt. Commun. 413 (2018) 147–151.
[22] A. Mousavi, A.B. Patel, R.G. Baraniuk, A deep learning approach to structured signal recovery, in: 53rd Annual IEEE Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, 2015, pp. 1336–1343.
[23] H. Yao, F. Dai, D. Zhang, et al., DR2-Net: Deep residual reconstruction network for image compressive sensing, Neurocomputing 359 (2019) 483–493.
[24] X. Xie, Y. Wang, G. Shi, et al., Adaptive measurement network for CS image reconstruction, in: 2nd CCF Chinese Conference on Computer Vision (CCCV), Tianjin, China, 2017, pp. 407–417.
[25] J. Kim, J.K. Lee, K.M. Lee, Accurate image super-resolution using very deep convolutional networks, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, 2016, pp. 1646–1654.
[26] M. Courbariaux, I. Hubara, D. Soudry, et al., Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1, 2016, arXiv preprint arXiv:1602.02830.
[27] Q.R. Yan, H. Wang, C.L. Yuan, et al., Large-area single photon compressive imaging based on multiple micro-mirrors combination imaging method, Opt. Express 26 (15) (2018) 19080–19090.
