Optics Communications 403 (2017) 257–261
Shadow-free single-pixel imaging

Shunhua Li a, Zibang Zhang a, Xiao Ma a, Jingang Zhong a,b,*
a Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China
b Key Laboratory of Optoelectronic Information and Sensing Technologies of Guangdong Higher Education Institutes, Jinan University, Guangzhou 510632, China
Keywords: Shadows removal; Single-pixel imaging; Image fusion; Computational imaging
Abstract

Single-pixel imaging is an innovative imaging scheme that has received increasing attention in recent years, because it is applicable to imaging at non-visible wavelengths and to imaging under weak light conditions. However, as in conventional imaging, shadows are likely to occur in single-pixel imaging and can be detrimental in practical use. In this paper, the principle of shadow occurrence in single-pixel imaging is analyzed, and a technique for shadow removal is then proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations, so that the shadows in the images reconstructed from the different detectors are complementary. A shadow-free reconstruction can then be derived by fusing the shadow-complementary images using the maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration method. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
1. Introduction

Shadows commonly exist in photography and have been extensively studied in digital image processing. On one hand, shadows have some advantages. For example, the shadows of objects allow one to extract object shape information [1–4], and in virtual reality, adding shadows to an object improves the realism of the scene. On the other hand, shadows also have disadvantages. Shadows in an image cause information loss and often cause problems for computer vision and graphics applications (such as edge detection [5], object tracking [6], and visual scene understanding [7]). For instance, in traffic surveillance, shadows can degrade the performance of object extraction and object tracking. In digital photography, shadow removal can help to improve the visual quality of photographs. Therefore, detecting and removing shadows are essential for computer vision and graphics applications.

In terms of shadow removal, many methods have been proposed for conventional imaging [8–16]. Single-frame methods [8–13] are mainly based on digital image processing. These methods are able to eliminate penumbra, but have difficulty removing umbra, because the image information is completely lost within the umbra and can hardly be retrieved unless prior knowledge is available. Multi-frame methods [14–16] have the potential to remove umbra: by obtaining multiple raw images with complementary shadows under multi-angle illumination, the lost image information in one image can be complemented from the others. However, the strategy of multi-angle illumination requires extra illumination devices and expense. Umbra removal therefore remains challenging for conventional imaging.

Single-pixel imaging [4,17–26] is a novel imaging scheme which allows one to use a detector without spatial resolution (that is, a single-pixel detector) to acquire the spatial information of a scene under view. By illuminating the scene with a sequence of patterns and collecting the resultant light signals from the scene, the image of the scene can be computationally reconstructed. Single-pixel imaging has received increasing attention [4,19–26] in recent years, for it is applicable to imaging at non-visible wavelengths (such as infrared, terahertz, etc.). In addition, single-pixel imaging has advantages for imaging under weak light conditions, because large-sized single-pixel detectors are easier and more cost-effective to produce than pixelated detectors. Moreover, single-pixel imaging can be combined with encryption technology for optical security applications [19,20].

As in conventional imaging, shadows are likely to occur in images acquired by single-pixel imaging. The principle of shadow occurrence in conventional imaging has been well investigated [27]. Due to the reciprocity [28,29] between conventional imaging and single-pixel imaging, the principle of shadow occurrence in single-pixel imaging differs from that in conventional imaging, and it is necessary to investigate how shadows occur in single-pixel imaging.

In this paper, the principle of shadow occurrence in single-pixel imaging is investigated. Unlike in conventional imaging, the field-of-view in single-pixel imaging remains the same for single-pixel detectors placed at different locations. A simple and computationally efficient technique for shadow-free single-pixel imaging is proposed: several spatially separated single-pixel detectors are used to obtain images whose shadows are complementary to each other, and a shadow-free image is obtained by fusing the shadow-complementary images using the maximum selection rule [30]. The proposed technique is able to reconstruct shadow-free monochromatic and full-color images using single-pixel detectors.
* Correspondence to: Jinan University, No.601, West Huangpu Avenue, Guangzhou, Guangdong, China.
E-mail address: [email protected] (J. Zhong).
http://dx.doi.org/10.1016/j.optcom.2017.07.058
Received 13 June 2017; Received in revised form 15 July 2017; Accepted 20 July 2017
Fig. 1. Illustration of shadow occurrence in (a) conventional imaging and (b) single-pixel imaging.
2. Shadow occurrence and removal in single-pixel imaging

2.1. Shadow occurrence

Shadows occur where the light is blocked. Shadows can be categorized into penumbra and umbra [27]. Penumbra reduces the brightness, contrast, and signal-to-noise ratio of images. Umbra is an extreme case of penumbra: it reduces brightness, contrast, and signal-to-noise ratio to zero and makes objects invisible, so the spatial information within the umbra is completely lost. For conventional imaging, shadows are set by the light sources and the field-of-view is determined by the camera. For single-pixel imaging, shadows are set by the detection units (such as photodiodes) and the field-of-view is determined by the illumination unit (such as a spatial light modulator), in accordance with Helmholtz reciprocity [31]. According to the reciprocity, detectors in single-pixel imaging are equivalent to light sources in conventional imaging, and the projector used for illumination in single-pixel imaging is equivalent to the camera in conventional imaging. The two setups shown in Fig. 1 are reciprocal to each other. In the conventional imaging setup shown in Fig. 1(a), there are two light sources. Considering point A in the scene, the obstacle blocks the light from light source 1 and only the light from light source 2 can reach it. As a result, the reflected intensity from point A is weaker than elsewhere, and penumbra occurs at point A in the image captured by the camera. If light source 2 is removed from the scene, light cannot reach point A at all and umbra results. In the single-pixel imaging setup shown in Fig. 1(b), there are two detectors and point A is under illumination by the projector. The backscattered light from point A can be detected by detector 2, while the obstacle blocks the light from point A to detector 1. Therefore, the image reconstructed from detector 1 in Fig. 1(b) is the same as the image captured by the camera in Fig. 1(a) when only light source 1 is present, and umbra occurs at point A. Similarly, the image reconstructed from detector 2 in Fig. 1(b) is the same as the image captured by the camera in Fig. 1(a) when only light source 2 is present, and neither umbra nor penumbra occurs at point A. Shadows in an image reconstructed by single-pixel imaging are thus set by the detectors. This feature allows one to capture a scene once and simultaneously derive multiple images, each of which corresponds to a unique illumination.

2.2. Shadow removal

In conventional imaging, shadow-free images can be obtained by illuminating the object from different angles. Thus, according to Helmholtz reciprocity, a shadow-free image can be obtained by using multiple detectors in single-pixel imaging. The shadow profile in the image reconstructed from each detector differs from those in the images reconstructed from the other detectors. As long as the shadows in all the reconstructed images are complementary, a shadow-free image can be computationally derived by fusing them. Fig. 2 shows the setup with which we derive shadow-complementary images. As shown in the figure, the scene to be imaged is under structured illumination by the projector, and four spatially separated photodiodes (PD1, PD2, PD3, PD4) collect the backscattered light from the scene. As the shading profile is determined by the position of each detector, the reconstructed images share the same field-of-view and are shadow-complementary. A data acquisition board (DAQ) digitizes the photodiode signals and feeds them to a computer for reconstruction.

In our experiment, we use Fourier single-pixel imaging for image acquisition [21]. Fourier single-pixel imaging reconstructs an image by projecting a sequence of Fourier basis patterns (also known as fringe patterns or sinusoidal patterns). The single-pixel detector collects the resultant backscattered light intensities, and the desired image is computationally reconstructed via an inverse Fourier transform. Each Fourier basis pattern is characterized by its spatial frequency (f_x, f_y) and initial phase φ. The intensity of a sinusoidal fringe pattern is expressed as

\[ P_\phi(x, y; f_x, f_y) = a + b \cdot \cos\left(2\pi f_x x + 2\pi f_y y + \phi\right), \tag{1} \]

where a is the mean intensity, b is the amplitude of the sinusoid, and (x, y) represents the 2-D Cartesian coordinates in the scene. When a Fourier pattern P_φ is projected onto the object surface, the single-pixel detector collects the resultant backscattered light intensity, resulting in the electrical signal D_φ:

\[ D_\phi(f_x, f_y) = D_n + \beta \iint_{\Omega} R(x, y)\, P_\phi(x, y; f_x, f_y)\, \mathrm{d}x\, \mathrm{d}y, \tag{2} \]
where Ω denotes the illuminated area, R(x, y) is the surface reflectance distribution function of the object in the scene, D_n is the response to the environmental illumination, and β is a scale factor whose value depends on the size and the location of the detector.

Fourier single-pixel imaging performs a spiral scan in Fourier space; in other words, it samples the coefficients of the Fourier spectrum along a spiral path. Each coefficient is acquired by projecting four Fourier basis patterns with the same spatial frequency (f_x, f_y) and different initial phases (φ = 0, π/2, π, 3π/2). The four patterns are denoted by P_0, P_{π/2}, P_π, P_{3π/2} and the resultant electrical signals by D_0, D_{π/2}, D_π, D_{3π/2}. The Fourier coefficient Ĩ(f_x, f_y) corresponding to the spatial frequency (f_x, f_y) is assembled by

\[ \tilde{I}(f_x, f_y) = \left[ D_0(f_x, f_y) - D_\pi(f_x, f_y) \right] + \mathrm{j} \cdot \left[ D_{\pi/2}(f_x, f_y) - D_{3\pi/2}(f_x, f_y) \right], \tag{3} \]

where j denotes the imaginary unit.
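As an aside, substituting Eq. (1) into Eq. (2) makes explicit why the four-step combination in Eq. (3) isolates the Fourier coefficient; this intermediate step is implicit in the derivation above. Writing θ = 2π(f_x x + f_y y),

\[
\begin{aligned}
D_0 - D_\pi &= 2\beta b \iint_{\Omega} R(x, y) \cos\theta \,\mathrm{d}x\,\mathrm{d}y, \\
D_{\pi/2} - D_{3\pi/2} &= -2\beta b \iint_{\Omega} R(x, y) \sin\theta \,\mathrm{d}x\,\mathrm{d}y, \\
\tilde{I}(f_x, f_y) &= 2\beta b \iint_{\Omega} R(x, y)\, e^{-\mathrm{j}\theta} \,\mathrm{d}x\,\mathrm{d}y,
\end{aligned}
\]

so that, up to the constant factor 2βb, Ĩ(f_x, f_y) is the Fourier transform of the reflectance distribution R(x, y); the environmental response D_n and the mean intensity a cancel in the differences.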
With the complete Fourier spectrum acquired, the object image is reconstructed by applying a 2-D inverse Fourier transform:

\[ I(x, y) = F^{-1}\left\{ \tilde{I}(f_x, f_y) \right\}, \tag{4} \]

where F^{-1} denotes the inverse Fourier transform operator and I(x, y) is the final reconstructed image.
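To make the acquisition and reconstruction pipeline of Eqs. (1)–(4) concrete, the following is a minimal NumPy sketch that simulates four-step Fourier single-pixel imaging of a synthetic reflectance map. It is only an illustration: it samples the full Fourier spectrum on a grid rather than along a spiral path, assumes an ideal detector (D_n = 0, β = 1), and all function and variable names are ours, not from the paper.

```python
import numpy as np

def fourier_basis_pattern(fx, fy, phi, size, a=0.5, b=0.5):
    """Sinusoidal fringe pattern of Eq. (1); (fx, fy) in cycles per frame."""
    y, x = np.mgrid[0:size, 0:size] / size
    return a + b * np.cos(2 * np.pi * (fx * x + fy * y) + phi)

def fourier_spi_reconstruct(scene):
    """Simulated four-step Fourier single-pixel imaging of a reflectance map."""
    n = scene.shape[0]
    phases = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
    spectrum = np.zeros((n, n), dtype=complex)
    for fy in range(n):
        for fx in range(n):
            # Ideal single-pixel measurements D_phi of Eq. (2) (D_n = 0, beta = 1).
            D = [np.sum(scene * fourier_basis_pattern(fx, fy, p, n)) for p in phases]
            # Fourier coefficient assembled as in Eq. (3).
            spectrum[fy, fx] = (D[0] - D[2]) + 1j * (D[1] - D[3])
    # Eq. (4): the inverse Fourier transform recovers the image (up to a constant scale).
    return np.real(np.fft.ifft2(spectrum))

scene = np.zeros((32, 32))
scene[8:24, 8:24] = 1.0                 # toy reflectance map
image = fourier_spi_reconstruct(scene)  # approximately proportional to `scene`
```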
2.2.1. Integrated detector method

To simulate a shadow-free operating lamp, we can sum up the responses of all the photodiodes. The summed response D_s, termed the integrated response, is obtained by

\[ D_s = D_1 + D_2 + D_3 + D_4, \tag{5} \]

where D_1, D_2, D_3, and D_4 are the signals produced by PD1, PD2, PD3, and PD4, respectively. The integrated response is then used for image reconstruction. It should be noted that the resulting reconstructed image is the same as the mean of the raw images reconstructed from the individual detectors. This method enables umbra removal, but the umbra is merely replaced by penumbra.
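The equivalence noted above follows from the linearity of Eqs. (3) and (4) in the detector signals. The short sketch below checks it numerically, assuming the per-detector Fourier spectra have already been assembled; the reconstruction from the integrated response equals four times the mean of the individual reconstructions, i.e. the mean up to normalization.

```python
import numpy as np

# Random stand-ins for the four per-detector spectra assembled via Eq. (3).
rng = np.random.default_rng(0)
spectra = [rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32)) for _ in range(4)]

# Reconstruction from the integrated response of Eq. (5) ...
img_integrated = np.real(np.fft.ifft2(sum(spectra)))
# ... equals the sum (four times the mean) of the individual reconstructions,
# because the inverse Fourier transform is linear.
img_mean = np.mean([np.real(np.fft.ifft2(s)) for s in spectra], axis=0)
assert np.allclose(img_integrated, 4 * img_mean)
```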
Fig. 2. Experimental setup. PD1, PD2, PD3, PD4 are four spatially separated photodiodes, and DAQ is a data acquisition board.
Fig. 3. Images reconstructed by a single photodiode. (a)–(d) are reconstructed by detectors PD1, PD4, PD3, and PD2, respectively.
2.2.2. The shadow removal method of image fusion

The integrated detector method allows for the removal of umbra, but it cannot remove penumbra. To obtain shadow-free images, we propose a fusion method based on the maximum selection rule: the intensity at point (x, y) in the final shadow-free image is the maximum intensity at that pixel among all the reconstructed images,

\[ I(x, y) = \max\left\{ I_1(x, y), I_2(x, y), I_3(x, y), I_4(x, y) \right\}, \tag{6} \]

where I_1(x, y), I_2(x, y), I_3(x, y), and I_4(x, y) are the images reconstructed from PD1, PD2, PD3, and PD4, respectively. This method exploits the fact that all raw images have the same field-of-view and are shadow-complementary. Therefore, image registration is not necessary, and the umbra in one raw image can be replaced by effective image information from the others. It should be noted that in conventional imaging it is challenging to obtain shadow-complementary images simultaneously from the same perspective.
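A minimal sketch of this fusion step is given below, assuming four co-registered reconstructions are already available as NumPy arrays; the names are illustrative only.

```python
import numpy as np

def fuse_max(images):
    """Pixel-wise maximum selection rule of Eq. (6)."""
    # All single-pixel reconstructions share the same field-of-view,
    # so no registration is required before taking the per-pixel maximum.
    return np.max(np.stack(images, axis=0), axis=0)

# Usage with four hypothetical reconstructions I1..I4 of identical size:
# fused = fuse_max([I1, I2, I3, I4])
```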
This image fusion method is based on comparing image intensities. To ensure that the intensities of the four raw images acquired by single-pixel imaging can be matched, the essential conditions are:

(1) The amplification of the different detectors should be the same.
(2) The intensity of the reflected light should be isotropic.

However, it is difficult to satisfy these conditions in practice, and the same pixel in different reconstructed images is likely to take different intensity values. To solve this problem of intensity mismatch, it is necessary to calibrate the intensity values among the reconstructed raw images.
2.2.3. Intensity calibration

To solve the problem of intensity mismatch, the intensity of each image can be calibrated by multiplying it by a coefficient, so that the reconstructed images become k_1 I_1(x, y), k_2 I_2(x, y), k_3 I_3(x, y), and k_4 I_4(x, y). The final image is then fused by

\[ I(x, y) = \max\left\{ k_1 I_1(x, y), k_2 I_2(x, y), k_3 I_3(x, y), k_4 I_4(x, y) \right\}, \tag{7} \]

where the coefficients k_1–k_4 are calculated via

\[ k_i = \sum M(x, y) \Big/ \sum \left[ I_i(x, y) \cdot M(x, y) \right], \quad i = 1, 2, 3, 4, \tag{8} \]

where M(x, y) denotes a mask used to pick out the reliable pixels from which the coefficients are calculated. Reliable pixels are pixels with normal reflectivity (neither too high nor too low) that do not lie in the shadows:

\[ M(x, y) = \begin{cases} 1, & T_1 \le I_1(x, y) \le T_2 \ \text{and}\ T_3 \le I_2(x, y) \le T_4 \ \text{and}\ T_5 \le I_3(x, y) \le T_6 \ \text{and}\ T_7 \le I_4(x, y) \le T_8, \\ 0, & \text{otherwise}, \end{cases} \tag{9} \]

where T_1–T_8 are intensity thresholds for selecting reliable pixels and can be determined from the gray histograms.
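The sketch below implements this calibration step under the same assumptions as the fusion sketch above: the four reconstructions are NumPy arrays, and the threshold pairs (T_1, T_2), …, (T_7, T_8) have already been chosen from the gray histograms. Names and structure are ours, not the authors'.

```python
import numpy as np

def calibration_mask(images, bounds):
    """Reliable-pixel mask M(x, y) of Eq. (9).

    `bounds` holds one (T_low, T_high) threshold pair per image.
    """
    mask = np.ones_like(images[0], dtype=bool)
    for img, (lo, hi) in zip(images, bounds):
        mask &= (img >= lo) & (img <= hi)
    return mask

def fuse_with_calibration(images, bounds):
    """Calibrated maximum-selection fusion of Eqs. (7) and (8)."""
    mask = calibration_mask(images, bounds)
    # Eq. (8): k_i = sum(M) / sum(I_i * M); each k_i rescales image i so that
    # the reliable (unshadowed) pixels have a comparable mean intensity.
    ks = [mask.sum() / (img * mask).sum() for img in images]
    return np.max(np.stack([k * img for k, img in zip(ks, images)], axis=0), axis=0)
```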
3. Experiment

3.1. Experimental setup

The experimental setup is shown in Fig. 2. The computer generates the four-step phase-shifting sinusoidal patterns of different spatial frequencies. The illumination patterns (257 × 257 pixels) are successively projected by a commercial digital light projector (Acer K750) operating at its maximum resolution of 1920 × 1080 pixels. The scene consists of a pair of clay figurines and a piece of A4 white printing paper, and the illuminated area is 0.154 × 0.154 m² in size. Four spatially separated photodiodes (HAMAMATSU S1227-1010BR) serve as the single-pixel detectors and collect the backscattered light from the scene.
Fig. 4. Image reconstructed using the integrated detector method.

Fig. 7. The mask M(x, y).
Fig. 8. Shadow-free image reconstructed by using the method of image fusion with calibration.
Fig. 5. Image reconstructed by using the method of image fusion.
A DAQ (NI USB-6343) converts the photodiode signals to digital values and feeds them to the computer, which computationally reconstructs an image for each photodiode (PD).

3.2. Results

Fig. 3(a)–(d) show the four images reconstructed from PD1, PD2, PD3, and PD4. The reconstruction of each image involves only a 2-D inverse fast Fourier transform, which takes 0.058 s on average. The image shown in Fig. 4 is reconstructed using the integrated detector method. As the figure shows, the umbra in this reconstruction is eliminated and replaced by penumbra, and the object appears as if illuminated by a shadow-free lamp. It should be noted that the reconstruction shown in Fig. 4 is equivalent to the mean of the four reconstructions shown in Fig. 3.

Fig. 5 shows the reconstruction obtained by fusing the four images shown in Fig. 3 according to the maximum selection rule. The result in Fig. 5 is a notable improvement over that in Fig. 4; however, it is still not shadow-free, because the intensity mismatch leaves the penumbra incompletely eliminated. To obtain a shadow-free image, we employ the aforementioned intensity calibration process.
Fig. 6. The gray histograms of I_1(x, y), I_2(x, y), I_3(x, y), and I_4(x, y).
Fig. 9. Full-color shadow-free image reconstruction. (a)–(c) Reconstructed shadow-free sub-images in the red, green, and blue channels, respectively. (d) The full-color shadow-free image composed from (a)–(c).
We determine the parameters T_1–T_8 from the gray histograms shown in Fig. 6. In our experiment, the values of T_1, T_2, T_3, T_4, T_5, T_6, T_7, and T_8 are 0.203, 0.781, 0.178, 0.807, 0.156, 0.635, 0.201, and 0.914, respectively. With these parameters, the mask shown in Fig. 7 is derived according to Eq. (9). With the mask, the coefficients k_1, k_2, k_3, and k_4 are obtained; in our experiment, k_1 = 2.336, k_2 = 2.193, k_3 = 2.778, and k_4 = 1.873. The final reconstruction with calibration is shown in Fig. 8, where the intensity mismatch problem has been solved and the shadows are completely eliminated. The shadow removal involves a maximum selection over the calibrated images and takes 0.159 s. The results show that the proposed method is effective and computationally efficient for shadow removal.

To further obtain a full-color shadow-free image, we use sub-images corresponding to the red, green, and blue channels, as shown in Fig. 9(a)–(c). The sub-images are obtained by projecting red, green, and blue patterns, respectively. The final shadow-free full-color image shown in Fig. 9(d) is derived by applying the maximum selection rule with calibration to each channel.
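As an illustration of the channel-wise procedure, the following sketch stacks three independently fused channels into an RGB image; it assumes the per-channel calibrated reconstructions (k_i I_i) are already available, and the dictionary layout is ours.

```python
import numpy as np

def fuse_color(channels):
    """Fuse each color channel independently and stack into an RGB image.

    `channels[c]` is the list of the four calibrated reconstructions
    (k_i * I_i) for channel c; each channel is fused with the maximum
    selection rule exactly as in the monochromatic case.
    """
    return np.dstack([np.max(np.stack(channels[c], axis=0), axis=0)
                      for c in ("red", "green", "blue")])
```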
4. Conclusion

The proposed technique allows one to obtain a shadow-free image by using a small number of spatially separated single-pixel detectors. The images reconstructed from the individual detectors are shadow-complementary, and the spatial information hidden in the shadows is recovered by fusing them with the maximum selection rule. The proposed technique might find applications in fields such as biomedicine and engineering where object shadows are undesired in images.

Acknowledgment

This work is supported by the National Natural Science Foundation of China [Grant No. 61475064].

References

[1] B.K.P. Horn, Obtaining Shape from Shading Information, MIT Press, 1989, pp. 123–171.
[2] B.K.P. Horn, M.J. Brooks, The variational approach to shape from shading, Comput. Vis. Graph. Image Process. 33 (1986) 174–208.
[3] K. Ikeuchi, B.K.P. Horn, Numerical shape from shading and occluding boundaries, Artif. Intell. 17 (1981) 141–184.
[4] B. Sun, M.P. Edgar, R. Bowman, 3D computational imaging with single-pixel detectors, Science 340 (2013) 844–847.
[5] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 6 (1986) 679–698.
[6] H. Jiang, M. Drew, Tracking objects with shadows, in: Proc. Int'l Conf. Multimedia and Expo, 2003, pp. 512–521.
[7] L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Anal. Mach. Intell. 20 (1998) 1254–1259.
[8] G.D. Finlayson, M.S. Drew, C. Lu, Entropy minimization for shadow removal, Int. J. Comput. Vis. 85 (2009) 35–57.
[9] S.H. Khan, M. Bennamoun, F. Sohel, R. Togneri, Automatic shadow detection and removal from a single image, IEEE Trans. Pattern Anal. Mach. Intell. 38 (2016) 431–446.
[10] R. Guo, Q. Dai, D. Hoiem, Single-image shadow detection and removal using paired regions, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 2033–2040.
[11] L. Zhang, Q. Zhang, C. Xiao, Shadow remover: image shadow removal based on illumination recovering optimization, IEEE Trans. Image Process. 24 (2015) 4623–4636.
[12] L. Xu, F. Qi, R. Jiang, Shadow removal from a single image, in: Sixth International Conference on Intelligent Systems Design and Applications (ISDA'06), Vol. 2, 2006, pp. 1049–1054.
[13] Y. Shor, D. Lischinski, The shadow meets the mask: pyramid-based shadow removal, Comput. Graph. Forum 27 (2008) 577–586.
[14] J.M. Wang, Shadow detection and removal for traffic images, in: IEEE International Conference on Networking, Sensing and Control, Vol. 1, 2004, pp. 649–654.
[15] P. Liu, H. Wang, Z. Zheng, Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method, Opt. Commun. 319 (2014) 133–140.
[16] P. Liu, Y. Zhang, Z. Zheng, LED surgical lighting system with multiple free-form surfaces for highly sterile operating theater application, Appl. Opt. 53 (2014) 3427–3437.
[17] O. Katz, Y. Bromberg, Y. Silberberg, Compressive ghost imaging, Appl. Phys. Lett. 95 (2009) 131110.
[18] F. Ferri, et al., Differential ghost imaging, Phys. Rev. Lett. 104 (2010) 253603.
[19] W. Chen, Optical cryptosystem based on single-pixel encoding using the modified Gerchberg–Saxton algorithm with a cascaded structure, JOSA A 33 (2016) 2305–2311.
[20] W. Chen, Correlated-photon secured imaging by iterative phase retrieval using axially-varying distances, IEEE Photonics Technol. Lett. 28 (2016) 1932–1935.
[21] Z. Zhang, X. Ma, J. Zhong, Single-pixel imaging by means of Fourier spectrum acquisition, Nature Commun. 6 (2015) 6225.
[22] S.S. Welsh, M.P. Edgar, R. Bowman, P. Jonathan, B. Sun, Fast full-color computational imaging with single-pixel detectors, Opt. Express 21 (2013) 23068–23074.
[23] C.B. Xue, X.R. Yao, X.F. Liu, G.J. Zhai, Q. Zhao, X.Y. Guo, Improving the signal-to-noise ratio of complementary compressive imaging with a threshold, Opt. Commun. 393 (2017) 118–122.
[24] Z. Zhang, J. Zhong, Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels, Opt. Lett. 41 (2016) 2497–2500.
[25] Y. Zhang, et al., 3D single-pixel video, J. Opt. 18 (2016) 035203.
[26] M. Zhao, C. Kang, P. Tian, W. Xu, Color single-pixel imaging based on multiple measurement vectors model, Opt. Eng. 55 (2016) 033103.
[27] D.A. Forsyth, J. Ponce, Computer Vision: A Modern Approach, 2003, pp. 88–101.
[28] P. Sen, et al., Dual photography, ACM Trans. Graph. 24 (2005) 745–755.
[29] P. Sen, S. Darabi, Compressive dual photography, Comput. Graph. Forum 24 (2009) 609–618.
[30] D.A. Godse, D.S. Bormane, Wavelet based image fusion using pixel based maximum selection rule, Int. J. Eng. Sci. Technol. (IJEST) 3 (2011) 5572–5577.
[31] M. Born, E. Wolf, Principles of Optics, Pergamon Press, 1959.