Spatial and temporal multiplexing array imaging lidar technique based on OOCDMA

Optics and Lasers in Engineering 129 (2020) 106066


Xingyu Yang a,b,∗, Liting Hao b, Helong Wang c, Yuanqing Wang b

a School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China
b School of Electronic Science and Engineering, Nanjing University, Nanjing, 210046, China
c Beijing Research Institute of Precise Mechatronics and Controls, Beijing, 100076, China

ARTICLE INFO

Keywords: Array imaging lidar; OOCDMA; High resolution; High efficiency

ABSTRACT

To overcome the limitations in imaging speed and resolution of traditional imaging lidar, we propose a spatial and temporal multiplexing array imaging lidar technique. Based on optical orthogonal code division multiple access (OOCDMA), a square laser beam carrying 2^{2N}-order code division multiple access coding is divided into 2^N × 2^N sub-beams, which are then projected onto the targets without scanning. The backscattered signals are received by an array of M detectors, and each detector can obtain 2^{2N} pixels, so M × 2^{2N} pixels of information are acquired at the terminal. The array imaging lidar takes advantage of a spatial and temporal coding multiplexing technique that uses only M detectors to acquire M × 2^{2N} pixels. Therefore, the imaging speed and resolution of the lidar can be improved substantially. This provides a novel approach to high-resolution, high-efficiency imaging lidar systems with a small-scale detector array.

1. Introduction

Imaging lidar has been widely used in urban digital model reconstruction [1], geological and geomorphological exploration [2], space research [3], battlefield reconnaissance and military command [4,5], etc. High-resolution, high-efficiency imaging lidar is therefore increasingly important. According to the working mechanism, lidar can be divided into three types: point, linear, and array imaging lidar. Generally, point imaging lidar scans the target point-by-point with a scanning device to acquire information about the targets [6]. Linear and array imaging lidar usually rely on the carrier platform to push-broom scan with a detector array [7,8]. Because the mechanical scanning device is replaced by the detector array, the error caused by mechanical scanning is eliminated, and the latter two types of lidar systems offer improved resolution and efficiency [9]. Therefore, the large-scale detector array has become one of the key technologies for the development of imaging lidar.

Over the last 20 years, large-scale detector arrays have been widely applied in imaging lidar systems. Lincoln Laboratory of the Massachusetts Institute of Technology (MIT) has focused on the development of imaging lidar with Geiger-mode avalanche photodiodes (Gm-APDs), and a 256 × 64 Gm-APD array was applied in their lidar system in 2009 [10,11]. Raytheon is committed to researching linear-mode avalanche photodiode (Lm-APD) arrays for 3D imaging lidar systems, and a 256 × 256 linear-mode focal-plane APD array was used in the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program to achieve real-time 3D staring imaging of the lunar surface topography in 2011 [12]. However, the ever-increasing scale of detector arrays is detrimental to miniaturized, integrated, and low-cost imaging lidar systems.

In this paper, a spatial and temporal multiplexing array imaging lidar technique is proposed which can effectively resolve the above contradiction. Based on optical orthogonal code division multiple access, a square laser beam carrying 2^{2N}-order code division multiple access coding is divided into 2^N × 2^N sub-beams, which are then projected onto the objects. The backscattered signals are received by an array of M detectors, and each detector can acquire 2^{2N} pixels, so M × 2^{2N} pixels are acquired by the OOCDMA imaging lidar. The proposed array imaging lidar therefore uses only M detectors to detect M × 2^{2N} pixels, and the size of the detector array is no longer the only limitation on developing higher-resolution, higher-efficiency imaging lidar systems.

2. Structure of the OOCDMA array imaging lidar system

2.1. System structure

The OOCDMA imaging lidar system can be mounted on an aircraft or vehicle platform. As shown in Fig. 1, the array imaging

∗ Corresponding author. E-mail address: [email protected] (X. Yang).

https://doi.org/10.1016/j.optlaseng.2020.106066 Received 9 August 2019; Received in revised form 12 January 2020; Accepted 12 February 2020 0143-8166/© 2020 Elsevier Ltd. All rights reserved.


Fig. 1. The schematic of airborne OOCDMA array imaging lidar.

Fig. 2. The main compositions and workflow of OOCDMA array imaging lidar.

lidar system is carried on an airplane, and we build the following model based on the airborne OOCDMA array imaging lidar system. The coordinate system is first established as shown in Fig. 1: the horizontal x-axis is the flight direction of the aircraft, the horizontal y-axis is perpendicular to the x-axis, and the vertical z-axis is the height direction of the flight. The OOCDMA lidar continuously emits pulsed laser light to illuminate the targets while the aircraft platform is flying. In this process, each laser pulse forms 2^N × 2^N laser footprints at the same moment. It is also worth mentioning that the x-axis and y-axis form the two-dimensional plane of the coding array, which is 2^N (x) × 2^N (y) as shown in Fig. 3.

The main components and workflow of this novel high-resolution, high-efficiency imaging lidar system are illustrated in Fig. 2. The pulsed laser is the illumination source for active detection. The pulsed laser is shaped into a square beam before encoding. Then, the square laser beam is encoded into an array-coded laser beam and projected onto the targets by the emission lens. The targets are illuminated over a 2^N × 2^N square region, which can be regarded as 2^N × 2^N laser footprints. At the receiving end, the backscattered signals formed by these laser footprints are received by the optical coupler and coupled to the M-detector array by the spatial and temporal multiplexing technology. Combined with the GPS and INS systems, these multiplexed signals are acquired by the data acquisition system. Finally, data processing and the imaging algorithm are implemented by computer. In particular, the intensity and range information can be decoded by the digital decoding algorithm, which provides the intensity and time data for the three-dimensional reconstruction process.

2.2. Essential codec module

The essential codec module includes the code division multiple access (CDMA) optical encoder and the multiplexing optical coupler. The CDMA optical encoder is a digital micro-mirror device (DMD) array, shown in Fig. 3, which can be switched 'on' and 'off' by a Micro-Electro-Mechanical System (MEMS). When a DMD mirror is 'on', the laser passes through and is projected onto the target, and this laser footprint is recorded as '1' in the encoding; when the mirror is 'off', the laser footprint is recorded as '0'.

As shown in Fig. 4(a), the multiplexing optical coupler mainly consists of parts A, B and C: A is the shaped and compressed square laser beam, B is a fiber-arrangement coupler, and C is a combined fiber array. The cross-section of the fiber arrangement at position B in Fig. 4(a) is shown in Fig. 4(b); note that Fig. 4 shows only one possible arrangement. The cross-section of the combined fiber array at position C in Fig. 4(a) is shown in Fig. 4(c): backscatter from different locations and with different codes is coupled to the M detectors, from channel 1 (#1) to channel M (#M), by fiber coupling.
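The footprint-to-channel bookkeeping implied by Fig. 4 can be sketched in a few lines. The tiling below is a hypothetical arrangement chosen purely for illustration (the paper notes that Fig. 4 shows only one possible layout); the function name, the 8 × 8 tile grid, and the raster ordering of code indices are all assumptions, not the published design:

```python
from collections import defaultdict

# Hypothetical fiber arrangement: tile the focal plane into M blocks of
# 2^N x 2^N footprints. Each block's fibers are bundled onto one detector,
# and the position inside the block selects which CDMA codeword the
# footprint carries.

N = 2                        # block side is 2^N = 4 footprints
TILES_PER_ROW = 8            # assume an 8 x 8 grid of blocks, so M = 64
M = TILES_PER_ROW ** 2

def footprint_to_channel(row, col):
    """Map a global footprint (row, col) to (detector channel, code index)
    under the block-tiling assumption."""
    side = 2 ** N
    tile_r, in_r = divmod(row, side)
    tile_c, in_c = divmod(col, side)
    channel = tile_r * TILES_PER_ROW + tile_c   # which fiber bundle / detector
    code_index = in_r * side + in_c             # which codeword within the block
    return channel, code_index

# Each detector must collect exactly 2^(2N) footprints, one per codeword,
# so that M detectors resolve M * 2^(2N) pixels in total.
codes_per_channel = defaultdict(set)
extent = TILES_PER_ROW * 2 ** N                 # 32 x 32 footprints overall
for r in range(extent):
    for c in range(extent):
        ch, code = footprint_to_channel(r, c)
        codes_per_channel[ch].add(code)

print(len(codes_per_channel))                        # -> 64 detector channels
print({len(v) for v in codes_per_channel.values()})  # -> {16}
```

Under this (assumed) tiling, every channel receives all 2^{2N} code indices exactly once, which is the property the codec relies on regardless of the physical fiber layout chosen.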


3. Principle of the OOCDMA imaging lidar

3.1. Encoding and multiplexing

According to the laser propagation model, the pulsed laser can be described by Eq. (1), where \tau = T_{1/2}/3.5 and T_{1/2} is the full width at half maximum (FWHM) of the laser pulse [13,14].

p(t) = \left(\frac{t}{\tau}\right)^{2} e^{-t/\tau}    (1)

For the OOCDMA array imaging lidar, the code should exhibit good cross-correlation between different code elements, i.e., good orthogonality. To describe the encoding process succinctly, we choose N = 2. According to Section 2.2, we set a 16-order unipolar Walsh-Hadamard code matrix as formula (2).

\mathrm{WHC} = \begin{bmatrix} h_1 \\ h_2 \\ h_3 \\ h_4 \\ h_5 \\ h_6 \\ h_7 \\ h_8 \\ h_9 \\ h_{10} \\ h_{11} \\ h_{12} \\ h_{13} \\ h_{14} \\ h_{15} \\ h_{16} \end{bmatrix} = \begin{bmatrix} 1111111111111111 \\ 1010101010101010 \\ 1100110011001100 \\ 1001100110011001 \\ 1111000011110000 \\ 1010010110100101 \\ 1100001111000011 \\ 1001011010010110 \\ 1111111100000000 \\ 1010101001010101 \\ 1100110000110011 \\ 1001100101100110 \\ 1111000000001111 \\ 1010010101011010 \\ 1100001100111100 \\ 1001011001101001 \end{bmatrix}_{16 \times 16}    (2)

Fig. 3. The schematic of the CDMA optical encoder.

However, the unipolar matrix WHC is not orthogonal, which would introduce errors into the decoding process. A local code matrix \overline{\mathrm{WHC}} is therefore constructed by replacing every '0' of WHC with '−1'; \overline{\mathrm{WHC}} exhibits good cross-correlation between different code elements, i.e., good orthogonality. The product of WHC and \overline{\mathrm{WHC}} shows that the two matrices are mutually orthogonal, so WHC is a reasonable encoding matrix. In addition, WHC = [h_{ij}], where h_{ij} represents whether the i-th pixel is illuminated in the j-th time interval. The multiplexed encoded full waveform can then be expressed as formula (3), where a_j and t_j represent the amplitude and time of flight, respectively, the reciprocal of T is the pulse repetition frequency of the laser, and N(t) is the noise.

y(a, t) = \sum_{j=1}^{2^{2N}} \sum_{i=1}^{2^{2N}} h_{ij} \, a_j \, p(t - t_j - (i-1) \cdot T) + N(t)    (3)

Fig. 4. The schematic of the multiplexing optical coupler in the CDMA optical encoder. (a) Laser footprints received by the optical coupler and coupled to the M-detector array. (b) The cross-section of the fiber array at position B in Fig. 4(a). (c) The cross-section of the fiber array at position C in Fig. 4(a).

Fig. 5. The results of multiplexing with different M and N.

Fig. 6. Multiplexing encoded full waveform with 4096 pixels where N = 3, M = 64.

To illustrate the spatial and temporal multiplexing process clearly, the multiplexing results for different choices of M and N are shown in Fig. 5. Evidently, the spatial and temporal multiplexing approach, combined with the codec technology, can use a small-scale detector array to acquire a large number of pixels.

3.2. Digital decoding mathematical model

Based on the analysis in Section 3.1, the orthogonal relationship between the one-dimensional code h_i and the local code \overline{h}_j is given by formula (4).

h_i \cdot \overline{h}_j^{T} = \sum_{k=1}^{2^{2N}} h_{ik} \overline{h}_{jk} = \begin{cases} 0, & i \neq j \\ \frac{1}{2} \cdot 2^{2N}, & i = j \end{cases}    (4)

Combining the results of formula (4), the product of WHC and the transposed local matrix \overline{\mathrm{WHC}}^{T} is a diagonal matrix, as in Eq. (5), when the encoding order is 2^{2N}.

\mathrm{WHC} \times \overline{\mathrm{WHC}}^{T} = \begin{bmatrix} \frac{1}{2} \cdot 2^{2N} & 0 & \cdots & 0 \\ 0 & \frac{1}{2} \cdot 2^{2N} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{2} \cdot 2^{2N} \end{bmatrix}    (5)

Therefore, WHC and the transposed local matrix \overline{\mathrm{WHC}}^{T} are completely orthogonal.
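As a sanity check on the codec algebra, the following NumPy sketch builds the 16-order unipolar Walsh-Hadamard matrix of formula (2) by Sylvester recursion and verifies the orthogonality of formulas (4) and (5). One caveat surfaces immediately: the all-ones codeword h_1 correlates with every bipolar local code, so the product is diagonal only on h_2 through h_16; treating h_1 separately is our assumption here, as the text does not discuss it:

```python
import numpy as np

def walsh_hadamard_unipolar(order):
    """Order x order unipolar Walsh-Hadamard code matrix (Sylvester
    construction): start from [[1]], double repeatedly, then map -1 -> 0."""
    H = np.array([[1]])
    while H.shape[0] < order:
        H = np.block([[H, H], [H, -H]])   # bipolar Sylvester step
    return ((H + 1) // 2).astype(int)

N = 2
order = 2 ** (2 * N)                      # 16 codewords for N = 2
WHC = walsh_hadamard_unipolar(order)      # unipolar encoding matrix, rows h_i
WHC_local = 2 * WHC - 1                   # local code: every 0 replaced by -1

# Row h_5 should match the fifth codeword of formula (2)
print(''.join(map(str, WHC[4])))          # -> 1111000011110000

# Formula (5): WHC x local^T, restricted to h_2..h_16, is (2^(2N)/2) * I
G = WHC @ WHC_local.T
print(G[1:, 1:])                          # 8 on the diagonal, 0 elsewhere
```

The first column of G shows the caveat concretely: every unipolar row has correlation 2^{2N}/2 with the all-ones local code, which is why practical unipolar CDMA schemes usually handle (or drop) the all-ones codeword.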

We build a digital decoding mathematical model based on the above conclusions. The parameters {a, t} of each pixel should be calculated. Assume the k-th pixel is to be decoded; taking advantage of the orthogonality between the local code and the backscattered encoded full waveform, the decoding model can be expressed as Eq. (6), where N'(t) is the cumulative noise.

\overline{h}_k \cdot y(a, t) = \sum_{i=1}^{2^{2N}} \overline{h}_{ki} \cdot y(a, t)
= \sum_{i=1}^{2^{2N}} \overline{h}_{ki} \cdot \left\{ \sum_{j=1}^{2^{2N}} \sum_{i=1}^{2^{2N}} h_{ij} \, a_j \, p(t - t_j - (i-1) \cdot T) + N(t) \right\}
= \sum_{j=1}^{2^{2N}} \left( \sum_{i=1}^{2^{2N}} \overline{h}_{ki} h_{ij} \right) a_j \, p(t - t_j - (i-1) \cdot T) + \sum_{i=1}^{2^{2N}} \overline{h}_{ki} \, N(t)
= \frac{1}{2} \cdot 2^{2N} \, a_k \, p(t - t_k) + N'(t)    (6)

Fig. 7. The 64 consecutive pixels decoded information from the multiplexing encoded full waveform with 4096 pixels.

Fig. 8. The 64 pixels decoded information from M (M = 64) detectors.

Thus, the intensity a_k and the time-of-flight term p(t - t_k) can be obtained from Eq. (6). Finally, the range information is calculated from the time of flight as in Eq. (7), where c_0 = 2.9979 × 10^5 km/s is the speed of light.

\mathrm{range} = \frac{c_0 \times \Delta t}{2}    (7)
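To make the chain Eq. (3) → Eq. (6) → Eq. (7) concrete, the following NumPy sketch simulates one detector's multiplexed waveform for 16 pixels and then decodes a single pixel with its bipolar local code. All timing parameters (τ, T, the sampling rate, the echo delays and amplitudes) are illustrative assumptions rather than values from the paper, and the decoded index k = 5 is arbitrary (k = 0, the all-ones codeword, would need separate handling):

```python
import numpy as np

# --- Eq. (1): pulse model p(t) = (t/tau)^2 * exp(-t/tau) ---
def pulse(t, tau):
    t = np.maximum(t, 0.0)                 # causal: no signal before emission
    return (t / tau) ** 2 * np.exp(-t / tau)

# Illustrative parameters (assumed, not from the paper)
tau = 2.0e-9        # pulse shape parameter (s); peak of p(t) is at t = 2*tau
T = 200.0e-9        # pulse repetition period (s)
fs = 5.0e9          # sampling rate (Hz)
N = 2
n_pix = 2 ** (2 * N)                       # 16 pixels on one detector

rng = np.random.default_rng(0)
amps = rng.uniform(0.5, 1.0, n_pix)        # echo amplitudes a_j
tofs = rng.uniform(20e-9, 60e-9, n_pix)    # times of flight t_j (s)

# Unipolar Walsh-Hadamard codes: h_ij = pixel j illuminated in interval i
H = np.array([[1]])
while H.shape[0] < n_pix:
    H = np.block([[H, H], [H, -H]])
WHC = ((H + 1) // 2).astype(int)

# --- Eq. (3): multiplexed full waveform over 2^(2N) repetition intervals ---
n_samp = int(round(T * fs))                # samples per interval
t_axis = np.arange(n_pix * n_samp) / fs
y = np.zeros_like(t_axis)
for i in range(n_pix):
    for j in range(n_pix):
        y += WHC[i, j] * amps[j] * pulse(t_axis - tofs[j] - i * T, tau)

# --- Eq. (6): correlate with the bipolar local code of pixel k ---
def decode(y, k):
    local = 2 * WHC[:, k] - 1              # column k with every 0 -> -1
    out = np.zeros(n_samp)
    for i in range(n_pix):                 # fold interval i, weighted by code
        out += local[i] * y[i * n_samp:(i + 1) * n_samp]
    return out                             # ~ (2^(2N)/2) * a_k * p(t - t_k)

k = 5
d = decode(y, k)
t_hat = np.argmax(d) / fs - 2 * tau        # subtract the pulse rise to its peak
a_hat = d.max() / ((n_pix // 2) * pulse(2 * tau, tau))

# --- Eq. (7): range from time of flight, c0 = 2.9979e8 m/s ---
range_m = 2.9979e8 * t_hat / 2
```

In this noiseless simulation the cross terms cancel exactly, so the recovered t̂ and â match the simulated t_5 and a_5 to within one sample period, and the range follows directly from Eq. (7).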

4. Analysis and discussion According to the above deductions and calculations, the intensity image and range image can be reconstructed by the OOCDMA array


Fig. 9. The reconstructed results of the OOCDMA array 3D imaging lidar. (a) The scenario diagram. (b) The reconstructed result where N = 1 and M = 64. (c) The reconstructed result where N = 2 and M = 64. (d) The reconstructed result where N = 3 and M = 64.

imaging lidar. Based on formula (3), we set N = 3 and M = 64; the multiplexed encoded full waveform with 4096 pixels is shown in Fig. 6. The amplitude axis contains the intensity information of all pixels, and the horizontal axis contains the time information of the 4096 pixels, which can be calculated from the sampling points and the sampling frequency. Then, the intensity a and time of flight t of each pixel can be calculated from the multiplexed encoded full waveform by Eq. (6). To describe the decoding effect in detail, we arbitrarily select a set of local codes for decoding; the decoded information of 64 consecutive pixels is shown in Fig. 7. It is worth noting that these consecutive pixels come from the same detector. Simultaneously, the echo signals acquired from the M (M = 64) detector array are decoded, and the result is shown in Fig. 8. Each signal contains the intensity a and time t of its corresponding pixel, so the imaging information of the pixels can be obtained from all detectors. Finally, the intensity and range information of all pixels are decoded.

Following the same approach, the 3D images are reconstructed using the intensity and range information. The scenario diagram is shown in Fig. 9(a), and Fig. 9(b)~(d) respectively show the reconstructed results for N = 1, N = 2, and N = 3 with M = 64 detectors. Note in particular the key areas A and B in Fig. 9: as N increases, the imaging result improves. In other words, the number of acquired pixels for the same targets grows with increasing N, so the resolution can be improved accordingly. Based on the above analysis of the results, the feasibility of the array 3D imaging lidar technique is demonstrated.

5. Conclusion

The proposed spatial and temporal multiplexing array imaging lidar technique can effectively resolve the contradiction between the size of the detector array and the performance of the lidar. The novel lidar uses an array of only M detectors to achieve M × 2^{2N} pixels. Therefore, the imaging speed and resolution of the lidar can be improved with a small-scale detector array, and the size of the detector array is no longer the only limitation on developing higher-resolution, higher-efficiency imaging lidar systems. Thus, the integration and miniaturization of imaging lidar systems will be easier to implement in the future.

Funding

This work was supported by the National Science and Technology Major Project of China (Grant No. AHJ2011Z001).

Declaration of Competing Interest

None.

Acknowledgement

The authors thank all reviewers of the manuscript and the researchers who have contributed to this work.

References

[1] Zhang Z, Zhang X, Sun Y, Zhang P. Road centerline extraction from very-high-resolution aerial image and LiDAR data based on road connectivity. Remote Sens 2018;10(8). doi:10.3390/rs10081284.
[2] Liu K, Shen X, Cao L, Wang G, Cao F. Estimating forest structural attributes using UAV-LiDAR data in Ginkgo plantations. ISPRS J Photogramm Remote Sens 2018;146:465–82. doi:10.1016/j.isprsjprs.2018.11.001.
[3] Mo D, Wang R, Wang N, Lv T, Zhang K-S, Wu Y-R. Three-dimensional inverse synthetic aperture lidar imaging for long-range spinning targets. Opt Lett 2018;43(4):839. doi:10.1364/ol.43.000839.
[4] Busck J, Heiselberg H. Gated viewing and high-accuracy three-dimensional laser radar. Appl Opt 2004;43(24):4705. doi:10.1364/ao.43.004705.

[5] Wallace AM, Ye J, Krichel NJ, McCarthy A, Collins RJ, Buller GS. Full waveform analysis for long-range 3D imaging laser radar. EURASIP J Adv Signal Process 2010;1:1–12. doi:10.1155/2010/896708.
[6] Wang H, Chen M, Gong W, et al. Ghost imaging lidar via sparsity constraints. Appl Phys Lett 2012;101(14):141123.
[7] Zheng H, Ma R, Liu M, Zhu Z. A linear-array receiver analog front-end circuit for rotating scanner LiDAR application. IEEE Sens J 2019;19(13):5053–61. doi:10.1109/JSEN.2019.2905267.
[8] Xu F, Wang Y, Yang X, Zhang B, Li F. Correction of linear-array lidar intensity data using an optimal beam shaping approach. Opt Lasers Eng 2016;83:90–8. doi:10.1016/j.optlaseng.2016.03.007.
[9] McCarthy A, Collins RJ, Krichel NJ, Fernández V, Wallace AM, Buller GS. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting. Appl Opt 2009;48(32):6241. doi:10.1364/ao.48.006241.

[10] Knowlton R. Airborne ladar imaging research testbed. MIT Lincoln Laboratory Tech Notes; 2011.
[11] Jack M, Wehner J, Edwards J, Chapman G, Hall DNB, Jacobson SM. HgCdTe APD-based linear-mode photon counting components and ladar receivers. Adv Photon Count Tech V 2011;8033:80330M. doi:10.1117/12.888134.
[12] McKeag W, Veeder T, Wang J, et al. New developments in HgCdTe APDs and LADAR receivers. Infrared Technol Applications XXXVII 2011;8012:801230. doi:10.1117/12.888099.
[13] Qin Y, Vu TT, Ban Y. Toward an optimal algorithm for LiDAR waveform decomposition. IEEE Geosci Remote Sens Lett 2012;9(3):482–6. doi:10.1109/LGRS.2011.2172676.
[14] Li D, Xu L, Xie X, Li X, Chen J, Chen J. Co-path full-waveform LiDAR for detection of multiple along-path objects. Opt Lasers Eng 2018;111:211–21. doi:10.1016/j.optlaseng.2018.08.009.