Omnidirectional field of view structured light calibration method for catadioptric vision system


Measurement 148 (2019) 106914


Xin Chen a,b, Fuqiang Zhou b, Ting Xue c

a Beijing Institute of Aerospace Control Devices, Beijing 100039, China
b Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beihang University, Beijing 100191, China
c College of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China

Article history: Received 20 March 2019; Received in revised form 6 June 2019; Accepted 5 August 2019; Available online 12 August 2019

Keywords: Structured light; Catadioptric vision system; Omnidirectional image; Calibration

Abstract

There have been improvements to structured light-based catadioptric vision systems (SLCV) to assist with robot navigation and reduce computation time. Such systems rely on accurate knowledge of the position of the structured lights; however, existing structured-light calibration does not fully address this requirement. Motivated by this problem, this paper proposes a new approach for estimating the position of the structured lights of an SLCV system. Structured light points projected by lasers are reflected onto the image plane through a conic mirror, forming an omnidirectional image containing multiple structured light points. Efficient calibration of the laser locations is possible from the 2D information of the laser points, given the camera parameters and the mirror position. The performance of this method is evaluated experimentally using simulated and real data; the results demonstrate that the proposed method is more robust and provides higher measurement accuracy with fewer images, verifying the method's feasibility.

© 2019 Elsevier Ltd. All rights reserved.

1. Introduction

A structured light-based vision measuring technique is commonly used to inspect the 3D profile information of an object [1]. Due to its simple structure, low cost, high precision and non-contact nature, it has gained widespread acceptance in industrial fields and is increasingly used for online inspections such as 3D contouring, gauging of small-size components and fast dimensional analysis of objects in relation to robot navigation [2,3].

1.1. Related works

A basic structured light-based vision measurement system (SLV) consists of a camera and a laser projector. Calibration is the most critical part of the visual measurement process, and the calibration precision directly determines the quality of a visual sensing system [4–6]. The main objects of calibration in a traditional SLV are the camera and the laser projector. Since the early 1970s, there has been active research on SLV systems for 3D reconstruction and shape measurement [7]. For example, Chao et al. [8] focused on dynamic rail profile measurement; their research discussed collinearity and parallelism constraints of multi-line structured light and proposed a simple and effective self-calibration method to deal

with the distortion problem of rail profiles. Lei et al. [9] proposed a high precision calibration method by adding 3D space constraints. Chen et al. [10] illustrated an improved systematic calibration method in three aspects.

With the development of image processing and omnidirectional catadioptric vision techniques, structured light-based omnidirectional catadioptric visual measuring systems mounted on mobile robots have been widely used in a wide range of industrial measuring tasks [11–14]. Research on the calibration of structured lights is therefore meaningful for structured light-based catadioptric vision measuring systems. A basic structured light-based catadioptric vision system (SLCV) includes one camera, one conic mirror and laser projectors [15]. Hence, the calibration of the laser projectors is also significant for achieving higher measurement precision in SLCV systems. However, the calibration process for laser projectors and cameras differs from that of traditional SLV systems. Much related research has therefore been proposed worldwide, owing to the large-scale scenes and high efficiency offered by such systems. In 2006, Mei and Rives [16] analyzed the relative position of a camera and a laser projector from a robotic perspective to obtain depth information from panoramic images. Then, Li et al. [17] developed laser structured light based on an omnidirectional vision system; they obtained scene depth information based on the relationship between the image plane and the laser plane with different mirrors. Jia et al. [18] combined omnidirectional images with structured lights to realize omnidirectional scene depth perception.


In the same year, Xu et al. [19,20] designed a three-dimensional multi-directional sensor composed of pyramid mirrors and structured lights, which is promising for robot navigation. Hoang et al. [21] presented a motion estimation method based on an omnidirectional camera and a laser range finder (LRF).

1.2. Proposed method

In recent years, several authors have proposed autonomous SLCV systems mounted on robots for navigation and motion planning, which offer the advantage of a wide field of view (FOV). Moreover, an SLCV system can provide omnidirectional 3D information in narrow dark spaces [22,23]. Structured lights in SLCV systems are used to estimate the 3D position of objects, and accurate depth estimation requires high calibration accuracy of the structured light. Although the methods mentioned above can measure the depth of points in 3D space, a complete calibration method for the structured light in an SLCV system has not been described in detail [24,25]. To estimate the position of structured light in an SLCV system, we propose an omnidirectional field of view structured light calibration method for estimating the 3D information of objects. As omnidirectional imaging technology can reflect object light within a 360° field of view via a conic mirror, the proposed method can calibrate the positions of multiple structured lights simultaneously. Firstly, the intrinsic parameters of the camera need to be calibrated, because the measurement accuracy is affected by the camera calibration precision. Secondly, the mirror position in the SLCV system needs to be estimated to calculate the orientation of the reflected rays. Thirdly, a position calibration algorithm for the structured light, based on the camera parameters and the mirror position, is proposed to measure objects in extreme environments.

1.3. Paper structure

The remainder of the paper is structured as follows: Section 2 describes the principle and the procedure of the overall proposed method; Section 3 then presents the simulation and experiments; finally, Section 4 states the conclusion and the future work of our study.

2. Calibration model description

To date, there have been several autonomous SLCV systems related to motion and depth estimation in robotic systems [26], since point-structured lights have strong flexibility, high adaptability and robust controllability. The SLCV system used in this paper consists of a camera, a conic mirror and ten point-structured lights, and its structure is shown in Fig. 1. The conic mirror provides a 360° field of view (FOV) of the scene to the camera. The laser projector, consisting of ten structured lights, projects ten points onto the object, which are visible to the camera through the conic mirror reflection. A rigid configuration is used: the relative positions between the mirror, the camera and the structured lights in the SLCV system are constant. The specification of each component is given in Table 1. In this study, we propose an omnidirectional field of view structured light calibration method based on omnidirectional images and ten dot structured lights. The method mainly includes camera calibration, mirror position estimation and structured light calibration. The camera calibration and the mirror position estimation are the foundation of the structured light calibration. A detailed description of the method is provided below.

2.1. Camera calibration model

In the pinhole camera model, $p = [u, v]^T$ is an image point and its corresponding homogeneous coordinate is $\tilde{p} = [u, v, 1]^T$. Similarly, $P = [X, Y, Z]^T$ is an object point in 3D space and its corresponding homogeneous coordinate is $\tilde{P} = [X, Y, Z, 1]^T$. Therefore, the relationship between the world coordinate $P$ and the image coordinate $p$ can be expressed as:

$$s\,\tilde{p} = K\,[\,R \;\; T\,]\,\tilde{P}, \quad s \neq 0, \quad K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

where s is a non-zero scale factor, $f_x$ and $f_y$ are the focal lengths in pixels in the x and y directions, respectively, and $(u_0, v_0)$ is the principal point in pixels. K is the 3 × 3 intrinsic matrix of the camera, R is a 3 × 3 rotation matrix and T is a 3 × 1 translation vector. In general, a real camera does not satisfy the pinhole camera model exactly but exhibits geometric distortion, mostly radial and tangential distortion. It can be presented as:

$$\begin{bmatrix} u_{ideal} \\ v_{ideal} \end{bmatrix} = \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right)\begin{bmatrix} u_{dis} \\ v_{dis} \end{bmatrix} + \begin{bmatrix} 2p_1 u_{dis} v_{dis} + p_2\left(r^2 + 2u_{dis}^2\right) \\ 2p_2 u_{dis} v_{dis} + p_1\left(r^2 + 2v_{dis}^2\right) \end{bmatrix} \qquad (2)$$


where k1 , k2 and k3 are the radial distortion coefficients, and p1 and p2 are the tangential distortion coefficients. By collecting a number of target images, the camera’s intrinsic parameters can then be computed [27].
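As a concrete illustration of this step, the following is a minimal sketch (our own illustration, not the authors' code) of how the intrinsic parameters of Eq. (1) and the distortion coefficients of Eq. (2) could be estimated from several images of a planar target with OpenCV, in the spirit of Zhang's method [27]. The board size, square size and file-name pattern are hypothetical assumptions.

```python
import glob
import cv2
import numpy as np

# Hypothetical 9x6 chessboard with 10 mm squares; the image file pattern is an assumption.
board_size = (9, 6)
square = 10.0  # mm
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("target_*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]                      # (width, height)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K holds fx, fy, u0, v0 of Eq. (1); dist holds (k1, k2, p1, p2, k3) of Eq. (2).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms, "\nK =\n", K, "\ndist =", dist.ravel())
```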


2.2. Mirror position estimation


To estimate the position of the structured lights, at least two points on each reflected ray are required. The computation of the orientation of the reflected ray and of the reflection point on the conic mirror is based on the mirror position estimation.

Fig. 1. Structure of the SLCV system.

Table 1 Specification of each component.
Category | Quantity | Parameters
Camera | 1 | Interface: RER-USB30W02M; lens size: 1/4 in.; pixel size: 6.0 μm × 6.0 μm; resolution: 640(H) × 480(V); frame rate: 60 fps
Lens | 1 | Focal length: 3.6 mm; field angle: 60°
Laser | 10 | Size: Φ6 × 9 mm; wavelength: 650 nm; operating voltage: 2.8 V; output power: 0.4–5 mW
Mirror | 1 | Radius: 20 mm; height: 10 mm


Therefore, the mirror position in the camera coordinate system needs to be calibrated before the structured light calibration. Based on the computed camera parameters, the mirror position relative to the camera coordinate system can be expressed by a rotation matrix $R_m$ and a translation matrix $T_m$. In the SLCV system, the conic mirror is projected onto the image plane as an ellipse, which can be described as:


$$A u_i^2 + 2B u_i v_i + C v_i^2 + 2D u_i + 2E v_i + F = 0 \qquad (3)$$

Or in quadratic form, it can be expressed as:

$$p_c^T Q\, p_c = [\,u_i \;\; v_i \;\; 1\,]\begin{bmatrix} A & B & D \\ B & C & E \\ D & E & F \end{bmatrix}\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = 0 \qquad (4)$$

According to the perspective projection model, the image point in the camera coordinates can be described by $p_c = [u, v, f]^T$, where f is the focal length. An elliptical cone projected by the conic mirror can then be derived from $P_c = k[u, v, f]^T$, where k is a scale factor. Hence, the conic mirror in the camera coordinates can be described by the following expression:

$$P_c^T Q_c P_c = 0, \qquad Q_c = \begin{bmatrix} A & B & D/f \\ B & C & E/f \\ D/f & E/f & F/f^2 \end{bmatrix} \qquad (5)$$
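For clarity, the following is a small sketch (an illustration under our own assumptions, not the paper's code) of how the image-plane ellipse of Eq. (3) could be estimated by a simple least-squares conic fit to mirror-contour pixels, and how the matrices Q of Eq. (4) and Q_c of Eq. (5) could then be assembled. The function and variable names are hypothetical.

```python
import numpy as np

def fit_conic(u, v):
    """Least-squares fit of A*u^2 + 2B*u*v + C*v^2 + 2D*u + 2E*v + F = 0 (Eq. 3)."""
    M = np.column_stack([u**2, 2*u*v, v**2, 2*u, 2*v, np.ones_like(u)])
    _, _, Vt = np.linalg.svd(M)           # coefficient vector = right singular vector
    A, B, C, D, E, F = Vt[-1]             # associated with the smallest singular value
    return A, B, C, D, E, F

def conic_matrices(coeffs, f):
    """Assemble Q (Eq. 4, pixel coordinates) and Q_c (Eq. 5, camera coordinates)."""
    A, B, C, D, E, F = coeffs
    Q = np.array([[A, B, D],
                  [B, C, E],
                  [D, E, F]])
    Qc = np.array([[A,     B,     D / f],
                   [B,     C,     E / f],
                   [D / f, E / f, F / f**2]])
    return Q, Qc

# Usage with hypothetical mirror-contour pixels (u_i, v_i) and a focal length in pixels:
# coeffs = fit_conic(u_pix, v_pix)
# Q, Qc = conic_matrices(coeffs, f=551.8)
```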

In Fig. 2, a supplementary coordinate system with its origin at the optical center is illustrated, with the Z-axis symmetrical to the axis of the mirror. The equation of the conic mirror in the mirror coordinate system can be described as follows:

$$P_m^T Q_m P_m = 0 \qquad (6)$$

$$Q_m = \begin{bmatrix} 1 & 0 & -x_0/z_0 \\ 0 & 1 & -y_0/z_0 \\ -x_0/z_0 & -y_0/z_0 & \left(x_0^2 + y_0^2 - r^2\right)/z_0^2 \end{bmatrix} \qquad (7)$$

where $(x_0, y_0, z_0)$ is the center and r is the radius of the base circle of the conic mirror. From the above analysis, there exists a rotation matrix $R_m$ between $P_m$ and $P_c$ such that:

$$P_c = R_m P_m \qquad (8)$$

The rotation matrix $R_m$ relating the camera and the conic mirror can be computed using Singular Value Decomposition (SVD), and the translation matrix can then be determined as $T_m = [x_0, y_0, z_0]^T$.

2.3. Omnidirectional structured lights calibration

A detailed layout of the structured light calibration is shown in Fig. 3. Two coordinate systems need to be introduced: $o\text{-}X_c Y_c Z_c$ is the camera coordinate system, and $o\text{-}X_{mc} Y_{mc} Z_{mc}$ is the mirror coordinate system. Both coordinate systems have the same origin, but their orientations in the Z direction are different. Based on the results of the camera parameters and the mirror position, the calibration process of the structured light can be expressed as follows:

1) The structured lights are projected onto an object and reflected by the conic mirror, through the optical center, onto the image plane. The incident ray $L_{in}$ can be expressed by the image point p and the optical center o;
2) Using the mirror position $[\,R_m\ \ T_m\,]$ and the incident ray $L_{in}$ in the camera coordinate system, the 3D information of the reflected point A can be computed;
3) Based on the optical folding principle, the reflected ray $L_{re}$ can be described by the incident ray $L_{in}$ and the mirror position $[\,R_m\ \ T_m\,]$;
4) By computing the point of intersection of the reflected ray $L_{re}$ and the calibration target $\pi_t$, a point $P_L$ on the laser emission light can be derived;
5) By collecting a number of images, the function of the laser emission light $L_i$ can be calculated using a linear fitting algorithm.

In Fig. 3, the incident ray in the camera coordinates can be presented in the form of a Plücker matrix as:

$$L_{in} = p\, o^T - o\, p^T \qquad (9)$$

where p is an image point and o is the optical center. The equation of the curved mirror in the mirror coordinates can be expressed as:

$$P_m^T Q_m P_m = 0 \qquad (10)$$

where $P_m$ is a point in the mirror coordinate system. The relationship between the mirror coordinate system and the camera coordinate system is known from the mirror position estimation and can be expressed by a homogeneous matrix $H_m = [\,R_m\ \ T_m\,]$.

Fig. 2. Coordinate diagram in mirror calibration process.

Fig. 3. Calibration diagram of the structured lights.


Then, the point $P_c$ reflected from an object point in the camera coordinate system can be derived as:

$$P_c = H_m P_m \qquad (11)$$

Then, the equation of the conic mirror in the camera coordinate system can be rewritten as:

$$P_c^T Q_c P_c = 0 \qquad (12)$$

where $Q_c = H_m^{-T} Q_m H_m^{-1}$. The coordinates of a point reflected on the conic mirror in the camera coordinate system can be expressed as:

$$P_c = [\,p_x,\ p_y,\ p_z\,]^T = k\,[\,u,\ v,\ f\,]^T \qquad (13)$$

Letting $Q_c = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$, the quadratic equation for the proportional coefficient k can be derived as:

$$k^2\, p_c^T A_{11}\, p_c + k\left(A_{12}^T + A_{21}\right) p_c + A_{22} = 0 \qquad (14)$$

The value of the proportional coefficient can then be obtained. The quadratic equation has two solutions; the larger value is chosen because of the position of the mirror in the camera coordinate system, and can be presented as:

$$k = \frac{-\left(A_{12}^T + A_{21}\right) p_c + \sqrt{\left[\left(A_{12}^T + A_{21}\right) p_c\right]^2 - 4\left(p_c^T A_{11}\, p_c\right) A_{22}}}{2\, p_c^T A_{11}\, p_c} \qquad (15)$$

Based on the spatial equation of the conic mirror in the camera coordinates and the value of k, the normal vector $N_r = [n_x, n_y, n_z]^T$ at point $P_c$ can be obtained and expressed as:

$$\begin{cases} n_x = 2a_{11} p_x + 2a_{12} p_y + 2a_{13} p_z + 2a_{14} \\ n_y = 2a_{22} p_y + 2a_{12} p_x + 2a_{23} p_z + 2a_{24} \\ n_z = 2a_{33} p_z + 2a_{13} p_x + 2a_{23} p_y + 2a_{34} \end{cases} \qquad (16)$$

The equation of the normal line at point $P_c$ is:

$$L_n:\ \frac{x - p_x}{n_x} = \frac{y - p_y}{n_y} = \frac{z - p_z}{n_z} \qquad (17)$$

According to the normal vector $N_r$, we establish an auxiliary plane $\pi$ that passes through the optical center:

$$\pi:\ n_x x + n_y y + n_z z = 0 \qquad (18)$$

As the incident ray and the reflected ray are in the same plane, a point A on the reflected ray can be computed as:

$$A = 2B = 2\, L_n\, \pi \qquad (19)$$

where B is the intersection point of the normal ray and the auxiliary plane $\pi$. The reflected ray can then be expressed as:

$$L_{re} = P_c A^T - A P_c^T \qquad (20)$$
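To make the back-projection step more tangible, the following is a compact numerical sketch of Eqs. (13)–(19) under our own assumptions: a symmetric 4 × 4 quadric matrix of the mirror in camera coordinates (named Qc4 here, a hypothetical variable), the optical center at the origin, and a pinhole image point (u, v) with focal length f in pixels. It is an illustration, not the authors' implementation.

```python
import numpy as np

def reflected_ray(Qc4, u, v, f):
    """Trace one laser image point back through the conic mirror.

    Qc4 : assumed symmetric 4x4 quadric of the mirror in camera coordinates (Eq. 12).
    Returns the reflection point P_c (Eq. 13), a point A on the reflected ray (Eq. 19),
    and the unit direction of the reflected ray.
    """
    pc = np.array([u, v, f], dtype=float)           # viewing direction, Eq. (13)
    A11, A12 = Qc4[:3, :3], Qc4[:3, 3]
    A21, A22 = Qc4[3, :3], Qc4[3, 3]

    # Quadratic in the scale factor k, Eq. (14); keep the larger root, Eq. (15).
    a = pc @ A11 @ pc
    b = (A12 + A21) @ pc
    k = (-b + np.sqrt(b * b - 4.0 * a * A22)) / (2.0 * a)
    Pc = k * pc                                      # reflection point on the mirror

    # Surface normal at Pc from the quadric gradient, Eq. (16).
    N = 2.0 * (A11 @ Pc + A12)

    # Foot of the perpendicular from the optical center o (origin) to the normal line,
    # then mirror o across that line: A = 2B, Eq. (19).
    B = Pc - (Pc @ N) / (N @ N) * N
    A = 2.0 * B

    direction = Pc - A                               # reflected ray passes through A and Pc
    return Pc, A, direction / np.linalg.norm(direction)
```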

According to the calibration algorithm for catadioptric vision proposed by Scaramuzza in Ref. [28], the position of the calibration target $\pi_t$ in the camera coordinates can be estimated. Afterwards, a point $P_i$ on the laser emission ray can be determined from the reflected ray $L_{re}$ and the calibration target $\pi_t$:

$$P_i = L_{re}\, \pi_t \qquad (21)$$

By collecting a number of images including multiple laser points, the position of each of the ten lasers can be estimated by the least squares method.
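One common least-squares choice for this final line-fitting step is sketched below (our illustration, not the paper's code): the 3D points P_i gathered from several images for one laser are fitted with a line whose origin is their centroid and whose direction is the principal component of the residuals, matching the line parameterization reported in Table 4. The function name is hypothetical.

```python
import numpy as np

def fit_laser_line(points):
    """Least-squares 3D line through the points P_i collected from several images.

    points : (N, 3) array. Returns (x0, y0, z0) and the unit direction (nx, ny, nz).
    """
    P = np.asarray(points, dtype=float)
    origin = P.mean(axis=0)                        # (x0, y0, z0): centroid of the samples
    _, _, Vt = np.linalg.svd(P - origin)           # principal direction of the residuals
    direction = Vt[0] / np.linalg.norm(Vt[0])      # (nx, ny, nz)
    return origin, direction
```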

3. Experimental verification

To verify the validity of the proposed method, experiments were performed using simulated data and real images captured by the SLCV system. The objective of the simulated experiment is to verify the robustness of the proposed method, and the objective of the real experiment is to measure the accuracy of the proposed approach. The experiment details are provided in Section 3.1 and Section 3.2. It should be emphasized that, since the key focus of this paper is structured light calibration, the camera parameters and the mirror position are calibrated in advance and only the structured light calibration results are presented and analyzed. The calibration results of the camera intrinsic parameters and the mirror position, obtained with the procedures described in Section 2.1 and Section 2.2, are shown in Tables 2 and 3 and are used in the subsequent structured light calibration.

Table 2 Intrinsic parameters of camera.
(f_x, f_y)/pixel: (551.757, 551.95)
(u_0, v_0)/pixel: (322.826, 205.202)
(k_1, k_2, k_3): (0.421, 0.277, 0.161)
(p_1, p_2): (−1.474 × 10⁻³, 1.084 × 10⁻²)

Table 3 Mirror calibration results.
R_m = [0.5570  0.8297  0.0352; 0.8302  0.5554  0.0455; 0.0182  0.0546  0.9983]
T_m = [11.0749, 0.0286, 33.2332]^T

3.1. Simulation results

This section presents an evaluation of our proposed method using a simulation based on computer-generated synthetic data with Gaussian noise. In order to calibrate the position of the structured lights, the SLCV sensor was placed in a closed rectangular space and four 3 × 3 calibrating targets were mounted, one on each plane, so that all of them were in view. With this configuration, all ten structured lights can be calibrated at the same time, effectively reducing the number of images required. Based on the proposed calibration method, the position of the ten structured lights can be estimated by capturing several omnidirectional images that include the ten laser points and the four calibrating targets. Fig. 4 presents the calibrating process of the ten structured lights in the SLCV sensor. In Fig. 4(c), the omnidirectional image can be divided into four parts based on the four vertical sides of the closed rectangular space. The ten laser points and the target points on the calibration target can then be numbered based on each ROI (region of interest) [29]. The calibration results of the ten structured lights are shown in Table 4, based on the calibration method proposed by Scaramuzza [28] and the measurement principle of structured lights. The three-dimensional spatial positions of the ten structured lights in the camera coordinate system are shown in Fig. 5.

To verify the robustness of the proposed calibration method, Gaussian noise with a mean of 0 and a standard deviation ranging from 0 to 0.9 pixels is added to the image points. Fig. 6 shows the RMS errors of the ten lasers as the Gaussian noise is increased for x_0, y_0 and z_0, and Fig. 7 shows the RMS errors of the ten lasers as the Gaussian noise is increased for n_x, n_y and n_z.


Fig. 4. Laser calibration process in catadioptric vision sensor. (a) Calibration target, (b) testing mode, (c) ROI segmentation, and (d) point extraction.

Table 4 Structured-light calibration results. Line equation: (x − x_0)/n_x = (y − y_0)/n_y = (z − z_0)/n_z.
Laser | x_0 | y_0 | z_0 | n_x | n_y | n_z
LM1 | 27.474 | 49.912 | 55.069 | 0.471 | 0.880 | 0.055
LM2 | 18.130 | 39.062 | 55.235 | 0.915 | 0.400 | 0.044
LM3 | 11.057 | 26.293 | 55.568 | 0.999 | 0.015 | 0.035
LM4 | 14.476 | 10.177 | 56.425 | 0.804 | 0.593 | 0.001
LM5 | 29.550 | 3.065 | 57.283 | 0.155 | 0.987 | 0.036
LM6 | 45.637 | 1.899 | 57.905 | 0.292 | 0.955 | 0.054
LM7 | 54.597 | 14.878 | 57.628 | 0.678 | 0.233 | 0.044
LM8 | 59.033 | 27.377 | 57.214 | 0.999 | 0.014 | 0.036
LM9 | 57.068 | 42.065 | 56.474 | 0.786 | 0.617 | 0.004
LM10 | 43.378 | 50.002 | 55.628 | 0.363 | 0.931 | 0.034

Fig. 5. Three-dimensional spatial position of ten structured lights.

Fig. 6. RMS error with increasing Gaussian noise, (a) x_0, (b) y_0, and (c) z_0.

Fig. 7. RMS error with increasing Gaussian noise, (a) n_x, (b) n_y, and (c) n_z.

From Fig. 6, we can see that the change rates of the initial coordinates (x_0, y_0, z_0) of the ten lasers differ, due to the variance of the spot extraction accuracy. Additionally, the change rates of the normal vectors (n_x, n_y, n_z) of the ten lasers show the same phenomenon. This is caused by differences in the spot projection areas, which easily lead to deviations in the accuracy of the centroid extraction; the accuracy of the laser calibration is directly influenced by the precision of the centroid extraction. Therefore, different laser calibration results are produced independently of the influence of noise. Highly concentrated structured light can effectively alleviate this problem, and the RMS errors remain limited to 2% for noise levels ranging from 0 to 0.9 pixels. Therefore, the proposed calibration method is robust to Gaussian noise and has practical value. In order to further test the measurement accuracy, real data experiments are presented in the next section.
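The robustness test described above amounts to a simple Monte-Carlo perturbation of the 2D points. The sketch below illustrates the idea under our own assumptions; calibrate_lasers() is a hypothetical stand-in for the full calibration pipeline, not a function from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_image_points(points_uv, sigma_px):
    """Add zero-mean Gaussian noise (standard deviation in pixels) to the 2D
    laser and target points, as in the robustness simulation."""
    pts = np.asarray(points_uv, dtype=float)
    return pts + rng.normal(0.0, sigma_px, size=pts.shape)

# Hypothetical loop over noise levels from 0 to 0.9 pixels:
# for sigma in np.arange(0.0, 1.0, 0.1):
#     noisy = perturb_image_points(image_points, sigma)
#     lines = calibrate_lasers(noisy)           # (x0, y0, z0, nx, ny, nz) per laser
#     rms = np.sqrt(np.mean((lines - lines_ref) ** 2, axis=0))
```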

3.2. Real data experiments

For the real-life experiments, a real pipeline with a diameter of 150 mm was installed and fixed on the test pad. The detailed experimental setting is shown in Fig. 8.




Fig. 8. Experimental setting.

In order to analyze the measurement accuracy, the SLCV sensor was mounted on a mobile robot to detect the diameter of the pipeline. For the real experiment, 150 mm was set as the ground truth of the pipeline's diameter to verify the feasibility of the proposed method. The captured omnidirectional image including the ten point-structured lights contained many dark regions, owing to the reflectiveness of the pipeline's inner surface. Fig. 9(a) shows the captured image under ten structured-light pattern illumination. Using a centroid extraction method and the position results of the ten structured lights, the 3D information of the ten laser points projected on the pipeline's inner surface can be computed. The detailed results are presented in Table 5 and the 3D positions are displayed in Fig. 9(b). From Fig. 9(b), we can see that the computed 3D coordinates of the ten projected points reflect the basic contour of the pipeline's inner surface. Continuing the analysis of the measurement accuracy, an estimate of the inside diameter of the pipeline can be obtained using the circle fitting method. Since only the coordinate values in the x and y directions are used in the circle fitting process, the fitting result computed from the 3D coordinates of the ten laser points is shown in Fig. 10. The estimated diameter of the pipeline is 150.042 mm; the error between this value and the real diameter of the pipeline is only 0.042 mm. Furthermore, as the number of structured lights increases, the measurement accuracy of the pipeline's inner diameter increases. In order to display the detailed measurement errors, the fitting errors computed for each point in the 16 images are shown in Fig. 11, and the related mean errors of each fitting process are presented in Fig. 12. From Figs. 11 and 12, we can see that the maximum fitting error for each laser point is less than 3 mm and the mean error of the 16 fitting processes is 1.5 mm. Intuitively, the proposed omnidirectional structured-light calibration method for the SLCV system has a high measurement accuracy.

Fig. 9. (a) Captured image under ten structured-light pattern illumination, (b) 3D coordinate display of the ten point-structured-lights.

Table 5 3D coordinates of the ten point-structured-lights.
No. | x/mm | y/mm | z/mm
1 | 309.715 | 59.612 | 160.804
2 | 285.128 | 38.711 | 158.289
3 | 274.644 | 0.304 | 161.590
4 | 294.577 | 51.569 | 163.622
5 | 339.121 | 72.995 | 154.721
6 | 379.923 | 64.071 | 153.146
7 | 419.924 | 31.232 | 172.923
8 | 420.936 | 21.862 | 185.138
9 | 395.934 | 66.601 | 195.479
10 | 349.241 | 78.349 | 183.291
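The paper does not specify which circle fitting algorithm is used; one common choice is the algebraic (Kasa) least-squares fit sketched below, applied to the x and y coordinates of the laser points such as those in Table 5. This is our own illustration with a hypothetical function name.

```python
import numpy as np

def fit_circle_xy(x, y):
    """Algebraic least-squares (Kasa) circle fit to the x-y coordinates of the
    laser points; returns the center (cx, cy) and the diameter."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), 2.0 * r

# Usage: center, diameter = fit_circle_xy(x_mm, y_mm), where x_mm and y_mm are the
# x and y columns for the ten laser points; the diameter estimates the pipeline bore.
```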


Fig. 10. Circle fitting result computed by the ten structured lights.


Fig. 11. Fitting errors computed by each laser point.



Fig. 12. Mean errors in each fitting process.

4. Conclusion

This paper proposed a new approach for estimating the position of structured lights in an SLCV system. The method was developed around the key components of an omnidirectional catadioptric vision sensor and ten laser projectors. The proposed method completely describes the calibration process of the structured lights, and a calibration mathematical model covering the camera calibration, the mirror position and the structured-light calibration is given. The method takes advantage of the omnidirectional FOV of the SLCV system to reduce the number of calibration images required to estimate the position of the structured light. Experiments were performed and analyzed to verify the accuracy and feasibility of the proposed calibration method. The proposed method offers significant advantages:

(1) An omnidirectional catadioptric vision system with structured lights is used to reduce the number of images required to calibrate structured-light systems, resulting in improved measurement efficiency.
(2) The calibration process is simple, and the proposed method improves the active adaptability of robot navigation, owing to its structured light measurement model for limited-space measurement.
(3) The calibration procedure for structured lights generally needs to be conducted with several cameras and multiple mirrors in different positions; this is not required for the proposed method, making it easy to use.

Nevertheless, as shown in Section 3.2, the single-point measurement is less accurate, and thus the image processing algorithm for laser point extraction should be studied further to improve the measurement accuracy for single points.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This research is supported by the National Key Technologies R & D Program of China during the 13th Five-Year Plan Period (No. 2017YFC0601604).

References

[1] G. Zhang, Z. Wei, A novel calibration approach to structured light 3D vision inspection, Opt. Laser Technol. 34 (5) (2002) 373–380.
[2] P. Kiddee, Z. Fang, M. Tan, A practical and intuitive calibration technique for cross-line structured light, Optik Int. J. Light Electron Opt. 127 (20) (2016) 9582–9602.
[3] F. Ke, J. Xie, Y. Chen, A flexible and high precision calibration method for the structured light vision system, Optik Int. J. Light Electron Opt. 127 (1) (2016) 310–314.
[4] C.S. Wieghardt, B. Wagner, Hand-projector self-calibration using structured light, in: Proceedings of the International Conference on Informatics in Control, Automation and Robotics, IEEE, 2015, pp. 85–91.
[5] K. Claes, H. Bruyninckx, Robot positioning using structured light patterns suitable for self-calibration and 3D tracking, J. Transp. Secur. 7 (2) (2014) 191–198.
[6] F. Li, H. Sekkati, J. Deglint, et al., Simultaneous projector-camera self-calibration for three-dimensional reconstruction and projection mapping, IEEE Trans. Comput. Imaging 3 (1) (2017) 74–83.
[7] F. Zhou, G. Zhang, Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations, Image Vis. Comput. 23 (1) (2005) 59–67.
[8] C. Wang, Y. Li, Z. Ma, J. Zeng, T. Jin, H. Liu, Distortion rectifying for dynamically measuring rail profile based on self-calibration of multiline structured light, IEEE Trans. Instrum. Meas. 67 (99) (2018) 1–12.
[9] L. Nie, Y. Ye, Z. Song, Method for calibration accuracy improvement of projector-camera-based structured light system, Opt. Eng. 56 (7) (2017) 074101.
[10] X. Chen, J. Xi, Y. Jin, J. Sun, Accurate calibration for a camera–projector measurement system based on structured light projection, Opt. Lasers Eng. 47 (3) (2009) 310–319.
[11] F. Lopes, H. Silva, J.M. Almeida, E. Silva, Structured light system calibration for perception in underwater tanks, Handbook of Pattern Recognition and Image Analysis, Springer International Publishing, 2015.
[12] J. Miura, Y. Negishi, Y. Shirai, Mobile robot map generation by integrating omnidirectional stereo and laser range finder, in: Proceedings of IEEE Conference on Intelligent Robots and Systems, IEEE, 2002, pp. 250–255.
[13] V.D. Hoang, M.H. Le, K.H. Jo, Planar motion estimation using omnidirectional camera and laser rangefinder, in: Proceedings of IEEE Conference on Human System Interaction, IEEE, 2013, pp. 632–636.


[14] E.B. Bacca, E. Mouaddib, X. Cufí, Embedding range information in omnidirectional images through laser range finder, in: Proceedings of IEEE Conference on Intelligent Robots and Systems, IEEE, 2010, pp. 2053–2058.
[15] S. Barone, M. Carulli, P. Neri, A. Paoli, A. Razionale, An omnidirectional vision sensor based on a spherical mirror catadioptric system, Sensors 18 (2) (2018) 408.
[16] C. Mei, P. Rives, Calibration between a central catadioptric camera and a laser range finder for robotic applications, in: Proceedings of IEEE Conference on Robotics and Automation, IEEE, 2006, pp. 532–537.
[17] Y. Li, Q. Wang, Catadioptric omni-direction vision system based on laser illumination, in: Proceedings of IEEE Conference on Industrial Electronics, IEEE, 2013, pp. 1–6.
[18] T. Jia, B. Wang, Z. Zhou, H. Meng, Scene depth perception based on omnidirectional structured light, IEEE Trans. Image Process. 25 (9) (2016) 4369–4378.
[19] J. Xu, B. Gao, C. Liu, P. Wang, S. Gao, An omnidirectional 3D sensor with line laser scanning, Opt. Lasers Eng. 84 (2016) 96–104.
[20] J. Xu, P. Wang, Y. Yao, S. Liu, G. Zhang, 3D multi-directional sensor with pyramid mirror and structured light, Opt. Lasers Eng. 93 (2017) 156–163.
[21] V.D. Hoang, K.H. Jo, A simplified solution to motion estimation using an omnidirectional camera and a 2-D LRF sensor, IEEE Trans. Ind. Inf. 12 (3) (2017) 1064–1073.

[22] A. Rituerto, L. Puig, J.J. Guerrero, Comparison of omnidirectional and conventional monocular systems for visual SLAM, Omnivis with RSS 1–3 (2010).
[23] D. Caruso, J. Engel, D. Cremers, Large-scale direct SLAM for omnidirectional cameras, in: Proceedings of IEEE Conference on Intelligent Robots and Systems, IEEE, 2015, pp. 141–148.
[24] C. Paniagua, L. Puig, J.J. Guerrero, Omnidirectional structured light in a flexible configuration, Sensors 13 (10) (2013) 13903–13916.
[25] H. Wang, C. Wu, T. Jia, X. Yu, Projector calibration algorithm in omnidirectional structured light, Proc. SPIE 131 (2017).
[26] F.Q. Zhou, X. Chen, H.S. Tan, X.H. Chai, Three-dimensional catadioptric vision sensor using omnidirectional dot matrix projection, Chin. Opt. Lett. 14 (11) (2016) 80–84.
[27] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22 (11) (2000) 1330–1334.
[28] D. Scaramuzza, A. Martinelli, R. Siegwart, A flexible technique for accurate omnidirectional camera calibration and structure from motion, in: Proceedings of IEEE Conference on Computer Vision Systems, IEEE, 2006, pp. 45–45.
[29] Y.Y. Zhang, Q.H. Zeng, J.Y. Liu, Y.N. Li, S. Liu, An improved image edge detection algorithm based on Canny algorithm, Navig. Control 18 (1) (2019) 84–90.