Robotics and Autonomous Systems 62 (2014) 782–793
Error propagation and uncertainty analysis between 3D laser scanner and camera

Angel-Iván García-Moreno*, Denis-Eduardo Hernandez-García, José-Joel Gonzalez-Barbosa, Alfonso Ramírez-Pedraza, Juan B. Hurtado-Ramos, Francisco-Javier Ornelas-Rodriguez

Research Center for Applied Science and Advanced Technology (CICATA), Instituto Politécnico Nacional (IPN), Cerro Blanco 141, Col. Colinas del Cimatario, Querétaro, México
Highlights

• We calibrated two different kinds of sensors, LIDAR and camera.
• We calculated the uncertainty analysis between both sensors.
• The performance of the error propagation is computed.
• We develop a new algorithm to obtain the extrinsic parameters of the LIDAR sensor.
• A new calibration pattern is used.
Article info

Article history: Received 10 July 2013; Received in revised form 11 February 2014; Accepted 21 February 2014; Available online 3 March 2014.

Keywords: Sensors calibration; Uncertainty analysis; LIDAR calibration; Camera calibration; Effect of noise.
Abstract

In this work we present an in-situ method to compute the calibration of two sensors, a LIDAR (Light Detection and Ranging) and a spherical camera. Both sensors are used in urban environment reconstruction tasks. In this scenario the speed at which the various sensors acquire and merge the information is very important; however, reconstruction accuracy, which depends on the sensors' calibration, is also of high relevance. Here, a new calibration pattern, visible to both sensors, is proposed. By this means, the correspondence between each laser point and its position in the camera image is obtained, so that the texture and color of each LIDAR point can be known. Experimental results for the calibration and uncertainty analysis are presented for data collected by the platform integrating a LIDAR and a spherical camera. © 2014 Elsevier B.V. All rights reserved.
1. Introduction

Nowadays, urban environment reconstruction is a rapidly growing field of research. Of special relevance are those approaches using terrestrial platforms for the scanning process. In any of these systems, one of the most important challenges is the correct fusion of information from multiple sensors, which strongly depends on a very precise calibration of the sensors. The calibration process can be viewed as a closed-form solution followed by a nonlinear refinement to give the best solution for a given set of data. A great variety of different sensors are currently used to get 3D information: primarily depth laser sensors, CCD cameras and inertial systems [1–3].
When using camera sensors, one must find a set of parameters exclusively related to a single camera–lens optical system. These camera parameters are used in many processes which involve geometric computation. They can be used, for example, to estimate the three-dimensional position of a feature in a given image. The precision of the 3D position of the feature in the image depends on the accuracy of the computed parameters, which tends to vary for many reasons, such as lens distortion or camera focus calculation. When performing a calibration process, two kinds of parameters can be obtained:

1. Internal parameters: they basically define the quality of the information collected by the sensor. For cameras, for example, they measure the distortions and determine how the image coordinates of a point are derived, given the spatial position of the point with respect to the camera.
2. External parameters: they define the position of the sensor with respect to the scene.

There exist several methodologies that can
be divided into four groups, depending on the object used for the calibration process [4]:

(a) Use of a 3D reference calibration object.
(b) Use of a planar pattern shown in different orientations.
(c) Use of a 2D calibration object, commonly known as a target, placed in different positions.
(d) Use of self-calibration, so that a calibration object is not needed; only the image point correspondences are found while the camera is moving in a static scene.

The estimation of LIDAR and camera intrinsic parameters is a nonlinear problem that can be solved in different ways. A novel algorithm has been proposed [5] for joint estimation of both the intrinsic parameters of the laser sensor and the LIDAR–camera transformation. Specifically, measurements of a calibration plane are used at various configurations to establish geometric constraints between the LIDAR intrinsic parameters and the LIDAR–camera 6-degrees-of-freedom relative transformation. These measurement constraints are processed to estimate the calibration parameters as follows: first, an initial estimate for the intrinsic and extrinsic calibration parameters is computed in two steps. Then, a batch iterative (nonlinear) least-squares method is employed to refine the accuracy of the estimated parameters. A method that uses a planar checkerboard pattern is proposed in [6–8]. Here, the authors define the rotation matrix between the laser sensor and the camera as the result of moving their platform and observing the resulting motion of the sensors. This step attempts to solve the well-known homogeneous transform problem, where the translation is calculated using a common least-squares estimation algorithm applied to the localization of the corners of the pattern, detected by both sensors. Besides the problem of calibration, in [9] the authors provide a solution for the occlusion problem that arises in conjunction with different viewpoints of the fused data. Their approach relies on a synchronization of both sensors to allow the detection of occluded objects. When occlusion is off (because the system has moved) an automatic trigger mechanism is set on to capture the previously missed object. Inertial correction for the LIDAR, and occlusion detection algorithms derived from a 3D object segmentation, are used for visibility checks of projected 3D LIDAR data. Simple and fast approaches with their respective toolboxes are presented in [10,11]. They attached multiple printed checkerboard patterns to the wall and the floor to characterize the intrinsic and extrinsic parameters with few shots. Not only square flat patterns are used for the calibration of these two sensors; for example, in [12] the authors used a circle-based calibration object because this geometry allows for a more accurate estimation of its position in the camera frame and, as a consequence, of the camera intrinsic parameters. The authors use a linear minimization of the Euclidean distance error between the circle center point sets, and then they generate the circles of the n locations estimated by the camera, which consists of computing m points of each estimated circle location by using the circle center and an orthonormal base lying on the circle's plane. Another approach uses a conic-based geometrical object to calibrate 2D/3D laser sensors [13].

1.1. Motivation of research

The calibration process between a laser and a panoramic camera is a rather laborious task.
The scene used for calibration should contain data that can be extracted from the information provided by both sensors, such as characteristic points, corners, etc. The checkerboard-shaped pattern is the most used [6], but the flatness of the pattern is not taken into account. Therefore the quality of the calibration depends on the accuracy with which the corners can be located, on the sensor resolution, and on a pattern whose geometrical features are well known. Another calibration
Fig. 1. Sensor platform composed of LIDAR Velodyne HDL-64E (L), LadyBug2 (LB) and GPS. The LadyBug2 spherical digital video camera system has six cameras (Ci). [R, T]_L^{Ci} represent the translation and rotation between the LIDAR frame and the six camera frames of the LadyBug2.
approach uses 3D spheres as markers [12]. With prior knowledge of the sphere radius, the 3D coordinates of the center of the sphere can be estimated accurately. The sphere's center can be calculated based on the image. The calibration between sensors is carried out using the correspondence between the spheres digitized by the scanner and the camera. The disadvantages of this method are that: (1) the feature selection is time-consuming; (2) the number of reconstructed points on the sphere must be sufficient to approximate a sphere; (3) the calibration scenario should be the same for both sensors; (4) it assumes that the intrinsic parameters of the laser are previously calibrated. The calibration proposed in this paper uses a flat pattern. The pattern dimensions were calculated using a FARO Laser Tracker X V2® device, whose accuracy is 30 µm. The pattern flatness is calculated using the same laser tracker. Due to the geometric features of the pattern, both sensors acquire the whole pattern, avoiding blind spots for either of them. The corners of the pattern extracted from the images are calculated at subpixel precision. Using the pattern scanned by the LIDAR, and knowing the exact measurements, we interpolate the intersection of the corners to millimeter accuracy.
2. Platform projection model

In this section we describe the geometric models of a multi-sensor system and the relationships between the sensors' coordinate systems. In the presented case the system is composed of two types of sensors, a Velodyne HDL-64E laser scanner and a Point Grey LadyBug2 digital spherical camera. The laser scanner operates by pulsing a laser diode for a short duration (typically 4 ns) and precisely measuring the time-of-flight to a target object. The camera system is a high-resolution omnidirectional sensor; it has six 0.8-Megapixel cameras, with five CCDs positioned forming a ring seeing outwards and one more looking up, which enables the system to collect video from more than 75% of the full sphere. The goal of the method is to find a homogeneous transformation between the pinhole cameras and the LIDAR in order to fuse the measurements from both sensors in digitized urban environment applications. Data is collected by a mobile platform mounted atop a vehicle, shown in Fig. 1.
2.1. Panoramic camera model

The six camera images of the LadyBug2 are represented by Ci (i = 1, ..., 6). The 3D points acquired by the LIDAR (X_L) are transformed from the LIDAR frame to the camera frames by [R_L^{Ci}, T_L^{Ci}], called the extrinsic parameters (see Fig. 2). A 2D point in camera Ci is denoted by u^{Ci} = [u^{Ci} v^{Ci}]^T. A 3D point in the LIDAR frame is denoted by X_L = [X^L Y^L Z^L]^T. We use x̂ to denote the augmented vector obtained by adding 1 as the last element: û^{Ci} = [u^{Ci} v^{Ci} 1]^T and X̂_L = [X^L Y^L Z^L 1]^T. A camera is modeled by the usual pinhole camera model: the image u^{Ci} of a 3D point X_L is formed by an optical ray from X_L passing through the optical center Ci and intersecting the image plane. The relationship between the 3D point X_L and its image projection u^{Ci} is given by Eq. (1):

$$ s\,\hat{u}^{C_i} = A^{C_i}\,[R_L^{C_i}, T_L^{C_i}]\,\hat{X}_L = P_L^{C_i}\,\hat{X}_L, \qquad A^{C_i} = \begin{bmatrix} -k_u f & 0 & u_0 & 0 \\ 0 & k_v f & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \quad (1) $$

where s is an arbitrary scale factor, [R_L^{Ci}, T_L^{Ci}] are the so-called extrinsic parameters, which represent the rotation and translation relating a point in the LIDAR coordinate system L to the camera coordinate system Ci, and A^{Ci} is the intrinsic matrix of camera Ci, formed by (u_0, v_0), the coordinates of the image center, and f k_u (α) and f k_v (β), the scale factors along the image axes u and v. The 3 × 4 matrix P_L^{Ci} = A^{Ci}[R_L^{Ci}, T_L^{Ci}] is called the camera projection matrix, which subsumes both intrinsic and extrinsic parameters.

Fig. 2. LIDAR frame L and the six pinhole camera model.

2.2. LIDAR model

We compute a standard model for the 3D data capture process, which defines a spherical global coordinate system whose origin is the location of the LIDAR. The three-dimensional data is then modeled as AllPointCloud = f(r, θ, φ) with respect to the origin, where r is the radius of the surface, θ the colatitude or zenith angle, and φ the azimuthal angle on the unit sphere. The development of the mathematical model for the captured 3D points is based on the transformation of the spherical coordinates for each position of the unit sphere into Cartesian coordinates. The key point of this transformation is that every involved parameter includes a perturbation, as defined in Eq. (2). The model is also graphically represented in Fig. 3.

Fig. 3. Disturbed parameters from Velodyne LIDAR nominal calibration values [14].

$$ \begin{aligned} x' &= d'_x \sin(\theta + \Delta\theta) - h_{OSC}\cos(\theta + \Delta\theta), \\ y' &= d'_x \cos(\theta + \Delta\theta) + h_{OSC}\sin(\theta + \Delta\theta), \\ z' &= (d_s + \Delta d_s)\sin(\phi + \Delta\phi) + v_{OSC}\cos(\phi + \Delta\phi), \end{aligned} \quad (2) $$

where d'_x = (d_s + Δd_s) cos(φ + Δφ) − v_OSC sin(φ + Δφ). For each laser i, the above equation can be rewritten as

$$ \begin{aligned} x'_i &= d'_x \sin(\theta + \Delta\theta_i) - (h_{OSC})_i\cos(\theta + \Delta\theta_i), \\ y'_i &= d'_x \cos(\theta + \Delta\theta_i) + (h_{OSC})_i\sin(\theta + \Delta\theta_i), \\ z'_i &= (d_{s_i} + \Delta d_{s_i})\sin(\phi_i + \Delta\phi_i) + (v_{OSC})_i\cos(\phi_i + \Delta\phi_i), \\ X_i &= [x'_i \; y'_i \; z'_i] = g_i(\theta, \phi_i, d_{s_i}, \Delta\theta_i, \Delta\phi_i, (v_{OSC})_i, \Delta d_{s_i}, (h_{OSC})_i), \end{aligned} \quad (3) $$

where d'_x = (d_{s_i} + Δd_{s_i}) cos(φ_i + Δφ_i) − (v_OSC)_i sin(φ_i + Δφ_i), i = 1, 2, ..., 64. The mathematical model considers five intrinsic parameters for every laser: {Δθ_i, (h_OSC)_i, (v_OSC)_i, Δφ_i, Δd_{s_i}}. For the 64 lasers, the model computes 320 intrinsic parameters.
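To make the LIDAR model concrete, the following Python sketch converts one raw range/angle measurement of laser i into Cartesian coordinates using the perturbed parameters of Eqs. (2)–(3). It is only an illustration of the equations above; the argument names and the example values are assumptions, not the sensor's actual calibration constants.

```python
import numpy as np

def lidar_point(theta, phi_i, ds_i, delta_theta_i, delta_phi_i,
                v_osc_i, delta_ds_i, h_osc_i):
    """Eqs. (2)-(3): map one raw measurement of laser i to Cartesian XYZ.

    theta  : azimuth (encoder) angle of the spinning head [rad]
    phi_i  : nominal vertical angle of laser i [rad]
    ds_i   : measured distance [m]
    The remaining arguments are the perturbed intrinsic parameters of laser i.
    """
    # corrected distance projected onto the horizontal plane (d'_x in Eq. (3))
    dx = (ds_i + delta_ds_i) * np.cos(phi_i + delta_phi_i) \
         - v_osc_i * np.sin(phi_i + delta_phi_i)
    x = dx * np.sin(theta + delta_theta_i) - h_osc_i * np.cos(theta + delta_theta_i)
    y = dx * np.cos(theta + delta_theta_i) + h_osc_i * np.sin(theta + delta_theta_i)
    z = (ds_i + delta_ds_i) * np.sin(phi_i + delta_phi_i) \
        + v_osc_i * np.cos(phi_i + delta_phi_i)
    return np.array([x, y, z])

# Example with made-up values (not factory calibration data):
p = lidar_point(theta=np.deg2rad(30.0), phi_i=np.deg2rad(-8.0), ds_i=12.4,
                delta_theta_i=1e-3, delta_phi_i=-5e-4,
                v_osc_i=0.02, delta_ds_i=0.015, h_osc_i=0.026)
print(p)
```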
3. Sensors calibration

In this section we will show how the algorithms for the extrinsic and intrinsic parameters' calculation were developed. The six cameras of the LadyBug2 are calibrated with Zhang's method [15]. The LIDAR is calibrated using the method proposed in [14]. We use a pattern which facilitates the extraction of the same point from camera and LIDAR data. This pattern can be
spotted near the center of the image in Figs. 7 and 8. The pattern is composed of interlaced white rhombuses and rhombus voids. The voids act as a black background for the LIDAR's infrared lasers, and also for the spherical camera because there is no illumination behind the pattern. Fig. 4 shows the relationship between the pattern Wi, camera Ci and LIDAR L. The transformation between the LIDAR and the cameras shown in Eq. (1) is computed by

$$ [R_L^{C_i}, T_L^{C_i}] = [R, T]_W^{C_i} \cdot \left([R, T]_W^{L}\right)^{-1}. \quad (4) $$

Fig. 4. The extrinsic parameters [R, T]_W^{Ci} are computed by Zhang's method and [R, T]_W^L are computed with Algorithm 1.

Algorithm 1. Here we show a set of algorithms to compute intrinsic and extrinsic parameters for camera and LIDAR. In the first algorithm LIDAR data is obtained; the points that belong to the pattern are extracted to be used in the computation of the extrinsic parameters. In the second algorithm, the extrinsic and intrinsic camera parameters are computed. Because the rigid transformation of each sensor to the world is already known, in the third algorithm we use that information to compute the camera–LIDAR transformation. For each camera Ci do:

Algorithm to get the extrinsic LIDAR parameters.
  Input: raw point cloud data
  Output: extrinsic parameters between LIDAR and world framework
  for all nRawPointCloud do
    nPointCloud ← transform_to_XYZ(nRawPointCloud)
    (nPatternPointCloud, Π) ← RANSAC(nPointCloud)
    nProjPattern ← proj_to_normal_pattern(nPatternPointCloud)
    [R, T]_W^L ← MaximizingFunction(nProjPattern, Π)
  end for

Algorithm to get the extrinsic and intrinsic LadyBug2 parameters.
  Input: pattern images
  Output: intrinsic and extrinsic parameters between camera and world framework
  for all nImage do
    [R, T]_W^{Ci}, A^{Ci} ← use Bouguet's camera calibration toolbox
  end for

LIDAR and LadyBug2 calibration.
  Input: extrinsic parameters for both sensors
  Output: extrinsic parameters between camera and LIDAR
  for all nImage do
    [R_L^{Ci}, T_L^{Ci}] = [R, T]_W^{Ci} · ([R, T]_W^L)^{-1}
  end for

In Algorithm 1, the function transform_to_XYZ transforms the LIDAR data to X, Y, Z points using the method proposed in [14]. We then use the RANdom SAmple Consensus (RANSAC) algorithm (a widely used algorithm for plane detection in point cloud data) to find the plane Π. Next, the proj_to_normal_pattern algorithm projects the points nPatternPointCloud onto the plane Π. In the function MaximizingFunction, we build an artificial calibration pattern (Pattern(radius, θ)) using the dimensions of the calibration pattern. The artificial pattern can be moved on the plane Π, rotating by an angle θ and moving a distance radius in the direction of rotation. The points of the projected pattern (nProjPattern) are compared with the artificial pattern plane using the maximization function (Eq. (5)): we calculate the distance (radius) and angle (θ) in the plane where the reference pattern is placed. Using the points projected onto the plane Π (nProjPattern), the maximizing function is defined as

$$ (\theta, radius) = \max_{\theta,\,radius} \sum_{nProjPattern} f(nProjPattern) $$

$$ f(nProjPattern) = \begin{cases} 1 & \text{if } \Delta \begin{bmatrix} 2n \\ 2m \end{bmatrix} \le \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} nProjPattern + radius \begin{bmatrix} \cos\theta \\ -\sin\theta \end{bmatrix} \le \Delta \begin{bmatrix} 2n+1 \\ 2m+1 \end{bmatrix} \\ 0 & \text{elsewhere} \end{cases} \quad (5) $$

where m = 0, 1, 2, 3 and n = 0, 1, 2, 3 are the square numbers of the pattern in the x and y directions, and Δ is the size of a square. Using the plane equation Π and (radius, θ) we compute the [R, T]_W^L parameters.
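The following Python sketch illustrates the idea behind the LIDAR part of Algorithm 1 and Eq. (5): fit the pattern plane, express the pattern points in 2D plane coordinates, and grid-search the (θ, radius) placement of a synthetic grid that covers the most points. It is a simplified illustration, not the authors' implementation; the plane fit is a least-squares stand-in for RANSAC, the cell test only approximates the interlaced-rhombus geometry of Eq. (5), and the pattern dimensions are assumed values.

```python
import numpy as np

DELTA = 0.20          # assumed square size of the pattern (m)
N_CELLS = 4           # squares per axis (m, n = 0..3 in Eq. (5))

def fit_plane(points):
    """Least-squares plane through Nx3 points (stand-in for RANSAC)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[2]            # vt[2] = direction of smallest variance

def project_to_plane_2d(points, centroid, normal):
    """Express the 3D points in a 2D basis lying on the fitted plane."""
    e1 = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-6:     # plane nearly horizontal: pick another axis
        e1 = np.cross(normal, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(normal, e1)
    local = points - centroid
    return np.column_stack([local @ e1, local @ e2])

def count_matches(pts2d, theta, radius):
    """Count points landing on 'white' cells of a synthetic checker grid
    placed at rotation theta and offset radius (simplified Eq. (5))."""
    rot = np.array([[np.cos(theta),  np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    shifted = pts2d @ rot.T + radius * np.array([np.cos(theta), -np.sin(theta)])
    cell = np.floor(shifted / DELTA).astype(int)
    inside = np.all((cell >= 0) & (cell < 2 * N_CELLS), axis=1)
    white = (cell.sum(axis=1) % 2 == 0)            # interlaced cells
    return int(np.count_nonzero(inside & white))

def maximizing_function(pts2d):
    """Brute-force search over (theta, radius), as in Algorithm 1."""
    best = (0.0, 0.0, -1)
    for theta in np.deg2rad(np.arange(-30, 30, 0.5)):
        for radius in np.arange(-0.15, 0.15, 0.005):
            score = count_matches(pts2d, theta, radius)
            if score > best[2]:
                best = (theta, radius, score)
    return best

# usage with a fake pattern scan (replace with the real nPatternPointCloud):
rng = np.random.default_rng(0)
fake_scan = np.column_stack([rng.uniform(0, 1.6, 500),
                             rng.uniform(0, 1.6, 500),
                             np.full(500, 2.0)])   # a plane at z = 2 m
c, n = fit_plane(fake_scan)
theta, radius, score = maximizing_function(project_to_plane_2d(fake_scan, c, n))
print(theta, radius, score)
```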
4. Uncertainty analysis

We present in this section a unified approach to uncertainty analysis and error propagation through the calibration algorithms of the vision-based reconstruction system, since it estimates the values of the sensors' model parameters. According to [16], uncertainty analysis refers to the determination of the uncertainty in analysis results that derives from uncertainty in analysis inputs, and sensitivity analysis refers to the determination of the contributions of the uncertainty in individual analysis inputs to the uncertainty in analysis results. First of all, we have to clearly define the model used to represent uncertainty. This model is based on:

1. The manner in which the uncertainty in individual analysis inputs is characterized.
2. The procedures that are used to propagate uncertainty through the analysis.
3. The procedures that are available for sensitivity analysis.
4. The interpretations and representations that are available for the analysis results of interest.

The importance of small errors in the localization of feature points and the influence of the incorrect estimation of some camera parameters on the final calibration are addressed in this research. Here we also underline the relevance of uncertainty
assessment for the calibration parameters, which increases in high-accuracy dimensional measurements and data fusion. Since the noise from the digital camera and the noise from environmental conditions are random, the extracted feature points are also considered random measurements. Camera parameters are obtained by minimizing an objective function. Vision-based measurement is a process that transforms the target from a world coordinate system to the camera coordinate system Ci according to the camera parameters. Therefore, the variations in the estimated parameters affect all the measurements that are subjected to this transformation, and the knowledge of the camera parameters' uncertainties plays a key role in the design of measurement systems with high-precision requirements [17,18]. According to [19], to perform this analysis it is necessary to know the value of the uncertainty of the input quantities:

1. uncertainty of the input pixel localization (σ_2d),
2. uncertainty of the target point coordinates (σ_3d).

σ_2d depends on the image acquisition system and the algorithm used to extract the pixel coordinates. Its uncertainty is due to many factors (e.g. lens imperfections) and it propagates through subsequent algorithms, such as edge detectors. Although an analytical evaluation of the uncertainty of the pixel coordinates would not be straightforward, it can be easily determined experimentally. The experimental procedure has two steps: (1) imaging the target and (2) localization of the feature points on the image. These two steps are repeated several times. Then, σ_2d is evaluated as the standard deviation of the pixel coordinates of the feature points. σ_3d depends on the target's physical characteristics and on the instrument used to measure the coordinates of its feature points. The task of camera calibration is to estimate the unknown camera parameters from the n pairs of target points and feature points by solving Eq. (6) as a non-linear least-squares problem:

$$ \min F(Y, X) = \sum_{i=1}^{n} \left\| \begin{bmatrix} u^{C_i} \\ v^{C_i} \end{bmatrix} - \begin{bmatrix} u \\ v \end{bmatrix} \right\|^{2} \quad (6) $$

where X = U_1, U_2, ..., U_n, V_1, V_2, ..., V_n (i.e. the feature points in the image frame) and Y are the intrinsic and extrinsic parameters, respectively; thus

$$ u = \alpha \frac{\vec{r}_1 X_L + T_x}{\vec{r}_3 X_L + T_z} + u_0, \qquad v = \beta \frac{\vec{r}_2 X_L + T_y}{\vec{r}_3 X_L + T_z} + v_0 $$

(u, v) represents the pixel location of each 3D point projected onto the image, [r⃗_1, r⃗_2, r⃗_3]^T represents the rotation matrix in Euler angles, and [T_x, T_y, T_z]^T is the translation vector. X_L is a 3D point in the LIDAR frame. (u_0, v_0) are the coordinates of the image center. (u^{Ci}, v^{Ci})^T are the points extracted from the image.
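As an illustration of Eq. (6), the sketch below builds the reprojection residual for a set of 3D points and refines the pose parameters with scipy.optimize.least_squares. It is a minimal example under assumptions of our own making: the parameterization (Euler angles plus translation, fixed intrinsics) and the synthetic data are chosen for the sake of the demonstration and are not the paper's exact optimization setup.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

alpha, beta = 530.0, 527.0       # assumed focal scale factors (pixels)
u0, v0 = 360.0, 465.0            # assumed image center (pixels)

def project(params, X):
    """Pinhole projection of Nx3 points X for pose params = (rx, ry, rz, tx, ty, tz)."""
    R = Rotation.from_euler('xyz', params[:3]).as_matrix()
    t = params[3:]
    Xc = X @ R.T + t                          # points in the camera frame
    u = alpha * Xc[:, 0] / Xc[:, 2] + u0      # projection model of Eq. (6)
    v = beta  * Xc[:, 1] / Xc[:, 2] + v0
    return np.column_stack([u, v])

def residuals(params, X, uv_measured):
    """Stacked reprojection errors, the quantity minimized in Eq. (6)."""
    return (project(params, X) - uv_measured).ravel()

# synthetic experiment: known pose, noisy detections, then re-estimate the pose
rng = np.random.default_rng(1)
X_world = rng.uniform([-1, -1, 4], [1, 1, 8], size=(40, 3))
true_pose = np.array([0.05, -0.02, 0.1, 0.3, -0.1, 0.2])
uv = project(true_pose, X_world) + rng.normal(0, 0.5, size=(40, 2))  # sigma_2d ~ 0.5 px

fit = least_squares(residuals, x0=np.zeros(6), args=(X_world, uv))
print("estimated pose:", fit.x)
```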
4.1. Uncertainty model

Since variations in the estimated parameters affect all the measurements, which in addition are subject to noise and other external factors, knowing the camera parameters' uncertainties plays a key role in the design of high-precision measurement systems. In [17] the error propagation of the objective function (Eq. (6)) is defined as in Eq. (7):

$$ \Delta Y \approx - \left( \frac{\partial g(Y, X)}{\partial Y} \right)^{-1} \frac{\partial g(Y, X)}{\partial X}\, \Delta X \quad (7) $$

where ΔX are the variations of the feature points, and ΔY represents the variations of the estimated parameters. In other words, this relationship links the error of the feature points to the calculated parameters. Once we know the error propagation of the feature points and parameters, we need to compare them with their theoretical values to obtain the uncertainty. For the parameters we do that by using Eq. (8):

$$ \Sigma_{\Delta Y} = \left( \frac{\partial g}{\partial Y} \right)^{-1} \frac{\partial g}{\partial X}\, \Sigma_{\Delta X} \left( \frac{\partial g}{\partial X} \right)^{T} \left( \left( \frac{\partial g}{\partial Y} \right)^{-1} \right)^{T}. \quad (8) $$
4.2. Uncertainty validation

To validate the uncertainty, the variances of the camera parameters have been estimated by Monte Carlo computation. We started by adding Gaussian noise to the extracted feature points. Then, these noisy points were used to determine the camera parameters. The simulation was performed 2000 times and the dispersion of each camera parameter was obtained by calculating the standard deviation. Algorithm 2 shows how to compute the uncertainty, where Pd are the differences between the coordinates of the feature points in the camera system and the coordinates of the feature points in the world system transformed with the original intrinsic and extrinsic parameters. tam is the size of the camera-system feature point matrix; the randomNoise function generates normally distributed Gaussian numbers. Pn are the noisy feature points, x_{p,i} holds the newly calculated parameters after each iteration, and σ_2d^p are the uncertainty values.

Algorithm 2. Algorithm used to compute the uncertainty dispersion. Pd represents the difference between the projected feature world points and the image feature points. Pi are the feature points in the world frame projected onto the image and Pe are the world feature points. PCS are the feature points in the camera frame.

Monte Carlo simulation
  Input: world and image feature points
  Output: intrinsic parameters matrix A^{Ci}
  ξ ← std(Pd)
  tam ← sizeof(PCS)
  for all i ← 1 : n do
    nr ← randomNoise(tam)
    N ← stdn(nr)
    Pn ← Pi + ξ · N
    x_{p,i} ← getCalibration(Pn, Pe)
  end for
  σ_2d^p = std(x_p)
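A minimal Python sketch of this Monte Carlo procedure (Algorithm 2) is shown below. The getCalibration step is represented by a hypothetical estimate_parameters callback, since the actual calibration routine is the one described in Section 3; the noise scale ξ is taken from the residuals Pd exactly as in the pseudocode.

```python
import numpy as np

def monte_carlo_dispersion(Pi, Pe, Pd, estimate_parameters, n_iter=2000, seed=0):
    """Algorithm 2 sketch: repeatedly perturb the image feature points with
    Gaussian noise scaled by std(Pd) and re-run the calibration.

    Pi : (N, 2) feature points projected onto the image
    Pe : (N, 3) feature points in the world frame
    Pd : (N, 2) residuals between measured and projected feature points
    estimate_parameters(Pn, Pe) -> 1D parameter vector (hypothetical callback)
    Returns the per-parameter standard deviation over all iterations.
    """
    rng = np.random.default_rng(seed)
    xi = Pd.std()                                # noise scale from the residuals
    samples = []
    for _ in range(n_iter):
        noise = rng.standard_normal(Pi.shape)    # randomNoise / stdn
        Pn = Pi + xi * noise                     # noisy feature points
        samples.append(estimate_parameters(Pn, Pe))
    return np.std(np.asarray(samples), axis=0)   # dispersion of each parameter
```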
5. Results

Results are presented in four stages: the computation of the extrinsic parameters of the LIDAR, the computation of the extrinsic and intrinsic parameters of the cameras Ci of the LadyBug2, the extrinsic parameters between each camera and the LIDAR, and the uncertainty analysis. For practical purposes, we show the results for only one camera; however, the LadyBug2 has six cameras.

5.1. Extrinsic LIDAR parameters
To find the LIDAR extrinsic parameters a pattern consisting of rhombi is used. The dimensions of the real pattern are known; thus a synthetic pattern referenced to the LIDAR origin was created.
Table 1
Uncertainty on extrinsic LIDAR parameters [R, T]_W^L.

                           mean       std
Translation (mm)   Tx     −515.29     0.55
                   Ty      165.62     1.51
                   Tz      −39.06     1.52
Rotation (rad)     roll     −1.58     0.015
                   pitch     1.08     0.011
                   yaw      −1.92     0.012

Table 2
Intrinsic camera parameters A^{Ci}.

Intrinsic parameters            Value
Focal length (mm)               (530.967, 526.773)
Image center location (px)      (360.858, 465.310)
Distortion coefficients (px²)   (−0.3819, 0.1448, 0.0229, 0.0048, 0)
Skew coefficient                0

Fig. 5. Extraction of the calibration pattern using the RANSAC algorithm and projection onto the plane Π.
Fig. 6. The maximization function computes the number of points of the real pattern that match the synthetic pattern using Eq. (5). (Axes: radius [cm], angle [degrees], number of points.)
Fig. 7. Subpixel corner extraction.
The synthetic pattern is rotated and moved, with the objective of maximizing the intersection between the acquired pattern and the synthetic one. Fig. 5 shows the points extracted (blue points) using the RANSAC algorithm. They are projected onto the plane Π using the proj_to_normal_pattern algorithm (described in Section 3). The red circles in Fig. 5 represent the position of the artificial pattern plane. This position is computed with the MaximizingFunction algorithm (described in Section 3). The MaximizingFunction computes the matches between the real pattern data and the synthetic pattern; this is shown in Fig. 6. Rotation and translation are performed on the plane Π, and the translation is carried out in the direction of θ. The rigid transformation of a 3D point in the LIDAR frame, L, into the world frame (pattern frame) is defined by the rotation matrix and translation vector ([R, T]_W^L):

$$ R_W^L = \begin{bmatrix} -0.2625 & -0.1525 & 0.9527 \\ -0.7384 & -0.6037 & -0.3001 \\ 0.6210 & -0.7824 & -0.0458 \end{bmatrix}; \qquad T_W^L = \begin{bmatrix} -515.35 \\ 165.45 \\ -38.83 \end{bmatrix}. $$

Table 1 shows the uncertainty on the extrinsic parameters with respect to the world frame. Using a Monte Carlo simulation, noise was added to the feature points to evaluate the behavior of the error in these two parameters.

5.2. Extrinsic and intrinsic LadyBug2 parameters

To perform this calculation we used the Matlab camera calibration toolbox [20]. The intrinsic camera parameters are given in Table 2. Fig. 7 shows the subpixel extraction of the corner points. The pattern plane position is the same as that of the pattern plane acquired by the LIDAR in Fig. 5. The [R, T]_W^{Ci} are

$$ R_W^{C_i} = \begin{bmatrix} 0.1852 & 0.2406 & 0.9527 \\ 0.6941 & 0.6541 & -0.3002 \\ 0.6955 & -0.7170 & 0.0458 \end{bmatrix}; \qquad T_W^{C_i} = \begin{bmatrix} -573.61 \\ 20.69 \\ -47.14 \end{bmatrix}. $$

5.3. LIDAR and LadyBug2 calibration

The rigid transformation between the camera and the LIDAR frame was computed using Eq. (4). In Fig. 8, the pattern acquired by the LIDAR is transformed onto the image frame using the extrinsic parameters [R_L^{Ci}, T_L^{Ci}]. This transformation allows us to reference in the camera the points acquired by the LIDAR. The projection is completed using the intrinsic camera parameters A^{Ci}.
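To make the fusion step concrete, the following sketch composes the camera–LIDAR transformation from the two world-referenced poses (Eq. (4)) and projects LIDAR points onto the image with an intrinsic matrix built from Table 2 (ignoring distortion and the sign convention of Eq. (1)). It is a schematic illustration, not the authors' code; the homogeneous-matrix helper and the example LIDAR point are our own.

```python
import numpy as np

def homogeneous(R, T):
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = T
    return H

def lidar_to_camera_transform(R_W_C, T_W_C, R_W_L, T_W_L):
    """Eq. (4): [R,T]_L^Ci = [R,T]_W^Ci * ([R,T]_W^L)^-1."""
    return homogeneous(R_W_C, T_W_C) @ np.linalg.inv(homogeneous(R_W_L, T_W_L))

def project_to_image(points_lidar, H_L_C, K):
    """Transform Nx3 LIDAR points into camera Ci and project with intrinsics K."""
    pts_h = np.column_stack([points_lidar, np.ones(len(points_lidar))])
    cam = (H_L_C @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0                 # keep only points in front of the camera
    uv = (K @ cam[in_front].T).T
    return uv[:, :2] / uv[:, 2:3]            # pixel coordinates (u, v)

# example with the matrices reported in Section 5 (translations in mm):
R_W_L = np.array([[-0.2625, -0.1525,  0.9527],
                  [-0.7384, -0.6037, -0.3001],
                  [ 0.6210, -0.7824, -0.0458]])
T_W_L = np.array([-515.35, 165.45, -38.83])
R_W_C = np.array([[ 0.1852,  0.2406,  0.9527],
                  [ 0.6941,  0.6541, -0.3002],
                  [ 0.6955, -0.7170,  0.0458]])
T_W_C = np.array([-573.61, 20.69, -47.14])
K = np.array([[530.967, 0.0, 360.858],
              [0.0, 526.773, 465.310],
              [0.0, 0.0, 1.0]])

H = lidar_to_camera_transform(R_W_C, T_W_C, R_W_L, T_W_L)
print(np.round(H, 3))
print(project_to_image(np.array([[1000.0, 200.0, -50.0]]), H, K))
```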
Fig. 8. (a) The red points are acquired by the LIDAR and projected onto the image. (b) Zoom-in over one section of the image target. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Table 3
Error projection of 3D points onto the image. The error represents the percentage of 3D points that do not match the test object in the image (see Fig. 9).

Distance (cm)   Total points of test object   Error (%)
989             1211                          5.36
645             1284                          6.23
430             1332                          6.45

Fig. 9. Image and point cloud test data matched at different distances. (a) 9.89 m, (b) 6.45 m, (c) 4.30 m.

Fig. 10. The blue line shows the distances between the test object and the LIDAR for which the error is minimum. The green line plots the values of Table 3. The red triangle represents the linear interpolation of the deviation of the LIDAR scanned plane at the same distances as the data shown in Table 3. The tendency of the LIDAR characterization with respect to Table 3 is the same. (Axes: distance (cm) on the abscissa; standard deviation (cm) and error (%) on the ordinates.) (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

To evaluate the calibration algorithm between the LIDAR and the camera transformation, we acquired a wooden board with both sensors. Fig. 9 shows the LIDAR data projected onto an image of the wooden board (our test object). The board is then segmented in the point cloud and in the image. The 3D data of the board is projected onto the image data and the matching is computed. Table 3 shows how the distance affects the matching accuracy of the algorithm. From the table it is clear that the projection quality has a direct relationship with distance: as the distance increases, the error decreases. To compute the deviation of a LIDAR scanned plane, we made several acquisitions of a plane at different distances. For each acquisition, the captured points are used to find the plane equation. The distances from the points to the plane represent the acquisition uncertainties of the LIDAR when the sensor digitizes the plane, and they depend on the LIDAR–plane distance. The blue line in Fig. 10 shows this relationship, and we observe that there is a range of distances between the test object and the LIDAR for which the error is minimum. The minimum return distance for the HDL-64E sensor is approximately 0.9 m; the sensor ignores returns closer than this. In Fig. 10, the minimum plane distance is 3 m. The values of Table 3 are plotted in Fig. 10 as a green line. The red triangle represents the linear interpolation of the deviation of the LIDAR scanned plane at the same distances as the data shown in Table 3 (989, 645 and 430 cm). The tendency of the LIDAR characterization with respect
to Table 3 is the same: when the distance increases, the LIDAR uncertainty and the projection error decrease. This tendency may be due to the distance of the walls used for calibration, which was in the range of 1000–2000 cm. Table 4 shows the uncertainty of the extrinsic LIDAR-frame parameters with respect to the camera frame. Previously, Gaussian noise was added to the LIDAR and camera data; using the respective parameters for each sensor we compute the rigid transformation between both sensors, as described in Algorithm 1.
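The plane-deviation measurement described above can be reproduced with a short sketch: fit a plane to each acquisition and take the standard deviation of the point-to-plane distances as the LIDAR spread at that range. This is our own minimal illustration using a plain least-squares fit and synthetic wall scans; the paper does not specify the exact fitting code.

```python
import numpy as np

def plane_deviation(points):
    """Fit a plane to an Nx3 scan of a flat target and return the standard
    deviation of the point-to-plane distances (the LIDAR acquisition spread)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[2]                             # plane normal (unit vector)
    distances = (points - centroid) @ normal   # signed point-to-plane distances
    return distances.std()

rng = np.random.default_rng(2)

def fake_wall_scan(distance_m, noise_m, n=2000):
    """Synthetic scan of a vertical wall at the given distance (illustration only)."""
    x = rng.uniform(-1.0, 1.0, n)
    z = rng.uniform(0.0, 2.0, n)
    y = np.full(n, distance_m) + rng.normal(0.0, noise_m, n)
    return np.column_stack([x, y, z])

for d, noise in [(3.0, 0.008), (10.0, 0.02), (25.0, 0.03)]:
    pts = fake_wall_scan(d, noise)
    print(f"distance {d:4.1f} m -> plane std {plane_deviation(pts) * 100:.2f} cm")
```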
Fig. 11. Simulated camera parameters' behavior while Gaussian noise is added: (a) translation (std in mm, per X, Y, Z axis), (b) rotation (std in rad, for roll, pitch, yaw), (c) focal length (std in mm, per axis), (d) image center (std in pixels, per axis), each plotted against the std of the added Gaussian noise. The ordinate axis represents the standard deviation of 2000 iterations using Monte Carlo simulation.
Table 4
Uncertainty of LIDAR–LadyBug2 extrinsic parameters [R, T]_L^{Ci}.

                           mean       std
Translation (mm)   Tx     −179.13     0.0003
                   Ty       20.26     0.0034
                   Tz      409.45     0.0034
Rotation (rad)     roll     −2.00     0.049
                   pitch    −2.18     0.095 × 10⁻⁶
                   yaw       0.25     0.071
5.4. Uncertainty analysis

According to the standard [21], uncertainty indicates the interval of values that the quantity to be measured (the measurand) may assume after all systematic biases have been corrected. When data affected by uncertainty is used as input to a numerical algorithm, the standard uncertainty of the algorithm's result can be evaluated either with an a posteriori approach, i.e., using statistical analysis of a sample of results, or with an a priori or analytical approach, i.e., developing analytical relationships describing how uncertainty propagates through the algorithm from input to output. Because the latter requires only one run of the algorithm, it is the preferred
approach when measurement time is limited, such as in online measurements. In this work we applied an a posteriori approach. Results obtained from a Monte Carlo simulation are shown in Table 5, in which we computed the extrinsic parameters [R, T]_W^{Ci} and the intrinsic parameters: image center (u_0, v_0), focal length (α, β), and distortion coefficients. Fig. 11 shows the behavior of the camera parameters after the Monte Carlo simulation. Taking as a basis the differences of the feature point projections onto the image frame, Gaussian noise was added to the feature points and a simulation with 2000 iterations was computed for each noise value. Fig. 11(a) shows that the Z-axis error tends to be larger than in the other axes. Fig. 11(c) shows that the error tends to be uniform and incremental for both axes while the Gaussian noise also increases.

5.5. Error propagation

To know how the random noise (ΔX) on the feature points acts on the unobserved vector (X) of feature points and consequently produces the observed (real) vector X + ΔX, we derive the error propagation from Eq. (7); this equation describes the dependence of the error in each parameter on the final measured quantities for the intrinsic and extrinsic parameters. We define the ideal parameter input vector (Y) and the ideal feature point input vector (X), and the two real observed vectors. These two vectors are related through an optimization function F(Y, X).
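The first-order propagation of Eqs. (7)–(8) can be evaluated numerically once the optimality condition g(Y, X) is available. The sketch below approximates the required Jacobians with finite differences and returns both the perturbation ΔY and the covariance Σ_ΔY; it is a generic illustration under the assumption that a callable g(Y, X) is provided (the toy g_toy below is ours), not the authors' implementation.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Forward-difference Jacobian of func with respect to the 1D array x."""
    f0 = np.asarray(func(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(func(xp)) - f0) / eps
    return J

def propagate(g, Y, X, dX, Sigma_X):
    """Eq. (7): dY ~ -(dg/dY)^-1 (dg/dX) dX   and
       Eq. (8): Sigma_Y = (dg/dY)^-1 (dg/dX) Sigma_X (dg/dX)^T ((dg/dY)^-1)^T."""
    dg_dY = numerical_jacobian(lambda y: g(y, X), Y)
    dg_dX = numerical_jacobian(lambda x: g(Y, x), X)
    A = np.linalg.solve(dg_dY, dg_dX)          # (dg/dY)^-1 (dg/dX)
    dY = -A @ dX
    Sigma_Y = A @ Sigma_X @ A.T
    return dY, Sigma_Y

# toy usage: g stands in for the gradient conditions of the objective F(Y, X)
def g_toy(Y, X):
    return 2.0 * Y - X[:2] - 0.5 * X[2:]

Y0 = np.array([1.0, -0.5])
X0 = np.array([0.2, 0.1, -0.3, 0.4])
dY, Sigma_Y = propagate(g_toy, Y0, X0, dX=0.01 * np.ones(4), Sigma_X=1e-4 * np.eye(4))
print(dY, np.diag(Sigma_Y))
```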
Table 5
Uncertainty of camera's intrinsic and extrinsic parameters.

                           mean       std
Translation (mm)   Tx     −125.82     0.0012
                   Ty      −21.50     0.0017
                   Tz      339.43     0.0075
Rotation (rad)     roll     −2.22     3.88
                   pitch    −2.14     4.37 × 10⁻⁶
                   yaw      −2.13     4.44

                           mean       std
Focal (mm)                 551.34     0.0013
                           550.81     0.0014
Image center (px)          388.61     0.0025
                           506.62     0.0029
Table 6
Analytical camera error propagation results (ΔY).

Translation (mm)   Tx      −0.0063
                   Ty       0.0144
                   Tz      −0.0129
Rotation (rad)     roll    −9.1023
                   pitch    0.2319 × 10⁻⁶
                   yaw      9.5791
Focal (mm)                 −0.0004, −0.0193
Image center (px)           0.0151, −0.0214

Error propagation relates the uncertainty of the input measurements to the perturbation of the observed Y. The measurement results are presented in Table 6. Using Eqs. (7)–(8) we calculate the propagation error and the theoretical values (covariance matrix) of the camera's intrinsic and extrinsic parameters with the values calculated with Algorithm 2. Fig. 12 shows the theoretical parameters' error behavior using the simulated values.

5.6. External factors

The 3D reconstruction error is not only affected by the sensor's uncertainties. To compute how external factors affect the reconstruction we use the flatness of the calibration pattern acquired with the LIDAR and compare it with that acquired by a FARO Laser Tracker X V2® (accuracy 30 µm) at different distances. Once all standard uncertainties are identified for a particular measurement, a combined uncertainty can be derived [22] using Eq. (9):

$$ u_c(y) = \sqrt{ \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i) } \quad (9) $$

where f is the functional relationship between the output estimate y and the input estimates x_i on which y depends, y = f(x_1 ... x_i ... x_n); ∂f/∂x_i is the partial derivative of f with respect to the input quantity x_i, evaluated at the estimates x_i; and u(x_i) is the standard uncertainty of the input estimate x_i, equal to the positive square root of u²(x_i). Eq. (9) is the law of propagation of uncertainty. The expanded form of this uncertainty measurement can be calculated as follows:

$$ U = k \cdot u_c(y) \quad (10) $$

where k is the coverage factor, and it is recommended to use a value of k = 2, equivalent to a probability of 95%. If the systematic error is corrected after the calibration of the measuring system, all the indicated sources of uncertainty according to the ISO 14253-2:2011 standard [23] may be grouped into three categories: the standard calibration uncertainty of the object to be measured, in our case the calibration pattern (u_cal), the standard uncertainty of
the sum of errors in the measurement process (u_p), and the standard uncertainty of the material and the variations of manufacture (u_w); see Eq. (11). The ISO/TS 15530-3 standard [24] recommends computing the systematic error arithmetically rather than geometrically:

$$ U = k \cdot \sqrt{ u_{cal}^2 + u_p^2 + u_w^2 } + |b| \quad (11) $$

where b = x − x_cal is the difference between the flatness of the pattern acquired with the LIDAR and with the laser tracker used to calibrate the hybrid platform. u_cal is the standard calibration uncertainty reported in the calibration certificate of the laser tracker (30 µm). u_p includes such sources of uncertainty as: repeatability, the measurement environment, speed of measurement, and geometry of the measuring system (pattern). u_w includes such sources of uncertainty as: variations in thermal expansion, errors of pattern shape, and roughness. For this experiment we worked at the National Center of Metrology. As values for u_w we used the distance of the calibration pattern with respect to the LIDAR (l), the thermal expansion coefficient of the pattern, α = 16.2 × 10⁻⁶, and the temperature variation Δt = ±1 °C. The value of u_w is computed using Eq. (12):

$$ u_w \approx u_{\Delta t} = \frac{\partial f}{\partial \Delta t}\, u_{\Delta t}; \qquad u_{\Delta t} = \frac{\Delta t}{\sqrt{3}}; \qquad \Delta t = \frac{\Delta l}{l \cdot \alpha} $$

$$ u_{\Delta t} = l \cdot \alpha \frac{\Delta t}{\sqrt{3}} = 2.06 \times 10^{-6} = u_w. \quad (12) $$
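A small numeric sketch of Eqs. (9)–(12) is given below; it combines the three standard uncertainties and the bias term into the expanded uncertainty U. The u_p value, the flatness bias and the pattern-to-LIDAR distance are placeholders chosen for illustration, since the paper reports only u_cal, α and Δt explicitly.

```python
import numpy as np

def combined_uncertainty(u_components):
    """Eq. (9) for directly additive inputs: root-sum-of-squares of the
    standard uncertainties (all sensitivity coefficients taken as 1)."""
    return float(np.sqrt(np.sum(np.square(u_components))))

def expanded_uncertainty(u_cal, u_p, u_w, bias, k=2.0):
    """Eq. (11): U = k * sqrt(u_cal^2 + u_p^2 + u_w^2) + |b|."""
    return k * combined_uncertainty([u_cal, u_p, u_w]) + abs(bias)

# inputs (meters); u_p, l and the flatness bias are illustrative placeholders
u_cal = 30e-6                            # laser tracker calibration uncertainty (30 um)
alpha = 16.2e-6                          # thermal expansion coefficient (1/degC)
delta_t = 1.0                            # temperature variation (+/- 1 degC)
l = 4.0                                  # pattern-to-LIDAR distance (assumed, m)
u_w = l * alpha * delta_t / np.sqrt(3)   # Eq. (12), rectangular distribution
u_p = 2.0e-3                             # measurement-process uncertainty (assumed)
bias = 1.5e-3                            # |b|: LIDAR vs. laser-tracker flatness (assumed)

U = expanded_uncertainty(u_cal, u_p, u_w, bias)
print(f"u_w = {u_w:.2e} m,  U = {U * 1000:.2f} mm (k = 2)")
```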
As seen in Fig. 13, both the combined and the expanded uncertainty have a linear trend, which is characterized by Eq. (13); as the distance increases, the uncertainty increases:

$$ U = 0.0031 \cdot l + 0.0069 \quad (13) $$
where l is the distance from the object to the LIDAR. Results of a comparative reconstruction (under well-controlled environmental conditions, humidity and temperature) between a high-precision reference instrument, the FARO Laser Tracker X V2®, and the LIDAR used in this work showed that those environmental factors have a negligible influence. The calculated error for LIDAR measurements is practically the same for controlled and uncontrolled conditions.
Fig. 12. Theoretical camera parameters' behavior while Gaussian noise is added: (a) translation (std in mm, per X, Y, Z axis), (b) rotation (std in rad, for roll, pitch, yaw), (c) focal length (std in mm, per axis), (d) image center (std in pixels, per axis), each plotted against the std of the added Gaussian noise. The ordinate axis represents the standard deviation of 2000 iterations using Monte Carlo simulation.
Fig. 13. The combined uncertainty (u_c(y)) was computed from the external factors affecting the LIDAR at different distances. The expanded uncertainty (U) was computed by adding a factor (|b|) calculated from the flatness of the digitized calibration pattern vs. the laser tracker at different distances.

Future work has to be carried out for common materials in urban reconstructions to know the impact of external factors on our reconstructions. Meanwhile we can take [25] as a basis; the authors mention that while different wetness conditions have marginal effects on the data quality (10%–20%), ambient illumination has an influence on the measurements of the pulsed scanner. Also, intensities decrease only slightly with longer distances (up to 25 m); a stronger decrease is caused by increasing incidence angles. The wetness condition has a strong effect on the received intensities of building materials. Therefore, a classification of objects or object parts based on intensity values seems to be difficult (especially for surfaces with unknown reflectivity).

6. Conclusions
Typically, sensor calibration in a surveying application is performed once, and the same calibration is assumed to remain constant for the rest of the life of that particular sensor platform. However, for reconstruction applications where the sensors are used in rough urban terrain, the assumption that the sensors' calibration is not altered during a task is often not true. Although we should calibrate the sensors before every task, this is typically not practical if it is necessary to set up a calibration environment every time. Our method, being free from any such constraints, can easily be used to fine-tune the calibration of the sensors in situ, which makes it applicable to in-field calibration scenarios. The uncertainty analysis for a new extrinsic calibration method for a multiple-sensor platform is computed. We estimate the relationship between the input and the output quantities in an implicit form. This work is based on two commercial instruments, a model HDL-64E LIDAR from Velodyne® and a spherical CCD camera from Point Grey®, the LadyBug2. Both sensors exhibit a 360° horizontal field of view. The extrinsic parameters' uncertainties of both sensors, which arise from systematic errors, were calculated (see Table 4). By obtaining a deviation as small as 0.003 mm on the translation vector, it has been shown that the proposed calibration method is robust and reliable;
a very small rotation deviation in the order of microradians has also been obtained. These results contribute to a high level of measurement confidence, despite the complicated working conditions in which our platforms are used. One can now rely on maintaining an acceptable uncertainty range in future data fusion operations and 3D texturization. Figs. 11 and 12 show graphics for real and simulated uncertainty calculations, respectively. Both series of intrinsic and extrinsic parameters exhibit a linear behavior. Uncertainties were obtained when normally distributed N(0, σ) Gaussian noise was added to the feature points extracted from the pattern (corners). The σ parameter was the result of projecting the real-world pattern feature point coordinates onto the image plane; its value was 10 pixels. The average error for our projection algorithm was then calculated by measuring the differences between the extracted feature points of the whole set of images and those of the real pattern; this error was calculated to a value of 0.25 pixels. The LIDAR uncertainty due to external factors, such as distance to target or temperature, was also calculated (see Fig. 13). This uncertainty exhibits a linear behavior and also a direct dependence on distance; the closer the object, the smaller the uncertainty (see Eq. (13)). The capture platform has been designed to digitize urban environments, which are mainly a set of planes. For a plane reconstruction (e.g. the front of a house) located at 4 m or less from the system we found a reconstruction error of 2 cm. If we move away from that plane the error increases at a rate of 0.0031 cm/m. This value was calculated under controlled conditions; however, when working in-situ, external factors will be out of control. A proper understanding of these factors is important for carrying out the survey with the LIDAR correctly. The environmental errors are caused by ambient conditions. As the laser beam propagates, its power will be reduced due to attenuation and scattering. Adverse ambient conditions may also impair the data quality in laser scanning. A brief introduction of how the external factors reduce the quality of the LIDAR data is reported in this paper, leaving a more thorough explanation for future work. At this point we only calculated the LIDAR–camera transformation error and its distribution. In future works, using some other calibration methods, we expect to be able to minimize this uncertainty, mainly by using data from different calibration instruments. In the end we will be capable of compensating the error in 3D texturizing. We demonstrated the accuracy and efficiency of our algorithms by applying them on real data sets.

Acknowledgments

The authors wish to acknowledge the financial support for this work to the Consejo Nacional de Ciencia y Tecnología (CONACYT) through projects SEP-2005-O1-51004/25293 and to the Instituto Politécnico Nacional through project SIP-20130165. This work was carried out at the Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada (CICATA), Querétaro, México. The authors acknowledge the government agency CONACYT for the financial support through the scholarship holder 237751.

References

[1] C. Mei, P. Rives, Calibration between a central catadioptric camera and a laser range finder for robotic applications, in: Proceedings 2006 IEEE International Conference on Robotics and Automation, ICRA 2006, IEEE, 2006, pp. 532–537.
[2] J. Underwood, S. Scheding, F.
Ramos, Real-time map building with uncertainty using colour camera and scanning laser, in: Proceedings of the 2007 Australasian Conference on Robotics and Automation, 2007.
[3] A.I. García-Moreno, J.J. Gonzalez-Barbosa, F.J. Ornelas-Rodríguez, J.B. Hurtado-Ramos, A. Ramirez-Pedraza, E.A. González-Barbosa, Automatic 3D city reconstruction platform using a LIDAR and DGPS, in: Advances in Artificial Intelligence, Springer, 2013.
[4] C. Ricolfe-Viala, A.J. Sánchez-Salmerón, Uncertainty analysis of camera parameters computed with a 3D pattern, in: Proceedings of the 13th International Conference on Image Analysis and Processing, 2005.
[5] F. Mirzaei, D. Kottas, S. Roumeliotis, 3D LIDAR-camera intrinsic and extrinsic calibration: observability analysis and analytical least squares-based initialization, Int. J. Robot. Res. 31 (4) (2012) 452–467.
[6] P. Nunez, P. Drews Jr., R. Rocha, J. Dias, Data fusion calibration for a 3D laser range finder and a camera using inertial data, in: European Conference on Mobile Robots, vol. 9, 2009, p. 9.
[7] G. Pandey, J. McBride, S. Savarese, R. Eustice, Extrinsic calibration of a 3D laser scanner and an omnidirectional camera, in: Intelligent Autonomous Vehicles, vol. 7, 2010, pp. 336–341.
[8] A.I. García-Moreno, J.J. Gonzalez-Barbosa, F.J. Ornelas-Rodriguez, J.B. Hurtado-Ramos, M.N. Primo-Fuentes, LIDAR and panoramic camera extrinsic calibration approach using a pattern plane, in: Pattern Recognition, Springer, 2013.
[9] S. Schneider, M. Himmelsbach, T. Luettel, H. Wuensche, Fusing vision and LIDAR-synchronization, correction and occlusion reasoning, in: 2010 IEEE Intelligent Vehicles Symposium (IV), 2010, pp. 388–393.
[10] A. Geiger, F. Moosmann, O. Car, B. Schuster, Automatic camera and range sensor calibration using a single shot, in: 2012 IEEE International Conference on Robotics and Automation, ICRA, IEEE, 2012, pp. 3936–3943.
[11] R. Unnikrishnan, M. Hebert, Fast Extrinsic Calibration of a Laser Rangefinder to a Camera, Tech. Rep., Carnegie Mellon University, 2005.
[12] F. Rodriguez, V. Fremont, P. Bonnifait, et al., Extrinsic calibration between a multi-layer LIDAR and a camera, in: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 2008, MFI 2008, IEEE, 2008, pp. 214–219.
[13] M. Almeida, P. Dias, M. Oliveira, V. Santos, 3D-2D laser range finder calibration using a conic-based geometry shape, in: Image Analysis and Recognition, 2012, pp. 312–319.
[14] G. Atanacio-Jiménez, J.J. González-Barbosa, J.B. Hurtado-Ramos, F.J. Ornelas-Rodríguez, H. Jiménez-Hernández, R. González-Barbosa, LIDAR Velodyne HDL-64E calibration using pattern planes, Int. J. Adv. Robot. Syst. 8 (2011) 70–82.
[15] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. (2000) 1330–1334.
[16] J.C. Helton, Quantification of margins and uncertainties: conceptual and computational basis, Reliab. Eng. Syst. Saf. 96 (9) (2011) 976–1013.
[17] L. Zhu, H. Luo, X. Zhang, Uncertainty and sensitivity analysis for camera calibration, Ind. Robot. Int. J. 36 (3) (2009) 238–243.
[18] S. Kopparapu, P. Corke, The effect of noise on camera calibration parameters, Graph. Models 63 (5) (2001) 277–303.
[19] G. Di Leo, C. Liguori, A. Paolillo, Covariance propagation for the uncertainty estimation in stereo vision, IEEE Trans. Instrum. Meas. 60 (5) (2011) 1664–1673.
[20] J.Y. Bouguet, Camera calibration toolbox for MATLAB, 2008. URL: http://www.vision.caltech.edu/bouguetj/calib_doc/.
[21] Joint Committee for Guides in Metrology, Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement, Bureau International des Poids et Mesures, 2008.
[22] Uncertainty of measurement—part 3: guide to the expression of uncertainty in measurement, ISO/IEC Guide 98-3:2008, 2008.
[23] Geometrical Product Specifications (GPS)—Inspection by measurement of workpieces and measuring equipment—part 2: guidance for the estimation of uncertainty in GPS measurement, in calibration of measuring equipment and in product verification, ISO 14253-2, 2011.
[24] Geometrical Product Specifications (GPS)—Coordinate Measuring Machines (CMM): technique for determining the uncertainty of measurement—part 3: use of calibrated workpieces or measurement standards, ISO 15530-3:2011, 2011.
[25] T. Voegtle, S. Wakaluk, Effects on the measurements of the terrestrial laser scanner HDS 6000 (Leica) caused by different object materials, in: Proceedings of ISPRS Work, vol. 38, 2009, pp. 68–74.
Ángel-Iván García-Moreno. He is currently a Ph.D. candidate in Advanced Technology at the Research Center for Applied Science and Advanced Technology. He obtained his M.Sc. degree from the National Polytechnic Institute, in 2012, in the computer vision area. He received the B.S. degree in Informatics from the Autonomous University of Queretaro in 2009. His personal interests are computer vision, remote sensing using LIDAR technology and data fusion.

Denis-Eduardo Hernández-García. He received the B.S. degree in Systems Engineering in 1989 from the Engineering National University of Nicaragua. He obtained his M.Sc. degree from the National Polytechnic Institute, Mexico, in 2011. He is currently a Professor-Researcher of the FC&S, Nicaragua.
A.-I. García-Moreno et al. / Robotics and Autonomous Systems 62 (2014) 782–793
793
José-Joel González-Barbosa was born in Guanajuato, Mexico, in 1974. He received the M.S. degree in Electrical Engineering from the University of Guanajuato, Mexico, and Ph.D. degree in Computer Science and Telecommunications from National Polytechnic Institute of Toulouse, France, in 1998 and 2004, respectively. He is an Associate Professor at the CICATA Querétaro-IPN, Mexico, where he teaches courses in Computer Vision, Image Processing and Pattern Classification. His current research interests include perception and mobile robotics.
Juan B. Hurtado-Ramos. He received the B.S. degree in Communications and Electronics Engineering in 1989 from the Guadalajara University. He obtained his Ph.D. degree from the Optics Research Center of the University of Guanajuato in 1999. He is currently a Professor-Researcher of the CICATA Querétaro-IPN, Mexico, in the Image Analysis group. His personal interests are mainly in the field of Metrology using optical techniques. Since 1998, he is a member of the Mexican National Researchers System.

Alfonso Ramírez-Pedraza. He received the B.S. degree in Informatics in 2009 from the Autonomous University of Queretaro. He obtained his M.Sc. degree from the National Polytechnic Institute, Mexico, in 2012. His personal interests are Image analysis and 3D segmentation.

Francisco-Javier Ornelas-Rodriguez. He received the B.S. degree in Electronic Engineering in 1993 from the University of Guanajuato. He obtained his Ph.D. degree from the Optics Research Center in 1999. He is currently a Professor-Researcher of CICATA Querétaro-IPN, Mexico, in the Image Analysis team. His personal interests are mainly in the field of Metrology using optical techniques. He is a member of the Mexican National Researchers System.