Optics and Lasers in Engineering 69 (2015) 20–28
Calibration method for line-structured light vision sensor based on a single ball target

Zhen Liu, Xiaojing Li, Fengjiao Li, Guangjun Zhang*

Key Laboratory of Precision Opto-Mechatronics Technology, Ministry of Education, Beihang University, No. 37 Xueyuan Road, Haidian District, 100191 Beijing, PR China
Article history: Received 15 August 2014; received in revised form 21 January 2015; accepted 22 January 2015; available online 20 February 2015.

Abstract
Profile feature imaging for ball targets is unaffected by the position of the target. On this basis, this study proposes a method for the rapid calibration of a line-structured light system based on a single ball target. The calibration process is as follows: the ball target is placed at least once and is illuminated by the light stripe from the laser projector. The vision sensor captures an image of this target. The laser stripe and profile images of the ball target are then extracted. Based on these extracted features and the optical centre of the camera, the spatial equations of the ball target and a cone profile are calculated. The plane on which the intersection line of the two equations lies is the light plane. Finally, the optimal solution for the light plane equation is obtained through nonlinear optimization under a maximum likelihood criterion. The validity of the proposed method is demonstrated through simulation and physical experiments. In the physical experiment, the field of view of the structured light vision sensor measures 300 mm × 250 mm. A calibration accuracy of 0.04 mm can be achieved using the proposed method. This accuracy is comparable to that of the calibration method which utilizes planar targets. © 2015 Elsevier Ltd. All rights reserved.
Keywords: Calibration; Line-structured light; Vision sensor; Single ball target
1. Introduction

Pioneering research on structured light vision measurement dates back to the 1970s. Among vision-based measuring methods, 3D vision measurement based on structured light is characterized by a wide measurement range, non-contact operation, rapidity and high precision, and it has been widely applied in industrial environments [1–3]. Line-structured light vision sensors are the most frequently used type of structured light vision sensor. Their calibration consists of two steps: the calibration of the intrinsic parameters of the camera and the calibration of the light plane parameters. Much research [4–12] has examined the former. Methods to calibrate the light plane parameters are likewise described in many related studies [13–19]. For instance, Dewar [13] used a wire-drawing method, whereas Huynh [14] and Xu [15] derived a calibration method by determining the light plane calibration points according to the principle of cross-ratio invariability using a 3D target. The core idea of this calibration method is to obtain the coordinates of the intersection point between the structured light stripe and the line on which three collinear points are located (the precise coordinates of the collinear
* Corresponding author. E-mail address: [email protected] (G. Zhang).
http://dx.doi.org/10.1016/j.optlaseng.2015.01.008
0143-8166/© 2015 Elsevier Ltd. All rights reserved.
points are known). This process is in accordance with the principle of cross-ratio invariability, so high-precision light plane calibration points can be located. Zhou [16] proposed a method for the on-site calibration of line-structured light vision sensors using a planar target. This method determines the light plane calibration points on the planar target based on the principle of cross-ratio invariability; many calibration points can be obtained by moving the planar target repeatedly. The method requires no high-cost auxiliary equipment and is therefore especially suitable for on-site calibration. Liu [17] also adopted the planar target and applied the Plücker formula to describe the line of the light stripe. Xu [18] calibrated a structured light vision sensor using a flat board with four balls. Wei [19] proposed a method to calibrate a line-structured light system based on a 1D target. The 3D coordinates of the intersection point between the light plane and the 1D target were identified based on the constraints on the distance between the feature points of the 1D target, and the light plane equation was solved by fitting the 3D coordinates of several intersection points.

Based on the fact that the profile feature of the ball target is unaffected by the placement angle of the ball target, this paper proposes a method for the on-site calibration of a line-structured light vision sensor using a single ball target. Firstly, the cone equation defined by the profile of the light stripe on the ball target and the optical centre of the camera is determined. Secondly, the spatial equation of the ball target is obtained. The
light plane equation is derived by combining these two equations. Finally, the optimal solution for the light plane equation is determined through nonlinear optimization. The contents of this paper are structured as follows: Section 2 briefly introduces the model of the line-structured light vision sensor; Section 3 describes the basic principle of the algorithm in detail; Sections 4 and 5 contain information regarding the simulation and the physical experiment, respectively; and Section 6 presents the conclusion.
2. Model of the line-structured light vision sensor

The geometric structure of a structured light vision sensor is shown in Fig. 1. Owxwywzw is the world coordinate frame (WCF), Ocxcyczc is the camera coordinate frame (CCF), and Ouxuyu is the image coordinate frame (ICF). An arbitrary point P on the light plane has a projection point p on the image plane. The undistorted image coordinate of p in ICF is $\tilde{p}$, and the homogeneous coordinate of P in WCF is $P_w = [x_w, y_w, z_w, 1]^T$. Based on the camera imaging model, the following equation is obtained:

$$\rho \tilde{p} = K(R\; t)P_w = MP_w, \qquad (1)$$

where ρ is a scale coefficient, K is the matrix of the camera's intrinsic parameters, R and t are the rotation matrix and translation vector from WCF to CCF, respectively, and M is the projection matrix of the camera. The image coordinate of P is undistorted to obtain $\tilde{p}$ based on the calibration of the intrinsic parameters of the camera [5]. P satisfies the light plane equation. Suppose the light plane equation in WCF is expressed as

$$ax_w + by_w + cz_w + d = 0, \qquad (2)$$

where a, b, c and d are the four coefficients of the light plane equation. Combining the two equations gives

$$\begin{cases} \rho \tilde{p} = K(R\; t)P_w \\ ax_w + by_w + cz_w + d = 0. \end{cases} \qquad (3)$$

In WCF, the equation of the line Ocp is determined from the camera model, and the light plane equation is given by Eq. (2). Therefore,
Fig. 2. Calibration process of a line-structured light vision sensor.
the point of intersection between Ocp and the light plane can be applied to determine the 3D coordinates of P uniquely.
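To make Eq. (3) concrete, the following sketch (Python with NumPy; the function and variable names are ours, not the paper's) intersects the viewing ray of an undistorted pixel with a calibrated light plane, with WCF taken to coincide with CCF as in Section 3:

```python
import numpy as np

def triangulate_stripe_point(p_tilde, K, plane):
    """Intersect the viewing ray of the undistorted pixel p_tilde = (u, v)
    with the light plane [a, b, c, d] (Eq. (3)). WCF is taken as CCF,
    so R = I and t = 0; returns the 3D point P in camera coordinates."""
    a, b, c, d = plane
    # Direction of the ray Oc-p in camera coordinates
    ray = np.linalg.inv(K) @ np.array([p_tilde[0], p_tilde[1], 1.0])
    # Points on the ray are P = s * ray; substitute into ax + by + cz + d = 0
    s = -d / (a * ray[0] + b * ray[1] + c * ray[2])
    return s * ray

# Example with the intrinsics of Section 5 and the simulated plane of Section 4
K = np.array([[2748.65, 0.0, 691.10],
              [0.0, 2749.05, 536.40],
              [0.0, 0.0, 1.0]])
P = triangulate_stripe_point((700.0, 500.0), K, (0.577, 0.577, 0.577, -299.449))
```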
3. Algorithm principle

The intrinsic parameters of the camera in the line-structured light vision sensor are calibrated using Zhang's method [5]. The following section describes the calibration of the light plane parameters of a structured light vision sensor in detail. WCF is defined to coincide with the CCF of the line-structured light vision sensor. The calibration process of the light plane parameters is illustrated in Fig. 2. Ocxcyczc is the CCF and π is the light plane; hence, the light plane equation can be expressed as $ax + by + cz + d = 0$, where $\sqrt{a^2 + b^2 + c^2} = 1$. At the ith position of the ball target (i = 1, 2, 3, …, n, where n is the number of positions in which the ball target is placed), the profile of the light stripe Ct(i) and the profile curve of the ball edge Cs(i) are obtained through ellipse fitting. The calibration of the light plane parameters consists of the following steps:

Step 1: The ball target is placed in front of the sensor at least once and is illuminated by the laser projector. The line-structured light sensor captures the light stripe image on the ball target and the image points of the light stripe are extracted. These image points are undistorted based on the intrinsic parameters of the camera before Ct(i) is obtained by ellipse fitting. The profile points of the ball target are undistorted in the same way, and Cs(i) is also obtained by ellipse fitting.

Step 2: The spatial cone equation Qt(i) determined by Ct(i) and the optical centre of the camera is solved by back projection. Similarly, the spatial cone equation Qs(i) determined by Cs(i) and the optical centre of the camera is solved, and from it the sphere equation Bi of the ball target. The light plane equation is formulated by combining Bi and Qt(i); the circular ring on this plane is tangential to the ball target.

Step 3: The optimal solution for the light plane equation is obtained through nonlinear optimization in accordance with the maximum likelihood criterion.
Fig. 1. Geometric structure of the line-structured light vision sensor: (1) camera; (2) image plane; (3) line-structured light sensor; (4) laser projector; (5) light plane; and (6) measured object.

3.1. Solving Ct(i) and Cs(i)
An image of the ball target as captured by the line-structured light vision sensor is shown in Fig. 3(a). In the proposed method, the image point of the light stripe on the ball target must be
Fig. 3. Schematic of the results of processing the ball target image: (a) ball target image; (b) extracted light stripe centres; (c) light stripe profile Ct(i) solved by fitting; (d) gradient image of the ball target image; (e) profile of the edge of the ball target; and (f) ball target profile Cs(i) solved by fitting.
extracted from this figure along with the profile of the ball target; these are then used to fit Ct(i) and Cs(i). The fully automatic extraction of the image points of the light stripe consists of the following steps:

Step 1: The images are undistorted using the intrinsic parameters of the camera. The Hessian matrix of each pixel (x, y) is computed as

$$H(x, y) = \begin{pmatrix} r_{xx} & r_{xy} \\ r_{xy} & r_{yy} \end{pmatrix}, \qquad (4)$$

where $r_{xx}$, $r_{xy}$ and $r_{yy}$ are the second-order partial derivatives of the image intensity function I(x, y), computed by convolving the image with the corresponding differentials of the Gaussian kernel.

Step 2: The eigenvector of H(x, y) corresponding to the largest eigenvalue represents the normal direction of the light stripe, denoted by $n(t) = (n_x, n_y)^T$. With (x, y) as the reference point, the sub-pixel coordinate of the light stripe centre point $(p_x, p_y)$ is

$$(p_x, p_y) = (tn_x + x,\ tn_y + y). \qquad (5)$$

If $-0.5 \le tn_x \le 0.5$ and $-0.5 \le tn_y \le 0.5$, the first directional derivative along $(n_x, n_y)$ vanishes within the current pixel. In addition, if the maximum eigenvalue of H(x, y) (this eigenvalue is the second directional derivative along $(n_x, n_y)$) is larger than a threshold, the point is recorded as a light stripe centre point. The extracted light stripe centre points are shown in Fig. 3(b).

Step 3: Given the light stripe centres extracted in Step 2, Ct(i) is solved through ellipse fitting, as indicated in Fig. 3(c).

Cs(i) is derived in a similar way. The gradient image of the ball target image is extracted first, as presented in Fig. 3(d). Then, the centre of the line in the gradient image is obtained using Steger's method [20]; the centre of the line is a point on the profile of the ball target edge, as illustrated in Fig. 3(e). Finally, Cs(i) is solved through ellipse fitting, as exhibited in Fig. 3(f).

Fig. 4. Schematic of spatial cone equation Qt(i).
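A compact sketch of Steps 1 and 2 follows (Python with NumPy/SciPy; the function name, sigma and threshold are our assumptions, and the closed form for t is from Steger [20]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stripe_centres(img, sigma=2.0, threshold=1.0):
    """Sub-pixel light-stripe centre extraction following Steps 1-2
    (a Steger-style sketch)."""
    img = img.astype(np.float64)
    # Gaussian-derivative estimates of the partial derivatives of I(x, y)
    rx  = gaussian_filter(img, sigma, order=(0, 1))
    ry  = gaussian_filter(img, sigma, order=(1, 0))
    rxx = gaussian_filter(img, sigma, order=(0, 2))
    rxy = gaussian_filter(img, sigma, order=(1, 1))
    ryy = gaussian_filter(img, sigma, order=(2, 0))
    centres = []
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            # Hessian of Eq. (4) at pixel (x, y)
            Hm = np.array([[rxx[y, x], rxy[y, x]],
                           [rxy[y, x], ryy[y, x]]])
            evals, evecs = np.linalg.eigh(Hm)
            k = int(np.argmax(np.abs(evals)))
            if np.abs(evals[k]) < threshold:   # weak second derivative: skip
                continue
            nx, ny = evecs[:, k]               # stripe normal direction
            # Offset t where the first directional derivative vanishes
            denom = nx*nx*rxx[y, x] + 2*nx*ny*rxy[y, x] + ny*ny*ryy[y, x]
            if denom == 0.0:
                continue
            t = -(nx * rx[y, x] + ny * ry[y, x]) / denom
            # Accept only if the extremum lies inside the current pixel
            if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:
                centres.append((t * nx + x, t * ny + y))   # Eq. (5)
    return np.array(centres)
```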
3.2. Solving Qt(i)

As indicated in Fig. 4, the spatial cone Qt(i) intersects the light plane and the ball target, thereby forming a circular ring. The plane on which the circular ring is located is the light plane. Qt(i) is determined by Ct(i) and the optical centre of the camera. Suppose $P = K[I\; 0]$ is the camera matrix. The elliptic cone equation Qt(i) can be solved [21] from the back perspective projection model of the camera, as shown in Eq. (6):

$$Q_{t(i)} = P^T C_{t(i)} P. \qquad (6)$$
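In matrix form, Eq. (6) is a one-liner; the sketch below (the helper name is ours) back-projects any 3×3 image conic through the camera P = K[I 0] to its 4×4 quadric cone:

```python
import numpy as np

def backproject_cone(C, K):
    """Back-project a 3x3 image conic C through the camera P = K[I|0]
    to the 4x4 quadric cone Q = P^T C P of Eq. (6)."""
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # 3x4 camera matrix
    return P.T @ C @ P                                # 4x4 cone quadric

# Usage: Q_t = backproject_cone(C_t, K) for the stripe ellipse,
#        Q_s = backproject_cone(C_s, K) for the ball profile (Eq. (7)).
```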
3.3. Solving the spatial equation Bi of the ball target
The spatial cone equation Qs(i), which is determined by Cs(i) and the optical centre of the camera, is also solved based on the back perspective projection model:

$$Q_{s(i)} = P^T C_{s(i)} P = \begin{bmatrix} K^T C_{s(i)} K & 0 \\ 0 & 0 \end{bmatrix}. \qquad (7)$$

Suppose the dual matrix $Q'_{s(i)}$ of Qs(i) is

$$Q'_{s(i)} = K^T C_{s(i)} K, \qquad (8)$$

where $Q'_{s(i)}$ is a full-rank matrix. Ri is the diagonalizing matrix of $Q'_{s(i)}$, that is,

$$R_i^T Q'_{s(i)} R_i = \mathrm{Diag}(\lambda_1, \lambda_2, \lambda_3). \qquad (9)$$

A new coordinate frame $O'_c x'_c y'_c z'_c$ is established through this diagonalization; thus, Qs(i) has the standard form in $O'_c x'_c y'_c z'_c$:

$$\lambda_1 x^2 + \lambda_2 y^2 + \lambda_3 z^2 = 0, \qquad (10)$$

where $\lambda_1, \lambda_2, \lambda_3$ are the eigenvalues of $Q'_{s(i)}$. Because $Q'_{s(i)}$ is a right circular cone, $\lambda_1 = \lambda_2$. Therefore,

$$\lambda_1 = \lambda_2 > 0,\ \lambda_3 < 0 \quad \text{or} \quad \lambda_1 = \lambda_2 < 0,\ \lambda_3 > 0. \qquad (11)$$

Fig. 6. Schematic of coordinates of the centre of the ball target.
$O'_c x'_c y'_c z'_c$ takes the Oc of Ocxcyczc as $O'_c$. The beam which points from Oc to the centre of the ball target is the z′ axis, and any x′ and y′ axes which conform to a right-handed coordinate system are selected. The rotation matrix from $O'_c x'_c y'_c z'_c$ to Ocxcyczc is Ri, as depicted in Fig. 5. As illustrated in Fig. 6, the 3D coordinates of the centre of the ball target are expressed as follows based on triangular geometry:

$$\alpha'_i = [0\ \ 0\ \ D_0]^T, \qquad (12)$$

where $D_0 = r_0 [(|\lambda_1| + |\lambda_3|)/|\lambda_3|]^{1/2}$ and r0 is the radius of the ball. The coordinates of the centre of the ball in CCF are then obtained:

$$\alpha_i = R_i \alpha'_i = R_i [0\ \ 0\ \ D_0]^T. \qquad (13)$$

Fig. 7. Schematic of ambiguity.
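The centre recovery of Eqs. (8)–(13) can be sketched as follows (Python/NumPy; the eigenvalue bookkeeping and sign conventions are our assumptions, since in noisy data λ1 ≈ λ2 only approximately):

```python
import numpy as np

def sphere_centre(C_s, K, r0):
    """Recover the ball centre alpha_i in CCF from the image ellipse C_s
    (3x3 conic), the intrinsics K and the known ball radius r0;
    a sketch of Eqs. (8)-(13)."""
    Q = K.T @ C_s @ K                      # Eq. (8): full-rank 3x3 cone
    lam, R = np.linalg.eigh(Q)             # Eq. (9): R^T Q R = diag(lam)
    # Eq. (11): two eigenvalues share one sign, the third has the other;
    # move the odd-signed eigenvalue into the lambda3 slot (the z' axis).
    signs = np.sign(lam)
    k3 = int(np.where(signs != np.median(signs))[0][0])
    order = [k for k in range(3) if k != k3] + [k3]
    lam, R = lam[order], R[:, order]
    if R[2, 2] < 0:                        # make z' point towards the ball
        R[:, 2] = -R[:, 2]
    if np.linalg.det(R) < 0:               # keep R a proper rotation
        R[:, 0] = -R[:, 0]
    lam1 = 0.5 * (abs(lam[0]) + abs(lam[1]))   # lambda1 ~ lambda2 under noise
    D0 = r0 * np.sqrt((lam1 + abs(lam[2])) / abs(lam[2]))  # Eq. (12)
    return R @ np.array([0.0, 0.0, D0])    # Eq. (13): centre in CCF
```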
The spatial ball in CCF can then be represented by

$$B_i = \begin{bmatrix} I_{3\times 3} & -\alpha_i \\ -\alpha_i^T & \alpha_i^T \alpha_i - r_0^2 \end{bmatrix}. \qquad (14)$$

Fig. 5. Schematic of diagonalization.

3.4. Solving the light plane equation

As indicated in Fig. 2, the plane on which Bi and Qt(i) intersect is the light plane at the ith position of the ball target. The laser stripe points on the light plane are solved with Bi and Qt(i), as described above:

$$\begin{cases} X^T B_i X = 0 \\ X^T Q_{t(i)} X = 0 \end{cases}, \qquad (15)$$
where X = [x, y, z, 1]T denotes the 3D homogeneous coordinates of a point on the light plane. Given that the ball target is always placed in front of the camera, the solutions with a z component larger than zero are selected from X. As presented in Fig. 7, two light plane equations, namely a1x + b1y + c1z + d1 = 0 and a2x + b2y + c2z + d2 = 0, can be derived through fitting by using the solutions with a z component larger than zero. The solution to the light plane equation is therefore ambiguous. If the ball target is placed in only one position, the correct light plane equation can be selected from between the two equations based on the relative position between the laser projector and the camera. Without loss of generality, the laser projector is positioned on the right side of CCF, as displayed in Fig. 7. In this case, the x and z components of the normal direction of the correct light plane possess the same signs, according to the projection of the normal vector of the light plane onto the Oxz plane. In
the other plane, the x and z components of the normal direction have opposite signs. Similarly, if the laser projector is installed on the left side of the camera, the x and z components of the normal direction of the correct light plane have opposite signs, and in the other plane they have the same signs. The correct light plane equation can thus be obtained with this judgement at only one ball target position.

If the ball target is placed twice, the two equations a11x + b11y + c11z + d11 = 0 and a12x + b12y + c12z + d12 = 0 are obtained at the first position, whereas the equations a21x + b21y + c21z + d21 = 0 and a22x + b22y + c22z + d22 = 0 are derived at the second position. As exhibited in Fig. 8, two of the four equations are identical because both are the light plane; the other two are parallel but not on the same plane. Therefore, the correct light plane equation can likewise be obtained using the aforementioned method.

3.5. Nonlinear optimization

The light plane equation solved in Section 3.4 is a closed-form solution. π can be optimized as per the maximum likelihood criterion. Based on the principle that the distances from the measured points to the light plane should be minimal, the following objective function is established:

$$f(a, b, c, d) = \min \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \frac{ax_{ij} + by_{ij} + cz_{ij} + d}{\sqrt{a^2 + b^2 + c^2}} \right)^2, \qquad (16)$$

where a, b, c, d are the four parameters of the light plane equation and $(x_{ij}, y_{ij}, z_{ij})$ are the coordinates of the jth point on the line of intersection between the light plane and the ball target at the ith position of the ball target. m is the number of laser stripe points on this intersection line; these points can be solved using Eq. (15). The optimal solution of a, b, c, d under the maximum likelihood criterion can be derived through a nonlinear optimization procedure (such as Levenberg–Marquardt [22]).
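The refinement of Eq. (16) can be sketched with SciPy's Levenberg–Marquardt solver (the function name and data layout are our own; points from all ball positions are stacked into one array):

```python
import numpy as np
from scipy.optimize import least_squares

def refine_plane(points, plane0):
    """Minimise the point-to-plane distances of Eq. (16).
    points: (N, 3) array of stripe points from all ball positions;
    plane0: initial [a, b, c, d] from the closed-form solution."""
    def residuals(p):
        n, d = p[:3], p[3]
        return (points @ n + d) / np.linalg.norm(n)   # signed distances
    sol = least_squares(residuals, plane0, method="lm")  # Levenberg-Marquardt [22]
    a, b, c, d = sol.x
    return np.array([a, b, c, d]) / np.linalg.norm([a, b, c])  # a^2+b^2+c^2 = 1
```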
4. Simulation experiment

To verify the proposed method, a simulation experiment is conducted. When the ball target is used to calibrate the line-structured light vision sensor, three factors, namely image noise, ball target size and the number of placements of the ball target, significantly affect the calibration results. The simulation experiment aims to quantify the influence of these three factors on the calibration accuracy of the line-structured light vision sensor. This experiment applies a camera
resolution of 1380 pixels × 1080 pixels and a lens focus of 38 mm. The distance from the camera to the centre of the ball is 500 mm. The field of view (FOV) of the camera is approximately 300 mm × 225 mm. The light plane equation is 0.577x + 0.577y + 0.577z − 299.449 = 0. The calibration accuracy of the proposed method is evaluated with the relative errors of the coefficients [a, b, c, d] of the light plane equation.

4.1. Influence of image noise on calibration accuracy

In this experiment, the diameter of the ball target is 50 mm and the target is placed three times. Gaussian noise is incorporated into the profile of the ball target and into the light stripe of the image used for calibration. The noise level varies from σ = 0.0 pixels to σ = 1.0 pixels at an interval of 0.1 pixels. For each noise level, 100 experiments are conducted and the relative error is computed. The relative errors of the calibration results at different noise levels are shown in Fig. 9. Calibration accuracy increases as the noise decreases, and can therefore be improved by increasing image processing accuracy. Based on our experience with image processing, the extraction of edges and of the light stripe centre can reach a precision of 0.2 pixels. At this level, the relative error of the calibration result is approximately 0.5%, as indicated in the simulation results.

4.2. Influence of ball target size on calibration accuracy

Gaussian noise with σ = 0.2 pixels is incorporated into the profile of the ball target and into the light stripe of the image used for calibration. The ball target is placed once, and its diameter varies from 10 mm to 100 mm at an interval of 10 mm. For each diameter level, 100 experiments are conducted and the relative error is computed. The relative errors of the calibration results at different diameter levels are exhibited in Fig. 10. Calibration precision increases gradually as the diameter of the ball target increases, and varies less significantly once the diameter exceeds 30 mm. During the simulation, the FOV of the line-structured light vision sensor measures 300 mm × 225 mm. When the diameter of the ball target is larger than 30 mm/300 mm = 10% of the FOV, calibration accuracy is high.

4.3. Influence of the number of placements of the ball target on calibration accuracy

In this experiment, the diameter of the ball target is 50 mm. Gaussian noise is added to the profile of the ball target and to the light stripe of the image used for calibration. The noise level is
Fig. 8. Schematic of removing the ambiguity that arises from the two positions of the ball target.

Fig. 9. Influence of image noise on calibration accuracy (relative error of a, b, c, d versus noise level in pixels).
Fig. 10. Influence of the diameter of the ball target on calibration accuracy.

Fig. 11. Influence of the number of placements of the ball target on calibration accuracy.

Fig. 12. Structured light vision sensor in the physical experiment.

Fig. 13. Ball target used in the physical experiment.
σ = 0.2 pixels. The number of placements of the ball target varies from one to eight. For each placement number, 100 experiments are conducted and the relative error is computed. The relative errors of the calibration results at different placement numbers are depicted in Fig. 11. Calibration accuracy can be improved by increasing the number of placements of the ball target; when this number is larger than two, accuracy varies less significantly as the number continues to increase. The number of placements therefore need not be increased indefinitely: a satisfactory calibration result is obtained when the target is placed six times.
5. Physical experiment

The line-structured light vision sensor is composed of a camera and a laser projector, as displayed in Fig. 12. A digital camera manufactured by Allied Vision Technologies and equipped with a 17 mm Schneider optical lens is used. The image resolution is 1360 pixels × 1024 pixels, with an FOV of 300 mm × 225 mm and a measurement distance of approximately 500 mm. The laser projector is a single-line laser. The diameter of the ball target is 50 mm, and its machining precision is 0.01 mm, as indicated in Fig. 13.

5.1. Calibration results

As in previous literature, the intrinsic parameters of the camera in the line-structured light vision sensor are calibrated using software
[23] compiled based on Zhang's method [5]. During the calibration, the planar target is placed 10 times; 10 × 10 feature points are observed on this target, and its accuracy is 5 μm. The calibration result is

$$K = \begin{bmatrix} 2748.65 & 0 & 691.10 \\ 0 & 2749.05 & 536.40 \\ 0 & 0 & 1 \end{bmatrix}.$$

The distortion coefficients of the camera are k1 = 0.23575 and k2 = 0.37946. To verify the efficiency of the proposed method, it is compared with the method presented in previous literature [16] in two experiments.
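For readers reproducing the pipeline, these intrinsics can be applied to the extracted stripe points with OpenCV, for example (a sketch; the sample point is arbitrary and the distortion vector is padded with zero tangential terms):

```python
import numpy as np
import cv2

K = np.array([[2748.65, 0.0, 691.10],
              [0.0, 2749.05, 536.40],
              [0.0, 0.0, 1.0]])
dist = np.array([0.23575, 0.37946, 0.0, 0.0])  # k1, k2, p1, p2

# Undistort extracted stripe points (shape (N, 1, 2), pixel coordinates);
# P=K maps the result back to the pixel frame.
pts = np.array([[[700.0, 500.0]]], dtype=np.float64)
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
```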
5.1.1. Experiment 1

In experiment 1, the calibration method introduced in literature [16] is employed to calibrate the structured light vision sensor. The planar target, whose machining accuracy is 0.01 mm, is placed in front of the line-structured light vision sensor six times. The image used for this calibration is displayed in Fig. 14. The calibration result is 0.8634x − 0.1728y + 0.4740z − 223.693 = 0.

5.1.2. Experiment 2

In experiment 2, the line-structured light vision sensor is calibrated using the proposed method. The ball target, whose diameter is 50 mm and machining accuracy is 0.01 mm, is placed in front of the line-structured light vision sensor six times. The image used for the calibration of the structured light vision sensor is presented in Fig. 15.
Fig. 14. Image used for the calibration of structured light vision sensor in experiment 1.
Fig. 15. Image used for the calibration of structured light vision sensor in experiment 2.
Table 1. Two plane equations obtained at one position of the ball target.

Eq. (1): 0.9578x − 0.0929y − 0.2720z + 230.169 = 0
Eq. (2): 0.8647x − 0.1704y + 0.4726z − 224.607 = 0

If the ball target is placed only once (for example, at the first position), two plane equations are obtained because of the ambiguity in solving the light plane equation, as listed in Table 1. The laser projector is installed on the right side of the camera; as explained above, the x and z components of the correct light plane equation should then have the same signs. Therefore, Eq. (2) in Table 1 is the correct light plane.

Table 2. Two plane equations obtained at each of two positions of the ball target.

First position:
Eq. (1): 0.9578x − 0.0929y − 0.2720z + 230.169 = 0
Eq. (2): 0.8647x − 0.1704y + 0.4726z − 224.607 = 0
Second position:
Eq. (3): 0.9644x − 0.1038y − 0.2431z + 241.584 = 0
Eq. (4): 0.8604x − 0.1742y + 0.4789z − 225.557 = 0

If the images of the ball target placed at two positions are used (for example, the first and second positions), four plane equations are obtained, as listed in Table 2. Eqs. (2) and (4) are consistent. The normal vectors of the planes of Eqs. (1) and (3) are basically similar, but the d components differ significantly. Therefore, Eqs. (2) and (4) are the correct plane equations.
The experiments above confirm that the proposed method can determine the correct light plane equation regardless of whether the ball target is placed one or two times. The calibration results obtained by placing the ball target in one and in six positions in experiment 2 are shown in Table 3.

Table 3. Equation of the light plane.

1 position: 0.8647x − 0.1704y + 0.4726z − 224.607 = 0
6 positions: 0.8636x − 0.1706y + 0.4745z − 224.063 = 0

5.2. Accuracy evaluation

To compare the proposed method with Zhou's method [16], a method for accuracy evaluation is introduced. Firstly, the planar target is placed in front of the line-structured light vision sensor twice. At each position, the coordinates of the points of intersection between the laser stripe and the horizontal grid lines of the planar target are extracted from the image (each point of
intersection is called a testing point). Eq. (3) is used to calculate the 3D coordinates of the testing points in CCF. The distance between any two testing points in CCF is computed as the measured distance dm. The local coordinates of the testing points in the coordinate frame of the planar target are calculated following the principle of cross-ratio invariability, and the distance between any two testing points in this coordinate frame is computed as the ideal distance dt. Nine pairs of points are then selected from the images of the planar target at the two positions. The root-mean-square (RMS) error of the distances between the point pairs is obtained from the deviations Δd between dm and dt, and is used to evaluate the calibration accuracy of the line-structured light vision sensor. The 3D coordinates of the testing points obtained with the different calibration results are presented in Fig. 16 and Table 4, and the corresponding calibration accuracies are given in Table 5. The calibration accuracy of the proposed method using the image of the ball target placed one time is approximately 0.09 mm under a measurement range of roughly 300 mm × 250 mm, whereas that using images of the ball target placed six times is approximately 0.04 mm. This accuracy is comparable to that obtained with the method which uses a planar target [16].
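The accuracy figure can be reproduced from the dt/dm pairs in a few lines (a sketch; the arrays below are the nine point pairs of Table 5 for the proposed method with six placements):

```python
import numpy as np

# Ideal (dt) and measured (dm) distances, proposed method, 6 placements
dt = np.array([24.01, 36.01, 48.01, 24.00, 36.00, 48.00, 24.01, 36.01, 48.01])
dm = np.array([24.04, 36.05, 48.04, 24.04, 36.05, 48.03, 24.04, 36.05, 48.05])

rms = np.sqrt(np.mean((dm - dt) ** 2))  # root-mean-square of the deviations
print(f"RMS error: {rms:.2f} mm")       # approximately 0.04 mm, as in Table 5
```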
Fig. 16. Testing points obtained with the different calibration results (Zhou's method; the proposed method, 6 placements; the proposed method, 1 placement).

Table 5. Analysis of calibration accuracy (mm).

Pair   Zhou's method           Proposed (6 times)      Proposed (1 time)
       dt      dm      Δd      dt      dm      Δd      dt      dm      Δd
1      24.01   23.99   0.02    24.01   24.04   0.03    24.01   24.05   0.04
2      36.01   35.99   0.02    36.01   36.05   0.04    36.01   36.06   0.05
3      48.01   47.96   0.05    48.01   48.04   0.03    48.01   48.04   0.03
4      24.00   23.99   0.01    24.00   24.04   0.04    24.00   23.96   0.04
5      36.00   35.98   0.02    36.00   36.05   0.05    36.00   35.92   0.08
6      48.00   47.95   0.05    48.00   48.03   0.03    48.00   47.86   0.14
7      24.01   23.99   0.02    24.01   24.04   0.03    24.01   23.95   0.06
8      36.01   35.99   0.02    36.01   36.05   0.04    36.01   35.90   0.11
9      48.01   47.96   0.05    48.01   48.05   0.04    48.01   47.83   0.18
RMS error              0.03                    0.04                    0.09
Table 4. 3D coordinates (x, y, z, mm) of the testing point pairs obtained by different calibration results.

Zhou's method:
Pair   Point 1 (x, y, z)         Point 2 (x, y, z)
1      33.56, 46.27, 516.19      27.25, 23.33, 513.07
2      33.56, 46.27, 516.19      24.10, 11.87, 511.50
3      33.56, 46.27, 516.19      20.95, 0.42, 509.94
4      12.15, 50.08, 475.80      5.67, 27.25, 472.33
5      12.15, 50.08, 475.80      2.43, 15.84, 470.59
6      12.15, 50.08, 475.80      0.80, 4.45, 468.86
7      5.53, 45.12, 465.56       1.14, 22.40, 461.68
8      5.53, 45.12, 465.56       4.48, 11.05, 459.75
9      5.53, 45.12, 465.56       7.82, 0.29, 457.81

The proposed method (6 times):
Pair   Point 1 (x, y, z)         Point 2 (x, y, z)
1      33.59, 46.31, 516.71      27.27, 23.35, 513.46
2      33.59, 46.31, 516.71      24.11, 11.87, 511.83
3      33.59, 46.31, 516.71      20.96, 0.42, 510.21
4      12.16, 50.14, 476.33      5.68, 27.27, 472.74
5      12.16, 50.14, 476.33      2.43, 15.85, 470.95
6      12.16, 50.14, 476.33      0.80, 4.45, 469.16
7      5.54, 45.17, 466.07       1.15, 22.42, 462.08
8      5.54, 45.17, 466.07       4.48, 11.06, 460.08
9      5.54, 45.17, 466.07       7.82, 0.30, 458.09

The proposed method (1 time):
Pair   Point 1 (x, y, z)         Point 2 (x, y, z)
1      33.52, 46.22, 515.70      27.20, 23.29, 512.15
2      33.52, 46.22, 515.70      24.04, 11.84, 510.38
3      33.52, 46.22, 515.70      20.89, 0.42, 508.62
4      12.09, 49.86, 473.65      5.64, 27.10, 469.79
5      12.09, 49.86, 473.65      2.42, 15.75, 467.86
6      12.09, 49.86, 473.65      0.80, 4.42, 465.94
7      5.50, 44.87, 462.97       1.14, 22.26, 458.70
8      5.50, 44.87, 462.97       4.45, 10.97, 456.58
9      5.50, 44.87, 462.97       7.76, 0.29, 454.45
6. Conclusion

Profile feature imaging for a ball target is unaffected by its position. Based on this feature, a novel method is proposed in this paper for the calibration of a line-structured light vision sensor using a single ball target. The basic principle and the implementation of the proposed method are described in detail, and the method is validated by experiments. Its advantages are as follows: (1) profile feature imaging for a ball target is not influenced by the position of the ball target, so a clear profile image of the ball target can be captured at almost any position in front of the camera. The method is thus especially suitable for narrow spaces and for the rapid on-site calibration of several line-structured light vision sensors from multiple perspectives. (2) As verified by the physical experiment, with a sensor FOV of 300 mm × 250 mm, the proposed method achieves a calibration accuracy of 0.04 mm. This accuracy is comparable to that of the method which uses planar targets.
Acknowledgements

The authors acknowledge the support from the National Natural Science Foundation of China under Grant no. 51175027 and the Beijing Municipal Natural Science Foundation under Grant no. 3132029.

References

[1] Lu RS, Li YF, Yu Q. On-line measurement of straightness of seamless steel pipe using machine vision technique. Sens Actuators A: Phys 2001;94(1–3):95–101.
[2] Zhang GJ, Sun JH, Chen DZ, Wei ZZ. Flapping motion measurement of honeybee bilateral wings using four virtual structured-light sensors. Sens Actuators A: Phys 2008;148:19–27.
[3] Yang RQ, Chen S, Wei Y, Chen YZ. Robust and accurate surface measurement using structured light. IEEE Trans Instrum Meas 2008;57(6):1275–80.
[4] Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Robot Autom 1987;RA-3(4):323–44.
[5] Zhang ZY. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000;22(11):1330–4.
[6] Zhang ZY. Camera calibration with one-dimensional objects. IEEE Trans Pattern Anal Mach Intell 2004;26(7):892–9.
[7] Wu FC, Hu ZY, Zhu HJ. Camera calibration with moving one-dimensional objects. Pattern Recognit 2005;38(5):755–65.
[8] Fei Q, Li QH, Luo YP, Hu DC. Camera calibration with one-dimensional objects moving under gravity. Pattern Recognit 2007;40(1):343–5.
[9] Agrawal M, Davis L. Complete camera calibration using spheres: dual space approach. In: Proceedings of the IEEE international conference on computer vision; 2003. pp. 782–9.
[10] Teramoto H, Xu G. Camera calibration by a single image of balls: from conics to the absolute conic. In: Proceedings of the Asian conference on computer vision; 2002. pp. 499–506.
[11] Zhang H, Wong KK, Zhang GQ. Camera calibration from images of spheres. IEEE Trans Pattern Anal Mach Intell 2007;29(3):499–503.
[12] Wong KK, Zhang GQ, Chen ZH. A stratified approach for camera calibration using spheres. IEEE Trans Image Process 2011;20(2):305–16.
[13] Dewar R. Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system. In: Proceedings of the Robots and Vision '88 conference; 1988. pp. 5–13.
[14] Huynh DQ. Calibrating a structured light stripe system: a novel approach. Int J Comput Vis 1999;33(1):73–86.
[15] Xu GY, Li LF, Zeng JC, Shi DJ. A new method of calibration in 3D vision system based on structure-light. Chin J Comput 1995;18(6):450–6.
[16] Zhou FQ, Zhang GJ. Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations. Image Vis Comput 2005;23:59–67.
[17] Zhang GJ, Liu Z, Sun JH, Wei ZZ. Novel calibration method for a multi-sensor visual measurement system based on structured light. Opt Eng 2010;49(4):043602.
[18] Xu J, Douet J, Zhao JG, Song LB, Chen K. A simple calibration method for structured light-based 3D profile measurement. Opt Laser Technol 2013;48:187–93.
[19] Wei ZZ, Cao LJ, Zhang GJ. A novel 1D target-based calibration method with unknown orientation for structured light vision sensor. Opt Laser Technol 2010;42(4):570–4.
[20] Steger C. An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell 1998;20(2):113–25.
[21] Hartley R, Zisserman A. Multiple view geometry in computer vision. Cambridge: Cambridge University Press; 2000.
[22] Moré JJ. The Levenberg–Marquardt algorithm: implementation and theory. In: Watson GA, editor. Numerical analysis. Lecture notes in mathematics, vol. 630. Springer-Verlag; 1977.
[23] Bouguet JY. Camera calibration toolbox for Matlab. Available from: 〈http://www.vision.caltech.edu/bouguetj/calib_doc/〉.