Measurement 111 (2017) 122–133
Design and analysis of a 3D laser scanner

Mohammed A. Isa, Ismail Lazoglu *

Koc University, Manufacturing and Automation Research Center, Mechanical Engineering Department, Rumeli Feneri Yolu, Sariyer, Istanbul 34450, Turkey

* Corresponding author. E-mail address: [email protected] (I. Lazoglu).
Article info
Article history: Received 2 September 2016; Received in revised form 14 July 2017; Accepted 15 July 2017; Available online 20 July 2017.
Keywords: Spherical scanner; Surface reconstruction; Non-contact CMM; Calibration of scanner; Least squares optimization; Uncertainty evaluation.

Abstract
A new laser scanner is designed and built, its scan measurement uncertainty is analyzed, and deviations are minimized. The design comprises the physical setup of the scanner, point cloud extraction, and procedures for scanner calibration. The scanner operates in a spherical domain, giving a wide range of imaging view possibilities. By exploiting strategies in real-time serial communication and image processing, the prototype acquires a uniformly dense point cloud from a geometric specimen. In addition, rather than using benchmark geometry only to assess the accuracy of the scanner, data obtained from a known geometric model are used to modify the scanner parameters to obtain optimal results. A method of finding the scanner parameters that give the least point deviations is developed using least squares. The calibration and optimization methods applied to the scanner prototype in this paper can be extended to any type of scanner design.
© 2017 Elsevier Ltd. All rights reserved.
1. Introduction

Conventional coordinate measurement machines (CMMs) have been widely utilized as standard measurement devices for over 40 years because their high-precision contact sensors satisfy industrial standards [1]. Despite their accuracy, CMMs can be expensive and slow. The low speed is due to the fact that data acquisition is carried out point-wise as the probe is displaced continuously; it does not support the fast and efficient instantaneous linear or 2D data acquisition prevalent in image sensors. Moreover, probe-type measurement devices are incompatible with flexible or deformable parts, and reaching some points in complex geometry can be challenging. Therefore, there is benefit in exploring non-contact, multi-point acquisition based scanners for geometric reconstruction. In light of the drawbacks of contact-type CMMs, non-contact measurement devices have gained interest in recent years and have become an integral part of various industrial applications. Several commercial products, referred to as 3D scanners or digitizers, have also emerged. Most of these scanners use LED-based projectors or lasers as the illumination source. As outlined by Molleda et al. [2], LED projectors provide more uniform illumination and sharper edges. However, due to size and cost constraints, a class of non-contact scanners that uses a laser to project a known pattern onto the geometry is of particular interest in this work. In addition to illumination, an important feature of scanners is
image sensing. The popular camera sensor types are the complementary metal oxide semiconductor (CMOS) and the charge-coupled device (CCD). Based on size, functionality, performance and cost, CMOS is preferable in most machine vision applications [1]. The choice of camera resolution is made based on computation capacity and speed requirements. Theoretically, by the optical diffraction limit imposed by Rayleigh's criterion, the scanner camera should be able to resolve distances greater than about 3 μm, taking a typical lens f-number of 4 [2]. However, the actual accuracy of a triangulation laser scanner can hardly reach 30 μm even in a controlled environment [3]. Systematic errors arising from the laser source, the reflection surface, lens defects and non-uniform environmental illumination are the cause of the increased uncertainty. In addition, image sensor noise adds error to the measurements, and a partially occluded light beam width will give wrong data in some regions. Studies on the estimation of uncertainty in laser reconstructions of geometric parts have helped shed some light on how the magnitude of the error is implicitly related to the part geometry. Isheil et al. [4] found that the angle β between the laser plane and the camera axis significantly affects the local error of the measured part. Angles exceeding 40° for regions around 100 mm from the camera were observed to contribute systematic errors higher than 15 μm. The random error was also found to reach magnitudes of 30 μm. By geometric intuition, Curless and Levoy [5] provided instances where the positions of sensor, surface and illuminant could result in erroneous measurements. Occlusion and discontinuities in reflectance or shape were mentioned as sources of errors. Laser scanners extract stripes resulting from the intersection of the central plane of the incident laser line beam with
the scan object. The laser line beam is not a perfect plane because it has a width of about 0.5 mm to 1 mm, depending on the imaging sensor. It becomes critical to quantify the form error that arises from the surface form of a measured object. Zhou [6] derived an expression that estimates the form error originating from the deviation of the reflected stripe beam center from the original central incident plane, for a Lambertian reflection with a relatively very small lens radius. Uncertainties approaching magnitudes of 15 μm were ascribed to the mentioned form errors. The form dependence of laser structured light scanners has made standardization rather difficult. In most cases, a known geometric model is used as the measurement benchmark of such scanners. In the laser scanner designed in this paper, the benchmark geometry is used not only for validation of the scan results but also to reduce positional and systematic errors. The global error in commercial 3D laser scanners has been analyzed in detail in the literature [3,4,7]; they generally possess errors ranging from 40 to 300 μm. Their uncertainties are usually about one order of magnitude larger than those of contact-type scanners [8], which is why probe-type CMMs are still predominantly in use.
2. Mechanical design

The scanner in this study is designed based on the requirements of high precision and wide scan coverage of the part by allowing multiple camera view angles. Therefore, a prototype that allows sufficient relative displacement between camera and workpiece is sought. Feasibility and stiffness of the supporting components are also considered in the design. As depicted in Fig. 1, a configuration whose camera view directions cover a hemispherical dome was designed and tested. The overall measurement depends directly on the repeatability of the step positions in spinning and lifting, as well as on the uncertainty in critical scanner geometric dimensions. Hence, these issues are studied and remedies are proposed in subsequent sections. All components were carefully selected and the deformations of critical components were checked to ensure that displacement uncertainty is reduced. The lift frame, which supports the camera and lasers, is particularly significant since it must not transfer a critical torque that the motor cannot handle. For this reason, it is fabricated
Fig. 1. Scanner setup showing relevant geometric axes, calibration grid and other design features with front and top views.
Fig. 2. Cross-section showing turntable rotation system.
from 12 mm thick aluminum sheet and is designed in a U-shape. All other components have negligible deformations compared to the lift frame, and their deformations are therefore neglected. The effect of vibration during lifting is also negligible since sufficient time is given for the system to stabilize before image capture. It is observed that a time of about 0.5 s is sufficient for this stabilization. This time is not wasted since other computations are carried out simultaneously. The lift motor is controlled by a micro-stepping driver to decrease the vibration amplitude. The turntable is supported by a thrust ball bearing to ensure rotation is about a fixed axis, defined as the Spin axis in Fig. 1. A step motor fitted with a planetary gear that allows fine motion of about 70–225 thousand steps per rotation is attached to the turntable via a stiff coupling. A cross-section showing this connection is shown in Fig. 2. The turntable is designed to carry an object of up to 10 kg. The scanner is designed to allow manipulation of the spherical coordinate variables r, θ and φ. As illustrated in Fig. 1, the first motor rotates the scan part placed on the turntable. The second motor controls the support frame that holds the camera and laser system. The adjustable radial position r, which is the y-axis location of the focal point C from point S, allows placement of the workpiece within the camera's field of view. The two variables, θ and φ, are controlled programmatically, and the radial displacement r is set manually for each scan. The camera focus changes significantly with radial position; therefore, the scanner is calibrated for several radial values. Calibration was performed for radial locations of around 100, 150 and 200 mm of point C from the central point S. For all measurements in this paper, the radial position of the camera is kept at around 100 mm. Three important axes (the laser, Spin and Lift axes), defined in the camera frame, are critical in the design, as shown in Fig. 1. The camera axis is fully defined, since it is coincident with the y-axis of the camera frame, the default reference frame. Each of the three axes should be measured after the system is assembled. First, the laser axis represents the intersection of the laser plane with the vertical camera plane (YZ plane). This axis is set to remain strictly vertical with respect to the camera frame. This is achieved by manually adjusting the laser position and checking the orientation using the camera
feedback discussed in Section 5 below. Second, the Spin axis is the rotation axis of the turntable. The specimen to be scanned needs to be attached rigidly to the top of the table throughout a scan session. By capturing calibration pattern points at several θ rotation angles, and fitting the points to circles that have centers on the Spin axis, axis points are found with respect to the camera for any φ location. Further detail about this axis measurement is provided in the next section. For the final Lift axis, a similar method of capturing axis data can be utilized by capturing points at various φ locations. The intersection point S of the Spin and Lift axes is important for transformations. If the two axes do not intersect, then the rotational transformations should be done at different reference points. A computer subroutine has been designed to carry out these calibrations.

The turntable is made to rotate by a variable step angle, ranging from 0.02° to 0.4°, depending on the average vertical spacing Δs_z of the previously measured laser stripe locations. For a uniform point distribution, the directional spacings should have approximately equal magnitudes, Δs_x ≈ Δs_z, as illustrated in Fig. 3. The average Δs_z is estimated based on the center of mass of the last saved laser stripe points. With a vertical field of view φ_v of 43.3° and center of mass ⟨X_{n-1}, Y_{n-1}, Z_{n-1}⟩ of the prior stripe, the needed step angle in radians for the nth stripe image can be estimated using Eqs. (1) and (2). Y_cam is the y-location of the camera point C with respect to the stationary point S shown in Fig. 1.

3. Camera calibration

An optical model that defines data from a camera relates a 2D image to the physical 3D world. In order to calibrate any optical device, an accurate model is required. In this study, a camera with a 1/3″ 1920 × 1080 HD sensor and a glass Carl Zeiss lens is used. The calibration is carried out using a precisely known geometry measured within the working space of the scanner. For the modeling of the calibration images, the popular pinhole camera model is applied in conjunction with a polynomial radial distortion model relating image points with spatial 3D points. Tangential distortion is neglected in the model. The radial distortion model used by Zhang [9] relates raw distorted image points (x̃_p, ỹ_p) to their ideal pinhole counterparts (x_p, y_p) in Eq. (3).
\tilde{x}_p = x_p + x_p\left[k_1\left(x_p^2+y_p^2\right)+k_2\left(x_p^2+y_p^2\right)^2\right],
\tilde{y}_p = y_p + y_p\left[k_1\left(x_p^2+y_p^2\right)+k_2\left(x_p^2+y_p^2\right)^2\right]    (3)
Here, k_1 and k_2 are distortion coefficients. The focal point is taken as the origin of the image points. The image of a square calibration grid with points (x_p, y_p) is mapped to 3D frame points (X′, Y′, 0) on the grid plane using the following relation by Zhang [9]:
s\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
E \begin{bmatrix} X' \\ Y' \\ 0 \\ 1 \end{bmatrix}, \qquad E = [R\;\;T]    (4)
f_x and f_y are the pixel focal lengths, c_x and c_y represent the principal point in pixels, and s is a scaling factor. These factors form the intrinsic parameters of the camera [10]. E is composed of the extrinsic parameters, namely the rotation R and translation T of the camera relative to the world points on the grid plane.
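To make the mapping of Eqs. (3) and (4) concrete, the following Python sketch projects one planar grid point through the extrinsic transform, applies Zhang-style radial distortion in normalized coordinates and then the intrinsic parameters. The function name and the normalized-coordinate convention are illustrative assumptions rather than the exact implementation used for the scanner.

```python
import numpy as np

def project_grid_point(XY, R, T, fx, fy, cx, cy, k1, k2):
    """Project a planar calibration point (X', Y', 0) to distorted pixel
    coordinates: extrinsics E = [R T] as in Eq. (4), radial distortion with
    coefficients k1, k2 as in Eq. (3), then the intrinsic parameters."""
    Xw = np.array([XY[0], XY[1], 0.0])
    Xc = R @ Xw + T                          # camera-frame coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]      # ideal (undistorted) pinhole projection
    r2 = x ** 2 + y ** 2
    d = k1 * r2 + k2 * r2 ** 2               # polynomial radial distortion factor
    x_d, y_d = x * (1.0 + d), y * (1.0 + d)  # distorted coordinates, Eq. (3)
    return fx * x_d + cx, fy * y_d + cy      # pixel coordinates via the intrinsics of Eq. (4)
```

During calibration, such a forward model is evaluated for all grid points and the re-projection residuals are minimized over the intrinsic, extrinsic and distortion parameters, which is the Levenberg-Marquardt step described below.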
\Delta s_z = \frac{2\left[Y_{n-1}-Y_{cam}\right]}{p_y}\tan\!\left(\frac{\varphi_v}{2}\right)    (1)

step \approx \frac{\Delta s_z}{\sqrt{X_{n-1}^2+Y_{n-1}^2}}    (2)
Fig. 3. Vertical and horizontal spacing between scanned laser stripes and the needed predictive step angle in radians for image pixel size p_x × p_y of 1920 × 1080.
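A minimal sketch of the adaptive step-angle rule of Eqs. (1) and (2) is given below, assuming the centroid of the previously extracted stripe is already expressed in the part frame; the function name, default values and the clamping to the 0.02°–0.4° range quoted above are illustrative assumptions.

```python
import numpy as np

def adaptive_step_angle(stripe_xyz, y_cam, fov_v_deg=43.3, p_y=1080):
    """Estimate the next turntable step angle (rad) from the centroid of the
    previously extracted laser stripe, following Eqs. (1) and (2).

    stripe_xyz : (N, 3) array of the last stripe's points in the part frame
    y_cam      : y-location of the camera point C w.r.t. the stationary point S [mm]
    """
    x_c, y_c, _ = stripe_xyz.mean(axis=0)           # centroid <X_{n-1}, Y_{n-1}, Z_{n-1}>
    # Eq. (1): average vertical spacing of stripe samples on the object
    ds_z = 2.0 * abs(y_c - y_cam) / p_y * np.tan(np.radians(fov_v_deg) / 2.0)
    # Eq. (2): step angle that makes the horizontal spacing roughly match ds_z
    step = ds_z / np.hypot(x_c, y_c)
    # keep the step inside the hardware limits quoted in the text (0.02 deg to 0.4 deg)
    return float(np.clip(step, np.radians(0.02), np.radians(0.4)))
```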
Fig. 4. Mean calibration error of each image point re-projections.
Deviations of the modeled image points from their known image positions are minimized using the Levenberg-Marquardt algorithm to find the best intrinsic, extrinsic and distortion parameters. The mean error of the calibration model points compared to the real images, in pixels, is shown in Fig. 4.

4. Laser stripe extraction using image processing

Images containing the laser stripe need to be processed, and the desired laser plane intersection points should be extracted for further transformations. The image of the laser stripe reflected on the scanned object is taken at low exposure so that the image background is completely dark, thereby reducing the effects of background light. The lighting should be uniform, and the exposure should be adjusted for every scan condition. As proposed in some papers [11,12], reflectance can be subdivided into a diffuse lobe, a specular lobe and a specular spike. High uncertainty is associated with the specular lobe and spike reflections. As a result, reflections are assumed to have a purely Lambertian diffuse lobe [12], and specular reflections are avoided as much as possible. Shiny metallic objects are lightly painted matte white before the scan is carried out. The effect of the paint can be seen in Fig. 5, where the image of the painted aluminum sample is compared with its prior image. In order to isolate the background noise, the contrast of the image is adjusted in each color channel. The acquired RGB image is then converted to an intensity image. In the literature ([1,13]), an industrially effective method of laser stripe extraction that treats each row slice as an independent signal was proposed. Usamentiaga et al. [13] and Forest et al. [14] also showed that subpixel approximation methods such as the Gaussian and center of mass methods perform in a similar range; hence these methods are used in this study.
Thresholding of the intensity image is implemented locally within an appropriate window that trails along the laser stripe. The thresholding is done locally because surfaces whose normal makes a wide angle with the laser incidence direction tend to have low intensity; hence, using a single threshold limit would eliminate those points. First, segmentation is carried out to isolate regions that exceed a certain threshold. The initial threshold is set very low, so that all relevant portions of the image are included. This threshold value can be chosen as a fraction of the Otsu global threshold [15] to obtain a binary image. Morphological filling of the binary image is performed and the image's connected components are labeled. The areas of these components are stored, and a mid-point path is found row by row for each. For rows with more than one connected component, the one with the larger area is chosen, since it is less likely to be caused by noise. After the mid-points are found, the thresholding limit is determined locally in a small window whose dimensions depend on the stripe width. This window, shown in Fig. 6, is moved along the mid-point path until each row is fully segmented. The image data contain random noise emanating from the image sensor and the laser itself [16]. This noise is usually reduced by convolution with a Gaussian kernel whose standard deviation is varied based on the stripe width and intensity for each image [1]. In this article, two peak intensity determination methods are used: the Gaussian peak and centroid methods. For the centroid method, centroids of the grayscale section of the local window are used as the laser intersection points with the workpiece. The extracted point thus depends on the local vicinity of each row used for computing the centroid; hence, unlike row-independent methods [13], the row signals are not entirely independent of each other. On the other hand, the Gaussian peak extraction method fits the row data to a normal distribution and uses the peak point. Qi et al. [16] explain the Gaussian method in more detail.
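The following Python sketch illustrates the centroid (center of mass) variant of the stripe-center extraction described above, applied row by row inside a local window around the coarse mid-point path; the window size, threshold fraction and function names are assumptions for illustration, not the exact implementation.

```python
import numpy as np

def stripe_centers_centroid(gray, seed_cols, half_w=6, rel_thresh=0.3):
    """Estimate sub-pixel laser-stripe centers row by row with the centroid
    (center of mass) method inside a local window around a seed column.

    gray      : 2-D float array (intensity image with suppressed background)
    seed_cols : per-row approximate stripe column from the connected-component pass (NaN if none)
    """
    centers = np.full(gray.shape[0], np.nan)
    for row, seed in enumerate(seed_cols):
        if np.isnan(seed):
            continue                                   # no stripe found in this row
        lo = max(int(seed) - half_w, 0)
        hi = min(int(seed) + half_w + 1, gray.shape[1])
        window = gray[row, lo:hi].astype(float)
        local = window - rel_thresh * window.max()     # local threshold, not a single global one
        local[local < 0] = 0.0
        if local.sum() > 0:
            cols = np.arange(lo, hi)
            centers[row] = (cols * local).sum() / local.sum()  # intensity-weighted centroid
    return centers
```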
Fig. 5. (a) Image of the shiny aluminum part and (b) image after it is painted.
The camera calibration performed in Section 3 only determines the camera properties; an additional relation is needed between the location of the part during the scanning motion and the acquired image. Image points of the laser scan must be represented in a 3D world frame. The laser points are extracted from the image using the methods outlined earlier. From the basic pinhole model, two equations relate the undistorted image laser points (x_p, y_p) to world points (X_c, Y_c, Z_c) in the camera frame located at point C:
X_c = \frac{x_p}{f_x}\,Y_c, \qquad Z_c = \frac{y_p}{f_y}\,Y_c    (5)
The extra equation needed to fully define each point is provided by the fact that all the points lie on the laser plane, which can be obtained by triangulation. The triangle formed by the laser, the camera and the measured point forms the basic principle of triangulation [17]. Based on the diagram in Fig. 7, the following relation is easily obtained.
X_c = (Y_l - Y_c)\tan(\beta)    (6)
From Eqs. (5) and (6), the following Eq. (7) is found directly.
X_c = \frac{Y_l\tan(\beta)\,\dfrac{x_p}{f_x}}{\dfrac{x_p}{f_x}+\tan(\beta)}, \qquad
Y_c = \frac{Y_l\tan(\beta)}{\dfrac{x_p}{f_x}+\tan(\beta)}, \qquad
Z_c = \frac{y_p}{f_y}\,\frac{Y_l\tan(\beta)}{\dfrac{x_p}{f_x}+\tan(\beta)}    (7)

Fig. 6. Local threshold method carried out in a local vicinity window.
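A small sketch of the triangulation of Eq. (7) is given below, assuming undistorted pixel coordinates and the calibrated intrinsics f_x and f_y; the function name and variable names are illustrative.

```python
import numpy as np

def pixel_to_camera_frame(xp, yp, fx, fy, beta, y_l):
    """Recover a point (Xc, Yc, Zc) in the camera frame from an undistorted
    stripe pixel (xp, yp) using the laser-plane constraint of Eq. (6),
    i.e. the closed form of Eq. (7)."""
    t = np.tan(beta)
    yc = y_l * t / (xp / fx + t)     # Eq. (7), Y component
    xc = (xp / fx) * yc              # Eq. (5)
    zc = (yp / fy) * yc              # Eq. (5)
    return np.array([xc, yc, zc])
```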
5. Scanner equation and parameters

The parameters of the scanner in this section are variables that describe the important geometrical positions and orientations of the laser, camera and part to be scanned. Due to manufacturing errors and tolerances, the geometric measurements in the assembled scanner vary by up to the order of hundreds of micrometers. Moreover, exact measurement and alignment of some dimensions, such as the location of the focal point and axes, would require costly setups. During scanning, motion is restricted to rotation about the Spin and Lift axes. Since the part is rigidly attached to the turntable, knowing the configuration of the table is enough. It is assumed that the laser plane and camera axis remain fixed with respect to each other.
The obtained points are at an instantaneous step i of the spin with respect to point C on the camera. The measurement needs to be transformed to the frame attached to the measured part. The chosen part reference point is the point S, which is the intersection of the Spin and Lift axes. The new set of points for each step at an arbitrary spin angle θ_i at fixed lift configuration φ is given below.
\begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}_S =
\begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & 0 \\ \sin\theta_i & \cos\theta_i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\phi & \sin\phi & 0 \\ 0 & -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & -X_{cam} \\ 0 & 1 & 0 & -Y_{cam} \\ 0 & 0 & 1 & -Z_{cam} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}_C    (8)
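The composition in Eq. (8) can be written compactly with homogeneous matrices, as in the following sketch. The sign conventions follow the reconstruction of Eq. (8) given above (and the closed form of Eq. (10) below), so the snippet should be read as illustrative rather than as the exact implementation.

```python
import numpy as np

def camera_to_part_frame(p_cam, theta_i, phi, cam_pos):
    """Transform a camera-frame point to the part frame attached at S by
    composing the spin rotation, the lift rotation and the camera offset,
    as in Eq. (8).  cam_pos = (Xcam, Ycam, Zcam)."""
    c, s = np.cos(theta_i), np.sin(theta_i)
    Rz = np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    cp, sp = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0, 0], [0, cp, sp, 0], [0, -sp, cp, 0], [0, 0, 0, 1]])
    T = np.eye(4)
    T[:3, 3] = -np.asarray(cam_pos, float)    # shift the origin from C to S
    p = np.append(np.asarray(p_cam, float), 1.0)
    return (Rz @ Rx @ T @ p)[:3]
```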
Fig. 7. Triangulation diagram showing measurable range (MR), field of view, laser 1 plane and sample point (X, Y, Z).
Eq. (8) is the general transformation equation of the scanner. (X_cam, Y_cam, Z_cam) is the camera reference point C with respect to the stationary point S. The rest of this article focuses on ways of obtaining accurate and optimal values of ⟨φ, β, y_l, X_cam, Y_cam, Z_cam⟩. The location of the Spin axis is determined by fitting rotated planar calibration points, obtained at about 40 instances, into circles and extracting the center points. Since the calibration points are on a plane, Eq. (4) is used to find just the values of R and T. The plane points can then easily be transformed to the camera frame. Given that the camera is fixed for each lift configuration, the plane points should fit into circles, since they were generated by pure rotation. The center points of the circles are used to find the location and
direction of the Spin axis. In order to achieve this, principal component analysis (PCA) is used with a singular value decomposition algorithm to obtain the axis vector for which the distances of the center points from the axis are minimal [18]. The fitted axes for 7 lift positions shown in Fig. 8a are obtained by repeatedly rotating the lift motor by 7.2°. The lift angle can be computed from the dot product of the Spin axis direction and the camera y-axis (n_cam,y = ⟨0, 1, 0⟩) given in Eq. (9).
\phi = \cos^{-1}\!\left(n_{cam,y}\cdot n_{spin}\right)    (9)
The accuracy of these measurements can be verified by checking the maximum deviations of the axis points from the fitted axis. These angles, together with their deviations, are plotted in Fig. 8b.
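A sketch of the axis-fitting step is given below: the circle centers obtained at successive θ rotations are fed to an SVD-based principal component fit, and the lift angle follows from Eq. (9). Function names are illustrative.

```python
import numpy as np

def fit_axis(center_points):
    """Fit a 3-D line (axis point + unit direction) to circle-center points
    by principal component analysis / SVD, minimizing distances to the axis."""
    pts = np.asarray(center_points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)     # first right-singular vector = dominant direction
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction

def lift_angle(spin_dir):
    """Lift angle from Eq. (9): angle between the Spin-axis direction and
    the camera y-axis n_cam,y = <0, 1, 0>."""
    n_cam_y = np.array([0.0, 1.0, 0.0])
    return float(np.arccos(np.clip(np.dot(n_cam_y, spin_dir), -1.0, 1.0)))
```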
Fig. 8. (a) Spin axis of 7 configurations with some calibration planes and center points of 1st Lift. (b) Measured lift angles from fitted Spin axis data with their confidence interval.
A similar procedure is carried out for the Lift axis, where the camera is lifted and patterns are recorded. After the Lift axis is defined, the intersection of the Spin and Lift axes is taken as the midpoint of the perpendicular distance between the axes. This distance should be small for the axes to be considered intersecting. For non-intersecting axes, an additional point must be introduced so that each axis has a point about which rotation is carried out. In the assembled scanner, the axes intersect at point (X_cam, Y_cam, Z_cam), which is determined using imaging feedback. The laser orientation is checked by extracting laser lines on the calibration plane at different locations in the working space. An image of the calibration pattern is first taken without the laser stripe; then the laser is switched on and the stripe points are found. Multiple such points are captured at various calibration pattern locations. The laser is adjusted manually until the lines obtained from the laser fit into a plane whose normal lies in the XY plane of the camera. Once aligned, the two variables β and y_l can also be determined directly. A summary of the procedures in this section is given in Fig. 9.

6. Evaluation of uncertainty and optimal scanner parameters

In this section, all parameters of the scanner are evaluated within confidence intervals from the calibration data. A standard uncertainty analysis of the scanner measurement is carried
out using the Guide to the expression of uncertainty in measurement (GUM) [19]. The principal aim is to find how a small variation, within a probable interval, in each parameter of u = ⟨φ, β, y_l, X_cam, Y_cam, Z_cam⟩ can potentially change the resulting error in the measured points. Consequently, the points are first expressed as an explicit function of u. An overdetermined system is formed when the deviations of numerous scanned points (object scans usually contain between 10 thousand and 5 million points) from a known geometry are minimized by varying the variables in u. Such systems do not have a unique solution. As a result, an estimate is sought using least squares techniques. By substitution and reorganization of Eqs. (5), (6) and (8), the relation for each step i is as follows.

\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} = R(\theta_i)\,P(u) = R(\theta_i)
\begin{bmatrix}
\dfrac{x_p}{f_x}\,y_l\,C(\beta) - X_{cam} \\[6pt]
\left(\cos\phi + \dfrac{y_p}{f_y}\sin\phi\right) y_l\,C(\beta) - Y_{cam}\cos\phi - Z_{cam}\sin\phi \\[6pt]
\left(\dfrac{y_p}{f_y}\cos\phi - \sin\phi\right) y_l\,C(\beta) + Y_{cam}\sin\phi - Z_{cam}\cos\phi
\end{bmatrix}    (10)
Here P(u) is defined in Eq. (10), and Eqs. (11) and (12) define R(θ) and C(β), respectively.
R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}    (11)
Fig. 9. Procedure of calibration of the scanner variables using camera, lasers and a planar calibration grid.
Table 1. Properties of the scanner parameters used as input PDFs for uncertainty propagation.

Parameter      Estimate    Uncertainty (68% level of confidence)
φ [rad]        0.0248      0.005
β1 [rad]       0.3517      0.010
y_l1 [mm]      93.050      0.15
X_cam [mm]     2.634       0.040
Y_cam [mm]     98.330      0.040
Z_cam [mm]     2.662       0.040
β2 [rad]       0.3344      0.008
y_l2 [mm]      91.055      0.15
C(\beta) = \frac{\tan\beta}{\dfrac{x_p}{f_x} + \tan\beta}    (12)
The derivative of Eq. (10) yields the changes in position due to perturbations in the variables: [\partial x\;\;\partial y\;\;\partial z]_S^T = R(\theta_i)\,J_{P,u}\,\partial u, where the Jacobian matrix J_{P,u} is defined in Eq. (13) below.
J_{P,u} = \begin{bmatrix}
0 & \dfrac{x_p}{f_x} y_l C'(\beta) & \dfrac{x_p}{f_x} C(\beta) & -1 & 0 & 0 \\[6pt]
\left(\dfrac{y_p}{f_y}c - s\right) y_l C(\beta) + Y_{cam}s - Z_{cam}c & \left(c + \dfrac{y_p}{f_y}s\right) y_l C'(\beta) & \left(c + \dfrac{y_p}{f_y}s\right) C(\beta) & 0 & -c & -s \\[6pt]
-\left(c + \dfrac{y_p}{f_y}s\right) y_l C(\beta) + Y_{cam}c + Z_{cam}s & \left(\dfrac{y_p}{f_y}c - s\right) y_l C'(\beta) & \left(\dfrac{y_p}{f_y}c - s\right) C(\beta) & 0 & s & -c
\end{bmatrix}    (13)

where c ≡ cos φ, s ≡ sin φ, and the columns correspond to the derivatives with respect to φ, β, y_l, X_cam, Y_cam and Z_cam.
Eq. (14) defines C'(β).
C'(\beta) = \frac{x_p/f_x}{\cos^2(\beta)\left(\dfrac{x_p}{f_x} + \tan\beta\right)^2}    (14)
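The pieces of the measurement model can be collected in a short sketch: C(β) of Eq. (12), its derivative C'(β) of Eq. (14) and P(u) of Eq. (10), together with a central-difference Jacobian that can be checked against the analytic form of Eq. (13). Function names and the finite-difference step are illustrative assumptions.

```python
import numpy as np

def C(beta, xp, fx):
    """Eq. (12)."""
    return np.tan(beta) / (xp / fx + np.tan(beta))

def dC(beta, xp, fx):
    """Eq. (14): derivative of C with respect to beta."""
    return (xp / fx) / (np.cos(beta) ** 2 * (xp / fx + np.tan(beta)) ** 2)

def P(u, xp, yp, fx, fy):
    """Measurement model of Eq. (10) for one image point (xp, yp);
    u = (phi, beta, y_l, Xcam, Ycam, Zcam)."""
    phi, beta, y_l, Xc, Yc, Zc = u
    cb = C(beta, xp, fx)
    cp, sp = np.cos(phi), np.sin(phi)
    return np.array([
        (xp / fx) * y_l * cb - Xc,
        (cp + (yp / fy) * sp) * y_l * cb - Yc * cp - Zc * sp,
        ((yp / fy) * cp - sp) * y_l * cb + Yc * sp - Zc * cp,
    ])

def jacobian_fd(u, xp, yp, fx, fy, h=1e-7):
    """Central-difference Jacobian of P(u), to compare against Eq. (13)."""
    J = np.zeros((3, 6))
    for k in range(6):
        du = np.zeros(6); du[k] = h
        J[:, k] = (P(u + du, xp, yp, fx, fy) - P(u - du, xp, yp, fx, fy)) / (2 * h)
    return J
```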
6.1. Uncertainty analysis of measured points

Having obtained the scanner model of measurement, it becomes necessary to evaluate the uncertainty associated with values measured with the scanner. The Guide to the expression of uncertainty in measurement (GUM) and the International vocabulary of metrology (VIM) provide an international standard for the evaluation and expression of measurement uncertainty [19,20]. These standards require a model of measurement reflecting the relation between the influencing quantities and the output. The model of the scanner is given in Eq. (10), and the inputs are the six variables in u. According to GUM, the model of measurement must include all influencing quantities, with known corrections included. To obtain the standard uncertainty of the desired quantity, each of the inputs needs to be estimated from a distribution of possible values. Probability distributions may be based on observations or on other knowledge. GUM Section 4.3 outlines guidelines for estimating inputs whose values have not been obtained by repeated observation, such as data obtained by calibration.
The uncertainties of the input quantities are estimated using calibration data, which provide a probability distribution function (PDF) of the values (GUM type B uncertainty evaluation) obtained during the calibration process summarized in Fig. 9. Since the values of the parameters in u are calibrated through fitting of points from the camera-laser scanner system, the deviations of the points from the fitted model give a statistical basis for the estimation of uncertainty. These deviations are used to obtain the best estimate and dispersion in the measurement of each input parameter, such as the φ error bars plotted in Fig. 8b. When the estimate of the input and its uncertainty are available, a Gaussian PDF is recommended; for the scanner inputs, the generated PDFs have the evaluated estimates and standard deviations given in Table 1. GUM recommends the use of the law of propagation of uncertainty, with truncation of the Taylor series, to propagate the input PDFs and obtain the uncertainty of the measurand. However, for complex measurement models, a practical and more general tool such as Monte Carlo simulation is applicable [21]. It works by randomly sampling a large number of combinations (a minimum of 10^6 samples according to GUM) of values from each input PDF, evaluating the measurand for each combination and thereby generating the output PDF.
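A minimal sketch of this Monte Carlo propagation is shown below, assuming independent Gaussian input PDFs with the laser 1 estimates and uncertainties of Table 1; model(u) stands for any routine (for instance the P(u) sketch after Eq. (14) combined with R(θ) and a radial-deviation computation) that maps one sampled parameter set to the measurand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibrated estimates and 68 % standard uncertainties from Table 1 (laser 1):
# u = (phi, beta1, y_l1, Xcam, Ycam, Zcam).  Treating them as independent
# Gaussians is an assumption made here for illustration.
est = np.array([0.0248, 0.3517, 93.050, 2.634, 98.330, 2.662])
unc = np.array([0.005, 0.010, 0.150, 0.040, 0.040, 0.040])

def propagate(model, n_samples=10**6):
    """Monte Carlo propagation of distributions: sample the input PDFs,
    evaluate the measurand for every sample and return its empirical PDF."""
    samples = rng.normal(est, unc, size=(n_samples, est.size))
    out = np.array([model(u) for u in samples])
    return out, out.mean(), out.std(ddof=1), np.percentile(out, [2.5, 97.5])
```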
Fig. 10. Monte Carlo simulation of uncertainty propagation showing average sample distribution of radial deviation.
The coordinate points X, Y, Z and the radial deviation from a known sphere are studied as measurands, using image points from the sphere at various locations. The average deviation, together with the overall Monte Carlo simulation of the uncertainty propagation process, is presented in Fig. 10; the other measurands, the position coordinates X, Y, Z of a sample point on the sphere, are shown in Fig. 11a, b and c. The deviations of the sample points from the sphere surface are expected to be distributed about zero rather than about the 0.247 shown in Fig. 10. The subsequent subsection deals with ways to minimize such deviations.

6.2. Least squares optimization

The input variables for the two-laser system are u_1 = ⟨φ, β1, y_l1, X_cam, Y_cam, Z_cam⟩ and u_2 = ⟨φ, β2, y_l2, X_cam, Y_cam, Z_cam⟩,
making 8 distinct parameters. The calibrated estimates of these quantities have not been verified to give the best measurement, since a small adjustment in each value still represents a probable estimate. As stated in the last section, use of the calibrated estimates yields a radial measurement that deviates by about 0.25 mm. A method of optimizing the measurements to give smaller deviations is therefore needed. A known spherical sample is used as a benchmark for optimization of the scanner parameters. In fact, any known geometry can be used for this purpose. Iuliano et al. [22] proposed using a geometry made from a combination of basic surfaces. However, the benchmark geometry must be known precisely, within an acceptable uncertainty. The sphere used in this paper is measured with a contact-type CMM with an uncertainty of less than 3 μm. The CMM result of 60 points on the sphere shows a maximum deviation of about 4.5 μm. Assume the sphere has radius r and that the radius is known with an uncertainty of less than 5 μm. The following
Fig. 11. Results of uncertainty propagation in X, Y, Z are shown in plots a, b, and c with 2-sigma confidence intervals marked. The average least squares optimized radial deviation PDF of sample points on the sphere is shown in d.

Table 2. Scanner variables before and after optimization using a spherical benchmark.

Parameter          Computed   Optimized
φ [rad]            0.0248     0.0201
β1 [rad]           0.3517     0.3662
y_l1 [mm]          93.050     92.701
X_cam [mm]         2.634      2.670
Y_cam [mm]         98.330     98.345
Z_cam [mm]         2.662      2.687
β2 [rad]           0.3344     0.3394
y_l2 [mm]          91.055     90.390
Residuals factor   6793.7     117.3
functions are minimized to find the optimal u_1 and u_2 variables. The minimization is based on known and verified geometrical relations that must hold for an accurate measurement. The aim is to diminish the deviations given by the following expressions in Eqs. (15) and (16). For each step i,
f_1^i(u_1) = \left\| R(\theta_i)P(u_1) - X_{cnt1}(u_1) \right\| - r
f_2^i(u_2) = \left\| R(\theta_i)P(u_2) - X_{cnt2}(u_2) \right\| - r
f_3(u_1, u_2) = \left\| X_{cnt1}(u_1) - X_{cnt2}(u_2) \right\|    (15)

The least squares problems are defined as:

\min_{u_1} \sum_i \left\| f_1^i(u_1) \right\|^2, \qquad
\min_{u_2} \sum_i \left\| f_2^i(u_2) \right\|^2, \qquad
\min_{u_1,u_2} \left\| f_3(u_1,u_2) \right\|^2    (16)
where X_cnt1 and X_cnt2 are the centers of the fitted spheres of the laser 1 and laser 2 measurements, respectively. The spheres are fitted using geometric least squares fitting [23]. The least squares solution minimizes the sum of squares of the deviations of thousands of points extracted from the images. The problem is solved using a bounded trust-region-reflective algorithm on the Matlab® platform with the aid of the Optimization Toolbox [24,25]. It works by generating simpler sub-problems that serve as estimates within a trust region. The optimal values found are indicated in Table 2, together with the original values computed by the calibration outlined in the previous section. The total residual factor, which is proportional to the sum of distances of the points from the fitted sphere, decreases by more than 50 times, as given in Table 2. Moreover, by comparison of Figs. 10 and 11d, the radial deviation is observed to diminish. Fig. 12 shows the fitted spheres using each of the two lasers, and the corresponding error distributions of the first laser are shown in Fig. 13a and b.
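The sketch below illustrates the least squares optimization of Eqs. (15) and (16), with SciPy's bounded trust-region-reflective solver standing in for the Matlab toolbox call; P(u) refers to the measurement-model sketch given after Eq. (14), and the simple algebraic sphere fit is a stand-in for the geometric fit of [23].

```python
import numpy as np
from scipy.optimize import least_squares

def rotate_z(theta):
    """R(theta) of Eq. (11)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fit_sphere(pts):
    """Algebraic sphere fit (stand-in for the geometric fit of [23])."""
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    return center, np.sqrt(sol[3] + center @ center)

def residuals(u, image_points, thetas, fx, fy, r_ref):
    """Eq. (15): radial deviations of the reconstructed points from the
    benchmark sphere of known radius r_ref (one laser)."""
    pts = np.array([rotate_z(th) @ P(u, xp, yp, fx, fy)   # P(u): sketch after Eq. (14)
                    for (xp, yp), th in zip(image_points, thetas)])
    center, _ = fit_sphere(pts)
    return np.linalg.norm(pts - center, axis=1) - r_ref

# Bounded trust-region-reflective solve, analogous to the Matlab toolbox call used in the paper:
# result = least_squares(residuals, u0, args=(image_points, thetas, fx, fy, r_ref),
#                        method='trf', bounds=(u0 - delta, u0 + delta))
```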
Fig. 12. (a) First laser scan fitted on sphere after optimization. (b) Painted aluminum specimen. (c) Second laser scan fitted on sphere after optimization.
Fig. 13. (a) Frequency distribution of point radial deviations on sphere. (b) Radial deviations of scan points on the sphere.
After fusion of the data obtained at other lift angles using both lasers, the completely reconstructed sphere specimen is shown in Fig. 14b. A more complex fused mesh of a scan object from this scanner is shown in Fig. 14c-d. In order to avoid overlap of surfaces, only regions with incomplete data should be patched onto the first scan. This can be accomplished by filtering subsequent scans to allow only missed regions before joining the data together. An easier
way of patching scan data is to use commercial mesh modeling software. Further adjustments of the points can be made using constrained iterative closest point (ICP) algorithms to ensure that the data sets are properly merged. Apart from the spherical object, a planar part is also measured. The error distribution of the plane data using the obtained scanner parameters is plotted in Fig. 15c.
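For the fine alignment mentioned above, a minimal point-to-point ICP (nearest-neighbour correspondences plus a Kabsch rigid update) can be used when commercial tools are not available; the sketch below is illustrative and omits the constraints and outlier handling a production merge would need.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=30):
    """Minimal point-to-point ICP used to fine-align a new scan patch (source)
    to the already merged data (target) before fusion."""
    src = np.asarray(source, float).copy()
    tree = cKDTree(np.asarray(target, float))
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)                  # nearest-neighbour correspondences
        tgt = np.asarray(target, float)[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)               # Kabsch rigid registration
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```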
Fig. 14. (a) and (b) are the sphere image and its fused mesh respectively. (c) and (d) are image and fused meshes of a knight piece respectively.
Fig. 15. (a) Plane and sphere model used for measurement. (b) Reconstructed model used to measure D_s. (c) Error distribution of plane data.
Some papers [4,8,26] used spherical and planar objects joined together and compared the distance D between the plane and the center of the sphere. To compare results with those papers, a rigidly joined sphere and plane model, illustrated in Fig. 15, is measured by a contact-type CMM with a Renishaw MH 20 probe. The same model is scanned on the laser scanner five times. The reference CMM result is D_0 = 8.032 mm, while the scanner gave an average distance D_s = 7.988 mm. The overall standard deviation in measurements of the designed scanner ranges between 40 and 70 μm, while results within the 2-sigma probability region stand at 90–130 μm. The acquired points can be organized into a mesh. Even though meshing is outside the scope of this paper, a simple meshing method has been tested by connecting neighboring vertices [27]. This mesh can be further processed into any CAD format that can be opened in solid modeling software. Advanced mesh processing such as smoothing and resampling can be carried out using software like Geomagic Wrap and Meshlab.

7. Conclusion

A prototype of a 3D laser scanner is developed to facilitate inexpensive reverse engineering at various object view angles of the positioned imaging sensor. An indispensable feature of any 3D scanner is its measurement accuracy, which should be reliably quantified and improved. Since the uncertainties of points measured with a laser scanner originate from part-geometry-dependent form deviation, laser uncertainty, lens imperfection, non-uniform illumination and camera-laser system positioning, evaluation and improvement of accuracy can be rather challenging. A formal study of the uncertainty of the measured points is carried out by propagation of uncertainties in the influencing parameters of the scanner. A method of finding the motion axes and their intersection with respect to the camera reference frame was developed for the scanner prototype. The parameters that define the relative position and orientation of the camera-laser system with respect to the rotation table are studied, providing the basis for derivation of the scanner measurement model. To eliminate the systematic error contributed by uncertainties in these parameters, their values are optimized using least squares. The error in measurements of the improved scanner is evaluated on several geometric samples and found to be comparable with uncertainties reported in other studies. The accuracy of the laser scanner can be improved by better calibration, use of a more flexible mechanism to keep the camera closer to the workpiece, and use of a laser with a thinner beam width.

Conflict of interest

None.

Funding

This work was supported by Koc University.

Appendix A. Supplementary material

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.measurement.2017.07.028.
References

[1] J. Molleda, R. Usamentiaga, D.F. García, F.G. Bulnes, A. Espina, B. Dieye, L.N. Smith, An improved 3D imaging system for dimensional quality inspection of rolled products in the metal industry, Comput. Ind. 64 (2013) 1186–1200, http://dx.doi.org/10.1016/j.compind.2013.05.002.
[2] F. Blais, J.A. Beraldin, Recent developments in 3D multi-modal laser imaging applied to cultural heritage, Mach. Vis. Appl. 17 (2006) 395–409, http://dx.doi.org/10.1007/s00138-006-0025-3.
[3] I. Popov, S. Onuh, K. Dotchev, Dimensional error analysis in point cloud-based inspection using a non-contact method for data acquisition, Meas. Sci. Technol. 21 (2010) 075303, http://dx.doi.org/10.1088/0957-0233/21/7/075303.
[4] A. Isheil, J.-P. Gonnet, D. Joannic, J.-F. Fontaine, Systematic error correction of a 3D laser scanning measurement device, Opt. Lasers Eng. 49 (2011) 16–24, http://dx.doi.org/10.1016/j.optlaseng.2010.09.006.
[5] B. Curless, M. Levoy, Better optical triangulation through spacetime analysis, in: Proc. IEEE Int. Conf. Comput. Vis., 1995, http://dx.doi.org/10.1109/ICCV.1995.466772.
[6] L. Zhou, A. Waheed, J. Cai, Correction technique to compensate the form error in 3D profilometry, Measurement 23 (1998) 117–123, http://dx.doi.org/10.1016/S0263-2241(98)00014-1.
[7] M. Drenik, M. Kampel, An evaluation of low cost scanning versus industrial 3D scanning devices, in: Proc. 1st Int. Congr. Image Signal Process. (CISP 2008), vol. 3, 2008, pp. 756–760, http://dx.doi.org/10.1109/CISP.2008.762.
[8] F. Xi, Y. Liu, H.Y. Feng, Error compensation for three-dimensional line laser scanning data, Int. J. Adv. Manuf. Technol. 18 (2001) 211–216, http://dx.doi.org/10.1007/s001700170076.
[9] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22 (2000) 1330–1334.
[10] J. Heikkila, O. Silven, A four-step camera calibration procedure with implicit image correction, in: Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 1997, http://dx.doi.org/10.1109/CVPR.1997.609468.
[11] Y. Wang, H.-Y. Feng, Modeling outlier formation in scanning reflective surfaces using a laser stripe scanner, Measurement 57 (2014) 108–121, http://dx.doi.org/10.1016/j.measurement.2014.08.010.
[12] S.K. Nayar, K. Ikeuchi, T. Kanade, Surface reflection: physical and geometrical perspectives, IEEE Trans. Pattern Anal. Mach. Intell. 13 (1991).
[13] R. Usamentiaga, J. Molleda, D.F. García, Fast and robust laser stripe extraction for 3D reconstruction in industrial environments, Mach. Vis. Appl. 23 (2012) 179–196, http://dx.doi.org/10.1007/s00138-010-0288-6.
[14] J. Forest, J. Salvi, E. Cabruja, C. Pous, Laser stripe peak detector for 3D scanners: a FIR filter approach, in: Proc. Int. Conf. Pattern Recognit., vol. 3, 2004, pp. 646–649, http://dx.doi.org/10.1109/ICPR.2004.1334612.
[15] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern. 9 (1979) 62–66.
[16] L. Qi, Y. Zhang, X. Zhang, S. Wang, F. Xie, Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm, Opt. Express 21 (2013) 13442–13449, http://dx.doi.org/10.1364/OE.21.013442.
[17] J.G.D.M. França, M.A. Gazziro, A.N. Ide, J.H. Saito, A 3D scanning system based on laser triangulation and variable field of view, in: Proc. Int. Conf. Image Process. (ICIP), vol. 1, 2005, pp. 425–428, http://dx.doi.org/10.1109/ICIP.2005.1529778.
[18] I.T. Jolliffe, Principal Component Analysis, second ed., Springer, 2002.
[19] JCGM, Evaluation of measurement data – Guide to the expression of uncertainty in measurement, 2008.
[20] JCGM, JCGM 200:2012, International vocabulary of metrology – Basic and general concepts and associated terms (VIM), third ed., 2012.
[21] M. Cox, P. Harris, Evaluation of measurement uncertainty based on the propagation of distributions, 46 (2003) 9–14.
[22] L. Iuliano, P. Minetola, A. Salmi, Proposal of an innovative benchmark for comparison of the performance of contactless digitizers, Meas. Sci. Technol. 21 (2010) 105102, http://dx.doi.org/10.1088/0957-0233/21/10/105102.
[23] S.J. Ahn, W. Rauh, Geometric least squares fitting of circle and ellipse, Int. J. Pattern Recognit. Artif. Intell. 13 (1999) 987–996, http://dx.doi.org/10.1142/S0218001499000549.
[24] T.F. Coleman, Y. Li, An interior trust region approach for nonlinear minimization subject to bounds, SIAM J. Optim. 6 (1996) 418–445.
[25] MathWorks, Optimization Toolbox User's Guide, R2015a, 2015.
[26] H.Y. Feng, Y. Liu, F. Xi, Analysis of digitizing errors of a laser scanning system, Precis. Eng. 25 (2001) 185–191, http://dx.doi.org/10.1016/S0141-6359(00)00071-4.
[27] H. Woo, E. Kang, S. Wang, K.H. Lee, A new segmentation method for point cloud data, Int. J. Mach. Tools Manuf. 42 (2002) 167–178, http://dx.doi.org/10.1016/S0890-6955(01)00120-1.