Optik 126 (2015) 701–707
A new three-dimensional laser scanner design and its performance analysis

Jin Xiao*, Xiaoguang Hu, Weixiong Lu, Jixiao Ma, Xiao Guo

State Key Laboratory of Virtual Reality Technology and Systems, School of Automation Science and Electrical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
Article info

Article history: Received 26 January 2014; accepted 4 February 2015.

Keywords: Line laser; CMOS camera; Image denoising; Laser stripe extraction
Abstract

The demand for three-dimensional (3D) laser scanning technology has been increasing rapidly. However, most existing commercial 3D laser scanning products are heavy, power-hungry and expensive. In this paper we demonstrate a cost-effective indoor 3D laser scanner with reasonable (millimeter-scale) accuracy. The prototype is built around a purpose-designed control module that drives a turntable carrying a commercial line laser and a CMOS camera, which scan a target surface by the triangulation method. System calibration, image denoising and laser stripe centerline extraction are analyzed in detail as well.
1. Introduction

With the rapid development of optoelectronic technology, 3D laser scanning has become increasingly important for reverse engineering in industry, heritage conservation, medical imaging, aerospace mapping, security surveillance, rescue applications, self-localization, pedestrian detection, description of robots' operating environments, and so on. For example, as an active scanning technology, a 3D laser scanner can help mobile robots detect indoor environments composed of different objects and extract useful feature information easily [1,2]. The two most common ranging principles of laser scanners are time-of-flight (TOF) and triangulation. TOF laser scanners offer a wide measuring range and high relative accuracy, but most of them are heavy, expensive, power-hungry and time-consuming [3]. Commercial TOF laser scanners such as the SICK LMS 200 [4], the Optech ILRIS-3D [5], and the SwissRanger SR4000 (http://www.mesa-imaging.ch/ (accessed in 2013)) are also expensive. These TOF scanners have been used to build 3D models in different ways. For example, Biber et al. [4] present a 3D modeling system equipped with a SICK LMS 200 laser scanner and a panoramic camera built from an ordinary CCD camera. Fu et al. [2] propose a scanner composed of a laser line projector and a camera for indoor navigation of mobile robots. Compared with TOF laser scanners, triangulation laser scanners can be built with lower cost, weight, power
and processing time, which makes them more suitable for indoor environment detection. In the triangulation method, a laser provides the intensity information and a camera records two-dimensional (2D) images. To obtain precise values, a calibration rig is critical. Refs. [6,7] describe calibration rigs consisting of planes with known parameters: projecting a line laser onto the known planes gives image-to-world plane correspondences from which the transformation matrix is recovered. More flexible and economical solutions are given in [1,8,9]. In this paper, we present an easier, lightweight, low-power and low-cost way to build a 3D laser scanner prototype. The design of the proposed 3D scanner is described in Section 2. Performance tests and analysis are discussed in detail in Section 3, including the calibration of the hardware system, image distortion, image denoising, and laser stripe center extraction. In Section 4, experiments show that the prototype reaches a reasonable precision and meets the needs of low-cost 3D modeling applications.
2. Design of the 3D laser scanner

2.1. The basic principle of the 3D laser scanner

The proposed 3D laser scanner uses a line laser and a CMOS camera mounted on a turntable to acquire 3D point cloud data and reconstruct the scene. Fig. 1 shows the schematic of the 3D laser scanning rangefinder with its camera center, laser center and rotation center. In Fig. 1, the line between P1 and P2 is the laser line projected onto the target. Observed through the camera, the scene is imaged on the image plane, where P1' and P2' correspond to P1 and P2. In the horizontal plane, the coordinate system is established as shown in Fig. 2.

Fig. 1. 3D laser scanning rangefinder schematics.

Fig. 2. Horizontal distance measurement.

2.2. The measurement principle of the 3D laser scanner

In Fig. 2, u is the abscissa of the imaged laser line in the image plane, with the center of the image plane as origin and the pixel as unit. The distance between the laser center and the rotation center is denoted D, and the distance between the laser center and the camera center is denoted L. With θ the laser output angle and f the focal length, formula (1) follows from similar triangles:

    y = (D − x) tan θ,
    (L − D + x) / y = u / f.                                          (1)

From (1), formula (2) can be deduced:

    x = D − L / (1 + (u/f) tan θ),
    y = (D − x) tan θ.                                                (2)

The x, y coordinates in the horizontal plane can thus be calculated from the position of the laser line in the pictures taken by the camera. The vertical coordinate is then found as shown in Fig. 3: the triangle in the imaging plane and the actual triangle are similar, so

    z / v = y / f,                                                    (3)
    z = v · y / f.                                                    (4)

Fig. 3. Distance measurement in the vertical plane.

This establishes the two-dimensional principle of the 3D laser rangefinder. In practical operation and installation the rotation center does not lie exactly on the baseline connecting the laser center and the camera center, so we introduce an offset c. As the turntable rotates, the derived coordinates rotate with it, and the world coordinate system is established as

    X = (y + c) sin α + x cos α,
    Y = (y + c) cos α − x sin α,
    Z = z.                                                            (5)

Therefore, by measuring the position of the laser line in an image taken by the camera, the 3D coordinates of the corresponding points can be worked out, and the point cloud data needed for 3D reconstruction can be obtained. The following parameters need to be measured in advance:

- focal length f;
- distance L between the laser center and the camera center;
- distance D between the laser center and the rotation center;
- offset c of the rotation center;
- laser output angle θ.

The data recorded for each measurement are the turntable rotation angle α and the position (u, v) of the laser spot in the image.
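For clarity, the measurement principle above can be condensed into a few lines of code. The following C++ sketch is not from the paper; it simply transcribes formulas (2), (4) and (5), with the calibration quantities named after the parameter list above.

```cpp
#include <cmath>

struct Point3D { double X, Y, Z; };

// Quantities measured in advance (see the parameter list above).
struct ScannerParams {
    double f;      // focal length (pixels)
    double L;      // laser center to camera center distance
    double D;      // laser center to rotation center distance
    double c;      // offset of the rotation center from the baseline
    double theta;  // laser output angle (radians)
};

// Convert one image measurement (u, v), taken at turntable angle alpha,
// into world coordinates using formulas (2), (4) and (5).
Point3D triangulate(const ScannerParams& p, double u, double v, double alpha)
{
    // Formula (2): horizontal-plane coordinates of the laser point.
    double x = p.D - p.L / (1.0 + (u / p.f) * std::tan(p.theta));
    double y = (p.D - x) * std::tan(p.theta);

    // Formula (4): height from the vertical image coordinate v.
    double z = v * y / p.f;

    // Formula (5): rotate by the turntable angle, including the offset c.
    Point3D w;
    w.X = (y + p.c) * std::sin(alpha) + x * std::cos(alpha);
    w.Y = (y + p.c) * std::cos(alpha) - x * std::sin(alpha);
    w.Z = z;
    return w;
}
```

Repeating this conversion for every extracted laser-stripe pixel in every frame yields the point cloud.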
2.3. System achievement

The system architecture is illustrated in Fig. 4. The system consists of a hardware platform and software. The software, covering calibration and image processing, is introduced in Section 3. The hardware platform consists of a turntable module and a control module. The turntable module is mainly composed of a CMOS camera, an 808 nm line laser, a turntable (including stepper motors, motor drivers, etc.), a chassis and other parts. The control module is mainly composed of an MSP430 processor and its peripheral circuits.
Fig. 4. The system architecture.
The upper computer sends commands through a serial port to the control module. After receiving a command, the MSP430 processor controls the horizontal rotation of the stepper motor in the turntable. When the turntable has rotated by a suitable angle, the line laser projects a vertical laser line onto the target and the CMOS camera photographs it. The CMOS camera then sends the pictures through a USB port to the upper computer, which detects the position of the laser line in the pictures by image processing and computes the distance to the detected target. Through the rotation of the turntable, the device scans its surroundings and builds 3D point cloud data with a certain precision, enabling scene reconstruction.
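The workflow just described can be summarized by a short host-side loop. This is only a sketch: the serial command format, the camera API and the image-processing routine of the prototype are not specified in the paper, so the helper functions below are hypothetical stubs that merely fix the order of operations.

```cpp
#include <vector>

struct Frame {};                                   // placeholder image type
struct ScanPoint { double X, Y, Z; };

void sendRotateCommand(double /*degrees*/) {}      // serial port -> MSP430 (hypothetical)
Frame captureFrame() { return Frame{}; }           // CMOS camera over USB (hypothetical)
std::vector<ScanPoint> processFrame(const Frame&, double /*alphaDeg*/) { return {}; }

// Rotate, photograph, process, accumulate: one pass over the scene.
std::vector<ScanPoint> scanScene(double stepDeg, double totalDeg)
{
    std::vector<ScanPoint> cloud;
    for (double alpha = 0.0; alpha < totalDeg; alpha += stepDeg) {
        sendRotateCommand(stepDeg);                // step the turntable
        Frame img = captureFrame();                // photograph the laser line
        std::vector<ScanPoint> pts = processFrame(img, alpha);
        cloud.insert(cloud.end(), pts.begin(), pts.end());
    }
    return cloud;
}
```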
2.3.1. Turntable module

The line laser is fixed at a certain angle on the left side of the turntable, the camera is fixed on the right side, and both units are equidistant from the rotating shaft. To reduce the impact of ambient light, an 808 nm infrared laser is used: 808 nm lies outside the visible wavelength range, so the laser light can be separated from the visible region. The CMOS camera also has to be adapted to receive the infrared signal. First, the IR cut-off filter of the camera lens is removed; this filter is present in most cameras to block infrared light, and if it were not removed the received infrared signal would be very weak. Second, an 808 nm infrared filter is installed parallel to, and in front of, the camera lens. Visible light is then filtered out and only the infrared light remains, which improves the system's identification of the infrared laser line.
2.3.2. Control module

As shown in Fig. 4, the control module communicates with the turntable over RS232. The stepper motor is driven directly by an M542 driver (Nachuan Technology Co., Ltd.), which adjusts the stepper motor speed and accuracy and does more than merely supply power. The MSP430 MCU is connected to the driver through its I/O ports and controls the driver directly; it is also connected to the computer through serial ports and receives the commands that control the rotation of the turntable. The software completes the image preprocessing and the point cloud rendering, as analyzed in the following sections.
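As an illustration of how the MCU can pulse the M542 through its I/O port, the following C sketch toggles a step signal on two general-purpose pins. The pin assignment (STEP on P1.0, DIR on P1.1) and the pulse timing are assumptions made only for this example; the paper does not give the actual wiring or timing.

```c
#include <msp430.h>

#define STEP_PIN  BIT0   /* assumed: STEP input of the M542 on P1.0 */
#define DIR_PIN   BIT1   /* assumed: DIR input of the M542 on P1.1  */

static void step_motor(unsigned int pulses, int forward)
{
    if (forward) P1OUT |=  DIR_PIN;
    else         P1OUT &= ~DIR_PIN;

    while (pulses--) {
        P1OUT |=  STEP_PIN;       /* rising edge advances one (micro)step */
        __delay_cycles(1000);     /* pulse width / speed control */
        P1OUT &= ~STEP_PIN;
        __delay_cycles(1000);
    }
}

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;     /* stop the watchdog timer */
    P1DIR |= STEP_PIN | DIR_PIN;  /* STEP and DIR as outputs */
    step_motor(200, 1);           /* e.g. one revolution at 1.8 deg/step */
    for (;;) ;                    /* idle */
}
```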
3. Performance test and analysis

3.1. System calibration

3.1.1. Hardware platform calibration

The laser is a sealed device, so the position of the emitting center inside it is not known. Once the system is set up, the position of the laser center has a direct impact on the accuracy of the whole system, so it needs to be calibrated. The calibration principle is shown in Fig. 5: a white wall is chosen as the target, and the laser is aimed perpendicularly at the wall from two different positions. The two triangles in Fig. 5 are similar, so by measuring the lengths s and t of the laser line on the wall, the distance x between the laser emitting center and the front end face of the laser can be worked out. The laser must be translated parallel to itself between the two measurement positions to ensure the accuracy of the calibration, and the mathematical relationship shows that the larger the ratio of t to s, the more accurate the result.

Fig. 5. Calibration of the laser center.
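To make the similar-triangle relation explicit, denote by l1 and l2 the perpendicular distances from the laser's front end face to the wall at the two positions (these symbols are not used in the paper and are introduced only for this sketch; they would be read off the 1D laser rangefinder). Since the fan angle of the line laser is constant,

    s / (x + l1) = t / (x + l2),   so that   x = (s·l2 − t·l1) / (t − s),

which is consistent with the remark that the larger the ratio of t to s (i.e. the larger the separation between the two positions), the more accurate the calibrated value of x.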
In the process of the calibration experiments, the following conditions must be ensured:

(1) the laser line emitted by the laser must remain parallel to the ground;
(2) the laser emitting direction must always be perpendicular to the wall.

During the experiment, a one-dimensional (1D) laser rangefinder and a gradienter are used to make the measurements more accurate. The laser and the gradienter are fixed together; adjusting the gradienter guarantees condition (1), and since the 1D laser rangefinder measures straight-line distances accurately, condition (2) is ensured as long as the distances from the left and right ends of the gradienter to the wall, measured by the 1D rangefinder, are equal.

The camera center can be calibrated in a similar way. When the camera is placed at different distances from the wall, the widths of the camera image window differ, and, analogously to the laser case, it is not difficult to calculate the distance between the image center and the outer end face of the camera. Since the center of the camera has to lie in the same plane as that of the laser, the camera should be placed at a rather long distance during installation. Although this calibration method is inexpensive and yields the two center positions relative to their respective front end faces, it is difficult to achieve a fully satisfactory accuracy: installation errors are inevitable, so the calibrated data are only approximations. The following software calibration therefore helps ensure the accuracy of the experimental measurement data.

3.1.2. Distortion calibration

When a photo is taken, distortion appears because the camera image is warped with respect to the scene. This distortion strongly influences the measurement, so it has to be eliminated in software. The camera calibration toolbox for Matlab from Caltech has been used [18]. We generate and print a checkerboard pattern and paste it onto a flat panel; each square of the pattern is 30 mm × 30 mm. We then acquire 20 images from different angles and distances, save them in a common folder in JPEG format, and run the Matlab calibration tool to obtain the intrinsic parameters used to eliminate the distortion. The intrinsic parameters are:

- Focal length: the focal length in pixels, stored in the 2 × 1 vector fc.
- Principal point: the principal point coordinates, stored in the 2 × 1 vector cc.
- Skew coefficient: the coefficient defining the angle between the x and y pixel axes, stored in the scalar alpha_c.
- Distortions: the image distortion coefficients (radial and tangential), stored in the 5 × 1 vector kc.
The intrinsic parameters are defined as follows. Let P be a point in space with coordinate vector XXc = [Xc; Yc; Zc] in the camera reference frame; its projection on the image plane is obtained from the intrinsic parameters (fc, cc, alpha_c, kc). Let xn be the normalized (pinhole) image projection:

    xn = [Xc/Zc; Yc/Zc] = [x; y].                                     (6)

Let r² = x² + y². After including lens distortion, the new normalized point coordinate xd is defined as

    xd = [xd(1); xd(2)] = (1 + kc(1) r² + kc(2) r⁴ + kc(5) r⁶) xn + dx,   (7)

where dx is the tangential distortion vector:

    dx = [2 kc(3) x y + kc(4)(r² + 2x²);  kc(3)(r² + 2y²) + 2 kc(4) x y].   (8)

The 5-vector kc therefore contains both radial and tangential distortion coefficients (observe that the coefficient of the 6th-order radial distortion term is the 5th entry of kc). Once distortion is applied, the final pixel coordinates x_pixel = [xp; yp] of the projection of P on the image plane are

    xp = fc(1)(xd(1) + alpha_c · xd(2)) + cc(1),
    yp = fc(2) xd(2) + cc(2).                                         (9)

The pixel coordinate vector x_pixel and the normalized (distorted) coordinate vector xd are therefore related by the linear equation

    [xp; yp; 1] = KK [xd(1); xd(2); 1],                               (10)

where KK is the camera matrix, defined as

    KK = [ fc(1)   alpha_c·fc(1)   cc(1)
           0       fc(2)           cc(2)
           0       0               1     ].                           (11)

In Matlab this matrix is stored in the variable KK after calibration. Note that fc(1) and fc(2) are the focal distance (a single value in millimeters) expressed in units of horizontal and vertical pixels; both components of fc are usually very similar. The ratio fc(2)/fc(1), often called the aspect ratio, differs from 1 if the pixels of the CCD array are not square, so the camera model naturally handles non-square pixels. In addition, the coefficient alpha_c encodes the angle between the x and y sensor axes, so pixels are even allowed to be non-rectangular.

Our calibration results are as follows:

    fc = [761.74797; 762.26178] ± [4.13982; 4.25601],
    cc = [296.92665; 271.50956] ± [6.06083; 5.01732],
    kc = [−0.20997; 0.16490; 0.00080; −0.00001; 0.00000],

from which the camera focal length and principal point are obtained. In addition to estimating the intrinsic parameters fc, cc, kc and alpha_c, the toolbox also returns estimates of the uncertainties on those parameters; the uncertainty values quoted above are approximately three times the standard deviations of the estimation errors [10].
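The model of formulas (6)–(9) can be transcribed directly into code. The sketch below follows the toolbox convention used above (kc(1), kc(2), kc(5) radial, kc(3), kc(4) tangential); it is our own illustration, not part of the paper, and it performs forward projection only. Undistorting a measured pixel requires inverting this mapping iteratively, which the toolbox handles internally.

```cpp
#include <cmath>

struct Pixel { double xp, yp; };

// Intrinsic parameters in the toolbox convention (1-based kc(i) becomes kc[i-1]).
struct Intrinsics {
    double fc[2];      // focal length in pixels
    double cc[2];      // principal point
    double alpha_c;    // skew coefficient
    double kc[5];      // radial (1,2,5) and tangential (3,4) distortion
};

// Project a point (Xc, Yc, Zc) given in the camera frame to pixel coordinates,
// applying formulas (6)-(9).
Pixel project(const Intrinsics& in, double Xc, double Yc, double Zc)
{
    // Formula (6): normalized pinhole projection.
    double x = Xc / Zc;
    double y = Yc / Zc;
    double r2 = x * x + y * y;

    // Formula (8): tangential distortion vector dx.
    double dx0 = 2.0 * in.kc[2] * x * y + in.kc[3] * (r2 + 2.0 * x * x);
    double dx1 = in.kc[2] * (r2 + 2.0 * y * y) + 2.0 * in.kc[3] * x * y;

    // Formula (7): radial distortion applied to the normalized point, plus dx.
    double radial = 1.0 + in.kc[0] * r2 + in.kc[1] * r2 * r2 + in.kc[4] * r2 * r2 * r2;
    double xd0 = radial * x + dx0;
    double xd1 = radial * y + dx1;

    // Formula (9): map the distorted normalized point to pixel coordinates.
    Pixel p;
    p.xp = in.fc[0] * (xd0 + in.alpha_c * xd1) + in.cc[0];
    p.yp = in.fc[1] * xd1 + in.cc[1];
    return p;
}
```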
3.2. Image processing

Fig. 6 shows the flow pattern of the software; the most important task is image processing. The system processes images in the following steps:

1. Obtain the original images from the camera.
2. Eliminate distortion using the camera calibration.
3. Convert to a gray-scale image.
4. Reduce the image noise.
5. Extract the laser stripe centerline.
6. Calculate the real distance of the corresponding points through the formulas above.

Fig. 6. Software flow pattern.

Of these steps, image denoising and laser stripe centerline extraction affect the result most, so they are described in detail below.

3.2.1. Image denoising

The digital image obtained from the camera contains a certain amount of noise, which strongly affects the result, so image denoising is very important. Denoising methods can be divided into spatial-domain methods and transform-domain methods. Spatial-domain methods process the pixels of the image plane directly; they generally include neighborhood averaging, spatial low-pass filtering, averaging of multiple images, and median filtering. Transform-domain methods first transform the image data by some mathematical transform, then correct the coefficients in the transform domain, and finally transform the data back into the spatial domain; they generally comprise Fourier-transform and wavelet-transform methods [11]. In view of the system design, the image noise is primarily Gaussian white noise and impulse noise. For Gaussian white noise a Gaussian filter is mainly considered, and for impulse noise a median filter [12,13]. The 3 × 3 Gaussian filter is given by formula (12):

    g(i, j) = { f(i−1, j−1) + f(i−1, j+1) + f(i+1, j−1) + f(i+1, j+1)
                + [f(i−1, j) + f(i+1, j) + f(i, j−1) + f(i, j+1)] × 2
                + f(i, j) × 4 } / 16,                                 (12)

where f(i, j) is the original gray-scale value and g(i, j) is the gray-scale value after filtering.
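A direct transcription of formula (12) is given below; it is a sketch of ours, not code from the paper. The image is assumed to be 8-bit gray, stored row-major, and border pixels are simply copied.

```cpp
#include <cstdint>
#include <vector>

// 3x3 Gaussian smoothing (kernel [1 2 1; 2 4 2; 1 2 1] / 16), formula (12).
void gaussian3x3(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
                 int width, int height)
{
    dst = src;                                   // borders keep their original value
    for (int i = 1; i < height - 1; ++i) {
        for (int j = 1; j < width - 1; ++j) {
            const uint8_t* p = &src[i * width + j];
            int corners = p[-width - 1] + p[-width + 1] + p[width - 1] + p[width + 1];
            int edges   = p[-width] + p[width] + p[-1] + p[1];
            dst[i * width + j] =
                static_cast<uint8_t>((corners + 2 * edges + 4 * p[0]) / 16);
        }
    }
}
```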
Median filtering is given by formula (13):

    Yi = Med{ f(i−v), …, f(i), …, f(i+v) },   i ∈ Z,   v = (m − 1)/2,   (13)

where f(i−v), …, f(i), …, f(i+v) are m values taken successively from the input sequence, i is the center of the window, Med denotes the median operation, and Yi is the filter output.

To verify the effect of the two denoising algorithms, separate and mixed denoising were tested experimentally. First, an image is acquired from the system without denoising; then the following denoising schemes are applied [14]:

Experiment 1: only a Gaussian filter.
Experiment 2: only a median filter.
Experiment 3: a Gaussian filter first and then a median filter.
Experiment 4: a median filter first and then a Gaussian filter.

The results are as follows.

3.2.1.1. Image. As shown in Fig. 7, the images from experiments 3 and 4 are visually almost indistinguishable from the image from experiment 1, so they are omitted.

Fig. 7. (1) Original image, (2) after experiment 1, (3) after experiment 2.

3.2.1.2. Data analysis. As shown in Table 1, the peak signal-to-noise ratio (PSNR) is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation; we use it to measure the quality of the reconstructed image. PSNR is most easily defined via the mean squared error (MSE), which quantifies the difference between the values implied by an estimator and the true values of the quantity being estimated; a higher PSNR generally indicates a higher-quality reconstruction. The signal-to-noise ratio (SNR) is used in imaging as a physical measure of the sensitivity of a (digital or film) imaging system; for example, SNR = 32.04 dB corresponds to excellent image quality and SNR = 20 dB to acceptable image quality [15], so a higher SNR also indicates a higher-quality reconstruction.

Table 1. Various filtering data compared with the original image.

Filter               PSNR      MSE      SNR
Gaussian             42.2681   3.8572   14.7873
Median               44.2890   2.4220   16.8238
Gaussian; Median     42.2605   3.8639   14.7781
Median; Gaussian     42.1048   4.0050   14.6205

From the above PSNR and SNR results, the disturbance in the system is not large, so it is sufficient to consider only the Gaussian filter and the median filter, without resorting to more complex filtering algorithms. Considering the system design and the simulation results, the median filter is chosen for its higher PSNR and SNR, giving the best trade-off between denoising effect and computing speed. Although the improvement is not obvious in the images themselves, the median filter clearly strengthens the edges of the images, as confirmed by the three evaluation measures.
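For reference, the quantities used in Table 1 can be computed as follows. This is a sketch of ours: MSE and PSNR (for 8-bit images, peak value 255) are standard, while the SNR column depends on the exact signal definition adopted by the authors, which the paper does not spell out, so only MSE and PSNR are shown.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Mean squared error between two equally sized 8-bit gray images.
double mse(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b)
{
    double sum = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) {
        double d = static_cast<double>(a[k]) - static_cast<double>(b[k]);
        sum += d * d;
    }
    return sum / static_cast<double>(a.size());
}

// Peak signal-to-noise ratio in dB, with 255 as the maximum gray level.
double psnr(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b)
{
    return 10.0 * std::log10(255.0 * 255.0 / mse(a, b));
}
```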
3.2.2. Laser stripe centerline extraction

Laser stripes are formed on the object by the irradiation of the line laser, and the camera records the image containing them. Through a series of image processing steps and calculations, the coordinates of the 3D point cloud can be obtained. Because the actual line laser has a certain width, and because of the surface characteristics of the irradiated target and other factors, the laser stripes in the image also have a certain width; only after centerline extraction can the 3D point cloud coordinates be calculated exactly. Laser stripe centerline extraction is therefore not only essential for the system but also the key factor affecting its accuracy. Common extraction methods include the extreme value method, the threshold method, the centroid method, the Steger method, Gaussian fitting, and so on [16]. Comparing the principles and characteristics of these methods, the proposed system uses an improved method based on Gaussian fitting, in which the span of the data used for fitting changes with the thickness of the laser stripe.

3.2.2.1. Research on Gaussian fitting. The gray-value profile across a laser stripe approximates a Gaussian distribution [16], so a Gaussian curve can be fitted to the gray values across the stripe, and the stripe centerline is formed by the extreme points of the fitted curves. To reduce the fitting time, the laser stripe is first thinned on the basis of the image gray gradient, which changes the margin of the fitting data. The Gaussian profile is expressed by formula (14):

    f(x) = A e^(−(x − xc)² / (2σ²)),                                  (14)

where A is the peak gray-scale value of the laser stripe, σ is the width of the laser stripe, and xc is the central coordinate of the Gaussian curve. Taking the natural logarithm of both sides of Eq. (14) turns it into the polynomial (15):

    ln f(x) = −(1/(2σ²)) x² + (xc/σ²) x + ln A − xc²/(2σ²).           (15)

Setting

    F(x) = ln f(x),                                                   (16)
    a0 = ln A − xc²/(2σ²),                                            (17)
    a1 = xc/σ²,                                                       (18)
    a2 = −1/(2σ²),                                                    (19)

formula (20) is obtained:

    F(x) = a0 + a1 x + a2 x².                                         (20)
Given n sample points [xi, f(xi)], the corresponding [xi, F(xi)] are calculated from F(x) = ln f(x). According to the principle of least squares, the objective function is

    M = Σ(i=1..n) [ F(xi) − (a0 + a1 xi + a2 xi²) ]².                 (21)

Setting

    ∂M/∂a0 = 0,                                                       (22)
    ∂M/∂a1 = 0,                                                       (23)
    ∂M/∂a2 = 0,                                                       (24)

and rearranging these equations gives the normal equations (25), in which all sums run over i = 1..n:

    [ n       Σxi      Σxi²  ] [a0]   [ ΣF(xi)      ]
    [ Σxi     Σxi²     Σxi³  ] [a1] = [ Σxi F(xi)   ]                 (25)
    [ Σxi²    Σxi³     Σxi⁴  ] [a2]   [ Σxi² F(xi)  ]

Because the coefficient matrix of Eq. (25) is positive definite, the solution [a0, a1, a2] can be computed by the Householder transformation, and the laser stripe center xc follows from formula (26):

    xc = −a1 / (2 a2).                                                (26)
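The whole fit reduces to a few dozen lines of code. The sketch below (ours, not from the paper) accumulates the sums appearing in Eq. (25), solves the 3 × 3 system and returns xc from Eq. (26); a plain Gaussian elimination is used instead of the Householder transformation, which gives the same solution for this small, well-conditioned system.

```cpp
#include <cmath>
#include <vector>

// Fit ln f(x) = a0 + a1*x + a2*x^2 to one cross-section of the laser stripe
// (formulas (14)-(25)) and return the stripe center xc = -a1/(2*a2), Eq. (26).
// x holds pixel positions, f the corresponding gray values (must be > 0).
double fitStripeCenter(const std::vector<double>& x, const std::vector<double>& f)
{
    double S[5] = {0, 0, 0, 0, 0};   // sums of x^0 .. x^4
    double T[3] = {0, 0, 0};         // sums of x^k * ln f(x), k = 0..2
    for (std::size_t i = 0; i < x.size(); ++i) {
        double F = std::log(f[i]);
        double p = 1.0;
        for (int k = 0; k < 5; ++k) {
            S[k] += p;
            if (k < 3) T[k] += p * F;
            p *= x[i];
        }
    }

    // Normal equations (25), written as an augmented 3x4 matrix.
    double A[3][4] = {
        { S[0], S[1], S[2], T[0] },
        { S[1], S[2], S[3], T[1] },
        { S[2], S[3], S[4], T[2] },
    };
    for (int c = 0; c < 3; ++c)                  // forward elimination
        for (int r = c + 1; r < 3; ++r) {
            double m = A[r][c] / A[c][c];
            for (int k = c; k < 4; ++k) A[r][k] -= m * A[c][k];
        }
    double a[3];
    for (int r = 2; r >= 0; --r) {               // back substitution
        a[r] = A[r][3];
        for (int k = r + 1; k < 3; ++k) a[r] -= A[r][k] * a[k];
        a[r] /= A[r][r];
    }
    return -a[1] / (2.0 * a[2]);                 // formula (26)
}
```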
3.2.2.2. The principle of the varied boundary. Because laser stripes have a certain width, fitting without first thinning them means more fitting data and therefore more computation time; a wide laser stripe also introduces errors into the centerline extraction and hence into the final result. It is therefore necessary to thin the laser stripes and adapt the margin of the fitting data. In this paper the boundaries of the laser stripes are determined from the gray gradient [17]. After denoising, the noise has been reduced to a very small range, so the noise-induced fluctuations of the gray values across the stripe are small enough that their impact on the gray gradient can be ignored, while the main image information is retained. Since the laser is much brighter than the background, the gray value changes sharply at the stripe. Therefore, as shown in Fig. 8, a threshold Gbase, chosen experimentally, is used to reduce the amount of computation: the background occupies almost the whole image and its gray value is very low, so most of it need not be considered. Only where the gray value exceeds Gbase is the gray gradient computed, as the difference of the gray values of adjacent pixels in the same row. Since the intensity profile across a vertical laser stripe approximates a Gaussian distribution, for each row the two points with the extreme gray gradients are selected as the boundary, and the data between them are used for fitting, so the boundary varies with the stripe width.

Fig. 8. Principle of varied boundary.

3.2.2.3. Laser stripe center extraction algorithm. Combining the principle of Gaussian fitting with the principle of the varied boundary, the algorithm proceeds as follows (a code sketch is given after the list):

1. For the pixels of one row, compute the gray gradient wherever the gray value is greater than Gbase; otherwise do nothing.
2. Find the two points at which the gray gradient reaches its maximum and minimum; the gray values between them are used for fitting.
3. Using formulas (14)–(26), calculate the central position of the laser stripe in this row.
4. Proceed to the next row.
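The sketch below (ours, not the authors' code) shows how steps 1–4 might look for a single image row; it assumes the fitStripeCenter() function from the previous sketch and an experimentally chosen threshold Gbase, both supplied by the caller.

```cpp
#include <cstdint>
#include <vector>

double fitStripeCenter(const std::vector<double>& x, const std::vector<double>& f);

// Steps 1-4 for one row: threshold by Gbase, find the varied boundary from
// the extreme gray gradients, then Gaussian-fit the pixels between them.
// Returns a negative value when no stripe is found in the row.
double extractRowCenter(const uint8_t* row, int width, int Gbase)
{
    int left = -1, right = -1;
    int maxGrad = 0, minGrad = 0;
    for (int j = 1; j < width; ++j) {
        if (row[j] <= Gbase && row[j - 1] <= Gbase) continue;   // skip background
        int grad = static_cast<int>(row[j]) - static_cast<int>(row[j - 1]);
        if (grad > maxGrad) { maxGrad = grad; left = j; }       // rising edge
        if (grad < minGrad) { minGrad = grad; right = j - 1; }  // falling edge
    }
    if (left < 0 || right <= left) return -1.0;

    std::vector<double> xs, fs;
    for (int j = left; j <= right; ++j) {
        xs.push_back(static_cast<double>(j));
        fs.push_back(static_cast<double>(row[j]) + 1.0);        // +1 avoids log(0)
    }
    return fitStripeCenter(xs, fs);                             // formula (26)
}
```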
3.2.3. Software refactoring

Once the 3D point cloud data have been obtained, they are rendered by the accompanying software, which is written in C and C++ in VS2008. It mainly uses OpenGL as the graphics API to render the point cloud and to translate, rotate and scale the whole 3D point cloud. Irrlicht, a real-time 3D engine, is used to let the viewing camera move freely in the window, so that the operator can observe the point cloud data from a first-person perspective in any direction, which makes human–computer interaction and the display of the point cloud data convenient.

4. Experiments of the system

Many experiments have been carried out with the prototype shown in Fig. 9, from which the average running time, the effective measuring range and the accuracy were obtained.

4.1. Average running time

Averaged over 100 tests, a 120° rotation takes 120 s and the image processing takes 100 s. Of course, the images can be processed while the turntable is rotating.
Fig. 9. The 3D laser scanner prototype.
4.2. Effective measuring range and accuracy

Because of the measuring principle and the limitations of the device, the measuring range and the accuracy are closely related: within a certain range, the greater the distance, the greater the error. The experimental results show that the measuring range is 40 cm to 7 m. To assess the accuracy, a 1D laser rangefinder with an accuracy of 1.5 mm and a measuring range of 50 m was used as the reference calibration tool. Markers were placed in the test scene and the data from the system were compared with those from the calibration tool; as shown in Fig. 10, the measured points are the ends of the marker rods. With 100 tests in each distance range, the results in Table 2 were obtained.

Fig. 10. The accuracy test of the system.

Table 2. Measurement accuracy.

Measuring distance    Measurement error
40 cm–1 m             5 mm
1–3 m                 20 mm
3–5 m                 30 mm
5–7 m                 50 mm

5. Conclusions and future work

In this paper, we propose a low-cost 3D laser scanner design. The line laser projects a laser line, the CMOS camera captures images, and 3D data are obtained through image recognition and decoding. The paper describes the basic principles of range finding, the system construction, and the key techniques for rendering the 3D point cloud data. The experiments show that the system runs stably and reliably, the control techniques are relatively simple, and the hardware platform costs only about $350. It is suitable for indoor scene reconstruction and can serve the purpose of computer recognition. Future work will optimize the algorithms to improve the reconstruction quality and further reduce the volume of the prototype.

Acknowledgment

This research was supported by the National High Technology Research and Development Program of China under research project No. 2011AA040902. All support is gratefully acknowledged.

References
[1] M.Y. Kim, H. Cho, An active trinocular vision system of sensing indoor navigation environment for mobile robots, Sens. Actuators A: Phys. 125 (2006) 192–209. [2] G. Fu, A. Menciassi, P. Dario, Development of a low-cost active 3D triangulation laser scanner for indoor navigation of miniature mobile robots, Robot. Auton. Syst. 60 (2012) 1317–1326. [3] M. Muller, H. Surmann, K. Pervolz, S. May, The accuracy of 6D SLAM using the AIS 3D laser scanner, in: 2006 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 2006, pp. 389–394. [4] P. Biber, H. Andreasson, T. Duckett, A. Schilling, 3D modeling of indoor environments by a mobile robot with a laser scanner and panoramic camera, in: Proceedings 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004 (IROS 2004), vol. 3434, 2004, pp. 3430–3435. [5] I. Moorthy, J.R. Miller, J.A.J. Berni, P. Zarco-Tejada, B. Hu, J. Chen, Field characterization of olive (Olea europaea L.) tree crown architecture using terrestrial laser scanning data, Agric. For. Meteorol. 151 (2011) 204–214. [6] C.H. Chen, A.C. Kak, Modeling and calibration of a structured light scanner for 3D robot vision, in: Proceedings 1987 IEEE International Conference on Robotics and Automation, 1987, pp. 807–815. [7] I.D. Reid, Projective calibration of a laser-stripe range finder, Image Vis. Comput. 14 (1996) 659–666. [8] F. Zhou, G. Zhang, Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations, Image Vis. Comput. 23 (2005) 59–67. [9] K. Yamauchi, H. Saito, Y. Sato, Calibration of a structured light system by observing planar object from unknown viewpoints, in: 19th International Conference on Pattern Recognition, ICPR 2008, 2008, pp. 1–4. [10] A.L. Reyes, J.M. Cervantes, N.C. Gutiérrez, Low cost 3D scanner by means of a 1D optical distance sensor, Procedia Technol. 7 (2013) 223–230. [11] A. Buades, B. Coll, J. Morel, A review of image denoising algorithms, with a new one, Multiscale Model. Simul. 4 (2005) 490–530. [12] S.G. Chang, Y. Bin, M. Vetterli, Adaptive wavelet thresholding for image denoising and compression, IEEE Trans. Image Process. 9 (2000) 1532–1546. [13] G. Deng, L.W. Cahill, An adaptive Gaussian filter for noise reduction and edge detection, in: 1993 IEEE Conference Record Nuclear Science Symposium and Medical Imaging Conference, vol. 1613, 1993, pp. 1615–1619. [14] C. Sahin, A. Alkis, B. Ergun, S. Kulur, F. Batuk, A. Kilic, Producing 3D city model with the combined photogrammetric and laser scanner data in the example of Taksim Cumhuriyet square, Opt. Lasers Eng. 50 (2012) 1844–1853. [15] ISO 12232, 1997 Photography – Electronic Still Picture Cameras – Determining ISO Speed here, 1997. [16] G. Shiyi, Y. Kaizhen, Research on central position extraction of laser strip based on varied-boundary Gaussian fitting, Chin. J. Sci. Instrum. 32 (2011) 1132–1137. [17] J.-W. Wang, C.-M. Du, Three-dimensional laser scanning image processing algorithms to mend, Comput Eng. Des. 31 (2010) 3929–3931. [18] http://www.vision.caltech.edu