A calibration and range-data extraction algorithm for an omnidirectional laser range finder of a free-ranging mobile robot


Mechatronics Vol. 6, No. 6, pp. 665-689, 1996. Copyright © 1996 Elsevier Science Ltd. All rights reserved. Printed in Great Britain. 0957-4158/96 $15.00+0.00


PII: S0957-4158(96)00013-X

A CALIBRATION AND RANGE-DATA EXTRACTION ALGORITHM FOR AN OMNIDIRECTIONAL LASER RANGE FINDER OF A FREE-RANGING MOBILE ROBOT

Y. Y. CHA* and D. G. GWEON†

*Department of Mechanical Design Engineering, College of Engineering, Won-Kwang University, 344-2 Shinyong-Dong, Iksan-Shi, Jeon-Buk 570-749, Korea and †Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 373-1 Kusong-Dong, Yusong-Ku, Taejon 305-701, Korea

(Received 15 June 1995; revised 9 January 1996; accepted 30 January 1996)

Abstract -- A generalized calibration method and an exact 3D range data extraction algorithm for an omnidirectional laser range finder are described. Firstly, a simple 3D range data extraction is achieved by ignoring the camera's lens distortion and scale factor, using the focal length as it is given by the manufacturer, and introducing some assumptions. The simple extracting equations roughly transform a point on a 2D image plane into a position in a 3D spatial coordinate system. Secondly, the intrinsic parameters of the CCD camera are obtained by an existing camera calibration method based on a pin-hole camera model. Thirdly, the exact design constant of the laser slit emitting device and the extrinsic parameters of the CCD camera are simultaneously obtained by laser range finder calibration, whereby the previously obtained simple 3D range data extraction equations are modified exactly. Consequently, the simple 3D range data calculation equations can be expanded to the exact ones by using the parameters obtained by camera and laser slit emitting device calibration. Finally, the calibration parameters and the exact 3D range data of an omnidirectional laser range finder are obtained by experiment and the errors of the measurement distances are calculated. Copyright © 1996 Elsevier Science Ltd.

NOMENCLATURE

du, dv        distance in the U and V directions between adjacent pixels on the CCD array
k1, k2        lens distortion constants in the radial direction
lA            height from the ground to the 3rd mirror
lB            distance from the CCD camera's lens center to the 3rd mirror
l'A           modified height from the ground to the 3rd mirror
l'B           modified distance from the CCD camera's lens center to the 3rd mirror
lF            focal distance of the CCD camera
Ncu           pixel number in the U direction on the CCD array
Nfu           discrete pixel number in the U direction stored in the computer frame memory
R             rotation matrix
Su            scale factor
T             translation vector
[X, Y, Z]     robot coordinate system fixed at the center of the mobile robot
[X', Y', Z']  sensor coordinate system of the laser range finder
[xw, yw, zw]  3D world coordinate system
[u, v, w]     3D camera coordinate system
[uu, vu]      coordinate system for the idealistic pin-hole camera model
[ud, vd]      coordinate system considering the real lens distortion of the camera
[uf, vf]      coordinate system for the discrete element in the computer frame memory
θV            CCD camera view angle; angle between the W-axis and the ground
θB            laser beam angle; angle between the laser slit and the ground
θR            rotation angle of the X'-axis on the X'Y' plane with respect to the X-axis on the XY plane

1. INTRODUCTION

Since the laser range finder can obtain an object's 3D data directly, it has a number of applications in many fields [1]. Its basic principle is to obtain the object's range data with respect to the reference coordinate system by applying triangulation to the image information gathered by a camera fixed at a certain distance, while the laser beam, slits or stripes are projected onto the object. The laser range finder is calibrated by a far more complicated process than that for a general camera, because the laser scanning device as well as the CCD camera itself must be calibrated. For the case of general camera calibration only, there are two types of camera model: the pin-hole model and the two-plane model. The pin-hole model [2-4] is based on the identification of the intrinsic/extrinsic camera parameters. The intrinsic camera parameters are the focal length, the scale factor, and the lens distortion coefficient, while the extrinsic camera parameters are the rotation matrix and the translation vector between the camera coordinate system and the reference coordinate system. Sobel [2] introduced camera calibration methods which cover the solution of the nonlinear equations, where extrinsic camera parameters such as pan and tilt angle were obtained as well as the intrinsic camera parameters. He assumed the pin-hole model and obtained the solution by using a nonlinear optimization method. Nevertheless, this camera calibration method ignored the lens distortion, and the system is affected by the selection of the initial value for the optimization. Tsai [3] introduced 4th-order modeling of the lens distortion and a good method to calculate the initial value for the optimization, and assumed that the image change due to lens distortion exists only in the radial direction, an improvement on Sobel's nonlinear method. Yakimovsky [4] proposed another calibration method using the pin-hole camera model.
He used the parameter combinations of single variables to simulate the linear system, where all the parameters were not linearly independent and the lens distortion was not modeled. The second type of camera model is the two-plane model [5-7]. This method does not require previous camera parameter information, but it is necessary to solve the correspondence problem between the points on the two parallel calibration blocks and the scanned points on the image plane. Martins et al. [5] proposed a camera calibration method using the two-plane model, which measured the calibration data of pixels projected onto the image plane and the other data were obtained from interpolation. The 3D data calculated from the image information were obtained from


the vector calculations covering the interpolated points on the calibration plane, where the two-plane method covered the back-projection issue only. Isaguirre et al. [6] expanded the two-plane method by including the camera location and orientation in the calibration, using an iterative approach based on Kalman filters. In contrast, Champleboux [8] has proposed a generalized calibration algorithm for range imaging sensors which uses an N-plane B-spline model and a two-plane model for camera calibration. As noted by Champleboux [8], it is difficult to find papers which describe calibration methods for the laser range finder. The following three approaches exist. The first assumes that the normal vector of the laser slit plane is orthogonal to the normal vector of the image plane [9]; hence a number of traditional laser range finders were adjusted mechanically to meet this assumption. The second neglects the laser range finder calibration and obtains the object's 3D position by simple calculation [10-12]. The third assumes that either the laser slit plane or the normal vector of the image plane of the CCD camera is parallel with the ground [10, 13]. A generalized calibration method for the laser range finder, however, need not be restricted to these three approaches, and neither is the laser range finder proposed here. This study covers a generalized calibration method and an exact 3D range data extraction algorithm for an omnidirectional laser range finder. These are useful for acquiring the 3D range data and determining the design constants of a general laser range finder. This is accomplished by the following procedure. Firstly, a simple 3D range data extraction is achieved by ignoring the camera's lens distortion and scale factor, using the focal length as it is given by the manufacturer, and introducing some assumptions.
The simple extracting equations roughly transform a point on the 2D image plane into a position in the 3D spatial coordinate system. Secondly, the intrinsic parameters of the CCD camera such as focal length, scale factor and lens distortion factors are obtained by an existing camera calibration method [3] based on a pin-hole camera model. Thirdly, the exact design constant of the laser slit emitting device, and the position and orientation parameter (extrinsic parameter) of the CCD camera are obtained by laser range finder calibration simultaneously, where the previously obtained simple 3D range data extracting equations are modified exactly. Consequently, the simple 3D range data calculation equations can be expanded to the exact one by using the parameters obtained by camera and laser slit emitting device calibration. The results are used in our omnidirectional laser range finder [14, 15], which is applicable to previously developed mobile robots [16, 17]. Finally, the calibration parameters and the exact 3D range data of the omnidirectional laser range finder are obtained by real experiment and the errors on the measurement distances are calculated.

2. LASER RANGE FINDER

2.1. Structure

Figure 1 shows the free-ranging mobile robot "METRO-I" (MEchaTROnics-I) [16] and the omnidirectional laser range finder [15] mounted on its body. Figure 2 shows the structure of the omnidirectional laser range finder. It consists of upper and lower


Fig. 1. Mobile robot "METRO-I" with laser range finder.

cylinders. In the upper cylinder are mounted the CCD camera, the 3rd mirror rotating device and the cylindrical lens set, and the upper cylinder rotating device is mounted on the lower cylinder. The first and second mirrors turn the beam emitted from the He-Ne laser source through a 90° angle. The laser beam is changed to a slit beam through the cylindrical lens set. The slit beam is bent in the forward direction by the 3rd mirror, which is rotated by the mirror rotating motor and the bevel gear. The reflected laser beam from the object's surface is detected at the CCD element, on which an optical band-pass filter is mounted to remove optical noise. Consequently, the emitted slit laser beam can be rotated horizontally with the CCD camera and tilted vertically using the upper cylinder rotating motor and the 3rd mirror rotating motor. Therefore, the laser range finder is of an omnidirectional type. To identify the initial location of these motors, the zero-point plates are fixed to the bevel gear and spur gear linked to the 3rd mirror rotating motor and the upper


[Figure 2 labels: CCD camera, band-pass filter, CCD array, upper cylinder, 3rd mirror, mirror-rotating motor, zero-point plates, cylindrical lens set, spur gear, lower cylinder, cylinder-rotating motor, 1st mirror, 2nd mirror, He-Ne laser.]

Fig. 2. Structure of laser range finder.

cylinder rotating motor respectively. Figure 3 shows the ranging principle of the laser range finder.

2.2. Simple calculation of 3D location

Figure 4 shows the coordinate systems used to derive the simple 3D location calculation equation from the laser slits reflected from the objects. There are three coordinate systems: the robot coordinate system [X, Y, Z], the sensor coordinate system [X', Y', Z'], and the camera coordinate system [U, V, W]. The robot coordinate system [X, Y, Z] is fixed at the center of the bottom of the mobile robot and moves along with the robot in the working plane. The X-axis of this coordinate system is the robot's heading direction, while the Z-axis is vertical to the working plane. The origin of the sensor coordinate system [X', Y', Z'] is the same as that of the robot coordinate system, and the Z'-axis is the same as the Z-axis of the robot coordinate system. The X'-axis on the X'Y' plane rotates relative to the X-axis on the XY plane of the robot coordinate system, and the relative rotating angle is θR. Point A is the center of the 3rd mirror. The slit laser beam emanates from point A, and the laser plane containing the slit rotates with the X'Y' plane.


Fig. 3. Ranging principle of laser range finder.

The height of the 3rd mirror (the distance from point O to point A) is lA. The beam angle θB is that between the robot working plane and the laser plane. The W-axis of the camera coordinate system [U, V, W] is in the same direction as the camera's view. The camera's view angle θV is the angle between the W-axis and the robot working plane. The camera lens' focal point and the center of the 3rd mirror are assumed to be located on the Z-axis, and the lens focal point B is at a distance lB from the center of the 3rd mirror. lF is the focal length of the CCD camera lens. The focal plane is the image plane, where the slit laser beams reflected from the object are captured. To derive the simple location calculation equation, we assume that the lens distortion and the scale factor of the CCD camera can be ignored, and that the value of the camera's focal length lF is used as it is given by the manufacturer. Also, we assume that the center of the CCD camera lens (focal point) and the center of the 3rd mirror are located at the points B and A on the Z'-axis respectively, while the CCD element is located on the lens focal plane (UV plane), i.e. the image plane. Finally, we assume that the design constants such as lA and lB are already known. The U-axis value on the image plane represents the Y'-axis value of the reflected slit laser beam, while the V-axis value provides the range information from the robot center to the object which reflects the beam. As shown in Fig. 4, assuming θR = 0, the transform equation from point C on the image plane [U, V] to point C' in the sensor coordinate system [X', Y', Z'] can be derived as follows [15]. The first equation is obtained from triangles ΔALO and ΔD'LD'':


Fig. 4. Coordinate systems of laser range finder.

Z' = lA(1 − X' tan θB / lA).   (1)

The second equation is obtained from triangles ΔBEF and ΔBE'F':

Y'/X' = −U/(lF cos θV).   (2)

The final relation is obtained from triangles ΔBD'H and ΔBDI:

X' = [(lA + lB) − Z']·(1 + tan θV·V/lF)/(tan θV − V/lF).   (3)

As a result, we obtain the formulae to calculate the simply extracted 3D position data in the sensor coordinate system from the image data in the camera coordinate system by rearranging Eqns (1), (2) and (3):

X' = lB / [(tan θV − V/lF)/(1 + tan θV·V/lF) − tan θB]   (4)

Y' = −X'·U/(lF cos θV)   (5)

Z' = lA(1 − X' tan θB / lA).   (6)

Since θR is assumed to be 0, the 3D range data measured in the sensor coordinate system [X', Y', Z'] are the same as those in the robot coordinate system [X, Y, Z], i.e. X = X', Y = Y' and Z = Z'. For θR ≠ 0, the 3D range data measured in the sensor coordinate system [X', Y', Z'] must be transformed into the robot coordinate system [X, Y, Z], since the object's 3D range data in the robot coordinate system are needed for navigation of the free-ranging mobile robot. They are obtained from the following simple transformation:

[X]   [ cos θR   sin θR   0] [X']
[Y] = [−sin θR   cos θR   0] [Y'] .   (7)
[Z]   [   0        0      1] [Z']

If the laser plane is parallel with the working plane, that is θB = 0, Eqns (4), (5) and (6) simplify as follows:

X' = lB(1 + tan θV·V/lF)/(tan θV − V/lF)   (8)

Y' = −X'·U/(lF cos θV)   (9)

Z' = lA.   (10)

These results are equal to the equations obtained by Yuta et al. [10].
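Equations (4)-(7) can be collected into one short routine. The following is a minimal sketch (the function name, argument names and units are ours, not the paper's) mapping an image point (U, V) to robot coordinates under the simple model:

```python
import math

def simple_range(U, V, l_A, l_B, l_F, theta_V, theta_B, theta_R=0.0):
    """Map an image point (U, V) to robot coordinates via Eqns (4)-(7).
    All lengths share one unit (e.g. mm); angles are in radians."""
    t_v = math.tan(theta_V)
    # Eqn (4): ray depression relative to the camera axis, offset by slit tilt
    x = l_B / ((t_v - V / l_F) / (1.0 + t_v * V / l_F) - math.tan(theta_B))
    # Eqn (5): lateral position from the horizontal image coordinate U
    y = -x * U / (l_F * math.cos(theta_V))
    # Eqn (6): height of the point on the laser plane
    z = l_A - x * math.tan(theta_B)
    # Eqn (7): rotate into the robot frame when theta_R is non-zero
    c, s = math.cos(theta_R), math.sin(theta_R)
    return (c * x + s * y, -s * x + c * y, z)
```

For θB = 0 and V = 0 the result reduces to X' = lB/tan θV and Z' = lA, as Eqns (8)-(10) require.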

2.3. Image processing

We can obtain the range data from the image in the image plane of the CCD camera. The image processing consists of the following five steps.

(1) Capturing an image in which the reflected beam is stored.
(2) Reading the gray-level intensity I(u, v) of each pixel.
(3) Extracting the pixel with the maximum intensity in each column, i.e.

vmax(u) = max over v of I(u, v).

(4) Removing noise by a distance test, i.e. if |vmax(u) − vmax(u − 1)| < ε1 or |vmax(u) − vmax(u + 1)| < ε1 (where ε1 is a given finite value), then vmax(u) survives; else vmax(u) is removed.

(5) Calculating the range data R(x, y) by the triangulation method with the remaining pixels [using Eqns (4), (5) and (6)].
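Steps (2)-(4) can be sketched as follows (a hypothetical helper of our own; the paper's implementation ran on the frame-grabber hardware described in section 5):

```python
def extract_slit_pixels(image, eps=3):
    """Per image column u, keep the row v of peak intensity (step 3), then
    drop peaks not continuous with either neighbouring column (step 4).
    `image` is a list of rows of gray levels, indexed as image[v][u]."""
    n_rows, n_cols = len(image), len(image[0])
    vmax = [max(range(n_rows), key=lambda v: image[v][u]) for u in range(n_cols)]
    survivors = {}
    for u in range(1, n_cols - 1):
        if abs(vmax[u] - vmax[u - 1]) < eps or abs(vmax[u] - vmax[u + 1]) < eps:
            survivors[u] = vmax[u]
    return survivors
```

The surviving (u, v) pairs would then be fed to the triangulation equations of step (5).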

3. LASER SLIT GENERATION

3.1. Cylindrical lens optics

Cylindrical lenses are used in applications requiring magnification in one dimension only, such as transforming a point image into a line image or changing the height of an image without changing its width, or vice versa. Typical applications include slit and line detector array illumination. The considerations regarding aberrations and conjugate ratios that apply to plano-convex and plano-concave spherical lenses also apply to cylindrical lenses, though the latter image in only one (instead of two) dimensions. This is because the cylindrical lens surface profiles are circular, just as the meridional profiles of spherical lenses are. Cylindrical lenses are either plano-convex or plano-concave in form and rectangular in shape. Negative plano-cylindrical lenses are used in this laser emitting device [18]. A lens combination is used in order to produce a wide slit beam over a short distance. We calculate the combination focal length (equivalent focal length, EFL) feq as follows:

1/feq = 1/f1 + 1/f2 − d/(f1·f2),   (11)

where f1 is the focal length of the first element, f2 the focal length of the second element, and d the distance from the secondary principal point of the first element to the primary principal point of the second element. Here, the distance d can be calculated as follows:

d = (1/n)[(fb − f)1 + (fb − f)2],   (12)

where n is the design index and fb is the back focal length. The Gaussian form of the lens formula is

1/f = 1/i + 1/o,   (13)

where i (image) and o (object) are the conjugate distances.
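Eqns (11) and (13) can be sketched directly (illustrative helper names of our own):

```python
def combined_focal_length(f1, f2, d):
    """Eqn (11): equivalent focal length of two thin elements whose
    principal points are separated by d."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

def image_distance(f, o):
    """Eqn (13) rearranged for the image conjugate: i = o*f/(o - f)."""
    return o * f / (o - f)
```

For d = 0 the combination reduces to the familiar f1·f2/(f1 + f2) of two thin lenses in contact.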


3.2. Laser slit optics

The cylindrical lens set used to make the laser slit beam is shown in Fig. 5, and the beam diverging angles are calculated from the imaging properties of the lens set geometry in Fig. 6. The lens set consists of two pairs; the first and the second lens sets are each made up of two cylindrical lenses. In the first lens set, the beam diverging angle is calculated as follows:

θ1 = tan⁻¹[Dlaser/(2·feq1)],   (14)

because i = ∞ and o = feq1. In the second lens set, the beam diverging angle is calculated as follows:

θ2 = tan⁻¹[D'/(2·i)],   (15)

where

i = o·feq2/(o − feq2)   (16)

D' = [(feq1 + lm)/feq1]·Dlaser,   (17)

because o = lm + feq1.

Fig. 5. Cylindrical lens set.

Fig. 6. Laser slit optics.

Finally, the laser slit width at the ground is calculated as follows:

D = D'·(feq2 + l)/feq2 = [(feq1 + lm)(feq2 + l)/(feq1·feq2)]·Dlaser.   (18)

Table 1 shows the calculation results of each parameter, and the relation between lens set distance lm and laser slit width D is shown in Table 2 (Dlaser = 0.8 mm, l = 3000 mm).

4. CALIBRATION OF LASER RANGE FINDER

To derive the above-mentioned simple 3D location calculation equation, the following assumptions are used.

(1) Lens distortion is ignored.
(2) The scale factor on the horizontal axis of the image plane is ignored.

Table 1. Cylindrical lens parameters

Parameter                                           Value
Equivalent focal length of lens set 1, feq1 (mm)    -2.355
Equivalent focal length of lens set 2, feq2 (mm)    -6.693
Distance of lens set 1, d1 (mm)                      4.420
Distance of lens set 2, d2 (mm)                      4.354
Design index, n                                      1.516


Table 2. Relation between lens set distance lm and laser slit width D

lm (mm)   θ1 (°)   θ2 (°)   D (mm) (l = 3000 mm)
15        7.18     15.14    2648.45
20        7.18     21.68    3411.47
30        7.18     33.07    4937.51
40        7.18     42.14    6463.56
50        7.18     49.20    7989.60

(3) The value of the CCD camera's focal length as given by the manufacturer is used.
(4) The focal point of the CCD camera lies on the Z'-axis.
(5) The design constants lA and lB of the laser range finder are already known.

Although these assumptions suffice to obtain simple 3D range data from the non-calibrated laser range finder, the 3D data obtained in this manner show many deviations because of the differences between the real system and the simplified one. Therefore the laser range finder must be calibrated to obtain the exact 3D range data for the object. The calibration of the laser range finder includes the calibration of the laser slit scanning device as well as of the camera itself, so that it is far more complicated than the usual camera calibration. The proposed calibration method of the laser range finder is as follows. Firstly, as mentioned in section 2.2, the simple 3D range data extraction is achieved under several assumptions. Secondly, the intrinsic parameters of the CCD camera such as focal length, scale factor and lens distortion factors are obtained by an existing camera calibration method [3] based on a pin-hole camera model. Thirdly, the exact design constant of the laser slit emitting device, and the extrinsic parameters of the CCD camera such as position and orientation, are obtained simultaneously by laser range finder calibration. Consequently, the simple 3D range data calculation equations can be expanded to the exact ones by using the parameters obtained by camera and laser slit emitting device calibration. Of the previously mentioned assumptions, the first, second and third items are generalized by the CCD camera's calibration, while the fourth and fifth items are generalized in the calibration of the laser range finder.

4.1. CCD camera calibration

Exact camera calibration is a difficult job which requires experts in the field and expensive tools. The difficulties come from the nonlinearity of the lens distortion. Geometric camera calibration is generally split into perspective projection problems and back-projection problems; the former are related to computer graphics and the latter to computer vision. Computer vision and graphics deal with mapping points between the 3D real space and the 2D image plane. Computer graphics determines which points of the 3D object transform to which points on the image plane, while in computer vision the points in the 2D image plane are used to obtain the real object's 3D data. In either case, the mapping between the 3D world coordinate system and the 2D image coordinate system must be known.


In this section, the intrinsic parameters of the CCD camera, such as focal length, scale factor and lens distortion factors, are obtained by an existing camera calibration method [3] based on the pin-hole camera model. Here six coordinate systems are used: [xw, yw, zw] for the 3D world coordinate system, [u, v, w] for the 3D camera coordinate system, [Ui, Vi] for the 2D image coordinate system, [Uu, Vu] for the coordinate system of the idealistic pin-hole camera model without counting the lens distortion, [Ud, Vd] for the coordinate system of the camera model counting the lens distortion, and [Uf, Vf] for the discrete image coordinate system whose origin point in the computer frame memory is (Cu, Cv). Generally the camera parameters are involved in transforming a point P in the 3D world coordinate system into the 2D discrete image coordinate system in the computer frame memory. These transformations consist of four stages, as follows.

(1) Transform the 3D world coordinate system into the 3D camera coordinate system

[u]     [xw]
[v] = R·[yw] + T,   (19)
[w]     [zw]

where

    [r1 r2 r3]       [Tu]
R = [r4 r5 r6],  T = [Tv] .
    [r7 r8 r9]       [Tw]

(2) Transform the 3D camera coordinate system, assuming the geometric perspective projection of the pin-hole camera, into the idealistic 2D undistorted image coordinate system

Uu = lf·u/w   (20)

Vu = lf·v/w.   (21)

(3) Transform the 2D undistorted image coordinate system, counting the lens distortion in the radial direction, into the distorted image coordinate system

Ud + Ud(k1·r² + k2·r⁴) = Uu   (22)

Vd + Vd(k1·r² + k2·r⁴) = Vu,   (23)

where

r² = Ud² + Vd²   (24)

and k1 and k2 are the lens distortion factors in the radial direction.

(4) Transform the 2D distorted image coordinate system into the image coordinate system in the computer frame memory

Uf = Su·Ud/d'u + Cu   (25)

Vf = Vd/dv + Cv,   (26)

where d'u = du·Ncu/Nfu. du and dv are the distances in the U and V directions between adjacent pixels in the camera's CCD array, Su is the scale factor on the horizontal axis of the image plane, Ncu is the number of pixels in the U direction, and Nfu is the number of discrete pixels in the U direction in the computer frame memory. Hence the parameters to be obtained are the camera rotation matrix R, the translation vector T, the effective focal length lf, the lens distortion factors in the radial direction k1 and k2, and the scale factor on the horizontal axis of the image plane Su. For the lens distortion factor in the radial direction we can ignore the second-order term, so let k = k1. Based on the measurement result that "the image change due to the lens distortion exists in the radial direction only" [19], the camera parameters can be obtained from a two-step linear and nonlinear calculation. Calibration blocks with many data points are used for the camera calibration. The linear algebra needs more than seven data points, while the nonlinear calculation needs more than two. Practically, to obtain stable solutions from the pseudo-inverse equation, far more data points are needed, so two planes with more than 30 data points are used. From the camera calibration, the CCD camera intrinsic parameters can be obtained.
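The four stages can be sketched as a forward projection (our own illustrative code; the calibration procedure inverts this mapping to recover the parameters). Note that Eqns (22)-(23) map the distorted coordinates to the undistorted ones, so the forward direction needs a small fixed-point iteration:

```python
import numpy as np

def world_to_frame(p_w, R, T, lf, k1, su, du_p, dv, cu, cv):
    """Project a 3D world point to computer frame memory coordinates,
    following Eqns (19)-(26) with only the first-order radial term k1."""
    u, v, w = R @ np.asarray(p_w) + T            # Eqn (19)
    Uu, Vu = lf * u / w, lf * v / w              # Eqns (20)-(21)
    Ud, Vd = Uu, Vu
    for _ in range(20):                          # invert Eqns (22)-(24)
        r2 = Ud**2 + Vd**2
        Ud, Vd = Uu / (1 + k1 * r2), Vu / (1 + k1 * r2)
    Uf = su * Ud / du_p + cu                     # Eqn (25), d'u = du*Ncu/Nfu
    Vf = Vd / dv + cv                            # Eqn (26)
    return Uf, Vf
```

With k1 = 0 the loop is inert and the mapping reduces to the pure pin-hole model.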

4.2. Calibration of laser range finder

The above-described CCD camera calibration is accomplished independently of that of the laser range finder, because of manufacturing problems with the large calibration plate used for the camera calibration as well as the complexity of the calibration itself. Through the CCD camera calibration, the intrinsic parameters have been obtained. Hence the extrinsic parameters of the CCD camera and the design constants of the laser slit emitting device are obtained simultaneously from the calibration of the laser range finder in this section. For the calibration of the laser range finder, the working plane is assumed to be flat, the camera roll angle to be zero when θR = 0, and the laser slit reflected from the ground to be normal to the X'-axis. Figure 7 shows the coordinate systems used to calculate the 3D range data and to calibrate the laser range finder. Compared with Fig. 4, showing the simple 3D range data calculation, the camera focal point on the Z'-axis is moved from point B to B', where the optical axis of the CCD camera passes through the points O'c, B', B, Oc and H. The image is reversely represented at the origin point O'c at the focal distance lf (= B'O'c) from the arbitrary focus location B', and the image in the computer frame memory is shown at the origin point Oc at the focal distance lf (= B'Oc = B'O'c) from the focus location B'. For calibration of the laser range finder, three laser slits can be sequentially positioned at a certain distance d apart on the working plane. From Fig. 7, it is seen that the laser slits are normal to the X- or X'-axis and pass through the points P, Q and R


Fig. 7. Coordinate systems for laser range finder calibration.

respectively. These images are stored in the computer frame memory and the values v0, v1 and v2 can be obtained by image processing. Point A, the location of the 3rd mirror scanning the laser slits, is the same as in Fig. 4. For the slits located at the points P, Q and R on the X'-axis, the rotation angles θL1 (∠RAQ) and θL2 (∠QAP) can be determined from the encoder of the 3rd mirror rotating motor. These three laser slits are normal to the V-axis on the image plane of the CCD camera and can be identified as lines passing through the points v'1, v'0 and v'2. Hence the points v'1, v'0 and v'2 on the image plane are transformed into v1, v0 and v2. Using the effective focal distance lf obtained from the camera calibration, the angles θc1 (∠v0B'v2) and θc2 (∠v0B'v1) are calculated with respect to the focal point B'. Therefore the following equations can be derived from the laser slits, showing the constant interval on the working plane:

tan θc1 = (v2 − v0)/(lf + v2·v0/lf)   (27)

tan θc2 = (v1 − v0)/(lf + v1·v0/lf).   (28)


From ΔAPQ and ΔPQS, the relation between θL (∠AQO) and lL (= AQ) is derived:

(lL + d cos θL) tan θL2 = d sin θL.   (29)

Using the same method, from ΔTQR and ΔART,

(lL − d cos θL) tan θL1 = d sin θL.   (30)

From Eqns (29) and (30), the equations for θL and lL can be derived as follows:

tan θL = 2/(1/tan θL2 − 1/tan θL1)   (31)

lL = (d/2)(1/tan θL1 + 1/tan θL2) sin θL.   (32)

Also, lA (= OA) can be obtained as

lA = lL sin θL.   (33)

Similarly, the relations for θc (∠B'QO) and lc (= B'Q) can be obtained as

tan θc = 2/(1/tan θc2 − 1/tan θc1)   (34)

lc = (d/2)(1/tan θc2 + 1/tan θc1) sin θc.   (35)

Therefore, the view angle θV between the working plane and the normal vector to the camera image plane is obtained as follows:

θV = tan⁻¹(v0/lf) + θc.   (36)
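Eqns (27)-(36) can be collected into one calibration routine. This is a sketch under our own assumptions: v1 is taken as the image of the far slit P and v2 of the near slit R, and absolute values absorb the image-orientation sign convention, which the paper does not spell out:

```python
import math

def lrf_calibrate(v0, v1, v2, lf, theta_L1, theta_L2, d):
    """Recover mirror height lA, camera distance lc and view angle thetaV
    from three slit images spaced d apart on the floor (Eqns (27)-(36)).
    theta_L1 = angle RAQ, theta_L2 = angle QAP from the mirror encoder."""
    t_c2 = abs((v1 - v0) / (lf + v1 * v0 / lf))             # Eqn (28), far pair
    t_c1 = abs((v2 - v0) / (lf + v2 * v0 / lf))             # Eqn (27), near pair
    theta_L = math.atan(2.0 / (1.0 / math.tan(theta_L2)
                               - 1.0 / math.tan(theta_L1)))  # Eqn (31)
    l_L = 0.5 * d * (1.0 / math.tan(theta_L1)
                     + 1.0 / math.tan(theta_L2)) * math.sin(theta_L)  # Eqn (32)
    l_A = l_L * math.sin(theta_L)                            # Eqn (33)
    theta_c = math.atan(2.0 / (1.0 / t_c2 - 1.0 / t_c1))     # Eqn (34)
    l_c = 0.5 * d * (1.0 / t_c2 + 1.0 / t_c1) * math.sin(theta_c)  # Eqn (35)
    theta_V = math.atan(v0 / lf) + theta_c                   # Eqn (36)
    return l_A, theta_L, l_c, theta_c, theta_V
```

On a synthetic geometry (mirror at height 400 mm, camera at 700 mm, slits at 300, 400 and 500 mm) the routine recovers the true lA, lc and θV exactly.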

4.3. Exact calculation of 3D location

As described in the above section, once the camera intrinsic/extrinsic parameters and the exact design parameters of the laser slit scanning device are obtained, Eqns (4), (5) and (6) for the simple 3D location calculation described in section 2.2 must be modified. Let xR = OR and xO = OO':

xR = lA/tan θL − d   (37)

xO = lc cos θc − xR − d.   (38)

Assuming the laser slit is located at point P on the bottom plane due to the rotation of the 3rd mirror, the beam angle of the laser slit, θB (∠APO), can be obtained as follows:

θB = tan⁻¹[lA/(xR + 2d)].   (39)

As shown in Fig. 7, Eqns (4), (5) and (6) use l'A (= O'A') and l'B (= A'B') instead of lA and lB:

l'A = (xO + xR + 2d) tan θB   (40)

l'B = lc sin θc − l'A.   (41)

The modified equation for the exact 3D location calculation is arranged as follows:

[X']   [X']   [−xO]
[Y'] = [Y'] + [ 0 ] ,   (42)
[Z']   [Z']   [ 0 ]

where the right-hand side is evaluated by Eqns (4), (5) and (6) with lA and lB replaced by l'A and l'B.
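Eqns (37)-(41) can be sketched as follows (an illustrative helper of our own, feeding the modified constants back into the simple extraction equations):

```python
import math

def exact_constants(l_A, theta_L, l_c, theta_c, d):
    """Modified design constants l_A', l_B' and origin shift x_O for the
    exact 3D calculation, per Eqns (37)-(41). Angles in radians."""
    x_R = l_A / math.tan(theta_L) - d                 # Eqn (37)
    x_O = l_c * math.cos(theta_c) - x_R - d           # Eqn (38)
    theta_B = math.atan(l_A / (x_R + 2 * d))          # Eqn (39)
    l_A_p = (x_O + x_R + 2 * d) * math.tan(theta_B)   # Eqn (40)
    l_B_p = l_c * math.sin(theta_c) - l_A_p           # Eqn (41)
    return l_A_p, l_B_p, x_O, theta_B
```

If the camera focal point actually lies on the Z'-axis, x_O comes out as zero and l'A, l'B coincide with the assumed lA, lB, so the simple and exact equations agree.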

5. EXPERIMENTS

5.1. Experimental device

The proposed laser range finder uses a He-Ne laser (10 mW power) as the light source, and four cylindrical lenses are used to form the slit beams. A black and white CCD camera is used as the image sensor; the magnitude of its effective CCD array is 6.4 mm (H) × 4.8 mm (V), the effective pixel number is 570 (H) × 485 (V), and the focal distance is lF = 12 mm. An optical band-pass filter (632.8 nm wavelength) is used to remove the optical noise. The image processing units are composed of a frame grabber having 512 (H) × 480 (V) × 7-bit gray-level input resolution and an IBM 80486 CPU card. The scanning mechanism is composed of two D.C. servo motors, one interface board, two up-down boards and an IBM 80286 CPU card for motor actuation. The field of view and resolution of the laser range finder are related to the angle between the laser beam and the camera, the CCD camera design constants (number and magnitude of CCD element pixels, focal distance), and the distance to the object from the camera. From Fig. 4, the field of view is determined by the CCD width multiplied by the distance ratio X'/lF. The field of view of the proposed laser range finder is thus 1.066 m at X' = 2 m and 1.6 m at X' = 3 m. For reference, the diameter of the mobile robot is Dr = 0.6 m. If θR = 0° and θB = 0°, the resolution of the laser range finder is obtained from the CCD size per pixel multiplied by the distance ratio X'/lF. Therefore the resolution of the proposed laser range finder is 3.1 mm (H) × 2.5 mm (V) at X' = 3 m. Since the beam angle θB is not zero, the resulting resolution in the vertical direction of the image is worse than the value obtained above.
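The quoted figures can be reproduced from the stated ratios. A quick numeric check (our own sketch; we assume the horizontal pixel count is taken from the 512-pixel frame memory, which is what yields the quoted 3.1 mm):

```python
def field_of_view(ccd_width, x_dist, lf):
    """Lateral field of view at range x_dist: CCD width scaled by X'/lF."""
    return ccd_width * x_dist / lf

def resolution(ccd_size, n_pixels, x_dist, lf):
    """Footprint of one pixel at range x_dist (theta_B = 0 assumed)."""
    return ccd_size / n_pixels * x_dist / lf
```

For example, field_of_view(6.4, 2000, 12) gives about 1067 mm, matching the 1.066 m quoted at X' = 2 m.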

5.2. Experimental results

5.2.1. Camera calibration results. For the linear algebra more than seven calibration points are needed, while more than two points are needed for the nonlinear calculation. To obtain a reliable solution from the pseudo-inverse transformation, far more data points are needed. A metal block for camera calibration (height 45 mm, upper surface 160 mm × 100 mm) is used, and a total of 60 data points are taken from two planes with 30 data points each. The diameter of each calibration point is 10 mm. By taking the center point of each mark as the datum, the effect of noise in the binary image can be minimized. The linear algebra was performed first to obtain the parameters, and the nonlinear calculation was then performed to refine the parameters by the steepest descent method, which reduced the error. Table 3 lists the calibrated intrinsic camera parameters. The difference between the effective focal distance and the technical specification comes from the fact that the camera is not an exact pin-hole camera with zero lens thickness.
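The refinement stage of this two-stage scheme (a linear solution followed by steepest descent) can be sketched on a toy one-parameter pinhole model, u = f·X/Z. Everything below is invented for illustration: the synthetic data, the step size, and the iteration count are not the paper's:

```python
# Toy sketch of the refinement stage only: steepest descent on the
# reprojection error of a one-parameter pinhole model u = f * X / Z.
# The synthetic data, step size and iteration count are invented here.
points = [(x, z) for x in (-80.0, -40.0, 0.0, 40.0, 80.0)
                 for z in (300.0, 600.0)]
f_true = 11.434                 # pretend "ground truth" focal length
obs = [f_true * x / z for x, z in points]

f = 12.0                        # start from the nominal specification
step = 0.1
for _ in range(500):
    # gradient of sum (f*x/z - u)^2 with respect to f
    grad = sum(2.0 * (f * x / z - u) * (x / z)
               for (x, z), u in zip(points, obs))
    f -= step * grad
print(round(f, 3))              # converges to 11.434
```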

5.2.2. Calibration results of laser range finder. For the calibration of the laser range finder, the 3rd mirror rotating motor tilts to locate the laser slit at a constant distance (0, 300, 600, 900 and 1200 mm) from an arbitrary reference point, with a constant distance interval (300 mm) on the working plane. The rotating angle is known from the encoder value of the 3rd mirror rotating motor. The laser slit images reflected from the various locations are stored in the computer frame memory from the CCD camera. Figure 8 shows three different images obtained from the laser slits 0, 600 and 1200 mm from the reference point. Figure 9 shows the laser slit locations obtained from the image processing. Figure 10 shows the pixel number, i.e. the gap between the slits, calculated from the computer frame memory for constant distance separations from the reference point. Figure 11 shows the rotation angle of the 3rd mirror. The calibrated parameters of the laser range finder are listed in Table 4.

5.2.3. 3D range data extraction results. Figure 12a shows the object used for the experiment. The CCD camera collects the laser beam reflected from the object's surface. To obtain the range data from these images, the following processes are performed. First, the brightest pixel in each row of the image is found; the result is shown in Fig. 12b. During this process, the laser beam reflected from the object and the noise are detected simultaneously. Figure 12c shows the result of eliminating the noise by applying the relative range test among adjacent pixels. Figure 12d shows the object's distance calculated using Eqn (42), with l′_A and l′_B instead of l_A and l_B, for the pixels which survived the noise elimination. From this result the proposed laser range finder is determined to have good measurement accuracy compared with the real location. The measurement deviation of the range data from real experiments is listed in Table 5.
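The two extraction steps described above (brightest pixel per row, then the relative range test on adjacent pixels) can be sketched as follows. The function names and the `max_jump` threshold are my own assumptions, not values from the paper:

```python
# Sketch of the two extraction steps: `image` is assumed to be a list of
# rows of gray levels.
def brightest_per_row(image):
    """Step 1: column index of the brightest pixel in each row."""
    return [max(range(len(row)), key=row.__getitem__) for row in image]

def reject_outliers(cols, max_jump=3):
    """Step 2: a 'relative range test' sketch -- drop a peak whose column
    jumps away from both neighbours by more than max_jump pixels (noise)."""
    keep = []
    for i, c in enumerate(cols):
        prev_ok = i > 0 and abs(c - cols[i - 1]) <= max_jump
        next_ok = i < len(cols) - 1 and abs(c - cols[i + 1]) <= max_jump
        keep.append(c if (prev_ok or next_ok) else None)
    return keep
```

A peak is kept if it is consistent with at least one neighbour, so an isolated bright noise pixel is rejected while the continuous laser stripe survives.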
As can be seen, a deviation of 9 mm (0.60%) at the 1.5 m location becomes 23 mm (0.96%) at the 2.4 m location, so the deviation becomes larger as the distance to the object increases. Figure 13 shows the CCD

Table 3. Calibrated parameters of CCD camera

Parameter | Value before calibration | Value after calibration
l_F (mm)  | 12.000                   | 11.434
k         | -                        | 0.00335
S_u       | -                        | 1.095

Fig. 8. Three different images obtained from the laser slits 0, 600 and 1200 mm from the reference point.

Fig. 9. Laser slit locations obtained from the image processing.

Fig. 10. Pixel number between the slits for constant distance separation from the reference point.

Fig. 11. Rotation angle (pulse No. × 2π/4000 [rad]) of the 3rd mirror for constant distance separation from the reference point.

Table 4. Calibrated parameters of laser range finder

Parameter | Value before calibration | Value after calibration
l_A (mm)  | 848.000                  | 817.230
θ_C (rad) | -                        | 0.489
l_C (mm)  | -                        | 2197.820
x_R (mm)  | -                        | 1504.210
x_0 (mm)  | 0.000                    | -164.630
θ_V (rad) | 0.465                    | 0.484

camera resolution in the various cases versus measuring distance. The first is the theoretical resolution of the normal case, in which the normal vector of the CCD camera's image plane is orthogonal to the field of view. The second is the theoretical resolution of the inclined case, in which the normal vector of the CCD camera image plane is inclined to the working plane but the laser plane is parallel to it. The final case is the experimental resolution of our system, in which both the normal vector of the CCD camera image plane and the laser plane are inclined. It took no more than 1 second to process an image frame with 256 columns. Consequently, the proposed laser range finder has excellent accuracy in object identification and range measurement compared with the ultrasonic sensor used for the free-ranging mobile robot, and its data processing time is very short compared to that of the vision sensor.

6. CONCLUSION

This study has covered a generalized calibration method and an exact 3D range data extraction algorithm for an omnidirectional laser range finder. These are useful

Fig. 12. (a) Object used in experiment. (b) Result of finding the pixels with maximum intensity in each column. (c) Image after removing noise. (d) Range map and measuring error.

Table 5. Ranging error

Range (mm) | Error (mm) | % Error
1500       | 9          | 0.60
1800       | 13         | 0.72
2100       | 17         | 0.81
2400       | 23         | 0.96
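For completeness, the percentage column of Table 5 is simply 100 × error / range, which a quick check reproduces:

```python
# Recomputing the "% Error" column of Table 5 from its first two columns.
ranges = (1500, 1800, 2100, 2400)   # mm
errors = (9, 13, 17, 23)            # mm
pct = [round(100.0 * e / r, 2) for e, r in zip(errors, ranges)]
print(pct)                          # [0.6, 0.72, 0.81, 0.96]
```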


Fig. 13. Theoretical and experimental resolutions according to measuring distance of laser range finder.

for acquiring 3D range data and determining the design constants of a general laser range finder. The following procedure was adopted. Firstly, a simple 3D range data extraction was achieved by ignoring the camera's lens distortion and scale factor, using the focal length as given by the manufacturer, and introducing some assumptions. Secondly, the intrinsic parameters of the CCD camera, such as the focal length, scale factor and lens distortion factors, were obtained by an existing camera calibration method based on a pin-hole camera model. Thirdly, the exact design constant of the laser slit emitting device and the position and orientation parameters (extrinsic parameters) of the CCD camera were obtained simultaneously by laser range finder calibration, whereby the previously obtained simple 3D range data extraction equations were modified exactly. Consequently, the simple 3D range data calculation equations were expanded to the exact ones by using the parameters obtained by calibration of the camera and the laser slit emitting device. The results were used in our omnidirectional laser range finder, which is applicable to already developed mobile robots. Finally, the calibration parameters and the exact 3D range data of the


omnidirectional laser range finder were obtained by real experiment and the errors of the measurement distances were calculated.
