Pattern Recognition, Vol. 27, No. 2, pp. 301-310, 1994
Copyright © 1994 Pattern Recognition Society. Published by Elsevier Science Ltd. Printed in Great Britain.

A NON-ITERATIVE PROCEDURE FOR RAPID AND PRECISE CAMERA CALIBRATION

MINORU ITO† and AKIRA ISHII‡

† Department of Electronic Engineering, Kogakuin University, 1-24-2 Nishi-Shinjuku, Shinjuku-ku, Tokyo 163-91, Japan
‡ NTT FANET Systems Corp.

(Received 10 December 1992; in revised form 15 September 1993; received for publication 23 September 1993)

Abstract--A simple, rapid and precise camera calibration method is described by which the position and direction of a camera are calculated without using conventional iterative processes. Error parameters are defined as the differences of the calculated intrinsic parameter values from their fixed ones. Camera parameters are adjusted non-iteratively so that the error parameters become sufficiently small, and the rotation matrix orthogonality constraints are completely preserved. Experimental error estimation using an actual mark image shows that the position error is less than 0.1 mm and that the direction error is less than 5 × 10^-4 rad, which correspond to relative errors of 5 × 10^-5 and 1 × 10^-4, respectively. The whole calibration procedure, from image acquisition to result output, is completed automatically within 6 s on a VAX11/780 (1 MIPS).

Key words: Camera calibration, Direct calculation, Camera position, Camera direction, Non-iterative method, Error estimation, Error parameter, Calibration criteria.
1. INTRODUCTION
A simple, rapid and precise camera calibration method is urgently needed for further progress in applications of robot vision systems to high-level three-dimensional (3D) object recognition.(1,2) This is because small camera calibration errors can result in large errors in the 3D positions calculated from passive stereo images.(3) Camera calibration is essential not only for stereo image analysis, but also for active 3D measurement using slit-pattern or two-dimensional (2D) pattern projections. It is also useful in robot position calibration using TV camera images of calibration marks, because the camera position and orientation parameters are used for object position calculation in those measurements. Using a well-calibrated camera with precise position and orientation parameters makes it possible to measure the position and shape of an object arbitrarily placed in the world coordinate system without touching the object.(2,4) This measurement technique can be applied to 3D shape measurement of mechanical parts in automatic parts assembly and to robot position calibration, as well as to the tracking of a flying object. If a camera is mounted on a robot arm and the camera position and orientation in the world system are calculated, or calibrated, for each movement of the arm, then the 3D position of the moving arm can be evaluated in the world system. This can be further applied to the navigation of a camera-mounted land vehicle.

There are two conventional calibration approaches.(5) One is the iterative approximation method, whose main applications have been studied in the field of photogrammetry over the past few decades and which is now most widely used as a standard optimization technique. Differences between the ideal image points and the corresponding real camera image points are evaluated as an error function.(6-11) The calibration procedure consists of the iterative optimization of the camera parameters to minimize this error function. A drawback of this approach is that the optimized solution is sometimes affected by both the initial guess of the solution and the calibration mark positions. This is possibly caused by the ambiguity of the solution due to the overdetermined system of equations. It should be noted that the error function values do not necessarily reflect the calibration error. In addition, it usually takes a long time to derive the solution. The other approach is the non-iterative method, in which matrix equations relating the world coordinates of calibration marks to their observed image coordinates are solved non-iteratively to evaluate the camera parameters directly.(12-16) This method was developed about ten years ago and shortens the computation time. In practice, however, we often come across unexpected discrepancies between the calculated camera parameter values and the real ones, even though the calculated calibration image coincides quite well with the real mark image. These errors can be attributed to the ambiguity arising from the overdetermined system. Furthermore, the extrinsic camera parameters cannot be evaluated under the condition of fixed intrinsic parameters. In other words, the intrinsic parameter values calculated by the conventional approaches tend to change with the mark setting conditions, even though they should not change when the position and orientation of the camera change.
Several alternative techniques have also been studied. One uses straight lines or vanishing lines in place of dot marks.(17-19) This technique has the advantage that marks on the straight lines can be used to configure the calibration marks simply. It appears, however, that it is not easy to prepare lines that are clearly observed by a camera. Another technique uses multiple images such as stereo images,(20,21) but it requires a complicated calibration system, which makes it difficult to obtain high calibration accuracy. Nevertheless, these techniques show promise for moving camera systems, where the mark board is difficult to set and high accuracy is not required.

This paper describes a simple, rapid and precise camera calibration method by which the camera position and direction are calculated without using the conventional iterative method. Error parameters are defined as the differences of the calculated optical axis length and optical axis point coordinates from their fixed values, predetermined optically and mechanically prior to the calibration procedure, and are used for monitoring the extrinsic camera parameter calibration errors. Here, the optical axis length is defined as the distance from the lens center to the optical axis point. The camera parameters are adjusted so that the error parameters become sufficiently small. This technique leads to highly accurate and reliable extrinsic parameter calibration when the intrinsic parameter values are fixed.
2. CAMERA MODELLING AND PROBLEMS IN CAMERA CALIBRATION

As depicted in Fig. 1, the intrinsic parameter set contains the optical axis length L, which is the distance from the image plane to the optical center Oc; the optical axis point (i0, j0), which is where the optical axis yc intersects the image plane; and the conversion parameters ηi and ηj, which relate the image coordinates on the image plane to the digital image memory coordinate system. The optical axis length L is fixed throughout each experiment. The extrinsic parameter set contains the optical center position (c1, c2, c3) relative to the world coordinate system and the rotations α, φ and β of the camera coordinate system around the world coordinate axes. Thus, there are 11 unknown camera parameters to be calibrated, and at least six marks are needed for calibration. Note that some of the intrinsic parameters can be precisely calibrated by camera manufacturers using optical techniques, and that intrinsic parameter calibration need only be done once.

Fig. 1. Coordinate systems.

The relation between the camera coordinates (xc, yc, zc) and the world coordinates (x, y, z) is given by

    (xc, yc, zc)^T = M (x - c1, y - c2, z - c3)^T                          (1)

where M is the rotation transformation matrix expressed as a function of the camera orientation parameters α, φ and β. Further, the relation between the object camera coordinates (xc, yc, zc) and the corresponding image point (i, j) in the image memory is given by the perspective projection principle as

    ηi i = L xc/yc + ηi i0                                                 (2)
    ηj j = L zc/yc + ηj j0.                                                (3)

From these relations, the following equations relating the object world coordinates (x, y, z) to their corresponding image point (i, j) in the image memory can be derived:

    i = (L/ηi) (a11 x + a12 y + a13 z - a11 c1 - a12 c2 - a13 c3)
              /(a21 x + a22 y + a23 z - a21 c1 - a22 c2 - a23 c3) + i0     (4)

    j = (L/ηj) (a31 x + a32 y + a33 z - a31 c1 - a32 c2 - a33 c3)
              /(a21 x + a22 y + a23 z - a21 c1 - a22 c2 - a23 c3) + j0.    (5)

These can be rewritten as

    a21 i x + a22 i y + a23 i z + B1 x/ηi + B2 y/ηi + B3 z/ηi
        - B1 c1/ηi - B2 c2/ηi - B3 c3/ηi = A2 i                            (6)

    a21 j x + a22 j y + a23 j z + D1 x/ηj + D2 y/ηj + D3 z/ηj
        - D1 c1/ηj - D2 c2/ηj - D3 c3/ηj = A2 j                            (7)

where

    Ak = ak1 c1 + ak2 c2 + ak3 c3        (k = 1, 2, 3)                     (8)
    Bk = -L a1k - ηi i0 a2k              (k = 1, 2, 3)                     (9)
    Dk = -L a3k - ηj j0 a2k              (k = 1, 2, 3).                    (10)
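The projection model of equations (1)-(5) is easy to check numerically. The short sketch below, in Python with NumPy, uses illustrative parameter values (our assumptions, not the paper's) and verifies that the two-stage mapping (1)-(3) agrees with the direct form (4)-(5):

```python
import numpy as np

# Illustrative camera parameters (assumed values, not the paper's).
L, eta_i, eta_j = 55.0, 0.02, 0.02      # optical axis length, conversion params
i0, j0 = 256.0, 256.0                   # optical axis point
c = np.array([-100.0, -800.0, 400.0])   # optical center, world coordinates

# Any orthogonal M works here; this one rotates by -0.4 rad about the x axis.
t = -0.4
M = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(t), np.sin(t)],
              [0.0, -np.sin(t), np.cos(t)]])

def project_via_camera(p):
    # Equations (1)-(3): world -> camera coordinates -> image memory.
    xc, yc, zc = M @ (np.asarray(p, float) - c)
    return (L / eta_i) * xc / yc + i0, (L / eta_j) * zc / yc + j0

def project_direct(p):
    # Equations (4)-(5): the same mapping written with the elements a_mn.
    d = np.asarray(p, float) - c
    den = M[1] @ d      # a21 x + a22 y + a23 z - a21 c1 - a22 c2 - a23 c3
    return (L / eta_i) * (M[0] @ d) / den + i0, \
           (L / eta_j) * (M[2] @ d) / den + j0
```

As equations (2) and (3) require, a point on the optical axis, p = c + M^T (0, d, 0)^T, maps exactly to the optical axis point (i0, j0).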
The elements amn of the rotation matrix M are composed only of trigonometric functions of α, φ and β. Note that these elements are mutually dependent, so that the matrix M satisfies the following orthogonality constraints:

    Σk=1..3 amk^2 = 1        (m = 1, 2, 3)                                 (11)
    Σk=1..3 amk ank = 0      (m, n = 1, 2, 3, m ≠ n).                      (12)

The key concerns of camera calibration are how to determine high-accuracy parameter values which satisfy not only the perspective transformation relations (6) and (7), but also the orthogonality constraints (11) and (12). When the orthogonality constraints are not completely satisfied, the camera parameters may contain large errors and cannot be used with confidence in stereoscopic object recognition for tasks requiring high reliability, such as robot control. Another problem is that, even if the calibration satisfies the above conditions, it is impossible to confirm whether the calibrated extrinsic parameters are sufficiently accurate. Furthermore, the intrinsic parameter values derived from calibration must always be constant. A calibration approach for industrial use must satisfy the following main requirements: (a) the extrinsic parameters must be calibratable with high accuracy under the condition of fixed intrinsic parameters; (b) the orthogonality constraints for the camera rotation matrix must be preserved not only theoretically but also numerically in actual calibration using real marks; (c) high calibration accuracy must be obtainable independently of mark conditions such as the number of marks and the mark configuration. Error monitoring, which warns of faulty calibration, must also be possible.

3. PRINCIPLES OF OUR APPROACH

Equations (6) and (7) are rewritten as

    a'21 i x + a'22 i y + a'23 i z + B'1 x + B'2 y + B'3 z
        - B'1 c1 - B'2 c2 - B'3 c3 = i                                     (13)

    a'21 j x + a'22 j y + a'23 j z + D'1 x + D'2 y + D'3 z
        - D'1 c1 - D'2 c2 - D'3 c3 = j                                     (14)

where

    a'2k = a2k/A2            (k = 1, 2, 3)                                 (15)
    B'k = Bk/(A2 ηi)         (k = 1, 2, 3)                                 (16)
    D'k = Dk/(A2 ηj)         (k = 1, 2, 3).                                (17)

Using rotation matrix constraints (11) and (12), orthogonality constraints for the normalized matrix elements a'mn (m, n = 1, 2, 3) can be derived as

    a'21 B'1 + a'22 B'2 + a'23 B'3 = -i0/A2^2                              (18)
    B'1^2 + B'2^2 + B'3^2 = L^2/(A2^2 ηi^2) + i0^2/A2^2                    (19)
    a'21 D'1 + a'22 D'2 + a'23 D'3 = -j0/A2^2                              (20)
    D'1^2 + D'2^2 + D'3^2 = L^2/(A2^2 ηj^2) + j0^2/A2^2                    (21)
    a'21^2 + a'22^2 + a'23^2 = 1/A2^2.                                     (22)

Here, the error parameters δ, εi and εj are defined as the differences of the calculated optical axis length and optical axis point from their fixed values, respectively. Equations (2) and (3) are modified to

    ηi i = L xc/yc + ηi i0 + δ xc/yc + ηi εi                               (23)
    ηj j = L zc/yc + ηj j0 + δ zc/yc + ηj εj.                              (24)

Equations (9) and (10) are modified accordingly to

    Bk = -L a1k - ηi i0 a2k - δ a1k - ηi εi a2k                            (25)
    Dk = -L a3k - ηj j0 a2k - δ a3k - ηj εj a2k.                           (26)

Note that the error parameters represent physical absolute errors in the intrinsic parameters, whereas the error function used in the conventional iterative approximation approach does not express any physical parameter error. In brief, the basis of the proposed calibration method is to use new error parameters that give the physical errors of the intrinsic parameters, and to calibrate the extrinsic parameters under the orthogonality constraints so that the error parameters become sufficiently small. The perspective transformation equations are solved under the rotation matrix element constraints without any change in the intrinsic parameters, which are accurately measured in advance.

4. CALCULATION DETAILS

Equations (13) and (14) for the observed points (i1, j1), ..., (in, jn) in the image memory, corresponding to the real mark points of coordinates (x1, y1, z1), ..., (xn, yn, zn) in the world coordinate system, are given in the matrix form

    G N = W                                                                (27)

where

    G = (a'21, a'22, a'23, B'1, B'2, B'3, D'1, D'2, D'3, u'1, u'2)         (28)
    u'1 = B'1 c1 + B'2 c2 + B'3 c3                                         (29)
    u'2 = D'1 c1 + D'2 c2 + D'3 c3                                         (30)
    W = (i1, i2, ..., in, j1, j2, ..., jn).                                (31)

The parameter n is the number of mark points. Each element of G is a function of the intrinsic and extrinsic camera parameters. W is a 2n-dimensional row vector composed of the image point coordinates. N is an 11 × 2n matrix composed of the mark position coordinates in the world system and of the image point coordinates
on the image plane. N can simply be given in block form by

    N = [ K1 K2   K1 K3 ]
        [ K1      O3n   ]
        [ O3n     K1    ]
        [ -K4     O1n   ]
        [ O1n     -K4   ]                                                  (32)

where

    K1 = [ x1 ... xn ]
         [ y1 ... yn ]
         [ z1 ... zn ],  K2 = diag(i1, ..., in),  K3 = diag(j1, ..., jn)   (33)

K4 is the 1 × n matrix (1, ..., 1) and Omn is an m × n zero matrix. The unknown matrix G can be calculated by the least-square method. Considering that the following relation is obtained from equation (15),

    a'21 c1 + a'22 c2 + a'23 c3 = 1                                        (34)

the camera position (c1, c2, c3) in the world coordinate system is derived as

    (c1, c2, c3)^T = [ a'21  a'22  a'23 ]^-1  ( 1  )
                     [ B'1   B'2   B'3  ]     ( u'1)
                     [ D'1   D'2   D'3  ]     ( u'2).                      (35)

The optical axis point (i0, j0) is calculated using equations (18) and (20), which are derived from the orthogonality constraints. The error parameters εi and εj are given by

    εi = -A2^2 (a'21 B'1 + a'22 B'2 + a'23 B'3) - i0                       (36)
    εj = -A2^2 (a'21 D'1 + a'22 D'2 + a'23 D'3) - j0.                      (37)

Using equations (9) and (10) and the orthogonality conditions (11) and (12), the error parameter δ is given by

    δ = |fi ηi - L| + |fj ηj - L|                                          (38)

where

    fi^2 = A2^2 (B'1^2 + B'2^2 + B'3^2) - (i0 + εi)^2                      (39)
    fj^2 = A2^2 (D'1^2 + D'2^2 + D'3^2) - (j0 + εj)^2.                     (40)

From equations (15), (25) and (26),

    a'1k = a1k/A2 = {-B'k - (i0 + εi) a'2k}/fi                             (41)
    a'3k = a3k/A2 = {-D'k - (j0 + εj) a'2k}/fj.                            (42)

It is important that the calculated rotation matrix satisfies orthogonality constraints (11) and (12), or (18)-(22). In the right-hand coordinate system shown in Fig. 1, the matrix is given by

    M = [ cos α  -sin α  0 ] [ 1     0       0    ] [ cos β  0  -sin β ]
        [ sin α   cos α  0 ] [ 0   cos φ   sin φ  ] [ 0      1   0     ]
        [ 0       0      1 ] [ 0  -sin φ   cos φ  ] [ sin β  0   cos β ].  (43)

As the elements of matrix M satisfy orthogonality constraints (11) and (12) in the above calibration approach, the camera directions α, φ and β can be calculated by

    α = sin^-1 (a'12 tan φ / a'32)                                         (44)
    β = tan^-1 (a'31 / a'33)                                               (45)
    φ = tan^-1 (-a'32 sin β / a'31).                                       (46)
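The whole first step, building N, solving G N = W by least squares, then recovering the position, the error parameters and the angles, can be sketched end-to-end as follows. All parameter values and mark positions here are illustrative assumptions, and the rotation factorization follows equation (43):

```python
import numpy as np

# Fixed intrinsic parameters and an assumed ground-truth camera.
L, eta_i, eta_j, i0, j0 = 55.0, 0.02, 0.02, 256.0, 256.0
alpha, phi, beta = -0.1, -0.4, 0.05
c_true = np.array([-100.0, -800.0, 400.0])

def rotation_matrix(a, p, b):
    # Equation (43): product of elementary rotations through alpha, phi, beta.
    Ra = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    Rp = np.array([[1, 0, 0], [0, np.cos(p), np.sin(p)], [0, -np.sin(p), np.cos(p)]])
    Rb = np.array([[np.cos(b), 0, -np.sin(b)], [0, 1, 0], [np.sin(b), 0, np.cos(b)]])
    return Ra @ Rp @ Rb

M = rotation_matrix(alpha, phi, beta)

# Synthetic non-coplanar marks and their exact images, equations (1)-(3).
grid = [(x, z) for x in (-150.0, -50.0, 50.0, 150.0)
               for z in (-150.0, -50.0, 50.0, 150.0)]
marks = np.array([(x, 160.0 + 20.0 * (k % 3 - 1), z)
                  for k, (x, z) in enumerate(grid)])
pc = (M @ (marks - c_true).T).T
iv = (L / eta_i) * pc[:, 0] / pc[:, 1] + i0
jv = (L / eta_j) * pc[:, 2] / pc[:, 1] + j0

# Build the 11 x 2n matrix N of equation (32) and solve G N = W, eq. (27).
n = len(marks)
N = np.zeros((11, 2 * n))
for k, ((x, y, z), i, j) in enumerate(zip(marks, iv, jv)):
    N[:, k] = (i * x, i * y, i * z, x, y, z, 0, 0, 0, -1, 0)
    N[:, n + k] = (j * x, j * y, j * z, 0, 0, 0, x, y, z, 0, -1)
W = np.concatenate([iv, jv])
G, *_ = np.linalg.lstsq(N.T, W, rcond=None)
a2p, Bp, Dp, u1p, u2p = G[0:3], G[3:6], G[6:9], G[9], G[10]

# Camera position, equations (34)-(35).
c_est = np.linalg.solve(np.vstack([a2p, Bp, Dp]), [1.0, u1p, u2p])

# Error parameters, equations (22) and (36)-(40).
A2sq = 1.0 / (a2p @ a2p)
eps_i = -A2sq * (a2p @ Bp) - i0
eps_j = -A2sq * (a2p @ Dp) - j0
fi = np.sqrt(A2sq * (Bp @ Bp) - (i0 + eps_i) ** 2)
fj = np.sqrt(A2sq * (Dp @ Dp) - (j0 + eps_j) ** 2)
delta = abs(fi * eta_i - L) + abs(fj * eta_j - L)

# Remaining normalized rows (41)-(42) and camera directions (44)-(46).
a1p = (-Bp - (i0 + eps_i) * a2p) / fi
a3p = (-Dp - (j0 + eps_j) * a2p) / fj
beta_est = np.arctan(a3p[0] / a3p[2])
phi_est = np.arctan(-a3p[1] * np.sin(beta_est) / a3p[0])
alpha_est = np.arcsin(a1p[1] * np.tan(phi_est) / a3p[1])
```

With exact synthetic images the recovered position and angles match the assumed ground truth and the error parameters vanish; with real marks the error parameters instead reveal the first-step calibration error, which is what the correction steps below exploit.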
As a result, the extrinsic camera parameters as well as the error parameters, ei, ej and 6, can be calculated under the orthogonality conditions for matrix M. The error parameters are, in the physical sense, the reduced errors of the intrinsic parameter values assuming that the extrinsic parameters are precise. It should be noted that the extrinsic camera parameters are obtained after the least-square calculation so that the constraints are completely satisfied, and further that the physical calibration errors can be correctly estimated using the error parameters. The second step in our method is the correction step. Unexpectedly large errors that are dependent on mark configuration are sometimes observed after the first step. Correction is done under the conditions that the error parameters become negligible and both constraints (11) and (12) are preserved after the correction. The camera parameters ct and q5 are corrected temporarily by fl~ and fl~, respectively, where ~ = h 0 cos 0 th/L
(47)
f ~ = ho sin 0 rlffL.
(48)
ho is the length of the line from (i0, J0) to (io + ei, Jo + e~) and 0 is the angle between the line and the intersection of the image plane and the x - y surface in the world coordinate system. The camera positions are temporarily corrected using the error parameters under the perspective projection principle as
(C1,C2,C3)= L / ( L
+ J)(c 1,
C2'C3)"
(49)
Using these corrected values, three matrix elements a'21, a'22 and a~3 are calculated again. Then, the other camera parameter values are evaluated under the assumption that only their three parameter values are known. Equations (13) and (14) are rewritten as B'lx + B'2y + B'3z - B'lC 1 - B'2c 2 -- B'3c 3 = i -- a'21ix -- a'22iy - a'23iz
(50)
D'lx + D'2y + D'3z - D'lcl -- D'2c2 - D'3c3 = j -- a'21jx - a'22jY - a'23Jz.
(51)
The simultaneous equations for the image coordinates (i1, j1), ..., (in, jn) corresponding to the mark points (x1, y1, z1), ..., (xn, yn, zn) in the world coordinate system can then be written as

    P Q = V                                                                (52)

where P is the unknown matrix depending on the camera parameters,

    P = [ B'1  B'2  B'3  u'1 ]
        [ D'1  D'2  D'3  u'2 ]                                             (53)

Q is the matrix composed of the mark point coordinates in the world system,

    Q = [ x1 ... xn ]
        [ y1 ... yn ]
        [ z1 ... zn ]
        [ -1 ... -1 ]                                                      (54)

and V is the matrix composed of the coordinates of both the mark points and the corresponding image points,

    V = [ Si1, ..., Sin ]
        [ Sj1, ..., Sjn ]                                                  (55)

where

    Sik = ik - a'21 ik xk - a'22 ik yk - a'23 ik zk                        (56)
    Sjk = jk - a'21 jk xk - a'22 jk yk - a'23 jk zk.                       (57)

The elements B'1, B'2, B'3, D'1, D'2, D'3, u'1 and u'2 of matrix P can be calculated by the least-square method. The camera position world coordinates and the camera rotation parameters, as well as the error parameters, are calculated again as in equations (35) to (46). The parameter values again satisfy both orthogonality constraints (11) and (12). As there are only eight unknown parameters in the least-square calculation, the error parameters are usually small in this step. We sometimes, however, come across small but unsatisfactory error parameter values, depending on the mark condition. A third step is performed to obtain, or to confirm, more accurate calibration values. This last step is basically a repetition of the second: after the rotational and positional camera parameters are corrected temporarily using the calculated error parameters, the simultaneous equations expressed by equation (52) are solved by the least-square technique and the camera parameters as well as the error parameters are calculated again. After this step, the error parameters become sufficiently small.
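The reduced system P Q = V splits into two small least-square problems, one per row of P. The helper below is a hypothetical sketch (the function name and the test values are ours, not the paper's); it assumes the corrected normalized row (a'21, a'22, a'23) is already available:

```python
import numpy as np

def second_step(a2p, marks, iv, jv):
    """Solve P Q = V, equations (52)-(57), for B', D', u'1 and u'2.

    a2p    : corrected normalized row (a'21, a'22, a'23)
    marks  : n x 3 world coordinates of the calibration marks
    iv, jv : the n observed image coordinates
    """
    n = len(marks)
    # Q has columns (x_k, y_k, z_k, -1)^T, equation (54).
    Q = np.vstack([marks.T, -np.ones(n)])
    # Right-hand sides of equations (56)-(57):
    # S_ik = i_k (1 - a'21 x_k - a'22 y_k - a'23 z_k), likewise for j.
    Si = iv * (1.0 - marks @ a2p)
    Sj = jv * (1.0 - marks @ a2p)
    rowB, *_ = np.linalg.lstsq(Q.T, Si, rcond=None)  # (B'1, B'2, B'3, u'1)
    rowD, *_ = np.linalg.lstsq(Q.T, Sj, rcond=None)  # (D'1, D'2, D'3, u'2)
    return rowB[:3], rowB[3], rowD[:3], rowD[3]
```

Because only these eight parameters are estimated here, the error parameters recomputed from B', D', u'1 and u'2 are usually much smaller than after the first step.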
5. COMPUTER SIMULATION

For the camera parameter values, we assumed: the optical axis point (i0, j0) was at coordinates (256, 256) in a 512 × 512 image plane; the optical axis length L was 55 mm; both conversion parameters, ηi and ηj, were 0.02; the optical center position was at (-100, -800, 400) (unit: mm) in the world coordinate system; the camera rotation parameters α, φ and β were -0.1111, -0.4 and 0.0 rad, respectively; and the center of the mark board was located at (0, 160, 0) (unit: mm) in the world coordinate system. Sixty-four model marks were arranged on the board and the mark images were calculated by the perspective projection principle. From these images, the camera parameters and error parameters were computed by the proposed method. Table 1 shows the result of this simulation. After each step, orthogonality constraints (11) and (12) are confirmed to be preserved.

Here we define an S parameter as the difference between the mark positions (i, j) calculated using the calibrated camera parameters and the real mark positions (itrue, jtrue). It is written as

    S = {Σ (i - itrue)^2 + Σ (j - jtrue)^2}^(1/2) / n                      (58)

where Σ represents the summation over all marks. The value of S after the first step is below 1 pixel for the above simulation, and after the second step it is only 0.14 pixel.

Table 1. Computer simulation of camera calibration

    Parameter    True value   1st step   2nd step   3rd step   Error
    α (rad)      -0.1111      -0.0722    -0.1112    -0.1111    -0.0000
    φ (rad)      -0.4000      -0.357     -0.4010    -0.4001    -0.0002
    β (rad)       0.0000      -0.0010     0.0002     0.0001    -0.0001
    c1 (mm)      -100.0       -109.1     -100.4     -100.1     -0.1
    c2 (mm)      -800.0       -860.1     -799.2     -799.9      0.1
    c3 (mm)       400.0        434.1      401.6      400.1      0.1
    i0            256.0        147.5      256.2      256.1      0.1
    j0            256.0        303.0      255.8      255.9     -0.1
    L (mm)        55.0         58.8       54.5       54.9      -0.1
    εi (pixel)    0.0         -108.5      0.2        0.1        0.1
    εj (pixel)    0.0          147.0     -0.2       -0.1       -0.1
    δ (mm)        0.0          3.8       -0.4       -0.1       -0.1
    S (pixel)     0.0          0.9        0.14       0.09       0.1

In Table 1, there is a large discrepancy between the calculated parameters and the true parameters after the first step. Even when the true parameters are unknown, the parameter values are found to be unreliable by monitoring the error parameter values. This implies that the intrinsic and extrinsic parameters cannot be calibrated simultaneously, and that monitoring the error parameters makes it possible to estimate the camera parameter calibration errors. After the second step, the calibrated values agree well with the corresponding true parameter values, and the error parameter values become small. After the third step, the parameter errors are reduced satisfactorily: the errors in the rotation parameters are below 0.0001 rad and those in the optical center position are below 0.1 mm for each coordinate.

6. REAL MARK EXPERIMENT
An example of the mark configuration on the mark board used in the experiment is shown in Fig. 2. The small circular marks are 6 or 10 mm in diameter and are arranged non-coplanarly with about 50 mm spacing. A CCD camera with a 25 mm focal length lens was used in the experiment. The conversion parameters and the optical axis length were obtained in advance on the basis of the camera design parameters given by the manufacturer and their optical calibration data, which will be discussed in Section 8.

Fig. 2. Calibration mark board used in our experiments.

Figure 3 shows the automatic mark number recognition process: mark board image acquisition, automatic threshold level calculation, image binarization, labelling, feature extraction for each labelled region, mark number recognition, mark table listing, and the calibration calculation process. After mark image acquisition, the mark regions were segmented by the automatic threshold level calculation, binarization and labelling procedures. We then extracted the features of each mark, such as the center of gravity and the mark size, as well as the relation of these features to those of adjacent marks, which represents the adjacent mark configuration. The position of each mark is taken as its center of gravity, and the corresponding real mark number was correctly selected using the mark size and the adjacent mark configuration. After making the mark table, which relates each mark number to its real mark position, the camera parameters and error parameters were calculated from it using the proposed method. These calibration processes are fully automatic.

Fig. 3. Automatic mark number recognition process.

The mark image signal was acquired by a TOSPIX image processor and the digital image was sent to a VAX11/780 (1 MIPS). The mark recognition time was about 1 s and the elapsed parameter computation time was less than 1 s. The total elapsed time of the processing, including image acquisition, data transformation and all other factors, was less than 6 s. Even when three cameras were simultaneously calibrated for 3D object recognition by three-view stereo analysis,(2,4) the total elapsed time was less than 9 s.

Table 2 contains some examples of the results of calibration experiments using real marks. The mark board center position was around (0, 260, 0) (unit: mm) and the camera was positioned about 800 mm away from the mark board. In example (a), for which 25 marks were used, the calculated camera parameters show nearly correct values after the first step, except for the angle β. The error parameters, however, have large values due to the error in angle β. After the second step, angle β is well corrected and all the error parameters become small; after the third step, they become sufficiently small. In example (b), for which 14 marks were used, the error parameters are fairly large after the first step. The error in the optical center position c3 is especially large. After the second step, the error parameters become smaller, and they become negligibly small after the third step. Parameter S is below 0.9 pixel. In example (c), for which 12 marks were used, the errors in the camera parameters and the error parameter values are extremely large after the first step. The camera parameter error is presumably caused by the inverse matrix calculation in the least-square method. We often come across unexpectedly large errors that depend on mark conditions such as the number of marks or the mark configuration. It is noteworthy that the second step reduces the camera parameter errors effectively despite the fact that they are so large after the first step. The parameters converge to their most likely values
Table 2. Real mark experiments

    Example (a)     1st step    2nd step    3rd step
    c1 (mm)         -125.91     -125.95     -125.95
    c2 (mm)         -546.94     -546.98     -546.99
    c3 (mm)         -37.23      -37.25      -37.20
    α (rad)         -0.205      -0.205      -0.2053
    φ (rad)          0.0839      0.0834      0.0834
    β (rad)          0.0023      0.00084     0.00087
    εi (pixel)       6.81        0.28        0.025
    εj (pixel)      -9.57        0.18        0.00011
    δ (mm)           0.14        0.002       0.0001

    Example (b)     1st step    2nd step    3rd step
    c1 (mm)         -125.09     -124.28     -125.85
    c2 (mm)         -549.97     -547.95     -547.19
    c3 (mm)         -43.13      -36.41      -37.28
    α (rad)         -0.203      -0.202      -0.205
    φ (rad)          0.0913      0.0808      0.0835
    β (rad)          0.016       0.0018      0.0008
    εi (pixel)       80.3        1.3        -0.15
    εj (pixel)       78.0        17.1       -0.017
    δ (mm)          -0.12        0.12        0.00002

    Example (c)     1st step    2nd step    3rd step
    c1 (mm)         -734.4      -125.45     -126.1
    c2 (mm)         -1712.5     -544.2      -547.1
    c3 (mm)         -297.4      -38.8       -37.30
    α (rad)         -0.515      -0.206      -0.2053
    φ (rad)          0.067       0.0872      0.0832
    β (rad)          0.050       0.0028      0.00091
    εi (pixel)      -8.0         35.0       -0.37
    εj (pixel)      -35.2        23.3       -0.021
    δ (mm)          -24.6        1.22        0.0003

after the third step. These experimental results confirm that the extrinsic parameters can be easily and precisely calibrated by the three steps of the proposed technique.
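The residual S of equation (58), reported in Tables 1 and 2, is straightforward to compute from the reprojected and observed mark positions; a minimal sketch:

```python
import numpy as np

def s_parameter(ij_calc, ij_true):
    """Equation (58): S = {sum (i - i_true)^2 + sum (j - j_true)^2}^(1/2) / n,
    where ij_calc holds the n mark positions reprojected with the calibrated
    parameters and ij_true the corresponding real mark positions."""
    d = np.asarray(ij_calc, float) - np.asarray(ij_true, float)
    return float(np.sqrt((d ** 2).sum()) / len(d))
```

Note that a small S alone does not guarantee correct extrinsic parameters, as example (c) shows; it is monitored together with the error parameters.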
7. ERROR ESTIMATION

For the purpose of estimating the rotation parameter calibration error, the camera was placed on a highly accurate two-axis rotary stage. The camera parameters were calculated every time the camera was rotated by 0.00628 rad. If the calibration error is zero, then the calibrated rotation values should change by exactly 0.00628 rad. Furthermore, in order to evaluate the optical center position calibration error, the mark board was moved by 0.1 mm in the direction of each of the three coordinate axes. If the position error is negligible, then the calibrated camera position values should be independent of the mark board movement and should give the correct coordinates in the world system.

Figure 4 shows a typical example of the calibration error measurements. The camera rotation angle parameter errors are less than 1.2 × 10^-4 rad on average and 5 × 10^-4 rad at maximum. The camera position error is 0.05 mm on average and less than 0.1 mm at maximum. These values correspond to relative errors of 5 × 10^-5 and 1 × 10^-4, respectively. The S parameter is below 1 pixel at maximum, which is probably due to the image digitizing error, the mark setting error and the mark center recognition error.

Fig. 4. Experimental error estimation result: (a) calibration error of camera position in the longitudinal direction; (b) calibration error of camera rotation angle.
8. DISCUSSION
The criteria for measuring the effectiveness of a camera calibration method are (i) autonomy, (ii) accuracy, (iii) simplicity and efficiency, and (iv) flexibility and reliability. Table 3 shows an estimate of how well the conventional iterative optimization approach and the conventional direct calculation approach satisfy these criteria. Our main goal is to overcome the drawbacks of the conventional direct calculation method and obtain good marks for criteria (ii) and (iv) without losing any points on the other criteria. To achieve this goal, it is extremely important to: (a) calculate the extrinsic parameters with high accuracy under the condition of fixed intrinsic parameters; (b) preserve the orthogonality constraints for the camera rotation matrix, not only theoretically but also numerically, in actual calibration using real marks; and (c) obtain high calibration accuracy independently of mark conditions such as the number of marks and the mark configuration. An error monitoring capability, which warns of faulty calibration, is also important.

The conventional direct method fails requirement (a) because the extrinsic and intrinsic parameters cannot be calibrated independently. This means that the conventional method can produce large fluctuations in the calculated intrinsic and extrinsic parameter values. This paper presents a solution for suppressing this fluctuation: error parameters representing the intrinsic parameter fluctuations are defined and then reduced to sufficiently small values by the proposed calculation approach.

The least-square equations, reduced to the matrix equation, and the rotation parameter constraints have to be solved simultaneously, which causes difficulty in satisfying requirement (b); it is impossible to solve these two kinds of equations directly and simultaneously in actual experiments using real marks. In the conventional direct method, the camera parameter components are calculated by the least-square method. However, not all of the constraints are necessarily preserved in the numerical calculation using actual marks and cameras, even though they should be preserved theoretically. In this paper, the parameter values are determined under constraints (11) and (12) after the least-square calculation, so that the calibration results satisfy all of the constraints.

The problem in achieving (c) with the direct approach comes from the fact that the parameters cannot be strictly determined because of the ambiguity of the solution due to the overdetermined system of equations. Furthermore, extremely large errors, probably due to fluctuations around singular points of the inverse matrix in the least-square calculation, can be observed even when the mark image positions calculated using the calibrated parameters coincide well with the corresponding real mark image positions. In the proposed method, even the extremely large errors arising in particular cases can easily be reduced to negligible values, as demonstrated in the real mark experiments. This solves the problem of constraint preservation depending on the mark configuration and enlarges the flexibility of calibration in various applications. It is possible to monitor errors by checking the error parameters, because they show the errors in the intrinsic parameters directly under the assumption of highly accurate extrinsic parameter values. Thus, the proposed technique assures that we can use the calibration results with confidence.

The optical axis point of the camera is assumed to be the center of the image plane. In actual camera use, this assumption poses no problem because the camera direction is defined as the direction from the assumed optical axis point to the lens center.
However, if the displacement of the assumed optical axis point from the measured optical axis point is large, the optical axis point must be evaluated. The displacement, evaluated by obtaining the center of zoom while zooming the lens, was estimated to be typically 5 pixels and within 10 pixels at maximum, which corresponds to a maximum difference of only about 0.04%. This small discrepancy is attributed to the fact that the synchronous camera control circuit was specially designed for our multiple-stereo system.
The optical axis length and the conversion parameters were predetermined prior to performing the proposed procedure. After the camera was taken apart, the dimensional structure and position of the image sensor were measured exactly with precision instruments. Then, after the distance between the mark and the image sensor was measured precisely, the optical axis length and the conversion parameters were calculated from the relation between the real marks and their images. Even if the optical axis length or the conversion parameters include a slight error, the ratios of the optical axis length to the conversion parameters are obtained with high accuracy. It should be noted that what we need to know exactly are these ratios, rather than the optical axis length and the conversion parameters themselves, as is known from equations (4) and (5), which relate object world coordinates to their corresponding image points in the image memory.
Lens distortion is disregarded in the proposed method. To prevent the calibration accuracy from deteriorating because of the distortion effect, the lens aperture ratio is set small and the camera is located sufficiently far from the mark board. No influence of distortion has been observed in our experiments, leading us to believe that distortion can be disregarded under our experimental conditions. Nevertheless, when a lens with a large aperture ratio, such as a microscope lens, is used for small object recognition, distortion estimation is required to measure the position and shape of a tiny object with high accuracy.

Table 3. Estimate of conventional calibration approaches

Autonomy:
  Iterative optimization approach: Poor. A careful initial guess is necessary because the result sometimes depends on the guess.
  Non-iterative direct approach: Good. An initial guess is unnecessary.

Accuracy:
  Iterative optimization approach: Good. Constraints are always preserved; accurate if the initial guess is appropriate.
  Non-iterative direct approach: Poor. Intrinsic parameter fluctuation occurs, and the constraints are not necessarily preserved.

Simplicity and efficiency:
  Iterative optimization approach: Poor. Simple but not efficient because of the iteration procedure.
  Non-iterative direct approach: Good. Simple and efficient because of direct evaluation by least-square calculation.

Flexibility and reliability:
  Iterative optimization approach: Good and poor. Flexible because the constraints are preserved in any case, but not reliable because error monitoring is not provided.
  Non-iterative direct approach: Poor. Results depend on the mark condition, and error monitoring is not provided.
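The point that only the ratios of the optical axis length to the conversion parameters matter can be illustrated with a generic pinhole-projection sketch (illustrative names and values, not the paper's equations (4) and (5)): scaling the optical axis length and a conversion parameter by the same factor leaves the computed image coordinate unchanged.

```python
def image_coord(x, z, f, s):
    """Generic pinhole model: image-memory coordinate of a point at
    lateral offset x and depth z, for optical axis length f and
    conversion parameter s (sensor length per pixel)."""
    return (f / s) * (x / z)   # only the ratio f/s enters

u1 = image_coord(0.05, 0.8, f=0.025, s=1.1e-5)
u2 = image_coord(0.05, 0.8, f=0.025 * 1.03, s=1.1e-5 * 1.03)  # both off by 3%
print(abs(u1 - u2) < 1e-9)  # True: the image point depends only on the ratio
```

A common-mode error in both quantities therefore cancels, which is why slight errors in the individually measured values do not degrade the calibration.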
Distortion estimation by the direct calculation approach is impossible at present, but can be done through iterative optimization. Alternatively, correction using a look-up table derived from a distortion measurement of the lens may be effective. A direct approach to distortion estimation is a problem deserving further investigation.
In our approach, the external camera parameters are calculated from only one image of the non-coplanar mark board. The calculated camera parameters represent the relative position and orientation with respect to the world coordinate system, that is, the absolute position and orientation. When the mark board is placed with a slight error in the world coordinate system, the absolute error of the calibrated values increases by the amount of the mark board setting error as an offset, because the camera parameters are calculated under the premise that the mark board is located exactly at predetermined coordinates in the world system. The easiest case is when the center position of the mark board is defined as the world coordinate origin, or when the world coordinate system is set at a predetermined or assumed position in the mark board coordinate system. When the camera was located about 800 mm from the origin and rotated by about 12 deg, an absolute error within 0.05-0.1 mm could be obtained, as shown by the experimental results. From our experience, the accuracy decreases slightly when the rotation angle is more than 23 deg and is noticeably lower when it is more than 28 deg, though the decrease is surely not fatal for a millimeter-range accuracy control system. This lower accuracy presumably occurs because the image of the mark board is taken from an oblique direction at a large angle. Solving this problem remains as future work.
On the basis of the simulated and experimental results and the comparisons made in the above discussion, we conclude that our proposed calibration method sufficiently satisfies the four criteria mentioned above and promises to make a substantial contribution to future developments in 3D object measurement.
REFERENCES
1. M. Ito, Three dimensional measurements using stereopsis, Trans. IEE Jpn 107-C(7), 613-618 (1987).
2. M. Ito and A. Ishii, Three view stereo analysis, IEEE Trans. Pattern Analysis Mach. Intell. 8(4), 524-532 (1986).
3. S. D. Blostein and T. S. Huang, Error analysis in stereo determination of 3-D point positions, IEEE Trans. Pattern Analysis Mach. Intell. 9(6), 752-765 (1987).
4. M. Ito and A. Ishii, Range and shape measurement using three-view stereo analysis, Proc. Comput. Vision Pattern Recognition, pp. 9-14 (1986).
5. M. Ito, Robot vision modelling: camera modelling and camera calibration, Adv. Robotics 5(3), 321-335 (1991).
6. Y. I. Abdel-Aziz and H. M. Karara, Direct linear transformation from comparator coordinates into object space coordinates in close range photogrammetry, Proc. ASP/UI Symp. Close Range Photogrammetry, pp. 1-19 (1971).
7. I. Sobel, On calibrating computer controlled cameras for perceiving 3-D scenes, Artif. Intell. 5, 185-198 (1974).
8. H. M. Karara and Y. I. Abdel-Aziz, Accuracy aspects of non-metric imageries, Photogrammetric Engng XL, 1107-1117 (1974).
9. I. W. Faig, Calibration of close range photogrammetric systems: mathematical formulation, Photogrammetric Engng Remote Sensing 41(12), 1479-1486 (1975).
10. E. A. Parrish and A. K. Goksel, A camera model for natural scene processing, Pattern Recognition 9, 131-136 (1976).
11. M. Ishii, S. Sakane, M. Kakikura and Y. Mikami, 3-D position sensor and its application to a robot, IEICE Jpn Tech. Paper, SC-84-9, pp. 91-100 (1984).
12. S. Ganapathy, Decomposition of transformation matrices for robot vision, IEEE Int. Conf. Robotics, pp. 130-139 (1984).
13. R. Y. Tsai, An efficient and accurate camera calibration technique for 3-D machine vision, Proc. Comput. Vision Pattern Recognition, pp. 364-374 (1986).
14. R. K. Lenz and R. Y. Tsai, Techniques for calibration of the scale factor and image center for high accuracy 3-D machine vision metrology, IEEE Trans. Pattern Analysis Mach. Intell. 10(5), 713-720 (1988).
15. D. C. Tseng and Z. Chen, Computing location and orientation of polyhedral surfaces using a laser-based vision system, IEEE Trans. Robotics Automn 7(6), 842-848 (1991).
16. M. H. Han and S. Rhee, Camera calibration for three-dimensional measurement, Pattern Recognition 25, 155-164 (1992).
17. T. Echigo, A camera calibration technique using three sets of parallel lines, Mach. Vision Applic. 3(3), 159-167 (1990).
18. L. L. Wang and W. H. Tsai, Camera calibration by vanishing lines for 3-D computer vision, IEEE Trans. Pattern Analysis Mach. Intell. 13(4), 370-376 (1991).
19. W. Chen and B. C. Jiang, 3-D camera calibration using vanishing point concept, Pattern Recognition 24, 57-67 (1991).
20. J. Weng, P. Cohen and M. Herniou, Calibration of stereo cameras using a non-linear distortion model, Proc. Int. Conf. Pattern Recognition 10(1), 246-253 (1990).
21. N. A. Thacker and J. E. W. Mayhew, Optimal combination of stereo camera calibration from arbitrary stereo images, Image Vision Comput. 9(1), 27-32 (1991).
About the Author--MINORU ITO received a B.S. degree in applied physics and an M.S. degree in nuclear engineering from Tokyo Institute of Technology, Japan, in 1967 and 1969, respectively. He joined Nippon Telegraph and Telephone Corporation (NTT) in 1969, where he was engaged in the study of optical memory, optical beam control, piezoelectric devices, optical fibers, laser diodes, and oxide superconductors. He received a Ph.D. degree from Tokyo Institute of Technology in 1982. Since 1983, he has been engaged in research on computer vision, robot vision, and pattern inspection. At present, he is a professor at Kogakuin University. His current interests are in the fields of computer vision, 3D measurement, mobile robot vision, image processors, computer graphics, and pattern inspection. Dr Ito is a member of the Institute of Electronics, Information and Communication Engineers of Japan, the Information Processing Society of Japan, and the Society of Instrument and Control Engineers.
About the Author--AKIRA ISHII received the B.E., M.E., and Dr. of Eng. degrees in electrical engineering from Waseda University, Tokyo, Japan, in 1966, 1968, and 1981, respectively. He joined Nippon Telegraph and Telephone Corporation in 1968. Since then he has been engaged in research on optical memory, image processing, and robot vision at NTT Electrical Communications Laboratories. At present he is a member of the board of directors of NTT Fanet Systems Corporation and is in charge of factory automation technology development. Dr Ishii is a member of the IEEE Computer Society.