Robotics and Computer-Integrated Manufacturing 15 (1999) 23-37
Modeling errors for dimensional inspection using active vision

Christopher C. Yang, Michael M. Marefat*, Frank W. Ciarallo

Department of Computer Science and Information Systems, The University of Hong Kong, Hong Kong
Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ 85721, USA
Department of Systems and Industrial Engineering, The University of Arizona, Tucson, AZ 85721, USA

Received 19 May 1998; accepted 20 July 1998
Abstract

In active vision inspection, errors in dimensional measurements are often due to displacement of the sensor and image digitization effects. Using a model of error in linear measurements based on a normally distributed sensor displacement, uniform image digitization and a geometric approximation, the accuracy of measurements from a particular sensor setting can be assessed. In comparison with experimental measurements, this error model demonstrates a high predictive ability. Related experiments demonstrate a procedure for assessing the accuracy of sensor settings, providing insight into which sensor settings lead to lower measurement variances and thus higher measurement reliability. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Computer integrated inspection; Active vision; 3D vision; Image understanding; Scene analysis; Error analysis; Quantization error; Displacement error; Dimensional inspection
1. Introduction

The geometric features measured in machine vision inspection are those that do not change with the environment or the setup of the vision system (the position and orientation of the camera). Examples of such invariant features are the length, width, area and volume of a pocket. Measurement of invariant features by an automated, computer vision inspection system introduces errors in the observed dimensions. The sources of uncertainties that lead to these errors include:
- the displacement of the camera (position and direction),
- the quantization error in image digitization,
- illumination error (poor contrast and low intensity),
- motion of the object or the camera setup,
- parallax (object-to-camera distance too small).
The errors resulting from motion and parallax can be minimized to a negligible level by careful design and control of the environment. However, the errors due to the displacement of the sensor, quantization errors in
* Corresponding author. Tel.: 001 520 621 4852; Fax: 001 520 621 8076; E-mail: [email protected]
image digitization, and illumination errors cannot be avoided and always produce a significant effect on the measurement. The effect of these errors on the measurements can be minimized by carefully modeling the process leading to the errors. For given dimensions and tolerances of a part, the inspection process capability can also be determined with a better understanding of the errors and uncertainties that exist. In this paper, we focus on the analysis of spatial quantization errors and displacement errors and their effects on the dimensional measurement of line segments in digitized images generated by an active vision system.

Kamgar-Parsi [1] developed the mathematical tools for computing the average error due to quantization and derived an analytic expression for the probability density of error of a function of an arbitrary number of independent quantized variables. Blostein [2] analyzed the effect of image plane quantization on the 3D point position error obtained by triangulation from two quantized image planes in a stereo setup. Ho [3] expressed the digitizing error for various geometric features in terms of the dimensionless perimeter of the object and expanded the analysis to include the error in other measurements, such as the orientation, for both two-dimensional and three-dimensional measurements. Griffin [4] discussed an approach to integrate the errors inherent in the visual
inspection, such as spatial quantization error, illumination error and positional error, to determine the sensor capability for inspecting a specified part dimension using binary images.

In addition to quantization errors, in active vision inspection the uncertainty arising from robot motion and sensory information is also important. Su and Lee [5] presented a methodology for manipulating and propagating spatial uncertainties in a robotic assembly system for generating executable actions for accomplishing a desired task. Renders et al. [6] presented a calibration technique for robots based on a maximum likelihood approach for the identification of errors of position and orientation. Menq et al. [7] presented a framework to characterize the distribution of the position errors of robot manipulators over the workspace and to study their statistical properties. Chen and Chao [8] identified and parameterized the sources that contribute to the positioning error and estimated the values of the parameters. Veitschegger and Wu [9] presented a calibration algorithm for finding the values of the independent kinematic errors by measuring the end-effector Cartesian positions and proposed two compensation algorithms: a differential error transform compensation algorithm and a Newton-Raphson compensation algorithm. Bryson [10] discussed methods for the measurement and characterization of the static distortion of position data from 3D trackers, including least-squares polynomial fit calibration, linear lookup calibration, and bump lookup calibration. Smith and Cheeseman [11] presented a method for explicitly representing and manipulating the uncertainty associated with the transformation between coordinate frames representing the relative locations of objects. Sensors are used to reduce this uncertainty.

The previous research work has neither provided an analysis of the spatial quantization error for line segments in one and two dimensions nor investigated the effect of position and orientation errors in camera placement on the dimensional measurement of object features. In addition, no analysis of expected accuracy in the dimensional measurements is provided. In this paper, we analyze the spatial quantization error in one dimension and two dimensions. The probability density function of the spatial quantization error of a point in one dimension has been modeled as a uniform distribution in previous work [3, 4]. Using a similar representation, we analyze the error in the measured dimension of a line in one or two dimensions. The mean, the variance, the range and the probability density function of the error are derived. We also analyze the displacement error in an active vision system. The total error in active vision inspection is then characterized based on the integration of the quantization error and the displacement error. The accuracy in inspection of a set of entities by a given sensor setting is then analyzed.
Two experiments are presented to illustrate the fit of the developed error distributions to the actual measurement data and to demonstrate how these techniques can be used to compare the quality of measurements from different sensor settings.
2. Quantization errors in inspection

Spatial quantization error is important in inspection when the size of a pixel is significant compared to the allowable tolerances in the object dimension. A quantized sample indicates itself as part of the object image if and only if more than half of it is covered by the edge segment. Significant distortion can be produced by this kind of quantization. A point in the image can only be located to within one pixel of accuracy with traditional edge detection techniques. Recently, several new edge detection techniques have reported subpixel accuracy [12-16], to improve the precision of measurement on images. Many techniques in this group are based on interpolation of the acquired image. Although interpolation-based techniques increase the resolution of quantization to obtain sub-pixel accuracy, quantization still remains at some level. Our analysis of the spatial quantization error characterizes its effect on a desired dimensional measurement.

2.1. One-dimensional spatial quantization error

In one dimension, a straight line occupies a sequence of pixels, with the width of each pixel defined to be r_x. The range of the error in this case can be one pixel on either side of the line. Fig. 1 shows an example. The number of pixels completely covered by the line is l. The lengths that partially cover the last pixel on each end of the line are u and v (where u ≤ r_x and v ≤ r_x). The true length of the line, L (before quantization), is

L = l r_x + u + v.   (1)
Assume that u and v are independent random variables and that each has a uniform distribution in the range [0, r_x]. The assumption of independence between u and v is valid because the actual length of the edge segment is not known. u and v are assumed to be uniformly distributed because the exact locations of the terminal points are not known and there is equal probability that a terminal point lies anywhere within a pixel. The uniform probability density functions (pdfs) are

f_u(u) = 1/r_x   if 0 ≤ u ≤ r_x,

and

f_v(v) = 1/r_x   if 0 ≤ v ≤ r_x.
Fig. 1. A line of length l r_x + u + v superimposed on a sequence of pixels of size r_x. The length of the line partially covering the last pixel on the left is u and the length of the line partially covering the last pixel on the right is v.
If u ≥ 0.5 r_x, the last pixel on the left will be quantized as part of the line. Otherwise, the pixel will not be considered part of the line. Similar conditions hold for v on the right end of the line. The length of the line after quantization, L_q, is given by

L_q = (l + 2) r_x   if u ≥ 0.5 r_x and v ≥ 0.5 r_x,
L_q = (l + 1) r_x   if (u ≥ 0.5 r_x and v < 0.5 r_x) or (u < 0.5 r_x and v ≥ 0.5 r_x),
L_q = l r_x         if u < 0.5 r_x and v < 0.5 r_x.

The quantization error, e_q, is defined as L_q − L. Expressing e_q = e_u + e_v, where e_u and e_v are the quantization errors of u and v, respectively, it is easily shown that E[e_q] = 0 and that the range of the quantization error is [−r_x, +r_x]. The variance Var[e_q] = E[e_q²] can be shown to be r_x²/6.
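These statistics are easy to verify numerically. The short Python sketch below simulates the quantization of randomly placed line segments under the stated uniform assumptions; the pixel size, the number of fully covered pixels and the sample size are arbitrary illustrative values.

```python
import numpy as np

# Monte Carlo check of the one-dimensional quantization error model.
# Assumptions (illustrative only): pixel width r_x = 1.0, fixed l, and
# u, v uniform on [0, r_x] as described above.
rng = np.random.default_rng(0)
r_x, l, n = 1.0, 20, 200_000

u = rng.uniform(0.0, r_x, n)
v = rng.uniform(0.0, r_x, n)
L_true = l * r_x + u + v                       # Eq. (1)

# A partially covered end pixel is counted only if it is at least half covered.
L_quant = (l + (u >= 0.5 * r_x) + (v >= 0.5 * r_x)) * r_x
e_q = L_quant - L_true                          # quantization error

print("mean  ", e_q.mean())                     # ~ 0
print("var   ", e_q.var(), "expected", r_x**2 / 6)
print("range ", e_q.min(), e_q.max())           # within [-r_x, +r_x]
```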
2.2. Two-dimensional quantization errors in dimensional measurement

Fig. 2 shows a line on a two-dimensional array of pixels. The resolution of the image is r_x × r_y, where r_x is the width of a pixel and r_y is the height of a pixel. The horizontal component of the line length, L_x, is l_x r_x + u_x + v_x. The vertical component of the length of the line, L_y, is l_y r_y + u_y + v_y. The actual dimension, L, is therefore √(L_x² + L_y²).

The quantized length is L_q = √(L_qx² + L_qy²), where L_qx and L_qy are the horizontal and vertical quantized lengths, respectively. In two dimensions, there are four random variables, two for the horizontal length, u_x and v_x, and two for the vertical length, u_y and v_y. All four are assumed to be uniformly distributed.

We will use a geometric approximation to characterize the two-dimensional quantization error. Fig. 3 shows a line with length L lying at an angle γ to the horizontal axis. If the original line and the quantized line are not parallel, i.e. (e_qx + L_x)/(e_qy + L_y) ≠ L_x/L_y, the 2D quantization error is

e_q = L_q − L = √((L cos γ + e_qx)² + (L sin γ + e_qy)²) − L.
Fig. 2. A line on a two-dimensional array of pixels. The horizontal length of the line is l_x r_x + u_x + v_x. The vertical length of the line is l_y r_y + u_y + v_y.
Fig. 3. The original line with length L, at angle γ to the horizontal axis, and the quantized line with length L_q, parallel to the original line. (The figure is not drawn to scale; e_qx and e_qy are much smaller than the length of the line in the actual case.)
Although the lines may not be exactly parallel, they are approximately parallel because e_qx and e_qy are very small compared to the length of the line L. Therefore,

ẽ_q ≈ e_qx cos γ + e_qy sin γ.   (2)

It is easy to see that Eq. (2) is exact when the lines are parallel as in Fig. 3. Using this geometric approximation, the mean of the quantization error in two dimensions can be shown to be E[ẽ_q] ≈ 0, because E[e_qx] and E[e_qy] are both zero as shown in Section 2.1. The variance of the quantization error in two dimensions can be shown to be

Var[ẽ_q] = (cos²γ r_x² + sin²γ r_y²)/6,

because the variances of e_qx and e_qy are r_x²/6 and r_y²/6, respectively.

2.3. Probability density function of quantization error

In this section, we derive the probability density function of the total two-dimensional quantization error
Fig. 4. The probability density functions of the two-dimensional spatial quantization error (r_x = r_y = 1) with the angle between the projected line and the horizontal axis of the image plane varying between 0° and 90° in (a), 10° and 80° in (b), 20° and 70° in (c), 30° and 60° in (d) and 40° and 50° in (e).
based on the geometric approximation. A random variable, W, is said to have a triangle distribution, triang(r_w), when its pdf is

f_W(w) = w/r_w² + 1/r_w     if −r_w ≤ w ≤ 0,
f_W(w) = −w/r_w² + 1/r_w    if 0 ≤ w ≤ r_w,    (3)
f_W(w) = 0                  otherwise.

The error in the horizontal dimension, e_qx, has a triangle distribution, triang(r_x). The error in the vertical dimension, e_qy, has a triangle distribution, triang(r_y). The approximate two-dimensional spatial quantization error, ẽ_q, using Eq. (2) is in terms of the two random variables e_qx and e_qy. The pdf is expressed in terms of ẽ_q and the joint statistics of e_qx and e_qy. If e_qx and e_qy are independent, f(e_qx, e_qy) = f_{e_qx}(e_qx) f_{e_qy}(e_qy), and the pdf of the spatial quantization error can be shown to be

f_{ẽ_q}(ẽ_q) = (1/|cos γ|) ∫_{−∞}^{+∞} f_{e_qx}( tan γ (ẽ_q/sin γ − q) ) f_{e_qy}(q) dq.   (4)
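As a numerical illustration of Eq. (4), the sketch below evaluates the integral by discretization and cross-checks it against a direct Monte Carlo simulation of ẽ_q = e_qx cos γ + e_qy sin γ, assuming r_x = r_y = 1 and γ = 30°; all values are illustrative only.

```python
import numpy as np

# Illustrative parameters (assumed): unit pixels and a 30-degree segment.
r_x, r_y, gamma = 1.0, 1.0, np.radians(30.0)

def triang_pdf(w, r):
    """Triangular density triang(r) on [-r, r], Eq. (3)."""
    return np.where(np.abs(w) <= r, (r - np.abs(w)) / r**2, 0.0)

# Discretized evaluation of the integral in Eq. (4).
q = np.linspace(-2.0, 2.0, 4001)
def pdf_eq4(e):
    integrand = triang_pdf(np.tan(gamma) * (e / np.sin(gamma) - q), r_x) \
                * triang_pdf(q, r_y)
    return np.sum(integrand) * (q[1] - q[0]) / abs(np.cos(gamma))

# Monte Carlo cross-check: triang(1) errors are sums of two uniforms on [-0.5, 0.5].
rng = np.random.default_rng(1)
n = 500_000
e_qx = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
e_qy = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
e_mc = np.cos(gamma) * e_qx + np.sin(gamma) * e_qy

for e in (0.0, 0.3, 0.6):
    count = np.sum((e_mc > e - 0.01) & (e_mc < e + 0.01))
    print(e, pdf_eq4(e), count / (n * 0.02))     # analytic vs. empirical density
```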
Fig. 4 shows the probability density functions of the two-dimensional spatial quantization error for γ = 0°, 10°, 20°, 30°, 40°, 50°, 60°, 70°, 80° and 90°, with r_x = r_y = 1. When the line segment is horizontal or vertical, the angle is 0° or 90°, respectively, and the total error has a triangle distribution (Fig. 4a). The range of the error is [−1, +1] since both r_x and r_y are 1. As the angle increases from 0° to 45°, the probability density function changes from a triangular function to a bell-shaped function whose peak decreases from 1 to 0.93. As the angle increases from 45° to 90°, the peak increases from 0.93 back to 1.

3. Displacement error

In active vision inspection, different sensor settings are used to position the active head to obtain and inspect various dimensions of interest. In placing the active head to achieve this task, errors in the final position and orientation are common. An end-effector has six degrees of freedom: three translations and three orientations. The image of the inspected part obtained from the sensor, the entities observable from that image, and the measured dimensions of the entities all depend on the sensor location and viewing direction. If the sensor location and orientation are different from the planned sensor setting (i.e., there is sensor displacement), the same entities may be observable, but as a result of the displacement, the dimensions derived from the image will be inaccurate. The difference between the observed dimensions and the actual dimensions is defined as the displacement error.

3.1. Translational and orientational errors in perspective images

In active vision inspection, the desired positions and orientations of the head are provided to the servo control, which accomplishes the sensor placement task by a sequence of movements. The horizontal and vertical
displacements of the projected points on the two-dimensional image are in terms of the six orientation and translation parameters of the sensor setting, the focal length of the sensor, the translational and orientational errors, and the three-dimensional coordinates of the model. Given the orientation and location of the sensor and the coordinates of the object, we can compute the projected image of the object as follows:
[x_I, y_I, z_I, c]ᵀ = P_per Q [x_w, y_w, z_w, 1]ᵀ,   (5)

u = x_I/c  and  v = y_I/c,   (6)

where [x_w, y_w, z_w]ᵀ are the coordinates of the object in the world coordinate system, (u, v) are the projected coordinates in the image plane, P_per is the matrix for perspective projection, and Q is the transformation matrix between the world coordinate system and the image coordinate system:

P_per = [ 1   0   0     0
          0   1   0     0
          0   0   1     0
          0   0  −1/f   1 ],

Q = [ r11  r12  r13  t_x
      r21  r22  r23  t_y
      r31  r32  r33  t_z
      0    0    0    1  ].
Here f is the focal length of the sensor, [r_ij] (3 × 3) is the rotation submatrix in terms of the orientation parameters, and [t_k] (3 × 1) is the translation submatrix in terms of the translation parameters. If the sensor is displaced, the correct coordinates of the projected points must be computed with a modification of the Q matrix, because the rotation and translation parameters are distorted by the displacement. Thus, a matrix Q̃ must be substituted for Q in Eq. (5) to compensate. Q̃ is in terms of the three translational errors, dx, dy, dz, the three orientational errors, δx, δy, δz, and the original translational and orientational parameters. The perspective matrix is unchanged because its only parameter, f, is fixed.

Q̃ = Q + ΔQ = Trans(dx, dy, dz) Rot(x, δx) Rot(y, δy) Rot(z, δz) Q,   (7)

where Trans(dx, dy, dz) is a transformation representing a translation by dx, dy and dz, and Rot(x, δx), Rot(y, δy) and Rot(z, δz) are transformations representing a differential
rotation about the x, y and z directions, respectively. Δ is given as

Δ = [  0   −δz   δy   dx
      δz     0  −δx   dy
     −δy    δx    0   dz
       0     0    0    0 ].
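To make Eqs. (5)-(7) concrete, the following sketch projects a world point through a nominal sensor pose and through the same pose perturbed by small translational and rotational errors; the pose, focal length, point coordinates and error magnitudes are assumed values chosen only for illustration.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans(dx, dy, dz):
    T = np.eye(4); T[:3, 3] = [dx, dy, dz]; return T

def project(Q, p_world, f):
    """Perspective projection of Eqs. (5)-(6): returns image coordinates (u, v)."""
    P_per = np.eye(4); P_per[3, 2] = -1.0 / f
    x, y, z, c = P_per @ Q @ np.append(p_world, 1.0)
    return x / c, y / c

# Illustrative (assumed) nominal pose Q, focal length and object point.
f = 25.0
Q = trans(5.0, -2.0, -200.0) @ rot_y(np.radians(10.0))
p = np.array([30.0, 12.0, 40.0])
u, v = project(Q, p, f)

# Displaced pose per Eq. (7): small translational and rotational errors.
dx, dy, dz = 0.05, -0.03, 0.04            # translational errors
ax, ay, az = 0.001, -0.0005, 0.0008       # orientational errors (rad)
Q_disp = trans(dx, dy, dz) @ rot_x(ax) @ rot_y(ay) @ rot_z(az) @ Q
u_d, v_d = project(Q_disp, p, f)

print("displacement errors e_du, e_dv:", u_d - u, v_d - v)   # cf. Eqs. (8)-(9)
```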
Using Eqs. (5)-(7), the horizontal and vertical displacement errors, e_du and e_dv, can be shown to be

e_du = ũ − u
     = ( [λ1 λ2 λ3 λ4 λ5 λ6 λ7] [δx δy δz dx dy dz 1]ᵀ ) / ( [λ15 λ16 λ17 λ18 λ19 λ20 λ21] [δx δy δz dx dy dz 1]ᵀ ),   (8)

e_dv = ṽ − v
     = ( [λ8 λ9 λ10 λ11 λ12 λ13 λ14] [δx δy δz dx dy dz 1]ᵀ ) / ( [λ15 λ16 λ17 λ18 λ19 λ20 λ21] [δx δy δz dx dy dz 1]ᵀ ),   (9)
where j "fC1C2, j "f (( f!C3)C3!C1), j " fC2(C3!f ), j "f ( f!C3), j "0, j "fC1, j "0, j "f (C2\( f!C3)C3), j "!fC1C2, j "fC1( f! C3), j "0, j "f ( f!C3), j "fC2, j "0, j " !C2( f!C3), j "C1( f!C3), j "0, j "0, j "0, j "f!C3, j "( f!C3) and C1"r x U #r y #r z #t , C2"r x #r y #r z #t , U U V U U U W and C3"r x #r y #r z #t . U U U X In Eqs. (8) and (9), the displacement errors are in terms of the focal length of the sensor, the three-dimensional world coordinates, (x , y , z ), the translation and U U U orientation parameters of the sensor, and the translational and orientational errors of the active head [dx, dy, dz, dx, dy, dz]. As a result, for different threedimensional points projected onto an image plane, the horizontal and vertical displacement errors on the image are not the same even if the focal length of the sensor, and the translational and orientational errors of the manipulator are the same. For instance, two points on a threedimensional object, (x1, y1, z1) and (x2, y2, z2), may have unequal horizontal displacement errors, e and e , and BS BS unequal vertical displacement errors, e and e . This BT BT implies that the distribution of the displacement error for each projected point in an image is unique in spite of being generated by the same distribution of translational and orientational errors of the sensor. 3.2. Probability density function of displacement errors Suppose the uncertainties in translation and orientation errors are all normal distributed with zero mean (k "0), [25]: ? 1 f (a)" e\?N? (10) ? (2np ? where a represents any one of the transaction errors dx, dy,
dz, or the orientational errors δx, δy, δz, and σ_a² is the variance. Normal distributions allow simple propagation of the errors through a linear relationship, as shown below in Eq. (11). Since the image coordinate errors e_du and e_dv are in terms of the six translation and orientation errors, we compute the probability density function of these errors as follows. If a = b1 a1 + b2 a2 + ... + bn an + a0, where the ai are all normally distributed with zero mean and independent for i = 1, 2, ..., n, and a0 is a constant, then a is normally distributed with μ_a = a0 and

σ_a² = b1² σ_a1² + b2² σ_a2² + ... + bn² σ_an².   (11)

From Eqs. (8) and (9), the horizontal and vertical displacement errors, e_du and e_dv, are rational functions where both the numerator and the denominator are in terms of the three rotational errors and three translational errors of the active head and a constant. Let

e_du = φ/χ  and  e_dv = ψ/χ,   (12)

where

φ = [λ1 λ2 λ3 λ4 λ5 λ6 λ7] [δx δy δz dx dy dz 1]ᵀ,
ψ = [λ8 λ9 λ10 λ11 λ12 λ13 λ14] [δx δy δz dx dy dz 1]ᵀ,
χ = [λ15 λ16 λ17 λ18 λ19 λ20 λ21] [δx δy δz dx dy dz 1]ᵀ.

Using Eqs. (10) and (11), the terms in Eq. (12) are normally distributed with means and variances as follows:

μ_φ = λ7 = 0 and σ_φ² = λ1²σ_δx² + λ2²σ_δy² + λ3²σ_δz² + λ4²σ_dx² + λ5²σ_dy² + λ6²σ_dz²,   (13)

μ_ψ = λ14 = 0 and σ_ψ² = λ8²σ_δx² + λ9²σ_δy² + λ10²σ_δz² + λ11²σ_dx² + λ12²σ_dy² + λ13²σ_dz²,   (14)

μ_χ = λ21 = (f − C3)² and σ_χ² = λ15²σ_δx² + λ16²σ_δy² + λ17²σ_δz² + λ18²σ_dx² + λ19²σ_dy² + λ20²σ_dz².   (15)

Since φ, ψ, and χ are all in terms of the same translational and rotational errors, they are dependent on each other. To find the probability density functions of e_du and e_dv, the correlation coefficients of φ and χ, and of ψ and χ, are needed. The correlation coefficient of φ and χ is

r_φχ = Cov(φ, χ)/(σ_φ σ_χ),

where Cov(φ, χ) = λ1λ15 σ_δx² + λ2λ16 σ_δy² + λ3λ17 σ_δz² + λ4λ18 σ_dx² + λ5λ19 σ_dy² + λ6λ20 σ_dz². A similar expression can be written for the correlation coefficient of ψ and χ, r_ψχ. If we define

g1(e_du) = σ_φ² − 2 r_φχ σ_φ σ_χ e_du + σ_χ² e_du²,
g2(e_du) = σ_φ − r_φχ σ_χ e_du,
h1(e_dv) = σ_ψ² − 2 r_ψχ σ_ψ σ_χ e_dv + σ_χ² e_dv²,
h2(e_dv) = σ_ψ − r_ψχ σ_χ e_dv,   (16)

the probability density functions of the horizontal and vertical errors of the image coordinates can be shown to be

f_{e_du}(e_du) = (σ_φ σ_χ √(1 − r_φχ²) / (π g1(e_du))) exp(−μ_χ²/(2(1 − r_φχ²)σ_χ²))
  + (g2(e_du) μ_χ σ_φ / √(2π g1³(e_du))) exp(−μ_χ² e_du²/(2 g1(e_du))) erf( g2(e_du) μ_χ / (σ_χ √(2(1 − r_φχ²) g1(e_du))) ),   (17)

f_{e_dv}(e_dv) = (σ_ψ σ_χ √(1 − r_ψχ²) / (π h1(e_dv))) exp(−μ_χ²/(2(1 − r_ψχ²)σ_χ²))
  + (h2(e_dv) μ_χ σ_ψ / √(2π h1³(e_dv))) exp(−μ_χ² e_dv²/(2 h1(e_dv))) erf( h2(e_dv) μ_χ / (σ_χ √(2(1 − r_ψχ²) h1(e_dv))) ).   (18)
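The density of Eq. (17) can be checked by sampling. The sketch below draws correlated normal samples for φ and χ with assumed moments (σ_φ, σ_χ, μ_χ, r_φχ) and compares a histogram of e_du = φ/χ with the closed-form expression; all numerical values are illustrative.

```python
import numpy as np
from math import erf, pi, sqrt, exp

# Illustrative (assumed) moments for the numerator and denominator of Eq. (12).
sig_phi, sig_chi, mu_chi, r_pc = 2.0, 2.0, 12.0, 0.5

def pdf_e_du(e):
    """Closed-form density of e_du = phi/chi, Eq. (17)."""
    g1 = sig_phi**2 - 2*r_pc*sig_phi*sig_chi*e + sig_chi**2 * e**2
    g2 = sig_phi - r_pc*sig_chi*e
    t1 = (sig_phi*sig_chi*sqrt(1 - r_pc**2) / (pi*g1)) \
         * exp(-mu_chi**2 / (2*(1 - r_pc**2)*sig_chi**2))
    t2 = (g2*mu_chi*sig_phi / sqrt(2*pi*g1**3)) \
         * exp(-mu_chi**2 * e**2 / (2*g1)) \
         * erf(g2*mu_chi / (sig_chi*sqrt(2*(1 - r_pc**2)*g1)))
    return t1 + t2

# Monte Carlo cross-check: phi ~ N(0, sig_phi^2), chi ~ N(mu_chi, sig_chi^2),
# with correlation r_pc between them.
rng = np.random.default_rng(2)
cov = [[sig_phi**2, r_pc*sig_phi*sig_chi], [r_pc*sig_phi*sig_chi, sig_chi**2]]
phi, chi = rng.multivariate_normal([0.0, mu_chi], cov, size=1_000_000).T
e_mc = phi / chi

for e in (-0.3, 0.0, 0.3):
    count = np.sum((e_mc > e - 0.01) & (e_mc < e + 0.01))
    print(e, pdf_e_du(e), count / (len(e_mc) * 0.02))
```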
3.2.1. Sensitivity to translational and orientational variance In order to study the effect of the displacement errors on active vision inspection, the characteristics of their probability density functions must be understood. The last section derived the probability density functions for the horizontal and vertical errors in the image coordinates of a point based on the displacement errors. The parameters in these functions are the focal length of the camera, f, and the coordinates of the ideal projection point in the camera coordinate system, (C1, C2, C3). This section explores the sensitivity of the pdfs to the variance of the positioning errors.
Fig. 5. The probability density functions of the displacement errors due to displacement of the end-effector in the horizontal or vertical direction, f_{e_du}(e_du) or f_{e_dv}(e_dv): (a) μ_χ = 1.0, σ_φ or σ_ψ = 1.0, 2.0, 3.0, 4.0, 5.0, σ_χ = 2.0, and r_φχ = r_ψχ = 0.5; (b) μ_χ = 1.0, σ_φ = σ_ψ = 2.0, σ_χ = 1.0, 2.0, 3.0, 4.0, 5.0, and r_φχ = r_ψχ = 0.5.
The pdfs in Eqs. (17) and (18) depend on the variances of the errors in Eqs. (8) and (9). These variances depend on f, C1, C2, C3, and the variances of dx, dy, dz, δx, δy and δz. Let σ_i² = σ_δx² = σ_δy² = σ_δz² and σ_j² = σ_dx² = σ_dy² = σ_dz². Under these conditions, σ_φ², σ_ψ², and σ_χ² can be further simplified. If the exact variances for each of the orientational and translational errors are available, the exact values of σ_φ², σ_ψ², and σ_χ² can be obtained from Eqs. (13)-(15). However, the values for the orientational errors are not the same as for the translational errors. Therefore,

σ_φ² = f²(C1²C2² + C2²(f − C3)² + (C1² + C3² − fC3)²) σ_i² + f²(C1² + (f − C3)²) σ_j²,   (19)

σ_ψ² = f²(C1²C2² + C1²(f − C3)² + (C2² + C3² − fC3)²) σ_i² + f²(C2² + (f − C3)²) σ_j²,   (20)

σ_χ² = (C1² + C2²)(f − C3)² σ_i² + (f − C3)² σ_j².   (21)

From the coefficients of σ_i² and σ_j² in Eqs. (19)-(21), σ_φ, σ_ψ, and σ_χ increase in the same direction as changes in σ_i and σ_j. The effect of changes of the variances on the probability density functions of the displacement errors is shown in Figs. 5a and b. Fig. 5a shows the probability density functions of the horizontal or vertical displacement errors with σ_φ and σ_ψ varying from 1.0 to 5.0, σ_χ = 2.0, μ_χ = 1.0, and r_φχ = r_ψχ = 0.5. Fig. 5b shows the probability density functions of the horizontal or vertical displacement errors with σ_χ varying from 1.0 to 5.0, σ_φ and σ_ψ = 2.0, μ_χ = 1.0, and r_φχ = r_ψχ = 0.5. These figures show that increasing the standard deviation of the numerator in the probability density function increases the mode, decreases the probability density value of the mode and increases the width of the density function.
In contrast, increasing the standard deviation of the denominator in the probability density function decreases the mode, increases the probability density value of the mode and decreases the width of the density function. However, the variances of the numerator, σ_φ and σ_ψ, and of the denominator, σ_χ, all increase with the values of f, C1, C2, and C3. Since the effects of these variances on the probability density functions of the displacement errors are opposite, and they change in the same direction as changes in the focal length of the sensor and the location of the projection point, the effects of the variances of the numerator and denominator on the displacement errors tend to cancel each other.

3.3. Displacement errors in dimensional measurement

The displacement errors in projected points result in errors in the measured area of a surface, the measured curvature of an arc or the measured length of a line segment. These features are composed of the projections of the corresponding points from the three-dimensional model. The analysis of the displacement errors in these measurements is very important in visual inspection and in other computer vision tasks such as gazing, pattern recognition, etc. In this section, we investigate the displacement errors introduced in the dimensional measurement of linear segments.

The coordinates of the end-points of a linear segment in an image are (u1, v1) and (u2, v2). The length of the line segment is the distance between the end-points. Due to the displacement of the sensor, the end-points are not projected to the desired locations (u1, v1) and (u2, v2). Instead, they are displaced to (ũ1, ṽ1) and (ũ2, ṽ2). Different translational and orientational errors of the sensor placement result in different displacements of the projected segment. For a given distribution of the translational and orientational errors of the sensor, the
Fig. 6. The horizontal errors, e_u1 and e_u2, and the vertical errors, e_v1 and e_v2, due to the displacement of the camera.
displacement error distribution depends on the point being projected. Thus each line being dimensioned has a unique error distribution.

The displacement error in the dimension of the segment is composed of two components, the horizontal component e_x and the vertical component e_y. The horizontal component, e_x, is equal to e_u2 − e_u1, and similarly, the vertical component, e_y, is equal to e_v2 − e_v1 (Fig. 6).

The probability density functions of e_u1, e_u2, e_v1, and e_v2 are given in Eqs. (17) and (18). The probability density functions of e_x and e_y can be obtained by integrating the probability density functions of e_u1 and e_u2, and of e_v1 and e_v2, respectively.

The correlation between e_u1 and e_u2 is important for deriving the density function of e_x and is investigated by a simulation as follows (see the sketch after Fig. 7):

1. Randomly create 100 pairs of end-points, (x1_j, y1_j, z1_j) and (x2_j, y2_j, z2_j), where j = 1, 2, ..., 100. These pairs of end-points are generated such that they satisfy the camera constraints (such as the focus constraint and field-of-view constraints).
2. For each pair of end-points:
   2.1. Create 50 sets of normally distributed translational and orientational errors, (dx_i, dy_i, dz_i, δx_i, δy_i, δz_i).
   2.2. For each set of errors, compute e_u1 and e_u2 using Eqs. (8) and (9).
   2.3. Based on the 50 sets of results, compute the covariance between e_u1 and e_u2 and the variances of e_u1 and e_u2.
   2.4. Compute the correlation coefficient between e_u1 and e_u2, r_j.
3. Plot the empirical distribution of the correlation coefficients r_j, j = 1, 2, ..., 100.

The result of the simulation is shown in Fig. 7. It shows that e_u1 and e_u2 are positively correlated; thus, the covariance of e_u1 and e_u2 is positive. Since e_x = e_u2 − e_u1, Var(e_x) = Var(e_u1) + Var(e_u2) − 2 Cov(e_u1, e_u2). When Cov(e_u1, e_u2) is positive, this expression shows that the assumption of independence between e_u1 and e_u2 causes an overestimation of the variance in the error.
Fig. 7. Distribution of the correlation coefficient between e_u1 and e_u2 in the simulation.
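A compact version of this correlation study can be coded directly from Eq. (8); in the sketch below the focal length, the camera-frame point coordinates (C1, C2, C3) and the error variances are assumed values, so the resulting correlations are illustrative rather than those reported in Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(3)
f = 25.0                        # assumed focal length
sig_rot, sig_tr = 0.0003, 0.03  # assumed std. dev. of rotational / translational errors

def e_du(C, errs):
    """Horizontal displacement error of Eq. (8) for a point with camera-frame
    coordinates C = (C1, C2, C3) and errors (dx, dy, dz, ax, ay, az)."""
    C1, C2, C3 = C
    dx, dy, dz, ax, ay, az = errs
    num = (f*C1*C2)*ax + f*((f - C3)*C3 - C1**2)*ay + f*C2*(C3 - f)*az \
          + f*(f - C3)*dx + f*C1*dz
    den = -C2*(f - C3)*ax + C1*(f - C3)*ay - (f - C3)*dz + (f - C3)**2
    return num / den

corrs = []
for _ in range(100):                                    # 100 random end-point pairs
    P1 = rng.uniform([-40, -40, -260], [40, 40, -180])  # assumed camera-frame coordinates
    P2 = rng.uniform([-40, -40, -260], [40, 40, -180])
    e1, e2 = [], []
    for _ in range(50):                                 # 50 error draws per pair
        errs = np.concatenate([rng.normal(0, sig_tr, 3), rng.normal(0, sig_rot, 3)])
        e1.append(e_du(P1, errs))
        e2.append(e_du(P2, errs))
    corrs.append(np.corrcoef(e1, e2)[0, 1])

print("median correlation between e_u1 and e_u2:", np.median(corrs))
```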
If the correlation coefficient is exactly one and Var(e_u1) = Var(e_u2), then the errors from the endpoints of the line cancel each other in the dimensioning calculation. In our simple experiment, cases where the correlation coefficient was close to 1 represented situations where the points were very close together and very near the optical axis. These are not very attractive configurations for dimensional measurement. These cases represent the worst overestimation of variance and should not be of major concern in practice. In addition, the pdfs of the errors for the endpoints are unique; thus, their variances are not the same. The results which follow are based on the assumption of independence between endpoints. A similar situation is also observed for the vertical displacement error of a line segment, e_y.

Since the density functions of the horizontal and vertical displacement errors for a point are complicated functions (Eqs. (17) and (18)), integrating two such functions to find the density functions of the displacement errors of a line segment is very complex. On the other hand, the pdfs of e_x and e_y, obtained by assuming independence between e_u1 and e_u2, and between e_v1 and e_v2, are computed by the convolution of the pdfs of e_u2 and −e_u1, and of e_v2 and −e_v1, respectively:
f (e )" CV V
f (e #q) f (q) dq CS V CS \
and
f (e )" CW W
f (e #q) f (q) dq. CT W CT \
(22)
Although e_u1 and e_u2, and e_v1 and e_v2, are not independent, the density function obtained by this assumption bounds the variability of the actual density function and is easy to compute.

Similar to the geometric approximation used for the spatial quantization error, the two-dimensional displacement error, ẽ_d, in the dimension of a linear segment with an angle γ between the segment and the horizontal axis of the image can be expressed in terms of the horizontal
and vertical components of the dimensioning errors due to displacement and the angle γ. Thus we may write

ẽ_d = cos γ e_x + sin γ e_y.   (23)

The probability density function of the dimensioning error due to the displacement of the sensor can be expressed in terms of the probability density functions of the component dimensioning errors and the angle γ in the following open form:

f_{ẽ_d}(ẽ_d) = (1/|cos γ|) ∫_{−∞}^{+∞} f_{e_x}( tan γ (ẽ_d/sin γ − q) ) f_{e_y}(q) dq.   (24)
4. Integration of quantization error and displacement error

Displacement is independent of the resolution of the sensor and, consequently, independent of the quantization error. Thus, the total inspection error, e_i, is the sum of the quantization error, e_q, and the displacement error, e_d:

e_i = e_q + e_d.   (25)

The probability density functions of the quantization error and the displacement error, f_{e_q}(e_q) and f_{e_d}(e_d), are given in Eqs. (4) and (24). Based on Eq. (25), the probability density of the total error is computed from the convolution of the pdfs of the quantization and displacement errors:

f_{e_i}(e_i) = ∫_{−∞}^{+∞} f_{e_q}(e_i − q) f_{e_d}(q) dq.   (26)
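Eq. (26) is straightforward to evaluate numerically once the two component densities are tabulated on a common grid. In the sketch below, the quantization density is the triangular pdf of Eq. (3) and a Gaussian with an assumed standard deviation stands in for the displacement density of Eq. (24); the resolution and spread values are illustrative only.

```python
import numpy as np

# Common error grid (units of the measured dimension, e.g. mm); assumed range.
de = 0.001
e = np.arange(-0.25, 0.25 + de, de)

# Quantization-error pdf: triangular, per Eq. (3), assuming r_x = r_y = 0.05
# and a nearly horizontal segment, so the triangle has half-width 0.05.
r = 0.05
f_q = np.clip((r - np.abs(e)) / r**2, 0.0, None)

# Displacement-error pdf: stand-in Gaussian with an assumed standard deviation;
# in the paper this would come from Eq. (24).
sigma_d = 0.03
f_d = np.exp(-e**2 / (2 * sigma_d**2)) / (sigma_d * np.sqrt(2 * np.pi))

# Discrete convolution approximating Eq. (26); 'same' keeps the original grid.
f_i = np.convolve(f_q, f_d, mode="same") * de
print("area under f_i:", np.sum(f_i) * de)            # ~ 1
print("variance of total error:", np.sum(e**2 * f_i) * de)
```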
5. Planning accuracy in inspection of linear dimensions

Accuracy in active vision inspection can be improved by careful choice of the sensor settings for each inspected dimension. A sensor setting determines the location and view direction where an active sensor may be placed to observe one or more objects which contain one or more topologic entities whose dimensions are to be measured. A sensor arrangement, S = {x_1, x_2, ..., x_n}, has n sensor settings x_1 to x_n. x_i is a sensor setting which is an ordered triple (v_i, d_i, o_i) consisting of a sensor location v_i, a sensor view direction d_i, and a set of observable segments (features) o_i from the given setting. The same edge segment of a part (model) can be observed using many sensor settings, and it is necessary to evaluate the accuracy attainable from the sensor settings and to ensure that the potential errors in all dimensional measurements from each setting are acceptable for verification of the required (part) tolerances as indicated in the design.
The inspection accuracy in dimensioning a linear segment can be defined as

Accuracy = 1 − |ẽ_i|/L̃,   (27)

where L̃ is the image (or projected) length of the segment. This representation can be used to analyze the utility of different sensor settings by evaluating the probability that a dimensioning accuracy is within a particular tolerance. L̃ can be found in terms of the angle between the segment and the direction of the camera axis, β, the translational distances between the camera and the midpoint of the segment, t_x, t_y, t_z (where t_x, t_y are along the horizontal and vertical image plane axes, and t_z is along the sensor view direction), and the model length of the segment in three dimensions, L_w:

L̃(L_w, β, t_x, t_y, t_z, f) = L_w f √( t_x² cos²β + ((f + t_z) sin β − t_y cos β)² ) / ( (f + t_z)² − (1/4) L_w² cos²β ).   (28)

When the camera is pointing at the center of the segment (the focal axis meets the midpoint), t_x = t_y = 0. In this case, Eq. (28) can be simplified to be only in terms of β, L_w, f, and the distance d = t_z.

Earlier we obtained expressions for the pdf of ẽ_i in terms of the sensor and the sensor setting parameters. With both L̃ and the error pdf expressed in terms of these parameters, we can compute the likelihood of achieving a certain accuracy level from particular sensor settings. Comparing the variance of the accuracy computed from Eq. (27) with the tolerance specified, we can determine the dimensional inspection capability for given sensor settings. Thus, given that the accuracy tolerance is [1 − T, 1], where 0 ≤ T ≤ 1, for a dimension to be inspected, we can compute the likelihood that the achieved accuracy is within this tolerance range by computing Pr{Accuracy ≥ 1 − T}. For example, if the likelihood is greater than a certain threshold Th, where 0 ≤ Th ≤ 1, the sensor setting may be considered acceptable for inspecting the corresponding dimension; if the likelihood is less, the view direction of the sensor and/or the sensor location can be changed (one method for this may be based on Eq. (28)). Changing the angle β and/or the distance d changes the achievable accuracy.
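This acceptability test can be scripted once samples (or a tabulated pdf) of the total error are available. The sketch below computes L̃ from Eq. (28) and estimates Pr{Accuracy ≥ 1 − T} from assumed error samples; the geometry, focal length and error spread are illustrative values, with a Gaussian standing in for the total-error density of Eq. (26).

```python
import numpy as np

def projected_length(L_w, beta, t_x, t_y, t_z, f):
    """Image length of the segment, Eq. (28)."""
    num = L_w * f * np.sqrt((t_x * np.cos(beta))**2
                            + ((f + t_z) * np.sin(beta) - t_y * np.cos(beta))**2)
    den = (f + t_z)**2 - 0.25 * L_w**2 * np.cos(beta)**2
    return num / den

# Assumed sensor setting and part dimension (illustrative values only).
f, L_w, beta = 25.0, 40.0, np.radians(60.0)
t_x, t_y, t_z = 0.0, 0.0, 180.0
L_img = projected_length(L_w, beta, t_x, t_y, t_z, f)

# Stand-in for samples of the total error e_i of Eq. (26): a Gaussian with an
# assumed standard deviation replaces the convolved density here.
rng = np.random.default_rng(4)
e_i = rng.normal(0.0, 0.02, 1_000_000)

T = 0.01                                     # accuracy tolerance [1 - T, 1]
accuracy = 1.0 - np.abs(e_i) / L_img         # Eq. (27)
print("Pr{Accuracy >= 1 - T} =", np.mean(accuracy >= 1.0 - T))
```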
6. Experimental results This section describes two experiments conducted to investigate how well the probability density functions derived in the preceding sections fit some actual experimental data. These experiments include both quantization and displacement errors. In addition, the experiments explore the effect of the choice of sensor setting (sensor orientation and location) on the dimensional inspection accuracy.
Fig. 8. (a) An object with two slots and two steps used for the experiments. Dimensions of E1 and E2 are used for inspection in experiments 1 and 2, respectively. (b) A sample image for dimensioning E1 out of 50 acquired images. (c) Edge map for the image shown in (b).
The part with two steps and two slots shown in Fig. 8a is used for these experiments. In experiment 1, the sensor on the active head is moved from a random position to the assigned sensor setting in order to dimension the length of the step, E1, on the part. In this process, quantization errors and displacement errors are introduced into the measurements. The resulting data are used to make a comparison with the predicted probability density function of errors as obtained in Eq. (26). In experiment 2, five distinct sensor settings are assigned to measure the dimension of E2. Using Eq. (27), the accuracy of the measurement for each of these settings is computed. Using this experiment, we can observe how the sensor setting and sensor parameters affect the accuracy of the measurement.

6.1. Experiment 1

The goal of this experiment is to test the goodness of fit of the distribution of a set of experimentally generated errors in dimensional measurements to the predicted errors from the analysis described in this paper. The three goodness-of-fit tests used are the quantile-quantile plot, the probability-probability plot and the chi-square test. In order to place the sensor at different sensor settings, the sensor was mounted on the end-effector of a robot with five degrees of freedom. To obtain different and arbitrary starting positions, a set of random movements was assigned to the joints of the manipulator from its starting home position. Subsequently, the sensor was assigned a pre-determined location and view direction (a desired sensor setting) to achieve certain dimensional inspections. The sensor uses a CCD array of size 512 × 480 pixels. Each pixel has a resolution of 0.01 mm × 0.013 mm. The sensor movements and dimensional inspections were repeated 50 times. Digitization and the sensor movements introduce spatial quantization errors and displacement errors. Using the analysis described in the previous sections, the probability density function of the combined errors was computed and compared with the experimental inspection results.
A sample from the acquired images and its edge map are shown in Figs. 8b and c. The results from the experiments for the 50 dimensional inspections of E1 are shown in Fig. 9a. For the displacement error, the variances of the orientation and translation are estimated experimentally as 0.0003 rad and 0.03 mm, respectively. The probability density function of the total error is computed based on Eq. (26) and its plot is shown in Fig. 9b. After the distributions of the experimental errors and the predicted errors are determined, three tests are utilized to test how well the predicted errors from the probabilistic analysis represent the experimental errors. Probability plots (the probability-probability plot and the quantile-quantile plot) are a graphical comparison of the predicted result with the experimental result. The P-P plot amplifies differences between the middles of the cumulative distribution functions of the predicted result and the experimental result, whereas the Q-Q plot amplifies differences between the tails of the cumulative distribution functions. If two distributions fit perfectly, both plots are a straight line with slope 1. The results of the P-P plot and the Q-Q plot are shown in Figs. 9c and d. Both plots are reasonably linear, which means the simulation result agrees closely with the experimental result.

The last test of the goodness of fit is the chi-square test. The measurements of our experimental results are divided into nine class intervals, [−∞, 2.622), [2.622, 2.627), [2.627, 2.632), [2.632, 2.637), [2.637, 2.642), [2.642, 2.647), [2.647, 2.652), [2.652, 2.657), [2.657, +∞), where the size of each interval is 0.005. Nine intervals are chosen because the measurements have three decimal places. The degrees of freedom for the test is equal to the number of class intervals minus the number of parameters estimated minus 1. The estimated parameters in our simulation result are the variances of the orientational and translational errors in the computation of the displacement error. Therefore, the number of degrees of freedom is 6. The corresponding value for comparison is χ²_{0.05,6}, defined as the percentage point of the chi-square random variable with six degrees of freedom such that the probability that χ² exceeds this
Fig. 9. (a) The distribution of 50 dimensioning measurements recorded from the experiment. (b) The pdf of the predicted combined quantization and displacement errors with the expected dimension of the edge segment as 2.642 mm. (c) Probability-probability plot of the experimental and simulation results. (d) Quantile-quantile plot of the experimental and simulation results.
Table 1
The observed and expected frequencies in class intervals 1 to 9

i      1     2     3     4     5     6     7     8     9
O_i    3     3     5     9     11    8     5     3     3
E_i    1.40  2.77  4.71  7.31  9.12  9.12  7.31  4.71  3.54
value is 0.05. The test statistic χ0² is calculated using O_i and E_i, the observed and the expected frequencies in interval i from the experiment and the model, respectively, where i is the class interval arranged in the order described above. The values of O_i and E_i are shown in Table 1. The computed value of χ0² is 4.226 and the value of χ²_{0.05,6} is 12.59. Since χ0² is less than χ²_{0.05,6}, the predicted result is acceptable as a representation of the experimental result.
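The chi-square computation can be reproduced directly from Table 1, as in the following sketch (scipy is used only for the 5% critical value).

```python
import numpy as np
from scipy.stats import chi2

# Observed and expected class frequencies from Table 1.
O = np.array([3, 3, 5, 9, 11, 8, 5, 3, 3])
E = np.array([1.40, 2.77, 4.71, 7.31, 9.12, 9.12, 7.31, 4.71, 3.54])

chi_sq = np.sum((O - E)**2 / E)          # test statistic, ~4.2
dof = len(O) - 2 - 1                     # 9 intervals - 2 estimated parameters - 1
critical = chi2.ppf(0.95, dof)           # 5% critical value, ~12.59
print(chi_sq, critical, chi_sq < critical)
```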
6.2. Experiment 2

Given a set of part dimensions M to be inspected, a desirable sensor setting, ss, in active vision inspection will maximize the cardinality of the subset of M which is observable and can be dimensioned from it. ss should also provide an acceptably high level of dimensioning accuracy for each of the elements in this subset. In dealing with different sensor settings, although some sensor settings may provide observability of a maximal set of dimensions, the dimensioning accuracy for some dimensions (edge segments) may not be acceptably high. In this experiment, the dimension E2 of the part shown in Fig. 8a (length of the step) is dimensioned from five different sensor settings. The accuracy of the dimensional inspections is analyzed and compared. This analysis clarifies the effect of sensor settings on the
Fig. 10. (a) img 1 acquired by ss1, (b) img 2 acquired by ss2, (c) img 3 acquired by ss3, (d) img 4 acquired by ss4, (e) img 5 acquired by ss5, (f ) edge map of img 1, (g) edge map of img 2, (h) edge map of img 3, (i) edge map of img 4, ( j) edge map of img 5.
Fig. 11. (a) The probability density function of the quantization error for img 1, img 2, img 3, img 4, img 5, (b) the probability density function of displacement error for ss1, ss2, ss3, ss4, and ss5, (c) the probability density function of the combined error of ss1, ss2, ss3, ss4, and ss5, (d) the accuracy of dimension by ss1, ss2, ss3, ss4, and ss5.
dimensioning accuracy and on active vision inspection planning.

Figs. 10a-e show the five images (img 1, img 2, img 3, img 4, and img 5) acquired from sensor settings 1, 2, 3, 4, and 5 (ss1, ss2, ss3, ss4, and ss5). For ss1 and ss2, the distance between the desired entity (E2) and the sensor is approximately the same, and the relative orientation between the entity and the sensor is also about the same (along the optical axes of the sensors); however, the orientation of the object about its center is different, causing the entity to appear differently in the image (the edge segment is shorter in img 1). For ss2 and ss3, the distance between the sensor and the desired entity in ss2 is shorter than in ss3, while ss3 has an orthogonal direction of view (E2 is perpendicular to the optical axis of ss3). The length of the desired entity is approximately equal in the images from ss2 and ss3. For ss3, ss4, and ss5, the distance between E2 and the sensor is approximately the same. However, the object is along the optical axis of ss3, the object is farther away from the optical axis of ss4 in the horizontal direction of the sensor coordinate system, and the object is farther away from the optical axis of ss5 in both the horizontal and vertical directions of the sensor coordinate system. The relative orientations between E2 and the sensors ss3, ss4, and ss5 are significantly different. The length of E2 in these images is approximately the same.

Fig. 11a shows the pdf of the quantization error. Since the quantization error depends only on the sensor resolution and the orientation of the projected segment in the image, the distributions of the quantization error for ss1, ss2, ss3, ss4, and ss5 are the same. The probability density functions of errors due to displacement for ss1, ss2, ss3, ss4, and ss5 are shown in Fig. 11b. The dimensional errors due to displacement depend on the relationship of the segment with respect to the image plane and its coordinate system. Because the relative orientation between E2 and the sensor coordinate systems of ss1 and ss2 is approximately the same, the distributions of the errors due to displacement for these sensor settings are approximately the same, as shown in Fig. 11b. In ss3, E2 is farther from the sensor along the optical axis. The distribution of errors due to displacement for ss3 is narrower and its peak is higher, as shown in Fig. 11b. In ss4 and ss5, E2 is farther away from the optical axis of the sensor, so the distributions of errors due to displacement for ss4 and ss5 are wider and their peaks are lower. This shows that the farther the entity is from the optical axis of the sensor, the wider the distribution and the lower its peak. Combining the errors due to quantization
and displacement, the pdfs of the total error for ss1, ss2, ss3, ss4, and ss5 are shown in Fig. 11c. Using the pdfs in Fig. 11c, we can compute the expected accuracy for dimensional inspection from these sensor settings using Eq. (27). The result of this computation is plotted in Fig. 11d. Although the error pdfs for ss1 and ss2 appear the same, the image dimension of E2 from these sensor settings is different. As a result, the dimensioning accuracy from ss1 can be lower than that from ss2. For example, if we were interested in 99% accuracy, the likelihood of achieving that from ss1 is 0.44, while the likelihood for ss2 is 0.79. The pdf of the error for ss3 is different from that of the other sensor settings; the likelihood of obtaining 99% accuracy is 0.87 (higher than ss1 and ss2). Intuitively, this decrease in the likelihood of accuracy (from ss3 to ss2) can be attributed to the sensor being placed closer to the dimensioned entity. As the distance decreases, the same displacement errors introduce larger distortions in the projected coordinates and thus larger distortions in the computed dimensions. Although the quantization errors have an opposite effect, this effect may be smaller, and in any case the amount to which they cancel each other depends on the relationship between the two given sensor settings. Also, the introduced distortion is more serious if the inspected entity is positioned farther away from the optical axis of the sensor, as shown by the accuracy of ss4 and ss5. The likelihood of obtaining 99% accuracy for ss4 and ss5 is 0.75 and 0.72, respectively. This experiment shows the effect of the quantization and displacement errors based on the analysis presented in Sections 2 and 3. This analysis can be helpful in determining the acceptability of given sensor settings for active vision dimensional inspection, and/or for proposing alternative reasonable sensor settings.
7. Conclusion Displacement in the sensor setting, quantization error in image acquisition and illumination are the major sources of errors in the dimensional measurement using active vision inspection. In this paper, we have concentrated on the analysis of spatial quantization errors, the errors due to sensor displacement, and their effects on the dimensional measurements of linear segments. We have developed a general analysis for the spatial quantization error in one and two dimensions. The range, mean, variance, and the probability density function of the error in dimensional inspection of a segment are derived in terms of the resolution. A probabilistic analysis of the dimensional inspection errors due to displacement is also developed. Finally, a method for integrating the errors due to quantization and displacement makes it possible to compute the total error in active vision dimensional inspection. We have also developed an analysis for expected accuracy in dimensional measurement from differ-
ent sensor settings. Accuracy is derived in terms of the resolution, the sensor location and view direction, the focal length and the model (part) dimension of the edge segment. Using this approach, we can determine the capability for inspecting different dimensions by specific sensor settings to ensure that design specifications are satisfied. Our experimental results suggest that the probabilistic model of uncertainty in measurements gives a close match with real results. This type of analysis should be part of the system design and inspection planning procedures, leading to more accurate inspection.

References

[1] Kamgar-Parsi B, Kamgar-Parsi B. Evaluation of quantization error in computer vision. IEEE Trans Pattern Anal Mach Intell 1989;11(9).
[2] Blostein SD, Huang TS. Error analysis in stereo determination of 3-D point positions. IEEE Trans Pattern Anal Mach Intell 1987;9(6).
[3] Ho C-S. Precision of digital vision systems. IEEE Trans Pattern Anal Mach Intell 1983;5(6).
[4] Griffin PM, Rene Villalobos J. Process capability of automated visual inspection systems. IEEE Trans Systems Man Cybernet 1992;22(3).
[5] Su S-F, Lee CSG. Manipulation and propagation of uncertainty and verification of applicability of actions in assembly tasks. IEEE Trans Systems Man Cybernet 1992;22(6).
[6] Renders J-M, Rossignol E, Becquet M, Hanus R. Kinematic calibration and geometrical parameter identification for robots. IEEE Trans Robotics Automation 1991;7(6).
[7] Menq C-H, Yau H-T, Lai G-Y. Automated precision measurement of surface profile in CAD-directed inspection. IEEE Trans Robotics Automation 1992;8(2).
[8] Chen J, Chao LM. Positioning error analysis for robot manipulators with all rotary joints. IEEE Int Conf on Robotics and Automation, San Francisco, CA, April 1986.
[9] Veitschegger WK, Wu C-h. A method for calibrating and compensating robot kinematic errors. IEEE Int Conf on Robotics and Automation, 1987.
[10] Bryson S. Measurement and calibration of static distortion of position data from 3D trackers. SPIE 1669, Stereoscopic Displays and Applications III, 1992.
[11] Smith RC, Cheeseman P. On the representation and estimation of spatial uncertainty. Int J Robotics Res 1986;5(4).
[12] Huertas A, Medioni G. Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks. IEEE Trans Pattern Anal Mach Intell 1986;8(5).
[13] Jensen K, Anastassiou D. Subpixel edge localization and the interpolation of still images. IEEE Trans Image Process 1995;4(3).
[14] Kisworo M, Venkatesh S, West G. Modeling edges at subpixel accuracy using the local energy approach. IEEE Trans Pattern Anal Mach Intell 1994;16(4).
[15] Lyvers E, Mitchell OR, Akey ML, Reeves P. Subpixel measurements using a moment-based edge operator. IEEE Trans Pattern Anal Mach Intell 1989;11(2).
[16] MacVicar-Whelan PJ, Binford TO. Line finding with subpixel precision. Proc SPIE Techniques Appl Image Understanding 1981;281.
[17] Ballard DH, Brown CM. Computer vision. Englewood Cliffs, NJ: Prentice-Hall, 1982.
[18] Chin RT. Automated visual inspection: 1981 to 1987. Comput Vision Graphics Image Process 1988;41:346-81.
[19] Cowan CK. Automated sensor placement for vision task requirements. IEEE Trans Pattern Anal Mach Intell 1988;10(3).
[20] Foley JD, Van Dam A. Computer graphics: principles and practice. Reading, MA: Addison-Wesley, 1990.
[21] Havelock DI. Geometric precision in noise-free digital images. IEEE Trans Pattern Anal Mach Intell 1989;11:1065-75.
[22] Hutchinson SA, Cromwell RL, Kak AC. Planning sensing strategies in a robot work cell with multi-sensor capabilities. IEEE Trans Robotics Automation 1989;5(6).
[23] Marefat M, Kashyap RL. Image interpretation and object recognition in manufacturing. IEEE Control Systems 1991;11(5).
[24] Marefat M, Kashyap RL. Feature-based computer integrated inspection. Proc 1993 ASME Int Computers in Engineering Conf, San Diego, CA, 8-12 August 1993.
[25] Menq CH, Borm J-H. Statistical measure and characterization of robot errors. Proc IEEE Int Conf on Robotics and Automation, 1988.
[26] Moon CW. Error analysis for dimensional measurement using computer vision techniques. IEEE Instrumentation and Measurement Technology Conf, San Diego, CA, 1988:371-6.
[27] Park HD, Mitchell OR. CAD based planning and execution of inspection. Proc IEEE Computer Vision and Pattern Recognition Conf, Ann Arbor, Michigan, 1988.
[28] Paul RP. Differential relationships. In: Robot manipulators: mathematics, programming, and control, Ch. 4. Cambridge, MA: MIT Press, 1981.
[29] Radack GM, Merat FL. The integration of inspection into the CIM environment. Proc Hawaii Int Conf on System Science, 1990:445-62.
[30] Yi S, Haralick RM, Shapiro L. Automatic sensor and light source positioning for machine vision. Proc 10th Int Conf on Pattern Recognition, Atlantic City, NJ, 2-3 June 1991.
[31] Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Trans Robotics Automation 1987;3(4).
[32] Yan Z, Menq C-H. Coordinate sampling and uncertainty analysis for computer integrated manufacturing and dimension inspection. Proc 1993 NSF Design and Manufacturing Systems Conf, Charlotte, North Carolina, 6-8 January 1993.
[33] Yang CC, Marefat MM, Ciarallo FW. Active visual inspection on CAD models. IEEE Int Conf on Robotics and Automation, San Diego, CA, 8-13 May 1994.
[34] Yang CC, Marefat MM, Kashyap RL. Analysis of errors in dimensional inspection based on active vision. Proc SPIE Int Symp on Intelligent Robots and Computer Vision XIII: 3D Vision, Product Inspection, Active Vision, Cambridge, MA, 31 October-4 November 1994.
[35] Tarabanis K, Tsai RY, Allen PK. Analytic characterization of the feature detectability constraints of resolution, focus, and field-of-view for vision sensor planning. CVGIP: Image Understanding 1994;59(3):340-58.
[36] Tarabanis K, Tsai RY, Allen PK. The MVP sensor planning system for robotic vision tasks. IEEE Trans Robotics Automation 1995;11(1).