Journal of Manufacturing Systems
Vol. 22/No. 2, 2003

Vision Sensor-Based Measurement for Automatic Die Remodeling

Jitae Kim and Suck-Joo Na, Dept. of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Taejon, Korea

Abstract
The problem of recognizing and locating rigid objects in 3-D space is important for applications in welding automation. This issue relates to finding a transformation matrix between the intended and actual locations to compensate for fixturing errors before welding starts. An algorithm for 3-D position estimation using a laser vision sensor is introduced for automatic die remodeling. First, a vision sensor based on optical triangulation was used to collect range data on die surfaces. Second, line and plane vector equations were constructed from the measured range data, and an analytic algorithm was proposed for recognizing the die location from these vector equations. This algorithm produces the transformation matrix without requiring specific feature points. To evaluate the proposed algorithm, a corrugated stainless steel plate was measured with a laser vision sensor attached to a three-axis Cartesian manipulator, and the transformation matrix was calculated.

Keywords: Die Remodeling, Vision Sensor, Position Estimation, Transformation Matrix

Introduction
Press dies for forging or cutting processes in the automobile industry are very expensive to manufacture and take much time to produce. Thus, it is often necessary to repair broken dies or to modify used ones. In many cases, this refurbishing or repairing is performed by machining. However, when some part of the die needs to be built outward, the volume to be removed for remodeling by machining is excessive. In such a case, building up that part by molten metal deposition with a welding process is more efficient than machining away the entire die surface around it. In fact, many automobile companies have an in-house department for die remodeling by welding. However, most welding processes depend on the experience of welders, so the quality of welding is not always guaranteed, and much process time is needed. Therefore, an automated welding system is necessary for die remodeling.

In the development of robot welding systems, position recognition of 3-D objects is of crucial importance. The overall workpiece location and orientation must be determined before welding starts in order to compensate for gross fixturing inaccuracies that may cause the part to move. This part location technique can also be applied to other manufacturing processes such as machining, material handling, mechanical assembly, and inspection or gauging. However, it is well known that recognizing the exact location and orientation of an object is not easy if the object is allowed to rotate and translate in 3-D space (Agapakis et al. 1990; Agapakis 1990; Wu, Smith, and Lucas 1996). The purpose of this study is to estimate the location of a press die automatically before the welding process begins. It is assumed that the welding passes are preprogrammed into the robot welding system, so it is necessary to find a rigid body transformation between the actual location of the workpiece and the input CAD data.

Different classes of part location algorithms operating on discrete coordinate data appear in the literature. These algorithms can be broadly classified as: (1) sequential, parametric regression algorithms; (2) simultaneous, parametric regression algorithms; (3) home-point regression algorithms; and (4) ANSI/ISO datum establishment algorithms (Chakraborty, De Meter, and Szuba 2001). A sequential, parametric regression algorithm works by individually applying linear regression to fit the analytic form of each datum to the coordinate data collected from the datum feature (Chakraborty, De Meter, and Szuba 2001). The merits of this method are simple applicability and fast execution speed. An example of this technique can be found in Bhat and De Meter (2000). The second method, a simultaneous, parametric regression algorithm, is similar to the first.
The difference is that this method uses a nonlinear optimization model to obtain the six rigid body displacement variables (Chakraborty, De Meter, and Szuba 2001; Grimson and Lozano-Perez 1984; Sahoo and Menq 1991; Faugeras and Hebert 1986). The results of this technique show great promise for application, except that it needs a long computing time because of repeated iterations. A home-point regression algorithm can be applied to objects that have well-described home-point features (Besl and McKay 1992). Lastly, an ANSI/ISO part location algorithm attempts to fit the analytic datum to the coordinate data in such a manner as to adhere to the datum establishment rules described by either the ANSI Y14.5M standard or its ISO equivalent (Chakraborty, De Meter, and Szuba 2001). An application of this algorithm can be found in Zhang and Roy (1993), although it appears too computationally complex for this application.

The easiest way to establish the part location is to use home points. Most dies have locating surfaces or locating holes that can be used as home-point features. Sometimes, however, those features are not available, for example when the area of the die is so large that the welding robot manipulator cannot reach all areas of the die. Utilizing a robot manipulator with a gantry robot system would be expensive and difficult to program with high accuracy. Therefore, the sequential, parametric regression algorithm seems appropriate for its computational simplicity and wide applicability. To apply a part location algorithm to dies whose shape changes frequently, a new algorithm is needed, because most algorithms in the literature address only specific cases.

In this study, a structured-light vision sensor (Nitzan 1988) was chosen as the sensing system because the same kind of vision system had already been adopted for seam tracking and bead shape inspection. Many researchers have studied object recognition with structured-light vision sensing systems. Oshima and Shirai (1983) presented a recognition method for cylinders and polygons using a range finder. They defined a fitting function and found its minimum to determine the matched pairs. Faugeras and Hebert (1986) developed a system for recognizing and locating rigid objects in 3-D space. Model objects were represented in terms of linear features such as points, lines, and planes, and the system then used rigidity constraints to guide the matching process. Bhanu (1984) presented a 3-D scene analysis system for shape matching of real-world 3-D objects. Object models were constructed from multiple-view 3-D range images, and shape matching was performed by matching the face description of an unknown view with the stored model using a relaxation-based scheme called stochastic face labeling.

Most of the methods described above create the transformation matrices between the model and the scene using point data, such as plane centroids or corner points (Faugeras and Hebert 1986; Oshima and Shirai 1983; Bhanu 1984; Fan, Medioni, and Nevatia 1989; Flynn and Jain 1992; Bolles and Horaud 1986). To apply these methods to a press die, it would be necessary to find feature points on the die surface. However, most press dies have no feature point on the die face, and most plane surfaces do not have definite borders, so feature point acquisition is not possible in most cases. To compensate effectively for press die fixturing errors, the transformation matrix must be created without matched points. This paper provides a new analytic part location algorithm, classified as a sequential, parametric regression method. As datum features, straight line and flat plane features, which can be found easily on most dies, were used. The method can be applied generally to large dies with a structured-light vision sensor.
Laser Vision Sensor
Figure 1 shows the laser vision sensor used to detect range data on the object surface. The vision sensor consists of a CCD camera, a lens, a laser diode, a line generator, and a narrow-band-pass filter. The sensor is based on optical triangulation (Nitzan 1988) and has a 90 mm x 40 mm field of view and approximately 0.2 mm x 0.05 mm resolution.
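As an aside on how such a sensor recovers depth, optical triangulation can be sketched with a simple pinhole-camera/laser-sheet geometry. This is an illustrative model only, not the actual sensor's calibration; the baseline, focal length, and laser angle below are invented parameters.

```python
import math

def triangulate_depth(u_px, focal_px, baseline_mm, laser_angle_rad):
    """Depth of a laser-stripe point by optical triangulation (illustrative
    geometry, not the paper's sensor calibration).

    The camera looks along z from the origin; the laser sits at
    x = baseline and its light sheet is tilted toward the optical axis
    by laser_angle:
        camera ray:  x = z * u / f
        laser sheet: x = baseline - z * tan(laser_angle)
    Intersecting the two gives the depth z of the lit surface point."""
    return baseline_mm / (u_px / focal_px + math.tan(laser_angle_rad))
```

The stripe's pixel position u thus maps directly to depth; scanning the stripe across the die while recording the manipulator position yields the 3-D range data used below.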
[Figure 1. Laser vision sensor and its schematic diagram of structured light]

Position Estimation Algorithm
The easiest method of estimating the position of an object is to calculate a transformation matrix from three matched points. But, as described previously, it is very difficult to obtain point data with invariant features from the die surface. To work around this, Bhanu (1984) and Fan, Medioni, and Nevatia (1989) suggested using the centroid of a plane to calculate the transformation matrix. To apply this method to a die surface, however, the range data of a whole plane must be obtained and all boundaries of the plane surface detected. This is a very time-consuming process and cannot guarantee the accuracy needed for welding processes. For this reason, a noniterative analytic method is suggested here for calculating transformation matrices from line and plane vector equations without using matched points.

A. Algorithm by Line Vector Equations
A 3-D line equation can be represented as follows:

    (x - a1)/b1 = (y - a2)/b2 = (z - a3)/b3    (1)

where the aj are the elements of the location vector a and the bj are the elements of the direction vector b. This equation can be represented in vector form as follows:

    x = a + t b    (2)

To determine the location and direction vectors from the point data measured by the laser vision sensor, those vectors can be defined as follows:

    bi = a(i+1) - ai,    b = (1/n) sum_i bi/||bi||,    a = (1/n) sum_i ai    (3)

The direction vector is the average of the normalized subdirection vectors, which are calculated by subtracting adjacent measured points, while the location vector is the average of the measured points (see Figure 2).

[Figure 2. Vector form of a 3-D line]

In general, the rigid body transformation matrix [T] between objects can be represented as a 4 x 4 homogeneous matrix as follows:

    T = | R(3x3)  P(3x1) |    (4)
        | 0 0 0      1   |

where [R] represents the rotation and P the translation.
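The construction of Eq. (3), averaging the points for the location vector and the normalized adjacent differences for the direction vector, can be sketched in a few lines of NumPy. The helper name is hypothetical, not code from the paper:

```python
import numpy as np

def line_from_points(points):
    """Estimate the location vector a and unit direction vector b of a 3-D
    line from ordered range points, following Eqs. (2)-(3):
        b_i = a_(i+1) - a_i,  b ~ mean of b_i/||b_i||,  a = mean of a_i."""
    pts = np.asarray(points, dtype=float)
    a = pts.mean(axis=0)                 # location vector: mean of the points
    diffs = np.diff(pts, axis=0)         # subdirection vectors b_i
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    b = units.mean(axis=0)
    return a, b / np.linalg.norm(b)      # renormalize the averaged direction
```

Averaging the normalized differences, rather than fitting all points at once, keeps the computation sequential in the spirit of the paper's classification, while the renormalization guarantees a unit direction vector.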
By using a homogeneous matrix as the transformation matrix, the rotation matrix and the translation vector are kept separate, so that the characteristics of the transformation can be traced easily (Craig 1989). Many forms exist for representing the rotation matrix; here it is represented as a rotation by angle θ about a unit vector k (the axis of revolution) through the origin. Let kx, ky, and kz be the components of axis k. Then the coefficients of [R] are related to θ, kx, ky, and kz as follows:

    R(k, θ) = | kx*kx*v + c      kx*ky*v - kz*s   kx*kz*v + ky*s |
              | kx*ky*v + kz*s   ky*ky*v + c      ky*kz*v - kx*s |    (5)
              | kx*kz*v - ky*s   ky*kz*v + kx*s   kz*kz*v + c    |

where s = sin θ, c = cos θ, and v = 1 - cos θ. When there are two pairs of measured and expected line equations, such as

    l1: x = a1 + t b1,   l2: x = a2 + t b2,   l1': x = a1' + t b1',   l2': x = a2' + t b2'    (6)

the axis k can be found at the outset. Figure 3 shows the two pairs of measured lines l' and expected lines l.

[Figure 3. Two pairs of matched 3-D lines]

The axis of revolution lies on the plane that bisects the angle between bi and bi' (see Figure 4) (Fan, Medioni, and Nevatia 1989). Therefore, the axis of revolution can be calculated as follows:

    k = L1 x L2    (7)

where L1 and L2 are the normal vectors of the bisecting planes:

    L1 = (b1 + b1') x (b1 x b1'),    L2 = (b2 + b2') x (b2 x b2')    (8)

[Figure 4. Axis of revolution]

The angle of rotation θ is the angle between the projections of bi and bi' onto the plane whose normal is k, so an angle can be calculated from each pair of direction vectors as follows:

    θi = arccos[ {bi - (bi . k)k} . {bi' - (bi' . k)k} ]

with the projected vectors normalized to unit length. The average of the θi is used as the angle of rotation. Consequently, the rotation matrix [R] can be created from the axis of revolution k and the rotation angle θ. After rotating the measured lines by the rotation matrix, the measured and expected line pairs are aligned parallel (see Figure 5). Next, the translation vector P must be found to complete the transformation matrix. Here, a simple technique using the two line pairs is suggested. First, one line of a pair is fitted onto the other by adding P1, as follows:

    P1 = a1' - [R]a1    (9)

The first line pair will then fit together. Next, to fit the second pair while maintaining the first fit, the second translation vector must be parallel to the direction of the first fitted line. Thus it can be written as follows:

    P2 = m[R]b1    (10)
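The rotation construction of Eqs. (5)-(8) can be sketched as below. The helper names are hypothetical; the inputs are assumed to be unit vectors, the rotation is assumed nonzero (otherwise the cross products in Eq. (8) vanish), and the sign disambiguation at the end is an implementation detail the paper does not spell out, since arccos returns an unsigned angle:

```python
import numpy as np

def rodrigues(k, theta):
    """Eq. (5): rotation matrix about unit axis k by angle theta."""
    s, c = np.sin(theta), np.cos(theta)
    v = 1.0 - c
    kx, ky, kz = k
    return np.array([[kx*kx*v + c,    kx*ky*v - kz*s, kx*kz*v + ky*s],
                     [kx*ky*v + kz*s, ky*ky*v + c,    ky*kz*v - kx*s],
                     [kx*kz*v - ky*s, ky*kz*v + kx*s, kz*kz*v + c]])

def rotation_from_direction_pairs(b1, b1p, b2, b2p):
    """Axis, angle, and rotation matrix from two pairs of matched unit
    directions (expected b_i, measured b_i'), following Eqs. (5)-(8)."""
    b1, b1p, b2, b2p = (np.asarray(x, float) for x in (b1, b1p, b2, b2p))
    # Eq. (8): L_i is the normal of the plane bisecting the angle b_i, b_i'
    L1 = np.cross(b1 + b1p, np.cross(b1, b1p))
    L2 = np.cross(b2 + b2p, np.cross(b2, b2p))
    k = np.cross(L1, L2)                 # Eq. (7): axis lies in both planes
    k /= np.linalg.norm(k)

    def angle(b, bp):
        # angle between b and b' projected onto the plane with normal k;
        # assumes neither direction is parallel to the axis k
        p = b - np.dot(b, k) * k
        q = bp - np.dot(bp, k) * k
        cosang = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    theta = 0.5 * (angle(b1, b1p) + angle(b2, b2p))  # average of theta_i
    # arccos is unsigned, so pick the sign that actually maps b1 onto b1'
    R = rodrigues(k, theta)
    if np.dot(rodrigues(k, -theta) @ b1, b1p) > np.dot(R @ b1, b1p):
        theta, R = -theta, rodrigues(k, -theta)
    return k, theta, R
```

A quick sanity check is to rotate two known directions by a known matrix and confirm the routine recovers it.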
[Figure 5. Translation for two matched line pairs]

The final translation vector is the sum of the two translation vectors:

    P = P1 + P2    (11)

To find the second translation vector P2, the following geometric equation is used:

    a2' - P1 - m b1' = b2' t + [R]a2    (12)

This equation states that after translation, a2', the location vector of the second measured line, must lie on the second rotated line. The above equation can be rearranged as follows:

    [b2'  b1'] [t  m]^T = a2' - P1 - [R]a2    (13)

which can be regarded as a linear equation of the form

    H X = G    (14)

where H = [b2' b1'], X = [t m]^T, and G = a2' - P1 - [R]a2. Therefore, m can be determined through the pseudoinverse, as follows:

    X = (H^T H)^-1 H^T G    (15)

B. Algorithm by Plane Vector Equations
The second way of finding the transformation matrix without using matched points is a similar method that incorporates plane equations. A plane equation can be represented as follows:

    v1 x + v2 y + v3 z - d = 0    (16)

and in vector form:

    x . v - d = 0    (17)

To derive a plane equation from measured points, the least-squares method can be adopted. Writing the plane as z = a0 + a1 x + a2 y + e, the sum of squared residuals is

    Sr = sum_i (zi - a0 - a1 xi - a2 yi)^2

and setting dSr/da0 = dSr/da1 = dSr/da2 = 0 yields the normal equations:

    | n       sum xi      sum yi    | | a0 |   | sum zi    |
    | sum xi  sum xi^2    sum xi*yi | | a1 | = | sum xi*zi |    (18)
    | sum yi  sum xi*yi   sum yi^2  | | a2 |   | sum yi*zi |

To complete a transformation matrix, three expected and measured plane pairs are needed, as follows:

    p1: v1 . x + d1 = 0,    p2: v2 . x + d2 = 0,    p3: v3 . x + d3 = 0
    p1': v1' . x + d1' = 0,  p2': v2' . x + d2' = 0,  p3': v3' . x + d3' = 0    (19)

where the pi are the expected plane equations and the pi' are the corresponding measured equations. Here, vi is the unit normal vector of a plane and di is the distance of the plane from the origin. The method of constructing the rotation matrix is exactly the same as that for line equations; in applying it to plane equations, it is only necessary to replace the direction vectors of the line equations with the normal vectors of the plane equations. This yields three pairs of matched planes that are already aligned parallel by the calculated rotation matrix (see Figure 6).
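Before such plane pairs can be matched, each scanned patch must first be reduced to a plane equation. The normal equations of Eq. (18) can be sketched as below, with a hypothetical helper that also converts the fitted z = a0 + a1 x + a2 y form into the unit-normal form v . x = d. It assumes the patch is not vertical, so the explicit-z parameterization is valid:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through range points via the normal equations
    of Eq. (18), returned as a unit normal v and offset d with v . x = d."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # normal equations for z = a0 + a1*x + a2*y, Eq. (18)
    A = np.array([[len(x),  x.sum(),     y.sum()],
                  [x.sum(), (x*x).sum(), (x*y).sum()],
                  [y.sum(), (x*y).sum(), (y*y).sum()]])
    rhs = np.array([z.sum(), (x*z).sum(), (y*z).sum()])
    a0, a1, a2 = np.linalg.solve(A, rhs)
    # rewrite z = a0 + a1*x + a2*y as v . p = d with a unit normal v
    v = np.array([-a1, -a2, 1.0])
    n = np.linalg.norm(v)
    return v / n, a0 / n
```

Dividing through by the normal's length makes the offset d the signed distance of the plane from the origin, matching the convention used for the di above.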
[Figure 6. Finding the translation vector with three matched-plane pairs]

Then the normal vector of the first matched plane pair is selected as the direction of the first translation vector, and the magnitude of the translation vector is obtained from the difference of the distances of the matched planes from the origin, as follows:

    P1 = (d1' - d1) v1'    (20)

Further, the second translation vector should be parallel to the first matched plane to preserve the first fit. Here, to find the shortest such vector, the direction of the second translation is taken normal to v1' (see Figure 7), as follows:

    P2 = m v1' x (v2' x v1')    (21)

[Figure 7. Direction of the second translation vector]

When d2'' is defined as

    d2'' = v2' . P1 + d2    (22)

the second translation vector can be found as follows:

    m = (d2' - d2'') / { [v1' x (v2' x v1')] . v2' }    (23)

When the third translation vector is calculated, it is important that the first and second fits are not broken. Therefore, the third translation vector must be parallel to both the first and second matched planes. Consequently, the direction of the third translation should be:

    P3 = n (v1' x v2')    (24)

The magnitude n can be found in the same way as above. When d3'' is defined as

    d3'' = v3' . P1 + v3' . P2 + d3    (25)

the magnitude is

    n = (d3' - d3'') / { (v1' x v2') . v3' }    (26)

Then the final translation vector is simply the sum of the subtranslations:

    P = P1 + P2 + P3

C. Verification of Algorithm
To verify the proposed algorithm, experiments with a plastic block were performed as shown in Figure 8. Figures 9 and 10 show the results of the test with each algorithm, respectively. Star points (*) are measured point data, while solid lines show the location of the object calculated from the transformation matrix obtained by the algorithms. To verify the transformation matrix, another line was measured (indicated by +). It can be seen that the measured line data fit well with the calculated position of the object.

[Figure 8. Experiments with a test block]
[Figure 9. Result of test with the line vector algorithm]
[Figure 10. Result of test with the plane vector algorithm]

Experiments and Results
The ideas proposed have been implemented with the experimental setup shown in Figure 11. The object measured was a corrugated stainless steel plate. The plate was corrugated randomly to create a situation similar to that of press dies, and a laser vision sensor was attached to a Cartesian manipulator to obtain 3-D range data.

First, the laser vision sensor measures the object's 3-D coordinates. Next, from the point data, the modeler module constructs the line vector and plane vector equations. The vector equation parameters are compared with parameters from CAD data, and finally the transformation computing processor computes the transformation matrix using the proposed algorithm (Figure 12). Therefore, all an operator needs to do is pick two straight lines or three flat surfaces near the area of interest in the CAD data. The vision sensor attached to the robot manipulator then scans those areas automatically, and the computer extracts the actual vector equations of those lines or surfaces. Finally, the actual position can be calculated by the proposed algorithms.

Because exact CAD data of the folded steel plate were not available, two measurements were necessary: one from the initial position and the other from the moved position. The axis of revolution k was taken as [0 0 1] for convenience. Figure 13 shows the experimental results with two line equations, and Figure 14 those with three plane equations. The errors of the calculated axis of revolution, rotation angle, and translation vector all appeared to be on the order of the resolution of the laser vision sensor, which may be acceptable for arc welding processes.
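The line-based procedure reported above can be outlined end to end: rotation from Eqs. (5)-(8), translation from Eqs. (9)-(15), and the homogeneous matrix of Eq. (4). In this sketch the rotation matrix R is assumed to be already known (e.g., from the Eq. (5) construction), the function name is hypothetical, and b1' is used in place of [R]b1 in Eq. (10), which coincide exactly only for noise-free data:

```python
import numpy as np

def transform_from_line_pairs(R, a1, a2, a1p, a2p, b1p, b2p):
    """Translation from two matched line pairs once the rotation R is known
    (Eqs. 9-15), assembled into the 4x4 homogeneous matrix of Eq. (4).
    a_i: expected location vectors; primed quantities are measured."""
    a1, a2, a1p, a2p, b1p, b2p = (np.asarray(x, float)
                                  for x in (a1, a2, a1p, a2p, b1p, b2p))
    P1 = a1p - R @ a1                        # Eq. (9): fit the first line pair
    # Eqs. (12)-(14): a2' - P1 - m*b1' = t*b2' + R a2, i.e. H X = G
    H = np.column_stack([b2p, b1p])
    G = a2p - P1 - R @ a2
    t, m = np.linalg.solve(H.T @ H, H.T @ G)  # pseudoinverse, Eq. (15)
    P2 = m * b1p                             # Eq. (10): slide along first line
    P = P1 + P2                              # Eq. (11)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, P
    return T
```

Note that the measured location vector a1' need not be the transformed image of a1; any point on the measured line works, because the slide term P2 absorbs the difference, as the second assertion below exercises.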
[Figure 11. Experimental setup for measurement of transformation of a corrugated stainless steel plate]
[Figure 12. Procedure of experiments]

Conclusion and Future Work
This paper provides a complete analytic method for finding the transformation matrix from line vector equations and plane vector equations using a laser vision sensor and CAD data. The key contribution of this paper is that the method needs no corresponding matched points. Therefore, the correspondence problem (matching measured points with CAD data) is of no concern. The method was further verified by the folded steel plate experiment, which simulated the measurement of a press die. The method is described in analytic equations, so no iteration is needed to find the die position. The actual computing time for one position estimation is on the order of fractions of a second, which seems good enough for applicability and economy. However, the method described above has some limitations: it requires two lines or three planes to be found on the measured object. Therefore, further study on freeform surfaces should be conducted so that any kind of press die can be remodeled.

Acknowledgment
This work was supported in part by the Brain Korea 21 Project.

References
Agapakis, J.E. et al. (1990). "Vision-aided robotic welding: An approach and a flexible implementation." Int'l Journal of Robotics Research (v9, n5), pp17-34.
Agapakis, J.E. (1990). "Approaches for recognition and interpretation of workpiece surface features using structured lighting." Int'l Journal of Robotics Research (v9, n5), pp3-16.
Besl, P.J. and McKay, N.D. (1992). "A method for registration of 3-D shapes." IEEE Trans. on Pattern Analysis and Machine Intelligence (v14, n2), pp239-256.
Bhanu, B. (1984). "Representation and shape matching of 3-D objects." IEEE Trans. on Pattern Analysis and Machine Intelligence (v6, n3), pp340-350.
Bhat, V. and De Meter, E.C. (2000). "An analysis of the effect of datum establishment methods on the geometric errors of machined features." Int'l Journal of Machine Tools & Manufacture (v40), pp1951-1975.
Bolles, R.C. and Horaud, P. (1986). "3DPO: A three-dimensional part orientation system." Int'l Journal of Robotics Research (v5, n3), pp3-26.
Chakraborty, D.; De Meter, E.C.; and Szuba, P.S. (2001). "Part location algorithms for an intelligent fixturing system. Part 1: System description and algorithm development." Journal of Manufacturing Systems (v20, n2), pp124-134.
Craig, J.J. (1989). Introduction to Robotics. Addison-Wesley.
Fan, T.J.; Medioni, G.; and Nevatia, R. (1989). "Recognizing 3-D objects using surface descriptions." IEEE Trans. on Pattern Analysis and Machine Intelligence (v11, n11), pp1140-1157.
Faugeras, O.D. and Hebert, M. (1986). "The representation, recognition, and locating of 3-D objects." Int'l Journal of Robotics Research (v5, n3), pp27-52.
Flynn, P. and Jain, A.K. (1992). "3D object recognition using invariant feature indexing of interpretation tables." CVGIP: Image Understanding (v55, n2), pp119-129.
Grimson, W. and Lozano-Perez, T. (1984). "Model-based recognition and localization from sparse range or tactile data." Int'l Journal of Robotics Research (v3, n3), pp3-35.
Nitzan, D. (1988). "Three-dimensional vision structure for robot applications." IEEE Trans. on Pattern Analysis and Machine Intelligence (v10, n3), pp291-309.
Oshima, M. and Shirai, Y. (1983). "Object recognition using three-dimensional information." IEEE Trans. on Pattern Analysis and Machine Intelligence (v5, n4), pp353-361.
Sahoo, K.C. and Menq, C. (1991). "Localization of 3-D objects having complex sculptured surfaces using tactile sensing and surface description." Journal of Engineering for Industry (v113), pp85-92.
Wu, J.; Smith, J.S.; and Lucas, J. (1996). "Weld bead placement system for multipass welding." IEE Proc.-Sci. Meas. Technol. (v143, n2), pp85-90.
Zhang, X. and Roy, U. (1993). "Criteria for establishing datums in manufactured parts." Journal of Manufacturing Systems (v12, n1), pp36-50.

[Figure 13. Result of experiment using two lines: calculated k = [-0.0071 0.0118 0.9975], theta = 9.9409 deg, P = [14.9976 9.7570 4.5129] mm; actual theta = 10 deg, P = [15 10 0] mm]

[Figure 14. Result of experiment using three planes: calculated theta = 10.0862 deg, P = [14.8292 9.7902 0.4355] mm; actual theta = 10 deg, P = [15 10 5] mm]

Authors' Biographies
Jitae Kim received his BS in mechanical engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 1998 and his MS in 2000. He has been in the PhD course at KAIST since 2000. His research interests are in the areas of die repair welding, automatic welding systems, and object recognition by vision sensors.

Suck-Joo Na received his BS in mechanical engineering from Seoul National University (Korea) in 1975, his MS in mechanical engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 1977, and his Dr.-Ing. in welding engineering from TU Braunschweig (Germany) in 1983. In 1983, he joined KAIST to research and lecture on welding and other thermal processes. He is now working mostly in the fields of analysis, optimization, sensors for automation, quality monitoring and control in the welding process, laser micro-welding processes, and Internet-based welding process control. Dr. Na is a member of the Korean Welding Society, the Korean Society of Mechanical Engineers, the American Welding Society, and the German Welding Society.