9th IFAC Conference on Manufacturing Modelling, Management and Control, Berlin, Germany, August 28-30, 2019

Available online at www.sciencedirect.com
ScienceDirect
IFAC PapersOnLine 52-13 (2019) 689–694
Position and orientation calibration of a 2D laser line sensor using closed-form least-squares solution

Prof. Dr.-Ing. R. Müller *, Dr.-Ing. Dipl.-Wirt.-Ing. (FH) M. Vette-Steinkamp, M.Eng. **, A. Kanso, M.Sc. ***

* ZeMA - Zentrum für Mechatronik und Automatisierungstechnik gemeinnützige GmbH, Gewerbepark Eschberger Weg 46, Gebäude 9, 66121 Saarbrücken, Germany (e-mail: [email protected])
** ZeMA - Zentrum für Mechatronik und Automatisierungstechnik gemeinnützige GmbH, Gewerbepark Eschberger Weg 46, Gebäude 9, 66121 Saarbrücken, Germany (e-mail: [email protected])
*** ZeMA - Zentrum für Mechatronik und Automatisierungstechnik gemeinnützige GmbH, Gewerbepark Eschberger Weg 46, Gebäude 9, 66121 Saarbrücken, Germany (e-mail: [email protected])
[email protected]) Abstract: Abstract: Laser Laser line line sensors sensors are are used used for for inspecting, inspecting, positioning positioning and and scanning scanning 3D 3D objects. objects. They They are are Abstract: Laseron arean used for inspecting, positioning scanning 3D objects. They are often mounted online thesensors flange of of an industrial robot. The The laser line lineand sensor provides 2D measurement measurement often mounted the flange industrial robot. laser sensor provides 2D Abstract: Laseronand line arean used for inspecting, positioning scanning 3D objects. They are values, in direction with respect to sensor coordinate frame. In to often mounted thezzsensors flange of industrial laser lineand sensor provides 2D measurement values, in the the x xonand direction with respect robot. to the the The sensor coordinate frame. In order order to transform transform often mounted theinto flange of an industrial robot. The laser line sensor provides 2D measurement the measured values the robot coordinate frame and therefore expand the 2D measured values values, in the x and z direction with respect to the sensor coordinate frame. In order to transform the measured into the robot coordinate and therefore expand theIn2Dorder measured values values, incoordinates, the values x and the z direction with respect toframe the sensor coordinate frame. to transform the measured values into the robot coordinate frame and therefore expand the 2D measured values into 3D position and orientation of the sensor coordinate frame with respect to into 3D coordinates, the position and orientation of the sensor coordinate frame with respect to the the the measured values into the robot coordinate frame and therefore expand the 2D measured values into 3D coordinates, the position and orientation of the sensor coordinate frame with respect to the flange coordinate coordinate system system must must be be determined determined by by solving solving aa kinematic kinematic equation equation in in terms terms of of transformation transformation flange into 3Dcoordinate coordinates, the must position and orientation of thea kinematic sensor coordinate frame with respect to the matrices. This provides abecomplete solution for this A unique solution is flange system determined by solving equation terms of transformation matrices. This article article provides solution for solving solving this problem. problem. A in unique solution is derived derived flange coordinate system must aabecomplete determined by solving a kinematic equation in terms ofunder transformation based on the separable closed-form solution after two movements of the robot flange satisfying matrices. This article provides complete solution for solving this problem. A unique solution is derived based on This the separable closed-form solution after for two movements of the robot flangesolution under satisfying matrices. article provides a complete solution this problem. A unique is derived based on the separable closed-form solution after twosolving movements ofimplemented the robot flange under satisfying constraints [Shiu & Ahmad (2014)]. Robotic applications are usually with the of constraints [Shiu & Ahmad (2014)]. Robotic applications are usually implemented with the presence presence of based on the separable closed-form solution after two movements of the robot flange under satisfying constraints [Shiu & Ahmad (2014)]. Robotic applications are usually implemented with the presence noise. 
Therefore, Therefore, aa least-squares least-squares solution solution is is determined determined after after performing performing several several measurements measurements [Park [Park of & noise. & constraints [ShiuCopyright & Ahmad Robotic applications areperforming usually implemented with the presence cc(2014)]. Martin (1994)]. 2019 IFAC. noise. Therefore, a least-squares solution is determined after several measurements [Park of & Martin (1994)]. Copyright 2019 IFAC. noise. a least-squares solution c 2019 MartinTherefore, (1994)]. Copyright IFAC.is determined after performing several measurements [Park & © 2019,(1994)]. IFAC (International of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. cFederation Martin Copyright 2019 IFAC. Keywords: Keywords: Industrial Industrial robots, robots, Calibration, Calibration, Modeling, Modeling, Least-square Least-square estimate, estimate, Closed-from Closed-from solution solution Keywords: Industrial robots, Calibration, Modeling, Least-square estimate, Closed-from solution Keywords: Industrial robots, Calibration, Modeling, Least-square estimate, Closed-from solution 1. 1. INTRODUCTION INTRODUCTION Shiu Shiu and and Ahmad Ahmad introduced introduced aa closed-form closed-form method method to to solve solve the the 1. INTRODUCTION Shiu and Ahmad introduced a closed-form method to solve the 1. INTRODUCTION Shiu and Ahmad introduced a closed-form method to solve the The problem problem of of determining determining the the position position and and orientation orientation of of The of determining the position and orientation of 3Dproblem sensor mounted mounted on aa robot robot flange has has been studied by by aaThe 3D sensor on flange been studied of determining the and orientation of aThe 3Dproblem sensor mounted on a robot flange been studiedthat by several scientists. 3D sensors sensors are position mostly has stereo cameras that several scientists. 3D are mostly stereo cameras aprovide 3D sensor mounted on a robot flange has been studied by several scientists. sensors are with mostly stereoto that x, yy and and 3D coordinates with respect tocameras the camera camera provide x, zz coordinates respect the several scientists. sensors mostly that provide x, system. y and 3D zWith coordinates with respect camera coordinate system. With those are sensors it is isstereo easytocameras tothe detect the coordinate those sensors it easy to detect the provide x,parameters y and zWith coordinates with respect totoof the camera coordinate system. those sensors it is easy detect the 6D pose of an object with the help a signal 6D pose parameters of an object with the help of a signal coordinate system. With those sensors it the is easy toofdetect thea 6D pose parameters of an object with help a signal processing algorithm, since the 3D measured values cover processing algorithm,ofsince the 3Dwith measured values cover a 6D pose parameters an object the help of a signal processing algorithm, since the 3D measured values cover large measuring measuring range. range. The The kinematic kinematic problem problem of of the the robot robota large processing algorithm, since 3D measured large measuring range. The kinematic problemvalues of thecover robota sensor calibration results inthe homogeneous transformation sensor calibration results in aa homogeneous transformation large measuring range. The kinematic problem ofBthe robot sensor calibration results in a homogeneous transformation equation of the form AX = XB. 
Where A, X and are 4×4 equation of the form AX =inXB. Where A, X and B are 4×4 sensor calibration results a homogeneous transformation equation of the form AX = XB. Where A, X and B are 4×4 homogeneous matrices of the form: homogeneous of the form: equation of thematrices form AX = XB. Where A, X and B are 4×4 homogeneous matrices of the form: RH rrH R homogeneous matrices of theform: (1) H= = TH r H R H (1) H 0 H 1 H 0TTTH 1 H= R (1) r H 1H 0 (1) H= Where and rrH are respectively rotation matrix and the T the Where R RH and are respectively the rotation matrix and the 0 1 H H translational vector of the homogeneous matrix H. A and H Where R and r are respectively the rotation matrix and the H H H of the homogeneous matrix H. A and B translational vector B Where RHthe and rH are the between rotation represent transformation matrices the translational vector of respectively the homogeneous matrixmatrix H.permanent Aand andthe B represent the transformation matrices between the permanent translational vector of coordinate the homogeneous matrix H.permanent A and B represent the transformation matrices between the flange and the sensor system with respect to the flange and the sensor coordinate system with respect to the Fig. 1. 1. A A laser laser line line sensor sensor Gocator Gocator 2330 2330 from from the the company company LMI LMI represent the transformation matrices between the permanent Fig. initial flange and with respect to the sensor coordinate system flange and the sensor coordinate system with respect to the initial flange andsensor with respect to the sensorwith coordinate system Technologies [LMI Technologies (2018)] mounted on Fig. 1. A laser line sensor Gocator 2330 from the company LMIaa flange and the coordinate system respect to the Technologies [LMI Gocator Technologies (2018)] mounted on before moving robot flange respectively. X transinitial flange andthe with respect to the sensor coordinate before moving the robot flange respectively. X is is the thesystem trans- Fig. 1. A laser linemeasuring sensor 2330 from the that company LMIa serial robot the base of a cube is defined Technologies [LMI Technologies (2018)] mounted on initial flange and with respect to the sensor coordinate system serial robot measuring the base of(2018)] a cube that is defined formation matrix the sensor coordinate system with before moving thebetween robot flange respectively. X is the transformation matrix between the sensor coordinate system with Technologies [LMI Technologies mounted on serial robot measuring the base of a cube that is defined as aa calibration calibration target target at at the the initial initial robot robot position position and and after aftera before moving thebetween robot flange respectively. is the transformation matrix the sensor coordinate system with as respect to the the flange coordinate system, thus it it X is the the matrix to respect to flange coordinate system, thus is matrix to serial robot measuring the base of a cube that is defined moving the robot. as a calibration target at the initial robot position and after formation matrix between the sensor coordinate system with moving the robot. be determined in to the figure: 1). respect to the flange coordinate system, thus it(see is the matrix be determined in order order to calibrate calibrate the sensor sensor (see figure: 1). to as a calibration target at the initial robot position and after moving the robot. 
respect to the flange coordinate system, thus it is the matrix to be determined in order to calibrate the sensor (see figure: 1). moving system AX = =the XBrobot. [Shiu & & Ahmad Ahmad (2014)]. (2014)]. The The solution solution has has one one research Interreg V Groregion beThe determined orderby tothe calibrate (seewithin figure:Robotix1). system AX XB [Shiu research is isinfunded funded by the Interreg the V A Asensor Groregion within Robotix The system AX = XBone [Shiu & Ahmad (2014)]. The solution has The research is funded by the Interreg V A Groregion within Robotixtranslational and one rotational degree of freedom freedom as long long asone the Academy project (no 002-4-09-001). The research is(nofunded by the Interreg V A Groregion within Robotixtranslational and rotational degree of as as the Academy project 002-4-09-001). system AX =and XBone [Shiu & Ahmad (2014)]. The solution has The research by the Interreg V A Groregion within RobotixAcademy projectis(no (nofunded 002-4-09-001). translational rotational degree of freedom as long asone the Academy project 002-4-09-001). and one Academy project (no 002-4-09-001). 2405-8963 © 2019, IFAC (International Federation of Automatic Control) translational Hosting by Elsevier Ltd.rotational All rights degree reserved.of freedom as long as the
Copyright © 2019 704 Copyright © under 2019 IFAC IFAC 704 Control. Peer review responsibility of International Federation of Automatic Copyright © 2019 2019 IFAC IFAC 704 Copyright © 704 10.1016/j.ifacol.2019.11.136 Copyright © 2019 IFAC 704
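As a minimal illustration of the homogeneous form in equation (1), the following NumPy sketch assembles and decomposes a 4×4 transformation matrix. The function names (`make_transform`, `split_transform`) are chosen here for illustration only and are not part of the original (Matlab-based) implementation.

```python
import numpy as np

def make_transform(R, r):
    """Assemble H = [[R_H, r_H], [0 0 0, 1]] as in equation (1)."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = r
    return H

def split_transform(H):
    """Recover the rotation matrix R_H and translation vector r_H of H."""
    return H[:3, :3].copy(), H[:3, 3].copy()

# Example: if X maps sensor coordinates to flange coordinates,
# a sensor-frame point p_S is expressed in the flange frame as
#   p_F = (X @ np.append(p_S, 1.0))[:3]
```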
Shiu and Ahmad introduced a closed-form method to solve the system AX = XB [Shiu & Ahmad (1989)]. The solution has one translational and one rotational degree of freedom as long as the rotation angle of A is neither 0 nor π radians. For a unique X, under the assumption that no noise exists in the measurement, two robot movements are essential and a system of two equations is formed:

A_1 X = X B_1      (2)
A_2 X = X B_2      (3)

Park and Martin used Lie theory to derive a closed-form exact solution and a closed-form least-squares solution assuming the presence of noise [Park & Martin (1994)]. A closed-form least-squares solution is more practical, since A and B often contain noisy data: they depend on the measurement system and on the kinematic chain of the robot, and both of these systems have their accuracy boundaries. A practical approach is to perform several measurements after n movements. Thus, a system of n equations is derived:

A_1 X = X B_1      (4)
A_2 X = X B_2      (5)
  ⋮                (6)
A_n X = X B_n      (7)

Shah et al. gave an overview of different methods to solve the system AX = XB [Shah et al. (2012)]. The solutions are classified into groups: separable closed-form solutions, simultaneous closed-form solutions and iterative solutions. A separable closed-form solution implies that the orientation parameters of the matrix X are solved separately from the position parameters. Chou and Kamel solved the calibration problem by using quaternions [Chou & Kamel (1991)].

The challenge addressed here is to measure the 6D pose of a reference or calibration target using a laser line sensor that provides only 2D measuring data (x and z), while the robot is in a static state and the sensor is not yet calibrated to the robot wrist. In the frame of this article, a methodology is presented to measure the 6D pose of a cube that serves as a calibration target. A separable closed-form least-squares solution based on Lie group theory is explained in detail.

2. MEASUREMENT OF THE REFERENCE POSE IN SPACE

The laser line sensor records the object distance along a projected laser line; therefore the collected data is two-dimensional. To generate 3D scans, numerous scan lines are recorded during a movement in a short time sequence and then have to be transformed into a common measuring coordinate system. Thus the pose of the sensor coordinate frame is essential at the time increment t_i at which the i-th scan line is taken. In order to measure a 6D frame with the laser line sensor, physical constraints must be fulfilled. A cube is designed as a calibration target in order to calibrate the sensor (see figure 2). The cube is constructed with high accuracy so that its edges are perpendicular to each other; these edges represent the physical constraints needed to measure a 6D frame. A predefined corner of the cube represents the reference frame, since the three edges of the cube are perpendicular to each other and thus build a coordinate frame. Each edge therefore represents an axis of the reference coordinate frame: x_r, y_r and z_r. The goal is to measure the reference coordinate frame with respect to the sensor coordinate frame. The idea is to position the sensor obliquely to the corner of the cube. The sensor then projects a laser line on the cube.

Fig. 2. The designed cube representing a calibration target.

The three points A, B and C (see figure 2) are important since they lie on the three coordinate axes x_r, z_r and y_r respectively. A signal processing algorithm is needed in order to find the coordinates of the points A, B and C with respect to the sensor frame. The projected line can be divided into several segments with different slopes; the equation of each segment is calculated by using a best-fit method, and the points A, B and C are then obtained as the intersection points of the different segments, as sketched below.
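The following NumPy sketch illustrates the segment-and-intersect step described above. It assumes that the indices at which the profile changes slope are already known (for example from a simple slope or change-point analysis); this preprocessing assumption and the function names are ours, not taken from the paper.

```python
import numpy as np

def fit_line(x, z):
    """Best-fit line z = m*x + q through one segment of the profile (least squares)."""
    m, q = np.polyfit(x, z, 1)
    return m, q

def intersect(line1, line2):
    """Intersection of two profile lines, returned as a point [x, 0, z] in the
    sensor frame (the laser plane is the sensor's x-z plane, so y = 0)."""
    (m1, q1), (m2, q2) = line1, line2
    x = (q2 - q1) / (m1 - m2)   # assumes the two segments are not parallel
    return np.array([x, 0.0, m1 * x + q1])

def break_points(profile_x, profile_z, breaks):
    """Corner points (e.g. A, B, C) of a profile split at the given index boundaries."""
    idx = [0, *breaks, len(profile_x)]
    lines = [fit_line(profile_x[a:b], profile_z[a:b])
             for a, b in zip(idx[:-1], idx[1:])]
    return [intersect(l1, l2) for l1, l2 in zip(lines[:-1], lines[1:])]
```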
The coordinates of the points A, B and C with respect to the sensor coordinate frame S are:

S_rA = [S_xA, 0, S_zA]^T ,  S_rB = [S_xB, 0, S_zB]^T ,  S_rC = [S_xC, 0, S_zC]^T      (8)

The general form of the coordinates of the points with respect to the reference coordinate system is clear, since the points lie on the coordinate axes (see figure 2):

r_rA = [a, 0, 0]^T ,  r_rB = [0, 0, b]^T ,  r_rC = [0, c, 0]^T      (9)

The parameters a, b and c are required for the calculation of the above mentioned coordinates. The parameters d, e and f are calculated as the lengths of the vectors S_rBA, S_rCB and S_rCA respectively:

d = ||S_rA − S_rB||      (10)
e = ||S_rB − S_rC||      (11)
f = ||S_rA − S_rC||      (12)

A system of three equations and three unknowns is built using the Pythagorean theorem for right triangles:

d^2 = a^2 + b^2      (13)
a^2 = f^2 − c^2      (14)
b^2 = e^2 − c^2      (15)

Substituting a and b from equations (14) and (15) into equation (13) gives:

d^2 = f^2 + e^2 − 2c^2      (16)
→ c = sqrt((f^2 + e^2 − d^2) / 2)      (17)

Substituting c into equations (14) and (15) determines a and b:

→ a = sqrt((d^2 + f^2 − e^2) / 2)      (18)
→ b = sqrt((d^2 + e^2 − f^2) / 2)      (19)
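A minimal NumPy sketch of equations (10)-(19); the function name `edge_coordinates` is ours. It takes the measured points A, B and C as 3-vectors in the sensor frame (with y = 0) and returns the edge coordinates a, b and c.

```python
import numpy as np

def edge_coordinates(S_rA, S_rB, S_rC):
    """Edge coordinates a, b, c of the points A, B, C on the reference axes."""
    d = np.linalg.norm(S_rA - S_rB)          # eq. (10)
    e = np.linalg.norm(S_rB - S_rC)          # eq. (11)
    f = np.linalg.norm(S_rA - S_rC)          # eq. (12)
    c = np.sqrt((f**2 + e**2 - d**2) / 2.0)  # eq. (17)
    a = np.sqrt((d**2 + f**2 - e**2) / 2.0)  # eq. (18)
    b = np.sqrt((d**2 + e**2 - f**2) / 2.0)  # eq. (19)
    return a, b, c
```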
The parameters a, b and c are positive, since the points A, B and C lie on the positive side of the coordinate axes (see figure 2). Three points on the coordinate axes of the reference coordinate frame are not sufficient to determine the transformation from the reference coordinate frame to the sensor coordinate frame, S_Tr, because the origin of the reference frame with respect to the sensor coordinate frame, S_rOr, is still unknown. An auxiliary coordinate frame is defined to calculate the transformation S_Tr (see figure 3).

Fig. 3. Auxiliary coordinate frame.

The x-axis of the auxiliary coordinate frame is the unit vector of S_rBA:

→ S_rhx = S_rBA / ||S_rBA||      (20)

The point C (S_rC) is chosen to lie on the y-axis S_rhy of the auxiliary coordinate frame. To construct the axes, an auxiliary unit vector is first defined as:

→ S_rBCh = S_rBC / ||S_rBC||      (21)

Therefore the z-axis of the auxiliary coordinate frame is defined as:

→ S_rhz = S_rhx × S_rBCh      (22)

The axis S_rhy is calculated using the right-hand rule:

→ S_rhy = S_rhz × S_rhx      (23)

The position of the origin of the auxiliary coordinate frame, S_rOh, is still missing to completely define the frame. The origin S_rOh is the intersection of the two lines g1 and g2, where S_rhx and S_rhy are the unit direction vectors of g1 and g2 respectively, g1 passes through the point B and g2 through the point C. This implies the following equations:

S_rOh = S_rB + λ1 · S_rhx      (24)
S_rOh = S_rC + λ2 · S_rhy      (25)
⇒ S_rB + λ1 · S_rhx = S_rC + λ2 · S_rhy      (26)

Expanding the last equation into its x, y and z parts creates a system of three equations and two unknowns. Solving this system gives λ1 and λ2 as follows:

λ1 = [(S_xC − S_xB) · S_yhy − (S_yC − S_yB) · S_xhy] / (S_xhx · S_yhy − S_yhx · S_xhy)      (27)
λ2 = [(S_xC − S_xB) · S_zhx − (S_zC − S_zB) · S_xhx] / (S_xhx · S_zhy − S_zhx · S_xhy)      (28)

The denominators of the last two equations are evaluated, and the λ related to a non-zero denominator is taken and substituted into the equation of the relevant line (g1 or g2). In the frame of this work λ2 is taken into consideration; it is substituted into the equation of g2 (equation (25)) to calculate S_rOh. Therefore the auxiliary coordinate frame is completely determined with respect to the sensor coordinate frame:

S_Th = [ S_rhx  S_rhy  S_rhz  S_rOh ; 0 0 0 1 ]      (29)
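The construction of the auxiliary frame, equations (20)-(29), can be sketched as follows. The function name is ours, and the intersection of g1 and g2 is computed here with a small least-squares solve instead of the explicit component formulas (27)-(28), which is equivalent for noise-free data.

```python
import numpy as np

def auxiliary_frame(S_rA, S_rB, S_rC):
    """Auxiliary frame S_T_h (eq. (29)) from the points A, B, C in the sensor frame."""
    r_hx = (S_rA - S_rB) / np.linalg.norm(S_rA - S_rB)    # eq. (20)
    r_BCh = (S_rC - S_rB) / np.linalg.norm(S_rC - S_rB)   # eq. (21)
    r_hz = np.cross(r_hx, r_BCh)                           # eq. (22)
    r_hz /= np.linalg.norm(r_hz)                           # normalised to keep the frame orthonormal
    r_hy = np.cross(r_hz, r_hx)                            # eq. (23)
    # Origin O_h: intersection of g1 (through B along r_hx) and g2 (through C along r_hy),
    # i.e. solve  B + lam1*r_hx = C + lam2*r_hy  for lam1, lam2 (eqs. (24)-(28)).
    M = np.column_stack((r_hx, -r_hy))
    lam = np.linalg.lstsq(M, S_rC - S_rB, rcond=None)[0]
    r_Oh = S_rC + lam[1] * r_hy                            # eq. (25), using lambda_2
    S_Th = np.eye(4)
    S_Th[:3, :3] = np.column_stack((r_hx, r_hy, r_hz))
    S_Th[:3, 3] = r_Oh
    return S_Th
```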
The transformation from the auxiliary coordinate frame to the reference coordinate system, r_Th, is required to calculate S_Tr. Let θ be the angle between the vectors S_rAB and S_rAOr:

→ θ = arctan2(b, a)      (30)

where arctan2(y, x) is the four-quadrant inverse tangent, which takes the signs of both y and x into account to identify the quadrant in which the resulting angle lies. The position of the origin of the auxiliary frame with respect to the reference frame, r_rOh, is then calculated as follows:

r_rOh = [ a − ||S_rOhA|| · cos(θ), 0, ||S_rOhA|| · sin(θ) ]^T      (31)

where S_rOhA = S_rA − S_rOh. Then r_rhx, r_rhy and r_rhz can be calculated as:

r_rhx = (r_rA − r_rOh) / ||r_rA − r_rOh||      (32)
r_rhy = (r_rC − r_rOh) / ||r_rC − r_rOh||      (33)
→ r_rhz = r_rhx × r_rhy      (34)

Therefore r_Th is completely determined:

r_Th = [ r_rhx  r_rhy  r_rhz  r_rOh ; 0 0 0 1 ]      (35)

The desired transformation from the reference coordinate system to the laser line sensor coordinate system is then calculated as:

S_Tr = S_Th · h_Tr      (36)
S_Tr = S_Th · (r_Th)^{−1}      (37)

The calibration matrix of the sensor on the robot flange is represented by F_TS. The transformation F_Tr could be measured with the help of a high-precision external measurement system, e.g. a laser tracker. Then the calibration matrix would be calculated as:

F_TS = F_Tr · r_TS      (38)
F_TS = F_Tr · (S_Tr)^{−1}      (39)

Due to the high costs of such measurement systems, they are not available to every user. Thus this solution is not practical and there is a demand for a more functional solution that does not rely on high-end measurement systems. A methodology based on the closed-form solution is introduced in the next chapter.
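Combining the steps of this section, a minimal NumPy sketch for S_Tr (equation (37)) could look as follows; it assumes a, b, c from the Pythagorean relations and S_Th from the auxiliary-frame construction above, and the function name is again ours.

```python
import numpy as np

def compute_S_Tr(a, b, c, S_Th, S_rA):
    """Transformation S_T_r (eq. (37)) from a, b, c, the auxiliary frame S_T_h
    and the measured point A (sensor coordinates)."""
    r_rA = np.array([a, 0.0, 0.0])                 # point A in the reference frame
    r_rC = np.array([0.0, c, 0.0])                 # point C in the reference frame
    S_rOh = S_Th[:3, 3]                            # origin O_h in sensor coordinates
    theta = np.arctan2(b, a)                       # eq. (30)
    t = np.linalg.norm(S_rA - S_rOh)               # ||S_r_OhA||
    r_rOh = np.array([a - t * np.cos(theta), 0.0, t * np.sin(theta)])   # eq. (31)
    r_hx = (r_rA - r_rOh) / np.linalg.norm(r_rA - r_rOh)   # eq. (32)
    r_hy = (r_rC - r_rOh) / np.linalg.norm(r_rC - r_rOh)   # eq. (33)
    r_hz = np.cross(r_hx, r_hy)                             # eq. (34)
    r_Th = np.eye(4)                                        # eq. (35)
    r_Th[:3, :3] = np.column_stack((r_hx, r_hy, r_hz))
    r_Th[:3, 3] = r_rOh
    return S_Th @ np.linalg.inv(r_Th)                       # eq. (37)
```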
3. SEPARABLE CLOSED-FORM SOLUTION BASED ON LIE GROUP THEORY

The pose of the reference is unknown with respect to the flange coordinate system. Therefore a system of equations must be set up in order to calculate the calibration matrix. The pose of the cube is measured with the laser line sensor in the static state after moving the sensor to different poses. The transformation matrix S_Tr is calculated as described in the previous chapter. The pose of the cube with respect to the robot base coordinate frame, B_Tr, is constant while the robot flange is moved from one pose to another. Considering the assumptions that no noise exists in the sensor measurement, that the constructed cube edges are at 90° angles to each other and that the pose of the robot flange with respect to the robot base coordinate frame, B_TF, is accurate, the problem after changing the pose of the sensor can be mathematically expressed as (see figure 4):

Fig. 4. Schematic description of transformations in the system.

B_Tr (before the movement) = B_Tr (after the movement)      (40)
B_TF1 · F1_TS1 · S1_Tr = B_TF2 · F2_TS2 · S2_Tr      (41)

The sensor pose is also constant with respect to the flange while the flange is moving:

→ F1_TS1 = F2_TS2 = X      (42)

Substituting equation (42) into equation (41) yields:

B_TF1 · X · S1_Tr = B_TF2 · X · S2_Tr      (43)

Multiplying both sides of the equation by F2_TB from the left and by r_TS1 from the right gives:

F2_TB · B_TF1 · X · S1_Tr · r_TS1 = F2_TB · B_TF2 · X · S2_Tr · r_TS1      (44)
F2_TF1 · X = X · S2_TS1      (45)

Let F2_TF1 and S2_TS1 be denoted by A and B respectively. The sensor calibration problem can then be represented by the equation:

AX = XB      (46)

The matrices A, B and X are expanded into a rotation matrix and a translational vector. The calibration problem is divided into a translational and an orientation problem, so that each component is solved separately:

[ RA rA ; 0^T 1 ] · [ RX rX ; 0^T 1 ] = [ RX rX ; 0^T 1 ] · [ RB rB ; 0^T 1 ]      (47)

Multiplying the matrices on the left and right sides results in the following equations:

• Orientation component:
RA · RX = RX · RB      (48)

• Translational component:
RA · rX + rA = RX · rB + rX      (49)

These two problems are solved separately. The orientation component is solved first to find RX; the latter is then substituted into equation (49) to find rX.
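For the practical construction of the pairs (A_i, B_i) from recorded poses, a minimal NumPy sketch could look as follows. The function name is ours, and each motion is taken here relative to the first recorded pose (consecutive pose pairs work equally well).

```python
import numpy as np

def relative_motions(B_T_F, S_T_r):
    """Pairs (A_i, B_i) of equation (46) from n+1 recorded poses.

    B_T_F : list of 4x4 flange poses in the robot base frame (robot controller).
    S_T_r : list of 4x4 cube poses in the sensor frame (measured as in section 2),
            recorded at the same robot configurations.
    """
    A_list, B_list = [], []
    for k in range(1, len(B_T_F)):
        A = np.linalg.inv(B_T_F[k]) @ B_T_F[0]   # relative flange motion, cf. eq. (45)
        B = S_T_r[k] @ np.linalg.inv(S_T_r[0])   # relative sensor motion, cf. eq. (45)
        A_list.append(A)
        B_list.append(B)
    return A_list, B_list
```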
3.1 Orientation component

The aim is to find the rotation matrix RX of the homogeneous transformation matrix X. With the aid of Lie group theory, the orientation component problem can be transformed into a linear system.

Based on Euler's theorem, any rotation or rotation matrix R can be expressed by a rotation axis u and a rotation angle ϑ around it. A compact description of the rotation axis and rotation angle is the 3D rotation vector rv: its direction is the rotation axis and its magnitude in radians is the rotation angle. The rotation angle, axis and vector can be calculated as:

ϑ = arccos((tr(R) − 1) / 2)      (50)
Sk(u) = (R − R^T) / (2 sin(ϑ))      (51)
rv = ϑ · u      (52)

The rotation matrix R can be expressed in exponential form [Gallego & Yezzi (2015)][Marcus (1960)] as a function of the rotation vector rv:

R = exp(Sk(rv))      (53)

where Sk(rv) is the skew-symmetric matrix of the rotation vector. A skew-symmetric matrix has the characteristic that its transpose is equal to its negative, and it has the following form:

Sk(rv) = [ 0 −z_rv y_rv ; z_rv 0 −x_rv ; −y_rv x_rv 0 ]      (54)

Then the logarithm of the rotation matrix R is equal to the skew matrix of the rotation vector:

log(R) = log(exp(Sk(rv))) = Sk(rv)      (55)

The shorthand notation of log(R) can thus be expressed through rv:

→ log(R) = ϑ · (R − R^T) / (2 sin(ϑ))      (56)

Reforming equation (48) by right-multiplying both sides by RX^T implies:

RA = RX · RB · RX^T      (57)

Let rvA and rvB be the rotation vectors of RA and RB respectively. Writing equation (57) in exponential form and applying the identity RX · exp(Sk(rvB)) · RX^T = exp(RX · Sk(rvB) · RX^T) [Eberly (2007)]:

exp(Sk(rvA)) = exp(RX · Sk(rvB) · RX^T)      (58)

After applying the identity RX · Sk(rvB) · RX^T = Sk(RX · rvB) [Condurache & Burlacu (2016)], the orientation problem can be written as:

exp(Sk(rvA)) = exp(Sk(RX · rvB))      (59)

Taking the logarithm of both sides implies:

Sk(rvA) = Sk(RX · rvB)      (60)
→ rvA = RX · rvB      (61)
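Equations (50)-(52) correspond to the matrix logarithm of a rotation; a minimal NumPy sketch (function name ours) is:

```python
import numpy as np

def rotation_vector(R):
    """Rotation vector r_v = theta * u of a rotation matrix R (eqs. (50)-(52)).
    Assumes the rotation angle is neither 0 nor pi, as required by the method."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))  # eq. (50)
    Sk_u = (R - R.T) / (2.0 * np.sin(theta))                          # eq. (51)
    u = np.array([Sk_u[2, 1], Sk_u[0, 2], Sk_u[1, 0]])                # axis, cf. eq. (54)
    return theta * u                                                  # eq. (52)

# For every measured pair, equation (61) then reads:
#   rotation_vector(R_A) = R_X @ rotation_vector(R_B)
```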
Equation (61) can only be fulfilled if the rotation vectors rvA and rvB have the same magnitude, because RX is an orthogonal matrix with determinant equal to one and therefore preserves the vector norm.

3.2 Translational component

The translational problem is solvable after substituting RX into equation (49) and solving it with respect to rX:

(rA − RX · rB) = (I3×3 − RA) · rX      (62)

Let C = (I3×3 − RA) and d = (rA − RX · rB), where I3×3 is a 3×3 identity matrix:

→ C · rX = d      (63)

The parameters C and d are known.

4. LEAST-SQUARES SOLUTION

The assumptions considered above are not completely valid. This is due to the tolerances in constructing the cube and the inaccuracy of the kinematic chain of the robot, as well as the noise occurring while measuring the cube with the laser line sensor. A least-squares solution is needed to minimize the error in the calibration matrix of the sensor. A set of measurement data is therefore collected with different sensor poses after n movements of the robot. The problem is again separated into an orientation problem and a translational problem.

4.1 Orientation component

Applying the solution derived for the orientation problem above n times gives:

rvA1 = RX · rvB1      (64)
rvA2 = RX · rvB2      (65)
  ⋮                   (66)
rvAn = RX · rvBn      (67)

RX can be determined by solving the following minimization problem [Spital et al. (1966)][Farrell & Stuelpnagel (1965)]:

min_RX Σ_{i=1}^{n} ||RX · rvBi − rvAi||^2      (68)

Considering the following definitions:

Al = (rvA1, rvA2, ..., rvAn)      (69)
Bl = (rvB1, rvB2, ..., rvBn)      (70)

the term Σ_{i=1}^{n} ||RX · rvBi − rvAi||^2 can be written as tr((RX · Bl − Al)^T · (RX · Bl − Al)). Expanding the latter:

tr((RX · Bl − Al)^T · (RX · Bl − Al)) = tr(Bl^T · Bl) − 2 · tr(Al^T · RX · Bl) + tr(Al^T · Al)      (71)

The aforementioned minimization problem is solved when tr(Al^T · RX · Bl) is maximized, since tr(Bl^T · Bl) and tr(Al^T · Al) are constant and independent of RX. Let T = Bl · Al^T. Then:

tr(Al^T · RX · Bl) = tr(RX · Bl · Al^T)      (72)
→ tr(RX · Bl · Al^T) = tr(RX · T)      (73)
Performing a polar decomposition of T results in:

T = U · P      (74)

where U is an orthogonal matrix and P is a symmetric positive semi-definite matrix. P can be expressed as P = N · D · N^T, where D is a diagonal matrix of the eigenvalues in decreasing order and N is the orthogonal matrix of the corresponding eigenvectors.

→ tr(RX · T) = tr(RX · U · N · D · N^T)      (75)

Reformulating the equation above and defining G = N^T · RX · U · N:

→ tr(RX · T) = tr(N^T · RX · U · N · D) = tr(G · D) = Σ_{i=1}^{3} d_i · g_{i,i}      (76)

As mentioned before, P is a symmetric positive semi-definite matrix, so d1, d2 and d3 are non-negative and arranged in decreasing order. tr(G · D) is therefore maximized when the diagonal of G reaches its maximum. Since G is an orthogonal matrix, the values of its elements vary between −1 and 1. To solve the maximization problem of tr(G · D), g_{i,j} is set to one if i = j and to zero if i ≠ j, as far as the following determinant constraint allows:

det(G) = det(N)^2 · det(RX) · det(U)      (77)

If det(U) = −1, then g_{3,3} = −1:

→ G = [ I2×2 0 ; 0^T −1 ]      (78)

If det(U) = 1, then g_{3,3} = 1:

→ G = [ I2×2 0 ; 0^T 1 ]      (79)

Then RX is calculated as:

RX = N · G · N^T · U^T      (80)

4.2 Translational component

The translational problem is solved as discussed in chapter 3.2. Its equation (62) is expanded to a system of n equations. Due to the noise and inaccuracies, the system needs to be solved by a least-squares method:

(rA1 − RX · rB1) = (I3×3 − RA1) · rX      (81)
(rA2 − RX · rB2) = (I3×3 − RA2) · rX      (82)
  ⋮                                       (83)
(rAn − RX · rBn) = (I3×3 − RAn) · rX      (84)

The matrix Cl and the vector dl are defined as follows:

Cl = [ (I3×3 − RA1)^T, (I3×3 − RA2)^T, ..., (I3×3 − RAn)^T ]^T      (85)
dl = [ (rA1 − RX · rB1)^T, (rA2 − RX · rB2)^T, ..., (rAn − RX · rBn)^T ]^T      (86)

Cl and dl have known parameters after substituting RX, which is calculated by solving the orientation problem as explained in section 4.1. The translational problem can now be expressed as:

Cl · rX = dl      (87)

Note that the matrix Cl is not a square matrix; it has more rows than columns. Equation (87) is therefore solved by applying the method of least-squares error:

rX = (Cl^T · Cl)^{−1} · Cl^T · dl      (88)

If the matrix Cl loses column rank, the least-squares solution (88) is not applicable, since Cl^T · Cl cannot be inverted. In this case the pseudoinverse of Cl, based on singular value decomposition, must be calculated to solve the system (87) with respect to rX.
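The complete least-squares estimate of X can be sketched as follows. The polar factors of T are obtained here from an SVD (T = V·Σ·W^T gives U = V·W^T, N = W, D = Σ), which is algebraically equivalent to equations (74)-(80); the translation is solved with a rank-tolerant least-squares routine, matching the pseudoinverse remark above. Function names are ours.

```python
import numpy as np

def rotation_vector(R):
    """Rotation vector of R (eqs. (50)-(52)); repeated here so the sketch is self-contained."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    Sk_u = (R - R.T) / (2.0 * np.sin(theta))
    return theta * np.array([Sk_u[2, 1], Sk_u[0, 2], Sk_u[1, 0]])

def calibrate_least_squares(A_list, B_list):
    """Least-squares estimate of X = F_T_S from the pairs (A_i, B_i) (sections 4.1 and 4.2)."""
    # Orientation (section 4.1): maximise tr(R_X * T) with T = B_l * A_l^T.
    A_l = np.column_stack([rotation_vector(A[:3, :3]) for A in A_list])   # eq. (69)
    B_l = np.column_stack([rotation_vector(B[:3, :3]) for B in B_list])   # eq. (70)
    T = B_l @ A_l.T                                                       # T = B_l * A_l^T
    V, _, Wt = np.linalg.svd(T)      # T = V * Sigma * W^T
    U = V @ Wt                       # orthogonal polar factor, eq. (74)
    N = Wt.T                         # eigenvectors of P = N * D * N^T
    G = np.diag([1.0, 1.0, np.sign(np.linalg.det(U))])                    # eqs. (78)/(79)
    R_X = N @ G @ N.T @ U.T                                               # eq. (80)
    # Translation (section 4.2): stack eqs. (81)-(84) and solve eq. (87).
    C_l = np.vstack([np.eye(3) - A[:3, :3] for A in A_list])              # eq. (85)
    d_l = np.hstack([A[:3, 3] - R_X @ B[:3, 3] for A, B in zip(A_list, B_list)])  # eq. (86)
    r_X = np.linalg.lstsq(C_l, d_l, rcond=None)[0]                        # eqs. (87)/(88), via SVD
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, r_X
    return X
```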
5. IMPLEMENTATION AND VALIDATION

In order to validate the solution described above, an experimental setup is built up at ZeMA to perform the measurements in a demonstrator environment with industrial instruments.

5.1 Experimental setup and results

A laser line sensor Gocator 2330 from the company LMI Technologies [LMI Technologies (2018)] is mounted on the serial robot UR10 from the company Universal Robots [Universal Robots (2018)]. The cube serving as calibration target is precisely constructed. The sensor first measures the pose of the cube statically with the help of the algorithm explained above. The robot then performs three movements, and the pose of the cube is measured statically by the sensor after each movement. The pose of the robot flange is recorded each time the cube is measured (see figure 5). Three {Ai, Bi} pairs are thus obtained; the pose of the robot flange B_TF is taken from the controller of the robot.

Fig. 5. Experimental setup at ZeMA for the calibration of the laser line sensor.

The gathered information is then used to calculate the calibration matrix of the sensor by means of the least-squares solution. The calibration error of the position of the sensor is 0.3 mm and the error of the orientation is 0.05°. The accuracy of the calibration is high with respect to the components used: the pose accuracy of the UR10 is in the range of a few millimeters, whereas its repeatability is specified as 0.1 mm. A more detailed accuracy assessment can be obtained by analyzing each component of the system.

6. CONCLUSION

A methodology is developed to calibrate a laser line sensor with respect to the flange of a serial robot. The obtained results are promising and are sufficient for implementation in several industrial applications. The computational time is in the range of a few hundredths of a second using Matlab. An improvement in the accuracy of the calibration can be achieved
by improving the accuracy of the cube, which is the cornerstone of the presented method. The calibration of the robot kinematic chain through the identification of its kinematic parameters would improve the pose accuracy of the robot [Müller et al. (2017)]. In future work, more measurement data will be collected by moving the robot more than three times in order to study the convergence behaviour of the method through statistical processing of the data. Thereby, the calibration accuracy would be improved and brought closer to the repeatability of the robot.

REFERENCES

Y. C. Shiu and S. Ahmad. Calibration of Wrist-Mounted Robotic Sensors by Solving Homogeneous Transform Equations of the Form AX=XB. In: IEEE Transactions on Robotics and Automation, Vol. 5, No. 1, Feb. 1989.
F. C. Park and B. J. Martin. Robot Sensor Calibration: Solving AX=XB on the Euclidean Group. In: IEEE Transactions on Robotics and Automation, Vol. 10, No. 5, Oct. 1994.
LMI Technologies. https://lmi3d.com/
M. Shah, R. D. Eastman, and T. Hong. An Overview of Robot-Sensor Calibration Methods for Evaluation of Perception Systems. In: Proceedings of the Workshop on Performance Metrics for Intelligent Systems (PerMIS'12), ACM, 2012, pp. 15-20.
J. C. K. Chou and M. Kamel. Finding the Position and Orientation of a Sensor on a Robot Manipulator Using Quaternions. In: The International Journal of Robotics Research, 10(3): 240-254, 1991.
G. Gallego and A. Yezzi. A Compact Formula for the Derivative of a 3-D Rotation in Exponential Coordinates. In: Journal of Mathematical Imaging and Vision, Vol. 51, No. 3, 2015, p. 378.
M. Marcus. Basic Theorems in Matrix Theory. In: Nat. Bureau of Stds., Appl. Math. Series No. 57, 1960.
D. Eberly. Constructing Rotation Matrices Using Power Series. In: Geometric Tools, Redmond WA 98052, Dec. 2007.
D. Condurache and A. Burlacu. Orthogonal Dual Tensor Method for Solving the AX=XB Sensor Calibration Problem. In: Mechanism and Machine Theory, Jun. 2016.
S. Spital, D. L. Lansing, and H. E. Fettis. Problem 65-2. In: SIAM Review, Vol. 8, No. 3, Jul. 1966, pp. 386-388.
J. L. Farrell and J. C. Stuelpnagel. A Least Squares Estimate of Satellite Attitude, Problem 65-1. In: SIAM Review, Vol. 8, No. 3, Jul. 1965.
H. T. Russell and P. P. Richard. On Homogeneous Transforms, Quaternions, and Computational Efficiency. In: IEEE Transactions on Robotics and Automation, 6(3), pp. 382-388, 1990.
Universal Robots A/S. https://www.universal-robots.com/
R. Müller, M. Vette, A. Geenen, T. Masiak and A. Kanso. Methodology for Design of Mechatronics Based on Suitability for Modern Application Scenarios. In: IFAC-PapersOnLine 50(1): 12727-12733, Oct. 2017.