Procedia Manufacturing 11 (2017) 405–412
27th International Conference on Flexible Automation and Intelligent Manufacturing, FAIM2017, 27-30 June 2017, Modena, Italy
Cognitive Robot Referencing System for High Accuracy Manufacturing Task

Cristina Cristalli, Luca Lattanzi, Daniele Massa*, Giacomo Angione
Loccioni Group, Via Fiume 16, Angeli di Rosora 60030, Italy
Abstract

Industrial robots can be considered very repeatable machines, but they usually lack absolute accuracy. However, high accuracy during the execution of the task is becoming a more and more critical factor in industrial manufacturing domains. For that reason, in order to fully automate manufacturing processes, high-precision tasks usually need the integration of additional sensors to improve robot accuracy. This paper proposes an embedded, cognitive and self-learning stereo-vision system that can be used to reference the robot position with respect to the work-piece, increasing robot accuracy locally. An industrial use-case is also proposed and experimental results are presented.

© 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of the 27th International Conference on Flexible Automation and Intelligent Manufacturing.

Keywords: industrial robotics; stereo-vision; neural-network; robot referencing; high-accuracy manufacturing
1. Introduction

Nowadays, robots used for industrial applications still represent about 90 percent of the overall robotics market. The demand for more flexibility on the shop-floor requires novel robotic technologies that are able to cope with a larger amount of variability than classical shop-floor robots. Classical industrial robots usually employed on the shop-floor are well-established components of modern manufacturing lines. They can be programmed to perform
* Corresponding author. Tel.: +39 0731 816 450; fax: +39 0731 814700. E-mail address:
[email protected]
doi:10.1016/j.promfg.2017.07.125
almost any movement with very high repeatability [1]. However, industrial robots usually lack absolute accuracy and adaptability. In manufacturing domains, the need for high accuracy during task execution is increasingly becoming a critical factor. Depending on the application domain, the required accuracy can be obtained by exploiting different solutions. For example, in the automotive domain, welding tasks on small and medium work-pieces can be performed by maintaining the object in a fixed position and making use of several well-defined reference positions to improve robot accuracy. On the other hand, in some manufacturing domains this kind of approach cannot be exploited: for example, in the aircraft assembly process, the large dimensions of the work-pieces (which cannot be fixed in a very precise way) accentuate the robots' lack of absolute accuracy. In order to assemble two pieces of an aircraft fuselage, a large quantity of holes needs to be drilled with high precision. Currently, this precision is obtained by mounting special jigs directly on the fuselage, based on appropriate reference targets: pre-drilled holes are located on these jigs, in order to precisely guide the operator when manually drilling holes on the fuselage.
Fig. 1. Example of holes pattern to drill (blue square) with respect to reference targets (red circles).
In order to perform the task automatically with a robotized solution, the robot must be able to identify and localize the referencing targets with precision, and to move to the final positions composing the drilling pattern (Fig. 1), guaranteeing a total absolute accuracy of less than 0.5 mm.
Fig. 2. Different kinds of referencing target: (a) hole; (b) countersink hole; (c) rivet head; (d) temporary fastener.
An additional difficulty in automating the process comes from the variability of the referencing targets that can be used, without any specific a-priori knowledge for the robot. In fact, different targets with different shapes
and dimensions can be used interchangeably during the drilling process to mark the numerous drilling areas of the aircraft fuselage, without specific constraints or information. In particular, the most common referencing targets include holes, countersink holes, rivet heads and temporary fasteners (Fig. 2).

A vision-based referencing system for standard robotic arms is proposed in this paper. The main novelty of the system is its ability to cope with the uncertainties and variability of the objects to be recognized (reference targets for the robot). In fact, the system is able to properly classify and analyze the acquired images (through the exploitation of a neural network on-chip), extracting from the images the 3D coordinates of the reference targets (based on well-known stereo-vision principles [2]), and computing a new robot reference frame which allows the robotic arm to reach the target positions with a total absolute accuracy of less than 0.5 mm. The system has also been validated in a practical implementation in a real shop-floor environment (aircraft assembly domain).

After a brief overview of different solutions developed to increase robot absolute accuracy, the paper is organized as follows: Section 2 describes the proposed robot referencing system, focusing on the different steps of the procedure; Section 3 shows experimental results obtained in a real shop-floor application; finally, Section 4 sums up conclusions and further work.

1.1. Related works

The problem of robot accuracy has been widely discussed in recent years. In [3] the dynamic model of an industrial robot with elastic joints is estimated in order to improve dynamic accuracy. Still, in an industrial context it is usually difficult to make use of the dynamic model of the robot, as industrial robots are usually position or velocity controlled. In order to improve robot accuracy locally or globally, external measuring systems (e.g. laser trackers) can be used to track robot movements, closing the position control loop with real-time feedback on the final robot TCP position [4]. However, as on-line robot control compensation is needed, additional space is required to integrate them into the shop-floor. These kinds of instruments can also be used to perform an off-line calibration of the robot kinematic model in order to improve accuracy [5,6], although they turn out to be very expensive. In [7] a 3D sensor is instead used to improve the robot accuracy locally: unfortunately, this sort of approach can be used only on small work-pieces. Whenever it is possible to exploit known features of the work-piece, visual servoing can be used [8,9]: this solution avoids the need for absolute accuracy, guiding robot movements in real time based on the relative displacement from the reference features.

2. Cognitive referencing procedure

The proposed solution consists in the integration on a robotic arm of a cognitive stereo-vision system, able to detect reference targets and compute their position in 3D space (with respect to the robot base frame). The 3D coordinates of the identified targets are then used to re-calibrate the robotic manipulator, creating a local referenced frame used by the robot when positioning the drilling tool.
The cognitive stereo-vision system is composed of two 2D cameras (Ximea MU9Px-MH), a National Instruments sbRIO-9651 System on Module (SOM) [10] for image acquisition and robot control, and a NeuroMem® Board developed by Cogito Instruments [11] that implements a hardware neural network. Cognitive capabilities permit the system to infer decisions based on past experience, as well as to recognize the type of reference target acquired by the stereo-vision system. In the following subsections the different steps of the referencing procedure are presented.

2.1. Stereo vision system calibration

The first step required in every application based on a stereo-vision solution is the calibration of the cameras composing the system. The software framework of the stereo-vision system is developed in LabVIEW™, using the NI Vision Development Module [12] for the image processing algorithms and the OpenCV library (version 2.4.11, [13]) for the stereo algorithms, both at calibration level and at measurement level. The calibration procedure follows the approach presented in [14], considering a sparse stereo reconstruction where a number of features are
extracted from the two images by using pattern matching algorithms with pyramidal decomposition and sub-pixel refinement [12]. The positions of the same features on the stereo-pair images are then used with the stereo measurement algorithms in order to compute the real-world 3D coordinates of each extracted feature. For each feature, the procedure is [2,15]:

1. look for the same feature on the left and right images and store the (x, y) pixel coordinates of the feature in the two images;
2. undistort and rectify the x and y coordinates using the OpenCV function undistortPoints, by providing the feature coordinates, the camera matrices, the distortion coefficients, the rectification transformation matrix and the projection matrix;
3. compute the disparity value as the difference between the undistorted and rectified x values of the two images;
4. evaluate the real-world coordinates with the OpenCV function perspectiveTransform, by providing the array of undistorted points and disparities and the disparity-to-depth mapping matrix.

The system has been calibrated as described in [15] by using a rectangular grid of equally spaced dots. The calibration grid is shaped as a regular grid of 14 × 10 dots, equally spaced by 2 mm and with diameters of 1 mm. The calibration has been performed by using 6 different views of the calibration grid: at first, each camera has been calibrated to compute its intrinsic parameters, using a model for optical distortion with 8 parameters [16,17], with the OpenCV function calibrateCamera. Then, the stereo system has been calibrated using the intrinsic parameters from the previous step and computing the extrinsic parameters of the stereo system with the OpenCV function stereoCalibrate. Both functions return a value to estimate the precision of the found parameters. This value is computed considering all the grid translation and rotation vectors found with calibrateCamera: the function re-projects the points used for finding those translation and rotation vectors and computes the root mean square (RMS) of the Euclidean distances between the re-projected points and the actual coordinates of those points. With respect to the considered application, a good calibration should give RMS values around 1 pixel or lower for single camera calibration and around 2 pixels or lower for stereo calibration.
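As an illustration of steps 1–4 above, the following sketch shows how the sparse reconstruction could be implemented with the OpenCV Python bindings; the paper's actual implementation is in LabVIEW™, and the variable names (the rectification outputs R1, P1, R2, P2 and the disparity-to-depth matrix Q, as produced e.g. by cv2.stereoRectify) are our own assumptions:

```python
import numpy as np
import cv2

def triangulate_sparse(pts_left, pts_right, K1, d1, R1, P1, K2, d2, R2, P2, Q):
    """Sparse stereo reconstruction (steps 1-4) for matched pixel features."""
    # Step 1: matched feature coordinates from the left and right images.
    pl = np.asarray(pts_left, dtype=np.float32).reshape(-1, 1, 2)
    pr = np.asarray(pts_right, dtype=np.float32).reshape(-1, 1, 2)
    # Step 2: undistort and rectify the feature coordinates of both cameras.
    ul = cv2.undistortPoints(pl, K1, d1, R=R1, P=P1)
    ur = cv2.undistortPoints(pr, K2, d2, R=R2, P=P2)
    # Step 3: disparity = difference of the rectified x coordinates.
    disp = ul[:, 0, 0] - ur[:, 0, 0]
    # Step 4: map (x, y, disparity) to 3D points via the Q matrix.
    xyd = np.stack([ul[:, 0, 0], ul[:, 0, 1], disp], axis=1).reshape(-1, 1, 3)
    return cv2.perspectiveTransform(xyd, Q).reshape(-1, 3)
```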
2.2. TCP-Stereo vision calibration

After the stereo-vision system has been calibrated and validated, the reference frame of the stereo cameras must be calibrated with respect to the reference frame of the robot TCP. In order to do that, an appropriate mechanical calibration tool that can be fixed on the robot TCP has been developed. The calibration tool is made up of an aluminum plate with calibrated holes drilled on it, fixed to the TCP. Knowing the distance between the holes, the positions of those holes in the TCP frame, and measuring the positions of the holes with the stereo-vision system, it is possible to determine the calibration matrix $^{tcp}M_{sv}$ such that:

$$^{tcp}P = {}^{tcp}M_{sv} \cdot {}^{sv}P \qquad (1)$$
where $^{sv}P$ is the matrix (with one column per hole of the calibration tool) containing the positions of the holes in the stereo-vision reference frame, and $^{tcp}P$ is the matrix containing the positions of the holes in the TCP reference frame. To compute the calibration matrix that best fits the rigid transformation between the two sets of points, the Singular Value Decomposition (SVD) has been used, as sketched below. In addition, the position of the central hole of the calibration tool is stored in the stereo-vision reference frame, in order to be used in the next steps of the referencing algorithm.
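A minimal sketch of this SVD-based best fit (the classical Kabsch/Arun solution) is given below. It assumes the hole positions are stacked as n × 3 arrays; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def fit_rigid_transform(sv_P, tcp_P):
    """Best-fit 4x4 matrix tcp_M_sv mapping stereo-frame points to TCP-frame points."""
    sv_c, tcp_c = sv_P.mean(axis=0), tcp_P.mean(axis=0)
    H = (sv_P - sv_c).T @ (tcp_P - tcp_c)   # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against an improper (reflected) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tcp_c - R @ sv_c
    M = np.eye(4)                           # assemble the homogeneous transform
    M[:3, :3], M[:3, 3] = R, t
    return M
```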
2.3. The referencing procedure

Once the two calibrations described in the previous sub-sections have been performed, the referencing procedure can be executed. This procedure consists in the identification of the 3D position of several referencing targets on the work-piece. It is important to highlight that, in order to define a univocal re-calibration frame for the robot, 3 referencing targets are needed (point-line-plane referencing assumption). The position of the first referencing target will represent the origin of the new robot frame ($p_{origin}$), the second reference target will give information about the direction of the x axis for the new robot frame ($p_{xDir}$), while the last target will define the x-y plane ($p_{aux}$). In the following, the different steps composing the referencing procedure are reported:

• for each referencing target:
  ○ the robot moves to the nominal position, i.e. a pre-programmed position where it expects to find the referencing target. Such a position may differ from the actual position of the considered reference target, as misalignments and uncertainties can affect the final position of the referencing target itself. The stereo-vision system has been designed to guarantee a field of view of about 40 × 50 mm (much larger than the tolerances admitted by the considered application domain): this guarantees that, despite uncertainties and variability in the work-piece positions, the referencing target will always be in the field of view of both cameras;
  ○ the stereo-vision system acquires the images;
  ○ the neural network classifies the type of the referencing target, based on past experience and on self-learning capabilities;
  ○ the proper image analysis algorithm (depending on the target type) is used to compute the position of the referencing target;
  ○ if the target is not properly centered in the field of view of the stereo-vision system, a corrected robot position is calculated, which centers the target in the camera reference frame. This new position is obtained as follows:

$$^{w}p' = {}^{w}p + {}^{w}M_{tcp} \cdot {}^{tcp}M_{sv} \cdot {}^{sv}\Delta \qquad (2)$$
where $^{w}M_{tcp}$ is the transformation matrix that describes the TCP position in the world frame (computed through forward kinematics), $^{w}p$ is the translation vector of that transformation matrix (i.e. the actual TCP position) and $^{sv}\Delta$ is the distance between the measured hole and the reference central hole computed during calibration;
  ○ once the target is centered in the cameras' field of view, the position of the target in the robot base reference frame is computed and stored as follows:

$$^{w}p_{hole} = {}^{w}M_{tcp} \cdot {}^{tcp}M_{sv} \cdot {}^{sv}p_{hole} \qquad (3)$$
• repeat the previous steps for the 3 targets;
• once the 3 positions of the referencing targets ($p_{origin}$, $p_{xDir}$ and $p_{aux}$) are stored, the referencing frame transformation matrix is defined as follows:

$$^{w}M_{ref} = \begin{pmatrix} x_{ref} & y_{ref} & z_{ref} & p_{origin} \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (4)$$
where the vectors $x_{ref}$, $y_{ref}$ and $z_{ref}$ are computed as follows (see also the sketch after the equation):

$$x_{ref} = \frac{p_{xDir} - p_{origin}}{\left\| p_{xDir} - p_{origin} \right\|}, \quad y_{tmp} = \frac{p_{aux} - p_{origin}}{\left\| p_{aux} - p_{origin} \right\|}, \quad z_{ref} = \frac{x_{ref} \times y_{tmp}}{\left\| x_{ref} \times y_{tmp} \right\|}, \quad y_{ref} = \frac{z_{ref} \times x_{ref}}{\left\| z_{ref} \times x_{ref} \right\|} \qquad (5)$$
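The sketch below illustrates Eqs. (2)–(5) with NumPy: correcting the commanded position to center a target, expressing a measured hole in the robot base frame, and building the referencing frame from the three measured targets. The function names are hypothetical, and Eq. (2) is read here as applying only the rotation part of the transforms to the displacement $^{sv}\Delta$:

```python
import numpy as np

def corrected_position(w_p, w_M_tcp, tcp_M_sv, sv_delta):
    """Eq. (2): shift the commanded TCP position to center the target
    (sv_delta is a displacement, so only the rotations are applied)."""
    R = w_M_tcp[:3, :3] @ tcp_M_sv[:3, :3]
    return w_p + R @ sv_delta

def target_in_world(w_M_tcp, tcp_M_sv, sv_p_hole):
    """Eq. (3): a hole measured by the stereo head, expressed in the base frame."""
    return (w_M_tcp @ tcp_M_sv @ np.append(sv_p_hole, 1.0))[:3]

def build_reference_frame(p_origin, p_xdir, p_aux):
    """Eqs. (4)-(5): point-line-plane frame from the three target positions."""
    x_ref = p_xdir - p_origin
    x_ref /= np.linalg.norm(x_ref)      # x axis toward the second target
    y_tmp = p_aux - p_origin
    y_tmp /= np.linalg.norm(y_tmp)      # temporary in-plane direction
    z_ref = np.cross(x_ref, y_tmp)
    z_ref /= np.linalg.norm(z_ref)      # normal to the x-y plane
    y_ref = np.cross(z_ref, x_ref)      # orthonormal y axis (already unit length)
    M = np.eye(4)                       # Eq. (4): homogeneous w_M_ref
    M[:3, 0], M[:3, 1], M[:3, 2], M[:3, 3] = x_ref, y_ref, z_ref, p_origin
    return M
```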
Once the new robot referencing frame is defined, the robot can move relative to that frame both in position and in orientation.

3. Experimental results

The experimental set-up consists of an industrial robot (DENSO VS-087) on which the stereo-vision system described above is mounted. The robot is placed in front of an aluminum panel (dimensions of 1200 × 800 mm) on which some reference holes (reference targets) have been drilled with a CNC machine (in order to ensure high absolute accuracy in their positioning). In order to assess the local accuracy of the robot, all the measurements have been taken with a FARO laser tracker [18]. First of all, the robot base frame and the working plane (aluminum panel) have been directly measured using the laser tracker target (SMR). After that, the robotic system performs the referencing procedure as described in the previous Section; finally, the robot moves following a pattern of positions with respect to the reference frame computed with the procedure. While the robot moves on the pattern, the SMR is mounted on the TCP and the positions are measured with the laser tracker.
Fig. 4. Performed tests. The light green area represents the pattern of positions, reporting the number of rows and columns composing it.
Fig. 4 shows the different position patterns that have been used for validating the referencing procedure, covering a large part of the robot workspace (considering a robot maximum reach of about 900 mm). The rows and columns of the patterns are spaced by 25 mm. For each pattern of positions, the error is computed as follows:

$$e_x = x_{lt} - x_{rob}, \qquad e_y = y_{lt} - y_{rob} \qquad (6)$$
where $x_{lt}$ and $y_{lt}$ are the position coordinates measured by the laser tracker, while $x_{rob}$ and $y_{rob}$ are the nominal positions to which the robot moved. Fig. 5 and Fig. 6 show respectively the error along the x and y axes of the reference frames and the error norm.
Fig. 5. Error along X and Y axis of referencing frames for the 5 measured patterns.
Fig. 6. Error norm for the 5 measured patterns.
The results obtained meet the initial specification (related to the considered application domain) of an absolute error of less than 0.5 mm. The average error norm, standard deviation and accuracy are reported in Table 1 (it is important to remark that the final accuracy has been computed as $\bar{e} + 3\sigma$, i.e. the average error norm plus three standard deviations).
Table 1. Results

  Metric               Value (mm)
  Average              0.212078
  Standard deviation   0.084465
  Accuracy             0.465472
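As a consistency check, 0.212078 + 3 × 0.084465 ≈ 0.465472, matching the tabulated accuracy. A small sketch of this computation, assuming arrays of per-position errors from Eq. (6):

```python
import numpy as np

def summarize_errors(e_x, e_y):
    """Average error norm, standard deviation, and accuracy (mean + 3 sigma)."""
    e = np.hypot(e_x, e_y)              # error norm per measured position
    mean, sigma = e.mean(), e.std()
    return mean, sigma, mean + 3.0 * sigma
```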
4. Conclusions

This paper presents a cognitive referencing system for improving the absolute accuracy of a robotic arm when executing its task. The solution is based on the integration of a stereo-vision system (which provides sensing capability) with a hardware neural network (which empowers the system with artificial cognition skills). The system is able to autonomously identify the different kinds of targets used for referencing and to properly estimate their 3D position. The referencing procedure is described, focusing on the different steps involved. The system has been validated in a real shop-floor application in the aeronautics domain, showing its effectiveness and working performance.

References

[1] M. Summers. Robot Capability Test and Development of Industrial Robot Positioning System for the Aerospace Industry. SAE Transactions 2005; 114: 1108–1118.
[2] R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York, 2003.
[3] M. Oueslati, R. Béarée, O. Gibaru, G. Moraru. Improving the dynamic accuracy of elastic industrial robot joint by algebraic identification approach. 2012 1st International Conference on Systems and Computer Science (ICSCS), Lille, 2012, pp. 1–6.
[4] Y. Jiang, X. Huang, S. Li. An on-line compensation method of a metrology-integrated robot system for high-precision assembly. Industrial Robot: An International Journal, 2016, Vol. 43, Iss. 6, pp. 647–656.
[5] J. Santolaria, M. Ginés. Uncertainty estimation in robot kinematic calibration. Robotics and Computer-Integrated Manufacturing, 2013, Volume 29, Issue 2, pp. 370–384.
[6] A. Nubiola, I. A. Bonev. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Robotics and Computer-Integrated Manufacturing, 2013, Volume 29, Issue 1, pp. 236–245.
[7] G. Lux, G. Reinhart. An approach for the automated self-calibration of robot-based inspection systems. 2015 IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Siem Reap, 2015, pp. 106–111.
[8] F. Chaumette, S. Hutchinson. Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine, 2006, vol. 13, no. 4, pp. 82–90.
[9] F. Chaumette, S. Hutchinson. Visual servo control. II. Advanced approaches. IEEE Robotics & Automation Magazine, 2007, vol. 14, no. 1, pp. 109–118.
[10] National Instruments. System on Module, sbRIO-9651. [Online]. Available: http://sine.ni.com/nips/cds/view/p/lang/it/nid/212788
[11] Cogito Instruments. NeuroMem® Board. [Online]. Available: https://cogitoinstruments.com/
[12] National Instruments. NI Vision Concepts, part No. 372916M-01, 2012.
[13] G. Bradski. The OpenCV library. Dr. Dobb's Journal of Software Tools, 2000. [Online]. Available: http://www.drdobbs.com/open-source/the-opencv-library/184404319
[14] L. Stroppa, C. Cristalli. Stereo Vision System for Accurate 3D Measurements of Connector Pins' Positions in Production Lines. Experimental Techniques, 2017, vol. 41, no. 1, pp. 69–78.
[15] G. Bradski, A. Kaehler. Learning OpenCV, 1st edn. O'Reilly Media, Inc., 2008.
[16] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330–1334.
[17] J.-Y. Bouguet. Matlab Camera Calibration Toolbox. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc/, 2010.
[18] FARO. FARO Laser Tracker. [Online]. Available: http://www.faro.com/products/metrology/faro-laser-tracker/overview