Procedia CIRP 61 (2017) 281–286
The 24th CIRP Conference on Life Cycle Engineering
A process demonstration platform for product disassembly skills transfer

Supachai Vongbunyong*, Pakorn Vongseela, Jirad Sreerattana-aporn

Innovation and Advanced Manufacturing Research Group, Institute of Field Robotics, King Mongkut's University of Technology Thonburi, 126 Pracha Uthit Rd., Bang Mod, Thung Khru, Bangkok 10140, Thailand

* Corresponding author. Tel.: +662-470-9339; fax: +662-470-9703. E-mail address: [email protected]
Abstract

Automated disassembly is challenging due to uncertainties and variations in the process with respect to the returned products. While a cognitive robotic disassembly system can dismantle products in most cases, human assistance is required to resolve some physical uncertainties. This article presents a platform on which the disassembly process can be demonstrated by skillful human operators. The process is represented by the sequence, position, and orientation of the tools, which are extracted by using a vision system and markers on the tools. The knowledge at the planning and operational levels will be transferred to the robot to achieve automated disassembly. An operation for removing the back cover of an LCD screen with three disassembly tools serves as a case study.

© 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of the 24th CIRP Conference on Life Cycle Engineering.

Keywords: Learning by demonstration; Skill transfer; Disassembly; Human-robot collaboration; Robotics
1. Introduction

1.1. Product disassembly overview

With respect to the circular economy, treatment of End-of-Life (EOL) products is one of the key steps that recover the remaining embodied value from disposal. Product disassembly is one of the important steps for efficient EOL treatment. The disassembly process aims to separate products into their parts, components, or subassemblies, so that they can be treated in ways that maximize the remaining value. However, disassembly is currently an unprofitable and labor-intensive process. As a result, it is economically infeasible and ignored by most companies. A number of attempts have been made to fully automate the disassembly process, following several approaches, e.g. vision-based systems [1-3], multi-sensorial systems [4, 5], and smart and integrated disassembly tools [6, 7]. However, due to the variations of the returned products and the uncertainties in the disassembly process, fully automated disassembly remains challenging and mostly economically infeasible.
1.2. Human-machine collaboration in disassembly

To overcome the aforementioned problems, human-machine collaboration can occur in various ways [8]. The types of human involvement in assembly and disassembly are as follows:
(a) Semi-automatic or hybrid disassembly – optimized disassembly plans consist of tasks performed by machines and by humans. Automatic workstations equipped with sensor systems and automatic disassembly tools perform the disassembly tasks, while human operators at the manual workstations make decisions and perform more complicated tasks that are infeasible to carry out automatically [9, 10].
(b) Augmented Reality (AR) – much research has used AR to help human experts interact with the actual products, with the optimal plans generated by the system for assembly [11, 12] and disassembly [13]. In this case, human operators perform the process.
(c) Tele-presence – human operators control the robot on-line to perform tasks that are hazardous or difficult due to humans' physical limitations. A number of
doi:10.1016/j.procir.2016.11.197
related works have been conducted for assembly, e.g. the operator's movements are captured and translated into a program that controls a dual-arm robot for the assembly of heavy products [14].
(d) Machine learning (ML) – experts are responsible for teaching plans and operations to the system. This is similar to (c), but a learning capability is added, so that the system is eventually able to perform the tasks autonomously. Regarding ML in disassembly, much research focuses on optimization at strategic levels, e.g. [15-17]. To the best of our knowledge, only the cognitive robotic disassembly system has implemented ML at the operational level to improve the performance of the process [18]. Human operators assist the system in resolving problematic conditions by demonstrating the actions required.
1.3. Learning by demonstration in cognitive robotics

According to the framework of using cognitive robotics in product disassembly, learning and revision of the disassembly plans are the cognitive functions that help the system to improve the process performance, i.e. time consumption and degree of autonomy [18, 19]. In the learning and revision process, the cognitive robotic agent (CRA) primarily interacts with the knowledge base (KB) by writing or rewriting new product-specific knowledge regarding the disassembly sequence plans and operation plans (see the framework in Fig. 1). This knowledge will be recalled when the foreseen models of the products are to be disassembled in the future. During the disassembly process, the CRA performs two types of learning, namely learning by reasoning and learning by demonstration. (a) In learning by reasoning, the CRA autonomously obtains the knowledge by executing the general plans and taking the outcome into account. The knowledge can be obtained if the operations successfully remove the desired components. In case of failure, expert human operators may be called for assistance. This is where (b) learning by demonstration takes place: the operators demonstrate to the CRA how to resolve the problems that caused the failure, e.g. pointing out non-detected components, showing how to remove fasteners, etc.

Fig. 1 Cognitive robotics disassembly framework
The failures are caused by unresolved conditions occurring at both the planning and operational levels, which are explained in [18]. This article focuses on the unresolved conditions at the operational level, which are:
• inaccurate location of the components,
• non-detectable fasteners,
• inaccessible fasteners, and
• improper methods of disestablishing fasteners.

The skill transfer platform proposed in this article is designed to be capable of handling these problems. It should be noted that, to prove the concept of the training platform, the disassembly process was demonstrated in full without involving other cognitive abilities. As a result, a sequence of disassembly operations that is repeatable by robots is expected to be obtained.

1.4. Organization of this paper
This paper is organized as follows: the methodology regarding learning by demonstration and skill transfer is given in Section 2, the system design of the teaching platform in Section 3, the experiment in Section 4, and the discussion and conclusions in Sections 5 and 6, respectively.

2. Methodology

2.1. Complete system overview – skills and knowledge

The ability to disassemble products involves skills and knowledge that can be extracted from a human expert's demonstration. In the context of product disassembly, (a) skills are sets of primitive actions that are used to perform tasks, in this case disassembly operations, and (b) knowledge is the physical information related to the components or products to be disassembled. This information can be represented as parameters used in the disassembly operations. Skills are intended to be transferred in a way that makes the demonstrated operations adaptable to different physical configurations, e.g. robots with different configurations [20, 21].

This project can be divided into two parts: the training side and the playback side. (a) On the training side, a human expert demonstrates the complete disassembly process of a product. The disassembly operations are captured by using a vision system and other sensors, and the abstract skills together with the product-specific knowledge are transferred to the robot. (b) The robot operating on the playback side will then be able to disassemble the same product by using these skills and knowledge. The robot is equipped with sensors and cognitive abilities, especially reasoning and execution monitoring [19], in order to overcome the physical uncertainties in the automated disassembly process. As a result, the system is expected to be able to adapt the skills to carry out the process with respect to the actual physical conditions of the products, which may vary from the conditions in the training scenario. An overview of the complete system is shown in Fig. 2. However, it should be noted that only the system on the training side is presented in this article.
Fig. 2 Skill transfer from human expert to robot with cognitive ability
2.2. Skills transfer system – the training side platform

Considering the training side of the skill transfer platform, the goal of this prototype system is to obtain "the disassembly sequence and the skills for disassembling a model of a product". The disassembly sequence and the skills correspond to the planning level and the operational level, respectively. The planning level involves the disassembly sequence plans, while the operational level considers the physical actions that detach the components. To achieve this goal, the following information (see Table 1) is required to be recorded during the demonstration of the disassembly process.
Table 1 Information required for skill transfer

Level       Information                                           Recording method
Planning    Type of main components                               Manual input
            Type of connective components                         Manual input
            Disassembly state changes (main component removed)    Manual input
            Operation time                                        Automatic
Operation   Location of the main components                       Vision system
            Location of connective components                     Vision system
            Target position & orientation of tool                 Vision system
            Type of the tool                                      Automatic
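For illustration, the information in Table 1 can be thought of as one record per demonstrated action. The following C++ sketch shows a hypothetical data layout under that assumption; the type and field names are illustrative, not the authors' implementation.

```cpp
// Hypothetical record of one demonstrated disassembly action, combining the
// planning-level and operational-level fields of Table 1.
#include <array>
#include <string>
#include <vector>

enum class ToolType { PhillipsScrewdriver, FlatScrewdriver, Probe };

struct DemonstratedAction {
    // Planning level (manual input via the GUI; duration recorded automatically)
    std::string mainComponent;        // e.g. "back_cover"
    std::string connectiveComponent;  // e.g. "screw", "snap_fit"
    bool        stateChanged;         // main component removed?
    double      operationTimeSec;     // operation time

    // Operational level (from the vision system and the toolbox)
    std::array<double, 3> componentLocation;   // x, y, z in workspace frame {W}
    std::array<double, 3> toolTipPosition;     // tooltip position
    std::array<double, 3> toolOrientationDeg;  // angles to the x, y, z axes of {W}
    ToolType    tool;                          // inferred from the toolbox switches
};

// A demonstration is then simply an ordered sequence of such records.
using DisassemblySequence = std::vector<DemonstratedAction>;
```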
3. System design

Based on Table 1, the training platform is designed to capture the information from the human expert's demonstration. The platform is designed so that the experts can focus on performing the process with minimal distraction. The actions are automatically captured by the vision system and several sensors. However, a few manual inputs are required in order to avoid inaccuracy in the disassembly sequence plan, where errors can lead to further logical problems. Fig. 3 shows the physical setup of the platform, which consists of:
(a) Vision system – used to track the disassembly tools, i.e. screwdrivers. Each disassembly tool is equipped with two color markers, at the center and at the tooltip. As a result, the position and orientation of the tools can be determined by processing the RGB-D dataset, which is obtained with an RGB-D sensor (MS-Kinect [22]).
(b) Interactive toolbox – determines the usage status of the disassembly tools. Each tool is individually stored in its partition, which is equipped with a sensor, i.e. a mechanical contact switch. By reasoning, the current action and the type of tool in use can be inferred from the tool that is absent from the storage.
(c) User console – the user can manually give inputs to the system through a graphical user interface (GUI) and a pedal switch. The GUI is used for manually inputting the types and the removal state of the components. The pedal switch is used for signaling the system when execution of an action starts and stops, and the user can then continue the disassembly operation by hand. As a result, the duration of the actions can be recorded.

Fig. 3 Physical setup of the training platform: (a) disassembly training platform (Kinect RGB-D sensor, product to be disassembled, product fixture, interactive toolbox, pedal switch, human expert); (b) interactive toolbox with partitions equipped with contact switches

3.1. System architecture

The main process with the GUI runs on a central computer (Windows 10 64-bit, Core i5 6300HQ 3.2 GHz, 8 GB memory, Visual C++ 2015). For the vision system, images are obtained from the RGB-D sensor and processed with the functions in OpenCV 3.1 [23] and Kinect SDK 1.8 [24]. For the inputs at the physical platform, all switches are connected to the computer via a microcontroller unit (MCU) (Arduino UNO). The workflow is shown in Fig. 4 and the dataflow in Fig. 5.
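As a minimal sketch of how the interactive toolbox and pedal switch could be read by the Arduino UNO, the following firmware fragment assumes one contact switch per tool partition plus the pedal wired to digital inputs with pull-ups; the pin numbers and single-line serial messages are assumptions, not the authors' firmware.

```cpp
// Arduino sketch: report tool pick/return and pedal press events over serial.
const int TOOL_PINS[3] = {2, 3, 4};   // partitions #1..#3 (assumed wiring)
const int PEDAL_PIN    = 5;

int lastToolState[3];
int lastPedalState;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; ++i) {
    pinMode(TOOL_PINS[i], INPUT_PULLUP);
    lastToolState[i] = digitalRead(TOOL_PINS[i]);
  }
  pinMode(PEDAL_PIN, INPUT_PULLUP);
  lastPedalState = digitalRead(PEDAL_PIN);
}

void loop() {
  // A partition switch released (HIGH) means the tool was picked up.
  for (int i = 0; i < 3; ++i) {
    int state = digitalRead(TOOL_PINS[i]);
    if (state != lastToolState[i]) {
      Serial.print(state == HIGH ? 'P' : 'R');  // Picked / Returned
      Serial.println(i + 1);                    // tool number
      lastToolState[i] = state;
    }
  }
  // Pedal press/release lets the PC timestamp the start and end of an action.
  int pedal = digitalRead(PEDAL_PIN);
  if (pedal != lastPedalState) {
    Serial.println(pedal == LOW ? "PEDAL_DOWN" : "PEDAL_UP");
    lastPedalState = pedal;
  }
  delay(20);  // crude debounce
}
```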
Fig. 4 System architecture (workflow of the toolbox and pedal-switch inputs read by the MCU, and of the RGB-D vision processing on the computer: image thresholding, contour and moment extraction, RGB-depth coordinate mapping, and calculation of the real-world marker and tool positions)
Fig. 5 Framework of dataflow (Kinect RGB-D images to the vision system processing in OpenCV; toolbox and pedal switch to the MCU via digital I/O; manual input via the GUI to the main process control on the computer)
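To complement the dataflow in Fig. 5, the sketch below shows how the PC side might listen to the MCU over a serial port and timestamp each event, which is one way the operation time could be recorded automatically. It is written for Windows/Visual C++ as used by the platform; the COM port name and the message format (matching the toolbox sketch above) are assumptions.

```cpp
// Illustrative PC-side listener for MCU events (tool picked/returned, pedal).
#include <windows.h>
#include <chrono>
#include <iostream>
#include <string>

int main() {
    HANDLE port = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if (port == INVALID_HANDLE_VALUE) { std::cerr << "Cannot open COM3\n"; return 1; }

    DCB dcb = {0};
    dcb.DCBlength = sizeof(dcb);
    GetCommState(port, &dcb);
    dcb.BaudRate = CBR_9600; dcb.ByteSize = 8;
    dcb.Parity = NOPARITY;   dcb.StopBits = ONESTOPBIT;
    SetCommState(port, &dcb);

    auto t0 = std::chrono::steady_clock::now();
    std::string line;
    char c;
    DWORD bytesRead = 0;
    // Read one character at a time; each complete line is a timestamped event,
    // e.g. "P1" (tool #1 picked) or "PEDAL_DOWN".
    while (ReadFile(port, &c, 1, &bytesRead, NULL)) {
        if (bytesRead == 0) continue;
        if (c == '\n') {
            double t = std::chrono::duration<double>(
                           std::chrono::steady_clock::now() - t0).count();
            std::cout << "[" << t << " s] " << line << "\n";
            line.clear();
        } else if (c != '\r') {
            line += c;
        }
    }
    CloseHandle(port);
    return 0;
}
```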
3.2. Vision system for tool tracking

The vision system calculates the position and the orientation of the tooltip from the markers detected by the RGB-D sensor. The calculation is based on the coordinate system shown in Fig. 6. In principle, the two markers, Marker 1 and Marker 2, are placed on the tool at PM1 and PM2, respectively. Once PM1 and PM2 are located, the position of the tooltip PT can be obtained by vector operations. A transformation matrix is applied to these points in order to transform the positions from the RGB-D image frame to the workspace coordinate frame. The orientation of the tool is represented in a spherical coordinate system. The detailed process is as follows.
(a) Detection of the markers – the markers are detected in the RGB image by locating the regions with the specified colors, blue and neon green, which can be distinguished from the working environment. Contours of the regions with these color pixels are extracted to locate the pixel position (xc, yc) of PM1 and PM2. The RGB image and the depth image are registered, so the corresponding depth zc at (xc, yc) is known. The position vectors of the markers with respect to {C}, CPM1 and CPM2, are thereby obtained.
(b) Transform the markers' positions to the workspace – a transformation matrix WCT, representing the coordinate frame {C} with respect to {W}, is applied to CPMi = [xc,i, yc,i, zc,i]T, where i = 1, 2 (see Eqs. 1-2).

{}^{W}P_{M1} = {}^{W}_{C}T \; {}^{C}P_{M1}    (1)
{}^{W}P_{M2} = {}^{W}_{C}T \; {}^{C}P_{M2}    (2)
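A minimal OpenCV 3.x sketch of step (a) is given below: color thresholding, contour extraction, and image moments to find a marker centroid, followed by a depth lookup. The HSV range and the depth lookup are simplifying assumptions; in the actual platform the RGB-depth registration is performed with the Kinect SDK coordinate mapper.

```cpp
// Locate one color marker in the RGB image and return (xc, yc, zc) in frame {C}.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Simplified depth lookup at an RGB pixel (assumes registered 16-bit depth in mm).
static float depthAtPixel(const cv::Mat& depth, cv::Point p) {
    return static_cast<float>(depth.at<unsigned short>(p));
}

bool locateMarker(const cv::Mat& bgr, const cv::Mat& depth,
                  cv::Scalar hsvLow, cv::Scalar hsvHigh, cv::Point3f& markerC)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, hsvLow, hsvHigh, mask);             // color thresholding

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return false;

    // Take the largest region as the marker and use its image moments
    // to compute the centroid (xc, yc).
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    cv::Moments m = cv::moments(*largest);
    if (m.m00 == 0) return false;
    cv::Point c(static_cast<int>(m.m10 / m.m00), static_cast<int>(m.m01 / m.m00));

    markerC = cv::Point3f(static_cast<float>(c.x), static_cast<float>(c.y),
                          depthAtPixel(depth, c));        // CPMi = [xc, yc, zc]
    return true;
}
```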
(c) Find position of the tooltip – the tooltip position PT can be obtained by using a unit vector representing the tool's pose and the distances between the markers, L1 = |PT − PM1| and L2 = |PM1 − PM2|, predefined for each tool. PT with respect to {W} is obtained from Eq. 3.

{}^{W}P_{T} = {}^{W}P_{M2} + ({}^{W}P_{M1} - {}^{W}P_{M2}) (L_{1} + L_{2}) / L_{2}    (3)

(d) Find orientation of the tool – the orientation of the tool is described in a spherical coordinate system by the angles between the tool and the orthogonal axes of the workspace frame (θxW, θyW, θzW). WUTool represents the pose of the tool (see Eq. 4), and the angles are calculated with Eq. 5.

{}^{W}U_{Tool} = {}^{W}P_{T} - {}^{W}P_{M2} = [{}^{W}x_{Tool}, {}^{W}y_{Tool}, {}^{W}z_{Tool}]^{T}    (4)

\theta_{iW} = \arccos\left( {}^{W}i_{Tool} / (L_{1} + L_{2}) \right), \quad i \in \{x, y, z\}    (5)

Fig. 6 Coordinate system of the training platform

where {C} = image frame on the RGB-D sensor; {T} = tool frame; {W} = workspace frame; PT = position of the tooltip; PM = position of a marker; LWC = distance vector between the origins of {C} and {W}.
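The following short C++ sketch implements Eqs. 3-5 from the two transformed marker positions; OpenCV point types are used for brevity and the function name is illustrative, not the authors' implementation.

```cpp
// Tooltip position and orientation in the workspace frame {W} (Eqs. 3-5).
#include <opencv2/core.hpp>
#include <cmath>

struct ToolPose {
    cv::Point3d tip;                  // tooltip position in {W}
    double thetaX, thetaY, thetaZ;    // angles to the x, y, z axes of {W}, radians
};

// pM1w, pM2w: marker positions in {W}; L1, L2: predefined distances for this tool.
ToolPose tooltipPose(const cv::Point3d& pM1w, const cv::Point3d& pM2w,
                     double L1, double L2)
{
    ToolPose pose;
    // Eq. 3: extrapolate from Marker 2 through Marker 1 to the tooltip.
    pose.tip = pM2w + (pM1w - pM2w) * ((L1 + L2) / L2);

    // Eq. 4: vector representing the tool's pose, of length L1 + L2.
    cv::Point3d u = pose.tip - pM2w;

    // Eq. 5: angles between the tool and the workspace axes.
    pose.thetaX = std::acos(u.x / (L1 + L2));
    pose.thetaY = std::acos(u.y / (L1 + L2));
    pose.thetaZ = std::acos(u.z / (L1 + L2));
    return pose;
}
```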
4. Experiment

4.1. Experiment setup

The objectives of this experiment are to prove that the system can record the actions and produce a disassembly sequence plan. An LCD screen is used as a case study. This preliminary experiment focuses on the operations for removing the back cover from the remaining product. The screen was held by suction cups on the fixture, and the bottom-left corner of the LCD screen was located at the origin of the workspace coordinate frame (oW); therefore, {P} and {W} were coincident (see Fig. 7). According to the product structure analysis, the process dealt with one main component (the back cover) and two types of connectors (screws and snap-fits). Three types of tools were assigned to the toolbox:
• (#1) Phillips-head screwdriver – for unscrewing,
• (#2) Flat-head screwdriver – for removing snap-fits,
• (#3) Probe – for locating features by pointing at them.

Fig. 7 Experiment setup (back cover of the LCD screen with screws and snap-fits on the product fixture; toolbox with tools #1-#3)

4.2. Workflow of the human operator

The disassembly process was divided into three steps. Firstly, for the back cover, the name of the main component was manually input through the GUI (see Fig. 8). The probe (#3) was picked up and used to point at the four corners of the back cover, so that the size of this component was recorded. Next, the screwdriver (#1) was used to unscrew the four Phillips-head screws. Lastly, the screwdriver (#2) was used to remove the eight snap-fits around the back cover. It should be noted that the pedal switch was pressed while each action was performed to confirm the recording.
Fig. 8 Tracking system and GUI
4.3. Result example

From these steps, the system produced the result as follows:

Backcover(PT3, …, PT3),
Unscrew(PT1, OT1), …, Unscrew(PT1, OT1),
RemoveSnap(PT2, OT2), …, RemoveSnap(PT2, OT2)
Firstly, the back cover was located by using the probe (#3) pointing at eight corners of the back cover. Next, the Phillips-head screwdriver (#1) was picked up and used to unscrew five screws on the cover. Lastly, the flat-head screwdriver (#2) was picked up and used to dismantle the eight snap-fits around the cover. PTi and OTi are the position and orientation of the tooltip, respectively. The average precision and accuracy were within 6.5 mm and 10 mm, respectively.

5. Discussion

From the preliminary experiment, all the information required for repeating the disassembly of a model of the product was captured. The action sequence – the type and the order of the tools – was captured accurately. However, with respect to the precision and accuracy of the system, marginal errors in the position and orientation of the tools occurred due to the inaccuracy of the RGB-D sensor in locating the markers in the scene. Three factors should be considered for improvement. (a) Marker tracking method: 3D markers should be used, as they can be located more precisely than the color areas on the tools; however, the shape of the markers should not affect the operators' normal movements when performing tasks. (b) Performance of the RGB-D sensor: the resolution of the depth image is limited by the resolution of the infrared pattern, so a sensor with higher performance should be used. (c) Calibration: the coordinate frames should be calibrated by using image input in addition to the information of the physical configuration.

6. Conclusions

Learning by demonstration is a machine learning approach with which the intelligent agent of an autonomous system can acquire skills from the actions demonstrated by human operators. In previous work on cognitive disassembly automation [18], learning by demonstration was conducted along with learning by reasoning in order to resolve problems caused by uncertainties in the product and the process. In this research, a platform for transferring disassembly skills demonstrated by expert operators to robots is developed. This article presents the training part of the skill transfer platform, in particular the tracking system for the disassembly tools. The platform is designed to capture the operator's actions during the entire disassembly process. The sequence of actions is obtained by using the data from the switches on the toolbox and the pedal, and the vision-based tracking system locates the tooltip of the disassembly tool by using RGB-D images. As a result, the disassembly skills – the information describing the disassembly process at the planning and operational levels – can be obtained. According to the design of this platform, the operators can focus on the tasks with minimal distraction from the manual input required.
For future work, at the operational level, the system will be made capable of capturing the hand movements required for performing complex tasks. In addition, the complete system, including the robot to which the skills will be transferred, will be developed.

References
[1] Bdiwi M, Rashid A, Putz M. Autonomous disassembly of electric vehicle motors based on robot cognition. 2016 IEEE International Conference on Robotics and Automation (ICRA); 16-21 May 2016.
[2] Vongbunyong S, Chen WH. Disassembly Automation: Automated Systems with Cognitive Abilities. Herrmann C, Kara S, editors. Springer International Publishing; 2015.
[3] Bailey-Van Kuren M. Flexible robotic demanufacturing using real time tool path generation. Robot and Comput Integr Manuf. 2006;22(1):17-24.
[4] Gil P, Pomares J, Puente SVT, Diaz C, Candelas F, Torres F. Flexible multi-sensorial system for automatic disassembly using cooperative robots. Int J Comput Integr Manuf. 2007;20(8):757-72.
[5] Torres F, Gil P, Puente ST, Pomares J, Aracil R. Automatic PC disassembly for component recovery. Int J Adv Manuf Technol. 2004;23(1-2):39-46.
[6] Schumacher P, Jouaneh M. A system for automated disassembly of snap-fit covers. Int J Adv Manuf Technol. 2013;1(15):2055-69.
[7] Wegener K, Chen WH, Dietrich F, Dröder K, Kara S. Robot assisted disassembly for the recycling of electric vehicle batteries. Procedia CIRP. 2015;29:716-21.
[8] Bley H, Reinhart G, Seliger G, Bernardi M, Korne T. Appropriate human involvement in assembly and disassembly. CIRP Ann - Manuf Technol. 2004;53(2):487-509.
[9] Kim H-J, Chiotellis S, Seliger G. Dynamic process planning control of hybrid disassembly systems. Int J Adv Manuf Technol. 2009;40(9-10):1016-23.
[10] Franke C, Kernbaum S, Seliger G. Remanufacturing of flat screen monitors. In: Brissaud D, Tichkiewitch S, Zwolinski P, editors. Innovation in life cycle engineering and sustainable development; 2006. p. 139-52.
[11] Makris S, Pintzos G, Rentzos L, Chryssolouris G. Assembly support using AR technology based on automatic sequence generation. CIRP Ann - Manuf Technol. 2013;62(1):9-12.
[12] Leu MC, ElMaraghy HA, Nee AYC, Ong SK, Lanzetta M, Putz M, et al. CAD model based virtual assembly simulation, planning and training. CIRP Ann - Manuf Technol. 2013;62(2):799-822.
[13] Odenthal B, Mayer MP, Kabuß W, Kausch B, Schlick CM. An empirical study of disassembling using an augmented vision system. Lecture Notes in Computer Science, vol. 6777; 2011. p. 399-408.
[14] Makris S, Tsarouchi P, Surdilovic D, Krüger J. Intuitive dual arm robot programming for assembly operations. CIRP Ann - Manuf Technol. 2014;63(1):13-6.
[15] Tang Y, Zhou M. Learning-embedded disassembly petri net for process planning. IEEE International Conference on Systems, Man and Cybernetics; 2007.
[16] Yeh W-C. Simplified swarm optimization in disassembly sequencing problems with learning effects. Computers & Operations Research. 2012;39(9):2168-77.
[17] Zeid I, Gupta SM, Bardasz T. A case-based reasoning approach to planning for disassembly. J Intell Manuf. 1997;8(2):97-106.
[18] Vongbunyong S, Kara S, Pagnucco M. Learning and revision in cognitive robotics disassembly automation. Robot and Comput Integr Manuf. 2015;34:79-94.
[19] Vongbunyong S, Kara S, Pagnucco M. Application of cognitive robotics in disassembly of products. CIRP Ann - Manuf Technol. 2013;62(1):31-4.
[20] Paikan A, Schiebener D, Wächter M, Asfour T, Metta G, Natale L. Transferring object grasping knowledge and skill across different robotic platforms. International Conference on Advanced Robotics (ICAR); 27-31 July 2015; Istanbul.
[21] Tonggoed T, Charoenseang S. Design and development of human skill transfer system with augmented reality. Journal of Automation and Control Engineering. 2015;3(5):403-9.
[22] Khoshelham K, Elberink SO. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors. 2012;12:1437-54.
[23] Itseez. OpenCV 3.1. 2016.
[24] Microsoft. Kinect for Windows. 2016.