5th IFAC Conference on Sensing, Control and Automation for Agriculture
August 14-17, 2016. Seattle, Washington, USA

Available online at www.sciencedirect.com
IFAC-PapersOnLine 49-16 (2016) 365–370
Towards an artificial vision-robotic system for tomato identification
F. García-Luna ∗,∗∗  A. Morales-Díaz ∗

∗ Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Unidad Saltillo, Ramos Arizpe, Coah. 25900, México. Phone number: +52-844-4389600 Ext. 8500
∗∗ e-mail: [email protected], [email protected]
[email protected] Abstract: Abstract: In In the the present present paper paper we we developed developed aa simple simple and and affordable affordable vision-based vision-based robotic robotic Abstract: In the present paper we developed aa simple and affordable vision-based robotic system for the identification of the Euclidean position of red spheres that emulate ripe Abstract: In the present paper we developed simple and affordable vision-based robotic system for the of the Euclidean position of redand spheres that emulate ripe tomatoes. tomatoes. Abstract: In identification the present paper we developed a simple affordable vision-based robotic system for the identification of the Euclidean position of red spheres that emulate ripe tomatoes. This is done by using a RGB-D sensor in a fixed position, together with a 5 DOF manipulator. system for thebyidentification identification of the the Euclidean position of red redtogether spheres with that emulate emulate ripe tomatoes. This is for done using a RGB-D sensor in a fixed position, a 5 DOFripe manipulator. system the of Euclidean position of spheres that tomatoes. This is done by using a RGB-D sensor in it aa fixed position, together with aa 55 DOF manipulator. To detect the tomato sensor as blob then calculates its using This is using a sensor position, together manipulator. To detect theby tomato the sensor considers considers as aa red red blob and and then it itwith calculates its center center using aa This is done done by using the a RGB-D RGB-D sensor in in it a fixed fixed position, together with a 5 DOF DOF manipulator. To detect the tomato the sensor considers it as aamapped red blob and then it calculates its center using aa point cloud map. The position of the red blob is to the manipulator reference frame using To detect the tomato the sensor considers it as red blob and then it calculates its center using point cloud map. The the position ofconsiders the red blob thethen manipulator reference frame using To detect the tomato sensor it asis amapped red blobtoand it calculates its center using a point cloud map. The position of the red blob is mapped to the manipulator reference frame using the homogeneous transformation matrix fromis the camerato tothe themanipulator manipulator. The position position of the point cloud position red mapped reference frame using the homogeneous transformation matrix from camera the manipulator. The the point cloud map. map. The The position of of the the red blob blob isthe mapped toto the manipulator reference frame of using the transformation matrix the to The of sphere is a to the with of the homogeneous homogeneous transformation matrix from from the camera camera to the the manipulator. manipulator. The position position of the the sphere is sent sent through through a micro-controller micro-controller to drive drive the manipulator, manipulator, with the the purpose purpose of reaching reaching the homogeneous transformation matrix from the camera to the manipulator. The position of the sphere is sent through a micro-controller to drive the manipulator, with the purpose of reaching the sphere’s position. Experimental results of the vision-based robotic system are provided, and sphere is sent through a micro-controller to drive the manipulator, with the purpose of reaching the sphere’s results the the vision-based robotic are provided, and sphere is sentposition. throughExperimental a micro-controller to of drive manipulator, withsystem the purpose of reaching the position. 
Experimental results of the vision-based robotic system are provided,toand system in and the be the sphere’s sphere’s position. Experimental results the vision-based robotic system are the system accuracy accuracy obtained in localizing localizing and touching the interest interest object demonstrate be sphere’s position. obtained Experimental results of of thetouching vision-based robotic object systemdemonstrate are provided, provided,toand and the system accuracy obtained in localizing and touching the interest object demonstrate to be highly effective, reaching an accuracy of 10/10 in identification and touching the object in ideal the system accuracy obtained in localizing and touching the interest object demonstrate to highly effective, reaching an accuracy of 10/10 in identification and touching the object in ideal the system accuracy obtained in localizing and touching the interest object demonstrate to be be highly effective, reaching an accuracy of 10/10 in identification and touching the object in ideal environment. highly effective, effective, reaching reaching an an accuracy accuracy of of 10/10 10/10 in in identification identification and and touching touching the the object object in in ideal ideal environment. highly environment. environment. environment. © 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Keywords: Keywords: Artificial Artificial vision, vision, fruit fruit detection, detection, fruit fruit localization, localization, robotic robotic system system Keywords: Artificial vision, vision, fruit fruit detection, detection, fruit localization, robotic Keywords: Artificial fruit localization, robotic system system Keywords: Artificial vision, fruit detection, fruit localization, robotic system 2. RELATED WORK 1. INTRODUCTION 2. RELATED WORK FOR FOR TOMATO TOMATO CROP CROP 1. INTRODUCTION 2. RELATED RELATED WORK WORK FOR FOR TOMATO TOMATO CROP 1. INTRODUCTION INTRODUCTION 2. 1. 2. RELATED WORK FOR TOMATO CROP CROP 1. INTRODUCTION Protected crops are a great opportunity to obtain high Protected crops are a great opportunity to obtain high Tomato Tomato (Lycopersicon (Lycopersicon esculentum esculentum Mill) Mill) is is one one of of the the most most Protected crops are aa great great opportunity to obtain high Tomato (Lycopersicon (Lycopersicon esculentum Mill) is one one of the most most quality and production rates in crops, such as important crops in the world. It is used in both fresh Protected are obtain high esculentum Mill) is of the quality andcrops production rates opportunity in profitable profitable to crops, such as Tomato Protected crops are a great opportunity to obtain high important crops in the world. It is used in both fresh Tomato (Lycopersicon esculentum Mill) is one of the most quality and production rates in profitable crops, such as important crops in the world. It is used in both fresh cucumber, tomato, lettuce, pepper, among others. Thereform and in processed products, Mehdizadeh et al. (2013). quality and production rates in profitable crops, such as important crops in the world. It is used in both fresh cucumber, lettuce, pepper, among crops, others.such Therequality andtomato, production rates in profitable as form and in processed products, Mehdizadeh et al. (2013). important crops in the world. It is used in both fresh cucumber, tomato, lettuce, pepper, amongand others. There- form form and in processed products, Mehdizadeh et al. (2013). 
fore, many different types of machinery technology Regarding produced hectare, it occupies the second place cucumber, tomato, lettuce, pepper, among others. Thereand in processed products, Mehdizadeh et al. (2013). fore, many different types of machinery and technology cucumber, tomato, lettuce, pepper, among others. There- Regarding produced hectare, it occupies the second place form and in processed products, Mehdizadeh et al. (2013). fore, been manyintended differentfor types of machinery ande.g., technology Regarding produced hectare, it occupies occupies the issecond second place have this type of sowing, after the potato, potato, andhectare, as processed processed product considered fore, many different types machinery and technology produced it the place have this of type of systems, systems, sowing, Regarding fore, been manyintended differentfor types of machinery ande.g., technology after the and as product considered Regarding produced hectare, it occupies the issecond place have been been intended intended for this this type of of systems, systems, e.g., sowing, sowing, after the potato, and as processed product is considered transplanting, watering, environmental automated conthe first one worldwide, Mehdizadeh et al. (2013). Since have for type e.g., the potato, and processed is transplanting, watering, environmental automated con- after have been intended for this type of systems, e.g., sowing, the worldwide, et al. (2013). Since afterfirst the one potato, and as as Mehdizadeh processed product product is considered considered transplanting, watering, environmental automated conthe first first one worldwide, Mehdizadeh et al. al. (2013). Since trol, etc. the equipment for proharvesting represents up to total production transplanting, watering, environmental automated worldwide, et (2013). Since trol, etc. However, However, the available available equipment for green green conpro- the transplanting, watering, environmental automated conharvesting represents up Mehdizadeh to 50 50 % % of of the the total production the first one one worldwide, Mehdizadeh et al. (2013). Since trol, etc. etc.did However, the the available equipment for green propro- harvesting harvesting represents up to 50 % of the total production duction not fulfill specific demand of specialized (Japanesse Robotic Society (1996)), several autonomous trol, However, the available equipment for green represents up to 50 % of the total production duction did not fulfill the specific demand of specialized trol, etc. However, the available equipment for green pro- (Japanesse Robotic Society (1996)), several autonomous harvesting represents up to 50 % of the total production duction where did not not high fulfill the specific demand of specialized (Japanesse Robotic Society (1996)), several autonomous devices, automation and robotics systems have been (1996)), developed to harvest harvest it. In In duction did fulfilllevel the of specific demand ofautonomous specialized (Japanesse Robotic Society several autonomous devices, a high level of automation andof autonomous duction where did nota fulfill the specific demand specialized robotics systems have been developed to it. (Japanesse Robotic Society (1996)), several autonomous devices, where where a high highin level ofproduction automation and autonomous autonomous robotics robotics systems have been developed to harvest it. In In systems can operate the site. Corell et al. 
(2009) a novel and a complete distributed devices, a level of automation and systems have been developed to harvest it. systems can operate theofproduction site. devices, where a highin level automation and autonomous Corell et al. (2009) a novel and a complete distributed robotics systems have been developed to harvest it. In systems can can operate operate in in the the production production site. site. Corell et al. (2009) a novel and a complete distributed autonomous gardening system is developed, where the systems Corell et al. (2009) a novel and a complete distributed systems can operate in the production site. autonomous gardening system is adeveloped, where the Corell et al. (2009) a novel and complete distributed The evolution of robotic systems can help to develop greenThe evolution of robotic systems can help to develop green- autonomous autonomous gardening system is developed, developed, where the garden consists of of and The gardening system is the The evolution evolution of tasks robotic systems can help help to develop greengarden consists of aa mesh mesh of robots robots and plants. plants.where The gargarautonomous gardening system is developed, where the house repetitive in autonomous way, see Comba et The of robotic can house repetitive in systems autonomous way,to seedevelop Combagreenet al. al. garden The evolution of tasks robotic systems can help to develop greengarden consists of a mesh of robots and plants. The gardening robots are are mobile manipulators with an anThe eye-inconsists of aamobile mesh of robots and plants. garhouse repetitive repetitive tasks in autonomous autonomous way, seedeveloped Comba et et for al. dening robots manipulators with eye-ingarden consists of mesh of robots and plants. The gar(2010). Moreover, autonomous robots can be house tasks in way, see Comba al. (2010). Moreover, autonomous robotsway, can be house repetitive tasks in autonomous seedeveloped Comba et for al. hand deningcamera, robots and are mobile mobile manipulators with an eye-ineye-inare of plants in robots are manipulators with an (2010).surveillance, Moreover, autonomous autonomous robots can be be developed for dening hand are capable capable of locating locating plants in the the deningcamera, robots and are mobile manipulators with an eye-incrop’s picking to highly precision (2010). Moreover, robots can developed for crop’s picking tasks, tasks, to make make highly precision (2010).surveillance, Moreover, autonomous robots can be developed for hand hand camera, camera, and are and capable of locating locating plantsfruit. in the the garden, watering them, locating and grasping In and are capable of plants in crop’s surveillance, picking tasks, to make highly precision garden, watering them, and locating and grasping fruit. In hand camera, and are capable of locating plants in the chemical treatments and precision fertilization. crop’s tasks, to tofertilization. make highly highly precision precision Liang chemical treatmentspicking and precision crop’s surveillance, surveillance, picking tasks, make garden,etwatering watering them, and locating and grasping grasping fruit. In al. (2010), the authors developed a motion plangarden, them, and locating and fruit. In chemical treatments treatments and and precision precision fertilization. fertilization. Liang al. (2010), the and authors developed a motion plangarden,etwatering them, locating and grasping fruit. In chemical chemicalthe treatments and precision fertilization. Liang et al. 
(2010), the authors developed a motion planning approach for tomato harvesting manipulators with Despite fact that several researches have been conet the developed aa motion ning for tomato harvesting manipulators with Despite the fact that several researches have been con- Liang Liangapproach et al. al. (2010), (2010), the authors authors developed motion planplanning approach for proved tomato their harvesting manipulators with Despite based the fact fact that several researches artificial have been been concon- ning DOF, and proposed algorithm only ducted on robotics, automation, approach for tomato harvesting manipulators Despite the several researches have DOF, and they they proposed algorithm with only ducted on that robotics, automation, vision, approach for proved tomato their harvesting manipulators with Despite based the fact that several researches artificial have beenvision, con- 77ning 7 DOF, DOF, and they they proved their that proposed algorithm only ducted based intelligence on robotics, robotics,forautomation, automation, artificial vision, 7by simulation and considering the tomato position and artificial application in agricultural and proved their proposed algorithm only ducted based on artificial vision, simulation and proved considering the tomato position and artificial applicationartificial in agricultural 7 DOF, and they their that proposed algorithm only ducted based intelligence on robotics,forautomation, vision, by byknown. simulation and et considering that the tomato position and artificial artificial intelligence for years application in agricultural agricultural is In al. the authors developed a systems, more than have between simulation and considering tomato position and for application in is In Arefi Arefi al. (2011), (2011),that the the authors developed a systems, moreintelligence than twenty twenty have passed passed between by byknown. simulation and et considering that the tomato position and artificial intelligence for years application in agricultural is known. In Arefi et al. (2011), the authors developed a systems, more than twenty years have passed between segmentation algorithm for recognition and localization the first publications and now, were many papers have et (2011), the developed systems, more years have passed between segmentation algorithm recognition and localization ofaa the first publications and now, were many papers have is is known. known. In In Arefi Arefi et al. al.for (2011), the authors authors developed of systems, more than than twenty twenty years have passed between segmentation algorithm for recognition recognition and localization of the first first publications and now, now, were many et papers have segmentation ripe tomato based on machine vision, which was proved been published, see Kassler (2001), Belforte al. (2006), algorithm for and localization of the publications and were many papers have tomato based on machine vision, and which was proved been published, see Kassler (2001), Belforte al. (2006), segmentation algorithm for recognition localization of the first publications and now, were many et papers have ripe ripeusing tomato based on machine vision,under whichgreenhouse’s was proved proved been published, published, see Kassler Kassler (2001), Belforte et al. al. (2006), (2006), by 110 color images of Comba et Nowadays, the improvement comtomato machine vision, which was been see (2001), et by 110based color on images of tomato tomato Comba et al. al. (2010). (2010). 
Nowadays, the Belforte improvement in com- ripe ripeusing tomato based on machine vision,under whichgreenhouse’s was proved been published, see Kassler (2001), Belforte et al. in (2006), by using using 110 110 color color images images of tomato tomato under greenhouse’s greenhouse’s Comba et et al. (2010). Nowadays, Nowadays, the affordable improvement in comcomlighting Nezhad et the munication technology, in cameras, in of under Comba (2010). the improvement in lighting condition. In Nezhad et al. al. (2011), (2011), the authors authors munication technology, in faster faster and and cameras, in by by usingcondition. 110 color In images of tomato under greenhouse’s Comba et al. al. (2010). Nowadays, the affordable improvement in comlighting condition. In Nezhad et al. (2011), the machine authors munication technology, in faster and affordable cameras, in developed a manufactured tomato picking vision servo-motors and in cheaper and smaller control devices, In et (2011), the authors munication in and affordable cameras, in developed a manufactured tomato picking vision servo-motors and in cheaper and smaller control devices, lighting condition. condition. In Nezhad Nezhad et al. al. (2011), the machine authors munication technology, technology, in faster faster and affordable cameras, in lighting developed a manufactured manufactured tomato picking visiona machine machine servo-motors anddesign in cheaper cheaper and smaller smaller control devices, devices, using OpenCV. Wand et al. (2012) designed tomato are allowed the of simple and reasonable priced developed a tomato picking vision servo-motors and in and control OpenCV. Wand et tomato al. (2012) designed tomato are allowed the of simple and reasonable priced using developed a manufactured picking visiona machine servo-motors anddesign in cheaper and smaller control devices, using OpenCV. OpenCV. Wand et al. (2012) platform, designed aa 4tomato tomato are allowed allowed the the design design of simple simple and robotic reasonable priced harvesting robot based on a mobile DOF servo-mechanisms. In general, accesible systems to using Wand et al. (2012) designed are of and reasonable priced robot Wand based et on al. a mobile DOF servo-mechanisms. In general, accesible systems to harvesting using OpenCV. (2012) platform, designed a 4tomato are allowed the design of simple and robotic reasonable priced harvesting robot based on a mobile platform, a 4 DOF servo-mechanisms. In general, accesible robotic systems to manipulator, a vision system together with an end effector navigate between the crops (as much in outdoor conditions harvesting robot based on a mobile platform, a 4 DOF servo-mechanisms. In general, accesible robotic systems to manipulator, a vision system together with an end effector navigate between the crops (as much in outdoor conditions servo-mechanisms. 
In general, accesible robotic systems to harvesting robot based on a mobile platform, a 4 DOF manipulator, a vision system together with an end effector navigate between the crops (as much in outdoor conditions and a cutter tool, the authors only provide simulations as in greenhouses), to do surveillance, crop manipulation vision system together with effector navigate between (as conditions and a cutter aatool, the authors only provide simulations as in greenhouses), do surveillance, crop manipulation manipulator, vision system together with an an end end effector navigate between the thetocrops crops (as much much in in outdoor outdoor conditions manipulator, andthe cutter tool, the the authors only provide simulations as in in greenhouses), greenhouses), to making do surveillance, surveillance, crop manipulation of electromechanical system. In Chen al. and tasks, this research and aaa cutter tool, simulations as to do manipulation of electromechanical system.only In provide Chen et etsimulations al. (2015) (2015) and harvesting tasks, this a a big bigcrop research and dede- and andthe cutter tool, the authors authors only provide as inharvesting greenhouses), to making do surveillance, crop manipulation of the the electromechanical system. In Chen Chen et al. al. (2015) and harvesting harvesting tasks, making making this aa big big research and and de- of authors developed cognition framework for velopment opportunity area. electromechanical system. In et and tasks, the authors developed aa vision vision cognition framework for a a velopment opportunity area. this of the electromechanical system. In Chen et al. (2015) (2015) and harvesting tasks, making this a big research research and dede- the the authors authors developed developed aa vision vision cognition cognition framework framework for for a velopment opportunity opportunity area. area. the velopment the authors developed a vision cognition framework for aa velopment opportunity area.
tomato harvesting humanoid robot based on geometrical and physical reasoning. Their vision approach is based on two RGB-D sensors, one installed on the humanoid's head and the other on the hand. These authors used the upper body of the HRP-2 humanoid robot, which has 7 DOF on each hand and 2 DOF on the head, together with a VMAX omnidirectional mobile platform. Moreover, in their proposed vision approach, they model the fruit in one branch to estimate the pedicel direction of each fruit, as well as the remaining stable fruit in the branch, with respect to gravity and the interaction forces from nearby elements. Finally, in Gongal et al. (2015) the authors present a review of several methods for fruit localization using different types of sensors, features, and classifiers. They also mention that using color as a detection feature improves the detection accuracy to a range of 80 to 85 %. Among their conclusions, they recommend using a Kinect-like sensor and calculating the depth of the object in order to approach a manipulator for harvesting purposes.

3. MAIN CONTRIBUTION

In this paper we propose the development of a simple and affordable vision-based robotic system for the identification of the Euclidean position of red spheres that emulate a ripe tomato. This is done by using an RGB-D sensor in a fixed position together with a 5 DOF manipulator. Our proposed system detects the tomato by considering it as a red blob, and calculates its center using a point cloud map. The position of the red blob is mapped to the manipulator reference frame using the homogeneous transformation matrix from the camera to the manipulator. The position of the sphere is then sent to a micro-controller that drives the manipulator, with the aim of reaching the sphere's position.
Fig. 1. 5 DOF manipulator, showing the actuators (M) and the links (L), where:

• l1: link 1 (12 cm)
• l2: link 2 (17.5 cm)
• l3: link 3 (23 cm)
• M1,2: Servomotor HS-755HB
• M3,4: Servomotor HS-5685MH
• M5,6: Servomotor HS-422
Its kinematic chain is shown in Table 1, with:

• HTM: homogeneous transformation matrix.
• t(α): translation vector between two reference frames along the α axis, in cm.
• R(α): rotation between two reference frames about the α axis, in radians.

Table 1. Manipulator Kinematic Chain

HTM   t(x)   t(y)   t(z)   R(x)    R(y)   R(z)
1     –      –      –      –       –      π/2 − q1
2     –      –      –      −π/2    –      −q2
3     12     –      –      –       –      −π/2 − π/4 + q3
4     17.5   –      –      –       –      π/2 + π/4 − q4
5     23     –      –      –       π/2    π/2 − q5

The work is organized as follows: Section 4 presents the kinematic model of the manipulator together with the image processing description; Section 5 shows experimental results; and Section 6 provides conclusions.
4. KINEMATIC MODEL AND IMAGE PROCESSING
4.1 Kinematic model

According to Siciliano et al. (1996), kinematics is the study of the mathematics of motion without considering the forces that cause it, and deals with the geometric relationships that govern a robotic system. It consists of the translation and rotation of the reference frames in which each articulation moves. In this work, a 5 DOF manipulator with rigid rotational joints, consisting of 6 HiTEC servomotors and 3 aluminum links, is considered; it is depicted in Fig. 1.
Knowing that the forward kinematics equation is given by:

x = f(q)    (1)

where x represents the 3D position of the end effector, we can use the information in Table 1 to rewrite eq. (1) in explicit form as:
T0 = I4
T1^0 = T0 Rz(π/2 − q1)
T2^0 = T1^0 Rx(−π/2) Rz(−q2)
T3^0 = T2^0 Tx(12) Rz(−π/2 − π/4 + q3)
T4^0 = T3^0 Tx(17.5) Rz(π/2 + π/4 − q4)
T5^0 = T4^0 Tx(23) Ry(π/2) Rz(π/2 − q5)    (2)

and x = T5^0(1..3, 4).
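For illustration, a minimal sketch of eq. (2) using the Eigen C++ library is given below; the helper names (Rz, Rx, Ry, Tx, forwardKinematics) are illustrative and are not part of the original implementation. Link lengths are in cm and joint angles in radians.

#include <Eigen/Dense>
#include <cmath>

using Mat4 = Eigen::Matrix4d;
static const double pi = std::acos(-1.0);

// Elementary homogeneous transformations: rotations about z, x, y and a
// translation along x.
Mat4 Rz(double a) {
  Mat4 T = Mat4::Identity();
  T(0,0) = std::cos(a); T(0,1) = -std::sin(a);
  T(1,0) = std::sin(a); T(1,1) =  std::cos(a);
  return T;
}
Mat4 Rx(double a) {
  Mat4 T = Mat4::Identity();
  T(1,1) = std::cos(a); T(1,2) = -std::sin(a);
  T(2,1) = std::sin(a); T(2,2) =  std::cos(a);
  return T;
}
Mat4 Ry(double a) {
  Mat4 T = Mat4::Identity();
  T(0,0) =  std::cos(a); T(0,2) = std::sin(a);
  T(2,0) = -std::sin(a); T(2,2) = std::cos(a);
  return T;
}
Mat4 Tx(double d) {
  Mat4 T = Mat4::Identity();
  T(0,3) = d;
  return T;
}

// x = f(q): end-effector position (cm) from the joint vector q (rad),
// following the chain of Table 1 / eq. (2).
Eigen::Vector3d forwardKinematics(const Eigen::Matrix<double,5,1>& q) {
  Mat4 T = Mat4::Identity();                      // T0 = I4
  T = T * Rz(pi/2 - q(0));                        // T1^0
  T = T * Rx(-pi/2) * Rz(-q(1));                  // T2^0
  T = T * Tx(12.0) * Rz(-pi/2 - pi/4 + q(2));     // T3^0
  T = T * Tx(17.5) * Rz(pi/2 + pi/4 - q(3));      // T4^0
  T = T * Tx(23.0) * Ry(pi/2) * Rz(pi/2 - q(4));  // T5^0
  return T.block<3,1>(0,3);                       // x = T5^0(1..3, 4)
}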
For the artificial vision system, an RGB-D sensor mounted in a fixed position was used (see Fig. 2). The camera used was an ASUS XTion Pro Live, which uses infrared sensors, adaptive depth detection technology, color image sensing, and an audio stream to capture real-time images, movement, and voice, allowing the user to track images precisely. Its kinematic chain is represented in Table 2.
Fig. 2. RGB-D sensor used in this project; it consists of a depth sensor, an RGB camera, an IR camera, and two microphones.

Table 2. Camera Kinematic Chain

T   t(x)   t(y)   t(z)   R(x)   R(y)          R(z)
1   31     −2     −46    –      π/2 + π/10    −π/2
2   –      –      –      π/2    –             –
From Table 2 the transformation that maps the position of 3D objects from the camera reference frame to the manipulator reference frame can be obtained, and its explicit form is written as:

Tc1^0 = T0 Tx(31) Ty(−2) Tz(−46) Ry(π/2 + π/10) Rz(−π/2)
Tc2^0 = Tc1^0 Rx(π/2)    (3)

This transformation can be seen in Fig. 3 as the relationship between the same point observed from two different perspectives:

X = Tc2^0 x

where X is the 3D position of the object in the manipulator reference frame and x is the 3D position of the object in the camera reference frame.

Fig. 3. Interest object perspective from the camera field of view and the inertial frame.
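As an illustration, the mapping of eq. (3) can be written compactly with Eigen's geometry module as sketched below; the function name is an assumption of ours, translations are in cm, angles in radians, and the frame conventions are those of Table 2.

#include <Eigen/Geometry>
#include <cmath>

// X = Tc2^0 x: maps a 3D point from the camera frame to the manipulator frame.
Eigen::Vector3d cameraToManipulator(const Eigen::Vector3d& x_cam) {
  const double pi = std::acos(-1.0);
  // Tc1^0 = Tx(31) Ty(-2) Tz(-46) Ry(pi/2 + pi/10) Rz(-pi/2); Tc2^0 = Tc1^0 Rx(pi/2)
  Eigen::Isometry3d T = Eigen::Translation3d(31.0, -2.0, -46.0)
      * Eigen::AngleAxisd( pi/2 + pi/10, Eigen::Vector3d::UnitY())
      * Eigen::AngleAxisd(-pi/2,         Eigen::Vector3d::UnitZ())
      * Eigen::AngleAxisd( pi/2,         Eigen::Vector3d::UnitX());
  return T * x_cam;   // homogeneous transform applied to the point
}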
4.2 Image processing

The image processing algorithm was implemented in C++ using the OpenCV library (for the image processing functions) and the OpenNI library (for accessing the sensor's functions). Its main goal is to detect the Euclidean position of the tomato in an environment with controlled illumination. To test it, two colored spheres suspended in the air, emulating the colors of a ripe and an unripe tomato, were used. Since an infrared sensor was used, some parameters had to be considered:

• The illumination must be artificial, because sunlight interferes with the IR sensor and makes it unusable.
• The IR sensor has a minimum operational range of about 50 cm; anything located closer than that threshold cannot be detected.

Algorithm 1 shows the series of steps carried out to detect the tomatoes at no more than 100 cm (pxdepth ≤ 100 cm):

Algorithm 1 Tomato detection
1: function detection(t)                    (t: exit time)
2:   Open camera.
3:   while tsim < t do
4:     Obtain RGB image.
5:     Obtain point cloud map.
6:     Obtain depth map.
7:     if px > threshsat then               (px: pixel's RGB value, threshsat: saturation threshold)
8:       px = 0.
9:     Convert RGB to gray scale.
10:     Red channel extraction and binarization.
11:     if pxdepth > 100 then
12:       px = 0.
13:     Erode and dilate until the image is clean.
14:     Detect the contours and the centroids.
15:     Find the 3D position using the point cloud map.
16:     Map the 3D position from the camera reference frame to the manipulator's.
17:   Close camera.

It is important to notice that the accuracy of the IR sensor is about 3 mm at 2 m, and this error decreases as the sensor gets closer to the object. By clean in Algorithm 1, we mean that the image does not contain any kind of noise and that only silhouettes are detected.
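As a rough illustration of the segmentation stage of Algorithm 1 (red extraction, binarization, erosion/dilation, contours and centroids), a sketch using the OpenCV C++ API is given below; the function name, threshold, and kernel size are illustrative assumptions and do not reproduce the exact values of our implementation.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Returns the pixel centroids of red blobs found in a BGR frame.
std::vector<cv::Point2f> detectRedCentroids(const cv::Mat& bgr) {
  // Split channels and keep pixels where red clearly dominates green and blue.
  std::vector<cv::Mat> ch;
  cv::split(bgr, ch);                       // ch[0]=B, ch[1]=G, ch[2]=R
  cv::Mat maxGB = cv::max(ch[0], ch[1]);
  cv::Mat red, mask;
  cv::subtract(ch[2], maxGB, red);          // red dominance (saturates at 0)
  cv::threshold(red, mask, 60, 255, cv::THRESH_BINARY);   // binarization

  // Erode and dilate until the mask only contains clean silhouettes.
  cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
  cv::erode(mask, mask, kernel, cv::Point(-1, -1), 2);
  cv::dilate(mask, mask, kernel, cv::Point(-1, -1), 2);

  // Contours and centroids of the remaining silhouettes.
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  std::vector<cv::Point2f> centroids;
  for (const auto& c : contours) {
    cv::Moments m = cv::moments(c);
    if (m.m00 > 1e-3)
      centroids.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
  }
  return centroids;
}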
5. EXPERIMENTAL RESULTS

In order to perform the experiments, a support structure was built. The structure consists of a compressed-wood rectangular prism of 45 × 41 × 10 cm (height, thickness, and width, respectively), which holds both the manipulator and the aluminum fixture for the camera (see Fig. 4).
The full experimental setup consists of the RGB-D sensor, the support structure, and the 5 DOF manipulator, and it is shown in Fig. 5, where:

(1) ASUS XTion Pro Live.
(2) Objectives (colored spheres).
(3) Support structure.
(4) 5 DOF manipulator.
Fig. 6. Red and green spheres: original image captured from the RGB camera
Fig. 4. Support structure scheme with dimensions (in cm)
Fig. 7. Red objects segmentation output of Fig. 6 in RGB-color space.
Fig. 8. Silhouette extraction output from Fig. 7.
Fig. 5. Experimental platform and its components: (1) RGB-D sensor, (2) red and green spheres, (3) support structure, (4) 5-DOF manipulator.

The main algorithm used in this work consists of the following steps:

(1) The RGB-D sensor detects the red sphere, calculates its center by using a Point Cloud Map, and maps its position from the camera reference frame to the manipulator reference frame.
(2) This position is set as a regulation point for a proportional controller that uses eq. 2 to obtain the articular values needed for the manipulator to reach and touch the objective.
(3) Finally, the articular values calculated in the previous step are sent to the micro-controller (Arduino UNO) in order to move the servos of the manipulator.

Fig. 6 shows a frame of the original video captured from the device without any kind of alteration. The two spheres are suspended by a rope from the ceiling; the green sphere represents an unripe tomato while the red sphere represents a ripe one. Fig. 7 shows the result obtained from the segmentation algorithm, in which all colors but red were changed to black; note that even the green sphere is ignored. After the segmentation step, the silhouette is extracted to obtain the centroid in pixels of each blob, see Fig. 8.
Fig. 9. Point Cloud Map obtained from the IR camera view of Fig. 6.

The Point Cloud Map (PCM) depicted in Fig. 9 is obtained from the IR sensor, which uses structured light, through the OpenNI library. Each point in the PCM contains information about its 3D position w.r.t. the sensor, and because the PCM has the same dimensions as the image in Fig. 6, a 1:1 correspondence between Fig. 6 and Fig. 9 can be used to obtain the 3D position of every silhouette centroid.
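A minimal sketch of this 1:1 lookup is shown below, assuming the point cloud has already been copied into an OpenCV matrix of 3D points with the same resolution as the RGB frame; the actual acquisition in our implementation goes through OpenNI, whose calls are not reproduced here.

#include <opencv2/core.hpp>

// Returns the 3D position (camera frame) associated with an image pixel.
cv::Point3f lookup3D(const cv::Mat& pointCloud,    // CV_32FC3, same size as the RGB frame
                     const cv::Point& centroid) {  // pixel centroid of a silhouette
  const cv::Vec3f p = pointCloud.at<cv::Vec3f>(centroid.y, centroid.x);
  return cv::Point3f(p[0], p[1], p[2]);
}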
Once the 3D position is obtained, it is mapped to the manipulator reference frame in order to be sent to the controller and make the end-effector reach that position. To do this, a feedback control law is applied to the system. The control law used in this work is a proportional controller:

u = Kp e    (4)

where u = q̇, Kp is the fixed proportional gain, and the regulation error e is defined as

e = (xd − x)    (5)

This type of controller gives an exponential convergence of the error, as can be seen in Fig. 10, where some simulation results are shown. In robotics this is not recommended, since this abrupt convergence behavior can harm the system. In order to avoid it, a velocity control sub-algorithm was applied, interpolating a cubic polynomial towards the desired position; to this end the following polynomial was used:

x = a0 + a1(t − ti) + a2(t − ti)² + a3(t − ti)³    (6)

Differentiating eq. 6 w.r.t. time we obtain:

ẋ = a1 + 2a2(t − ti) + 3a3(t − ti)²    (7)

where:

a0 = xi
a1 = ẋi
a2 = (−3(xi − xd) − (2ẋi + ẋf)(tf − ti))/(tf − ti)²
a3 = (2(xi − xd) + (ẋi + ẋf)(tf − ti))/(tf − ti)³

for more details see Biagiotti and Melchiorri (2008). This polynomial defines a trajectory interpolating between the initial and the desired position of the end-effector using eqs. 6 - 7, and through it the position error (eq. 5) is re-defined as:

ê = x̂d − x    (8)

where x̂d is the desired position along the trajectory. Now, for the calculation of q̇ we use the inverse kinematics together with the re-defined error (eq. 8); the inverse differential kinematics is given by:

q̇ = J⁺ ê    (9)

where J⁺ is the Jacobian's pseudo-inverse.
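For illustration, a rough sketch of the smoothed regulation law of eqs. (6)-(9) using the Eigen library is given below; the Jacobian J is taken here as an input (in practice it comes from the manipulator model), and the function names, types, and boundary velocities are illustrative assumptions.

#include <Eigen/Dense>

// Cubic time-scaling between an initial and a desired position (eqs. 6-7),
// with boundary velocities xdot_i and xdot_f (usually zero).
Eigen::Vector3d cubicReference(const Eigen::Vector3d& xi, const Eigen::Vector3d& xd,
                               const Eigen::Vector3d& xdot_i, const Eigen::Vector3d& xdot_f,
                               double ti, double tf, double t) {
  const double T = tf - ti, s = t - ti;
  Eigen::Vector3d a0 = xi;
  Eigen::Vector3d a1 = xdot_i;
  Eigen::Vector3d a2 = (-3.0 * (xi - xd) - (2.0 * xdot_i + xdot_f) * T) / (T * T);
  Eigen::Vector3d a3 = ( 2.0 * (xi - xd) + (xdot_i + xdot_f) * T) / (T * T * T);
  return a0 + a1 * s + a2 * (s * s) + a3 * (s * s * s);   // x_hat_d(t), eq. (6)
}

// One control step: e_hat = x_hat_d - x (eq. 8), q_dot = J+ e_hat (eq. 9).
Eigen::Matrix<double, 5, 1> controlStep(const Eigen::Matrix<double, 3, 5>& J,
                                        const Eigen::Vector3d& x_hat_d,
                                        const Eigen::Vector3d& x) {
  Eigen::Vector3d e_hat = x_hat_d - x;
  // Pseudo-inverse of the 3x5 Jacobian via a complete orthogonal decomposition.
  Eigen::Matrix<double, 5, 3> Jpinv =
      J.completeOrthogonalDecomposition().pseudoInverse();
  return Jpinv * e_hat;   // joint velocity command q_dot
}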
The convergence profile changed from an exponential behavior (Fig. 10) to a controlled and smoothed behavior (Fig. 11). The experiment was repeated 10 times, changing the position of the spheres in depth and height over the work area of the manipulator, achieving an accuracy of 100% in terms of detecting and touching the red sphere.

Fig. 10. Exponential convergence of the error (eq. 5) without the cubic interpolation.

Fig. 11. Error convergence result after the cubic interpolation (eq. 6).

The whole experiment took almost 10 seconds to be completed (2 seconds for the detection of the 3D point and 8 seconds for the manipulator to reach it). The algorithm was executed on a computer with the following characteristics:

• Ubuntu 14.04
• 12 GB RAM
• Intel Core i7 processor @ 2.5 GHz

The results are depicted in Figs. 12 - 13.
Fig. 12 shows the position error (xd − x) of the end-effector, where the colored line represents the position error and the black line represents zero; the horizontal axis is the iteration (time) and the vertical axis is the magnitude of the error (cm). Meanwhile, Fig. 13 shows the end-effector behavior (colored line) starting at its home position and converging to the reference (black line). The data used for generating Figs. 12 - 13 are the result of sending the output of the controller (PWM) to each servo-motor and computing the forward kinematics (eq. 2). Ten experiments were done, and in terms of identifying and touching the objective, as long as it lies in an ideal working area (well illuminated, no occlusions, inside the working area of the manipulator), the accuracy was 100%.
Fig. 12. End effector's position error calculated in open-loop (eq. 5)

Fig. 13. End effector's position calculated in open-loop

6. CONCLUSIONS

The proportional control law works well with this manipulator, making the end effector reach the desired position every time. The velocity control sub-algorithm prevents the system from overworking when the control starts actuating, avoiding potential damage to the system. Our proposed system is simple and costs only around 1,835 USD (manipulator: 165, RGB-D sensor: 50, Arduino UNO: 9, structure: 10, laptop: 1,600). In comparison, the system proposed in Chen et al. (2015) consists of the HRP-2 upper body with two arms for the manipulation system and two RGB-D cameras, which costs 253,559 USD considering only the HRP-2 upper body, 130 times more expensive than the system proposed here. The algorithm took around 2 seconds for the identification of the tomato and 8 more seconds for the manipulator to reach the objective. These results were achieved by using state-of-the-art tools such as the proportional controller with smoothed convergence and the artificial vision system. It is also important to mention that an accuracy of 10/10 was achieved in terms of detecting and reaching the objective, as long as it lies within the manipulator's working area. Our system is more affordable than those reported in the state of the art. As future work, the algorithm will be tested in a more realistic environment and implemented on a mobile platform.

REFERENCES
Arefi, A., Motlagh, A., Mollazade, K., and Teimourlou, R. (2011). Recognition and localization of ripen tomato based on machine vision. Australian Journal of Crop Science, 5(10), 1144–1149.

Belforte, G., Gay, P., Piccarolo, P., and Ricauda Aimonino, D. (2006). Robot design and testing for greenhouse applications. Biosystems Engineering, 95, 309–321.

Biagiotti, L. and Melchiorri, C. (2008). Trajectory Planning for Automatic Machines and Robots. Springer.

Chen, X., Chaudhary, K., Tanaka, Y., Nagahama, K., Yaguchi, H., Okada, K., and Inaba, M. (2015). Reasoning-based vision recognition for agricultural humanoid robot toward tomato harvesting. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.

Comba, L., Gay, P., Piccarolo, P., and Ricauda Aimonino, D. (2010). Robotics and automation for crop management: Trends and perspective. In International Conference Ragusa SHWA2010, 471–478.

Correll, N., Arechiga, N., Bolger, A., Bollini, M., Charrow, B., Clayton, A., Dominguez, F., Donahue, K., Dyar, S., Johnson, L., Liu, H., Patrikalakis, A., Robertson, T., Smith, J., Soltero, D., Tanner, M., White, L., and Rus, D. (2009). Building a distributed robot garden. In The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1509–1516. IEEE.

Gongal, A., Amatya, S., Karkee, M., Zhang, Q., and Lewis, K. (2015). Sensors and systems for fruit detection and localization: A review. Computers and Electronics in Agriculture.

Japanesse Robotic Society (1996). Robotics Manual. Science Publishing House, China.

Kassler, M. (2001). Agricultural automation in the new millennium. Computers and Electronics in Agriculture, 30, 237–240.

Liang, X. and Wang, Y. (2010). Motion planning of tomato harvesting manipulators with 7 DOF. Advanced Materials Research, 97-101, 2840–2844.

Mehdizadeh, M., Darbandi, E.I., Naseri-Rad, H., and Tobeh, H. (2013). Growth and yield of tomato (Lycopersicon esculentum Mill.) as influenced by different organic fertilizers. International Journal of Agronomy and Plant Production, 4, 734–738.

Nezhad, M., Massh, J., and Komleh, H. (2011). Tomato picking machine vision using with the open cv's library. In Machine Vision and Image Processing (MVIP), 1–5. IEEE.

Siciliano, B., Sciavicco, L., Villani, L., and Oriolo, G. (1996). Robotics: Modeling, Planning and Control. Springer.