A benchmarking perspective of underwater intervention systems★

Proceedings of the IFAC Workshop on Navigation, Guidance and Control of Underwater Vehicles
April 28-30, 2015. Girona, Spain

Available online at www.sciencedirect.com

IFAC-PapersOnLine 48-2 (2015) 008–013


P. J. Sanz ∗ J. Pérez ∗ J. Sales ∗ A. Peñalver ∗ J. J. Fernández ∗ D. Fornas ∗ R. Marín ∗ J. C. García ∗

∗ Computer Science and Engineering Department, University of Jaume-I, Castellón, Spain (e-mail: [email protected])

Abstract: This paper presents recent progress concerning benchmarking issues in the underwater robotics manipulation context. After a very intensive six-year period of work, under several funded research projects, all of them in the aforementioned area, a strong know-how has been developed. As part of this expertise, a new underwater simulation tool has been implemented. This platform enables the integration, simulation, comparative analysis and experimentation on real data, and a detailed characterization of the results, using as input a simple web-based user interface. In fact, our previous experience in related research projects has evidenced the necessity of testing, comparing and evaluating different algorithms in similar conditions. Hence, the similarity between the virtual scenario, where the benchmarking will be performed, and the real one will determine the quality of the results. In that sense, a methodology is presented which updates the scenario with real information each time a real trial is performed. Benchmarking, a very active robotics area nowadays, is the underlying problem to solve. In summary, the main functionalities for benchmarking available in the simulation platform will be highlighted, using some case studies concerning object tracking under visibility changes, object tracking under variable water current, and 3D reconstruction subject to different optical conditions.

© 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Underwater simulator; Benchmarking; Underwater intervention; Robotics.

1. INTRODUCTION

During the last six-year period (i.e. 2009-2014) our research group IRS-Lab has been very active in the underwater robotics manipulation field under three different research projects: RAUVI (Sanz et al., 2010) and TRITON (Sanz et al., 2013a), funded by the Spanish Ministry, and FP7-TRIDENT (Sanz et al., 2013b), funded by the European Commission. All these projects have been coordinated with several partners and with a high complexity in both hardware and software components. Moreover, these projects were targeted to common objectives dealing with underwater intervention systems, to be validated in sea conditions at the end.

As a consequence, all the partners need to be sure that their part of the system and their algorithms will work properly when the system is assembled and tested. With this aim, a simulator that allows the researchers to introduce the model of the whole system, as well as a realistic scenario for testing their algorithms, was considered an extremely important tool. In addition to the simulator, benchmarking facilities can help researchers to compare different algorithms and better understand their limitations and robustness, enabling their improvement.

Regarding benchmarking in robotics, a big effort has been made over the last few years. In fact, some recent European projects, like FP7-BRICS (Best Practice in Robotics), were devoted to this specific context (Nowak et al., 2010). Moreover, following previous research in this context (DEXMART, 2009), it is clear that: “In the domain of robotics research, it is extremely difficult not only to compare results from different approaches, but also to assess the quality of the research. This is especially true if one wishes to evaluate the performance of intelligent robotic systems interacting with the real world.” Several definitions of benchmarks have been proposed, but we will use the one stated in Dillman (2004): “adds numerical evaluation of results (performance metrics) as key element. The main aspects are repeatability, independency, and unambiguity”. A short state-of-the-art can be found elsewhere (Pérez et al., 2014b).

In order to simulate the experiments, the UWSim simulator (Prats et al., 2012) and a benchmarking tool which is highly integrated with the simulator were developed (see Figure 1). Moreover, a methodology that allows researchers to work in different conditions, gradually increasing the level of difficulty, has been designed. This methodology also helps to improve the scenarios used for benchmarking, thus obtaining an increasingly realistic one.

The proposed methodology can be summarized in Figure 2. This figure represents a roadmap enabling the experimental validation, independently of the kind of running intervention (e.g. “search & recovery”, or “panel intervention”).

★ This work was partly supported by Spanish Ministry of Research and Innovation DPI2011-27977-C03 (TRITON Project), by Foundation Caixa Castelló-Bancaixa, Universitat Jaume I grants PI.1B2011-17 and PID2010-12, Universitat Jaume I PhD grants PREDOC/2012/47 and PREDOC/2013/46, and by Generalitat Valenciana PhD grant ACIF/2014/298.

Copyright © 2015 IFAC. 2405-8963 © 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2015.06.002



P. J. Sanz et al. / IFAC-PapersOnLine 48-2 (2015) 008–013


To support such comparisons between algorithms that share a common robotic platform, the tool has modelling capabilities that enable scenario customization. Different use cases will also be explained, showing the features of the benchmarking infrastructure in detail. The next section describes the UWSim architecture, detailing the benchmarking module and scenarios. Then, Section 3 explains the different use cases, and some conclusions are provided in Section 4.

2. UWSIM: A 3D UNDERWATER SIMULATION TOOL FOR BENCHMARKING

Fig. 1. UWSim underwater simulation: panel manipulation scenario. Girona 500 I-AUV in water pool conditions manipulating an underwater panel using the ARM5E light-weight robotic arm.

UWSim 1 (Prats et al., 2012) is an open source software tool for visualization and simulation of underwater robotic missions that offers benchmarking capabilities through a specific module (see Figure 1). The software is able to visualize underwater virtual scenarios that can be configured using standard modeling software, and can be connected to external control programs through its ROS interfaces. UWSim is currently used in different ongoing projects funded by the European Commission (MORPH 2 and PANDORA 3) in order to perform HIL (Hardware In the Loop) experiments and to reproduce and supervise real missions from the captured logs.



    

 



     

The simulator has been implemented in C++ and makes use of the OpenSceneGraph 4 (OSG) and osgOcean 5 libraries. OSG is an open source 3D graphics application programming interface (API) used by developers in fields such as visual simulation, computer games, virtual reality, scientific visualization and modeling. osgOcean is another open source project that implements realistic underwater rendering using OSG; it was developed as part of an EU funded research initiative, the VENUS project (Alcala et al., 2008).



 

  

  


Fig. 2. Cyclic development methodology for continuous integration, enabling the experimental validation, independently of the kind of running intervention.

UWSim is divided into different modules (see Figure 3): a Core module in charge of loading the main scene and its simulated robots; an Interface module that provides communication with external architectures through the Robot Operating System (ROS); a Dynamics module that implements underwater vehicle dynamics; a Physics module that manages the contacts between objects in the scene; osgOcean, in charge of rendering the ocean surface and special effects; the GUI module, which provides support for visualization and windowing toolkits; a User Interface Abstraction Layer (UIAL), in charge of improving the Human-Robot Interaction, the 3D immersion using a Head-Mounted Display, and the filtering of the information shown to the user; and the benchmarking module, explained in the following section.

First of all, the software is developed and the hardware is designed and modeled to be used in UWSim. Then, using the simulator and the benchmarking module described below, the system is tested virtually, evaluating its robustness as well as its limitations. Secondly, a water tank is used to check each part of the system individually in a real but controlled scenario, where problems like salt water or underwater currents do not appear. In the third step, the whole system is integrated and tested in a water pool. This scenario is also controlled, but some problems like underwater currents or different kinds of visibility can be produced artificially. In the fourth step, the system is taken to the sea, where the final intervention is executed. During the sea trials, information about the scenario is collected to create a virtual scenario for the simulator as similar as possible to the real environment. Finally, the algorithms can be improved, or even new ones implemented using the real information, and then the cycle restarts.
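The cyclic methodology above can be sketched as a simple state machine (the stage names and the scenario-realism counter below are illustrative, not part of the UWSim codebase):

```python
from enum import Enum

class Stage(Enum):
    SIMULATION = 1   # virtual tests with the benchmarking module
    WATER_TANK = 2   # individual subsystems, controlled conditions
    WATER_POOL = 3   # integrated system, artificial disturbances
    SEA_TRIAL = 4    # final intervention, real data collected

def next_stage(stage, scenario):
    """Advance the development cycle; after the sea trials the collected
    data updates the virtual scenario and the cycle restarts."""
    if stage is Stage.SEA_TRIAL:
        scenario["realism"] += 1      # scenario updated with real data
        return Stage.SIMULATION       # cycle restarts
    return Stage(stage.value + 1)

scenario = {"realism": 0}
stage = Stage.SIMULATION
for _ in range(4):
    stage = next_stage(stage, scenario)
# after one full cycle we are back in simulation, with a more realistic scenario
```

The key point the sketch captures is that the loop never terminates: every pass through the sea-trial stage feeds real information back into the virtual scenario.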

1 Available online: http://www.irs.uji.es/uwsim
2 FP7-MORPH, “Marine Robotic System of Self-Organizing, Logically Linked Physical Nodes (MORPH)”. Available: http://morph-project.eu
3 FP7-PANDORA, “Persistent Autonomy through learNing, aDaptation, Observation and Re-plAnning (PANDORA)”. Available: http://persistentautonomy.com
4 R. Osfield, D. Burns et al. Available: http://www.openscenegraph.org
5 Available: http://code.google.com/p/osgocean

The aim of this paper is to present our recent results generating a benchmarking tool for underwater intervention contexts and a simulation tool which enables the suitable comparison between different algorithms that share a common robotic platform.



Fig. 4. Benchmarking module flow diagram: a benchmark configuration is loaded into the benchmark module and a scene is loaded into the simulator. Then, the benchmark module produces some results that can be logged for posterior analysis.


Fig. 3. UWSim modules diagram and its interconnections: Core, Interfaces, Physics, Dynamics, osgOcean, Graphical User Interface (GUI), User Interface Abstraction Layer (UIAL) and Benchmarks.

This feature makes it possible to test algorithms not only in a fixed-condition setup, but also across a range of possible scenarios, checking the performance of the software as the parameters vary. Measures and scene updaters can be controlled through triggers that start and stop the evaluation depending on events such as reaching a position, receiving a message or elapsed time.
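The trigger mechanism can be sketched as follows (a minimal illustration; the event names and the API below are invented, not the actual UWSim interface). An elapsed-time measure starts counting when a start event fires and stops on a stop event:

```python
class ElapsedTimeMeasure:
    """Sketch of a trigger-controlled measure: evaluation runs only
    between a start event and a stop event (event names illustrative)."""

    def __init__(self):
        self.start_t = None   # set when the start trigger fires
        self.value = None     # final measured value

    def on_event(self, event, t):
        if event == "position_reached" and self.start_t is None:
            self.start_t = t                 # start trigger
        elif event == "message_received" and self.start_t is not None:
            self.value = t - self.start_t    # stop trigger

m = ElapsedTimeMeasure()
m.on_event("position_reached", t=3.0)   # robot reaches the start position
m.on_event("message_received", t=7.5)   # external program reports completion
# m.value is 4.5: time elapsed between the two trigger events
```

The same pattern applies to any measure: the benchmark logic is driven entirely by events, so the evaluated software needs no knowledge of when it is being measured.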

2.1 Benchmarking module

A benchmarking module for UWSim is also available (Pérez et al., 2013). This module uses ROS to interact with other external software, as UWSim does. The ROS interface permits users to evaluate an external program, which can communicate both with the simulator (to send commands to perform a task) and with the benchmarking module (to send the results or data needed for evaluation). Detailed information on how to configure and run a benchmark in UWSim can be found online 6.

In order to increase the usability of the benchmarking module, a benchmarking web tool has been developed to execute and supervise different predefined benchmarks (Pérez et al., 2014b). This tool acts as an interface that allows users to easily configure, execute and download results from a set of available benchmarks, using ROSBridge and ROSlibjs. The benchmarking web tool is also available online 7 and is currently under development to add new features such as software uploading, benchmark definition, teleoperation, etc.

For the development of the module, two important objectives were taken into account. The first one is to be transparent to the user; in other words, the module should not require major modifications to the algorithm being evaluated. The second is that it must be adaptable to all kinds of intervention tasks in the underwater robotic environment.

2.2 Real benchmarking scenarios

The benchmarking procedure presented in this paper makes use of virtual scenarios that describe the environment in an accurate manner, including water conditions, terrain, objects or even visibility, in order to face different problems such as black-box recovery, panel intervention, etc. Moreover, the use of information from real environments adds unexpected situations to the benchmark that enrich the robotic experience. Scenarios modelled from real data offer much more interesting environments for comparing algorithms and improving the research procedure.

Benchmarks are defined in XML (eXtensible Markup Language) files. Each file defines which measures are going to be used and how they will be evaluated. This allows the creation of standard benchmarks, defined in a document, that evaluate different aspects of underwater robotic algorithms and make it possible to compare algorithms from different origins. Each of these benchmarks is associated with one or more UWSim scene configuration files, so the results of the benchmark depend on the predefined scene. The whole process is depicted in Figure 4.
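For illustration, a benchmark file in the spirit described here could be read with Python's standard ElementTree. The element and attribute names below are invented for the example; they are not the actual UWSim benchmark schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical benchmark definition: measures, a scene updater,
# and the associated UWSim scene configuration file.
BENCHMARK_XML = """
<benchmark scene="panel_intervention.xml">
  <measure name="positionError" type="distance" target="panel"/>
  <measure name="elapsedTime" type="time"/>
  <sceneUpdater type="visibility" initial="10" end="2" step="2"/>
</benchmark>
"""

root = ET.fromstring(BENCHMARK_XML)
scene = root.get("scene")                                   # scene config file
measures = [m.get("name") for m in root.findall("measure")] # what to evaluate
updaters = [u.get("type") for u in root.findall("sceneUpdater")]
```

Keeping the benchmark definition in a declarative document is what allows the same benchmark to be re-run unchanged against algorithms from different origins.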

Following the methodology previously mentioned, the scenarios used for benchmarking increase their realism at each iteration of the cycle. The first step in the generation of the benchmarking scenario consists of modelling all the previous knowledge about the possible characteristics of the real scenario. With this scenario, the limitations and robustness of the algorithms are obtained. Then, after each physical experiment in which real data is acquired, the virtual scenario is updated with the new real information in order to obtain the modelled scenario closest to reality.

Two main aspects define a benchmark in UWSim: measures and scene updaters. Measures are the different quantities to measure in a benchmark, for instance position error, elapsed time or 3D reconstruction coverage. Scene updaters create a controlled environment change through the benchmark execution, allowing multiple tests to be performed under varying conditions such as visibility, water current force or light changes.
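A scene updater can be thought of as generating a family of scenarios from one configuration, with the same algorithm evaluated at every step. The sketch below is illustrative only (the function names and the toy tracker are not the UWSim API):

```python
def visibility_updater(start=10.0, stop=2.0, step=-2.0):
    """Hypothetical scene updater: yields decreasing visibility (metres)."""
    v = start
    while v >= stop:
        yield v
        v += step

def run_benchmark(algorithm, updater):
    """Evaluate one algorithm over every scenario the updater produces."""
    return {round(v, 1): algorithm(v) for v in updater}

# toy "tracker" whose error grows as visibility drops
toy_tracker = lambda visibility: 1.0 / visibility

results = run_benchmark(toy_tracker, visibility_updater())
# results maps visibility level -> measured error for the tested algorithm
```

Because the updater, not the evaluated program, drives the parameter sweep, the algorithm under test remains unaware that conditions are changing between runs.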

6 The UWSim Benchmarks Workspace. Available online: http://sites.google.com/a/uji.es/uwsim-benchmarks.

7 The UWSim online Underwater Simulator for benchmarking. Available: http://robotprogramming.uji.es/UWSim/config.html



Fig. 6. Software schema used in the Visibility use case. Orange: benchmark outputs; green: evaluated software available to configure online.

Three use cases have been developed: (1) tracking algorithms evaluation subject to water currents at different forces, (2) tracking algorithms evaluation with varying degrees of visibility, and (3) 3D reconstruction methods under different visibility and noise conditions. Finally, the reconstruction has also been done in real conditions and then evaluated using the simulator. Future use cases, such as manipulation benchmarks, are still under development.

Each benchmarking use case is formed by two main blocks plus the benchmarked algorithm. The first block is UWSim, which simulates the scene and provides ground truth to the second main block, the benchmarking module. The benchmarking module evaluates the software using the UWSim ground truth and modifies the UWSim scene according to the benchmarking needs. Finally, the evaluated software, which the user should provide, may use a structure as complex as the problem to solve requires. The evaluated software acquires data from UWSim sensors as it would in a real scenario and generates the results for the benchmarking module.

Fig. 5. Scenario improvement: (top) initial scenario based on previous knowledge; (bottom) scenario updated with the real acquired data.

3.1 Object tracking evaluation subject to visibility changes.

This updated scenario can be used to develop algorithms that can cope with its real counterpart even in the simplest feasible manner, avoiding oversized complexity. Moreover, this scenario could be provided to the scientific community that perhaps do not have the resources (e.g. robots) or the permissions (environment) to benchmark their algorithms in it.

In P´erez et al. (2014a), a visibility benchmark for tracking algorithms is presented. The benchmark measures the tracker error while following a target in a camera as can be seen in Figure 6. The tracker must find target’s corners and centroid of a target object as visibility conditions get worse. The results will show the minimum visibility environment that the tested algorithm requires to accurately track an object.

As an example of the described benchmarking scenario cycle, in the TRIDENT project, a first scenario (see Figure 5 top) was developed where the kinematic model of the robotic system and the black-box mockup dimensions were equivalent to the real ones, while the environment was a generic one. After the final sea trials carried out in Port de S´ oller (Mallorca, Spain; 1-5 Oct 2012), the previous scenario was updated and improved with a 3D textured terrain from the real acquired bathymetry data and the obtained seafloor images (see Figure 5 bottom).

In this experiment, many different tracker configurations were tested, using as reference the ESM (Malis, 2004) and ViSP algorithms (ViSP, 2010). This configuration includes two similarity functions, four methods, and five different warps. The goal of the study was to choose the best algorithm for a tracking target and further manipulation. Results showed that ZNCC (Zero mean Normalized CrossCorrelation) trackers from ViSP were much better for visibility changes while SSD (Sum of Square Differences) ones, also from ViSP, are affected by them. Moreover, as there was no movement at all restrictive warps performed better than general ones. A larger discussion on results is provided on P´erez et al. (2014a), and the whole experiment can be repeated in the online benchmarking platform.

3. USE CASES FOR UNDERWATER BENCHMARKING The benchmarking functionality has recently been used in three use cases: (1) tracking algorithms evaluation 11

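The robustness gap between the two similarity functions can be reproduced with a small sketch (illustrative only, not ViSP's implementation): ZNCC is invariant to affine brightness changes such as those caused by reduced visibility, while SSD is not.

```python
import numpy as np

def ssd(a, b):
    # Sum of Squared Differences: grows with any brightness change.
    return float(np.sum((a - b) ** 2))

def zncc(a, b):
    # Zero-mean Normalized Cross-Correlation: each patch is centred and
    # normalized, so gain/offset changes in brightness cancel out.
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))

rng = np.random.default_rng(0)
template = rng.random((16, 16))   # reference patch
darkened = 0.4 * template + 0.1   # same patch under degraded visibility

# SSD reports a large dissimilarity, while ZNCC stays at its maximum (1.0),
# so a ZNCC-based tracker still recognizes the darkened target.
print("SSD:", ssd(template, darkened), "ZNCC:", zncc(template, darkened))
```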


Fig. 7. Software schema used in the tracker current use case. Orange: benchmark outputs; green: evaluated software available to configure online.

3.2 Object tracking and control evaluation under current variations

In this use case, a controller and a tracking algorithm are evaluated in simulation. The goal of the experiment is to keep the vehicle (the Girona 500 I-AUV) in a position relative to a target (a black-box) from which it is able to start a manipulation intervention. The metrics used are the tracking error, measured as the difference between the real object position and the tracker estimate, and the vehicle positioning error with respect to the desired position.

The experiment is available online 8 for further testing. The software architecture used to solve the problem can be observed in Figure 7. A tracker finds the target in the camera image; using that information, the visual station keeper decides the optimal position, and position and velocity controllers keep the vehicle there. In the experiments, several trackers were tested using P (proportional) and PI (proportional-integral) position controllers under varying current force conditions. As expected, PI achieved better results in every situation. For a deeper analysis of the results, Pérez et al. (2014b) can be consulted; the experiment is also available on the online platform.

3.3 Evaluation of 3D reconstruction techniques with varying light conditions

In the last use case, two 3D reconstruction methods are compared in order to obtain the best possible results before the manipulation stage of an underwater robot. The first one uses a stereo pair to perform stereo reconstruction, while the second one reconstructs the scene using a laser stripe projector, a camera and segmentation techniques. The resulting point clouds have been evaluated under different light conditions. The benchmark structure can be seen in Figure 8: the 3D reconstruction algorithms are evaluated while the light and noise conditions are dynamically changed.

Fig. 8. Software schema used in the 3D reconstruction use case. Orange: benchmark outputs; green: evaluated software available to configure online.

The stereo reconstruction uses a block matching algorithm that relies on previously calibrated stereo cameras to match points between the left and right images. The disparity between each matched point pair is used to compute the 3D coordinates of the corresponding point. From the resulting 3D point list, a disparity map and a point cloud can be built.

The second algorithm requires a laser stripe projector attached to the forearm of a robotic arm (i.e. the ARM5E manipulator) and a camera. The elbow link of the arm moves to scan the scene while the camera captures images. For each frame, a segmentation algorithm detects the laser projection points. Using triangulation, the 3D position of each point can be estimated and used to build a point cloud.

To compare these methods in real (see Figure 9) and virtual environments, a precise model of the object was used as ground truth to define 3D reconstruction quality metrics. The algorithms were executed in scenarios where a box was placed on a flat surface. The benchmark was run both in simulation and on real hardware, more precisely in a water tank with the ARM5E arm. The metrics used to compare the performance are: mean box reconstruction error, standard deviation of the box reconstruction error, box surface coverage and number of outliers.

The results of the benchmark confirm that the laser reconstruction works better in darker environments, whereas the stereo reconstruction requires objects with a well-lit, textured surface to be reconstructed properly. The laser reconstruction obtains good results even in well-lit scenes, its performance increases with darkness, and it is only slightly affected by noise. Although the stereo reconstruction obtains the best results in ideal conditions, noise degrades its reconstruction to a great extent and it cannot reconstruct the environment without decent illumination.

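The four reconstruction quality metrics described above can be sketched against a ground-truth point model as follows. The nearest-neighbour matching and the outlier threshold used here are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def reconstruction_metrics(cloud, model, outlier_thresh=0.05):
    """Compare a reconstructed point cloud with a ground-truth point model.

    `cloud` and `model` are (N, 3) arrays. The nearest-neighbour matching
    and the 5 cm outlier threshold are illustrative choices.
    """
    # Pairwise distances: rows index cloud points, columns index model points.
    d = np.linalg.norm(cloud[:, None, :] - model[None, :, :], axis=2)
    nearest = d.min(axis=1)               # each cloud point to the model
    inliers = nearest <= outlier_thresh
    # Coverage: fraction of model points with a reconstructed point nearby.
    coverage = float((d.min(axis=0) <= outlier_thresh).mean())
    return {
        "mean_error": float(nearest[inliers].mean()),
        "std_error": float(nearest[inliers].std()),
        "coverage": coverage,
        "outliers": int((~inliers).sum()),
    }
```

In practice the ground truth would be a dense mesh of the box and a k-d tree would replace the brute-force distance matrix, but the four reported quantities are computed in the same spirit.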
4. CONCLUSIONS

As has been demonstrated throughout the aforementioned projects, when complexity is very high and many human and mechatronic resources must be combined to achieve the expected objectives, integration and benchmarking procedures are crucial for success.

8 The UWSim online Underwater Simulator for benchmarking. Available online: http://robotprogramming.uji.es/UWSim/config.html


Fig. 9. Physical benchmarking system: water tank, Lightweight ARM5E manipulator, stereo camera, laser stripe projector and an object to be reconstructed and grasped (i.e. an amphora).

Concerning the benefits of this platform, it is noticeable that some current EU projects, such as PANDORA and MORPH, have been using this tool in their work plans. As has been highlighted previously, the developed methodology consists of a simulated trial in the first place, secondly a test in the water tank scenario and, at the end, a trial at sea. Moreover, the platform has recently been extended with a benchmarking module, which allows the characterization of results from the real scenario and the use of these datasets for further research, enabling the comparison of different algorithms under the real conditions of the experiments.

Currently, the tool presents a web-based user interface that enables the researcher to select the experimentation scenario while focusing on particular test algorithms (e.g. 2D vision-based grasping control, 3D object reconstruction, etc.). Moreover, the benchmarking tool makes it possible to add specific conditions to the testbed scenario, such as low visibility and water currents.

In conclusion, the benchmarking tool has proven to be an excellent software platform for integration and experimental validation in both virtual and real scenarios. It allows the definition of ordered datasets from real experiments and their further use for comparing specific scientific algorithms, such as pattern recognition and 3D object reconstruction for manipulation, taking into account specific visibility and current conditions. Further work will focus on enhancing the tool with more datasets and use cases that enable the measurement and comparison of specific robotics algorithms.

REFERENCES

Alcala, F. et al. (2008). VENUS (Virtual ExploratioN of Underwater Sites): two years of interdisciplinary collaboration. In 14th International Conference on Virtual Systems and Multimedia. Limassol, Cyprus.

DEXMART (2009). Specification of benchmarks. Deliverable D6.1 from the FP7-DEXMART project (DEXterous and autonomous dual-arm/hand robotic manipulation with sMART sensory-motor skills: a bridge from natural to artificial cognition). URL http://www.dexmart.eu.

Dillman, R. (2004). KA 1.10 Benchmarks for Robotics Research. Technical report, University of Karlsruhe.

Malis, E. (2004). Improving vision-based control using efficient second-order minimization techniques. In Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on, volume 2, 1843-1848. doi:10.1109/ROBOT.2004.1308092.

Nowak, W., Zakharov, A., Blumenthal, S., and Prassler, E. (2010). Benchmarks for mobile manipulation and robust obstacle avoidance and navigation. Deliverable D3.1 from the FP7-BRICS project (Best Practice in Robotics). URL http://www.best-of-robotics.org/home.

Prats, M., Pérez, J., Fernández, J., and Sanz, P. (2012). An open source tool for simulation and supervision of underwater intervention missions. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, 2577-2582. doi:10.1109/IROS.2012.6385788.

Pérez, J., Sales, J., Marín, R., and Sanz, P.J. (2014a). Web-based configuration tool for benchmarking of simulated intervention autonomous underwater vehicles. In Autonomous Robot Systems and Competitions (ICARSC), 2014 IEEE International Conference on, 279-284. Espinho, Portugal. doi:10.1109/ICARSC.2014.6849799.

Pérez, J., Sales, J., Marín, R., and Sanz, P. (2014b). Online tool for benchmarking of simulated intervention autonomous underwater vehicles: evaluating position controllers in changing underwater currents. In 2014 Second International Conference on Artificial Intelligence, Modelling and Simulation (AIMS 2014), 246-251. Madrid, Spain.

Pérez, J., Sales, J., Prats, M., Martí, J.V., Fornas, D., Marín, R., and Sanz, P.J. (2013). The underwater simulator UWSim: benchmarking capabilities on autonomous grasping. In 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO).

Sanz, P.J., Peñalver, A., Sales, J., Fornas, D., Fernández, J.J., Perez, J., and Bernabé, J.A. (2013a). GRASPER: a multisensory based manipulation system for underwater operations. In 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, Manchester, UK.

Sanz, P.J., Prats, M., Ridao, P., Ribas, D., Oliver, G., and Orti, A. (2010). Recent progress in the RAUVI project: a reconfigurable autonomous underwater vehicle for intervention. In 52nd International Symposium ELMAR-2010, 471-474. Zadar, Croatia.

Sanz, P.J., Ridao, P., Oliver, G., Casalino, G., Petillot, Y., Silvestre, C., Melchiorri, C., and Turetta, A. (2013b). TRIDENT: a European project targeted to increase the autonomy levels for underwater intervention missions. In OCEANS'13 MTS/IEEE Conference. San Diego, CA.

ViSP (2010). Visual Servoing Platform. Available online: http://www.irisa.fr/lagadic/visp/visp.html.