Towards a Deliberative Mission Control System for an AUV ?

Narcís Palomeras ∗ Pere Ridao ∗ Marc Carreras ∗ Carlos Silvestre ∗∗

∗ University of Girona. Edifici Politècnica IV, Campus Montilivi, Girona, Spain. [email protected]
∗∗ Instituto Superior Técnico, Lisboa, Portugal

Abstract: This paper investigates how to introduce onboard planning capabilities into a reactive control architecture. Our proposal includes a reactive layer composed of several vehicle primitives, a safe control execution layer based on Petri nets, and a deliberative layer combining a planner and a world modeler. A case with practical applicability is studied to illustrate the viability of the whole system and its possible application to the Ictineu AUV.

Keywords: Autonomous Vehicles, Robot Navigation, Programming, Vision, Marine Systems.

1. INTRODUCTION

When using Autonomous Underwater Vehicles (AUVs) to undertake complex missions, plans are required to specify those missions. In previous work (see Palomeras (2009)) a Mission Control System (MCS) based on Petri nets was presented. In this MCS, plans were easily described using an imperative language called Mission Control Language (MCL), which allowed sequential, parallel, conditional and iterative task execution. The MCL missions were then automatically translated into a Petri net to formally describe the mission thread of execution. Using the Petri net formalism it was possible to ensure the reachability of the end states of the mission as well as to avoid possible deadlocks.

Predefined plans are the state of the art for today's AUV missions. However, when dealing with the uncertain and unknown underwater environment, with processes and other agents changing the world in unpredictable ways, and with notoriously imprecise and noisy sensors, plans can fail for several reasons (see Turner (2005)). The difficulty of controlling the time at which events happen, of dealing with resources such as energy, and of responding to sensor malfunctions, together with the lack of onboard situational awareness, may cause off-line plans to fail at execution time as the assumptions upon which they were based are violated.
Moreover, the difficulty of adding new sub-goals during a mission diminishes the possibilities of these off-line planned systems, as exposed in Rajan et al. (2007). Consequently, an onboard planner with the ability to modify or re-plan the original plan should be included in an AUV. However, scientists want assurance that the data they want is collected where and when they specify and not when a

? This research was sponsored by the Spanish government (DPI2008-06548-C03-03), FREEsubNET (MRTN-CT-2006-036186) and EU FP-7, ICT Challenge 2: Cognitive Systems, Interaction, Robotics Collaborative Project (STREP), Grant agreement No: ICT-248497.

planner decides. In the same way, the military have often been opposed to onboard planners, since it is critical for most of their missions to ensure a predictable behavior of the robot. Therefore, a compromise between off-line plans and automated onboard planning is desirable.

Not many successful approaches using deliberative modules onboard an AUV are found in the literature. The Ocean Systems Laboratory of Heriot-Watt University has developed an architecture with planning abilities to carry out multiple-AUV missions (Evans et al. (2006)). They use on-line plan-repair algorithms in order to obtain automated plans similar to the ones defined by the user. T-REX (Rajan et al. (2007)), developed by the Monterey Bay Aquarium Research Institute, is another AUV architecture that takes advantage of an onboard planner. It is a port of the Remote Agent architecture developed for NASA's Deep Space 1 spacecraft. Another approach in this area is the Orca project (Turner (2005)), under which an intelligent mission controller for AUVs was developed that provides a context-sensitive reasoner enabling autonomous agents to respond quickly, automatically and appropriately to their context.

Nowadays, it is impossible to have a system able to react to any unforeseen event; today's algorithms are only able to react to those situations for which they have been programmed. However, the way in which the actions to take are specified can change a lot from one paradigm to another. For instance, if we try to define an off-line plan using only conditional structures and taking into account all the events that may happen during a mission, the size and complexity of this plan will grow exponentially with the number of goals and events. On the other hand, if we can describe each task to be performed in such a way that it can be understood by an onboard planner, the planner itself will be able to choose the most suitable task each time, simplifying the a priori mission description.
Thus, our proposal consists in adding a domain-independent onboard planner as well as a world modeler to our framework, following the three-layer schema proposed for hybrid control architectures (Lyons and Hendriks (1992)). Therefore, the presented control architecture is divided into three layers, as shown in Figure 1.

Fig. 1. Three-layer hybrid control architecture.

This paper is organized as follows. Section 2 presents the reactive layer of the vehicle's control architecture. Section 3 reviews the control execution layer previously introduced in Palomeras (2009). The deliberative layer is introduced in Section 4 and, finally, a preliminary case study is reported in Section 5 before the conclusions.

2. REACTIVE LAYER

As depicted in Figure 1, the reactive layer is composed of a set of vehicle primitives. These primitives read perceptions from the vehicle's sensors and send set-points to its actuators. The reactive layer contains the basic robot functionalities. For an AUV, a primitive can range from simply enabling a sensor (enableCompass) to a complex behavior such as navigating towards a 3D way-point (goToWayPoint).

Primitives have a goal to be achieved. For instance, the goal of the KeepAltitude primitive is to drive the robot at a constant altitude with respect to the seabed, within an uncertainty interval. In general, a primitive can be enabled or disabled by sending an action from the MCS. Depending on whether the primitive is able to achieve its goal, whether a failure is detected, etc., different events may be generated by the primitive. In addition to these events, several sensor readings (perceptions) can also be sent from the reactive layer to the MCS.

All communications between the vehicle architecture (the reactive layer) and the MCS (the control execution and the deliberative layers) are done through the Architecture Abstraction Layer (AAL). Our intention has been to design an MCS as generic as possible and to allow for an easy adaptation to different reactive architectures. To achieve this goal, the AAL is in charge of the communications between the MCS and the vehicle's architecture, making it vehicle-independent. It offers an interface based on three types of signals: Actions, Events and Perceptions. Actions enable or disable basic primitives within the vehicle's architecture. Events are used by the vehicle architecture to notify changes in the state of its primitives. Finally, some Perceptions are transmitted from the vehicle's architecture to the deliberative layer in the MCS in order to be used by the world model. The AAL thus allows this MCS approach to be used in different vehicles with different reactive architectures.

3. CONTROL EXECUTION LAYER
As depicted in Figure 1, the MCS is composed of the control execution layer together with the deliberative layer. The control execution layer is responsible for executing the sequence of operators selected by the deliberative layer, decomposing these operators into primitives to be run within the reactive layer. The deliberative layer thus plans using the operators defined in the control execution layer instead of using the vehicle primitives directly. Hence, the main function of the control execution layer is to define and execute the operators used in the deliberative layer.

3.1 Petri net Building Blocks

Petri net Building Blocks are the basic building structures used to define as well as to execute an operator. Petri net Building Blocks are defined using the Petri net formalism and they must ensure some properties:

(1) Petri net Building Blocks must be free of deadlocks.
(2) The reachability graph of every Petri net Building Block must show that, after starting in the rest-state with a valid input marking and firing all the possible transitions, the set of marked places is the set of places marked in the rest-state plus a valid combination of output places.
(3) All the structures used to define an operator must share the same interface; that is, they must have the same input and output places.

The first and second properties can be tested through a reachability analysis. The state explosion problem that arises when performing a reachability analysis is well known; however, at this level, Petri net Building Blocks are kept small enough that the reachability analysis is not a problem. The third property introduces the concept of interface. A Petri net interface is composed of a set of input places Pi, where ∀pk ∈ Pi, •pk = Ø, and a set of output places Po, where ∀pk ∈ Po, pk• = Ø, in which the places of Pi ∪ Po 1 are used as fusion places to build more complex structures from different Petri net Building Blocks.
It is possible to design any kind of interface, but all the structures used to define a mission must share the same one. Fig. 2(a) shows an example of a task with an interface composed of Pi = {begin, abort} and Po = {ok, fail}. There are two different types of Petri net Building Blocks: Tasks and Control Structures. Both of them satisfy the properties presented above; however, their characteristics and usefulness differ, as detailed hereafter.

1 Where •p is the set of transitions with output arcs to place p and p• is the set of transitions receiving input arcs from place p.
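To make the interface and rest-state requirements concrete, the following Python sketch models a Fig. 2(a)-style task as a toy Petri net. The place and transition names are illustrative assumptions, not the actual MCL-generated net: starting from a valid input marking, firing transitions to quiescence should mark exactly one output place of Po = {ok, fail}.

```python
from collections import Counter

class PetriNet:
    """Minimal Petri net: each transition maps input places to output places."""
    def __init__(self, transitions):
        # transitions: {name: (input_places, output_places)}
        self.transitions = transitions

    def enabled(self, marking):
        return [t for t, (ins, _) in self.transitions.items()
                if all(marking[p] >= 1 for p in ins)]

    def fire(self, marking, t):
        ins, outs = self.transitions[t]
        m = Counter(marking)
        for p in ins:
            m[p] -= 1
        for p in outs:
            m[p] += 1
        return m

    def run(self, marking):
        """Fire enabled transitions until quiescence (deterministic toy nets only)."""
        m = Counter(marking)
        while True:
            en = self.enabled(m)
            if not en:
                return m
            m = self.fire(m, en[0])

# A toy task with the interface Pi = {begin, abort}, Po = {ok, fail}.
task = PetriNet({
    "T0": (["begin"], ["running"]),           # enable the primitive
    "T1": (["running", "evOk"], ["ok"]),      # primitive reported Ok
    "T2": (["running", "evFail"], ["fail"]),  # primitive reported Fail
    "T4": (["running", "abort"], ["fail"]),   # external abort request
})

# Valid input marking: a token in begin plus a (simulated) primitiveOk event.
final = task.run({"begin": 1, "evOk": 1})
assert final["ok"] == 1 and final["fail"] == 0
```

Running the net to quiescence and inspecting the final marking is, in miniature, the reachability check that properties (1) and (2) require of every building block.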

Fig. 2. (a) Simple Task with time-out. (b) Sequence control structure.

Fig. 3. Composition of the tasks Goto() and AchieveHeading() with the control structure SEQ.

Table 1. Description of elements in Fig. 2

Task:
- T0: sends the action Enable primitive.
- T1...T4: send the action Disable primitive.
- T3: timed transition, fired after a time-out.
- POk, PFail: event places; receive a token when the associated primitive changes its state to Ok/Fail.
- POff: event place; receives a token when the associated primitive changes its state to Off.

Control Structure:
- begin2, abort2, ok2, fail2: fusion places (interface) to the first structure in the sequence.
- begin3, abort3, ok3, fail3: fusion places (interface) to the second structure in the sequence.

3.2 Tasks

Tasks are Petri net Building Blocks that communicate with the robot architecture by means of Actions and Events. Actions are associated with transitions in the Petri net and are executed whenever the related transition is fired. Events communicate changes detected by the vehicle architecture to the mission Petri net. Every event is associated with a particular place: if a specific event is triggered by a primitive in the control architecture, the respective place receives a token. The operators used in the deliberative layer are programmed by commanding the control flow of tasks. Task execution triggers the execution of the primitives as well as the detection of the corresponding events. Fig. 2(a) shows a task able to launch a primitive and to stop it when a primitiveOk or a primitiveFail event is raised or when the T3 time-out expires (see Table 1 for a description of some places and transitions in Fig. 2).

3.3 Control Structures
The Petri net Building Blocks used to aggregate other Petri net Building Blocks, with the objective of modeling more complex operators by controlling the flow among different execution paths, are called control structures. The net that results from aggregating several Petri net Building Blocks using a control structure is a new Petri net Building Block that satisfies the desired Petri net properties, as inherited from the original Petri net Building Blocks. This operation will be called composition. It is worth noting that it is not necessary to span the whole reachability tree of the resulting Petri net to ensure the deadlock-freeness and state reachability properties. Spanning it would have a very high computational cost, as the Petri net that results from the composition operation can be very large. These properties are guaranteed by construction, and hence an operator implemented according to these rules progresses from its starting state to an exit state without getting stuck in a deadlock. It can be shown that this set of Petri net Building Blocks is closed with respect to the composition operation. Hence, the result of composing Petri net Building Blocks with the above properties is a new Petri net Building Block for which the same properties hold by construction, without the need of further verification. If two tasks called GoTo() and AchieveAltitude(), each one described by the Petri net reported in Fig. 2(a), are composed by the sequence control structure shown in Fig. 2(b), the resulting Petri net will be the one presented in Fig. 3. Depending on the selected interfaces, different control structures can be defined. Based on the above interface Pi = {begin, abort} and Po = {ok, fail}, the MCL provides some popular control structures like sequence, parallel, while and if-then-else, while others may be defined by the programmer if needed.
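The fusion-place idea behind composition can be sketched with a small standalone Python example. The dict-based net representation and all names below are illustrative assumptions, not the MCL implementation: sequence composition fuses the ok output of the first block with the begin input of the second, while abort and fail are shared.

```python
from collections import Counter

# A building block is a dict: transition -> (input places, output places).
# All blocks share the interface Pi = {begin, abort}, Po = {ok, fail}.
goto = {
    "T0": (["begin"], ["run"]),
    "T1": (["run", "evOk"], ["ok"]),
    "T2": (["run", "evFail"], ["fail"]),
}

def prefix(block, tag, interface_map):
    """Rename internal places with a tag; map interface places via interface_map."""
    ren = lambda p: interface_map.get(p, f"{tag}.{p}")
    return {f"{tag}.{t}": ([ren(p) for p in i], [ren(p) for p in o])
            for t, (i, o) in block.items()}

def seq(a, b):
    """Simplified SEQ composition: fuse a's ok with b's begin; share abort/fail."""
    net = {}
    net.update(prefix(a, "a", {"begin": "begin", "abort": "abort",
                               "fail": "fail", "ok": "link"}))
    net.update(prefix(b, "b", {"begin": "link", "abort": "abort",
                               "fail": "fail", "ok": "ok"}))
    return net

def run(net, marking):
    """Fire enabled transitions until quiescence."""
    m = Counter(marking)
    while True:
        en = [t for t, (i, _) in net.items() if all(m[p] >= 1 for p in i)]
        if not en:
            return m
        i, o = net[en[0]]
        for p in i: m[p] -= 1
        for p in o: m[p] += 1

mission = seq(goto, dict(goto))  # two copies of the block in sequence
final = run(mission, {"begin": 1, "a.evOk": 1, "b.evOk": 1})
assert final["ok"] == 1 and final["fail"] == 0
```

The composed dict is itself a block with the same begin/abort/ok/fail interface, which is the closure-under-composition property claimed above.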
In Palomeras (2009) these control structures and many others are detailed.

3.4 The Mission Control Language

Because the direct manipulation and construction of the Petri nets rapidly becomes cumbersome, a high-level language called Mission Control Language (MCL) was introduced in Palomeras (2009). In order to use the MCL to automatically construct the Petri nets, several sections must be defined. First, the necessary actions to enable/disable

the primitives in the reactive layer, as well as the events that these primitives are able to raise, must be described. Then, to define the Petri net Building Blocks, their internal structure (places, transitions and arcs) has to be specified. Moreover, it is necessary to relate the previously defined actions and events with the corresponding places and transitions in the task Petri net Building Blocks. Finally, once the tasks and the control structures have been described, several operators can be coded in MCL. The MCL control structures are very similar to those provided by other popular languages, while tasks can be seen as function calls. Hence, programming a new operator using MCL becomes very simple.

Extract 1: Search Operator using MCL

operator Search(object, zone, altitude) {
    AchieveAltitude(altitude) ;
    parallel {
        KeepAltitude(altitude)
    } or {
        SearchPattern(zone)
    } or {
        DetectObject(object)
    }
}

Extract 1 describes an operator called Search coded in MCL. Four vehicle primitives and two control structures are used to describe it. The primitives are AchieveAltitude and KeepAltitude, used to reach and keep an altitude with respect to the seabed; SearchPattern, which performs a lawnmower-type trajectory in a predefined area; and the DetectObject primitive, able to recognize an object using a down-looking color camera. The sequence (;) and the parallel (parallel-or) control structures are also used in this operator. When the Search operator presented in Extract 1 is executed, the vehicle dives until a certain altitude with respect to the seabed is achieved. Then, a search pattern trajectory covering the specified area is performed while keeping the altitude. The operator finalizes when an object is detected by the DetectObject primitive or when the entire search area is covered by the SearchPattern primitive. The Search operator may fail if any of the involved primitives fails. For example, if the navigation data is no longer accurate, the SearchPattern primitive fails, causing a failure of the whole operator.

The process of generating a Petri net from an MCL program is automatically performed by the MCL-Compiler. The code generated by the MCL-Compiler contains a single Petri net for each operator, coded using a standard XML-based interchange format for Petri nets called the Petri Net Markup Language (PNML) 2. A Petri net Player is used to execute the obtained operator in real time, applying the basic Petri net transition rules. The Petri net Player has to control all the timers associated with timed transitions, fire the enabled transitions, send actions from the MCS to the reactive layer in order to enable, disable or reconfigure the vehicle primitives, and transform the events received from the AAL into marked places within the operator's Petri net. When an operator is successfully executed, the next operator in the plan is executed.
However, if the executed operator fails, the Petri net Player must wait until a new plan is generated by the deliberative layer.

2 http://www.pnml.org
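As a hedged sketch of the first thing such a Petri net Player must do, the fragment below loads a drastically simplified PNML-style net into pre- and post-sets suitable for applying the transition rules. The ids are illustrative; real MCL-Compiler output carries more structure (place names, initial markings, timed-transition annotations) than this subset.

```python
import xml.etree.ElementTree as ET

# A simplified PNML-style fragment (ids are illustrative assumptions).
PNML = """
<pnml>
  <net id="searchOp">
    <place id="begin"/><place id="run"/><place id="ok"/>
    <transition id="T0"/><transition id="T1"/>
    <arc id="a1" source="begin" target="T0"/>
    <arc id="a2" source="T0" target="run"/>
    <arc id="a3" source="run" target="T1"/>
    <arc id="a4" source="T1" target="ok"/>
  </net>
</pnml>
"""

def load_pnml(text):
    """Parse a PNML net into (places, transitions, pre, post) structures."""
    net = ET.fromstring(text).find("net")
    places = {p.get("id") for p in net.findall("place")}
    trans = {t.get("id") for t in net.findall("transition")}
    pre = {t: [] for t in trans}   # input places of each transition
    post = {t: [] for t in trans}  # output places of each transition
    for arc in net.findall("arc"):
        s, t = arc.get("source"), arc.get("target")
        if s in places:            # place -> transition arc
            pre[t].append(s)
        else:                      # transition -> place arc
            post[s].append(t)
    return places, trans, pre, post

places, trans, pre, post = load_pnml(PNML)
assert pre["T0"] == ["begin"] and post["T1"] == ["ok"]
```

With pre- and post-sets in hand, the player's main loop reduces to the standard transition rule: fire any transition whose pre-set places all hold a token.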

4. DELIBERATIVE LAYER

Two main modules compose the deliberative layer: a planner and a world modeler. The former is a domain-independent planning algorithm that, given a set of facts provided by the world modeler as well as the list of available operators in the control execution layer, generates a plan to achieve the goals described by the user (see Figure 1). The current implementation is a simple breadth-first search algorithm working in the state-space, using the restricted planning conceptual model (see below). The second module is the world modeler. It receives perceptions from the reactive layer and, applying a set of scripts defined by the user, generates the corresponding facts to model the world.

4.1 Planner

A restricted conceptual model is used for the planner (see Ghallab et al. (2004)); therefore, the following properties hold:

• The system has a finite and fully observable set of states, with complete knowledge about it.
• The system is deterministic and static. This means that the states only change when an action is applied, and that applying an action brings the system to a single other state.
• The planner handles only restricted goals. Extended goals, such as states to be avoided and constraints on state trajectories, are not handled.
• The solution plan is a linearly ordered finite sequence of actions with no duration.

To keep the planner as simple as possible, the number of planning actions has been drastically reduced by grouping several vehicle primitives into a single operator. Moreover, for the sake of simplicity, time and resources are not yet taken into account when planning. These simplifications allow us to produce plans faster, avoiding having to anticipate the state of the resources and to take decisions based on the worst possible case, which is the usual approach when planning with resources. The main goal of our proposal is thus to allow the AUV to plan within a few seconds.
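A minimal sketch of such a breadth-first state-space search follows, assuming hypothetical grounded operators in precondition/add/del form. The operator and fact names below are illustrative, not the paper's exact domain description.

```python
from collections import deque

# Hypothetical grounded operators under the restricted model:
# name -> (preconditions, add effects, del effects), all sets of facts.
OPERATORS = {
    "Goto(deploy,survey)":   ({"In deploy"}, {"In survey"}, {"In deploy"}),
    "Search(item,survey)":   ({"In survey", "UnknownPos item"},
                              {"KnownPos item"}, {"UnknownPos item"}),
    "Goto(survey,recovery)": ({"In survey"}, {"In recovery"}, {"In survey"}),
}

def plan(initial, goals):
    """Breadth-first search in the state space: returns a shortest
    sequence of operators reaching a state that contains all goals."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if goals <= state:                      # all goal facts hold
            return path
        for name, (pre, add, dele) in OPERATORS.items():
            if pre <= state:                    # operator applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None                                 # no plan exists

p = plan({"In deploy", "UnknownPos item"}, {"In recovery", "KnownPos item"})
assert p == ["Goto(deploy,survey)", "Search(item,survey)",
             "Goto(survey,recovery)"]
```

Because states are sets of facts and each operator application yields a single successor state, this directly instantiates the deterministic, static, restricted-goal model listed above; BFS additionally guarantees the returned plan is a shortest one.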
An additional advantage is that the vehicle's behavior is more predictable, because it is the result of executing an operator predefined by the user instead of an arbitrary combination of primitives selected by the onboard planner. Moreover, although the planner is only capable of sequencing the operators to execute, parallel, conditional or iterative execution of primitives can be done within the operators. A drawback of this architecture is that the vehicle needs to re-plan each time the world model changes. This is a consequence of the static system assumption of the adopted restricted model.

Figure 4 shows the four control loops within the proposed architecture. The lower-level control loop contains the velocity controller, which operates at a frequency of 10Hz. It is in the reactive layer and is responsible for sending set-points to the actuators. The second control loop is also in the reactive layer and runs at a frequency of 5Hz. It is in charge of coordinating the primitive responses and sending the resulting set-points to the velocity controller. The third control loop appears in the control execution layer. It is able to react to any new event in less than a second. Using the events received from the reactive layer, it controls which primitives must be in operation and which not, following a given plan. Finally, the higher-level control loop is able to modify the plan executed in the control execution layer depending on the received perceptions. It can react to new facts in the world in less than 10s. Hence, the operators composed of primitives must be able to react to fast changes in the real world, while the planner has to look only for major changes in order to adapt the plan to execute.

Fig. 4. The four control loops within the proposed hybrid architecture.

Extract 2 shows how the Search operator presented in Extract 1 must be described in order to be used by the planner algorithm. A classical representation is used in which the operator is defined as a triple o = {name, preconditions, effects}. The name is defined as name = n(x1, ..., xk), where n is a symbol called the operator name and x1, ..., xk are all the variables that appear anywhere in o. We call these variables entities, and each entity has an assigned type. Facts are used to describe the other two elements of an operator. They are the combination of one or two entities with a proposition representing a quality of, or a relation among, the entities. Preconditions indicate the facts that must be present in the world model in order to apply an operator. Finally, effects are the changes produced in the world when applying the operator. We separate effects into add and del effects, where add effects are the facts added to the world and del effects the facts removed from it once the operator has been applied. Additionally, boolean expressions can be added to the operator; only when all the expressions are true can the operator be executed.

The planning operator Search presented in Extract 2 has three entities. The first one (r) is a Robot type entity, the second (o) is an Object type entity and the last one (z) is a Zone type entity. The preconditions indicate that the position of the entity o must be unknown, that the r entity must be in the zone indicated by the z entity and, finally, that both the navigation and the robot's battery have to be correct and its payload empty in order to apply the operator. Also, there should not be any alarm raised. The result of executing the operator is that the o entity is no longer in an unknown position. An additional element has been added to the classical representation to indicate the MCL implementation of the operator and its corresponding parameters. The MCL operator Search (see Extract 1) will be executed with the entity o as the object to search, the entity z as the area to survey and with the literal 1.5 (meters) as the flight altitude during the search.

Extract 2: Search operator for planning

op:  Search Robot r, Object o, Zone z
pre: o UnknowPos, r In z, r NavigationOk, r BatteryOk, r PayLoadEmpty, r NoAlarms
add: o KnownPos
del: o UnknowPos
mcl: Search( o, z, 1.5 )

4.2 World modeler

Several techniques have been studied to extract information from the real world. A world modeler, inspired by the context provider presented in Jeong and Kim (2008), has been used to extract the knowledge about the world. The world used to plan is a collection of facts. These facts are extracted by applying a set of scripts, written by the user, using the perceptions received from the reactive layer as input parameters (see Figure 1). Extract 3 shows one of these scripts. Each script is defined by the triple s = {name, conditions, effects}. The name is an identifier. The conditions are a set of expressions involving variables coming from the reactive layer. As for operators, the effects are the changes to be produced in the world if the script is applied; they are also divided into add and del effects. When all the conditions inside a script are true, its effects are applied. In Extract 3, when the auv.battery perception is sent from the reactive layer to the world modeler, the script low_battery is executed. If the value of the variable auv.battery is smaller than 30%, the fact auv BatteryOk is removed from the world database while the fact auv BatteryLow is added to it.

Extract 3: low battery fact provider script
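The script mechanism just described can be sketched in Python. This is only an illustration of the s = {name, conditions, effects} semantics using the low-battery example from the text; the function and perception names are assumptions, not the original MCL listing.

```python
# Hypothetical fact-provider script matching the description above:
# condition auv.battery < 30; del 'auv BatteryOk'; add 'auv BatteryLow'.
def low_battery(perceptions):
    if perceptions.get("auv.battery", 100.0) < 30.0:
        return {"add": {"auv BatteryLow"}, "del": {"auv BatteryOk"}}
    return None  # conditions not met: no effects

def update_world(world, perceptions, scripts):
    """Apply every script whose conditions hold to the fact database."""
    for script in scripts:
        effects = script(perceptions)
        if effects:
            world = (world - effects["del"]) | effects["add"]
    return world

world = {"auv BatteryOk", "auv NavigationOk"}
world = update_world(world, {"auv.battery": 25.0}, [low_battery])
assert world == {"auv BatteryLow", "auv NavigationOk"}
```

Since the planner re-plans whenever the fact database changes, each applied script effect is exactly the kind of "new fact in the world" that the 10s higher-level control loop reacts to.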

5. A CASE OF USE

A use case for a simple intervention mission has been programmed in order to study the viability of the whole system and its application to the Ictineu AUV (Ribas et al. (2007)). The objective of the mission is to deploy an AUV in a specified area to perform a survey, searching for and collecting a set of objects. Due to the size of the search area, it is expected that the mission may last for hours.

As the vehicle's navigation is based on dead-reckoning, it is necessary to surface to get a GPS position fix whenever the accuracy of the navigation data falls below a threshold. It is assumed that the vehicle can run out of batteries during the mission; in this case, the AUV has to recharge its batteries in a docking station installed on the seabed. Additionally, while the vehicle is collecting the objects, its payload may become full. Then, the AUV has to go to the base ship to empty the payload. Finally, some sensors or actuators could fail, causing the robot to abort the mission.

Eight operators have been defined: Search, detailed in Extracts 1 and 2; Goto, to go from one zone to another; ApproachObject, to maneuver the vehicle from where the object is detected towards where it can be recovered; GraspObject, to take the object once it is within the workspace; GetGpsFix, to surface the vehicle and wait for a GPS fix; ChargeBattery, to charge the battery once docked in the docking station; ClearPayLoad, to empty the payload; and, finally, CheckAlarms, to check whether a raised alarm is important enough to abort the mission. The entities used in this mission are the Robot type entity auv, several Zone type entities called deploy, dock, base, survey and recovery, and an Object type entity identified as item. Finally, several scripts must also be defined in order to translate the perceptions of the real world into facts in the world database.

The mission goals have been set to: auv In recovery and recollectedItems 2. The first generated plan consists of the following sequence of operators: Goto(auv, deploy, survey), Search(auv, item, survey), ApproachObject(auv, item, survey), TakeObject(auv, item, recollectedItems, survey), Search(auv, item, survey), ApproachObject(auv, item, survey), TakeObject(auv, item, recollectedItems, survey) and Goto(auv, survey, recovery).
However, if, when searching for the second item, the vehicle's navigation data becomes poor and the battery charge falls below 30%, the second Search operator is aborted and the following sequence of operators is executed instead: GetFix(auv), Goto(auv, survey, dock), ChargeBattery(auv, dock), Goto(auv, dock, survey), Search(auv, item, survey), ApproachObject(auv, item, survey), TakeObject(auv, item, recollectedItems, survey) and Goto(auv, survey, recovery). All the plans used in the course of this mission have been generated in less than 0.3s using a standard laptop.

The simplicity of this mission leads to the conclusion that, using conditional structures such as If-Then-Else or Try-Catch, it might be possible to program a similar behavior using an off-line plan. However, the complexity of describing an off-line plan grows exponentially with the number of events to which the vehicle is able to react, as well as with the number of goals to accomplish. Thus, although the inclusion of a planner within the architecture does not provide a greater degree of intelligence than that achieved using an off-line plan, it dramatically simplifies the way in which the mission is described and avoids possible errors introduced by the user when describing the mission. The inclusion of new goals during the execution of the mission (i.e. changing the number of items to be collected) is another advantage of the proposed system.

6. CONCLUSIONS AND FUTURE WORK

This paper has investigated how to introduce onboard planning capabilities into a reactive control architecture. In the control execution layer, Petri nets are used to safely model the operators. It has been shown that it is possible to compose Petri net Building Blocks, designed free of deadlocks and reusable, to generate the operators. Moreover, to describe these operators, instead of directly manipulating the Petri nets, a high-level imperative language called MCL, which compiles into a Petri net, is used. Finally, the deliberative layer includes a domain-independent planner and a world modeler that transforms the perceptions of the reactive layer into facts in the world database used to plan. To illustrate the system, an intervention mission has been formulated. Although the inclusion of a planner within the architecture does not provide a greater degree of intelligence than that achieved using an off-line plan, it dramatically simplifies the way in which the mission is described and avoids possible errors introduced by the user when describing the mission. As future work, the whole proposed hybrid control architecture must be implemented in the Ictineu AUV and tested in a survey mission. Moreover, the planning algorithm's capabilities can be improved.

REFERENCES

Evans, J., Sotzing, C., Patron, P., and Lane, D. (2006). Cooperative planning architectures for multi-vehicle autonomous operations. Systems Engineering for Autonomous Systems Defence Technology Centre.
Ghallab, M., Nau, D., and Traverso, P. (2004). Automated Planning. Morgan Kaufmann, Elsevier.
Jeong, I.B. and Kim, J.H. (2008). Multi-layered architecture of middleware for ubiquitous robot. IEEE International Conference on Systems, Man and Cybernetics, 1–6.
Lyons, D.M. and Hendriks, A.J. (1992). Planning for reactive robot behavior. IEEE International Conference on Robotics and Automation, 2675–2680, vol. 3. doi:10.1109/ROBOT.1992.220001.
Palomeras (2009). Using Petri nets to specify and execute missions for autonomous underwater vehicles. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), 4439–4444. doi:10.1109/IROS.2009.5354045.
Rajan, K., McGann, C., Py, F., and Thomas, H. (2007). Robust mission planning using deliberative autonomy for autonomous underwater vehicles. ICRA Workshop on Robotics in Challenging and Hazardous Environments, 21–25.
Ribas, D., Palomeras, N., Ridao, P., Carreras, M., and Hernandez (2007). Ictineu AUV wins the first SAUC-E competition. IEEE International Conference on Robotics and Automation, 151–156. doi:10.1109/ROBOT.2007.363779.
Turner, R.M. (2005). Intelligent mission planning and control of autonomous underwater vehicles. ICAPS.