10th IFAC Symposium on Robot Control, International Federation of Automatic Control, September 5-7, 2012, Dubrovnik, Croatia
Distributed Systems in Control and Navigation of Small Underwater Vehicles

Đula Nađ, Nikola Mišković, Tomislav Lugarić, Zoran Vukić

University of Zagreb, Faculty of Electrical Engineering and Computing, LabUST - Laboratory for Underwater Systems and Technologies, Unska 3, Zagreb, Croatia (e-mail: [email protected])
Abstract: Distributed systems are pervasive in the process industry but less so in underwater robotics. The development of open-source frameworks targeted at mobile systems increases the applicability of distributed techniques in vehicle control. Most often, supervisory control is implemented in a distributed framework while low-level control is kept in the embedded system. This approach allows sharing of sensor data, logging, and increased reconfigurability of high-level controllers, while the embedded system offers precise timing and reliable behaviour. Alternatively, the low-level control can be moved into the distributed framework to allow easier reconfiguration and prototyping. In this paper we describe the architecture used for low-level control and analyze its performance in the MOOS and ROS distributed frameworks.

Keywords: Distributed systems; low-level control; MOOS; ROS

1. INTRODUCTION

Small underwater vehicles (UUVs) are employed in different domains of the maritime sector. Compact and affordable, they make excellent platforms for shallow water research. Often, they come with only basic thruster control. Embedded low-level controllers for heading and depth are sometimes available, but their quality and flexibility are usually limited. Sensor suites on small UUVs are likewise limited: usually a compass and a pressure sensor are available, while other sensors have to be added separately. Filtering and state estimation are nonexistent and have to be implemented with the help of an external processing unit.

Different missions and research objectives require different sensor suites and controllers. This imposes considerable software development effort across experiments; effectively, the amount of time spent on research is reduced in favor of code development and maintenance. Several frameworks have been developed around different land, air, surface and underwater platforms.
Their concept is to ease sensor, control and navigation integration and to reduce the development effort by providing supporting libraries.

Classical solutions embed control, navigation and sensor acquisition in a single application. This approach suffers from a lack of modularity and robustness; maintenance and extension of large amounts of monolithic code is complicated and error-prone. Naturally, systems have evolved away from this development model into modular and distributed

⋆ The work was carried out in the framework of a Coordination and Support Action type of project supported by the European Commission under the Seventh Framework Programme "CURE - Developing Croatian Underwater Robotics Research Potential" SP-4 Capacities (call FP7-REGPOT-2008-1) under Grant Agreement Number 229553.
978-3-902823-11-3/12/$20.00 © 2012 IFAC
frameworks. Advantages of software and hardware distribution, such as flexibility, reliability, and easy expansion and maintenance, have made distributed frameworks popular. Further, failures in one module do not compromise the whole system; other modules can continue operation and adjust accordingly. Inputs and outputs between modules are exposed, which enables easier debugging and error detection. However, distributed frameworks have an inherent shortcoming: transport delay, which can influence system stability. We analyze the problem of interprocess delay in later sections.

Open-source distributed frameworks are rapidly increasing in quality; they currently support many sensors and platforms and offer visualization, benchmarking and debugging tools. Development time is greatly reduced by applying these freely available utilities. A plethora of such frameworks exists, from more general ones like OpenJAUS (Galluzzo and Kent [2010]) to more specific ones like MOOS (Newman [2007a]), targeted at underwater vehicles, and Player (Gerkey et al. [2003]), ROS (Quigley et al. [2009]) and Orca (Makarenko et al. [2006]), targeted at mobile land robots. All of these frameworks take a different approach to inter-process communication. Some offer greater flexibility and reconfigurability, which reduces transport delay between modules and processes.

The authors have chosen to analyze the MOOS and ROS frameworks in this paper. Previous work by the authors utilized the MOOS framework for higher level control (Djapic and Nad [2010]). The ROS framework is well established in land robotics and offers a broad range of supported platforms. We selected ROS as a potential new framework for underwater vehicle control. Lately, the emerging use of the ROS framework in underwater vehicle competitions
like SAUC-E triggered this interest.

We focus on systems that can provide soft real-time behaviour. Although hard real-time systems exist, we find that they are not necessary considering the typical dynamics of underwater vehicles.

2. MODULAR ARCHITECTURE

The chosen distributed frameworks offer solid interprocess communication, but the application layer and the contents of exchanged messages are left open. Without unique sensor, control and navigation messages we still have to perform the overhead of data and message context adaptation. ROS takes a step in this direction by defining unique messages, e.g. for range and visual sensor data, 3D point cloud data, etc. However, in the domain of underwater robotics control there is still room for additional abstraction.

Underwater vehicles can be modeled with a rigid body model, as described in Fossen [1994]. We observe that vehicles, in general, are controllable by surge, sway and heave forces and roll, pitch and yaw moments (Fossen [1994]). Often, in underwater systems, the controller action results directly in a desired rudder deflection or propeller rotation. This approach requires a redesign of controllers when the platform changes. Designing a controller in the force/torque domain instead leaves the control surface or thruster action to the allocation system, which is closely coupled with the platform. Such a design decouples the controller from the platform. Controller parameters will still have to be changed, but a generic auto-tune method can now be used on each platform to determine the dynamic model and controller parameters. This abstraction offers a plug-and-play system with easy configuration and minimises code changes. Note that a similar approach can be applied to sensor acquisition and navigation algorithms.

The architecture for underwater vehicle navigation and control used in this article is designed to be modular, loosely coupled and message based.
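As an illustration of this decoupling, consider a minimal allocation sketch. The thruster geometry and function names below are hypothetical, not the configuration of any particular vehicle: a generic surge force and yaw moment are mapped onto two stern thrusters mounted off the centreline.

```python
# Hypothetical sketch of controller/allocation decoupling: the controller
# outputs generic forces and moments (tau), while a platform-specific
# allocation step maps them to individual thruster commands.

def allocate(tau_surge, tau_yaw, lever_arm=0.2):
    """Map a desired surge force and yaw moment to two stern thrusters
    mounted +/- lever_arm metres from the centreline.

    Inverting X = port + stbd and N = lever_arm * (stbd - port) gives
    the port/starboard split below.
    """
    d = lever_arm
    port = 0.5 * tau_surge - tau_yaw / (2.0 * d)
    stbd = 0.5 * tau_surge + tau_yaw / (2.0 * d)
    return port, stbd
```

Swapping the platform then only means replacing `allocate`; the force/torque controller itself stays untouched.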
The primary design goal is shifting focus from hardware specifics to data specifics. Processes are separated to resemble the usual control loop elements, as can be seen in Figure 1.

Figure 1. Simple control loop block diagram (Control, Filter, Process and Sensor blocks in a feedback loop)

Usually several sensors will exist and several degrees of freedom will be controlled. Sensor fusion is then handled within the filtering sequence. Loose coupling makes the system robust because a failure in one of the modules does not prevent other modules from functioning. For example, if the control application fails, the separate process application can detect this and bring the vehicle into a safe shutdown. Sensor redundancy and fault tolerance can also be easily implemented on top of the control system. Data sharing is simplified since several modules can receive the same data. Processes are connected over one communication framework that is chosen
by the user. The communication framework will be chosen depending on overhead, transport delay, availability and similar benchmarks. Figure 2 shows a block diagram of this architecture. The architecture resembles a heterarchical architecture (Valavanis et al. [1997]) since it uses a parallel structure. However, a loose hierarchy is maintained inside the low-level process group and between high-level mission control processes. More information about the benefits of this architecture can be found in Lugaric et al. [2011].

Figure 2. Distributed architecture block diagram (vehicle, sonar, DVL and USBL drivers, map display (UI), location calculation, control console (UI) and DVL+USBL fusion, each attached through a communication interface to a shared communication medium)

3. DISTRIBUTED COMMUNICATION FRAMEWORKS

For analysis, we have chosen two distributed frameworks: the Mission Oriented Operating Suite (MOOS) and the Robot Operating System (ROS). We mentioned that transport delay exists in these frameworks. Sensor acquisition, estimation and higher level control are more tolerant of delays than low-level control. However, we wish to implement the low-level control in these frameworks to gain easy configuration and prototyping during research. It is therefore necessary to analyze the communication framework and determine its suitability for distributed low-level control.

Distributed controls can be differentiated by the way nodes (processes) become activated. Nodes can be activated periodically or by events on the communication bus. Figure 3 shows a communication diagram between three distributed applications. The sensor is time driven and outputs measurements periodically. In the left image, the controller reacts instantly on the new measurement event. The control signal is calculated and forwarded to the actuators of the process. Since the actuation system of the process is event driven, it reacts instantly. Assuming that the communication transport delay is small, effects of the control signal will be noticeable in the next sampling period of the sensor. Therefore, we can say that the control loop executes with time period Td.

The right image displays a time-driven controller and process. Observe that the sensor information arrives at the controller node before the next sampling time. However, the controller is executed periodically and will generate a new control signal only at its next sampling period. The time
driven actuator will react at its next sampling period. It can be observed that a lag time larger than one sampling period exists in the time-driven case. This delay cannot be neglected when sampling times are only a few times faster than the process dynamics. It can be compensated for by increasing the sampling period of the process. That approach is possible when the system dynamics are slow, but for faster dynamics the sampling period has to be kept small.

Figure 3. Event- and time-driven node activation diagram (Sensor, Control and Process nodes at sampling instants kTd, (k+1)Td, (k+2)Td)
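The timing argument above can be reproduced with a small numeric sketch. This is an idealized model of our own making: transport delay is neglected, and each time-driven node simply waits for its next activation tick.

```python
import math

def next_tick(t, phase, Td):
    """First activation instant of a time-driven node (ticking at
    phase + n*Td, n = 0, 1, ...) at or after time t."""
    n = max(0, math.ceil((t - phase) / Td))
    return phase + n * Td

def loop_lag(Td, phases):
    """Lag from a sensor sample at t = 0 until the last node of the
    chain acts, when every downstream node is time driven with the
    given activation phases.  Event-driven nodes would react at t = 0."""
    t = 0.0
    for phase in phases:
        t = next_tick(t, phase, Td)
    return t
```

With Td = 0.1 s, unfavourable phases such as (0.05, 0.02) for the controller and actuator already give a lag of 0.12 s, i.e. more than one sampling period, matching the observation above.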
3.1 MOOS

The Mission Oriented Operating Suite (MOOS) was developed by P.M. Newman at Oxford University (Newman [2007a]). It provides a framework for interprocess communication. The MOOS philosophy is based around a star-like topology, see Figure 4(a). Applications in the framework all communicate through a mediator database; MOOS clients never communicate with each other directly, in order to prevent dependencies and interference from rogue clients. A centralized network can be scaled easily since only a connection to the central database is required. One common problem of centralized topologies is that the central hub represents a bottleneck. However, since control and navigation messages are usually small in size, the possibility of congestion is negligible.

Researchers at the Mobile Robotics Research Group at Oxford University, the Computer Science and Artificial Intelligence Lab and the Dept. of Mechanical Engineering at MIT, and the Naval Undersea Warfare Center in Newport, Rhode Island (NUWC-NPT) have extended the MOOS framework into the MOOS-IvP autonomy architecture (Benjamin et al. [2007]), specialized for underwater and surface vessels. The IvP (Interval Programming) extension serves as the high-level behavioural control that outputs desired state references.

Figure 4. MOOS and ROS topologies: (a) star-like topology, (b) peer-to-peer topology (Processes A-D)

MOOS is very useful for navigation and UUV guidance since development is mostly focused on underwater vehicle applications. However, we find that the lag time in MOOS is large since process activation is time driven. MOOS processes consist of two parts: (1) the communication layer and (2) the iteration layer (Newman [2007b]). Both of these layers operate in a time-driven manner, so effectively we have two nodes inside each MOOS process. The main iteration node performs the payload work with a defined sampling period. The communication node periodically contacts the MOOSDB server and exchanges messages. The operational frequencies of the two layers are independently adjustable.

Let us assume that both layers operate with a 10 Hz tick. Measurement of the round trip time in the control loop then shows a varying lag time, see Figure 5(a). On average, the loop time is around 0.6 s. Assuming that the iteration layer finishes instantly, the communication tick becomes the limiting factor; hence, the minimum loop time that we can expect is 0.5 s. Since node execution is not synchronized, the loop time does not remain settled on the minimum value but rather drifts between iterations.

Figure 5. Signal round trip time in MOOS: histograms of the loop time (a) with both layers at a 10 Hz tick and (b) with an increased communication layer frequency
In order to reduce the round trip time we can increase the communication layer frequency. This increases the strain on the MOOSDB and the computational cost of each MOOS application. However, the introduced time lag is reduced, as shown in Figure 5(b). The remaining round trip time is still influenced by the lack of node synchronization.
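The back-of-the-envelope bound discussed above can be written down directly. The hop count of five is our assumption, chosen so that a 10 Hz tick reproduces the 0.5 s minimum loop time reported for our setup.

```python
def min_loop_time(comm_freq_hz, n_transfers=5):
    """Lower bound on the MOOS control-loop round trip when every
    client-to-client transfer must wait for one full communication tick.

    n_transfers is an assumed number of ticks per loop; with five
    transfers, a 10 Hz communication layer yields the 0.5 s minimum
    loop time reported in the text.
    """
    return n_transfers / comm_freq_hz
```

Doubling the communication frequency to 20 Hz halves the bound to 0.25 s, at the cost of extra load on the MOOSDB and on each client.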
3.2 ROS

The Robot Operating System (ROS) started development at Stanford and is currently developed by Willow Garage. It is a framework providing structured interprocess communication. Although the underlying concept is similar to MOOS, the implementation is different. ROS clients, called nodes, rely on peer-to-peer communication, see Figure 4(b). Nodes publish and subscribe on topics of interest. In order for the nodes to locate each other, a master server acts as a nameservice. This adds flexibility, and nodes can be decoupled much like in MOOS.

Process activation in ROS can be both time driven and event driven. We can choose one node to be time driven in order to provide the loop tick. Often the sensors can be sampled at a constant rate, which can then serve as the source of the control loop tick. However, in applications where several sensors with different sampling frequencies exist, we can choose the filtering node which fuses these measurements to operate in a time-driven manner. Measurement of the round trip time in the ROS implemented control loop shows a constant 0.1 s lag time. This is the minimum achievable lag time: since three nodes are event driven, they react instantly when new information arrives. Assuming that the transport delay between the nodes is smaller than the sampling period, it has no effect on the loop time.

4. CONTROL SYSTEM DESIGN

The low-level control performance is tested by implementing a simulation loop in the distributed framework. Let us divide a simple control loop into four parts, as shown in Figure 1. Implementing this layout in a distributed framework yields separate clients. The Process client handles actuator-level control and safety procedures of the controlled process. Sensor configuration and acquisition is handled in the Sensor process. Sensor data is published in the framework to be used by other processes, e.g. logging, filtering and fault detection.
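The activation scheme of Section 3.2, asynchronous sensor topics cached and fused by a time-driven filter node, can be sketched with a toy in-process bus. All names here are illustrative; this is not the ROS API, only a stdlib stand-in for its publish/subscribe pattern.

```python
class Bus:
    """Toy in-process topic bus standing in for the ROS transport."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subs.get(topic, []):
            cb(msg)  # event-driven: subscribers run immediately

class FilterNode:
    """Caches the latest measurement per sensor topic; on each tick it
    fuses them (here: a plain average) and publishes the estimate.
    The tick makes this the one time-driven node in the loop."""
    def __init__(self, bus):
        self.bus, self.latest = bus, {}
        bus.subscribe("compass", lambda m: self.latest.__setitem__("compass", m))
        bus.subscribe("gyro", lambda m: self.latest.__setitem__("gyro", m))

    def tick(self):
        if self.latest:
            estimate = sum(self.latest.values()) / len(self.latest)
            self.bus.publish("state", estimate)
```

A controller subscribed to `state` then reacts event-driven on every tick, while sensor messages may arrive at arbitrary rates between ticks.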
Filtering and processing of the arriving sensor data for control purposes is performed in the Filter process. When sensor data does not require filtering or preprocessing, this component can be removed from the chain and the raw sensor data used instead. Finally, the Control process contains the desired low-level controller for the process.

Considering the distributed nature of the chain, we observe that changing a controller or a filter during system execution can be done without stopping the process or the sensor acquisition, and without intervention in the Process or Sensor applications. This increases the stability of each process, as opposed to monolithic systems where interventions in the application can impact the overall system performance.

For our process we select the first order yaw model (see Fossen [1994]), since it adequately approximates the yaw dynamics of small UUVs:

ψ/N = K / (s(1 + Ts))   (1)

This model is used as a linear approximation of the yaw dynamics in ships and underwater vehicles. Choosing an
I-PD controller for the process gives the following closed-loop dynamics:

ψ/ψr = 1 / [ (T/(K·Ki)) s³ + ((Kd + K⁻¹)/Ki) s² + (Kp/Ki) s + 1 ]   (2)

where Ki, Kp, Kd are the respective gains of the controller. Note that we neglected the Filter and Sensor dynamics for simplicity. Controller parameters can be calculated by selecting the desired model function; matching the closed-loop denominator in (2) to a desired model polynomial a3 s³ + a2 s² + a1 s + 1 gives:

Ki = T/(K·a3)   (3)
Kp = a1·T/(K·a3)   (4)
Kd = a2·T/(K·a3) − K⁻¹   (5)
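Equations (3)-(5) translate directly into code; the sketch below computes the gains, with the numeric values in the usage check being arbitrary examples rather than the parameters of any real vehicle.

```python
def ipd_gains(K, T, a1, a2, a3):
    """I-PD gains from equations (3)-(5): with these gains the
    closed-loop denominator of equation (2) matches the desired model
    polynomial a3*s^3 + a2*s^2 + a1*s + 1."""
    Ki = T / (K * a3)
    Kp = a1 * T / (K * a3)
    Kd = a2 * T / (K * a3) - 1.0 / K
    return Ki, Kp, Kd
```

A quick self-check: substituting the gains back into the denominator coefficients of (2) recovers a3, a2 and a1 exactly.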
4.1 Discrete system model

We perform ZOH discretization of the Process given in equation (1). The discretization yields the following transfer function:

ψ/N = [ (Ts/T − (1 − e^(−Ts/T))) z + (1 − e^(−Ts/T)) − (Ts/T) e^(−Ts/T) ] / [ (KT)⁻¹ (z² − (1 + e^(−Ts/T)) z + e^(−Ts/T)) ]   (6)

where Ts is the model sampling time. The Sensor and Filter steps are assumed constant and only introduce the signal propagation delay inherent to the communication of the distributed framework. The discrete version of the I-PD controller is implemented as described in Åström and Hägglund [1995].
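Equation (6) can be checked numerically. The sketch below computes the discrete coefficients; since ZOH discretization is exact at the sampling instants for piecewise-constant inputs, its step response must coincide with the analytic step response of (1), y(t) = K(t − T(1 − e^(−t/T))).

```python
import math

def zoh_yaw_model(K, T, Ts):
    """Coefficients of equation (6): ZOH discretization of
    psi/N = K / (s (1 + T s)) written as
    G(z) = (B1*z + B0) / (z^2 + a1*z + a0)."""
    e = math.exp(-Ts / T)
    B1 = K * T * (Ts / T - (1.0 - e))
    B0 = K * T * ((1.0 - e) - (Ts / T) * e)
    a1 = -(1.0 + e)
    a0 = e
    return (B1, B0), (a1, a0)
```

Iterating the corresponding difference equation y[k] = −a1·y[k−1] − a0·y[k−2] + B1·u[k−1] + B0·u[k−2] for a unit step reproduces the analytic response to machine precision.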
Figure 6. Discrete implementation

5. RESULTS

The reference control loop was implemented in a monolithic system and yields results identical to a Matlab simulation. Both the ROS and MOOS implementations contain the four defined nodes; however, in ROS we selected asynchronous message exchange. Figure 7 shows the step response of the ROS implemented control loop in comparison to the model function response. The control loop behaves similarly to the desired model response; the small difference from the model is due to discretization effects. The same control loop implemented in MOOS displayed unstable behaviour. This was expected due to the aforementioned delays introduced by the MOOS communication framework.
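To make the comparison reproducible in spirit, here is a compact sketch of the discrete loop with illustrative parameters of our choosing (K = 1, T = 1 s, Ts = 0.1 s, gains from matching the desired model (s+1)³); the actual experiments used the vehicle's identified model, not these numbers.

```python
import math

def simulate_ipd(K=1.0, T=1.0, Ts=0.1, Ki=1.0, Kp=3.0, Kd=2.0,
                 delay_steps=0, n=300):
    """Step response of the discrete I-PD yaw loop.  delay_steps models
    the extra transport delay of a distributed implementation as whole
    sampling periods inserted in the control signal path."""
    e = math.exp(-Ts / T)
    B1 = K * T * (Ts / T - (1.0 - e))        # ZOH plant, equation (6)
    B0 = K * T * ((1.0 - e) - (Ts / T) * e)
    y_prev = y = 0.0
    u_prev, e_int, ref = 0.0, 0.0, 1.0
    queue = [0.0] * delay_steps              # transport delay buffer
    out = []
    for _ in range(n):
        e_int += Ts * (ref - y)              # integral of the error
        u = Ki * e_int - Kp * y - Kd * (y - y_prev) / Ts   # I-PD law
        queue.append(u)
        u_now = queue.pop(0)                 # delayed control action
        y_next = (1.0 + e) * y - e * y_prev + B1 * u_now + B0 * u_prev
        y_prev, y, u_prev = y, y_next, u_now
        out.append(y)
    return out
```

With `delay_steps = 0` the response settles at the reference; increasing `delay_steps` inserts the kind of transport delay observed in the MOOS loop and progressively degrades, and can destabilize, the response.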
Figure 7. ROS simulation (step response compared to the desired model response)

6. CONCLUSION

This paper offers a basic overview of the ROS and MOOS inter-process communication paradigms. The low-level architecture for underwater vehicle control has been implemented on top of these distributed frameworks, and heading control has been used to demonstrate the control loop behaviour. The MOOS time-driven communication method has proven impractical for low-level control due to the transport delays between communication layers. Compensating for this delay could be achieved by using a slower loop time that encompasses the foreseen delay between processes; alternatively, the control could be extended with a time-delay compensation scheme. However, we find that the flexibility of the ROS communication better suits the requirements of distributed low-level control. The transport delay introduced by the asynchronous communication reduces to the network communication delay between ROS nodes.

Implementing asynchronous behaviour in the MOOS communication layer cannot be achieved without breaking the design rule that the MOOS database never contacts clients on its own. ROS alternatives to IvP behavioural vehicle control are unknown to the authors; MOOS is therefore currently more suitable for higher level control than ROS. This fact has inspired other groups to start bridging the ROS and MOOS frameworks to enable easy data exchange, see DeMarco et al. [2011]. Using this bridge we are able to implement low-level controllers in ROS and mission control in MOOS.

REFERENCES

K.J. Åström and T. Hägglund. PID Controllers: Setting the Standard for Automation. International Society for Measurement and Control, 1995. ISBN 9781556175169.

M.R. Benjamin, H. Schmidt, and J. Leonard. A Guide to the IvP Helm for Autonomous Vehicle Control, December 2007.

K. DeMarco, M.E. West, and T.R. Collins. An implementation of ROS on the Yellowfin autonomous underwater vehicle (AUV).
In OCEANS 2011, pages 1-7, September 2011.

V. Djapic and D. Nad. Command filtered backstepping design in MOOS-IvP Helm framework for trajectory tracking of USVs. In American Control Conference (ACC), 2010, pages 5997-6003, June 30-July 2, 2010.

Thor I. Fossen. Guidance and Control of Ocean Vehicles. John Wiley & Sons, 1994.

T. Galluzzo and D. Kent. The OpenJAUS approach to designing and implementing the new JAUS standards. In AUVSI's Unmanned Systems North America 2010 Online Proceedings, 2010.

Brian P. Gerkey, Richard T. Vaughan, and Andrew Howard. The Player/Stage project: tools for multi-robot and distributed sensor systems. In Proceedings of the 11th International Conference on Advanced Robotics, pages 317-323, 2003.

T. Lugaric, D. Nad, and Z. Vukic. A modular approach to system integration in underwater robotics. In Control Automation (MED), 2011 19th Mediterranean Conference on, pages 412-417, June 2011. doi: 10.1109/MED.2011.5983028.

A. Makarenko, A. Brooks, and T. Kaupp. Orca: components for robotics. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'06), 2006.

Paul Newman. Introduction to Programming with MOOS, December 2007a.

Paul Newman. Under the Hood of the MOOS Communications API, December 2007b.

Morgan Quigley, Brian Gerkey, Ken Conley, Josh Faust, Tully Foote, Jeremy Leibs, Eric Berger, Rob Wheeler, and Andrew Ng. ROS: an open-source Robot Operating System. In International Conference on Robotics and Automation, 2009.

K.P. Valavanis, D. Gracanin, M. Matijasevic, R. Kolluru, and G.A. Demetriou. Control architectures for autonomous underwater vehicles. Control Systems, IEEE, 17(6):48-64, December 1997. ISSN 1066-033X. doi: 10.1109/37.642974.