Copyright © IFAC Control in Transportation Systems, Tokyo, Japan, 2003
MULTI-AGENT BASED HIL SIMULATION WITH VIRTUAL SENSORS FOR INTELLIGENT VEHICLE SYSTEMS
Z. Papp, M. Dorrepaal, A.H.C. Thean, K. Labibes*, M.P.M. Krutzen
TNO Institute of Applied Physics, Delft, The Netherlands. Email: {papp,dorrepaal,thean,krutzen}@tpd.tno.nl
*TNO Automotive, Delft, The Netherlands. Email:
[email protected]
Abstract: The advance in Intelligent Vehicle Systems (IVS) puts emphasis on vehicle-vehicle and vehicle-environment interactions. Sensing and sensory data interpretation play an ever-increasing role because sensing becomes the primary way of collecting information about the environment. In the IVS application domain the environment is unstructured, which results in demanding data interpretation algorithms. Consequently the resulting sensor data interpretation problem demands a sophisticated evaluation/test environment, which provides full control of the circumstances, reproducibility and a flexible mix of real and virtual components. The paper presents the extension of a multi-agent based distributed HIL simulation environment (MARS) with high-fidelity sensor simulation functionality. To illustrate the methodology, the simulation of laser range finder based object detection for pre-crash control is presented. Copyright © 2003 IFAC

Keywords: vehicle simulators, vehicles, sensor systems, real-time systems
1. INTRODUCTION

The advance in Intelligent Vehicle Systems (IVS) puts emphasis on vehicle-vehicle and vehicle-environment interactions. Advanced Driver Assistance Systems (ADAS), pre-crash sensing and control, and collision avoidance systems are particular examples of these new challenges. Sensing and sensory data interpretation play an ever-increasing role because sensing becomes the primary way of collecting information about the environment. In the IVS application domain the environment is unstructured, which results in demanding data interpretation algorithms (e.g. stereo vision based object identification). Other, equally important application domain requirements are (1) sensor fusion, (2) real-time response guarantees, (3) robustness and (4) safety criticality. Consequently the resulting sensor data interpretation problem demands a sophisticated evaluation/test environment, which provides full control of the circumstances, reproducibility and a flexible mix of real and virtual components. Traditional hardware-in-the-loop (HIL) simulation environments fall short when it comes to high-fidelity sensor simulation in complex environments (Fayad, et al., 1992; Stolpe, et al., 1998).
In parallel, advances in embedded and distributed systems technology have made multi-agent based problem decomposition schemes an effective way to cope with complexity and scalability (Coutts, et al., 1997; Lekkas, et al., 1995) in autonomous guided vehicle and transportation system applications. The approach to IVS and sensor modelling and simulation presented here also follows the multi-agent paradigm (Bic, et al., 1996). The paper is organised as follows. In Section 2 a more detailed description of the formalization of the problem domain in a multi-agent framework is given.
In Section 3 an extension of a multi-agent based distributed HIL simulation environment with high-fidelity sensor simulation functionality is presented. Section 4 summarizes the basic features of the MARS runtime environment. In Section 5 a case study illustrating how the simulation environment is used to simulate laser range finder based object detection for a pre-crash control system (PCCS) is presented. Though pre-crash sensing is merely used as an illustration here, it clearly shows that our high-fidelity virtual sensor approach can cover important aspects of the problem domain. Section 6 offers conclusions and indicates further research directions.
2. VIRTUAL SENSORS IN HIL SIMULATIONS

The targeted application domains usually involve a collection of interacting, highly autonomous, dynamically complex systems. Interactions are carried out via sensor, actuator and communication subsystems, and the interaction patterns change in time. Extensive formal analysis of these systems - except in special cases - is not feasible; consequently the availability of proper simulation and test tools is of primary importance. Various distributed virtual environment (DVE) systems are capable of modelling and simulating dynamic entity interactions in a virtual world. For performance reasons, DVE systems are typically implemented on distributed computing platforms consisting of networked servers responsible for simulating the behaviour of multiple entities. Partitioning of the workload (i.e. assigning entities to hosts for simulation) is a crucial problem in DVE systems with a direct impact on system level performance (Bic, et al., 1996; Coutts, et al., 1997; Lekkas, et al., 1995; Papp, et al., 2003). DVE systems put the emphasis on load balancing and communication overhead and typically run under soft real-time requirements. The MARS (Multi-Agent Real-time Simulator) simulation framework was designed to provide guaranteed real-time performance and is implemented on a distributed computing platform to assure scalability. The original MARS concept and main implementation features are described in Papp, et al. (2003). Here only a brief summary is given in order to introduce the multi-world based sensor modelling architecture.

Conceptually the control and/or simulation problem has to be decomposed using a multi-agent framework (for transportation/vehicle systems this is a natural, straightforward decomposition; in other cases it may be a purely conceptual structure). The model consists of a collection of autonomous entities (Ei), which interact with the surrounding World (Fig. 1). The World is a set of items and serves as a formal representation of the environment relevant to the entities. It can be a conceptual model of the problem solving process, a representation of real world objects, or both. Items are characterised by their attributes (e.g. position, colour, etc.). Items are passive components: they cannot change their attributes themselves - but entities can operate on them. Entities may have

• sensors (S): to collect information about the current state of the World;
• actuators (A): to modify items' attributes and/or create/destroy items in the World.

Entities can also be (and typically are) represented in the World via bound items. A bound item has properties which are determined by the states of the associated entity. For example, the dynamics of a vehicle, including both the vehicle model and its controller (driver), are defined by an entity. The bound item defines the entity's manifestation in the World (e.g. shape, colour). The important consequence of this decomposition is that the behavioural model of the entities (the so-called internal dynamics) is completely self-contained: the "only" interface between an entity and its environment is via its own sensors and actuators. To keep the framework generic, sensors and actuators are handled in an abstract way. Sensors and actuators have no dynamics and no data processing features: they represent particular relationships between the entity they are assigned to and the World (i.e. abstract sensors/actuators can be interpreted as queries/action queries on the "World database"). This approach supports a clean separation between the world representation and sensor models: sensor models should be incorporated into the entity's dynamical model.

Fig. 1: The MARS modelling concept
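To make the abstraction concrete, the following minimal Python sketch shows how entities, items and abstract sensors/actuators could be expressed. The class and method names are our own illustration and are not part of the MARS API.

# Minimal sketch of the MARS modelling concept (illustrative only; names are assumptions).

class Item:
    """Passive World element: a bag of attributes that only entities may change."""
    def __init__(self, **attributes):
        self.attributes = dict(attributes)

class World:
    """The 'World database': a set of items that sensors query and actuators modify."""
    def __init__(self):
        self.items = {}

    def query(self, predicate):
        # Abstract sensor = query on the World database.
        return {name: it for name, it in self.items.items() if predicate(it)}

    def apply(self, name, **changes):
        # Abstract actuator = action query on the World database.
        self.items.setdefault(name, Item()).attributes.update(changes)

class Entity:
    """Autonomous entity with self-contained internal dynamics; its only interface
    to the environment is via its (abstract) sensors and actuators."""
    def __init__(self, name, world, x0=0.0):
        self.name, self.world = name, world
        self.state = {"x": x0, "v": 10.0}      # internal dynamics state
        world.apply(name, x=x0)                # bound item: manifestation in the World

    def step(self, dt):
        # Sensor: query items closely ahead of the entity's bound item.
        ahead = self.world.query(
            lambda it: 0.0 < it.attributes.get("x", -1e9) - self.state["x"] < 20.0)
        self.state["v"] = 5.0 if ahead else 10.0          # trivial internal controller
        self.state["x"] += self.state["v"] * dt
        self.world.apply(self.name, x=self.state["x"])    # actuator: update bound item

world = World()
cars = [Entity(f"car{i}", world, x0=15.0 * i) for i in range(3)]
for _ in range(10):
    for car in cars:
        car.step(dt=0.1)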
3. "MULTI-WORLD" EXTENSION OF THE MARS FRAMEWORK

The MARS simulation framework was and is used in various intelligent transportation systems related developments and tests incorporating radio links, road-side traffic sensors, GPS, etc. On the other hand, the abstract sensor interface introduced above does not sufficiently support those sensor modelling problems in which the mapping between the object world and the sensory data is complex (i.e. does not match the "attribute value retrieval" style but involves more indirect "imaging"). The targeted new application areas (e.g. vision and radar based obstacle sensing and situation assessment) clearly demand architectural support for extended sensor modelling.

A way to go beyond this limitation is to extend the framework with multiple representations of the world. This approach mimics the way real sensors work. The concept can be summarised as follows. Different real sensor types can "see" the same World differently. Fig. 2 shows a hypothetical situation. Let us assume the simulation consists of three entities (Ei, i=1..3) and each has a bound item in the World (Oi, i=1..3). In particular, O1 is the representation of entity E1 in the World. E1 uses two types of sensors (S1 and S2) to collect information about the World. Each sensor type has its own representation of the World (W1 and W2). Wi is called a (sensor type specific) mapping of W. For example, S1 can be a vision based sensor and S2 a particular type of radar sensor. Consequently S1 and S2 are sensitive in different bands of the electromagnetic spectrum and have their own characteristics (e.g. different propagation and reflection properties, sensitivity, resolution, range, etc.). Consequently W2 may contain objects which are not present in W1 (invisible in W1) and vice versa (e.g. in Fig. 2, E2 is invisible to S1 and E3 is invisible to S2).

Fig. 2: Sensor "multi-world"

In the extended MARS framework this behaviour is realised by using multiple representations of the World as required by the sensor types used. Adding a new type of "imaging sensor" requires:

• designing and implementing the mapping, which transforms the "ultimate world" (W) to the "sensor world" (W'), and
• designing and implementing the sensor imaging, which generates the sensory readings based on the "sensor world".

Obviously these tasks should be covered by the sensor modeller. The entity (e.g. intelligent vehicle controller) developer's responsibility is the design and implementation of the sensory data interpretation (besides developing the control, decision making, etc. algorithms). The ultimate goal of the sensory data interpretation is to reconstruct the World (i.e. to identify and describe the items in the World) using only a partial representation of the World (e.g. W') as input (for example, to determine the states of surrounding vehicles based on a set of image sequences). In the case of sensor fusion multiple sensor worlds W' are accessible. The embedded control and decision algorithms use these interpreted sensory data to determine the proper actions. Using accurate mapping and imaging algorithms (e.g. based on first-principle sensor models) is the "backbone" of the high-fidelity simulation and extends the testability of the sensory data processing algorithms. Furthermore the scheme described "separates concerns": the sensor modeller can concentrate on the accurate mapping/imaging algorithms, while the control/data processing algorithm designer's responsibility is the powerful and robust sensory data interpretation. A minimal sketch of this mapping/imaging pipeline is given below.
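The division of labour between the sensor modeller (mapping and imaging) and the entity developer (interpretation) could look roughly as follows. The function names and the registry structure are illustrative assumptions, not the MARS API.

# Illustrative sketch of the multi-world scheme (names are assumptions).
from typing import Any, Callable, Dict

World = Dict[str, Dict[str, Any]]        # the "ultimate world" W: item name -> attributes
SensorWorld = Dict[str, Dict[str, Any]]  # a sensor-type-specific representation W'

class SensorType:
    """Bundles the two responsibilities of the sensor modeller."""
    def __init__(self,
                 mapping: Callable[[World], SensorWorld],
                 imaging: Callable[[SensorWorld], Any]):
        self.mapping = mapping     # W -> W'
        self.imaging = imaging     # W' -> raw sensory readings

    def read(self, world: World):
        return self.imaging(self.mapping(world))

# Sensor modeller: a radar-like sensor only "sees" metallic items and returns ranges.
def radar_mapping(world: World) -> SensorWorld:
    return {n: a for n, a in world.items() if a.get("metallic", False)}

def radar_imaging(sensor_world: SensorWorld):
    return sorted(a["range"] for a in sensor_world.values())

radar = SensorType(radar_mapping, radar_imaging)

# Entity developer: interpret the readings to reconstruct (part of) the World.
def interpret(readings):
    return {"nearest_obstacle_range": readings[0] if readings else None}

world = {"car2":  {"metallic": True,  "range": 18.0},
         "fence": {"metallic": False, "range": 6.0}}
print(interpret(radar.read(world)))   # {'nearest_obstacle_range': 18.0}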
4. THE MARS RUNTIME ENVIRONMENT

For MARS a dedicated runtime environment was developed which, by implementing a mobile entity execution scheme, gives scalable real-time performance (under certain circumstances; for details see Papp, et al., 2003). Running a simulation described with the model introduced above means executing M entities on N computing nodes (M > N) connected via a communication network. Each node has its own runtime environment (RTE). Entities are connected to the RTE via the sensor/actuator API (Fig. 3). The attributes of the World objects form a distributed common memory, which serves as an information exchange place for the entities. RTEs implement this distributed common memory scheme on a message-passing framework, thus hiding the memory allocation details from the experiment developers/users (Papp, et al., 2003). The "engine" of the entity simulation is an integrator (numerical solver). Each simulation node incorporates an integrator. This local integrator invokes the entity's code (i.e. the algorithm of the entity's behaviour, the state update rule) in a timely manner (synchronized with other nodes' progress in
time). An entity merely sees the abstract sensor/actuator interface as the only way to communicate with its environment. This runtime architecture clearly separates the entity code from the actual configuration details (i.e. number of nodes involved, allocation of entities, etc.).
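The role of the per-node integrator can be illustrated with the following simplified, single-node sketch (fixed-step explicit Euler; the actual MARS solver, inter-node synchronization and distributed memory are not shown, and all names are assumptions).

# Simplified sketch of a node-local RTE loop (illustrative; not the MARS implementation).
import numpy as np

def make_vehicle_entity(x0, v0):
    """Entity = state vector + derivative rule; it only talks to the common memory."""
    state = np.array([x0, v0], dtype=float)

    def derivative(state, common_memory):
        x, v = state
        gap = common_memory.get("lead_vehicle_x", np.inf) - x
        accel = -2.0 if 0.0 < gap < 15.0 else 0.5        # trivial behaviour rule
        return np.array([v, accel])

    return state, derivative

def rte_loop(entities, t_end, dt):
    """Local integrator invoking each entity's state-update rule in a timely manner."""
    common_memory = {}                       # stands in for the distributed common memory
    states = [s for s, _ in entities]
    derivs = [d for _, d in entities]
    t = 0.0
    while t < t_end:
        common_memory["lead_vehicle_x"] = states[0][0]   # publish bound-item attributes
        for i in range(len(states)):
            states[i] = states[i] + dt * derivs[i](states[i], common_memory)  # Euler step
        t += dt
    return states

entities = [make_vehicle_entity(50.0, 8.0), make_vehicle_entity(0.0, 12.0)]
print(rte_loop(entities, t_end=5.0, dt=0.01))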
Fig. 3: MARS runtime infrastructure

5. A CASE STUDY: PRE-CRASH CONTROL

Next to improving active safety (e.g. collision avoidance, lane departure warning, etc.), intelligent vehicle systems also have considerable potential for improving passive safety (i.e. mitigating the consequences when a crash cannot be avoided anymore). The systems developed for this purpose are called Pre-Crash Control Systems (PCCS). PCCSs are foreseen as the link between active and passive safety, allowing an integrated approach to vehicle safety issues.

The PCCS in this case study uses a laser range finder (LRF) to collect information about the vehicle's environment and activates a safety belt pre-tensioner whenever necessary to bring passengers into an optimal position to minimize the effects of the impact (Fig. 4). In this section, the model of a commercially available SICK LMS 220 laser range finder and its integration into the MARS simulation framework are described to illustrate the proposed approach.

Fig. 4: A pre-crash control system

5.1 The laser range finder model

An LMS 220 sensor works by using a pulsed laser beam to make range measurements: the distance to an object is calculated by measuring the time delay between emitting a laser pulse and detecting its reflection. By performing repeated sweeps through an angle of 180 degrees in 0.5 degree steps, an LMS 220 builds a time series of data vectors containing range measurements for each of the 361 angle steps. The first step in developing the sensor model was to map the attributes of the 3D model elements (the 'ultimate world') to a reduced set of sensor-specific attributes (the 'sensor world'). The visual representation of the 3D world is defined as a high-fidelity VRML model. Two types of mapping were implemented. The main mapping involved transforming model element shapes and positions to line-of-sight range values (since the primary function of the sensor is to measure distance information). Another mapping involved transforming model element textures (colours) to reflectance values (since the sensor has limited sensitivity and may not be able to measure the range of objects that do not reflect laser light strongly).

The mappings outlined above may be performed in various ways, some of which are more efficient than others. A strong point of the solution described here is that it uses dedicated hardware (an OpenGL accelerated video controller) to make the distance calculations. This results in reduced CPU load and makes real-time processing possible. Sophisticated video controllers calculate depth values in order to ensure that objects are drawn in the correct order and rendered realistically; these so-called z-buffer values encode the Cartesian z-coordinate of the nearest object found at each screen pixel. To exploit the z-buffer feature of the video controller we introduced a virtual camera that renders an RGB image of the World from the standpoint of the sensor. By reading the z-buffer values of the resulting image we are able to calculate line-of-sight distances, given the properties of the virtual camera, specifically the distances to the near and far clipping planes between which the image is projected. The result is an image containing line-of-sight range values. Note that the virtual camera is 'attached' to the Virtual Vehicle Under Test (VVUT); consequently it changes its position and orientation as the VVUT moves in the World.

To transform model element attributes to reflectance values the following approximation is used. The RGB image is transformed to an HSV (Hue, Saturation and Value) image. The V values are taken as reflectance values. Since the modeller is free to choose model element textures whose V values approximate real reflectance values, this approach has considerable flexibility. The result is an image containing modelled reflectance values.
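The two mappings described above could be sketched as follows. The depth linearization assumes the usual [0, 1] z-buffer range of a standard perspective projection; array shapes, clipping-plane values and function names are our own assumptions.

# Sketch of the two 'sensor world' mappings (illustrative only).
import numpy as np

def zbuffer_to_range(zbuf, near, far):
    """Linearize z-buffer values (assumed in [0, 1], standard perspective projection)
    into distances along the viewing axis using the near/far clipping planes.
    Off-axis pixels would additionally need a per-pixel view-direction correction,
    omitted here for brevity."""
    zbuf = np.asarray(zbuf, dtype=float)
    return (near * far) / (far - zbuf * (far - near))

def rgb_to_reflectance(rgb):
    """Approximate reflectance by the V channel of the HSV colour space:
    V = max(R, G, B) per pixel, with RGB values in [0, 1]."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb.max(axis=-1)

# Tiny example: a 2x2 depth image and the matching RGB texture image.
depth_image = np.array([[0.10, 0.95],
                        [0.50, 1.00]])
rgb_image = np.array([[[0.9, 0.9, 0.9], [0.1, 0.1, 0.1]],
                      [[0.8, 0.2, 0.2], [0.0, 0.0, 0.0]]])

range_image = zbuffer_to_range(depth_image, near=1.0, far=80.0)   # metres
reflectance_image = rgb_to_reflectance(rgb_image)
print(range_image)
print(reflectance_image)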
The next step is to generate realistic sensory readings from the 'sensor world'. At each angle step during the sweep the laser beam is modelled by extracting a subimage from the range and reflectance images. The position at which the subimages are extracted is determined by the current sweep angle, and the shape of the subimages is determined by the opening angle of the divergent beam and the current sweep angle; at the center of the field the extracted subimage is disc-shaped, and it becomes more elliptical at the field edges due to projection effects. For each pixel in the extracted subimage a sensor response model is used to assess whether a range measurement is possible (see below). The range value assigned to a given sweep angle is the nearest range value for which a given fraction of the pixels in the beam measure ranges within the range resolution of the sensor, i.e. returning laser pulses will not be detected if they are too "smeared out" in time. If this criterion is not fulfilled, no range value is measured.
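The per-angle procedure could be sketched roughly as follows. The beam footprint is reduced to a circular mask and the per-pixel detectability test is taken as given; all names and parameter values are assumptions.

# Sketch of assigning one range reading per sweep angle (illustrative only).
import numpy as np

def beam_range(beam_ranges, beam_detectable, range_resolution=0.05, fraction=0.5):
    """Return the nearest range for which at least `fraction` of the detectable
    beam pixels lie within one range resolution; None if the returning pulse is
    too 'smeared out' in time (criterion not fulfilled)."""
    r = beam_ranges[beam_detectable]
    if r.size == 0:
        return None
    for candidate in np.sort(r):
        hits = np.count_nonzero(np.abs(r - candidate) <= range_resolution)
        if hits >= fraction * r.size:
            return candidate
    return None

def sweep(range_image, detect_image, centers, radius=3):
    """One 180-degree sweep: for each sweep angle extract a (here circular)
    beam footprint around its pre-computed pixel center and assign a range."""
    h, w = range_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    readings = []
    for cx, cy in centers:                 # one footprint center per 0.5-degree step
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        readings.append(beam_range(range_image[mask], detect_image[mask]))
    return readings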
In Fig. 5 the sensor model output for a typical simulation run can be seen. The polar plot, overlaid on a view of the virtual world seen by the virtual camera mounted at the sensor position on the VVUT, shows the range (meters) versus the sweep angle (degrees). As the sweep angle increases from 60 to 120 degrees (less than the full range of the sensor, for display purposes), the two nearest cars, the road surface and the fence poles can be identified on the polar plot. Note that the fence poles are detectable at large distances because they have high modelled reflectance values.
The sensor response model mentioned above takes into account the operational range of the sensor with different targets under different visibility conditions. Determining the operational range of the sensor can be posed as a detection problem. The laser unit emits a divergent beam of intensity I_0 and detects reflected signals. To be detectable, the measured flux (S_measured) must be above the detection threshold of the sensor (S_lim). For each pixel, it is assumed that the laser beam does not diverge on its way to the target object and that at the target object it is scattered homogeneously in all backward directions, i.e. the reflected flux is distributed equally over a hemisphere (Lambertian surface). The degree of atmospheric attenuation is taken into account using the Beer-Lambert law and related to atmospheric visibility using the Koschmieder equation (using the approximation that infrared light behaves as visible light). By taking into account the reflectivity of the target object and the geometrical spreading of the reflected beam we derive the following relation between the measured flux and the laser intensity, target range (R), target reflectivity (ρ) and atmospheric visibility (z_vis):
S_measured = I_0 · exp(-7.824·R / z_vis) · ρ / (2π·R²)
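A sketch of how this relation and the S_lim threshold could be evaluated per pixel is given below; this is our reading of the formula, not the authors' code, and all parameter values other than S_lim = 0.0007 and I_0 = 1 are arbitrary.

# Sketch of the per-pixel detection test (illustrative implementation of the relation above).
import numpy as np

def measured_flux(range_m, reflectivity, visibility_m, i0=1.0):
    """S_measured = I_0 * exp(-7.824 R / z_vis) * rho / (2 pi R^2).
    The factor 7.824 = 2 * 3.912 combines the Koschmieder relation with the
    two-way Beer-Lambert attenuation; rho / (2 pi R^2) is the Lambertian spreading."""
    r = np.asarray(range_m, dtype=float)
    return i0 * np.exp(-7.824 * r / visibility_m) * reflectivity / (2.0 * np.pi * r ** 2)

def detectable(range_m, reflectivity, visibility_m, s_lim=0.0007, i0=1.0):
    """A pixel yields a range measurement only if the measured flux exceeds S_lim."""
    return measured_flux(range_m, reflectivity, visibility_m, i0) > s_lim

# Example: the operational range shrinks for a dark target in poor visibility.
ranges = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
print(detectable(ranges, reflectivity=0.8, visibility_m=10_000.0))
print(detectable(ranges, reflectivity=0.1, visibility_m=200.0))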
The sensor response model described above has been validated using data contained in the technical specifications provided by the sensor manufacturer (SICK, 2003). Good agreement between the model and the technical specification data is found for range values above 2 meters using a value of S_lim = 0.0007 (for I_0 = 1). The fact that only one free parameter (S_lim) is required to fit the data for four different values of visibility indicates that the physical basis of the model is reasonable at range values above 2 meters.

Fig. 5: Virtual sensor output

5.2 Embedding the model into the HIL simulation framework

The LMS 220 sensor model has been used as a sensor plug-in for MARS to test closed-loop control algorithms and pre-crash sensing strategies. The main components of the closed-loop PCCS simulation are shown in Fig. 6.

Fig. 6: The high fidelity LRF model in MARS
The setup consisted of three computing nodes. Due to the moderate complexity of the scenario the simulation of all entities (cars in this particular case) was carried out on a single-host MARS simulator. The simulator fed the world update data into the 3D
visualisation server (operator console) and the LRF model host via a UDP multicast message stream. The LRF model host ran both the LRF visual rendering and the LRF response model functionalities (as described in the previous section). The VVUT model (including the driver model and the PCCS) used a remote procedure call (RPC) type interface to access the raw (virtual) sensory data produced by the sensor model.
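The world-update feed described here is essentially a one-to-many state broadcast; a minimal sketch of such a multicast publisher is shown below. The multicast address, port and message layout are our own assumptions, not those of the actual VEHIL/MARS setup.

# Minimal sketch of a world-update multicast publisher (addresses/format are assumptions).
import json
import socket
import time

MCAST_GROUP, MCAST_PORT = "239.0.0.42", 5007   # arbitrary example multicast address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local network

def publish_world_update(t, items):
    """Send one world-state snapshot (item name -> attributes) to all listeners."""
    message = json.dumps({"t": t, "items": items}).encode("utf-8")
    sock.sendto(message, (MCAST_GROUP, MCAST_PORT))

# Example: stream the positions of two simulated cars.
for step in range(3):
    publish_world_update(step * 0.01, {"car1": {"x": 10.0 + step}, "car2": {"x": 42.0}})
    time.sleep(0.01)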
These simulations allow the effect of environmental factors, such as changes in visibility, to be assessed in a safe and controlled manner.

6. CONCLUSION, FURTHER ACTIVITIES

The 'multi-world' extension to the MARS simulation framework with high-fidelity sensor models exhibits the following distinguishing features:

• It supports a distributed implementation of control/measurement systems and simulators; this means that a high degree of execution parallelism can be achieved.
• The architecture is scalable, enabling a wide (and changing) range of user requirements to be implemented.
• It has a model-based system architecture, where the application-independent kernel and the application-specific parts are clearly separated (Sztipanovits, et al., 1997).
• High-fidelity sensor models can be added as plug-ins.
• It provides a HIL test environment in which real and virtual components (including sensors) can be mixed, facilitating a smooth transition between fully simulated and prototype implementations.
• It has a direct link to the real-time fast prototyping environment.
• It is open to integration with other simulators and external programs.

Our application domain covers intelligent vehicle technology, automated transportation systems and mobile robotics. The proposed solution and architecture are generic and can readily be applied to other domains. The simulation environment is used successfully in testing complex inter-vehicle and vehicle-roadside interactions in a fully controlled and safe way. Without the extended simulation functionalities, tests with similar coverage and fidelity could only be achieved at much higher cost (e.g. building a dedicated full-scale test infrastructure). Further research will focus on modelling tools to maintain coherency among the different world mappings (Karsai, 1995).

ACKNOWLEDGEMENT

The MARS simulation framework serves as an implementation platform for the simulator component of TNO Automotive's VEHIL (Vehicle Hardware-In-the-Loop) test facility (VEHIL, 2003). D.J. Verburg, A. van der Knaap, R.C. van de Pijpekamp, J.P. van Dijke and J. Ploeg of TNO Automotive play crucial roles in the realisation of the VEHIL facility.

REFERENCES

Bic, L.F. et al (1996). Distributed Computing Using Autonomous Objects. IEEE Computer, August 1996, pp. 55-61.
Coutts, I.A., J.M. Edward (1997). Model-Driven Distributed Systems. IEEE Concurrency, July-September 1997, pp. 55-62.
Fayad, M.E., et al (1992). Hardware-In-the-Loop (HIL) simulation: an application of Colbert's object-oriented software development method. Proceedings of the Conference on TRI-Ada '92 (ACM-SIGADA), 1992, Orlando, FL, US, pp. 176-188.
Karsai, G. (1995). A Configurable Visual Programming Environment: A Tool for Domain-Specific Programming. IEEE Computer, pp. 36-44, 1995.
Lekkas, G.P., N.M. Avouris, G.K. Papakonstantinou (1995). Development of Distributed Problem Solving Systems for Dynamic Environments. IEEE Trans. on Systems, Man, and Cybernetics, vol. 25, pp. 400-414, March 1995.
SICK (2003). LMS 200/LMS 211/LMS 220/LMS 221/LMS 291 Laser Measurement Systems. Technical description, http://www.sick.com.hk/gbOS/TBLMS2e.pdf
Papp, Z. et al (2003). Distributed Hardware-In-the-Loop Simulator for Autonomous Continuous Dynamical Systems with Spatially Constrained Interactions. To be published in the Proc. of the IEEE/ACM WPDRTS 2003, Nice, France, June 2003.
Papp, Z. (2001). Runtime Support for Reconfigurable Real-Time Embedded Systems. Proc. of the IEEE-IMTC 2001, pp. 2111-2116.
Stolpe, R., O. Oberschelp (1998). Distributed HIL Simulation for the Design of Decentralized Control Structures. http://www.uni-paderborn.de/sfb376/projects/cl/Publications/PDF/Dipes98.pdf
Sztipanovits, J., G. Karsai (1997). Model-Integrated Computing. IEEE Computer, April 1997, pp. 110-112.
TNO Automotive VEHIL Facility (2003), http://www.automotive.tno.nl