Engineering Applications of Artificial Intelligence 25 (2012) 1488–1504
A computational model of the car driver interfaced with a simulation platform for future Virtual Human Centred Design applications: COSMO-SIVIC

Thierry Bellet a,*, Pierre Mayenobe a, Jean-Charles Bornard a,c, Dominique Gruyer b, Bernard Claverie c

a Université de Lyon, IFSTTAR, Laboratoire d'Ergonomie et de Sciences Cognitives pour les Transports (LESCOT), 25 Avenue F. Mitterrand, 69675 Bron, France
b IFSTTAR, Laboratoire Interaction Véhicule-Infrastructure-Conducteur (LIVIC), 78000 Versailles-Satory, France
c ENSC, Ecole Nationale Supérieure de Cognitique, Université de Bordeaux 1, 33405 Talence, France
Article history: Received 29 April 2011; Received in revised form 3 January 2012; Accepted 15 May 2012; Available online 18 July 2012

Abstract
This paper presents the first step of a research programme implemented by IFSTTAR in order to develop an integrative simulation platform able to support a Human Centred Design (HCD) method for the virtual design of driving assistance systems. This virtual platform, named COSMO-SiVIC, implements a COgnitive Simulation MOdel of the DRIVEr (COSMODRIVE) on a Vehicle–Environment–Sensors platform (named SiVIC, for Simulateur Véhicule-Infrastructure-Capteur). With this simulation tool based on a computational driver model, the design costs of driving assistance systems are expected to decrease in the future, and end-users' needs can be better taken into account during the design process. This article mainly focuses on the description of the driver model developed and implemented on the SiVIC virtual platform, which is only the first step towards a future integrated Virtual HCD tool. The first section discusses the research context and objective, and the second presents the theoretical background in cognitive sciences supporting our driver modelling approach. The SiVIC tool is used in this research as a methodological and technical support, both for empirical data collection among human drivers and as a virtual road environment to be interfaced with the COSMODRIVE model. In the result section, the functional architecture of COSMO-SIVIC (based on three complementary modules of Perception, Decision and Action) is described, and an example of virtual simulation of human drivers' errors due to visual distraction while driving is presented. The perspectives concerning the future use of COSMO-SIVIC for virtual HCD are then discussed in the conclusion section.
Keywords: Driver model; Cognitive simulation; Virtual Human Centred Design; Situation awareness; Driver's perception; Human error analysis
1. Introduction: research context and objectives

This research takes place in the field of human driver modelling for ergonomics methods, applied to driving assistance design in the frame of the ISi-PADAS European project (Integrated human modelling and Simulation to support human error risk analysis of Partially Autonomous Driver Assistance Systems). The challenge was to develop a virtual simulation tool able to support a Human-Centred Design (HCD) approach by considering, from the earliest stages of technological design (through simulations based on a driver model), the effective needs and the potential errors of future end-users. This use is expected to reduce costs (by saving both time and money) and to increase the efficiency of the design process of driving assistance systems, which are more and more complex and thus expensive to develop (Cacciabue and
* Corresponding author. Tel.: +33 072 142 457; fax: +33 072 376 837.
E-mail addresses: [email protected] (T. Bellet), [email protected] (J.-C. Bornard).
http://dx.doi.org/10.1016/j.engappai.2012.05.010
Vollrath, 2011). Indeed, recent advances in the car industry have opened the door to automated embedded systems able to take over the vehicle in normal driving conditions (e.g. adaptive cruise control) as well as in critical situations (in order to avoid collisions with an obstacle or lane departures, for example). This technological "revolution" will radically modify the driving activity in the near future, according to the potentialities of vehicle control sharing between the human driver and automatisms. However, past automation efforts in other areas, like nuclear plants or aviation, have demonstrated potential safety risks due to automation (e.g. Bainbridge, 1987), which may be the source of specific human errors and may cause critical conflicts between humans and machines. Keeping the human operator "in the loop" of control is also crucial in case of automation malfunction, or when the driving situation goes beyond the validity limits of the assistance. Consequently, the decision whether and how to use automation in order to assist, support, or replace the human driver must take into account not only the technological capabilities themselves, but also the human needs, drivers' own abilities and characteristics, and their acceptance of such technological assistance.
A Human-Centred Design approach is particularly important in the car driving context, given the heterogeneity of drivers, the potential risk of the driving task, the variability of driving situations and, lastly, the responsibility issue in case of accident. However, integrating end-users' needs is not always easy, especially when one wishes to design innovative devices that are often costly to develop. Ergonomics evaluations with real drivers indeed require the development of mock-ups and operational prototypes, which are generally expensive and time-consuming. In order to better take human drivers' needs into account from the early stages of technological design, a new generation of virtual tools is necessary. Such virtual tools should jointly include models of (i) the Driver, (ii) the Vehicle, and (iii) the road Environment (i.e. a DVE platform), in order to virtually assess the effects of driving assistance through simulation; this is the global objective of the ISi-PADAS European project (Cacciabue and Vollrath, 2011). The research presented in this article aims to contribute to the development of a future simulation platform to design in-vehicle systems and to evaluate their interest and their potential impact on road safety. With this objective, as a first step, a cognitive simulation model of the Driver (named COSMODRIVE) implemented on a Vehicle–Environment platform (named SiVIC) is proposed, in order to provide a simulation platform (named COSMO-SiVIC) able to support the virtual design of vehicle automation technologies in the future. After having introduced the theoretical background in cognitive sciences supporting the COSMODRIVE model, we will briefly present the SiVIC tool used in this research (as a support for empirical data collection among human drivers and as a virtual road environment to be interfaced with COSMODRIVE).
Then, in the result section, the functional architecture of the COSMO-SIVIC tool will be described, and an example of virtual simulation of human drivers' errors due to visual distraction while driving will be presented, in order to illustrate the potential interest of such a COSMO-SIVIC approach for the future virtual design of driving assistance.
2. Theoretical background in driver modelling: the COSMODRIVE model

COSMODRIVE is a COgnitive Simulation MOdel of the DRIVEr developed at IFSTTAR-LESCOT in order to simulate the mental activities carried out by drivers when driving, from perceptive functions to behavioural performances (Bellet et al., 2007). The general objective of this research programme is to design, develop and implement a computational model of the car driver able to drive a virtual car in a virtual road environment, through a dynamic and iterative "Perception–Cognition–Action" regulation loop (Fig. 1). This model is more particularly in charge of simulating three functions:
- Perception of the road environment and of the other road users' behaviours.
- Cognition, integrating (i) the elaboration of mental representations of the road scene (corresponding to the driver's situation awareness) and (ii) decision-making processes (based on these mental models of the driving situation and on anticipations assessed from dynamic mental simulations).
- Action, corresponding to behavioural performances as decided at the cognitive level and then effectively implemented via actions on the vehicle commands, in order to dynamically progress in the road environment.
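Purely as an illustration of this regulation loop, the three functions can be chained as in the minimal sketch below (the class and function names, and the toy time-headway rule used for the decision, are our own assumptions, not part of COSMODRIVE):

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Toy snapshot of the road scene, as sampled by the driver model."""
    gap_to_lead_m: float   # distance to the lead vehicle (m)
    own_speed_ms: float    # ego-vehicle speed (m/s)

def perceive(environment: dict) -> Percept:
    """Perception: sample the (virtual) road environment."""
    return Percept(environment["gap_to_lead_m"], environment["own_speed_ms"])

def decide(p: Percept) -> str:
    """Cognition: here the 'mental representation' is reduced to a single
    time-headway estimate, from which a crude decision is derived."""
    time_headway = p.gap_to_lead_m / max(p.own_speed_ms, 0.1)
    if time_headway < 1.0:
        return "brake"
    elif time_headway > 3.0:
        return "accelerate"
    return "hold"

def act(environment: dict, decision: str) -> None:
    """Action: apply the decision to the vehicle, closing the loop."""
    if decision == "brake":
        environment["own_speed_ms"] -= 1.0
    elif decision == "accelerate":
        environment["own_speed_ms"] += 1.0

# One iteration of the Perception-Cognition-Action loop:
env = {"gap_to_lead_m": 10.0, "own_speed_ms": 20.0}
decision = decide(perceive(env))
act(env, decision)
```

Each simulation tick thus chains perceive, decide and act, updating the environment that the next perceptive cycle will sample.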
Fig. 1. Driving activity modelling in COSMODRIVE (the "Perception–Cognition–Action" regulation loop: Perception via the perceptive cycle; Cognition, covering situation awareness, mental representation, decision making and action planning; Action implementation via executive functions).

Moreover, the aim of this driver modelling approach is not only to simulate perceptive, cognitive and executive functions in an optimal way, but also to potentially generate human errors, in terms of misperception of an event, erroneous situation awareness, or inadequate behavioural performance, due for example to visual distraction. The key components of the COSMODRIVE theoretical model are the drivers' mental representations (Bellet et al., 2009) of the driving environment, corresponding to the driver's Situation Awareness, according to Endsley's (1995) definition of this concept: the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. Mental representations, as mental models (Johnson-Laird, 1983) of the driving situation, are dynamically elaborated in working memory (Baddeley, 1990) through a matching process between (i) information perceived in the external environment and (ii) pre-existing operative knowledge (Ochanine, 1977), modelled in COSMODRIVE as Driving Schemas (Bellet and Tattegrain-Veste, 1999). They are formulated by and for the action, and they provide interiorized models of the task (Leplat, 1985). At the tactical level (Michon, 1985), mental representations provide an ego-centred and goal-oriented understanding of the traffic situation. They take the form of three-dimensional (i.e. spatio-temporal) models of the road environment, liable to be mentally handled by the driver in order to support anticipation through cognitive simulations, thus providing expectations about future situational states. This cognitive process of anticipation, based on both implicit and explicit mental simulations (Bellet et al., 2009), is a core function of the human cognitive system in dynamic environments (Stanton et al., 2001). When driving, humans continually update this situation awareness as they dynamically progress along the road (Bellet, 2011). Its content depends on the aims drivers pursue, their short-term intentions (i.e. tactical goals, such as turning left at a crossroads) and their long-term objectives (i.e. strategic goals, such as reaching their final destination within a given time), the knowledge they possess, and the attentional resources they allocate to the driving task. Mental representations consequently play a key role in drivers' decision making (Van der Molen and Bötticher, 1988), risk awareness (Bellet and Banet, 2011), and the behaviours implemented in the current situation. Moreover, car driving is based on two different levels of activity control: an automatic and implicit regulation mode versus an attentional and explicit control mode (Bellet et al., 2009). This dichotomy is well established in the scientific literature, for example with the distinction put forward by Schneider and Shiffrin (1977) and then Norman and Shallice (1986) between controlled processes, which require cognitive resources and can only be performed sequentially, and automatic processes, which can be performed in parallel without any attentional effort. In the same way, Rasmussen (1986) distinguishes different levels of activity control according to whether the behaviours implemented rely on (i) highly integrated sensorial-motor reflexes (i.e.
Skill-based behaviours), (ii) well-mastered decision rules for managing familiar situations (i.e. Rule-based behaviours), or (iii) more abstract and generic knowledge that is activated in new situations for which the driver has no prior experience (i.e. Knowledge-based behaviours). Considering this theoretical background, the simplified version of the COSMODRIVE model implemented on a virtual platform during the ISi-PADAS project is made of three main functional modules (i.e. the Perception, Cognition, and Action modules) and is in charge of driving a virtual Vehicle in a virtual Environment through two synchronized "Perception–Cognition–Action" regulation loops (Fig. 2): an attentional control loop (primarily focussed on Rasmussen's Rule-based behaviours, which are supported in COSMODRIVE by the Driving Schemas theoretical approach), and an automatic control loop (corresponding to Rasmussen's Skill-based behaviours, which are supported in
COSMODRIVE by the Envelope Zones and the Pure-Pursuit Point methods).

2.1. Modelling the tactical cognition: the "Driving Schemas"

Based on both Piaget's (1936) concept of operative scheme and Minsky's (1975) frames theory, the COSMODRIVE driving schema is a computational formalism defined at IFSTTAR for modelling tactical mental models (i.e. mental representations or Situation Awareness) of the driving activity as "situated on the road". Driving schemas correspond to prototypical situations, actions and events, learnt by drivers from their practical experience. From a formal point of view (Fig. 3), a Driving Schema is made of (i) a functional view of the road Infrastructure, (ii) a Tactical Goal (e.g. to turn left), (iii) a sequence of States and (iv) a set of Zones. Two types of zone are distinguished: Driving Zones (Zi), corresponding to the driving path of the car for progressing through the crossroads, and Perceptive Exploration Zones (Exi), in which the driver seeks information (e.g. potential events liable to occur). Each driving zone is linked with Actions to be implemented (e.g. braking or accelerating, in order to reach a given state at the end of the zone), with the Conditions required for performing these actions, and with the perceptive exploration zones that make it possible to check these conditions (e.g. colour of traffic lights, presence of other road users). A State corresponds to a given position and velocity of the ego-car. The different sequences of driving zones that make up Driving Paths allow the driver to go from the initial state to the final state of the schema (i.e. achievement of the tactical goal), by potentially using different behavioural alternatives. Once activated in working memory and instantiated with the road environment, an active driving schema becomes the tactical mental representation of the driver, which is continually updated as s/he progresses through the current road environment.
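As an indicative sketch of this formalism (field names, class names and the example values are ours, not COSMODRIVE's actual implementation), a Driving Schema can be represented as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DrivingZone:
    """A portion of the driving path, with its associated actions,
    required conditions, and the perceptive zones used to check them."""
    name: str                      # e.g. "Z1"
    actions: List[str]             # e.g. ["decelerate"]
    conditions: List[str]          # conditions required to perform the actions
    exploration_zones: List[str]   # perceptive exploration zones (Exi)

@dataclass
class State:
    """A position and velocity of the ego-car (schema state)."""
    position_m: float
    speed_ms: float

@dataclass
class DrivingSchema:
    """Prototypical tactical knowledge: goal, states, and zoned driving path."""
    tactical_goal: str
    initial_state: State
    final_state: State
    driving_path: List[DrivingZone] = field(default_factory=list)

# A hypothetical "turn left" schema with two driving zones:
turn_left = DrivingSchema(
    tactical_goal="turn left",
    initial_state=State(position_m=0.0, speed_ms=13.9),
    final_state=State(position_m=60.0, speed_ms=8.0),
    driving_path=[
        DrivingZone("Z1", ["decelerate"], ["green light"], ["ex2: traffic lights?"]),
        DrivingZone("Z2", ["turn", "accelerate"], ["no oncoming vehicle"], ["ex4: vehicle?"]),
    ],
)
```

Instantiating such a schema with the current road environment would yield the tactical mental representation described above.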
A tactical representation corresponds to the driver's explicit awareness of the driving situation, providing a functional mental model of the road environment according to the tactical goal pursued by the driver in this particular context (e.g. turn left).

Fig. 3. Driving knowledge modelling in COSMODRIVE: the Driving Schemas (tactical goal, initial/local/final states, driving zones Zi with driving path alternatives, and perceptive exploration zones Exi, e.g. "colour of the traffic lights?", "pedestrian?", "high speed vehicle?").

2.2. Modelling the operational cognition: the "Envelope-Zones"

At the operational level, corresponding to the automatic control loop presented in Fig. 2, the regulation strategy of the COSMODRIVE model is based on the "Envelope Zones". From a theoretical point of view (Bellet et al., 2007), the concept of envelope zones recalls two classical theories in psychology: the notion of body schema proposed by Schilder (1950), and the theory of proxemics defined by Hall (1966), relating to the distances maintained in social interactions with other humans. Regarding the car-driving activity, envelope zones also refer to the notion of safety margins (Gibson and Crooks, 1938), reused by several authors (e.g. Ohta, 1993). At this level, the COSMODRIVE approach (Fig. 4) is more particularly based on Kontaratos's (1974) work, and distinguishes a safety zone, a threat zone, and a danger zone that no other road user should enter (if this occurs, the driver automatically activates an emergency reaction). Envelope zones are "relative" zones because their sizes depend on the car speed. They correspond to the portion of the driving schema's path to be occupied by the car in the near future.

Fig. 4. The "Three Envelope-Zones" model of COSMODRIVE.

In accordance with Schilder's (1950) body schema theory, this three-dimensional "virtual skin" surrounding the car is permanently active while driving, as an implicit awareness of our
expected allocated space for moving. It is used by drivers for "feeling" others' positions and motions. This highly integrated ability is therefore a key process supporting the implicit regulation loop, but it also plays a core role in the emergence of critical events in the driver's explicit awareness: when an external object is implicitly "felt" to be touching this virtual skin, the driver's awareness is alerted, allowing him/her to focus attention on this object. Moreover, as a "hidden dimension" (i.e. implicit awareness) of social cognition, as described by Hall (1966), these proxemics zones are also mentally projected onto other road users, and are then used to dynamically interact with other vehicles as well as to anticipate and manage collision risks. Therefore, envelope zones play a key role in the regulation of "social" as well as "physical" interactions with other road users, under normal driving conditions (e.g. inter-vehicle distance keeping) or for assessing critical path conflicts requiring an emergency reaction.
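A minimal sketch of this mechanism follows, assuming purely illustrative speed-dependent zone sizes (the paper gives no numerical values for the safety, threat and danger zones, so the coefficients below are our assumptions):

```python
def envelope_zones(speed_ms: float) -> dict:
    """Speed-dependent envelope zone boundaries ahead of the car (metres).
    The time-headway coefficients are illustrative assumptions, not the
    values used in COSMODRIVE."""
    return {
        "danger_m": 0.5 * speed_ms + 2.0,   # no other road user should enter
        "threat_m": 1.0 * speed_ms + 2.0,
        "safety_m": 2.0 * speed_ms + 2.0,
    }

def reaction_to_intrusion(speed_ms: float, obstacle_distance_m: float) -> str:
    """Map an intrusion into the envelope zones to a regulation response."""
    zones = envelope_zones(speed_ms)
    if obstacle_distance_m <= zones["danger_m"]:
        return "emergency reaction"        # automatic emergency response
    if obstacle_distance_m <= zones["threat_m"]:
        return "alert explicit awareness"  # the event pops into consciousness
    if obstacle_distance_m <= zones["safety_m"]:
        return "implicit regulation"       # e.g. smooth distance adjustment
    return "no reaction"
```

The graded responses mirror the text above: intrusions into the outer zones feed the implicit regulation loop, while an intrusion into the danger zone triggers the automatic emergency reaction.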
3. Methodological approach and technological support for driver modelling

The methodological specificity of the driver modelling approach implemented in this research was to use the SiVIC virtual platform both as a virtual road environment to be interfaced with the driver model for dynamic virtual simulations, and as a driving simulator for the empirical data collection among real drivers required for driver model development and validation. After having presented the SiVIC platform characteristics, the second part of this section provides an overview of the empirical data collected among human drivers by using SiVIC as a driving simulator.

3.1. SiVIC: a Vehicle-Environment-Sensors virtual platform

SiVIC (for Simulateur Véhicule-Infrastructure-Capteur) is a Vehicle–Environment–Sensors (VES) platform developed by IFSTTAR-LIVIC (Gruyer et al., 2006) for the virtual design of driving assistance systems. The main objective of this tool is the prototyping of virtual sensors for embedded systems, with respect to their physical capabilities and with the aim of providing real-time measurement of environmental behaviour changes, including weather conditions, moving objects, infrastructures, or other dynamic events. Indeed, the design and development of driving assistance devices generally require collecting data through vehicles equipped with an embedded architecture of perception and control/command systems. However, it is necessary to find substitution solutions in case of a lack of real data, or when scenarios are too dangerous or too difficult to implement in the real world. Moreover, virtual tools are needed to test and evaluate embedded algorithms against very accurate references. The software architecture of SiVIC has been developed with that perspective: SiVIC models a virtual road environment including the vehicle, the infrastructure and the sensors.
Moreover, in order to achieve the test step of embedded software applications, an interconnection has been developed between SiVIC and RTMaps (Real Time, Multisensor, Advanced Prototyping Software, by Intempora). RTMaps is a modular environment used to manage the data flows coming from several types of asynchronous sensors, in order to process these data in embedded ADAS design. RTMaps also has the capability both to record and to replay these data. The logic of the SiVIC software architecture is to reproduce, as faithfully as possible, the reality of a situation as well as the behaviour of a vehicle and all its embedded sensors. This platform thus makes it possible to generate data that can be recorded by the RTMaps (Steux, 2001) data acquisition system, i.e. to manage a continuous flow of time-stamped and synchronized numerical data from cameras, GPS, laser scanners, IR transmitters, inertial navigation equipment, or odometers. RTMaps can then replay the recorded virtual scenarios, and specific ADAS can be developed in order to be tested and finally embedded on a real vehicle. Hence it is possible to share data and to avoid the investment related to real experiments at an early stage of research and development programmes. The coupling of SiVIC with RTMaps (Fig. 5) gives RTMaps the ability to replace real-life data with simulated data. Moreover, it also opens in RTMaps the prospect of prototyping control/command algorithms on the desktop, since SiVIC takes advantage of a physical car model. An equipped vehicle is thus no longer necessary for the first stages of the prototyping cycle.

3.1.1. Road environment modelling with SiVIC: the graphical 3D engine

The graphical 3D engine of SiVIC decouples the simulation from the rendering process. This decoupling makes it possible to run a simulation without the rendering stage, or with a reduced number of rendering stages. In order to manage the temporal aspect, two time bases are available (CPU time and virtual time).
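The decoupling of simulation from rendering commonly takes the form of a fixed-step loop in which the rendering stage runs less often than the physics updates, or not at all. The sketch below illustrates this general pattern only (the function names, time bases and step sizes are our assumptions, not SiVIC's actual API):

```python
def run_simulation(total_virtual_ms: int, sim_dt_ms: int, render_every: int):
    """Fixed-step simulation on a virtual time base, with an optional,
    less frequent rendering stage. Setting render_every to 0 disables
    rendering entirely, as in a headless simulation run."""
    virtual_time_ms = 0
    sim_steps = 0
    render_calls = 0
    while virtual_time_ms < total_virtual_ms:
        # advance the vehicle/sensor simulation by one fixed virtual-time step
        virtual_time_ms += sim_dt_ms
        sim_steps += 1
        if render_every and sim_steps % render_every == 0:
            render_calls += 1   # rendering stage, decoupled from the physics
    return sim_steps, render_calls

# Simulate 1 s of virtual time at 10 ms steps, rendering every 5th step:
steps, renders = run_simulation(total_virtual_ms=1000, sim_dt_ms=10, render_every=5)
```

Integer milliseconds are used here so that the virtual time base accumulates exactly, independently of the CPU (wall-clock) time base.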
The modelling of the 3D environment in SiVIC is based on a binary space partitioning (BSP) tree, in order to reduce the processor's computing load. Moreover, functionalities have been added to handle the operation of specific sensors such as radar, cameras and GPS. Among these functionalities, SiVIC provides modules for reflections, weather constraints, shadows, light sources, HDR textures, meta-information, a layers mechanism, and the movement of objects without a physical model. But for the realisation and simulation of a road scene, the 3D engine is not sufficient. It is also necessary to use a set of dynamic modules (i.e. plug-ins) developed independently from the 3D engine. These plug-ins model and simulate both the sensors and the realistic vehicle model. They rely on a dynamic class-loading mechanism which allows actors (sensors, vehicles or graphical objects) to be added or removed during the simulation without compromising the working of the application. Along with this mechanism, a script language is integrated to dynamically manage and adjust the attributes and actors of the scene.

Fig. 5. Screenshot of the SiVIC-RTMaps platform.

3.1.2. Virtual sensors and vehicle modelling with SiVIC

SiVIC is not really an extension of the 3D engine, but an application using this graphical engine for the rendering stage and dynamically including many external modules that simulate all the actors of a road situation. In order to give access to all the parameters of the sensors and vehicles, the communication protocol uses the same rules as the graphical engine; the mechanism used for the communication protocol is thus modular and distributed over all the SiVIC modules. In order to reproduce a situation coherent with reality, and to generate all the data coming from the sensors embedded in the LIVIC vehicles, a set of both proprioceptive and exteroceptive sensors has been modelled and developed inside SiVIC (Fig. 6). The main sensor modules currently available in SiVIC are video and fisheye cameras, odometers, laser scanners, an inertial navigation system, GPS, radar and beacons. Fig. 6 presents a complex application which identifies the most dangerous obstacle through the cooperative fusion of stereovision and laser scanner processing. Laser impacts are filtered using the odometer and the INS sensors. This application was effectively embedded in a real prototype in order to mitigate collisions with pedestrians.
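The dynamic plug-in mechanism described in Section 3.1.1, whereby actors can be added or removed while the simulation is running, can be illustrated by a minimal actor registry (all names are hypothetical; SiVIC's real mechanism relies on dynamic class loading and a script language, not on this sketch):

```python
class SimulationScene:
    """Minimal sketch of a plug-in style actor registry: sensors, vehicles
    and graphical objects can be added or removed mid-simulation without
    stopping the application."""
    def __init__(self):
        self._actors = {}

    def add_actor(self, name: str, actor) -> None:
        self._actors[name] = actor

    def remove_actor(self, name: str) -> None:
        self._actors.pop(name, None)   # removing a missing actor is harmless

    def step(self) -> list:
        """Advance every registered actor by one simulation step."""
        return [actor() for actor in self._actors.values()]

# Actors are plain callables here; a camera and an odometer are registered,
# then the camera is unplugged mid-simulation:
scene = SimulationScene()
scene.add_actor("camera", lambda: "frame")
scene.add_actor("odometer", lambda: 12.3)
first = scene.step()
scene.remove_actor("camera")
second = scene.step()
```

The key design property is that the step loop iterates over whatever actors are currently registered, so the set of actors can change between steps without breaking the application.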
In order to provide realistic data for the embedded virtual sensors, it is necessary to reproduce the movement of the vehicle bodywork on the three axes (i.e. roll, pitch and yaw). These movements must also take into account the effects of the shock absorbers (pumping). The vehicle model used in SiVIC (Fig. 7) is based on the work of Glaser (2004), and includes shock absorbers and non-linear tyre-road forces. The contact modelling between the tyres and the road surface uses the force formulation of either Bakker et al. (1989) or Dugoff et al. (1970). This model makes it possible to simulate the coupling between the longitudinal and lateral axes, the impact of normal force variations, and the self-aligning moment. The vehicle bodywork is modelled as a rigid suspended mass. The architecture used for coupling the embedded sensors and a vehicle is a set of abstract interfacing classes, making it easy to install a great number of sensors on board the vehicle. The vehicle management module includes a vehicle model, a solver, and a set of control laws. Six different modes of dynamic vehicle management are available in SiVIC: (i) Autonomous, (ii) Tracking, (iii) Control/Command, (iv) RTMaps, (v) Replay, and (vi) Matlab. Control modes (ii) and (iii) make it possible to manage complex scenarios with many vehicles following their own trajectory instructions. In the Tracking mode, the trajectory instructions and the integration of control laws allow lateral and longitudinal control algorithms to be tested and evaluated. The command inputs are the steering angle and the acceleration (torque on each wheel). The Autonomous and Tracking modes also integrate a set of command scripts for a very complete and dynamic handling of the vehicle during the simulation stage. Control modes (iv) and (vi) allow the vehicle to be controlled by orders coming from RTMaps or Matlab control algorithms.
Mode (v) gives the capability to record a set of manoeuvres made by a vehicle and to replay them later.

Fig. 6. Examples of SiVIC use for virtual sensors simulation.

Fig. 7. Vehicle modelling on SiVIC.

3.2. Methodology for model development: empirical data collection among human drivers with SiVIC used as a driving simulator

As previously explained, the methodological originality of this research in terms of driver modelling was to use the SiVIC virtual platform, in a first step, as a driving simulator (Figs. 8 and 9) for empirical data collection among real drivers, before using it in a second step as a virtual road environment interfaced with the driver model. With this approach, human drivers' behaviour and driver model performances can be observed for the same car-following scenarios, in the same virtual road environment. These similarities are expected to facilitate the comparison between human drivers' and driver model's performances, and consequently to increase the model's validity. In this section we briefly present the experiment implemented in the ISi-PADAS project for this empirical data collection.

Fig. 8. The COSMO-SiVIC driving simulator.
Fig. 9. The Secondary Task used for the visual distraction study and modelling.
3.2.1. Human drivers investigated

In line with the modelling objectives of this experiment, which required the collection of a set of "reference data", 20 experienced middle-aged drivers (from 23 to 56 years old), 50% male and 50% female, participated in the experiment. All the drivers had a minimum of 5 years of driving experience and drove a minimum of 5000 km/year. This sample of drivers was divided into two similar sub-groups: Group 1 for studying Visual
Distraction, and Group 2 to investigate Cognitive Distraction. Both groups also performed the scenarios without any additional task. However, as this paper is centred on visual distraction, only part of the results will be presented and discussed here.

3.2.2. Driving scenarios and drivers' tasks studied

The participants' driving task was to follow a lead car in different driving conditions. Four main sources of variation were investigated: (1) the driving context (i.e. motorway, rural road and urban area), and consequently the required vehicle speed (respectively 130, 90, and 50 km/h); (2) the nature of the car-following task (i.e. free versus imposed following distance, to be kept at a given Inter-Vehicular Time of 0.6 s); (3) the lead car behaviour (steady versus irregular speed); and (4) the necessity to perform a Secondary Task while driving. Concerning visual distraction more specifically, the Secondary Task (ST) to be performed by the participants was the following: a set of 3 visual pictograms (Fig. 9), accompanied by an auditory beep, was displayed on an additional screen situated on the right side (near the usual position of the radio). Some seconds later (a randomised time from 3 to 4 s), 1 of these 3 pictograms appeared under the first set, and the driver had to use a 3-button command to indicate which pictogram was replicated.

3.2.3. Main results obtained from the empirical data collected

The empirical results presented in Table 1 (statistical analysis using Student's t-test) concern the negative impact of a visual Secondary Task (ST) on the drivers' performances, considering in particular the driving behaviour modifications in normal conditions (e.g. inadequate following distance) and the increase in accident risk in critical scenarios (i.e. when the lead car suddenly brakes).
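The Inter-Vehicular Time (IVT) measure used in these analyses can be computed directly from the gap to the lead car and the follower's speed. The sketch below is illustrative (the function names are ours, and the ±0.1 s tolerance for "holding" the imposed 0.6 s IVT is our assumption; the paper does not state the tolerance used):

```python
def inter_vehicular_time(gap_m: float, follower_speed_ms: float) -> float:
    """Inter-Vehicular Time (IVT): time gap to the lead car, in seconds."""
    return gap_m / max(follower_speed_ms, 0.1)  # guard against division by zero

def share_of_time_at_target(ivt_series, target_s=0.6, tol_s=0.1) -> float:
    """Fraction of samples at which the imposed IVT is actually held,
    within an assumed +/- tolerance."""
    hits = sum(1 for ivt in ivt_series if abs(ivt - target_s) <= tol_s)
    return hits / len(ivt_series)

# At 50 km/h (about 13.9 m/s), a 0.6 s IVT corresponds to roughly 8.3 m:
ivt = inter_vehicular_time(gap_m=8.34, follower_speed_ms=13.9)
# Share of samples within tolerance of the imposed 0.6 s instruction:
share = share_of_time_at_target([0.58, 0.61, 0.45, 0.72, 0.60])
```

An IVT below 0.5 s would be classified as a critical error in the terminology of the results that follow.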
In normal driving conditions, the two main differences observed under visual distraction are:

- A significant reduction of the safety margins in "free following conditions", for all the driving contexts (i.e. urban, motorway, and rural areas). Without the visual ST, the mean Inter-Vehicular Time (IVT) is 3 s, against 2.65 s with the ST. This difference is statistically significant (T-test, p < 0.001).

- A significant degradation of the following performance in "constrained car-following conditions". In this set of scenarios (all contexts), the drivers have to follow the lead car at an imposed IVT of 0.6 s. This target value is held 57% of the time without the ST, against 44% with it (p < 0.05). These results show a negative effect of visual distraction on the ability to keep up a short following distance. Moreover, the erroneous following performance more frequently takes the form of critical errors (i.e. IVT of less than 0.5 s) with the ST (17% of the time) than without it (10%). This difference is also significant (T-test, p < 0.05).

In critical driving conditions (i.e. emergency braking), the two main negative impacts of the visual Secondary Task on human drivers' performances are:

- Increased reaction time on the brake pedal when the car ahead brakes. These differences are only significant in the constrained following conditions (0.89 s without ST versus 1.1 s with ST; T-test, p < 0.05).

- Increased crash risk. Table 1 presents, for each driving condition investigated, the percentage of collisions with the lead car relative to the total number of required emergency brakings. The risk of collision due to the ST is significantly increased for 4 of the 10 driving scenarios requiring an emergency braking. The highest negative impacts of the ST are observed for the "constrained unsteady car following" scenarios, in both urban and rural areas.

Table 1
Percentage of collisions with the lead car in the critical scenarios.

Context   Driving scenario                            Without ST   ST-Visual
Highway   Free steady lead car following              55%          50%
Highway   Free unsteady lead car following*           35%          50%
Highway   Constrained steady lead car following       65%          70%
Highway   Constrained unsteady lead car following     70%          70%
Rural     Free unsteady lead car following            60%          60%
Rural     Constrained unsteady lead car following*    55%          80%
Urban     Free steady lead car following*             20%          30%
Urban     Free unsteady lead car following            30%          30%
Urban     Constrained steady lead car following       30%          30%
Urban     Constrained unsteady lead car following*    25%          90%

* Asterisks indicate the main observed differences between the without-ST and ST conditions.

4. Results: COSMO-SIVIC, a simulation platform for virtual Human Centred Design

The functional architecture of the version of the COSMODRIVE model implemented into SiVIC is composed of three main modules (Fig. 5): a Perception Module (simulating human perceptive information processing), a Cognition Module (simulating mental representation elaboration, anticipation and decision-making at both the explicit/attentional and implicit/automatic levels), and an Action Module (simulating executive functions and vehicle control abilities) generating an effective driving performance. By interfacing COSMODRIVE and SiVIC, it becomes possible to obtain a virtual platform (i.e. COSMO-SiVIC) able to generate dynamic simulations of a driver model interacting with a virtual road environment through actions on a virtual vehicle.

Nevertheless, these two pre-existing tools were not initially connected, and specific developments were required to interface them. First of all, a new version of COSMODRIVE was defined in accordance with the ISi-PADAS project objectives, in order to be implemented on the COSMO-SIVIC integrated platform. This new COSMODRIVE version simulates driving activity through three main modules: (i) a Perception Module of the road environment; (ii) a Cognition Module, including mental representation elaboration (i.e. the driver's situational awareness), anticipation abilities and decision-making (based on the mental model of the current state of the world, but also on anticipations generated through internal mental simulations); and (iii) an Action Module supporting executive functions, in order to dynamically drive a virtual car and progress in the SiVIC environment, through two dynamic, synchronised and adaptive "Perception–Cognition–Action" regulation loops (Figs. 10, 11 and 12).
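The coupling of the three modules described above can be illustrated by a minimal "Perception–Cognition–Action" regulation loop. The sketch below is a toy illustration, not the actual COSMO-SiVIC implementation: all class names, the 1 s headway threshold and the simplistic braking dynamics are assumptions introduced for the example.

```python
# Toy "Perception-Cognition-Action" regulation loop, loosely inspired by
# the three-module architecture described above. Names, thresholds and
# dynamics are illustrative assumptions.

class PerceptionModule:
    def observe(self, environment):
        # Return the perceived state (here: just the gap and ego speed).
        return {"gap_m": environment["gap_m"],
                "ego_speed": environment["ego_speed"]}

class CognitionModule:
    def decide(self, percept):
        # Elementary tactical decision: brake if the time headway is short.
        ivt = percept["gap_m"] / max(percept["ego_speed"], 0.1)
        return "brake" if ivt < 1.0 else "keep_speed"

class ActionModule:
    def act(self, decision, environment):
        if decision == "brake":
            environment["ego_speed"] = max(0.0, environment["ego_speed"] - 2.0)
        return environment

env = {"gap_m": 10.0, "ego_speed": 20.0}  # 0.5 s headway: critical
perception, cognition, action = PerceptionModule(), CognitionModule(), ActionModule()
for _ in range(5):  # five regulation cycles
    percept = perception.observe(env)
    decision = cognition.decide(percept)
    env = action.act(decision, env)
print(env["ego_speed"])  # speed progressively reduced by repeated "brake" decisions
```

Each cycle closes the loop: the Action Module's output changes the environment that the Perception Module observes on the next cycle, which is the basic mechanism the COSMO-SiVIC platform implements at a far richer level of detail.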
4.1. The perception module: from perceptive exploration to cognitive integration

The Perception Module acts as an "interface" between the external road environment (as simulated with SiVIC) and the driver model. It simulates the human processing of sensorial data before their integration into the Cognition Module for traffic condition analysis, situational change anticipation, decision making, and then action planning and implementation through the Action Module. Fig. 11 presents the functional architecture of the Perception Module of COSMODRIVE as implemented into the COSMO-SiVIC virtual platform.
[Fig. 10 schematic: the SiVIC virtual platform (virtual eye/SiVIC camera, virtual car, virtual control/command functions, 3D model of the external road environment) interfaced with the COSMODRIVE virtual driver (Perception Module; Cognition Module with explicit and implicit cognition, representations and decision; perceptive cycle), through attentional versus automatic "Perception–Cognition–Action" regulation loops.]
Fig. 10. Overview of COSMO-SiVIC integrated platform.
Fig. 11. Functional architecture of the Perception Module.
Fig. 12. Visual field of the ‘‘virtual eye’’ of the driver model.
First of all, the Perception Module integrates a Virtual Eye, designed as a new type of SiVIC virtual sensor, derived and adapted from the pre-existing SiVIC virtual camera model. This virtual eye includes three visual field zones (Fig. 12): a central zone corresponding to foveal vision (a solid angle of 2.5° centred on the fixation point) with high visual acuity, parafoveal vision (from 2.5° to 9°), and peripheral vision (from 9° to 150°), allowing only the perception of dynamic events.
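The three visual field zones lend themselves to a simple classification by angular eccentricity. The sketch below is an illustrative assumption: whether the quoted angles are treated as radii from the fixation point, and the inclusivity of the boundaries, are choices made for the example, not specifications from the model.

```python
# Sketch of the three visual-field zones of the "virtual eye" described
# above (foveal up to 2.5 deg, parafoveal up to 9 deg, peripheral up to
# 150 deg). Boundary handling is an assumption.

def visual_zone(eccentricity_deg):
    """Classify an object by its angular distance from the fixation point."""
    if eccentricity_deg <= 2.5:
        return "foveal"        # high acuity: full identification possible
    if eccentricity_deg <= 9.0:
        return "parafoveal"    # degraded acuity
    if eccentricity_deg <= 150.0:
        return "peripheral"    # only dynamic events (motion) perceived
    return "not_visible"

print(visual_zone(1.0))   # foveal
print(visual_zone(45.0))  # peripheral
```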
Moreover, two complementary perceptive processes are implemented in this module in order to simulate human information processing while driving. The first one, perceptive integration, is a "data-driven" process (i.e. bottom-up integration based on a set of perceptive filters): it allows the cognitive integration of environmental information into the driver's tactical mental representations of the Cognition Module, according to the physical characteristics of objects (e.g. size, colour, movement) and their salience in the road scene for a human eye. The second perceptive process is perceptive exploration (based on Neisser's theory of the perceptive cycle; Neisser, 1976), a "knowledge-driven" process (i.e. top-down integration of perceptive information) that continuously updates the driver's mental models of the Cognition Module and actively explores the road scene, according for example to the expectations included in the tactical representations (Perceptive Exploration Zones and Driving Zones of the driving schemas). From a computational point of view, described in more detail in Bornard et al. (2011), these perceptive processes of informational search and integration are both under the control of a key mechanism of the Perception Module: the Visual Strategy Manager (VSM). This process manages the Visual Queries (i.e. information to be obtained) coming from the different cognitive processes active at a given time. A visual query is characterised by (i) a fixation point (to be observed in the road environment), (ii) a level of priority, (iii) a duration and (iv) a lifetime. The duration indicates the time required to visually process the information when observed, and the lifetime defines the period of validity of the query before being processed (if not processed by the VSM during this time, it is cancelled). The priority level is defined by (i) the cognitive process having generated the query (i.e. 
knowledge driven) or (ii) by the virtual eye, according to the visual saliency of the object (i.e. data driven). A query can be more or less urgent when arriving at the VSM, and its priority value is progressively increased until it is processed. When received by the VSM, queries are stored in a list ordered by their level of priority; if a new query arrives with a higher priority than the older ones, it takes the first place in the list. When a query is processed by the VSM, the virtual eye is focussed on the fixation point for a period corresponding to the duration needed for information processing. A response is then created with the different items collected by the eye's fovea, and this information is sent towards
the initial sender (e.g. the querying cognitive process) by the VSM, in order to update the mental representation. The task of the Visual Strategy Manager is consequently to determine the order of priority of these queries and, on this basis, to specify the perceptive strategies for exploring the road scene. The information collected is then transmitted to the querying cognitive processes. Through such a Perception Module, the model is able to dynamically explore the road scene with its virtual eye and to dynamically integrate perceptive information. It is then possible to simulate human drivers' perception: the visual strategies implemented by real drivers are simulated here through a dynamic visual scanning of the road scene by the virtual eye. Such visual strategies, considered as performance, are thus modelled as a sequence of fixation points successively implemented by progressively considering the perceptive queries received by the Perception Module from the Cognition Module. Indeed, each query requires focusing the virtual eye on a specific area of the road scene. The perceived information is then integrated into the implicit and explicit mental representations of the Cognition Module.
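The queueing behaviour of the Visual Strategy Manager can be sketched as a priority queue of visual queries. The field names below are illustrative assumptions, and the progressive priority increase ("aging") mentioned above is deliberately omitted to keep the sketch short; only the priority ordering and the lifetime-based cancellation are shown.

```python
# Minimal sketch of the Visual Strategy Manager (VSM): visual queries
# carry a fixation point, a priority, a duration and a lifetime; the
# highest-priority query still within its lifetime is processed first.
# Priority aging is omitted; field names are illustrative assumptions.
import heapq

class VisualStrategyManager:
    def __init__(self):
        self._queue = []   # min-heap keyed on negated priority
        self._counter = 0  # tie-breaker preserving arrival order

    def submit(self, fixation_point, priority, duration_s, lifetime_s, now_s):
        deadline = now_s + lifetime_s
        heapq.heappush(self._queue,
                       (-priority, self._counter, fixation_point, duration_s, deadline))
        self._counter += 1

    def next_fixation(self, now_s):
        """Pop the most urgent still-valid query; return (point, duration)."""
        while self._queue:
            _, _, point, duration, deadline = heapq.heappop(self._queue)
            if deadline >= now_s:   # still within its lifetime
                return point, duration
            # otherwise the query expired unprocessed: it is cancelled
        return None                 # nothing left to look at

vsm = VisualStrategyManager()
vsm.submit("mirror", priority=1, duration_s=0.3, lifetime_s=5.0, now_s=0.0)
vsm.submit("lead_car", priority=5, duration_s=0.5, lifetime_s=2.0, now_s=0.0)
print(vsm.next_fixation(now_s=0.5))  # highest-priority query is served first
```

Processing a query would then amount to pointing the virtual eye at the returned fixation point for the returned duration before requesting the next one.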
4.2. The cognition module: from situation awareness to decision making simulation

The Cognition Module is mainly in charge of simulating human drivers' abilities in Situational Awareness (i.e. mental representation elaboration), decision-making and anticipation. These cognitive functions are implemented in COSMO-SIVIC through two regulation modes: an attentional control mode, based on an explicit awareness of the driving situation and requiring cognitive resources for sequential reasoning, and an automatic regulation mode, based on an implicit situational awareness and on cognitive skills liable to run in parallel. Mental representation elaboration is mainly based on the instantiation of mental driving schemas. From a computational point of view (Fig. 13), a Driving Schema is firstly defined by a Tactical Goal to be reached in a given Roadway Infrastructure. It is made of a Driving Path, defined as a sequence of driving Zones, and of sequences of Actions (Simple or Complex). The implementation of these actions depends on Conditions to be checked regarding the occurrence of Events in certain Zones of the infrastructure (Absolute Zones, like Driving and Perceptive Exploration Zones, or Relative Zones, like envelope zones). An Event is defined by the occurrence of an Object with specific Characteristics (describing its appearance, behaviour, or status). The term "object" is used here in its widest meaning: it can be a vehicle, a pedestrian, or a road sign. Driving schema implementation for car driving is then based on a set of Operational Units (like the pure pursuit point and envelope zones methods) that are used by the Action Module for the lateral and longitudinal control of the vehicle on the virtual road.

[Fig. 13 UML class diagram: a Driving Schema Frame is defined by a Tactical Goal with Characteristics and a Roadway Infrastructure; it aggregates a Driving Path (a subset of Zones, either Absolute or Relative), Operational Units, Objects, Events, Conditions, and Actions (Simple or Complex).]

Fig. 13. Driving schemas modelled as a UML class diagram.

In order to use the COSMODRIVE driving schema approach in the SiVIC virtual environment, the pre-existing SiVIC environment models had to be extended with remarkable points in the road infrastructure (Mayenobe et al., 2002). Remarkable points are specific landmarks (like the centre of an intersection, road sign positions, or road corners) needed to computationally match the "qualitative geometry" of the mental driving schemas (more particularly the perceptive zones and the driving zones; cf. Fig. 3) with the physical reality of the road infrastructure as effectively simulated in the SiVIC platform (i.e. corresponding to the "objective geometry"). Through this matching process, the mental representation becomes consistent with the external environment, which is needed for precisely piloting the virtual eye and for accurately implementing driving actions allowing the model to move in the objective reality. The pre-existing SiVIC model of road infrastructures has therefore been enhanced with a set of remarkable points in the new COSMO-SiVIC platform. In COSMODRIVE, anticipation is implemented through the mental deployment of a driving schema (Mayenobe, 2004).
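The schema structure and its deployment along the driving path can be sketched with simple data classes. The field names, zone names and target speeds below are hypothetical illustrations of the UML description above, not the COSMODRIVE data model.

```python
# Sketch of a tactical driving schema as an ordered sequence of driving
# zones, and of its "deployment" from entry zone to exit zone. All field
# names and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    target_speed_kmh: float

@dataclass
class DrivingSchema:
    tactical_goal: str
    driving_path: list = field(default_factory=list)  # ordered Zones

    def deploy(self):
        """Travel the path zone by zone, yielding the sequence of
        anticipated (zone, speed) states, from entry to exit."""
        for zone in self.driving_path:
            yield zone.name, zone.target_speed_kmh

turn_left = DrivingSchema(
    tactical_goal="turn left at the crossroads",
    driving_path=[Zone("approach", 50), Zone("entry", 30),
                  Zone("turning", 20), Zone("exit", 50)],
)
print(list(turn_left.deploy())[0])  # first anticipated state of the path
```

Run mentally (as in the model) rather than behaviourally, the same generator yields the chain of anticipated states that the next paragraph describes as mental deployment.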
This deployment consists in moving the vehicle along the driving path, by successively travelling through the different driving zones of the schema, from the initial state (entry zone) until the tactical goal (exit zone) is reached. The deployment procedure occurs at two levels: (i) at the representational level (explicit or implicit mental simulation), when drivers anticipate and mentally project themselves into the future, and (ii) through the activity itself, during the effective implementation of the schema while driving the car in the road environment. As a result, the deployment process generates a particular instance of schema execution, composed of a temporal sequence of causally interlinked occurrent representations, corresponding to the driving situation as it was progressively understood, anticipated, experienced, and lastly acted by the driver along the driving path progression. Concerning the simulation of drivers' decision-making, two processes are implemented in COSMO-SIVIC, one for each of the regulation loops (i.e. attentional versus automatic) previously discussed. At the attentional level, corresponding to explicit decisions, this process is modelled through a set of state-transition automata intimately linked with the driving path and the conditions integrated in the tactical driving schemas. At the automatic level, corresponding to the automatic regulation loop, implicit decision-making is directly implemented at the operational level of vehicle control, described in the next section. Moreover, in order to support tactical decision-making based on cognitive anticipations (i.e. drivers' abilities to project themselves into the future through mental simulations), implemented in COSMODRIVE as a process of mental deployment of the driving schemas (Bellet et al., 2009), the SiVIC 3D graphical engine is used in a dual way in COSMO-SiVIC.
While the first instance is in charge of simulating the current road environment, the other ones are used to dynamically simulate the driver's mental representations. In order to synchronise several instances of SiVIC (potentially running on different computers), a specific functionality has
been developed in COSMO-SIVIC. This plug-in allows the parallel execution of multiple instances of SiVIC, synchronised through UDP/IP network connections. The synchronisation process may act at two levels: when loading a static environment into SiVIC, and when running simulations, by synchronising the state of every dynamic item that evolves in the road environment. Technically, one instance of SiVIC acts as master and the others act as slaves. At the level of COSMO-SIVIC as a whole, the former computes the simulation of the actual driving environment (the external road), while the latter are used to simulate the driver's different internal mental models (i.e. the tactical representation of the current driving situation, as well as the anticipated representations corresponding to the driver's expectations); the slaves only integrate the new status of the dynamic events, in accordance with the perceptive integration and perceptive exploration processes carried out by the Perception Module. It is then possible to simulate human errors in terms of inadequate mental representations (non-integration of perceptive data, or false updating of an event due to distraction, for example) and, consequently, to generate and explain erroneous human decision-making. This synchronisation process between several instances of SiVIC can also be used at the internal level of COSMODRIVE, in order to jointly update the driver's internal representations in a unified way and to synchronise implicit and explicit cognition. This is typically the case when integrating the situational changes due to a driving action implemented at the operational level, which requires similarly updating the ego-car position in all the internal representations of the model (explicit as well as implicit).

4.3. The action module: executive functions and vehicle control skills

The COSMODRIVE vehicle-control abilities, mainly corresponding to the automatic regulation loop, are based on two main mechanisms: the Envelope Zone regulation strategy (discussed in section 2.2) and the Pure-Pursuit Point method (Mayenobe, 2004). The Pure-Pursuit Point method (Fig. 14) was initially introduced for modelling, in a simplified way, the lateral and longitudinal controls of automatic cars along a trajectory (Amidi, 1990), and was then adapted by Sukthankar (1997) for modelling the driver's situational awareness. Mathematically, the pure-pursuit point is defined as the intersection of the desired vehicle path and a circle of radius l centred at the vehicle's rear axle midpoint (assuming front wheel steering). Intuitively, this point describes the steering curvature that would bring the vehicle to the desired lateral offset after travelling a distance of approximately l. The position of the pure-pursuit point thus maps directly onto a recommended steering curvature k = 2x/l², where k is the curvature (the reciprocal of the steering radius), x is the lateral offset to the pure-pursuit point in vehicle coordinates, and l is a parameter known as the look-ahead distance. According to this definition, the operational control of the car by COSMODRIVE in COSMO-SIVIC can be seen as an automatic regulation loop that permanently keeps the pursuit point on the tactical driving path, at a given speed assigned to each segment
Fig. 14. The ‘‘Pure-Pursuit Point’’ method.
Fig. 15. Visualisation of COSMODRIVE Pursuit Point and Envelope Zones regulation strategies on the COSMO-SIVIC platform.
of the current tactical driving schema, as instantiated in the mental representation. From a computational point of view, these two vehicle-control abilities (i.e. the pure pursuit point and envelope zones methods) have been implemented on the COSMO-SIVIC platform as a new mode of the pre-existing SiVIC models of vehicle control (cf. section 3.2). A new class of "COSMO-CAR" objects, based on the pre-existing "SivicCar" class, has thus been defined in order to provide specific COSMO-SIVIC car models able to integrate both the pure-pursuit point method, for monitoring their lateral and longitudinal controls, and the envelope zones strategy, for managing their interactions with the other road users. Figs. 15 and 16 illustrate such a regulation strategy in the frame of a car-following task: the pursuit point determines the heading to be followed, and the envelope zones are then used for regulating the distance with the lead car.
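The pure-pursuit steering law k = 2x/l² reduces to a one-line function. The sketch below applies that standard formula; the numeric values are illustrative only.

```python
# Sketch of the Pure-Pursuit Point steering law: curvature k = 2x / l**2,
# where x is the lateral offset of the pursuit point in vehicle
# coordinates and l is the look-ahead distance.

def pure_pursuit_curvature(lateral_offset_m, look_ahead_m):
    """Steering curvature (1/m) bringing the vehicle onto the pursuit point."""
    return 2.0 * lateral_offset_m / (look_ahead_m ** 2)

# Pursuit point 1 m to the side, 10 m ahead:
k = pure_pursuit_curvature(lateral_offset_m=1.0, look_ahead_m=10.0)
print(k)  # curvature of 0.02 1/m, i.e. a steering radius of 50 m
```

A zero lateral offset yields a zero curvature (straight-ahead steering), and the sign of x directly gives the steering direction, which is what makes the method attractive as a simple operational control law.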
5. Conclusion and perspectives: COSMO-SIVIC use for human error simulation and perspectives for virtual Human Centred Design in the car industry

5.1. COSMO-SIVIC use for driver's performance and human error simulation

In its current status, the COSMODRIVE model implemented on the SiVIC platform is able to observe, mentally analyse, decide, and then dynamically progress in a virtual road environment through continuous actions on a virtual car. Moreover, it is possible to use this model to simulate human drivers' visual strategies through a dynamic visual scanning of the road scene by the virtual eye. The dynamic sequence presented in Fig. 16 provides an example of such a visual scanning of the road scene while approaching an urban crossroads (with the tactical intention to turn left), as simulated by our driver model using the explorative zones of the driving schema previously presented in Fig. 3 (section 2.1). At a long distance from the intersection (View 1), the driver model explores the front scene and detects a crossroads. After this detection, the model observes (View 2) the colour of the traffic lights in order to decide whether to cross the intersection (if the light is green) or to stop the car (if it is red). As the traffic light is green, the model focusses its visual attention on the opposite traffic, in accordance with the risk of envelope zone conflicts liable to occur between the ego-car and the other road users. In case of a perceived conflict (View 3), the driver model stops until the oncoming car has crossed the intersection. In case of no conflict, the model decides to cross, while consulting the walkways (View 4) in order to manage potential interactions with pedestrians. Moreover, the driver model implemented on SiVIC is able to simulate drivers' performances and human errors due to
Fig. 16. Visual strategies and mental representation dynamic simulation, during a turn-left manoeuvre at urban cross-roads.
distraction, for example. Beyond the observable effects on driving performance, the aim of our computational modelling approach was also to simulate the effects of visual distraction on car drivers' Situation Awareness. The two following figures (Figs. 17 and 18) illustrate this type of simulation result for two typical cases of human emergency braking manoeuvres observed during the experiment (cf. section 3.3), in the same driving scenario of "free car-following on highway". By considering the human drivers' behaviour actually observed during the experiment (including both visual strategies and actions on the vehicle controls), a COSMODRIVE simulation of the corresponding driver's situation awareness has been implemented. Fig. 17 presents an example of driving performance without the Secondary Task. In this example, the driver reacts 2 s after the beginning of the lead car braking. As the initial following distance was safe, and as the driver's attention was focussed on the road
scene when the lead car brakes, the driver detected this critical event early, and then adequately regulated his Inter-Vehicular Time when the lead car entered the green envelope zone (i.e. view "c" below the curves, simulating the driver's mental representation as it can be inferred from the observed behaviour at this particular moment). By contrast, Fig. 18 presents the same driving scenario in case of visual distraction due to the secondary task. In this example, the lead car brakes while the driver is observing the on-board screen (view "b"). In view "c", she discovers the gap between (i) her mental representation of the driving situation, in which the lead car is expected to be outside the green envelope zone, and (ii) the objective reality, in which the lead car is actually in the red envelope zone. She therefore immediately carries out an emergency braking (0.58 s of reaction time). Unfortunately, the collision cannot be avoided, and the crash with the lead car occurs in view "d".
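The green/red envelope zones used in these figures can be illustrated by classifying the lead car from the current Inter-Vehicular Time. The 0.5 s and 1.0 s thresholds below are illustrative assumptions, not the model's calibrated values.

```python
# Sketch of an envelope-zone style classification of the lead car from
# the current Inter-Vehicular Time (IVT), echoing the green/red envelope
# zones of Figs. 17-18. The thresholds are illustrative assumptions.

def envelope_zone(gap_m, ego_speed_ms):
    ivt_s = gap_m / max(ego_speed_ms, 0.1)
    if ivt_s < 0.5:
        return "red"      # critical: emergency braking required
    if ivt_s < 1.0:
        return "green"    # regulation zone: adjust speed
    return "outside"      # safe following distance

print(envelope_zone(gap_m=8.0, ego_speed_ms=20.0))   # IVT = 0.4 s: red
print(envelope_zone(gap_m=40.0, ego_speed_ms=20.0))  # IVT = 2.0 s: outside
```

A distracted driver who assumes a steady lead-car speed keeps classifying the lead car as "outside" in his mental representation, while in the objective environment the same function already returns "red", which is exactly the representational gap discussed above.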
Fig. 17. Example of human driver’s emergency braking while driving without any visual distraction.
Fig. 18. Example of late emergency braking due to visual distraction.
Figs. 19 and 20 illustrate the types of results liable to be obtained through COSMO-SIVIC simulation for these two typical occurrences of the same driving scenario. Fig. 19 corresponds to the simulation of typical human performances when the drivers' visual attention is focussed on the road scene and the lead car suddenly triggers an emergency braking (corresponding to 45% of the emergency braking cases observed in the section 3.3 experiment). Like the human drivers, the model detects the critical event early, decides to regulate its Inter-Vehicular Time immediately when the lead car enters the green envelope zone, and then implements an
action on the brake pedal for appropriately managing the collision risk. The accident is therefore avoided. By contrast, Figs. 20 and 21 present another example of simulation obtained for the same driving scenario, but when drivers are visually distracted. While driving, drivers must continually update their mental model of the driving situation as they dynamically progress on the road. In case of a secondary task requiring off-road scanning, this mental model updating is imperfect, and drivers' actions are based on a mental representation that may differ from the situational reality. This is typically what occurred in the example of the crash presented
Fig. 19. Simulation of a typical drivers’ behaviour for crash avoidance in simple task condition.
Fig. 20. Simulation of typical effects of visual distraction on driving performances.
in Fig. 18, and simulated here with COSMO-SIVIC through Figs. 20 and 21. Like 58% of the behaviours observed among human drivers (Bornard et al., 2011), COSMODRIVE implemented here a strategy of scanning the additional screen requiring a long glance of 2 s (view b in Fig. 20 and stage 2 in Fig. 21). During this time, like human drivers, COSMODRIVE has to manage the Inter-Vehicular Time with the lead car by using its mental model of the driving situation (stage 3 in Fig. 21). Unfortunately, the lead car brakes while the virtual eye is looking off-road, and the COSMODRIVE/driver's Situation Awareness progressively becomes totally different from the situational reality (stage 3 in Fig. 21). When the driver/model pays attention to the road scene again (view c in Fig. 20), they suddenly become aware of the critical gap between the expected lead car position, as mentally assessed during the off-road glance by assuming a steady lead car speed (i.e. outside the green envelope zone), and its actual position, already inside the red envelope zone (see stage 4 in Fig. 21). Therefore, like the
human drivers, the model immediately triggers an emergency braking; but because of this quite late detection of the lead car's actual position, the crash cannot be avoided. As illustrated in this last example, COSMODRIVE simulations allow us to understand what really happens in the driver's mind when visually distracted: incomplete or incorrect perception of roadway cues due to off-road glances (required by the secondary task) directly impacts the elaboration of an adequate Situation Awareness, which in turn affects the driving performance as a whole.

5.2. Perspectives in using COSMO-SIVIC for virtual Human Centred Design in the car industry

Designing new driving assistances based on vehicle automation is a very complex issue, requiring a considerable effort from both engineering (for the technological developments) and ergonomics (in order to integrate end-users' needs during the design
Fig. 21. Visual distraction effects on driver’s Situation Awareness.
process). Providing simulation tools able to simulate human needs and performances from the earliest stages of the technological design process (i.e. before the development of expensive prototypes) is thus a crucial challenge for the near future. Indeed, such virtual Driver–Vehicle–Environment platforms are necessary
for promoting a Human Centred Design approach, and thus for increasing future systems' efficiency, effectiveness and acceptance by end-users. The examples of simulation presented in this paper illustrate the type of results liable to be obtained with the COSMODRIVE model when interfaced with the SiVIC virtual Vehicle–Environment
platform. Indeed, these model results take the form of dynamic simulations of human drivers' activity at four complementary levels:

- At the visual level (i.e. Perception Module), through the dynamic simulation of a sequence of visual fixation points, corresponding to the areas of interest successively explored by the driver while progressing on the road, according to his own tactical intentions (e.g. turning left at a crossroads), or as influenced by a visual secondary task requiring the driver to alternate between scanning the road scene and observing an on-board screen.

- At the cognitive level (i.e. Cognition Module), through the dynamic elaboration of mental representations (i.e. situational awareness, simulated through 3-dimensional mental models of the road scene integrating driving schemas, envelope zones and pure-pursuit point abilities) and the simulation of the decision-making processes in charge of determining which relevant action should be implemented in the current driving context, as perceived, understood and anticipated by the driver model.

- At the behavioural level (i.e. Action Module), corresponding to the driver's actions actually performed on the virtual car commands in order to dynamically progress in the virtual road environment and to interact adequately with the other road users.

- At the level of performance as a whole, corresponding to the consequences of the dual "Perception–Cognition–Action" regulation loop continuously implemented by the driver model (e.g. the respective speeds and positions of the vehicles and thus the inter-vehicular distances), and which is dynamically simulated through the actual effects of the driver's actions on the current driving situation, as modelled in the SiVIC virtual environment.
Moreover, by considering the underlying simulations respectively implemented by the Perception, Cognition, and Action Modules, it becomes possible to investigate human errors from their mental origins to their behavioural consequences, and thus to open the door to an in-depth understanding and analysis of human drivers' reliability versus unreliability. Given its current functionalities and limits, the main interest of the COSMO-SIVIC tool for virtual Human Centred Design of driving assistances concerns more particularly the initial design phase supporting the definition of the driving aid concept. At this early stage, the driver model could be used for virtual simulations allowing designers to estimate human drivers' performances in case of unassisted driving, in order to identify and specify the most critical driving scenarios for which the target system (to be developed) should provide a palliative assistance. These critical scenarios correspond to driving situations where human drivers' reliability – as assessed from our driver model's performance and error simulations – seems insufficient for avoiding an accident or for adequately managing the risk. Through these simulations, it thus becomes possible to provide ergonomic specifications of the drivers' needs in terms of future assistance. Then, during the driving aid assessment phases occurring later in the design process, it could be possible to evaluate the assistance effectiveness in a virtual way for the specific subset of the most critical scenarios initially selected through the model simulations, in order to test the efficiency of this device (and, therefore, its interest for the human driver) in these particular driving conditions. However, further developments will be required to provide a virtual tool liable to be used in this comprehensive way.
Indeed, the COSMO-SIVIC platform presented in this article only constitutes a first step towards such an integrated tool for Virtual Human Centred Design of driving assistances. In its current state, it makes it possible to simulate several of the driver's cognitive abilities and behavioural performances
– via a virtual car – in order to progress through a virtual road environment by means of dynamic "Perception–Cognition–Action" regulation loops. This is a crucial step for HCD, making it possible to identify critical driving scenarios and to select relevant concepts of vehicle assistance capable of adequately assisting human drivers. However, the long-term objective of this approach is to implement virtual driving assistances in the COSMO-SIVIC platform, in order to assess these aids virtually when interfaced with a driver model. Despite the limits of the current COSMO-SIVIC tool, the advantages of having interfaced COSMODRIVE with the SiVIC Vehicle–Environment–Sensors platform are already promising for future HCD applications, given the proven efficiency of SiVIC for the virtual design of driving aids from a technological point of view. Indeed, SiVIC has been successfully used in several French and European projects to generate test databases with accurate ground truth (Gruyer et al., 2006) and to prototype a virtual co-pilot by simulating complex scenes with environment perception for each vehicle; this type of development has demonstrated SiVIC's capability to implement complex architectures and applications (Glaser et al., 2010). Moreover, this tool was also recently optimised by adding features providing a dedicated distributed simulation architecture (Gruyer et al., 2010b): a communication bus (a DDS bus) was developed to exchange resources, making it possible to run multiple SiVIC instances in parallel, together with several prototyping platforms (such as RTMaps or Matlab), on several remote computers. In a recent project, named eMotive, SiVIC was used as the core component of a platform for prototyping and then evaluating detection systems. Many features and several new sensors (RADAR, GPS, etc.) were developed, then tested and validated by comparison with real sensors. The results obtained confirmed that SiVIC virtual sensors can be considered very close to real ones (Gruyer et al., 2010a; Hiblot et al., 2010; Bossu et al., 2010). Given these recent advances of the SiVIC tool, it now becomes realistic to interface virtual driving assistances with COSMODRIVE, with the aim of comparing drivers' simulated performances with and without driving assistance, and thereby virtually assessing the respective benefits and potential risks for road safety of different versions of a driving aid. This will be the next step of the COSMO-SIVIC research, in order to provide a fully integrated platform for the Virtual Human Centred Design of future driving assistances.
Acknowledgements

The research leading to these results has received funding from the European Commission Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 218552, Project ISi-PADAS.

References

Amidi, O., 1990. Integrated Mobile Robot Control. Technical Report CMU-RI-TR-90-17, Carnegie Mellon University Robotics Institute.
Baddeley, A.D., 1990. Human Memory: Theory and Practice. Lawrence Erlbaum Associates, London.
Bainbridge, L., 1987. Ironies of automation. In: Rasmussen, J., Duncan, J., Leplat, J. (Eds.), New Technology and Human Errors. Wiley, New York, pp. 271–286.
Bakker, E., Pacejka, H.B., Lidner, L., 1989. A new tire model with an application in vehicle dynamics studies. SAE paper no. 890087.
Bellet, T., Bailly-Asuni, B., Mayenobe, P., Banet, A., 2009. A theoretical and methodological framework for studying and modelling drivers' mental representations. Safety Science 47, 1205–1221.
Bellet, T., Bailly, B., Mayenobe, P., Georgeon, O., 2007. Cognitive modelling and computational simulation of drivers' mental activities. In: Cacciabue, P.C. (Ed.), Modelling Driver Behaviour in Automotive Environments: Critical Issues in Driver Interactions with Intelligent Transport Systems. Springer, pp. 315–343.
Bellet, T., Banet, A., 2011. Towards a conceptual model of motorcyclists' risk awareness: a comparative study of riding experience effect on hazard detection and situational criticality assessment. Accident Analysis and Prevention, http://dx.doi.org/10.1016/j.aap.2011.10.007.
Bellet, T., Tattegrain-Veste, H., 1999. A framework for representing driving knowledge. International Journal of Cognitive Ergonomics 3 (1), 37–49.
Bornard, J.C., Bellet, T., Mayenobe, P., Gruyer, D., Claverie, B., 2011. A perception module for car drivers' visual strategies modelling and visual distraction effect simulation. In: Proceedings of the 1st IEA-DHM Conference, Lyon, June 2011.
Bossu, J., Gruyer, D., Smal, J.C., Blosseville, J.M., 2010. Validation and benchmarking for pedestrian video detection based on a sensors simulation platform. In: Proceedings of IV 2010, San Diego, June 2010.
Cacciabue, C., Vollrath, M., 2011. The ISI-PADAS project: human modelling and simulation to support human error risk analysis of Partially Autonomous Driver Assistance Systems. In: Cacciabue, P.C., Hjälmdahl, M., Lüdtke, A., Riccioli, C. (Eds.), Human Modelling in Assisted Transportation: Models, Tools and Risk Methods. Springer, pp. 65–77.
Dugoff, J., Flanches, P., Segal, L., 1970. An analysis of tire traction properties and their influence on vehicle dynamic performance. SAE paper no. 700377.
Endsley, M.R., 1995. Toward a theory of situation awareness in dynamic systems. Human Factors 37 (1), 32–64.
Gibson, J.J., Crooks, L.E., 1938. A theoretical field-analysis of automobile driving. American Journal of Psychology 51, 453–471.
Glaser, S., 2004. Modélisation et analyse d'un véhicule en trajectoires limites. Application au développement de systèmes d'aide à la conduite. Thèse de doctorat de l'Université d'Evry.
Glaser, S., Vanholme, B., Mammar, S., Gruyer, D., Nouveliere, L., 2010. Maneuver-based trajectory planning for highly autonomous vehicles on real road with traffic and driver interaction. IEEE Transactions on Intelligent Transportation Systems 11 (3), 589–606.
Gruyer, D., Royère, C., du Lac, N., Michel, G., Blosseville, J.M., 2006. SiVIC and RT-MAPS, interconnected platforms for the conception and the evaluation of driving assistance systems. In: Proceedings of the ITS World Congress, London, UK, October 2006.
Gruyer, D., Glaser, S., Monnier, B., 2010a. SiVIC, a virtual platform for ADAS and PADAS prototyping, test and evaluation. In: Proceedings of FISITA 2010, Budapest, Hungary, 30 May–4 June 2010.
Gruyer, D., Glaser, S., Monnier, B., 2010b. Simulation of vehicle automatic speed control by transponder-equipped infrastructure. In: Proceedings of IEEE ITST 2009, Lille, France, 20–22 October 2009.
Hall, E.T., 1966. The Hidden Dimension. Doubleday, New York.
Hiblot, N., Gruyer, D., Barreiro, J.S., Monnier, B., 2010. Pro-SiVIC and ROADS: a software suite for sensors simulation and virtual prototyping of ADAS. In: Proceedings of DSC 2010, Paris, 9–10 September 2010.
Johnson-Laird, P.N., 1983. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge University Press, Cambridge.
Leplat, J., 1985. Les représentations fonctionnelles dans le travail. Psychologie Française 30, 269–275.
Mayenobe, P., 2004. Perception de l'environnement pour une gestion contextualisée de la coopération Homme-Machine. Ph.D. Thesis, Université Blaise Pascal de Clermont-Ferrand.
Mayenobe, P., Trassoudaine, L., Bellet, T., Tattegrain-Veste, H., 2002. Cognitive simulation of the driver and cooperative driving assistances. In: Proceedings of the IEEE Intelligent Vehicles Symposium (IV 2002), Versailles, June 17–21, pp. 265–271.
Michon, J.A., 1985. A critical view of driver behavior models: what do we know, what should we do? In: Evans, L., Schwing, R.C. (Eds.), Human Behavior and Traffic Safety. Plenum Press, New York, pp. 485–520.
Minsky, M., 1975. A framework for representing knowledge. In: Winston, P.H. (Ed.), The Psychology of Computer Vision. McGraw-Hill, New York, pp. 211–277.
Neisser, U., 1976. Cognition and Reality: Principles and Implications of Cognitive Psychology. W.H. Freeman, San Francisco.
Norman, D.A., Shallice, T., 1986. Attention to action: willed and automatic control of behavior. In: Davidson, R.J., Schwartz, G.E., Shapiro, D. (Eds.), Consciousness and Self-Regulation: Advances in Research and Theory, Vol. 4. Plenum Press, New York, pp. 1–18.
Ochanine, V.A., 1977. Concept of operative image in engineering and general psychology. In: Lomov, B.F., Rubakhin, V.F., Venda, V.F. (Eds.), Engineering Psychology. Science Publisher, Moscow.
Ohta, H., 1993. Individual differences in driving distance headway. In: Gale, A.G. (Ed.), Vision in Vehicles IV. Elsevier Science Publishers, Netherlands.
Piaget, J., 1936. La Naissance de l'intelligence chez l'enfant. Delachaux & Niestlé, Neuchâtel.
Rasmussen, J., 1986. Information Processing and Human–Machine Interaction: An Approach to Cognitive Engineering. North Holland, Amsterdam.
Schneider, W., Shiffrin, R.M., 1977. Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review 84 (1), 1–66.
Stanton, N.A., Chambers, P.R.G., Piggott, J., 2001. Situational awareness and safety. Safety Science 39, 189–204.
Steux, B., 2001. RTMaps: un environnement logiciel dédié à la conception d'applications embarquées temps-réel. Utilisation pour la détection automatique de véhicules par fusion radar/vision. Thèse de Doctorat de l'Ecole des Mines de Paris.
Sukthankar, R., 1997. Situation Awareness for Tactical Driving. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA, USA.
Van der Molen, H.H., Bötticher, M.T., 1988. A hierarchical risk model for traffic participants. Ergonomics 31 (4), 537–555.