SUPERVISION OF HUMAN OPERATORS USING A SITUATION-OPERATOR MODELING APPROACH Dirk Söffker and Elmar Ahle Chair of Dynamics and Control, University Duisburg-Essen, Germany
Abstract: The core of this contribution is an applicable concept for the automated supervision of human-machine interaction (HMI). To this end, first the logic of the interaction is examined, and second the structure of human errors is investigated. Combining the logical and functional discretization of HMI by the introduced situation-operator modeling (SOM) concept with the assumed formalizability of human errors allows automated supervision within the HMI context. An example describes the interaction of the system driver-vehicle with its environment: the driver steers the vehicle on a multi-lane road while pursuing different goals, e.g. driving as fast as possible. Copyright © 2006 IFAC
Keywords: Supervisory control, Human error, Autonomous vehicles, Modeling errors, Human-Machine-Systems, Human-Machine-Interaction
1. INTRODUCTION AND BASIC IDEAS
In previous work (Söffker, 2001), a system-theoretic modeling approach was introduced, built on a special situation-operator modeling kernel (calculus) called SOM. This modeling approach combines classical ideas of the situation and the event calculus and leads to a uniform and homogeneous modeling approach describing human learning, planning, and acting. The core of the approach is the assumption that changes of the considered parts of the real world can be understood as a sequence of effects, described by the items scenes and actions of the real world. In the proposed approach, the definitions of the items scene and action are coordinated in a twofold way: they are related to each other, and they relate the assumed structure of the real world to the structure of the database - called the mental model - of intelligent systems (IS). IS and humans (human operators, HO) are part of the real world. Depending on their principal sensory inputs, their natural (HO) or technical (IS) perception, and the related knowledge base, IS and HO adapt to and learn only parts or aspects of the real world. These parts are modeled using the developed situation and operator calculus. The describable part of the real world is called a system.

The item 'situation' (cf. Fig. 1.a), which in contrast to (McCarthy and Hayes, 1969) is a time-fixed, system- and problem-equivalent one, is used to describe the internal system structure (as a part of the real world). Here, only the logical structure of the 3D-space, time, and functional-oriented connections are of interest. The situation S consists of characteristics Ci and a set of relations R. The characteristics are linguistic terms describing the nodes of facts (as perceptible qualities); this includes physical, informational, functional, and logical connections. To describe the relations ri, known problem-related modeling techniques such as ODEs, DAEs, algorithms, or, more generally, graphical representations (e.g. Petri nets) can be used. The item 'operator' (cf. Fig. 1.b) is used to model effects/actions changing scenes (modeled as situations) in time.

The SOM-approach only provides the frame to model the structure of changeable scenes, and therefore maps the 'reality' of the real world to a formalizable representation using the proposed structural framework. This is useful for describing problems where the system structure is complex and cannot be modeled with a single available approach, and also for describing interactions between human operators and their environment. The introduced item 'characteristic' C also allows the representation of time-dependent parameters P, for example. The complete set of relations R fixes the structure of the considered scene of the real world modeled as situation S.

Fig. 1. Introduced terms situation / operator (a: situation as considered system; b: operator)

Fig. 2. Connections between situation and operator modeling
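The items situation and operator can be sketched as a minimal data model. This is only an illustrative sketch: all class, field, and characteristic names are assumptions for exposition and are not taken from the paper. It captures the central rule that an operator's function F is only realized if its explicit assumptions eA are fulfilled by the characteristics Ci of the current situation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Situation:
    # characteristic C_i -> parameter value (linguistic terms)
    characteristics: Dict[str, str]

@dataclass
class Operator:
    name: str
    # explicit assumptions eA: of the same quality as the characteristics C_i
    explicit_assumptions: Dict[str, str]
    # function F: maps the old characteristics to the changed ones
    function: Callable[[Dict[str, str]], Dict[str, str]]

    def applicable(self, s: Situation) -> bool:
        # F is only realized if all explicit assumptions eA are fulfilled
        return all(s.characteristics.get(c) == v
                   for c, v in self.explicit_assumptions.items())

    def apply(self, s: Situation) -> Situation:
        if not self.applicable(s):
            raise ValueError(f"explicit assumptions of '{self.name}' not fulfilled")
        return Situation(self.function(dict(s.characteristics)))

# Hypothetical example: a lane-change operator with assumed characteristic names.
lane_change = Operator(
    name="lane change to left",
    explicit_assumptions={"lane change possible": "yes"},
    function=lambda c: {**c, "actual lane": "left"},
)
before = Situation({"lane change possible": "yes", "actual lane": "right"})
after = lane_change.apply(before)
print(after.characteristics["actual lane"])  # prints "left"
```

Note that the implicit assumptions iA (the constraints between eA and F) are not represented explicitly here; in this sketch they are hidden inside the function body.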
The operator O (cf. Fig. 1.b) is understood and modeled from a functional point of view, i.e. the operator is an information-theoretic item defined by its function F (as the output) and the related necessary assumptions. Here, explicit and implicit assumptions are distinguished. The function F is only realized if the explicit assumptions eA are fulfilled. The implicit assumptions iA comprise the constraints between eA and F of the operator. The explicit assumptions eA are of the same quality as the characteristics Ci of the situation S. For the internal structure of the operator, other descriptions such as textual, logical, mathematical, or problem-related ones are allowed. The double use of the term operator O is graphically illustrated in Fig. 2.

The item 'operator' is used to model internal (passive) connections within situations as well as changes between situations. Operators model the system changes (changes of situations); this defines the discrete events of the change of the considered part of the real world, the system. Operators and situations are strongly connected due to the (partial or complete) identity of the characteristics of the situations and the explicit assumptions of the operators. Accordingly, the situation consists of 'passive' operators (internal causal relation: 'because'), whereas the change is effected by 'active' operators (external causal relation: 'to'), as shown in Fig. 2.

It should be noted that operators correspond to situations. Both are used not only for the structural representation of the organization of the system, but also for internal knowledge representation and storage of HO and IS. They are the core/background of all higher organized internal (cognitive) operations and functions like learning, planning, etc. (Söffker, 2001), and also of the proposed supervision concept. This kind of modeling is typical for cognitive architectures (Cacciabue, 1998).

The description of complex systems using a situation-operator model allows
• within the situation S, the mixture of different types of (variable) quantities (the relations in the set R may be of different kinds),
• within the situation S, the integration of logical and numerical quantities (by different characteristics Ci), and
• the description of real-world problems using a mixture of a complex set of descriptions (variables).
A hidden structure of R is also possible and is graphically illustrated in Fig. 3. The change of the considered world results as a sequence of actions modeled by operators, as illustrated in Fig. 4.

Fig. 3. Graphical notation

In relation to Rasmussen's well-known 'skill-based, rule-based, knowledge-based' distinction (Rasmussen, 1983) of human interaction, the proposed approach focuses on the description of the interaction itself. It yields an internal structuring (and therefore modeling) of the interaction and of the related connection between assumed human mental models and the connected structures of the formalizable interaction partner, the system. As a result, the approach allows the consideration of individual contexts as action sequences.

Human errors have been described and phenomenologically analyzed in different publications, e.g. (Reason, 1990). The first publications assuming a specific interaction loop, describing and distinguishing human errors in a detailed and structured way, were published by Dörner and his group (Dörner, 1989).
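The change of the considered world as a sequence of operators applied to situations can be supervised step by step: each operator is checked against the current situation before its effects are taken over, and a violated assumption is flagged as a possible error. The following is a minimal dict-based sketch; the operator and characteristic names are illustrative assumptions, not taken from the paper.

```python
# Supervise a sequence of operators applied to an evolving situation.
# A step is 'ok' only if the operator's explicit assumptions hold in the
# current situation; otherwise it is flagged (cf. the error 'Rigidity':
# a known operator applied although the situation has changed).

def supervise(situation, sequence):
    """Return a log of (operator name, status) for each step."""
    log = []
    for op in sequence:
        ok = all(situation.get(c) == v for c, v in op["assumptions"].items())
        if ok:
            situation = {**situation, **op["effects"]}  # take over effects
            log.append((op["name"], "ok"))
        else:
            log.append((op["name"], "assumptions violated"))
    return log

# Illustrative driving operators (hypothetical characteristic names):
drive = {"name": "drive",
         "assumptions": {"actual lane free": "yes"},
         "effects": {}}
accelerate = {"name": "accelerate",
              "assumptions": {"acceleration possible": "yes"},
              "effects": {"actual velocity": "higher"}}

s0 = {"actual lane free": "yes", "acceleration possible": "no"}
log = supervise(s0, [drive, accelerate])
# [('drive', 'ok'), ('accelerate', 'assumptions violated')]
```

In the second step the driver-side action accelerate is chosen although acceleration is not possible, so the supervisor flags it instead of applying its effects.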
Fig. 4. Sequence of operators changing the situations (arbitrary example)

2. HUMAN ERROR CLASSIFICATION OF DÖRNER

Dörner and his group show in various publications, e.g. (Dörner, 1989), that human errors, also within the HMI, can be classified and therefore structured. Coming from psychology, Dörner uses word models to describe and distinguish different (mainly four) clusters of human errors, comprising more than 16 typical human errors. Assuming, like others, a classical action scheme, which appears as a control loop between the environment (the machine) and the human operator, human errors are distinguished with respect to
• goal elaboration,
• decision processing,
• control of action, and
• internal cognitive organization.
The usage areas of this idea are outlined and demonstrated with various examples.

The verbal classification of Dörner is, from a system-theoretic point of view, a descriptive one; no interpretation about the cause of human errors has to be made. The difficulty to be solved is that the psychological understanding of the logic of errors (Dörner, 1989) has to be formalized. The goal of the formalization is to make the situations readable for automata. If this is possible, the machine behavior as well as the behavior of the human operator can be supervised. Therefore, the assumption is that the knowledge about the logic of action (coming from the task) has to be known completely by the supervising machine, or the internal logic of connected actions has to be known; in the case of incomplete knowledge, unknown interactions have to be supervised based on the internal sequence logic.

In (Söffker, 2004), the translations of the 'Dörner' errors 'No-part-goal elaboration', 'Establishment of fix goals', 'Rigidity', 'Magic hypothesis', 'Central reduction', and 'Side- and wide effects' into the SOM-approach are given in detail; in (Söffker, 2001) all human errors (of Dörner) are discussed. The error 'Rigidity' is shown in Fig. 5 as an example to illustrate the idea. Due to external effects, the planned and desired situation S2 is not reached; instead, the different situation S2a results. The human error 'Rigidity' means that the unreflected use of O2 will not lead to the desired goal due to the changed structural situation S2a: the known and experienced operator O2 is applied (as in the planned sequence), although the assumptions for its execution are not fulfilled. Using the SOM-approach this can easily be expressed graphically.

Fig. 5. Graphical illustration of the error 'Rigidity'

Based on this abstract and detailed distinction, it becomes very clear that
• human errors within the HMI-context are individual errors from the logical point of view, but can also be understood as classifiable from a structural point of view,
• a very general interaction model may be possible, including a detailed but generalizable structure and its changes, and
• a general qualitative modeling approach describing and detailing the HMI can be developed as in (Söffker, 2001).
Understanding the HMI by detailed modeling also includes the possibility of sequence analysis or, in other words, the possibility to examine the logic of interaction.

3. CONCEPT OF AUTOMATED SUPERVISION OF THE HUMAN-MACHINE-INTERACTION

The core idea of the approach is
• to examine, within a narrow time window, the transition between situation and operator and the following sequence,
• to analyze the line of clustered interaction sequences following a goal which is assumed as an obvious interaction goal, and (if possible)
• to search for internal connections and relate them to the introduced human error clusters.
In this way both sides of the HMI can be considered by examining the logic of the situation trajectory. Therefore, the following modules have to be realized:
• Generation of the situation vector. This module can easily be realized with existing signal- and/or model-analysis tools combining continuous, discrete, and logical information in the way the dependent system properties/states appear, which can be understood as characteristics.
• Description of operators for the description of usual actions. This module can easily be realized with the knowledge of the operators and system designers. Monitoring these internal connections gives a lot of information.
• Analysis of sequences of operators. This module can be realized using existing neuro-/neuro-fuzzy approaches.
• Additional features, if necessary.

The complex, dynamical scenes in the real world are described using the SOM-approach, resulting in a sequence of situations, assuming a sequence of actions in the real world as the basic pattern of the dynamics (cf. Fig. 6). The mapping of the logic of action (goal-oriented human operation) and the mapping of human errors enable the analysis of the interaction. The basic patterns of the logic of action and of human errors, as well as the basic patterns of the dynamics and inner structure, represent the invariant or constant part of the interaction on an abstract level, whereas the changing parameters are system-specific and application-dependent. With these assumptions it is possible to model the complexity of open systems completely and to keep the complexity manageable for the supervision of human operators.

Fig. 6. Structuring the complexity of the interaction

4. EXAMPLE OF AUTOMATED SUPERVISION

In the following example, the system 'human-driven vehicle - dynamically changing environment' and the interaction between both parts are described. Different goals, e.g. "get ahead as fast as possible" or "drive on the right lane and get ahead at a suitable rate", are possible. First, the scene is modeled as a situation by defining its characteristics. Then the set of actions the driver can perform is specified and modeled as operators. The passing maneuver provides enough complexity, but on the other hand it is not too complex to get lost in details. In Fig. 7, the whole concept of automated supervision, with the passing maneuver as example, is illustrated. On the sensory level, the relevant facts of the real world are sensed to set up a characteristic vector as situation description on the processing level. The operator representing the current action of the driver is selected from the basisoperator library. With the situation description and the actual operator, a metaoperator representing the goal-oriented action of the driver is chosen from the metaoperator library. On the analysis level, the assumptions of the operator are checked to decide whether the operator is applicable. Furthermore, models of typical human errors are summarized in an error classification library. The operator and error libraries can be replaced or extended for the automated supervision of other systems, while the overall structure remains unaltered. With this formalization the action sequence of the driver can be checked for
• consistency (checking assumptions),
• typical human errors, and
• goal conflicts,
so that the driver can be informed if an error or conflict is detected.

Fig. 7. Concept of automated supervision of human drivers' lane-change maneuvers

4.1 From Scene to Situation

The situation of the lane-changing car is described by the seven characteristics illustrated in Fig. 8; in this case, no relations are specified. The data compression is realized by extracting the parameters of the seven characteristics from the time-discrete and partly continuous raw sensor data. Possible parameters of the characteristics are denoted in square brackets. In the module "Setting up situation", different algorithms are used to calculate the parameters of the characteristics, which have discrete values and are event-discrete. For example, to calculate the parameter of the characteristic lane change possible, the distances and velocities of the vehicles in front and behind on the left and right lane are used. With this sensor data the safety distances, specified e.g. in (Mayr, 2001), can be calculated to determine the parameter of the characteristic. The provided sample algorithms only illustrate the extraction process and may be replaced by more sophisticated algorithms, e.g. neural networks or fuzzy logic, to model the structure of a scene on a higher abstraction level. In (Ahle and Söffker, 2004), one possible extraction of characteristics is given for a car on a three-lane road.

Fig. 8. Compression of the sensor data by extraction of the parameters of the seven characteristics

4.2 From Action to Operator

The possible actions of the driver are modeled as operators, which are stored in the basisoperator library (cf. Fig. 9). From the set of basisoperators such as brake, drive, and accelerate, metaoperators are constructed, e.g. normal lane change to right or fast passing. Parts of the libraries developed in (Blötz, 2005) are shown in Fig. 9 and Fig. 10. For each operator, the assumptions required to apply it are denoted on the left-hand side, while its effects are stated on the right-hand side, distinguishing whether a change is deterministic (solid line) or probabilistic (dashed line).
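A basisoperator library and the selection of the operators that are applicable in a given situation can be sketched as follows. The operator and characteristic names follow the example; the numeric 'gain' used to rank operators for the goal "get ahead as fast as possible" is an illustrative assumption, standing in for the goal translation described in the paper.

```python
# Hypothetical basisoperator library for the passing-maneuver example.
# 'gain' is an assumed proxy for how much an operator serves the goal
# "get ahead as fast as possible" (largest increase of actual velocity).
BASISOPERATORS = {
    "accelerate": {"assumptions": {"acceleration possible": "yes"}, "gain": 2},
    "drive":      {"assumptions": {"actual lane free": "yes"},      "gain": 1},
    "brake":      {"assumptions": {},                               "gain": 0},
}

def applicable(situation):
    """Operators whose explicit assumptions hold in the given situation."""
    return [name for name, op in BASISOPERATORS.items()
            if all(situation.get(c) == v
                   for c, v in op["assumptions"].items())]

def ranked_for_speed_goal(situation):
    """Order the applicable operators for the speed goal (highest gain first)."""
    return sorted(applicable(situation),
                  key=lambda n: -BASISOPERATORS[n]["gain"])

s = {"acceleration possible": "yes", "actual lane free": "yes"}
print(ranked_for_speed_goal(s))  # prints ['accelerate', 'drive', 'brake']
```

Changing the goal would only change the ranking function, not the library or the applicability check, which mirrors the exchangeability of the libraries in the overall concept.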
Fig. 10. Metaoperators for automated supervision of the passing maneuver

Fig. 11 illustrates the operating mode of the automated supervision, considering the passing maneuver as example. In the first scene, the considered car is on the right lane of a three-lane road with a speed limit and cannot accelerate because of a vehicle in front. A lane change to the left to overtake this vehicle allows an acceleration until the speed limit is reached. Generally, all actions modeled as operators in the basisoperator library can be chosen by the driver; taking the assumptions of the operators into account, only the operators in the third column can be chosen. For the given goal "get ahead as fast as possible", which is translated into the SOM description as the requirement that the parameter of the characteristic actual velocity be as large as possible, the applicable operators are ordered in the last column. To reach the given goal, the operators accelerate or drive should be used, i.e. the parameters of the characteristics acceleration possible and actual lane free should be yes. If the goal is changed, the order of the operators in the last column changes. The operator of the executed action is framed. The metaoperator of the whole sequence is normal lane change to left.
5. CONCLUSION

The new aspect of this contribution is the unified view on human-machine systems (HMS), and especially on HMI, achieved by applying a modified and suitably adapted situation-operator modeling technique and its application to the automated monitoring of human operators. Using the introduced approach it is possible to describe those parts of HMS which are characterized by knowledge-guided HMI. The advantage of this concept is that the uniform formulation leads to a modular and easily extendable automaton due to the structuring by the different abstraction levels. A full integration of the proposed automated supervision would lead to an autonomous system that can perform the supervised task on its own. The approach allows a formalizable strategy to analyze the logic of interaction. The example of a human-driven car interacting with an unknown, dynamically changing environment shows in detail how the approach can be applied to real systems. By closing the gap between the pure signal level and the complex situation interpretation level, a human-like complex scene interpretation module, applied to an open and complex but formalizable and closed environment, is developed. The next step, besides the practical application, is the extension to library-less applications.
Fig. 9. Basisoperators for automated supervision of the passing maneuver
Fig. 11. Operating mode of the proposed automated supervision considering the passing maneuver as example

REFERENCES

Ahle, E. and D. Söffker (2004). Modeling the Decision Process of the Lane-Change Maneuver Using a Situation-Operator-Model. In: Proc. IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 1106-1110.
Blötz, N. (2005). Konzeption eines Überwachungsautomaten für Überholvorgänge im Straßenverkehr [Design of a supervision automaton for passing maneuvers in road traffic]. Diploma Thesis, Chair of Dynamics and Control, University Duisburg-Essen.
Cacciabue, P.C. (1998). Modelling and Simulation of Human Behavior in System Control. Springer series: Advances in Industrial Control, Springer, Berlin, Heidelberg, New York.
Dörner, D. (1989). Die Logik des Misslingens [The Logic of Failure]. Rowohlt, Reinbek.
Mayr, R. (2001). Regelungsstrategien für die automatische Fahrzeugführung: Längs- und Querregelung, Spurwechsel- und Überholmanöver [Control strategies for automatic vehicle guidance: longitudinal and lateral control, lane-change and passing maneuvers]. Springer-Verlag, Berlin.
McCarthy, J. and P.J. Hayes (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In: Machine Intelligence 4 (B. Meltzer, D. Mitchie and M. Swann (Eds.)), 463-502, Edinburgh University Press, Edinburgh.
Rasmussen, J. (1983). Skills, rules, knowledge: Signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13, 257-267.
Reason, J. (1990). Human Error. Cambridge University Press, Cambridge, UK.
Söffker, D. (2001). Systemtheoretische Modellbildung der wissensgeleiteten Mensch-Maschine-Interaktion [System-theoretic modeling of knowledge-guided human-machine interaction]. Habilitation Thesis, University of Wuppertal, Germany, 2001; published by Logos Wissenschaftsverlag, Berlin, 2003.
Söffker, D. (2004). System-theoretic Understanding of MMI - Part II: Concepts for Supervision. In: IFAC Symposium Analysis, Design, and Evaluation of Human-Machine Systems, Atlanta.