
Automatica, Vol. 26, No. 6, pp. 1025-1034, 1990. Printed in Great Britain. © 1990 International Federation of Automatic Control. Pergamon Press plc.

A Model of Operator Behaviour for Man-Machine System Simulation*

P. C. CACCIABUE,†§ G. MANCINI† and U. BERSINI‡

A model of plant operator behaviour, which simulates the cognitive processes leading to decision making and to the implementation of strategies during man-machine interactions, contributes to the performance of safety studies, to the design of control system architectures and to the development of decision support system technology.

Key Words--Cognitive systems; fuzzy logic; hierarchical systems; human factors; psychological methods.

* Received 7 May 1989; revised 16 December 1989; received in final form 4 January 1990. The original version of this paper was presented at the Third IFAC/IFIP/IEA/IFORS Symposium on Man-Machine Systems, Analysis, Design and Simulation, held in Oulu, Finland, during June 1988. The Published Proceedings of this IFAC Meeting may be ordered from: Pergamon Press plc, Headington Hill Hall, Oxford OX3 0BW, U.K. This paper was recommended for publication in revised form by Associate Editor T. Sheridan under the direction of Editor A. P. Sage.
† Commission of the European Communities, Joint Research Centre Ispra, 21020 Ispra (Va), Italy.
‡ IRIDIA, Université Libre de Bruxelles, Av. Franklin Roosevelt 50, Bruxelles, Belgium.
§ Author to whom all correspondence should be addressed.

Abstract--In this paper, a model of plant operator behaviour is proposed in the context of an architecture for simulating human-machine interaction. In such a model, the cognitive processes leading to decisions, as well as the execution of strategies, are simulated in detail for the study of the management of a plant in incidental conditions. The model foresees the representation of two cognitive levels of reasoning and decision making, namely the High Level Decision Making (HLDM), which allows one to exploit the operator's knowledge by continuously recognising situations and by building supervisory and control strategies, and the Low Level Decision Making (LLDM), supported by the working and conscious memory dynamics, by which the operator implements a preprogrammed response or a planned strategy in order to satisfy a clearly defined intention. The details of the formalisms and methodologies implemented in the model are described, and possible applications in the fields of design and safety of complex plants are discussed.

INTRODUCTION

THE APPROACH to design and licensing of plants, in many fields of current technology, has changed during the past decades according to a number of factors. These mainly relate to the increasing level of complexity of the plants, the extensive use of automation, the introduction of powerful computers in the control room and, also, to a few but highly significant accidents. The most important factor affected by such extensive innovations is the interaction of the operator with the machine he is supposed to control/supervise. This implies the exploitation of computerised control instrumentation and of a variety of support systems for diagnosing, planning and executing complex procedural sequences. The actual contribution of human performance to the management of plants, and the role of modelling such behaviour for the analysis of man-machine systems, have been debated by many authors (Edwards and Lees, 1974; Pew and Baron, 1983; Rouse, 1983; Sheridan, 1986; Rasmussen, 1986; Stassen et al., 1988), focusing on the erroneous or inappropriate behaviour of operators and on the implications that adequate modelling must have at all levels of design. From the modelling viewpoint, a crucial aspect of the problem is the appropriate balancing and interfacing between the two components of the Man-Machine System (MMS), i.e. the human and the machine models (Mancini, 1986), while, from the simulation perspective, the focus rests on the search for logical/mathematical formulations for representing the cognitive mechanisms of human understanding of physical phenomena, learning and beliefs, as well as the generation of errors (Rasmussen et al., 1987). Many attempts to model human behaviour have been performed in a behaviouristically oriented perspective, i.e. decomposing the overall behaviour of the operator into a sequence of different elementary acts or subtasks and assigning, to each of these, a certain probability of failure.


This type of approach, although very efficient in terms of quantification, has been questioned by many authors (Rasmussen, 1981; Reason, 1986; De Keyser, 1987; Decortis, 1988), mainly on the basis of psychological considerations, which imply that, in a behaviouristic view, only the consequences of human errors are accounted for, without concern for the reasons and the underlying mechanisms from which they stem. The counterpart to the behaviouristic approach is the cognitive modelling perspective, which has to be considered a step forward, in that it attempts to model the mental processes as well as the motor behaviour of the operator in a deterministic way, combining psychological considerations with logical formalisms and decision making theories (Bainbridge, 1986; Rasmussen, 1986; Woods and Roth, 1986; Bersini et al., 1988).

In this paper, a cognitive model of plant operator behaviour is proposed, whereby the processes leading to decisions, as well as the execution of strategies, are simulated in detail for the study of the management of a plant in incidental conditions. In Section 2, the general architecture of the man-machine simulation and the main features of the models are described, focusing on the operator model. Then, in Section 3, the details of the methodologies applied for the different parts of the cognitive model are discussed. Section 4 describes the implementation of the error mechanisms and some examples of applications currently being developed on dedicated hardware. Finally, Section 5 contains the final remarks, including directions for future research.

ARCHITECTURE OF THE MAN-MACHINE MODEL

The process of the man-machine interaction is essentially composed of the decision making and action of the human being and of the plant's dynamic response to the demands of the operator. Thus, simulating the behaviour of operators of process plants in transient conditions implies, primarily, the modelling of cognitive processes, within the environmental constraints in which they are activated, and then the consideration of human biases and mechanical failures occurring during the man-machine interaction. This leads to a representation of the man-machine system whereby three main components are considered (Fig. 1): on the one hand, the human and the machine, which are modelled in a consistent and balanced proportion; and, on the other hand, an interface mechanism, a "driver" of the simulation, which dynamically handles the data transfer and the communication between the models of the human and the machine.

FIG. 1. Representation of the man-machine system.

The machine simulation requires that the physical behaviour of the plant, in terms of material dynamics, and the control system, in terms of information displays and actuators, are modelled using classical conservation equations and correlations. This modelling aspect of the architecture is of less concern to the present work, because it is based on physical principles and uses analytical and numerical techniques already well known and developed.

The driving mechanism in our architecture is based on the DYLAM methodology (Amendola, 1987), which is a framework capable of managing systems with discrete-state "components", continuous variables and operational task elements interacting with each other, as well as with the physical evolution of the plant. Using DYLAM terminology, the "components" in our approach are the two actors of the man-machine system, i.e. the actual plant, with its instrumentation and interfaces, and the operator, in terms of decision making and performance of actions. Formally, DYLAM lies in the background of the architecture and ensures that the communications between the two "components", as well as the behaviour of a component itself, are the outcome of the logical-stochastic process which generates errors and malfunctions. What DYLAM needs are the relations, expressed in terms of error dependency, existing:

(a) among "components" of the man-machine system, in order to account for inter-component dependencies;

(b) between "components" and the physical environment or the physical evolution of the transient, in order to account for functional dependencies; and


(c) between "components" and time constraints, in order to consider stochastical failures by means of transition rates. Using such relations, D Y L A M offers the possibility of either performing, systematically, an exhaustive series of sequences; or of driving a simulation through an unplanned and random sequence, keeping track of the state behaviour of the "components". The role of the DYLAM technique, as driving mechanism, will be further clarified when discussing the details of the error generation algorithms for the operator model. Although the overall man-machine architecture is based on a balanced simulation between the models of the plant physical behaviour and of the human cognitive responses, this work focuses on the latter and, consequently, we will concentrate, in the remaining part of the paper, on the human model and on the human error making mechanism.

The human component

While many existing techniques focus separately on models of detection, planning, diagnosis or execution, with suitably different formalisms (Rouse, 1983), in the present model the main tendency is towards an integrated simulation, without a clear-cut separation between different phases. This means that the human activities imply a continuous interaction of the operator's planning-execution-assessment processes with the physical evolution of the plant. In this way, modifications of a plan or of a strategy can be carried over during the course of the man-machine interaction, keeping track of the past events and the previous decision processes, for further recoveries or for more systematic plan changes.

In previously developed models (Cacciabue and Cojazzi, 1986), a plant safety perspective was considered by means of a "passive" model, whereby the actions of the operator were only related to their consequences on the plant and therefore identified in an "a priori" analysis. On the contrary, the present model can be considered "active", in the sense that the operator actions are dynamically identified by the actual situation assessment and by the reasoning about the system evolution.

The modelling framework of the operator simulation is shown in Fig. 2. Within this frame, two cognitive levels of reasoning and decision making are foreseen. On the one hand, the assessment of a situation (diagnosis) and the formulation of a strategy (planning) are considered as "High Level Decision Making" (HLDM) processes, because they imply long-term planning as well as the analysis of the plant as a whole and, possibly, reasoning about the evolution of physical phenomena.

FIG. 2. Framework of the human model.

HLDM represents a reasoning process without manual interaction with the control system: here the operator observes the evolution of the transient and develops the initial strategy of actions. On the other hand, the implementation of a preprogrammed response, or of a planned strategy, in order to satisfy a clearly defined intention, is actually carried over by the "Low Level Decision Making" (LLDM) model. Here the interaction with the machine is dual, in the sense that plant behaviour data and operator actions develop on a short time scale and on localised parts of the plant, with the objective, for the operator, of executing and optimising the selected strategy.

The mechanisms of error detection and recovery are implicitly considered as feedbacks or results of the various ongoing processes within the HLDM and LLDM levels. In particular, recovery occurs either by modifying the ongoing sequence of actions, performing alternatives at the LLDM level, or by resorting to a new plan, reasoning at the HLDM level.

At the psychological level, the HLDM architecture borrows basic concepts developed by Reason (1986, 1987a) in various works, whereby the content of the knowledge base of a human being is exploited via the basic mechanisms of similarity matching (SM) and frequency gambling (FG), leading to the cognitive processes of recognition and/or planning. Given that intentionality is a fundamental aspect of such processes, the formulation of sequences of intentions or goals, and the ordering of goals in a hierarchical goal-oriented structure (Bainbridge, 1986; Rasmussen, 1985), is the general framework within which the strategies of operators are developed and carried over within the LLDM model.


Moreover, the simulation of the attainment of a goal during the flow of events is based on fuzzy logic, which is a well-suited theory for representing the operator's approximate knowledge (Gupta and Sanchez, 1982) and allows one to create a semantic interface between the system simulation and the operator cognitive model. In the following, a detailed description is given of the mechanisms and principles which contribute to the interaction of the two levels of decision making.

THE TWO COGNITIVE LEVELS

High Level Decision Making model

There are two fundamental components of the model, and two basic mechanisms are envisaged as critical activators of cognitive processes (Fig. 3). The two components are the Working Memory (WM) and the Knowledge Base (KB), which represent respectively:

WM: the "workspace" where the mental schemata are internally, consciously and laboriously processed; and

KB: a vast "repository" of frames or schemata of different natures and distinct levels of compilation.

The two mechanisms which ensure the unfolding of the cognitive processes by the interaction of WM and KB are recognition and planning.

Working Memory. The WM is subdivided into two parts: a Focal WM (FWM) and a Peripheral WM (PWM). The environmental data, i.e. the external cues or cues of the world, arrive in the PWM. Similarly, internal cues may be generated by some analytical reasoning or cognitive process.

FIG. 3. Architecture of the HLDM model.

All cues that reach the PWM are processed by a cognitive filter, whereby a selection mechanism is activated in order to evaluate admittance to the FWM. This mechanism is governed by a variety of priority principles, such as visual dominance, change detection and coherence, which are summarised in the variable Salience (S) of cues. The cues that gain admittance to the FWM become the "calling conditions" for exploring and exploiting the Knowledge Base by the mechanisms of recognition and planning.

Knowledge Base. The KB is structured in frames or schemata which represent the operator's knowledge about the plant in the form of geometrical structure (connections, locations, states of components), process representations (variable behaviours, causal relations, functional landmarks and thresholds) and control sequences (actions, tasks, procedures and their respective effects). Two types of frames are distinguished. The first, called knowledge-frames, describe the processes and the structure of the system only in terms of general physical and engineering principles, as well as rules of thumb. The second, called rule-frames, account for all the operator's interactions with the system, in that they encompass pre-defined plans of actions for different situations. Experience and learning are responsible for a compilation mechanism which leads to formalising the content of frames into procedures. Each frame, or part of a frame, is a "content addressable" knowledge unit, in the sense that its elements can match WM calling conditions.

A large range of rule-frames is assumed to be present in the KB of the operator. The frames comprise a "state label", which relates to a specific event, a set of "attributes", which are the values associated with diagnostic signs, and a "frequency tag", which represents the frequency or recency of encounter of the event. In particular, the attributes are generally described in linguistic and vague terms, such as "temperature increases", "low pressure", etc. Fuzzy set theory is the most suitable theory for representing the semantic interface between the system dynamics and the operator reasoning models. Therefore, the values of the attributes are represented by fuzzy sets, which depend on the operator's expertise and on the plant design.

The cognitive processes by which the products of the stored knowledge units are brought into the WM are:

--similarity matching between the "calling conditions" and the "attributes" of KB schemata; and

--frequency gambling for selecting a final candidate among a number of partially matched candidates.
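As an illustration, a rule-frame of this kind can be encoded as a record carrying a state label, fuzzy-valued attributes and a frequency tag. The sketch below is ours, not the authors' LISP/KEE structures; all names and trapezoid parameters are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative encoding of a rule-frame: a state label identifying the
# event, fuzzy attributes given as trapezoids (a, b, c, d) over the
# diagnostic signs, and a frequency tag for frequency/recency of encounter.

@dataclass
class RuleFrame:
    state_label: str
    attributes: dict = field(default_factory=dict)  # sign -> (a, b, c, d)
    frequency_tag: float = 0.0

# A hypothetical frame for a "loss of feedwater" event: "low level" and
# "flow decreases" expressed as fuzzy sets over normalised variables.
frame = RuleFrame(
    state_label="loss of feedwater",
    attributes={
        "steam_generator_level": (0.0, 0.0, 0.15, 0.30),   # "low level"
        "feedwater_flow_trend": (-1.0, -1.0, -0.3, -0.1),  # "flow decreases"
    },
    frequency_tag=0.8,
)
```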

Recognition. The operator is a furious pattern matcher and, when confronted with a new situation, will essentially activate the cognitive schemata possessed in the KB in order to deal with the situation. The recognition of situations implies the recall into the focal working memory of well-known and familiar plans of actions: this is the process of activation of rule-frames. By similarity matching, the cues coming from the system and perceived via the cognitive filter are matched with the attributes of the stored candidate events (Fig. 4). The number of cues that reach the operator's attention is a small subset of the available ones and, consequently, more than one frame is likely to be selected. Indeed, a small number of cues is likely to match the attributes present in a considerable number of different frames. In order to be coherent with the description of the attributes and with the perceptual approximation, the technique implemented to match like with like is a fuzzy pattern matching. The selection among partially matched frames, i.e. the potential candidates identified by the recognition process of SM, is performed on the basis of frequency or recency of encounter: the frequency gambling primitive.
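A minimal sketch of this two-step selection, under the frame encoding assumed above; the membership function anticipates the trapezoid of equation (1) below, and the match threshold is hypothetical.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership (anticipating eq. (1) below)."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def similarity_match(cues, frames, threshold=0.5):
    """SM: keep the frames whose attributes are sufficiently matched by the cues."""
    candidates = []
    for f in frames:
        shared = [s for s in cues if s in f.attributes]
        if not shared:
            continue
        # degree of match: mean membership of cue values in the fuzzy attributes
        match = sum(trapezoid(cues[s], *f.attributes[s]) for s in shared) / len(shared)
        if match >= threshold:
            candidates.append(f)
    return candidates

def frequency_gamble(candidates):
    """FG: among partially matched frames, the highest frequency tag wins."""
    return max(candidates, key=lambda f: f.frequency_tag, default=None)
```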

FIG. 4. The recognition process.

In the model, the frame which is selected at the end of this initial process of screening presents the highest frequency tag among the ones selected by the SM mechanism: this frame is called the Currently Instantiated Frame (CIF). The first CIF is totally dependent on the data perceived from the environment, and thus the mechanism which leads to it is a data driven process.

The data driven process, governed by the two primitive mechanisms of SM and FG, is complemented by the powerful confirmation bias principle, which describes the strong tendency of the operator to check for confirmation of prior hypotheses. This principle implies the consideration of a feedback loop, whereby a search for the information most relevant to the CIF is activated and, therefore, a mechanism of the so-called schema driven process is introduced. The schema driven process is performed through the calculation of the Diagnosticity (D) index, which represents an economical approximation of the information content of the CIF's attributes (Masson, 1989). All attributes are ranked, and those receiving the highest score are selected for confirmation of the CIF. The search for schema driven data is repeated until the fuzzy pattern matching mechanism is adequately satisfied. In this way, the abilities of the operator to reason and to search for confirmation are accounted for at the same time. This feedback loop completes the process of recognition and leads to the selection of a CIF and of the relative Goal structure to be performed during the execution phase of the Low Level Decision Making part of the model.

Planning. The simulation of experience acquisition and learning allows the formalisation of the complex process of planning within modelling boundaries (Searle, 1980). Indeed, in the case of experienced operators, when a very high number of procedures and recovery strategies are known, three approaches to the decision are followed:

--either strongly proceduralised rule-frames are addressed, via a few specific environmental cues, containing adequate plans to be immediately implemented; or,

--by an analogical process, rather than a logical one, the operator adapts to the actual situation known plans which worked correctly in analogous circumstances; or, finally,

--if a plan has to be built, the parts of precompiled plans which may be adapted to the current situation are immediately restored in the working memory and included in the strategy being planned.

This last reasoning mechanism makes use of the concept of content addressable parts of frames, which allows the use of specific "slots" of a compiled plan in the context of other plans or strategies being built.


"slots", of a compiled plan, in the context of other plans or strategies being built. In the case of a less expert behaviour, more frequently, a real planning process develops, whereby logical links amongst the plant components and inferential reasoning are widely exploited. In the model, the conditions that lead to planning, as alternative and complenentary mechanism to recognition, are encountered when: (a) no rule-frame, accessible in the knowledge base, matches the calling conditions selected in the focal working memory; and (b) the time pressure is not too high and it allows an inductive process to take place. In this case, the slow, laborious and serial mental work of planning by means of knowledge-frame is simulated. This part of the modelling architecture represents one of the most difficult steps in the development of the model. Indeed, it is still under detailed study and revision, before the actual formalisation by means of logical and mathematical expressions is implemented in the simulation (Cacciabue et al., 1988).

Low Level Decision Making model

The implementation of the plan, either contained in a rule-frame or developed during the planning stage, is carried over in the Low Level Decision Making model (LLDM). The structure that supports the whole LLDM is called the FUzzy Goal Oriented Script (FUGOS) (Fig. 5). In a FUGOS, the direct man-machine interaction, at the lowest level of the control loop, is reproduced by a hierarchical goal oriented structure.

FIG. 5. A FUzzy Goal Oriented Script.

Fuzzy logic is the mechanism by which the navigation through the FUGOS is exploited. Starting with a main intention or "Top-Goal", the architecture consists of a simple hierarchical network where the different sub-goals and sub-tasks to be performed by the operator are schematically arranged in a "tree" type structure and are linked to each other by different gates, such as "AND" and "OR". A "goal" is an element of the network at any level. An "act" is a last elementary goal of the network, i.e. an elementary action that the operator performs. A "task" is the sequence of acts that the operator has to perform in order to attain a certain goal at any level. The interaction of the operator with the plant is simulated as a sequential travelling in a FUGOS, where the operator executes elementary acts in order to gradually satisfy goals at different levels of the hierarchy. A similar architecture accounts for the operator's monitoring, detection and low level recovery strategies (presence of "OR" gates).

Each goal is characterised by a certain number of parameters, which regulate the unfolding of execution. They are:

--the degree of priority (GDP), which expresses the measure of sequentiality between goals; this priority distribution results from the planning process, taking into account the potential interactions amongst goals;

--the degree of membership (GDM), which defines the measure of the dependency between a goal and its directly superior goals; this degree is also a consequence of the planning process and represents a measure of uncertainty in the mechanism of decomposition of a goal into different sub-goals;

--the degree of satisfaction (GDS), which represents the correlation between the result of a specific goal and the operator's expectancy; and, finally,

--the degree of certainty (GDC), which represents the measure of the attainment of a goal.

The two parameters GDC and GDS are evaluated during the actual execution of the selected strategy. Travelling, in a top-down way, through a FUGOS allows the model to select the acts to be executed. At any level the operator attends to the goal of highest priority. The attainment of a goal is measured by the GDS and GDC parameters, which are governed by the "fuzzy feedback mechanism"; a sketch of such a goal structure is given below.
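A minimal encoding of a FUGOS node under these definitions; this is our illustration, and the field names and the attainment threshold are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative encoding of a FUGOS node: goals linked to sub-goals through
# "AND"/"OR" gates and carrying the four parameters GDP, GDM, GDS, GDC.

@dataclass
class Goal:
    name: str
    gate: str = "AND"                      # gate towards the sub-goals
    sub_goals: List["Goal"] = field(default_factory=list)
    gdp: float = 0.0                       # degree of priority (from planning)
    gdm: float = 1.0                       # degree of membership w.r.t. parent
    gds: float = 0.0                       # degree of satisfaction (run-time)
    gdc: float = 0.0                       # degree of certainty (run-time)

    def next_goal(self, attained=0.75) -> Optional["Goal"]:
        """Attend to the not-yet-attained sub-goal of highest priority."""
        pending = [g for g in self.sub_goals if g.gds < attained]
        return max(pending, key=lambda g: g.gdp) if pending else None
```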

The following steps are performed. GDS is evaluated as the result of matching the goal expectancy and the real behaviour of the related indicators, expressed by means of a trapezoidal membership function of the type:

$$\mathrm{GDS(goal)} = \begin{cases} 0 & \text{for } x < a \\[2pt] \dfrac{x-a}{b-a} & \text{for } a \le x < b \\[2pt] 1 & \text{for } b \le x \le c \\[2pt] \dfrac{d-x}{d-c} & \text{for } c < x \le d \\[2pt] 0 & \text{for } x > d \end{cases} \qquad (1)$$

in case of static estimations, and:

$$\mathrm{GDS(goal)} = f_{\mathrm{trapez}}(\mathrm{d}x/\mathrm{d}t;\ a, b, c, d) \qquad (2)$$

in case of dynamic estimations.
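Using the trapezoid helper sketched earlier, equations (1) and (2) amount to the following; the parameter values and indicator readings are illustrative only.

```python
# Illustrative GDS evaluation with the trapezoid helper sketched earlier.
# Parameter values (a, b, c, d) and indicator readings are hypothetical.

# Static estimation, eq. (1): membership of the current indicator value.
gds_static = trapezoid(x=0.25, a=0.0, b=0.2, c=0.4, d=0.6)    # -> 1.0

# Dynamic estimation, eq. (2): the same shape applied to the rate of
# change dx/dt, here approximated by a finite difference over one step.
dx_dt = (0.30 - 0.25) / 1.0
gds_dynamic = trapezoid(dx_dt, a=0.0, b=0.02, c=0.08, d=0.1)  # -> 1.0

GOAL_ATTAINED = 0.75  # hypothetical pre-established threshold on GDS
if gds_static >= GOAL_ATTAINED:
    pass  # goal attained: tackle the next goal in order of priority
```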
When GDS is greater than a pre-established threshold, the goal is considered attained and the next goal in the tree structure is tackled, in order of priority. If GDS is below the pre-established threshold value, then GDC is evaluated in terms of the GDS of the goal itself and of the GDC and GDM of the connected sub-goals. Assigning two weighting factors, x to GDS and y to the sub-goals' GDC and GDM, and using the fuzzy logic dual concepts of necessity (N) and possibility (Π), the expression of the GDC of a goal is:

$$\Pi = \max\left[\min\left(x,\ \mathrm{GDS(goal)}\right),\ \min\left(y,\ \mathrm{GDC^*(sub\text{-}goals)}\right)\right] \qquad (3)$$

$$N = \min\left[\max\left(1-x,\ \mathrm{GDS(goal)}\right),\ \max\left(1-y,\ \mathrm{GDC^*(sub\text{-}goals)}\right)\right] \qquad (4)$$

$$N \le \mathrm{GDC(goal)} \le \Pi \qquad (5)$$

where:

$$\mathrm{GDC^*(sub\text{-}goals)} = \max_{j=1,\dots,k}\left\{\min\left[\mathrm{GDC(sub\text{-}goal}_j),\ \mathrm{GDM(sub\text{-}goal}_j)\right]\right\} \qquad (6)$$

in case of an "or" gate connecting the goal with its sub-goals; or:

$$\mathrm{GDC^*(sub\text{-}goals)} = \min_{j=1,\dots,k}\left\{\max\left[\mathrm{GDC(sub\text{-}goal}_j),\ 1 - \mathrm{GDM(sub\text{-}goal}_j)\right]\right\} \qquad (7)$$

in case of an "and" gate connecting the goal with its sub-goals.

Thus, given equation (5) and assuming a threshold value D for the GDC parameter, three different situations can occur:

1. N > D: the operator is satisfied and carries on with the execution of the tree;

2. Π < D: the operator detects a problem in the tree execution;

3. N ≤ D ≤ Π: this situation is symptomatic of conflicting information; for instance, the operator is certain of his previous actions (GDC* = 1) but the expectancy of the current goal is not satisfied (GDS = 0).

When condition 3 is encountered, the operator is caught in a dead-end condition and no further progression in the goal structure is made. Indeed, the operator waits for some new information to reach his attention, in order to help him decide whether:

--to neglect one of the two previous pieces of information (x or y = 0); or

--to give more credit to the possibility value Π, accounting, in this way, for the redundancy of the information and keeping the most satisfactory piece; or

--to consider the necessity value N, accounting for the coherence of the information.

By this approach it is possible to model the various degrees of confidence experienced by the operator during the management of the accidental sequence. Indeed, the two weighting factors x and y represent the relative importance given by the operator to the information concerning the current goal vs the information obtained from the previously achieved goals. These data can in principle be elicited from operators and represent the degree of variability in the credibility associated with the instrumentation and control by different operators. Moreover, an important aspect of these two weights is that they are dynamic variables, and thus they can also represent the operator's change of opinion during the evolution of the transient itself.
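Read as code, equations (3)-(7) and the three situations could look like this; this is our sketch, reusing the hypothetical Goal node from above.

```python
# Our sketch of equations (3)-(7) and of the three situations. x and y
# are the dynamic weighting factors; threshold_d is the threshold D on GDC.

def gdc_star(goal):
    """GDC* over the sub-goals: eq. (6) for an "or" gate, eq. (7) for "and"."""
    if goal.gate == "OR":
        return max(min(g.gdc, g.gdm) for g in goal.sub_goals)
    return min(max(g.gdc, 1.0 - g.gdm) for g in goal.sub_goals)

def certainty_bounds(goal, x, y):
    """Possibility Pi and necessity N bounding GDC(goal), eqs. (3)-(5)."""
    star = gdc_star(goal)
    pi = max(min(x, goal.gds), min(y, star))              # eq. (3)
    n = min(max(1.0 - x, goal.gds), max(1.0 - y, star))   # eq. (4)
    return n, pi                                          # N <= GDC(goal) <= Pi

def progression(goal, x, y, threshold_d):
    n, pi = certainty_bounds(goal, x, y)
    if n > threshold_d:
        return "continue"   # situation 1: carry on with the tree execution
    if pi < threshold_d:
        return "problem"    # situation 2: a problem is detected
    return "dead-end"       # situation 3: conflicting information, wait for cues
```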

ERROR MECHANISMS AND APPLICATIONS

The human error events are generated during the interaction of the external world with the currently instantiated frame and the basic primitives of cognition. At the basis of the error generation is the concept of under-specification, which consists of an incomplete representation of the world. Two types of under-specification are accounted for:

1. under-specifications of cognition, which are the origin of human related faults; they can be introduced by assigning omissions and inaccuracies in the knowledge base, in the rule-frames as well as in the knowledge-frames; and

2. under-specifications of environment, which are related either to faults of the plant or to inappropriate design of the information system; they lead to unfamiliar or ambiguous signals coming from the external world.


The architecture of the model generates the conditions for error making as a result of the event sequence and of the man-machine interaction, either by propagating automatically through the sequence the biases of human cognition, due to cognitive under-specifications, or by inducing the human error as a consequence of inappropriate information representation, due to environment under-specifications. These errors are managed by the driving mechanism of the simulation, i.e. the DYLAM methodology.

The generally accepted subdivision of errors into slips and mistakes (Norman, 1981), where slips are unintended actions while mistakes are inappropriate intentions, has been further refined according to the stages involved in conceiving and carrying out a planned strategy: mistakes represent defects in the formulation of the strategy, lapses are memory failures during the implementation of intended plans, and slips are imperfections of attentional monitoring (Reason and Mycielska, 1982; Reason, 1987b). The way in which DYLAM manages the error generation is as follows:

1. Slips are generated during the process of execution of a strategy, either by omitting an action or by disregarding new information, under the pressure of the current situation. Slips are due to under-specifications of cognition or environment.

2. Lapses are induced during the processes of execution as well as planning, by setting temporary misinterpretations of the connections between parts of plants or physical phenomena. Lapses are due to under-specifications of cognition combined with under-specifications of environment.

3. Mistakes are generated only during the planning process, as the result of an incorrect representation, in the knowledge base, of the relations between parts of the plant or between physical quantities, which sustain the inductive reasoning. Mistakes are due uniquely to under-specifications of cognition.

Slips

In order to generate the event of a slip, DYLAM considers the correlation between the probability of a generic omission and a function called "stress-in-action". During the man-machine simulation, the stress function is dynamically evaluated by an empirical correlation:

$$F(t) = Wl(t)\, Tp(t)\, In(t) \qquad (8)$$

where:

--F is the dynamic value of the stress function (fatigue); and

--Wl, Tp and In are respectively the coefficients of workload, time pressure and information gathering.

When F(t) reaches a threshold value, DYLAM imposes the omission of the current action of the operator. In this way, the omission is not defined a priori but, rather, depends on the physical evolution of the accident and on the management procedure selected by the operator himself.

Another, indirect, cause of omissions is related to the Salience of a cue, i.e. the parameter that concentrates the attention-grabbing qualities of an external piece of information. Modifying the salience of some information implies altering the content of the information that reaches the focal working memory, and this implies the modification of the execution of an intended action and/or the formulation of a new plan. DYLAM governs the time variation of the salience according to external parameters, as well as to some stochastic correlation capable of reducing or enhancing the salience with random dependency:

$$S(t) = S_0\, e^{\pm \alpha_{\mathrm{CIF}}(t-t_0)}\, e^{-\lambda (t-t_0)} \qquad (9)$$

where α_CIF is a coefficient related to the Currently Instantiated Frame and λ is the rate of failure of the salience. The term e^(±α_CIF(t−t₀)) represents the enhancement or decay of the attention-grabbing power in relation to the strengthening of a frame instantiated in the FWM: the more a frame is instantiated and confirmed in the FWM, the less is the salience of disconfirming information. The term e^(−λ(t−t₀)) represents the probability that the salience of a cue remains unaltered between time t₀ and time t. When S(t) falls below a threshold value, the relative cue may not be able to pass the cognitive filter: it does not reach the level of the FWM and thus is not perceived.
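The two slip mechanisms can be sketched as follows; this is our illustration of equations (8) and (9), and all numerical values are hypothetical.

```python
import math

# Our sketch of the two slip mechanisms: the stress-in-action function of
# eq. (8) forcing an omission above a threshold, and the salience dynamics
# of eq. (9) gating a cue at the cognitive filter.

def stress(wl, tp, ig):
    """Eq. (8): F(t) from workload, time pressure and information gathering."""
    return wl * tp * ig

def salience(s0, alpha_cif, lam, t, t0, confirming):
    """Eq. (9): salience enhancement (+) or decay (-) combined with the
    exponential term exp(-lambda (t - t0)) for random salience failure."""
    sign = 1.0 if confirming else -1.0  # disconfirming cues lose salience
    return s0 * math.exp(sign * alpha_cif * (t - t0)) * math.exp(-lam * (t - t0))

F_THRESHOLD, S_THRESHOLD = 0.8, 0.4  # hypothetical thresholds

if stress(wl=0.9, tp=1.2, ig=0.9) > F_THRESHOLD:
    print("omission slip: the current action is skipped")
if salience(0.9, 0.3, 0.1, t=5.0, t0=0.0, confirming=False) < S_THRESHOLD:
    print("cue filtered out: it never reaches the FWM")
```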

Lapses

In order to generate the event of a memory lapse, two conditions have to be satisfied: on the one hand, the level of the "stress-in-action" function, F(t), must be above a predefined value; on the other hand, a misunderstanding of the information display system must occur. Within the DYLAM mechanism, this particular combination of circumstances can be accounted for by assuming a certain probability distribution of misreading indicators, in combination with the level of the function F(t). When the conditions of a lapse are recognised, the model maintains the current plan of actions, adapting it to the newly perceived, misinterpreted information. This may lead to an inappropriate sequence of actions and decisions, which is recognised much later during the sequence, if not only after the catastrophic ending of the transient.
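A compact sketch of this double condition, with hypothetical probability and threshold values of our choosing:

```python
import random

# Our sketch of the lapse-triggering condition: stress F(t) above a
# predefined level combined with a probabilistic misreading of an indicator.

def lapse_occurs(f_t, f_min=0.8, p_misread=0.05):
    """A lapse needs both high stress and a misread display indication."""
    return f_t > f_min and random.random() < p_misread

if lapse_occurs(f_t=0.95):
    print("lapse: the current plan is kept but adapted to the misread data")
```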


Mistakes

Although the model of planning is still under development, the mechanism that accounts for imperfections during the inferential process has already been envisaged. The underlying reason for the mechanism stems from the fact that a reasoning process is activated only when none of the well-known situations, with their corresponding recovery strategies, is identified in the knowledge base and, thus, a new plan must be formulated. In this case, the known relationships, or homomorphisms (Holland et al., 1987), between parts and states of the plant, as well as between physical quantities, must be further developed in such a way that a new configuration of homomorphisms is generated for coping with the new, unfamiliar situation (Moray, 1987). In the model, the reasoning mechanisms which activate and make use of these homomorphisms are based on qualitative and logic modelling approaches (Cacciabue et al., 1988). During these processes, the biases of the knowledge base will affect the behaviour and the decision making by:

--producing inappropriate understanding of situations,

--generating erroneous couplings of relationships, and

--formulating incorrect recovery strategies.

The way in which the DYLAM methodology follows the generation of mistakes is only marginal. Indeed, a mistake is not affected by the actual development of the sequence, as is the case for slips or lapses; DYLAM simply offers a structured input data architecture, which allows easy manipulation when different characteristics and biases of knowledge bases need to be observed.

Applications

The hardware architecture for the development of the human model and its applications to the man-machine interaction is based on a SYMBOLICS LISP machine linked to a network of SUN workstations. The human model "runs" on the Symbolics computer, while the simulation of the plant and of the information display system is managed by the network of SUN workstations. The two computers are connected, and the transfer of data is governed by the DYLAM driver.

In principle, the human model is mostly developed in the LISP language, because LISP is particularly well suited for simulating the processes of decision making and execution according to prestructured plans.


Moreover, the use of a recently implemented software package, KEE (Knowledge Engineering Environment) (Fikes and Kehler, 1985), allows the easy formulation of structures representing the operator's knowledge bases, reasoning and decision making. The model of the plant and DYLAM are developed in standard Fortran and have already been tested separately on different types of computer architecture, in order to ensure portability and flexibility of the model.

In a preliminary study, the operator model and the simulation of a plant have been considered, mainly in the perspective of human reliability analysis (Bersini et al., 1987). In particular, the assessment of the safe evolution of possible incidental scenarios has been studied, demonstrating the flexibility of the model in representing situations such as, for example:

1. the sequence of decisions and actions taken when the operator is confronted with a contradiction between a successfully performed action and an unsatisfied verification; and

2. the process by which a typical memory lapse affects the planning, leading to a scenario of consequences and, eventually, to the recovery of the situation through the interaction of the operator with the plant dynamics.

CONCLUSIONS

In this paper, the overall architecture of a cognitive model of operator behaviour has been presented, and the methodologies on which the model is based have been discussed. Presently, a considerable amount of work is being done in the fields of knowledge based reasoning and mental representation, according to established formalisms of Artificial Intelligence, with the aim of improving the generality and portability of the human model. The case studies already performed on a sample plant have shown encouraging results, and the hardware and software tools currently in use have the potential of allowing fast and rich development of high quality work in the fields of cognitive modelling and man-machine interaction simulation.

The final aim of our work is related to:

--the improvement of plant safety, for the design and/or validation of emergency procedures;

--the study of the usefulness of and need for automatisms;

--the evaluation of the completeness and functionality of the interfaces and decision support systems; and

--the design of the architecture of the control system, in order to diversify the subdivision of tasks between computers and man.


For these tasks, the advantages of a cognitive approach, in comparison with other types of methodologies, can be summarised in the following three main points:

1. the cognitive modelling is fully adaptive to whichever configuration the components of the system might take, at any time of the transient;

2. the interactivity with the physical simulation of the plant can be straightforwardly performed, and thus the dynamic aspect of the plant evolution does not represent a serious problem for the man-machine simulation;

3. the cognitive attitude of the operator and the defects of knowledge can be fully accounted for by the model; therefore, the formal aspects of the operator errors are not evaluated by an a priori function but, rather, stem from the entire evolution of the man-machine interaction.

Finally, maintaining a safety perspective, it can be argued that only by modelling the human cognitive processes can one have some confidence that the response of the "human" component of the man-machine system is the result of deterministic processes involving reasoning, cognition and expertise, instead of being only the outcome of an a priori speculation referring simply to the external "behaviouristic" aspects of the human being.

Acknowledgement--The authors would like to thank James Reason, Françoise Decortis and M. Masson for their essential collaboration in the development of the modelling project.

REFERENCES

Amendola, A. (1987). The DYLAM approach to systems safety and reliability assessment. EUR 11361 EN.

Bainbridge, L. (1986). What should a good model of the NPP operator contain? Proc. Int. Topical Meeting on Advances in Human Factors in Nuclear Power Systems, Knoxville, Tennessee, U.S.A.

Bersini, U., P. C. Cacciabue and G. Mancini (1987). Cognitive modelling: a basic complement of human reliability analysis. In G. Apostolakis, G. Mancini and P. Kafka (Eds), Accident Sequence Modelling: Human Actions, System Response, Intelligent Decision Support. North-Holland, Amsterdam.

Bersini, U., P. C. Cacciabue and G. Mancini (1988). A model of operator behaviour for man-machine system simulation. Proc. 3rd IFAC Conf. on Analysis, Design and Evaluation of Man-Machine Systems, Oulu, Finland. Pergamon Press, Oxford.

Cacciabue, P. C. and G. Cojazzi (1986). Analysis and design of a nuclear safety system versus the operator time constraints. Proc. 2nd IFAC Conf. on Analysis, Design and Evaluation of Man-Machine Systems, Varese, Italy. Pergamon Press, Oxford.

Cacciabue, P. C., G. Mancini and G. Guida (1988). A knowledge based approach to modelling the mental processes of a nuclear power plant operator. Proc. IAEA/CEC/OECD/NEA Int. Conf. on Man-Machine Interface in the Nuclear Industry, Tokyo, Japan; IAEA-CN-48/94, pp. 71-78. IAEA, Vienna.

De Keyser, V. (1987). Cognitive development of process industry operators. In J. Rasmussen, K. Duncan and J. Leplat (Eds), New Technology and Human Errors. Wiley, Chichester, U.K.

Decortis, F. (1988). Modélisation cognitive et analyse de l'activité. XXIVe Congrès de la Société d'Ergonomie de Langue Française (SELF), Paris.

Edwards, E. and F. P. Lees (Eds) (1974). The Human Operator in Process Control. Taylor and Francis, London.

Fikes, R. and T. Kehler (1985). The role of frame-based representation in reasoning. Communications of the ACM, 28(9).

Gupta, M. M. and E. Sanchez (Eds) (1982). Fuzzy Information and Decision Processes. North-Holland, Amsterdam.

Holland, J. H., K. J. Holyoak, R. E. Nisbett and P. R. Thagard (1987). Induction: Processes of Inference, Learning and Discovery. The MIT Press, Cambridge, MA.

Mancini, G. (1986). Modelling humans and machines. In E. Hollnagel, G. Mancini and D. D. Woods (Eds), Intelligent Decision Support in Process Environments, pp. 307-323. NATO ASI Series, Springer, Berlin.

Masson, M. (1989). Filtrage cognitif et pertinence des informations. CEE-JRC Ispra Technical Note No. 1.89.37 (to be published).

Moray, N. (1987). Intelligent aids, mental models, and the theory of machines. Int. J. Man-Machine Studies, 27, 619-629.

Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1-15.

Pew, R. W. and S. Baron (1983). Perspectives on human performance modelling. Automatica, 19, 663-676.

Rasmussen, J. (1981). Models of mental strategies in process plant diagnosis. In J. Rasmussen and W. B. Rouse (Eds), Human Detection and Diagnosis of System Failures. Plenum, New York.

Rasmussen, J. (1985). The role of hierarchical knowledge representation in decision making and system management. IEEE Trans. Syst. Man Cybern., SMC-15.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. A. P. Sage (Ed.). North-Holland, Amsterdam.

Rasmussen, J., K. Duncan and J. Leplat (Eds) (1987). New Technology and Human Errors. Wiley, London.

Reason, J. (1986). Recurrent errors in process environments: some implications for the design of intelligent decision support systems. In E. Hollnagel, G. Mancini and D. D. Woods (Eds), Intelligent Decision Support in Process Environments. NATO ASI Series, Springer, Berlin.

Reason, J. (1987a). Papers in J. Rasmussen, K. Duncan and J. Leplat (Eds), New Technology and Human Errors, pp. 5-14, 15-22, 45-52, 63-86. Wiley, Chichester, U.K.

Reason, J. (1987b). The cognitive bases of predictable human error. In Contemporary Ergonomics 1987: Proc. Ergonomics Society's Annual Conference, Swansea, U.K.

Reason, J. and K. Mycielska (1982). Absent Minded? The Psychology of Mental Lapses and Everyday Errors. Prentice-Hall, London.

Rouse, W. (1983). Models of human problem solving: detection, diagnosis and compensation for system failures. Automatica, 19, 613-625.

Searle, J. R. (1980). The intentionality of intention and action. Cognitive Science, 4, 47-70.

Sheridan, T. B. (1986). Forty-five years of man-machine systems: history and trends. Keynote address, Proc. 2nd IFAC Conf. on Analysis, Design and Evaluation of Man-Machine Systems, Varese, Italy. Pergamon Press, Oxford.

Stassen, H. G., G. Johannsen and N. Moray (1988). Internal representation, internal model, human model, human performance model and mental workload. Proc. 3rd IFAC Conf. on Analysis, Design and Evaluation of Man-Machine Systems, Oulu, Finland. Pergamon Press, Oxford.

Woods, D. D. and E. M. Roth (1986). Models of cognitive behaviour in nuclear power plant personnel. NUREG/CR-4352.