Copyright © IFAC Man-Machine Systems, Oulu, Finland, 1988

MODELLING OF MAN-MACHINE SYSTEMS
A MODEL OF OPERATOR BEHAVIOUR FOR MAN-MACHINE SYSTEM SIMULATION

U. Bersini, P. C. Cacciabue and G. Mancini
Commission of the European Communities, Joint Research Centre, Ispra Establishment, 21020 Ispra (Va), Italy
Abstract. In this paper, a model of plant operator behaviour is proposed, whereby the cognitive processes leading to decisions as well as the execution of strategies are simulated in detail for the study of the management of a plant in incidental conditions. The architecture of the model foresees the representation of two cognitive levels of reasoning and decision making, namely the High Level Decision Making (HLDM), which allows the exploitation of the operator's knowledge by continuously recognising situations and by building supervisory and control strategies, and the Low Level Decision Making (LLDM), which is supported by the working and conscious memory dynamics, when the operator implements a preprogrammed response or a planned strategy in order to satisfy a clearly defined intention. The details of the formalisms and methodologies implemented in the model are described and the perspective applications in the fields of design and safety of complex plants are discussed.

Keywords. Cognitive systems; Fuzzy logics; Hierarchical systems; Human factors; Psychological methods.
INTRODUCTION

The attitude of designers and licensors towards the analysis of complex plants has gradually changed during the past decades, according to the increasing level of complexity of the plants, to the introduction of powerful computers in the control room and, especially, to the very few but highly significant accidents that have occurred in reality. In particular, the accidents which have resulted in some degree of danger to public health have shown that one factor plays a fundamental role in the transient evolution: the erroneous or inappropriate behaviour of operators, due to misunderstanding of physical phenomena, lack of knowledge, overconfidence, stress, etc. The theoretical study and the evaluation of engineered systems by means of Man-Machine Interaction (MMI) approaches have thus become key issues, and the appropriate balancing and interfacing between the two components of the interface has been identified as the crucial factor of the simulation (Mancini, 1986).

Many attempts to model human behaviour have been performed in a behaviouristic oriented perspective, i.e. decomposing the overall behaviour of the operator into a sequence of different elementary acts or sub-tasks and assigning to each of these a certain probability of failure. This type of approach, although very efficient in terms of quantification, has been questioned by many authors (Decortis, 1987; Bersini, Cacciabue and Mancini, 1987; Reason, 1986), mainly on the basis of psychological considerations, which imply that, in a behaviouristic view, only the consequences of human errors are accounted for, without worrying about the reasons and the underlying mechanisms from which they stem. The criticisms of the behaviouristic approach have led to the research and formulation of cognitive modelling of the operator (Bainbridge, 1986; Rasmussen, 1986). These models have to be considered as a step forward in that they attempt to model the mental reasoning as well as the motor behaviour of the operator in a deterministic way, combining psychological considerations with logic formalisms and decision making theories (Cacciabue and Bersini, 1987; Woods and Roth, 1986).

In this paper, a model of plant operator behaviour is proposed, whereby the cognitive processes leading to decisions
as well as the execution of strategies are simulated in detail for the study of the management of a plant in incidental conditions. In section 2, the general architecture and the rationale of the model are first described; then, in section 3, the details of the methodologies applied for the different parts of the model are discussed. Section 4 describes the implementation of the error mechanisms and some examples of applications currently being developed on a dedicated hardware. Finally, section 5 contains the final remarks, including the directions for future research.
THE ARCHITECTURE OF THE MODEL

Modelling the behaviour of operators of process plants in transient conditions implies primarily the simulation of the primitive cognitive processes performed by the operators, accounting for the environmental constraints in which they are activated. Many existing techniques focus separately on models of detection, planning, diagnosis or execution, with adequately different formalisms. In the present model, the main tendency is towards an integrated simulation allowing all the activities of the operator to be tackled in the same framework. Indeed, the peculiarity of the proposed approach lies in the consideration of an overall human behaviour, without a clear-cut separation between different phases such as planning, diagnosis and execution. This entails a parallelism, rather than a sequentiality, of human activities and a continuous interaction of the operator planning-execution-assessment processes with the physical evolution of the plant. A model of this kind can be considered as "active", in the sense that the operator actions are dynamically identified by the actual situation assessment and by the reasoning about the system evolution. Indeed, in previously developed models (Cacciabue and Cojazzi, 1986), a plant safety perspective was first considered by means of a "passive" model, where the actions of the operator were only related to the consequences on the plant and therefore identified in an "a priori" structure; the operator actions were paced by the accident according to pre-established procedures. A conceptual framework has been developed whereby the models of the plant and of the operator act as interactive counterparts of the man-machine system simulation (Fig. 1).
Within this frame, two cognitive levels of reasoning and decision making are foreseen. On the one hand, the assessment of a situation (diagnosis) and the formulation of a strategy (planning) are considered as "High Level Decision Making" (HLDM) processes, because they imply long term planning as well as the analysis of the plant as a whole and possibly the reasoning about the evolution of physical phenomena. It represents pure mind work without direct interaction with the actual control system. On the other hand, the implementation of a preprogrammed response or of a planned strategy, in order to satisfy a clearly defined intention, is actually carried out by the "Low Level Decision Making" (LLDM) model. Here the interaction with the machine is dual, in the sense that plant behaviour data and operator actions develop on a short time scale and on localised parts of the plant. The mechanisms of error detection and recovery are implicitly considered as feedbacks or results of the various ongoing processes within the HLDM and LLDM levels.
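To make this two-level organisation concrete, the following sketch outlines one possible way a simulation cycle could alternate HLDM and LLDM activities against a plant model. It is only a minimal illustration of the loop described above, not the paper's implementation (which, as discussed later, runs in LISP); the names PlantModel, hldm_assess and lldm_execute, and the toy plant dynamics, are assumptions introduced here.

```python
# Minimal, hypothetical sketch of the two-level operator/plant loop described
# in the text. All class and function names are illustrative assumptions.

class PlantModel:
    """Toy plant: one variable drifting away from its nominal value."""
    def __init__(self):
        self.level = 1.0          # nominal level

    def cues(self):
        return {"level": self.level}

    def step(self, action=None):
        self.level += -0.1 if action == "open_valve" else 0.05


def hldm_assess(cues, current_plan):
    """High Level Decision Making: recognise the situation and (re)plan.

    Pure 'mind work': no direct interaction with the controls."""
    if cues["level"] > 1.2 and not current_plan:
        # diagnosis: level too high -> intention: restore level -> plan
        return ["open_valve", "check_level"]
    return current_plan


def lldm_execute(plan, plant):
    """Low Level Decision Making: implement the planned response step by step,
    on a short time scale and on a localised part of the plant."""
    if plan:
        act = plan.pop(0)
        if act == "open_valve":
            plant.step(action="open_valve")
            return
    plant.step()                  # no operator action this cycle


if __name__ == "__main__":
    plant, plan = PlantModel(), []
    for t in range(10):
        plan = hldm_assess(plant.cues(), plan)   # HLDM: diagnosis + planning
        lldm_execute(plan, plant)                # LLDM: execution
        print(f"t={t}  level={plant.level:.2f}  pending acts={plan}")
```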
There are two fundamental components of the model, and two basic mechanisms are envisaged as critical activators of cognitive processes (Fig. 2). The two components are the Working Memory (WM) and the Knowledge Base (KB), which represent, respectively: the "workspace" where the mental schema are internally, consciously and laboriously processed; and a vast repository of frames or schema of different natures and distinct levels of compilation.
Fig. 1. General framework of the human behaviour model.

Fig. 2. Architecture of HLDM model. (KB: Knowledge Base; FWM: Focal Working Memory; PWM: Peripheral Working Memory; SM: Similarity Matching; FG: Frequency Gambling.)
At the psychological level, the HLDM architecture borrows basic concepts developed by J. Reason (1986, 1987) in various works, whereby the content of the knowledge base of a human being is exploited via the basic mechanisms of similarity matching and frequency gambling, leading to the cognitive processes of recognition and/or planning. Given that intentionality is a fundamental aspect of such processes, the formulation of sequences of intentions or goals and the ordering of goals in a hierarchical goal-oriented structure (Bainbridge, 1986; Rasmussen, 1985) is the general framework within which the strategies of operators are developed and carried out within the LLDM model. Moreover, the dynamic allocation of goals in the working memory of the operator and the simulation of the attainment of a goal during the flow of events is based on fuzzy logics, which is a well suited theory for representing the operator's approximate knowledge and allows the creation of a semantic interface between the system simulation and the operator cognitive model.
FORMALISATION OF THE TWO COGNITIVE LEVELS

The mathematical formulations and the formalisms adopted in order to develop the model and to simulate the flow of reasoning of operators are now described in some detail, starting with the mechanisms of the HLDM, followed by the mathematical formulation of the LLDM model.
The WM is subdivided into two parts: a Focal WM (FWM) and a Peripheral WM (PWM). The environmental and the KB input data arrive in the PWM, where they are adequately filtered in order to gain admittance into the FWM. The FWM is the "workspace" where the KB schema are processed. Moreover, the WM can transmit to the KB some "calling conditions" for acquiring fresh knowledge. These calling conditions can be supplied by environmental signals, by frame dependent activations or, finally, by some analytical process in the WM leading to a specific intention. The access to the FWM is governed by a variety of priority principles such as: visual dominance, change detection, coherence principles (favour information that corresponds to the current contents) and activation principles (the admittance will depend on the level of activation of the units).

The content of the KB is structured in frames or schema which represent the operator knowledge about the plant in the form of geometrical structure (connections, locations, states of components), process representations (behaviour of variables, causal relations, functional landmarks and thresholds), and control sequences (actions, tasks, procedures and respective effects). Two main categories of frames are distinguished. A first one, called knowledge-frame, describes only the processes and the structure of the system. A second one, called action-frame, accounts for all the operator's interactions with the system, in that the action frames enclose pre-defined plans of actions for different situations. Experience is responsible for a compilation mechanism which leads to formalising in procedures the content of frames. Each frame, or part of a frame, is a "content addressable" knowledge unit, in the sense that its elements can match with WM calling-conditions.
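As an illustration of how such knowledge units and the PWM-to-FWM admission filter could be encoded, the following sketch is offered; the Frame fields, the single activation score standing in for the priority principles, and the example cues are assumptions made for this illustration and are not prescribed by the model.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A 'content addressable' knowledge unit (knowledge- or action-frame)."""
    label: str                       # state label, e.g. an event or situation name
    attributes: dict                 # diagnostic signs / calling-condition keys
    frequency: float                 # frequency/recency tag used by frequency gambling
    is_action_frame: bool = False    # action-frames carry a pre-defined plan
    plan: list = field(default_factory=list)


def pwm_filter(incoming, activation, threshold=0.5):
    """Peripheral WM -> Focal WM admission: keep only the items whose activation
    (a single number standing in here for the priority principles listed above)
    exceeds a threshold."""
    return [item for item in incoming if activation.get(item, 0.0) >= threshold]


# usage sketch with invented frames and signals
kb = [
    Frame("loss of feedwater", {"SG level": "low", "feed flow": "low"}, 0.7),
    Frame("tube rupture", {"SG level": "high", "radiation": "high"}, 0.2),
]
signals = ["SG level", "feed flow", "radiation", "ambient noise"]
activation = {"SG level": 0.9, "feed flow": 0.8, "radiation": 0.3, "ambient noise": 0.1}
print(pwm_filter(signals, activation))   # only the salient cues gain admittance to the FWM
```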
The mechanisms for bringing the products of the stored knowledge units into the WM are: similarity-matching between the calling conditions and the attributes of the KB schema; and frequency gambling for selecting a final candidate among a number of partially matched candidates, on the basis of frequency and recency of encounter.
Two main parallel and continuous intentional mechanisms are present in the operator's mind: the recognition of situations and, if needed, the planning of the recovery of the plant conditions. Experience consists of a procedural fusion of these two intentional mechanisms and their concealment in an "unconscious backstage". Indeed, the more expert the operator is, the more his behaviour will be supported by action-frames alone, which refer to some characteristic system cues and contain pre-defined plans of actions. The less expert the operator is, the more the two intentional mechanisms will tend to be satisfied consciously and separately. Each one may require further analytical and intentional processing in the WM.

Recognition. The operator is a furious pattern matcher and, when confronted with a new situation, he will essentially activate the cognitive schema that he possesses in his KB in order to deal with the situation. This activation, generally due to some "economical" analogy principles with past and most frequent experiences, will influence all successive behaviours, from perception to reasoning and action. A large range of knowledge-frames is assumed to account for different degrees of causal and structural compilation. The frames comprise a "state label", a set of "attribute values" and a "frequency tag", which relate to a specific event, to the associated diagnostic signs and to the frequency or recency of encounter of the event. These frames are not characterized by structural or process links among the attributes, but only by the values they can take. The process of selection of the schema to be instantiated will start by exploring in parallel a set of these possible procedural frames. The mechanisms by which the selection is performed are based on the concepts of similarity matching and frequency gambling. Similarity matching is the mechanism of matching the perceived system cues (in this case the calling conditions) with the attribute values of the stored candidate events. The associated attribute values are generally described in linguistic and vague terms: "temperature increase", "low pressure". Fuzzy set theory is the most suitable theory for representing the semantic interface between the system dynamics and the operator reasoning models (Gupta and Sanchez, 1982). Therefore, the values of the frame attributes are represented by fuzzy sets. They depend on the operator expertise and on the plant design. The number of cues that will reach the operator's attention will be a small subset of the available ones. This perception can be tinged with imprecision and uncertainty. More than one frame is likely to be selected. In order to be coherent with the attribute descriptions and the perceptual approximation, the technique implemented to match like-with-like is a fuzzy pattern-matching. The selection among partially matched frames, i.e. potential candidates, will be performed on the basis of frequency or recency of encounter, i.e. the frequency gambling mechanism. To complement these two primitive mechanisms, the powerful confirmation bias principle describes the strong tendency of the operator to check for confirmation of a prior hypothesis. He will observe only the most salient symptoms, forgetting contrary evidence and thus being incapable of performing a more parallel reasoning. The first hypothesis switches remain at this shallow level until a turning point, when no more procedural frames can cope with the present situation.
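One possible concrete reading of this fuzzy pattern-matching and frequency gambling selection is sketched below: attribute values are trapezoidal fuzzy sets, the perceived cues are matched against every candidate frame, and the frequency tag breaks ties among partially matched frames. The scoring rule, thresholds and example frames are illustrative assumptions rather than the exact formulation used in the model.

```python
def trap_mf(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)


def similarity(cues, frame_attrs):
    """Fuzzy pattern matching: mean membership of the perceived cue values
    in the fuzzy sets describing the frame's attributes."""
    scores = [trap_mf(cues[name], *fuzzy) for name, fuzzy in frame_attrs.items() if name in cues]
    return sum(scores) / len(scores) if scores else 0.0


def select_frame(cues, frames, sim_threshold=0.5):
    """Similarity matching first; frequency gambling among partial matches."""
    candidates = [(similarity(cues, f["attrs"]), f) for f in frames]
    matched = [(s, f) for s, f in candidates if s >= sim_threshold]
    if not matched:
        return None
    best_sim = max(s for s, _ in matched)
    near_best = [f for s, f in matched if best_sim - s < 0.2]     # partially matched set
    return max(near_best, key=lambda f: f["freq"])                 # frequency gambling


# usage sketch: "temperature increase" and "low pressure" as fuzzy attributes
frames = [
    {"label": "leak",      "freq": 0.6, "attrs": {"dT": (0, 2, 10, 15), "P": (0, 0, 3, 5)}},
    {"label": "heater on", "freq": 0.9, "attrs": {"dT": (0, 2, 10, 15), "P": (4, 6, 12, 14)}},
]
perceived = {"dT": 4.0, "P": 2.0}     # vague, partial perception of the plant
print(select_frame(perceived, frames)["label"])
```

In this toy run the "leak" frame wins because its similarity dominates; the frequency gambling only intervenes among near-ties, consistently with the description above.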
If the operator is not strongly stressed by time, switching to a more declarative level will allow him to consider more intermediary structural, causal and temporal links in the diagnostic progression. Indeed, a way of activating these frames can be of a phenomenological nature rather than event dependent, and the calling conditions can contain qualitative links representing process and structural relations.
In this case, the working memory processing will be more laborious, accounting for the operator's ability to make some qualitative simulation of the plant as well as causal and structural inferences.

Planning. Cognitive planning must be regarded more as an analogical process than a logical one. The operator, instead of reconstructing plans for each new situation, adapts previous plans which worked correctly in analogous situations. Analogies, partial matching among situations, and adaptive mechanisms are the key and the complex reality of the planning process. In the model, we use a limitative definition of intention. Intention is generated by a mismatch between a perceived image of the current world and some continuously active internal constraints. It is the mental attitude which allows a new steady state, cleared of this mismatch, to be reached. The internal constraints are organized in a network architecture and classified in order of importance or criticality. They can relate to some basic and generic facts, such as "the system is safe", and to more elementary, domain specific facts, like functional thresholds of the system. The mental attitude relies on two snapshots of the world: a current one (violating the constraint) and an intentional one. Following Searle's (1980) distinction between "prior intention" and "intention in action", it results that: intention in action is inherent to a spontaneous behaviour, where a plan of action is directly associated to a situation assessment without some in-between planning analytical steps; prior intention is an explicit and conscious mental attitude leading straightforwardly to a real planning process which will elaborate an adequate plan of actions. Intentional attitudes are supported by a perceived image of the world and an intentional one. The perceived image of the world is represented by the frame which is active at the time of the disturbing event, i.e. the currently instantiated frame (CIF), and the intentional world aims at eliminating the constraint violation. The part of the KB containing the action frames supports the planning process simulation. Indeed, the two snapshots of the world, the current one and the intentional one, are confronted and, only in case of quasi-optimal matching of the calling conditions with the attribute values of an action-frame, can the internal plan be executed just as it is. Once again, the similarity matching and frequency gambling primitives will select which action-frame will gain admittance into the WM. Here the only difference from the recognition process lies in the new definition of the calling conditions, which contain some intentional aspects. These analogy principles may seem conceptually clear but call for different symbolic representations of the system. Experience and learning make planning and execution become more unconscious and automatic processes. As an example, in the case of expertise, strongly proceduralised action-frames are addressable via few specific system cues and contain adequate plans to be immediately implemented, while a less expert behaviour leads more frequently to a real planning process development.
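The sketch below gives one hypothetical way to operationalise this notion of intention and analogical planning: internal constraints are checked against the currently instantiated frame (CIF), the most critical violated constraint generates the intention, and an action-frame whose calling conditions match both the current situation and the intention is reused as the plan. Constraint contents, the matching rule and the example frames are assumptions introduced for illustration.

```python
# Hypothetical sketch: intention generation from constraint violation and
# reuse of a stored action-frame (analogical planning), following the text above.

constraints = [                       # internal constraints, ordered by criticality
    {"name": "system is safe",        "check": lambda cif: cif["safe"],        "criticality": 1.0},
    {"name": "level below threshold", "check": lambda cif: cif["level"] < 1.2, "criticality": 0.6},
]

action_frames = [
    {"calling": {"situation": "high level", "intent": "level below threshold"},
     "plan": ["open_valve", "verify_level"], "freq": 0.8},
    {"calling": {"situation": "low flow", "intent": "restore flow"},
     "plan": ["start_pump"], "freq": 0.5},
]


def generate_intention(cif):
    """Prior intention = most critical violated internal constraint."""
    violated = [c for c in constraints if not c["check"](cif)]
    return max(violated, key=lambda c: c["criticality"])["name"] if violated else None


def plan_by_analogy(situation, intention):
    """Quasi-optimal match of the calling conditions -> reuse the stored plan;
    otherwise a real (conscious) planning process would be needed."""
    for frame in sorted(action_frames, key=lambda f: -f["freq"]):
        if frame["calling"]["situation"] == situation and frame["calling"]["intent"] == intention:
            return frame["plan"]
    return None                        # no adequate pre-defined plan available


cif = {"safe": True, "level": 1.3}     # currently instantiated frame (perceived image)
intention = generate_intention(cif)    # -> "level below threshold"
print(intention, plan_by_analogy("high level", intention))
```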
The implementation of a plan is carried out in the Low Level Decision Making model. The basic structure that supports the whole LLDM is called FUzzy-Goal-Oriented-Script (FUGOS) (Fig. 3). In a FUGOS the direct Man-Machine interaction at the lowest level of the control loop is reproduced by a hierarchical goal oriented structure. Fuzzy logics is the mechanism by which the navigation through the FUGOS is performed. Starting with a main intention or "Top-Goal", the architecture consists of a simple hierarchical network where the different sub-goals and sub-tasks to be performed by the operator are schematically arranged in a "tree" type structure and are linked to each other by different gates such as "AND" and "OR".
A "goal" is an element of the network at any level. An "act" is a last elementary goal of the network i.e. an elementary action that the operator perform. A "task" is the sequence of acts that the operator has to perform in order to attain a certain goal at any level. Interaction of the operator with the plant is simulated as a sequential travelling in a FUGOS where the operator executes elementary acts in order to gradually satisfy goals at different levels of the hierarchy. A similar architecture accounts for the operator monitoring, detection and low level recovering strategies (presence of "OR" gates).
Fig. 3. A FUzzy Goal Oriented Script.

Each goal is characterised by a certain number of parameters, which regulate the unfolding of the execution. They are:

- the degree of priority (GDP), which expresses the sequentiality between goals; this priority results from the planning process, taking into account the potential interaction among goals;
- the degree of membership (GDM), which defines the measure of the dependency between a goal and its directly superior goals; this degree is also a consequence of the planning process and represents a measure of uncertainty in the mechanism of decomposition of a goal into different sub-goals;
- the degree of satisfaction (GDS), which represents the correlation between the result of a specific goal and the operator's expectancy;
- and finally the degree of certainty (GDC), which represents the measure of the attainment of a goal.

The two parameters GDC and GDS are evaluated during the actual execution of the selected strategy. Travelling, in a top-down way, through a FUGOS allows the model to select the acts to be executed. At any level the operator attends to the goal of highest priority. The attainment of a goal is measured by the GDS and GDC parameters, which are governed by the "fuzzy feedback mechanism". The following steps are performed. GDS is evaluated as the result of matching the goal expectancy and the real behaviour of the related indicators, expressed by means of a trapezoidal membership function:

GDS(goal) = f_trap(a, b, c, d)                                                  (1)

When GDS is greater than a pre-established threshold, the goal is considered as attained and the next goal in the tree structure is tackled, in order of priority. If GDS is below the pre-established threshold value, then GDC is evaluated in terms of the GDS of the goal itself and the GDC and GDM of the connected sub-goals. Assigning two weighting factors, x to GDS and y to the sub-goals' GDC and GDM, and using the fuzzy logic dual concepts of necessity (N) and possibility (Π), the expression of the GDC of a goal is:

Π = max[min(x, GDS(goal)), min(y, GDC*(sub-goals))]                             (2)

N = min[max(1 - x, GDS(goal)), max(1 - y, GDC*(sub-goals))]                     (3)

N ≤ GDC(goal) ≤ Π                                                               (4)

where:

GDC*(sub-goals) = max_{j=1..k} {min[GDC(sub-goal_j), GDM(sub-goal_j)]}          (5)

in case of an "or" gate connecting the goal with its sub-goals; or:

GDC*(sub-goals) = min_{j=1..k} {max[GDC(sub-goal_j), 1 - GDM(sub-goal_j)]}      (6)

in case of an "and" gate connecting the goal with its sub-goals.
By this approach it is possible to model various degrees of confidence experienced by the operator during the management of the accidental sequence. Indeed, the two weighting factors x and y represent the relative importance given by the operator to the information concerning the current goal versus the information obtained from the previously achieved goals. These data can in principle be elicited from operators and represent the degree of variability in the credibility associated with the instrumentation and control by different operators. Moreover, an important aspect of these two weights is that they are dynamic variables, and thus they can also represent the changing of opinion of the operator during the evolution of a transient itself.
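A direct, illustrative transcription of Eqs. (1)-(6) into executable form could look as follows; the trapezoidal parameters, the weights x and y and the helper names are placeholders chosen here, not values or identifiers from the model itself.

```python
def trap_mf(v, a, b, c, d):
    """Eq. (1): trapezoidal membership function giving the GDS of a goal."""
    if v <= a or v >= d:
        return 0.0
    if b <= v <= c:
        return 1.0
    return (v - a) / (b - a) if v < b else (d - v) / (d - c)


def gdc_star(subgoals, gate):
    """Eqs. (5)-(6): fuzzy combination of the sub-goals' GDC and GDM."""
    if gate == "OR":
        return max(min(g["gdc"], g["gdm"]) for g in subgoals)
    return min(max(g["gdc"], 1.0 - g["gdm"]) for g in subgoals)


def gdc_bounds(gds_goal, subgoals, gate, x=0.7, y=0.5):
    """Eqs. (2)-(4): possibility/necessity bounds on GDC(goal).

    x weights the goal's own GDS, y the information from its sub-goals."""
    star = gdc_star(subgoals, gate)
    possibility = max(min(x, gds_goal), min(y, star))            # Eq. (2)
    necessity = min(max(1.0 - x, gds_goal), max(1.0 - y, star))  # Eq. (3)
    return necessity, possibility                                # N <= GDC <= Pi, Eq. (4)


# usage sketch: an "AND"-gated goal whose indicator reads 1.05
gds = trap_mf(1.05, 0.9, 1.0, 1.1, 1.2)                          # inside the plateau -> 1.0
subs = [{"gdc": 0.6, "gdm": 0.9}, {"gdc": 0.8, "gdm": 0.7}]
print(gdc_bounds(gds, subs, "AND"))
```

With these placeholder values the bounds evaluate to N = 0.6 and Π = 0.7, i.e. a moderately confident estimate of the attainment of the goal.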
ERROR MECHANISMS AND APPLICATIONS

The error events are generated by the interaction of the external world with the currently instantiated frame and the basic primitives of cognition. Indeed, the exploitation of the knowledge base frames, which can contain under-specifications and fuzzy conditions, by the driving mechanisms of similarity matching and frequency gambling, may result in automatic error generation (Reason, 1987). In principle, cognitive under-specifications can be introduced by assigning omissions and inaccuracies in the knowledge base, i.e. in the action frames as well as in the knowledge frames. On the other hand, environmental under-specifications can be simulated by presetting unfamiliar or ambiguous signals coming from the external world. The architecture of our model is governed by a methodology which accounts for such under-specifications, and consequently, during a man-machine interaction simulation, the defects of the knowledge base and the induced biases of human cognition are propagated automatically through the sequence, via the mechanisms of the HLDM and LLDM models.

In a preliminary study, the operator model and the simulation of a plant have been studied, mainly in the perspective of human reliability analysis (Bersini, Cacciabue and Mancini, 1987). In particular, the assessment of the safe evolution of possible incidental scenarios has been studied, demonstrating the flexibility of the model to represent situations such as, for example: the sequence of decisions and actions taken when the operator is confronted with a contradiction between a performed action and the unsatisfied verification; and
the process by which an error of planning, made at the HLDM, leads to a scenario of consequences and to the recovery of the situation through the interaction of the operator with the plant dynamics.
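As an illustration of how such under-specifications could propagate automatically, the following hypothetical fragment applies a crude similarity-matching and frequency-gambling selection to two versions of a small knowledge base: when a discriminating attribute is omitted (a cognitive under-specification), the two frames become equally similar and the frequency gambling picks the more familiar but wrong one. Frames, cues and numbers are invented for illustration.

```python
# Hypothetical illustration: an under-specified knowledge base plus frequency
# gambling produces a plausible but wrong situation assessment.

def match(cues, frame):
    """Fraction of the frame's attribute values confirmed by the perceived cues."""
    hits = sum(1 for k, v in frame["attrs"].items() if cues.get(k) == v)
    return hits / len(frame["attrs"])


def select(cues, frames):
    best = max(match(cues, f) for f in frames)
    candidates = [f for f in frames if match(cues, f) == best]   # partial matches
    return max(candidates, key=lambda f: f["freq"])              # frequency gambling


cues = {"pressure": "low", "temperature": "high", "radiation": "high"}

complete_kb = [
    {"label": "leak",       "freq": 0.9, "attrs": {"pressure": "low", "radiation": "normal"}},
    {"label": "tube break", "freq": 0.2, "attrs": {"pressure": "low", "radiation": "high"}},
]
print(select(cues, complete_kb)["label"])        # -> "tube break": radiation discriminates

underspecified_kb = [                            # the radiation attribute was never learned
    {"label": "leak",       "freq": 0.9, "attrs": {"pressure": "low"}},
    {"label": "tube break", "freq": 0.2, "attrs": {"pressure": "low"}},
]
print(select(cues, underspecified_kb)["label"])  # -> "leak": the frequent, wrong frame wins
```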
The hardware architecture for the development of the human model is based on a SYMBOLICS-LISP machine, which is linked to the simulation of the plant, running on a network of SUN stations. In principle, the human model is mostly developed in LISP language, because LISP is particularly well suited for simulating the processes of decision making and execution according to prestructured plans. Moreover, the use of a recently implemented software package, KEE (Knowledge Engineering Environment), allows the easy formulation of structures representing the operator knowledge bases, reasoning and decision making.
CONCLUSIONS

In this paper the overall architecture of a cognitive model of operator behaviour has been presented and the methodologies on which the model is based have been discussed. A considerable amount of work still remains to be done, especially in the fields of knowledge based reasoning and mental representation, before the whole model is fully developed and assumes the characteristics of generality and portability to complex plants. However, case studies already performed on a sample plant have shown encouraging results, and the hardware and software tools currently in use have the potential of allowing fast and rich development of high quality work in the field of cognitive modelling and man-machine interaction simulation.

The final aim of our work is related to the development and the improvement in various fields of plant safety, such as the design and validation of emergency procedures; the study of the usefulness and need of automatisms; the evaluation of the completeness and functionality of the interfaces and decision support systems; and the design of the architecture of the control system in order to diversify the subdivision of tasks between computers and man. For these tasks, the advantages of a cognitive approach in comparison with other types of methodologies can be summarised in the following three main points:

1. the cognitive modelling is fully adaptive to whichever configuration the components of the system might take, at any time of the transient;
2. the interactivity with the physical simulation of the plant can be straightforwardly performed, and thus the dynamic aspect of the plant evolution does not represent a serious problem to the man-machine simulation;
3. the cognitive attitude of the operator and his internal errors can be fully accounted for by the theoretical pre-thought simulation, i.e. the behaviouristic aspect of the operator's error is not evaluated by an a priori function but rather results from the entire evolution of the man-machine interaction.

Finally, maintaining a safety perspective, it can be argued that only in this way one can have some confidence that the response of the human component of the man-machine system is the result of deterministic processes involving reasoning, cognition and expertise, instead of the outcome of an a priori speculation referring only to the external aspects of the underlying basic mechanisms governing operator behaviour.

ACKNOWLEDGMENT

The authors would like to thank James Reason and Françoise Decortis for their essential collaboration in the modelling project development.

REFERENCES

Bainbridge, L. (1986). What should a good model of the NPP operator contain? Proceedings of Int. Topical Meeting on Advances in Human Factors in Nuclear Power Systems, Knoxville, Tennessee, USA, April 1986.

Bersini, U., P. C. Cacciabue, and G. Mancini (1987). Cognitive modelling: a basic complement of human reliability analysis. 9th SMiRT Post-Conference Seminar on Accident Sequence Modelling: Human Actions, System Response, Intelligent Decision Support, München, FRG, August 24-25, 1987. To be published in Engineering Reliability.

Cacciabue, P. C., and U. Bersini (1987). Modelling human behaviour in the context of a simulation of Man-Machine Systems. In J. Patrick and K. Duncan (Eds.), Human Decision Making and Control, North-Holland, Elsevier, Amsterdam.

Cacciabue, P. C., and G. Cojazzi (1986). Analysis and design of a nuclear safety system versus the operator time constraints. Proceedings of 2nd IFAC Conf. on Analysis, Design and Evaluation of Man-Machine Systems, Varese, Sept. 1985, Pergamon Press, Oxford.

Decortis, F. (1987). A cognitive perspective for human reliability. Some observations on the Human Reliability Benchmark Exercise. EUR Report, to be published.

Gupta, M. M., and E. Sanchez (Eds.) (1982). Fuzzy Information and Decision Processes, North-Holland, Amsterdam.

Mancini, G. (1986). Modelling humans and machines. In E. Hollnagel, G. Mancini and D. D. Woods (Eds.), Intelligent Decision Support in Process Environments, NATO ASI Series, Springer-Verlag, Berlin.

Rasmussen, J. (1985). The role of hierarchical knowledge representation in decision making and system management. IEEE Trans. on Syst., Man, and Cybern., SMC-15, No. 2.

Rasmussen, J. (1986). Simulation of operators' response in emergencies. RISO-M-2616.

Reason, J. (1986). Recurrent errors in process environments: some implications for the design of Intelligent Decision Support Systems. In E. Hollnagel, G. Mancini and D. D. Woods (Eds.), Intelligent Decision Support in Process Environments, NATO ASI Series, Springer-Verlag, Berlin.

Reason, J. (1987). The cognitive bases of predictable human error. Contemporary Ergonomics 1987, Proceedings of the Ergonomics Society's Annual Conference, Swansea, UK, April 1987.

Searle, J. R. (1980). The intentionality of intention and action. Cognitive Science, 4, 47-70.

Woods, D. D., and E. M. Roth (1986). Models of cognitive behaviour in nuclear power plant personnel. NUREG/CR-4352.