User Modelling: A New Technique to Support Designers of Graphical Support Systems in Conventional Power Plants

Copyright © IFAC Man-Machine Systems, Oulu, Finland, 1988

G. A. Sundstrom

Laboratory for Man-Machine Systems, University of Kassel (GhK), D-3500 Kassel, FRG

Abstract. The concept of user modelling as a method to support designers of graphical interfaces to dynamic technical systems is discussed. A general framework viewing operators' information processing behaviour as decision making is described, and used in a combined knowledge elicitation and task analysis procedure. Knowledge about operators' failure management strategies during a failure in the high pressure preheating system of a coal-fired power plant is summarised in rules of action. The KEEworlds concept is discussed as a way of implementing a knowledge based system modelling operators' actions.

Keywords. Man-machine systems; knowledge based graphical support; system supervision; control room operators.

INTRODUCTION

Currently, the design of graphical interfaces to dynamic technical systems relies heavily on designers' "common sense" hypotheses about the behaviour and needs of operators in supervisory and control systems. How this situation might be changed using knowledge based systems is the focus of the work presented here. More specifically, the focus is on how to use the concept of user modelling while developing an intelligent graphical editor to support designers of graphical interfaces in conventional power plants. This work constitutes a part of the project "Graphics and Knowledge Based Dialogue for Dynamic Systems" (GRADIENT), which is partly supported by the European Strategic Programme for Research and Development in Information Technology (ESPRIT) of the European Communities.

The paper starts with a short description of the concept of user modelling. Thereafter, the framework underlying the present work is described. Then, results from a combined knowledge elicitation and task analysis procedure are summarised. Finally, the concepts underlying the implementation in a knowledge based system are described.

CLASSIFICATION OF USER MODELS

In the literature, several ways of classifying user models have been suggested, e.g., Rich (1983) and Sleeman (1985). Sleeman (1985) pointed out the important distinction between process user models and non-process user models. Process user models actually aim at simulating cognitive processes of users, whereas non-process user models do not. Thus, the terms cognitive modelling and process user modelling may be used interchangeably. For a supervisory task, cognitive modelling implies mapping of cognitive representations as well as of the cognitive processes related to these representations. A non-process approach to user modelling does not aim at actually simulating cognitive processes of users. Rather, this approach stresses the structure of the tasks to be performed by the user. Nevertheless, assumptions about cognitive processes may be entailed in a non-process user model, but the focus is not on the simulation of the cognitive processes.

A cognitive modelling approach requires that cognitive theories exist which may be used as a basis for the modelling. In addition, the theories should have been tested either through experiments or through simulation (preferably both). Current cognitive theories in the field of man-machine systems, e.g., the frameworks described by Rouse (1983) or by Rasmussen (1981, 1983), have not been extensively tested experimentally, nor do simulation models with explicit modelling of cognitive structures and cognitive processes exist. Thus, modelling of operators' information processing behaviour must at the present time be pursued using a non-process user modelling framework. As already mentioned, this class of user models focuses on the description of structures of tasks and strategies used by operators, without claiming that the cognitive structures and cognitive processes underlying operators' strategies are simulated.

GENERAL FRAMEWORK

The main assumption is that the information processing behaviour of operators can be described as choices between hypotheses about states of the technical system and/or hypotheses about possible actions to take. It is assumed that hypotheses are represented in cognitive structures, which are believed to guide information search. These cognitive structures do not only represent hypotheses, but also entail strategies which are used in order to evaluate the hypotheses. All operators' choices aim at achieving an overall goal, which can be decomposed into subgoals. Subgoals indicate necessary and sufficient conditions with respect to a global goal. Two types of overall goals are distinguished: overall goals related to states of the technical process being supervised, and what could be called information processing goals. Overall goals pertaining to states of the technical process being supervised are assumed to fall into one of the following categories (cf. Knauper & Rouse, 1985): transitions (start-up, shut-down, load change), steady-state-tuning, and failure management. Correspondingly, the overall goals distinguished on an information processing level


are categorisation, planning, and action (cf. Rouse, 1983). It is assumed that all information processing activities of operators are related to one of the three overall information processing goals. Information processing activities related to categorisation, planning, and action are listed in Table 1.

TABLE 1 Information Processing Activities Related to Categorisation, Planning and Action

GOALS            INFORMATION PROCESSING ACTIVITIES

CATEGORISATION   perception and classification of states in the technical system

PLANNING         preparation of actions, i.e., check whether necessary and/or sufficient conditions for specific actions are satisfied

ACTION           observe effects of current actions and judge outcomes of decisions
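The goal-activity mapping of Table 1 can be sketched as a simple lookup structure. This is an illustrative reading of the table, not part of the original work; all Python names are my own.

```python
# Illustrative sketch (not from the paper): Table 1 as a lookup
# structure mapping each overall information processing goal to
# its associated information processing activities.
INFORMATION_PROCESSING_GOALS = {
    "categorisation": [
        "perceive and classify states in the technical system",
    ],
    "planning": [
        "check whether necessary and/or sufficient conditions "
        "for specific actions are satisfied",
    ],
    "action": [
        "observe effects of current actions",
        "judge outcomes of decisions",
    ],
}

def activities_for(goal: str) -> list[str]:
    """Return the information processing activities for an overall goal."""
    return INFORMATION_PROCESSING_GOALS[goal]
```

A designer's tool could use such a structure to index which display information supports which goal.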

Information processing activities related to categorisation and planning are closely intertwined. In both cases the concern is with choices of hypotheses on which later actions are based. The main aspect of information processing activities associated with actions is to judge whether or not the actions taken produce the expected effects. Thus, choices are evaluated through monitoring of consequences of actions, i.e., the outcomes of the decisions. It should be stressed that explicit assumptions about how plans are generated and executed are not made. For suggestions on how to describe plan generation and execution, see Schank & Abelson (1977), or Johannsen & Rouse (1983) for an example in the field of man-machine systems.
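Judging whether actions produced the expected effects, as described above, can be sketched as a comparison of expected against observed tendencies of process variables. The function and the variable names below are hypothetical illustrations, not taken from the paper.

```python
# Sketch (assumed, not from the paper): a decision outcome is judged
# by comparing the expected effects of an action with the observed
# tendencies of process variables.

def judge_outcome(expected: dict[str, str], observed: dict[str, str]) -> bool:
    """An action is judged successful when every expected effect
    (process variable -> tendency) matches what is observed."""
    return all(observed.get(variable) == tendency
               for variable, tendency in expected.items())

# Hypothetical example: after raising the flow controller position,
# the operator expects the preheater level to fall and the emergency
# flow controller to close.
expected = {"preheater_level": "decreasing",
            "emergency_flow_controller": "closing"}

print(judge_outcome(expected, {"preheater_level": "decreasing",
                               "emergency_flow_controller": "closing"}))  # True
print(judge_outcome(expected, {"preheater_level": "increasing",
                               "emergency_flow_controller": "closing"}))  # False
```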

In the literature on cognitive biases it has been suggested to distinguish between two major types of cognitive activities: predictive activities, and activities related to explaining events (Lewicka, 1985). Predictive activities are related to "How"-questions, i.e., given the set of conditions P, what are the possible goal-states? Search for an explanation is related to "Why"-questions, i.e., given the set Q, which are the conditions P leading to Q? For example, during failure management, "How"-questions are closely related to how operators can compensate failures, i.e., stabilise the technical process without necessarily knowing the cause of the failure. On the other hand, "Why"-questions are closely related to failure diagnosis, i.e., search for an explanation of a certain state in the technical system. The two types of cognitive strategies correspond to the two reasoning strategies suggested in the field of artificial intelligence, forward and backward chaining (Barr & Feigenbaum, 1982). Reasoning with a forward chaining mechanism starts with a set of conditions and finds out at which conclusion these conditions point. Thus, forward chaining corresponds to predicting events on the basis of a set of given conditions. In the case of backward chaining, reasoning starts from a conclusion, whose validity is checked by going "backwards", thereby evaluating the conditions of the conclusion. Backward chaining corresponds to asking "Why"-questions, i.e., asking for possible causes of the given state of the technical system.

GOALS OF KNOWLEDGE ELICITATION

On the basis of the general framework, goals for knowledge elicitation may be specified. Since operators' information processing behaviour is seen as

decision making behaviour, the major focus of knowledge elicitation is on describing decision alternatives and criteria for choosing between alternatives. Recall that decision alternatives are either hypotheses about the state of the technical process or hypotheses about possible actions to take. The four goals listed below should make it possible to specify which information an operator needs, and how this information is used in order to choose between alternative actions.

Goal 1. Collect information about what an operator needs to see in order to perceive that a change has taken place in the technical system during the occurrence of a failure.

Goal 2. Collect information about how operators may proceed in order to compensate for the failure. This includes knowledge about what operators have to check for in order to choose between actions related to "How"-questions.

Goal 3. Collect information about strategies to diagnose causes of the symptoms during a failure situation. Thus, knowledge about possible causes and strategies to find out what caused a failure has to be gathered. This knowledge is related to operators' choices between actions pertaining to "Why"-questions.

Goal 4. Collect information about which process variables an operator needs in order to judge the outcome of actions. In addition, knowledge is needed about which process variables an operator needs to monitor during steady-state-tuning.

METHODS USED FOR KNOWLEDGE ELICITATION

A combined task and knowledge elicitation procedure as suggested by Borys, Johannsen, Hansel & Schmidt (1987) was adopted. In a first step, schematics, manuals and instruction material from a power plant school were analysed. Detailed knowledge about the technical process and about possible strategies of operators was gained through using a simulator available in a power plant school. This simulator was used in two four-hour sessions during which a failure in the high pressure preheating system of a coal-fired power plant was simulated. In the first session, the focus was on detailed documentation of the behaviour of the technical process. The second session centred on operators' strategies in the failure situation. In addition, operators were observed during a two-hour training session.

To record acquired knowledge, three types of time-lines were used:
1) Time-line to record knowledge about the power plant process and the control system without considering human actions.
2) Time-line to record knowledge about the technical process, about the control system and about optimal operator strategies.
3) Time-line recording steps an operator may go through during the failure management.

Knowledge represented in the three time-lines partly overlaps. The validity of the acquired knowledge was checked in discussions with operators and the engineer who trains operators at the power plant school.

Failure Situation. The main "symptom" of the failure is an increase in the level in one of the high pressure preheaters. The operator has about one minute for failure compensation. If the failure cannot be compensated during this time, a by-pass pipe is automatically opened. Thus, the technical system is stabilised at a lower level of efficiency. Once the by-pass is secured, operators have time to perform failure diagnosis.


RESULTS

First, it is described how knowledge about operators' behaviour may be summarised using rules of action. Thereafter, the concepts underlying the implementation of a knowledge based system are described.

Representation of Knowledge in Rules of Action

Knowledge about operators' choices and strategies recorded in the time-lines was summarised in two types of rules of action, both derivable from the general framework: a) rules of action related to "How"-questions, i.e., failure compensation, and b) rules of action related to "Why"-questions, i.e., failure diagnosis. Both types of rules describe how operators could proceed as a function of which messages have been displayed. An illustration of each type of rule is provided in Tables 2 and 3.

TABLE 2 Rule of Action for Failure Compensation

IF   message "level in high pressure preheater is in emergency drain" has occurred
&    message "level in high pressure preheater is high" has occurred
&    position of flow controller of high pressure preheater is not between 70%-100%
THEN change mode of operation of flow controller to manual
&    change position of flow controller to 70%-100%
&    monitor level in high pressure preheater
&    monitor position of emergency flow controller

If the rule described in Table 2 is successfully employed, the level in the preheater decreases and the position of the emergency flow controller approaches zero.

TABLE 3 Rule of Action for Failure Diagnosis

IF   message "level in high pressure preheater is in emergency drain" has occurred
&    message "level in high pressure preheater is high" has occurred
&    position of the flow controller was changed to 70%-100%
&    the emergency flow controller closed
&    the level in the high pressure preheater could be stabilised
THEN the flow control valve is clogged

The rule described in Table 3 prescribes the same actions as the rule in Table 2. The difference between the two rules is that the rule described in Table 3 not only suggests what the operator can do, but also how the different events can be evaluated in order to make a choice of a hypothesis about the state of the technical system.

Representation of Knowledge in a Knowledge Based System

The general framework suggested to represent operators' information processing behaviour within a decision making framework. Thus, the representation in a knowledge based system should allow the designer of the graphical interface to understand which different alternatives an operator may choose between, and what the operator needs to see in order to make these choices. In other words, the

steps an operator goes through during failure management need to be modelled, and each step needs to be represented in such a way that designers can access information characterising each step. The Knowledge Engineering Environment (KEE, Intellicorp, 1986) provides the KEEworlds facilities, which allow for such a stepwise modelling of actions. A KEEworld is simply a set of facts representing a hypothetical situation, for example a problem solving state. In the present case, KEEworlds are used to represent the different steps an operator may go through during failure management. Each world represents the results of a specific action (or of actions) of operators, and is generated through application of rules modelling the actions of operators. Thus, facts in each world depend on which facts are represented in the "background" and which action(s) the operator has chosen. The "background" is a KEE knowledge base representing facts which will be present in all generated worlds. Hence, the knowledge base forming the "background" is the starting point for the modelling of operators' information search behaviour. Thus, knowledge about what the operator needs to see has to be represented in a knowledge base which later is used as a "background" for generating KEEworlds representing different steps during failure management.

The structure and content of the "background" knowledge base designed on the basis of the knowledge elicitation and the task analysis is described in Fig. 1. Knowledge about information being available to operators is represented using frames, whereas actions are represented in rules. Information for an arbitrary failure situation in a conventional power plant is represented as class units (sets of objects). Member units (instances of a class) represent information pertaining to the analysed failure situation. Slots shared by all objects in the knowledge base are "check", "monitor", "search", and "KKS". The slot "check" indicates whether an operator has to check this information during a problem solving step. Correspondingly, the slots "monitor" and "search" indicate whether an information item has to be monitored and/or searched for. Finally, the slot "KKS" gives the name of the represented object using a power plant designation system. In addition to these four slots, the class unit 'Measurement' has the two slots "tendency" and "status". The slot "tendency" indicates the direction of a measurement, i.e., increasing, decreasing or stable. The slot "status" indicates whether or not a measurement is within normal range. All member units have a slot "status", but the set of slot values allowed differs depending on the object type being represented.

Knowledge about actions is represented as IF...THEN rules divided into two classes, i.e., rules about failure compensation and rules about failure diagnosis. These rules correspond to the action rules described above, and represent possible actions of operators. All rules modelling steps an operator goes through during failure management generate a new world representing added and/or deleted facts. The commands available in KEEworlds allow the designer to access the facts in each generated world. It is planned to represent each KEEworld using the graphical capabilities of KEE.

CONCLUSION

As long as thoroughly tested information processing theories on problem solving behaviour in supervisory and control systems do not exist, a non-process user modelling approach has to be adopted. Nevertheless, it is necessary to specify a general framework that guides knowledge elicitation and task analysis. In the present work, a decision making framework was chosen to describe operators' information processing behaviour during failure management.


The advantage of a decision making framework is that the focus is on the information necessary in order to evaluate decision alternatives, and on the decision alternatives available to operators. The work presented constitutes the first step in the development of a support tool for designers of graphical interfaces to dynamic technical systems. Future work is concerned with how to display the information an operator needs during failure management.

ACKNOWLEDGEMENTS

The support of the Commission of the European Communities under ESPRIT P857 for the work reported in this paper is gratefully acknowledged. The GRADIENT project is carried out by a consortium consisting of Computer Resources International (Copenhagen, Denmark), Brown Boveri & Cie (Heidelberg, F.R. Germany), University of Kassel (Kassel, F.R. Germany), University of Strathclyde (Glasgow, Scotland), and the Catholic University of Leuven (Leuven, Belgium).

REFERENCES

Barr, A., & Feigenbaum, E. A. (1982). The Handbook of Artificial Intelligence. Vol. 2. London: Pitman.
Borys, B.-B., Johannsen, G., Hansel, H.-G., & Schmidt, J. (1987). Task and knowledge analysis in coal-fired power plants. IEEE Control Systems Magazine, 7, 26-29.
Johannsen, G., & Rouse, W.B. (1983). Studies of planning behaviour of aircraft pilots in normal, abnormal, and emergency situations. IEEE Transactions on Systems, Man, and Cybernetics, 13, 267-278.
KEE (1986). Software Development System. User's Manual. Intellicorp.

[Figure residue: Fig. 1 shows the "background" knowledge base as a tree of class and member units, covering failure management actions (compensation and diagnosis actions for the high pressure preheaters), information overviews (condenser and feedwater pressures, quantities and temperatures, live steam variables, load, O2), measurements (levels, pressures, quantities, temperatures), messages, and controller and valve positions.]

Fig. 1. The Knowledge Base "Operator1".

Knauper, A., & Rouse, W.B. (1985). A rule-based model of human problem solving behaviour in dynamic environments. IEEE Transactions on Systems, Man, and Cybernetics, 15, 708-719.
Lewicka, M. (1985). Positive-negative evaluative asymmetry and human cognitive biases. Paper presented at the Tenth Research Conference on Subjective Probability, Utility and Decision Making, Helsinki, Finland.
Rasmussen, J. (1981). Models of mental strategies in process plant diagnosis. In J. Rasmussen & W.B. Rouse (Eds.), Human Detection and Diagnosis of System Failures. New York: Plenum.
Rasmussen, J. (1983). Skills, rules and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13, 257-266.
Rich, E. (1983). Users are individuals: individualising user models. International Journal of Man-Machine Studies, 18, 199-214.
Rouse, W.B. (1983). Models of human problem solving: Detection, diagnosis, and compensation for system failures. Automatica, 19, 613-625.
Schank, R., & Abelson, R. (1977). Scripts, Plans, Goals and Understanding. Hillsdale: Lawrence Erlbaum.
Sleeman, D. (1985). UMFE: A user modelling front-end subsystem. International Journal of Man-Machine Studies, 23, 71-88.
