Two Approaches for Man-Computer Cooperation in Supervisory Tasks




Copyright © IFAC Man-Machine Systems, Xi'an, PRC, 1989

P. Millot, V. Taborin and A. Kamoun

Laboratoire d'Automatique Industrielle et Humaine, URA CNRS 1118, Université de Valenciennes et du Hainaut-Cambrésis, Le Mont Houy, 59326 Valenciennes Cedex, France

Abstract. The increase of computer decisional abilities in the supervision systems of automated processes is a very promising step for preventing human errors in fault detection, diagnosis and trouble shooting tasks, and therefore for guaranteeing process safety. However, the integration of decision tools, such as knowledge-based systems, in the supervision loop requires designing advanced man-machine interfaces with a view to implementing a real cooperation between man and computer. This paper presents the possible integration modes of decision tools in supervision systems and then describes two types of cooperation: 1) a vertical cooperation, in which the human operator is responsible for all process variables and can call a decision aid tool for problem solving, and 2) a horizontal cooperation, in which the supervision tasks can be dynamically shared between the man and a decision tool acting on the process. The principles for designing these two cooperation modes are described and evaluated in two industrial experimental contexts, according to technical criteria on the one hand and to the need of preventing conflicts between man and computer on the other.

Keywords. Supervision tasks, software decision tools, man-computer cooperation, advanced interfaces, man-computer conflicts.

INTRODUCTION


The supervision tasks of automated complex systems consist essentially in decision tasks: fault detection, diagnosis and trouble shooting, in order to guarantee the system safety. To prevent human errors, which may lead to serious consequences, decision aid tools must be integrated in the supervision systems for alarm selection and processing, diagnosis and trouble shooting. At present, many research works are being developed along these lines, most of them based on artificial intelligence and real-time expert systems, especially for the micro and macro operational situations encountered in supervision posts. This paper first presents the possible integration modes of the decision tools in the supervision systems and then describes two types of cooperation between man and computer, according to the capabilities of both decision makers and to the need of preventing conflicts between them.

Fig. 1. The four possible modes of integration of a decision tool in a supervision system, from Finin and Klein (87): (1) consultants, (2) monitors, (3) servants, (4) agents.

APPROACHES FOR MAN-COMPUTER COOPERATION

The increase in computer decisional abilities is a very helpful step towards man-machine system safety, but integrating decision tools requires designing advanced man-machine interfaces with a view to implementing a real cooperation between the man and the computer. In this context, the performance of the global man-machine system depends: 1) on the automation level chosen (or feasible) for the decision aid system (Sheridan, 84), and 2) on the ergonomic quality of the man-computer interfaces. The latter concerns the choice of the relevant information to transmit to the operator, the choice of the best adapted device for transmitting this information and, moreover, the definition of the dialogue between the man and the decision aid tool. The first condition may be discussed according to the four classes of decision tools, related to the four possible modes of integration in the supervisory loop of the process, as quoted by Finin and Klein (87), Fig 1. The first two classes concern decision aid tools which provide the human decision maker with advice: the consultants correspond, for example, to expert systems for diagnosis aid, reasoning from static data in order to locate a faulty component of a process stopped after a breakdown; the monitors gather more recent tools based on real-time expert systems, meant to deal with more general cases in which the process is not only stopped after a breakdown but whose operation mode can also be sub-optimal because of disturbances, human mistakes or defects. PICON by Moore and col. (84) or ESCORT by Sachs and col. (86) are first examples of these tools. Indeed these techniques must still be improved, and several research works on non-monotonic reasoning (Sourmail and col., 87; McDermott, 82) and on qualitative modelling (DeKleer, 84; Kuipers, 84; Caloud, 87; Tang and col., 88) are promising for accounting for time in the knowledge base of real-time expert systems. The expected interest of these tools lies in their capability to predict the evolution of the process and therefore in their ability to provide the human supervisor with predictive alarms and preventive advice. Moreover, in the near future, when these tools are improved and completely reliable, we foresee no limit to transforming them into on-line tools for control and supervision (i.e. agents, class number four, Fig 1). We can notice that class number three (Fig 1) would correspond to a master-slave relationship between the decision making system and the man, in which the human operator would be the assistant of the computer. Fortunately there is no example of this class at present. Recent research works and developments concern especially classes number 2 and 4. In this perspective, the decisions concerning the process will be made by two kinds of decision makers: the human supervisor(s) on the one hand and the Artificial Intelligence system(s) on the other. Therefore, the designer of man-machine systems must cope with this new situation and define the new role of the human operator in the supervisory loop, especially for preventing conflicts and implementing cooperation between man and AI systems.

In this context, we can define two types of man-computer cooperation modes according to the class of the decision tool:
- A vertical cooperation, in which the operator is responsible for all process variables and in which he can call, if necessary, a decision aid tool (i.e. a monitor) for problem solving; a conflict can happen when the operator does not agree with the solutions proposed by the decision aid system.
- A horizontal cooperation, in which supervision tasks and consequent corrective actions can be dynamically allocated between the man and an agent; this cooperation can be implemented according to two principles. The first principle is an "explicit" allocation controlled by the human operator through a dialogue interface. The man is his own estimator of performance and workload, and he allocates tasks to the computer when he is overloaded. The implementation is easy, but the major inconvenience is the extra workload involved for the man by organizing the allocation and sending the commands to the computer. The second principle is an "implicit" allocation controlled by the computer. In this method, the major difficulty to be solved concerns the possible conflicts between the man and the computer during the allocation of tasks (Greenstein and Revesman, 81). To avoid these conflicts, Greenstein and Revesman (86) propose to insert a human predictive model in the task allocation control system. Our approach is slightly different, based on an optimal control method (Millot, Willaeys, 85).

We describe below two studies aiming at designing advanced man-machine interfaces and at avoiding conflicts in both contexts of cooperation: the vertical cooperation and the implicit dynamic task allocation, respectively.

VERTICAL COOPERATION BETWEEN MAN AND COMPUTER

Background of the study

The study takes place within the French project ALLIANCE*, aiming at implementing a real expert system for alarm filtering, diagnosis and trouble shooting. The process is a power station used for testing steam generators before they are installed on real nuclear power plants. Its structure is similar to that of a nuclear plant: a heat generator produces and transmits energy to the steam generator, which transforms it into pressurized water. The heat generator consists of electrical resistors. The process is controlled by a computer and supervised by human operators in a control room. At present, the developed expert system continuously predicts the future evolution of the process along a prediction horizon of several minutes (up to 15 min) with a prediction period of one minute (Caloud, 88). It compares the process future evolution with alarm thresholds and, if need be, provides the operator with preventive advice. Our research work concerning the integration of the expert system in the supervisory loop of the process is now described.

Interface between the decision aid system and the operator

The interface has been designed according to the operator's information needs in the different operation modes of the process: 1) monitoring of the normal operation and fault detection, 2) problem solving and trouble shooting in the abnormal operation mode.

Monitoring of the normal operation mode, and fault detection. In this kind of interconnected process, the human supervisor monitors only a reduced number of significant variables during normal operation; in the present process this number is seven. Therefore, the seven significant variables have been grouped on a "star view", adapted from the ideas of Coekin (68) and Woods (81). The star view consists of seven radii of the same circle, Fig 2. The characteristics of each variable are displayed on its radius: the set value and six thresholds (high, very high and high alarm; low, very low and low alarm). These characteristics are normalized for each variable so that they are linked together by concentric circles. The amplitude of each variable is represented by a point on its radius. The seven points are linked with straight lines to form a regular polygon when the process operation mode is normal, Fig 2a. When a defect appears, the polygon is deformed, which alerts the operator, and the future evolutions of the process state are successively displayed according to the expert system's predictions, Fig 2b.

Fig. 2. The star view for monitoring and fault detection tasks: (a) normal operation mode; (b) abnormal operation mode.

The interface is a graphic colour screen with three windows, Fig 3. The star view is displayed in the first window, on the right-hand part of the screen. The expert system provides at the same time correction or prevention advice, which is displayed in the second window, on the upper left-hand side of the screen, Fig 3. The process is then in an abnormal operation mode.
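As an illustration of the star-view construction just described, a minimal sketch might compute the polygon vertices from normalized amplitudes as follows. The scales, set values and the mapping of thresholds are hypothetical, not the ALLIANCE implementation:

```python
import math

def normalize(value, low_alarm, high_alarm):
    """Map a raw amplitude onto [0, 1] along its radius (0 = low alarm,
    1 = high alarm); values outside the alarm band are clipped."""
    span = high_alarm - low_alarm
    return min(1.0, max(0.0, (value - low_alarm) / span))

def star_polygon(values, limits):
    """Return the (x, y) vertices of the star-view polygon: one radius per
    variable, evenly spaced around the circle, radius = normalized value."""
    n = len(values)
    pts = []
    for i, (v, (lo, hi)) in enumerate(zip(values, limits)):
        r = normalize(v, lo, hi)
        theta = 2 * math.pi * i / n
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# Seven variables all at their (hypothetical) set value, mid-band:
vals = [50.0] * 7
lims = [(0.0, 100.0)] * 7
poly = star_polygon(vals, lims)
radii = [math.hypot(x, y) for x, y in poly]
# equal radii draw a regular heptagon; a disturbed variable would deform it
```

A defect then shows up visually: any point drifting off its concentric circle breaks the regular polygon, which is what alerts the operator.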


Fig. 3. Interface between expert system and human supervisor.

Abnormal operation mode. The prevention (or correction) advice includes the label of the disturbed variable and of the actuator concerned by the action, the amplitude of the action, and the maximum possible delay of intervention. When several pieces of advice exist, they are ranked according to their respective intervention delays, Fig 3. In this context, the human supervisor can react in three different ways when faced with the advice: 1) he has no solution for correcting the defect and therefore tends to apply the expert system's advice; 2) his own solution agrees with that of the expert system and he is comforted in his decision; 3) his own solution disagrees with that of the expert system. In this last case a conflict appears between the two decision makers, and the human operator must check the processing validity of the decision aid tool, with a view to finding a consensus, i.e. defining who is right and why. This checking task can be long and complex, and the decision tool must help the man by explaining its processing in such a way that the consensus can be obtained as quickly as possible. For that purpose, our approach consists in analysing in parallel the problem solving process of each decision-maker, in defining the possible consensus points, and in deducing the relevant information the decision aid tool must provide the operator with in order to obtain the consensus. The implemented images for explaining the expert system's reasoning process are first presented and then our design approach is discussed.

Images displaying the expert system justification. The expert system may justify its reasoning process by displaying the diagnosis which has led to the advice, and the set of variables taken into account for achieving this diagnosis. These data are grouped in a synthetic view called the "propagation view", which the operator can call with a dedicated keyboard.

-11

"!\'"O .-\ pproaciles for \lall- ( :0 III pllll'!" Coo\>eral iOIl ill Sll \>cnison "Llsks

The propagation view is then displayed in window number one, Fig 4. The set of variables concerned with the defect is presented on a simplified synopsis of the process. The variables are linked with oriented arcs which represent the propagation of the defect from its origin to the different variables concerned by the correction (or prevention) advice.
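The oriented arcs of the propagation view amount to a walk over a dependency graph from the faulty origin. A minimal sketch, with an entirely hypothetical influence graph and variable labels, could be:

```python
from collections import deque

# Hypothetical dependency graph: arcs go from a variable to the variables
# it disturbs when it drifts (the oriented arcs of the propagation view).
influences = {
    "P02": ["T03", "T15"],   # illustrative labels only
    "T03": ["D04"],
    "T15": [],
    "D04": [],
}

def propagation(origin, graph):
    """Breadth-first walk from the faulty origin, returning each arc that
    carries the defect to a newly reached variable."""
    reached, arcs = {origin}, []
    queue = deque([origin])
    while queue:
        v = queue.popleft()
        for w in graph.get(v, []):
            if w not in reached:
                reached.add(w)
                arcs.append((v, w))
                queue.append(w)
    return arcs

arcs = propagation("P02", influences)
# the arcs trace the defect from its origin to every variable concerned
# by the correction (or prevention) advice
```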

Fig. 4. The propagation view.

If the operator needs further explanations, he can call a second justification level, the "curve view", Fig 5, which takes place in the first window. This view displays several curves of variables. Each curve consists of:
- the historic curve of the variable, on the left-hand side of the screen,
- the instant of the last prediction,
- the curve of the future evolution predicted by the expert system, on the right-hand side,
- given that the prediction period is greater than the sampling period of the variable, a third curve displaying the real evolution of the variable since the last prediction instant: this allows the operator to compare the predicted evolution with the real one.
These curves are updated at each prediction instant. The third window is used for displaying a help facility the operator can call if needed for the use of the interface.

Fig. 5. The curve view.

Design approach for solving conflicts. When a conflict appears between the expert system and the operator, the latter must find a consensus as fast as possible. For that purpose, we have analyzed in parallel each decision-maker's problem solving process in order to determine the possible consensus points. The design approach consists in setting in parallel the operator's mental pathway and the expert system's reasoning process. For the human operator we used Rasmussen's well-known model (80) in four stages: event detection, situation assessment, decision-making, action. The expert system's reasoning process is supposed to be divisible into three steps: 1) a supervisory step concerned with the prediction of the significant variables' evolutions, 2) a fault detection step that leads to identifying the predicted abnormal evolutions, and 3) a correction step that concludes with the definition of a set of preventive actions. In this context, the operator only detects the conflict at the end of the processing of each decision maker. Therefore, the consensus must be found by "back tracking" through the problem solving process. For example, when the procedures of actions differ, this can be due to different diagnoses (situation assessments). Diagnosis is then a consensus point, and the "propagation view" must help the operator find this consensus. Furthermore, diagnoses may differ when the operator has missed some variables when assessing the situation, and the set of observations is another possible consensus point, for which the "curve view" has been defined. Figure 6 summarizes the different images constituting the interface, in parallel with the operator's problem solving process modelled by a "linearized" form of Rasmussen's model. This interface has been implemented on the power station in Cadarache and is being tested by the operators.

HORIZONTAL COOPERATION BETWEEN MAN AND COMPUTER: DYNAMIC TASK ALLOCATION PRINCIPLES

The control and supervision structure of a general process can be decomposed into three levels: 1) the process, 2) the automated control and regulation level and 3) the supervision level, usually assigned to the human operator, which concerns decision tasks such as goal setting, fault management, planning and general organization tasks. Decision tools such as the agents previously presented will be integrated in the supervision level for assisting the human supervisor in fault management tasks. The research work on dynamic task allocation then aims at defining a general organization of the supervision level in which the responsibilities for the variables are dynamically shared between the man and an agent performing fault management tasks. The principle consists in including in the process supervision system a task allocator which dynamically shares the information X coming from the process between the man and a decision tool, Fig 7 (Millot, Willaeys, 85). As mentioned in the first section, the dynamic allocation can be controlled either by the human operator (explicit allocation) or by the computer (implicit allocation). In the first case the man plays the role of task allocator according to his own estimation of performance and workload, which can increase his global workload. Therefore, the research work we present concerns the second case.

Fig. 7. Principles of dynamic allocation of supervision tasks.

Implicit dynamic allocation principle. In the principle we propose here, an optimal control system drives the task allocator through the optimization of a criterion E obtained by comparing the control objectives and the state X of the process. The task allocation policy aims at searching for an optimal performance of the process by iteratively modifying the number of tasks assigned to each decision-maker. The control of the task allocator must take into account: 1) the set of tasks that man and computer are both able to perform, which defines the allowed commands of the task allocator; 2) the allocation policy aiming at optimizing the process performance E and at preventing conflicts between the two decision-makers. These conflicts can occur, for instance, when


Fig. 6. Synthesis of the different views for solving conflicts between expert system and operator: the curve and propagation views, with their levels of justification, are set in parallel with the steps of the operator's decisional pathway.

the man tries to process variables allocated to the computer by the task allocator. Such conflicts contribute to a decrease in performance (Greenstein, Revesman, 81). 3) The human resources, in order to avoid human overload and underload. With a view to respecting this third condition, two limits of maximum and minimum acceptable workload, WLmax and WLmin, are introduced as constraints in the optimal control system of the task allocator. An on-line assessment method has therefore been defined for human workload, in order to check these constraints during the dynamic allocation. The method is based on an observer model which receives the same information H as the operator, as well as the actions Um performed by him (Millot, Kamoun, 88). From these data, the model deduces the operator's occupation rate and the demands of each task he performs. Task demands and occupation rate are then aggregated on line, in order to determine the sampled

workload at each sampling instant ΔT. In the experimental context presented in this paper, the sampled workload calculated at the jth sampling instant can be written as follows:

    WLj = (Aj · ΔT / TDj) · Σ (i = 1..N) Gji

where:
- ΔT is the sampling period,
- TDj is the sampled available time, corresponding to the temporal pressure,
- Aj is the operator's occupation rate,
- Gji is the gravity function corresponding to the task functional demands, Gji ∈ [0, 1].
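Under this reading of the formula, the workload computation can be sketched as follows; all numerical values are hypothetical:

```python
def sampled_workload(occupation_rate, dt, available_time, gravities):
    """Sampled workload WLj = (Aj * dT / TDj) * sum_i Gji, aggregating the
    occupation rate Aj, the temporal pressure dT/TDj and the functional
    demands Gji in [0, 1] of the N tasks in progress."""
    assert all(0.0 <= g <= 1.0 for g in gravities)
    return occupation_rate * dt / available_time * sum(gravities)

# e.g. dT = 5 s, 50 s of available time, operator busy 10% of the period,
# two tasks of moderate gravity:
wl = sampled_workload(0.10, 5.0, 50.0, [0.3, 0.2])
# -> 0.005 (s/s)
```

A value like this would then be compared against the WLmin and WLmax constraints at each allocation step.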


Conflicts must be prevented by informing each decision-maker about the tasks allocated to him. Furthermore, in the case of supervision tasks for interconnected variables, these must be grouped into uncoupled subsets of variables. This problem also concerns the definition of the allowed commands, and the task allocation consists in assigning one or several mutually independent subsets, so that each task performed by one of the decision makers does not perturb the other one's corrective actions. Nevertheless, these two precautions may not be enough to prevent the man from attributing to himself variables allocated to the computer. In fact, we are faced with a paradox, still unknown in automation science, because we must define a hierarchical system in which a decision maker at a lower level of the hierarchy (here the man) can contest or refuse a command given by a higher level (here the task allocator). Therefore it is necessary to define precisely the role attributed to the human operator in the present context:
- either a role of partial decision maker, only responsible for the variables allocated to him by the allocator;
- or a double role of partial controller and global supervisor, who can intervene in case of a computer failure.
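The grouping into uncoupled subsets is essentially a connected-components computation over the coupling relation between variables. A sketch, under a hypothetical coupling relation and variable names:

```python
def uncoupled_subsets(variables, couplings):
    """Group variables into connected components of the coupling graph.
    Two variables are coupled when a corrective action on one perturbs
    the other; the allocator then hands out whole components only, so one
    decision maker never disturbs the other's corrective actions."""
    adj = {v: set() for v in variables}
    for a, b in couplings:
        adj[a].add(b)
        adj[b].add(a)
    seen, subsets = set(), []
    for v in variables:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        subsets.append(comp)
    return subsets

groups = uncoupled_subsets(["v1", "v2", "v3", "v4"], [("v1", "v2")])
# allocating {"v1", "v2"} to the computer leaves v3 and v4 free for the
# operator without interference
```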


The choice between these two alternatives does indeed engage the designers' responsibility for the reliability and the safety of the man-machine system. In other words, are we sure enough of the total reliability of the computer to allocate to it the overall responsibility of the supervision? In the present state of technology it seems more advisable to adopt the second alternative. We have consequently chosen to display all the variables on the operator's screen, allowing him the overall supervision of the process, but also to limit his actions to the only variables allocated to him, with a view to avoiding conflicts. Moreover, if in this context the operator detects a computer failure, he can interrupt the implicit allocation with an emergency key and control the whole system manually. This principle has been implemented on an experimental platform. We now describe this platform and the experiments performed.
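The chosen policy (display everything, restrict actions to the operator's allocated set, emergency override) can be sketched as follows; the class and variable names are illustrative, not the platform's software:

```python
class SupervisionConsole:
    """Sketch of the display/action gating chosen above: every variable is
    shown (global supervision), but operator actions are only accepted on
    his allocated subset, unless the emergency key has suspended the
    implicit allocation."""

    def __init__(self, all_variables):
        self.all_variables = set(all_variables)   # always displayed
        self.human_set = set()                    # allocated to the man
        self.manual_override = False              # emergency key state

    def allocate_to_human(self, variables):
        self.human_set = set(variables) & self.all_variables

    def press_emergency_key(self):
        """Interrupt the implicit allocation: full manual control."""
        self.manual_override = True

    def accept_action(self, variable):
        """True when the operator's action on `variable` is allowed."""
        if self.manual_override:
            return variable in self.all_variables
        return variable in self.human_set

console = SupervisionConsole(["v1", "v2", "v3"])
console.allocate_to_human(["v1"])
# actions on "v2" are rejected until the emergency key is pressed
```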


Previous results showing the feasibility of the implicit allocation. The existence of an optimal allocation of tasks between the man and the computer has been shown previously (Millot, Taborin, Kamoun, Willaeys, 86). The experimental platform and the main results are only briefly described here. The platform includes a control station for a process simulated on a real-time industrial computer. The simulated process contains 96 independent variables which can be randomly disturbed. The operator or the computer has to detect disturbed variables and to compensate for each disturbance. The process performance is calculated as the sum of the absolute errors on all variables and is measured at each sampling time T (T = 1 min). The experiments concerned the relationship between the performance E of the process and the number of disturbed variables when the decision-makers were the man and PID regulators separately. In this context we simulated an allocation of the disturbed variables by attributing to the computer a set of variables which was the complement to 90 of the set allocated to the man. Therefore the total error ET, calculated by adding the error Em due to the operator and the error Ec due to the computer, represented the total error which would be obtained if the 90 variables had been shared between both decision makers. The results of these experiments have shown the existence of an optimal allocation when the performances of the process controlled by the man and by the computer are similar, Fig 8. Moreover, when the performances obtained by each decision-maker are very different, the optimal allocation consists in assigning all the variables to the best decision-maker, and can therefore lead to either a human underload or overload. In order to avoid these extreme situations, two workload constraints have been introduced, WLmax and WLmin; these prevent task sharings in which the operator deals with more than Nmax or fewer than Nmin variables respectively. From these results, corresponding to a static allocation, we have implemented a dynamic allocation.

Fig. 8. Conditions of existence of an optimal allocation.

Implementation of the implicit dynamic allocation. The implicit allocation has been implemented on the same experimental platform as previously, in which a task allocator has been integrated, Fig 9. The process performance is recorded at each sampling period T = 1 min and the human workload is estimated by the observer model at each period ΔT = 5 seconds.

Fig. 9. Experimental protocol for implicit dynamic allocation.

Given that the process performance is not known a priori, the allocation policy consists in modifying iteratively the number of tasks assigned to each decision-maker and in searching for the allocation which optimizes the global system's performance, while respecting the constraints of maximal and minimal workload acceptable for the man. This policy is based upon a predictive model of the performances Ec and Em, where Ec and Em are the performances of the subsystems controlled by the computer and by the man respectively (Kamoun, Debernard, Millot, 88). The idea therefore consists in determining at a given instant kT whether transferring variables from one decision maker to the other would improve the global performance ET((k+1)T). This requires the prediction of the future performances Em((k+1)T) and Ec((k+1)T) from the previous and present performances. The prediction of the future error Ed (either Ec or Em) due to decision maker d (either c or m), when the variable allocation between both decision-makers remains constant, is made by a linear extrapolation from the errors at the instants kT and (k-1)T. The prediction of the total error at the instant (k+1)T is computed as a function of the errors of each decision-maker. According to this prediction, the total error of the process at the instant (k+1)T when applying no modification is compared with the total error which would be made when transferring variables between both decision makers. If this last total error is lower than the previous one, the process performance would be improved and therefore the new allocation must be adopted. Defining the control policy requires defining the initial conditions of the iterative search for the optimal allocation, the iteration step to apply during the search, and the choice of the variables to be allocated:
- The initial conditions correspond to allocating an equal number of variables to each decision maker (Nc = Nm = N/2).
- The maximum number of variables transferred at each sampling period is five, which corresponds to the operator's maximum abilities for corrective actions during the period T (1 min).
- The choice of variables to be transferred is defined as follows: a random choice is made from the variables belonging to the computer, and a random choice is made from the variables belonging to the operator, among those that he has not yet corrected.
- Finally, the two workload constraints have been chosen experimentally: WLmin = 0.006 s/s and WLmax = 0.012 s/s.
This control policy has been implemented on the experimental platform.

EXPERIMENTS AND RESULTS

Experiments have been carried out with the same operators as for the static allocation.
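The iterative policy above can be sketched as follows. The linear extrapolation follows the text; the what-if model for a transfer (per-variable error rates, workload proportional to the number of allocated variables) is our simplifying assumption, not the authors' predictive model:

```python
def predict(e_prev, e_now):
    """Linear extrapolation of a decision maker's error at (k+1)T from its
    errors at (k-1)T and kT."""
    return e_now + (e_now - e_prev)

def allocation_step(em_hist, ec_hist, n_man, n_total, wl,
                    wl_min=0.006, wl_max=0.012):
    """One iteration: return the number of variables kept by the man.
    A transfer of five variables is adopted only if the predicted total
    error improves and the predicted workload stays in [wl_min, wl_max]."""
    em1, ec1 = predict(*em_hist), predict(*ec_hist)
    n_comp = n_total - n_man
    # assumed per-variable error rates, used only for the what-if transfer
    rm = em1 / max(n_man, 1)
    rc = ec1 / max(n_comp, 1)
    best_n, best_et = n_man, em1 + ec1
    for delta in (-5, 5):                  # at most five variables per T
        n_m = n_man + delta
        if not 0 <= n_m <= n_total:
            continue
        et = rm * n_m + rc * (n_total - n_m)
        wl_new = wl * n_m / max(n_man, 1)  # workload ~ task count (assumed)
        if et < best_et and wl_min <= wl_new <= wl_max:
            best_n, best_et = n_m, et
    return best_n

# the man's error is decreasing while the computer's stays high, so the
# step shifts five variables towards the man, workload permitting
new_n = allocation_step((40.0, 36.0), (60.0, 58.0), 45, 90, 0.009)
```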
The results have shown that the evolution of the total performance Eimp(t) obtained with a dynamic implicit allocation is very close to the best performance obtained in a static allocation context, without exactly reaching it, Fig 10. This is due to the transfer procedure of variables between the two decision makers. Nevertheless, these results show that the use of a model for predicting the performance evolution significantly increases the global performance of the human-computer system.

Fig. 10. Evolution in time of the errors from ten operators.

The workload evolution, Fig 11, shows that the two thresholds, WLmax and WLmin, contribute to avoiding a large workload variability for the operators. We can finally notice that the sampled workload is almost independent of the total number of disturbed variables.

Fig. 11. Evolution in time of sampled workload WL.

CONCLUSION AND PERSPECTIVES

After a brief presentation of the possible integration modes of decision tools in the supervision systems of automated processes, this paper has proposed two approaches for man-computer cooperation. The implementation conditions of these cooperations have been evaluated through two experimental studies. The first one concerns the vertical cooperation, in which the human operator can be assisted by an expert system providing advice for fault detection, diagnosis and trouble shooting. An advanced interface between the expert system and the human supervisor has been designed with a view to solving possible conflicts between them. The second type of cooperation is an implicit dynamic allocation of supervision tasks between the human operator and a decision tool which can directly act on the process. The dynamic allocation has been implemented on an experimental platform in such a way that conflicts between both supervisors (either the man or the computer) can be avoided. The experimental results have shown an increase in the global process performance and a decrease in the large variability of the operator's workload. The perspectives of this research concern, in the short run, the implementation of an explicit dynamic allocation (i.e. controlled by the man) with a view to comparing both dynamic allocation methods. In the long run we plan to evaluate the integration possibilities of both approaches in a mixed cooperation, in which the human operator could be assisted by a decision tool through a dynamic allocation and could, moreover, be assisted by a software aid in the execution of the tasks assigned to him by the task allocator.

* The ALLIANCE project is supported by the French Ministry of Research and Technology. It is managed by CEA and gathers several laboratories (LAG at Grenoble, LAIH at Valenciennes) and industrial companies of Artificial Intelligence (ITMI, IIRIAM), of engineering (SGN) and of production (EDF, SHELL).

REFERENCES

Caloud, P. (1987). Toward Continuous Process Supervision. Proc. IJCAI-87, Milan, August 1987, pp. 1086-1089.

Caloud, P., Descotte, Y., Feray Beaumont, S. (1988). Projet Alliance : Raisonnement qualitatif et Aide a l'Operation. Journee d'etudes "Systemes Experts et Conduite de Processus en Ligne", Societe des Electriciens et des Electroniciens, October 1988, Gif-sur-Yvette, France.

Coekin, J.A. (1968). A Versatile Presentation of Parameters for Rapid Recognition of Total State. International Symposium on Man-Machine Systems, September 1968, IEEE Conf. Record 69 C58-MM1.

De Kleer, J. (1984). A Qualitative Physics based on Confluences. Artificial Intelligence, vol. 24, no. 1, 1984, pp. 7-83.

Finin, F., Klein, D. (1987). On the Requirements of Active Expert Systems. 7emes Journees Internationales: Les Systemes Experts et leurs Applications, Avignon, France, June 1987.

Greenstein, J.S., Revesman, M.E. (1981). A Monte-Carlo simulation investigating means of human-computer communication for dynamic task allocation. IEEE Conference on Cybernetics and Society, New York, 1981, pp. 488-494.

Greenstein, J.S., Revesman, M.E. (1986). Application of a Mathematical Model of Human Decision-making for Human-Computer Communication. IEEE Trans. SMC, vol. 16, no. 1, pp. 142-147, January/February 1986.

Kamoun, A., Debernard, S., Millot, P. (1988). Implicit Dynamic Allocation of Tasks between Man and Computer based on Optimal Control. European Annual Conference on "Human Decision Making and Manual Control", October 1988, Paris, France.

Kuipers, B. (1986). Qualitative Simulation. Artificial Intelligence, vol. 29, no. 3, pp. 289-338, September 1986.

McDermott, D. (1982). A Temporal Logic for Reasoning about Processes and Plans. Cognitive Science, vol. 6, pp. 101-155, 1982.

Millot, P., Willaeys, D. (1985). An approach of dynamic allocation of supervision tasks between man and computer in control rooms of automated production systems. 2nd IFAC Congress on "Analysis, Design and Evaluation of Man-Machine Systems", Varese, Italy, September 1985.

Millot, P., Taborin, V., Kamoun, A., Willaeys, D. (1986). Effects of the dynamic allocation of supervision tasks between man and computer on the performances of automated processes. European Annual Conference on Human Decision Making and Manual Control, Cardiff, Great Britain, June 1986.

Millot, P., Kamoun, A. (1988). An Implicit Method for Dynamic Allocation between Man and Computer in Supervision Posts of Automated Processes. 3rd IFAC Congress on "Analysis, Design and Evaluation of Man-Machine Systems", Oulu, Finland, June 1988.

Moore, R.L., Hawkinson, L.B., Knickerbocker, L.G., Churchman, L.M. (1984). A Real-Time Expert System for Process Control. Proc. IEEE First Conference on A.I. Applications, 1984, pp. 569-576.

Rasmussen, J. (1980). The Human as a System Component. In: Human Interaction with the Computer, Smith & Green (Eds.), Academic Press, London, 1980.

Sachs, P.M., Paterson, A.M., Turner, M.H.M. (1986). ESCORT - an Expert System for Complex Operations in Real-Time. Expert Systems, vol. 3, no. 1, January 1986.

Sheridan, T.B. (1984). Supervisory Control of Remote Manipulators, Vehicles and Dynamic Processes: Experiments in Command and Display Aiding. Advances in Man-Machine Systems Research, vol. 1, pp. 49-137.

Sounnail, N., Tang, X., Millot, P., Willaeys, D. (1987). An Expert System for Process Control coping with Dynamic Information. IECON'87, Thirteenth Annual IEEE Industrial Electronics Society Conference, Cambridge, November 1987.

Tang, X., Grzesiak, F., Millot, P. (1988). Optimal Management of Dynamic Knowledge for Process Supervision Aid Systems. International Computer Symposium ICS'88, December 15-17, 1988, Taipei, Taiwan.

Woods, D.D., Wise, J.A., Hanes, L.F. (1981). An Evaluation of Nuclear Power Plant Safety Parameter Display Systems. Proc. Human Factors Society 25th Annual Meeting, 1981, pp. 110-114.
