Copyright © IFAC Distributed Intelligence Systems, Varna, Bulgaria, 1988
MAN-MACHINE DISTRIBUTED INTELLIGENCE

G. A. Boy

Groupe d'Intelligence Artificielle, Centre d'Etudes et de Recherches de Toulouse, Office National d'Etudes et de Recherches Aérospatiales, 2 avenue Edouard Belin, BP 4025, 31055 Toulouse Cedex, France
Abstract. Human-machine interactions constitute a special case of distributed intelligence. We have an ongoing research program addressing a variety of issues concerning the distribution of tasks and responsibilities between human operators and complex machines. An early product of this research, the simulation model MESSAGE, addressed transfer and processing of information in aircraft cockpits. This system enables the comparison of cockpits or procedures for given flight management tasks. Workload and performance indices are computed from information flow and content. In MESSAGE, there are several information processing systems (IPSs), which can be human or machine, communicating and cooperating through a common information database or "blackboard". The research centered on analyses of the structure of the messages exchanged between IPSs and the supervisory function of each IPS in managing these messages. The critical features of the system were found to be the interface filter and the ability of each IPS to cope with parallel and sequential processing. Humans have been shown to be very good parallel processors for highly learned tasks. Conversely, they handle new or uncommon situations sequentially. Recent investigations, carried out in cooperation with NASA, examined the effects of different distributions of autonomy between humans and machines on the performance of complex human-machine systems. These issues were investigated in an application for fault diagnosis in an orbital system for refuelling satellites. A cooperative expert system, HORSES, was developed and used in a series of experiments with naive and experienced operators diagnosing faults. It was found that the division of autonomy needs to be modified as the human operator learns to use the system. It was shown that humans can be helped by intelligent tools which are self-explanatory. Other research suggests that if the human cannot understand the whole system, he tends to become too reliant on automatic aids and to lose overall control of the system. Indeed, humans tend to be poor in pure monitoring; however, given some responsibility for action, their monitoring function can improve. One solution is total automation; however, if a human is to be included, intelligent tools can be used to monitor his performance and offer assistance, particularly in routine operations. This suggests that distributed intelligence systems could be used to "observe" what other IPSs (including human operators) are doing, in order to avoid undesired situations. This previous work has led to the concept of "Operator Assistant" (OA) systems, which are IPSs that simultaneously monitor the environment and the operator(s). An OA system includes a situation recognizer, a diagnoser, a planner and a controller. Each module works on multiple knowledge bases able to handle information coming from the environment, including the operator(s).
Keywords. Man-Machine Interactions, Distributed Intelligence, Decision and Complex System Control Aids.

1. Introduction

Automation has been the main issue in engineering for several decades. Engineers and scientists have focused on design and optimization of automated systems, e.g., autopilots for aircraft, satellite launching, nuclear power plant control, etc. Most of this work has not taken into account the man in the loop. So far, while there have been very few incidents and accidents on such systems, most of these have been shown to be human-dependent. Thus human operators become important limiting factors in the interaction loop with complex systems. The main problem remains that of sharing intelligence between human and machine agents. Human operators have been shown to be bad at monitoring without acting. However, when they act, people become very good at assessing situations. It seems that when human operators are needed in a control loop, they should do something which involves problem solving, without imposing an excessive workload. What, then, is the role of intelligence sharing in this picture? So far, humans have proven the most intelligent problem solvers for unexpected situations. Conversely, machines are far better at computing and executing very well known procedures, even if these procedures are very complex. So, if human operators have to be kept "in control" of the actual situation at any instant, presentation of information is a very important topic. A variety of machine agents could be used for providing such "intelligent" information to the operator. This paper focuses primarily on ways of formulating multiagent human-machine problems in dynamic situations. As part of this work, a model, called SRAR (Situation Recognition and Analytical Reasoning), will be introduced for the purpose of describing interacting "intelligent" agents. SRAR comes from previous studies in human-computer cooperative fault diagnosis (Boy, 1986). As we shall see, SRAR is well suited to the multiagent human-machine problems that have been tackled so far.

2. Man-Machine Systems

2.1. Managing the Information

Control of complex systems has moved from manual and procedural control to a higher level of control. In aircraft cockpits, for instance, early pilots were more in touch with the machine; now, they have to manipulate artificial devices which represent symbolic parameters and aspects of machine control. Thus, the human is more and more physically separated from "real" machine control. Since 1976, NASA has implemented an Aviation Safety Reporting System (ASRS) (Foushee & Manos, 1981) in which all personnel involved in aviation can submit anonymous reports of incidents and accidents. More than 70% of the reports mention information transfer problems. Less than 3% of the
reports are related to a technical failure. Problems are mainly related to verbal communication and visual information transfer. Recent results have shown that 80% are due to human errors (Nagel, 1986). Information transfer requires availability of the information and of the receptor. Moreover, information must be available to the receptor at the right time and in an appropriate format. The way of presenting information is very important. Auditory information has been shown to be less reliable than visual information. The ASRS study suggests that human factors (e.g., distraction, forgetting, imperfect monitoring, non-standard procedures) and system factors (e.g., degraded or unavailable information, ambiguous procedures, environment, higher task demand) have to be taken into account jointly in human-machine system design in order to minimize information management problems during operations. All these considerations have led to the design of a human-machine interaction model.

2.2. A Model

Our approach to human-machine interaction includes artificial intelligence concepts and techniques. Studying intelligent behavior requires giving preliminary definitions. The world is defined by a set of elementary states or facts which have values that may change. At a given instant, the world is characterized by its state (vector). An entity will be defined when it is separated from its environment. The world is then divided into two interacting parts: the entity and the environment. Intelligent entities have goals which have to be satisfied using a set of appropriate actions in particular situations. Intelligent entities are able to anticipate environmental reactions as well as the consequences of their own actions. Such anticipation works on the perceived facts about the environment. Intelligent behavior can be defined using a set of goals, a set of facts and a set of action principles.
2.2.1. Agents and Environment

An agent is an intelligent entity interacting with its environment. Whether it is a human or a machine, it can be modeled by channels (emitters and receptors), a knowledge base, and a processor. Multi-agent interaction is very sensitive to the corresponding social field structure. The social field structure is a network of information relations between agents. It is characterized by the structure and the nature of exchanged messages. Following these considerations, the environment of an agent is characterized by the set of the other agents in the defined social field.

2.2.2. Message

The message structure has been found to be the key factor of information management modeling. It can be modeled using three main properties: its content, its direction and its temporal characteristics (Figure 1). The message content can be separated into four fields: support and syntax; semantics; semiologic aspects; and unconditional aspects. Support and syntax define the language in which messages will be exchanged. The semantic content is coded information which gives significance to the message. The semiologic aspect of a message represents its degree of symbolic significance in the logic of the receptor, e.g., the same message can be interpreted as an order or an acknowledgement. The unconditional aspect is related to human factors such as personality, stress, fatigue, etc. The message direction specifies emitting and receiving agents. Temporal characteristics may include the decision time, beginning and end of the message emission, its normal duration, and beginning and end of the message reception (Boy, 1983; Tessier, 1984).

Figure 1. Message exchanged between agents: CONTENT (support and syntax, semantics, semiology, unconditional), DIRECTION (emitter, receptor), TEMPORAL CHARACTERISTICS.

2.2.3. Action

The effect of an action is to cause the world to jump from one state to another. Mathematically, an action is a set of ordered pairs of world states (si, sf), where si is a possible initial state before executing the action and sf is a corresponding final state after the action has been executed (Pednault, 1986). Such a definition is used in classical planning problems; however, in real-world human-machine problems the set of pairs for each action can be too complicated. In a multi-agent world, a more appropriate definition must be chosen. We have been considering so far that an agent acts when it sends one or several messages to the environment, i.e., to other agents in the social field. Thus, an action is a particular set of messages from one agent to the corresponding environment. Actions to be executed can be defined as scripts (Schank & Abelson, 1977). Scripts may be classified into action structures such as sequences, alternatives and waiting loops. An action sequence is a set of ordered actions. An alternative is an 'if-then-else' conditional branching on two action structures. A waiting loop is a 'while' conditional branching on one action structure. Action structures are often called plans (Wilensky, 1983).

2.2.4. Strategies and Behaviors

Each agent has its own strategies. A strategy is a set of goals to satisfy. Each goal involves action principles. Such principles can be classified according to three kinds of behavior: skill-based, rule-based and knowledge-based (Rasmussen, 1983). Skill-based behaviors are very simple; an action structure is applied as a result of a pattern matching between acquired situation patterns and the current situation. Rule-based behaviors use an inferential process which can be compared to the inference engines of "conventional" expert systems. At the knowledge-based level, more intelligent processes are needed. This level includes three parts: (1) identification of the situation, i.e., complete or partial pattern matching and conflict resolution, (2) decision making to start a planning process taking into account global goals, and (3) planning according to appropriate goals.
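To make the notions of message and action structure concrete, the following Python sketch gives one possible encoding; the class and field names are illustrative assumptions for this paper's concepts, not part of the MESSAGE system itself.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Message:
    """A message exchanged between agents (see Figure 1)."""
    emitter: str                     # DIRECTION: emitting agent
    receptor: str                    # DIRECTION: receiving agent
    support: str                     # CONTENT: support and syntax (language/medium)
    semantics: dict                  # CONTENT: coded information carried by the message
    semiology: str                   # CONTENT: symbolic role for the receptor (e.g. "order")
    unconditional: dict = field(default_factory=dict)  # CONTENT: human factors (stress, fatigue, ...)
    decision_time: float = 0.0       # TEMPORAL: when the emission was decided
    emission: tuple = (0.0, 0.0)     # TEMPORAL: beginning and end of emission
    reception: tuple = (0.0, 0.0)    # TEMPORAL: beginning and end of reception

# An action is a particular set of messages sent to the environment; action
# structures (sequences, alternatives, waiting loops) combine actions into plans.
Action = Callable[[], List[Message]]

def sequence(actions: List[Action]) -> Action:
    """An action sequence: ordered execution of several actions."""
    return lambda: [m for a in actions for m in a()]

def alternative(condition: Callable[[], bool], then_a: Action, else_a: Action) -> Action:
    """An alternative: 'if-then-else' branching on two action structures."""
    return lambda: then_a() if condition() else else_a()

def waiting_loop(condition: Callable[[], bool], body: Action) -> Action:
    """A waiting loop: 'while' branching on one action structure."""
    def run() -> List[Message]:
        sent: List[Message] = []
        while condition():
            sent.extend(body())
        return sent
    return run
```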
3. Sharing Various Abilities

Why are pilots still present in aircraft cockpits? This question is frightening and provocative. Pilots are still managing the flight decks of modern transport aircraft because engineers cannot imagine pre-programmed intelligent actions for unexpected situations. What kind of knowledge do pilots use in such situations? First of all, they own a set of patterns which they can match to the perceived situation. These patterns are the result of many years of experience. Such user knowledge is very difficult, and most of the time impossible, to decompile.

3.1. Levels of Autonomy

Sharing tasks between humans and machines is very important and poorly understood. In this section, we will extend the concept of levels of automation introduced by Sheridan (1984). We will give here a theoretical model of the performance of a simple human-machine system under different levels of autonomy, determined by different levels of understanding or knowledge about the human-machine system (Figure 2). The horizontal axis represents a continuum of levels of autonomy from "manual control" to "complete automation". The vertical axis represents the
performance of the human-machine system. Performance could be related to time, precision of results, cost to solve a given problem, or any heuristic criterion appropriate for measuring performance. Each curve of Figure 2 corresponds to a given level of knowledge or understanding. The lowest curve represents a poor knowledge level.

Figure 2. Human-Machine System Performance versus Levels of Autonomy (horizontal axis: levels of autonomy, from fully human to fully machine; vertical axis: human-machine system performance; each knowledge curve has an optimum and is bounded by technological limitations).
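The qualitative shape of Figure 2 can be sketched numerically. The functional form below is purely illustrative (an inverted-U whose optimum shifts up and to the right with knowledge, clipped by a technological limit); it is an assumption made for the example, not a model proposed in the paper.

```python
def performance(autonomy: float, knowledge: float, tech_limit: float = 0.8) -> float:
    """Illustrative human-machine performance for a level of autonomy in [0, 1].

    - Performance rises with autonomy up to an optimum, then falls
      (the operator loses control of the situation).
    - Higher knowledge shifts the optimum up and toward higher autonomy.
    - Autonomy beyond the current technological limit is not reachable.
    """
    if autonomy > tech_limit:
        return 0.0                       # beyond technological limitations
    optimum = 0.3 + 0.5 * knowledge      # optimum shifts right with knowledge
    peak = 0.5 + 0.5 * knowledge         # optimum shifts up with knowledge
    width = 0.4
    return peak * max(0.0, 1.0 - ((autonomy - optimum) / width) ** 2)

# Example: with poor knowledge the best autonomy level is low; with good
# knowledge it moves toward (but not onto) complete automation.
for k in (0.2, 0.8):
    best = max((performance(a / 100, k), a / 100) for a in range(101))
    print(f"knowledge={k:.1f}: best autonomy={best[1]:.2f}, performance={best[0]:.2f}")
```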
Following the above model, the performance of the human-machine system increases with the autonomy of the machine, but only until some optimum, after which it decreases. Indeed, if the autonomy of the machine is further increased, the human operator is likely to lose control of the situation. Note that the level of autonomy can be increased only to a certain limit fixed by technological limitations. At present, most engineering projects are developed at these technological limitations, i.e., machines are built as well as possible from strictly machine-centered optimization criteria. In our approach, optimization criteria are centered on end-user/machine performance. Such an approach necessitates simulations and real-world experimentation. Human-machine system performance optima can be shifted to the right if human operators are very well trained. Globally, when the level of understanding and knowledge increases, these optima shift up and right. Finally, if everything is known about machine control in its environment, then the corresponding optimum is located on the complete autonomy limit. It has been shown that searching for such performance optima is a good method for acquiring knowledge from experience, i.e., generally we jump from a lower curve to an upper one as we acquire knowledge.

3.2. What Can Computers Do?

Recent accidents in aerospace and nuclear power plants have dramatically pointed out that excessive automation tends to lower the level of vigilance of human operators. On the other hand, it has been shown that when human operators act, they are also very good monitors. Such results indicate that computers must be used more as associates than crutches during operations. Design of distributed control of multiple systems is not a short-term goal. We must first understand what is involved in the control of a single system from a human-machine cooperation point of view. Expert system technology is only just becoming mature enough to be used successfully in dynamic environments. There is a need for more knowledge and know-how for building real-world and real-time interactive intelligent processors. What we can do is: monitor single subsystems; display goals and causal explanations to the human operator; implement very simple rule-based simulations such as diagnosers, schedulers and sometimes reschedulers. At present, however, the reasoning is always based on standard procedures. To perform coordinated control of several systems, it is necessary to improve our understanding of what can be learned by the machine "automatically". Planning and replanning cannot be implemented without mastering machine learning. An operator assistant system (Boy & Rappaport, 1987; Boy, 1988) must tolerate human errors and provide the operator with a task-oriented dialogue. Operator Assistant systems will be presented in Section 4.

3.3. What Can Humans Do?
A study conducted by Sears in 1986 on 93 major aircraft accidents shows that responsibility for 65% of these accidents was assigned to the crew (reported by Nagel, 1986). In 33% of the cases the pilots deviated from standard procedures, 26% were due to inappropriate verification by the copilot, in 10% of the cases the captain did not take into account information from the crew, and finally in 9% of the cases the crew was imperfectly trained to cope with critical situations. It is now very well known that most human errors are slips. If the causes of slips are known, they can be used for designing operator assistant system functions. The mental model of the human operator may be very different when he/she controls the machine and when he/she only monitors the same machine augmented with an automated system. The mental model is a construct of how the human represents the task he has to perform. Thus, switching from monitoring to "manual" control may be very difficult for a human operator. Human beings are different from machines because workload can affect their performance. In particular, mental workload (Hart, 1985; Parks, 1977; Phatek, 1983) can be prohibitive in problem solving or in distributed control. It is known that understanding of how the machine works is a function of instruction and training. Human beings act and decide according to very small sets of heuristics. However, these heuristics can be very complicated. Building and structuring heuristics relies on training. Generally, early stages of knowledge acquisition are analytical. This knowledge is then compiled through an accommodation process. Compiled knowledge is called situational knowledge. From a general point of view, pilots see automation as a plus (Nagel, 1986). However, results from human-machine interaction studies must be taken into account in future developments in automation.

4. The SRAR Model

The quality of communication between two individuals rests on reciprocal understanding of the internal model of the other. For instance, a discussion between experts of the same domain is reduced to an operative language (Falzon, 1986), i.e., "very situational". In this case, experts have quasi-identical knowledge about the subject: their internal models are said to be quasi-identical. Conversely, when a professor gives a talk to his students, everybody has a very different internal model. In particular, the professor must "decompile" his situational knowledge to make it intelligible to novices; we will say that he uses an "analytical explanation" in order to make himself understood. This distinction between analytical and situational is not new. In his critique of Artificial Intelligence, Hubert Dreyfus (1979) points out the importance of situational knowledge in expertise, and the difficulty of eliciting and representing it in order for it to be amenable to computer programming. Following Hubert Dreyfus, it seems that current expert systems are only able to represent the knowledge of a "good" beginner, i.e., to repeat the professor's course or, at best, to solve a few "easy" exercises. Following Newell and Simon's classical model (1972), a model for an "agent" has been developed, in the context of the MESSAGE system for analysis and evaluation of aircraft cockpits (Boy, 1983; Boy & Tessier, 1985). It includes a supervisor managing three types of processes called channels, i.e., receptors, effectors and cognition (Figure 3). The supervisor works like a blackboard (Nii, 1986). Each channel exchanges information with the supervisor.
The concept of automatic and controlled processes, introduced
and experimentally observed by Schneider and Shiffrin (1977), has been implemented in the supervisor of the MESSAGE system (Tessier, 1984). This allows generation and execution of tasks either in parallel (automatic processes) or sequentially (controlled processes). In the present paper, automatic processes involve particular knowledge which is modeled by a situational representation, whereas the knowledge corresponding to controlled processes is modeled by an analytical representation.
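The supervisor behavior described above can be illustrated by a small sketch: tasks recognized by situational (automatic) knowledge are handled in parallel, while tasks requiring analytical (controlled) knowledge are queued and handled one at a time. The class and method names are illustrative assumptions, not the actual MESSAGE implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue
from typing import Callable

class Supervisor:
    """Illustrative supervisor: parallel automatic processes, sequential controlled ones."""

    def __init__(self) -> None:
        self.automatic_pool = ThreadPoolExecutor(max_workers=4)          # skill-like, parallel
        self.controlled_queue: "Queue[Callable[[], None]]" = Queue()     # one task at a time

    def submit(self, task: Callable[[], None], situational: bool) -> None:
        if situational:
            # A matched situation pattern fires the task immediately, in parallel.
            self.automatic_pool.submit(task)
        else:
            # Novel situations are processed sequentially (controlled processing).
            self.controlled_queue.put(task)

    def run_controlled(self) -> None:
        while not self.controlled_queue.empty():
            self.controlled_queue.get()()    # analytical reasoning, one task at a time
```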
Figure 3. Operator Model (environment/real situation; perceived and desired situations in the short term memory; procedural mode and problem solving within the cognition channel). LTM: Long Term Memory; STM: Short Term Memory; S.K.: Situational Knowledge; A.K.: Analytical Knowledge.

4.1. Situational Representation

The term "situation" is used here to characterize the instantaneous state of the environment. We speak generally about the state of the world. A situation is described by a set of components called world facts. A generic world fact will be noted fi. At a given time, three basic types of situation will be distinguished (Boy, 1983):

(1) The real situation characterizes the "true" world. It is only partially accessible to the operator. It will be noted:

SR(t) = { fi / i = 1, ∞ },

which must be interpreted as: "The real situation at a given time t is the world fact set (which may be restricted to the environment)". This set is a priori infinite.

(2) The perceived situation is a particular image of the world. It is the part of the world accessible to the operator. It is characterized by incomplete, uncertain and imprecise components. It will be noted:

SP(t) = { πi / i = 1, n },

which must be interpreted as: "The perceived situation at a given time t is the set of facts perceived by the operator". This set is assumed to be finite. Each perceived fact πi is the result of the mapping of the operator Pi on a sub-set of the world fact set:

πi = Pi ( { fj } ).

In general, the operator Pi is "fuzzy" in Zadeh's sense (1965). All the sub-sets of the world fact set characterizing each perceived fact are also assumed to be finite.

(3) The desired situation characterizes the set of the operator's goals. It will be noted:

SD(t) = { ai / i = 1, m },

which must be interpreted as: "The desired situation at a given time t is the set of the goals which the operator intends to reach". This set is assumed to be finite. Each goal ai is the result of the mapping of the operator Di on a sub-set of the world fact set:

ai = Di ( { fj } ).

In general, the operator Di is "fuzzy". All the sub-sets of the world fact set characterizing each goal are also assumed to be finite. The real situation characterizes the environment. Perceived and desired situations characterize the operator's short term memory. In the terminology used in Artificial Intelligence, the perceived situation is called the fact base (i.e., "perceived facts") and the desired situation is called the goal (and sub-goal) base.

The "Situation Patterns" Concept. The concept of a situation pattern is fundamental. A situation pattern characterizes the operator's "expected situation" at a given time. We will say that a situation pattern is activated if it belongs to the short term memory. It will be noted:

SE(t) = { ξi / i = 1, l },

which must be interpreted as: "The expected situation at a given time t is the set of the situation patterns which are activated in the short term memory". This set is assumed to be finite. Each situation pattern ξi is the result of the mapping of the operator Πi on a sub-set of the world fact set:

ξi = Πi ( { fj / j = 1, r } ).

In general, the operator Πi is "fuzzy". All the sub-sets of the world fact set characterizing each situation pattern are also assumed to be finite. In practice, each fact fj is represented by a (numerical or logical) variable vj and its tolerance functions { TFi,j } in the situation pattern ξi. A tolerance function (TF) is a mapping from a set V into the segment [0,1]. The elements of V are called "values", e.g., the values indicated by a numerical variable. V is divided into three sub-sets: preferred values (TF(v) = 1), allowed values (0 < TF(v) < 1) and forbidden values (TF(v) = 0).
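As an illustration of tolerance functions and fuzzy situation patterns, here is a small Python sketch of similarity matching and of the frequency-based conflict resolution discussed in the next paragraph. The trapezoidal shape of the tolerance function and the matching threshold are assumptions made for the example, not part of the SRAR definition.

```python
from typing import Callable, Dict

def trapezoid(lo_forbid: float, lo_pref: float, hi_pref: float, hi_forbid: float) -> Callable[[float], float]:
    """A tolerance function TF: value -> [0, 1].
    TF(v) = 1 on preferred values, 0 on forbidden values, in between on allowed values."""
    def tf(v: float) -> float:
        if v <= lo_forbid or v >= hi_forbid:
            return 0.0                                        # forbidden value
        if lo_pref <= v <= hi_pref:
            return 1.0                                        # preferred value
        if v < lo_pref:
            return (v - lo_forbid) / (lo_pref - lo_forbid)    # allowed value (rising side)
        return (hi_forbid - v) / (hi_forbid - hi_pref)        # allowed value (falling side)
    return tf

class SituationPattern:
    """A situation pattern: one tolerance function per relevant variable."""
    def __init__(self, name: str, tolerances: Dict[str, Callable[[float], float]], frequency: int = 0):
        self.name = name
        self.tolerances = tolerances
        self.frequency = frequency          # used for "frequency gambling"

    def match(self, perceived: Dict[str, float]) -> float:
        """Similarity matching: degree to which the perceived facts fit the pattern."""
        degrees = [tf(perceived[v]) for v, tf in self.tolerances.items() if v in perceived]
        return min(degrees) if degrees else 0.0

def recognize(perceived: Dict[str, float], activated: list, threshold: float = 0.5):
    """Situation recognition: keep matching patterns, resolve conflicts by frequency."""
    candidates = [p for p in activated if p.match(perceived) >= threshold]
    return max(candidates, key=lambda p: p.frequency, default=None)
```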
In a monitoring process, the operator matches perceived facts with activated situation patterns. Reason (1986) calls this process "similarity matching". At a given instant, if several patterns are candidates for a perceived situation, Reason's results show that the most frequent pattern is selected first. He calls this process "frequency gambling". We also call this process "conflict resolution". We have defined the total process as situation recognition (Boy, 1986).

4.2. Analytical Representation

When a situation is recognized, problem solving generally follows. A problem is initiated by an event (i.e., a perceived or desired situation) which requires the operator to allocate intellectual resources. These intellectual resources can be represented by structured objects and inference rules of the form "IF ... THEN ... AND ...". Note that if several situation patterns have been matched and kept as "interesting to be considered", then several hypotheses are suggested. Thus, problem solving can be multi-hypothesis. Generally, each hypothesis defines an "extension" ("world" or "viewpoint"). Consistency maintenance in each extension and between generated extensions is a central mechanism. Such a mechanism has been described and developed by De Kleer (1986), who called it an "Assumption-Based Truth Maintenance System". Thus, generally, analytical reasoning involves a problem solver and a consistency maintenance mechanism. The problem solver provides inference. Inference can be performed in forward chaining (from data to conclusions) or in backward chaining (from goals to elementary actions). Human problem solving has been defined as "opportunistic" by Hayes-Roth and Hayes-Roth (1978), i.e., either forward or backward chaining is possible at any given time. Moreover, the human being tends to reason using knowledge already structured in knowledge islands. Each island is connected to one (or several) situation pattern(s) and defines a context. In multi-hypothesis problem solving, each extension is said to be working in its context. The corresponding representation will be called the analytical representation.

4.3. Compiled, Instantiated and Deep Knowledge

Rasmussen's performance model provides a good framework to analyze human-machine interactions. Acquisition of skills, both sensory-motor and cognitive, is the result of long and intensive training. Rule-based behavior is still an operative level dealing with specified plans. Knowledge-based behavior is the level of intelligence. Most current expert systems work at Rasmussen's second level because it is not easy to acquire knowledge from the skill level, the real situational level of expertise (Dreyfus, 1982). The intermediate level is the easiest to formalize and elicit from expert explanations. Indeed, as teachers, human experts decompile their knowledge to explain the "how" and "why" of their own behaviors. The result of such decompilation is easily transferable to a computer with, for instance, an "IF ... THEN ..." format. This does not necessarily capture expert knowledge and behavior. Rather, it provides an analytic representation of the expert's knowledge and logic. We consider that situational knowledge (situation patterns) is compiled as the result of "training-type" learning (Anderson, 1983). In a fault diagnosis experiment carried out at NASA (Boy, 1986), the human expert's situational knowledge was found to be a compilation of part of the analytical knowledge previously used when the expert was a beginner. The complexity and number of situation patterns is an indication of the level of expertise. Situational knowledge corresponds to skill-based behavior in the classification of Rasmussen. Decompilation, i.e., the explanation of the intrinsic elementary knowledge of each situation pattern, is a very difficult and sometimes impossible task. This knowledge can be elicited only through an incremental observation process. Analytical representation can be decomposed into two types: procedural representation and problem solving.
(1) Procedural knowledge is represented in the form of instantiated plans, i.e., procedures. It can generally be translated into production rules in propositional logic. It is thus also the result of instantiation of more general knowledge. Hereafter we will not distinguish between instantiated knowledge and procedural knowledge. Operation manuals include almost exclusively this type of knowledge. It corresponds to Rasmussen's rule-based behavior. Procedural knowledge is the easiest to acquire and is usually self-explanatory.
(2) The term problem solving will be used only when deep knowledge (Van der Velde, 1986) is implicated. This type of knowledge is not always available in all disciplines. It corresponds to a descriptive model of the domain in question. Several models (and sometimes theories) can be used to describe or reason about an engineering system. In fault diagnosis, for example, it is possible to develop a model of the structure of the system by decomposing it into elementary components and the relations between these components. Sometimes, a geometric model is necessary. A functional model will permit simulation of the behavior of various components. A causal diagram can also be used to establish the various connections between the properties of the components. The type of reasoning involved in problem solving requires a mechanism of instantiation of general knowledge, since established procedures do not already exist. This knowledge is organised according to the structure of the field of the problem to be solved, its topology, its functionality, its causality and other aspects such as organisation of the objects into hierarchies, and autonomy and heteronomy of various parts of the system. It corresponds to Rasmussen's knowledge-based behavior.

4.4. Cooperative Fault Diagnosis Experiment

The Orbital Refueling System (ORS) used in the space shuttle to refuel satellites has been chosen as an example of a system to be controlled (Boy, 1986). An ORS simulator, a malfunction generator, and a fault diagnosis Operator Assistant KBS, called Human-ORS-Expert System (HORSES), were used to implement knowledge acquisition experiments. These functions are concurrent processes, communicating through a shared memory feature on a MASSCOMP computer. The malfunction generator generates simulated malfunctions for the ORS, and introduces them at appropriate times into the ORS simulation. Graphical interfaces are implemented on an IRIS 1200 graphics system and a MASSCOMP workstation. They provide a good tool for eliciting operator knowledge to improve HORSES.
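A schematic view of this experimental architecture (simulator, malfunction generator and a diagnoser communicating through shared memory) can be sketched as follows. The variable names, time steps and the injected fault are hypothetical, used only to show the process structure, not the actual HORSES implementation.

```python
import multiprocessing as mp
import time

def ors_simulator(shared, faults):
    """Updates simulated ORS variables; applies any injected malfunction."""
    for step in range(10):
        shared["P1"] = 300.0 - 5.0 * step          # hypothetical fuel pressure (psia)
        if "P1_leak" in faults:
            shared["P1"] -= 150.0                  # effect of an injected malfunction
        time.sleep(0.1)

def malfunction_generator(faults):
    """Introduces a simulated malfunction at an appropriate time."""
    time.sleep(0.3)
    faults["P1_leak"] = True

def diagnoser(shared):
    """Monitors the shared memory and reports a diagnosis hypothesis."""
    for _ in range(10):
        if shared.get("P1", 300.0) < 200.0:
            print("Operator Assistant: abnormal pressure drop on P1 - possible leak")
            return
        time.sleep(0.1)

if __name__ == "__main__":
    manager = mp.Manager()
    shared, faults = manager.dict(), manager.dict()
    procs = [mp.Process(target=ors_simulator, args=(shared, faults)),
             mp.Process(target=malfunction_generator, args=(faults,)),
             mp.Process(target=diagnoser, args=(shared,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```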
Figure 4. Situation Recognition / Analytical Reasoning Model (situation patterns and analytical knowledge, for the beginner and for the expert).
Fault identification can be represented by the Situation Recognition / Analytical Reasoning (SRAR) model (Figure 4). From the experiments to date, it is evident that people use chunks of knowledge to diagnose failures. A chunk of knowledge is fired by the matching of a situation pattern with a perceived critical situation. This matching is either total or partial. After situation recognition, analytical reasoning is generally implemented. This analytical part gives two kinds of outputs: diagnoses or new situation patterns. The chunks of knowledge are very different between beginners and experts. Beginners' situation patterns are poor, crisp and static, e.g., "The pressure P1 is less than 50 psia". Subsequent analytical reasoning is generally substantial and time-dependent. On the one hand, when a beginner uses an operation manual to make a diagnosis, his or her behavior is based on the pre-compiled engineering logic he has previously learned. On the other hand, when he or she tries to solve the problem directly, the approach is very declarative, i.e., using the system's first principles. With practice, beginner subjects were observed to acquire a personal procedural logic (operator logic), either from the pre-compiled engineering logic or from a direct problem solving approach. This process is called knowledge compilation. Conversely, experts' situation patterns are sophisticated, fuzzy and dynamic, e.g., "During fuel transfer, one of the fuel pressures is close to the isothermal limit and this pressure is decreasing". This situation pattern includes many implicit variables defined in another context, e.g., "during fuel transfer" means "in launch configuration, valves V1 and V2 closed, and V3, V4, V7 open". Also, "a fuel pressure" is a more general statement than "the pressure P1". The statement "isothermal limit" includes a dynamic mathematical model, i.e., at each instant, actual values of fuel pressure are compared fuzzily ("close to") to this time-varying limit [Pisoth = f(Quantity, Time)]. Moreover, experts take this situation pattern into account only if "the pressure is decreasing", which is another dynamic and fuzzy pattern. It is obvious that experts have transferred part of the analytical reasoning into situation patterns. This part seems to be related to dynamic aspects. Thus, with learning, dynamic models are introduced into the situation patterns. It is also clear that experts detect broader sets of situations. First, experts seem to fuzzify and generalize their patterns. Second, they have been observed to build patterns more related to the task than to the functional logic of the system. Third, during the analytical phase, they disturb the system being controlled to get more familiar situation patterns, which are static most of the time; e.g., in the ORS experiment, pilots were observed to stop fuel transfer after recognizing a critical situation.
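The contrast between a beginner's crisp, static pattern and an expert's fuzzy, dynamic pattern can be illustrated in code, reusing the tolerance-function idea of Section 4.1. The isothermal-limit function and the numerical thresholds below are placeholders, not the actual ORS model.

```python
def beginner_pattern(p1: float) -> bool:
    """Crisp, static: 'The pressure P1 is less than 50 psia'."""
    return p1 < 50.0

def isothermal_limit(quantity: float, time_s: float) -> float:
    """Placeholder for the time-varying limit Pisoth = f(Quantity, Time)."""
    return 250.0 - 0.1 * time_s - 0.05 * quantity

def expert_pattern(context: str, pressures: dict, quantity: float, time_s: float,
                   previous: dict) -> float:
    """Fuzzy, dynamic: 'During fuel transfer, a fuel pressure is close to the
    isothermal limit and this pressure is decreasing.' Returns a degree in [0, 1]."""
    if context != "fuel transfer":                 # implicit context (valve configuration, etc.)
        return 0.0
    limit = isothermal_limit(quantity, time_s)
    degrees = []
    for name, p in pressures.items():              # 'a fuel pressure', not a specific one
        closeness = max(0.0, 1.0 - abs(p - limit) / 20.0)    # fuzzy 'close to'
        decreasing = 1.0 if p < previous.get(name, p) else 0.0
        degrees.append(min(closeness, decreasing))
    return max(degrees, default=0.0)
```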
4.5. Variations of the Above Concepts with Training

Experienced people monitored the system being controlled with more sophisticated situation patterns than beginners. Once they recognized a situation, experts implemented short sequences of tests. The analytical reasoning they employed at this stage was minimal by comparison with beginners. Experts and beginners also implemented this necessary analytical reasoning differently. Beginners followed the operation manual flowcharts in a forward chaining fashion. Experts tested possible diagnoses or new situation patterns for which analytical reasoning was familiar, i.e., they implemented a backward chaining process. It seems, then, that operator assistant systems will be different for beginners and for experienced people. Beginners must be provided with very analytical tools which can be explored easily. Moreover, they should be able to learn from these tools for their own training. Experienced people must be provided with the right situation patterns at the right time. Their knowledge of the system will subsequently drive diagnosis or planning activities.
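One way to read this observation is that an Operator Assistant can adapt what it presents to the operator's level of experience. The following sketch is a speculative illustration of that idea (the threshold and data structures are assumptions), not a description of HORSES.

```python
from typing import Callable, Dict, List

def assist(operator_experience: float,
           perceived: Dict[str, float],
           patterns: Dict[str, Callable[[Dict[str, float]], float]],
           manual_flowchart: List[str]) -> List[str]:
    """Illustrative assistance policy for an Operator Assistant.

    Beginners (low experience) receive explorable analytical material (the manual
    flowchart, to be followed by forward chaining from the data); experienced
    operators receive the situation patterns matching the current data, to be
    confirmed by short backward-chaining tests.
    """
    if operator_experience < 0.5:                   # threshold is an assumption
        return ["Analytical support (forward chaining):"] + manual_flowchart
    matching = [name for name, match in patterns.items() if match(perceived) > 0.5]
    return ["Situation patterns to consider (backward-chaining tests):"] + matching
```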
5. Discussion and Conclusion

Problems analyzed in this paper have a common factor: understanding of automated systems and, more generally, how autonomy can be shared efficiently. If we want to solve such problems, we must build tools which are able to help human operators. Ideally, these aids should allow better distribution of levels of autonomy between operators and machines,
increased human vigilance, decreased human errors, reduced human workload, improved performance in instruction and training, and more ready acceptance of new automated systems. Distributed control of multiple systems is a long-term project effort. It will provide the user with autonomous controllers which can be adapted as artificial copilots, i.e., they will leave the leadership to the pilot while they observe and understand his/her behavior and intentions. Artificial copilots must include Rasmussen's three levels of behavior. They should be able to learn from experience and assess complex situations. The goal of this paper was to tackle the difficult problem of man-machine distributed systems. It has been shown that automation must be pursued following the human-machine performance model. This model leads to a better understanding of what the end-user knowledge will be. Understanding is also a matter of analyzing and assessing procedures and workstations. We have already developed computer tools to do this job in aeronautics, e.g., the MESSAGE system. We are currently continuing our research effort in the area of artificial intelligence and human-machine interaction, and in particular Operator Assistant Systems, in a joint agreement with NASA.

Acknowledgement

Many thanks to Philippa Gander for insightful comments.

References

Anderson, J.R. (1983). Acquisition of Proof Skills in Geometry, pp. 191-219, in Machine Learning: An Artificial Intelligence Approach, ed. R. Michalski, J.G. Carbonell, and T.M. Mitchell, Tioga, Palo Alto, CA.

Boy, G.A. (1983). Le système MESSAGE : Un Premier Pas vers l'Analyse Assistée par Ordinateur des Interactions Homme-Machine, Le Travail Humain, 46, No. 2.

Boy, G.A., and Tessier, C. (1985). Cockpit Analysis and Assessment by the MESSAGE Methodology, in Analysis, Design and Evaluation of Man-Machine Systems, edited by G. Mancini, G. Johannsen and L. Martensson, Proceedings of the 2nd IFAC/IFIP/IFORS/IEA Conference, Varese, Italy, September 10-12.

Boy, G.A. (1986). An Expert System for Fault Diagnosis in Orbital Refueling Operations, AIAA 24th Aerospace Sciences Meeting, Jan., Reno, Nevada.

Boy, G.A. (1986). Le système HORSES et ses perspectives en génie cognitif. Proceedings Expert Systems and Applications, Avignon, France.

Boy, G.A., and Rappaport, A. (1987). Operator Assistant System in Space Telemanipulation: Knowledge Acquisition by Experimentation. ROBEX, Pittsburgh, USA, Ju. 4-5.

Boy, G.A. (1988). Operator Assistant Systems, to appear in the International Journal of Man-Machine Studies.

De Kleer, J. (1986). An Assumption-based TMS. Artificial Intelligence, 28, pp. 127-162.
Dreyfus, H.L. (1979). What Computers Can't Do: The Limits of Artificial Intelligence, Harper & Row, Publishers, Inc., New York.

Dreyfus, S.E. (1982). Formal Models vs. Human Situational Understanding: Inherent Limitations on the Modeling of Business Expertise, Office: Technology and People, 1, pp. 133-165.

Falzon, P. (1986). La Communication Homme-Homme : les Stratégies de Dialogue et le Langage de l'Interaction, Workshop on Human Machine Interactions and Artificial Intelligence in Aeronautics and Space, ENSAE, Toulouse, Oct. 13-14.
Foushee, H.C., and Manos, K.L. (1981). Information Transfer within the Cockpit: Problems of Intracockpit Communications, in Information Transfer Problems in the Aviation System, C.E. Billings and E.S. Cheaney (eds.), NASA Report TP-1875, NASA Ames Research Center, Moffett Field, CA.

Hart, S.G. (1985). Theory and Measurement of Human Workload. NASA Ames Research Center, Aerospace Human Factors Division, Moffett Field, CA.

Hayes-Roth, B., & Hayes-Roth, F. (1978). Cognitive Processes in Planning, Rep. No. R-2366-ONR, Rand Corp., Santa Monica, CA.

Nagel, D.C. (1986). Pilotes du futur : Humain ou Ordinateur ? Journées d'Etudes Interactions Homme-Machine et Intelligence Artificielle, Toulouse, Oct. 13-14.

Newell, A., & Simon, H.A. (1972). Human Problem Solving. Prentice Hall, Englewood Cliffs, NJ.

Nii, P. (1986). Blackboard Systems, AI Magazine, Vol. 7, No. 2 & 3.

Parks, D.L. (1977). Current Workload Methods and Emerging Challenges, in Mental Workload: Its Theory and Measurement, N. Moray (Ed.), New York, Plenum Press, pp. 387-416.

Phatek, A.V. (1983). Review of Model-Based Methods for Pilot Performance and Workload Assessment. AMA Report No. 83-7 for NASA Contract NAS2-11318.

Rasmussen, J. (1983). Skills, Rules and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models, IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, pp. 257-266.

Reason, J. (1986). Decision Aids: Prosthesis or Tools?, NATO Workshop on Intelligent Decision Support in Process Environments, Ispra, Italy, Nov. 11-14.

Schank, R.C., & Abelson, R.P. (1977). Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Lawrence Erlbaum.

Schneider, W., & Shiffrin, R.M. (1977). Controlled and Automatic Human Information Processing: I. Detection, Search, and Attention; II. Perceptual Learning, Automatic Attending, and a General Theory, Psychological Review, Vol. 84, No. 1, Jan. & No. 2, Mar.

Sheridan, T.B. (1984). Supervisory Control of Remote Manipulators, Vehicles and Dynamic Processes: Experiments in Command and Display Aiding, in Advances in Man-Machine Systems Research, J.A.I. Press Inc., Vol. 1, pp. 49-137.

Tessier, C. (1984). MESSAGE : Un Outil d'Analyse Ergonomique de la Gestion d'un Vol, Doctorate Dissertation, ENSAE, France.

Van der Velde, W. (1986). Learning Heuristics in Second Generation Expert Systems, Int. Conference Expert Systems and their Applications, Avignon, France.

Wilensky, R. (1983). Planning and Understanding: A Computational Approach to Human Reasoning, Addison-Wesley Publishing Company, Reading, Massachusetts.

Zadeh, L.A. (1965). Fuzzy Sets, Information and Control, Vol. 8, pp. 338-353.