A method for team intention inference


Int. J. Human-Computer Studies 58 (2003) 393–413

Taro Kanno a,*, Keiichi Nakata b, Kazuo Furuta b

a Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
b Graduate School of Frontier Sciences, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan

Received 27 September 2001; received in revised form 30 October 2002; accepted 26 November 2002.
Paper accepted for publication by Accepting Editor, N. Cooke.

Abstract

Recent advances in man–machine interaction include attempts to infer operator intentions from operator actions in order to better anticipate and support system performance. This capability has been investigated in contexts such as intelligent interface design and operation support systems. While some progress has been demonstrated, efforts to date have focused on a single operator. In large and complex artefacts such as power plants or aircraft, however, a team generally operates the system, and team intention is not reducible to a mere summation of individual intentions. It is therefore necessary to develop a team intention inference method for sophisticated team–machine communication. In this paper a method is proposed for team intention inference in process domains. The method uses expectations of the other members as clues to infer a team intention and describes it as a set of individual intentions and beliefs regarding the other team members. We applied it to the operation of a plant simulator operated by a two-person team, and it was shown that, at least in this context, the method is effective for team intention inference. © 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Man–machine communication; Intention inference; Team intention; Mutual belief

*Corresponding author. E-mail addresses: [email protected] (T. Kanno), [email protected] (K. Nakata), [email protected] (K. Furuta).
1071-5819/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved. doi:10.1016/S1071-5819(03)00011-9

1. Introduction

Along with the introduction of advanced automation technology, the reliability and safety of today's technological systems have been enhanced, and the workload of human operators has often been greatly reduced.


On the other hand, these systems have become increasingly complicated and their behaviours have become increasingly invisible to human operators. Even though new types of man–machine interfaces have been proposed and implemented in real systems, there are cases in which human operators cannot understand the behaviours of these automated systems, because the basic function of man–machine interfaces is limited to information exchange and lacks the more conceptual and intentional aspects of communication that enable humans to manage cooperative work efficiently (Paris et al., 2000; Hutchins, 1995).

The introduction of automation also raises a difficult problem in human–machine relations. Concerning the final authority of decision making, human-centred automation (Hollnagel, 1995) has been widely acknowledged, because it is difficult to anticipate every situation beforehand in system design and the automated system cannot take responsibility for accidents. However, humans do not always make optimal decisions because of the limitations of cognitive capability, and thus the possibility of human error cannot be eradicated (Reason, 1990). There may be no straightforward answer to the problem of whether humans or machines should run the show. In fact, the invisible behaviours of an automated system and a poor relationship between humans and machines can cause serious problems and sometimes lead to critical accidents (e.g. TMI, the airplane crash at Nagoya airport). In order to enhance the reliability and safety of highly automated systems, it is important to develop a means of man–machine communication sophisticated enough that humans and machines can understand the process behind each other's behaviours and decisions and establish cooperative relations, so that they can perform the required tasks complementarily.

Research has already been carried out on intent inferencing or plan recognition methods to make an interactive system serve as a more cooperative partner in the user's task through plan completion, error handling, information management, and so on (Goodman and Litman, 1992; Rubin et al., 1988; Hollnagel, 1992). This research concerned only one person's intention and does not address social or team situations in which the intentions of others are relevant to one's own intention; in other words, it focuses only on man–machine communication. However, in large and complex artefacts such as power plants and aircraft, a team operates the system. We therefore have to deal with the team intention in cooperative activities, which is not the same as the mere summation of individual intentions. The present study aims to develop a method of team intention inference for sophisticated team–machine communication in process domains.

In Section 2, team intention is explained in terms of individual intention and mutual belief, based on philosophical arguments regarding cooperative activities. In Section 3, the team intention inference method is explained in detail. In Section 4, the implementation of the method for a plant simulator, the dual reservoir system simulation (DURESS), is explicated. In Section 5, inference results of the proposed method applied to the log data of a plant simulator operated by a two-person team are illustrated and discussed. Conclusions are given in Section 6.


2. Team intention

In this section, we explain team intention on the basis of philosophical arguments regarding cooperative activities. There are many discussions and analyses of various notions of intention. Such analyses typically concern the intention of a single person and do not seriously address team situations, and most conventional intention inference methods are likewise based on a model of a single person. It is commonplace, however, that one's group or its members affect one's intention, and vice versa. A number of empirical philosophers hold that intentional behaviours in cooperative activities originate from a different mechanism than that of the individual.

2.1. Individual intention and intention inference

In studies of intent inferencing or plan recognition, an individual intention is defined as a set of goals and procedures that an operator is trying to carry out, and intention inference is defined as specifying this set of goals and procedures from observed behaviours (what he/she sensed and actually did). Plan recognition or intention inference can be categorised into three types: keyhole recognition, intended recognition, and obstructed recognition. Keyhole recognition is recognition of which the agent whose plan is being inferred is unaware, or to which the agent is indifferent, as if the observer were "looking through a keyhole". Plan recognition in which the agent is aware of and actively cooperating with the recognition, for example by choosing actions that make the task easier, is called intended recognition. Obstructed recognition is recognition of which the agent is aware and which he or she might actively thwart. This kind of recognition applies to adversarial settings such as warfare, or to understanding hidden intentions such as irony or fakes, and must be the ultimate requirement for a system in human–machine cooperation. In the human–machine systems targeted in our study, such as aircraft cockpits or supervisory control rooms in power plants, the operator cannot be bothered with an explicit discussion of his/her intentions because of time constraints and information overload, and therefore the keyhole strategy was adopted. Both a user's intended actions to convey his/her intentions to the system (intended recognition) and obstructing behaviours to hide his/her real intentions (obstructed recognition) are outside the scope of this paper.

Our cognitive process in the strategic or tactical mode of decision making roughly follows these steps: observation, state recognition, goal setting, planning, and execution (Rasmussen, 1986; Norman, 1988). We ordinarily identify another person's goals and procedures by inferences based on this whole cognitive process. Our method deals with the whole cognitive process, both the analysis stage and the action stage; the analysis stage is the process from information observation to state recognition, and the action stage runs from goal setting to action execution. However, our method utilises only the system state and input actions for its inference. Therefore, when there is no action, for example when there are delays between decisions and actions, the system tries to identify the operator's intention only from the system state (goal stack generation) and heuristics, and its inference becomes less certain.


Inferences based on the analysis stage are also difficult for a human observer, who needs additional information such as eye movements, verbal communications, and so on. Hatakeyama and Furuta (2000) developed a method for identifying the state recognition process that utilises observed symptoms for inference of the analysis stage. Individual errors can be defined as deviations from the defined plan, such as replacements, omissions, or unexpected actions, and can be detected by analysing such deviations on the basis of intent inferencing (Hollnagel, 1992). Error handling is one of the major applications of intent inferencing. The present study focuses on providing a framework for team intention inference and does not deal with error handling for either individuals or teams.

2.2. Team intention

Conte (1989) used the term "collective minds", external to and independent of an individual mind, to describe cooperative activities. With such an assumption it is convenient to explain the team intention behind a cooperative activity, and the inference for it can be considered the same as that for individual intention. Most work in multi-agent plan recognition relies on the assumption that the plan is being carried out by a single entity such as a team, a group, or a troop, and uses the same strategy as plan recognition for a single agent (Devaney and Ram, 1998). It is, however, doubtful that there is an external mental component such as a collective mind or consciousness that represents and causes the actions of the team members. It is more natural to think that team behaviour is the result of nothing but the individual cognitive activities of the constituents. Moreover, without any model of the individual, it is impossible to describe, from the individual perspective, why each agent behaves as it does or what the interrelations among the agents are. A number of researchers therefore use the notion of we-intentions, group intentions, or joint intentions, which can be reduced to a set of individual intentions together with a set of mutual beliefs, to refer to the mental component that represents and causes the actions of team members (Bratman, 1992; Searle, 1989).

Tuomela and Miller (1987) analysed we-intention as follows. When agents A and B intend to do X cooperatively, the following conditions hold.

(a) A intends to do his/her part of X (IaXa).
(b) A believes that B will do his/her (B's) part (BaXb).
(c) A believes that B believes that he/she (A) will do his/her part (BaBbXa).

X denotes some joint task, Xa denotes agent A's part of X, and Ia and Ba respectively denote A's intention and belief. Symmetrical conditions hold for B's intention and beliefs. Fig. 1 shows an illustrative view of this analysis. A team intention between A and B described by we-intention thus consists of (IaXa ∧ BaXb ∧ BaBbXa) in A's mind and (IbXb ∧ BbXa ∧ BbBaXb) in B's mind. Beliefs that are hierarchically justifiable, such as those in conditions (b) and (c), are called mutual beliefs.


Fig. 1. Schematic illustration of we-intention.

A mutual belief is theoretically infinite in depth; in actual practice, however, two or three layers may be enough.

The above conditions require agent A to construct each mental component: his/her own intention (IaXa), a belief about agent B's intention (BaXb), and a belief about B's belief (BaBbXa). It is necessary for agent A to plan his/her own actions to achieve the designated goal, and knowledge relevant to this planning is the most basic requirement for intention formation. B's intention can be exchanged directly by verbal communication, but this is not the usual mode of human communication. Without any explicit exchange, we can infer the intentions of others from their external behaviours, and an ability of intention inference is required for condition (b) to obtain. B's belief about A's intention (BbXa) is not always exchanged explicitly either; agent A therefore has to infer this belief to form the third mental component, BaBbXa. It can be obtained by interpreting B's external behaviours as responses or supports to A's intention or actions. Besides an ability of intention inference, an ability to understand responses or supports, and knowledge relevant to this kind of reaction, are required for belief inference.

IaXa1, IaXa2 and IaXa3 are formed, respectively, by planning, by mutual response or support, and by following the other member's belief. It can be said that B's intention inference is a description of A's behaviours as IaXa1, and that B's belief inference is a description of B's own behaviours from the viewpoint of IaXa2. When their cooperation is going well, the following condition is established: IaXa1 = IaXa2 = IaXa3. This bottom-up definition can describe each individual intention and the relationships among them. We adopt this assumption and propose a method for team intention inference that is representative of the internal mechanism of collective behaviours.
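Written out as display formulas in the notation above, the structure just described can be summarised as follows; this is only a compact restatement of conditions (a)–(c) and of the agreement among the three descriptions of A's part, not an addition to them.

```latex
\begin{align*}
\text{A's we-intention:} \quad & I_a X_a \,\wedge\, B_a X_b \,\wedge\, B_a B_b X_a \\
\text{B's we-intention:} \quad & I_b X_b \,\wedge\, B_b X_a \,\wedge\, B_b B_a X_b \\
\text{successful cooperation:} \quad & I_a X_{a1} = I_a X_{a2} = I_a X_{a3}
\end{align*}
```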


3. Method for team intention inference

When an observer outside the team tries to understand team behaviours from the individual perspective, he or she infers the individual intentions and beliefs of the constituents in order to interpret the collective behaviours, and then specifies a team intention by checking the consistency among the inference results for each constituent's mental components. Following this human technique of inferring a team intention from the bottom up, we propose a method that identifies a set of intentions and beliefs fulfilling the condition (BoXa = BoBbXa) ∧ (BoXb = BoBaXb). Here BoXa/BoXb denotes the inference result, obtained by another agent (the observer), of A's/B's intention, and BoBaXb/BoBbXa denotes the inference result of A's/B's belief regarding B/A, respectively. The condition of team intention means that Operator A intends to do what Operator B believes A will do, and that Operator B intends to do what Operator A believes B will do. The proposed method is composed of three primary parts: intention inference, belief inference, and combination search. These parts are explained in detail in the following sections.

3.1. Intention inference

The individual intention inference method used in this study is based on that developed in a previous study (Furuta et al., 1998). First, goals that an operator might be expected to pursue are listed (goal stack generation) from the current system states, taking urgency into consideration. In the validation experiments described in Section 5, operators are expected to watch the system states and keep some physical parameters at demanded values; goals are therefore ordered by the deviation from the demanded values. Secondly, procedures to achieve the goals are listed from the plan library, which stores predefined plans (sets of goals and procedures) constructed by a task analysis. Procedures whose prerequisites cannot be fulfilled in the current situation are excluded here. Then, by comparing these plans with the actions that the operator actually entered into the system, the candidates for his/her intention are narrowed down. Because it is not always possible to identify a single candidate by this method, the following heuristics (a kind of empirical rule) are applied to set priorities and order the candidates.

(1) Procedures that do not include the current actions are excluded.
(2) Procedures with fewer actions are placed higher in priority.
(3) Procedures that can easily achieve the intended goal are placed higher in priority.
(4) Goals with higher urgency are placed higher in priority.

After the candidates have been reordered by these heuristics, the top ten are defined as the current contexts, and beliefs regarding the other member are inferred from the viewpoint of each current context. When a goal is achieved, plans with the achieved goal are cleared away.
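The following minimal Java sketch illustrates how this candidate generation and the four heuristics could be organised. The Plan record and all method and parameter names (currentContexts, goalStack, situationFacts, and so on) are illustrative assumptions; the actual inference engine was written in logic programming (see Section 4.2), so this is a structural sketch rather than the authors' implementation.

```java
import java.util.*;

/** Illustrative plan representation: a task goal, the action procedure that achieves it,
 *  its prerequisites, and the scores used by the heuristics. */
record Plan(String goal, List<String> procedure, Set<String> prerequisites,
            double goalUrgency, double easeOfAchievement) {}

final class IntentionInference {

    /** Candidate generation and heuristic reordering; the top ten become the current contexts. */
    static List<Plan> currentContexts(List<Plan> planLibrary,
                                      List<String> goalStack,       // goals listed from the current system state
                                      List<String> observedActions, // actions the operator actually entered
                                      Set<String> situationFacts) { // facts that can satisfy prerequisites
        Comparator<Plan> heuristics =
                Comparator.comparingInt((Plan p) -> p.procedure().size())            // (2) fewer actions first
                          .thenComparing(Comparator.comparingDouble(Plan::easeOfAchievement)
                                                   .reversed())                      // (3) easier goals first
                          .thenComparing(Comparator.comparingDouble(Plan::goalUrgency)
                                                   .reversed());                     // (4) more urgent goals first
        return planLibrary.stream()
                .filter(p -> goalStack.contains(p.goal()))                  // goal stack generation
                .filter(p -> situationFacts.containsAll(p.prerequisites())) // drop unfulfillable prerequisites
                .filter(p -> p.procedure().containsAll(observedActions))    // (1) must include current actions
                .sorted(heuristics)
                .limit(10)                                                  // top ten = current contexts
                .toList();
    }
}
```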


3.2. Belief inference

One of the proposals of this paper is a belief inference method. For A, his or her belief regarding B (BaXb) is the interpretation of B's behaviour as Xb2, that is, as responsive or supportive of A's intention or actions. Following this inner mechanism of a team intention, we define A's belief about B, as inferred by another agent, as an interpretation of B's behaviours from the viewpoint of A's expectations, which are derived from the inference result of A's intention. An expectation here means the knowledge relevant to understanding responses or supports for the other agent's intentions and actions. The inference procedure is basically the same as that of intention inference; in addition, goals derived from the expectations are added to the goal stack, and procedures are added, deleted, and reordered according to the expectations.

There are many domain-specific or situation-dependent expectations that enable effective belief inference in real situations. It is, however, difficult to predefine all of them. In terms of plan relations, an operator generally has three kinds of expectations of the other member's plan: (a) that it is not harmful to his/her own intention and actions (non-negative relation), (b) that it is convenient for his/her intention (positive relation), and (c) that it is necessary for his/her intention (request relation) (Martial, 1992). In this study we predefined the several general expectations shown in Table 1 for belief inference. All these expectations are obtainable from domain knowledge and team knowledge. Individual beliefs are inferred from the viewpoint of each current context obtained by individual intention inference.
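A corresponding sketch of how the general expectations of Table 1 could adjust the other member's candidate set is given below. It reuses the Plan record from the previous sketch; the conflict test and the judgement of whether a goal is achievable alone are passed in from outside because, as noted above, they come from domain and team knowledge. All names are again illustrative assumptions rather than the authors' implementation.

```java
import java.util.*;
import java.util.function.BiPredicate;

/** Sketch of belief inference: B's plan candidates are filtered and augmented from the
 *  viewpoint of one of A's current contexts, using the expectations of Table 1. */
final class BeliefInference {

    static List<Plan> beliefCandidates(Plan myContext,                    // one current context of operator A
                                       List<Plan> otherCandidates,        // B's candidates before the expectations
                                       List<Plan> counterPlans,           // plans negating side effects of myContext
                                       BiPredicate<Plan, Plan> conflicts, // conflicting operations or side effects
                                       boolean achievableByMyself) {      // can A accomplish the goal alone?
        List<Plan> adjusted = new ArrayList<>(otherCandidates);

        // Non-negative relation: exclude plans whose operations or side effects conflict with mine.
        adjusted.removeIf(p -> conflicts.test(myContext, p));

        // Positive relation: add counter plans that negate side effects of my plan.
        adjusted.addAll(counterPlans);

        // Request relation: if I can achieve the goal myself, expect a different goal from B
        // (exclude plans with the same goal); if not, expect B to adopt the same plan
        // (exclude plans that share the goal but differ from my plan).
        if (achievableByMyself) {
            adjusted.removeIf(p -> p.goal().equals(myContext.goal()));
        } else {
            adjusted.removeIf(p -> p.goal().equals(myContext.goal()) && !p.equals(myContext));
        }
        return adjusted;
    }
}
```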

3.3. Combination search

Intention and belief inference are conducted in parallel, and the goal stack and procedure candidates are shared among the inference processes. Fig. 2 shows an overview of the proposed method; the upper part of Fig. 2 shows Operator A's intention and belief inference, and the lower part shows B's.

Table 1
Expectations for the other member

Relation        Expectation                                                            Inference procedure
Non-negative    Not to adopt plans that include conflicting operations                 Exclude such negative plans
                Not to adopt plans whose side effects are conflicting                  Exclude such negative plans
Positive        To adopt counter plans that negate side effects of my plan             Add counter plans
Request         To adopt plans with different goals, if my intention can be            Exclude plans with the same goal
                accomplished by myself
                To adopt the same plan, if my intention cannot be accomplished alone   Exclude plans with the same goal


Fig. 2. Overview of team intention inference.

Fig. 3. Relationship among individual intentions and beliefs.

As a result of the intention and belief inference shown in Fig. 2, sets of individual intentions and beliefs about the other member are obtained. The inference system searches for combinations of the candidates of individual intentions (BoXa/BoXb) and the candidates of beliefs (BoBaXb/BoBbXa) that fulfil the condition of team intention, and defines them as candidates of the team intention. Fig. 3 shows the relationship among the individual mental components in a team intention. It is not always possible to specify a single team intention; the candidates are therefore reordered based on the rank orders of the candidates of the constituents' intentions.


When Operator A/B has entered more actions than B/A at a given inference point, the candidates are ordered by the rank order of the inference result of Operator A's/B's intention. If the operators have performed the same number of actions, the candidates are ordered without giving priority to either A's or B's result, by the higher rank order of the individual candidates. Combinations that do not satisfy the condition of team intention can contain conflicts among the team members' intentions in the light of the expectations. By analysing them, it is possible to recognise and diagnose errors specific to team situations; conflict detection and recovery is thus one of the promising applications of this bottom-up approach to team intention inference. The reasoning behind our method uses domain-dependent knowledge but is itself domain independent.
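The combination search itself can be sketched as follows: a pair of intention candidates is accepted as a team intention candidate when each member's inferred intention also appears among the beliefs inferred for the other member, i.e. when (BoXa = BoBbXa) ∧ (BoXb = BoBaXb) holds, and the accepted pairs are then ordered by the rule described above. The data shapes and names are illustrative assumptions; in particular, indexing the belief candidates by the current context they were inferred from is one plausible reading of Fig. 2, not a detail given in the paper.

```java
import java.util.*;

/** Sketch of the combination search: pair intention candidates of A and B and keep those
 *  combinations that satisfy the condition of team intention. */
final class CombinationSearch {

    /** A team intention candidate with the rank orders of the two individual intentions. */
    record TeamIntention(Plan intentionA, Plan intentionB, int rankA, int rankB) {}

    static List<TeamIntention> teamIntentionCandidates(
            List<Plan> intentionsA,              // BoXa, ordered by rank
            Map<Plan, List<Plan>> beliefsAonB,   // BoBaXb, inferred per current context of A
            List<Plan> intentionsB,              // BoXb, ordered by rank
            Map<Plan, List<Plan>> beliefsBonA,   // BoBbXa, inferred per current context of B
            int actionsByA, int actionsByB) {

        List<TeamIntention> candidates = new ArrayList<>();
        for (int i = 0; i < intentionsA.size(); i++) {
            for (int j = 0; j < intentionsB.size(); j++) {
                Plan xa = intentionsA.get(i);
                Plan xb = intentionsB.get(j);
                boolean aExpectsXb = beliefsAonB.getOrDefault(xa, List.of()).contains(xb); // BoXb = BoBaXb
                boolean bExpectsXa = beliefsBonA.getOrDefault(xb, List.of()).contains(xa); // BoXa = BoBbXa
                if (aExpectsXb && bExpectsXa) {
                    candidates.add(new TeamIntention(xa, xb, i + 1, j + 1));
                }
            }
        }

        // Ordering rule: give priority to the intention ranks of the operator who entered more
        // actions; with an equal number of actions, order by the better individual rank.
        Comparator<TeamIntention> order;
        if (actionsByA > actionsByB)      order = Comparator.comparingInt(TeamIntention::rankA);
        else if (actionsByB > actionsByA) order = Comparator.comparingInt(TeamIntention::rankB);
        else                              order = Comparator.comparingInt(
                                                      t -> Math.min(t.rankA(), t.rankB()));
        candidates.sort(order);
        return candidates;
    }
}
```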

4. Implementation

We used a thermal-hydraulic process simulation (DURESS) to generate decision plans and test data. In this section, the system architecture applied to this simulation is explicated.

4.1. Plant simulator

DURESS is a thermal-hydraulic process simulation designed to be representative of complex human–machine systems (Vicente and Rasmussen, 1990). It poses many of the demands that designers and operators of high-tech systems are likely to face, and the present study was conducted within the context of this simulation. The physical structure of DURESS is illustrated in Fig. 4. The system consists of two redundant feedwater streams that can be configured to supply water to two tanks. Operators can control seven valves (Valve0–Valve6), two pumps (Pump A/B), and two heaters (Heater A/B) by mouse operations. The interface panel indicates the opening of each valve, the power of each pump and heater, the level of each tank, the inlet flow of each tank, the outlet flow and temperature, and the demanded flow and temperature; there are also alarm lamps for each parameter.

4.2. System architecture

The whole structure of the inference system is illustrated in Fig. 5. It consists of two primary parts: an inference engine and a knowledge base. The data utilised for the inference are entered into the system successively; the inference engine then infers the team intention using the kinds of knowledge predefined in the knowledge base and outputs the results. The inference engine is implemented by logic programming, and the interface between the input data and the inference engine is coded in Java. The input data include the following:


Fig. 4. Dual reservoir system simulation (DURESS).

Fig. 5. System architecture.

* System log: the transitions of the physical parameters of the plant simulator relevant to the operators' goals, i.e. the outlet temperature/flow at Valves 5/6 and the level of each tank. The inference engine generates a goal stack from this information.
* Plant log: the state changes of each control apparatus, i.e. valve opening, pump power, and heater power. The state of each apparatus is described by three values: closed, open/on, and full admission. These data are used to narrow the procedure candidates.
* Operation log: a history of the operators' actions on each control apparatus. Actions are described by two values: increase and decrease. For example, when an operator controls Heater A, his/her operation is represented as "increase Heater A" or "decrease Heater A". This granularity of representation corresponds to that of the plan descriptions defined in the library; more detailed representations, such as "close" or "open the valve fully", or descriptions with quantitative values such as "open the valve to 85%", can therefore be used if the plans are defined with such granularity.

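As one possible concrete reading of these three inputs, the sketch below types the log entries with the granularity just described (three-valued apparatus states, two-valued operations). The record and enum names are illustrative assumptions; the actual data format used with the simulator is not specified in the paper.

```java
/** Illustrative typing of the three input logs. */
final class InputLogs {

    enum ApparatusState { CLOSED, OPEN_OR_ON, FULL_ADMISSION }  // plant log values
    enum OperationKind  { INCREASE, DECREASE }                  // operation log values

    /** System log: a sampled physical parameter relevant to the goals, with its demanded value. */
    record SystemSample(double timeSec, String parameter, double value, double demandedValue) {}

    /** Plant log: a state change of one control apparatus (valve, pump, or heater). */
    record PlantEvent(double timeSec, String apparatus, ApparatusState state) {}

    /** Operation log: one operator action, e.g. "increase Heater A". */
    record OperationEvent(double timeSec, String operator, String apparatus, OperationKind kind) {}
}
```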
The knowledge base stores the following kinds of knowledge:

* Knowledge on structure describes the qualitative causality among physical parameters. Only qualitative causal relationships are described in the knowledge base; inference based on quantitative analysis is therefore impossible.
* Knowledge on goals and operations defines each goal and operation as a relation between physical parameters.
* Knowledge of the team describes the shared knowledge for cooperation, such as rules, norms, and so on. In the present study, only role allocation is implemented. This knowledge allows team intention to be inferred for various team structures.
* The plan library stores plans predefined by a means-ends task analysis (Hollnagel, 1993). The task hierarchy of DURESS is illustrated in Fig. 6. A plan is represented as a set of a task goal and a procedure to achieve it. Twelve task goals are defined in this study: "increase/decrease outlet flow at Valve 5/6", "increase/decrease outlet temperature at Valve 5/6", and "increase/decrease level of Tank A/B", corresponding to the operators' mission. A procedure to achieve a goal is defined as a list of the actions placed at the bottom level of the hierarchy.

The plan library was simply designed, containing action procedures for the predefined subgoals (domain plans); the inference engine can therefore describe an operator's intention only with these descriptions. In the experiment, however, the participants did not intend plans that are not defined in the plan library.

Fig. 6. Goal hierarchy of operational tasks for DURESS.


It is of course necessary to design the plan library deliberately in order to apply this method to real situations. All the expectations shown in Table 1 are described by these kinds of knowledge and do not require additional knowledge units in the knowledge base. The integrity of the knowledge base strongly affects the performance of the system; inference involving aspects of knowledge not captured here is impossible. As Carberry (2001) pointed out, the acquisition and representation of knowledge is one of the general problems in intention inference.

4.3. Scalability

The proposed method has been applied to a two-person team; however, it is in principle applicable to a larger team. The individual intentions and the beliefs about the other members' intentions are obtainable by the respective inference methods, and by searching for and excluding conflicts among them, team intention candidates that are consistent in the light of the expectations can be obtained. If the number of team members is N, there are N(N−1) intention–belief relations. We assume rather small teams, for example aircraft cockpits or the supervisory control rooms of power plants, as the application areas of this method, and in such fields scalability is not a major problem. If, however, it is possible to extract more explicit and strict knowledge of the team and to define the structure of the mental components in a team, the method is applicable to some extent to relatively large teams. There must be effective and powerful rules or norms available for team intention inference, because without such knowledge humans could not work cooperatively in a large team. It might nevertheless be necessary to divide a team into groups, or to define a hierarchical team structure, in order to infer the team intention of much larger teams effectively.
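To make the N(N−1) remark concrete, the sketch below enumerates every ordered pair of members and checks whether member i's inferred intention matches what member j is believed to expect of i; a set of intentions and beliefs is consistent only when all of these directed relations hold. The representation (one intention per member and a belief matrix, reusing the Plan record from the earlier sketch) is an illustrative assumption for teams larger than two, not something specified in the paper.

```java
import java.util.*;

/** Sketch of the N-member generalisation: check all N(N-1) directed intention-belief relations. */
final class TeamConsistency {

    /** beliefs.get(j).get(i) is member j's believed view of member i's intention (unused when i == j). */
    static boolean consistent(List<Plan> intentions, List<List<Plan>> beliefs) {
        int n = intentions.size();
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < n; i++) {
                if (i == j) continue;                                // no relation of a member to themselves
                if (!intentions.get(i).equals(beliefs.get(j).get(i))) {
                    return false;                                    // a conflict in the light of the expectations
                }
            }
        }
        return true;                                                 // all N(N-1) relations hold
    }
}
```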

5. Validation

The proposed method was applied to the operation of the plant simulator (DURESS), with two operators cooperatively controlling the system. The inference ability of the method was evaluated from two standpoints: we compared the results of the proposed method with (i) the actual team intention obtained by a post-scenario interview and (ii) a human observer's inference results. The performance of the proposed method was also compared with that of the mere summation of individual intention inference.

5.1. Method

Three participants, two operators and one observer, took part in each scenario. The participants were 18 (3 × 6 groups) graduate students of engineering.


Table 2
Role allocation

Assignment no.   Operator A                                          Operator B
1                Valve0, Valve2, Valve4, Valve5, Pump A, Heater A    Valve1, Valve3, Valve6, Pump B, Heater B
2                Valve1, Valve2, Valve5, Pump B, Heater B            Valve3, Valve4, Valve6, Pump A, Heater A
3                Valve2, Valve3, Pump A, Heater A, Heater B          Valve1, Valve4, Valve5, Valve6, Pump B

Table 3
Events

Event no.   Contents
1           After steady state, the demanded outlet flows at Valve5 and Valve6 are suddenly changed (Valve5: 100 → 150; Valve6: 200 → 300).
2           After steady state, water leaks at the pipe connected to Tank A.
3           After steady state, the demanded outlet temperature and flow at Valve5 are suddenly changed (Valve5: 70°C → 65°C, 100 → 150).
4           After steady state, Pump A trips and the demanded outlet flow at Valve6 is suddenly changed (Valve6: 200 → 300).
5           After steady state, the demanded outlet flow at Valve5 and the demanded outlet temperature at Valve6 are suddenly changed (Valve5: 100 → 150; Valve6: 50°C → 45°C).

There was no overlap in the composition of the groups. The operators' mission in each scenario was to keep the demanded outlet flows/temperatures at Valves 5/6 and to maintain each tank level between 20% and 80% (12 goals). Each member of a group was allotted a role, operating the control apparatus shown in Table 2; he or she could not control the apparatus assigned to the other member, and this role allocation was static during each scenario.

First, the participants were given instructions on the DURESS structure, the mission, and the role allocation, using paper materials and verbal explanation. Secondly, they practised DURESS operation in several practice scenarios: they first operated the control apparatus freely for a while, and then performed two scenarios. After the practice, they were asked to perform one or two scenarios that included anomalies or sudden changes of demanded flow and temperature. Table 3 shows the events used in the experiments, and Table 4 shows each scenario, its event, and the group that performed it. Each scenario was defined as a combination of an event and a role allocation, and every scenario lasted 120 s.

During each scenario the participants could communicate with each other freely; the system states (system log and plant log) and the operation log were recorded, and the screen, the operators' behaviours, and their verbal communications were videotaped. The system logs, plant logs, and operation logs were used in the inference system.


Table 4
Test scenarios

Scenario no.   Event no.   Role assignment   Group no.
1              1           1                 1
2              2           1                 1
3              2           2                 2
4              3           1                 2
5              4           1                 3
6              3           2                 3
7              3           1                 4
8              1           3                 4
9              5           1                 5
10             2           2                 6

Fig. 7. Operation log (Scenario 1).

After each scenario, the participants were asked to describe their own and the other member's actions while watching the video. The former description is regarded as the operator's actual intention and the latter as his or her actual belief regarding the other member's intention, and this set of actual individual intentions and beliefs is regarded as the actual team intention. The operators answered separately (with no communication) in order to avoid joint-remembering effects. These descriptions were compared with the results of team intention inference by the proposed method.

We also compared the inference results with the interpretations of the operations made by another person (the observer). The observer watched the operators' actions and interpreted them point by point during each scenario, and could refer to the video to complement his or her inference after the scenario. If the observer had several interpretations, he or she was asked to order them according to their certainty. We regard these interpretations and rankings as the observer's inference results.

5.2. Results and discussion

Figs. 7 and 8 show the operation logs of Scenarios 1 and 2, respectively. The starting point of the time axis corresponds to the event occurrence; the upper side of the time axis is Operator A's operation log and the lower side is B's. For example, in Scenario 1, Operator A increased the Valve 5 opening from 5 to 10 s after the demanded outlet flow had been changed.


Fig. 8. Operation log (Scenario 2).

Table 5
Result of belief inference (P6, Scenario 1)

Operator   Intention (top candidate of 10)                          Belief regarding the other member (top 3 of 5 candidates)
A          Increase level of Tank B by increasing Valve4 opening.   Increase level of Tank A by increasing Valve1 opening.
                                                                    Increase level of Tank A by increasing Valve1 and Valve3 opening.
                                                                    Increase level of Tank A by increasing Pump A power and Valve1 opening.
B          Increase level of Tank A by increasing Valve1 opening.   Increase level of Tank B by increasing Valve4 opening.
                                                                    Increase level of Tank B by increasing Valve2 and Valve4 opening.
                                                                    Increase level of Tank B by increasing Pump B power and Valve4 opening.

Each scenario is divided into several intervals (e.g. P1, P2) by the time points at which inference information, such as an operation or a goal accomplishment, is entered into the inference system. From the interview after the scenario, the operators' intention in Scenario 1 was to realise the demanded outlet flows at Valves 5 and 6 by increasing the opening of Valves 5 and 6, respectively, and then to increase the levels of Tanks A and B, to compensate, by increasing the opening of Valves 1 and 4, respectively. In Scenario 2 their intention was to decrease the outlet temperature at Valve 5 by increasing the opening of Valves 1 and 3 and decreasing the power of Heater A, because the outlet temperature at Valve 5 exceeded the demanded value as a result of the water leak.

5.2.1. Result of belief inference

An example of the result of belief inference is shown in Table 5; this is the result at P6 in Scenario 1. The intention candidate shown for each operator is the top candidate out of ten, and the three belief candidates regarding the other member inferred against it are listed. The combination of the first-ranked individual intentions and beliefs satisfied the condition of team intention, (BoXa = BoBbXa) ∧ (BoXb = BoBaXb), and was equivalent to the actual team intention.


Both Operator A's intention and Operator B's belief are interpretations of Operator A's behaviours; however, there were 10 candidates of A's intention but only 5 of B's belief, because B's belief about A is further restricted by the expectations derived from B's intention shown in Table 1. A team intention is guaranteed by the existence of a combination fulfilling the condition; when there is no such combination, there may be conflicts between the operators' intentions from the viewpoint of the expectations.

5.2.2. Comparison with the actual team intention

Table 6 compares the results of the inference system with the actual team intention. It shows, for each inference point, the number of candidates, the rank order of the candidate that is equivalent to the actual team intention, and the rank order of each candidate of the individual intentions that constitute it. For example, 26 candidates were listed by the inference system at P4 in Scenario 1, and the actual team intention was found at the first position among them. The inference failed at P3 because, in the present method, counter plans that negate side effects of a preceding intention are added to the procedure candidates only when that intention has been accomplished, whereas Operator B committed to the counter plan before the preceding intention of Operator A had been accomplished.

Table 7 shows the rank order of the actual team intention in each inference interval of the ten scenarios. The numbers in parentheses represent the rank orders of the corresponding actual individual intentions. The inference system identified the actual team intention at the first position in 18 of the 44 inference intervals. If candidates that match part of the actual team intention, when it is composed of several actions, are also regarded as successes, the result rises to 37/44. Comparing the rank orders of the team intention and the individual intentions shows that the combination of the first candidates of the individual intentions inferred independently is not always equivalent to the actual team intention; this is consistent with the fact that a team intention is not equal to the mere summation of individual intentions. The rank order of the actual team intention was generally higher than that of the actual individual intentions inferred independently, which suggests that the expectations regarding the other member effectively narrow the hypotheses of team intention. These results indicate the effectiveness of the proposed method.

Table 6
Comparison with the actual team intention (Scenario 1)

Inference point   P1    P2    P3     P4     P5     P6
Team intention    1/1   1/1   Fail   1/26   1/23   1/21
A's intention     1/1   1/1   Fail   4/10   1/10   1/10
B's intention     3/3   1/1   2/10   3/10   3/10   1/10


Table 7
Rank order of the actual team intention by the inference system (entries are inference intervals; the numbers in parentheses are the rank orders of A's and B's individual intentions; F = failed)

Scenario 1:  1st: P1(1,3), P2(1,1), P4(4,3), P5(1,3), P6(1,1); Failed: P3(F,2); Total: 6
Scenario 2:  1st: P3(5,2); 2nd: P2(5,5); Lower: P1(10,5); Total: 3
Scenario 3:  1st: P2(3,3), P3(1,2), P4(1,1); 3rd: P1(3,8); Total: 4
Scenario 4:  1st: P1(1,1), P5(2,6); 3rd: P4(6,6); Failed: P2(F,F), P3(F,6); Total: 5
Scenario 5:  1st: P1(4,2), P2(2,2); Total: 2
Scenario 6:  1st: P4(2,2); 2nd or lower: P1, P2, P3; Total: 4
Scenario 7:  1st: P6(2,2); 2nd: P2(1,2), P3(3,2), P5(2,4); 3rd: P4(4,4); Lower: P1; Total: 6
Scenario 8:  1st: P1(1,1), P3(1,1); 2nd or lower: P2; Total: 3
Scenario 9:  2nd or lower: P1, P2, P3, P5, P6; Failed: P4(9,1); Total: 6
Scenario 10: 1st: P4(5,2); 2nd or lower: P1, P2, P3; Failed: P5(1,2); Total: 5
Total:       1st: 18; 2nd: 7; 3rd: 7; Lower: 7; Failed: 5; Total: 44

Table 8
Comparison with an observer's inference (Scenario 2)

Inference point     P1       P2     P3
Inference system    5/6      2/4    1/2
Observer            Fail/4   2/2    1/1
Observer's 1st      1        1      1
Observer's 2nd      4        2      —
Observer's 3rd      3        —      —
Observer's 4th      2        —      —

5.2.3. Comparison with an observer's inference

Table 8 compares the results of the inference system with those of an observer in Scenario 2. The first and second rows compare the inference accuracy of the system and the observer, and the lower rows compare their inference tendencies.


Table 9
Rank order of the actual team intention by a human observer

Scenario no.   1st              2nd   3rd   Lower   Failed               Total
1              P1, P2, P5, P6   0     0     0       P3, P4               6
2              P3               P2    0     0       P1                   3
3              P2, P3, P4       P1    0     0       0                    4
4              P1, P5           0     0     0       P2, P3, P4           5
5              P1, P2           0     0     0       0                    2
6              P3, P4           0     0     0       P1, P2               4
7              P6               P5    0     0       P1, P2, P3, P4       6
8              P1, P2, P3       0     0     0       0                    3
9              P6               0     0     0       P1, P2, P3, P4, P5   6
10             P4, P5           0     0     0       P1, P2, P3           5
Total          21               3     0     0       20                   44

For example, at P1 the observer listed four candidates of the team intention but failed to identify the actual team intention, while the inference system listed six and identified it at the fifth position. The first candidate given by the observer was equivalent to the first given by the inference system, the second to the fourth, the third to the third, and the fourth to the second. The inference accuracy and tendency of the system and the observer thus look almost the same.

The rank orders in each inference interval given by the human observers are shown in Table 9. In total, the observers identified the actual team intention at the first position in 21 of the 44 inference intervals, slightly better than the inference system. However, they failed to list any candidates in 20 intervals because of cognitive overload such as time pressure and information overload; in this respect the inference system is superior to the human observer.

5.2.4. Failed cases

In the ten scenarios, the inference failed in five intervals. These failures can be categorised into the three cases listed below; the first is specific to team intention inference, and the others are due to the method of individual intention inference.

Look-ahead intention: The inference failed at P3 in Scenario 1 because Operator A formed a look-ahead intention: his intention was a counter plan to Operator B's intention. Counter plans are, in the present method, added to the procedure candidates only at P4, after the preceding intention has been accomplished. It would be possible to cope with this problem by adding counter plans beforehand; however, operators do not always look ahead by just one step but by two or more, depending on the situation, and this could cause a combinatorial explosion. This problem is specific to team intention inference.

Intention with many procedural steps: The inference failed at P2 and P3 in Scenario 4. The team goal from P2 to P3 was to decrease the outlet temperature at Valve 5, and the intention included three action steps: increasing the Valve 1 opening, decreasing the power of Heater A, and increasing the power of Pump A. It is difficult to identify a complete team intention that requires multiple actions when only one or two actions have been entered.


If intention candidates that match part of the actual intention, for example at P2 "to decrease outlet temperature at Valve 5 by increasing Valve 1 opening", "by increasing Valve 1 opening and decreasing power of Heater A", or "by increasing Valve 1 opening and Valve 3 opening", are regarded as correct answers, the inference accuracy looks better. Considering quantitative information, such as the contribution of each action to the goal, or fine-tuning the heuristics could possibly fix this problem.

Multi-intention: The inference failed at P4 in Scenario 9 and P5 in Scenario 10 because the operators had several intentions at once. The individual intention inference method used in this study is based on a serial-processing model and does not permit operators to have several goals at one time. This problem is simply due to the performance of the individual intention inference method and is relatively easy to handle with a hierarchical plan representation.

5.2.5. Limitations

The inference system correctly inferred the team intention, or placed it high in the priority list, in these ten scenarios; however, our method does not capture all aspects of individual and team intention formation. The problems listed below could therefore impede the use of the method in large-scale, real-world situations.

Inference information: Our method utilises the system state and the operators' actions through the human–machine interface to infer the team intention. Therefore, when there is no action, or an action is delayed, the system infers operator intention only from the system state and heuristics, and its inference becomes less certain (e.g. P4 in Scenario 1). There is, however, a variety of information that conveys a great deal about an operator's intention, for example verbal communication, eye movements, and mental state. Methods that sense and analyse these factors are not powerful enough to infer intention by themselves, but it could be possible to develop a more powerful tool by integrating them into the intent inferencing technique.

Knowledge integrity: The integrity of the knowledge base is one of the major problems in intention inference (Carberry, 2001). For example, inference is hampered if the plan library does not capture all the means of achieving a goal, and it is very difficult to design a complete plan library for real situations. Previous work on this problem has introduced mechanisms for automatically constructing or updating the system's plan library with new recipes (Lesh and Etzioni, 1996; Levi et al., 1990), hierarchical plan representations (meta-plans) (Carberry, 1993), and so on. The Bayesian belief network (BBN) is another promising approach to this kind of problem, since a BBN can deal with the non-monotonic and evidential reasoning involved in human intention inference. The heuristics used in this study were rather general ones, but it was confirmed in a previous study that they do not depend strongly on scenarios or operators and work effectively to identify an operator's intention (Furuta et al., 1998). Heuristics might nevertheless depend on the domain, tasks, or situations to some extent, and we should therefore avoid over-generalisation. It could be possible to extract proper domain- or task-dependent heuristics by analysing the inference results obtained with these general heuristics.


Cognitive mode: Since our method is based on a procedural model of cognition, behaviours that cannot be explained by procedural cognitive models, such as reactive planning or situated action (Suchman, 1987), in which a human selects actions reactively on the basis of the immediately perceived status of the environment, are not correctly inferred. As long as a procedural cognitive process is assured, for example in the strategic and tactical control modes of decision making (Hollnagel, 1993), our method is applicable even in dynamic and interactive contexts. The inference might, however, become less certain, because we used rather general heuristics and expectations to narrow the hypotheses of team intention; situation-dependent heuristics or expectations are therefore necessary to enhance the inference ability in such situations. In the experiment, other modes of decision making (situated actions, and the opportunistic and scrambled modes) were not observed.

6. Conclusion

The major contribution of this study is a method for team intention inference in process domains. The underlying assumption is that the mechanism of intention formation behind cooperative team activities differs from that of the individual, so that methods for individual intention inference cannot be applied directly to cooperative team activities. The proposed method is based on this mechanism of team intention and uses expectations of the other members as clues to infer a team intention. It was confirmed through an experiment that the expectations served to narrow the hypotheses of team intention and that the inference accuracy and tendency of the proposed method were similar to those of humans. The results suggest that a method based on we-intention is effective for team intention inference, at least in the tested context. Further investigation is necessary before this method can be applied to real systems; however, team intention inference by this method is expected to contribute to sophisticated team–machine communication, for example in aircraft cockpits or the central control rooms of power plants, by supporting appropriate information sharing, providing supplemental information, and helping to avoid misunderstandings among operators on the basis of the inferred team intention.

References

Bratman, M.E., 1992. Shared cooperative activity. The Philosophical Review 101, 327–341.
Carberry, M.S., 1993. Plan Recognition in Natural Language. MIT Press, Cambridge, MA.
Carberry, M.S., 2001. Techniques for plan recognition. User Modeling and User-Adapted Interaction 11, 31–48.
Conte, R., 1989. Institutions and intelligent systems. In: Jackson, M.C., Keys, P., Cropper, S.A. (Eds.), Operational Research and Social Sciences. Plenum Press, New York, pp. 201–206.


Devaney, M., Ram, A., 1998. Needles in a haystack: plan recognition in large spatial domains involving multiple agents. Proceedings of the 1998 American Association for Artificial Intelligence, pp. 942–947.
Furuta, K., Sakai, T., Kondo, S., 1998. Heuristics for intention inferencing in plant operation. Proceedings of the 4th Probabilistic Safety Assessment and Management, pp. 1907–1912.
Goodman, B.A., Litman, D.J., 1992. On the interaction between plan recognition and intelligent interfaces. User Modeling and User-Adapted Interaction 2, 83–115.
Hatakeyama, N., Furuta, K., 2000. Bayesian network modeling of operator's state recognition process. Proceedings of the 5th Probabilistic Safety Assessment and Management, pp. 53–58.
Hollnagel, E., 1992. The design of fault tolerant systems: prevention is better than cure. Reliability Engineering and System Safety 36, 231–237.
Hollnagel, E., 1993. Human Reliability Analysis. Academic Press, New York.
Hollnagel, E., 1995. Automation, coping, and control. Proceedings of the Post HCI'95 Conference Seminar on Human–Machine Interface in Process Control, pp. 21–32.
Hutchins, E., 1995. Cognition in the Wild. MIT Press, Cambridge, MA.
Lesh, N., Etzioni, O., 1996. Scaling up goal recognition. Proceedings of the 5th International Conference on Knowledge Representation and Reasoning, pp. 178–189.
Levi, K., Shalin, V., Perschbacher, D., 1990. Learning plans for an intelligent assistant by observing user behavior. International Journal of Man–Machine Studies 33, 489–503.
Martial, F., 1992. Coordinating Plans of Autonomous Agents. Lecture Notes in Computer Science 610. Springer-Verlag, New York.
Norman, D.A., 1988. The Psychology of Everyday Things. Basic Books Inc., New York.
Paris, C.R., Salas, E., Cannon-Bowers, J.A., 2000. Teamwork in multi-person systems: a review and analysis. Ergonomics 43, 1052–1075.
Rasmussen, J., 1986. Information Processing and Human–Machine Interaction. Elsevier, New York.
Reason, J., 1990. Human Error. Cambridge University Press, Cambridge, New York.
Rubin, K.S., Jones, P.M., Mitchell, C.M., 1988. OFMspert: inference of operator intentions in supervisory control using a blackboard architecture. IEEE Transactions on Systems, Man, and Cybernetics 18, 618–637.
Searle, J.R., 1989. Collective intentions and actions. In: Intentions in Communication. MIT Press, Cambridge, MA, pp. 401–415.
Suchman, L.A., 1987. Plans and Situated Actions. Cambridge University Press, Cambridge, New York.
Tuomela, R., Miller, K., 1987. We-intentions. Philosophical Studies 53, 367–389.
Vicente, K.J., Rasmussen, J., 1990. The ecology of human–machine systems. Ecological Psychology 2, 207–249.