Situation assessment and prediction in intelligence domains


Knowledge-Based Systems 10 (1997) 87-102


Lisa A. Jesse, Jugal K. Kalita*

Department of Computer Science, University of Colorado, Colorado Springs, CO 80933, USA

Received 7 June 1996; revised 9 December 1996; accepted 12 December 1996

Abstract

We discuss a user-friendly system containing an operational suite of tools that support intelligence data processing, data visualization, historical analysis, situation assessment and predictive analysis. The tools facilitate the study of events as a function of time to determine situational patterns. To support this analysis, the system has various data displays (e.g., timelines, maps, charts, and tables), a historical event database, query capabilities, and expert system tools. The expert system tools analyze temporal information, predict future events, and explain decisions visually and textually. The tools are currently installed in several military commands and intelligence agencies, supporting analysis ranging from strategic C3 to counter drug. © 1997 Elsevier Science B.V.

Keywords: Situation assessment; Situation prediction; Intelligence analysis

1. Introduction

One of the primary responsibilities of an intelligence analyst is to keep decision makers apprised of the current situation and give projections of future activities. This process consists of monitoring observable events for indications of particular activities and deviations from expected norms. Though the analysis of specific forces (air, missile, space, naval, etc.) is necessary from a tactical standpoint, the analysis of a potential adversary's command, control, and communications (C3) structure reveals overall strategies and plans. The C3 structure is the 'glue' which holds everything together and allows an adversary to operate his forces in a coherent manner.

A methodology commonly used in the intelligence environment is temporal analysis. Temporal analysis involves the study of events as a function of time to determine patterns of behavior. Events consist of recordings of activities such as communications, political activities, and aircraft maintenance. Events may occur instantaneously or may have a duration. Displaying intelligence data in terms of an annotated network of events provides a context for seemingly unrelated activities, allowing intelligence analysts to make accurate assessments regarding whether activities of interest are occurring or impending. By analyzing the various elements which comprise the potential adversary's C3 structure over a

* Corresponding author. E-mail: [email protected]


long period of time, it is possible to establish correlations which enable intelligence analysts to predict future activity based upon historical patterns or expectations. Lacking this ability, intelligence analysts are forced into a mode of constantly reacting to individual events.

Situation assessment and predictive analysis are very time consuming and require extensive training because the analyst must fuse large volumes of data received from many diverse and complex functional areas. Currently, most analysts manually detect indications of particular activities by visually pattern matching events against general situation templates. These situation templates consist of key events and event relationships known only by the most experienced analysts. There are several problems in manual situation assessment. Only the most experienced analysts have the knowledge required for accurate and timely analysis. In addition, documentation on methods to perform this analysis is typically nonexistent, and frequent personnel turnover is a significant problem. Often, relevant event sequences are obscured by numerous irrelevant events. This 'noise' hinders the analyst and may lead the less experienced analyst to wrong conclusions. In addition, key events that indicate a particular activity may be missing. Knowledge of how events correlate to specific activities is also in constant flux and requires extensive training. This dynamic nature is especially apparent in emerging third world countries where lack of historical data and erratic behavior have made it difficult to establish norms.



2. System overview

Fig. 1 provides an overview of the system's functionality and the current operational analysis areas. The complete system is composed of seven tools: Timeline, Map, Query Panel, Chalkboard, Dictionary, Model Developer and the Knowledge-Based Predictive Analysis and Situation Assessment (K-PASA). K-PASA and the Model Developer are expert system-based tools which help automate temporal analysis by supporting situation assessment and predictive analysis. The primary input into K-PASA is a set of events selected by the user and a set of models from the database. The expert system returns descriptions of the activities most likely indicated by the input events. Based upon the assessments, the user can ask the system to predict future events. To provide transparency in system conclusions, the expert system contains an explanation sub-system. The Model Developer allows the users to maintain the expert system's knowledge base. Fig. 2 gives a gross overview of the architecture of the system. It shows only three basic components. Details of the system are discussed in [1].

Fig. 1. Domains supported by the system.

The last two applications, the Dictionary and Chalkboard, are relatively minor. The Dictionary is a user-defined lexicon of information. This information includes terminology, definitions and synonym relationships relevant to the specific application domain. The Chalkboard is a generic drawing tool used to develop briefings.

K-PASA and the Model Developer were originally developed on a VAXStation 3520 platform. The database was Rdb, a relational database from DEC. The user interface was developed using DECWindows and GKS graphics software. The Model Developer was implemented in the Ada programming language. The K-PASA system was also implemented using Ada, but the core functionality was developed using the Nexpert Object AI development environment. Since the original development, the tools have been ported to an X-Windows and MOTIF environment using the Sybase DBMS. In the current system every component is written in Ada except for the Query Panel, which is written in C++.

As previously mentioned, the environments these tools operate in are constantly changing. As a result, the primary design consideration for the expert system was to provide flexibility in modifying the knowledge base. The knowledge base must be maintainable by users who are experts in their respective intelligence domains, but who are relatively inexperienced in computer software. To promote user knowledge base maintenance, the knowledge representation reflects the actual temporal analysis methodology utilized in the manual human thought processes and retains the 'look and feel' of the Timeline displays.

"--..

IE_ 1

Fig. 2. Systemoverview.

./ Or.p,c.


3. Knowledge representation: Temporal transition models

Knowledge in the intelligence analysis domain is in continuous flux. Hard-coded knowledge bases requiring specialized software or AI expertise to modify are neither cost effective nor logistically practical. So, the knowledge representation must be maintainable by an analyst who works with the system on a day-to-day basis. As a result, the system cannot assume any kind of software training, let alone training in AI knowledge representation techniques.

Representation and processing of temporal knowledge is of paramount importance for the system discussed in this paper. In computer science, there is a great diversity in the approaches towards the representation of time and the reasoning methods used for dealing with time-varying data [2]. Temporal models can be classified primarily into two classes: point-based and interval-based. The standard technique for dealing with temporal representation and reasoning is in terms of logical properties of time-dependent statements, creating what are called temporal logics. Notable temporal logics include Pnueli's linear time temporal logic [3], Chandy and Misra's specialized formal language for dealing with linear time [4], and the Computational Tree Logic, CTL* [5], which combines both branching time and linear time. Logical approaches that treat time as intervals include the influential work by Allen [6]. Since physical events have non-zero duration (that may be infinitely large or infinitesimally small), intervals have more appeal to some researchers than time instants. Allen [7] discusses an interval-based representation and an algorithm based on his logic [6]. Events (time intervals) are stored as nodes of the network and are related with arcs labeled by one of 13 possible temporal relations. Applying constraint propagation, the system responds to queries about the time of events even when no direct information relating the queried events is provided. Allen's algorithm has been used in several practical systems such as the Logos system [8], with temporal databases [9], the Time Logic system [10], etc. The reasoning method used is sound, but not complete. The algorithm is polynomial, O(N^3) in the number of nodes in the network. It is computationally attractive because the alternative is to derive all possible implied relations, leading to an intractable problem [11]. Research on characterizing and improving Allen's logic and algorithm continues to the present day. For example, Trudel [12] points out how Allen's logic fails to define processes correctly and provides an alternative in terms of point primitives. Mitra and Loganantharaj [13] have undertaken a large-scale empirical testing of Allen's algorithm with a hundred thousand randomly generated networks with up to 60 nodes each. Song and Cohen [14] improve Allen's algorithm by reducing the ambiguity in arc labeling that remains at the end of constraint propagation. They do so by viewing an event structure (e.g., a plan for actions) as both a temporal network and a hierarchical structure.


Research in the area of human-computer interaction is also grappling with problems in temporal representation. For example, Gray et al. [15] use XUAN (eXtended User Action Notation) to specify temporal behavior of tasks in complex human-computer interactions. The notation borrows elements from Allen's temporal logic [6] and Shneiderman's multi-party grammar [16]. Blandford et al. [17] study temporal and other properties of interactions and propose corresponding interactional requirements in order to design and evaluate interactive systems. Their approach, called the IF (Interaction Framework), and approaches such as XUAN allow the consequences of timing variations to be inferred with relative ease.

A method of representation and reasoning based purely or even partially on logic was found to be cumbersome and complicated by the users of our system. Therefore, in spite of the existence of these logic-based methods and algorithms, we developed a relatively simple, but elegant, temporal model of our own. It provides for efficiency in knowledge acquisition and still possesses the appropriate representational power for this project. This knowledge representation, called Temporal Transition Models (TTMs), specifies generalized event patterns that characterize a particular situation of interest. Associated with the TTMs is a graphical specification language developed to be consistent with the method by which users manually perform situation assessment and predictive analysis. This language utilizes the same icon notation found in the timeline and map displays. The Model Developer implements this graphical language, allowing the user to maintain the expert system's knowledge base by manipulating the TTMs without the aid of a knowledge engineer. A brief comparison of TTMs and Allen's representational scheme is found in [1].

3.1. TTM representation

The TTM structure is a combination of concepts from Augmented Transition Networks (ATNs) used in natural language processing [18] and decision trees [19]. Like ATNs, TTMs are composed of states and transitions. A state corresponds to the moment before the beginning of an event in the application area (e.g., communications, aircraft activity, and personnel movements), although loosely we may say that a state corresponds to an event - the one that it starts. A state is graphically represented by a circle with an icon denoting the type of event it begins. State specifications define the type and characteristics of events which may match the state. For example, strategic exercises may be initiated by a series of communications activity from a headquarters unit to one or more of its subordinate units. In this case, the states may specify the source, destination and communication medium of these communication activities. Internally, a state is represented by a frame structure [20] that contains information about the event that is begun there. State specifications support several operators which allow the user to constrain attributes of the events that start there.



These operators are Equal, Subset, =, <, >, <= and >=. Equal and Subset are used on symbolic attributes; =, <, >, <=, >= are used on numeric attributes. The Subset operator defines one or more values to be matched by the events. The Equal operator specifies exact values. For example, the following specification defines a state that denotes the moment before an event specifying a Headquarters (HQ) contacting one or more of its subordinate units (Unit 1, Unit 2, Unit 3):

Communications state
    Source: Equal(HQ)
    Destination: Subset(Unit 1, Unit 2, Unit 3)

The numeric operators perform a counting function. Used primarily for TTMs describing indications and warning situations, these operators provide the capability to specify the number of occurrences over a time period (e.g., if more than 2 Bear-H aircraft depart from Base X within a 24 hour period). Dynamic constraints can also be specified using variables. Variables can be assigned to the attribute values of


events matching a particular state that need to be referenced in later states. This capability allows dynamic specification of attribute constraints using information from previously matched events. For example, assume a variable was specified for the Destination attribute in the above example. If during a particular exercise, HQ only contacted Unit 1 and Unit 3, those two values would be bound to the variable. Subsequent states could then refer to that variable in order to further constrain the search to activities associated with only those two units (e.g., aircraft departing from bases associated with those units).

Transitions define the ordering of the states and provide temporal constraints on the events. These temporal constraints can be both relative and absolute. Relative constraints specify the time that events must occur in order to satisfy the next state, relative to the event(s) fulfilling the previous state. For example, 1 to 10 minutes after communications are detected from a headquarters to a bomber wing, aircraft departing from the air base should be expected. Absolute constraints allow the specification of particular times (e.g., between 1200 and 1630), weekdays, months, and years in which the events should occur. Currently, the reference point used in evaluation is the start time of the event(s) fulfilling the previous state.
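To make the state specification semantics concrete, the following sketch shows one way the Equal and Subset operators and the variable-binding mechanism described above could be coded. It is an illustrative Python sketch only, not the fielded Ada/Nexpert implementation; the dictionary-based event and state structures, the field names and the bindings table are assumptions made for the example.

def equal(spec_value, event_value):
    # Equal operator: the event attribute must match the specified value exactly.
    return event_value == spec_value

def subset(spec_values, event_values):
    # Subset operator: every reported value must be one of the allowed values.
    return set(event_values) <= set(spec_values)

OPERATORS = {"Equal": equal, "Subset": subset}

def match_state(state_spec, event, bindings):
    # state_spec["constraints"] maps attribute names to (operator, value) pairs;
    # state_spec["variables"] maps attribute names to variable names whose bound
    # values can be referenced by later states (dynamic constraints).
    for attr, (op_name, spec_value) in state_spec.get("constraints", {}).items():
        if not OPERATORS[op_name](spec_value, event.get(attr)):
            return False
    for attr, var_name in state_spec.get("variables", {}).items():
        values = event[attr] if isinstance(event[attr], list) else [event[attr]]
        bindings.setdefault(var_name, set()).update(values)
    return True

# The communications state from the example: HQ contacts some of Units 1-3,
# and the units actually contacted are captured in the variable ?units.
state = {"constraints": {"Source": ("Equal", "HQ"),
                         "Destination": ("Subset", ["Unit 1", "Unit 2", "Unit 3"])},
         "variables": {"Destination": "?units"}}
event = {"type": "Communications", "Source": "HQ",
         "Destination": ["Unit 1", "Unit 3"]}
bindings = {}
print(match_state(state, event, bindings))   # True
print(bindings)                              # {'?units': {'Unit 1', 'Unit 3'}}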

Fig. 3. Sample TTM tables. The Model table holds Model ID, Activity and Organization; the State table holds Model ID, State ID, Event Type and State Relationship; the Transition table holds Model ID, Current State ID, Next State ID, Transition Type, Minimum Time, Maximum Time and Confidence Increment.
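As a rough mirror of the relational layout summarized in Fig. 3, the Python dataclasses below carry one field per column; the attribute types and units are assumptions made for illustration, not taken from the paper.

from dataclasses import dataclass

@dataclass
class ModelRow:
    model_id: int
    activity: str             # the activity the TTM characterizes
    organization: str         # the organization the TTM applies to

@dataclass
class StateRow:
    model_id: int
    state_id: int
    event_type: str           # e.g. communication, aircraft or VIP movement
    state_relationship: str   # relationship label, later reused by the explanation text

@dataclass
class TransitionRow:
    model_id: int
    current_state_id: int
    next_state_id: int
    transition_type: str      # 'AND' or 'OR'
    minimum_time: float       # relative temporal constraint (assumed to be minutes)
    maximum_time: float
    confidence_increment: float   # -100..100, applied when the transition is satisfied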



Multiple transitions from a particular state are considered a branch. Transitions in a branch can be either 'AND' or 'OR' transitions. The evaluation of transitions is very similar to AND/OR decision trees, where 'OR' branches are evaluated independently and 'AND' branches are evaluated together. We intend to incorporate XOR (exclusive OR) branches in the near future. Confidence values are associated with transitions. These values range from -100% to 100%, where negative values indicate a decrease in belief and positive values indicate an increase in belief. It also must be noted that composition of confidence is additive along a sequential path, but along AND and OR paths, composition of confidence is done differently. As a very simple example, let us assume that a sequential TTM T is composed of three events A, B, and C that occur in sequence. Also assume that the occurrence of event A gives us 25% confidence that we may have a match with TTM T; occurrence of B following A gives us an additional 50% confidence that we have a match with the TTM T; occurrence of C following A and B gives us an additional 25% gain in confidence, leading to a total of 100% confidence in a match with the TTM T. It should be noted that the numbers given here are arbitrary, but for a given TTM, the total confidence obtained compositionally from its start state to its end state must add to 100% if there is full confidence in the TTM. The confidence values represent the level of belief that the event timeline is indicative of the correlated activity as well as the accuracy of the TTM description. Generally, as more states are traversed during the analysis, there is an increase in confidence that events indicate a certain kind of activity. It is possible for a TTM T to have a transition with a negative certainty factor, indicating that the event the transition leads to provides negative evidence that the event sequence represented by T has occurred. Combination of confidences based upon the various paths satisfied by the events is based upon MYCIN confidence factors [21]. More formal probabilistic methods (e.g., Dempster-Shafer theory [22]) were investigated, but were discarded as being too cumbersome and unintuitive for analysts.
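The additive composition along a sequential path can be pictured with the small sketch below, which also shows a transition carrying negative evidence; clamping the running total to the -100..100 range and the helper name are assumptions made for the example, and the AND/OR composition used at branches is covered in Section 4.1.

def accumulate_sequential(confidence_increments):
    # Add transition confidence increments along a sequential TTM path.
    # Increments may be negative (negative evidence); the running total is
    # kept inside the -100..100 range used for TTM confidences.
    total = 0.0
    for increment in confidence_increments:
        total = max(-100.0, min(100.0, total + increment))
    return total

# The sequential example from the text: A (25%), then B (+50%), then C (+25%).
print(accumulate_sequential([25, 50, 25]))    # 100.0 -> full confidence in TTM T
# The same path, but the final transition contributes negative evidence.
print(accumulate_sequential([25, 50, -40]))   # 35.0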

Fig. 4. SAC ORI TTM.



The method chosen, however, more accurately models the method analysts use in manually performing situation assessment, making it more natural and less threatening to the user. It is possible to argue that linguistic expressions or labeling of likelihood is more suitable for our domain. However, we found it simple and beneficial to use an arithmetic-based model. Although it is simple, our expert analysts concur with our choice.

The TTMs are stored in a relational database for access by the Model Developer and K-PASA. Fig. 3 describes sample TTM tables that support communication, aircraft and VIP movement event types.

3.1.1. Example

An example of a simple TTM describing Strategic Air Command (SAC) Operational Readiness Inspection (ORI) activity is shown in the top pane of Fig. 4. SAC, which has recently been deactivated, periodically performed ORIs to exercise the combat readiness of its subordinate units. All of the unit and aircraft information displayed in the example was derived from a 1991 issue of Air Force Magazine. Four event types are illustrated in the TTM: codewords (square icon), airborne command posts (straight-wing aircraft icon), communications (telephone icon), and aircraft (swept-wing airplane icon). Codewords are high-level communications. Note that this is a made-up example for the paper. Models actually used by the system are much more complex and cannot be discussed in the paper.

The leftmost codeword event is the initial state. This state describes General Strategic Alert activities by SAC headquarters. In detail, this state specifies that matching events must be codewords from SAC headquarters issued to the headquarters of the 15th and 8th Air Forces. From this initial state, two paths can be taken. The top branch describes a typical 8th Air Force exercise and the bottom branch describes a 15th Air Force exercise. Both transitions in this branch are 'OR' transitions so that either one or both Air Force entities may respond to the SAC ORI alert. Precursor 15th Air Force activities before the actual conduct of the exercise can be summarized as follows: the 15th Air Force contacts subordinate units that will participate in the exercise, a 15th Air Force airborne command post will be active to provide orders, participating units will confirm alert status, then the 15th Air Force will initiate the exercise. Depending on whether the aircraft activity is to be performed or simulated, aircraft will depart from various bases associated with the participating units and then perform bombing runs on the designated bombing areas. After aircraft activity, participating units will contact the 15th Air Force headquarters, followed by the 15th Air Force headquarters contacting SAC. The 8th Air Force exercise progresses in a similar fashion, but with units subordinated to the 8th Air Force rather than the 15th Air Force.

Let us now consider the specifications of the transition leading from the initial state to the first codeword state on the 15th Air Force branch. The transition specifies that 1 to 10

minutes after event(s) that match the initial state are detected, we should expect the occurrence of codeword event(s) issued from the 15th Air Force headquarters to one or more subordinate units. If event(s) which satisfy these constraints are found, the confidence that a SAC ORI is occurring is raised by an appropriate amount. The confidence value is the average of values provided by several expert human analysts. A variable has been defined for the subscribers attribute in the state specification. Semantically, this variable captures all of the subordinate units participating in the exercise and is used in subsequent states to further constrain events to those relating to participating units. It is possible to examine the attributes associated with a transition, such as the one discussed here, using the graphical user interface.

3.2. Other components in the system

The Map subsystem displays the activity in both a temporal and spatial context. The events are displayed using the same icon notation found in the Timelines. Several types of map data are available, ranging from country outlines to detailed Defense Mapping Agency (DMA) data (e.g., terrain features). Detailed map data support is derived from the Common Mapping Toolkit (CMTK) mapping capabilities. Common mapping functions such as zoom, unzoom, and various map overlays (e.g., states, rivers, grid) are supported.

Both the Timelines and Maps support a historical database, data entry, query capabilities, briefing creation, and filtering mechanisms to aid in the analysis. Manual data entry using forms is provided. Some domains, however, have automatic message processing capabilities for fixed-format messages. When messages are parsed, the Timelines and Maps are automatically updated with the new information. To support briefings, the Timeline and Map applications have an extensive annotation capability. A point-and-click annotation toolbox is available for drawing text and graphics. Specialized map annotations such as zones and place names are also available.

The Timeline and Map applications provide homogeneous displays for heterogeneous data sources. These data visualization capabilities provide the user with a common context for viewing, analyzing and manipulating data from external databases and applications. A framework has been developed which allows the integration of external databases into the environment. This framework provides the capability for the user to select the integrated databases via menu options. In addition, the system has a file import and export capability. This capability allows other applications to write event information to a formatted ASCII flat file and have the Timeline and Maps display the information. The user also has the option to save the information to the TAS database.

The Query Panel is a point-and-click interface for retrieving data from integrated databases. The user can perform ad hoc queries against the databases and have the retrieved data piped to various displays. Displays currently supported are maps, timelines, tables, and histograms. The Query Panel is


data driven so that layering the tool on top of new or existing databases does not require code modifications. The user is abstracted from the underlying database management system.

4. Situation assessment

K-PASA is the engine that performs situation assessment by analyzing events against TTM specifications. The user may select the types of activities that the system should search for in the incoming event descriptions. Assessments are displayed in a list, ordered by decreasing confidence. The user may select an assessment and receive either an explanation or a prediction of future activity.

Situation assessment involves the detection of key activities from incoming event descriptions. There is some research in event-based situation assessment and event prediction in the field of political science. Early work included a weight-based method of comparing event sequences called the Levenshtein metric [23,24]. More recent work includes POES [25] and situation predisposition models based on knowledge-intensive rule-based processing [26]. The Levenshtein distance, developed in the context of molecular biology, is the sum of the weights of the operations needed to convert one linear sequence into another. If A and B are two sequences a1a2a3...am and b1b2b3...bn, the Levenshtein approach converts one to the other using three operations: delete an element from A, insert an element into B, and substitute bj for ai. A drawback of Levenshtein's approach is that it deals only with linearly ordered sequences. In real life, many archetypical situations have partially ordered component events. POES has been developed to handle such situations where some preconditions of events can be performed in any temporal order.

For situation assessment, the primary input into the K-PASA kernel is a set of events selected by the user and a set of TTMs describing correlations between event patterns and activities of interest. K-PASA is very easy to use. K-PASA is currently invoked via a Timeline, where the user is asked to select the events to be processed off the Timeline display. The initial K-PASA screen contains the types of activities (e.g., the activities with defined TTMs) known to the system. A status line is provided to guide the beginning user. The user may then select the activities to be detected by the system. K-PASA performs the mapping between the events and the TTMs associated with the selected activities, returning the assessments listed in order of decreasing confidence. Assessment display thresholds can be set by the user. If the user selects an assessment, the confidence will be displayed. At this point, the user may request an assessment explanation or prediction of future events.
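For reference, the Levenshtein distance mentioned above can be computed with the standard dynamic-programming recurrence sketched below; unit operation costs are used here, whereas the political-science work cited above assigns weights to the individual operations.

def levenshtein(a, b):
    # Minimum number of deletions, insertions and substitutions turning sequence a into b.
    m, n = len(a), len(b)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                       # delete the first i elements of a
    for j in range(n + 1):
        dist[0][j] = j                       # insert the first j elements of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,              # delete a[i-1]
                             dist[i][j - 1] + 1,              # insert b[j-1]
                             dist[i - 1][j - 1] + substitution)
    return dist[m][n]

# Comparing two linearly ordered event sequences by event type.
print(levenshtein(["codeword", "communications", "departure"],
                  ["codeword", "departure", "recovery"]))     # 2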

4.1. Mapping of events

The input into K-PASA is a timeline of events and a set of


TTMs. The output is a set of distinct mappings between the event timeline and the TTMs. Each mapping corresponds to a particular assessment. The TTMs are stored in the database and retrieved during situation assessment processing. The comparison process starts at a TTM's initial states, searching for events that match the initial states' specifications. If one or more initial states are satisfied, the system searches for events that match the constraints specified by subsequent transitions and states. This TTM traversal process continues until all TTM branches terminate or no events satisfy the next transition or state specifications. Currently, event start times are used as reference points in the transition evaluation. A high-level description of this algorithm is given below.

Retrieve events and TTMs specified by user
For each model do
    Retrieve states and transitions
    For each event fulfilling an initial state do
        Create a root hypothesis and put in the Uninvestigated Hypothesis List
    Until the Uninvestigated Hypothesis List is empty do
        Get next hypothesis from the Uninvestigated Hypothesis List
        Get list of next states and put in State List
        Set Evaluate Current Hypothesis false
        Until State List is empty do
            Get next state
            If events match the transition to the next state and the next state specifications then
                If multiple events match then
                    Create a Meta-Event
                If the transition to the event is part of a branch then
                    Create a new child hypothesis and put in the Uninvestigated Hypothesis List
                    Set Evaluate Current Hypothesis false
                Else
                    Append state and matching event to current hypothesis
                    Set Evaluate Current Hypothesis true
            Go to next state
        If Evaluate Current Hypothesis is false then
            Remove current hypothesis from the Uninvestigated Hypothesis List
Aggregate Hypotheses
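A compact, runnable rendering of this traversal is sketched below. It is deliberately simplified: states match on event type only, and the temporal constraints, meta-events and partial activation of Sections 3.1 and 4.2 are omitted; the toy TTM and event structures are assumptions made so the example is self-contained.

from collections import deque

# Toy TTM: states keyed by id with an expected event type; transitions map a
# state to the states reachable from it.
TTM = {
    "initial": [1],
    "states": {1: "codeword", 2: "communications", 3: "aircraft"},
    "transitions": {1: [2], 2: [3], 3: []},
}

def matches(event, state_id, ttm):
    # Stand-in for the full state-specification comparison.
    return event["type"] == ttm["states"][state_id]

def assess(events, ttm):
    # A hypothesis is a list of (state_id, event) pairs.
    uninvestigated = deque()
    completed = []
    for state_id in ttm["initial"]:
        for event in events:
            if matches(event, state_id, ttm):
                uninvestigated.append([(state_id, event)])        # root hypothesis
    while uninvestigated:
        hypothesis = uninvestigated.popleft()
        last_state, _ = hypothesis[-1]
        extended = False
        for next_state in ttm["transitions"][last_state]:
            for event in events:
                if matches(event, next_state, ttm):
                    # each satisfied next state spawns a (child) hypothesis
                    uninvestigated.append(hypothesis + [(next_state, event)])
                    extended = True
        if not extended:
            completed.append(hypothesis)     # leaf hypothesis, aggregated later
    return completed

events = [{"id": "e1", "type": "codeword"},
          {"id": "e2", "type": "communications"},
          {"id": "e3", "type": "aircraft"}]
for hypothesis in assess(events, TTM):
    print([(state_id, event["id"]) for state_id, event in hypothesis])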



The building blocks in situation assessment are hypotheses. Each assessment has an associated set of hypotheses which describe aspects of the mapping between the events and the TTM. Specifically, the hypotheses contain a list of traversed states, matching events and a confidence. For a given assessment, hypotheses describing the assessment are created in a hierarchical structure. One assessment corresponds to one hypothesis hierarchy. The 'root' hypotheses are generated from initial states. Each event that satisfies an initial state generates a root hypothesis. Child hypotheses are created when the comparison process encounters a TTM juncture. If more than one branch is taken at a state, new hypotheses are generated for each branch and parented to the originating hypothesis.

After all of the TTMs have been processed, the expert system aggregates the hypothesis hierarchies to create assessment specifications. This aggregation process collapses the hypothesis hierarchy into a single structure and determines the final confidence of the assessment. To illustrate the aggregation mechanism, consider the sample TTM in Fig. 5(a). The initial state is State 1. The transitions from state 2 are 'AND' transitions and the transitions from state 5 are 'OR' transitions. If all states are traversed and only a single event matches each state, a possible hypothesis hierarchy would be similar to Fig. 5(b). The aggregation mechanism takes all of the 'leaf node' hypotheses (Hypotheses 2, 4, and 5 in Fig. 5(b)) associated with the root hypothesis and goes backward up the hypothesis tree. This backward trace combines the state and event lists in the hypotheses. States in the state list are correlated by position to the matching events in the event list. Fig. 5(c) describes the final event and state lists after the aggregation.

Associated with each hypothesis is a confidence. This confidence is computed from the confidence specified in the transitions traversed by the input events. During aggregation, the confidence in the assessment conclusion is determined from the branch types using the standard confidence combination theory used in MYCIN [21]. In particular, the MYCIN combination rules are as follows: C1 or C2 = max(C1, C2); C1 and C2 = min(C1, C2).
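The combination rules can be pictured with the sketch below, which walks a hypothesis hierarchy and applies min at AND junctures and max at OR junctures on top of the confidence accumulated along the sequential portion; the dictionary representation of the hierarchy is an assumption made for the example.

def combine(node):
    # node: {"confidence": accumulated sequential confidence up to this node,
    #        "branch": 'AND' or 'OR' for its children, "children": [...]}
    if not node.get("children"):
        return node["confidence"]
    child_values = [combine(child) for child in node["children"]]
    combined = min(child_values) if node["branch"] == "AND" else max(child_values)
    return node["confidence"] + combined

# A root hypothesis worth 40% with an OR branch into two continuations.
root = {"confidence": 40, "branch": "OR",
        "children": [{"confidence": 35, "children": []},
                     {"confidence": 20, "children": []}]}
print(combine(root))   # 75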

4.2. Problems in mapping

The process described above is linear in nature. One of the problems associated with the traversal process is its inflexibility in detecting deviations from the pattern described in the TTM. Deviations from the patterns due to inadequate intelligence collection or changes in the way the organization conducts the activity can result in the system ignoring evidence of the situation. Currently, K-PASA supports two mechanisms that compensate for these problems: partial state activation and nonlinear model traversal. In the future, these methods are planned to be augmented by machine learning techniques. Partial state activation allows user-acceptable deviations within the reported events. For example, perhaps the norm

in an exercise situation is a higher headquarters contacting its subordinate units within ten minutes of exercise initiation. However, in a particular instance, the headquarters were detected communicating with unknown entities 5 minutes after a suspected exercise initiation command. The analyst would give some credence to the assessment that the communications activity was related to the exercise, however, with less confidence than if the events reflected the norm. The degree of tolerance in partial state activation is defined in the states. Each state attribute specification has an associated activation threshold. The possible thresholds are COMPLETE (the event must meet the specification), UNKNOWN (unknown values are acceptable but at a lower confidence), and MISMATCH (wrong values are acceptable but at an even lower confidence). The level of state activation is derived from the 'completeness' of the fit, measured by a weighted average of the degree to which each attribute specification has been satisfied. This weighted average is factored into the overall assessment confidence.
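A small sketch of how partial state activation could be scored is given below. The mechanism (per-attribute thresholds and a weighted average of the fit) follows the description above, but the numeric scores, weights and field layout are assumptions; the paper does not give the actual values.

# Assumed fit scores for an attribute; the thresholds come from the text, the numbers do not.
FIT_SCORE = {"match": 1.0, "unknown": 0.5, "mismatch": 0.2}

def attribute_fit(threshold, spec_value, event_value):
    if event_value == spec_value:
        return FIT_SCORE["match"]
    if event_value is None:   # the attribute was not reported
        return FIT_SCORE["unknown"] if threshold in ("UNKNOWN", "MISMATCH") else 0.0
    return FIT_SCORE["mismatch"] if threshold == "MISMATCH" else 0.0

def state_activation(spec, event):
    # spec maps attribute -> (expected value, threshold, weight); returns the
    # weighted-average fit, or 0.0 when a constraint is violated beyond its tolerance.
    total_weight = sum(weight for _, _, weight in spec.values())
    score = 0.0
    for attr, (value, threshold, weight) in spec.items():
        fit = attribute_fit(threshold, value, event.get(attr))
        if fit == 0.0:
            return 0.0
        score += weight * fit
    return score / total_weight

# Exercise example: the destination was not reported, so the state is only
# partially activated and the assessment confidence is scaled down accordingly.
spec = {"Source": ("HQ", "COMPLETE", 2.0),
        "Destination": (["Unit 1", "Unit 2"], "UNKNOWN", 1.0)}
event = {"Source": "HQ", "Destination": None}
print(round(state_activation(spec, event), 2))   # 0.83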

Fig. 5. Developing and aggregating hypotheses: (a) a sample TTM; (b) a possible hypothesis hierarchy, with Hypothesis 1 (states 1-2, events 4-5) as the root and Hypotheses 2 (states 3-4, events 8 and 10), 3 (state 5, event 9), 4 (state 6, event 12) and 5 (state 7, event 14) as descendants; (c) the aggregated mapping, with state list state1-state7 and event list event4, event5, event8, event10, event9, event12, event14.



Nonlinear traversal provides additional flexibility in processing the overall TTM structure. Instead of strictly adhering to the event sequences specified in the TTM, K-PASA will also search for skipped activity and relax the temporal constraints. The nearest fulfilled ancestor provides the temporal context for analyzing events associated with children of skipped states. In Fig. 6, assume events satisfy states 1 and 2. Although events were not reported that satisfy state 3, the nonlinear processing will search for events that satisfy state 4 within the timeframe of 3-6 hours from the events that satisfy state 2. Relaxing temporal constraints is performed by expanding the expected timeframe defined by the transitions by user-defined temporal variances. These variances are relative to the timeframe in which the events should have occurred. Using Fig. 6 as an example, if the first variance level is 10% then the system would look for events satisfying state 4 within 3.3-6.6 hours after the events that satisfy state 2. As the temporal variances increase, the confidence in the assessment decreases.

Another typical problem concerns the reporting of several events, each of which is an instantiation of the same generic event with different actors (e.g., different values in the Source and Destination fields). Each such individual event may be recorded separately in the system. For example, we may have three events, each of which is an instantiation of a communications event. The Source field in each contains HQ, whereas the Destination field is Unit 1 in the first, Unit 2 in the second, and Unit 3 in the third. The three events take place either at the same time, or at slightly different times. Now, suppose we need to match with a TTM model where a node is given as:

Communications state
    Source: Equal(HQ)
    Destination: Subset(Unit 1, Unit 2, Unit 3)

K-PASA, during the event mapping process, will aggregate the three individual events in order to satisfy the state.
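The sketch below illustrates this aggregation: several instantiations of the same generic communications event are merged into a single meta-event whose Destination field is the union of the individual destinations, which can then satisfy the Subset specification above. The field names and the handling of start and end times are assumptions made for the example.

def aggregate_meta_event(events):
    # Merge several instantiations of the same generic event into one meta-event.
    # The events are assumed to share type and Source; Destination values are
    # unioned, and the meta-event spans the earliest start to the latest end.
    return {
        "type": events[0]["type"],
        "Source": events[0]["Source"],
        "Destination": sorted({d for e in events for d in e["Destination"]}),
        "start": min(e["start"] for e in events),
        "end": max(e["end"] for e in events),
        "members": [e["id"] for e in events],   # the individual reports
    }

reports = [
    {"id": "e1", "type": "Communications", "Source": "HQ", "Destination": ["Unit 1"], "start": 1200, "end": 1203},
    {"id": "e2", "type": "Communications", "Source": "HQ", "Destination": ["Unit 2"], "start": 1201, "end": 1202},
    {"id": "e3", "type": "Communications", "Source": "HQ", "Destination": ["Unit 3"], "start": 1202, "end": 1205},
]
meta = aggregate_meta_event(reports)
print(meta["Destination"])   # ['Unit 1', 'Unit 2', 'Unit 3']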

A related problem is the reverse situation, in which an event may be encapsulated into a larger event. For example, one communications event may encapsulate a single source talking to multiple destinations. The system will also parse large events to find embedded events.

K-PASA is integrated with the Dictionary in order to utilize synonym relationships when comparing events to state specifications. Synonym relationships in the Dictionary define terminology equivalency. Examples include acronyms or alternative spellings. Without this integration, the user would have to enter all phrases and their associated synonyms in the state specification, even though they semantically represent the same activity. The Dictionary contains some frequently used entries that are used across domains. It is then adapted to the specific domain by the knowledge engineer in consultation with the domain expert.

5. Assessment explanation

Due to the critical nature of their work, the analysts need to understand and accept the reasoning behind our system's assessments. Buchanan et al. ([21], Ch. 17) state several overlapping reasons for the need for an explanation subsystem in an expert system: understanding the knowledge content of the system and its line of reasoning, debugging of the same, presenting an educational experience for the users, and improving acceptance of the system by potential users and managers. Kidd and Cooper [27] discuss the characteristics that ideal explanations should possess by examining recorded dialogs between experts and their clients in various domains. Moore et al. [28] give a detailed survey of the techniques expert systems have used for explanation in rule-based as well as non-rule-based systems. In early expert systems such as MYCIN, which were mostly rule-based, explanation consisted of providing a trace of the rules that fired leading to a particular conclusion

Fig. 6. Nonlinear processing example.



[29-31]. TEIRESIAS, a progeny of MYCIN, allowed 'why' (several times in succession) and 'how' questions to be asked by the user [32]. By the mid-1980s, systems such as STEAMER (used to train naval personnel to operate steam power plants onboard ships [33]), Drilling Advisor [34], and Langlotz's work with ONCOCIN's explanation facility [35] provided mixed natural language and graphical interfaces. Fig. 4 shows that the assessment explanation by K-PASA is a mixture of graphics and natural language text. In the top pane, graphics illustrate the TTM structure and the matched states associated with the assessment. States traversed by the events are filled for easy identification. Matched events are displayed in the middle pane using a timeline display. The graphics allow for quick, superficial explanation in situations where the analyst is pressed for time. The natural language text, displayed in the bottom pane, provides details when the user has the time and inclination for a more in-depth explanation. The text communicates the reasoning behind the assessment, but does not overwhelm the user with irrelevant details.

The natural language text that we produce to meet the communicative goal of explanation is elaborate in terms of its contents, but not very complex in terms of word choice, choice of phrasal and sentential structure, and grammar. Sophisticated generation of multi-sentence text involves three steps: (1) identifying the goal(s) to communicate, (2) planning the structure of the text to achieve the goal(s), and (3) realizing the planned text in terms of sentences, phrases, and words [36]. In our case, the goal to communicate is explaining how the system came to a certain analytical conclusion. The text planning process breaks the text into simple paragraphs as discussed below. The text realization step is modeled after the work of Chester [37], who translated formal deductive logical proofs into natural English text. McDonald's MUMBLE system [38] also produced

similar, but improved, text from many conceptual sources, including formal proofs similar to Chester's. Allen also discusses a simple head-driven algorithm for text realization ([39], pp. 286-290) based on the work by Shieber et al. [40]. The text generation system described in [41] also addresses many of the issues relevant to the text generation subsystem of the current project.

5.1. Text generation

The overall structure of the explanation text is divided into two sections. The first section consists of an introductory paragraph which provides a high-level summary of the matching between the TTM and the input events. In particular, it informs the user of the type of phenomenon that was detected and the confidence the system has in the conclusion. In addition, the main portions of the TTM that were satisfied are summarized using the relationship specifications in the states. These relationships describe the meaning of individual states or groups of states to the activity being described in the TTM. The second section contains the main body of the explanation, which describes in detail the relevant events that matched the TTM. The main body is composed of one paragraph for each relationship listed in the introductory paragraph. In each paragraph, sentences are generated that describe the event(s) that matched the states within the relationship. During the planning process, a graph is constructed using the relationships in the TTM. Nodes in the graph consist of sentence specifications and correspond to one sentence in the text. The nodes in this graph are ordered in time and therefore in presentation order. The specifications include information concerning the events to be mentioned in the sentence, how the events are related, and the satisfied states.

Fig. 7. Sentence specification graph.


In structuring the text, two types of nodes are created: main body topic nodes and main body sentence nodes. The structure of the graph is shown in Fig. 7. The main body topic nodes organize the main body paragraphs according to state relationships. These nodes contain information about the main body sentence nodes in the paragraph as well as information required for generating the paragraph's topic sentence. The main body sentence node specifications include information concerning the events to be mentioned in the sentence and how the events are related. Multiple events which in conjunction satisfy a single state are combined into one main body sentence node.

During the generation stage, the introductory paragraph is developed using a set of preconstructed sentences with slots filled in by the appropriate objects. This is appropriate as the introduction always explains the same features of the hypothesis and these features are always available. The construction of the main body paragraphs is more complicated. Basically, the text generation process traverses the graph generated by the planning component to generate the main body paragraphs. A realization rule [36,39] is defined for each type of event and for the paragraph introductory sentence. To focus on important event attributes rather than reciting all attributes, the event realization rules describe the event in terms of its relevant and key attributes. A sample aircraft event realization rule is described below.


Aircraft event realization rule:
    [relative_time_adj] [(number)] [(unit)] (type) type_phrase (activity) [location_prep] (location) [heading_verb (heading)]

Objects in [ ] are associated with relevant attributes or optional words. Objects in ( ) are slots filled in by the event(s). The objects in italics are words and phrases retrieved from the lexicon. An attribute's relevancy is determined by the state matched by the event. If the state places a constraint on a particular event attribute, then that attribute is mentioned in the sentence. Key attributes are always mentioned. Attributes of multiple events within a single node and attributes with more than one value are combined with an 'and' inserted at the appropriate spot. Explicit event times are not mentioned. Instead, adjectives describing the relative timing between events (e.g., first, last, next) are placed at the beginning of the main body sentences when appropriate. The explanation text seen in the bottom pane of Fig. 4 is given below.

The events seem to indicate SAC operational readiness inspection activity with a confidence of 1.00. The phenomenon is described in model number 335 displayed above. In particular, this is indicated by general strategic alert, battle stuff activation, confirmation of alert status, exercise initiation, live play and exercise termination by SAC.

General strategic alert is indicated by the following events. First, a HQSAC codeword was detected with subscribers HQ8AF and HQ15AF. Next, a HQ15AF codeword was detected with subscribers 57AD, 92BW, 22ARW, 93BW and 384BW. Last, a HQ8AF codeword was detected with subscribers 12AD, 65ARW, 7BW and 509BW.

Battle stuff activation is indicated by the following events. Communications were transmitted from 96BW, 22ARW, 384BW, 92BW, 93BW and 57AD to HQ15AF.

Exercise initiation is indicated by the following event. A HQ15AF codeword was detected with subscribers 57AD, 92BW, 22ARW, 93BW, 96BW and 384BW.

Live play is indicated by the following events. First, B-52, KC-135 and B-1 were detected departing at MINOT_AFB, DYESS_AFB, FAIRCHILD_AFB and MCCONNELL_AFB. Next, B-1 was detected enroute at LAJUNTA_CO.

Exercise termination is indicated by the following events. First, a HQ15AF codeword was detected with subscribers 57AD, 92BW, 96BW, 93BW, 22ARW and 384BW. Next, a HQ15AF codeword was detected with subscriber HQSAC.
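As an illustration of how such a rule might be applied, the sketch below fills the aircraft-rule slots from an event description. The lexicon entries, attribute names and the decision always to mention type, activity and location are assumptions; the real system mentions only the attributes constrained by the matched state plus the key attributes.

# Assumed lexicon fragments; the real system retrieves phrases from its lexicon.
TYPE_PHRASE = {"aircraft": "were detected"}
LOCATION_PREP = {"departing": "at", "enroute": "at"}

def realize_aircraft_event(event, relative_time_adj=None):
    # Fill the slots of the aircraft realization rule:
    # [relative_time_adj] [(number)] [(unit)] (type) type_phrase (activity) [location_prep] (location)
    parts = []
    if relative_time_adj:
        parts.append(relative_time_adj + ",")
    if event.get("number"):
        parts.append(str(event["number"]))
    if event.get("unit"):
        parts.append(event["unit"])
    parts.append(" and ".join(event["type"]))
    parts.append(TYPE_PHRASE["aircraft"])
    parts.append(event["activity"])
    parts.append(LOCATION_PREP[event["activity"]])
    parts.append(" and ".join(event["location"]))
    sentence = " ".join(parts)
    return sentence[0].upper() + sentence[1:] + "."

event = {"type": ["B-52", "KC-135"], "activity": "departing",
         "location": ["MINOT_AFB", "FAIRCHILD_AFB"]}
print(realize_aircraft_event(event, relative_time_adj="first"))
# First, B-52 and KC-135 were detected departing at MINOT_AFB and FAIRCHILD_AFB.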

6. Predictive analysis

For primarily numerical data, there are many approaches to trend detection or predictive analysis, such as time-series analysis [42], semi-qualitative simulation [43], or Bayesian networks [44]. The problem of plan recognition (e.g., [45]), where one is interested in recognizing that a sequence of actions constitutes a complex plan, is also similar to predictive analysis. However, temporal issues are not predominant in plan recognition. Dousson et al. [46] describe a point-based temporal reasoning system that performs recognition of instances of predefined situations from among a set of time-stamped events by using a propositional reified logic formalism [47] and by propagating temporal constraints with a 'path consistency' algorithm [48]. Our approach to predictive analysis is informal and simple, but sufficient for our purposes.

At a high level, situation assessment is a two-phase process: (1) monitoring observable events for indications of particular activities, then (2) based upon those assessments, predicting future activity. The K-PASA predictive analysis component is invoked by the user selecting an assessment generated by the situation assessment component and providing a prediction timeframe. K-PASA predictive analysis processing is relatively straightforward. Basically, the



system predicts next events by looking at the states yet to be fulfilled. Paths stemming from the last states matched in the assessment's hypothesis hierarchy are analyzed by the system using the event(s) matching those last states as time references. For example, in Fig. 8, the shaded circles denote states traversed in a particular hypothesis hierarchy. The start time of the event(s) that matched state 2 is used as a reference time for predicting events associated with subsequent states in branch 1 (e.g., states 3, 4, 5). The start time of event(s) that matched state 7 is used as a reference time for predicting events associated with subsequent states in branch 2 (e.g., states 8 and 9). In Fig. 8, non-shaded states are predicted states. The constraints in the state specification provide additional information on predicted event attributes. For example, assume that state 3 specifies a communication event with the source equal to SAC Headquarters and the destination equal to either the headquarters of the 15th or 8th Air Force. These constraints on the source and destination attributes are used to further define the predicted communications event.

Fig. 4 displays a sample presentation of the event prediction using the SAC ORI TTM. The prediction timeframe for this example is from 12:05 to 2:00 on the 11th of April. The TTM associated with the assessment is displayed with the traversed states filled and the predicted events highlighted in yellow. A textual description of the predicted events is displayed at the bottom of the screen. The events are ordered by increasing predicted start time. The prediction text in Fig. 4 is: The events seem to indicate SAC operational readiness inspection activity with a confidence of


0.45. The phenomenon is described in model number 335 displayed above. The following events can be predicted from this hypothesis:

• Active HQ8AF airborne command post between 1136 and 1207 on April 11.
• Communications from 12AD, 65ARW, 7BW or 509BW to HQ8AF between 1137 and 1230 on April 11.
• HQ8AF codeword to 12AD, 65ARW, 7BW or 509BW between 1537 and 2031 on April 11.
• B-1, B-52 or KC-135 aircraft departing Barksdale_AFB, Carswell_AFB, Loring_AFB, Plattsburg_AFB or Eaker_AFB between 1537 and 2036 on April 11.
• HQ15AF codeword to 57AD, 92BW, 22ARW, 93BW or 384BW between 1556 and 1955 on April 11.
• B-1, B-52 or KC-135 aircraft departing Fairchild_AFB, Castle_AFB, McConnel_AFB, Anderson_AFB or Minot_AFB between 1556 and 2000 on April 11.
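A minimal sketch of the windowing that underlies such predictions: the start time of the event(s) matching the last fulfilled state is the reference, and each outstanding transition's minimum and maximum relative times give the predicted window. The datetime handling, the minute-based units and the arbitrary year are assumptions made for the example.

from datetime import datetime, timedelta

def predict_windows(reference_start, outstanding_transitions):
    # reference_start: start time of the event(s) matching the last fulfilled state.
    # outstanding_transitions: (predicted state description, min_minutes, max_minutes)
    # tuples taken from the transitions leading out of that state.
    predictions = []
    for description, min_minutes, max_minutes in outstanding_transitions:
        earliest = reference_start + timedelta(minutes=min_minutes)
        latest = reference_start + timedelta(minutes=max_minutes)
        predictions.append((description, earliest, latest))
    return sorted(predictions, key=lambda p: p[1])   # order by predicted start time

reference = datetime(1996, 4, 11, 11, 26)            # start of the last matched event
outstanding = [("Active HQ8AF airborne command post", 10, 41),
               ("Communications from subordinate units to HQ8AF", 11, 64)]
for description, earliest, latest in predict_windows(reference, outstanding):
    print(f"{description} between {earliest:%H%M} and {latest:%H%M} on April {earliest.day}")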

7. Knowledge acquisition

Our system needs to extract knowledge from expert analysts and transfer it to the TTM knowledge base. Hill et al. [49] state that the objectives of an ideal knowledge acquisition system include: the possibility of direct interaction with the expert without the intervention of a knowledge engineer, the presence of tutorial capabilities to eliminate the need for prior training of the expert, the ability to analyze work in progress to detect inconsistencies and gaps in knowledge, and an interface that makes the use of the system enjoyable and attractive to the expert.

Fig. 8. Prediction example.


Our goal in designing the knowledge acquisition system was to achieve these objectives as much as possible.

Acquisition techniques are classified as manual, where usually the knowledge engineer interviews or tracks the expert(s); semi-automatic, which may be expert-driven or knowledge engineer-driven; or automatic, where the system is endowed with learning abilities [50-52]. We ruled out manual techniques because we wanted to remove the knowledge engineer's mediation between the expert and the knowledge base, reducing potential representation mismatches, and because experts were available to us. We decided against automatic techniques because they are still primarily research-oriented and not usually practical due to the need for a large number of training examples, and possibly non-examples. Semi-automatic methods are usually divided into two categories: those that allow the experts to build the knowledge base with little or no outside help, and those that are intended to help the knowledge engineer with minimal participation of an expert. Since, in our case, we had ample access to experts, we decided to allow the experts to incorporate chunks of knowledge themselves. Wolfgram [53], and Kim and Courtney [54] discuss the assumptions that need to hold for a computer-aided, expert-driven method to work. These primarily relate to the ability of the expert to identify the variables and relationships that exist in the domain, the amount of familiarity the expert has with computer tools and techniques, and his or her ability to refine knowledge using such tools and techniques.

Researchers have noted a basic trade-off in a knowledge representation language between acquirability and expressive power [55]. We found that although the TTM language lacks expressive power in certain respects, it is excellent from an acquirability point of view due to its simplicity and naturalness. It is well known that special-purpose editors and interfaces facilitate the task of entering knowledge into the system and decrease the chance of errors [50-52]. Just like systems such as APPRENTICE [56], GKE [57], and SEEGRAPH [58], we provide a knowledge browser and editor as the primary elicitation tool. In addition, an explanation facility like the one we have assists the knowledge engineer and/or the expert in refining and improving the knowledge base.

Because the knowledge in the situation assessment task addressed by this system is in continuous flux, a knowledge acquisition tool called the Model Developer was developed. The Model Developer allows domain experts to enter and maintain the knowledge without the aid of a knowledge engineer. Thus, in our case, there is no separate mediating representation. It allows the user to add, modify, and delete TTMs using the TTM graphical specification language. The specifications of both states and transitions are form-based for easy data entry. The Model Developer maximizes the use of mouse point-and-click object manipulation and X Windows capabilities. Pull-down menus appear at the top of the Model Developer


window. The Model menu contains general TTM options such as creating a new TTM, opening an existing TTM, deleting a TTM, saving a TTM and copying a TTM. There are many other useful menu options. Because of the requirements to archive and document the knowledge contained in the TTMs, a powerful annotation subsystem was developed that provides the ability to annotate the TTMs with comments and other graphics. Annotations can be used to provide additional explanation or point out important information. This capability is very useful when creating briefing slides or using the TTMs for training.

To support operational TTM development, the Model Developer supports two types of TTMs: working and validated. Working TTMs are considered models 'in progress'. These TTMs are not validated and are not saved in the database for processing by K-PASA. Validated TTMs, on the other hand, are syntactically correct and are stored in the database. Examples of validation include precluding the user from defining only one AND transition in a branch. Typically, TTMs remain in the working mode until the analyst feels comfortable using the TTM in the situation assessment processing.

8. Application area portability

A primary consideration in the development of K-PASA and the Model Developer was to achieve relative domain independence. Application area portability allows the tools to be customized to an analysis topic (e.g., command and control, and counter terrorism) with minimal effort. Application area portability is supported by an architecture which divides the processing into two levels: an application-independent kernel level and an application-specific application layer. In K-PASA, the kernel performs the comparison between the input events and the TTMs. The processing is general and does not presume anything about the contents of the TTMs or the events. K-PASA application layers define the particular application area in which the expert system tool operates by specifying the types of events. The application layer development includes specific event-type-related processing such as database queries to the appropriate TTM and event database tables, comparison of event attributes to state specifications, and explanation text generation. Typically, the application layer is less than 15% of the total code.

The application layer development for the Model Developer also requires definition of the appropriate event types. This requires developing queries to the TTM database tables and creating the event-specific user interface for the state specification forms. To date, for most event types, the user interface development has required only one DECWindows/MOTIF User Interface Language (UIL) file and one Ada package per event type. The Model Developer uses the same TTM tables accessed by K-PASA. However, since the Model Developer adds, modifies and deletes TTM database records, the



However, since the Model Developer adds, modifies, and deletes TTM database records, the TTM database queries required for the Model Developer are a superset of those required for K-PASA. This methodology has been used not only to add new events to an existing application area, but also to customize the tools to a completely new application area. For an average event type (e.g., around six attributes), the development time is approximately 5 hours.
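The kernel/application-layer split described above can be illustrated with a small Ada sketch (Ada being the language used for the application-layer packages). The organization, names, and the trivial matcher below are hypothetical and greatly simplified; the intent is only to show how a kernel can match events against TTM state specifications while remaining ignorant of event-type-specific attributes, which the application layer supplies.

--  A minimal sketch, assuming events carry an opaque type tag and the
--  application layer registers one matcher per event type.
with Ada.Text_IO; use Ada.Text_IO;

procedure Kernel_Layer_Sketch is

   --  Kernel view of an event: the kernel never inspects
   --  application-specific attributes.
   type Event is record
      Event_Type : Integer;
      Payload_Id : Integer;
   end record;

   type Event_List is array (Positive range <>) of Event;

   --  The application layer supplies a matcher that encapsulates the
   --  database query and attribute comparison against a state specification.
   type Matcher is access function (E : Event; State_Id : Integer)
      return Boolean;

   --  Application-independent kernel: walk the events and report which
   --  ones satisfy the given state specification.
   procedure Assess (Events   : Event_List;
                     State_Id : Integer;
                     Match    : Matcher) is
   begin
      for I in Events'Range loop
         if Match (Events (I), State_Id) then
            Put_Line ("Event" & Integer'Image (Events (I).Payload_Id)
                      & " satisfies state" & Integer'Image (State_Id));
         end if;
      end loop;
   end Assess;

   --  Application layer: a trivial, hypothetical matcher for one event type.
   function Comms_Matcher (E : Event; State_Id : Integer) return Boolean is
   begin
      return E.Event_Type = State_Id;  --  stand-in for real attribute checks
   end Comms_Matcher;

   Sample : constant Event_List :=
     ((Event_Type => 1, Payload_Id => 101),
      (Event_Type => 2, Payload_Id => 102));

begin
   Assess (Sample, State_Id => 1, Match => Comms_Matcher'Access);
end Kernel_Layer_Sketch;

In such a scheme the kernel procedure (Assess above) would be reused unchanged across application areas, while each new application area would contribute only event-type-specific matchers such as Comms_Matcher, which is consistent with the small fraction of application-specific code reported above.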

9. Conclusions

This paper presents an approach for applying knowledge-based systems technology to the problem of situation assessment and event prediction. Because of the intensive knowledge maintenance required, traditional expert system knowledge representation and acquisition methods are not viable solutions. As a result, a simple knowledge representation, the TTM, was developed that allows analysts who are domain experts, but who have limited computer skills, to define and manipulate the knowledge. Associated with TTMs is a graphical specification language that corresponds to the way in which analysts manually perform situation assessment and event prediction.

The system, customized for command and control, was first installed for evaluation. Since that time, the system's tools have been significantly enhanced. The system is now operational, is used on a daily basis, and is considered one of the critical intelligence systems in the space defense organization where it was initially installed. It has also been customized to work with navy, security, and other defense organizations. Because the system operates in multiple intelligence domains, it is constructed in a manner that allows reusability of software across all domains; approximately 85% of the software consists of reusable components.

User response to the knowledge representation and system capabilities has been very positive. The close mapping between the manual timeline analysis methods and the TTM structure, together with the graphical TTM specification language supported by the Model Developer, provides a truly user-maintainable system. In addition to using K-PASA to aid in analysis tasks, the analysts also use the system and TTMs for knowledge archival and training. Because TTMs formalize techniques known only to the senior analysts, use of the TTMs and K-PASA allows an easier transfer of knowledge to less experienced analysts. Through user evaluation, many desired enhancements to K-PASA and the Model Developer have been identified and implemented. One example is the development of a real-time capability: in real-time mode, K-PASA runs continuously in the background, analyzing events as they are entered into the task database, and the user is notified by messages as significant activities are detected. Future enhancements will include exploring the use of more sophisticated temporal models (such as [6,7,59,60]) and adding NOT and other constraints (i.e., looking for the absence of an event), among others.
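As an illustration of what such a NOT constraint could mean in practice, the following Ada sketch tests for the absence of a given event type within a time window. All names, the integer time base, and the window semantics are assumptions made for this example; this capability is a planned enhancement and is not part of the current K-PASA implementation.

--  A minimal sketch of a hypothetical NOT constraint over a time window.
with Ada.Text_IO; use Ada.Text_IO;

procedure Not_Constraint_Sketch is

   type Event is record
      Kind       : Integer;
      Time_Stamp : Integer;   --  e.g., minutes since some epoch
   end record;

   type Event_List is array (Positive range <>) of Event;

   --  True when no event of the given kind falls inside the window,
   --  which is one way a NOT constraint could be evaluated.
   function Absent (Events       : Event_List;
                    Event_Kind   : Integer;
                    Window_Start : Integer;
                    Window_End   : Integer) return Boolean is
   begin
      for I in Events'Range loop
         if Events (I).Kind = Event_Kind
           and then Events (I).Time_Stamp >= Window_Start
           and then Events (I).Time_Stamp <= Window_End
         then
            return False;
         end if;
      end loop;
      return True;
   end Absent;

   History : constant Event_List :=
     ((Kind => 7, Time_Stamp => 10),
      (Kind => 3, Time_Stamp => 25));

begin
   if Absent (History, Event_Kind => 7, Window_Start => 20, Window_End => 40) then
      Put_Line ("NOT constraint satisfied: no event of the given type in the window.");
   else
      Put_Line ("NOT constraint violated: an event of the given type was observed.");
   end if;
end Not_Constraint_Sketch;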

The TAS tools standardize a general intelligence analysis methodology used in situation assessment and predictive analysis. Tasks that were once very labor-intensive can now be accomplished more rapidly, more accurately, and at lower cost. Knowledge base maintenance performed by computer-naive users instead of knowledge engineers significantly shortens the knowledge acquisition process. This not only reduces maintenance costs but also keeps the system's capabilities current in an environment of rapidly changing requirements. In addition to analysis tasks, TAS is used for knowledge archival and training. Because the TTMs are developed by locally recognized experts and represent their analytical processes, this knowledge is more easily transferred to less experienced analysts. The domain knowledge therefore remains with the organization, unaffected by personnel arrivals and departures.

Acknowledgements

The work reported here has been supported by Air Force Rome Laboratory and GTE Government Systems.

References

[1] L. Jesse, Temporal transition models: A new approach to situation assessment and predictive analysis, M.S. Thesis, University of Colorado at Colorado Springs, 1993.
[2] F.D. Anger, E.M. Clarke, New and used temporal models: An issue of time, Journal of Applied Intelligence 3 (1993) 5-15.
[3] A. Pnueli, The temporal semantics of concurrent programs, in: Proceedings of the 18th IEEE Symposium on Foundations of Computer Science, Providence, RI, November 1977, pp. 46-57.
[4] K. Chandy, J. Misra, Parallel Program Design, Addison-Wesley, Reading, MA, 1988.
[5] E. Clarke, E. Emerson, A. Sistla, Automatic verification of finite-state concurrent systems using temporal logic specification, ACM Transactions on Programming Languages 8 (2) (1986) 244-263.
[6] J. Allen, Towards a general theory of action and time, Artificial Intelligence 23 (1984) 123-154.
[7] J. Allen, Maintaining knowledge about temporal intervals, Communications of the ACM 26 (11) (1983) 832-843.
[8] A.C.-C. Meng, M. Sullivan, Logos: A constraint-directed reasoning shell for operations management, IEEE Expert 6 (1) (1991).
[9] W.A. Perkins, A. Austin, Adding temporal reasoning in expert-system building environments, IEEE Expert 5 (1) (1990).
[10] S. Keretho, R. Loganantharaj, Reasoning about networks of temporal relations and its applications to problem solving, Journal of Applied Intelligence 3 (1993) 47-70.
[11] M. Vilain, H. Kautz, Constraint propagation algorithms for temporal reasoning, in: Proceedings of the Sixth National Conference of the American Association for Artificial Intelligence, Philadelphia, PA, 1986, pp. 377-382.
[12] A. Trudel, Representing Allen's properties, events and processes, Journal of Applied Intelligence 6 (1996) 59-65.
[13] D. Mitra, R. Loganantharaj, Experimenting with a temporal constraint propagation algorithm, Journal of Applied Intelligence 6 (1996) 39-48.
[14] F. Song, R. Cohen, A strengthened algorithm for temporal reasoning about plans, Computational Intelligence 12 (2) (1996) 331-356.
[15] P. Gray, D. England, S. McGowan, XUAN: Enhancing UAN to capture temporal relationships among actions, in: People and Computers IX, Proceedings of Human-Computer Interface '94, Glasgow, UK, 1994, pp. 301-312.
[16] B. Shneiderman, Multi-party grammars and related features in designing interactive systems, IEEE Transactions on Systems, Man, and Cybernetics 12 (2) (1982) 148-154.
[17] A.E. Blandford, P.J. Bernard, M.D. Harrison, Using interaction framework to guide the design of interactive systems, International Journal of Human Computer Studies 43 (1995) 101-130.
[18] W. Woods, Transition network grammars for natural language analysis, Communications of the ACM 13 (1970) 591-606.
[19] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, Englewood Cliffs, NJ, 1995.
[20] M.L. Minsky, A framework for representing knowledge, in: P.H. Winston (Ed.), The Psychology of Computer Vision, McGraw-Hill, New York, 1975, pp. 211-277.
[21] B.G. Buchanan, E.H. Shortliffe, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, MA, 1984.
[22] J. Gordon, E.H. Shortliffe, The Dempster-Shafer theory of evidence, in: B.G. Buchanan, E.H. Shortliffe (Eds.), Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, MA, 1984, pp. 272-292.
[23] D. Sankoff, J.B. Kruskal (Eds.), Time Warps, String Edits and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, New York, 1983.
[24] D. Mefford, Formulating Foreign Policy on the Basis of Historical Analogies: An Application of Developments in Artificial Intelligence, International Studies Association, Atlanta, GA, 1984.
[25] D. Heise, Modeling event structures, Journal of Mathematical Sociology 13 (1988) 183-196.
[26] V. Hudson, Scripting international power dramas: A model of situational predisposition, in: V. Hudson (Ed.), Artificial Intelligence and International Politics, Westview Press, Boulder, CO, 1991, pp. 194-220.
[27] A.L. Kidd, M.B. Cooper, Man-machine interface issues in the construction and use of an expert system, International Journal of Man-Machine Studies 22 (1985).
[28] J.D. Moore et al., Explanation in expert systems - A survey, in: Proceedings of the First International Symposium on Expert Systems in Business, Finance, and Accounting, University of Southern California, Los Angeles, September 1988.
[29] A.C. Scott, W.J. Clancey, R. Davis, E.H. Shortliffe, Methods for generating explanations, in: Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, MA, 1984, pp. 338-362.
[30] S.W. Bennett, A.C. Scott, Specialized explanations for dosage selection, in: Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, MA, 1984, pp. 363-370.
[31] J.W. Wallis, E.H. Shortliffe, Customized explanations using causal knowledge, in: Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, MA, 1984, pp. 371-388.
[32] E.H. Shortliffe, R. Davis, S.G. Axline, B.G. Buchanan, C.C. Green, S.N. Cohen, Computer-based consultations in clinical therapeutics: Explanation and rule acquisition capabilities of the MYCIN system, Computers and Biomedical Research 8 (1975) 303-320.
[33] J.D. Hollan, E.L. Hutchins, L. Weitzman, STEAMER: An interactive inspectable simulation-based training system, AI Magazine 5 (2) (1984) 15-27.
[34] W.B. Rauch-Hindin, Artificial Intelligence in Business, Science, and Industry: Volume I - Fundamentals, Volume II - Applications, Prentice-Hall, Englewood Cliffs, NJ, 1986.
[35] C.P. Langlotz, E.H. Shortliffe, Adapting a consultation system to critique user plans, International Journal of Man-Machine Studies 19 (1983) 479-496.
[36] D.D. McDonald, Natural language generation, in: S.C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence, 2nd Edition, John Wiley and Sons, New York, 1992, pp. 983-997.
[37] D. Chester, The translation of formal proofs into English, Artificial Intelligence 7 (1976) 261-278.
[38] D.D. McDonald, Natural language generation as a computational problem: An introduction, in: M. Brady, R. Berwick (Eds.), Computational Models of Discourse, MIT Press, Cambridge, MA, 1983, pp. 209-266.
[39] J. Allen, Natural Language Understanding, 2nd Edition, Benjamin-Cummings Publishing Company, Menlo Park, CA, 1995.
[40] S.M. Shieber, G. van Noord, F.C.N. Pereira, R.C. Moore, Semantic head-driven generation, Computational Linguistics 16 (1) (1990) 30-42.
[41] J. Kalita, S. Shende, Automatically generating natural language reports in an office environment, in: Second Conference on Applied Natural Language Processing, Austin, TX, 1988, pp. 178-185.
[42] R.K. Avent, J.D. Charlton, A critical review of trend-detection methodologies for biomedical monitoring systems, Critical Reviews in Biomedical Engineering 17 (6) (1990) 621-659.
[43] D. Dvorak, B. Kuipers, Model-based monitoring of dynamic systems, in: International Joint Conference on Artificial Intelligence, 1989, pp. 1238-1243.
[44] C. Berzuini, R. Bellazzi, S. Quaglini, D.J. Spiegelhalter, Bayesian networks for patient monitoring, Artificial Intelligence in Medicine 32 (1992) 1-55.
[45] H. Kautz, J. Allen, Generalized plan recognition, in: Proceedings of the National Conference of the American Association for Artificial Intelligence, 1986, pp. 32-37.
[46] C. Dousson, P. Gaborit, M. Ghallab, Situation recognition: Representation and algorithms, in: International Joint Conference on Artificial Intelligence, 1993, pp. 166-172.
[47] Y. Shoham, Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence, MIT Press, 1987.
[48] A.K. Mackworth, E.C. Freuder, The complexity of some polynomial network consistency algorithms for constraint satisfaction problems, Artificial Intelligence 25 (1985) 65-74.
[49] R.B. Hill, D.C. Wolfgram, D.E. Broadbent, Expert systems and the man-machine interface, Expert Systems (1986).
[50] E. Turban, Expert Systems and Applied Artificial Intelligence, MacMillan Publishing, New York, 1992.
[51] J.H. Boose, Knowledge acquisition, in: S.C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence, 2nd Edition, John Wiley and Sons, New York, 1992, pp. 719-742.
[52] R.V. Kelly, Jr., Practical Knowledge Engineering, Digital Press, 1991.
[53] D.D. Wolfgram, Expert Systems, John Wiley and Sons, New York, 1987.
[54] J. Kim, J.F. Courtney, A survey of knowledge acquisition techniques and their relevance to managerial problems, Decision Support Systems (1988).
[55] T.R. Gruber, The acquisition of strategic knowledge, Ph.D. Dissertation, University of Massachusetts, Amherst, 1989.
[56] R.L. Joseph, Graphical knowledge acquisition, in: Proceedings of the Fourth Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, 1989, pp. 18.1-16.
[57] J. Konito, P. Lounamaa, A graphical framework for knowledge acquisition and representation, in: Proceedings of the Third European Workshop on Knowledge Acquisition for Knowledge-Based Systems, Express-Tirage, France, 1989, pp. 490-501.
[58] D. Kopec, L. Latour, Toward an expert/novice learning system with application to infectious diseases, SIGART Newsletter 108 (1989) 84-92.
[59] J. Allen, Temporal reasoning and planning, in: Reasoning About Plans, Morgan Kaufmann Publishers, San Mateo, CA, 1991, pp. 1-67.
[60] T. Dean, J. Firby, D. Miller, Hierarchical planning involving deadlines, travel times, and resources, Computational Intelligence 4 (4) (1988) 381-398.