Int. J. Man-Machine Studies (1988) 29, 197-213
A knowledge acquisition environment for scene analysis

DEBORAH TRANOWSKI

Advanced Decision Systems, 1500 Plymouth Street, Mountain View, CA 94043-1230, USA
(Based on a paper presented at the Second AAAI Workshop on Knowledge Acquisition for Knowledge-Based Systems, Banff, October 1987)

This paper describes a knowledge acquisition environment under development to help capture expertise from domain experts involved in analysing scenes from aerial imagery. The research is important because automated image understanding systems are increasingly relying on expert knowledge to help analyse objects and control the analysis process. It is desirable to enable the domain experts to enter and manipulate the domain knowledge directly. The research described is based on the concept of an integrated knowledge acquisition environment (KAE). The goal is to integrate the domain inputs, the translation into internal representations, and the actual execution and feedback. The KAE contains a collection of computer-based tools facilitating: viewing and editing domain knowledge in both textual and graphic format (analysts tend to be visually oriented), knowledge base execution and testing, and expert system performance analysis.
1. Introduction and research motivation

A perceptual and reasoning activity of increasing importance is the capability to interpret and analyse imagery to detect objects or activities of interest, to notice changes in a scene, or to interpret observed behaviours. This need is strongly identified in military intelligence-gathering applications, such as predicting apparent presence and behaviour, or verifying expected behaviour (say, for treaty verification). It is also important for a variety of other applications such as medical image diagnosis, mineral or crop analysis from aerial photographs, and industrial inspection. In recent years, advances in sensor technology for imaging (not only optical sensors but also radar, infrared, X-ray, laser, etc.) have led to a vast explosion in the quantity and quality of imagery available for the tasks mentioned above. As the imagery has become available, human "experts" have emerged who have remarkable perceptual abilities to locate and analyse objects in the image. However, they cannot keep up; the imagery explosion has outrun manpower and training availability. This has led to emphasis on developing automatic photo-interpretation systems. Research in knowledge-based image understanding and vision systems to develop automatic photo-interpretation capabilities has focused on capturing human expertise and "rules" about how to identify objects or how to undertake and control the image analysis process. Hence there is a strong motivation to provide knowledge acquisition and refinement of knowledge for the vision system.

© 1988 Academic Press Limited

The nature of the photo-interpretation problem and the requirements of the knowledge acquisition process that drive an integrated knowledge acquisition capability include:
• The knowledge is visually oriented. Typically, visual appearance is being described. It may be the
  • appearance of objects in certain conditions (in terms of their size, shape, texture, colour, etc.),
  • simple relationships between objects (such as near, adjacent, behind, on-top-of, etc.),
  • more complex relationships between objects that describe a spatial pattern.
• A tight coupling is required between the expert and the expert system (vision system) being developed. Because visual concepts are difficult to express, being able to suggest one and obtain rapid feedback on how the concept affected the system's performance is critical.
• Multiple types of expertise are often required and multiple experts are likely to be involved. For example, photo-interpreters, terrain experts, intelligence experts, and sensor experts may all participate in finding military units deployed in aerial radar imagery. The vision system consists of several cooperating expert systems, each with its own knowledge bases and relevant knowledge representation methodologies. Hence methods are required for allowing each expert to contribute to, interact with, and obtain feedback from a larger, more complex, but unified system.
• The domain experts are likely to be fairly unsophisticated in working with computers. Often, asking the experts to discuss or describe computer representations of knowledge causes them to alter their knowledge and provide biased information. Hence, it is especially critical that all knowledge be described and discussed in domain terms and that computer science implementation issues be transparent.
• Currently, domain expertise is elicited through conversations (interviews) with experts.
A knowledge engineer familiar with the system organization and representation schemes, as well as possessing a degree of familiarity with the domain, performs the role of intermediary between the computer system and the domain expert. As the various domain knowledge bases are enhanced, the manual process of obtaining information from human experts and representing that information in a computer-usable form is becoming increasingly cumbersome. It is desirable to reduce the dependence on the knowledge engineer and allow the domain experts to enter and manipulate the domain knowledge directly. This would also eliminate any biases that may be imposed by the knowledge engineer, allowing the experts to describe their own cognitive techniques.
2. Application overview

Driven by the motivations described above, a prototype knowledge acquisition environment (KAE) has been developed. In order to make rapid progress in the prototype development and to provide near-term assistance with real photo-interpretation system development activities, two specific tasks involving scene
analysis were selected for study. These tasks are:
• using terrain information and tactical analysis to determine likely locations for objects of interest (terrain analysis),
• identification of objects based on their spatial layout (spatial pattern identification).

Terrain analysis combines two types of expertise:
• tactics concerning likely object locations and
• knowledge of the underlying terrain in an area.

The target system (for which knowledge is being acquired) operates by defining constraints about locations and combining the constraints (in a weighted fashion) to obtain the most likely overall locations. For example, constraints for predicting the most likely locations for a regiment headquarters may include:
• behind hills (high ground),
• near (but not on) a road,
• in a lightly wooded region,
• fairly dry area.
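The weighted combination of such constraints can be sketched as follows. The constraint names, weights, and 0-1 suitability scores below are illustrative assumptions for this sketch, not values from the target system:

```python
# Hypothetical sketch of weighted constraint combination for terrain analysis.
# Constraint names, weights, and the 0.0-1.0 suitability scale are illustrative
# assumptions; the target system's actual representation is not given here.

def combine_constraints(scores, weights):
    """Weighted average of per-constraint suitability scores (0.0-1.0)."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Illustrative weights for a regiment headquarters.
weights = {"behind_hills": 3.0, "near_road": 2.0, "lightly_wooded": 1.5, "dry": 1.0}

# Per-location suitability scores (illustrative values).
location_a = {"behind_hills": 0.9, "near_road": 0.8, "lightly_wooded": 0.6, "dry": 0.7}
location_b = {"behind_hills": 0.2, "near_road": 0.9, "lightly_wooded": 0.4, "dry": 0.9}

# Rank candidate locations by combined suitability, most likely first.
ranked = sorted(
    {"A": location_a, "B": location_b}.items(),
    key=lambda kv: combine_constraints(kv[1], weights),
    reverse=True,
)
print([name for name, _ in ranked])
```

Under these assumed values, location A scores higher because the heavily weighted "behind hills" constraint dominates the combination.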
Spatial pattern identification involves analysing a set of image-derived objects to see if they are deployed in a pattern that is significant (reflects the presence of an activity or scene of interest). This analysis is possible because of tactical rules about how military units deploy and behave. Hence, the experts are military analysts. The target system attempts to locate an instance of a unit in a spatially distributed set of component units. The domain knowledge in the system specifies the constraints on the pattern of interest. There may be missing component units, additional false component units, or (based on doctrine) incorrectly deployed component units in the scene.
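The tolerant matching described here, where a pattern is recognized despite missing or extra component units, might be sketched as follows. The scoring rule and distance tolerance are assumptions for illustration, not the target system's algorithm:

```python
# Illustrative sketch (not the target system's algorithm): score how well a
# set of detected objects matches a doctrinal pattern of expected pairwise
# distances, tolerating missing and extra (false) components.
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pattern_score(expected_dists, observed, tol=0.5):
    """Fraction of expected pairwise distances matched by the best assignment
    of observed objects to pattern slots; extra observed objects are ignored,
    and missing components lower the score via the fixed denominator."""
    n = len(expected_dists)  # expected_dists is an n x n symmetric matrix
    best = 0.0
    for assignment in itertools.permutations(range(len(observed)), min(n, len(observed))):
        hits = 0
        for i, j in itertools.combinations(range(len(assignment)), 2):
            d = dist(observed[assignment[i]], observed[assignment[j]])
            if abs(d - expected_dists[i][j]) <= tol:
                hits += 1
        best = max(best, hits / (n * (n - 1) / 2))
    return best

# Equilateral triangle with side 5 (say, km); the last point is clutter.
expected = [[0, 5, 5], [5, 0, 5], [5, 5, 0]]
observed = [(0, 0), (5, 0), (2.5, 4.33), (20, 20)]
print(pattern_score(expected, observed))
```

A partial detection (only two of the three components visible) yields a score below 1.0 rather than an outright rejection, which mirrors the tolerant behaviour the text describes.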
3. Knowledge acquisition environment (KAE)

Given the problem scope described in the previous section, an initial step in the development of the prototype KAE was to define a flexible, yet robust, architecture. The KAE architecture should support a collection of computer-based tools facilitating:
• viewing and editing domain knowledge in both textual and graphic format (analysts tend to be visually oriented in their reasoning and interpretations),
• translation of raw domain information into an intermediate representation and finally into an executable format,
• knowledge base execution and testing,
• expert system performance analysis,
• knowledge base management.

Figure 1 shows the KAE architecture. The user (domain expert or knowledge engineer) interacts with the KAE through the interface manager. Section 3.1 describes the tools and interactions currently available through the interface manager. Once entered into the KAE, raw domain information is translated into an
FIG. 1. Knowledge acquisition environment architecture.
intermediate representation (e.g. frames, rules). Section 3.2 explains the motivation for an intermediate representation and the types of representations available within the KAE. The domain information and data are then submitted to the appropriate inference engine for processing. Results are the output of the inferencing process; they can then be analysed and displayed to the user. Section 3.3 describes execution and analysis in the KAE. Having completed the cycle of input, translation, execution and display, the user receives immediate feedback in the form of displays and analysis to guide the refinement process. As the KAE begins to store and manage large bodies of domain knowledge, knowledge management tools will be essential. Section 3.4 presents an envisioned set of knowledge management tools. Components of the interface manager and translation modules have been implemented for the terrain analysis and spatial pattern identification domains. The user interface allows information to be entered and edited both textually and graphically, and can display information on maps and images for additional support. Current development is being performed on a Symbolics 3270 with plans to migrate to a SUN workstation during 1988.

3.1. INTERFACE MANAGER
The goal of the KAE user interface is to support experts in the various domains required to perform scene analysis. In most instances, the domain experts are not sophisticated computer users; therefore an additional requirement is to make the interactions as easy and intuitive as possible. In order to meet these requirements, domain models (Musen, 1986) were defined for the initial key areas (terrain analysis and spatial pattern identification). Domain models provide a means by which the intended actions, goals, and vocabulary of the application (e.g. terrain analysis) are formally expressed. By combining graphics and text within the specific application domain model, the user is insulated from the detailed implementation and can interact with the information in ways that are familiar.
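A domain model of this kind might, purely for illustration, be expressed as plain data from which interface options are derived; the field names and entries below are assumptions, not the KAE's actual internal format:

```python
# Illustrative sketch of a domain model: the vocabulary, actions and goals of
# the terrain analysis application expressed as plain data. Field names and
# entries are assumptions for illustration, not the KAE's actual format.
TERRAIN_ANALYSIS_MODEL = {
    "vocabulary": {
        "overlay": "a terrain feature displayed over a map or image",
        "likelihood": "suitability of a terrain feature for a unit (0.0-1.0)",
    },
    "actions": ["enter-unit-information", "edit-unit-information",
                "display-terrain-image"],
    "goals": ["predict likely unit locations from terrain suitability"],
}

def menu_items(model):
    """Derive context-specific sub-menu options from the domain model,
    so the interface speaks in domain terms rather than system terms."""
    return [action.replace("-", " ").title() for action in model["actions"]]

print(menu_items(TERRAIN_ANALYSIS_MODEL))
```

The point of the sketch is that the interface vocabulary is generated from the domain model rather than hard-coded, which is what insulates the expert from implementation detail.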
For example, during the process of defining the terrain analysis domain model it became apparent that experts use a variety of "overlays" to make assessments. An overlay represents a particular terrain feature (e.g. canopy closure) which is displayed over a map or image under analysis. A unit is then given a likelihood probability constituting suitability for each terrain feature. This process has been modeled in the KAE. The KAE screen layout is presented in Fig. 2. The top right window is the system menu window. This window reflects the comprehensive set of information required to perform scene analysis. As previously stated, the prototype KAE is concentrating on the terrain analysis and pattern identification domains. The item selected from the system menu provides further context-specific options that are then displayed in the sub-menu window. Selection of an item in the sub-menu performs a system processing operation. The displays generated by system processing are positioned in the work area. During the processing of a function, messages may appear in the notification window. As shown in Fig. 3, selecting terrain analysis from the system menu produces a sub-menu of actions specific to terrain analysis. Once the user chooses to "Enter or Edit Unit Information", a knowledge base of previously defined unit specifications is read into the KAE. These are shown in Fig. 3. The user decides to add a "new" unit to the knowledge base and is prompted for the unit name (Fig. 4). The user is then prompted to select the features which are sufficient to describe the types of terrain suitable for an MRR-HQ (Motorized Rifle Regiment Headquarters) (Fig. 5). From the choices made in the terrain feature
FIG. 2. Knowledge acquisition environment screen layout.
FIG. 3. Adding a new unit to the terrain analysis knowledge base.
FIG. 4. Entering unit name.
FIG. 5. Terrain feature selection.
FIG. 6. MRR-HQ terrain feature specification.
menu, a window is configured to allow the user to enter the likelihood probabilities. As shown in Fig. 6, each of the terrain features is expressed in finer detail in order to allow for an accurate specification. Each of these designations is a field in a terrain database in which each image is catalogued. The terrain feature names are mouse-sensitive to provide definition information (Fig. 7). The sub-categories belonging to each terrain feature are also mouse-sensitive so that individual categories may be displayed on a map or image (Fig. 8). This assists the domain experts in assigning the likelihood probabilities for a unit to be located in areas representing different types of terrain. Figure 9 shows the Canopy Closure (75-100%) feature displayed on a terrain area (map) under analysis. In this display, the darkened areas reflect the chosen terrain feature. The map is typically displayed on a colour monitor where each terrain feature is assigned a colour, with colour intensity reflecting the categories within each terrain feature. In this way, users may "overlay" different terrain features to aid in the analysis of the underlying terrain. Once the user is comfortable with the probabilities, the terrain analysis inference engine may be executed so that results may be obtained and the refinement process initiated. Changes and updates may also be saved, adding the MRR-HQ terrain information to the terrain knowledge base for later recall and editing. Having completed an initial set of terrain analysis functions, development will now proceed to the spatial pattern identification domain. The spatial pattern identification domain model showed that users required a tool to enter spatial patterns as well as describe properties of the pattern. A property may be the
FIG. 7. Terrain feature definition information.
FIG. 8. Selecting terrain feature to be displayed.
FIG. 9. Canopy closure map display.
number of objects, the distance between each of the objects, or the type of each object in the pattern. The user can enter spatial information in two ways: the KAE stores a pre-defined set of patterns from which the user may choose in order to enter specific unit information, or the user can add new patterns to the KAE. The following paragraphs explain both modes of operation. Figure 10 shows the spatial pattern identification sub-menu with "Define New Pattern" selected. The user is asked to give the pattern a name as well as specify the number of objects in the pattern. Figure 11 shows a graphic "working area" containing the specified number of objects (in no apparent order) along with a textual specification template. The user is prompted (in the notification window) to place the objects in the desired pattern. Each of the objects is mouse-sensitive and can be moved anywhere in the "working area". Once the user is comfortable with the layout, the specification template can be "updated" to reflect the configured pattern (Fig. 12). Notice the automatic addition of the angle descriptions. Finally, the user "saves" the pattern, causing the list of pre-defined pattern specifications to be updated to reflect the new triangle pattern. Editing can be accomplished from either the text or graphics display. Figure 12 shows changes in the graphic display reflected in the text specification. Figure 13 shows an angle change in the text display reflected graphically. A second mode of spatial pattern identification involves utilizing one of the pre-defined patterns to describe the spatial layout of a specific unit (Fig. 14). In this case, the newly created triangle pattern was chosen. The user is prompted to enter the unit which will utilize a triangle pattern: Motorized Rifle Regiment (MRR). Implicitly, the MRR inherits the properties of the triangle pattern (i.e. three objects in the pattern and the objects placed at the specified angles).
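The inheritance relationship described here, where the MRR unit inherits the triangle pattern's properties, can be sketched with a minimal frame representation; the class structure and slot names are illustrative assumptions, not the KAE's internal format:

```python
# Illustrative frame sketch: a unit frame inherits slots from its pattern
# frame, as the MRR inherits the triangle's object count and angles.
# Slot names and values are assumptions for illustration.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]          # local slot
        if self.parent is not None:
            return self.parent.get(slot)     # inherited from the pattern frame
        raise KeyError(slot)

triangle = Frame("TRIANGLE", number_of_objects=3,
                 angles={"A-B-C": 57, "B-C-A": 61, "C-A-B": 62})
mrr = Frame("MRR", parent=triangle,
            object_types={"A": "battalion", "B": "battalion", "C": "battalion"})

print(mrr.get("number_of_objects"))   # inherited from TRIANGLE
print(mrr.get("object_types")["A"])   # local to MRR
```

Editing the pattern frame (say, an angle) would then automatically be visible to every unit frame that inherits from it, which is the behaviour the pre-defined-pattern mode relies on.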
The initial graphics display shown in Fig. 15 is the triangle pattern that was previously created and saved.
FIG. 10. Spatial pattern specification.
FIG. 11. Triangle pattern specification.
The text display has additional fields allowing the user to specify ranges for angle and distance as well as indicate the type(s) of objects that compose the unit. The angle and distance fields can be edited. The user can change the graphic and cause the text display to be updated, and vice versa. The graphics window is initially configured with a system-defined ruler measurement. The ruler can be adjusted to
FIG. 12. Spatial pattern updating.
FIG. 13. Spatial pattern modification.
reflect any type of distance measure (e.g. miles, kilometres, metres). In this example the ruler is shown in kilometres. Figure 16 shows modifications occurring as a result of moving an object in the pattern. As soon as the "update" request is received by the KAE, the text display is changed. On the other hand, modifications can be made by "mousing" on an entry in the text display, entering a new value, and seeing the change in the graphic display. These features offer the domain expert the
FIG. 14. Spatial pattern specification-MRR.
FIG. 15. Initial spatial pattern specification-MRR.
TerrainAnalysis ActivityAnalysis Quit IDEFINE NEW PATTERNI
All SourceFusion IPattern Identification I
VehicleClassification IntentionAnalysis
EDIT PATTERN
USE PRE-DEFINED PATrERN UNIT NAME: MRR
0
RULER: KM
1
2
3
0--
I
I
I
1--
9A
4 I
5 I
6 I
7 I
-B
2-3--
C-A-B
62
4m
C
s-
[]
DISTANCE: A-B B-C C-A
MIN 5kin 7km 7kin
[] MAX 1Okra 12kin 12km
ENTER MAX ANGLE: 75 ,,,,,,.,.,.,
6--
OBJECT TYPE: A battalion
Bba~a,o.
FIG. 16. M R R spatial pattern editing.
FIG. 17. Final spatial pattern specification-MRR.
most in flexibility. Once the user is comfortable with the unit spatial description, the text and graphic descriptions can be saved for later recall. Figure 17 shows the final result for an MRR. The user interface and supporting tools described in the preceding paragraphs constitute the initial development phase of the KAE. Domain experts and knowledge engineers can now begin to enter and edit terrain domain knowledge. Spatial pattern identification will be available soon. Minor changes to the current interface are anticipated, but for the most part development will continue in the spatial pattern area.

3.2. TRANSLATION OF DOMAIN KNOWLEDGE
Within the KAE, raw domain information is translated into an intermediate representation (refer to Fig. 1). This intermediate representation serves several functions. Initially, the intermediate representation acts as a buffer between the graphics and text entered through the user interface and the executable representation used by the application-specific inference engine. Our goal is to define a rich set of intermediate representations through which the domain information may be translated (e.g. frames, rules, semantic networks, scripts). The intermediate representation chosen for any given domain will be influenced by the representation utilized by the application-specific inference engines. Frames were chosen and implemented for the terrain analysis domain. Once the domain information is stored in the intermediate representation, another translation process transforms the intermediate representation into a form which can be executed. This translation process is only performed when the application-specific inference engine is to be
executed. This saves a significant amount of time during the initial entry and editing stages. The intermediate representation is also used as a means to transform results of the inferencing process into a form suitable for display to the user. Both the intermediate and executable knowledge representations are transparent to the user, freeing the expert from learning implementation details and allowing him to concentrate on the transfer of expertise. As shown in Fig. 1, a host of auxiliary tools (e.g. performance analysis and knowledge-base management) will also use the intermediate knowledge representation. These tools are described in later sections.

3.3. EXECUTION AND ANALYSIS

Considerable current and previous effort has been devoted to the development of a variety of application-specific inference engines for scene analysis. Each of these will be utilized by the KAE. Knowledge bases for each supported application area will be submitted to the appropriate inference engine for execution. Once results are available, tools will be developed to assist in performance evaluation in the various application areas. Most of the sample imagery used to develop knowledge-based image understanding and vision systems has what is known as "ground truth" associated with it. The ground truth describes the objects, patterns, and activity contained in a particular image. A performance analysis tool will be used to compare ground truth information with inference results to test the overall accuracy of the inference process. This comparison can also guide the domain expert to the areas in which the knowledge base may be improved. For example, presenting the expert with the cases which did not produce useful results may show that a piece of imagery was bad, that new terrain features need to be added to the terrain analysis area, or that certain objects are never found.

3.4. KNOWLEDGE-BASE MANAGEMENT TOOLS

As the KAE begins to store and manage large bodies of domain information, knowledge-base management tools will be essential. These tools will use the intermediate representation (e.g. rules, frames, semantic networks). They will assist in both knowledge-base debugging and knowledge-base configuration management. In the debugging area, a tool should check for consistency in the saved knowledge base. As a knowledge base grows, identifying the effects of modifying a concept or relation will be important. For example, suppose a user decides to change the system definition within terrain analysis for "Hill Crest". We then need to be directed to all the units that rely on the hill crest feature and determine whether the feature is still relevant to each unit and whether its suitability measure should be modified. Another debugging aid would compute statistics based on inference results showing when a rule was last "fired" or a frame last instantiated. Perhaps the domain information within the rule or frame is too special-purpose, a particular unit is rarely seen, or imagery containing the unit is lacking. A graph could show a utilization ratio for each unit. Knowledge bases will change frequently. Any useful, non-trivial knowledge base requires periodic correction, extension, and improvement. In order to manage the
updates and provide a historical summary of the knowledge-base development activity, a configuration management tool will be embodied in the KAE. This tool will log who and when, and prompt the user for why a knowledge base was modified. "What" was changed should also be captured; this could be accomplished by computing the difference between the current and last versions of the knowledge base. The set of tools described above is not complete. As domain experts begin to work with the KAE, additional special-purpose tools will most certainly be required.
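The impact-analysis step described above, locating every unit that depends on a modified feature such as "Hill Crest", could be sketched as follows; the knowledge-base layout and values are assumptions for illustration:

```python
# Illustrative sketch of the impact-analysis debugging aid: when a terrain
# feature definition changes, find every unit whose suitability table
# references it. The knowledge-base layout is an assumption for illustration.
knowledge_base = {
    "MRR-HQ": {"Canopy Closure": 0.8, "Hill Crest": 0.1, "Slope": 0.2},
    "MRR":    {"Canopy Closure": 0.5, "Road Zones": 0.3},
}

def affected_units(kb, feature):
    """Units whose likelihood tables must be reviewed after a feature change."""
    return sorted(unit for unit, features in kb.items() if feature in features)

print(affected_units(knowledge_base, "Hill Crest"))
```

The same index over feature references could also drive the utilization statistics mentioned above, by counting how often each feature actually contributes to an inference result.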
4. Conclusion

As previously mentioned, a goal of the KAE is to maintain as much domain independence as possible and still be useful to a number of experts in different domains. We are hoping to gain insight into the amount of domain independence an effective KAE can possess. Another area of study is the investigation of whether knowledge entered directly by a domain expert is "more valid" than that elicited by a knowledge engineer. Is there a difference? If so, what are the characteristics? It is felt that with the basic KAE infrastructure in place, research and development can proceed in parallel. Domain experts can begin using the system, allowing for active refinement and study in the areas cited above. Functionality will continue to be added to each of the core KAE components described. In addition, the basic structure will be in place to support new research in tool development, learning techniques, explanation, and elicitation methodologies.

The author would like to thank R. Drazovich and C. McKee for their helpful comments and suggestions in the course of this research. C. Neveu implemented portions of the prototype version of the KAE. Work for this paper has been sponsored by the Defense Advanced Research Projects Agency and the U.S. Army Engineer Topographic Laboratories under U.S. Army Contract No. DACA76-86-C-0010.
References

ALEXANDER, J. H., FREILING, M. J., SHULMAN, S. J., STALEY, J. L., REHFUSS, S. L. & MESSICK, S. L. (1986). Knowledge level engineering: ontological analysis. Proceedings of AAAI-86, Philadelphia, PA (August).
BOOSE, J. H. & BRADSHAW, J. M. (1986). Expertise transfer and complex problems: using AQUINAS as a knowledge acquisition workbench for knowledge-based systems. International Journal of Man-Machine Studies, 26, 3-28.
BOOSE, J. H. (1984). Personal construct theory and the transfer of human expertise. Proceedings of AAAI-84, Austin, TX (August).
BUCHANAN, B. G. (1985). Some approaches to knowledge acquisition. STAN-CS-85-1076, Stanford University, Department of Computer Science (July).
BUCHANAN, B. G. & SHORTLIFFE, E. H. (1985). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.
DARPA, ETL, AFWAL, ASPO (1986). Advanced digital radar imagery exploitation system (ADRIES) Program Plan (October).
FROSCHER, J. N. & JACOB, R. J. K. (1985). Designing expert systems for ease of change. Proceedings of Expert Systems in Government Symposium, McLean, VA (October).
GAINES, B. R. (1986). An overview of knowledge acquisition and transfer. Proceedings of Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada (November).
GARG-JANARDAN, C. & SALVENDY, G. (1986). A conceptual framework for knowledge elicitation. Proceedings of Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada (November).
KAHN, G., NOWLAN, S. & McDERMOTT, J. (1985). Strategies for knowledge acquisition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7, No. 5.
McKEOWN, D. M., JR & HARVEY, W. A. (1987). Automating knowledge acquisition for aerial image interpretation. CMU-CS-87-102, Carnegie-Mellon University (January).
MORIK, K. (1986). Acquiring domain models. Proceedings of Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada (November).
MUSEN, M. A., FAGAN, L. M., COMBS, D. M. & SHORTLIFFE, E. H. (1986). Use of a domain model to drive an interactive knowledge editing tool. MEMO KSL-86-24, Knowledge Systems Laboratory, Stanford University.