Pergamon

Engineering Fracture Mechanics Vol. 50, No. 5/6, pp. 903-921, 1995
Elsevier Science Ltd. Printed in Great Britain
0013-7944/95 $9.50 + 0.00
0013-7944(94)E0066-P
A FRAMEWORK FOR INTEGRATING DATABASES WITH KNOWLEDGE BASES FOR STRUCTURAL ENGINEERING

CHEE-KIONG SOH
School of Civil and Structural Engineering, Nanyang Technological University, Singapore 2263

JOHN C. MILES
School of Engineering, University of Wales, College of Cardiff, Cardiff, U.K.

Abstract--The focus of this paper is the problem of how to easily combine the advantages of database and knowledge-based systems. Solving this problem has the potential to radically extend the usefulness of existing database systems. Conversely, the solution would also make large-scale knowledge-based systems less costly to construct. In order to solve this problem, we present a framework for exploring the various alternatives in the integration of the two kinds of systems. We also use this framework to motivate our proposed solution: knowledge-embedded database systems. Our proposal entails the use of simple but powerful extensions of a database system to increase its ability to represent and manipulate knowledge. We describe Kbase, an environment for developing knowledge-embedded database systems. We further illustrate our proposed solution with IPDOS, a Kbase application for the preliminary design of offshore structures.
INTRODUCTION

To many people who use and maintain engineering databases, knowledge-based systems offer a paradoxical challenge. They have been widely used and seem to have demonstrated practical value over the last decade. Therefore, many engineering research and development efforts are expended on developing knowledge-based systems. Yet, their use is often based on premises different from those for the use of database systems. They are usually used with a great deal of dynamic inferencing (e.g. with production rules) and with relatively little static information (e.g. assertions about the state of the world). On the other hand, database systems have evolved into repositories for large amounts of static information (e.g. tuples in a relational database) with relatively few dynamic actions (e.g. database queries, semantic integrity constraints such as "each engineer record must have a unique employee number"). It seems natural, then, that coupling database systems with knowledge-based systems can offer a platform for constructing practical systems that require large-scale storage of both static and dynamic information.

Conversely, many knowledge engineers face a dilemma with database systems. Knowledge-based systems that can use existing databases are less costly to build. For example, the transfer of information from the database system to the knowledge-based system is eliminated; also, errors that may arise from the transfer are avoided. The use of database systems also makes knowledge-based systems easier to scale up. Database systems now use well-developed algorithms and designs to store large amounts of information for quick retrieval. While database systems offer these appealing advantages, they are also notorious for their rigidity in the representation of information. For example, in the relational database model, each record cannot contain more than one value in a column.
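As a hedged illustration of the single-value rigidity just mentioned: a relational column holds exactly one value per record, so a field that must hold several values is conventionally moved to a separate table with one row per value. The sketch below uses Python's sqlite3 as a stand-in for a relational system; the table and column names are invented for the example and do not come from the paper.

```python
# Illustrative sketch only: a relational column holds one value per
# record, so a multi-valued field (an object with several legs) is
# normalized into a separate table with one row per value.
# All names here are invented assumptions, not the paper's schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE frame (name TEXT PRIMARY KEY)")
# One row per value replaces a single multi-valued column.
con.execute("CREATE TABLE frame_legs (name TEXT, leg TEXT)")

con.execute("INSERT INTO frame VALUES ('jacket')")
con.executemany("INSERT INTO frame_legs VALUES ('jacket', ?)",
                [("leg_1",), ("leg_2",), ("leg_3",), ("leg_4",)])

legs = [row[0] for row in con.execute(
    "SELECT leg FROM frame_legs WHERE name = 'jacket' ORDER BY leg")]
```

The extra join table is exactly the kind of schema rework that frame-based knowledge structures force onto a rigid relational design.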
© Crown copyright (1995).

Furthermore, the communication between database and knowledge-based systems could be too slow for practical value. With the kinds of problems mentioned above, it is not surprising that the objective of obtaining synergy from the integration is still elusive. Various methods of integration have been developed to help developers handle these problems. To various extents, these methods solve some problems of integration so that a developer need only worry about others. In general, there are two ways to advance the state of the art in constructing integrated applications: (1) better understand the trade-offs involved in the different methods for integration so that we can use an appropriate one for a given situation, and (2) explore new methods for constructing integrated systems that avoid
some of the hard trade-offs inherent in current methods (see Fig. 1). These two ways constitute the two major components of our framework for solving the integration problem.

The first part of the framework lays out the space of all methods, both existing and possible ones. The space is indicated by the whole quadrant shown in Fig. 1. Within this space, we identify the existing methods for constructing integrated applications. These methods roughly correspond to the lower curve shown in the figure. This explicit identification provides a better understanding of existing methods. For instance, it is instructive to see how methods that might be thought to be different are identifiably the same (i.e. have the same characteristics with respect to the construction and maintenance of integrated applications). It is also possible to see the trade-offs involved in the existing methods.

The second part of the framework aims to explore better methods for building integrated applications. The space of possible methods is a good conceptual start in identifying potential candidates (indicated by the upper curve in Fig. 1). However, more solid research can only be produced by actual implementation. With respect to Fig. 1, we need to explore in detail what a particular point on the upper curve entails. To this end, we identify a method that produces what we call knowledge-embedded database systems. The balance of this paper explores in detail this method and the particular class of systems it produces.

We implemented Kbase, an environment that helps developers build knowledge-embedded database systems. A key feature in Kbase is that these developers use a programming extension of dBase III [1], a familiar database language. It should be stated at the outset that while we use dBase as a platform for exploring our ideas, we have no reason to believe that other relational database systems cannot be used.
Indeed, one of our current research focuses is to reconstruct Kbase to use the SQL relational database query language [2]. We further illustrate our integration method with IPDOS, a knowledge-embedded database system that helps in the preliminary design of offshore structures. IPDOS is constructed using Kbase. We conclude that we have found a method of integrating databases and knowledge-based systems that is more effective and efficient than most existing methods (i.e. those on the lower curve in Fig. 1). We also discuss the potential trade-offs involved in other possible methods of integration (i.e. other points on the upper curve in Fig. 1).
A NOTE ON TERMINOLOGY

Before we proceed further, we need to explain several terms that are used in this paper. We have earlier mentioned two aspects of database systems and knowledge-based systems: static and dynamic. The static components of database systems, which will be called databases, are repositories of structured information. The static components of knowledge-based systems, which will be called knowledge bases, contain highly interconnected information (e.g. information about how different parts of an offshore structure can be categorized and related to each other). The dynamic components of database and knowledge-based systems are also called data processing and knowledge processing, respectively. Data processing generally involves more predictable and algorithmic transformations of information, such as the calculation of structural stresses. Knowledge processing involves transformations such as generalization (e.g. dealing with engineers in general rather than specific engineers) and abstraction (e.g. considering only aspects of a structural analysis problem that are of first-order importance).

The differences between databases and knowledge bases, and between data processing and knowledge processing, are sometimes not clear and often depend on the roles and goals of the people or machines that use the systems. Therefore, the seamless integration of database and knowledge-based systems naturally supports this blurred boundary. For instance, data can be abstracted into knowledge for knowledge processing, while knowledge can be codified into data for more efficient data processing.

Our method of integration is to use a development environment called Kbase. We call the people who use Kbase developers. These developers can use Kbase to easily create knowledge-embedded database systems, which have integrated database and knowledge-based systems. We also call such Kbase-produced systems Kbase applications. We call the people who use Kbase applications users.

Fig. 1. Two ways to advance the state of the art in building environments for supporting integrated database and knowledge-based systems. In this simple graph, we assume that each environment provides two kinds of advantages. [Figure: two trade-off curves plotting advantages for characteristic 1 against advantages for characteristic 2; existing methods lie on the lower curve, while new methods with the appropriate trade-offs lie on the upper curve.]

THE SPACE OF METHODS FOR INTEGRATING DATABASES WITH KNOWLEDGE BASES
In this section, we present the space of general methods of integrating database and knowledge-based systems. We also identify how various existing techniques for integration fit into some of these general methods. As we discuss each method, we evaluate it according to the following set of criteria:
1. Construction. How easy is it to use the method to build applications that integrate database and knowledge-based systems? How easy is it to customize an existing integrated system (such as when needed in rapid prototyping)?
2. Usage. This can be divided into two sub-criteria: (1) How robust would the integrated system be? (2) How fast can the system run?
3. Maintenance. How easy is it to extend the system in the future? Can the system scale up?

While we do not claim that these criteria are absolutely exhaustive, we believe that they are fairly complete (the various stages of the life cycle of an application system are represented) and representative of the concerns of practical systems.

In Fig. 2, we summarize all methods that are either in use or are potential techniques for integration. For each method, there is a higher "information" level at which we group the database and knowledge-based systems. These systems use devices (especially file storage) at the lower "system" level. Each method also takes a stand on what is the primary construct to be built by the application developer for integrating database and knowledge-based systems.
Method 1. Loose coupling

The traditional method for the integration of database and knowledge-based systems is to use a flat file via the following procedure.

(1) Create a flat file.
(2) Select the data to be used in the knowledge-based system and write it into an ASCII or VSAM file. The writing and reading processes typically use delimiters that are recognized by the knowledge-based system.
(3) If the database system and knowledge-based system reside on different machines, transfer the file from the database machine to the knowledge base machine. This transfer can be done using standard communications software or by manual means (e.g. using a disk pack).
(4) Open and read the file in the knowledge-based system. Once the file is in the knowledge-based system, read it into the appropriate knowledge base structures.
(5) Return the results from the analysis within the knowledge-based system to the database system. This is done using the converse of the process just mentioned.

A more modern reincarnation of this method requires a communications bridge between the database system and the knowledge-based system [3]. The shaded area in Fig. 2 indicates that the
Fig. 2. Various methods of integration. The shaded areas indicate what constructs are required for the method. [Figure: five block diagrams relating the database system, the knowledge-based system and their files: (1) loose coupling; (2) build interface in knowledge-based system; (3) data-embedded knowledge-based system; (4) build interface in database system; (5) knowledge-embedded database system.]
bridge is at a file storage level, not the higher information (database or knowledge-based system) level. The application developer builds his or her own facilities for interactions at the information level. Examples of such interactions include the formulation of knowledge base information requirements into database queries and the translation of database records into knowledge base structures (e.g. frames).

Table 1.
Construction:   Initial coding -     Customization -
Usage:          Robustness -         Speed -
Maintenance:    Extensibility 0      Scalability 0
Table 2.
Construction:   Initial coding +     Customization -
Usage:          Robustness +         Speed -
Maintenance:    Extensibility 0      Scalability -
Table 1 summarizes our evaluation of this method with respect to the criteria mentioned earlier. The evaluation marks are "+" (for good), "-" (for bad) and "0" (for neutral or moderate). Each evaluation is with respect to the same criterion for the other methods we will discuss, and not with respect to other criteria for the same method. We review the evaluation of all methods at the end of this section.

Because of the low level system details that the developer has to be concerned about, this method is expensive both to construct the integrated system initially and to customize it later. Because the interface between the database and knowledge-based systems is coded from scratch, there is also a higher chance of unanticipated error conditions than if the interface were built with a standardized toolkit or protocol. This makes the integrated system less robust than if it were constructed with the other methods to be discussed.

The only way the procedure outlined above differs from conventional file transfer between two database systems is that the knowledge-based system usually runs in an "artificial intelligence language," like Lisp or Prolog. This straightforward method is perfectly suitable for knowledge-based systems that reason over a well-specified data set, and where the interaction between the database system and the knowledge-based system can be kept to a minimum (i.e. a simple file exchange before and after the knowledge processing phase). Often, though, the exact data a knowledge-based system needs may not be known until analysis is well underway. Furthermore, additional infusions of data may be required as reasoning progresses. In either case, each time data are needed the above process must be repeated. Therefore, the resulting integration is likely to be slow and computationally expensive.

Most early work on the integration of knowledge-based and database systems used this method because the work was done primarily by artificial intelligence researchers who required system-level platforms for experimenting with different means of integration. The wide room for experimentation was obtained through tedious low-level programming. Scalability was also achievable by programming.
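The five-step flat-file exchange of method 1 can be sketched in a few lines. This is an illustrative reconstruction, not code from any of the cited systems: the delimiter, field names and records are invented, and modern Python stands in for the period's database and AI languages.

```python
# A minimal sketch of the flat-file exchange in method 1 (loose
# coupling). The "|" delimiter, field names and records are invented
# assumptions for illustration.
import os
import tempfile

DELIM = "|"

def export_records(records, path):
    # Database side (steps 1-2): write selected tuples as delimited lines.
    with open(path, "w") as f:
        for rec in records:
            f.write(DELIM.join(str(v) for v in rec) + "\n")

def import_records(path, fields):
    # Knowledge-based side (step 4): parse each line into a frame-like dict.
    frames = []
    with open(path) as f:
        for line in f:
            values = line.rstrip("\n").split(DELIM)
            frames.append(dict(zip(fields, values)))
    return frames

# One round trip through a single transfer file (steps 2 and 4).
path = os.path.join(tempfile.gettempdir(), "transfer.txt")
export_records([("jacket", 4), ("topside", 1)], path)
frames = import_records(path, ["component", "leg_count"])
```

Note that every fresh data need repeats the whole export/transfer/import cycle, which is exactly the cost the text attributes to this method.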
Method 2. Build interface in knowledge-based system
This method requires a set of database management functions (e.g. semantic query optimization techniques and the use of semantic data models) within the knowledge-based system. The developer can store information from the knowledge-based system directly into files (the line in the right part of the diagram) as well as into database files through the database interface (the shaded area). Of course, some rarely used database functionality may still be profitably located in the database system. Austin et al. [4] and KEE-Connection [5] are examples of systems that use this method. A more ambitious variation of this method is to use a natural language interface to access the database. Logcher et al. [6] described an innovative system that used this method. This method provides benefits to the knowledge application system developer who now does not have to worry about the details of database systems. Therefore, both initial coding and customization can be relatively painless. Furthermore, the standardization afforded by the database interface can lead to a robust implementation. However, to the extent that the database interface is standardized, this method suffers from the lack of opportunities to optimize database management functions. For instance, the more information the database system has about a transaction, the greater the scope it has for optimizing that transaction. The worst situation for optimization is when the database system receives, during the course of a transaction, a large number of isolated information requests. Without knowing the overall context of these requests, the system has no choice but to respond to them individually. For instance, the system cannot take advantage of "lazy evaluation", in which it accumulates a few requests before executing all of them
908
CHEE-KIONG SOH and J. C. MILES
at one go. Another example is that it can reorder a sequence of queries (e.g. do a SELECT before a JOIN) so as to produce the same result as the original sequence, yet more cheaply. It is common in computer science to assume that a standardized abstraction interface is desirable because it allows the developer to manage the complexity of lower level issues [7]. However, in the case of knowledge systems development, it can be quite difficult to anticipate what kind of functionality should be provided by the interface. Therefore, instead of hiding complexity, an interface can become a brick wall through which the developer has to struggle to obtain functionality not provided by the interface. Therefore, an integrated system with a standardized interface could be less extensible than desirable. The poor speed also makes it relatively less scalable.
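The "lazy evaluation" opportunity described above can be illustrated with a small sketch. This is a hypothetical proxy, not any published system's API: it accumulates isolated lookup requests and then answers them all with a single scan of the stored table, instead of one scan per request.

```python
# Hypothetical sketch of request batching ("lazy evaluation"): the
# proxy accumulates individual lookups and answers them in one pass.
# The table contents and field names are invented for the example.

class BatchingProxy:
    def __init__(self, table):
        self.table = table      # list of dicts standing in for a relation
        self.pending = []       # accumulated (field, value) requests

    def request(self, field, value):
        # An isolated request is merely queued, not executed.
        self.pending.append((field, value))

    def flush(self):
        # Answer all pending requests with a single scan of the table.
        results = {req: [] for req in self.pending}
        for row in self.table:                 # one pass over the data
            for field, value in self.pending:
                if row.get(field) == value:
                    results[(field, value)].append(row)
        self.pending = []
        return results

engineers = [
    {"name": "lee", "dept": "civil"},
    {"name": "tan", "dept": "marine"},
    {"name": "ng", "dept": "civil"},
]
proxy = BatchingProxy(engineers)
proxy.request("dept", "civil")
proxy.request("name", "tan")
answers = proxy.flush()
```

A database system behind a standardized request-at-a-time interface never sees the whole batch, so this optimization is unavailable to it.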
Method 3. Data-embedded knowledge-based system
This method needs database functionality extensions built into the knowledge-based system. It extends the previous method even further, avoiding access to a database system by building the required database functionality within the knowledge-based system. This means that this method is more suitable when a database system does not already exist. This situation is of course more common in research laboratories, and Balzer et al. [8] and Jarke et al. [9] are examples of research systems that use this method. It would indeed be quite difficult to transfer existing databases to the database in the knowledge-based system. Furthermore, it is generally quite difficult to encode a database system with the common artificial intelligence languages used to implement knowledge-based systems. Some work has been done to this effect. A particularly important piece of work is by Reiter [10], in which the relational database model is recast in terms of first-order logic. However, no implementation along this line has been attempted to date, suggesting that it is not easy to create an integrated system which contains a fully-fledged embedded database (such as the relational data model) within a knowledge-based system. This difficulty also makes it difficult to customize integrated applications and to make them robust.

Since the embedded database must be constructed using knowledge-based system structures, the database functionality provided is necessarily more specialized than that provided by generic database systems. The specialization in turn leads to potentially better optimization. For instance, knowledge-based systems that rarely require access to database storage facilities can build simple routines to write database records into flat files. Knowledge-based systems which require a richer set of specific database management functions (e.g. recursive queries) can build routines that optimize these functions, possibly at the expense of other less frequently used ones.

Another way in which this method differs from the previous one is that the knowledge-based system no longer has any means to store information directly. All storage mechanisms are through the database functions embedded within the knowledge-based system. One consequence of this is that knowledge structures which are not naturally mapped into database storage cannot be independently stored. If such a feature is indeed necessary (e.g. an object-oriented knowledge structure that cannot be easily stored into the embedded database), then it would be difficult to accommodate such an extension. The scalability issue is not limited by low speed as in the previous method. However, it is still restricted by the common limitations of knowledge-based systems (e.g. garbage collection in artificial intelligence languages).

The three methods mentioned above have one commonality: they do not focus on the database system. The first method concentrates on the link between the database and knowledge-based systems, while the next two methods focus on the knowledge-based system. Because so much of the installed base of existing data is in database systems, we take a fresh cut at the integration problem by focusing on the database system. Indeed, most engineering and commercial practices have
Table 3.
Construction:   Initial coding 0     Customization 0
Usage:          Robustness -         Speed +
Maintenance:    Extensibility 0      Scalability -
Table 4.
Construction:   Initial coding 0     Customization 0
Usage:          Robustness +         Speed 0
Maintenance:    Extensibility -      Scalability 0
existing databases before they begin building knowledge-based systems that make use of these databases. We can take one of two additional methods.

Method 4. Build interface in database system

This method requires knowledge-processing extensions for a database system connected to a knowledge-based system. To the extent that a database model can accommodate knowledge base structures, it is relatively easy to construct and customize an integrated system using this method. The work on semantic data models [11-13] can be considered to use this method. Another current and very active area of research that fits into this method is the extension of relational database models to accommodate object-oriented features and other kinds of knowledge level processing (e.g. inheritance) [14-17]. A third variation of this method consists of "generative expert systems" [18, 19]. Like semantic data models and object-oriented extensions of relational databases, this variation has its difficulties. In particular, generative expert systems seem to be difficult to construct out of databases. The implemented system described in Rasdorf and Wang [20] replaces the originally proposed relational database system with a "factbase" encoded with OPS5, a knowledge engineering language.

This method also has another disadvantage, due to the relative rigidity of database systems. The disadvantage is especially evident when the knowledge base and database are initially committed to certain designs and then change over time. A knowledge-based system is especially easy to change, given the emphasis on rapid prototyping. Changes in the knowledge system, however, may be difficult to accommodate in the database system. For example, a frame-based knowledge system may initially contain fields with single values and then subsequently be changed to contain multiple values, some of which are other frames. These multiple values would be difficult to implement in a relational database schema that has previously been committed to single values in its columns. Some changes, however, will be easy to accommodate. This is because database systems have a very well developed set of management functions for schema redesign and reindexing. Therefore, customization is moderately easy. Because the database system is the primary working platform, and because database systems are usually better developed than knowledge-based systems, it is conceivable that the resulting integrated system can also be robust.

This method does have some disadvantages. It retains the limitation of the first two methods of having to connect the database and knowledge-based systems. On the other hand, well-developed optimization algorithms in database theory can be brought in with full force to speed up the integrated system. Furthermore, knowledge processing can be done in the separate knowledge-based system, without consuming the computation cycles required for further data processing. On the whole, the speed of an integrated system built with this method is likely to be moderate. Since the kinds of knowledge processing done with this method are restricted by the interface to the knowledge-based system, extending the database system for more intelligent processing can be limited. Indeed, work on object-oriented extensions of relational databases assumes that the database system is only used as a secondary repository--intelligent processing is done with programming extensions at a level lower than that provided by the standard knowledge processing interface (which provides features like inheritance and active demons). Scaling up such an integrated system is also limited by the same interface, but the emphasis on the well-tested database system is likely to be an advantage.

Method 5. Knowledge-embedded database system

With this method, we need to extend database systems for knowledge processing.
At the extreme, the database system may even contain a knowledge-based subsystem. As with the fourth
Table 5.
Construction:   Initial coding 0     Customization 0
Usage:          Robustness +         Speed +
Maintenance:    Extensibility 0      Scalability 0
method, this method has advantages associated with its emphasis on the database system. (1) The number of database systems professionals is several orders of magnitude greater than the number of knowledge engineers. Therefore, working from the database side will have many advantages associated with highly skilled application developers. (2) Database systems are, in general, highly optimized and much better developed than knowledge systems. Thus, we would expect that concentration on database systems would yield performance benefits not obtainable from the second and third methods. (3) Database systems can be much more scalable than knowledge systems. Indeed, except for the Cyc project [21], few knowledge systems come to a scale as large as ordinary commercial database systems.

This method seems to perform as well as the previous one, but it dominates when evaluated against the criteria of speed and extensibility. It does not suffer the costs of connecting the database and knowledge-based systems, thus avoiding the costs of translating between data and knowledge structures as well as of communication. It is also better suited to cope with incompatible changes that may occur in database and knowledge-based systems. Indeed, if the developer constructs an integrated system such that the database is the knowledge base, then coping with changes becomes a non-issue. In general, we think that the removal of a standardized interface of knowledge processing functions allows developers to easily extend knowledge processing functions with relatively high level data query and manipulation languages.

To be sure, this fifth method is not suitable across the whole range of knowledge system applications. In particular, it is unsuitable for applications that require highly flexible knowledge structures (e.g. discovery systems require structures that can be easily and frequently changed). Nevertheless, we believe this fifth method has a niche in solving the problem of integrating database and knowledge-based systems for civil and structural engineering. Kbase uses this method to construct application systems which we call knowledge-embedded database systems.

Table 6 summarizes our evaluation of the space of methods that are used or can be used for integrating knowledge-based and database systems. It is important to note two points about our evaluation. (1) The evaluation commits to no value judgement about which criteria are more important and which less. Therefore, developers in any particular situation would need to decide for themselves which method is more desirable in that situation. (2) Our evaluation is general with respect to each method. A specific implementation of a method could plausibly lead to a different evaluation.

Table 6 shows that methods 2-4 are more favorable in most situations than method 1. It also shows that method 5 seems to be at least as favorable as the other methods when measured against all criteria. This informal evaluation therefore shows that the first four methods lie roughly on the same trade-off curve, such as the lower curve in Fig. 1. Method 5, however, lies on a higher curve. The enticing possibility of a new and better method of integration leads us to explore this last method in greater detail. In the balance of this paper, we present Kbase, an environment that enables developers to use method 5 to build knowledge-embedded database systems.
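The core idea of method 5, that rules can live in the same store as the data and be run by a small interpreter, can be sketched as follows. This is an illustrative guess at the general technique, not Kbase's implementation: the object, rule fields and condition encoding are all invented.

```python
# Hedged sketch of a knowledge-embedded database: rules are ordinary
# records alongside the object records, and a small interpreter
# forward-chains over them. All field names and values are invented.

objects = {"platform_1": {"water_depth": 150}}

# Each rule is a plain record, as it might be stored in a rule table:
# a condition (field, operator, threshold) and a conclusion to assert.
rules = [
    {"if_field": "water_depth", "op": ">", "value": 100,
     "then_field": "structure_type", "then_value": "deep-water jacket"},
]

def infer(objects, rules):
    # One forward-chaining pass: fire every rule whose condition holds.
    for obj in objects.values():
        for rule in rules:
            if rule["op"] == ">" and obj.get(rule["if_field"], 0) > rule["value"]:
                obj[rule["then_field"]] = rule["then_value"]

infer(objects, rules)
```

Because the rules are just records, the database's own query, editing and storage machinery applies to them unchanged, which is the advantage the text claims for this method.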
Table 6.
                 Construction                      Usage                  Maintenance
Method   Initial coding  Customization    Robustness    Speed     Extensibility  Scalability
  1            -               -               -           -            0             0
  2            +               -               +           -            0             -
  3            0               0               -           +            0             -
  4            0               0               +           0            -             0
  5            0               0               +           +            0             0
Fig. 3. The Kbase architecture. [Figure: block diagram showing the application developer working through the Kbase interface, which calls a knowledge base editor with a help facility; the editor edits the knowledge bases (objects holding declarative knowledge, rules holding heuristic knowledge, and user-defined functions); a function evaluator calls dBase and Clipper functions (algorithmic procedural knowledge) and external programs such as the generator/editor GENED, which in turn can call STRUDS.]

AN OVERVIEW OF THE Kbase ARCHITECTURE
Kbase is constructed using Clipper†, a programming environment for building dBase-compatible applications. Figure 3 shows the overall architecture of Kbase from the point of view of the application developer. Kbase allows the developer to build an application system, which consists of a knowledge base of objects, rules and user-defined functions‡ (i.e. in addition to those functions provided by Kbase). The objects and rules created by Kbase are dBase files. This means that developers can also use dBase to create such files (although it is much easier to do so in Kbase) and, conversely, such files can be read in dBase. The user-defined functions can be compiled into a stand-alone application which uses the objects and rules.

All three kinds of knowledge bases--objects, rules and user-defined functions--can be constructed through a natural and consistent template-based interface. Such an interface resembles the forms with which we are familiar. For instance, a rule might be a template with fields such as "Name," "If" and "Then." Application developers also have access to a comprehensive on-line help facility.

Kbase provides a function called "INFER", which calls the inference engine. The engine is used to run the rules over the objects. Since the "If" and "Then" parts of these rules could contain functions, the inference engine also calls the function evaluator to execute these functions. These functions include those defined by the developer, as well as those provided by dBase, Clipper and Kbase. Kbase functions are generally for knowledge level processing, such as inferencing (through the "INFER" function), and accessing and modifying rules and objects. Kbase also has a foreign function interface through which the developer can call other programs such as GENED, which can generate schematics of offshore structures and allow the user to edit these schematics. Of course, these foreign programs may be able to use their own foreign function interfaces to call yet other external programs. GENED, for example, can be used to call STRUDS, a system for structural design.

†Clipper is a trademark of Nantucket, Inc. Clipper is built using the "C" programming language.
‡Since the term "user-defined functions" is well-established, we continue to use it in this paper. However, a more accurate term would be "developer-defined functions."
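The shape of an "INFER"-style cycle, template rules with "Name", "If" and "Then" fields whose parts are expressions handed to a function evaluator, can be sketched as below. This is emphatically not Kbase's implementation (Kbase is built on dBase/Clipper); the rule, frame and expression syntax are invented for illustration.

```python
# Illustrative guess at the rule-template idea described in the text:
# a rule's "If" and "Then" fields hold expressions, and a function
# evaluator runs them against one object. Not Kbase's actual code;
# every name and expression here is an invented assumption.

frame = {"legs": 4, "type": None}

rule = {
    "Name": "classify-jacket",
    "If":   "obj['legs'] >= 4",
    "Then": "obj.__setitem__('type', 'jacket')",
}

def evaluate(expr, obj):
    # Function evaluator: run one rule part over one object, with
    # builtins disabled so only the object itself is reachable.
    return eval(expr, {"__builtins__": {}}, {"obj": obj})

def infer(rule, obj):
    # INFER-like entry point: fire the rule if its "If" part holds.
    if evaluate(rule["If"], obj):
        evaluate(rule["Then"], obj)
        return True
    return False

fired = infer(rule, frame)
```

Storing the "If" and "Then" parts as plain text fields is what lets such rules sit in ordinary database records, consistent with the claim that Kbase rules are dBase files.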
CHEE-KIONG SOH and J. C. MILES
We have mentioned that Kbase developers can create stand-alone applications for users. Such stand-alone applications are similar to the development architecture, except that they do not have the knowledge base editor and debugger.

THE KNOWLEDGE BASES
Each Kbase application has three knowledge bases. One comprises the objects, another the rules and the third the user-defined functions. At any time during development, a new knowledge base can be created, or an existing knowledge base can be renamed or deleted. The developer can also switch to another knowledge base to be used through the rest of the session. This feature is useful when a developer is experimenting with different knowledge bases for a given application.
Objects

Objects are used to represent declarative knowledge. In a simple sense, the objects knowledge base represents the state of the world. Each object is represented as a frame with an unordered list of fields and values for each field. This frame representation transfers naturally into the developer's interface. An example of an object from IPDOS is shown in Fig. 4. Such a representation is natural and familiar; it is common to list information in this manner, as in forms. It is also expressive, because one object can point to another. For instance, "jacket" and "topside" are other objects with their own fields and values representing all the structural components of the platform sub-structures. Finally, an object-based representation is flexible and general enough to represent both what one commonly thinks of as objects, as well as the relations between them. Below is a relation in a fields-values representation.

Editor for the HAS-LEGS object
Legged_object: jacket
Legs: leg_1, leg_2, leg_3, leg_4
Type-of-legs: jacket_leg

The objects in Kbase can be edited with the "Edit" option in the "Objects" menu (Fig. 4). This will give a pop-up window asking for the name of the object. The template for the object appears after
Fig. 4. An object is presented in a template-based interface.
its name is entered. If the developer types in something which is not an existing object, Kbase will produce a menu of all the existing objects for him/her to choose from. Within the editing template, such as that shown in Fig. 4, the developer can simply use the dBase style of typing over the current values to edit the object, and can move between the fields using the arrow or "Enter" keys. Pressing "Esc" or "Ctrl-Enter" aborts or finishes the editing, respectively. Adding, deleting, renaming or printing objects is also natural and easy: just choose the appropriate action under the "Objects" menu. Choosing "Flush" will delete all the objects in the knowledge base. Also, all the values in the current knowledge base can be cleared together by choosing "Clear Values" in the "Objects" menu. Destructive actions like deleting, clearing and flushing of objects require confirmation by the developer.
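The fields-values representation above can be sketched, for readers more at home in a general-purpose language, as a small in-memory store. This is a hypothetical analogue only: Kbase itself stores objects in dBase files, and the GET/PUT functions below merely mimic the Kbase functions of the same names.

```python
# Hypothetical in-memory analogue of Kbase's object knowledge base.
# Each object is a frame: an unordered mapping from field names to values.
objects = {
    "has-legs": {
        "Legged_object": "jacket",    # a value may point to another object
        "Legs": ["leg_1", "leg_2", "leg_3", "leg_4"],
        "Type-of-legs": "jacket_leg",
    },
}

def GET(field, obj):
    """Return the value of `field` in object `obj`, or "" if absent."""
    return objects.get(obj, {}).get(field, "")

def PUT(field, obj, value):
    """Set `field` of `obj`, creating the object or the field if needed."""
    objects.setdefault(obj, {})[field] = value

# A rule action might write a value, auto-creating "jacket_leg":
PUT("Number", "jacket_leg", "8")
print(GET("Legged_object", "has-legs"))   # -> jacket
print(GET("Number", "jacket_leg"))        # -> 8
```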
Rules

Another advantage of an object-based representation using a relational database language is the consistency with which both the declarative (i.e. objects) and procedural (i.e. rules and functions) knowledge can be encoded. The conceptual consistency is derived from the use of the fields-values format in the predicates and actions of rules. A rule has four parts: its name, its rule set, a predicate and an action. The predicate says when the rule should fire, and the action says what to do if the predicate is true. The consistency also extends to the developer's interface. Below, we show a sample rule in the "jacket" rule set. Rule sets are used to group rules into categories so that the rule sets can be activated separately. The rules in a rule set are fired in an unspecified order. The use of rule sets reduces the amount of computation in rule matching. The predicate is true when the function of the platform under consideration is "central production," there are no wells for the platform, the launch cradle spacing is less than 40 feet, and the jacket is installed either by floating it off or by launching it. When the predicate is true, the rule sets the width and length of the jacket top to 130 and 45 feet, respectively. It also sets the jacket to have eight legs.

Editor for rule 22
RULE SET: jacket
IF: GET("Function", "platform") = "central_production" .AND.
    GETN("Number", "well") = 0 .AND.
    GETN("Spacing", "launch_cradle") < 40 .AND.
    (GET("Installation_method", "jacket") = "float_off" .OR.
     GET("Installation_method", "jacket") = "launch")
THEN: PUT("TX", "jacket_top", "130") .AND.
      PUT("TY", "jacket_top", "45") .AND.
      PUT("Number", "jacket_leg", "8")

This rule also shows an important point: the syntax for writing predicates and actions is just that of the dBase programming language.
Since many developers are familiar with dBase, we believe they would be comfortable writing quite sophisticated predicates and actions. Editing, adding, deleting, renaming, clearing and flushing rules are similar to those same operations for objects.
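As a rough illustration of what a rule's name, rule set, predicate and action amount to, rule 22 can be re-expressed as data. This is a hypothetical sketch: in Kbase the two parts are dBase expressions stored in dBase files, whereas here they are Python callables over a GET/PUT-style stand-in for the object base.

```python
# A minimal stand-in for the object knowledge base and its accessors.
objects = {
    "platform": {"Function": "central_production"},
    "well": {"Number": 0},
    "launch_cradle": {"Spacing": 35},
    "jacket": {"Installation_method": "launch"},
}
GET = lambda field, obj: objects.get(obj, {}).get(field)
PUT = lambda field, obj, value: objects.setdefault(obj, {}).__setitem__(field, value)

# Rule 22 as data: name, rule set, predicate ("if") and action ("then").
rule_22 = {
    "name": "rule 22",
    "rule_set": "jacket",
    "if": lambda: (GET("Function", "platform") == "central_production"
                   and GET("Number", "well") == 0
                   and GET("Spacing", "launch_cradle") < 40
                   and GET("Installation_method", "jacket") in ("float_off", "launch")),
    "then": lambda: (PUT("TX", "jacket_top", "130"),
                     PUT("TY", "jacket_top", "45"),
                     PUT("Number", "jacket_leg", "8")),
}

if rule_22["if"]():        # the predicate holds for the state above,
    rule_22["then"]()      # so the action fires and updates the objects
print(GET("Number", "jacket_leg"))   # -> 8
```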
Functions and the functions library

dBase programming is widely known and easy. Kbase exploits this by using the same programming language in its code. The developer can use the same commands and functions as in dBase to build additional user-defined functions. A function is like an object with the following fixed set of fields: "Parameters," "Private," "Body" and "Return." "Parameters" contains the values passed to the called function, technically the formals or the arguments. "Private" contains the local variables used in the body of the function; the scope of these variables is the function itself and any other functions that it might call. "Body" contains a series of dBase commands and other functions to be executed when the function is called. The function returns the value of the expression or variable in the "Return" field.
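A hypothetical sketch of how the four fixed fields might combine into something callable, with Python closures standing in for dBase code. The helper `make_function` and the example names are ours for illustration, not part of Kbase, which compiles such functions from dBase source rather than interpreting frames.

```python
def make_function(parameters, body, returns):
    """Build a callable from a Kbase-style function frame.

    `parameters` names the formals; `body` plays the role of "Body" and
    fills a dict of private (local) variables; `returns` evaluates the
    "Return" expression from the parameters and privates.
    """
    def fn(*args):
        env = dict(zip(parameters, args))   # bind "Parameters"
        private = body(env)                 # run "Body", producing the "Private" locals
        return returns(env, private)        # evaluate "Return"
    return fn

# An invented function of two parameters with one private variable "total".
add_then_double = make_function(
    parameters=["a", "b"],
    body=lambda env: {"total": env["a"] + env["b"]},
    returns=lambda env, private: 2 * private["total"],
)

print(add_then_double(3, 4))   # -> 14
```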
As with rules, an important point is that the developer's interface to the function library is similar to that for objects. The following is an example of an editor for a function:

Editor for the DXN1 function
PARAMETERS: jacket, leg_batter
PRIVATE: bax_c
BODY: bax_c = GETN("BAX_C", leg_batter)
RETURN: IIF(bax_c > 0, GETN("Height", jacket)/bax_c, 0)

In the example above, "dxn1" is passed "jacket" and "leg_batter" as arguments. Its body then computes "bax_c," which is declared as a local variable in the "Private" field. If "bax_c" is positive, "dxn1" returns the height of the jacket divided by "bax_c." Otherwise, it returns 0. With user-defined functions and a comprehensive function library, the developer can enrich the knowledge base environment. Rules can be more powerful because they can take arbitrary Boolean combinations of functions and execute complex functions in their actions. To further reduce the effort required of the developer, Kbase provides many functions for knowledge processing. Here are two examples. The "QUERY" function is used in a Kbase application to ask the user for information. "QUERY" takes the following as its arguments:

1. A question, given as a string.
2. A picture template which determines what the user can type. For example, "999" means that the Kbase application will accept only three digits as the answer to the question. Richer restrictions on the user input can easily be constructed. For example, a question which provides a multiple-choice answer menu can be constructed with the "ONE_OF" function as a picture template. For instance, the expression "ONE_OF('bent_1', 'bent_2', 'bent_3', 'bent_4')" produces four items in a menu so that the user may choose one of them.
3. A valid check expression. If this returns .T., then the answer is acceptable; if it returns .F., the user is asked to re-enter the input.
While the picture template argument provides a syntactic check on the user input, this valid check expression can be thought of as a semantic check. The developer can construct arbitrarily sophisticated checks so that the input is acceptable. Indeed, this check can itself be another "QUERY" function. This allows the developer to create a sequence of queries that can spawn another sequence in the middle of the first. When the user goes through or aborts the spawned sequence, the dialog returns to the point in the first sequence from which the spawning originated.

4. A string that tells the user where to find the answer to the question.
5. A string that tells the user why this question is being asked.

The second example is the "INFER" function. This system-provided function takes the following arguments:

1. A rule set to consider.
2. A logical expression that is to be the goal of the inferencing. The goal is considered achieved if, when tested, it returns anything other than .F.

"INFER" runs all the rules in the rule set until the goal is achieved or becomes unachievable, i.e. either no rules in the rule set apply or, if any apply, they will not further change the knowledge base of objects. For instance, the function call

INFER("4_legged", "KNOWN('Number', 'horizontal') .AND. KNOWN('BAX_A1', 'leg_batter') .AND. KNOWN('BAX_B1', 'leg_batter') .AND. KNOWN('BAX_A2', 'leg_batter') .AND. KNOWN('BAX_B2', 'leg_batter') .AND. KNOWN('BX', 'jacket_bottom')")

runs over all the rules in the rule set named "4_legged." The goal is achieved when the number of horizontals, the leg batters and the width of the jacket bottom are all known. In the example above, the "INFER" function may be called, say, by a rule which detects
whether a platform should have four legs. When the platform does have four legs, the rule calls "INFER" in its "Then" action to find the details of a four-legged structure. Kbase has a special function called "First_Function." This is the first function automatically called by a stand-alone application system when it is started by a user. It starts the program running.

Editor for the FIRST_FUNCTION function
PARAMETERS:
PRIVATE: mdepth
BODY: CLEAR
@10,0 TEXT
   IPDOS
   INTERACTIVE PRELIMINARY DESIGN OF OFFSHORE STRUCTURES
   Copyright (1988) Chee-Kiong Soh
ENDTEXT
@23,0 SAY "At any point, you can type 'Esc' to interrupt."
WAIT
QUERY("What is the function of your structure?")
INFER("Goal", "Do_something")
showresult()
IF QUERY("Would you like to file these conclusions?", "", ;
   "ONE_OF('Yes', 'No')", "", "") = "Yes"
   mfile = QUERY("What is the file name to use?", "XXXXXXXX", "", "", "")
   IIF("" # mfile, showresult(mfile), .F.)
ENDIF
RETURN:

A typical "First_Function," as shown above, sets up the screen and asks a few initial questions. It then sets the inference engine running with an "INFER" function call. At the end, it calls the user-defined function "showresult," which shows the details of a recommended offshore structure. It then gives the end-user the option of saving the recommendations into a DOS file. While it may generally be sufficient to build customized functions in the Kbase programming language, Kbase has an open architecture which allows integration with "C" and/or assembler code that is external to Kbase.
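The QUERY mechanism described earlier, a syntactic check via the picture template followed by a semantic check via the valid expression, might be sketched as follows. The sketch is non-interactive and purely illustrative: `matches_picture` handles only the "9" picture character, the answer is injected as an argument so the code is testable, and the depth limit is an invented semantic constraint, not one from Kbase or IPDOS.

```python
def matches_picture(picture, text):
    """Interpret a dBase-style picture: '9' accepts one digit, anything
    else accepts any character; the lengths must match exactly."""
    if len(text) != len(picture):
        return False
    return all(c.isdigit() for p, c in zip(picture, text) if p == "9")

def query(question, picture, valid, answer):
    """Return `answer` if it passes the syntactic check (picture) and the
    semantic check (valid); return None, where Kbase would re-ask."""
    if matches_picture(picture, answer) and valid(answer):
        return answer
    return None

# "999" accepts exactly three digits; the valid check requires a value
# below 500 (an illustrative constraint).
depth = query("What is the water depth?", "999", lambda a: int(a) < 500, "150")
print(depth)   # -> 150
```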
THE INFERENCE ENGINE

The inference engine is the workhorse of the system. It goes through the following procedure during the inference process:

1. The engine is started by the "INFER" function. The function specifies the goal to be achieved and a rule set whose rules will be used to achieve the goal.
2. The engine runs the rules in the specified rule set over the objects knowledge base in a forward-chaining manner. This means that, for each rule, its predicate is checked against the objects representing the state of the world. If the predicate is true, the action is fired, perhaps changing the objects knowledge base. This may lead to more rule predicates becoming true, which in turn enable more rule actions to be executed. This goes on until the goal is achieved or becomes unachievable.
3. The action of a rule may activate the inference engine again, perhaps with another (sub)goal and (sub)rule set. Such rules, which spawn inference chains, are a kind of meta-rule. The spawned inference process starts all over again with the first step.

The use of goals and rule sets reduces the problem into sub-problems. The grouping of the rules into separate rule sets ensures that only the relevant rules are considered during the inferencing process.
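The forward-chaining procedure above can be sketched in a few lines of Python. This is illustrative only: Kbase's engine evaluates dBase expressions over dBase files, whereas the rules here are plain callables over a dictionary.

```python
def infer(rule_set, goal, objects):
    """Fire rules repeatedly; stop when `goal(objects)` holds, or when no
    firing changes the object base (the goal is then unachievable)."""
    changed = True
    while changed and not goal(objects):
        changed = False
        for predicate, action in rule_set:
            if predicate(objects):
                before = repr(objects)
                action(objects)
                if repr(objects) != before:   # the action changed the world
                    changed = True
    return goal(objects)

# Two invented rules: shallow water implies four legs; four legs imply
# X-bracing.  (Illustrative content, not actual IPDOS knowledge.)
rules = [
    (lambda o: o.get("depth", 0) < 100, lambda o: o.update(legs=4)),
    (lambda o: o.get("legs") == 4,      lambda o: o.update(bracing="X")),
]
world = {"depth": 80}
print(infer(rules, lambda o: "bracing" in o, world), world)
# -> True {'depth': 80, 'legs': 4, 'bracing': 'X'}
```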
In general, meta-rules are rules for reasoning about control, in contrast to the regular rules for reasoning about the state of the world. They are particularly useful in large knowledge-based systems, where exhaustive invocation of the encoded sub-knowledge bases would make the system awkwardly slow and impractical. The use of meta-rules to reason about control is described in the literature [22, 23]. We have mentioned one way of reasoning about control: spawning inference chains as a strategy to guide the invocation of regular rules. Meta-rules can also be used to modify rules during an inference chain, much as regular rules can be used to modify objects. For example, a meta-rule may change the value of the "If" field of a rule, and hence its precondition. Finally, meta-rules can be used for specialized conflict resolution when more than one regular rule can be fired at a time. For instance, the "Rule Set" field of a regular rule can be used to indicate its priority. Meta-rules first fire those rules in high-priority rule sets, followed by those of second-order priority, and so on. This general way of conflict resolution can be used to accommodate the various resolution strategies that have been proposed [24]. This again illustrates the Kbase approach of providing the barest minimum while enabling developers to build sophistication into their applications. We have presented an overview of the Kbase features. We now illustrate how one can build an engineering application with Kbase.

IPDOS: INTERACTIVE PRELIMINARY DESIGN OF OFFSHORE STRUCTURES

This section describes the design of an integrated "intelligent" structural design system, IPDOS, for the preliminary design of offshore structures. The system is implemented using the Kbase environment. The knowledge-based component of IPDOS is able to configure and automatically generate the basic structural model of fixed template-type, steel, offshore structures for shallow water regions.
Currently, IPDOS is able to configure the basic three-, four-, six- and eight-legged platforms for the "routine" oil and gas related functions. In addition to configuring and generating the structural model, IPDOS has a facility for the user to alter any of the inferred recommendations. Due to variations in the physical and functional requirements, as well as in environmental and geographical conditions, some of the inferred recommendations may not be suitable for practical use. Therefore, the facility to alter the recommended components allows iterative design of offshore structures prior to generating the numerical model of the final structure for the structural analysis. IPDOS can also function as a computer-aided tutor to teach neophytes how to design offshore structures, via a system of explanations of all the prompted questions. These explanations are structured so as to guide the user not only in deciding what questions he or she should be asking when designing an offshore platform, but also in what order they should be asked. IPDOS also has a user-defined function that can generate the structural model.†
Overview of IPDOS

Supporting facilities for IPDOS, such as the user interface and inference engine, are provided by the Kbase environment. The central knowledge module has three components: objects, rules and user-defined functions. Factual knowledge about the physique of fixed steel offshore structures is represented as associative triplets in the object network (Fig. 5). Each triplet is a list of an object name, a field name and a value. A collection of such triplets that have the same object name forms the representation for an object. There is potential gross redundancy in object names for such a representation. Such redundancies can be handled by database normalization [25], which is straightforward because Kbase has access to many database functions. However, IPDOS does not have objects with very many fields. Therefore, we made the design decision to trade off possibly small gains in performance for simplicity [26]. Knowledge required for inferring the basic structural configuration is represented in the rules knowledge base. Knowledge required for controlling the inference is also represented, as meta-rules, in the same rules knowledge base. The objectives of the various tasks comprising the solution process are represented as goal determination functions in the knowledge base of user-defined functions. Procedures which execute special
†Recall from the previous section that the "user" in the term "user-defined function" refers to the application developer who uses Kbase to develop application systems such as IPDOS. It does not refer to the user of IPDOS.
Fig. 5. A small portion of the IPDOS object knowledge base. For example, the value of the "Jacket" field of the "Offshore Platform" object is another object, called "Jacket". The ellipses (...) indicate values not explicitly shown here.
operations on the factual knowledge as well as the system function "First_Function" are also stored in this knowledge base. The "First_Function" initiates each consultation by prompting for all the essential design parameters from the user.
Representation of "static" knowledge

"Static" knowledge refers to the information required to describe the physical system and the assumed states addressed in a problem. For IPDOS, it comprises the physical components and the geometry of the fixed steel offshore structures, such as the length and width of the jacket top and bottom, the layout of the topside main and cellar decks, and the structural members. This knowledge is mainly factual and can be effectively represented using relational database techniques. The method used for IPDOS is essentially the representation scheme provided by the Kbase environment. Kbase interprets the elucidated "static" knowledge and stores it in the knowledge base as a network of objects. The physical implementation is in a form similar to the dBase III type data file. An example of such an object representation has been given earlier. Each object is associated with at least one field, and each field can have any number (including none) of values. For example, the object "Jacket" has fields such as "Height" and "Installation_method." Objects, fields and values may be pre-defined, set as unknown or left unrepresented in the network. Pre-defined values are default values which may be overwritten, either by new input from the user or by inference; for example, the thickness of the neck plating has a default value of 0.375 inches (9.5 mm) but, if necessary, can be 0.500 inches (12.7 mm) too. Unknown values are essentially those facts to be inferred by the inference engine for the proposed recommendations, for example the height of the jacket structure. Those objects, fields and values that are not represented in the network can be automatically created when necessary. Suppose a rule has been fired to set the size of the deck legs to be 60 inches (1524 mm) in diameter with 2.00 inches (50.8 mm) wall thickness.
If the "wall_thickness" field of the object "Deck_Legs" does not already exist in the object network, IPDOS will automatically create the field and subsequently add to it the value "60 x 2.00".
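The three kinds of values just described, pre-defined defaults, unknowns to be inferred, and unrepresented fields created on demand, can be sketched as follows. The store is a hypothetical stand-in for Kbase's dBase files; the names and numbers follow the text above.

```python
UNKNOWN = object()   # marker for a fact the inference engine must supply

objects = {
    "Neck_plating": {"Thickness": 0.375},   # pre-defined default, may be overwritten
    "Jacket": {"Height": UNKNOWN},          # unknown, to be inferred
}

def put(obj, field, value):
    """Write a value, auto-creating the object and the field if absent."""
    objects.setdefault(obj, {})[field] = value

# A fired rule sets the deck-leg size; "Deck_Legs" and its
# "wall_thickness" field did not exist and are created automatically.
put("Deck_Legs", "wall_thickness", "60 x 2.00")
print(objects["Deck_Legs"]["wall_thickness"])   # -> 60 x 2.00
```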
Representation of "active" knowledge

Since the design of fixed steel offshore structures is highly dependent on the practical experience of the designers, a major portion of the "active" knowledge encoded in the current IPDOS system is heuristic and qualitative in nature. Such knowledge is represented primarily in the form of production rules. Algorithmic procedures which are repetitive, such as the numerical
operations to compute the dimensions of the main and the cellar decks, are represented mainly as user-defined functions. However, it is generally difficult to separate distinctly the representation of engineering heuristic and qualitative knowledge from that of the quantitative knowledge [27]. Hence, it is not unusual to find that heuristic, qualitative and quantitative knowledge can be represented either as production rules or as user-defined functions. For example, there are several algorithms, such as those for computing the dimensions of the jacket top and the required number of horizontal framings, which are embedded within the actions of production rules. In general, our design methodology is to minimize the use of production rules. We do this by limiting rules to representing non-redundant knowledge whose existence may not be predictable. For example, there are several points in the inference process where the dimensions of the jacket bottom have to be calculated from a fixed formula. Therefore, such knowledge is encoded as a user-defined function. On the other hand, there is usually only one point in the inference process where, if ever, the number of legs is set according to certain known conditions of the jacket. Furthermore, it is difficult to predict when these conditions are known. Therefore, the setting of the number of legs is encoded as a production rule. The rationale for limiting the use of rules is the observation that algorithms run much faster than rules. In a sense, as heuristic and qualitative knowledge becomes better known and more widely used, it could be compiled into algorithmic, user-defined functions. For large repetitive numerical computations, such as structural analysis and design, the required algorithmic procedures reside in large numerical analysis programs. IPDOS relies on external programs and routines, such as STRUDL and GENED, to perform structural analysis and design.
The process of calling these external programs is through a customized coupling interface [28].

Goal determination

The process of selecting an appropriate basic structural configuration for a fixed steel offshore structure involves a number of goals or tasks† which are related to each other in many ways. An important relation is that of prerequisites. For example, in Fig. 6, "Decide on configuration" is a prerequisite for other tasks such as "Compute dimensions for jacket bottom" and "Select appropriate dimensions, elevations, and layout for all topside decks." However, the "Decide on configuration" task is really an abstract one, in the sense that it can be decomposed into several sub-tasks, such as "Decide number of jacket and deck legs" and "Compute height of jacket." Of course, each sub-task may in turn be an abstract task that can be further decomposed into even more specific sub-tasks. This abstraction hierarchy allows IPDOS to control complexity in task prerequisite relations. If IPDOS were about to embark on a task such as "Compute dimensions for jacket bottom," it need only check that the task "Decide on configuration" has been done, without checking whether each of its four sub-tasks is completed. Because the design of offshore structures involves a large number of tasks, this abstraction hierarchy helps the system and its user reduce the complexity of the design process. The abstraction hierarchy, then, is a mechanism that is quite useful but not usually associated with database systems. Knowledge-embedded database systems make the construction of such mechanisms easy, extending the usefulness of databases.
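The prerequisite check described above, which consults an abstract task rather than enumerating its sub-tasks, can be sketched like this. The task names come from the text; the two-level hierarchy and the completion record are illustrative, not the actual IPDOS goal structure.

```python
# Abstract tasks map to their sub-tasks; leaf tasks have no entry.
subtasks = {
    "Decide on configuration": [
        "Decide number of jacket and deck legs",
        "Compute height of jacket",
    ],
}
# Leaf tasks already completed in this (invented) session.
done = {"Decide number of jacket and deck legs", "Compute height of jacket"}

def is_done(task):
    """A leaf task is done if recorded; an abstract task is done when
    every one of its sub-tasks (recursively) is done."""
    children = subtasks.get(task)
    if children is None:
        return task in done
    return all(is_done(c) for c in children)

# Prerequisite check before "Compute dimensions for jacket bottom":
# one call on the abstract task suffices.
print(is_done("Decide on configuration"))   # -> True
```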
TOTAL COMPATIBILITY WITH A FAMILIAR DATABASE LANGUAGE

This section looks at how IPDOS exploits a special feature of Kbase: total compatibility with the dBase III database management system. This feature enables developers to build additional user-defined functions to manipulate dBase III files from within the knowledge-based system. The following example is abstracted from one of IPDOS's user-defined functions, "Generator," which has been implemented for the automatic generation of structural models based on the inferred structural configuration.

†We use the terms "task" and "goal" interchangeably here because each task (e.g. "Decide on configuration") has a corresponding goal ("Configuration has been decided").

FUNCTION Generator
PARAMETERS type, file
Integration of databases and knowledge bases
919
SET DECIMAL TO 2
PRIVATE mbeams[4], mindex
INDEX ON beams->beam_type TO beams
USE beams.dbf INDEX beams                           && beams.dbf is a dBase table
mbeams[1] = GET("Main_truss_beam", "main_deck")
mbeams[2] = GET("Beam_type", "main_deck")           && for the main deck
mbeams[3] = GET("Main_truss_beam", "cellar_deck")   && for the cellar deck
mbeams[4] = GET("Beam_type", "main_deck")
FOR mindex = 1 TO 4
   LOCATE FOR beams->beam_type >= mbeams[mindex]
   ? NUM2STR(beams->inertia_iy) + " " + NUM2STR(beams->inertia_iz) + " " + ;
     NUM2STR(beams->inertia_iy + beams->inertia_iz) + " " + ;
     NUM2STR(beams->x_sect_are)
NEXT
INDEX ON beams->x_sect_are TO temp
USE beams.dbf INDEX temp
PRIVATE mbeams[2]
mbeams[1] = GETN("Plate_thickness", "main_deck")    && diagonal ties
mbeams[2] = GETN("Plate_thickness", "cellar_deck")  && diagonal ties
FOR mindex = 1 TO 2
   LOCATE FOR beams->x_sect_are >= mbeams[mindex]/(mlength/(mlegs/2 - 1) + mwidth)* ;
     (POWER(mlength/(mlegs/2 - 1), 2) + POWER(mwidth, 2))*11500/29000
   ? NUM2STR(beams->inertia_iy) + " " + NUM2STR(beams->inertia_iz) + " " + ;
     NUM2STR(beams->inertia_iy + beams->inertia_iz) + " " + ;
     NUM2STR(beams->x_sect_are)
NEXT
CLOSE DATABASE

The first observation is that the function's code is in dBase. Next, it is apparent that the function can simply open a dBase data file "beams.dbf" (using the statement "USE beams.dbf"), access the data in the file (using "beams->..."), write data into a file (using "?NUM2STR...") that will be used to generate the structural model automatically and, finally, close the file (using the statement "CLOSE DATABASE").
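The LOCATE-over-an-index idiom the Generator relies on, taking the first row whose indexed property meets or exceeds a requirement (the exact section or the next bigger size), can be sketched as follows. The beam rows here are invented illustrative values, not data from IPDOS's beams.dbf.

```python
# Hypothetical beam table: (beam_type, cross-sectional area in sq in).
beams = [
    ("W12x26", 7.65),
    ("W14x30", 8.85),
    ("W16x36", 10.6),
]
beams.sort(key=lambda row: row[1])   # emulate "INDEX ON beams->x_sect_are"

def locate(required_area):
    """Emulate "LOCATE FOR beams->x_sect_are >= required_area": walk the
    indexed table and return the first adequate section."""
    for beam_type, area in beams:
        if area >= required_area:
            return beam_type
    return None   # no section in the table is big enough

print(locate(8.0))   # -> W14x30, the next size up from the 8.0 requirement
```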
More important, there are statements that are database management operations. For example, "USE beams.dbf INDEX beams" specifies a database index to be used for the "beams.dbf" database. The first part of the example simply searches for the correct beam type, or the next bigger size, as allocated (either by IPDOS's inference or by the user's choice) for the main truss beams and the deck beams of the main and the cellar decks. It then retrieves all the relevant properties of the selected beam type, as required by a PC-based structural analysis package for analysis and design. Next, it assigns these properties to the corresponding members in the file to be created. This process is intended to emulate the manual process of checking the beam-section tables and catalogs, and preparing the file as input to the numerical computation software. The second part of the example goes one step further, to incorporate some simple member sizing prior to the data retrieval. First, it computes the required cross-sectional areas of the diagonal ties to provide shear resistance equivalent to that provided by the deck plating of the main and the cellar decks. With the required cross-sectional area, the function then searches for the beam type with at least the required area and, subsequently, retrieves the relevant properties of the selected beam type and inputs them for the corresponding diagonal tie in the input data file.

CONCLUSION

We have introduced a framework for integrating database and knowledge-based systems. This framework has two parts. In the first part, we presented the space of methods for integration. We
Fig. 6. An abstraction hierarchy of goals (indicated by the decomposition of abstract goals through dotted lines to more specific goals) and their prerequisite relationships (indicated by solid arrows).
also classified existing pieces of research according to these methods, and informally evaluated these methods according to a fairly complete set of criteria. The first part is illuminating because it shows a new method that can have many desirable properties. This new method can be used to construct what we call knowledge-embedded database systems. In the second part of our framework, we illustrated how system developers can use the new method, with Kbase, to produce knowledge-embedded database systems like IPDOS. The implementation and initial use of IPDOS verify some of the evaluation we accorded to the fifth method of integration. Because Kbase provides a familiar and high-level language for development, it was quite easy to build and customize IPDOS. It also took a relatively short time (less than 2 weeks) of testing before IPDOS was stable for practical use. IPDOS is also quite fast: most operations complete within one second. While we have not extended or scaled up IPDOS, our initial experience, and our experience with dBase applications, leads us to believe that there are no potential roadblocks to doing so. Therefore, the performance of IPDOS vindicates our informal evaluation of the potential of knowledge-embedded database systems.
REFERENCES

[1] Ashton-Tate Inc., dBase III Plus, Version 1.10 (1986).
[2] J. D. Ullman, Principles of Database and Knowledge-based Systems, 2nd edn. Computer Science Press, Potomac, MD (1988).
[3] S. S. Al-Saadoun and J. S. Arora, Interactive design optimization of framed structures. ASCE J. Comput. Civil Engng 3 (1), 60-74 (1989).
[4] M. A. Austin, S. A. Mahin and K. S. Pister, CSTRUCT: computer environment for design of steel structures. J. Comput. Civil Engng 3 (3), 209-227 (1989).
[5] Intellicorp Inc., KEE-Connection (1987).
[6] R. Logcher, M.-T. Wang and F. H.-S. Chen, Knowledge processing for construction management data base. ASCE J. Construct. Engng Mgmt 115 (2), 196-211 (1989).
[7] H. Abelson and G. Sussman, Structure and Interpretation of Computer Programs. MIT Press, Cambridge, MA (1986).
[8] R. Balzer et al., HEARSAY-III: a domain-independent framework for expert systems. Proc. AAAI (Aug. 1980).
[9] M. Jarke and Y. Vassiliou, Coupling expert systems with database management systems, in Artificial Intelligence Applications for Business (Edited by W. R. Reitman), pp. 65-85. Ablex, Norwood, NJ (1984).
[10] R. Reiter, Toward a logical reconstruction of relational database theory, in On Conceptual Modelling: Perspectives from Artificial Intelligence, Databases, and Programming Languages (Edited by M. L. Brodie, J. Mylopoulos and J. W. Schmidt), pp. 191-234. Springer, Berlin (1984).
[11] J. R. Abrial, Data semantics, in Data Management Systems (Edited by J. W. Klimbie and K. L. Koffeman). North-Holland, Amsterdam (1974).
[12] M. Brodie, On the development of data models, in On Conceptual Modelling: Perspectives from Artificial Intelligence, Databases, and Programming Languages (Edited by M. L. Brodie, J. Mylopoulos and J. W. Schmidt). Springer, Berlin (1984).
[13] M. Hammer and D. McLeod, Database description with a semantic data model: SDM. ACM Trans. Database Syst. 6 (3), 351-386 (1981).
[14] D. S. Batory, J. R. Barnett, J. F. Garza, K. P. Smith, K. Tsukuda, B. C. Twitchell and T. E. Wise, GENESIS: an extensible database management system. IEEE Trans. Software Engng 14 (11) (1988).
[15] M. J. Carey, D. J. DeWitt, G. Graefe, D. M. Haight, J. E. Richardson, D. T. Schuh, E. J. Shekita and S. L. Vandenberg, The EXODUS extensible DBMS project: an overview. Report 808, University of Wisconsin, Madison, WI (Nov. 1988).
[16] L. Rowe and M. Stonebraker, The POSTGRES data model. Proc. XIII Int. Conf. on Very Large Databases. Morgan Kaufmann, Brighton (1987).
[17] C. Zaniolo, The database language GEM. Proc. ACM-SIGMOD Int. Conf. on Management of Data. ACM, San Jose, CA (May 1983).
[18] W. J. Rasdorf and G. C. Salley, Generative engineering databases--toward expert systems. Comput. Structures 20 (1), 11-15 (1985).
[19] S. J. Fenves and W. J. Rasdorf, Treatment of engineering design constraints in a relational data base. Engng Comput. 1, 27-37 (1985).
[20] W. J. Rasdorf and T. E. Wang, Generic design standards processing in an expert system environment. ASCE J. Comput. Civil Engng 2 (1), 68-87 (1988).
[21] D. B. Lenat and R. V. Guha, Building Large Knowledge-Based Systems. Addison-Wesley, Reading, MA (1990).
[22] R. Davis, Meta-rules: reasoning about control. Artificial Intelligence 15, 179-222 (1980).
[23] R. Davis and D. B. Lenat, Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill, New York (1982).
[24] J. McDermott and C. L. Forgy, Production system conflict resolution strategies, in Pattern-Directed Inference Systems (Edited by D. A. Waterman and F. Hayes-Roth). Academic Press, New York (1978).
[25] E. F. Codd, Further normalization of the data base relational model, in Data Base Systems (Edited by R. Rustin), pp. 33-64. Prentice-Hall, Englewood Cliffs, NJ (1972).
[26] M. Schkolnick and P. Sorenson, The effects of denormalization on database performance. Report RJ3082, IBM, San Jose, CA (1981).
[27] M. S. Jones and V. E. Saouma, Prototype hybrid expert system for R/C design. J. Comput. Civil Engng 2 (2), 136-143 (1988).
[28] C.-K. Soh and A.-K. Soh, Coupling interface of a microcomputer-based intelligent structural design system. Comput. Structures 31 (6), 1031-1039 (1989).